Author manuscript; available in PMC: 2025 May 1.
Published in final edited form as: Magn Reson Med. 2024 Jan 9;91(5):2074–2088. doi: 10.1002/mrm.29983

Quantifying 3D Magnetic Resonance Fingerprinting (3D-MRF) reproducibility across subjects, sessions, and scanners automatically using MNI atlases

Andrew Dupuis 1, Yong Chen 2,3, Michael Hansen 4, Kelvin Chow 5, Jessie EP Sun 2, Chaitra Badve 3, Dan Ma 1, Mark A Griswold 2, Rasim Boyacioglu 2
PMCID: PMC10950529  NIHMSID: NIHMS1950648  PMID: 38192239

Abstract

Purpose:

Quantitative MRI techniques such as Magnetic Resonance Fingerprinting (MRF) promise more objective and comparable measurements of tissue properties at the point-of-care than weighted imaging. However, few direct cross-modal comparisons of MRF's repeatability and reproducibility versus weighted acquisitions have been performed. This work proposes a novel fully automated pipeline for quantitatively comparing cross-modal imaging performance in-vivo via atlas-based sampling.

Methods:

We acquire whole-brain 3D-MRF, TSE, and MPRAGE sequences three times each on two scanners across 10 subjects, for a total of 60 multimodal datasets. The proposed automated registration and analysis pipeline uses linear and nonlinear registration to align all qualitative and quantitative DICOM stacks to MNI-152 space, then samples each dataset's native space through transformation inversion to compare performance within atlas regions across subjects, scanners, and repetitions.

Results:

Voxel values within MRF-derived maps were found to be more repeatable (σT1 = 1.90, σT2 = 3.20) across sessions than vendor-reconstructed MPRAGE (σT1w = 6.04) or TSE (σT2w = 5.66) images. Additionally, MRF was found to be more reproducible across scanners (σT1 = 2.21, σT2 = 3.89) than either qualitative modality (σT1w = 7.84, σT2w = 7.76). Notably, differences between the repeatability and reproducibility of in-vivo MRF were negligible, unlike for the weighted images.

Conclusion:

Magnetic Resonance Fingerprinting data from many sessions and scanners can potentially be treated as a single dataset for harmonized analysis or longitudinal comparisons without the additional regularization steps needed for qualitative modalities.

Keywords: Magnetic resonance fingerprinting, quantitative mapping, reproducibility, repeatability, automated reproducibility quantification

Introduction

While most clinical MR applications still regularly employ qualitative image generation as the default technique, research studies are consistently demonstrating areas where quantitative MR techniques may exhibit a direct clinical benefit over qualitative alternatives1. One quantitative technique is Magnetic Resonance Fingerprinting (MRF), an FDA-approved MRI method that performs simultaneous measurements of various tissue properties, including T1 and T2 relaxation times2. Quantitative analysis of MRF maps has proven beneficial for understanding and tracking healthy function, diagnosing disease, and monitoring disease progression in areas such as the brain3,4,5,6,7, heart8,9, liver10,11,12, prostate13 and kidney14,15.

However, before any widespread clinical adoption, quantitative methods such as MRF need to offer significant, provable advantages over established qualitative MRI scans. In this study, we establish the reproducibility and repeatability of MRF techniques and demonstrate how both are improved in MRF compared with current clinical imaging techniques, potentially bringing more sensitive diagnostic tools to existing imaging suites. Throughout this study, we also define measurable benchmarks of success and consistent procedures for future reliability of MRF imaging, factors that are not consistently considered in qualitative imaging.

While qualitative weighted imaging techniques are subject to standards for noise characteristics16, spatial accuracy17, and signal uniformity18, the resultant images are not required to reproduce exact contrasts across scanners and sites given the same input parameters. These weighted images do not have to be directly comparable to previously acquired images and thus cannot be reliably treated as quantifiable MR images19. In contrast, in prior phantom20 and in-vivo21,22,23,24,25,26 studies, MRF was able to quantify T1 and T2 values reliably and reproducibly. Some reproducibility studies used 2D acquisitions22 with partial brain coverage and manually drawn ROIs21, while others used 3D MRF, where the coefficient of variation (CoV); intra-class correlation (ICC); mean gray and white matter T1 and T2 values23; cortical thickness; and subcortical region volumes25 were studied. However, there are few cross-modal comparisons of MRF (or other quantitative MRI techniques) against conventional weighted imaging in terms of repeatability and reproducibility. Compared to the larger body of MRF work, only a few previous studies specifically compared the repeatability of MRF24 or the reproducibility of T1 mapping27 against conventional T1 weighted imaging. None of these prior studies included the entire imaging and analysis chain as a composite source of variability.

The goal of this study was to present a fully integrated acquisition, online reconstruction, and analysis framework for 3D MRF that forms an automated, reproducible, traceable pipeline. We then evaluate both the repeatability and reproducibility of 3D MRF compared to clinically standard T1 and T2 weighted imaging using this pipeline. To establish repeatability and reproducibility, previously published definitions of traceability and uncertainty28 are given specific meanings in the context of MRF. We compared the mean values of different Montreal Neurological Institute (MNI) brain atlas regions to examine the in vivo repeatability and reproducibility of MRF and conventional weighted imaging. We used a fully automated registration pipeline designed to maximize cross-modality spatial coherence and to establish specific benchmarks for repeatability and reproducibility via automated analysis. We also ensured that all analysis techniques used are available in version-controlled and traceable repositories.

Methods

Study Design

Ten healthy volunteers (36.8±14.9 years; five men, five women) gave written consent and were scanned according to the applicable IRB-approved protocol. Volunteers were imaged over two sessions occurring on different days, with one session each on two 3T scanners running different software versions (MAGNETOM Vida, VA20 and VA31, Siemens Healthcare, Erlangen, Germany). All imaging was performed using a 20-channel head coil.

This study aims to compare the intrasession, intersession, and interscanner regional reproducibility between MRF and two qualitative acquisition approaches. Each scan session consisted of three sets of acquisitions, with each set consisting of three series of images: 3D-MRF FISP with a B1 mapping prescan, product 3D-MPRAGE, and product multislice 2D-TSE. All were acquired with a field of view of 250x250x150 mm3 and a spatial resolution of 1x1x2.5 mm3.

The first set, referred to as the “original” set on each scanner, served as an initial baseline for each sequence. Immediately following the “original” set, a “repetition” set was acquired, consisting of the same three sequences in the same order. This “repetition” set was intended to serve as the intrasession test-retest because the subject was left in the scanner and the imaging FOV was copied directly from the “original.” After completion of all “repetition” images, subjects were asked to leave the scanner, stand up, and walk around, and were then repositioned in the scanner. A new localizer was acquired to establish the subject’s new position, while also forcing a recalibration of the scanner’s acquisition system and reshimming of the B0 and B1 fields. Finally, a “reposition” set was acquired to serve as an intersession test-retest. Subjects were asked to return within 7 days to repeat the entire protocol on the second scanner. The scanner order was randomized between subjects to minimize the potential for scanner-specific effects on the results.

Sequences

The MRF data was acquired first in every set for all the subjects and sessions. Before acquiring MRF data, to minimize simulation mismatch errors that result from B1+ inhomogeneities, the RF transmit field (B1+) was mapped for later use during pattern matching.

The MRF sequence is an IR-FISP-based sequence initiated with an adiabatic inversion pulse, followed by the acquisition of a series of 960 time points. This is then repeated for each partition in a stack-of-spirals 3D approach with an acceleration factor of 2 in the slice direction6. An extra 2-second pause was added between partitions to allow for longitudinal relaxation. A 2π dephasing moment in the slice direction was used within each TR29. A fixed TR of 10.5ms and a fixed TE of 1.7ms were used for every time point. The acquisition time for each partition was 10.1 seconds, with a total acquisition time of 362.4 seconds for all partitions. A figure showing the flip angle pattern used in this study, as well as a human-readable JSON specification of all relevant MRF pulse parameters, is available in the supporting information. A variable-density spiral trajectory with 48 spiral arms, rotated by 7.5° between successive time points, was designed for a field of view of 250x250x150 mm and a matrix size of 256x256x60. The trajectories were measured on scanner one using the approach described by Duyn et al.30, and the resulting trajectory was used in the NUFFT gridding operator regardless of scanner.

Next, a product 3D-MPRAGE sequence was acquired to represent a T1-weighted contrast and to be the structural baseline image within the intrasession, intersession, and interscanner registration pipelines. The product 3D-MPRAGE sequence was used with the following acquisition parameters: TR=2100ms, TE=2.59ms, TI=900ms, and TA=4m53s.

Finally, a product multislice 2D-TSE acquisition was acquired to represent a T2-weighted contrast with the following acquisition parameters: TR=10620ms, TE=93ms, TF (turbo factor)=17, and TA=3m2s. The product MPRAGE and TSE sequences had the same FOV, matrix size, and position as the 3D-MRF scan. Product MPRAGE and TSE scan parameters were fixed across all subjects and were selected with the assistance of a research technologist to match standard clinical contrasts. Vendor-default prescan normalization was left enabled for all weighted imaging sequences to match the settings used in clinical practice at our institution and to avoid including biases in our data that are well accounted for by standardized techniques with existing regulatory approval.

Reconstruction

Reconstruction of the product sequences, as well as the B1+ mapping sequence, was performed by the standard pipeline provided by the scanner/vendor. Header-complete DICOM images were exported and saved within directories specific to each subject, scanner, and acquisition set.

3D-MRF data was reconstructed online via a custom Gadgetron Kubernetes cluster using the FIRE interface prototype31. All data were also converted to ISMRMRD32 format for future analysis or retrospective reconstruction. The SNR-constrained realtime compression provided by the FIRE prototype was disabled to avoid potential differences in compression implementation across software platforms; instead, uncompressed raw data was sent to a remote reconstruction server via SSH. Reconstruction was performed within a GPU-enabled Docker container containing the precalculated MRF dictionary, the SVD compression matrix33, and the spiral density compensation function (DCF). In order to conserve memory on the reconstruction server, time-domain SVD compression and the 2D Nonuniform Fast Fourier Transform (NUFFT) were performed on each partition on a rolling basis as raw data was received. Once all partition data had been uploaded at the end of the scan, a through-partition 1D Fast Fourier Transform (FFT) was applied to generate the 3D SVD-space images. Finally, the resulting SVD images and the associated B1+ maps were pattern matched based on maximum inner product to obtain T1 and T2 maps21. The resulting voxelwise quantitative maps were then returned to the scanner via the SSH tunnel and stored as header-complete DICOM images by the FIRE prototype interface. A figure detailing the online reconstruction pipeline used is available in the supporting information.
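To make the pattern-matching step concrete, the following is a minimal NumPy sketch of truncated-SVD dictionary compression and maximum-inner-product matching. The array names, shapes, and the chunk-free matrix product are illustrative assumptions, not the Gadgetron implementation used in this study.

    import numpy as np

    def compress_dictionary(dictionary, rank):
        # dictionary: (n_entries, n_timepoints) complex array of simulated fingerprints.
        # Returns the compressed dictionary (n_entries, rank) and the compression
        # matrix (n_timepoints, rank), which is also applied to the measured signals.
        _, _, vh = np.linalg.svd(dictionary, full_matrices=False)
        vr = vh[:rank].conj().T
        return dictionary @ vr, vr

    def match_fingerprints(svd_signals, svd_dict, dict_params):
        # svd_signals: (n_voxels, rank) compressed measured signals.
        # svd_dict:    (n_entries, rank) compressed dictionary entries.
        # dict_params: (n_entries, 2) array of the (T1, T2) values behind each entry.
        sig = svd_signals / (np.linalg.norm(svd_signals, axis=1, keepdims=True) + 1e-12)
        dic = svd_dict / (np.linalg.norm(svd_dict, axis=1, keepdims=True) + 1e-12)
        best = np.argmax(np.abs(sig @ dic.conj().T), axis=1)   # maximum inner product
        return dict_params[best, 0], dict_params[best, 1]      # flattened T1 and T2 maps

In practice the voxel-by-entry product would be processed in chunks to bound memory, but the matching rule is the same.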

Postprocessing

The postprocessing pipeline inputs were the DICOM image series organized in a semantic hierarchical file scheme. All postprocessing on this file hierarchy was carried out by an automated pipeline based on a version-controlled Docker container to ensure identical settings and procedures were applied to all data. Before the postprocessing pipeline could be run, DICOM images were converted to NIFTI files using the open-source dcm2niix package35, with consistent naming and versioning enforced by a subject-specific file map stored within each subject’s DICOM repository. The resulting unified NIFTI directory structure was then processed by a Python-based image registration and statistical analysis pipeline utilizing the NiBabel36 NIFTI management package and NiPype’s37 NIFTI pipelining system and FSL38 interface extension.
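As a small illustration of how this toolchain fits together, the sketch below converts one DICOM series with dcm2niix and runs a 7-DOF FLIRT registration through the NiPype interfaces; all paths and file names are placeholders, and the actual pipeline chains many such steps inside a version-controlled container.

    from nipype.interfaces.dcm2nii import Dcm2niix
    from nipype.interfaces import fsl

    # 1) DICOM -> NIFTI conversion for one series (paths are placeholders).
    convert = Dcm2niix(source_dir="/data/V01/dicom/tse_original",
                       output_dir="/data/V01/nifti",
                       out_filename="tse_original")
    result = convert.run()
    files = result.outputs.converted_files
    tse_nii = files[0] if isinstance(files, list) else files

    # 2) 7-DOF linear registration of the converted series to a reference volume,
    #    saving both the resampled image and the transformation matrix.
    flirt = fsl.FLIRT(in_file=tse_nii,
                      reference="/data/V01/nifti/reference.nii.gz",
                      dof=7,
                      out_file="/data/V01/reg/tse_registered.nii.gz",
                      out_matrix_file="/data/V01/reg/tse_to_reference.mat")
    flirt.run()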

Acquired images went through postprocessing to establish a shared registration space among all image series for a specific subject, across all sets and scanners. The first step of the postprocessing pipeline is the generation of synthetic MPRAGE and TSE contrast images for all MRF image sets as a basis for intermodality linear registrations. While there are existing approaches to generating synthetic contrasts from MRF timeseries images using convolutional networks, this work requires pixel-precise coregistration that can be corrupted by generative approaches such as U-Nets. Instead, synthetic contrasts were generated using a simple TensorFlow39 voxelwise regression network trained on MRF T1/T2 parameter maps as inputs versus registered MPRAGE and TSE contrasts as outputs, providing MRF-space synthetic contrasts with adequate similarity to each qualitative approach such that performant registration is possible within FSL. For network training, an initial registration was performed between MPRAGE, TSE, and T1/T2 map pairs, and quantitative maps were manually masked to avoid the influence of free-space spiral artifacts on the network. All data from three prior volunteers were linearized into independent, two-input-one-output voxel datasets for each conventional contrast. From this linearized voxel table, data were split into training (50%) and test (50%) sets, and a dense two-layer regression model with a data normalizer was trained using the Adam optimizer with a mean absolute error loss function. The resulting network functions similarly to a colormap, with unique combinations of T1 and T2 values corresponding to unique greyscale values based on the qualitative images used for training. This simple non-convolutional model was then saved and used for voxelwise prediction of synthetic image values from each volunteer’s T1/T2 map pairs.
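A rough sketch of such a voxelwise regression is shown below using Keras. Only the 50/50 split, the normalization layer, the Adam optimizer, and the mean-absolute-error loss are taken from the text; the random stand-in data, layer width, and training schedule are assumptions.

    import numpy as np
    import tensorflow as tf

    # Stand-in for the linearized voxel table: inputs are (T1, T2) pairs in ms,
    # the target is the co-registered weighted-image intensity (one model per contrast).
    rng = np.random.default_rng(0)
    x = rng.uniform([500.0, 20.0], [4500.0, 500.0], size=(100_000, 2)).astype("float32")
    y = (0.6 * np.exp(-x[:, :1] / 1500.0) + 0.1 * np.exp(-x[:, 1:] / 80.0)).astype("float32")

    n = len(x) // 2                                    # 50/50 train/test split
    x_train, x_test, y_train, y_test = x[:n], x[n:], y[:n], y[n:]

    normalizer = tf.keras.layers.Normalization()       # the "data normalizer" in the text
    normalizer.adapt(x_train)

    model = tf.keras.Sequential([
        normalizer,
        tf.keras.layers.Dense(32, activation="relu"),  # hidden width is an assumption
        tf.keras.layers.Dense(1),                      # predicted greyscale value
    ])
    model.compile(optimizer="adam", loss="mean_absolute_error")
    model.fit(x_train, y_train, epochs=5, batch_size=4096, verbose=0)
    print("held-out MAE:", model.evaluate(x_test, y_test, verbose=0))

At inference time the trained model is applied voxelwise to each volunteer's flattened T1/T2 maps to produce the synthetic contrast volume.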

Following synthetic image generation, all imaging sets from each subject were linearly registered together using FSL’s FLIRT tool40,41. Within each image set, the product TSE images were first registered to the MRF-derived synthetic TSE images, resulting in a transformation matrix from TSE to MRF space. Then, the MRF-derived synthetic MPRAGE images were registered to the product MPRAGE images. This process yielded two sets of linear transformation matrices, bringing all images within each set to the shared space of the product MPRAGE contrast.

Within each scanner, the “original” MPRAGE image series was used as a basis to which the “repetition” and “reposition” MPRAGE image series were linearly registered. As a result, linear transformations were known for all images from a single scanner to a single image space as defined by the “original” MPRAGE image series. This process was repeated for each scanner, and each scanner’s respective “original” MPRAGE image series were linearly registered to each other, yielding a full chain of invertible transformation matrices mapping all image series to a single shared space. Finally, a single nonlinear warp field was calculated via FNIRT to establish a transformation from the “first” scanner’s “original” MPRAGE series to MNI-152-2mm template space.

By inverting and combining all linear and nonlinear transformations throughout the registration chain, MNI-152 space atlases, ROIs, or other labels can be projected into each source image series. The registration flow is further detailed in Figure 1.
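One possible way to express this inversion and composition with the NiPype FSL interfaces is sketched below; file names are placeholders, and nearest-neighbour interpolation is used here to preserve integer labels, whereas the pipeline described above interpolates and then erodes the labels.

    from nipype.interfaces import fsl

    # Invert the nonlinear warp: original MPRAGE -> MNI-152 becomes MNI-152 -> original MPRAGE.
    fsl.InvWarp(warp="mprage_orig_to_mni_warp.nii.gz",
                reference="mprage_orig.nii.gz",
                inverse_warp="mni_to_mprage_orig_warp.nii.gz").run()

    # Invert the linear transform that took this series into "original" MPRAGE space.
    fsl.ConvertXFM(in_file="tse_to_mprage_orig.mat", invert_xfm=True,
                   out_file="mprage_orig_to_tse.mat").run()

    # Project an MNI-space atlas back into the native TSE series by applying the
    # inverted warp followed by the inverted linear transform.
    fsl.ApplyWarp(in_file="HarvardOxford-sub-maxprob-thr25-2mm.nii.gz",
                  ref_file="tse_original.nii.gz",
                  field_file="mni_to_mprage_orig_warp.nii.gz",
                  postmat="mprage_orig_to_tse.mat",
                  interp="nn",
                  out_file="atlas_labels_in_tse_space.nii.gz").run()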

Figure 1:

7-parameter registration via FLIRT was used within subjects. For each imaging set, TSE images were registered to MRF maps via synthetic TSE. MRF was then registered to MPRAGE via synthetic MPRAGE. MPRAGE images from each imaging set and scanner were then registered to the “original” MPRAGE series, which was then nonlinearly warped via FNIRT to MNI-152 space. The linear and nonlinear transformations saved for each scanner/set/series combination were then inverted and used to generate set-specific atlas label maps. These maps were used to sample and save voxel buffers for all atlas regions.

All raw data, DICOMs, NIFTI-converted images, linear and nonlinear transformation matrices, warped label maps, and atlas-label voxel dictionaries were uploaded to Azure Blob storage for validation and retrospective analysis. Access via a Python API is available under a data sharing agreement to encourage further investigation and statistical analysis of the data.
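The data-sharing API itself is not specified here; purely as a generic illustration of pushing one pipeline artifact to Azure Blob storage with the azure-storage-blob package, the connection string, container name, and blob path below are placeholders.

    import os
    from azure.storage.blob import BlobServiceClient

    # Connection string and container are placeholders, not the study's actual storage account.
    service = BlobServiceClient.from_connection_string(os.environ["AZURE_STORAGE_CONNECTION_STRING"])
    container = service.get_container_client("mrf-reproducibility")

    with open("atlas_labels_in_tse_space.nii.gz", "rb") as data:
        container.upload_blob(name="V01/scanner1/original/atlas_labels_in_tse_space.nii.gz",
                              data=data, overwrite=True)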

Statistical Analysis

Inverse transformations from the registration were used to generate warped atlas label maps in the original image spaces for each subject/scanner/set/series combination. Eighteen well-defined, homogeneous regions from the Harvard-Oxford subcortical atlas were selected for regional comparison. For each tissue compartment in every image series, the mean, median, and standard deviation of the voxel values were calculated by using the warped series-specific label maps as a voxel mask and generating arrays of all constituent voxel values from each compartment. No post-scan normalization was applied outside of the vendor’s online reconstruction; all regional statistics were generated from DICOM values in exported datasets.
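A minimal NiBabel/NumPy sketch of this regional sampling is shown below; the file names and label values are hypothetical, and the real pipeline writes the per-region voxel buffers to disk rather than returning them.

    import numpy as np
    import nibabel as nib

    def regional_stats(image_path, label_path, region_labels):
        # Sample voxel values for each atlas region using the warped label map as a mask.
        values = nib.load(image_path).get_fdata()
        labels = nib.load(label_path).get_fdata().round().astype(int)

        stats = {}
        for region in region_labels:
            voxels = values[labels == region]
            stats[region] = {"mean": float(np.mean(voxels)),
                             "median": float(np.median(voxels)),
                             "std": float(np.std(voxels)),
                             "n_voxels": int(voxels.size)}
        return stats

    # e.g. regional_stats("t1_map.nii.gz", "atlas_labels_in_t1_space.nii.gz", range(1, 19))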

The repeatability of the intrasession, intersession, and interscanner cases was assessed via comparison of the mean values of registered regions across paired samples from the same subject. The unweighted mean and standard deviation of the differences in mean value across all subjects and regions are reported for each modality and case in Table 2. Additionally, since regions have vastly different volumes, a weighted mean and weighted standard deviation42 of the performance of each modality across subjects and regions were also computed. The resulting bias and agreement metrics were then compared to establish the relative stability of each imaging approach.
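For reference, a volume-weighted mean and standard deviation consistent with the Dataplot definition cited in reference 42 can be computed as sketched below; whether the authors used exactly this form is an assumption.

    import numpy as np

    def weighted_mean_std(diffs, weights):
        # diffs:   regional percent differences; weights: regional voxel counts.
        # Weighted std follows NIST Handbook 148 (Dataplot): the denominator is
        # sum(w) * (N' - 1) / N', with N' the number of nonzero weights.
        x = np.asarray(diffs, dtype=float)
        w = np.asarray(weights, dtype=float)
        wmean = np.sum(w * x) / np.sum(w)
        n_nonzero = np.count_nonzero(w)
        wvar = np.sum(w * (x - wmean) ** 2) / (np.sum(w) * (n_nonzero - 1) / n_nonzero)
        return wmean, np.sqrt(wvar)

    # e.g. bias, agreement = weighted_mean_std(regional_percent_diffs, regional_voxel_counts)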

Table 2:

Repeatability and Reproducibility Performance by Modality, Unweighted

Modality       Intrascanner,      Intrascanner,      Interscanner,
               Same-Session       Cross-Session      Cross-Session
T1 (%)         0.98 ± 4.36        −0.10 ± 4.51       −1.07 ± 4.61
T2 (%)         1.24 ± 7.48        0.67 ± 6.81        −0.17 ± 7.93
MPRAGE (%)     1.64 ± 10.80       −0.97 ± 10.35      −0.23 ± 11.18
TSE (%)        −0.17 ± 6.20       −1.68 ± 6.78       −1.26 ± 8.52

Values are given as mean ± standard deviation (bias ± agreement).

Summary of mean value intrascanner repeatability and interscanner reproducibility for selected MNI-152 regions between conventional MPRAGE and TSE imaging versus 3D-MRF.

The resulting aggregate data from each region and image type were then used to generate Bland-Altman plots. The plots compared the stability of regional mean values for all qualitative and quantitative image series, across all subjects in the intrasession, intersession, and interscanner cases.
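A minimal sketch of such a Bland-Altman comparison is given below; the percent-difference convention is inferred from Tables 2 and 3, and the ±1.96 SD limits of agreement are the standard convention rather than a description of the exact intervals plotted in the figures.

    import numpy as np
    import matplotlib.pyplot as plt

    def bland_altman(a, b, label=""):
        # a, b: paired regional means (e.g. "original" vs "repetition") for one modality.
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        mean = (a + b) / 2.0
        diff = 100.0 * (a - b) / mean                  # percent difference between pairs
        bias, sd = diff.mean(), diff.std(ddof=1)

        plt.scatter(mean, diff, s=10, label=label)
        plt.axhline(bias, color="grey", linestyle="--")
        for k in (-1.96, 1.96):                        # 95% limits of agreement
            plt.axhline(bias + k * sd, color="grey", linestyle=":")
        plt.xlabel("Mean of paired regional means")
        plt.ylabel("Percent difference")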

Results

Sample MRF maps, resulting synthetic images, and the associated MPRAGE (T1w) and TSE (T2w) images are shown in Figure 2 for a representative subject. Other subjects’ sample maps are available in the supporting information. Maps visualizing automated registration pipeline performance for the same representative subject across sessions and scanners are illustrated in Figure 3. Similar maps for all the subjects are available in the supporting information. Bulk registration errors were not seen across any of the image series, and no manual intervention or registration correction was performed outside of the automated pipeline.

Figure 2:

Synthetic MPRAGE (T1w) and TSE (T2w) images generated from MRF maps were used as the input to the registration pipeline. Both synthetic qualitative contrasts showed similar intraregional contrast compared to the MPRAGE and TSE imaging acquired.

Figure 3:

The results of canny edge detection (σ=1) applied to the first scanner's original MPRAGE series, projected over the first scanner's original T1 map (represented here in greyscale) and TSE series, as well as the second scanner's reposition MPRAGE, T1 map, and TSE series.
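For readers reproducing this style of overlay, a rough sketch using scikit-image and NiBabel is given below; file names are placeholders and both volumes are assumed to already share a grid.

    import numpy as np
    import nibabel as nib
    from skimage import feature

    mprage = nib.load("mprage_scanner1_original.nii.gz").get_fdata()
    t1map = nib.load("t1map_in_mprage_space.nii.gz").get_fdata()

    z = mprage.shape[2] // 2                                   # one axial slice for display
    edges = feature.canny(mprage[:, :, z] / mprage.max(), sigma=1.0)

    # Burn the MPRAGE edges into the greyscale T1 map as a bright overlay.
    overlay = t1map[:, :, z].copy()
    overlay[edges] = overlay.max()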

Atlas registration performance maps were generated for each subject to demonstrate the performance of the FNIRT nonlinear registration between MNI-152-2mm space and the first scanner’s original MPRAGE image series. The results for the same representative subject and the other subjects are in Figure 4 and the supporting information, respectively. The aggregated mean T1 and T2 values and their respective standard deviations derived from 3D MRF for the examined MNI-152 atlas regions across all sessions on both scanners for all subjects are presented in Table 1. Bland-Altman plots were prepared comparing the repeatability of 3D-MRF, TSE, and MPRAGE acquisitions using regional means compared between immediate repetitions (Figure 5) and subject repositions on the same scanner (Figure 6).

Figure 4:

The atlas label maps from MNI space were inversely warped and eroded to remove mislabeling artifacts introduced by nonlinear interpolation of integer label values. The resulting atlas region overlays were colorized according to the legend used in the subsequent Bland Altman plots.

Table 1:

Fingerprinting-derived T1 and T2 Values of MNI-152 Regions

Tissue Compartment       T1 mean (ms)   σT1 (ms, CoV)    T2 mean (ms)   σT2 (ms, CoV)
Cerebral White Matter    914.1          40.5 (4.4%)      45.9           2.3 (5.0%)
Cerebral Cortex          1889.4         129.5 (6.9%)     124.6          15.8 (12.6%)
Lateral Ventricles       4341.4         447.9 (10.3%)    467.0          26.7 (5.7%)
Thalamus                 1175.5         73.7 (6.3%)      49.6           3.7 (7.5%)
Caudate                  1338.4         66.1 (4.9%)      51.5           6.0 (11.6%)
Putamen                  1234.0         53.6 (4.3%)      45.0           4.6 (10.1%)
Pallidum                 938.7          53.2 (5.7%)      30.3           3.3 (11.0%)
Hippocampus              1621.8         93.1 (5.7%)      76.0           9.9 (13.0%)
Amygdala                 1463.4         55.2 (3.8%)      62.5           3.6 (5.7%)

Measured mean T1 and T2 values and associated standard deviations in the selected tissue compartments, derived from 3D MRF maps across all sessions on both scanners for all subjects.

Figure 5:

The repeatability between the “original” and “repetition” sets on each scanner was compared for all subjects. This represents a same-session test-retest on the same scanner hardware, since the subject remained in the bore and the scanner had not performed adjustments between acquisitions. The grey and green dashed lines show the confidence intervals for the mean and weighted (by region volume) mean cases.

Figure 6:

The repeatability between the “original” and “reposition” sets on each scanner was compared for all subjects. Between acquisitions, the subject was asked to leave the bore and walk around before being repositioned inside, triggering a reshimming of the system and frequency adjustments. This represents a cross-session test-retest on the same scanner hardware. The grey and green dashed lines show the confidence intervals for the mean and weighted (by region size) mean cases.

Reproducibility was tested by comparing all combinations of subjects, scanners, and sets (Figure 7). Table 2 summarizes the mean percent differences and standard deviations of 3D-MRF, TSE, and MPRAGE across the intrasession, intersession, and interscanner cases. In all cases except TSE, smaller regions have higher variability and disproportionately skew the aggregate reproducibility. When the size of each region is included in the estimation of the weighted standard deviation42 for the overall repeatability and reproducibility (Table 3), the performance of T1, T2, and MPRAGE better reflects the visual and histogram-level similarity of the regions.

Figure 7:

The reproducibility across all combinations of subjects, scanners, and sets was compared. The data included is the full matrix of 9 combinations of original, repetition, and reposition sets across both scanners. The grey and green dashed lines show the confidence intervals for the mean and weighted (by region size) mean cases.

Table 3:

Repeatability and Reproducibility Performance by Modality, Weighted by Region Sizes

Modality       Intrascanner,      Intrascanner,      Interscanner,
               Same-Session       Cross-Session      Cross-Session
T1 (%)         0.80 ± 2.34        0.30 ± 1.90        −1.02 ± 2.21
T2 (%)         2.32 ± 4.34        2.07 ± 3.20        −3.24 ± 3.89
MPRAGE (%)     −0.03 ± 5.63       −2.49 ± 6.04       −1.32 ± 7.84
TSE (%)        −0.26 ± 5.49       −1.85 ± 5.66       −1.10 ± 7.76

Values are given as weighted mean ± weighted standard deviation (bias ± agreement).

Summary of weighted mean value intrascanner repeatability and interscanner reproducibility for selected MNI-152 regions between conventional MPRAGE and TSE imaging versus 3D-MRF. Regional differences are weighted by voxel population size on a per-set, per-subject basis to represent the whole-brain aggregate performance of each acquisition approach.

Discussion

This study aimed to provide an online MRF reconstruction and analysis framework, while also investigating the reproducibility of MRF and conventional weighted imaging. From the start, we developed a traceable online 3D-MRF reconstruction that outputs DICOMs directly to the scanner. We then evaluated our fully automatic online 3D MRF reconstruction, as well as our cross-modality registration and analysis pipeline, to determine whether in vivo 3D MRF repeatability and reproducibility meets or exceeds that of vendor product MPRAGE and TSE.

The brains of healthy volunteers were imaged using MPRAGE, TSE, and MRF protocols across multiple sets following varying perturbations to the subject, repeated on different scanners on multiple days. The consistent volume-weighted biases and standard deviations in Table 3 indicate that the T1 and T2 values generated by in-vivo 3D-MRF were repeatable whether a scan is repeated immediately (T1: 0.80±2.34%, T2: 2.32±4.34%) or with a reposition of the subject on the same scanner (T1: 0.30±1.90%, T2: 2.07±3.20%), and reproducible on a different scanner on a different day (T1: −1.02±2.21%, T2: −3.24±3.89%). Most importantly, regardless of scanner and session, the intrasubject variations of both T1 and T2 were found to be lower than the variations within T1 and T2 regions across the sampled population shown in Table 1.

Considering the observed negligible differences between intrascanner and interscanner variations, the apparent reproducibility of in-vivo 3D-MRF offers multiple opportunities: data from many sessions, scanners, and sites can potentially be treated as a single dataset for harmonized analysis. Similarly, structural or statistical intrasubject comparisons are valid across scanners or sessions for the proposed 3D-MRF pipeline without any additional data regularization steps. The same is not true of the baseline product imaging methods that we tested here.

Because the direct output of MRF is a quantitative measurement, there are additional criteria and opportunities in terms of repeatability and reproducibility that need to be satisfied, which may not apply to conventional weighted imaging at the scanner output level.

The reproducibility of 3D MRF was investigated in various common clinical situations in a traceable framework via bounds of uncertainty that we set and explored through cortical and subcortical regional mean values. Defining and monitoring traceability and uncertainty with structured boundaries will be useful for careful integration into clinical settings and to further ensure physician confidence28.

Traceability

The data analysis for this study recognized the importance of traceable research documenting all the stages of a study, from data acquisition to presentation of the results. This structured record becomes functionally important as a foundation for studies involving multiple scanners, sites, and even vendors. In our framework, we defined and ensured traceability at the data acquisition, reconstruction, and post-processing steps. After the fully integrated acquisition and automated online reconstruction, the DICOM images are fed back to the MR console via the FIRE interface prototype. This allows MR radiologists and technologists to interact in real time with MRF quantitative maps in their preferred environments (PACS or MR console) in the vendor coordinate system, with full DICOM capability.

The next step in the traceability chain was the analysis pipeline for automated post-processing of the DICOM images. The cloud-based and version-controlled registration and regional analysis pipeline can support future applications and more complex analyses that use MRF in longitudinal or multi-center large scale studies. This scale of traceability for every step ensures that comparisons across different MRF variants, sites, scanners, and even vendors are possible and valid. Due to the pipeline's flexible infrastructure, other tools or software packages can also be integrated at any level. In prior studies, the use of offline reconstructions and manual analysis pipelines often impeded the use of cross-dataset registration and statistical tools. The resultant loss of significant portions of the metadata present in clinical sequences, such as position, scale, and subject identifiers, made cross-modal comparisons difficult. Holding the full history of the pipeline accountable could further increase confidence in MRF and help usher in its adoption in clinical settings.

Uncertainty

After the traceability chain was established and documented for in vivo quantitative mapping with MRF, the next step was to evaluate the associated uncertainty of the quantitative maps.

Quantitative tissue properties and imaging biomarkers are only meaningful when measurement uncertainties are provided. The goal of the uncertainty evaluation was not to define a measure of error for the quantitative maps, but rather to provide guidance for the decision-making process based on the maps. Large scale studies like UK Biobank43 that also include reproducible MRF quantitative maps could define population-based normative tissue property values with known uncertainties. Eventually, decisions can be made about individual patients directly at the MR console with confidence, such as manually or automatically flagging significant findings with respect to a population reference (obtained from large scale studies) or a past measurement of the same subject (longitudinal).

The MRF reconstruction pipeline took as input the B1 maps acquired and reconstructed with the standard vendor sequence and corrected for B1 inhomogeneity by matching each voxel's fingerprint to the portion of the dictionary simulated at that voxel's relative B1. Among the previous MRF repeatability and reproducibility studies, only Korzdorfer et al. corrected for B1 inhomogeneities21. B1 correction eliminates bias from MRF T2 maps, improving accuracy34, and is thus expected to lower the variability between sessions. Different scanners and software versions can have varying limits and adjustments of the RF power and can affect the T2 contrast for weighted imaging and MRF time series unless accounted for. MRF data show reduced variability compared to conventional images, and MRF's ability to account for variable and inhomogeneous B1 could be a contributing factor.
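Conceptually, the B1-constrained matching restricts each voxel to the dictionary entries simulated at the discrete B1 value nearest its measured relative B1, as in the sketch below; variable names follow the earlier matching sketch and the binning scheme is an assumption.

    import numpy as np

    def match_with_b1(signals, svd_dict, dict_params, dict_b1, b1_map):
        # signals: (n_voxels, rank); svd_dict: (n_entries, rank);
        # dict_params: (n_entries, 2) of (T1, T2); dict_b1: (n_entries,) simulated relative B1;
        # b1_map: (n_voxels,) measured relative B1 per voxel.
        sig = signals / (np.linalg.norm(signals, axis=1, keepdims=True) + 1e-12)
        dic = svd_dict / (np.linalg.norm(svd_dict, axis=1, keepdims=True) + 1e-12)

        t1 = np.zeros(sig.shape[0])
        t2 = np.zeros(sig.shape[0])
        b1_values = np.unique(dict_b1)
        nearest = b1_values[np.argmin(np.abs(b1_map[:, None] - b1_values[None, :]), axis=1)]

        for b1 in b1_values:
            entries = np.flatnonzero(dict_b1 == b1)    # dictionary slice at this B1
            voxels = np.flatnonzero(nearest == b1)     # voxels assigned to this B1 bin
            if voxels.size == 0:
                continue
            best = entries[np.argmax(np.abs(sig[voxels] @ dic[entries].conj().T), axis=1)]
            t1[voxels] = dict_params[best, 0]
            t2[voxels] = dict_params[best, 1]
        return t1, t2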

Previously, some MRF repeatability and reproducibility studies used 2D MRF21,22 rather than the volumetric 3D acquisitions commonly used in neuroradiological clinical practice. Besides using 2D acquisitions, manually drawn ROIs often formed the basis for intrasession, intersession, and interscanner comparisons of the resulting maps21,26. These ROIs potentially introduced errors due to human intervention in the processing pipeline, while also reducing the scalability of the study and reducing the potential for intermodality comparisons of MRF results against clinical standard contrasts provided by vendor product implementations. For these studies, 2D in vivo brain repeatability was shown to be 2-3% for T1 and 5-8% for T222; 2-3% for T1 and 3-8% for T221. Reproducibility was slightly lower for both studies: 3-8% for T1 and 8-14% for T222; 3.4% for T1 and 8% for T221.

Two other studies investigated the repeatability and reproducibility of 3D in vivo brain MRF data with different analyses and reported similar results23,25. Buonincontri et al. based the analysis on average GM, WM, and CSF relaxation times and reported <2% T1 and <5% T2 repeatability, and 6% GM T1 and 10% GM T2 reproducibility23. With automated segmentation of the same 3D MRF data into cortical and subcortical regions, Fujita et al. reported repeatability (cortical: T1 4% and T2 6%, subcortical: T1 1.3% and T2 5%) and reproducibility of T1 and T2 (cortical: T1 2.2% and T2 6.7%, subcortical: T1 3.2% and T2 5.8%), as well as cortical thickness and subcortical volumes25.

Qualitative Imaging

MPRAGE and TSE are common acquisition schemes for diagnostic MRI of the brain and are frequently used in clinical practice. Both techniques are qualitative acquisitions that are adjusted to maximize contrast between specific tissues and normalized by proprietary vendor reconstructions; they are therefore not expected to produce reproducible intensities for a given tissue.

As a result of changes in receiver tuning, coil loading, and image autoscaling, we expected that a linear bias (a mean shift in reproducibility) would still govern any inter- and intrasession regional variations.

The result, however, was a muddling of the underlying inter-regional contrasts, evidenced by the low bias and high variability within an individual session (MPRAGE: −0.03±5.63%, TSE: −0.26±5.49%), across sessions (MPRAGE: −2.49±6.04%, TSE: −1.85±5.66%) and across scanners (MPRAGE: −1.32±7.84%, TSE: −1.10±7.76%). Because MPRAGE images form the basis for the registration approach, the comparatively lower reproducibility of MPRAGE is not due to poor within-subject or MNI registration since any registration errors that may contribute to variability in MPRAGE image sets would have propagated to 3D-MRF and TSE images.

While in some cases the bias of the differences was lower for qualitative modalities than the MRF-derived values, the standard deviation of the differences was always higher for qualitative modalities than MRF. Low to moderate bias, especially linear bias across a measurement parameter, can be easily corrected via calibration during standard scanner quality assurance and maintenance procedures. In fact, both qualitative methods assessed in this study already benefit from post-hoc calibration due to the vendor applying both surface-coil intensity normalization and noise prewhitening in the product image reconstruction pipeline. The evaluated MRF reconstruction does not implement any form of scanner- or sequence-specific normalization.

Radiologists must accommodate the contrast variability of the weighted images with appropriate window and level adjustments when reading images acquired in different sessions.

Additionally, most qualitative or quantitative post-processing methods using raw DICOM images could be affected by the contrast variability, as the algorithms behind these techniques rely on a consistent contrast between tissues44. Eck et al.45 showed that many radiomics features extracted from TSE are not robust when image contrast, resolution, and acceleration factors are changed. Scanner software upgrades might cause additional problems for longitudinal studies due to B1 variations and signal saturation19. Reproducibility is critical for longitudinal comparisons of disease states and for the training of direct or convolutional inference networks based on value-normalized large-scale datasets, which are becoming more common. Since the MRF quantitative maps were found to be more reproducible than MPRAGE or TSE in common clinical scenarios, future cross-scanner or cross-site large scale studies would be justified in using MRF instead of, or in addition to, conventional imaging. Automated image analysis tools, such as FSL and Freesurfer, can also be expanded to operate on quantitative MRF maps.

Study Limitations and Future Work

This study investigated the reproducibility of MRF and conventional imaging on a pixel basis rather than regional volume or other structural metrics. T1w contrast is the sole input for most open-source brain analysis tools, yet an analysis engine could be optimized to extract regions from each contrast/relaxation map separately. An additional consideration is that synthetic weighted contrast generation from quantitative maps, required to run most analysis tools, is not straightforward and optimization of these methods is out of the scope of this paper.

A linear regression lookup table approach was used in lieu of direct Bloch simulation approaches because accurate proton density (PD) maps were not immediately available from the online reconstruction process used in this study. Direct substitution of M0 as PD yielded inconsistent contrasts versus ground truth MPRAGE, which led to the development of the proposed 2-to-1 lookup table approach; this substantially improved the robustness of the automated registration and skull stripping processes needed in this work and was therefore used in the described limited capacity. The synthetic images generated with the regression network provided an MPRAGE- and TSE-like contrast for registration purposes only. As illustrated in Figures 2 and 3, and given the shared anatomy and similar positioning between modalities, the synthetic images had adequate tissue contrast to ensure an accurate registration within a set. The reliability of the resulting maps suggests that synthetic contrast generation based on MRF maps may allow for system- and session-agnostic T1/T2 weighted contrasts with reproducibility magnitudes exceeding the current clinical standard. Future work will focus on creating better synthetic weighted images, which could be used to compare reproducibility at the structural volume or biomarker level for MRF and conventional weighted imaging with established community tools.

Conclusions

To improve traceability with minimal manual intervention, we presented a fully automated data acquisition, reconstruction, and analysis pipeline for 3D-MRF. The reproducibility of quantitative MRF maps and of qualitative MPRAGE and TSE images was evaluated over sessions and scanners by comparing mean values from MNI brain atlas regions. The proposed MRF acquisition, reconstruction, and analysis pipeline was found to be more repeatable and reproducible than the qualitative methods, which should open the door to wider clinical adoption and widespread use.

Supplementary Material

Supinfo

Supporting Information Figure S1: Sequence flip angles for each of the 960 timepoints per partition used for the MRF acquisition. TR and TE were held constant at 10.5ms and 2.2ms respectively. The flip angle was smoothly varied in a pseudosinusoidal pattern between 0 and 57 degrees. 48 spiral interleaves were acquired in a wrapping sequential pattern that repeats 20 times.

Supporting Information Figure S2: System architecture for the online Kubernetes-based 3D-MRF Reconstruction. Data is sent from the scanner to Azure via an SSH tunnel between the scanner's host and an SSH jump pod within the Kubernetes cluster. One or multiple GPU-enabled nodes then share the load of storing temporary dependencies and reconstructing datasets that arrive on the cluster. Logs are stored to a persistent Prometheus appliance for debugging and monitoring purposes.

Supporting Information Figure S3-S12: Image Quality for V01-V10 respectively. Synthetic MPRAGE and TSE images generated from MRF maps were used as the input to the registration pipeline. Both synthetic qualitative contrasts showed similar intraregional contrast compared to the MPRAGE and TSE imaging acquired.

Supporting Information Figure S13-S22: Linear registration performance for V01-V10 respectively. The results of canny edge detection applied to the first scanner's original MPRAGE series projected over the first scanner's original T1 and TSE series and the second scanner's reposition MPRAGE, T1, and TSE series.

Supporting Information Figure S23-S32: Atlas registration quality for V01-V10 respectively. Atlas label maps from MNI space were inversely warped and eroded to remove mislabeling artifacts introduced by nonlinear interpolation of integer label values. The resulting atlas region overlays were colorized according to the legend used in the subsequent Bland Altman plots.

Data Availability Statement

The code that supports the findings of this study is available at https://doi.org/10.5281/zenodo.8184908. All image data, in NIFTI format, are available at https://doi.org/10.5281/zenodo.8183344.

References

1. Seiler A, Noth U, Hok P, et al. Multiparametric Quantitative MRI in Neurological Diseases. Front Neurol. 2021;December-2021:640239.
2. Dan Ma, Vikas Gulani, Nicole Seiberlich, et al. Magnetic resonance fingerprinting. Nature. 2013;495:187–92.
3. Elisabeth Springer, Lima Cardoso Pedro, Bernhard Strasser, et al. MR Fingerprinting-A Radiogenomic Marker for Diffuse Gliomas. Cancers. 2022;14.
4. Che Hung Sheng, Yong Chen, Thian Yap Pew, Weili Lin. Magnetic Resonance Fingerprinting of the Pediatric Brain. Magnetic resonance imaging clinics of North America. 2021;29:605–616.
5. Mostardeiro Thomaz R, Ananya Panda, Witte Robert J., et al. Whole-brain 3D MR fingerprinting brain imaging: clinical validation and feasibility to patients with meningioma. Magma (New York, N.Y.). 2021;34:697–706.
6. Dan Ma, Yun Jiang, Yong Chen, et al. Fast 3D magnetic resonance fingerprinting for whole-brain coverage. Magnetic Resonance in Medicine. 2018;79:2190–2197.
7. Yong Chen, Hsiang Chen Meng, Baluyot Kristine R., Potts Taylor M., Jordan Jimenez, Weili Lin. MR fingerprinting enables quantitative measures of brain tissue relaxation times and myelin water fraction in the first five years of life. NeuroImage. 2019;186:782–793.
8. Eck Brendan L, Michael Yim, Hamilton Jesse I., et al. Cardiac Magnetic Resonance Fingerprinting: Potential Clinical Applications. Current cardiology reports. 2023;25.
9. Hamilton Jesse I, Shivani Pahwa, Joseph Adedigba, et al. Simultaneous Mapping of T1 and T2 Using Cardiac Magnetic Resonance Fingerprinting in a Cohort of Healthy Subjects at 1.5T. Journal of magnetic resonance imaging: JMRI. 2020;52:1044–1052.
10. Shohei Fujita, Katsuhiro Sano, Gastao Cruz, et al. MR Fingerprinting for Liver Tissue Characterization: A Histopathologic Correlation Study. Radiology. 2023;306:150–159.
11. Carlos Velasco, Gastao Cruz, Olivier Jaubert, Begona Lavin, Botnar Rene M., Prieto Claudia. Simultaneous comprehensive liver T1, T2, T1rho, and fat fraction characterization with MR fingerprinting. Magnetic Resonance in Medicine. 2022;87:1980–1991.
12. Huang Sherry S, Rasim Boyacioglu, Reid Bolding, Christina MacAskill, Yong Chen, Griswold Mark A. Free-Breathing Abdominal Magnetic Resonance Fingerprinting Using a Pilot Tone Navigator. Journal of Magnetic Resonance Imaging. 2021;54:1138–1151.
13. Ching Lo Wei, Ananya Panda, Yun Jiang, James Ahad, Vikas Gulani, Nicole Seiberlich. MR fingerprinting of the prostate. Magma (New York, N.Y.). 2022;35:557–571.
14. MacAskill Christina J, Michael Markley, Susan Farr, et al. Rapid B1-Insensitive MR Fingerprinting for Quantitative Kidney Imaging. Radiology. 2021;300:380–387.
15. Ingo Hermann, Jorge Chacon-Caldera, Irene Brumer, et al. Magnetic resonance fingerprinting for simultaneous renal T1 and T1rho mapping in a single breath-hold. Magnetic resonance in medicine. 2020;83:1940–1948.
16. Determination of Signal-to-Noise Ratio (SNR) in Diagnostic Magnetic Resonance Imaging, NEMA Standards Publication MS 1–2008 (R2014-R2020). National Electrical Manufacturers Association. 2020.
17. Determination of Two-Dimensional Geometric Distortion in Diagnostic Magnetic Resonance Images, NEMA Standards Publication MS 2-2008 (R2014-R2020). National Electrical Manufacturers Association. 2020.
18. Determination of Image Uniformity in Diagnostic Magnetic Resonance Images, NEMA Standards Publication MS 3–2008 (R2014-R2020). National Electrical Manufacturers Association. 2020.
19. Keenan Kathryn E, Zydrunas Gimbutas, Andrew Dienstfrey, Stupic Karl F. Assessing effects of scanner upgrades for clinical studies. Journal of Magnetic Resonance Imaging. 2019;50:1948–1954.
20. Yun Jiang, Dan Ma, Keenan Kathryn E., Stupic Karl F., Vikas Gulani, Griswold Mark A. Repeatability of magnetic resonance fingerprinting T1 and T2 estimates assessed using the ISMRM/NIST MRI system phantom. Magnetic resonance in medicine. 2017;78:1452–1457.
21. Gregor Korzdorfer, Rainer Kirsch, Kecheng Liu, et al. Reproducibility and Repeatability of MR Fingerprinting Relaxometry in the Human Brain. Radiology. 2019;292:429–437.
22. Guido Buonincontri, Laura Biagi, Alessandra Retico, et al. Multi-site repeatability and reproducibility of MR fingerprinting of the healthy brain at 1.5 and 3.0T. NeuroImage. 2019;195:362–372.
23. Guido Buonincontri, Kurzawski Jan W., Kaggie Joshua D., et al. Three dimensional MRF obtains highly repeatable and reproducible multi-parametric estimations in the healthy human brain at 1.5T and 3T. NeuroImage. 2021;226:117573.
24. Shohei Fujita, Guido Buonincontri, Matteo Cencini, et al. Repeatability and reproducibility of human brain morphometry using three-dimensional magnetic resonance fingerprinting. Human brain mapping. 2021;42:275–285.
25. Shohei Fujita, Matteo Cencini, Guido Buonincontri, et al. Simultaneous relaxometry and morphometry of human brain structures with 3D magnetic resonance fingerprinting: a multicenter, multiplatform, multifield-strength study. Cerebral cortex (New York, N.Y.: 1991). 2022.
26. Ching Lo Wei, Kayat Bittencourt Leonardo, Ananya Panda, et al. Multicenter Repeatability and Reproducibility of MR Fingerprinting in Phantoms and in Prostatic Tissue. Magnetic resonance in medicine. 2022;88:1818–1827.
27. Voelker Maximilian N, Oliver Kraff, Steffen Goerke, et al. The traveling heads 2.0: Multicenter reproducibility of quantitative imaging methods at 7 Tesla. NeuroImage. 2021;232.
28. Cashmore Matt T, McCann Aaron J, Wastling Stephen J, Cormac McGrath, John Thornton, Hall Matt G. Clinical quantitative MRI and the need for metrology. British Journal of Radiology. 2021;94:20201215.
29. Yun Jiang, Dan Ma, Nicole Seiberlich, Vikas Gulani, Griswold Mark A. MR fingerprinting using fast imaging with steady state precession (FISP) with spiral readout. Magnetic Resonance in Medicine. 2014;74:1621–1631.
30. Duyn JH, Yang Y, Frank JA, van der Veen JW. Simple correction method for k-space trajectory deviations in MRI. J Magn Reson. 1998 May;132(1):150–3. doi: 10.1006/jmre.1998.1396.
31. Chow K, Kellman P, Xue H. Prototyping Image Reconstruction and Analysis with FIRE. In: Proc SCMR Virtual Scientific Sessions; 2021. p. 838972.
32. Inati SJ, Naegele JD, Zwart NR, Roopchansingh V, Lizak MJ, Hansen DC, Liu CY, Atkinson D, Kellman P, Kozerke S, Xue H, Campbell-Washburn AE, Sørensen TS, Hansen MS. ISMRM Raw data format: A proposed standard for MRI raw datasets. Magn Reson Med. 2017 Jan;77(1):411–421.
33. McGivney DF, Pierre E, Ma D, et al. SVD Compression for Magnetic Resonance Fingerprinting in the Time Domain. IEEE Transactions on Medical Imaging. 2014;33:2311–2322.
34. Dan Ma, Simone Coppo, Yong Chen, et al. Slice profile and B1 corrections in 2D magnetic resonance fingerprinting. Magnetic Resonance in Medicine. 2017;78:1781–1789.
35. Xiangrui Li, Morgan Paul S., John Ashburner, Jolinda Smith, Christopher Rorden. The first step for neuroimaging data analysis: DICOM to NIfTI conversion. Journal of neuroscience methods. 2016;264:47–56.
36. Matthew Brett, Markiewicz Christopher J., Michael Hanke, et al. nipy/nibabel: 5.0.1. 2023.
37. Krzysztof Gorgolewski, Burns Christopher D., Cindee Madison, et al. Nipype: A flexible, lightweight, and extensible neuroimaging data processing framework in Python. Frontiers in Neuroinformatics. 2011;5.
38. Mark Jenkinson, Beckmann Christian F., Behrens Timothy E.J., Woolrich Mark W., Smith Stephen M. FSL. NeuroImage. 2012;62:782–790.
39. TensorFlow Developers. TensorFlow. 2023.
40. Mark Jenkinson, Stephen Smith. A global optimization method for robust affine registration of brain images. Medical Image Analysis. 2001;5:143–156.
41. Mark Jenkinson, Peter Bannister, Michael Brady, Stephen Smith. Improved optimization for the robust and accurate linear registration and motion correction of brain images. NeuroImage. 2002;17:825–841.
42. Heckert Alan N, Filliben James J. NIST Handbook 148: Dataplot Reference Manual, Volume 2: Let Subcommands and Library Functions. National Institute of Standards and Technology Handbook Series. 2003.
43. Fidel Alfaro-Almagro, Mark Jenkinson, Bangerter Neal K., et al. Image processing and Quality Control for the first 10,000 brain imaging datasets from UK Biobank. NeuroImage. 2018;166:400–424.
44. Leung Kelvin K, Clarkson Matthew J, Bartlett Jonathan W, et al. Robust atrophy rate measurement in Alzheimer’s disease using multi-site serial MRI: Tissue-specific intensity normalization and parameter selection. NeuroImage. 2010;50:516–523.
45. Brendan Eck, Chirra Prathyush V., Avani Muchhala, et al. Prospective Evaluation of Repeatability and Robustness of Radiomic Descriptors in Healthy Brain Tissue Regions In Vivo Across Systematic Variations in T2-Weighted Magnetic Resonance Imaging Acquisition Parameters. Journal of magnetic resonance imaging: JMRI. 2021;54:1009–1021.
46. Dupuis Andrew, Chen Yong, Hansen Michael, Chow Kelvin, Sun Jessie E.P., Badve Chaitra, Ma Dan, Griswold Mark A., Boyacioglu Rasim. Intrasession, Intersession, and Interscanner Qualitative and Quantitative MRI Datasets of Healthy Brains at 3.0T (0.1.0). Zenodo. 2023. 10.5281/zenodo.8183344
47. Dupuis Andrew, Chen Yong, Griswold Mark A, Boyacioglu Rasim. Python Code for Quantifying 3D Magnetic Resonance Fingerprinting (3D-MRF) reproducibility across subjects, sessions, and scanners automatically using MNI atlases (0.0.1). Zenodo. 2023. 10.5281/zenodo.8184908
