Author manuscript; available in PMC: 2023 Mar 17.
Published in final edited form as: Neuroimage. 2023 Jan 17;268:119886. doi: 10.1016/j.neuroimage.2023.119886

LARO: Learned acquisition and reconstruction optimization to accelerate quantitative susceptibility mapping

Jinwei Zhang a,b, Pascal Spincemaille b, Hang Zhang b,c, Thanh D Nguyen b, Chao Li b,d, Jiahao Li a,b, Ilhami Kovanlikaya b, Mert R Sabuncu b,c, Yi Wang a,b,*
PMCID: PMC10021353  NIHMSID: NIHMS1874422  PMID: 36669747

Abstract

Quantitative susceptibility mapping (QSM) involves acquiring and reconstructing a series of images at multiple echo times to estimate the tissue field, which prolongs scan time and requires a dedicated reconstruction technique. In this paper, we present a new framework, called Learned Acquisition and Reconstruction Optimization (LARO), that aims to accelerate the multi-echo gradient echo (mGRE) pulse sequence for QSM. Our approach optimizes a Cartesian multi-echo k-space sampling pattern jointly with a deep reconstruction network. This optimized sampling pattern was then implemented in an mGRE sequence using Cartesian fan-beam k-space segmenting and ordering for prospective scans. Furthermore, we propose inserting a recurrent temporal feature fusion module into the reconstruction network to capture signal redundancies along echo time. Our ablation studies show that both the optimized sampling pattern and the proposed reconstruction strategy improve the quality of the multi-echo image reconstructions. Generalization experiments show that LARO is robust on test data with new pathologies and different sequence parameters. Our code is available at https://github.com/Jinwei1209/LARO-QSM.git.

Keywords: Quantitative susceptibility mapping, Multi-echo gradient echo, Unrolled reconstruction, Sampling pattern optimization

1. Introduction

Quantitative magnetic resonance imaging (MRI) provides biomarkers for the clinical assessment of diverse diseases, including T1 and T2 relaxation times (Deichmann, 2005; Deoni et al., 2005), fat fraction (Yu et al., 2008), and quantitative susceptibility mapping (QSM) (de Rochefort et al., 2010). For QSM, a multi-echo gradient echo (mGRE) pulse sequence is used to acquire signals at different echo times. A tissue-induced local magnetic field map can be obtained by fitting the acquired complex multi-echo signals (Kressler et al., 2009; Liu et al., 2013). Then, a tissue susceptibility map can be computed using an inverse problem solver, such as regularized dipole inversion (Liu et al., 2012).

For QSM, the range of echo times needs to be large enough to cover both small and large susceptibilities in tissue (Wang and Liu, 2015), such as in the application of QSM to multiple sclerosis (MS), where QSM has been shown to be sensitive to myelin content as well as iron (Wang and Liu, 2015), both of which are modified in MS. However, limited scan time in clinics only allows for mGRE with a compromised spatial resolution, making visualization of smaller MS lesions more challenging. Overcoming this compromise is a major motivation for this work.

The significantly increased scan time of the mGRE sequence can be partly overcome using classical acceleration techniques such as parallel imaging (PI) (Griswold et al., 2002; Pruessmann et al., 1999), compressed sensing (CS) (Lustig et al., 2007), or their combination (PI-CS) (Murphy et al., 2012; Otazo et al., 2010). Recently, deep learning has been used to optimize k-space sampling patterns from training data, such as in LOUPE (Bahadir et al., 2020) and its extension LOUPE-ST (Zhang et al., 2020), experimental design with the constrained Cramér-Rao bound (OEDIPUS) (Haldar and Kim, 2019), and greedy pattern selection (Gözcü et al., 2018). Building on these prior works, we propose here to learn an optimal sampling pattern to accelerate QSM acquisition and improve reconstruction quality.

Reconstruction from under-sampled measurements can be solved using regularization to exploit signal redundancies, such as low-rank and/or sparsity constraints (Zhao et al., 2015; Peng et al., 2016; Zhang et al., 2015). More recently, convolutional neural networks have been proposed for compressed sensing reconstruction. One popular neural network technique implements the unrolled iterations of an optimization process, coupled with a learned regularizer, as in MoDL (Aggarwal et al., 2018) and VarNet (Hammernik et al., 2018). These architectural designs have been applied to single-echo image reconstruction, and extended to dynamic image sequence reconstruction via cascaded (Schlemper et al., 2017) and recurrent networks (Qin et al., 2018). Recently, QSM acquisition was accelerated using 2D incoherent Cartesian under-sampling and deep neural network reconstruction, with a variable density sampling pattern manually designed and fixed across echoes (Gao et al., 2021).

We propose Learned Acquisition and Reconstruction Optimization (LARO) to further optimize the sampling pattern across echoes, inferring the temporal variation by adding a temporal dimension to LOUPE-ST (Zhang et al., 2020) for the multi-echo case. Images are reconstructed accordingly using an unrolled reconstruction network based on the alternating direction method of multipliers (ADMM) (Boyd et al., 2011), which captures the signal evolution and compensates for the aliasing artifacts of mGRE images with a temporal feature fusion module.

In this study, the learning-based acquisition acceleration is not used to increase the spatial resolution but instead to accelerate the clinical protocol. For LARO training and testing experiments, we used retrospective under-sampling on fully sampled k-space data, either simulated from the existing clinical protocol by taking the inverse Fourier transform of the clinical mGRE images, or acquired directly from the scanner; the fully sampled k-space data served as ground truth for LARO sampling pattern optimization and under-sampled reconstruction. The optimized sampling pattern was then implemented in a modified mGRE sequence such that prospectively under-sampled data could be acquired and reconstructed with LARO. This work extends our conference paper (Zhang et al., 2021), in which preliminary retrospective results were shown as a proof of concept of LARO.

2. Theory

In QSM data acquisition, multi-echo k-space sampling with multiple receiver coils is modeled as:

$b_j^k = U_j F E^k s_j + n_j^k$, (1)

where $b_j^k$ is the measured k-space data of the $k$-th receiver coil at the $j$-th echo time, with $N_C$ receiver coils and $N_T$ echo times, $U_j$ is the k-space under-sampling pattern at the $j$-th echo time, $F$ is the Fourier transform, $E^k$ is the sensitivity map of the $k$-th coil, $s_j$ is the complex image at the $j$-th echo time to be reconstructed, and $n_j^k$ is the acquisition noise, assumed to be Gaussian.
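As a concrete illustration, the acquisition model of Eq. (1) can be sketched as follows. This is a minimal numpy sketch under assumed shapes and names (`forward_model`, `sigma` are our illustrative choices, not the authors' code):

```python
import numpy as np

def forward_model(s, U, E, sigma=0.0, rng=None):
    """Sketch of Eq. (1): b_j^k = U_j F E^k s_j + n_j^k.

    s : (NT, Ny, Nz) complex multi-echo images s_j
    U : (NT, Ny, Nz) binary under-sampling patterns U_j, one per echo
    E : (NC, Ny, Nz) complex coil sensitivity maps E^k
    Returns b : (NT, NC, Ny, Nz) under-sampled multi-coil k-space b_j^k.
    """
    NT, Ny, Nz = s.shape
    NC = E.shape[0]
    b = np.zeros((NT, NC, Ny, Nz), dtype=complex)
    for j in range(NT):
        for k in range(NC):
            # U_j F E^k s_j: coil modulation, Fourier transform, then masking
            b[j, k] = U[j] * np.fft.fft2(E[k] * s[j], norm="ortho")
    if rng is not None and sigma > 0:
        # additive complex Gaussian noise n_j^k on the acquired samples
        noise = rng.normal(scale=sigma, size=b.shape) \
              + 1j * rng.normal(scale=sigma, size=b.shape)
        b += (U[:, None] > 0) * noise
    return b
```

The same operator, applied per echo and per coil, forms the data-consistency term of Eq. (2).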

Having acquired $b_j^k$ with fixed $U_j$, we aim to reconstruct all $s_j$ simultaneously with a cross-echo regularization term $R(\{s_j\})$. Based on Eq. (1), a solution $\{\hat{s}_j\}$ can be obtained by solving the following optimization:

$\{\hat{s}_j\} = \arg\min_{\{s_j\}} E(\{s_j\}) = \arg\min_{\{s_j\}} \sum_{j=1}^{N_T} \sum_{k=1}^{N_C} \|U_j F E^k s_j - b_j^k\|_2^2 + R(\{s_j\})$ (2)

We denote the iterative reconstruction method solving Eq. (2) as $\{\hat{s}_j\} = A(\{U_j\}; \{b_j^k\})$. With this notation, the sampling pattern optimization problem consists of finding, for a given under-sampling ratio $\gamma$ and a given set of fully sampled training data $\{b_j^{ki}, s_j^i\}_{i=1}^{N}$, the sampling pattern $\{\hat{U}_j\}$ that solves:

$\{\hat{U}_j\} = \arg\min_{\{U_j\}} G(\{U_j\}) = \arg\min_{\{U_j\}} \frac{1}{N} \sum_{i=1}^{N} L(\{\hat{s}_j^i\}, \{s_j^i\})$, subject to $\{\hat{s}_j^i\} = A(\{U_j\}; \{U_j b_j^{ki}\})$ and $\overline{U_j} = \gamma$ for all $i$ and $j$ (3)

where $N$ is the total number of samples in the training dataset, $\{s_j^i\}$ is the $i$-th fully sampled multi-echo image, $\{\hat{s}_j^i\}$ is the $i$-th under-sampled multi-echo reconstruction obtained using the solver $A(\{U_j\}; \{U_j b_j^{ki}\})$, and $L$ is the metric quantifying the difference between $\{\hat{s}_j^i\}$ and $\{s_j^i\}$, such as the L1 loss. In the following sections, we propose a unified framework called LARO (Learned Acquisition and Reconstruction Optimization) to tackle both Eqs. (2) and (3) using deep learning techniques.

2.1. Sampling pattern optimization (SPO)

For the k-space sampling pattern optimization in Eq. (3), we extend the previously proposed LOUPE-ST method (Zhang et al., 2020) to the multi-echo setting. We consider 2D variable density Cartesian sampling patterns in the ky-kz plane with a fixed under-sampling ratio, as shown in Fig. 1b, in which learnable weights $\{w_j\}$ are used to generate a multi-echo probabilistic pattern $\{P_j\}$ through sigmoid transformation and sampling ratio renormalization:

$P_j = \mathrm{Renorm}\left(\frac{1}{1 + e^{-a \cdot w_j}}\right)$ (4)

where $a$ is the slope parameter of the sigmoid function and $\mathrm{Renorm}(\cdot)$ is a linear scaling operation ensuring that the mean value of the probabilistic pattern equals the desired under-sampling ratio (Bahadir et al., 2020). Assuming an independent Bernoulli distribution $\mathrm{Ber}(P_j)$ at each k-space location, a binary under-sampling pattern $U_j$ is generated via stochastic sampling from $P_j$:

$U_j = \mathbb{1}_{z < P_j}$ (5)

where $\mathbb{1}_x$ is the indicator function on the truth value of $x$ and $z$ is uniformly distributed on [0, 1]. Then $\{U_j\}$ are used to retrospectively acquire $\{b_j^k\}$ from fully sampled multi-echo k-space data. The stochastic sampling layer in Eq. (5) has zero gradient almost everywhere, which makes updating $\{w_j\}$ by backpropagation infeasible (Gu et al., 2015). To solve this issue, LOUPE-ST implements a straight-through estimator (Bengio et al., 2013) for backpropagation through the stochastic sampling layer by using the probabilistic pattern $P_j$ instead:

$\frac{d\,\mathbb{1}_{z < P_j}}{dw_j} \approx \frac{dP_j}{dw_j}$ (6)

which solves the zero gradient issue and performs better than other gradient approximations, such as the one implemented in LOUPE (Bahadir et al., 2020).
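The forward pass of Eqs. (4)-(5) can be sketched as follows. This is an illustrative numpy sketch (function names and the exact form of the scale-up branch of Renorm are our assumptions, following the description in Bahadir et al., 2020):

```python
import numpy as np

def renorm_sigmoid(w, a, gamma):
    """Eq. (4) sketch: map learnable weights w to a probabilistic pattern P
    whose mean equals the desired under-sampling ratio gamma."""
    p = 1.0 / (1.0 + np.exp(-a * w))          # sigmoid transformation
    m = p.mean()
    if m >= gamma:
        p = p * (gamma / m)                    # linearly scale down to mean gamma
    else:
        # linearly scale up, keeping values in [0, 1]
        p = 1.0 - (1.0 - p) * (1.0 - gamma) / (1.0 - m)
    return p

def sample_pattern(p, rng):
    """Eq. (5) sketch: binary pattern U = 1_{z < P}, z ~ U[0, 1]."""
    z = rng.uniform(size=p.shape)
    return (z < p).astype(float)
```

In an autograd framework, the straight-through estimator of Eq. (6) amounts to something like `U = (z < P).float() + P - P.detach()` in PyTorch: the forward pass stays binary, while the gradient flows through $P_j$.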

Fig. 1.

Network architecture of LARO. (a): deep ADMM was used as the backbone for under-sampled k-space reconstruction. (b): a sampling pattern optimization (SPO) module was used to learn the optimal k-space under-sampling pattern. (c): a temporal feature fusion (TFF) module was inserted into deep ADMM to capture the signal evolution along echoes.

2.2. Temporal feature fusion (TFF) for reconstruction

For the image reconstruction problem in Eq. (2), we propose an unrolled architecture with a temporal feature fusion (TFF) module based on the plug-and-play ADMM strategy (Chan et al., 2016). In plug-and-play ADMM, auxiliary variables $v_j = s_j$ are introduced for each echo $j$, and an off-the-shelf image denoiser $\{v_j^{(t+1)}\} = \mathcal{D}(\{\tilde{v}_j^{(t)}\})$ is applied, where $\tilde{v}_j^{(t)} = s_j^{(t)} + \frac{1}{\rho} u_j^{(t)}$, with $u_j^{(t)}$ the dual variable of the $t$-th outer loop and $\rho$ the penalty parameter in ADMM. We propose to unroll the iterative scheme of plug-and-play ADMM as a computation graph, which we call the "deep ADMM" network, as shown in Fig. 1a, where a CNN denoiser $\mathcal{D}(\{\tilde{v}_j^{(t)}\}; w_D)$ with weights $w_D$ replaces $\mathcal{D}(\{\tilde{v}_j^{(t)}\})$:

$v_j^{(t+1)} = \mathcal{D}(\tilde{v}_j^{(t)}; w_D)$ (7)

To incorporate the dynamic nature of multi-echo images into $\mathcal{D}(\{\tilde{v}_j^{(t)}\}; w_D)$, we propose a temporal feature fusion (TFF) module, as shown in Fig. 1c. In TFF, a recurrent module is repeated $N_T$ times; at the $j$-th repetition (corresponding to the $j$-th echo), $s_j$ (real and imaginary parts concatenated along the channel dimension) and $s_{j-1}$'s hidden state feature $h_{j-1}$ are fed into the module to generate $s_j$'s hidden state feature $h_j$:

$h_j = \mathrm{ReLU}(N_s(s_j) + N_h(h_{j-1}))$, (8)

where $N_s(\cdot)$ and $N_h(\cdot)$ are convolutional layers applied to $s_j$ and $h_{j-1}$, and ReLU is the Rectified Linear Unit activation function. The learnable weights in $N_s(\cdot)$ and $N_h(\cdot)$ are shared across recurrent repetitions. At the $j$-th recurrent forward pass in Eq. (8), the feature maps $h_j$ are generated by aggregating $s_j$ and $h_{j-1}$ through convolutions and nonlinear activations, which implicitly captures the echo dynamics and fuses features from the preceding echoes. After a full recurrent pass over the echoes, all feature maps $h_j$ are concatenated along the channel dimension and fed into a denoising network to generate $\{v_j^{(t+1)}\}$. The dynamic nature of the signal over echo times is implicitly captured by the recurrent forward process, whose parameter sharing mechanism exploits the relationship between a given echo and all earlier echoes.
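The recurrent pass of Eq. (8) can be sketched as follows. For brevity this numpy sketch treats $N_s$ and $N_h$ as 1×1 convolutions (channel-mixing matrices); the paper uses full convolutional layers, but the recurrence and weight sharing across echoes are the same:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def tff_forward(s, Ws, Wh):
    """Sketch of Eq. (8): h_j = ReLU(N_s(s_j) + N_h(h_{j-1})), weights shared
    across the NT recurrent repetitions (one per echo).

    s  : (NT, Cin, H, W) real/imag channels per echo
    Ws : (Chid, Cin)  shared 1x1-conv weights of N_s
    Wh : (Chid, Chid) shared 1x1-conv weights of N_h
    Returns h : (NT, Chid, H, W) fused hidden-state features.
    """
    NT, Cin, H, W = s.shape
    Chid = Ws.shape[0]
    h = np.zeros((NT, Chid, H, W))
    h_prev = np.zeros((Chid, H, W))                  # empty initial hidden state
    for j in range(NT):                              # one repetition per echo
        hs = np.einsum('oc,chw->ohw', Ws, s[j])      # N_s(s_j)
        hh = np.einsum('oc,chw->ohw', Wh, h_prev)    # N_h(h_{j-1})
        h[j] = relu(hs + hh)                         # Eq. (8)
        h_prev = h[j]                                # carry fused features forward
    return h
```

After this pass, the $h_j$ would be concatenated and fed to the denoiser $\mathcal{D}(\cdot; w_D)$, as described above.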

2.3. K-space under-sampling sequence design

The learned k-space sampling patterns $U_j$ were implemented in an mGRE pulse sequence for prospective data acquisition. Gradient pulses along the phase and slice encoding directions were added between consecutive echoes to allow the k-space sampling locations to change echo-by-echo during one TR. To avoid large changes in the phase and slice encoding gradients between two echoes, the following k-space ordering strategy was deployed: for each echo $j$, the sampled k-space locations in $U_j$ were first divided into multiple ordered segments of equal size based on their angle with respect to the positive ky axis. Within each segment, k-space locations were ordered by their distance to the k-space center. Using this ordering strategy, sampled locations follow a similar trajectory for all echoes, avoiding large changes in the phase and slice encoding gradients from echo to echo during one TR. An illustration of the proposed segmented k-space ordering and pulse sequence design is shown in Fig. 2. In this example, the number of echoes $N_T = 10$, acceleration factor $R = 8$, $N_y = 206$, $N_z = 80$, $N_s$ (number of segments) = 11, and $N_{ind}$ (number of k-space locations per segment) = 188, so that $N_s \times N_{ind} = N_y \times N_z / R$. Fig. 2a exemplifies the sampled ky-kz locations (yellow dots) in the current k-space segment (yellow hollow triangles) during a certain TR. The Gy and Gz gradients (blue solid triangles) in Fig. 2b are added between two unipolar readouts in Gx to adjust the next sampled location in the ky-kz plane.
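The segmentation-and-ordering step above can be sketched as follows. This is an illustrative numpy sketch assuming, as in the example above, that the number of sampled locations divides evenly into $N_s$ segments:

```python
import numpy as np

def segment_and_order(ky, kz, n_seg):
    """Sketch of the segmented centric ordering: split the sampled (ky, kz)
    locations into n_seg equal-size segments by their angle with respect to the
    positive ky axis, then sort each segment by distance to the k-space center.

    ky, kz : 1D arrays of sampled k-space coordinates (len divisible by n_seg)
    Returns a list of n_seg index arrays, ordered for acquisition.
    """
    ang = np.mod(np.arctan2(kz, ky), 2 * np.pi)    # angle w.r.t. +ky axis
    order = np.argsort(ang, kind="stable")          # angular segmentation order
    seg_size = len(ky) // n_seg
    segments = []
    for s in range(n_seg):
        idx = order[s * seg_size:(s + 1) * seg_size]
        r = np.hypot(ky[idx], kz[idx])              # distance to k-space center
        segments.append(idx[np.argsort(r, kind="stable")])  # centric ordering
    return segments
```

Each returned segment would then be traversed within one TR, so consecutive locations stay angularly close and the phase/slice encoding gradient steps remain small.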

Fig. 2.

Illustration of (a): the proposed segmented k-space ordering strategy of ten echoes and (b): pulse sequence design. In (a), segmented centric k-space ordering is indexed by greyscale level. In a certain TR, sampled ky-kz locations (yellow dots) in current k-space segment (yellow hollow triangles) are exemplified. In (b), additional Gy and Gz gradients (blue solid triangles) are added between two unipolar readouts in Gx to adjust next sampled location in ky-kz plane. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

3. Methods

3.1. Data acquisition and preprocessing

Data were acquired following an IRB approved protocol. All images used in this work were de-identified to protect the privacy of human participants.

3.2. Fully sampled acquired k-space data

Cartesian fully sampled k-space data were acquired in 13 healthy subjects (3 females, age: 30.7 ± 7.3) using a 3D mGRE sequence on a 3T GE scanner with a 32-channel head coil. Imaging parameters included FA = 15°, FOV = 25.6 cm, TE1 = 1.972 ms, TR = 36 ms, #TE = 10, ΔTE = 3.384 ms, acquisition matrix = 256 × 206 × 80 (readout × phase encoding × phase encoding), voxel size = 1 × 1 × 2 mm³, BW = 64 kHz. Total scan time was 9:30 min per subject. The 32-coil k-space data of each echo were compressed into 8 virtual coils using a geometric singular value decomposition coil compression algorithm (Zhang et al., 2013). After compression, coil sensitivity maps of each echo were estimated with the ESPIRiT eigenvector decomposition algorithm (Uecker et al., 2014) using a central 20 × 20 × 20 self-calibration k-space region for each compressed coil. From the fully sampled data, coil-combined multi-echo images were computed using the obtained coil sensitivity maps to provide the ground truth labels for both network training and performance comparison. Training, validation and testing were performed on 2D coronal slices. To this end, the 200 central coronal slices per subject were selected along the readout direction, as these contain mostly brain anatomy, avoiding a bias from slices that do not resemble the brain. 8/1/4 subjects (1600/200/800 slices) were used as training, validation, and test datasets, respectively.

To demonstrate the generalization ability of LARO, Cartesian fully sampled k-space data were also acquired in one of the healthy test subjects with the following sequence parameter modifications: a different flip angle (25°), number of echoes (7 echoes), voxel size (0.75 × 0.75 × 1.5 mm³), a second MRI scanner from the same manufacturer (GE, 12-channel head coil), and a third MRI scanner from another manufacturer (Siemens, 64-channel head coil). The same k-space processing was applied to these data to obtain compressed 8-coil k-space data, coil sensitivity maps and ground truth labels.

3.3. Fully sampled synthetic k-space data

To demonstrate LARO's improvement on pathologic reconstruction, supplementary synthetic k-space datasets from healthy subjects, multiple sclerosis (MS) patients and an intracerebral hemorrhage (ICH) patient were simulated, given the unavailability of acquired fully sampled k-space data from patients. Multi-echo complex images of 7 healthy subjects, 4 MS patients and 1 ICH patient were acquired using a 3D mGRE sequence on a 3T GE scanner. Imaging parameters included FA = 15°, FOV = 25.6 cm, TE1 = 6.69 ms, TR = 49 ms, #TE = 10, ΔTE = 4.06 ms, acquisition matrix = 256 × 206 × 68 (readout × phase encoding × phase encoding), voxel size = 1 × 1 × 2 mm³, BW = 64 kHz. Synthetic single-coil k-space data were generated through Fourier transform of the complex multi-echo images. Retrospective Cartesian under-sampling was applied to the synthetic k-space data along the two phase encoding directions. Training, validation and testing were performed on 2D coronal slices. To this end, the 200 central coronal slices per subject were selected along the readout direction, as these contain mostly brain anatomy, avoiding a bias from slices that do not resemble the brain. Data from 6/1 healthy subjects (1200/200 slices) were used for training/validation. Data from the MS (800 slices) and ICH (200 slices) patients were used as two test datasets.

3.4. Under-sampled k-space data in both retrospective and prospective studies

For the retrospective study, an acceleration factor R = 8 (12.5% under-sampling ratio) was applied to the fully sampled acquired k-space dataset, and an acceleration factor R = 4 (25% under-sampling ratio) was applied to the fully sampled synthetic k-space dataset. For the prospective study, Cartesian under-sampled k-space data were prospectively acquired in 10 healthy test subjects (3 females, age: 28.4 ± 4.1) using a modified 3D mGRE sequence with the same 3T GE scanner and imaging parameters. Different sampling patterns with R = 8 were applied during prospective scans and compared. For the optimized k-space sampling pattern, each echo was divided into 11 segments with 188 locations per segment, resulting in 188 × 11 = 2068 sampled k-space locations in total. The corresponding scan time was 1:20 min. For reference, the default imaging protocol with the same imaging parameters, except for elliptical R = 2 uniform under-sampling reconstructed with the scanner's SENSE implementation (Pruessmann et al., 1999), was performed on the same subjects.

3.5. Implementation details

3.5.1. Network architecture

The proposed network architecture is shown in Fig. 1. Real and imaginary parts of multi-echo images were concatenated along the channel dimension, yielding 20 channels to represent multi-echo complex images in the network. Under-sampled k-space data was zero-filled and Fourier-transformed to be used as input for deep ADMM (Fig. 1a) with NI=10 unrolled iterations. In deep ADMM, the denoiser D(·;wD) consisted of five convolutional layers equipped with 320 channels with instance normalization (Ulyanov et al., 2016) + ReLU activation after convolution for each hidden layer. The TFF module (Fig. 1c) used 64 channels in both convolutional layers for sj and hj. The hidden state feature maps hj were concatenated along the channel dimension and fed into D(·;wD) to generate denoised multi-echo images. The SPO module (Fig. 1b) was used to learn optimal sampling patterns, where weights {wj} (with matrix size 206 × 68 × 10 for synthetic k-space data and 206 × 80 × 10 for the acquired k-space data) were initialized as zeros and slope parameter 𝑎 in sigmoid function was 0.25. After generating binary patterns {Uj} from probabilistic patterns {Pj}, values in central 20 × 20 locations of {Uj} were set as ones for self-calibration.

3.5.2. Training strategy

The training process consists of two phases. In phase one, weights in the deep ADMM network and the SPO module were updated simultaneously by maximizing a channel-wise structural similarity index measure (SSIM) (Wang et al., 2004), $\frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{N_T} \mathrm{SSIM}(\hat{s}_j^i, s_j^i)$, with the measure between two windows $x$ and $y$ of common size (10 × 10) and location in $\hat{s}_j^i$ and $s_j^i$ defined as

$\mathrm{SSIM}(x, y) = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$, (9)

where $\mu_x, \mu_y$ and $\sigma_x^2, \sigma_y^2$ are the means and variances of $x$ and $y$, $\sigma_{xy}$ is the covariance between $x$ and $y$, $c_1 = 0.01^2$ and $c_2 = 0.03^2$. In phase two, the pre-trained deep ADMM network from phase one was fine-tuned with fixed binary sampling patterns $\{U_j\}$, either manually designed using a multi-level sampling scheme (Roman et al., 2014) or generated from the learned probabilistic patterns $\{P_j\}$ in phase one. We implemented LARO in PyTorch using the Adam optimizer (Kingma and Ba, 2014) (batch size 1, 100 epochs, initial learning rate 10⁻³) on an RTX 2080Ti GPU. Our code is available at https://github.com/Jinwei1209/LARO-QSM.git.
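The per-window measure of Eq. (9) can be sketched as follows. This illustrative numpy sketch computes Eq. (9) for one pair of windows; the training loss averages this over all window locations, echoes and training samples:

```python
import numpy as np

def ssim_window(x, y, c1=0.01**2, c2=0.03**2):
    """Eq. (9) for one pair of equally sized windows x and y."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                      # variances sigma_x^2, sigma_y^2
    cxy = ((x - mx) * (y - my)).mean()             # covariance sigma_xy
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))
```

Identical windows give SSIM = 1, and the measure decreases as the means, variances, or covariance structure of the two windows diverge.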

3.5.3. Ablation study

An ablation study of the TFF and SPO modules was conducted by removing one or both modules and quantifying the corresponding loss in performance. First, a manually designed variable density sampling pattern was generated based on a multi-level sampling scheme (Roman et al., 2014) and used to train a baseline deep ADMM network without TFF or SPO (denoted by TFF = 0/SPO = 0). Then TFF (denoted as TFF = 1), single-echo SPO (optimized sampling pattern fixed across echoes, denoted as SPO = 1) and multi-echo SPO (denoted as SPO = 2) were progressively added to the baseline deep ADMM network to check the effectiveness of each module, with LARO representing TFF with multi-echo SPO (i.e., TFF = 1, SPO = 2). For the baseline deep ADMM without TFF, Eq. (8) was replaced with $h_j = \mathrm{ReLU}(N_s(s_j))$ by removing $N_h(h_{j-1})$ to show the effectiveness of the recurrent forward pass of hidden state features $\{h_j\}$ in TFF, where two 64-channel convolutional layers in $N_s(\cdot)$ were used to match the memory usage of TFF during the ablation study.

3.5.4. Performance comparison

The iterative locally low rank (LLR) method (Zhang et al., 2015) and the deep learning method MoDL (Aggarwal et al., 2018) were used as two benchmark reconstruction methods, where MoDL was modified to reconstruct multi-echo images simultaneously with the real and imaginary parts of all echoes concatenated along the channel dimension. Manually designed and optimized sampling patterns were applied to all reconstruction methods and compared. From the resulting gradient echo images, R2* was estimated using ARLO (Pei et al., 2015), and QSM was computed using morphology enabled dipole inversion with CSF-0 reference (Liu et al., 2018) from the local field (relative difference field, RDF), which was estimated using nonlinear field estimation (Liu et al., 2013), phase unwrapping and background field removal (Liu et al., 2011).

For all retrospectively under-sampled datasets, quantitative comparisons were presented with fully sampled data as reference, where PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index) (Wang et al., 2004) metrics per reconstructed coronal slice were used to measure the reconstruction accuracy of the echo-combined magnitude image $\sqrt{\sum_{j=1}^{N_T} |s_j|^2}$, R2* and RDF maps. RMSE (Root-Mean-Square Error), HFEN (High-Frequency Error Norm) (Ravishankar and Bresler, 2010) and SSIM (Wang et al., 2004) per 3D volume were used to measure the reconstruction quality of QSM.
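Two of these accuracy metrics can be sketched as follows. Conventions vary between implementations; this numpy sketch uses the reference maximum as the PSNR peak and reports RMSE normalized by the reference norm, which are common but assumed choices here:

```python
import numpy as np

def psnr(ref, rec):
    """PSNR (dB) against a fully sampled reference (peak = reference maximum)."""
    mse = np.mean(np.abs(ref - rec) ** 2)
    return 10 * np.log10(np.max(np.abs(ref)) ** 2 / mse)

def rmse_percent(ref, rec):
    """RMSE normalized by the reference norm, in percent."""
    return 100 * np.linalg.norm(rec - ref) / np.linalg.norm(ref)
```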

For the MS patient dataset, lesions were manually segmented by an experienced neuroradiologist based on the corresponding T2-weighted FLAIR images, which were spatially registered to the magnitude of the mGRE data. A linear regression of the mean susceptibility of all lesions was performed between fully sampled and under-sampled test data.

For the prospectively under-sampled dataset, reconstructions were performed using LLR, MoDL and TFF with different sampling patterns. The SENSE reconstruction from the scanner with acceleration factor 2 was used as a reference for comparison. Detailed structures in QSM and R2*, such as white matter tracts, were qualitatively compared. The perivascular spaces were segmented manually into a single region of interest ROIp. From this ROIp, a border ROIb was computed by dilating ROIp by 1 pixel and removing the original ROIp. Sharpness was defined as the difference between the average susceptibility of ROIp and that of ROIb. Mean QSM and R2* values and standard deviations in manually drawn ROIs, including globus pallidus (GP), substantia nigra (SN), red nucleus (RN), caudate nucleus (CN), putamen (PU), thalamus (TH), optic radiation (OR) and cerebral cortex (CC, starting from the top of the brain, drawn on the tenth slice of QSMs covering part of the frontal and parietal lobes), were computed and compared.
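The sharpness score defined above can be sketched as follows, using a 1-pixel binary dilation with a 3 × 3 structuring element (the exact structuring element is our assumption; the paper only specifies a 1-pixel dilation):

```python
import numpy as np

def sharpness(qsm, roi_p):
    """Sketch: dilate ROI_p by 1 pixel, subtract ROI_p to get the border ROI_b,
    and return mean(QSM in ROI_p) - mean(QSM in ROI_b)."""
    roi_p = roi_p.astype(bool)
    pad = np.pad(roi_p, 1)
    dil = np.zeros_like(roi_p)
    # 1-pixel dilation: OR over the 8-neighborhood shifts (3x3 structuring element)
    for dy in (-1, 0, 1):
        for dz in (-1, 0, 1):
            dil |= pad[1 + dy:pad.shape[0] - 1 + dy,
                       1 + dz:pad.shape[1] - 1 + dz]
    roi_b = dil & ~roi_p                       # border = dilation minus original ROI
    return qsm[roi_p].mean() - qsm[roi_b].mean()
```

A larger score means the structure's susceptibility drops more abruptly at its border, i.e., a sharper depiction.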

3.5.5. Generalization experiments

When acquiring the fully sampled test data with sequence parameter modifications, only one parameter was modified in each scan, except for the different voxel size, where the increased spatial resolution also increased the echo spacing ΔTE to 4.728 ms and the acquisition matrix to 320 × 258 × 112 (readout × phase encoding × phase encoding). Sampling patterns for this voxel size were obtained by bicubic interpolation of the pre-trained probabilistic sampling distribution $P_j$ with matrix size 206 × 80 in Eq. (4) to matrix size 258 × 112. New binary sampling patterns $U_j$ were then generated using Eq. (5). For the test data with 7 echoes, the first 7 sampling patterns were used when applying LARO with SPO = 2. Fully sampled data were used as the reference for quantitative comparison of R2*, RDF and QSM, but not of the magnitude, due to signal intensity variations between scans.
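This resampling step can be sketched as follows. The paper used bicubic interpolation; bilinear is used here to keep the sketch dependency-free, and the final renormalization of the sampling ratio is our assumption for consistency with Eq. (4):

```python
import numpy as np

def resize_pattern(p, new_shape, gamma):
    """Sketch: resample a learned probabilistic pattern P_j to a new matrix size
    (bilinear here), then re-impose the target mean sampling ratio gamma."""
    H, W = p.shape
    Hn, Wn = new_shape
    ys = np.linspace(0, H - 1, Hn)
    zs = np.linspace(0, W - 1, Wn)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, H - 1); fy = ys - y0
    z0 = np.floor(zs).astype(int); z1 = np.minimum(z0 + 1, W - 1); fz = zs - z0
    # bilinear interpolation on the separable grid
    top = p[np.ix_(y0, z0)] * (1 - fz) + p[np.ix_(y0, z1)] * fz
    bot = p[np.ix_(y1, z0)] * (1 - fz) + p[np.ix_(y1, z1)] * fz
    q = top * (1 - fy)[:, None] + bot * fy[:, None]
    return np.clip(q * (gamma / q.mean()), 0, 1)   # restore mean sampling ratio
```

The resized pattern would then be binarized by stochastic sampling as in Eq. (5).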

4. Results

Regarding abbreviations, "TFF = 0" or "1" denotes "without" or "with" the temporal feature fusion module; "SPO = 0", "1" or "2" denotes "without", "with single-echo", or "with multi-echo" sampling pattern optimization. In terms of reconstruction methods, "TFF" denotes the proposed reconstruction with TFF = 1 under different sampling patterns; "LARO" denotes the TFF reconstruction specifically under the SPO = 2 sampling pattern, i.e., the proposed learned acquisition and reconstruction optimization framework.

4.1. Sampling patterns

Fig. 3 shows the SPO = 2 sampling pattern of the first echo (Echo1) and difference maps between adjacent echoes (ΔEcho#) for (a) acquired k-space data (acceleration factor R = 8) and (b) synthetic k-space data (acceleration factor R = 4). A different k-space sampling pattern was generated from the learned probabilistic pattern per echo, introducing additional incoherence along the temporal dimension.

Fig. 3.

SPO = 2 sampling pattern of the first echo (Echo1) and difference maps between two adjacent echoes (ΔEcho#) in (a): acquired k-space data (acceleration factor R=8) and (b): synthetic k-space data (acceleration factor R=4). Different k-space sampling pattern was generated from the learned probabilistic pattern per echo, introducing additional incoherency along temporal dimension.

4.2. Acquired k-space data

4.2.1. Ablation study

Reconstructed magnitude, R2*, RDF and QSM in one representative slice are shown in Fig. 4. As the TFF and SPO modules were gradually added to the baseline deep ADMM architecture, reconstruction errors (2nd, 4th, 6th and 8th rows) were progressively reduced in the magnitude, R2*, RDF and QSM maps, with LARO (TFF = 1, SPO = 2) performing best. The depiction of white matter tracts (insets) in the R2* and QSM maps improved as more modules were added. Quantitative metrics of the ablation study are shown in Table S1. Reconstruction accuracies of the magnitude, R2*, RDF and QSM maps progressively improved as more modules were introduced, with LARO (TFF = 1, SPO = 2) performing best.

Fig. 4.

Ablation study on the acquired k-space dataset with acceleration factor R = 8. Reconstruction errors were progressively reduced in magnitude, R2* and QSM as more modules were added. White matter tracts (insets) were blurry in all reconstructed R2* and QSM maps except LARO (TFF = 1, SPO = 2). Abbreviation: TFF = 0 or 1, without or with temporal feature fusion module; SPO = 0, 1 or 2, without, with single-echo, or with multi-echo sampling pattern optimization.

4.2.2. Performance comparison

Reconstructed magnitude, R2*, RDF and QSM with the SPO = 2 sampling pattern (Fig. 3a) in one representative slice are shown in Fig. 5. LLR had larger reconstruction errors, with heavy block-like artifacts in the RDF and QSM maps, compared to MoDL and LARO. Pronounced noise in QSM and R2* (insets) was observed in MoDL but not in LARO. Reconstructions with SPO = 0 and 1 sampling patterns are shown in Figure S1. Quantitative metrics are shown in Table S2. For each method, reconstruction accuracies of the magnitude, R2* and QSM maps progressively improved from sampling pattern SPO = 0 to 1 to 2. For each sampling pattern, TFF reconstruction consistently outperformed MoDL and LLR.

Fig. 5.

Performance comparison of acquired k-space test dataset under-sampled by the optimized sampling pattern with acceleration factor R=8 (Fig. 3a). LLR had heavy block-like artifacts in RDFs and QSMs with larger errors compared to MoDL and LARO. Insets in QSMs and R2* showed pronounced noise in MoDL, which were not seen in LARO.

4.3. Synthetic k-space data

4.3.1. Ablation study

Reconstructed magnitude, R2*, RDF and QSM at one representative slice of the MS test dataset are shown in Figure S2, with quantitative metrics of the ablation study in Table S3. As with the acquired k-space data, reconstruction accuracies progressively improved as more modules were added. In Figure S2, the putamen (insets in QSMs) was better depicted as more modules were added.

4.3.2. Performance comparison on MS dataset

Reconstructed magnitude, R2*, RDF and QSM with SPO = 2 sampling pattern (Fig. 3b) in one representative slice are shown in Fig. 6. LLR had much larger errors compared to MoDL and TFF. TFF slightly outperformed MoDL. Reconstructions with SPO = 0 and 1 sampling patterns are shown in Figure S3. Quantitative metrics are shown in Table S4. Both TFF and SPO = 2 outperformed other baseline reconstruction methods and sampling patterns.

Fig. 6.

Performance comparison of MS lesion dataset under-sampled by the optimized sampling pattern with acceleration factor R=4 (Fig. 3b). MoDL and LARO dramatically outperformed LLR in terms of reconstruction accuracy, while LARO was slightly better than MoDL.

Linear regressions of lesion-wise mean susceptibility values between fully sampled and reconstructed QSMs are shown in Figure S4. For SPO = 0, 1 and 2, linear coefficients for TFF were 1.08, 0.96, and 0.97 with the highest R2: 0.95, 0.98 and 0.99 compared to LLR and MoDL under each sampling pattern. LLR had linear coefficients 1.13, 0.98, 0.95 with the lowest R2: 0.84, 0.81 and 0.92. MoDL had linear coefficients 1.20, 1.07 and 1.10 with R2 in between: 0.89, 0.94 and 0.95. Both TFF and SPO = 2 outperformed other baselines.

4.3.3. Performance comparison on ICH dataset

The pre-trained models were tested on the ICH patient data with acceleration factor R=4 and compared. Reconstructed magnitude, R2*, RDF and QSM in one representative slice containing hemorrhage are shown in Figure S5. LLR had the highest errors among the three methods. MoDL showed some errors (red solid arrows) in QSMs which were not seen in TFF. Quantitative metrics show that both TFF and SPO = 2 outperformed their baselines.

4.4. Prospective study

Prospectively under-sampled scans with acceleration factor R=8 were acquired using the modified sequence (Fig. 2) with sampling patterns SPO = 0, 1 and 2. TFF reconstructions with the different sampling patterns are shown in Fig. 7, where SENSE reconstructions with R=2 were used as reference. Depiction of white matter tracts in R2* maps (insets in R2* maps) progressively improved from SPO = 0 and 1 to SPO = 2. Sharpness scores of perivascular spaces inside the putamen (insets in QSMs) were 0.0270, 0.0111, 0.0247 and 0.0411 for SENSE, SPO = 0, 1 and 2, respectively. LARO achieved image quality comparable to the R=2 SENSE reference. LLR, MoDL and LARO reconstructions with the SPO = 2 sampling pattern (Fig. 3a) are shown in Fig. 8. LLR had the largest errors, with heavy block-like artifacts. LARO outperformed MoDL in the depiction of white matter tracts in R2* maps (insets) and vein structures in QSMs (insets). ROI analyses are shown in Tables S5 and S6. In Table S5, with R=2 SENSE as reference, QSM under-estimations in SN, RN, CN and CC reconstructed by MoDL and TFF were observed for SPO = 0 and 1 but were reduced or eliminated for SPO = 2. LLR had larger deviations than MoDL and TFF. In Table S6, R2* over-estimations in GP, PU and CC were seen for SPO = 0 and 1 but were corrected for SPO = 2 for LLR, MoDL and TFF.

Fig. 7.

TFF reconstructions on prospectively under-sampled raw k-space data of one healthy subject with acceleration factor R=8. Compared to SENSE reconstruction with R=2 as reference, depictions of white matter tracts in R2* maps (insets in R2* maps) were progressively improved from SPO = 0, 1 to LARO (SPO = 2). Sharpness scores of perivascular spaces inside putamen (insets in QSMs) were 0.0270, 0.0111, 0.0247 and 0.0411 for SENSE, SPO = 0, 1 and 2. Abbreviation: TFF = 1, with temporal feature fusion module; SPO = 0, 1 or 2, without, with single-echo or with multi-echo sampling pattern optimization.

Fig. 8.

Performance comparison on the prospectively under-sampled raw k-space data of one healthy subject with SPO = 2 and acceleration factor R=8 (Fig. 3a). SENSE reconstructions with R=2 were used as references. LLR had heavy block-like artifacts in RDFs and QSMs. White matter tracts in R2* maps (insets in R2* maps) and vein structures in QSMs (insets in QSMs) were blurrier in MoDL than LARO.

4.5. Generalization study

Reconstructions of the different test datasets retrospectively under-sampled by SPO = 2 are shown in Fig. 9. Error maps and quantitative metrics were computed for R2*, RDF and QSM against their fully sampled references; magnitude was excluded due to signal intensity variations across datasets. No visible artifacts were seen when applying the pre-trained reconstruction network to datasets with another flip angle (25°, 2nd column), another number of echoes (7 echoes, 1st column) or a second MRI scanner from the same manufacturer (GE, 3rd column). Moderate noise appeared (red arrows in the last column) when tested with another voxel size (0.75 × 0.75 × 1.5 mm3, last column), and moderate residual aliasing artifacts remained when tested with a third MRI scanner from another manufacturer (Siemens, 4th column). Reconstructions retrospectively under-sampled by SPO = 0 and 1 are shown in Figures S6 and S7. For each test dataset, reconstruction performance consistently improved from sampling pattern SPO = 0 and 1 to SPO = 2.

Fig. 9.

Generalization experiments of LARO with different imaging parameters, retrospectively under-sampled by the SPO = 2 sampling pattern. The fully sampled reference of each test dataset was used to compute error maps and quantitative metrics. Magnitude images were not considered for quantitative comparison due to signal intensity variations among scans. LARO performed well without visible artifacts on test datasets with another flip angle (25°, 2nd column), another number of echoes (7 echoes, 1st column) and a second MRI scanner from the same manufacturer (GE, 3rd column), but had moderate noise (red arrows in the last column) on another voxel size (0.75 × 0.75 × 1.5 mm3, last column) and moderate residual aliasing artifacts on a third MRI scanner from another manufacturer (Siemens, 4th column). Reconstructions on these datasets retrospectively under-sampled by SPO = 1 and 0 are shown in Figures S6 and S7. For each test dataset, reconstruction performance consistently improved from sampling pattern SPO = 0 and 1 to SPO = 2. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

5. Discussion

In this work, we demonstrated the feasibility of learning a sampling pattern and reconstruction process specifically designed to accelerate the acquisition of multi-echo gradient echo data for computing a susceptibility map (QSM). R=8 acceleration was achieved while maintaining QSM quality in both healthy subjects and an MS patient. Both retrospective and prospective acceleration were demonstrated. Finally, reconstruction performance was superior to that of previously proposed acceleration techniques.

The original LOUPE (Bahadir et al., 2020)/LOUPE-ST (Zhang et al., 2020) learned an optimized variable density sampling pattern from fully sampled single-echo k-space data. In the SPO = 1 method in this work, LOUPE-ST was used to learn a single optimized sampling pattern from fully sampled multi-echo k-space data, and the obtained sampling pattern and reconstruction were applied to all echoes. The SPO = 2 method differs from SPO = 1 by learning a sampling pattern for each echo, introducing additional sampling incoherency along echoes. LOUPE/LOUPE-ST (SPO = 1) outperformed manually designed variable density patterns (SPO = 0) because it optimized the variable density of the sampling pattern by learning a probabilistic density distribution (Eq. (4)) that was updated during training to improve reconstruction performance.
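The probabilistic sampling pattern learning can be sketched as a forward pass over learnable per-location logits. This is a simplified sketch with an illustrative function name and toy grid; the hard threshold at the end stands in for the sigmoid relaxation with a straight-through estimator used during actual LOUPE-ST training:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mask(logits, R):
    """Forward pass of a LOUPE-style probabilistic mask (sketch).
    logits: learnable per-location parameters over the ky-kz plane.
    R: target acceleration factor, i.e. keep ~1/R of the locations."""
    p = 1.0 / (1.0 + np.exp(-logits))     # sigmoid -> sampling probabilities
    p = p * (p.size / R) / p.sum()        # rescale mean probability to 1/R
    p = np.clip(p, 0.0, 1.0)
    u = rng.random(p.shape)               # random draw per location
    # Hard Bernoulli threshold; training replaces this with a sigmoid
    # relaxation / straight-through estimator so gradients can flow.
    return (u < p).astype(float)

logits = rng.normal(size=(64, 64))        # toy 64x64 phase-encode grid
mask = sample_mask(logits, R=4)
```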

In this work, the multi-echo sampling pattern optimization SPO = 2 (Eq. (3)) was learned, achieving both the optimized k-space variable density of SPO = 1 and additional incoherency along echoes, which may produce aliasing patterns in the gradient echo images of different echoes that can be combined and compensated during reconstruction. The SPO = 2 sampling pattern distinguishes the proposed framework from another deep learning based mGRE acceleration method (Gao et al., 2021), where a manually designed 2D variable density sampling pattern (SPO = 0) was applied, which may not be optimal for mGRE acquisition. We extend our conference paper (Zhang et al., 2021) by implementing the SPO = 2 sampling pattern in the existing mGRE sequence. The proposed multi-echo adaptive fan-beam ordering strategy (Fig. 2a) prevented large changes in the phase and slice encodings between echoes within one TR, improving image quality (Spincemaille et al., 2004; Spincemaille et al., 2006). The prospective results in Figs. 7 and 8 show the feasibility of achieving R=8 acceleration using the modified mGRE sequence with QSM image quality comparable to R=2 SENSE.

Our reconstruction architecture (Fig. 1) was based on unrolling a plug-and-play ADMM iterative scheme (Chan et al., 2016) and replacing the regularization step with a deep neural network denoiser. This idea is inspired by MoDL (Aggarwal et al., 2018), where a quasi-Newton iterative scheme was unrolled into a network architecture and a five-convolution-layer neural network denoiser was applied. In (Gao et al., 2021), a MoDL-like architecture (Fig. 1 in (Gao et al., 2021)) was proposed, but only one repetition of unrolling was applied. As reported in MoDL (Aggarwal et al., 2018), more iterations/repetitions of the unrolled architecture helped improve reconstruction performance. We used NI = 10 unrolled iterations, the same as MoDL, to ensure good performance.
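The plug-and-play ADMM idea can be illustrated in its simplest form: a single-coil Cartesian case where a generic denoiser callable replaces the regularization step. This is a sketch under strong simplifying assumptions (one echo, no learned weights, hypothetical function name), not the paper's trained multi-echo network:

```python
import numpy as np

def pnp_admm_recon(y, mask, denoiser, n_iter=10, rho=1.0):
    """Plug-and-play ADMM for masked Cartesian MRI (single-coil sketch).
    y: under-sampled k-space (zeros where mask == 0); mask: binary mask;
    denoiser: callable replacing the regularization/proximal step.
    The x-update has an element-wise closed form in k-space because the
    Fourier transform is unitary and the sampling mask is diagonal."""
    x = np.fft.ifft2(y)
    z, u = x.copy(), np.zeros_like(x)
    for _ in range(n_iter):
        # x-update: argmin_x ||M F x - y||^2 + rho ||x - (z - u)||^2
        rhs = np.fft.fft2(z - u)
        x = np.fft.ifft2((mask * y + rho * rhs) / (mask + rho))
        z = denoiser(x + u)      # learned denoiser stands in for the prox
        u = u + x - z            # dual variable update
    return z

# sanity check: fully sampled data with an identity "denoiser" recovers the image
rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16))
recon = pnp_admm_recon(np.fft.fft2(img), np.ones((16, 16)),
                       denoiser=lambda v: v)
```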

Recently, convolutional neural networks have been explored for solving inverse problems related to multi-echo MRI signals (Kim et al., 2022; Cho et al., 2022; Zhang et al., 2021; Zhang et al., 2021; Jafari et al., 2021; Zhang et al., 2020; Zhang et al., 2020; Zhang et al., 2020), where the established U-Net architecture (Ronneberger et al., 2015) was consistently applied. LARO is novel in that it introduces a TFF module (Fig. 1c) to implicitly capture the multi-echo correlation and effectively compensate the temporally incoherent aliasing patterns of the GRE echo signals when SPO = 2. The benefit of the TFF module was apparent in our ablation study (Fig. 4 and Fig. S2, Tables S1 and S3) and in the comparison to MoDL (Fig. 7, Figs. S1, S3 and S5). This also distinguishes the proposed framework from (Gao et al., 2021), where multi-echo images were only concatenated along the channel dimension for convolution.

Pathologies such as hemorrhagic lesions, which were not seen in the healthy training data, were still effectively reconstructed by LARO and MoDL with low reconstruction error (2nd row in Figure S5). We speculate that the data consistency module in the proposed method allows accurate reconstruction of pathologies not seen during training. Generalization experiments of LARO (Fig. 9, Figs. S6 and S7) demonstrate that changing the flip angle, changing the number of echoes or using a different scanner from the same manufacturer led to small image reconstruction errors. At the same time, using a smaller voxel size or a scanner from a different manufacturer led to a moderate increase in image noise (red arrows in the last column of Fig. 9) or residual aliasing (4th column in Fig. 9). One potential cause of the decreased performance when changing the voxel size is the required interpolation of the optimized sampling pattern. For optimal performance, LARO may need to be retrained. It is, however, possible that fine-tuning the existing weights on a small set of fully sampled data acquired at the new resolution would be sufficient; the details should be the subject of future research.

Despite these limitations, the pre-trained sampling patterns from SPO = 0 and 1 to 2 consistently improved reconstruction performance on all test datasets, which implies that for brain mGRE acquisition, the optimized k-space variable density distribution (Eq. (4)) may be independent of the scanning parameters and manufacturers and can be generalized effectively. LARO is also independent of the number of receiver coil channels used for the scan, as both the TFF and denoiser networks are applied to the coil-combined image, which further improves the generalization ability of LARO.

For raw k-space data, a fully sampled training dataset was available only for healthy volunteers, because the long scan time (9:30 min) was not feasible for patients. To incorporate patient datasets for training, an unrolled reconstruction network may be trained without fully sampled k-space data using self-supervised learning (Yaman et al., 2020): during training, one portion of the under-sampled k-space data is included in the data consistency module and the remaining k-space data is used in a forward model loss, which promises test results comparable to supervised training on fully sampled data. The reconstruction network of LARO may be enhanced by incorporating under-sampled patient data with such a self-supervised learning strategy.
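The self-supervised k-space split described above amounts to partitioning the sampled locations into two disjoint masks, one for data consistency and one for the loss. A minimal sketch; the function name and split ratio are illustrative assumptions, not the exact procedure of Yaman et al. (2020):

```python
import numpy as np

rng = np.random.default_rng(1)

def split_kspace(mask, loss_fraction=0.4):
    """Split an under-sampling mask into a data-consistency subset and a
    disjoint loss subset for self-supervised training without fully
    sampled references. loss_fraction is an illustrative choice."""
    idx = np.flatnonzero(mask)                       # sampled locations
    loss_idx = rng.choice(idx, size=int(loss_fraction * idx.size),
                          replace=False)             # held out for the loss
    loss_mask = np.zeros_like(mask)
    loss_mask.flat[loss_idx] = 1
    return mask - loss_mask, loss_mask               # DC mask, loss mask

mask = (rng.random((32, 32)) < 0.25).astype(float)   # toy R~4 mask
dc_mask, loss_mask = split_kspace(mask)
```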

LARO is applied here to mGRE for accelerating QSM that is useful for studying tissue magnetism (Wang and Liu, 2015), particularly paramagnetic iron (Liu et al., 2010), and is promising for assessing various diseases (Wang et al., 2017), such as multiple sclerosis (Zhang et al., 2016). The proposed combination of sampling and reconstruction optimization can be extended to other mGRE tasks with different organs, such as liver and cardiac QSM (Jafari et al., 2019; Wen et al., 2018; Wen et al., 2019), or other quantitative imaging tasks, such as T1 (Deichmann, 2005) and T2 (Deoni et al., 2005) mapping, where signal models based on Bloch equations are used to describe signal intensity changes over time. The proposed sampling strategy and temporal feature fusion may be useful to obtain better multi-contrast images. Furthermore, with the emergence of quantitative multi-parametric MRI (Christodoulou et al., 2018), sampling and reconstructing multi-contrast images together in one sequence can be an effective strategy, since multi-contrast images that are intrinsically registered in one scan have redundancy in both spatial and temporal dimensions, which can be utilized to regularize the image series during reconstruction. Our future work will extend LARO to other mGRE and multi-contrast MRI tasks.
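As a concrete instance of such time-course signal models, R2* can be estimated from multi-echo magnitudes with a mono-exponential fit. This log-linear sketch, with illustrative echo times, is a simplified stand-in for the more robust fitting methods used for R2* maps in practice:

```python
import numpy as np

def fit_r2star(mag, tes):
    """Log-linear mono-exponential fit |S(TE)| = S0 * exp(-R2* * TE),
    a simplified stand-in for more robust R2* fitting methods."""
    slope, intercept = np.polyfit(np.asarray(tes, dtype=float),
                                  np.log(np.asarray(mag, dtype=float)), 1)
    return -slope, np.exp(intercept)     # R2* (1/s), S0

tes = np.array([0.004, 0.009, 0.014, 0.019])   # echo times in seconds
sig = 100.0 * np.exp(-30.0 * tes)              # noiseless toy signal, R2* = 30/s
r2s, s0 = fit_r2star(sig, tes)
```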

6. Conclusion

We propose LARO, a unified method to optimize mGRE signal acquisition and image reconstruction to accelerate QSM. The proposed reconstruction network inserts a recurrent network module into a deep ADMM network to capture the signal evolution and compensate aliasing artifacts along echo time. The proposed sampling pattern optimization module allows acquiring k-space data along echoes with an optimized multi-echo sampling pattern. Experimental results showed superior performance of LARO with good generalization ability. Prospective scans using the optimized multi-echo sampling pattern demonstrated the feasibility of LARO.


Acknowledgments

This work was supported in part by research grants from the NIH: R01NS105144, R01NS090464, S10OD021782, R01LM012719 and R01AG053949, the NSF NeuroNex grant 1707312, the NSF CAREER 1748377 grant and the National MS Society: RG-1602-07671.

Footnotes

Declaration of Competing Interest

YW and PS are inventors of QSM-related patents issued to Cornell University and hold equity in Medimagemetric LLC. PS is a paid consultant for Medimagemetric LLC.

Credit authorship contribution statement

Jinwei Zhang: Conceptualization, Methodology, Software, Writing – original draft. Pascal Spincemaille: Methodology, Writing – review & editing, Supervision. Hang Zhang: Methodology, Software, Validation. Thanh D. Nguyen: Data curation, Supervision. Chao Li: Software, Validation. Jiahao Li: Software, Validation. Ilhami Kovanlikaya: Visualization, Investigation, Data curation. Mert R. Sabuncu: Methodology, Writing – review & editing, Supervision. Yi Wang: Methodology, Writing – original draft, Writing – review & editing, Supervision, Funding acquisition.

Data statements

All images used in this work are de-identified to protect privacy of human participants. Code is available at https://github.com/Jinwei1209/LARO-QSM.git. Data are available to interested researchers upon reasonable request.

Supplementary materials

Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.neuroimage.2023.119886.

Data availability

Data will be made available on request.

References

  1. Aggarwal HK, Mani MP, Jacob M, 2018. MoDL: model-based deep learning architecture for inverse problems. IEEE Trans. Med. Imaging 38 (2), 394–405. [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Bahadir CD, Wang AQ, Dalca AV, Sabuncu MR, 2020. Deep-learning-based optimization of the under-sampling pattern in MRI. IEEE Trans. Comput. Imaging 6, 1139–1152. [Google Scholar]
  3. Bengio Y, Léonard N, Courville A, 2013. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432. [Google Scholar]
  4. Boyd S, Parikh N, Chu E, Peleato B, Eckstein J, 2011. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends® Mach. Learn. 3 (1), 1–122. [Google Scholar]
  5. Chan SH, Wang X, Elgendy OA, 2016. Plug-and-play ADMM for image restoration: fixed-point convergence and applications. IEEE Trans. Comput. Imaging 3 (1), 84–98. [Google Scholar]
  6. Cho J, Zhang J, Spincemaille P, Zhang H, Hubertus S, Wen Y, Jafari R, Zhang S, Nguyen TD, Dimov AV, 2022. QQ-NET–using deep learning to solve quantitative susceptibility mapping and quantitative blood oxygen level dependent magnitude (QSM + qBOLD or QQ) based oxygen extraction fraction (OEF) mapping. Magn. Reson. Med. 87 (3), 1583–1594. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Christodoulou AG, Shaw JL, Nguyen C, Yang Q, Xie Y, Wang N, Li D, 2018. Magnetic resonance multitasking for motion-resolved quantitative cardiovascular imaging. Nat. Biomed. Eng. 2 (4), 215–226. [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Deichmann R, 2005. Fast high-resolution T1 mapping of the human brain. Magn. Reson. Med. 54 (1), 20–27. [DOI] [PubMed] [Google Scholar]
  9. Deoni SC, Peters TM, Rutt BK, 2005. High-resolution T1 and T2 mapping of the brain in a clinically acceptable time with DESPOT1 and DESPOT2. Magn. Reson. Med. 53 (1), 237–241. [DOI] [PubMed] [Google Scholar]
  10. de Rochefort L, Liu T, Kressler B, Liu J, Spincemaille P, Lebon V, Wu J, Wang Y, 2010. Quantitative susceptibility map reconstruction from MR phase data using bayesian regularization: validation and application to brain imaging. Magn. Reson. Med. 63 (1), 194–206. [DOI] [PubMed] [Google Scholar]
  11. Gao Y, Cloos M, Liu F, Crozier S, Pike GB, Sun H, 2021. Accelerating quantitative susceptibility and R2* mapping using incoherent undersampling and deep neural network reconstruction. Neuroimage 240, 118404. [DOI] [PubMed] [Google Scholar]
  12. Gözcü B, Mahabadi RK, Li Y−H, Ilıcak E, Cukur T, Scarlett J, Cevher V, 2018. Learning-based compressive MRI. IEEE Trans. Med. Imaging 37 (6), 1394–1406. [DOI] [PubMed] [Google Scholar]
  13. Griswold MA, Jakob PM, Heidemann RM, Nittka M, Jellus V, Wang J, Kiefer B, Haase A, 2002. Generalized autocalibrating partially parallel acquisitions (GRAPPA). Magn. Reson. Med. 47 (6), 1202–1210. [DOI] [PubMed] [Google Scholar]
  14. Gu S, Levine S, Sutskever I, Mnih A, 2015. MuProp: unbiased backpropagation for stochastic neural networks. arXiv preprint arXiv:1511.05176. [Google Scholar]
  15. Haldar JP, Kim D, 2019. OEDIPUS: an experiment design framework for sparsity-constrained MRI. IEEE Trans. Med. Imaging 38 (7), 1545–1558. [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Jafari R, Sheth S, Spincemaille P, Nguyen TD, Prince MR, Wen Y, Guo Y, Deh K, Liu Z, Margolis D, 2019. Rapid automated liver quantitative susceptibility mapping. J. Magn. Reson. Imaging 50 (3), 725–732. [DOI] [PMC free article] [PubMed] [Google Scholar]
  17. Jafari R, Spincemaille P, Zhang J, Nguyen TD, Luo X, Cho J, Margolis D, Prince MR, Wang Y, 2021. Deep neural network for water/fat separation: supervised training, unsupervised training, and no training. Magn. Reson. Med. 85 (4), 2263–2277. [DOI] [PMC free article] [PubMed] [Google Scholar]
  18. Kim J, Nguyen TD, Zhang J, Gauthier SA, Marcille M, Zhang H, Cho J, Spincemaille P, Wang Y, 2022. Subsecond accurate myelin water fraction reconstruction from FAST-T2 data with 3D UNET. Magn. Reson. Med.. [DOI] [PubMed] [Google Scholar]
  19. Kingma DP, Ba J, 2014. Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. [Google Scholar]
  20. Kressler B, De Rochefort L, Liu T, Spincemaille P, Jiang Q, Wang Y, 2009. Nonlinear regularization for per voxel estimation of magnetic susceptibility distributions from MRI field maps. IEEE Trans. Med. Imaging 29 (2), 273–281. [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Liu J, Liu T, de Rochefort L, Ledoux J, Khalidov I, Chen W, Tsiouris AJ, Wisnieff C, Spincemaille P, Prince MR, 2012. Morphology enabled dipole inversion for quantitative susceptibility mapping using structural consistency between the magnitude image and the susceptibility map. Neuroimage 59 (3), 2560–2568. [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Liu T, Khalidov I, de Rochefort L, Spincemaille P, Liu J, Tsiouris AJ, Wang Y, 2011. A novel background field removal method for MRI using projection onto dipole fields. NMR Biomed. 24 (9), 1129–1136. [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Liu T, Spincemaille P, de Rochefort L, Wong R, Prince M, Wang Y, 2010. Unam-biguous identification of superparamagnetic iron oxide particles through quantitative susceptibility mapping of the nonlinear response to magnetic fields. Magn. Reson. Imaging 28 (9), 1383–1389. [DOI] [PMC free article] [PubMed] [Google Scholar]
  24. Liu T, Wisnieff C, Lou M, Chen W, Spincemaille P, Wang Y, 2013. Nonlinear formulation of the magnetic field to source relationship for robust quantitative susceptibility mapping. Magn. Reson. Med. 69 (2), 467–476. [DOI] [PubMed] [Google Scholar]
  25. Liu Z, Spincemaille P, Yao Y, Zhang Y, Wang Y, 2018. MEDI+0: morphology enabled dipole inversion with automatic uniform cerebrospinal fluid zero reference for quantitative susceptibility mapping. Magn. Reson. Med. 79 (5), 2795–2803. [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. Lustig M, Donoho D, Pauly JM, 2007. Sparse MRI: the application of compressed sensing for rapid MR imaging. Magn. Reson. Med. 58 (6), 1182–1195. [DOI] [PubMed] [Google Scholar]
  27. Murphy M, Alley M, Demmel J, Keutzer K, Vasanawala S, Lustig M, 2012. Fast ℓ1-SPIRiT compressed sensing parallel imaging MRI: scalable parallel implementation and clinically feasible runtime. IEEE Trans. Med. Imaging 31 (6), 1250–1262. [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Otazo R, Kim D, Axel L, Sodickson DK, 2010. Combination of compressed sensing and parallel imaging for highly accelerated first-pass cardiac perfusion MRI. Magn. Reson. Med. 64 (3), 767–776. [DOI] [PMC free article] [PubMed] [Google Scholar]
  29. Pei M, Nguyen TD, Thimmappa ND, Salustri C, Dong F, Cooper MA, Li J, Prince MR, Wang Y, 2015. Algorithm for fast monoexponential fitting based on auto-regression on linear operations (ARLO) of data. Magn. Reson. Med. 73 (2), 843–850. [DOI] [PMC free article] [PubMed] [Google Scholar]
  30. Peng X, Ying L, Liu Y, Yuan J, Liu X, Liang D, 2016. Accelerated exponential parameterization of T2 relaxation with model-driven low rank and sparsity priors (MORASA). Magn. Reson. Med. 76 (6), 1865–1878. [DOI] [PubMed] [Google Scholar]
  31. Pruessmann KP, Weiger M, Scheidegger MB, Boesiger P, 1999. SENSE: sensitivity encoding for fast MRI. Magn. Reson. Med. 42 (5), 952–962. [PubMed] [Google Scholar]
  32. Qin C, Schlemper J, Caballero J, Price AN, Hajnal JV, Rueckert D, 2018. Convolutional recurrent neural networks for dynamic MR image reconstruction. IEEE Trans. Med. Imaging 38 (1), 280–290. [DOI] [PubMed] [Google Scholar]
  33. Ravishankar S, Bresler Y, 2010. MR image reconstruction from highly undersampled k-space data by dictionary learning. IEEE Trans. Med. Imaging 30 (5), 1028–1041. [DOI] [PubMed] [Google Scholar]
  34. Roman B, Hansen A, Adcock B, 2014. On asymptotic structure in compressed sensing. arXiv preprint arXiv:1406.4178. [Google Scholar]
  35. Ronneberger O, Fischer P, Brox T, 2015. U-Net: Convolutional Networks For Biomedical Image Segmentation. Springer, pp. 234–241. [Google Scholar]
  36. Schlemper J, Caballero J, Hajnal JV, Price AN, Rueckert D, 2017. A deep cascade of convolutional neural networks for dynamic MR image reconstruction. IEEE Trans. Med. Imaging 37 (2), 491–503. [DOI] [PubMed] [Google Scholar]
  37. Spincemaille P, Hai ZX, Cheng L, Prince M, Wang Y, 2006. Motion Artifact Suppression in Breath Hold 3D Contrast Enhanced Magnetic Resonance Angiography Using ECG Ordering. IEEE, pp. 739–742. [DOI] [PubMed] [Google Scholar]
  38. Spincemaille P, Nguyen TD, Wang Y, 2004. View ordering for magnetization prepared steady state free precession acquisition: application in contrast-enhanced MR angiography. Magn. Reson. Med. 52 (3), 461–466. [DOI] [PubMed] [Google Scholar]
  39. Uecker M, Lai P, Murphy MJ, Virtue P, Elad M, Pauly JM, Vasanawala SS, Lustig M, 2014. ESPIRiT—An eigenvalue approach to autocalibrating parallel MRI: where SENSE meets GRAPPA. Magn. Reson. Med. 71 (3), 990–1001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  40. Ulyanov D, Vedaldi A, Lempitsky V, 2016. Instance normalization: the missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022. [Google Scholar]
  41. Wang Y, Liu T, 2015b. Quantitative susceptibility mapping (QSM): decoding MRI data for a tissue magnetic biomarker. Magn. Reson. Med. 73 (1), 82–101. [DOI] [PMC free article] [PubMed] [Google Scholar]
  42. Wang Y, Spincemaille P, Liu Z, Dimov A, Deh K, Li J, Zhang Y, Yao Y, Gillen KM, Wilman AH, Gupta A, Tsiouris AJ, Kovanlikaya I, Chiang GC, Weinsaft JW, Tanenbaum L, Chen W, Zhu W, Chang S, Lou M, Kopell BH, Kaplitt MG, Devos D, Hirai T, Huang X, Korogi Y, Shtilbans A, Jahng GH, Pelletier D, Gauthier SA, Pitt D, Bush AI, Brittenham GM, Prince MR, 2017. Clinical quantitative susceptibility mapping (QSM): biometal imaging and its emerging roles in patient care. J. Magn. Reson. Imaging 46 (4), 951–971. [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP, 2004. Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13 (4), 600–612. [DOI] [PubMed] [Google Scholar]
  44. Wen Y, Nguyen TD, Liu Z, Spincemaille P, Zhou D, Dimov A, Kee Y, Deh K, Kim J, Weinsaft JW, 2018a. Cardiac quantitative susceptibility mapping (QSM) for heart chamber oxygenation. Magn. Reson. Med. 79 (3), 1545–1552. [DOI] [PMC free article] [PubMed] [Google Scholar]
  45. Wen Y, Weinsaft JW, Nguyen TD, Liu Z, Horn EM, Singh H, Kochav J, Eskreis-Winkler S, Deh K, Kim J, 2019b. Free breathing three-dimensional cardiac quantitative susceptibility mapping for differential cardiac chamber blood oxygenation–initial validation in patients with cardiovascular disease inclusive of direct comparison to invasive catheterization. J. Cardiovas. Magn. Reson. 21 (1), 1–13. [DOI] [PMC free article] [PubMed] [Google Scholar]
  46. Yaman B, Hosseini SAH, Moeller S, Ellermann J, Uğurbil K, Akçakaya M, 2020. Self-supervised learning of physics-guided reconstruction neural networks without fully sampled reference data. Magn. Reson. Med. 84 (6), 3172–3191. [DOI] [PMC free article] [PubMed] [Google Scholar]
  47. Yu H, Shimakawa A, McKenzie CA, Brodsky E, Brittain JH, Reeder SB, 2008. Multiecho water-fat separation and simultaneous R2* estimation with multifrequency fat spectrum modeling. Magn. Reson. Med. 60 (5), 1122–1134. [DOI] [PMC free article] [PubMed] [Google Scholar]
  48. Zhang H, Zhang J, Wang R, Zhang Q, Spincemaille P, Nguyen TD, Wang Y, 2021a. Efficient folded attention for medical image reconstruction and segmentation. Proc. AAAI Conf. Artif. Intell. 35 (12), 10868–10876. [Google Scholar]
  49. Zhang J, Liu Z, Zhang S, Zhang H, Spincemaille P, Nguyen TD, Sabuncu MR, Wang Y, 2020a. Fidelity imposed network edit (FINE) for solving ill-posed image reconstruction. Neuroimage 211, 116579. [DOI] [PMC free article] [PubMed] [Google Scholar]
  50. Zhang J, Zhang H, Li C, Spincemaille P, Sabuncu M, Nguyen TD, Wang Y, 2021b. Temporal Feature Fusion With Sampling Pattern Optimization for Multi-echo Gradient Echo Acquisition and Image Reconstruction. Springer, pp. 232–242. [Google Scholar]
  51. Zhang J, Zhang H, Sabuncu M, Spincemaille P, Nguyen T, Wang Y, 2020b. Bayesian Learning of Probabilistic Dipole Inversion For Quantitative Susceptibility Mapping. PMLR, pp. 892–902. [Google Scholar]
  52. Zhang J, Zhang H, Sabuncu M, Spincemaille P, Nguyen T, Wang Y, 2020c. Probabilistic dipole inversion for adaptive quantitative susceptibility mapping. arXiv preprint arXiv:2009.04251. [Google Scholar]
  53. Zhang J, Zhang H, Spincemaille P, Nguyen T, Sabuncu MR, Wang Y, 2021c. Hybrid Optimization Between Iterative and Network Fine-Tuning Reconstructions For Fast Quantitative Susceptibility Mapping. PMLR, pp. 870–880. [Google Scholar]
  54. Zhang J, Zhang H, Wang A, Zhang Q, Sabuncu M, Spincemaille P, Nguyen TD, Wang Y, 2020d. Extending LOUPE For K-space Under-sampling Pattern Optimization in Multi-coil MRI. Springer, pp. 91–101. [Google Scholar]
  55. Zhang T, Pauly JM, Levesque IR, 2015. Accelerating parameter mapping with a locally low rank constraint. Magn. Reson. Med. 73 (2), 655–661. [DOI] [PMC free article] [PubMed] [Google Scholar]
  56. Zhang T, Pauly JM, Vasanawala SS, Lustig M, 2013. Coil compression for accelerated imaging with Cartesian sampling. Magn. Reson. Med. 69 (2), 571–582. [DOI] [PMC free article] [PubMed] [Google Scholar]
  57. Zhang Y, Gauthier SA, Gupta A, Comunale J, Chia-Yi Chiang G, Zhou D, Chen W, Giambrone AE, Zhu W, Wang Y, 2016. Longitudinal change in magnetic susceptibility of new enhanced multiple sclerosis (MS) lesions measured on serial quantitative susceptibility mapping (QSM). J. Magn. Reson. Imaging 44 (2), 426–432. [DOI] [PMC free article] [PubMed] [Google Scholar]
  58. Zhao B, Lu W, Hitchens TK, Lam F, Ho C, Liang ZP, 2015. Accelerated MR parameter mapping with low-rank and sparsity constraints. Magn. Reson. Med. 74 (2), 489–498. [DOI] [PMC free article] [PubMed] [Google Scholar]
