Published in final edited form as: Proc IEEE Int Symp Biomed Imaging, 2022 Apr 26. doi: 10.1109/ISBI52829.2022.9761497. Author manuscript; available in PMC 2022 May 13.

Data-Consistent non-Cartesian deep subspace learning for efficient dynamic MR image reconstruction

Zihao Chen 1,2, Yuhua Chen 1,2, Yibin Xie 1, Debiao Li 1,2, Anthony G Christodoulou 1,2

Abstract

Non-Cartesian sampling with subspace-constrained image reconstruction is a popular approach to dynamic MRI, but slow iterative reconstruction limits its clinical application. Data-consistent (DC) deep learning can accelerate reconstruction with good image quality, but has not been formulated for non-Cartesian subspace imaging. In this study, we propose a DC non-Cartesian deep subspace learning framework for fast, accurate dynamic MR image reconstruction. Four novel DC formulations are developed and evaluated: two gradient descent approaches, a directly solved approach, and a conjugate gradient approach. We applied a U-Net model with and without DC layers to reconstruct T1-weighted images for cardiac MR Multitasking (an advanced multidimensional imaging method), comparing our results to the iteratively reconstructed reference. Experimental results show that the proposed framework significantly improves reconstruction accuracy over the U-Net model without DC, while significantly accelerating the reconstruction over conventional iterative reconstruction.

Index Terms—: MRI reconstruction, Deep learning, Non-Cartesian, Subspace, Dynamic MRI

1. INTRODUCTION

Dynamic magnetic resonance imaging (MRI) can be used to evaluate organ motion, tissue dynamic contrast enhancement (DCE), and nuclear magnetic resonance (NMR) relaxation. Dynamic imaging is therefore vital for applications such as cardiac and cancer imaging, as well as for relaxometry in any organ. The slow speed of MRI often results in high undersampling of the acquired k-t space signals, motivating constrained subspace/low-rank [1] and/or compressed sensing [2] reconstruction alongside acquisition schemes such as randomized and/or non-Cartesian sampling. The specific combination of non-Cartesian acquisition and subspace reconstruction is central to many promising imaging frameworks such as MR fingerprinting [3–5], MR Multitasking [6], Extreme MRI [7], and GRASP-Pro [8, 9].

Subspace methods alleviate many of the computational challenges of dynamic imaging by efficiently modeling, reconstructing, and storing dynamic images in a low-dimensional subspace rather than in the time domain. However, the use of non-Cartesian sampling counteracts this computational efficiency: non-Cartesian Fourier encoding operators such as the nonuniform fast Fourier transform (NUFFT) [10] have no direct inverse, so non-Cartesian subspace imaging is still largely performed via iterative reconstruction. The resulting long reconstruction time limits practical adoption of these imaging frameworks.

In recent years, deep learning reconstruction methods have shown great advantage over iterative methods in reducing reconstruction time [11–13]. Data-consistency (DC) layers, or unrolled deep learning, utilize the acquired data to adjust the network’s output and have been shown to improve image quality and generalizability over purely training-data-driven deep-learning reconstruction [14–20]. The general DC problem has previously been formulated using gradient descent (GD) [16, 17, 19], density-compensated gradient descent (DGD) [20], or conjugate gradient (CG) [15, 18] approaches—and in the case of Cartesian encoding, as a coil-by-coil direct inversion layer [14, 15].

In the realm of subspace image reconstruction, Chen et al. have formulated non-Cartesian deep subspace learning without DC layers [21], and Sandino et al. [22, 23] have formulated DC deep subspace learning reconstruction for Cartesian trajectories using GD-DC or CG-DC layers. However, data-consistent non-Cartesian deep subspace learning remains a challenge: current non-Cartesian DC MRI strategies for dynamic imaging are formulated in k-t space [16, 18–20] rather than in the more computationally efficient subspace; GD-based DC layers are inefficient due to ill-conditioning of the non-Cartesian reconstruction problem [24]; and iterative CG-DC layers offer slower reconstruction.

In this study, we developed and evaluated four novel formulations of data-consistent non-Cartesian deep subspace learning image reconstruction: GD-DC, preconditioned gradient descent (PGD-DC), directly-solved data consistency (DS-DC), and CG-DC. Inspired by Toeplitz encoding models [25], the DS-DC approach relies on an invertible block-Toeplitz model of the combined forward/adjoint encoding operator, allowing a coil-wise closed-form solution of the non-Cartesian DC equation. We compared these DC layers to each other and to non-DC deep learning in the context of cardiac MR Multitasking [26], using a U-Net [27] model as the network module.

2. THEORY

2.1. Non-Cartesian subspace image reconstruction

In subspace reconstruction for non-Cartesian dynamic MRI, a dynamic image represented as a matrix $X \in \mathbb{C}^{N_x N_y \times T}$ is decomposed into a spatial factor $U \in \mathbb{C}^{N_x N_y \times L}$ and a temporal factor $\Phi \in \mathbb{C}^{L \times T}$ according to $X = U\Phi$. When the rows of $\Phi$ constitute an orthonormal basis, $U$ can be interpreted as coordinates within the $L$-dimensional subspace spanned by the rows of $\Phi$. A suitable $\Phi$ can often be quickly extracted from a subset of acquired data $b$ via PCA or SVD [1], or calculated a priori [28], depending on the application. The most time-consuming step of reconstruction is therefore to estimate $U$, typically by solving:

\[
\hat{U} = \arg\min_U \left\| b - A_\Phi(U) \right\|_2^2 + \lambda R(U) \quad (1)
\]
\[
A_\Phi(U) = \Omega\!\left( [F_{\mathrm{NU}} S U]\, \Phi \right) \quad (2)
\]

Here $F_{\mathrm{NU}}$ is the NUFFT, $S$ applies sensitivity maps, $\Omega$ is the k-t space undersampling operator, and $\lambda R(U)$ is a regularization term, often a sparse regularizer in order to leverage compressed sensing [29]. Conventionally, this minimization problem is solved by iterative methods such as the alternating direction method of multipliers (ADMM) or the fast iterative soft-thresholding algorithm (FISTA).
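To make the forward model concrete, the following is a minimal, self-contained sketch of Eq. (2) in Python/NumPy, with a brute-force non-uniform DFT standing in for the NUFFT; all sizes and variable names (nx, n_coils, etc.) are illustrative assumptions rather than the paper's implementation.

```python
# Toy sketch of the subspace forward model A_Phi(U) = Omega([F_NU S U] Phi), Eqs. (1)-(2).
import numpy as np

rng = np.random.default_rng(0)
nx = ny = 8                                   # tiny image grid
n_vox = nx * ny
n_coils, L, T, n_k = 4, 3, 50, 120            # coils, subspace rank, frames, k-points

# Voxel coordinates and a shared set of non-Cartesian k-space locations
xg, yg = np.meshgrid(np.arange(nx) - nx // 2, np.arange(ny) - ny // 2, indexing="ij")
r = np.stack([xg.ravel(), yg.ravel()], axis=1)            # (n_vox, 2)
k = rng.uniform(-0.5, 0.5, size=(n_k, 2))                 # (n_k, 2)
F_nu = np.exp(-2j * np.pi * (k @ r.T))                    # brute-force NUDFT, (n_k, n_vox)

S    = rng.standard_normal((n_coils, n_vox)) + 1j * rng.standard_normal((n_coils, n_vox))
Phi  = np.linalg.qr(rng.standard_normal((T, L)))[0].T     # orthonormal temporal basis, (L, T)
U    = rng.standard_normal((n_vox, L)) + 1j * rng.standard_normal((n_vox, L))
mask = rng.random((n_k, T)) < 0.2                         # k-t sampling operator Omega

def A_phi(U):
    """A_Phi(U): subspace coordinates -> undersampled multicoil k-t data."""
    out = np.empty((n_coils, n_k, T), dtype=complex)
    for c in range(n_coils):
        coil_U = S[c][:, None] * U        # apply sensitivity map S
        k_sub  = F_nu @ coil_U            # "NUFFT" of each subspace coefficient image
        out[c] = (k_sub @ Phi) * mask     # expand along Phi, then undersample
    return out

b = A_phi(U)   # simulated measurements; Eq. (1) seeks the U that explains b
```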

2.2. Non-Cartesian deep subspace learning

Deep learning frameworks instead use a feedforward neural network to reconstruct U. This can be done, for example, by passing an initial guess U0 through a convolutional neural network (CNN):

\[
U_{\mathrm{cnn}} = \mathrm{CNN}(U_0), \quad (3)
\]

where the network has been trained to produce an output Ucnn that resembles the solution to Eq. (1) [21].

This network output $U_{\mathrm{cnn}}$ can further pass through a data consistency (DC) layer that improves data fidelity by re-incorporating the measured k-space into the network’s reconstruction. We formulate the DC problem similarly to Eq. (1), replacing $R(U)$ with $\left\| U - U_{\mathrm{cnn}} \right\|_2^2$ in order to produce a CNN-regularized reconstruction:

\[
\hat{U} = \arg\min_U \left\| b - A_\Phi(U) \right\|_2^2 + \lambda \left\| U - U_{\mathrm{cnn}} \right\|_2^2. \quad (4)
\]

The solution can be expressed as:

\[
\hat{U} = \left( A_\Phi^* A_\Phi + \lambda I \right)^{-1} \left( A_\Phi^* b + \lambda U_{\mathrm{cnn}} \right), \quad (5)
\]

where the operator $A_\Phi^*$ is the conjugate transpose of $A_\Phi$.

However, the matrix inversion in Eq. (5) is difficult to compute analytically for non-Cartesian MRI. Time-consuming CG iterations could be used to solve Eq. (5), but would offset the reconstruction time advantages of using deep learning.

2.3. Non-Cartesian subspace DC layer with gradient descent methods

To avoid inverting $A_\Phi^* A_\Phi + \lambda I$, we can formulate the DC network in a gradient descent manner:

\[
U_{\mathrm{dc}} = U_{\mathrm{cnn}} - \alpha \left[ A_\Phi^* A_\Phi(U_{\mathrm{cnn}}) - A_\Phi^*(b) \right]. \quad (6)
\]

Eq. (6) subtracts the gradient of the data-fidelity term in Eq. (4) from the CNN output $U_{\mathrm{cnn}}$ to improve data fidelity.

$A_\Phi^* A_\Phi(U)$ is calculated according to:

\[
A_\Phi^* A_\Phi(U) = S^H F_{\mathrm{NU}}^H \left[ \Omega^* \Omega \left( F_{\mathrm{NU}} S U \Phi \right) \Phi^H \right]. \quad (7)
\]

A subspace kernel method can efficiently compute Eq. (7) by calculating $F_{\mathrm{NU}} S U$, right-multiplying the $L \times L$ kernels $[\Omega^*\Omega(\Phi)]\Phi^H$ for each unique k-space trajectory, and finally applying $S^H F_{\mathrm{NU}}^H$ [29]. This keeps all calculations within the $L$-dimensional subspace and avoids the larger memory usage of the time domain.
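As a rough numerical illustration of this kernel trick (a toy sketch with assumed sizes, not the paper's code), the snippet below checks that evaluating Eq. (7) with precomputed per-location $L \times L$ kernels matches the direct time-domain evaluation, and then takes a single GD-DC step as in Eq. (6).

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox, n_coils, L, T, n_k = 64, 4, 3, 50, 120   # assumed toy sizes

F_nu = np.exp(-2j * np.pi * rng.uniform(-0.5, 0.5, (n_k, n_vox)))   # stand-in "NUFFT"
S    = rng.standard_normal((n_coils, n_vox)) + 1j * rng.standard_normal((n_coils, n_vox))
Phi  = np.linalg.qr(rng.standard_normal((T, L)))[0].T               # orthonormal rows, (L, T)
U    = rng.standard_normal((n_vox, L)) + 1j * rng.standard_normal((n_vox, L))
mask = rng.random((n_k, T)) < 0.2                                   # k-t sampling Omega

def AhA_direct(U):
    """Eq. (7) evaluated through the full k-t domain."""
    acc = np.zeros_like(U)
    for c in range(n_coils):
        kt = (F_nu @ (S[c][:, None] * U)) @ Phi                     # (n_k, T)
        acc += np.conj(S[c])[:, None] * (F_nu.conj().T @ ((kt * mask) @ Phi.conj().T))
    return acc

# Precompute one L x L kernel per k-space location: K_n = Phi diag(mask_n) Phi^H
K = np.einsum('lt,nt,mt->nlm', Phi, mask.astype(float), Phi.conj())

def AhA_kernel(U):
    """Eq. (7) evaluated entirely within the L-dimensional subspace."""
    acc = np.zeros_like(U)
    for c in range(n_coils):
        k_sub = F_nu @ (S[c][:, None] * U)                          # (n_k, L)
        k_sub = np.einsum('nl,nlm->nm', k_sub, K)                   # apply per-location kernels
        acc += np.conj(S[c])[:, None] * (F_nu.conj().T @ k_sub)
    return acc

print(np.allclose(AhA_direct(U), AhA_kernel(U)))                    # True

# One GD-DC step (Eq. 6) on toy data, with an arbitrary illustrative step size alpha
def Ah(y):
    return sum(np.conj(S[c])[:, None] * (F_nu.conj().T @ ((y[c] * mask) @ Phi.conj().T))
               for c in range(n_coils))

b     = np.stack([((F_nu @ (S[c][:, None] * U)) @ Phi) * mask for c in range(n_coils)])
U_cnn = U + 0.05 * (rng.standard_normal(U.shape) + 1j * rng.standard_normal(U.shape))
U_dc  = U_cnn - 1e-4 * (AhA_kernel(U_cnn) - Ah(b))
```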

It was established in previous work that the non-Cartesian reconstruction problem is ill-conditioned due to nonuniform density, and that adding a preconditioner can accelerate convergence in gradient descent [24]. As such, we also formulate preconditioned gradient descent for non-Cartesian subspace DC as:

\[
U_{\mathrm{dc}} = U_{\mathrm{cnn}} - \alpha S^H P \left[ E_\Phi^* E_\Phi(S U_{\mathrm{cnn}}) - E_\Phi^*(b) \right] \quad (8)
\]
\[
E_\Phi(Y) = \Omega\!\left( [F_{\mathrm{NU}} Y]\, \Phi \right) \quad (9)
\]

Here $E_\Phi$ is the coil-wise encoding matrix, and $P$ is a preconditioner approximating the pseudoinverse of $F_{\mathrm{NU}}^H F_{\mathrm{NU}}$ by compensating its nonuniform weighting in k-space.
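For golden-angle radial sampling, one simple choice of such a preconditioner (described in the experiments below as a Cartesian k-space ramp) can be sketched as a $|k|$-weighted filter applied in Cartesian k-space; the exact ramp used here is an assumption for illustration, not the paper's implementation.

```python
import numpy as np

def radial_ramp_precondition(img):
    """Apply a |k| ramp in Cartesian k-space to roughly undo radial 1/|k| density."""
    nx, ny = img.shape
    kx = np.fft.fftfreq(nx)[:, None]
    ky = np.fft.fftfreq(ny)[None, :]
    ramp = np.sqrt(kx**2 + ky**2)
    ramp[0, 0] = np.min(ramp[ramp > 0])          # keep the DC term instead of zeroing it
    return np.fft.ifft2(ramp * np.fft.fft2(img))

img = np.random.default_rng(2).standard_normal((320, 320)) + 0j
preconditioned = radial_ramp_precondition(img)   # same shape, density-compensated
```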

2.4. Non-Cartesian subspace directly-solved DC layer with inverse Block Toeplitz method

Inspired by the Toeplitz method that can significantly accelerate the calculation of $E^*E(x) = F_{\mathrm{NU}}^H[\Omega^*\Omega(F_{\mathrm{NU}} x)]$ for static images [25], here we formulate a block-Toeplitz model for $E_\Phi^* E_\Phi$ which can be analytically inverted, opening the door to a direct solution of the non-Cartesian subspace DC problem. The static Toeplitz method models $E^*E$ as a linear shift-invariant system that performs a convolution and can therefore be represented by a Toeplitz matrix. This Toeplitz matrix can be represented as $E^*E = Z^H F^{-1} Q F Z$, where $F$ is the Cartesian FFT, $Z$ zero-pads to twice the image size in each spatial dimension to accommodate circular convolution boundaries, and where $Q$ is a diagonal matrix that performs a k-space multiplication derived from the FFT of the point spread function (PSF).

The block-Toeplitz model for $E_\Phi^* E_\Phi$ combines the Toeplitz model of non-Cartesian forward/adjoint encoding and the $L \times L$ subspace kernel concept to express $E_\Phi^* E_\Phi$ as $L \times L$ block-Toeplitz, with the $(i,j)$-th block of $E_\Phi^* E_\Phi$ taking the form:

\[
\left[ E_\Phi^* E_\Phi \right]_{i,j} = Z^H F^{-1} Q^{(i,j)} F Z, \quad (10)
\]

where $Q^{(i,j)}$ applies the k-space filter for that block. Equivalently, we can say that $E_\Phi^* E_\Phi(Y) = Z^H F^{-1} W(FZY)$, where $W(\cdot)$ right-multiplies an $L \times L$ kernel $W^{(n)}$ with elements $w_{ij}^{(n)} = q_{nn}^{(i,j)}$ at the $n$th of $2N_x \cdot 2N_y$ k-space locations.

Then we can consider a coil-wise DC equation

\[
\hat{Y} = \arg\min_Y \left\| E_\Phi(Y) - b \right\|_2^2 + \lambda \left\| Y - S U_{\mathrm{cnn}} \right\|_2^2 \quad (11)
\]

which has the solution

\[
\hat{Y} = \left( E_\Phi^* E_\Phi + \lambda I \right)^{-1} \left( E_\Phi^*(b) + \lambda S U_{\mathrm{cnn}} \right). \quad (12)
\]

The operator $E_\Phi^* E_\Phi + \lambda I$ can be directly inverted by regularized inversion of each $L \times L$ kernel $W^{(n)}$, i.e., as:

\[
\hat{Y} = Z^H F^{-1} (W + \lambda I)^{-1} \left( FZ \left[ E_\Phi^*(b) + \lambda S U_{\mathrm{cnn}} \right] \right) \quad (13)
\]

where the function $(W + \lambda I)^{-1}(\cdot)$ right-multiplies the $L \times L$ kernel $(W^{(n)} + \lambda I)^{-1}$ at the $n$th k-space location.

Then, the data-consistent spatial factor $U_{\mathrm{dc}}$ can be calculated by complex coil combination ($U_{\mathrm{dc}} = S^H \hat{Y}$). This method directly solves the non-Cartesian DC problem.
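The heart of this direct solve is the batched inversion of the $L \times L$ kernels. Below is a small sketch (toy shapes and random Hermitian kernels, not the paper's code) of how $(W^{(n)} + \lambda I)^{-1}$ can be applied at every k-space location of the 2×-oversampled grid in one vectorized call; the surrounding $FZ(\cdot)$ and $Z^H F^{-1}(\cdot)$ steps are ordinary FFT and zero-padding operations and are omitted here.

```python
import numpy as np

rng = np.random.default_rng(3)
L, n_loc, lam = 32, 2 * 16 * 2 * 16, 0.05        # subspace rank, 2Nx*2Ny toy grid, lambda

# Stacked L x L kernels W^(n); made Hermitian positive semidefinite here for realism
B = rng.standard_normal((n_loc, L, L)) + 1j * rng.standard_normal((n_loc, L, L))
W = B @ B.conj().transpose(0, 2, 1)

# R plays the role of the rows of FZ[E_Phi^*(b) + lambda * S U_cnn] at each k location
R = rng.standard_normal((n_loc, L)) + 1j * rng.standard_normal((n_loc, L))

# Right-multiplication by (W^(n) + lambda I)^{-1}: solve x_n (W_n + lam I) = r_n per location
W_reg   = W + lam * np.eye(L)                                  # broadcasts over n_loc
Y_hat_k = np.linalg.solve(W_reg.transpose(0, 2, 1), R[..., None])[..., 0]

print(Y_hat_k.shape)   # (n_loc, L): ready for the inverse FFT / unpadding step Z^H F^{-1}
```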

3. EXPERIMENTS

In this work, we evaluated three DC options described in the Theory section above: vanilla GD-DC based on Eq. (6), PGD-DC based on Eq. (8), and DS-DC based on the block-Toeplitz inversion in Eq. (13). We further evaluated a CG-DC layer based on Eq. (5) with 5 CG iterations.
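For reference, the CG-DC layer solves Eq. (5) iteratively. The following is a generic 5-iteration conjugate gradient sketch on a toy explicit operator (assumed sizes and data, not the paper's TensorFlow implementation), warm-started from the CNN output.

```python
import numpy as np

rng = np.random.default_rng(4)
n_meas, n_unknown, lam = 200, 64, 0.05
A      = rng.standard_normal((n_meas, n_unknown)) + 1j * rng.standard_normal((n_meas, n_unknown))
u_true = rng.standard_normal(n_unknown) + 1j * rng.standard_normal(n_unknown)
b      = A @ u_true
u_cnn  = u_true + 0.1 * (rng.standard_normal(n_unknown) + 1j * rng.standard_normal(n_unknown))

def normal_op(u):                       # (A^H A + lam I) u, the only operator CG needs
    return A.conj().T @ (A @ u) + lam * u

rhs = A.conj().T @ b + lam * u_cnn      # right-hand side of Eq. (5)
u   = u_cnn.copy()                      # warm start from the CNN output
r   = rhs - normal_op(u)
p   = r.copy()
rs  = np.vdot(r, r).real
for _ in range(5):                      # 5 CG iterations, as in the experiments
    Ap     = normal_op(p)
    alpha  = rs / np.vdot(p, Ap).real
    u     += alpha * p
    r     -= alpha * Ap
    rs_new = np.vdot(r, r).real
    p      = r + (rs_new / rs) * p
    rs     = rs_new

# On this toy problem the error versus u_true usually drops below that of u_cnn
print(np.linalg.norm(u - u_true) < np.linalg.norm(u_cnn - u_true))
```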

3.1. Datasets

All data were dynamic MR cardiac images acquired with a T1 MR Multitasking protocol [26] on three different 3T MRI scanners (MAGNETOM Verio, MAGNETOM Vida, and Biograph mMR; Siemens Healthcare, Erlangen, Germany) at the same center. The k-t space data were acquired with a continuous IR-FLASH sequence and golden-angle radial trajectories. Label images were reconstructed iteratively as described in [26] with L = 32, during which the temporal factors were generated from dictionaries and auxiliary data [6, 26]. This produced a multidimensional array of images at each combination of c = 20 cardiac phases, r = 6 respiratory phases, and τ = 344 T1-recovery timepoints. Thus, there are T = 20 × 6 × 344 = 41,280 temporal frames for each dynamic image. The image matrix size for each frame is 320 × 320, corresponding to a FOV of 540 × 540 mm² (twice the prescribed FOV of 270 × 270 mm²).

For deep subspace reconstruction, we directly fed spatial factors U rather than images X into the network. We concatenated the real and imaginary parts of U0’s and of labeled U’s as separate channels to preserve complex values, for network input and output sizes of 320 × 320 × 64. In total there were 120 dynamic image sets, in which 96 sets were used for training, 12 sets for validation and 12 sets for testing.
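As a small illustration of this channel layout (a sketch with assumed array names, not the paper's data pipeline), a complex spatial factor can be split into real and imaginary channels before the network and recombined afterwards:

```python
import numpy as np

L = 32
U = (np.random.default_rng(5).standard_normal((320, 320, L))
     + 1j * np.random.default_rng(6).standard_normal((320, 320, L)))

net_in = np.concatenate([U.real, U.imag], axis=-1)        # (320, 320, 64) network input

def to_complex(net_out):
    """Recombine a (320, 320, 64) real-valued output into a complex (320, 320, 32) factor."""
    return net_out[..., :L] + 1j * net_out[..., L:]

assert np.allclose(to_complex(net_in), U)
```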

3.2. Evaluation metrics

Since the networks were trained on spatial factors $U$ but the final dynamic images are calculated as $X = U\Phi$, we did comparisons both for $U$ and for the reconstructed dynamic images. The reference images for our comparisons were iteratively reconstructed using wavelet sparsity regularization and 20 iterations of an ADMM algorithm.

For spatial factor/subspace coordinates U, we used normalized root-mean-square error (NRMSE) to evaluate the networks. For dynamic images, peak signal-to-noise ratio (PSNR), structural similarity index (SSIM) and NRMSE were calculated from the reconstructed image sequences for the whole cardiac cycle (20 frames) at the end-expiration (EE) respiratory phase, and for inversion times corresponding to bright-blood and dark-blood contrast weighting (i.e., the two most clinically important qualitative image contrasts).
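For completeness, one common way to compute NRMSE and PSNR is sketched below; the paper does not spell out its exact normalization conventions, so these definitions are assumptions (SSIM is typically computed with an off-the-shelf implementation and is omitted here).

```python
import numpy as np

def nrmse(x, ref):
    """Root-mean-square error normalized by the RMS value of the reference."""
    return np.linalg.norm(x - ref) / np.linalg.norm(ref)

def psnr(x, ref):
    """Peak signal-to-noise ratio in dB, using the reference maximum as the peak."""
    mse = np.mean(np.abs(x - ref) ** 2)
    return 10 * np.log10(np.max(np.abs(ref)) ** 2 / mse)
```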

3.3. Experimental setup

All the proposed DC networks consist of a CNN block (U-Net) and a DC layer (Fig. 1), which were implemented in TensorFlow. The input $U_0$ was obtained by zero-filled regridding: $U_0 = S^H F_{\mathrm{NU}}^H D\, \Omega^*(b) \Phi^H$, where $D$ applies a density compensation filter.
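A tiny self-contained sketch of this regridded initial guess (toy operators and assumed sizes, reusing the brute-force NUDFT idea from the earlier snippets, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(7)
n_vox, n_coils, L, T, n_k = 64, 4, 3, 50, 120
F_nu = np.exp(-2j * np.pi * rng.uniform(-0.5, 0.5, (n_k, n_vox)))     # stand-in NUFFT
S    = rng.standard_normal((n_coils, n_vox)) + 1j * rng.standard_normal((n_coils, n_vox))
Phi  = np.linalg.qr(rng.standard_normal((T, L)))[0].T                 # (L, T)
mask = rng.random((n_k, T)) < 0.2                                     # Omega
b    = (rng.standard_normal((n_coils, n_k, T))
        + 1j * rng.standard_normal((n_coils, n_k, T))) * mask         # measured k-t data
d    = rng.uniform(0.5, 1.5, n_k)[:, None]                            # density weights D (toy)

# U_0 = S^H F_NU^H D Omega^*(b) Phi^H: density-compensate, project onto Phi, adjoint NUFFT, coil-combine
U0 = sum(np.conj(S[c])[:, None] * (F_nu.conj().T @ ((d * b[c]) @ Phi.conj().T))
         for c in range(n_coils))
print(U0.shape)   # (n_vox, L)
```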

Fig. 1. Network architecture in the proposed methods.

To compare the performance of different DC layers, we pretrained the U-Net block without a DC layer for 200 epochs (5 hours) on one Nvidia Titan RTX GPU with 24 GB of memory, and directly added the different DC layers at the end of the U-Net block without further training. The Adam optimizer and mean squared error (MSE) loss were used in training. The step size α in GD-DC and PGD-DC, as well as the regularization coefficient λ in DS-DC and CG-DC, were determined by choosing the best values for the validation set. The preconditioner in PGD-DC was a Cartesian k-space ramp which adjusted for radial k-space density. A single DC layer was used for each network to minimize reconstruction time.

In the comparison, the four proposed DC networks (GD-DC, PGD-DC, DS-DC and CG-DC) and the pretrained U-Net without a DC layer were compared with the iteratively reconstructed reference images on the testing set.

4. RESULTS

The spatial factor reconstruction times are shown in Table 1. GPU memory use of each DC layer was 1.6 GB. All the single-step DC models accelerated reconstruction by more than 50× compared to the 180 s iterative reconstruction.

Table 1.

NRMSE over spatial factor U among the testing set. Values in brackets are standard deviations. Best results among single-step DC models are bolded. Best results among all models are italic.

| Model | DC type | NRMSE | Inference time |
|---|---|---|---|
| Input U0 | No DC | 0.279 (0.096) | N/A |
| U-Net w/o DC | No DC | 0.169 (0.052) | 1.7 s |
| GD-DC | Single-step DC | 0.156 (0.053) | 3.5 s |
| PGD-DC | Single-step DC | 0.152 (0.050) | 3.5 s |
| DS-DC | Single-step DC | **0.135 (0.041)** | 3.5 s |
| CG-DC | Iterative DC | *0.117 (0.036)* | 22 s |

For quantitative comparison among the testing set, Table 1 shows the NRMSE of different models for the spatial factor U directly output by the network; Table 2 shows the PSNR, SSIM and NRMSE of different models for reconstructed dynamic images for bright blood and dark blood contrast weightings. Since each dynamic image contains 20 cardiac frames, we have 12×20=240 testing images for each contrast in Table 2. CG-DC had the best quantitative metrics (p<0.001), but its computation time was >6x that of the single-step DC layers. Among the single-step DC layers, DS-DC performed best (p<0.001), followed by PGD-DC, GD-DC and U-Net w/o DC, for both the spatial factor and dynamic images.

Table 2.

Quantitative metrics over dynamic images among the testing set. Values in brackets are standard deviations. Best results among single-step DC models are bolded. Best results among all models are italic.

| Model | DC type | PSNR, bright blood | PSNR, dark blood | SSIM, bright blood | SSIM, dark blood | NRMSE, bright blood | NRMSE, dark blood |
|---|---|---|---|---|---|---|---|
| U-Net w/o DC | No DC | 33.67 (2.95) | 36.89 (2.60) | 0.854 (0.057) | 0.915 (0.024) | 0.174 (0.054) | 0.108 (0.032) |
| GD-DC | Single-step DC | 34.14 (3.03) | 37.44 (2.69) | 0.873 (0.053) | 0.926 (0.024) | 0.165 (0.053) | 0.102 (0.032) |
| PGD-DC | Single-step DC | 34.31 (3.00) | 37.76 (2.44) | 0.873 (0.051) | 0.927 (0.022) | 0.163 (0.058) | 0.098 (0.028) |
| DS-DC | Single-step DC | **35.37 (2.87)** | **38.68 (2.41)** | **0.888 (0.040)** | **0.933 (0.017)** | **0.144 (0.048)** | **0.088 (0.024)** |
| CG-DC | Iterative DC | *35.73 (2.83)* | *39.25 (2.29)* | *0.897 (0.040)* | *0.945 (0.014)* | *0.138 (0.044)* | *0.082 (0.022)* |

Fig. 2 shows an example testing case of the T1-weighted images at EE respiratory phase and end-diastolic cardiac phase for bright blood and dark blood contrasts. The visual comparison is consistent with the quantitative results: CG-DC and DS-DC have the smallest errors, followed by PGD-DC, GD-DC and U-Net w/o DC. The error maps of CG-DC and DS-DC have fewer structural features than those of other models, implying that CG-DC and DS-DC provided less systematic error.

Fig. 2. Example T1-weighted images from iterative reconstruction and the different networks with corresponding error maps. (A): bright blood contrast images; (B): dark blood contrast images. Top row: example images; bottom row: corresponding error maps.

5. DISCUSSION & CONCLUSIONS

In this study, we developed a DC non-Cartesian deep subspace learning framework to accelerate dynamic MR reconstruction and proposed four DC approaches: GD-DC, PGD-DC, DS-DC and CG-DC. All the deep learning models except CG-DC accelerated reconstruction by more than 50x over iterative reconstruction. All DC models outperformed the naïve U-Net w/o DC in quantitative comparisons. CG-DC had the least error but longest inference time (22 s), while DS-DC provided the best accuracy amongst the fast (3.5 s) single-step DC layers. CG-DC may be desirable when imaging a single slice with one inference, whereas DS-DC may be more attractive when imaging multiple slices.

All the subspace DC formulations substantially reduced the memory required for large-scale dynamic image reconstruction compared to direct implementation of previously proposed DC layers in the time domain. A DC layer in k-t space here would have required operating on the 8,200 readout time points rather than the L = 32 entries in U, requiring 410 GB of memory instead of our 1.6 GB.
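As a rough sanity check of these figures (my own arithmetic, not taken from the paper):

\[
\frac{8200}{32} \approx 256, \qquad 256 \times 1.6\ \mathrm{GB} \approx 410\ \mathrm{GB}.
\]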

Although our DC layers were applied in subspaces here, the proposed inverse block-Toeplitz DS-DC can be readily adapted to time-domain or static imaging to improve the efficiency of general non-Cartesian deep learning.

In this work, we chose U-Net as our CNN block for simplicity. The proposed DC layers can be easily added to other advanced CNN blocks to further improve their reconstruction quality. This study also only evaluated a single pre-trained CNN+DC block, but multiple CNN+DC blocks and end-to-end training may offer even further improvement.

In conclusion, the proposed DC deep subspace learning framework significantly improves reconstruction accuracy over the plain U-Net model, while significantly accelerating reconstruction over conventional iterative algorithms. Clinical studies are needed to evaluate the diagnostic accuracy and clinical value of the proposed deep learning reconstruction model.

ACKNOWLEDGMENTS

This work was supported by NIH R01 EB028146.

Footnotes

COMPLIANCE WITH ETHICAL STANDARDS

Informed consent was obtained for all human subjects in accordance with an institutional review board protocol at Cedars-Sinai Medical Center.

REFERENCES

[1] Liang Z-P, "Spatiotemporal imaging with partially separable functions," Proc IEEE Int Symp Biomed Imaging, pp. 988–991, 2007.
[2] Lustig M, Donoho D, and Pauly JM, "Sparse MRI: The application of compressed sensing for rapid MR imaging," Magn Reson Med, vol. 58, no. 6, pp. 1182–1195, 2007.
[3] Ma D et al., "Magnetic resonance fingerprinting," Nature, vol. 495, no. 7440, pp. 187–192, 2013.
[4] Zhao B et al., "Improved magnetic resonance fingerprinting reconstruction with low-rank and subspace modeling," Magn Reson Med, vol. 79, no. 2, pp. 933–942, 2018.
[5] Assländer J, Cloos MA, Knoll F, Sodickson DK, Hennig J, and Lattanzi R, "Low rank alternating direction method of multipliers reconstruction for MR fingerprinting," Magn Reson Med, vol. 79, no. 1, pp. 83–96, 2018.
[6] Christodoulou AG et al., "Magnetic resonance multitasking for motion-resolved quantitative cardiovascular imaging," Nature Biomed Eng, vol. 2, no. 4, pp. 215–226, 2018.
[7] Ong F et al., "Extreme MRI: Large-scale volumetric dynamic imaging from continuous non-gated acquisitions," Magn Reson Med, vol. 84, no. 4, pp. 1763–1780, 2020.
[8] Feng L, Wen Q, Huang C, Tong A, Liu F, and Chandarana H, "GRASP-Pro: imProving GRASP DCE-MRI through self-calibrating subspace-modeling and contrast phase automation," Magn Reson Med, vol. 83, no. 1, pp. 94–108, 2020.
[9] Feng L et al., "Magnetization-prepared GRASP MRI for rapid 3D T1 mapping and fat/water-separated T1 mapping," Magn Reson Med, vol. 86, no. 1, pp. 97–114, 2021.
[10] Fessler J and Sutton B, "Nonuniform fast Fourier transforms using min-max interpolation," IEEE Trans Signal Process, vol. 51, no. 2, pp. 560–574, 2003.
[11] Wang S et al., "Accelerating magnetic resonance imaging via deep learning," Proc IEEE Int Symp Biomed Imaging, pp. 514–517, 2016.
[12] Yang G et al., "DAGAN: Deep de-aliasing generative adversarial networks for fast compressed sensing MRI reconstruction," IEEE Trans Med Imaging, vol. 37, no. 6, pp. 1310–1321, 2017.
[13] Zhu B, Liu JZ, Cauley SF, Rosen BR, and Rosen MS, "Image reconstruction by domain-transform manifold learning," Nature, vol. 555, no. 7697, pp. 487–492, 2018.
[14] Schlemper J, Caballero J, Hajnal JV, Price AN, and Rueckert D, "A deep cascade of convolutional neural networks for dynamic MR image reconstruction," IEEE Trans Med Imaging, vol. 37, no. 2, pp. 491–503, 2017.
[15] Aggarwal HK, Mani MP, and Jacob M, "MoDL: Model-based deep learning architecture for inverse problems," IEEE Trans Med Imaging, vol. 38, no. 2, pp. 394–405, 2018.
[16] Malavé MO et al., "Reconstruction of undersampled 3D non-Cartesian image-based navigators for coronary MRA using an unrolled deep learning model," Magn Reson Med, vol. 84, no. 2, pp. 800–812, 2020.
[17] Sandino CM, Lai P, Vasanawala SS, and Cheng JY, "Accelerating cardiac cine MRI using a deep learning-based ESPIRiT reconstruction," Magn Reson Med, vol. 85, no. 1, pp. 152–167, 2021.
[18] Kofler A, Haltmeier M, Schaeffter T, and Kolbitsch C, "An end-to-end-trainable iterative network architecture for accelerated radial multi-coil 2D cine MR image reconstruction," Med Phys, vol. 48, no. 5, pp. 2412–2425, 2021.
[19] Zhang Y, She H, and Du YP, "Dynamic MRI of the abdomen using parallel non-Cartesian convolutional recurrent neural networks," Magn Reson Med, vol. 86, no. 2, pp. 964–973, 2021.
[20] Ramzi Z, Starck J-L, and Ciuciu P, "Density compensated unrolled networks for non-Cartesian MRI reconstruction," Proc IEEE Int Symp Biomed Imaging, pp. 1443–1447, 2021.
[21] Chen Y, Shaw JL, Xie Y, Li D, and Christodoulou AG, "Deep learning within a priori temporal feature spaces for large-scale dynamic MR image reconstruction: Application to 5-D cardiac MR Multitasking," Med Image Comput Comput Assist Interv, pp. 495–504, 2019.
[22] Sandino CM, Ong F, and Vasanawala SS, "Deep subspace learning: Enhancing speed and scalability of deep learning-based reconstruction of dynamic imaging data," Proc Int Soc Magn Reson Med, 2020.
[23] Sandino CM, Ong F, Wang K, Lustig M, and Vasanawala SS, "DSLR+: Enhancing deep subspace learning reconstruction for high-dimensional MRI," Proc Int Soc Magn Reson Med, 2021.
[24] Ong F, Uecker M, and Lustig M, "Accelerating non-Cartesian MRI reconstruction convergence using k-space preconditioning," IEEE Trans Med Imaging, vol. 39, no. 5, pp. 1646–1654, 2019.
[25] Baron CA, Dwork N, Pauly JM, and Nishimura DG, "Rapid compressed sensing reconstruction of 3D non-Cartesian MRI," Magn Reson Med, vol. 79, no. 5, pp. 2685–2692, 2018.
[26] Shaw JL et al., "Free-breathing, non-ECG, continuous myocardial T1 mapping with cardiovascular magnetic resonance multitasking," Magn Reson Med, vol. 81, no. 4, pp. 2450–2463, 2019.
[27] Ronneberger O, Fischer P, and Brox T, "U-net: Convolutional networks for biomedical image segmentation," Med Image Comput Comput Assist Interv, pp. 234–241, 2015.
[28] Huang C, Graff CG, Clarkson EW, Bilgin A, and Altbach MI, "T2 mapping from highly undersampled data by reconstruction of principal component coefficient maps using compressed sensing," Magn Reson Med, vol. 67, no. 5, pp. 1355–1366, 2012.
[29] Zhao B, Haldar JP, Christodoulou AG, and Liang Z-P, "Image reconstruction from highly undersampled (k,t)-space data with joint partial separability and sparsity constraints," IEEE Trans Med Imaging, vol. 31, no. 9, pp. 1809–1820, 2012.
