Author manuscript; available in PMC: 2022 Apr 1.
Published in final edited form as: NMR Biomed. 2020 Oct 15;35(4):e4416. doi: 10.1002/nbm.4416

Rapid MR relaxometry using deep learning: An overview of current techniques and emerging trends

Li Feng 1, Dan Ma 2, Fang Liu 3
PMCID: PMC8046845  NIHMSID: NIHMS1644150  PMID: 33063400

Abstract

Quantitative mapping of MR tissue parameters such as the spin-lattice relaxation time (T1), the spin-spin relaxation time (T2), and the spin-lattice relaxation time in the rotating frame (T1ρ), referred to as MR relaxometry in general, has demonstrated improved assessment in a wide range of clinical applications. Compared with conventional contrast-weighted (eg T1-, T2-, or T1ρ-weighted) MRI, MR relaxometry provides increased sensitivity to pathologies and delivers important information that can be more specific to tissue composition and microenvironment. The rise of deep learning in the past several years has been revolutionizing many aspects of MRI research, including image reconstruction, image analysis, and disease diagnosis and prognosis. Although deep learning has also shown great potential for MR relaxometry and quantitative MRI in general, this research direction has been much less explored to date. The goal of this paper is to discuss the applications of deep learning for rapid MR relaxometry and to review emerging deep-learning-based techniques that can be applied to improve MR relaxometry in terms of imaging speed, image quality, and quantification robustness. The paper comprises an introduction and four further sections. Section 2 summarizes the imaging models of quantitative MR relaxometry. In Section 3, we review existing “classical” methods for accelerating MR relaxometry, including state-of-the-art spatiotemporal acceleration techniques, model-based reconstruction methods, and efficient parameter generation approaches. Section 4 then presents how deep learning can be used to improve MR relaxometry and how it is linked to conventional techniques. The final section concludes the review by discussing the promise and existing challenges of deep learning for rapid MR relaxometry and potential solutions to address these challenges.

Keywords: artificial intelligence, deep learning, image reconstruction, MR relaxometry, parameter mapping, quantitative MRI

1 |. INTRODUCTION

MRI is a diverse and powerful imaging modality, with a broad range of applications both in clinical diagnosis and in basic scientific research.1,2 Compared with other cross-sectional imaging modalities such as computed tomography (CT) or positron emission tomography (PET), MRI offers superior soft-tissue characterization and more flexible contrast mechanisms without radiation exposure. These unique advantages of MRI allow acquisitions of functional, hemodynamic, and metabolic information in addition to high-spatial-resolution anatomical images for a comprehensive examination.2 However, despite an essential role in routine clinical diagnosis, the day-to-day use of MRI today is substantially limited to the qualitative assessment of contrast-weighted images, which are created based on the variation of underlying MR tissue parameters (eg T1, T2) across different types of tissue.3 The changes in these tissue parameters in lesions typically result in hyper-intense or hypo-intense features, thus generating useful information for routine clinical diagnosis.4,5

MRI also allows quantitative measurements of inherent tissue T1 and T2 values, which are referred to as T1/T2 MR relaxometry or T1/T2 mapping. The estimation of spin-lattice relaxation in the rotating frame (T1ρ)6,7 (T1ρ mapping) is also performed in some applications.8 Since the early history of MRI, there has long been an interest in the use of quantitative MR relaxometry to gain deeper insights into the disease environment.9,10 The increased clinical value of MR relaxometry has been widely documented in the diagnosis, staging, evaluation, and monitoring of various human diseases, including neurocognitive disorders,11–15 neurodegeneration,16–19 cancer,20–23 myocardial and cardiovascular abnormalities,24–27 degenerative musculoskeletal diseases,28–32 and hepatic and pulmonary diseases.33,34 Compared with conventional T1-weighted or T2-weighted images, MR relaxometry provides increased sensitivity to different diseases, which could enable early identification of pathologies.10 It also delivers information that can be more specific to tissue composition and microenvironment.10,35–38 Meanwhile, MR relaxometry is also more robust to surface coil effects, which may yield non-uniform signals unrelated to pathology and thus hinder clinical interpretation of conventional qualitative images. However, well known limitations of MR relaxometry include long scan times due to the need for repeated acquisitions with varying sampling parameters, cumbersome post-processing,39–42 and sensitivity to different system imperfections.43,44 For example, to estimate the T2 value of an object, multiple images of the object first need to be acquired with varying T2-decay contrast (eg different echo times (TE)) for subsequent T2 parameter fitting, leading to a several-fold increase in scan time compared with conventional T2-weighted imaging.39 A post-processing step is then performed to fit the acquired image series to a T2 signal decay model so that corresponding T2 values in selected regions of interest (ROIs) or a pixel-by-pixel T2 map can be generated.39,42 In addition, pre-calibration steps are sometimes needed to verify that the prescribed imaging protocol (eg the flip angle (FA)) performs as expected, so that accurate and precise parameters can be generated.45–47 These challenges and underlying complexity can all lead to non-reproducible performance of MR relaxometry and can significantly restrict its routine clinical implementation and ultimate clinical translation.48,49

The past two decades have seen remarkable advances in MR relaxometry in terms of scan times, quality, and robustness.38 In particular, the imaging speed of MRI has been dramatically improved with faster imaging sequences, more efficient sampling trajectories, better gradient systems, and coil arrays with an increased number of elements. In the meantime, there has been an explosive growth of techniques to reconstruct undersampled MR data, from parallel imaging50–56 to different spatiotemporal (k-t) acceleration techniques (including k-t parallel imaging and constrained k-t reconstruction methods).57–66 Many of these techniques have been successfully demonstrated for rapid MR relaxometry with improved imaging performance.67–72 In addition, MR relaxometry model-based reconstruction methods (simply referred to as model-based reconstruction hereafter) have also been proposed to embed corresponding parameter fitting models into iterative reconstruction for direct estimation of MR parameters from acquired k-space.73–76 This synergistic reconstruction strategy combines traditionally separated imaging and parameter estimation steps into a single joint process, leading to significantly increased imaging efficiency. Moreover, the introduction of MR fingerprinting (MRF)77 has further disrupted the way in which traditional MR relaxometry is performed, allowing an efficient generation of multiple MR parameters from a single acquisition. All of these efforts have resulted in improved imaging speed and performance that were previously inaccessible in MR relaxometry, and some of these methods have been extensively optimized and have seen early clinical translation for routine evaluation.

The recent rise of deep learning78 has attracted substantial attention in the MRI community and has been revolutionizing many aspects of MRI research, including image reconstruction,79–83 image analysis and processing,84–86 and image-based disease diagnosis and prognosis.86–88 Although the application of deep learning in quantitative MRI has been less explored compared with other techniques, a number of early studies have recently shown its great promise to improve MR relaxometry in terms of speed, efficiency, and quality. The goal of this paper is to discuss the potential application of deep learning for MR relaxometry, to review emerging deep-learning-based techniques that have been developed for MR relaxometry, and to highlight future directions. The remainder of the paper consists of four sections. In Section 2, we summarize and give a brief overview of the imaging models for MR relaxometry. In Section 3, we review current classical accelerated imaging methods for rapid MR relaxometry, and we mainly focus on techniques for reconstructing undersampled MR data towards parameter mapping. These techniques include state-of-the-art k-t acceleration approaches, model-based reconstruction methods, and novel methods for efficient generation of accurate MR parameters. In Section 4, we present how deep learning can be applied to improve rapid MR relaxometry with specific examples and how the use of deep learning is linked to conventional methods. In the last section, we illustrate the advantages and existing challenges of applying deep learning for rapid MR relaxometry, discuss potential solutions to address these challenges, and highlight potential directions to further improve their synergy. For simplicity and a more focused scope, this review paper is focused on the rapid mapping of T1, T2, and T1ρ only (to which MR relaxometry typically refers), but it should be noted that these techniques could also be generalized to the quantification of other MR parameters.

2 |. MR RELAXOMETRY: THE IMAGING MODEL

2.1 |. Data acquisition and image reconstruction

Standard MR relaxometry involves acquisitions of a series of contrast-weighted images on the same object with varying imaging parameters and contrast, followed by fitting the signal evolution of each image pixel (or an ROI) across the dynamic/parameter dimension to a specific MR relaxometry model for generating corresponding parameters of interest. We begin with the MR forward model for acquiring MR data of a 2D + time dynamic image series, which can be written as

s(k_x, k_y, t) = \iint d(x, y, t) \, e^{-i 2\pi (k_x x + k_y y)} \, dx \, dy.    (1)

Here, d denotes the dynamic image series to be acquired with varying contrast along the parameter dimension (size = n_x × n_y × n_p across the spatial dimensions and the parameter dimension). s denotes the corresponding dynamic k-space with the same size. k_x and k_y are the spatial-frequency variables in k-space. x and y represent the coordinates in the image domain, and t represents the dynamic position along the parameter dimension. This signal equation can be extended into 3D by adding extra phase-encoding along the slice or partition dimension. With proper discretization, Equation 1 can be rewritten in matrix notation as

s = F d    (2)

where F denotes the fast Fourier transform (FFT) operation that transforms dynamic images into dynamic k-space. When the sampling of k-space satisfies the Nyquist rate (ie the sampling frequency is at least twice the maximum signal frequency, which is fulfilled when every k-space location is sampled, also known as full sampling), image reconstruction can be performed by simply applying an inverse Fourier transform to the acquired k-space under perfect imaging conditions (eg in the absence of B0/B1 inhomogeneities):

\tilde{d} = F^H s    (3)

where d̃ denotes the reconstructed dynamic images, which can later be fitted to a signal model (see sections below) to generate quantitative parameters of interest.

Since MR relaxometry typically involves acquisitions of a dynamic image series, it requires much longer scan times compared with conventional qualitative contrast-weighted acquisitions that normally produce static images. As a result, accelerated imaging methods are usually needed to speed up the acquisition of dynamic images for quantitative parameter mapping. While the development of fast imaging sequences and more efficient imaging trajectories has been a topic of continued interest, the imaging speed of MRI is fundamentally restricted by its sequential acquisition nature. As a result, undersampling (by skipping certain k-space measurements) remains a more effective way of accelerating MR data acquisitions. When undersampling is applied, Equation 2 is further extended by incorporating an undersampling operator (Λ) into the encoding operator:

s = \Lambda F d.    (4)

Since the Nyquist sampling rate is no longer satisfied in this scenario, more advanced reconstruction algorithms beyond a simple FFT are needed for image reconstruction. These techniques include simple dynamic view-sharing methods,66,89–91 various parallel imaging methods,50–53,56,92 temporal parallel imaging and k-t acceleration approaches,54,55,57–59,66 and different constrained reconstruction strategies such as compressed sensing60–64,93,94 or low-rank-based methods.65,68,95 Meanwhile, the MRF framework represents another direction of MR relaxometry to generate multiple quantitative parameters simultaneously without the need to reconstruct clean dynamic images.77 A review of these classical techniques will be the main focus of discussion in the next section.
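To make the forward model concrete, the sketch below implements Equations 2-4 for a single-coil Cartesian acquisition in NumPy. The array sizes, the regular 4-fold undersampling pattern, and the function names are illustrative assumptions rather than part of any published implementation.

```python
import numpy as np

def forward_model(d, mask=None):
    """Equations 2/4: map a dynamic image series d (nx, ny, np) to k-space.
    If an undersampling mask (same shape, 0/1) is given, apply Lambda as well."""
    s = np.fft.fft2(d, axes=(0, 1))          # F: 2D FFT for every parametric frame
    if mask is not None:
        s = s * mask                         # Lambda: discard non-acquired samples
    return s

def zero_filled_recon(s):
    """Equation 3: inverse FFT of (fully sampled or zero-filled) k-space."""
    return np.fft.ifft2(s, axes=(0, 1))      # F^H, up to scaling

# toy example: 4-fold undersampling along ky for every parametric frame
nx, ny, nparam = 128, 128, 8
d = np.random.rand(nx, ny, nparam) + 0j      # stand-in dynamic image series
mask = np.zeros((nx, ny, nparam))
mask[:, ::4, :] = 1                          # keep every 4th phase-encoding line
s_under = forward_model(d, mask)
d_tilde = zero_filled_recon(s_under)         # aliased images; needs advanced recon
```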

2.2 |. MR parameter fitting

Given the reconstructed dynamic image series, generation of MR parameters (MR parameter fitting) can be performed by fitting the image series to a relevant signal model using least-squares minimization, which can be described as

\tilde{p} = \arg\min_p \left\| M(p) - \tilde{d} \right\|_2^2    (5)

where M and p = [p_1, p_2, …, p_n] represent the selected signal model (eg T1 recovery or T2 decay) and the corresponding parameters to be estimated, respectively. The fitting model is highly dependent on the sequence design and the parameters of interest. For example, a multiple-echo sequence (eg a turbo spin echo sequence) or a T2-prepared sequence with different preparation lengths can be used for T2 mapping, together with an exponential T2 decay model.96 For T1 mapping, an inversion recovery-prepared sequence97–99 or a steady-state gradient echo sequence with variable flip angles (VFA)100,101 can be used to capture the T1 recovery rate to generate T1 values.102 For T1ρ mapping, a spin-lock-prepared sequence103–105 with different preparation lengths is typically implemented to capture the decay rate of locked magnetization in the rotating frame.6 Recent studies have also suggested that a model derived from the Bloch equations can represent the T2 signal decay more accurately by better accounting for system imperfections.44 Moreover, one can also implement a sequence that is sensitive to different tissue parameters simultaneously (see MRF below) and design a multiparametric model based on the Bloch equations.77 A discussion of specific sequences, the selection of models, and the accuracy of different models is beyond the scope of this study. The key point to deliver here is that, no matter which model is used, the fitting process generally follows Equation 5 to generate parameters of interest. This traditional two-step MR parameter mapping framework, including one step to acquire and reconstruct dynamic multi-contrast images and another step for parameter fitting, as shown in Figure 1A, is widely employed in a variety of studies.67–72
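As a concrete illustration of Equation 5, the following sketch performs pixel-by-pixel least-squares fitting of a mono-exponential T2 decay model with SciPy. The echo times, initial guesses, and handling of failed fits are illustrative assumptions, not a specific published protocol.

```python
import numpy as np
from scipy.optimize import curve_fit

def t2_decay(te, s0, t2):
    """Mono-exponential T2 decay model M(p) for a multi-echo acquisition."""
    return s0 * np.exp(-te / t2)

def fit_t2_map(images, te):
    """Pixel-wise least-squares fit (Equation 5).
    images: magnitude image series of shape (nx, ny, n_echoes); te: echo times in ms."""
    nx, ny, _ = images.shape
    t2_map = np.zeros((nx, ny))
    for ix in range(nx):
        for iy in range(ny):
            signal = images[ix, iy, :]
            try:
                popt, _ = curve_fit(t2_decay, te, signal,
                                    p0=(signal[0], 50.0), maxfev=2000)
                t2_map[ix, iy] = popt[1]
            except RuntimeError:
                t2_map[ix, iy] = 0.0          # fit failed (eg background pixel)
    return t2_map
```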

FIGURE 1.

A, Standard MR parameter mapping typically comprises two separate steps: one step to generate a dynamic multi-contrast image series and a second step to fit the dynamic images to a signal model to generate parameters of interest. B, MR parameter mapping can also be performed by combining the two separate steps into a model-based reconstruction framework, in which MR parameter maps can be directly estimated from undersampled dynamic k-space. A variable-density Cartesian undersampling scheme at a 4-fold acceleration is demonstrated in this schematic example

3 |. STATE-OF-THE-ART METHODS FOR RAPID MR RELAXOMETRY

This section briefly reviews the current state-of-the-art methods for reconstructing dynamic MR relaxometry images from undersampled data and for generating relevant MR parameters based on the imaging models described in the previous section.

3.1 |. Image reconstruction from accelerated MR relaxometry data

3.1.1 |. Parallel MRI

Multiple-coil arrays are widely used in modern MR scanners and enable the reduction of scan times by skipping certain k-space measurements, a technique known as parallel MRI that is used in most clinical applications.51,52,54,55,106 Mathematically, the undersampled signal equation described in Equation 4 can be adapted with additional coil sensitivity encoding as

s = \Lambda F C d = E d    (6)

where C = [C_1, C_2, ⋯, C_j] represents the coil sensitivities, which can be pre-estimated or self-calibrated, and E is the encoding matrix, which combines the undersampling operator, Fourier encoding, and coil encoding. During the reconstruction process, parallel MRI aims to unfold aliased undersampled images (as in the sensitivity encoding (SENSE)-type methods51) or fill in the missing k-space data (as in the generalized autocalibrating partial parallel acquisition (GRAPPA)-type methods52) using data simultaneously acquired with multiple coils. The reconstruction strategy can be selected based on the specific application and the way in which coil sensitivity maps are generated. When coil sensitivity maps are available, the reconstruction of Equation 6 can be performed by minimizing the following least-squares error in a generalized formalism53:

\tilde{d} = \arg\min_d \left\| E d - s \right\|_2^2.    (7)

Depending on the sampling schemes, the solution of Equation 7 can be found by computing the Moore-Penrose pseudoinverse directly51 or can be solved with iterative algorithms (eg the gradient descent algorithm) in a more generalized case.107
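The sketch below illustrates one common iterative route to Equation 7: a conjugate gradient solve of the normal equations E^H E d = E^H s for a single dynamic frame, assuming pre-estimated coil sensitivity maps and a Cartesian undersampling mask. It is a minimal example under those assumptions rather than a faithful reimplementation of any particular SENSE product.

```python
import numpy as np

def E(d, coils, mask):
    """Encoding operator of Equation 6: coil weighting, 2D FFT, undersampling."""
    return mask * np.fft.fft2(coils * d[None, ...], axes=(-2, -1))

def EH(s, coils, mask):
    """Adjoint of E: zero-fill, inverse FFT, coil combination."""
    return np.sum(np.conj(coils) * np.fft.ifft2(mask * s, axes=(-2, -1)), axis=0)

def cg_sense(s, coils, mask, n_iter=20):
    """Solve Equation 7 with conjugate gradient on the normal equations E^H E d = E^H s.
    s: undersampled multicoil k-space (ncoils, nx, ny); coils: sensitivity maps."""
    b = EH(s, coils, mask)
    d = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs_old = np.vdot(r, r)
    for _ in range(n_iter):
        Ap = EH(E(p, coils, mask), coils, mask)
        alpha = rs_old / np.vdot(p, Ap)
        d = d + alpha * p
        r = r - alpha * Ap
        rs_new = np.vdot(r, r)
        if np.abs(rs_new) < 1e-10:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return d
```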

Parallel MRI was introduced more than two decades ago and has remained a cornerstone of most routine clinical examinations. However, the maximum acceleration that can be achieved with parallel imaging alone is fundamentally limited by the number of coil elements and the design of coil arrays, and it is ultimately restricted by electrodynamic principles.51,53 Nevertheless, as shown in the following subsections, parallel MRI can be synergistically combined with other more advanced image reconstruction methods for better reconstruction performance.

3.1.2 |. Constrained reconstruction

Additional regularizations can be incorporated into the parallel MRI framework to further increase acceleration rates and/or improve reconstruction performance.62,64,93,94,108–110 The incorporation of additional constraints inherently changes the weighting of competing considerations in the reconstruction problem and can result in more stable solutions (eg better suppression of artifacts/noise). In general, the combination of parallel imaging with constrained reconstruction can be represented by

\tilde{d} = \arg\min_d \left\| E d - s \right\|_2^2 + \lambda R(d)    (8)

where R denotes a regularization (sometimes there can be two or more regularizers) enforced on the dynamic relaxometry images to be reconstructed, and λ represents a weighting parameter to control the balance between data consistency (the left-hand term) and promotion of regularization (the right-hand term).

Among many regularizations that have been proposed for image reconstruction, ℓ1-norm regularization, which is the basis of compressed sensing theory,93,111,112 has received considerable attention and interest and has been extensively applied to accelerate MR relaxometry to exploit temporal image sparsity.60,61,67,69 The ℓ1-norm regularization can also be replaced by a low-rank constraint, which is another popular reconstruction scheme commonly applied for rapid MR relaxometry.65,68,113 It can be further modified to enforce a so-called subspace constraint,114 which has received substantial interest in dynamic MRI reconstruction and has demonstrated superior performance to standard ℓ1-norm constrained or low-rank-constrained reconstruction in many dynamic MRI studies.70–72,95,115–117

Adding one or more regularizations (Equation 8) often makes the reconstruction problem non-linear, and thus an iterative reconstruction algorithm is needed. This prolongs reconstruction time compared with linear reconstruction (eg parallel MRI reconstruction), and the reconstruction can take from a few minutes to a few hours. When sparsity is exploited in reconstruction, incoherent undersampling, such as random Cartesian undersampling93 or non-Cartesian undersampling,94 is usually implemented to fulfill the incoherence requirement of compressed sensing theory.
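A minimal sketch of Equation 8 with an ℓ1 constraint is shown below, using the iterative soft-thresholding algorithm (ISTA) with a temporal Fourier transform as the sparsifying transform, which is one common choice for dynamic relaxometry data. The operators E and EH are assumed to be supplied by the caller (eg closures wrapping the encoding operators of the previous sketch applied to all frames), and the step size and regularization weight are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the l1 norm (complex soft-thresholding)."""
    mag = np.abs(x)
    return np.where(mag > tau, (1 - tau / (mag + 1e-12)) * x, 0)

def ista_recon(s, E, EH, lam=0.01, step=1.0, n_iter=50):
    """Minimize ||E d - s||_2^2 + lam ||T d||_1 with T = FFT along the parameter
    dimension (axis -1). E/EH are user-supplied forward/adjoint encoding operators."""
    d = EH(s)                                   # zero-filled initial guess
    for _ in range(n_iter):
        grad = EH(E(d) - s)                     # gradient of the data-consistency term
        d = d - step * grad
        coeff = np.fft.fft(d, axis=-1)          # sparsifying transform (temporal FFT)
        coeff = soft_threshold(coeff, lam * step)
        d = np.fft.ifft(coeff, axis=-1)         # back to the image domain
    return d
```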

3.2 |. Model-based reconstruction of MR parameters

A number of studies have proposed incorporating the MR parameter fitting models into image reconstruction so that corresponding parameter maps can be directly reconstructed from undersampled k-space in a single step with increased efficiency, performance, and robustness.73–75,118,119 The general framework of model-based reconstruction is shown in Figure 1B, and mathematically this can be expressed as

\tilde{p} = \arg\min_p \left\| E M(p) - s \right\|_2^2.    (9)

Here, the parameter fitting process previously described in Equation 5 is combined with Equation 7. The model-based reconstruction strategy has two specific advantages. First, it eliminates the parameter fitting step, which is traditionally treated as a separate process that is often cumbersome and sensitive to residual noise and/or artifacts. Second, the parameter fitting model is employed as a constraint, which serves as an intrinsic regularizer for image reconstruction. This can lead to better suppression of noise and artifacts and thus potentially increased acceleration rates. In addition, an extra regularization can be further enforced on the parameters to be reconstructed to improve reconstruction performance74,76:

\tilde{p} = \arg\min_p \left\| E M(p) - s \right\|_2^2 + \lambda R(p).    (10)

The main challenge of model-based reconstruction is the added computational complexity, which demands prolonged reconstruction time (eg 10–20 min or longer per 2D slice75) and can restrict its clinical translation.
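The following sketch illustrates the idea behind Equation 9 for single-coil T2 mapping: the proton density and R2 = 1/T2 maps are treated as unknowns, the exponential decay model synthesizes the multi-echo images, and automatic differentiation in PyTorch drives a gradient-based fit directly against the undersampled k-space. The optimizer, initialization, and mono-exponential model are illustrative assumptions; published model-based methods typically use more elaborate solvers and regularization.

```python
import torch

def model_based_t2_recon(s, mask, te, n_iter=300, lr=0.05):
    """Sketch of Equation 9 for single-coil T2 mapping: directly estimate proton
    density (s0) and R2 = 1/T2 maps from undersampled multi-echo k-space.
    s, mask: complex k-space and sampling mask of shape (nx, ny, n_echoes);
    te: 1D tensor of echo times (same units as 1/R2)."""
    nx, ny, _ = s.shape
    s0 = torch.ones(nx, ny, requires_grad=True)          # proton density map
    r2 = 0.02 * torch.ones(nx, ny, requires_grad=True)   # R2 map (eg 1/ms)
    r2 = r2.detach().requires_grad_(True)
    optim = torch.optim.Adam([s0, r2], lr=lr)
    for _ in range(n_iter):
        optim.zero_grad()
        # M(p): synthesize the multi-echo image series from the current maps
        images = s0[..., None] * torch.exp(-te[None, None, :] * r2[..., None])
        k = torch.fft.fft2(images, dim=(0, 1))
        loss = torch.sum(torch.abs(mask * k - s) ** 2)   # ||E M(p) - s||_2^2
        loss.backward()
        optim.step()
    return s0.detach(), 1.0 / r2.detach().clamp(min=1e-6)
```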

3.3 |. Efficient MR parameter mapping

Traditional MR parameter fitting, as shown in Equation 5, aims to find the least-squares solution that minimizes the error between the underlying signal evolution and the signal curve synthesized from the parameters to be fitted based on corresponding models. This is usually implemented through an iterative non-linear least-squares fitting process, which is computationally expensive, particularly when pixel-by-pixel fitting is desired and when additional considerations (eg correction of the B1+ profile or slice profile) need to be taken into account. To speed up parameter fitting, a number of studies have proposed pattern-recognition-based approaches.44,120,121 In this type of algorithm, a dictionary containing signal evolutions from a range of possible parameters is simulated first using relevant signal models or the Bloch equations. During the parameter generation step, a pattern-recognition-based process is then performed to search for the element of the dictionary that best matches the signal evolution for a given image pixel (eg the element with the smallest least-squares error with respect to the signal to be fitted). Once the desired dictionary atom is found, its associated MR parameters are assigned to this pixel, and this process is looped over all pixels to generate parameter maps. The pattern-recognition-based fitting approach offers two unique advantages over traditional iterative fitting methods. First, it involves only a linear search step, thus enabling dramatically faster fitting. Second, since the dictionary can be pre-generated, it is well suited for efficient parameter fitting using a complicated model.
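A minimal sketch of such dictionary matching is shown below, using normalized inner-product matching (as commonly done in MRF), which is equivalent to picking the atom with the smallest least-squares error for normalized signal evolutions; the array shapes and names are illustrative assumptions.

```python
import numpy as np

def dictionary_match(signals, dictionary, dict_params):
    """Pattern-recognition-based parameter generation.
    signals:     measured signal evolutions, shape (n_pixels, n_timepoints)
    dictionary:  simulated signal evolutions, shape (n_atoms, n_timepoints)
    dict_params: parameters of each atom (eg T1/T2), shape (n_atoms, n_params)
    Returns the parameters of the best-matching atom for every pixel."""
    # normalize so that matching depends on signal shape rather than scale
    dict_norm = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    sig_norm = signals / (np.linalg.norm(signals, axis=1, keepdims=True) + 1e-12)
    # inner-product matching across all atoms for all pixels
    correlation = np.abs(sig_norm @ dict_norm.conj().T)   # (n_pixels, n_atoms)
    best = np.argmax(correlation, axis=1)
    return dict_params[best]
```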

MRF77 is a more advanced framework that allows the simultaneous quantification of multiple tissue properties (eg joint T1 and T2 mapping) from a single MR scan in a clinically feasible scan time (eg a few seconds to a few minutes). It is also based on pattern recognition to generate quantitative MRI parameter maps, as shown in Figure 2, and the dictionary in MRF is typically generated using the Bloch equations to cover a wide range of physiological tissue properties. In addition, an MRF sequence further employs randomized imaging parameters, such as FA, TE, and time of repetition (TR), to generate highly variable signal evolutions that simultaneously depend on different tissue properties. This also ensures that different dictionary elements are more easily differentiated, reducing the chance of false dictionary matching. A combination of these new features provides much higher flexibility in simulating different encoding effects, such as FA and TR/TE, system imperfections, such as B0 and B1 inhomogeneities, and multiple tissue contributions, such as T1, T2, and diffusion.122,123

FIGURE 2.

Schematic demonstration of pattern-recognition-based MR parameter generation in the MRF framework. A, Examples of four dictionary entries representing four primary tissues: cerebrospinal fluid (CSF), fat, white matter, and gray matter. B, Pattern matching of the voxel fingerprint in the dictionary, which allows retrieval of the tissue features represented by this voxel. C, The intensity variation of a voxel across the undersampled images. D, Parameter maps obtained by repeating the matching process for each voxel. (Image reproduced from Figure 1 in Panda et al. Magnetic resonance fingerprinting—an overview. Curr Opin Biomed Eng. 2017 Sep;3:56–66 with permission)

The pattern-recognition-based MR parameter generation, including the MRF framework, can be combined with previously described accelerated imaging techniques for improved imaging speed and performance. For example, standard non-Cartesian parallel image reconstruction, view-sharing approaches, or compressed sensing reconstruction have been used to improve MRF.124–127 Imaging performance can be further improved with low-rank-based reconstruction methods128–133 or subspace-based reconstruction methods.132,134–137 In addition, the model-based reconstruction strategy can also be incorporated to directly reconstruct desired MR parameters.121,135,138

The main challenge of the pattern-recognition-based approach is the high computational demand to generate a dictionary and the large size of the dictionary that needs to be stored for signal matching, particularly when many parameters need to be considered simultaneously, as in MRF. This challenge can be alleviated by increasing the gap/footprint between consecutive dictionary elements, but this could lead to a reduction of fitting precision compared with conventional non-linear fitting approaches.

4 |. NEW GENERATION OF MR RELAXOMETRY: THE RISE OF DEEP LEARNING

4.1 |. Introduction of deep learning

Deep learning is one particular form of machine learning that uses a combination of weighted non-linear functions to represent complex learning functions.78 Deep learning can be viewed as a multi-level feature representation model, starting with learning simple linear features in initial network layers followed by more sophisticated features in deeper layers. The combination of a large number of learning modules leads to flexible and scalable learning capability. Along with the increase in computational power, deep learning has been revolutionizing computer vision and data science, and it has quickly expanded to many other modern scientific disciplines, including medical imaging research.78

Deep learning utilizes a neural network to learn latent data information. Inspired by the anatomy of neurons and how they function in the brain to perform cognitive tasks, a neural network consists of multiple hidden layers, and each of them has interconnected artificial neurons with varying ‘weights’ representing the strength of connections. A basic artificial neural network architecture is the fully connected network (FCN), in which each node in one layer is interconnected to all other nodes in a subsequent layer (Figure 3). The connection weights can be updated during network training to form a complicated non-linear relationship between the input and output. To facilitate analysis of multi-dimensional image data, a set of processing modules is further introduced into neural networks for characterizing the inter-correlations among image pixels using convolution, where a set of kernels is used to identify image features such as intensity variations, edges, and patterns. The extracted features are then passed through activation functions, analogous to the activation of real neurons, so that non-linearity is added to the learned features to increase model complexity and to enhance learning capability. In a typical neural network, the lth convolutional layer can be described as

f_u^l = \sigma\left( W_{n_l}^l * f_u^{l-1} + b_{n_l}^l \right).    (11)

Here, f_u^l is the output of this convolutional layer, and its input f_u^{l-1} is the output of the previous layer. * denotes multi-dimensional convolution. W_{n_l}^l and b_{n_l}^l are the corresponding convolution kernels and biases, respectively, with a total of n_l filters. σ(·) denotes an activation function, such as the most commonly used rectified linear unit (ReLU). The size of the feature map can change throughout the network by using special convolutional processes such as dilated convolutions139 and transposed convolutions,140 or by using additional operators such as pooling141 and interpolations. These operations can help maintain essential image information while diversifying learned features depending on study-specific learning purposes. Such a neural network structure is referred to as a convolutional neural network (CNN); it consists of many interconnected convolutional layers and allows learning of sophisticated features that are inherently embedded in a training database. The basic CNN can also be further extended to include many advanced processing modules, such as batch normalization,142 residual learning,143 dense connections,144 and dropout145 to increase learning capability, efficiency, and robustness.
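For reference, the PyTorch snippet below implements a single convolutional layer of the form of Equation 11, ie a learned convolution plus bias followed by a ReLU activation σ(·); the channel counts and kernel size are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConvLayer(nn.Module):
    """One convolutional layer of Equation 11: convolution, bias, then ReLU."""
    def __init__(self, in_channels, n_filters, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, n_filters, kernel_size,
                              padding=kernel_size // 2)   # W^l * f^(l-1) + b^l
        self.act = nn.ReLU(inplace=True)                  # sigma(.)

    def forward(self, f_prev):
        return self.act(self.conv(f_prev))

# example: map an 8-echo image series (8 input channels) to 64 feature maps
layer = ConvLayer(in_channels=8, n_filters=64)
features = layer(torch.randn(1, 8, 128, 128))             # (batch, channels, nx, ny)
```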

FIGURE 3.

Schematic demonstration of an FCN with one input layer, one output layer, and three hidden layers. Each node in one layer is interconnected to all nodes in the following layer. This network can represent a complicated non-linear relationship between the input and the output

In recent years, different variants of CNN architectures have been proposed. In particular, U-Net146 is an architecture that has been widely applied in medical image applications. The U-Net structure is an efficient CNN system to characterize pixel-wise dense image content using a paired encoder and decoder network. The encoder network consists of several convolutional layers, ReLU activation, batch normalization, and Max-pooling. It aims to characterize inherent image features while removing uncorrelated image structures and compressing image information. The decoder network uses a mirrored structure of the encoder to decompress the output of the encoder network. It recovers image resolution and then generates desirable image contrast through multiple levels of convolution operation. Multiple symmetric shortcut connections are also applied between the encoder and decoder networks to directly transfer image features with increased mapping efficiency.147 Figure 4 shows an example of a customized U-Net architecture.
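A heavily simplified two-level U-Net-style encoder-decoder is sketched below in PyTorch to illustrate the encoder, decoder, and shortcut concatenation described above; it is far smaller than the networks used in practice, and the channel counts and input/output shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions, each with batch normalization and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Two-level U-Net-style encoder-decoder with one skip connection."""
    def __init__(self, in_ch, out_ch, base=32):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.pool = nn.MaxPool2d(2)
        self.enc2 = conv_block(base, base * 2)
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)             # input: upsampled + skip
        self.out = nn.Conv2d(base, out_ch, 1)

    def forward(self, x):
        e1 = self.enc1(x)                                  # encoder level 1
        e2 = self.enc2(self.pool(e1))                      # encoder level 2 (downsampled)
        d1 = self.up(e2)                                   # decoder: upsample
        d1 = self.dec1(torch.cat([d1, e1], dim=1))         # shortcut concatenation
        return self.out(d1)

# example: 8-echo input series -> 2 output maps (eg T2 and proton density)
net = TinyUNet(in_ch=8, out_ch=2)
maps = net(torch.randn(1, 8, 128, 128))
```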

FIGURE 4.

An example of mapping an undersampled knee MR image series into a pair of a T2 map and a proton density (I0) map using an end-to-end CNN structure in Reference 169. A, The network structure of a U-Net146 implemented for the end-to-end mapping. The U-Net structure consists of an encoder network and a decoder network with multiple shortcut connections (eg concatenation) between them to enhance mapping performance. The encoder network is used to characterize robust and inherent image features while compressing image information, and the decoder network is applied to generate desirable image contrast using the extracted features of the encoder network. The abbreviations for the CNN layers include BN for batch normalization, Conv for 2D convolution, and Deconv for 2D deconvolution. The parameters for the convolution layers are labeled in the figure as image size @ the number of 2D filters. B, Schematic diagram of a set of extracted image features at different convolutional layers of the encoder (L1–L5) and decoder (L11–L15) networks. These connected processing modules allow the network to explore spatial-temporal correlation and to learn multi-level structural features to characterize complex image information for correcting image artifacts and removing image noise due to image undersampling

ResNet143 is another popular CNN architecture that is also widely used in medical imaging applications. This architecture is often implemented for training a deep network. When a network has too many convolutional layers, a degradation problem can occur, causing a rapid loss of network accuracy as the network depth increases. ResNet introduces a residual block in which the layer input is connected to the layer output, thus forcing the layer to learn residual information with respect to its input. Compared with a standard CNN, this turns out to make network training easier and less complex, which potentially leads to improved network accuracy and performance.143

CNNs can also be extended to utilize spatial-temporal filters148,149 or a recurrent CNN (RCNN) architecture150,151 to better capture and learn spatial-temporal information in dynamic images. RCNN introduces a ‘memory’ module that maintains an internal state of the network, which keeps active information not only from the current input but also from its neighboring inputs over time. As a result, information learned from one dynamic frame can be used to help learn a different frame within the RCNN structure, and dynamic information can be propagated efficiently as the input changes with time.

The abovementioned network architectures usually use pixel-wise loss functions such as ℓ1 or ℓ2 norms to calculate the difference of network outputs with respect to training references. Recent studies have found that these simple loss functions are likely to cause image bias and blurring in imaging applications.152 To address this challenge, the generative adversarial network (GAN)153 has been developed and has gained increasing attention in the deep learning field. With an adversarial learning scheme, GAN uses a separate network, a discriminator, to evaluate the similarity between the outputs of the original neural network (typically referred to as a generator) and the training references. Such a training strategy pushes the generator to produce outputs that are indistinguishable from the training references. It enables assessment of network outputs from the perspective of multiple-level features, thus leading to better performance than standard ℓ1 or ℓ2 norm-based pixel-wise loss functions.
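The snippet below sketches how an adversarial term is typically combined with a pixel-wise term in the generator loss; the weighting and the use of a binary cross-entropy adversarial loss are illustrative choices rather than the formulation of any specific study.

```python
import torch
import torch.nn as nn

def generator_loss(pred_maps, ref_maps, disc_pred_on_fake, adv_weight=0.01):
    """Composite generator loss: pixel-wise l1 term plus an adversarial term that
    rewards outputs the discriminator classifies as 'real'."""
    pixel_loss = nn.functional.l1_loss(pred_maps, ref_maps)
    adv_loss = nn.functional.binary_cross_entropy_with_logits(
        disc_pred_on_fake, torch.ones_like(disc_pred_on_fake))
    return pixel_loss + adv_weight * adv_loss
```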

More detailed coverage of network architectures can be found in recent review articles.154,155 The key point here is that the scalability and flexibility of constructing different networks with a large number of advanced processing modules provide a great many degrees of freedom to reformulate learning problems for MR relaxometry. The abovementioned network architectures and training strategies have been implemented and tested in recent deep-learning-based MR relaxometry applications and have demonstrated promising performance (Table 1), as will be seen in the following subsections.

TABLE 1.

Summary of recent representative studies on deep-learning-based rapid MR relaxometry

| Reference | Method | Relaxation type | Network architecture | Image sequence | Training data | Testing data | Key results |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Cai et al156 | Deep OLED | T2 mapping | ResNet | Single-shot OLED planar imaging | Simulated data | Simulated phantom; in vivo brain | Reliable T2 mapping with higher accuracy and faster reconstruction than standard reconstruction method |
| Li et al158 | MSCNN | T1ρ and T2 mapping | CNN | Magnetization-prepared angle-modulated partitioned k-space spoiled gradient echo snapshots for T1ρ and T2 quantification | In vivo knee | In vivo knee | Up to 10-fold acceleration for simultaneous T1ρ and T2 maps with quantification results comparable to reference maps |
| Cohen et al164 | MRF-DRONE | T1 and T2 mapping | FCN | Modified gradient-echo EPI MRF pulse sequence; fast imaging with steady-state precession MRF pulse sequence | Simulated data | Simulated phantom; phantom; in vivo brain | Accurate, 300 to 5000 times faster, and more robust to noise and dictionary undersampling than conventional MRF dictionary matching |
| Fang et al165 | SCQ network | T1 and T2 mapping | FCN + U-Net | Fast imaging with steady-state precession MRF pulse sequence | In vivo brain | In vivo brain | Accurate quantification for T1 and T2 by using only 25% of time points of the original sequence |
| Liu et al169 | MANTIS | T2 mapping | U-Net | Multi-echo spin-echo T2 mapping | In vivo knee | In vivo knee | Accurate and reliable quantification for T2 at up to eightfold acceleration, robust against k-space trajectory undersampling variation |
| Liu et al170 | MANTIS-GAN | T2 mapping | GAN (generator, U-Net; discriminator, PatchGAN) | Multi-echo spin-echo T2 mapping | Simulated data | Simulated data | Up to eightfold acceleration for T2 mapping with accuracy and high image sharpness and texture preservation compared with the reference |
| Zha et al171 | Relax-MANTIS | T1 mapping | U-Net | Variable flip angle spoiled gradient echo T1 mapping | In vivo lung | In vivo lung | Physics model regularized and self-supervised T1 mapping at reduced image acquisition time, robust against noise |
| Zibetti et al175 | VN | T1ρ mapping | VN | Modified 3D Cartesian turbo-FLASH sequence | In vivo knee | In vivo knee | Better T1ρ quantification using deep learning image reconstruction than compressed sensing |
| Jeelani et al177 | DeepT1 | T1 mapping | RCNN + U-Net | Modified Cartesian Look-Locker imaging | In vivo cardiac | In vivo cardiac | Noise-robust estimates compared with the traditional pixel-wise T1 parameter fitting at fivefold acceleration |
| Chaudhari et al178 | MRSR | T2 mapping | CNN | DESS sequence | In vivo knee | In vivo knee | Minimally biased T2 from robust super-resolution in thin slice compared with the reference |

Abbreviations:

DeepT1, deep learning for T1 mapping; MSCNN, model skipped convolutional neural network; Relax-MANTIS, reference-free latent map extraction MANTIS.

4.2 |. End-to-end deep learning for efficient MR parameter mapping

Neural networks can be constructed to learn spatial correlations and contrast relationships between input datasets and desirable outputs. This process is known as end-to-end deep learning or end-to-end mapping, which forms a non-linear transform function between two image domains. For MR relaxometry, a straightforward and effective application of end-to-end mapping is to directly translate dynamic MR relaxometry images (in domain D_u, typically undersampled images) to MR parameter maps (in domain P) through domain transform learning (denoted as D_u → P), given data pairs d_u and p representing the input image series and the MR parameters to be generated. The data distribution of the training datasets can be denoted as d_u ~ P(d_u), and the corresponding learning process can then be formulated as

\tilde{\theta} = \arg\min_{\theta} \left( \mathbb{E}_{d_u \sim P(d_u)} \left[ \left\| C(d_u \mid \theta) - p \right\|_p \right] \right)    (12)

where C(d_u | θ): d_u → p is a neural network mapping function conditioned on a network parameter set θ; ‖·‖_p represents a p-norm function such as the ℓ1-norm or ℓ2-norm; and E_{d_u ~ P(d_u)}[·] is an expectation operator given that d_u belongs to the data distribution P(d_u). It can be seen that Equation 12 has a format similar to the parameter fitting model described in Equation 5, and the use of deep learning enables the direct generation of MR parameters from undersampled dynamic images. It should be noted that the optimization target in Equation 12 is fundamentally different from that in Equation 5. While Equation 5 aims to generate parameter maps that are consistent with the underlying dynamic signal evolution, Equation 12 attempts to estimate a network parameter set θ̃, conditioned on which the neural network optimizes the mapping performance by minimizing the difference between the network outputs (eg deep-learning-generated parameters) and the training references (eg reference MR parameters). More specifically, the training process aims to characterize latent features that can be learned from the training datasets by updating network parameters. Once the training is completed, the learned parameter set θ̃ for the neural network is fixed, and it can be used to convert newly acquired undersampled images directly to their corresponding parameter maps p̃. This process is referred to as image inference:

\tilde{p} = C(d_u \mid \tilde{\theta}), \quad d_u \in D_u.    (13)

Since the network structure and parameters are both fixed during the inference process, the forward operation in reconstruction can be achieved in a time-efficient fashion with computing time typically of the order of seconds using a modern graphics processing unit (GPU) or multi-threaded central processing unit (CPU). A number of recent studies have demonstrated the performance of end-to-end mapping to achieve efficient and accurate MR relaxometry with representative examples briefly summarized below.
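A minimal supervised training loop for the end-to-end mapping of Equations 12 and 13 is sketched below in PyTorch, assuming a data loader that yields pairs of undersampled dynamic images and reference parameter maps; the optimizer, loss, and tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

def train_end_to_end(network, loader, n_epochs=100, lr=1e-4):
    """Supervised training of Equation 12: the network C(. | theta) maps undersampled
    dynamic images d_u to reference parameter maps p using a pixel-wise l1 loss.
    `loader` is assumed to yield (d_u, p) pairs shaped (batch, n_contrasts, nx, ny)
    and (batch, n_maps, nx, ny)."""
    optimizer = torch.optim.Adam(network.parameters(), lr=lr)
    loss_fn = nn.L1Loss()
    for epoch in range(n_epochs):
        for d_u, p_ref in loader:
            optimizer.zero_grad()
            p_pred = network(d_u)              # C(d_u | theta)
            loss = loss_fn(p_pred, p_ref)      # || C(d_u | theta) - p ||_1
            loss.backward()
            optimizer.step()
    return network                             # theta is now fixed for inference (Equation 13)
```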

Cai et al investigated the use of end-to-end CNN mapping to directly estimate the T2 map from single-shot MR images of the brain156 that were acquired using an overlapping-echo detachment (OLED) planar imaging sequence.157 The proposed method applied a 2D ResNet to generate T2 maps directly from the input OLED images. The CNN was trained on simulated image datasets, and the trained network was then evaluated on real brain data. Compared with T2 maps obtained from conventional constrained reconstruction, the T2 maps generated with the proposed deep learning method showed reduced image artifacts, noise, and blurring (Figure 5). Meanwhile, deep learning also led to increased accuracy in T2 estimation with respect to reference T2 maps.

FIGURE 5.

Examples showing the comparison of T2 estimation between methods. A, Full FOV spin-echo images. B, Expanded reference T2 maps. C, Expanded T2 maps from constrained echo-detachment-based reconstruction method. D, Expanded T2 maps from ResNet. Expanded ROIs are marked by the red rectangles in A. The reconstructed T2 mappings from the echo-detachment-based method show much more noise (regular noise-like artifacts) and blurring around the texture edges. This indicates the difficulty of denoising and reducing the blurring effect at the same time for the echo-detachment-based method. However, the ResNet method simultaneously achieves both quite well, and the results show good agreement with the reference T2 mappings. (Image reproduced from Figure 6 in Cai et al. Single-shot T2 mapping using overlapping-echo detachment planar imaging and a deep convolutional neural network. Magn Reson Med. 2018. https://doi.org/10.1002/mrm.27205 with permission)

Li et al proposed rapid T1ρ and T2 mapping of the knee using an end-to-end CNN.158 In this study, dynamic MR relaxometry images were acquired using a magnetization-prepared spoiled gradient echo snapshot sequence that was previously developed for T1ρ and T2 quantification. During the training process, the reference T1ρ and T2 maps were obtained by fitting fully sampled k-space datasets, and undersampled MR images were retrospectively generated using a 2D Poisson-disk random undersampling mask. An end-to-end 3D CNN was constructed to jointly learn spatial-temporal information and T1ρ and T2 contrast simultaneously. The learned CNN was then applied to convert newly acquired undersampled MR images to both T1ρ and T2 maps directly. The deep-learning-based method was found to provide accurate T1ρ and T2 quantification at 10-fold acceleration compared with the reference parameter maps.

End-to-end mapping has also been applied to improve MRF in several recent studies with different applications.159–163 The first application is to help MRF with a better and more efficient generation of MR parameter maps. For example, Cohen et al developed a deep-learning-based method called MRF deep reconstruction network (MRF-DRONE), which uses an FCN to directly map MRF signal curves to T1 and T2 values on a pixel-by-pixel basis.164 The network learning was performed on a large dictionary, and the FCN uses multiple hidden layers to characterize the correlations between MRF signal patterns and corresponding T1 and T2 values. The highly non-linear nature of the deep learning network allows sufficient feature compression and more efficient pattern recognition, which resulted in a 300–5000 times faster mapping speed compared with standard MRF matching.

In a later study, Fang et al developed a framework called spatially constrained tissue quantification (SCQ), which uses end-to-end mapping for both feature learning and signal matching in MRF,165 as shown in Figure 6. Similarly to MRF-DRONE, an FCN was trained to compress MRF signal evolution curves into a low-dimensional feature vector to better characterize signal patterns and remove uncorrelated noise and artifacts. A set of feature maps was formed by concatenating the feature vectors of all pixels in the MRF images. In addition, a U-Net structure was constructed to convert the low-dimensional features into T1 and T2 parameter maps. This method achieved accurate T1 and T2 estimation in the brain with only a quarter of the MRF data that would otherwise be needed, leading to a fourfold acceleration rate. Using deep learning, the same group has also demonstrated improved 2D MRF with an in-plane spatial resolution of 0.8 mm² in a scan time of 7.5 s (Reference 166) and improved 3D MRF with 1 mm³ isotropic spatial resolution in a scan time of 7 min.167

FIGURE 6.

A diagram of the deep learning model for tissue quantification in MRF. First, the feature extraction module extracts a lower-dimensional feature vector from each MR signal evolution. A spatially constrained quantification module using end-to-end CNN mapping is then applied to estimate the tissue maps from the extracted feature maps with spatial information. This SCQ method achieved accurate T1 and T2 estimation using only a quarter of the required MRF signal initially, leading to an apparent fourfold acceleration for the brain. (Image reprinted from Figure 1 in Fang et al. Sub-millimeter MR fingerprinting using deep-learning-based tissue quantification. Magn Reson Med. 2019. https://doi.org/10.1002/mrm.28136 with permission)

Deep learning has also been used to help with MRF dictionary generation. For example, Yang et al168 developed a GAN-based method that learns to synthesize signal evolution curves from a reference MRF dictionary. The signal curves generated by deep learning were found to be consistent with those generated from the Bloch equations, and the corresponding MR parameter maps generated from the learned dictionary were in good agreement with those from the Bloch-equation-based dictionary in in vivo studies. The main advantage of using deep learning for MRF dictionary generation is the much faster computational speed (~1000-fold) compared with standard Bloch-equation-based simulation, particularly when a wide range of tissue parameters needs to be covered.

4.3 |. Model-based deep learning reconstruction of MR parameters

In analogy to the model-based reconstruction shown in Equation 10, the end-to-end CNN mapping in Equation 12 can be further combined with a data fidelity term that enforces data/model consistency, and the CNN mapping can be treated as a deep-learning-based regularizer in this scenario as shown below:

\tilde{\theta} = \arg\min_{\theta} \left( \lambda_1 \, \mathbb{E}_{d_u \sim P(d_u)} \left\| E M(C(d_u \mid \theta)) - s \right\|_2^2 + \lambda_2 \, \mathbb{E}_{d_u \sim P(d_u)} \left[ \left\| C(d_u \mid \theta) - p \right\|_p \right] \right).    (14)

Here, λ_1 and λ_2 are weighting parameters that balance the model fidelity (the left-hand term) and the CNN mapping (the right-hand term), respectively. When multicoil arrays are employed, Equation 14 can also incorporate parallel imaging to enforce multicoil data/model consistency. Training with Equation 14 is equivalent to jointly learning two objectives. The first (model fidelity) ensures that the parameter maps reconstructed by the CNN mapping produce undersampled k-space data that match the acquired k-space according to the corresponding signal models. Similar to previous studies, the second objective (CNN mapping) ensures that undersampled MR relaxometry images produce parameter maps that are consistent with the reference parameter maps. The synergistic combination of these two loss terms inherits the high learning efficiency of end-to-end CNN mapping while incorporating prior MR physics knowledge into the training process, which can result in a more generalizable deep-learning-based image reconstruction model. Model-based deep learning reconstruction (Equation 14) poses a problem fundamentally different from deep-learning-based parameter mapping (Equation 12). Strictly speaking, Equation 12 aims to map fully sampled or undersampled image series to parameters, while Equation 14 is more of a reconstruction problem that aims to reconstruct parameter maps consistent with the acquired k-space, a process that is implemented in conventional iterative reconstruction as shown in Equations 8 and 10.
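The sketch below illustrates how the two loss terms of Equation 14 can be combined for a single-coil multi-echo T2 example in the spirit of MANTIS; the mono-exponential model, tensor layout, and loss weights are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

def model_based_loss(network, d_u, p_ref, s, mask, te, lam1=1.0, lam2=1.0):
    """Combined training loss of Equation 14 (single-coil T2 mapping example).
    d_u:  undersampled multi-echo images, (batch, n_echo, nx, ny)
    p_ref: reference maps (s0, R2), (batch, 2, nx, ny)
    s, mask: undersampled k-space and sampling mask, (batch, nx, ny, n_echo)
    te:   echo times, shape (n_echo,)"""
    pred = network(d_u)                                    # (batch, 2, nx, ny)
    s0, r2 = pred[:, 0], pred[:, 1]
    # CNN-mapping term: supervised consistency with the reference maps
    mapping_loss = nn.functional.l1_loss(pred, p_ref)
    # model-fidelity term: M(p) synthesizes the echoes, which must match acquired k-space
    images = s0[..., None] * torch.exp(-r2[..., None] * te)   # (batch, nx, ny, n_echo)
    k_pred = torch.fft.fft2(images, dim=(1, 2))
    model_loss = torch.mean(torch.abs(mask * k_pred - s) ** 2)  # ||E M(C(d_u)) - s||^2
    return lam1 * model_loss + lam2 * mapping_loss
```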

In a recent study, Liu et al demonstrated the performance of a model-based reconstruction framework using deep learning for rapid T2 relaxometry.169 The approach is called model-augmented neural network with incoherent k-space sampling (MANTIS), and aims to reconstruct T2 maps from a series of undersampled multi-echo spin-echo MR images. As shown in Figure 7, the training of MANTIS aims to minimize the combined loss terms described in Equation 14 to enforce data/model consistency while removing undersampling-induced artifacts. Specifically, a U-Net was implemented as the CNN mapping function to learn spatial-temporal correlations between the undersampled input images and reference T2 and proton density maps derived from fully sampled images. MANTIS was demonstrated for up to eightfold accelerated T2 mapping of the knee with accurate T2 estimation with respect to fully sampled references. Compared with conventional constrained reconstruction methods, MANTIS shows improved reconstruction performance with a lower error and higher structural similarity (Figure 8) and much faster reconstruction time.

FIGURE 7.

Illustration of the MANTIS framework for rapid MR parameter mapping, which features two loss components as shown in Equation 14. The first loss term (Loss 1) ensures that the undersampled multi-echo images produce parameter maps that are the same as the reference parameter maps generated from reference multi-echo images. The second loss term (Loss 2) ensures that the parameter maps reconstructed from the CNN mapping produce synthetic undersampled image data matching the acquired k-space measurements. This approach jointly implements both the data-driven deep learning component and the signal model from the fundamental MR physics. The framework can be extended to other types of parameter mapping with appropriate MR signal models. Other advanced CNN structures and loss functions can also be applied to augment the reconstruction performance

FIGURE 8.

Comparison of T2 maps estimated from MANTIS and different conventional sparsity-based reconstruction approaches at an acceleration rate of 5 (R = 5). MANTIS generated T2 maps with well-preserved sharpness and texture comparable to the reference. Other methods created suboptimal T2 maps with either reduced image sharpness or residual artifacts, as indicated by the arrows. The superior performance of MANTIS reconstruction was confirmed by the normalized root mean square error (nRMSE) and the residual error maps. The joint X-P constrained reconstruction methods, including global low rank and local low rank, were implemented based on Reference 68. (Image reproduced from Figure 3 in Liu et al. MANTIS: Model-Augmented Neural neTwork with Incoherent k-space Sampling for efficient MR parameter mapping. Magn Reson Med. 2019 Jul;82(1):174–188 with permission)

Liu et al have also further extended the MANTIS framework to an approach called MANTIS-GAN for rapid T2 mapping of the brain by incorporating an additional adversarial loss function.170 Specifically, based on the two loss terms in Equation 14, an additional adversarial loss is incorporated into the learning process to enable more realistic and accurate parameter maps. As shown in Figure 9 for a representative example, MANTIS-GAN maintains similar performance to standard MANTIS in suppressing artifacts and noise while enabling better preservation of tissue texture and image sharpness.

FIGURE 9.

Representative examples of T2 maps estimated from the different reconstruction methods at an acceleration rate of 8 (R = 8) in an axial brain slice. Undersampling at this high acceleration rate prevented the reliable reconstruction of a T2 map with a simple inverse FFT (Zero-Fill) and advanced joint X-P compressed sensing methods, including k-t SLR65 and ALOHA.113 The deep learning reconstruction MANTIS successfully removed aliasing artifacts and preserved better tissue contrast, which is similar to that of the reference but with some remaining blurring and loss of tissue texture. The deep learning reconstruction with adversarial training MANTIS-GAN provided not only accurate T2 contrast but also much-improved image sharpness and tissue details that are superior to all other methods. The highest degree of correspondence between deep learning methods and the reference was confirmed by the nRMSEs, which were 3.2% and 3.6% for MANTIS and MANTIS-GAN, and 5.1% and 7.1% for k-t SLR and ALOHA, respectively

In some applications (such as MR relaxometry of the lung), it may be challenging to obtain an accurate and reliable reference for training due to a low signal-to-noise ratio (SNR) and/or motion. To address this issue, Zha et al proposed a model-based deep learning reconstruction approach171 that enforces only the joint model/data consistency term from Equation 14, as shown below:

\tilde{\theta} = \arg\min_{\theta} \left( \mathbb{E}_{d \sim P(d)} \left\| E M(C(d \mid \theta)) - s \right\|_2^2 \right).    (15)

This leads to a ‘self-supervised’ deep learning method that does not require references during training. The authors investigated its performance for rapid T1 mapping of the lung based on a variable flip angle ultra-short echo time (UTE) spoiled gradient echo (SPGR) sequence, following the driven equilibrium single pulse observation of T1 (DESPOT1) method.172 Although the method was evaluated for fully sampled images only, it enabled accurate T1 mapping from only three FA images compared with corresponding T1 maps obtained from five FA images (Figure 10), thus leading to a 40% reduction in scan time.
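The following sketch illustrates the self-supervised idea of Equation 15 for variable-flip-angle T1 mapping with an SPGR (DESPOT1-type) signal model. Because the study above used fully sampled images, the consistency term is written directly in the image domain here; the network interface and tensor shapes are illustrative assumptions.

```python
import torch

def spgr_signal(m0, t1, flip_angles, tr):
    """SPGR (DESPOT1-type) signal model used as M(p).
    m0, t1: maps of shape (nx, ny); flip_angles in radians, shape (n_fa,); tr scalar."""
    e1 = torch.exp(-tr / t1)[..., None]                    # (nx, ny, 1)
    fa = flip_angles[None, None, :]                        # (1, 1, n_fa)
    return m0[..., None] * torch.sin(fa) * (1 - e1) / (1 - torch.cos(fa) * e1)

def self_supervised_loss(network, images, flip_angles, tr):
    """Equation 15 in the image domain: no reference maps are needed; the predicted
    (m0, T1) maps must simply re-synthesize the acquired VFA images.
    images: acquired VFA image series, shape (n_fa, nx, ny)."""
    pred = network(images.unsqueeze(0))[0]                 # (2, nx, ny): m0, T1
    synth = spgr_signal(pred[0], pred[1], flip_angles, tr) # (nx, ny, n_fa)
    return torch.mean((synth - images.permute(1, 2, 0)) ** 2)
```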

FIGURE 10.

Examples of the estimated T1 from UTE VFA lung data (SNR = 8.7) derived using standard non-linear least-squares fitting (NNLS)215 with five FAs, the widely used maximum likelihood variable projection method (VPM)216 with three FAs, and a self-supervised reference-free deep learning method (Relax-MANTIS) with three FAs. The T1 values obtained with Relax-MANTIS showed similar regional variations as seen with NNLS with five FAs and less noisy measurements (white arrows) seen from VPM with three FAs. The lung T1 histogram comparison suggests that whole lung T1 distribution estimated using the deep learning method Relax-MANTIS with three FAs (orange curve) conforms much better to the distribution from the NNLS with five FAs (bar graph in blue). Relax-MANTIS provides good quantitative agreement with five-FA standard NNLS while using only three FAs, indicating a 40% scan time reduction for an accurate whole lung parenchymal T1 quantification. The computing time for each 3D lung volume is significantly lower for the deep learning method at 26 s in comparison with the conventional methods at several minutes to hours. (Image courtesy of Wei Zha, PhD)

4.4 |. Deep-learning-based image reconstruction from accelerated MR relaxometry data

In analogy to the traditional two-step MR relaxometry method (one step for image reconstruction and the other step for parameter estimation), one can also focus on improving image reconstruction using deep learning for improved parameter estimation in the second step. Similarly to Equation 8, this type of image reconstruction using deep learning can be formulated as

\tilde{\theta} = \arg\min_{\theta} \left( \lambda_1 \, \mathbb{E}_{d_u \sim P(d_u)} \left\| E\, C(d_u \mid \theta) - s \right\|_2^2 + \lambda_2 \, \mathbb{E}_{d_u \sim P(d_u)} \left[ \left\| C(d_u \mid \theta) - d \right\|_p \right] \right).    (16)

Compared with Equation 8, Equation 16 aims to use deep learning as a data-driven regularizer, which is expected to provide better reconstruction performance compared with standard reconstruction employing a generic constraint. In addition, the deep-learning-based regularization can also be combined with traditional physics-based constraints, such as a low-rank constraint or a subspace-based constraint.173,174 However, it should be noted that Equation 16 only reconstructs MR images, and an additional fitting process is still needed to produce MR parameter maps, which can be implemented following either Equation 5 (standard fitting) or Equation 12 (deep-learning-based mapping). Several studies have proposed different learning-based reconstruction methods within this category to improve MR relaxometry, as summarized below.
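A minimal sketch of the loss in Equation 16 is given below: the network outputs a reconstructed dynamic image series, which is penalized both for k-space data inconsistency and for deviation from reference images. The single-coil Fourier encoding and loss weights are illustrative assumptions, and parameter fitting would still follow as a separate step.

```python
import torch

def recon_network_loss(network, d_u, d_ref, s, mask, lam1=1.0, lam2=1.0):
    """Equation 16: the network reconstructs the dynamic image series itself.
    d_u, d_ref: undersampled and reference images, (batch, n_contrasts, nx, ny)
    s, mask: undersampled k-space and sampling mask with the same shape."""
    d_pred = network(d_u)                                        # reconstructed images
    k_pred = torch.fft.fft2(d_pred, dim=(-2, -1))
    dc_loss = torch.mean(torch.abs(mask * k_pred - s) ** 2)      # ||E C(d_u) - s||^2
    img_loss = torch.nn.functional.l1_loss(d_pred, d_ref)        # ||C(d_u) - d||_1
    return lam1 * dc_loss + lam2 * img_loss
```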

Zibetti et al investigated the use of a variational network (VN) for accelerated 3D T1ρ mapping of the knee.175 The reference in vivo T1ρ datasets were acquired using a modified 3D Cartesian sequence with spin-lock preparation176 at different spin-lock times. Images were retrospectively undersampled, and the trained network was applied to reconstruct each T1ρ image separately. This study demonstrated that the VN provided better image reconstruction performance than conventional compressed-sensing-based reconstruction methods at acceleration factors from 2 to 6. The improved image quality, in turn, led to improved fitting of T1ρ maps for cartilage quantification in the knee joint at different acceleration factors (Figure 11).
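As an illustration of the fitting step that follows such a reconstruction, the sketch below performs a simple pixel-wise, log-linear mono-exponential fit, S(TSL) = S0 exp(−TSL/T1ρ), to a stack of reconstructed spin-lock images. The spin-lock times are assumed values, and weighting or bi-exponential refinements used in practice are omitted.

```python
import numpy as np

TSL = np.array([2.0, 10.0, 20.0, 40.0])             # spin-lock times in ms (assumed)

def fit_t1rho(images):
    """images: (n_tsl, H, W) magnitude images -> (H, W) T1rho map in ms."""
    y = np.log(np.clip(images, 1e-6, None)).reshape(len(TSL), -1)  # log-linearize the decay
    A = np.stack([np.ones_like(TSL), -TSL], axis=1)                # unknowns: [ln S0, 1/T1rho]
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    rate = np.clip(coeffs[1], 1e-6, None)                          # 1/T1rho for every pixel
    return (1.0 / rate).reshape(images.shape[1:])
```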

FIGURE 11

Comparison of T1ρ maps estimated with the VN and with different compressed sensing reconstruction schemes. The advantages of the VN over compressed sensing include faster image reconstruction and lower reconstruction error at acceleration factors R = 2-6. The VN also yielded lower error and bias in cartilage T1ρ quantification at most acceleration factors. CS-SFD, compressed sensing using sparsity regularization on spatial finite differences; CS-STFD, compressed sensing using sparsity regularization on spatiotemporal finite differences; MNAD, median of normalized absolute deviation. (Image courtesy of Marcelo V. W. Zibetti, PhD)

Jeelani et al developed a deep-learning-based method for accelerated T1 mapping of the heart.177 The proposed method consisted of one CNN for image reconstruction and a second CNN for generating T1 maps via end-to-end mapping. Specifically, undersampled images acquired with a modified Look-Locker (MOLLI) sequence at eight inversion recovery time points were first passed to an RCNN reconstruction network151 to remove noise and artifacts by exploiting dynamic image information. The reconstructed MOLLI images were then used as the input to the second, mapping network, a U-Net that converts the images directly into T1 maps. This two-step approach demonstrated better performance than conventional constrained reconstruction for cardiac T1 mapping at fivefold acceleration.
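A schematic version of such a two-stage design is sketched below: a first network cleans the undersampled inversion-recovery image series, and a second network converts the cleaned series into a T1 map. Both networks are illustrative stand-ins (plain convolutional blocks), not the RCNN and U-Net used in the published work.

```python
import torch
import torch.nn as nn

class TwoStageT1(nn.Module):
    """Stage 1 cleans the undersampled image series; stage 2 maps the series to a T1 map."""
    def __init__(self, n_ti=8):
        super().__init__()
        self.recon = nn.Sequential(                      # stage 1: artifact/noise removal
            nn.Conv2d(n_ti, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, n_ti, 3, padding=1))
        self.mapper = nn.Sequential(                     # stage 2: image series -> T1 map
            nn.Conv2d(n_ti, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1), nn.Softplus())

    def forward(self, undersampled_series):              # (batch, n_ti, H, W)
        images = self.recon(undersampled_series)
        t1_map = self.mapper(images)
        return images, t1_map
```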

Chaudhari et al proposed a deep-learning-based super-resolution technique for accelerated T2 mapping of the knee.178,179 Images were acquired using a dual-echo steady-state (DESS) sequence, and a 3D CNN was trained to improve through-plane resolution from thick-slice acquisitions using high-resolution thin-slice images as training references. This allows increased volumetric coverage without prolonging scan times. The deep-learning-based super-resolution approach was found to outperform standard interpolation methods and sparsity-based super-resolution reconstruction179 for T2 estimation with complete knee-joint coverage. The proposed method also provided T2 measurements with less bias relative to the reference than those obtained from the thick-slice DESS images.180
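The sketch below illustrates the general idea with a small 3D residual CNN: the thick-slice volume is first interpolated to the target slice spacing, and the network learns only the residual high-frequency detail against thin-slice references. The architecture and upsampling factor are assumptions for illustration, not the published DESS super-resolution model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SliceSRNet(nn.Module):
    """Interpolate the thick-slice volume to the target slice spacing, then learn the residual."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 1, 3, padding=1))

    def forward(self, thick_volume, scale=2):              # (batch, 1, slices, H, W)
        up = F.interpolate(thick_volume, scale_factor=(scale, 1, 1),
                           mode="trilinear", align_corners=False)
        return up + self.body(up)                          # residual high-frequency detail
```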

5 |. SUMMARY AND DISCUSSION

This review article discusses the potential of deep learning for rapid MR relaxometry and presents an overview of different pilot studies that have applied deep learning to improve MR parameter mapping. The paper first summarizes the basic imaging model for MR relaxometry, followed by different categories of traditional (non-deep-learning) approaches and deep-learning-based methods that have been applied to rapid MR relaxometry in terms of accelerated data acquisitions and/or efficient and accurate parameter fitting. This structure presents how deep-learning-based methods can be designed for rapid MR relaxometry and how they can be linked to conventional approaches.

The scope of this paper is limited to MR relaxometry, which typically refers to mapping of T1, T2, or T1ρ parameters. However, the overall framework presented in this paper could also be extended to other quantitative imaging applications, such as diffusion imaging,181,182 quantitative susceptibility mapping,183-185 or perfusion imaging.186-188 Meanwhile, the scope of deep learning for MR relaxometry is further limited here to data acquisition, image reconstruction, and parameter fitting (eg using deep learning to generate better images and/or better parameter maps). The use of deep learning could go beyond these applications and help MR relaxometry from other perspectives. For example, recent studies have applied deep-learning-based segmentation approaches for automated analysis of cardiac T1 mapping images189 and knee joint T2/T1ρ mapping images.190 In these studies, deep learning was used to automatically segment the myocardial wall and the knee cartilage and meniscus, tasks that are typically performed manually or semi-manually in conventional image analysis.

There has long been an interest in translating MR relaxometry into the clinical environment. However, this has been challenging due to its long acquisition times, slow reconstruction, and cumbersome overall imaging workflow. These limitations have, in turn, restricted thorough clinical evaluation of MR relaxometry in terms of repeatability, reproducibility, and true added clinical value compared with conventional contrast-weighted images. As a relatively new direction, deep learning holds great promise for further pushing the translation of MR relaxometry into routine clinical use. The improved imaging efficiency offered by deep learning, including highly accelerated data acquisition, faster image/parameter reconstruction, and more efficient parameter fitting, would enable evaluation of MR relaxometry in large-scale clinical studies, so that its clinical value, repeatability, reproducibility, and robustness can be further investigated in day-to-day MRI exams. However, this is a task that will require vendor involvement and close academic-industrial partnership.

The main challenge of deep learning is the requirement for training datasets, which for image reconstruction are normally fully sampled MR images. For conventional static MRI reconstruction, remarkable efforts have been made to address this issue, such as the recently launched fastMRI project and its database released for free public use.191 However, this challenge is expected to be much greater and more complicated for quantitative MRI, including MR relaxometry, because quantitative MRI is not routinely performed in the current clinical environment, which limits the accumulation of training datasets. Image augmentation with MR simulation is a potential solution. Realistic numerical simulation using Bloch equations, Bloch-McConnell equations, or other physical models describing molecular diffusion and perfusion could generate training datasets that provide realistic MR signal variation for training deep learning networks.122,192 However, it is important that the simulation accounts for various system imperfections, so that the synthetic data mimic the image data typically seen in practice, and that it incorporates an adequate range of image features to create robust and generalizable deep learning models. Alternatively, recent studies have proposed the use of GANs to synthesize MR images for augmenting the training database.193-199 While these approaches have shown promising results in generating synthetic images for automated medical image segmentation and disease diagnosis, it remains unclear whether GANs can generate adequate dynamic information with sufficient signal variation for training a robust MR relaxometry model. It should also be noted that training a GAN is challenging, because the competition between the CNN generator and discriminator can cause training instability and mode collapse.200-203 Additional care should be taken to avoid GAN-induced image hallucination (eg pseudo image features), which can degrade the training datasets and the resulting deep learning models. Moreover, transfer learning is a widely applied technique for training with a small database,204,205 in which a network is pre-trained on datasets unrelated to the target task and then fine-tuned with task-specific datasets. However, whether transfer learning can be applied to transfer quantitative MR information remains to be explored. Finally, the development of new deep learning architectures that work with limited training data might provide a solution to this problem. For example, instead of relying entirely on training datasets to supervise the network, model-based deep learning approaches in the context of weak supervision or self-supervision use prior physical knowledge to regularize network training and learn useful image features.169,171 Without compromising training performance, this could alleviate the demand for training data and enable efficient learning from a small number of datasets. A more comprehensive investigation into training deep-learning-based MR relaxometry with limited datasets would play an important role in facilitating its translation into clinical practice.
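As a minimal example of simulation-based training data generation, the sketch below synthesizes noisy inversion-recovery T1 signal curves over a plausible T1 range, producing (signal, T1) pairs that could supervise a mapping network without in vivo references. The inversion times, T1 range, and noise level are assumed values, and a full Bloch or Bloch-McConnell simulation would replace the closed-form signal equation in more realistic settings.

```python
import numpy as np

rng = np.random.default_rng(0)
TI = np.array([100.0, 200.0, 400.0, 800.0, 1600.0, 3200.0])     # inversion times in ms (assumed)

def simulate_ir_batch(n_samples=10000, noise_sd=0.02):
    """Return (signals, T1 labels) for supervised training of a T1 mapping network."""
    t1 = rng.uniform(200.0, 2500.0, size=(n_samples, 1))        # T1 range in ms (assumed)
    signals = np.abs(1.0 - 2.0 * np.exp(-TI / t1))              # ideal magnitude IR signal
    signals += rng.normal(0.0, noise_sd, size=signals.shape)    # additive measurement noise
    return signals.astype(np.float32), t1.astype(np.float32)

x_train, y_train = simulate_ir_batch()
```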

Another challenge of applying deep learning to MR relaxometry is the large data dimensionality. MR relaxometry datasets usually comprise many dynamic frames, which demands higher-performance computing hardware with more GPU memory and longer training times than applications on static images. This issue is expected to be alleviated by the rapid evolution of computing technologies and the development of new deep learning algorithms for handling multidimensional datasets. Recent efforts in both industry and academia have produced more powerful parallel computing devices, cloud computing, and large-scale servers to address this bottleneck in large computing problems. Meanwhile, memory-efficient deep learning structures have also been actively developed and tested.206-210 With the advance of memory-efficient network design, flexible configurations of complex deep learning architectures such as RCNNs can be implemented to fully characterize the dynamic image features in MR relaxometry, and deeper networks with more convolutional layers and processing modules can be built to improve learning capability. The combination of these software and hardware advances is expected to substantially promote the use of deep learning in MR relaxometry.
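One widely available memory-saving option of this kind is gradient checkpointing, sketched below in PyTorch: intermediate activations of a block are discarded during the forward pass and recomputed during backpropagation, trading computation for GPU memory. The convolutional block is an illustrative stand-in, not a specific relaxometry network.

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

# An illustrative 3D convolutional block; activations inside it are recomputed in backward.
block = nn.Sequential(nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv3d(16, 16, 3, padding=1), nn.ReLU())

def forward_with_checkpointing(x):
    return checkpoint(block, x, use_reentrant=False)   # trade recomputation for GPU memory

x = torch.randn(1, 16, 8, 64, 64, requires_grad=True)
y = forward_with_checkpointing(x)
y.sum().backward()
```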

Finally, there is the challenge of creating a robust and generalizable deep learning model for MR relaxometry. Recent studies have reported that deep-learning-based MRI reconstruction may give inconsistent results when imaging parameters change between the network training step and the evaluation step.82,211 As imaging parameters can vary across different times, scanners, coils, and protocols, the generalization of deep learning models against the parameter discrepancies that commonly occur in a clinical setting needs to be investigated. This might be achieved by embedding the variation of imaging parameters into the training of deep neural networks, so that the imaging parameters can be treated as extra information and learned jointly with image features, similar to the recent demonstrations in References 212-214. It is also important to perform regular model calibration to ensure consistent performance at different time points. Furthermore, deep learning models are expected to be robust against pathologies that are unseen in the training datasets. Careful clinical evaluation of a trained network on a large number of clinical examinations with a wide range of disease phenotypes is therefore important to establish whether it can robustly detect different abnormalities in the clinic.
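As a simple illustration of the parameter-conditioning idea mentioned above, the sketch below broadcasts normalized acquisition settings (for example TE and FA) into constant feature maps and concatenates them with the image channels, so the network can learn parameter-aware features. The architecture and normalization are illustrative assumptions rather than the conditioning schemes of References 212-214.

```python
import torch
import torch.nn as nn

class ParameterConditionedCNN(nn.Module):
    """Concatenate broadcast acquisition-parameter maps (eg normalized TE and FA) with the images."""
    def __init__(self, n_images=8, n_params=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_images + n_params, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1))

    def forward(self, images, params):          # images: (B, n_images, H, W); params: (B, n_params)
        b, _, h, w = images.shape
        param_maps = params[:, :, None, None].expand(b, params.shape[1], h, w)
        return self.net(torch.cat([images, param_maps], dim=1))
```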

In summary, deep learning holds great potential to address the current challenges associated with MR relaxometry and to deliver a rapid and efficient MR relaxometry framework that is clinically translatable. However, many challenges still exist, which require further careful investigation and rigorous research effort before clinical examinations can truly benefit from these new imaging methods.

ACKNOWLEDGEMENT

Research support was provided by National Institutes of Health grants P41 EB022544, R01 CA165221, R01 NS109439, and R21 EB026764.

Abbreviations:

BN, batch normalization
CNN, convolutional neural network
CPU, central processing unit
CT, computed tomography
DeepT1, deep learning for T1 mapping
DESS, dual-echo steady-state sequence
FA, flip angle
FCN, fully connected neural network
FFT, fast Fourier transform
GAN, generative adversarial network
GPU, graphics processing unit
MANTIS, model-augmented neural network with incoherent k-space sampling
MANTIS-GAN, MANTIS with adversarial training
MOLLI, modified Look-Locker
MR, magnetic resonance
MRF, MR fingerprinting
MRF-Drone, MR fingerprinting deep reconstruction network
MRI, magnetic resonance imaging
MRSR, MR super-resolution
MSCNN, model skipped convolutional neural network
nRMSE, normalized root mean square error
OLED, overlapping-echo detachment
PCA, principal component analysis
RCNN, recurrent convolutional neural network
Relax-MANTIS, reference-free latent map extraction MANTIS
ReLU, rectified linear unit
RF, radiofrequency
SCQ, spatially-constrained tissue quantification
SNR, signal-to-noise ratio
SVD, singular value decomposition
TrueFISP, True Fast Imaging with Steady State Precession
T1, spin-lattice relaxation time
T1ρ, spin-lattice relaxation in the rotating frame
T2, spin-spin relaxation time
TE, echo time
TR, repetition time
UTE, ultra-short echo time
VN, variational network

REFERENCES

  • 1.van Beek EJR, Kuhl C, Anzai Y, et al. Value of MRI in medicine: more than just another test? J Magn Reson Imaging. 2019;49:e14–e25. 10.1002/jmri.26211 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Magnetic Resonance Imaging—an overview. ScienceDirect Topics. https://www.sciencedirect.com/topics/engineering/magnetic-resonance-imaging. Accessed June 11, 2020. [Google Scholar]
  • 3.McRobbie DW, Moore EA, Graves MJ, Prince MR. Radiology MRI: from picture to proton. Radiology. 2004;52319;474. [Google Scholar]
  • 4.Cleary JOSH, Guimarães AR. Magnetic resonance imaging. In: McManus LM, Mitchell RN, eds. Pathobiology of Human Disease: A Dynamic Encyclopedia of Disease Mechanisms. Elsevier; 2014:3987–4004. 10.1016/B978-0-12-386456-7.07609-7 [DOI] [Google Scholar]
  • 5.Ortendahl DA, Hylton NM, Kaufman L. Tissue characterization with MRI: the value of the MR parameters. In: Higer HP, Bielke G, eds. Tissue Characterization in MR Imaging. Berlin: Springer; 1990:126–138. 10.1007/978-3-642-74993-3_20 [DOI] [Google Scholar]
  • 6.Redfield AG. Nuclear spin thermodynamics in the rotating frame. Science. 1969;164:1015–1023. 10.1126/science.164.3883.1015 [DOI] [PubMed] [Google Scholar]
  • 7.Sepponen RE, Pohjonen JA, Sipponen JT, Tanttu JI. A method for T1p imaging. J Comput Assist Tomogr. 1985;9:1007–1011. 10.1097/00004728-198511000-00002 [DOI] [PubMed] [Google Scholar]
  • 8.Wáng Y-XJ, Zhang Q, Li X, Chen W, Ahuja A, Yuan J. T1ρ magnetic resonance: basic physics principles and applications in knee and intervertebral disc imaging. Quant Imaging Med Surg. 2015;5:858–885. 10.3978/j.issn.2223-4292.2015.12.06 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Doran SJ. Chapter 21 Quantitative magnetic resonance imaging: applications and estimation of errors. Data Handl Sci Technol. 1996;18:452–488. 10.1016/S0922-3487(96)80058-X [DOI] [Google Scholar]
  • 10.Cheng HLM, Stikov N, Ghugre NR, Wright GA. Practical medical applications of quantitative MR relaxometry. J Magn Reson Imaging. 2012;36:805–824. 10.1002/jmri.23718 [DOI] [PubMed] [Google Scholar]
  • 11.Laakso MP, Partanen K, Soininen H, et al. MR T2 relaxometry in Alzheimer’s disease and age-associated memory impairment. Neurobiol Aging. 1996; 17:535–540. 10.1016/0197-4580(96)00036-x [DOI] [PubMed] [Google Scholar]
  • 12.House MJ, Pierre TG St., Foster JK, Martins RN, Clarnette R. Quantitative MR imaging R2 relaxometry in elderly participants reporting memory loss. Am J Neuroradiol. 2006;27:430–439. [PMC free article] [PubMed] [Google Scholar]
  • 13.Jackson GD, Connelly A, Duncan JS, Grunewald RA, Gadian DG. Detection of hippocampal pathology in intractable partial epilepsy: increased sensitivity with quantitative magnetic resonance T2 relaxometry. Neurology. 1993;43:1793–1799. 10.1212/wnl.43.9.1793 [DOI] [PubMed] [Google Scholar]
  • 14.Townsend TN, Bernasconi N, Pike GB, Bernasconi A. Quantitative analysis of temporal lobe white matter T2 relaxation time in temporal lobe epilepsy. NeuroImage. 2004;23:318–324. 10.1016/j.neuroimage.2004.06.009 [DOI] [PubMed] [Google Scholar]
  • 15.Ma D, Jones SE, Deshmane A, et al. Development of high-resolution 3D MR fingerprinting for detection and characterization of epileptic lesions. J Magn Reson Imaging. 2019;49:1333–1346. 10.1002/jmri.26319 [DOI] [PubMed] [Google Scholar]
  • 16.Mondino F, Filippi P, Magliola U, Duca S. Magnetic resonance relaxometry in Parkinson’s disease. Neurol Sci. 2002;23:s87–s88. 10.1007/s100720200083 [DOI] [PubMed] [Google Scholar]
  • 17.Egger K, Amtage F, Yang S, et al. T2* relaxometry in patients with Parkinson’s disease: use of an automated atlas-based approach. Clin Neuroradiol. 2018;28:63–67. 10.1007/s00062-016-0523-2 [DOI] [PubMed] [Google Scholar]
  • 18.Vymazal J, Righini A, Brooks RA, et al. T1 and T2 in the brain of healthy subjects, patients with Parkinson disease, and patients with multiple system atrophy: relation to iron content. Radiology. 1999;211:489–495. 10.1148/radiology.211.2.r99ma53489 [DOI] [PubMed] [Google Scholar]
  • 19.Manfredonia F, Ciccarelli O, Khaleeli Z, et al. Normal-appearing brain T1 relaxation time predicts disability in early primary progressive multiple sclerosis. Arch Neurol. 2007;64:411–415. 10.1001/archneur.64.3.411 [DOI] [PubMed] [Google Scholar]
  • 20.Rinck PA, Muller RN, Fischer HW. Magnetic resonance relaxometry and tumors. In: Breit A, Heuck A, Lukas P, Kneschaurek P, Mayr M, eds. Tumor Response Monitoring and Treatment Planning. Heidelberg: Springer; 1992:11–14 10.1007/978-3-642-48681-4_2 [DOI] [Google Scholar]
  • 21.De Haro LP, Karaulanov T, Vreeland EC, et al. Magnetic relaxometry as applied to sensitive cancer detection and localization. Biomed Tech (Berl). 2015;60:445–455. 10.1515/bmt-2015-0053 [DOI] [PubMed] [Google Scholar]
  • 22.Mai J, Abubrig M, Lehmann T, et al. T2 mapping in prostate cancer. Invest Radiol. 2019;54:146–152. 10.1097/RLI.0000000000000520 [DOI] [PubMed] [Google Scholar]
  • 23.Chatterjee A, Devaraj A, Mathew M, et al. Performance of T2 maps in the detection of prostate cancer. Acad Radiol. 2019;26:15–21. 10.1016/j.acra.2018.04.005 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Hosch W, Bock M, Libicher M, et al. MR-relaxometry of myocardial tissue. Invest Radiol. 2007;42:636–642. 10.1097/RLI.0b013e318059e021 [DOI] [PubMed] [Google Scholar]
  • 25.Taylor AJ, Salerno M, Dharmakumar R, Jerosch-Herold M. T1 mapping: basic techniques and clinical applications. JACC Cardiovasc Imaging. 2016;9: 67–81. 10.1016/j.jcmg.2015.11.005 [DOI] [PubMed] [Google Scholar]
  • 26.Palmisano A, Benedetti G, Faletti R, et al. Early T1 myocardial MRI mapping: value in detecting myocardial hyperemia in acute myocarditis. Radiology. 2020;295:316–325. 10.1148/radiol.2020191623 [DOI] [PubMed] [Google Scholar]
  • 27.De Cecco CN, Monti CB. Use of early T1 mapping for MRI in acute myocarditis. Radiology. 2020;295:326–327. 10.1148/radiol.2020200171 [DOI] [PubMed] [Google Scholar]
  • 28.Regatte RR, Akella SVS, Wheaton AJ, et al. 3D-T-relaxation mapping of articular cartilage: in vivo assessment of early degenerative changes in symptomatic osteoarthritic subjects. Acad Radiol. 2004;11:741–749. 10.1016/j.acra.2004.03.051 [DOI] [PubMed] [Google Scholar]
  • 29.Regatte RR, Akella SVS, Lonner JH, Kneeland JB, Reddy R. T1ρ mapping in human osteoarthritis (OA) cartilage: comparison of T1ρ with T2. J Magn Reson Imaging. 2006;23:547–553. 10.1002/jmri.20536 [DOI] [PubMed] [Google Scholar]
  • 30.Liu F, Choi KW, Samsonov A, et al. Articular cartilage of the human knee joint: in vivo multicomponent T2 Analysis at 3.0 T. Radiology. 2015;277: 477–488. 10.1148/radiol.2015142201 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Kijowski R, Blankenbaker DG, Munoz del Rio A, Baer GS, Graf BK. Evaluation of the articular cartilage of the knee joint: value of adding a T2 mapping sequence to a routine MR imaging protocol. Radiology. 2013;267:503–513. 10.1148/radiol.12121413 [DOI] [PubMed] [Google Scholar]
  • 32.Guermazi A, Alizai H, Crema MD, Trattnig S, Regatte RR, Roemer FW. Compositional MRI techniques for evaluation of cartilage degeneration in osteoarthritis. Osteoarthr Cartil. 2015;23:1639–1653. 10.1016/j.joca.2015.05.026 [DOI] [PubMed] [Google Scholar]
  • 33.Luetkens JA, Klein S, Träber F, et al. Quantification of liver fibrosis at T1 and T2 mapping with extracellular volume fraction MRI: preclinical results. Radiology. 2018;288:748–754. 10.1148/radiol.2018180051 [DOI] [PubMed] [Google Scholar]
  • 34.Stadler A, Jakob PM, Griswold M, Stiebellehner L, Barth M, Bankier AA. T1 mapping of the entire lung parenchyma: influence of respiratory phase and correlation to lung function test results in patients with diffuse lung disease. Magn Reson Med. 2008;59:96–101. 10.1002/mrm.21446 [DOI] [PubMed] [Google Scholar]
  • 35.Poon CS, Henkelman RM. Practical T2 quantitation for clinical applications. J Magn Reson Imaging. 1992;2:541–553. 10.1002/jmri.1880020512 [DOI] [PubMed] [Google Scholar]
  • 36.Deoni SCL. Quantitative relaxometry of the brain. Top Magn Reson Imaging. 2010;21:101–113. 10.1097/RMR.0b013e31821e56d8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.De Mello R, Ma Y, Ji Y, Du J, Chang EY. Quantitative MRI musculoskeletal techniques: an update. Am J Roentgenol. 2019;213:524–533. 10.2214/AJR.19.21143 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38.Relaxometry—an overview. ScienceDirect Topics. https://www.sciencedirect.com/topics/medicine-and-dentistry/relaxometry. Accessed June 11, 2020. [Google Scholar]
  • 39.Whittall KP, MacKay AL. Quantitative interpretation of NMR relaxation data. J Magn Reson. 1989;84:134–152. 10.1016/0022-2364(89)90011-5 [DOI] [Google Scholar]
  • 40.Deoni SC, Rutt BK, Arun T, Pierpaoli C, Jones DK. Gleaning multicomponent T1 and T2 information from steady-state imaging data. Magn Reson Med. 2008;60:1372–1387. 10.1002/mrm.21704 [DOI] [PubMed] [Google Scholar]
  • 41.Liu F, Block WF, Kijowski R, Samsonov A. Rapid multicomponent relaxometry in steady state with correction of magnetization transfer effects. Magn Reson Med. 2016;75:1423–1433. 10.1002/mrm.25672 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42.Liu F, Kijowski R. Assessment of different fitting methods for in-vivo bi-component T2* analysis of human patellar tendon in magnetic resonance imaging. Muscles Ligaments Tendons J. 2017;7:163–172. 10.11138/mltj/2017.7.1.163 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43.Hennig J Multiecho imaging sequences with low refocusing flip angles. J Magn Reson. 1988;78:397–407. 10.1016/0022-2364(88)90128-X [DOI] [Google Scholar]
  • 44.Ben-Eliezer N, Sodickson DK, Block KT. Rapid and accurate T2 mapping from multi-spin-echo data using Bloch-simulation-based reconstruction. Magn Reson Med. 2015;73:809–817. 10.1002/mrm.25156 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Yarnykh VL. Actual flip-angle imaging in the pulsed steady state: a method for rapid three-dimensional mapping of the transmitted radiofrequency field. Magn Reson Med. 2007;57:192–200. 10.1002/mrm.21120 [DOI] [PubMed] [Google Scholar]
  • 46.Deoni SCL. High-resolution T1 mapping of the brain at 3T with driven equilibrium single pulse observation of T1 with high-speed incorporation of RF field inhomogeneities (DESPOT1-HIFI). J Magn Reson Imaging. 2007;26:1106–1111. 10.1002/jmri.21130 [DOI] [PubMed] [Google Scholar]
  • 47.Sacolick LI, Wiesinger F, Hancu I, Vogel MW. B1 mapping by Bloch-Siegert shift. Magn Reson Med. 2010;63:1315–1322. 10.1002/mrm.22357 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48.Jansen JFA, Kooi ME, Kessels AGH, Nicolay K, Backes WH. Reproducibility of quantitative cerebral T2 relaxometry, diffusion tensor imaging, and 1H magnetic resonance spectroscopy at 3.0 tesla. Invest Radiol. 2007;42:327–337. 10.1097/01.rli.0000262757.10271.e5 [DOI] [PubMed] [Google Scholar]
  • 49.Waterton JC, Hines CDG, Hockings PD, et al. Repeatability and reproducibility of longitudinal relaxation rate in 12 small-animal MRI systems. Magn Reson Imaging. 2019;59:121–129. 10.1016/j.mri.2019.03.008 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 50.Sodickson DK, Manning WJ. Simultaneous acquisition of spatial harmonics (SMASH): fast imaging with radiofrequency coil arrays. Magn Reson Med. 1997;38:591–603. [DOI] [PubMed] [Google Scholar]
  • 51.Pruessmann KP, Weiger M, Scheidegger MB, Boesiger P. SENSE: sensitivity encoding for fast MRI. Magn Reson Med. 1999;42:952–962. [PubMed] [Google Scholar]
  • 52.Griswold MA, Jakob PM, Heidemann RM, et al. Generalized autocalibrating partially parallel acquisitions (GRAPPA). Magn Reson Med. 2002;47:1202–1210. 10.1002/mrm.10171 [DOI] [PubMed] [Google Scholar]
  • 53.Sodickson DK, McKenzie CA. A generalized approach to parallel magnetic resonance imaging. Med Phys. 2001;28:1629–1643. 10.1118/1.1386778 [DOI] [PubMed] [Google Scholar]
  • 54.Kellman P, Epstein FH, McVeigh ER. Adaptive sensitivity encoding incorporating temporal filtering (TSENSE). Magn Reson Med. 2001;45:846–852. 10.1002/mrm.1113 [DOI] [PubMed] [Google Scholar]
  • 55.Breuer FA, Kellman P, Griswold MA, Jakob PM. Dynamic autocalibrated parallel imaging using temporal GRAPPA (TGRAPPA). Magn Reson Med. 2005;53:981–985. 10.1002/mrm.20430 [DOI] [PubMed] [Google Scholar]
  • 56.Deshmane A, Gulani V, Griswold MA, Seiberlich N. Parallel MR imaging. J Magn Reson Imaging. 2012;36:55–72. 10.1002/jmri.23639 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 57.Tsao J, Boesiger P, Pruessmann KP. k-t BLAST and k-t SENSE: dynamic MRI with high frame rate exploiting spatiotemporal correlations. Magn Reson Med. 2003;50:1031–1042. 10.1002/mrm.10611 [DOI] [PubMed] [Google Scholar]
  • 58.Huang F, Akao J, Vijayakumar S, Duensing GR, Limkeman M. k-t GRAPPA: a k-space implementation for dynamic MRI with high reduction factor. Magn Reson Med. 2005;54:1172–1184. 10.1002/mrm.20641 [DOI] [PubMed] [Google Scholar]
  • 59.Xu D, King KF, Liang ZP. Improving k-t SENSE by adaptive regularization. Magn Reson Med. 2007;57:918–930. 10.1002/mrm.21203 [DOI] [PubMed] [Google Scholar]
  • 60.Lustig M, Santos JM, Donoho DL, Pauly JM. k-t SPARSE: high frame rate dynamic MRI exploiting spatio-temporal sparsity. Proc Int Soc Magn Reson Med. 2006;14:2420. [Google Scholar]
  • 61.Gamper U, Boesiger P, Kozerke S. Compressed sensing in dynamic MRI. Magn Reson Med. 2008;59:365–373. 10.1002/mrm.21477 [DOI] [PubMed] [Google Scholar]
  • 62.Otazo R, Kim D, Axel L, Sodickson DK. Combination of compressed sensing and parallel imaging for highly accelerated first-pass cardiac perfusion MRI. Magn Reson Med. 2010;64:767–776. 10.1002/mrm.22463 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 63.Jung H, Sung K, Nayak KS, Kim EY, Ye JC. k-t FOCUSS: a general compressed sensing framework for high resolution dynamic MRI. Magn Reson Med. 2009;61:103–116. 10.1002/mrm.21757 [DOI] [PubMed] [Google Scholar]
  • 64.Feng L, Grimm R, Block KT, et al. Golden-angle radial sparse parallel MRI: combination of compressed sensing, parallel imaging, and golden-angle radial sampling for fast and flexible dynamic volumetric MRI. Magn Reson Med. 2014;72:707–717. 10.1002/mrm.24980 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 65.Lingala SG, Hu Y, DiBella E, Jacob M. Accelerated dynamic MRI exploiting sparsity and low-rank structure: k-t SLR. IEEE Trans Med Imaging. 2011;30: 1042–1054. 10.1109/TMI.2010.2100850 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 66.Tsao J, Kozerke S. MRI temporal acceleration techniques. J Magn Reson Imaging. 2012;36:543–560. 10.1002/jmri.23640 [DOI] [PubMed] [Google Scholar]
  • 67.Feng L, Otazo R, Jung H, et al. Accelerated cardiac T2 mapping using breath-hold multiecho fast spin-echo pulse sequence with k-t FOCUSS. Magn Reson Med. 2011;65:1661–1669. 10.1002/mrm.22756 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 68.Zhang T, Pauly JM, Levesque IR. Accelerating parameter mapping with a locally low rank constraint. Magn Reson Med. 2015;73:655–661. 10.1002/mrm.25161 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 69.Velikina JV, Alexander AL, Samsonov A. Accelerating MR parameter mapping using sparsity-promoting regularization in parametric dimension. Magn Reson Med. 2013;70:1263–1273. 10.1002/mrm.24577 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 70.Doneva M, Börnert P, Eggers H, Stehning C, Sénégas J, Mertins A. Compressed sensing reconstruction for magnetic resonance parameter mapping. Magn Reson Med. 2010;64:1114–1120. 10.1002/mrm.22483 [DOI] [PubMed] [Google Scholar]
  • 71.Huang C, Graff CG, Clarkson EW, Bilgin A, Altbach MI. T2 mapping from highly undersampled data by reconstruction of principal component coefficient maps using compressed sensing. Magn Reson Med. 2012;67:1355–1366. 10.1002/mrm.23128 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 72.Petzschner FH, Ponce IP, Blaimer M, Jakob PM, Breuer FA. Fast MR parameter mapping using k-t principal component analysis. Magn Reson Med. 2011;66:706–716. 10.1002/mrm.22826 [DOI] [PubMed] [Google Scholar]
  • 73.Sumpf TJ, Uecker M, Boretius S, Frahm J. Model-based nonlinear inverse reconstruction for T2 mapping using highly undersampled spin-echo MRI. J Magn Reson Imaging. 2011;34:420–428. 10.1002/jmri.22634 [DOI] [PubMed] [Google Scholar]
  • 74.Block KTKT, Uecker M, Frahm J. Model-based iterative reconstruction for radial fast spin-echo MRI. IEEE Trans Med Imaging. 2009;28:1759–1769. 10.1109/TMI.2009.2023119 [DOI] [PubMed] [Google Scholar]
  • 75.Wang X, Roeloffs V, Klosowski J, et al. Model-based T1 mapping with sparsity constraints using single-shot inversion-recovery radial FLASH. Magn Reson Med. 2018;79:730–740. 10.1002/mrm.26726 [DOI] [PubMed] [Google Scholar]
  • 76.Fessler J Model-based image reconstruction for MRI. IEEE Signal Process Mag. 2010;27:81–89. 10.1109/MSP.2010.936726 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 77.Ma D, Gulani V, Seiberlich N, et al. Magnetic resonance fingerprinting. Nature. 2013;495:187–192. 10.1038/nature11971 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 78.LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521:436–444. 10.1038/nature14539 [DOI] [PubMed] [Google Scholar]
  • 79.Hammernik K, Klatzer T, Kobler E, et al. Learning a variational network for reconstruction of accelerated MRI data. Magn Reson Med. 2017;79:3055–3071. 10.1002/mrm.26977 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 80.Zhu B, Liu JZ, Cauley SF, Rosen BR, Rosen MS. Image reconstruction by domain-transform manifold learning. Nature. 2018;555:487–492. 10.1038/nature25988 [DOI] [PubMed] [Google Scholar]
  • 81.Schlemper J, Caballero J, Hajnal JV, Price A, Rueckert D. A deep cascade of convolutional neural networks for dynamic MR image reconstruction. In: Niethammer M et al. , eds. Information Processing in Medical Imaging. IPMI 2017. Lecture Notes in Computer Science, Vol 10265 Cham, Switzerland: Springer; 2017. 10.1007/978-3-319-59050-9_51 [DOI] [Google Scholar]
  • 82.Liu F, Samsonov A, Chen L, Kijowski R, Feng L. SANTIS: Sampling-Augmented Neural neTwork with Incoherent Structure for MR image reconstruction. Magn Reson Med. 2019;1–15. 10.1002/mrm.27827 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 83.Akçakaya M, Moeller S, Weingärtner S, Uğurbil K. Scan-specific robust artificial-neural-networks for k-space interpolation (RAKI) reconstruction: database-free deep learning for fast imaging. Magn Reson Med. 2019;81:439–453. 10.1002/mrm.27420 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 84.Litjens G, Kooi T, Bejnordi BE, et al. A survey on deep learning in medical image analysis. Med Image Anal. 2017;42:60–88. 10.1016/j.media.2017.07.005 [DOI] [PubMed] [Google Scholar]
  • 85.Shen D, Wu G, Suk H-I. Deep learning in medical image analysis. Annu Rev Biomed Eng. 2017;19:221–248. 10.1146/annurev-bioeng-071516-044442 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 86.Lundervold AS, Lundervold A. An overview of deep learning in medical imaging focusing on MRI. Z Med Phys. 2019;29:102–127. 10.1016/j.zemedi.2018.11.002 [DOI] [PubMed] [Google Scholar]
  • 87.Kijowski R, Liu F, Caliva F, Pedoia V. Deep learning for lesion detection, progression, and prediction of musculoskeletal disease. J Magn Reson Imaging. 2019;jmri.27001. 10.1002/jmri.27001 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 88.Hosny A, Parmar C, Quackenbush J, Schwartz LH, Aerts HJWL. Artificial intelligence in radiology. Nat Rev Cancer. 2018;18:500–510. 10.1038/s41568-018-0016-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 89.Van Vaals JJ, Brummer ME, Thomas Dixon W, et al. “Keyhole” method for accelerating imaging of contrast agent uptake. J Magn Reson Imaging. 1993;3:671–675. 10.1002/jmri.1880030419 [DOI] [PubMed] [Google Scholar]
  • 90.Doyle M, Walsh EG, Blackwell GG, Pohost GM. Block regional interpolation scheme for k-space (BRISK): a rapid cardiac imaging technique. Magn Reson Med. 1995;33:163–170. 10.1002/mrm.1910330204 [DOI] [PubMed] [Google Scholar]
  • 91.Parrish T, Hu X. Continuous update with random encoding (CURE): a new strategy for dynamic imaging. Magn Reson Med. 1995;33:326–336. 10.1002/mrm.1910330307 [DOI] [PubMed] [Google Scholar]
  • 92.Wright KL, Hamilton JI, Griswold MA, Gulani V, Seiberlich N. Non-Cartesian parallel imaging reconstruction. J Magn Reson Imaging. 2014;40:1022–1040. 10.1002/jmri.24521 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 93.Lustig M, Donoho D, Pauly JM. Sparse MRI: the application of compressed sensing for rapid MR imaging. Magn Reson Med. 2007;58:1182–1195. 10.1002/mrm.21391 [DOI] [PubMed] [Google Scholar]
  • 94.Block KT, Uecker M, Frahm J. Undersampled radial MRI with multiple coils. Iterative image reconstruction using a total variation constraint. Magn Reson Med. 2007;57:1086–1098. 10.1002/mrm.21236 [DOI] [PubMed] [Google Scholar]
  • 95.Christodoulou AG, Shaw JL, Nguyen C, et al. Magnetic resonance multitasking for motion-resolved quantitative cardiovascular imaging. Nat Biomed Eng. 2018;2:215–226. 10.1038/s41551-018-0217-y [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 96.Pai A, Li X, Majumdar S. A comparative study at 3 T of sequence dependence of T2 quantitation in the knee. Magn Reson Imaging. 2008;26: 1215–1220. 10.1016/j.mri.2008.02.017 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 97.Hahn EL. An accurate nuclear magnetic resonance method for measuring spin-lattice relaxation times. Phys Rev. 1949;76:145–146. 10.1103/PhysRev.76.145 [DOI] [Google Scholar]
  • 98.Drain LE. A direct method of measuring nuclear spin-lattice relaxation times. Proc Phys Soc A. 1949;62:301–306. 10.1088/0370-1298/62/5/306 [DOI] [Google Scholar]
  • 99.Messroghli DR, Radjenovic A, Kozerke S, Higgins DM, Sivananthan MU, Ridgway JP. Modified Look-Locker inversion recovery (MOLLI) for high-resolution T1 mapping of the heart. Magn Reson Med. 2004;52:141–146. 10.1002/mrm.20110 [DOI] [PubMed] [Google Scholar]
  • 100.Wang HZ, Riederer SJ, Lee JN. Optimizing the precision in T1 relaxation estimation using limited flip angles. Magn Reson Med. 1987;5:399–416. 10.1002/mrm.1910050502 [DOI] [PubMed] [Google Scholar]
  • 101.Venkatesan R, Lin W, Haacke EM. Accurate determination of spin-density and T1 in the presence of RF- field inhomogeneities and flip-angle miscalibration. Magn Reson Med. 1998;40:592–602. 10.1002/mrm.1910400412 [DOI] [PubMed] [Google Scholar]
  • 102.Stikov N, Boudreau M, Levesque IR, Tardif CL, Barral JK, Pike GB. On the accuracy of T1 mapping: searching for common ground. Magn Reson Med. 2015;73:514–522. 10.1002/mrm.25135 [DOI] [PubMed] [Google Scholar]
  • 103.Duvvuri U, Reddy R, Patel SD, Kaufman JH, Kneeland JB, Leigh JS. T-relaxation in articular cartilage: effects of enzymatic degradation. Magn Reson Med. 1997;38:863–867. 10.1002/mrm.1910380602 [DOI] [PubMed] [Google Scholar]
  • 104.Dixon WT, Oshinski JN, Trudeau JD, Arnold BC, Pettigrew RI. Myocardial suppression in vivo by spin locking with composite pulses. Magn Reson Med. 1996;36:90–94. 10.1002/mrm.1910360116 [DOI] [PubMed] [Google Scholar]
  • 105.Li X, Han ET, Ma CB, Link TM, Newitt DC, Majumdar S. In vivo 3T spiral imaging based multi-slice T mapping of knee cartilage in osteoarthritis. Magn Reson Med. 2005;54:929–936. 10.1002/mrm.20609 [DOI] [PubMed] [Google Scholar]
  • 106.Sodickson DK, Manning WJ. Simultaneous acquisition of spatial harmonics (SMASH): fast imaging with radiofrequency coil arrays. Magn Reson Med. 1997;38:591–603. [DOI] [PubMed] [Google Scholar]
  • 107.Pruessmann KP, Weiger M, Börnert P, Boesiger P. Advances in sensitivity encoding with arbitrary k-space trajectories. Magn Reson Med. 2001;46: 638–651. [DOI] [PubMed] [Google Scholar]
  • 108.Liu B, Zou YM, Ying L. SparseSENSE: application of compressed sensing in parallel MRI. In: 2008 International Conference on Technology and Applications in Biomedicine. IEEE; 2008:127–130. 10.1109/ITAB.2008.4570588 [DOI] [Google Scholar]
  • 109.Feng L, Srichai MB, Lim RP, et al. Highly accelerated real-time cardiac cine MRI using k-t SPARSE-SENSE. Magn Reson Med. 2013;70:64–74. 10.1002/mrm.24440 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 110.Liang ZP, Boada FE, Constable RT, Haacke EM, Lauterbur PC, Smith MR. Constrained reconstruction methods in MR imaging. Rev Magn Reson Med. 1992;4:67–185. [Google Scholar]
  • 111.Candès EJ, Romberg J, Tao T. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans Inf Theory. 2006;52:489–509. 10.1109/TIT.2005.862083 [DOI] [Google Scholar]
  • 112.Donoho DL. Compressed sensing. IEEE Trans Inf Theory. 2006;52:1289–1306. 10.1109/TIT.2006.871582 [DOI] [Google Scholar]
  • 113.Lee D, Jin KH, Kim EY, Park S-H, Ye JC. Acceleration of MR parameter mapping using annihilating filter-based low rank Hankel matrix (ALOHA). Magn Reson Med. 2016;76:1848–1864. 10.1002/mrm.26081 [DOI] [PubMed] [Google Scholar]
  • 114.Liang Z-P. Spatiotemporal imaging with partially separable functions. In: 2007 Joint Meeting of the 6th International Symposium on Noninvasive Functional Source Imaging of the Brain and Heart and the International Conference on Functional Biomedical Imaging. IEEE; 2007:181–182. 10.1109/NFSI-ICFBI.2007.4387720 [DOI] [Google Scholar]
  • 115.Zhao B, Lu W, Hitchens TK, Lam F, Ho C, Liang ZP. Accelerated MR parameter mapping with low-rank and sparsity constraints. Magn Reson Med. 2015;74:489–498. 10.1002/mrm.25421 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 116.Zhao B, Haldar JP, Christodoulou AG, Liang ZP. Image reconstruction from highly undersampled (k, t)-space data with joint partial separability and sparsity constraints. IEEE Trans Med Imaging. 2012;31:1809–1820. 10.1109/TMI.2012.2203921 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 117.Feng L, Wen Q, Huang C, Tong A, Liu F, Chandarana H. GRASP-Pro: imProving GRASP DCE-MRI through self-calibrating subspace-modeling and contrast phase automation. Magn Reson Med. 2020;83:94–108. 10.1002/mrm.27903 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 118.Sumpf TJ, Petrovic A, Uecker M, Knoll F, Frahm J. Fast T2 mapping with improved accuracy using undersampled spin-echo MRI and model-based reconstructions with a generating function. IEEE Trans Med Imaging. 2014;33:2213–2222. 10.1109/TMI.2014.2333370 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 119.Wang X, Kohler F, Unterberg-Buchwald C, Lotz J, Frahm J, Uecker M. Model-based myocardial T1 mapping with sparsity constraints using single-shot inversion-recovery radial FLASH cardiovascular magnetic resonance. J Cardiovasc Magn Reson. 2019;21(1):60. 10.1186/s12968-019-0570-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 120.Huang C, Altbach MI, El Fakhri G. Pattern recognition for rapid T2 mapping with stimulated echo compensation. Magn Reson Imaging. 2014;32: 969–974. 10.1016/j.mri.2014.04.014 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 121.Ben-Eliezer N, Sodickson DK, Shepherd T, Wiggins GC, Block KT. Accelerated and motion-robust in vivo T2 mapping from radially undersampled data using bloch-simulation-based iterative reconstruction. Magn Reson Med. 2016;75:1346–1354. 10.1002/mrm.25558 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 122.Liu F, Velikina JV, Block WF, Kijowski R, Samsonov AA. Fast realistic MRI simulations based on generalized multi-pool exchange tissue model. IEEE Trans Med Imaging. 2017;36:527–537. 10.1109/TMI.2016.2620961 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 123.Weigel M Extended phase graphs: dephasing, RF pulses, and echoes—pure and simple. J Magn Reson Imaging. 2015;41:266–295. 10.1002/jmri.24619 [DOI] [PubMed] [Google Scholar]
  • 124.Davies M, Puy G, Vandergheynst P, Wiaux Y. A compressed sensing framework for magnetic resonance fingerprinting. SIAM J Imaging Sci. 2014;7: 2623–2656. 10.1137/130947246 [DOI] [Google Scholar]
  • 125.Cloos MA, Knoll F, Zhao T, et al. Multiparametric imaging with heterogeneous radiofrequency fields. Nat Commun. 2016;7:1–10. 10.1038/ncomms12445 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 126.Wang Z, Zhang Q, Yuan J, Wang X. MRF denoising with compressed sensing and adaptive filtering. In: 2014 IEEE 11th International Symposium on Biomedical Imaging, ISBI 2014. IEEE; 2014:870–873. 10.1109/isbi.2014.6868009 [DOI] [Google Scholar]
  • 127.Liao C, Bilgic B, Manhard MK, et al. 3D MR fingerprinting with accelerated stack-of-spirals and hybrid sliding-window and GRAPPA reconstruction. NeuroImage. 2017;162:13–22. 10.1016/j.neuroimage.2017.08.030 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 128.Mazor G, Weizman L, Tal A, Eldar YC. Low-rank magnetic resonance fingerprinting. Med Phys. 2018;45:4066–4084. 10.1002/mp.13078 [DOI] [PubMed] [Google Scholar]
  • 129.Jaubert O, Cruz G, Bustin A, et al. Free-running cardiac magnetic resonance fingerprinting: joint T1/T2 map and Cine imaging. Magn Reson Imaging. 2020;68:173–182. 10.1016/j.mri.2020.02.005 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 130.Lima da Cruz G, Bustin A, Jaubert O, Schneider T, Botnar RM, Prieto C. Sparsity and locally low rank regularization for MR fingerprinting. Magn Reson Med. 2019;81:3530–3543. 10.1002/mrm.27665 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 131.Yang M, Ma D, Jiang Y, et al. Low rank approximation methods for MR fingerprinting with large scale dictionaries. Magn Reson Med. 2018;79: 2392–2400. 10.1002/mrm.26867 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 132.Bustin A, Lima da Cruz G, Jaubert O, Lopez K, Botnar RM, Prieto C. High-dimensionality undersampled patch-based reconstruction (HD-PROST) for accelerated multi-contrast MRI. Magn Reson Med. 2019;81:3705–3719. 10.1002/mrm.27694 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 133.Doneva M, Amthor T, Koken P, Sommer K, Börnert P. Matrix completion-based reconstruction for undersampled magnetic resonance fingerprinting data. Magn Reson Imaging. 2017;41:41–52. 10.1016/j.mri.2017.02.007 [DOI] [PubMed] [Google Scholar]
  • 134.Zhao B, Setsompop K, Adalsteinsson E, et al. Improved magnetic resonance fingerprinting reconstruction with low-rank and subspace modeling. Magn Reson Med. 2018;79:933–942. 10.1002/mrm.26701 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 135.Assländer J, Cloos MA, Knoll F, Sodickson DK, Hennig J, Lattanzi R. Low rank alternating direction method of multipliers reconstruction for MR fingerprinting. Magn Reson Med. 2018;79:83–96. 10.1002/mrm.26639 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 136.Hamilton JI, Jiang Y, Eck B, Griswold M, Seiberlich N. Cardiac cine magnetic resonance fingerprinting for combined ejection fraction, T1 and T2 quantification. NMR Biomed. 2020;33:e4323. 10.1002/nbm.4323 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 137.McGivney D, Pierre E, Ma D, et al. SVD compression for magnetic resonance fingerprinting in the time domain. IEEE Trans Med Imaging. 2014;0062: 1–13. 10.1109/TMI.2014.2337321 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 138.Zhao B, Setsompop K, Ye H, Cauley SF, Wald LL. Maximum likelihood reconstruction for magnetic resonance fingerprinting. IEEE Trans Med Imaging. 2016;35:1812–1823. 10.1038/nature11971 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 139.Yu F, Koltun V. Multi-scale context aggregation by dilated convolutions. Poster presented at: Fourth International Conference on Learning Representations. ICLR 2016; May 2–4, 2016; San Juan, Puerto Rico. [Google Scholar]
  • 140.Dumoulin V, Visin F. A guide to convolution arithmetic for deep learning [abstract]. ArXiv. 2016. https://arxiv.org/abs/1603.07285. Accessed August 24, 2020. [Google Scholar]
  • 141.Sun M, Song Z, Jiang X, Pan J, Pang Y. Learning pooling for convolutional neural network. Neurocomputing. 2017;224:96–104. 10.1016/j.neucom.2016.10.049 [DOI] [Google Scholar]
  • 142.Ioffe S, Szegedy C. Batch normalization: accelerating deep network training by reducing internal covariate shift [abstract]. ArXiv. 2015. https://arxiv.org/abs/1502.03167v3 [Google Scholar]
  • 143.He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition [abstract]. ArXiv. 2015. https://arxiv.org/abs/1512.03385 [Google Scholar]
  • 144.Huang G, Liu Z, van der Maaten L, Weinberger KQ. Densely connected convolutional networks [abstract]. ArXiv. 2016. https://arxiv.org/abs/1608.06993 [Google Scholar]
  • 145.Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: a simple way to prevent neural networks from overfitting. J Mach Learn Res. 2014;15:1929–1958. 10.1214/12-AOS1000 [DOI] [Google Scholar]
  • 146.Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. In: Navab N, Hornegger J, Wells WM, Frangi AF, eds. Medical Image Computing and Computer-Assisted Intervention -- MICCAI 2015: 18th International Conference, Munich, Germany, October 5–9, 2015, Proceedings, Part III. Cham, Switzerland: Springer; 2015:234–241 10.1007/978-3-319-24574-4_28 [DOI] [Google Scholar]
  • 147.Drozdzal M, Vorontsov E, Chartrand G, Kadoury S, Pal C. The importance of skip connections in biomedical image segmentation. In: Carneiro G et al. , eds. Deep Learning and Data Labeling for Medical Applications. DLMIA 2016, LABELS 2016. Lecture Notes in Computer Science, Vol 10008 Cham, Swiyzerland: Springer; 2016:179–187 10.1007/978-3-319-46976-8_19 [DOI] [Google Scholar]
  • 148.Küstner T, Fuin N, Hammernik K, et al. CINENet: deep learning-based 3D cardiac CINE MRI reconstruction with multi-coil complex-valued 4D spatio-temporal convolutions. Sci Rep. 2020;10:1–13. 10.1038/s41598-020-70551-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 149.Sandino CM, Lai P, Vasanawala SS, Cheng JY. Accelerating cardiac cine MRI using a deep learning-based ESPIRiT reconstruction. Magn Reson Med. 2020;10:13710. 10.1002/mrm.28420 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 150.Liang M, Hu X. Recurrent convolutional neural network for object recognition. In: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE; 2015:3367–3375. 10.1109/CVPR.2015.7298958 [DOI] [Google Scholar]
  • 151.Qin C, Schlemper J, Caballero J, Price AN, Hajnal JV, Rueckert D. Convolutional recurrent neural networks for dynamic MR image reconstruction. IEEE Trans Med Imaging. 2019;38:280–290. 10.1109/TMI.2018.2863670 [DOI] [PubMed] [Google Scholar]
  • 152.Zhao H, Gallo O, Frosio I, Kautz J. Loss functions for image restoration with neural networks. IEEE Trans Comput Imaging. 2017;3:47–57. 10.1109/TCI.2016.2644865 [DOI] [Google Scholar]
  • 153.Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial networks [abstract]. ArXiv. 2014. https://arxiv.org/abs/1406.2661 [Google Scholar]
  • 154.Alom MZ, Taha TM, Yakopcic C, et al. A state-of-the-art survey on deep learning theory and architectures. Electronics. 2019;8(3):292. 10.3390/electronics8030292 [DOI] [Google Scholar]
  • 155.Khan A, Sohail A, Zahoora U, Qureshi AS. A survey of the recent architectures of deep convolutional neural networks. Artif Intell Rev. 2020;53: 5455–5516. 10.1007/s10462-020-09825-6 [DOI] [Google Scholar]
  • 156.Cai C, Wang C, Zeng Y, et al. Single-shot T2 mapping using overlapping-echo detachment planar imaging and a deep convolutional neural network. Magn Reson Med. 2018;80(5):2202–2214. 10.1002/mrm.27205 [DOI] [PubMed] [Google Scholar]
  • 157.Cai C, Zeng Y, Zhuang Y, et al. Single-shot T2 mapping through overlapping-echo detachment (OLED) planar imaging. IEEE Trans Biomed Eng. 2017; 64:2450–2461. 10.1109/TBME.2017.2661840 [DOI] [PubMed] [Google Scholar]
  • 158.Li H, Yang M, Kim J, et al. Ultra-fast simultaneous T1rho and T2 mapping using deep learning. Poster presented at: ISMRM & SMRT Virtual Conference; August 8–14, 2020. [Google Scholar]
  • 159.Hamilton JI, Seiberlich N. Machine learning for rapid magnetic resonance fingerprinting tissue property quantification. Proc IEEE. 2020;108:69–85. 10.1109/JPROC.2019.2936998 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 160.Virtue P, Yu SX, Lustig M. Better than real: complex-valued neural nets for MRI fingerprinting. In: 2017 IEEE International Conference on Image Processing (ICIP). IEEE; 2017:3953–3957. 10.1109/ICIP.2017.8297024 [DOI] [Google Scholar]
  • 161.Hoppe E, Körzdörfer G, Würfl T, et al. Deep learning for magnetic resonance fingerprinting: a new approach for predicting quantitative parameter values from time series. In: Röhrig R, Timmer A, Binder H, Sax U, eds. German Medical Data Sciences: Visions and Bridges. Studies in Health Technology and Informatics, Vol 243. Amsterdam: IOS Press; 2017:202–206 10.3233/978-1-61499-808-2-202 [DOI] [PubMed] [Google Scholar]
  • 162.Balsiger F, Konar AS, Chikop S, et al. Magnetic resonance fingerprinting reconstruction via spatiotemporal convolutional neural networks. In: Knoll F, Maier A, Rueckert D, eds. Machine Learning for Medical Image Reconstruction. MLMIR 2018. Lecture Notes in Computer Science, Vol 11074 Cham, Switzerland: Springer; 2018:39–46 10.1007/978-3-030-00129-2_5 [DOI] [Google Scholar]
  • 163.Song P, Eldar YC, Mazor G, Rodrigues MRD. HYDRA: hybrid deep magnetic resonance fingerprinting. Med Phys. 2019;46:4951–4969. 10.1002/mp.13727 [DOI] [PubMed] [Google Scholar]
  • 164.Cohen O, Zhu B, Rosen MS. MR fingerprinting Deep RecOnstruction NEtwork (DRONE). Magn Reson Med. 2018;80:885–894. 10.1002/mrm.27198 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 165.Fang Z, Chen Y, Liu M, et al. Deep learning for fast and spatially constrained tissue quantification from highly accelerated data in magnetic resonance fingerprinting. IEEE Trans Med Imaging. 2019;38:2364–2374. 10.1109/TMI.2019.2899328 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 166.Fang Z, Chen Y, Hung S, Zhang X, Lin W, Shen D. Submillimeter MR fingerprinting using deep learning-based tissue quantification. Magn Reson Med. 2019;84:579–591. 10.1002/mrm.28136 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 167.Chen Y, Fang Z, Hung SC, Chang WT, Shen D, Lin W. High-resolution 3D MR fingerprinting using parallel imaging and deep learning. NeuroImage. 2020;206:116329. 10.1016/j.neuroimage.2019.116329 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 168.Yang M, Jiang Y, Ma D, Mehta BB, Griswold MA. Game of learning Bloch equation simulations for MR fingerprinting. Paper presented at: Joint Annual Meeting ISMRM-ESMRMB; June 16–21, 2018; Paris, France. [Google Scholar]
  • 169.Liu F, Feng L, Kijowski R. MANTIS: Model-Augmented Neural neTwork with Incoherent k-space Sampling for efficient MR parameter mapping. Magn Reson Med. 2019;82:174–188. 10.1002/mrm.27707 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 170.Liu F, Kijowski R, Feng L, El Fakhri G. High-performance rapid MR parameter mapping using model-based deep adversarial learning. Magn. Reson. Imaging 2020;74:152–160. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 171.Zha W, Fain SB, Kijowski R, Liu F. Relax-MANTIS: REference-free LAtent map-eXtracting MANTIS for efficient MR parametric mapping with unsupervised deep learning. Paper presented at: 27th ISMRM Annual Meeting; May 16, 2019; Montreal, Canada. [Google Scholar]
172. Deoni SCL, Rutt BK, Peters TM. Rapid combined T1 and T2 mapping using gradient recalled acquisition in the steady state. Magn Reson Med. 2003;49:515–526. 10.1002/mrm.10407
173. Liu F, Feng L. k-t SANTIS: Subspace Augmented Neural neTwork with Incoherent Sampling for dynamic image reconstruction. Paper presented at: ISMRM Workshop on Data Sampling & Image Reconstruction; January 26–29, 2020; Sedona, AZ.
174. Han Y, Sunwoo L, Ye JC. k-Space deep learning for accelerated MRI. IEEE Trans Med Imaging. 2020;39:377–386. 10.1109/TMI.2019.2927101
175. Zibetti MVW, Sharafi A, Hammernik K, Knoll F, Regatte RR. Comparing learned variational networks and compressed sensing for T1ρ mapping of knee cartilage. Paper presented at: ISMRM Workshop on Machine Learning, Part II; October 25–28, 2018; Washington, DC.
176. Sharafi A, Xia D, Chang G, Regatte RR. Biexponential T1ρ relaxation mapping of human knee cartilage in vivo at 3 T. NMR Biomed. 2017;30:e3760. 10.1002/nbm.3760
177. Jeelani H, Yang Y, Zhou R, Kramer CM, Salerno M, Weller DS. A myocardial T1-mapping framework with recurrent and U-Net convolutional neural networks. In: 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI). IEEE; 2020:1941–1944. 10.1109/ISBI45749.2020.9098459
178. Chaudhari A, Fang Z, Lee JH, Gold G, Hargreaves B. Deep learning super-resolution enables rapid simultaneous morphological and quantitative magnetic resonance imaging [abstract]. ArXiv. 2018. https://arxiv.org/abs/1808.04447
179. Chaudhari AS, Fang Z, Kogan F, et al. Super-resolution musculoskeletal MRI using deep learning. Magn Reson Med. 2018;80:2139–2154. 10.1002/mrm.27178
180. Staroswiecki E, Granlund KL, Alley MT, Gold GE, Hargreaves BA. Simultaneous estimation of T2 and apparent diffusion coefficient in human articular cartilage in vivo with a modified three-dimensional double echo steady state (DESS) sequence at 3 T. Magn Reson Med. 2012;67:1086–1096. 10.1002/mrm.23090
181. Golkov V, Dosovitskiy A, Sperl JI, et al. q-Space deep learning: twelve-fold shorter and model-free diffusion MRI scans. IEEE Trans Med Imaging. 2016;35:1344–1351. 10.1109/TMI.2016.2551324
182. Tian Q, Bilgic B, Fan Q, et al. DeepDTI: high-fidelity six-direction diffusion tensor imaging using deep learning. NeuroImage. 2020;219:117017. 10.1016/j.neuroimage.2020.117017
183. Yoon J, Gong E, Chatnuntawech I, et al. Quantitative susceptibility mapping using deep neural network: QSMnet. NeuroImage. 2018;179:199–206. 10.1016/j.neuroimage.2018.06.030
184. Bollmann S, Rasmussen KGB, Kristensen M, et al. DeepQSM—using deep learning to solve the dipole inversion for quantitative susceptibility mapping. NeuroImage. 2019;195:373–383. 10.1016/j.neuroimage.2019.03.060
185. Chen Y, Jakary A, Avadiappan S, Hess CP, Lupo JM. QSMGAN: improved quantitative susceptibility mapping using 3D generative adversarial networks with increased receptive field. NeuroImage. 2020;207:116389. 10.1016/j.neuroimage.2019.116389
186. Gong E, Pauly JM, Wintermark M, Zaharchuk G. Deep learning enables reduced gadolinium dose for contrast-enhanced brain MRI. J Magn Reson Imaging. 2018;48:330–340. 10.1002/jmri.25970
187. Gong K, Han P, El Fakhri G, Ma C, Li Q. Arterial spin labeling MR image denoising and reconstruction using unsupervised deep learning. NMR Biomed. 2019;e4224. 10.1002/nbm.4224
188. Xie D, Li Y, Yang H, et al. Denoising arterial spin labeling perfusion MRI with deep machine learning. Magn Reson Imaging. 2020;68:95–105. 10.1016/j.mri.2020.01.005
189. Fahmy AS, El-Rewaidy H, Nezafat M, Nakamori S, Nezafat R. Automated analysis of cardiovascular magnetic resonance myocardial native T1 mapping images using fully convolutional neural networks. J Cardiovasc Magn Reson. 2019;21:7. 10.1186/s12968-018-0516-1
190. Norman B, Pedoia V, Majumdar S. Use of 2D U-Net convolutional neural networks for automated cartilage and meniscus segmentation of knee MR imaging data to determine relaxometry and morphometry. Radiology. 2018;288(1):177–185. 10.1148/radiol.2018172322
191. Zbontar J, Knoll F, Sriram A, et al. fastMRI: an open dataset and benchmarks for accelerated MRI [abstract]. ArXiv. 2018. https://arxiv.org/abs/1811.08839
192. Stöcker T, Vahedipour K, Pflugfelder D, Shah NJ. High-performance computing MRI simulations. Magn Reson Med. 2010;64:186–193. 10.1002/mrm.22406
193. Liu F. SUSAN: Segment Unannotated image Structure using Adversarial Network. Magn Reson Med. 2019;81(5):3330–3345. 10.1002/mrm.27627
194. Shin H-C, Tenenholtz NA, Rogers JK, et al. Medical image synthesis for data augmentation and anonymization using generative adversarial networks. In: Gooya A, Goksel O, Oguz I, Burgos N, eds. Simulation and Synthesis in Medical Imaging. SASHIMI 2018. Lecture Notes in Computer Science, Vol 11037. Cham, Switzerland: Springer; 2018. 10.1007/978-3-030-00536-8_1
195. Han C, Hayashi H, Rundo L, et al. GAN-based synthetic brain MR image generation. In: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018). IEEE; 2018:734–738. 10.1109/ISBI.2018.8363678
196. Kazuhiro K, Werner RA, Toriumi F, et al. Generative adversarial networks for the creation of realistic artificial brain magnetic resonance images. Tomography. 2018;4:159–163. 10.18383/j.tom.2018.00042
197. Calimeri F, Marzullo A, Stamile C, Terracina G. Biomedical data augmentation using generative adversarial neural networks. In: Lintas A, Rovetta S, Verschure P, Villa A, eds. Artificial Neural Networks and Machine Learning—ICANN 2017. Lecture Notes in Computer Science, Vol 10614. Cham, Switzerland: Springer; 2017:626–634. 10.1007/978-3-319-68612-7_71
198. Mok TCW, Chung ACS. Learning data augmentation for brain tumor segmentation with coarse-to-fine generative adversarial networks. In: Crimi A, Bakas S, Kuijf H, Keyvan F, Reyes M, van Walsum T, eds. Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. BrainLes 2018. Lecture Notes in Computer Science, Vol 11383; 2019:70–80. 10.1007/978-3-030-11723-8_7
199. Bowles C, Chen L, Guerrero R, et al. GAN augmentation: augmenting training data using generative adversarial networks [abstract]. ArXiv. 2018. https://arxiv.org/abs/1810.10863
200. Arjovsky M, Chintala S, Bottou L. Wasserstein GAN [abstract]. ArXiv. 2017. https://arxiv.org/abs/1701.07875
201. Gulrajani I, Ahmed F, Arjovsky M, Dumoulin V, Courville A. Improved training of Wasserstein GANs [abstract]. ArXiv. 2017. https://arxiv.org/abs/1704.00028
202. Mao X, Li Q, Xie H, Lau RYK, Wang Z, Smolley SP. Least squares generative adversarial networks [abstract]. ArXiv. 2016. https://arxiv.org/abs/1611.04076
203. Kodali N, Abernethy J, Hays J, Kira Z. On convergence and stability of GANs [abstract]. ArXiv. 2017. https://arxiv.org/abs/1705.07215
204. Shin H-C, Roth HR, Gao M, et al. Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Trans Med Imaging. 2016;35:1285–1298. 10.1109/TMI.2016.2528162
205. Tajbakhsh N, Shin JY, Gurudu SR, et al. Convolutional neural networks for medical image analysis: full training or fine tuning? IEEE Trans Med Imaging. 2016;35:1299–1312. 10.1109/TMI.2016.2535302
206. Bulo SR, Porzi L, Kontschieder P. In-place activated BatchNorm for memory-optimized training of DNNs. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE; 2018:5639–5647. 10.1109/CVPR.2018.00591
207. Meng C, Sun M, Yang J, Qiu M, Gu Y. Training deeper models by GPU memory optimization on TensorFlow. Paper presented at: 31st Conference on Neural Information Processing Systems (NIPS 2017); December 4–9, 2017; Long Beach, CA.
208. Pleiss G, Chen D, Huang G, Li T, van der Maaten L, Weinberger KQ. Memory-efficient implementation of DenseNets [abstract]. ArXiv. 2017. https://arxiv.org/abs/1707.06990
209. Rhu M, Gimelshein N, Clemons J, Zulfiqar A, Keckler SW. vDNN: virtualized deep neural networks for scalable, memory-efficient neural network design [abstract]. ArXiv. 2016. https://arxiv.org/abs/1602.08124
210. Guan B, Zhang J, Sethares WA, Kijowski R, Liu F. SpecNet: spectral domain convolutional neural network [abstract]. ArXiv. 2019. https://arxiv.org/abs/1905.10915
211. Knoll F, Hammernik K, Kobler E, Pock T, Recht MP, Sodickson DK. Assessment of the generalization of learned image reconstruction and the potential for transfer learning. Magn Reson Med. 2019;81:116–128. 10.1002/mrm.27355
212. Wang X, Peng Y, Lu L, Lu Z, Summers RM. TieNet: text-image embedding network for common thorax disease classification and reporting in chest x-rays. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE; 2018:9049–9058. 10.1109/CVPR.2018.00943
213. Mei X, Lee H, Diao K, et al. Artificial intelligence-enabled rapid diagnosis of patients with COVID-19. Nat Med. 2020;26:1224–1228. 10.1038/s41591-020-0931-3
214. Guan B, Liu F, Haj-Mirzaian A, et al. Deep learning risk assessment models for predicting progression of radiographic medial joint space loss over a 48-month follow-up period. Osteoarthr Cartil. 2020;28:428–437. 10.1016/j.joca.2020.01.010
215. Chang LC, Koay CG, Basser PJ, Pierpaoli C. Linear least-squares method for unbiased estimation of T1 from SPGR signals. Magn Reson Med. 2008;60:496–501. 10.1002/mrm.21669
216. Nataraj G, Nielsen JF, Scott C, Fessler JA. Dictionary-free MRI PERK: parameter estimation via regression with kernels. IEEE Trans Med Imaging. 2018;37:2103–2114. 10.1109/TMI.2018.2817547
