Author manuscript; available in PMC: 2016 Aug 1.
Published in final edited form as: Magn Reson Med. 2014 Aug 27;74(2):489–498. doi: 10.1002/mrm.25421

Accelerated MR Parameter Mapping with Low-Rank and Sparsity Constraints

Bo Zhao 1,2,*, Wenmiao Lu 2, T Kevin Hitchens 3,4, Fan Lam 1,2, Chien Ho 3,4, Zhi-Pei Liang 1,2
PMCID: PMC4344441  NIHMSID: NIHMS619739  PMID: 25163720

Abstract

Purpose:

To enable accurate MR parameter mapping with accelerated data acquisition, utilizing recent advances in constrained imaging with sparse sampling.

Theory and Methods:

A new constrained reconstruction method based on low-rank and sparsity constraints is proposed to accelerate MR parameter mapping. More specifically, the proposed method simultaneously imposes low-rank and joint sparse structures on contrast-weighted image sequences within a unified mathematical formulation. With a pre-estimated subspace, this formulation results in a convex optimization problem, which is solved using an efficient numerical algorithm based on the alternating direction method of multipliers.

Results:

To evaluate the performance of the proposed method, two application examples were considered: i) T2 mapping of the human brain, and ii) T1 mapping of the rat brain. For each application, the proposed method was evaluated at both moderate and high acceleration levels. Additionally, the proposed method was compared with two state-of-the-art methods that only use a single low-rank or joint sparsity constraint. The results demonstrate that the proposed method can achieve accurate parameter estimation with both moderately and highly undersampled data. Although all methods performed fairly well with moderately undersampled data, the proposed method achieved much better performance (e.g., more accurate parameter values) than the other two methods with highly undersampled data.

Conclusions:

Simultaneously imposing low-rank and sparsity constraints can effectively improve the accuracy of fast MR parameter mapping with sparse sampling.

Keywords: constrained reconstruction, low-rank constraint, joint sparsity constraint, parameter mapping, T1 mapping, T2 mapping

Introduction

MR parameter mapping (e.g., T1, T2, or T2* mapping) has developed into a powerful quantitative imaging tool for tissue characterization. It has been utilized in a wide variety of biomedical research and clinical applications, such as the study of neurodegenerative diseases (1), evaluation of myocardial fibrosis (2), tracking of labeled cells (3), and assessment of knee cartilage damage (4). Parameter mapping experiments typically involve acquisition of a sequence of contrast-weighted MR images, each acquired with a different acquisition parameter (e.g., flip angle, echo time, or repetition time). To obtain accurate parameter values, a large number of contrast-weighted images often have to be acquired. This can lead to prolonged data acquisition time, especially for applications that require high spatial resolution and/or broad volume coverage, which hinders the practical utility of MR parameter mapping.

Various fast imaging techniques can be adapted, or have been developed specifically, to accelerate parameter mapping experiments. For example, advanced fast pulse sequences (e.g., (5, 6)) and parallel imaging (e.g., (7–10)) are useful for improving the acquisition efficiency of parameter mapping experiments. Furthermore, a variety of constrained reconstruction methods that utilize specific spatial and/or parametric characteristics of contrast-weighted image sequences have also demonstrated effectiveness in achieving faster parameter mapping through sparse sampling. These methods utilize various constraints associated with lower-dimensional signal/image models, including a temporal smoothness constraint (11), sparsity or structured sparsity constraints (12–17), a low-rank constraint (18, 19), contrast-weighting constraints (20–22), or combinations of the aforementioned constraints (23–26).

In this work, we present a new constrained reconstruction method to accelerate MR parameter mapping. It is based on an extension of our earlier work on dynamic MRI (27), but sparse sampling is now considered in the k-parametric domain, i.e., k-p space. In the proposed method, we utilize a mathematical formulation that simultaneously enforces low-rank and joint sparse structures of contrast-weighted image sequences. With data-driven subspace pre-estimation, the proposed formulation results in a convex optimization problem. An algorithm based on the alternating direction method of multipliers (ADMM) (28–30) is described to efficiently solve the optimization problem. The performance of the proposed method was evaluated in both T1 and T2 mapping applications. Its superior performance over two state-of-the-art methods that use only a low-rank (18) or joint sparse structure (15, 17) will be demonstrated. A preliminary account of this work was presented in (24, 25).

Theory

Data Model

Parameter mapping experiments involve acquisition of a sequence of contrast-weighted images $\{I_m(\mathbf{x})\}_{m=1}^{M}$, which are related to the measured k-space data by

$d_{m,c}(\mathbf{k}) = \int S_c(\mathbf{x})\, I_m(\mathbf{x})\, \exp(-i 2\pi \mathbf{k} \cdot \mathbf{x})\, d\mathbf{x} + n_{m,c}(\mathbf{k}),$  [1]

for m = 1, …, M and c = 1, …, Nc, where $S_c(\mathbf{x})$ denotes the coil sensitivity profile for the c-th receiver coil, Nc denotes the number of coils, and $n_{m,c}(\mathbf{k})$ is assumed to be complex white Gaussian noise. For simplicity, we consider a discrete image model, which assumes that the contrast-weighted image $I_m(\mathbf{x})$ can be completely represented by its values on a grid of N spatial locations $\{\mathbf{x}_n\}_{n=1}^{N}$. As a consequence, the following Casorati matrix (31), i.e.,

$C = \begin{bmatrix} I_1(\mathbf{x}_1) & \cdots & I_M(\mathbf{x}_1) \\ \vdots & \ddots & \vdots \\ I_1(\mathbf{x}_N) & \cdots & I_M(\mathbf{x}_N) \end{bmatrix} \in \mathbb{C}^{N \times M},$  [2]

can be treated as a complete representation of $\{I_m(\mathbf{x})\}_{m=1}^{M}$, whose first and second directions represent the spatial and parameter dimensions, respectively. Therefore, Eq. [1] can be rewritten as

$d_c = \Omega(F S_c C) + n_c,$  [3]

for c = 1, …, Nc, where dc ∈ ℂP contains the measured data for the contrast-weighted image sequence from the c-th coil, Ω(·) : ℂN×M → ℂP denotes the undersampling operator that sparsely acquires k-space data for each contrast-weighted image and then concatenates them into the data vector dc, F ∈ ℂN×N denotes the Fourier encoding matrix (e.g., the standard discrete Fourier transform matrix for the Cartesian case), Sc ∈ ℂN×N is a diagonal matrix that contains the sensitivity map of the c-th coil, and nc ∈ ℂP is the noise vector.
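
To make the discrete data model concrete, the following sketch (in Python/NumPy, assuming a Cartesian trajectory with a binary sampling mask standing in for Ω; all array names and shapes are illustrative assumptions, not code from the paper) shows how Eq. [3] maps a Casorati matrix C to the acquired k-space samples of one coil.

```python
import numpy as np

def forward_model_one_coil(C, sens, mask):
    """Evaluate d_c = Omega(F S_c C) for one coil (cf. Eq. [3]).

    C    : (N, M) Casorati matrix, N = Nx*Ny voxels, M contrast weightings
    sens : (Nx, Ny) coil sensitivity map S_c
    mask : (Nx, Ny, M) boolean k-space sampling pattern (the operator Omega)
    Returns a 1-D vector of the acquired k-space samples for this coil.
    """
    Nx, Ny, M = mask.shape
    images = C.reshape(Nx, Ny, M)                               # image sequence I_m(x)
    weighted = sens[..., None] * images                         # apply S_c
    kspace = np.fft.fft2(weighted, axes=(0, 1), norm="ortho")   # apply F (2-D DFT)
    return kspace[mask]                                         # Omega: keep sampled points only
```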

Formulation

If dc contains only sparsely sampled data, direct Fourier inversion of the measured data generally incurs severe artifacts in reconstructed contrast-weighted images and parameter maps. Here, we propose a formulation that simultaneously enforces the low-rank and joint sparsity constraints to enable reconstruction of C from highly undersampled data, i.e.,

$\hat{C} = \arg\min_{C \in \mathbb{C}^{N \times M}} \sum_{c=1}^{N_c} \left\| d_c - \Omega(F S_c C) \right\|_2^2 + R_r(C) + R_s(C),$  [4]

where Rr (·) denotes the low-rank constraint, and Rs(·) denotes the joint sparsity constraint.

First, the low-rank constraint Rr(C) is based on the assumption that relaxation signals from different types of tissues are strongly correlated, which leads to the partial separability/low-rank modeling of C (31, 32). The low-rank constraint can be enforced in multiple ways (31–34). Here, we use an explicit rank constraint through matrix factorization, i.e., C = UV, where U ∈ ℂN×L, V ∈ ℂL×M, and L ≪ min{M, N}. Note that the columns of U and the rows of V span the spatial and parametric subspaces of C, respectively. A stronger rank constraint can be enforced by pre-estimating V from acquired auxiliary data using principal component analysis or the singular value decomposition (18, 31, 35, 36). We adopt this rank constraint for Rr(C) in Eq. [4].
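
As an illustration of the subspace pre-estimation step, the sketch below (Python/NumPy; the variable names and the layout of the training matrix are assumptions made for illustration, not the authors' code) estimates V̂ from the training data by keeping the L principal right singular vectors.

```python
import numpy as np

def estimate_subspace(training, L=3):
    """Estimate the parametric subspace V_hat from training data.

    training : (P_t, M) matrix whose rows are fully sampled central k-space
               samples observed across the M acquisition parameters
    L        : model order (rank), e.g., L = 3 in this work
    Returns V_hat of shape (L, M); its rows span the parametric subspace.
    """
    # SVD (equivalently, PCA without mean removal) of the training matrix;
    # the L dominant right singular vectors form the parametric basis.
    _, _, Vh = np.linalg.svd(training, full_matrices=False)
    return Vh[:L, :]
```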

Second, the joint sparsity constraint Rs(C) is motivated by the assumption that the sparse coefficients of different coregistered contrast-weighted images are often highly correlated. Such a correlated sparse structure can be captured more effectively by joint sparse modeling (15, 17), since it not only enforces a sparsity constraint for each individual image, but also favors a shared sparse support across the contrast-weighted images. Mathematically, the joint sparsity constraint can be enforced by the mixed $\ell_2/\ell_1$ norm, i.e., Rs(C) = ∥ψC∥2,1, where $\|A\|_{2,1} = \sum_{n=1}^{N} \|A^{(n)}\|_2$ and A(n) denotes the nth row of A. In this work specifically, we chose ψ as a finite difference operator to exploit joint edge sparsity. For simplicity, we consider a two-directional finite difference transform to obtain a concrete formulation, i.e., Rs(C) = λ∥DxC∥2,1 + λ∥DyC∥2,1, where Dx and Dy represent the horizontal and vertical finite differences, respectively. Extensions to other forms of joint sparsity constraints are mathematically straightforward.

With the above specific low-rank and joint sparsity constraints, Eq. [4] can be rewritten as

$\hat{U} = \arg\min_{U \in \mathbb{C}^{N \times L}} \sum_{c=1}^{N_c} \left\| d_c - \Omega(F S_c U \hat{V}) \right\|_2^2 + \lambda \| D_x U \hat{V} \|_{2,1} + \lambda \| D_y U \hat{V} \|_{2,1},$  [5]

and the image sequence can be reconstructed as Ĉ = ÛV̂, where V̂ denotes the subspace estimated from the auxiliary data, and λ denotes the regularization parameter. Eq. [5] integrates both low-rank and joint sparse modeling of C into a unified mathematical framework. It is easy to see the connection of Eq. [5] with the two state-of-the-art methods that use either a low-rank or a joint sparsity constraint. Specifically, Eq. [5] reduces to the subspace-augmented low-rank constrained reconstruction (i.e., kt-PCA (18)) if λ = 0, and to the joint sparsity constrained reconstruction (15) if L = M (i.e., full rank is used).

The complementary roles that low-rank and sparsity constraints play were studied comprehensively in our earlier work on dynamic MRI (27). Here, for MR parameter mapping, the low-rank model provides strong power to represent the ensemble of relaxation signals of interest. However, low-rank constrained reconstruction (in particular with a pre-estimated V̂) often suffers from ill-conditioning with highly undersampled data, which can lead to severe image artifacts and an SNR penalty. The joint sparsity constraint acts not only as an additional prior but also as an effective regularizer to reduce image artifacts and enhance SNR. The benefits of simultaneously imposing the low-rank and joint sparsity constraints, over using each of these two constraints individually, will be demonstrated in the Results section.

Algorithm

Note that Eq. [5] is a convex optimization problem with nonsmooth regularization, for which a number of numerical algorithms can be used. Here, we describe an efficient, globally convergent algorithm based on the ADMM (28–30) to solve it. The algorithm consists of the following major steps. First, Eq. [5] is converted into the following equivalent constrained optimization problem through variable splitting, i.e.,

$\{\hat{U}, \hat{G}, \hat{H}\} = \arg\min_{U, G, H} \sum_{c=1}^{N_c} \left\| d_c - \Omega(F S_c U \hat{V}) \right\|_2^2 + \lambda \|G\|_{2,1} + \lambda \|H\|_{2,1}, \quad \text{s.t. } G = D_x U \hat{V} \text{ and } H = D_y U \hat{V}.$  [6]

Second, the augmented Lagrangian function for Eq. [6] can be written as

$\mathcal{L}(U, G, H, Y, Z) = \sum_{c=1}^{N_c} \left\| d_c - \Omega(F S_c U \hat{V}) \right\|_2^2 + \lambda \|G\|_{2,1} + \lambda \|H\|_{2,1} + \langle Y, G - D_x U \hat{V} \rangle + \langle Z, H - D_y U \hat{V} \rangle + \frac{\mu_1}{2} \left\| G - D_x U \hat{V} \right\|_F^2 + \frac{\mu_2}{2} \left\| H - D_y U \hat{V} \right\|_F^2,$  [7]

where Y ∈ ℂN×M and Z ∈ ℂN×M are the Lagrange multipliers, and μ1, μ2 > 0 are penalty parameters related to the convergence speed of the algorithm (30).

Third, Eq. [7] can be minimized through the following alternating direction method, i.e.,

$G_{k+1} = \arg\min_{G} \mathcal{L}(U_k, G, H_k, Y_k, Z_k),$  [8]
$H_{k+1} = \arg\min_{H} \mathcal{L}(U_k, G_{k+1}, H, Y_k, Z_k),$  [9]
$U_{k+1} = \arg\min_{U} \mathcal{L}(U, G_{k+1}, H_{k+1}, Y_k, Z_k),$  [10]
$Y_{k+1} = Y_k + \mu_1 (G_{k+1} - D_x U_{k+1} \hat{V}),$  [11]
$Z_{k+1} = Z_k + \mu_2 (H_{k+1} - D_y U_{k+1} \hat{V}).$  [12]

The solutions to the subproblems Eqs. [8] - [10] are described in the Appendix.

In the practical implementation, we initialize U(0) with the projection of the zero-filled reconstruction onto the low-rank subspace spanned by V̂, while G(0), H(0), Y(0), and Z(0) are all initialized with zero matrices. It should be noted that for the convex optimization problem in Eq. [5], the ADMM algorithm is guaranteed to converge globally from any initialization. With respect to the penalty parameters, μ1 is set equal to μ2, considering that the horizontal and vertical finite differences are approximately on the same scale. Furthermore, we use the following stopping criteria, i.e.,

$\max\left\{ \frac{\|U_{k+1} - U_k\|_F}{\|U_k\|_F}, \frac{\|G_{k+1} - G_k\|_F}{\|G_k\|_F}, \frac{\|H_{k+1} - H_k\|_F}{\|H_k\|_F} \right\} \le \epsilon$  [13]

and

$k > K_{\max},$  [14]

where ε and Kmax are the pre-defined tolerance parameter and maximum number of iterations, respectively. The algorithm is terminated when either Eq. [13] or Eq. [14] is satisfied. Finally, to summarize the above ADMM-based algorithm and related details, a diagram is provided in the Supplementary Material of this work.
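
To make the iteration structure explicit, the following skeleton (Python/NumPy) mirrors Eqs. [8]–[12] and the stopping criteria in Eqs. [13]–[14]. The subproblem solvers `solve_G`, `solve_H`, and `solve_U` are hypothetical callables standing in for the closed-form shrinkage and CG steps described in the Appendix; this is a sketch of the control flow, not the authors' implementation.

```python
import numpy as np

def admm_skeleton(U0, V_hat, Dx, Dy, solve_G, solve_H, solve_U,
                  mu1=1.0, mu2=1.0, tol=5e-4, K_max=50):
    """ADMM iterations of Eqs. [8]-[12] with the stopping rules [13]-[14]."""
    U = U0
    G = np.zeros_like(Dx @ (U @ V_hat))   # splitting variable for Dx U V_hat
    H = np.zeros_like(Dy @ (U @ V_hat))   # splitting variable for Dy U V_hat
    Y = np.zeros_like(G)                  # Lagrange multiplier for G
    Z = np.zeros_like(H)                  # Lagrange multiplier for H

    for k in range(K_max):
        U_old, G_old, H_old = U, G, H
        G = solve_G(U, Y, mu1)                        # Eq. [8]: row-wise shrinkage
        H = solve_H(U, Z, mu2)                        # Eq. [9]: row-wise shrinkage
        U = solve_U(G, H, Y, Z, mu1, mu2)             # Eq. [10]: CG on Eq. [19]
        Y = Y + mu1 * (G - Dx @ (U @ V_hat))          # Eq. [11]
        Z = Z + mu2 * (H - Dy @ (U @ V_hat))          # Eq. [12]

        eps = np.finfo(float).tiny
        rel = max(np.linalg.norm(U - U_old) / (np.linalg.norm(U_old) + eps),
                  np.linalg.norm(G - G_old) / (np.linalg.norm(G_old) + eps),
                  np.linalg.norm(H - H_old) / (np.linalg.norm(H_old) + eps))
        if rel <= tol:                                # stopping criterion, Eq. [13]
            break
    return U
```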

Parameter Estimation

After reconstructing Ĉ, the relaxation parameters of interest (e.g., T1 or T2 maps) can be determined voxel-by-voxel by solving a nonlinear least-squares (NLS) fitting problem, for which a number of algorithms can be used. Note, however, that unlike a generic NLS problem, the problem here has a separable structure (37), in the sense that the nonlinear contrast-weighting model depends linearly on a subset of its unknowns, i.e., the proton density value. To take advantage of this special structure, we adopt the variable projection (VARPRO) algorithm (37–40), which has been shown to be more computationally efficient than generic nonlinear optimization algorithms (37, 39). In our case, after variable projection, the optimization problem becomes one-dimensional, so we can discretize the relaxation parameter of interest into a finite set of values and apply VARPRO with a one-dimensional grid search (38, 40), which is guaranteed to yield a globally optimal solution over the search grid.
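
For a mono-exponential model I_m = ρ exp(−TE_m/T2), the one-dimensional VARPRO grid search reduces to maximizing the projection of the data onto the model basis. The sketch below (Python/NumPy, single voxel; the echo times, grid, and function name are illustrative assumptions) is a minimal rendering of this idea, not the authors' implementation.

```python
import numpy as np

def varpro_t2_grid_search(signal, TE, t2_grid):
    """Mono-exponential T2 fit for one voxel via VARPRO with a 1-D grid search.

    signal  : (M,) measured echo amplitudes at echo times TE
    TE      : (M,) echo times (same units as t2_grid, e.g., ms)
    t2_grid : (K,) candidate T2 values, e.g., np.arange(1, 501)
    Returns (T2_hat, rho_hat).
    """
    phi = np.exp(-TE[None, :] / t2_grid[:, None])        # (K, M) model vectors
    # After projecting out the linear parameter rho, the NLS cost is minimized
    # where |phi_k^T y|^2 / ||phi_k||^2 is maximized over the grid.
    score = np.abs(phi @ signal) ** 2 / np.sum(phi ** 2, axis=1)
    k_best = int(np.argmax(score))
    rho_hat = (phi[k_best] @ signal) / np.sum(phi[k_best] ** 2)
    return t2_grid[k_best], rho_hat
```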

Methods

Two sets of experimental data were used to evaluate the performance of the proposed method. The first data set was acquired in an in vivo human brain T2 mapping experiment on a healthy volunteer, with approval from the Institutional Review Board at the University of Illinois and informed consent from subjects. The experiment was performed on a 3T Siemens Trio scanner (Siemens Medical Solutions, Erlangen, Germany) equipped with a 12-channel receiver head coil. A multi-echo spin-echo imaging sequence was used with 25 evenly spaced echoes (first echo time TE1 = 11.5 ms and echo spacing ΔTE = 11.5 ms). Other relevant imaging parameters were: repetition time (TR) = 3.11 s, field-of-view (FOV) = 180 mm × 240 mm, matrix size = 208 × 256, number of slices = 8, and slice thickness = 3 mm. A pilot scan with a rapid GRE sequence was also performed, from which the coil sensitivity maps Sc were estimated.

We performed retrospective undersampling of these fully sampled data, and Fig. 1a illustrates one representative sampling scheme in k-p space, where the parametric dimension p refers to the echo number for the T2 mapping experiments. Specifically, in this acquisition scheme, one central k-space readout was fully acquired at all echo times and treated as training data, from which we estimated V̂ using principal component analysis (18, 31, 35). To measure the sparse sampling level, the acceleration factor (AF) is defined as MN/P. Specifically, the following four AFs, i.e., AF = 2.8, 4.1, 6.0, and 8.0, were considered for this data set. The T2 map estimated from the fully sampled data was treated as a reference, with which we evaluated the performance of the different reconstruction methods. We performed slice-by-slice reconstructions from the undersampled data using the proposed method. The rank was selected as L = 3, and the regularization parameter λ was empirically optimized by visual inspection (more discussion on parameter selection is given in the Discussion section).

Fig. 1.

Fig. 1

Representative undersampling patterns in k-p space for (a) T2 mapping and (b) T1 mapping experiments used in the proposed method. The white bars and black bars denote the acquired and unacquired k-space readouts, respectively. For both experiments, we acquire one central k-space readout at each acquisition parameter and use this set of data as training data, whereas we sparsely acquire data in the other regions of k-p space with a uniform random sampling pattern and use these data as imaging data. Furthermore, to enhance SNR, we densely acquire low-resolution data at the first TE for the T2 mapping experiments and at the last TR for the T1 mapping experiments.
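
A sampling pattern in the spirit of Fig. 1a can be generated as sketched below (Python/NumPy). It assumes Cartesian sampling in which the readout direction is fully sampled, so the pattern is specified per phase-encoding line and per acquisition parameter; the function name, the single fully sampled central readout, and the uniform random selection of the remaining readouts are illustrative assumptions rather than the exact pattern used in the paper.

```python
import numpy as np

def kp_sampling_mask(n_pe, M, accel, n_center=1, seed=None):
    """Build an (n_pe, M) boolean k-p sampling pattern.

    n_pe     : number of phase-encoding lines (readouts) per image
    M        : number of acquisition parameters (echoes or TRs)
    accel    : target acceleration factor AF = MN/P
    n_center : central readouts acquired at every parameter (training data)
    """
    rng = np.random.default_rng(seed)
    mask = np.zeros((n_pe, M), dtype=bool)
    start = n_pe // 2 - n_center // 2
    mask[start:start + n_center, :] = True                       # training readouts
    n_imaging = max(int(round(n_pe / accel)) - n_center, 0)      # imaging readouts per frame
    for m in range(M):
        candidates = np.flatnonzero(~mask[:, m])
        mask[rng.choice(candidates, size=n_imaging, replace=False), m] = True
    return mask
```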

To demonstrate the benefits of imposing simultaneous low-rank and joint sparsity constraints, we also performed low-rank based reconstruction (i.e., kt-PCA (18)) and joint sparsity based reconstruction (15, 17) (denoted as joint sparse hereafter). For kt-PCA, we used the same sampling pattern as the one used for the proposed method. However, for joint sparse, since such a sampling scheme often leads to sub-optimal performance, a variable density random sampling pattern with a densely acquired central k-space (15) was adopted. Furthermore, we used the same rank/model order for kt-PCA as for the proposed method. For joint sparse, the regularization parameter was manually optimized by visual inspection. All three reconstruction methods shared the same set of sensitivity maps. After reconstruction, T2 maps were estimated using VARPRO based on a mono-exponential T2 relaxation model. A discrete set of T2 values, i.e., {1, …, 500} ms, was used as the search grid for VARPRO, giving a T2 resolution of 1 ms.

The second set of data was from an in vivo rat brain T1 mapping experiment, which was approved by the Carnegie Mellon University Institutional Animal Care and Use Committee. The experiment was performed on a Bruker Avance III 7T scanner (Bruker Biospin, Billerica, MA) with a single receiver coil using a saturation recovery spin echo sequence with 16 evenly spaced repetition times from 200 ms to 8520 ms. Other relevant imaging parameters were: FOV = 32 mm × 32 mm, matrix size = 128 × 128, flip angle = 90°, number of slices = 1, and slice thickness = 2 mm.

Similar to the in vivo human data, we performed reconstructions of the contrast-weighted image sequences from retrospectively undersampled data with kt-PCA, joint sparse, and the proposed method. For the T1 experiments, the representative k-p space sparse sampling scheme shown in Fig. 1b was used for kt-PCA and the proposed method, whereas a variable density random sampling pattern was used for joint sparse. For kt-PCA and the proposed method, we again acquired one central k-space readout at every TR as training data to estimate V̂. Three different acceleration factors were considered, i.e., AF = 3.0, 4.0, and 5.0. Similarly, the rank L = 3 was used for kt-PCA and the proposed method, and the regularization parameters for the proposed method and the joint sparse reconstruction were optimized based on visual inspection. After reconstruction, T1 maps were estimated using VARPRO based on a mono-exponential T1 relaxation model. A search grid of T1 values {1, …, 3000} ms was used.

For all image reconstructions, we used the initialization and stopping criteria described in the Theory section. Specifically, ε = 5 × 10−4 and Kmax = 50 were set in Eqs. [13] and [14], respectively, for the stopping criteria. Under these conditions, the ADMM algorithm typically converged within 20 iterations, although the specific number of iterations depended on the number of measurements acquired. The image reconstructions were performed on a workstation with a 3.47 GHz dual hex-core Intel Xeon X5690 processor, 96 GB RAM, Linux, and Matlab R2012a. The computation time was within 7 minutes for all the T2 mapping reconstructions (using the multi-channel data), and within 1 minute for all the T1 mapping reconstructions (using the single-channel data). After reconstruction, the VARPRO algorithm was used to estimate the T1 or T2 maps, which took around 6 seconds for both applications.

To perform quantitative evaluation of the different reconstruction methods, we use the following three metrics: i) voxelwise error $= (\gamma_n - \hat{\gamma}_n)/\gamma_n$, where $\gamma_n$ and $\hat{\gamma}_n$ respectively denote the true and estimated relaxation parameter at the nth voxel; ii) region-of-interest (ROI) error $= \|\gamma_{\mathrm{ROI}} - \hat{\gamma}_{\mathrm{ROI}}\|_2 / \|\gamma_{\mathrm{ROI}}\|_2$, where $\gamma_{\mathrm{ROI}}$ and $\hat{\gamma}_{\mathrm{ROI}}$ respectively contain the true and estimated relaxation parameters in a specific ROI; and iii) overall error $= \|\gamma - \hat{\gamma}\|_2 / \|\gamma\|_2$, where $\gamma$ and $\hat{\gamma}$ respectively denote the true and estimated relaxation parameter maps containing all image voxels.
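
These three metrics can be computed directly from the reference and estimated parameter maps, as in the short sketch below (Python/NumPy; array names are illustrative).

```python
import numpy as np

def voxelwise_error(gamma, gamma_hat):
    """(gamma_n - gamma_hat_n) / gamma_n, evaluated at every voxel."""
    return (gamma - gamma_hat) / gamma

def roi_error(gamma, gamma_hat, roi_mask):
    """Relative l2 error restricted to an ROI (boolean mask)."""
    g, g_hat = gamma[roi_mask], gamma_hat[roi_mask]
    return np.linalg.norm(g - g_hat) / np.linalg.norm(g)

def overall_error(gamma, gamma_hat):
    """Relative l2 error over the whole parameter map."""
    return np.linalg.norm(gamma - gamma_hat) / np.linalg.norm(gamma)
```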

Results

Representative results from the above two sets of data are shown to illustrate the effectiveness of the proposed method.

Figure 2 shows the reconstructed T2 maps of slice 4 of the human brain from the T2 mapping data using joint sparse, kt-PCA, and the proposed method at two AFs (i.e., AF = 4.1 and 8.0). Along with the reconstructions, the corresponding voxelwise error maps are also shown, with the overall errors indicated in the top left corner of the images. As can be seen, at the moderate acceleration of AF = 4.1, all three methods perform fairly well both qualitatively and quantitatively, although the proposed method yields noticeably better performance than the other two methods. As AF is increased to 8.0, the performance of joint sparse and kt-PCA degrades dramatically. Qualitatively, the edge structures of the T2 map obtained by joint sparse are severely smoothed out, while the T2 map obtained by kt-PCA is corrupted by severe artifacts induced by ill-conditioning. Quantitatively, the T2 values from joint sparse and kt-PCA also become much less accurate at AF = 8.0. In contrast, by simultaneously enforcing the low-rank and joint sparsity constraints, the proposed method preserves features much better and significantly reduces artifacts compared to the other two methods. It also yields much more accurate T2 values.

Fig. 2.

Fig. 2

Reconstructed T2 maps of the human brain and associated errors for slice 4 at different AFs. a, b: Row (a) shows the reconstructed T2 maps from joint sparse, kt-PCA, and the proposed method at AF = 4.1 and Row (b) shows the voxelwise error maps and the overall errors for the reconstructions in Row (a). c, d: Row (c) shows the reconstructed T2 maps from joint sparse, kt-PCA, and the proposed method at AF = 8.0 and Row (d) shows the voxelwise error maps and the overall errors for the reconstructions in Row (c).

Figure 3 shows the reconstructed T2 maps using the proposed method at the highest AF (i.e., AF = 8.0) for slices 2, 3, 6, and 8, along with corresponding voxelwise error maps and overall errors. As can be seen, the proposed method has consistent performance across different slice locations at such a high AF.

Fig. 3.

Fig. 3

Reconstructed T2 maps of the human brain and associated errors for slices 2, 3, 6, and 7 at AF = 8.0 using the proposed method. a: the reference T2 maps of the four slices, b: the reconstructed T2 maps from the proposed method of these slices, and c: the voxelwise error maps and the overall errors for the reconstructions in b.

Figure 4 shows the reconstructed T1 maps of the rat brain, the corresponding voxelwise error maps, and the overall errors from the T1 mapping data using joint sparse, kt-PCA, and the proposed method at AF = 3.0 and 5.0. Consistent with the results shown in the T2 mapping example, the proposed method improves over the other two methods, both qualitatively and quantitatively.

Fig. 4.

Fig. 4

Reconstructed T1 maps of the rat brain and associated errors at different AFs. a, b: Row (a) shows the reconstructed T1 maps using joint sparse, kt-PCA, and the proposed method at AF = 3.0 and Row (b) shows the corresponding voxelwise error maps and the overall errors for the reconstructions in Row (a). c, d: Row (c) shows the reconstructed T1 maps using joint sparse, kt-PCA, and the proposed method at AF = 5.0 and Row (d) shows the corresponding voxelwise error maps and the overall errors for the reconstructions in Row (c).

Figure 5 shows the ROI error versus AF for two ROIs, chosen from a region of white matter in the human data (marked in Fig. 2) and a region of the hippocampus in the rat data set (marked in Fig. 4), respectively. This figure further illustrates the improved accuracy obtained with the proposed method. Furthermore, note that although the performance of all three methods degrades as AF increases, the proposed method is more robust than the other two methods with respect to changes in AF, which again demonstrates the benefit offered by simultaneously using the two constraints.

Fig. 5.

Fig. 5

The ROI error with respect to AF. a: Error plot for a ROI in the white matter of the human brain (marked in Fig. 2). b: Error plot for a ROI in the hippocampus of the rat brain (marked in Fig. 4).

Discussion

The effectiveness of integrating low-rank and joint sparsity constraints for accelerated parameter mapping has been demonstrated. It is worthwhile to make further comments on several points. First of all, parametric subspaces estimated from limited training data can accurately capture the underlying relaxation process. For example, for the parameter mapping applications in Ref. (18) and in this work, V̂ estimated from only a single central k-space readout results in accurate parameter values. Since this amount of training data typically comprises only a small portion of the total number of measurements, acquiring training data does not significantly compromise the overall acceleration of parameter mapping experiments. Note that an alternative is to estimate the subspace from an ensemble of relaxation signals generated using a pre-assumed signal model over a range of parameters (23). However, data-driven subspaces estimated from acquired data can capture the underlying relaxation process more faithfully, and they can also provide better robustness to potential signal model mismatches (e.g., multi-exponential relaxation).

The proposed method requires selection of the rank L. Theoretically, a proper rank is determined by the number of distinct tissue types. Practically, as shown in Refs. (18, 23) and this work, L = 3 enables accurate T1 or T2 mapping with a mono-exponential signal model. Note, however, that the optimal choice of L may be different for other parameter mapping applications (e.g., multi-exponential models). A useful way to select L is to first tune it on a reference data set, and then translate the optimally tuned rank to experiments with similar imaging protocols.

In addition to selecting L, the proposed method also requires choosing the regularization parameter λ. In this work, λ was chosen empirically based on visual inspection, which led to good results. A number of alternative methods, such as the L-curve (41) and SURE-based schemes (42), can also be useful for choosing an optimized λ. Furthermore, since the joint sparsity constraint in the proposed method is mainly used as a regularizer to stabilize the reconstruction problem, a relatively large range of λ values results in reconstructions with a similar level of accuracy, as long as stability is achieved.

An explicit rank constraint is enforced in Eq. [5]. Alternatively, the rank constraint can also be imposed implicitly via various surrogate functions (e.g., the nuclear or Schatten-p norm (19, 34)). Since an explicit rank constraint with a pre-estimated subspace has fewer degrees of freedom than an implicit rank constraint (27), it can be more effective for applications with highly undersampled data. Furthermore, considering that very small rank values (e.g., L = 3) are used in MR parameter mapping applications, the explicit rank constraint also leads to a much easier computational problem than the implicit constraint.
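
For contrast with the explicit factorization used here, an implicit (nuclear-norm) rank penalty is typically handled with singular value thresholding, as in the sketch below (Python/NumPy); this is shown only to illustrate the alternative mentioned above and is not part of the proposed algorithm.

```python
import numpy as np

def singular_value_threshold(C, tau):
    """Proximal operator of tau * nuclear norm: shrink all singular values by tau.

    Each application requires a full SVD of the N x M Casorati matrix, whereas the
    explicit factorization C = U V_hat fixes the rank at a small L up front.
    """
    Uc, s, Vh = np.linalg.svd(C, full_matrices=False)
    return (Uc * np.maximum(s - tau, 0.0)) @ Vh
```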

This work demonstrates the superior performance of the proposed method over the kt-PCA and joint sparse reconstructions. From a modeling perspective, the proposed method employs a simultaneous low-rank and sparse model, whereas kt-PCA and the joint sparse reconstruction are based on the low-rank model and the sparse model individually. Both this work and the earlier work in dynamic imaging (27, 34) reveal that the low-rank and sparsity constraints play complementary roles, which can lead to significantly improved performance over using a single constraint for sparse sampling. Furthermore, it is worth noting that beyond these imaging results, recent theoretical analysis has also provided useful insights into the benefits of such simultaneously structured modeling (43).

We integrated the proposed signal model with the SENSE-based parallel imaging technique and estimated the sensitivity maps from a pilot scan. For the brain imaging applications considered in this work, since there is no severe motion, the proposed method provided good accuracy. In the case of significant motion between the pilot scan and the parameter mapping scan, inaccurate sensitivity maps can degrade performance. In this case, however, self-calibrating parallel imaging methods (e.g., self-calibrated SENSE or SPIRiT (9)), which have improved robustness to motion, can be used together with the proposed model. Alternatively, we can always perform channel-by-channel reconstruction of the image sequences and then estimate the parameter maps from sum-of-squares reconstructions.

We showed one set of representative sampling schemes for the proposed method in Fig. 1. It is worth noting that with the pre-estimated parametric subspace V̂, the proposed method allows for flexible design of sparse sampling schemes. Various alternative acquisition patterns are also feasible. In particular, the sampling of the actual imaging data does not have to be random. Preliminary results (not shown in the paper) indicate that the proposed method yields reconstructions of similar accuracy even when the imaging data are acquired in a structured manner (e.g., using lattice sampling). Note, however, that designing an optimal sampling scheme for the proposed method remains an interesting open problem that requires further systematic research.

Despite the appealing performance demonstrated in this work, several aspects of the proposed method warrant further research. First, establishing its resolution properties is very important. However, similar to other compressive sensing techniques, the proposed method involves a nonlinear reconstruction process, for which rigorous resolution quantification is still an open problem. Furthermore, beyond the proof-of-concept study in this work, it is worth evaluating the clinical utility of the proposed method for specific parameter mapping applications. Finally, from a signal processing perspective, it would be useful to gain a better understanding of simultaneous low-rank and sparse modeling, such as the theoretical limit of the model in terms of sparse sampling.

Conclusions

In this note, a new constrained reconstruction method is proposed to accelerate MR parameter mapping. It effectively integrates the low-rank and joint sparsity constraints into a unified mathematical formulation. With data-driven pre-estimation of the parametric subspace, the proposed formulation results in a convex optimization problem, which is solved by an efficient algorithm based on the ADMM. Representative results from two sets of in vivo data demonstrate that, when parameter mapping experiments are highly accelerated, the proposed method significantly improves, both qualitatively and quantitatively, over state-of-the-art methods that use only a low-rank constraint or a joint sparsity constraint. The proposed method should prove useful for fast MR parameter mapping with sparse sampling.

Supplementary Material

Supp Material

Acknowledgement

This work was supported in part by the National Institutes of Health under Grants NIH-P41-EB015904, NIH-P41-EB001977, and NIH-1RO1-EB013695. B. Zhao would like to thank Bryan Clifford for help with skull removal for the human brain data set.

Appendix

We present the specific procedures to solve the subproblems in Eqs. [8]–[10]. Note that Eq. [8] can be rewritten as

$G_{k+1} = \arg\min_{G} \frac{1}{2} \left\| G - D_x U_k \hat{V} + \frac{1}{\mu_1} Y_k \right\|_F^2 + \frac{\lambda}{\mu_1} \|G\|_{2,1} = \arg\min_{G} \frac{1}{2} \left\| G - D_x U_k \hat{V} + \frac{1}{\mu_1} Y_k \right\|_F^2 + \frac{\lambda}{\mu_1} \sum_{n=1}^{N} \|G^{(n)}\|_2,$  [15]

where G(n) denotes the nth row of G. It can be shown that Eq. [15] is separable with respect to each row of G. Solving Eq. [15] is equivalent to solving

$G_{k+1}^{(n)} = \arg\min_{G^{(n)}} \frac{1}{2} \left\| G^{(n)} - \left( D_x^{(n)} U_k \hat{V} - \frac{1}{\mu_1} Y_k^{(n)} \right) \right\|_2^2 + \frac{\lambda}{\mu_1} \|G^{(n)}\|_2,$  [16]

for n = 1, … , N. This problem admits a closed-form solution, which can be obtained via the following soft-thresholding operation, i.e.,

$G_{k+1}^{(n)} = \frac{T_k^{(n)}}{\|T_k^{(n)}\|_2} \max\left\{ \|T_k^{(n)}\|_2 - \frac{\lambda}{\mu_1}, 0 \right\}, \quad n = 1, \ldots, N,$  [17]

where $T_k^{(n)} = D_x^{(n)} U_k \hat{V} - \frac{1}{\mu_1} Y_k^{(n)}$.
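
Eq. [17] is a row-wise (group) soft-thresholding of the matrix T_k; a minimal vectorized sketch (Python/NumPy, illustrative names only) is given below. The update in Eq. [18] is obtained by applying the same operation with τ = λ/μ2 to Q_k.

```python
import numpy as np

def row_soft_threshold(T, tau):
    """Shrink the l2 norm of every row of T by tau (the closed form in Eq. [17])."""
    norms = np.linalg.norm(T, axis=1, keepdims=True)
    scale = np.maximum(norms - tau, 0.0) / np.maximum(norms, np.finfo(float).tiny)
    return scale * T
```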

The subproblem in Eq. [9] can be solved by a procedure very similar to the one above. Specifically, each row of Hk+1 can be obtained as follows:

$H_{k+1}^{(n)} = \frac{Q_k^{(n)}}{\|Q_k^{(n)}\|_2} \max\left\{ \|Q_k^{(n)}\|_2 - \frac{\lambda}{\mu_2}, 0 \right\}, \quad n = 1, \ldots, N,$  [18]

where $Q_k^{(n)} = D_y^{(n)} U_k \hat{V} - \frac{1}{\mu_2} Z_k^{(n)}$.

For the subproblem in Eq. [10], note that it can be written as

$U_{k+1} = \arg\min_{U} \sum_{c=1}^{N_c} \left\| d_c - \Omega(F S_c U \hat{V}) \right\|_2^2 + \frac{\mu_1}{2} \left\| G_{k+1} - D_x U \hat{V} + \frac{1}{\mu_1} Y_k \right\|_F^2 + \frac{\mu_2}{2} \left\| H_{k+1} - D_y U \hat{V} + \frac{1}{\mu_2} Z_k \right\|_F^2,$  [19]

which is a large-scale quadratic optimization problem that can be efficiently solved by a number of numerical algorithms. Here, the conjugate gradient (CG) algorithm is applied, with U initialized by Uk from the previous iteration. Furthermore, it should be noted that in the CG iterations, the sampling operator Ω and the Fourier encoding matrix F do not have to be stored explicitly, since their actions can be evaluated via fast operations (e.g., masking and the fast Fourier transform).
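
As an illustration of the last point, the gradient of the data-fidelity term in Eq. [19] can be evaluated without ever forming Ω or F: F and F^H are applied as FFTs and Ω^H Ω as a binary mask. The sketch below (Python/NumPy; shapes, names, and the zero-filled storage of the measured data are assumptions made for illustration) shows such a matrix-free evaluation, which is the building block a CG solver would call repeatedly.

```python
import numpy as np

def data_gradient(U, V_hat, sens_maps, mask, data_zf):
    """Gradient of sum_c ||d_c - Omega(F S_c U V_hat)||_2^2 with respect to U.

    U         : (N, L) spatial coefficients, N = Nx*Ny
    V_hat     : (L, M) pre-estimated parametric subspace
    sens_maps : (Nc, Nx, Ny) coil sensitivities
    mask      : (Nx, Ny, M) boolean sampling pattern (Omega)
    data_zf   : (Nc, Nx, Ny, M) acquired k-space, zero-filled at unsampled points
    """
    Nc, Nx, Ny = sens_maps.shape
    M = mask.shape[-1]
    C = (U @ V_hat).reshape(Nx, Ny, M)                   # current image sequence
    grad = np.zeros(U.shape, dtype=complex)
    for c in range(Nc):
        Sc = sens_maps[c][..., None]
        k_full = np.fft.fft2(Sc * C, axes=(0, 1), norm="ortho")               # F S_c C
        resid = mask * k_full - data_zf[c]               # Omega^H (Omega(.) - d_c)
        back = np.conj(Sc) * np.fft.ifft2(resid, axes=(0, 1), norm="ortho")   # S_c^H F^H
        grad += 2 * back.reshape(Nx * Ny, M) @ V_hat.conj().T
    return grad
```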

Footnotes

Submission category: Note

References

  • 1.Hauser RA, Olanow CW. Magnetic resonance imaging of neurodegenerative diseases. J Neuroimaging. 1994;4:146–158. doi: 10.1111/jon199443146. [DOI] [PubMed] [Google Scholar]
  • 2.Iles L, Pfluger H, Phrommintikul A, Cherayath J, Aksit P, Gupta SN, Kaye DM, Taylor AJ. Evaluation of diffuse myocardial fibrosis in heart failure with cardiac magnetic resonance contrast-enhanced T1 mapping. J Am Coll Cardiol. 2008;52:1574–1580. doi: 10.1016/j.jacc.2008.06.049. [DOI] [PubMed] [Google Scholar]
  • 3.Liu W, Dahnke H, Rahmer J, Jordan EK, Frank JA. Ultrashort T2* relaxometry for quantitation of highly concentrated superparamagnetic iron oxide (SPIO) nanoparticle labeled cells. Magn Reson Med. 2009;61:761–766. doi: 10.1002/mrm.21923. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Souza RB, Feeley BT, Zarins ZA, Link TM, Li X, Majumdar S. T1rho MRI relaxation in knee OA subjects with varying sizes of cartilage lesions. Knee. 2013;20:113–119. doi: 10.1016/j.knee.2012.10.018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Deoni SC, Rutt BK, Peters TM. Rapid combined T1 and T2 mapping using gradient recalled acquisition in the steady state. Magn Reson Med. 2003;49:515–526. doi: 10.1002/mrm.10407. [DOI] [PubMed] [Google Scholar]
  • 6.Ma D, Gulani V, Seiberlich N, Liu K, Sunshine J, Duerk J, Griswold M. Magnetic resonance fingerprinting. Nature. 2013;495:187–192. doi: 10.1038/nature11971. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Griswold MA, Jakob PM, Heidemann RM, Nittka M, Jellus V, Wang J, Kiefer B, Haase A. Generalized autocalibrating partially parallel acquisitions (GRAPPA) Magn Reson Med. 2002;47:1202–1210. doi: 10.1002/mrm.10171. [DOI] [PubMed] [Google Scholar]
  • 8.Pruessmann KP, Weiger M, Scheidegger MB, Boesiger P. SENSE: Sensitivity encoding for fast MRI. Magn Reson Med. 1999;42:952–962. [PubMed] [Google Scholar]
  • 9.Lustig M, Pauly JM. SPIRiT: Iterative self-consistent parallel imaging reconstruction from arbitrary k-space. Magn Reson Med. 2010;64:457–471. doi: 10.1002/mrm.22428. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Ying L, Liang ZP. Parallel MRI using phased array coils. IEEE Signal Process Mag. 2010;27:90–98. [Google Scholar]
  • 11.Velikina JV, Alexander AL, Samsonov A. Accelerating MR parameter mapping using sparsity-promoting regularization in parametric dimension. Magn Reson Med. 2013;70:1263–1273. doi: 10.1002/mrm.24577. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Doneva M, Bornert P, Eggers H, Stehning C, Senegas J, Mertins A. Compressed sensing reconstruction for magnetic resonance parameter mapping. Magn Reson Med. 2010;64:1114–1120. doi: 10.1002/mrm.22483. [DOI] [PubMed] [Google Scholar]
  • 13.Feng L, Otazo R, Jung H, Jensen JH, Ye JC, Sodickson DK, Kim D. Accelerated cardiac T2 mapping using breath-hold multiecho fast spin-echo pulse sequence with k-t FOCUSS. Magn Reson Med. 2011;65:1661–1669. doi: 10.1002/mrm.22756. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Bilgic B, Goyal VK, Adalsteinsson E. Multi-contrast reconstruction with Bayesian compressed sensing. Magn Reson Med. 2011;66:1601–1615. doi: 10.1002/mrm.22956. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Majumdar A, Ward RK. Accelerating multi-echo T2 weighted MR imaging: Analysis prior group-sparse optimization. J Magn Reson Imaging. 2011;210:90–97. doi: 10.1016/j.jmr.2011.02.015. [DOI] [PubMed] [Google Scholar]
  • 16.Yuan J, Liang D, Zhao F, Li Y, Xiang YX, Ying L. k-t ISD compressed sensing reconstruction for T1 mapping: A study in rat brains at 3T; Proceedings of the 20th Annual Meeting of ISMRM; Melbourne, Australia. 2012. p. 4197. [Google Scholar]
  • 17.Huang J, Chen C, Axel L. Fast multi-contrast MRI reconstruction; Proc. MICCAI; 2012. pp. 281–288. [DOI] [PubMed] [Google Scholar]
  • 18.Petzschner FH, Ponce IP, Blaimer M, Jakob PM, Breuer FA. Fast MR parameter mapping using k-t principal component analysis. Magn Reson Med. 2011;66:706–716. doi: 10.1002/mrm.22826. [DOI] [PubMed] [Google Scholar]
  • 19.Zhang T, Pauly JM, Levesque IR. Accelerating parameter mapping with a locally low rank constraint. Magn Reson Med. 2014 doi: 10.1002/mrm.25161. doi:10.1002/mrm.25161. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Haldar JP, Hernando D, Liang ZP. Super-resolution reconstruction of MR image sequences with contrast modeling; Proceedings of IEEE International Symposium on Biomedical Imaging; Boston, USA. 2009. pp. 266–269. [Google Scholar]
  • 21.Block K, Uecker M, Frahm J. Model-based iterative reconstruction for radial fast spin-echo MRI. IEEE Trans Med Imaging. 2009;28:1759–1769. doi: 10.1109/TMI.2009.2023119. [DOI] [PubMed] [Google Scholar]
  • 22.Sumpf TJ, Uecker M, Boretius S, Frahm J. Model-based nonlinear inverse reconstruction for T2 mapping using highly undersampled spin-echo MRI. J Magn Reson Imaging. 2011;34:420–428. doi: 10.1002/jmri.22634. [DOI] [PubMed] [Google Scholar]
  • 23.Huang C, Graff CG, Clarkson EW, Bilgin A, Altbach MI. T2 mapping from highly undersampled data by reconstruction of principal component coefficient maps using compressed sensing. Magn Reson Med. 2012;67:1355–1366. doi: 10.1002/mrm.23128. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Zhao B, Lu W, Liang ZP. Highly accelerated parameter mapping with joint partial separability and sparsity constraints; Proceedings of the 20th Annual Meeting of ISMRM; Melbourne, Australia. 2012. p. 2233. [Google Scholar]
  • 25.Zhao B, Hitchens TK, Christodoulou AG, Lam F, Wu YL, Ho C, Liang ZP. Accelerated 3D UTE relaxometry for quantification of iron-oxide labeled cells; Proceedings of the 21st Annual Meeting of ISMRM; Salt Lake City, USA. 2013. p. 2455. [Google Scholar]
  • 26.Zhao B, Lam F, Liang ZP. Model-based MR parameter mapping with sparsity constraints: Parameter estimation and performance bounds. IEEE Trans Med Imaging. 2014 doi: 10.1109/TMI.2014.2322815. doi:10.1109/TMI.2014.2322815. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Zhao B, Haldar J, Christodoulou A, Liang ZP. Image reconstruction from highly undersampled (k, t)-space data with joint partial separability and sparsity constraints. IEEE Trans Med Imaging. 2012;31:1809–1820. doi: 10.1109/TMI.2012.2203921. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Yang J, Zhang Y. Alternating direction algorithms for ℓ1-problems in compressive sensing. SIAM J Sci Comput. 2011;33:250–278. [Google Scholar]
  • 29.Ramani S, Fessler J. Parallel MR image reconstruction using augmented Lagrangian methods. IEEE Trans Med Imaging. 2011;30:694–706. doi: 10.1109/TMI.2010.2093536. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Boyd S, Parikh N, Chu E, Peleato B, Eckstein J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundation and Trends in Machine Learning. 2011;3:1–122. [Google Scholar]
  • 31.Liang ZP. Spatiotemporal imaging with partially separable functions; Proceedings of IEEE International Symposium on Biomedical Imaging; Washington DC, USA. 2007. pp. 988–991. [Google Scholar]
  • 32.Haldar JP, Liang ZP. Spatiotemporal imaging with partially separable functions: A matrix recovery approach; Proceedings of IEEE International Symposium on Biomedical Imaging; Rotterdam, Netherlands. 2010. pp. 716–719. [Google Scholar]
  • 33.Zhao B, Haldar J, Brinegar C, Liang ZP. Low rank matrix recovery for real-time cardiac MRI; Proceedings of IEEE International Symposium on Biomedical Imaging; Rotterdam, Netherlands. 2010. pp. 996–999. [Google Scholar]
  • 34.Lingala S, Hu Y, DiBella E, Jacob M. Accelerated dynamic MRI exploiting sparsity and low-rank structure: k-t SLR. IEEE Trans Med Imaging. 2011;30:1042–1054. doi: 10.1109/TMI.2010.2100850. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35.Sen Gupta A, Liang ZP. Dynamic imaging by temporal modeling with principle component analysis; Proceedings of the 9th Annual Meeting of ISMRM; Glasgow, Scotland. 2001. p. 10. [Google Scholar]
  • 36.Pedersen H, Kozerke S, Ringgaard S, Nehrke K, Kim WY. k-t PCA: Temporally constrained k-t BLAST reconstruction using principal component analysis. Magn Reson Med. 2009;62:706–716. doi: 10.1002/mrm.22052. [DOI] [PubMed] [Google Scholar]
  • 37.Golub G, Pereyra V. Separable nonlinear least squares: the variable projection method and its applications. Inverse Problems. 2003;19:R1–R26. [Google Scholar]
  • 38.Haldar JP, Anderson J, Sun SW. Maximum likelihood estimation of T1 relaxation parameters using VARPRO; Proceedings of the 15th Annual Meeting of ISMRM; Berlin, Germany. 2007. p. 41. [Google Scholar]
  • 39.Trasko J, Mostardi PM, Riederer SJ, Manduca A. A simplified nonlinear fitting strategy for estimating T1 from variable flip angle sequences; Proceedings of the 19th Annual Meeting of ISMRM; Montreal, Canada. 2011. p. 4561. [Google Scholar]
  • 40.Barral JK, Gudmundson E, Stikov N, Etezadi-Amoli M, Stoica P, Nishimura DG. A robust methodology for in vivo T1 mapping. Magn Reson Med. 2010;64:1057–1067. doi: 10.1002/mrm.22497. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41.Vogel CR. Computational methods for Inverse Problems. SIAM; Philadelphia: 2002. [Google Scholar]
  • 42.Ramani S, Liu Z, Rosen J, Nielsen JF, Fessler J. Regularization parameter selection for nonlinear iterative image restoration and MRI reconstruction using GCV and SURE-based methods. IEEE Trans Image Process. 2012;21:3659–3672. doi: 10.1109/TIP.2012.2195015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43.Lam F, Ma C, Liang ZP. Performance analysis of denoising with low-rank and sparsity constraints; Proceedings of the IEEE International Symposium on Biomedical Imaging; San Francisco, USA. 2013. pp. 1223–1226. [Google Scholar]
