Author manuscript; available in PMC: 2020 Aug 1.
Published in final edited form as: IEEE Trans Med Imaging. 2018 Dec 11;38(8):1841–1851. doi: 10.1109/TMI.2018.2886290

A Generalized Structured Low-Rank Matrix Completion Algorithm for MR Image Recovery

Yue Hu 1, Xiaohan Liu 2, Mathews Jacob 3
PMCID: PMC6559879  NIHMSID: NIHMS1009543  PMID: 30561342

Abstract

Recent theory mapping an image into a structured low-rank Toeplitz or Hankel matrix has emerged as an effective method for image recovery. In this paper, we introduce a generalized structured low-rank algorithm to recover images from their undersampled Fourier coefficients using infimal convolution regularization. The image is modeled as the superposition of a piecewise constant component and a piecewise linear component. The Fourier coefficients of each component satisfy an annihilation relation, which gives rise to a structured Toeplitz matrix for each component. We exploit the low-rank property of these matrices to formulate a combined regularized optimization problem. To solve the problem efficiently and to avoid the high memory demand resulting from the large-scale Toeplitz matrices, we introduce a fast and memory-efficient algorithm based on the half-circulant approximation of the Toeplitz matrix. We demonstrate our algorithm in the context of single- and multi-channel MR image recovery. Numerical experiments indicate that the proposed algorithm provides improved recovery performance over state-of-the-art approaches.

Keywords: Structured low-rank matrix, infimal convolution, compressed sensing, image recovery

I. Introduction

The recovery of images from their limited and noisy measurements is an important problem in a wide range of biomedical imaging applications, including microscopy [1], magnetic resonance imaging (MRI) [2], and computed tomography [3]. The common approach is to formulate the image reconstruction as an optimization problem, where the criterion is a linear combination of a data consistency error and a regularization penalty. The regularization penalties are usually chosen to exploit smoothness or sparsity priors in the discrete image domain. For example, compressed sensing methods are capable of recovering MR images from their partial k space measurements [2] using the L1 norm in the total variation (TV) or wavelet domain. The reconstruction performance is determined by the effectiveness of the regularization. To improve the quality of the reconstructed images, several extensions and generalizations of TV have also been proposed, such as total generalized variation (TGV) [4], [5], Hessian-based norm regularization [6], and higher degree total variation (HDTV) [7], [8]. All of these regularization penalties are formulated in the discrete domain, and hence suffer from discretization errors as well as a lack of rotation invariance.

Recently, a new family of reconstruction methods, based on the low-rank property of structured Hankel or Toeplitz matrices built from the Fourier coefficients of the image, has been introduced as a powerful continuous domain alternative to the above discrete domain penalties [9], [10], [11], [12], [13]. Since these methods minimize discretization errors and rotation dependence, they provide improved reconstructions. These algorithms can be viewed as multidimensional extensions of the finite-rate-of-innovation (FRI) framework [14], [15]. All of these methods exploit the "annihilation property", which implies that image derivatives are annihilated by multiplication with a bandlimited polynomial function in the spatial domain; this image domain relation translates to a convolutional annihilation relationship in the Fourier domain. Since the locations of the discontinuities are not isolated in the multidimensional setting, the theoretical tools used to show perfect recovery are very different from the 1-D FRI setting [11], [16]. The convolution relations are compactly represented as a multiplication between a block Hankel structured matrix and the Fourier coefficients of the filter. It has been shown that the above structured matrix is low-rank, which allows the recovery of the unknown matrix entries using structured low-rank matrix completion. Empirical results show improved performance over classical total variation methods [17], [16], [9], [18]. Haldar proposed a Hankel structured low-rank matrix algorithm (LORAKS) for the reconstruction of single-coil MR images [10] under the assumption that the image has limited spatial support and smooth phase. The effectiveness of the algorithm was also investigated in parallel MRI [19], [20], [21].

In this paper, we extend the structured low-rank framework to recover the sum of two piecewise smooth functions from their sparse measurements. This work is inspired by the infimal convolution framework [22], where the sum of a piecewise constant and a piecewise linear function was recovered; the infimal convolution of functions with first and second order derivatives was considered as the penalty in [22]. The algorithm was then applied in a general discrete setting for image denoising [23] to obtain improved performance over standard TV. The extension of TV using infimal convolution (ICTV) was applied to video and image reconstruction in [24]. The infimal convolution of TGV (ICTGV) was proposed in the context of dynamic MRI reconstruction by balancing the weights of the spatial and temporal regularizations [25]. In [26] and [27], the infimal convolution of two total variation Bregman distances is applied to exploit structural information in the reconstruction of dynamic MR datasets and the joint reconstruction of PET-MRI, respectively. In [28], the authors adapted the robust PCA method [29] to dynamic MRI, where the dataset is decomposed into low-rank and sparse components (L+S). In [30], instead of imposing low-rank assumptions, the k-t PCA method was improved using model consistency constraints (MOCCO) to obtain temporal basis functions from low resolution dynamic MR data. In this paper, we propose to model the image as the combination of a piecewise constant component and a piecewise linear component. For the piecewise constant component, the Fourier coefficients of the gradient of the component satisfy an annihilation relation. We thus build a structured Toeplitz matrix, which can be proved to be low-rank. Similarly, we can obtain a structured low-rank Toeplitz matrix from the Fourier coefficients of the second order partial derivatives of the piecewise linear component.
By introducing the generalized structured low-rank method, the image can be automatically separated into components where either the strong edges and feature details or the smooth regions of the image can be accurately recovered. Thus, the optimal balance can be obtained between the first order and higher order penalties.

Since the proposed method involves the recovery of large-scale Toeplitz matrices lifted from the first and second order derivatives, its implementation is associated with high computational complexity and memory demand. To solve the corresponding optimization problem efficiently, we introduce an algorithm based on the half-circulant approximation of Toeplitz matrices, which is a generalization of the Generic Iteratively Reweighted Annihilating Filter (GIRAF) algorithm proposed in [17], [31]. This algorithm alternates between the estimation of the annihilation filter of the image, and the computation of the image annihilated by the filter in a least squares formulation. By replacing the linear convolution with a circular convolution, the algorithm can be implemented efficiently using fast Fourier transforms, which significantly reduces the computational complexity and the memory demand. We investigate the performance of the algorithm in the context of compressed sensing MR image reconstruction. Experiments show that the proposed method is capable of providing more accurate recovery results than state-of-the-art algorithms. A preliminary version of this work was published as a conference paper [32]. Compared to [32], the theoretical and algorithmic frameworks are further developed here. We provide significantly more validation in the current version, in addition to the generalization of the method to the parallel MRI setting.

II. Generalized Structured Low-Rank Matrix Recovery

A. Image recovery model

We consider the recovery of a discrete 2-D image ρ ∈ ℂ^N from its noisy and degraded measurements b ∈ ℂ^M. We model the measurements as b = A(ρ) + n, where A: ℂ^N → ℂ^M is a linear degradation operator which maps ρ to b, and n ∈ ℂ^M is assumed to be white Gaussian noise. Since the recovery of ρ from the measurements b is ill-posed in many practical cases, the general approach is to pose the recovery as a regularized optimization problem, i.e.,

\rho^{\star} = \arg\min_{\rho} \|\mathcal{A}(\rho) - b\|^{2} + \lambda\, J(\rho) \qquad (1)

where ‖A(ρ) − b‖² is the data consistency term, λ is the balancing parameter, and J(ρ) is the regularization term, which determines the quality of the recovered image. Common choices for the regularization term include total variation, wavelets, and their combinations. Researchers have also proposed extensions of total variation [4], [6], [8] to improve the performance.
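As a toy illustration of the variational formulation in (1), the sketch below recovers an image from undersampled Fourier coefficients using a simple ℓ2 (Tikhonov) penalty standing in for J(ρ); the function name and the closed-form k-space solve are illustrative assumptions, not the paper's method.

```python
import numpy as np

def recover_l2(b, mask, lam=0.1):
    """Closed-form solution of min_x ||M F x - b||^2 + lam ||x||^2
    for a Fourier undersampling operator A = M F (sampling mask M,
    unitary FFT F). A*A is diagonal in k space, so the normal
    equations can be inverted pointwise."""
    # A* b: zero-filled k-space array of the measured coefficients
    k = np.zeros(mask.shape, dtype=complex)
    k[mask] = b
    # (A*A + lam I)^(-1) A* b, evaluated coefficient-by-coefficient
    k_hat = k / (mask.astype(float) + lam)
    return np.fft.ifft2(k_hat, norm="ortho")
```

With a richer penalty such as TV, the solve is no longer diagonal and iterative methods (as developed in Section III) are needed; the closed-form version above only illustrates the structure of (1).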

B. Structured low-rank matrix completion

Consider the general model for a 2-D piecewise smooth image ρ(r) at the spatial location r = (x, y) ∈ ℤ2:

\rho(\mathbf{r}) = \sum_{i=1}^{N} g_i(\mathbf{r})\, \chi_{\Omega_i}(\mathbf{r}) \qquad (2)

where χ_{Ω_i} is the characteristic function of the set Ω_i and the functions g_i(r) are smooth polynomial functions which are annihilated by a collection of differential operators D = {D_1, …, D_N} within the region Ω_i. It is proved that, under certain assumptions on the edge set ∂Ω = ∪_{i=1}^{N} ∂Ω_i, the Fourier transform of the derivatives of ρ(r) satisfies an annihilation property [11]. We assume that a bandlimited trigonometric polynomial function µ(r) vanishes on the edge set of the image:

\mu(\mathbf{r}) = \sum_{\mathbf{k} \in \Delta_1} c[\mathbf{k}]\, e^{j 2\pi \langle \mathbf{k}, \mathbf{r} \rangle} \qquad (3)

where c[k] denotes the Fourier coefficients of µ and ∆_1 is a finite subset of ℤ². According to [13], the family of functions in (2) is a general model that includes many common image models, obtained by choosing different sets of differential operators D.

  • 1)
    Piecewise constant images: Assume ρ_1(r) is a piecewise constant image function. The first order partial derivatives of the image, D_1 ρ_1 = ∇ρ_1 = (∂_x ρ_1, ∂_y ρ_1), are annihilated by multiplication with µ_1 in the spatial domain:

    \mu_1 \, \nabla \rho_1 = 0 \qquad (4)

    The multiplication in the spatial domain translates to a convolution in the Fourier domain, expressed as:

    \sum_{\mathbf{l} \in \Delta_1} \widehat{\nabla \rho_1}[\mathbf{k} - \mathbf{l}] \, c_1[\mathbf{l}] = 0, \quad \forall \mathbf{k} \in \mathbb{Z}^2 \qquad (5)

    where \widehat{\nabla \rho_1}[\mathbf{k}] = j 2\pi \left(k_x \hat{\rho}_1[\mathbf{k}],\, k_y \hat{\rho}_1[\mathbf{k}]\right) for k = (k_x, k_y). Thus the annihilation property can be formulated as a matrix multiplication:

    \mathcal{T}_1(\hat{\rho}_1)\, c_1 = \begin{bmatrix} \mathcal{T}_x(\hat{\rho}_1) \\ \mathcal{T}_y(\hat{\rho}_1) \end{bmatrix} c_1 = 0 \qquad (6)

    where T_1(ρ̂_1) is a Toeplitz matrix built from the entries of ρ̂_1, the Fourier coefficients of ρ_1. Specifically, T_x(ρ̂_1) and T_y(ρ̂_1) are matrices corresponding to the discrete 2-D convolution with k_x ρ̂_1[k] and k_y ρ̂_1[k] for (k_x, k_y) ∈ Γ, omitting the irrelevant factor j2π. Here c_1 is the vectorized version of the filter c_1[k]. Note that c_1 is supported in ∆_1. Thus, we can obtain:

    \widehat{\nabla \rho_1}[\mathbf{k}] \ast d[\mathbf{k}] = 0, \quad \forall \mathbf{k} \in \Gamma \qquad (7)

    Here d[k] = c_1[k] ∗ h[k], where h[k] is any FIR filter; note that ∆_1 is smaller than Γ, the support of d. Thus, if we take a filter size larger than that of the minimal filter c_1[k], the annihilation matrix has a larger null space, and therefore T_1(ρ̂_1) is a low-rank matrix. The method corresponding to this case is referred to as the first order structured low-rank algorithm (first order SLA) for simplicity.
  • 2)
    Piecewise linear images: Assume ρ_2(r) is a piecewise linear image function. It can be proved that the second order partial derivatives of the image, D_2 ρ_2 = (∂²_{xx} ρ_2, ∂²_{xy} ρ_2, ∂²_{yy} ρ_2), satisfy the annihilation property [13]

    \mu_2^2 \, \mathcal{D}_2 \rho_2 = 0 \qquad (8)

    Thus, the Fourier transform of D_2 ρ_2 is annihilated by convolution with the Fourier coefficients c_2[k], k ∈ ∆_2, of µ_2²:

    \sum_{\mathbf{l} \in \Delta_2} \widehat{\mathcal{D}_2 \rho_2}[\mathbf{k} - \mathbf{l}] \, c_2[\mathbf{l}] = 0, \quad \forall \mathbf{k} \in \mathbb{Z}^2 \qquad (9)

    where \widehat{\mathcal{D}_2 \rho_2}[\mathbf{k}] = (j 2\pi)^2 \left(k_x^2 \hat{\rho}_2[\mathbf{k}],\, k_x k_y \hat{\rho}_2[\mathbf{k}],\, k_y^2 \hat{\rho}_2[\mathbf{k}]\right) for k = (k_x, k_y). Similarly, the annihilation relation can be expressed as:

    \mathcal{T}_2(\hat{\rho}_2)\, c_2 = \begin{bmatrix} \mathcal{T}_{xx}(\hat{\rho}_2) \\ \mathcal{T}_{xy}(\hat{\rho}_2) \\ \mathcal{T}_{yy}(\hat{\rho}_2) \end{bmatrix} c_2 = 0 \qquad (10)

    where T_2(ρ̂_2) is a Toeplitz matrix. T_{xx}(ρ̂_2), T_{xy}(ρ̂_2), and T_{yy}(ρ̂_2) are matrices corresponding to the discrete convolution with k_x² ρ̂_2[k], k_x k_y ρ̂_2[k], and k_y² ρ̂_2[k], omitting the insignificant factor, and c_2 is the vectorized version of the filter c_2[k]. Similar to the piecewise constant case, the Toeplitz matrix T_2(ρ̂_2) is also low-rank. The method exploiting the low-rank property of T_2(ρ̂_2) is referred to as the second order structured low-rank algorithm (second order SLA).
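The low-rank claim can be checked numerically in 1-D. The sketch below uses a hypothetical helper `lifted_matrix` (an assumption of this illustration, not the paper's code) to build the windowed matrix of the derivative's Fourier coefficients for a piecewise constant signal with three jumps; with a filter support larger than the minimal one, the numerical rank equals the number of jumps.

```python
import numpy as np

def lifted_matrix(fhat, filt_len):
    """Rows are sliding windows of fhat, so T @ c evaluates the
    (valid part of the) linear convolution, i.e. the annihilation."""
    rows = len(fhat) - filt_len + 1
    return np.array([fhat[i:i + filt_len] for i in range(rows)])

N = 64
rho = np.zeros(N)
for pos, amp in [(10, 1.0), (30, -2.0), (50, 1.0)]:  # 3 jumps, amplitudes sum to 0
    rho[pos:] += amp

d = rho - np.roll(rho, 1)        # discrete derivative: a 3-sparse spike train
dhat = np.fft.fft(d)             # hence a sum of 3 complex exponentials in k
T = lifted_matrix(dhat, filt_len=8)   # filter support larger than minimal (4)
s = np.linalg.svd(T, compute_uv=False)
rank = int(np.sum(s > 1e-8 * s[0]))   # numerical rank = number of jumps = 3
```

The rank equals the number of discontinuities regardless of the (sufficiently large) filter size, which is the redundancy the matrix completion exploits.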

C. Generalized structured low-rank image recovery (GSLR)

We assume that a 2-D image ρ is a piecewise smooth function, which can be decomposed into two components ρ = ρ1 + ρ2, such that ρ1 represents the piecewise constant component of ρ, while ρ2 represents the piecewise linear component of ρ. We assume that the gradient of ρ1 and the second derivative of ρ2 vanish on the zero sets of µ1 and µ2, respectively. Based on the analysis in Section II-B, this relation translates into convolution relations between the weighted Fourier coefficients of ρ1 and ρ2 and the filters c1 and c2, respectively. Inspired by the concept of infimal convolution, we consider a combined regularization procedure, in which we formulate the reconstruction of the Fourier data ρ̂ from the undersampled measurements b as follows:

\min_{\hat{\rho}_1 + \hat{\rho}_2 = \hat{\rho}} \; \mathrm{rank}\left[\mathcal{T}_1(\hat{\rho}_1)\right] + \mathrm{rank}\left[\mathcal{T}_2(\hat{\rho}_2)\right] \quad \text{such that } b = \mathcal{A}(\hat{\rho}) + n \qquad (11)

Since the above problem is NP-hard, we choose the Schatten p norm (0 ≤ p ≤ 1) as the relaxation, which turns (11) into the following optimization problem:

\{\hat{\rho}_1, \hat{\rho}_2\} = \arg\min_{\hat{\rho}_1, \hat{\rho}_2} \; \lambda_1 \|\mathcal{T}_1(\hat{\rho}_1)\|_p + \lambda_2 \|\mathcal{T}_2(\hat{\rho}_2)\|_p + \|\mathcal{A}(\hat{\rho}_1 + \hat{\rho}_2) - b\|^2 \qquad (12)

where T_1(ρ̂_1) and T_2(ρ̂_2) are the structured Toeplitz matrices in the lifted domains weighted by the first and second order partial derivatives, respectively. λ_1 and λ_2 are regularization parameters which balance the data consistency and the degree to which T_1(ρ̂_1) and T_2(ρ̂_2) are low-rank. ‖·‖_p is the Schatten p norm (0 ≤ p ≤ 1), defined for an arbitrary matrix X as:

\|X\|_p = \frac{1}{p} \mathrm{Tr}\left[(X^* X)^{\frac{p}{2}}\right] = \frac{1}{p} \mathrm{Tr}\left[(X X^*)^{\frac{p}{2}}\right] = \frac{1}{p} \sum_i \sigma_i^p \qquad (13)

where σ_i are the singular values of X. Note that when p → 1, ‖X‖_p approaches the nuclear norm; when p → 0, ‖X‖_p behaves (up to constants) as Σ_i log σ_i. The penalty ‖X‖_p is convex for p = 1, and non-convex for 0 ≤ p < 1.

III. Optimization Algorithm

We apply the iterative reweighted least squares (IRLS) algorithm to solve the optimization problem (12). Using the identity p‖X‖_p = ‖X H^{1/2}‖_F², where H = (X^* X)^{p/2 − 1} (the constant factor is absorbed into λ_1 and λ_2), and letting X = T_i(ρ̂_i), i = 1, 2, (12) becomes:

\{\hat{\rho}_1, \hat{\rho}_2\} = \arg\min_{\hat{\rho}_1, \hat{\rho}_2} \; \lambda_1 \left\|\mathcal{T}_1(\hat{\rho}_1)\, H_1^{\frac{1}{2}}\right\|_F^2 + \lambda_2 \left\|\mathcal{T}_2(\hat{\rho}_2)\, H_2^{\frac{1}{2}}\right\|_F^2 + \|\mathcal{A}(\hat{\rho}_1 + \hat{\rho}_2) - b\|^2 \qquad (14)
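The reweighting identity behind (14) can be verified numerically: with H = (X^*X)^{p/2−1}, the quantity ‖X H^{1/2}‖_F² equals Σ_i σ_i^p, i.e. p times the Schatten penalty in (13). A minimal check on a random matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4))
p = 0.5

# H = (X*X)^(p/2-1) = V Sigma^(p-2) V*, so H^(1/2) = V Sigma^((p-2)/2) V*
U, s, Vt = np.linalg.svd(X, full_matrices=False)
H_half = Vt.T @ np.diag(s ** ((p - 2) / 2)) @ Vt

lhs = np.sum(s ** p)                           # p * ||X||_p from (13)
rhs = np.linalg.norm(X @ H_half, "fro") ** 2   # ||X H^(1/2)||_F^2
```

Since X H^{1/2} = U Σ^{p/2} V^*, the two quantities agree exactly, which is what lets the IRLS iteration replace the non-smooth Schatten penalty with a weighted Frobenius norm.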

In order to solve (14), we use an alternating minimization scheme, which alternates between the following subproblems: updating the weight matrices H1 and H2, and solving a weighted least squares problem. Specifically, at the nth iteration, we compute:

H_1^{(n)} = \Big[\underbrace{\mathcal{T}_1(\hat{\rho}_1^{(n)})^* \mathcal{T}_1(\hat{\rho}_1^{(n)})}_{G_1} + \epsilon_n I\Big]^{\frac{p}{2}-1} \qquad (15)

H_2^{(n)} = \Big[\underbrace{\mathcal{T}_2(\hat{\rho}_2^{(n)})^* \mathcal{T}_2(\hat{\rho}_2^{(n)})}_{G_2} + \epsilon_n I\Big]^{\frac{p}{2}-1} \qquad (16)

\{\hat{\rho}_1^{(n)}, \hat{\rho}_2^{(n)}\} = \arg\min_{\hat{\rho}_1, \hat{\rho}_2} \|\mathcal{A}(\hat{\rho}_1+\hat{\rho}_2) - b\|^2 + \lambda_1 \left\|\mathcal{T}_1(\hat{\rho}_1)\, (H_1^{(n)})^{\frac{1}{2}}\right\|_F^2 + \lambda_2 \left\|\mathcal{T}_2(\hat{\rho}_2)\, (H_2^{(n)})^{\frac{1}{2}}\right\|_F^2 \qquad (17)

where ϵn → 0 is a small factor used to stabilize the inverse. We now show how to efficiently solve the subproblems.

A. Update of least squares

First, writing the columns of the square roots as H_1^{1/2} = [h_1^{(1)}, …, h_1^{(L)}] and H_2^{1/2} = [h_2^{(1)}, …, h_2^{(M)}], we rewrite the least squares problem (17) as follows:

\{\hat{\rho}_1^{(n)}, \hat{\rho}_2^{(n)}\} = \arg\min_{\hat{\rho}_1, \hat{\rho}_2} \|\mathcal{A}(\hat{\rho}_1+\hat{\rho}_2) - b\|^2 + \lambda_1 \sum_{l=1}^{L} \left\|\mathcal{T}_1(\hat{\rho}_1)\, h_1^{(l)}\right\|^2 + \lambda_2 \sum_{m=1}^{M} \left\|\mathcal{T}_2(\hat{\rho}_2)\, h_2^{(m)}\right\|^2 \qquad (18)

We now focus on the update of ρ̂_1; the update of ρ̂_2 can be derived likewise. From the structure of T_1(ρ̂_1) and the convolution relationship, we obtain:

\mathcal{T}_1(\hat{\rho}_1)\, h_1^{(l)} = P_{\Gamma_1}\!\left(M_1 \hat{\rho}_1 \ast h_1^{(l)}\right) = P_{\Gamma_1}\!\left(h_1^{(l)} \ast M_1 \hat{\rho}_1\right) = P_1 C_1^{(l)} M_1 \hat{\rho}_1, \quad l = 1, \ldots, L \qquad (19)

where C_1^{(l)} denotes the linear convolution with h_1^{(l)}, and P_{Γ_1} is the projection of the convolution onto a finite set Γ_1 of valid k space indices, expressed by the matrix P_1. M_1 is the linear transformation in k space which denotes multiplication with the first order Fourier derivatives j2πk_x and j2πk_y, referred to as the gradient weighted lifting case. We can approximate C_1^{(l)} by a circular convolution with h_1^{(l)} on a sufficiently large convolution grid. Then we can write C_1^{(l)} = F S_1^{(l)} F^*, where F is the 2-D DFT matrix and S_1^{(l)} is a diagonal matrix representing multiplication with the inverse DFT of h_1^{(l)}. Assuming P_1^* P_1 ≈ I, we can thus rewrite the second term in (18) as:

\lambda_1 \sum_{l=1}^{L} \left\|P_1 C_1^{(l)} M_1 \hat{\rho}_1\right\|^2 = \lambda_1\, \hat{\rho}_1^* M_1^* F \left(\sum_{l=1}^{L} S_1^{(l)*} S_1^{(l)}\right) F^* M_1 \hat{\rho}_1 = \lambda_1 \left\|S_1^{\frac{1}{2}} F^* M_1 \hat{\rho}_1\right\|^2 \qquad (20)

where S_1 is a diagonal matrix with entries Σ_{l=1}^{L} |µ_l(r)|², and µ_l(r) is the trigonometric polynomial given by the inverse Fourier transform of h_1^{(l)}. S_1 is specified as

S_1 = \sum_{l=1}^{L} S_1^{(l)*} S_1^{(l)} = \mathrm{diag}\left(\sum_{l=1}^{L} \left|F^* P_1^* h_1^{(l)}\right|^2\right) \qquad (21)

Similarly, the third term in (18) can be rewritten as λ_2 ‖S_2^{1/2} F^* M_2 ρ̂_2‖², where S_2 is given by

S_2 = \sum_{m=1}^{M} S_2^{(m)*} S_2^{(m)} = \mathrm{diag}\left(\sum_{m=1}^{M} \left|F^* P_2^* h_2^{(m)}\right|^2\right) \qquad (22)
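A 1-D sketch of the half-circulant idea: once the linear convolution is replaced by a circular one, each C^{(l)} is diagonalized by the DFT, so applying it costs two FFTs instead of a large Toeplitz matrix-vector product. (The 1-D arrays and the FFT convention here are illustrative; the paper works with 2-D k space grids.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
h = rng.standard_normal(n)   # one filter h^(l)
x = rng.standard_normal(n)   # vector of k-space samples

# Explicit circulant matrix for circular convolution with h
C = np.array([[h[(i - j) % n] for j in range(n)] for i in range(n)])

# Same product via FFT diagonalization, C = F^{-1} diag(F h) F:
# two FFTs and a pointwise multiply replace the O(n^2) product.
fast = np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)).real
```

For the N-pixel 2-D problems in the paper this is the difference between storing and multiplying huge lifted matrices and a handful of FFTs per filter, which is what makes large filter sizes tractable.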

Therefore, we can reformulate the optimization problem (18) as:

\min_{\hat{\rho}_1, \hat{\rho}_2} \|\mathcal{A}(\hat{\rho}_1+\hat{\rho}_2) - b\|^2 + \lambda_1 \left\|S_1^{\frac{1}{2}} y_1\right\|^2 + \lambda_2 \left\|S_2^{\frac{1}{2}} y_2\right\|^2 \quad \text{s.t. } F y_1 = M_1 \hat{\rho}_1, \; F y_2 = M_2 \hat{\rho}_2 \qquad (23)

The above constrained problem can be efficiently solved using the alternating directions method of multipliers (ADMM) algorithm [33], which amounts to solving the following subproblems:

y_1^{(n)} = \arg\min_{y_1} \left\|S_1^{\frac{1}{2}} y_1\right\|^2 + \gamma_1 \left\|q_1^{(n-1)} + F^* M_1 \hat{\rho}_1^{(n-1)} - y_1\right\|^2 \qquad (24)

y_2^{(n)} = \arg\min_{y_2} \left\|S_2^{\frac{1}{2}} y_2\right\|^2 + \gamma_2 \left\|q_2^{(n-1)} + F^* M_2 \hat{\rho}_2^{(n-1)} - y_2\right\|^2 \qquad (25)

\hat{\rho}_1^{(n)} = \arg\min_{\hat{\rho}_1} \left\|\mathcal{A}(\hat{\rho}_1 + \hat{\rho}_2^{(n-1)}) - b\right\|^2 + \gamma_1 \lambda_1 \left\|q_1^{(n-1)} + F^* M_1 \hat{\rho}_1 - y_1^{(n)}\right\|^2 \qquad (26)

\hat{\rho}_2^{(n)} = \arg\min_{\hat{\rho}_2} \left\|\mathcal{A}(\hat{\rho}_1^{(n-1)} + \hat{\rho}_2) - b\right\|^2 + \gamma_2 \lambda_2 \left\|q_2^{(n-1)} + F^* M_2 \hat{\rho}_2 - y_2^{(n)}\right\|^2 \qquad (27)

q_i^{(n)} = q_i^{(n-1)} + F^* M_i \hat{\rho}_i^{(n)} - y_i^{(n)}, \quad i = 1, 2 \qquad (28)

where q_i (i = 1, 2) are the vectors of Lagrange multipliers, and γ_i (i = 1, 2) are fixed parameters tuned to improve the conditioning of the subproblems. Subproblems (24) to (27) are quadratic and thus can be solved in closed form as follows:

y_1^{(n)} = (S_1 + \gamma_1 I)^{-1}\left[\gamma_1 \left(q_1^{(n-1)} + F^* M_1 \hat{\rho}_1^{(n-1)}\right)\right] \qquad (29)

y_2^{(n)} = (S_2 + \gamma_2 I)^{-1}\left[\gamma_2 \left(q_2^{(n-1)} + F^* M_2 \hat{\rho}_2^{(n-1)}\right)\right] \qquad (30)

\hat{\rho}_1^{(n)} = \left(\mathcal{A}^*\mathcal{A} + \gamma_1 \lambda_1 M_1^* M_1\right)^{-1}\left[\gamma_1 \lambda_1 (M_1^* F)\left(y_1^{(n)} - q_1^{(n-1)}\right) + \mathcal{A}^* b - \mathcal{A}^*\mathcal{A} \hat{\rho}_2^{(n-1)}\right] \qquad (31)

\hat{\rho}_2^{(n)} = \left(\mathcal{A}^*\mathcal{A} + \gamma_2 \lambda_2 M_2^* M_2\right)^{-1}\left[\gamma_2 \lambda_2 (M_2^* F)\left(y_2^{(n)} - q_2^{(n-1)}\right) + \mathcal{A}^* b - \mathcal{A}^*\mathcal{A} \hat{\rho}_1^{(n-1)}\right] \qquad (32)
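Since S_1 and S_2 are diagonal, the y-updates (29) and (30) reduce to pointwise divisions. A minimal sketch (the helper name and the stand-in vector v for F^* M ρ̂ are assumptions of this illustration):

```python
import numpy as np

def y_update(S_diag, gamma, q, v):
    """ADMM y-subproblem as in (29)/(30): minimize
    y* S y + gamma ||q + v - y||^2 with diagonal S, where v stands
    for F* M rho_hat. Solved pointwise: y = gamma (q + v) / (S + gamma)."""
    return gamma * (q + v) / (S_diag + gamma)
```

Setting the gradient 2 S y − 2γ(q + v − y) to zero gives exactly this division, so the update is O(N) per iteration.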

B. Update of weight matrices

We now show how to update the weight matrices H_1 and H_2 in (15) and (16) efficiently, based on the GIRAF method [11]. To obtain the weight matrices H_1 and H_2, we first compute the Gram matrices G_1 and G_2 as:

G_1 = \mathcal{T}_1(\hat{\rho}_1)^* \mathcal{T}_1(\hat{\rho}_1) \qquad (33)
G_2 = \mathcal{T}_2(\hat{\rho}_2)^* \mathcal{T}_2(\hat{\rho}_2) \qquad (34)

Let (V_1, Λ_1) denote the eigendecomposition of G_1, where V_1 is the orthogonal basis of eigenvectors v_1^{(l)} and Λ_1 is the diagonal matrix of eigenvalues λ_1^{(l)}, which satisfy G_1 = V_1 Λ_1 V_1^*. Then we can rewrite the weight matrix H_1 as:

H_1 = \left[V_1 (\Lambda_1 + \epsilon I) V_1^*\right]^{\frac{p}{2}-1} = V_1 (\Lambda_1 + \epsilon I)^{\frac{p}{2}-1} V_1^* \qquad (35)

Thus, one choice of the matrix square root H_1^{1/2} is

H_1^{\frac{1}{2}} = V_1 (\Lambda_1 + \epsilon I)^{\frac{p}{4}-\frac{1}{2}} = \left[(\lambda_1^{(1)} + \epsilon)^{\frac{p}{4}-\frac{1}{2}} v_1^{(1)}, \; \ldots, \; (\lambda_1^{(L)} + \epsilon)^{\frac{p}{4}-\frac{1}{2}} v_1^{(L)}\right] = \left[h_1^{(1)}, \ldots, h_1^{(L)}\right] \qquad (36)

Similarly, we can obtain H_2^{1/2} as:

H_2^{\frac{1}{2}} = \left[(\lambda_2^{(1)} + \epsilon)^{\frac{p}{4}-\frac{1}{2}} v_2^{(1)}, \; \ldots, \; (\lambda_2^{(M)} + \epsilon)^{\frac{p}{4}-\frac{1}{2}} v_2^{(M)}\right] = \left[h_2^{(1)}, \ldots, h_2^{(M)}\right] \qquad (37)
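In code, the square-root factors (36)-(37) amount to scaling the eigenvector columns of the Gram matrix; the helper below is an illustrative sketch under that reading, not the authors' implementation.

```python
import numpy as np

def weight_sqrt(G, p, eps):
    """Columns h^(l) = (lambda_l + eps)^(p/4 - 1/2) v_l of H^(1/2),
    from the eigendecomposition G = V Lambda V* of the Gram matrix,
    so that H^(1/2) (H^(1/2))* = (G + eps I)^(p/2 - 1)."""
    lam, V = np.linalg.eigh(G)                 # G is Hermitian PSD
    return V * (lam + eps) ** (p / 4.0 - 0.5)  # scale each eigenvector column
```

Each column then serves as one annihilating filter h^{(l)} in the least squares step (18).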

C. Implementation details

The details for solving the optimization problem (14) are given in the pseudocode of Algorithm 1. To investigate how the SNR of the recovered image behaves as a function of the balancing parameters λ1 and λ2, we plot the parameter optimization results for two images in Fig. 1, where (a) corresponds to the parameters for the compressed sensing reconstruction of an ankle MR image with an acceleration factor of 6, and (b) corresponds to the parameter choices for the recovery of a phantom image with 4-fold undersampling. We find that tuning the two parameters is not very time consuming, since the optimal parameters are localized in a narrow range, between 10^5 and 10^6 for λ1 and between 10^7 and 10^8 for λ2, for different images under different scenarios.

Fig. 1:

Parameter optimization results for the recovery of two images. (a) shows the SNR as a function of the two parameters λ1 and λ2 for the 6-fold undersampled recovery of an ankle MR image (shown in Fig. 5). (b) shows the SNR as a function of λ1 and λ2 for the 4-fold undersampled recovery of a piecewise smooth phantom image (shown in Fig. 3). The black rectangles correspond to the optimal parameters with the highest SNR values.

IV. Experiments and Results

A. 1-D signal recovery

We first experiment on a 1-D signal to investigate the performance of the algorithm in recovering signals from undersampled measurements. Fig. 2 (a) shows the original signal, which is 2-fold undersampled in k space using a variable density undersampling pattern, indicated in (b). The direct IFFT recovery is shown in (c). (d) shows the recovered signal (blue solid line) using the proposed GSLR method in 1-D, along with the decomposition into the piecewise constant component ρ1 (black dotted line) and the piecewise linear component ρ2 (red dotted line). We then experiment on signal recovery using a 4-fold random undersampling pattern, shown in (e). (f) is the direct IFFT of the undersampled measurements. (g) shows the recovered signal and the decomposition results. The results clearly show that with the GSLR method, both the jump discontinuities and the linear parts of the signal are accurately restored.

Fig. 2:

1-D signal recovery using the GSLR method. (a) is the original signal, which is undersampled by a 2-fold variable density random undersampling pattern, indicated in (b). (c) is the direct IFFT recovery of the undersampled measurements. (d) shows the recovered signal (blue solid line) and the decomposition into the two components, namely the piecewise constant component (black dotted line) and the piecewise linear component (red dotted line). (e)–(g) show the results of a 4-fold random undersampling compressed sensing recovery experiment. Note that the proposed scheme recovers the Fourier coefficients of the signal; the ringing at the edges is associated with the inverse Fourier transform of the recovered Fourier coefficients.

B. MR images recovery

The performance of the proposed method is investigated in the context of compressed sensing MR image reconstruction. We compare the proposed GSLR method with the first order and second order structured low-rank algorithms. We also study the improvement in image quality offered by the GSLR algorithm over the standard TV and TGV [4] algorithms and the LORAKS method [10]. For all of the experiments, we manually tuned the parameters to ensure optimal performance in each scenario. Specifically, we chose the parameters that maximize the signal-to-noise ratio (SNR) to ensure fair comparisons between the methods. The SNR of the recovered image is computed as:

\mathrm{SNR} = 10 \log_{10} \left( \frac{\|f_{\mathrm{orig}}\|_F^2}{\|f_{\mathrm{orig}} - \hat{f}\|_F^2} \right) \qquad (38)

where f̂ is the recovered image, f_orig is the original image, and ‖·‖_F denotes the Frobenius norm.
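For reference, (38) can be computed directly (the function name is illustrative):

```python
import numpy as np

def snr_db(f_orig, f_hat):
    """SNR in dB as in (38): signal energy over reconstruction error energy."""
    err = np.linalg.norm(f_orig - f_hat)       # Frobenius norm of the error
    return 10.0 * np.log10(np.linalg.norm(f_orig) ** 2 / err ** 2)
```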

In the experiments, we consider two types of undersampling trajectories: a radial trajectory with uniform angular spacing, and a 3-D variable density random retrospective undersampling trajectory. For the 3-D sampling pattern, since the readout direction is orthogonal to the image plane, such undersampling patterns can be implemented on the scanner.

We first study the performance of the proposed method for the recovery of a piecewise smooth phantom image from its noiseless k space data. We assume the data is sampled along 26 radial spokes in k space, corresponding to an approximate acceleration factor of 10.7. Fig. 3 (a) is the actual image; (b) to (g) are the recovered images using GSLR with a filter size of 51 × 51, GSLR with a filter size of 31 × 31, the 1st SLA and the 2nd SLA with a filter size of 31 × 31, TGV, and the standard TV, respectively. The second row shows the zoomed regions of the corresponding images. (o) is the undersampling pattern, and (q) to (v) are the error images.

Fig. 3:

Recovery of a piecewise smooth phantom image from around 10-fold undersampled measurements. (a): the actual image. (b)–(g): Reconstructions using the proposed GSLR method with filter size of 51 × 51, GSLR with filter size of 31 × 31, the 1st and 2nd SLA with filter size of 31 × 31, TGV, and the standard TV, respectively. (h)–(n): the zoomed regions of the area indicated in the red rectangle for the corresponding images. (o): the undersampling pattern. (q)–(v): the error images. Note that recovered images using TGV and TV methods present undersampling artifacts, indicated in green arrows, while the GSLR methods outperform the other methods in recovering both the edges and the smooth regions, indicated in red arrows.

Algorithm 1: GSLR image recovery algorithm
Initialization: ρ̂^{(0)} ← A^* b, n ← 1, ε^{(0)} > 0
while n < N_max do
  Step 1: Update of weight matrices
    Compute the Gram matrices G_i (i = 1, 2) using (33) and (34)
    Compute the eigendecompositions (λ_i^{(k)}, v_i^{(k)}), k = 1, …, K, of G_i (i = 1, 2)
    Compute h_1^{(l)} and h_2^{(m)} using (36) and (37)
  Step 2: Update of least squares
    Compute the weight matrices S_1 and S_2 using (21) and (22)
    Solve min_{ρ̂_1, ρ̂_2} λ_1 ‖S_1^{1/2} y_1‖² + λ_2 ‖S_2^{1/2} y_2‖² + ‖A(ρ̂_1 + ρ̂_2) − b‖² using ADMM iterations (24) to (28)
  Choose ε^{(n)} such that 0 < ε^{(n)} ≤ ε^{(n−1)}
end while

We observe that the structured low-rank algorithms outperform the TGV and standard TV algorithms in this scenario: the images recovered by the TGV and TV methods suffer from obvious undersampling artifacts, indicated by green arrows. Among the structured low-rank algorithms with a filter size of 31 × 31, GSLR performs better than the 1st SLA and the 2nd SLA in recovering the edges, indicated by red arrows. With the larger filter size (51 × 51), the GSLR method provides the best reconstruction, with an SNR improvement of around 3 dB over standard TV.

In the following experiments, we investigate the proposed GSLR method on the reconstruction of single-coil real MR images. The reconstruction of a brain MR image at an acceleration factor of 4 is shown in Fig. 4, where we compare the proposed GSLR method using different filter sizes with the first and second order SLA, the S-LORAKS method, TGV, and the standard TV method. Fig. 4 (a) is the original image. (b) to (h) are the reconstructions using GSLR with a filter size of 51 × 51, GSLR with a filter size of 31 × 31, the 1st and 2nd SLA with a filter size of 31 × 31, S-LORAKS, TGV, and the standard TV. (i) to (p) are the zoomed versions of the images, indicated by the red rectangle. (q) is the variable density random undersampling pattern. (r) to (x) are the error images for the corresponding methods. Among all of the methods, GSLR performs best in preserving details and provides the most accurately recovered image. Note that increasing the filter size from 31 × 31 to 51 × 51 significantly improves the image quality, with only a modest increase in runtime (85 s versus 36 s).

Fig. 4:

Recovery of a brain MR image from 4-fold undersampled measurements. (a): the original image. (b)–(h): Reconstructions using GSLR with filter size of 51 × 51, GSLR with filter size 31 × 31, the 1st and 2nd SLA with filter size 31 × 31, S-LORAKS, TGV, and the standard TV, respectively. (i)–(p): The zoomed versions of the area shown in the red rectangle in (a). (q): The undersampling pattern. (r)–(x): the error images.

We demonstrate the performance of the proposed method on the reconstruction of an ankle MR image at an approximate acceleration rate of 6.7 using the radial undersampling pattern in Fig. 5. In this experiment, we compare the proposed GSLR method with the 1st and 2nd SLA, S-LORAKS, G-LORAKS, TGV, and the standard TV. All of the structured low-rank methods use a filter size of 51 × 51. (a) is the original image. (b) to (h) are the reconstructed images using GSLR, 1st SLA, 2nd SLA, S-LORAKS, G-LORAKS, TGV, and TV, respectively. (j) to (p) are the zoomed versions of the area indicated by the red rectangle shown in (a). (q) is the radial undersampling pattern. (r) to (x) are the error images. We observe that the TV method gives a blurry reconstruction. The images recovered by the S-LORAKS, G-LORAKS, and TGV methods show undersampling artifacts, indicated by the green arrow. The structured low-rank methods provide improved results, among which GSLR performs best in preserving image details, shown by red arrows.

Fig. 5:

Recovery of an ankle MR image from 6.7-fold undersampled measurements with a radial undersampling pattern. (a): The actual image. (b)–(h): The recovered images using GSLR, 1st SLA, and 2nd SLA with a filter size of 51 × 51, S-LORAKS, G-LORAKS, TGV, and TV, respectively. (i)–(p): The zoomed versions of the images. (q): The undersampling pattern. (r)–(x): Error images for the corresponding methods. Note that the proposed GSLR method performs best in preserving the edges and eliminating the undersampling artifacts, compared with the other methods.

In Fig. 6, we experiment on a brain MR image using a radial undersampling pattern with an acceleration factor of around 4.8. (a) is the actual image. (b) to (h) are the reconstruction results using GSLR with a filter size of 51 × 51, the 1st and 2nd SLA with a filter size of 51 × 51, S-LORAKS, G-LORAKS, TGV, and the standard TV, respectively. The second row shows the zoomed versions of the red rectangular area shown in (a) for the different methods. (q) shows the undersampling pattern. (r) to (x) are the error images of the corresponding methods. We observe that the TV and TGV methods provide blurry reconstructions, and the LORAKS methods preserve fine details better while suffering from undersampling artifacts. Among the methods, GSLR provides the best reconstruction and improves the SNR by around 2 dB over standard TV.

Fig. 6:

Recovery of a brain MR image from 4.85-fold undersampled measurements. (a): The actual image. (b)–(h): The recovery images using GSLR, 1st and 2nd SLA with 51 × 51 filter size, S-LORAKS, G-LORAKS, TGV, and the standard TV, respectively. (i)–(p): Zoomed versions of the red rectangular area for different methods. (q): the undersampling pattern. (r)–(x): Error images of the corresponding methods.

In Fig. 7, we compare the different methods on the recovery of a multicoil MR dataset acquired using four coils from 8-fold undersampled measurements. The data was retrospectively undersampled using the variable density random undersampling pattern. (a) is the actual image. (b) to (g) show the reconstruction results using GSLR, the 1st and 2nd SLA with a filter size of 31 × 31, S-LORAKS, G-LORAKS, and the standard TV, respectively. (h) to (n) are the zoomed regions indicated by the red rectangle. (o) is the undersampling pattern. (p) to (u) show the error images for the different methods. We observe that, compared with the other methods, GSLR performs best in preserving the image features and provides the recovered image with the highest SNR. We show the phase images of all the datasets in Fig. 8; all of the images exhibit reasonable phase variations, as expected from a typical MR acquisition. We note that GSLR relies on the compact representation of the image enabled by its decomposition into piecewise constant and linear components. Since S-LORAKS and G-LORAKS do not exploit this property, we obtain improved reconstructions with filter sizes larger than 31 × 31.

Fig. 7:

Recovery of a multicoil brain MR dataset from 8-fold undersampled measurements. (a): The actual image. (b)–(g): The reconstructions using GSLR, the first and second order SLA with a 31 × 31 filter size, the S-LORAKS and G-LORAKS methods, and the standard TV. (h)–(n): The zoomed regions. (o): The undersampling pattern. (p)–(u): The error images. Note that GSLR provides the most accurate reconstruction compared with the other methods, indicated by red arrows.

Fig. 8:

Phase images of the real datasets. (a) Brain image in Fig.4. (b) Ankle image in Fig.5. (c) Brain image in Fig.6. (d) Brain image in Fig.7.

The SNRs of the recovered images using variable density random undersampling patterns are shown in Table I, and the reconstruction results using radial undersampling patterns are shown in Table II. We compare the 1st and 2nd SLA, S-LORAKS, G-LORAKS, TGV, and the standard TV with the proposed GSLR method. For the structured low-rank algorithms, we compare the performance for different filter sizes. Specifically, we use three filter sizes, 15 × 15, 31 × 31, and 51 × 51, for the variable density undersampling experiments, and two filter sizes for the radial undersampling experiments. Note that when the filter size is 15 × 15, the results provided by GSLR are not comparable to the other methods in some cases. However, using larger filter sizes leads to significantly improved image quality. For filter sizes 31 × 31 and 51 × 51, GSLR consistently obtains the best results, with an SNR improvement of around 2–3 dB over standard TV. The reason is that the size of the filter specifies the type of curves or edges that its zero set can capture. Specifically, smaller filters can only represent simpler and smoother curves, while larger filters can represent complex shapes (see [11] for an illustration). When complex structures are present in the image, a smaller filter fails to capture the intricate details. We note that for most images, we need to use larger filters to ensure that the details are well captured. The use of larger filters is made possible by the proposed IRLS algorithm, which does not require us to explicitly compute the Toeplitz matrices.

Table I:

Comparison of MR image recovery algorithms using variable density random undersampling pattern

filter size [15,15] [31,31] [51,51] (the three columns repeat under each acceleration factor)

Phantom acc=2 acc=4 acc=5

 First SLA 47.63 53.78 55.34 38.06 44.20 46.20 34.87 39.93 42.06
Second SLA 48.77 56.26 57.66 33.73 43.30 45.53 33.62 38.07 41.10
TGV 42.17 42.17 42.17 37.86 37.86 37.86 35.40 35.40 35.40
TV 41.31 41.31 41.31 35.88 35.88 35.88 35.06 35.06 35.06
GSLR 51.13 56.94 58.21 39.22 45.31 47.18 35.93 40.79 43.11

Brain Fig.4 acc=2 acc=4 acc=5

First SLA 34.93 36.14 36.72 24.85 26.40 26.80 23.00 24.66 25.24
Second SLA 34.10 35.67 36.31 24.28 25.90 26.43 22.63 24.30 24.73
TGV 33.09 33.09 33.09 25.36 25.36 25.36 23.34 23.34 23.34
TV 32.40 32.40 32.40 24.71 24.71 24.71 22.90 22.90 22.90
S-LORAKS 35.43 35.43 35.43 25.86 25.86 25.86 24.22 24.22 24.22
G-LORAKS 33.36 33.36 33.36 24.93 24.93 24.93 23.01 23.01 23.01
GSLR 35.34 36.38 37.19 25.34 26.72 27.36 23.58 25.03 26.07

Brain Fig.6 acc=2 acc=4 acc=5

First SLA 30.31 32.30 32.61 20.64 23.63 24.17 18.77 21.36 22.54
Second SLA 30.16 31.99 32.33 19.24 22.81 23.62 17.56 20.70 22.19
TGV 30.26 30.26 30.26 23.05 23.05 23.05 21.13 21.13 21.13
TV 30.10 30.10 30.10 22.65 22.65 22.65 20.83 20.83 20.83
S-LORAKS 29.83 29.83 29.83 22.71 22.71 22.71 21.02 21.02 21.02
G-LORAKS 28.69 28.69 28.69 22.06 22.06 22.06 20.69 20.69 20.69
GSLR 30.70 32.51 33.17 21.41 24.20 24.93 19.51 22.15 23.08

Ankle acc=2 acc=4 acc=5

First SLA 37.80 38.13 38.42 30.01 30.89 31.17 27.37 28.26 28.50
Second SLA 37.96 38.38 38.61 29.65 30.69 30.90 26.65 27.31 28.08
TGV 36.89 36.89 36.89 30.43 30.43 30.43 28.05 28.05 28.05
TV 33.43 33.43 33.43 28.38 28.38 28.38 26.22 26.22 26.22
S-LORAKS 37.91 37.91 37.91 30.18 30.18 30.18 27.44 27.44 27.44
G-LORAKS 37.02 37.02 37.02 29.27 29.27 29.27 26.85 26.85 26.85
GSLR 38.05 38.47 39.05 30.46 31.00 31.66 27.96 28.43 28.96

Multi-coil acc=4 acc=6 acc=8

First SLA 29.64 30.97 31.32 23.48 25.45 25.81 21.43 23.68 24.04
Second SLA 29.70 31.22 31.68 23.74 25.67 25.99 21.02 23.39 23.62
TGV 27.48 27.48 27.48 21.53 21.53 21.53 21.70 21.70 21.70
TV 26.68 26.68 26.68 22.15 22.15 22.15 21.26 21.26 21.26
S-LORAKS 27.83 27.83 27.83 23.92 23.92 23.92 21.53 21.53 21.53
G-LORAKS 27.22 27.22 27.22 22.81 22.81 22.81 20.83 20.83 20.83
GSLR 30.08 31.48 32.24 23.88 25.82 26.29 22.00 24.16 24.58

Table II:

Comparison of MR image recovery algorithms using radial undersampling pattern

filter size [31,31] [51,51] (the two columns repeat under each acceleration factor)

Brain Fig.4 acc=4.8 acc=6.7

First SLA 26.64 26.85 23.19 23.80
Second SLA 26.05 26.55 22.82 23.32
TGV 25.90 25.90 23.11 23.11
TV 25.26 25.26 22.59 22.59
S-LORAKS 26.31 26.31 22.85 22.85
G-LORAKS 25.97 25.97 21.30 21.30
GSLR 26.77 27.25 23.45 24.18

Brain Fig.6 acc=4.8 acc=6.7

First SLA 24.99 26.25 21.17 22.62
Second SLA 23.82 24.95 20.86 22.29
TGV 24.26 24.26 21.63 21.63
TV 23.65 23.65 20.88 20.88
S-LORAKS 24.23 24.23 21.20 21.20
G-LORAKS 23.75 23.75 21.53 21.53
GSLR 25.23 26.62 22.47 23.01

Ankle acc=4.8 acc=6.7

First SLA 31.02 31.37 26.15 26.76
Second SLA 31.14 31.42 26.07 26.49
TGV 30.53 30.53 25.59 25.59
TV 28.53 28.53 24.15 24.15
S-LORAKS 31.03 31.03 25.15 25.15
G-LORAKS 29.89 29.89 25.24 25.24
GSLR 31.39 31.69 26.60 27.08

Multi-coil acc=5.2 acc=10

First SLA 27.78 29.01 22.33 22.87
Second SLA 27.24 28.15 21.76 22.60
TGV 26.08 26.08 20.77 20.77
TV 24.92 24.92 18.53 18.53
S-LORAKS 27.32 27.32 21.01 21.01
G-LORAKS 26.58 26.58 20.22 20.22
GSLR 27.90 29.43 22.51 23.15
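For reference, the SNR values tabulated above are presumably computed relative to the fully sampled image; the exact convention is not restated in this section, so the definition below is an assumption rather than the paper's stated metric.

```python
import numpy as np

def snr_db(reference, reconstruction):
    """SNR in dB of a reconstruction against a fully sampled reference
    (assumed convention: 20 log10 of the norm ratio)."""
    err = reference - reconstruction
    return 20 * np.log10(np.linalg.norm(reference) / np.linalg.norm(err))

# a uniform 1% error corresponds to 40 dB
ref = np.ones((64, 64))
print(round(snr_db(ref, ref + 0.01), 1))  # 40.0
```

Under this convention, the 2–3 dB gains reported for GSLR over TV correspond to roughly a 20–30% reduction in reconstruction error norm.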

V. Conclusion

We proposed a novel generalized structured low-rank algorithm to recover images from their undersampled k-space measurements. We assume that an image can be modeled as the superposition of two piecewise smooth functions: a piecewise constant component and a piecewise linear component. Each component can be annihilated by multiplication with a bandlimited polynomial function, which yields a structured Toeplitz matrix. We formulate a combined regularized optimization problem by exploiting the low-rank property of these Toeplitz matrices. To solve the problem efficiently, we adapt the iteratively reweighted least squares (IRLS) method, which alternates between the computation of the annihilation filter weights and the solution of a least squares problem. We evaluated the proposed algorithm on the compressed sensing reconstruction of single-coil and multi-coil MR images. Experiments show that the proposed algorithm provides more accurate recovery than the state-of-the-art approaches.
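The IRLS alternation summarized above can be illustrated on a small 1-D toy problem. The sketch below is not the paper's algorithm: it uses a plain Hankel lifting, a nuclear-norm surrogate weight, and dense linear algebra with an exact least-squares solve, whereas the paper works with large 2-D Toeplitz liftings and half-circulant/FFT approximations. All sizes and parameters (`f`, `lam`, `iters`) are illustrative choices.

```python
import numpy as np

def lifting_matrix(n, f):
    """L such that (L @ x).reshape(n - f + 1, f) is the Hankel lifting of x."""
    m = n - f + 1
    L = np.zeros((m * f, n))
    for i in range(m):
        for j in range(f):
            L[i * f + j, i + j] = 1.0
    return L

def irls_slr(b, mask, f=6, lam=1e-3, iters=15):
    """Toy 1-D IRLS structured low-rank recovery.

    b: measured Fourier samples (values on the mask); mask: boolean pattern.
    Alternates a weight (annihilating-subspace) update with an exact
    weighted least-squares solve.
    """
    n = len(mask)
    m = n - f + 1
    L = lifting_matrix(n, f)
    x = np.zeros(n, dtype=complex)
    x[mask] = b
    eps = 1.0
    for _ in range(iters):
        # weight step: W = (T^H T + eps I)^(-1/4), so that ||T W||_F^2
        # approximates the nuclear norm of the lifted matrix T
        T = (L @ x).reshape(m, f)
        G = T.conj().T @ T + eps * np.eye(f)
        w, V = np.linalg.eigh(G)
        W = V @ np.diag(w ** -0.25) @ V.conj().T
        eps = max(eps / 2, 1e-8)
        # least-squares step: minimize ||x[mask] - b||^2 + lam ||T(x) W||_F^2
        A = np.kron(np.eye(m), W.T) @ L  # vec(T(x) W) = A @ x (row-major vec)
        Q = lam * (A.conj().T @ A) + np.diag(mask.astype(float))
        rhs = np.zeros(n, dtype=complex)
        rhs[mask] = b
        x = np.linalg.solve(Q, rhs)
    return x
```

On a signal whose Hankel lifting is exactly low rank (e.g. a sum of two complex exponentials), this alternation fills in the unmeasured samples; the memory-efficient variant in the paper replaces the explicit `L` and `A` matrices with FFT-based operators.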

Acknowledgments

This work was supported by grants from the National Natural Science Foundation of China (NSFC) 61501146, the Natural Science Foundation of Heilongjiang Province F2016018, and NIH 1R01EB019961-01A1.

Contributor Information

Yue Hu, School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin, China 150001 (huyue@hit.edu.cn).

Xiaohan Liu, School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin, China 150001.

Mathews Jacob, Department of Electrical and Computer Engineering, University of Iowa, IA 52246, USA (mathews-jacob@uiowa.edu).

REFERENCES

  • [1] Dey N, Blanc-Feraud L, Zimmer C, Roux P, Kam Z, Olivo-Marin J-C, and Zerubia J, “Richardson–Lucy algorithm with total variation regularization for 3D confocal microscope deconvolution,” Microscopy Research and Technique, vol. 69, no. 4, pp. 260–266, 2006.
  • [2] Lustig M, Donoho D, and Pauly JM, “Sparse MRI: The application of compressed sensing for rapid MR imaging,” Magnetic Resonance in Medicine, vol. 58, no. 6, pp. 1182–1195, 2007.
  • [3] Sidky EY and Pan X, “Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization,” Physics in Medicine & Biology, vol. 53, no. 17, p. 4777, 2008.
  • [4] Knoll F, Bredies K, Pock T, and Stollberger R, “Second order total generalized variation (TGV) for MRI,” Magnetic Resonance in Medicine, vol. 65, no. 2, pp. 480–491, 2011.
  • [5] Knoll F, Clason C, Bredies K, Uecker M, and Stollberger R, “Parallel imaging with nonlinear reconstruction using variational penalties,” Magnetic Resonance in Medicine, vol. 67, no. 1, pp. 34–41, 2012.
  • [6] Lefkimmiatis S, Bourquard A, and Unser M, “Hessian-based norm regularization for image restoration with biomedical applications,” IEEE Transactions on Image Processing, vol. 21, no. 3, pp. 983–995, 2012.
  • [7] Hu Y and Jacob M, “Higher degree total variation (HDTV) regularization for image recovery,” IEEE Transactions on Image Processing, vol. 21, no. 5, pp. 2559–2571, 2012.
  • [8] Hu Y, Ongie G, Ramani S, and Jacob M, “Generalized higher degree total variation (HDTV) regularization,” IEEE Transactions on Image Processing, vol. 23, no. 6, pp. 2423–2435, 2014.
  • [9] Jin KH, Lee D, and Ye JC, “A general framework for compressed sensing and parallel MRI using annihilating filter based low-rank Hankel matrix,” IEEE Transactions on Computational Imaging, vol. 2, no. 4, pp. 480–495, 2016.
  • [10] Haldar JP, “Low-rank modeling of local k-space neighborhoods (LORAKS) for constrained MRI,” IEEE Transactions on Medical Imaging, vol. 33, no. 3, pp. 668–681, 2014.
  • [11] Ongie G and Jacob M, “Off-the-grid recovery of piecewise constant images from few Fourier samples,” SIAM Journal on Imaging Sciences, vol. 9, no. 3, pp. 1004–1041, 2016.
  • [12] Pan H, Blu T, and Dragotti PL, “Sampling curves with finite rate of innovation,” IEEE Transactions on Signal Processing, vol. 62, no. 2, pp. 458–471, 2014.
  • [13] Ongie G and Jacob M, “Recovery of piecewise smooth images from few Fourier samples,” in Sampling Theory and Applications (SampTA), 2015 International Conference on. IEEE, 2015, pp. 543–547.
  • [14] Vetterli M, Marziliano P, and Blu T, “Sampling signals with finite rate of innovation,” IEEE Transactions on Signal Processing, vol. 50, no. 6, pp. 1417–1428, 2002.
  • [15] Maravic I and Vetterli M, “Sampling and reconstruction of signals with finite rate of innovation in the presence of noise,” IEEE Transactions on Signal Processing, vol. 53, no. 8, pp. 2788–2805, 2005.
  • [16] Ongie G, Biswas S, and Jacob M, “Convex recovery of continuous domain piecewise constant images from nonuniform Fourier samples,” IEEE Transactions on Signal Processing, vol. 66, no. 1, pp. 236–250, 2017.
  • [17] Ongie G and Jacob M, “A fast algorithm for convolutional structured low-rank matrix recovery,” IEEE Transactions on Computational Imaging, vol. 3, no. 4, pp. 535–550, 2017.
  • [18] Jin KH, Lee D, and Ye JC, “A novel k-space annihilating filter method for unification between compressed sensing and parallel MRI,” in Biomedical Imaging (ISBI), 2015 IEEE 12th International Symposium on. IEEE, 2015, pp. 327–330.
  • [19] Shin PJ, Larson PE, Ohliger MA, Elad M, Pauly JM, Vigneron DB, and Lustig M, “Calibrationless parallel imaging reconstruction based on structured low-rank matrix completion,” Magnetic Resonance in Medicine, vol. 72, no. 4, pp. 959–970, 2014.
  • [20] Haldar JP and Zhuo J, “P-LORAKS: Low-rank modeling of local k-space neighborhoods with parallel imaging data,” Magnetic Resonance in Medicine, vol. 75, no. 4, pp. 1499–1514, 2016.
  • [21] Kim TH, Setsompop K, and Haldar JP, “LORAKS makes better SENSE: Phase-constrained partial Fourier SENSE reconstruction without phase calibration,” Magnetic Resonance in Medicine, vol. 77, no. 3, pp. 1021–1035, 2017.
  • [22] Chambolle A and Lions P-L, “Image recovery via total variation minimization and related problems,” Numerische Mathematik, vol. 76, no. 2, pp. 167–188, 1997.
  • [23] Setzer S, Steidl G, and Teuber T, “Infimal convolution regularizations with discrete L1-type functionals,” Communications in Mathematical Sciences, vol. 9, no. 3, pp. 797–827, 2011.
  • [24] Holler M and Kunisch K, “On infimal convolution of TV-type functionals and applications to video and image reconstruction,” SIAM Journal on Imaging Sciences, vol. 7, no. 4, pp. 2258–2300, 2014.
  • [25] Schloegl M, Holler M, Schwarzl A, Bredies K, and Stollberger R, “Infimal convolution of total generalized variation functionals for dynamic MRI,” Magnetic Resonance in Medicine, vol. 78, no. 1, pp. 142–155, 2017.
  • [26] Rasch J, Brinkmann E-M, and Burger M, “Joint reconstruction via coupled Bregman iterations with applications to PET-MR imaging,” Inverse Problems, vol. 34, no. 1, p. 014001, 2017.
  • [27] Rasch J, Kolehmainen V, Nivajärvi R, Kettunen M, Gröhn O, Burger M, and Brinkmann E-M, “Dynamic MRI reconstruction from undersampled data with an anatomical prescan,” Inverse Problems, vol. 34, no. 7, p. 074001, 2018.
  • [28] Otazo R, Candès E, and Sodickson DK, “Low-rank plus sparse matrix decomposition for accelerated dynamic MRI with separation of background and dynamic components,” Magnetic Resonance in Medicine, vol. 73, no. 3, pp. 1125–1136, 2015.
  • [29] Candès EJ, Li X, Ma Y, and Wright J, “Robust principal component analysis?” Journal of the ACM, vol. 58, no. 3, p. 11, 2011.
  • [30] Velikina JV and Samsonov AA, “Reconstruction of dynamic image series from undersampled MRI data using data-driven model consistency condition (MOCCO),” Magnetic Resonance in Medicine, vol. 74, no. 5, pp. 1279–1290, 2015.
  • [31] Ongie G and Jacob M, “A fast algorithm for structured low-rank matrix recovery with applications to undersampled MRI reconstruction,” in Biomedical Imaging (ISBI), 2016 IEEE 13th International Symposium on. IEEE, 2016, pp. 522–525.
  • [32] Hu Y, Liu X, and Jacob M, “Adaptive structured low-rank algorithm for MR image recovery,” arXiv preprint arXiv:1805.05013, 2018.
  • [33] Esser E, “Applications of Lagrangian-based alternating direction methods and connections to split Bregman,” CAM Report, vol. 9, p. 31, 2009.