Abstract
This paper introduces an efficient algorithm for magnetic resonance (MR) image reconstruction. The proposed method minimizes a linear combination of a nonlocal total variation term and a least-squares data-fitting term to reconstruct MR images from undersampled k-space data. The nonlocal total variation is taken as the L1-regularization functional and solved using Split Bregman iteration. The proposed algorithm is compared with previous methods in terms of reconstruction accuracy and computational complexity. The comparison results demonstrate the superiority of the proposed algorithm for compressed MR image reconstruction.
1. Introduction
Magnetic resonance (MR) imaging is widely used in diagnosis because of its excellent depiction of soft tissue changes and its noninvasive nature. As explained in [1, 2], it is possible to accurately reconstruct MR images from undersampled k-space data using compressed sensing and thereby considerably reduce the scanning time. Suppose u ∈ R^n is a sparse signal, and let K be a measurement matrix such that Ku = v, where v is the observed data. Then, recovering u from v is equivalent to solving
$$\min_{u} J(u) \quad \text{subject to} \quad Ku = v \tag{1}$$
where J(u) is a regularizing functional, typically a bounded variation or Besov norm. In the case of compressed sensing MRI, K is the partial Fourier matrix (K = Pℱ), where P ∈ R^{M×N} is formed from M rows of the N × N identity matrix (M ≪ N), ℱ is the discrete Fourier matrix, and v is the observed k-space data contaminated with Gaussian noise of variance σ^2. The relaxed form of (1) is then given by
$$\min_{u} J(u) \quad \text{subject to} \quad \|Ku - v\|_2^2 \le \sigma^2 \tag{2}$$
To make (1) easier to solve, it is first converted into an unconstrained optimization problem. One common way of doing this is the penalty function/continuation method, which approximates (1) by a problem of the form
$$\min_{u} J(u) + \frac{\mu}{2}\|Ku - v\|_2^2 \tag{3}$$
where μ is a balancing parameter between the fidelity term and the sparsity term [3, 4]. This is the unconstrained problem we need to solve.
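For readers who want to experiment, the following sketch simulates the compressed sensing MRI measurement model v = Ku with K = Pℱ; the phantom, the random 20% mask, and the noise level are illustrative assumptions, not the acquisition used later in the paper.

```python
import numpy as np

def partial_fourier_measure(u, mask, sigma=0.0, rng=None):
    """Simulate v = P F u + noise: take the 2D DFT of u and keep only the
    k-space locations selected by the boolean mask (the row selector P)."""
    rng = np.random.default_rng() if rng is None else rng
    k_full = np.fft.fft2(u, norm="ortho")                      # F u
    noise = sigma * (rng.standard_normal(u.shape)
                     + 1j * rng.standard_normal(u.shape))      # complex Gaussian noise
    return np.where(mask, k_full + noise, 0.0)                 # undersampled k-space data v

# toy usage: a 64 x 64 square phantom measured on a random 20% mask
u = np.zeros((64, 64)); u[16:48, 16:48] = 1.0
mask = np.random.default_rng(0).random(u.shape) < 0.20
v = partial_fourier_measure(u, mask, sigma=0.01)
```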
Many iterative methods exist for reconstructing MR images from undersampled data, such as the conjugate gradient algorithm (CG) [4], an operator splitting algorithm (TVCMRI) [5], a variable splitting method (RecPF) [6], the composite splitting algorithm (CSA) and its accelerated version, the fast composite splitting algorithm (FCSA) [7], and the split Bregman algorithm combined with total variation (SB-TV) [8]. The Split Bregman method provides an efficient solution to a wide class of L1-regularized problems. In SB-TV, the Split Bregman technique is applied to the Rudin-Osher-Fatemi functional for a compressed sensing problem that arises in magnetic resonance imaging. A detailed explanation of the Split Bregman technique is given in Section 2. Reconstruction using CG is very slow for MR images. TVCMRI and RecPF can replace iterative linear solvers with Fourier domain computations, which reduces computation time. FCSA performs much better than TVCMRI, RecPF, and CSA. FCSA is based on wavelet sparsity and total variation (TV). However, despite the huge popularity of TV minimization, it also has some unwanted effects. In the presence of noise, it tends to produce piecewise constant solutions, the so-called staircasing effect, which was analyzed in detail in [9, 10]. Another deficit of TV regularization is the systematic loss of contrast in the reconstructions, even if the given data are noise free. This effect is the well-known systematic error of total variation minimization and was studied extensively in [11, 12]. The major problem associated with TV-based compressed sensing methods is that they penalize the image gradient uniformly, irrespective of the underlying image structures, so low-contrast regions are sometimes oversmoothed [13].

To resolve this issue, we propose a new algorithm which jointly minimizes a linear combination of a nonlocal total variation term and a least-squares data-fitting term to reconstruct MR images from undersampled data. In medical image reconstruction, fine structures, details, and texture should be preserved. The nonlocal total variation makes the recovered image sharper while preserving fine structures, details, and textures. NLTV is much better than TV for improving the signal-to-noise ratio in practical applications [14–16]. The authors of [17, 18] showed that, among existing nonlocal regularization techniques, NLTV performs well in reconstruction. In the proposed method, the nonlocal total variation is therefore taken as the L1-regularization functional and solved using the Split Bregman algorithm. Numerous experiments have been conducted on real MR images to show the efficiency and advantages of the proposed work in terms of computational complexity and reconstruction accuracy.
2. Split Bregman Algorithm and Regularization Method
L1-regularized problems have many important applications in engineering and imaging science. The general form of such problems is
$$\min_{u} |\Phi(u)| + L(u) \tag{4}$$
where |·| represents the L1-norm and both Φ(u) and L(u) are convex functions. Many important problems in image processing can be interpreted as L1-regularized optimization problems. Equation (3) is an example of such a problem:
$$\min_{u} J(u) + \frac{\mu}{2}\|Ku - v\|_2^2 \tag{5}$$
Unfortunately, for many problems, selecting a large value of μ makes (5) extremely difficult to solve numerically [19, 20]. Also, for several applications, μ should be increased in small steps, which makes the strategy less efficient. Bregman iteration can be used to reduce (5) to a short sequence of unconstrained problems.
2.1. Bregman Iteration
Bregman iteration is a method for finding extrema of convex functionals [21]. It has already been applied to solve the basis pursuit problem in [22] and a medical imaging problem in [23]. The general formulation of this method is explained using the Bregman distance. The Bregman distance associated with a convex function J at the point v is
$$D_J^p(u, v) = J(u) - J(v) - \langle p, u - v\rangle, \qquad p \in \partial J(v) \tag{6}$$
where p is a subgradient of J at v. Clearly, this is not a distance in the usual sense; however, it measures closeness in the sense that D_J^p(u, v) ≥ 0 and D_J^p(u, v) ≥ D_J^p(w, v) for w on the line segment between u and v. Now, consider two convex energy functionals, J and L, defined over R^n with min_{u∈R^n} L(u) = 0. The associated unconstrained minimization problem is
$$\min_{u} J(u) + \mu L(u) \tag{7}$$
Modify this problem by iteratively solving
$$u^{k+1} = \arg\min_{u} D_J^{p^k}(u, u^k) + \mu L(u) \tag{8}$$
$$p^{k+1} \in \partial J(u^{k+1}) \tag{9}$$
as suggested by Bregman in [21]. For simplicity, assume that L is differentiable; in this case, we have 0 ∈ ∂(D_J^{p^k}(u, u^k) + μL(u)), where this subdifferential is evaluated at u^{k+1}. Since p^{k+1} ∈ ∂J(u^{k+1}) at this location, then
$$p^{k+1} = p^k - \mu \nabla L(u^{k+1}) \tag{10}$$
Apply the Bregman iteration (8) to (5):
$$u^{k+1} = \arg\min_{u} D_J^{p^k}(u, u^k) + \frac{\mu}{2}\|Ku - v\|_2^2, \qquad p^{k+1} = p^k - \mu K^{T}\big(Ku^{k+1} - v\big) \tag{11}$$
Bregman iterations of this form were considered in [12], where it is shown that, when K is linear, this seemingly complicated iteration is equivalent to the simplified method
$$u^{k+1} = \arg\min_{u} J(u) + \frac{\mu}{2}\|Ku - v^k\|_2^2, \qquad v^{k+1} = v^k + v - Ku^{k+1} \tag{12}$$
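As an illustration (not taken from the paper), the simplified iteration (12) can be coded as an outer loop around any solver for the unconstrained subproblem; `solve_inner` below is a caller-supplied placeholder for that solver.

```python
import numpy as np

def bregman_iteration(v, K, solve_inner, n_outer=10):
    """Simplified Bregman iteration (12):
       u^{k+1} = argmin_u J(u) + (mu/2)||K u - v^k||^2   (done by solve_inner)
       v^{k+1} = v^k + v - K u^{k+1}                     (add the residual back)"""
    vk = np.copy(v)
    u = None
    for _ in range(n_outer):
        u = solve_inner(vk)      # inner unconstrained solve for the current v^k
        vk = vk + v - K(u)       # update the working data with the residual
    return u
```

Any regularized solver can be plugged in as `solve_inner`; the loop itself never requires μ to become large, which is exactly the point made above.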
2.2. Split Bregman Method
We now apply the Bregman framework to solve the L1-regularized problem (4). In this case, assume that both Φ(u) and L(u) are convex functions and that Φ(u) is differentiable. The idea of this method is to decouple the L1 and L2 parts of the energy in (4). To this end, consider (4) as
$$\min_{u, d} |d| + L(u) \quad \text{subject to} \quad d = \Phi(u) \tag{13}$$
To solve this problem, first convert it into an unconstrained problem
$$\min_{u, d} |d| + L(u) + \frac{\lambda}{2}\|d - \Phi(u)\|_2^2 \tag{14}$$
Let J(u, d) = |d| + L(u), and let K(u, d) = d − Φ(u); then we can see that (14) is simply an application of (5). Use (11) to obtain the solution of the above problem:
$$(u^{k+1}, d^{k+1}) = \arg\min_{u, d} D_J^{p^k}\big((u, d), (u^k, d^k)\big) + \frac{\lambda}{2}\|d - \Phi(u)\|_2^2, \qquad p^{k+1} = p^k - \lambda \nabla_{(u,d)}\Big(\tfrac{1}{2}\|d - \Phi(u)\|_2^2\Big)\Big|_{(u^{k+1}, d^{k+1})} \tag{15}$$
Applying the simplification presented in (12), we get the following solutions:
$$(u^{k+1}, d^{k+1}) = \arg\min_{u, d} |d| + L(u) + \frac{\lambda}{2}\|d - \Phi(u) - b^k\|_2^2 \tag{16}$$
$$b^{k+1} = b^k + \Phi(u^{k+1}) - d^{k+1} \tag{17}$$
In order to perform the minimization efficiently, split the L1 and L2 components of (16) and minimize with respect to u and d separately. The two steps result in the following solutions:
$$u^{k+1} = \arg\min_{u} L(u) + \frac{\lambda}{2}\|d^k - \Phi(u) - b^k\|_2^2 \tag{18}$$
$$d^{k+1} = \arg\min_{d} |d| + \frac{\lambda}{2}\|d - \Phi(u^{k+1}) - b^k\|_2^2 \tag{19}$$
In (19), there is no coupling between the elements of d. We can use the shrinkage operator to find the optimal value of d as follows:
$$d^{k+1} = \operatorname{shrink}\!\big(\Phi(u^{k+1}) + b^k,\, 1/\lambda\big) \tag{20}$$
where
$$\operatorname{shrink}(x, \gamma) = \frac{x}{|x|}\max\big(|x| - \gamma,\, 0\big) \tag{21}$$
This shrinkage operation is extremely fast and requires only a few operations per element of d^{k+1}.
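A minimal numpy version of the shrinkage operator in (20)-(21); the small constant guarding the division is an implementation detail, not part of the formula.

```python
import numpy as np

def shrink(x, gamma):
    """Elementwise shrinkage (soft thresholding), cf. (20)-(21):
       shrink(x, gamma) = x/|x| * max(|x| - gamma, 0), with shrink(0, gamma) = 0."""
    mag = np.abs(x)
    scale = np.maximum(mag - gamma, 0.0) / np.maximum(mag, 1e-12)  # avoid 0/0 at x = 0
    return scale * x

# example: entries with magnitude below 0.5 are zeroed, the rest move toward zero
print(shrink(np.array([-2.0, -0.3, 0.0, 0.4, 1.5]), 0.5))  # [-1.5  0.  0.  0.  1. ]
```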
Based on the equations described above, the generalized Split Bregman algorithm is summarized in Algorithm 1.
Algorithm 1. Generalized Split Bregman algorithm.
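Since the algorithm box does not survive in this version of the text, the following is a hedged reconstruction of the generalized Split Bregman loop implied by (16)-(20), in the spirit of [8]; the u-subproblem solver `solve_u` is left to the caller because it depends on the particular L(u) and Φ.

```python
import numpy as np

def generalized_split_bregman(Phi, solve_u, lam, d0, u0=None, n_outer=20, n_inner=1):
    """Sketch of Algorithm 1 for min_u |Phi(u)| + L(u), cf. (16)-(20).

    Phi     : callable, operator inside the L1 term.
    solve_u : callable(d, b) -> u minimizing L(u) + (lam/2)||d - Phi(u) - b||^2.
    d0      : array with the shape of Phi(u), used to initialize d and b.
    """
    shrink = lambda x, g: np.sign(x) * np.maximum(np.abs(x) - g, 0.0)
    u, d, b = u0, np.copy(d0), np.zeros_like(d0)
    for _ in range(n_outer):
        for _ in range(n_inner):
            u = solve_u(d, b)                      # (18): u-subproblem
            d = shrink(Phi(u) + b, 1.0 / lam)      # (19)-(20): d by shrinkage
        b = b + Phi(u) - d                         # (17): Bregman variable update
    return u
```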
2.3. Nonlocal Total Variation Norm
Total variation (TV) regularization [24, 25] makes the recovered image sharper, but it does not preserve fine structures, details, and textures. This effect is caused by the regularity assumption of the TV formulation of the image model, namely, that the image has a simple geometric description consisting of a set of connected sets (objects) with smooth contours (edges). Additionally, the model assumes that the image is smooth inside individual objects and has discontinuous jumps across their boundaries. Therefore, TV regularization is optimal for reducing noise and reconstructing the main geometrical configuration of an image. However, it fails to preserve texture, details, and fine structures, because these behave in all respects like noise and thus cannot be distinguished from it. The nonlocal total variation (NLTV) extends the TV functional to a nonlocal variant using nonlocal derivative operators defined through a nonlocal weight function (graph) [14–17, 26, 27]. NLTV is an effective alternative to TV for improving the signal-to-noise ratio in practical applications [14–16]. Recently, it has been successfully used for 4D computed tomography reconstruction from few projection data [28]. The notion nonlocal means that any point can directly interact with any other point in the whole image domain, with the strength of the interaction depending on the value of the weight function. This weight function should represent the similarity of two points and should be significant if both points are similar in an appropriate measure. Therefore, the expectation is that such an approach is able to process both structures (geometrical parts) and texture within the same framework, owing to the identification of recurring structures in the whole image. A brief review of the nonlocal operators is given below.
Let Ω ⊂ R^2, and let u : Ω → R be a real function. Assume that w : Ω × Ω → R is a weight function which is symmetric and nonnegative. Then, the nonlocal gradient of an image u(x) is defined as
$$(\nabla_{NL} u)(x, y) = \big(u(y) - u(x)\big)\sqrt{w(x, y)}, \qquad y \in \Omega \tag{22}$$
The norm of a vector q : Ω × Ω → R at point x ∈ Ω is given by
$$|q|(x) = \sqrt{\int_{\Omega} q(x, y)^2\, dy} \tag{23}$$
Hence, the norm of the nonlocal gradient of a function u : Ω → R at point x ∈ Ω is represented as
$$|\nabla_{NL} u|(x) = \sqrt{\int_{\Omega} \big(u(y) - u(x)\big)^2 w(x, y)\, dy} \tag{24}$$
The nonlocal divergence operator can be defined using the standard adjoint relation with the nonlocal gradient as follows:
$$\langle \nabla_{NL} u, q\rangle = -\langle u, \operatorname{div}_{NL} q\rangle \tag{25}$$
which leads to the definition of the nonlocal divergence of the nonlocal vector q
$$(\operatorname{div}_{NL} q)(x) = \int_{\Omega} \big(q(x, y) - q(y, x)\big)\sqrt{w(x, y)}\, dy \tag{26}$$
Next, the nonlocal Laplacian operator is defined as
$$(\Delta_{NL} u)(x) = \frac{1}{2}\operatorname{div}_{NL}\big(\nabla_{NL} u\big)(x) = \int_{\Omega} \big(u(y) - u(x)\big)\, w(x, y)\, dy \tag{27}$$
The discrete forms of the nonlocal gradient, divergence, and Laplace operators can be represented as follows:
$$(\nabla_{NL} u)_{i,j} = (u_j - u_i)\sqrt{w_{ij}}, \qquad (\operatorname{div}_{NL} q)_i = \sum_{j}\big(q_{ij} - q_{ji}\big)\sqrt{w_{ij}}, \qquad (\Delta_{NL} u)_i = \sum_{j}\big(u_j - u_i\big)\, w_{ij} \tag{28}$$
where u_i represents the value of pixel i in the image (1 ≤ i ≤ N) and w_{ij} is the discrete sparse version of w(x, y), defined as
$$w(x, y) = \exp\!\left(-\frac{\big(G_\sigma * |u(x + \cdot) - u(y + \cdot)|^2\big)(0)}{h^2}\right) \tag{29}$$
where h is a filtering parameter; in general, h corresponds to the noise level and is usually set to the standard deviation of the noise. G_σ is a Gaussian of standard deviation σ, and u(x) and u(y) are the image values at pixels x and y. The weight function w(x, y) denotes how much the difference between pixels x and y is penalized in the images. The more similar the neighborhoods of x and y are, the more the difference should be penalized, and vice versa. The weight function w(x, y) is significant only if the window around y looks like the corresponding window around x. Hence, the nonlocal algorithm is very efficient in reducing noise while preserving contrast in natural images and redundant structures such as texture. In our work, we used 5 × 5 pixel patches and a search neighborhood window of size 11 × 11. The observed noisy data v is taken as the reference image to construct the weights, and by this weighted averaging, structures such as boundaries are reinforced, while the noise is cancelled out [29]. The weights are computed using either a distance between the noisy pixel values |u(x) − u(y)| [30–32] or a distance between the patches around x and y [33–35]. The use of NLTV reduces the noise in the reconstructed image, thus reducing the difference between the reference and reconstructed images. From the definition of the signal-to-noise ratio (SNR), it is clear that this reduction in image difference increases the SNR.
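The following sketch computes the weights (29) around a single pixel and the corresponding discrete nonlocal gradient magnitude from (28), using the 5 × 5 patches and 11 × 11 search window mentioned above; the reference image, the value of h, and the plain double loop are illustrative choices rather than the paper's implementation.

```python
import numpy as np

def nl_weights(ref, i0, j0, patch=2, search=5, h=0.05, sigma=1.0):
    """Weights w(x, y) of (29) between pixel (i0, j0) of `ref` and every pixel in
    its search window (patch=2 gives 5x5 patches, search=5 gives an 11x11 window)."""
    ax = np.arange(-patch, patch + 1)
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2.0 * sigma ** 2))
    g /= g.sum()                                     # Gaussian kernel G_sigma
    pad = np.pad(ref, patch + search, mode="reflect")
    ci, cj = i0 + patch + search, j0 + patch + search
    p0 = pad[ci - patch:ci + patch + 1, cj - patch:cj + patch + 1]
    w = np.zeros((2 * search + 1, 2 * search + 1))
    for di in range(-search, search + 1):
        for dj in range(-search, search + 1):
            q = pad[ci + di - patch:ci + di + patch + 1,
                    cj + dj - patch:cj + dj + patch + 1]
            dist2 = np.sum(g * (p0 - q) ** 2)        # (G_sigma * |u(x+.) - u(y+.)|^2)(0)
            w[di + search, dj + search] = np.exp(-dist2 / h ** 2)
    return w

def nl_grad_norm(u, i0, j0, **kw):
    """Discrete nonlocal gradient magnitude |nabla_NL u|_i at pixel (i0, j0), cf. (28)."""
    w = nl_weights(u, i0, j0, **kw)
    search = (w.shape[0] - 1) // 2
    pad = np.pad(u, search, mode="reflect")
    nbr = pad[i0:i0 + 2 * search + 1, j0:j0 + 2 * search + 1]
    return np.sqrt(np.sum(w * (nbr - u[i0, j0]) ** 2))
```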
3. Proposed Work
The proposed work jointly minimizes a linear combination of a nonlocal total variation term and a least-squares data-fitting term to reconstruct the MR image from undersampled data. The main aim is to solve the compressed sensing MRI problem (2) using the Split Bregman algorithm and nonlocal total variation. In this work, the nonlocal total variation is taken as the L1-regularization functional and solved using Split Bregman iteration. Recall (2):
$$\min_{u} J(u) \quad \text{subject to} \quad \|Ku - v\|_2^2 \le \sigma^2 \tag{30}$$
Using the Bregman iteration method, (30) can be reduced to a sequence of unconstrained problems of the form
$$u^{k+1} = \arg\min_{u} J(u) + \frac{\mu}{2}\|Ku - v^k\|_2^2 \tag{31}$$
$$v^{k+1} = v^k + v - Ku^{k+1} \tag{32}$$
where J(u) represents the L1-regularization term. To proceed further, the choice of regularization method is important. Here, we choose the nonlocal total variation (NLTV) as the regularizer; that is,
$$J(u) = \|\nabla_{NL} u\|_1 \tag{33}$$
Now, (31) becomes
$$u^{k+1} = \arg\min_{u} \|\nabla_{NL} u\|_1 + \frac{\mu}{2}\|Ku - v^k\|_2^2 \tag{34}$$
where ||∇_NL u||_1 = ∑_i |∇_NL u|_i. We can write (34) as follows by introducing an auxiliary variable d in place of ∇_NL u:
$$\min_{u, d} \|d\|_1 + \frac{\mu}{2}\|Ku - v^k\|_2^2 \quad \text{subject to} \quad d = \nabla_{NL} u \tag{35}$$
Equation (35) can be converted into an unconstrained form by using the quadratic penalty method
$$\min_{u, d} \|d\|_1 + \frac{\mu}{2}\|Ku - v^k\|_2^2 + \frac{\lambda}{2}\|d - \nabla_{NL} u\|_2^2 \tag{36}$$
Using the split Bregman method, (36) can be transformed into the following forms:
$$(u^{k+1}, d^{k+1}) = \arg\min_{u, d} \|d\|_1 + \frac{\mu}{2}\|Ku - v^k\|_2^2 + \frac{\lambda}{2}\|d - \nabla_{NL} u - b^k\|_2^2 \tag{37}$$
$$b^{k+1} = b^k + \nabla_{NL} u^{k+1} - d^{k+1} \tag{38}$$
Equation (37) is convex and can be minimized by alternately solving the following two minimization subproblems with respect to u and d
$$u^{k+1} = \arg\min_{u} \frac{\mu}{2}\|Ku - v^k\|_2^2 + \frac{\lambda}{2}\|d^k - \nabla_{NL} u - b^k\|_2^2, \qquad d^{k+1} = \arg\min_{d} \|d\|_1 + \frac{\lambda}{2}\|d - \nabla_{NL} u^{k+1} - b^k\|_2^2 \tag{39}$$
By direct computation, the optimality conditions of (39) are
$$\mu K^{T}\big(Ku^{k+1} - v^k\big) - \lambda \operatorname{div}_{NL}\big(\nabla_{NL} u^{k+1} - d^k + b^k\big) = 0 \tag{40}$$
$$d^{k+1} = \operatorname{shrink}\!\big(\nabla_{NL} u^{k+1} + b^k,\, 1/\lambda\big) \tag{41}$$
Use the Gauss-Seidel iteration to get a fast solution of (40), and the discrete solution is represented as
| (42) |
As explained in the introduction, K is the partial Fourier matrix (K = Pℱ), where P ∈ R^{M×N} is formed from M rows of the identity matrix (M ≪ N) and ℱ is the discrete Fourier matrix. Using the identity ℱ^T = ℱ^{−1}, (42) becomes
| (43) |
The discrete solution for (41) can be written as
$$d_i^{k+1} = \max\!\Big(\big|(\nabla_{NL} u^{k+1})_i + b_i^k\big| - \frac{1}{\lambda},\, 0\Big)\, \frac{(\nabla_{NL} u^{k+1})_i + b_i^k}{\big|(\nabla_{NL} u^{k+1})_i + b_i^k\big|} \tag{44}$$
Finally, the Bregman variable is updated as
$$b^{k+1} = b^k + \nabla_{NL} u^{k+1} - d^{k+1} \tag{45}$$
The proposed method is summarized as Algorithm 2.
Algorithm 2. Proposed algorithm.
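The algorithm box is also missing here, so the sketch below reproduces the overall structure of (31)-(45) under two simplifications that are ours, not the paper's: the NLTV term is handled anisotropically (one shrinkage per weighted pixel pair), and the u-subproblem (40) is solved with conjugate gradients instead of the Gauss-Seidel/Fourier updates of (42)-(43). The sparse weight matrix W is assumed to be precomputed, for example from patch weights like those sketched in Section 2.3.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg

def nl_gradient_operator(W):
    """Sparse per-edge nonlocal gradient: the row for the pair (i, j) maps the
    flattened image u to sqrt(w_ij) * (u_j - u_i), cf. (28)."""
    Wc = sp.coo_matrix(W)
    e = np.arange(Wc.nnz)
    s = np.sqrt(Wc.data)
    return sp.coo_matrix((np.concatenate([s, -s]),
                          (np.concatenate([e, e]), np.concatenate([Wc.col, Wc.row]))),
                         shape=(Wc.nnz, Wc.shape[0])).tocsr()

def nltv_split_bregman(v, mask, W, mu=1.0, lam=1.0, n_outer=10, n_inner=5):
    """Hedged sketch of the proposed reconstruction, following (31)-(45)."""
    shape, n = v.shape, v.size
    G = nl_gradient_operator(W)

    def KtK(x):            # K^T K u = real part of F^{-1}(mask .* F u)
        U = np.fft.fft2(x.reshape(shape), norm="ortho")
        return np.fft.ifft2(mask * U, norm="ortho").real.ravel()

    def Ktv(vk):           # K^T v^k (zero-filled inverse FFT of the k-space data)
        return np.fft.ifft2(mask * vk, norm="ortho").real.ravel()

    A = LinearOperator((n, n), matvec=lambda x: mu * KtK(x) + lam * (G.T @ (G @ x)))
    u = Ktv(v)                                   # zero-filled initial guess
    d = np.zeros(G.shape[0]); b = np.zeros_like(d)
    vk = v.astype(complex)
    for _ in range(n_outer):
        for _ in range(n_inner):
            rhs = mu * Ktv(vk) + lam * (G.T @ (d - b))
            u, _ = cg(A, rhs, x0=u)              # (39)/(40): u-subproblem, solved by CG
            Gu = G @ u
            d = np.sign(Gu + b) * np.maximum(np.abs(Gu + b) - 1.0 / lam, 0.0)  # (41)/(44)
            b = b + Gu - d                       # (45): Bregman variable update
        vk = vk + v - mask * np.fft.fft2(u.reshape(shape), norm="ortho")       # (32)
    return u.reshape(shape)
```

In the paper itself, the inner linear system is handled with the Gauss-Seidel and Fourier-domain updates of (42)-(43), which avoids the CG iterations used in this sketch.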
4. Evaluation of Image Quality
In this work, a detailed evaluation study has been carried out on the reconstruction of MR images that represent varying degrees of structural complexity. Even though algorithms based on regularization techniques effectively remove streaks, other aspects of image quality should also be analyzed. To address this, a number of image quality evaluations are performed at different levels, including qualitative visualization-based evaluation and quantitative metric-based evaluation.
4.1. Qualitative Visualization-Based Evaluation
In qualitative visualization-based evaluation, the reconstructed images obtained with the different algorithms are visually compared with the reference image.
4.2. Quantitative Metric-Based Evaluation
Besides the visualization-based evaluation, the similarity between the reconstructed and reference images is quantitatively assessed by means of four measures: signal-to-noise ratio (SNR), relative error (RE), structural similarity index (SSIM), and feature similarity index (FSIM). SNR and RE are widely used for measuring reconstruction accuracy, while SSIM and FSIM are used for evaluating the similarity between the reconstructed and reference images.
4.2.1. Signal-to-Noise Ratio (SNR)
The signal-to-noise ratio is defined as
$$\mathrm{SNR} = 10\log_{10}\!\left(\frac{\|u_{\mathrm{ref}} - \bar{u}_{\mathrm{ref}}\|_2^2}{\|u_{\mathrm{rec}} - u_{\mathrm{ref}}\|_2^2}\right) \tag{46}$$
where u_ref is the reference image, ū_ref is the mean intensity value of u_ref, and u_rec is the reconstructed image.
4.2.2. Relative Error (RE)
The relative error is defined as
$$\mathrm{RE} = \frac{\|u_{\mathrm{rec}} - u_{\mathrm{ref}}\|_2}{\|u_{\mathrm{ref}}\|_2} \tag{47}$$
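The two accuracy measures can be computed directly from their definitions; the sketch below assumes the forms written in (46) and (47), with RE reported as a percentage as in Tables 1 and 2.

```python
import numpy as np

def snr_db(u_rec, u_ref):
    """SNR of (46): variance of the reference over the squared reconstruction error, in dB."""
    err = np.sum((u_rec - u_ref) ** 2)
    sig = np.sum((u_ref - u_ref.mean()) ** 2)
    return 10.0 * np.log10(sig / err)

def relative_error_pct(u_rec, u_ref):
    """Relative error of (47), expressed as a percentage."""
    return 100.0 * np.linalg.norm(u_rec - u_ref) / np.linalg.norm(u_ref)
```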
4.2.3. Structural Similarity Index (SSIM)
The SNR measurement gives a numerical value for the degradation, but it does not describe its type. Moreover, as noted in [36, 37], it does not quite represent the quality perceived by human observers. For medical imaging applications, where degraded images must eventually be examined by experts, traditional evaluation remains insufficient. For this reason, objective approaches are needed to assess medical image quality. We therefore adopt a paradigm that estimates the quality of medical images based on the assumption that the human visual system (HVS) is highly adapted to extract structural information. The similarity index compares the brightness I(x, y), contrast c(x, y), and structure s(x, y) between each pair of vectors, where the SSIM index between two signals x and y is given by the following expression [38, 39]:
$$\mathrm{SSIM}(x, y) = \big[I(x, y)\big]^{\alpha}\big[c(x, y)\big]^{\beta}\big[s(x, y)\big]^{\gamma} \tag{48}$$
The comparison of brightness is determined by the following expression:
$$I(x, y) = \frac{2\mu_x \mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1}, \qquad C_1 = (K_1 L)^2 \tag{49}$$
where the average intensity of signal x is given by
$$\mu_x = \frac{1}{N}\sum_{i=1}^{N} x_i \tag{50}$$
the constant K_1 ≪ 1, and L is the dynamic range of the pixel values (255 for an image coded on 8 bits). The contrast comparison function takes the following form:
$$c(x, y) = \frac{2\sigma_x \sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2} \tag{51}$$
where σ_x is the standard deviation of the original signal x, C_2 = (K_2 L)^2, and the constant K_2 ≪ 1.
The function of structure comparison is defined as follows:
$$s(x, y) = \frac{\operatorname{cov}(x, y) + C_3}{\sigma_x \sigma_y + C_3} \tag{52}$$
where cov(x, y) = μ_{xy} − μ_x μ_y and C_3 = C_2/2.
Then, the expression of the structural similarity index becomes
$$\mathrm{SSIM}(x, y) = \frac{\big(2\mu_x\mu_y + C_1\big)\big(2\operatorname{cov}(x, y) + C_2\big)}{\big(\mu_x^2 + \mu_y^2 + C_1\big)\big(\sigma_x^2 + \sigma_y^2 + C_2\big)} \tag{53}$$
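A compact rendering of (53); it is computed here over the whole image in a single window, whereas the standard SSIM of [38] averages the index over local windows, so treat this as a simplified illustration.

```python
import numpy as np

def ssim_global(x, y, K1=0.01, K2=0.03, L=255.0):
    """Single-window SSIM following (53); K1, K2 and the dynamic range L
    are the commonly used defaults for 8-bit images."""
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()                    # cov(x, y)
    return ((2 * mx * my + C1) * (2 * cxy + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))
```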
4.2.4. Feature Similarity Index (FSIM)
The SSIM index moves image quality assessment (IQA) from a pixel-based stage to a structure-based stage. The human visual system (HVS) understands an image mainly based on its low-level features, primarily the phase congruency (PC), which is a dimensionless measure of the significance of a local structure. PC is used as the primary feature in FSIM [40]. The secondary feature used in FSIM is the image gradient magnitude (GM). In order to find the feature similarity between two images f_1 and f_2, the above-mentioned parameters PC and GM must be calculated first.
Let PC_1, PC_2, G_1, and G_2 be the phase congruency and gradient magnitude maps of images f_1 and f_2, respectively. Initially, separate the feature similarity measurement between f_1(x) and f_2(x) into two components. The first similarity measure is based on PC_1(x) and PC_2(x) and is defined as
$$S_{PC}(x) = \frac{2\,\mathrm{PC}_1(x)\,\mathrm{PC}_2(x) + T_1}{\mathrm{PC}_1^2(x) + \mathrm{PC}_2^2(x) + T_1} \tag{54}$$
where T_1 is a positive constant which increases the stability of S_PC. The value of T_1 depends on the dynamic range of PC. Similarly, the similarity measure based on the GM values G_1(x) and G_2(x) is defined as
$$S_{G}(x) = \frac{2\,G_1(x)\,G_2(x) + T_2}{G_1^2(x) + G_2^2(x) + T_2} \tag{55}$$
where T_2 is a positive constant which depends on the dynamic range of the GM values.
The next step is to combine S_PC(x) and S_G(x) to obtain the similarity measure S_L(x) between f_1(x) and f_2(x), defined as
$$S_{L}(x) = \big[S_{PC}(x)\big]^{\alpha}\big[S_{G}(x)\big]^{\beta} \tag{56}$$
The relative importance of the PC and GM features can be adjusted by means of the parameters α and β. For simplicity, set α = β = 1; then (56) becomes
$$S_{L}(x) = S_{PC}(x)\, S_{G}(x) \tag{57}$$
where S_L(x) represents the similarity at each location x; the overall similarity must then be computed. For a given location x, if either f_1(x) or f_2(x) has a significant PC value, this position x will have a high impact on the HVS in evaluating the similarity between f_1 and f_2. Therefore, we introduce a new term PC_m(x) = max(PC_1(x), PC_2(x)) to weight the importance of S_L(x) in the overall similarity between f_1 and f_2. Accordingly, the FSIM index can be defined as
$$\mathrm{FSIM} = \frac{\sum_{x\in\Omega} S_L(x)\,\mathrm{PC}_m(x)}{\sum_{x\in\Omega} \mathrm{PC}_m(x)} \tag{58}$$
where Ω denotes the whole image spatial domain.
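Computing phase congruency requires a log-Gabor filter bank and is outside the scope of this sketch, so the PC maps are taken as inputs; the gradient magnitude is obtained with simple finite differences (the FSIM paper [40] uses Scharr filters), and the constants T1 and T2 are illustrative.

```python
import numpy as np

def fsim_pool(f1, f2, pc1, pc2, T1=0.85, T2=160.0):
    """FSIM pooling of (54)-(58), given precomputed phase-congruency maps pc1, pc2."""
    def grad_mag(f):                                  # gradient magnitude by finite differences
        gy, gx = np.gradient(f.astype(float))
        return np.sqrt(gx ** 2 + gy ** 2)
    g1, g2 = grad_mag(f1), grad_mag(f2)
    s_pc = (2 * pc1 * pc2 + T1) / (pc1 ** 2 + pc2 ** 2 + T1)   # (54)
    s_g = (2 * g1 * g2 + T2) / (g1 ** 2 + g2 ** 2 + T2)        # (55)
    s_l = s_pc * s_g                                           # (57), with alpha = beta = 1
    pc_m = np.maximum(pc1, pc2)                                # weighting term PC_m
    return np.sum(s_l * pc_m) / np.sum(pc_m)                   # (58)
```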
5. Experiments and Numerical Results
The experimental setup used in previous works [5–7] is followed here. Suppose that an MR image u has m × m pixels and that the partial Fourier transform K in (3) consists of n rows of the m^2 × m^2 matrix corresponding to the full 2D discrete Fourier transform. The n selected rows correspond to the acquired data v. The sampling ratio is defined as the number of acquired samples divided by the total number of pixels. The scanning time is shorter if the sampling ratio is smaller. In k-space, more samples are randomly chosen at low frequencies and fewer at high frequencies. This sampling scheme has been widely used for compressed MR image reconstruction, and similar schemes are utilized in [4–7]. Practically, the sampling scheme and speed in MR imaging also depend on physical and physiological limitations [4]. In the proposed work, the compressive sensing matrix K = Pℱ, where P is a row selector matrix and ℱ is the Fourier transform matrix. For an N × N image, we randomly choose m coefficients, so that P is a sampling matrix of size m × N^2. All experiments are conducted on a PC with an Intel Core i7-2670, 2.2 GHz CPU in the MATLAB environment. The results of the proposed work are compared with existing methods, namely TVCMRI [5], RecPF [6], CSA, FCSA [7], and SB-TV [8]. The observation measurement v is synthesized as v = Ku + n, where n is the noise, K and v are given as inputs, and u is the unknown target. In this work, we considered two sets of observations: one contaminated with Gaussian noise of standard deviation σ = 0.01 and the other contaminated with Rician noise at a noise level of 5%.
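The exact density profile used for the variable-density random sampling is not specified in the text, so the radial power-law profile below is only a stand-in for "more samples at low frequencies, fewer at high frequencies".

```python
import numpy as np

def variable_density_mask(shape, ratio=0.20, decay=3.0, seed=0):
    """Random k-space mask that samples densely near the center of k-space and
    sparsely at high frequencies; `decay` controls how fast the probability falls."""
    rng = np.random.default_rng(seed)
    ny, nx = shape
    yy, xx = np.meshgrid(np.linspace(-1, 1, ny), np.linspace(-1, 1, nx), indexing="ij")
    prob = (1.0 - np.minimum(np.sqrt(xx ** 2 + yy ** 2), 1.0)) ** decay
    prob *= ratio * prob.size / prob.sum()           # rescale so the mean hits ~ratio
    mask = rng.random(shape) < np.minimum(prob, 1.0)
    return np.fft.ifftshift(mask)                    # move the dense region to the DC corner

mask = variable_density_mask((256, 256), ratio=0.20)
print(mask.mean())   # roughly 0.20 of k-space is acquired
```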
The proposed and existing algorithms are tested using four 2D MR images: brain, chest, artery, and cardiac, as shown in Figure 1. These images have been used for validation in [7]. For convenience, they all have the same size of 256 × 256. The sampling ratio is set to approximately 20%. To perform the comparisons, all methods run for 50 iterations, except SB-TV and the proposed method: SB-TV completes the reconstruction in 35 iterations, while the proposed method takes only 30 iterations. The proposed method provides the best visual results on all MR images. Figures 2, 3, 4, and 5 show visual comparisons of the results reconstructed from the Gaussian noisy observations by the different methods for the brain, chest, artery, and cardiac images, respectively. Figure 10 gives the performance comparison between the different methods in terms of CPU time versus SNR. Table 1 shows the average values of CPU time, SNR, RE, SSIM, and FSIM for the different methods in the Gaussian noise case. Figures 6, 7, 8, and 9 show visual comparisons of the results reconstructed from the Rician noisy observations by the different methods for the brain, chest, artery, and cardiac images, respectively. Figure 11 gives the performance comparison between the different methods in terms of CPU time versus SNR. Table 2 shows the average values of CPU time, SNR, RE, SSIM, and FSIM for the different methods in the Rician noise case. In both cases, the results reconstructed by the proposed method are better than those produced by TVCMRI, RecPF, CSA, FCSA, and SB-TV. The reconstruction performance of the proposed work is the best in terms of both computational complexity and reconstruction accuracy, which clearly demonstrates the efficiency of this method for compressed MR image reconstruction.
Figure 1.
2D MR test images.
Figure 2.
Brain MR image reconstruction from 20% samples (Gaussian noise case).
Figure 3.
Chest MR image reconstruction from 20% samples (Gaussian noise case).
Figure 4.
Artery MR image reconstruction from 20% samples (Gaussian noise case).
Figure 5.
Cardiac MR image reconstruction from 20% samples (Gaussian noise case).
Figure 10.
Performance comparisons (CPU time versus SNR) on different MR images (Gaussian noise case).
Table 1.
Comparison of the average values of quality evaluation parameters (Gaussian noise case).
| Metric | Image | TVCMRI | RecPF | CSA | FCSA | SB-TV | Proposed |
|---|---|---|---|---|---|---|---|
| SNR (dB) | Brain | 14.19 | 14.75 | 15.20 | 15.78 | 16.31 | 17.49 |
| | Chest | 16.99 | 17.38 | 17.13 | 17.63 | 18.03 | 18.89 |
| | Artery | 18.70 | 19.98 | 22.15 | 23.61 | 24.17 | 25.75 |
| | Cardiac | 16.97 | 17.65 | 17.95 | 18.77 | 18.98 | 19.47 |
| RE (%) | Brain | 16.96 | 16.54 | 07.97 | 07.37 | 06.90 | 05.98 |
| | Chest | 11.58 | 11.21 | 10.84 | 10.33 | 08.89 | 07.96 |
| | Artery | 12.61 | 11.84 | 05.77 | 04.90 | 04.57 | 04.10 |
| | Cardiac | 14.03 | 13.39 | 11.10 | 09.78 | 08.29 | 07.72 |
| SSIM | Brain | 0.9567 | 0.9742 | 0.9906 | 0.9935 | 0.9940 | 0.9947 |
| | Chest | 0.9829 | 0.9839 | 0.9848 | 0.9861 | 0.9887 | 0.9914 |
| | Artery | 0.9825 | 0.9835 | 0.9848 | 0.9919 | 0.9924 | 0.9940 |
| | Cardiac | 0.9614 | 0.9642 | 0.9748 | 0.9799 | 0.9835 | 0.9889 |
| FSIM | Brain | 0.8712 | 0.8773 | 0.9489 | 0.9654 | 0.9719 | 0.9765 |
| | Chest | 0.8990 | 0.9012 | 0.9063 | 0.9148 | 0.9237 | 0.9374 |
| | Artery | 0.9010 | 0.9168 | 0.9516 | 0.9668 | 0.9705 | 0.9742 |
| | Cardiac | 0.9313 | 0.9366 | 0.9454 | 0.9551 | 0.9587 | 0.9629 |
| CPU time (s) | Brain | 2.14 | 2.05 | 1.96 | 1.88 | 1.78 | 1.62 |
| | Chest | 2.21 | 2.16 | 1.95 | 1.86 | 1.73 | 1.62 |
| | Artery | 2.26 | 2.15 | 1.86 | 1.83 | 1.72 | 1.64 |
| | Cardiac | 2.13 | 2.10 | 1.84 | 1.82 | 1.80 | 1.67 |
Figure 6.
Brain MR image reconstruction from 20% samples (Rician noise case).
Figure 7.
Chest MR image reconstruction from 20% samples (Rician noise case).
Figure 8.
Artery MR image reconstruction from 20% samples (Rician noise case).
Figure 9.
Cardiac MR image reconstruction from 20% samples (Rician noise case).
Figure 11.
Performance comparisons (CPU time versus SNR) on different MR images (Rician noise case).
Table 2.
Comparison of the average values of quality evaluation parameters (Rician noise case).
| Metric | Image | TVCMRI | RecPF | CSA | FCSA | SB-TV | Proposed |
|---|---|---|---|---|---|---|---|
| SNR (dB) | Brain | 13.31 | 13.85 | 14.81 | 15.31 | 16.24 | 17.08 |
| | Chest | 15.30 | 15.50 | 15.45 | 15.69 | 16.67 | 17.18 |
| | Artery | 18.34 | 19.55 | 21.26 | 22.54 | 23.64 | 24.28 |
| | Cardiac | 14.68 | 14.95 | 14.98 | 15.26 | 17.52 | 18.93 |
| RE (%) | Brain | 17.15 | 16.78 | 13.01 | 12.39 | 10.36 | 10.01 |
| | Chest | 13.19 | 12.94 | 11.58 | 10.98 | 09.38 | 08.81 |
| | Artery | 13.24 | 12.46 | 06.76 | 05.95 | 05.04 | 04.71 |
| | Cardiac | 15.06 | 14.73 | 12.69 | 10.24 | 09.89 | 08.96 |
| SSIM | Brain | 0.9511 | 0.9625 | 0.9787 | 0.9807 | 0.9860 | 0.9870 |
| | Chest | 0.9747 | 0.9762 | 0.9801 | 0.9816 | 0.9828 | 0.9889 |
| | Artery | 0.9810 | 0.9824 | 0.9832 | 0.9889 | 0.9904 | 0.9924 |
| | Cardiac | 0.9582 | 0.9592 | 0.9697 | 0.9717 | 0.9812 | 0.9858 |
| FSIM | Brain | 0.8499 | 0.8562 | 0.8994 | 0.9082 | 0.9290 | 0.9339 |
| | Chest | 0.8860 | 0.8968 | 0.9031 | 0.9067 | 0.9221 | 0.9298 |
| | Artery | 0.8652 | 0.8797 | 0.9424 | 0.9566 | 0.9672 | 0.9715 |
| | Cardiac | 0.9220 | 0.9304 | 0.9352 | 0.9404 | 0.9518 | 0.9565 |
| CPU time (s) | Brain | 2.10 | 2.05 | 1.96 | 1.88 | 1.78 | 1.62 |
| | Chest | 2.26 | 2.14 | 1.97 | 1.85 | 1.73 | 1.63 |
| | Artery | 2.18 | 2.14 | 1.88 | 1.84 | 1.74 | 1.63 |
| | Cardiac | 2.10 | 2.12 | 1.84 | 1.82 | 1.79 | 1.67 |
6. Conclusion
An efficient algorithm has been proposed for compressed MR image reconstruction based on nonlocal total variation and the Split Bregman method. The algorithm effectively solves an NLTV-based L1-norm regularized problem using the Split Bregman method. NLTV can effectively avoid the blocky artifacts caused by traditional TV regularization. Numerous experiments were conducted to compare the different reconstruction methods. The results of our study indicate that the proposed method outperforms the classical methods in terms of both accuracy and complexity.
Acknowledgment
The authors would like to thank the Government of Canada for the financial support through the Commonwealth Scholarship 2011-12 for this research work.
References
- 1. Candès EJ, Romberg J, Tao T. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory. 2006;52(2):489–509.
- 2. Donoho DL. Compressed sensing. IEEE Transactions on Information Theory. 2006;52(4):1289–1306.
- 3. He L, Chang TC, Osher S, Fang T, Speier P. MR image reconstruction by using the iterative refinement method and nonlinear inverse scale space methods. UCLA CAM Report. 2006.
- 4. Lustig M, Donoho D, Pauly JM. Sparse MRI: the application of compressed sensing for rapid MR imaging. Magnetic Resonance in Medicine. 2007;58(6):1182–1195. doi: 10.1002/mrm.21391.
- 5. Ma S, Yin W, Zhang Y, Chakraborty A. An efficient algorithm for compressed MR imaging using total variation and wavelets. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08); June 2008; Anchorage, Alaska, USA. pp. 1–8.
- 6. Yang J, Zhang Y, Yin W. A fast alternating direction method for TVL1-L2 signal reconstruction from partial Fourier data. IEEE Journal on Selected Topics in Signal Processing. 2010;4(2):288–297.
- 7. Huang J, Zhang S, Metaxas D. Efficient MR image reconstruction for compressed MR imaging. Medical Image Analysis. 2011;15(5):670–679. doi: 10.1016/j.media.2011.06.001.
- 8. Goldstein T, Osher S. The split Bregman method for L1-regularized problems. SIAM Journal on Imaging Sciences. 2009;2(2):323–343.
- 9. Dobson DC, Santosa F. Recovery of blocky images from noisy and blurred data. SIAM Journal on Applied Mathematics. 1996;56(4):1181–1198.
- 10. Chambolle A, Lions PL. Image recovery via total variation minimization and related problems. Numerische Mathematik. 1997;76(2):167–188.
- 11. Meyer Y. Oscillating Patterns in Image Processing and Nonlinear Evolution Equations: The Fifteenth Dean Jacqueline B. Lewis Memorial Lectures. Vol. 22. Boston, Mass, USA: American Mathematical Society; 2001. (University Lecture Series).
- 12. Osher S, Burger M, Goldfarb D, Xu J, Yin W. An iterative regularization method for total variation-based image restoration. Multiscale Modeling and Simulation. 2005;4(2):460–489.
- 13. Tian Z, Jia X, Yuan K, Pan T, Jiang SB. Low-dose CT reconstruction via edge-preserving total variation regularization. Physics in Medicine and Biology. 2011;56(18):5949–5967. doi: 10.1088/0031-9155/56/18/011.
- 14. Gilboa G, Osher S. Nonlocal operators with applications to image processing. Multiscale Modeling and Simulation. 2008;7(3):1005–1028.
- 15. Zhang X, Burger M, Bresson X, Osher S. Bregmanized nonlocal regularization for deconvolution and sparse reconstruction. SIAM Journal on Imaging Sciences. 2010;3(3):253–276.
- 16. Peyre G, Bougleux S, Cohen L. Non-local regularization of inverse problems. Computer Vision—ECCV 2008, Lecture Notes in Computer Science. 2008;5304:57–68.
- 17. Lou Y, Zhang X, Osher S, Bertozzi A. Image recovery via nonlocal operators. Journal of Scientific Computing. 2010;42(2):185–197.
- 18. Zhang X, Burger M, Bresson X, Osher S. Bregmanized nonlocal regularization for deconvolution and sparse reconstruction. SIAM Journal on Imaging Sciences. 2010;3(3):253–276.
- 19. Boyd S, Vandenberghe L. Convex Optimization. Cambridge, UK: Cambridge University Press; 2004.
- 20. Nocedal J, Wright S. Numerical Optimization. 2nd edition. New York, NY, USA: Springer; 2006.
- 21. Bregman LM. The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Computational Mathematics and Mathematical Physics. 1967;7(3):200–217.
- 22. Osher S, Mao Y, Dong B, Yin W. Fast linearized Bregman iterations for compressed sensing and sparse denoising. UCLA CAM Report. Los Angeles, Calif, USA: University of California at Los Angeles; 2008.
- 23. He L, Chang TC, Osher S. MR image reconstruction from sparse radial samples by using iterative refinement procedures. Proceedings of the 13th Annual Meeting of ISMRM; 2005; p. 696.
- 24. Rudin LI, Osher S, Fatemi E. Nonlinear total variation based noise removal algorithms. Physica D. 1992;60(1–4):259–268.
- 25. Chambolle A. An algorithm for total variation minimization and applications. Journal of Mathematical Imaging and Vision. 2004;20(1-2):89–97.
- 26. Mignotte M. A non-local regularization strategy for image deconvolution. Pattern Recognition Letters. 2008;29(16):2206–2212.
- 27. Elmoataz A, Lezoray O, Bougleux S. Nonlocal discrete regularization on weighted graphs: a framework for image and manifold processing. IEEE Transactions on Image Processing. 2008;17(7):1047–1060. doi: 10.1109/TIP.2008.924284.
- 28. Jia X, Lou Y, Dong B, Tian Z, Jiang S. 4D computed tomography reconstruction from few-projection data via temporal non-local regularization. Medical Image Computing and Computer-Assisted Intervention—MICCAI 2010, Lecture Notes in Computer Science. 2010;6361:143–150. doi: 10.1007/978-3-642-15705-9_18.
- 29. Buades A, Coll B, Morel JM. A review of image denoising algorithms, with a new one. Multiscale Modeling and Simulation. 2005;4(2):490–530.
- 30. Spira A, Kimmel R, Sochen N. A short-time Beltrami kernel for smoothing images and manifolds. IEEE Transactions on Image Processing. 2007;16(6):1628–1636. doi: 10.1109/tip.2007.894253.
- 31. Tomasi C, Manduchi R. Bilateral filtering for gray and color images. Proceedings of the 6th IEEE International Conference on Computer Vision; January 1998; Bombay, India. pp. 839–846.
- 32. Yaroslavsky LP. Digital Picture Processing—An Introduction. Berlin, Germany: Springer; 1985.
- 33. Coifman RR, Lafon S, Lee AB, et al. Geometric diffusions as a tool for harmonic analysis and structure definition of data: diffusion maps. Proceedings of the National Academy of Sciences of the United States of America. 2005;102(21):7426–7431. doi: 10.1073/pnas.0500334102.
- 34. Buades A, Coll B, Morel JM. A review of image denoising algorithms, with a new one. Multiscale Modeling and Simulation. 2005;4(2):490–530.
- 35. Szlam A, Maggioni M, Coifman RR. A general framework for adaptive regularization based on diffusion processes on graphs. Journal of Machine Learning Research. 2008;9:1711–1739.
- 36. Geisler WS, Banks MS. Visual performance. In: Handbook of Optics. Vol. 1. New York, NY, USA: McGraw-Hill; 1995.
- 37. Watson AB, Kreslake LB. Measurement of visual impairment scales for digital video. 6th Human Vision and Electronic Imaging; January 2001; San Jose, Calif, USA. pp. 79–89.
- 38. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing. 2004;13(4):600–612. doi: 10.1109/tip.2003.819861.
- 39. Wang Z, Bovik AC. A universal image quality index. IEEE Signal Processing Letters. 2002;9(3):81–84.
- 40. Zhang L, Zhang L, Mou X, Zhang D. FSIM: a feature similarity index for image quality assessment. IEEE Transactions on Image Processing. 2011;20(8):2378–2386. doi: 10.1109/TIP.2011.2109730.