Abstract
Magnetic resonance image (MRI) reconstruction from undersampled k-space data requires regularization to reduce noise and aliasing artifacts. Proper application of regularization however requires appropriate selection of associated regularization parameters. In this work, we develop a data-driven regularization parameter adjustment scheme that minimizes an estimate (based on the principle of Stein’s unbiased risk estimate—SURE) of a suitable weighted squared-error measure in k-space. To compute this SURE-type estimate, we propose a Monte-Carlo scheme that extends our previous approach to inverse problems (e.g., MRI reconstruction) involving complex-valued images. Our approach depends only on the output of a given reconstruction algorithm and does not require knowledge of its internal workings, so it is capable of tackling a wide variety of reconstruction algorithms and nonquadratic regularizers including total variation and those based on the ℓ1-norm. Experiments with simulated and real MR data indicate that the proposed approach is capable of providing near mean squared-error (MSE) optimal regularization parameters for single-coil undersampled non-Cartesian MRI reconstruction.
Index Terms: Image reconstruction, non-Cartesian MRI, regularization parameter, Stein’s unbiased risk estimate (SURE), Monte-Carlo methods
I. Introduction
IMAGE reconstruction is a crucial task in magnetic resonance imaging (MRI). Model-based reconstruction methods [1] can improve image quality over direct methods such as iFFT- or gridding-based reconstruction [2], especially for undersampled k-space data. The problem is usually solved by minimizing a cost function involving a model-based data-fidelity term and regularization. Regularization is often included to reduce ill-posedness of the problem for undersampled cases, to stabilize the reconstruction process and also to incorporate prior information about the object being reconstructed. Nonquadratic regularizers can better suppress noise and aliasing artifacts compared to quadratic ones [3]. Sparsity-promoting regularizers such as those based on the ℓ1-norm and edge-preserving total variation (TV) are popular nonquadratic regularizers in MRI [4]–[9]. Successful regularization requires careful selection of associated regularization parameters that control the strength of these regularizers during reconstruction. These parameters are often set manually (based on visual perception) for MRI reconstruction. In this paper, we focus on the problem of automatic selection of these parameters for MRI reconstruction from undersampled k-space data.
Various quantitative criteria exist for automatic selection of parameters for regularized image reconstruction in general [10], [11]. These may be broadly classified as those based on the discrepancy principle [10], [11], the L-curve [12]–[14], generalized cross-validation (GCV) [15]–[19] and estimation of (weighted) mean squared-error (MSE, also known as risk) using the principles underlying Stein’s unbiased risk estimate (SURE) [20]–[27]. Unlike task-based methods [28]–[30] that focus on developing quality assessment criteria specific to a given task (e.g., detecting a lesion), the above parameter selection methods only determine a “reasonable” solution from a “feasible set” that is predetermined by the chosen cost function.
Among these methods, we focus on the weighted MSE (WMSE) based approach since WMSE is easily manipulated and estimated using the SURE-framework [23], [24], [27] and also because it is commonly used to quantify reconstruction quality [22]–[27]. Moreover, SURE-based methods can tackle noniterative nonlinear reconstruction [22], [25], [26] and iterative regularized reconstruction using nonquadratic regularizers [23], [24], [27] and also provide (near) MSE-optimal (regularization) parameter selection [22]–[27]. SURE-based parameter selection assumes that real- or complex-valued noise in the observed data follows a Gaussian distribution with known mean and covariance, so it is well-suited for MRI.
Previous applications of SURE-type parameter selection for MRI include noniterative denoising of magnitude images [25], SENSitivity Encoding [31] (SENSE) based noniterative reconstruction from uniformly undersampled multi-coil Cartesian k-space data [26] and iterative MRI reconstruction (using nonquadratic regularizers) from single-coil Cartesian k-space data with arbitrary undersampling [27]. These papers derive analytically a (weighted) SURE-type estimate of a (weighted) MSE for a particular (iterative) reconstruction algorithm.
In this work, we propose a SURE-based regularization parameter selection method for iterative MRI reconstruction from undersampled data using nonquadratic regularizers. Unlike earlier work [23]–[27], we propose a Monte-Carlo scheme for computing the desired weighted SURE-type estimate. This Monte-Carlo scheme extends our previous work for real-valued denoising algorithms [32] to complex-valued reconstruction algorithms with application to MRI reconstruction. Our Monte-Carlo method depends only on the output of a given reconstruction algorithm and does not require knowledge of its internal workings beyond confirming that it satisfies certain (weak) differentiability conditions, so it is very flexible and can be applied to a wide variety of iterative/noniterative nonlinear algorithms.
We illustrate the efficacy of the proposed Monte-Carlo scheme for MRI reconstruction from single-coil undersampled non-Cartesian k-space data with several nonquadratic regularizers such as a smooth edge-preserving one, TV and an ℓ1-regularizer. We present numerical results for simulations with the analytical Shepp-Logan phantom [33] and experiments with real GE phantom data and in-vivo human brain data. These results extend those in our previous work [27] for MRI reconstruction from single-coil undersampled Cartesian data. We demonstrate that the proposed Monte-Carlo SURE-based method provides near-MSE-optimal regularization parameter selection and performs equally well or better than GCV for nonlinear algorithms [18], [27, Eq. (7)]. Methods proposed in this paper can also be extended to tackle nonquadratic regularization based iterative parallel MRI reconstruction from Cartesian and non-Cartesian k-space data with arbitrary undersampling (see Section VII).
The paper is organized as follows. We introduce our data model and describe the parameter selection problem mathematically in Section II. We briefly review the principles underlying SURE in Section III and describe the proposed Monte-Carlo method in detail in Section IV. We briefly describe regularized iterative single-coil non-Cartesian MRI reconstruction in Section V. We present a variety of experimental results in Section VI and discuss implementation aspects and possible extensions to this work in Section VII. We finally conclude with Section VIII.
In the rest of the paper, (·)⊤, (·)′ respectively denote the non-Hermitian and Hermitian transposes, and (·)ℛ and (·)ℐ respectively indicate the real and imaginary components of a complex vector or matrix. The mth element of any vector y is denoted by either [y]m or ym and the mnth element of any matrix A is written as [A]mn. We write ι ≜ √(−1), and for any vector y and any matrix W we define the weighted quadratic norm ‖y‖W² ≜ y′Wy.
II. Problem Description
A. Data Model
In MRI, noise originates in the analog domain (due to thermal fluctuations of spins) before acquisition of k-space samples but can be modeled reasonably accurately as additive Gaussian in the acquired k-space samples. So, we use the following data-model [1, Eq. (12)]:
y = ytrue + ξ,     (1)
where we assume that ytrue ∈ ℂM, containing samples of the true unknown MR signal, is a deterministic unknown, y ∈ ℂM contains noisy measurements, and ξ ∈ ℂM is a zero-mean complex-valued Gaussian random vector with covariance matrix Ω ∈ ℂM × M.
At this point, (1) does not involve discretization of the underlying continuous-domain object χtrue that is being scanned. Thus, (1) can accommodate continuous-domain physical-effects representative of MR physics and imaging such as transverse relaxation, inhomogeneity of the applied magnetic field, chemical shifts and nonuniform sensitivity of receive coils [1, Eq. (10)] via ytrue. It also applies to several types of MRI including single-coil/parallel imaging, undersampled Cartesian/non-Cartesian imaging and combinations thereof.
B. Image Reconstruction
For the purpose of image reconstruction, we use the following discretized linear model [1, Eq. (18)]
y = Axtrue + ξ     (2)
that is based on a discretization [1, Eq. (14)], xtrue, of the continuous-domain object χtrue. This discretization correspondingly yields [1, Eqs. (14)–(17)] a system matrix, A, that approximates continuous-domain imaging operations such as those mentioned in Section II-A. The matrix A depends mainly upon (among other factors such as the pulse sequence and coil geometry) the k-space trajectory used to acquire y and is assumed to be known. While A is essential for image reconstruction, we remark that xtrue is a hypothetical object that is not necessary for the methods proposed in this paper and is used purely for validating our simulations. For an appropriate discretization [1], A represents (nonuniform) discrete Fourier transform for (non-Cartesian) single-coil imaging (ignoring field inhomogeneity and relaxation effects) while for parallel MRI, it corresponds to the combined Fourier and spatial sensitivity encoding matrix [3].
Given (1)–(2), the goal of image reconstruction is to obtain a discretized estimate, x̂, of χtrue from y. This corresponds to an ill-posed inverse problem when M < N and is usually tackled in a regularized-reconstruction framework where an iterative reconstruction algorithm is applied on y to yield x̂. We denote the reconstruction process by
x̂ = uλ(y),     (3)
where uλ : ℂM → ℂN is a (possibly nonlinear) operator representative of the corresponding iterative reconstruction algorithm. The vector λ in uλ denotes one or more tunable parameters (e.g., number of iterations, regularization strength) that characterize the reconstruction method and govern the quality of x̂. Selecting a suitable λ thus plays an important role in problems such as (3). Often, λ is adjusted manually based on visual perception of x̂. In this work, we focus on quantitative methods for selecting λ automatically. Specifically, we propose to use a weighted squared-error measure in the measurement domain that can be estimated using Stein’s principle [20], [21] and then minimized to yield an appropriate choice of λ.
C. Weighted Squared-Error Measures
In imaging inverse problems, reconstruction quality is often quantified using the mean squared-error, MSE(λ) ≜ ‖uλ(y) − xtrue‖2², and it is thus a reasonable metric for adjusting λ. However, MSE(λ) is neither accessible in practice (due to its dependence on xtrue) nor amenable for estimation1 (e.g., using Stein’s principle) due to the ill-posedness of (2) for M < N [21], [23], [27].
1) Previous Extensions to MSE
To circumvent this difficulty, some authors [21], [23] have focussed on
Projected-MSE(λ) ≜ ‖P(uλ(y) − xtrue)‖2²,     (4)
where P ≜ A′(AA′)†A and (·)† represents the pseudo-inverse. Another alternative [11], [27] is
Predicted-MSE(λ) ≜ ‖Auλ(y) − Axtrue‖2².     (5)
Both of these metrics are tractable with Stein’s principle [21], [23], [27]. In our previous work [27], we considered a weighted variant,
WMSE(λ) ≜ ‖Auλ(y) − Axtrue‖W²     (6)
that subsumes both Projected-MSE(λ) and Predicted-MSE(λ) for appropriate choices of the symmetric positive semi-definite weighting matrix W ≥ 0 [27, Sec. III-B]. All of these metrics that depend on xtrue assume that the observed data y follows the discretized linear model in (2). For such a model (2), WMSE(λ) can be unbiasedly estimated using Stein’s principle to yield WSURE(λ) [27, Eq. (12)] when ξ in (2) is Gaussian [27, Thm. 2]. Unlike MSE(λ) however, WMSE(λ) evaluates the error in the measurement-domain, i.e., the range space of A; for MRI, WMSE(λ) corresponds to evaluating weighted squared-error in k-space. Despite this dissimilarity from MSE(λ), we found that WMSE(λ), via its estimate WSURE(λ) [27, Eq. (12)], can be used to obtain near-MSE-optimal regularization parameters for iterative nonlinear image-deblurring and MRI reconstruction from undersampled Cartesian k-space data [27].
Using Stein’s principle [20], [21] to estimate WMSE(λ) involves substituting Axtrue = y − ξ from (2) in WMSE(λ) (6) and exploiting the statistics of ξ to analytically evaluate ξ-related terms in the expectation sense [27, Thm. 1]. The resulting unbiased estimate WSURE(λ) [27, Eq. (12)] is independent of Axtrue and depends only on y, a first-order differential response of uλ and the mean and covariance of ξ thereby making it a practical proxy for WMSE(λ). However, the unbiasedness of WSURE(λ) to WMSE(λ) is meaningful only when the observed data follows (2). The discretized linear model (2), although crucial for image reconstruction, does not adequately describe how imaging systems work in practice: observed data y often involves continuous-domain imaging operations, e.g., representative of MR physics described in Section II-A, that may not be completely captured by the discretization in Axtrue. Thus, since WSURE(λ) depends on y and not on Axtrue, a discrepancy arises in reasoning that WSURE(λ) is unbiased for practical imaging inverse problems.
2) Proposed Measure
To avoid this discrepancy in reasoning, we propose to consider the following WMSE metric with respect to the True Data ytrue since ytrue accounts for continuous-domain imaging operations:
WMSETD(λ) ≜ ‖Auλ(y) − ytrue‖W².     (7)
We still require Auλ(y) in (7) because we are reconstructing a discretized version, i.e., uλ(y), of the original continuous-domain object χtrue so that A maps uλ(y) to its corresponding k-space vector. Similar to WMSE(λ), WMSETD(λ) is also a measurement-domain error metric that is not directly accessible due to its dependence on the true unknown samples ytrue. However, since ytrue describes MR data-acquisition more realistically via continuous-domain operations than Axtrue, WMSETD(λ) is a more accurate representation of the k-space error than WMSE(λ). Below, we show that Stein’s principle [20], [21] can be used to estimate2 WMSETD(λ) and leads to an expression for WSURE(λ) that is very similar to that reported in our previous work [27, Eq. (12)].
Due to the generality of (1)–(2), we can use WMSETD(λ) [via WSURE(λ)] to tune λ in a variety of MRI reconstruction problems including single-coil / multi-coil MRI reconstruction (from undersampled data) with / without compensation for field-inhomogeneity and relaxation effects. However, the appropriateness of WMSETD(λ) for a given MRI technique needs to be validated using numerical experiments on a case-by-case basis. In this paper, we consider single-coil non-Cartesian MRI ignoring field-inhomogeneity and relaxation effects as an extension to our previous work [27] that focussed on single-coil Cartesian3 MRI. We present experimental results in Section VI illustrating that WSURE(λ) can provide near-MSE-optimal regularization parameter selection for regularized MRI reconstruction from single-coil undersampled non-Cartesian k-space data. We also briefly discuss extensions to parallel MRI in Section VII and report results for using the proposed methods for parallel MRI reconstruction using two different algorithms in [34]–[36].
III. Estimating WMSETD Using Stein’s Principle
Expanding WMSETD(λ) and using (1) to write ytrue = y − ξ, we get that
WMSETD(λ) = ‖Auλ(y)‖W² − 2ℛ{y′WAuλ(y)} + 2ℛ{ξ′WAuλ(y)} + ‖ytrue‖W²,     (8)
where ℛ{·} stands for real part of a complex-number. Apart from the irrelevant constant that does not depend on λ, the only inaccessible term is ξ′WAuλ(y). In the sequel, we use the principles underlying Stein’s result [20] and generalized SURE [21] for estimating this term.
Lemma 1: Let the following be true:
- ξ ∈ ℂM in (1) is complex Gaussian with 𝖤ξ{ξ} = 0, 𝖤ξ{ξξ⊤} = 0, and 𝖤ξ{ξξ′} = Ω ≻ 0, where 𝖤ξ denotes expectation with respect to ξ,
- uλ : ℂM → ℂN is individually analytic [37] with respect to the real and imaginary parts of its argument (in the weak sense of distributions [38, Ch. 6]), and
- the matrix
Γ ≜ ΩWA ∈ ℂM × N     (9)
satisfies 𝖤ξ{|[Γuλ(y)]m|} < ∞, m = 1,…, M.
Then, we have that
𝖤ξ{ℛ{ξ′WAuλ(y)}} = 𝖤ξ{ℛ{tr{ΓJuλ(y)}}},     (10)
where tr{·} denotes the trace of a matrix and Juλ (y) ∈ ℂN × M is the Jacobian matrix of (weak) partial derivatives of the components of uλ with respect to the components of y and is defined via its elements as
[Juλ(y)]nm ≜ (1/2)(∂[uλ(y)]n/∂yℛm − ι ∂[uλ(y)]n/∂yℐm).     (11)
Proof: The proof is a straightforward extension of previous results [20], [21, Thm. 1], [27, Lem. 1] and is given in Appendix A for completeness.
We now use (10) to show that
WSURE(λ) ≜ ‖Auλ(y) − y‖W² − tr{WΩ} + 2ℛ{tr{ΓJuλ(y)}}     (12)
is an unbiased estimate of WMSETD(λ).
Theorem 1: Let uλ(y) and Γ in (9) satisfy the hypotheses of Lemma 1. Then WSURE(λ) (12) is an unbiased estimate of WMSETD(λ) (7), i.e., 𝖤ξ{WMSETD(λ)} = 𝖤ξ{WSURE(λ)}.
Proof: The proof is straightforward and uses Lemma 1 to estimate the term ξ′WAuλ(y) in WMSETD(λ) (8).
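For concreteness, the following sketch (written for the expressions (7), (8) and (12) in the forms given above, and assuming the zero-mean circular Gaussian model for ξ in Lemma 1) indicates how the two expectations match:

```latex
\begin{align*}
\mathsf{E}_\xi\{\mathrm{WMSE}_{\mathrm{TD}}(\lambda)\}
 &= \mathsf{E}_\xi\{\|A u_\lambda(y)-y\|_W^2\}
   + \mathsf{E}_\xi\{\xi' W \xi\}
   + 2\,\mathsf{E}_\xi\{\Re\{\xi' W (A u_\lambda(y)-y)\}\} \\
 &= \mathsf{E}_\xi\{\|A u_\lambda(y)-y\|_W^2\}
   - \operatorname{tr}\{W\Omega\}
   + 2\,\mathsf{E}_\xi\{\Re\{\xi' W A u_\lambda(y)\}\} \\
 &= \mathsf{E}_\xi\{\|A u_\lambda(y)-y\|_W^2\}
   - \operatorname{tr}\{W\Omega\}
   + 2\,\mathsf{E}_\xi\{\Re\{\operatorname{tr}\{\Gamma J_{u_\lambda}(y)\}\}\}
  = \mathsf{E}_\xi\{\mathrm{WSURE}(\lambda)\},
\end{align*}
```

using ytrue = y − ξ in the first step, 𝖤ξ{ξ′Wξ} = 𝖤ξ{ℛ{ξ′Wy}} = tr{WΩ} in the second, and Lemma 1 in the last.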
The estimate, WSURE(λ) (12), of WMSETD(λ) (7) is independent of ytrue and depends only on y, the noise covariance matrix Ω and uλ via tr{ΓJuλ(y)}. Thus, it is feasible to compute WSURE(λ) as a proxy for WMSETD(λ) for tuning λ. In our previous work [27], we analytically evaluated Juλ(y) recursively for some iterative reconstruction algorithms for image-deblurring and single-coil undersampled Cartesian MRI reconstruction. Although accurate, such an analytical approach demands tedious mathematical derivations that depend on the specifics of uλ and that must be repeated for different uλ individually on a case-by-case basis.
In this work, we propose a Monte-Carlo scheme for numerically estimating tr{ΓJuλ(y)} in WSURE(λ) (12). The proposed scheme does not require knowledge of the implementation details of uλ as we shall see next; this advantage makes it readily applicable to a wide variety of (weakly differentiable) estimators uλ.
IV. Monte-Carlo Estimation
The proposed Monte-Carlo method for tuning λ extends our previous result [32, Thm. 2], which focussed on real-valued uλ for denoising applications, to handle complex-valued uλ in (3) with application to imaging inverse problems, especially MRI. Similar to [32, Thm. 2], we probe uλ and analyze its response to complex-valued random perturbations in y to estimate tr{ΓJuλ(y)}.
Theorem 2: Consider the random vector
y + εΛb ∈ ℂM, ε > 0,     (13)
where b ∈ ℂM is an i.i.d. random vector independent of y such that 𝖤b{b} = 0, 𝖤b{bb⊤} = 0, 𝖤b{bb′} = IM, and Λ ∈ ℂM × M is an invertible deterministic matrix. If uλ admits a second order Taylor expansion in addition to satisfying the hypotheses in Lemma 1, we have that
tr{ΓJuλ(y)} = limε→0 𝖤b{(1/ε) b′Λ−1Γ[uλ(y + εΛb) − uλ(y)]}.     (14)
Proof: When uλ(y) admits a second-order Taylor expansion, we have that [39]
uλ(y + εΛb) = uλ(y) + ε Juλ(y)Λb + ε Kuλ(y)(Λb)* + o(Λb, ε),     (15)
where Kuλ(y) ∈ ℂN × M is the Jacobian of uλ with respect to the complex conjugate of its argument (defined as in (11) but with +ι in place of −ι) [39], (·)* denotes complex conjugation, and o(Λb, ε) satisfies limε→0 𝖤b{|bmo(Λb, ε)|}/ε = 0, for m = 1,…, M. Then, from (15), we have that
limε→0 𝖤b{(1/ε) b′Λ−1Γ[uλ(y + εΛb) − uλ(y)]} = 𝖤b{b′Λ−1ΓJuλ(y)Λb} + 𝖤b{b′Λ−1ΓKuλ(y)(Λb)*},     (16)
where the last term in the right-hand-side (rhs) of (15) vanishes due to the limit. The second term in the rhs of (16) vanishes since
𝖤b{b′Λ−1ΓKuλ(y)(Λb)*} = tr{Λ−1ΓKuλ(y)Λ*(𝖤b{bb⊤})*} = 0,     (17)
while the first term can be manipulated as
𝖤b{b′Λ−1ΓJuλ(y)Λb} = tr{Λ−1ΓJuλ(y)Λ 𝖤b{bb′}} = tr{Λ−1ΓJuλ(y)Λ} = tr{ΓJuλ(y)},     (18)
which is the desired result.
Theorem 2 generalizes [32, Thm. 2] to complex-valued problems allowing for a correlation matrix Λ in (13)–(14). We briefly discuss the role of Λ later in this section and in Section VII. The Monte-Carlo result (14) does not explicitly rely on the functional form of uλ and is equally applicable to both linear and nonlinear uλ.
A generic linear reconstruction algorithm has the form
x̂ = uλ(y) = Hλy     (19)
for some (reconstruction) matrix Hλ ∈ ℂN × M parametrized by λ. Our Monte-Carlo result (14) further simplifies for linear uλ (19) as shown in the following corollary that extends our previous result [32, Prop. 2] to the case of complex-valued uλ.
Corollary 1: When uλ is linear, (14) holds without the limit, independent of ε, leading to the following identity
tr{ΓHλ} = 𝖤b{b′Λ−1ΓHλΛb}.     (20)
Proof: For linear uλ (19), the rhs of (14) reduces to 𝖤b{b′Λ−1ΓHλΛb} without limε→0, which does not depend on ε. A manipulation similar to that in (18) leads to (20).
When Λ = IM, Corollary 1 is a restatement of existing results [40]–[43] for Monte-Carlo estimation of the trace of a matrix and is useful [via WSURE(λ)] for adjusting λ of linear MRI reconstruction algorithms [32], [40], e.g., conjugate phase reconstruction with density compensation [2], [44], where λ could describe some parametrization of the density compensation weights, or reconstruction with Tikhonov-type quadratic regularizers [32], [40], where λ could denote regularization parameters.
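As a concrete illustration of the Λ = IM case of (20), the short sketch below estimates the trace of an explicit matrix (standing in for ΓHλ) by averaging b′Mb over random probes with 𝖤b{bb′} = IM and 𝖤b{bb⊤} = 0; this is generic Hutchinson-type trace estimation in the spirit of [40]–[43], written in Python purely for illustration (the experiments in this paper were done in Matlab).

```python
import numpy as np

rng = np.random.default_rng(0)

def complex_pm_probe(m, rng):
    """i.i.d. entries (+/-1 +/- 1j)/sqrt(2): E{b b'} = I_M and E{b b^T} = 0."""
    return (rng.choice([-1.0, 1.0], m) + 1j * rng.choice([-1.0, 1.0], m)) / np.sqrt(2)

def mc_trace(apply_M, m, n_probes, rng):
    """Monte-Carlo estimate of tr{M} via the sample mean of b' M b over probes b."""
    est = 0.0 + 0.0j
    for _ in range(n_probes):
        b = complex_pm_probe(m, rng)
        est += np.vdot(b, apply_M(b))   # b' M b (vdot conjugates its first argument)
    return est / n_probes

# Sanity check on a Hermitian positive semi-definite test matrix.
m = 200
B = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
M_mat = B.conj().T @ B
exact = np.trace(M_mat).real
approx = mc_trace(lambda v: M_mat @ v, m, 500, rng).real
print(exact, approx)   # the two values agree to within a few percent
```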
For MRI reconstruction from undersampled data, it is preferable to use nonquadratic regularizers to better reduce aliasing artifacts and noise in the reconstructed image [3], [5]. The reconstruction process associated with a nonquadratic regularizer is nonlinear, so henceforth we concentrate on nonlinear uλ.
In practice, for nonlinear uλ, the limit in (14) cannot be applied analytically except in some special cases where uλ is analytically tractable. So we make an approximation to (14) by dropping the limit and the 𝖤b{·} operations similar to [32, Eq. (17)] and use
tr{ΓJuλ(y)} ≈ (1/ε) b′Λ−1Γ[uλ(y + εΛb) − uλ(y)]     (21)
for a sufficiently small ε and one realization of a complex-valued random vector b satisfying the hypotheses of Theorem 2. The choice of ε represents a trade-off: for too small an ε-value, uλ may be insensitive to the perturbation εΛb in y + εΛb due to finite numerical precision of digital computers, so the Monte-Carlo estimate (21) could be unstable, i.e., it could have large variance. On the other hand, the approximation (21) may be inaccurate for large ε-values for nonlinear uλ.
The robustness of (21) to the choice of ε depends on several factors such as the magnitude of the elements of Γ (9), the energy of Λb, ‖Λb‖2², relative to that of y, ‖y‖2², numerical precision of the variables used in the implementation and the sensitivity of uλ(y) to changes in y; the approximation (21) must thus be validated for a given data model (1)–(2) and a reconstruction algorithm (3) individually. The matrix Λ in (21) may be chosen so as to scale the elements of Λb relative to those of y, essentially allowing different amounts of perturbation for different elements of y. This may be beneficial in some applications such as MRI where the elements of y span several orders of magnitude and relatively scaling the perturbation can help maintain the accuracy of the approximation (21) for a fixed, sufficiently small ε for varying y. Although ε is a user-provided parameter, we show in Section VI-B that ε can be varied over several decades without significantly affecting the results, so the proposed Monte-Carlo WSURE method can be applied without having to repeatedly adjust ε.
Using (21), we thus require only two evaluations of uλ for a given y and λ, i.e., the response of uλ to y and y + εΛb for estimating tr{ΓJuλ(y)} for a given λ. Our approach does not need the knowledge of the structure of uλ, so (21) is very flexible in its applicability. This is unlike the analytical development in our earlier work [27] that varied with the choice of uλ and also required storage and computation equivalent to 3 evaluations of uλ for a given λ as discussed in [27, Sec. VI-C].
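To make the black-box nature of (21) concrete, the sketch below assembles a Monte-Carlo WSURE value from two calls to a reconstruction routine. It is only an illustration under stated assumptions: Λ = IM, white noise with covariance Ω = σ2IM (so that Γ = σ2WA), and the WSURE expression in the form written in (12) above; the function and argument names are ours, not those of the paper's Matlab implementation.

```python
import numpy as np

def mc_wsure(u, y, A_mul, A_adj, sigma2, eps, rng, W_mul=None, tr_W=None):
    """Monte-Carlo WSURE for a black-box reconstruction y -> u(y), following the
    forms of (12) and (21) above with Lambda = I_M and noise covariance sigma2*I_M,
    so that Gamma = sigma2 * W * A.
      u     : callable, k-space data -> reconstructed image (the algorithm u_lambda)
      A_mul : callable applying A (image -> k-space)
      A_adj : callable applying A' (k-space -> image)
      W_mul : callable applying the Hermitian weighting W; identity if None
      tr_W  : trace of W; defaults to M (i.e., W = I_M)
    """
    M = y.size
    Wm = (lambda v: v) if W_mul is None else W_mul
    if tr_W is None:
        tr_W = M
    # Complex binary probe with E{b b'} = I_M and E{b b^T} = 0.
    b = (rng.choice([-1.0, 1.0], M) + 1j * rng.choice([-1.0, 1.0], M)) / np.sqrt(2)
    x0 = u(y)                   # reconstruction from the data
    x1 = u(y + eps * b)         # reconstruction from the perturbed data
    # (21): tr{Gamma J_u(y)} ~ (1/eps) b' Gamma [u(y + eps*b) - u(y)],
    # evaluated through c = Gamma' b = sigma2 * A'(W b), so that b' Gamma v = c' v.
    c = sigma2 * A_adj(Wm(b))
    trace_est = np.vdot(c, x1 - x0) / eps
    # (12): ||A u(y) - y||_W^2 - tr{W Omega} + 2 Re{tr{Gamma J_u(y)}}.
    resid = A_mul(x0) - y
    return np.real(np.vdot(resid, Wm(resid))) - sigma2 * tr_W + 2.0 * np.real(trace_est)
```

When sweeping λ, the probe b and the vector c can be generated once and reused for every λ-value, so each candidate λ costs two runs of the reconstruction algorithm.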
Theorem 2 is somewhat restrictive in its applicability since it is based on a Taylor expansion of uλ. In practice, uλ may involve weakly differentiable operators that do not admit (15). A typical instance is when ℓ1-type (including total variation) regularizers are used for reconstruction; uλ for these regularizers would involve (for certain implementations) a nonsmooth shrinkage operator that satisfies Lemma 1 but not (15). In such cases, it is possible to extend the scope of Theorem 2 to weakly differentiable functions similar to that documented in [32, Thm. 2]. However, this would require tedious derivations using measure theory and the theory of distributions [38, Ch. 6] and is beyond the scope of this paper. Instead, we investigate using (21) for uλ corresponding to ℓ1-type regularizers based on empirical validation with numerical experiments both in the paper (see Secs. VI-C–VI-D) and in a supplementary material.4
Finally, our Monte-Carlo result (14) precludes iterative / noniterative estimators that involve non-weakly-differentiable operators, e.g., the hard-thresholding operator [45], [32, Sec. V-B]; such operators do not satisfy the conditions of Lemma 1 and are not suitable for use with WSURE(λ).
V. Single-Coil Non-Cartesian MRI Reconstruction
The theoretical development so far has been general both in terms of the data model (1)–(2) and the reconstruction algorithm (3) due to the Monte-Carlo nature of our approach for estimating WMSETD(λ) (7). However, numerical validation of our approach needs to be done on a case-by-case basis for different applications and reconstruction algorithms. For illustration, we henceforth focus on single-coil non-Cartesian MRI ignoring field-inhomogeneity and relaxation effects as an extension to our previous work [27] on single-coil Cartesian MRI. In this case, a good model for noise in (1) is ξ ~ 𝒩(0, σ2IM), so that
Γ = σ2WA     (22)
in (9). For the purpose of reconstruction (3), we use the discretized linear model in (2). Unlike for Cartesian MRI [27], A is not a simple undersampled DFT matrix for non-Cartesian MRI. But for a suitable discretization, A in (2) can be implemented using nonuniform FFT (NUFFT) [46] for single-coil non-Cartesian MRI. We then formulate MRI reconstruction in (3) as
x̂ = uλ(y) ≜ arg minx { (1/2)‖y − Ax‖2² + λΨ(Rx) }     (23)
where x̂ ∈ ℂN is the reconstructed image, the parameter vector λ reduces here to a single scalar regularization parameter λ > 0, Ψ is a (possibly nonsmooth) convex regularizer, and R is a regularization operator, e.g., finite differences.
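For intuition about the system matrix A in (2) and (23), the sketch below builds a brute-force nonuniform DFT matrix for a small 2-D problem; it stands in for the NUFFT [46], which is what one would actually use at realistic sizes, and the toy trajectory and object are arbitrary placeholders.

```python
import numpy as np

def nudft2_matrix(kx, ky, nx, ny):
    """Dense nonuniform 2-D DFT matrix A (M x N, N = nx*ny); brute force,
    intended only for small problems. (kx, ky) are in cycles/sample."""
    gx, gy = np.meshgrid(np.arange(nx) - nx // 2, np.arange(ny) - ny // 2, indexing="ij")
    grid = np.stack([gx.ravel(), gy.ravel()])                           # 2 x N
    traj = np.stack([np.asarray(kx).ravel(), np.asarray(ky).ravel()])   # 2 x M
    return np.exp(-2j * np.pi * (traj.T @ grid))                        # M x N

# Toy radial trajectory and object, giving noisy data y = A x_true + noise as in (2).
rng = np.random.default_rng(0)
nx = ny = 32
radii = np.linspace(-0.5, 0.5, nx)
angles = np.pi * np.arange(8) / 8
kx = np.concatenate([radii * np.cos(a) for a in angles])
ky = np.concatenate([radii * np.sin(a) for a in angles])
A = nudft2_matrix(kx, ky, nx, ny)
x_true = np.zeros((nx, ny), dtype=complex)
x_true[10:22, 10:22] = 1.0
y = A @ x_true.ravel()
y += 0.01 * (rng.standard_normal(y.size) + 1j * rng.standard_normal(y.size))
```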
We used the split-Bregman (SB) scheme [47] for uλ in (23). At each iteration, the SB algorithm requires (among other simple update steps) “inverting” a matrix B ≜ A′A + μR′R [27, Eq. (32)] for some penalty parameter5 μ > 0 [27], [47]. For Cartesian MRI, this step can be achieved via FFTs [47, Sec. 5.2], [27, Sec. IV-F]. For non-Cartesian MRI, however, B is block-Toeplitz with Toeplitz blocks [49] and cannot be inverted noniteratively for large image sizes, i.e., for large N, so we used a preconditioned conjugate gradient (PCG) solver with a circulant preconditioner [48] that approximately matched B−1. We implemented A′A using the “embedding-Toeplitz-in-circulant” trick, i.e., A′A = Z′QZ, where Z is a PN × N zero-padding matrix and Q is an appropriate PN × PN circulant matrix [50] (P = 2 for 1D and P = 4 for 2D images). In all our experiments, we ran 5 PCG iterations for this step [27, Eq. (32)] and 100 iterations of the SB algorithm. These numbers ensured that the SB algorithm nearly converged in the sense that the normalized “distance” between two successive iterates ‖x(k) − x(k−1)‖2/‖x(k−1)‖2 was close to zero for a large range of λ-values.
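The embedding trick can be illustrated in 1-D (P = 2) with a generic Hermitian Toeplitz matrix standing in for A′A; the sketch below applies T to a vector using only FFTs of length 2N, which is the same idea the block-Toeplitz 2-D case uses block-wise. This is an illustrative Python sketch under those assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_times(first_col, x):
    """Apply the Hermitian Toeplitz matrix T defined by its first column to x,
    by embedding T in a 2N x 2N circulant matrix Q and zero-padding x (P = 2)."""
    n = first_col.size
    q_col = np.concatenate([first_col, [0.0], np.conj(first_col[:0:-1])])  # first column of Q
    xz = np.concatenate([x, np.zeros(n, dtype=complex)])                   # Z x (zero-padding)
    y = np.fft.ifft(np.fft.fft(q_col) * np.fft.fft(xz))                    # Q (Z x) via FFTs
    return y[:n]                                                           # Z' Q Z x

# Sanity check against an explicit Hermitian Toeplitz matrix.
rng = np.random.default_rng(0)
n = 64
t = rng.standard_normal(n) + 1j * rng.standard_normal(n)
t[0] = abs(t[0])                        # real diagonal, so T is Hermitian
T = toeplitz(t, np.conj(t))             # first column t, first row conj(t)
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
print(np.allclose(T @ x, toeplitz_times(t, x)))   # True
```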
VI. Experiments
A. Setup
In all our experiments, we focussed on selecting λ in (23) by minimizing the proposed Monte-Carlo estimate, WSURE(λ) (12), of WMSETD(λ) (7). We investigated two versions of WMSETD(λ) corresponding to W = IM and
WD ≜ D + αIM,     (24)
where D ≥ 0 is a diagonal matrix of suitable density compensation weights [2] for non-Cartesian trajectories and α > 0 is chosen so that W has a user-provided condition number κ(W); we set α such that κ(W) = 100. For W = IM, WMSETD(λ) can be interpreted as the predicted squared-error (similar to Predicted-MSE [11], [27]) that uniformly weighs the error at all sample locations in k-space. For W in (24), WMSETD(λ) favors errors at certain sample locations in k-space more than others depending upon D; typically, for non-Cartesian trajectories, the central k-space is more densely sampled than outer k-space, so D is designed to provide higher weighting for outer k-space samples than around central k-space [2].
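Assuming WD of the additive form written in (24), one simple way to pick α for a target condition number is sketched below; the 1-D radial-style weights at the end are just a placeholder, not the trajectory used in the experiments.

```python
import numpy as np

def make_WD_diag(d, kappa=100.0):
    """Diagonal of W_D = D + alpha*I_M built from density-compensation weights d >= 0,
    with alpha chosen so that cond(W_D) = (max(d) + alpha) / (min(d) + alpha) = kappa."""
    d = np.asarray(d, dtype=float)
    alpha = (d.max() - kappa * d.min()) / (kappa - 1.0)
    alpha = max(alpha, 0.0)      # already well conditioned: no shift needed
    return d + alpha

# Placeholder radial-style weights |k|, which emphasize outer k-space samples.
k_radius = np.abs(np.linspace(-0.5, 0.5, 512))
wD = make_WD_diag(k_radius, kappa=100.0)
```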
We implemented the SB algorithm and conducted all experiments in Matlab using double-precision variables. We used the conjugate phase (CP) reconstruction with suitable density compensation [2] (described later), A′Dy, to initialize the SB algorithm in all experiments.
In the proposed Monte-Carlo estimation scheme (21), we used b± ≜ (bℛ + ιbℐ)/√2, where bℛ, bℐ are independent binary random vectors6 whose elements are i.i.d. and assume either +1 or −1 with equal probability. It is easily verified that b± satisfies the hypotheses of Theorem 2. For simplicity, we used Λ = IM in (21) throughout. To avoid repeated computation of Γ′b in (21) for use in (12) with several λ-values, we precomputed and stored c ≜ Γ′b and used c′ in place of b′Γ in (21). In our simulations, we assumed that the noise variance σ2 was known for computing WSURE(λ) via (12) and (22), while for experiments with real MR data, we used an estimate computed as the empirical sample-variance of outer k-space data samples, as those are mostly dominated by noise. We compared λ-selection using the proposed WSURE(λ) (12) against that using generalized cross-validation for nonlinear algorithms (NGCV) [18], [27, Eq. (7)]:
NGCV(λ) ≜ (1/M)‖y − Auλ(y)‖2² / (1 − (1/M)ℛ{tr{AJuλ(y)}})²,     (25)
where we used the Monte-Carlo estimation procedure (21), with Γ replaced by A, to estimate the trace term in the denominator of NGCV(λ). Thus, NGCV(λ) has the same computation cost as the proposed WSURE(λ).
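The noise variance σ2 used to form Γ and tr{WΩ} for the real-data experiments was estimated from outer k-space samples as described above; a simple version of such an estimator is sketched below (the 20% outermost-radius cutoff is an arbitrary illustrative choice, not a value from the paper).

```python
import numpy as np

def estimate_sigma2(y, k_radius, outer_fraction=0.2):
    """Estimate the complex noise variance sigma^2 as the empirical sample
    variance of the outermost k-space samples, assumed noise dominated."""
    k_radius = np.asarray(k_radius)
    cutoff = np.quantile(k_radius, 1.0 - outer_fraction)
    outer = np.asarray(y)[k_radius >= cutoff]
    return float(np.mean(np.abs(outer - outer.mean()) ** 2))
```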
We experimented with 3 types of regularizers in (23): a smooth convex regularizer with Fair potential (FP) [51], [52] given by
ΨFP(Rx) ≜ Σl ΦFP(|[Rx]l|),     (26)
where ΦFP(x) ≜ x/δ − log(1 + x/δ), δ > 0, total variation (TV) regularizer
ΨTV(Rx) ≜ Σl=1,…,N √( Σp=1,…,P |[Rpx]l|² ),     (27)
where Rp computes finite differences along the pth direction and R ≜ [R1⊤ ⋯ RP⊤]⊤,
and an ℓ1-regularizer
Ψℓ1(Rx) ≜ ‖Rx‖1 ≜ Σl |[Rx]l|.     (28)
We used finite differences for R in (26)–(28) with P = 4 (horizontal, vertical, and two diagonal) directions in all experiments.
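One possible realization of the P = 4 finite-difference operator and the three regularizers is sketched below for a 2-D image; the periodic boundary handling via np.roll is an illustrative choice, and the regularizers follow the forms written in (26)–(28) above rather than the paper's Matlab code.

```python
import numpy as np

def finite_diff_4dir(x):
    """Finite differences of a 2-D image along horizontal, vertical and the two
    diagonal directions (P = 4), periodic boundaries; returns R_p x stacked as
    an array of shape (4, nx, ny), i.e., one realization of R x."""
    shifts = [(0, 1), (1, 0), (1, 1), (1, -1)]
    return np.stack([x - np.roll(x, s, axis=(0, 1)) for s in shifts])

def psi_fair(x, delta):
    """Fair-potential regularizer (26): sum of Phi_FP(|[Rx]_l|) with
    Phi_FP(t) = t/delta - log(1 + t/delta)."""
    t = np.abs(finite_diff_4dir(x))
    return float(np.sum(t / delta - np.log1p(t / delta)))

def psi_tv(x):
    """TV regularizer (27): isotropic grouping of the P directional differences."""
    d = finite_diff_4dir(x)
    return float(np.sum(np.sqrt(np.sum(np.abs(d) ** 2, axis=0))))

def psi_l1(x):
    """l1 regularizer (28): ||R x||_1."""
    return float(np.sum(np.abs(finite_diff_4dir(x))))
```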
It is possible to verify that the SB algorithm for uλ satisfies the hypotheses of Theorem 2 for ΨFP (26) because it is differentiable everywhere. However, Theorem 2 is not directly applicable when ΨTV or Ψℓ1 are involved in (23) as the corresponding uλ may not satisfy the hypotheses of Theorem 2. As discussed at the end of Section IV, we demonstrate using numerical experiments in Sections VI-C – VI-D (and in the supplementary material) that the proposed Monte-Carlo approach can be used for estimating WSURE(λ) for ΨTV and Ψℓ1 in (23). In all experiments, we minimized WSURE(λ) and NGCV(λ) as a function of λ.
B. Radial MRI Simulation
We used the analytical Shepp-Logan phantom [33] to simulate noisy data y of 40 dB SNR on a radial trajectory with 96 spokes each containing 512 samples (reduction factor ≈ 8). We used the approach in [53], [54] for selecting the density compensation weights D (24). We set Ψ = ΨFP (26) in (23) with a fixed value of δ.
1) Variance of WSURE
To analyze the accuracy of (21), we reconstructed 512 × 512 images of the Shepp-Logan phantom for three different values of λ, and correspondingly computed the standard deviation of Monte-Carlo WSURE(λ) over 25 realizations of b± for different ε. Fig. 1 plots the standard deviation of Monte-Carlo WSURE(λ) normalized by WMSETD(λ) as a function of ε. The plots indicate that ε < 10−7 consistently leads to increased variance. Moreover, the variance is approximately constant for ε ∈ [10−7, 10−3], indicating the robustness of the approximation in (21). We present similar results for varying SNR of data in the supplementary material.
Fig. 1.
Plots of standard deviation of WSURE(λ) normalized by WMSETD(λ) as a function of ε in (21) for (top) λ = λopt/10, (middle) λ = λopt, and (bottom) λ = 10λopt, where λopt is the MSE-optimal value of the regularization parameter. The curves correspond to the experiment in Section VI-B1 where WSURE(λ) was obtained by averaging (21) over 25 realizations of b±. As expected, the variance rapidly increases for smaller ε.
2) Selection of λ for different ε
We used only one realization of b± in (21) for computing WSURE(λ) (12). We varied ε and minimized MSE(λ) and WSURE(λ) with respect to λ for each ε. Fig. 2a plots the resulting λ-values, while Fig. 2b plots peak-SNR (PSNR), defined as PSNR(λ) ≜ 10 log10(N maxn|[xtrue]n|² / ‖uλ(y) − xtrue‖2²) dB,
as functions of ε for the various λ-selections. For ε ∈ [10−7, 10−2], WSURE(λ)-based λ-selection and corresponding PSNR(λ) are close to those of minimum MSE(λ) selection. We present similar results for varying SNR of data and the TV regularizer in the supplementary material.
Fig. 2.
Plots of (a) regularization parameter λ, and (b) PSNR(λ) as functions of ε for λ selected to minimize WSURE(λ) with W = IM and WD in (24) and MSE(λ) for the experiment described in Section VI-B2.
Based on Figs. 1–2 and corresponding results in the supplementary material, a suitable choice of ε appears to be in the range [10−7, 10−2]. However, from our experience, it is beneficial to be conservative with ε, so we recommend choosing ε ∈ [10−5, 10−2].
In the remaining experiments, we set ε = 10−4 and used only one realization of b± in (21) for computing WSURE(λ) (12) and NGCV(λ) (25).
3) Trends of WMSETD(λ) and WSURE(λ)
We reconstructed 512 × 512 images, and computed WSURE(λ), the oracles WMSETD(λ), and MSE(λ), for a range of λ-values. Fig. 3 plots WSURE(λ), WMSETD(λ), and MSE(λ) as a function of λ. WSURE(λ) captures the trend of WMSETD(λ) over the entire range of λ indicating the accuracy of the proposed Monte-Carlo scheme with a single realization of b±. Moreover, the minima of WMSETD(λ) and WSURE(λ) are all close to that of the true MSE(λ) indicating their reliability in selecting λ. In Fig. 4, we plot PSNR(λ) for a range of λ-values indicating the λ-selections made by NGCV(λ) and WSURE(λ). Both NGCV(λ) and WSURE(λ) led to the same λ-value close to the MSE-optimal one in this experiment. Fig. 5 presents 512 × 512 images reconstructed using λ-values that minimized NGCV(λ) and WSURE(λ). As expected, the respective reconstructed images, Fig. 5d–5f, closely resemble that obtained using the true minimum-MSE-λ in Fig. 5c. Finally, all the regularized reconstructed images, Fig. 5c–5f, have almost no radial-artifacts and display improved quality over CP reconstruction, Fig. 5b.
Fig. 3.
Simulation with the analytical Shepp-Logan phantom (Section VI-B3). Plots of MSE(λ), WMSETD(λ), WSURE(λ) versus λ for W = IM (left) and WD in (24) (right). Vertical dashed lines indicate minima of various curves. WSURE(λ) captures the trend of WMSETD(λ) in both plots and their minima are close to that of the true MSE(λ).
Fig. 4.
Simulation with the analytical Shepp-Logan phantom (Section VI-B3). Plot of PSNR(λ) versus λ. Vertical dashed lines indicate λ-selections made by various methods. WSURE(λ) and NGCV(λ) lead to near-PSNR-optimal reconstructions.
Fig. 5.
Simulation with the analytical Shepp-Logan phantom (Section VI-B3). (a) Discretized noise-free 512 × 512 phantom; (b) CP reconstruction (PSNR = 16.57 dB) has prominent streak artifacts and noise; Images reconstructed using ΨFP regularizer with λ selected to minimize (c) true MSE(λ) (λ = 4.3 × 10−7; PSNR = 33.81 dB); (d) NGCV(λ) (λ = 1.7 × 10−7; PSNR = 33.66 dB); (e) WSURE(λ) with W = IM (λ = 1.7 × 10−7; PSNR = 33.66 dB); (f) WSURE(λ) with WD in (24) (λ = 1.7 × 10−7; PSNR = 33.66 dB). In this experiment, WSURE and NGCV lead to the same λ-selections, see Fig. 4, thus resulting in similar visual quality comparable to the true MSE(λ)-based reconstruction in (c).
4) Varying Noise Level
We repeated the radial MRI simulation with varying levels of noise in the simulated data. We tabulate PSNR of reconstructed images obtained by minimizing WSURE(λ) and NGCV(λ) in Table I. WSURE(λ) was able to provide near-MSE-optimal λ-selections as indicated by the PSNR-values in Table I. NGCV also provided similar λ-selections in this experiment.
TABLE I.
Experiment in Section VI-B4: PSNR of images reconstructed using ΨFP with λ optimized by various methods for data with varying SNR.
PSNR (dB)
| SNR (dB) | MSE(λ) | NGCV(λ) | WSURE(λ), W = IM | WSURE(λ), W = WD |
|---|---|---|---|---|
| 20 | 28.60 | 28.60 | 28.60 | 28.60 |
| 30 | 32.26 | 32.26 | 32.26 | 32.26 |
| 40 | 33.81 | 33.66 | 33.66 | 33.66 |
5) Varying Reduction Factor
We repeated the radial MRI simulation for varying number of spokes of the radial trajectory corresponding to reduction factors of 2, 3, 4 and 5 and for fixed data-SNR of 40 dB. We tabulate PSNR of reconstructed images obtained by minimizing WSURE(λ) and NGCV(λ) for ΨTV in Table II. WSURE(λ) was able to provide near-MSE-optimal λ-selection as indicated by the PSNR-values in Table II. NGCV also provides similar λ-selections. This experiment illustrates that WMSETD(λ) [via WSURE(λ)] is a reasonable metric for optimizing λ for agreeable reduction factors for single-coil non-Cartesian MRI reconstruction.
TABLE II.
Experiment in Section VI-B5: PSNR of images reconstructed using ΨTV with λ optimized by various methods for data with varying number of samples (reduction factors).
PSNR (dB)
| Reduction Factor | MSE(λ) | NGCV(λ) | WSURE(λ), W = IM | WSURE(λ), W = WD |
|---|---|---|---|---|
| 5 | 28.41 | 28.37 | 28.34 | 28.34 |
| 4 | 28.58 | 28.54 | 28.54 | 28.51 |
| 3 | 28.81 | 28.81 | 28.81 | 28.78 |
| 2 | 28.98 | 28.94 | 28.98 | 28.94 |
C. GE Phantom MRI Scan
We scanned a GE resolution phantom using a 3T GE scanner with the following scan setting: gradient-echo sequence, TR = 300 ms, TE ≈ 2 ms, FOV = 15 cm, flip angle = 40°, slice thickness = 5 mm. We used a 2D variable density (VD) spiral k-space trajectory7 with 120 leaves each containing 841 samples. The readout duration per leaf was 3.3 ms, which is sufficiently short to make the assumption that any distortion due to field-inhomogeneity is negligible. We designed the VD spiral so that the central k-space was over-sampled by a factor of two and achieved Nyquist sampling at the periphery. We acquired 3 independent 2D data-sets using the same scan-setting and averaged them to obtain a relatively less-noisy data-set. We used D = diag{d} in CP reconstruction A′Dy, where the l-th element [d]l = |k1l + ιk2l| with k1l and k2l indexing the k-space sample locations in 2D. Then, we reconstructed a 256 × 256 reference image, xref in Fig. 6a, by running the SB algorithm on this data-set using (23) with ΨTV and λ ≈ 0 (such that λ ≪ ‖y‖2) in (23).
Fig. 6.
Experiment with real GE phantom data (Section VI-C). (a) Very mildly ΨTV-regularized 256 × 256 reference reconstruction from “fully-sampled” data averaged over 3 acquisitions; (b) CP reconstruction (from 2× undersampled data from a single acquisition) is strewn with spiral artifacts; Images reconstructed from 2× undersampled data (from a single acquisition) using ΨTV-regularizer with λ selected to minimize (c) NGCV(λ) (λ = 53); (d) WSURE(λ) with W = IM (λ = 37); (e) WSURE(λ) with WD in (24) (λ = 37). The λ-value selected by NGCV is slightly higher than those selected by WSURE. The resulting image (c) is thus slightly over-smoothed, although the over-smoothing is not visually apparent due to the piecewise-constant nature of the GE phantom. Moreover, some fine details present in (a) are lost in (c)–(e) owing both to undersampling and regularization.
Next, we simulated undersampling of one of the 3 datasets by retaining only 60 equally spaced interleaves (reduction factor = 2) and reconstructed 256 × 256 images with ΨTV in (23) by minimizing NGCV(λ) and WSURE(λ). The corresponding reconstructed images, Fig. 6c–6e, are devoid of spiral artifacts present in CP reconstruction, Fig. 6b, and closely resemble xref, Fig. 6a, in this experiment. These results also illustrate the reliability of the proposed Monte-Carlo scheme (21) employed in WSURE(λ) (12) and NGCV(λ) (25) for optimizing λ for ΨTV.
D. In-vivo Human Brain Imaging
We acquired 3 independent 3D VD stack-of-spiral data-sets (with the same 2D VD spiral trajectory described in Section VI-C) of a live human brain using a 3T GE scanner with the following scan setting: spoiled gradient-echo sequence, TR ≈ 18.5 ms, TE ≈ 2 ms, FOV = 25 cm, flip angle = 15°, slice thickness = 5 mm, number of slices = 24. We averaged these 3 data-sets and reconstructed a single 256 × 256 2D reference image (corresponding to Slice 14), xref in Fig. 7a, by running the SB algorithm with Ψℓ1 and λ ≈ 0 (such that λ ≪ ‖y‖2) in (23).
Fig. 7.
Experiment with real in-vivo human head data (Section VI-D); Slice 14. (a) Very mildly Ψℓ1-regularized 256 × 256 reference reconstruction from “fully-sampled” data averaged over 3 acquisitions; (b) CP reconstruction (from 2× undersampled data from a single acquisition) is strewn with spiral artifacts; Images reconstructed from 2× undersampled data (from a single acquisition) using Ψℓ1 -regularizer with λ selected to minimize (c) NGCV(λ) (λ = 3); (d) WSURE(λ) with W= IM (λ = 0.6); (e) WSURE(λ) with WD in (24) (λ = 0.3). In this experiment, NGCV(λ) resulted in a noticeably over-smoothed image due to a correspondingly higher value of λ, while WSURE(λ) still yielded results comparable to the reference (a). Some fine details in (a) are lost in (d), (e) that also contain minor residual spiral artifacts; these can be attributed to undersampling of k-space data.
We again undersampled one of the 3 data-sets (corresponding to Slice 14) with a reduction factor of 2 and reconstructed 256 × 256 2D images with Ψℓ1 in (23) by minimizing NGCV(λ) and WSURE(λ). In this experiment, NGCV yielded an over-smoothed result, Fig. 7c, that lacks fine details present in xref, Fig. 7a. However, WSURE(λ) led to images that exhibit noticeably better quality than the CP reconstruction, Fig. 7b, and the NGCV result, Fig. 7c, and closely resemble xref. These results indicate the robustness of the proposed Monte-Carlo WSURE(λ) for λ-selection and also its applicability for Ψℓ1 in (23). We obtained similar promising results (included in the supplementary material) for reconstructing other slices of this 3D volume.
VII. Discussion
As with other parameter tuning methods such as the discrepancy principle, L-curve, and generalized cross-validation, the proposed Monte-Carlo WSURE-method requires multiple evaluations of the reconstruction algorithm uλ for optimizing λ. For the purpose of illustration, we treated λ as a single scalar λ and optimized it by searching over a range of λ-values in our experiments. In practice, derivative-free optimization schemes can be used, e.g., golden-section search for optimizing the scalar λ or the Powell method [55] for optimizing the vector λ.
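As one example of the derivative-free search mentioned above, a golden-section minimizer over log10(λ) could look as follows; this is a generic sketch (the bracket, tolerance, and the function wsure_of are placeholders), with f standing for any criterion such as the Monte-Carlo WSURE(λ), assumed unimodal over the bracket as the curves in Fig. 3 suggest.

```python
import numpy as np

def golden_section_log(f, lam_lo, lam_hi, tol=0.05, max_iter=60):
    """Minimize f(lam) over [lam_lo, lam_hi] by golden-section search on log10(lam)."""
    phi = (np.sqrt(5.0) - 1.0) / 2.0                 # inverse golden ratio ~ 0.618
    a, b = np.log10(lam_lo), np.log10(lam_hi)
    c, d = b - phi * (b - a), a + phi * (b - a)
    fc, fd = f(10.0 ** c), f(10.0 ** d)
    for _ in range(max_iter):
        if abs(b - a) < tol:
            break
        if fc < fd:                                  # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - phi * (b - a)
            fc = f(10.0 ** c)
        else:                                        # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + phi * (b - a)
            fd = f(10.0 ** d)
    return 10.0 ** ((a + b) / 2.0)

# Example (hypothetical criterion): lam_star = golden_section_log(lambda lam: wsure_of(lam), 1e-9, 1e-3)
```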
WSURE(λ) with W = IM and WD (24) led to similar λ-selections in all our experiments both in the paper and in supplementary material. This is probably because there is only one degree of freedom, in terms of the scalar λ, in minimizing WSURE(λ). However, minimizing WSURE(λ) with respect to the vector λ may lead to different parameter selections depending upon whether W= IM or WD (24) in WMSETD(λ) (7) and WSURE(λ) (12). As an illustration, we repeated the experiment in Section VI-D, but used ΨFP (26) and optimized λ and δ of ΨFP jointly by exhaustive search. Optimizing WSURE(λ, δ) with W = IM led to (λ, δ) = (0.36, 0.31) × 10−7, while WSURE(λ, δ) with W = WD yielded (λ, δ) = (10, 6.7) × 10−7. While (λ, δ)-values are different in each case, the images reconstructed with these selections, Fig. 8, appear visually similar. This is probably because the ratio λ/δ that appears in λΨFP (23), (26) is approximately the same for these selections.
Fig. 8.
Experiment with real in-vivo human head data (Section VI-D); Slice 14. Images were reconstructed using ΨFP (26) with λ and δ chosen to minimize WSURE(λ, δ). Left image corresponds to W = IM, λ = 0.36 × 10−7, δ = 0.31 × 10−7. Right image corresponds to W = WD, λ = 10 × 10−7, δ = 6.7 × 10−7. Although the parameter selections are different, the resulting image quality is similar in both cases and is comparable to Figs. 7d, 7e.
Methods proposed in this paper can also tackle WSURE(λ) with arbitrary measurement-domain symmetric positive semi-definite weighting matrices W ≥ 0, e.g., a nondiagonal matrix such as that encountered in Projected-MSE(λ) [27, Sec. III-B] or a diagonal matrix with zeros and ones that corresponds to specifying a subset of k-space locations that contribute to WMSETD(λ) and WSURE(λ). One could also use a diagonal W with significantly larger weights for outer k-space samples so as to boost the error in high spatial frequencies when computing WMSETD(λ) and WSURE(λ). The proposed methods thus allow the user some freedom in choosing the type of k-space weighting W for the quadratic error WMSETD(λ). Finding suitable weighting matrices, WD, that yield “better” parameter selections than W = IM is interesting future work.
Theorem 2 is a key result in this work that forms the basis of our Monte-Carlo parameter selection method for single-coil MRI. While it demands strong differentiability hypotheses on uλ as presented in Section IV, numerical experiments in this paper and the accompanying supplementary material corroborate its applicability to complex-valued weakly differentiable uλ as well. Broadening the theoretical scope of Theorem 2 to such uλ along with a bias-variance analysis of the Monte-Carlo estimate (21) are interesting directions for future research. The bias-variance analysis especially is important from a practical perspective as it can help the user choose a suitable Λ and ε in (21) for a given reconstruction method uλ.
Another interesting extension of this work is application to parameter selection for parallel MRI. A straightforward way of doing this would be to directly apply the proposed Monte-Carlo WSURE approach individually for data from each coil of a multi-coil array and combine the resulting MR images for all coils via a sum-of-squares-type method. Alternatively, one could use a SENSE-based [3], [31], [56] approach: the data model (1), proposed metric (7) and Monte-Carlo WSURE (12), (21) are directly applicable to this case with A = FS [3], [9], where F represents the Fourier encoding matrix and S denotes the matrix of sensitivity maps for all coils. However, caution must be exercised in this case: in practice, S is usually unknown and needs to be estimated, e.g., from low-resolution images. Since WMSETD(λ) [and WSURE(λ)] involves S (via A), its appropriateness as an image-quality metric depends on the quality of the estimate, Ŝ, of S, and needs to be validated for a given Ŝ. One faces a similar issue with image-domain SURE-based methods for SENSE-type parallel MRI reconstruction [26].
To circumvent the dependence on S, we recently proposed a similar Monte-Carlo WSURE-based parameter tuning scheme [34]–[36] for some existing parallel MRI reconstruction methods such as ℓ1-SPIRiT [7] and DESIGN [8] (based on GRAPPA [57] and sparsity) that do not need explicit knowledge of coil-sensitivity maps S. Preliminary results [34]–[36] for undersampled Cartesian parallel MR data indicate that our WSURE-based approach is able to provide near-MSE-optimal selection of regularization parameters for these methods. We are currently investigating extensions to undersampled non-Cartesian parallel MRI.
VIII. Summary & Conclusion
Selection of proper regularization parameters λ is a crucial task in regularized MRI reconstruction from undersampled k-space data. We proposed a weighted squared-error measure in k-space, WMSETD(λ) (7), to assess MRI reconstruction quality and thereby adjust λ by minimizing it. The proposed WMSETD(λ) is amenable for estimation using Stein’s principle [20], [21] for Gaussian noise. The Stein-type estimate of WMSETD(λ), denoted by WSURE(λ), requires (in addition to the noise covariance matrix) computing the trace of a linear transformation of the Jacobian matrix of the MRI reconstruction algorithm uλ with respect to k-space data y. Our major contribution in this work is a Monte-Carlo scheme that enables the estimation of this trace without requiring knowledge of the internal workings of uλ. This feature thus enables its applicability to a wide range of reconstruction algorithms involving a variety of convex nonquadratic regularizers including total variation and ℓ1-regularization. The proposed Monte-Carlo method extends our previous result for denoising of real-valued images in [32, Thm. 2] to the case of inverse problems involving complex-valued images with application to MRI reconstruction.
Although WMSETD(λ) differs from the image-domain MSE(λ) that is not amenable for estimation in practical inverse problems [21], we demonstrated using experiments with undersampled synthetic and real MR data that WMSETD(λ), via its estimate WSURE(λ), is able to provide near-MSE-optimal selection of regularization parameters for single-coil non-Cartesian MRI reconstruction. These results both extend and corroborate our previous work [27] on similar parameter-tuning methods for single-coil undersampled Cartesian MRI reconstruction. Theoretical developments in this paper are fairly general and can be readily extended to handle parameter-tuning for (iterative) linear/nonlinear parallel MRI reconstruction from undersampled Cartesian/non-Cartesian k-space data.
Acknowledgments
This work was supported by the National Institutes of Health under Grant P01 CA87634 and by CPU donations from Intel.
Appendix A
Proof of Lemma 1
From the hypotheses of Lemma 1, it is clear that the probability density function of ξ is given by g(ξ) = K exp(−ξ′Ω−1ξ), where K > 0 is some normalization constant. It is easy to verify that g(ξ) satisfies
g(ξ)ξ′Ω−1 = −(1/2)[∇ξℛg(ξ) − ι∇ξℐg(ξ)],     (29)
where ∇ξℛ and ∇ξℐ are 1 × M gradient operators with respect to the real, ξℛ, and imaginary, ξℐ, parts of ξ, respectively. We start from the left-hand side of (10) and use (9), (29) and dξ ≜ dξℛ dξℐ to obtain
𝖤ξ{ℛ{ξ′WAuλ(y)}} = ℛ{∫ g(ξ) ξ′Ω−1Γuλ(y) dξ} = −(1/2) ℛ{∫ [∇ξℛg(ξ) − ι∇ξℐg(ξ)] Γuλ(y) dξ}.     (30)
In the sequel, m = 1,…, M and n = 1,…, N, respectively. We focus on the term involving ∇ξℛ in (30) and use integration-by-parts along with the fact that 𝖤ξ{|[Γuλ(y)]m|} < ∞, to get that [21, Thm. 1]
−(1/2) ∫ [∇ξℛg(ξ)] Γuλ(y) dξ = (1/2) ∫ g(ξ) Σm ∂[Γuλ(y)]m/∂yℛm dξ = (1/2) 𝖤ξ{Σm ∂[Γuλ(y)]m/∂yℛm},     (31)
where we have set ∂/∂ξℛm = ∂/∂yℛm since ytrue in (1) is a deterministic constant. Similarly,
(ι/2) ∫ [∇ξℐg(ξ)] Γuλ(y) dξ = −(ι/2) 𝖤ξ{Σm ∂[Γuλ(y)]m/∂yℐm}.     (32)
Combining (30)–(32) and using (11), we get that
𝖤ξ{ℛ{ξ′WAuλ(y)}} = ℛ{𝖤ξ{Σm (1/2)(∂[Γuλ(y)]m/∂yℛm − ι ∂[Γuλ(y)]m/∂yℐm)}} = 𝖤ξ{ℛ{tr{ΓJuλ(y)}}},
which is the desired result.
Footnotes
In some special cases such as where A has full column-rank or when uλ(y) belongs to the range-space of A′, it is possible to estimate MSE(λ) [21], [23], [27].
Since (1) and (2) are based on the same noise model, WMSE(λ) (6) and WMSETD(λ) (7) lead to functionally similar WSURE(λ) such as [27, Eq. (12)] and (12) in this paper. However, it is more apt to interpret WSURE(λ) as an unbiased estimate of WMSETD(λ) for practical imaging inverse problems.
Previously [27], we assumed that the observed data followed the discretized linear model (2) for single-coil MRI reconstruction with retrospective undersampling, so we focussed on WMSE(λ) (6) in [27]. However, since the model in (1) is more realistic than that in (2), we prefer WMSETD(λ) over WMSE(λ) in this work.
The supplementary material is available at http://ieeexplore.ieee.org.
We chose μ = μmin × 10−2 in all experiments, where μmin minimized the condition number of Ã′Ã + μR′R for a given R, where Ã′Ã is a circulant approximation to A′A [48].
Another choice is complex Gaussian b ~ 𝒩(0, IM).
An illustration of the VD spiral trajectory used in this experiment is presented in the supplementary material.
Contributor Information
Sathish Ramani, Email: sramani@umich.edu, Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, U.S.A..
Daniel S. Weller, Email: dweller@alum.mit.edu, Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, U.S.A..
Jon-Fredrik Nielsen, Email: jfnielse@umich.edu, fMRI Laboratory, University of Michigan, Ann Arbor, MI, U.S.A..
Jeffrey A. Fessler, Email: fessler@umich.edu, Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, U.S.A..
References
- 1. Fessler JA. Model-based image reconstruction for MRI. IEEE Sig. Proc. Mag. 2010 Jul; vol. 27(no. 4):81–89. doi: 10.1109/MSP.2010.936726.
- 2. Bydder M, Samsonov AA, Du J. Evaluation of optimal density weighting for regridding. Mag. Res. Im. 2007 Jun; vol. 25(no. 5):695–702. doi: 10.1016/j.mri.2006.09.021.
- 3. Ying L, Liu B, Steckner M, Wu G, Wu M, Li S-J. A statistical approach to SENSE regularization with arbitrary k-space trajectories. Mag. Res. Med. 2008 Aug; vol. 60(no. 2):414–421. doi: 10.1002/mrm.21665.
- 4. Block KT, Uecker M, Frahm J. Undersampled radial MRI with multiple coils. Iterative image reconstruction using a total variation constraint. Mag. Res. Med. 2007 Jun; vol. 57(no. 6):1086–1098. doi: 10.1002/mrm.21236.
- 5. Lustig M, Donoho D, Pauly JM. Sparse MRI: The application of compressed sensing for rapid MR imaging. Mag. Res. Med. 2007 Dec; vol. 58(no. 6):1182–1195. doi: 10.1002/mrm.21391.
- 6. Guerquin-Kern M, Haberlin M, Pruessmann KP, Unser M. A fast wavelet-based reconstruction method for magnetic resonance imaging. IEEE Trans. Med. Imag. 2011 Sep; vol. 30(no. 9):1649–1660. doi: 10.1109/TMI.2011.2140121.
- 7. Murphy M, Alley M, Demmel J, Keutzer K, Vasanawala S, Lustig M. Fast ℓ1-SPIRiT compressed sensing parallel imaging MRI: scalable parallel implementation and clinically feasible runtime. IEEE Trans. Med. Imag. 2012; vol. 31:1250–1262. doi: 10.1109/TMI.2012.2188039.
- 8. Weller DS, Polimeni JR, Grady L, Wald LL, Adalsteinsson E, Goyal VK. Denoising sparse images from GRAPPA using the nullspace method (DESIGN). Mag. Res. Med. 2012 Oct; vol. 68(no. 4):1176–1189. doi: 10.1002/mrm.24116.
- 9. Ramani S, Fessler JA. Parallel MR image reconstruction using augmented Lagrangian methods. IEEE Trans. Med. Imag. 2011 Mar; vol. 30(no. 3):694–706. doi: 10.1109/TMI.2010.2093536.
- 10. Karl WC. Regularization in image restoration and reconstruction. In: Bovik A, editor. Handbook of Image & Video Processing. 2nd edition. ELSEVIER; 2005. pp. 183–202.
- 11. Galatsanos NP, Katsaggelos AK. Methods for choosing the regularization parameter and estimating the noise variance in image restoration and their relation. IEEE Trans. Im. Proc. 1992 Jul; vol. 1(no. 3):322–336. doi: 10.1109/83.148606.
- 12. Hansen PC. Analysis of discrete ill-posed problems by means of the L-curve. SIAM Review. 1992; vol. 34(no. 4):561–580.
- 13. Hansen PC, O’Leary DP. The use of the L-curve in the regularization of discrete ill-posed problems. SIAM J. Sci. Comput. 1993; vol. 14(no. 6):1487–1503.
- 14. Lin F-H, Kwong KK, Belliveau JW, Wald LL. Parallel imaging reconstruction using automatic regularization. Mag. Res. Med. 2004 Mar; vol. 51(no. 3):559–567. doi: 10.1002/mrm.10718.
- 15. Craven P, Wahba G. Smoothing noisy data with spline functions. Numer. Math. 1979; vol. 31:377–403.
- 16. Reeves SJ, Mersereau RM. Optimal estimation of the regularization parameter and stabilizing functional for regularized image restoration. Opt. Engg. 1990; vol. 29(no. 5):446–454.
- 17. Girard DA. The fast Monte-Carlo Cross-Validation and CL procedures: Comments, new results and application to image recovery problems. Computation. Stat. 1995; vol. 10:205–231.
- 18. Girard DA. The fast Monte-Carlo Cross-Validation and CL procedures: Comments, new results and application to image recovery problems - Rejoinder. Computation. Stat. 1995; vol. 10:251–258.
- 19. Carew JD, Wahba G, Xie X, Nordheim EV, Meyerandb ME. Optimal spline smoothing of fMRI time series by generalized cross-validation. NeuroImage. 2003; vol. 18:950–961. doi: 10.1016/s1053-8119(03)00013-2.
- 20. Stein C. Estimation of the mean of a multivariate normal distribution. Ann. Stat. 1981 Nov; vol. 9(no. 6):1135–1151.
- 21. Eldar YC. Generalized SURE for exponential families: applications to regularization. IEEE Trans. Sig. Proc. 2009 Feb; vol. 57(no. 2):471–481.
- 22. Pesquet J-C, Benazza-Benyahia A, Chaux C. A SURE approach for digital signal/image deconvolution problems. IEEE Trans. Sig. Proc. 2009 Dec; vol. 57(no. 12):4616–4632.
- 23. Giryes R, Elad M, Eldar YC. The projected GSURE for automatic parameter tuning in iterative shrinkage methods. Applied and Computational Harmonic Analysis. 2011 May; vol. 30(no. 3):407–422.
- 24. Vonesch C, Ramani S, Unser M. Recursive risk estimation for non-linear image deconvolution with a wavelet-domain sparsity constraint; Proc. IEEE Intl. Conf. Img. Proc; 2008. pp. 665–668.
- 25. Luisier F, Blu T, Wolfe PJ. A CURE for noisy magnetic resonance images: Chi-square unbiased risk estimation. IEEE Trans. Im. Proc. 2012; vol. 21(no. 8):3454–3466. doi: 10.1109/TIP.2012.2191565.
- 26. Marin A, Chaux C, Pesquet J-C, Ciuciu P. Image reconstruction from multiple sensors using Stein’s principle. Application to parallel MRI; 2011 IEEE International Symposium on Biomedical Imaging: From Nano to Macro; Mar. 30–Apr. 2, 2011; pp. 465–468.
- 27. Ramani S, Liu Z, Rosen J, Nielsen J-F, Fessler JA. Regularization parameter selection for nonlinear iterative image restoration and MRI reconstruction using GCV and SURE-based methods. IEEE Trans. Im. Proc. 2012 Aug; vol. 21(no. 8):3659–3672. doi: 10.1109/TIP.2012.2195015.
- 28. Brankov JG, Yang Y, Wei L, El Naqa I, Wernick MN. Learning a channelized observer for image quality assessment. IEEE Trans. Med. Imag. 2009 Jul; vol. 28(no. 7):991–999. doi: 10.1109/TMI.2008.2008956.
- 29. Zhang L, C-Ménard C, Callet PL, Tanguy J-Y. A perceptually relevant channelized joint observer (PCJO) for the detection-localization of parametric signals. IEEE Trans. Med. Imag. 2012; vol. 31(no. 10):1875–1888. doi: 10.1109/TMI.2012.2205267.
- 30. Luong H, Goossens B, Aelterman J, Platisa L, Philips W. Optimizing image quality in MRI: On the evaluation of k-space trajectories for under-sampled MR acquisition. Proc. 4th Intl. Work. Qual. Mult. Exp. (QoMEX). 2012:25–26.
- 31. Pruessmann KP, Weiger M, Scheidegger MB, Boesiger P. SENSE: sensitivity encoding for fast MRI. Mag. Res. Med. 1999 Nov; vol. 42(no. 5):952–962.
- 32. Ramani S, Blu T, Unser M. Monte-Carlo SURE: A black-box optimization of regularization parameters for general denoising algorithms. IEEE Trans. Im. Proc. 2008 Sept.; vol. 17(no. 9):1540–1554. doi: 10.1109/TIP.2008.2001404.
- 33. Guerquin-Kern M, Lejeune L, Pruessmann KP, Unser M. Realistic analytical phantoms for parallel magnetic resonance imaging. IEEE Trans. Med. Imag. 2012 Mar; vol. 31(no. 3):626–636. doi: 10.1109/TMI.2011.2174158.
- 34. Weller DS, Ramani S, Nielsen J-F, Fessler JA. SURE-based parameter selection for parallel MRI reconstruction using GRAPPA and sparsity; Proc. IEEE Intl. Symp. Biomed. Imag; 2013. to appear.
- 35. Weller DS, Ramani S, Nielsen J-F, Fessler JA. Automatic ℓ1-SPIRiT regularization parameter selection using Monte-Carlo SURE. Proc. Ann. Mtg. Intl. Soc. Mag. Res. Med. 2013. to appear.
- 36. Weller DS, Ramani S, Nielsen J-F, Fessler JA. Monte-Carlo SURE-based parameter selection for parallel magnetic resonance imaging reconstruction. Mag. Res. Med. doi: 10.1002/mrm.24840. submitted.
- 37. Brandwood DH. A complex gradient operator and its application in adaptive array theory. IEE Proceedings H: Microwaves, Optics and Antennas. 1983 Feb; vol. 130(no. 1):11–16.
- 38. Lieb EH, Loss M. Analysis. 2nd, revised edition. Providence, RI, USA: American Mathematical Society; 2001.
- 39. Van den Bos A. Complex gradient and Hessian. IEE Proc. Vis. Im. Sig. Proc. 1994 Dec; vol. 141(no. 6):380–382.
- 40. Girard DA. A fast ’Monte-Carlo Cross-Validation’ procedure for large least squares problems with noisy data. Numer. Math. 1989; vol. 56:1–23.
- 41. Hutchinson MF. A stochastic estimator of the trace of the influence matrix for Laplacian smoothing splines. Commun. Stat. Simul. Comput. 1989; vol. 18(no. 3):1059–1076.
- 42. Bai Z, Fahey M, Golub G. Some large-scale matrix computation problems. J. Comput. Appl. Math. 1996; vol. 74:71–89.
- 43. Dong S, Liu K. Stochastic estimation with z2 noise. Phys. Lett. B. 1994; vol. 328:130–136.
- 44. Noll DC, Fessler JA, Sutton BP. Conjugate phase MRI reconstruction with spatially variant sample density correction. IEEE Trans. Med. Imag. 2005 Mar; vol. 24(no. 3):325–336. doi: 10.1109/tmi.2004.842452.
- 45. Hurvich CM, Tsai C-L. A crossvalidatory AIC for hard wavelet thresholding in spatially adaptive function estimation. Biometrika. 1998; vol. 85(no. 3):701–710.
- 46. Fessler JA, Sutton BP. Nonuniform fast Fourier transforms using min-max interpolation. IEEE Trans. Sig. Proc. 2003 Feb; vol. 51(no. 2):560–574.
- 47. Goldstein T, Osher S. The split Bregman method for L1-regularized problems. SIAM J. Imaging Sci. 2009; vol. 2(no. 2):323–343.
- 48. Chan TF. An optimal circulant preconditioner for Toeplitz systems. SIAM J. Sci. Stat. Comp. 1988 Jul; vol. 9(no. 4):766–771.
- 49. Yagle AE. A fast algorithm for Toeplitz-block-Toeplitz linear systems. Proc. IEEE Intl. Conf. Acoust., Sp., Sig. Proc. 2001; vol. 3:1929–1932.
- 50. Chan RH, Ng MK. Conjugate gradient methods for Toeplitz systems. SIAM Review. 1996 Sept.; vol. 38(no. 3):427–482.
- 51. Fessler JA, Booth SD. Conjugate-gradient preconditioning methods for shift-variant PET image reconstruction. IEEE Trans. Im. Proc. 1999 May; vol. 8(no. 5):688–699. doi: 10.1109/83.760336.
- 52. Ramani S, Fessler JA. A splitting-based iterative algorithm for accelerated statistical X-ray CT reconstruction. IEEE Trans. Med. Imag. 2012 Mar; vol. 31(no. 3):677–688. doi: 10.1109/TMI.2011.2175233.
- 53. Lauzon ML, Rutt BK. Effects of polar sampling in k-space. Mag. Res. Med. 1996 Dec; vol. 36(no. 6):940–949. doi: 10.1002/mrm.1910360617.
- 54. Joseph PM. Sampling errors in projection reconstruction MRI. Mag. Res. Med. 1998 Sep; vol. 40(no. 3):460–466. doi: 10.1002/mrm.1910400317.
- 55. Powell MJD. An efficient method for finding the minimum of a function of several variables without calculating derivatives. The Computer Journal. 1964; vol. 7(no. 2):155–162.
- 56. Pruessmann KP, Weiger M, Börnert P, Boesiger P. Advances in sensitivity encoding with arbitrary k-space trajectories. Mag. Res. Med. 2001 Oct; vol. 46(no. 4):638–651. doi: 10.1002/mrm.1241.
- 57. Griswold MA, Jakob PM, Heidemann RM, Nittka M, Jellus V, Wang J, Kiefer B, Haase A. Generalized autocalibrating partially parallel acquisitions (GRAPPA). Mag. Res. Med. 2002 Jun; vol. 47(no. 6):1202–1210. doi: 10.1002/mrm.10171.