Author manuscript; available in PMC: 2016 Apr 1.
Published in final edited form as: Inverse Probl. 2015 Apr 1;31(4):045008. doi: 10.1088/0266-5611/31/4/045008

Parallel Magnetic Resonance Imaging as Approximation in a Reproducing Kernel Hilbert Space

Vivek Athalye 1, Michael Lustig 1, Martin Uecker 1
PMCID: PMC4429804  NIHMSID: NIHMS683657  PMID: 25983363

Abstract

In Magnetic Resonance Imaging (MRI), data samples are collected in the spatial frequency domain (k-space), typically by time-consuming line-by-line scanning on a Cartesian grid. Scans can be accelerated by simultaneous acquisition of data using multiple receivers (parallel imaging), and by using more efficient non-Cartesian sampling schemes. To understand and design k-space sampling patterns, a theoretical framework is needed to analyze how well arbitrary sampling patterns reconstruct unsampled k-space using receive coil information. As shown here, reconstruction from samples at arbitrary locations can be understood as approximation of vector-valued functions from the acquired samples and formulated using a Reproducing Kernel Hilbert Space (RKHS) with a matrix-valued kernel defined by the spatial sensitivities of the receive coils. This establishes a formal connection between approximation theory and parallel imaging. Theoretical tools from approximation theory can then be used to understand reconstruction in k-space and to extend the analysis of the effects of sample selection beyond the traditional image-domain g-factor noise analysis to both noise amplification and approximation errors in k-space. This is demonstrated with numerical examples.

Keywords: Reproducing Kernel Hilbert Space, Magnetic Resonance Imaging, Image Reconstruction, Approximation, Inverse Problems, Non-Cartesian Sampling

1 Introduction

Magnetic Resonance Imaging (MRI) is a non-invasive tomographic imaging technique with many applications in medicine and bio-medical research. The scanner produces images depicting spatial distributions of magnetization by scanning the spatial frequency domain (k-space). Because measurements require applying gradient fields to directly modulate the magnetization being imaged, MRI is inherently serial and thus time-consuming and susceptible to motion artifacts. Parallel MRI accelerates the measurement process by simultaneously using multiple receivers placed around the subject to gain additional information about the spatial origin of the signal. This spatial encoding due to the coils’ distinct spatial sensitivities is complementary to gradient-based Fourier encoding. Consequently, the signal can be restored from data which has been sampled below the Nyquist limit[1, 2, 3].

Most naturally, image reconstruction can be formulated as an inverse problem as in SENSE, where the image is estimated from the acquired data by solving a linear signal model [4, 2, 5, 6]. The conditioning of this system depends critically on the number and positions of the acquired samples. Although it uses local approximations in k-space, the earlier SMASH method is based on the same fundamental principles as SENSE. A classical review of parallel imaging methods and a discussion of the relationship between SENSE and SMASH can be found in [7]. The reconstruction from non-Cartesian (scattered) samples can also be formulated as an inverse problem and can be solved exactly in a continuous Hilbert space formulation [8, 9] or, more commonly, using discretization and efficient gridding techniques [10, 11, 12, 13]. Non-Cartesian sampling can be combined with SENSE [5] and other advanced reconstruction algorithms (see [14, 15, 16, 17] for recent examples).

Because the reconstruction problem is well-posed for sufficiently dense and regular sampling, a different two-step reconstruction strategy is applied in certain k-space methods. Using the acquired samples, a vector-valued function is approximated on a Nyquist-sampled grid in k-space, which is then transformed to the image domain for all coils and only then combined into a final image. This strategy is used in coil-by-coil SMASH [18], GRAPPA, and similar methods, which have first been formulated for sampling on a grid and later extended to non-Cartesian sampling in various ways [19, 20, 21, 22, 23, 24, 25, 26, 15].

Previously, we have shown how both types of reconstruction constrain the data to a subspace spanned by the spatial sensitivity profiles of the receive coils [27]. In this work, we extend this idea by showing that parallel imaging from arbitrary - Cartesian as well as non-Cartesian - samples in k-space can be expressed formally as an approximation problem in a Reproducing Kernel Hilbert Space (RKHS) [28, 29]. An RKHS is a Hilbert space of functions in which the point-evaluation functionals are continuous, i.e. they are compatible with the norm of the Hilbert space. This is a natural and intuitive property, meaning that functions close in norm are also close at each point, and it provides the additional structure necessary to describe sampling in a Hilbert space setting. An RKHS is uniquely characterized by its reproducing kernel. In parallel MRI, the reproducing kernel is determined by the coil sensitivities and can be derived directly from the basic signal equation. While some related ideas can be found in the literature, e.g. GRAPPA has been related to the geostatistical framework of Kriging [30] and the "kernel trick" known from support vector machines has been used to develop a non-linear variant of GRAPPA [31], a full mathematical description has so far not been available. This gap is closed in the present work by formulating parallel imaging in the framework of approximation theory. It not only provides an optimal interpolation formula as a (theoretical) basis for image reconstruction in parallel MRI, but also enables a much deeper understanding of the reconstruction problem itself. In particular, the power function [32] and Frobenius norm maps, which arise naturally from the RKHS formulation, give local bounds of the approximation error and local information about noise amplification in multi-coil k-space or - with a small extension - directly for the Fourier transform of the image.
Both functions depend on the sample points but not on the data and can be used to study the effect of sample selection on the reconstruction error. This is demonstrated with numerical examples.

2 Theory

2.1 Overview

An overview of the theory developed in the following is shown in Figure 1. Please refer to Appendix 7.1 for some comments about the notation and to Table 1 for a list of important symbols.

Figure 1.

Figure 1

Image reconstruction for parallel MRI as approximation in a reproducing kernel Hilbert space.

Table 1.

Important symbols.

x, y          k-space positions
S             finite set of sample locations
x_k, y_l      indexed sample locations in S
i, j, n       indices used for vector components (channels)
H             Hilbert space of vector-valued k-space functions
K_{x,i}(y)    representer of evaluation of channel i at x
K_ij(x, y)    matrix-valued kernel
M_{ki,lj}     kernel matrix
u_{k,i}(x)    cardinal functions
⟨·, ·⟩_H      inner product in H
P_n(x)        power function for channel n
Ω             the field of view (FOV)
L²(Ω, ℂ)      square-integrable functions on Ω
r             image-domain position
c_j(r)        coil sensitivity map for channel j
ε_{x,j}(r)    encoding functions
ρ(r)          image
(·)̄           complex conjugate

We consider parallel imaging as an inverse problem with a linear forward model F : I → D, which maps from a Hilbert space of images I to a data space D. The range of F is the space of ideal signals H ⊂ D. From the data space, a set of samples y ∈ Y is acquired, as described by a sampling operator T. The general setting is then:

$$I \xrightarrow{\,F\,} H \subset D \xrightarrow{\,T\,} Y$$

Here, G = TF. A general formulation of linear inverse problems with discrete data can be found in [8]. The inverse problem can be solved by computing a regularized least-squares solution, minimizing a functional

$$\rho_\alpha = G_\alpha y = \operatorname*{argmin}_{\hat\rho}\; \|G\hat\rho - y\|_2^2 + \alpha R(\hat\rho) \qquad (1)$$

with a suitable regularization term R. In the limit α → 0, this yields the minimum-norm least-squares (MNLS) solution. In general, the mapping F is injective and has a stable inverse defined on its range H. As an alternative to solving the inverse problem directly, one can approximate a function f̂ ∈ H from the data y ∈ Y and obtain a solution by computing ρ̂ = F⁻¹f̂.
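
The α → 0 limit of Eq. 1 can be checked numerically. The sketch below (not from the paper; sizes and values are arbitrary) uses the quadratic regularizer R(ρ̂) = ‖ρ̂‖² and shows that the Tikhonov solution approaches the MNLS solution obtained with a pseudo-inverse:

```python
import numpy as np

def tikhonov_mnls(G, y, alpha):
    """Regularized least-squares solution of Eq. 1 with R = ||.||^2,
    computed via the normal equations (G^H G + alpha I) rho = G^H y."""
    A = np.conj(G.T) @ G + alpha * np.eye(G.shape[1])
    return np.linalg.solve(A, np.conj(G.T) @ y)
```

For small α this agrees with the pseudo-inverse solution G⁺y, i.e. the MNLS solution.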

2.2 Parallel Magnetic Resonance Imaging

We begin with the standard setup of the parallel imaging problem. For concreteness, we consider two-dimensional imaging. Let the magnetization image ρ : ℝ² → ℂ belong to the space L²(Ω, ℂ) of square-integrable functions with compact support on a subset Ω of the plane called the field of view (FOV). The forward operator F maps magnetization images to smooth signals in k-space:

$$F : L^2(\Omega, \mathbb{C}) \to C(\mathbb{R}^2, \mathbb{C}^N) \qquad (2)$$
$$\rho \mapsto f^\rho = F\rho \qquad (3)$$

Each vector component, which is the signal of one of N receive coils, is given by the signal equation:

$$f_j^\rho(x) = \int_\Omega \mathrm{d}r\, \rho(r)\, c_j(r)\, e^{-2\pi\sqrt{-1}\, x \cdot r}, \qquad 1 \le j \le N \qquad (4)$$

In words, the jth component of the vector-valued function fρ is the Fourier transform of coil j’s image. The k-space signals are smooth because they are Fourier transforms of compactly supported functions. The coil sensitivities cj are generally smooth, complex-valued functions in image space describing the spatial sensitivity profiles of each receiver coil. In areas where all coil sensitivities vanish simultaneously, no information about the image can be recovered. Without loss of generality, we will simply assume that the definition of Ω excludes such areas. Using the inner product definition

$$\langle \rho, \sigma \rangle_{L^2} = \int_\Omega \mathrm{d}r\, \overline{\rho(r)}\, \sigma(r) \qquad (5)$$

and the encoding functions ε_{x,j}(r) = e^{2π√−1 x·r} \overline{c_j(r)} [2], we can write f_j^ρ(x) = ⟨ε_{x,j}, ρ⟩_{L²}. During the measurement process, samples at a finite number of locations x_k ∈ S ⊂ ℝ² are collected. Samples can be assumed to be corrupted by i.i.d. complex Gaussian white noise. Although in practice receive channels might have different noise levels and correlations, these can be removed by a prewhitening step and a change-of-variable transformation of the coil sensitivities [5].
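
For illustration, the signal equation (Eq. 4) can be evaluated on a discretized image by direct summation over a pixel grid. The following sketch is our own toy code (pixel positions normalized to the unit square are an assumption); with a constant coil sensitivity it reduces to the 2D DFT:

```python
import numpy as np

def forward_signal(rho, coils, kpoints):
    """Evaluate the parallel-MRI signal equation (Eq. 4) at arbitrary
    k-space points by direct summation over a pixel grid.

    rho     : (ny, nx) complex image
    coils   : (N, ny, nx) coil sensitivity maps c_j(r)
    kpoints : (M, 2) k-space positions x (in cycles per FOV)
    returns : (M, N) signal samples f_j(x)
    """
    ny, nx = rho.shape
    # pixel positions r normalized to [0, 1) in each dimension
    ry, rx = np.meshgrid(np.arange(ny) / ny, np.arange(nx) / nx, indexing="ij")
    out = np.empty((len(kpoints), coils.shape[0]), dtype=complex)
    for m, (kx, ky) in enumerate(kpoints):
        phase = np.exp(-2j * np.pi * (kx * rx + ky * ry))
        for j in range(coils.shape[0]):
            out[m, j] = np.sum(rho * coils[j] * phase)
    return out
```

At integer k-space positions and with unit sensitivities, the sums coincide with the coefficients of `np.fft.fft2`, which makes the convention easy to verify.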

2.3 Reproducing Kernel Hilbert Space

The vector-valued functions considered in parallel imaging have the particular structure specified in Equation 4. We now encapsulate this structure within a reproducing kernel Hilbert space H with a matrix-valued kernel [33, 34].

Let X be a set of points, and H a Hilbert space of vector-valued functions on X. H is an RKHS if the point-evaluation functionals L_x : H → ℂ^N, f ↦ f(x) are continuous for all x ∈ X. From the Riesz representation theorem, it then follows that there are unique functions K_{x,i} ∈ H for each x ∈ X and each vector component 1 ≤ i ≤ N such that f_i(x) = ⟨K_{x,i}, f⟩_H. As before, we define the inner product ⟨·, ·⟩_H in H to be conjugate linear in the first argument. The functions K_{x,i} are called representers of evaluation. They span H, but do not generally form a basis. If a sequence of functions f^n converges to f in the Hilbert space norm, then for any x ∈ X and vector component 1 ≤ i ≤ N, we have

$$|f_i^n(x) - f_i(x)| = |\langle K_{x,i}, f^n - f \rangle_H| \le \|K_{x,i}\|_H\, \|f^n - f\|_H. \qquad (6)$$

This means that convergence in norm implies point-wise convergence and that local bounds can be obtained using information about the representers.

The structure of the space can then be described by a positive-definite matrix-valued kernel K : X × X → ℂ^{N×N} with elements K_ij(x, y) = ⟨K_{x,i}, K_{y,j}⟩_H. An RKHS is uniquely characterized by its positive-definite kernel, and to every positive-definite kernel there corresponds a unique RKHS. From the definition of the representers of evaluation it follows that K_ij(x, y) = K^i_{y,j}(x), the ith component of the vector-valued function K_{y,j} that evaluates the jth component of a function at y. Thus, the reproducing property holds:

$$f_j(y) = \langle K_{\cdot j}(\cdot, y), f \rangle_H \qquad \forall\, y \in X \qquad (7)$$

Applying this framework to parallel MRI, the space H is the range of F, i.e. it consists of the ideal signals f on ℝ² given by Equation 4 for all possible images ρ ∈ L²(Ω, ℂ). We can assume that at least one of the coil sensitivities c_j is non-zero at each point r ∈ Ω. Then F can be inverted by applying an inverse Fourier transform (practical computation can be done on a Nyquist-sampled grid) and dividing by a non-zero c_i. This enables the following definition of an inner product in H:

$$\langle f, g \rangle_H := \langle F^{-1}f, F^{-1}g \rangle_{L^2} \qquad (8)$$

With this inner product, we formulate our main result:

Theorem

The space H of ideal multi-channel signals in MRI is an RKHS with kernel:

$$K_{ij}(x, y) = \langle \varepsilon_{x,i}, \varepsilon_{y,j} \rangle_{L^2} = \int_\Omega \mathrm{d}r\, e^{-2\pi\sqrt{-1}\,(x - y)\cdot r}\, c_i(r)\, \overline{c_j(r)} \qquad (9)$$
Proof

We must show two properties: for each y ∈ ℝ² and 1 ≤ j ≤ N, the K_{y,j} must lie in H, and ⟨K_{y,j}, f⟩_H = f_j(y). We observe that the kernel can be obtained by applying F to the encoding functions ε_{y,j}:

$$(F\varepsilon_{y,j})_i(x) = \langle \varepsilon_{x,i}, \varepsilon_{y,j} \rangle_{L^2} = K_{ij}(x, y) = K^i_{y,j}(x) \qquad (10)$$

Thus, K_{y,j} is in the range of F and therefore in H. The second property follows directly from the definition of the inner product. For every f^ρ = Fρ, it follows that:

$$\langle K_{y,j}, f^\rho \rangle_H = \langle F\varepsilon_{y,j}, F\rho \rangle_H = \langle \varepsilon_{y,j}, \rho \rangle_{L^2} = f_j^\rho(y) \qquad (11)$$

In words, K_ij(x, y) captures the similarity between the encoding functions ε_{x,i} and ε_{y,j}. Note that K_ij(x, y) = K_ij(x + Δ, y + Δ), so the kernel is shift-invariant. It should be noted that there is some freedom in the choice of the inner product: a different inner product leads to a different kernel. The inner product used here corresponds to the inner product of the Hilbert space of images. As shown later, the final reconstruction is then optimal with respect to this norm.

Having characterized the multi-channel k-space of ideal signals as an RKHS with the shift-invariant kernel given in Equation 9, we can now proceed to describe sampling and reconstruction in this framework.
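
Since the kernel depends only on x − y, it can be tabulated on a grid from discretized coil sensitivities with one FFT per channel pair, because Eq. 9 is the Fourier transform of the product c_i \overline{c_j}. A minimal sketch (the array conventions are our own assumptions, not the paper's implementation):

```python
import numpy as np

def kernel_table(coils):
    """Tabulate the matrix-valued kernel of Eq. 9 on a discrete grid.
    Because K_ij(x, y) depends only on x - y, it suffices to Fourier
    transform the pointwise products c_i(r) * conj(c_j(r)).

    coils : (N, ny, nx) coil sensitivity maps
    returns table[i, j, dy, dx] ~ K_ij at integer k-space offset (dy, dx)
    """
    prod = coils[:, None] * np.conj(coils[None, :])   # (N, N, ny, nx)
    return np.fft.fft2(prod)                           # FFT over the image axes
```

Hermitian symmetry, K_ij(Δ) = \overline{K_ji(−Δ)}, and a real non-negative diagonal at zero offset (Σ_r |c_i(r)|²) follow directly from this construction.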

2.4 Sampling and Reconstruction

Sampling and reconstruction from arbitrary samples can be described in the framework of approximation theory (see [29] as general reference). Because it is usually formulated for the scalar case only, we summarize the main results using our notation.

Samples are collected at a finite number of locations x_k ∈ S ⊂ ℝ². Ideal samples of f ∈ H are then given by the inner-product evaluations f_i(x_k) = ⟨K_{x_k,i}, f⟩_H for all i ∈ 1, ···, N and x_k ∈ S. Assuming no measurement error, a solution to the reconstruction problem is usually defined as the function f̂ ∈ H of smallest norm which interpolates f at the sample locations, i.e. f̂(x_k) = f(x_k). We first define a measurement subspace H_S ⊂ H that is spanned by the K_{x_l,j} for all x_l ∈ S, 1 ≤ j ≤ N. Thus, any function f ∈ H_S can be represented as

$$f = \sum_{l=1}^{|S|} \sum_{j=1}^{N} a_{l,j}\, K_{x_l,j}. \qquad (12)$$

This subspace turns out to be the right space for interpolation: in the absence of errors, all functions in H_S can be recovered exactly (see below), while the functions f ∈ H_S^⊥ are those for which the samples provide no information, i.e. f(S) = {0} (by Eq. 7). The minimum-norm interpolant of f ∈ H for the samples S is the projection P_S f of f onto H_S. To compute this projection, we can directly solve for coefficients a_{l,j} such that P_S f(x_k) = f(x_k) for x_k ∈ S. Evaluating Eq. 12 at the sample locations x_k by taking the inner product with the representers of evaluation K_{x_k,i} yields a finite number of linear equations:

$$f_i(x_k) = \langle K_{x_k,i}, P_S f \rangle_H \qquad (13)$$
$$= \sum_{l=1}^{|S|} \sum_{j=1}^{N} a_{l,j} \langle K_{x_k,i}, K_{x_l,j} \rangle_H \qquad (14)$$
$$= \sum_{l=1}^{|S|} \sum_{j=1}^{N} a_{l,j}\, \underbrace{K_{ij}(x_k, x_l)}_{M_{ki,lj}}. \qquad (15)$$

The kernel matrix M ∈ ℂ^{|S|N × |S|N} is constructed by evaluating the kernel at all sample positions x_k ∈ S, i.e. M_{ki,lj} = K_ij(x_k, x_l). Here, the index pairs k,i and l,j have each been combined into a single matrix index. Although the kernel matrix is not guaranteed to have full rank, this equation has a solution for ideal samples, which is known to exist in H_S.

Next, we formulate a generic solution which only depends on the sample locations and not on the data. For each point x, interpolation weights called cardinal functions uk,i(x) can then be computed by solving a linear system of equations:

$$\sum_{k=1}^{|S|} \sum_{i=1}^{N} M_{ki,lj}\, u_{k,i}(x) = K_{x_l,j}(x) \qquad (16)$$

Because K^i_{x_l,j}(x) = K_ij(x, x_l), not only the kernel matrix M but also the right-hand side of this equation can be obtained directly by evaluation of the kernel (Eq. 9). Intuitively, the cardinal functions are coefficients which interpolate the representers K_{x_l,j} with x_l ∈ S exactly at arbitrary x from their values at the sample locations x_k ∈ S encoded in the kernel matrix M_{ki,lj} = K^i_{x_l,j}(x_k). Because this set of representers spans H_S, they can then be used to interpolate all functions in H_S exactly (see Appendix 7.2):

$$\hat f(x) = \sum_{k=1}^{|S|} \sum_{i=1}^{N} f_i(x_k)\, u_{k,i}(x). \qquad (17)$$

This interpolation computes the same solution as solving for the coefficients in Eq. 15. An alternative derivation obtains the cardinal functions as the conjugate coefficients of the projection P_S K_{x,i} of an arbitrary representer K_{x,i} from H (i.e. for arbitrary x) onto H_S (see Appendix 7.3).

In principle, parallel MRI reconstruction can be performed using this formula: the interpolation formula can be used to compute samples on a Nyquist-sampled grid, followed by an FFT to compute an image for each coil. Evaluating Equation 16 requires the solution of a linear system of size |S|N × |S|N for each point of the Nyquist-sampled grid. Although all solutions can be obtained efficiently after computation of the pseudo-inverse (or Cholesky decomposition) of the kernel matrix, this is still too expensive for image reconstruction in clinical applications due to the size of this matrix. Nevertheless, more efficient practical algorithms such as GRAPPA are based on a local approximation of Eq. 17.
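
To make Eqs. 15-17 concrete, the following toy sketch discretizes the image to a few pixels so that the kernel can be evaluated by direct summation; all sizes, positions, and sensitivities are made up for illustration. Because here |S|N exceeds the number of pixels, H_S = H and the minimum-norm interpolant recovers the ideal signal exactly at new k-space points:

```python
import numpy as np

def make_kernel(coils, positions):
    """K(x, y) -> (N, N) kernel matrix of Eq. 9 by direct summation
    over pixels. coils: (N, P) sensitivities, positions: (P, 2)."""
    def K(x, y):
        phase = np.exp(-2j * np.pi * positions @ (x - y))          # (P,)
        return np.einsum('ip,jp,p->ij', coils, np.conj(coils), phase)
    return K

def interpolate(K, samples, data, x):
    """Minimum-norm interpolation (Eqs. 16-17) at a target point x.
    samples: list of k-space points, data[k, i]: sample of channel i."""
    # kernel matrix M_{ki,lj} = K_ij(x_k, x_l), Eq. 15
    M = np.block([[K(xk, xl) for xl in samples] for xk in samples])
    # right-hand side B[(l,j), n] = K_nj(x, x_l), Eq. 16
    B = np.vstack([K(x, xl).T for xl in samples])
    # cardinal functions u^n_{k,i}(x); pinv handles rank deficiency of M
    U = np.linalg.pinv(M.T, rcond=1e-10) @ B
    return data.reshape(-1) @ U                                     # Eq. 17
```

The index pairing (k, i) into one matrix index follows the convention of Eq. 15; the transposes implement the summation over the first kernel-matrix index.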

2.5 Approximation Error and Noise

A useful tool to estimate approximation errors is the power function P [32]. It yields point-wise bounds of the approximation error:

$$|f_n(x) - \hat f_n(x)|^2 \le \|f\|_H^2 \cdot P_n^2(x) \qquad (18)$$

The power function depends only on the sample locations and is independent of the data values. It is given in terms of the kernel and the cardinal functions (see Appendix 7.4). This concept enables us to understand how well a given sample set S approximates f at unacquired points. A single combined power function can be obtained as the root of the sum of squares of the power functions of all coils. A large power function indicates that a large approximation error should be expected at this point. A small value of the power function means that the function is approximated well from the available samples and that a new sample at x would not provide much information.
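
The explicit form of the power function is given in Appendix 7.4 (not reproduced here). As an illustration, the sketch below uses the standard RKHS identity that P²(x) is the squared distance of the representer of evaluation at x from the span of the sample representers, in a single-channel toy model with made-up positions and sensitivities:

```python
import numpy as np

# single-channel (N = 1) toy discretization: made-up pixel positions and
# one coil sensitivity, so the kernel of Eq. 9 is a plain scalar function
r = np.array([0.1, 0.4, 0.8])
c = np.array([1.0, 0.7 + 0.2j, 0.5])

def K(x, y):
    return np.sum(np.abs(c) ** 2 * np.exp(-2j * np.pi * (x - y) * r))

def power2(x, samples):
    """P^2(x) = ||K_x - P_S K_x||^2, the squared distance of the
    representer at x from the span of the sample representers (cf. Eq. 18)."""
    M = np.array([[K(xk, xl) for xl in samples] for xk in samples])
    b = np.array([K(x, xl) for xl in samples])
    u = np.linalg.solve(M.T, b)            # cardinal weights, scalar Eq. 16
    return np.real(K(x, x) - u @ np.conj(b))

samples = [0.0, 0.25, 0.5]
```

At an acquired sample the power function vanishes, since the corresponding representer already lies in H_S; far from all samples it approaches its upper bound K(x, x).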

Another quantity of interest is the Frobenius norm of the local reconstruction operator:

$$n_n(x) = \sum_{k=1}^{|S|} \sum_{i=1}^{N} |u^n_{k,i}(x)|^2 \qquad (19)$$

It relates to the stability of the interpolation, describing the local noise amplification in k-space for each individual coil (or for all coils using a combined map). This yields information that is different from and complementary to the g-factor maps in the image domain [2, 35]. While the g-factor map describes how noise affects the final reconstructed image, these new noise-amplification maps yield information about the source of the noise in k-space and can guide the design of optimal sampling patterns. A related quantity is the classical Lebesgue function, which is defined as

$$l_n(x) = \sum_{k=1}^{|S|} \sum_{i=1}^{N} |u^n_{k,i}(x)|. \qquad (20)$$

Its maximum is the Lebesgue constant, which can be used to bound the interpolation error in the maximum norm when the error of the data is bounded (for example, see [36]).
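
Given the cardinal functions at one target point, both quantities (Eqs. 19 and 20) are simple reductions over the sample and channel indices; the weight values below are made up purely for illustration:

```python
import numpy as np

# hypothetical cardinal functions u^n_{k,i}(x) at one target point x,
# stored as u[n, k, i] with N = 2 output channels and |S| = 3 samples
u = np.array([[[0.5, 0.1j], [0.2, -0.3], [0.05, 0.0]],
              [[0.1, 0.4], [0.0, 0.25j], [-0.2, 0.1]]])

noise_map = np.sum(np.abs(u) ** 2, axis=(1, 2))   # Eq. 19, per output channel n
lebesgue = np.sum(np.abs(u), axis=(1, 2))         # Eq. 20, Lebesgue function
```

Evaluating these maps over a grid of target points produces the k-space noise-amplification and Lebesgue maps discussed above.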

2.6 Minimum-Norm Reconstruction

Eq. 1 defines the MNLS solution of SENSE in the continuous (not discretized) space of images. It is more precise than a conventional SENSE reconstruction using Dirac distributions as voxel functions, which is affected by truncation artifacts [37]. An MNLS reconstruction in the image domain can be computed directly [7] or approximated efficiently by computation on an image-domain grid with higher resolution [38, 39, 40].

With our choice of the inner product, the k-space recovered with Eq. 17 corresponds to this MNLS reconstruction of the image. Solving for the cardinal functions (Eq. 16) using the pseudo-inverse M† of the kernel matrix and inserting into the interpolation formula (Eq. 17) yields:

$$\hat f = \sum_{l=1}^{|S|} \sum_{j=1}^{N} K_{x_l,j} \sum_{k=1}^{|S|} \sum_{i=1}^{N} M^\dagger_{lj,ki}\, f_i(x_k) \qquad (21)$$

Expanding the relations

$$M_{ki,lj} = K_{ij}(x_k, x_l) = \langle \varepsilon_{x_k,i}, \varepsilon_{x_l,j} \rangle_{L^2} \qquad (22)$$
$$K_{x_l,j} = F\varepsilon_{x_l,j} \qquad (23)$$

and noting that the result can be re-written using the forward operator

$$G : L^2(\Omega, \mathbb{C}) \to \mathbb{C}^{|S| \cdot N} \qquad (24)$$
$$\rho \mapsto y_{ki} = \langle \varepsilon_{x_k,i}, \rho \rangle_{L^2} \qquad (25)$$

and its adjoint G^H, one obtains M = GG^H and further

$$\hat f = F G^H (G G^H)^\dagger y \qquad (26)$$
$$= F G^\dagger y. \qquad (27)$$

From this formulation it is directly evident that the k-space signal interpolated with Eq. 17 is the same as the signal predicted by the operator F from the MNLS solution G†y of the continuous (not discretized in the image domain) SENSE problem. No discretization errors arise, because the kernel matrix M, i.e. GG^H, is a finite-dimensional matrix even in the continuous case. Of course, this is not a unique feature of the proposed formulation: the relation G^H(GG^H)† = G† used in the last equation can be applied in reverse to the SENSE problem to directly compute the MNLS solution without discretization errors [7]. The same idea has also been proposed earlier for the reconstruction of non-Cartesian data from a single coil [9].
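
The identity behind Eqs. 26-27 is easy to verify numerically in a toy discretization (1D k-space, made-up sizes): the kernel-matrix route F G^H (G G^H)† y and the direct MNLS route F G† y give the same interpolated signal.

```python
import numpy as np
rng = np.random.default_rng(0)

# toy discretization: P pixels, N coils, S sample locations (all made up)
P, N, S = 3, 2, 4
r = rng.random(P)                                     # 1D pixel positions
c = rng.standard_normal((N, P)) + 1j * rng.standard_normal((N, P))
xs = rng.random(S)                                    # sampled k-space points

# forward operator G of Eqs. 24-25, rows indexed by the pair (k, i)
G = np.array([[c[i] * np.exp(-2j * np.pi * xk * r) for i in range(N)]
              for xk in xs]).reshape(S * N, P)
rho = rng.standard_normal(P) + 1j * rng.standard_normal(P)
y = G @ rho                                           # ideal data

# kernel-matrix route: M = G G^H (Eq. 22), then Eq. 26 at a new point
M = G @ np.conj(G.T)
coef = np.linalg.pinv(M, rcond=1e-10) @ y
xnew = 0.37                                           # arbitrary target point
Frow = np.array([c[i] * np.exp(-2j * np.pi * xnew * r) for i in range(N)])
f_kernel = Frow @ (np.conj(G.T) @ coef)               # F G^H (G G^H)^+ y
f_mnls = Frow @ (np.linalg.pinv(G) @ y)               # F G^+ y, Eq. 27
```

Both vectors agree to numerical precision, even though M is rank-deficient (|S|N > P), which is exactly the point of the pseudo-inverse formulation.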

2.7 Relationship to GRAPPA

Equation 17 is similar to the reconstruction part of the GRAPPA algorithm, with the GRAPPA weights corresponding to the cardinal functions. To make actual computation feasible, GRAPPA uses only samples in a small patch near a given point. If S_x ⊂ S is the set of samples near x which are used for reconstruction, then the GRAPPA reconstruction f̂^G is given by:

$$\hat f^G(x) = \sum_{k:\, x_k \in S_x} \sum_{i=1}^{N} f_i(x_k)\, w_{k,i}(x) \qquad (28)$$

In principle, reconstruction weights can be computed from the receive sensitivities using a localized version of Equation 16 or, equivalently, of Equation 27. This yields a relationship between SENSE and the reconstruction formula used in GRAPPA [41, 42]. In the original GRAPPA method, the weights are determined with a calibration procedure. The calibration (and reconstruction) in GRAPPA is restricted to samples on a Cartesian grid. Using shift invariance, the weights w_{k,i}(x) are learned by a least-squares fit at many different grid positions x_t ∈ C in a calibration region where all required samples (on the grid) have been acquired:

$$w^j_{k,i}(x) = \operatorname*{argmin}_{\hat w_{k,i}} \sum_{t=1}^{|C|} \Big| \sum_{k:\, x_k \in S_x} \sum_{i=1}^{N} f_i\big(x_t + \underbrace{x_k - x}_{\Delta x_k}\big)\, \hat w_{k,i} - f_j(x_t) \Big|^2 \qquad (29)$$

The distance vectors Δx_k which appear in the sum for a specific target position x depend on the local sampling pattern. The least-squares problem can be solved explicitly using the normal equations. Because GRAPPA uses only neighboring samples, it is useful to define a so-called calibration matrix, which is computed by sliding a small window through the calibration area and taking each patch as a row, i.e. A_{t,ki} = f_i(x_t + Δx_k) for all possible distance vectors Δx_k in a small patch. Assuming a special index 0 with Δx_0 = 0, i.e. A_{t,0i} = f_i(x_t), the normal equations are given by (for all l such that Δx_l + x ∈ S_x and h = 1, ···, N)

$$\sum_{k:\, \Delta x_k + x \in S_x} \sum_{i=1}^{N} \sum_{t=1}^{|C|} \overline{A_{t,lj}}\, A_{t,ki}\, w^h_{k,i}(x) = \sum_{t=1}^{|C|} \overline{A_{t,lj}}\, A_{t,0h}. \qquad (30)$$

Again, this relates to the present framework: it is Equation 16 expressed in relative distances, with a kernel matrix M_{ki,lj} = Σ_t A_{t,ki} \overline{A_{t,lj}}. This kernel matrix is related to an estimate of a truncated covariance function given by (assuming vanishing mean):

$$\mathrm{Cov}_{ij}(\Delta x) = \frac{1}{|C|} \sum_{t=1}^{|C|} f_i(x_t)\, \overline{f_j(x_t + \Delta x)} \qquad (31)$$
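
The calibration matrix described above can be sketched as a sliding-window extraction (our own toy implementation; the (offset, channel) column ordering is an assumption):

```python
import numpy as np

def calibration_matrix(calib, offsets):
    """Build the GRAPPA calibration matrix A_{t,(k,i)} = f_i(x_t + dx_k)
    by sliding a window over a fully-sampled calibration region.

    calib   : (N, ny, nx) multi-channel calibration data
    offsets : list of integer (dy, dx) distance vectors dx_k
    """
    N, ny, nx = calib.shape
    dys = [d[0] for d in offsets]
    dxs = [d[1] for d in offsets]
    # valid window centers: every offset must stay inside the region
    y0, y1 = -min(dys), ny - max(dys)
    x0, x1 = -min(dxs), nx - max(dxs)
    rows = []
    for ty in range(y0, y1):
        for tx in range(x0, x1):
            rows.append([calib[i, ty + dy, tx + dx]
                         for (dy, dx) in offsets for i in range(N)])
    return np.array(rows)
```

In this index convention the empirical kernel matrix is then M = A^T \bar A, i.e. M_{ki,lj} = Σ_t A_{t,ki} \overline{A_{t,lj}} as above.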

Note that, due to the kernel's shift invariance, the GRAPPA weights (or cardinal functions) in a local approximation of the interpolation formula (Eq. 17) are identical at positions in k-space which share the same local sampling pattern. In this way, computation time for (quasi-)periodic sampling can be reduced considerably.

In summary, the GRAPPA algorithm can be re-formulated and understood as function approximation in an RKHS. Instead of the optimal kernel, which has been derived analytically from the coil sensitivities in Equation 9, a different kernel (corresponding to a different RKHS) related to the empirical covariance function is used. The reconstruction differs from the optimal formula by using only local samples for interpolation.

3 Numerical Examples

3.1 Methods

Numerical experiments have been performed using a 2D slice of size 115 × 90 extracted from a larger fully-sampled 3D data set of a human head acquired at 1.5 T using an eight-channel head coil (inversion-recovery prepared RF-spoiled 3D-FLASH, TR/TE = 12.2/5.2 ms, TI = 450 ms, FA = 20°, BW = 15 kHz). In order to demonstrate k-space interpolation from Cartesian and arbitrary, non-Cartesian samples, sampling patterns were generated on a k-space grid oversampled by 3× in each dimension by zero-padding in the image domain. From this data set, samples have been obtained using Cartesian, Poisson-disc, and uniform random sampling. Cartesian sampling used an undersampling factor of four in one phase-encoding direction (4 × 1) and of two in both phase-encoding directions (2 × 2). In addition, different CAIPIRINHA [43] patterns have been studied. The Poisson-disc radius was chosen to yield 2494 samples, corresponding to an acceleration factor of about four relative to the original number of samples; the uniform random sampling pattern was generated to match this number of samples.

Coil sensitivities have been estimated using ESPIRiT [27], which yields very accurate sensitivities up to point-wise normalization and a mask which defines the area with signal. From these coil sensitivities the kernel has been computed by evaluating Equation 9 using a zero-padded Fourier transform. To demonstrate reconstruction errors from interpolation errors only, i.e. excluding errors from noise, a synthetic data set was created: The fully-sampled data was combined into a single image and then data was simulated using the coil sensitivities.

For all sampling patterns, a kernel matrix has been constructed by evaluating the kernel at the sample positions. Especially when some samples are close together, as in the random sampling pattern, the corresponding rows and columns of the kernel matrix are very similar and the condition number is large. For this reason, the inversion of the kernel matrix has to be stabilized with Tikhonov regularization in the presence of noise and numerical errors. The maximum eigenvalue of the kernel matrix, as determined by power iteration, was 51.0062 for Cartesian 4 × 1, 31.033 for Cartesian 2 × 2, 33.863 for Poisson-disc, and 116.45 for random sampling. For the CAIPIRINHA patterns the values were between 30.0936 and 35.3207. To limit the influence on the solution, Tikhonov regularization (ridge regression) was used with a much smaller parameter of 0.01 for all experiments.
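
The largest eigenvalue of the Hermitian kernel matrix can be estimated by plain power iteration; a generic sketch (not the paper's code, iteration count chosen arbitrarily):

```python
import numpy as np

def max_eigenvalue(M, iters=200, seed=0):
    """Estimate the largest eigenvalue of a Hermitian positive
    semi-definite matrix M by power iteration with a Rayleigh quotient."""
    v = np.random.default_rng(seed).standard_normal(M.shape[0]) + 0j
    for _ in range(iters):
        w = M @ v
        v = w / np.linalg.norm(w)          # renormalize each iteration
    return np.real(np.conj(v) @ (M @ v))   # Rayleigh quotient at convergence
```

The convergence rate is governed by the ratio of the two largest eigenvalues, which is usually benign for kernel matrices of the kind considered here.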

For each point on an oversampled and extended Cartesian grid, the cardinal and power functions have been computed. To reduce computation time, Equation 16 was solved by forward and backward substitution using a Cholesky decomposition of the kernel matrix. To estimate interpolation error and noise amplification at each position, a combined power function for all coils was computed as the root of the sum of squares; the cardinal functions were combined in the same way, which yields the Frobenius norm of the local interpolation operator. Using the cardinal functions, a Nyquist-sampled k-space was then approximated from the acquired samples for synthetic and noisy data and transformed into coil images using an FFT.
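
The repeated solves against a fixed, Tikhonov-stabilized kernel matrix can be sketched as one Cholesky factorization reused for all right-hand sides (a generic sketch; sizes and the regularization parameter are illustrative):

```python
import numpy as np

def regularized_cardinal_solve(M, B, alpha=1e-2):
    """Tikhonov-stabilized solve of Eq. 16, (M + alpha I) U = B, via a
    Cholesky factorization computed once and reused for every column
    of B (one column per target point / output channel)."""
    L = np.linalg.cholesky(M + alpha * np.eye(M.shape[0]))
    Z = np.linalg.solve(L, B)                  # forward step: L Z = B
    return np.linalg.solve(np.conj(L.T), Z)    # backward step: L^H U = Z
```

(`np.linalg.solve` is used here for brevity; a dedicated triangular solver would exploit the structure of L, which is what makes the factor-once, solve-many strategy cheap.)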

The simulations were performed using Matlab (The MathWorks, Inc., Natick, MA) on a cluster with two quad-core Intel Xeon E5550 CPUs (2.67 GHz) per node. Computing the kernel matrix of size ~20000² and calculating its Cholesky decomposition took about two CPU hours. Solving for the cardinal functions for all interpolation points was broken into smaller parallel jobs to reduce runtime and memory use. Interpolating from the samples to the full 115 × 90 grid took ~350 CPU hours, and interpolating over the 3× oversampled and extended grid of size 405 × 330 took ~4500 CPU hours. Using about 20 parallel jobs on the cluster, this took about 6 hours for the full grid and about 3 days for the oversampled grid. All computations used double-precision floating-point arithmetic.

3.2 Results

Figure 2 shows the coil images and the corresponding sensitivity maps for all receive channels as well as the combined image. Figure 3 shows the combined power function and the Frobenius norm of the cardinal functions for different sampling patterns. While for Cartesian 2 × 2 and Poisson-disc sampling the power function is small everywhere inside the sampled region, indicating a small interpolation error, the situation is different for Cartesian 4 × 1 and uniform random sampling: where larger gaps appear in the sampling pattern, the power function has high values. The power functions themselves are bounded by the diagonal elements of the kernel. This bound is approached in regions where the cardinal functions go to zero, i.e. far from acquired samples, and corresponds to a situation where nothing is known about the k-space value. The bound for the combined power function is 6.5530 for the kernel used here. Consistent with this upper bound, the maximum values observed near the boundary of the computed maps are 6.4802 for Cartesian 4 × 1, 6.3921 for Cartesian 2 × 2, 6.3797 for Poisson-disc, and 6.4277 for uniform random sampling. Computing the maximum in a smaller inner region of size 305 × 230 far from the boundary yields 0.6154, 0.05252, 0.1087, and 4.3303, respectively. The last number highlights the fact that high values are attained even inside the sampled area for uniform random sampling. While the error bound for Poisson-disc sampling is twice as large as for the Cartesian 2 × 2 pattern, it is still very small, i.e. 60× smaller than the maximum obtained in unsampled regions. The reconstruction results for noise-less data (Fig. 4) confirm that the interpolation error is lower for Cartesian and Poisson-disc than for uniform random sampling. Cartesian 4 × 1 performs worse than Cartesian 2 × 2, confirming the notion that it is usually better to distribute the acceleration along different phase-encoding directions.
The structure of the error maps in k-space is predicted well by the power function for all sampling patterns (Fig. 5). It has to be noted that the power function yields only a worst-case bound (scaled by the norm of the data) which depends on the sampling pattern, but not on the actual signal. In contrast, the actual error values in k-space depend on the energy distribution of the signal and are much higher in the k-space center than in the periphery.

Figure 2.

Figure 2

Top row: Spatial sensitivity maps for each channel of the receive coil (limited to the support of the object). Bottom row: Corresponding coil images computed by Fourier transform of each channel of the fully-sampled data. Right: Combined image computed as the pixel-wise root of the sum of absolute squares of all coil images.

Figure 3.

Figure 3

Sampling pattern, combined power function, and local noise amplification for Cartesian, Poisson-disc, and random sampling on an oversampled and extended grid. The maximum possible value for the power function is 6.5530 in regions where no information is available. Enlarged and individually scaled regions from the center show details with sample positions indicated by black dots.

Figure 4.

Figure 4

Reconstructed images and corresponding error maps for Cartesian, Poisson-disc, and uniform random sampling for simulated (top) and noisy data (bottom). All sampling schemes used an undersampling factor of 4. Error maps have been scaled by a factor of 80 (simulated) and 8 (noisy) to aid visibility.

Figure 5.

Figure 5

Theoretical power function and noise amplification maps as well as actual reconstruction errors in k-space for simulated and noisy data. Because the energy of the error is much higher near the center of k-space, the maps have been raised to a power of 1/3 for improved visualization of their structure. Note that, for this reason, the relative intensities of the different maps are not comparable.

In addition to the interpolation error, noise is amplified during the reconstruction. Assuming Gaussian white noise, this effect is described by the Frobenius norm of the cardinal functions. In Nyquist-sampled regions, if all channels contribute equally, one would expect a value of 1/N because the data from all channels is averaged. Values can be much higher in the case of undersampling, but can also be lower in regions very far from acquired samples. This can be seen at the boundary of the computed maps shown in Figure 3. In agreement with the higher values of the Frobenius norm for Cartesian 4 × 1 and uniform random sampling, the respective reconstructions from the noisy data show much more noise in the reconstructed image. Again, the distribution of noise and errors in k-space has the same structure as predicted by the Frobenius norm of the local reconstruction operators and the power function (Fig. 5).

The differences between the Cartesian 4 × 1 and CAIPIRINHA patterns in the power function and the Frobenius norm of the cardinal functions are shown in Figure 6. By distributing the undersampling across both phase-encoding dimensions, all CAIPIRINHA patterns achieve much lower values for both functions than the Cartesian 4 × 1 pattern. The CAIPIRINHA pattern with a shift of two performs slightly better with respect to noise amplification than the other two. The predictions are confirmed by the reconstructions from noisy data and the corresponding error maps in the image and k-space domains (Fig. 7).

Figure 6.


Sampling pattern, combined power function, and local noise amplification for Cartesian 4 × 1 and three different CAIPIRINHA sampling patterns with different shifts. Only a part of the full grid is shown.

Figure 7.


Images reconstructed from noisy data (top) and corresponding error maps in the image domain (middle) and k-space domain (bottom) for Cartesian 4 × 1 sampling and CAIPIRINHA with different shifts. The k-space maps have been raised to a power of 1/3 for improved visualization.

In summary, the results confirm the intuition that Cartesian 2 × 2 and Poisson-disc sampling yield better k-space interpolation, less noise amplification, and consequently better image reconstruction than uniform random sampling. Poisson-disc sampling is only slightly worse than Cartesian 2 × 2 sampling. Also as expected, Cartesian 4 × 1 performs worse than Cartesian 2 × 2 and all CAIPIRINHA 4 × 1 sampling patterns. The newly proposed metrics can predict local reconstruction errors in k-space for different sampling patterns.

4 Discussion

4.1 Parallel Imaging as Approximation in an RKHS

In this work, it has been shown that the space of ideal multi-channel k-space signals in MRI is an RKHS. As such, it is completely characterized by its kernel, which can be derived from the spatial sensitivity profiles of the receive coils. Based on this result, the connection to approximation theory is fully developed. The interpolation formula (Eq. 17) allows optimal reconstruction from samples at arbitrary locations in k-space, i.e. it provides a solution to the reconstruction problem of Cartesian and non-Cartesian parallel MRI. If samples are acquired at arbitrary positions, the kernel (Eq. 9) can be evaluated by non-uniform FFT techniques [10, 11, 13]. It has to be acknowledged that solving Equation 16 is far too expensive for practical applications. In fact, image reconstruction by iterative optimization of Equation 1 still seems to be the best approach due to its efficiency and flexibility. This does not mean that Equation 16 is of purely theoretical interest. While the present study focused on numerical exactness, practical algorithms such as GRAPPA [3] and PARS [19] can be understood as local approximations of this formula. Valuable insights can also be expected from analyzing other existing methods in this framework. For example, the calibration-consistency condition Wx = x used in SPIRiT is a discrete local version of the reproducing property (Eq. 7). On the other hand, methods which use a nonlinear model to jointly estimate image content and coil sensitivities [44, 45], or which use non-linear regularization terms, cannot directly be addressed.
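
To make the structure of the kernel-based reconstruction concrete, the following sketch follows Eqs. 9, 16, and 17 in a small 1D toy model: the Gram matrix of the sampled representers is built from the kernel, coefficients are obtained by solving a (lightly regularized) linear system, and an unsampled k-space value is predicted from the cross-kernel. The two coil sensitivities, the grid size, and the sampling pattern are illustrative assumptions, not the setup used in the paper's experiments:

```python
import numpy as np

# Toy 1D illustration of RKHS interpolation in k-space: unsampled
# multi-channel values are predicted from acquired samples via the
# coil-derived matrix-valued kernel. All choices below are illustrative.

M = 16                                   # image grid points on [0, 1)
r = np.arange(M) / M                     # spatial positions
m = np.cos(2 * np.pi * r) + 0.3          # toy magnetization
coils = np.stack([np.ones(M),            # channel 1: uniform sensitivity
                  np.exp(2j * np.pi * r)])   # channel 2: modulated

def encode(x, i):
    """k-space sample of channel i at position x (row of the encoding)."""
    return np.exp(-2j * np.pi * x * r) * coils[i]

def kernel(x, i, y, j):
    """Discrete version of the matrix-valued kernel K_ij(x, y)."""
    return np.sum(np.exp(-2j * np.pi * (x - y) * r)
                  * coils[i] * np.conj(coils[j]))

xs = np.arange(0, M, 2)                  # sample every 2nd line (R = 2)
pairs = [(x, i) for x in xs for i in range(2)]

# Gram matrix of the sampled representers and the acquired data vector
K = np.array([[kernel(x, i, y, j) for (y, j) in pairs] for (x, i) in pairs])
f = np.array([encode(x, i) @ m for (x, i) in pairs])

# Regularized coefficients and prediction at an unsampled k-space point
lam = 1e-8
a = np.linalg.solve(K + lam * np.eye(len(pairs)), f)
x_new, ch = 1.0, 0                       # odd line, not acquired
f_hat = np.array([kernel(x_new, ch, y, j) for (y, j) in pairs]) @ a
f_true = encode(x_new, ch) @ m
print(abs(f_hat - f_true))               # ≈ 0: the missing line is recovered
```

With these particular coils the two-channel system is exactly invertible at acceleration two, so the interpolation error is at the level of the regularization; for realistic coil arrays the residual would instead be governed by the power function.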

4.2 Error Bounds

Previous work in parallel imaging uses g-factor maps to quantify noise behaviour in the image domain. These maps can be calculated analytically for periodic sampling patterns for SENSE [2] and GRAPPA [35]. For arbitrary sampling patterns g-factor maps can be computed using Monte-Carlo methods based on full reconstructions [46]. While the g-factor map is a valuable tool to assess noise in the reconstructed image, it does not offer any direct insights into the source of these errors in k-space.

The present work describes new tools to study approximation error and noise amplification in k-space. The power functions can be used to predict local approximation errors for different sampling patterns, and the noise behaviour can be analyzed using the Frobenius norm of the cardinal functions. This is demonstrated in several experimental examples. CAIPIRINHA patterns have been developed to improve parallel imaging in 3D MRI by shifting samples in each k-space row by a different amount. Using the power function, it was directly confirmed that this leads to smaller error bounds between samples. In the important combination of compressed sensing and parallel imaging [47, 48, 49], sampling schemes must provide incoherence while optimally exploiting the information from multiple coils. In this context, Poisson-disc sampling has been proposed as a replacement for random sampling based on the idea that the area close to a sample should not be sampled again, because it is highly correlated across multiple coils and can be recovered with parallel MRI [50]. This intuitive idea could be confirmed for a specific coil array by comparing the power function for different sampling schemes. It is noteworthy that the lowest values for the power function could be achieved with Cartesian sampling. Although the power function yields useful error bounds in k-space, it is important to keep in mind that the optimal choice of the sampling scheme may depend on other factors such as the structure of the aliasing in the image domain. A limitation of the present work is that only a linear reconstruction is considered, while compressed sensing is a non-linear method. In compressed sensing, random or Poisson-disc sampling is used to produce incoherent aliasing which can then be removed using sparsity constraints to achieve higher acceleration.
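
The way the power function discriminates between sampling patterns can be sketched in a 1D toy model. Here the power function is evaluated from its simplified form (Eq. 48), P² = K_nn(x,x) − gᴴK⁻¹g, for two hypothetical patterns with the same number of samples; the coils, grid, and patterns are illustrative assumptions only:

```python
import numpy as np

# Toy 1D comparison of the power function P_n(x), which bounds the local
# approximation error, for an evenly spread and a clustered sampling
# pattern with equal sample count. Illustrative setup, not the paper's.

M = 16
r = np.arange(M) / M
coils = np.stack([np.ones(M), np.exp(2j * np.pi * r)])

def kernel(x, i, y, j):
    phase = np.exp(-2j * np.pi * (x - y) * r)
    return np.sum(phase * coils[i] * np.conj(coils[j]))

def power_function(xs, x, n=0, lam=1e-8):
    """P_n(x) for the pattern xs, via P^2 = K_nn(x,x) - g^H K^{-1} g."""
    pairs = [(y, j) for y in xs for j in range(len(coils))]
    K = np.array([[kernel(a, i, b, j) for (b, j) in pairs]
                  for (a, i) in pairs])
    g = np.array([kernel(b, j, x, n) for (b, j) in pairs])
    P2 = kernel(x, n, x, n) - np.conj(g) @ np.linalg.solve(
        K + lam * np.eye(len(pairs)), g)
    return np.sqrt(max(np.real(P2), 0.0))

grid = np.arange(0, 16, 0.5)
uniform = np.arange(0, 16, 2)        # evenly spread samples (R = 2)
clustered = np.arange(0, 8)          # same count, bunched together

p_uni = max(power_function(uniform, x) for x in grid)
p_clu = max(power_function(clustered, x) for x in grid)
print(p_uni, p_clu)                  # clustered sampling leaves larger gaps
```

The evenly spread pattern keeps the worst-case power function near zero for this coil pair, while the clustered pattern leaves k-space regions about which the acquired representers carry no information, mirroring the Poisson-disc argument above.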

4.3 Extensions

In the present study, it was assumed that all channels are always sampled simultaneously. While this is a reasonable assumption for data acquisition in MRI, the mathematical theory does not impose this limitation. In fact, relaxing this condition allows some interesting extensions. For example, by augmenting the RKHS with a uniformly sensitive “coil” that collects no data [51], it is possible to bound point-wise approximation errors of the Fourier transform of the single underlying image, which is ultimately the quantity of interest. Another possible application is the evaluation of the individual contribution of samples from different coils, which might be useful in the context of coil selection and array compression schemes [52, 53, 54]. An interesting application of the new metrics derived in this work could be the automatic design of optimal sampling patterns. Such techniques have previously been developed based on a greedy approach using an analytical formula for the global noise error [55], on simulated annealing with an approximate reconstruction [56], or on Bayesian experimental design [57]. In these applications, localized error metrics in k-space could be used to guide the automatic selection of new sample points.

In this work, regularization has been used only at a small level to stabilize the numerical computations with the ill-conditioned kernel matrix. A higher regularization parameter could be used to optimize the trade-off between noise amplification and approximation error. This will be studied in future work.

Coil sensitivities have to be estimated from experimental data, for example using ESPIRiT. One important result of ESPIRiT is that in some cases of corruption the coil sensitivities cannot be determined uniquely. In this case, multiple sets of maps $c_{jl}$, $l = 1, \dots, L$, appear, which have to be used simultaneously in the reconstruction. In the framework described here, this corresponds to kernels of the form

$$K_{ij}(\mathbf{x},\mathbf{y}) := \int_\Omega \mathrm{d}\mathbf{r}\; e^{-2\pi\sqrt{-1}\,(\mathbf{x}-\mathbf{y})\cdot\mathbf{r}} \sum_{l=1}^{L} c_{il}(\mathbf{r})\,\overline{c_{jl}(\mathbf{r})}. \tag{32}$$
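
In a discrete setting this extension amounts to summing the sensitivity products over the L sets of maps when evaluating the kernel. The sketch below uses random placeholder maps purely to illustrate the structure of Eq. 32; the grid size, channel count, and number of sets are hypothetical:

```python
import numpy as np

# Sketch of the extended kernel (Eq. 32) for the case where ESPIRiT
# yields L sets of sensitivity maps: the kernel sums over the sets.
# The maps below are random placeholders, not real sensitivities.

M, N, L = 16, 2, 2
r = np.arange(M) / M
rng = np.random.default_rng(0)
# maps[l, i] : sensitivity of channel i in set l (hypothetical values)
maps = rng.standard_normal((L, N, M)) + 1j * rng.standard_normal((L, N, M))

def kernel_multiset(x, i, y, j):
    """Discrete K_ij(x, y) with a sum over the L sets of maps."""
    phase = np.exp(-2j * np.pi * (x - y) * r)
    return sum(np.sum(phase * maps[l, i] * np.conj(maps[l, j]))
               for l in range(L))
```

By construction the extended kernel retains the Hermitian symmetry K_ij(x, y) = conj(K_ji(y, x)) required of a reproducing kernel, so the rest of the framework carries over unchanged.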

Finally, it should be noted that the framework of approximation theory is very general and can be applied to completely different encoding schemes, e.g. non-linear gradient fields or multi-slice excitation [58, 59].

5 Conclusion

In the present work, parallel MRI has been formulated as an approximation of vector-valued functions in an RKHS. This space can be completely characterized by a kernel derived from the sensitivities of the receive coils. The new formulation provides a sound mathematical framework for understanding the reconstruction process and sample selection, which has been demonstrated by experimental examples comparing Cartesian, Poisson-disc, and random sampling.

Acknowledgments

The authors thank Robert Schaback for helpful discussions. This study was supported by NIH grants P41RR009784 and R01EB009690, American Heart Association 12BGIA9660006, Sloan Research Fellowship, GE Healthcare, and a National Science Foundation Graduate Research Fellowship.

7 Appendix

7.1 Notation

Important symbols are listed in Table 1. Bold quantities denote vectors or vector-valued functions. A superscript denotes a relationship to another quantity, e.g. $K^{x,i}$ is the representer of evaluation at k-space position $x$ for channel $i$. Subscripts always denote discrete indices which select a component of a vector-valued function or an element out of a set. For example, the representer $K^{x,i}$ is itself a vector-valued function, which can be evaluated at a specific sample position $y_l \in S$ and subscripted to obtain the component for a specific channel $j$, written as $K_j^{x,i}(y_l)$.

7.2 Interpolation

The interpolation formula given in Equation 17 computes the projection $P_S f$ of any function $f \in H$ onto $H_S$ from its samples $f(x_k)$ with $x_k \in S$. From the reproducing property (Eq. 7) and the definition of $H_S$ it follows that $f_\perp(S) = \{0\}$ for the orthogonal component $f_\perp$, while an arbitrary function $f_\parallel \in H_S$ can be interpolated exactly. This can be shown by expressing $f_\parallel$ by its samples $f_\parallel(x_k)$ at $x_k \in S$:

$$f_\parallel(x) \overset{\text{Eq. 12}}{=} \sum_{l=1}^{|S|}\sum_{j=1}^{N} a^{x_l,j}\, K^{x_l,j}(x) \tag{33}$$
$$\overset{\text{Eq. 16}}{=} \sum_{l=1}^{|S|}\sum_{j=1}^{N} a^{x_l,j} \sum_{k \in S}\sum_{i=1}^{N} K_{ij}(x_k, x_l)\, u^{k,i}(x) \tag{34}$$
$$= \sum_{k=1}^{|S|}\sum_{i=1}^{N} \Big( \sum_{l=1}^{|S|}\sum_{j=1}^{N} a^{x_l,j}\, K_{ij}(x_k, x_l) \Big)\, u^{k,i}(x) \tag{35}$$
$$\overset{\text{Eq. 12}}{=} \sum_{k=1}^{|S|}\sum_{i=1}^{N} f_{\parallel,i}(x_k)\, u^{k,i}(x) \tag{36}$$

7.3 Alternative Derivation of Cardinal Functions

An alternative derivation motivates the cardinal functions as arising from projecting the representers of evaluation onto the subspace $H_S$. First, we observe that $\hat f_i(x) = \langle K^{x,i}, P_S f\rangle_H = \langle P_S K^{x,i}, f\rangle_H$, because projection operators are Hermitian. We can introduce the cardinal functions as the conjugate coefficients in the expansion (Eq. 12) of $P_S K^{x,i}$, for an arbitrary representer $K^{x,i} \in H$, in terms of the set of representers which span $H_S$:

$$P_S K^{x,i} = \sum_{l=1}^{|S|}\sum_{j=1}^{N} \overline{u_i^{x_l,j}(x)}\, K^{x_l,j} \tag{37}$$

The coefficients can be computed as before (Eq. 15), which corresponds to a complex-conjugate version of Eq. 16. Computing the scalar product of this projected representer with a function $f \in H_S$ then yields the interpolation formula (Eq. 17) from the data to unsampled points:

$$\hat f_i(x) = \langle P_S K^{x,i}, f \rangle_H \tag{38}$$
$$= \Big\langle \sum_{l=1}^{|S|}\sum_{j=1}^{N} \overline{u_i^{x_l,j}(x)}\, K^{x_l,j}, f \Big\rangle_H \tag{39}$$
$$= \sum_{l=1}^{|S|}\sum_{j=1}^{N} u_i^{x_l,j}(x)\, \langle K^{x_l,j}, f \rangle_H \tag{40}$$
$$= \sum_{l=1}^{|S|}\sum_{j=1}^{N} u_i^{x_l,j}(x)\, f_j(x_l) \tag{41}$$

7.4 Error Bounds

For functions in $H$, a point-wise error bound can be computed:

$$e_n(x) = |f_n(x) - \hat f_n(x)| \tag{42}$$
$$= \Big| f_n(x) - \sum_{k=1}^{|S|}\sum_{i=1}^{N} f_i(x_k)\, u_n^{k,i}(x) \Big| \tag{43}$$
$$= \Big| \Big\langle K^{x,n} - \sum_{k=1}^{|S|}\sum_{i=1}^{N} \overline{u_n^{k,i}(x)}\, K^{x_k,i},\, f \Big\rangle_H \Big| \tag{44}$$
$$\le \|f\|_H\, \Big\| K^{x,n} - \sum_{k=1}^{|S|}\sum_{i=1}^{N} \overline{u_n^{k,i}(x)}\, K^{x_k,i} \Big\|_H = \|f\|_H\, P_n(x) \tag{45}$$

where $P_n$ is the $n$-th component of the power function. Using the reproducing property on the kernel itself, $\langle K^{x,i}, K^{y,j}\rangle_H = K_{ij}(x, y)$, it can be expressed as:

$$P_n^2(x) = K_{nn}(x,x) - 2\,\mathrm{Re} \sum_{k=1}^{|S|}\sum_{i=1}^{N} K_{in}(x_k, x)\, u_n^{k,i}(x) \tag{46}$$
$$+ \sum_{l=1}^{|S|}\sum_{k=1}^{|S|}\sum_{i=1}^{N}\sum_{j=1}^{N} K_{ij}(x_k, x_l)\, u_n^{k,i}(x)\, \overline{u_n^{l,j}(x)} \tag{47}$$

For the (unregularized) minimum-norm least-squares reconstruction considered in this work, interpolation with the cardinal functions corresponds to the projection $P_S$. Thus, the power function $P_n(x) = \|K^{x,n} - P_S K^{x,n}\|_H$ measures how well $P_S K^{x,n}$ approximates $K^{x,n}$. If a representer of evaluation $K^{x,n}$ is orthogonal to $H_S$, so that $P_S K^{x,n} = 0$, then a sample at that location would provide completely new information. If $K^{x,n}$ lies completely in $H_S$, so that $K^{x,n} = P_S K^{x,n}$, then that sample is redundant. Note that the interpolation error is orthogonal to the interpolant, i.e. $\langle K^{x,n} - P_S K^{x,n}, P_S K^{x,n}\rangle_H = 0$, so the power function can be simplified to:

$$P_n^2(x) = K_{nn}(x,x) - \sum_{l=1}^{|S|}\sum_{j=1}^{N} K_{nj}(x, x_l)\, \overline{u_n^{l,j}(x)} \tag{48}$$
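
The simplification can be checked numerically in a finite-dimensional toy model, where the Hilbert space is identified with $\mathbb{C}^M$ via $f(x) = E_x m$ and the representer of evaluation at $(x, n)$ has the coordinate vector $\overline{e_{x,n}}$; $P_n(x)$ is then just the Euclidean distance of that vector to the span of the sampled representers. The random coils, grid, and pattern below are illustrative assumptions, and the inner product is taken conjugate-linear in the first argument as in the derivation above:

```python
import numpy as np

# Numerical sanity check of the simplified power function (Eq. 48):
# it must equal the squared distance of the representer of evaluation
# to the span of the sampled representers. Illustrative toy setup.

M = 16
r = np.arange(M) / M
rng = np.random.default_rng(1)
coils = rng.standard_normal((2, M)) + 1j * rng.standard_normal((2, M))

def encode(x, i):
    return np.exp(-2j * np.pi * x * r) * coils[i]

def kernel(x, i, y, j):
    return encode(x, i) @ np.conj(encode(y, j))

xs = np.arange(0, 16, 3)                     # some sampling pattern
pairs = [(b, j) for b in xs for j in range(2)]
K = np.array([[kernel(a, i, b, j) for (b, j) in pairs] for (a, i) in pairs])

x, n = 1.5, 0                                # unsampled location, channel 0

# (a) Eq. 48: P^2 = K_nn(x,x) - sum_{l,j} K_nj(x,x_l) conj(u_n^{l,j}(x))
c = np.array([kernel(x, n, b, j) for (b, j) in pairs])
u = np.linalg.pinv(K.T) @ c                  # cardinal-function values at x
P2_eq48 = np.real(kernel(x, n, x, n) - c @ np.conj(u))

# (b) definition: squared distance of the representer to span(H_S),
#     computed as a least-squares residual in coordinates
A = np.array([np.conj(encode(b, j)) for (b, j) in pairs]).T
t = np.conj(encode(x, n))
resid = t - A @ np.linalg.lstsq(A, t, rcond=None)[0]
P2_def = np.real(resid @ np.conj(resid))
print(P2_eq48, P2_def)                       # the two expressions agree
```
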

References

1. Sodickson DK, Manning WJ. Simultaneous acquisition of spatial harmonics (SMASH): Fast imaging with radiofrequency coil arrays. Magn Reson Med. 1997;38:591–603. doi: 10.1002/mrm.1910380414.
2. Pruessmann KP, Weiger M, Scheidegger MB, Boesiger P. SENSE: Sensitivity encoding for fast MRI. Magn Reson Med. 1999;42:952–962.
3. Griswold MA, Jakob PM, Heidemann RM, Nittka M, Jellus V, Wang J, Kiefer B, Haase A. Generalized autocalibrating partially parallel acquisitions (GRAPPA). Magn Reson Med. 2002;47:1202–10. doi: 10.1002/mrm.10171.
4. Ra JB, Rim CY. Fast imaging using subencoding data sets from multiple detectors. Magn Reson Med. 1993;30:142–145. doi: 10.1002/mrm.1910300123.
5. Pruessmann KP, Weiger M, Börnert P, Boesiger P. Advances in sensitivity encoding with arbitrary k-space trajectories. Magn Reson Med. 2001;46:638–651. doi: 10.1002/mrm.1241.
6. Kannengiesser SAR, Brenner AR, Noll TG. Accelerated image reconstruction for sensitivity encoded imaging with arbitrary k-space trajectories. Proceedings of the 8th ISMRM Annual Meeting; Denver. 2000. p. 155.
7. Sodickson DK, McKenzie CA. A generalized approach to parallel magnetic resonance imaging. Med Phys. 2001;28:1629–1643. doi: 10.1118/1.1386778.
8. Bertero M, De Mol C, Pike ER. Linear inverse problems with discrete data. I. General formulation and singular system analysis. Inverse Problems. 1985;1:301–330.
9. Van de Walle R, Barrett HH, Myers KJ, Altbach MI, Desplanques B, Gmitro AF, Cornelis J, Lemahieu I. Reconstruction of MR images from data acquired on a general nonregular grid by pseudoinverse calculation. IEEE Trans Med Imaging. 2000;19:1160–1167. doi: 10.1109/42.897806.
10. O'Sullivan JD. A fast sinc function gridding algorithm for Fourier inversion in computer tomography. IEEE Trans Med Imaging. 1985;4:200–207. doi: 10.1109/TMI.1985.4307723.
11. Jackson JI, Meyer CH, Nishimura DG, Macovski A. Selection of a convolution function for Fourier inversion using gridding. IEEE Trans Med Imaging. 1991;10:473–478. doi: 10.1109/42.97598.
12. Fessler JA, Sutton BP. Nonuniform fast Fourier transforms using min-max interpolation. IEEE Trans Signal Process. 2003;51:560–574.
13. Beatty PJ, Nishimura DG, Pauly JM. Rapid gridding reconstruction with a minimal oversampling ratio. IEEE Trans Med Imaging. 2005;24:799–808. doi: 10.1109/TMI.2005.848376.
14. Uecker M, Zhang S, Voit D, Merboldt K-D, Frahm J. Real-time MRI: Recent advances using radial FLASH. Imaging in Medicine. 2012;4:461–476.
15. Seiberlich N, Ehses P, Duerk J, Gilkeson R, Griswold M. Improved radial GRAPPA calibration for real-time free-breathing cardiac imaging. Magn Reson Med. 2011;65:492–505. doi: 10.1002/mrm.22618.
16. Xu B, Spincemaille P, Chen G, Agrawal M, Nguyen TD, Prince MR, Wang Y. Fast 3D contrast enhanced MRI of the liver using temporal resolution acceleration with constrained evolution reconstruction. Magn Reson Med. 2013;69:370–381. doi: 10.1002/mrm.24253.
17. Feng L, Grimm R, Block KT, Chandarana H, Kim S, Xu J, Axel L, Sodickson DK, Otazo R. Golden-angle radial sparse parallel MRI: Combination of compressed sensing, parallel imaging, and golden-angle radial sampling for fast and flexible dynamic volumetric MRI. Magn Reson Med. 2014;72:707–717. doi: 10.1002/mrm.24980.
18. McKenzie CA, Ohliger MA, Yeh EN, Price MD, Sodickson DK. Coil-by-coil image reconstruction with SMASH. Magn Reson Med. 2001;46:619–623. doi: 10.1002/mrm.1236.
19. Yeh EN, McKenzie CA, Ohliger MA, Sodickson DK. Parallel magnetic resonance imaging with adaptive radius in k-space (PARS): Constrained image reconstruction using k-space locality in radiofrequency coil encoded data. Magn Reson Med. 2005;53:1383–1392. doi: 10.1002/mrm.20490.
20. Heidemann RM, Griswold MA, Seiberlich N, Krüger G, Kannengiesser SAR, Kiefer B, Wiggins G, Wald LL, Jakob P. Direct parallel image reconstructions for spiral trajectories using GRAPPA. Magn Reson Med. 2006;56:317–326. doi: 10.1002/mrm.20951.
21. Samsonov AA, Block WF, Arunachalam A, Field AS. Advances in locally constrained k-space-based parallel MRI. Magn Reson Med. 2006;55:431–438. doi: 10.1002/mrm.20757.
22. Beatty PJ, Hargreaves BA, Gurney PT, Nishimura DG. Method for non-Cartesian parallel imaging reconstruction with improved calibration. Proceedings of the 15th ISMRM Annual Meeting; Berlin. 2007. p. 335.
23. Huang F, Vijayakumar S, Li Y, Hertel S, Reza S, Duensing GR. Self-calibration method for radial GRAPPA/k-t GRAPPA. Magn Reson Med. 2007;57:1075–1085. doi: 10.1002/mrm.21233.
24. Seiberlich N, Breuer FA, Blaimer M, Barkauskas K, Jakob PM, Griswold MA. Non-Cartesian data reconstruction using GRAPPA operator gridding (GROG). Magn Reson Med. 2007;58:1257–1265. doi: 10.1002/mrm.21435.
25. Seiberlich N, Breuer F, Heidemann R, Blaimer M, Griswold M, Jakob P. Reconstruction of undersampled non-Cartesian data sets using pseudo-Cartesian GRAPPA in conjunction with GROG. Magn Reson Med. 2008;59:1127–1137. doi: 10.1002/mrm.21602.
26. Codella NCF, Spincemaille P, Prince M, Wang Y. A radial self-calibrated (RASCAL) generalized autocalibrating partially parallel acquisition (GRAPPA) method using weight interpolation. NMR Biomed. 2011;24:844–854. doi: 10.1002/nbm.1630.
27. Uecker M, Lai P, Murphy MJ, Virtue P, Elad M, Pauly JM, Vasanawala SS, Lustig M. ESPIRiT – an eigenvalue approach to auto-calibrating parallel MRI: Where SENSE meets GRAPPA. Magn Reson Med. 2014;71:990–1001. doi: 10.1002/mrm.24751.
28. Aronszajn N. Theory of reproducing kernels. Trans Amer Math Soc. 1950;68:337–404.
29. Wendland H. Scattered Data Approximation. Cambridge University Press; 2005.
30. Heberlein KA, Hu X. Kriging and GRAPPA: a new perspective on parallel imaging reconstruction. Proceedings of the 14th ISMRM Annual Meeting; Seattle. 2006. p. 2465.
31. Chang Y, Liang D, Ying L. Nonlinear GRAPPA: A kernel approach to parallel MRI reconstruction. Magn Reson Med. 2012;68:730–740. doi: 10.1002/mrm.23279.
32. Wu Z, Schaback R. Local error estimates for radial basis function interpolation of scattered data. IMA J Numer Anal. 1993;13:13–27.
33. Narcowich FJ, Ward JD. Generalized Hermite interpolation via matrix-valued conditionally positive definite functions. Math Comp. 1994;63:661–687.
34. Fuselier EJ. Refined Error Estimates for Matrix-Valued Radial Basis Functions. PhD thesis. Texas A&M University; 2006.
35. Breuer FA, Kannengiesser SAR, Blaimer M, Seiberlich N, Jakob PM, Griswold MA. General formulation for quantitative g-factor calculation in GRAPPA reconstructions. Magn Reson Med. 2009;62:739–746. doi: 10.1002/mrm.22066.
36. Trefethen LN. Approximation Theory and Approximation Practice. SIAM; 2013.
37. Yuan L, Ying L, Xu D, Liang Z-P. Truncation effects in SENSE reconstruction. Magn Reson Imaging. 2006;24:1311–1318. doi: 10.1016/j.mri.2006.08.014.
38. Tsao J, Sánchez J, Boesiger P, Pruessmann KP. Minimum-norm reconstruction for optimal spatial response in high-resolution SENSE imaging. Proceedings of the 11th ISMRM Annual Meeting; 2003. p. 14.
39. Sánchez-González J, Tsao J, Dydak U, Desco M, Boesiger P, Pruessmann KP. Minimum-norm reconstruction for sensitivity-encoded magnetic resonance spectroscopic imaging. Magn Reson Med. 2006;55:287–295. doi: 10.1002/mrm.20758.
40. Uecker M. Nonlinear Reconstruction Methods for Parallel Magnetic Resonance Imaging. PhD thesis. Georg-August-Universität Göttingen; 2009.
41. Kholmovski EG, Parker DL. Spatially variant GRAPPA. Proceedings of the 14th ISMRM Annual Meeting; Seattle. 2006. p. 285.
42. Hoge WS, Brooks DH. On the complementarity of SENSE and GRAPPA in parallel MR imaging. Proceedings of the 28th IEEE EMBS Annual International Conference; New York City. 2006. pp. 755–758.
43. Breuer FA, Blaimer M, Mueller MF, Seiberlich N, Heidemann RM, Griswold MA, Jakob PM. Controlled aliasing in volumetric parallel imaging (2D CAIPIRINHA). Magn Reson Med. 2006;55:549–556. doi: 10.1002/mrm.20787.
44. Ying L, Sheng J. Joint image reconstruction and sensitivity estimation in SENSE (JSENSE). Magn Reson Med. 2007;57:1196–1202. doi: 10.1002/mrm.21245.
45. Uecker M, Hohage T, Block KT, Frahm J. Image reconstruction by regularized nonlinear inversion – joint estimation of coil sensitivities and image content. Magn Reson Med. 2008;60:674–682. doi: 10.1002/mrm.21691.
46. Robson PM, Grant AK, Madhuranthakam AJ, Lattanzi R, Sodickson DK, McKenzie CA. Comprehensive quantification of signal-to-noise ratio and g-factor for image-based and k-space-based parallel imaging reconstructions. Magn Reson Med. 2008;60:895–907. doi: 10.1002/mrm.21728.
47. Block KT, Uecker M, Frahm J. Undersampled radial MRI with multiple coils. Iterative image reconstruction using a total variation constraint. Magn Reson Med. 2007;57:1086–1098. doi: 10.1002/mrm.21236.
48. Liu B, King K, Steckner M, Xie J, Sheng J, Ying L. Regularized sensitivity encoding (SENSE) reconstruction using Bregman iterations. Magn Reson Med. 2009;61:145–152. doi: 10.1002/mrm.21799.
49. Lustig M, Pauly JM. SPIRiT: Iterative self-consistent parallel imaging reconstruction from arbitrary k-space. Magn Reson Med. 2010;64:457–471. doi: 10.1002/mrm.22428.
50. Vasanawala SS, Murphy MJ, Alley MT, Lai P, Keutzer K, Pauly JM, Lustig M. Practical parallel imaging compressed sensing MRI: Summary of two years of experience in accelerating body MRI of pediatric patients. 2011 IEEE International Symposium on Biomedical Imaging: From Nano to Macro; Chicago. 2011. pp. 1039–1043.
51. Beatty PJ, Sun W, Brau AC. Direct virtual coil (DVC) reconstruction for data-driven parallel imaging. Proceedings of the 16th ISMRM Annual Meeting; Toronto. 2008. p. 8.
52. Buehrer M, Pruessmann KP, Boesiger P, Kozerke S. Array compression for MRI with large coil arrays. Magn Reson Med. 2007;57:1131–1139. doi: 10.1002/mrm.21237.
53. Huang F, Vijayakumar S, Li Y, Hertel S, Duensing GR. A software channel compression technique for faster reconstruction with many channels. Magn Reson Imaging. 2008;26:133–141. doi: 10.1016/j.mri.2007.04.010.
54. Zhang T, Pauly JM, Vasanawala SS, Lustig M. Coil compression for accelerated imaging with Cartesian sampling. Magn Reson Med. 2013;69:571–582. doi: 10.1002/mrm.24267.
55. Xu D, Jacob M, Liang ZP. Optimal sampling of k-space on Cartesian grids for parallel MR imaging. Proceedings of the 13th ISMRM Annual Meeting; Miami. 2005. p. 2450.
56. Gong E, Huang F, Ying K, Liu X, Duensing GR. An efficient scheme of trajectory optimization for both parallel imaging and compressed sensing. Proceedings of the 21st ISMRM Annual Meeting; Salt Lake City. 2013. p. 2376.
57. Seeger M, Nickisch H, Pohmann R, Schölkopf B. Optimization of k-space trajectories for compressed sensing by Bayesian experimental design. Magn Reson Med. 2010;63:116–126. doi: 10.1002/mrm.22180.
58. Hennig J, Welz AM, Schultz G, Korvink J, Liu Z, Speck O, Zaitsev M. Parallel imaging in non-bijective, curvilinear magnetic field gradients: a concept study. MAGMA. 2008;21:5–14. doi: 10.1007/s10334-008-0105-7.
59. Larkman DJ, Hajnal JV, Herlihy AH, Coutts GA, Young IR, Ehnholm G. Use of multicoil arrays for separation of signal from multiple slices simultaneously excited. J Magn Reson Imaging. 2001;13:313–317. doi: 10.1002/1522-2586(200102)13:2<313::aid-jmri1045>3.0.co;2-w.
