Published in final edited form as: Inf Process Med Imaging. 2013;23:619–631. doi: 10.1007/978-3-642-38868-2_52

Dictionary Learning on the Manifold of Square Root Densities and Application to Reconstruction of Diffusion Propagator Fields*

Jiaqi Sun 1, Yuchen Xie 2, Wenxing Ye 3, Jeffrey Ho 1, Alireza Entezari 1, Stephen J Blackband 4, Baba C Vemuri 1

Abstract

In this paper, we present a novel dictionary learning framework for data lying on the manifold of square root densities and apply it to the reconstruction of diffusion propagator (DP) fields given a multi-shell diffusion MRI data set. Unlike most of the existing dictionary learning algorithms which rely on the assumption that the data points are vectors in some Euclidean space, our dictionary learning algorithm is designed to incorporate the intrinsic geometric structure of manifolds and performs better than traditional dictionary learning approaches when applied to data lying on the manifold of square root densities. Non-negativity as well as smoothness across the whole field of the reconstructed DPs is guaranteed in our approach. We demonstrate the advantage of our approach by comparing it with an existing dictionary based reconstruction method on synthetic and real multi-shell MRI data.

Keywords: Dictionary learning, Manifold, DW-MRI, Diffusion propagator reconstruction

1 Introduction

Diffusion weighted MRI, as a non-invasive imaging technique, helps explore the complex micro-structure of fibrous tissues by sensing the Brownian motion of water molecules [1]. Water diffusion is fully characterized by the diffusion Probability Density Function (PDF) called the diffusion propagator (DP) [2]. Under the narrow pulse assumption, the diffusion propagator, denoted by P(r), and the diffusion signal attenuation E(q) are related through the Fourier transform [2]:

$P(\mathbf{r}) = \int E(\mathbf{q}) \exp(2\pi i\, \mathbf{q}\cdot\mathbf{r})\, d\mathbf{q}$   (1)

where $E(\mathbf{q}) = S(\mathbf{q})/S_0$ and $S_0$ is the diffusion signal acquired with zero diffusion gradient.
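To make the relation in Equation (1) concrete, the following minimal numpy sketch approximates P(r) at a single displacement r by a Monte Carlo sum over sampled q-space points; the toy Gaussian signal, the sampling region, and the volume weight are illustrative assumptions, not the acquisition protocol used later in the paper. The real part is taken since P(r) is real for a symmetric attenuation E(q).

```python
import numpy as np

def propagator_at(r, q_points, E_values, q_volume):
    """Approximate P(r) = integral of E(q) exp(2*pi*i q.r) dq by a finite sum over samples."""
    phases = np.exp(2j * np.pi * (q_points @ r))        # one complex phase per q sample
    return float(np.real(np.mean(E_values * phases) * q_volume))

# Toy example: isotropic Gaussian attenuation E(q) = exp(-c |q|^2), sampled in a cube.
rng = np.random.default_rng(0)
q_points = rng.uniform(-1.0, 1.0, size=(5000, 3))       # Monte Carlo samples
E_values = np.exp(-4.0 * np.sum(q_points ** 2, axis=1))
p_r = propagator_at(np.array([0.2, 0.0, 0.0]), q_points, E_values, q_volume=8.0)
```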

Given the diffusion MRI data, reconstructing the DP is one of the most important problems in the field. Numerous techniques have been proposed to this end [3–5]; for further reading, we refer the interested reader to a recent survey [6]. Most of these methods either assume a model, in which case the basis functions for reconstruction are predefined, or, in the case of model-free approaches, they must explicitly enforce positivity constraints on the DP, which some works do not do. In our work, we take a fresh approach to this problem, namely a dictionary learning approach. This approach moves away from the requirement of pre-specifying the basis functions and instead learns them from the data, and is hence data adaptive; this is more flexible than fixing the basis. Further, by using the square root density representation of the DP, we make use of the intrinsic structure of the manifold of square root densities in the reconstruction process, without having to resort to explicit enforcement of a non-negativity constraint on the reconstructed DP. In the following, we first present a brief review of relevant dictionary learning techniques and then review the state of the art in DP reconstruction from multi-shell diffusion MRI.

1.1 Dictionary learning on Riemannian manifolds: Literature Review

Sparse coding, which models data as a linear combination of a small number of elements from a collection of atoms, i.e., the dictionary, has proven very effective in many image processing tasks [7]. In these tasks, learning a dictionary that adapts well to the data is essential for a good sparse representation, and considering the geometric structure of the data space is therefore critical to the success of dictionary learning. Most existing dictionary learning algorithms assume that the data points and the atoms are vectors in a Euclidean space, and the dictionary is learned based on the vector space structure of the input data. However, the data involved in many image analysis tasks often reside on Riemannian manifolds, such as the space of symmetric positive definite (SPD) matrices [8] and the space of square root densities. The existing extrinsic approaches, which overlook the potentially important intrinsic geometric structure of the data, are therefore inadequate in the context of such applications. Recently, this inadequacy was addressed by a few researchers [9], leading to the generalization of dictionary learning to manifolds, specifically to the manifold of SPD matrices. However, most of these methods seek to transform the problem to a simpler space and solve it there, instead of respecting the geometric structure of the SPD matrix manifold. None of them truly incorporates the intrinsic geometry of the data as is done in this paper.

In general, dictionary learning in the Euclidean setting can be formulated as $\min_{c_1,\ldots,c_n,D} \sum_{i=1}^{n} \|s_i - Dc_i\|^2 + Sp(c_i)$, where $s_1, \ldots, s_n$ is the given collection of data points, $D$ is the matrix whose columns are the atoms $a_i$, $c_i$ are the sparse coding coefficients, and $Sp(c_i)$ is the sparsity promoting term. When generalizing this to a Riemannian manifold $\mathcal{M}$, one of the key difficulties that needs to be resolved is ensuring that the atoms, as well as the approximations of the data points generated from the atoms, still lie on the manifold. In Euclidean space, the global linear structure guarantees that data synthesized from the atoms is contained in the same space, whereas on a Riemannian manifold the geometry provides only local linear structures through the Riemannian exponential and logarithmic maps. Yet, by taking advantage of this diversity of local linear structures, it is possible to formulate dictionary learning in a data-specific way. Details of this formulation are discussed in subsequent sections; it suffices to say that we employ the log and exp maps along with an affine constraint to achieve this goal.

1.2 DP reconstruction from multi-shell acquisitions: Literature review

We now present a brief review of DP reconstruction from multi-shell diffusion MRI data. Various techniques have been proposed to reconstruct the DP from multi-shell acquisitions of the diffusion signal [5, 10], which, compared to single-shell acquisitions, provide additional information about the radial signal decay. Most of them assume a particular model for the diffusion signal, as in q-ball imaging (QBI) [11]. As an alternative, another category of methods places weak assumptions on the diffusion signal and is therefore capable of generating relatively unbiased reconstructions; examples include diffusion spectrum imaging (DSI) proposed in [12] and the tomographic reconstruction methods in [13–15]. These methods interpolate the spherical-domain data samples onto a dense regular lattice and then reconstruct the DP using the Fourier transform relationship between P(r) and E(q). This idea is also adopted in our proposed method.

However, all of the aforementioned multi-shell methods solve the reconstruction problem in a voxel-wise manner and therefore always lead to a noisy reconstruction across the field. This is a strong motivation for applying dictionary learning to the reconstruction of DP fields, because the globally defined dictionary plays an implicit role in regularizing the reconstructions over the entire field. In recent years, a few dictionary learning based DP reconstruction methods have been proposed. In [16], Bilgic et al. applied adaptive dictionaries to accelerate the DSI method for estimating the DPs. In [17], Merlet et al. proposed a parametric dictionary learning framework that yields closed-form DP and ODF models from diffusion MRI data. An over-complete dictionary based reconstruction of DP fields from single-shell acquisitions was presented in [18]. Nevertheless, because they make no explicit use of the geometric structure of the data space itself, none of these methods can guarantee the non-negativity of the reconstructed propagators, an intrinsic and basic property of the DP. Accordingly, these methods are prone to higher numerical errors.

Recently, several approaches that guarantee the non-negativity of the reconstructed DP or ODF have been proposed. For instance, in [19] the authors used the Spherical Harmonic (SH) representation for the ODF and enforced non-negativity on the continuous domain by enforcing the positive semi-definiteness of Toeplitz-like matrices constructed from the SH representation. Cheng et al. in [20] proposed to reconstruct ODFs (DPs) by estimating the square root of the ODF (DP), called the wave function, directly from the diffusion signals, which ensures non-negativity. The idea of exploiting the square root parameterization of DPs is also adopted in our proposed method. However, unlike dictionary based methods, the two methods discussed above are not guaranteed to yield a smooth reconstruction across the field, and their reconstruction bases are pre-specified.

In this paper, we propose to apply the dictionary learning method generalized to the manifold of square root densities to the reconstruction of DP fields. As the nature of a globally learned dictionary indicates, our method will yield a smooth reconstruction which is desirable in real applications. By taking into consideration the intrinsic geometric structure of the manifold formed by the square root of DPs, our method performs better than the reconstruction techniques based on dictionary learning in a Euclidean setting. Furthermore, the non-negativity of the reconstructed DPs is naturally guaranteed due to the adoption of the square root representation and use of the intrinsic geometry of this space.

The rest of the paper is organized as follows. Section 2 contains brief background on Riemannian manifolds and the dictionary learning formulation. We present the application to reconstruction of DP fields in section 3 and provide several examples in section 4. Finally, section 5 contains conclusions.

2 Theory

2.1 Relevant basics of Riemannian manifolds

In this subsection, we briefly review some fundamentals of Riemannian geometry; details can be found in [21]. A manifold $\mathcal{M}$ of dimension $d$ is a topological space that is locally homeomorphic to open subsets of the Euclidean space $\mathbb{R}^d$. With a globally defined differential structure, $\mathcal{M}$ becomes a differentiable manifold, on which tangent spaces can be defined at every point. The tangent space at $p \in \mathcal{M}$, denoted by $T_p\mathcal{M}$, is a vector space that contains all the tangent vectors to $\mathcal{M}$ at $p$. A Riemannian manifold is a differentiable manifold in which each tangent space $T_p\mathcal{M}$ is equipped with an inner product $\langle\cdot,\cdot\rangle_p$ that varies smoothly with $p$; this family of inner products is called a Riemannian metric. Given two points $p_i, p_j$ on $\mathcal{M}$, the geodesic curve $\gamma: [0, 1] \to \mathcal{M}$ is the smooth curve of minimum length connecting $p_i$ and $p_j$. For a tangent vector $v \in T_p\mathcal{M}$, there exists a unique geodesic $\gamma_v$ satisfying $\gamma_v(0) = p$ with initial tangent vector $v$. The exponential map $\exp_p: T_p\mathcal{M} \to \mathcal{M}$ is defined by $\exp_p(v) = \gamma_v(1)$. The logarithmic map, the inverse of the exponential map, is denoted by $\log_p: \mathcal{M} \to T_p\mathcal{M}$: given two points $p_i, p_j \in \mathcal{M}$, $\log_{p_i}$ maps $p_j$ to the unique tangent vector at $p_i$ that is the initial velocity of the geodesic $\gamma$ with $\gamma(0) = p_i$ and $\gamma(1) = p_j$. The geodesic distance between $p_i$ and $p_j$ is computed as $\mathrm{dist}(p_i, p_j) = \|\log_{p_i}(p_j)\|_{p_i}$.

2.2 Dictionary learning on Riemannian manifolds: Formulation

In the Euclidean setting, given a collection of signals $s_1, \ldots, s_n \in \mathbb{R}^d$, classical dictionary learning methods seek a dictionary $D \in \mathbb{R}^{d\times m}$ whose columns consist of $m$ atoms, such that each signal $s_i$ can be approximated by a sparse linear combination of these atoms, $s_i \approx Dc_i$, where $c_i \in \mathbb{R}^m$ is the coefficient vector. Using $l_1$ regularization on $c_i$, the dictionary learning problem can be formulated as:

$\min_{c_i, D} \sum_{i=1}^{n} \left( \|s_i - Dc_i\|_2^2 + \lambda \|c_i\|_1 \right)$   (2)

where λ is a regularization parameter.
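As a point of reference for the Euclidean formulation in Equation (2), the sketch below solves the sparse coding subproblem for one signal s with a fixed dictionary D using ISTA (proximal gradient descent with soft thresholding); the dimensions, iteration count, and regularization weight are illustrative assumptions, not values used in the paper.

```python
import numpy as np

def ista_sparse_code(D, s, lam=0.1, n_iter=300):
    """Minimize ||s - D c||_2^2 + lam * ||c||_1 over c by proximal gradient descent."""
    L = 2.0 * np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth part
    c = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * D.T @ (D @ c - s)           # gradient of the quadratic data term
        z = c - grad / L
        c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-thresholding prox
    return c

# Toy usage with a random dictionary and a signal built from two atoms.
rng = np.random.default_rng(1)
D = rng.standard_normal((64, 20))
s = 0.7 * D[:, 3] + 0.3 * D[:, 11]
c = ista_sparse_code(D, s)
```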

In the Riemannian manifold setting, let $s_1, \ldots, s_n \in \mathcal{M}$ be a collection of $n$ data points on the manifold $\mathcal{M}$, and let $a_1, \ldots, a_m \in \mathcal{M}$ be the atoms of the learned dictionary $\mathcal{D} = \{a_1, \ldots, a_m\}$. Due to the local linear geometric structure of $\mathcal{M}$, it is improper to approximate the data $s_i$ by the linear combination of atoms $\hat{s}_i = \sum_{j=1}^{m} c_{ij} a_j$, since there is no guarantee that $\hat{s}_i$ lies on the manifold. Instead, using geodesic linear interpolation on $\mathcal{M}$, $s_i$ can be estimated by $\hat{s}_i = \exp_{s_i}\!\left(\sum_{j=1}^{m} c_{ij} \log_{s_i}(a_j)\right)$, where $\exp_{s_i}$ and $\log_{s_i}$ are the exponential and logarithmic maps at $s_i$, respectively, and $c_{ij} \in \mathbb{R}$ are the coefficients. Intuitively, in order to approximate the data point $s_i$, we project all the atoms in the dictionary to the tangent space at $s_i$, form the linear combination $v_i = \sum_{j=1}^{m} c_{ij}\log_{s_i}(a_j)$ in the tangent vector space $T_{s_i}\mathcal{M}$, and obtain the approximation $\hat{s}_i$ by taking the exponential map of $v_i$ at $s_i$.

Our goal is to build a dictionary that minimizes the sum of the reconstruction errors over all data points. Define

$E_{data} = \sum_{i=1}^{n} \mathrm{dist}(s_i, \hat{s}_i)^2 = \sum_{i=1}^{n} \left\|\log_{s_i}(\hat{s}_i)\right\|_{s_i}^2 = \sum_{i=1}^{n} \left\|\sum_{j=1}^{m} c_{ij}\log_{s_i}(a_j)\right\|_{s_i}^2.$   (3)

By using the $l_1$ sparsity regularization, the dictionary learning problem on the manifold $\mathcal{M}$ can be formulated as the following optimization problem

$\min_{C, \mathcal{D}} \sum_{i=1}^{n} \left\|\sum_{j=1}^{m} c_{ij}\log_{s_i}(a_j)\right\|_{s_i}^2 + \lambda\|C\|_1, \quad \text{s.t.} \quad \sum_{j=1}^{m} c_{ij} = 1, \; i = 1, \ldots, n$   (4)

where $C \in \mathbb{R}^{n\times m}$ and the $(i, j)$ entry of $C$ is written as $c_{ij}$. A similar data term was used in [22], but there the atoms were assumed fixed. The affine constraint implies that we are using affine subspaces to approximate the data instead of the usual subspaces, which are simply affine subspaces based at the origin. In generalizing from vector spaces to Riemannian manifolds, there is no corresponding notion of an origin that can be used to define subspaces, and this geometric fact requires the abandonment of the usual subspaces in favor of general affine subspaces. Other regularizations could be introduced in our framework in place of the $l_1$ norm, but that is a topic for future research. Similar to classical dictionary learning methods, we use an iterative, alternating scheme to solve this optimization problem:

  1. Sparse coding step: fix the dictionary $\mathcal{D}$ and optimize with respect to the coefficients $C$.

  2. Codebook optimization step: fix $C$ and optimize with respect to $\mathcal{D}$.

The first step is a regular sparse coding problem that can be solved by many existing fast algorithms. The second subproblem, however, is much more challenging, since optimization methods designed for Euclidean spaces are not appropriate for atoms residing on a manifold.

We developed a line search based algorithm on the Riemannian manifold to update the dictionary $\mathcal{D}$. Let the cost function to be minimized be denoted by $f(a_1, \ldots, a_m)$. First, we need to initialize the atoms in the dictionary. One possible initialization is the set of $m$ cluster centers of the data $s_1, \ldots, s_n$ generated by a K-means algorithm applied to all the data on $\mathcal{M}$. Then, a line search on the manifold is used to optimize $f(a_1, \ldots, a_m)$. Intuitively, the idea is to find a descent direction $v$ in the tangent space and then walk a step along the geodesic $\gamma$ whose initial velocity is $v$. The details are listed in Algorithm 1, and a small numerical sketch follows the algorithm. The convergence analysis of line search methods on manifolds is discussed in [23].


Algorithm 1 Line search on Riemannian manifold

Input: A set of data points $S = \{s_1, \ldots, s_n\}$ on the manifold $\mathcal{M}$, coefficients $C \in \mathbb{R}^{n\times m}$, and initial dictionary atoms $a_1^0, \ldots, a_m^0$.
Output: The optimal dictionary atoms $(a_1^*, \ldots, a_m^*)$ that minimize the cost function $f(a_1, \ldots, a_m)$.
  1. Set scalars $\alpha > 0$, $\beta, \sigma \in (0, 1)$ and initialize $k = 0$.

  2. Compute $\mathrm{grad}\, f(a_1^k, \ldots, a_m^k) = \left(\frac{\partial f(a_1^k)}{\partial a_1}, \ldots, \frac{\partial f(a_m^k)}{\partial a_m}\right)$.

  3. Pick $\eta^k = (\eta_1^k, \ldots, \eta_m^k) = -\mathrm{grad}\, f$, where $\eta_i^k \in T_{a_i^k}\mathcal{M}$.

  4. Find the smallest integer $t \ge 0$ such that $f\!\left(\exp_{a_1^k}(\alpha\beta^t\eta_1^k), \ldots, \exp_{a_m^k}(\alpha\beta^t\eta_m^k)\right) \le f(a_1^k, \ldots, a_m^k) - \sum_{i=1}^{m}\sigma\alpha\beta^t\left\|\eta_i^k\right\|_{a_i^k}^2$.

  5. Set $a_i^{k+1} = \exp_{a_i^k}(\alpha\beta^t\eta_i^k)$, $i = 1, \ldots, m$.

  6. Stop if $f$ does not change much; otherwise set $k = k + 1$ and go back to step 2.
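The following is a small numerical sketch of Algorithm 1. The cost function, Riemannian gradient, and exponential map are passed in as callables; the toy usage at the end (a single atom on the unit sphere pulled toward a fixed target) is an illustrative assumption rather than the paper's dictionary-learning cost, and the sufficient-decrease term is written in the standard squared-norm Armijo form.

```python
import numpy as np

def manifold_line_search(atoms, cost, riem_grad, exp_map,
                         alpha=1.0, beta=0.5, sigma=1e-4, max_outer=100, tol=1e-10):
    """Backtracking line search on a manifold, following the steps of Algorithm 1."""
    for _ in range(max_outer):
        grads = riem_grad(atoms)                           # step 2: Riemannian gradients
        etas = [-g for g in grads]                         # step 3: descent directions
        f_old = cost(atoms)
        t = 0
        while True:                                        # step 4: Armijo backtracking
            step = alpha * beta ** t
            trial = [exp_map(a, step * e) for a, e in zip(atoms, etas)]
            decrease = sigma * step * sum(float(e @ e) for e in etas)
            if cost(trial) <= f_old - decrease or step < 1e-12:
                break
            t += 1
        atoms = trial                                      # step 5: update the atoms
        if abs(f_old - cost(atoms)) < tol:                 # step 6: stopping criterion
            break
    return atoms

# Toy usage on the unit sphere: move one atom toward a fixed target point.
def sphere_exp(p, v):
    nv = np.linalg.norm(v)
    return p if nv < 1e-12 else np.cos(nv) * p + np.sin(nv) * v / nv

target = np.array([0.0, 0.0, 1.0])
cost = lambda A: 1.0 - float(A[0] @ target)
riem_grad = lambda A: [-(target - float(A[0] @ target) * A[0])]   # projected gradient
atoms = manifold_line_search([np.array([1.0, 0.0, 0.0])], cost, riem_grad, sphere_exp)
```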


2.3 Manifold of square root densities

In this section, without loss of generality, we restrict the analysis to PDFs defined on the interval $[0, T]$ for simplicity: $\mathcal{P} = \{p: [0,T] \to \mathbb{R} \mid \forall s,\ p(s) \ge 0,\ \int_0^T p(s)\,ds = 1\}$. In [24], the Fisher-Rao metric was introduced to study the Riemannian structure of this statistical manifold. For a PDF $p_i \in \mathcal{P}$, the Fisher-Rao metric is defined as $\langle v_j, v_k \rangle = \int_0^T v_j(s) v_k(s) \frac{1}{p_i(s)}\, ds$, where $v_j, v_k \in T_{p_i}\mathcal{P}$. The Fisher-Rao metric is invariant to reparameterizations of the functions. In order to facilitate easy computations with the Riemannian operations, the square root density representation $\psi = \sqrt{p}$ was used in [25]. The space of square root density functions is defined as $\Psi = \{\psi: [0,T] \to \mathbb{R} \mid \forall s,\ \psi(s) \ge 0,\ \int_0^T \psi^2(s)\,ds = 1\}$. As we can see, $\Psi$ forms a convex subset of the unit sphere in a Hilbert space. Under this representation, the Fisher-Rao metric becomes $\langle v_j, v_k \rangle = \int_0^T v_j(s) v_k(s)\, ds$, where $v_j, v_k \in T_{\psi_i}\Psi$ are tangent vectors. Given any two functions $\psi_i, \psi_j \in \Psi$, the geodesic distance between them is given in closed form by $\mathrm{dist}(\psi_i, \psi_j) = \cos^{-1}(\langle \psi_i, \psi_j \rangle)$, which is simply the angle between $\psi_i$ and $\psi_j$ on the unit hypersphere. The geodesic at $\psi_i$ in a direction $v \in T_{\psi_i}\Psi$ is $\gamma(t) = \cos(t)\psi_i + \sin(t)\frac{v}{|v|}$. The exponential map can then be written as $\exp_{\psi_i}(v) = \cos(|v|)\psi_i + \sin(|v|)\frac{v}{|v|}$; to ensure that the exponential map is a bijection, we restrict $|v| \in [0, \pi)$. The logarithmic map is given by $\log_{\psi_i}(\psi_j) = u\, \cos^{-1}(\langle\psi_i, \psi_j\rangle)/\|u\|$, where $u = \psi_j - \langle \psi_i, \psi_j\rangle \psi_i$.
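These closed-form operations are straightforward to implement once the square root densities are discretized on a grid. The numpy sketch below shows the geodesic distance, the exponential map, and the logarithmic map; the discretization, the toy densities, and the numerical tolerances are illustrative assumptions.

```python
import numpy as np

def geodesic_dist(psi_i, psi_j):
    """dist(psi_i, psi_j) = arccos(<psi_i, psi_j>) on the unit hypersphere."""
    return float(np.arccos(np.clip(psi_i @ psi_j, -1.0, 1.0)))

def exp_map(psi_i, v):
    """exp_{psi_i}(v) = cos(|v|) psi_i + sin(|v|) v/|v|, restricted to |v| in [0, pi)."""
    nv = np.linalg.norm(v)
    return psi_i if nv < 1e-12 else np.cos(nv) * psi_i + np.sin(nv) * v / nv

def log_map(psi_i, psi_j):
    """log_{psi_i}(psi_j) = u arccos(<psi_i, psi_j>) / |u|, with u = psi_j - <psi_i, psi_j> psi_i."""
    u = psi_j - (psi_i @ psi_j) * psi_i
    nu = np.linalg.norm(u)
    return np.zeros_like(psi_i) if nu < 1e-12 else u * geodesic_dist(psi_i, psi_j) / nu

# Toy usage: two discretized densities mapped to unit-norm square root densities.
grid = np.linspace(0.0, 1.0, 200)
p1, p2 = np.exp(-((grid - 0.3) ** 2) / 0.01), np.exp(-((grid - 0.6) ** 2) / 0.02)
psi1, psi2 = [np.sqrt(p / p.sum()) for p in (p1, p2)]
midpoint = exp_map(psi1, 0.5 * log_map(psi1, psi2))        # geodesic midpoint on the sphere
```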

Using the expressions for $\Psi$ discussed above, we can perform dictionary learning on square root density functions. Let $s_1, \ldots, s_n \in \Psi$ be a collection of square root density functions, and $a_1, \ldots, a_m \in \Psi$ be the atoms in the dictionary $\mathcal{D}$; $C$ is an $n \times m$ coefficient matrix. Using $l_1$ regularization, our dictionary learning framework becomes

$\min_{C, \mathcal{D}} \sum_{i=1}^{n} \left\|\sum_{j=1}^{m} c_{ij} \cos^{-1}(\langle s_i, a_j\rangle)\frac{u_{ij}}{\|u_{ij}\|}\right\|_{s_i}^2 + \lambda\|C\|_1, \quad \text{s.t.} \quad \sum_{j=1}^{m} c_{ij} = 1, \; i = 1, \ldots, n.$   (5)

where $u_{ij} = a_j - \langle s_i, a_j\rangle s_i$. Note that in this formulation, normalization of the atoms $a_j$ is not needed: by incorporating the manifold structure of the square root densities, the learned atoms always lie on the hypersphere, whereas traditional dictionary learning (Equation (2)) needs the normalization to guarantee a unique solution. This optimization problem can be efficiently solved using the algorithm presented in section 2.2.
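As a concrete illustration of the per-voxel subproblem inside Equation (5), the sketch below assembles and minimizes the data term plus the l1 penalty for a single data point, handing the affine constraint to a generic constrained solver (scipy's SLSQP). This is only an illustrative stand-in: the l1 term is non-smooth, so a dedicated sparse coding algorithm, as mentioned in section 2.2, would be preferable in practice, and the regularization weight and toy data are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def log_map(s_i, a_j):
    """Spherical logarithmic map from section 2.3."""
    u = a_j - (s_i @ a_j) * s_i
    nu = np.linalg.norm(u)
    theta = np.arccos(np.clip(s_i @ a_j, -1.0, 1.0))
    return np.zeros_like(s_i) if nu < 1e-12 else u * theta / nu

def sparse_code_voxel(s_i, atoms, lam=0.05):
    """min_c || sum_j c_j log_{s_i}(a_j) ||^2 + lam ||c||_1  s.t.  sum_j c_j = 1."""
    V = np.stack([log_map(s_i, a) for a in atoms], axis=1)    # tangent vectors as columns
    objective = lambda c: float(np.sum((V @ c) ** 2) + lam * np.sum(np.abs(c)))
    constraint = {"type": "eq", "fun": lambda c: float(np.sum(c) - 1.0)}
    c0 = np.full(len(atoms), 1.0 / len(atoms))                # feasible starting point
    return minimize(objective, c0, method="SLSQP", constraints=[constraint]).x

# Toy usage with random nonnegative unit-norm vectors standing in for square root densities.
rng = np.random.default_rng(2)
atoms = [v / np.linalg.norm(v) for v in np.abs(rng.standard_normal((5, 50)))]
s = np.abs(rng.standard_normal(50)); s /= np.linalg.norm(s)
coeffs = sparse_code_voxel(s, atoms)
```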

3 Application to reconstruction of DP fields

As mentioned in the introduction, in the DP reconstruction problem we aim to reconstruct a smooth field of DPs $P(\mathbf{r}, \mathbf{x})$ from a given field of multi-shell diffusion weighted MRI data $E(\mathbf{q}, \mathbf{x})$, where $\mathbf{x}$ denotes the spatial location. We propose to solve this problem in two steps. In the first step, a rough estimate of the DP at each voxel is obtained through the Fourier transform relationship between the signal $E(\mathbf{q})$ and the DP $P(\mathbf{r})$ specified in Equation (1). In the second step, in order to obtain a smooth reconstruction of the DPs over the entire field, we apply the proposed dictionary learning algorithm to the set of square root densities obtained by taking the square roots of the DPs estimated in the first step. The implementation details are given below.

Despite the simple relationship between the diffusion signal and the DP described in Equation (1), it is often infeasible to reconstruct the DP from the diffusion signal directly through the Fourier transform. The reason is that, in practice, the diffusion signal is sampled in q-space following some pre-specified sampling scheme, which may be neither uniform nor regular. One solution is to define a regular lattice in q-space and estimate the values on this lattice through interpolation. Inspired by the work of Ye et al. [15], we choose the Body Centered Cubic (BCC) lattice as our regular lattice for interpolation and apply the Fourier transform to the interpolated values to obtain an estimate of the DPs. As demonstrated in [15], in 3-D the BCC lattice is the optimal lattice for q-space sampling because its reciprocal lattice (i.e., the FCC lattice) is the densest sphere-packing lattice.

Specifically, given $N$ sample measurements $E(\mathbf{q}_n)$ on multiple spherical shells, the desired $K$ values $E(\mathbf{x}_k)$ on the BCC lattice points $\mathbf{x}_k \in \mathcal{L}$ can be estimated by solving the linear system $E(\mathbf{q}_n) = \sum_{\mathbf{x}_k \in \mathcal{L}, 1 \le k \le K} E(\mathbf{x}_k)\, \mathrm{sinc}_{\mathcal{L}}(\mathbf{q}_n - \mathbf{x}_k)$, $n = 1, \ldots, N$, where $\mathrm{sinc}_{\mathcal{L}}(\mathbf{x})$ is the ideal interpolation function that depends on the sampling lattice $\mathcal{L}$; the $\mathrm{sinc}_{\mathcal{L}}(\mathbf{x})$ for the BCC lattice is derived by Ye et al. in [15]. Once the $K$ estimates $E(\mathbf{x}_k)$ on the lattice are obtained, we get a continuous representation of $E(\mathbf{q})$ as $E(\mathbf{q}) = \sum_{\mathbf{x}_k \in \mathcal{L}, 1 \le k \le K} E(\mathbf{x}_k)\, \mathrm{sinc}_{\mathcal{L}}(\mathbf{q} - \mathbf{x}_k)$. Taking the Fourier transform of this expression, we get $P(\mathbf{r}) = \mathrm{box}(\mathbf{r}) \sum_{\mathbf{x}_k \in \mathcal{L}, 1 \le k \le K} E(\mathbf{x}_k)\exp(2\pi i\, \mathbf{x}_k\cdot\mathbf{r})$, where $\mathrm{box}(\mathbf{r})$ is the Fourier transform of $\mathrm{sinc}_{\mathcal{L}}(\mathbf{q})$.
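A minimal sketch of this two-step computation is given below: the lattice values E(x_k) are estimated from the shell samples by a least-squares solve of the linear system, and P(r) is then evaluated from the lattice values. The true BCC interpolation kernel sinc_L is the one derived in [15] and is not reproduced here; the separable np.sinc product below is only a hypothetical placeholder, the sample arrays and the small Cartesian stand-in lattice are assumptions, and the box(r) factor is omitted.

```python
import numpy as np

def kernel_placeholder(diff):
    """Hypothetical stand-in for sinc_L(q - x_k); NOT the BCC kernel derived in [15]."""
    return np.prod(np.sinc(diff), axis=-1)

def estimate_lattice_values(q_samples, E_samples, lattice):
    """Least-squares solve of E(q_n) = sum_k E(x_k) sinc_L(q_n - x_k) for the E(x_k)."""
    A = kernel_placeholder(q_samples[:, None, :] - lattice[None, :, :])   # (N, K) system
    E_lattice, *_ = np.linalg.lstsq(A, E_samples, rcond=None)
    return E_lattice

def propagator_from_lattice(r, lattice, E_lattice):
    """Evaluate P(r) ~ sum_k E(x_k) exp(2*pi*i x_k . r), omitting the box(r) factor."""
    return float(np.real(np.sum(E_lattice * np.exp(2j * np.pi * (lattice @ r)))))

# Toy usage with random samples and a small Cartesian stand-in lattice.
rng = np.random.default_rng(3)
q_samples = rng.standard_normal((200, 3))
E_samples = np.exp(-2.0 * np.sum(q_samples ** 2, axis=1))
lattice = np.stack(np.meshgrid(*[np.linspace(-1, 1, 5)] * 3, indexing="ij"), -1).reshape(-1, 3)
E_lat = estimate_lattice_values(q_samples, E_samples, lattice)
p_val = propagator_from_lattice(np.array([0.1, 0.0, 0.0]), lattice, E_lat)
```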

According to the definition of the DP, $P(\mathbf{r})$ is a PDF defined on the 3-D displacement space of $\mathbf{r}$. Therefore, by adopting the square root parameterization, we can map the estimated DPs from the space of PDFs to the manifold of square root densities. Letting $\psi(\mathbf{r}) = \sqrt{P(\mathbf{r})}$ denote the square root of the DP at a single voxel, we apply the proposed dictionary learning algorithm to the set of $\psi(\mathbf{r})$ over the entire field. After solving for the globally defined dictionary $\mathcal{D}$ and the coefficient matrix $C$ over the field, the reconstructed DPs are obtained by solving a weighted mean problem on the hypersphere, as described in section 2.2.
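The final step then amounts to, for each voxel, combining the learned atoms in the tangent space at s_i, mapping back to the hypersphere with the exponential map, and squaring to recover the propagator. The sketch below shows this; the array shapes are illustrative assumptions.

```python
import numpy as np

def exp_map(psi, v):
    nv = np.linalg.norm(v)
    return psi if nv < 1e-12 else np.cos(nv) * psi + np.sin(nv) * v / nv

def log_map(psi_i, psi_j):
    u = psi_j - (psi_i @ psi_j) * psi_i
    nu = np.linalg.norm(u)
    theta = np.arccos(np.clip(psi_i @ psi_j, -1.0, 1.0))
    return np.zeros_like(psi_i) if nu < 1e-12 else u * theta / nu

def reconstruct_field(S, atoms, C):
    """S: (n_voxels, d) square root DPs; atoms: (m, d); C: (n_voxels, m) coefficients.
    Returns reconstructed propagator values at each voxel, shape (n_voxels, d)."""
    recon = np.empty_like(S)
    for i, (s_i, c_i) in enumerate(zip(S, C)):
        v_i = sum(c_ij * log_map(s_i, a_j) for c_ij, a_j in zip(c_i, atoms))
        psi_hat = exp_map(s_i, v_i)          # approximation on the hypersphere
        recon[i] = psi_hat ** 2              # square root density -> density P(r)
    return recon
```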

4 Experiments

In this section, we evaluate our reconstruction method by comparing it to a traditional dictionary learning based DP reconstruction method on both synthetic and real data sets.

4.1 Synthetic Data

We synthesized a 32 × 32 field of diffusion signals simulating two straight fibers crossing at the center. The signals were generated using a mixture of two Gaussian functions. The data were sampled on multiple q-shells using the interlaced scheme described in [15]. Note that this sampling scheme is not a prerequisite for our proposed method; we used it simply because of its high resolution in q-space. Rician noise with level δ varying from 0.05 to 0.3 was added to the generated data.

Next we describe the parameter settings of our reconstruction framework. In the first step, we chose a BCC lattice to interpolate the signals onto, consisting of two staggered Cartesian lattices of size 11 × 11 × 11 and 12 × 12 × 12, respectively. The ‖r‖ value at which P(r) was evaluated was set to 18. In the dictionary learning on the square root density manifold, the dictionary size was set to 100.
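For reference, a BCC lattice of this staggered form can be generated as below; the spacing and centering are illustrative assumptions and this snippet does not reproduce the q-space extent used in our experiments.

```python
import numpy as np

def bcc_lattice(n_primary=11, n_secondary=12, spacing=1.0):
    """Two staggered Cartesian lattices (offset by half a cell) forming a BCC arrangement."""
    g1 = (np.arange(n_primary) - (n_primary - 1) / 2.0) * spacing          # centered grid
    g2 = (np.arange(n_secondary) - (n_secondary - 1) / 2.0) * spacing      # staggered grid
    primary = np.stack(np.meshgrid(g1, g1, g1, indexing="ij"), -1).reshape(-1, 3)
    secondary = np.stack(np.meshgrid(g2, g2, g2, indexing="ij"), -1).reshape(-1, 3)
    return np.concatenate([primary, secondary], axis=0)                    # (11^3 + 12^3, 3)

lattice_points = bcc_lattice()   # 1331 + 1728 = 3059 points
```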

In order to demonstrate the advantage of incorporating the manifold structure into the reconstruction, we compared our method with the method in [18]. Since the authors of [18] adopted an adaptive kernel framework to model the signal in q-space, which assumes a single b-value in the signal acquisition, their framework cannot handle data acquired on multiple q-shells. Therefore, for the purpose of comparison, we generalized it to multi-shell data by using a tensor product of two 1-D splines in place of the 1-D spline in the kernel they used. We also removed the NLM-based term in the cost function of [18] to ensure a fair comparison with our method.

We applied both methods to the synthetic data generated with varying noise levels. The accuracy of the reconstruction was evaluated in terms of the average angular error over the entire field as well as within the crossing area. The angular error was computed based on the reconstructed P(r) value at ‖r‖ = 18. The quantitative comparisons of the two methods are given in Fig. 1. The plots show that our method achieves higher reconstruction accuracy than the traditional dictionary learning based method. Note that the scales of the average angular error axes differ between the two graphs; comparing them, we can see that the advantage of our scheme is more significant in the crossing area, where an accurate reconstruction is more difficult to achieve. This performance is due to the incorporation of the intrinsic geometric structure of the Riemannian manifold in our reconstruction process.

Fig. 1. The average angular error on the synthetic data set with varying noise levels. (a) Over the entire field. (b) Within the crossing area.

In addition to the numerical comparison, a visual comparison is also provided. The reconstructed P(r) fields from the synthetic data with noise level δ = 0.25 for the two methods are displayed side by side in Fig. 2. As shown in the figure, our method yields a smoother reconstruction over the entire field, especially in the crossing region. Furthermore, in our result (b), the fiber directions can be easily identified at each voxel in the crossing area, whereas in the other one (a) this information is lost at some locations.

Fig. 2. Reconstruction of P(r) on the synthetic data set with noise level δ = 0.25. (a) The traditional dictionary learning based method. (b) The proposed method.

4.2 Real Data

In this section we present experimental results on real diffusion MRI data acquired from two different regions of a mouse brain: 1) a sagittal set through the midline and 2) a coronal set at the level of the corpus callosum. All magnetic resonance imaging was performed on a 600 MHz Bruker imaging spectrometer using a conventional diffusion weighted spin echo sequence. The acquisition parameters for the two data sets are as follows. Dataset 1 was acquired with: slice thickness = 0.35 mm, 1.8 × 0.9 cm² field of view, 256 × 128 data matrix, and 70.3 μm in-plane resolution. The diffusion parameters were: diffusion time Δ = 12 ms, diffusion gradient duration δ = 1 ms, and b-values of 187, 750, 1687 and 3000 s/mm². Dataset 2 was acquired with: slice thickness = 0.3 mm, 1.2 × 1.2 cm² field of view, 192 × 192 data matrix, and 62.5 μm in-plane resolution; the diffusion parameters are identical to those of Dataset 1. The sampling scheme was the same as in the synthetic experiments.

The P(r) reconstruction results for both data sets are displayed in Fig. 3. The images in the first row correspond to Dataset 1, while those in the second row correspond to Dataset 2. In each row, from left to right, the images are, respectively, the S0 image of the entire image plane including the ROI (the region in the red box), the reconstruction result using the traditional dictionary learning based method, and the one given by our proposed method. It is evident from the visualization that our method is better at smoothing out the noise in both data sets and therefore yields smoother DP fields. Furthermore, the fiber orientations estimated by our method are in accordance with expectations. As we can see in the ROI of Fig. 3(a), which is part of the mouse cerebellum, the orientations of the DPs in the white matter (corresponding to the dark region in the S0 image) are more consistent in (c) than in (b).

Fig. 3. Reconstruction results on real data. (a) The S0 image of the entire field of Dataset 1, where the ROI is indicated by the red box. (b) P(r) reconstruction using the traditional dictionary learning based method on Dataset 1. (c) P(r) reconstruction using the proposed method on Dataset 1. (d)(e)(f) The corresponding images for Dataset 2.

5 Conclusions

In this paper, we generalized traditional dictionary learning methods from Euclidean spaces to Riemannian manifolds. Specifically, we proposed a novel dictionary learning framework for data on the manifold of square root densities and applied it to the reconstruction of DP fields from multi-shell diffusion MRI data. Through multiple synthetic and real data experiments, we showed that our reconstruction method compares favorably with traditional dictionary learning based DP reconstruction methods, justifying the incorporation of the geometric structure of the data space (the Riemannian manifold of square root densities in our case) into the reconstruction.

Footnotes

* This research was in part funded by the NIH grant NS066340 to Baba C. Vemuri, and by grants AFOSR FA9550-12-1-0304, ONR N000141210862, and NSF CCF-1018149 to Alireza Entezari.

References

1. Basser P, Mattiello J, LeBihan D. Estimation of the effective self-diffusion tensor from the NMR spin echo. Journal of Magnetic Resonance. 1994. doi: 10.1006/jmrb.1994.1037.
2. Callaghan PT. Principles of Nuclear Magnetic Resonance Microscopy. Oxford University Press; 1991.
3. Ozarslan E, Shepherd TM, Vemuri BC, Blackband SJ, Mareci TH. Resolution of complex tissue microarchitecture using the diffusion orientation transform (DOT). NeuroImage. 2006. doi: 10.1016/j.neuroimage.2006.01.024.
4. Jian B, Vemuri BC, Ozarslan E, Carney PR, Mareci TH. A novel tensor distribution model for the diffusion-weighted MR signal. NeuroImage. 2007. doi: 10.1016/j.neuroimage.2007.03.074.
5. Descoteaux M, Deriche R, Le Bihan D, Mangin J, Poupon C. Multiple q-shell diffusion propagator imaging. MIA. 2011. doi: 10.1016/j.media.2010.07.001.
6. Assemlal H, Tschumperle D, Brun L, Siddiqi K. Recent advances in diffusion MRI modeling: Angular and radial reconstruction. MIA. 2011. doi: 10.1016/j.media.2011.02.002.
7. Aharon M, Elad M, Bruckstein A. K-SVD: An algorithm for designing over-complete dictionaries for sparse representation. IEEE Transactions on Signal Processing. 2006.
8. Fletcher P, Joshi S. Riemannian geometry for the statistical analysis of diffusion tensor data. Signal Processing. 2007.
9. Sra S, Cherian A. Generalized dictionary learning for symmetric positive definite matrices with application to nearest neighbor retrieval. ECML. 2011.
10. Caruyer E, Deriche R. Diffusion MRI signal reconstruction with continuity constraint and optimal regularization. MIA. 2012. doi: 10.1016/j.media.2012.06.011.
11. Tuch DS. Q-ball imaging. MRM. 2004. doi: 10.1002/mrm.20279.
12. Wedeen VJ, Hagmann P, Tseng WY, Reese TG, Weisskoff RM. Mapping complex tissue architecture with diffusion spectrum magnetic resonance imaging. MRM. 2005. doi: 10.1002/mrm.20642.
13. Pickalov V, Basser P. 3D tomographic reconstruction of the average propagator from MRI data. ISBI. 2006.
14. Wu Y, Alexander A. Hybrid diffusion imaging. NeuroImage. 2007. doi: 10.1016/j.neuroimage.2007.02.050.
15. Ye W, Portony S, Entezari A, Blackband SJ, Vemuri BC. An efficient interlaced multi-shell sampling scheme for reconstruction of diffusion propagators. IEEE TMI. 2012. doi: 10.1109/TMI.2012.2184551.
16. Bilgic B, Setsompop K, Cohen-Adad J, Wedeen V, Wald L, Adalsteinsson E. Accelerated diffusion spectrum imaging with compressed sensing using adaptive dictionaries. MICCAI. 2012. doi: 10.1007/978-3-642-33454-2_1.
17. Merlet S, Caruyer E, Deriche R. Parametric dictionary learning for modeling EAP and ODF in diffusion MRI. MICCAI. 2012. doi: 10.1007/978-3-642-33454-2_2.
18. Ye W, Vemuri BC, Entezari A. An over-complete dictionary based regularized reconstruction of a field of ensemble average propagators. ISBI. 2012. doi: 10.1109/ISBI.2012.6235711.
19. Schwab E, Afsari B, Vidal R. Estimation of non-negative ODFs using the eigenvalue distribution of spherical functions. MICCAI. 2012. doi: 10.1007/978-3-642-33418-4_40.
20. Cheng J, Jiang T, Deriche R. Nonnegative definite EAP and ODF estimation via a unified multi-shell HARDI reconstruction. MICCAI. 2012. doi: 10.1007/978-3-642-33418-4_39.
21. Spivak M. A Comprehensive Introduction to Differential Geometry. Publish or Perish; 1979.
22. Cetingul HE, Vidal R. Sparse Riemannian manifold clustering for HARDI segmentation. ISBI. 2011.
23. Absil P, Mahony R, Sepulchre R. Optimization Algorithms on Matrix Manifolds. Princeton University Press; 2008.
24. Rao CR. Information and accuracy attainable in the estimation of statistical parameters. Bull Calcutta Math Soc. 1945.
25. Srivastava A, Jermyn I, Joshi S. Riemannian analysis of probability density functions with applications in vision. CVPR. 2007.
