Abstract
Sparsity-based models and techniques have been exploited in many signal processing and imaging applications. Data-driven methods based on dictionary and sparsifying transform learning enable learning rich image features from data and can outperform analytical models. In particular, alternating optimization algorithms have been popular for learning such models. In this work, we focus on alternating minimization for a specific structured unitary sparsifying operator learning problem and provide a convergence analysis. While the algorithm is known to converge to the critical points of the problem in general, our analysis establishes, under mild assumptions, the local linear convergence of the algorithm to the underlying sparsifying model of the data. Analysis and numerical simulations show that our assumptions hold for standard probabilistic data models. In practice, the algorithm is robust to initialization.
Keywords: sparse representations, dictionary learning, transform learning, alternating minimization, convergence guarantees, generative models, fast algorithms
1. Introduction
Various models of signals and images have been exploited in signal processing and imaging applications, such as dictionary and sparsifying transform models, tensor models and manifold models. Wavelets and other analytical sparsifying transforms have been used in compression standards [21], denoising, and magnetic resonance image reconstruction from compressive measurements [19]. While these approaches use fixed or analytical image models that are independent of the input data, there has been rising interest in data-dependent or data-driven models. Learned models may outperform analytical models in various applications. For example, learned dictionaries and sparsifying transforms work well in applications such as denoising [16], inpainting [20,41] and medical image reconstruction [45]. This work focuses on analysing the convergence behaviour of a structured (unitary) sparsifying transform learning algorithm and investigates its ability to recover underlying data models. In the following, we present some background on dictionary and sparsifying operator learning before discussing the specific learning problem and algorithm, and our contributions.
1.1 Background
Signals can be modeled as sparse in different ways, such as in a synthesis dictionary or in a transform domain. In particular, the synthesis dictionary model represents a given signal $y \in \mathbb{R}^n$ as $y \approx Dx$, with $D \in \mathbb{R}^{n \times K}$ denoting the synthesizing dictionary and $x \in \mathbb{R}^K$ denoting the sparse code, i.e. $\|x\|_0 \ll n$, with the $\ell_0$ ‘norm’ counting the number of non-zero vector entries. The synthesis dictionary model is often referred to as a union of (low-dimensional) subspaces model for signals, wherein different signals may be approximately spanned by different subsets of dictionary columns or atoms. Finding the optimal sparse representation for a signal in the synthesis dictionary model involves solving the well-known synthesis sparse coding problem.1 This problem is known to be non-deterministic polynomial-time hard (NP-hard) in general [23], and numerous algorithms exist for approximating the solution to the sparse coding problem [13–15,24,26] that provide the correct solution under certain conditions. On the other hand, the sparsifying transform model assumes that $Wy \approx x$, where $W \in \mathbb{R}^{m \times n}$ denotes a sparsifying transform and $x \in \mathbb{R}^m$ is assumed to exhibit a sparse structure (where zeros correspond to the transform rows that approximately annihilate the signal). The sparsifying transform model is a generalization [30] of the analysis model [22], which assumes that applying an analysis operator to a signal produces several zeros in the output. These models can be viewed as a union of null-spaces model for signals.2 For the transform model, sparse transform-domain approximations are obtained exactly by simple (e.g. hard or soft) thresholding [30].
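The thresholding step for the transform model can be made concrete with a minimal NumPy sketch. All names here (`hard_threshold`, the toy rotation $W$) are our own illustrative choices; the example keeps the $s$ largest-magnitude entries of the transformed signal and zeros out the rest:

```python
import numpy as np

def hard_threshold(v, s):
    """Keep the s largest-magnitude entries of v; zero out the rest."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-s:]   # indices of the s largest magnitudes
    out[keep] = v[keep]
    return out

# Toy example: a 2D signal that is exactly 1-sparse under a unitary rotation W.
theta = 0.25
W = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])
y = W.T @ np.array([3.0, 0.0])          # signal built to be sparse in the W domain
x = hard_threshold(W @ y, s=1)          # transform-domain sparse approximation
assert np.count_nonzero(x) == 1
```

Unlike synthesis sparse coding, this step is exact and costs only a sort per signal.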
The learning of dictionaries and sparsifying transforms from a collection of signals has been explored in many recent works [3,25,30,36,39,47]. The learning problems are often highly non-convex (e.g. with non-convexity arising from the product of matrices structure or from non-convex constraints such as the $\ell_0$ “norm”), and many learning algorithms lack proven convergence or model recovery guarantees. Recent works [1,2,5,8,9,31,40,46] have studied the convergence of specific learning algorithms. Some of these works demonstrate promising results in applications for efficient synthesis dictionary [8,9,46] or transform [10,31] learning algorithms and prove convergence of the learning methods to the critical points (or generalized stationary points [34]) of the underlying costs. These works all employ the $\ell_0$ “norm” or other non-convex regularizers in their costs, which work well in applications. Other works such as [1,2] use the $\ell_1$ norm and prove recovery of the underlying generative model for specific learning methods based on alternating minimization, but rely on restrictive assumptions on sparsity and the initial error. Arora et al. [4] analysed alternating minimization approaches to synthesis dictionary learning and provided a convergence radius (i.e. initializations within the radius guarantee convergence), but the upper bound on the iterate error included a non-zero offset, and fresh samples may be needed in each iteration. In [5], the authors propose and analyse polynomial-time algorithms for learning overcomplete dictionaries but note that their algorithms are not suitable for large-scale applications due to computational runtime costs. Moreover, these and other schemes [40] have not been demonstrated to be practically powerful in applications such as inverse problems and can be computationally expensive.
Often, additional properties may be enforced on the model during learning, such as incoherence [11,28], non-singularity [40], etc. In a recent two-part work, Sun et al. [42,43] focused on complete dictionaries and studied the geometric properties of the non-convex objective for dictionary learning over a high-dimensional sphere. Their work showed that, with high probability, there are no spurious local minimizers, and proposed an algorithm that converges to local minimizers. While other works such as [4,6,12,38] provided theoretical guarantees for specific dictionary learning algorithms, they do not enforce structural constraints on the dictionary during learning. This work enforces the learned model to be unitary, which has been demonstrated to be both effective and computationally advantageous in practice [7,18,29,32]. While alternating minimization algorithms for general synthesis dictionary learning typically require iterative, greedy or other approximate techniques to solve the subproblems [1,3], the corresponding algorithms with unitary models, even with the $\ell_0$ “norm”, typically have efficient closed-form solutions [32]. Although unitary dictionary learning has shown promise empirically, there has been a lack of theoretical guarantees for the proposed methods [7,18,29]. Given the recently increasing interest in such models and their effectiveness in applications such as inverse problems [32,33], our work focuses on analysing the convergence of algorithms for such structured non-convex learning problems.
In the following section, we outline the structured (unitary) operator learning approach that involves simple, computationally cheap updates. We investigate its convergence properties in the rest of the paper.
1.2 Unitary operator learning formulation and algorithm
Given an $n \times N$ training data set $Y \in \mathbb{R}^{n \times N}$, whose columns represent training signals, our goal is to find an $n \times n$ sparsifying transformation matrix $W$ and an $n \times N$ sparse coefficients (representation) matrix $X$ by solving the following constrained optimization problem:

$$\min_{W,\, X} \; \|WY - X\|_F^2 \quad \text{s.t.} \quad WW^T = I, \;\; \|x_i\|_0 \leq s \;\; \forall\, i. \tag{1.1}$$
We focus on the learning of unitary sparsifying operators ($WW^T = I$, with $I$ denoting the identity matrix) that have shown promise in applications such as denoising [29] and medical image reconstruction [32]. The columns $x_i$ of $X$ have at most $s$ non-zeros (measured using the $\ell_0$ “norm”), where $s$ is a given parameter. Alternatives to Problem (1.1) involve replacing the column-wise sparsity constraints with a constraint on the total sparsity (aggregate sparsity) of the entire matrix $X$, or using a sparsity penalty (e.g. $\ell_p$ penalties with $p \leq 1$). Problem (1.1) is an instance of sparsifying transform learning [27,30], with a unitary constraint on the operator or filter set. Sparsifying transform learning generalizes conventional analysis dictionary learning. Analysis dictionary learning approaches typically minimize the $\ell_0$ or $\ell_1$ norm of the analysis representation of the data subject to non-triviality constraints on the analysis dictionary that prevent trivial solutions such as the all-zero matrix [48]. Popular variations to model noisy data minimize the data-fidelity error subject to sparsity-type constraints on the analysis representation of the denoised data and constraints on the analysis dictionary [35,36,49]. Sparse coding in the latter variation (i.e. estimating the denoised data for a fixed analysis dictionary) can be NP-hard in general. Problem (1.1) learns a different generalization of the analysis model, where $WY$ is assumed “approximately” sparse in the transformed domain. Natural signals and images are well known to be approximately sparse in the wavelet or discrete cosine transform (DCT) domain, etc., and such sparsifying transforms have also been exploited for denoising data. Problem (1.1) with the unitary constraint on $W$ is also equivalent to learning a synthesis dictionary $D = W^T$ for sparsely approximating the training data $Y$ as $Y \approx W^T X$.
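The equivalence between the transform and synthesis views follows from the unitary invariance of the Frobenius norm. A quick numerical check (with randomly generated, purely illustrative $W$, $Y$ and $X$) verifies that the two objectives coincide:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 8, 20
# A random unitary W via QR; random data and coefficients (illustrative only).
W, _ = np.linalg.qr(rng.standard_normal((n, n)))
Y = rng.standard_normal((n, N))
X = rng.standard_normal((n, N))

# Unitary invariance: ||WY - X||_F = ||W(Y - W^T X)||_F = ||Y - W^T X||_F,
# so learning W in (1.1) matches learning the synthesis dictionary D = W^T.
lhs = np.linalg.norm(W @ Y - X, 'fro')
rhs = np.linalg.norm(Y - W.T @ X, 'fro')
assert np.isclose(lhs, rhs)
```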
Alternating minimization algorithms are commonly used for learning synthesis dictionaries [3,17,37,39], analysis dictionaries (such as in the noisy data variation above) [36,49] and sparsifying transforms [27,30,44]. In particular, unlike sparse coding in the first two models, which can be NP-hard in general, computing sparse approximations in the transform model is cheap, involving only thresholding; thus, various efficient and effective algorithms have been proposed for transform learning with different properties or constraints on $W$. One could alternate between solving for $X$ and $W$ in Problem (1.1) [29,31]. In this case, the solution for the $t$th $X$ update (sparse coding step) is obtained as $x_i^t = H_s(W^{t-1} y_i)$, where $x_i^t$ and $y_i$ denote the $i$th columns of $X^t$ and $Y$, respectively, and the operator $H_s(\cdot)$ zeros out all but the $s$ largest magnitude elements of a vector, leaving other entries unchanged (i.e. thresholding to the $s$ largest magnitude elements). The solution for the subsequent $W$ update (operator update step) is obtained by first computing the full singular value decomposition (SVD) of $X^t Y^T$ as $U \Sigma V^T$, and then setting $W^t = U V^T$. The algorithm repeats these relatively cheap updates until convergence. The overall method is provided in Algorithm 1.
Although Problem (1.1) is non-convex because of the $\ell_0$ sparsity and unitary operator constraints, the alternating minimization algorithm involves cheap, closed-form update steps. The thresholding-type solution for the sparse coding step readily generalizes to alternative formulations, such as those with an aggregate sparsity constraint or sparsity penalties [31]. These advantages of unitary operator learning (which also extend to general sparsifying transform learning [44]) and its effectiveness in applications [32] render it quite attractive vis-à-vis alternatives such as overcomplete synthesis dictionary learning, and hence we investigate it further in this work.
Problem (1.1) can also be interpreted as training an efficient convolutional or filterbank model [27,33] for two-dimensional (or higher dimensional) images, with thresholding-type nonlinearities. To see this, observe that if overlapping $\sqrt{n} \times \sqrt{n}$ patches of an image or collection of images are (vectorized and) used for training with a periodic image boundary condition (so patches at image boundaries wrap around to the opposite side of the image) and a patch stride of 1 pixel in the horizontal and vertical directions (maximal patch overlap), then the transform learned by Problem (1.1) is applied to sparse code the data by first applying each row to all the image patches via inner products, followed by thresholding operations. The sparse outputs of the transform are thus generated by circularly convolving its reshaped (into two-dimensional patches and flipped) rows with the image, followed by thresholding. Thus, Problem (1.1) adapts a collection of orthogonal sparsifying filters for images, and Algorithm 1 can also be implemented with filtering-based operations.
1.3 Contributions
In this work, we investigate the convergence properties of the aforementioned efficient alternating minimization algorithm for unitary sparsifying operator learning. Recent works have shown convergence of the algorithm (or its variants) to critical points of the equivalent unconstrained problem [10,31,32], where the constraints are replaced with barrier penalties (that take the value $+\infty$ when the constraint is violated and $0$ otherwise). Here we further prove the fast local linear convergence of the algorithm to the underlying data models. Our results hold under mild assumptions that depend on the properties of the underlying sparse coefficients matrix $X^*$. In addition to showing convergence, we also characterize the convergence radius and rate, and discuss general and example distributions of the data for which our results hold. We also show experimentally that our assumptions and convergence guarantees hold for well-known probabilistic models of $X^*$. Our experiments and initial arguments indicate that the learning algorithm is robust to initialization.
1.4 Organization
The rest of this paper is organized as follows. Section 2 presents the main convergence results and proofs. Section 3 presents experimental results supporting the statements in Section 2 and illustrating the empirical behaviour of the transform learning algorithm. In Section 4, we conclude with proposals for future work.
Algorithm 1 Alternating optimization for (1.1)
Input: Training data matrix $Y$, maximum iteration count $T$, sparsity $s$
Output: the final iterates $W^t$, $X^t$
Initialize: $W^0$ and $t = 1$
for $t = 1, \ldots, T$ do
$x_i^t = H_s(W^{t-1} y_i) \;\; \forall\, i$ ⊳ sparse coding step
$U \Sigma V^T = \mathrm{SVD}(X^t Y^T)$ ⊳ operator update step
$W^t = U V^T$
end for
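Algorithm 1 can be sketched in a few lines of NumPy. The implementation below is illustrative (function and variable names are our own, and the synthetic generative model with its parameters is an arbitrary instance of Assumptions (A1)–(A2)), using per-column hard thresholding for the sparse coding step and the SVD-based closed-form solution for the operator update:

```python
import numpy as np

def hard_threshold_cols(A, s):
    """Keep the s largest-magnitude entries in each column of A."""
    X = np.zeros_like(A)
    idx = np.argsort(np.abs(A), axis=0)[-s:, :]     # per-column top-s row indices
    np.put_along_axis(X, idx, np.take_along_axis(A, idx, axis=0), axis=0)
    return X

def learn_unitary_transform(Y, s, T=50, W0=None):
    """Alternating minimization for (1.1): thresholding + SVD (Procrustes) steps."""
    n = Y.shape[0]
    W = np.eye(n) if W0 is None else W0
    for _ in range(T):
        X = hard_threshold_cols(W @ Y, s)           # sparse coding step
        U, _, Vt = np.linalg.svd(X @ Y.T)           # operator update step
        W = U @ Vt
    X = hard_threshold_cols(W @ Y, s)               # final coefficients
    return W, X

# Synthetic data following the generative model Y = Wstar^T Xstar with s-sparse
# columns (all sizes below are illustrative).
rng = np.random.default_rng(0)
n, N, s = 16, 2000, 3
Wstar, _ = np.linalg.qr(rng.standard_normal((n, n)))
Xstar = hard_threshold_cols(rng.standard_normal((n, N)), s)
Y = Wstar.T @ Xstar

W0 = Wstar + 0.1 * rng.standard_normal((n, n))      # perturbed initialization
W, X = learn_unitary_transform(Y, s, T=100, W0=W0)
print(np.linalg.norm(W @ Y - X, 'fro'))             # final objective value
```

Since each step exactly minimizes the objective over one block of variables, the objective value is monotone non-increasing across these updates.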
2. Convergence analysis
The main contribution of this work is the convergence analysis of Algorithm 1. We begin this section by outlining the notation and the assumptions under which our analysis operates. Following this, we summarize the theoretical guarantees of our work and present the proofs of these results.
2.1 Notation
We adopt the following notation in the rest of the paper. Matrix
denotes the
sparse coefficients matrix,
is the
sparsifying transform, and
denotes the
data set. The
th approximation of a variable (iterate in the algorithm) is denoted
. The capital letter
is reserved for the transpose operator, i.e. the variable
should be read as the transpose of the
th approximation for
. With the exception of
, capitalized letters are used for matrices and lowercase letters are used for vectors, with further subscripts denoting the row, column or entry of the matrix or vector. The
th row,
th column and the
th entry of a matrix
are denoted
,
and
, respectively. For any vector
,
denotes the function that returns the support, i.e.
, where
denotes the
th entry (scalar) of
. The operator
leaves the
largest magnitude elements of
unchanged and zeros out all other entries (i.e. thresholding to
largest magnitude elements). Matrix
denotes an
diagonal matrix of ones and a zero at location
. Additionally,
denotes an
diagonal matrix that has ones at entries
for
and zeros elsewhere, and matrix
is defined in Section 2.2 (see Assumption
). The Frobenius norm, denoted
, is the square root of the sum of squared elements of
and
denotes the spectral norm. Lastly,
denotes the appropriately sized identity matrix.
2.2 Assumptions
We begin with the following assumptions that will be used in various results:
(A1) Generative model: There exists a
and unitary
such that
and
(normalized data).

(A2) Sparsity: The columns of
are
-sparse, i.e.
.

(A3) Spectral property: The underlying
satisfies the bound
, where
denotes the condition number (ratio of largest to smallest singular value).

(A4) Orthogonal coefficients: The rows of
are orthonormal, i.e.
.

(A5) Initialization:
for an appropriate small
.
The first two assumptions are on the model for the data, i.e. we would like the algorithm to find an underlying (unitary) sparsifying transform and representation matrix such that
holds (data generated as
), where the columns of
have
non-zeros. The coefficients are assumed “structured” in Assumption
, satisfying a spectral property, which will be used to establish our theoretical results. When
and
, we show that Assumption
simplifies to very intuitive and deterministic conditions of uniqueness (the support of no two rows of
fully coincide) and irreducibility (each row of
has at least one non-zero, i.e. each atom or row of
contributes to at least one non-zero in the data representation). More generally or when
, the condition that each row of
has at least one non-zero is still required in order for
to hold (as otherwise
) but the assumption does not reduce to a simple setting. We will present an analysis and empirical results showing that the spectral property holds for well-known probabilistic models. The analysis will also show that in general the underlying matrices
and
defining the spectral property behave similarly for the probabilistic models as
as they do for the special
case above. Assumption
on orthogonality of coefficient matrix (normalized) rows simplifies the condition in Assumption
(since
) and is used in presenting/proving one version of the results, but is omitted in the generalization. For well-known probabilistic models of the coefficient matrix, we will show that the orthogonality holds asymptotically. Assumption
on algorithm initialization states that the initial sparsifying transform,
is sufficiently close to the solution
. Such an assumption has also been made in other works, where the issue of good initialization is tackled separately [1,2]. Section 2.3 characterizes
in Assumption
in more detail. While the main results in Section 2.3.1 use Assumption
, we also discuss a generalization in Section 2.3.4. Our theoretical results are stated next.
2.3 Results
In the following, Theorem 2.1 first presents a convergence result using all the aforementioned assumptions. Then Theorem 2.2 generalizes the result by dropping Assumption
. Proposition 2.3 states that Assumption
holds under a general probabilistic model on the sparse representation matrix
. We also later show numerical results illustrating Proposition 2.3. We also provide a corollary on a special case of Theorems 2.1 and 2.2 and some remarks. In particular, Remark 2.1 discusses dropping the data normalization assumption in
, and Remark 2.2 discusses the effect of noise on Theorems 2.1 and 2.2. Proposition 2.4 and Remark 2.3 characterize and discuss the behaviour of
in Assumption
.
2.3.1 Main results
Theorem 2.1
Under Assumptions
–
, the Frobenius error between the iterates generated by Algorithm 1 and the underlying model in Assumptions
and
is bounded as follows:
(2.1) where
and
is fixed based on the initialization.
Here, the symbol “
” indicates equality up to first-order terms, with the remaining terms negligible. We will mostly refer to the dominant component of
in the discussions. The latter components are considered in more detail in the convergence radius analysis later (Section 2.3.3 and Appendix A).
Theorem 2.2
Under Assumptions
–
and
, the iterates in Algorithm 1 converge linearly to the underlying model in Assumptions
and
, i.e. the Frobenius error between the iterates and the underlying model satisfies
(2.2) where
and
is fixed based on the initialization.
Next we discuss special cases of Theorems 2.1 and 2.2 when
. In the case of Theorem 2.1, a simple intuitive condition that the supports of no two rows of
fully overlap ensures linear convergence (
), i.e. ensures Assumption
holds.
Corollary 2.1
(Case
) For Theorem 2.1, when
and no two rows of
have identical support, then
holds in Assumption
. For Theorem 2.2 (without Assumption (
)), when
, then
holds in Assumption (
) if
for all
, where the norm is computed only with respect to the elements of
in the support
.
Remark 2.1 discusses the effect of dropping the data normalization assumption (in
) on the convergence rate. In particular, the convergence rate factor
is modified by being normalized by
, keeping it invariant to scaling of
.
Remark 2.1
When the unit spectral norm condition on
in Assumption
is dropped, the
bound in Theorem 2.2 holds with
. The bound
as in (2.2) holds with the aforementioned
but with
replaced by
.
As will be clear from the proofs in Section 2.4, when Assumption
stating
is relaxed to
, then the (common) linear contraction factor
for the error in each iteration in Theorem 2.2 (with respect to previous iteration’s error) is replaced with
, where
is defined similar to
but with respect to
(which is shown in Section 2.4 to contain
).
Finally, we have the following generalization of Theorem 2.2 for noisy models. The
in Assumption
would be smaller in the presence of noise and the noise is assumed small enough so that the support recovery and Taylor Series convergence properties used in the proofs in Section 2.4 hold.
Remark 2.2
When a noisy model of the data is used in Assumption
, i.e.
, where
denotes noise, then for sufficiently small noise, Theorem 2.2 holds, except that the term
, where
is a constant, is added to the right-hand side of (2.2).
2.3.2 Convergence rate
While our main results assume that the spectral property in Assumption
holds, the next result discusses the scenario and models under which the assumption
is generally valid.
Proposition 2.3
Suppose the locations of the
non-zeros in each column of
are chosen independently and uniformly at random, and the non-zero entries are i.i.d. with mean zero and variance
. Then, for fixed
,
, and
, we have that
for large enough
with high probability. In particular, we have the following limit almost surely:
(2.3)
Proposition 2.3 holds for several well-known distributions of
such as when its column supports are drawn independently and uniformly at random and the non-zero entries are (a) i.i.d. with
or (b) i.i.d. scaled (by
) random signs with ‘+’ and ‘−’ being equally probable. Section 3 empirically shows the algorithm’s convergence and the behaviour of
when
, a commonly used sparsity criterion in many applications (i.e. with
, where
is a small fraction).
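A small Monte Carlo sketch (parameters illustrative) can make the probabilistic model of Proposition 2.3 concrete. Below, sparse coefficient matrices are drawn with uniformly random column supports and i.i.d. standard Gaussian non-zeros, and the suitably normalized row Gram matrix is checked to approach the identity as the number of training signals grows, consistent with the row orthogonality of Assumption (A4) holding asymptotically:

```python
import numpy as np

def random_sparse_coeffs(n, N, s, rng):
    """Columns have s non-zeros at uniformly random locations, i.i.d. N(0,1)."""
    X = np.zeros((n, N))
    for j in range(N):
        supp = rng.choice(n, size=s, replace=False)
        X[supp, j] = rng.standard_normal(s)
    return X

rng = np.random.default_rng(0)
n, s = 16, 3
for N in (100, 10000):
    X = random_sparse_coeffs(n, N, s, rng)
    # E[X X^T] = N*(s/n)*I for this model, so the scaled Gram matrix tends to I.
    G = (X @ X.T) * (n / (N * s))
    off = G - np.diag(np.diag(G))
    print(N, np.linalg.norm(off) / np.linalg.norm(np.diag(G)))
```

The printed ratio of off-diagonal to diagonal energy shrinks (roughly like $1/\sqrt{N}$) as $N$ grows.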
2.3.3 Convergence radius
While the main convergence results make use of Assumption
, here we discuss the behaviour of the convergence radius
, including when the number of training signals
. The following proposition and remark characterize a sufficient
in Assumption
for Theorems 2.1 and 2.2.
Proposition 2.4
The iterate convergence in Theorem 2.1 holds when the radius of convergence
in Assumption
satisfies
, where
with
computing the smallest non-zero magnitude in a vector, and
with
, and
is defined as follows:
(2.4)
In Proposition 2.4,
arises from the sparse coding step of Algorithm 1 and ensures recovery of the support of the underlying sparse coefficients. The bound
arises in the operator update step of Algorithm 1 and is primarily to ensure the convergence and boundedness of Taylor Series expansions discussed in the proof. The largest permissible
that suffices for Theorem 2.1 is obtained by maximizing the function
over
. The end points of this interval both correspond to
. So the maximum of the continuous (non-negative) function
would occur inside the interval. The constant
is monotone decreasing as
with the limiting
. The result indicates that the radius of convergence depends on the properties of the underlying sparse coefficients.
Remark 2.3
Proposition 2.4 holds for Theorem 2.2 but with
depending on
. In particular, as
,
takes the same form as in Proposition 2.4. Moreover, for the distributions in Proposition 2.3,
and
almost surely as
.
Remark 2.3 indicates that
in Proposition 2.4 remains unchanged for Theorem 2.2. However, the
arising from the operator update step depends on
. For example, a bound of
(smaller for larger condition numbers) ensures the convergence of one of the Taylor Series in the proof in Section 2.4.2. Importantly, for the distributions of
in Proposition 2.3, the limiting value of
stated in Remark 2.3 depends only on the ratio
.
The limiting behaviour of
as
would depend on the distribution of
. Appendix B discusses some example distributions that satisfy the assumptions in Proposition 2.3 and have the non-zero values bounded away from zero, for which
holds for each
, where
is a positive constant. In applications, peak physical intensity and numerical precision bound the non-zero entries of the sparse coefficient matrix. In practice, we expect the radius
in Proposition 2.4 to be limited more by
, since
depends approximately only on the ratio
for large
(Remark 2.3) and would be a constant when
.
2.3.4 Discussion of generalization of convergence radius assumptions
Here we discuss the effect of
values larger than in Proposition 2.4 (or Remark 2.3) on the convergence of Algorithm 1. The following lemma shows the behaviour of the sparse coding error for general algorithm initializations (or general
values that may not ensure support recovery).
Lemma 2.1
For
in Algorithm 1 and under Assumptions
and
and denoting
with
for some non-negative
, we have that
(2.5)
Appendix C provides the proof of Lemma 2.1. Lemma 2.1 suggests that regardless of how close the initial transform is to the underlying model, the bound on the sparse coding error is at most twice that in Theorem 2.2. In this case, the contraction factor in the operator update step would need to satisfy
in order to consistently decrease the error. We have
for the operator update step, where
is a diagonal matrix with ones at entries
for
and zeros elsewhere. If the supports of
and
are mismatched, then
could in general have more ones than
. In other words,
could be larger than the
in Theorem 2.2. Thus, the (larger) effective (or overall) factor of
could lead to slow convergence initially from more general initializations. This is also corroborated by the experiments in Section 3, where slower convergence is observed from general initializations until the underlying support is fully recovered, at which point, the linear convergence behaviour predicted in Theorem 2.2 is fully observed, with a similar rate of convergence regardless of initialization.
2.4 Proofs of theorems, corollary, propositions and remarks
We first prove Theorem 2.1, and then briefly present the proof of Theorem 2.2, highlighting the distinctions arising from the generalization. The proof of Corollary 2.1 is presented for the case of Theorem 2.1 (the proof for the case of Theorem 2.2 is similar). The proof of Remark 2.2 follows along the same lines as those of the theorems and is omitted. Finally, the proof of Proposition 2.3 is presented. The proofs of Proposition 2.4 and Remark 2.3 are outlined in Appendix A.
To prove Theorem 2.1, we will first prove two supporting lemmas that establish properties of the iterates. First, Lemma 2.2 shows that the error between the iterate
and
is bounded and the bound depends on the approximation error with respect to
for the initial
(bounded by
as in Assumption
). Lemmas 2.3 and 2.4 show that the error between the first
iterate (
) and
is bounded above by
for Theorems 2.1 and 2.2, respectively. Similar bounds are shown to hold for subsequent iterations. Therefore, for Algorithm 1 to converge linearly, one only needs
as in Assumption
or as established by Proposition 2.3. The scaling indicated in Remark 2.1 follows from the proofs of Lemmas 2.2 and 2.4.
2.4.1 Proof of Theorem 2.1
For our proofs, we define the sequences
and
such that
(2.6)
(2.7)
Lemma 2.2
(Approximation error for
) For
in Algorithm 1 and under Assumptions
, the Frobenius norm of the approximation error of the estimated sparse coefficients with respect to
is bounded by
as defined in
. In particular, we have that
where
.
Proof.
For each column indexed by
, of the sparse coefficients matrix
, the following hold:
(2.8) where
is a diagonal matrix with a one in the
th entry if
and zero otherwise and
is as defined in (2.6). The last equality above follows from the fact that the support of
includes that of
, for small enough
(Assumption
). In particular, since
, we have
Therefore, whenever
with
being the smallest non-zero magnitude vector entry, the support of
includes3 that of
(the entries of the perturbation
are not large enough to change the support). The following results then hold:
Here,
follows by definition of
; step
follows from the standard norm inequality for a matrix–matrix product, and the last equality holds because
(Assumption
). By Assumption
,
, which completes the proof.
Lemma 2.3
(Approximation error for
) For
in Algorithm 1 and under Assumptions
, the Frobenius norm of the approximation error of the estimated transform with respect to
is bounded as
where
is a scalar coefficient as in Theorem 2.1.
Proof.
Denote the SVD of
as
. From Algorithm 1, we have
Using the SVD of
, we rewrite the above equations as
(2.9) Now the error between
and
satisfies
(2.10) where the matrix
can be further rewritten as follows:
(2.11) The above equality holds for all
, which suffices to ensure
is invertible. Note that the matrix square root (i.e. the matrix
in the decomposition
) in (b) above is the positive-definite square root.
Using Taylor Series Expansions for the matrix inverse and positive-definite square root along with (2.7) and the assumption
, we have that
(2.12)
(2.13) where
denotes corresponding higher order series terms and is bounded in norm by
for some constant
.
Substituting these expressions in (2.10), the error between the first transform iterate
and
is bounded as
(2.14) The approximation error above is bounded in norm by
, which is negligible for small
. So we only bound the dominant term
on the right. The matrix
clearly has a zero diagonal (skew-symmetric). Thus, we have the following inequalities:
(2.15)
(2.16)
(2.17) where we more simply write (ignoring higher order terms in (2.14))
. Since
by Assumption
, we obtain the desired result.
Thus, we have shown the results for the
case. We complete the proof of Theorem 2.1 by observing that for each subsequent iteration
, the same steps as above can be repeated along with the induction hypothesis (IH) to show that
2.4.2 Proof of Theorem 2.2
Here we present the distinctions in the proof of Theorem 2.2. When Assumption
is dropped, Lemma 2.2 and its proof remain unaffected. The change to Lemma 2.3 and its proof are outlined next.
Lemma 2.4
(Removing Assumption
) For
in Algorithm 1 and under Assumptions
and
, the Frobenius norm of the approximation error of the estimated transform with respect to
is bounded as
where
is a scalar coefficient as in Theorem 2.2.
Proof.
The proof of Lemma 2.4 relies on the general Taylor Series Expansions for the matrix inverse and positive-definite square root. In particular, (2.13) uses these expansions under the assumption that
. To establish a result without this assumption, we first use the general Taylor Series Expansions for the matrix inverse and square root, and then rely on algebraic identities of the Kronecker sum and product to manipulate the error bound of
.
To that end, let
. First, we look at the series expansion of
, for which the following equalities hold:
where we factored out4
and then computed the series expansion of a matrix inverse. The Taylor series converges when
or when
.
For the series expansion of the matrix square root in (2.11), we first observe that
Let
, where
denotes the remainder of terms within the square root. The Taylor Series Expansion for
can be written as
, where the operator
reshapes a matrix into a vector by stacking the columns,
undoes or inverts the
operation by reshaping a vector into an
matrix, and the gradient of the square root function is obtained as follows, where
denotes the Kronecker product and
denotes the Kronecker sum:
(2.18) Using the above expressions, (2.11) in this case becomes
(2.19) with
denoting corresponding higher order series terms in each step above.
Now recall from (2.14) that
, where
. To bound the required error
, first, using the property of the
operator that
, we can easily obtain a simplified expression for
ignoring the
terms (since they are bounded in norm by
, which is negligible for small
and
is a constant) in (2.19) as follows:
(2.20) Denoting the SVD of (positive-definite)
as
, it can be shown that the SVD of the Kronecker sum
is5
or that
. Using these SVDs and the standard result that
(2.21) the following results readily hold:
(2.22)
(2.23) Substituting (2.22) and (2.23) in (2.20) simplifies (2.20) as follows:
(2.24) Moreover, we have that
(2.25) Thus, equation (2.24) further simplifies to
(2.26) where the matrix
is defined as
(2.27) Finally, we use (2.26) to obtain
(2.28) Here, the submultiplicativity of the spectral norm and the fact that
ensure that
(2.29) where the last equality follows from the facts that
(for a unitary matrix);
(by Assumption (
));
, where
denotes the smallest matrix singular value; and the fact that
(using Assumption (
)). Substituting (2.29) in (2.28) and using a similar set of inequalities as in (2.17) to bound the
term in (2.28) provides the following bound:
(2.30) where, for brevity, we write
. Since by Assumption
,
, we obtain the desired result.
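The structural fact used in this proof, that the Kronecker sum of a symmetric positive-definite matrix with itself diagonalizes in the Kronecker-product basis of that matrix's own singular vectors, is easy to verify numerically. In this sketch B is a generic stand-in for the (elided) positive-definite matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
# A random symmetric positive-definite matrix B = U Sigma U^T
M = rng.standard_normal((n, n))
B = M @ M.T + n * np.eye(n)
sig, U = np.linalg.eigh(B)
Sigma = np.diag(sig)

def ksum(P, Q):
    """Kronecker sum: P kron I + I kron Q."""
    return np.kron(P, np.eye(Q.shape[0])) + np.kron(np.eye(P.shape[0]), Q)

# The Kronecker sum inherits the eigenvector structure of B:
# B (+) B = (U kron U)(Sigma (+) Sigma)(U kron U)^T
lhs = ksum(B, B)
rhs = np.kron(U, U) @ ksum(Sigma, Sigma) @ np.kron(U, U).T
assert np.allclose(lhs, rhs)
```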
2.4.3 Proof of Corollary 2.1
We have
(focusing on the dominant component) with
by Assumptions (
) and (
), respectively. For brevity in notation, let
. Here the matrix
zeros out the
th row of
and
zeros out the columns corresponding to the complement of the support of the
th row of
.
The matrix
is then a diagonal matrix where the
th entry is
and the
th entry for
is
, where
coincides with
on
and is zero outside this support. Clearly, the
th row and column of
are zero and its other off-diagonal entries are
because each column of
has at most
non-zeros and
for
. So, we readily have that
where the last inequality follows from the fact that
for all
, which holds because each row of
has unit
norm (Assumption
) and no two rows have the exact same support. 
2.4.4 Proof of Proposition 2.3
Under the conditions stated in Proposition 2.3, the (dominant)
factor is expected to be less than
given sufficient training signals, i.e. large
.
For the proof, we study the asymptotic behaviour of the matrices
and
, where
, which appear in
as defined in Remark 2.1. First, we show that
almost surely as
using
. Then we will show that
almost surely as
using
.
Let
. Then the non-zero entries of
have zero mean and variance of
. Let
denote the indicator function that takes the value
when
and is zero otherwise. Since
, using the law of large numbers, the diagonal entries of
converge almost surely as follows:
(2.31)
where
is i.i.d. over the columns
. The random variable
is non-zero (the non-zero part has mean
) with probability (w.p.)6
and is zero w.p.
, implying
. Similarly, the off-diagonal entries
for
converge as follows:
(2.32)
where
is non-zero w.p.7
and zero w.p.
, implying
, where
is the product of two i.i.d. zero mean random variables. Therefore, from (2.31) and (2.32), it follows that
converges to
almost surely. Thus, as
,
in the definition of
, converges to
almost surely.
Now consider
and note that the
th row and column of the matrix
are zero. As
, the diagonal entries of
have the following limit almost surely:
(2.33)
which holds for all
. The expectation follows from the fact that
is i.i.d. over the columns8
, is non-zero (mean
for non-zero part) w.p.
and is zero otherwise.
The following limit holds almost surely for the off-diagonal entries of
:
(2.34)
which follows because the indices
,
and
all lie in the support of the
th column (to get non-zero indicator function) w.p.
, and the expectation of the product of zero mean i.i.d. random variables is zero. It is obvious from (2.33) and (2.34) that
(2.35)
Thus, as
,
almost surely, and the same is true for
. Combining all the above results, the required result (2.3) is readily established.
Note that under the assumed probabilistic model of
, the matrix
in the proof of Proposition 2.3 above approaches a diagonal matrix as
, whereas in the proof of Corollary 2.1 for the
case, it is deterministically a diagonal matrix for each
.
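The almost-sure limits (2.31)–(2.35) can be illustrated by a short simulation. The sketch below assumes unit-variance non-zeros and illustrative sizes (the values of n, s, and N are not the paper's); the empirical Gram matrix of the sparse codes concentrates around a scaled identity:

```python
import numpy as np

rng = np.random.default_rng(3)
n, s, N = 8, 3, 100_000   # illustrative sizes (assumed, not from the paper)

# Each column: a size-s support chosen uniformly at random,
# non-zeros drawn i.i.d. from N(0, 1)
X = np.zeros((n, N))
for j in range(N):
    X[rng.choice(n, s, replace=False), j] = rng.standard_normal(s)

G = (X @ X.T) / N
# Diagonal entries concentrate near P(i in support) * variance = s/n;
# off-diagonal entries average products of independent zero-mean values -> 0
assert np.allclose(np.diag(G), s / n, atol=0.03)
assert np.abs(G - np.diag(np.diag(G))).max() < 0.02
```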
3. Experiments
In this section, we provide numerical results supporting our findings. We also discuss the empirical behaviour of the algorithm with respect to different initializations.
3.1 Empirical performance of algorithm
In the first two experiments, we generated the training set
using randomly generated
and
, and set
,
, and
. The transform
is generated in each case by applying Matlab’s orth() function to a standard Gaussian matrix. For generating
, the support of each column is chosen uniformly at random and the non-zero entries are drawn i.i.d. from a Gaussian distribution with mean zero and variance
. Section 2 (Theorems 2.1 and 2.2) established model recovery guarantees for Algorithm 1. Figure 1 shows the empirical evolution of the Frobenius norm of the approximation error of the transform iterates with respect to
, for an
initialization (
– see (2.8)). The plots illustrate the observed linear convergence of the iterates to the underlying true operator W
.
Fig. 1.

The performance of Algorithm 1 for recovering W
for
and
.
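The generative setup just described pairs naturally with a generic alternating scheme in the spirit of Algorithm 1, whose exact update formulas are not reproduced here: column-wise hard thresholding for the sparse coding step, and the closed-form orthogonal Procrustes solution for the unitary operator update. All sizes and both update choices in this sketch are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
n, N, s = 16, 2000, 3   # illustrative sizes (assumed, not the paper's)

# Ground-truth unitary transform: orthonormalize a standard Gaussian matrix
Wstar, _ = np.linalg.qr(rng.standard_normal((n, n)))

# Ground-truth sparse codes: per-column random support, i.i.d. Gaussian non-zeros
Xstar = np.zeros((n, N))
for j in range(N):
    Xstar[rng.choice(n, s, replace=False), j] = rng.standard_normal(s)

Y = Wstar.T @ Xstar          # training data, so that Wstar @ Y = Xstar

def hard_threshold(Z, s):
    """Keep the s largest-magnitude entries of each column, zero the rest."""
    out = np.zeros_like(Z)
    idx = np.argpartition(-np.abs(Z), s - 1, axis=0)[:s]
    np.put_along_axis(out, idx, np.take_along_axis(Z, idx, axis=0), axis=0)
    return out

W = Wstar + 0.01 * rng.standard_normal((n, n))   # small local perturbation
err_init = np.linalg.norm(W - Wstar)
for _ in range(30):
    X = hard_threshold(W @ Y, s)                 # sparse coding step
    U, _, Vt = np.linalg.svd(X @ Y.T)            # unitary update (Procrustes)
    W = U @ Vt
err_final = np.linalg.norm(W - Wstar)            # far below err_init on success
```

The Procrustes step solves min ||W Y - X||_F over unitary W via the SVD of X Y^T, which is the usual closed form for this subproblem.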
Figures 2 and 3 show the behaviour of Algorithm 1 with different initializations. We consider six different initializations and plot the evolution of the objective function over iterations. The first initialization, labelled ‘eps’, denotes an initialization as in Fig. 1 with
. The other initializations are as follows: entries of
drawn i.i.d. from a standard Gaussian distribution (labelled ‘rand’); an
identity matrix
labelled ‘id’; a discrete cosine transform (DCT) initialization labelled ‘dct’; entries of
drawn i.i.d. from a uniform distribution ranging from 0 to 1 (labelled ‘unif’); and
labelled ‘zero’. Note that the minimum objective value in (1.1) is
. For non-epsilon initializations, we see that the behaviour of Algorithm 1 is split into two phases. In the first phase, the iterates slowly decrease the objective. When the iterates are close enough to a solution, the second phase occurs and during this phase, Algorithm 1 enjoys rapid convergence (towards 0). For different initializations, the algorithm converged to a scaled (by a diagonal
matrix), row-permuted version of the predetermined W
. Figures 2 and 3 also show the proportion of recovered (entry-wise) support of
(up to row-permutation and sign changes). The grey region highlights the range of iterations in which the true support of
is estimated well by the different initializations (i.e. where the proportion of recovered support reaches near 1 or 100%). These empirical results show that the aforementioned second phase of the convergence behaviour occurs in the iterations following the point when the algorithm acquires the true support of
. Furthermore, note that the objective’s convergence rate in the second phase is similar to that of the ‘eps’ case, where
is selected to ensure that the support of
is recovered in one iteration. These results concur with the analysis and discussion in Section 2.
Fig. 2.

The performance of Algorithm 1 over iterations with various initializations for
: objective function (left) and proportion of recovered support of
(right).
Fig. 3.

The performance of Algorithm 1 with various initializations for
: objective function (left) and proportion of recovered support of
(right).
The behaviour of Algorithm 1 is similar for
and
, with the latter case taking more iterations to enter the second phase of convergence. This makes sense since there are more coefficients to learn for larger
. This experiment shows that Algorithm 1 is robust to initialization.
3.2 The
factor in Proposition 2.3
In our last experiment, we illustrate Proposition 2.3 empirically. For each trial, we fix the signal dimension to be
. In addition to varying
, we vary
In the first case, the
non-zero entries for each column of
are selected uniformly at random where values are drawn i.i.d. from a Gaussian distribution with mean
and variance
. We also simulate the case when the non-zeros are i.i.d. scaled random signs with mean
and variance
, with ‘+’ and ‘-’ being equally probable. We then compute the following functions of
: the condition number
, the maximum spectral norm over choice
for
and the contraction factor
that is a function of these quantities. The top and bottom rows of Fig. 4 plot these quantities for the Gaussian and scaled sign coefficients, respectively.
Fig. 4.

On the x-axis we plot the number of training data points
and on the y-axis, (left) the condition number
, (center) the maximum spectral norm over choice
for
and (right) the contraction factor
. The top row of plots corresponds to the case when the non-zeros are i.i.d. Gaussian and the bottom plots correspond to the non-zeros being i.i.d. scaled random signs. In both cases,
and we vary
.
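A minimal version of this experiment can be run with assumed sizes (n = 8, s = 3 below, which are not the paper's values) and with unit ±1 signs standing in for the scaled-sign distribution; the condition number of the empirical Gram matrix of the sparse codes approaches 1 as the number of training signals grows, for both distributions:

```python
import numpy as np

rng = np.random.default_rng(5)
n, s = 8, 3   # illustrative signal dimension and per-column sparsity (assumed)

def sparse_codes(N, draw):
    """Columns with uniformly random size-s supports and i.i.d. non-zeros."""
    X = np.zeros((n, N))
    for j in range(N):
        X[rng.choice(n, s, replace=False), j] = draw(s)
    return X

results = {}
for name, draw in [("gaussian", lambda k: rng.standard_normal(k)),
                   ("signs", lambda k: rng.choice([-1.0, 1.0], size=k))]:
    conds = []
    for N in (100, 50_000):
        X = sparse_codes(N, draw)
        conds.append(np.linalg.cond(X @ X.T / N))
    results[name] = conds   # condition number shrinks toward 1 as N grows
```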
The plots clearly show that
for large
for each distribution and
setting. The maximum spectral norm plots quickly converged close to their expected values of
. Moreover,
approaches close to
as
increases, as expected, indicating that the probabilistic sparsity model approaches the scenario in Theorem 2.1. We have observed similar empirical behaviour for the
factor, when the non-zero entries are drawn from other distributions.
4. Conclusion
In this work, we presented a study of the model recovery properties of the alternating minimization algorithm for structured, unitary sparsifying transform learning. The algorithm converges rapidly to the generative model(s) from local neighbourhoods under mild assumptions, and these assumptions were shown to hold for various probabilistic models. In addition to showing that the algorithm converges linearly, we also characterized the asymptotic behaviour of the convergence rate and radius with respect to the number of data points or training signals
. In practice, the sparsifying operator learning method is robust to initialization. Our numerical results and initial analysis showed that the algorithm performs well under various initializations, with similar eventual rates of convergence. We have observed empirically that the algorithm converges to the specific W
even with quite large perturbations of the initial
from W
(i.e. large
values in Assumption
). We plan to further analyse the effects of initialization and the behaviour of transform learning in inverse problems in future work.
Funding
This work was partly conceived when S.R. was at the University of Illinois at Urbana-Champaign (supported by National Science Foundation CCF 1320953 to S.R.), and was partly done when S.R. was at the University of Michigan, Ann Arbor and was supported by the Office of Naval Research (N00014-15-1-2141 to S.R.); Defense Advanced Research Projects Agency Young Faculty Award (D14AP00086 to S.R.); US Army Research Office Multidisciplinary University Research Initiative (W911NF-11-1-0391, 2015-05174-05 to S.R.); National Institutes of Health (R01 EB023618, U01 EB018753 to S.R.); and University of Michigan-Shanghai Jiao Tong University seed grant (to S.R.). This material was also supported by the National Science Foundation (DMS-1440140 to A.M. and D.N.) while the authors were in residence at the Mathematical Science Research Institute in Berkeley, California during the Fall 2017 semester; National Science Foundation CAREER (1348721 to A.M. and D.N.); and National Science Foundation BIGDATA (1740325 to A.M. and D.N.).
A. Proofs of Proposition 2.4 and Remark 2.3
Here we present the proof of Proposition 2.4 and briefly comment on Remark 2.3. The form of
was discussed in the proof of Lemma 2.2 and ensures recovery of the support of
. We derive the form of
(i.e. sufficient
for the operator update step) based on the proof of Lemma 2.3. In particular, we bound
to ensure convergence of the Taylor series in (2.12) and to bound the higher order terms in the product
.
The matrix inverse series
converges when
. We have
, where the last inequality follows from Assumption
and Lemma 2.2. Thus,
suffices. Similarly, the series for
converges when the perturbation
satisfies
. Since
, we have that
or
suffices, which also works for the matrix inverse series.
Using the notation above, the product in (2.13) simplifies as
(A.1)
(2.13)
where
and
are the remaining higher order terms in the respective series. The
terms in (2.13) are given as
. We bound the Frobenius norm of these summands to characterize
in
.
First, we have the following bound:
(A.2)
We also have the next bound, which follows from
(since
) and
:
(A.3)
Third, we have for the matrix square root Taylor series that
, where the right-hand side is the magnitude of the remainder of the series for
after the first order term. Thus, we have the following standard bound for the remainder for some
:
(A.4)
Here the last inequality used
and
when
. Finally, we have
(A.5)
Combining (A.2)–(A.5), we easily get
with
as defined in (2.4). Including the
term in (2.14), the effective convergence rate in Lemma 2.3 is
with the dominant
. Thus,
suffices for linear convergence or
. Since
is monotone increasing in
(the upper bound comes from the aforementioned Taylor series convergence conditions) with
,
for which
. This would be
(largest permissible
) for the operator update step. It is easy to see that this
is equivalently obtained by maximizing
in
, where
. Note that we ignored the higher order effects in our Assumptions, since
is negligible for sufficiently small
, where the effective convergence rate is approximately
.
In the case of Remark 2.3 for Theorem 2.2, the form of
remains the same as above. The proof of Lemma 2.4 showed that the matrix inverse series converges when the perturbation term satisfies
, or
suffices. Similarly, the bounds for the other series terms also depend on
. Clearly, as
, we approach Assumption
for which
takes the same form as in Proposition 2.4. The limit for
in Remark 2.3 holds for the distributions in Proposition 2.3 because
(see (2.35)) almost surely as
.
B. Distributions in Section 2.3.3
Various distributions of
lead to interesting behaviour for
. Here we discuss example distributions and the corresponding behaviour of
, which we show to be lower bounded by
. The distributions below satisfy the conditions in Proposition 2.3 (i.e. the column supports of
of cardinality
are drawn independently and uniformly at random, and the non-zero entries are i.i.d. with mean zero and variance
) to ensure good convergence rate properties.
- The non-zeros are random signs scaled by
and ‘+’ and ‘-’ are equally probable
.
- Non-zeros are uniformly distributed in
with
. When
and
, then
.
- Non-zeros are drawn from the density
when
and
otherwise, with
and
and
. For a given
,
,
,
, and
.
The non-zeros of
above are assumed to be upper bounded (in practice, the bound is determined by the peak physical intensity in the signals considered) and lower bounded (determined by numerical precision).
We briefly show the
bounds for the examples above. When the non-zeros of
are random signs scaled by
, it is obvious that
.
When the non-zeros are uniformly distributed with
for
with
, then clearly
. The variance of the distribution is
. Setting this to the required value of
yields
. Solving the quadratic equation for
yields a root
, which is non-negative when
(i.e.
). Moreover,
implies
. Then the distributions with
and
readily satisfy
and
. Thus, clearly
. For the special case
and
.
When the non-zeros are drawn from
for
and
otherwise, with
and
and
, clearly
and the variance is
. Setting the variance to
yields a nonlinear equation in
,
and
, with many solutions. To extract one set of solutions, we set
and
for some
, which implies
. Substituting these in the variance equation simplifies it to
. Thus,
with
and
in this case. We then easily get
.
C. Proof of Lemma 2.1
Each column
(
) of the sparse coefficients matrix
satisfies
(C.1)
where
and
is a diagonal matrix with a one at the
th entry when
and zero otherwise. Matrix
is similarly defined with respect to
, and ‘
’ denotes element-wise multiplication.
It follows that
, where the two summands have disjoint supports because
is diagonal with zeros and with ‘-1’ only for the portion of the support of
left out in
. Therefore, we have
(C.2)
Let
. To simplify and bound (C.2), we first consider the case when only one element, say
was left out in
. Suppose that in its place, we have a new entry
with
. Then we must have
(C.3)
where the first inequality is necessary for the
th entry to swap with the
th entry in the support and the second inequality is the reverse triangle inequality. Thus, we have
(C.4)
Note that this holds even if
and
, i.e. only the
th entry is left out of
without a new non-zero (
th) entry. Using these results, (C.2) can be readily simplified for this case as
(C.5)
The last equality above follows because
includes
as a non-zero entry, and
is the same as
except that its
th entry is also
.
In (C.5),
. The first two summands in (C.5) are bounded by
and
, respectively. Thus, when one element of the true support is misestimated in each column of
, we have
(C.6)
This proves (2.5) for the case when (at most) one entry of the support of each
is wrongly estimated (left out) in
. In the general case, when multiple elements of the support of
may be left out in
, each such element can be paired with a corresponding ‘new’ element in
, and (C.4) holds for each such pair.9 The proof in this general case is similar to the aforementioned case, except that there would be summations over the left out or new indices in various equations. For example, the first summand
in (C.5) would include a summation over all ‘new’ indices
in
. However, this summation is still bounded by
. Similarly, the second summand in (C.5) would be summed over the number of (disjoint) pairs, which is again bounded by
. Thus,
holds generally (including when the true support is correctly estimated in
). Therefore, a bound as in (C.6) holds in the general case.
Footnotes
1. For example, one may minimize
with respect to
subject to
, where
denotes a set sparsity level or an alternative version of this problem.
2. Depending on the signal set, either a compact (i.e. without too many atoms) dictionary or sparsifying transform may be best suited for them.
3. In this case, the support of
in fact coincides with that of
. If we relaxed Assumption
from
to
, then
holds, and the lemma still holds.
4. Matrix
must be invertible for
to be finite and Assumption
to hold.
5. The SVD of the Kronecker sum is established by the following equalities that use the definitions of the Kronecker sum and SVD of
and (2.21):
.
6. The probability that
is
.
7. This is the probability that the two indices
and
both appear in the support of the
th column of
. Thus,
.
8. Note that
.
9. The elements left out of the support of
can be paired with ‘new’ elements in
one by one, i.e. no overlaps between the pairs. If multiple new elements satisfy (C.4), the pairing picks the one with the smallest magnitude.
References
- 1. Agarwal, A., Anandkumar, A., Jain, P. & Netrapalli, P. (2016) Learning sparsely used overcomplete dictionaries via alternating minimization. SIAM J. Optim., 26, 2775–2799.
- 2. Agarwal, A., Anandkumar, A., Jain, P., Netrapalli, P. & Tandon, R. (2014) Learning sparsely used overcomplete dictionaries. J. Mach. Learn. Res., 35, 1–15.
- 3. Aharon, M., Elad, M. & Bruckstein, A. (2006) K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process., 54, 4311–4322.
- 4. Arora, S., Ge, R., Ma, T. & Moitra, A. (2015) Simple, efficient, and neural algorithms for sparse coding. Conference on Learning Theory. PMLR, Paris, France. pp. 113–149.
- 5. Arora, S., Ge, R. & Moitra, A. (2014) New algorithms for learning incoherent and overcomplete dictionaries. Proceedings of the 27th Conference on Learning Theory. PMLR, Barcelona, Spain. pp. 779–806.
- 6. Bai, Y., Jiang, Q. & Sun, J. (2018) Subgradient descent learns orthogonal dictionaries. arXiv preprint arXiv:1810.10702.
- 7. Bao, C., Cai, J.-F. & Ji, H. (2013) Fast sparsity-based orthogonal dictionary learning for image restoration. Proceedings of the IEEE International Conference on Computer Vision. IEEE, Sydney, Australia. pp. 3384–3391.
- 8. Bao, C., Ji, H., Quan, Y. & Shen, Z. (2014) L0 norm based dictionary learning by proximal methods with global convergence. IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, Columbus, Ohio. pp. 3858–3865.
- 9. Bao, C., Ji, H., Quan, Y. & Shen, Z. (2016) Dictionary learning for sparse coding: algorithms and convergence analysis. IEEE Trans. Pattern Anal. Mach. Intell., 38, 1356–1369.
- 10. Bao, C., Ji, H. & Shen, Z. (2015) Convergence analysis for iterative data-driven tight frame construction scheme. Appl. Comput. Harmon. Anal., 38, 510–523.
- 11. Barchiesi, D. & Plumbley, M. D. (2013) Learning incoherent dictionaries for sparse approximation using iterative projections and rotations. IEEE Trans. Signal Process., 61, 2055–2065.
- 12. Chatterji, N. & Bartlett, P. L. (2017) Alternating minimization for dictionary learning with random initialization. Advances in Neural Information Processing Systems, vol. 30. Curran Associates, Inc., Long Beach, CA. pp. 1997–2006.
- 13. Chen, S. S., Donoho, D. L. & Saunders, M. A. (1998) Atomic decomposition by basis pursuit. SIAM J. Sci. Comput., 20, 33–61.
- 14. Dai, W. & Milenkovic, O. (2009) Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Inf. Theory, 55, 2230–2249.
- 15. Efron, B., Hastie, T., Johnstone, I. & Tibshirani, R. (2004) Least angle regression. Ann. Statist., 32, 407–499.
- 16. Elad, M. & Aharon, M. (2006) Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process., 15, 3736–3745.
- 17. Engan, K., Aase, S. & Hakon-Husoy, J. (1999) Method of optimal directions for frame design. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, Phoenix, AZ. pp. 2443–2446.
- 18. Hanif, M. & Seghouane, A.-K. (2014) Maximum likelihood orthogonal dictionary learning. 2014 IEEE Workshop on Statistical Signal Processing (SSP). IEEE, Gold Coast, Australia. pp. 141–144.
- 19. Lustig, M., Donoho, D. & Pauly, J. (2007) Sparse MRI: the application of compressed sensing for rapid MR imaging. Magn. Reson. Med., 58, 1182–1195.
- 20. Mairal, J., Bach, F., Ponce, J. & Sapiro, G. (2010) Online learning for matrix factorization and sparse coding. J. Mach. Learn. Res., 11, 19–60.
- 21. Marcellin, M. W., Gormish, M. J., Bilgin, A. & Boliek, M. P. (2000) An overview of JPEG-2000. Proceedings of the Data Compression Conference. IEEE, Snowbird, Utah. pp. 523–541.
- 22. Nam, S., Davies, M. E., Elad, M. & Gribonval, R. (2011) Cosparse analysis modeling—uniqueness and algorithms. ICASSP. IEEE, Prague, Czech Republic. pp. 5804–5807.
- 23. Natarajan, B. K. (1995) Sparse approximate solutions to linear systems. SIAM J. Comput., 24, 227–234.
- 24. Needell, D. & Tropp, J. (2009) CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal., 26, 301–321.
- 25. Olshausen, B. A. & Field, D. J. (1996) Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381, 607–609.
- 26. Pati, Y., Rezaiifar, R. & Krishnaprasad, P. (1993) Orthogonal Matching Pursuit: recursive function approximation with applications to wavelet decomposition. Asilomar Conference on Signals, Systems and Computers, vol. 1. IEEE, Pacific Grove, CA. pp. 40–44.
- 27. Pfister, L. & Bresler, Y. (2019) Learning filter bank sparsifying transforms. IEEE Trans. Signal Process., 67, 504–519.
- 28. Ramirez, I., Sprechmann, P. & Sapiro, G. (2010) Classification and clustering via dictionary learning with structured incoherence and shared features. Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2010. IEEE, San Francisco, CA. pp. 3501–3508.
- 29. Ravishankar, S. & Bresler, Y. (2013a) Closed-form solutions within sparsifying transform learning. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, Vancouver, Canada. pp. 5378–5382.
- 30. Ravishankar, S. & Bresler, Y. (2013b) Learning sparsifying transforms. IEEE Trans. Signal Process., 61, 1072–1086.
- 31. Ravishankar, S. & Bresler, Y. (2015) L0 sparsifying transform learning with efficient optimal updates and convergence guarantees. IEEE Trans. Signal Process., 63, 2389–2404.
- 32. Ravishankar, S. & Bresler, Y. (2016) Data-driven learning of a union of sparsifying transforms model for blind compressed sensing. IEEE Trans. Comput. Imaging, 2, 294–309.
- 33. Ravishankar, S. & Wohlberg, B. (2018) Learning multi-layer transform models. 2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton). IEEE, Monticello, IL. pp. 160–165.
- 34. Rockafellar, R. T. & Wets, R. J.-B. (1998) Variational Analysis. Heidelberg, Germany: Springer.
- 35. Rubinstein, R., Faktor, T. & Elad, M. (2012) K-SVD dictionary-learning for the analysis sparse model. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, Kyoto, Japan. pp. 5405–5408.
- 36. Rubinstein, R., Peleg, T. & Elad, M. (2013) Analysis K-SVD: a dictionary-learning algorithm for the analysis sparse model. IEEE Trans. Signal Process., 61, 661–677.
- 37. Rubinstein, R., Zibulevsky, M. & Elad, M. (2010) Double sparsity: learning sparse dictionaries for sparse signal approximation. IEEE Trans. Signal Process., 58, 1553–1564.
- 38. Schnass, K. (2018) Convergence radius and sample complexity of ITKM algorithms for dictionary learning. Appl. Comput. Harmon. Anal., 45, 22–58.
- 39. Smith, L. N. & Elad, M. (2013) Improving dictionary learning: multiple dictionary updates and coefficient reuse. IEEE Signal Process. Lett., 20, 79–82.
- 40. Spielman, D. A., Wang, H. & Wright, J. (2012) Exact recovery of sparsely-used dictionaries. Proceedings of the 25th Annual Conference on Learning Theory. PMLR, Edinburgh, Scotland. pp. 37.1–37.18.
- 41. Studer, C. & Baraniuk, R. G. (2012) Dictionary learning from sparsely corrupted or compressed signals. ICASSP. IEEE, Kyoto, Japan. pp. 3341–3344.
- 42. Sun, J., Qu, Q. & Wright, J. (2017a) Complete dictionary recovery over the sphere I: overview and the geometric picture. IEEE Trans. Inf. Theory, 63, 853–884.
- 43. Sun, J., Qu, Q. & Wright, J. (2017b) Complete dictionary recovery over the sphere II: recovery by Riemannian trust-region method. IEEE Trans. Inf. Theory, 63, 885–914.
- 44. Wen, B., Ravishankar, S. & Bresler, Y. (2015) Structured overcomplete sparsifying transform learning with convergence guarantees and applications. Int. J. Comput. Vis., 114, 137–167.
- 45. Xu, Q., Yu, H., Mou, X., Zhang, L., Hsieh, J. & Wang, G. (2012) Low-dose X-ray CT reconstruction via dictionary learning. IEEE Trans. Med. Imaging, 31, 1682–1697.
- 46. Xu, Y. & Yin, W. (2016) A fast patch-dictionary method for whole image recovery. Inverse Probl. Imaging, 10, 563–583.
- 47. Yaghoobi, M., Blumensath, T. & Davies, M. (2009) Dictionary learning for sparse approximations with the majorization method. IEEE Trans. Signal Process., 57, 2178–2191.
- 48. Yaghoobi, M., Nam, S., Gribonval, R. & Davies, M. (2011) Analysis operator learning for overcomplete cosparse representations. European Signal Processing Conference (EUSIPCO). IEEE, Barcelona, Spain. pp. 1470–1474.
- 49. Yaghoobi, M., Nam, S., Gribonval, R. & Davies, M. E. (2012) Noise aware analysis operator learning for approximately cosparse signals. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, Kyoto, Japan. pp. 5409–5412.