Abstract
Although the residual method, or constrained regularization, is frequently used in applications, a detailed study of its properties is still missing. This sharply contrasts with the progress of the theory of Tikhonov regularization, where a series of new results for regularization in Banach spaces has been published in recent years. The present paper intends to bridge the gap between the existing theories as far as possible. We develop a stability and convergence theory for the residual method in general topological spaces. In addition, we prove convergence rates in terms of (generalized) Bregman distances, which can also be applied to non-convex regularization functionals.
We provide three examples that show the applicability of our theory. The first example is the regularized solution of linear operator equations on Lp-spaces, where we show that the results of Tikhonov regularization generalize unchanged to the residual method. As a second example, we consider the problem of density estimation from a finite number of sampling points, using the Wasserstein distance as a fidelity term and an entropy measure as regularization term. It is shown that the densities obtained in this way depend continuously on the location of the sampled points and that the underlying density can be recovered as the number of sampling points tends to infinity. Finally, we apply our theory to compressed sensing. Here, we show the well-posedness of the method and derive convergence rates both for convex and non-convex regularization under rather weak conditions.
Keywords: Ill-posed problems, Regularization, Residual method, Sparsity, Stability, Convergence rates
1. Introduction
We study the solution of ill-posed operator equations
(1) F(x) = y,
where F : X → Y is an operator between the topological spaces X and Y, and y ∈ Y are given, noisy data, which are assumed to be close to some unknown, noise-free data y† ∈ ran(F). If the operator F is not continuously invertible, then (1) may not have a solution and, if a solution exists, arbitrarily small perturbations of the data may lead to unacceptable results.
If Y is a Banach space and the given data are known to satisfy an estimate ∥y† − y∥ ⩽ β, one strategy for defining an approximate solution of (1) is to solve the constrained minimization problem
(2) min R(x) subject to ∥F(x) − y∥ ⩽ β.
Here, the regularization term R is intended to enforce certain regularity properties of the approximate solution and to stabilize the process of solving (1). In [39,55], this strategy is called the residual method. It is closely related to Tikhonov regularization
(3) min ∥F(x) − y∥2 + αR(x),
where α > 0 is a regularization parameter. In the case that the operator F is linear and R is convex, (2) and (3) are basically equivalent if α is chosen according to Morozov’s discrepancy principle (see [39, Chap. 3]).
While the theory of Tikhonov regularization has received much attention in the literature (see for instance [1,14,21,22,33,37,45,49,52,56,58]), the same cannot be said about the residual method. The existing results are mainly concerned with the existence theory of (2) and with the question of convergence, which asks whether solutions of (2) converge to a solution of (1) as ∥y − y†∥ ⩽ β → 0. These problems have been treated in very general settings in [38,51] (see also [34,54,55]). Convergence rates have been derived in [6] for linear equations in Hilbert spaces and later generalized in [34] to non-linear equations in Banach spaces. Convergence rates have also been derived in [7,9,32] for the reconstruction of sparse sequences.
The problem of stability, however, that is, the continuous dependence of the solution of (2) on the input data y and the presumed noise level β, has hardly been considered at all. One reason for the lack of results is that, in contrast to Tikhonov regularization, stability simply does not hold for general non-linear operator equations. But even in the linear case, where stability does hold and we indeed prove it, no stability theorems seem to exist in the literature so far. Some results have been derived in [34], but they only cover a very weak form of stability, which states that the solutions of (2) with perturbed data stay close to the solution with unperturbed data if one additionally increases the regularization parameter β in the perturbed problem by a sufficient amount.
The present paper tries to generalize the existing theory on the residual method as far as possible. We assume that X and Y are mere topological spaces and consider the minimization of a regularization functional R(x) subject to the constraint S(F(x), y) ⩽ β. Here S is some distance-like functional taking over the role of the norm in (2). In addition, we discuss the case where the operator F is not known exactly. This subsumes errors due to the modeling process as well as discretizations of the problem necessary for its numerical solution. We provide different criteria that ensure stability (Lemma 3.6, Theorem 3.9 and Proposition 4.3) and convergence (Theorem 3.10 and Proposition 4.3) of the residual method. In particular, our conditions also cover certain non-linear operators (see Example 4.6).
Section 5 is concerned with the derivation of convergence rates, i.e., quantitative estimates between solutions of (2) and the exact data y†. Using notions of abstract convexity, we define a generalized Bregman distance that allows us to state and prove rates on arbitrary topological spaces (see Theorem 5.4). In Section 6 we apply our general results to the case of sparse ℓp-regularization with p ∈ (0, 2). We prove the well-posedness of the method and derive convergence rates with respect to the norm in a fairly general setting. In the case of convex regularization, that is, p ⩾ 1, we derive a convergence rate of order O(β1/p). In the non-convex case 0 < p < 1, we show that the linear rate O(β) holds.
2. Definitions and mathematical preliminaries
Throughout the paper, X and Y denote sets. Moreover, R : X → [0, ∞] is a functional on X, and S : Y × Y → [0, ∞] is a functional on Y × Y such that S(y, z) = 0 if and only if y = z.
2.1. The residual method
For given mapping F : X → Y, given data y ∈ Y, and fixed parameter β ⩾ 0, we consider the constrained minimization problem
(4) min R(x) subject to S(F(x), y) ⩽ β.
For the analysis of the residual method (4) it is convenient to introduce the following notation.
The feasible set Φ(F, y, β), the value v(F, y, β), and the set of solutions Σ(F, y, β) of (4) are defined by
Φ(F, y, β) ≔ {x ∈ X : S(F(x), y) ⩽ β},
v(F, y, β) ≔ inf{R(x) : x ∈ Φ(F, y, β)},
Σ(F, y, β) ≔ {x ∈ Φ(F, y, β) : R(x) = v(F, y, β)}.
In particular, Φ(F, y, 0) consists of all solutions of the equation F(x) = y. The elements of Σ(F, y, 0) are therefore referred to as R-minimizing solutions of F(x) = y.
In addition, for t ⩾ 0, we set
(5) Φt(F, y, β) ≔ {x ∈ X : S(F(x), y) ⩽ β and R(x) ⩽ t}.
An immediate consequence of the above definitions is the identity
(6) Σ(F, y, β) = Φv(F, y, β) with v ≔ v(F, y, β).
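To fix ideas, the following self-contained Python sketch solves a small finite-dimensional instance of (4) numerically. It is an illustration only, not part of the paper: the matrix A, the data y, the noise level beta, and the choice R(x) = ∥x∥2² are our own assumptions, and scipy's general-purpose SLSQP solver merely stands in for a dedicated optimization method.

```python
# Minimal numerical sketch of the residual method (4) for a linear forward
# map F(x) = A x, with R(x) = ||x||_2^2 and S(F(x), y) = ||A x - y||_2.
# All data below are synthetic and chosen for illustration only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))          # underdetermined forward operator
x_true = np.zeros(50)
x_true[:3] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(20)
beta = 0.5                                  # presumed noise level

R = lambda x: float(np.sum(x ** 2))         # regularization functional R
feas = {"type": "ineq",                     # feasibility: beta - ||Ax - y|| >= 0
        "fun": lambda x: beta - np.linalg.norm(A @ x - y)}
sol = minimize(R, np.zeros(50), method="SLSQP", constraints=[feas])
x_beta = sol.x                              # an element of Sigma(A, y, beta)
print(np.linalg.norm(A @ x_beta - y) <= beta + 1e-6, R(x_beta))
```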
Remark 2.1
We do not assume a priori that a solution of the minimization problem (4) exists. Only in the next section shall we deduce the existence of solutions under a compactness assumption on the sets Φt(F, y, β), see Theorem 3.1.
Lemma 2.2
The sets Φt(F, y, β) defined in (5) satisfy
(7) Φt(F, y, β) ⊂ Φt+ε(F, y, β + γ)
for every γ, ε ⩾ 0, and
(8) Φt(F, y, β) = ⋂γ>0, ε>0 Φt+ε(F, y, β + γ).
Proof
The inclusion (7) follows immediately from the definition of Φt(F, y, β). For the proof of (8) note that x ∈ ⋂γ>0, ε>0 Φt+ε(F, y, β + γ) if and only if R(x) ⩽ t + ε and S(F(x), y) ⩽ β + γ for all γ > 0 and for all ε > 0. This, however, is the case if and only if R(x) ⩽ t and S(F(x), y) ⩽ β, which means that x ∈ Φt(F, y, β). □
Further properties of the value v and the sets Φt and Σ are summarized in Appendix A.
2.2. Convergence of sets of solutions
In the next section we study convergence and stability of the residual method, that is, the behavior of the set of solutions Σ(Fk, yk, βk) for βk → β, yk → y, and Fk → F. In [21,50], where convergence and stability of Tikhonov regularization have been investigated, the stability results are of the following form: for every sequence (yk)k∈ℕ converging to y and every sequence (xk)k∈ℕ of corresponding minimizers there exists a subsequence of (xk)k∈ℕ that converges to a minimizer corresponding to the limit data y. In this paper we prove similar results for the residual method, but with a different notation using a notion of convergence of sets (see, for example, [41, Section 29]).
Definition 2.3
Let τ be a topology on X and let (Σk)k∈ℕ be a sequence of subsets of X.
- (a)
The upper limit of (Σk)k∈ℕ is defined as
τ − Lim supk→∞ Σk ≔ ⋂k∈ℕ τ − cl(⋃j⩾k Σj),
where τ − cl denotes the closure with respect to τ.
- (b)
An element x ∈ X is contained in the lower limit of the sequence (Σk)k∈ℕ, in short
x ∈ τ − Lim infk→∞ Σk,
if for every neighborhood N of x there exists k0 ∈ ℕ such that N ∩ Σk ≠ ∅ for every k ⩾ k0.
- (c)
If the lower limit and the upper limit of (Σk)k∈ℕ coincide, we define
τ − Limk→∞ Σk ≔ τ − Lim infk→∞ Σk = τ − Lim supk→∞ Σk
as the limit of the sequence (Σk)k∈ℕ.
Remark 2.4
As a direct consequence of Definition 2.3, an element x is contained in the upper limit τ − Lim supk→∞Σk, if and only if for every neighborhood N of x and every k0 ∈ ℕ there exists k ⩾ k0 with N ∩ Σk ≠ ∅.
If X satisfies the first axiom of countability, then x ∈ τ − Lim supk→∞Σk, if and only if there exist a subsequence (Σkj)j∈ℕ of (Σk)k∈ℕ and elements xj ∈ Σkj such that xj →τ x (see [41, Section 29.IV]). Note that in particular every metric space satisfies the first axiom of countability.
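The following small Python experiment (ours, not from the paper) illustrates Definition 2.3 for the concrete sequence Σk = {(−1)k, 1/k} ⊂ ℝ. It uses the metric-space characterizations x ∈ τ − Lim supk→∞ Σk ⟺ lim infk dist(x, Σk) = 0 and x ∈ τ − Lim infk→∞ Σk ⟺ lim supk dist(x, Σk) = 0, approximated on a finite grid and a finite tail of indices.

```python
# Approximate Kuratowski upper/lower limits of Sigma_k = {(-1)^k, 1/k} in R.
import numpy as np

def sigma(k):
    # The points 1/k converge to 0 along every k, while (-1)^k alternates.
    return np.array([(-1.0) ** k, 1.0 / k])

grid = np.linspace(-1.5, 1.5, 601)
ks = np.arange(1000, 2000)                 # a tail of the index sequence
dist = np.array([np.min(np.abs(grid[:, None] - sigma(k)[None, :]), axis=1)
                 for k in ks])
tol = 1e-2
upper = grid[dist.min(axis=0) < tol]       # liminf_k dist = 0: in Lim sup
lower = grid[dist.max(axis=0) < tol]       # limsup_k dist = 0: in Lim inf
print(np.round(upper, 2))                  # clusters near -1, 0 and 1
print(np.round(lower, 2))                  # cluster near 0 only
```

As expected, the upper limit is {−1, 0, 1}, while the lower limit is only {0}: the alternating points ±1 are hit infinitely often but not eventually.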
The following proposition clarifies the relation between the stability and convergence results in [21,50] and the results in the present paper.
Proposition 2.5
Let (Σk)k∈ℕ be a sequence of nonempty subsets of X, and assume that there exists a compact set K such that Σk ⊂ K for all k ∈ ℕ. Then τ − Lim supk→∞Σk is non-empty.
If, in addition, X satisfies the first axiom of countability, then every sequence of elements xk ∈ Σk has a subsequence converging to some element x ∈ τ − Lim supk→∞Σk.
Proof
By assumption, the sets τ − cl(⋃j⩾k Σj), k ∈ ℕ, form a decreasing family of non-empty, compact sets. Thus also their intersection τ − Lim supk→∞Σk is non-empty (see [40, Theorem 5.1]).
Now assume that X satisfies the first axiom of countability. Then in particular every compact set is sequentially compact (see [40, Theorem 5.5]). Let now xk ∈ Σk for every k ∈ ℕ. Then (xk)k∈ℕ is a sequence in the compact set K and therefore has a subsequence converging to some element x ∈ K. From Remark 2.4 it follows that x ∈ τ − Lim supk→∞Σk, which shows the assertion. □
2.3. Convergence of the data
In addition to the convergence of subsets Σk of X, it is necessary to define a notion of convergence on the set Y that is compatible with the distance measure S.
Definition 2.6
The sequence (yk)k∈ℕ converges S-uniformly to y ∈ Y, if
limk→∞ sup{|S(z, yk) − S(z, y)| : z ∈ Y} = 0.
The sequence of mappings Fk : X → Y converges locally S-uniformly to F : X → Y, if
limk→∞ sup{|S(Fk(x), z) − S(F(x), z)| : z ∈ Y, x ∈ X with R(x) ⩽ t} = 0
for every t ⩾ 0.
Remark 2.7
The S-uniform convergence on Y is induced by the extended metric
dS(y1, y2) ≔ sup{|S(z, y1) − S(z, y2)| : z ∈ Y}.
If the distance measure S itself is a metric, then dS coincides with S. Similarly, local S-uniform convergence of a sequence of mappings Fk equals the uniform convergence of Fk on R-bounded sets with respect to the extended metric dS.
3. Well-posedness of the residual method
In the following we investigate the existence of minimizers, and the stability and the convergence of the residual method. Throughout the whole section we assume that τ is a topology on X, F : X → Y is a mapping, y ∈ Y are given data and β ⩾ 0 is a fixed parameter.
3.1. Existence
We first investigate under which conditions Σ(F, y, β), the set of solutions of (4), is not empty.
Theorem 3.1 Existence —
Assume that Φt(F, y, β) is τ-compact for every t ⩾ 0 and non-empty for some t0 ⩾ 0. Then Problem (4) has a solution.
Proof
Eq. (6) and Lemma 2.2 imply the identity
Σ(F, y, β) = ⋂ε>0, γ>0 Φv+ε(F, y, β + γ), where v ≔ v(F, y, β).
Because Φt0(F, y, β) ≠ ∅, the value of (4) satisfies v(F, y, β) ⩽ t0 < ∞, and therefore Φv+ε(F, y, β + γ) ≠ ∅ for every ε, γ > 0. Consequently, Σ(F, y, β) is the intersection of a decreasing family of non-empty τ-compact sets and thus non-empty (see [40, Theorem 5.1]). □
Recall that a mapping G : X → [0, ∞] is lower semi-continuous, if its lower level sets {x ∈ X : G(x) ⩽ t} are closed for every t ⩾ 0. Moreover, the mapping G is coercive, if its lower level sets are pre-compact, see [4]. (In a Banach space one often calls a functional coercive, if it is unbounded on unbounded sets. The notion used here is equivalent if the Banach space is reflexive and τ is the weak topology.) In particular, the mapping G is lower semi-continuous and coercive, if and only if its lower level sets are compact.
Proposition 3.2
Assume that R and x ↦ S(F(x), y) are lower semi-continuous and one of them, or their sum, is coercive. Then Φt(F, y, β) is τ-compact for every t ⩾ 0. If additionally Φt0(F, y, β) is non-empty for some t0 ⩾ 0, then Problem (4) has a solution.
Proof
If R and x ↦ S(F(x), y) are lower semi-continuous and one of them is coercive, then
Φt(F, y, β) = {x ∈ X : S(F(x), y) ⩽ β} ∩ {x ∈ X : R(x) ⩽ t}
is the intersection of a closed and a τ-compact set and therefore itself τ-compact. In case that only the sum is coercive, the set
Φt(F, y, β) ⊂ {x ∈ X : S(F(x), y) + R(x) ⩽ β + t}
is a closed set contained in a τ-compact set and therefore again τ-compact. □
The lower semi-continuity of x ↦ S(F(x), y) certainly holds if F is continuous and S is lower semi-continuous with respect to its first component (for some given topology on Y). It is, however, also possible to obtain lower semi-continuity, if F is not continuous but the functional S satisfies a stronger condition:
Proposition 3.3
Let τ′ be a topology on Y such that z ↦ S(z, y) is lower semi-continuous and coercive, and assume that F : X → Y has a closed graph. Then the functional x ↦ S(F(x), y) is lower semi-continuous.
Proof
Because F has a closed graph, the pre-image under F of every compact set is closed (see [38, Theorem 4]). This shows that
{x ∈ X : S(F(x), y) ⩽ β} = F−1({z ∈ Y : S(z, y) ⩽ β})
is closed for every β ⩾ 0, that is, the mapping x ↦ S(F(x), y) is lower semi-continuous. □
3.2. Stability
Stability is concerned with the continuous dependence of the solutions of (4) on the input data, that is, the element y, the parameter β, and, possibly, the operator F. Given sequences βk → β, yk → y, and Fk → F, we ask whether the sequence of sets Σ(Fk, yk, βk) converges to Σ(F, y, β). As already indicated in Section 2, we will make use of the upper limit of sets introduced in Definition 2.3. The topology, however, with respect to which the results are formulated, is stronger than τ.
Definition 3.4
The topology τR on X is generated by all sets of the form U ∩ {x ∈ X : R(x) < t} with t ∈ ℝ and U ∈ τ and all sets of the form U ∩ {x ∈ X : R(x) > t} with t ∈ ℝ and U ∈ τ. (Hence τR consists of all unions of finite intersections of sets of the form U ∩ {R < t} or U ∩ {R > t}.)
Note that a sequence (xk)k∈ℕ converges to x with respect to τR, if and only if (xk)k∈ℕ converges to x with respect to τ and satisfies R(xk) → R(x) for k → ∞.
For the stability results we make the following assumption:
Assumption 3.5
- 1.
Let β ⩾ 0, let y ∈ Y, and let F : X → Y be a mapping.
- 2.
Let (βk)k∈ℕ be a sequence of nonnegative numbers, let (yk)k∈ℕ be a sequence in Y, and let (Fk)k∈ℕ be a sequence of mappings Fk : X → Y.
- 3.
The sequence (βk)k∈ℕ converges to β, the sequence (yk)k∈ℕ converges S-uniformly to y, and (Fk)k∈ℕ converges locally S-uniformly to F.
- 4.
The sets Φt(F, w, γ) and Φt(Fk, w, γ) are τ-compact for all w ∈ Y, γ, t ⩾ 0, and k ∈ ℕ. Moreover, for every w ∈ Y, γ > 0, and k ∈ ℕ there exists some t0 such that Φt0(F, w, γ) and Φt0(Fk, w, γ) are nonempty.
The following lemma is the key result to prove stability of the residual method.
Lemma 3.6
Let Assumption 3.5 hold and assume that
(9) lim supk→∞ v(Fk, yk, βk) ⩽ v(F, y, β).
Then,
(10) ∅ ≠ τR − Lim supk→∞ Σ(Fk, yk, βk) ⊂ Σ(F, y, β).
If, additionally, the set Σ(F, y, β) consists of a single element xβ, then
(11) τR − Limk→∞ Σ(Fk, yk, βk) = {xβ}.
Proof
In order to simplify the notation, we define
Σk ≔ Σ(Fk, yk, βk), vk ≔ v(Fk, yk, βk), Φk(t) ≔ Φt(Fk, yk, βk), Σ ≔ Σ(F, y, β), v ≔ v(F, y, β).
Moreover we define the set T ≔ τ − Lim supk→∞Σk. Because the topology τR is finer than τ, it follows that τR − Lim supk→∞Σk ⊂ T. We proceed by showing that ∅ ≠ T ⊂ Σ and T ⊂ τR − Lim supk→∞Σk, which then gives the assertion (10).
The inequality (9) implies that for every ε > 0 there exists some k0 ∈ ℕ such that vk ⩽ v + ε for all k ⩾ k0. Since βk → β, we may additionally assume that βk ⩽ β + ε. Lemma A.1 implies, after possibly enlarging k0,
(12) Φk(t′) ⊂ Φt′(F, y, β + 2ε) for every t′ ⩽ v + ε
for all k ⩾ k0. Thus,
(13) Σk = Φk(vk) ⊂ Φv+ε(F, y, β + 2ε) for all k ⩾ k0.
The sets τ − cl(⋃j⩾k Σj) are closed and non-empty and, by assumption, the set Φv+ε(F, y, β + 2ε) is compact. Thus T is the intersection of a decreasing family of non-empty compact sets and therefore non-empty. Moreover, because (13) holds for every ε > 0, we have
(14) T ⊂ ⋂ε>0 Φv+ε(F, y, β + 2ε) = Φv(F, y, β) = Σ,
where the equalities follow from Lemma 2.2 and (6).
Next we show the inclusion T ⊂ τR − Lim supk→∞Σk. To that end, we first prove that
(15) limk→∞ vk = v.
Recall that Theorem 3.1 implies that Φk(vk) = Σk is non-empty. Therefore, (12) implies that also Φvk(F, y, β + 2ε) is non-empty, which in turn shows that vk ⩾ v(F, y, β + 2ε) for all k large enough. Consequently,
(16) lim infk→∞ vk ⩾ v(F, y, β + 2ε)
for all ε > 0. From Lemma A.2 we obtain that v = supε>0v(F, y, β + 2ε). Together with (16) and (9) this shows (15).
Let now x ∈ T, let N be a neighborhood of x with respect to τ, let δ > 0 and k0 ∈ ℕ. Since T ⊂ Σ (see (14)), it follows that R(x) = v. Thus it follows from (15) that there exists k1 ⩾ k0 such that
|vk − R(x)| ⩽ δ
for all k ⩾ k1. In particular,
(17) Σk ⊂ {x′ ∈ X : |R(x′) − R(x)| ⩽ δ} for all k ⩾ k1.
Remark 2.4 implies that there exists k2 ⩾ k1 such that
(18) N ∩ Σk2 ≠ ∅.
Now recall that the sets N ∩ {x′ ∈ X : |R(x′) − R(x)| < δ} form a basis of neighborhoods of x for the topology τR. Therefore (17) and (18), and the characterization of the upper limit of sets given in Remark 2.4 imply that x ∈ τR − Lim supk→∞Σk. Thus the inclusion (10) follows.
If the set Σ(F, y, β) consists of a single element xβ, then the first part of the assertion implies that for every subsequence (Σkj)j∈ℕ we have
∅ ≠ τR − Lim supj→∞ Σkj ⊂ Σ(F, y, β) = {xβ}.
Thus the assertion follows from Lemma A.3. □
The crucial condition in Lemma 3.6 is the inequality (9). Indeed, one can easily construct examples where this condition fails and the solution of Problem (4) is unstable, see Example 3.7. What happens in this example is that the upper limit consists of local minima of R on Φ(F, y, β) that fail to be global minima of R restricted to Φ(F, y, β).
Example 3.7
Consider the function F : ℝ → ℝ illustrated in Fig. 1 and the regularization functional R(x) ≔ |x|. Let y > 0 and choose β = y. Then
(19) Σ(F, y, β) = {0}.
Now let yk > y. Then
Σ(F, yk, β) = {xk},
where xk is the unique solution of the equation F(x) = yk − y. Thus, if the sequence (yk)k∈ℕ converges to y from above, we have xk > 1 for all k and limk→∞xk = 1. According to (19), however, the solution of the limit problem equals zero.
Fig. 1. The nonlinear function F from Example 3.7. The feasible set Φ(F, y, β) consists of an interval and the isolated point {0}.
The following two theorems are central results of this paper. They answer the question to which extent we obtain stability results for the residual method similar to the ones known for Tikhonov regularization.
Theorem 3.8 Approximate Stability —
Let Assumption 3.5 hold. Then there exists a sequence εk → 0 such that
∅ ≠ τR − Lim supk→∞ Σ(Fk, yk, βk + εk) ⊂ Σ(F, y, β).
Proof
Define
εk ≔ max{β − βk, 0} + sup{|S(Fk(x), yk) − S(F(x), y)| : R(x) ⩽ v(F, y, β) + 1}.
Lemma A.1 (more precisely, the uniform estimate (A.2) in its proof) and the assumption that βk → β imply that εk → 0. Since by construction every x ∈ Φ(F, y, β) with R(x) ⩽ v(F, y, β) + 1 satisfies
S(Fk(x), yk) ⩽ S(F(x), y) + εk − max{β − βk, 0} ⩽ βk + εk,
we obtain that v(Fk, yk, βk + εk) ⩽ v(F, y, β). Thus the assertion follows from Lemma 3.6. □
Theorem 3.8 is a stability result in the same spirit as the one derived in [34]. While it does not assert that, in the general setting described by Assumption 3.5, the residual method is stable in the sense that the solutions depend continuously on the input data, it does state that the solutions of the perturbed problems stay close to the solution of the original problem, if one allows the regularization parameter β to increase slightly. Apart from the more general, topological setting, the main difference to [34, Lemma 2.2] is the additional inclusion of operator errors into the result.
The next theorem provides a true stability theorem, including both data as well as operator perturbations.
Theorem 3.9 Stability —
Let Assumption 3.5 hold with β > 0 and assume that the inclusion
(20) Φt(F, y, β) ⊂ τR − cl(⋃ε>0 Φ(F, y, β − ε))
holds for every t ⩾ 0. Then,
(21) ∅ ≠ τR − Lim supk→∞ Σ(Fk, yk, βk) ⊂ Σ(F, y, β).
If, additionally, the set Σ(F, y, β) consists of a single element xβ, then
τR − Limk→∞ Σ(Fk, yk, βk) = {xβ}.
Proof
The convergence of (βk)k∈ℕ to β and Lemma A.1 imply that for every ε ∈ (0, β) and t ⩾ 0 there exists k0 ∈ ℕ such that
Φt(F, y, β − ε) ⊂ Φt(Fk, yk, βk)
for all k ⩾ k0. Consequently,
lim supk→∞ v(Fk, yk, βk) ⩽ v(F, y, β − ε) for every ε ∈ (0, β).
From (20) we obtain that
infε>0 v(F, y, β − ε) = v(F, y, β),
because every element of Σ(F, y, β) can be approximated in the topology τR, and hence with respect to the values of R, by elements of ⋃ε>0 Φ(F, y, β − ε). Therefore lim supk→∞ v(Fk, yk, βk) ⩽ v(F, y, β), and the assertion follows from Lemma 3.6. □
For Theorem 3.9 to hold, the mapping x ↦ S(F(x), y) has to satisfy the additional regularity property (20). This property requires that every x ∈ X for which F(x) ≠ y can be approximated by elements x′ with S(F(x′), y) < S(F(x), y) and R(x′) arbitrarily close to R(x). That is, the function x′ ↦ S(F(x′), y) does not have local minima in the sets {x′ ∈ X : R(x′) ⩽ t}. As will be shown in the following Section 4, this property is naturally satisfied for linear operators on Banach spaces.
3.3. Convergence
The following theorem states that the solutions obtained with the residual method indeed converge to an R-minimizing solution of the equation F(x) = y if the noise level decreases to zero. Recall that the set of all R-minimizing solutions of the equation F(x) = y is given by Σ(F, y, 0).
Theorem 3.10 Convergence —
Let y ∈ Y be such that there exists x ∈ X with F(x) = y and R(x) < ∞, and assume that Φt(F, w, γ) is τ-compact for all w ∈ Y and γ, t ⩾ 0. If (yk)k∈ℕ converges S-uniformly to y and (βk)k∈ℕ satisfies S(y, yk) ⩽ βk → 0, then
(22) v(F, yk, βk) ⩽ v(F, y, 0) for all k ∈ ℕ.
In particular,
(23) ∅ ≠ τR − Lim supk→∞ Σ(F, yk, βk) ⊂ Σ(F, y, 0).
If, additionally, the R-minimizing solution x† is unique, then
(24) τR − Limk→∞ Σ(F, yk, βk) = {x†}.
Proof
By assumption S(y, yk) ⩽ βk, which implies that x′ ∈ Φ(F, yk, βk) for all x′ ∈ Φ(F, y, 0). This proves (22). Now (23) and (24) follow from Lemma 3.6. □
4. Linear spaces
Now we assume that X and Y are subsets of topological vector spaces. Then the linear structure allows us to introduce more tangible conditions implying stability of the residual method.
For the following we assume that F : X → Y and y ∈ Y are fixed.
Assumption 4.1
Assume that the following hold:
- 1.
The set X is a convex subset of a topological vector space, and Y is a topological vector space.
- 2.
The mapping x ↦ S(F(x), y) is semi-strictly quasi-convex. That is, for all x0, x1 ∈ X with S(F(x0), y), S(F(x1), y) < ∞, and all 0 < λ < 1 we have
S(F(λx0 + (1 − λ)x1), y) ⩽ max{S(F(x0), y), S(F(x1), y)}.
Moreover, the inequality is strict whenever S(F(x0), y) ≠ S(F(x1), y).
- 3.
For every β > 0 there exists x ∈ X with S(F(x), y) < β and R(x) < ∞.
- 4.
The domain dom R ≔ {x ∈ X : R(x) < ∞} of R is convex and for every x0, x1 ∈ dom R the restriction of R to the segment
{λx0 + (1 − λ)x1 : 0 ⩽ λ ⩽ 1}
is continuous.
We now show that Assumption 4.1 implies the main condition of the stability result Theorem 3.9, the inclusion (20):
Lemma 4.2
Assume that Assumption 4.1 holds. Then (20) is satisfied.
Proof
Let x0 ∈ Φt(F, y, β) for some t ⩾ 0 and β > 0. We have to show that for every τ-neighborhood N ⊂ X of x0 and every δ > 0 there exist ε > 0 and x′ ∈ N such that S(F(x′), y) ⩽ β − ε and |R(x′) − R(x0)| ⩽ δ.
Item 3 in Assumption 4.1 implies the existence of some x1 ∈ X satisfying the inequalities S(F(x1), y) < β and R(x1) < ∞. Since we have S(F(x0), y) ⩽ β and S(F(x1), y) < β, we obtain from Item 2 that S(F(x), y) < β for every x ∈ L ≔ {λx0 + (1 − λ)x1 : 0 ⩽ λ < 1}. Since x0, x1 ∈ dom R, it follows from Item 4 that R is continuous on L ∪ {x0}. Consequently limλ→1 R(λx0 + (1 − λ)x1) = R(x0). In particular, there exists λ0 < 1 such that |R(λx0 + (1 − λ)x1) − R(x0)| ⩽ δ for all 1 > λ > λ0. Since X is a subset of a topological vector space (Item 1), it follows that x′ ≔ λx0 + (1 − λ)x1 ∈ N for some 1 > λ > λ0. This shows the assertion with ε ≔ β − S(F(x′), y) > 0. □
Lemma 4.2 allows us to apply the stability result Theorem 3.9, which shows that Assumption 4.1 implies the continuous dependence of the solutions of (4) on the data y and the regularization parameter β.
Proposition 4.3 Stability & Convergence —
Let Assumption 4.1 hold and assume that the sets Φt(F, w, γ) are compact for every t, γ ⩾ 0 and w ∈ Y. Assume moreover that (yk)k∈ℕ converges S-uniformly to y ∈ Y, and that βk → β. If β = 0, assume in addition that S(y, yk) ⩽ βk. Then
∅ ≠ τR − Lim supk→∞ Σ(F, yk, βk) ⊂ Σ(F, y, β).
If, additionally, the set Σ(F, y, β) consists of a single element xβ, then
τR − Limk→∞ Σ(F, yk, βk) = {xβ}.
Proof
If β = 0, the assertion follows from Theorem 3.10. In the case β > 0, Lemma 4.2 implies that (20) holds. Thus, the assertion follows from Theorem 3.9. Note that the non-emptyness of the sets Φt(F, yk, βk) for some t follows from Item 3 in Assumption 4.1. □
Proposition 4.4 Stability —
Let Assumption 4.1 hold. Assume that (yk)k∈ℕ converges S-uniformly to y ∈ Y, the mappings Fk : X → Y converge locally S-uniformly to F : X → Y (see Definition 2.6), and βk → β > 0. Assume that the sets Φt(F, w, γ) and Φt(Fk, w, γ) are compact for every t, γ ⩾ 0, k ∈ ℕ, and w ∈ Y. Then
∅ ≠ τR − Lim supk→∞ Σ(Fk, yk, βk) ⊂ Σ(F, y, β).
If, additionally, the set Σ(F, y, β) consists of a single element xβ, then
τR − Limk→∞ Σ(Fk, yk, βk) = {xβ}.
Proof
Again, Lemma 4.2 shows that (20) holds. Moreover, the non-emptyness of the sets Φt(F, yk, βk) and Φt(Fk, yk, βk) (at least for k sufficiently large) for some t follows from Item 3 in Assumption 4.1 and the local S-uniform convergence of the mappings Fk to F. Thus the assertion follows from Theorem 3.9. □
Item 2 in Assumption 4.1 is concerned with the interplay of the operator F and the distance measure S. The next two examples consider two situations where this part of the assumption holds. Example 4.5 considers linear operators F and distance measures S that are convex in their first argument. Example 4.6 introduces a class of non-linear operators on Hilbert spaces, where Item 2 is satisfied if S equals the squared Hilbert space norm distance.
Example 4.5
Assume that F : X → Y is linear and S is convex in its first component. Then Item 2 in Assumption 4.1 is satisfied. Indeed, in such a situation,
S(F(λx0 + (1 − λ)x1), y) = S(λF(x0) + (1 − λ)F(x1), y) ⩽ λS(F(x0), y) + (1 − λ)S(F(x1), y) ⩽ max{S(F(x0), y), S(F(x1), y)}.
If moreover S(F(x0), y) ≠ S(F(x1), y) and 0 < λ < 1, then the last inequality is strict.
Example 4.6
Assume that Y is a Hilbert space, S(y1, y2) ≔ ∥y1 − y2∥2, and F : X → Y is twice Gâteaux differentiable. Then Item 2 in Assumption 4.1 holds if for all x0 ≠ x1 ∈ X the mapping
φ(t; x0, x1) ≔ ∥F(x0 + t(x1 − x0)) − y∥2, 0 ⩽ t ⩽ 1,
has no local maxima. This condition holds, if the inequality ∂t2φ(0; x0, x1) > 0 is satisfied whenever ∂tφ(0; x0, x1) = 0. The computation of the derivatives of φ(·; x0, x1) at zero yields that
∂tφ(0; x0, x1) = 2〈F′(x0)(x1 − x0), F(x0) − y〉
and
∂t2φ(0; x0, x1) = 2〈F″(x0)(x1 − x0, x1 − x0), F(x0) − y〉 + 2∥F′(x0)(x1 − x0)∥2.
Consequently, Item 2 in Assumption 4.1 is satisfied if, for every x0, x1 ∈ X with x1 ≠ x0, the equality 〈F′(x0)(x1 − x0), F(x0) − y〉 = 0 implies that
〈F″(x0)(x1 − x0, x1 − x0), F(x0) − y〉 + ∥F′(x0)(x1 − x0)∥2 > 0.
4.1. Regularization on Lp-spaces
Let p ∈ (1, ∞) and set X = Lp(Ω, μ) for some σ-finite measure space (Ω, μ). Assume that Y is a Banach space and F : X → Y is a bounded linear operator with dense range. Let R(x) ≔ ∥x∥p and S(y1, y2) ≔ ∥y1 − y2∥. We thus consider the minimization problem
min ∥x∥p subject to ∥Fx − y∥ ⩽ β.
We now show that in this situation the assumptions of Proposition 4.3 are satisfied. To that end, let τ be the weak topology on Lp(Ω, μ). As Lp(Ω, μ) is reflexive, the level sets {x ∈ X : ∥x∥p ⩽ t} are weakly compact. Moreover, the mapping x ↦ ∥Fx − y∥ is weakly lower semi-continuous. Thus all the sets Φt(F, w, γ) are weakly compact. Example 4.5 shows that Item 2 in Assumption 4.1 holds. Item 3 follows from the density of the range of F. Finally, Item 4 holds, because R is norm continuous and convex.
Now assume that yk → y and βk → β. If β = 0 assume in addition that ∥yk − y∥ ⩽ βk. The strict convexity of R and the convexity of the mappings x ↦ ∥Fx − yk∥ imply that each set Σ(F, yk, βk) consists of a single element xk. Similarly, Σ(F, y, β) consists of a single element x†. From Proposition 4.3 we now obtain that (xk)k∈ℕ converges weakly to x† and ∥xk∥p → ∥x†∥p. Thus, in fact, the sequence (xk)k∈ℕ converges strongly to x† (see [44, Corollary 5.2.19]).
Let β > 0 and assume that Fk : X → Y is a sequence of bounded linear operators converging to F with respect to the strong topology on L(X, Y), that is, sup{∥Fkx − Fx∥ : ∥x∥ ⩽ 1} → 0. Let again βk → β and yk → y, and denote by xk the single element in Σ(Fk, yk, βk). Applying Proposition 4.4, we again obtain that xk → x†.
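The convergence statement can be checked numerically. The sketch below is our own illustration, not part of the paper: a random finite-dimensional matrix stands in for F, we take p = 2 so that the minimum-norm solution is available via the pseudoinverse, and scipy's SLSQP solver replaces a proper Lp solver.

```python
# Numerical illustration of convergence as beta -> 0 (Section 4.1, p = 2):
# the residual-method solution approaches the minimum-norm solution x_dag.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
A = rng.standard_normal((15, 40))
z = rng.standard_normal(40)
x_dag = np.linalg.pinv(A) @ (A @ z)        # minimum-norm solution of Ax = Az
y_exact = A @ x_dag

for beta in [0.5, 0.1, 0.02]:
    e = rng.standard_normal(15)
    y = y_exact + 0.9 * beta * e / np.linalg.norm(e)   # ||y - y_exact|| <= beta
    feas = {"type": "ineq",
            "fun": lambda x, y=y, b=beta: b - np.linalg.norm(A @ x - y)}
    x_beta = minimize(lambda x: x @ x, np.zeros(40),
                      method="SLSQP", constraints=[feas]).x
    print(beta, np.linalg.norm(x_beta - x_dag))        # error shrinks with beta
```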
Remark 4.7
The above results rely heavily on the assumption that p > 1, which implies that the space Lp(Ω, μ) is reflexive. In the case X = L1(Ω, μ), the level sets {x ∈ X : ∥x∥1 ⩽ t} fail to be weakly compact, and thus even the existence of a solution of Problem (4) need not hold.
Remark 4.8
The assertions concerning stability and convergence with respect to the norm topology remain valid, if X is any uniformly convex Banach space and R is the norm on X raised to some power p > 1. Also in this case, weak convergence together with convergence of the norms implies the strong convergence of a sequence [44, Theorem 5.2.18]. More generally, this property is called the Radon–Riesz property [44, p. 453]. Spaces satisfying this property are also called Efimov–Stechkin spaces in [55].
4.2. Regularization of probability measures
Let (Ω, d) be a separable, complete metric space with distance d and denote by P(Ω) the space of probability measures on the Borel sets of Ω. That is, P(Ω) consists of all positive Borel measures μ on Ω that satisfy μ(Ω) = 1. For p ⩾ 1 the p-Wasserstein distance on P(Ω) is defined as
Wp(μ1, μ2) ≔ inf{(∫Ω×Ω d(z1, z2)p dξ(z1, z2))1/p : ξ ∈ P(Ω × Ω), π1#ξ = μ1, π2#ξ = μ2}.
Here πi#ξ denotes the push forward of the measure ξ by means of the ith projection πi : Ω × Ω → Ω. In other words, π1#ξ(U) = ξ(U × Ω) and π2#ξ(U) = ξ(Ω × U) for every Borel set U ⊂ Ω.
Recall that the narrow topology on P(Ω) is induced by the action of elements of P(Ω) on bounded continuous functions u ∈ Cb(Ω). That is, a sequence (μk)k∈ℕ ⊂ P(Ω) converges narrowly to μ ∈ P(Ω), if
∫Ω u dμk → ∫Ω u dμ for every u ∈ Cb(Ω).
Lemma 4.9
Let p ⩾ 1. Then the Wasserstein distance Wp satisfies, for every μ1, μ2, ν ∈ P(Ω) and 0 ⩽ λ ⩽ 1, the inequality
(25) Wp(λμ1 + (1 − λ)μ2, ν)p ⩽ λWp(μ1, ν)p + (1 − λ)Wp(μ2, ν)p.
Moreover it is lower semi-continuous with respect to the narrow topology.
Proof
The lower semi-continuity of Wp has, for instance, been shown in [27]. In order to show the inequality (25), let ξ1, ξ2 ∈ P(Ω × Ω) be two measures that realize the infimum in the definition of Wp(μ1, ν) and Wp(μ2, ν), respectively. Then π1#(λξ1 + (1 − λ)ξ2) = λμ1 + (1 − λ)μ2 and π2#(λξ1 + (1 − λ)ξ2) = ν, which implies that the measure λξ1 + (1 − λ)ξ2 is admissible for measuring the distance between λμ1 + (1 − λ)μ2 and ν. Therefore
Wp(λμ1 + (1 − λ)μ2, ν)p ⩽ ∫Ω×Ω d(z1, z2)p d(λξ1 + (1 − λ)ξ2)(z1, z2) = λWp(μ1, ν)p + (1 − λ)Wp(μ2, ν)p,
which proves the assertion. □
Since P(Ω) is a convex subset of the space of all finite Radon measures on Ω, and the narrow topology on P(Ω) is the restriction of the weak∗ topology on that space considered as the dual of Cb(Ω), the space of bounded continuous functions on Ω, it is possible to apply the results of this section also to the situation where Y ≔ P(Ω) and S ≔ Wp. As an easy example, we consider the problem of density estimation from a finite number of measurements.
Example 4.10
Let Ω ⊂ ℝd be an open domain. Given a finite number of measurements {y1, … , yk} ⊂ Ω, the task of density estimation is the problem of finding a simple density function u on Ω in such a way that the measurements look like a typical sample of the distribution defined by u. Interpreting the measurements as a normalized sum of delta peaks, that is, equating {y1, … , yk} with the measure (1/k)∑i=1,…,k δyi, we can easily translate the problem into the setting of this paper.
We set X ≔ {u ∈ L1(Ω) : u ⩾ 0 and ∥u∥1 = 1}, which is a convex and closed subset of L1(Ω), set Y ≔ P(Ω), and consider the embedding F : X → Y that maps a density u to the measure Fu with Lebesgue density u. Then F is continuous with respect to the weak topology on X and the narrow topology on P(Ω). We now consider the distance measure S(μ, ν) ≔ Wp(μ, ν) for some p ⩾ 1 and the Euclidean distance d on Ω. Then Lemma 4.9 implies that, for every μ ∈ P(Ω), the mapping u ↦ Wp(Fu, μ) is weakly lower semi-continuous.
There are several possibilities for choosing a regularization functional on X. If Ω is bounded (or at least has finite Lebesgue measure), one can, for instance, use the Boltzmann–Shannon entropy defined by
R(u) ≔ ∫Ω u log u (with the convention 0 log 0 ≔ 0).
Then the theorems of De la Vallée Poussin and Dunford–Pettis (see [24, Theorems 2.29, 2.54]) show that the lower level sets of R are weakly pre-compact in L1(Ω). Moreover, the functional R is convex and therefore weakly lower semi-continuous (see [24, Theorem 5.14]). Using Proposition 3.2, we therefore obtain that the compactness required in Assumption 3.5 holds. Also, Lemma 4.9 shows that Item 2 in Assumption 4.1 holds. Items 1 and 4 are trivially satisfied. Finally, Item 3 follows from the density of dom R in X and the density (with respect to the narrow topology) of ran F in P(Ω). In addition, it has been shown in [59] that the weak convergence of a sequence (uk)k∈ℕ ⊂ L1(Ω) to u ∈ L1(Ω) together with the convergence R(uk) → R(u) implies that ∥uk − u∥1 → 0. Thus the topology τR coincides with the strong topology on X.
Proposition 4.3 therefore implies that the residual method is a stable and convergent regularization method with respect to the strong topology on X. More precisely, given a sample {y1, … , yk} ⊂ Ω, the density estimate u depends continuously on the positions yi of the measurements and on the regularization parameter β. In addition, if the number of measurements increases, then the Wasserstein distance between the sample and the true probability measure converges almost surely to zero. Thus also the reconstructed density converges to the true underlying density, provided the regularization parameters decrease to zero slowly enough.
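A crude discretization of this example can be run in a few lines of Python. The sketch below is our own illustration and makes several simplifying assumptions not taken from the paper: Ω = (0, 1), p = 1 (so that W1 reduces to the L1 distance of the distribution functions), a fixed grid, and scipy's general-purpose SLSQP solver. It is meant only to make the roles of the entropy R, the fidelity W1, and the parameter β concrete.

```python
# Toy 1-D density estimation: min entropy R(u) s.t. W_1(u, empirical) <= beta.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
samples = rng.normal(0.5, 0.1, size=30)    # measurement points y_i in (0, 1)
grid = np.linspace(0.0, 1.0, 81)
h = grid[1] - grid[0]
emp_cdf = np.mean(samples[None, :] <= grid[:, None], axis=1)

def w1(u):
    # In one dimension, W_1 equals the L^1 distance of the distribution
    # functions; both measures are compared through their CDFs on the grid.
    return float(np.sum(np.abs(np.cumsum(u) * h - emp_cdf)) * h)

def entropy(u):                            # Boltzmann-Shannon entropy R(u)
    u = np.maximum(u, 1e-12)               # guard the integrand u log u at 0
    return float(np.sum(u * np.log(u)) * h)

beta = 0.02
cons = [{"type": "eq", "fun": lambda u: np.sum(u) * h - 1.0},   # total mass 1
        {"type": "ineq", "fun": lambda u: beta - w1(u)}]        # fidelity
res = minimize(entropy, np.ones_like(grid), method="SLSQP",
               bounds=[(0.0, None)] * grid.size, constraints=cons)
u_hat = res.x                              # regularized density estimate
print(res.success, w1(u_hat) <= beta + 1e-6)
```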
5. Convergence rates
In this section we derive quantitative estimates (convergence rates) for the difference between regularized solutions xβ ∈ Σ(F, y, β) and the exact solution x† of the equation F(x†) = y†.
For Tikhonov regularization, convergence rates have been derived in [3,6,23,36,46,47] in terms of the Bregman distance. However, its classical definition,
(26) Dξ(x, x†) ≔ R(x) − R(x†) − 〈ξ, x − x†〉,
where ξ ∈ ∂R(x†), requires the space X to be linear and the functional R to be convex, as the (standard) subdifferential is only defined for convex functionals. In the sequel we will extend the notion of subdifferentials and Bregman distances to work for arbitrary functionals on arbitrary sets X. To that end, we make use of a generalized notion of convexity, which is not based on the duality between a Banach space X and its dual X∗ but on more general pairings (see [53]). The same notion has recently been used in [29] for the derivation of convergence rates for non-convex regularization functionals.
Definition 5.1 Generalized Bregman Distance —
Let W be a set of functions w : X → ℝ, let R : X → [0, ∞] be a functional, and let x† ∈ X.
- (a)
The functional R is convex at x† with respect to W, if
(27) R(x†) = sup{w(x†) + c : w ∈ W, c ∈ ℝ, w(x) + c ⩽ R(x) for all x ∈ X}.
- (b)
Let R be convex at x† with respect to W. The subdifferential ∂WR(x†) at x† with respect to W is defined as
∂WR(x†) ≔ {w ∈ W : R(x) − R(x†) ⩾ w(x) − w(x†) for all x ∈ X}.
- (c)
Let R be convex at x† with respect to W. For w ∈ ∂WR(x†) and x ∈ X, the Bregman distance between x† and x with respect to w is defined as
(28) Dw(x, x†) ≔ R(x) − R(x†) − w(x) + w(x†).
Remark 5.2
Let X be a Banach space and set W = X∗. Then a functional R : X → [0, ∞] is convex with respect to W, if and only if it is lower semi-continuous and convex in the classical sense. Moreover, at every x† ∈ X, the subdifferential with respect to W coincides with the classical subdifferential ∂R(x†). Finally, the standard Bregman distance, defined by (26), coincides with the Bregman distance obtained by means of Definition 5.1.
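To make the definition concrete in the classical setting of Remark 5.2, the following Python snippet (our own illustration; the helper bregman and both example functionals are assumptions, not taken from the paper) evaluates the standard Bregman distance (26) for two convex choices of R on ℝn.

```python
# Classical Bregman distance D_xi(x, x_dag) = R(x) - R(x_dag) - <xi, x - x_dag>.
import numpy as np

def bregman(R, xi, x, x_dag):
    """Evaluate (26) for a convex R with subgradient xi at x_dag."""
    return R(x) - R(x_dag) - xi @ (x - x_dag)

x_dag = np.array([1.0, -2.0])
x = np.array([0.5, 0.0])
# R(x) = ||x||_2^2 with gradient 2*x_dag: the distance equals ||x - x_dag||^2.
print(bregman(lambda z: z @ z, 2 * x_dag, x, x_dag),
      np.sum((x - x_dag) ** 2))
# R(x) = ||x||_1 with subgradient sign(x_dag) (valid since x_dag has no zeros);
# the result can vanish for x != x_dag, a known degeneracy of the l1 case.
print(bregman(lambda z: np.abs(z).sum(), np.sign(x_dag), x, x_dag))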
In the following, let W be a given family of real valued functions on X. Convergence rates in Bregman distance with respect to W will be derived under the following assumption:
Assumption 5.3
- 1.
There exists a monotonically increasing function ψ : [0, ∞) → [0, ∞) such that
(29) S(y1, y2) ⩽ ψ(S(y1, z) + S(y2, z)) for every y1, y2, z ∈ Y.
- 2.
For some given point x† ∈ X, the functional R is convex at x† with respect to W.
- 3.
There exist w ∈ ∂WR(x†) and constants γ1 ∈ [0, 1), γ2 ⩾ 0 such that
(30) w(x†) − w(x) ⩽ γ1Dw(x, x†) + γ2S(F(x), F(x†))
for every x ∈ X with R(x) ⩽ R(x†).
In a Banach space setting, the source inequality (30) has already been used in [36,50] to derive convergence rates for Tikhonov regularization with convex functionals, and in [34] for multiparameter regularization. Eq. (29) serves as a substitute for the missing triangle inequality in the non-metric case.
Theorem 5.4 Convergence Rates —
Let Assumption 5.3 hold and let y ∈ Y satisfy S(F(x†), y) ⩽ β. Then, the estimate
(31) Dw(xβ, x†) ⩽ (γ2/(1 − γ1)) ψ(2β)
holds for all xβ ∈ Σ(F, y, β).
Proof
Let xβ ∈ Σ(F, y, β). The assumption S(F(x†), y) ⩽ β implies that x† ∈ Φ(F, y, β) and therefore R(xβ) ⩽ R(x†). Consequently,
Dw(xβ, x†) = R(xβ) − R(x†) − w(xβ) + w(x†) ⩽ w(x†) − w(xβ).
Together with (30) it follows that
Dw(xβ, x†) ⩽ γ1Dw(xβ, x†) + γ2S(F(xβ), F(x†)).
The assumption γ1 ∈ [0, 1) implies the inequality
(32) Dw(xβ, x†) ⩽ (γ2/(1 − γ1)) S(F(xβ), F(x†)).
Since xβ ∈ Φ(F, y, β), we have S(F(xβ), y) ⩽ β. Therefore (32) and (29) imply
Dw(xβ, x†) ⩽ (γ2/(1 − γ1)) ψ(S(F(xβ), y) + S(F(x†), y)) ⩽ (γ2/(1 − γ1)) ψ(2β),
which concludes the proof. □
Remark 5.5
Typically, convergence rates are formulated in a setting which slightly differs from the one of Theorem 5.4, see [6,21,36,50]. There one assumes the existence of an R-minimizing solution x† ∈ X of the equation F(x†) = y†, for some exact data y† ∈ ran(F). Instead of y†, only noisy data y ∈ Y and the error bound S(y†, y) ⩽ β are given.
For this setting, (31) implies the rate
Dw(xβ, x†) = O(ψ(2β)) as β → 0,
where xβ ∈ Σ(F, y, β) denotes any regularized solution.
Remark 5.6
The inequality (30) is equivalent to the existence of η1, η2 > 0 such that
(33) w(x†) − w(x) ⩽ η1(R(x) − R(x†)) + η2S(F(x), F(x†)).
Indeed, we obtain (33) from (30) by setting η1 ≔ γ1/(1 − γ1) and η2 ≔ γ2/(1 − γ1). Conversely, (33) implies (30) by taking γ1 ≔ η1/(1 + η1) and γ2 ≔ η2/(1 + η1).
5.1. Convergence rates in Banach spaces
In the following, assume that X and Y are Banach spaces with norms ∥ · ∥X and ∥ · ∥Y, and assume that R is a convex and lower semi-continuous functional on X. We set S(y1, y2) ≔ ∥y1 − y2∥Y and let Dξ with ξ ∈ ∂R(x†) denote the classical Bregman distance (see Remark 5.2).
If x† satisfies the inequality
(34) 〈ξ, x† − x〉 ⩽ γ1Dξ(x, x†) + γ2∥F(x) − F(x†)∥ for all x ∈ X,
and y are given data with ∥F(x†) − y∥ ⩽ β, then Theorem 5.4 implies the convergence rate Dξ(xβ, x†) = O(β). In the special case where X is a Hilbert space and R ≔ ∥ · ∥2 we have Dξ(x, x†) = ∥x − x†∥2, which implies the convergence rate O(β1/2) with respect to the norm. In Proposition 5.8 we show that the same convergence rate holds on any 2-convex space. For r-convex Banach spaces with r > 2, we derive the rate O(β1/r).
Definition 5.7
The Banach space X is called r-convex (or is said to have a modulus of convexity of power type r), if there exists a constant C > 0 such that
inf{1 − ∥(x + x̃)/2∥ : ∥x∥ = ∥x̃∥ = 1, ∥x − x̃∥ ⩾ ε} ⩾ Cεr
for all ε ∈ [0, 2].
Note that every Hilbert space is 2-convex and that there is no Banach space (with dim (X) ⩾ 2) that is r-convex for some r < 2 (see [42, pp. 63ff]).
Proposition 5.8 Convergence rates in the norm —
Let X be an r-convex Banach space with r ⩾ 2 and let R(x) ≔ ∥x∥r/r. Assume that there exist x† ∈ X, a subgradient ξ ∈ ∂R(x†), and constants γ1 ∈ [0, 1), γ2 ⩾ 0, β0 > 0 such that (34) holds for every x ∈ X with ∥F(x) − F(x†)∥ ⩽ 2β0.
Then there exists a constant c > 0 such that
(35) ∥xβ − x†∥ ⩽ cβ1/r
for all β ∈ [0, β0], all y ∈ Y with ∥F(x†) − y∥ ⩽ β, and all xβ ∈ Σ(F, y, β).
Proof
Let Jr : X → X∗ denote the duality mapping with respect to the weight function s ↦ sr−1. In [60, Eq. (2.17)′] it is shown that there exists a constant K > 0 such that
(36) ∥x† + z∥r ⩾ ∥x†∥r + r〈jr(x†), z〉 + K∥z∥r
for all x†, z ∈ X and jr(x†) ∈ Jr(x†). By Asplund’s theorem [13, Chap. 1, Theorem 4.4], the duality mapping Jr equals the subgradient of R = ∥ · ∥r/r. Therefore, by taking z = x − x† and jr(x†) = ξ, inequality (36) implies
(37) Dξ(x, x†) ⩾ (K/r)∥x − x†∥r.
Consequently, (35) follows from Theorem 5.4. □
Exact values for the constant K in (37) (and thus for the constant c in (35)) can be derived from [60]. Bregman distances satisfying (37) are called r-coercive in [35]. This r-coercivity has already been applied in [2] for the minimization of Tikhonov functionals in Banach spaces.
Example 5.9
The spaces X = Lp(Ω, μ) for p ∈ (1, 2] and some σ-finite measure space (Ω, μ) are examples of 2-convex Banach spaces (see [42, p. 81, remarks following Theorem 1.f.1]). Consequently we obtain for these spaces the convergence rate O(β1/2). The spaces X = Lp(Ω, μ) for p > 2 are only p-convex, leading to the rate O(β1/p) in those spaces.
Remark 5.10
The book [50, pp. 70ff] clarifies the relation between (34) and the source conditions used to derive convergence rates for convex functionals on Banach spaces. In particular, it is shown that, if F and R are Gâteaux differentiable at x† and there exist γ > 0 and ω ∈ Y∗ such that γ∥ω∥ < 1 and
(38) ξ ≔ R′(x†) = F′(x†)∗ω,
(39) ∥F(x) − F(x†) − F′(x†)(x − x†)∥ ⩽ γDξ(x, x†)
for every x ∈ X, then (34) holds on X with γ1 = γ∥ω∥ and γ2 = ∥ω∥. (Here F′(x†)∗ : Y∗ → X∗ is the adjoint of F′(x†).) Conversely, if ξ satisfies (34), then 〈ξ, x† − x〉 ⩽ γ2∥F′(x†)(x† − x)∥ holds for every x ∈ X.
In the particular case that F : X → Y is linear and bounded, the inequality (39) is trivially satisfied with γ = 0. Thus, (34) is equivalent to the sourcewise representability of the subgradient, ξ = F∗ω.
6. Sparse regularization
Let Λ be an at most countable index set, define
R(x) ≔ ∥x∥pp ≔ ∑λ∈Λ |xλ|p,
and assume that F : X ≔ ℓ2(Λ) → Y is a bounded linear operator with dense range in the Hilbert space Y. We consider for p ∈ (0, 2) the minimization problem
(40) min ∥x∥pp subject to ∥Fx − y∥ ⩽ β.
For p > 1, the subdifferential ∂R(x) is at most single valued and is identified with its single element. (The subdifferential may be empty, since we consider R as a function on ℓ2(Λ).)
Remark 6.1 Compressed Sensing —
In a finite dimensional setting with p = 1, the minimization problem (40) has received a lot of attention during the last years under the name of compressed sensing (see [7,8,10,16–18,20,26,57]). Under some assumptions, the solution of (40) with y = Fx† and β = 0 has been shown to recover x† exactly, provided the support supp(x†) ≔ {λ ∈ Λ : x†λ ≠ 0} has sufficiently small cardinality (that is, x† is sufficiently sparse). Results for p < 1 can be found in [11,15,25,48].
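For illustration, the constrained problem (40) with p = 1 can be solved in small dimensions with generic tools via the standard splitting x = u − v with u, v ⩾ 0, under which minimizing ∑λ(uλ + vλ) recovers ∥x∥1. The Python sketch below is our own toy example (random F, synthetic sparse x†, scipy's SLSQP as solver), not an algorithm proposed in the paper; dedicated ℓ1 solvers would be used in practice.

```python
# Toy basis-pursuit denoising in constrained form:
#   min ||x||_1  subject to  ||F x - y|| <= beta,  via x = u - v, u, v >= 0.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
m, n = 30, 80
F = rng.standard_normal((m, n)) / np.sqrt(m)
x_dag = np.zeros(n)
x_dag[rng.choice(n, size=4, replace=False)] = rng.standard_normal(4)
beta = 0.05
e = rng.standard_normal(m)
y = F @ x_dag + 0.5 * beta * e / np.linalg.norm(e)   # ||F x_dag - y|| <= beta

def feas(z):                               # beta - ||F(u - v) - y|| >= 0
    u, v = z[:n], z[n:]
    return beta - np.linalg.norm(F @ (u - v) - y)

res = minimize(lambda z: float(np.sum(z)), np.zeros(2 * n), method="SLSQP",
               bounds=[(0.0, None)] * (2 * n),
               constraints=[{"type": "ineq", "fun": feas}])
x_beta = res.x[:n] - res.x[n:]
print(np.linalg.norm(x_beta - x_dag))      # small for sufficiently sparse x_dag
```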
In this section we prove well-posedness of (40) and derive convergence rates in a possibly infinite dimensional setting. This inverse problems point of view has so far only been treated for the case p = 1 (see [32]). The more general setting has only been considered for Tikhonov regularization,
min ∥Fx − y∥2 + α∥x∥pp
(see [12,14,28,31,43,61]).
6.1. Well-posedness
In the following, τ denotes the weak topology on ℓ2(Λ), and τp denotes the topology τR as in Definition 3.4. Then a sequence (xk)k∈ℕ converges to x ∈ ℓ2(Λ) with respect to τp if and only if xk → x weakly and ∥xk∥p → ∥x∥p.
Proposition 6.2 Well-Posedness —
Let F : ℓ2(Λ) → Y be a bounded linear operator with dense range. Then constrained ℓp regularization with 0 < p < 2 is well-posed:
- 1.
Existence: For every β > 0 and y ∈ Y, the set of regularized solutions Σ(F, y, β) is non-empty.
- 2.
Stability: Let (βk) and (yk) be sequences with βk → β > 0 and yk → y ∈ Y. Then ∅ ≠ τp − Lim supk→∞Σ(F, yk, βk) ⊂ Σ(F, y, β).
- 3.
Convergence: Let ∥yk − y∥ ⩽ βk → 0 and assume that the equation Fx = y has a solution in ℓp(Λ). Then we have
∅ ≠ τp − Lim supk→∞Σ(F, yk, βk) ⊂ Σ(F, y, 0).
Moreover, if the equation Fx = y has a unique R-minimizing solution x†, then we have τp − Lim supk→∞Σ(F, yk, βk) = {x†}.
Proof
In order to prove the existence of minimizers, we apply Theorem 3.1 by showing that Φt(F, y, β) is compact with respect to the weak topology on ℓ2(Λ) for every t > 0 and is nonempty for some t. Because F has dense range and the finitely supported sequences belong to dom R, the set
Φt(F, y, β) = {x ∈ ℓ2(Λ) : ∥Fx − y∥ ⩽ β and ∥x∥pp ⩽ t}
is non-empty for t large enough.
It remains to show that the sets Φt(F, y, β) are weakly compact in ℓ2(Λ) for every positive t. The functional R is weakly lower semi-continuous on ℓ2(Λ) as the sum of non-negative and weakly continuous functionals (see [19]). Moreover, the mapping F is weakly continuous, and therefore x ↦ ∥Fx − y∥2 is weakly lower semi-continuous, too. The estimate ∥x∥2 ⩽ ∥x∥p (see [31, Eq. (5)]) shows that R is weakly coercive. Therefore the sets Φt(F, y, β) are weakly compact for all t > 0, see Proposition 3.2.
Taking into account Example 4.5, it follows that R, S, and F satisfy Assumption 4.1. Consequently, Items 2 and 3 follow from Proposition 4.3. □
Remark 6.3
In the case p > 1, the functional R is strictly convex, and therefore the R-minimizing solution x† of Fx = y is unique. Consequently the equality
τp − Limk→∞Σ(F, yk, βk) = {x†}
holds for every y in the range of the operator F.
Remark 6.4
For the convex case p ⩾ 1, it is shown in [31, Lemma 2] that the τp convergence of a sequence (xk)k∈ℕ to x already implies ∥xk − x∥p → 0. In particular, the topology τp is stronger than the topology induced by the norm on ℓp(Λ). A similar result for 0 < p < 1 has been derived in [30].
6.2. Convergence rates
In the following, we derive two types of convergence rates results with respect to the ℓ2-norm: the convergence rate O(β1/2) for p ∈ (1, 2), and, for sparse sequences, the convergence rate O(β1/p) for p ∈ [1, 2) and O(β) for p ∈ (0, 1). Here and in the following, x† ∈ ℓ2(Λ) is called sparse, if its support
supp(x†) ≔ {λ ∈ Λ : x†λ ≠ 0}
is finite. The convergence rates results for constrained ℓp regularization, derived in this section, are summarized in Table 1.
Table 1.
Convergence rates for constrained ℓp regularization.
Rate | Norm | Premises (besides ∥F(x†) − y∥ ⩽ β) | Results
---|---|---|---
β1/2 | ∥ · ∥2 | p ∈ (1, 2) | Proposition 6.5
β1/2 | ∥ · ∥p | p ∈ (1, 2) | Remark 6.6
β1/p | ∥ · ∥2 | p ∈ [1, 2), sparsity, injectivity on V | Proposition 6.7
β | ∥ · ∥2 | p ∈ (0, 1), uniqueness of x†, sparsity, injectivity on V | Proposition 6.7
For p ⩾ 1, the same type of results (Propositions 6.5, 6.7) has also been obtained for ℓp-Tikhonov regularization in [31,50]. The results for the non-convex case, p ∈ (0, 1), are based on [30], where the same rate for non-convex Tikhonov regularization with a priori parameter choice has been derived (see also [29]). Similar, but weaker, results have already been derived in [5,28,61] in the context of Tikhonov regularization. In [61], the conditions for the convergence rates result for non-convex regularization are basically the same as in Proposition 6.7, but only a rate of order O(β1/2) has been obtained. In [5,28], a linear convergence rate is proven, but with a considerably stronger range condition: each standard basis vector eλ, λ ∈ Λ, has to satisfy eλ ∈ ran F∗.
Proposition 6.5
Let p ∈ (1, 2), let x† ∈ ℓp(Λ), and let F : ℓ2(Λ) → Y be a bounded linear operator. Moreover, assume that there exists ω ∈ Y with F∗ω = ∂R(x†). Then the set Σ(F, y, β) =: {xβ} consists of a single element and there exists a constant dp > 0 only depending on p, such that
(41) ∥xβ − x†∥2 ⩽ dp (∥x†∥p)(2−p)/2 (∥ω∥β)1/2
for all β > 0 and y ∈ Y with ∥F(x†) − y∥ ⩽ β.
Proof
The assumption ξ ≔ F∗ω = ∂R(x†) then implies that (30) is satisfied with W = X∗, γ1 = 0 and γ2 = ∥ω∥. Theorem 5.4 therefore implies the inequality
(42) Dξ(xβ, x†) ⩽ 2∥ω∥β.
From [31, Lemma 10] we obtain the inequality
(43) ∥x − x†∥2 ⩽ dp (max{∥x∥p, ∥x†∥p})(2−p)/2 Dξ(x, x†)1/2
for all x ∈ ℓp(Λ). Now, (41) follows from (42) and (43), because ∥xβ∥p ⩽ ∥x†∥p. □
Remark 6.6
Since ℓp(Λ) is 2-convex (see [42]) and continuously embedded in ℓ2(Λ), Proposition 5.8 provides an alternative estimate for xβ − x† in terms of the stronger norm ∥ · ∥p. The prefactor in (35), however, is constant, whereas the prefactor in (41) depends on ∥x†∥p. Thus the two estimates are somewhat independent of each other.
Proposition 6.7 Sparse Regularization —
Let p ∈ (0, 2), let x† ∈ ℓp(Λ) be sparse, and let F : ℓ2(Λ) → Y be bounded linear. Define V ≔ span{eλ : λ ∈ supp(x†)}, where eλ denotes the λth standard basis vector, and assume that one of the following conditions holds:
- •
We have p ∈ (1, 2), there exists ω ∈ Y with F∗ω = ∂R(x†), and F is injective on V.
- •
We have p = 1, there exist ξ ∈ ∂R(x†) and ω ∈ Y with ξ = F∗ω, and F is injective on V.
- •
We have p ∈ (0, 1), x† is the unique R-minimizing solution of Fx = Fx†, and F is injective on V.
Then every xβ ∈ Σ(F, y, β) with ∥F(x†) − y∥ ⩽ β satisfies, as β → 0,
∥xβ − x†∥2 = O(β1/p) in the case p ∈ [1, 2), and ∥xβ − x†∥2 = O(β) in the case p ∈ (0, 1).
Proof
Assume first that p ∈ (1, 2). Define W ≔ {x ↦ −c̃∥x − x̃∥2p : c̃ ⩾ 0, x̃ ∈ ℓ2(Λ)}. Then the functional R is convex at x† with respect to W. Moreover it has been shown in [31, Proof of Theorem 14] that there exists w(x) = −c∥x − x†∥2p ∈ ∂WR(x†) ⊂ W such that for some η1, η2 > 0 the inequality
(44) w(x†) − w(x) = c∥x − x†∥2p ⩽ η1(R(x) − R(x†)) + η2∥F(x − x†)∥
holds for all x ∈ Φ(F, y†, 2β), provided β is small enough. Using Remark 5.6, Theorem 5.4 therefore implies the rate
Dw(xβ, x†) = O(β).
The assertion then follows from the fact that the norm on ℓ2(Λ) can be bounded by the Bregman distance Dw.
The proofs for p = 1 and p ∈ (0, 1) are similar; the required estimate (44) has been shown for p = 1 in [31, Proof of Theorem 15] and for p ∈ (0, 1) (with the exponent p in (44) replaced by 1, which yields the linear rate) in [30, Eq. (7)]. □
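The linear rate in the last row of Table 1 can be observed numerically. The following Python sketch is our own heuristic check, not a proof: it reuses the toy splitting solver from the sketch after Remark 6.1 on synthetic data with a sparse x†, solves for a decreasing sequence of noise levels, and fits the slope of log(error) against log(β), which should come out close to 1 when the premises of Proposition 6.7 hold for the random instance.

```python
# Heuristic rate check for p = 1: error ||x_beta - x_dag|| should scale ~ beta.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
m, n = 25, 60
F = rng.standard_normal((m, n)) / np.sqrt(m)
x_dag = np.zeros(n)
x_dag[:3] = [1.0, -2.0, 0.5]               # sparse exact solution

def solve_l1(y, beta):                     # min ||x||_1 s.t. ||Fx - y|| <= beta
    cons = [{"type": "ineq",
             "fun": lambda z: beta - np.linalg.norm(F @ (z[:n] - z[n:]) - y)}]
    z = minimize(lambda z: float(np.sum(z)), np.zeros(2 * n), method="SLSQP",
                 bounds=[(0.0, None)] * (2 * n), constraints=cons).x
    return z[:n] - z[n:]

betas = np.array([0.2, 0.1, 0.05, 0.025])
errs = []
for b in betas:
    e = rng.standard_normal(m)
    y = F @ x_dag + 0.9 * b * e / np.linalg.norm(e)
    errs.append(np.linalg.norm(solve_l1(y, b) - x_dag))
print(np.polyfit(np.log(betas), np.log(errs), 1)[0])   # slope close to 1
```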
7. Conclusion
Due to modeling, computing, and measurement errors, the solution of an ill-posed equation F(x) = y, even if it exists, typically yields unacceptable results. The residual method replaces the exact solution by the set Σ(F, y, β) of all minimizers of R(x) subject to S(F(x), y) ⩽ β, where R is a stabilizing functional and S(F(x), y) denotes a distance measure between F(x) and y. This paper shows that in a very general setting Σ(F, y, β) is stable with respect to perturbations of the data y and the operator F (Lemma 3.6 and Theorem 3.9), and that the regularized solutions converge to R-minimizing solutions of F(x) = y as β → 0 (Theorem 3.10). In particular the stability issue has hardly been considered so far in the literature.
In the case where F acts between linear spaces X and Y, stability and convergence have been shown under a list of reasonable properties (see Assumption 4.1). These assumptions are satisfied for bounded linear operators, but also for a certain class of nonlinear operators (Example 4.6). If Y is reflexive, X satisfies the Radon–Riesz property, F is a closed linear operator, and R and S are given by powers of the norms on X and Y, the set Σ(F, y, β) consists of a single element xβ. This element is shown to converge strongly to the minimal norm solution x† as β → 0. In this special situation, norm convergence has also been shown in [39, Theorem 3.4.1].
In Section 5 we have derived quantitative estimates (convergence rates) for the difference between x† and minimizers xβ ∈ Σ(F, y, β) in terms of a (generalized) Bregman distance. All these estimates hold provided S(F(x†), y) ⩽ β and a source inequality introduced in [36] is satisfied. For linear operators, the required source inequality follows from a sourcewise representation of a subgradient of R at x†. This carries over the result of [6] to constrained regularization. In the special case that X is an r-convex Banach space with r ⩾ 2 and R is the rth power of the norm on X, we have obtained convergence rates with respect to the norm. The spaces X = Lp(Ω) for p ∈ (1, 2] are examples of 2-convex Banach spaces, leading to the rate O(β1/2) in those spaces.
As an application of our rather general results we have investigated sparse ℓp regularization with p ∈ (0, 2). We have shown well-posedness in both the convex (p ⩾ 1) and the non-convex case (p < 1). In addition, we have studied the reconstruction of sparse sequences. There we have derived the improved convergence rates O(β1/p) for the convex and O(β) for the non-convex case.
Acknowledgement
This work has been supported by the Austrian Science Fund (FWF), projects 9203-N12 and S10505-N20.
Contributor Information
Markus Grasmair, Email: markus.grasmair@univie.ac.at.
Markus Haltmeier, Email: markus.haltmeier@mpibpc.mpg.de.
Otmar Scherzer, Email: otmar.scherzer@univie.ac.at.
Appendix A. Auxiliary results
Lemma A.1
Assume that (yk)k∈ℕ converges S-uniformly to y ∈ Y and the mappings Fk : X → Y converge locally S-uniformly to F : X → Y.
Then, for every β > 0, t > 0 and ε > 0, there exists some k0 ∈ ℕ such that
(A.1) Φt′(Fk, yk, β) ⊂ Φt′(F, y, β + ε) and Φt′(F, y, β) ⊂ Φt′(Fk, yk, β + ε)
for every t′ ⩽ t and k ⩾ k0.
Proof
Since yk → y S-uniformly and Fk → F locally S-uniformly, there exists k0 ∈ ℕ such that
(A.2) |S(Fk(x), yk) − S(F(x), y)| ⩽ ε
for all x ∈ X with R(x) ⩽ t and k ⩾ k0.
Now let t′ ⩽ t and let x ∈ Φt′(Fk, yk, β). Then (A.2) implies that
S(F(x), y) ⩽ S(Fk(x), yk) + ε ⩽ β + ε,
and thus x ∈ Φt′(F, y, β + ε), which proves the first inclusion in (A.1). The second inclusion is shown in a similar manner. □
The following lemma states that the value of the minimization problem (4) behaves well as the parameter β decreases.
Lemma A.2
Assume that Φt(F, y, γ) is τ-compact for every γ ⩾ 0 and every t ⩾ 0. Then the value v of the constrained optimization problem (4) is right continuous in β, that is,
(A.3) v(F, y, β) = supε>0 v(F, y, β + ε).
Proof
Since Φ(F, y, β) ⊂ Φ(F, y, β + ε), it follows that v(F, y, β) ⩾ v(F, y, β + ε) for every ε > 0, and therefore v(F, y, β) ⩾ supε>0v(F, y, β + ε).
In order to show the converse inequality, let δ > 0 and set t ≔ v(F, y, β) − δ. Then the definition of v(F, y, β) implies that Φt(F, y, β) = ∅. Since (cf. Lemma 2.2)
(A.4) Φt(F, y, β) = ⋂ε>0 Φt(F, y, β + ε),
and the right hand side of (A.4) is a decreasing family of compact sets, it follows that already Φt(F, y, β + ε0) = ∅ for some ε0 > 0, and thus v(F, y, β + ε0) ⩾ v(F, y, β) − δ.
Since δ was arbitrary, this shows the assertion. □
Lemma A.3
Let (Σk)k∈ℕ be a sequence of subsets of X. Then U = τ − Limk→∞Σk, if and only if every subsequence (Σkj)j∈ℕ satisfies
U = τ − Lim supj→∞ Σkj.
Proof
See [41, Section 29.V]. □
References
- 1. Acar R., Vogel C.R. Analysis of bounded variation penalty methods for ill-posed problems. Inverse Prob. 1994;10(6):1217–1229.
- 2. Bonesky T., Kazimierski K.S., Maass P., Schöpfer F., Schuster T. Minimization of Tikhonov functionals in Banach spaces. Abstr. Appl. Anal. 2008: Art. ID 192679 (19 pp.).
- 3. Boţ R.I., Hofmann B. An extension of the variational approach for obtaining convergence rates in regularization of nonlinear ill-posed problems. J. Integral Equ. Appl. 2010;22:369–392.
- 4. Braides A. Γ-convergence for Beginners. vol. 22 of Oxford Lecture Series in Mathematics and its Applications. Oxford University Press; Oxford: 2002.
- 5. Bredies K., Lorenz D. Regularization with non-convex separable constraints. Inverse Prob. 2009;25(8):085011 (14 pp.).
- 6. Burger M., Osher S. Convergence rates of convex variational regularization. Inverse Prob. 2004;20(5):1411–1421.
- 7. Candès E.J. The restricted isometry property and its implications for compressed sensing. C.R. Math. Acad. Sci. Paris. 2008;346(9–10):589–592.
- 8. Candès E.J., Romberg J., Tao T. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory. 2006;52(2):489–509.
- 9. Candès E.J., Romberg J.K., Tao T. Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 2006;59(8):1207–1223.
- 10. Candès E.J., Tao T. Near-optimal signal recovery from random projections: universal encoding strategies? IEEE Trans. Inf. Theory. 2006;52(12).
- 11. Chartrand R. Exact reconstructions of sparse signals via nonconvex minimization. IEEE Signal Process. Lett. 2007;14:707–710.
- 12. Chaux C., Combettes P.L., Pesquet J.-C., Wajs V.R. A variational formulation for frame-based inverse problems. Inverse Prob. 2007;23(4):1495–1518.
- 13. Cioranescu I. Geometry of Banach Spaces, Duality Mappings and Nonlinear Problems. vol. 62 of Mathematics and its Applications. Kluwer; Dordrecht: 1990.
- 14. Daubechies I., Defrise M., De Mol C. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 2004;57(11):1413–1457.
- 15. Davies M.E., Gribonval R. Restricted isometry constants where ℓp sparse recovery can fail for 0 < p ⩽ 1. IEEE Trans. Inf. Theory. 2009;55(5):2203–2214.
- 16. Donoho D.L. For most large underdetermined systems of equations, the minimal ℓ1-norm near-solution approximates the sparsest near-solution. Commun. Pure Appl. Math. 2006;59(7):907–934.
- 17. Donoho D.L. For most large underdetermined systems of linear equations the minimal ℓ1-norm solution is also the sparsest solution. Commun. Pure Appl. Math. 2006;59(6):797–829.
- 18. Donoho D.L., Elad M., Temlyakov V.N. Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Trans. Inf. Theory. 2006;52(1):6–18.
- 19. Ekeland I., Temam R. Convex Analysis and Variational Problems. North-Holland; Amsterdam: 1976.
- 20. Elad M. Sparse and Redundant Representations. From Theory to Applications in Signal and Image Processing. Springer; New York: 2010.
- 21. Engl H.W., Hanke M., Neubauer A. Regularization of Inverse Problems. vol. 375 of Mathematics and its Applications. Kluwer Academic Publishers Group; Dordrecht: 1996.
- 22. Engl H.W., Kunisch K., Neubauer A. Convergence rates for Tikhonov regularisation of nonlinear ill-posed problems. Inverse Prob. 1989;5(3):523–540.
- 23. Flemming J., Hofmann B. A new approach to source conditions in regularization with general residual term. Numer. Funct. Anal. Optim. 2010;31(3):245–284.
- 24. Fonseca I., Leoni G. Modern Methods in the Calculus of Variations: Lp Spaces. Springer Monographs in Mathematics. Springer; New York: 2007.
- 25. Foucart S., Lai M. Sparsest solutions of underdetermined linear systems via ℓq-minimization for 0 < q ⩽ 1. Appl. Comput. Harmon. Anal. 2009;26(3):395–407.
- 26. Fuchs J.J. Recovery of exact sparse representations in the presence of bounded noise. IEEE Trans. Inf. Theory. 2005;51(10):3601–3608.
- 27. Givens C.R., Shortt R.M. A class of Wasserstein metrics for probability distributions. Michigan Math. J. 1984;31(2):231–240.
- 28. Grasmair M. Well-posedness and convergence rates for sparse regularization with sublinear lq penalty term. Inverse Prob. Imaging. 2009;3(3):383–387.
- 29. Grasmair M. Generalized Bregman distances and convergence rates for non-convex regularization methods. Inverse Prob. 2010;26(11):115014.
- 30. Grasmair M. Non-convex sparse regularisation. J. Math. Anal. Appl. 2010;365(1):19–28.
- 31. Grasmair M., Haltmeier M., Scherzer O. Sparse regularization with lq penalty term. Inverse Prob. 2008;24(5):055020.
- 32. Grasmair M., Haltmeier M., Scherzer O. Necessary and sufficient conditions for linear convergence of ℓ1-regularization. Commun. Pure Appl. Math. 2011;64(2):161–182.
- 33. Groetsch C.W. The Theory of Tikhonov Regularization for Fredholm Equations of the First Kind. Pitman; Boston: 1984.
- 34. Hein T. Convergence rates for multi-parameter regularization in Banach spaces. Int. J. Pure Appl. Math. 2008;43(4):593–614.
- 35. Hein T., Hofmann B. Approximate source conditions for nonlinear ill-posed problems – chances and limitations. Inverse Prob. 2009;25:035003.
- 36. Hofmann B., Kaltenbacher B., Pöschl C., Scherzer O. A convergence rates result for Tikhonov regularization in Banach spaces with non-smooth operators. Inverse Prob. 2007;23(3):987–1010.
- 37. Hofmann B., Yamamoto M. Convergence rates for Tikhonov regularization based on range inclusions. Inverse Prob. 2005;21(3):805–820.
- 38. Ivanov V.K. Ill-posed problems in topological spaces. Sibirsk. Mat. Ž. 1969;10:1065–1074 (in Russian).
- 39. Ivanov V.K., Vasin V.V., Tanana V.P. Theory of Linear Ill-Posed Problems and its Applications. second ed. Inverse and Ill-posed Problems Series. VSP; Utrecht: 2002. Translated and revised from the 1978 Russian original.
- 40. Kelley J.L. General Topology. D. Van Nostrand Company; Toronto–New York–London: 1955.
- 41. Kuratowski K. Topology. vol. I. New edition, revised and augmented; translated from the French by J. Jaworowski. Academic Press; New York: 1966.
- 42. Lindenstrauss J., Tzafriri L. Classical Banach Spaces. II. Function Spaces. vol. 97 of Ergebnisse der Mathematik und ihrer Grenzgebiete. Springer Verlag; Berlin: 1979.
- 43. Lorenz D. Convergence rates and source conditions for Tikhonov regularization with sparsity constraints. J. Inverse Ill-Posed Prob. 2008;16(5):463–478.
- 44. Megginson R.E. An Introduction to Banach Space Theory. vol. 183 of Graduate Texts in Mathematics. Springer Verlag; New York: 1998.
- 45. Morozov V.A. Regularization Methods for Ill-Posed Problems. CRC Press; Boca Raton: 1993.
- 46. Resmerita E. Regularization of ill-posed problems in Banach spaces: convergence rates. Inverse Prob. 2005;21(4):1303–1314.
- 47. Resmerita E., Scherzer O. Error estimates for non-quadratic regularization and the relation to enhancement. Inverse Prob. 2006;22(3):801–814.
- 48. Saab R., Chartrand R., Yilmaz O. Stable sparse approximations via nonconvex optimization. In: 33rd International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2008.
- 49. Scherzer O., Engl H.W., Kunisch K. Optimal a posteriori parameter choice for Tikhonov regularization for solving nonlinear ill-posed problems. SIAM J. Numer. Anal. 1993;30(6):1796–1838.
- 50. Scherzer O., Grasmair M., Grossauer H., Haltmeier M., Lenzen F. Variational Methods in Imaging. vol. 167 of Applied Mathematical Sciences. Springer; New York: 2009.
- 51. Seidman T.I. Convergent approximation methods for ill-posed problems. I. General theory. Control Cybernet. 1981;10(1–2):31–49.
- 52. Seidman T.I., Vogel C.R. Well posedness and convergence of some regularisation methods for non-linear ill posed problems. Inverse Prob. 1989;5(2):227–238.
- 53. Singer I. Abstract Convex Analysis. Canadian Mathematical Society Series of Monographs and Advanced Texts. John Wiley & Sons; New York: 1997.
- 54. Tanana V.P. On a criterion for the convergence of the residual method. Dokl. Akad. Nauk. 1995;343(1):22–24.
- 55. Tanana V.P. Methods for Solution of Nonlinear Operator Equations. Inverse and Ill-posed Problems Series. VSP; Utrecht: 1997.
- 56. Tikhonov A.N., Arsenin V.Y. Solutions of Ill-Posed Problems. John Wiley & Sons; Washington, DC: 1977.
- 57. Tropp J.A. Just relax: convex programming methods for identifying sparse signals in noise. IEEE Trans. Inf. Theory. 2006;52(3):1030–1051.
- 58. Vasin V.V. Some tendencies in the Tikhonov regularization of ill-posed problems. J. Inverse Ill-Posed Prob. 2006;14(8):813–840.
- 59. Visintin A. Strong convergence results related to strict convexity. Comm. Partial Diff. Equ. 1984;9(5):439–466.
- 60. Xu Z.B., Roach G.F. Characteristic inequalities of uniformly convex and uniformly smooth Banach spaces. J. Math. Anal. Appl. 1991;157(1):189–210.
- 61. Zarzer C.A. On Tikhonov regularization with non-convex sparsity constraints. Inverse Prob. 2009;25:025006.