ABSTRACT
In this article, we prove optimal convergence rates results for regularization methods for solving linear ill-posed operator equations in Hilbert spaces. The results generalize existing convergence rates results on optimality to general source conditions, such as logarithmic source conditions. Moreover, we provide optimality results under variational source conditions and show the connection to approximative source conditions.
KEYWORDS: Approximative source conditions, convergence rates, linear inverse problems, regularization, variational source conditions
MATHEMATICS SUBJECT CLASSIFICATION: 47A52, 49N45, 65J22
1. Introduction
Let L: X → Y be a bounded linear operator between two Hilbert spaces X and Y. We are interested in finding the minimum-norm solution x † ∈ X of the equation Lx = y for some y ∈ ℛ(L), that is, the element x † ∈ {x ∈ X | Lx = y} with the property ‖x †‖ = inf {‖x‖ | Lx = y}. It is well known that this minimum-norm solution exists and is unique, see for example [3, Theorem 2.5].
Since y is typically not exactly known and only an approximation ỹ ∈ Y with ‖ỹ − y‖ ≤ δ is given, we are looking for a family (x α)α>0 of approximative solutions x α: Y → X so that for every sequence (ỹk)k∈ℕ converging to y, we find a sequence (αk)k∈ℕ of regularization parameters such that (x αk(ỹk))k∈ℕ tends to the minimum-norm solution x †.
A standard way to construct this family is by using Tikhonov regularization:
(1) x α(ỹ) ∈ argmin x∈X {‖Lx − ỹ‖2 + α‖x‖2},
where the minimiser can be explicitly calculated from the optimality condition and reads as follows:
x α(ỹ) = (L*L + αI)−1L*ỹ.
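As a quick numerical illustration (not taken from the article), the following minimal Python sketch checks this closed form on a randomly generated finite-dimensional surrogate of L; all names (L, x_true, alpha) are illustrative.

```python
# Minimal sketch: for a finite-dimensional surrogate of L, the Tikhonov minimiser of
# ||Lx - y||^2 + alpha*||x||^2 is x_alpha = (L^T L + alpha I)^{-1} L^T y.
import numpy as np

rng = np.random.default_rng(0)
L = rng.standard_normal((40, 30))          # discrete surrogate of the operator L
x_true = rng.standard_normal(30)           # plays the role of the minimum-norm solution
y = L @ x_true                             # exact data y = L x
alpha = 1e-2

# closed-form Tikhonov solution from the normal equations
x_alpha = np.linalg.solve(L.T @ L + alpha * np.eye(30), L.T @ y)

# sanity check: x_alpha is a stationary point of the Tikhonov functional,
# i.e. the gradient L^T(Lx - y) + alpha*x vanishes
grad = L.T @ (L @ x_alpha - y) + alpha * x_alpha
assert np.linalg.norm(grad) < 1e-8
```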
More generally, we want to analyze regularized solutions of the form
(2) x α(ỹ) = r α(L*L)L*ỹ
with some appropriately chosen function r α, see for example [9].
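The filter form (2) can also be illustrated with a small sketch: for a matrix L with SVD L = U diag(s) Vᵀ, the regularized solution r α(L*L)L*ỹ has spectral coefficients r α(s2)·s·(Uᵀỹ). The two filters below (Tikhonov and truncated SVD) are standard examples chosen for illustration only; whether a given filter satisfies all conditions of Definition 2.1 is not checked here.

```python
# Sketch of the general filter form x_alpha = r_alpha(L*L) L* y via the SVD.
import numpy as np

def filtered_solution(L, y, r_alpha):
    U, s, Vt = np.linalg.svd(L, full_matrices=False)
    coeff = r_alpha(s**2) * s * (U.T @ y)      # r_alpha applied to the eigenvalues of L*L
    return Vt.T @ coeff

tikhonov = lambda alpha: (lambda lam: 1.0 / (lam + alpha))
cutoff = lambda alpha: (lambda lam: np.where(lam >= alpha, 1.0 / np.maximum(lam, alpha), 0.0))

rng = np.random.default_rng(1)
L = rng.standard_normal((40, 30))
y = L @ rng.standard_normal(30)

x_tik = filtered_solution(L, y, tikhonov(1e-2))   # Tikhonov filter r_alpha(lam) = 1/(lam+alpha)
x_tsvd = filtered_solution(L, y, cutoff(1e-2))    # truncated SVD as an alternative filter
```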
The aim of this article is to characterize for a given regularization method, generated by a family (r α)α>0, the optimal convergence rate with which x α(y) tends to the minimum-norm solution x †. This convergence rate depends on the solution x †, and we will give an explicit relation between the spectral projections of x † with respect to the operator L*L and the convergence rate; first in Section 2 for the convergence of x α(y) with the exact data y, and then in Section 3 for x α(ỹ) with noisy data ỹ. This generalizes existing convergence rates results of [10] to general source conditions, such as logarithmic source conditions.
Afterwards, we show in Section 4 that these convergence rates can also be obtained from variational inequalities and establish the optimality of these general variational source conditions, extending the results of [1]. It is interesting to note that variational source conditions are equivalent to convergence rates of the regularized solutions, while the classical results in [5] are not.
Finally, we consider in Section 5 approximate source conditions that relate the convergence rates of the regularized solutions to the decay rate of a distance function, measuring how far away the minimum-norm solution is from the classical range condition, see [4, 9]. We can show that these approximate source conditions are indeed equivalent to the convergence rates.
2. Convergence rates for exact data
In the following, we analyze the rate of convergence of the family (x α(y))α>0, computed from the exact data y ∈ ℛ(L), to the minimum-norm solution x † of Lx = y.
We investigate regularization methods of the form (2), which are generated by functions satisfying the following properties.
Definition 2.1
We call a family (r α)α>0 of continuous functions r α: [0, ∞) → [0, ∞) the generator of a regularization method if
(i) there exists a constant ρ ∈ (0, 1) such that the corresponding bound holds,
(ii) the error function, defined by (3), is decreasing,
(iii) for fixed λ ≥ 0, the error function is, as a function of α, continuous and increasing, and
(iv) there exists a constant such that the corresponding estimate holds.
Remark
These conditions do not yet enforce that x α(y) → x †. To ensure this, we could additionally impose that λ r α(λ) → 1 for every λ > 0 as α → 0.
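For the Tikhonov filter r α(λ) = 1/(λ + α), this additional requirement is easy to check numerically. The snippet below assumes that the condition alluded to above is the standard pointwise convergence λ r α(λ) → 1 as α → 0.

```python
# Small check (assumption: the condition meant here is lambda * r_alpha(lambda) -> 1
# as alpha -> 0). For the Tikhonov filter r_alpha(lambda) = 1/(lambda + alpha):
lam = 0.3                                   # any fixed spectral value lambda > 0
for alpha in [1e-1, 1e-3, 1e-6, 1e-9]:
    print(alpha, lam / (lam + alpha))       # tends to 1 as alpha -> 0
```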
Let us now fix the notation for the rest of the article.
Notation 2.2
Let L: X → Y be a bounded linear operator between two real Hilbert spaces X and Y, y ∈ ℛ(L), and x † ∈ X be the minimum-norm solution of Lx = y.
We choose a generator (r α)α>0 of a regularization method, introduce the family of its error functions, and the corresponding family of regularized solutions shall be given by (2).
We denote by A → E A and A → F A the spectral measures of the operators L*L and LL*, respectively, on all Borel sets A ⊆ [0, ∞).
Next, we define the right-continuous and increasing function
(4) e(λ) := ‖E (0, λ] x †‖2 for λ > 0.
Moreover, if f: (0, ∞) → ℝ is a right-continuous, increasing, and bounded function, we write
(5) ∫ g(λ) df(λ) := ∫ g(λ) dμf(λ)
for the Lebesgue–Stieltjes integral of g with respect to f, where μf denotes the unique non-negative Borel measure defined by μf((λ1, λ2]) = f(λ2) − f(λ1) and g ∈ L1(μf).
Remark
In this setting, we can write the error ‖x α(y) − x †‖2 according to spectral theory in the form (6).
We want to point out here that it directly follows from the definition that the minimum-norm solution x † is in the orthogonal complement 𝒩(L)⊥ of the nullspace of L, and we therefore do not have to consider the point λ = 0 in the integrals in equation (6).
We first want to establish a relation between the convergence rate of the regularized solution x α(y) for exact data y to the minimum-norm solution x † and the behaviour of the spectral function (4).
Proposition 2.3
We use Notation 2.2 and assume that there exist an increasing function ϕ: (0, ∞) → (0, ∞) and constants μ ∈ (0, 1) and A > 0 such that we have for every α > 0 the inequality (7).
Then, the following two statements are equivalent:
(i) There exists a constant C > 0 with
(8) ‖x α(y) − x †‖2 ≤ C ϕ(α) for all α > 0.
(ii) There exists a constant C̃ > 0 with
(9) e(λ) ≤ C̃ ϕ(λ) for all λ > 0.
Proof
According to Definition 2.1 (ii), the error function is decreasing, and thus it follows together with (6) that for all α > 0 the estimate (10) holds.
Let first (8) hold. Then, it follows from (10) that for all α > 0 the inequality (11) is satisfied. Now, we use Definition 2.1 (i); inserting the resulting estimate in (11) yields (9).
Conversely, let (9) hold. Since ‖x α(y) − x †‖2 ≤ ‖x †‖2, which follows from (6), it is enough to check the condition (8) for all α ∈ (0, ‖L‖2].
We use (6) and integrate the right hand side by parts, see for example [2, Theorem 6.2.2] regarding the integration by parts for Lebesgue–Stieltjes integrals, and obtain that
(12) We split up the integral on the right hand side into two terms:
(13) The first term is estimated by using that the function e is increasing and by utilising the assumption (9):
The second integral term in (13) is estimated by using the inequalities (9) and (7):
where we used Definition 2.1 (iv) in the last step. Inserting the two estimates in (13) and in (12), we find with e(‖L‖2) = ‖x †‖2 that
(14) From (7), we deduce a further estimate, since ϕ is increasing and μ < 1. Thus, we get from (14) that (8) holds with a suitable constant.
Remark
The condition (7) was already used in [4], and such a function ϕ was called a qualification of the regularization method.
Example 2.4
In the case of Tikhonov regularization, given by (1), we have r α(λ) = 1/(λ + α), and the corresponding error function, defined by (3), can be computed explicitly. So, clearly, all the conditions of Definition 2.1 are fulfilled.
To recover the classical equivalence results, see [10, Theorem 2.1], we set ϕ(α) = α2ν for some ν ∈ (0, 1) and find that the condition (7) with A = 1 is fulfilled for every μ ≥ ν, since the corresponding inequality holds for arbitrary α > 0 and λ > 0.
Thus, Proposition 2.3 yields for every ν ∈ (0, 1) the equivalence of ‖x α(y) − x †‖2 = 𝒪(α2ν) and e(λ) = 𝒪(λ2ν).
Similarly, we also get the equivalence in the case of logarithmic convergence rates. Let 0 < ν < μ < 1 and define for α < e−(1+ν) the function ϕ(α) = |log α|−ν (for bigger values of α, we may simply set ϕ(α) = ϕ(e−(1+ν))). Then, a direct computation shows that the relevant expression is decreasing on this range, which implies (7) with A = 1.
So, Proposition 2.3 tells us that ‖x α(y) − x †‖2 = 𝒪(|log α|−ν) if and only if e(λ) = 𝒪(|log λ|−ν).
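The following toy computation (not from the article) illustrates the equivalence in the power-type case: for a diagonal operator whose spectral function satisfies e(λ) = λ2ν exactly on the spectrum, the Tikhonov error ‖x α(y) − x †‖2 indeed behaves like α2ν. All names and parameter values are illustrative.

```python
# Diagonal toy model: e(lambda) = lambda^(2*nu) on the spectrum, Tikhonov error
# ||x_alpha(y) - x_dagger||^2 = sum_i (alpha/(alpha+lambda_i))^2 * (x_dagger_i)^2;
# the quotient by alpha^(2*nu) should stay between moderate constants.
import numpy as np

nu = 0.4
lam = 0.5 ** np.arange(1, 60)               # eigenvalues of L*L, decreasing
tail = lam ** (2 * nu)
x2 = tail - np.append(tail[1:], 0.0)        # (x_dagger_i)^2 so that e(lambda_k) = lambda_k^(2*nu)

for alpha in [1e-2, 1e-4, 1e-6, 1e-8]:
    err2 = np.sum((alpha / (alpha + lam)) ** 2 * x2)
    print(alpha, err2 / alpha ** (2 * nu))   # approximately constant
```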
3. Convergence rates for noisy data
We now want to estimate the distance of the regularized solution x α(ỹ) to the minimum-norm solution x † if we do not have the exact data y, but only some approximation ỹ of it.
In this case, we consider the regularization parameter α as a function of the noisy data ỹ such that the distance between x α(ỹ) and x † is minimal. Thus, we are interested in the convergence rate of the expression inf α>0 ‖x α(ỹ) − x †‖ to zero as the distance between ỹ and y tends to zero. We therefore want to find an upper bound for the expression sup ỹ∈B̄δ(y) inf α>0 ‖x α(ỹ) − x †‖, where B̄δ(y) denotes the closed ball with radius δ > 0 around the data y.
Let us first consider the trivial case where ‖x α(y) − x †‖ = 0 for all α in a vicinity of 0.
Lemma 3.1
We use Notation 2.2 and assume that there exists an ϵ > 0 such that ‖x α(y) − x †‖ = 0 for all α ∈ (0, ϵ].
Then, we have the estimate (15), where ρ > 0 is chosen as in Definition 2.1 (i).
Proof
Let ỹ ∈ B̄δ(y) be fixed. Then, using that Lr α(L*L) = r α(LL*)L, it follows from Definition 2.1 (i) that the estimate (16) holds. The right hand side of (16) is uniform for all ỹ ∈ B̄δ(y). Thus, picking α = ϵ, we get (15).
In the general case, we estimate the optimal regularization parameter α to be in the vicinity of the value αδ, which is chosen as the solution of the implicit equation (17) and therefore depends only on the distance δ between the correct data y and the noisy data ỹ.
Lemma 3.2
We use again Notation 2.2 and consider the case where ‖x α(y) − x †‖ > 0 for all α > 0.
If we choose for every δ > 0 the parameter αδ > 0 such that
(17) αδ ‖x αδ(y) − x †‖2 = δ2,
then there exists a constant C 1 > 0 such that (18) holds. Moreover, there exists a constant C 0 > 0 such that (19) holds for all δ > 0 which fulfil αδ ∈ σ(LL*), where σ(LL*) ⊂ [0, ∞) denotes the spectrum of the operator LL*.
Proof
First, we remark that the function A(α) := α‖x α(y) − x †‖2 is, according to Definition 2.1 (iii) together with the assumption that ‖x α(y) − x †‖ > 0 for all α > 0, continuous and strictly increasing and satisfies lim α→0 A(α) = 0 and lim α→∞ A(α) = ∞. Therefore, we find for every δ > 0 a unique value αδ = A −1(δ2).
Let ỹ ∈ B̄δ(y). Then, as in the proof of Lemma 3.1, see (16), we find the corresponding estimate. From this estimate, we obtain with the triangle inequality and the definition (17) of αδ the upper bound (18) with the constant C 1 = (1 + ρ)2.
For the lower bound (19), we write similarly the decomposition (20). Now, from a continuity argument and Definition 2.1 (iv), we find that for every δ > 0 there exists a parameter a δ ∈ (0, αδ) with the required property.
Then, the assumption αδ ∈ σ(LL*) implies that the spectral measure F of the operator LL* fulfils F [aδ, 2αδ] ≠ 0.
Suppose now that the element z δ, defined by (21), does not vanish. Then, choosing the noisy data ỹ in terms of z δ, equation (20) can be evaluated further.
Thus, we may drop the last term as it is non-negative, which gives us a lower bound. Using the inequality obtained from Definition 2.1 (ii), we can estimate this bound further.
Now, since the error function is increasing in α for every λ > 0, see Definition 2.1 (iii), the first term is increasing in α, see (6), and the second term is decreasing in α. Thus, we can estimate the expression for α < αδ from below by the second term at α = αδ, and for α ≥ αδ by the first term at α = αδ, which is (19) with a corresponding constant C 0.
If z δ, as defined by (21), happens to vanish, the same argument works with an arbitrary non-zero element z δ ∈ ℛ(F [aδ, 2αδ]), since the last term in (20) is zero for this choice.
From Lemma 3.1 and Lemma 3.2, we now get an equivalence relation between the noisy and the noise-free convergence rates.
Proposition 3.3
We use Notation 2.2. Let further ϕ: [0, ∞) → [0, ∞) be a strictly increasing function satisfying ϕ(0) = 0 and (22) for some increasing function g: (0, ∞) → (0, ∞).
Moreover, we assume that there exists a constant C > 0 with (23) and a further constant such that (24) holds. We define the function ψ by (25). Then, the following two statements are equivalent:
(i) There exists a constant c > 0 such that (26) holds.
(ii) There exists a constant such that (27) holds.
Proof
We first remark that (22) implies, after appropriate substitutions, the inequality (28).
In the case where ‖x α(y) − x †‖ = 0 for all α ∈ (0, ϵ] for some ϵ > 0, the inequality (27) is trivially fulfilled for some constant. Moreover, we know from Lemma 3.1 that then the inequality (15) holds, which implies the inequality (26) for some constant c > 0, since we have according to the definition of the function ψ that ψ(δ) ≥ aδ2 for all δ ∈ (0, δ0) for some constants a > 0 and δ0 > 0.
Thus, we may assume that ‖x α(y) − x †‖ > 0 for all α > 0.
Let (27) hold. For arbitrary δ > 0, we use the regularization parameter αδ defined in (17). Then, the inequality (27) yields a bound at α = αδ, and therefore, using the inequality (18) obtained in Lemma 3.2, we find with (28) the estimate (26) with a corresponding constant.
Conversely, if (26) holds, we choose an arbitrary δ > 0 such that αδ defined by (17) is in the spectrum σ(LL*). Then, we can use the inequality (19) of Lemma 3.2 to obtain from the condition (26) a corresponding bound. Thus, by the definition of ψ and with (22), we obtain an estimate which holds for every δ such that αδ ∈ σ(LL*), and we arrive at (29).
Finally, we consider some α ∉ σ(LL*), α < ‖L‖2, and define α− in terms of the part of the spectrum of LL* below α. Then, recalling that σ(L*L)∖{0} = σ(LL*)∖{0}, see for example [6, Problem 61], we find for α− > 0 a corresponding estimate (for α− = 0, the first term in this calculation simply vanishes). Using the conditions (23) and (24), we have with (29) that (27) holds with a corresponding constant.
Remark
If we consider Tikhonov regularization, then we can ignore the conditions (23) and (24) in Proposition 3.3 if we have a quadratic upper bound on the function g in (22).
Indeed, let ϕ: [0, ∞) → [0, ∞) be an arbitrary increasing function fulfilling (22) for some increasing function g: (0, ∞) → (0, ∞) which is bounded as in (30) for some constant C > 0. Then, conditions (23) and (24) are fulfilled for the error function of Tikhonov regularization.
To see this, we remark that for 0 < α ≤ β, the ratio of the corresponding error functions is decreasing in λ. Therefore, for λ ≥ β we obtain (23). We similarly find for λ ≤ α the estimate (24) with a corresponding constant.
We want to apply this theorem now to the two special cases discussed previously in Example 2.4.
Example 3.4
In the case of Example 2.4 (i), where we considered Tikhonov regularization with a convergence rate given by ϕ(α) = α2ν for some ν ∈ (0, 1), the condition (22) in Proposition 3.3 is clearly fulfilled with g(γ) = γ2ν. In particular, g satisfies g(γ) ≤ 1 + γ2, which is (30) with C = 4, and thus the conditions (23) and (24) in Proposition 3.3 follow as in the remark above.
So, we can apply Proposition 3.3 and it only remains to calculate the function ψ defined in (25). Thus, we recover the classical result, see [10, Theorem 2.6], that the convergence rate ‖x α(y) − x †‖2 = 𝒪(α2ν) for the correct data y is equivalent to the convergence rate 𝒪(δ4ν/(2ν+1)) for noisy data.
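This classical rate can be illustrated numerically. The sketch below uses the textbook a priori choice α ∼ δ2/(2ν+1) under the source condition x † = (L*L)νω rather than the parameter αδ from (17), so it is only a consistency check with standard theory; all names are illustrative.

```python
# Consistency check of the classical noisy-data rate for Tikhonov regularization:
# under x_dagger = (L*L)^nu * w and alpha ~ delta^(2/(2nu+1)), the error is
# O(delta^(2nu/(2nu+1))).  We combine the exact-data error with the standard
# worst-case noise-propagation bound delta/(2*sqrt(alpha)).
import numpy as np

nu = 0.5
lam = 0.5 ** np.arange(1, 80)               # eigenvalues of L*L (diagonal model)
w = np.ones_like(lam)                       # source element
x_dag = lam ** nu * w                       # source condition x_dagger = (L*L)^nu w

for delta in [1e-2, 1e-4, 1e-6]:
    alpha = delta ** (2 / (2 * nu + 1))
    bias = np.sqrt(np.sum((alpha / (alpha + lam)) ** 2 * x_dag ** 2))   # exact-data error
    noise = delta / (2 * np.sqrt(alpha))    # worst case of sqrt(lam)/(lam+alpha) * delta
    print(delta, (bias + noise) / delta ** (2 * nu / (2 * nu + 1)))     # stays bounded
```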
Next, we look at Tikhonov regularization with the logarithmic convergence rate ϕ(α) = |log α|−ν, see Example 2.4 (ii). First, we remark that ϕ is concave. This is because ϕ is increasing, constant for α > e−(1+ν), and its second derivative is non-positive for 0 < α < e−(1+ν). Since moreover ϕ(0) = 0, the requirement (22) in Proposition 3.3 is fulfilled, using that ϕ is increasing, with an appropriate increasing function g. In particular, this function g satisfies the inequality (30) with C = 4 and therefore, also the conditions (23) and (24) in Proposition 3.3 are fulfilled according to the previous remark.
To get the corresponding function ψ, as defined in (25), we have to solve an implicit equation, which with the specific choice of ϕ(α) = |log α|−ν for α < e−(1+ν) reads as equation (31).
By solving this equation for δ, we see, in particular, that the function ψ is increasing and furthermore, because of lim δ↓0 ψ(δ) = 0, that ψ(δ) < 1 for sufficiently small δ > 0. Therefore, we find for small δ > 0 the relation (32).
We find parameters ϵ ∈ (0, 1) and δ0 ∈ (0, 1) such that we have for all δ < δ0 the inequality 0 ≤ log (|log δ|ν) ≤ ϵ|log δ|. Assuming that f(δ) ≥ 1 then leads to a contradiction; thus, f(δ) < 1. Since we know already from (32) that f(δ) ≥ 2−ν, it therefore follows from Proposition 3.3 that the convergence rate ‖x α(y) − x †‖2 = 𝒪(|log α|−ν) is equivalent to the convergence rate 𝒪(|log δ|−ν) for noisy data.
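The qualitative behaviour of such an implicit parameter choice can also be explored numerically. The sketch below does not reproduce equation (31); it assumes, purely for illustration, the balancing relation α ϕ(α) = δ2 with ϕ(α) = |log α|−ν and solves it by bisection, which indeed produces values with ϕ(αδ) comparable to |log δ|−ν.

```python
# Hedged sketch: ASSUMED balancing relation alpha*phi(alpha) = delta^2 (not a quote
# of equation (31)), solved by bisection on a logarithmic scale.
import math

nu = 0.5
phi = lambda a: abs(math.log(a)) ** (-nu)

def alpha_delta(delta):
    lo, hi = 1e-300, math.exp(-(1 + nu))    # a*phi(a) is increasing on this interval
    for _ in range(200):
        mid = math.sqrt(lo * hi)            # geometric midpoint (the scale is logarithmic)
        if mid * phi(mid) < delta ** 2:
            lo = mid
        else:
            hi = mid
    return hi

for delta in [1e-2, 1e-4, 1e-8]:
    a = alpha_delta(delta)
    print(delta, phi(a) * abs(math.log(delta)) ** nu)   # stays between moderate constants
```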
4. Relation to variational inequalities
Instead of characterizing the convergence rate of the regularized solution via the behavior of the spectral decomposition of the minimum-norm solution x †, we may also check variational inequalities for the element x †, see [7, 8, 11, 12]. In [1], it was shown that for Tikhonov regularization and convergence rates of the order 𝒪(α2ν), ν ∈ (0, 1), such variational inequalities are equivalent to specific convergence rates.
In this section, we generalize this result to cover general regularization methods and convergence rates.
Proposition 4.1
We consider again the setting of Notation 2.2. Moreover, let ϕ: [0, ∞) → [0, ∞) be an increasing, continuous function and ν ∈ (0, 1).
Then, the following two statements are equivalent:
(i) There exists a constant C > 0 with (33).
(ii) There exists a constant such that (34) holds.
Proof
Assume first that (34) holds. Then, we obtain for all λ > 0 an estimate which implies (33) with a corresponding constant.
On the other hand, if (33) is fulfilled then we can estimate for arbitrary Λ > 0 and every x ∈ X
(35) Furthermore, we get with the bounded, invertible operator T = ϕ(L*L)|ℛ(E[Λ, ∞)) that
Integrating by parts, we can rewrite the integral accordingly. Using now (33) and dropping all negative terms, we arrive at an estimate with a constant c > 0. Plugging this into (36), we find (37).
We now pick a suitable Λ depending on x and assume that Λ > 0; otherwise ⟨x †, x⟩ = 0 and (34) is trivially fulfilled. Then, the right continuity of λ ↦ ⟨E [0, λ] x †, x⟩ and the left continuity of λ ↦ ⟨E [λ, ∞) x †, x⟩ yield the corresponding estimates for every λ ∈ (0, Λ). Thus, we get with the estimates (35) and (37) that (34) holds.
We remark that the first part of this proof also works in the limit case ν = 1, which shows that (34) implies (33) for ν = 1 as well.
Corollary 4.2
We use again Notation 2.2. Let further ϕ: [0, ∞) → [0, ∞) be an increasing, continuous function and ν ∈ (0, 1].
Then, the standard source condition
(38) x † ∈ ℛ(ϕν(L*L))
implies the variational inequality (39) for some constant C > 0.
Conversely, the variational inequality (39) implies that x † ∈ ℛ(ψ(L*L)) for every continuous function ψ: [0, ∞) → [0, ∞) with ψ ≥ cϕμ for some constant c > 0 and some μ ∈ (0, ν).
Proof
If x † fulfils (38), then there exists an element ω ∈ X with
(40) x † = ϕν(L*L)ω.
Using the interpolation inequality, see for example [3, Chapter 2.3], we find that (39) holds with C = ‖ω‖.
If, on the other hand, (39) holds, then, according to Proposition 4.1, there exists a constant such that the corresponding spectral estimate is satisfied. Now, similarly to the proof of Proposition 4.1, we get with T = ψ(L*L)|ℛ(E(Λ, ∞)) an estimate which, using the lower bound on ψ, is uniform in Λ for some constant. This implies that x † ∈ ℛ(ψ(L*L)), see for example [11, Lemma 8.21].
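The interpolation inequality used in the first part of this proof can be checked numerically in its classical power-function form ‖Aνx‖ ≤ ‖Ax‖ν‖x‖1−ν for a symmetric positive semi-definite matrix A and ν ∈ (0, 1); this is only the special case ϕ(λ) = λ of the inequality cited from [3, Chapter 2.3], and all names in the snippet are illustrative.

```python
# Numerical illustration of the moment (interpolation) inequality
# ||A^nu x|| <= ||A x||^nu * ||x||^(1-nu) for symmetric positive semi-definite A.
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((30, 30))
A = B.T @ B                                  # symmetric positive semi-definite
w, V = np.linalg.eigh(A)
w = np.clip(w, 0.0, None)

def power(eigs, V, nu, x):
    return V @ (eigs ** nu * (V.T @ x))      # spectral functional calculus A^nu x

nu = 0.3
for _ in range(5):
    x = rng.standard_normal(30)
    lhs = np.linalg.norm(power(w, V, nu, x))
    rhs = np.linalg.norm(A @ x) ** nu * np.linalg.norm(x) ** (1 - nu)
    assert lhs <= rhs + 1e-9                 # holds up to rounding errors
```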
Remark
In general, the inequality (39) does not imply the standard source condition (38). Let us for example consider the case where we have an increasing, continuous function ϕ: [0, ∞) → [0, ∞) with ϕ(0) = 0, ϕ(λ) > 0 for all λ > 0, and a corresponding two-sided spectral estimate with some constants 0 < c ≤ C.
Now, the standard source condition (38) would imply that we can find a ξ ∈ 𝒩(L)⊥ with x † = ϕν(L*L)ξ. Thus, we would get a corresponding bound with T = ϕν(L*L)|ℛ(E(Λ, ∞)). However, in the limit Λ → 0, this quantity becomes unbounded, which is a contradiction to the existence of such a point ξ.
5. Connection to approximate source conditions
Another approach to weakening the standard source condition (38) in order to obtain a condition which is equivalent to the convergence rate was introduced in [9], see also [4]. The idea was that for the argument (40), which shows that the standard source condition (38) implies the variational inequality (39), it would have been enough to be able to approximate the minimum-norm solution x † by a bounded sequence in ℛ(ϕν(L*L)). And, the smaller the bound on the sequence, the smaller the constant C in the variational inequality (39) will be. Therefore, the distance between x † and the image under ϕν(L*L) of the closed ball with radius R around the origin, considered as a function of the radius R, should be directly related to the convergence rate.
Definition 5.1
In the setting of Notation 2.2, we define the distance function d ϕ of a continuous function ϕ: [0, ∞) → [0, ∞) by
(41) d ϕ(R) := inf {‖x † − ϕ(L*L)ξ‖ | ξ ∈ X, ‖ξ‖ ≤ R} for R > 0.
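For a diagonal (or diagonalized) operator, the infimum in (41) is a small constrained least-squares problem that can be evaluated with a Lagrange multiplier. The sketch below is only illustrative: the operator, the index function ϕ and the solution coefficients are made up, and the form of (41) is taken as stated above.

```python
# Sketch: computing d_phi(R) = inf{ ||x_dagger - phi(L*L) xi|| : ||xi|| <= R } for a
# diagonal model.  The constrained least-squares problem is solved via the multiplier
# mu >= 0 with xi_i = d_i x_i / (d_i^2 + mu), found by bisection.
import numpy as np

lam = 0.5 ** np.arange(1, 40)                # eigenvalues of L*L
phi = np.sqrt                                # an illustrative index function phi
d = phi(lam)                                 # diagonal of phi(L*L)
x_dag = lam ** 0.3                           # coefficients of the minimum-norm solution

def distance(R):
    if np.linalg.norm(x_dag / d) <= R:       # unconstrained minimiser already feasible
        return 0.0
    lo, hi = 0.0, 1e12
    for _ in range(200):                     # bisection on the multiplier mu
        mu = 0.5 * (lo + hi)
        norm_xi = np.linalg.norm(d * x_dag / (d ** 2 + mu))
        lo, hi = (lo, mu) if norm_xi < R else (mu, hi)
    mu = 0.5 * (lo + hi)
    return np.linalg.norm(mu * x_dag / (d ** 2 + mu))

for R in [1.0, 10.0, 100.0]:
    print(R, distance(R))                    # d_phi(R) decays as R grows
```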
Indeed, this distance function gives us directly an upper bound on the error between the regularized solution x α(y) and the minimum-norm solution x †, see [9, Theorem 5.5] or [4, Proposition 2]. For convenience, we repeat the argument here.
Lemma 5.2
We use Notation 2.2 and assume that ϕ: [0, ∞) → [0, ∞) is an increasing, continuous function with ϕ(0) = 0 so that there exists a constant A > 0 such that the inequality (42) holds for every α > 0.
Then, we have for every ξ ∈ X the estimate (43).
Proof
For every vector ξ ∈ X, we find from (5) with the definition (3) of the error function a corresponding identity. The first resulting term can be estimated directly. Moreover, with e ξ(λ) = ‖E (0, λ]ξ‖2, we get from the inequality (42) an estimate for the second term.
So, putting the two inequalities together, we obtain (43).
Thus, taking the infimum over all ξ with ‖ξ‖ ≤ R in (43), the error ‖x α(y) − x †‖ can be bounded by a combination of d ϕ(R) and ϕ(α)R. By balancing these terms, we obtain from a given distance function d ϕ the corresponding convergence rate.
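As an illustration of this balancing (with assumed forms, not taken verbatim from the article), suppose the bound resulting from (43) is additive in d ϕ(R) and ϕ(α)R and the distance function decays like a power of R; then the balance yields a power of ϕ(α):

```latex
% Illustrative balancing under assumed forms:
\[
  \|x_\alpha(y) - x^\dagger\| \le C\bigl(d_\phi(R) + \phi(\alpha) R\bigr)
  \quad\text{for all } R > 0,
  \qquad
  d_\phi(R) \le c\,R^{-\nu/(1-\nu)} .
\]
% Equating the two contributions, $\phi(\alpha)R = c\,R^{-\nu/(1-\nu)}$, gives
\[
  R(\alpha) = \bigl(c/\phi(\alpha)\bigr)^{1-\nu},
  \qquad\text{so that}\qquad
  \|x_\alpha(y) - x^\dagger\| \le 2\,C\,c^{1-\nu}\,\phi(\alpha)^{\nu}.
\]
```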
Conversely, we can also show that an upper bound on the spectral projections of the minimum-norm solution gives us an upper bound on the distance function, which then yields another equivalent characterisation for the convergence rate of the regularization method.
Proposition 5.3
We use Notation 2.2 and assume that ϕ: [0, ∞) → [0, ∞) is an increasing, continuous function with ϕ(0) = 0 so that there exists a constant A > 0 with (44). Moreover, let d ϕ be the distance function of ϕ, and let ν ∈ (0, 1) be arbitrary.
Then, the following statements are equivalent:
(i) There exists a constant C > 0 so that (45) holds.
(ii) There exists a constant so that (46) holds.
Proof
Assume first that (46) holds. Then, from Lemma 5.2, we get by taking the infimum of (43) over all ξ with ‖ξ‖ ≤ R for an arbitrary R > 0 a corresponding bound. Since the first term is decreasing and the second term is increasing in R, we pick for R a value R(α) equilibrating the two terms. Applying Proposition 2.3 with the function ϕ therein replaced by ϕ2ν (we remark that the condition (7) is then fulfilled with μ = ν because of (44)), we find that there exists a constant C > 0 so that (45) holds.
Conversely, if we have the relation (45), then we define for arbitrary α > 0 with the operator T = ϕ(L*L)|ℛ(E(α, ∞)) the element ξα := T −1 E (α, ∞) x †. Now, the distance of ϕ(L*L)ξα to the minimum-norm solution x † can be estimated according to (45), which gives (47). Moreover, we can get an upper bound on the norm of ξα: using assumption (45), evaluating the integral, and dropping the resulting two negative terms, we find the estimate (48) with a corresponding constant.
So, combining (47) and (48), we have by definition (41) of the distance function d ϕ with R = cϕ−(1−ν)(α) an upper bound for d ϕ(R), and thus it follows by switching to the variable R that (46) holds with a corresponding constant.
6. Conclusion
In this article, we have proven optimal convergence rates results for regularization methods for solving linear ill-posed operator equations in Hilbert spaces. The results generalize the existing optimality results of [10] to general source conditions, such as logarithmic source conditions, and show that a convergence rate of the regularized solutions corresponds to a certain decay of the minimum-norm solution in terms of its spectral decomposition with respect to L*L. Moreover, we provide optimality results under variational source conditions, extending the results of [1]. It is interesting to note that these variational source conditions are equivalent to convergence rates of the regularized solutions, while the classical results are not. Finally, we show that decay rates of the distance function developed in [4, 9] are also equivalent to convergence rates of the regularized solutions.
References
- Andreev R., Elbau P., de Hoop M. V., Qiu L., Scherzer O. Generalized convergence rates results for linear inverse problems in Hilbert spaces. Numer. Funct. Anal. Optim. 2015;36(5):549–566. doi: 10.1080/01630563.2016.1144070.
- Carter M., van Brunt B. The Lebesgue–Stieltjes Integral: A Practical Introduction. Undergraduate Mathematics Series. Springer, New York, NY, 2000.
- Engl H. W., Hanke M., Neubauer A. Regularization of Inverse Problems. Mathematics and its Applications, Vol. 375. Kluwer Academic Publishers, Dordrecht, 1996.
- Flemming J., Hofmann B., Mathé P. Sharp converse results for the regularization error using distance functions. Inverse Probl. 2011;27:025006. doi: 10.1088/0266-5611/27/2/025006.
- Groetsch C. W. The Theory of Tikhonov Regularization for Fredholm Equations of the First Kind. Pitman Advanced Publishing Program, Boston, MA, 1984.
- Halmos P. R. A Hilbert Space Problem Book. Graduate Texts in Mathematics, Vol. 19. Springer, New York, NY, 1974.
- Hein T., Hofmann B. Approximate source conditions for nonlinear ill-posed problems: Chances and limitations. Inverse Probl. 2009;25:035003.
- Hofmann B., Kaltenbacher B., Pöschl C., Scherzer O. A convergence rates result for Tikhonov regularization in Banach spaces with non-smooth operators. Inverse Probl. 2007;23(3):987–1010.
- Hofmann B., Mathé P. Analysis of profile functions for general linear regularization methods. SIAM J. Numer. Anal. 2007;45(3):1122–1141.
- Neubauer A. On converse and saturation results for Tikhonov regularization of linear ill-posed problems. SIAM J. Numer. Anal. 1997;34:517–527.
- Scherzer O., Grasmair M., Grossauer H., Haltmeier M., Lenzen F. Variational Methods in Imaging. Applied Mathematical Sciences, Vol. 167. Springer, New York, NY, 2009.
- Schuster T., Kaltenbacher B., Hofmann B., Kazimierski K. S. Regularization Methods in Banach Spaces. Radon Series on Computational and Applied Mathematics, Vol. 10. Walter de Gruyter, Berlin, 2012.