Numerical Functional Analysis and Optimization. 2016 Feb 8;37(5):521–540. doi: 10.1080/01630563.2016.1144070

Optimal Convergence Rates Results for Linear Inverse Problems in Hilbert Spaces

V. Albani, P. Elbau, M. V. de Hoop, O. Scherzer

ABSTRACT

In this article, we prove optimal convergence rates results for regularization methods for solving linear ill-posed operator equations in Hilbert spaces. The results generalize existing convergence rates results on optimality to general source conditions, such as logarithmic source conditions. Moreover, we provide optimality results under variational source conditions and show the connection to approximative source conditions.

KEYWORDS: Approximative source conditions, convergence rates, linear inverse problems, regularization, variational source conditions

MATHEMATICS SUBJECT CLASSIFICATION: 47A52, 49N45, 65J22

1. Introduction

Let L: X → Y be a bounded linear operator between two Hilbert spaces X and Y. We are interested in finding the minimum-norm solution x† ∈ X of the equation

Lx = y

for some y ∈ ℛ(L), that is, the element x† ∈ {x ∈ X | Lx = y} with the property ‖x†‖ = inf {‖x‖ | Lx = y}. It is well known that this minimum-norm solution exists and is unique, see for example [3, Theorem 2.5].

Since y is typically not exactly known and only an approximation ỹ ∈ Y with ‖ỹ − y‖ ≤ δ is given, we are looking for a family (x α(ỹ))α>0 of approximative solutions so that for every sequence (ỹ k)k∈ℕ converging to y, we find a sequence (α k)k∈ℕ of regularization parameters such that x α_k(ỹ k) tends to the minimum-norm solution x†.

A standard way to construct this family is by using Tikhonov regularization:

x α(ỹ) := argmin_{x ∈ X} ( ‖Lx − ỹ‖² + α‖x‖² ),

where the minimiser can be explicitly calculated from the optimality condition and reads as follows:

x α(ỹ) = (L*L + αI)^{−1} L* ỹ.  (1)

More generally, we want to analyze regularized solutions of the form

x α(ỹ) = r α(L*L) L* ỹ,  (2)

with some appropriately chosen function r α, see for example [9].
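
As an illustration (not part of the original article), the following minimal Python sketch shows the two representations in finite dimensions: for a matrix L, the Tikhonov solution (1) coincides with the spectral-filter form (2) with r α(λ) = 1/(λ + α). All names (L, x_dag, the noise-free data y) are placeholder choices for this sketch.

```python
# Minimal sketch (ours): the closed Tikhonov formula (1) versus the filter form (2)
# with the Tikhonov filter r_alpha(lambda) = 1/(lambda + alpha).
import numpy as np

rng = np.random.default_rng(0)
n, m = 40, 60
L = rng.standard_normal((m, n)) / np.sqrt(m)    # stands in for the bounded operator L: X -> Y
x_dag = rng.standard_normal(n)                  # stands in for the minimum-norm solution
y = L @ x_dag                                   # exact data
alpha = 1e-2

# (1): direct Tikhonov formula (L*L + alpha I)^{-1} L* y
x_direct = np.linalg.solve(L.T @ L + alpha * np.eye(n), L.T @ y)

# (2): filter form r_alpha(L*L) L* y, evaluated on the singular values of L
U, s, Vt = np.linalg.svd(L, full_matrices=False)
r_alpha = 1.0 / (s**2 + alpha)                  # Tikhonov filter at lambda = s^2
x_filter = Vt.T @ (r_alpha * s * (U.T @ y))

print(np.allclose(x_direct, x_filter))          # True: (1) and (2) agree
```

Other choices of the filter function r α give other regularization methods; the analysis below only relies on the structural properties of r α collected in Definition 2.1.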

The aim of this article is to characterize for a given regularization method, generated by a family (r α)α>0, the optimal convergence rate with which x α(ỹ) tends to the minimum-norm solution x†. This convergence rate depends on the solution x†, and we will give an explicit relation between the spectral projections of x† with respect to the operator L*L and the convergence rate; first in Section 2 for the convergence of x α(y) with the exact data y, and then in Section 3 for x α(ỹ) with noisy data ỹ. This generalizes existing convergence rates results of [10] to general source conditions, such as logarithmic source conditions.

Afterwards, we show in Section 4 that these convergence rates can also be obtained from variational inequalities and establish the optimality of these general variational source conditions, extending the results of [1]. It is interesting to note that variational source conditions are equivalent to convergence rates of the regularized solutions, while the classical source conditions of [5] are not.

Finally, we consider in Section 5 approximate source conditions that relate the convergence rates of the regularized solutions to the decay rate of a distance function, measuring how far away the minimum-norm solution is from the classical range condition, see [4, 9]. We can show that these approximate source conditions are indeed equivalent to the convergence rates.

2. Convergence rates for exact data

In the following, we analyze the convergence rate of the family (x α(y))α>0 with the exact data y ∈ ℛ(L) to the minimum-norm solution x† of Lx = y.

We investigate regularization methods of the form (2), which are generated by functions satisfying the following properties.

Definition 2.1

We call a family (r α)α>0 of continuous functions r α: [0, ∞) → [0, ∞) the generator of a regularization method if

  1. there exists a constant ρ ∈ (0, 1) such that
     √λ · r α(λ) ≤ ρ/√α for all λ ≥ 0 and α > 0,
  2. the error function r̃ α, defined by
     r̃ α(λ) := 1 − λ r α(λ),  (3)
     is decreasing.
  3. For fixed λ ≥ 0 the map α ↦ r̃ α(λ) is continuous and increasing, and

  4. there exists a constant ρ̃ such that
     [formula omitted in source]

Remark

These conditions do not yet enforce that x α(y) → x†. To ensure this, we could additionally impose that r̃ α(λ) → 0 as α → 0 for every λ > 0.

Let us now fix the notation for the rest of the article.

Notation 2.2

Let L: X → Y be a bounded linear operator between two real Hilbert spaces X and Y, y ∈ ℛ(L), and let x† ∈ X be the minimum-norm solution of Lx = y.

We choose a generator (r α)α>0 of a regularization method, introduce the family (r̃ α)α>0 of its error functions, and the corresponding family of regularized solutions shall be given by (2).

We denote by A → E A and A → F A the spectral measures of the operators L*L and LL*, respectively, on all Borel sets A ⊆ [0, ∞).

Next, we define the right-continuous and increasing function

e(λ) := ‖E (0, λ] x†‖².  (4)

Moreover, if f: (0, ∞) → ℝ is a right-continuous, increasing, and bounded function, we write

∫ g(λ) df(λ) := ∫ g(λ) dμ f(λ)

for the Lebesgue–Stieltjes integral of g with respect to f, where μ f denotes the unique non-negative Borel measure defined by μ f((λ 1, λ 2]) = f(λ 2) − f(λ 1) and g ∈ L¹(μ f).

Remark

In this setting, we can write the error

x α(y) − x† = (r α(L*L)L*L − I) x† = −r̃ α(L*L) x†  (5)

according to spectral theory in the form

‖x α(y) − x†‖² = ∫_{(0, ‖L‖²]} r̃ α²(λ) de(λ).  (6)

We want to point out here that it directly follows from the definition that the minimum-norm solution x† lies in the orthogonal complement 𝒩(L)^⊥ of the nullspace of L, and we therefore do not have to consider the point λ = 0 in the integral in equation (6).
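
In a finite-dimensional model, the spectral function (4) is a step function built from the singular value decomposition of L, and the representation (6) becomes a finite sum. The following sketch (ours, not part of the article; all names are illustrative) checks this identity numerically for Tikhonov regularization.

```python
# Sketch (ours): with L = U diag(s) V^T, the spectral measure of L^T L puts weight <v_i, x>^2 at
# lambda = s_i^2, so e(lambda) from (4) is a step function and the error identity (6) is a finite sum.
import numpy as np

rng = np.random.default_rng(1)
n, m = 30, 50
L = rng.standard_normal((m, n)) / np.sqrt(m)
U, s, Vt = np.linalg.svd(L, full_matrices=False)
x_dag = Vt.T @ rng.standard_normal(n)        # a solution orthogonal to the nullspace of L
y = L @ x_dag
coeff2 = (Vt @ x_dag) ** 2                   # spectral weights <v_i, x_dag>^2 at lambda_i = s_i^2

def e(lam):
    """Spectral function (4): squared norm of the projection onto eigenvalues <= lam."""
    return coeff2[s**2 <= lam].sum()

alpha = 1e-2
r_tilde = alpha / (alpha + s**2)             # Tikhonov error function (3) at lambda_i = s_i^2
x_alpha = Vt.T @ ((s / (s**2 + alpha)) * (U.T @ y))

lhs = np.sum((x_alpha - x_dag) ** 2)         # ||x_alpha(y) - x_dag||^2
rhs = np.sum(r_tilde**2 * coeff2)            # discrete version of the integral in (6)
print(np.isclose(lhs, rhs), e(np.max(s**2)), np.sum(coeff2))
```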

We first want to establish a relation between the convergence rate with which the regularized solution x α(y) for exact data y converges to the minimum-norm solution x† and the behaviour of the spectral function (4).

Proposition 2.3

We use Notation 2.2 and assume that there exist an increasing function ϕ: (0, ∞) → (0, ∞) and constants μ ∈ (0, 1) and A > 0 such that we have for every α > 0 the inequality

sup_{λ>0} r̃ α^{2μ}(λ) ϕ(λ) ≤ A ϕ(α).  (7)

Then, the following two statements are equivalent:

  1. There exists a constant C > 0 with
     ‖x α(y) − x†‖² ≤ C ϕ(α) for all α > 0.  (8)
  2. There exists a constant C̃ > 0 with
     e(λ) ≤ C̃ ϕ(λ) for all λ > 0.  (9)

Proof

According to Definition 2.1 (ii), the error function r̃ α is decreasing, and thus it follows together with (6) that for all α > 0

‖x α(y) − x†‖² ≥ ∫_{(0, α]} r̃ α²(λ) de(λ) ≥ r̃ α²(α) e(α).  (10)

Let first (8) hold. Then, it follows from (10) that for all α > 0

r̃ α²(α) e(α) ≤ ‖x α(y) − x†‖² ≤ C ϕ(α).  (11)

Now, we use Definition 2.1 (i), which gives that

r̃ α(α) = 1 − α r α(α) ≥ 1 − ρ.

Using this estimate in (11) yields (9) with C̃ = C/(1 − ρ)².

Conversely, let (9) hold. Since ‖x α(y) − x†‖² ≤ ‖x†‖² (which follows from (6) with r̃ α ≤ 1), it is enough to check the condition (8) for all α ∈ (0, ‖L‖²].

We use (6) and integrate the right hand side by parts, see for example [2, Theorem 6.2.2] regarding the integration by parts for Lebesgue–Stieltjes integrals, and obtain that

[formula (12) omitted in source]

We split up the integral on the right hand side into two terms:

[formula (13) omitted in source]

The first term is estimated by using that the function e is increasing and by utilising the assumption (9):

[formula omitted in source]

The second integral term in (13) is estimated by using the inequalities (9) and (7):

[formula omitted in source]

where we used Definition 2.1 (iv) in the last step. Inserting the two estimates into (13) and (12), we find with e(‖L‖²) = ‖x†‖² that

[formula (14) omitted in source]

From (7), we deduce further that

[formula omitted in source]

since ϕ is increasing and μ < 1.

Thus, we get from (14) that

[formula omitted in source]

with

[formula omitted in source]

Remark

The condition (7) with the choice μ = 1/2 was already used in [4], and such a function ϕ was called a qualification of the regularization method.

Example 2.4

In the case of Tikhonov regularization, given by (1), we have r α(λ) = 1/(λ + α), and therefore we get for the error function r̃ α, defined by (3), the expression r̃ α(λ) = α/(α + λ). So, clearly, 0 ≤ r̃ α(λ) ≤ 1, and all the conditions of Definition 2.1 are fulfilled.

  1. To recover the classical equivalence results, see [10, Theorem 2.1], we set ϕ(α) = α^ν for some ν ∈ (0, 1) and find that the condition (7) with A = 1 is fulfilled for every μ ≥ ν, since we have
     r̃ α^{2μ}(λ) λ^ν = (α/(α + λ))^{2μ} λ^ν ≤ α^ν
     for arbitrary α > 0 and λ > 0.

    Thus, Proposition 2.3 yields for every ν ∈ (0, 1) the equivalence of ‖x α(y) − x†‖² = 𝒪(α^ν) and e(λ) = 𝒪(λ^ν).

  2. Similarly, we also get the equivalence in the case of logarithmic convergence rates. Let 0 < ν < μ < 1 and define, for all sufficiently small α, the function ϕ(α) = |log α|^{−ν} (for bigger values of α, we may simply set ϕ constant). Then, we have
     [formula omitted in source]
     for all λ in the relevant range. Thus, λ ↦ r̃ α^{2μ}(λ) ϕ(λ) is decreasing there, which implies (7) with A = 1.

    So, Proposition 2.3 tells us that ‖x α(y) − x†‖² = 𝒪(|log α|^{−ν}) if and only if e(λ) = 𝒪(|log λ|^{−ν}).
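
The following small numerical experiment (ours, not part of the article) illustrates part (i) of the example: for a diagonal model whose spectral function satisfies e(λ) = λ^ν, the exact-data Tikhonov error behaves like α^ν.

```python
# Sketch (ours): diagonal model with eigenvalues lam of L*L and spectral increments chosen so
# that e(lam) = lam^nu; the exact-data Tikhonov error then decays like alpha^nu, cf. Example 2.4 (i).
import numpy as np

nu = 0.5
lam = np.logspace(-8, 0, 2000)                          # eigenvalues of L*L
weights = np.diff(np.concatenate(([0.0], lam**nu)))     # increments of e(lam) = lam^nu

def tikhonov_error_sq(alpha):
    r_tilde = alpha / (alpha + lam)                     # error function (3) for Tikhonov
    return np.sum(r_tilde**2 * weights)                 # discrete version of (6)

for alpha in [1e-2, 1e-3, 1e-4, 1e-5]:
    err2 = tikhonov_error_sq(alpha)
    print(f"alpha={alpha:.0e}  error^2={err2:.3e}  error^2/alpha^nu={err2 / alpha**nu:.3f}")
# The last column stays roughly constant, i.e. ||x_alpha(y) - x_dag||^2 = O(alpha^nu).
```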

3. Convergence rates for noisy data

We now want to estimate the distance of the regularized solution x α(ỹ) to the minimum-norm solution x† if we do not have the exact data y, but only some approximation ỹ of it.

In this case, we consider the regularization parameter α as a function of the noisy data ỹ such that the distance between x α(ỹ) and x† is minimal. Thus, we are interested in the convergence rate of the expression inf_{α>0} ‖x α(ỹ) − x†‖ to zero as the distance between ỹ and y tends to zero. We therefore want to find an upper bound for the expression sup_{ỹ ∈ B δ(y)} inf_{α>0} ‖x α(ỹ) − x†‖², where B δ(y) ⊂ Y denotes the closed ball with radius δ > 0 around the data y.

Let us first consider the trivial case where ‖x α(y) − x†‖ = 0 for all α in a vicinity of 0.

Lemma 3.1

We use Notation 2.2 and assume that there exists an ϵ > 0 such that

‖x α(y) − x†‖ = 0 for all α ∈ (0, ϵ].

Then, we have

sup_{ỹ ∈ B δ(y)} inf_{α>0} ‖x α(ỹ) − x†‖² ≤ (ρ²/ϵ) δ² for all δ > 0,  (15)

where ρ > 0 is chosen as in Definition 2.1 (i).

Proof

Let ỹ ∈ B δ(y) be fixed. Then, using that L r α(L*L) = r α(LL*) L, it follows from Definition 2.1 (i) that

‖x α(ỹ) − x α(y)‖ = ‖r α(L*L) L* (ỹ − y)‖ ≤ (ρ/√α) ‖ỹ − y‖.  (16)

The right hand side is uniform for all ỹ ∈ B δ(y). Thus, picking α = ϵ, we get

sup_{ỹ ∈ B δ(y)} inf_{α>0} ‖x α(ỹ) − x†‖² ≤ sup_{ỹ ∈ B δ(y)} ‖x ϵ(ỹ) − x†‖² ≤ (ρ²/ϵ) δ²,

which is (15).

In the general case, we estimate the optimal regularization parameter α to be in the vicinity of the value αδ, which is chosen as the solution of the implicit equation (17) and therefore depends only on the distance δ between the correct data y and the noisy data ỹ.

Lemma 3.2

We use again Notation 2.2 and consider the case where ‖x α(y) − x†‖ > 0 for all α > 0.

If we choose for every δ > 0 the parameter αδ > 0 such that

αδ ‖x αδ(y) − x†‖² = δ²,  (17)

then there exists a constant C 1 > 0 such that

sup_{ỹ ∈ B δ(y)} inf_{α>0} ‖x α(ỹ) − x†‖² ≤ C 1 ‖x αδ(y) − x†‖² for all δ > 0.  (18)

Moreover, there exists a constant C 0 > 0 such that

sup_{ỹ ∈ B δ(y)} inf_{α>0} ‖x α(ỹ) − x†‖² ≥ C 0 ‖x αδ(y) − x†‖²  (19)

for all δ > 0 which fulfil that αδ ∈ σ(LL*), where σ(LL*) ⊂ [0, ∞) denotes the spectrum of the operator LL*.

Proof

First, we remark that the function

A(α) := α ‖x α(y) − x†‖²

is, according to Definition 2.1 (iii) together with the assumption that ‖x α(y) − x†‖ > 0 for all α > 0, continuous and strictly increasing and satisfies lim_{α→0} A(α) = 0 and lim_{α→∞} A(α) = ∞. Therefore, we find for every δ > 0 a unique value αδ = A^{−1}(δ²).

Let ỹ ∈ B δ(y). Then, as in the proof of Lemma 3.1, see (16), we find that

‖x αδ(ỹ) − x αδ(y)‖ ≤ (ρ/√αδ) δ = ρ ‖x αδ(y) − x†‖.

From this estimate, we obtain with the triangle inequality and with the definition (17) of αδ that

inf_{α>0} ‖x α(ỹ) − x†‖² ≤ ( ‖x αδ(ỹ) − x αδ(y)‖ + ‖x αδ(y) − x†‖ )² ≤ (1 + ρ)² ‖x αδ(y) − x†‖²,

which is the upper bound (18) with the constant C 1 = (1 + ρ)².

For the lower bound (19), we write similarly

[formula (20) omitted in source]

Now, from the continuity of α ↦ r̃ α(λ) and Definition 2.1 (iv), we find that for every δ > 0 there exists a parameter a δ ∈ (0, αδ) such that [formula omitted in source].

Then, the assumption αδ ∈ σ(LL*) implies that the spectral measure F of the operator LL* fulfils F [aδ, 2αδ] ≠ 0.

Suppose now that

[formula (21) omitted in source]

Then, choosing ỹ = y + z δ, equation (20) becomes

[formula omitted in source]

Thus, we may drop the last term as it is non-negative, which gives us the lower bound

[formula omitted in source]

Since we get from Definition 2.1 (ii) the inequality

[formula omitted in source]

we can estimate further

[formula omitted in source]

Now, since α ↦ r̃ α(λ) is for every λ > 0 increasing, see Definition 2.1 (iii), the first term is increasing in α, see (6), and the second term is decreasing in α. Thus, we can estimate the expression for α < αδ from below by the second term at α = αδ, and for α ≥ αδ by the first term at α = αδ:

[formula omitted in source]

which is (19) with a corresponding constant C 0.

If z δ, as defined by (21), happens to vanish, the same argument works with an arbitrary non-zero element z δ ∈ ℛ(F [aδ, 2αδ]) since the last term in (20) is zero for every such z δ.
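
To illustrate the parameter choice (ours, not part of the article; it assumes the form of (17) as reconstructed above, αδ ‖x αδ(y) − x†‖² = δ²), the following sketch computes αδ by bisection on the monotone function A(α) = α ‖x α(y) − x†‖² from the proof and compares the resulting noisy-data error with the noise-free error at αδ, in the spirit of the bound (18). Note that this is an oracle-type quantity, since it uses the unknown x†.

```python
# Sketch (ours), assuming the parameter choice (17) in the reconstructed form
# alpha_delta * ||x_alpha(y) - x_dag||^2 = delta^2: since A(alpha) is continuous and strictly
# increasing, alpha_delta = A^{-1}(delta^2) can be found by bisection.
import numpy as np

rng = np.random.default_rng(2)
n, m = 40, 80
L = rng.standard_normal((m, n)) / np.sqrt(m)
U, s, Vt = np.linalg.svd(L, full_matrices=False)
x_dag = Vt.T @ rng.standard_normal(n)
y = L @ x_dag

def x_alpha(data, alpha):
    return Vt.T @ ((s / (s**2 + alpha)) * (U.T @ data))

def A(alpha):                                  # A(alpha) = alpha * ||x_alpha(y) - x_dag||^2
    return alpha * np.sum((x_alpha(y, alpha) - x_dag) ** 2)

def alpha_delta(delta, lo=1e-14, hi=1e6, iters=200):
    for _ in range(iters):                     # bisection for A(alpha) = delta^2
        mid = np.sqrt(lo * hi)
        lo, hi = (mid, hi) if A(mid) < delta**2 else (lo, mid)
    return np.sqrt(lo * hi)

delta = 1e-3
noise = rng.standard_normal(m)
y_noisy = y + delta * noise / np.linalg.norm(noise)
a_d = alpha_delta(delta)
err_noisy = np.linalg.norm(x_alpha(y_noisy, a_d) - x_dag)
err_exact = np.linalg.norm(x_alpha(y, a_d) - x_dag)
print(a_d, err_noisy, err_exact)               # err_noisy is of the same order as err_exact
```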

From Lemma 3.1 and Lemma 3.2, we now get an equivalence relation between the noisy and the noise-free convergence rates.

Proposition 3.3

We use Notation 2.2. Let further ϕ: [0, ∞) → [0, ∞) be a strictly increasing function satisfying ϕ(0) = 0 and

[formula (22) omitted in source]

for some increasing function g: (0, ∞) → (0, ∞).

Moreover, we assume that there exists a constant C > 0 with

[formula (23) omitted in source]

and there is a further constant such that

[formula (24) omitted in source]

We define

ψ(δ) := ϕ(Θ^{−1}(δ²)), where Θ(α) := α ϕ(α).  (25)

Then, the following two statements are equivalent:

  1. There exists a constant c > 0 such that
     sup_{ỹ ∈ B δ(y)} inf_{α>0} ‖x α(ỹ) − x†‖² ≤ c ψ(δ) for all δ > 0.  (26)
  2. There exists a constant c̃ > 0 such that
     ‖x α(y) − x†‖² ≤ c̃ ϕ(α) for all α > 0.  (27)

Proof

We first remark that (22) implies that [formula omitted in source], and so, by inserting suitable values for its arguments, we get

[formula omitted in source]

Thus, we have

[formula (28) omitted in source]

where [formula omitted in source].

In the case where ‖x α(y) − x†‖ = 0 for all α ∈ (0, ϵ] for some ϵ > 0, the inequality (27) is trivially fulfilled for some c̃ > 0. Moreover, we know from Lemma 3.1 that then the inequality (15) holds, which implies the inequality (26) for some constant c > 0, since we have according to the definition of the function ψ that ψ(δ) ≥ aδ² for all δ ∈ (0, δ 0) for some constants a > 0 and δ 0 > 0.

Thus, we may assume that ‖x α(y) − x†‖ > 0 for all α > 0.

Let (27) hold. For arbitrary δ > 0, we use the regularization parameter αδ defined in (17). Then, the inequality (27) implies that

[formula omitted in source]

Consequently

[formula omitted in source]

and therefore, using the inequality (18) obtained in Lemma 3.2, we find with (28) that

[formula omitted in source]

which is the estimate (26) with a corresponding constant c.

Conversely, if (26) holds, we choose an arbitrary δ > 0 such that αδ defined by (17) is in the spectrum σ(LL*). Then, we can use the inequality (19) of Lemma 3.2 to obtain from the condition (26) that

[formula omitted in source]

Thus, by the definition of ψ, we have

[formula omitted in source]

So, finally, we get with (22) that

[formula omitted in source]

and since this holds for every δ such that αδ ∈ σ(LL*), we have with a suitable constant that

[formula (29) omitted in source]

Finally, we consider some α ∉ σ(LL*), α < ‖L‖², and set

[formula omitted in source]

Then, recalling that σ(L*L)∖{0} = σ(LL*)∖{0}, see for example [6, Problem 61], we find for α > 0 (for α = 0, the first term in the following calculation simply vanishes) that

[formula omitted in source]

Using the conditions (23) and (24), we have with (29) that

[formula omitted in source]

which is (27) with a corresponding constant c̃.
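
Assuming the form of ψ reconstructed in (25), ψ(δ) = ϕ(Θ^{−1}(δ²)) with Θ(α) = α ϕ(α), the rate function can be evaluated numerically for any increasing ϕ by inverting the monotone function Θ. The following sketch (ours, not part of the article) does this for the two rate functions of Example 2.4 and reproduces the rates discussed in Example 3.4 below.

```python
# Sketch (ours), assuming psi(delta) = phi(Theta^{-1}(delta^2)) with Theta(alpha) = alpha*phi(alpha)
# as reconstructed in (25): evaluate psi by bisection on the increasing function Theta.
import numpy as np

def psi(delta, phi, lo=1e-300, hi=0.1, iters=200):
    # requires that alpha -> alpha*phi(alpha) is increasing on (0, hi] and delta^2 <= hi*phi(hi)
    target = delta**2
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        lo, hi = (mid, hi) if mid * phi(mid) < target else (lo, mid)
    return phi(np.sqrt(lo * hi))

nu = 0.5
phi_power = lambda a: a**nu                     # Hoelder-type rate, cf. Example 2.4 (i)
phi_log = lambda a: np.abs(np.log(a))**(-nu)    # logarithmic rate, cf. Example 2.4 (ii)

for delta in [1e-2, 1e-4, 1e-6]:
    print(delta,
          psi(delta, phi_power), delta**(2 * nu / (1 + nu)),      # identical: delta^{2nu/(1+nu)}
          psi(delta, phi_log), np.abs(np.log(delta))**(-nu))      # same order: |log delta|^{-nu}
```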

Remark

If we consider Tikhonov regularization, then we can ignore the conditions (23) and (24) in Proposition 3.3 if we have a quadratic upper bound on the function g in (22).

Indeed, let ϕ: [0, ∞) → [0, ∞) be an arbitrary increasing function fulfilling (22) for some increasing function g: (0, ∞) → (0, ∞) which is bounded by

[formula (30) omitted in source]

for some constant C > 0. Then, conditions (23) and (24) are fulfilled for the error function r̃ α of Tikhonov regularization, given by r̃ α(λ) = α/(α + λ).

To see this, we remark that for 0 < α ≤ β, the ratio

r̃ α(λ) / r̃ β(λ) = (α (β + λ)) / (β (α + λ))

is decreasing in λ. Therefore, for λ ≥ β we get that

[formula omitted in source]

which is (23).

We similarly find for λ ≤ α that

[formula omitted in source]

which is (24) with a corresponding constant.

We want to apply this theorem now to the two special cases discussed previously in Example 2.4.

Example 3.4

  1. In the case of Example 2.4 (i), where we considered Tikhonov regularization with a convergence rate given by ϕ(α) = α^ν for some ν ∈ (0, 1), the condition (22) in Proposition 3.3 is clearly fulfilled with g(γ) = γ. In particular, g satisfies g(γ) ≤ 1 + γ², which is (30) with C = 4, and thus the conditions (23) and (24) in Proposition 3.3 follow as in the remark above.

    So, we can apply Proposition 3.3 and it only remains to calculate
     ψ(δ) = ϕ(Θ^{−1}(δ²)) = (δ^{2/(1+ν)})^ν = δ^{2ν/(1+ν)}.

    Thus, we recover the classical result, see [10, Theorem 2.6], that the convergence rate ‖x α(y) − x†‖² = 𝒪(α^ν) for the correct data y is equivalent to the convergence rate sup_{ỹ ∈ B δ(y)} inf_{α>0} ‖x α(ỹ) − x†‖² = 𝒪(δ^{2ν/(ν+1)}) for noisy data.

  2. Next, we look at Tikhonov regularization with the logarithmic convergence rate
     ϕ(α) := |log α|^{−ν} for 0 < α ≤ e^{−(1+ν)}, and ϕ(α) := (1 + ν)^{−ν} for α > e^{−(1+ν)},
    see Example 2.4 (ii). First, we remark that ϕ is concave. This is because ϕ is increasing, constant for α > e^{−(1+ν)}, and for 0 < α < e^{−(1+ν)} we have
     ϕ''(α) = ν α^{−2} |log α|^{−ν−2} (1 + ν − |log α|) ≤ 0,
    and because ϕ(0) = 0, we have
     [formula omitted in source]
    Thus, using that ϕ is increasing, the requirement (22) in Proposition 3.3 is fulfilled with
     [formula omitted in source]

    In particular, this function g satisfies the inequality (30) with C = 4 and therefore, also the conditions (23) and (24) in Proposition 3.3 are fulfilled according to the previous remark.

    To get the corresponding function ψ, as defined in (25), we have to solve the implicit equation Θ(ϕ^{−1}(ψ(δ))) = δ², where Θ is defined in (25) and, with the specific choice ϕ(α) = |log α|^{−ν} for α < e^{−(1+ν)}, satisfies Θ(α) = α |log α|^{−ν}. This equation then reads as follows:
     ψ(δ) exp(−ψ(δ)^{−1/ν}) = δ².  (31)
    By solving this equation for δ, we get
     δ = √(ψ(δ)) exp(−ψ(δ)^{−1/ν}/2),
    which, in particular, shows that the function ψ is increasing and furthermore, because of lim_{δ↓0} ψ(δ) = 0, that ψ(δ) < 1 for sufficiently small δ > 0. Therefore, we find for small δ > 0 that
     ψ(δ) = |log(δ²/ψ(δ))|^{−ν} ≥ (2|log δ|)^{−ν} = 2^{−ν} |log δ|^{−ν}.  (32)
    Moreover, if we write ψ as
     ψ(δ) = f(δ) |log δ|^{−ν}
    for some function f, the implicit equation (31) becomes
     [formula omitted in source]
    Since lim_{δ↓0} log(|log δ|^ν)/|log δ| = 0, we find parameters ϵ ∈ (0, 1) and δ 0 ∈ (0, 1) such that we have for all δ < δ 0 the inequality 0 ≤ log(|log δ|^ν) ≤ ϵ|log δ|. Assuming that f(δ) ≥ 1 gives
     [formula omitted in source]
    which is a contradiction to the assumption. Thus, f(δ) < 1.

    Since we know already from (32) that f(δ) ≥ 2^{−ν}, it therefore follows from Proposition 3.3 that the convergence rate ‖x α(y) − x†‖² = 𝒪(|log α|^{−ν}) is equivalent to sup_{ỹ ∈ B δ(y)} inf_{α>0} ‖x α(ỹ) − x†‖² = 𝒪(|log δ|^{−ν}) for noisy data.
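
The bracketing 2^{−ν} ≤ f(δ) < 1 can also be observed numerically. The following sketch (ours, not part of the article) assumes the implicit equation (31) in the reconstructed form ψ(δ) exp(−ψ(δ)^{−1/ν}) = δ² and solves it by the equivalent fixed-point iteration ψ ← |log(δ²/ψ)|^{−ν}.

```python
# Sketch (ours), assuming the reconstructed implicit equation (31): solve for psi(delta) by a
# fixed-point iteration and check that f(delta) = psi(delta)*|log delta|^nu lies in [2^{-nu}, 1).
import numpy as np

nu = 0.5

def psi_log(delta, iters=200):
    psi = np.abs(np.log(delta))**(-nu)          # starting guess of the right order
    for _ in range(iters):
        psi = np.abs(np.log(delta**2 / psi))**(-nu)
    return psi

for delta in [1e-3, 1e-6, 1e-12, 1e-24]:
    p = psi_log(delta)
    f = p * np.abs(np.log(delta))**nu
    print(f"delta={delta:.0e}  psi={p:.4e}  f={f:.4f}")   # f stays in [2^{-nu}, 1) ~ [0.707, 1)
```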

4. Relation to variational inequalities

Instead of characterizing the convergence rate of the regularized solution via the behavior of the spectral decomposition of the minimum-norm solution x†, we may also check variational inequalities for the element x†, see [7, 8, 11, 12]. In [1], it was shown that for Tikhonov regularization and convergence rates of the order 𝒪(α^ν), ν ∈ (0, 1), such variational inequalities are equivalent to specific convergence rates.

In this section, we generalize this result to cover general regularization methods and convergence rates.

Proposition 4.1

We consider again the setting of Notation 2.2. Moreover, let ϕ: [0, ∞) → [0, ∞) be an increasing, continuous function and ν ∈ (0, 1).

Then, the following two statements are equivalent:

  1. There exists a constant C > 0 with
     ⟨E [0, λ] x†, x†⟩ ≤ C ϕ^{2ν}(λ) for all λ > 0.  (33)
  2. There exists a constant C̃ > 0 such that
     ⟨x†, x⟩ ≤ C̃ ‖x‖^{1−ν} ‖ϕ(L*L) x‖^ν for all x ∈ X.  (34)

Proof

Assume first that (34) holds. Then, we have for all λ > 0

⟨E [0, λ] x†, x†⟩ = ⟨x†, E [0, λ] x†⟩ ≤ C̃ ‖E [0, λ] x†‖^{1−ν} ‖ϕ(L*L) E [0, λ] x†‖^ν ≤ C̃ ‖E [0, λ] x†‖ ϕ^ν(λ),

which implies (33) with C = C̃².

On the other hand, if (33) is fulfilled then we can estimate for arbitrary Λ > 0 and every x ∈ X

[formula (35) omitted in source]

Furthermore, we get with the bounded, invertible operator T = ϕ(L*L)|ℛ(E[Λ, ∞)) that

[formula (36) omitted in source]

Integrating by parts, we can rewrite the integral in the form

[formula omitted in source]

Using now (33) and dropping all negative terms, we arrive at

[formula omitted in source]

with the constant c > 0 given by [formula omitted in source]. Plugging this into (36), we find that

[formula (37) omitted in source]

We now pick

[formula omitted in source]

and assume that Λ > 0; otherwise ⟨x†, x⟩ = 0 and (34) is trivially fulfilled. Then, the right continuity of λ ↦ ⟨E [0, λ] x†, x†⟩ implies that

[formula omitted in source]

Moreover, we have that

[formula omitted in source]

for every λ ∈ (0, Λ). Therefore, the left continuity of λ ↦ ⟨E [λ, ∞) x†, x†⟩ implies that

[formula omitted in source]

Thus, we get with the estimates (35) and (37) that

[formula omitted in source]

We remark that the first part of this proof also works in the limit case ν = 1, which shows that (34) implies (33) for ν = 1 as well.

Corollary 4.2

We use again Notation 2.2. Let further ϕ: [0, ∞) → [0, ∞) be an increasing, continuous function and ν ∈ (0, 1].

Then, the standard source condition

x† ∈ ℛ(ϕ^ν(L*L))  (38)

implies the variational inequality

⟨x†, x⟩ ≤ C ‖x‖^{1−ν} ‖ϕ(L*L) x‖^ν for all x ∈ X  (39)

for some constant C > 0.

Conversely, the variational inequality (39) implies that

x† ∈ ℛ(ψ(L*L))

for every continuous function ψ: [0, ∞) → [0, ∞) with ψ ≥ c ϕ^μ for some constant c > 0 and some μ ∈ (0, ν).

Proof

If x† fulfils (38), then there exists an element ω ∈ X with

x† = ϕ^ν(L*L) ω.  (40)

Using the interpolation inequality, see for example [3, Chapter 2.3], we find

[formula omitted in source]

which is (39) with C = ‖ω‖.

If, on the other hand, (39) holds, then, according to Proposition 4.1, there exists a constant C̃ > 0 such that ⟨E [0, λ] x†, x†⟩ ≤ C̃ ϕ^{2ν}(λ) for all λ > 0. Now, similarly to the proof of Proposition 4.1, we get with T = ψ(L*L)|ℛ(E(Λ, ∞)) that

[formula omitted in source]

and, using the lower bound on ψ, that

[formula omitted in source]

for some constant c̃ > 0. So

[formula omitted in source]

which implies that x† ∈ ℛ(ψ(L*L)), see for example [11, Lemma 8.21].

Remark

In general, the inequality (39) does not imply the standard source condition (38). Let us for example consider the case where we have an increasing, continuous function ϕ: [0, ∞) → [0, ∞) with ϕ(0) = 0, ϕ(λ) > 0 for all λ > 0, and

c ϕ^{2ν}(λ) ≤ e(λ) ≤ C ϕ^{2ν}(λ) for all λ > 0

for some constants 0 < c ≤ C.

Now, the standard source condition (38) would imply that we can find a ξ ∈ 𝒩(L)^⊥ with x† = ϕ^ν(L*L) ξ. Thus, we would get with T = ϕ^ν(L*L)|ℛ(E(Λ, ∞)) that

[formula omitted in source]

However, in the limit Λ → 0, we have that

[formula omitted in source]

which is a contradiction to the existence of such a point ξ.
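
As a numerical cross-check (ours, not part of the article), and based on the forms of (33) and (34) as reconstructed above, the following sketch builds a diagonal model with e(λ) = λ^{2ν} for ϕ(λ) = λ and evaluates the quotient appearing in the variational inequality along the spectral projections E [0, t] x†, which are the extremal directions used in the first part of the proof of Proposition 4.1; the quotient remains bounded.

```python
# Sketch (ours), based on the reconstructed forms of (33) and (34): diagonal L*L with
# eigenvalues lam, phi(lambda) = lambda, and <E_[0,lam] x_dag, x_dag> = lam^{2 nu}.
import numpy as np

nu = 0.4
lam = np.logspace(-6, 0, 400)                              # eigenvalues of L*L
coeff2 = np.diff(np.concatenate(([0.0], lam**(2 * nu))))   # increments of e(lam) = lam^{2 nu}
x_dag = np.sqrt(coeff2)                                    # diagonal model of the solution

ratios = []
for t in lam:
    x = np.where(lam <= t, x_dag, 0.0)                     # x = E_[0,t] x_dag
    lhs = x_dag @ x                                        # <x_dag, x> = e(t)
    rhs = np.linalg.norm(x)**(1 - nu) * np.linalg.norm(lam * x)**nu   # phi(L*L) x = lam * x
    ratios.append(lhs / rhs)
print(min(ratios), max(ratios))   # the quotient stays bounded, consistent with (34)
```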

5. Connection to approximate source conditions

Another approach to weakening the standard source condition (38) in order to obtain a condition which is equivalent to the convergence rate was introduced in [9], see also [4]. The idea was that for the argument (40), which shows that the standard source condition (38) implies the variational inequality (39), it would have been enough to be able to approximate the minimum-norm solution x† by a bounded sequence in ℛ(ϕ^ν(L*L)). And, the smaller the bound on the sequence, the smaller the constant C in the variational inequality (39) will be. Therefore, the distance between x† and the image ϕ^ν(L*L)(B R(0)), as a function of the radius R of the closed ball B R(0) ⊂ X, should be directly related to the convergence rate.

Definition 5.1

In the setting of Notation 2.2, we define the distance function d ϕ of a continuous function ϕ: [0, ∞) → [0, ∞) by

d ϕ(R) := inf { ‖x† − ϕ(L*L) ξ‖ | ξ ∈ X, ‖ξ‖ ≤ R }.  (41)

Indeed, this distance function gives us directly an upper bound on the error between the regularized solution x α(y) and the minimum-norm solution x†, see [9, Theorem 5.5] or [4, Proposition 2]. For convenience, we repeat the argument here.

Lemma 5.2

We use Notation 2.2 and assume that ϕ: [0, ∞) → [0, ∞) is an increasing, continuous function with ϕ(0) = 0 so that there exists a constant A > 0 such that the inequality

r̃ α(λ) ϕ(λ) ≤ A ϕ(α) for all λ ≥ 0  (42)

holds for every α > 0.

Then, we have for every ξ ∈ X that

‖x α(y) − x†‖ ≤ ‖x† − ϕ(L*L) ξ‖ + A ϕ(α) ‖ξ‖.  (43)

Proof

For every vector ξ ∈ X, we find from (5) with the definition (3) of the error function r̃ α that

x α(y) − x† = −r̃ α(L*L) x† = −r̃ α(L*L)(x† − ϕ(L*L) ξ) − r̃ α(L*L) ϕ(L*L) ξ.

Now, since r̃ α(λ) ≤ 1 for all λ, we have that ‖r̃ α(L*L)(x† − ϕ(L*L) ξ)‖ ≤ ‖x† − ϕ(L*L) ξ‖. Moreover, with e ξ(λ) := ‖E (0, λ] ξ‖², we get from the inequality (42) that

‖r̃ α(L*L) ϕ(L*L) ξ‖² = ∫_{(0, ‖L‖²]} r̃ α²(λ) ϕ²(λ) de ξ(λ) ≤ A² ϕ²(α) ‖ξ‖².

So, putting the two inequalities together, we obtain (43).

Thus, taking the infimum over all ξ ∈ B R(0) in (43), the error ‖x α(y) − x†‖ can be bounded by a combination of d ϕ(R) and ϕ(α)R. By balancing these terms, we obtain from a given distance function d ϕ the corresponding convergence rate.
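
The balancing step can be made concrete in a small numerical sketch (ours, not part of the article), which assumes the form of the distance function given in (41) above: in a diagonal model, the infimum in (41) is a norm-constrained least-squares problem solvable with a Lagrange multiplier, and minimizing d ϕ(R) + A ϕ(α) R over R gives an upper bound that follows the decay of the exact-data error.

```python
# Sketch (ours), using the distance function (41) as reconstructed above, with phi(lambda) = lambda,
# Tikhonov regularization, and A = 1 in the bound (43): the balanced bound
# min_R [ d_phi(R) + phi(alpha) R ] tracks the decay of the exact-data error as alpha -> 0.
import numpy as np

nu = 0.5
lam = np.logspace(-8, 0, 1500)                             # eigenvalues of L*L
coeff2 = np.diff(np.concatenate(([0.0], lam**(2 * nu))))   # e(lam) = lam^{2 nu}
x_dag = np.sqrt(coeff2)
phi = lam                                                  # phi(lambda) = lambda

def d_phi(R):
    """Constrained least squares: xi_i = phi_i x_i / (phi_i^2 + mu), mu >= 0 such that ||xi|| <= R."""
    def norm_xi(mu):
        return np.linalg.norm(phi * x_dag / (phi**2 + mu))
    if norm_xi(0.0) <= R:                                  # constraint inactive
        mu = 0.0
    else:
        lo, hi = 1e-20, 1e20
        for _ in range(200):                               # bisection: norm_xi is decreasing in mu
            mid = np.sqrt(lo * hi)
            lo, hi = (lo, mid) if norm_xi(mid) <= R else (mid, hi)
        mu = np.sqrt(lo * hi)
    xi = phi * x_dag / (phi**2 + mu)
    return np.linalg.norm(x_dag - phi * xi)

for alpha in [1e-2, 1e-3, 1e-4]:
    radii = np.logspace(0, 6, 120)
    bound = min(d_phi(R) + alpha * R for R in radii)       # balance d_phi(R) against phi(alpha) R
    err = np.linalg.norm((alpha / (alpha + lam)) * x_dag)  # actual Tikhonov error, cf. (6)
    print(f"alpha={alpha:.0e}  balanced bound={bound:.3e}  error={err:.3e}")
```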

Conversely, we can also show that an upper bound on the spectral projections of the minimum-norm solution gives us an upper bound on the distance function, which then yields another equivalent characterisation for the convergence rate of the regularization method.

Proposition 5.3

We use Notation 2.2 and assume that ϕ: [0, ∞) → [0, ∞) is an increasing, continuous function with ϕ(0) = 0 so that there exists a constant A > 0 with

r̃ α(λ) ϕ(λ) ≤ A ϕ(α) for all λ ≥ 0 and α > 0.  (44)

Moreover, let d ϕ be the distance function of ϕ, and let ν ∈ (0, 1) be arbitrary.

Then, the following statements are equivalent:

  1. There exists a constant C > 0 so that
     e(λ) ≤ C ϕ^{2ν}(λ) for all λ > 0.  (45)
  2. There exists a constant C̃ > 0 so that
     d ϕ(R) ≤ C̃ R^{−ν/(1−ν)} for all R > 0.  (46)

Proof

Assume first that (46) holds. Then, from Lemma 5.2, we get by taking the infimum of (43) over all ξ ∈ B R(0) for an arbitrary R > 0 that

‖x α(y) − x†‖ ≤ d ϕ(R) + A ϕ(α) R ≤ C̃ R^{−ν/(1−ν)} + A ϕ(α) R.

Since the first term is decreasing and the second term is increasing in R, we pick for R the value R(α) given by

R(α) := ( C̃ / (A ϕ(α)) )^{1−ν}.

Thus, we end up with

‖x α(y) − x†‖ ≤ 2 A^ν C̃^{1−ν} ϕ^ν(α).

Applying Proposition 2.3 with the function ϕ therein replaced by ϕ^{2ν} (we remark that the condition (7) is then fulfilled with μ = ν, since (44) implies r̃ α^{2ν}(λ) ϕ^{2ν}(λ) ≤ A^{2ν} ϕ^{2ν}(α)), we find that there exists a constant C > 0 so that (45) holds.

Conversely, if we have the relation (45), then we define for arbitrary α > 0 with the operator T = ϕ(L*L)|ℛ(E(α, ∞)) the element

ξ α := T^{−1} E (α, ∞) x†.

Now, the distance of ϕ(L*L)ξ α to the minimum-norm solution x† can be estimated according to (45) by

‖ϕ(L*L)ξ α − x†‖ = ‖E [0, α] x†‖ ≤ √C ϕ^ν(α).  (47)

Moreover, we can get an upper bound on the norm of ξα by

‖ξ α‖² = ∫_{(α, ‖L‖²]} ϕ^{−2}(λ) de(λ).

Using assumption (45), evaluating the integral, and dropping the resulting two negative terms, we find that

‖ξ α‖ ≤ c ϕ^{−(1−ν)}(α)  (48)

with a corresponding constant c > 0.

So, combining (47) and (48), we have by definition (41) of the distance function d ϕ with R = c ϕ^{−(1−ν)}(α) that

d ϕ(R) ≤ ‖ϕ(L*L)ξ α − x†‖ ≤ √C ϕ^ν(α),

and thus it follows by switching to the variable R that

d ϕ(R) ≤ √C (R/c)^{−ν/(1−ν)} = C̃ R^{−ν/(1−ν)},

where C̃ := √C c^{ν/(1−ν)}.

6. Conclusion

In this article, we have proven optimal convergence rates results for regularization methods for solving linear ill-posed operator equations in Hilbert spaces. The result generalizes existing convergence rates results on optimality of [10] to general source conditions, such as logarithmic source conditions. The results state that convergence rates of the regularized solutions require a certain decay of the minimum-norm solution in terms of its spectral decomposition with respect to L*L. Moreover, we provide optimality results under variational source conditions, extending the results of [1]. It is interesting to note that variational source conditions are equivalent to convergence rates of the regularized solutions, while the classical source conditions are not. Finally, we show that decay rates of the distance function developed in [4, 9] are equivalent to convergence rates of the regularized solutions as well.

References

  1. Andreev R., Elbau P., de Hoop M. V., Qiu L., Scherzer O. Generalized convergence rates results for linear inverse problems in Hilbert spaces. Numer. Funct. Anal. Optim. 2015;36(5):549–566.
  2. Carter M., van Brunt B. The Lebesgue–Stieltjes Integral: A Practical Introduction. Undergraduate Mathematics Series. Springer, New York, NY; 2000.
  3. Engl H. W., Hanke M., Neubauer A. Regularization of Inverse Problems. Mathematics and its Applications, Vol. 375. Kluwer Academic Publishers, Dordrecht, the Netherlands; 1996.
  4. Flemming J., Hofmann B., Mathé P. Sharp converse results for the regularization error using distance functions. Inverse Probl. 2011;27:025006. doi: 10.1088/0266-5611/27/2/025006.
  5. Groetsch C. W. The Theory of Tikhonov Regularization for Fredholm Equations of the First Kind. Pitman Advanced Publishing Program, Boston, MA; 1984.
  6. Halmos P. R. A Hilbert Space Problem Book. Graduate Texts in Mathematics, Vol. 19. Springer, New York, NY; 1974.
  7. Hein T., Hofmann B. Approximate source conditions for nonlinear ill-posed problems: Chances and limitations. Inverse Probl. 2009;25:035003.
  8. Hofmann B., Kaltenbacher B., Pöschl C., Scherzer O. A convergence rates result for Tikhonov regularization in Banach spaces with non-smooth operators. Inverse Probl. 2007;23(3):987–1010.
  9. Hofmann B., Mathé P. Analysis of profile functions for general linear regularization methods. SIAM J. Numer. Anal. 2007;45(3):1122–1141.
  10. Neubauer A. On converse and saturation results for Tikhonov regularization of linear ill-posed problems. SIAM J. Numer. Anal. 1997;34:517–527.
  11. Scherzer O., Grasmair M., Grossauer H., Haltmeier M., Lenzen F. Variational Methods in Imaging. Applied Mathematical Sciences, Vol. 167. Springer, New York, NY; 2009.
  12. Schuster T., Kaltenbacher B., Hofmann B., Kazimierski K. S. Regularization Methods in Banach Spaces. Radon Series on Computational and Applied Mathematics, Vol. 10. Walter de Gruyter, Berlin, Germany; 2012.
