J. Imaging. 2021 Nov 15;7(11):239. doi: 10.3390/jimaging7110239

Discretization of Learned NETT Regularization for Solving Inverse Problems

Stephan Antholzer, Markus Haltmeier *
Editors: Fabiana Zama, Elena Loli Piccolomini
PMCID: PMC8625045  PMID: 34821870

Abstract

Deep learning based reconstruction methods deliver outstanding results for solving inverse problems and are therefore becoming increasingly important. A recently introduced class of learning-based reconstruction methods is the so-called NETT (for Network Tikhonov Regularization), which contains a trained neural network as a regularizer in generalized Tikhonov regularization. The existing analysis of NETT considers fixed operators and fixed regularizers and analyzes the convergence as the noise level in the data approaches zero. In this paper, we extend the framework and analysis considerably to reflect various practical aspects and take into account discretization of the data space, the solution space, the forward operator and the neural network defining the regularizer. We show the asymptotic convergence of the discretized NETT approach for decreasing noise levels and discretization errors. Additionally, we derive convergence rates and present numerical results for a limited data problem in photoacoustic tomography.

Keywords: deep learning, inverse problems, discretization of NETT, regularization, convergence analysis, learned regularizer, limited data, photoacoustic tomography

1. Introduction

In this paper, we are interested in neural network based solutions to inverse problems of the form

Find x from data y^δ = A x + η.  (1)

Here A is a potentially non-linear operator between Banach spaces X and Y, y^δ are the given noisy data, x is the unknown to be recovered, η is the unknown noise perturbation and δ ≥ 0 indicates the noise level. Numerous image reconstruction problems, parameter identification tasks and geophysical applications can be stated as such inverse problems [1,2,3,4]. Particular challenges in solving inverse problems are the non-uniqueness of solutions and their instability with respect to the given data. To overcome these issues, regularization methods are needed, which select specific solutions and at the same time stabilize the inversion process.

1.1. Reconstruction with Learned Regularizers

One of the most established classes of methods for solving inverse problems is variational regularization, where regularized solutions are defined as minimizers of the generalized Tikhonov functional [2,5,6]

T_{y^δ,α}: X → [0,∞]: x ↦ D(Ax, y^δ) + α R(x).  (2)

Here D is a distance-like function measuring closeness to the data, R is a regularization term enforcing regularity of the minimizer, and α is the regularization parameter. Taking minimizers of this functional as regularized solutions is also called (generalized) Tikhonov regularization. In the case that D and the regularizer are defined by Hilbert space norms, (2) is classical Tikhonov regularization, for which the theory is quite complete [1,7]. In particular, in this case, convergence rates, which are quantitative estimates for the distance between the true noise-free solution and regularized solutions from noisy data, are well known. Convergence rates for non-convex regularizers are derived in [8].

Typical regularization techniques are based on simple hand-crafted regularization terms, such as the total variation ‖f‖_TV = ∫|∇f| or quadratic Sobolev norms ‖∇f‖₂² = ∫|∇f|² on some function space. However, these regularizers are quite simplistic and might not well reflect the actual complexity of the underlying class of functions. Therefore, it has recently been proposed and analyzed in [9] to use machine learning to construct regularizers in a data-driven manner. In particular, the strategy in [9] is to construct a data-driven regularizer via the following consecutive steps:

  • (T1) Choose a family of desired reconstructions (x_i)_{i=1}^n.

  • (T2) For some B: Y → X, construct undesired reconstructions (B A x_i)_{i=1}^n.

  • (T3) Choose a class (Φ_θ)_{θ∈Θ} of functions (networks) Φ_θ: X → X.

  • (T4) Determine θ ∈ Θ with Φ_θ(x_i) ≈ x_i ≈ Φ_θ(B A x_i).

  • (T5) Define R(x) = r(x, Φ(x)) with Φ = Φ_θ for some r: X × X → [0,∞].

For imaging applications, the function class (Φ_θ)_{θ∈Θ} can be chosen as convolutional neural networks, which have been demonstrated to give powerful classes of mappings between image spaces. The function r measures the distance between a potential reconstruction x and the output of the network Φ(x), and possibly contains additional regularization [10,11]. According to the training strategy in item (T4), the value of the regularizer will be small if the reconstruction is similar to elements in (x_i)_{i=1}^n and large for elements in (B A x_i)_{i=1}^n. A simple example, which we will use for our numerical results, is the learned regularizer R(x) = ‖x − Φ(x)‖² + ‖x‖_TV.
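To make this construction concrete, the following is a minimal PyTorch sketch of how such a learned regularizer could be evaluated for a two-dimensional image. The network phi stands for a trained mapping Φ; the smoothed total variation term (of the kind used later in Section 3.2), the function name and the tensor shapes are illustrative assumptions rather than the exact implementation used in this paper.

```python
import torch

def learned_regularizer(x, phi, beta=1.0, eps=1e-3):
    """Evaluate R(x) = ||x - Phi(x)||^2 + beta * TV_eps(x) for an image x.

    x    : tensor of shape (H, W), a candidate reconstruction
    phi  : trained network mapping (1, 1, H, W) tensors to (1, 1, H, W) tensors
    beta : weight of the (smoothed) total variation term
    eps  : smoothing parameter of the total variation
    """
    phi_x = phi(x[None, None]).squeeze(0).squeeze(0)       # Phi(x)
    similarity_term = ((x - phi_x) ** 2).sum()             # ||x - Phi(x)||^2
    dx = x[1:, :-1] - x[:-1, :-1]                          # vertical differences
    dy = x[:-1, 1:] - x[:-1, :-1]                          # horizontal differences
    tv = torch.sqrt(dx ** 2 + dy ** 2 + eps ** 2).sum()    # smoothed total variation
    return similarity_term + beta * tv
```

Since all operations are differentiable, the gradient of R can be obtained by automatic differentiation, which is also how the gradient is computed in the numerical experiments of Section 3.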

Convergence analysis and convergence rates for NETT (which stands for Network Tikhonov, referring to variants of (2) where the regularization term is given by a neural network) as well as training strategies have been established in [9,11,12]. A different training strategy for learning a regularizer has been proposed in [13,14]. Note that learning the regularizer first and then minimizing the Tikhonov functional is different from variational and iterative networks [15,16,17,18,19,20], where an iterative scheme is applied to unroll the functional D_θ(Ax, y^δ) + α R_θ(x), which is then trained in an end-to-end fashion. Training the regularizer first has the advantage of being more modular, sharing some similarity with plug-and-play techniques [21], and the network training is independent of the forward operator A. Moreover, it allows one to derive a convergence analysis as the noise level tends to zero and therefore comes with theoretical recovery guarantees.

1.2. Discrete NETT

The existing analysis of NETT considers minimizers of the Tikhonov functional (2) with regularizer of the form R(x)=r(x,Φ(x)) before discretization, typically in an infinite dimensional setting. However, in practice, only finite dimensional approximations of the unknown, the operator and the neural network are given. To address these issues, in this paper, we study discrete NETT regularization which considers minimizers of

T_{y^δ,α,n}: X_n → [0,∞]: x ↦ D(A_n x, y^δ) + α R_n(x).  (3)

Here (X_n)_{n∈ℕ}, (A_n)_{n∈ℕ} and (R_n)_{n∈ℕ} are families of subspaces X_n ⊆ X, mappings A_n: X → Y and regularizers R_n: X → [0,∞], respectively, which reflect the discretization of all involved operations. We present a full convergence analysis as the noise level δ converges to zero and n, α are chosen accordingly. Discretization of variational regularization has been studied in [22] for the case that D is given by the norm distance and the regularizer R is convex and fixed. In the case of discrete NETT regularization, however, it is natural to consider regularizers that depend on the discretization, as the regularizer is learned in a discretized setting based on actual data. For that purpose, our analysis includes non-convex regularizers that are allowed to depend on the discretization and the noise level.

1.3. Outline

The convergence analysis including convergence rates is presented in Section 2. In Section 3 we will present numerical results for a non-standard limited data problem in photoacoustic tomography that can be considered as simultaneous inpainting and artifact removal problem. We conclude the paper with a short summary and conclusion presented in Section 4.

2. Convergence Analysis

In this section we study the convergence of (3) and derive convergence rates.

2.1. Well-Posedness

First we state the assumptions that we will use for well-posedness (existence and stability) of minimizing NETT.

Assumption 1

(Conditions for well-posedness).

  • (W1) X, Y are Banach spaces, X is reflexive, D ⊆ X is weakly sequentially closed.

  • (W2) The distance measure D: Y × Y → [0,∞] satisfies:

    • (a) ∃ τ ≥ 1 ∀ y₁, y₂, y₃ ∈ Y: D(y₁,y₂) ≤ τ D(y₁,y₃) + τ D(y₃,y₂).

    • (b) ∀ y₁, y₂ ∈ Y: D(y₁,y₂) = 0 ⟺ y₁ = y₂.

    • (c) ∀ y, ỹ ∈ Y: D(y,ỹ) < ∞ ∧ ‖ỹ − y_k‖ → 0 ⟹ D(y,y_k) → D(y,ỹ).

    • (d) ∀ y ∈ Y: ‖y_k − y‖ → 0 ⟹ D(y_k,y) → 0.

    • (e) D is weakly sequentially lower semi-continuous (wslsc).

  • (W3) R: X → [0,∞] is proper and wslsc.

  • (W4) A: D ⊆ X → Y is weakly sequentially continuous.

  • (W5) ∀ y, α, C: {x ∈ X ∣ T_{y,α}(x) ≤ C} is nonempty and bounded.

  • (W6) (X_n)_{n∈ℕ} is a sequence of subspaces of X.

  • (W7) (A_n)_{n∈ℕ} is a family of weakly sequentially continuous mappings A_n: D → Y.

  • (W8) (R_n)_{n∈ℕ} is a family of proper wslsc regularizers R_n: X → [0,∞].

  • (W9) ∀ y, α, C, n: {x ∈ X_n ∣ T_{y,α,n}(x) ≤ C} is nonempty and bounded.

Conditions (W2)–(W5) are quite standard for Tikhonov regularization in Banach spaces to guarantee the existence and stability of minimizers of the Tikhonov functional and the given conditions are similar to [2,8,9,10,12,23,24]. In particular, (W2) describes the properties that the distance measure D should have. Clearly, the norm distance on Y fulfills these properties. Moreover, (W2a) holds for the norm with τ=1 since it then corresponds to the triangle inequality. Item (W2c) is the continuity of D(y,·) while (W2d) considers the continuity of D(·,y) at y. While (W2c) is not needed for existence and convergence of NETT it is required for the stability result as shown in [10] (Example 2.7). On the other hand (W2e) implies that the Tikhonov functional is wslsc which is needed for existence. Assumption (W5) is a coercivity condition; see [9] (Remark 2.4f.) on how to achieve this for a regularizer defined by neural networks. Item (W8) poses some restrictions on the regularizers. For NETT this is not an issue as neural networks used in practice are continuous. Note that for convergence and convergence rates we will require additional conditions that concern the discretization of the reconstruction space, the forward operator and regularizer.

The references [8,9,10,23] all consider general distance measures and allow non-convex regularizers. However, existence and stability of minimizing (2) are shown under assumptions slightly different from (W1)–(W5). Below we therefore give a short proof of the existence and stability results.

Theorem 1

(Existence and Stability). Let Assumption 1 hold. Then for all y ∈ Y, α > 0 and n ∈ ℕ the following assertions hold true:

  • (a) argmin T_{y,α,n} ≠ ∅.

  • (b) Let (y_k)_{k∈ℕ} ∈ Y^ℕ with y_k → y and consider x_k ∈ argmin T_{y_k,α,n}. Then (x_k)_{k∈ℕ} has at least one weak accumulation point, and every weak accumulation point of (x_k)_{k∈ℕ} is a minimizer of T_{y,α,n}.

  • (c) The statements in (a), (b) also hold for T_{y,α} in place of T_{y,α,n}.

Proof. 

Since (W1), (W6)–(W9) for T_{y,α,n} with fixed n ∈ ℕ give the same assumptions as (W1), (W3)–(W5) for the non-discrete counterpart T_{y,α}, it is sufficient to verify (a), (b) for the latter. Existence of minimizers follows from (W1), (W2e), (W3)–(W5), because these items imply that T_{y,α} is a wslsc coercive functional defined on a nonempty weakly sequentially closed subset of a reflexive Banach space. To show stability, one notes that according to (W2a), for all x ∈ X we have

D(A x_k, y) + α R(x_k) ≤ τ D(A x_k, y_k) + α R(x_k) + τ D(y, y_k) ≤ τ D(A x, y_k) + α R(x) + τ D(y, y_k).

According to (W2c), (W2d), (W5) there exists x ∈ X such that the right hand side is bounded, which by (W5) shows that (x_k)_k has a weak accumulation point. Following the standard proof [2] (Theorem 3.23) shows that the weak accumulation points of (x_k)_k are minimizers of T_{y,α}. This uses the fact that the weak topology is indeed weaker than the norm topology, and that the involved functionals are wslsc.    □

In the following we write x^δ_{α,n} for minimizers of T_{y^δ,α,n}. For y ∈ Y we call x⁺ ∈ argmin{R(x) ∣ x ∈ X ∧ Ax = y} an R-minimizing solution of Ax = y.

Lemma 1

(Existence of R-minimizing solutions). Let Assumption 1 hold. For any y ∈ A(D) an R-minimizing solution of Ax = y exists. Likewise, if n ∈ ℕ and y ∈ A_n(D), an R_n-minimizing solution of A_n x = y exists.

Proof. 

Again it is sufficient to verify the claim for R-minimizing solutions. Because y ∈ A(D), the set A⁻¹({y}) = {x ∈ X ∣ Ax = y} is non-empty. Hence we can choose a sequence (x_k)_{k∈ℕ} in A⁻¹({y}) with R(x_k) → inf{R(x) ∣ x ∈ X ∧ Ax = y}. Due to (W2b), (x_k)_{k∈ℕ} is contained in {x ∈ X ∣ D(Ax, y) + α R(x) ≤ C} for some C > 0, which is bounded according to (W5). By (W1), X is reflexive and therefore (x_k)_{k∈ℕ} has a weak accumulation point x⁺. From (W1), (W4), (W3) we conclude that x⁺ is an R-minimizing solution of Ax = y. The case of R_n-minimizing solutions follows analogously.    □

2.2. Convergence

Next we prove that discrete NETT converges as the noise level goes to zero and the discretization as well as the regularization parameter are chosen properly. We write D_{n,M} := {x ∈ D ∩ X_n ∣ R_n(x) ≤ M} and formulate the following approximation conditions for obtaining convergence.

Assumption 2

(Conditions for convergence).

Element x⁺ ∈ D satisfies the following for all M > 0:

  • (C1) There exist (z_n)_{n∈ℕ} with z_n ∈ D ∩ X_n and λ_n := |R_n(z_n) − R(x⁺)| → 0.

  • (C2) ρ_n := sup_{x∈D_{n,M}} |R_n(x) − R(x)| → 0.

  • (C3) γ_n := D(A_n z_n, A x⁺) → 0.

  • (C4) a_n := sup_{x∈D_{n,M}} |D(A_n x, A x⁺) − D(A x, A x⁺)| → 0.

Conditions (C1) and (C3) concern the approximation of the true unknown x⁺ by elements of the discretization spaces that is compatible with the discretization of the forward operator and the regularizer. Conditions (C2) and (C4) are uniform approximation properties of the operator and the regularizer on R_n-bounded sets.

Theorem 2

(Convergence). Let (W1)–(W9) hold, let y ∈ A(D) and let x⁺ be an R-minimizing solution of Ax = y that satisfies (C1)–(C4). Moreover, suppose (δ_k)_{k∈ℕ} ∈ (0,∞)^ℕ converges to zero and (y_k)_{k∈ℕ} ∈ Y^ℕ satisfies D(y, y_k) ≤ δ_k. Choose (α_k)_{k∈ℕ} and (n_k)_{k∈ℕ} such that as k → ∞ we have

α_k → 0,  (4)
n_k → ∞,  (5)
(δ_k + D(A_{n_k} z_{n_k}, y)) / α_k → 0.  (6)

Then for x_k ∈ argmin T_{y_k,α_k,n_k} the following hold:

  • (a) (x_k)_{k∈ℕ} has a weakly convergent subsequence (x_{σ(k)})_{k∈ℕ}.

  • (b) The weak limit of (x_{σ(k)})_{k∈ℕ} is an R-minimizing solution of Ax = y.

  • (c) R_{σ(k)}(x_{σ(k)}) → R(x̄), where x̄ is the weak limit of (x_{σ(k)})_{k∈ℕ}.

  • (d) If the R-minimizing solution of Ax = y is unique, then (x_k)_{k∈ℕ} ⇀ x⁺.

Proof. 

For convenience and with some abuse of notation we use the abbreviations R_k := R_{n_k}, A_k := A_{n_k}, a_k := a_{n_k}, z_k := z_{n_k} and ρ_k := ρ_{n_k}. Because x_k is a minimizer of the discrete NETT functional T_{y_k,α_k,n_k}, by (W2) we have

D(A_k x_k, y_k) + α_k R_k(x_k) ≤ D(A_k z_k, y_k) + α_k R_k(z_k) ≤ τ D(A_k z_k, y) + τ D(y, y_k) + α_k R_k(z_k) ≤ τ D(A_k z_k, y) + τ δ_k + α_k R_k(z_k).

According to (C1), (4), we get

D(A_k x_k, y_k) ≤ τ (D(A_k z_k, y) + δ_k),  (7)
R_k(x_k) ≤ τ · (D(A_k z_k, y) + δ_k)/α_k + R_k(z_k).  (8)

According to (C1), (C3), (5), (6), the right hand side in (7) converges to zero and the right hand side in (8) converges to R(x⁺). Together with (C2) we obtain that R(x_k) ≤ R_k(x_k) + ρ_k is bounded and that D(A x_k, y) ≤ D(A_k x_k, y) + a_k ≤ τ D(A_k x_k, y_k) + a_k + τ δ_k → 0. This shows that (D(A x_k, y) + R(x_k))_{k∈ℕ} is bounded and by (W1), (W9) there exists a weakly convergent subsequence (x_{σ(k)})_{k∈ℕ}. We denote the weak limit by x̄ ∈ X. From (W2), (W4) we obtain A x̄ = y. The weak lower semi-continuity of R assumed in (W3) shows

R(x̄) ≤ liminf_k R(x_{σ(k)}) ≤ limsup_k R(x_{σ(k)}) ≤ limsup_k (R_{σ(k)}(x_{σ(k)}) + ρ_{σ(k)}) ≤ R(x⁺).

Consequently, x̄ is an R-minimizing solution of Ax = y and R(x_{σ(k)}) → R(x̄). If the R-minimizing solution is unique, then x⁺ is the only weak accumulation point of (x_k)_{k∈ℕ}, which concludes the proof.    □

2.3. Convergence Rates

Next we derive quantitative error estimates (convergence rates) in terms of the absolute Bregman distance. Recall that a functional R: X → [0,∞] is Gâteaux differentiable at x ∈ X if the directional derivative R′(x)(h) := lim_{t→0} (R(x + t h) − R(x))/t exists for every h ∈ X. We denote by R′(x) the Gâteaux derivative of R at x. In [9] we introduced the absolute Bregman distance B_R(·, x∗): X → [0,∞] of a Gâteaux differentiable functional R: X → [0,∞] at x∗ ∈ X, defined by

∀ x ∈ X: B_R(x, x∗) := |R(x) − R(x∗) − R′(x∗)(x − x∗)|.  (9)
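For instance, if X is a Hilbert space and R(x) = ½‖x‖², then R′(x∗)(h) = ⟨x∗, h⟩ and B_R(x, x∗) = |½‖x‖² − ½‖x∗‖² − ⟨x∗, x − x∗⟩| = ½‖x − x∗‖², so the absolute Bregman distance reduces to the squared norm distance. For non-convex functionals, such as regularizers defined by neural networks, the absolute value in (9) guarantees that B_R remains non-negative.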

We write sup_{y^δ} H(y^δ) := sup{H(y^δ) ∣ y^δ ∈ Y ∧ D(Ax⁺, y^δ) ≤ δ}. Convergence rates in terms of the Bregman distance are derived under a smoothness assumption on the true solution in the form of a certain variational inequality. More precisely, we assume the following:

Assumption 3

(Conditions for convergence rates).

Element x⁺ ∈ D satisfies the following for all M, δ > 0:

  • (R1) Items (C1), (C2) hold.

  • (R2) γ_{n,δ} := sup_{y^δ} |D(A_n z_n, y^δ) − D(A x⁺, y^δ)| → 0.

  • (R3) a_{n,δ} := sup_{y^δ} sup_{x∈D_{n,M}} |D(A_n x, y^δ) − D(A x, y^δ)| → 0.

  • (R4) R is Gâteaux differentiable at x⁺.

  • (R5) There exist a concave, continuous, strictly increasing φ: [0,∞) → [0,∞) with φ(0) = 0 and ϵ, β > 0 such that for all x ∈ X:
    |R(x) − R(x⁺)| ≤ ϵ ⟹ β B_R(x, x⁺) ≤ R(x) − R(x⁺) + φ(D(Ax, Ax⁺)).

According to (R5), the inverse function φ⁻¹: [0,∞) → [0,∞) exists and is convex. We denote by φ*(s) := sup{s t − φ⁻¹(t) ∣ t ≥ 0} its Fenchel conjugate.
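As an illustration, consider φ(t) = c√t for some c > 0, which is the form appearing in Lemma 2 below. Then φ⁻¹(t) = t²/c² and φ*(s) = sup{s t − t²/c² ∣ t ≥ 0} = c² s²/4, so that φ*(τα)/(τα) = c² τα/4. With the parameter choice α(δ) ∼ √δ this term is of order √δ, which is consistent with the rate obtained in Corollary 1 below.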

Proposition 1

(Error estimates). Let y ∈ A(D) and let x⁺ be an R-minimizing solution of Ax = y such that (W1)–(W9) and (R1)–(R5) are satisfied. For y^δ ∈ Y with D(y, y^δ) ≤ δ let x^δ_{α,n} ∈ argmin T_{y^δ,α,n}. Then, for sufficiently small δ, α > 0 and sufficiently large n ∈ ℕ, we have the error estimate

B_R(x^δ_{α,n}, x⁺) ≲ (a_{n,δ} + γ_{n,δ} + δ)/α + ρ_n + λ_n + φ(τδ) + φ*(τα)/(τα).  (10)

Proof. 

According to Theorem 2 we can assume |R(x^δ_{α,n}) − R(x⁺)| ≤ ϵ and with (R5) we obtain

α β B_R(x^δ_{α,n}, x⁺) ≤ α R(x^δ_{α,n}) − α R(x⁺) + α φ(D(A x^δ_{α,n}, y))
≤ α R_n(x^δ_{α,n}) − α R_n(z_n) + α ρ_n + α λ_n + α φ(D(A x^δ_{α,n}, y))
≤ D(A_n z_n, y^δ) − D(A_n x^δ_{α,n}, y^δ) + α ρ_n + α λ_n + α φ(D(A x^δ_{α,n}, y))
≤ δ − D(A x^δ_{α,n}, y^δ) + γ_{n,δ} + a_{n,δ} + α ρ_n + α λ_n + α φ(τδ) + α φ(τ D(A x^δ_{α,n}, y^δ))
≤ δ + γ_{n,δ} + a_{n,δ} + α ρ_n + α λ_n + α φ(τδ) + τ⁻¹ φ*(τα).

For the second inequality we used (C1) and (C2). We have D(A_n x^δ_{α,n}, y^δ) + α R_n(x^δ_{α,n}) ≤ D(A_n z_n, y^δ) + α R_n(z_n) and thus obtain an estimate for R_n(x^δ_{α,n}) − R_n(z_n), which we used for the third inequality. For the fourth inequality we used (R2) and (R3). Finally, we used Young's inequality α φ(τ t) ≤ t + τ⁻¹ φ*(τα) for the last step.    □

Remark 1.

The error estimate (10) includes the approximation quality of the discrete or inexact forward operator A_n and the discrete or inexact regularizer R_n, described by a_{n,δ} and ρ_n, respectively. What might be unexpected at first is the appearance of the two additional quantities λ_n and γ_{n,δ}. These terms both arise from the approximation of X by the finite dimensional spaces X_n, where γ_{n,δ} reflects the approximation accuracy in the image of the operator A and λ_n the approximation accuracy with respect to the true regularization functional R. Note that in the case where the forward operator, the regularizer and the solution space X are given precisely, we have a_{n,δ} = γ_{n,δ} = λ_n = ρ_n = 0. In this particular case we recover the estimate derived for NETT in [9].

Theorem 3

(Convergence rates). Let the assumptions of Proposition 1 hold, consider the parameter choice rule α(δ) ∼ δ/φ(δ) and let the approximation errors satisfy a_{n,δ} + γ_{n,δ} = O(δ) and ρ_n + λ_n = O(φ(τδ)). Then we have the convergence rate

B_R(x^δ_{α(δ),n(δ)}, x⁺) = O(φ(τδ)).  (11)

Proof. 

Noting that φ*(τδ/φ(τδ))/δ remains bounded as δ → 0, this directly follows from Proposition 1.    □

Next we verify that a variational inequality of the form (R5) is satisfied with φ(t) = c√t under a typical source-like condition.

Lemma 2

(Variational inequality under source condition). Let R, A be Gâteaux differentiable at x⁺ ∈ X, consider the distance measure D(y₁,y₂) = ‖y₁ − y₂‖² and assume there exist η ∈ Y and c₁, c₂, ϵ > 0 with c₁‖η‖ < 1 such that for all x ∈ X with |R(x) − R(x⁺)| ≤ ϵ we have

R′(x⁺) = A′(x⁺)*η,
‖A x − A x⁺ − A′(x⁺)(x − x⁺)‖ ≤ c₁ B_R(x, x⁺),
R(x⁺) − R(x) ≤ c₂ ‖A x − A x⁺‖.  (12)

Then (R5) holds with φ(t) = (‖η‖ + 2c₂)√t and β = 1 − c₁‖η‖.

Proof. 

Let x ∈ X with |R(x) − R(x⁺)| ≤ ϵ. Using the Cauchy–Schwarz inequality and Equation (12), we can estimate

|⟨R′(x⁺), x − x⁺⟩| ≤ ‖A′(x⁺)(x − x⁺)‖ ‖η‖ ≤ ‖A x − A x⁺‖ ‖η‖ + ‖A x − A x⁺ − A′(x⁺)(x − x⁺)‖ ‖η‖ ≤ ‖A x − A x⁺‖ ‖η‖ + c₁ ‖η‖ B_R(x, x⁺).

Additionally, if R(x) ≥ R(x⁺), we have |R(x) − R(x⁺)| = R(x) − R(x⁺), and on the other hand, if R(x) < R(x⁺), we have |R(x) − R(x⁺)| = R(x) − R(x⁺) + 2(R(x⁺) − R(x)) ≤ R(x) − R(x⁺) + 2c₂ ‖A x − A x⁺‖. Putting this together we get

B_R(x, x⁺) ≤ |R(x) − R(x⁺)| + |⟨R′(x⁺), x − x⁺⟩| ≤ R(x) − R(x⁺) + (‖η‖ + 2c₂) ‖A x − A x⁺‖ + c₁ ‖η‖ B_R(x, x⁺),

and thus (1 − c₁‖η‖) B_R(x, x⁺) ≤ R(x) − R(x⁺) + (‖η‖ + 2c₂) ‖A x − A x⁺‖. Since D(Ax, Ax⁺) = ‖Ax − Ax⁺‖², this is (R5) with φ(t) = (‖η‖ + 2c₂)√t and β = 1 − c₁‖η‖.    □

Corollary 1

(Convergence rates under source condition). Let the conditions of Lemma 2 hold and suppose

α(δ) ∼ √δ,
|R_{n(δ)}(z_{n(δ)}) − R(x⁺)| = O(√δ),
sup{|R_{n(δ)}(x) − R(x)| ∣ x ∈ D_{n(δ),M}} = O(√δ),
‖A_{n(δ)} z_{n(δ)} − A x⁺‖ = O(√δ),
sup{‖A_{n(δ)} x − A x‖ ∣ x ∈ D_{n(δ),M}} = O(√δ),
sup{‖A_{n(δ)} x‖ ∣ x ∈ D_{n(δ),M}} < ∞.

Then we have the convergence rates result

B_R(x^δ_{α(δ),n(δ)}, x⁺) = O(√δ).  (13)

Proof. 

This follows from Theorem 3 and Lemma 2. Note that we use the norm ‖·‖ in the corollary, while D(y₁,y₂) = ‖y₁ − y₂‖² is the squared norm distance; thus the approximation rates for the terms concerning A_{n(δ)} are of order √δ instead of δ as in Theorem 3.    □

In Corollary 1, the approximation quality of the discrete operator A_n and of the discrete and inexact regularization functional R_n need to be of the same order.

3. Application to a Limited Data Problem in PAT

Photoacoustic tomography (PAT) is an emerging non-invasive coupled-physics biomedical imaging technique with high contrast and high spatial resolution [25,26]. It works by illuminating a semi-transparent sample with short optical pulses, which causes heating of the sample followed by expansion and the subsequent emission of an acoustic wave. Sensors outside the sample measure the acoustic wave, and these measurements are then used to reconstruct the initial pressure f: ℝ^d → ℝ, which provides information about the interior of the object. The cases d = 2 and d = 3 are relevant for applications in PAT. Here we only consider the case d = 2 and assume a circular measurement geometry. The 2D case arises, for example, when using integrating line detectors in PAT [26].

3.1. Discrete Forward Operator

The pressure data p: ℝ² × [0,∞) → ℝ satisfies the wave equation (∂_t² − Δ) p(r,t) = 0 for (r,t) ∈ ℝ² × (0,∞) with initial data p(·,0) = f and ∂_t p(·,0) = 0. In the case of a circular measurement geometry one assumes that f vanishes outside the unit disc D₁ := {r ∈ ℝ² ∣ ‖r‖ < 1} and that the measurement sensors are located on the boundary ∂D₁ = S¹. We assume that the phantom does not generate any data for some region I ⊆ D₁, for example when the acoustic pressure generated inside I is too small to be recorded. This masked PAT problem consists in the recovery of the function f from sampled noisy measurements of g = W(𝟙_{I^c} f), where W denotes the solution operator of the wave equation and 𝟙_{I^c} the indicator function of I^c := ℝ² ∖ I. Note that the resulting inverse problem can be seen as the combination of an inpainting problem and an inverse problem for the wave equation.

In order to implement the PAT forward operator we use the basis ansatz f(r) = Σ_{i=1}^{N×N} x_i ψ(r − r_i), where x_i ∈ ℝ are basis coefficients, ψ: ℝ² → ℝ is a generalized Kaiser–Bessel (KB) function and r_i = (i − 1)/N with i = (i₁,i₂) ∈ {1,…,N}². Generalized KB functions are popular in tomographic inverse problems [27,28,29,30] and denote radially symmetric functions with support in D_R defined by

ψ(r) := (1 − ‖r‖²/R²)^{m/2} I_m(γ √(1 − ‖r‖²/R²)) / I_m(γ)   for ‖r‖ ≤ R.  (14)

Here I_m is the modified Bessel function of the first kind of order m ∈ ℕ, and the parameters γ > 0 and R denote the window taper and the support radius, respectively. Since W is linear we have W f = Σ_{i=1}^{N×N} x_i W(ψ(· − r_i)). For convenience we use a pseudo-3D approach based on the 3D solution of Wψ, for which an analytical representation exists [29]. Denote by s_k uniformly spaced sensor locations on S¹ and by t_j > 0 uniformly sampled measurement times in [0,2]. Define the N_t N_s × N² model matrix by W_{N_t(k−1)+j, N(i₁−1)+i₂} = W(ψ(· − r_i))(s_k, t_j) and the N² × N² diagonal matrix M_I by (M_I)_{N(i₁−1)+i₂, N(i₁−1)+i₂} = 1 if r_i ∈ I^c and zero otherwise. Let W M_I = U Σ V^⊤ be the singular value decomposition. We then consider the discrete forward matrix A = U Σ̃ V^⊤, where Σ̃ is the diagonal matrix derived from Σ by setting all singular values smaller than a chosen threshold to zero. This allows us to easily calculate A⁺ = V Σ̃⁺ U^⊤, where Σ̃⁺ is obtained by inverting all diagonal elements of Σ̃ that are greater than zero. In our experiments we use N = N_t = 128, N_s = 150 and take I fixed as a diagonal stripe of width 0.34.
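The construction of A and A⁺ can be sketched as follows. This is a minimal NumPy illustration assuming the dense model matrix W and the diagonal of the mask matrix M_I have already been assembled; the function and variable names are ours and not taken from the authors' code.

```python
import numpy as np

def truncated_svd_operator(W, mask_diag, sigma_min):
    """Build the truncated forward matrix A and its pseudo-inverse A^+.

    W         : (Nt*Ns, N*N) dense PAT model matrix (assumed precomputed)
    mask_diag : (N*N,) vector with entry 1 for basis coefficients with center
                outside the masked region I and 0 otherwise (diagonal of M_I)
    sigma_min : singular values below this threshold are set to zero
    """
    WM = W * mask_diag[None, :]                        # W @ M_I for a diagonal mask
    U, s, Vt = np.linalg.svd(WM, full_matrices=False)
    s_trunc = np.where(s >= sigma_min, s, 0.0)         # truncated spectrum
    A = (U * s_trunc) @ Vt                             # A = U Sigma~ V^T
    s_inv = np.where(s_trunc > 0, 1.0 / s_trunc, 0.0)
    A_pinv = (Vt.T * s_inv) @ U.T                      # A^+ = V Sigma~^+ U^T
    return A, A_pinv
```

Truncating the small singular values stabilizes the pseudo-inverse A⁺, which is used below to generate the basic reconstructions for training.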

3.2. Discrete NETT

We consider the discrete NETT with discrepancy term D(Ax, y^δ) = ‖Ax − y^δ‖²/2 and regularizer given by

R^{(m)}(x) = ‖x − Φ^{(m)}(x)‖₂² + β ‖x‖_{1,ϵ},  (15)

where ‖x‖_{1,ϵ} := Σ_{i₁,i₂=1}^{128} √(|x_{i₁+1,i₂} − x_{i₁,i₂}|² + |x_{i₁,i₂+1} − x_{i₁,i₂}|² + ϵ²) with ϵ > 0 is a smooth version of the total variation [31] and Φ^{(m)} is a trainable network. We take Φ^{(m)} as the U-Net [32] with residual connection, which was first applied to PAT image reconstruction in [33]. Here m ∈ ℕ stands for the number of down-/upsampling steps performed in the U-Net (the original architecture uses m = 4). This means that a larger m yields a deeper network with more parameters. We generate training data that consist of square-shaped rings with random profile and random location. See Figure 1 for an example of one such phantom (note that all plots in signal space use the same colorbar) and the corresponding data. We obtain a set of phantoms x₁,…,x₁₀₀₀ and corresponding basic reconstructions h_a := A⁺(A x_a + η_a), where A⁺ is the pseudo-inverse and η_a is Gaussian noise with standard deviation σ‖A x_a‖ with σ = 0.01. The networks are trained by minimizing Σ_{a=1}^{1000} ‖Φ^{(m)}(h_a) − x_a‖₁ + γ ‖Φ^{(m)}(x_a) − x_a‖₁, where we use the Adam optimizer with learning rate 0.01 and γ = 0.1. The rationale behind this loss is that we want the trained regularizer to give small values for x_a and large values for h_a. The strategy is similar to [9], but we use the final output of the network for the regularizer as proposed in [34]. To minimize the NETT functional with the regularizer (15) we use Algorithm 1, which implements a forward-backward scheme [35]. The most expensive step of this algorithm is the matrix inversion, but since we use a constant step size one also has the option to calculate the inverse of the matrix only once and reuse it. Thus one only has to perform two matrix-vector multiplications, which are of order O(N² N_t N_s) and O(N⁴), since N² is the dimension of our phantoms. On the other hand, calculating the gradient of the regularizer has similar complexity to applying the neural network, which is of order O(F² L N²) with F the number of convolution channels and L the number of layers.
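The training of the regularizer network can be sketched as follows. This is a simplified full-batch PyTorch illustration of the loss described above, assuming the phantoms and basic reconstructions are stacked into tensors of shape (n, 1, N, N); the batching strategy, the number of epochs and all names are illustrative assumptions.

```python
import torch

def train_regularizer_network(phi, x_clean, h_artifact, gamma=0.1, lr=0.01, epochs=100):
    """Train phi so that the clean phantoms x_a and the artifact-containing
    reconstructions h_a = A^+(A x_a + eta_a) are both mapped close to x_a.
    Then ||x - phi(x)|| is small for clean images and large for artifact images.

    x_clean, h_artifact : tensors of shape (n, 1, N, N)
    """
    opt = torch.optim.Adam(phi.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = (phi(h_artifact) - x_clean).abs().sum() \
            + gamma * (phi(x_clean) - x_clean).abs().sum()
        loss.backward()
        opt.step()
    return phi
```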

Algorithm 1: NETT optimization.
[The pseudocode of Algorithm 1 is provided as an image in the original publication.]
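Since the listing of Algorithm 1 is only available as an image, the following NumPy sketch shows one possible forward-backward implementation that is consistent with the description in the text: an explicit gradient step on the learned regularizer followed by a proximal step on the quadratic data term, with the matrix inverse computed only once for the constant step size. The exact update rule of Algorithm 1 may differ in details, and all names are illustrative.

```python
import numpy as np

def nett_forward_backward(A, y, grad_R, x0, alpha, s, n_iter=15):
    """Sketch of a forward-backward scheme for the discrete NETT functional
        T(x) = 0.5 * ||A x - y||^2 + alpha * R(x).
    Each iteration takes a gradient step on alpha*R followed by the proximal
    step  x_{k+1} = argmin_v 0.5*||A v - y||^2 + (s/2)*||v - z_k||^2,
    which amounts to solving (A^T A + s*Id) x_{k+1} = A^T y + s*z_k.

    grad_R : callable returning the gradient of R at x, e.g. obtained by
             automatic differentiation through the trained network
    x0     : starting point, e.g. Phi(A^+ y)
    """
    n = A.shape[1]
    H_inv = np.linalg.inv(A.T @ A + s * np.eye(n))   # computed once and reused
    Aty = A.T @ y
    x = x0.copy()
    for _ in range(n_iter):
        z = x - (alpha / s) * grad_R(x)   # forward (gradient) step on alpha*R
        x = H_inv @ (Aty + s * z)         # backward (proximal) step on the data term
    return x
```

In the experiments of Section 3.3, the gradient of R^{(m)} is computed with PyTorch's automatic differentiation and the iteration is started at x₀ = Φ^{(m)}(A⁺y).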

Figure 1.


Top, from left to right: phantom, masked phantom, and initial reconstruction A⁺Ax. The difference between the phantom on the left and the one in the middle shows the mask region I ⊆ D₁ where no data is generated. Bottom, from left to right: data without noise, with low noise (σ = 0.01), and with high noise (σ = 0.1).

3.3. Numerical Results

For the numerical results we train two regularizers R^{(1)} and R^{(3)} as described in Section 3.2. The networks are implemented using PyTorch [36]. We also use PyTorch to calculate the gradient ∇_x R^{(m)}. We take N_iter = 15, s = 0.25 and x₀ = Φ^{(m)}(A⁺y) in Algorithm 1 and compute the inverse (A^⊤A + s·Id)⁻¹ only once and then reuse it for all examples. We set α = 0.015 for the noise-free case, α = 0.016 for the low noise case and α = 0.02 for the high noise case, respectively, and select the fixed value β = 15. We expect that the NETT functional will yield better results due to data consistency, which is mainly helpful outside the masked center diagonal.

First we use the phantom from the test data shown in Figure 1. The results using post-processing and NETT are shown in Figure 2. One sees that all results obtained with higher noise than used during training are not very good. This indicates that one should use similar noise as in the later application, even for NETT. Figure 3 shows the average error using 10 test phantoms similar to the one in Figure 1. A careful numerical comparison of the observed convergence rates with the theoretical results of Theorem 3 is an interesting aspect for future research. To investigate the stability of our method with respect to phantoms that are different from the training data, we create a phantom with different structures, as shown in Figure 4. As expected, the post-processing network Φ^{(3)} is not really able to reconstruct the circle-shaped object, since it is quite different from the training data, but it also does not break down completely. On the other hand, the NETT approach yields good results due to data consistency.

Figure 2.


Top row: reconstructions using the post-processing network Φ^{(1)}. Middle row: NETT reconstructions using R^{(1)}. Bottom row: NETT reconstructions using R^{(3)}. From left to right: reconstructions from data without noise, with low noise (σ = 0.01) and with high noise (σ = 0.1).

Figure 3.


Semilogarithmic plot of the mean squared errors of the NETT using R(1) and R(3) depending on the noise level. The crosses are the values for the phantoms in Figure 2.

Figure 4.


Left column: phantom with a structure not contained in the training data (top) and pseudo inverse reconstruction (bottom). Middle column: Post-processing reconstructions with Φ(3) using exact (top) and noisy data (bottom). Right column: NETT reconstructions with R(3) using exact (top) and noisy data (bottom).

4. Conclusions

We have analyzed the convergence of a discretized NETT approach and derived convergence rates under certain assumptions on the approximation quality of the involved operators. We performed numerical experiments using a limited data problem for PAT that is the combination of an inverse problem for the wave equation and an inpainting problem. To the best of our knowledge, this is the first time such a problem has been studied with deep learning. The NETT approach yields better results than post-processing for phantoms different from the training data. NETT still fails to recover some missing parts of the phantom in cases where the data contains more noise than the training data. This highlights the relevance of using different regularizers for different noise levels. Finding ways to make the regularizers less dependent on the noise level used during training is a possible future research direction. Another interesting question is whether these results can be combined with approximation error estimates for neural networks, e.g., [37,38]. It seems not obvious how these two approaches can be combined. Furthermore, studying how one can define neural network based regularizers that fulfill (12) might also be an interesting line of future research.

Author Contributions

M.H. proposed the conceptualization, framework and long-term vision of the work. M.H. and S.A. developed the ideas, performed the formal analysis and wrote and edited the paper. S.A. conducted the numerical experiments and wrote the software. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been supported by the Austrian Science Fund (FWF), project P 30747-N32.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data and code are freely available upon request.

Conflicts of Interest

The authors declare no conflict of interest.

Footnotes

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1. Engl H.W., Hanke M., Neubauer A. Regularization of Inverse Problems. Mathematics and Its Applications, Volume 375; Kluwer Academic Publishers Group: Dordrecht, The Netherlands, 1996.
  • 2. Scherzer O., Grasmair M., Grossauer H., Haltmeier M., Lenzen F. Variational Methods in Imaging. Applied Mathematical Sciences, Volume 167; Springer: New York, NY, USA, 2009.
  • 3. Natterer F., Wübbeling F. Mathematical Methods in Image Reconstruction. Monographs on Mathematical Modeling and Computation, Volume 5; SIAM: Philadelphia, PA, USA, 2001.
  • 4. Zhdanov M.S. Geophysical Inverse Theory and Regularization Problems. Volume 36; Elsevier: Amsterdam, The Netherlands, 2002.
  • 5. Morozov V.A. Methods for Solving Incorrectly Posed Problems. Springer: New York, NY, USA, 1984.
  • 6. Tikhonov A.N., Arsenin V.Y. Solutions of Ill-Posed Problems. John Wiley & Sons: Washington, DC, USA, 1977.
  • 7. Ivanov V.K., Vasin V.V., Tanana V.P. Theory of Linear Ill-Posed Problems and Its Applications, 2nd ed. Inverse and Ill-Posed Problems Series; VSP: Utrecht, The Netherlands, 2002.
  • 8. Grasmair M. Generalized Bregman distances and convergence rates for non-convex regularization methods. Inverse Probl. 2010;26:115014. doi: 10.1088/0266-5611/26/11/115014.
  • 9. Li H., Schwab J., Antholzer S., Haltmeier M. NETT: Solving inverse problems with deep neural networks. Inverse Probl. 2020;36:065005. doi: 10.1088/1361-6420/ab6d57.
  • 10. Obmann D., Nguyen L., Schwab J., Haltmeier M. Sparse ℓq-regularization of inverse problems using deep learning. arXiv 2019, arXiv:1908.03006.
  • 11. Obmann D., Nguyen L., Schwab J., Haltmeier M. Augmented NETT regularization of inverse problems. J. Phys. Commun. 2021;5:105002. doi: 10.1088/2399-6528/ac26aa.
  • 12. Haltmeier M., Nguyen L.V. Regularization of inverse problems by neural networks. arXiv 2020, arXiv:2006.03972.
  • 13. Lunz S., Öktem O., Schönlieb C.B. Adversarial regularizers in inverse problems. NIPS; Montreal, QC, Canada, 2018; pp. 8507–8516.
  • 14. Mukherjee S., Dittmer S., Shumaylov Z., Lunz S., Öktem O., Schönlieb C.B. Learned convex regularizers for inverse problems. arXiv 2020, arXiv:2008.02839.
  • 15. Adler J., Öktem O. Solving ill-posed inverse problems using iterative deep neural networks. Inverse Probl. 2017;33:124007. doi: 10.1088/1361-6420/aa9581.
  • 16. Aggarwal H.K., Mani M.P., Jacob M. MoDL: Model-based deep learning architecture for inverse problems. IEEE Trans. Med. Imaging 2018;38:394–405. doi: 10.1109/TMI.2018.2865356.
  • 17. de Hoop M.V., Lassas M., Wong C.A. Deep learning architectures for nonlinear operator functions and nonlinear inverse problems. arXiv 2019, arXiv:1912.11090.
  • 18. Kobler E., Klatzer T., Hammernik K., Pock T. Variational networks: Connecting variational methods and deep learning. In Proceedings of the German Conference on Pattern Recognition, Basel, Switzerland, 12–15 September 2017; Springer: Cham, Switzerland, 2017; pp. 281–293.
  • 19. Yang Y., Sun J., Li H., Xu Z. Deep ADMM-Net for compressive sensing MRI. In Proceedings of the 30th International Conference on Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; pp. 10–18.
  • 20. Shang Y. Subspace confinement for switched linear systems. Forum Math. 2017;29:693–699. doi: 10.1515/forum-2015-0188.
  • 21. Romano Y., Elad M., Milanfar P. The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 2017;10:1804–1844. doi: 10.1137/16M1102884.
  • 22. Pöschl C., Resmerita E., Scherzer O. Discretization of variational regularization in Banach spaces. Inverse Probl. 2010;26:105017. doi: 10.1088/0266-5611/26/10/105017.
  • 23. Pöschl C. Tikhonov Regularization with General Residual Term. Ph.D. Thesis, University of Innsbruck, Innsbruck, Austria, 2008.
  • 24. Tikhonov A.N., Leonov A.S., Yagola A.G. Nonlinear Ill-Posed Problems. Applied Mathematics and Mathematical Computation, Volumes 1, 2 and 14; Chapman & Hall: London, UK, 1998; translated from the Russian.
  • 25. Kruger R., Lui P., Fang Y., Appledorn R. Photoacoustic ultrasound (PAUS)—Reconstruction tomography. Med. Phys. 1995;22:1605–1609. doi: 10.1118/1.597429.
  • 26. Paltauf G., Nuster R., Haltmeier M., Burgholzer P. Photoacoustic tomography using a Mach–Zehnder interferometer as an acoustic line detector. Appl. Opt. 2007;46:3352–3358. doi: 10.1364/AO.46.003352.
  • 27. Matej S., Lewitt R.M. Practical considerations for 3-D image reconstruction using spherically symmetric volume elements. IEEE Trans. Med. Imaging 1996;15:68–78. doi: 10.1109/42.481442.
  • 28. Schwab J., Pereverzyev S. Jr., Haltmeier M. A Galerkin least squares approach for photoacoustic tomography. SIAM J. Numer. Anal. 2018;56:160–184. doi: 10.1137/16M1109114.
  • 29. Wang K., Schoonover R.W., Su R., Oraevsky A., Anastasio M.A. Discrete imaging models for three-dimensional optoacoustic tomography using radially symmetric expansion functions. IEEE Trans. Med. Imaging 2014;33:1180–1193. doi: 10.1109/TMI.2014.2308478.
  • 30. Wang K., Su R., Oraevsky A.A., Anastasio M.A. Investigation of iterative image reconstruction in three-dimensional optoacoustic tomography. Phys. Med. Biol. 2012;57:5399. doi: 10.1088/0031-9155/57/17/5399.
  • 31. Acar R., Vogel C.R. Analysis of bounded variation penalty methods for ill-posed problems. Inverse Probl. 1994;10:1217. doi: 10.1088/0266-5611/10/6/003.
  • 32. Ronneberger O., Fischer P., Brox T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of MICCAI 2015, Munich, Germany, 5–9 October 2015; Springer: Cham, Switzerland, 2015; pp. 234–241.
  • 33. Antholzer S., Haltmeier M., Schwab J. Deep learning for photoacoustic tomography from sparse data. Inverse Probl. Sci. Eng. 2019;27:987–1005. doi: 10.1080/17415977.2018.1518444.
  • 34. Antholzer S., Schwab J., Bauer-Marschallinger J., Burgholzer P., Haltmeier M. NETT regularization for compressed sensing photoacoustic tomography. In Proceedings of Photons Plus Ultrasound: Imaging and Sensing 2019, San Francisco, CA, USA, 3–6 February 2019; p. 108783B.
  • 35. Combettes P.L., Pesquet J.C. Proximal splitting methods in signal processing. In Fixed-Point Algorithms for Inverse Problems in Science and Engineering; Springer: Berlin/Heidelberg, Germany, 2011; pp. 185–212.
  • 36. Paszke A., Gross S. PyTorch: An imperative style, high-performance deep learning library. NIPS; Montreal, QC, Canada, 2018; pp. 8024–8035.
  • 37. Hornik K. Some new results on neural network approximation. Neural Netw. 1993;6:1069–1072. doi: 10.1016/S0893-6080(09)80018-X.
  • 38. Barron A.R. Approximation and estimation bounds for artificial neural networks. Mach. Learn. 1994;14:115–133. doi: 10.1007/BF00993164.
