Skip to main content
Springer logoLink to Springer
. 2016 Dec 15;59(3):394–414. doi: 10.1007/s10851-016-0692-2

Acceleration of the PDHGM on Partially Strongly Convex Functions

Tuomo Valkonen 1,2,, Thomas Pock 3,4
PMCID: PMC6961483  PMID: 32009737

Abstract

We propose several variants of the primal–dual method due to Chambolle and Pock. Without requiring full strong convexity of the objective functions, our methods are accelerated on subspaces with strong convexity. This yields mixed rates, O(1/N2) with respect to initialisation and O(1 / N) with respect to the dual sequence, and the residual part of the primal sequence. We demonstrate the efficacy of the proposed methods on image processing problems lacking strong convexity, such as total generalised variation denoising and total variation deblurring.

Keywords: Primal–dual, Accelerated, Subspace, Total generalised variation

Introduction

Let G:XR¯ and F:YR¯ be convex, proper, and lower semicontinuous functionals on Hilbert spaces X and Y, possibly infinite dimensional. Also let KL(X;Y) be a bounded linear operator. We then wish to solve the problem

minxXG(x)+F(Kx).

This can under mild conditions on F (see, for example, [1, 2]) also be written with the help of the convex conjugate F in the minimax form

minxXmaxyYG(x)+Kx,y-F(y).

One possibility for the numerical solution of the latter form is the primal–dual algorithm of Chambolle and Pock [3], a type of proximal point or extragradient method, also classified as the ‘modified primal–dual hybrid gradient method’ or PDHGM by Esser et al. [4]. If either G or F is strongly convex, the method can be accelerated to O(1/N2) convergence rates of the iterates and an ergodic duality gap [3]. But what if we have only partial strong convexity? For example, what if

G(x)=G0(Px)

for a projection operator P to a subspace X0X, and strongly convex G0:X0R? This kind of structure is common in many applications in image processing and data science, as we will more closely review in Sect. 5. Under such partial strong convexity, can we obtain a method that would give an accelerated rate of convergence at least for Px?

We provide a partially positive answer: we can obtain mixed rates, O(1/N2) with respect to initialisation, and O(1 / N) with respect to bounds on the ‘residual variables’ y and (I-P)x. In this respect, our results are similar to the ‘optimal’ algorithm of Chen et al. [5]. Instead of strong convexity, they assume smoothness of G to derive a primal–dual algorithm based on backward–forward steps, instead of the backward–backward steps of [3].

The derivation of our algorithms is based, firstly, on replacing simple step length parameters by a variety of abstract step length operators and, secondly, a type of abstract partial strong monotonicity property

G(x)-G(x),T~-1(x-x)x-xT~-1,Γ2-penalty_term, 1

the full details of which we provide in Sect. 2. Here T~ is an auxiliary step length operator. Our factor of strong convexity is a positive semidefinite operator Γ0; however, to make our algorithms work, we need to introduce additional artificial strong convexity through another operator Γ, which may not satisfy 0ΓΓ. This introduces the penalty term in (1). The exact procedure can be seen as a type of smoothing, famously studied by Nesterov [6], and more recently, for instance, by Beck and Teboulle [7]. In these approaches, one computes a priori a level of smoothing—comparable to Γ—needed to achieve prescribed solution quality. One then solves a smoothed problem, which can be done at O(1/N2) rate. However, to obtain a solution with higher quality than the a priori prescribed one, one needs to solve a new problem from scratch, as the smoothing alters the problem being solved. One can also employ restarting strategies, to take some advantage of the previous solution, see, for example, [8]. Our approach does not depend on restarting and a priori chosen solution qualities: the method will converge to an optimal solution to the original non-smooth problem. Indeed, the introduced additional strong convexity Γ is controlled automatically.

The ‘fast dual proximal gradient method’, or FDPG [9], also possesses different type of mixed rates, O(1 / N) for the primal, and O(1/N2) for the dual. This is, however, under standard strong convexity assumptions. Other than that, our work is related to various further developments from the PDHGM, such as variants for nonlinear K [10, 11] and non-convex G [12]. The PDHGM has been the basis for inertial methods for monotone inclusions [13] and primal–dual stochastic coordinate descent methods without separability requirements [14]. Finally, the FISTA [15, 16] can be seen as a primal-only relative of the PDHGM. Not attempting to do full justice here to the large family of closely related methods, we point to [4, 17, 18] for further references.

The contributions of our paper are twofold: firstly, to paint a bigger picture of what is possible, we derive a very general version of the PDHGM. This algorithm, useful as a basis for deriving other new algorithms besides ours, is the content of Sect. 2. In this section, we provide an abstract bound on the iterates of the algorithm, later used to derive convergence rates. In Sect. 3, we extend the bound to include an ergodic duality gap under stricter conditions on the acceleration scheme and the step length operators. A by-product of this work is the shortest convergence rate proof for the accelerated PDHGM known to us. Afterwards, in Sect. 4, we derive from the general algorithm two efficient mixed-rate algorithms for problems exhibiting strong convexity only on subspaces. The first one employs the penalty or smoothing ψ on both the primal and the dual. The second one only employs the penalty on the dual. We finish the study with numerical experiments in Sect. 5. The main results of interest for readers wishing to apply our work are Algorithms 3 and 4 along with the respective convergence results, Theorems 4.1 and 4.2.

A General Primal–Dual Method

Notation

To make the notation definite, we denote by L(X;Y) the space of bounded linear operators between Hilbert spaces X and Y. For T,SL(X;X), the notation TS means that T-S is positive semidefinite; in particular, T0 means that T is positive semidefinite. In this case, we also denote

[0,T]:={λTλ[0,1]}.

The identity operator is denoted by I, as is standard.

For 0ML(X;X), which can possibly not be self-adjoint, we employ the notation

a,bM:=Ma,b,andaM:=a,aM. 2

We also use the notation T-1,:=(T-1).

Background

As in the introduction, let us be given convex, proper, lower semicontinuous functionals G:XR¯ and F:YR¯ on Hilbert spaces X and Y, as well as a bounded linear operator KL(X;Y). We then wish to solve the minimax problem

minxXmaxyYG(x)+Kx,y-F(y), P

assuming the existence of a solution u^=(x^,y^) satisfying the optimality conditions

-Ky^G(x^),andKx^F(y^). OC

Such a point always exists if limxG(x)/x= and limyF(y)/y=, as follows from [2, Proposition VI.1.2 & Proposition VI.2.2]. More generally the existence has to be proved explicitly. In finite dimensions, see, for example, [19] for several sufficient conditions.

The primal–dual method of Chambolle and Pock [3] for the solving (P) consists of iterating

xi+1:=(I+τiG)-1(xi-τiKyi), 3a
x¯i+1:=ωi(xi+1-xi)+xi+1, 3b
yi+1:=(I+σi+1F)-1(yi+σi+1Kx¯i+1). 3c

In the basic version of the algorithm, ωi=1, τiτ0, and σiσ0, assuming that the step length parameters satisfy τ0σ0K2<1. The method has O(1 / N) rate for the ergodic duality gap [3]. If G is strongly convex with factor γ, we may use the acceleration scheme [3]

ωi:=1/1+2γτi,τi+1:=τiωi,andσi+1:=σi/ωi, 4

to achieve O(1/N2) convergence rates of the iterates and an ergodic duality gap, defined in [3]. To motivate our choices later on, observe that σ0 is never used expect to calculate σ1. We may therefore equivalently parametrise the algorithm by δ=1-K2τ0σ0>0.

We note that the order of the steps in (3) is different from the original ordering in [3]. This is because with the present order, the method (3) may also be written in the proximal point form. This formulation, first observed in [20] and later utilised in [10, 11, 21], is also what we will use to streamline our analysis. Introducing the general variable splitting notation,

u=(x,y),

the system (3) then reduces into

0H(ui+1)+Mbasic,i(ui+1-ui), 5

for the monotone operator

H(u):=G(x)+KyF(y)-Kx, 6

and the preconditioning or step length operator

Mbasic,i:=I/τi-K-ωiKI/σi+1. 7

We note that the optimality conditions (OC) can also be encoded as 0H(u^).

Abstract Partial Monotonicity

Our plan now is to formulate a general version of (3), replacing τi and σi by operators TiL(X;X) and ΣiL(Y;Y). In fact, we will need two additional operators T~iL(X;X) and T^iL(Y;Y) to help communicate change in Ti to Σi. They replace ωi in (3b) and (7), operating as T^i+1KT~i-1ωiK from both sides of K. The role of T~i is to split the original primal step length τi in the space X into the two parts Ti and T~i with potentially different rates. The role of T^i is to transfer T~i into the space Y, to eventually control the dual step length Σi. In the basic algorithm (3), we would simply have T~i=Ti=τiIL(X;X), and T^i=τiIL(Y;Y) for the scalar τi.

To start the algorithm derivation, we now formulate abstract forms of partially strong monotonicity. As a first step, we take subsets of invertible operators

T~L(X;X),andT^L(Y;Y),

as well as subsets of positive semidefinite operators

0K~L(X;X),and0K^L(Y;Y).

We assume T~ and T^ closed with respect to composition: T~1T~2T~ for T~1,T~2T~.

We use the sets K~ and K^ as follows. We suppose that G is partially strongly (ψ,T~,K~) -monotone, meaning that for all x,xX,T~T~, Γ[0,Γ]+K~ holds

G(x)-G(x),T~-1(x-x)x-xT~-1,Γ2-ψT~-1,(Γ-Γ)(x-x), G-PM

for some family of functionals {ψT:XR}, and a linear operator 0ΓL(X;X) which models partial strong monotonicity. The inequality in (G-PM), and all such set inequalities in the remainder of this paper, is understood to hold for all elements of the sets G(x) and G(x). The operator T~T~ acts as a testing operator, and the operator ΓK~ as introduced strong monotonicity. The functional ψT~-1,(Γ-Γ) is a penalty corresponding to the test and the introduced strong monotonicity. The role of testing will become more apparent in Sect. 2.4.

Similarly to (G-PM), we assume that F is (ϕ,T^,K^) -monotone with respect to T^ in the sense that for all y,yY, T^T^,RK^ holds graphic file with name 10851_2016_692_Figc_HTML.jpg

for some family of functionals {ϕT:YR}. Again, the inequality in (F-PM) is understood to hold for all elements of the sets F(y) and F(y).

In our general analysis, we do not set any conditions on ψ and ϕ, as their role is simply symbolic transfer of dissatisfaction of strong monotonicity into a penalty in our abstract convergence results.

Let us next look at a few examples on how (G-PM) or (F-PM) might be satisfied. First we have the very well-behaved case of quadratic functions.

Example 2.1

G(x)=f-Ax2/2 satisfies (G-PM) with Γ=AA, K~={0}, and ψ0 for any invertible T~. Indeed, G is differentiable with G(x)-G(x),T~-1(x-x)=AA(x-x),T~-1(x-x)=x-xT~-1,Γ2.

graphic file with name 10851_2016_692_Figd_HTML.jpg

The next lemma demonstrates what can be done when all the parameters are scalar. It naturally extends to functions of the form G(x1,x2)=G(x1)+G(x2) with corresponding product form parameters.

Lemma 2.1

Let G:XR¯ be convex and proper with domG bounded. Then,

G(x)-G(x)G(x),x-x+γ2x-x2-Cψ, 8

for some constant Cψ0, every γ0, and x,xX.

Proof

We denote A:=domG. If xA, we have G(x)=, so (8) holds irrespective of γ and C. If xA, we have G(x)=, so (8) again holds. We may therefore compute the constants based on x,xA. Now, there is a constant M such that supxAxM. Then, x-x2M. Thus, if we pick C=4M2, then (γ/2)(x-x2-C)0 for every γ0 and x,xA. By the convexity of G, (8) holds.

Example 2.2

An indicator function ιA of a convex bounded set A satisfies the conditions of Lemma 2.1. This is generally what we will use and need.

A General Algorithm and the Idea of Testing

The only change we make to the proximal point formulation (5) of the method (3) is to replace the basic step length or preconditioning operator Mbasic,i by the operator

Mi:=Ti-1-K-T^i+1KT~i-1Σi+1-1. 10

As we have remarked, the operators T^i+1 and T~i play the role of ωi, acting from both sides of K. Our proposed algorithm can thus be characterised as solving on each iteration iN for the next iterate ui+1 the preconditioned proximal point problem

0H(ui+1)+Mi(ui+1-ui). PP

To study the convergence properties of (PP), we define the testing operator

Si:=T~i-1,00T^i+1-1. 11

It will turn out that multiplying or ‘testing’ (PP) by this operator will allow us to derive convergence rates. The testing of (PP) by Si is why we introduced testing into the monotonicity conditions (G-PM) and (F-PM). If we only tested (PP) with Si=I, we could at most obtain ergodic convergence of the duality gap for the unaccelerated method. But by testing with something appropriate and faster increasing, such as (11), we are able to extract better convergence rates from (PP).

We also set

Γ¯i=2ΓiT~i(KT~i-1-T^i+1-1K)T^i+1(KT~i-1-T^i+1-1K)2Ri+1,

for some Γi[0,Γ]+K~ and Ri+1K^. We will see in Sect. 2.6 that Γ¯i is a factor of partial strong monotonicity for H with respect to testing by Si. With this, taking a fixed δ>0, the properties

Si(Mi+Γ¯i)Si+1Mi+1,and C1
SiMiδT~i-1,Ti-10000, C2

will turn out to be the crucial defining properties for the convergence rates of the iteration (PP). The method resulting from the combination of (PP), (C1), and (C2) can also be expressed as Algorithm 1. The main steps in developing practical algorithms based on Algorithm 1 will be in the choice of the various step length operators. This will be the content of Sects. 3 and 4. Before this, we expand the conditions (C1) and (C2) to see how they might be satisfied and study abstract convergence results.

A Simplified Condition

We expand

SiMi=T~i-1,Ti-1-T~i-1,K-KT~i-1T^i+1-1Σi+1-1, 12

as well as

SiΓ¯i=2T~i-1,ΓiT~i-1,K-KT^i+1-1,KT~i-1-T^i+1-1K2T^i+1-1Ri+1, 13

and

Si(Mi+Γ¯i)=T~i-1,(Ti-1+2Γi)-KT^i+1-1,-T^i+1-1KT^i+1-1(Σi+1-1+2Ri+1).

We observe that if S,TL(X;Y), then for arbitrary invertible ZL(Y;Y) a type of Cauchy (or Young) inequality holds, namely

0TSST0=0TZZ-1,SSZ-1ZT0TZZT00SZ-1Z-1,S. 14

The inequality here can be verified using the basic Cauchy inequality 2x,yx2+y2. Applying (14) in (12), we see that (C2) is satisfied when

T^i+1-1Σi+1-1KZi-1Zi-1,K,and(1-δ)T~i-1,Ti-1T~i-1,ZiZiT~-1, 15

for some invertible ZiL(X;X). The second condition in (15) is satisfied as an equality if

ZiZi=(1-δ)Ti-1T~i. 16

By the spectral theorem for self-adjoint operators on Hilbert spaces (e.g. [22, Chapter 12]), we can make the choice (16) if

Ti-1T~iQ:={AL(X;X)Ais self-adjoint and positive definite}.

Equivalently, by the same spectral theorem, T~i-1TiQ. Therefore, we see from (15) that (C2) holds when graphic file with name 10851_2016_692_Fige_HTML.jpg Also, (C1) can be rewritten as graphic file with name 10851_2016_692_Figf_HTML.jpg

Basic Convergence Result

Our main result on Algorithm 1 is the following theorem, providing some general convergence estimates. It is, however, important to note that the theorem does not yet directly prove convergence, as its estimates depend on the rate of decrease in TNT~N, as well as the rate of increase in the penalty sum i=0N-1Di+1 coming from the dissatisfaction of strong convexity. Deriving these rates in special cases will be the topic of Sect. 4.

Theorem 2.1

Let us be given KL(X;Y), and convex, proper, lower semicontinuous functionals G:XR¯ and F:YR¯ on Hilbert spaces X and Y, satisfying (G-PM) and (F-PM). Pick δ(0,1), and suppose (C1) and (C2) are satisfied for each iN for some invertible TiL(X;X), T~iT~, T^i+1T^, and Σi+1L(Y;Y), as well as Γi[0,Γ]+K~ and Ri+1K^. Suppose that T~i-1,Ti-1 and T^i+1-1Σi+1-1 are self-adjoint. Let u^=(x^,y^) satisfy (OC). Then, the iterates of Algorithm 1 satisfy

δ2xN-x^T~N-1,TN-12C0+i=0N-1D~i+1,(N1), 17

for

D~i+1:=ψT~i-1,(Γi-Γ)(xi+1-x^)+ϕT^i+1-1Ri+1(yi+1-y^),andC0:=12u0-u^S0M02. 18

Remark 2.1

The term D~i+1, coming from the dissatisfaction of strong convexity, penalises the basic convergence, which is on the right-hand side of (17) presented by the constant C0. If TNT~N is of the order O(1/N2), at least on a subspace, and we can bound the penalty D~i+1C for some constant C, then we clearly obtain mixed O(1/N2)+O(1/N) convergence rates on the subspace. If we can assume that D~i+1 actually converges to zero at some rate, then it will even be possible to obtain improved convergence rates. Since typically T~i,T^i+10 reduce to scalar factors within D~i+1, this would require prior knowledge of the rates of convergence xix^ and yiy^. Boundedness of the iterates {(xi,yi)}i=0, we can, however, usually ensure.

Proof

Since 0H(u^), we have

H(ui+1),Si(ui+1-u^)H(ui+1)-H(u^),Si(ui+1-u^).

Recalling the definition of Si from (11), and of H from (6), it follows

H(ui+1),Si(ui+1-u^)G(xi+1)-G(x^),T~i-1(xi+1-x^)+F(yi+1)-F(y^),T^i+1-1,(yi+1-y^)+K(yi+1-y^),T~i-1(xi+1-x^)-K(xi+1-x^),T^i+1-1,(yi+1-y^).

An application of (G-PM) and (F-PM) consequently gives

H(ui+1),Si(ui+1-u^)xi+1-x^T~i-1,Γi2+yi+1-y^T^i+i-1Ri+12-ϕT^i+1-1Ri+1(yi+1-y^)-ψT~i-1,(Γi-Γ)(xi+1-x^)+KT~i-1(xi+1-x^),yi+1-y^-T^i+1-1K(xi+1-x^),yi+1-y^.

Using the expression (13) for SiΓ¯i, and (18) for D~i+1, we thus deduce

H(ui+1),Si(ui+1-u^)12ui+1-u^SiΓ¯i2-D~i+1. 19

For arbitrary self-adjoint ML(X×Y;X×Y), we calculate

ui+1-ui,ui+1-u^M=12ui+1-uiM2-12ui-u^M2+12ui+1-u^M2. 20

We observe that SiMi in (12) is self-adjoint as we have assumed that T~i-1,Ti-1 and T^i+1-1Σi+1-1 are self-adjoint. In consequence, using (20) we obtain

Mi(ui-ui+1),Si(ui+1-u^)=-12ui+1-uiSiMi2+12ui-u^SiMi2-12ui+1-u^SiMi2.

Using (C1) to estimate 12ui+1-u^SiMi2 and (C2) to eliminate 12ui+1-uiSiMi2 yields

Mi(ui-ui+1),Si(ui+1-u^)12ui-u^SiMi2-12ui+1-u^Si+1Mi+12+12ui+1-u^SiΓ¯i2. 21

Combining (19) and (21) through (PP), we thus obtain

12ui+1-u^Si+1Mi+1212ui-u^SiMi2+D~i+1. 22

Summing (22) over i=1,,N-1, and applying (C2) to estimate SNMN from below, we obtain (17).

Scalar Off-diagonal Updates and the Ergodic Duality Gap

One relatively easy way to satisfy (G-PM), (F-PM), (C1) and (C2) is to take the ‘off-diagonal’ step length operators T^i and T~i as equal scalars. Another good starting point would be to choose T~i=Ti. We, however, do not explore this route in the present work. Instead, we now specialise Theorem 2.1 to the scalar case. We then explore ways to add estimates of the ergodic duality gap into (17). While this would be possible in the general framework through convexity notions analogous to (G-PM) and (F-PM), the resulting gap would not be particularly meaningful. We therefore concentrate on the scalar off-diagonal updates to derive estimates on the ergodic duality gap.graphic file with name 10851_2016_692_Figg_HTML.jpg

Scalar Specialisation of Algorithm 1

We take both T~i=τ~iI, and T^i=τ~iI for some τ~i>0. With

ω~i:=τ~i+1/τ~i,

the condition (C2) then becomes graphic file with name 10851_2016_692_Figh_HTML.jpg

The off-diagonal terms cancelling out (C1) on the other hand become graphic file with name 10851_2016_692_Figi_HTML.jpg Observe also that Mi is under this setup self-adjoint if Ti and Σi+1 are.

For simplicity, we now assume ϕ and ψ to satisfy the identities

ψT(-x)=ψT(x),andψαT(x)=αψT(x),(xX;0<αR). 24

The monotonicity conditions (G-PM) and (F-PM) then simplify into

G(x)-G(x),x-xx-xΓ2-ψΓ-Γ(x-x), G-pm

holding for all x,xX, and Γ[0,Γ]+K~; and

F(y)-F(y),y-yy-yR2-ϕR(y-y), F*-pm

holding for all y,yY, and RK^.

We have thus converted the main conditions (C2), (C1), (G-PM), and (F-PM) of Theorem 2.1 into the respective conditions (C2), (C1), (G-pm), and (F-pm). Rewriting (C1) in terms of 0<ΩiL(X;X) and ω~i>0 satisfying

Ti+1=TiΩiandτ~i+1=τ~iω~i,

we reorganise (C1) and (C2) into the parameter update rules (23) of Algorithm 2. For ease of expression, we introduce there Σ0 and R0 as dummy variables that are not used anywhere else. Equating w¯i+1=Kx¯i+1, we observe that Algorithm 2 is an instance of Algorithm 1.

Example 3.1

(The method of Chambolle and Pock) Let G be strongly convex with factor γ0. We take Ti=τiI, T~i=τiI, T^i=τiI, and Σi+1=σi+1I for some scalars τi,σi+1>0. The conditions (G-pm) and (F-pm) then hold with ψ0 and ϕ0, while (C2) and (C1) reduce with Ri+1=0, Γi=γI, Ωi=ωiI, and ω~i=ωi into

ωi2(1+2γτi)1,and(1-δ)/K2τi+2σi+2τi+1σi+1.

Updating σi+1 such that the last inequality holds as an equality, we recover the accelerated PDHGM (3) + (4). If γ=0, we recover the unaccelerated PDHGM.

The Ergodic Duality Gap and Convergence

To study the convergence of an ergodic duality gap, we now introduce convexity notions analogous to (G-pm) and (F-pm). Namely, we assume

G(x)-G(x)G(x),x-x+12x-xΓ2-12ψΓ-Γ(x-x), G-pc

to hold for all x,xX and Γ[0,Γ]+K~ and graphic file with name 10851_2016_692_Figj_HTML.jpg to hold for all y,yY and RK^. Clearly these imply (G-pm) and (F-pm).

To define an ergodic duality gap, we set

q~N:=i=0N-1τ~i-1,andq^N:=i=0N-1τ~i+1-1, 25

and define the weighted averages

xN:=q~N-1i=0N-1τ~i-1xi+1,andyN:=q^N-1i=0N-1τ~i+1-1yi+1.

With these, the ergodic duality gap at iteration N is defined as the duality gap for (xN,YN), namely

GN:=(G(xN)+y^,KxN-F(y^))-(G(x^)+yN,Kx^-F(yN)),

and we have the following convergence result.

Theorem 3.1

Let us be given KL(X;Y), and convex, proper, lower semicontinuous functionals G:XR¯ and F:YR¯ on Hilbert spaces X and Y, satisfying (G-pc) and (F-pc) for some sets K~, K^, and 0ΓL(X;X). Pick δ(0,1), and suppose (C2) and (C1) are satisfied for each iN for some invertible self-adjoint TiQ, ΣiL(Y;Y), graphic file with name 10851_2016_692_Figk_HTML.jpg as well as Γiλ([0,Γ]+K~) and RiλK^ for λ=1/2. Let u^=(x^,y^) satisfy (OC). Then, the iterates of Algorithm 2 satisfy

δ2xN-x^τ~N-1TN-12+q~NGNC0+i=0N-1Di+1. 26

Here C0 is as in (18), and

Di+1:=τ~i-1ψΓi-λΓ(xi+1-x^)+τ~i+1-1ϕRi+1(yi+1-y^). 27

If only (G-pm) and (F-pm) hold instead of (G-pc) and (F-pc), and we take λ=1, then (26) holds with the modification GN:=0.

Remark 3.1

For convergence of the gap, we must accelerate less (factor 1 / 2 on Γi).

Example 3.2

(No acceleration) Consider Example 3.1, where ψ0 and ϕ0. If γ=0, we get ergodic convergence of the duality gap at rate O(1 / N). Indeed, we are in the scalar step setting, with τ~j=τ~j=τ0. Thus, presently q~N=Nτ0.

Example 3.3

(Full acceleration) With γ>0 in Example 3.1, we know from [3, Corollary 1] that

limNNτNγ=1. 28

Thus, q~N is of the order Ω(N2), while τ~NTN=τN2I is of the order O(1/N2). Therefore, (26) shows O(1/N2) convergence of the squared distance to solution. For O(1/N2) convergence of the ergodic duality gap, we need to slow down (4) to ωi=1/1+γτi.

Remark 3.2

The result (28) can be improved to estimate τNCτ/N without a qualifier NN0. Indeed, from [3, Lemma 2] we know the following for the rule ωi=1/1+2γτi: given λ>0 and N0 with γτNλ, for any 0 holds

1γτN+1+λ1γτN+1γτN+.

If we pick N=0 and λ=γτ0, this says

1γτ0+1+γτ01γτ1γτ0+.

The first inequality gives τ(1+γτ0)/(τ0-1+γ)(γ-1+τ0)/.

Therefore, τNCτ/N for Cτ:=γ-1+τ0. Moreover, the second inequality gives τN-1τ0-1+γN.

Proof

(Theorem 3.1) The non-gap estimate in the last paragraph of the theorem statement, where λ=1, we modify GN:=0, is a direct consequence of Theorem 2.1. We therefore concentrate on the estimate that includes the gap, and fix λ=1/2. We begin by expanding

H(ui+1),Si(ui+1-u^)=τ~i-1G(xi+1),xi+1-x^+τ~i+1-1F(yi+1),yi+1-y^+τ~i-1Kyi+1,xi+1-x^-τ~i+1-1Kxi+1,yi+1-y^

Since then Γi([0,Γ]+K~)/2, and Ri+1K^/2, we may take Γ=2Γi and R=2Ri+1 in (G-pc) and F-pc. It follows

H(ui+1),Si(ui+1-u^)τ~i-1(G(xi+1)-G(x^)+12xi+1-x^2Γi2-12ψ2Γi-Γ(xi+1-x^))+τ~i+1-1(F(yi+1)-F(y^)+12yi+1-y^2Ri+12-12ϕ2Ri+1(yi+1-y^))-τ~i-1yi+1,Kx^+τ~i+1-1y^,Kxi+1+(τ~i-1-τ~i+1-1)yi+1,Kxi+1.

Using (2) and (24), we can make all of the factors ‘2’ and ‘1/2’ in this expression annihilate each other. With Di+1 as in (27) and λ=1/2, we therefore have

H(ui+1),Si(ui+1-u^)τ~i-1G(xi+1)-G(x^)+y^,Kxi+1+xi+1-x^τ~i-1Γi2+τ~i+1-1F(yi+1)-F(y^)-yi+1,Kx^+yi+1-y^τ~i+1-1Ri+12+(τ~i-1-τ~i+1-1)yi+1-y^,K(xi+1-x^)-y^,Kx^-Di+1.

A little bit of reorganisation and referral to (13) for the expansion of SiΓ¯i thus gives

H(ui+1),Si(ui+1-u^)τ~i-1G(xi+1)-G(x^)+y^,Kxi+1+τ~i+1-1F(yi+1)-F(y^)-yi+1,Kx^-(τ~i-1-τ~i+1-1)y^,Kx^+12ui+1-u^SiΓ¯i2-Di+1. 29

Let us write

G+i(ui+1,u^):=(τ~i-1G(xi+1)+τ~i-1y^,Kxi+1-τ~i-1F(y^))-(τ~i+1-1G(x^)+τ~i+1-1yi+1,Kx^-τ~i+1-1F(yi+1)).

Observing here the switches between the indices i+1 and i of the step length parameters in comparison with the last step of (29), we thus obtain

H(ui+1),Si(ui+1-u^)G+i(ui+1,u^)-G+i(u^,u^)+12ui+1-u^SiΓ¯i2-Di+1. 30

We note that SiMi in (12) is self-adjoint as we have assumed Ti and Σi+1 to be, and taken T~i and T^i+1 to be scalars times the identity. We therefore deduce from the proof of Theorem 2.1 that (21) holds. Using (PP) to combine (21) and (30), we thus deduce

12ui+1-u^Si+1Mi+12+G+i(ui+1,u^)-G+i(u^,u^)12ui-u^SiMi2+Di+1.

Summing this for i=0,,N-1 gives with C0 from (27) the estimate

12uN-u^SNMN2+i=0N-1G+i(ui+1,u^)-G+i(u^,u^)C0+i=0N-1Di+1. 31

We want to estimate the sum of the gaps G+i in (31). Using the convexity of G and F, we observe

i=0N-1τ~i-1G(xi+1)q~NG(xN),andi=0N-1τ~i+1-1F(yi+1)q^NF(yN). 32

Also, by (25) and simple reorganisation

i=0N-1τ~i+1-1G(x^)=q~NG(x^)+τ~N-1G(x^)-τ~0-1G(x^),and 33
i=0N-1τ~i-1F(y^)=q^NF(yN)-τ~N-1F(y^)+τ~0-1F(y^). 34

All of (32)–(34) together give

i=0N-1G+i(ui+1,u^)(q~NG(xN)+q~Ny^,KxN-q^NF(y^))-(q~NG(x^)+q^NyN,Kx^-q^NF(yN))+τ~N-1G(x^)-τ~0-1G(x^)+τ~N-1FT^N-1,(x^)-τ~0-1F(y^).

Another use of (25) gives

i=0N-1G+i(u^,u^)=(q~N-q^N)y^,Kx^+τ~N-1G(x^)-τ~0-1G(x^)+τ~N-1F(x^)-τ~0-1F(y^).

Thus,

i=0N-1(G+i(ui+1,u^)-G+i(u^,u^))q~NGN+rN, 35

where the remainder

rN=(q~N-q^N)F(y^)-F(yN)-y^-yN,Kx^.

At a solution u^=(x^,y^) to (OC), we have Kx^F(y^), so rN0 provided q~Nq^N. But q~N-q^N=τ~0-1-τ~N-1, so this is guaranteed by our assumption (C3). Using (35) in (31) therefore gives

12uN-u^SNMN2+q~NGN+rNC0+i=0N-1Di+1. 36

A referral to (C2) to estimate SNMN from below shows (26), concluding the proof.

Convergence Rates in Special Cases

To derive a practical algorithm, we need to satisfy the update rules (C1) and (C2), as well as the partial monotonicity conditions (G-PM) and (F-PM). As we have already discussed in Sect. 3, this can be done when for some τ~i>0 we set

T~i=τ~iI,andT^i=τ~iI. 37

The result of these choices is Algorithm 2, whose convergence we studied in Theorem 3.1. Our task now is to verify its conditions, in particular (G-pc) and F-pc [alternatively (F-pm) and (G-pm)], as well as (C1), (C2), and (C3) for Γ of the projection form γP.

An Approach to Updating Σ

We have not yet defined an explicit update rule for Σi+1, merely requiring that it has to satisfy (C2) and (C1). The former in particular requires

Σi+1-1ω~i(1-δ)-1KTiK.

Hiring the help of some linear operator FL(L(Y;Y); L(Y;Y)) satisfying

F(KTiK)KTiK, 38

our approach is to define

Σi+1-1:=ω~i(1-δ)-1F(KTiK). 39

Then, (C2) is satisfied provided Ti-1Q. Since τ~i+1-1Σi+1-1=τ~i-1(1-δ)-1F(KTiK), the condition (C1) reduces into the satisfaction for each iN of

τ~i-1(I+2ΓTi)Ti-1-τ~i+1-1Ti+1-1-2τ~i-1(Γi-Γ),and 40a
11-δτ~i-1FKTiK-τ~i+1-1FKTi+1K-2τ~i+1-1Ri+1. 40b

To apply Theorem 3.1, all that remains is to verify in special cases the conditions (40) together with (C3) and the partial strong convexity conditions (G-pc) and F-pc.

When Γ is a Multiple of a Projection

We now take Γ=γ¯P for some γ¯>0, and a projection operator PL(X;X): idempotent, P2=P, and self-adjoint, P=P. We let P:=I-P. Then, PP=PP=0. With this, we assume that K~ is such that for some γ¯>0 holds

[0,γ¯P]K~. 41

To unify our analysis for gap and non-gap estimates of Theorem 3.1, we now pick λ=1/2 in the former case, and λ=1 in the latter. We then pick 0γλγ¯, and 0γiλγ¯, and set

Ti=τiP+τiP,Ωi=ωiP+ωiP,andΓi=γP+γiP. 42

With this, τi,τi>0 guarantee TiQ. Moreover, Ti is self-adjoint. Moreover, Γiλ([0,Γ]+K~), exactly as required in both the gap and the non-gap cases of Theorem 3.1.

Since

KTiK=τiKPK+τiKPK=(τi-τi)KPK+τiKK,

we are encouraged to take

F(KTiK):=max{0,τi-τi}KP2I+τiK2I. 43

Observe that (43) satisfies (38). Inserting (43) into (39), we obtain

Σi+1=σi+1Iwithσi+1-1=ω~i1-δmax{0,τi-τi}KP2+τiK2. 44

Since Σi+1 is now equivalent to a scalar, (40b), we also take Ri+1=ρi+1I, assuming for some ρ¯>0 that

[0,ρ¯I]K^.

Setting

ηi:=τ~i-1max{0,τi-τi}-τ~i+1-1max{0,τi+1-τi+1}

we thus expand (40) as

τ~i-1(1+2γτi)τi-1-τ~i+1τi+1-10, 45a
τ~i-1τi,-1-τ~i+1-1τi+1,-1-2τ~i-1γi, 45b
11-δηiKP2+(τ~i-1τi-τ~i+1-1τi+1)K2-2τ~i+1-1ρi+1. 45c

We are almost ready to state a general convergence result for projective Γ. However, we want to make one more thing more explicit. Since the choices (42) satisfy

Γi-λΓ=(γ-λγ¯)P+γiPγiPandRi+1=ρi+1I,

we suppose for simplicity that

ψΓi-λΓ(x)=γiψ(Px)andϕRi+1(y)=ρi+1ϕ(y) 46

for some ψ:PXR and ϕ:YR. The conditions (G-pc) and F-pc reduce in this case to the satisfaction for some γ¯,γ¯,ρ¯>0 ofgraphic file with name 10851_2016_692_Figl_HTML.jpgfor all x,xX and 0γγ¯, as well as of graphic file with name 10851_2016_692_Figm_HTML.jpg for all y,yY and 0ρρ¯. Analogues of (G-pm) and (F-pm) can be formed.

To summarise the findings of this section, we state the following proposition.

Proposition 4.1

Suppose (G-pcr) and (F-pcr) hold for some projection operator PL(X;X) and scalars γ¯,γ¯,ρ¯>0. With λ=1/2, pick γ[0,λγ¯]. For each iN, suppose (45) is satisfied with

0γiλγ¯,0ρiλρ¯,andτ~0τ~i>0. 47

If we solve (45a) exactly, define Ti, Γi, and Σi+1 through (42) and (44), and set Ri+1=ρi+1I, then the iterates of Algorithm 2 satisfy with C0 and Di+1 as in (27) the estimate

δ2P(xN-x^)2+1τ0-1+2γGNτ~NτNC0+i=0N-1Di+1. 48

If we take λ=1, then (48) holds with GN=0.

Observe that presently

Di+1=τ~i-1γiψ(P(xi+1-x^))+τ~i+1-1ρi+1ϕ(yi+1-y^). 49

Proof

As we have assumed through (47), or otherwise already verified its conditions, we may apply Theorem 3.1. Multiplying (26) by τ~NτN, we obtain

δ2xN-x^P2+q~Nτ~NτNGNτ~NτN(C0+i=0N-1Di+1). 50

Now, observe that solving (45a) exactly gives

τ~N-1τN-1=τ~N-1-1τN-1-1+2γτ~N-1-1=τ~0-1τ0-1+j=0N-12γτ~j-1=τ~0-1τ0-1+2γq~N. 51

Therefore, we have the estimate

q~Nτ~NτN=q~Nτ~0-1τ0-1+2γq~N=1τ~0-1τ0-1q~N-1+2γ1τ0-1+2γ. 52

With this, (50) yields (48).

Primal and Dual Penalties with Projective Γ

We now study conditions that guarantee the convergence of the sum τ~NτNi=0N-1Di+1 in (48). Indeed, the right-hand sides of (45b) and (45c) relate to Di+1. In most practical cases, which we study below, ϕ and ψ transfer these right-hand side penalties into simple linear factors within Di+1. Optimal rates are therefore obtained by solving (45b) and (45c) as equalities, with the right-hand sides proportional to each other. Since ηi0, and it will be the case that ηi=0 for large i, we, however, replace (45c) by the simpler condition

11-δ(τ~i-1τi-τ~i+1-1τi+1)K2-2τ~i+1-1ρi+1. 53

Then, we try to make the left-hand sides of (45b) and (53) proportional with only τi+1 as a free variable. That is, for some proportionality constant ζ>0, we solve

τ~i-1τi,-1-τ~i+1-1τi+1,-1=ζ(τ~i-1τi-τ~i+1-1τi+1). 54

Multiplying both sides of (54) by ζ-1τ~i+1τi+1, gives on τi+1 the quadratic condition

τi+1,2+ω~i(ζ-1τi,-1-τi)τi+1-ζ-1=0.

Thus,

τi+1=12ω~i(τi-ζ-1τi,-1)+ω~i2(τi-ζ-1τi,-1)2+4ζ-1. 55

Solving (45b) and (53) as equalities, (54) and (55) give

2τ~i-1γi=2ζ(1-δ)K2τ~i+1-1ρi+1=ζ(τ~i+1-1τi+1-τ~i-1τi). 56

Note that this quantity is non-negative exactly when ωiω~i. We have

ωiω~i=τi+1τiω~i=121-ζ-1τi,-2+(1-ζ-1τi,-2)2+4ζ-1ω~i-2τi,-2.

This quickly yields ωiω~i if ω~i1. In particular, (56) is non-negative when ω~i1.

The next lemma summarises these results for the standard choice of ω~i.

Lemma 4.1

Let τi+1 by given by (55), and set

ω~i=ωi=1/1+2γτi. 57

Then, ωiω~i, τ~iτ~0, and (45) is satisfied with the right-hand sides given by the non-negative quantity in (56). Moreover,

τiζ-1/2τi+1ζ-1/2. 58

Proof

The choice (57) satisfies (45a), so that (45) in its entirety will be satisfied with the right-hand sides of (45b)–(45c) given by (56). The bound τ~iτ~0 follows from ω~i1. Finally, the implication (58) is a simple estimation of (55).

Specialisation of Algorithm 2 to the choices in Lemma 4.1 yields the steps of Algorithm 3. Observe that τ~i entirely disappears from the algorithm. To obtain convergence rates, and to justify the initial conditions, we will shortly seek to exploit with specific ϕ and ψ the telescoping property stemming from the non-negativity of the last term of (56).graphic file with name 10851_2016_692_Fign_HTML.jpg

There is still, however, one matter to take care of. We need ρiλρ¯ and γiλγ¯, although in many cases of practical interest, the upper bounds are infinite and hence inconsequential. We calculate from (55) and (57) that

γi=ζ2(ω~i-1τi+1-τi)=12-ζτi-τi,-1+(ζτi-τi,-1)2+4ζω~i-2ζ(ω~i-2-1)=2ζγτi2ζγτ0. 60

Therefore, we need to choose ζ and τ0 to satisfy 2ζγτ0(λγ¯)2. Likewise, we calculate from (56), (57), and (60) that

ρi+1=ω~icγi=K2ω~i(1-δ)ζγiK2ω~i(1-δ)ζ2ζγτi=K2(1-δ)ζ2ζγτ0.

This tells us to choose τ0 and ζ to satisfy 2K4/(1-δ)2ζ-1γτ0(λρ¯)2. Overall, we obtain for τ0 and ζ the condition

0<τ0λ22γminγ¯,2ζ,ρ¯2ζ(1-δ)2K4. 61

This can always be satisfied through suitable choices of τ0 and ζ.

If now ϕCϕ and ψCψ, using the non-negativity of (56), we calculate

i=0N-1τ~i+1-1ρi+1ϕ(yi+1-y^)=K2Cϕ2(1-δ)i=0N-1τ~i+1-1τi+12-τ~i-1τi2K2Cϕ2(1-δ)τ~N-1τN. 62

Similarly

i=0N-1τ~i-1γiψ(xi+1-x^)ζCψ2τ~N-1τN. 63

Using these expression to expand (49), we obtain the following convergence result.

Theorem 4.1

Suppose (G-pcr) and (F-pcr) hold for some projection operator PL(X;X), scalars γ¯,γ¯,ρ¯>0 with ϕCϕ, and ψCψ, for some constants Cϕ,Cψ>0. With λ=1/2, fix γ(0,λγ¯]. Select initial τ0,τ0>0, as well as δ(0,1) and ζ(τ0)-2 satisfying (61). Then, Algorithm 3 satisfies for some C0,Cτ>0 the estimate

δ2P(xN-x^)2+1τ0-1+2γGNC0Cτ2N2+Cτ2Nζ1/2Cψ+ζ-1/2K21-δCϕ,(N0). 64

If we take λ=1, then (48) holds with GN=0.

Proof

During the course of the derivation of Algorithm 3, we have verified (45), solving (45a) as an equality. Moreover, Lemma 4.1 and (61) guarantee (47). We may therefore apply Proposition 4.1. Inserting (62) and (63) into (48) and (49) gives

δ2P(xN-x^)2+1τ0-1+2γGNτNτ~N×(C0+ζCψ2τ~N-1τN+K2Cϕ2(1-δ)τ~N-1τN). 65

The condition ζ(τ0)-2 now guarantees τNζ-1/2 through (58). Now we note that τ~i is not used in Algorithm 3, so it only affects the convergence rate estimates. We therefore simply take τ~0=τ0, so that τ~N=τN for all NN. With this and the bound τNCτ/N from Remark 3.2, (64) follows by simple estimation of (65).

Remark 4.1

As a special case of Algorithm 3, if we choose ζ=τ0,-2, then we can show from (55) that τi=τ0=ζ-1/2 for all iN.

Remark 4.2

The convergence rate provided by Theorem 4.1 is a mixed O(1/N2)+O(1/N) rate, similarly to that derived in [5] for a type of forward–backward splitting algorithm for smooth G. Ours is of course backward–backward type algorithm. It is interesting to note that using the differentiability properties of infimal convolutions [23, Proposition 18.7], and the presentation of a smooth G as an infimal convolution, it is formally possible to derive a forward–backward algorithm from Algorithm 3. The difficulties lie in combining this conversion trick with conditions on the step lengths.

Dual Penalty Only with Projective Γ

Continuing with the projective Γ setup of Sect. 4.2, we now study the case K~={0}, that is, when only the dual penalty ϕ is available with ψ0. To use Proposition 4.1, we need to satisfy (47) and (45), with (45a) holding as an equality. Since γi=0, (45b) becomes

τ~i-1τi,-1-τ~i+1-1τi+1,-10. 66

With respect to τi+1, the left-hand side of (45c) is maximised (and the penalty on the right-hand side minimised) when (66) is minimised. Thus, we solve (66) exactly, which gives

τi+1=τiω~i-1.

In consequence ωi=ω~i-1, and (45c) becomes

11-δηiKP2+τ~i-21-δ(1-ω~i-2)K2-2τ~i+1-1ρi+1. 67

In order to simultaneously satisfy (45a), this suggests for some, yet undetermined, ai>0, to choose

ω~i:=11+aiτ~i2andωi:=1ω~i(1+2γτi). 68

Since ηi0, (67) is satisfied with the choice (68) if we take

ρi+1=τ~i+1aiK22(1-δ).

To use Proposition 4.1, we need to satisfy ρi+1λρ¯. Since (68) implies that {τ~i}i=0 is non-increasing, we can satisfy this for large enough i if ai0. To ensure satisfaction for all iN, it suffices to take {ai}i=0 non-increasing, and satisfy the initial condition

a0τ~0K22(1-δ)λρ¯. 69

The rule τ~i+1=ω~iτ~i and (68) give τ~i+1-2=τ~i-2+ai. We therefore see that

τ~N-1τN-1=τ~0-1τ0-1+2γi=0N-1τ~0-2+j=0i-1aj2γi=0N-1τ~0-2+j=0i-1aj=:1/μ0N.

Assuming ϕ to have the structure (46), moreover,

i=0N-1Di+1=i=0N-1ϕτ~i+1-1Ri+1(yi+1-y^)=K22(1-δ)i=0N-1aiϕ(yi+1-y^).

Thus, the rate (48) in Proposition 4.1 states

δ2P(xN-x^)2+1τ0-1+2γGNμ0NC0+K22(1-δ)μ1N 70

for

μ1N:=μ0Ni=0N-1aiϕ(yi+1-y^).

The convergence rate is thus completely determined by μ0N and μ1N.

Remark 4.3

If ϕ0, that is, if F is strongly convex, we may simply pick ω~i=ωi=1/1+2γτi, that is ai=2γ, and obtain from (70) a O(1/N2) convergence rate.

For a more generally applicable algorithm, suppose ϕ(yi+1-y^)Cϕ as in Theorem 4.1. We need to choose ai. One possibility is to pick some q(0,1] and

ai:=τ~0-2((i+1)q-iq). 71

The concavity of iqi for q(0,1] easily shows that {ai}i=0 is non-increasing. With the choice (71), we then compute

i=0N-1τ~0-2+j=0i-1aj=τ~0-1i=0N-1iq/2τ~0-10N-1xq/2dx=τ~0-11+q/2(N-1)1+q/2,

and

i=0N-1aiτ~0-2Nq.

If N2, we find with Ca=(1+q/2)/(21+q/2λγ) that

μ0Nτ~0CaN1+q/2,andμ1NCaCϕτ~0N1-q/2. 72

The choice q=0 gives uniform O(1 / N) over both the initialisation and the dual sequence. By choosing q=1, we get O(1/N3/2) convergence with respect to the initialisation, and O(1/N1/2) with respect to the residual sequence.

With these choices, Algorithm 2 yields Algorithm 4, whose convergence properties are stated in the next theorem.graphic file with name 10851_2016_692_Figo_HTML.jpg

Theorem 4.2

Suppose (G-pcr) and (F-pcr) hold for some projection operator PL(X;X) and γ¯,γ¯,ρ¯0 with ψ0 and ϕCϕ for some constant Cϕ0. With λ=1/2, choose γ(0,λγ¯], and pick the sequence {ai}i=0 by (71) for some q(0,1]. Select initial τ0,τ0,τ~0>0 and δ(0,1) verifying (69). Then, Algorithm 4 satisfies

δ2P(xN-x^)2+1τ0-1+γGNτ~0CaC0N1+q/2+CaCϕK22(1-δ)τ~02N1-q/2,(N2). 74

If we take λ=1, then (74) holds with GN=0.

Proof

We apply Proposition 4.1 whose assumptions we have verified during the course of the present section. In particular, τ~iτ~0 through the choice (68) that forces ω~i1. Also, have already derived the rate (70) from (48). Inserting (72) into (70), noting that the former is only valid for N2, immediately gives (74).

Examples from Image Processing and the Data Sciences

We now consider several applications of our algorithms. We generally have to consider discretisations, since many interesting infinite-dimensional problems necessitate Banach spaces. Using Bregman distances, it would be possible to generalise our work form Hilbert spaces to Banach spaces, as was done in [24] for the original method of [3]. This is, however, outside the scope of the present work.

Regularised Least Squares

A large range of interesting application problems can be written in the Tikhonov regularisation or empirical loss minimisation form

minxXG0(f-Ax)+αF(Kx). 75

Here α>0 is a regularisation parameter, G0:ZR typically convex and smooth fidelity term with data fZ. The forward operator AL(X;Z)—which can often also be data—maps our unknown to the space of data. The operator KL(X;Y) and the typically non-smooth and convex F:YR¯ act as a regulariser.

We are particularly interested in strongly convex G0 and A with a non-trivial null-space. Examples include, for example, Lasso—a type of regularised regression—with G0=x22/2, K=I, and F(x)=x1, on finite-dimensional spaces. If the data of the Lasso is ‘sparse’, in the sense that A has a non-trivial null-space, then, based on accelerating the strongly convex part of the variable, our algorithm can provide improved convergence rates compared to standard non-accelerated methods.

In image processing examples abound, we refer to [25] for an overview. In total variation (TV) regularisation, we still take F(x)=x1, but is K= the gradient operator. Strictly speaking, this has to be formulated in the Banach space BV(Ω), but we will consider the discretised setting to avoid this problem. For denoising of Gaussian noise with TV regularisation, we take A=I, and again G0=x22/2. This problem is not so interesting to us, as it is fully strongly convex. In a simple form of TV inpainting—filling in missing regions of an image—we take A as a subsampling operator S mapping an image xL2(Ω) to one in L2(Ω\Ωd), for ΩdΩ the defect region that we want to recreate. Observe that in this case, Γ=SS is directly a projection operator. This is therefore a problem for our algorithms! Related problems include reconstruction from subsampled magnetic resonance imaging (MRI) data (see, for example, [11, 26]), where we take A=SF for F the Fourier transform. Still, AA is a projection operator, so the problem perfectly suits our algorithms.

Another related problem is total variation deblurring, where A is a convolution kernel. This problem is slightly more complicated to handle, as AA is not a projection operator. Assuming periodic boundary conditions on a box Ω=i=1m[ci,di], we can write A=Fa^F, multiplying the Fourier transform by some a^L2(Ω). If |a^|γ on a subdomain, we obtain a projection form Γ (it would also be possible to extend our theory to non-constant γ, but we have decided not to extend the length of the paper by doing so. Dualisation likewise provides a further alternative).

Satisfaction of convexity conditions

In all of the above examples, when written in the saddle point form (P), F is a simple pointwise ball constraint. Lemma 2.1 thus guarantees (F-pcr). If F(x)=x1 and K=I, then clearly Px^ can be bounded in Z=L1 for x^ the optimal solution to (75). Thus, for some M>0, we can add to (75) the artificial constraint

G(x):=ι·ZM(Px). 76

In finite dimensions, this gives a bound in L2. Lemma 2.1 gives (G-pcr) with γ¯=.

In case of our total variation examples, F(x)=x1 and K=. Provided mean-zero functions are not in the kernel of A, one can through Poincar’s inequality [27] on BV(Ω) and a two-dimensional connected domain ΩR2 show that even the original infinite-dimensional problems have bounded solutions in L2(Ω). We may therefore again add the artificial constraint (76) with Z=L2 to (75).

Dynamic bounds and pseudo-duality gaps

We seldom know the exact bound M, but can derive conservative estimates. Nevertheless, adding such a bound to Algorithm 4 is a simple, easily implemented projection of P(xi-TiKyi) into the constraint set. In practise, we do not use or need the projection, and update the bound M dynamically so as to ensure that the constraint (76) is never active. Indeed, A having a non-trivial nullspace also causes duality gaps for (P) to be numerically infinite. In [28], a ‘pseudo-duality gap’ was therefore introduced, based on dynamically updating M. We will also use this type of dynamic duality gaps in our reporting.

TGV2 Regularised Problems

So far, we have considered very simple regularisation terms. Total generalised variation, TGV, was introduced in [29] as a higher-order generalisation of TV. It avoids the unfortunate stair-casing effect of TV—large flat areas with sharp transitions—while preserving the critical edge preservation property that smooth regularisers lack. We concentrate on the second-order TGV2. In all of our image processing examples, we can replace TV by TGV2.

As with total variation, we have to consider discretised models due the original problem being set in the Banach space BV(Ω). For two parameters α,β>0, the regularisation functional is written in the differentiation cascade form of [30] as

TGV(β,α)2(u):=minwαu-w1+βEu1.

Here E=(T+)/2 is the symmetrised gradient. With x=(v,w) and y=(y1,y2), we may write the problem

minvG0(f-Av)+TGV(β,α)2(v), 77

in the saddle point form (P) with

G(x):=G0(f-Av),F(y)=ι·Lα(y1)+ι·Lβ(y2),andK:=-I0E.

If A=I, as is the case for denoising, we have

Γ=γPforP=I000,

perfectly uncoupling in both Algorithm 3 and Algorithm 4 the prox updates for G into ones for G1 and G2. The condition (F-pcr) with ρ¯= is then immediate from Lemma 2.1. Moreover, the Sobolev–Korn inequality [31] allows us to bound on a connected domain ΩR2 an optimal w^ to (77) as

infw¯affinew^-w¯L2CΩEw^1CΩG0(f)

for some constant CΩ>0. We may assume that w¯=0, as the affine part of w is not used in (77). Therefore we may again replace G2=0 by the artificial constraint G2(w)=ι·L2M(w). By Lemma 2.1, G will then satisfy (G-pcr) with γ¯=.

Numerical Results

We demonstrate our algorithms on TGV2 denoising and TV deblurring. Our tests are done on the photographs in Fig. 1, both at the original resolution of 768×512, and scaled down by a factor of 0.25 to 192×128 pixels. It is image #23 from the free Kodak image suite. Other images from the collection that we have experimented on give analogous computational results. For both of our example problems, we calculate a target solution by taking one million iterations of the basic PDHGM (3). We also tried interior point methods for this, but they are only practical for the smaller denoising problem.

Fig. 1.

Fig. 1

We use sample image (b) for denoising, and (c) for deblurring experiments. Free Kodak image suite photo, at the time of writing online at http://r0k.us/graphics/kodak/. a True image. b Noise image. c Blurry image

We evaluate Algorithms 3 and 4 against the standard unaccelerated PDHGM of [3], as well as (a) the mixed-rate method of [5], denoted here C-L-O, (b) the relaxed PDHGM of [20, 32], denoted here ‘Relax’, and (c) the adaptive PDHGM of [33], denoted here ‘Adapt’. All of these methods are very closely linked and have comparable low costs for each step. This makes them straightforward to compare.

As we have discussed, for comparison and stopping purposes, we need to calculate a pseudo-duality gap as in [28], because the real duality gap is in practise infinite when A has a non-trivial nullspace. We do this dynamically; upgrading, the M in (76) every time, we compute the duality gap. For both of our example problems, we use for simplicity Z=L2 in (76). In the calculation of the final duality gaps comparing each algorithm, we then take as M the maximum over all evaluations of all the algorithms. This makes the results fully comparable. We always report the duality gap in decibels 10log10(gap2/gap02) relative to the initial iterate. Similarly, we report the distance to the target solution u^ in decibels 10log10(ui-u^2/u^2), and the primal objective value val(x):=G(x)+F(Kx) relative to the target as 10log10(val(x)2/val(x^)2). Our computations were performed in MATLAB+C-MEX on a MacBook Pro with 16GB RAM and a 2.8 GHz Intel Core i5 CPU.

TGV2 denoising The noise in our high-resolution test image, with values in the range [0, 255], has standard deviation 29.6 or 12 dB. In the downscaled image, these become, respectively, 6.15 or 25.7 dB. As parameters (β,α) of the TGV2 regularisation functional, we choose (4.4, 4) for the downscale image, and translate this to the original image by multiplying by the scaling vector (0.25-2,0.25-1) corresponding to the 0.25 downscaling factor. See [34] for a discussion about rescaling and regularisation factors, as well as for a justification of the β/α ratio.

For the PDHGM and our algorithms, we take γ=0.5, corresponding to the gap convergence results. We choose δ=0.01, and parametrise the PDHGM with σ0=1.9/K and τ0=τ00.52/K solved from τ0σ0=(1-δ)K2. These are values that typically work well. For forward-differences discretisation of TGV2 with cell width h=1, we have K211.4 [28]. We use the same value of δ for Algorithm 3 and Algorithm 4, but choose τ0=3τ0, and τ0=τ~0=80τ0. We also take ζ=τ0,-2 for Algorithm 3. These values have been found to work well by trial and error, while keeping δ comparable to the PDHGM. A similar choice of τ0 with a corresponding modification of σ0 would significantly reduce the performance of the PDHGM. For Algorithm 4, we take exponent q=0.1 for the sequence {ai}. This gives in principle a mixed O(1/N1.5)+O(1/N0.5) rate, possibly improved by the convergence of the dual sequence. We plot the evolution of the step length for these and some other choices in Fig. 2. For the C-L-O, we use the detailed parametrisation from [35, Corollary 2.4], taking as ΩY the true L2-norm Bregman divergence of B(0,α)×B(0,β), and ΩX=10·f2/2 as a conservative estimate of a ball containing the true solution. For ‘Adapt’, we use the exact choices of α0, η, and c from [33]. For ‘Relax’, we use the value 1.5 for the inertial ρ parameter of [32]. For both of these algorithms, we use the same choices of σ0 and τ0 as for the PDHGM.

Fig. 2.

Fig. 2

Step length parameter evolution, both axes logarithmic. ‘Alg.3’ and ‘Alg.4 q=1’ have the same parameters as our numerical experiments for the respective algorithms, in particular ζ=τ0,-2 for Algorithm 3, which yields constant τ. ‘Alg.3 ζ/100’ uses the value ζ=τ0,-2/100, which causes τ to increase for some iterations. ‘Alg.4 q=0.1’ uses the value q=0.1 for Algorithm 4, everything else being kept equal

We take fixed 20,000 iterations and initialise each algorithm with y0=0 and x0=0. To reduce computational overheads, we compute the duality gap and distance to target only every 10 iterations instead of at each iteration. The results are in Fig. 3 and Table 1. As we can see, Algorithm 3 performs extremely well for the low-resolution image, especially in its initial iterations. After about 700 or 200 iterations, depending on the criterion, the standard and relaxed PDHGM start to overtake. This is a general effect that we have seen in our tests: the standard PDHGM performs in practise very well asymptotically, although in principle all that exists is a O(1 / N) rate on the ergodic duality gap. Algorithm 4, by contrast, does not perform asymptotically so well. It can be extremely fast on its initial iterations, but then quickly flattens out. The C-L-O surprisingly performs better on the high-resolution image than on the low-resolution image, where it does somewhat poorly in comparison with the other algorithms. The adaptive PDHGM performs very poorly for TGV2 denoising, and we have indeed excluded the high-resolution results from our reports to keep the scaling of the plots informative. Overall, Algorithm 3 gives good results fast, although the basic and relaxed PDHGM seems to perform, in practise, better asymptotically.

Fig. 3.

Fig. 3

TGV2 denoising performance, 20,000 iterations, high- and low-resolution images. The plot is logarithmic, with the decibels calculated as in Sect. 5.3. The poor high-resolution results for ‘Adapt’ [33] have been omitted to avoid poor scaling of the plots. a Gap, low resolution, b target, low resolution, c value, low resolution, d gap, high resolution, e target, high resolution, f value, high resolution

Table 1.

TGV2 denoising performance, maximum 20,000 iterations

Low resolution
Method Gap -50 dB Tgt -40 dB Val 1 dB
Iter Time (s) Iter Time (s) Iter Time (s)
PDHGM 30 0.40 40 0.46 30 0.40
C-L-O 500 4.67 1210 11.31 970 9.04
Alg.3 20 0.29 10 0.22 20 0.29
Alg.4 20 0.47 20 0.47 20 0.47
Relax 20 0.34 30 0.45 20 0.34
Adapt 5360 106.63 2040 41.38 3530 70.78
High resolution
Method Gap -40 dB Tgt -30 dB Val 1 dB
Iter Time (s) Iter Time (s) Iter Time (s)
PDHGM 50 8.85 30 5.13 30 5.13
C-L-O 80 15.76 30 5.97 80 15.76
Alg.3 40 6.20 20 3.10 40 6.20
Alg.4 60 9.18 30 4.53 60 9.18
Relax 40 7.45 20 3.70 20 3.70
Adapt

The CPU time and number of iterations (at a resolution of 10) needed to reach given solution quality in terms of the duality gap, distance to target, or primal objective value

TV deblurring Our test image has now been distorted by Gaussian blur of standard deviation 4, which we intent to remove. We denote by a^ the Fourier presentation of the blur operator as discussed in Sect. 5.1. For numerical stability of the pseudo-duality gap, we zero out small entries, replacing this a^ by a^χ|a^(·)|a^/1000(ξ). Note that this is only needed for the stable computation of G for the pseudo-duality gap, to compare the algorithms; the algorithms themselves are stable without this modification. To construct the projection operator P, we then set p^(ξ)=χ|a^(·)|0.3a^(ξ), and P=Fp^F.

We use TV parameter 2.55 for the high-resolution image and the scaled parameter 2.550.15 for the low-resolution image. We parametrise all the algorithms almost exactly as TGV2 denoising above, of course with appropriate ΩU and K28 corresponding to K= [36]. The only difference in parameterisation is that we take q=1 instead of q=0.1 for Algorithm 4.

The results are in Fig. 4 and Table 2. It does not appear numerically feasible to go significantly below -100 or -80 dB gap. Our guess is that this is due to the numerical inaccuracies of the fast Fourier transform implementation in MATLAB. The C-L-O performs very well judged by the duality gap, although the images themselves and the primal objective value appear to take a little bit longer to converge. The relaxed PDHGM is again slightly improved from the standard PDHGM. The adaptive PDHGM performs very well, slightly outperforming Algorithm 3, although not Algorithm 4. This time Algorithm 4 performs remarkably well.

Fig. 4.

Fig. 4

TV deblurring performance, 10,000 iterations, high- and low-resolution images. The plot is logarithmic, with the decibels calculated as in Sect. 5.3. a Gap, low resolution. b Target, low resolution. c Value, low resolution. d Gap, high resolution. e Target, high resolution. f Value, high resolution

Table 2.

TV deblurring performance, maximum 10,000 iterations

Method Low resolution High resolution
Gap -60 dB Tgt -40 dB Val 1 dB Gap -60 dB Tgt -30 dB Val 1 dB
Iter Time (s) Iter Time (s) Iter Time (s) Iter Time (s) Iter Time (s) Iter Time (s)
PDHGM 390 2.53 2630 17.41 60 0.47 1180 118.30 970 98.98 70 6.59
C-L-O 600 3.81 8930 54.20 950 5.95 500 48.44 1940 187.42 1000 96.60
Alg.3 130 1.14 880 7.22 20 0.25 400 58.42 320 46.16 40 6.13
Alg.4 30 0.47 90 0.97 10 0.29 60 7.97 50 6.66 30 3.98
Relax 260 1.62 1750 11.34 40 0.29 790 77.31 650 63.84 50 5.29
Adapt 110 1.12 660 5.94 10 0.16 260 39.39 150 23.30 30 4.72

The CPU time and number of iterations (at a resolution of 10) needed to reach given solution quality in terms of the duality gap, distance to target, or primal objective value

Conclusion

To conclude, overall, our algorithms are very competitive within the class of proposed variants of the PDHGM. Within our analysis, we have, moreover, proposed very streamlined derivations of convergence rates for even the standard PDHGM, based on the proximal point formulation and the idea of testing. Interesting continuations of this study include whether the condition T^iK=KT~i can reasonably be relaxed such that T^i and T~i would not have to be scalars, as well as the relation to block coordinate descent methods, in particular [14, 37].

Acknowledgements

This research was started while T. Valkonen was at the Center for Mathematical Modeling at Escuela Politécnica Nacional in Quito, supported by a Prometeo scholarship of the Senescyt (Ecuadorian Ministry of Science, Technology, Education, and Innovation). In Cambridge, T. Valkonen has been supported by the EPSRC Grant EP/M00483X/1 “Efficient computational tools for inverse imaging problems”. Thomas Pock is supported by the European Research Council under the Horizon 2020 programme, ERC starting Grant Agreement 640156.

Biographies

Tuomo Valkonen

received his Ph.D. from the University of Jyväskylä in 2008. Since then he has worked as researcher in Graz, Cambridge, and Quito. In February 2016 he started as a lecturer at the University of Liverpool.graphic file with name 10851_2016_692_Figa_HTML.jpg

Thomas Pock

born 1978 in Graz, received his M.Sc. (1998–2004) and his Ph.D. (2005–2008) in Computer Engineering (Telematik) from Graz University of Technology. After a Post-doc position at the University of Bonn, he moved back to Graz University of Technology where he has been an Assistant Professor at the Institute for Computer Graphics and Vision. In 2013 he received the START price of the Austrian Science Fund (FWF) and the German Pattern recognition award of the German association for pattern recognition (DAGM) and in 2014, he received an starting grant from the European Research Council (ERC). Since June 2014, he is a Professor of Computer Science at Graz University of Technology (AIT Stiftungsprofessur “Mobile Computer Vision”) and a principal scientist at the Digital Department of Safety and Security at the Austrian Institute of Technology (AIT). The focus of his research is the development of mathematical models for computer vision and image processing as well as the development of efficient algorithms to solve these models.graphic file with name 10851_2016_692_Figb_HTML.jpg

Compliance with ethical standards

A Data Statement for the EPSRC

This is primarily a theory paper, with some demonstrations on a photograph freely available from the Internet. At the time of writing, the photograph used, from the Kodak image suite, was available at http://r0k.us/graphics/kodak/. It has also been archived, together with our implementations of the algorithms, at https://www.repository.cam.ac.uk/handle/1810/253697.

Contributor Information

Tuomo Valkonen, Email: tuomo.valkonen@iki.fi.

Thomas Pock, Email: pock@icg.tugraz.at.

References

1. Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1972)
2. Ekeland, I., Temam, R.: Convex Analysis and Variational Problems. SIAM (1999)
3. Chambolle, A., Pock, T.: A first-order primal–dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40, 120–145 (2011). doi:10.1007/s10851-010-0251-1
4. Esser, E., Zhang, X., Chan, T.F.: A general framework for a class of first order primal–dual algorithms for convex optimization in imaging science. SIAM J. Imaging Sci. 3(4), 1015–1046 (2010). doi:10.1137/09076934X
5. Chen, Y., Lan, G., Ouyang, Y.: Optimal primal–dual methods for a class of saddle point problems. SIAM J. Optim. 24(4), 1779–1814 (2014). doi:10.1137/130919362
6. Nesterov, Y.: Smooth minimization of non-smooth functions. Math. Program. 103(1), 127–152 (2005). doi:10.1007/s10107-004-0552-5
7. Beck, A., Teboulle, M.: Smoothing and first order methods: a unified framework. SIAM J. Optim. 22(2), 557–580 (2012). doi:10.1137/100818327
8. O’Donoghue, B., Candès, E.: Adaptive restart for accelerated gradient schemes. Found. Comput. Math. 15(3), 715–732 (2015). doi:10.1007/s10208-013-9150-3
9. Beck, A., Teboulle, M.: A fast dual proximal gradient algorithm for convex minimization and applications. Oper. Res. Lett. 42(1), 1–6 (2014). doi:10.1016/j.orl.2013.10.007
10. Valkonen, T.: A primal–dual hybrid gradient method for non-linear operators with applications to MRI. Inverse Probl. 30(5), 055012 (2014). doi:10.1088/0266-5611/30/5/055012
11. Benning, M., Knoll, F., Schönlieb, C.B., Valkonen, T.: Preconditioned ADMM with nonlinear operator constraint (2015). arXiv:1511.00425
12. Möllenhoff, T., Strekalovskiy, E., Moeller, M., Cremers, D.: The primal–dual hybrid gradient method for semiconvex splittings. SIAM J. Imaging Sci. 8(2), 827–857 (2015). doi:10.1137/140976601
13. Lorenz, D., Pock, T.: An inertial forward-backward algorithm for monotone inclusions. J. Math. Imaging Vis. 51(2), 311–325 (2015). doi:10.1007/s10851-014-0523-2
14. Fercoq, O., Bianchi, P.: A coordinate descent primal–dual algorithm with large step size and possibly non separable functions (2015). arXiv:1508.04625
15. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009). doi:10.1137/080716542
16. Beck, A., Teboulle, M.: Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Trans. Image Process. 18(11), 2419–2434 (2009). doi:10.1109/TIP.2009.2028250
17. Setzer, S.: Operator splittings, Bregman methods and frame shrinkage in image processing. Int. J. Comput. Vis. 92(3), 265–280 (2011). doi:10.1007/s11263-010-0357-3
18. Valkonen, T.: Optimising big images. In: Emrouznejad, A. (ed.) Big Data Optimization: Recent Developments and Challenges, Studies in Big Data, pp. 97–131. Springer, Berlin (2016). doi:10.1007/978-3-319-30265-2_5
19. Rockafellar, R.T., Wets, R.J.B.: Variational Analysis. Springer, Berlin (1998). doi:10.1007/978-3-642-02431-3
20. He, B., Yuan, X.: Convergence analysis of primal–dual algorithms for a saddle-point problem: from contraction perspective. SIAM J. Imaging Sci. 5(1), 119–149 (2012). doi:10.1137/100814494
21. Pock, T., Chambolle, A.: Diagonal preconditioning for first order primal–dual algorithms in convex optimization. In: Computer Vision (ICCV), 2011 IEEE International Conference on, pp. 1762–1769 (2011). doi:10.1109/ICCV.2011.6126441
22. Rudin, W.: Functional Analysis. International Series in Pure and Applied Mathematics. McGraw-Hill, New York (2006)
23. Bauschke, H., Combettes, P.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. CMS Books in Mathematics. Springer, Berlin (2011)
24. Hohage, T., Homann, C.: A generalization of the Chambolle–Pock algorithm to Banach spaces with applications to inverse problems (2014). arXiv:1412.0126
25. Chan, T., Shen, J.: Image Processing and Analysis: Variational, PDE, Wavelet, and Stochastic Methods. Society for Industrial and Applied Mathematics (SIAM) (2005)
26. Benning, M., Gladden, L., Holland, D., Schönlieb, C.B., Valkonen, T.: Phase reconstruction from velocity-encoded MRI measurements—a survey of sparsity-promoting variational approaches. J. Magn. Reson. 238, 26–43 (2014). doi:10.1016/j.jmr.2013.10.003
27. Ambrosio, L., Fusco, N., Pallara, D.: Functions of Bounded Variation and Free Discontinuity Problems. Oxford University Press, Oxford (2000)
28. Valkonen, T., Bredies, K., Knoll, F.: Total generalised variation in diffusion tensor imaging. SIAM J. Imaging Sci. 6(1), 487–525 (2013). doi:10.1137/120867172
29. Bredies, K., Kunisch, K., Pock, T.: Total generalized variation. SIAM J. Imaging Sci. 3, 492–526 (2011). doi:10.1137/090769521
30. Bredies, K., Valkonen, T.: Inverse problems with second-order total generalized variation constraints. In: Proceedings of the 9th International Conference on Sampling Theory and Applications (SampTA) 2011, Singapore (2011)
31. Temam, R.: Mathematical Problems in Plasticity. Gauthier-Villars (1985)
32. Chambolle, A., Pock, T.: On the ergodic convergence rates of a first-order primal–dual algorithm. Math. Program. (2015)
33. Goldstein, T., Li, M., Yuan, X.: Adaptive primal–dual splitting methods for statistical learning and image processing. Adv. Neural Inf. Process. Syst. 28, 2080–2088 (2015)
34. de Los Reyes, J.C., Schönlieb, C.B., Valkonen, T.: Bilevel parameter learning for higher-order total variation regularisation models. J. Math. Imaging Vis. (2016). doi:10.1007/s10851-016-0662-8 (published online)
35. Chen, K., Lorenz, D.A.: Image sequence interpolation using optimal control. J. Math. Imaging Vis. 41, 222–238 (2011). doi:10.1007/s10851-011-0274-2
36. Chambolle, A.: An algorithm for mean curvature motion. Interfaces Free Bound. 6(2), 195 (2004). doi:10.4171/IFB/97
37. Suzuki, T.: Stochastic dual coordinate ascent with alternating direction multiplier method (2013). arXiv:1311.0622v1
