Abstract
We propose several variants of the primal–dual method due to Chambolle and Pock. Without requiring full strong convexity of the objective functions, our methods are accelerated on subspaces with strong convexity. This yields mixed rates: O(1 / N²) with respect to initialisation and O(1 / N) with respect to the dual sequence and the residual part of the primal sequence. We demonstrate the efficacy of the proposed methods on image processing problems lacking strong convexity, such as total generalised variation denoising and total variation deblurring.
Keywords: Primal–dual, Accelerated, Subspace, Total generalised variation
Introduction
Let G and F be convex, proper, and lower semicontinuous functionals on Hilbert spaces X and Y, possibly infinite-dimensional. Also let K : X → Y be a bounded linear operator. We then wish to solve the problem
min_x G(x) + F(Kx).
This can under mild conditions on F (see, for example, [1, 2]) also be written with the help of the convex conjugate F* in the minimax form
min_x max_y G(x) + ⟨Kx, y⟩ − F*(y).
One possibility for the numerical solution of the latter form is the primal–dual algorithm of Chambolle and Pock [3], a type of proximal point or extragradient method, also classified as the ‘modified primal–dual hybrid gradient method’ or PDHGM by Esser et al. [4]. If either G or F* is strongly convex, the method can be accelerated to O(1 / N²) convergence rates of the iterates and an ergodic duality gap [3]. But what if we have only partial strong convexity? For example, what if
for a projection operator P to a subspace , and strongly convex ? This kind of structure is common in many applications in image processing and data science, as we will more closely review in Sect. 5. Under such partial strong convexity, can we obtain a method that would give an accelerated rate of convergence at least for Px?
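To fix ideas, here is a concrete finite-dimensional instance of such partial strong convexity; this is an illustration only, and the symbols A, b, and γ below are not notation from later sections.

```latex
% Illustration: G(x) = (1/2)||Ax - b||^2 with A having a non-trivial null-space.
% Writing P for the orthogonal projection onto ran(A^*) = (ker A)^\perp, we have
\[
  G(x) = \tfrac12 \|A P x - b\|^2,
  \qquad
  \langle \nabla G(x) - \nabla G(x'),\, x - x' \rangle
    = \|A(x - x')\|^2 \;\ge\; \gamma\, \|P(x - x')\|^2,
\]
% where gamma > 0 is the smallest non-zero eigenvalue of A^*A: strong convexity
% holds on the subspace ran(A^*), but not on all of X.
```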
We provide a partially positive answer: we can obtain mixed rates, O(1 / N²) with respect to initialisation, and O(1 / N) with respect to bounds on the ‘residual variables’ y and the residual part of x. In this respect, our results are similar to the ‘optimal’ algorithm of Chen et al. [5]. Instead of strong convexity, they assume smoothness of G to derive a primal–dual algorithm based on backward–forward steps, instead of the backward–backward steps of [3].
The derivation of our algorithms is based, firstly, on replacing simple step length parameters by a variety of abstract step length operators and, secondly, a type of abstract partial strong monotonicity property
| 1 |
the full details of which we provide in Sect. 2. Here is an auxiliary step length operator. Our factor of strong convexity is a positive semidefinite operator ; however, to make our algorithms work, we need to introduce additional artificial strong convexity through another operator , which may not satisfy . This introduces the penalty term in (1). The exact procedure can be seen as a type of smoothing, famously studied by Nesterov [6], and more recently, for instance, by Beck and Teboulle [7]. In these approaches, one computes a priori a level of smoothing—comparable to —needed to achieve prescribed solution quality. One then solves a smoothed problem, which can be done at an O(1 / N²) rate. However, to obtain a solution with higher quality than the a priori prescribed one, one needs to solve a new problem from scratch, as the smoothing alters the problem being solved. One can also employ restarting strategies to take some advantage of the previous solution; see, for example, [8]. Our approach does not depend on restarting and a priori chosen solution qualities: the method will converge to an optimal solution to the original non-smooth problem. Indeed, the introduced additional strong convexity is controlled automatically.
The ‘fast dual proximal gradient method’, or FDPG [9], also possesses a different type of mixed rates: O(1 / N) for the primal and O(1 / N²) for the dual. This is, however, under standard strong convexity assumptions. Other than that, our work is related to various further developments of the PDHGM, such as variants for nonlinear K [10, 11] and non-convex G [12]. The PDHGM has been the basis for inertial methods for monotone inclusions [13] and primal–dual stochastic coordinate descent methods without separability requirements [14]. Finally, FISTA [15, 16] can be seen as a primal-only relative of the PDHGM. Not attempting to do full justice here to the large family of closely related methods, we point to [4, 17, 18] for further references.
The contributions of our paper are twofold: firstly, to paint a bigger picture of what is possible, we derive a very general version of the PDHGM. This algorithm, useful as a basis for deriving other new algorithms besides ours, is the content of Sect. 2. In this section, we provide an abstract bound on the iterates of the algorithm, later used to derive convergence rates. In Sect. 3, we extend the bound to include an ergodic duality gap under stricter conditions on the acceleration scheme and the step length operators. A by-product of this work is the shortest convergence rate proof for the accelerated PDHGM known to us. Afterwards, in Sect. 4, we derive from the general algorithm two efficient mixed-rate algorithms for problems exhibiting strong convexity only on subspaces. The first one employs the penalty or smoothing on both the primal and the dual. The second one only employs the penalty on the dual. We finish the study with numerical experiments in Sect. 5. The main results of interest for readers wishing to apply our work are Algorithms 3 and 4 along with the respective convergence results, Theorems 4.1 and 4.2.
A General Primal–Dual Method
Notation
To make the notation definite, we denote by the space of bounded linear operators between Hilbert spaces X and Y. For , the notation means that is positive semidefinite; in particular, means that T is positive semidefinite. In this case, we also denote
The identity operator is denoted by I, as is standard.
For , which can possibly not be self-adjoint, we employ the notation
| 2 |
We also use the notation .
Background
As in the introduction, let us be given convex, proper, lower semicontinuous functionals and on Hilbert spaces X and Y, as well as a bounded linear operator . We then wish to solve the minimax problem
min_x max_y G(x) + ⟨Kx, y⟩ − F*(y) | P |
assuming the existence of a solution satisfying the optimality conditions
−K*ŷ ∈ ∂G(x̂), Kx̂ ∈ ∂F*(ŷ) | OC |
Such a point always exists if and , as follows from [2, Proposition VI.1.2 & Proposition VI.2.2]. More generally the existence has to be proved explicitly. In finite dimensions, see, for example, [19] for several sufficient conditions.
The primal–dual method of Chambolle and Pock [3] for solving (P) consists of iterating
x^{i+1} := (I + τ_i ∂G)^{−1}(x^i − τ_i K*y^i), | 3a |
x̄^{i+1} := x^{i+1} + ω_i(x^{i+1} − x^i), | 3b |
y^{i+1} := (I + σ_{i+1} ∂F*)^{−1}(y^i + σ_{i+1} K x̄^{i+1}). | 3c |
In the basic version of the algorithm, τ_i ≡ τ, σ_i ≡ σ, and ω_i ≡ 1, assuming that the step length parameters satisfy a bound of the form τσ‖K‖² ≤ 1. The method has O(1 / N) rate for the ergodic duality gap [3]. If G is strongly convex with factor γ, we may use the acceleration scheme [3]
ω_i := 1/√(1 + 2γτ_i), τ_{i+1} := τ_i ω_i, σ_{i+1} := σ_i/ω_i, | 4 |
to achieve O(1 / N²) convergence rates of the iterates and an ergodic duality gap, defined in [3]. To motivate our choices later on, observe that x̄^{i+1} is never used except to calculate y^{i+1}. We may therefore equivalently parametrise the algorithm by .
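For concreteness, the scalar-step iteration and the acceleration rule above can be written in a few lines of NumPy. This is a minimal sketch rather than the implementation used in our experiments: the functions prox_G and prox_Fstar (proximal maps of G and F*), the matrix K, and all parameter values are assumed to be supplied by the user, and gamma = 0 recovers the unaccelerated method with constant steps.

```python
import numpy as np

def pdhgm(K, prox_G, prox_Fstar, x0, y0, tau0, sigma0, gamma=0.0, iters=500):
    """Scalar-step PDHGM in the ordering (3), with the acceleration rule (4).

    Assumptions of this sketch: K is a dense matrix (so K.T acts as K^*),
    prox_G(z, tau) and prox_Fstar(z, sigma) evaluate the proximal maps, the
    initial steps satisfy tau0 * sigma0 * ||K||^2 <= 1, and gamma >= 0 is a
    strong convexity factor of G (gamma = 0 gives the unaccelerated method).
    """
    x, y = x0.copy(), y0.copy()
    tau, sigma = tau0, sigma0
    for _ in range(iters):
        x_old = x
        # primal proximal step
        x = prox_G(x - tau * (K.T @ y), tau)
        # acceleration: omega_i = 1/sqrt(1 + 2*gamma*tau_i)
        omega = 1.0 / np.sqrt(1.0 + 2.0 * gamma * tau)
        tau_next, sigma_next = tau * omega, sigma / omega
        # over-relaxation of the primal iterate
        x_bar = x + omega * (x - x_old)
        # dual proximal step with the updated dual step length
        y = prox_Fstar(y + sigma_next * (K @ x_bar), sigma_next)
        tau, sigma = tau_next, sigma_next
    return x, y
```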
We note that the order of the steps in (3) is different from the original ordering in [3]. This is because with the present order, the method (3) may also be written in the proximal point form. This formulation, first observed in [20] and later utilised in [10, 11, 21], is also what we will use to streamline our analysis. Introducing the general variable splitting notation,
the system (3) then reduces into
| 5 |
for the monotone operator
| 6 |
and the preconditioning or step length operator
| 7 |
We note that the optimality conditions (OC) can also be encoded as .
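For orientation, in the scalar unaccelerated special case (constant τ and σ, ω_i ≡ 1) the monotone operator and the preconditioner take a familiar form; the following display is a sketch of that special case only, not the general operator-valued versions of (6) and (7).

```latex
% Sketch of the scalar, unaccelerated special case (tau, sigma constant, omega_i = 1):
\[
  H(u) :=
  \begin{pmatrix} \partial G(x) + K^* y \\ \partial F^*(y) - K x \end{pmatrix},
  \qquad
  M :=
  \begin{pmatrix} \tfrac{1}{\tau} I & -K^* \\ -K & \tfrac{1}{\sigma} I \end{pmatrix},
  \qquad u = (x, y),
\]
% so that one step of (3) solves 0 \in H(u^{i+1}) + M(u^{i+1} - u^i),
% while u = \hat u satisfies (OC) exactly when 0 \in H(\hat u).
```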
Abstract Partial Monotonicity
Our plan now is to formulate a general version of (3), replacing and by operators and . In fact, we will need two additional operators and to help communicate change in to . They replace in (3b) and (7), operating as from both sides of K. The role of is to split the original primal step length in the space X into the two parts and with potentially different rates. The role of is to transfer into the space Y, to eventually control the dual step length . In the basic algorithm (3), we would simply have , and for the scalar .
To start the algorithm derivation, we now formulate abstract forms of partially strong monotonicity. As a first step, we take subsets of invertible operators
as well as subsets of positive semidefinite operators
We assume and closed with respect to composition: for .
We use the sets and as follows. We suppose that is partially strongly -monotone, meaning that for all , holds
| G-PM |
for some family of functionals , and a linear operator which models partial strong monotonicity. The inequality in (G-PM), and all such set inequalities in the remainder of this paper, is understood to hold for all elements of the sets and . The operator acts as a testing operator, and the operator as introduced strong monotonicity. The functional is a penalty corresponding to the test and the introduced strong monotonicity. The role of testing will become more apparent in Sect. 2.4.
Similarly to (G-PM), we assume that is
-monotone with respect to in the sense that for all , holds
for some family of functionals . Again, the inequality in (-PM) is understood to hold for all elements of the sets and .
In our general analysis, we do not set any conditions on and , as their role is simply symbolic transfer of dissatisfaction of strong monotonicity into a penalty in our abstract convergence results.
Let us next look at a few examples on how (G-PM) or (-PM) might be satisfied. First we have the very well-behaved case of quadratic functions.
Example 2.1
satisfies (G-PM) with , , and for any invertible . Indeed, G is differentiable with .

The next lemma demonstrates what can be done when all the parameters are scalar. It naturally extends to functions of the form with corresponding product form parameters.
Lemma 2.1
Let be convex and proper with bounded. Then,
| 8 |
for some constant , every , and .
Proof
We denote . If , we have , so (8) holds irrespective of and C. If , we have , so (8) again holds. We may therefore compute the constants based on . Now, there is a constant M such that . Then, . Thus, if we pick , then for every and . By the convexity of G, (8) holds.
Example 2.2
An indicator function of a convex bounded set A satisfies the conditions of Lemma 2.1. This is generally what we will use and need.
A General Algorithm and the Idea of Testing
The only change we make to the proximal point formulation (5) of the method (3) is to replace the basic step length or preconditioning operator by the operator
| 10 |
As we have remarked, the operators and play the role of , acting from both sides of K. Our proposed algorithm can thus be characterised as solving on each iteration for the next iterate the preconditioned proximal point problem
| PP |
To study the convergence properties of (PP), we define the testing operator
| 11 |
It will turn out that multiplying or ‘testing’ (PP) by this operator will allow us to derive convergence rates. The testing of (PP) by is why we introduced testing into the monotonicity conditions (G-PM) and (-PM). If we only tested (PP) with , we could at most obtain ergodic convergence of the duality gap for the unaccelerated method. But by testing with something appropriate and faster increasing, such as (11), we are able to extract better convergence rates from (PP).
We also set
for some and . We will see in Sect. 2.6 that is a factor of partial strong monotonicity for H with respect to testing by . With this, taking a fixed , the properties
| C1 |
| C2 |
will turn out to be the crucial defining properties for the convergence rates of the iteration (PP). The method resulting from the combination of (PP), (C1), and (C2) can also be expressed as Algorithm 1. The main steps in developing practical algorithms based on Algorithm 1 will be in the choice of the various step length operators. This will be the content of Sects. 3 and 4. Before this, we expand the conditions (C1) and (C2) to see how they might be satisfied and study abstract convergence results.
A Simplified Condition
We expand
| 12 |
as well as
| 13 |
and
We observe that if , then for arbitrary invertible a type of Cauchy (or Young) inequality holds, namely
| 14 |
The inequality here can be verified using the basic Cauchy inequality . Applying (14) in (12), we see that (C2) is satisfied when
| 15 |
for some invertible . The second condition in (15) is satisfied as an equality if
| 16 |
By the spectral theorem for self-adjoint operators on Hilbert spaces (e.g. [22, Chapter 12]), we can make the choice (16) if
Equivalently, by the same spectral theorem, . Therefore, we see from (15) that (C2) holds when
Also, (C1) can be rewritten as
Basic Convergence Result
Our main result on Algorithm 1 is the following theorem, providing some general convergence estimates. It is, however, important to note that the theorem does not yet directly prove convergence, as its estimates depend on the rate of decrease in , as well as the rate of increase in the penalty sum coming from the dissatisfaction of strong convexity. Deriving these rates in special cases will be the topic of Sect. 4.
Theorem 2.1
Let us be given , and convex, proper, lower semicontinuous functionals and on Hilbert spaces X and Y, satisfying (G-PM) and (-PM). Pick , and suppose (C1) and (C2) are satisfied for each for some invertible , , , and , as well as and . Suppose that and are self-adjoint. Let satisfy (OC). Then, the iterates of Algorithm 1 satisfy
| 17 |
for
| 18 |
Remark 2.1
The term , coming from the dissatisfaction of strong convexity, penalises the basic convergence, which is on the right-hand side of (17) presented by the constant . If is of the order , at least on a subspace, and we can bound the penalty for some constant C, then we clearly obtain mixed convergence rates on the subspace. If we can assume that actually converges to zero at some rate, then it will even be possible to obtain improved convergence rates. Since typically reduce to scalar factors within , this would require prior knowledge of the rates of convergence and . Boundedness of the iterates , we can, however, usually ensure.
Proof
Since , we have
Recalling the definition of from (11), and of H from (6), it follows
An application of (G-PM) and (-PM) consequently gives
Using the expression (13) for , and (18) for , we thus deduce
| 19 |
For arbitrary self-adjoint , we calculate
| 20 |
We observe that in (12) is self-adjoint as we have assumed that and are self-adjoint. In consequence, using (20) we obtain
Using (C1) to estimate and (C2) to eliminate yields
| 21 |
Combining (19) and (21) through (PP), we thus obtain
| 22 |
Summing (22) over , and applying (C2) to estimate from below, we obtain (17).
Scalar Off-diagonal Updates and the Ergodic Duality Gap
One relatively easy way to satisfy (G-PM), (-PM), (C1) and (C2) is to take the ‘off-diagonal’ step length operators and as equal scalars. Another good starting point would be to choose . We, however, do not explore this route in the present work. Instead, we now specialise Theorem 2.1 to the scalar case. We then explore ways to add estimates of the ergodic duality gap into (17). While this would be possible in the general framework through convexity notions analogous to (G-PM) and (-PM), the resulting gap would not be particularly meaningful. We therefore concentrate on the scalar off-diagonal updates to derive estimates on the ergodic duality gap.
Scalar Specialisation of Algorithm 1
We take both , and for some . With
the condition () then becomes
The off-diagonal terms cancelling out () on the other hand become
Observe also that is under this setup self-adjoint if and are.
For simplicity, we now assume and to satisfy the identities
| 24 |
The monotonicity conditions (G-PM) and (-PM) then simplify into
| G-pm |
holding for all , and ; and
| F*-pm |
holding for all , and .
We have thus converted the main conditions (C2), (C1), (G-PM), and (-PM) of Theorem 2.1 into the respective conditions (), (), (G-pm), and (). Rewriting () in terms of and satisfying
we reorganise () and () into the parameter update rules (23) of Algorithm 2. For ease of expression, we introduce there and as dummy variables that are not used anywhere else. Equating , we observe that Algorithm 2 is an instance of Algorithm 1.
Example 3.1
(The method of Chambolle and Pock) Let G be strongly convex with factor . We take , , , and for some scalars . The conditions (G-pm) and () then hold with and , while () and () reduce with , , , and into
Updating such that the last inequality holds as an equality, we recover the accelerated PDHGM (3)–(4). If , we recover the unaccelerated PDHGM.
The Ergodic Duality Gap and Convergence
To study the convergence of an ergodic duality gap, we now introduce convexity notions analogous to (G-pm) and (). Namely, we assume
| G-pc |
to hold for all and and
to hold for all and . Clearly these imply (G-pm) and ().
To define an ergodic duality gap, we set
| 25 |
and define the weighted averages
With these, the ergodic duality gap at iteration N is defined as the duality gap for , namely
and we have the following convergence result.
Theorem 3.1
Let us be given , and convex, proper, lower semicontinuous functionals and on Hilbert spaces X and Y, satisfying (G-pc) and () for some sets , , and . Pick , and suppose () and () are satisfied for each for some invertible self-adjoint , ,
as well as and for . Let satisfy (OC). Then, the iterates of Algorithm 2 satisfy
| 26 |
Here is as in (18), and
| 27 |
If only (G-pm) and () hold instead of (G-pc) and (), and we take , then (26) holds with the modification .
Remark 3.1
For convergence of the gap, we must accelerate less (factor 1 / 2 on ).
Example 3.2
(No acceleration) Consider Example 3.1, where and . If , we get ergodic convergence of the duality gap at rate O(1 / N). Indeed, we are in the scalar step setting, with . Thus, presently .
Example 3.3
(Full acceleration) With in Example 3.1, we know from [3, Corollary 1] that
| 28 |
Thus, is of the order , while is of the order . Therefore, (26) shows convergence of the squared distance to solution. For convergence of the ergodic duality gap, we need to slow down (4) to .
Remark 3.2
The result (28) can be improved to estimate without a qualifier . Indeed, from [3, Lemma 2] we know the following for the rule : given and with , for any holds
If we pick and , this says
The first inequality gives .
Therefore, for . Moreover, the second inequality gives .
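As a quick numerical sanity check of this remark, the recursion behind (4) can simply be iterated; the following sketch uses illustrative values of γ and τ₀ and confirms that τ_N behaves like 1/(γN), which is the source of the O(1 / N²) quantities above.

```python
import numpy as np

def accelerated_steps(gamma, tau0, N):
    """Iterate the rule (4): omega_i = 1/sqrt(1 + 2*gamma*tau_i), tau_{i+1} = omega_i*tau_i."""
    tau = np.empty(N + 1)
    tau[0] = tau0
    for i in range(N):
        omega = 1.0 / np.sqrt(1.0 + 2.0 * gamma * tau[i])
        tau[i + 1] = omega * tau[i]
    return tau

tau = accelerated_steps(gamma=1.0, tau0=0.1, N=100000)  # illustrative values
for N in (10, 100, 1000, 10000, 100000):
    # tau[N]*N approaches 1/gamma, so 1/tau_N^2 grows like gamma^2 * N^2
    print(N, tau[N], tau[N] * N)
```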
Proof
(Theorem 3.1) The non-gap estimate in the last paragraph of the theorem statement, where , we modify , is a direct consequence of Theorem 2.1. We therefore concentrate on the estimate that includes the gap, and fix . We begin by expanding
Since then , and , we may take and in (G-pc) and . It follows
Using (2) and (24), we can make all of the factors ‘2’ and ‘1/2’ in this expression annihilate each other. With as in (27) and , we therefore have
A little bit of reorganisation and referral to (13) for the expansion of thus gives
| 29 |
Let us write
Observing here the switches between the indices and i of the step length parameters in comparison with the last step of (29), we thus obtain
| 30 |
We note that in (12) is self-adjoint as we have assumed and to be, and taken and to be scalars times the identity. We therefore deduce from the proof of Theorem 2.1 that (21) holds. Using (PP) to combine (21) and (30), we thus deduce
Summing this for gives with from (27) the estimate
| 31 |
We want to estimate the sum of the gaps in (31). Using the convexity of G and , we observe
| 32 |
Also, by (25) and simple reorganisation
| 33 |
| 34 |
All of (32)–(34) together give
Another use of (25) gives
Thus,
| 35 |
where the remainder
At a solution to (OC), we have , so provided . But , so this is guaranteed by our assumption (). Using (35) in (31) therefore gives
| 36 |
A referral to (C2) to estimate from below shows (26), concluding the proof.
Convergence Rates in Special Cases
To derive a practical algorithm, we need to satisfy the update rules (C1) and (C2), as well as the partial monotonicity conditions (G-PM) and (-PM). As we have already discussed in Sect. 3, this can be done when for some we set
| 37 |
The result of these choices is Algorithm 2, whose convergence we studied in Theorem 3.1. Our task now is to verify its conditions, in particular (G-pc) and [alternatively () and (G-pm)], as well as (), (), and () for of the projection form .
An Approach to Updating
We have not yet defined an explicit update rule for , merely requiring that it has to satisfy () and (). The former in particular requires
Enlisting the help of some linear operator satisfying
| 38 |
our approach is to define
| 39 |
Then, () is satisfied provided . Since the condition () reduces into the satisfaction for each of
| 40a |
| 40b |
To apply Theorem 3.1, all that remains is to verify in special cases the conditions (40) together with () and the partial strong convexity conditions (G-pc) and .
When is a Multiple of a Projection
We now take for some , and a projection operator : idempotent, , and self-adjoint, . We let . Then, . With this, we assume that is such that for some holds
| 41 |
To unify our analysis for gap and non-gap estimates of Theorem 3.1, we now pick in the former case, and in the latter. We then pick , and , and set
| 42 |
With this, guarantee . Moreover, is self-adjoint. Further, , exactly as required in both the gap and the non-gap cases of Theorem 3.1.
Since
we are encouraged to take
| 43 |
Observe that (43) satisfies (38). Inserting (43) into (39), we obtain
| 44 |
Since is now equivalent to a scalar, (40b), we also take , assuming for some that
Setting
we thus expand (40) as
| 45a |
| 45b |
| 45c |
We are almost ready to state a general convergence result for projective . However, we want to make one more thing explicit. Since the choices (42) satisfy
we suppose for simplicity that
| 46 |
for some and . The conditions (G-pc) and reduce in this case to the satisfaction for some of
for all and , as well as of
for all and . Analogues of (G-pm) and () can be formed.
To summarise the findings of this section, we state the following proposition.
Proposition 4.1
Suppose (G-pcr) and (-pcr) hold for some projection operator and scalars . With , pick . For each , suppose (45) is satisfied with
| 47 |
If we solve (45a) exactly, define , , and through (42) and (44), and set , then the iterates of Algorithm 2 satisfy with and as in (27) the estimate
| 48 |
If we take , then (48) holds with .
Observe that presently
| 49 |
Proof
As we have assumed through (47), or otherwise already verified its conditions, we may apply Theorem 3.1. Multiplying (26) by , we obtain
| 50 |
Now, observe that solving (45a) exactly gives
| 51 |
Therefore, we have the estimate
| 52 |
Primal and Dual Penalties with Projective
We now study conditions that guarantee the convergence of the sum in (48). Indeed, the right-hand sides of (45b) and (45c) relate to . In most practical cases, which we study below, and transfer these right-hand side penalties into simple linear factors within . Optimal rates are therefore obtained by solving (45b) and (45c) as equalities, with the right-hand sides proportional to each other. Since , and it will be the case that for large i, we, however, replace (45c) by the simpler condition
| 53 |
Then, we try to make the left-hand sides of (45b) and (53) proportional with only as a free variable. That is, for some proportionality constant , we solve
| 54 |
Multiplying both sides of (54) by , gives on the quadratic condition
Thus,
| 55 |
Solving (45b) and (53) as equalities, (54) and (55) give
| 56 |
Note that this quantity is non-negative exactly when . We have
This quickly yields if . In particular, (56) is non-negative when .
The next lemma summarises these results for the standard choice of .
Lemma 4.1
Let be given by (55), and set
| 57 |
Then, , , and (45) is satisfied with the right-hand sides given by the non-negative quantity in (56). Moreover,
| 58 |
Proof
The choice (57) satisfies (45a), so that (45) in its entirety will be satisfied with the right-hand sides of (45b)–(45c) given by (56). The bound follows from . Finally, the implication (58) is a simple estimation of (55).
Specialisation of Algorithm 2 to the choices in Lemma 4.1 yields the steps of Algorithm 3. Observe that entirely disappears from the algorithm. To obtain convergence rates, and to justify the initial conditions, we will shortly seek to exploit with specific and the telescoping property stemming from the non-negativity of the last term of (56).
There is still, however, one matter to take care of. We need and , although in many cases of practical interest, the upper bounds are infinite and hence inconsequential. We calculate from (55) and (57) that
| 60 |
Therefore, we need to choose and to satisfy . Likewise, we calculate from (56), (57), and (60) that
This tells us to choose and to satisfy . Overall, we obtain for and the condition
| 61 |
This can always be satisfied through suitable choices of and .
If now and , using the non-negativity of (56), we calculate
| 62 |
Similarly
| 63 |
Using these expressions to expand (49), we obtain the following convergence result.
Theorem 4.1
Suppose (G-pcr) and (-pcr) hold for some projection operator , scalars with , and , for some constants . With , fix . Select initial , as well as and satisfying (61). Then, Algorithm 3 satisfies for some the estimate
| 64 |
If we take , then (64) holds with .
Proof
During the course of the derivation of Algorithm 3, we have verified (45), solving (45a) as an equality. Moreover, Lemma 4.1 and (61) guarantee (47). We may therefore apply Proposition 4.1. Inserting (62) and (63) into (48) and (49) gives
| 65 |
The condition now guarantees through (58). Now we note that is not used in Algorithm 3, so it only affects the convergence rate estimates. We therefore simply take , so that for all . With this and the bound from Remark 3.2, (64) follows by simple estimation of (65).
Remark 4.1
As a special case of Algorithm 3, if we choose , then we can show from (55) that for all .
Remark 4.2
The convergence rate provided by Theorem 4.1 is a mixed rate, similar to that derived in [5] for a type of forward–backward splitting algorithm for smooth G. Ours is, of course, a backward–backward type algorithm. It is interesting to note that using the differentiability properties of infimal convolutions [23, Proposition 18.7], and the presentation of a smooth G as an infimal convolution, it is formally possible to derive a forward–backward algorithm from Algorithm 3. The difficulties lie in combining this conversion trick with conditions on the step lengths.
Dual Penalty Only with Projective
Continuing with the projective setup of Sect. 4.2, we now study the case , that is, when only the dual penalty is available with . To use Proposition 4.1, we need to satisfy (47) and (45), with (45a) holding as an equality. Since , (45b) becomes
| 66 |
With respect to , the left-hand side of (45c) is maximised (and the penalty on the right-hand side minimised) when (66) is minimised. Thus, we solve (66) exactly, which gives
In consequence , and (45c) becomes
| 67 |
In order to simultaneously satisfy (45a), this suggests for some, yet undetermined, , to choose
| 68 |
Since , (67) is satisfied with the choice (68) if we take
To use Proposition 4.1, we need to satisfy . Since (68) implies that is non-increasing, we can satisfy this for large enough i if . To ensure satisfaction for all , it suffices to take non-increasing, and satisfy the initial condition
| 69 |
The rule and (68) give . We therefore see that
Assuming to have the structure (46), moreover,
Thus, the rate (48) in Proposition 4.1 states
| 70 |
for
The convergence rate is thus completely determined by and .
Remark 4.3
If , that is, if is strongly convex, we may simply pick , that is , and obtain from (70) an O(1 / N²) convergence rate.
For a more generally applicable algorithm, suppose as in Theorem 4.1. We need to choose . One possibility is to pick some and
| 71 |
The concavity of for easily shows that is non-increasing. With the choice (71), we then compute
and
If , we find with that
| 72 |
The choice gives uniform O(1 / N) over both the initialisation and the dual sequence. By choosing , we get convergence with respect to the initialisation, and with respect to the residual sequence.
With these choices, Algorithm 2 yields Algorithm 4, whose convergence properties are stated in the next theorem.
Theorem 4.2
Suppose (G-pcr) and (-pcr) hold for some projection operator and with and for some constant . With , choose , and pick the sequence by (71) for some . Select initial and verifying (69). Then, Algorithm 4 satisfies
| 74 |
If we take , then (74) holds with .
Proof
We apply Proposition 4.1, whose assumptions we have verified during the course of the present section; in particular, through the choice (68) that forces . Also, we have already derived the rate (70) from (48). Inserting (72) into (70), noting that the former is only valid for , immediately gives (74).
Examples from Image Processing and the Data Sciences
We now consider several applications of our algorithms. We generally have to consider discretisations, since many interesting infinite-dimensional problems necessitate Banach spaces. Using Bregman distances, it would be possible to generalise our work from Hilbert spaces to Banach spaces, as was done in [24] for the original method of [3]. This is, however, outside the scope of the present work.
Regularised Least Squares
A large range of interesting application problems can be written in the Tikhonov regularisation or empirical loss minimisation form
| 75 |
Here is a regularisation parameter, typically convex and smooth fidelity term with data . The forward operator —which can often also be data—maps our unknown to the space of data. The operator and the typically non-smooth and convex act as a regulariser.
We are particularly interested in strongly convex and A with a non-trivial null-space. Examples include the Lasso—a type of regularised regression—with , , and , on finite-dimensional spaces. If the data of the Lasso is ‘sparse’, in the sense that A has a non-trivial null-space, then, based on accelerating the strongly convex part of the variable, our algorithm can provide improved convergence rates compared to standard non-accelerated methods.
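A small numerical illustration of this structure (the sizes, matrix, and tolerance below are arbitrary): for a rank-deficient A, the fidelity term ½‖Ax − b‖² is strongly convex only on ran(Aᵀ), and the corresponding projection P can be read off from an SVD.

```python
import numpy as np

rng = np.random.default_rng(0)

# A deliberately rank-deficient forward operator: 'sparse data' in the sense
# that A has a non-trivial null-space.
m, n, r = 30, 100, 20
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
b = rng.standard_normal(m)

# SVD-based projection P onto ran(A^T) = (ker A)^perp.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
V = Vt[s > 1e-10 * s.max()].T          # orthonormal basis of ran(A^T)
P = V @ V.T                            # orthogonal projection: P = P^T = P^2

# G(x) = 0.5*||Ax - b||^2 satisfies
#   <grad G(x) - grad G(x'), x - x'> = ||A(x - x')||^2 >= gamma * ||P(x - x')||^2
# with gamma the smallest non-zero eigenvalue of A^T A: strong convexity on a
# subspace only, which is the setting of Algorithms 3 and 4.
gamma = (s[s > 1e-10 * s.max()] ** 2).min()

x, xp = rng.standard_normal(n), rng.standard_normal(n)
lhs = np.linalg.norm(A @ (x - xp)) ** 2
rhs = gamma * np.linalg.norm(P @ (x - xp)) ** 2
print(lhs >= rhs - 1e-8)               # True
```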
In image processing, examples abound; we refer to [25] for an overview. In total variation () regularisation, we still take , but is the gradient operator. Strictly speaking, this has to be formulated in the Banach space , but we will consider the discretised setting to avoid this problem. For denoising of Gaussian noise with regularisation, we take , and again . This problem is not so interesting to us, as it is fully strongly convex. In a simple form of inpainting—filling in missing regions of an image—we take A as a subsampling operator S mapping an image to one in , for the defect region that we want to recreate. Observe that in this case, is directly a projection operator. This is therefore an ideal problem for our algorithms! Related problems include reconstruction from subsampled magnetic resonance imaging (MRI) data (see, for example, [11, 26]), where we take for the Fourier transform. Still, is a projection operator, so the problem perfectly suits our algorithms.
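A quick check of the projection structure just mentioned (the grid size and sampling pattern below are arbitrary): for a subsampling operator S, the composition SᵀS is a diagonal 0/1 matrix and hence an orthogonal projection, and the same remains true for subsampled Fourier data since the transform is unitary.

```python
import numpy as np

n = 8
keep = np.array([0, 2, 3, 6])                      # indices retained by the subsampler
S = np.eye(n)[keep]                                # subsampling operator R^n -> R^4

P = S.T @ S                                        # diagonal 0/1 matrix
print(np.allclose(P, P @ P), np.allclose(P, P.T))  # idempotent and self-adjoint

# Subsampled unitary DFT: A = S F; A^* A = F^* (S^T S) F is again a projection.
F = np.fft.fft(np.eye(n), norm="ortho")            # unitary DFT matrix
A = S @ F
Q = A.conj().T @ A
print(np.allclose(Q, Q @ Q), np.allclose(Q, Q.conj().T))
```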
Another related problem is total variation deblurring, where A is a convolution kernel. This problem is slightly more complicated to handle, as is not a projection operator. Assuming periodic boundary conditions on a box , we can write , multiplying the Fourier transform by some . If on a subdomain, we obtain a projection form (it would also be possible to extend our theory to non-constant , but we have decided not to extend the length of the paper by doing so. Dualisation likewise provides a further alternative).
Satisfaction of convexity conditions
In all of the above examples, when written in the saddle point form (P), is a simple pointwise ball constraint. Lemma 2.1 thus guarantees (-pcr). If and , then clearly can be bounded in for the optimal solution to (75). Thus, for some , we can add to (75) the artificial constraint
| 76 |
In finite dimensions, this gives a bound in . Lemma 2.1 gives (G-pcr) with .
In the case of our total variation examples, and . Provided mean-zero functions are not in the kernel of A, one can, through Poincaré's inequality [27] on and a two-dimensional connected domain, show that even the original infinite-dimensional problems have bounded solutions in . We may therefore again add the artificial constraint (76) with to (75).
Dynamic bounds and pseudo-duality gaps
We seldom know the exact bound M, but can derive conservative estimates. Nevertheless, adding such a bound to Algorithm 4 is a simple, easily implemented projection of into the constraint set. In practice, we do not use or need the projection, and update the bound M dynamically so as to ensure that the constraint (76) is never active. Indeed, A having a non-trivial nullspace also causes duality gaps for (P) to be numerically infinite. In [28], a ‘pseudo-duality gap’ was therefore introduced, based on dynamically updating M. We will also use this type of dynamic duality gap in our reporting.
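If one does want to enforce the bound explicitly, and assuming (76) is a Euclidean-norm ball ‖x‖ ≤ M, the proximal step for a quadratic data term stays explicit: the quadratic has isotropic level sets, so the constrained prox is the unconstrained prox followed by a radial projection. A sketch, with lam, f, and M standing for an illustrative data term and bound:

```python
import numpy as np

def prox_G_with_ball(z, tau, f, lam, M):
    """Prox of G(x) = (lam/2)*||x - f||^2 + indicator(||x|| <= M).

    Because the quadratic part has isotropic level sets, the constrained prox
    is the unconstrained prox followed by projection onto the ball.  (In the
    reported experiments the bound is instead kept inactive and M is updated
    dynamically, so this projection is never actually triggered.)
    """
    x = (z / tau + lam * f) / (1.0 / tau + lam)   # unconstrained prox
    norm = np.linalg.norm(x)
    return x if norm <= M else x * (M / norm)
```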
Regularised Problems
So far, we have considered very simple regularisation terms. Total generalised variation, , was introduced in [29] as a higher-order generalisation of . It avoids the unfortunate stair-casing effect of —large flat areas with sharp transitions—while retaining the critical edge-preservation property that smooth regularisers lack. We concentrate on the second-order . In all of our image processing examples, we can replace by .
As with total variation, we have to consider discretised models due to the original problem being set in the Banach space . For two parameters , the regularisation functional is written in the differentiation cascade form of [30] as
Here is the symmetrised gradient. With and , we may write the problem
| 77 |
in the saddle point form (P) with
If , as is the case for denoising, we have
perfectly uncoupling in both Algorithm 3 and Algorithm 4 the prox updates for G into ones for and . The condition (-pcr) with is then immediate from Lemma 2.1. Moreover, the Sobolev–Korn inequality [31] allows us to bound on a connected domain an optimal to (77) as
for some constant . We may assume that , as the affine part of w is not used in (77). Therefore we may again replace by the artificial constraint . By Lemma 2.1, G will then satisfy (G-pcr) with .
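For readers implementing (77), the discrete operators involved can be sketched compactly; the following sketch uses forward differences, Neumann-type boundary handling, and cell width 1, none of which is prescribed above.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient; u: (m, n) -> (2, m, n)."""
    gx = np.zeros_like(u); gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy = np.zeros_like(u); gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return np.stack([gx, gy])

def div(p):
    """Discrete divergence, the negative adjoint of grad; p: (2, m, n) -> (m, n)."""
    px, py = p
    dx = np.zeros_like(px); dx[0] = px[0]; dx[1:-1] = px[1:-1] - px[:-2]; dx[-1] = -px[-2]
    dy = np.zeros_like(py); dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def sym_grad(w):
    """Symmetrised gradient E w = (grad w + grad w^T)/2; w: (2, m, n) -> (3, m, n),
    storing the components (E11, E22, E12)."""
    g1, g2 = grad(w[0]), grad(w[1])
    return np.stack([g1[0], g2[1], 0.5 * (g1[1] + g2[0])])

def K_tgv(u, w):
    """Block operator of the TGV^2 saddle-point form: K(u, w) = (grad u - w, E w)."""
    return grad(u) - w, sym_grad(w)
```

In the denoising case, the prox update for G then acts separately on u (a pointwise quadratic, possibly with the artificial constraint as sketched earlier) and on w.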
Numerical Results
We demonstrate our algorithms on denoising and deblurring. Our tests are done on the photographs in Fig. 1, both at the original resolution of , and scaled down by a factor of 0.25 to pixels. It is image #23 from the free Kodak image suite. Other images from the collection that we have experimented on give analogous computational results. For both of our example problems, we calculate a target solution by taking one million iterations of the basic PDHGM (3). We also tried interior point methods for this, but they are only practical for the smaller denoising problem.
Fig. 1. We use sample image (b) for denoising, and (c) for deblurring experiments. Free Kodak image suite photo, at the time of writing online at http://r0k.us/graphics/kodak/. a True image. b Noisy image. c Blurry image
We evaluate Algorithms 3 and 4 against the standard unaccelerated PDHGM of [3], as well as (a) the mixed-rate method of [5], denoted here C-L-O, (b) the relaxed PDHGM of [20, 32], denoted here ‘Relax’, and (c) the adaptive PDHGM of [33], denoted here ‘Adapt’. All of these methods are very closely linked and have comparable low costs for each step. This makes them straightforward to compare.
As we have discussed, for comparison and stopping purposes, we need to calculate a pseudo-duality gap as in [28], because the real duality gap is in practice infinite when A has a non-trivial nullspace. We do this dynamically, updating the M in (76) every time we compute the duality gap. For both of our example problems, we use for simplicity in (76). In the calculation of the final duality gaps comparing each algorithm, we then take as M the maximum over all evaluations of all the algorithms. This makes the results fully comparable. We always report the duality gap in decibels relative to the initial iterate. Similarly, we report the distance to the target solution in decibels , and the primal objective value relative to the target as . Our computations were performed in MATLAB+C-MEX on a MacBook Pro with 16GB RAM and a 2.8 GHz Intel Core i5 CPU.
TGV² denoising The noise in our high-resolution test image, with values in the range [0, 255], has standard deviation 29.6 or 12 dB. In the downscaled image, these become, respectively, 6.15 or 25.7 dB. As parameters of the regularisation functional, we choose (4.4, 4) for the downscaled image, and translate this to the original image by multiplying by the scaling vector corresponding to the 0.25 downscaling factor. See [34] for a discussion about rescaling and regularisation factors, as well as for a justification of the ratio.
For the PDHGM and our algorithms, we take , corresponding to the gap convergence results. We choose , and parametrise the PDHGM with and solved from . These are values that typically work well. For forward-differences discretisation of with cell width , we have [28]. We use the same value of for Algorithm 3 and Algorithm 4, but choose , and . We also take for Algorithm 3. These values have been found to work well by trial and error, while keeping comparable to the PDHGM. A similar choice of with a corresponding modification of would significantly reduce the performance of the PDHGM. For Algorithm 4, we take exponent for the sequence . This gives in principle a mixed rate, possibly improved by the convergence of the dual sequence. We plot the evolution of the step length for these and some other choices in Fig. 2. For the C-L-O, we use the detailed parametrisation from [35, Corollary 2.4], taking as the true -norm Bregman divergence of , and as a conservative estimate of a ball containing the true solution. For ‘Adapt’, we use the exact choices of , , and c from [33]. For ‘Relax’, we use the value 1.5 for the inertial parameter of [32]. For both of these algorithms, we use the same choices of and as for the PDHGM.
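The quoted bound on the forward-difference gradient can be verified numerically, and, for a plain TV problem where K is the gradient, σ can then be solved from a product constraint of the form τσ‖K‖² = 1. A small sketch with illustrative values (not the exact parameters used in our experiments):

```python
import numpy as np

n = 64
D = np.eye(n, k=1) - np.eye(n)        # 1D forward differences
D[-1, :] = 0                          # Neumann-type boundary row
lam_max = np.linalg.eigvalsh(D.T @ D).max()

# The 2D gradient is [D (x) I; I (x) D], so ||grad||^2 = 2 * lambda_max(D^T D),
# which stays below the analytic bound 8/h^2 (here h = 1) quoted from [28].
L2 = 2 * lam_max
print(L2)                             # < 8, approaching 8 as n grows

tau = 0.1                             # illustrative primal step length
sigma = 1.0 / (tau * L2)              # solve tau * sigma * ||grad||^2 = 1
```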
Fig. 2.
Step length parameter evolution, both axes logarithmic. ‘Alg.3’ and ‘Alg.4 q=1’ have the same parameters as our numerical experiments for the respective algorithms, in particular for Algorithm 3, which yields constant . ‘Alg.3 ’ uses the value , which causes to increase for some iterations. ‘Alg.4 q=0.1’ uses the value for Algorithm 4, everything else being kept equal
We take a fixed 20,000 iterations and initialise each algorithm with and . To reduce computational overheads, we compute the duality gap and distance to target only every 10 iterations instead of at each iteration. The results are in Fig. 3 and Table 1. As we can see, Algorithm 3 performs extremely well for the low-resolution image, especially in its initial iterations. After about 700 or 200 iterations, depending on the criterion, the standard and relaxed PDHGM start to overtake. This is a general effect that we have seen in our tests: the standard PDHGM performs in practice very well asymptotically, although in principle all that exists is an O(1 / N) rate on the ergodic duality gap. Algorithm 4, by contrast, does not perform asymptotically so well. It can be extremely fast on its initial iterations, but then quickly flattens out. The C-L-O surprisingly performs better on the high-resolution image than on the low-resolution image, where it does somewhat poorly in comparison with the other algorithms. The adaptive PDHGM performs very poorly for denoising, and we have indeed excluded the high-resolution results from our reports to keep the scaling of the plots informative. Overall, Algorithm 3 gives good results fast, although the basic and relaxed PDHGM seem to perform, in practice, better asymptotically.
Fig. 3. TGV² denoising performance, 20,000 iterations, high- and low-resolution images. The plot is logarithmic, with the decibels calculated as in Sect. 5.3. The poor high-resolution results for ‘Adapt’ [33] have been omitted to avoid poor scaling of the plots. a Gap, low resolution. b Target, low resolution. c Value, low resolution. d Gap, high resolution. e Target, high resolution. f Value, high resolution
Table 1.
TGV² denoising performance, maximum 20,000 iterations
Low resolution

| Method | Gap dB: Iter | Gap dB: Time (s) | Tgt dB: Iter | Tgt dB: Time (s) | Val dB: Iter | Val dB: Time (s) |
|---|---|---|---|---|---|---|
| PDHGM | 30 | 0.40 | 40 | 0.46 | 30 | 0.40 |
| C-L-O | 500 | 4.67 | 1210 | 11.31 | 970 | 9.04 |
| Alg.3 | 20 | 0.29 | 10 | 0.22 | 20 | 0.29 |
| Alg.4 | 20 | 0.47 | 20 | 0.47 | 20 | 0.47 |
| Relax | 20 | 0.34 | 30 | 0.45 | 20 | 0.34 |
| Adapt | 5360 | 106.63 | 2040 | 41.38 | 3530 | 70.78 |

High resolution

| Method | Gap dB: Iter | Gap dB: Time (s) | Tgt dB: Iter | Tgt dB: Time (s) | Val dB: Iter | Val dB: Time (s) |
|---|---|---|---|---|---|---|
| PDHGM | 50 | 8.85 | 30 | 5.13 | 30 | 5.13 |
| C-L-O | 80 | 15.76 | 30 | 5.97 | 80 | 15.76 |
| Alg.3 | 40 | 6.20 | 20 | 3.10 | 40 | 6.20 |
| Alg.4 | 60 | 9.18 | 30 | 4.53 | 60 | 9.18 |
| Relax | 40 | 7.45 | 20 | 3.70 | 20 | 3.70 |
| Adapt | – | – | – | – | – | – |
The CPU time and number of iterations (at a resolution of 10) needed to reach given solution quality in terms of the duality gap, distance to target, or primal objective value
TV deblurring Our test image has now been distorted by Gaussian blur of standard deviation 4, which we intend to remove. We denote by the Fourier representation of the blur operator as discussed in Sect. 5.1. For numerical stability of the pseudo-duality gap, we zero out small entries, replacing this by . Note that this is only needed for the stable computation of for the pseudo-duality gap, to compare the algorithms; the algorithms themselves are stable without this modification. To construct the projection operator P, we then set , and .
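A sketch of this construction (the kernel width, grid size, and threshold below are illustrative): under periodic boundary conditions the blur is diagonalised by the FFT, and the projection keeps exactly the frequencies where the transfer function has not been zeroed out.

```python
import numpy as np

def gaussian_kernel_fft(shape, sigma=4.0):
    """FFT (transfer function) of a periodic, normalised Gaussian blur kernel."""
    m, n = shape
    y = np.minimum(np.arange(m), m - np.arange(m))[:, None]
    x = np.minimum(np.arange(n), n - np.arange(n))[None, :]
    k = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return np.fft.fft2(k / k.sum())

a_hat = gaussian_kernel_fft((128, 128), sigma=4.0)

# Zero out numerically negligible frequencies and define the projection mask.
threshold = 1e-7 * np.abs(a_hat).max()     # illustrative cut-off
mask = np.abs(a_hat) > threshold
a_hat = np.where(mask, a_hat, 0.0)

def apply_A(u):
    """Blur: A u = F^{-1}( a_hat * F u )."""
    return np.real(np.fft.ifft2(a_hat * np.fft.fft2(u)))

def apply_P(u):
    """P = F^{-1} diag(mask) F: idempotent, and self-adjoint since the mask is real 0/1."""
    return np.real(np.fft.ifft2(mask * np.fft.fft2(u)))
```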
We use parameter 2.55 for the high-resolution image and the scaled parameter for the low-resolution image. We parametrise all the algorithms almost exactly as for denoising above, of course with appropriate and corresponding to [36]. The only difference in parametrisation is that we take instead of for Algorithm 4.
The results are in Fig. 4 and Table 2. It does not appear numerically feasible to go significantly below or dB gap. Our guess is that this is due to the numerical inaccuracies of the fast Fourier transform implementation in MATLAB. The C-L-O performs very well judged by the duality gap, although the images themselves and the primal objective value appear to take a little bit longer to converge. The relaxed PDHGM is again slightly improved from the standard PDHGM. The adaptive PDHGM performs very well, slightly outperforming Algorithm 3, although not Algorithm 4. This time Algorithm 4 performs remarkably well.
Fig. 4. TV deblurring performance, 10,000 iterations, high- and low-resolution images. The plot is logarithmic, with the decibels calculated as in Sect. 5.3. a Gap, low resolution. b Target, low resolution. c Value, low resolution. d Gap, high resolution. e Target, high resolution. f Value, high resolution
Table 2.
TV deblurring performance, maximum 10,000 iterations
Low resolution

| Method | Gap dB: Iter | Gap dB: Time (s) | Tgt dB: Iter | Tgt dB: Time (s) | Val dB: Iter | Val dB: Time (s) |
|---|---|---|---|---|---|---|
| PDHGM | 390 | 2.53 | 2630 | 17.41 | 60 | 0.47 |
| C-L-O | 600 | 3.81 | 8930 | 54.20 | 950 | 5.95 |
| Alg.3 | 130 | 1.14 | 880 | 7.22 | 20 | 0.25 |
| Alg.4 | 30 | 0.47 | 90 | 0.97 | 10 | 0.29 |
| Relax | 260 | 1.62 | 1750 | 11.34 | 40 | 0.29 |
| Adapt | 110 | 1.12 | 660 | 5.94 | 10 | 0.16 |

High resolution

| Method | Gap dB: Iter | Gap dB: Time (s) | Tgt dB: Iter | Tgt dB: Time (s) | Val dB: Iter | Val dB: Time (s) |
|---|---|---|---|---|---|---|
| PDHGM | 1180 | 118.30 | 970 | 98.98 | 70 | 6.59 |
| C-L-O | 500 | 48.44 | 1940 | 187.42 | 1000 | 96.60 |
| Alg.3 | 400 | 58.42 | 320 | 46.16 | 40 | 6.13 |
| Alg.4 | 60 | 7.97 | 50 | 6.66 | 30 | 3.98 |
| Relax | 790 | 77.31 | 650 | 63.84 | 50 | 5.29 |
| Adapt | 260 | 39.39 | 150 | 23.30 | 30 | 4.72 |
The CPU time and number of iterations (at a resolution of 10) needed to reach given solution quality in terms of the duality gap, distance to target, or primal objective value
Conclusion
Overall, our algorithms are very competitive within the class of proposed variants of the PDHGM. Within our analysis, we have, moreover, proposed very streamlined derivations of convergence rates for even the standard PDHGM, based on the proximal point formulation and the idea of testing. Interesting continuations of this study include whether the condition can reasonably be relaxed such that and would not have to be scalars, as well as the relation to block coordinate descent methods, in particular [14, 37].
Acknowledgements
This research was started while T. Valkonen was at the Center for Mathematical Modeling at Escuela Politécnica Nacional in Quito, supported by a Prometeo scholarship of the Senescyt (Ecuadorian Ministry of Science, Technology, Education, and Innovation). In Cambridge, T. Valkonen has been supported by the EPSRC Grant EP/M00483X/1 “Efficient computational tools for inverse imaging problems”. Thomas Pock is supported by the European Research Council under the Horizon 2020 programme, ERC starting Grant Agreement 640156.
Biographies
Tuomo Valkonen
received his Ph.D. from the University of Jyväskylä in 2008. Since then he has worked as a researcher in Graz, Cambridge, and Quito. In February 2016 he started as a lecturer at the University of Liverpool.
Thomas Pock
born 1978 in Graz, received his M.Sc. (1998–2004) and his Ph.D. (2005–2008) in Computer Engineering (Telematik) from Graz University of Technology. After a Post-doc position at the University of Bonn, he moved back to Graz University of Technology, where he has been an Assistant Professor at the Institute for Computer Graphics and Vision. In 2013 he received the START prize of the Austrian Science Fund (FWF) and the German Pattern Recognition Award of the German Association for Pattern Recognition (DAGM), and in 2014 he received a Starting Grant from the European Research Council (ERC). Since June 2014, he has been a Professor of Computer Science at Graz University of Technology (AIT Stiftungsprofessur “Mobile Computer Vision”) and a principal scientist at the Digital Department of Safety and Security at the Austrian Institute of Technology (AIT). The focus of his research is the development of mathematical models for computer vision and image processing as well as the development of efficient algorithms to solve these models.
Compliance with ethical standards
A Data Statement for the EPSRC
This is primarily a theory paper, with some demonstrations on a photograph freely available from the Internet. As this article was written, the used photograph from the Kodak image suite was, in particular, available at http://r0k.us/graphics/kodak/. It has also been archived with our implementations of the algorithms at https://www.repository.cam.ac.uk/handle/1810/253697.
Contributor Information
Tuomo Valkonen, Email: tuomo.valkonen@iki.fi.
Thomas Pock, Email: pock@icg.tugraz.at.
References
- 1. Rockafellar RT. Convex Analysis. Princeton: Princeton University Press; 1972.
- 2. Ekeland, I., Temam, R.: Convex Analysis and Variational Problems. SIAM (1999)
- 3. Chambolle A, Pock T. A first-order primal–dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 2011;40:120–145. doi: 10.1007/s10851-010-0251-1.
- 4. Esser E, Zhang X, Chan TF. A general framework for a class of first order primal–dual algorithms for convex optimization in imaging science. SIAM J. Imaging Sci. 2010;3(4):1015–1046. doi: 10.1137/09076934X.
- 5. Chen Y, Lan G, Ouyang Y. Optimal primal–dual methods for a class of saddle point problems. SIAM J. Optim. 2014;24(4):1779–1814. doi: 10.1137/130919362.
- 6. Nesterov Y. Smooth minimization of non-smooth functions. Math. Program. 2005;103(1):127–152. doi: 10.1007/s10107-004-0552-5.
- 7. Beck A, Teboulle M. Smoothing and first order methods: a unified framework. SIAM J. Optim. 2012;22(2):557–580. doi: 10.1137/100818327.
- 8. O’Donoghue B, Candès E. Adaptive restart for accelerated gradient schemes. Found. Comput. Math. 2015;15(3):715–732. doi: 10.1007/s10208-013-9150-3.
- 9. Beck A, Teboulle M. A fast dual proximal gradient algorithm for convex minimization and applications. Oper. Res. Lett. 2014;42(1):1–6. doi: 10.1016/j.orl.2013.10.007.
- 10. Valkonen T. A primal–dual hybrid gradient method for non-linear operators with applications to MRI. Inverse Probl. 2014;30(5):055012. doi: 10.1088/0266-5611/30/5/055012.
- 11. Benning, M., Knoll, F., Schönlieb, C.B., Valkonen, T.: Preconditioned ADMM with nonlinear operator constraint (2015). arXiv:1511.00425
- 12. Möllenhoff T, Strekalovskiy E, Moeller M, Cremers D. The primal–dual hybrid gradient method for semiconvex splittings. SIAM J. Imaging Sci. 2015;8(2):827–857. doi: 10.1137/140976601.
- 13. Lorenz D, Pock T. An inertial forward-backward algorithm for monotone inclusions. J. Math. Imaging Vis. 2015;51(2):311–325. doi: 10.1007/s10851-014-0523-2.
- 14. Fercoq, O., Bianchi, P.: A coordinate descent primal–dual algorithm with large step size and possibly non separable functions (2015). arXiv:1508.04625
- 15. Beck A, Teboulle M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009;2(1):183–202. doi: 10.1137/080716542.
- 16. Beck A, Teboulle M. Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Trans. Image Process. 2009;18(11):2419–2434. doi: 10.1109/TIP.2009.2028250.
- 17. Setzer S. Operator splittings, Bregman methods and frame shrinkage in image processing. Int. J. Comput. Vis. 2011;92(3):265–280. doi: 10.1007/s11263-010-0357-3.
- 18. Valkonen, T.: Optimising big images. In: A. Emrouznejad (ed.) Big Data Optimization: Recent Developments and Challenges, Studies in Big Data, pp. 97–131. Springer, Berlin (2016). doi:10.1007/978-3-319-30265-2_5
- 19. Rockafellar, R.T., Wets, R.J.B.: Variational Analysis. Springer, Berlin (1998). doi:10.1007/978-3-642-02431-3
- 20. He B, Yuan X. Convergence analysis of primal–dual algorithms for a saddle-point problem: from contraction perspective. SIAM J. Imaging Sci. 2012;5(1):119–149. doi: 10.1137/100814494.
- 21. Pock, T., Chambolle, A.: Diagonal preconditioning for first order primal–dual algorithms in convex optimization. In: Computer Vision (ICCV), 2011 IEEE International Conference on, pp. 1762–1769 (2011). doi:10.1109/ICCV.2011.6126441
- 22. Rudin, W.: Functional Analysis. International Series in Pure and Applied Mathematics. McGraw-Hill, New York (2006)
- 23. Bauschke H, Combettes P. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. CMS Books in Mathematics. Berlin: Springer; 2011.
- 24. Hohage, T., Homann, C.: A generalization of the Chambolle–Pock algorithm to Banach spaces with applications to inverse problems (2014). arXiv:1412.0126
- 25. Chan, T., Shen, J.: Image Processing and Analysis: Variational, PDE, Wavelet, and Stochastic Methods. Society for Industrial and Applied Mathematics (SIAM) (2005)
- 26. Benning M, Gladden L, Holland D, Schönlieb CB, Valkonen T. Phase reconstruction from velocity-encoded MRI measurements—a survey of sparsity-promoting variational approaches. J. Magn. Reson. 2014;238:26–43. doi: 10.1016/j.jmr.2013.10.003.
- 27. Ambrosio L, Fusco N, Pallara D. Functions of Bounded Variation and Free Discontinuity Problems. Oxford: Oxford University Press; 2000.
- 28. Valkonen T, Bredies K, Knoll F. Total generalised variation in diffusion tensor imaging. SIAM J. Imaging Sci. 2013;6(1):487–525. doi: 10.1137/120867172.
- 29. Bredies K, Kunisch K, Pock T. Total generalized variation. SIAM J. Imaging Sci. 2011;3:492–526. doi: 10.1137/090769521.
- 30. Bredies, K., Valkonen, T.: Inverse problems with second-order total generalized variation constraints. In: Proceedings of the 9th International Conference on Sampling Theory and Applications (SampTA) 2011, Singapore (2011)
- 31. Temam, R.: Mathematical Problems in Plasticity. Gauthier-Villars (1985)
- 32. Chambolle A, Pock T. On the ergodic convergence rates of a first-order primal-dual algorithm. Math. Program. 2015.
- 33. Goldstein T, Li M, Yuan X. Adaptive primal–dual splitting methods for statistical learning and image processing. Adv. Neural Inf. Process. Syst. 2015;28:2080–2088.
- 34. de Los Reyes, J.C., Schönlieb, C.B., Valkonen, T.: Bilevel parameter learning for higher-order total variation regularisation models. J. Math. Imaging Vis. (2016). doi:10.1007/s10851-016-0662-8. Published online
- 35. Chen K, Lorenz DA. Image sequence interpolation using optimal control. J. Math. Imaging Vis. 2011;41:222–238. doi: 10.1007/s10851-011-0274-2.
- 36. Chambolle A. An algorithm for mean curvature motion. Interfaces Free Bound. 2004;6(2):195. doi: 10.4171/IFB/97.
- 37. Suzuki, T.: Stochastic dual coordinate ascent with alternating direction multiplier method (2013). arXiv:1311.0622v1

