Abstract
We aim to solve a structured convex optimization problem in which a nonsmooth function is composed with a linear operator. When opting for full splitting schemes, primal–dual type methods are usually employed, as they are effective and also well studied. However, under the additional assumption of Lipschitz continuity of the nonsmooth function which is composed with the linear operator, we can derive novel algorithms through regularization via the Moreau envelope. Furthermore, we tackle large-scale problems by means of stochastic oracle calls, very similar to stochastic gradient techniques. Applications to total variation denoising and deblurring, and to matrix factorization are provided.
Keywords: Structured convex optimization problem, Variable smoothing algorithm, Convergence rate, Stochastic gradients
Introduction
The problem at hand is the following structured convex optimization problem
$$\min_{x\in\mathcal{H}}\; f(x) + g(Kx) \tag{1}$$
for real Hilbert spaces $\mathcal{H}$ and $\mathcal{G}$, a proper, convex and lower semicontinuous function $f:\mathcal{H}\to(-\infty,+\infty]$, a, possibly nonsmooth, convex and Lipschitz continuous function $g:\mathcal{G}\to\mathbb{R}$, and a linear continuous operator $K:\mathcal{H}\to\mathcal{G}$.
Our aim will be to devise an algorithm for solving (1) following the full splitting paradigm (see [5, 6, 8, 9, 15, 17, 29]). In other words, we allow only proximal evaluations for simple nonsmooth functions, but no proximal evaluations for compositions with linear continuous operators, like, for instance, $\operatorname{prox}_{g\circ K}$.
We will accomplish this feat by means of a smoothing strategy, which, for the purpose of this paper, means making use of the Moreau–Yosida approximation. The approach can be described as follows: we “smooth” g, i.e. we replace it by its Moreau envelope, and solve the resulting optimization problem by an accelerated proximal-gradient algorithm (see [3, 13, 21]). This approach is similar to those in [7, 10, 11, 20, 22], where a convergence rate of $\mathcal{O}(\ln(k)/k)$ is proved. However, our techniques (for the deterministic case) resemble more closely the ones in [28], where an improved rate of $\mathcal{O}(1/k)$ is shown, the most notable differences to our work being that we use a simpler stepsize and also treat the stochastic case.
The only other family of methods able to solve problems of type (1) are the so-called primal–dual algorithms, first and foremost the primal–dual hybrid gradient (PDHG) method introduced in [15]. In comparison, this method does not need the Lipschitz continuity of g in order to prove convergence. However, in this very general case, convergence rates can only be shown for the so-called restricted primal–dual gap function. In order to derive from here convergence rates for the primal objective function, either Lipschitz continuity of g or finite dimensionality of the problem plus the condition that g have full domain is necessary (see, for instance, [5, Theorem 9]). This means that for infinite dimensional problems the assumptions required by both PDHG and our method for deriving convergence rates for the primal objective function are in fact equal, but for finite dimensional problems the assumptions of PDHG are weaker. In either case, however, we are able to prove these rates for the sequence of iterates itself, whereas PDHG only has them for the sequence of so-called ergodic iterates, i.e. $\bar{x}_k = \frac{1}{k}\sum_{i=1}^{k} x_i$, which is naturally undesirable as the averaging slows the convergence down. Furthermore, we do not show any convergence of the iterates themselves, as such results are notoriously hard to obtain for accelerated methods, whereas PDHG obtains them in the strongly convex setting via standard fixed point arguments (see e.g. [29]).
Furthermore, we will also consider the case where only a stochastic oracle of the proximal operator of g is available to us. This setup corresponds e.g. to the case where the objective function is given as
$$F(x) = f(x) + \sum_{i=1}^{m} g_i(K_i x) \tag{2}$$
where, for $i = 1,\dots,m$, the $\mathcal{G}_i$ are real Hilbert spaces, $g_i:\mathcal{G}_i\to\mathbb{R}$ are convex and Lipschitz continuous functions and $K_i:\mathcal{H}\to\mathcal{G}_i$ are linear continuous operators; the number of summands being large, we do not wish to compute the proximal operators of all $g_i$ in every iteration, so as to keep the individual iterations cheap.
For the finite sum case (2), there exist algorithms of a similar spirit, such as those in [14, 24]. Some algorithms do in fact deal with a similar setup of stochastic gradient-like evaluations, see [26], but only for smooth terms in the objective function.
In Sect. 2 we will cover the preliminaries about the Moreau–Yosida envelope as well as useful identities and estimates connected to it. In Sect. 3 we will deal with the deterministic case and prove a convergence rate of $\mathcal{O}(1/k)$ for the function values at the iterates. Next, in Sect. 4, we will consider the stochastic case as described above and prove a convergence rate of $\mathcal{O}(1/\sqrt{k})$ in expectation. Last but not least, we will look at some numerical examples in image processing in Sect. 5.
It is important to note that the proof for the deterministic setting differs surprisingly from the one for the stochastic setting. The technique for the stochastic setting is less refined in the sense that there is no coupling between the smoothing parameter and the extrapolation parameter. While this technique also works in the deterministic setting, it gives a worse convergence rate there. The tight coupling of the two sequences of parameters, however, does not work in the proof of the stochastic algorithm, as it does not allow for the particular choice of the smoothing parameters needed there.
Preliminaries
In the main problem (1), the nonsmooth regularizer g is supposed to be Lipschitz continuous. This assumption is needed for our main convergence results; many of the preliminary lemmata of this section, however, hold true if the function is merely assumed to be proper, convex and lower semicontinuous. We will point this out in every statement of this section individually.
Definition 2.1
For a proper, convex and lower semicontinuous function $g:\mathcal{G}\to(-\infty,+\infty]$, its convex conjugate is denoted by $g^{*}$, defined as a function from $\mathcal{G}$ to $(-\infty,+\infty]$, given by
$$g^{*}(y) = \sup_{x\in\mathcal{G}}\{\langle x, y\rangle - g(x)\}.$$
As mentioned in the introduction, we want to smooth a nonsmooth function by considering its Moreau envelope. The next definition will clarify exactly what object we are talking about.
Definition 2.2
For a proper, convex and lower semicontinuous function $g:\mathcal{G}\to(-\infty,+\infty]$, its Moreau envelope with the parameter $\mu > 0$ is defined as a function $g^{\mu}$ from $\mathcal{G}$ to $\mathbb{R}$, given by
$$g^{\mu}(x) = \min_{y\in\mathcal{G}}\left\{g(y) + \frac{1}{2\mu}\|x - y\|^{2}\right\}.$$
From this definition, however, it is not completely evident that the Moreau envelope indeed fulfills its purpose of being a smooth representation of the original function. The next lemma will remedy this.
Lemma 2.1
(see [2, Proposition 12.29]) Let $g:\mathcal{G}\to(-\infty,+\infty]$ be a proper, convex and lower semicontinuous function and $\mu > 0$. Then its Moreau envelope $g^{\mu}$ is Fréchet differentiable on $\mathcal{G}$. In particular, the gradient itself is given by
$$\nabla g^{\mu}(x) = \frac{1}{\mu}\big(x - \operatorname{prox}_{\mu g}(x)\big) \quad\text{for all } x\in\mathcal{G},$$
and is $\frac{1}{\mu}$-Lipschitz continuous.
In particular, for all $x\in\mathcal{G}$, a gradient step with respect to the Moreau envelope corresponds to a proximal step
$$x - \mu\nabla g^{\mu}(x) = \operatorname{prox}_{\mu g}(x).$$
The previous lemma establishes two things. Not only does it clarify the smoothness of the Moreau envelope, but it also gives a way of computing its gradient. Obviously, a smooth representation whose gradient we were not able to compute would not be of much use.
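To make this concrete, consider $g = \|\cdot\|_{1}$, whose proximal map is the well-known soft-thresholding operator; the following minimal numpy sketch (function names are ours, for illustration only) evaluates $\nabla g^{\mu}$ via Lemma 2.1:

```python
import numpy as np

def prox_l1(x, mu):
    # Proximal map of mu * ||.||_1: componentwise soft-thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

def grad_envelope_l1(x, mu):
    # Lemma 2.1: grad g^mu(x) = (x - prox_{mu g}(x)) / mu.
    return (x - prox_l1(x, mu)) / mu

mu = 0.5
x = np.array([-2.0, -0.3, 0.0, 0.7, 1.5])
# The result is the Huber gradient: x/mu on [-mu, mu], sign(x) outside;
# in particular its norm never exceeds L_g = 1 (cf. Lemma 2.3 below).
print(grad_envelope_l1(x, mu))  # [-1.  -0.6  0.   1.   1. ]
```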
As mentioned in the introduction, we want to smooth the nonsmooth summand of the objective function which is composed with the linear operator, as this composition can be considered the crux of problem (1). The function $g\circ K$ will be smoothed via considering instead $g^{\mu}\circ K$. Clearly, by the chain rule, this function is continuously differentiable with gradient given for every $x\in\mathcal{H}$ by
$$\nabla\big(g^{\mu}\circ K\big)(x) = K^{*}\nabla g^{\mu}(Kx) = \frac{1}{\mu}K^{*}\big(Kx - \operatorname{prox}_{\mu g}(Kx)\big),$$
and is thus Lipschitz continuous with Lipschitz constant $\frac{\|K\|^{2}}{\mu}$, where $\|K\|$ denotes the operator norm of K.
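As a sketch under the same assumptions as above, with a dense matrix standing in for the operator K, the composed gradient and its Lipschitz constant read:

```python
import numpy as np

rng = np.random.default_rng(0)
K = rng.standard_normal((20, 10))   # toy stand-in for the operator K
x = rng.standard_normal(10)
mu = 0.5

def prox_l1(y, mu):
    return np.sign(y) * np.maximum(np.abs(y) - mu, 0.0)

Kx = K @ x
grad = K.T @ ((Kx - prox_l1(Kx, mu)) / mu)   # = K^* grad g^mu(Kx)
L = np.linalg.norm(K, 2) ** 2 / mu           # Lipschitz constant ||K||^2 / mu
```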
Lipschitz continuity will play an integral role in our investigations, as can be seen by the following lemmas.
Lemma 2.2
(see [4, Proposition 4.4.6]) Let $g:\mathcal{G}\to\mathbb{R}$ be a convex and $L_g$-Lipschitz continuous function. Then the domain of its Fenchel conjugate is bounded, i.e.
$$\operatorname{dom} g^{*} \subseteq \overline{B}(0, L_g),$$
where $\overline{B}(0, L_g)$ denotes the closed ball of radius $L_g$ around the origin.
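As a one-dimensional illustration of the lemma (which also shows that the closed ball cannot be replaced by a smaller set), consider $g = L_g|\cdot|$ on $\mathbb{R}$:
$$g^{*}(y) = \sup_{x\in\mathbb{R}}\{xy - L_g|x|\} = \begin{cases} 0, & |y| \le L_g,\\ +\infty, & |y| > L_g,\end{cases}$$
so that $\operatorname{dom} g^{*} = [-L_g, L_g] = \overline{B}(0, L_g)$, with the bound attained on the boundary.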
The Moreau envelope even preserves the Lipschitz continuity of the original function.
Lemma 2.3
(see [18, Lemma 2.1]) Let $g:\mathcal{G}\to\mathbb{R}$ be a convex and $L_g$-Lipschitz continuous function. Then its Moreau envelope $g^{\mu}$ is $L_g$-Lipschitz as well, i.e.
$$|g^{\mu}(x) - g^{\mu}(y)| \le L_g\|x - y\| \quad\text{for all } x, y\in\mathcal{G}.$$
Proof
We observe that for all $x\in\mathcal{G}$
$$\nabla g^{\mu}(x) = \frac{1}{\mu}\big(x - \operatorname{prox}_{\mu g}(x)\big) \in \partial g\big(\operatorname{prox}_{\mu g}(x)\big).$$
Therefore we can bound the gradient norm
$$\|\nabla g^{\mu}(x)\| \le L_g \quad\text{for all } x\in\mathcal{G}, \tag{3}$$
where we used in the last step the Lipschitz continuity of g, which forces all subgradients to be bounded in norm by $L_g$. The statement follows from the mean value theorem.
The following lemmata are not new, but we provide proofs anyway in order to remain self-contained and to shed light, for the interested reader, on how to work with the Moreau envelope.
Lemma 2.4
(see [28, Lemma 10 (a)]) Let $g:\mathcal{G}\to(-\infty,+\infty]$ be proper, convex and lower semicontinuous. The maximizing argument in the dual representation of the Moreau–Yosida envelope is given by its gradient, i.e. for $\mu > 0$ and every $x\in\mathcal{G}$ it holds that
$$\nabla g^{\mu}(x) = \operatorname*{arg\,max}_{y\in\mathcal{G}}\left\{\langle x, y\rangle - g^{*}(y) - \frac{\mu}{2}\|y\|^{2}\right\}.$$
Proof
Let $x\in\mathcal{G}$ be fixed. It holds that
$$g^{\mu}(x) = \min_{y\in\mathcal{G}}\left\{g(y) + \frac{1}{2\mu}\|x - y\|^{2}\right\} = \max_{y\in\mathcal{G}}\left\{\langle x, y\rangle - g^{*}(y) - \frac{\mu}{2}\|y\|^{2}\right\},$$
where the maximum is uniquely attained since the inner function is strongly concave, and the conclusion follows by using Lemma 2.1.
Lemma 2.5
(see [28, Lemma 10 (a)]) For a proper, convex and lower semicontinuous function $g:\mathcal{G}\to(-\infty,+\infty]$ and every $x\in\mathcal{G}$ we can consider the mapping from $(0,+\infty)$ to $\mathbb{R}$ given by
$$\mu \mapsto g^{\mu}(x) = \max_{y\in\mathcal{G}}\left\{\langle x, y\rangle - g^{*}(y) - \frac{\mu}{2}\|y\|^{2}\right\}. \tag{4}$$
This mapping is convex and differentiable and its derivative is given by
$$\frac{d}{d\mu}\, g^{\mu}(x) = -\frac{1}{2}\|\nabla g^{\mu}(x)\|^{2}.$$
Proof
Let $x\in\mathcal{G}$ be fixed. From the representation (4) we can see that the mapping is a pointwise supremum of functions which are affine in $\mu$. It is therefore convex. Furthermore, since the inner objective function is strongly concave in y, this supremum is uniquely attained at $\nabla g^{\mu}(x)$, by Lemma 2.4. According to Danskin's Theorem, the function is differentiable and its derivative is given by
$$\frac{d}{d\mu}\, g^{\mu}(x) = -\frac{1}{2}\|\nabla g^{\mu}(x)\|^{2}.$$
Lemma 2.6
([28, Lemma 10 (b)]) Let $g:\mathcal{G}\to(-\infty,+\infty]$ be proper, convex and lower semicontinuous. For $\mu_1, \mu_2 > 0$ and every $x\in\mathcal{G}$ it holds
$$g^{\mu_2}(x) \ge g^{\mu_1}(x) + \frac{\mu_1 - \mu_2}{2}\,\|\nabla g^{\mu_1}(x)\|^{2}. \tag{5}$$
If g is additionally $L_g$-Lipschitz and if $\mu_1 \ge \mu_2 > 0$, then
$$g^{\mu_1}(x) \le g^{\mu_2}(x) \le g^{\mu_1}(x) + \frac{\mu_1 - \mu_2}{2}\,L_g^{2}. \tag{6}$$
Proof
Let $x\in\mathcal{G}$ be fixed. Via Lemma 2.5 we know that the map $\mu \mapsto g^{\mu}(x)$ is convex and differentiable. We can therefore use the gradient inequality to deduce that
$$g^{\mu_2}(x) \ge g^{\mu_1}(x) - \frac{1}{2}\|\nabla g^{\mu_1}(x)\|^{2}(\mu_2 - \mu_1),$$
which is exactly the first statement of the lemma. The first inequality of (6) follows directly from the definition of the Moreau envelope, and the second one from (5) (with the roles of $\mu_1$ and $\mu_2$ interchanged) and (3).
By applying a limiting argument ($\mu_2 \downarrow 0$) it is easy to see that (6) implies that for any $\mu > 0$ and every $x\in\mathcal{G}$
$$g^{\mu}(x) \le g(x) \le g^{\mu}(x) + \frac{\mu}{2}L_g^{2}, \tag{7}$$
which shows that the Moreau envelope is always a lower approximation of the original function.
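As a worked example, for $g = |\cdot|$ on $\mathbb{R}$ (so that $L_g = 1$) the Moreau envelope is the Huber function
$$g^{\mu}(x) = \begin{cases} \dfrac{x^{2}}{2\mu}, & |x| \le \mu,\\ |x| - \dfrac{\mu}{2}, & |x| > \mu,\end{cases}$$
and one checks directly that $0 \le |x| - g^{\mu}(x) \le \frac{\mu}{2} = \frac{\mu}{2}L_g^{2}$, in accordance with (7), with the upper bound attained for $|x| \ge \mu$.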
Lemma 2.7
(see [28, Lemma 10 (c)]) Let $g:\mathcal{G}\to(-\infty,+\infty]$ be proper, convex and lower semicontinuous. Then, for $\mu_1, \mu_2 > 0$ and every $x, y\in\mathcal{G}$ we have that
$$g^{\mu_1}(x) - g^{\mu_2}(y) \le \langle\nabla g^{\mu_1}(x), x - y\rangle + \frac{\mu_2 - \mu_1}{2}\,\|\nabla g^{\mu_1}(x)\|^{2}.$$
Proof
Using Lemma 2.4 and the representation (4) of the Moreau–Yosida envelope we get that
$$g^{\mu_1}(x) = \langle x, \nabla g^{\mu_1}(x)\rangle - g^{*}\big(\nabla g^{\mu_1}(x)\big) - \frac{\mu_1}{2}\|\nabla g^{\mu_1}(x)\|^{2}$$
and
$$g^{\mu_2}(y) \ge \langle y, \nabla g^{\mu_1}(x)\rangle - g^{*}\big(\nabla g^{\mu_1}(x)\big) - \frac{\mu_2}{2}\|\nabla g^{\mu_1}(x)\|^{2}.$$
Subtracting the second relation from the first one yields the claim.
In the convergence proof of Lemma 3.3 we will need the inequality of the above lemma at the points Kx and Ky, namely
$$g^{\mu_1}(Kx) - g^{\mu_2}(Ky) \le \langle\nabla g^{\mu_1}(Kx), Kx - Ky\rangle + \frac{\mu_2 - \mu_1}{2}\,\|\nabla g^{\mu_1}(Kx)\|^{2}. \tag{8}$$
The following lemma is a standard result for convex and Fréchet differentiable functions.
Lemma 2.8
(see [23]) For a convex and Fréchet differentiable function $h:\mathcal{G}\to\mathbb{R}$ with $L$-Lipschitz continuous gradient we have that
$$h(x) \le h(y) + \langle\nabla h(y), x - y\rangle + \frac{L}{2}\|x - y\|^{2} \quad\text{for all } x, y\in\mathcal{G}.$$
By applying Lemma 2.8 with $g^{\mu}$, Kx and Ky instead of h, x and y, respectively, we obtain
$$g^{\mu}(Kx) \le g^{\mu}(Ky) + \langle\nabla g^{\mu}(Ky), Kx - Ky\rangle + \frac{1}{2\mu}\|Kx - Ky\|^{2}. \tag{9}$$
The following technical result will be used in the proof of the convergence statement.
Lemma 2.9
For and every we have that
Deterministic Method
Problem 3.1
The problem at hand reads
$$\min_{x\in\mathcal{H}}\; f(x) + g(Kx)$$
for a proper, convex and lower semicontinuous function $f:\mathcal{H}\to(-\infty,+\infty]$, a convex and $L_g$-Lipschitz continuous function $g:\mathcal{G}\to\mathbb{R}$, and a nonzero linear continuous operator $K:\mathcal{H}\to\mathcal{G}$.
The idea of the algorithm which we propose to solve (1) is to smooth g and then to solve the resulting problem by means of an accelerated proximal-gradient method.
Algorithm 3.1
(Variable Accelerated SmooThing (VAST)) Let , and a sequence of real numbers with and for every . Consider the following iterative scheme
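To fix ideas, the following minimal Python sketch shows one natural realization of such a scheme: an accelerated proximal-gradient step applied to $f + g^{\mu_k}\circ K$ with stepsize $\mu_k/\|K\|^{2}$, as suggested by the Lipschitz constant computed in Sect. 2. It is an illustration under these assumptions, not a verbatim transcription of Algorithm 3.1; prox_f(v, t) and prox_g(v, t) are assumed to implement $\operatorname{prox}_{tf}(v)$ and $\operatorname{prox}_{tg}(v)$, and K, Kt, normK, mus, ts are user-supplied.

```python
import numpy as np

def vast_sketch(prox_f, prox_g, K, Kt, normK, x0, mus, ts, n_iter):
    """Accelerated proximal-gradient loop on the smoothed objective
    f + g^{mu_k} o K; a sketch, not verbatim Algorithm 3.1.
    ts must have length at least n_iter + 1."""
    x_prev = x0.copy()
    x = x0.copy()
    for k in range(1, n_iter):
        mu = mus[k]
        gamma = mu / normK ** 2                            # stepsize mu_k / ||K||^2
        y = x + (ts[k] - 1.0) / ts[k + 1] * (x - x_prev)   # extrapolation
        Ky = K(y)
        grad = Kt((Ky - prox_g(Ky, mu)) / mu)              # = K^* grad g^{mu}(Ky)
        x_prev, x = x, prox_f(y - gamma * grad, gamma)
    return x
```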
Remark 3.1
The assumption can be removed but guarantees easier computation and is also in line with classical choices of in [13, 21].
Remark 3.2
The sequence given by
despite not appearing in the algorithm, will play a prominent role in the convergence proof. Due to the convention we have that
We also denote
The next theorem is the main result of this section and it will play a fundamental role when proving a convergence rate of $\mathcal{O}(1/k)$ for the sequence of function values.
Theorem 3.1
Consider the setup of Problem 3.1 and let and be the sequences generated by Algorithm 3.1. Assume that for every
and
Then, for every optimal solution of Problem 3.1, it holds
The proof of this result relies on several partial results which we prove in what follows.
Lemma 3.1
The following statement holds for every and every
Proof
Let be fixed. Since, by the definition of the proximal map, is the minimizer of a -strongly convex function we know that for every
Next we use the -smoothness of and the fact that to deduce
Lemma 3.2
Let be an optimal solution of Problem 3.1. Then it holds
Proof
We use the gradient inequality to deduce that for every and every
and plug this into the statement of Lemma 3.1 to conclude that
For we get that
Now we use the fact that and to obtain the conclusion.
Lemma 3.3
Let be an optimal solution of Problem 3.1. The following descent-type inequality holds for every
Proof
Let be fixed. We apply Lemma 3.1 with to deduce that
Using the convexity of f gives
| 10 |
Now, we use (8) to deduce that
| 11 |
and (9) to conclude that
| 12 |
Combining (10), (11) and (12) gives
The first term on the right hand side is but we would like it to be . Therefore we use Lemma 2.6 to deduce that
| 13 |
Next we want to estimate all the gradient norms by using Lemma 2.9, which says that
| 14 |
Now we combine the two terms containing and get that
By subtracting on both sides we finally obtain
Now we are in the position to prove Theorem 3.1.
Proof of Theorem 3.1
We start with the statement of Lemma 3.3 and use the assumption that
to make the last term in the inequality disappear for every
Now we use the assumption that
to get that for every
| 15 |
Let . Summing (15) from to and getting rid of the nonnegative term gives
Since , the above inequality is fulfilled also for . Using Lemma 3.2 shows that
The above inequality, however, is still in terms of the smoothed objective function. In order to go to the actual objective function we apply (7) and deduce that
Corollary 3.1
By choosing the parameters in the following way,
and for every
| 16 |
they fulfill
| 17 |
and
| 18 |
For this choice of the parameters we have that
Proof
Since and are scalar multiples of each other, (18) is equivalent to
and further to (by taking into account that for every )
| 19 |
Our update (16) for the sequence is chosen exactly so that it satisfies this relation. Plugging (19) into (17) gives for every the condition
which is equivalent to
and further to
Plugging in , we get that this is equivalent to
which is evidently fulfilled. Thus, the choices in (16) are indeed feasible for our algorithm.
Now we want to prove the claimed convergence rates. Via induction we show that
| 20 |
Evidently, this holds for . Assuming that (20) holds for , we easily see that
and, on the other hand,
In the following we prove a similar estimate for the sequence . To this end we show, again by induction, the following recursion for every
| 21 |
For this follows from the definition (19). Assume now that (21) holds for . From here we have that
Using (21) together with (20) we can check that for every
| 22 |
where we used in the last step the fact that .
The last thing to check is the fact that goes to zero like . First we check that for every
| 23 |
This can be seen via
By bringing to the other side we get that
from which we can deduce (23) by dividing by .
We plug in the estimate (23) in (21) and get for every
With the above inequalities we can deduce the claimed convergence rates. First note that from Theorem 3.1 we have
Now, in order to obtain the desired conclusion, we use the above estimates and deduce for every
where we used that
as shown in (22).
Remark 3.3
Consider the choice (see [21])
and
Since
we see that in this setting we have to choose
Thus, the sequence of optimal function values approaches a -approximation of the optimal objective value with a convergence rate of , i.e.
Stochastic Method
Problem 4.1
The problem is the same as in the deterministic case
other than the fact that at each iteration we are only given a stochastic estimator of the gradient $\nabla\big(g^{\mu}\circ K\big)(x) = K^{*}\nabla g^{\mu}(Kx)$.
Remark 4.1
See Algorithm 4.3 for a setting where such an estimator is easily computed.
For the stochastic quantities arising in this section we will use the following notation: for every , we denote by the smallest σ-algebra generated by the family of random variables and by the conditional expectation with respect to this σ-algebra.
Algorithm 4.1
(stochastic Variable Accelerated SmooThing (sVAST)) Let be a sequence of positive and nonincreasing real numbers, and a sequence of real numbers with and for every . Consider the following iterative scheme
where we make the standard assumptions on our gradient estimator, namely that it is unbiased, i.e.
and that it has bounded variance
for every .
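In the finite-sum setting (2) such an estimator can, for instance, be obtained by uniform sampling; a hypothetical sketch (names and signatures are ours):

```python
import numpy as np

def sampled_gradient(x, Ks, Kts, prox_gs, mu, rng):
    """Unbiased estimator of sum_i K_i^* grad g_i^{mu}(K_i x):
    pick one index uniformly at random and reweight by m.
    The variance is bounded since each grad g_i^{mu} is bounded
    in norm by the Lipschitz constant of g_i (Lemma 2.3)."""
    m = len(Ks)
    i = rng.integers(m)
    Kx = Ks[i](x)
    return m * Kts[i]((Kx - prox_gs[i](Kx, mu)) / mu)
```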
Note that we use the same notations as in the deterministic case
Lemma 4.1
The following statement holds for every (deterministic) and every
Proof
Here we have to proceed a little differently from Lemma 3.1; namely, we have to treat the gradient step and the proximal step separately. For this purpose we define the auxiliary variable
Let be fixed. From the gradient step we get
Taking the conditional expectation gives
Using the gradient inequality we deduce
and therefore
| 24 |
Also from the smoothness of we deduce via the Descent Lemma that
Plugging in the definition of and using the fact that we get
Now we take the conditional expectation to deduce that
| 25 |
Multiplying (25) by and adding it to (24) gives
Now we use the assumption about the bounded variance to deduce that
| 26 |
Next, for the proximal step we deduce
| 27 |
Taking the conditional expectation and combining (26) and (27) we get
From here, using now Lemma 2.3, we get that
Now we use
to obtain that
Lemma 4.2
Let be an optimal solution of Problem 4.1. Then it holds
Proof
Applying the previous lemma with and , we get that
Therefore, using the fact that and ,
which finishes the proof.
Theorem 4.1
Consider the setup of Problem 4.1 and let and denote the sequences generated by Algorithm 4.1. Assume that for all
Then, for every optimal solution of Problem 4.1, it holds
Proof of Theorem 4.1
Let be fixed. Lemma 4.1 for gives
From here and from the convexity of it follows that
Now, by multiplying both sides by , we deduce
| 28 |
Next, adding on both sides of (28) gives
Utilizing (6) together with the assumption that is nonincreasing leads to
Now, using that , we get
Multiplying both sides with and putting all terms on the correct sides yields
| 29 |
At this point we would like to discard the term , which we currently cannot do, as the positivity of is not ensured. So we add on both sides of (29) and get
| 30 |
Using again (6) to deduce that
we can now discard said term from (30), giving
| 31 |
Last but not least we use that and to deduce that
| 32 |
Combining (31) and (32) yields
| 33 |
Let . We take the expected value on both sides of (33) and sum from to . Getting rid of the nonnegative terms gives
Since , the above inequality holds also for . Now, using Lemma 4.2 we get that for every
From (7) we deduce that
therefore, for every
Using the fact that for every gives
Thus,
Corollary 4.1
Let
and, for ,
Then,
Furthermore, we have that converges almost surely to 0 as .
Proof
First we notice that the choice of fulfills that
Now we derive the stated convergence result by first showing via induction that
Assuming that this holds for , we have that
and
Furthermore, for every we have that
| 34 |
The statement of the convergence rate in expectation now follows by plugging our parameter choices into the statement of Theorem 4.1, using the estimate (34) and checking that
The almost sure convergence of can be deduced by looking at (33), dividing by and using that as well as , which gives for every
Plugging in our choice of parameters gives for every
where .
Thus, by the famous Robbins-Siegmund Theorem (see [25, Theorem 1]) we get that converges almost surely. In particular, from the convergence to 0 in expectation we know that the almost sure limit must also be the constant zero.
Finite Sum The formulation of the previous section can be used to deal, e.g., with problems of the form
$$\min_{x\in\mathcal{H}}\; f(x) + \sum_{i=1}^{m} g_i(K_i x) \tag{35}$$
for a proper, convex and lower semicontinuous function $f:\mathcal{H}\to(-\infty,+\infty]$, convex and Lipschitz continuous functions $g_i:\mathcal{G}_i\to\mathbb{R}$ and linear continuous operators $K_i:\mathcal{H}\to\mathcal{G}_i$ for $i = 1,\dots,m$.
Clearly one could consider the product space $\mathcal{G} := \mathcal{G}_1\times\cdots\times\mathcal{G}_m$ together with
$$g:\mathcal{G}\to\mathbb{R},\qquad g(y_1,\dots,y_m) = \sum_{i=1}^{m} g_i(y_i)$$
and
$$K:\mathcal{H}\to\mathcal{G},\qquad Kx = (K_1 x,\dots,K_m x)$$
in order to reformulate the problem as
$$\min_{x\in\mathcal{H}}\; f(x) + g(Kx)$$
and use Algorithm 3.1 together with the parameter choices described in Corollary 3.1 on this. This results in the following algorithm.
Algorithm 4.2
Let , for , and . Consider the following iterative scheme
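The building blocks of this product-space reformulation are straightforward; a sketch with dense matrices standing in for the $K_i$ (names are ours): the stacked operator applies all $K_i$, its adjoint sums the individual adjoints, and the prox of g splits blockwise.

```python
def K_stack(Ks, x):
    # K x = (K_1 x, ..., K_m x)
    return [Ki @ x for Ki in Ks]

def Kt_stack(Ks, ys):
    # K^*(y_1, ..., y_m) = sum_i K_i^* y_i
    return sum(Ki.T @ yi for Ki, yi in zip(Ks, ys))

def prox_g_stack(prox_gs, ys, mu):
    # prox of mu * g splits blockwise since g(y) = sum_i g_i(y_i)
    return [p(yi, mu) for p, yi in zip(prox_gs, ys)]
```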
However, Problem (35) also lends itself to being tackled via the stochastic version of our method, Algorithm 4.1, by randomly choosing a subset of the summands in each iteration. Together with the parameter choices described in Corollary 4.1, this results in the following scheme.
Algorithm 4.3
Let , and . Consider the following iterative scheme
with a sequence of i.i.d. random variables and .
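A simplified sketch of the resulting loop, under the same assumptions as before and omitting the extrapolation step for brevity (batch, b and all operators are user-supplied; the $1/\sqrt{k}$ decay of the smoothing parameter follows Corollary 4.1 only in spirit):

```python
import numpy as np

def svast_sketch(prox_f, prox_gs, Ks, Kts, normK, x0, b, n_iter, batch, rng):
    """Stochastic variable smoothing for the finite sum (35);
    a simplified sketch, not verbatim Algorithm 4.3."""
    m = len(Ks)
    x = x0.copy()
    for k in range(1, n_iter):
        mu = b / np.sqrt(k)                # decreasing smoothing parameter
        gamma = mu / normK ** 2            # stepsize tied to the smoothing
        idx = rng.choice(m, size=batch, replace=False)
        grad = sum(                        # unbiased minibatch estimator
            (m / batch) * Kts[i]((Ks[i](x) - prox_gs[i](Ks[i](x), mu)) / mu)
            for i in idx
        )
        x = prox_f(x - gamma * grad, gamma)
    return x
```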
The above two methods were not explicitly developed for this separable case and can therefore not make use of a more refined estimation of the constant , as is done in, e.g., [14]. In the stochastic case, however, this is remedied by the scaling of the stepsize with respect to the i-th component by .
Remark 4.2
In theory, Algorithm 4.1 could be used to treat more general stochastic problems than finite sums like (35), but in that general case it is no longer clear how a gradient estimator can be found, so we do not discuss it here.
Numerical Examples
We will focus our numerical experiments on image processing problems. The examples are implemented in Python using the operator discretization library (ODL) [1]. We define the discrete gradient operators $D_1$ and $D_2$, representing the discretized derivative in the first and second coordinate, respectively, which we will need for the numerical examples. Both map from $\mathbb{R}^{n_1\times n_2}$ to $\mathbb{R}^{n_1\times n_2}$ and are defined via forward differences by
$$(D_1 x)_{i,j} = \begin{cases} x_{i+1,j} - x_{i,j}, & i < n_1,\\ 0, & i = n_1,\end{cases}$$
and
$$(D_2 x)_{i,j} = \begin{cases} x_{i,j+1} - x_{i,j}, & j < n_2,\\ 0, & j = n_2.\end{cases}$$
The operator norm of $D_1$ and $D_2$, respectively, is bounded by 2 (where we equip $\mathbb{R}^{n_1\times n_2}$ with the Frobenius norm). This yields an operator norm of at most $\sqrt{8}$ for the total gradient $D = (D_1, D_2)$ as a map from $\mathbb{R}^{n_1\times n_2}$ to $\mathbb{R}^{n_1\times n_2}\times\mathbb{R}^{n_1\times n_2}$, see also [12].
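A numpy sketch of these operators (with our indexing conventions) together with a power iteration that numerically confirms the norm bound:

```python
import numpy as np

def D1(x):
    # Forward difference in the first coordinate, last row set to zero.
    d = np.zeros_like(x)
    d[:-1, :] = x[1:, :] - x[:-1, :]
    return d

def D2(x):
    # Forward difference in the second coordinate, last column set to zero.
    d = np.zeros_like(x)
    d[:, :-1] = x[:, 1:] - x[:, :-1]
    return d

def D1t(y):
    # Adjoint of D1, derived from <D1 x, y> = <x, D1t y>.
    d = np.zeros_like(y)
    d[0, :] = -y[0, :]
    d[1:-1, :] = y[:-2, :] - y[1:-1, :]
    d[-1, :] = y[-2, :]
    return d

def D2t(y):
    # Adjoint of D2.
    d = np.zeros_like(y)
    d[:, 0] = -y[:, 0]
    d[:, 1:-1] = y[:, :-2] - y[:, 1:-1]
    d[:, -1] = y[:, -2]
    return d

# Power iteration on D^T D with D = (D1, D2); the Rayleigh quotient
# approaches ||D||^2 <= 8, i.e. ||D|| is close to sqrt(8) ~ 2.828.
rng = np.random.default_rng(0)
x = rng.standard_normal((64, 64))
for _ in range(500):
    x = D1t(D1(x)) + D2t(D2(x))
    x /= np.linalg.norm(x)
print(np.sqrt(np.sum(x * (D1t(D1(x)) + D2t(D2(x))))))  # ~ 2.82
```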
We will compare our methods, i.e. the Variable Accelerated SmooThing (VAST) and its stochastic counterpart (sVAST), to the Primal–Dual Hybrid Gradient (PDHG) method of [15] as well as to its stochastic version (sPDHG) from [14]. Furthermore, we include another competitor, the method by Pesquet and Repetti [24], which is a different stochastic version of PDHG (see also [29]).
In all examples we choose the parameters in accordance with [14]:
for PDHG and Pesquet&Repetti:
for sPDHG: and ,
where .
Total Variation Denoising
The task at hand is to reconstruct an image from its noisy observation b. We do this by solving
$$\min_{x}\; \frac{1}{2}\|x - b\|^{2} + \lambda\|Dx\|_{2,1}$$
with $\lambda > 0$ as regularization parameter, in the following setting: .
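Under this reading of the model, with $f = \frac{1}{2}\|\cdot - b\|^{2}$ and $g = \lambda\|\cdot\|_{2,1}$ composed with $D = (D_1, D_2)$, the two proximal maps the algorithms need are elementary; a minimal sketch with hypothetical names:

```python
import numpy as np

def prox_f(x, b, gamma):
    # prox of gamma * (1/2)||. - b||^2: a weighted average with the data.
    return (x + gamma * b) / (1.0 + gamma)

def prox_g(y1, y2, t, lam):
    # prox of t * lam * ||.||_{2,1}: pixelwise block soft-thresholding
    # of the gradient field (y1, y2).
    norms = np.sqrt(y1 ** 2 + y2 ** 2)
    scale = np.maximum(norms - t * lam, 0.0) / np.maximum(norms, 1e-12)
    return scale * y1, scale * y2
```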
Figure 1 illustrates the images (of dimension and ) used for this example. These include the ground truth, i.e. the uncorrupted image, as well as the data b for the optimization problem, which visualizes the level of noise. In Fig. 2 we can see that in the deterministic setting our method performs as well as PDHG. For the objective function values, Fig. 2b, this is not too surprising, as both algorithms share the same convergence rate. For the distance to a solution, however, we completely lack a convergence result. Nevertheless, in Fig. 2a we can see that our method also performs well with respect to this measure.
Fig. 1.
TV denoising. Images used. The approximate solution is computed by running PDHG for 7000 iterations
Fig. 2.
TV denoising. Plots illustrating the performance of different methods
In the stochastic setting we can see in Fig. 2 that, while sPDHG provides some benefit over its deterministic counterpart, the stochastic version of our method, although it significantly increases the variance, provides a great benefit, at least for the objective function values.
Furthermore, Fig. 3 shows the reconstructions of sPDHG and of our method, which are, despite the different objective function values, quite comparable.
Fig. 3.
TV Denoising. A comparison of the reconstruction for the stochastic variable smoothing method and the stochastic PDHG
Total Variation Deblurring
For this example we want to reconstruct an image from a blurred and noisy observation. We assume the blurring operator $A$ to be known. This is done by solving
$$\min_{x}\; \|Ax - b\|_{1} + \lambda\|Dx\|_{2,1} \tag{36}$$
for $\lambda > 0$ as regularization parameter, in the following setting: .
Figure 4 shows the images used to set up the optimization problem (36), in particular Fig. 4b, which corresponds to b in said problem.
Fig. 4.
TV Deblurring. The approximate solution is computed by running PDHG for 3000 iterations
In Fig. 5 we see that while PDHG performs better in the deterministic setting, in particular in the later iterations, the stochastic variable smoothing method provides a significant improvement, whereas the sPDHG method seems not to converge. It is interesting to note that in this setting even the deterministic version of our algorithm exhibits a slightly chaotic behaviour. Although neither of the two methods is monotone in the primal objective function, PDHG seems much more stable here.
Fig. 5.
TV deblurring. Plots illustrating the performance of different methods
Matrix Factorization
In this section we want to solve a nonconvex and nonsmooth optimization problem of completely positive matrix factorization, see [16, 19, 27]. For an observed matrix $A\in\mathbb{R}^{n\times n}$ we want to find a completely positive low-rank factorization, meaning that we are looking for an entrywise nonnegative matrix $x\in\mathbb{R}^{r\times n}$ (with small r) such that $x^{\mathsf{T}}x = A$. This can be formulated as the following optimization problem
$$\min_{x\in\mathbb{R}^{r\times n},\, x\ge 0}\; \|x^{\mathsf{T}}x - A\|_{1}, \tag{37}$$
where $x^{\mathsf{T}}$ denotes the transpose of the matrix x and $\|\cdot\|_{1}$ the entrywise 1-norm. The more natural approach might be to use a smooth formulation where $\frac{1}{2}\|x^{\mathsf{T}}x - A\|_{F}^{2}$ is used instead of the 1-norm we are suggesting. However, this smooth choice of distance measure comes with its own set of problems (mainly a non-Lipschitz gradient).
The so-called prox-linear method presented in [18] solves the above problem (37) by linearizing the smooth (matrix-valued) function inside the nonsmooth distance function. In particular, for the problem
$$\min_{x}\; g(c(x))$$
for a smooth vector-valued function c and a convex and Lipschitz continuous function g, [18] proposes to iteratively solve the subproblem
$$x_{k+1} = \operatorname*{arg\,min}_{x}\left\{ g\big(c(x_k) + \nabla c(x_k)(x - x_k)\big) + \frac{1}{2\tau}\|x - x_k\|^{2}\right\} \tag{38}$$
for a stepsize $\tau > 0$. For our particular problem described in (37) the subproblem looks as follows:
$$\min_{x\ge 0}\; \big\|x_k^{\mathsf{T}}x_k + x_k^{\mathsf{T}}(x - x_k) + (x - x_k)^{\mathsf{T}}x_k - A\big\|_{1} + \frac{1}{2\tau}\|x - x_k\|^{2} \tag{39}$$
and therefore fits our general setup described in (1) with the identifications $f(x) = \delta_{\{x\ge 0\}}(x) + \frac{1}{2\tau}\|x - x_k\|^{2}$, $g(y) = \|y - x_k^{\mathsf{T}}x_k - A\|_{1}$ and $Kh = x_k^{\mathsf{T}}h + h^{\mathsf{T}}x_k$. Moreover, due to its separable structure, the subproblem (39) fits the special case described in (35) and can therefore be tackled by the stochastic version of our algorithm presented in Algorithm 4.3. In particular, reformulating (39) for the stochastic finite sum setting, we interpret the subproblem as
$$\min_{x\ge 0}\; \sum_{i=1}^{n} \big\|\big(x_k^{\mathsf{T}}x + x^{\mathsf{T}}x_k - x_k^{\mathsf{T}}x_k - A\big)[i,:]\big\|_{1} + \frac{1}{2\tau}\|x - x_k\|^{2},$$
where A[i, :] denotes the i-th row of the matrix A (Fig. 6).
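A small sketch of how the pieces of (39) can be assembled, under our reading of the identifications above (function names hypothetical):

```python
import numpy as np

def linearized_residual(xk, x, A):
    # c(xk) + Dc(xk)[x - xk] for c(x) = x^T x, i.e. the matrix inside
    # the 1-norm of subproblem (39).
    return xk.T @ xk + xk.T @ (x - xk) + (x - xk).T @ xk - A

def subproblem_value(xk, x, A, tau):
    # Objective of (39); row i of the residual is the i-th summand of
    # the finite-sum interpretation used by Algorithm 4.3.
    r = linearized_residual(xk, x, A)
    return np.abs(r).sum() + np.linalg.norm(x - xk) ** 2 / (2.0 * tau)
```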
Fig. 6.
Comparison of the evolution of the objective function values for different starting points. We run 40 epochs with 5 iterations each. For each epoch we choose the last iterate of the previous epoch as the linearization point. For the stochastic methods we fix a priori the number of rows (batch size) which are randomly chosen in each update, and count d divided by this number as one iteration. For the randomly chosen initial point we use a batch size of 3 (to allow for more exploration) and for the one close to the solution we use 5 in order to achieve more accuracy. The parameter b in the variable smoothing method was chosen with minimal tuning to be 0.1 for both the deterministic and the stochastic version
In comparison to Sects. 5.1 and 5.2 a new aspect becomes important when evaluating methods for solving (38). Now, it is not only relevant how well subproblem (39) is solved, but also the trajectory taken in doing so as different paths might lead to different local minima. This can be seen in Fig. 6 where PDHG gets stuck early on in bad local minima. The variable smoothing method (especially the stochastic version) is able to move further from the starting point and find better local minima. Note that in general the methods have a difficulty in finding the global minimum (with optimal objective function value zero, as constructed in all examples).
Acknowledgements
The authors are thankful to two anonymous reviewers for comments and remarks which improved the quality of the presentation and led to the numerical experiment on matrix factorization.
Funding
Open access funding provided by Austrian Science Fund (FWF).
Footnotes
Research partially supported by FWF (Austrian Science Fund) project I 2419-N32. Research supported by the doctoral programme Vienna Graduate School on Computational Optimization (VGSCO), FWF (Austrian Science Fund), Project W 1260.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Contributor Information
Radu Ioan Boţ, Email: radu.bot@univie.ac.at.
Axel Böhm, Email: axel.boehm@univie.ac.at.
References
- 1.Adler, J., Kohr, H., Öktem, O.: Operator Discretization Library. https://odlgroup.github.io/odl/ (2017)
- 2. Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, New York (2011)
- 3. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009). doi:10.1137/080716542
- 4. Borwein, J.M., Vanderwerff, J.D.: Convex Functions: Constructions, Characterizations and Counterexamples. Cambridge University Press, Cambridge (2010)
- 5. Boţ, R.I., Csetnek, E.R.: On the convergence rate of a forward–backward type primal–dual splitting algorithm for convex optimization problems. Optimization 64(1), 5–23 (2015). doi:10.1080/02331934.2014.966306
- 6. Boţ, R.I., Csetnek, E.R., Heinrich, A., Hendrich, C.: On the convergence rate improvement of a primal–dual splitting algorithm for solving monotone inclusion problems. Math. Program. 150(2), 251–279 (2015). doi:10.1007/s10107-014-0766-0
- 7. Boţ, R.I., Hendrich, C.: A double smoothing technique for solving unconstrained nondifferentiable convex optimization problems. Comput. Optim. Appl. 54(2), 239–262 (2013). doi:10.1007/s10589-012-9523-6
- 8. Boţ, R.I., Hendrich, C.: A Douglas–Rachford type primal–dual method for solving inclusions with mixtures of composite and parallel-sum type monotone operators. SIAM J. Optim. 23(4), 2541–2565 (2013). doi:10.1137/120901106
- 9. Boţ, R.I., Hendrich, C.: Convergence analysis for a primal–dual monotone + skew splitting algorithm with applications to total variation minimization. J. Math. Imaging Vis. 49(3), 551–568 (2014). doi:10.1007/s10851-013-0486-8
- 10. Boţ, R.I., Hendrich, C.: On the acceleration of the double smoothing technique for unconstrained convex optimization problems. Optimization 64(2), 265–288 (2015). doi:10.1080/02331934.2012.745530
- 11. Boţ, R.I., Hendrich, C.: A variable smoothing algorithm for solving convex optimization problems. TOP 23(1), 124–150 (2015). doi:10.1007/s11750-014-0326-z
- 12. Chambolle, A.: An algorithm for total variation minimization and applications. J. Math. Imaging Vis. 20(1–2), 89–97 (2004)
- 13. Chambolle, A., Dossal, C.: On the convergence of the iterates of the Fast Iterative Shrinkage/Thresholding Algorithm. J. Optim. Theory Appl. 166(3), 968–982 (2015). doi:10.1007/s10957-015-0746-4
- 14. Chambolle, A., Ehrhardt, M.J., Richtárik, P., Schönlieb, C.-B.: Stochastic primal–dual hybrid gradient algorithm with arbitrary sampling and imaging applications. SIAM J. Optim. 28(4), 2783–2808 (2018). doi:10.1137/17M1134834
- 15. Chambolle, A., Pock, T.: A first-order primal–dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40(1), 120–145 (2011). doi:10.1007/s10851-010-0251-1
- 16. Chen, C., Pong, T.K., Tan, L., Zeng, L.: A difference-of-convex approach for split feasibility with applications to matrix factorizations and outlier detection. J. Glob. Optim. (2020). doi:10.1007/s10898-020-00899-8
- 17. Condat, L.: A primal–dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms. J. Optim. Theory Appl. 158(2), 460–479 (2013). doi:10.1007/s10957-012-0245-9
- 18. Drusvyatskiy, D., Paquette, C.: Efficiency of minimizing compositions of convex functions and smooth maps. Math. Program. 178, 1–56 (2019). doi:10.1007/s10107-018-1311-3
- 19. Groetzner, P., Dür, M.: A factorization method for completely positive matrices. Linear Algebra Appl. 591, 1–24 (2020). doi:10.1016/j.laa.2019.12.024
- 20. Nesterov, Y.: Smooth minimization of non-smooth functions. Math. Program. 103(1), 127–152 (2005). doi:10.1007/s10107-004-0552-5
- 21. Nesterov, Y.: A method for unconstrained convex minimization problem with the rate of convergence $\mathcal{O}(1/k^{2})$. Doklady Akademija Nauk USSR 269, 543–547 (1983)
- 22. Nesterov, Y.: Smoothing technique and its applications in semidefinite optimization. Math. Program. 110(2), 245–259 (2007). doi:10.1007/s10107-006-0001-8
- 23. Nesterov, Y.: Introductory Lectures on Convex Optimization: A Basic Course. Springer, New York (2013)
- 24. Pesquet, J.-C., Repetti, A.: A class of randomized primal–dual algorithms for distributed optimization. J. Nonlinear Convex Anal. 16(12), 2453–2490 (2015)
- 25. Robbins, H., Siegmund, D.: A convergence theorem for non negative almost supermartingales and some applications. In: Optimizing Methods in Statistics, Proceedings of a Symposium Held at the Center for Tomorrow, Ohio State University, June 14–16, pp. 233–257. Elsevier (1971)
- 26. Rosasco, L., Villa, S., Vũ, B.C.: A first-order stochastic primal–dual algorithm with correction step. Numer. Funct. Anal. Optim. 38(5), 602–626 (2017). doi:10.1080/01630563.2016.1254243
- 27. Shi, Q., Sun, H., Songtao, L., Hong, M., Razaviyayn, M.: Inexact block coordinate descent methods for symmetric nonnegative matrix factorization. IEEE Trans. Signal Process. 65(22), 5995–6008 (2017). doi:10.1109/TSP.2017.2731321
- 28. Tran-Dinh, Q., Fercoq, O., Cevher, V.: A smooth primal–dual optimization framework for nonsmooth composite convex minimization. SIAM J. Optim. 28(1), 96–134 (2018). doi:10.1137/16M1093094
- 29. Vũ, B.C.: A splitting algorithm for dual monotone inclusions involving cocoercive operators. Adv. Comput. Math. 38(3), 667–681 (2013). doi:10.1007/s10444-011-9254-8