Abstract
We consider the problem of minimizing a smooth convex objective function over the set of minima of another differentiable convex function. In order to solve this problem, we propose an algorithm that combines the gradient method with a penalization technique. Moreover, we insert into our algorithm an inertial term, which is able to take advantage of the history of the iterates. We show weak convergence of the generated sequence of iterates to an optimal solution of the optimization problem, provided that a condition expressed via the Fenchel conjugate of the constraint function is fulfilled. We also prove convergence of the objective function values to the optimal objective value. The convergence analysis carried out in this paper relies on the celebrated Opial Lemma and on generalized Fejér monotonicity techniques. We illustrate the functionality of the method via a numerical experiment addressing image classification via support vector machines.
Keywords: Gradient method, Penalization, Fenchel conjugate, Inertial algorithm
Introduction and preliminaries
Let H be a real Hilbert space with inner product $\langle \cdot,\cdot\rangle$ and associated norm $\|\cdot\| = \sqrt{\langle \cdot,\cdot\rangle}$, and let f and g be convex functions acting on H, which we assume for simplicity to be everywhere defined and (Fréchet) differentiable. The object of our investigation is the optimization problem

$$\min \{ f(x) : x \in \operatorname{argmin} g \}. \qquad (1)$$

We assume that the set S of optimal solutions of (1) is nonempty and that the gradients $\nabla f$ and $\nabla g$ are Lipschitz continuous operators with constants $L_f$ and $L_g$, respectively.
The work [5] of Attouch and Czarnecki has attracted considerable interest from the research community since its appearance, as it undertakes a qualitative analysis of the optimal solutions of (1) from the perspective of a penalty-term-based dynamical system. It represented the starting point for the design and development of numerical algorithms for solving the minimization problem (1) and several of its variants, involving also nonsmooth data and extending up to monotone inclusion problems related to the optimality systems of constrained optimization problems. We refer the reader to [4–8, 10, 11, 13–15, 20–23, 33, 35] and the references therein for more insights into this research topic.
A key assumption used in this context in order to guarantee the convergence properties of the numerical algorithms is the condition

$$\text{for every } p \in \operatorname{ran} N_{\operatorname{argmin} g}: \quad \sum_{n \ge 1} \lambda_n \beta_n \left[ g^*\!\left(\frac{p}{\beta_n}\right) - \sigma_{\operatorname{argmin} g}\!\left(\frac{p}{\beta_n}\right) \right] < +\infty,$$

where $(\lambda_n)_{n \ge 1}$ and $(\beta_n)_{n \ge 1}$ are positive sequences, $g^*$ is the Fenchel conjugate of g:

$$g^*(u) = \sup_{x \in H} \{ \langle u, x \rangle - g(x) \} \quad \text{for all } u \in H,$$

$\sigma_{\operatorname{argmin} g}$ is the support function of the set $\operatorname{argmin} g$:

$$\sigma_{\operatorname{argmin} g}(u) = \sup_{x \in \operatorname{argmin} g} \langle u, x \rangle \quad \text{for all } u \in H,$$

and $N_{\operatorname{argmin} g}$ is the normal cone to the set $\operatorname{argmin} g$, defined by

$$N_{\operatorname{argmin} g}(x) = \{ u \in H : \langle u, y - x \rangle \le 0 \ \text{for all } y \in \operatorname{argmin} g \}$$

for $x \in \operatorname{argmin} g$ and $N_{\operatorname{argmin} g}(x) = \emptyset$ for $x \notin \operatorname{argmin} g$. Finally, $\operatorname{ran} N_{\operatorname{argmin} g}$ denotes the range of the normal cone operator, that is, $p \in \operatorname{ran} N_{\operatorname{argmin} g}$ if and only if there exists $x \in \operatorname{argmin} g$ such that $p \in N_{\operatorname{argmin} g}(x)$. Let us notice that for $x \in \operatorname{argmin} g$ one has $p \in N_{\operatorname{argmin} g}(x)$ if and only if $\sigma_{\operatorname{argmin} g}(p) = \langle p, x \rangle$. We also assume without loss of generality that $\min g = 0$.
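To get a feeling for this condition, the following worked computation evaluates it for a squared distance penalty; the concrete choice of g is ours, made for illustration (see also Remark 12 and [10] for further examples). If $g(x) = \tfrac{1}{2}\operatorname{dist}^2(x, M)$ for a nonempty closed convex set $M \subseteq H$ (so that $\operatorname{argmin} g = M$ and $\min g = 0$), then $g = \tfrac{1}{2}\|\cdot\|^2 \,\square\, \delta_M$ (infimal convolution with the indicator function of M), hence $g^* = \tfrac{1}{2}\|\cdot\|^2 + \sigma_M$ and, for every $p$ and every $\beta_n > 0$,

$$g^*\!\left(\frac{p}{\beta_n}\right) - \sigma_M\!\left(\frac{p}{\beta_n}\right) = \frac{\|p\|^2}{2 \beta_n^2},$$

so the key condition reduces to the summability requirement

$$\sum_{n \ge 1} \lambda_n \beta_n \left[ g^*\!\left(\frac{p}{\beta_n}\right) - \sigma_M\!\left(\frac{p}{\beta_n}\right) \right] = \frac{\|p\|^2}{2} \sum_{n \ge 1} \frac{\lambda_n}{\beta_n} < +\infty \quad\Longleftrightarrow\quad \sum_{n \ge 1} \frac{\lambda_n}{\beta_n} < +\infty.$$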
In this paper we propose a numerical algorithm for solving (1) that combines the gradient method with penalization strategies and, in addition, employs inertial and memory effects. Algorithms of inertial type result from the time discretization of second-order differential inclusions (see [1, 3]) and were first investigated, in the context of the minimization of a differentiable function, by Polyak [36] and Bertsekas [12]. The resulting iterative schemes share the feature that the next iterate is defined by means of the last two iterates, a fact which induces the inertial effect in the algorithm. Since the works [1, 3], an increasing number of research efforts has been dedicated to algorithms of inertial type (see [1–3, 9, 16–19, 24–28, 30–32, 34]).
In this paper we consider the following inertial algorithm for solving (1):
Algorithm 1
Initialization: Choose the positive sequences $(\lambda_n)_{n \ge 1}$ and $(\beta_n)_{n \ge 1}$, and a positive constant parameter $\alpha$. Take arbitrary $x_0, x_1 \in H$.
Iterative step: For given current iterates $x_{n-1}$ and $x_n$ ($n \ge 1$), define $x_{n+1}$ by
$$x_{n+1} := x_n + \alpha (x_n - x_{n-1}) - \lambda_n \big( \nabla f(x_n) + \beta_n \nabla g(x_n) \big).$$
We notice that in the above iterative scheme $(\lambda_n)_{n \ge 1}$ represents the sequence of step sizes and $(\beta_n)_{n \ge 1}$ the sequence of penalty parameters, while $\alpha$ controls the influence of the inertial term.
For every $n \ge 1$ we denote by $h_n := f + \beta_n g$, which is also a (Fréchet) differentiable function, and notice that $\nabla h_n$ is $(L_f + \beta_n L_g)$-Lipschitz continuous.
In case $\alpha = 0$, Algorithm 1 collapses into the algorithm considered in [35] for solving (1). We prove weak convergence of the generated iterates to an optimal solution of (1) by making use of generalized Fejér monotonicity techniques and the Opial Lemma, and by imposing the key assumption mentioned above as well as some mild conditions on the involved parameters. Moreover, the performed analysis also allows us to show the convergence of the objective function values to the optimal objective value of (1). As an illustration of the theoretical results, we present in the last section an application addressing image classification via support vector machines.
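To fix ideas, the following minimal sketch (in Python/NumPy; the function handles, parameter schedules and iteration budget are our illustrative choices and are not taken from the paper) implements one plausible reading of the update, namely the heavy-ball-type step $x_{n+1} = x_n + \alpha(x_n - x_{n-1}) - \lambda_n(\nabla f(x_n) + \beta_n \nabla g(x_n))$.

```python
import numpy as np

def inertial_gradient_penalty(grad_f, grad_g, x0, x1, lam, beta, alpha, n_iter=1000):
    """Minimal sketch of Algorithm 1, assuming the heavy-ball-type update
    x_{n+1} = x_n + alpha*(x_n - x_{n-1}) - lam_n*(grad f(x_n) + beta_n*grad g(x_n)).

    grad_f, grad_g : gradients of the objective f and of the penalty function g
    x0, x1         : two starting points (the scheme uses the last two iterates)
    lam, beta      : callables n -> step size lambda_n and penalty parameter beta_n
    alpha          : inertial parameter (alpha = 0 recovers the scheme of [35])
    """
    x_prev, x = np.asarray(x0, dtype=float), np.asarray(x1, dtype=float)
    for n in range(1, n_iter + 1):
        x_next = x + alpha * (x - x_prev) - lam(n) * (grad_f(x) + beta(n) * grad_g(x))
        x_prev, x = x, x_next
    return x

# Toy usage (all concrete choices below are ours, for illustration only):
# f(x) = 1/2*||x - a||^2 minimized over argmin g, where g(x) = 1/2*dist^2(x, R^2_+),
# so argmin g is the nonnegative orthant and the solution is the projection of a onto it.
a = np.array([1.0, -2.0])
grad_f = lambda x: x - a
grad_g = lambda x: x - np.maximum(x, 0.0)      # nabla g(x) = x - P_{R^2_+}(x)
lam = lambda n: 1.0 / (n + 10.0)               # hypothetical step sizes
beta = lambda n: 0.5 * (n + 10.0)              # hypothetical penalty parameters
x_approx = inertial_gradient_penalty(grad_f, grad_g, np.zeros(2), np.zeros(2),
                                     lam, beta, alpha=0.2, n_iter=5000)
print(x_approx)                                # approaches the solution [1, 0]
```

With $\alpha = 0$ the loop reduces to the gradient-penalty scheme of [35]; the inertial term only requires storing one additional past iterate.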
Convergence analysis
This section is devoted to the asymptotic analysis of Algorithm 1.
Assumption 2
Assume that the following statements hold:
- (I) The function f is bounded from below;
- (II) There exist positive constants and such that and for all ;
- (III) For every $p \in \operatorname{ran} N_{\operatorname{argmin} g}$, we have $\sum_{n \ge 1} \lambda_n \beta_n \left[ g^*\!\left(\frac{p}{\beta_n}\right) - \sigma_{\operatorname{argmin} g}\!\left(\frac{p}{\beta_n}\right) \right] < +\infty$;
- (IV) , for all and .
We would like to mention that in [21] we proposed a forward-backward-forward algorithm of penalty type, endowed with inertial and memory effects, for solving monotone inclusion problems, which gave rise to a primal-dual iterative scheme for solving convex optimization problems with complex structures. However, there we succeeded in proving only weak ergodic convergence of the generated iterates, while with the specific choice of the sequences $(\lambda_n)_{n \ge 1}$ and $(\beta_n)_{n \ge 1}$ in Assumption 2 we will be able to prove weak convergence of the iterates generated by Algorithm 1 to an optimal solution of (1).
Remark 3
The conditions in Assumption 2 slightly extend the ones considered in [35] in the noninertial case. The only differences are given by the first inequality in (II), which here involves the constant $\alpha$ that controls the inertial term (for the corresponding condition in [35] one only has to take $\alpha = 0$), and by the inequality for all .
We refer to Remark 12 for situations where the fulfillment of the conditions in Assumption 2 is guaranteed.
We start the convergence analysis with three technical results.
Lemma 4
Let and set . We have for all
| 2 |
where .
Proof
Since , we have according to the first-order optimality conditions that , thus . Notice that for all
where . This, together with the monotonicity of , implies that
| 3 |
so
| 4 |
On the other hand, since g is convex and differentiable, we have for all
which means that
| 5 |
As for all
and
it follows
| 6 |
Combining (4), (5) and (6), we obtain that for each
| 7 |
Finally, since , we have that for all
which completes the proof.
Lemma 5
We have for all
| 8 |
Proof
From the descent Lemma and the fact that is -Lipschitz continuous, we get that
Since , it holds for all
and then
which is nothing else than
| 9 |
By the Cauchy–Schwarz inequality it holds that
hence, (9) becomes
For and all , we set
and, for simplicity, we denote
Lemma 6
Let and set . We have for all
| 10 |
Proof
According to Lemma 5 and Assumption 2(II), (8) becomes for all
| 11 |
On the other hand, after multiplying (2) by K, we obtain for all
| 12 |
After summing up the relations (11) and (12) and adding on both sides of the resulting inequality the expressions and for all , we obtain the required statement.
The following proposition will play an essential role in the convergence analysis (see also [1–3, 16]).
Proposition 7
Let and be real sequences and be given. Assume that is bounded from below, is nonnegative and such that
Then the following statements hold:
- (i) , where ;
- (ii) converges and .
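For orientation, a result of this type (our formulation, adapted from the cited inertial literature; the version used here may differ in minor details) reads as follows: let $(a_n)_{n \in \mathbb{N}}$ and $(\delta_n)_{n \in \mathbb{N}}$ be real sequences and $\alpha \in [0, 1)$ be given; if $(a_n)_{n \in \mathbb{N}}$ is bounded from below, $(\delta_n)_{n \in \mathbb{N}}$ is nonnegative and

$$a_{n+1} \le a_n + \alpha (a_n - a_{n-1}) - \delta_n \quad \text{for all } n \ge 1,$$

then $\sum_{n \ge 1} [a_n - a_{n-1}]_+ < +\infty$, where $[t]_+ := \max\{t, 0\}$, the sequence $(a_n)_{n \in \mathbb{N}}$ converges and $\sum_{n \ge 1} \delta_n < +\infty$.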
The following lemma collects some convergence properties of the sequences involved in our analysis.
Lemma 8
Let . Then the following statements are true:
- (i) The sequence is bounded from below.
- (ii) and exists.
- (iii) exists and .
- (iv) exists.
- (v) and every sequential weak cluster point of the sequence lies in .
Proof
We set and recall that .
(i) Since f is convex and differentiable, it holds for all
which means that is bounded from below. Notice that the first inequality in the above relation is a consequence of Assumption 2(II), since , thus for all .
(ii) For all , we may set
and
We fix a natural number . Then
Since f is bounded from below and , it follows that .
We notice that and, since , we have for all
| 13 |
Thus, according to Lemma 6, we get for all
We fix another natural number and sum up the last inequality for . We obtain
| 14 |
which, by taking into account Assumption 2(III), means that is bounded from above by a positive number that we denote by M. Consequently, for all we have
so
which further implies that
We have for all
hence
| 15 |
Consequently, for the arbitrarily chosen natural number , we have [see (14)]
which together with (15) and the fact that implies that
On the other hand, due to (13) we have for all . Consequently, using also that , (10) implies that
According to Proposition 7 and by taking into account that is bounded from below, we obtain that exists.
(iii) By Lemma 4 and Proposition 7, exists and .
(iv) Since for all , by using (ii) and (iii), we get that exists.
(v) Since , we also obtain that . Let w be a sequential weak cluster point of $(x_n)_{n \in \mathbb{N}}$ and assume that the subsequence $(x_{n_k})_{k \in \mathbb{N}}$ converges weakly to w. Since g is weakly lower semicontinuous, we have
which implies that . This completes the proof.
In order to also show the convergence of the sequence of objective function values, we first prove the following result.
Lemma 9
Let be given. We have
Proof
Since f is convex and differentiable, we have for all
Since g is convex and differentiable, we have for all
which together imply that
From here we obtain for all [see (6)]
Hence, by using the previous lemma, the required result holds.
The Opial Lemma that we recall below will play an important role in the proof of the main result of this paper.
Proposition 10
(Opial Lemma) Let H be a real Hilbert space, $C \subseteq H$ a nonempty set and $(x_n)_{n \in \mathbb{N}}$ a given sequence such that:
- (i) For every $z \in C$, $\lim_{n \to \infty} \| x_n - z \|$ exists.
- (ii) Every sequential weak cluster point of $(x_n)_{n \in \mathbb{N}}$ lies in C.
Then the sequence $(x_n)_{n \in \mathbb{N}}$ converges weakly to a point in C.
Theorem 11
- (i) The sequence $(x_n)_{n \in \mathbb{N}}$ generated by Algorithm 1 converges weakly to a point in $S$, the set of optimal solutions of (1).
- (ii) The sequence of objective function values $(f(x_n))_{n \in \mathbb{N}}$ converges to the optimal objective value of the optimization problem (1).
Proof
(i) According to Lemma 8, exists for all . Let w be a sequential weak cluster point of . Then there exists a subsequence of such that converges weakly to w as . By Lemma 8, we have that . This means that in order to come to the conclusion it suffices to show that for all . From Lemma 9, Lemma 8 and the fact that , it follows that for all . Thus,
which shows that . Hence, thanks to the Opial Lemma, $(x_n)_{n \in \mathbb{N}}$ converges weakly to a point in $S$.
(ii) The statement follows easily from the above considerations.
We close this section by presenting some situations in which Assumption 2 is verified.
Remark 12
Let and be arbitrarily chosen. We set
and
for all .
- (i) Since , we have , which implies that for all .
- (ii) For all it holds
- (iii) It holds .
- (iv) For all we have
- (v) Since , we have , which implies that .
- (vi) Finally, as , we have and this implies that .

We present a situation where Assumption 2(III) holds and refer to [10] for further examples. For instance, if where , then for every . Thus, for , we have
Hence converges, if converges or, equivalently, if converges. This holds for the above choices of and when .
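As a quick numerical sanity check of the kind of summability pattern involved, the following snippet evaluates partial sums for schedules of the prototypical form $\lambda_n = c/n$ and $\beta_n = d\,n$; these concrete schedules are our illustrative choice, and we do not verify here whether they meet all quantitative requirements of Assumption 2 (this depends on the constants and on $L_f$, $L_g$).

```python
import numpy as np

# Hypothetical parameter schedules (our illustrative choice): lambda_n = c / n and
# beta_n = d * n, so that lambda_n * beta_n is constant, sum_n lambda_n diverges, and
# sum_n lambda_n / beta_n converges -- the summability pattern appearing in the key
# condition when g = 1/2 * dist^2(., argmin g).
c, d = 1.0, 2.0
n = np.arange(1, 100001, dtype=float)
lam, beta = c / n, d * n
print("lambda_n * beta_n       :", (lam * beta).max())    # constant, equal to c*d
print("partial sum of lambda_n :", lam.sum())             # diverges (grows like log n)
print("partial sum lambda/beta :", (lam / beta).sum())    # converges to (c/d)*pi^2/6
```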
Numerical example: image classification via support vector machines
In this section we employ the algorithm proposed in this paper in the context of image classification via support vector machines.
Having a set of training data, each element belonging to one of two given classes denoted by "+1" and "−1", the aim is to construct, by using this information, a decision function in the form of a separating hyperplane which assigns every new data point to one of the two classes with a misclassification rate as low as possible. In order to handle the situation when a full separation is not possible, we make use of non-negative slack variables; thus the goal will be to find an optimal solution of the following optimization problem

where the label of a training point is equal to +1 if the point belongs to the class "+1" and is equal to −1 otherwise. Each new data point is then assigned to one of the two classes according to the sign of the resulting decision function. For more theoretical insights into support vector machines we refer the reader to [29].
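For concreteness, a standard way of writing this training problem (our notation: hyperplane parameters $w$ and $b$, slack vector $\xi = (\xi_1, \dots, \xi_m)$, training points $u_1, \dots, u_m$, labels $d_1, \dots, d_m \in \{+1, -1\}$ and regularization parameter $C > 0$; the exact form used here, in particular the way the slack variables enter the objective, may differ) is the soft-margin primal

$$\min_{w, b, \xi} \ \frac{1}{2}\|w\|^2 + C \|\xi\|^2 \quad \text{subject to} \quad d_i \big( \langle w, u_i \rangle + b \big) \ge 1 - \xi_i, \quad \xi_i \ge 0, \quad i = 1, \dots, m,$$

with the associated decision function $z(x) = \langle w, x \rangle + b$.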
By making use of the matrix
the problem under investigation can be written as
or, equivalently,
By considering as , we have and notice that is -Lipschitz continuous.
Further, for , we have and notice that is -Lipschitz continuous, where denotes the projection operator on the set .
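The following Python/NumPy sketch illustrates one way of realizing this reformulation in the framework of problem (1); all concrete choices (squared slacks in f, g defined as half the squared distance of the stacked constraint values to the set $\{y : y \ge c\}$, and the way the constraint matrix is assembled) are ours and may differ from the matrix used in the paper.

```python
import numpy as np

def build_constraints(U, d):
    """U: (m, k) matrix whose rows are the training points, d: (m,) labels in {+1, -1}.
    Returns A, c such that feasibility of x = (w, b, xi) reads A @ x >= c, i.e.
    d_i*(<u_i, w> + b) + xi_i >= 1 and xi_i >= 0 for i = 1, ..., m."""
    m, k = U.shape
    A_margin = np.hstack([d[:, None] * U, d[:, None], np.eye(m)])
    A_slack = np.hstack([np.zeros((m, k + 1)), np.eye(m)])
    return np.vstack([A_margin, A_slack]), np.concatenate([np.ones(m), np.zeros(m)])

def grad_f(x, k, C):
    """Gradient of f(w, b, xi) = 1/2*||w||^2 + C*||xi||^2 (Lipschitz constant max{1, 2C})."""
    w, xi = x[:k], x[k + 1:]
    return np.concatenate([w, [0.0], 2.0 * C * xi])

def grad_g(x, A, c):
    """Gradient of g(x) = 1/2*dist^2(A x, {y : y >= c}); argmin g is the feasible set,
    min g = 0, and nabla g(x) = A^T (A x - P(A x)) is ||A||^2-Lipschitz."""
    Ax = A @ x
    return A.T @ (Ax - np.maximum(Ax, c))
```

With these two gradient routines, the inertial iteration sketched in the introduction can be run directly on the vectorized training data, with $\alpha$, $\lambda_n$ and $\beta_n$ chosen as discussed in Remark 12.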
For the numerical experiments we used a data set consisting of 6000 training images and 2060 test images of size taken from the website http://www.cs.nyu.edu/~roweis/data.html, representing the handwritten digits 2 and 7, labeled by +1 and −1, respectively (see Fig. 1). We evaluated the quality of the resulting decision function on the test data set by computing the percentage of misclassified images.
Fig. 1.
A sample of images belonging to the classes +1 and −1, respectively
We denote by the set of available training data consisting of 3000 images in the class +1 and 3000 images in the class −1. For numerical reasons each image has been vectorized and normalized. We tested in MATLAB different combinations of parameters chosen as in Remark 12 by running the algorithm for 3000 iterations. A sample of misclassified images is shown in Fig. 2.
Fig. 2.
A sample of misclassified images
In Table 1 we present the misclassification rate in percentage for different choices of the parameter $\alpha$ (we recall that in this case we take ) and of the regularization parameter C, while for $\alpha = 0$, which corresponds to the noninertial version of the algorithm, we consider different choices of the parameter and of C. We observe that combining $\alpha = 0.1$ with each of the regularization parameters $C \in \{5, 10, 100\}$ leads to the lowest misclassification rate of 2.1845%.
Table 1.
Misclassification rate in percentage for different choices of the parameters $\alpha$ and C when and
| $\alpha$ | C = 0.1 | C = 1 | C = 2 | C = 5 | C = 10 | C = 100 |
|---|---|---|---|---|---|---|
| 0.1 | 2.2330 | 2.2330 | 2.2330 | 2.1845 | 2.1845 | 2.1845 |
| 0.3 | 2.2330 | 2.2816 | 2.2816 | 2.2816 | 2.2816 | 2.2816 |
| 0.5 | 2.2330 | 2.2330 | 2.2330 | 2.2816 | 2.2816 | 2.3301 |
| 0.7 | 2.3786 | 2.3786 | 2.3786 | 2.3786 | 2.3786 | 2.3786 |
| 0.9 | 2.9126 | 2.9126 | 2.9126 | 2.9126 | 2.8641 | 2.8155 |
| 0 () | 3.1068 | 3.0583 | 3.0583 | 2.9612 | 2.9612 | 2.7184 |
| 0 () | 2.2816 | 2.2330 | 2.2330 | 2.2330 | 2.2330 | 2.2330 |
| 0 () | 2.2816 | 2.2330 | 2.2330 | 2.2330 | 2.2330 | 2.2330 |
| 0 () | 2.2330 | 2.2330 | 2.2330 | 2.2330 | 2.2330 | 2.2330 |
| 0 () | 2.2330 | 2.2330 | 2.2330 | 2.2330 | 2.2330 | 2.2330 |
In Table 2 we present the misclassification rate in percentage for different choices of the parameters and . The lowest misclassification rate, of 2.1845%, is obtained for each of the regularization parameters $C \in \{5, 10, 100\}$.
Table 2.
Misclassification rate in percentage for different choices for the parameters C and when and
| C | |||||
|---|---|---|---|---|---|
| 0.1 | 2.2330 | 2.2330 | 2.2330 | 2.2330 | 2.2330 |
| 1 | 2.2330 | 2.2330 | 2.2330 | 2.2330 | 2.2330 |
| 2 | 2.2330 | 2.2330 | 2.2330 | 2.2330 | 2.2330 |
| 5 | 2.1845 | 2.1845 | 2.1845 | 2.1845 | 2.1845 |
| 10 | 2.1845 | 2.1845 | 2.1845 | 2.1845 | 2.1845 |
| 100 | 2.1845 | 2.1845 | 2.1845 | 2.1845 | 2.1845 |
Finally, Table 3 shows the misclassification rate in percentage for different choices of the parameters C and . The lowest misclassification rate, of 2.1845%, is obtained when combining the parameter value corresponding to the last column with each of the regularization parameters $C \in \{5, 10, 100\}$.
Table 3.
Misclassification rate in percentage for different choices for the parameters C and when and
| C | |||
|---|---|---|---|
| 0.1 | 2.2816 | 2.3301 | 2.2330 |
| 1 | 2.2330 | 2.2816 | 2.2330 |
| 2 | 2.2816 | 2.2816 | 2.2330 |
| 5 | 2.2330 | 2.2816 | 2.1845 |
| 10 | 2.2330 | 2.2816 | 2.1845 |
| 100 | 2.2330 | 2.2330 | 2.1845 |
Acknowledgements
Ernö Robert Csetnek’s research was supported by FWF (Austrian Science Fund), Lise Meitner Programme, project M 1682-N25. Nimit Nimana is thankful to the Royal Golden Jubilee Ph.D. Program for financial support. Part of this research was carried out during the two-month stay of the third author in Spring 2016 at the Faculty of Mathematics of the University of Vienna. The authors are grateful to two anonymous reviewers for hints and comments which improved the quality of the paper.
Contributor Information
Radu Ioan Boţ, Email: radu.bot@univie.ac.at.
Ernö Robert Csetnek, Email: ernoe.robert.csetnek@univie.ac.at.
Nimit Nimana, Email: nimitn@hotmail.com.
References
1. Alvarez F. On the minimizing property of a second order dissipative system in Hilbert spaces. SIAM J. Control Optim. 2000;38(4):1102–1119. doi: 10.1137/S0363012998335802.
2. Alvarez F. Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space. SIAM J. Optim. 2004;14(3):773–782. doi: 10.1137/S1052623403427859.
3. Alvarez F, Attouch H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 2001;9:3–11. doi: 10.1023/A:1011253113155.
4. Attouch H, Cabot A, Czarnecki M-O. Asymptotic behavior of nonautonomous monotone and subgradient evolution equations. Trans. Am. Math. Soc. (to appear) (2016). arXiv:1601.00767.
5. Attouch H, Czarnecki M-O. Asymptotic behavior of coupled dynamical systems with multiscale aspects. J. Differ. Equ. 2010;248(6):1315–1344. doi: 10.1016/j.jde.2009.06.014.
6. Attouch H, Czarnecki M-O. Asymptotic behavior of gradient-like dynamical systems involving inertia and multiscale aspects. J. Differ. Equ. 2017;262(3):2745–2770. doi: 10.1016/j.jde.2016.11.009.
7. Attouch H, Czarnecki M-O, Peypouquet J. Prox-penalization and splitting methods for constrained variational problems. SIAM J. Optim. 2011;21(1):149–173. doi: 10.1137/100789464.
8. Attouch H, Czarnecki M-O, Peypouquet J. Coupling forward-backward with penalty schemes and parallel splitting for constrained variational inequalities. SIAM J. Optim. 2011;21(4):1251–1274. doi: 10.1137/110820300.
9. Attouch H, Peypouquet J, Redont P. A dynamical approach to an inertial forward-backward algorithm for convex minimization. SIAM J. Optim. 2014;24(1):232–256. doi: 10.1137/130910294.
10. Banert S, Boţ RI. Backward penalty schemes for monotone inclusion problems. J. Optim. Theory Appl. 2015;166(3):930–948. doi: 10.1007/s10957-014-0700-x.
11. Bauschke HH, Combettes PL. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. CMS Books in Mathematics. New York: Springer; 2011.
12. Bertsekas DP. Nonlinear Programming. 2nd edn. Cambridge: Athena Scientific; 1999.
13. Boţ RI, Csetnek ER. Forward-backward and Tseng’s type penalty schemes for monotone inclusion problems. Set-Valued Var. Anal. 2014;22:313–331. doi: 10.1007/s11228-014-0274-7.
14. Boţ RI, Csetnek ER. A Tseng’s type penalty scheme for solving inclusion problems involving linearly composed and parallel-sum type monotone operators. Vietnam J. Math. 2014;42(4):451–465. doi: 10.1007/s10013-013-0050-2.
15. Boţ RI, Csetnek ER. Levenberg–Marquardt dynamics associated to variational inequalities. Set-Valued Var. Anal. (2017). doi: 10.1007/s11228-017-0409-8.
16. Boţ RI, Csetnek ER. An inertial forward-backward-forward primal-dual splitting algorithm for solving monotone inclusion problems. Numer. Algorithms. 2016;71:519–540. doi: 10.1007/s11075-015-0007-5.
17. Boţ RI, Csetnek ER. An inertial alternating direction method of multipliers. Minimax Theory Appl. 2016;1(1):29–49.
18. Boţ RI, Csetnek ER. A hybrid proximal-extragradient algorithm with inertial effects. Numer. Funct. Anal. Optim. 2015;36(8):951–963. doi: 10.1080/01630563.2015.1042113.
19. Boţ RI, Csetnek ER. An inertial Tseng’s type proximal algorithm for nonsmooth and nonconvex optimization problems. J. Optim. Theory Appl. 2016;171(2):600–616. doi: 10.1007/s10957-015-0730-z.
20. Boţ RI, Csetnek ER. Approaching the solving of constrained variational inequalities via penalty term-based dynamical systems. J. Math. Anal. Appl. 2016;435:1688–1700. doi: 10.1016/j.jmaa.2015.11.032.
21. Boţ RI, Csetnek ER. Penalty schemes with inertial effects for monotone inclusion problems. Optimization. 2017;66(6):965–982. doi: 10.1080/02331934.2016.1181759.
22. Boţ RI, Csetnek ER. Second order dynamical systems associated to variational inequalities. Appl. Anal. 2017;96(5):799–809. doi: 10.1080/00036811.2016.1157589.
23. Boţ RI, Csetnek ER. A second order dynamical system with Hessian-driven damping and penalty term associated to variational inequalities (2016). arXiv:1608.04137.
24. Boţ RI, Csetnek ER, Hendrich C. Inertial Douglas–Rachford splitting for monotone inclusion problems. Appl. Math. Comput. 2015;256:472–487.
25. Boţ RI, Csetnek ER, László S. An inertial forward-backward algorithm for the minimization of the sum of two nonconvex functions. EURO J. Comput. Optim. 2016;4:3–25. doi: 10.1007/s13675-015-0045-8.
26. Cabot A, Frankel P. Asymptotics for some proximal-like method involving inertia and memory aspects. Set-Valued Var. Anal. 2011;19:59–74. doi: 10.1007/s11228-010-0140-1.
27. Chen C, Chan RH, Ma S, Yang J. Inertial proximal ADMM for linearly constrained separable convex optimization. SIAM J. Imaging Sci. 2015;8(4):2239–2267. doi: 10.1137/15100463X.
28. Chen C, Ma S, Yang J. A general inertial proximal point algorithm for mixed variational inequality problem. SIAM J. Optim. 2015;25(4):2120–2142. doi: 10.1137/140980910.
29. Cristianini N, Shawe-Taylor J. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge: Cambridge University Press; 2000.
30. Maingé P-E. Convergence theorems for inertial KM-type algorithms. J. Comput. Appl. Math. 2008;219:223–236. doi: 10.1016/j.cam.2007.07.021.
31. Maingé P-E, Moudafi A. Convergence of new inertial proximal methods for DC programming. SIAM J. Optim. 2008;19(1):397–413. doi: 10.1137/060655183.
32. Moudafi A, Oliny M. Convergence of a splitting inertial proximal method for monotone operators. J. Comput. Appl. Math. 2003;155:447–454. doi: 10.1016/S0377-0427(02)00906-8.
33. Noun N, Peypouquet J. Forward-backward penalty scheme for constrained convex minimization without inf-compactness. J. Optim. Theory Appl. 2013;158(3):787–795. doi: 10.1007/s10957-013-0296-6.
34. Ochs P, Chen Y, Brox T, Pock T. iPiano: inertial proximal algorithm for non-convex optimization. SIAM J. Imaging Sci. 2014;7(2):1388–1419. doi: 10.1137/130942954.
35. Peypouquet J. Coupling the gradient method with a general exterior penalization scheme for convex minimization. J. Optim. Theory Appl. 2012;153(1):123–138. doi: 10.1007/s10957-011-9936-x.
36. Polyak BT. Introduction to Optimization. Translations Series in Mathematics and Engineering. New York: Optimization Software Inc., Publications Division; 1987. (Translated from the Russian).


