Abstract
We propose an SQP algorithm for mathematical programs with vanishing constraints which solves at each iteration a quadratic program with linear vanishing constraints. The algorithm is based on the newly developed concept of Q-stationarity (Benko and Gfrerer in Optimization 66(1):61–92, 2017). We demonstrate how Q_M-stationary solutions of the quadratic program can be obtained. We show that all limit points of the sequence of iterates generated by the basic SQP method are at least M-stationary, and by some extension of the method we also guarantee the stronger property of Q_M-stationarity of the limit points.
Keywords: SQP method, Mathematical programs with vanishing constraints, Q-stationarity, Q_M-stationarity
Introduction
Consider the following mathematical program with vanishing constraints (MPVC)

min f(x)
subject to h_i(x) = 0, i ∈ E,
g_i(x) ≤ 0, i ∈ I, | 1 |
H_i(x) ≥ 0, G_i(x)H_i(x) ≤ 0, i ∈ V,

with continuously differentiable functions f, h_i (i ∈ E), g_i (i ∈ I), G_i, H_i (i ∈ V) and finite index sets E, I and V.
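To make the vanishing-constraint structure concrete, here is a minimal sketch assuming the standard MPVC constraint form with functions H_i and G_i; the toy instance and all names below are illustrative, not taken from the paper.

```python
# Hypothetical toy MPVC in R^2 (illustrative only):
#   min  (x1 - 1)^2 + x2^2
#   s.t. H(x) = x2 >= 0,   G(x)*H(x) = x1*x2 <= 0.
# The sign restriction on G(x) "vanishes" wherever H(x) = 0.

def H(x):
    return x[1]

def G(x):
    return x[0]

def is_feasible(x, tol=1e-10):
    """Standard MPVC feasibility: H(x) >= 0 and G(x)*H(x) <= 0."""
    return H(x) >= -tol and G(x) * H(x) <= tol

# On the line x2 = 0 the product constraint holds trivially,
# regardless of the sign of G(x).
print(is_feasible((5.0, 0.0)))   # True:  H = 0, constraint on G vanishes
print(is_feasible((5.0, 1.0)))   # False: H > 0 but G*H = 5 > 0
print(is_feasible((-1.0, 1.0)))  # True:  H > 0 and G*H = -1 <= 0
```

This also shows why standard constraint qualifications fail along H(x) = 0: the feasible set is a union of pieces meeting there, not a smooth inequality system.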
Theoretically, MPVCs can be viewed as standard nonlinear optimization problems, but due to the vanishing constraints, many of the standard constraint qualifications of nonlinear programming are violated at any feasible point at which some of the functions defining the vanishing constraints vanish. On the other hand, by introducing slack variables, MPVCs may be reformulated as so-called mathematical programs with complementarity constraints (MPCCs), see [7]. However, this approach is also not satisfactory, as it has turned out that MPCCs are in fact even more difficult to handle than MPVCs. This makes it necessary, both from a theoretical and a numerical point of view, to consider specially tailored algorithms for solving MPVCs. Recent numerical methods follow different directions. A smoothing-continuation method and a regularization approach for MPCCs are considered in [6, 10], and a combination of these techniques, a smoothing-regularization approach for MPVCs, is investigated in [2]. In [3, 8] the relaxation method has been suggested in order to deal with the inherent difficulties of MPVCs.
In this paper, we carry over a well-known SQP method from nonlinear programming to MPVCs. We proceed in a similar manner as in [4], where an SQP method for MPCCs was introduced by Benko and Gfrerer. The main task of our method is to solve in each iteration step a quadratic program with linear vanishing constraints, a so-called auxiliary problem. We then compute the next iterate by reducing a certain merit function along some polygonal line which is given by the solution procedure for the auxiliary problem. To solve the auxiliary problem we exploit the new concept of Q-stationarity introduced in the recent paper by Benko and Gfrerer [5]. Q-stationarity is in general stronger than M-stationarity, and it turns out to be very suitable for a numerical approach as it allows us to handle the program with vanishing constraints without relying on enumeration techniques. Surprisingly, we compute at least a Q_M-stationary solution of the auxiliary problem just by means of quadratic programming, by solving appropriate convex subproblems.
Next we study the convergence of the SQP method. We show that every limit point of the generated sequence is at least M-stationary. Moreover, we consider an extended version of our SQP method, where at each iterate a correction is made to prevent the method from converging to undesired points. Consequently we show that under some additional assumptions all limit points are at least Q_M-stationary. Numerical tests indicate that our method behaves very reliably.
A short outline of this paper is as follows. In Sect. 2 we recall the basic stationarity concepts for MPVCs as well as the recently developed concepts of Q- and Q_M-stationarity. In Sect. 3 we describe an algorithm based on quadratic programming for solving the auxiliary problem occurring in every iteration of our SQP method. We prove the finiteness and summarize some other properties of this algorithm. In Sect. 4 we propose the basic SQP method. We describe how the next iterate is computed by means of the solution of the auxiliary problem and we consider the convergence of the overall algorithm. In Sect. 5 we consider the extended version of the overall algorithm and we discuss its convergence. Section 6 is a summary of numerical results we obtained by implementing our basic algorithm in MATLAB and by testing it on a subset of test problems considered in the thesis of Hoheisel [7].
In what follows we use the following notation. Given a set M we denote by the collection of all partitions of M. Further, for a real number a we use the notation . For a vector we define componentwise, i.e. , etc. Moreover, for and we denote the norm of u by and we use the notation for the standard norm. Finally, given a sequence , a point and an infinite set we write instead of .
Stationary points for MPVCs
Given a point feasible for (1) we define the following index sets
| 2 |
In contrast to nonlinear programming, there exist numerous stationarity concepts for MPVCs.
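The bi-active set drives the combinatorial structure of MPVCs. The following small sketch assumes the index-set definitions standard in the MPVC literature; the names I+0, I+-, I0+, I00, I0- are one common convention and may differ from the paper's notation in (2).

```python
# Classify the vanishing-constraint indices of a feasible point into the
# index sets common in the MPVC literature (illustrative convention):
#   I+0 = {i : H_i > 0, G_i = 0},  I+- = {i : H_i > 0, G_i < 0},
#   I0+ = {i : H_i = 0, G_i > 0},  I00 = {i : H_i = 0, G_i = 0},
#   I0- = {i : H_i = 0, G_i < 0}.
# I00 is the "bi-active" set, the source of the combinatorial difficulty.

def classify(G_vals, H_vals, tol=1e-10):
    """Assumes a feasible point, i.e. H_i >= 0 and G_i*H_i <= 0 for all i."""
    sets = {"I+0": [], "I+-": [], "I0+": [], "I00": [], "I0-": []}
    for i, (g, h) in enumerate(zip(G_vals, H_vals)):
        if h > tol:
            sets["I+0" if abs(g) <= tol else "I+-"].append(i)
        else:  # h = 0 at a feasible point
            if g > tol:
                sets["I0+"].append(i)
            elif g < -tol:
                sets["I0-"].append(i)
            else:
                sets["I00"].append(i)
    return sets

sets = classify(G_vals=[0.0, -2.0, 1.0, 0.0], H_vals=[3.0, 1.0, 0.0, 0.0])
print(sets)  # {'I+0': [0], 'I+-': [1], 'I0+': [2], 'I00': [3], 'I0-': []}
```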
Definition 2.1
Let be feasible for (1). Then is called
- Weakly stationary, if there are multipliers such that
| 3 |
| 4 |
- M-stationary, if it is weakly stationary and
| 5 |
- Q-stationary with respect to a partition of the bi-active index set, if there are multipliers fulfilling (4) and the partition-dependent condition (6).
- Q-stationary, if there is some partition such that it is Q-stationary with respect to it.
- Q_M-stationary, if it is Q-stationary and at least one of the corresponding multipliers fulfills the M-stationarity condition (5).
- S-stationary, if it is weakly stationary and
The concepts of Q-stationarity and Q_M-stationarity were introduced in the recent paper by Benko and Gfrerer [5], whereas the other stationarity concepts are very common in the literature, see e.g. [1, 7, 8]. The following implications hold:
The first implication follows from the fact that the multiplier corresponding to S-stationarity fulfills the requirements for both and . The third implication holds because for the multiplier fulfills (5) since for .
Note that the S-stationarity conditions are nothing else than the Karush-Kuhn-Tucker conditions for the problem (1). As we will demonstrate in the next theorems, a local minimizer is S-stationary only under some comparatively strong constraint qualification, while it is Q_M-stationary under very weak constraint qualifications. Before stating the theorems we recall some common definitions.
Denoting
| 7 |
| 8 |
we see that problem (1) can be rewritten as
Recall that the contingent (also tangent) cone to a closed set at is defined by
The linearized cone to at is then defined as .
Further recall that is called B-stationary if
Every local minimizer is known to be B-stationary.
Definition 2.2
Let be feasible for (1), i.e . We say that the generalized Guignard constraint qualification (GGCQ) holds at , if the polar cone of equals the polar cone of .
Theorem 2.1
(c.f. [5, Theorem 8]) Assume that GGCQ is fulfilled at the point . If is B-stationary, then it is Q-stationary for (1) with respect to every partition and it is also Q_M-stationary.
Theorem 2.2
(c.f. [5, Theorem 8]) If is Q-stationary with respect to a partition , such that for every there exists some fulfilling
| 9 |
and there is some such that
| 10 |
then is S-stationary and consequently also B-stationary.
Note that these two theorems together also imply that a local minimizer is S-stationary provided GGCQ is fulfilled at and there exists a partition , such that for every there exists fulfilling (9) and fulfilling (10).
Moreover, note that (9) and (10) are fulfilled for every partition, e.g. if the gradients of the active constraints are linearly independent. On the other hand, in the special case of the partition , these conditions read as the requirement that the system
has a solution, which resembles the well-known Mangasarian-Fromovitz constraint qualification (MFCQ) of nonlinear programming and seems to be a rather weak and often fulfilled assumption.
Finally, we recall the definitions of normal cones. The regular normal cone to a closed set at can be defined as the polar cone to the tangent cone by
The limiting normal cone to a closed set at is given by
| 11 |
In the case when is a convex set, the regular and the limiting normal cone coincide with the classical normal cone of convex analysis, i.e.
| 12 |
The following description of the limiting normal cone is also well known:
| 13 |
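As an illustration of (13), the following sketch computes the limiting normal cone of the standard two-dimensional vanishing-constraint set at its bi-active point; the set Λ and the coordinates (a, b) are illustrative notation, not necessarily the paper's.

```latex
% Limiting normal cone of the 2-d vanishing-constraint set at the origin,
% computed as the collection of limits of regular normals at nearby points.
\[
  \Lambda = \{(a,b)\in\mathbb{R}^2 : b \ge 0,\ ab \le 0\}
          = \big((-\infty,0]\times[0,\infty)\big)\cup\big(\mathbb{R}\times\{0\}\big).
\]
Regular normal cones at points near the origin:
\[
  \widehat N_\Lambda(a,0)=\{0\}\times\mathbb{R}_-\ (a<0),\qquad
  \widehat N_\Lambda(a,0)=\{0\}\times\mathbb{R}\ (a>0),\qquad
  \widehat N_\Lambda(0,b)=\mathbb{R}_+\times\{0\}\ (b>0),
\]
and collecting all limits as in (13) gives
\[
  N_\Lambda(0,0)=\big(\{0\}\times\mathbb{R}\big)\cup\big(\mathbb{R}_+\times\{0\}\big),
\]
which is strictly larger than the regular normal cone
$\widehat N_\Lambda(0,0)=\{0\}\times\mathbb{R}_-$.
```

The gap between the two cones at the bi-active point is exactly what separates S-stationarity from M-stationarity.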
We conclude this section with the following characterization of M- and Q-stationarity via the limiting normal cone. Straightforward calculations yield that
and hence the M-stationarity conditions (4) and (5) can be replaced by
| 14 |
and the Q-stationarity conditions (4) and (6) can be replaced by
| 15 |
| 16 |
where for we define
Note also that for every we have
| 17 |
Solving the auxiliary problem
In this section, we describe an algorithm for solving quadratic problems with vanishing constraints of the type
| 18 |
Here the vector is chosen at the beginning of the algorithm such that some feasible point is known in advance, e.g. . The parameter has to be chosen sufficiently large and acts like a penalty parameter forcing to be near zero at the solution. B is a symmetric positive definite matrix, denote row vectors in and are real numbers. Note that this problem is a special case of problem (1), and consequently the definitions of Q- and Q_M-stationarity as well as the definition of the index sets (2) remain valid.
It turns out to be much more convenient to operate with a more general notation. Let us denote by a vector in , by a matrix and by and two subsets of . Note that for P given by (7) it holds that . Problem (18) can now be equivalently rewritten in the form
| 19 |
For a given feasible point for the problem we define the following index sets
where the index sets are given by (2).
Further, consider the distance function d defined by
for and . The following proposition summarizes some well-known properties of d.
Proposition 3.1
Let and .
- Let , then
In particular,
| 20 |
| 21 |
- d is Lipschitz continuous with Lipschitz modulus 1 and consequently
| 22 |
- d is convex, provided A is convex.
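As a small numerical illustration of Proposition 3.1, consider the distance to an illustrative convex set, a closed interval; the 1-Lipschitz property holds for the distance to any closed set.

```python
# Distance function d(y, A) = inf_{a in A} |y - a| for the illustrative
# convex set A = [lo, hi]; for an interval the infimum is attained by
# projecting y onto the interval (clipping).

def dist(y, lo, hi):
    proj = min(max(y, lo), hi)   # projection of y onto [lo, hi]
    return abs(y - proj)

# d(., A) vanishes exactly on A ...
assert dist(0.5, 0.0, 1.0) == 0.0
# ... and is 1-Lipschitz: |d(y1, A) - d(y2, A)| <= |y1 - y2|.
pts = [-3.0, -0.2, 0.0, 0.7, 1.0, 2.5]
for y1 in pts:
    for y2 in pts:
        assert abs(dist(y1, 0.0, 1.0) - dist(y2, 0.0, 1.0)) <= abs(y1 - y2) + 1e-12
print(dist(3.0, 0.0, 1.0))  # 2.0
```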
Due to the disjunctive structure of the auxiliary problem we can subdivide it into several QP-pieces. For every partition we define the convex quadratic problem
| 23 |
Since form a partition of V it is sufficient to define since is given by .
At the solution of there is a corresponding multiplier and a number with fulfilling the KKT conditions:
| 24 |
| 25 |
| 26 |
| 27 |
| 28 |
where for . Since and are convex sets, the above normal cones are given by (12).
The definition of the problem allows the following interpretation of Q-stationarity, which is a direct consequence of (15) and (16).
Lemma 3.1
A point is Q-stationary with respect to for (19) if and only if it is the solution of the convex problems and .
Moreover, since for the conditions (27),(28) read as , it follows from (17) that if a point is the solution of then it is M-stationary for (19).
Finally, let us denote by the objective value at a solution of the problem
| 29 |
An outline of the algorithm for solving is as follows.
Algorithm 3.1
(Solving the QPVC) Let and be given.
- Initialize:
- Set the starting point , define the vector by
| 30 |
and set the partition and the counter of pieces .
- Compute as the solution and as the corresponding multiplier of the convex problem and set .
- If , perform a restart: set and go to step 1.
- Improvement step:
- while is not a solution of the following four convex problems:
| 31 |
| 32 |
- Compute as the solution and as the corresponding multiplier of the first problem with , set to the corresponding index set and increase the counter t of pieces by 1.
- If , perform a restart: set and go to step 1.
- Check for successful termination:
- If set , stop the algorithm and return.
- Check the degeneracy:
- If the non-degeneracy condition
| 33 |
is fulfilled, perform a restart: set and go to step 1.
- Else stop the algorithm because of degeneracy.
The selection of the index sets in step 2 is motivated by Lemma 3.1, since if is the solution of the convex problems (31), then it is Q-stationary, and if is also the solution of the convex problems (32), then it is even Q_M-stationary for problem (19).
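To illustrate the piece-switching idea behind the algorithm, here is a drastically simplified sketch: one vanishing pair, B equal to the identity, so every QP piece reduces to a closed-form projection. All names and the instance are illustrative, not the paper's notation.

```python
# Simplified sketch of the QP-piece idea (illustrative only): one vanishing
# pair (G(x), H(x)) = (x1, x2), B = I, objective 0.5*||x - c||^2.
# The disjunctive feasible set {x2 >= 0, x1*x2 <= 0} splits into two
# convex QP pieces:
#   piece (V1): x1 <= 0, x2 >= 0   (sign constraint on G kept)
#   piece (V2): x2  = 0            (vanishing branch)
# With B = I each piece-QP is just a Euclidean projection of c.

def solve_piece_V1(c):
    return (min(c[0], 0.0), max(c[1], 0.0))   # project onto {x1<=0, x2>=0}

def solve_piece_V2(c):
    return (c[0], 0.0)                        # project onto {x2 = 0}

def objective(x, c):
    return 0.5 * ((x[0] - c[0]) ** 2 + (x[1] - c[1]) ** 2)

def solve_toy_qpvc(c):
    # The full algorithm switches to a new piece only while this improves
    # the objective; with just two pieces, comparing them directly suffices.
    candidates = [solve_piece_V1(c), solve_piece_V2(c)]
    return min(candidates, key=lambda x: objective(x, c))

print(solve_toy_qpvc((1.0, 2.0)))   # (0.0, 2.0): branch x1 <= 0 wins
print(solve_toy_qpvc((2.0, 1.0)))   # (2.0, 0.0): vanishing branch wins
```

In the actual algorithm each piece is a general convex QP solved together with its multipliers, and the partition is updated only finitely often by Proposition 3.3.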
We first summarize some consequences of the Initialization step.
Proposition 3.2
- The vector is chosen in a way that for all it holds that
| 34 |
- The partition is chosen in a way that for it holds that
| 35 |
Proof
1. If we have and (34) obviously holds. If we have and we obtain
by (21) and (20). Finally, if we have and thus
follows again by (21) and (20).
2. If for some and , by (22) and (34) we obtain
and consequently , because of (20). Hence we conclude that implies for and the statement now follows from the fact that and .
The following lemma plays a crucial part in proving the finiteness of the Algorithm 3.1.
Lemma 3.2
For each partition there exists a positive constant such that for every the solution of fulfills .
Proof
Let denote a solution of (29). Since , it follows that the problem
| 36 |
is feasible and by we denote the solution of this problem and by the corresponding multiplier. Further, is a solution of (29) and by we denote the corresponding multiplier.
Then, the triple and fulfills (24) and (26)–(28). Moreover, the triple and fulfills (26)–(28) and
| 37 |
| 38 |
for some with .
Let be a positive constant such that for all we have
and set and . We will now show that for such it holds that is the solution of .
Clearly, and the triple and also fulfills (24) due to (37) and it fulfills (26)–(28) due to the convexity of the normal cones. Moreover, taking into account the definitions of and together with (38), we obtain
showing also (25). Hence is the solution of and the proof is complete.
We now formulate the main theorem of this section.
Theorem 3.1
1. Algorithm 3.1 is finite.
2. If Algorithm 3.1 is not terminated because of degeneracy, the point it returns is Q_M-stationary for problem (19).
Proof
1. The algorithm is obviously finite unless we perform a restart and hence increase . Thus we can assume that is sufficiently large, say
with given by the previous lemma. However this means, taking into account also Proposition 3.3 (1.), that is feasible for the problem for all t, hence and is the solution of , implying and consequently . Therefore we do not perform a restart in step 1 or step 2. On the other hand, since we enter steps 3 and 4 with , we either terminate the algorithm in step 3 with if the non-degeneracy condition (33) is fulfilled or we terminate the algorithm because of degeneracy in step 4. This finishes the proof.
2. The statement regarding stationarity follows easily from the fact that we enter step 3 of the algorithm only when is a solution of the problems (32), and this means that it is also Q-stationary with respect to by Lemma 3.1. Thus, is also Q_M-stationary for problem (19). The claim about follows from the assumption that the Algorithm 3.1 is not terminated because of degeneracy.
We conclude this section with the following proposition that brings together the basic properties of the Algorithm 3.1.
Proposition 3.3
If the Algorithm 3.1 is not terminated because of degeneracy, then the following properties hold:
For all the points and are feasible for the problem and the point is also the solution of the convex problem .
- For all it holds that
| 39 |
- There exists a constant , dependent only on the number of constraints, such that
| 40 |
Proof
1. By definitions of the problems and it follows that a point , feasible for , is feasible for if and only if
| 41 |
The point is clearly feasible for and similarly the point is feasible for for all , since the partition is defined by one of the index sets of (31)-(32) and thus fulfills (41). However, feasibility of for , together with being the solution of , then follows from its definition.
2. The statement follows from , from the fact that we perform a restart whenever occurs, and from the constraint .
3. Since whenever the parameter is increased the algorithm goes to step 1 and thus the counter t of the pieces is reset to 0, it follows that after the last time the algorithm enters step 1 we keep constant. It is obvious that all the index sets are pairwise different, implying that the maximal number of switches to a new piece is .
The basic SQP algorithm for MPVC
An outline of the basic algorithm is as follows.
Algorithm 4.1
(Solving the MPVC)
- Initialization:
- Select a starting point together with a positive definite matrix , a parameter and constants and .
- Select positive penalty parameters .
- Set the iteration counter .
- Next iterate:
- Compute new penalty parameters .
- Set where is a point on the polygonal line connecting the points such that an appropriate merit function depending on is decreased.
- Set , the final value of in Algorithm 3.1.
- Update to get positive definite matrix .
- Set and go to step 2.
Remark 4.1
We terminate the Algorithm 4.1 only in the following two cases. In the first case no sufficient reduction of the violation of the constraints can be achieved. The second case will be satisfied only by chance, when the current iterate is a Q_M-stationary solution. Normally, this algorithm produces an infinite sequence of iterates and we must include a stopping criterion for convergence. Such a criterion could be that the violation of the constraints at some iterate is sufficiently small,
where is given by (7) and the expected decrease in our merit function is sufficiently small,
see Proposition 4.1 below.
The next iterate
Denote the outcome of Algorithm 3.1 at the th iterate by
The new penalty parameters are computed by
| 42 |
where
| 43 |
with maximum being taken over and . Note that this choice of ensures
| 44 |
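A minimal sketch of a penalty update of this type follows; it is a generic max-type rule, and the threshold xi and the constants below are illustrative stand-ins, not the exact formulas (42)–(43).

```python
# Generic stand-in for a monotone penalty update (illustrative, not the
# paper's exact rule): keep the current penalty if it already dominates a
# multiplier-based threshold xi, otherwise enlarge it by a fixed margin.
# This guarantees that the penalties are nondecreasing and eventually
# constant whenever the thresholds stay bounded (cf. Lemma 4.4).

def update_penalty(rho_old, xi, margin=1.0):
    """Return rho_new with rho_new >= xi + margin and rho_new >= rho_old."""
    if rho_old >= xi + margin:
        return rho_old
    return max(xi + margin, 2.0 * rho_old)

rho = 1.0
for xi in [0.5, 3.0, 2.0, 7.5]:   # per-iteration multiplier-based thresholds
    rho = update_penalty(rho, xi)
print(rho)  # penalties never decrease; constant once rho dominates all xi
```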
The merit function
We are looking for the next iterate at the polygonal line connecting the points . For each line segment we consider the functions
where and etc. and we further denote
| 45 |
Lemma 4.1
For every the function is convex.
- For every the function is a first order approximation of , that is
where .
Proof
1. By convexity of and , is convex because it is a sum of convex functions.
2. By Lipschitz continuity of the distance function with Lipschitz modulus 1 we conclude
and hence the assertion follows.
We state now the main result of this subsection. For the sake of simplicity we omit the iteration index k in this part.
Proposition 4.1
For every
| 46 |
| 47 |
Proof
Fix and note that
because of . For consider defined by (45). We obtain
| 48 |
Using that is the solution of and multiplying the first order optimality condition (24) by yields
| 49 |
Summing up the expression on the left hand side from to t, subtracting it from the right hand side of (48) and taking into account the identity
we obtain for
| 50 |
First, we claim that
| 51 |
Consider and with . By the feasibility of and for it follows that
and hence from (27) and (12) we conclude
and consequently
| 52 |
follows by the Hölder inequality and (34).
Analogous argumentation yields (52) also for with and since form a partition of V, the claimed inequality (51) follows.
Further, we claim that for it holds that
| 53 |
From feasibility of for either or for it follows that
and hence, using (34) and (22),
| 54 |
Again, for or it holds that by analogous argumentation and since and form a partition of V, the claimed inequality (53) follows.
Finally, we have
| 55 |
due to the fact that form a partition of V and (35).
Similar arguments as above show
Taking this into account and putting together (50), (51), (53) and (55) we obtain for
and hence (46) and (47) follow by monotonicity of and (44). This completes the proof.
Searching for the next iterate
We choose the next iterate as a point from the polygonal line connecting the points . Each line segment corresponds to the convex subproblem solved by Algorithm 3.1, and hence each line search function corresponds to the usual merit function from nonlinear programming. This makes it technically more difficult to prove the convergence behavior stated in Proposition 4.2, which is also the motivation for the following procedure.
First we parametrize the polygonal line connecting the points by its length as a curve in the following way. We define , for every we denote by the smallest number t such that and we set ,
where for . Then we define
Note that .
In order to simplify the proof of Proposition 4.2, for we further consider the following line search functions
| 56 |
Now consider some sequence of positive numbers with for all . Consider the smallest j, denoted by j(k), such that for some given constant one has
| 57 |
Then the new iterate is given by
As can be seen from the proof of Lemma 4.5, this choice ensures a decrease in the merit function defined in the next subsection.
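The step selection can be sketched as follows, with a standard Armijo-type condition standing in for (57); the function phi, the predicted decrease delta, and all constants below are illustrative.

```python
# Sketch of selecting the smallest index j (illustrative stand-in for (57)):
# accept the step length alpha_j = beta**j as soon as the merit function
# decreases by at least sigma * alpha_j * |delta|, where delta < 0 is the
# predicted decrease of a first-order model at alpha = 0.

def smallest_j(phi, delta, sigma=0.1, beta=0.5, j_max=60):
    """phi(alpha): merit value at step length alpha along the polygonal line."""
    assert delta < 0.0
    for j in range(j_max + 1):
        alpha = beta ** j
        if phi(alpha) <= phi(0.0) + sigma * alpha * delta:
            return j
    raise RuntimeError("no acceptable step length found")

# Illustrative 1-d merit function with minimizer at 0.3; the slope at 0
# gives the predicted decrease delta = phi'(0) = -0.6.
phi = lambda alpha: (alpha - 0.3) ** 2
j = smallest_j(phi, -0.6)
print(j, 0.5 ** j)   # accepted index j and step length beta**j
```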
The following relations are direct consequences of the properties of and
| 58 |
The last property holds due to Proposition 4.1 and
| 59 |
which follows from and hence . We recall that and are defined by (45).
Lemma 4.2
The new iterate is well defined.
Proof
In order to show that the new iterate is well defined, we have to prove the existence of some j such that (57) is fulfilled. Note that and . There is some such that , whenever . Since , we can choose j sufficiently large to fulfill and then and , since . This yields
| 60 |
Then by second property of (58), (59), taking into account by Proposition 4.1 and we obtain
Thus (57) is fulfilled for this j and the lemma is proved.
Convergence of the basic algorithm
We consider the behavior of the Algorithm 4.1 when it does not prematurely stop and it generates an infinite sequence of iterates
Note that . We discuss the convergence behavior under the following assumption.
Assumption 1
- There exist constants such that
for all k, where .
- There exist constants such that for all k, where denotes the smallest eigenvalue of .
For our convergence analysis we need one more merit function
Lemma 4.3
For each k and for any it holds that
| 61 |
Proof
The first claim follows from the definitions of and and the estimate
which holds by (20). The second claim follows from (35).
A simple consequence of the way that we define the penalty parameters in (42) is the following lemma.
Lemma 4.4
Under Assumption 1 there exists some such that for all the penalty parameters remain constant, and consequently .
Remark 4.2
Note that we do not use for calculating the new iterate because its first order approximation is in general not convex on the line segments connecting and due to the involved min operation.
Lemma 4.5
Assume that Assumption 1 is fulfilled. Then
| 62 |
Proof
Take as given by Lemma 4.4. Then we have for
and therefore . Hence the sequence is monotonically decreasing and therefore convergent, because it is bounded below by Assumption 1. Hence
and the assertion follows.
Proposition 4.2
Assume that Assumption 1 is fulfilled. Then
| 63 |
and consequently
| 64 |
Proof
We prove (63) by contradiction. Assume on the contrary that (63) does not hold; then, taking into account by Proposition 4.1, there exists a subsequence such that . By passing to a subsequence we can assume that for all we have with given by Lemma 4.4 and , where we have taken into account (40). By passing to a subsequence once more we can also assume that
where and are defined by (45). Note that .
Let us first consider the case . There exists such that whenever . Since we can assume that . Then
and this implies that for the next iterate we have and hence , contradicting (62).
Now consider the case and let us define the number . Note that Proposition 4.1 yields
| 65 |
and therefore , where . By passing to a subsequence we can assume that for every and every we have .
Now assume that for infinitely many we have , i.e. . Then we conclude
contradicting (62). Hence for all but finitely many , without loss of generality for all , we have .
There exists such that
| 66 |
whenever . By choosing smaller if necessary we can assume , and by passing to a subsequence if necessary we can also assume that for all we have
| 67 |
Now let for each k the index denote the smallest j with . It obviously holds that and by (67) we obtain
implying and
by (67).
Taking this into account together with (66) and we conclude
Now we can proceed as in the proof of Lemma 4.2 to show that fulfills (57).
However, this yields by definition of j(k) and hence showing . But then we also have and from (57) we obtain
contradicting (62) and so (63) is proved. Condition (64) now follows from (63) because we conclude from (65) that .
Now we are ready to state the main result of this section.
Theorem 4.1
Let Assumption 1 be fulfilled. Then every limit point of the sequence of iterates is at least M-stationary for problem (1).
Proof
Let denote a limit point of the sequence and let K denote a subsequence such that . Further let be a limit point of the bounded sequence and assume without loss of generality that . First we show feasibility of for the problem (1) together with
| 68 |
Consider . For all k it holds that
Since we have and together with by Proposition 4.2 we conclude
and
Hence . Similar arguments show that for every we have
Finally consider . Taking into account (22), (34) and we obtain
Hence, by Proposition 4.2 implies
showing the feasibility of . Moreover, the previous arguments also imply
| 69 |
Taking into account (14), the fact that fulfills M-stationarity conditions at for (19) yields
However, this together with , (69), and (13) yield and consequently (68) follows.
Moreover, by first order optimality condition we have
for each k and by passing to a limit and by taking into account that by Proposition 4.2 we obtain
Hence, invoking (14) again, this together with the feasibility of and (68) implies M-stationarity of and the proof is complete.
The extended SQP algorithm for MPVC
In this section we investigate what can be done in order to secure Q_M-stationarity of the limit points. First, note that to prove M-stationarity of the limit points in Theorem 4.1 we only used that , i.e. it is sufficient to exploit only the M-stationarity of the solutions of the auxiliary problems. Further, recalling the comments after Lemma 3.1, the solution of is M-stationary for the auxiliary problem. Thus, in Algorithm 3.1 for solving the auxiliary problem, it is sufficient to consider only the last problem of the four problems (31), (32). Moreover, the definition of the limiting normal cone (11) reveals that, in general, the limiting process abolishes any stationarity stronger than M-stationarity, even S-stationarity.
Nevertheless, in practical situations it is likely that some assumption securing that a stronger stationarity will be preserved in the limiting process may be fulfilled. E.g., let be a limit point of . If we assume that for all k sufficiently large it holds that , then is at least Q_M-stationary for (1). This follows easily, since now for all it holds that and consequently
This observation suggests that to obtain a stronger stationarity of a limit point, the key is to correctly identify the bi-active index set at the limit point and it serves as a motivation for the extended version of our SQP method. Before we can discuss the extended version, we summarize some preliminary results.
Preliminary results
Let and be continuously differentiable. Given a vector we define the linear problem
| 70 |
Note that is always feasible for this problem. Next we define a set A by
| 71 |
Let and recall that the Mangasarian-Fromovitz constraint qualification (MFCQ) holds at if the matrix has full row rank and there exists a vector such that
Moreover, for a matrix M we denote by the norm given by
| 72 |
and we also omit the index p in case .
Lemma 5.1
Let , assume that MFCQ holds at and let denote the solution of . Then for every there exists such that if then
| 73 |
where d denotes the solution of LP(x).
Proof
Robinson's classical result (c.f. [9, Corollary 1, Theorem 3]), together with MFCQ at , yields the existence of and such that for every x with there exists with and
Since , by setting we obtain that is feasible for LP(x) and
Thus, taking into account and , we obtain
Hence, given , by continuity of objective and constraint functions as well as their derivatives at we can define such that for all x with it holds that
Consequently, we obtain
and since by feasibility of for LP(x), the claim is proved.
Lemma 5.2
Let be a given constant and for a vector of positive parameters let us define the following function
| 74 |
Further assume that there exist and a compact set C such that for all it holds that , where d denotes the solution of LP(x). Then there exists such that
| 75 |
holds for all and every .
Proof
Definition of , together with for , yield
| 76 |
By uniform continuity of the derivatives of constraint functions and objective function on compact sets, it follows that there exists such that for all and every h with we have
| 77 |
Hence, for all and every we obtain
On the other hand, taking into account , (77) and
we similarly obtain for all and every
Consequently, (75) follows from (76) and the proof is complete.
The extended version of Algorithm 4.1
For every vector and every partition we define the linear problem
| 78 |
Note that is always feasible for this problem and that the problem coincides with the problem LP(x) with a, b given by
| 79 |
The following proposition provides the motivation for introducing the problem .
Proposition 5.1
Let be feasible for (1). Then is Q-stationary with respect to if and only if the solutions and of the problems and fulfill
| 80 |
Proof
Feasibility of for and implies
Denote by and the solutions of and without the constraint , and denote these problems by and . Clearly, we have
The dual problem of for is given by
| 81 |
where .
Assume first that is Q-stationary with respect to . Then the multipliers from the definition of Q-stationarity are feasible for the dual problems of and , respectively, both with objective value equal to zero. Hence, duality theory of linear programming yields that and consequently (80) follows.
On the other hand, if (80) is fulfilled, it follows that as well. Thus, is an optimal solution for and , and duality theory of linear programming yields that the solutions and of the dual problems exist and their objective values are both zero. However, this implies that for we have
and consequently fulfills the conditions of and fulfills the conditions of , showing that is indeed Q-stationary with respect to .
Now for each k consider two partitions and let and denote the solutions of and . Choose such that
| 82 |
and let denote the corresponding partition. Next, we define the function in the following way
| 83 |
Note that the function coincides with for a, b given by (79) with and given by
Proposition 5.2
For all it holds that
| 84 |
Proof
Non-negativity of the distance function, together with (20) yield for every
Hence (84) now follows from
An outline of the extended algorithm is as follows.
Algorithm 5.1
(Solving the MPVC*)
- Initialization:
- Select a starting point together with a positive definite matrix , a parameter and constants and .
- Select positive penalty parameters .
- Set the iteration counter .
- Next iterate:
- Compute new penalty parameters .
- Set where is a point on the polygonal line connecting the points such that an appropriate merit function depending on is decreased.
- Set , the final value of in Algorithm 3.1.
- Update to get positive definite matrix .
- Set and go to step 2.
Naturally, Remark 4.1 regarding the stopping criteria for Algorithm 4.1 applies to this algorithm as well.
Lemma 5.3
Index j(k) is well defined.
Proof
In order to show that j(k) is well defined, we have to prove the existence of some j such that either (85) or (86) is fulfilled. By (84) we know that . In case , every j sufficiently large clearly fulfills (86). On the other hand, if , taking into account (84) we obtain
However, Lemma 5.2 for and yields that if then there exists some such that
holds for all and thus (85) is fulfilled for every j sufficiently large. This finishes the proof.
Convergence of the extended algorithm
We consider the behavior of the Algorithm 5.1 when it does not prematurely stop and it generates an infinite sequence of iterates
We discuss the convergence behavior under the following additional assumption.
Assumption 2
Let be a limit point of the sequence of iterates .
Mangasarian-Fromovitz constraint qualification (MFCQ) holds at for constraints , where A is given by (71) and a, b are given by (79) with or .
- There exists a subsequence such that and
Note that the Next iterate step of Algorithm 5.1 remains almost unchanged compared to the Next iterate step of Algorithm 4.1; we just consider the point instead of . Consequently, most of the results from subsections 4.1 and 4.2 remain valid, possibly after replacing by where needed, e.g. in Lemma 4.3. The only exception is the proof of Lemma 4.5, where we have to show that the sequence is monotonically decreasing. This now follows from (85), and hence Lemma 4.5 remains valid as well.
We state now the main result of this section.
Theorem 5.1
Let Assumptions 1 and 2 be fulfilled. Then every limit point of the sequence of iterates is at least -stationary for problem (1).
Proof
Let denote a limit point of the sequence and let denote a subsequence from Assumption 2 (2.). Since
we conclude that and by applying Theorem 4.1 to sequence we obtain the feasibility of for problem (1).
Next we consider as in Proposition 5.1 with and without loss of generality we only consider , where is given by Lemma 4.4. We show by contraposition that the case cannot occur. Assume, on the contrary, that, say . Assumption 2 (2.) yields that and feasibility of for (1) together with imply for A given by (71) and a, b given by (79) with . Taking into account Assumption 2 (1.), Lemma 5.1 then yields that for there exists such that for all we have , with given by (82).
Next, we choose to be such that for it holds that and we set . From Lemma 5.2 we obtain that
| 87 |
holds for all . Moreover, by choosing larger if necessary we can assume that for all we have
| 88 |
For the partition corresponding to it holds that and this, together with the feasibility of for (1), imply for . Therefore, taking into account (22), we obtain
Consequently, (84) and (88) yield for all
Thus, from (87) and (84) we obtain for all
Now consider j with . We see that , since and consequently j fulfills (85) and violates (86). However, then we obtain for all
a contradiction.
Hence it follows that the solutions fulfill and by Proposition 5.1 we conclude that is -stationary with respect to and consequently also -stationary for problem (1).
Finally, we discuss how to choose the partitions and such that Assumption 2 (2.) will be fulfilled. Let us consider a sequence of nonnegative numbers such that for every limit point with it holds that
| 89 |
and let us define
Proposition 5.3
For and defined by and , Assumption 2 (2.) is fulfilled.
Proof
Let be a limit point of the sequence such that . Recall that is given by (8) and let us set , where is given by (72). Further, taking into account (89), consider such that for all it holds that . Hence, for all with we conclude
| 90 |
Now consider , i.e. . By choosing larger if necessary we can assume that for all it holds that and consequently, taking into account (90), for all we have
showing . By similar argumentation and by increasing if necessary we obtain that for all it holds that
| 91 |
However, feasibility of for (1) yields
and the index sets are pairwise disjoint subsets of V by definition. Hence we claim that (91) must in fact hold with equalities. Indeed, e.g.
This finishes the proof.
Note that if we assume that there exist a constant , a number and a limit point such that for all it holds that
by setting we obtain (89), since
Numerical results
Algorithm 4.1 was implemented in MATLAB. To perform numerical tests, we used a subset of the test problems considered in the thesis of Hoheisel [7].
First we considered the so-called academic example
| 92 |
As in [7], we tested 289 different starting points with . For 84 starting points our algorithm found a global minimizer (0, 0) with objective value 0, while for the remaining 205 starting points a local minimizer (0, 5) with objective value 10 was found. Hence, convergence to the perfidious candidate , which is not a local minimizer, did not occur (see [7]).
As expected, after adding a constraint to the model (92) to artificially exclude the point (0, 0), which is unsuitable for the practical application, we reached the point (0, 5), which is now a global minimizer. For more detailed information about the problem we refer the reader to [7] and [2].
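The multi-start testing protocol (a grid of starting points, tallying which stationary point each run converges to) can be sketched on a toy objective. Everything below is a stand-in: the bi-modal function, the grid, and plain gradient descent replace the MPVC (92) and our SQP method, and serve only to illustrate the basin-counting methodology.

```python
# Illustrative multi-start tally on a toy objective with two minimizers,
# f(x, y) = (x**2 - 1)**2 + y**2, minimized at (-1, 0) and (1, 0).
# This mimics only the testing protocol (grid of starts, count basins);
# it is NOT the MPVC (92), and it uses plain gradient descent, not SQP.

def grad(x, y):
    return (4.0 * x * (x * x - 1.0), 2.0 * y)

def descend(x, y, step=0.05, iters=2000):
    for _ in range(iters):
        gx, gy = grad(x, y)
        x, y = x - step * gx, y - step * gy
    return x, y

def tally(starts, tol=1e-6):
    counts = {(-1.0, 0.0): 0, (1.0, 0.0): 0, "other": 0}
    for x0, y0 in starts:
        x, y = descend(x0, y0)
        if abs(x + 1.0) < tol and abs(y) < tol:
            counts[(-1.0, 0.0)] += 1
        elif abs(x - 1.0) < tol and abs(y) < tol:
            counts[(1.0, 0.0)] += 1
        else:
            counts["other"] += 1       # e.g. stuck at the stationary x = 0
    return counts

# 5 x 5 grid of starting points in [-1, 1]^2
grid = [(i * 0.5, j * 0.5) for i in range(-2, 3) for j in range(-2, 3)]
```

Starts with `x0 < 0` end in the left basin, starts with `x0 > 0` in the right one, and the `x0 = 0` column stalls at the stationary point in between, mirroring how a grid of starts can expose convergence to non-minimizing stationary points.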
Next we solved two examples in truss topology optimization, the so-called Ten-bar Truss and Cantilever Arm. The underlying model for both of them is as follows:
| 93 |
Here the matrix K(a) denotes the global stiffness matrix of the structure a and the vector contains the external forces applied at the nodal points. Further, for each i the function denotes the stress of the ith potential bar and are positive constants. Again, for more background on the model and the following truss topology optimization problems we refer to [7].
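To illustrate the quantities appearing in (93), a minimal sketch of assembling K(a) and evaluating the compliance f^T u for a toy two-bar planar truss follows. The node coordinates, Young's modulus, and load are arbitrary stand-ins, and the bar stress is computed by the standard formula E times elongation over length; this is not the paper's ground structures.

```python
import numpy as np

# Minimal two-bar planar truss: nodes 0 and 1 are fixed, node 2 is free.
# For bar i with cross-section a[i], length L, direction cosines (cx, cy),
# the element stiffness on the free node's two dofs is
#   (E * a[i] / L) * [[cx*cx, cx*cy], [cx*cy, cy*cy]],
# so the global stiffness K(a) is linear in a.  All values are illustrative.

E = 1.0                                    # Young's modulus (stand-in)
nodes = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
bars = [(0, 2), (1, 2)]                    # both bars end at free node 2

def assemble_K(a):
    K = np.zeros((2, 2))                   # only node 2's dofs are free
    for i, (p, q) in enumerate(bars):
        d = nodes[q] - nodes[p]
        L = np.linalg.norm(d)
        c = d / L                          # direction cosines (cx, cy)
        K += (E * a[i] / L) * np.outer(c, c)
    return K

def compliance_and_stresses(a, f):
    u = np.linalg.solve(assemble_K(a), f)  # displacements of node 2
    sigma = []
    for (p, q) in bars:
        d = nodes[q] - nodes[p]
        L = np.linalg.norm(d)
        c = d / L
        sigma.append(E * np.dot(c, u) / L) # bar stress: E * elongation / L
    return float(f @ u), sigma
```

Under the vanishing-constraint viewpoint, setting some `a[i] = 0` removes that bar's contribution to K(a), and its stress bound is then no longer required to hold, which is exactly the structure the model (93) exploits.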
In the Ten-bar Truss example we consider the ground structure depicted in Fig. 1a, consisting of potential bars and 6 nodal points. We consider a load acting at the bottom right-hand node, pulling vertically to the ground with force . The two left-hand nodes are fixed, and hence the structure has degrees of freedom for displacements.
Fig. 1.
Ten-bar Truss example
We set and as in [7]; the resulting structure, consisting of 5 bars, is shown in Fig. 1b and is the same as the one in [7]. For comparison, the following table shows the full data, including the stress values.
| i | |||
|---|---|---|---|
| 1 | 0 | 1.029700000000000 | −1.000000000000000 |
| 2 | 1.000000000000000 | 1.000000000000000 | 1.000000000000000 |
| 3 | 0 | 1.119550000000000 | −2.000000000000000 |
| 4 | 1.000000000000000 | 1.000000000000000 | 1.302400000000000 |
| 5 | 0 | 0.485150000000000 | −1.970300000000000 |
| 6 | 1.414213562373095 | 1.000000000000000 | −3.000000000000000 |
| 7 | 0 | 0.302400000000000 | −8.000000000000000 |
| 8 | 1.414213562373095 | 1.000000000000000 | −6.511800000000000 |
| 9 | 2.000000000000000 | 1.000000000000000 | |
| 10 | 0 | 1.488200000000000 |
We can see that although our final structure and optimal volume are the same as those in [7], the solution is different. For instance, since , our solution does not reach the maximal compliance. Similarly as in [7], we observe the effect of vanishing constraints, since the stress values in the table show that
In the Cantilever Arm example we consider the ground structure depicted in Fig. 2a, consisting of potential bars and 27 nodal points. Again, we consider a load acting at the bottom right-hand node, pulling vertically to the ground with force . Now the three left-hand nodes are fixed, and hence .
Fig. 2.
Cantilever Arm example
We proceed as in [7] and we first set and . The resulting structure consisting of only 24 bars (compared to 38 bars in [7]) is shown in Fig. 2b. Similarly as in [7], we have and . On the other hand, our optimal volume is a bit larger than the optimal volume 23.1399 in [7]. Also, analysis of our stress values shows that
and hence, although it holds true that both the absolute stresses and the absolute “fictitious stresses” (i.e., for zero bars) are small compared to as in [7], the difference is that in our case they are not the same.
The situation becomes more interesting when we change the stress bound to . The obtained structure consisting again of only 25 bars (compared to 37 or 31 bars in [7]) is shown in Fig. 2c. As before we have and . Our optimal volume is now much closer to the optimal volumes 23.6608 and 23.6633 in [7]. Similarly as in [7], we clearly observe the effect of vanishing constraints since our stress values show
Finally, we obtained 32 bars (in contrast to 24 bars in [7]) satisfying both
To better demonstrate the performance of our algorithm, we conclude this section with a table giving more detailed information about solving the Ten-bar Truss problem and two Cantilever Arm problems (CA1 with and CA2 with ). We use the following notation.
| Problem | Name of the test problem |
|---|---|
| (n, q) | Number of variables, number of all constraints |
| Total number of outer iterations of the SQP method | |
| Total numbers of inner iterations corresponding to each outer iteration | |
| Overall sum of steps made during line search | |
| Total number of function evaluations, | |
| Total number of gradient evaluations, |
| Problem | (n, q) | |||||
|---|---|---|---|---|---|---|
| Ten-bar Truss | (18, 39) | 14 | 67 | 81 | 15 | |
| CA1 | (272, 721) | 401 | 401 | 802 | 402 | |
| CA2 | (272, 721) | 1850 | 1850 | 3700 | 1851 |
Acknowledgements
Open access funding provided by Austrian Science Fund (FWF). This work was supported by the Austrian Science Fund (FWF) under Grant P 26132-N25.
Contributor Information
Matúš Benko, Email: benko@numa.uni-linz.ac.at.
Helmut Gfrerer, Email: helmut.gfrerer@jku.at.
References
- 1.Achtziger W, Kanzow C. Mathematical programs with vanishing constraints: optimality conditions and constraint qualifications. Math. Program. 2008;114:69–99. doi: 10.1007/s10107-006-0083-3.
- 2.Achtziger W, Hoheisel T, Kanzow C. A smoothing-regularization approach to mathematical programs with vanishing constraints. Comput. Optim. Appl. 2013;55:733–767. doi: 10.1007/s10589-013-9539-6.
- 3.Achtziger W, Kanzow C, Hoheisel T. On a relaxation method for mathematical programs with vanishing constraints. GAMM-Mitt. 2012;35:110–130. doi: 10.1002/gamm.201210009.
- 4.Benko M, Gfrerer H. An SQP method for mathematical programs with complementarity constraints with strong convergence properties. Kybernetika. 2016;52:169–208.
- 5.Benko M, Gfrerer H. On estimating the regular normal cone to constraint systems and stationary conditions. Optimization. 2017;66(1):61–92. doi: 10.1080/02331934.2016.1252915.
- 6.Fukushima M, Pang JS. Convergence of a smoothing continuation method for mathematical programs with complementarity constraints. In: Théra M, Tichatschke R, editors. Ill-Posed Variational Problems and Regularization Techniques. Lecture Notes in Economics and Mathematical Systems. Berlin: Springer; 1999.
- 7.Hoheisel, T.: Mathematical programs with vanishing constraints. Dissertation, Department of Mathematics, University of Würzburg (2009)
- 8.Izmailov AF, Solodov MV. Mathematical programs with vanishing constraints: optimality conditions, sensitivity, and a relaxation method. J. Optim. Theory Appl. 2009;142:501–532. doi: 10.1007/s10957-009-9517-4.
- 9.Robinson SM. Stability theory for systems of inequalities, part II: differentiable nonlinear systems. SIAM J. Numer. Anal. 1976;13:497–513. doi: 10.1137/0713043.
- 10.Scholtes S. Convergence properties of a regularization scheme for mathematical programs with complementarity constraints. SIAM J. Optim. 2001;11:918–936. doi: 10.1137/S1052623499361233.