Abstract
We propose a complete-search algorithm for solving a class of non-convex, possibly infinite-dimensional, optimization problems to global optimality. We assume that the optimization variables are in a bounded subset of a Hilbert space, and we determine worst-case run-time bounds for the algorithm under certain regularity conditions on the cost functional and the constraint set. Because these run-time bounds are independent of the number of optimization variables and, in particular, are valid for optimization problems with infinitely many optimization variables, we prove that the algorithm converges to an ε-suboptimal global solution within finite run-time for any given termination tolerance ε > 0. Finally, we illustrate these results for a problem in the calculus of variations.
Keywords: Infinite-dimensional optimization, Complete search, Branch-and-lift, Convergence analysis, Complexity analysis
Introduction
Infinite-dimensional optimization problems arise in many research fields, including optimal control [7, 8, 24, 54], optimization with partial differential equations (PDE) embedded [22], and shape/topology optimization [5]. In practice, these problems are often solved approximately by applying discretization techniques; the original infinite-dimensional problem is replaced by a finite-dimensional approximation that can then be tackled using standard optimization techniques. However, the resulting discretized optimization problems may comprise a large number of optimization variables, which grows unbounded as the accuracy of the approximation is refined. Unfortunately, worst-case run-time bounds for complete-search algorithms in nonlinear programming (NLP) scale poorly with the number of optimization variables. For instance, the worst-case run-time of spatial branch-and-bound [17, 44] scales exponentially with the number of optimization variables. In contrast, algorithms for solving convex optimization problems in polynomial run-time are known [11, 40], e.g. in linear programming (LP) or convex quadratic programming (QP). While these efficient algorithms enable the solution of very large-scale convex optimization problems, such as structured or sparse problems, in general their worst-case run-time bounds also grow unbounded as the number of decision variables tends to infinity.
Existing theory and algorithms that directly analyze and exploit the infinite-dimensional nature of an optimization problem are mainly found in the field of convex optimization. For the most part, these algorithms rely on duality in convex optimization in order to construct upper and lower bounds on the optimal solution value, although establishing strong duality in infinite-dimensional problems can prove difficult. In this context, infinite-dimensional linear programming problems have been analyzed thoroughly [3]. A variety of algorithms are also available for dealing with convex infinite-dimensional optimization problems, some of which have been analyzed in generic Banach spaces [14], as well as certain tailored algorithms for continuous linear programming [4, 13, 32].
In the field of non-convex optimization, problems with an infinite number of variables are typically studied in a local neighborhood of a stationary point. For instance, local optimality in continuous-time optimal control problems can be analyzed by using Pontryagin’s maximum principle [46], and a number of local optimal control algorithms are based on this analysis [6, 12, 51, 54]. More generally, approaches in the classical field of variational analysis [37] rely on local analysis concepts, from which information about global extrema may not be derived in general. In fact, non-convex infinite-dimensional optimization remains an open field of research and, to the best of our knowledge, there currently are no generic complete-search algorithms for solving such problems to global optimality.
This paper asks whether a global optimization algorithm can be constructed whose worst-case run-time complexity is independent of the number of optimization variables, so that it would remain tractable for infinite-dimensional optimization problems. Clearly, devising such an algorithm may only be possible for a certain class of optimization problems. Interestingly, the fact that the “complexity” or “hardness” of an optimization problem does not necessarily depend on the number of optimization variables has been observed—and is in fact exploited—in state-of-the-art global optimization solvers for NLP/MINLP, although these observations are still to be analyzed in full detail. For instance, instead of applying a branch-and-bound algorithm in the original space of optimization variables, global NLP/MINLP solvers such as BARON [49, 52] or ANTIGONE [34] proceed by lifting the problem to a higher-dimensional space via the introduction of auxiliary variables from the DAG decomposition of the objective and constraint functions. In a different context, the solution of a lifted problem in a higher-dimensional space has become popular in numerical optimal control, where the so-called multiple-shooting methods often outperform their single-shooting counterparts despite the fact that the former call for the solution of a larger-scale (discretized) NLP problem [7, 8]. The idea that certain optimization problems become easier to solve than equivalent problems in fewer variables is also central to the work on lifted Newton methods [2]. To the best of our knowledge, such behaviors cannot currently be explained with results from the field of complexity analysis, which typically give monotonically increasing worst-case run-time bounds as the number of optimization variables increases. In this respect, these run-time bounds predict the opposite of what can sometimes be observed in practice.
Problem formulation
The focus of this paper is on complete-search algorithms for solving non-convex optimization problems of the form:
inf_{x ∈ C} F(x)        (1)
where F : H → ℝ and C ⊆ H denote the cost functional and the constraint set, respectively; the domain H of this problem is a (possibly infinite-dimensional) Hilbert space with respect to the inner product ⟨·, ·⟩. The theoretical considerations in the paper do not assume a separable Hilbert space, although our various illustrative examples are based on separable spaces.
Definition 1
A feasible point x* ∈ C is said to be an ε-suboptimal global solution—or ε-global optimum—of (1), with ε > 0, if
F(x*) ≤ inf_{x ∈ C} F(x) + ε.
We make the following assumptions regarding the geometry of C throughout the paper.
Assumption 1
The constraint set C is convex, has a nonempty relative interior, and is bounded with respect to the induced norm on H; that is, there exists a constant γ < ∞ such that
⟨x, x⟩ ≤ γ² for all x ∈ C.
Our main objective in this paper is to develop an algorithm that can locate an ε-suboptimal global optimum of Problem (1) in finite run-time for any given accuracy ε > 0, provided that F satisfies certain regularity conditions alongside Assumption 1.
Remark 1
Certain infinite-dimensional optimization problems are formulated in a Banach space rather than a Hilbert space, for instance in the field of optimal control of partial differential equations in order to analyze the existence of extrema [22]. The optimization problem (1) becomes
inf_{x ∈ C_B} F(x)        (2)
with F : B → ℝ and C_B a convex bounded subset of B. Provided that:
1. the Hilbert space H is convex and dense in B;
2. the function F is upper semi-continuous on B; and
3. the constraint set has a nonempty relative interior;
we may nonetheless consider Problem (1) with C = C_B ∩ H instead of (2), since any ε-suboptimal global solution of the former is also an ε-suboptimal global solution of (2), and both problems admit such ε-suboptimal points. Because Conditions 1–3 are often satisfied in practical applications, it is not restrictive for the purposes of this paper to assume that the domain of the optimization variables is indeed a Hilbert space.
Outline and contributions
The paper starts by discussing, in Sect. 2, several regularity conditions for sets and functionals defined in a Hilbert space, based on which complete-search algorithms can be constructed whose run-time is independent of the number of optimization variables. Such an algorithm is presented in Sect. 3 and analyzed in Sect. 4; these two sections constitute the paper's main contributions and novelty. A numerical case study is presented in Sect. 5 in order to illustrate the main results, before the paper concludes in Sect. 6.
Although some of these algorithmic ideas are inspired by a recent paper on global optimal control [25], we develop herein a much more general framework for optimization in Hilbert space. In addition, Sect. 4 derives novel worst-case complexity estimates for the proposed algorithm. We argue that these ideas could help lay the foundations for new ways of analyzing the complexity of certain optimization problems based on their structural properties rather than their number of optimization variables. Although the run-time estimates for the proposed algorithm remain conservative, they indicate that complexity in numerical optimization does not necessarily depend on whether the problem at hand is small-scale, large-scale, or even infinite-dimensional.
Some regularity conditions for sets and functionals in Hilbert space
This section builds upon basic concepts in infinite-dimensional Hilbert spaces in order to arrive at certain regularity conditions for sets and functionals defined in such spaces. Our focus on Hilbert space is motivated by the ability to construct an orthogonal basis (φ_i)_{i≥1} such that every x ∈ H can be expanded as
x = Σ_{i=1}^∞ σ_i ⟨φ_i, x⟩ φ_i
for some scalars σ_i. We make the following assumption throughout the paper:
Assumption 2
The basis functions φ_i are uniformly bounded with respect to the induced norm on H.
Equipped with such a basis, we can define the associated projection functions 𝒫_M : H → H, for each M ∈ ℕ, as
𝒫_M(x) := Σ_{i=1}^M σ_i ⟨φ_i, x⟩ φ_i.
A natural question to ask at this point is what can be said about the distance between an element x ∈ H and its projection 𝒫_M(x) for a given M.
Definition 2
We call d_x(M) := ‖x − 𝒫_M(x)‖ the distance between an element x ∈ H and its projection 𝒫_M(x). Moreover, given the constraint set C ⊆ H, we define
d_max(M) := sup_{x ∈ C} d_x(M).
Lemma 1
Under Assumption 1, the function d_max is uniformly bounded from above by γ.
Proof
For each x ∈ C, we have
‖x − 𝒫_M(x)‖² = ⟨x, x⟩ − ⟨𝒫_M(x), 𝒫_M(x)⟩ ≤ ⟨x, x⟩ ≤ γ².
The result follows by Assumption 1.
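To make these quantities concrete, the following minimal sketch (ours) computes the projection and the associated distance numerically, assuming the Hilbert space L²([0, 1]) with the shifted Legendre basis used in the examples below; all function names are our own.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

# Sketch: projection P_M x onto span{phi_1, ..., phi_M} and the distance
# ||x - P_M x|| in H = L^2([0,1]), for the shifted Legendre polynomials
# phi_i(s) = P_i(2s - 1), which satisfy <phi_i, phi_j> = delta_ij / (2i + 1).

def distance_to_projection(f, M, n_quad=400):
    t, w = leggauss(n_quad)          # Gauss-Legendre nodes/weights on [-1, 1]
    s = 0.5 * (t + 1.0)              # nodes mapped to [0, 1]
    y = f(s)
    proj = np.zeros_like(s)
    for i in range(M):
        phi = Legendre.basis(i)(t)                        # phi_i at the nodes
        coeff = (2 * i + 1) * 0.5 * np.sum(w * y * phi)   # sigma_i <phi_i, f>
        proj += coeff * phi
    residual = y - proj
    return np.sqrt(0.5 * np.sum(w * residual ** 2))       # L^2 norm on [0, 1]

# the distance shrinks as the truncation order M grows
for M in (2, 8, 32):
    print(M, distance_to_projection(lambda s: np.abs(s - 0.3), M))
```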
Despite being uniformly bounded, the function d_max may not converge to zero as M → ∞ in an infinite-dimensional Hilbert space in general. Such a lack of convergence is illustrated in the following example.
Example 1
Consider the case that all the basis functions φ_i are in the constraint set C, and define the sequence (x_k)_{k≥1} with x_k := φ_k. For all k > M, we have 𝒫_M(x_k) = 0, and therefore
d_max(M) ≥ sup_{k > M} ‖x_k − 𝒫_M(x_k)‖ = sup_{k > M} ‖φ_k‖ > 0.
This behavior is unfortunate, because the existence of minimizers to Problem (1) cannot be ascertained without making further regularity assumptions. Moreover, for a sequence of feasible points of Problem (1) converging to an infimum, it could be that
lim_{M→∞} inf_{x ∈ C} F(𝒫_M(x)) ≠ inf_{x ∈ C} F(x).
That is, any attempt to approximate the infimum by constructing a sequence of finite parameterizations of the optimization variable x could in principle be unsuccessful.
A principal aim of the following sections is to develop an optimization algorithm whose convergence to an ε-global optimum of Problem (1) can be certified. But instead of making assumptions about the existence, or even the regularity, of the minimizers of Problem (1), we shall impose suitable regularity conditions on the objective function F in (1). In preparation for this analysis, we start by formalizing a particular notion of regularity for the elements of H.
Definition 3
An element g ∈ H is said to be regular for the constraint set C if
3 |
Moreover, we call the function the convergence rate at g on C.
Theorem 1
For any g ∈ H, we have
4 |
In the particular case of g being a regular element for C, we have
Proof
Let g ∈ H, and consider the optimization problem
where we have introduced the variable such that
Since the functions are orthogonal to each other, we have for all , and it follows that
Next, we use duality to obtain
where are multipliers associated with the constraints for . Applying the Cauchy-Schwarz inequality gives
and with the particular choice for each , we have
The optimal value of the minimization problem
can be estimated analogously, giving , and the result follows.
The following example establishes the regularity of piecewise smooth functions with a finite number of singularities in the Hilbert space of square-integrable functions with the Legendre polynomials as orthogonal basis functions.
Example 2
We consider the Hilbert space of standard square-integrable functions on the interval [0, 1] equipped with the standard inner product, and we choose the Legendre polynomials on the interval [0, 1], with appropriate weighting factors σ_i, as orthogonal basis functions φ_i. Our focus is on piecewise smooth functions with a given finite number of singularities, for which we want to establish regularity in the sense of Definition 3 for a bounded constraint set C.
There are numerous results on approximating functions using polynomials, including convergence rate estimates [15]. One such result in [48] shows that any piecewise smooth function can be approximated with a polynomial of degree M such that
5 |
for any given with either and , or and ; some constants ; and where d(y) denotes the distance to the nearest singularity. In particular, the following convergence rate estimate can be derived using this result in the present example, for any piecewise smooth functions with a finite number of singularities:
for some constant. In order to establish the very last part of the above inequality, it is enough to consider a function g with a single singularity, e.g., at the mid-point 1/2, and using:1
6 |
Convergence rate estimates for k-times differentiable and piecewise smooth functions can be obtained in a similar way, using for instance the results in [15, 48].
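As a quick numerical check of such rate estimates (our experiment; the O(1/√M) scaling tested below is an assumption consistent with the discussion above), one can track the Legendre truncation error of a unit step against the order M:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

# Empirical rate check for a piecewise smooth g with a single singularity
# at 1/2: if the L^2 truncation error decays like O(1/sqrt(M)), then the
# scaled errors sqrt(M) * ||g - P_M g|| should stabilize as M grows.

def truncation_error(g, M, n_quad=2000):
    t, w = leggauss(n_quad)
    s = 0.5 * (t + 1.0)
    y = g(s)
    proj = np.zeros_like(s)
    for i in range(M):
        phi = Legendre.basis(i)(t)
        proj += (2 * i + 1) * 0.5 * np.sum(w * y * phi) * phi
    return np.sqrt(0.5 * np.sum(w * (y - proj) ** 2))

step = lambda s: (s >= 0.5).astype(float)
for M in (4, 16, 64, 256):
    e = truncation_error(step, M)
    print(f"M = {M:4d}   error = {e:.4f}   sqrt(M)*error = {np.sqrt(M)*e:.4f}")
```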
A useful generalization of Definition 3 and a corollary of Theorem 1 are given below.
Definition 4
A set G ⊆ H is said to be regular for C if
Moreover, we call the function the worst-case convergence rate for G on C.
Corollary 1
For any regular set G ⊆ H, we have
Remark 2
While any subset of the Euclidean space ℝⁿ is trivially regular for a given bounded subset C ⊂ ℝⁿ, only certain subsets/subspaces of an infinite-dimensional Hilbert space happen to be regular. Consider for instance the space of square-integrable functions L²([a, b]), and let G be any subset of p-times differentiable functions on [a, b] with uniformly Lipschitz-continuous p-th derivatives. It can be shown—e.g., from the analysis in [27] using the standard trigonometric Fourier basis, or from the results in [55] using the Legendre polynomial basis—that
for any bounded constraint set C, and G is thereby regular for C. This leads to a rather typical situation, whereby the stronger the regularity assumptions on the function class, the faster the convergence of the associated worst-case convergence rate—an increase in the convergence-rate order with p in this instance. In the limit of smooth (C^∞) functions, it can even be established—e.g., using standard results from Fourier analysis [19, 28]—that the convergence rate becomes exponential,
Example 2
(Continued) Consider the following set of unit-step functions
for which we want to establish regularity in the sense of Definition 4. Using earlier results in Example 2, it is known that the function can be approximated with a sequence of polynomials of degree M such that
Likewise, for every t, we can construct a family of polynomials
Since the latter satisfy the same approximation property as above, namely that
where the constant is independent of t or M, we have .
This example can be generalized to other classes of functions. For instance, given any smooth function , the subset
is regular in H, and also satisfies . This result can be established by writing the elements of this subset as the product of the piecewise smooth function f and the function , and then approximating the two factors separately.
In the remainder of this section, we analyze and illustrate a regularity condition for the cost functional in Problem (1).
Definition 5
The functional F is said to be strongly Lipschitz-continuous on C if there exist a bounded subset G ⊆ H that is regular on C and a constant K < ∞ such that
7 |
Remark 3
In the special case of an affine functional F, given by
where b ∈ ℝ and c ∈ H is a regular element for C, the condition (7) is trivially satisfied with G = {c} and K = 1. In this interpretation, the regularity condition (7) essentially provides a means of keeping the nonlinear part of F under control.
Remark 4
Consider the finite-dimensional Euclidean space ℝⁿ, a bounded subset C ⊂ ℝⁿ, and a continuously differentiable function F whose first derivative is bounded on C. By the mean-value theorem, F satisfies
Thus, any continuously differentiable function with a bounded first derivative is strongly Lipschitz-continuous on any bounded subset of ℝⁿ. This result can be generalized to certain classes of functionals in infinite-dimensional Hilbert space. For instance, let F : H → ℝ be Fréchet differentiable, such that
and let the set of Fréchet derivatives G := {DF(x) : x ∈ C} be both bounded and regular on C. Then, F is strongly Lipschitz-continuous on C.
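The reasoning behind this remark can be made explicit in a short worked step (our reconstruction; the first display is the standard integral form of the mean-value theorem, and it uses the convexity of C from Assumption 1):

```latex
% For x, y in C, Frechet differentiability and convexity of C give
F(x) - F(y) = \int_0^1 \bigl\langle DF\bigl(y + s\,(x - y)\bigr),\, x - y \bigr\rangle \,\mathrm{d}s ,
% so that, with G := \{ DF(z) : z \in C \},
\bigl| F(x) - F(y) \bigr| \;\le\; \sup_{g \in G} \bigl| \langle g,\, x - y \rangle \bigr| .
% If G is in addition bounded and regular on C, this is a bound of the
% form required by condition (7).
```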
The following two examples investigate strong Lipschitz continuity for certain classes of functionals in the practical space of square-integrable functions with the Legendre polynomials as orthogonal basis functions. The first one (Example 3) illustrates the case of a functional that is not strongly Lipschitz-continuous; the second one (Example 4) identifies a broad class of strongly Lipschitz-continuous functionals defined via the solution of an embedded ODE system. The intention here is to help the reader develop an intuition that strongly Lipschitz-continuous functionals occur naturally in many, although not all, problems of practical relevance.
Example 3
We consider the Hilbert space of square-integrable functions on the interval [0, 1] with the standard inner product, and select the orthogonal basis functions as the Legendre polynomials on the interval [0, 1] with weighting factors . We investigate whether the functional F given below is strongly Lipschitz-continuous on the set ,
Consider the family of sets defined by
If the condition (7) were to hold for some bounded and regular set G, we would have by Theorem 1 that
and it would follow from Corollary 1 that
However, this leads to a contradiction since we also have
Therefore, the regularity condition (7) cannot be satisfied for any bounded and regular set G, and F is not strongly Lipschitz-continuous on C.
Remark 5
The result that the functional F in Example 3 is not strongly Lipschitz-continuous on C is not in contradiction with Remark 4. Although F is Fréchet differentiable in , the corresponding set G of the Fréchet derivatives of F is indeed unbounded.
Example 4
We again consider the Hilbert space of square-integrable functions on the interval [0, 1] equipped with the standard inner product, and select the orthogonal basis functions as the Legendre polynomials on the interval [0, 1] with weighting factors . Our focus is on the ordinary differential equation (ODE)
8 |
where B is a constant matrix and f a continuously differentiable, globally Lipschitz-continuous function, so that the solution trajectory is well-defined for all inputs u.
for some real vector c. Moreover, the constraint set C may be any uniformly bounded set of functions, such as the one given by simple uniform bounds of the form
The following developments aim to establish that F is strongly Lipschitz-continuous on C.
By Taylor’s theorem, the defect satisfies the differential equation
with and . The right-hand-side function f being globally Lipschitz-continuous, we have, for any given smooth matrix-valued function A,
for some constant . For a particular choice of A, we can decompose into the sum corresponding to the solution of the ODE system
9 |
10 |
with . In this decomposition, the left-hand side of (7) satisfies
Regarding the linear term first, we have
11 |
with
where denotes the fundamental solution of the linear ODE (9) such that
Since A is smooth, it follows from Example 2 that the set is both regular on C and bounded, and satisfies
Regarding the nonlinear term , since the function is uniformly bounded, applying Gronwall’s lemma to the ODE (10) gives
12 |
for some constant. Finally, combining (11) and (12) shows that F satisfies the condition (7) with this choice of G; thus, F is strongly Lipschitz-continuous on C.
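A small simulation sketch may help make this construction concrete. All data below (the matrix B, vector c, nonlinearity f, initial state, and the control parameterization) are our own placeholders rather than the paper's:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre
from scipy.integrate import solve_ivp

# Placeholder instance of Example 4: a 2-state system x' = f(x) + B u(t)
# on [0, 1], with cost F(u) = c^T x(1); f is smooth and globally Lipschitz.
B = np.array([[0.0], [1.0]])
c = np.array([1.0, 0.0])
f = lambda x: np.array([x[1], -np.sin(x[0])])

def F(u_coeffs):
    """Cost of a control parameterized by shifted-Legendre coefficients."""
    u = lambda t: sum(a * Legendre.basis(i)(2.0 * t - 1.0)
                      for i, a in enumerate(u_coeffs))
    rhs = lambda t, x: f(x) + B @ np.array([u(t)])
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 0.0], rtol=1e-8, atol=1e-10)
    return float(c @ sol.y[:, -1])

print(F([0.5, -0.2, 0.1]))   # cost of one finitely parameterized control
```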
Remark 6
The functional F in the previous example is defined implicitly via the solution of an ODE. The result that such functionals are strongly Lipschitz-continuous is particularly significant insofar as the proposed optimization framework will indeed encompass a broad class of optimal control problems as well as problems in the calculus of variations. In fact, it turns out that strong Lipschitzness still holds when replacing the constant matrix B in (8) with any matrix-valued, continuously differentiable and globally Lipschitz-continuous function of x(t, u), thus encompassing quite a general class of nonlinear affine-control systems. In the case of general nonlinear ODEs, however, strong Lipschitzness may be lost. Strong Lipschitzness could nevertheless be recovered by restricting condition (7) in Definition 5 as
with the projection error set , and also restricting the constraint set C to only contain uniformly bounded and Lipschitz-continuous functions in with uniformly bounded Lipschitz constants.
We close this section with a brief analysis of the relationship between strong and classical Lipschitzness in infinite-dimensional Hilbert space.
Lemma 2
Every strongly Lipschitz-continuous functional on C is also Lipschitz-continuous on C.
Proof
Let G be a bounded subset of H that is regular on C and such that the condition (7) is satisfied. Since G is bounded, there exists a constant γ_G < ∞ such that ‖g‖ ≤ γ_G for all g ∈ G. Applying the Cauchy–Schwarz inequality to the right-hand side of (7) gives
and so F is Lipschitz-continuous on C.
Remark 7
With regularity of the set G alone, i.e., without boundedness, the condition (7) may not imply Lipschitz-continuity, or even continuity, of F. As a counter-example, let G be the subspace spanned by the first N basis functions in the infinite-dimensional Hilbert space H. It is clear that G is regular on any bounded set since for all . Now, consider the functional F given by
for some . For every , we have
Therefore, despite F being discontinuous, the condition (7) is indeed satisfied.
Remark 8
In general, Lipschitz-continuity does not imply strong Lipschitz-continuity in an infinite-dimensional Hilbert space. A counter-example is easily contrived for the functional given by
Although this functional is Lipschitz-continuous, it can be shown by a similar argument as in Example 3 that it is not strongly Lipschitz-continuous.
Global optimization in Hilbert space using complete search
The application of complete-search strategies to infinite-dimensional optimization problems such as (1) calls for an extension of the (spatial) branch-and-bound principle [23] to general Hilbert space. The approach presented in this section differs from branch-and-bound in that the dimension M of the search space is adjusted, as necessary, during the iterations of the algorithm, by using a so-called lifting operation—hence the name branch-and-lift algorithm. The basic idea is to bracket the optimal solution value of Problem (1) and progressively refine these bounds via this lifting mechanism, combined with traditional branching and fathoming.
Based on the developments in Sect. 2, the following subsections describe methods for exhaustive partitioning in infinite-dimensional Hilbert space (Sect. 3.1) and for computing rigorous upper and lower bounds on given subsets of the variable domain (Sect. 3.2), before presenting the proposed branch-and-lift algorithm (Sect. 3.3).
Partitioning in infinite-dimensional Hilbert space
Similar to branch-and-bound search, the proposed branch-and-lift algorithm maintains a partition of finite-dimensional sets . This partition is updated through the repeated application of certain operations, including branching and lifting, in order to close the gap between an upper and a lower bound on the global solution value of the optimization problem (1). The following definition is useful in order to formalize these operations:
Definition 6
With each pair (A, M), we associate a subregion of H given by
Moreover, we say that the set A is infeasible if the associated subregion is empty.
Notice that each subregion is a convex set if the sets C and A are themselves convex. For practical reasons, we restrict ourselves herein to compact subsets A of a class that is easily stored and manipulated by a computer; for example, A could be an interval box, a polytope, or an ellipsoid.
The ability to detect infeasibility of a set is pivotal for complete search. Under the assumption that the constraint set C is convex (Assumption 1), a certificate of infeasibility can be obtained by considering the convex optimization problem
13 |
It readily follows from the Cauchy–Schwarz inequality that
for any (normalized) basis function , and so implies . Consequently, a set A is infeasible if and only if . Because Slater’s constraint qualification holds for Problem (13) under Assumption 1, one approach to checking infeasibility to within high numerical accuracy relies on duality for computing lower bounds on the optimal solution value —similar in essence to the infinite-dimensional convex optimization techniques in [4, 14]. For the purpose of this paper, our focus is on a general class of non-convex objective functionals F, whereas the constraint set C is assumed to be convex and to have a simple geometry in order to avoid numerical issues in solving feasibility problems of the form (13). We shall therefore assume, from this point onwards, that infeasibility can be verified with high numerical accuracy for any set .
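In a simple implementation, the cheap necessary test suggested by this Cauchy–Schwarz argument can be coded directly; the exact test would instead solve the convex problem (13). The interface below is ours, and it assumes the set A collects the inner products ⟨φ_i, x⟩:

```python
import numpy as np

# Necessary infeasibility certificate (a sketch): Cauchy-Schwarz gives
# |<phi_i, x>| <= ||phi_i|| * gamma for every x in C, so an interval box
# A = [lo, hi] lying outside these bounds certifies that no x in C has
# its first M inner products inside A.

def certainly_infeasible(lo, hi, phi_norms, gamma):
    lo, hi = np.asarray(lo), np.asarray(hi)
    bound = np.asarray(phi_norms) * gamma
    return bool(np.any(lo > bound) or np.any(hi < -bound))

# e.g. with a normalized basis and gamma = 1, this box is infeasible,
# since its first coordinate would require <phi_1, x> >= 1.5 > 1:
print(certainly_infeasible([1.5, -0.1], [2.0, 0.1], [1.0, 1.0], 1.0))
```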
A branching operation subdivides any set in the partition into two compact subsets such that , thereby updating the partition as
On the other hand, a lifting operation essentially lifts any set into a higher-dimensional space under the function , defined such that
The question of how to define the higher-order coefficient in such a lifting is related to the so-called moment problem, which asks under which conditions on a given sequence of scalars, known as the moment sequence, one can find an associated element of H whose coefficients match that sequence. Classical examples of such moment problems are Stieltjes’, Hamburger’s, and Legendre’s moment problems [1]. Here, we adopt the modern standpoint on moment problems based on convex optimization [30, 42], by considering the following optimization subproblems:
14 |
Although both optimization problems in (14) are convex when A and C are convex, they remain infinite-dimensional, and are thus intractable in general. Obtaining lower and upper bounds is nonetheless straightforward under Assumption 1. In case no better approach is available, one can always use
which follows readily from the Cauchy–Schwarz inequality and the property that . As already mentioned in the introduction of the paper, a variety of algorithms are now available for tackling convex infinite-dimensional problems both efficiently and reliably [4, 14], which could provide tighter bounds in practical applications.
A number of remarks are in order:
Remark 9
The idea of introducing a lifting operation to enable partitioning in infinite-dimensional function space was originally proposed by the authors in a recent publication [25], focusing on global optimization of optimal control problems. One principal contribution of the present paper is a generalization of these ideas to global optimization in any Hilbert space, by identifying a set of sufficient regularity conditions on the cost functional and constraint set for the resulting branch-and-lift algorithm to converge to an ε-global solution in finite run-time.
Remark 10
Many recent techniques for global optimization are based on the theory of positive polynomials and their associated linear matrix inequality (LMI) approximations [30, 45], which are themselves originally inspired by moment problems. Although these LMI techniques may be applied in a practical implementation of the aforementioned lifting operation, they are not directly related to the branch-and-lift algorithm developed in the following sections. An important motivation for moving away from the generic LMI framework is that the available implementations scale quite poorly with the number of optimization variables, owing to the combinatorial increase in the number of monomials in the associated multivariate polynomials. A direct approximation of the cost functional F with multivariate polynomials would therefore conflict with our primary objective of developing a global optimization algorithm whose worst-case run-time does not depend on the number of optimization variables.
Strategies for upper and lower bounding of functionals
Besides partitioning, the efficient construction of tight upper and lower bounds on the global solution value of (1) over given subregions of H is key to a practical implementation of branch-and-lift. In what follows, functions such that
15 |
shall be called lower- and upper-bounding functions of the functional F, respectively. A simple approach to constructing these lower and upper bounds relies on the following two-step decomposition:
- Compute bounds and on the finite-dimensional approximation of F as
16 |
How to determine such bounds in practice clearly depends on the particular expression of F. In the case that F is factorable, various arithmetics can be used to propagate bounds through a DAG of the function, including interval arithmetic [36], McCormick relaxations [9, 33], and Taylor/Chebyshev model arithmetic [10, 43, 47]. Moreover, if the expression of F embeds a dynamic system described by differential equations, validated bounds can be obtained using a variety of set-propagation techniques as described, e.g., in [26, 31, 38, 50, 53], or via hierarchies of LMI relaxations as in [21, 29].
- Compute a bound on the approximation errors such that
17 |
In the case that F is strongly Lipschitz-continuous on C, we can always take the bound implied by condition (7), with the constant K and the bounded regular set G satisfying that condition. Naturally, better bounds may be derived by exploiting a particular structure or expression of F.
By construction, the lower-bounding function and the upper-bounding function trivially satisfy (15). Moreover, when the set is infeasible—see related discussion in Sect. 3.1—we may set .
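To fix ideas, here is a minimal sketch of the two-step scheme for an illustrative functional of our own choosing, F(x) = ⟨g, x⟩ + sin(⟨h, x⟩), with plain interval arithmetic standing in for the more sophisticated arithmetics cited above; the error term K·d in the second step stands in for whatever bound on the approximation error is available:

```python
from typing import List, Tuple

Interval = Tuple[float, float]

def iv_dot(c: List[float], box: List[Interval]) -> Interval:
    """Tight interval enclosure of sum_i c_i * a_i over the coefficient box."""
    lo = sum(ci * (ai[0] if ci >= 0 else ai[1]) for ci, ai in zip(c, box))
    hi = sum(ci * (ai[1] if ci >= 0 else ai[0]) for ci, ai in zip(c, box))
    return lo, hi

def bound_F_M(g: List[float], box: List[Interval]) -> Interval:
    """Step 1: bounds on the finite-dimensional approximation of F; the
    sin term is enclosed crudely by [-1, 1]."""
    lin_lo, lin_hi = iv_dot(g, box)
    return lin_lo - 1.0, lin_hi + 1.0

def bound_F(g: List[float], box: List[Interval], K: float, d: float) -> Interval:
    """Step 2: widen by an approximation-error bound of the form K * d,
    valid when F is strongly Lipschitz-continuous (Definition 5)."""
    lo, hi = bound_F_M(g, box)
    return lo - K * d, hi + K * d

print(bound_F(g=[1.0, 0.5], box=[(0.0, 0.2), (-0.1, 0.1)], K=2.0, d=0.05))
```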
We state the following assumptions in anticipation of the convergence analysis in Sect. 4.
Assumption 3
The cost functional F in Problem (1) is strongly Lipschitz-continuous on C, with the condition (7) holding for a constant K and a bounded regular subset G ⊆ H.
Remark 11
Under Assumption 3, Lemma 2 implies that
for a Lipschitz constant . Thus, if Assumption 2 is also satisfied, any pair is such that
with and . It follows that
and therefore the gap can be made arbitrarily small under Assumption 3 by choosing a sufficiently large order M and a sufficiently small diameter for the set A. This result will be exploited systematically in the convergence analysis in Sect. 4.
Remark 12
An alternative upper bound in (15) may be computed more directly by solving the following nonconvex optimization problem to local optimality,
18 |
Without further assumptions on the orthogonal basis functions and on the constraint set C, however, it is not hard to contrive examples where the upper bound (18) fails to converge as M → ∞. This upper-bounding approach could nonetheless be combined with another bounding approach based on set arithmetic in order to prevent convergence issues; e.g., use the solution value of (18) as long as it provides a bound that is smaller than .
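A sketch of this local-solve upper bound, with an illustrative objective and box (all placeholders of ours):

```python
import numpy as np
from scipy.optimize import minimize

# Remark 12-style upper bound: locally minimize the restriction of F to
# finitely parameterized elements x = sum_i a_i phi_i over the box A.  If
# the resulting point is feasible for Problem (1), its cost value is a
# valid upper bound in (15).  The objective below is an arbitrary stand-in.

def F_restricted(a):
    return a[0] ** 2 - np.sin(3.0 * a[1])

box = [(-1.0, 1.0), (-1.0, 1.0)]
res = minimize(F_restricted, x0=np.zeros(2), bounds=box, method="L-BFGS-B")
print(res.fun)   # candidate upper bound for this subregion
```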
Branch-and-lift algorithm
The foregoing considerations on partitioning and bounding in Hilbert space can be combined in Algorithm 1 for solving infinite-dimensional optimization problems to -global optimality.
A number of remarks are in order:
- Regarding initialization, the branch-and-lift iterations start with . A possible way of initializing the partition is by noting that
under Assumption 1.
- Besides the branching and lifting operations introduced earlier in Sect. 3.1, fathoming in Step 4 of Algorithm 1 refers to the process of discarding a given set from the partition if
- The main idea behind the lifting condition defined in Step 6 of Algorithm 1, namely
19 |
is that a subset A should be lifted to a higher-dimensional space whenever the approximation error due to the finite parameterization becomes of the same order of magnitude as the current optimality gap. The aim here is to apply as few lifts as possible, since it is preferable to branch in a lower-dimensional space. The convergence of the branch-and-lift algorithm under this lifting condition is examined in Sect. 4 below. Notice also that a lifting operation is applied globally—that is, to all parameter subsets in the partition—in Algorithm 1, so all the subsets in the partition share the same parameterization order at any iteration. In a variant of Algorithm 1, one could also imagine maintaining subsets with different parameterization orders by applying the lifting condition locally instead; a schematic sketch of the main loop is given after this list.
- Finally, it will be established in the following section that, upon termination and under certain assumptions, Algorithm 1 returns an ε-suboptimal solution of Problem (1). In particular, Assumption 1 rules out the possibility of an infeasible solution.
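The following schematic rendering of the main loop (ours) may help fix the control flow; the bounding, lifting, branching, and infeasibility subroutines are left abstract, and the lifting test merely paraphrases condition (19), so this is a sketch of the iteration logic rather than a faithful reproduction of Algorithm 1:

```python
import heapq
import itertools

def branch_and_lift(lb, ub, lift, branch, infeasible, err, eps, rho, A0, M0=1):
    """Schematic branch-and-lift loop.  lb/ub bound F over a subregion;
    err(M) bounds the finite-parameterization error; lift(A, M) embeds a
    set into dimension M; branch(A) splits a set in two; infeasible(A, M)
    tests emptiness of the associated subregion."""
    tie = itertools.count()                     # tiebreaker for the heap
    M = M0
    heap = [(lb(A0, M), next(tie), A0)]         # partition keyed by lower bound
    while heap:
        LB = heap[0][0]                         # least lower bound in partition
        UB = min(ub(A, M) for _, _, A in heap)  # best available upper bound
        if UB - LB <= eps:                      # termination: eps-optimality
            return LB, UB
        if rho * err(M) >= UB - LB:             # lifting condition, cf. (19)
            M += 1                              # lift every subset globally
            lifted = [lift(A, M) for _, _, A in heap]
            heap = [(lb(A, M), next(tie), A) for A in lifted]
            heapq.heapify(heap)
            continue
        _, _, A = heapq.heappop(heap)           # branch the least-bound set
        for Ai in branch(A):
            if not infeasible(Ai, M) and lb(Ai, M) <= UB:   # fathoming
                heapq.heappush(heap, (lb(Ai, M), next(tie), Ai))
    raise RuntimeError("empty partition: Problem (1) is infeasible")
```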
Convergence analysis of branch-and-lift
This section investigates the convergence properties of the branch-and-lift algorithm (Algorithm 1) developed previously. It is convenient to introduce the following notation in order to conduct the analysis:
Definition 7
Let G ⊆ H be a regular set for C, and define the inverse function by
The following result is a direct consequence of the lifting condition (19) in the branch-and-lift algorithm:
Lemma 3
Let Assumption 3 hold, and suppose that finite bounds satisfying (16)–(17) can be computed for any feasible pair. Then, the number of lifting operations in a run of Algorithm 1 applied to Problem (1) is at most
regardless of whether or not the algorithm terminates finitely.
Proof
Assume that in Algorithm 1, and that the termination condition is not yet satisfied; that is,
for a certain feasible set . If the lifting condition (19) were to hold for A, then it would follow from (16)–(17) that
Moreover, F being strongly Lipschitz-continuous on C by Assumption 3, we would have
This is a contradiction, since by Definition 7.
Besides the number of lifting operations being finite, the convergence of Algorithm 1 can be established if the elements of a partition can be made arbitrarily small within a finite number of subdivisions.
Definition 8
A partitioning scheme is said to be exhaustive if, given any dimension , any tolerance , and any bounded initial partition , we have
after finitely many subdivisions, where . Moreover, we denote by an upper bound on the corresponding number of subdivisions in an exhaustive scheme.
The following theorem provides the main convergence result for the proposed branch-and-lift algorithm.
Theorem 2
Let Assumptions 1, 2 and 3 hold, and suppose that finite bounds satisfying (16)–(17) can be computed for any feasible pair. If the partitioning scheme is exhaustive, then Algorithm 1 terminates after at most iterations, where
20 |
Proof
By Lemma 3, the maximal number M of lifting operations during a run of Algorithm 1 is finite, such that . Therefore, the lifting condition (19) may not be satisfied for any feasible subset , and we have
Since and , it follows that the termination condition is satisfied if
By Assumptions 2 and 3 and Remark 11, we have
and the termination condition is thus satisfied if
This latter condition is met after at most iterations under the assumption that the partitioning scheme is exhaustive.
Remark 13
In the case that the sets are simple interval boxes and the lifting process is implemented per (14), we have
Therefore, one can always subdivide these boxes in such a way that the condition is satisfied after at most subdivisions, with
for any given dimension M. In particular, is monotonically increasing in M, and (20) simplifies to
It should be clear at this point that the worst-case estimate given in Theorem 2 may be extremely conservative, and the performance of Algorithm 1 could be much better in practice. Nonetheless, a key property of this estimate is that it is independent of the actual nature or number of optimization variables in Problem (1), be it a finite-dimensional or even an infinite-dimensional optimization problem. As already pointed out in the introduction of the paper, this result is quite remarkable, since available run-time estimates for standard convex and non-convex optimization algorithms do not enjoy this property. On the other hand, the estimate does depend on:
the bound γ on the constraint set C;
the Lipschitz constants K and L of the cost functional F;
the uniform bound on, and the scaling factors σ_i of, the chosen orthogonal functions φ_i; and
the lifting parameter and the termination tolerance ε in Algorithm 1.
All these dependencies are illustrated in the following example.
Example 5
Consider the space of square-integrable functions L², for which it has been established in Remark 2 that any subset of p-times differentiable functions with uniformly Lipschitz-continuous p-th derivatives is regular, with convergence rate for some constant. On choosing the standard trigonometric Fourier basis, such that the σ_i are constant scaling factors, and partitioning using simple interval boxes as in Remark 13, a worst-case iteration count can be obtained as
Furthermore, if the global minimizer of Problem (1) happens to be a smooth (C^∞) function, the convergence rate can be expected to be of the form , and Theorem 2 then predicts a worst-case iteration count of
which is much more favorable.
Numerical case study
We consider the Hilbert space of square-integrable functions on the interval [0, T], here with . Our focus is on the following nonconvex, infinite-dimensional optimization problem
21 |
with the functions and given by
Notice the symmetry in the optimization problem (21), as and if and only if . Thus, if is a global solution point of (21), then is also a global solution point.
Although it might be possible to apply techniques from the field of variational analysis to determine the set of optimal solutions, our main objective here is to apply Algorithm 1 without exploiting any particular knowledge about the solution set. For this, we use the Legendre polynomials as basis functions in ,
which are orthogonal by construction.
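As a quick sanity check (ours), the orthogonality of these basis functions on [0, T] can be verified numerically; since the value of T is elided above, T = 1 is used as a placeholder:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

T = 1.0                                   # placeholder value for T
t, w = leggauss(64)
s = 0.5 * T * (t + 1.0)                   # quadrature nodes on [0, T]

def phi(i, s):
    """Shifted Legendre polynomial of degree i on [0, T]."""
    return Legendre.basis(i)(2.0 * s / T - 1.0)

# Gram matrix of <phi_i, phi_j> = int_0^T phi_i(t) phi_j(t) dt; it should
# come out diagonal, with diagonal entries T / (2 i + 1).
gram = np.array([[0.5 * T * np.sum(w * phi(i, s) * phi(j, s))
                  for j in range(5)] for i in range(5)])
print(np.round(gram, 12))
```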
We start by showing that the functional F is strongly Lipschitz-continuous, with the bounded regular subset G in condition (7) taken as
where we use the shorthand notation and . For all and all , we have
where L is any upper bound on the term
22 |
In order to obtain an explicit bound, we need to further analyze the term . First of all, we have
Next, recalling that the Legendre approximation error for any smooth function is bounded as
for all , and working out explicit bounds on the derivatives of the functions and , we obtain
It follows by Theorem 1 that
Combining all the bounds and substituting shows that the constant satisfies the condition (22).
Based on the foregoing developments and the considerations in Sect. 3.2, a simple bound on the approximation error satisfying (17) can be obtained as
Although rather loose for very small M, this estimate converges quickly to 0 as M grows; for instance, . Note also that, in a practical implementation, the computation of this bound—and likewise the validation of the generalized Lipschitz constant L—could be automated using computer algebra programs, such as Chebfun (http://www.chebfun.org/) [16] or MC++ (https://github.com/omegaicl/mcpp) [35].
With regard to the computation of bounds satisfying (16), we note that F(x) can be interpreted as a quadratic form in x,
with the elements of the matrix Q given by
Of the available approaches [18, 39, 41] to compute bounds and such that
for interval boxes , we use standard LMI relaxation techniques [20] here.
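For readers without an LMI toolchain, a crude interval-arithmetic fallback (our sketch, generally much looser than the LMI relaxations of [20]) illustrates what is being computed:

```python
import numpy as np

def quad_form_bounds(Q, lo, hi):
    """Interval enclosure of the quadratic form sum_ij Q_ij * a_i * a_j
    over the box lo <= a <= hi, accumulated term by term."""
    L = U = 0.0
    n = Q.shape[0]
    for i in range(n):
        for j in range(n):
            p = [Q[i, j] * a * b
                 for a in (lo[i], hi[i]) for b in (lo[j], hi[j])]
            L += min(p)
            U += max(p)
    return L, U

Q = np.array([[1.0, -0.5], [-0.5, 0.25]])
print(quad_form_bounds(Q, np.array([-1.0, 0.0]), np.array([1.0, 0.5])))
```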
At this point, we have all the elements needed to implement Algorithm 1 for Problem (21). On selecting the termination tolerance and the lifting parameter, Algorithm 1 terminates after fewer than 100 iterations and applies 8 lifting operations (starting with ). The corresponding decrease in the gap between the upper and lower bounds as a function of the lifted subspace dimension M—immediately after each lifting operation—is shown in the left plot of Fig. 1. Upon convergence, the infimum of (21) is bracketed as
and a corresponding ε-global solution x is reported in the right plot of Fig. 1; the symmetric function provides another ε-global solution for this problem. Overall, this case study demonstrates that the proposed branch-and-lift algorithm is capable of solving such non-convex, infinite-dimensional optimization problems to global optimality within reasonable computational effort.
Fig. 1.
Results of Algorithm 1 applied to Problem (21). Left: gap between the upper and lower bounds as a function of the lifted subspace dimension M. Right: a globally ε-suboptimal solution x
Conclusions
This paper has presented a complete-search algorithm, called branch-and-lift, for global optimization of problems with a non-convex cost functional and a bounded and convex constraint set defined on a Hilbert space. A key contribution is the determination of run-time complexity bounds for branch-and-lift that are independent of the number of variables in the optimization problem, provided that the cost functional is strongly Lipschitz-continuous with respect to a regular and bounded subset of that Hilbert space. The corresponding convergence conditions are satisfied for a large class of practically relevant problems in the calculus of variations and optimal control. In particular, the complexity analysis in this paper implies that branch-and-lift can be applied to solve potentially non-convex and infinite-dimensional optimization problems without needing a priori knowledge about the existence or regularity of minimizers, as the run-time bounds depend solely on the structural and regularity properties of the cost functional, the underlying Hilbert space, and the geometry of the constraint set. This could pave the way for a new complexity analysis of optimization problems, whereby the “complexity” or “hardness” of a problem does not necessarily depend on its number of optimization variables. In order to demonstrate that these algorithmic ideas and complexity analysis are not of purely theoretical interest, the practical applicability of branch-and-lift has been illustrated with a numerical case study for a problem in the calculus of variations. The case study of an optimal control problem in [25] provides another illustration.
Acknowledgements
This paper is based upon work supported by the Engineering and Physical Sciences Research Council (EPSRC) under Grant EP/J006572/1, National Natural Science Foundation of China (NSFC) under Grant 61473185, and ShanghaiTech University under Grant F-0203-14-012. Financial support from Marie Curie Career Integration Grant PCIG09-GA-2011-293953 and from the Centre of Process Systems Engineering (CPSE) of Imperial College is gratefully acknowledged. The authors would like to thank Co-Editor Dr. Sven Leyffer for his constructive comments about minimality of assumptions for the convergence of branch-and-lift.
Footnotes
We have used the integration formula for the integral term in (6).
References
1. Akhiezer NI. The Classical Moment Problem and Some Related Questions in Analysis. Translated by N. Kemmer. New York: Hafner Publishing Co.; 1965.
2. Albersmeyer J, Diehl M. The lifted Newton method and its application in optimization. SIAM J. Optim. 2010;20(3):1655–1684.
3. Anderson EJ, Nash P. Linear Programming in Infinite-Dimensional Spaces. Hoboken: Wiley; 1987.
4. Bampou D, Kuhn D. Polynomial approximations for continuous linear programs. SIAM J. Optim. 2012;22(2):628–648.
5. Bendsøe MP, Sigmund O. Topology Optimization: Theory, Methods, and Applications. Berlin: Springer; 2004.
6. Betts JT. Practical Methods for Optimal Control Using Nonlinear Programming. 2nd ed. Philadelphia: SIAM; 2010.
7. Biegler LT. Solution of dynamic optimization problems by successive quadratic programming and orthogonal collocation. Comput. Chem. Eng. 1984;8:243–248.
8. Bock HG, Plitt KJ. A multiple shooting algorithm for direct solution of optimal control problems. In: Proceedings 9th IFAC World Congress Budapest, pp. 243–247. Pergamon Press, Oxford (1984)
9. Bompadre A, Mitsos A. Convergence rate of McCormick relaxations. J. Glob. Optim. 2012;52(1):1–28.
10. Bompadre A, Mitsos A, Chachuat B. Convergence analysis of Taylor and McCormick-Taylor models. J. Glob. Optim. 2013;57(1):75–114.
11. Boyd S, Vandenberghe L. Convex Optimization. Cambridge: Cambridge University Press; 2004.
12. Bryson AE, Ho Y. Applied Optimal Control. Washington: Hemisphere; 1975.
13. Buie R, Abrham J. Numerical solutions to continuous linear programming problems. Z. Oper. Res. 1973;17(3):107–117.
14. Devolder O, Glineur F, Nesterov Y. Solving infinite-dimensional optimization problems by polynomial approximation. In: Diehl M, Glineur F, Jarlebring E, Michiels W, editors. Recent Advances in Optimization and its Applications in Engineering. Berlin, Heidelberg: Springer; 2010. pp. 31–40.
15. Ditzian Z, Totik V. Moduli of Smoothness. Berlin: Springer; 1987.
16. Driscoll TA, Hale N, Trefethen LN. Chebfun Guide. Oxford: Pafnuty Publications; 2014.
17. Floudas CA. Deterministic Global Optimization: Theory, Methods, and Applications. Dordrecht: Kluwer; 1999.
18. Goemans MX, Williamson DP. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. J. ACM. 1995;42(6):1115–1145.
19. Gottlieb D, Shu CW. On the Gibbs phenomenon and its resolution. SIAM Rev. 1997;39(4):644–668.
20. Henrion D, Tarbouriech S, Arzelier D. LMI approximations for the radius of the intersection of ellipsoids: a survey. J. Optim. Theory Appl. 2001;108(1):1–28.
21. Henrion D, Korda M. Convex computation of the region of attraction of polynomial control systems. IEEE Trans. Autom. Control. 2014;59(2):297–312.
22. Hinze M, Pinnau R, Ulbrich M, Ulbrich S. Optimization with PDE Constraints. Berlin: Springer; 2009.
23. Horst R, Tuy H. Global Optimization: Deterministic Approaches. 3rd ed. Berlin: Springer; 1996.
24. Houska B, Ferreau HJ, Diehl M. ACADO toolkit—an open-source framework for automatic control and dynamic optimization. Optim. Control Appl. Methods. 2011;32:298–312.
25. Houska B, Chachuat B. Branch-and-lift algorithm for deterministic global optimization in nonlinear optimal control. J. Optim. Theory Appl. 2014;162(1):208–248.
26. Houska B, Villanueva ME, Chachuat B. Stable set-valued integration of nonlinear dynamic systems using affine set parameterizations. SIAM J. Numer. Anal. 2015;53(5):2307–2328.
27. Jackson D. The Theory of Approximation. New York: AMS Colloquium Publications; 1930.
28. Katznelson Y. An Introduction to Harmonic Analysis. 2nd ed. New York: Dover Publications; 1976.
29. Korda M, Henrion D, Jones CN. Convex computation of the maximum controlled invariant set for polynomial control systems. SIAM J. Control Optim. 2014;52(5):2944–2969.
30. Lasserre JB. Moments, Positive Polynomials and Their Applications. London: Imperial College Press; 2009.
31. Lin Y, Stadtherr MA. Validated solutions of initial value problems for parametric ODEs. Appl. Numer. Math. 2007;57(10):1145–1162.
32. Luo X, Bertsimas D. A new algorithm for state-constrained separated continuous linear programs. SIAM J. Control Optim. 1998;37:177–210.
33. McCormick GP. Computability of global solutions to factorable nonconvex programs: Part I—Convex underestimating problems. Math. Program. 1976;10:147–175.
34. Misener R, Floudas CA. ANTIGONE: algorithms for continuous/integer global optimization of nonlinear equations. J. Glob. Optim. 2014;59(2–3):503–526.
35. Mitsos A, Chachuat B, Barton PI. McCormick-based relaxations of algorithms. SIAM J. Optim. 2009;20:573–601.
36. Moore RE. Methods and Applications of Interval Analysis. Philadelphia: SIAM; 1979.
37. Mordukhovich BS. Variational Analysis and Generalized Differentiation I: Basic Theory. Berlin: Springer; 2006.
38. Neher M, Jackson KR, Nedialkov NS. On Taylor model based integration of ODEs. SIAM J. Numer. Anal. 2007;45:236–262.
39. Nemirovski A, Roos C, Terlaky T. On maximization of quadratic form over intersection of ellipsoids with common center. Math. Program. 1999;86(3):463–473.
40. Nesterov Y, Nemirovskii A. Interior-Point Polynomial Methods in Convex Programming. Philadelphia: SIAM; 1994.
41. Nesterov Y. Semidefinite relaxation and non-convex quadratic optimization. Optim. Methods Softw. 1997;12:1–20.
42. Nesterov Y. Squared functional systems and optimization problems. In: Frenk H, Roos K, Terlaky T, Zhang S, editors. High Performance Optimization. Dordrecht: Kluwer Academic Publishers; 2000. pp. 405–440.
43. Neumaier A. Taylor forms—use and limits. Reliab. Comput. 2002;9(1):43–79.
44. Neumaier A. Complete search in continuous global optimization and constraint satisfaction. Acta Numer. 2004;13:271–369.
45. Parrilo PA. Polynomial games and sum of squares optimization. In: Proceedings of the 45th IEEE Conference on Decision & Control, pp. 2855–2860. San Diego, CA (2006)
46. Pontryagin LS, Boltyanskii VG, Gamkrelidze RV, Mishchenko EF. The Mathematical Theory of Optimal Processes. New York: Wiley; 1962.
47. Rajyaguru J, Villanueva ME, Houska B, Chachuat B. Chebyshev model arithmetic for factorable functions. J. Glob. Optim. 2017;68(2):413–438. doi: 10.1007/s10898-016-0474-9.
48. Saff EB, Totik V. Polynomial approximation of piecewise analytic functions. J. Lond. Math. Soc. 1989;39(2):487–498.
49. Sahinidis NV. BARON: a general purpose global optimization software package. J. Glob. Optim. 1996;8(2):201–205.
50. Scott JK, Chachuat B, Barton PI. Nonlinear convex and concave relaxations for the solutions of parametric ODEs. Optim. Control Appl. Methods. 2013;34(2):145–163.
51. von Stryk O, Bulirsch R. Direct and indirect methods for trajectory optimization. Ann. Oper. Res. 1992;37:357–373.
52. Tawarmalani M, Sahinidis NV. A polyhedral branch-and-cut approach to global optimization. Math. Program. 2005;103(2):225–249.
53. Villanueva ME, Houska B, Chachuat B. Unified framework for the propagation of continuous-time enclosures for parametric nonlinear ODEs. J. Glob. Optim. 2015;62(3):575–613.
54. Vinter R. Optimal Control. Berlin: Springer; 2010.
55. Wang H, Xiang S. On the convergence rates of Legendre approximation. Math. Comput. 2012;81(278):861–877.