Mathematical Programming. 2017 Dec 16;173(1):221–249. doi:10.1007/s10107-017-1215-7

Global optimization in Hilbert space

Boris Houska, Benoît Chachuat

Abstract

We propose a complete-search algorithm for solving a class of non-convex, possibly infinite-dimensional, optimization problems to global optimality. We assume that the optimization variables are in a bounded subset of a Hilbert space, and we determine worst-case run-time bounds for the algorithm under certain regularity conditions of the cost functional and the constraint set. Because these run-time bounds are independent of the number of optimization variables and, in particular, are valid for optimization problems with infinitely many optimization variables, we prove that the algorithm converges to an ε-suboptimal global solution within finite run-time for any given termination tolerance ε>0. Finally, we illustrate these results for a problem of calculus of variations.

Keywords: Infinite-dimensional optimization, Complete search, Branch-and-lift, Convergence analysis, Complexity analysis

Introduction

Infinite-dimensional optimization problems arise in many research fields, including optimal control [7, 8, 24, 54], optimization with partial differential equations (PDE) embedded [22], and shape/topology optimization [5]. In practice, these problems are often solved approximately by applying discretization techniques; the original infinite-dimensional problem is replaced by a finite-dimensional approximation that can then be tackled using standard optimization techniques. However, the resulting discretized optimization problems may comprise a large number of optimization variables, which grows unbounded as the accuracy of the approximation is refined. Unfortunately, worst-case run-time bounds for complete-search algorithms in nonlinear programming (NLP) scale poorly with the number of optimization variables. For instance, the worst-case run-time of spatial branch-and-bound [17, 44] scales exponentially with the number of optimization variables. In contrast, algorithms for solving convex optimization problems in polynomial run-time are known [11, 40], e.g. in linear programming (LP) or convex quadratic programming (QP). While these efficient algorithms enable the solution of very large-scale convex optimization problems, such as structured or sparse problems, in general their worst-case run-time bounds also grow unbounded as the number of decision variables tends to infinity.

Existing theory and algorithms that directly analyze and exploit the infinite-dimensional nature of an optimization problem are mainly found in the field of convex optimization. For the most part, these algorithms rely on duality in convex optimization in order to construct upper and lower bounds on the optimal solution value, although establishing strong duality in infinite-dimensional problems can prove difficult. In this context, infinite-dimensional linear programming problems have been analyzed thoroughly [3]. A variety of algorithms are also available for dealing with convex infinite-dimensional optimization problems, some of which have been analyzed in generic Banach spaces [14], as well as certain tailored algorithms for continuous linear programming [4, 13, 32].

In the field of non-convex optimization, problems with an infinite number of variables are typically studied in a local neighborhood of a stationary point. For instance, local optimality in continuous-time optimal control problems can be analyzed by using Pontryagin’s maximum principle [46], and a number of local optimal control algorithms are based on this analysis [6, 12, 51, 54]. More generally, approaches in the classical field of variational analysis [37] rely on local analysis concepts, from which information about global extrema may not be derived in general. In fact, non-convex infinite-dimensional optimization remains an open field of research and, to the best of our knowledge, there currently are no generic complete-search algorithms for solving such problems to global optimality.

This paper asks whether a global optimization algorithm can be constructed whose worst-case run-time complexity is independent of the number of optimization variables, such that this algorithm would remain tractable for infinite-dimensional optimization problems. Clearly, devising such an algorithm may only be possible for a certain class of optimization problems. Interestingly, the fact that the “complexity” or “hardness” of an optimization problem does not necessarily depend on the number of optimization variables has been observed—and it is in fact exploited—in state-of-the-art global optimization solvers for NLP/MINLP, although these observations are still to be analyzed in full detail. For instance, instead of applying a branch-and-bound algorithm in the original space of optimization variables, global NLP/MINLP solvers such as BARON [49, 52] or ANTIGONE [34] proceed by lifting the problem to a higher-dimensional space via the introduction of auxiliary variables from the DAG decomposition of the objective and constraint functions. In a different context, the solution of a lifted problem in a higher-dimensional space has become popular in numerical optimal control, where the so-called multiple-shooting methods often outperform their single-shooting counterparts despite the fact that the former call for the solution of a larger-scale (discretized) NLP problem [7, 8]. This idea that certain optimization problems become easier to solve than equivalent problems in fewer variables is also central to the work on lifted Newton methods [2]. To the best of our knowledge, such behaviors cannot currently be explained with results from the field of complexity analysis, which typically gives monotonically increasing worst-case run-time bounds as the number of optimization variables increases. In this respect, these run-time bounds predict the opposite of what can sometimes be observed in practice.

Problem formulation

The focus of this paper is on complete-search algorithms for solving non-convex optimization problems of the form:

$$\inf_{x\in C}\ F(x), \qquad (1)$$

where $F: H\to\mathbb R$ and $C\subseteq H$ denote the cost functional and the constraint set, respectively; and the domain $H$ of this problem is a (possibly infinite-dimensional) Hilbert space with respect to the inner product $\langle\cdot,\cdot\rangle: H\times H\to\mathbb R$. The theoretical considerations in the paper do not assume a separable Hilbert space, although our various illustrative examples are based on separable spaces.

Definition 1

A feasible point $x^*\in C$ is said to be an ε-suboptimal global solution—or ε-global optimum—of (1), with $\varepsilon>0$, if

$$\forall x\in C:\quad F(x^*)\ \le\ F(x)+\varepsilon.$$

We make the following assumptions regarding the geometry of C throughout the paper.

Assumption 1

The constraint set $C$ is convex, has a nonempty relative interior, and is bounded with respect to the induced norm on $H$; that is, there exists a constant $\gamma<\infty$ such that

$$\forall x\in C:\quad \|x\|_H := \sqrt{\langle x,x\rangle}\ \le\ \gamma.$$

Our main objective in this paper is to develop an algorithm that can locate an ε-suboptimal global optimum of Problem (1), in finite run-time for any given accuracy ε>0, provided that F satisfies certain regularity conditions alongside Assumption 1.

Remark 1

Certain infinite-dimensional optimization problems are formulated in a Banach space $(B,\|\cdot\|)$ rather than a Hilbert space, for instance in the field of optimal control of partial differential equations, in order to analyze the existence of extrema [22]. The optimization problem (1) becomes

$$\inf_{x\in\hat C}\ \hat F(x) \qquad (2)$$

with $\hat F: B\to\mathbb R$ and $\hat C$ a convex bounded subset of $B$. Provided that:

  1. the Hilbert space $H\subseteq B$ is convex and dense in $(B,\|\cdot\|)$;

  2. the function $\hat F$ is upper semi-continuous on $\hat C$; and

  3. the constraint set $\hat C$ has a nonempty relative interior;

we may nonetheless consider Problem (1) with $C := \hat C\cap H$ instead of (2), since any ε-suboptimal global solution of the former is also an ε-suboptimal global solution of (2), and both problems admit such ε-suboptimal points. Because Conditions 1–3 are often satisfied in practical applications, it is, for the purpose of this paper, not restrictive to assume that the domain of the optimization variables is indeed a Hilbert space.

Outline and contributions

The paper starts by discussing several regularity conditions for sets and functionals defined in a Hilbert space in Sect. 2, based on which complete-search algorithms can be constructed whose run-time is independent of the number of optimization variables. Such an algorithm is presented in Sect. 3 and analyzed in Sect. 4, which together constitute the main contributions and novelty of the paper. A numerical case study is presented in Sect. 5 in order to illustrate the main results, before concluding in Sect. 6.

Although some of these algorithmic ideas are inspired by a recent paper on global optimal control [25], we develop herein a much more general framework for optimization in Hilbert space. Besides, Sect. 4 derives novel worst-case complexity estimates for the proposed algorithm. We argue that these ideas could help lay the foundations for new ways of analyzing the complexity of certain optimization problems based on their structural properties rather than their number of optimization variables. Although the run-time estimates for the proposed algorithm remain conservative, they indicate that complexity in numerical optimization does not necessarily depend on whether the problem at hand is small-scale, large-scale, or even infinite-dimensional.

Some regularity conditions for sets and functionals in Hilbert space

This section builds upon basic concepts in infinite-dimensional Hilbert spaces in order to arrive at certain regularity conditions for sets and functionals defined in such spaces. Our focus on Hilbert space is motivated by the ability to construct an orthogonal basis $\Phi_0,\Phi_1,\ldots\in H$ such that

$$\forall i,j\in\mathbb N:\quad \frac{1}{\sigma_i}\,\langle\Phi_i,\Phi_j\rangle \;=\; \delta_{i,j} := \begin{cases}0 & \text{if } i\neq j,\\ 1 & \text{otherwise,}\end{cases}$$

for some scalars $\sigma_0,\sigma_1,\ldots\in\mathbb R_{++}$. We make the following assumption throughout the paper:

Assumption 2

The basis functions $\Phi_k$ are uniformly bounded with respect to $\|\cdot\|_H$.

Equipped with such a basis, we can define the associated projection functions $P_M: H\to H$ for each $M\in\mathbb N$ as

$$\forall x\in H:\quad P_M(x) := \sum_{k=0}^{M}\frac{\langle x,\Phi_k\rangle}{\sigma_k}\,\Phi_k.$$

A natural question to ask at this point is what can be said about the distance between an element $x\in H$ and its projection $P_M(x)$ for a given $M\in\mathbb N$.

Definition 2

We call $D(M,x) := \|x-P_M(x)\|_H$ the distance between an element $x\in H$ and its projection $P_M(x)$. Moreover, given the constraint set $C\subseteq H$, we define

$$\bar D_C(M) := \sup_{x\in C}\ D(M,x).$$
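To make these definitions concrete, the short sketch below (an illustration only, assuming the setting of Example 2 further down: $H=L^2[0,1]$ with the shifted Legendre basis and $\sigma_k=\frac{1}{2k+1}$; the step function and grid size are arbitrary choices) computes $P_M(x)$ by quadrature and evaluates $D(M,x)$ for a discontinuous element:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

# Shifted Legendre basis on [0,1]: <Phi_i, Phi_j> = sigma_i * delta_ij,
# with sigma_k = 1/(2k+1), as in Example 2 below.
def phi(k, s):
    return Legendre.basis(k, domain=[0.0, 1.0])(s)

def distance_D(x, M, n=20001):
    """D(M,x) = ||x - P_M(x)||_H, inner products by trapezoidal quadrature."""
    s = np.linspace(0.0, 1.0, n)
    xs = x(s)
    proj = np.zeros_like(s)
    for k in range(M + 1):
        pk = phi(k, s)
        a_k = np.trapz(xs * pk, s) * (2 * k + 1)    # a_k = <x, Phi_k>/sigma_k
        proj += a_k * pk                            # accumulate P_M(x)
    return np.sqrt(np.trapz((xs - proj) ** 2, s))

# A square-integrable but discontinuous element: a unit step at 1/2.
step = lambda s: (s <= 0.5).astype(float)
for M in (4, 16, 64, 256):
    print(M, distance_D(step, M))   # decays slowly, roughly like M**(-1/2)
```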

Lemma 1

Under Assumption 1, the function $\bar D_C:\mathbb N\to\mathbb R$ is uniformly bounded from above by $\gamma$.

Proof

For each $M\in\mathbb N$, we have

$$\bar D_C(M)^2 \;=\; \sup_{x\in C}\,\|x-P_M(x)\|_H^2 \;\le\; \sup_{x\in C}\,\|x\|_H^2.$$

The result follows by Assumption 1.

Despite being uniformly bounded, the function $\bar D_C(M)$ may not, in general, converge to zero as $M\to\infty$ in an infinite-dimensional Hilbert space. Such lack of convergence is illustrated in the following example.

Example 1

Consider the case that all the basis functions $\Phi_0,\Phi_1,\ldots$ are in the constraint set $C$ and are normalized, so that $\|\Phi_k\|_H=1$, and define the sequence $\{x_k\}_{k\in\mathbb N}$ with $x_k := \Phi_{k+1}$. For all $k\in\mathbb N$, we have $P_k(x_k)=0$, and therefore

$$\bar D_C(k) \;\ge\; D(k,x_k) \;=\; \|x_k-P_k(x_k)\|_H \;=\; \|x_k\|_H \;=\; 1.$$

This behavior is unfortunate because the existence of minimizers to Problem (1) cannot be ascertained without making further regularity assumptions. Moreover, for a sequence $(x_k)_{k\in\mathbb N}$ of feasible points of Problem (1) converging to an infimum, it could be that

$$\limsup_{M\to\infty}\,\limsup_{k\to\infty}\,D(M,x_k) \;\neq\; \limsup_{k\to\infty}\,\limsup_{M\to\infty}\,D(M,x_k).$$

That is, any attempt to approximate the infimum by constructing a sequence of finite parameterizations of the optimization variable $x$ could in principle be unsuccessful.

A principal aim of the following sections is to develop an optimization algorithm, whose convergence to an ε-global optimum of Problem (1) can be certified. But instead of making assumptions about the existence, or even the regularity, of the minimizers of Problem (1), we shall impose suitable regularity conditions on the objective function F in (1). In preparation for this analysis, we start by formalizing a particular notion of regularity for the elements of H.

Definition 3

An element $g\in H$ is said to be regular for the constraint set $C$ if

$$\lim_{M\to\infty} R_C(M,g)=0 \quad\text{with}\quad R_C(M,g) := \bar D_C(M)\,D(M,g). \qquad (3)$$

Moreover, we call the function $R_C(\cdot,g):\mathbb N\to\mathbb R_+$ the convergence rate at $g$ on $C$.

Theorem 1

For any $g\in H$, we have

$$\forall M\in\mathbb N:\quad \sup_{x\in C}\,\big|\langle g,\,x-P_M(x)\rangle\big| \;\le\; R_C(M,g). \qquad (4)$$

In the particular case of $g$ being a regular element for $C$, we have

$$\lim_{M\to\infty}\ \sup_{x\in C}\,\big|\langle g,\,x-P_M(x)\rangle\big| \;=\; 0.$$

Proof

Let $M\in\mathbb N$, and consider the optimization problem

$$\bar V_M := \sup_{x\in C}\,\langle g,\,x-P_M(x)\rangle \;=\; \sup_{x\in C}\,\langle g,w\rangle,$$

where we have introduced the variable $w := x-P_M(x)$ such that

$$\forall x\in C:\quad \|w\|_H \;\le\; \bar D_C(M).$$

Since the functions $\Phi_0,\ldots,\Phi_M$ are orthogonal to each other, we have $\langle\Phi_k,w\rangle=0$ for all $k\in\{0,\ldots,M\}$, and it follows that

$$\bar V_M \;\le\; \sup_{w\in H}\ \langle g,w\rangle \quad\text{s.t.}\quad \langle\Phi_0,w\rangle=\cdots=\langle\Phi_M,w\rangle=0,\ \ \|w\|_H\le\bar D_C(M).$$

Next, we use duality to obtain

$$\bar V_M \;\le\; \inf_{\lambda\in\mathbb R^{M+1}}\ \sup_{w\in H,\ \|w\|_H\le\bar D_C(M)}\ \Big\langle g-\sum_{k=0}^{M}\lambda_k\Phi_k,\ w\Big\rangle,$$

where $\lambda\in\mathbb R^{M+1}$ are multipliers associated with the constraints $\langle\Phi_k,w\rangle=0$ for $k\in\{0,\ldots,M\}$. Applying the Cauchy–Schwarz inequality gives

$$\forall\lambda\in\mathbb R^{M+1}:\quad \bar V_M \;\le\; \Big\|g-\sum_{k=0}^{M}\lambda_k\Phi_k\Big\|_H\,\bar D_C(M),$$

and with the particular choice $\lambda_k := \langle g,\Phi_k\rangle/\sigma_k$ for each $k\in\{0,\ldots,M\}$, we have

$$\bar V_M \;\le\; \|g-P_M(g)\|_H\,\bar D_C(M) \;=\; R_C(M,g).$$

The optimal value of the minimization problem

$$\underline V_M := \inf_{x\in C}\,\langle g,\,x-P_M(x)\rangle$$

can be estimated analogously, giving $\underline V_M\ge -R_C(M,g)$, and the result follows.

The following example establishes the regularity of piecewise smooth functions with a finite number of singularities in the Hilbert space of square-integrable functions with the Legendre polynomials as orthogonal basis functions.

Example 2

We consider the Hilbert space $H=L^2[0,1]$ of standard square-integrable functions on the interval [0, 1] equipped with the standard inner product, $\langle f,g\rangle := \int_0^1 f(s)\,g(s)\,\mathrm ds$, and we choose the Legendre polynomials on the interval [0, 1] with weighting factors $\sigma_k=\frac{1}{2k+1}$ as orthogonal basis functions $(\Phi_k)_{k\in\mathbb N}$. Our focus is on piecewise smooth functions $g:[0,1]\to\mathbb R$ with a given finite number of singularities, for which we want to establish regularity in the sense of Definition 3 for a bounded constraint set $C\subseteq L^2[0,1]$.

There are numerous results on approximating functions using polynomials, including convergence rate estimates [15]. One such result in [48] shows that any piecewise smooth function $f:[0,1]\to\mathbb R$ can be approximated with a polynomial $p_f^M:[0,1]\to\mathbb R$ of degree $M$ such that

$$\forall y\in[0,1]:\quad \big|f(y)-p_f^M(y)\big| \;\le\; K_1\exp\!\big(-K_2\,M^{\alpha}\,d(y)^{\beta}\big), \qquad (5)$$

for any given $\alpha,\beta>0$ with either $\alpha<1$ and $\beta\ge\alpha$, or $\alpha=1$ and $\beta>1$; some constants $K_1,K_2>0$; and where $d(y)$ denotes the distance of $y$ to the nearest singularity. In particular, the following convergence rate estimate can be derived using this result in the present example, for any piecewise smooth function $g:[0,1]\to\mathbb R$ with a finite number of singularities:

$$R_C(M,g) \;=\; \|g-P_M(g)\|_2\,\bar D_C(M) \;=\; \inf_{\lambda}\Big\|g-\sum_{k=0}^{M}\lambda_k\Phi_k\Big\|_2\,\bar D_C(M) \;\overset{\text{(Lemma 1)}}{\le}\; \inf_{\lambda}\Big\|g-\sum_{k=0}^{M}\lambda_k\Phi_k\Big\|_2\,\gamma \;\le\; \frac{K}{\sqrt M}$$

for some constant $K<\infty$. In order to establish the very last part of the above inequality, it is enough to consider a function $g$ with a single singularity, e.g., at the mid-point $y=\frac12$, and to use $\alpha=\beta=\frac12$:¹

$$\inf_{\lambda}\Big\|g-\sum_{k=0}^{M}\lambda_k\Phi_k\Big\|_2 \;\le\; \bigg(\int_0^1 K_1^2\exp\!\Big(-2K_2\sqrt M\,\sqrt{|y-\tfrac12|}\Big)\,\mathrm dy\bigg)^{1/2} \;=\; \bigg(\frac{K_1^2}{K_2^2\,M}+\mathcal O\Big(\tfrac{1}{\sqrt M}\exp\big(-K_2\sqrt{2M}\big)\Big)\bigg)^{1/2} \;=\; \mathcal O\big(M^{-1/2}\big). \qquad (6)$$

Convergence rate estimates for k-times differentiable and piecewise smooth functions can be obtained in a similar way, using for instance the results in [15, 48].

A useful generalization of Definition 3 and a corollary of Theorem 1 are given below.

Definition 4

A set $G\subseteq H$ is said to be regular for $C$ if

$$\lim_{M\to\infty}\bar R_C(M,G)=0 \quad\text{with}\quad \bar R_C(M,G) := \sup_{g\in G}\,R_C(M,g).$$

Moreover, we call the function $\bar R_C(\cdot,G):\mathbb N\to\mathbb R_+$ the worst-case convergence rate for $G$ on $C$.

Corollary 1

For any regular set $G\subseteq H$, we have

$$\lim_{M\to\infty}\ \sup_{g\in G,\,x\in C}\,\big|\langle g,\,x-P_M(x)\rangle\big| \;=\; 0.$$

Remark 2

While any subset of the Euclidean space $\mathbb R^n$ is trivially regular for a given bounded subset $C\subseteq\mathbb R^n$, only certain subsets/subspaces of an infinite-dimensional Hilbert space happen to be regular. Consider for instance the space of square-integrable functions, $H := L^2[a,b]$, and let $G_p$ be any subset of $p$-times differentiable functions on $[a,b]$ with uniformly Lipschitz-continuous $p$-th derivatives. It can be shown—e.g., from the analysis in [27] using the standard trigonometric Fourier basis, or from the results in [55] using the Legendre polynomial basis—that

$$\bar R_C(M,G_p) \;\le\; \mathcal O\big(\log(M)\,M^{-p-1}\big) \;\subseteq\; \mathcal O\big(M^{-p}\big),$$

for any bounded constraint set $C\subseteq L^2[a,b]$, and $G_p$ is thereby regular for $C$. This leads to a rather typical situation, whereby the stronger the regularity assumptions on the function class, the faster the associated worst-case convergence rate—here, a rate of order $\log(M)\,M^{-p-1}$ improving with $p$. In the limit of smooth ($C^{\infty}$) functions, it can even be established—e.g., using standard results from Fourier analysis [19, 28]—that the convergence rate becomes exponential,

$$\bar R_C(M,G) \;\le\; \mathcal O\big(\exp(-\beta M)\big) \quad\text{with}\quad \beta>0.$$

Example 2

(Continued) Consider the following set of unit-step functions

$$G_{\mathrm t} := \big\{x_t\ \big|\ t\in[0,1]\big\} \quad\text{with}\quad \forall\tau\in[0,1]:\ x_t(\tau) := \begin{cases}1 & \text{if }\tau\le t,\\ 0 & \text{otherwise,}\end{cases}$$

for which we want to establish regularity in the sense of Definition 4. Using earlier results in Example 2, it is known that the function $x_{0.5}$ can be approximated with a sequence of polynomials $p_{0.5}^M:[0,1]\to\mathbb R$ of degree $M$ such that

$$\big\|x_{0.5}-p_{0.5}^M\big\|_2 \;\le\; \mathcal O\big(M^{-1/2}\big).$$

Likewise, for every $t\in[0,1]$, we can construct the family of polynomials

$$\forall\tau\in[0,1]:\quad p_t^M(\tau) := p_{0.5}^M\Big(\frac{1-t+\tau}{2}\Big).$$

Since the latter satisfy the same property as $x_{0.5}$, namely

$$\big\|x_t-p_t^M\big\|_2 \;\le\; \frac{K}{\sqrt M},$$

where the constant $K<\infty$ is independent of $t$ and $M$, we have $\bar R_C(M,G_{\mathrm t})\le\mathcal O\big(M^{-1/2}\big)$.

This example can be generalized to other classes of functions. For instance, given any smooth function $f\in L^2[0,1]$, the subset

$$G_f := \big\{g\in H\ \big|\ \exists t\in[0,1]:\ g(\tau)=f(\tau)\ \text{if}\ \tau\le t;\ g(\tau)=0\ \text{otherwise}\big\}$$

is regular in $H$, and also satisfies $\bar R_C(M,G_f)\le\mathcal O\big(M^{-1/2}\big)$. This result can be established by writing the elements of $G_f$ as the product of the piecewise smooth function $f$ with the step function $x_t$, and then approximating the two factors separately.
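As a quick numerical check of the shifting construction above (same assumptions and quadrature as in the earlier sketch; the degree $M=40$ is an arbitrary choice), one can compute the truncated Legendre series of $x_{0.5}$ once and reuse it for other thresholds $t$, observing errors of comparable size for every $t$:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

M, n = 40, 20001
s = np.linspace(0.0, 1.0, n)
x05 = (s <= 0.5).astype(float)                      # the mid-point step x_{0.5}

# Degree-M L2-best polynomial for x_{0.5}: its truncated Legendre series.
p05 = sum(np.trapz(x05 * Legendre.basis(k, domain=[0, 1])(s), s) * (2 * k + 1)
          * Legendre.basis(k, domain=[0, 1]) for k in range(M + 1))

# Shifted copies p_t(tau) := p05((1 - t + tau)/2) approximate x_t, uniformly in t.
for t in (0.1, 0.3, 0.5, 0.8):
    xt = (s <= t).astype(float)
    pt = p05((1.0 - t + s) / 2.0)
    print(t, np.sqrt(np.trapz((xt - pt) ** 2, s)))  # errors of comparable size
```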

In the remainder of this section, we analyze and illustrate a regularity condition for the cost functional in Problem (1).

Definition 5

The functional $F: H\to\mathbb R$ is said to be strongly Lipschitz-continuous on $C$ if there exist a bounded subset $G\subseteq H$ which is regular on $C$ and a constant $L<\infty$ such that

$$\forall e\in H:\quad \sup_{x\in C}\,\big|F(x+e)-F(x)\big| \;\le\; L\,\sup_{g\in G}\,\big|\langle g,e\rangle\big|. \qquad (7)$$

Remark 3

In the special case of an affine functional $F$, given by

$$F(x) := F_0 + \langle\hat g,x\rangle$$

where $F_0\in\mathbb R$, and $\hat g\in H$ is a regular element for $C$, the condition (7) is trivially satisfied with $L=1$ and $G=\{\hat g\}$. In this interpretation, the regularity condition (7) essentially provides a means of keeping the nonlinear part of $F$ under control.

Remark 4

Consider the finite-dimensional Euclidean space $\mathbb R^n$, a bounded subset $S\subseteq\mathbb R^n$, and a continuously differentiable function $F:\mathbb R^n\to\mathbb R$ whose first derivative takes values in a bounded subset $G\subseteq\mathbb R^n$. By the mean-value theorem, $F$ satisfies

$$\forall e\in\mathbb R^n:\quad \sup_{x\in S}\,\big|F(x+e)-F(x)\big| \;=\; \sup_{x\in S}\,\bigg|\int_0^1\Big\langle\frac{\partial F}{\partial x}(x+\eta e),\ e\Big\rangle\,\mathrm d\eta\bigg| \;\le\; \sup_{g\in G}\,\big|\langle g,e\rangle\big|.$$

Thus, any continuously differentiable function with a bounded first derivative is strongly Lipschitz-continuous on any bounded subset of $\mathbb R^n$. This result can be generalized to certain classes of functionals in infinite-dimensional Hilbert space. For instance, let $F: H\to\mathbb R$ be Fréchet differentiable, such that

$$\forall(x,e)\in C\times H:\quad F(x+e)-F(x) \;=\; \int_0^1\big\langle DF(x+\eta e),\ e\big\rangle\,\mathrm d\eta,$$

and let the set of Fréchet derivatives $G := \{DF(x)\mid x\in H\}\subseteq H$ be both bounded and regular on $C$. Then, $F$ is strongly Lipschitz-continuous on $C$.

The following two examples investigate strong Lipschitz continuity for certain classes of functionals in the practical space of square-integrable functions with the Legendre polynomials as orthogonal basis functions. The first one (Example 3) illustrates the case of a functional that is not strongly Lipschitz-continuous; the second one (Example 4) identifies a broad class of strongly Lipschitz-continuous functionals defined via the solution of an embedded ODE system. The intention here is to help the reader develop an intuition that strongly Lipschitz-continuous functionals occur naturally in many, although not all, problems of practical relevance.

Example 3

We consider the Hilbert space $H=L^2[0,1]$ of square-integrable functions on the interval [0, 1] with the standard inner product, and select the orthogonal basis functions $(\Phi_k)_{k\in\mathbb N}$ as the Legendre polynomials on the interval [0, 1] with weighting factors $\sigma_k=\frac{1}{2k+1}$. We investigate whether the functional $F$ given below is strongly Lipschitz-continuous on the set $C := \{x\in L^2[0,1]\mid\forall s\in[0,1]:\ |x(s)|\le1\}$,

$$\forall x\in L^2[0,1]:\quad F(x) := \|x\|_2^2 \;=\; \int_0^1 x(s)^2\,\mathrm ds.$$

Consider the family of sets defined by

$$\forall M\in\mathbb N:\quad E_M := \big\{P_M(x)-x\ \big|\ x\in C\big\}\subseteq L^2[0,1].$$

If the condition (7) were to hold for some bounded and regular set $G$, we would have by Theorem 1 that

$$\sup_{e\in E_M,\,x\in C}\,\big|F(x+e)-F(x)\big| \;\le\; L\,\sup_{e\in E_M,\,g\in G}\,\big|\langle g,e\rangle\big| \;=\; L\,\sup_{x\in C,\,g\in G}\,\big|\langle g,\,x-P_M(x)\rangle\big|,$$

and it would follow from Corollary 1 that

$$\lim_{M\to\infty}\ \sup_{e\in E_M,\,x\in C}\,\big|F(x+e)-F(x)\big| \;=\; 0.$$

However, this leads to a contradiction since we also have

$$\forall M\in\mathbb N:\quad \sup_{e\in E_M,\,x\in C}\,\big|F(x+e)-F(x)\big| \;\ge\; \sup_{e\in E_M}\,F(e) \;=\; \sup_{x\in C}\,\big\|x-P_M(x)\big\|_2^2 \;=\; 1,$$

where the first inequality follows from taking $x=0\in C$. Therefore, the regularity condition (7) cannot be satisfied for any bounded and regular set $G$, and $F$ is not strongly Lipschitz-continuous on $C$.

Remark 5

The result that the functional $F$ in Example 3 is not strongly Lipschitz-continuous on $C$ does not contradict Remark 4: although $F$ is Fréchet differentiable in $L^2[0,1]$, the corresponding set $G$ of Fréchet derivatives of $F$ is unbounded.

Example 4

We again consider the Hilbert space $H=L^2[0,1]$ of square-integrable functions on the interval [0, 1] equipped with the standard inner product, and select the orthogonal basis functions $(\Phi_k)_{k\in\mathbb N}$ as the Legendre polynomials on the interval [0, 1] with weighting factors $\sigma_k=\frac{1}{2k+1}$. Our focus is on the ordinary differential equation (ODE)

$$\forall t\in[0,1]:\quad \frac{\partial x}{\partial t}(t,u) \;=\; f\big(x(t,u)\big)+B\,u(t) \quad\text{with}\quad x(0,u)=0, \qquad (8)$$

where $B\in\mathbb R^{n\times 1}$ is a constant matrix; and $f:\mathbb R^n\to\mathbb R^n$ is a continuously differentiable and globally Lipschitz-continuous function, so that the solution trajectory $x(\cdot,u):[0,1]\to\mathbb R^n$ is well-defined for all $u\in L^2[0,1]$. For simplicity, we consider the functional $F$ given by

$$F(u) := c^{\mathsf T}x(1,u),$$

for some real vector $c\in\mathbb R^n$. Moreover, the constraint set $C\subseteq H$ may be any uniformly bounded function subset, such as simple uniform bounds of the form

$$C := \big\{u\in L^2[0,1]\ \big|\ \forall\tau\in[0,1]:\ |u(\tau)|\le1\big\}.$$

The following developments aim to establish that F is strongly Lipschitz-continuous on C.

By Taylor’s theorem, the defect $\delta(t,u,e) := x(t,u+e)-x(t,u)$ satisfies the differential equation

$$\forall t\in[0,1]:\quad \frac{\partial\delta}{\partial t}(t,u,e) \;=\; \Lambda(t,u,e)\,\delta(t,u,e)+B\,e(t)$$

with $\delta(0,u,e)=0$ and $\Lambda(t,u,e) := \int_0^1\frac{\partial f}{\partial x}\big(x(t,u)+\eta\,\delta(t,u,e)\big)\,\mathrm d\eta$. The right-hand-side function $f$ being globally Lipschitz-continuous, we have, for any given smooth matrix-valued function $A:[0,1]\to\mathbb R^{n\times n}$,

$$\forall(t,u,e)\in[0,1]\times C\times H:\quad \big\|\Lambda(t,u,e)-A(t)\big\| \;\le\; \ell_1,$$

for some constant $\ell_1<\infty$. For a particular choice of $A$, we can decompose $\delta(t,u,e)$ into the sum $\delta_{\mathrm l}(t,e)+\delta_{\mathrm n}(t,u,e,\delta_{\mathrm l})$ corresponding to the solution of the ODE system

$$\forall t\in[0,1]:\quad \dot\delta_{\mathrm l}(t,e) \;=\; A(t)\,\delta_{\mathrm l}(t,e)+B\,e(t) \qquad (9)$$
$$\dot\delta_{\mathrm n}(t,u,e,\delta_{\mathrm l}) \;=\; \Lambda(t,u,e)\,\delta_{\mathrm n}(t,u,e,\delta_{\mathrm l})+\big[\Lambda(t,u,e)-A(t)\big]\,\delta_{\mathrm l}(t,e) \qquad (10)$$

with $\delta_{\mathrm l}(0,e)=\delta_{\mathrm n}(0,u,e,\delta_{\mathrm l})=0$. In this decomposition, the left-hand side of (7) satisfies

$$\forall e\in H:\quad \sup_{u\in C}\,\big|F(u+e)-F(u)\big| \;\le\; \big|c^{\mathsf T}\delta_{\mathrm l}(1,e)\big|+\sup_{u\in C}\,\big|c^{\mathsf T}\delta_{\mathrm n}(1,u,e,\delta_{\mathrm l})\big|.$$

Regarding the linear term $\delta_{\mathrm l}$ first, we have

$$\forall s\in[0,1]:\quad c^{\mathsf T}\delta_{\mathrm l}(s,e) \;=\; \langle g_s,e\rangle \qquad (11)$$

with

$$\forall\tau\in[0,1]:\quad g_s(\tau) := \begin{cases} c^{\mathsf T}\Gamma(s,\tau)\,B & \text{if }\tau\le s,\\ 0 & \text{otherwise,}\end{cases}$$

where $\Gamma(t,\tau)$ denotes the fundamental solution of the linear ODE (9), such that

$$\forall(\tau,t)\in[0,1]^2:\quad \frac{\partial}{\partial t}\Gamma(t,\tau) \;=\; A(t)\,\Gamma(t,\tau) \quad\text{with}\quad \Gamma(\tau,\tau)=I.$$

Since $A$ is smooth, it follows from Example 2 that the set $G := \{g_s\mid s\in[0,1]\}$ is both regular on $C$ and bounded, and satisfies

$$\bar R_C(M,G) \;\le\; \mathcal O\big(M^{-1/2}\big).$$

Regarding the nonlinear term $\delta_{\mathrm n}$, since the function $\Lambda$ is uniformly bounded, applying Gronwall’s lemma to the ODE (10) gives

$$\forall(t,u,e)\in[0,1]\times C\times H:\quad \big|c^{\mathsf T}\delta_{\mathrm n}(t,u,e,\delta_{\mathrm l})\big| \;\le\; \exp(\ell)\,\sup_{s\in[0,1]}\big|c^{\mathsf T}\delta_{\mathrm l}(s,e)\big| \;\le\; \exp(\ell)\,\sup_{g\in G}\big|\langle g,e\rangle\big|, \qquad (12)$$

for some constant $\ell<\infty$. Finally, combining (11) and (12) shows that $F$ satisfies the condition (7) with $L := 1+\exp(\ell)$, and thus $F$ is strongly Lipschitz-continuous on $C$.

Remark 6

The functional $F$ in the previous example is defined implicitly via the solution of an ODE. The result that such functionals are strongly Lipschitz-continuous is particularly significant insofar as the proposed optimization framework will indeed encompass a broad class of optimal control problems, as well as problems in the calculus of variations. In fact, it turns out that strong Lipschitzness still holds when replacing the constant matrix $B$ in (8) with any matrix-valued, continuously differentiable, and globally Lipschitz-continuous function of $x(t,u)$, thus encompassing quite a general class of control-affine nonlinear systems. In the case of general nonlinear ODEs, however, strong Lipschitzness may be lost. It could nevertheless be recovered by restricting condition (7) in Definition 5 to

$$\forall e\in E_C:\quad \sup_{x\in C}\,\big|F(x+e)-F(x)\big| \;\le\; L\,\sup_{g\in G}\,\big|\langle g,e\rangle\big|,$$

with the projection error set $E_C := \{P_M(x)-x\mid x\in C,\ M\in\mathbb N\}\subseteq H$, and by also restricting the constraint set $C$ to only contain uniformly bounded and Lipschitz-continuous functions in $L^2[0,1]$ with uniformly bounded Lipschitz constants.

We close this section with a brief analysis of the relationship between strong and classical Lipschitzness in infinite-dimensional Hilbert space.

Lemma 2

Every strongly Lipschitz-continuous functional F:HR on C is also Lipschitz-continuous on C.

Proof

Let $G$ be a bounded and regular subset of $H$ on $C$ such that the condition (7) is satisfied. Since $G$ is bounded, there exists a constant $\alpha<\infty$ such that $\sup_{g\in G}\|g\|_H\le\alpha$. Applying the Cauchy–Schwarz inequality to the right-hand side of (7) gives

$$\forall e\in H:\quad \sup_{x\in C}\,\big|F(x+e)-F(x)\big| \;\le\; L\,\alpha\,\|e\|_H,$$

and so $F$ is Lipschitz-continuous on $C$.

Remark 7

With regularity of the set $G$ alone, i.e. without boundedness, the condition (7) may not imply Lipschitz-continuity, or even continuity, of $F$. As a counter-example, let $G := \mathrm{span}\{\Phi_0,\Phi_1,\ldots,\Phi_N\}$ be the subspace spanned by the basis functions $\Phi_0,\ldots,\Phi_N$ of the infinite-dimensional Hilbert space $H$. It is clear that $G$ is regular on any bounded set $C\subseteq H$, since $\bar R_C(M,G)=0$ for all $M\ge N$. Now, let the functional $F: H\to\mathbb R$ be given by

$$F(x) := \begin{cases}0 & \text{if }\langle\hat g,x\rangle\le0,\\ 1 & \text{otherwise,}\end{cases}$$

for some $\hat g\in G$. For every $(x,e)\in C\times H$, we have

$$\big|F(x+e)-F(x)\big| \;\le\; \begin{cases}0 & \text{if }\langle\hat g,e\rangle=0\\ 1 & \text{otherwise}\end{cases} \;\le\; \begin{cases}0 & \text{if }P_N(e)=0\\ \infty & \text{otherwise}\end{cases} \;=\; \sup_{g\in G}\,\big|\langle g,e\rangle\big|.$$

Therefore, the condition (7) is indeed satisfied, despite $F$ being discontinuous.

Remark 8

In general, Lipschitz-continuity does not imply strong Lipschitz-continuity in an infinite-dimensional Hilbert space. A counter-example is easily contrived with the functional $F: L^2[0,1]\to\mathbb R$ given by

$$\forall x\in L^2[0,1]:\quad F(x) := \min\big\{1,\ \|x\|_2^2\big\}.$$

Although this functional is Lipschitz-continuous, it can be shown by a similar argument as in Example 3 that it is not strongly Lipschitz-continuous.

Global optimization in Hilbert space using complete search

The application of complete-search strategies to infinite-dimensional optimization problems such as (1) calls for an extension of the (spatial) branch-and-bound principle [23] to general Hilbert space. The approach presented in this section differs from branch-and-bound in that the dimension M of the search space is adjusted, as necessary, during the iterations of the algorithm, by using a so-called lifting operation—hence the name branch-and-lift algorithm. The basic idea is to bracket the optimal solution value of Problem (1) and progressively refine these bounds via this lifting mechanism, combined with traditional branching and fathoming.

Based on the developments in Sect. 2, the following subsections describe methods for exhaustive partitioning in infinite-dimensional Hilbert space (Sect. 3.1) and for computing rigorous upper and lower bounds on given subsets of the variable domain (Sect. 3.2), before presenting the proposed branch-and-lift algorithm (Sect. 3.3).

Partitioning in infinite-dimensional Hilbert space

Similar to branch-and-bound search, the proposed branch-and-lift algorithm maintains a partition $\mathcal A := \{A_1,\ldots,A_k\}$ of finite-dimensional sets $A_1,\ldots,A_k$. This partition is updated through the repeated application of certain operations, including branching and lifting, in order to close the gap between an upper and a lower bound on the global solution value of the optimization problem (1). The following definition is useful in order to formalize these operations:

Definition 6

With each pair $(M,A)\in\mathbb N\times\mathbb P(\mathbb R^{M+1})$, we associate a subregion $X_M(A)$ of $H$ given by

$$X_M(A) := \bigg\{x\in C\ \bigg|\ \Big(\frac{\langle x,\Phi_0\rangle}{\sigma_0},\ldots,\frac{\langle x,\Phi_M\rangle}{\sigma_M}\Big)^{\mathsf T}\in A\bigg\}.$$

Moreover, we say that the set $A$ is infeasible if $X_M(A)=\emptyset$.

Notice that each subregion $X_M(A)$ is a convex set if the sets $C$ and $A$ are themselves convex. For practical reasons, we restrict ourselves herein to compact subsets $A\in\mathbb S_{M+1}\subseteq\mathbb P(\mathbb R^{M+1})$, where the class of sets $\mathbb S_{M+1}$ is easily stored and manipulated by a computer. For example, $\mathbb S_{M+1}$ could be a set of interval boxes, polytopes, ellipsoids, etc.

The ability to detect infeasibility of a set $A\in\mathbb S_{M+1}$ is pivotal for complete search. Under the assumption that the constraint set $C$ is convex (Assumption 1), a certificate of infeasibility can be obtained by considering the convex optimization problem

$$d_C(A) := \min_{x\in C,\,y\in H}\ \|x-y\|_H \quad\text{s.t.}\quad \Big(\frac{\langle y,\Phi_0\rangle}{\sigma_0},\ldots,\frac{\langle y,\Phi_M\rangle}{\sigma_M}\Big)^{\mathsf T}\in A. \qquad (13)$$

It readily follows from the Cauchy–Schwarz inequality that

$$-\|x-y\|_H \;\le\; \langle x,\Phi_k\rangle-\langle y,\Phi_k\rangle \;\le\; \|x-y\|_H,$$

for any (normalized) basis function $\Phi_k$, and so $\|x-y\|_H=0$ implies $\langle x,\Phi_k\rangle=\langle y,\Phi_k\rangle$. Consequently, a set $A$ is infeasible if and only if $d_C(A)>0$. Because Slater’s constraint qualification holds for Problem (13) under Assumption 1, one approach to checking infeasibility to within high numerical accuracy relies on duality for computing lower bounds on the optimal solution value $d_C(A)$—similar in essence to the infinite-dimensional convex optimization techniques in [4, 14]. For the purpose of this paper, our focus is on a general class of non-convex objective functionals $F$, whereas the constraint set $C$ is assumed to be convex and to have a simple geometry, in order to avoid numerical issues in solving feasibility problems of the form (13). We shall therefore assume, from this point onwards, that infeasibility can be verified with high numerical accuracy for any set $A\in\mathbb S_{M+1}$.

A branching operation subdivides any set $A\in\mathbb S_{M+1}$ in the partition $\mathcal A$ into two compact subsets $A_{\mathrm l},A_{\mathrm r}\in\mathbb S_{M+1}$ such that $A_{\mathrm l}\cup A_{\mathrm r}\supseteq A$, thereby updating the partition as

$$\mathcal A \;\leftarrow\; \big(\mathcal A\setminus\{A\}\big)\cup\{A_{\mathrm l},A_{\mathrm r}\}.$$

On the other hand, a lifting operation essentially lifts any set $A\in\mathbb S_{M+1}$ into a higher-dimensional space under the function $\Gamma_M:\mathbb S_{M+1}\to\mathbb S_{M+2}$, defined such that

$$\forall A\in\mathbb S_{M+1}:\quad X_M(A) \;\subseteq\; X_{M+1}\big(\Gamma_M(A)\big).$$

The question of how to define the higher-order coefficient $\langle x,\Phi_{M+1}\rangle$ in such a lifting is related to the so-called moment problem, which asks under which conditions on a sequence $(a_k)_{k\in\{1,\ldots,N\}}$, named a moment sequence, one can find an associated element $x\in H$ with $a_k=\langle x,\Phi_k\rangle/\sigma_k$ for each $k\in\{1,\ldots,N\}$. Classical examples of such moment problems are Stieltjes’, Hamburger’s, and Hausdorff’s moment problems [1]. Here, we adopt the modern standpoint on moment problems based on convex optimization [30, 42], by considering the following optimization subproblems:

$$\underline a_{M+1}(A) \;\le\; \min_{x\in X_M(A)}\ \frac{\langle x,\Phi_{M+1}\rangle}{\sigma_{M+1}} \qquad\text{and}\qquad \bar a_{M+1}(A) \;\ge\; \max_{x\in X_M(A)}\ \frac{\langle x,\Phi_{M+1}\rangle}{\sigma_{M+1}}. \qquad (14)$$

Although both optimization problems in (14) are convex when $A$ and $C$ are convex, they remain infinite-dimensional, and thus intractable in general. Obtaining lower and upper bounds $\underline a_{M+1}(A)$, $\bar a_{M+1}(A)$ is nonetheless straightforward under Assumption 1. In case no better approach is available, one can always use

$$\underline a_{M+1}(A) := -\frac{\gamma}{\sqrt{\sigma_{M+1}}} \qquad\text{and}\qquad \bar a_{M+1}(A) := \frac{\gamma}{\sqrt{\sigma_{M+1}}},$$

which follows readily from the Cauchy–Schwarz inequality and the property that $\|\Phi_{M+1}\|_H=\sqrt{\sigma_{M+1}}$. As already mentioned in the introduction of the paper, a variety of algorithms are now available for tackling convex infinite-dimensional problems both efficiently and reliably [4, 14], which could provide tighter bounds in practical applications.

A number of remarks are in order:

Remark 9

The idea of introducing a lifting operation to enable partitioning in infinite-dimensional function space was originally introduced by the authors in a recent publication [25], focusing on the global optimization of optimal control problems. One principal contribution of the present paper is a generalization of these ideas to global optimization in any Hilbert space, by identifying a set of sufficient regularity conditions on the cost functional and constraint set for the resulting branch-and-lift algorithm to converge to an ε-global solution in finite run-time.

Remark 10

Many recent optimization techniques for global optimization are based on the theory of positive polynomials and their associated linear matrix inequality (LMI) approximations [30, 45], which are also originally inspired by moment problems. Although these LMI techniques may be applied in the practical implementation of the aforementioned lifting operation, they are not directly related to the branch-and-lift algorithm that is developed in the following sections. An important motivation for moving away from the generic LMI framework is that the available implementations scale quite poorly with the number of optimization variables, due to the combinatorial increase of the number of monomials in the associated multivariate polynomial. Therefore, a direct approximation of the cost function F with multivariate polynomials would conflict with our primary objective to develop a global optimization algorithm whose worst-case run-time does not depend on the number of optimization variables.

Strategies for upper and lower bounding of functionals

Besides partitioning, the efficient construction of tight upper and lower bounds on the global solution value of (1) for given subregions of $H$ is key to a practical implementation of branch-and-lift. In the following, functions $L_M,U_M:\mathbb S_{M+1}\to\mathbb R$ such that

$$\forall A\in\mathbb S_{M+1}:\quad L_M(A) \;\le\; \inf_{x\in X_M(A)}F(x) \;\le\; U_M(A) \qquad (15)$$

shall be called lower- and upper-bounding functions of the functional $F$, respectively. A simple approach to constructing these lower and upper bounds relies on the following two-step decomposition:

shall be call lower- and upper-bounding functions of the functional F, respectively. A simple approach to constructing these lower and upper bounds relies on the following two-step decomposition:

  1. Compute bounds $L_M^0(A)$ and $U_M^0(A)$ on the finite-dimensional approximation of $F$, such that
    $$\forall A\in\mathbb S_{M+1}:\quad L_M^0(A) \;\le\; \inf_{a\in A}\ F\Big(\sum_{i=0}^{M}a_i\Phi_i\Big) \;\le\; U_M^0(A). \qquad (16)$$
    Clearly, how such bounds can be determined in practice depends on the particular expression of $F$. In the case that $F$ is factorable, various arithmetics can be used to propagate bounds through a DAG of the function, including interval arithmetic [36], McCormick relaxations [9, 33], and Taylor/Chebyshev model arithmetic [10, 43, 47]. Moreover, if the expression of $F$ embeds a dynamic system described by differential equations, validated bounds can be obtained by using a variety of set-propagation techniques as described, e.g., in [26, 31, 38, 50, 53], or via hierarchies of LMI relaxations as in [21, 29].
  2. Compute a bound $\Delta_M(A)$ on the approximation error, such that
    $$\forall A\in\mathbb S_{M+1}:\quad \bigg|\inf_{x\in X_M(A)}F(x)\;-\;\inf_{a\in A}F\Big(\sum_{i=0}^{M}a_i\Phi_i\Big)\bigg| \;\le\; \Delta_M(A). \qquad (17)$$
    In the case that $F$ is strongly Lipschitz-continuous on $C$, we can always take $\Delta_M(A) := L\,\bar R_C(M,G)$, where the constant $L<\infty$ and the bounded regular set $G$ satisfy the condition (7). Naturally, better bounds may be derived by exploiting a particular structure or expression of $F$.

By construction, the lower-bounding function $L_M(A) := L_M^0(A)-\Delta_M(A)$ and the upper-bounding function $U_M(A) := U_M^0(A)+\Delta_M(A)$ trivially satisfy (15). Moreover, when the set $A\in\mathbb S_{M+1}$ is infeasible—see the related discussion in Sect. 3.1—we may set $\Delta_M(A)=L_M(A)=U_M(A)=\infty$.

We state the following assumptions in anticipation of the convergence analysis in Sect. 4.

Assumption 3

The cost functional $F$ in Problem (1) is strongly Lipschitz-continuous on $C$, with the condition (7) holding for the constant $L<\infty$ and the bounded regular subset $G\subseteq H$.

Remark 11

Under Assumption 3, Lemma 2 implies that

$$\forall a,a'\in A:\quad \bigg|F\Big(\sum_{k=0}^{M}a_k\Phi_k\Big)-F\Big(\sum_{k=0}^{M}a'_k\Phi_k\Big)\bigg| \;\le\; L'\,\bigg\|\sum_{k=0}^{M}(a_k-a'_k)\,\Phi_k\bigg\|_H$$

for a Lipschitz constant $L'\le L\,\sup_{g\in G}\|g\|_H$. Thus, if Assumption 2 is also satisfied, any pair $(M,A)\in\mathbb N\times\mathbb S_{M+1}$ is such that

$$\forall a,a'\in A:\quad \bigg|F\Big(\sum_{k=0}^{M}a_k\Phi_k\Big)-F\Big(\sum_{k=0}^{M}a'_k\Phi_k\Big)\bigg| \;\le\; L'\sum_{k=0}^{M}|a_k-a'_k|\,\|\Phi_k\|_H \;\le\; K'\,d_1(A)$$

with $K' := L'\,\sup_{k\in\mathbb N}\|\Phi_k\|_H$ and $d_1(A) := \sum_{i=0}^{M}\sup_{a,a'\in A}|a_i-a'_i|$. It follows that

$$\forall(M,A)\in\mathbb N\times\mathbb S_{M+1}:\quad U_M(A)-L_M(A) \;\le\; K'\,d_1(A)+2L\,\bar R_C(M,G),$$

and therefore the gap $U_M(A)-L_M(A)$ can be made arbitrarily small under Assumption 3 by choosing a sufficiently large order $M$ and a sufficiently small diameter for the set $A$. This result will be exploited systematically in the convergence analysis in Sect. 4.

Remark 12

An alternative upper bound $U_M(A)$ in (15) may be computed more directly by solving the following nonconvex optimization problem to local optimality:

$$\min_{a\in A}\ F\Big(\sum_{k=0}^{M}a_k\Phi_k\Big) \quad\text{s.t.}\quad \sum_{k=0}^{M}a_k\Phi_k\in C. \qquad (18)$$

Without further assumptions on the orthogonal basis functions $\Phi_0,\Phi_1,\ldots$ and on the constraint set $C$, however, it is not hard to contrive examples where $P_M(x)\notin C$ for all $x\in C$ and all $M\in\mathbb N$; that is, examples where the upper bound (18) does not converge as $M\to\infty$. This upper-bounding approach could nonetheless be combined with another bounding approach based on set arithmetics in order to prevent convergence issues; e.g., by using the solution value of (18) as long as it provides a bound that is smaller than $U_M^0(A)+\Delta_M(A)$.

Branch-and-lift algorithm

The foregoing considerations on partitioning and bounding in Hilbert space can be combined in Algorithm 1 for solving infinite-dimensional optimization problems to ε-global optimality. (The listing of Algorithm 1 appears as a figure in the published version and is not reproduced here.)
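The following schematic sketch restates the branch-and-lift loop in code form, based solely on the description in the remarks below; the step numbering, function names, and box interface (e.g. the `width` attribute) are our own reconstruction under stated assumptions, not the authors’ listing:

```python
def branch_and_lift(bounds_LU, delta, lift, branch, init_box, eps, rho):
    """Schematic branch-and-lift loop (a reconstruction, not the published listing).

    bounds_LU(M, A) -> (L0, U0)  finite-dimensional bounds satisfying (16)
    delta(M, A)     -> Delta     approximation-error bound satisfying (17)
    lift(M, A)      -> A'        lifting operation Gamma_M of Sect. 3.1
    branch(A)       -> (Al, Ar)  exhaustive subdivision with Al u Ar covering A
    """
    M, partition = 0, [init_box]
    while True:
        enclosures = []
        for A in partition:
            L0, U0 = bounds_LU(M, A)
            d = delta(M, A)
            enclosures.append((A, L0 - d, U0 + d))     # L_M(A), U_M(A), cf. (15)
        UBD = min(U for _, _, U in enclosures)
        LBD = min(L for _, L, _ in enclosures)
        if UBD - LBD <= eps:                           # termination at eps-optimality
            return LBD, UBD, [A for A, _, _ in enclosures]
        # Step 4 (fathoming): discard infeasible or dominated sets
        keep = [(A, L, U) for A, L, U in enclosures
                if L <= UBD and L != float("inf")]
        # Step 6 (lifting condition (19)): lift globally whenever the
        # parameterization error is of the same order as the optimality gap
        if any(U - L <= 2.0 * (1.0 + rho) * delta(M, A) for A, L, U in keep):
            partition, M = [lift(M, A) for A, _, _ in keep], M + 1
        else:                                          # otherwise branch the widest box
            keep.sort(key=lambda t: t[0].width)        # `width` is an assumed attribute
            A, _, _ = keep.pop()
            partition = [B for B, _, _ in keep] + list(branch(A))
```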

A number of remarks are in order:

  • Regarding initialization, the branch-and-lift iterations start with $M=0$. A possible way of initializing the partition $\mathcal A=\{A_0\}$ is by noting that
    $$\big\{\langle x,\Phi_0\rangle/\sigma_0\ \big|\ x\in C\big\} \;\subseteq\; \Big[-\frac{\gamma}{\sqrt{\sigma_0}},\ \frac{\gamma}{\sqrt{\sigma_0}}\Big]$$
    under Assumption 1.
  • Besides the branching and lifting operations introduced earlier in Sect. 3.1, fathoming in Step 4 of Algorithm 1 refers to the process of discarding a given set $A\in\mathcal A$ from the partition if
    $$L_M(A)=\infty \qquad\text{or}\qquad \exists A'\in\mathcal A:\ L_M(A)>U_M(A').$$
  • The main idea behind the lifting condition defined in Step 6 of Algorithm 1, namely
    $$\exists A\in\mathcal A:\quad U_M(A)-L_M(A) \;\le\; 2(1+\rho)\,\Delta_M(A), \qquad (19)$$
    is that a subset $A$ should be lifted to a higher-dimensional space whenever the approximation error $\Delta_M(A)$ due to the finite parameterization becomes of the same order of magnitude as the current optimality gap $U_M(A)-L_M(A)$. The aim here is to apply as few lifts as possible, since it is preferable to branch in a lower-dimensional space. The convergence of the branch-and-lift algorithm under this lifting condition is examined in Sect. 4 below. Notice also that a lifting operation is applied globally in Algorithm 1—that is, to all parameter subsets in the partition $\mathcal A$—so all the subsets in $\mathcal A$ share the same parameterization order at any iteration. In a variant of Algorithm 1, one could also imagine a family of subsets having different parameterization orders, by applying the lifting condition locally instead.
  • Finally, it will be established in the following section that, upon termination and under certain assumptions, Algorithm 1 returns an ε-suboptimal solution of Problem (1). In particular, Assumption 1 rules out the possibility of an infeasible solution.

Convergence analysis of branch-and-lift

This section investigates the convergence properties of the branch-and-lift algorithm (Algorithm 1) developed previously. It is convenient to introduce the following notation in order to conduct the analysis:

Definition 7

Let $G\subseteq H$ be a regular set for $C$, and define the inverse function $\bar R_C^{-1}(\cdot,G):\mathbb R_{++}\to\mathbb N$ by

$$\forall\varepsilon>0:\quad \bar R_C^{-1}(\varepsilon,G) := \min_{M\in\mathbb N}\ M \quad\text{s.t.}\quad \bar R_C(M,G)\le\varepsilon.$$

The following result is a direct consequence of the lifting condition (19) in the branch-and-lift algorithm:

Lemma 3

Let Assumption 3 hold, and suppose that finite bounds $L_M^0(A)$, $U_M^0(A)$ and $\Delta_M(A)$ satisfying (16)–(17) can be computed for any feasible pair $(M,A)\in\mathbb N\times\mathbb S_{M+1}$. Then, the number of lifting operations in a run of Algorithm 1 as applied to Problem (1) is at most

$$\bar M := \bar R_C^{-1}\bigg(\frac{\varepsilon}{2(\rho+1)L},\ G\bigg),$$

regardless of whether or not the algorithm terminates finitely.

Proof

Assume that $M=\bar M$ in Algorithm 1, and that the termination condition is not yet satisfied; that is,

$$U_{\bar M}(A)-L_{\bar M}(A) \;>\; \varepsilon$$

for a certain feasible set $A\in\mathcal A$. If the lifting condition (19) were to hold for $A$, then it would follow from (16)–(17) that

$$\varepsilon-2\Delta_{\bar M}(A) \;<\; U_{\bar M}^0(A)-L_{\bar M}^0(A) \;\le\; 2\rho\,\Delta_{\bar M}(A).$$

Moreover, $F$ being strongly Lipschitz-continuous on $C$ by Assumption 3, so that $\Delta_{\bar M}(A)\le L\,\bar R_C(\bar M,G)$, we would have

$$\bar R_C(\bar M,G) \;>\; \frac{\varepsilon}{2(\rho+1)L}.$$

This is a contradiction, since $\bar R_C(\bar M,G)\le\frac{\varepsilon}{2(\rho+1)L}$ by Definition 7.

Besides having a finite number of lifting operations, the convergence of Algorithm 1 can be established if the elements of a partition can be made arbitrarily small after applying a finite number of subdivisions.

Definition 8

A partitioning scheme is said to be exhaustive if, for any dimension $M\in\mathbb N$, any tolerance $\eta>0$, and any bounded initial partition $\mathcal A=\{A_0\}$, we have

$$\mathrm{diam}(\mathcal A) := \max_{A\in\mathcal A}\ \mathrm{diam}(A) \;<\; \eta$$

after finitely many subdivisions, where $\mathrm{diam}(A) := \sup_{a,a'\in A}\|a-a'\|$. Moreover, we denote by $\Sigma(\eta,M)$ an upper bound on the corresponding number of subdivisions in an exhaustive scheme.

The following theorem provides the main convergence result for the proposed branch-and-lift algorithm.

Theorem 2

Let Assumptions 1, 2 and 3 hold, and suppose that finite bounds $L_M^0(A)$, $U_M^0(A)$ and $\Delta_M(A)$ satisfying (16)–(17) can be computed for any feasible pair $(M,A)\in\mathbb N\times\mathbb S_{M+1}$. If the partitioning scheme is exhaustive, then Algorithm 1 terminates after at most $\bar\Sigma$ iterations, where

$$\bar\Sigma \;\le\; \max_{0\le M\le\bar M}\ \Sigma\bigg(\frac{\varepsilon\rho}{K'(\rho+1)},\ M\bigg), \quad\text{with}\quad K' := L'\,\sup_{k\in\mathbb N}\|\Phi_k\|_H. \qquad (20)$$

Proof

By Lemma 3, the maximal number $M$ of lifting operations during a run of Algorithm 1 is finite, with $M\le\bar M$. Moreover, whenever the lifting condition (19) is not satisfied for a feasible subset $A\in\mathcal A$, we have

$$\Delta_M(A) \;\le\; \frac{U_M^0(A)-L_M^0(A)}{2\rho}.$$

Since $L_M(A)=L_M^0(A)-\Delta_M(A)$ and $U_M(A)=U_M^0(A)+\Delta_M(A)$, it follows that the termination condition $U_M(A)-L_M(A)\le\varepsilon$ is satisfied if

$$U_M^0(A)-L_M^0(A) \;\le\; \frac{\rho\,\varepsilon}{\rho+1}.$$

By Assumptions 2 and 3 and Remark 11, we have

$$U_M^0(A)-L_M^0(A) \;\le\; K'\,\mathrm{diam}(A),$$

and the termination condition is thus satisfied if

$$\mathrm{diam}(A) \;\le\; \frac{\varepsilon\rho}{K'(\rho+1)}.$$

This latter condition is met after at most $\Sigma\big(\frac{\varepsilon\rho}{K'(\rho+1)},M\big)$ iterations, under the assumption that the partitioning scheme is exhaustive.

Remark 13

In the case that the sets $A\in\mathcal A$ are simple interval boxes and the lifting process is implemented as per (14), we have

$$\forall k\in\{0,\ldots,M\}:\quad \big[\underline a_k(A),\ \bar a_k(A)\big] \;\subseteq\; \Big[-\frac{\gamma}{\sqrt{\sigma_k}},\ \frac{\gamma}{\sqrt{\sigma_k}}\Big].$$

Therefore, one can always subdivide these boxes in such a way that the condition $\mathrm{diam}(\mathcal A)\le\eta$ is satisfied after at most $\Sigma(\eta,M)$ subdivisions, with

$$\Sigma(\eta,M) := \prod_{k=0}^{M}\bigg\lceil\frac{\gamma}{\eta\sqrt{\sigma_k}}\bigg\rceil \;\in\;\mathbb N,$$

for any given dimension $M$. In particular, $\Sigma(\eta,M)$ is monotonically increasing in $M$, and (20) simplifies to

$$\bar\Sigma \;\le\; \Sigma\bigg(\frac{\varepsilon\rho}{K'(\rho+1)},\ \bar M\bigg).$$

It should be clear, at this point, that the worst-case estimate $\bar\Sigma$ given in Theorem 2 may be extremely conservative, and the performance of Algorithm 1 could be much better in practice. Nonetheless, a key property of the estimate $\bar\Sigma$ is that it is independent of the actual nature or number of optimization variables in Problem (1), be it a finite-dimensional or even an infinite-dimensional optimization problem. As already pointed out in the introduction of the paper, this result is quite remarkable, since available run-time estimates for standard convex and non-convex optimization algorithms do not enjoy this property. On the other hand, $\bar\Sigma$ does depend on:

  • the bound $\gamma$ on the constraint set $C$;

  • the Lipschitz constants $K'$ and $L$ of the cost functional $F$;

  • the uniform bound $\sup_k\|\Phi_k\|_H$ and the scaling factors $\sigma_k$ of the chosen orthogonal functions $\Phi_k$; and

  • the lifting parameter $\rho$ and the termination tolerance $\varepsilon$ in Algorithm 1.

All these dependencies are illustrated in the following example.

Example 5

Consider the space of square-integrable functions $H := L^2[-\pi,\pi]$, for which it has been established in Remark 2 that any subset $G_p$ of $p$-times differentiable functions with uniformly Lipschitz-continuous $p$-th derivatives on $[-\pi,\pi]$ is regular, with convergence rate $\bar R_C(M,G_p)\le\alpha M^{-p}$ for some constant $\alpha<\infty$. On choosing the standard trigonometric Fourier basis, such that $\sigma_k=\pi$ are constant scaling factors and $K' := L'\,\sup_k\|\Phi_k\|_2 = L'\sqrt\pi$, and doing the partitioning using simple interval boxes as in Remark 13, a worst-case iteration count can be obtained as

$$\bar\Sigma \;=\; \bigg\lceil\frac{\gamma\,K'(\rho+1)}{\sqrt\pi\,\rho\,\varepsilon}\bigg\rceil^{\left(\frac{2\alpha(\rho+1)L}{\varepsilon}\right)^{1/p}} \;\in\; \exp\Big(\mathcal O\big((1/\varepsilon)^{1/p}\log(1/\varepsilon)\big)\Big).$$

Furthermore, if the global minimizer of Problem (1) happens to be a smooth ($C^{\infty}$) function, the convergence rate can be expected to be of the form $\bar R_C(M,G)=\alpha\exp(-\beta M)$, and Theorem 2 then predicts a worst-case iteration count of

$$\bar\Sigma \;\in\; \exp\Big(\mathcal O\big(\log(1/\varepsilon)^2\big)\Big),$$

which is much more favorable.

Numerical case study

We consider the Hilbert space $H := L^2[0,T]$ of square-integrable functions on the interval [0, T], here with $T=10$. Our focus is on the following nonconvex, infinite-dimensional optimization problem:

$$\inf_{x\in L^2[0,T]}\ F(x) := \int_0^T\Bigg[\bigg(\int_0^T f_1(t-t')\,x(t')\,\mathrm dt'\bigg)^{2}-\bigg(\int_0^T f_2(t-t')\,x(t')\,\mathrm dt'\bigg)^{2}\Bigg]\mathrm dt \quad\text{s.t.}\quad x\in C := \big\{x\in H\ \big|\ \forall t\in[0,T]:\ |x(t)|\le1\big\}, \qquad (21)$$

with the functions $f_1$ and $f_2$ given by

$$\forall t\in\mathbb R:\quad f_1(t) = t^2\,\sin\!\Big(\frac{\pi t^2}{T}+1\Big) \qquad\text{and}\qquad f_2 = \frac{\partial f_1}{\partial t}.$$

Notice the symmetry in the optimization problem (21), as F(x)=F(-x) and xC if and only if -xC. Thus, if x is a global solution point of (21), then -x is also a global solution point.

Although it might be possible to apply techniques from the field of variational analysis to determine the set of optimal solutions, our main objective here is to apply Algorithm 1 without exploiting any particular knowledge about the solution set. For this, we use the Legendre polynomials on $[0,T]$ as basis functions in $L^2[0,T]$,

$$\forall i\in\mathbb N:\quad \Phi_i(t) = (-1)^i\sum_{j=0}^{i}\binom{i}{j}\binom{i+j}{j}\Big(-\frac{t}{T}\Big)^{j},$$

which are orthogonal by construction.

We start by showing that the functional $F$ is strongly Lipschitz-continuous, with the bounded regular subset $G$ in condition (7) taken as

$$G := \big\{f_1^t\ \big|\ t\in[0,T]\big\}\cup\big\{f_2^t\ \big|\ t\in[0,T]\big\}\subseteq H,$$

where we use the shorthand notation $f_1^t(\tau) := f_1(t-\tau)$ and $f_2^t(\tau) := f_2(t-\tau)$. For all $x\in L^2[0,T]$ and all $e\in H$, we have

$$F(x+e)-F(x) \;=\; \int_0^T\Big[\langle f_1^t,x+e\rangle^2-\langle f_1^t,x\rangle^2-\langle f_2^t,x+e\rangle^2+\langle f_2^t,x\rangle^2\Big]\mathrm dt \;=\; \int_0^T\Big[\langle f_1^t,2x+e\rangle\,\langle f_1^t,e\rangle-\langle f_2^t,2x+e\rangle\,\langle f_2^t,e\rangle\Big]\mathrm dt \;\le\; L\,\max\Big\{\sup_{t\in[0,T]}\big|\langle f_1^t,e\rangle\big|,\ \sup_{t\in[0,T]}\big|\langle f_2^t,e\rangle\big|\Big\} \;=\; L\,\sup_{g\in G}\big|\langle g,e\rangle\big|,$$

where $L$ is any upper bound on the term

$$\int_0^T\big|\langle f_1^t,2x+e\rangle\big|+\big|\langle f_2^t,2x+e\rangle\big|\,\mathrm dt \;\le\; 2\int_0^T\max_{\tau\in[0,T]}\big(|f_1^t(\tau)|+|f_2^t(\tau)|\big)\,\mathrm dt+2T\,\sup_{g\in G}\big|\langle g,e\rangle\big| \;\le\; T^2\Big(2+\frac{\pi}{2}\Big)+2T\,\sup_{g\in G}\big|\langle g,e\rangle\big|. \qquad (22)$$

In order to obtain an explicit bound, we need to further analyze the term $\sup_{g\in G}|\langle g,e\rangle|$. First of all, we have

$$\bar D_C(M) \;\le\; \gamma \;=\; \sup_{x\in C}\|x\|_2 \;=\; \sqrt T.$$

Next, recalling that the Legendre approximation error for any smooth function $g\in L^2[0,T]$ is bounded as

$$D(M,g) := \|g-P_M(g)\|_2 \;\le\; \frac{\mu_{M+1}\,\sqrt T}{(M+1)!}\Big(\frac{T}{M}\Big)^{M} \quad\text{with}\quad \mu_i := \sup_{\xi\in[0,T]}\big|g^{(i)}(\xi)\big|$$

for all $M\ge1$, and working out explicit bounds on the derivatives of the functions $f_1^t$ and $f_2^t$, we obtain

$$\forall M\in\mathbb N_+:\quad \sup_{g\in G}\ D(M,g) \;\le\; \frac{T^{3/2}}{(M+1)!}\Big(\frac{T}{M}\Big)^{M}\Big(\frac12+\frac{M}{\pi}\Big)\Big(\frac{\pi}{2T}\Big)^{M} \;\le\; \frac34\,\frac{T^{3/2}}{(M+1)!}\Big(\frac{\pi}{2M}\Big)^{M-1}.$$

It follows by Theorem 1 that

$$\sup_{g\in G}\big|\langle g,e\rangle\big| \;\le\; \bar R_C(M,G) \;=\; \sup_{g\in G}\ \bar D_C(M)\,D(M,g) \;\le\; \frac34\,\frac{T^2}{(M+1)!}\Big(\frac{\pi}{2M}\Big)^{M-1}.$$

Combining all the bounds and substituting T=10 shows that the constant L=611 satisfies the condition (22).

Based on the foregoing developments and the considerations in Sect. 3.2, a simple bound $\Delta_M(A)$ on the approximation error satisfying (17) can be obtained as

$$\forall(M,A)\in\mathbb N_+\times\mathbb S_{M+1}:\quad \Delta_M(A) = \frac{45825}{(M+1)!}\Big(\frac{\pi}{2M}\Big)^{M-1}.$$

Although rather loose for very small $M$, this estimate converges quickly to 0 as $M$ grows; for instance, $\Delta_7(A)\le2\cdot10^{-4}$. Note also that, in a practical implementation, the computation of $\Delta_M(A)$—and also the validation of the generalized Lipschitz constant $L$—could be automated using computer algebra programs, such as Chebfun (http://www.chebfun.org/) [16] or MC++ (https://github.com/omegaicl/mcpp) [35].
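Tabulating the expression above (as reconstructed here) is straightforward; the snippet below merely evaluates the formula and confirms the quoted order of magnitude at $M=7$:

```python
from math import factorial, pi

def Delta(M):
    # Delta_M(A) = 45825/(M+1)! * (pi/(2M))**(M-1), per the estimate above
    return 45825.0 / factorial(M + 1) * (pi / (2.0 * M)) ** (M - 1)

for M in range(1, 9):
    print(M, Delta(M))   # loose for small M; Delta(7) is about 1.5e-4 <= 2e-4
```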

With regards to the computation of bounds $L_M^0(A)$ and $U_M^0(A)$ satisfying (16), we note that $F(x)$ can be interpreted as a quadratic form in $x$,

$$F\Big(\sum_{i=0}^{M}a_i\Phi_i\Big) \;=\; a^{\mathsf T}Q\,a,$$

with the elements of the matrix $Q$ given by

$$\forall j,k\in\{0,\ldots,M\}:\quad Q_{j,k} = \int_0^T\Big[\langle f_1^t,\Phi_j\rangle\,\langle f_1^t,\Phi_k\rangle-\langle f_2^t,\Phi_j\rangle\,\langle f_2^t,\Phi_k\rangle\Big]\mathrm dt.$$

Of the available approaches [18, 39, 41] to compute bounds $L_M^0(A)$ and $U_M^0(A)$ such that

$$L_M^0(A) \;\le\; \min_{a\in A}\ a^{\mathsf T}Q\,a \;\le\; U_M^0(A)$$

for interval boxes $A\subseteq\mathbb R^{M+1}$, we use standard LMI relaxation techniques [20] here.
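The LMI relaxation itself is not reproduced here. As a simpler (and generally weaker) fallback consistent with (16), an interval bound on the quadratic form over a box can be computed as in the sketch below, where the small random $Q$ is only a stand-in for the matrix defined above:

```python
import numpy as np

def interval_bounds_quadratic(Q, lo, hi):
    """Crude bounds L0 <= min_{a in box} a^T Q a and max_{a in box} a^T Q a <= U0,
    obtained by summing the corner range of each bilinear term Q[j,k]*a_j*a_k
    (generally weaker than an LMI relaxation)."""
    L0 = U0 = 0.0
    for j in range(len(lo)):
        for k in range(len(lo)):
            corners = [Q[j, k] * aj * ak
                       for aj in (lo[j], hi[j]) for ak in (lo[k], hi[k])]
            L0 += min(corners)
            U0 += max(corners)
    return L0, U0

rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 3))
Q = 0.5 * (Q + Q.T)                  # stand-in symmetric matrix (illustrative only)
print(interval_bounds_quadratic(Q, lo=[-1.0, -0.5, -0.2], hi=[1.0, 0.5, 0.2]))
```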

At this point, we have all the elements needed for implementing Algorithm 1 for Problem (21). On selecting the termination tolerance $\varepsilon=10^{-5}$ and the lifting parameter $\rho=1$, Algorithm 1 terminates after fewer than 100 iterations and applies 8 lifting operations (starting with $M=1$). The corresponding decrease in the gap between upper and lower bounds as a function of the lifted subspace dimension $M$—immediately after each lifting operation—is shown in the left plot of Fig. 1. Upon convergence, the infimum of (21) is bracketed as

$$-0.16812 \;\le\; \inf_{x\in C}F(x) \;\le\; -0.16811,$$

and a corresponding ε-global solution $x^*$ is reported in the right plot of Fig. 1; the symmetric function $-x^*$ provides another ε-global solution for this problem. Overall, this case study demonstrates that the proposed branch-and-lift algorithm is capable of solving such non-convex, infinite-dimensional optimization problems to global optimality within reasonable computational effort.

Fig. 1  Results of Algorithm 1 applied to Problem (21) for $\varepsilon=10^{-5}$ and $\rho=1$. Left: gap between upper and lower bounds as a function of the lifted subspace dimension $M$. Right: a globally ε-suboptimal solution $x^*$. (Plots not reproduced here.)

Conclusions

This paper has presented a complete-search algorithm, called branch-and-lift, for the global optimization of problems with a non-convex cost functional and a bounded convex constraint set defined on a Hilbert space. A key contribution is the determination of run-time complexity bounds for branch-and-lift that are independent of the number of variables in the optimization problem, provided that the cost functional is strongly Lipschitz-continuous with respect to a regular and bounded subset of that Hilbert space. The corresponding convergence conditions are satisfied for a large class of practically relevant problems in the calculus of variations and optimal control. In particular, the complexity analysis in this paper implies that branch-and-lift can be applied to solve potentially non-convex and infinite-dimensional optimization problems without needing a priori knowledge about the existence or regularity of minimizers, as the run-time bounds depend solely on the structural and regularity properties of the cost functional, the underlying Hilbert space, and the geometry of the constraint set. This could pave the way for a new complexity analysis of optimization problems, whereby the “complexity” or “hardness” of a problem does not necessarily depend on its number of optimization variables. In order to demonstrate that these algorithmic ideas and complexity analysis are not of pure theoretical interest only, the practical applicability of branch-and-lift has been illustrated with a numerical case study for a problem of the calculus of variations. The case study of an optimal control problem in [25] provides another illustration.

Acknowledgements

This paper is based upon work supported by the Engineering and Physical Sciences Research Council (EPSRC) under Grant EP/J006572/1, National Natural Science Foundation of China (NSFC) under Grant 61473185, and ShanghaiTech University under Grant F-0203-14-012. Financial support from Marie Curie Career Integration Grant PCIG09-GA-2011-293953 and from the Centre of Process Systems Engineering (CPSE) of Imperial College is gratefully acknowledged. The authors would like to thank Co-Editor Dr. Sven Leyffer for his constructive comments about minimality of assumptions for the convergence of branch-and-lift.

Footnotes

1. We have used the integration formula $\int e^{a\sqrt x}\,\mathrm dx = \frac{2\,e^{a\sqrt x}\,(a\sqrt x-1)}{a^2}+C$ for the integral term in (6).

References

  1. Akhiezer NI. The Classical Moment Problem and Some Related Questions in Analysis. Translated by N. Kemmer. New York: Hafner Publishing Co.; 1965.
  2. Albersmeyer J, Diehl M. The lifted Newton method and its application in optimization. SIAM J. Optim. 2010;20(3):1655–1684.
  3. Anderson EJ, Nash P. Linear Programming in Infinite-Dimensional Spaces. Hoboken: Wiley; 1987.
  4. Bampou D, Kuhn D. Polynomial approximations for continuous linear programs. SIAM J. Optim. 2012;22(2):628–648.
  5. Bendsøe MP, Sigmund O. Topology Optimization: Theory, Methods, and Applications. Berlin: Springer; 2004.
  6. Betts JT. Practical Methods for Optimal Control Using Nonlinear Programming. 2nd ed. Philadelphia: SIAM; 2010.
  7. Biegler LT. Solution of dynamic optimization problems by successive quadratic programming and orthogonal collocation. Comput. Chem. Eng. 1984;8:243–248.
  8. Bock HG, Plitt KJ. A multiple shooting algorithm for direct solution of optimal control problems. In: Proceedings of the 9th IFAC World Congress, Budapest, pp. 243–247. Oxford: Pergamon Press; 1984.
  9. Bompadre A, Mitsos A. Convergence rate of McCormick relaxations. J. Glob. Optim. 2012;52(1):1–28.
  10. Bompadre A, Mitsos A, Chachuat B. Convergence analysis of Taylor and McCormick-Taylor models. J. Glob. Optim. 2013;57(1):75–114.
  11. Boyd S, Vandenberghe L. Convex Optimization. Cambridge: Cambridge University Press; 2004.
  12. Bryson AE, Ho Y. Applied Optimal Control. Washington: Hemisphere; 1975.
  13. Buie R, Abrham J. Numerical solutions to continuous linear programming problems. Z. Oper. Res. 1973;17(3):107–117.
  14. Devolder O, Glineur F, Nesterov Y. Solving infinite-dimensional optimization problems by polynomial approximation. In: Diehl M, Glineur F, Jarlebring E, Michiels W, editors. Recent Advances in Optimization and its Applications in Engineering. Berlin Heidelberg: Springer; 2010. pp. 31–40.
  15. Ditzian Z, Totik V. Moduli of Smoothness. Berlin: Springer; 1987.
  16. Driscoll TA, Hale N, Trefethen LN. Chebfun Guide. Oxford: Pafnuty Publications; 2014.
  17. Floudas CA. Deterministic Global Optimization: Theory, Methods, and Applications. Dordrecht: Kluwer; 1999.
  18. Goemans MX, Williamson DP. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. J. ACM. 1995;42(6):1115–1145.
  19. Gottlieb D, Shu CW. On the Gibbs phenomenon and its resolution. SIAM Rev. 1997;39(4):644–668.
  20. Henrion D, Tarbouriech S, Arzelier D. LMI approximations for the radius of the intersection of ellipsoids: a survey. J. Optim. Theory Appl. 2001;108(1):1–28.
  21. Henrion D, Korda M. Convex computation of the region of attraction of polynomial control systems. IEEE Trans. Autom. Control. 2014;59(2):297–312.
  22. Hinze M, Pinnau R, Ulbrich M, Ulbrich S. Optimization with PDE Constraints. Berlin: Springer; 2009.
  23. Horst R, Tuy H. Global Optimization: Deterministic Approaches. 3rd ed. Berlin: Springer; 1996.
  24. Houska B, Ferreau HJ, Diehl M. ACADO toolkit – an open-source framework for automatic control and dynamic optimization. Optim. Control Appl. Methods. 2011;32:298–312.
  25. Houska B, Chachuat B. Branch-and-lift algorithm for deterministic global optimization in nonlinear optimal control. J. Optim. Theory Appl. 2014;162(1):208–248.
  26. Houska B, Villanueva ME, Chachuat B. Stable set-valued integration of nonlinear dynamic systems using affine set parameterizations. SIAM J. Numer. Anal. 2015;53(5):2307–2328.
  27. Jackson D. The Theory of Approximation. New York: AMS Colloquium Publication; 1930.
  28. Katznelson Y. An Introduction to Harmonic Analysis. 2nd ed. New York: Dover Publications; 1976.
  29. Korda M, Henrion D, Jones CN. Convex computation of the maximum controlled invariant set for polynomial control systems. SIAM J. Control Optim. 2014;52(5):2944–2969.
  30. Lasserre JB. Moments, Positive Polynomials and Their Applications. London: Imperial College Press; 2009.
  31. Lin Y, Stadtherr MA. Validated solutions of initial value problems for parametric ODEs. Appl. Numer. Math. 2007;57(10):1145–1162.
  32. Luo X, Bertsimas D. A new algorithm for state-constrained separated continuous linear programs. SIAM J. Control Optim. 1998;37:177–210.
  33. McCormick GP. Computability of global solutions to factorable nonconvex programs: part I – convex underestimating problems. Math. Program. 1976;10:147–175.
  34. Misener R, Floudas CA. ANTIGONE: algorithms for continuous/integer global optimization of nonlinear equations. J. Glob. Optim. 2014;59(2–3):503–526.
  35. Mitsos A, Chachuat B, Barton PI. McCormick-based relaxations of algorithms. SIAM J. Optim. 2009;20:573–601.
  36. Moore RE. Methods and Applications of Interval Analysis. Philadelphia: SIAM; 1979.
  37. Mordukhovich BS. Variational Analysis and Generalized Differentiation I: Basic Theory. Berlin: Springer; 2006.
  38. Neher M, Jackson KR, Nedialkov NS. On Taylor model based integration of ODEs. SIAM J. Numer. Anal. 2007;45:236–262.
  39. Nemirovski A, Roos C, Terlaky T. On maximization of quadratic form over intersection of ellipsoids with common center. Math. Program. 1999;86(3):463–473.
  40. Nesterov Y, Nemirovskii A. Interior-Point Polynomial Methods in Convex Programming. Philadelphia: SIAM; 1994.
  41. Nesterov Y. Semidefinite relaxation and non-convex quadratic optimization. Optim. Methods Softw. 1997;12:1–20.
  42. Nesterov Y. Squared functional systems and optimization problems. In: Frenk H, Roos K, Terlaky T, Zhang S, editors. High Performance Optimization. Dordrecht: Kluwer Academic Publishers; 2000. pp. 405–440.
  43. Neumaier A. Taylor forms – use and limits. Reliab. Comput. 2002;9(1):43–79.
  44. Neumaier A. Complete search in continuous global optimization and constraint satisfaction. Acta Numer. 2004;13:271–369.
  45. Parrilo PA. Polynomial games and sum of squares optimization. In: Proceedings of the 45th IEEE Conference on Decision & Control, pp. 2855–2860. San Diego (CA); 2006.
  46. Pontryagin LS, Boltyanskii VG, Gamkrelidze RV, Mishchenko EF. The Mathematical Theory of Optimal Processes. New York: Wiley; 1962.
  47. Rajyaguru J, Villanueva ME, Houska B, Chachuat B. Chebyshev model arithmetic for factorable functions. J. Glob. Optim. 2017;68(2):413–438. doi:10.1007/s10898-016-0474-9.
  48. Saff EB, Totik V. Polynomial approximation of piecewise analytic functions. J. Lond. Math. Soc. 1989;39(2):487–498.
  49. Sahinidis NV. BARON: a general purpose global optimization software package. J. Glob. Optim. 1996;8(2):201–205.
  50. Scott JK, Chachuat B, Barton PI. Nonlinear convex and concave relaxations for the solutions of parametric ODEs. Optim. Control Appl. Methods. 2013;34(2):145–163.
  51. von Stryk O, Bulirsch R. Direct and indirect methods for trajectory optimization. Ann. Oper. Res. 1992;37:357–373.
  52. Tawarmalani M, Sahinidis NV. A polyhedral branch-and-cut approach to global optimization. Math. Program. 2005;103(2):225–249.
  53. Villanueva ME, Houska B, Chachuat B. Unified framework for the propagation of continuous-time enclosures for parametric nonlinear ODEs. J. Glob. Optim. 2015;62(3):575–613.
  54. Vinter R. Optimal Control. Berlin: Springer; 2010.
  55. Wang H, Xiang S. On the convergence rates of Legendre approximation. Math. Comput. 2012;81(278):861–877.
