. 2021 Jan 9;188(3):744–769. doi: 10.1007/s10957-020-01805-8

New Results on Superlinear Convergence of Classical Quasi-Newton Methods

Anton Rodomanov, Yurii Nesterov
PMCID: PMC7929971  PMID: 33746292

Abstract

We present a new theoretical analysis of local superlinear convergence of classical quasi-Newton methods from the convex Broyden class. As a result, we obtain a significant improvement in the currently known estimates of the convergence rates for these methods. In particular, we show that the corresponding rate of the Broyden–Fletcher–Goldfarb–Shanno method depends only on the product of the dimensionality of the problem and the logarithm of its condition number.

Keywords: Quasi-Newton methods, Convex Broyden class, DFP, BFGS, Superlinear convergence, Local convergence, Rate of convergence

Introduction

We study local superlinear convergence of classical quasi-Newton methods for smooth unconstrained optimization. These algorithms can be seen as approximations of the standard Newton method, in which the exact Hessian is replaced by some operator that is updated across iterations using the gradients of the objective function. The two most famous examples of quasi-Newton algorithms are the Davidon–Fletcher–Powell (DFP) [1, 2] and the Broyden–Fletcher–Goldfarb–Shanno (BFGS) [3–7] methods, which together belong to the Broyden family [8] of quasi-Newton algorithms. For an introduction to the topic, see [9] and [10, Chapter 6]. See also [11] for a discussion of quasi-Newton algorithms in the context of nonsmooth optimization.

The superlinear convergence of quasi-Newton methods was established as early as the 1970s, first by Powell [12] and Dixon [13, 14] for the methods with exact line search, and then by Broyden, Dennis and Moré [15] and Dennis and Moré [16] for the methods without line search. The latter two approaches have been extended to more general methods under various settings (see, e.g., [17–25]).

However, explicit rates of superlinear convergence for quasi-Newton algorithms were obtained only recently. The first results were presented in [26] for the greedy quasi-Newton methods. After that, in [27], the classical quasi-Newton methods were considered, for which the authors established certain superlinear convergence rates depending on the problem dimension and its condition number. The analysis was based on the trace potential function, which was then augmented by the logarithm of the determinant of the inverse Hessian approximation to extend the proof to the general nonlinear case.

In this paper, we further improve the results of [27]. For the classical quasi-Newton methods, we obtain new convergence rate estimates with a better dependency on the condition number of the problem. In particular, we show that the superlinear convergence rate of BFGS depends on the condition number only through its logarithm. Compared to the previous work, the main difference in the analysis is the choice of the potential function: now the main part is formed by the logarithm of the determinant of the Hessian approximation, which is then augmented by the trace of the inverse Hessian approximation.

It is worth noting that recently, in [28], another analysis of local superlinear convergence of the classical DFP and BFGS methods was presented with a resulting rate that is independent of the dimensionality of the problem and its condition number. However, to obtain such a rate, the authors had to make the additional assumption that the methods start from a sufficiently good initial Hessian approximation. Without this assumption, to our knowledge, their proof technique, based on the Frobenius-norm potential function, leads only to rates that are weaker than those in [27].

This paper is organized as follows. In Sect. 2, we introduce our notation. In Sect. 3, we study the convex Broyden class of quasi-Newton updates for approximating a self-adjoint positive definite operator. In Sect. 4, we analyze the rate of convergence of the classical quasi-Newton methods from the convex Broyden class as applied to minimizing a quadratic function. On this simple example, where the Hessian is constant, we illustrate the main ideas of our analysis. In Sect. 5, we consider the general unconstrained optimization problem. Finally, in Sect. 6, we discuss why the new superlinear convergence rates, obtained in this paper, are better than the previously known ones.

Notation

In what follows, $\mathbb{E}$ denotes an $n$-dimensional real vector space. Its dual space, composed of all linear functionals on $\mathbb{E}$, is denoted by $\mathbb{E}^{*}$. The value of a linear functional $s \in \mathbb{E}^{*}$, evaluated at a point $x \in \mathbb{E}$, is denoted by $\langle s, x \rangle$.

For a smooth function $f : \mathbb{E} \to \mathbb{R}$, we denote by $\nabla f(x)$ and $\nabla^2 f(x)$ its gradient and Hessian, respectively, evaluated at a point $x \in \mathbb{E}$. Note that $\nabla f(x) \in \mathbb{E}^{*}$, and $\nabla^2 f(x)$ is a self-adjoint linear operator from $\mathbb{E}$ to $\mathbb{E}^{*}$.

The partial ordering of self-adjoint linear operators is defined in the standard way. We write $A_1 \preceq A_2$ for $A_1, A_2 : \mathbb{E} \to \mathbb{E}^{*}$ if $\langle (A_2 - A_1)x, x \rangle \ge 0$ for all $x \in \mathbb{E}$, and $H_1 \preceq H_2$ for $H_1, H_2 : \mathbb{E}^{*} \to \mathbb{E}$ if $\langle s, (H_2 - H_1)s \rangle \ge 0$ for all $s \in \mathbb{E}^{*}$.

Any self-adjoint positive definite linear operator $A : \mathbb{E} \to \mathbb{E}^{*}$ induces in the spaces $\mathbb{E}$ and $\mathbb{E}^{*}$ the following pair of conjugate Euclidean norms:

$\|h\|_A := \langle Ah, h \rangle^{1/2},\ h \in \mathbb{E}; \qquad \|s\|_A^{*} := \langle s, A^{-1}s \rangle^{1/2},\ s \in \mathbb{E}^{*}.$ (1)

When $A = \nabla^2 f(x)$, where $f : \mathbb{E} \to \mathbb{R}$ is a smooth function with positive definite Hessian and $x \in \mathbb{E}$, we prefer to use the notation $\|\cdot\|_x$ and $\|\cdot\|_x^{*}$, provided that there is no ambiguity with the reference function $f$.

Sometimes, in formulas involving products of linear operators, it is convenient to treat $x \in \mathbb{E}$ as a linear operator from $\mathbb{R}$ to $\mathbb{E}$, defined by $x\alpha = \alpha x$, and $x^{*}$ as a linear operator from $\mathbb{E}^{*}$ to $\mathbb{R}$, defined by $x^{*}s = \langle s, x \rangle$. Likewise, any $s \in \mathbb{E}^{*}$ can be treated as a linear operator from $\mathbb{R}$ to $\mathbb{E}^{*}$, defined by $s\alpha = \alpha s$, and $s^{*}$ as a linear operator from $\mathbb{E}$ to $\mathbb{R}$, defined by $s^{*}x = \langle s, x \rangle$. In this case, $xx^{*}$ and $ss^{*}$ are rank-one self-adjoint linear operators from $\mathbb{E}^{*}$ to $\mathbb{E}$ and from $\mathbb{E}$ to $\mathbb{E}^{*}$, respectively, acting as follows: $(xx^{*})s = \langle s, x \rangle x$ and $(ss^{*})x = \langle s, x \rangle s$ for $x \in \mathbb{E}$ and $s \in \mathbb{E}^{*}$.

Given two self-adjoint linear operators $A : \mathbb{E} \to \mathbb{E}^{*}$ and $H : \mathbb{E}^{*} \to \mathbb{E}$, we define the trace and the determinant of $A$ with respect to $H$ as follows: $\langle H, A \rangle := \mathrm{Tr}(HA)$ and $\mathrm{Det}(H, A) := \mathrm{Det}(HA)$. Note that $HA$ is a linear operator from $\mathbb{E}$ to itself, and hence its trace and determinant are well defined by its eigenvalues (they coincide with the trace and determinant of the matrix representation of $HA$ with respect to an arbitrarily chosen basis in the space $\mathbb{E}$, and the result is independent of the particular choice of the basis). In particular, if $H$ is positive definite, then $\langle H, A \rangle$ and $\mathrm{Det}(H, A)$ are, respectively, the sum and the product of the eigenvalues of $A$ relative to $H^{-1}$. Observe that $\langle \cdot, \cdot \rangle$ is a bilinear form, and for any $x \in \mathbb{E}$, we have $\langle Ax, x \rangle = \langle xx^{*}, A \rangle$. When $A$ is invertible, we also have $\langle A^{-1}, A \rangle = n$ and $\mathrm{Det}(A^{-1}, \delta A) = \delta^n$ for any $\delta \in \mathbb{R}$. Also recall the following multiplicative formula for the determinant: $\mathrm{Det}(H, A) = \mathrm{Det}(H, G) \cdot \mathrm{Det}(G^{-1}, A)$, which is valid for any invertible linear operator $G : \mathbb{E} \to \mathbb{E}^{*}$. If the operator $H$ is positive semidefinite and $A_1 \preceq A_2$ for some self-adjoint linear operators $A_1, A_2 : \mathbb{E} \to \mathbb{E}^{*}$, then $\langle H, A_1 \rangle \le \langle H, A_2 \rangle$ and $\mathrm{Det}(H, A_1) \le \mathrm{Det}(H, A_2)$. Similarly, if $A$ is positive semidefinite and $H_1 \preceq H_2$ for some self-adjoint linear operators $H_1, H_2 : \mathbb{E}^{*} \to \mathbb{E}$, then $\langle H_1, A \rangle \le \langle H_2, A \rangle$ and $\mathrm{Det}(H_1, A) \le \mathrm{Det}(H_2, A)$.
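The relative trace and determinant are easy to check numerically. The following sketch is our own illustration, not part of the paper: it identifies $\mathbb{E}$ with $\mathbb{R}^n$, so that operators become symmetric matrices, and verifies $\langle A^{-1}, A \rangle = n$, $\mathrm{Det}(A^{-1}, \delta A) = \delta^n$, and the multiplicative formula.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_spd(n):
    """Random symmetric positive definite matrix (matrix stand-in for an operator)."""
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

n = 5
A, G = rand_spd(n), rand_spd(n)
H = np.linalg.inv(G)                     # plays the role of H : E* -> E

rel_trace = np.trace(H @ A)              # <H, A> = Tr(HA)
rel_det = np.linalg.det(H @ A)           # Det(H, A) = Det(HA)

# <A^{-1}, A> = n and Det(A^{-1}, delta A) = delta^n
assert np.isclose(np.trace(np.linalg.inv(A) @ A), n)
delta = 2.0
assert np.isclose(np.linalg.det(np.linalg.inv(A) @ (delta * A)), delta**n)

# multiplicative formula: Det(H, A) = Det(H, G) * Det(G^{-1}, A)
assert np.isclose(rel_det,
                  np.linalg.det(H @ G) * np.linalg.det(np.linalg.inv(G) @ A))
```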

Convex Broyden Class

Let $A$ and $G$ be two self-adjoint positive definite linear operators from $\mathbb{E}$ to $\mathbb{E}^{*}$, where $A$ is the target operator, which we want to approximate, and $G$ is its current approximation. The Broyden class of quasi-Newton updates of $G$ with respect to $A$ along a direction $u \in \mathbb{E} \setminus \{0\}$ is the following family of updating formulas, parameterized by a scalar $\tau \in \mathbb{R}$:

$\mathrm{Broyd}_\tau(A, G, u) = \phi_\tau \Big[ G - \dfrac{Auu^{*}G + Guu^{*}A}{\langle Au, u \rangle} + \Big( \dfrac{\langle Gu, u \rangle}{\langle Au, u \rangle} + 1 \Big) \dfrac{Auu^{*}A}{\langle Au, u \rangle} \Big] + (1 - \phi_\tau) \Big[ G - \dfrac{Guu^{*}G}{\langle Gu, u \rangle} + \dfrac{Auu^{*}A}{\langle Au, u \rangle} \Big],$ (2)

where

$\phi_\tau := \phi_\tau(A, G, u) := \dfrac{\tau \langle Au, u \rangle \langle AG^{-1}Au, u \rangle}{\tau \langle Au, u \rangle \langle AG^{-1}Au, u \rangle + (1 - \tau) \langle Gu, u \rangle \langle Au, u \rangle}.$ (3)

If the denominator in (3) is zero, we leave both $\phi_\tau$ and $\mathrm{Broyd}_\tau(A, G, u)$ undefined. For the sake of convenience, we also set $\mathrm{Broyd}_\tau(A, G, u) = G$ for $u = 0$.

In this paper, we are interested in the convex Broyden class, which is described by the values $\tau \in [0, 1]$. Note that for all such $\tau$ the denominator in (3) is positive for any $u \ne 0$, so both $\phi_\tau$ and $\mathrm{Broyd}_\tau(A, G, u)$ are well defined; moreover, $\phi_\tau \in [0, 1]$. For $\tau = 1$, we have $\phi_\tau = 1$, and (2) becomes the DFP update; for $\tau = 0$, we have $\phi_\tau = 0$, and (2) becomes the BFGS update.
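In matrix form (identifying $\mathbb{E}$ with $\mathbb{R}^n$), the update (2)–(3) can be implemented directly. The sketch below is our own illustration under that identification; a useful sanity check is the secant-type relation $\mathrm{Broyd}_\tau(A, G, u)\,u = Au$, which holds for every $\tau$ in the class.

```python
import numpy as np

def broyd(tau, A, G, u):
    """One update of the convex Broyden class (2)-(3) in matrix form.

    A: target symmetric positive definite matrix, G: current approximation,
    u: nonzero direction; tau = 1 gives DFP, tau = 0 gives BFGS.
    """
    Au, Gu = A @ u, G @ u
    aUU = float(u @ Au)                       # <Au, u>
    gUU = float(u @ Gu)                       # <Gu, u>
    aGa = float(Au @ np.linalg.solve(G, Au))  # <A G^{-1} A u, u>

    # phi_tau, eq. (3): the weight of the DFP component
    phi = tau * aUU * aGa / (tau * aUU * aGa + (1 - tau) * gUU * aUU)

    dfp = (G - (np.outer(Au, Gu) + np.outer(Gu, Au)) / aUU
           + (gUU / aUU + 1) * np.outer(Au, Au) / aUU)
    bfgs = G - np.outer(Gu, Gu) / gUU + np.outer(Au, Au) / aUU
    return phi * dfp + (1 - phi) * bfgs

# sanity check: the update satisfies Broyd_tau(A, G, u) u = A u for any tau
rng = np.random.default_rng(0)
n = 5
P = rng.standard_normal((n, n)); A = P @ P.T + np.eye(n)
Q = rng.standard_normal((n, n)); G = Q @ Q.T + np.eye(n)
u = rng.standard_normal(n)
for tau in (0.0, 0.5, 1.0):
    assert np.allclose(broyd(tau, A, G, u) @ u, A @ u)
```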

Remark 3.1

Usually the Broyden class is defined directly in terms of the parameter ϕ. However, in the context of this paper, it is more convenient to work with τ instead of ϕ. As can be seen from (66), τ is exactly the weight of the DFP component in the updating formula for the inverse operator.

A basic property of an update from the convex Broyden class is that it preserves the bounds on the eigenvalues with respect to the target operator.

Lemma 3.1

(see [27, Lemma 2.1]) If $\frac{1}{\xi}A \preceq G \preceq \eta A$ for some $\xi, \eta \ge 1$, then, for any $u \in \mathbb{E}$ and any $\tau \in [0, 1]$, we have $\frac{1}{\xi}A \preceq \mathrm{Broyd}_\tau(A, G, u) \preceq \eta A$.

Consider the following measure of closeness of $G$ to $A$ along a direction $u \in \mathbb{E} \setminus \{0\}$:

$\nu(A, G, u) := \dfrac{\langle (G - A)G^{-1}(G - A)u, u \rangle^{1/2}}{\langle Au, u \rangle^{1/2}} \overset{(1)}{=} \dfrac{\|(G - A)u\|_G^{*}}{\|u\|_A}.$ (4)

Let us present two potential functions, whose improvement after one update from the convex Broyden class can be bounded from below by a certain nonnegative monotonically increasing function of ν, vanishing at zero.

First, consider the log-det barrier

$V(A, G) = \ln \mathrm{Det}(A^{-1}, G).$ (5)

It will be useful when $A \preceq G$. Note that in this case $V(A, G) \ge 0$.

Lemma 3.2

Let $A, G : \mathbb{E} \to \mathbb{E}^{*}$ be self-adjoint positive definite linear operators such that $A \preceq G \preceq \eta A$ for some $\eta \ge 1$. Then, for any $\tau \in [0, 1]$ and $u \in \mathbb{E} \setminus \{0\}$:

$V(A, G) - V(A, \mathrm{Broyd}_\tau(A, G, u)) \ge \ln\Big( 1 + \Big( \tau \frac{1}{\eta} + 1 - \tau \Big) \nu^2(A, G, u) \Big).$

Proof

Indeed, denoting $G_+ := \mathrm{Broyd}_\tau(A, G, u)$, we obtain

$V(A, G) - V(A, G_+) \overset{(5)}{=} \ln \mathrm{Det}(G_+^{-1}, G) \overset{(67)}{=} \ln\Big( \tau \dfrac{\langle Au, u \rangle}{\langle AG^{-1}Au, u \rangle} + (1 - \tau) \dfrac{\langle Gu, u \rangle}{\langle Au, u \rangle} \Big) = \ln\Big( 1 + \tau \dfrac{\langle A(A^{-1} - G^{-1})Au, u \rangle}{\langle AG^{-1}Au, u \rangle} + (1 - \tau) \dfrac{\langle (G - A)u, u \rangle}{\langle Au, u \rangle} \Big).$ (6)

Since $0 \preceq G - A \preceq (1 - \frac{1}{\eta})G$, we have

$(G - A)G^{-1}(G - A) \preceq \Big( 1 - \frac{1}{\eta} \Big)(G - A) \preceq \frac{1}{1 + \frac{1}{\eta}}(G - A) \preceq G - A.$ (7)

Therefore, denoting $\nu := \nu(A, G, u)$, we can write

$\dfrac{\langle (G - A)u, u \rangle}{\langle Au, u \rangle} \overset{(7)}{\ge} \dfrac{\langle (G - A)G^{-1}(G - A)u, u \rangle}{\langle Au, u \rangle} \overset{(4)}{=} \nu^2,$

and, since $A(A^{-1} - G^{-1})A = G - A - (G - A)G^{-1}(G - A)$,

$\dfrac{\langle A(A^{-1} - G^{-1})Au, u \rangle}{\langle AG^{-1}Au, u \rangle} = \dfrac{\langle (G - A - (G - A)G^{-1}(G - A))u, u \rangle}{\langle AG^{-1}Au, u \rangle} \overset{(7)}{\ge} \frac{1}{\eta}\, \dfrac{\langle (G - A)G^{-1}(G - A)u, u \rangle}{\langle AG^{-1}Au, u \rangle} \ge \frac{1}{\eta}\, \dfrac{\langle (G - A)G^{-1}(G - A)u, u \rangle}{\langle Au, u \rangle} \overset{(4)}{=} \frac{\nu^2}{\eta}.$

Substituting the above two inequalities into (6), we obtain the claim.
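Lemma 3.2 is straightforward to test numerically. The following sketch is our own, in matrix form, for the BFGS case $\tau = 0$, where the guaranteed decrease reads $\ln(1 + \nu^2)$; it constructs a pair with $A \preceq G$ and checks the bound.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
Q = rng.standard_normal((n, n))
A = Q @ Q.T + np.eye(n)        # target operator
G = A + 0.5 * np.eye(n)        # current approximation with A <= G
u = rng.standard_normal(n)

def V(A, G):                   # log-det barrier, eq. (5)
    return np.log(np.linalg.det(np.linalg.solve(A, G)))

def nu2(A, G, u):              # squared closeness measure from eq. (4)
    d = (G - A) @ u
    return (d @ np.linalg.solve(G, d)) / (u @ (A @ u))

Au, Gu = A @ u, G @ u
G_plus = G - np.outer(Gu, Gu) / (u @ Gu) + np.outer(Au, Au) / (u @ Au)  # BFGS

drop = V(A, G) - V(A, G_plus)
bound = np.log(1 + nu2(A, G, u))   # Lemma 3.2 with tau = 0
assert drop >= bound - 1e-12
```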

Now consider another potential function, the augmented log-det barrier:

$\psi(G, A) := \ln \mathrm{Det}(A^{-1}, G) - \langle G^{-1}, G - A \rangle.$ (8)

As compared to the log-det barrier, this potential function is more universal since it works even if the condition $A \preceq G$ is violated. Note that the augmented log-det barrier is in fact the Bregman divergence generated by the strictly convex function $d(A) := -\ln \mathrm{Det}(B^{-1}, A)$, defined on the set of self-adjoint positive definite linear operators from $\mathbb{E}$ to $\mathbb{E}^{*}$, where $B : \mathbb{E} \to \mathbb{E}^{*}$ is an arbitrary fixed self-adjoint positive definite linear operator. Indeed,

$\psi(G, A) = -\ln \mathrm{Det}(B^{-1}, A) + \ln \mathrm{Det}(B^{-1}, G) - \langle -G^{-1}, A - G \rangle = d(A) - d(G) - \langle \nabla d(G), A - G \rangle \ge 0.$ (9)
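As a quick illustration of the Bregman-divergence property (9), here is our own numerical sketch, identifying operators with symmetric positive definite matrices:

```python
import numpy as np

def psi(G, A):
    """Augmented log-det barrier (8) in matrix form."""
    return (np.log(np.linalg.det(np.linalg.solve(A, G)))
            - np.trace(np.linalg.inv(G) @ (G - A)))

rng = np.random.default_rng(2)
n = 5
P = rng.standard_normal((n, n)); A = P @ P.T + np.eye(n)
Q = rng.standard_normal((n, n)); G = Q @ Q.T + np.eye(n)

assert psi(G, A) >= 0          # nonnegativity of the Bregman divergence
assert abs(psi(A, A)) < 1e-9   # and it vanishes when G = A
```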

Remark 3.2

The idea of combining the trace with the logarithm of determinant to form a potential function for the analysis of quasi-Newton methods can be traced back to [29]. Note also that in [27], the authors studied the evolution of ψ(A,G), i.e. the Bregman divergence was centered at A instead of G.

Lemma 3.3

For any real $\alpha \ge \beta > 0$, we have $\alpha + \frac{1}{\beta} - 1 \ge 1$, and

$\alpha - \ln \beta - 1 \ \ge\ \frac{3}{3 + 2\sqrt{3}} \ln\Big( \alpha + \frac{1}{\beta} - 1 \Big) \ \ge\ \frac{6}{13} \ln\Big( \alpha + \frac{1}{\beta} - 1 \Big).$ (10)

Proof

We only need to prove the first inequality in (10) since the second one follows from it and the fact that $\frac{3 + 2\sqrt{3}}{3} = 1 + \frac{2\sqrt{3}}{3} \le 1 + \frac{7}{6} = \frac{13}{6}$ (since $2\sqrt{3} \le \frac{7}{2}$).

Let $\beta > 0$ be fixed, and let $\zeta_1 : (1 - \frac{1}{\beta}, +\infty) \to \mathbb{R}$ be the function defined by $\zeta_1(\alpha) := \alpha - \frac{3}{3 + 2\sqrt{3}} \ln(\alpha + \frac{1}{\beta} - 1)$. Note that the domain of $\zeta_1$ includes the point $\alpha = \beta$ since $\beta \ge 2 - \frac{1}{\beta} > 1 - \frac{1}{\beta}$. Let us show that $\zeta_1$ increases on the interval $[\beta, +\infty)$. Indeed, for any $\alpha \ge \beta$, we have

$\zeta_1'(\alpha) = 1 - \frac{3}{3 + 2\sqrt{3}} \cdot \frac{1}{\alpha + \frac{1}{\beta} - 1} > 1 - \frac{1}{\alpha + \frac{1}{\beta} - 1} = \frac{\alpha + \frac{1}{\beta} - 2}{\alpha + \frac{1}{\beta} - 1} \ge \frac{\beta + \frac{1}{\beta} - 2}{\alpha + \frac{1}{\beta} - 1} \ge 0.$

Thus, it is sufficient to prove (10) only in the case $\alpha = \beta$. Equivalently, we need to show that the function $\zeta_2 : (0, +\infty) \to \mathbb{R}$, defined by the formula $\zeta_2(\alpha) := \alpha - \ln \alpha - 1 - \frac{3}{3 + 2\sqrt{3}} \ln(\alpha + \frac{1}{\alpha} - 1)$, is nonnegative. Differentiating, and using $\frac{3}{3 + 2\sqrt{3}} = 2\sqrt{3} - 3$, we find that, for all $\alpha > 0$,

$\zeta_2'(\alpha) = 1 - \frac{1}{\alpha} - \frac{3}{3 + 2\sqrt{3}} \cdot \frac{1 - \frac{1}{\alpha^2}}{\alpha + \frac{1}{\alpha} - 1} = \Big( 1 - \frac{1}{\alpha} \Big) \Big( 1 - (2\sqrt{3} - 3)\frac{1 + \frac{1}{\alpha}}{\alpha + \frac{1}{\alpha} - 1} \Big) = \Big( 1 - \frac{1}{\alpha} \Big) \frac{\alpha - 2(\sqrt{3} - 1) + (\sqrt{3} - 1)^2\frac{1}{\alpha}}{\alpha + \frac{1}{\alpha} - 1} = \Big( 1 - \frac{1}{\alpha} \Big) \frac{\big( \sqrt{\alpha} - (\sqrt{3} - 1)\frac{1}{\sqrt{\alpha}} \big)^2}{\alpha + \frac{1}{\alpha} - 1}.$

Hence, $\zeta_2'(\alpha) \le 0$ for $0 < \alpha \le 1$, and $\zeta_2'(\alpha) \ge 0$ for $\alpha \ge 1$. Thus, the minimum of $\zeta_2$ is attained at $\alpha = 1$. Consequently, $\zeta_2(\alpha) \ge \zeta_2(1) = 0$ for all $\alpha > 0$.
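The scalar inequality (10) is also easy to verify by brute force; the following check is our own (recall that $\frac{3}{3 + 2\sqrt{3}} = 2\sqrt{3} - 3 \approx 0.464 \ge \frac{6}{13}$):

```python
import numpy as np

c = 3 / (3 + 2 * np.sqrt(3))       # equals 2*sqrt(3) - 3, roughly 0.464

def gap(alpha, beta):
    """Left-hand side of (10) minus its middle term; nonnegative by Lemma 3.3."""
    return (alpha - np.log(beta) - 1) - c * np.log(alpha + 1 / beta - 1)

# brute-force check over a grid with alpha >= beta > 0
for beta in np.linspace(0.05, 5.0, 60):
    for alpha in np.linspace(beta, beta + 5.0, 60):
        assert alpha + 1 / beta - 1 >= 1 - 1e-12
        assert gap(alpha, beta) >= -1e-12
assert c >= 6 / 13
```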

It turns out that, up to absolute constants, the improvement in the augmented log-det barrier can be bounded from below by exactly the same logarithmic function of $\nu$ that was used for the simple log-det barrier.

Lemma 3.4

Let $A, G : \mathbb{E} \to \mathbb{E}^{*}$ be self-adjoint positive definite linear operators such that $\frac{1}{\xi}A \preceq G \preceq \eta A$ for some $\xi, \eta \ge 1$. Then, for any $\tau \in [0, 1]$ and $u \in \mathbb{E} \setminus \{0\}$:

$\psi(G, A) - \psi(\mathrm{Broyd}_\tau(A, G, u), A) \ge \frac{6}{13} \ln\Big( 1 + \Big( \tau \frac{1}{\xi \eta} + 1 - \tau \Big) \nu^2(A, G, u) \Big).$

Proof

Indeed, denoting $G_+ := \mathrm{Broyd}_\tau(A, G, u)$, we obtain

$\langle G^{-1} - G_+^{-1}, A \rangle \overset{(66)}{=} \tau \Big( \dfrac{\langle AG^{-1}AG^{-1}Au, u \rangle}{\langle AG^{-1}Au, u \rangle} - 1 \Big) + (1 - \tau) \Big( \dfrac{\langle AG^{-1}Au, u \rangle}{\langle Au, u \rangle} - 1 \Big),$

and

$\mathrm{Det}(G_+^{-1}, G) \overset{(67)}{=} \tau \dfrac{\langle Au, u \rangle}{\langle AG^{-1}Au, u \rangle} + (1 - \tau) \dfrac{\langle Gu, u \rangle}{\langle Au, u \rangle}.$

Thus,

$\psi(G, A) - \psi(G_+, A) \overset{(8)}{=} \langle G^{-1} - G_+^{-1}, A \rangle + \ln \mathrm{Det}(G_+^{-1}, G) = \tau\alpha_1 + (1 - \tau)\alpha_0 + \ln\big( \tau\beta_1^{-1} + (1 - \tau)\beta_0^{-1} \big) - 1 = \alpha - \ln\beta - 1,$ (11)

where we denote $\alpha_1 := \frac{\langle AG^{-1}AG^{-1}Au, u \rangle}{\langle AG^{-1}Au, u \rangle}$, $\beta_1 := \frac{\langle AG^{-1}Au, u \rangle}{\langle Au, u \rangle}$, $\alpha_0 := \frac{\langle AG^{-1}Au, u \rangle}{\langle Au, u \rangle}$, $\beta_0 := \frac{\langle Au, u \rangle}{\langle Gu, u \rangle}$, $\alpha := \tau\alpha_1 + (1 - \tau)\alpha_0$, $\beta := \big( \tau\beta_1^{-1} + (1 - \tau)\beta_0^{-1} \big)^{-1}$. Note that $\alpha_1 \ge \beta_1$ and $\alpha_0 \ge \beta_0$ by the Cauchy–Schwarz inequality. At the same time, $\alpha \ge \tau\beta_1 + (1 - \tau)\beta_0 \ge \beta$ by the convexity of the inverse function $t \mapsto t^{-1}$. Hence, we can apply Lemma 3.3 to estimate (11) from below. Note that

$\alpha + \frac{1}{\beta} - 1 = \tau \dfrac{\langle (A + AG^{-1}AG^{-1}A)u, u \rangle}{\langle AG^{-1}Au, u \rangle} + (1 - \tau) \dfrac{\langle (G + AG^{-1}A)u, u \rangle}{\langle Au, u \rangle} - 1 = 1 + \tau \dfrac{\langle (G - A)G^{-1}AG^{-1}(G - A)u, u \rangle}{\langle AG^{-1}Au, u \rangle} + (1 - \tau) \dfrac{\langle (G - A)G^{-1}(G - A)u, u \rangle}{\langle Au, u \rangle} \ge 1 + \Big( \tau \frac{1}{\xi\eta} + 1 - \tau \Big) \dfrac{\langle (G - A)G^{-1}(G - A)u, u \rangle}{\langle Au, u \rangle} \overset{(4)}{=} 1 + \Big( \tau \frac{1}{\xi\eta} + 1 - \tau \Big) \nu^2(A, G, u),$

where the inequality follows from $G^{-1}AG^{-1} \succeq \frac{1}{\eta}G^{-1}$ and $\langle AG^{-1}Au, u \rangle \le \xi\langle Au, u \rangle$. It remains to apply Lemma 3.3.

The measure $\nu(A, G, u)$, defined in (4), is the ratio of the norm of $(G - A)u$, measured with respect to $G$, and the norm of $u$, measured with respect to $A$. It is important that we can change the corresponding metrics to $G_+$ and $G$, respectively, paying only a factor controlled by the minimal eigenvalue of $G$ relative to $A$.

Lemma 3.5

Let $A, G : \mathbb{E} \to \mathbb{E}^{*}$ be self-adjoint positive definite linear operators such that $\frac{1}{\xi}A \preceq G$ for some $\xi > 0$. Then, for any $\tau \in [0, 1]$, any $u \in \mathbb{E} \setminus \{0\}$, and $G_+ := \mathrm{Broyd}_\tau(A, G, u)$, we have

$\nu^2(A, G, u) \ge \frac{1}{1 + \xi} \cdot \dfrac{\langle (G - A)G_+^{-1}(G - A)u, u \rangle}{\langle Gu, u \rangle}.$

Proof

From (66), it is easy to see that $G_+^{-1}Au = u$. Hence,

$\dfrac{\langle (G - A)G_+^{-1}(G - A)u, u \rangle}{\langle Gu, u \rangle} = \dfrac{\langle GG_+^{-1}Gu, u \rangle}{\langle Gu, u \rangle} + \dfrac{\langle Au, G_+^{-1}Au \rangle}{\langle Gu, u \rangle} - 2\dfrac{\langle Gu, G_+^{-1}Au \rangle}{\langle Gu, u \rangle} = \dfrac{\langle GG_+^{-1}Gu, u \rangle}{\langle Gu, u \rangle} + \dfrac{\langle Au, u \rangle}{\langle Gu, u \rangle} - 2.$ (12)

Since $1 - t \le \frac{1}{t} - 1$ for all $t > 0$, we further have

$\dfrac{\langle GG_+^{-1}Gu, u \rangle}{\langle Gu, u \rangle} \overset{(66)}{=} \tau \Big( 1 - \dfrac{\langle Au, u \rangle^2}{\langle Gu, u \rangle\langle AG^{-1}Au, u \rangle} + \dfrac{\langle Gu, u \rangle}{\langle Au, u \rangle} \Big) + (1 - \tau) \Big( \Big( \dfrac{\langle AG^{-1}Au, u \rangle}{\langle Au, u \rangle} + 1 \Big)\dfrac{\langle Gu, u \rangle}{\langle Au, u \rangle} - 1 \Big) \le \Big( \dfrac{\langle AG^{-1}Au, u \rangle}{\langle Au, u \rangle} + 1 \Big)\dfrac{\langle Gu, u \rangle}{\langle Au, u \rangle} - 1.$ (13)

Denote $\nu := \nu(A, G, u)$. Then,

$\nu^2 \overset{(4)}{=} \dfrac{\langle (G - A)G^{-1}(G - A)u, u \rangle}{\langle Au, u \rangle} = \dfrac{\langle Gu, u \rangle}{\langle Au, u \rangle} + \dfrac{\langle AG^{-1}Au, u \rangle}{\langle Au, u \rangle} - 2.$ (14)

Consequently, using $AG^{-1}A \preceq \xi A$,

$(1 + \xi)\nu^2 \ge \Big( \dfrac{\langle AG^{-1}Au, u \rangle}{\langle Au, u \rangle} + 1 \Big)\nu^2 \overset{(14)}{=} \Big( \dfrac{\langle AG^{-1}Au, u \rangle}{\langle Au, u \rangle} + 1 \Big)\dfrac{\langle Gu, u \rangle}{\langle Au, u \rangle} + \dfrac{\langle AG^{-1}Au, u \rangle^2}{\langle Au, u \rangle^2} - \dfrac{\langle AG^{-1}Au, u \rangle}{\langle Au, u \rangle} - 2 \overset{(13)}{\ge} \dfrac{\langle GG_+^{-1}Gu, u \rangle}{\langle Gu, u \rangle} + \dfrac{\langle AG^{-1}Au, u \rangle^2}{\langle Au, u \rangle^2} - \dfrac{\langle AG^{-1}Au, u \rangle}{\langle Au, u \rangle} - 1.$ (15)

Thus,

$(1 + \xi)\nu^2 - \dfrac{\langle (G - A)G_+^{-1}(G - A)u, u \rangle}{\langle Gu, u \rangle} \overset{(12)}{=} (1 + \xi)\nu^2 - \dfrac{\langle GG_+^{-1}Gu, u \rangle}{\langle Gu, u \rangle} - \dfrac{\langle Au, u \rangle}{\langle Gu, u \rangle} + 2 \overset{(15)}{\ge} \dfrac{\langle AG^{-1}Au, u \rangle^2}{\langle Au, u \rangle^2} - \dfrac{\langle AG^{-1}Au, u \rangle}{\langle Au, u \rangle} - \dfrac{\langle Au, u \rangle}{\langle Gu, u \rangle} + 1 \ge \dfrac{\langle AG^{-1}Au, u \rangle^2}{\langle Au, u \rangle^2} - 2\dfrac{\langle AG^{-1}Au, u \rangle}{\langle Au, u \rangle} + 1 \ge 0,$

where we have used the Cauchy–Schwarz inequality $\langle Au, u \rangle^2 \le \langle AG^{-1}Au, u \rangle\langle Gu, u \rangle$, i.e., $\frac{\langle Au, u \rangle}{\langle Gu, u \rangle} \le \frac{\langle AG^{-1}Au, u \rangle}{\langle Au, u \rangle}$.

Unconstrained Quadratic Minimization

Let us study the convergence properties of the classical quasi-Newton methods from the convex Broyden class, as applied to minimizing the quadratic function

$f(x) := \frac{1}{2}\langle Ax, x \rangle - \langle b, x \rangle,$ (16)

where $A : \mathbb{E} \to \mathbb{E}^{*}$ is a self-adjoint positive definite linear operator, and $b \in \mathbb{E}^{*}$.

Let B:EE be a fixed self-adjoint positive definite linear operator, and let μ,L>0 be such that

$\mu B \preceq A \preceq LB.$ (17)

Thus, μ is the strong convexity parameter of f, and L is the constant of Lipschitz continuity of the gradient of f, both measured relative to B.

Consider the following standard quasi-Newton process for minimizing (16) (the original displays it as a figure; the scheme below is recovered from the surrounding proofs):

Initialization: choose $x_0 \in \mathbb{E}$ and set $G_0 = LB$. For $k \ge 0$, iterate: $x_{k+1} = x_k - G_k^{-1}\nabla f(x_k)$, $u_k = x_{k+1} - x_k$, and $G_{k+1} = \mathrm{Broyd}_{\tau_k}(A, G_k, u_k)$ for some $\tau_k \in [0, 1]$. (18)

For measuring its rate of convergence, we use the norm of the gradient, taken with respect to the Hessian:

$\lambda_k := \|\nabla f(x_k)\|_A^{*} \overset{(1)}{=} \langle \nabla f(x_k), A^{-1}\nabla f(x_k) \rangle^{1/2}.$

It is known that process (18) converges at least with the linear rate of the standard gradient method:

Theorem 4.1

(see [27, Theorem 3.1]) In scheme (18), for all $k \ge 0$:

$A \preceq G_k \preceq \frac{L}{\mu}A, \qquad \lambda_k \le \Big( 1 - \frac{\mu}{L} \Big)^k \lambda_0.$ (19)

Let us establish the superlinear convergence. According to (19), for the quadratic function, we have AGk for all k0. Therefore, in our analysis, we can use both potential functions: the log-det barrier and the augmented log-det barrier. Let us consider both options. We start with the first one.

Theorem 4.2

In scheme (18), for all $k \ge 1$, we have

$\lambda_k \le \Bigg[ \frac{2}{\prod_{i=0}^{k-1}\big( \tau_i\frac{\mu}{L} + 1 - \tau_i \big)^{1/k}}\Big( e^{\frac{n}{k}\ln\frac{L}{\mu}} - 1 \Big) \Bigg]^{k/2} \sqrt{\frac{L}{\mu}}\;\lambda_0.$ (20)

Proof

Without loss of generality, we can assume that $u_i \ne 0$ for all $0 \le i \le k$. Denote $V_i := V(A, G_i)$, $\nu_i := \nu(A, G_i, u_i)$, $p_i := \tau_i\frac{\mu}{L} + 1 - \tau_i$, $g_i := \|\nabla f(x_i)\|_{G_i}^{*}$ for $0 \le i \le k$. By Lemma 3.2 and (19), for all $0 \le i \le k - 1$, we have $\ln(1 + p_i\nu_i^2) \le V_i - V_{i+1}$. Summing up, we obtain

$\sum_{i=0}^{k-1}\ln(1 + p_i\nu_i^2) \le V_0 - V_k \overset{(19)}{\le} V_0 \overset{(18)}{=} V(A, LB) \overset{(5)}{=} \ln\mathrm{Det}(A^{-1}, LB) \overset{(17)}{\le} \ln\mathrm{Det}\Big( \frac{1}{\mu}B^{-1}, LB \Big) = n\ln\frac{L}{\mu}.$ (21)

Hence, by the convexity of the function $t \mapsto \ln(1 + e^t)$, we get

$\frac{n}{k}\ln\frac{L}{\mu} \overset{(21)}{\ge} \frac{1}{k}\sum_{i=0}^{k-1}\ln(1 + p_i\nu_i^2) = \frac{1}{k}\sum_{i=0}^{k-1}\ln\big( 1 + e^{\ln(p_i\nu_i^2)} \big) \ge \ln\Big( 1 + e^{\frac{1}{k}\sum_{i=0}^{k-1}\ln(p_i\nu_i^2)} \Big) = \ln\Bigg( 1 + \Big( \prod_{i=0}^{k-1}p_i\nu_i^2 \Big)^{1/k} \Bigg).$ (22)

But, for all $0 \le i \le k - 1$, we have $\nu_i^2 \ge \frac{1}{2}\frac{\langle (G_i - A)G_{i+1}^{-1}(G_i - A)u_i, u_i \rangle}{\langle G_iu_i, u_i \rangle} = \frac{1}{2}\frac{g_{i+1}^2}{g_i^2}$ by Lemma 3.5, (19), and since $G_iu_i = -\nabla f(x_i)$ and $Au_i = \nabla f(x_{i+1}) - \nabla f(x_i)$. Hence, $\prod_{i=0}^{k-1}\nu_i^2 \ge \frac{1}{2^k}\frac{g_k^2}{g_0^2}$, and so $\frac{n}{k}\ln\frac{L}{\mu} \overset{(22)}{\ge} \ln\Big( 1 + \frac{1}{2}\big( \prod_{i=0}^{k-1}p_i \big)^{1/k}\big( \frac{g_k}{g_0} \big)^{2/k} \Big)$. Rearranging, we obtain $g_k \le \Big[ \frac{2}{(\prod_{i=0}^{k-1}p_i)^{1/k}}\big( e^{\frac{n}{k}\ln\frac{L}{\mu}} - 1 \big) \Big]^{k/2}g_0$. It remains to note that $\lambda_k \le \sqrt{\frac{L}{\mu}}\,g_k$ and $g_0 \le \lambda_0$ in view of (19).

Remark 4.1

As can be seen from (21), the factor $n\ln\frac{L}{\mu}$ in (20) can be improved to $\ln\mathrm{Det}(A^{-1}, LB) = \sum_{i=1}^{n}\ln\frac{L}{\lambda_i}$, where $\lambda_1, \ldots, \lambda_n$ are the eigenvalues of $A$ relative to $B$. This improved factor can be significantly smaller than the original one if the majority of the eigenvalues $\lambda_i$ are much larger than $\mu$.
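The behavior predicted by Theorems 4.1 and 4.2 is easy to observe experimentally. The following sketch is our own illustration with $B = I$ and $\tau_k \equiv 0$ (i.e., BFGS): it runs scheme (18) on a random quadratic and checks that the gradient norms $\lambda_k$ collapse well before the iteration cap.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)     # Hessian of the quadratic (16), with B = I
b = rng.standard_normal(n)
L = np.linalg.eigvalsh(A).max() # Lipschitz constant of the gradient

x = rng.standard_normal(n)
G = L * np.eye(n)               # G_0 = L B, as in (18)
lam = []
for k in range(200):
    grad = A @ x - b
    lam.append(np.sqrt(grad @ np.linalg.solve(A, grad)))  # lambda_k
    if lam[-1] < 1e-9 * lam[0]:
        break
    u = -np.linalg.solve(G, grad)        # x_{k+1} = x_k - G_k^{-1} grad f(x_k)
    x = x + u
    Au, Gu = A @ u, G @ u
    # BFGS update (tau_k = 0 in the convex Broyden class)
    G = G - np.outer(Gu, Gu) / (u @ Gu) + np.outer(Au, Au) / (u @ Au)

assert lam[-1] < 1e-9 * lam[0]
```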

Let us briefly present another approach, based on the augmented log-det barrier. The resulting efficiency estimate will be the same as in Theorem 4.2 up to a slightly worse absolute constant under the exponent. However, this proof can be extended to general nonlinear functions.

Theorem 4.3

In scheme (18), for all $k \ge 1$, we have

$\lambda_k \le \Bigg[ \frac{2}{\prod_{i=0}^{k-1}\big( \tau_i\frac{\mu}{L} + 1 - \tau_i \big)^{1/k}}\Big( e^{\frac{13}{6}\frac{n}{k}\ln\frac{L}{\mu}} - 1 \Big) \Bigg]^{k/2} \sqrt{\frac{L}{\mu}}\;\lambda_0.$

Proof

Without loss of generality, we can assume that $u_i \ne 0$ for all $0 \le i \le k$. Denote $\psi_i := \psi(G_i, A)$, $\nu_i := \nu(A, G_i, u_i)$, $p_i := \tau_i\frac{\mu}{L} + 1 - \tau_i$, $g_i := \|\nabla f(x_i)\|_{G_i}^{*}$ for all $0 \le i \le k$. By Lemma 3.4 and (19), for all $0 \le i \le k - 1$, we have $\frac{6}{13}\ln(1 + p_i\nu_i^2) \le \psi_i - \psi_{i+1}$. Hence,

$\frac{6}{13}\sum_{i=0}^{k-1}\ln(1 + p_i\nu_i^2) \le \psi_0 - \psi_k \overset{(9)}{\le} \psi_0 \overset{(18)}{=} \psi(LB, A) \overset{(8)}{=} \ln\mathrm{Det}(A^{-1}, LB) - \frac{1}{L}\langle B^{-1}, LB - A \rangle \overset{(17)}{\le} n\ln\frac{L}{\mu},$ (23)

and we can continue exactly as in the proof of Theorem 4.2.

Minimization of General Functions

In this section, we consider the general unconstrained minimization problem:

$\min_{x \in \mathbb{E}} f(x),$ (24)

where $f : \mathbb{E} \to \mathbb{R}$ is a twice continuously differentiable function with positive definite second derivative. Our goal is to study the convergence properties of the following standard quasi-Newton scheme for solving (24) (the original displays it as a figure; the scheme below is recovered from the surrounding proofs):

Initialization: choose $x_0 \in \mathbb{E}$ and set $G_0 = LB$. For $k \ge 0$, iterate: $x_{k+1} = x_k - G_k^{-1}\nabla f(x_k)$, $u_k = x_{k+1} - x_k$, $J_k := \int_0^1 \nabla^2 f(x_k + tu_k)\,dt$, and $G_{k+1} = \mathrm{Broyd}_{\tau_k}(J_k, G_k, u_k)$ for some $\tau_k \in [0, 1]$. (25)

Here, $B : \mathbb{E} \to \mathbb{E}^{*}$ is a self-adjoint positive definite linear operator, and $L$ is a positive constant, which together define the initial Hessian approximation $G_0$.

We assume that there exist constants $\mu > 0$ and $M \ge 0$ such that

$\mu B \preceq \nabla^2 f(x) \preceq LB,$ (26)
$\nabla^2 f(y) - \nabla^2 f(x) \preceq M\|y - x\|_z \nabla^2 f(w)$ (27)

for all $x, y, z, w \in \mathbb{E}$. The first assumption (26) specifies that, relative to the operator $B$, the objective function $f$ is $\mu$-strongly convex and its gradient is $L$-Lipschitz continuous. The second assumption (27) means that $f$ is $M$-strongly self-concordant. This assumption was recently introduced in [26] as a convenient affine-invariant alternative to the standard assumption of Lipschitz continuity of the second derivative, and it is satisfied at least by any strongly convex function with Lipschitz continuous Hessian (see [26, Example 4.1]). The main facts which we use about strongly self-concordant functions are summarized in the following lemma (see [26, Lemma 4.1]):

Lemma 5.1

For any $x, y \in \mathbb{E}$, let $J := \int_0^1 \nabla^2 f(x + t(y - x))\,dt$ and $r := \|y - x\|_x$. Then:

$\Big( 1 + \frac{Mr}{2} \Big)^{-1}\nabla^2 f(x) \preceq J \preceq \Big( 1 + \frac{Mr}{2} \Big)\nabla^2 f(x),$ (28)
$\Big( 1 + \frac{Mr}{2} \Big)^{-1}\nabla^2 f(y) \preceq J \preceq \Big( 1 + \frac{Mr}{2} \Big)\nabla^2 f(y).$ (29)

Note that for a quadratic function, we have M=0.

For measuring the convergence rate of (25), we use the local gradient norm:

$\lambda_k := \|\nabla f(x_k)\|_{x_k}^{*} \overset{(1)}{=} \langle \nabla f(x_k), \nabla^2 f(x_k)^{-1}\nabla f(x_k) \rangle^{1/2}.$ (30)
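To make scheme (25) concrete, here is a minimal numerical sketch of our own. It uses a simple separable strongly convex stand-in $f(x) = \sum_i (e^{x_i} - x_i)$, with minimizer $x^* = 0$ (this test function is our choice, not from the paper), and the BFGS member $\tau_k \equiv 0$, implemented through gradient differences $\nabla f(x_{k+1}) - \nabla f(x_k) = J_k u_k$:

```python
import numpy as np

# Toy smooth strongly convex objective: f(x) = sum_i (exp(x_i) - x_i),
# with minimizer x* = 0 (our own stand-in for a general nonlinear f).
def grad(x):
    return np.exp(x) - 1.0

def hess(x):
    return np.diag(np.exp(x))

n = 8
L_loc = np.e                 # Hessian bound on the region where iterates stay
x = 0.3 * np.ones(n)         # starting point close to the minimizer
G = L_loc * np.eye(n)        # G_0 = L * B with B = I

lam = []
for k in range(60):
    g = grad(x)
    lam.append(np.sqrt(g @ np.linalg.solve(hess(x), g)))  # local norm (30)
    if lam[-1] < 1e-12:
        break
    u = -np.linalg.solve(G, g)      # x_{k+1} = x_k - G_k^{-1} grad f(x_k)
    x_new = x + u
    y = grad(x_new) - g             # equals J_k u_k for the integral Hessian J_k
    Gu = G @ u
    # BFGS update (tau_k = 0): only J_k u_k = y and <J_k u_k, u_k> = <y, u> are needed
    G = G - np.outer(Gu, Gu) / (u @ Gu) + np.outer(y, y) / (y @ u)
    x = x_new

assert lam[-1] < 1e-10 * lam[0]
```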

The local convergence analysis of scheme (25) is, in general, the same as the corresponding analysis in the quadratic case. However, it is much more technical because, in the nonlinear case, the Hessian is no longer constant. This causes a few problems.

First, there are now several different ways to treat the Hessian approximation $G_k$. One can view it as an approximation of the Hessian $\nabla^2 f(x_k)$ at the current iterate $x_k$, of the Hessian $\nabla^2 f(x^{*})$ at the minimizer $x^{*}$, of the integral Hessian $J_k$, etc. Of course, locally, due to strong self-concordance, all these variants are equivalent since the corresponding Hessians are close to each other. Nevertheless, from the viewpoint of technical simplicity of the analysis, some options are slightly more preferable than others. We find it most convenient to always think of $G_k$ as an approximation of the integral Hessian $J_k$.

The second issue is as follows. Suppose we already know the connection between our current Hessian approximation $G_k$ and the actual integral Hessian $J_k$, e.g., in terms of the relative eigenvalues and the value of the augmented log-det barrier potential function (8). Naturally, we want to know how these quantities change after we update $G_k$ into $G_{k+1}$ in scheme (25). For this, we apply Lemma 3.1 and Lemma 3.4, respectively. However, the problem is that both of these lemmas provide us only with information on the connection between the update result $G_{k+1}$ and the current integral Hessian $J_k$ (which was used for performing the update), not the next one $J_{k+1}$. Therefore, we need to additionally take into account the errors resulting from approximating $J_{k+1}$ by $J_k$.

For estimating the errors which accumulate as a result of approximating one Hessian by another, it is convenient to introduce the following quantities:

$r_k := \|u_k\|_{x_k}, \qquad \xi_k := e^{M\sum_{i=0}^{k-1}r_i}\ (\ge 1), \qquad k \ge 0.$ (31)

Remark 5.1

The general framework of our analysis is the same as in the previous paper [27]. The main difference is that now another potential function is used for establishing the rate of superlinear convergence (Lemma 5.4). However, in order to properly incorporate the new potential function into the analysis, many parts of the proof had to be appropriately modified, most notably the part related to estimating the region of local convergence. In any case, the analysis presented below is fully self-contained and does not require the reader to first go through [27].

We analyze method (25) in several steps. The first step is to establish bounds on the relative eigenvalues of the Hessian approximations with respect to the corresponding Hessians.

Lemma 5.2

For all k0, we have

$\frac{1}{\xi_k}\nabla^2 f(x_k) \preceq G_k \preceq \xi_k\frac{L}{\mu}\nabla^2 f(x_k),$ (32)
$\frac{1}{\xi_{k+1}}J_k \preceq G_k \preceq \xi_{k+1}\frac{L}{\mu}J_k.$ (33)

Proof

For $k = 0$, (32) follows from (26) and the fact that $G_0 = LB$ and $\xi_0 = 1$. Now suppose that $k \ge 0$ and that (32) has already been proved for all indices up to $k$. Then, applying Lemma 5.1 to (32), we obtain

$\frac{1}{\xi_k(1 + \frac{Mr_k}{2})}J_k \preceq G_k \preceq \Big( 1 + \frac{Mr_k}{2} \Big)\xi_k\frac{L}{\mu}J_k.$ (34)

Since $(1 + \frac{Mr_k}{2})\xi_k \le \xi_{k+1}$ by (31), this proves (33) for the index $k$. Applying Lemma 3.1 to (34), we get $\frac{1}{\xi_k(1 + \frac{Mr_k}{2})}J_k \preceq G_{k+1} \preceq (1 + \frac{Mr_k}{2})\xi_k\frac{L}{\mu}J_k$, and so

$G_{k+1} \overset{(29)}{\preceq} \Big( 1 + \frac{Mr_k}{2} \Big)^2\xi_k\frac{L}{\mu}\nabla^2 f(x_{k+1}) \overset{(31)}{\preceq} \xi_{k+1}\frac{L}{\mu}\nabla^2 f(x_{k+1}), \qquad G_{k+1} \overset{(29)}{\succeq} \frac{1}{(1 + \frac{Mr_k}{2})^2\xi_k}\nabla^2 f(x_{k+1}) \overset{(31)}{\succeq} \frac{1}{\xi_{k+1}}\nabla^2 f(x_{k+1}).$

This proves (32) for the index k+1, and we can continue by induction.

Corollary 5.1

For all k0, we have

$r_k \le \xi_k\lambda_k.$ (35)

Proof

Indeed,

$r_k \overset{(31)}{=} \|u_k\|_{x_k} \overset{(25)}{=} \langle \nabla f(x_k), G_k^{-1}\nabla^2 f(x_k)G_k^{-1}\nabla f(x_k) \rangle^{1/2} \overset{(32)}{\le} \xi_k\langle \nabla f(x_k), \nabla^2 f(x_k)^{-1}\nabla f(x_k) \rangle^{1/2} \overset{(30)}{=} \xi_k\lambda_k.$

The second step of our analysis is to establish a preliminary version of the linear convergence theorem for scheme (25).

Lemma 5.3

For all k0, we have

$\lambda_k \le \sqrt{\xi_k}\,\lambda_0\prod_{i=0}^{k-1}q_i,$ (36)

where

$q_i := \max\Big\{ 1 - \frac{\mu}{\xi_{i+1}L},\ \xi_{i+1} - 1 \Big\}.$ (37)

Proof

Let $k, i \ge 0$ be arbitrary. By Taylor's formula, we have

$\nabla f(x_{i+1}) \overset{(25)}{=} \nabla f(x_i) + J_iu_i \overset{(25)}{=} J_i(J_i^{-1} - G_i^{-1})\nabla f(x_i).$ (38)

Hence,

$\lambda_{i+1} \overset{(30)}{=} \langle \nabla f(x_{i+1}), \nabla^2 f(x_{i+1})^{-1}\nabla f(x_{i+1}) \rangle^{1/2} \overset{(29)}{\le} \sqrt{1 + \tfrac{Mr_i}{2}}\,\langle \nabla f(x_{i+1}), J_i^{-1}\nabla f(x_{i+1}) \rangle^{1/2} \overset{(38)}{=} \sqrt{1 + \tfrac{Mr_i}{2}}\,\langle \nabla f(x_i), (J_i^{-1} - G_i^{-1})J_i(J_i^{-1} - G_i^{-1})\nabla f(x_i) \rangle^{1/2}.$ (39)

Note that $-(\xi_{i+1} - 1)J_i^{-1} \overset{(33)}{\preceq} J_i^{-1} - G_i^{-1} \overset{(33)}{\preceq} \big( 1 - \frac{\mu}{\xi_{i+1}L} \big)J_i^{-1}$. Therefore,

$(J_i^{-1} - G_i^{-1})J_i(J_i^{-1} - G_i^{-1}) \overset{(37)}{\preceq} q_i^2J_i^{-1} \overset{(28)}{\preceq} q_i^2\Big( 1 + \frac{Mr_i}{2} \Big)\nabla^2 f(x_i)^{-1}.$

Thus, $\lambda_{i+1} \le (1 + \frac{Mr_i}{2})q_i\lambda_i$ in view of (39) and (30). Consequently,

$\lambda_k \le \lambda_0\prod_{i=0}^{k-1}\Big( 1 + \frac{Mr_i}{2} \Big)q_i \le \lambda_0\prod_{i=0}^{k-1}e^{\frac{Mr_i}{2}}q_i \overset{(31)}{=} \sqrt{\xi_k}\,\lambda_0\prod_{i=0}^{k-1}q_i.$

Next, we establish a preliminary version of the theorem on the superlinear convergence of scheme (25). The proof uses the augmented log-det barrier potential function and is essentially a generalization of the corresponding proof of Theorem 4.3.

Lemma 5.4

For all $k \ge 1$, we have

$\lambda_k \le \Bigg[ \frac{1 + \xi_k}{\prod_{i=0}^{k-1}\big( \tau_i\frac{\mu}{\xi_{i+1}^2L} + 1 - \tau_i \big)^{1/k}}\Big( e^{\frac{13}{6}\frac{n}{k}\ln(\xi_{k+1}^{\xi_{k+1}}\frac{L}{\mu})} - 1 \Big) \Bigg]^{k/2}\sqrt{\xi_k\frac{L}{\mu}}\;\lambda_0.$ (40)

Proof

Without loss of generality, assume that $u_i \ne 0$ for all $0 \le i \le k$. Denote $\psi_i := \psi(G_i, J_i)$, $\tilde{\psi}_{i+1} := \psi(G_{i+1}, J_i)$, $\nu_i := \nu(J_i, G_i, u_i)$, $p_i := \tau_i\frac{\mu}{\xi_{i+1}^2L} + 1 - \tau_i$, and $g_i := \|\nabla f(x_i)\|_{G_i}^{*}$ for any $0 \le i \le k$.

Let $0 \le i \le k - 1$ be arbitrary. By Lemma 3.4 and (33), we have

$\frac{6}{13}\ln(1 + p_i\nu_i^2) \le \psi_i - \tilde{\psi}_{i+1} = \psi_i - \psi_{i+1} + \Delta_i,$ (41)

where

$\Delta_i := \psi_{i+1} - \tilde{\psi}_{i+1} \overset{(8)}{=} \langle G_{i+1}^{-1}, J_{i+1} - J_i \rangle + \ln\mathrm{Det}(J_{i+1}^{-1}, J_i).$ (42)

Note that $J_i \succeq (1 + \frac{Mr_i}{2})^{-1}\nabla^2 f(x_{i+1}) \succeq (1 + \frac{Mr_i}{2})^{-1}(1 + \frac{Mr_{i+1}}{2})^{-1}J_{i+1}$ in view of (29) and (28). In particular, $J_i \succeq e^{-\frac{M}{2}(r_i + r_{i+1})}J_{i+1} \succeq \big( 1 - \frac{M}{2}(r_i + r_{i+1}) \big)J_{i+1}$. Therefore, $J_{i+1} - J_i \preceq \frac{M}{2}(r_i + r_{i+1})J_{i+1}$, and so

$\sum_{i=0}^{k-1}\langle G_{i+1}^{-1}, J_{i+1} - J_i \rangle \le \frac{M}{2}\sum_{i=0}^{k-1}(r_i + r_{i+1})\langle G_{i+1}^{-1}, J_{i+1} \rangle \overset{(33)}{\le} \frac{nM}{2}\sum_{i=0}^{k-1}\xi_{i+2}(r_i + r_{i+1}) \overset{(31)}{\le} \frac{n\xi_{k+1}M}{2}\sum_{i=0}^{k-1}(r_i + r_{i+1}) \le n\xi_{k+1}M\sum_{i=0}^{k}r_i \overset{(31)}{=} n\xi_{k+1}\ln\xi_{k+1}.$

Consequently,

$\sum_{i=0}^{k-1}\Delta_i \overset{(42)}{\le} n\xi_{k+1}\ln\xi_{k+1} + \ln\mathrm{Det}(J_k^{-1}, J_0).$ (43)

Summing up (41), we thus obtain

$\frac{6}{13}\sum_{i=0}^{k-1}\ln(1 + p_i\nu_i^2) \le \psi_0 - \psi_k + \sum_{i=0}^{k-1}\Delta_i \overset{(9)}{\le} \psi_0 + \sum_{i=0}^{k-1}\Delta_i \overset{(8)}{=} \ln\mathrm{Det}(J_0^{-1}, LB) - \frac{1}{L}\langle B^{-1}, LB - J_0 \rangle + \sum_{i=0}^{k-1}\Delta_i \overset{(43)}{\le} \ln\mathrm{Det}(J_k^{-1}, LB) - \frac{1}{L}\langle B^{-1}, LB - J_0 \rangle + n\xi_{k+1}\ln\xi_{k+1} \overset{(26)}{\le} n\ln\frac{L}{\mu} + n\xi_{k+1}\ln\xi_{k+1} = n\ln\Big( \xi_{k+1}^{\xi_{k+1}}\frac{L}{\mu} \Big).$

By the convexity of the function $t \mapsto \ln(1 + e^t)$, it follows that

$\frac{13}{6}\frac{n}{k}\ln\Big( \xi_{k+1}^{\xi_{k+1}}\frac{L}{\mu} \Big) \ge \frac{1}{k}\sum_{i=0}^{k-1}\ln(1 + p_i\nu_i^2) = \frac{1}{k}\sum_{i=0}^{k-1}\ln\big( 1 + e^{\ln(p_i\nu_i^2)} \big) \ge \ln\Big( 1 + e^{\frac{1}{k}\sum_{i=0}^{k-1}\ln(p_i\nu_i^2)} \Big) = \ln\Bigg( 1 + \Big( \prod_{i=0}^{k-1}p_i\nu_i^2 \Big)^{1/k} \Bigg).$ (44)

At the same time, $\nu_i^2 \ge \frac{1}{1 + \xi_{i+1}}\frac{\langle (G_i - J_i)G_{i+1}^{-1}(G_i - J_i)u_i, u_i \rangle}{\langle G_iu_i, u_i \rangle} = \frac{1}{1 + \xi_{i+1}}\frac{g_{i+1}^2}{g_i^2}$ in view of Lemma 3.5, (33), and since $G_iu_i = -\nabla f(x_i)$ and $J_iu_i = \nabla f(x_{i+1}) - \nabla f(x_i)$. Hence, we can write $\prod_{i=0}^{k-1}\nu_i^2 \ge \frac{g_k^2}{g_0^2}\prod_{i=0}^{k-1}\frac{1}{1 + \xi_{i+1}} \overset{(31)}{\ge} \frac{1}{(1 + \xi_k)^k}\frac{g_k^2}{g_0^2}$. Consequently, $\frac{13}{6}\frac{n}{k}\ln\big( \xi_{k+1}^{\xi_{k+1}}\frac{L}{\mu} \big) \overset{(44)}{\ge} \ln\Big( 1 + \frac{(\prod_{i=0}^{k-1}p_i)^{1/k}}{1 + \xi_k}\big( \frac{g_k}{g_0} \big)^{2/k} \Big)$. Rearranging, we obtain $g_k \le \Big[ \frac{1 + \xi_k}{(\prod_{i=0}^{k-1}p_i)^{1/k}}\big( e^{\frac{13}{6}\frac{n}{k}\ln(\xi_{k+1}^{\xi_{k+1}}\frac{L}{\mu})} - 1 \big) \Big]^{k/2}g_0$. But $\lambda_k \le \sqrt{\xi_k\frac{L}{\mu}}\,g_k$ by (32), and $g_0 \le \lambda_0$ in view of (26) and the fact that $G_0 = LB$.

In the quadratic case ($M = 0$), we have $\xi_k \equiv 1$ (see (31)); Lemmas 5.2 and 5.3 reduce to the already known Theorem 4.1, and Lemma 5.4 reduces to the already known Theorem 4.3. In the general case, the quantities $\xi_k$ can grow with iterations. However, as we will see in a moment, by requiring the initial point $x_0$ in scheme (25) to be sufficiently close to the solution, we can still ensure that the $\xi_k$ stay uniformly bounded by a sufficiently small absolute constant. This allows us to recover all the main results of the quadratic case.

To write down the region of local convergence of (25), we need to introduce one more quantity, related to the starting moment of the superlinear convergence:

$K_0 := \frac{8n\ln\frac{2L}{\mu}}{\tau^{*}\frac{4\mu}{9L} + 1 - \tau^{*}}, \qquad \tau^{*} := \sup_{k \ge 0}\tau_k\ (\le 1).$ (45)

For DFP ($\tau_k \equiv 1$) and BFGS ($\tau_k \equiv 0$), we have, respectively,

$K_0^{\mathrm{DFP}} = 18n\frac{L}{\mu}\ln\frac{2L}{\mu}, \qquad K_0^{\mathrm{BFGS}} = 8n\ln\frac{2L}{\mu}.$ (46)
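For concreteness, here is our own numerical illustration of (45)–(46):

```python
import numpy as np

def K0(tau_star, n, L, mu):
    """Starting moment of superlinear convergence, eq. (45)."""
    return 8 * n * np.log(2 * L / mu) / (tau_star * 4 * mu / (9 * L) + 1 - tau_star)

n, L, mu = 100, 1e3, 1.0
k_bfgs = K0(0.0, n, L, mu)   # tau_k = 0: BFGS
k_dfp = K0(1.0, n, L, mu)    # tau_k = 1: DFP
assert np.isclose(k_bfgs, 8 * n * np.log(2 * L / mu))
assert np.isclose(k_dfp, 18 * n * (L / mu) * np.log(2 * L / mu))
assert np.isclose(k_dfp / k_bfgs, 9 * L / (4 * mu))   # DFP lags by a factor O(L/mu)
```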

Now we are ready to prove the main result of this section.

Theorem 5.1

Suppose that, in scheme (25), we have

$M\lambda_0 \le \frac{\ln\frac{3}{2}}{\frac{3}{2}\sqrt{\frac{3}{2}}}\max\Big\{ \frac{\mu}{2L},\ \frac{1}{K_0 + 9} \Big\}.$ (47)

Then, for all $k \ge 0$,

$\frac{2}{3}\nabla^2 f(x_k) \preceq G_k \preceq \frac{3L}{2\mu}\nabla^2 f(x_k),$ (48)
$\lambda_k \le \Big( 1 - \frac{\mu}{2L} \Big)^k\sqrt{\frac{3}{2}}\,\lambda_0,$ (49)

and, for all $k \ge 1$,

$\lambda_k \le \Bigg[ \frac{5/2}{\prod_{i=0}^{k-1}\big( \tau_i\frac{4\mu}{9L} + 1 - \tau_i \big)^{1/k}}\Big( e^{\frac{13}{6}\frac{n}{k}\ln\frac{2L}{\mu}} - 1 \Big) \Bigg]^{k/2}\sqrt{\frac{3L}{2\mu}}\;\lambda_0.$ (50)

Proof

Let us prove by induction that, for all $k \ge 0$, we have

$\xi_k \le \frac{3}{2}.$ (51)

Clearly, (51) is satisfied for $k = 0$ since $\xi_0 = 1$. It is also satisfied for $k = 1$ since $\xi_1 \overset{(31)}{=} e^{Mr_0} \overset{(35)}{\le} e^{\xi_0M\lambda_0} \overset{(31)}{=} e^{M\lambda_0} \overset{(47)}{\le} \frac{3}{2}$.

Now let $k \ge 0$, and suppose that (51) has already been proved for all indices up to $k + 1$. Then, applying Lemma 5.2, we obtain (48) for all indices up to $k + 1$. Applying now Lemma 5.3 and using, for all $0 \le i \le k$, the relation $q_i \overset{(37)}{=} \max\{ 1 - \frac{\mu}{\xi_{i+1}L}, \xi_{i+1} - 1 \} \overset{(51)}{\le} \max\{ 1 - \frac{2\mu}{3L}, \frac{1}{2} \} \le 1 - \frac{\mu}{2L}$, we obtain (49) for all indices up to $k + 1$. Finally, if $k \ge 1$, then, applying Lemma 5.4 and using that $\xi_{i+1}^{\xi_{i+1}} \overset{(51)}{\le} \big( \frac{3}{2} \big)^{3/2} = \frac{3}{2}\sqrt{\frac{3}{2}} \le \frac{3}{2}\big( 1 + \frac{1}{4} \big) = \frac{15}{8} \le 2$ for all $0 \le i \le k$, we obtain (50) for all indices up to $k$. Thus, at this moment, (48) and (49) are proved for all indices up to $k + 1$, while (50) is proved only up to $k$.

To finish the inductive step, it remains to prove that (51) is satisfied for the index $k + 2$, or, equivalently, in view of (31), that $M\sum_{i=0}^{k+1}r_i \le \ln\frac{3}{2}$. Since $M\sum_{i=0}^{k+1}r_i \le M\sum_{i=0}^{k+1}\xi_i\lambda_i \le \frac{3}{2}M\sum_{i=0}^{k+1}\lambda_i$ in view of (35) and (51), respectively, it suffices to show that $\frac{3}{2}M\sum_{i=0}^{k+1}\lambda_i \le \ln\frac{3}{2}$.

Note that

$\frac{3}{2}M\sum_{i=0}^{k+1}\lambda_i \overset{(49)}{\le} \frac{3}{2}\sqrt{\frac{3}{2}}\,M\lambda_0\sum_{i=0}^{k+1}\Big( 1 - \frac{\mu}{2L} \Big)^i \le \frac{3}{2}\sqrt{\frac{3}{2}}\cdot\frac{2L}{\mu}\,M\lambda_0.$ (52)

Therefore, if we could prove that

$\frac{3}{2}M\sum_{i=0}^{k+1}\lambda_i \le \frac{3}{2}\sqrt{\frac{3}{2}}\,(K_0 + 9)M\lambda_0,$ (53)

then, combining (52) and (53), we would obtain

$\frac{3}{2}M\sum_{i=0}^{k+1}\lambda_i \le \frac{3}{2}\sqrt{\frac{3}{2}}\min\Big\{ \frac{2L}{\mu},\ K_0 + 9 \Big\}M\lambda_0 \overset{(47)}{\le} \ln\frac{3}{2},$

which is exactly what we need. Let us prove (53). If $k \le K_0$, in view of (49), we have $\frac{3}{2}M\sum_{i=0}^{k+1}\lambda_i \le \frac{3}{2}\sqrt{\frac{3}{2}}(k + 2)M\lambda_0 \le \frac{3}{2}\sqrt{\frac{3}{2}}(K_0 + 2)M\lambda_0$, and (53) follows. Therefore, from now on, we can assume that $k \ge K_0$. Then,

$\frac{3}{2}M\sum_{i=0}^{k+1}\lambda_i = \frac{3}{2}M\Big( \sum_{i=0}^{K_0-1}\lambda_i + \lambda_{k+1} \Big) + \frac{3}{2}M\sum_{i=K_0}^{k}\lambda_i \overset{(49)}{\le} \frac{3}{2}\sqrt{\frac{3}{2}}(K_0 + 1)M\lambda_0 + \frac{3}{2}M\sum_{i=K_0}^{k}\lambda_i.$

It remains to show that $\frac{3}{2}M\sum_{i=K_0}^{k}\lambda_i \le \frac{3}{2}\sqrt{\frac{3}{2}}\cdot 8M\lambda_0$. We can do this using (50).

First, let us make some estimations. Clearly, for all $0 < t < 1$, we have $e^t = \sum_{j=0}^{\infty}\frac{t^j}{j!} \le 1 + t + \frac{t^2}{2}\sum_{j=0}^{\infty}t^j = 1 + t\Big( 1 + \frac{t}{2(1 - t)} \Big)$. Hence, for all $0 < t \le 1$, we obtain $e^{\frac{13t}{48}} - 1 \le \frac{13t}{48}\Big( 1 + \frac{\frac{13}{48}}{2(1 - \frac{13}{48})} \Big) = \frac{13t}{48}\cdot\frac{83}{70} \le \frac{13t}{48}\cdot\frac{6}{5} = \frac{13t}{40}$, and so

$\Big[ \frac{5}{2t}\Big( e^{\frac{13t}{48}} - 1 \Big) \Big]^{1/2} \le \Big[ \frac{5}{2t}\cdot\frac{13t}{40} \Big]^{1/2} = \frac{\sqrt{13}}{4} \le \frac{11}{12}.$ (54)

At the same time, $\frac{11}{12} = 1 - \frac{1}{12} \le e^{-\frac{1}{12}}$. Hence,

$\Big( \frac{11}{12} \Big)^{K_0}\sqrt{\frac{L}{\mu}} \overset{(45)}{\le} \Big( \frac{11}{12} \Big)^{8\ln\frac{2L}{\mu}}\sqrt{\frac{L}{\mu}} \le e^{-\frac{2}{3}\ln\frac{2L}{\mu}}\sqrt{\frac{L}{\mu}} = \Big( \frac{2L}{\mu} \Big)^{-\frac{2}{3}}\sqrt{\frac{L}{\mu}} = 2^{-\frac{2}{3}}\Big( \frac{L}{\mu} \Big)^{-\frac{1}{6}} \le 2^{-\frac{2}{3}} \le \frac{2}{3}.$ (55)

Thus, for all $K_0 \le i \le k$ and $p := \tau^{*}\frac{4\mu}{9L} + 1 - \tau^{*} \overset{(45)}{\le} \prod_{j=0}^{i-1}\big( \tau_j\frac{4\mu}{9L} + 1 - \tau_j \big)^{1/i}$:

$\lambda_i \overset{(50)}{\le} \Big[ \frac{5/2}{p}\Big( e^{\frac{13}{6}\frac{n}{i}\ln\frac{2L}{\mu}} - 1 \Big) \Big]^{i/2}\sqrt{\frac{3L}{2\mu}}\,\lambda_0 \overset{(45)}{\le} \Big[ \frac{5}{2p}\Big( e^{\frac{13p}{48}} - 1 \Big) \Big]^{i/2}\sqrt{\frac{3L}{2\mu}}\,\lambda_0 \overset{(54)}{\le} \Big( \frac{11}{12} \Big)^{i}\sqrt{\frac{3L}{2\mu}}\,\lambda_0 = \Big( \frac{11}{12} \Big)^{i-K_0}\Big( \frac{11}{12} \Big)^{K_0}\sqrt{\frac{3}{2}}\sqrt{\frac{L}{\mu}}\,\lambda_0 \overset{(55)}{\le} \Big( \frac{11}{12} \Big)^{i-K_0}\cdot\frac{2}{3}\sqrt{\frac{3}{2}}\,\lambda_0.$

Hence, $\frac{3}{2}M\sum_{i=K_0}^{k}\lambda_i \le \frac{3}{2}M\lambda_0\cdot\frac{2}{3}\sqrt{\frac{3}{2}}\sum_{i=K_0}^{k}\Big( \frac{11}{12} \Big)^{i-K_0} \le \frac{3}{2}\sqrt{\frac{3}{2}}\cdot 8M\lambda_0.$

Remark 5.2

In accordance with Theorem 5.1, the parameter $M$ of strong self-concordance affects only the size of the region of local convergence of process (25), not its rate of convergence. We do not know whether this is an artifact of the analysis or not, but it might be an interesting topic for future research. For a quadratic function, we have $M = 0$, and so scheme (25) is globally convergent.

The region of local convergence, specified by (47), depends on the maximum of two quantities: $\frac{\mu}{L}$ and $\frac{1}{K_0}$. For DFP, the $\frac{1}{K_0}$ part of this maximum is in fact redundant, and its region of local convergence is simply inversely proportional to the condition number: $O(\frac{\mu}{L})$. However, for BFGS, the $\frac{1}{K_0}$ part does not disappear, and we obtain the following region of local convergence:

$M\lambda_0 \le \max\Big\{ O\Big( \frac{\mu}{L} \Big),\ O\Big( \frac{1}{n\ln\frac{2L}{\mu}} \Big) \Big\}.$

Clearly, the latter region can be much bigger than the former when the condition number $\frac{L}{\mu}$ is significantly larger than the dimension $n$.

Remark 5.3

The previous estimate of the size of the region of local convergence, established in [27], was $O(\frac{\mu}{L})$ for both DFP and BFGS.

Example 5.1

Consider the functions

$f(x) := f_0(x) + \frac{\mu}{2}\|x\|^2, \qquad f_0(x) := \ln\sum_{i=1}^{m}e^{\langle a_i, x \rangle + b_i}, \qquad x \in \mathbb{E},$

where $a_i \in \mathbb{E}^{*}$, $b_i \in \mathbb{R}$, $i = 1, \ldots, m$, $\mu > 0$, and $\|\cdot\|$ is the Euclidean norm induced by the operator $B$. Let $\gamma > 0$ be such that

$\|a_i\|^{*} \le \gamma, \qquad i = 1, \ldots, m,$

where $\|\cdot\|^{*}$ is the norm conjugate to $\|\cdot\|$. Define

$\pi_i(x) := \frac{e^{\langle a_i, x \rangle + b_i}}{\sum_{j=1}^{m}e^{\langle a_j, x \rangle + b_j}}, \qquad x \in \mathbb{E}, \quad i = 1, \ldots, m.$

Clearly, $\sum_{i=1}^{m}\pi_i(x) = 1$ and $\pi_i(x) > 0$ for all $x \in \mathbb{E}$, $i = 1, \ldots, m$. It is not difficult to check that, for all $x, h \in \mathbb{E}$, we have

$\langle \nabla f_0(x), h \rangle = \sum_{i=1}^{m}\pi_i(x)\langle a_i, h \rangle \le \gamma\|h\|,$
$\langle \nabla^2 f_0(x)h, h \rangle = \sum_{i=1}^{m}\pi_i(x)\langle a_i - \nabla f_0(x), h \rangle^2 = \sum_{i=1}^{m}\pi_i(x)\langle a_i, h \rangle^2 - \langle \nabla f_0(x), h \rangle^2 \le \gamma^2\|h\|^2,$
$D^3f_0(x)[h, h, h] = \sum_{i=1}^{m}\pi_i(x)\langle a_i - \nabla f_0(x), h \rangle^3 \le 2\gamma\|h\|\langle \nabla^2 f_0(x)h, h \rangle \le 2\gamma^3\|h\|^3.$

Thus, $f_0$ is a convex function with $\gamma^2$-Lipschitz gradient and $2\gamma^3$-Lipschitz Hessian. Consequently, the function $f$ is $\mu$-strongly convex with $L$-Lipschitz gradient and $2\gamma^3$-Lipschitz Hessian, and, in view of [26, Example 4.1], $M$-strongly self-concordant, where

$L := \gamma^2 + \mu, \qquad M := \frac{2\gamma^3}{\mu^{3/2}}.$

Let the regularization parameter $\mu$ be sufficiently small, namely $\mu \le \gamma^2$. Denote $Q := \frac{\gamma^2}{\mu}\ (\ge 1)$. Then, $Q \le \frac{L}{\mu} \le 2Q$ and $M = 2Q^{3/2}$, so, according to (47), the region of local convergence of BFGS can be described as follows:

$\lambda_0 \le \max\Big\{ O\Big( \frac{1}{Q^{5/2}} \Big),\ O\Big( \frac{1}{nQ^{3/2}\ln(4Q)} \Big) \Big\}.$
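The function of Example 5.1 is straightforward to implement. The following sketch is our own, with $B = I$ (so $\|\cdot\|$ is the standard Euclidean norm): it evaluates $f$ and its Hessian via the softmax weights $\pi_i(x)$ and checks the spectral bounds $\mu I \preceq \nabla^2 f(x) \preceq (\gamma^2 + \mu)I$ numerically.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, mu = 4, 10, 0.01
A_mat = rng.standard_normal((m, n))          # rows are the a_i (B = I assumed)
b = rng.standard_normal(m)
gamma = np.linalg.norm(A_mat, axis=1).max()  # gamma >= ||a_i||*

def f(x):
    z = A_mat @ x + b
    z0 = z.max()                             # numerically stable log-sum-exp
    return z0 + np.log(np.sum(np.exp(z - z0))) + 0.5 * mu * (x @ x)

def hess(x):
    z = A_mat @ x + b
    p = np.exp(z - z.max())
    p /= p.sum()                             # softmax weights pi_i(x)
    mean_a = A_mat.T @ p
    H0 = (A_mat.T * p) @ A_mat - np.outer(mean_a, mean_a)
    return H0 + mu * np.eye(n)

x = rng.standard_normal(n)
L = gamma**2 + mu
eigs = np.linalg.eigvalsh(hess(x))
assert eigs.min() >= mu - 1e-10              # mu-strong convexity
assert eigs.max() <= L + 1e-10               # L-Lipschitz gradient
```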

Discussion

Let us compare the new convergence rates, obtained in this paper for the classical DFP and BFGS methods, with the previously known ones from [27]. Since the estimates for the general nonlinear case differ from those for the quadratic one just in absolute constants, we only discuss the latter case.

In what follows, we use our standard notation: n is the dimension of the space, μ is the strong convexity parameter, L is the Lipschitz constant of the gradient, and λk is the local norm of the gradient at the kth iteration.

For BFGS, the previously known rate (see [27, Theorem 3.2]) is

$\lambda_k \le \Big( \frac{nL}{\mu k} \Big)^{k/2}\lambda_0.$ (56)

Although (56) is formally valid for all $k \ge 1$, it becomes useful only after

$\hat{K}_0^{\mathrm{BFGS}} := n\frac{L}{\mu}$ (57)

iterations. Thus, $\hat{K}_0^{\mathrm{BFGS}}$ can be thought of as the starting moment of the superlinear convergence, according to the estimate (56).

In this paper, we have obtained a new estimate (Theorem 4.2):

$\lambda_k \le \Big[ 2\Big( e^{\frac{n}{k}\ln\frac{L}{\mu}} - 1 \Big) \Big]^{k/2}\sqrt{\frac{L}{\mu}}\;\lambda_0.$ (58)

Its starting moment of superlinear convergence can be described as follows:

$K_0^{\mathrm{BFGS}} := 4n\ln\frac{L}{\mu}.$ (59)

Indeed, since $e^t \le \frac{1}{1-t} = 1 + \frac{t}{1-t}$ for any $t < 1$, we have, for all $k \ge K_0^{\mathrm{BFGS}}$,

$e^{\frac{n}{k}\ln\frac{L}{\mu}} - 1 \le \frac{\frac{n}{k}\ln\frac{L}{\mu}}{1 - \frac{n}{k}\ln\frac{L}{\mu}} \overset{(59)}{\le} \frac{\frac{n}{k}\ln\frac{L}{\mu}}{1 - \frac{1}{4}} = \frac{4n}{3k}\ln\frac{L}{\mu}.$ (60)

At the same time, for all $k \ge K_0^{\mathrm{BFGS}}$:

$\sqrt{\frac{L}{\mu}} = e^{\frac{1}{2}\ln\frac{L}{\mu}} \overset{(59)}{\le} e^{\frac{k}{8}} = \big( e^{\frac{1}{4}} \big)^{k/2} \le \Big( \frac{4}{3} \Big)^{k/2} \le \Big( \frac{3}{2} \Big)^{k/2}.$ (61)

Hence, according to the new estimate (58), for all $k \ge K_0^{\mathrm{BFGS}}$:

$\lambda_k \overset{(60)}{\le} \Big[ \frac{8n}{3k}\ln\frac{L}{\mu} \Big]^{k/2}\sqrt{\frac{L}{\mu}}\,\lambda_0 \overset{(61)}{\le} \Big[ \frac{4n}{k}\ln\frac{L}{\mu} \Big]^{k/2}\lambda_0 \quad \Big( \overset{(59)}{\le} \lambda_0 \Big).$ (62)

Comparing the previously known efficiency estimate (56) and its starting moment of superlinear convergence (57) with the new ones (62), (59), we thus conclude that we have managed to put the condition number $\frac{L}{\mu}$ under the logarithm.

For DFP, the previously known rate (see [27, Theorem 3.2]) is

$$\lambda_k \le \left(\frac{nL^2/\mu^2}{k}\right)^{k/2} \lambda_0$$

with the following starting moment of the superlinear convergence:

$$\hat{K}_0^{\mathrm{DFP}} := \frac{nL^2}{\mu^2}. \tag{63}$$

The new rate, which we have obtained in this paper (Theorem 4.2), is

$$\lambda_k \le \left[\frac{2L}{\mu}\left(e^{\frac{n}{k}\ln\frac{L}{\mu}} - 1\right)\right]^{k/2} \sqrt{\frac{L}{\mu}}\; \lambda_0. \tag{64}$$

Repeating the same reasoning as above, we can easily obtain that the new starting moment of the superlinear convergence can be described as follows:

$$K_0^{\mathrm{DFP}} := \frac{4nL}{\mu}\ln\frac{L}{\mu}, \tag{65}$$

and, for all $k \ge K_0^{\mathrm{DFP}}$, the new estimate (64) takes the following form:

$$\lambda_k \le \left[\frac{4nL}{\mu k}\ln\frac{L}{\mu}\right]^{k/2} \lambda_0 \quad \left(\overset{(65)}{\le} \lambda_0\right).$$

Thus, compared to the old result, we have improved the factor $\frac{L^2}{\mu^2}$ to $\frac{L}{\mu}\ln\frac{L}{\mu}$. Interestingly enough, the ratio between the old starting moments (63), (57) of the superlinear convergence of DFP and BFGS and the new ones (65), (59) has remained the same, namely $\frac{L}{\mu}$, although both estimates have been improved.
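The remark about the unchanged ratio can be made concrete by evaluating the four thresholds (57), (59), (63), (65); the sample values of $n$ and $L/\mu$ below are ours, chosen purely for illustration.

```python
import math

n, kappa = 50, 1e3   # illustrative dimension and condition number L/mu (our choice)

old_bfgs, new_bfgs = n * kappa, 4 * n * math.log(kappa)             # (57), (59)
old_dfp,  new_dfp  = n * kappa ** 2, 4 * n * kappa * math.log(kappa)  # (63), (65)

# The DFP/BFGS ratio is kappa both before and after the improvement:
assert math.isclose(old_dfp / old_bfgs, kappa)
assert math.isclose(new_dfp / new_bfgs, kappa)
# ...while each individual threshold has decreased substantially:
assert new_bfgs < old_bfgs and new_dfp < old_dfp
```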

It is also interesting whether the results obtained in this paper can be applied to limited-memory quasi-Newton methods such as L-BFGS [30]. Unfortunately, the answer seems to be negative. The main problem is that we cannot say anything interesting about just a few iterations of BFGS. Indeed, according to our main result, after $k$ iterations of BFGS, the initial residual is contracted by a factor of the form $\left[e^{\frac{n}{k}\ln\frac{L}{\mu}} - 1\right]^{k/2}$. For all values $k \le n\ln\frac{L}{\mu}$, this contraction factor is in fact bigger than 1, so the result becomes useless.
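This degeneracy for few iterations is easy to observe numerically. The sketch below evaluates the contraction factor from (58) (without the constant 2 and the $\sqrt{L/\mu}$ term); the values of $n$ and $L/\mu$ are our illustrative choices.

```python
import math

def contraction(k, n, kappa):
    # The factor [exp((n/k) ln kappa) - 1]^(k/2) appearing in estimate (58)
    return (math.exp((n / k) * math.log(kappa)) - 1) ** (k / 2)

n, kappa = 100, 1e4
m = int(n * math.log(kappa))             # roughly n ln(L/mu) iterations

assert contraction(m, n, kappa) > 1      # few iterations: factor exceeds 1, bound is useless
assert contraction(10 * m, n, kappa) < 1 # many iterations: genuine superlinear contraction
```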

Conclusions

We have presented a new theoretical analysis of local superlinear convergence of classical quasi-Newton methods from the convex Broyden class. Our analysis is based on a potential function involving the logarithm of the determinant of the Hessian approximation and the trace of the inverse Hessian approximation. Compared to previous works, we have obtained new convergence rate estimates with a much better dependency on the condition number of the problem.

Note that all our results are local, i.e. they are valid under the assumption that the starting point is sufficiently close to a minimizer. In particular, there is no contradiction between our results and the fact that the DFP method is not known to be globally convergent with inexact line search (see, e.g., [31]).

Let us mention several open questions. First, looking at the starting moment of superlinear convergence of the BFGS method, in addition to the dimension of the problem, we see the presence of the logarithm of its condition number. Although typically such logarithmic factors are considered small, it is still interesting to understand whether this factor can be completely removed.

Second, all the superlinear convergence rates, which we have obtained for the convex Broyden class in this paper, are expressed in terms of the parameter τ, which controls the weight of the DFP component in the updating formula for the inverse operator. At the same time, in [27], the corresponding estimates were presented in terms of the parameter ϕ, which controls the weight of the DFP component in the updating formula for the primal operator. Of course, for the extreme members of the convex Broyden class, DFP and BFGS, ϕ and τ coincide. However, in general, they could be quite different. We do not know if it is possible to express the results of this paper in terms of ϕ instead of τ.

Finally, in all the methods which we considered, the initial Hessian approximation $G_0$ was $LB$, where $L$ is the Lipschitz constant of the gradient, measured relative to the operator $B$. We always assume that this constant is known. Of course, it is interesting to develop adaptive algorithms, which could start from an arbitrary initial guess $L_0$ for the constant $L$ and then dynamically adjust the Hessian approximations in iterations, while retaining all the original efficiency estimates.

Acknowledgements

The presented results were supported by ERC Advanced Grant 788368. The authors are thankful to the anonymous reviewers for their valuable time and comments.

Appendix

Lemma A.1

Let $A, G : \mathbb{E} \to \mathbb{E}^*$ be self-adjoint positive definite linear operators, let $u \in \mathbb{E}$ be nonzero, and let $\tau \in \mathbb{R}$ be such that $G_+ := \mathrm{Broyd}_\tau(A, G, u)$ is well defined. Then,

$$G_+^{-1} = \tau\left[G^{-1} - \frac{G^{-1}Auu^*AG^{-1}}{\langle AG^{-1}Au,u\rangle} + \frac{uu^*}{\langle Au,u\rangle}\right] + (1-\tau)\left[G^{-1} - \frac{G^{-1}Auu^* + uu^*AG^{-1}}{\langle Au,u\rangle} + \left(\frac{\langle AG^{-1}Au,u\rangle}{\langle Au,u\rangle} + 1\right)\frac{uu^*}{\langle Au,u\rangle}\right], \tag{66}$$

and

$$\mathrm{Det}(G_+^{-1}, G) = \tau\frac{\langle Au,u\rangle}{\langle AG^{-1}Au,u\rangle} + (1-\tau)\frac{\langle Gu,u\rangle}{\langle Au,u\rangle}. \tag{67}$$

Proof

Denote $\phi := \phi_\tau(A, G, u)$. According to Lemma 6.2 in [27], we have

$$\mathrm{Det}(G^{-1}, G_+) = \phi\frac{\langle AG^{-1}Au,u\rangle}{\langle Au,u\rangle} + (1-\phi)\frac{\langle Au,u\rangle}{\langle Gu,u\rangle} \overset{(3)}{=} \left[\tau\frac{\langle Au,u\rangle}{\langle AG^{-1}Au,u\rangle} + (1-\tau)\frac{\langle Gu,u\rangle}{\langle Au,u\rangle}\right]^{-1}.$$

This proves (67) since $\mathrm{Det}(G_+^{-1}, G) = \frac{1}{\mathrm{Det}(G^{-1}, G_+)}$. Let us prove (66). Denote

$$G_0 := G - \frac{Guu^*G}{\langle Gu,u\rangle} + \frac{Auu^*A}{\langle Au,u\rangle}, \qquad s := \frac{Au}{\langle Au,u\rangle} - \frac{Gu}{\langle Gu,u\rangle}. \tag{68}$$

Note that

$$G_+ \overset{(2)}{=} G_0 + \phi\langle Gu,u\rangle\left[\frac{Auu^*A}{\langle Au,u\rangle^2} + \frac{Guu^*G}{\langle Gu,u\rangle^2} - \frac{Auu^*G + Guu^*A}{\langle Au,u\rangle\langle Gu,u\rangle}\right] = G_0 + \phi\langle Gu,u\rangle ss^*. \tag{69}$$

Let $I_{\mathbb{E}}$ and $I_{\mathbb{E}^*}$ be the identity operators in $\mathbb{E}$ and $\mathbb{E}^*$, respectively. Since $G_0 u = Au$, we have

$$\begin{aligned}
&\left[\left(I_{\mathbb{E}} - \frac{uu^*A}{\langle Au,u\rangle}\right)G^{-1}\left(I_{\mathbb{E}^*} - \frac{Auu^*}{\langle Au,u\rangle}\right) + \frac{uu^*}{\langle Au,u\rangle}\right]G_0 = \left(I_{\mathbb{E}} - \frac{uu^*A}{\langle Au,u\rangle}\right)G^{-1}\left(G_0 - \frac{Auu^*A}{\langle Au,u\rangle}\right) + \frac{uu^*A}{\langle Au,u\rangle} \\
&\qquad\overset{(68)}{=} \left(I_{\mathbb{E}} - \frac{uu^*A}{\langle Au,u\rangle}\right)G^{-1}\left(G - \frac{Guu^*G}{\langle Gu,u\rangle}\right) + \frac{uu^*A}{\langle Au,u\rangle} = I_{\mathbb{E}}.
\end{aligned}$$

Hence, we can conclude that

$$G_0^{-1} = \left(I_{\mathbb{E}} - \frac{uu^*A}{\langle Au,u\rangle}\right)G^{-1}\left(I_{\mathbb{E}^*} - \frac{Auu^*}{\langle Au,u\rangle}\right) + \frac{uu^*}{\langle Au,u\rangle} = G^{-1} - \frac{G^{-1}Auu^* + uu^*AG^{-1}}{\langle Au,u\rangle} + \left(\frac{\langle AG^{-1}Au,u\rangle}{\langle Au,u\rangle} + 1\right)\frac{uu^*}{\langle Au,u\rangle}.$$

Thus, we see that the right-hand side of (66) equals

$$H_+ := G_0^{-1} - \tau\left[\frac{\langle AG^{-1}Au,u\rangle uu^*}{\langle Au,u\rangle^2} + \frac{G^{-1}Auu^*AG^{-1}}{\langle AG^{-1}Au,u\rangle} - \frac{G^{-1}Auu^* + uu^*AG^{-1}}{\langle Au,u\rangle}\right] = G_0^{-1} - \tau\langle AG^{-1}Au,u\rangle ww^*, \tag{70}$$

where

$$w := \frac{G^{-1}Au}{\langle AG^{-1}Au,u\rangle} - \frac{u}{\langle Au,u\rangle}. \tag{71}$$

It remains to verify that $H_+G_+ = I_{\mathbb{E}}$. Clearly,

$$\langle AG^{-1}Au,u\rangle G_0 w \overset{(71)}{=} G_0G^{-1}Au - \frac{\langle AG^{-1}Au,u\rangle}{\langle Au,u\rangle}G_0u \overset{(68)}{=} Au - \frac{\langle Au,u\rangle}{\langle Gu,u\rangle}Gu \overset{(68)}{=} \langle Au,u\rangle s. \tag{72}$$

Hence,

$$\langle AG^{-1}Au,u\rangle\langle G_0w,w\rangle \overset{(72)}{=} \langle Au,u\rangle\langle s,w\rangle \overset{(71)}{=} \langle Au,u\rangle\left[\frac{\langle s,G^{-1}Au\rangle}{\langle AG^{-1}Au,u\rangle} - \frac{\langle s,u\rangle}{\langle Au,u\rangle}\right] \overset{(68)}{=} \frac{\langle Au,u\rangle}{\langle AG^{-1}Au,u\rangle}\left[\frac{\langle AG^{-1}Au,u\rangle}{\langle Au,u\rangle} - \frac{\langle Au,u\rangle}{\langle Gu,u\rangle}\right] = 1 - \frac{\langle Au,u\rangle^2}{\langle AG^{-1}Au,u\rangle\langle Gu,u\rangle}. \tag{73}$$

Consequently,

$$\begin{aligned}
\frac{\langle Gu,u\rangle}{\langle Au,u\rangle}H_+G_0ww^*G_0 &\overset{(70)}{=} \frac{\langle Gu,u\rangle}{\langle Au,u\rangle}\left(G_0^{-1} - \tau\langle AG^{-1}Au,u\rangle ww^*\right)G_0ww^*G_0 = \frac{\langle Gu,u\rangle}{\langle Au,u\rangle}\left(1 - \tau\langle AG^{-1}Au,u\rangle\langle G_0w,w\rangle\right)ww^*G_0 \\
&\overset{(73)}{=} \frac{\langle Gu,u\rangle}{\langle Au,u\rangle}\left(1 - \tau + \frac{\tau\langle Au,u\rangle^2}{\langle AG^{-1}Au,u\rangle\langle Gu,u\rangle}\right)ww^*G_0 = \left[\tau\frac{\langle Au,u\rangle}{\langle AG^{-1}Au,u\rangle} + (1-\tau)\frac{\langle Gu,u\rangle}{\langle Au,u\rangle}\right]ww^*G_0.
\end{aligned} \tag{74}$$

Thus,

$$\begin{aligned}
H_+G_+ &\overset{(69)}{=} H_+\left(G_0 + \phi\langle Gu,u\rangle ss^*\right) \overset{(72)}{=} H_+G_0 + \phi\frac{\langle AG^{-1}Au,u\rangle^2}{\langle Au,u\rangle}\cdot\frac{\langle Gu,u\rangle}{\langle Au,u\rangle}H_+G_0ww^*G_0 \\
&\overset{(74)}{=} H_+G_0 + \phi\frac{\langle AG^{-1}Au,u\rangle^2}{\langle Au,u\rangle}\left[\tau\frac{\langle Au,u\rangle}{\langle AG^{-1}Au,u\rangle} + (1-\tau)\frac{\langle Gu,u\rangle}{\langle Au,u\rangle}\right]ww^*G_0 \overset{(3)}{=} H_+G_0 + \tau\langle AG^{-1}Au,u\rangle ww^*G_0 \overset{(70)}{=} I_{\mathbb{E}}.
\end{aligned}$$
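The identities (66) and (67) can be checked numerically in the standard matrix instantiation $\mathbb{E} = \mathbb{R}^n$, where $A$ and $G$ are symmetric positive definite matrices. The sketch below (our test with random data, not part of the paper) builds the right-hand side of (66) as the convex combination of the inverse DFP and inverse BFGS updates, and verifies that its relative determinant with respect to $G$ matches the scalar in (67).

```python
import numpy as np

rng = np.random.default_rng(0)
n, tau = 5, 0.3

def spd(rng, n):
    # random symmetric positive definite matrix
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

A, G = spd(rng, n), spd(rng, n)
u = rng.standard_normal(n)

Ginv = np.linalg.inv(G)
Au, GiAu = A @ u, np.linalg.inv(G) @ (A @ u)
au  = Au @ u        # <Au, u>
aga = Au @ GiAu     # <A G^{-1} A u, u>
gu  = u @ G @ u     # <Gu, u>

# Right-hand side of (66): tau * (inverse DFP term) + (1 - tau) * (inverse BFGS term)
H_dfp  = Ginv - np.outer(GiAu, GiAu) / aga + np.outer(u, u) / au
H_bfgs = (Ginv - (np.outer(GiAu, u) + np.outer(u, GiAu)) / au
          + (aga / au + 1) * np.outer(u, u) / au)
H_plus = tau * H_dfp + (1 - tau) * H_bfgs

# (67): Det(G_+^{-1}, G) = det(G_+^{-1} G) equals the stated convex combination
lhs = np.linalg.det(H_plus @ G)
rhs = tau * au / aga + (1 - tau) * gu / au
assert np.isclose(lhs, rhs)
```

Note that $H_+$ differs from the inverse BFGS update by a rank-one term linear in $\tau$ (cf. (70)), which is why the determinant in (67) is affine in $\tau$.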

Footnotes

1

This is obvious when G-A is nondegenerate. The general case follows by continuity.

2

We follow the standard convention that the sum over the empty set is defined as 0, so ξ0=1. Similarly, the product over the empty set is defined as 1.

3

Hereinafter, $\lceil t\rceil$ for $t > 0$ denotes the smallest positive integer greater than or equal to $t$.

4

We will estimate the second sum using (50). However, recall that, at this moment, (50) is proved only up to the index $k$. This is the reason why we move $\lambda_{k+1}$ into the first sum.

5

$D^3 f_0(x)[h,h,h] = \left.\frac{\mathrm{d}^3}{\mathrm{d}t^3} f_0(x+th)\right|_{t=0}$ is the third derivative of $f_0$ along the direction $h$.

6

Indeed, according to Theorem 4.1, we have at least $\lambda_k \le \left(1 - \frac{\mu}{L}\right)^{k}\lambda_0$ for all $k \ge 0$.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Anton Rodomanov, Email: anton.rodomanov@uclouvain.be.

Yurii Nesterov, Email: yurii.nesterov@uclouvain.be.

References

  • 1. Davidon, W.: Variable metric method for minimization. Argonne National Laboratory Research and Development Report 5990 (1959)
  • 2. Fletcher, R., Powell, M.: A rapidly convergent descent method for minimization. Comput. J. 6(2), 163–168 (1963). doi:10.1093/comjnl/6.2.163
  • 3. Broyden, C.: The convergence of a class of double-rank minimization algorithms: 1. General considerations. IMA J. Appl. Math. 6(1), 76–90 (1970). doi:10.1093/imamat/6.1.76
  • 4. Broyden, C.: The convergence of a class of double-rank minimization algorithms: 2. The new algorithm. IMA J. Appl. Math. 6(3), 222–231 (1970). doi:10.1093/imamat/6.3.222
  • 5. Fletcher, R.: A new approach to variable metric algorithms. Comput. J. 13(3), 317–322 (1970). doi:10.1093/comjnl/13.3.317
  • 6. Goldfarb, D.: A family of variable-metric methods derived by variational means. Math. Comput. 24(109), 23–26 (1970). doi:10.1090/S0025-5718-1970-0258249-6
  • 7. Shanno, D.: Conditioning of quasi-Newton methods for function minimization. Math. Comput. 24(111), 647–656 (1970). doi:10.1090/S0025-5718-1970-0274029-X
  • 8. Broyden, C.: Quasi-Newton methods and their application to function minimization. Math. Comput. 21(99), 368–381 (1967). doi:10.1090/S0025-5718-1967-0224273-2
  • 9. Dennis, J., Moré, J.: Quasi-Newton methods, motivation and theory. SIAM Rev. 19(1), 46–89 (1977). doi:10.1137/1019005
  • 10. Nocedal, J., Wright, S.: Numerical Optimization. Springer, New York (2006)
  • 11. Lewis, A., Overton, M.: Nonsmooth optimization via quasi-Newton methods. Math. Program. 141(1–2), 135–163 (2013). doi:10.1007/s10107-012-0514-2
  • 12. Powell, M.: On the convergence of the variable metric algorithm. IMA J. Appl. Math. 7(1), 21–36 (1971). doi:10.1093/imamat/7.1.21
  • 13. Dixon, L.: Quasi-Newton algorithms generate identical points. Math. Program. 2(1), 383–387 (1972). doi:10.1007/BF01584554
  • 14. Dixon, L.: Quasi Newton techniques generate identical points II: the proofs of four new theorems. Math. Program. 3(1), 345–358 (1972). doi:10.1007/BF01585007
  • 15. Broyden, C., Dennis, J., Moré, J.: On the local and superlinear convergence of quasi-Newton methods. IMA J. Appl. Math. 12(3), 223–245 (1973). doi:10.1093/imamat/12.3.223
  • 16. Dennis, J., Moré, J.: A characterization of superlinear convergence and its application to quasi-Newton methods. Math. Comput. 28(126), 549–560 (1974). doi:10.1090/S0025-5718-1974-0343581-1
  • 17. Stachurski, A.: Superlinear convergence of Broyden's bounded θ-class of methods. Math. Program. 20(1), 196–212 (1981). doi:10.1007/BF01589345
  • 18. Griewank, A., Toint, P.: Local convergence analysis for partitioned quasi-Newton updates. Numer. Math. 39(3), 429–448 (1982). doi:10.1007/BF01407874
  • 19. Engels, J., Martínez, H.: Local and superlinear convergence for partially known quasi-Newton methods. SIAM J. Optim. 1(1), 42–56 (1991). doi:10.1137/0801005
  • 20. Byrd, R., Liu, D., Nocedal, J.: On the behavior of Broyden's class of quasi-Newton methods. SIAM J. Optim. 2(4), 533–557 (1992). doi:10.1137/0802026
  • 21. Yabe, H., Yamaki, N.: Local and superlinear convergence of structured quasi-Newton methods for nonlinear optimization. J. Oper. Res. Soc. Jpn. 39(4), 541–557 (1996)
  • 22. Wei, Z., Yu, G., Yuan, G., Lian, Z.: The superlinear convergence of a modified BFGS-type method for unconstrained optimization. Comput. Optim. Appl. 29(3), 315–332 (2004). doi:10.1023/B:COAP.0000044184.25410.39
  • 23. Yabe, H., Ogasawara, H., Yoshino, M.: Local and superlinear convergence of quasi-Newton methods based on modified secant conditions. J. Comput. Appl. Math. 205(1), 617–632 (2007). doi:10.1016/j.cam.2006.05.018
  • 24. Mokhtari, A., Eisen, M., Ribeiro, A.: IQN: an incremental quasi-Newton method with local superlinear convergence rate. SIAM J. Optim. 28(2), 1670–1698 (2018). doi:10.1137/17M1122943
  • 25. Gao, W., Goldfarb, D.: Quasi-Newton methods: superlinear convergence without line searches for self-concordant functions. Optim. Methods Softw. 34(1), 194–217 (2019). doi:10.1080/10556788.2018.1510927
  • 26. Rodomanov, A., Nesterov, Y.: Greedy quasi-Newton methods with explicit superlinear convergence. CORE Discussion Papers 06 (2020)
  • 27. Rodomanov, A., Nesterov, Y.: Rates of superlinear convergence for classical quasi-Newton methods. CORE Discussion Papers 11 (2020)
  • 28. Jin, Q., Mokhtari, A.: Non-asymptotic superlinear convergence of standard quasi-Newton methods. arXiv preprint arXiv:2003.13607 (2020)
  • 29. Byrd, R., Nocedal, J.: A tool for the analysis of quasi-Newton methods with application to unconstrained minimization. SIAM J. Numer. Anal. 26(3), 727–739 (1989). doi:10.1137/0726042
  • 30. Liu, D., Nocedal, J.: On the limited memory BFGS method for large scale optimization. Math. Program. 45(1–3), 503–528 (1989). doi:10.1007/BF01589116
  • 31. Byrd, R., Nocedal, J., Yuan, Y.: Global convergence of a class of quasi-Newton methods on convex problems. SIAM J. Numer. Anal. 24(5), 1171–1190 (1987). doi:10.1137/0724077

Articles from Journal of Optimization Theory and Applications are provided here courtesy of Springer
