. Author manuscript; available in PMC: 2022 Apr 1.
Published in final edited form as: Parallel Comput. 2020 Nov 4;101:102721. doi: 10.1016/j.parco.2020.102721

Asynchronous Parallel Stochastic Quasi-Newton Methods

Qianqian Tong 1, Guannan Liang 1, Xingyu Cai 2, Chunjiang Zhu 1, Jinbo Bi 1
PMCID: PMC7755129  NIHMSID: NIHMS1645046  PMID: 33363295

Abstract

Although first-order stochastic algorithms, such as stochastic gradient descent, have been the main force in scaling up machine learning models such as deep neural nets, second-order quasi-Newton methods have started to draw attention due to their effectiveness in dealing with ill-conditioned optimization problems. The L-BFGS method is one of the most widely used quasi-Newton methods. We propose an asynchronous parallel algorithm for the stochastic quasi-Newton (AsySQN) method. Unlike prior attempts, which parallelize only the gradient calculation or the two-loop recursion of L-BFGS, our algorithm is the first that truly parallelizes L-BFGS with a convergence guarantee. By adopting the variance reduction technique, a prior stochastic L-BFGS, which was not designed for parallel computing, reaches a linear convergence rate. We prove that our asynchronous parallel scheme maintains the same linear convergence rate while achieving significant speedup. Empirical evaluations on both simulations and benchmark datasets demonstrate the speedup over the non-parallel stochastic L-BFGS, as well as better performance than first-order methods in solving ill-conditioned problems.

Keywords: Quasi-Newton method, Asynchronous parallel, Stochastic algorithm, Variance Reduction

1. Introduction

With the immense growth of data in modern life, developing parallel or distributed optimization algorithms has become a well-established strategy in machine learning, as exemplified by the widely used stochastic gradient descent (SGD) algorithm and its variants [1, 2, 3, 4, 5, 6, 7]. Because gradient-based algorithms have deficiencies (e.g., zigzagging) on ill-conditioned optimization problems (elongated curvature), second-order algorithms that utilize curvature information have drawn research attention. The most well-known second-order algorithms are the quasi-Newton methods, particularly the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method and its limited-memory version (L-BFGS) [8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]. Second-order methods have several advantages over first-order methods, such as a fast rate of local convergence (typically superlinear) [21] and affine invariance (insensitivity to the choice of coordinates).

Stochastic algorithms have been extensively studied and have substantially improved the scalability of machine learning models. In particular, stochastic first-order methods have been a big success, with convergence guaranteed under the assumption that the expectation of the stochastic gradient is the true gradient. In contrast, second-order algorithms cannot directly use stochastic sampling techniques without losing the curvature information. Several schemes have been proposed to develop stochastic versions of quasi-Newton methods. The stochastic quasi-Newton method (SQN) [22] uses independent large batches for updating the Hessian inverse. Stochastic block BFGS [23] calculates a subsampled Hessian-matrix product instead of a Hessian-vector product to preserve more curvature information. These methods achieve a sub-linear convergence rate in the strongly convex setting. Using the variance reduction (VR) technique proposed in [24], the convergence rate can be lifted to linear in the latest attempts [25, 23]. Later, acceleration strategies [26] were combined with VR, non-uniform mini-batch subsampling, and momentum calculation to derive a fast and practical stochastic algorithm. Another line of stochastic quasi-Newton research focuses on self-concordant functions; it imposes more requirements on the shape or properties of the objective function but can reach a linear convergence rate. Stochastic adaptive quasi-Newton methods for self-concordant functions were proposed in [27], where the step size can be computed analytically using only local information and adapts itself to the local curvature. [28] gives a global convergence rate for a stochastic variant of the BFGS method combined with stochastic gradient descent. Other researchers use randomized BFGS [19], a variant of stochastic block BFGS, and prove a linear convergence rate for self-concordant functions. However, so far none of these stochastic methods has been designed for parallel computing.

Parallel quasi-Newton methods have been explored in several directions: map-reduce (vector-free L-BFGS) [29] has been used to parallelize the two-loop recursion (see more discussion in Section 2) in a deterministic way; the distributed L-BFGS [35] focuses on the implementation of L-BFGS over a high performance computing cluster (HPCC) platform, e.g., how to distribute data such that a full gradient or the two-loop recursion can be calculated quickly. As a successful attempt to create an algorithm that is both stochastic and parallel, multi-batch L-BFGS [30] uses map-reduce to compute both the gradients and the updating rules of L-BFGS. Its idea of using overlapping sets to evaluate curvature information helps when a node failure is encountered. However, the overlap size is large, and reshuffling and redistributing data among nodes may require costly communication. Progressive batching L-BFGS [31] gradually increases the batch size instead of using the full batch of the multi-batch scheme to improve computational efficiency. Another line of work explores decentralized quasi-Newton methods (decentralized BFGS) [32, 33]: taking the physical network structure into consideration, the nodes communicate only with their neighbors and perform local computations. Recently, the distributed averaged quasi-Newton methods and their adaptive distributed variants (DAve-QN) were implemented in [34]; these can be very efficient for low-dimensional problems because they need to read and write the whole Hessian matrix during updating. Note that the analysis of the DAve-QN method imposes a strong requirement on the condition number of the objective function in Lemma 2 of [34], which is hard to satisfy on most real datasets. Moreover, when dealing with machine learning problems on big datasets, or when node failures are possible, deterministic algorithms like the decentralized ones perform no better than stochastic ones.

1.1. Contributions

In this paper, we propose an asynchronous parallel regime of the SQN method with the VR technique, which we call AsySQN. The comparison of our method against existing methods is illustrated in Table 1. AsySQN makes every step of the computation of the search directions both stochastic and parallel. To the best of our knowledge, this is the first effort to implement SQN methods in an asynchronous parallel way rather than merely applying local acceleration techniques. The basic idea of asynchrony has been explored by first-order methods, but it is not straightforward to apply it to stochastic quasi-Newton methods, and it was not even clear whether such a procedure converges. We provide a theoretical guarantee of a linear convergence rate for the proposed algorithm. Note that although there is another commonly used way of lifting the convergence rate, namely exploiting self-concordant functions, we choose the VR technique so that we can handle more general objective functions in our experiments. In addition, we remove the gradient-sparsity requirement of previous methods [2, 6]. Although we focus on the L-BFGS method, the proposed framework can be applied to other quasi-Newton methods including BFGS, DFP, and the Broyden family [36]. In our empirical evaluation, we show that the proposed algorithm is more computationally efficient than its non-parallel counterpart. Additional experiments show that AsySQN retains the hallmark of a quasi-Newton method: it obtains high-precision solutions on ill-conditioned problems faster than first-order methods, without zigzagging.

Table 1:

The comparison of various quasi-Newton methods in terms of their stochastic, parallel, and asynchronous frameworks, their convergence rates, and whether they use a variance reduction technique and the limited-memory update of the BFGS method to handle high-dimensional problems. Our algorithm AsySQN parallelizes the whole L-BFGS method in shared memory.

QN methods | Parallel scheme | Convergence
[22] | none | sublinear
[25] | none | linear
[23] | none | linear
[26] | none | linear
[29] | parallel two-loop recursion | deterministic
[30] | map-reduce for gradient | sublinear
[31] | map-reduce for gradient | linear
[32, 33] | parallel calculation for gradient | —
[34] | parallel calculation for Hessian | superlinear
AsySQN | parallel model for L-BFGS | linear

1.2. Notations

The Euclidean norm of a vector x is denoted by ∥x∥_2; without loss of generality, we omit the subscript 2 and write ∥x∥. We use x* to denote the global optimal solution of Eq. (2). For simplicity, we also write f* for the corresponding optimal value f(x*). The notation E[·] means taking the expectation over all random variables, and I denotes the identity matrix. The condition number of a matrix A is defined by κ(A) = σ_max(A)/σ_min(A), where σ_max(A) and σ_min(A) denote, respectively, the largest and smallest singular values of A. For square matrices A and B of the same size, A ⪯ B iff B − A is positive semidefinite. Given two numbers a ∈ ℤ (integers) and b ∈ ℤ⁺ (positive integers), mod(a, b) = 0 if there exists an integer k satisfying a = kb.

2. Algorithm

In this paper, we consider the following finite-sum problem:

f(x) ≡ (1/n) Σ_{i=1}^{n} f(x; z_i) =def (1/n) Σ_{i=1}^{n} f_i(x), (1)
Algorithm 1 AsySQN
1: Input: initial w_0 ∈ ℝ^d, step size η ∈ ℝ_+, subsample sizes b, b_h, parameters m, L, M, and thread number P.
2: Initialize H_0 = I
3: Initialize parameter x in shared memory
4: for k = 0, 1, 2, … do
5:   w_k, μ_k = Schedule-update(x, k, m) (Algorithm 2)
6:   for p = 0, 1, …, P − 1, parallel do
7:     for t = 0 to L − 1 do
8:       Sample S_{p,kL+t} ⊆ {1, 2, …, n}
9:       Read x from shared memory as x_{p,kL+t}
10:      Compute g_1 = ∇f_{S_{p,kL+t}}(x_{p,kL+t})
11:      Compute g_2 = ∇f_{S_{p,kL+t}}(w_k)
12:      Calculate the variance-reduced gradient: v_{p,kL+t} = g_1 − g_2 + μ_k
13:      if k ⩽ 1 then
14:        x_{p,kL+t+1} = x_{p,kL+t} − η v_{p,kL+t}
15:      else
16:        x_{p,kL+t+1} = x_{p,kL+t} − η H_k v_{p,kL+t},
17:        where the search direction p = H_k v_{p,kL+t} is computed by Two-loop-recursion(v_{p,kL+t}, s_i, y_i, i = k − M, …, k − 1) (Algorithm 3)
18:      end if
19:      Write x_{p,kL+t+1} to x in shared memory
20:    end for
21:  end for
22:  Sample T_k ⊆ {1, 2, …, n}
23:  x_k = (1/(LP)) Σ_{p=0}^{P−1} Σ_{i=kL}^{kL+L−1} x_{p,i}
24:  s_k = x_k − x_{k−1}
25:  Option I: y_k = ∇f_{T_k}(x_k) − ∇f_{T_k}(x_{k−1})
26:  Option II: y_k = ∇²f_{T_k}(x_k) s_k
27: end for

where {z_i}_{i=1}^{n} denote the training examples, f : ℝ^d → ℝ is a loss function parametrized by x, which is to be determined in the optimization process, and f_i is the loss incurred on z_i. Most machine learning models (with model parameter x) can be obtained by solving the following minimization problem:

min_{x ∈ ℝ^d} f(x) = (1/n) Σ_{i=1}^{n} f_i(x). (2)

We aim to find an ϵ–approximate solution x, i.e., an x that satisfies:

E[f(x)] − f(x*) ≤ ϵ, (3)

where f(x*) is the global minimum if it exists, x* is an optimal solution, ϵ is the targeted accuracy, and E[f(x)] denotes the expectation of f(x). We require a high-precision solution, e.g., ϵ = 10^{−30}.

2.1. Review of the stochastic quasi-Newton method

According to the classical L-BFGS method [36], given the current and previous iterates x_k and x_{k−1} and their corresponding gradients, we define the correction pairs:

s_k = x_k − x_{k−1},  y_k = ∇f_k − ∇f_{k−1}, (4)
Algorithm 2 Schedule-update(x, k, m)
1: Input: x, k, m.
2: Output: w_k, μ_k
3: if mod(k, m) == 0 then
4:   w_k = x
5:   μ_k = ∇f(x)
6: end if
Algorithm 3 Two-loop-recursion(v, s_i, y_i, ∀i = 1, ⋯, M)
1: Input: v, s_i, y_i where i = 1, …, M.
2: Output: p
3: Initialize p = v
4: for i = M, …, 1 do
5:   α_i = s_i^⊤ p / (s_i^⊤ y_i)
6:   p = p − α_i y_i
7: end for
8: p = (s_{k−1}^⊤ y_{k−1} / (y_{k−1}^⊤ y_{k−1})) p
9: for i = 1, …, M do
10:  β = y_i^⊤ p / (s_i^⊤ y_i)
11:  p = p + (α_i − β) s_i
12: end for

and define ρ_k = 1/(y_k^⊤ s_k), V_k = I − ρ_k y_k s_k^⊤. Then the new Hessian inverse matrix can be uniquely updated from the last estimate H_{k−1}:

H_k = (I − ρ_k s_k y_k^⊤) H_{k−1} (I − ρ_k y_k s_k^⊤) + ρ_k s_k s_k^⊤ = V_k^⊤ H_{k−1} V_k + ρ_k s_k s_k^⊤. (5)

When datasets are high dimensional, i.e., d > 0 is large, the Hessian and Hessian inverse matrices are d × d and may be dense, so storing and directly updating H_k might be computationally prohibitive. Instead of maintaining fully dense matrices, limited-memory methods save only the latest (say, the most recent M) correction pairs {s_k, y_k} to arrive at a Hessian inverse approximation. Given the initial guess H_0, which is usually set to the identity matrix, the new Hessian inverse at the k-th iteration can be approximated by repeated application of formula (5),

H_k = (V_k^⊤ V_{k−1}^⊤ ⋯ V_{k−M+1}^⊤) H_{k−M} (V_{k−M+1} V_{k−M+2} ⋯ V_k)
  + ρ_{k−M+1} (V_k^⊤ V_{k−1}^⊤ ⋯ V_{k−M+2}^⊤) s_{k−M+1} s_{k−M+1}^⊤ (V_{k−M+2} ⋯ V_k)
  + ⋯ + ρ_k s_k s_k^⊤. (6)

Based on this updating rule, a two-loop recursion procedure can be derived: the first for-loop multiplies the left-hand-side vectors and the second for-loop multiplies the right-hand-side ones. In practice, if v denotes the current variance-reduced gradient, we directly calculate the Hessian-vector product H·v to find the search direction rather than actually computing and storing H. Details are given in Algorithm 3.
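To make the recursion concrete, the following is a minimal Python sketch of the two-loop recursion (our own illustrative implementation, not the paper's code); `s_list` and `y_list` hold the M most recent correction pairs, oldest first, and the initial scaling uses the newest pair.

```python
import numpy as np

def two_loop_recursion(v, s_list, y_list):
    """Return p ~= H_k v via the L-BFGS two-loop recursion,
    without ever forming the d x d matrix H_k."""
    p = v.copy()
    alphas = []
    # First loop: traverse correction pairs from newest to oldest.
    for s, y in zip(reversed(s_list), reversed(y_list)):
        alpha = s.dot(p) / s.dot(y)
        alphas.append(alpha)
        p -= alpha * y
    # Initial Hessian-inverse guess H0 = (s^T y / y^T y) I, newest pair.
    s_new, y_new = s_list[-1], y_list[-1]
    p *= s_new.dot(y_new) / y_new.dot(y_new)
    # Second loop: traverse from oldest to newest.
    for (s, y), alpha in zip(zip(s_list, y_list), reversed(alphas)):
        beta = y.dot(p) / s.dot(y)
        p += (alpha - beta) * s
    return p
```

A quick sanity check is the secant equation: the BFGS update guarantees H_k y_k = s_k for the newest pair, so feeding v = y_k should return s_k.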

2.2. The proposed quasi-Newton scheme

Algorithm 1 outlines the main steps of our AsySQN method where k indexes the epochs, p indexes the processors, and t indexes the iterations in each epoch within each processor. Following the same sampling strategy in [22], we use two separate subsets S and T of data to estimate the gradient ∇fS (Lines 8-11) and the Hessian inverse HT (Lines 22-24), respectively. At each iteration in an epoch, S is used to update the gradient whereas the Hessian inverse is updated at each epoch using a subsample T. In order to better preserve the curvature information, the size of subsample T should be relatively large, whereas the subsample S should be smaller to reduce the overall computation.

The algorithm requires several pre-defined parameters: step size η, limited memory size M, integers m and L, the number of threads P, and batch sizes ∣S∣ = b and ∣T∣ = b_h. We need the parameter m because we compute the full gradient of the objective function at the iterate x_k only every m = n/(b × L × P) epochs, which helps reduce the computational cost. Algorithm 1 can be split hierarchically into two parts: an inner loop (iteration) and an outer loop (epoch). The inner loop runs asynchronous parallel processes in which each thread updates x concurrently, and the epoch loop includes the L inner loops and an update of the curvature information.

In our implementation, at the k-th epoch, the vector s_k (Line 24) is the difference between the two iterates obtained before and after L (parallel) inner iterations, where x_k (Line 23) can be the average or the latest iterate of the L iterations. The vector y_k represents the difference between two gradients computed at x_k and x_{k−1} using the data subsample T_k: y_k = ∇f_{T_k}(x_k) − ∇f_{T_k}(x_{k−1}). By the mean value theorem, there exists a ξ ∈ [x_{k−1}, x_k] such that y_k = ∇²f_{T_k}(ξ)·s_k. The two options for computing y_k (Line 25 and Line 26) are both frequently used in quasi-Newton methods. In our numerical experiments, we run both options for comparison and show that Option II (Line 26) is more stable when the length of s_k, ∥s_k∥, becomes small.
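The two options can be contrasted on a subsampled least-squares objective (a toy setting we introduce here; for a quadratic loss the gradient is linear in x, so the two options coincide exactly, while Option II avoids differencing nearly identical gradients when ∥s_k∥ is small):

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(50, 5))        # subsample T_k of the data
b = rng.normal(size=50)

def grad(x):
    # gradient of f_T(x) = (1/2n)||Zx - b||^2
    return Z.T @ (Z @ x - b) / len(b)

def hess_vec(s):
    # exact Hessian-vector product (Z^T Z / n) s, no d x d matrix formed
    return Z.T @ (Z @ s) / len(b)

x_prev = rng.normal(size=5)
x_curr = x_prev + 0.1 * rng.normal(size=5)
s_k = x_curr - x_prev
y_opt1 = grad(x_curr) - grad(x_prev)   # Option I: gradient difference
y_opt2 = hess_vec(s_k)                 # Option II: Hessian-vector product
```

For non-quadratic losses the two options differ by roughly O(∥s_k∥²) when the Hessian is Lipschitz, and Option II remains well-scaled as ∥s_k∥ → 0.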

Stochastic gradient descent methods generally have difficulty achieving a solution of high precision, but they reduce the objective function quickly in the beginning. Thus, a desirable strategy is to quickly find a neighborhood of the optimum using a first-order (gradient-based) method and then switch to a second-order method. In practice, we warm-start with SVRG (not shown in Algorithm 1) and, after a certain number of iterations of quick descent, switch to the SQN method to efficiently obtain a more accurate optimal value.
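The warm-start strategy can be sketched as follows (a toy illustration on an ill-conditioned quadratic; the second-order phase is idealized as a Newton step, and the switching threshold 1e-1 is our own choice):

```python
import numpy as np

# Ill-conditioned quadratic f(x) = 0.5 (x - x*)^T A (x - x*), kappa = 100.
A = np.diag([100.0, 1.0])
x_star = np.ones(2)
grad = lambda x: A @ (x - x_star)

x = np.zeros(2)
# Phase 1: first-order descent until the iterate is roughly in the optimal basin.
while np.linalg.norm(grad(x)) > 1e-1:
    x = x - 0.009 * grad(x)            # step size < 1/l with l = 100
# Phase 2: switch to a (quasi-)Newton step for a high-precision solution.
for _ in range(3):
    x = x - np.linalg.solve(A, grad(x))
```

On this quadratic the first-order phase crawls along the flat direction, while the second-order phase reaches machine precision in a single step.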

2.3. The proposed asynchronous parallel quasi-Newton

We propose a multi-threaded process in which each thread makes stochastic updates to a centrally stored vector x (in shared memory) without waiting, similar to the processes used in asynchronous stochastic gradient descent (AsySGD) [2, 37], asynchronous stochastic coordinate descent (AsySCD) [38, 39], and asynchronous SVRG (AsySVRG) [6, 40]. In the multi-threaded process, there are P threads; each thread reads the latest x from the shared memory, calculates the next iterate concurrently, and then writes the iterate back into the shared memory.

We use an example in Figure 1 to illustrate the process of this parallel algorithm. At the beginning of each epoch, the initial x_0 (for notational convenience) is the current x_k from the shared memory. When a thread finishes computing the next iterate, it immediately writes x into the shared memory. Using modern memory techniques, locks guarantee that at any time only one thread can write to the shared memory. Let us use a global timer t that records every time the shared memory is updated. In an asynchronous algorithm, while one thread is updating x to x_t, another thread may still use an old state x_{D(t)} to compute the search direction and update x. Let D(t) ∈ [t] be the particular state of x held by a thread when the shared memory (x) is updated at time t. Specifically, let D′(t) ∈ [t] be the state of x that has been used to update x to x_t. We assume that the maximum delay among processes is bounded by τ, that is, t − D(t) ⩽ τ. When thread p ∈ {1, …, P} updates x to x_t, the actual state of x used within this thread to calculate the search direction is D′(t). Hence, we have x_t = x_{t−1} − ηH(x_{D′(t)})v(x_{D′(t)}), where the delay within thread p is t − D′(t), which is obviously at most τ. Considering that the maximum number of iterations in each epoch is L, we have 0 ⩽ τ ⩽ L < m.
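The shared-memory interaction can be sketched with Python threads (a schematic we add for illustration, with a plain gradient step standing in for the VR gradient and two-loop direction): each worker reads a possibly stale copy of x, computes its step, and writes back under a lock, so concurrent workers may operate on delayed states x_{D(t)} exactly as described above.

```python
import threading
import numpy as np

x = np.ones(4)                  # shared iterate in "shared memory"
lock = threading.Lock()

def worker(n_iters=200, eta=0.1):
    for _ in range(n_iters):
        with lock:              # atomic read of the shared state
            x_local = x.copy()
        # Gradient of f(x) = 0.5*||x||^2 at the (possibly stale) copy;
        # in AsySQN this would be the VR gradient and two-loop direction.
        step = eta * x_local
        with lock:              # atomic write; may overwrite newer updates
            x[:] = x_local - step

threads = [threading.Thread(target=worker) for _ in range(4)]
for th in threads:
    th.start()
for th in threads:
    th.join()
```

Because every written state is a contraction of some earlier shared state, the iterate still makes progress toward the minimizer despite the delays.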

Figure 1:


The key asynchronous parallel steps are as follows: 1. Subsample S; 2. Read x from the shared memory; 3. Compute stochastic gradients g_1, g_2, and v; 4. Compute the two-loop recursion and update x; 5. Write x to the shared memory; 6. Subsample T and update the correction pairs. Before these steps, there is another step that computes the full gradient for all threads, i.e., 0. Calculate the full gradient. Here we show the asynchronous parallel updating of the threads, where t is a global timer in the epoch, recording the number of times x has been updated. For example, at t = 2, Thread 2 updates x to x_2 in the shared memory while Thread 1 and Thread 3 are in the middle of calculations using x_1 and x_0, respectively; hence the current x index for Thread 1 is 1, which we denote as D(t) for Thread 1, and similarly D(t) = 0 for Thread 3, so the maximum delay τ = t − D(t) is 2. For Thread 2, the delayed search direction H(x_{D′(2)})v(x_{D′(2)}) is used to update x_1 to x_2, where D′(2) = 0; therefore the delay is 2. A similar analysis applies to each state of x.

In summary, we design an asynchronous parallel stochastic quasi-Newton method, named AsySQN, to improve computational efficiency. In our experiments (Section 5), AsySQN shows an obvious speedup over the original stochastic quasi-Newton method SQN-VR due to the asynchronous parallel setting. Meanwhile, it still enjoys the advantages of common second-order methods: stability and accuracy. Furthermore, in some machine learning problems it is not easy to normalize the data, or further transformations would be needed to remove the ill-conditioning of the objective function. In such cases, AsySQN can perform better than AsySVRG or other first-order algorithms.

3. Preliminaries

3.1. Definitions

Each epoch consists of m iterations. At the beginning of each epoch, the ScheduleUpdate rule is triggered (i.e., after every m steps); for brevity, we use w_k to denote the iterate chosen at the k-th epoch.

At the t-th iteration of the (k + 1)-th epoch:

The updating formula is x_{t+1} = x_t − ηH_t v_t, where the step size η ∈ ℝ_+ determines the length of each move; here we do not require a diminishing step size to guarantee the convergence rate;

Stochastic variance-reduced gradient: u_t = ∇f_{i_t}(x_t) − ∇f_{i_t}(w_k) + ∇f(w_k), where i_t is randomly chosen from a subset S ⊆ {1, …, n}. One easily obtains the expectation of the stochastic variance-reduced gradient: E[u_t] = ∇f(x_t);

The delayed stochastic variance-reduced (VR) gradient used in the asynchronous setting: v_t = ∇f_{i_t}(x_{D(t)}) − ∇f_{i_t}(w_k) + ∇f(w_k). Because the full gradient was computed beforehand, it is evaluated at w_k, whereas the current iterate is x_{D(t)} due to the delay. Taking the expectation of the delayed gradient, we get E[v_t] = ∇f(x_{D(t)}).
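The unbiasedness E[v_t] = ∇f(x_{D(t)}) can be checked numerically on a small least-squares problem (our own toy example): averaging v_t over all n choices of i_t recovers the full gradient at the stale iterate exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
Z = rng.normal(size=(20, 3))
b = rng.normal(size=20)
n = len(b)

def grad_i(x, i):
    # gradient of the single-example loss f_i(x) = 0.5*(z_i^T x - b_i)^2
    return (Z[i] @ x - b[i]) * Z[i]

def full_grad(x):
    return Z.T @ (Z @ x - b) / n

w_k = rng.normal(size=3)       # snapshot where the full gradient is computed
x_stale = rng.normal(size=3)   # delayed iterate x_{D(t)} read from shared memory
mu_k = full_grad(w_k)

# v_t = grad_i(x_{D(t)}) - grad_i(w_k) + full_grad(w_k), for each choice of i_t
v_all = np.array([grad_i(x_stale, i) - grad_i(w_k, i) + mu_k for i in range(n)])
```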

3.2. Assumptions

We make the same assumptions as the non-asynchronous version of the SQN method that also uses the VR technique [25].

Assumption 1. The loss function fi is μ–strongly convex, that is,

f_i(y) ≥ f_i(x) + ⟨∇f_i(x), y − x⟩ + (μ/2)∥y − x∥², ∀x, y.

Assumption 2. The loss function fi has l–Lipschitz continuous gradients:

∥∇f_i(x) − ∇f_i(y)∥ ≤ l∥x − y∥, ∀x, y.

Since averaging the loss functions f_i over i preserves continuity and convexity, the objective function f also satisfies these two assumptions, and we can easily derive the following lemmas.

Lemma 3.1. Suppose that Assumptions 1 and 2 hold. The Hessian matrix is bounded by the two positive constants μ and l such that

μI ⪯ B_k ⪯ lI.

The second-order information of the objective function f is bounded by μ and l: μI ⪯ ∇²f ⪯ lI; therefore all Hessian matrices B share the same bounds, and the condition number of the objective function is κ(B) = l/μ ≥ 1.
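As a concrete check of these bounds, consider an ℓ2-regularized least-squares loss (an example we add for illustration): its constant Hessian has eigenvalues in [μ, l] with μ = λ and l = λ + σ_max(Z)²/n.

```python
import numpy as np

rng = np.random.default_rng(2)
Z = rng.normal(size=(100, 6))
n, lam = 100, 0.1

# Hessian of f(x) = (1/2n)||Zx - b||^2 + (lam/2)||x||^2 is constant:
B = Z.T @ Z / n + lam * np.eye(6)
eigs = np.linalg.eigvalsh(B)
mu = lam                                   # strong-convexity constant
l = lam + np.linalg.norm(Z, 2) ** 2 / n    # Lipschitz constant of the gradient
```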

Lemma 3.2. Suppose that Assumptions 1 and 2 hold, and let H_k be the Hessian inverse matrix. Then for all k ≥ 1, there exist constants 0 < μ1 < μ2 such that H_k satisfies

μ1 I ⪯ H_k ⪯ μ2 I,

where μ1 = 1/((d + M)l), μ2 = ((d + M)l)^{d+M−1}/μ^{d+M}, l is the Lipschitz constant, M is the limited memory size, and d is the dimension of x.

The proof can be found in [25]. Similarly, all generated Hessian inverse matrices share the same condition number: κ(H) = μ2/μ1 = (d + M)^{d+M}(κ(B))^{d+M}. Obviously, the ill-conditioning of the Hessian inverse is amplified. For large-scale datasets, the objective function can be extremely ill-conditioned, so achieving good performance in this ill-conditioned situation deserves attention. As a second-order method, our AsySQN still enjoys a fast convergence rate in ill-conditioned situations (see the main Theorem 4.6 and Corollary 4.6.1). We also present extensive experiments demonstrating this property in our simulation study.

4. Convergence Analysis

In the asynchronous parallel setting, all working threads read and update without synchronization, which causes delayed updates for other threads. Hence, the stochastic variance-reduced gradients may not be computed using the latest iterate of x. Using the gradient v_t defined above, each thread performs the update x_{t+1} = x_t − ηH_t v_t.

Lemma 4.1. Within one epoch, the delay of the parameter x read from the shared memory can be bounded by:

E[∥x_t − x_{D(t)}∥] ≤ ημ2 Σ_{j=D(t)}^{t−1} E[∥v_j∥].

Proof. From Lemma 3.2 and the updating formula x_{t+1} = x_t − ηH_t v_t,

E[∥x_t − x_{D(t)}∥] ≤ Σ_{j=D(t)}^{t−1} E[∥x_{j+1} − x_j∥] = Σ_{j=D(t)}^{t−1} ηE[∥H_j v_j∥] ≤ ημ2 Σ_{j=D(t)}^{t−1} E[∥v_j∥]. □

Lemma 4.2. Suppose that Assumption 2 holds. The delay in the stochastic VR gradient of the objective function f satisfies:

E[∥u_t − v_t∥] ≤ lημ2 Σ_{j=D(t)}^{t−1} E[∥v_j∥].

Proof. From Assumption 2 and Lemmas 3.2 and 4.1,

E[∥u_t − v_t∥] = E[∥∇f_{i_t}(x_t) − ∇f_{i_t}(x_{D(t)})∥] ≤ lE[∥x_t − x_{D(t)}∥] ≤ lημ2 Σ_{j=D(t)}^{t−1} E[∥v_j∥]. □

Lemma 4.3. Suppose that Assumption 1 holds, and denote by w* the unique minimizer of the objective function f. Then for any x we have

∥∇f(x)∥² ≥ 2μ(f(x) − f(w*)).

Proof. From Assumption 1,

f(w*) ≥ f(x) + ∇f(x)^⊤(w* − x) + (μ/2)∥w* − x∥² ≥ f(x) + min_ξ (∇f(x)^⊤ξ + (μ/2)∥ξ∥²) = f(x) − (1/(2μ))∥∇f(x)∥².

Here we let ξ = w* − x; the quadratic in ξ achieves its minimum at ξ = −∇f(x)/μ. □

Lemma 4.4. Suppose that Assumptions 1 and 2 hold. Let f* be the optimal value of Eq.(2). Considering the stochastic variance-reduced gradient, we have:

E[∥u_t∥²] ≤ 4l E[f(x_t) − f* + f(w_k) − f*].

See [24] for the proof.

Theorem 4.5. Suppose that Assumptions 1 and 2 hold. The gradient delay in an epoch can be bounded as follows:

Σ_{t=km}^{km+m−1} E[∥v_t∥²] ≤ (2/(1 − 2l²η²μ2²τ²)) Σ_{t=km}^{km+m−1} E[∥u_t∥²].

Proof.

E[∥v_t∥²] ≤ 2E[∥v_t − u_t∥²] + 2E[∥u_t∥²] ≤ 2l²η²μ2²τ Σ_{j=D(t)}^{t−1} E[∥v_j∥²] + 2E[∥u_t∥²]

Summing from t = km to t = km + m − 1, we get

Σ_{t=km}^{km+m−1} E[∥v_t∥²] ≤ Σ_{t=km}^{km+m−1} 2l²η²μ2²τ Σ_{j=D(t)}^{t−1} E[∥v_j∥²] + 2 Σ_{t=km}^{km+m−1} E[∥u_t∥²] ≤ 2l²η²μ2²τ² Σ_{t=km}^{km+m−1} E[∥v_t∥²] + 2 Σ_{t=km}^{km+m−1} E[∥u_t∥²]

By rearranging, we get the inequality above. □

Theorem 4.6 (Main theorem). Suppose that Assumptions 1 and 2 hold, step size η and epoch size m are chosen such that the following condition holds:

0 < θ ≜ (1 + C)/(1 + mημμ1 − C) < 1,

where C = 4ml²η²μ2²(lημ1τ + 1)/(1 − 2l²η²μ2²τ²). Then for all k ≥ 0 we have

E[f(w_{k+1}) − f*] ≤ θ E[f(w_k) − f*].

Proof. We start from the Lipschitz continuity condition:

f(x_{t+1}) ≤ f(x_t) + ∇f(x_t)^⊤(x_{t+1} − x_t) + (l/2)∥x_{t+1} − x_t∥² = f(x_t) − η∇f(x_t)^⊤H_t v_t + (lη²/2)∥H_t v_t∥² ≤ f(x_t) − ημ1 ∇f(x_t)^⊤v_t + (lη²μ2²/2)∥v_t∥²

Taking the expectation on both sides,

E[f(x_{t+1})] ≤ E[f(x_t)] − ημ1 E[u_t]^⊤E[v_t] + (lη²μ2²/2)E[∥v_t∥²]

Given a^⊤b = (∥a + b∥² − ∥a − b∥²)/4, we can derive a^⊤b = −(1/2)∥a − b∥² + (1/2)∥a∥² + (1/2)∥b∥². Thus,

E[u_t]^⊤E[v_t] = −(1/2)∥E[u_t] − E[v_t]∥² + (1/2)∥E[u_t]∥² + (1/2)∥E[v_t]∥² = −(1/2)∥∇f(x_t) − ∇f(x_{D(t)})∥² + (1/2)∥∇f(x_t)∥² + (1/2)∥∇f(x_{D(t)})∥² ≥ −(l²/2)∥x_t − x_{D(t)}∥² + μ(f(x_t) − f*) + μ(f(x_{D(t)}) − f*) ≥ −(l²η²μ2²/2) Σ_{j=D(t)}^{t−1} E[∥v_j∥²] + μ(f(x_t) − f*) + μ(f(x_{D(t)}) − f*)

The first inequality follows from the Lipschitz continuity condition and Lemma 4.3.

Substituting this in the inequality above, we get the following result:

E[f(x_{t+1})] ≤ E[f(x_t)] + ημ1((l²η²μ2²/2) Σ_{j=D(t)}^{t−1} E[∥v_j∥²] − μE[f(x_t) − f*] − μE[f(x_{D(t)}) − f*]) + (lη²μ2²/2)E[∥v_t∥²]
= E[f(x_t)] + (l²η³μ2²μ1/2) Σ_{j=D(t)}^{t−1} E[∥v_j∥²] − ημμ1 E[f(x_t) − f*] − ημμ1 E[f(x_{D(t)}) − f*] + (lη²μ2²/2)E[∥v_t∥²]
≤ E[f(x_t)] + (l²η³μ2²μ1/2) Σ_{j=D(t)}^{t−1} E[∥v_j∥²] − ημμ1 E[f(x_t) − f*] + (lη²μ2²/2)E[∥v_t∥²]

The last inequality follows from the fact that f* = min f(x), so f(x_{D(t)}) − f* ≥ 0.

Summing from t = km to t = km + m − 1, we get

Σ_{t=km}^{km+m−1} E[f(x_{t+1}) − f(x_t)]
≤ (l²η³μ2²μ1/2) Σ_{t=km}^{km+m−1} Σ_{j=D(t)}^{t−1} E[∥v_j∥²] + (lη²μ2²/2) Σ_{t=km}^{km+m−1} E[∥v_t∥²] − ημμ1 Σ_{t=km}^{km+m−1} E[f(x_t) − f*]
≤ (l²η³μ2²μ1τ/2) Σ_{t=km}^{km+m−1} E[∥v_t∥²] + (lη²μ2²/2) Σ_{t=km}^{km+m−1} E[∥v_t∥²] − ημμ1 m E[f(w_{k+1}) − f*]
= (lη²μ2²(lημ1τ + 1)/2) Σ_{t=km}^{km+m−1} E[∥v_t∥²] − ημμ1 m E[f(w_{k+1}) − f*]
≤ (lη²μ2²(lημ1τ + 1)/2) · (2/(1 − 2l²η²μ2²τ²)) Σ_{t=km}^{km+m−1} E[∥u_t∥²] − ημμ1 m E[f(w_{k+1}) − f*]
= (lη²μ2²(lημ1τ + 1)/(1 − 2l²η²μ2²τ²)) Σ_{t=km}^{km+m−1} E[∥u_t∥²] − ημμ1 m E[f(w_{k+1}) − f*]
≤ (lη²μ2²(lημ1τ + 1)/(1 − 2l²η²μ2²τ²)) Σ_{t=km}^{km+m−1} 4l E[f(x_t) − f* + f(w_k) − f*] − ημμ1 m E[f(w_{k+1}) − f*]
≤ (4ml²η²μ2²(lημ1τ + 1)/(1 − 2l²η²μ2²τ²) − mημμ1) E[f(w_{k+1}) − f*] + (4ml²η²μ2²(lημ1τ + 1)/(1 − 2l²η²μ2²τ²)) E[f(w_k) − f*] (7)

The second inequality uses the assumption that the delay within one epoch is at most τ. The third inequality uses the result of Theorem 4.5, so all delayed stochastic VR gradients can be bounded by the stochastic VR gradients. The fourth inequality holds by Lemma 4.4. In the epoch-based setting,

Σ_{t=km}^{km+m−1} E[f(x_{t+1}) − f(x_t)] = E[f(x_{(k+1)m}) − f(x_{km})] = E[f(w_{k+1}) − f(w_k)].

Thus the above inequality gives

E[f(w_{k+1}) − f*] ≤ E[f(w_k) − f*] + (4l²η²μ2²(lημ1τ + 1)/(1 − 2l²η²μ2²τ²)) Σ_{t=km}^{km+m−1} E[f(w_k) − f*] − (mημμ1 − 4ml²η²μ2²(lημ1τ + 1)/(1 − 2l²η²μ2²τ²)) E[f(w_{k+1}) − f*].

By rearranging the above gives

(1 + mημμ1 − 4ml²η²μ2²(lημ1τ + 1)/(1 − 2l²η²μ2²τ²)) E[f(w_{k+1}) − f*] ≤ (1 + 4ml²η²μ2²(lημ1τ + 1)/(1 − 2l²η²μ2²τ²)) E[f(w_k) − f*].

Hence, we get the following result:

E[f(w_{k+1}) − f*] ≤ θ E[f(w_k) − f*],

where θ = (1 + C)/(1 + mημμ1 − C), C = 4ml²η²μ2²(lημ1τ + 1)/(1 − 2l²η²μ2²τ²). □

From Theorem 4.6 we can see that η and τ should satisfy the condition 1 − 2l²η²μ2²τ² > 0 to ensure the result holds. Therefore, we reach an upper bound on the step size: η < 1/(√2 lμ2τ). By substituting the result of Lemma 3.2, we can easily derive lμ2τ = l · ((d + M)l)^{d+M−1}/μ^{d+M} · τ = (d + M)^{d+M−1}(l/μ)^{d+M} τ > 1.

To ensure that θ = (1 + C)/(1 + mημμ1 − C) ∈ (0, 1), we require mημμ1 > 2C with C > 0, and this requirement can be written equivalently as follows:

(1 − 2l²η²μ2²τ²) mημμ1 > 8ml²η²μ2²(lημ1τ + 1).

We can treat the above inequality as a quadratic inequality with respect to the step size η,

2l²μ1μ2²τ(4l + μτ)η² + 8l²μ2²η − μμ1 < 0,

and it can be checked that the inequality is satisfied when η ∈ (0, (√(16l²μ2² + 2μ1²μτ(4l + μτ)) − 4lμ2)/(2lμ1μ2τ(4l + μτ))). Noticing that this range falls inside the previous restriction η < 1/(√2 lμ2τ), we obtain a more accurate bound on the step size that ensures the convergence rate θ falls in (0, 1). For simplicity, we set the step size η = 1/(√2 lμ2m); it is easy to check that the quadratic inequality above is then certainly satisfied.
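This step-size range can be verified numerically (a small check we add; the constants are arbitrary test values satisfying 0 < μ ≤ l and 0 < μ1 ≤ μ2): the closed-form upper bound is the positive root of the quadratic, and it lies below the earlier restriction 1/(√2 lμ2τ).

```python
import numpy as np

# arbitrary test constants with 0 < mu <= l and 0 < mu1 <= mu2
l, mu, mu1, mu2, tau = 2.0, 0.5, 0.2, 3.0, 4.0

# quadratic 2 l^2 mu1 mu2^2 tau (4l + mu*tau) eta^2 + 8 l^2 mu2^2 eta - mu*mu1 < 0
a = 2 * l**2 * mu1 * mu2**2 * tau * (4 * l + mu * tau)
b = 8 * l**2 * mu2**2
c = -mu * mu1

# closed-form positive root stated in the text
eta_max = (np.sqrt(16 * l**2 * mu2**2 + 2 * mu1**2 * mu * tau * (4 * l + mu * tau))
           - 4 * l * mu2) / (2 * l * mu1 * mu2 * tau * (4 * l + mu * tau))
```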

Corollary 4.6.1. Suppose that the conditions in Theorem 4.6 hold, and for simplicity set the step size η = 1/(√2 lμ2m) and the maximum delay τ = L < m. If m > 8κ(B)κ(H) + 4κ(B), then the asynchronous parallel SQN has the following convergence rate:

E[f(w_{k+1}) − f*] ≤ (((m + 2)κ(B)κ(H) + κ(B))/((m − 2)κ(B)κ(H) − κ(B) + m/2)) E[f(w_k) − f*].

Proof. We first check that when η = 1/(√2 lμ2m),

C = 4ml²η²μ2²(lημ1τ + 1)/(1 − 2l²η²μ2²τ²) = (2/m) · (μ1L/(√2 μ2 m) + 1)/(1 − (L/m)²) < 1/κ(H) + 2/m.

This gives a simplified C, which is still positive. The convergence rate correspondingly becomes:

θ = (1 + C)/(1 + mημμ1 − C) < (1 + 1/κ(H) + 2/m)/(1 + 1/(√2 κ(B)κ(H)) − 1/κ(H) − 2/m) = ((m + 2)κ(B)κ(H) + κ(B))/((m − 2)κ(B)κ(H) − κ(B) + m/2).

When m > 2, θ > 0 is guaranteed. Further, to ensure the convergence rate θ < 1, we need m > 8κ(B)κ(H) + 4κ(B). □

Remark 4.7 (Step size). As the step size η diminishes to 0, C → 0 and θ → 1, which means convergence becomes slower as the step size gets smaller. A decaying step size makes convergence easier to guarantee, but it is not the best choice for most algorithms, so we use a constant step size here.

It is well known that deterministic quasi-Newton methods (e.g., L-BFGS) enjoy a superlinear convergence rate, while stochastic versions of quasi-Newton methods (including L-BFGS) sacrifice this for a sublinear convergence rate in strongly convex optimization; with the help of the variance reduction technique, SQN-VR reaches a linear convergence rate. Compared with the original SQN-VR, our method still achieves a linear convergence rate even in the asynchronous setting. The delay parameter τ of the asynchronous setting affects the linear rate θ; more details are given below.

Remark 4.8 (Asynchronous delay). If the delay parameter τ diminishes to 0, the convergence rate corresponds to the stochastic parallel one. SQN-VR [25] reaches a linear convergence rate, which is the upper bound since sequential algorithms have no delay in updating. Further, for synchronous parallel SQN the convergence rate is unchanged. However, once delay is introduced by asynchrony, the convergence rate is certainly impaired, and the algorithm may even fail to converge. Our main theorem proves that, even with the asynchronous updating scheme, the convergence rate can still be linear. In the worst asynchronous case, i.e., when the delay is maximized to the epoch size, Corollary 4.6.1 shows the linear convergence rate.

Remark 4.9 (Affine invariance). It is known that second-order methods are invariant to linear scaling, which is not true for gradient descent, whose rate depends exponentially on the condition number. For stochastic quasi-Newton methods, the convergence rates in [22, 25] did not show that SQN keeps affine invariance. In our analysis, Theorem 4.6 shows that θ = O((1 + C)/(1 + 1/κ(B) − C)), where C = O(1/κ(H)) when fixing all other parameters, specifically m, d, M, τ. In the analysis of Corollary 4.6.1, when m is chosen much larger than 8κ(B)κ(H) + 4κ(B), the linear convergence rate θ ≈ m/(m + m/2) = 2/3. We then conclude that θ does not depend on the condition number; this theoretical result, together with our empirical study, verifies affine invariance.

Remark 4.10 (Limitation on subsample T). In the stochastic Hessian inverse construction, if the subsample T shrinks to zero, no curvature information is captured and SQN degrades to a first-order gradient descent method; if T grows to the entire dataset of size n, SQN uses the full, costly Hessian inverse approximation and is no longer efficient. As a balance, the subsample T should be a moderately large, independent subset of the data. In our experiments, we choose the size of T to be 10 times the size of the subsample S.

5. Empirical Study

In this section, we compare the performance of our algorithm AsySQN with the state-of-the-art SQN-VR method [25] and the distributed quasi-Newton method DAve-QN [34] to show the improvement brought by the asynchronous parallel setting. Moreover, we also compare with the classical stochastic gradient descent (SGD) [1], the stochastic variance-reduced gradient method (SVRG) [24], and their asynchronous versions [2, 6] to give some insight into first-order gradient methods versus second-order quasi-Newton methods. We employ a machine with 32 GB main memory and 8 Intel Xeon CPU E5-2667 v3 @ 3.2 GHz processors. The experiments are performed on both synthetic and real-world datasets with different numbers of data points and dimensions.

Datapass analysis.

We use the number of times the whole dataset is visited, called a datapass, as the performance measure of convergence speed. This measure is independent of the actual implementation of the algorithms and is a well-established convention in the literature on both stochastic first-order and second-order methods [24, 4, 41, 22, 25, 23].

In each epoch, the SVRG and AsySVRG algorithms visit the whole dataset twice (datapass = 2): once for updating the parameter x and once for calculating the average gradient. For SQN-VR and AsySQN, each epoch takes 2 + bH/(b × L) × P datapasses, where P is the number of threads. The extra datapasses of AsySQN are introduced by the Hessian-vector product calculations.
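The datapass accounting above can be written as a small helper (a sketch; the function name and the `first_order` flag are our own):

```python
def datapasses_per_epoch(b, bH, L, P, first_order=False):
    """Datapasses per epoch.

    SVRG-style methods cost 2 (full-gradient pass + update pass).
    The quasi-Newton variants add bH/(b*L) per thread for the subsampled
    Hessian-vector products, giving 2 + bH/(b*L) * P in total.
    """
    if first_order:
        return 2.0
    return 2.0 + (bH / (b * L)) * P

# With bH = 10*b, a Hessian update every L = 100 iterations, and P = 8
# threads, the second-order overhead is 10/100 * 8 = 0.8 extra datapasses.
```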

Speedup analysis.

We compare the stochastic quasi-Newton method (SQN-VR) and its asynchronous parallel version (AsySQN) in terms of speedup to show the advantage of the asynchronous parallel strategy. We use the stopping criterion f(w) − f* < ϵ, where f* is computed in advance, and require a high-precision optimum with error ϵ = 10^−30. The speedup results for the simulation and real datasets are shown in Figures 4 and 6, respectively.
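A minimal sketch of the stopping criterion and the speedup measure S(P) = T(1)/T(P) used in these plots (the helper names are ours):

```python
def converged(f_w, f_star, eps=1e-30):
    """Stopping criterion f(w) - f* < eps used in the speedup experiments."""
    return f_w - f_star < eps

def speedup(time_by_threads):
    """Speedup over the single-thread run: S(P) = T(1) / T(P).

    time_by_threads maps a thread count P to the wall-clock time needed
    to reach the eps-accurate solution with P threads.
    """
    t1 = time_by_threads[1]
    return {P: t1 / tP for P, tP in time_by_threads.items()}

s = speedup({1: 120.0, 2: 64.0, 4: 36.0, 8: 24.0})
```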

Figure 4: Comparisons of different asynchronous algorithms on 20-D and 200-D simulation datasets.

Figure 6: Datapasses and speedup of logistic regression with the Real_sim dataset (a)(d) and the MNIST dataset (b)(e), and of SVM with the RCV1 dataset (c)(f).

Since we make no sparsity assumptions and the Hessian approximation is always dense, a lock is used when updating the parameter x in shared memory; this introduces extra waiting time, so we do not reach the ideal linear speedup. Nevertheless, a clear speedup is still observed. For fairness, we compare against the locked versions of the asynchronous first-order methods.
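A minimal Python sketch of the locked shared-memory update (illustrative only; the names are ours, and the actual implementation is a multi-threaded shared-memory program):

```python
import threading
import numpy as np

# Shared iterate and its lock. Because the quasi-Newton direction H*v is
# dense, every coordinate of x changes at each step, so a lock-free
# HOGWILD!-style per-coordinate update is not safe here.
x = np.zeros(100)
x_lock = threading.Lock()

def apply_update(direction, step_size):
    """Locked dense update of the shared parameter vector."""
    global x
    with x_lock:                 # threads serialize here -> sub-linear speedup
        x -= step_size * direction
```

The waiting time inside the lock is exactly the overhead that keeps the measured speedup below the ideal linear curve.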

CPU time analysis.

We also report real clock time when comparing the second-order methods, as shown in Figures 5 and 7. The stopping criterion and precision settings are the same as in the speedup analysis.

Figure 5: CPU time of SQN-VR, its asynchronous version AsySQN, and the distributed DAve-QN on 20-D and 200-D simulation datasets.

Figure 7: CPU time of logistic regression with the Real_sim dataset (a) and the MNIST dataset (b), and of SVM with the RCV1 dataset (c).

Experiment setup.

We conduct two simulations using synthetic datasets (moderate-dimensional Simulation I and high-dimensional Simulation II, both in the ill-conditioned case) and evaluations on three real datasets (Real_sim, MNIST, RCV1) to compare our methods against the competitors. In all experiments, the batch size b is set to 5, 10, 20, or 50, and the Hessian batch size bH is set to 10b. The limited-memory size is M = 10 in both SQN-VR and AsySQN. Based on preliminary tests with SQN-VR, we keep P × L roughly constant for each dataset, which means a smaller L is preferred when more threads are used. For all algorithms, a constant step size is chosen via grid search. All experiments are initialized randomly except Simulation I, where we fix the initialization in order to trace the algorithms (see Figure 3). All parallel experiments use 8 cores.

Figure 3: Results under ill-conditioning (same setting as Figure 2(b)), demonstrating the stability of AsySQN. AsySQN still drops quickly to the optimum, with only small perturbations caused by the asynchronous updates, whereas AsySVRG jumps noticeably compared with SVRG under the asynchronous setting.

5.1. Experiments using synthetic datasets

To explore the properties of our algorithm, Simulations I and II are performed on the least-squares problem:

$$\min_{x \in \mathbb{R}^d} \; \frac{1}{n} \sum_{i=1}^{n} \left(y_i - z_i^T x\right)^2. \tag{8}$$

The goal of Simulation I is to study under what circumstances our algorithm AsySQN is preferred over the first-order methods (AsySGD and AsySVRG). Least-squares problems with various degrees of ill-conditioning are generated as follows: two features z1 and z2 are drawn uniformly from the interval [0, 1], and the label is y = a × z1 + b × z2 + ϵ, where ϵ follows a normal distribution with mean 0 and standard deviation 1. To control the degree of ill-conditioning, we use four settings of the (a, b) pair: (0.1, 10), (1, 10), (1, 5), and (1, 1), corresponding to extremely ill-conditioned, ill-conditioned, moderately ill-conditioned, and well-behaved least-squares problems, respectively.
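The generation procedure above can be sketched in NumPy (the function name is ours):

```python
import numpy as np

def make_ill_conditioned(n, a, b, seed=None):
    """Generate the two-feature least-squares data of Simulation I.

    z1, z2 ~ Uniform[0, 1]; y = a*z1 + b*z2 + eps with eps ~ N(0, 1).
    The (a, b) pairs (0.1, 10), (1, 10), (1, 5), (1, 1) give extremely
    ill-conditioned through well-behaved problems.
    """
    rng = np.random.default_rng(seed)
    Z = rng.uniform(0.0, 1.0, size=(n, 2))
    eps = rng.normal(0.0, 1.0, size=n)
    y = a * Z[:, 0] + b * Z[:, 1] + eps
    return Z, y

Z, y = make_ill_conditioned(10_000, a=0.1, b=10)  # extreme ill-conditioning
```

The ratio b/a controls how elongated the level sets of the least-squares objective are, which is what makes the problem hard for gradient methods.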

From Figure 2, we can see that in the extremely ill-conditioned case, AsySQN performs significantly better than the first-order methods: it achieves high-precision solutions within a few datapasses, whereas AsySVRG and AsySGD can barely solve the problem. In the (moderately) ill-conditioned cases, AsySGD still performs poorly; AsySVRG and AsySQN both converge linearly to the optimum, with AsySQN faster in terms of datapasses. Even in the well-behaved setting, AsySQN remains comparable with AsySVRG. From (a)-(d), we conclude that the performance of AsySQN is more stable, which verifies the affine invariance property.

Figure 2: Least-squares problems under various degrees of ill-conditioning: (a) extreme ill-conditioning; (b) ill-conditioning; (c) moderate ill-conditioning; (d) well-behaved.

To see the details, we plot the traces of AsySQN and AsySVRG1. Figure 3 examines the results shown in Figure 2(b), i.e., how the trace information reveals the performance difference in ill-conditioned optimization. We first study the well-known first-order method SVRG and its asynchronous version AsySVRG in Figure 3(a); the asynchronous setting clearly causes wild jumps around the global minimum. Second-order methods are known to be stable, and the stochastic second-order method is indeed much more stable than SVRG in Figure 3(b). Even with the asynchronous setting, AsySQN still converges stably to the global optimum, as shown in Figures 3(c,d). In the side-by-side comparison of Figure 3(d), AsySVRG jumps wildly around the global minimum while AsySQN does not; AsySQN thus avoids the jumping that burdens direct asynchronous implementations of first-order methods. Hence, we conclude that AsySQN outperforms AsySVRG under ill-conditioned circumstances.

In practice, it is appealing to solve machine learning problems over high-dimensional datasets. Simulation II uses ill-conditioned synthetic datasets of dimensions 10^4 × 20 and 10^4 × 200, respectively. AsySGD and AsySVRG struggle to reach a high-precision solution quickly: they need almost 200 datapasses to approximate the optimal value within a precision of 10^−4. The DAve-QN method depends heavily on the condition number of the objective function and performs poorly in our ill-conditioned simulations, which confirms its theoretical analysis. In contrast, AsySQN achieves the high-precision solution (10^−30) within a finite number of datapasses, showing better performance on ill-conditioned datasets. Furthermore, AsySQN shows a clear speedup over the synchronous SQN-VR method on high-dimensional data; see Figure 4. We also include the speedup of AsySVRG as a reference. When the number of threads is 1, AsySQN reduces to the original non-parallel stochastic L-BFGS method (SQN-VR in our experiments). The CPU time comparisons among the second-order methods (the baseline SQN-VR, our AsySQN, and the state-of-the-art distributed quasi-Newton method DAve-QN) are shown in Figure 5. AsySQN takes the least time to reach a high-precision solution and shows an obvious speedup over SQN-VR. In summary, our algorithm scales well with the problem size and improves the speedup on high-dimensional optimization problems.

5.2. Experiments using real-world datasets

Three datasets are used for the real-data evaluations: Real_sim, MNIST, and RCV1, which can be downloaded from the LibSVM website2.

We conduct experiments on logistic regression, Eq. (9), with the Real_sim dataset (72,309 data points with 20,958 features, i.e., 72,309 × 20,958) and the MNIST dataset (60,000 × 780):

$$\min_{x \in \mathbb{R}^d} \; \frac{1}{n} \sum_{i=1}^{n} \left( \log\left(1 + \exp(-y_i z_i^T x)\right) + \lambda \|x\|^2 \right); \tag{9}$$

and we then run our algorithm on the SVM problem, Eq. (10), with the RCV1 dataset (677,399 × 47,236):

$$\min_{x \in \mathbb{R}^d} \; \frac{1}{n} \sum_{i=1}^{n} \left( \max\left(0, 1 - y_i z_i^T x\right) + \lambda \|x\|^2 \right); \tag{10}$$

where $z_i \in \mathbb{R}^d$ and $y_i$ is the corresponding label. In our experiments, we set the regularizer λ = 10^−3.
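For concreteness, the two objectives can be sketched in NumPy (helper names are ours; we assume labels y_i ∈ {−1, +1} and the standard logistic-loss sign convention):

```python
import numpy as np

def logistic_objective(x, Z, y, lam=1e-3):
    """Regularized logistic regression as in Eq. (9)."""
    margins = -y * (Z @ x)
    return np.mean(np.log1p(np.exp(margins))) + lam * np.dot(x, x)

def svm_objective(x, Z, y, lam=1e-3):
    """Regularized hinge-loss SVM as in Eq. (10)."""
    margins = 1.0 - y * (Z @ x)
    return np.mean(np.maximum(0.0, margins)) + lam * np.dot(x, x)
```

The logistic objective is smooth, while the hinge loss is non-smooth at the margin; both are used only through stochastic (sub)gradient and Hessian-vector estimates in the algorithms compared here.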

Note that we run the stochastic algorithms AsySGD, AsySVRG, AsySQN, and SQN-VR on all of these real datasets, but run the DAve-QN method only on the smallest dataset, MNIST. This is because DAve-QN stores and updates the full Hessian approximation (it does not use the limited-memory version of BFGS). Real_sim and RCV1 have large numbers of features, so running DAve-QN on them is impractical. DAve-QN performs better in terms of CPU time on the MNIST dataset. However, MNIST has a moderate size and is regular, that is, it exhibits no heterogeneity among features.

Figures 6 and 7 plot the comparisons of the methods above. In terms of convergence rate with respect to effective datapasses, AsySQN outperforms AsySVRG and AsySGD on all of these real datasets. When the number of threads equals 1, AsySQN is the original stochastic L-BFGS with variance reduction. As the number of threads increases, the speedup first grows at a nearly ideal linear rate and ascends to a peak. Beyond a certain number of threads (around 6 in our experiments), the speedup descends, due to the high cost of communication among threads, the overhead of the lock, and resource contention; thus the speedup curve is not as ideal as in the simulation plots. As noted in earlier research [42], the benchmarks downloaded from LibSVM have been row-normalized, which may explain why AsySQN does not show a great advantage over AsySVRG, though it remains comparable in this well-behaved case, consistent with the simulation results above. In terms of speedup, we again plot AsySVRG as a reference: the locked AsySVRG reaches about 3x speedup with 8 cores relative to SVRG [6], and the second-order AsySQN, which requires more communication, reaches almost the same speedup with 8 cores. We would recommend 6 cores in practice. In conclusion, our algorithm works well on real datasets.

6. Conclusion

We have proposed a stochastic quasi-Newton method for an asynchronous parallel environment, which can solve large dense optimization problems to high accuracy. Compared with prior work on second-order methods, our algorithm is the first to implement the stochastic version in an asynchronous parallel way and comes with a theoretical guarantee of a linear rate of convergence in the strongly convex setting. The experimental results demonstrate that our parallel version effectively accelerates the stochastic L-BFGS algorithm. For ill-conditioned problems, our algorithm retains the effectiveness of second-order methods and converges much faster than the best first-order methods such as AsySVRG. The convergence proof can serve as a framework for analyzing stochastic second-order methods in the asynchronous setting. In future work, we may extend the algorithm and analysis to non-strongly convex or even non-convex loss functions.

Acknowledgments

This work was funded by NSF grants CCF-1514357, IIS-1447711, and DBI-1356655 to Jinbo Bi. Jinbo Bi was also supported by NIH grants K02-DA043063 and R01-DA037349.

Footnotes

Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

Declaration of Competing Interests

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

1

AsySVRG is known to outperform AsySGD, so we do not include the details of AsySGD here.

References

  • [1].Zinkevich M, Weimer M, Li L, Smola AJ, Parallelized stochastic gradient descent, in: Advances in neural information processing systems, 2010, pp. 2595–2603. [Google Scholar]
  • [2].Recht B, Re C, Wright S, Niu F, Hogwild: A lock-free approach to parallelizing stochastic gradient descent, in: Advances in neural information processing systems, 2011, pp. 693–701. [Google Scholar]
  • [3].Duchi J, Hazan E, Singer Y, Adaptive subgradient methods for online learning and stochastic optimization, Journal of Machine Learning Research 12 (July) (2011) 2121–2159. [Google Scholar]
  • [4].Defazio A, Bach F, Lacoste-Julien S, Saga: A fast incremental gradient method with support for non-strongly convex composite objectives, in: Advances in Neural Information Processing Systems, 2014, pp. 1646–1654. [Google Scholar]
  • [5].Kingma DP, Ba J, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980. [Google Scholar]
  • [6].Reddi SJ, Hefny A, Sra S, Poczos B, Smola AJ, On variance reduction in stochastic gradient descent and its asynchronous variants, in: Advances in Neural Information Processing Systems, 2015, pp. 2647–2655. [PMC free article] [PubMed] [Google Scholar]
  • [7].Schmidt M, Le Roux N, Bach F, Minimizing finite sums with the stochastic average gradient, Mathematical Programming 162 (1–2) (2017) 83–112. [Google Scholar]
  • [8].Dennis JE Jr, Moré JJ, Quasi-newton methods, motivation and theory, SIAM review 19 (1) (1977) 46–89. [Google Scholar]
  • [9].Nocedal J, Updating quasi-newton matrices with limited storage, Mathematics of computation 35 (151) (1980) 773–782. [Google Scholar]
  • [10].Dembo RS, Eisenstat SC, Steihaug T, Inexact newton methods, SIAM Journal on Numerical analysis 19 (2) (1982) 400–408. [Google Scholar]
  • [11].Liu DC, Nocedal J, On the limited memory bfgs method for large scale optimization, Mathematical programming 45 (1) (1989) 503–528. [Google Scholar]
  • [12].Bordes A, Bottou L, Gallinari P, Sgd-qn: Careful quasi-newton stochastic gradient descent, Journal of Machine Learning Research 10 (July) (2009) 1737–1754. [Google Scholar]
  • [13].Wang X, Ma S, Goldfarb D, Liu W, Stochastic quasi-newton methods for nonconvex stochastic optimization, SIAM Journal on Optimization 27 (2) (2017) 927–956. [Google Scholar]
  • [14].Mokhtari A, Eisen M, Ribeiro A, Iqn: An incremental quasi-newton method with local superlinear convergence rate, SIAM Journal on Optimization 28 (2) (2018) 1670–1698. [Google Scholar]
  • [15].Bottou L, Curtis FE, Nocedal J, Optimization methods for large-scale machine learning, SIAM Review 60 (2) (2018) 223–311. [Google Scholar]
  • [16].Karimireddy SP, Stich SU, Jaggi M, Global linear convergence of newton’s method without strong-convexity or lipschitz gradients, arXiv preprint arXiv:1806.00413. [Google Scholar]
  • [17].Marteau-Ferey U, Bach F, Rudi A, Globally convergent newton methods for ill-conditioned generalized self-concordant losses, in: Advances in Neural Information Processing Systems, 2019, pp. 7636–7646. [Google Scholar]
  • [18].Gao W, Goldfarb D, Quasi-newton methods: superlinear convergence without line searches for self-concordant functions, Optimization Methods and Software 34 (1) (2019) 194–217. [Google Scholar]
  • [19].Kovalev D, Gower RM, Richtárik P, Rogozin A, Fast linear convergence of randomized bfgs, arXiv preprint arXiv:2002.11337. [Google Scholar]
  • [20].Jin Q, Mokhtari A, Non-asymptotic superlinear convergence of standard quasi-newton methods, arXiv preprint arXiv:2003.13607. [Google Scholar]
  • [21].Dennis JE, Moré JJ, A characterization of superlinear convergence and its application to quasi-newton methods, Mathematics of computation 28 (126) (1974) 549–560. [Google Scholar]
  • [22].Byrd RH, Hansen SL, Nocedal J, Singer Y, A stochastic quasi-newton method for large-scale optimization, SIAM Journal on Optimization 26 (2) (2016) 1008–1031. [Google Scholar]
  • [23].Gower R, Goldfarb D, Richtárik P, Stochastic block bfgs: squeezing more curvature out of data, in: International Conference on Machine Learning, 2016, pp. 1869–1878. [Google Scholar]
  • [24].Johnson R, Zhang T, Accelerating stochastic gradient descent using predictive variance reduction, in: Advances in neural information processing systems, 2013, pp. 315–323. [Google Scholar]
  • [25].Moritz P, Nishihara R, Jordan M, A linearly-convergent stochastic l-bfgs algorithm, in: Artificial Intelligence and Statistics, 2016, pp. 249–258. [Google Scholar]
  • [26].Zhao R, Haskell WB, Tan VY, Stochastic l-bfgs: Improved convergence rates and practical acceleration strategies, arXiv preprint arXiv:1704.00116. [Google Scholar]
  • [27].Zhou C, Gao W, Goldfarb D, Stochastic adaptive quasi-newton methods for minimizing expected values, in: International Conference on Machine Learning, 2017, pp. 4150–4159. [Google Scholar]
  • [28].Meng SY, Vaswani S, Laradji IH, Schmidt M, Lacoste-Julien S, Fast and furious convergence: Stochastic second order methods under interpolation, in: International Conference on Artificial Intelligence and Statistics, PMLR, 2020, pp. 1375–1386. [Google Scholar]
  • [29].Chen W, Wang Z, Zhou J, Large-scale l-bfgs using mapreduce, in: Advances in Neural Information Processing Systems, 2014, pp. 1332–1340. [Google Scholar]
  • [30].Berahas AS, Nocedal J, Takác M, A multi-batch l-bfgs method for machine learning, in: Advances in Neural Information Processing Systems, 2016, pp. 1055–1063. [Google Scholar]
  • [31].Bollapragada R, Mudigere D, Nocedal J, Shi H-JM, Tang PTP, A progressive batching l-bfgs method for machine learning, arXiv preprint arXiv:1802.05374. [Google Scholar]
  • [32].Eisen M, Mokhtari A, Ribeiro A, A decentralized quasi-newton method for dual formulations of consensus optimization, in: Decision and Control (CDC), 2016 IEEE 55th Conference on, IEEE, 2016, pp. 1951–1958. [Google Scholar]
  • [33].Eisen M, Mokhtari A, Ribeiro A, Decentralized quasi-newton methods, IEEE Transactions on Signal Processing 65 (10) (2017) 2613–2628. [Google Scholar]
  • [34].Soori S, Mishchenko K, Mokhtari A, Dehnavi MM, Gurbuzbalaban M, Dave-qn: A distributed averaged quasi-newton method with local superlinear convergence rate, in: International Conference on Artificial Intelligence and Statistics, 2020, pp. 1965–1976. [Google Scholar]
  • [35].Najafabadi MM, Khoshgoftaar TM, Villanustre F, Holt J, Large-scale distributed l-bfgs, Journal of Big Data 4 (1) (2017) 22. [Google Scholar]
  • [36].Wright SJ, Nocedal J, Numerical optimization, Springer Science 35 (67–68) (1999) 7. [Google Scholar]
  • [37].Lian X, Huang Y, Li Y, Liu J, Asynchronous parallel stochastic gradient for nonconvex optimization, in: Advances in Neural Information Processing Systems, 2015, pp. 2737–2745. [Google Scholar]
  • [38].Liu J, Wright SJ, Ré C, Bittorf V, Sridhar S, An asynchronous parallel stochastic coordinate descent algorithm, The Journal of Machine Learning Research 16 (1) (2015) 285–322. [Google Scholar]
  • [39].Hsieh C-J, Yu H-F, Dhillon I, Passcode: Parallel asynchronous stochastic dual co-ordinate descent, in: International Conference on Machine Learning, 2015, pp. 2370–2379. [Google Scholar]
  • [40].Zhao S-Y, Li W-J, Fast asynchronous parallel stochastic gradient descent: A lock-free approach with convergence guarantee., in: AAAI, 2016, pp. 2379–2385. [Google Scholar]
  • [41].Harikandeh R, Ahmed MO, Virani A, Schmidt M, Konečnỳ J, Sallinen S, Stopwasting my gradients: Practical svrg, in: Advances in Neural Information Processing Systems, 2015, pp. 2251–2259. [Google Scholar]
  • [42].Xiao L, Zhang T, A proximal stochastic gradient method with progressive variance reduction, SIAM Journal on Optimization 24 (4) (2014) 2057–2075. [Google Scholar]
