Mathematical Programming 188(1):135–192 (2020). Published 2020 May 12. doi: 10.1007/s10107-020-01506-0

Stochastic quasi-gradient methods: variance reduction via Jacobian sketching

Robert M. Gower, Peter Richtárik, Francis Bach

Abstract

We develop a new family of variance reduced stochastic gradient descent methods for minimizing the average of a very large number of smooth functions. Our method—JacSketch—is motivated by novel developments in randomized numerical linear algebra, and operates by maintaining a stochastic estimate of a Jacobian matrix composed of the gradients of individual functions. In each iteration, JacSketch efficiently updates the Jacobian matrix by first obtaining a random linear measurement of the true Jacobian through (cheap) sketching, and then projecting the previous estimate onto the solution space of a linear matrix equation whose solutions are consistent with the measurement. The Jacobian estimate is then used to compute a variance-reduced unbiased estimator of the gradient. Our strategy is analogous to the way quasi-Newton methods maintain an estimate of the Hessian, and hence our method can be seen as a stochastic quasi-gradient method. Our method can also be seen as stochastic gradient descent applied to a controlled stochastic optimization reformulation of the original problem, where the control comes from the Jacobian estimates. We prove that for smooth and strongly convex functions, JacSketch converges linearly with a meaningful rate dictated by a single convergence theorem which applies to general sketches. We also provide a refined convergence theorem which applies to a smaller class of sketches, featuring a novel proof technique based on a stochastic Lyapunov function. This enables us to obtain sharper complexity results for variants of JacSketch with importance sampling. By specializing our general approach to specific sketching strategies, JacSketch reduces to the celebrated stochastic average gradient (SAGA) method, and its several existing and many new minibatch, reduced memory, and importance sampling variants. Our rate for SAGA with importance sampling is the current best-known rate for this method, resolving a conjecture by Schmidt et al. (Proceedings of the eighteenth international conference on artificial intelligence and statistics, AISTATS 2015, San Diego, California, 2015). The rates we obtain for minibatch SAGA are also superior to existing rates and are sufficiently tight as to show a decrease in total complexity as the minibatch size increases. Moreover, we obtain the first minibatch SAGA method with importance sampling.

Keywords: Stochastic gradient descent, Sketching, Variance reduction, Covariates

Introduction

We consider the problem of minimizing the average of a large number of differentiable functions

$$x^\star = \operatorname*{arg\,min}_{x\in\mathbb{R}^d} f(x) \overset{\text{def}}{=} \frac{1}{n}\sum_{i=1}^n f_i(x), \qquad (1)$$

where f is μ-strongly convex and L-smooth. In solving (1), we restrict our attention to first-order methods that use a (variance-reduced) stochastic estimate g^k ≈ ∇f(x^k) of the gradient to take a step towards minimizing (1) by iterating

$$x^{k+1} = x^k - \alpha g^k, \qquad (2)$$

where α>0 is a stepsize.

In the context of machine learning, (1) is an abstraction of the empirical risk minimization problem; x encodes the parameters/features of a (statistical) model, and fi is the loss of example/data point i incurred by model x. The goal is to find the model x which minimizes the average loss on the n observations.

Typically, n is so large that algorithms which rely on scanning through all n functions in each iteration are too costly. The need for incremental methods for the training phase of machine learning models has revived the interest in the stochastic gradient descent (SGD) method [27]. SGD sets g^k = ∇f_i(x^k), where i is an index chosen from [n] =def {1,2,…,n} uniformly at random. SGD therefore requires only a single data sample to complete a step and make progress towards the solution. Thus SGD scales well in the number of data samples, which is important in several machine learning applications since there may be a large number of data samples. On the downside, the variance of the stochastic estimates of the gradient produced by SGD does not vanish during the iterative process, which suggests that a decreasing stepsize regime needs to be put into place if SGD is to converge. Furthermore, for SGD to work efficiently, this decreasing stepsize regime needs to be tuned for each application area, which is costly.
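To make this baseline concrete, here is a minimal numpy sketch of the plain SGD iteration (2) with uniform sampling on a toy least-squares instance of (1); the data, problem and stepsize are our own illustrative assumptions.

```python
import numpy as np

# Toy instance of (1): f_i(x) = 0.5 * (a_i^T x - b_i)^2, so f is an average of n losses.
rng = np.random.default_rng(0)
n, d = 100, 10
A, b = rng.standard_normal((n, d)), rng.standard_normal(n)

def grad_fi(x, i):
    """Stochastic gradient of f_i at x."""
    return (A[i] @ x - b[i]) * A[i]

x, alpha = np.zeros(d), 0.01
for k in range(1000):
    i = rng.integers(n)            # uniform index, so g^k = grad f_i(x^k)
    x -= alpha * grad_fi(x, i)     # iteration (2)
```

With the constant stepsize used here, the iterates only converge to a neighbourhood of the solution, which is precisely the non-vanishing variance issue described above.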

Variance-reduced methods

Stochastic variance-reduced versions of SGD offer a solution to this high variance issue, which improves the theoretical convergence rate and removes the need for ad hoc stepsize regimes. The first variance-reduced method for empirical risk minimization is the stochastic average gradient (SAG) method of Schmidt, Le Roux and Bach [29], closely followed by Finito [7] and Miso [18]. The analysis of SAG is notoriously difficult, which is perhaps due to the gradient estimator being biased. Soon afterwards, the SAG gradient estimator was modified into an unbiased one, which resulted in the SAGA method [6]. The analysis of SAGA is dramatically simpler than that of SAG. Another popular method is SVRG of Johnson and Zhang [15] (see also S2GD [16]). SVRG enjoys the same theoretical complexity bound as SAGA, but has a much smaller memory footprint. It is based on an inner–outer loop procedure. In the outer loop, a full pass over the data is performed to compute the gradient of f at the current point. In the inner loop, this gradient is modified with the use of cheap stochastic gradients, and steps are taken in the direction of the modified gradients. A notable recent addition to the family of variance-reduced methods, developed by Nguyen et al. [20], is known as SARAH. Unlike the other methods, the SARAH estimator is not unbiased in each step. Instead, it is unbiased over a long history of the method.

A fundamentally different way of designing variance-reduced methods is to use coordinate descent [24, 25] to solve the dual. This is what the SDCA method [33] and its various extensions [32] do. The key advantage of this approach is that the dual often has a separable structure in the coordinate space, which in turn means that each iteration of coordinate descent is cheap. Furthermore, SDCA is a variance-reduced method by design, since the coordinates of the gradient tend to zero as one approaches the solution. One of the downsides of SDCA is that it requires calculating Fenchel duals and their derivatives. This issue was later solved by introducing approximations and mapping the dual iterates to the primal space, as pointed out in [6]. This resulted in primal variants of SDCA such as dual-free SDCA [31]. A primal-dual variant which enables the use of arbitrary minibatch strategies was developed by Qu et al. [23], and is known as QUARTZ.

Finally, variance-reduced methods can also be accelerated, as has been shown for loop-based methods such as Katyusha [3], or by using the Universal Catalyst [17].

Gaps in our understanding of SAGA

Despite significant research into variance-reduced stochastic gradient descent methods for solving (1), there are still big gaps in our understanding of variance reduction. For instance, the current theory supporting the SAGA algorithm is far from complete.

SAGA with uniform probabilities enjoys the iteration complexity O((n + Lmax/μ) log(1/ϵ)), where Lmax =def max_i L_i and L_i is the smoothness constant of f_i. While importance sampling versions of SAGA have proved in practice to produce a speed-up over uniform SAGA [30], a proof of this speed-up has been elusive. It was conjectured by Schmidt et al. [30] that a properly designed importance sampling strategy for SAGA should lead to the rate O((n + L̄/μ) log(1/ϵ)), where L̄ = (1/n) Σ_i L_i. However, no such result was proved. This rate is achieved by, for instance, importance sampling variants of SDCA, QUARTZ [23] and SVRG [36]. However, these analyses only apply to a more specialized version of problem (1) (e.g., one needs an explicit strongly convex regularizer).

Second, existing minibatch variants of SAGA do not enjoy the same rate as that offered by methods such as SDCA and QUARTZ. Are the above issues with SAGA unavoidable, or is it the case that our understanding of the method is far from complete? Lastly, no minibatch variant of SAGA with importance sampling is known.

One of the contributions of this paper is giving positive answers to all of the above questions.

Jacobian sketching: a new approach to variance reduction

Our key contribution in this paper is the introduction of a novel approach—which we call Jacobian sketching—to designing and understanding variance-reduced stochastic gradient descent methods for solving (1). We refer to our method by the name JacSketch. We shall now briefly introduce some of the key insights motivating our approach. Let F : R^d → R^n be defined by

$$F(x) \overset{\text{def}}{=} \left(f_1(x), \ldots, f_n(x)\right) \in \mathbb{R}^n, \qquad (3)$$

and further let

$$\nabla F(x) \overset{\text{def}}{=} \left[\nabla f_1(x), \ldots, \nabla f_n(x)\right] \in \mathbb{R}^{d\times n} \qquad (4)$$

be the Jacobian of F at x.

The starting point of our new approach is the following trivial observation: the gradient of f at x can be computed from the Jacobian ∇F(x) by a simple linear transformation:

$$\frac{1}{n}\,\nabla F(x)\, e = \nabla f(x), \qquad (5)$$

where e is the vector of all ones in R^n. This alone is not useful to come up with a better way of estimating the gradient. Indeed, formula (5) has two issues. First, the Jacobian is not available. If we wanted to compute it, we would need to pay the cost of one pass through the data. Second, even if the Jacobian was available, merely multiplying it by the vector of all ones would cost O(nd) operations, which is again a cost equivalent to one pass over the data.

Now, let us replace the vector of all ones in (5) by e_i ∈ R^n, the unit coordinate/basis vector in R^n. If the index i is chosen randomly from [n], then

$$\nabla F(x)\, e_i = \nabla f_i(x), \qquad (6)$$

which is a stochastic gradient of f at x. In other words, by performing a random linear transformation of the Jacobian, we have arrived at the classical stochastic estimate of the gradient. This approach does not suffer from the first issue mentioned above, as the Jacobian is not needed at all in order to compute ∇f_i(x). Likewise, it does not suffer from the second issue; namely, the cost of computing the stochastic gradient is merely O(d), and we can avoid a costly pass through the data.

However, this approach suffers from a new issue: by constructing the estimate this way, we do not learn from the (random) information collected about the Jacobian in prior iterations, through having access to random linear transformations thereof. In this paper we take the point of view that this is the reason why SGD suffers from large variance. Our approach towards alleviating this problem is to maintain and update an estimate J ∈ R^{d×n} of the Jacobian ∇F(x).

Given x^k ∈ R^d, ideally we would like J to satisfy

$$J = \nabla F(x^k), \qquad (7)$$

that is, we would like it to be equal to the true Jacobian. However, at the same time we do not wish to pay the price of computing it. Hence, assuming we have an estimate J^k ∈ R^{d×n} of the Jacobian available, we instead pick a random matrix S_k ∈ R^{n×τ} from some distribution D of matrices and consider the following sketched version of the linear system (7), with unknown J:

$$J S_k = \nabla F(x^k)\, S_k \in \mathbb{R}^{d\times\tau}. \qquad (8)$$

This equation generalizes both (5) and (6). The left hand side contains the sketching matrix S_k and the unknown matrix J, and the right hand side contains a quantity we can measure (through a random linear measurement of the Jacobian, which we assume is cheap). Of course, the true Jacobian solves (8). However, in general, and in particular when τ ≪ n, which is the regime we want to be in for practical reasons, the system (8) will have infinitely many solutions J.

We pick a unique solution J^{k+1} as the closest solution of (8) to our previous estimate J^k, with respect to a weighted Frobenius norm with a positive definite weight matrix W ∈ R^{n×n}:

$$J^{k+1} = \operatorname*{arg\,min}_{J\in\mathbb{R}^{d\times n}} \left\|J - J^k\right\|_{W^{-1}} \quad \text{subject to} \quad J S_k = \nabla F(x^k)\, S_k, \qquad (9)$$

where

$$\|X\|_{W^{-1}} \overset{\text{def}}{=} \sqrt{\operatorname{Tr}\left(X W^{-1} X^\top\right)}. \qquad (10)$$

In doing so, we have built a learning mechanism whose goal is to maintain good estimates of the Jacobian throughout the run of method (2). These estimates can be used to efficiently estimate the gradient by performing a linear transformation similar to (5), but with ∇F(x) replaced by the latest estimate of the Jacobian. In practice, it is important to design the sketching matrices so that the Jacobian sketch ∇F(x^k) S_k can be calculated efficiently.

The “sketch-and-project” strategy (9) for updating our Jacobian estimate is analogous to the way quasi-Newton methods update the estimate of the Hessian (or inverse Hessian) [8, 9, 12]. From this perspective, our method can be viewed as a stochastic quasi-gradient method.

Problem (9) admits the explicit closed-form solution (see Lemma 14):

$$J^{k+1} = J^k + \left(\nabla F(x^k) - J^k\right)\Pi_{S_k}, \qquad (11)$$

where

$$\Pi_S \overset{\text{def}}{=} S\left(S^\top W S\right)^{\dagger} S^\top W \qquad (12)$$

is a projection matrix, and † denotes the Moore–Penrose pseudoinverse.
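As a sanity check, the following numpy snippet forms Π_{S_k} from (12) via a pseudoinverse and verifies that the closed-form update (11) indeed satisfies the sketched equation (8); the random matrices below stand in for J^k and ∇F(x^k) and are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, tau = 5, 8, 3
W = np.diag(rng.uniform(1, 2, n))         # positive definite diagonal weight matrix
J = rng.standard_normal((d, n))           # current Jacobian estimate J^k
G = rng.standard_normal((d, n))           # stands in for the true Jacobian at x^k
S = rng.standard_normal((n, tau))         # sketch matrix S_k ~ D

Pi = S @ np.linalg.pinv(S.T @ W @ S) @ S.T @ W   # projection matrix (12)
J_new = J + (G - J) @ Pi                          # sketch-and-project update (11)

# The new estimate is consistent with the sketched system (8):
assert np.allclose(J_new @ S, G @ S)
```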

The key insight of our work is to propose an efficient Jacobian learning mechanism based on ideas borrowed from recent results in randomized numerical linear algebra.

Having established our update of the Jacobian estimate, we now need to use this to form an estimate of the gradient. Unfortunately, using J^{k+1} in place of ∇F(x^k) in (5) leads to a biased gradient estimate (something we explore later in Sect. 2.5). To obtain an unbiased estimator of the gradient, we introduce a stochastic relaxation parameter θ_{S_k} and use

$$g^k \overset{\text{def}}{=} \frac{1-\theta_{S_k}}{n} J^k e + \frac{\theta_{S_k}}{n} J^{k+1} e = \frac{1}{n} J^k e + \frac{\theta_{S_k}}{n}\left(\nabla F(x^k) - J^k\right)\Pi_{S_k} e \qquad (13)$$

as an approximation of the gradient. Taking expectations in (13) over S_k ∼ D (for this we use the notation E_D[·] ≡ E_{S_k∼D}[·]), we get

$$\mathbb{E}_{\mathcal{D}}\left[g^k\right] = \frac{1}{n} J^k e + \frac{1}{n}\left(\nabla F(x^k) - J^k\right)\mathbb{E}_{\mathcal{D}}\left[\theta_{S_k}\Pi_{S_k}\right] e. \qquad (14)$$

Thus provided that

$$\mathbb{E}_{\mathcal{D}}\left[\theta_{S_k}\Pi_{S_k}\right] e = e, \qquad (15)$$

we have E_D[g^k] = (1/n) ∇F(x^k) e = ∇f(x^k) (by (14) and (5)), and hence g^k is an unbiased estimate of the gradient. If (15) holds, we say that θ_{S_k} is a bias-correcting random variable and S_k is an unbiased sketch. Our new JacSketch method is method (2) with g^k computed via (13) and the Jacobian estimate updated via (11). This method is formalized in Sect. 2 as Algorithm 1.

This strategy indeed works, as we show in detail in this paper. Under appropriate conditions (on the stepsize α, properties of f and randomness behind the sketch matrices Sk and so on), the variance of gk diminishes to zero (e.g., see Lemma 6), which means that JacSketch is a variance-reduced method. We perform an analysis for smooth and strongly convex functions f, and obtain a linear convergence result (Theorem 1). We summarize our complexity results in detail in Sect. 1.5.
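For a small example, one can verify the bias-correcting condition (15) and the unbiasedness of g^k in (13) by exact enumeration. The snippet below does this for a hypothetical minibatch sketch whose support is the partition {1,2}, {3,4} of [4] (so c_1 = 1 and θ_S = 1/p_S, anticipating Sect. 1.4); the Gaussian matrices stand in for J^k and ∇F(x^k).

```python
import numpy as np

n, d = 4, 3
rng = np.random.default_rng(2)
G = rng.standard_normal((d, n))            # stands in for the true Jacobian at x^k
J = rng.standard_normal((d, n))            # current Jacobian estimate J^k
e = np.ones(n)

support = [np.array([0, 1]), np.array([2, 3])]   # 0-based partition of [4]
probs = [0.3, 0.7]

lhs = np.zeros(n)                          # accumulates E_D[theta_S Pi_S e]
g_mean = np.zeros(d)                       # accumulates E_D[g^k]
for C, p in zip(support, probs):
    theta = 1.0 / p                        # c_1 = 1 since the support is a partition
    Pi_e = np.zeros(n); Pi_e[C] = 1.0      # Pi_S e = e_S for minibatch sketches
    lhs += p * theta * Pi_e
    g = J @ e / n + theta / n * (G - J) @ Pi_e   # gradient estimate (13)
    g_mean += p * g

assert np.allclose(lhs, e)                 # the bias-correcting condition (15)
assert np.allclose(g_mean, G @ e / n)      # E_D[g^k] = (1/n) grad-F e, cf. (5)
```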

SAGA as a special case of JacSketch

Of particular importance in this paper are minibatch sketches, which are sketches of the form S_k = I_{S_k}, where S_k is a random subset of [n], and I_{S_k} is a random column submatrix of the n×n identity matrix with columns indexed by S_k. For minibatch sketches, JacSketch corresponds to minibatch variants of SAGA. Indeed, in this case, and if W = Diag(w_1,…,w_n), we have Π_{S_k} e = e_{S_k}, where e_S =def Σ_{i∈S} e_i (see Lemma 7). Therefore,

$$g^k = \frac{1}{n} J^k e + \frac{\theta_{S_k}}{n}\sum_{i\in S_k}\left(\nabla f_i(x^k) - J^k_{:i}\right). \qquad (16)$$

In view of (11), and since Π_{S_k} = I_{S_k} I_{S_k}^⊤ (see Lemma 7), the Jacobian estimate gets updated as follows:

$$J^{k+1}_{:i} = \begin{cases} J^k_{:i} & \text{if } i \notin S_k, \\ \nabla f_i(x^k) & \text{if } i \in S_k. \end{cases} \qquad (17)$$

Standard uniform SAGA is obtained by setting S_k = {i} with probability 1/n for each i ∈ [n], and letting θ_{S_k} ≡ n. SAGA with arbitrary probabilities is obtained by instead choosing S_k = {i} with probability p_i > 0 for each i ∈ [n], and letting θ_{S_k} ≡ 1/p_i. More generally, virtually all minibatching and importance sampling strategies can be treated as special cases of our general approach.
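A minimal implementation of this special case, assuming f_i(x) = ½(a_i^⊤x − b_i)² and maintaining the running sum J^k e so that each iteration costs O(d), might look as follows; the data, probabilities and stepsize are illustrative assumptions.

```python
import numpy as np

# SAGA with arbitrary probabilities as a special case of JacSketch:
# S_k = {i} with probability p_i and theta = 1/p_i, giving (16) and (17).
rng = np.random.default_rng(3)
n, d = 50, 5
A, b = rng.standard_normal((n, d)), rng.standard_normal(n)

def grad_fi(x, i):                      # gradient of f_i(x) = 0.5*(a_i^T x - b_i)^2
    return (A[i] @ x - b[i]) * A[i]

p = np.full(n, 1.0 / n)                 # sampling probabilities; may be non-uniform
x, alpha = np.zeros(d), 0.005
J = np.zeros((d, n))                    # Jacobian estimate; any J^0 is allowed
Je = J.sum(axis=1)                      # running sum J^k e, kept up to date

for k in range(5000):
    i = rng.choice(n, p=p)
    gi = grad_fi(x, i)
    g = Je / n + (gi - J[:, i]) / (n * p[i])   # gradient estimate (16)
    Je += gi - J[:, i]                          # maintain J^{k+1} e cheaply
    J[:, i] = gi                                # Jacobian update (17)
    x -= alpha * g                              # step (2)
```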

The theory we develop answers the open questions raised earlier. In particular, we answer the conjecture of Schmidt et al. [30] about the rate of SAGA with importance sampling in the affirmative: we establish the iteration complexity (n + 4L̄/μ) log(1/ϵ). This complexity is obtained for importance sampling distributions that have not previously been proposed in the literature for SAGA. In order to achieve this, we develop a new analysis technique which makes use of a stochastic Lyapunov function (see Sect. 5). That is, our Lyapunov function has a random element which is independent of the randomness inherited from the iterates of the method. This is unlike any other Lyapunov function used in the analysis of stochastic methods we are aware of. Further, we prove that SAGA converges with any initial matrix J^0 in place of the matrix of gradients of the functions f_i at the starting point. We also show that our results give better rates for minibatch SAGA than are currently known, even for uniform minibatch strategies. We also allow for a family of completely new uniform minibatching strategies which were not considered in connection with SAGA before, and consider SAGA with importance sampling for minibatches (based on a partition of [n]). Lastly, as a special case, our method recovers standard gradient descent, together with the sharp iteration complexity of (4L/μ) log(1/ϵ).

Our general approach also enables a novel reduced memory variant of SAGA as a special case. Let S_k = e_{S_k}, and choose W = I. Since Π_{S_k} e = e_{S_k}, the formula for g^k is the same as in the case of SAGA, and is given by (16). What is notably different about this sketch (compared to I_{S_k}) is that, since Π_{e_{S_k}} = (1/|S_k|) e_{S_k} e_{S_k}^⊤, the update of the Jacobian estimate is given by

$$J^{k+1} \overset{(11)}{=} J^k - \frac{1}{|S_k|}\sum_{i\in S_k}\left(J^k_{:i} - \nabla f_i(x^k)\right) e_{S_k}^\top.$$

Thus, the same update is applied to all the columns of J^k that belong to S_k. Equivalently, this update can be written as

$$J^{k+1}_{:j} = \begin{cases} \dfrac{1}{|S_k|}\displaystyle\sum_{i\in S_k} \nabla f_i(x^k) & \text{if } j \in S_k, \\ J^k_{:j} & \text{if } j \notin S_k. \end{cases} \qquad (18)$$

In particular, if S_k only ever picks sets which correspond to a partition of [n], and we initialize J^0 so that all the columns belonging to the same partition set are the same, then they will remain the same within each partition set for all k. In such a case, we do not need to maintain all the identical copies. Instead, we can update and use a condensed/compressed version of the Jacobian, with one column per partition set only, to reduce the total memory usage. This method, with non-uniform probabilities, is analyzed in our framework in Sect. 5.6.
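A sketch of the resulting memory saving, under the assumption of a fixed τ-partition and a per-example gradient oracle (the grad_fi below is a hypothetical placeholder, not a function from the paper): only one column per partition set is stored, and J^k e is available without ever materializing the d×n matrix.

```python
import numpy as np

# Reduced-memory Jacobian for a fixed tau-partition: one stored column per block.
rng = np.random.default_rng(4)
n, d, tau = 12, 5, 3
blocks = [np.arange(j, j + tau) for j in range(0, n, tau)]   # partition of [n]
Jblock = np.zeros((d, len(blocks)))     # condensed Jacobian (one column per block)

def grad_fi(x, i):
    return np.sin(i + 1.0) * x + i      # placeholder gradient, illustration only

x = rng.standard_normal(d)
j = rng.integers(len(blocks))           # sample a block S_k from the partition
Jblock[:, j] = np.mean([grad_fi(x, i) for i in blocks[j]], axis=0)   # update (18)

Je = tau * Jblock.sum(axis=1)           # J^k e recovered from the condensed storage
```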

Summary of complexity results

All convergence results obtained in this paper are summarized in Table 1.

Table 1. Special cases of our JacSketch method, and the associated iteration complexity.

| ID | Method | Sketch S ∈ R^{n×τ} | W (≻ 0) | Iteration complexity (× log(1/ϵ)) | Reference |
|----|--------|--------------------|---------|-----------------------------------|-----------|
| 1 | JacSketch | any unbiased | any | max{4L_1/μ, 1/κ + 4ρL_2/(κμn²)} | Theorem 1 |
| 2 | JacSketch (any probabilities for a τ-partition) | I_S | I | max_{C∈supp(S)} {1/p_C + (τ/(n p_C))·(4L_C/μ)} | Theorem 6 |
| 3 | Gradient descent | I | I | 4L/μ | Theorems 1 and 6; Sects. 4.6 and 5.6 |
| 4 | SAGA (uniform sampling) | I_S | I | n + 4Lmax/μ | Theorems 1 and 6; Sects. 4.6 and 5.6 |
| 5 | SAGA (importance sampling) | I_S | I | n + 4L̄/μ | Theorem 6, (129) |
| 6 | Minibatch SAGA (τ-uniform sampling) | I_S | Diag(w_i) | max{4L_max^G/μ, n/τ + (4ρ/(μn)) max_i (L_i/w_i)} | Theorem 1, (100) |
| 7 | Minibatch SAGA (τ-nice sampling) | I_S | I | max{4L_max^G/μ, n/τ + ((n−τ)/((n−1)τ))·(4Lmax/μ)} | Theorem 1, (101) |
| 8 | Minibatch SAGA (τ-nice sampling) | I_S | Diag(L_i) | max{4L_max^G/μ, n/τ + ((n−τ)/(nτ))·(4(L̄+Lmax)/μ)} | Theorem 1, (102) |
| 9 | Minibatch SAGA (τ-partition sampling) | I_S | I | n/τ + 4Lmax/μ | Theorem 1, (103) |
| 10 | Minibatch SAGA (τ-partition sampling) | I_S | Diag(L_i) | n/τ + (4/μ) max_{C∈supp(S)} (1/τ) Σ_{i∈C} L_i | Theorem 1, (104) |
| 11 | Minibatch SAGA (importance τ-partition sampling) | I_S | I | n/τ + (4/μ)·(1/|supp(S)|) Σ_{C∈supp(S)} L_C | Theorem 6, (131) |

All methods converge linearly. In the iteration complexity column we list the number of iterations sufficient to obtain an ϵ-accurate solution, ignoring a log(1/ϵ) factor.

Our convergence results depend on several constants which we will now briefly introduce. The precise definitions can be found in the main text. For C ⊆ [n] = {1,2,…,n}, define f_C(x) =def (1/|C|) Σ_{i∈C} f_i(x). We assume f_C is L_C-smooth. We let L_i = L_{{i}}, L = L_{[n]}, Lmax = max_i L_i and L̄ = (1/n) Σ_i L_i. Note that L_i ≤ Lmax, L̄ ≤ Lmax ≤ n L̄, L_C ≤ (1/|C|) Σ_{i∈C} L_i and L ≤ L̄. For a sampling S ⊆ [n], we let supp(S) = {C ⊆ [n] : P[S = C] > 0}. That is, the support of a sampling consists of all the sets which are selected by this sampling with positive probability. Finally, L_max^G = max_i (1/c_1) Σ_{C∈supp(S): i∈C} L_C, where c_1 is the cardinality of the set {C : C ∈ supp(S), i ∈ C} (which is assumed to be the same for all i). So, L_max^G is the maximum over i of averages of the values L_C for those sets C which are picked by S with positive probability and which contain i. Clearly, L_max^G ≤ Lmax (see Theorem 3).

General theorem. Theorem 1 is our most general result, allowing for any (unbiased) sketch S (see (15)), and any weight matrix W ≻ 0. The resulting iteration complexity given by this theorem is

$$\max\left\{\frac{4 L_1}{\mu},\; \frac{1}{\kappa} + \frac{4\rho L_2}{\kappa\mu n^2}\right\} \times \log\frac{1}{\epsilon},$$

and is also presented in the first row of Table 1. This result depends on two expected smoothness constants L1 (measuring the expected smoothness of the stochastic gradient of our stochastic reformulation; see Assumption 3.1) and L2 (measuring the expected smoothness of the Jacobian; see Assumption 3.2). The complexity also depends on the stochastic contraction number κ (see (48)) and the sketch residual ρ (see (37) and (55)). We devote considerable effort to give simple formulas for these constants under some specialized settings (for special combinations of sketches S and weight matrices W). In fact, the entire Sect. 4 is devoted to this. In particular, all rows of Table 1 where the last column mentions Theorem 1 arise as special cases of the general iteration complexity in the first row.

  • Gradient descent As a starting point, in row 3 we highlight that one can recover gradient descent as a special case of JacSketch with the choice S = I (with probability 1) and W = I. We get the rate (4L/μ) log(1/ϵ), which is tight.

  • SAGA with uniform sampling Let us now focus on a slightly more interesting special case: row 4. We see that SAGA with uniform probabilities appears as a special case, and enjoys the rate (n + 4Lmax/μ) log(1/ϵ), recovering an existing result.

  • SAGA with importance sampling Unfortunately, the generality of Theorem 1 comes at a cost: we are not able to obtain an importance sampling version of SAGA as a special case which would have a better iteration complexity than uniform SAGA. This will be remedied by our second complexity theorem, which we shall discuss later below.

  • Minibatch SAGA Rows 6–11 correspond to minibatch versions of SAGA. In particular, row 6 contains a general statement (albeit still a special case of the statement in row 1), covering virtually all minibatch strategies. Rows 7–11 specialize this result to two particular minibatch sketches (i.e., S = I_S), each with two choices of W. The first sketch corresponds to samplings S which choose uniformly at random from among all subsets of [n] of cardinality τ. This sampling is known in the literature as τ-nice sampling [22, 25]. The second sketch corresponds to S being a τ-partition sampling. This sampling picks uniformly at random subsets of [n] which form a fixed partition of [n] and are all of cardinality τ. The complexities in rows 7 and 8 are comparable (each can be slightly better than the other, depending on the values of the smoothness constants {L_i}). On the other hand, in the case of a τ-partition sampling, the choice W = Diag(L_i) is better than W = I: the complexity in row 10 is better than that in row 9 because max_{C∈supp(S)} (1/τ) Σ_{i∈C} L_i ≤ Lmax.

  • Optimal minibatch size for SAGA Our analysis for minibatch SAGA also gives the first iteration complexities that interpolate between the (n + 4Lmax/μ) log(1/ϵ) complexity of SAGA and the (4L/μ) log(1/ϵ) complexity of gradient descent, as τ increases from 1 to n. Indeed, consider the complexity in rows 7 and 8 for τ = 1 and τ = n. Our iteration complexity of minibatch SAGA is the first result that is precise enough to inform an optimal minibatch size (see Sect. 6.2). In contrast, the previous best complexity result for minibatch SAGA [14] interpolates between (n + 4Lmax/μ) log(1/ϵ) and (4Lmax/μ) log(1/ϵ) as τ increases from 1 to n, and thus is not precise enough to inform the best minibatch size. We make a more detailed comparison between our results and [14] in Sect. 4.7.

Specialized theorem We now move to the second main complexity result of our paper: Theorem 6. The general complexity statement is listed in row 2 of Table 1:

$$\max_{C\in\operatorname{supp}(S)}\left\{\frac{1}{p_C} + \frac{\tau}{n p_C}\,\frac{4 L_C}{\mu}\right\} \times \log\frac{1}{\epsilon}, \qquad (19)$$

where p_C = P[S = C]. This theorem is a refined result specialized to minibatch sketches (S = I_S) with τ-partition samplings S. This is a sampling which picks subsets of [n] of size τ forming a partition of [n], uniformly at random. This theorem also includes gradient descent as a special case: when S = [n] with probability 1 (hence p_{[n]} = 1), we have τ = n and L_{[n]} = L, so (19) specializes to (4L/μ) log(1/ϵ). But more importantly, our focus on τ-partition samplings enables us to provide stronger iteration complexity guarantees for non-uniform probabilities.

  • SAGA with importance sampling The first remarkable special case of (19) is summarized in row 5, and corresponds to SAGA with importance sampling. The complexity obtained, (n + 4L̄/μ) log(1/ϵ), answers a conjecture of Schmidt et al. [30] in the affirmative. In this case, the support of S consists of the singletons {1}, {2}, …, {n}, p_{{i}} = p_i for all i, τ = 1 and L_{{i}} = L_i. Optimizing the complexity bound over the probabilities p_1, …, p_n, we obtain the importance sampling p_i = (μn + 4τL_i) / Σ_j (μn + 4τL_j) (a numeric sketch of this rule follows this list).

  • Minibatch SAGA with importance sampling In row 11 we state the complexity for a minibatch SAGA method with importance sampling. This is the first result for this method in the literature. Note that by comparing rows 4 and 10, we can conclude that the complexity of minibatch SAGA with importance sampling is better than for minibatch SAGA with uniform probabilities. Indeed, this is because

$$\frac{1}{|\operatorname{supp}(S)|}\sum_{C\in\operatorname{supp}(S)} L_C \;\le\; \bar{L} \;\le\; \max_{C\in\operatorname{supp}(S)} \frac{1}{\tau}\sum_{i\in C} L_i. \qquad (20)$$
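The numeric sketch promised above: it evaluates the importance sampling rule p_i ∝ μn + 4τL_i (with τ = 1) and compares the two complexities from rows 4 and 5 of Table 1 on randomly generated smoothness constants; all data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
n, mu, tau = 1000, 0.1, 1
L = rng.lognormal(0, 2, n)                 # heterogeneous smoothness constants L_i

p = mu * n + 4 * tau * L                   # p_i proportional to mu*n + 4*tau*L_i
p /= p.sum()                               # importance sampling probabilities

uniform_complexity = n + 4 * L.max() / mu      # row 4: n + 4 Lmax / mu
importance_complexity = n + 4 * L.mean() / mu  # row 5: n + 4 Lbar / mu
print(uniform_complexity, importance_complexity)
```

The more the L_i vary, the larger the gap between Lmax and L̄, and hence the larger the speed-up from importance sampling.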

Outline of the paper

We present an alternative narrative motivating the development of JacSketch in Sect. 2. This narrative is based on a novel technical tool which we call controlled stochastic optimization reformulations of problem (1). We then develop a general convergence theory of JacSketch in Sect. 3. This theory admits practically any sketches S (including minibatch sketches mentioned in the introduction) and weight matrices W. The main result in this section is Theorem 1. In Sect. 4 we specialize the general results to minibatch sketches. Here we also compute the various constants appearing in the general complexity result for JacSketch for specific classes of minibatch samplings. In Sect. 5 we develop an alternative theory for JacSketch, one based on a novel stochastic Lyapunov function. The main result in this section is Theorem 6. Computational experiments are included in Sect. 6.

Notation

We will introduce notation when and as needed. If the reader would like to recall any notation, for ease of reference we have a notation glossary in Sect. 1. As a general rule, all matrices are written in upper-case bold letters. By log t we refer to the natural logarithm of t.

Controlled stochastic reformulations

In this section we provide an alternative narrative behind the development of JacSketch; one through the lens of what we call controlled stochastic reformulations.

We design our family of methods so that two key properties are satisfied: unbiasedness, E[g^k] = ∇f(x^k), and diminishing variance, E‖g^k − ∇f(x^k)‖²₂ → 0 as x^k → x⋆. These are both favourable statistical properties. Moreover, currently only methods with diminishing variance exhibit fast linear convergence (exponential decay of the error) on strongly convex problems. On the other hand, unbiasedness is not necessary for a fast method in practice, since several biased stochastic gradient methods, such as SAG [29], perform well in practice. Still, the absence of bias greatly facilitates the analysis of JacSketch.

Stochastic reformulation using sketching

It will be useful to formalize the condition mentioned in Sect. 1.3 which leads to gk being an unbiased estimator of the gradient.

Assumption 2.1

(Unbiased sketch) Let W ≻ 0 be a weighting matrix and let D be the distribution from which the sketch matrices S are drawn. There exists a random variable θ_S such that

$$\mathbb{E}_{\mathcal{D}}\left[\theta_S \Pi_S e\right] = e. \qquad (21)$$

When this assumption is satisfied, we say that (S,θS,W) constitutes an “unbiased sketch”, and we call θS the bias-correcting random variable. When the triple is obvious from the context, sometimes we shall simply say that S is an unbiased sketch.

The first key insight of this section is that besides producing unbiased estimators of the gradient, unbiased sketches produce unbiased estimators of the loss function as well. Indeed, by simply observing that f(x) = (1/n)⟨F(x), e⟩, we get

$$f(x) \overset{(1)}{=} \frac{1}{n}\sum_{i=1}^n f_i(x) = \frac{1}{n}\left\langle F(x), e\right\rangle \overset{(21)}{=} \frac{1}{n}\left\langle F(x), \mathbb{E}_{\mathcal{D}}\left[\theta_S \Pi_S e\right]\right\rangle = \mathbb{E}_{\mathcal{D}}\left[\frac{1}{n}\left\langle F(x), \theta_S \Pi_S e\right\rangle\right].$$

In other words, we can rewrite the finite-sum optimization problem (1) as an equivalent stochastic optimization problem where the randomness comes from D rather than from the representation-specific uniform distribution over the n loss functions:

$$\min_{x\in\mathbb{R}^d} f(x) = \mathbb{E}_{\mathcal{D}}\left[f_S(x)\right], \quad \text{where} \quad f_S(x) \overset{\text{def}}{=} \frac{\theta_S}{n}\left\langle F(x), \Pi_S e\right\rangle. \qquad (22)$$

The stochastic optimization problem (22) is a stochastic reformulation of the original problem (1). Further, the stochastic gradient of this reformulation is given by

$$\nabla f_S(x) = \frac{\theta_S}{n}\,\nabla F(x)\,\Pi_S e. \qquad (23)$$

With these simple observations, our options at designing stochastic gradient-type algorithms for (1) have suddenly broadened dramatically. Indeed, we can now solve the problem, at least in principle, by applying SGD to any stochastic reformulation:

$$x^{k+1} = x^k - \alpha \nabla f_{S_k}(x^k). \qquad (24)$$

But now we have a parameter to play with, namely, the distribution of S. The choice of this parameter will influence both the iteration complexity of the resulting method as well as the cost of each iteration. We now give a few examples of possible choices of D to illustrate this.

Example 1

(gradient descent) Let S be equal to I (or any other n×n invertible matrix) with probability 1, and let W ≻ 0 be chosen arbitrarily. Then θ_S ≡ 1 is bias-correcting since

$$\mathbb{E}_{\mathcal{D}}\left[\theta_S \Pi_S e\right] = \Pi_S e \overset{(12)}{=} S\left(S^\top W S\right)^{-1} S^\top W e = S S^{-1} W^{-1}\left(S^\top\right)^{-1} S^\top W e = I e = e.$$

With this setup, the SGD method (24) becomes gradient descent:

$$x^{k+1} = x^k - \alpha \nabla f_{S_k}(x^k) \overset{(5)+(23)}{=} x^k - \alpha \nabla f(x^k). \qquad (25)$$

Example 2

(SGD with non-uniform sampling) Let S = e_i (unit basis vector in R^n) with probability p_i > 0 and let W = I. Then θ_{e_i} = 1/p_i is bias-correcting since

$$\mathbb{E}_{\mathcal{D}}\left[\theta_S \Pi_S e\right] \overset{(12)}{=} \sum_{i=1}^n p_i\,\frac{1}{p_i}\, e_i\left(e_i^\top e_i\right)^{-1} e_i^\top e = \sum_{i=1}^n e_i e_i^\top e = I e = e.$$

Let S_k = {i_k} be picked at iteration k. Then the SGD method (24) becomes SGD with non-uniform sampling:

$$x^{k+1} = x^k - \alpha \nabla f_{S_k}(x^k) \overset{(23)}{=} x^k - \frac{\alpha}{n p_{i_k}}\,\nabla f_{i_k}(x^k). \qquad (26)$$

Note that with this setup, and when p_i = 1/n for all i, the stochastic reformulation is identical to the original finite-sum problem. This is the case because f_{e_i}(x) = f_i(x).

Example 3

(minibatch SGD) Let S = e_S = Σ_{i∈S} e_i, where S = C ⊆ [n] with probability p_C. Let W = I. Assume that the cardinality of the set {C ⊆ [n] : C ∈ supp(S), i ∈ C} does not depend on i (and is equal to c_1 > 0). Then θ_{e_S} = 1/(c_1 p_S) is bias-correcting since

$$\mathbb{E}_{\mathcal{D}}\left[\theta_S \Pi_S e\right] \overset{(12)}{=} \sum_{C\in\operatorname{supp}(S)} p_C\,\frac{1}{c_1 p_C}\, e_C\left(e_C^\top e_C\right)^{-1} e_C^\top e = \sum_{C\in\operatorname{supp}(S)} \frac{1}{c_1}\, e_C = e.$$

Note that Π_{e_S} e = e_S. Assume that set S_k is picked in iteration k. Then the SGD method (24) becomes minibatch SGD with non-uniform sampling:

$$x^{k+1} = x^k - \alpha \nabla f_{S_k}(x^k) \overset{(23)}{=} x^k - \frac{\alpha}{n c_1 p_{S_k}}\sum_{i\in S_k}\nabla f_i(x^k). \qquad (27)$$

Finally, note that gradient descent (25) is a special case of (27) if we set p_{[n]} = 1 and p_C = 0 for all other subsets C of [n]. Likewise, SGD with non-uniform probabilities (26) is a special case of (27) if we set p_{{i}} = p_i > 0 for all i and p_C = 0 for all other subsets C of [n].
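The following numpy sketch instantiates the minibatch SGD method (27) for a small hand-built sampling with c_1 = 2 (every index appears in exactly two support sets) on a toy least-squares problem; the support, probabilities, data and stepsize are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(6)
n, d = 6, 4
A, b = rng.standard_normal((n, d)), rng.standard_normal(n)

# Each index i in [6] appears in exactly two of these sets, so c_1 = 2.
support = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5]),
           np.array([0, 2]), np.array([1, 4]), np.array([3, 5])]
probs = np.full(len(support), 1 / len(support))
c1 = 2

def grad_fi(x, i):
    return (A[i] @ x - b[i]) * A[i]

x, alpha = np.zeros(d), 0.05
for k in range(200):
    j = rng.choice(len(support), p=probs)
    g = sum(grad_fi(x, i) for i in support[j]) / (n * c1 * probs[j])  # as in (27)
    x -= alpha * g
```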

The controlled stochastic reformulation

Though SGD applied to the stochastic reformulation can generate several known algorithms in special cases, there is no reason to believe that the gradient estimates g^k will have diminishing variance (excluding extreme cases such as gradient descent). Here we handle this issue using control variates, a commonly used tool for reducing variance in Monte Carlo methods [13], introduced in [35] for designing variance-reduced stochastic gradient algorithms.

Given a random function zS(x), we introduce the controlled stochastic reformulation:

$$\min_{x\in\mathbb{R}^d} f(x) = \mathbb{E}_{\mathcal{D}}\left[f_{S,z}(x)\right], \quad \text{where} \quad f_{S,z}(x) \overset{\text{def}}{=} f_S(x) - z_S(x) + \mathbb{E}_{\mathcal{D}}\left[z_S(x)\right]. \qquad (28)$$

Since

$$\nabla f_{S,z}(x) \overset{\text{def}}{=} \nabla f_S(x) - \nabla z_S(x) + \mathbb{E}_{\mathcal{D}}\left[\nabla z_S(x)\right] \qquad (29)$$

is an unbiased estimator of the gradient ∇f(x), we can apply SGD to the controlled stochastic reformulation instead, which leads to the method

$$x^{k+1} = x^k - \alpha\left(\nabla f_{S_k}(x^k) - \nabla z_{S_k}(x^k) + \mathbb{E}_{\mathcal{D}}\left[\nabla z_S(x^k)\right]\right).$$

Reformulation (22) and method (24) are recovered as a special case with the choice z_S(x) ≡ 0. However, we now have the extra freedom to choose z_S(x) so as to control the variance of this stochastic gradient. In particular, if ∇z_S(x) and ∇f_S(x) are sufficiently correlated, then (29) will have a smaller variance than ∇f_S(x). For this reason, we choose a linear model for z_S(x) that mimics the stochastic function f_S(x).

Let J ∈ R^{d×n} be a matrix of parameters of the following linear model:

$$z_S(x) \overset{\text{def}}{=} \frac{\theta_S}{n}\left\langle J^\top x, \Pi_S e\right\rangle, \qquad \nabla z_S(x) = \frac{\theta_S}{n}\, J\,\Pi_S e. \qquad (30)$$

Note that this linear model has the same structure as f_S(x) in (22), except that F(x) has been replaced by the linear function J^⊤x. If S is an unbiased sketch (see (21)), we get E_D[∇z_S(x)] = (1/n) J e, which plugged into (28) and (29) together with the definition (22) of f_S gives the following unbiased estimates of f(x) and ∇f(x):

$$f_{S,J}(x) \overset{\text{def}}{=} f_{S,z}(x) = \frac{\theta_S}{n}\left\langle F(x) - J^\top x, \Pi_S e\right\rangle + \frac{1}{n}\left\langle J^\top x, e\right\rangle, \qquad (31)$$

and

$$\nabla f_{S,J}(x) \overset{\text{def}}{=} \nabla f_{S,z}(x) = \frac{\theta_S}{n}\left(\nabla F(x) - J\right)\Pi_S e + \frac{1}{n}\, J e. \qquad (32)$$

We collect this observation that (32) is unbiased in the following lemma for future reference.

Lemma 1

If S is an unbiased sketch (see Definition 2.1), then

$$\mathbb{E}_{\mathcal{D}}\left[\nabla f_{S,J}(x)\right] = \nabla f(x) \qquad (33)$$

for every J ∈ R^{d×n} and x ∈ R^d. That is, (32) is an unbiased estimate of the gradient of (1).

Now it remains to choose the matrix J, which we do by minimizing the variance of our gradient estimate.

The Jacobian estimate, variance reduction and the sketch residual

Since (32) gives an unbiased estimator of ∇f(x) for all J ∈ R^{d×n}, we can attempt to choose J that minimizes its variance. Minimizing the variance of (32) in terms of J will, for all sketching matrices of interest, lead to J = ∇F(x). This follows because

$$\mathbb{E}_{\mathcal{D}}\left\|\nabla f_{S,J}(x) - \nabla f(x)\right\|_2^2 \overset{(32)}{=} \mathbb{E}_{\mathcal{D}}\left\|\frac{1}{n} J\left(I - \theta_S\Pi_S\right)e - \frac{1}{n}\nabla F(x)\left(I - \theta_S\Pi_S\right)e\right\|_2^2 = \frac{1}{n^2}\,\mathbb{E}_{\mathcal{D}}\left\|\left(J - \nabla F(x)\right)\left(I - \theta_S\Pi_S\right)e\right\|_2^2 = \frac{1}{n^2}\operatorname{Tr}\left(\left(J - \nabla F(x)\right) B\left(J - \nabla F(x)\right)^\top\right) = \frac{1}{n^2}\left\|J - \nabla F(x)\right\|_B^2, \qquad (34)$$

where

$$B \overset{\text{def}}{=} \mathbb{E}_{\mathcal{D}}\left[\left(I - \theta_S\Pi_S\right) e e^\top \left(I - \theta_S\Pi_S\right)^\top\right] \overset{(21)}{=} \mathbb{E}_{\mathcal{D}}\left[\theta_S^2\,\Pi_S e e^\top \Pi_S^\top\right] - e e^\top \succeq 0, \qquad (35)$$

and we have used the weighted Frobenius norm (see (10), with B in place of W^{-1}).

For most distributions D of interest, the matrix B is positive definite. Letting v_S =def (I − θ_SΠ_S)e, we can bound the largest eigenvalue of the matrix B via Jensen's inequality as follows:

$$\lambda_{\max}(B) \overset{(35)}{=} \lambda_{\max}\left(\mathbb{E}_{\mathcal{D}}\left[v_S v_S^\top\right]\right) \le \mathbb{E}_{\mathcal{D}}\left[\lambda_{\max}\left(v_S v_S^\top\right)\right] = \mathbb{E}_{\mathcal{D}}\left\|v_S\right\|_2^2.$$

Combined with (34), we get the following bound on the variance of ∇f_{S,J}:

$$\mathbb{E}_{\mathcal{D}}\left\|\nabla f_{S,J}(x) - \nabla f(x)\right\|_2^2 \le \frac{\mathbb{E}_{\mathcal{D}}\left\|v_S\right\|_2^2}{n^2}\left\|J - \nabla F(x)\right\|_I^2.$$

This suggests that the variance is low when J is close to the true Jacobian ∇F(x), and when the second moment of v_S is small. If S is an unbiased sketch, then E_D[v_S] = 0, and hence E_D‖v_S‖²₂ is the variance of v_S. So, the lower the variance of (1/n) θ_S Π_S e as an estimator of (1/n) e, the lower the variance of ∇f_{S,J}(x) as an estimator of ∇f(x).

Let us now return to the identity (34) and its role in choosing J. Minimizing the variance in a single step is overly ambitious, since it requires setting J=F(x), which is costly. So instead, we propose to minimize (34) iteratively. But first, to make (34) more manageable, we upper-bound it using a norm defined by the weight matrix W as follows

$$\left\|J - \nabla F(x)\right\|_B^2 \le \rho\,\left\|J - \nabla F(x)\right\|_{W^{-1}}^2, \qquad (36)$$

where

$$\rho \overset{\text{def}}{=} \lambda_{\max}\left(W^{1/2} B\, W^{1/2}\right) \ge 0 \qquad (37)$$

is the largest eigenvalue of W^{1/2} B W^{1/2}. We refer to the constant ρ as the sketch residual, and it is a key constant affecting the convergence rate of JacSketch as captured by Theorem 1. The sketch residual ρ represents how much information is “lost” on average due to sketching and due to how well W^{-1} approximates B. We develop formulae and estimates of the sketch residual for several specific sketches of interest in Sect. 4.5.

Example 4

(Zero sketch residual) Consider the setup from Example 1 (gradient descent). That is, let S be invertible with probability one and let θ_S = 1 be the bias-correcting variable. Then Π_S e = e and hence B = 0, which means that ρ = 0.

Example 5

(Large sketch residual) Consider the setup from Example 2 (SGD with non-uniform probabilities). That is, let S = e_i (unit basis vector in R^n) with probability p_i > 0 and let W = I. Then θ_{e_i} = 1/p_i is a bias-correcting variable, and it is easy to show that B = Diag(1/p_1, …, 1/p_n) − e e^⊤. If we choose p_i = 1/n for all i, then ρ = n.
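Example 5 is easy to confirm numerically: the snippet below assembles B from (35) for S = e_i with p_i = 1/n and W = I, and checks that the sketch residual (37) equals n.

```python
import numpy as np

n = 5
p = np.full(n, 1.0 / n)
e = np.ones(n)

# B from (35): sum_i p_i * theta_i^2 * (Pi_i e)(Pi_i e)^T - e e^T, where
# theta_i = 1/p_i and Pi_{e_i} e = e_i when W = I.
B = sum(p[i] * (1 / p[i]) ** 2 * np.outer(np.eye(n)[i], np.eye(n)[i])
        for i in range(n))
B -= np.outer(e, e)                        # B = Diag(1/p) - e e^T = n I - e e^T here

rho = np.linalg.eigvalsh(B).max()          # sketch residual (37) with W = I
assert np.isclose(rho, n)
```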

We have switched from the B norm to a user-controlled W^{-1} norm because minimizing under the B norm proves impractical: B is a dense matrix for almost all practical sketches. With this norm change we now have the option to set W as a sparse matrix (e.g., the identity, or a diagonal matrix), as we explain in Remark 1 further down. However, the theory we develop allows for any symmetric positive definite matrix W.

We can now minimize (36) iteratively by only using a single sketch of the true Jacobian at each iteration. Suppose we have a current estimate J^k of the true Jacobian and a sketch ∇F(x^k) S_k of the true Jacobian. With this we can calculate an improved Jacobian estimate using a projection step

$$J^{k+1} = \underset{J\in\mathbb{R}^{d\times n}}{\operatorname{arg}}\;\min_{Y\in\mathbb{R}^{d\times\tau}} \frac{1}{2}\left\|J - \nabla F(x^k)\right\|_{W^{-1}}^2 \quad \text{subject to} \quad J = J^k + Y S_k^\top W, \qquad (38)$$

the solution of which, as it turns out, depends on ∇F(x^k) through its sketch ∇F(x^k) S_k only. That is, we choose the next Jacobian estimate J^{k+1} as close as possible to the true Jacobian ∇F(x^k) while restricted to a matrix subspace that passes through J^k. Thus, in light of (36), the variance is decreasing. The explicit solution to (38) is given by

$$J^{k+1} = J^k - \left(J^k - \nabla F(x^k)\right)\Pi_{S_k}. \qquad (39)$$

See Lemma B.1 in the appendix of an extended preprint version of this paper [10] or Theorem 4.1 in [12] for the proof. Note that, as alluded to before, J^{k+1} depends on ∇F(x^k) through its sketch only. Note also that (39) updates the Jacobian estimate by re-using the sketch ∇F(x^k) S_k which we also use when calculating the stochastic gradient (32).

Note that (39) gives the same formula for Jk+1 as (11) which we obtained by solving (9); i.e., by projecting Jk onto the solution set of (8). This is not a coincidence. In fact, the optimization problems (9) and (38) are mutually dual. This is also formally stated in Lemma B.1 in [10].

In the context of solving linear systems, this was observed in [11]. Therein, (9) is called the sketch-and-project method, whereas (38) is called the constrain-and-approximate problem. In this sense, the Jacobian sketching narrative we followed in Sect. 1.3 is dual to the Jacobian sketching narrative we are pursuing here.

Remark 1

(On the weight matrix and the cost) Loosely speaking, the denser the weighting matrix W, the higher the computational cost for updating the Jacobian using (39). Indeed, the sparsity pattern of W controls how many elements of the previous Jacobian estimate J^k need to be updated. This can be seen by re-arranging (39) as

$$J^{k+1} = J^k + Y^k S_k^\top W, \qquad (40)$$

where Y^k = (∇F(x^k) S_k − J^k S_k)(S_k^⊤ W S_k)^† ∈ R^{d×τ}. Although we have no control over the sparsity of Y^k, the matrix S_k^⊤ W can be sparse when both S_k and W are sparse. This will be key in keeping the cost of the update (40) proportional to d×τ, as opposed to n×d when W is dense. This is why we consider a diagonal matrix W = Diag(w_1,…,w_n) in all of the special complexity results in Table 1. While it is clear that some non-diagonal sparse matrices W could also be used, we leave such considerations to future work.

JacSketch algorithm

Combining formula (32) for the stochastic gradient of the controlled stochastic reformulation with formula (39) for the update of the Jacobian estimate, we arrive at our JacSketch algorithm (Algorithm 1). [Algorithm 1 (JacSketch) pseudocode figure omitted.]

Typically, one should not implement the algorithm exactly as presented above. The most efficient implementation of JacSketch will depend heavily on the structure of W, the distribution D, and so on. For instance, in the special case of minibatch SAGA, as presented in Sect. 1.4, the update of the Jacobian (77) has a particularly simple form: we maintain a single matrix J ∈ R^{d×n} and keep replacing its columns by the appropriate stochastic gradients, as computed. Moreover, in the case of linear predictors, as is well known, a much more memory-efficient implementation is possible. In particular, if f_i(x) = ϕ_i(a_i^⊤x) for some loss function ϕ_i and a data vector a_i ∈ R^d for all i, then ∇f_i(x) = ϕ_i′(a_i^⊤x) a_i, which means that the ith stochastic gradient always points along the fixed direction a_i. In such a situation, it is sufficient to keep track of the scalar loss derivatives ϕ_i′(a_i^⊤x) only. Similar comments can be made about the step (16) for computing the gradient estimate g^k.
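As an illustration of this memory-efficient implementation for linear predictors, here is a hedged sketch of uniform SAGA for logistic loss that stores only the n scalar derivatives ϕ_i′(a_i^⊤x) and maintains J^k e incrementally; the data, loss and stepsize are our own illustrative assumptions, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
n, d = 200, 10
A = rng.standard_normal((n, d))
y = rng.choice([-1.0, 1.0], n)

def phi_prime(t, i):
    """Scalar derivative of the logistic loss phi_i(t) = log(1 + exp(-y_i t))."""
    return -y[i] / (1.0 + np.exp(y[i] * t))

x, alpha = np.zeros(d), 0.05
s = np.zeros(n)                   # stored scalars phi_i'(a_i^T x): O(n) memory
Je = A.T @ s                      # J^k e = sum_i s_i a_i, maintained incrementally

for k in range(3000):
    i = rng.integers(n)
    s_new = phi_prime(A[i] @ x, i)
    g = Je / n + (s_new - s[i]) * A[i]   # SAGA estimate (78), with J_{:i} = s_i a_i
    Je += (s_new - s[i]) * A[i]
    s[i] = s_new
    x -= alpha * g
```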

A window into biased estimates and SAG

We will now take a small detour from the main flow of the paper to develop an alternative viewpoint of Algorithm 1 and also make a bridge to biased methods such as SAG [29].

The simple observation that

$$\nabla f(x^k) = \frac{1}{n}\,\nabla F(x^k)\, e \qquad (41)$$

suggests that ĝ^k = (1/n) J^{k+1} e, where J^{k+1} ≈ ∇F(x^k), would give a good estimate of the gradient. To decrease the variance of ĝ^k, we can also use the same update of the Jacobian estimate (39), since

$$\mathbb{E}\left\|\hat{g}^k - \nabla f(x^k)\right\|_2^2 = \frac{1}{n^2}\,\mathbb{E}\left\|\left(J^{k+1} - \nabla F(x^k)\right)e\right\|_2^2 = \frac{1}{n^2}\,\mathbb{E}\left\|\left(J^{k+1} - \nabla F(x^k)\right)W^{-1/2} W^{1/2} e\right\|_2^2 \le \frac{e^\top W e}{n^2}\,\mathbb{E}\left\|J^{k+1} - \nabla F(x^k)\right\|_{W^{-1}}^2.$$

Thus, if E‖J^{k+1} − ∇F(x^k)‖²_{W^{-1}} converges to zero, so will E‖ĝ^k − ∇f(x^k)‖²₂. Unfortunately, though, the combination of the gradient estimate ĝ^k = (1/n) J^{k+1} e and a Jacobian estimate updated via (39) will almost always give a biased estimator. For example, if we define D by setting S = e_i with probability 1/n and let W = I, then we recover the celebrated SAG method [29] and its biased estimator of the gradient.

The issue with using (1/n) J^{k+1} e as an estimator of the gradient is that it decreases the variance too aggressively, neglecting the bias. However, this can be fixed by trading off variance for bias. One way to do this is to introduce the random variable θ_S as a stochastic relaxation parameter:

$$\hat{g}^k = \frac{1-\theta_{S_k}}{n}\, J^k e + \frac{\theta_{S_k}}{n}\, J^{k+1} e. \qquad (42)$$

If θS is bias correcting, we recover the unbiased SAGA estimator (13). By allowing θS to be closer to one, however, we will get more bias and lower variance. We leave this strategy of building biased estimators for future work. It is conceivable that SAG could be analyzed using reasonably small modifications of the tools developed in this paper. Doing this would be important due to at least four reasons: (i) SAG was the first variance-reduced method for problem (1), (ii) the existing analysis of SAG is not satisfying, (iii) one may be able to obtain a better rate, (iv) one may be able to develop and analyze novel variants of SAG.

Convergence analysis for general sketches

In this section we establish a convergence theorem (Theorem 1) which applies to general sketching matrices S (that is, arbitrary distributions D from which they are sampled). By design, we keep the setting in this section general, and only deal with specific instantiations and special cases in Sect. 4.

Two expected smoothness constants

We first formulate two expected smoothness assumptions tying together f, its Jacobian F(x) and the distribution D from which we pick sketch matrices S. These assumptions, and the associated expected smoothness constants, play a key role in the convergence result.

Our first assumption concerns the expected smoothness of the stochastic gradients ∇f_S of the stochastic reformulation (22).

Assumption 3.1

(Expected smoothness of the stochastic gradient) There is a constant L_1 > 0 such that

$$\mathbb{E}_{\mathcal{D}}\left\|\nabla f_S(x) - \nabla f_S(x^\star)\right\|_2^2 \le 2 L_1\left(f(x) - f(x^\star)\right), \quad \forall x\in\mathbb{R}^d. \qquad (43)$$

It is easy to see from (23) and (32) that

$$\left\|\nabla f_S(x) - \nabla f_S(y)\right\|_2^2 = \frac{1}{n^2}\left\|\left(\nabla F(x) - \nabla F(y)\right)\theta_S\Pi_S e\right\|_2^2 = \left\|\nabla f_{S,J}(x) - \nabla f_{S,J}(y)\right\|_2^2 \qquad (44)$$

for all J ∈ R^{d×n} and x, y ∈ R^d, and hence the expected smoothness assumption can equivalently be understood from the point of view of the controlled stochastic reformulation. The above assumption is not particularly restrictive. Indeed, in Theorem 2 we provide formulae for L_1 for smooth functions f and for a class of minibatch sketches S = I_S. These formulae can be seen as proofs that Assumption 3.1 is satisfied for a large class of practically relevant sketches S and functions f.

Our second expected smoothness assumption concerns the Jacobian of F.

Assumption 3.2

(Expected smoothness of the Jacobian) There is a constant L_2 > 0 such that

$$\mathbb{E}_{\mathcal{D}}\left\|\left(\nabla F(x) - \nabla F(x^\star)\right)\Pi_S\right\|_{W^{-1}}^2 \le 2 L_2\left(f(x) - f(x^\star)\right), \quad \forall x\in\mathbb{R}^d, \qquad (45)$$

where the norm is the weighted Frobenius norm defined in (10).

It is easy to see (see Lemma 4, Eq. (60)) that for any matrix M ∈ R^{d×n} we have E_D‖M Π_S‖²_{W^{-1}} = ‖M‖²_{E_D[H_S]}, where

$$H_S \overset{\text{def}}{=} S\left(S^\top W S\right)^{\dagger} S^\top \overset{(12)}{=} \Pi_S W^{-1}. \qquad (46)$$

Therefore, (45) can be equivalently written in the form

$$\left\|\nabla F(x) - \nabla F(x^\star)\right\|_{\mathbb{E}_{\mathcal{D}}[H_S]}^2 \le 2 L_2\left(f(x) - f(x^\star)\right), \quad \forall x\in\mathbb{R}^d, \qquad (47)$$

which suggests that the above condition indeed measures the variation/smoothness of the Jacobian under a specific weighted Frobenius norm.

Stochastic contraction number

By the stochastic contraction number associated with W and D we mean the constant defined by

$$\kappa = \kappa(\mathcal{D}, W) \overset{\text{def}}{=} \lambda_{\min}\left(W^{1/2}\,\mathbb{E}_{\mathcal{D}}\left[H_S\right] W^{1/2}\right) = \lambda_{\min}\left(\mathbb{E}_{\mathcal{D}}\left[\Pi_S\right]\right), \qquad (48)$$

where the second equality holds since W^{1/2} H_S W^{1/2} = W^{1/2} Π_S W^{-1/2}.

In the next lemma we show that 0 ≤ κ ≤ 1 for all distributions D for which the expectation in (48) exists.

Lemma 2

For all distributions D, we have the bounds 0 ≤ κ ≤ 1.

Proof

It is not difficult to show that W^{1/2} H_S W^{1/2} =(46) W^{1/2} Π_S W^{-1/2} is the orthogonal projection matrix that projects onto Range(W^{1/2} S). Consequently, 0 ⪯ W^{1/2} H_S W^{1/2} ⪯ I and, after taking expectations, we get 0 ⪯ W^{1/2} E_D[H_S] W^{1/2} ⪯ I. Finally, this implies that

$$0 \le \lambda_{\max}\left(I - W^{1/2}\,\mathbb{E}_{\mathcal{D}}\left[H_S\right] W^{1/2}\right) = 1 - \lambda_{\min}\left(W^{1/2}\,\mathbb{E}_{\mathcal{D}}\left[H_S\right] W^{1/2}\right) \le 1. \qquad (49)$$

In our convergence theorem we will assume that κ>0. This can be achieved by choosing a suitable distribution D and it holds trivially for all the examples we develop. The condition κ>0 essentially says that the distribution is sufficiently rich. This contraction number was first proposed in [11] in the context of randomized algorithms for solving linear systems. We refer the reader to that work for details on sufficient assumptions about D guaranteeing κ>0. Below we give an example.

Example 6

Let W ≻ 0, and let D be given by setting S = e_i with probability p_i > 0. Then

$$\kappa \overset{(48)}{=} \lambda_{\min}\left(W^{1/2}\,\mathbb{E}_{\mathcal{D}}\left[H_S\right] W^{1/2}\right) = \lambda_{\min}\left(\sum_{i=1}^n \frac{p_i}{e_i^\top W e_i}\, W^{1/2} e_i e_i^\top W^{1/2}\right).$$

Since the vectors W^{1/2} e_i span R^n and p_i > 0 for all i, the matrix is positive definite, and hence κ > 0. In particular, when W = I, the expected projection matrix is equal to Diag(p_1,…,p_n) and κ = min_i p_i > 0. If instead of the unit basis vectors {e_i} we use vectors that span R^n, similar arguments show that κ > 0.
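Example 6 with W = I can be checked in a few lines: E_D[Π_S] = Diag(p_1,…,p_n), so the stochastic contraction number (48) is min_i p_i. The probabilities below are an arbitrary illustrative choice.

```python
import numpy as np

n = 4
p = np.array([0.1, 0.2, 0.3, 0.4])

# For S = e_i with W = I, Pi_S = e_i e_i^T, so E_D[Pi_S] = Diag(p).
E_Pi = sum(p[i] * np.outer(np.eye(n)[i], np.eye(n)[i]) for i in range(n))
kappa = np.linalg.eigvalsh(E_Pi).min()     # stochastic contraction number (48)
assert np.isclose(kappa, p.min())
```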

Convergence theorem

Our main convergence result, which we shall present shortly, holds for μ-strongly convex functions. However, it turns out our results hold for the somewhat larger family of functions that are quasi-strongly convex.

Assumption 3.3

(Quasi-strong convexity) The function f satisfies, for some μ > 0,

$$f(x^\star) \ge f(x) + \left\langle\nabla f(x), x^\star - x\right\rangle + \frac{\mu}{2}\left\|x^\star - x\right\|_2^2, \quad \forall x\in\mathbb{R}^d, \qquad (50)$$

where x⋆ = arg min_{x∈R^d} f(x).

We are now ready to present the main result of this section.

Theorem 1

(Convergence of JacSketch for General Sketches) Let W ≻ 0. Let f satisfy Assumption 3.3. Let Assumption 2.1 be satisfied (i.e., S is an unbiased sketch and θ_S is the associated bias-correcting random variable). Let the expected smoothness Assumptions 3.1 and 3.2 be satisfied. Assume that κ > 0. Let the sketch residual be defined as in (37), i.e.,

$$\rho = \rho(\theta_S, \mathcal{D}, W) \overset{(37)}{=} \lambda_{\max}\left(W^{1/2}\left(\mathbb{E}_{\mathcal{D}}\left[\theta_S^2\,\Pi_S e e^\top \Pi_S^\top\right] - e e^\top\right) W^{1/2}\right) \ge 0. \qquad (51)$$

Choose any x⁰ ∈ R^d and J⁰ ∈ R^{d×n}. Let {x^k, J^k}_{k≥0} be the random iterates produced by JacSketch (Algorithm 1). Consider the Lyapunov function

$$\Psi^k \overset{\text{def}}{=} \left\|x^k - x^\star\right\|_2^2 + \frac{\alpha}{2 L_2}\left\|J^k - \nabla F(x^\star)\right\|_{W^{-1}}^2. \qquad (52)$$

If the stepsize satisfies

$$0 \le \alpha \le \min\left\{\frac{1}{4 L_1},\; \frac{\kappa}{4 L_2 \rho / n^2 + \mu}\right\}, \qquad (53)$$

then

$$\mathbb{E}\left[\Psi^k\right] \le (1 - \mu\alpha)^k\,\Psi^0. \qquad (54)$$

If we choose α to be equal to the upper bound in (53), then

$$k \ge \max\left\{\frac{4 L_1}{\mu},\; \frac{1}{\kappa} + \frac{4\rho L_2}{\kappa\mu n^2}\right\}\log\frac{1}{\epsilon} \quad \Longrightarrow \quad \mathbb{E}\left[\Psi^k\right] \le \epsilon\,\Psi^0. \qquad (55)$$

Recall that the iteration complexity expression from (55) is listed in row 1 of Table 1.

The Lyapunov function we use is simply the sum of the squared distance from x^k to the optimal x⋆ and the (weighted, squared) distance of our Jacobian estimate J^k from the Jacobian ∇F(x⋆) at the optimum. Hence, the theorem says that both the iterates {x^k} and the Jacobian estimates {J^k} converge.

Projection lemmas and the stochastic contraction number κ

In this section we collect some basic results on projections. Recall from (12) that Π_S = S(S^⊤WS)^†S^⊤W and from (46) that H_S = S(S^⊤WS)^†S^⊤.

Lemma 3

$$\Pi_S W^{-1}\left(I - \Pi_S\right)^\top = 0. \qquad (56)$$

Furthermore,

$$\mathbb{E}_{\mathcal{D}}\left[\Pi_S W^{-1}\Pi_S^\top\right] = \mathbb{E}_{\mathcal{D}}\left[H_S\right] \quad \text{and} \quad \mathbb{E}_{\mathcal{D}}\left[\left(I - \Pi_S\right)W^{-1}\left(I - \Pi_S\right)^\top\right] = W^{-1} - \mathbb{E}_{\mathcal{D}}\left[H_S\right]. \qquad (57)$$

Proof

Using the pseudoinverse property A^†AA^† = A^†, we have that

$$\Pi_S W^{-1}\Pi_S^\top \overset{(12)}{=} S\left(S^\top W S\right)^{\dagger}\left(S^\top W S\right)\left(S^\top W S\right)^{\dagger} S^\top \overset{(46)}{=} \Pi_S W^{-1} = H_S, \qquad (58)$$

and as a consequence (56) holds. Moreover,

$$\left(I - \Pi_S\right)W^{-1}\left(I - \Pi_S\right)^\top \overset{(56)}{=} W^{-1}\left(I - \Pi_S\right)^\top \overset{(46)}{=} W^{-1} - H_S. \qquad (59)$$

Finally, taking expectations in (58) and (59) gives (57).

Lemma 4

For any matrices M, N ∈ R^{d×n} we have the identities

$$\left\|M\left(I - \Pi_S\right) + N\Pi_S\right\|_{W^{-1}}^2 = \left\|M\left(I - \Pi_S\right)\right\|_{W^{-1}}^2 + \left\|N\Pi_S\right\|_{W^{-1}}^2$$

and

$$\mathbb{E}_{\mathcal{D}}\left\|N\Pi_S\right\|_{W^{-1}}^2 = \left\|N\right\|_{\mathbb{E}_{\mathcal{D}}[H_S]}^2. \qquad (60)$$

Furthermore,

$$\mathbb{E}_{\mathcal{D}}\left\|M\left(I - \Pi_S\right) + N\Pi_S\right\|_{W^{-1}}^2 \le (1-\kappa)\left\|M\right\|_{W^{-1}}^2 + \left\|N\right\|_{\mathbb{E}_{\mathcal{D}}[H_S]}^2. \qquad (61)$$

Proof

First, note that the cross terms vanish by (56), so that

$$\left\|M\left(I - \Pi_S\right) + N\Pi_S\right\|_{W^{-1}}^2 = \left\|M\left(I - \Pi_S\right)\right\|_{W^{-1}}^2 + \left\|N\Pi_S\right\|_{W^{-1}}^2.$$

By taking expectations in D and using (57), we get

$$\mathbb{E}_{\mathcal{D}}\left\|M\left(I - \Pi_S\right) + N\Pi_S\right\|_{W^{-1}}^2 = \operatorname{Tr}\left(M\left(W^{-1} - \mathbb{E}_{\mathcal{D}}\left[H_S\right]\right)M^\top\right) + \left\|N\right\|_{\mathbb{E}_{\mathcal{D}}[H_S]}^2 \le (1-\kappa)\left\|M\right\|_{W^{-1}}^2 + \left\|N\right\|_{\mathbb{E}_{\mathcal{D}}[H_S]}^2,$$

where in the last step we used the estimate

$$W^{-1} - \mathbb{E}_{\mathcal{D}}\left[H_S\right] = W^{-1/2}\left(I - W^{1/2}\,\mathbb{E}_{\mathcal{D}}\left[H_S\right] W^{1/2}\right)W^{-1/2} \preceq \lambda_{\max}\left(I - W^{1/2}\,\mathbb{E}_{\mathcal{D}}\left[H_S\right] W^{1/2}\right)W^{-1} \overset{(49)}{\preceq} (1-\kappa)\, W^{-1}.$$

Key lemmas

We first establish two lemmas. The first lemma provides an upper bound on the quality of the new Jacobian estimate in terms of the quality of the current estimate and the function suboptimality. If the second term on the right hand side were not there, the lemma would be postulating a contraction on the quality of the Jacobian estimate.

Lemma 5

Let Assumption 3.2 be satisfied. Then the iterates of Algorithm 1 satisfy

$$\mathbb{E}_{\mathcal{D}}\left\|J^{k+1} - \nabla F(x^\star)\right\|_{W^{-1}}^2 \le (1-\kappa)\left\|J^k - \nabla F(x^\star)\right\|_{W^{-1}}^2 + 2 L_2\left(f(x^k) - f(x^\star)\right), \qquad (62)$$

where κ is defined in (48).

Proof

Subtracting ∇F(x⋆) from both sides of (39) gives

$$J^{k+1} - \nabla F(x^\star) \overset{(39)}{=} \underbrace{\left(J^k - \nabla F(x^\star)\right)}_{M}\left(I - \Pi_{S_k}\right) + \underbrace{\left(\nabla F(x^k) - \nabla F(x^\star)\right)}_{N}\Pi_{S_k}. \qquad (63)$$

Taking norms on both sides, then expectation with respect to S_k, and then using Lemma 4, we get

$$\mathbb{E}_{\mathcal{D}}\left\|J^{k+1} - \nabla F(x^\star)\right\|_{W^{-1}}^2 \overset{(61)}{\le} (1-\kappa)\left\|J^k - \nabla F(x^\star)\right\|_{W^{-1}}^2 + \left\|\nabla F(x^k) - \nabla F(x^\star)\right\|_{\mathbb{E}_{\mathcal{D}}[H_S]}^2 \overset{(47)}{\le} (1-\kappa)\left\|J^k - \nabla F(x^\star)\right\|_{W^{-1}}^2 + 2 L_2\left(f(x^k) - f(x^\star)\right).$$

We now bound the second moment of g^k. The next lemma implies that as x^k approaches x⋆ and J^k approaches ∇F(x⋆), the variance of g^k approaches zero. This is a key property of JacSketch which elevates it into the ranks of variance-reduced methods.

Lemma 6

Let S be an unbiased sketch. Let Assumption 3.1 be satisfied (i.e., assume that inequality (43) holds for some L_1 > 0). Then the second moment of the estimated gradient is bounded by

$$\mathbb{E}_{\mathcal{D}}\left\|g^k\right\|_2^2 \le 4 L_1\left(f(x^k) - f(x^\star)\right) + \frac{2\rho}{n^2}\left\|J^k - \nabla F(x^\star)\right\|_{W^{-1}}^2, \qquad (64)$$

where ρ is defined in (51).

Proof

Adding and subtracting (θ_{S_k}/n) ∇F(x⋆) Π_{S_k} e in (13) gives

$$g^k = \underbrace{\frac{1}{n} J^k e - \frac{\theta_{S_k}}{n}\left(J^k - \nabla F(x^\star)\right)\Pi_{S_k} e}_{b} + \underbrace{\frac{\theta_{S_k}}{n}\left(\nabla F(x^k) - \nabla F(x^\star)\right)\Pi_{S_k} e}_{a}.$$

Taking norms on both sides and using the bound ‖a+b‖²₂ ≤ 2‖a‖²₂ + 2‖b‖²₂ gives

$$\left\|g^k\right\|_2^2 \le \underbrace{\frac{2}{n^2}\left\|\left(\nabla F(x^k) - \nabla F(x^\star)\right)\theta_{S_k}\Pi_{S_k} e\right\|_2^2}_{a^k} + \underbrace{\frac{2}{n^2}\left\|\theta_{S_k}\left(J^k - \nabla F(x^\star)\right)\Pi_{S_k} e - J^k e\right\|_2^2}_{b^k}. \qquad (65)$$

In view of Assumption 3.1 (combine (43) and (44)), we have

$$\mathbb{E}_{\mathcal{D}}\left[a^k\right] \le 4 L_1\left(f(x^k) - f(x^\star)\right), \qquad (66)$$

where the expectation is taken with respect to S_k. Let us now bound E_D[b^k]. Using the fact that ∇F(x⋆) e = 0, we can write

$$\mathbb{E}_{\mathcal{D}}\left[b^k\right] = \frac{2}{n^2}\,\mathbb{E}_{\mathcal{D}}\left\|\left(J^k - \nabla F(x^\star)\right)\theta_{S_k}\Pi_{S_k} e - \left(J^k - \nabla F(x^\star)\right)e\right\|_2^2 = \frac{2}{n^2}\,\mathbb{E}_{\mathcal{D}}\left\|\left(J^k - \nabla F(x^\star)\right)\left(\theta_{S_k}\Pi_{S_k} - I\right)e\right\|_2^2 = \frac{2}{n^2}\,\mathbb{E}_{\mathcal{D}}\left[\operatorname{Tr}\left(e^\top\left(\theta_{S_k}\Pi_{S_k} - I\right)^\top\left(J^k - \nabla F(x^\star)\right)^\top\left(J^k - \nabla F(x^\star)\right)\left(\theta_{S_k}\Pi_{S_k} - I\right)e\right)\right] = \frac{2}{n^2}\operatorname{Tr}\left(W^{-1/2}\left(J^k - \nabla F(x^\star)\right)^\top\left(J^k - \nabla F(x^\star)\right)W^{-1/2}\;\mathbb{E}_{\mathcal{D}}\left[W^{1/2}\left(\theta_{S_k}\Pi_{S_k} - I\right)e e^\top\left(\theta_{S_k}\Pi_{S_k} - I\right)^\top W^{1/2}\right]\right).$$

If we now let v = W^{1/2}(θ_{S_k}Π_{S_k} − I)e and M = (J^k − ∇F(x⋆)) W^{-1/2}, then we can continue:

$$\mathbb{E}_{\mathcal{D}}\left[b^k\right] = \frac{2}{n^2}\operatorname{Tr}\left(M^\top M\,\mathbb{E}_{\mathcal{D}}\left[v v^\top\right]\right) \le \frac{2}{n^2}\,\lambda_{\max}\left(\mathbb{E}_{\mathcal{D}}\left[v v^\top\right]\right)\operatorname{Tr}\left(M^\top M\right) \overset{(68)}{=} \frac{2\rho}{n^2}\left\|J^k - \nabla F(x^\star)\right\|_{W^{-1}}^2, \qquad (67)$$

where in the last step we have used the assumption that θ_{S_k} is bias-correcting:

$$\lambda_{\max}\left(\mathbb{E}_{\mathcal{D}}\left[v v^\top\right]\right) \overset{(21)}{=} \lambda_{\max}\left(W^{1/2}\,\mathbb{E}_{\mathcal{D}}\left[\theta_{S_k}^2\,\Pi_{S_k} e e^\top \Pi_{S_k}^\top\right] W^{1/2} - W^{1/2} e e^\top W^{1/2}\right) \overset{(51)}{=} \rho. \qquad (68)$$

It now only remains to substitute (66) and (67) into (65) to arrive at (64).

Proof of Theorem 1

With the help of the above lemmas, we now proceed to the proof of the theorem. In view of (50), we have

$$\left\langle\nabla f(y), y - x^\star\right\rangle \ge f(y) - f(x^\star) + \frac{\mu}{2}\left\|y - x^\star\right\|_2^2. \qquad (69)$$

By using the relationship x^{k+1} = x^k − αg^k, the fact that g^k is an unbiased estimate of the gradient ∇f(x^k), and using one-point strong convexity (69), we get

$$\mathbb{E}_{\mathcal{D}}\left\|x^{k+1} - x^\star\right\|_2^2 \overset{(2)}{=} \mathbb{E}_{\mathcal{D}}\left\|x^k - x^\star - \alpha g^k\right\|_2^2 \overset{(33)}{=} \left\|x^k - x^\star\right\|_2^2 - 2\alpha\left\langle\nabla f(x^k), x^k - x^\star\right\rangle + \alpha^2\,\mathbb{E}_{\mathcal{D}}\left\|g^k\right\|_2^2 \overset{(69)}{\le} (1-\alpha\mu)\left\|x^k - x^\star\right\|_2^2 + \alpha^2\,\mathbb{E}_{\mathcal{D}}\left\|g^k\right\|_2^2 - 2\alpha\left(f(x^k) - f(x^\star)\right). \qquad (70)$$

Next, applying Lemma 6 leads to the estimate

$$\mathbb{E}_{\mathcal{D}}\left\|x^{k+1} - x^\star\right\|_2^2 \le (1-\alpha\mu)\left\|x^k - x^\star\right\|_2^2 + 2\alpha\left(2\alpha L_1 - 1\right)\left(f(x^k) - f(x^\star)\right) + \frac{2\alpha^2\rho}{n^2}\left\|J^k - \nabla F(x^\star)\right\|_{W^{-1}}^2. \qquad (71)$$

Let σ = 1/(2L_2). Adding σα E_D‖J^{k+1} − ∇F(x⋆)‖²_{W^{-1}} to both sides of the above inequality and substituting in the definition of Ψ^k from (52), it follows that

$$\mathbb{E}_{\mathcal{D}}\left[\Psi^{k+1}\right] \overset{(71)}{\le} (1-\alpha\mu)\left\|x^k - x^\star\right\|_2^2 + 2\alpha\left(2\alpha L_1 - 1\right)\left(f(x^k) - f(x^\star)\right) + \frac{2\alpha^2\rho}{n^2}\left\|J^k - \nabla F(x^\star)\right\|_{W^{-1}}^2 + \sigma\alpha\,\mathbb{E}_{\mathcal{D}}\left\|J^{k+1} - \nabla F(x^\star)\right\|_{W^{-1}}^2 \overset{\text{(Lemma 5)}}{\le} (1-\alpha\mu)\left\|x^k - x^\star\right\|_2^2 + 2\alpha\underbrace{\left(L_2\sigma + 2\alpha L_1 - 1\right)}_{\mathrm{I}}\left(f(x^k) - f(x^\star)\right) + \sigma\alpha\underbrace{\left(1 - \kappa + \frac{2\alpha\rho}{\sigma n^2}\right)}_{\mathrm{II}}\left\|J^k - \nabla F(x^\star)\right\|_{W^{-1}}^2. \qquad (72)$$

We now choose α so that I ≤ 0 and II ≤ 1 − αμ, which can be written as

$$\alpha \le \frac{1 - L_2\sigma}{2 L_1} \quad \text{and} \quad \alpha \le \frac{\kappa}{2\rho/(\sigma n^2) + \mu}. \qquad (73)$$

If α satisfies the above two inequalities, then (72) takes on the simplified form E_D[Ψ^{k+1}] ≤ (1−αμ)Ψ^k. By taking expectation again and using the tower rule, we get E[Ψ^k] ≤ (1−αμ)^k Ψ⁰. Note that as long as k ≥ (1/(αμ)) log(1/ϵ), we have E[Ψ^k] ≤ ϵΨ⁰. Recalling that σ = 1/(2L_2), and choosing α to be the minimum of the two upper bounds in (73), gives the stepsize bound (53), which in turn leads to (55).

Minibatch sketches

In this section we focus on special cases of Algorithm 1 in which one computes ∇f_i(x^k) for i ∈ S_k, where S_k is a random subset (minibatch) of [n] chosen in each iteration according to some fixed probability law. As we have seen in the introduction, this is achieved by choosing S_k = I_{S_k}.

We say that S is a minibatch sketch if S = I_S for some random set (sampling) S, where I_S ∈ R^{n×|S|} is the column submatrix of the n×n identity matrix I associated with the columns indexed by the set S. That is, the distribution D from which the sketches S are sampled is defined by

$$\mathbb{P}\left[S = I_C\right] = p_C, \quad C \subseteq [n],$$

where Σ_{C⊆[n]} p_C = 1 and p_C ≥ 0 for all C.

Samplings

We now formalize the notion of a random set, which we will refer to by the name sampling. A sampling is a random set-valued mapping with values being the subsets of [n]. A sampling S is uniquely characterized by the probabilities p_C =def P[S = C] associated with every subset C of [n].

Definition 1

(Types of samplings) We say that a sampling S is non-vacuous if P[S = ∅] = 0 (i.e., p_∅ = 0). Let p_i =def P[i ∈ S] = Σ_{C: i∈C} p_C. We say that S is proper if p_i > 0 for all i. We say that S is uniform if p_i = p_j for all i, j. We say that S is τ-uniform if it is uniform and |S| = τ with probability 1. In particular, the unique sampling which assigns equal probabilities to all subsets of [n] of cardinality τ and zero probabilities to all other subsets is called the τ-nice sampling.

We refer the reader to [22, 25] for a background reading on samplings and their properties.

Definition 2

(Support) The support of a sampling S is the set of subsets of [n] which are chosen by S with positive probability: supp(S) =def {C : p_C > 0}. We say that S has uniform support if

$$c_1 \overset{\text{def}}{=} \left|\left\{C\in\operatorname{supp}(S) : i\in C\right\}\right| = \left|\left\{C\in\operatorname{supp}(S) : j\in C\right\}\right|$$

for all i, j ∈ [n]. In such a case we say that the support is c_1-uniform.

To illustrate the above concepts, we now list a few examples with n=4.

Example 7

The sampling defined by setting p_{{1,2}} = p_{{3,4}} = 0.5 is non-vacuous, proper, 2-uniform (p_i = 0.5 for all i and |S| = 2 with probability 1), and has 1-uniform support. If we change the probabilities to p_{{1,2}} = 0.4 and p_{{3,4}} = 0.6, the sampling is no longer uniform (since p_1 = 0.4 ≠ 0.6 = p_3), but it still has 1-uniform support, is proper and non-vacuous. Hence, a sampling with uniform support need not be uniform. On the other hand, a uniform sampling need not have uniform support. As an example, consider the sampling S defined via p_{{1}} = 0.4, p_{{2,3}} = p_{{3,4}} = p_{{2,4}} = 0.2. It is uniform (since p_i = 0.4 for all i). However, while element 1 appears in a single set of its support, elements 2, 3 and 4 each appear in two sets. So, this sampling does not have uniform support.

Example 8

A uniform sampling need not be τ-uniform for any τ. For example, the sampling defined by setting p_{{1,2,3,4}} = 0.5, p_{{1,2}} = 0.25 and p_{{3,4}} = 0.25 is uniform (since p_i = 0.75 for all i), but as it assigns positive probabilities to sets of at least two different cardinalities, it is not τ-uniform for any τ.

Example 9

Further, the sampling defined by setting p_{{1,2}} = p_{{1,3}} = p_{{1,4}} = p_{{2,3}} = p_{{2,4}} = p_{{3,4}} = 1/6 is non-vacuous, 2-uniform (p_i = 1/2 for all i and |S| = 2 with probability 1), and has 3-uniform support. The sampling defined by setting p_{{1,2}} = p_{{2,3}} = p_{{3,1}} = 1/3 is non-vacuous, proper, 2-uniform (p_i = 2/3 for all i and |S| = 2 with probability 1), and has 2-uniform support.

Note that a sampling with uniform support is necessarily proper as long as c_1 > 0. However, it need not be non-vacuous. For instance, the sampling S defined by setting p_∅ = 1 has 0-uniform support and is vacuous. From now on, we only consider samplings with the following properties.

Assumption 4.1

S is non-vacuous and has c_1-uniform support with c_1 ≥ 1.

Note that if S is a non-vacuous sampling with 1-uniform support, then its support is necessarily a partition of [n]. We shall pay specific attention to such samplings in Sect. 5, as for them we can develop a stronger analysis than that provided by Theorem 1.

Minibatch sketches and projections

In the next result we describe some basic properties of the projection matrix Π_S = S(S^⊤WS)^†S^⊤W associated with a minibatch sketch S.

Lemma 7

Let W = Diag(w_1,…,w_n). Let S be any sampling, let S = I_S be the associated minibatch sketch, and let P be the probability matrix associated with sampling S: P_{ij} = P[i ∈ S & j ∈ S]. Then:

  • (i) Π_S = I_S I_S^⊤. This is the diagonal matrix with the ith diagonal element equal to 1 if i ∈ S, and 0 if i ∉ S.

  • (ii) Π_S e = e_S =def Σ_{i∈S} e_i.

  • (iii) E_D[Π_S e e^⊤ Π_S^⊤] = Σ_{C⊆[n]} p_C e_C e_C^⊤ = P.

  • (iv) E_D[Π_S] = Diag(P).

  • (v) The stochastic contraction number defined in (48) is given by κ = min_i p_i.

  • (vi) Let S satisfy Assumption 4.1. Then the random variable

$$\theta_S \overset{\text{def}}{=} \frac{1}{c_1 p_S}, \qquad (74)$$

    defined on supp(S), is bias-correcting. That is, E_D[θ_S Π_S e] = e.

Proof

  • (i)

    This follows by noting that ISWIS is the |S|×|S| diagonal matrix with diagonal entries corresponding to wi for iS, which in turn can be used to show that (ISWIS)-1ISW=IS.

  • (ii)

    This follows from (i) by noting that ISe is the vector of all ones in R|S|.

  • (iii)

    Using (ii), we have ΠSeeΠS=eSeS. By linearity of expectation, EDeSeSij=ED(eSeS)ij=ED1i,jS=PiS&jS=Pij, where 1i,jS=1 if i,jS and 1i,jS=0 otherwise.

  • (iv)

    This follows from (i) by taking expectations of the diagonal elements of ΠS.

  • (v)

    Follows from (iv).

  • (vi)
    Indeed,
    $\mathbb{E}_{\mathcal{D}}[\theta_S \Pi_{\mathbf{S}} e] \overset{(ii)}{=} \sum_{C \in \mathrm{supp}(S)} p_C \theta_C e_C \overset{(74)}{=} \frac{1}{c_1} \sum_{C \in \mathrm{supp}(S)} e_C = e$,   (75)
    where the last equation follows from the assumption that the support of $S$ is $c_1$-uniform.

The following simple observation will be useful in the computation of the constant L1. The proof is straightforward and involves a double counting argument.

Lemma 8

Let $S$ be a sampling satisfying Assumption 4.1. Moreover, assume that $S$ is a $\tau$-uniform sampling. Then $\frac{|\mathrm{supp}(S)|}{c_1} = \frac{n}{\tau}$. Consequently, $\kappa = p_1 = p_2 = \cdots = p_n = \frac{\tau}{n} = \frac{c_1}{|\mathrm{supp}(S)|}$, where $\kappa$ is the stochastic contraction number associated with the minibatch sketch $\mathbf{S} = \mathbf{I}_S$.

JacSketch for minibatch sampling = minibatch SAGA

As we have mentioned in Sect. 1.4 already, JacSketch admits a particularly simple form for minibatch sketches, and corresponds to known and new variants of SAGA. Assume that $S$ satisfies Assumption 4.1 and let $\mathbf{W} = \mathrm{Diag}(w_1,\ldots,w_n)$. In view of Lemma 7(vi), this means that the random variable $\theta_S = \frac{1}{c_1 p_S}$ is bias-correcting, and due to Lemma 7(ii), we have $\Pi_{\mathbf{S}_k} e = e_{S_k} = \sum_{i \in S_k} e_i$. Therefore,

$g^k = \frac{1}{n} \mathbf{J}^k e + \frac{1}{n c_1 p_{S_k}} \sum_{i \in S_k} \left( \nabla f_i(x^k) - \mathbf{J}^k_{:i} \right)$.   (76)

By Lemma 7(i), $\Pi_{\mathbf{S}_k} = \mathbf{I}_{S_k} \mathbf{I}_{S_k}^\top$. In view of (11), the Jacobian estimate gets updated as follows:

$\mathbf{J}^{k+1}_{:i} = \begin{cases} \mathbf{J}^k_{:i} & \text{if } i \notin S_k, \\ \nabla f_i(x^k) & \text{if } i \in S_k. \end{cases}$   (77)

The resulting minibatch SAGA method is formalized as Algorithm 2.
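For concreteness, here is a minimal Python sketch of Algorithm 2 under Assumption 4.1. The helpers grad_i (returning $\nabla f_i(x)$), sampler (drawing a minibatch from $S$) and the dictionary p of minibatch probabilities $p_S$ are hypothetical placeholders, and this is an illustration rather than the paper's Julia implementation.

```python
import numpy as np

def minibatch_saga(grad_i, x0, n, alpha, sampler, p, c1, iters=1000):
    """Minimal sketch of minibatch SAGA (Algorithm 2)."""
    x = x0.copy()
    J = np.zeros((x0.shape[0], n))     # Jacobian estimate, column i ~ grad f_i
    for _ in range(iters):
        S = sampler()                  # a minibatch, e.g. a list of indices
        theta = 1.0 / (c1 * p[frozenset(S)])   # bias-correcting variable (74)
        g = J.mean(axis=1)             # (1/n) J^k e
        for i in S:                    # unbiased gradient estimate (76)
            g += (theta / n) * (grad_i(x, i) - J[:, i])
        for i in S:                    # Jacobian update (77)
            J[:, i] = grad_i(x, i)
        x -= alpha * g                 # step (2)
    return x
```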

Below we specialize the formula for gk to a few interesting special cases.

Example 10

(Standard SAGA) Standard uniform SAGA is obtained by setting $S_k = \{i\}$ with probability $1/n$ for each $i \in [n]$. Since the support of this sampling is $1$-uniform, we set $c_1 = 1$. This leads to the gradient estimate

$g^k = \frac{1}{n} \mathbf{J}^k e + \nabla f_i(x^k) - \mathbf{J}^k_{:i}$.   (78)

Example 11

(Non-uniform SAGA) We can also use non-uniform probabilities. Let $S_k = \{i\}$ with probability $p_i > 0$ for each $i \in [n]$. Since the support of this sampling is $1$-uniform, we have $c_1 = 1$. So, the gradient estimate has the form

$g^k = \frac{1}{n} \mathbf{J}^k e + \frac{1}{n p_i} \left( \nabla f_i(x^k) - \mathbf{J}^k_{:i} \right)$.   (79)

Example 12

(Uniform minibatch SAGA, version 1) Let $C_1,\ldots,C_q$ be nonempty subsets of $[n]$ forming a partition of $[n]$. Let $S_k = C_j$ with probability $p_{C_j} > 0$. The support of this sampling is $1$-uniform, and hence we can choose $c_1 = 1$. This leads to the gradient estimate

$g^k = \frac{1}{n} \mathbf{J}^k e + \frac{1}{n p_{C_j}} \sum_{i \in C_j} \left( \nabla f_i(x^k) - \mathbf{J}^k_{:i} \right)$.

Example 13

(Uniform minibatch SAGA, version 2) Let $S_k$ be chosen uniformly at random from all subsets of $[n]$ of cardinality $\tau \geq 2$. That is, $S_k$ is the $\tau$-nice sampling, and the probabilities are equal to $p_{S_k} = 1/\binom{n}{\tau}$. This sampling has $c_1$-uniform support with $c_1 = \binom{n-1}{\tau-1} = \frac{\tau}{n} \binom{n}{\tau}$. Thus, $n c_1 p_{S_k} = \tau$, and we have

$g^k = \frac{1}{n} \mathbf{J}^k e + \frac{1}{\tau} \sum_{i \in S_k} \left( \nabla f_i(x^k) - \mathbf{J}^k_{:i} \right)$.   (80)

Example 14

(Gradient descent) Consider the same situation as in Example 13, but with $\tau = n$. That is, we choose $S_k = [n]$ with probability 1, and $c_1 = 1$. Then

$g^k = \frac{1}{n} \mathbf{J}^k e + \frac{1}{n} \sum_{i=1}^n \left( \nabla f_i(x^k) - \mathbf{J}^k_{:i} \right) = \nabla f(x^k)$.

Expected smoothness constants $L_1$ and $L_2$

Here we compute the expected smoothness constants $L_1$ and $L_2$ in the case of $\mathbf{S}$ being a minibatch sketch $\mathbf{S} = \mathbf{I}_S$, and assuming that $f$ is convex and smooth. We first formalize the notion of smoothness we will use.

Assumption 4.2

For $C \subseteq [n]$ define

$f_C(x) \stackrel{\text{def}}{=} \frac{1}{|C|} \sum_{i \in C} f_i(x)$.   (81)

For each $C \subseteq [n]$ and all $x \in \mathbb{R}^d$, the function $f_C$ is $L_C$-smooth and convex. That is, there exists $L_C \geq 0$ such that the following inequality holds:

$\|\nabla f_C(x) - \nabla f_C(x^*)\|_2^2 \leq 2 L_C \left( f_C(x) - f_C(x^*) - \langle \nabla f_C(x^*), x - x^* \rangle \right), \quad \forall x \in \mathbb{R}^d$.   (82)

Let $L_i = L_{\{i\}}$ for $i \in [n]$.

The above assumption is somewhat non-standard. Note, however, that if we instead assume that each $f_i$ is convex and $L_i$-smooth, then the above assumption holds for $L_C = \frac{1}{|C|} \sum_{i \in C} L_i$. In some cases, however, we may have better estimates of the constants $L_C$ than those provided by the averages of the $L_i$ values. The value of these constants will have a direct influence on $L_1$ and $L_2$, which is why we work with this more refined assumption instead.
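As an illustration of how much sharper $L_C$ can be than the averaged bound, consider the ridge regression objective (132) used later in the experiments. The following Python snippet is our own numerical check with assumed random data, not part of the paper.

```python
import numpy as np

# For ridge regression (132), f_i(x) = (a_i^T x - y_i)^2/2 + (lam/2)||x||^2,
# so L_i = ||a_i||_2^2 + lam, while the subsampled loss f_C satisfies
# L_C = lambda_max(A_C^T A_C)/|C| + lam, often far below the average of L_i.
rng = np.random.default_rng(0)
d, n, lam = 20, 100, 0.1
A = rng.standard_normal((n, d))                    # rows are the a_i
C = [0, 1, 2, 3]                                   # a minibatch of size 4
L_avg = np.mean([A[i] @ A[i] + lam for i in C])    # averaged bound on L_C
L_C = np.linalg.eigvalsh(A[C].T @ A[C]).max() / len(C) + lam
print(L_C, L_avg)                                  # L_C <= L_avg always holds
```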

Lemma 9

(Smoothness of the Jacobian) Assume that $f_i$ is convex and $L_i$-smooth for all $i \in [n]$. Define $L_{\max} \stackrel{\text{def}}{=} \max_i L_i$ and $\mathbf{D}_L \stackrel{\text{def}}{=} \mathrm{Diag}(L_1,\ldots,L_n) \in \mathbb{R}^{n \times n}$. Then

$\|\nabla F(x) - \nabla F(x^*)\|^2_{\mathbf{D}_L^{-1}} \leq 2n \left( f(x) - f(x^*) \right), \quad \forall x \in \mathbb{R}^d$.   (83)

Proof

Indeed,

$\|\nabla F(x) - \nabla F(x^*)\|^2_{\mathbf{D}_L^{-1}} \overset{(10)}{=} \|(\nabla F(x) - \nabla F(x^*)) \mathbf{D}_L^{-1/2}\|^2 \overset{(10)}{=} \sum_{i=1}^n \frac{1}{L_i} \|\nabla f_i(x) - \nabla f_i(x^*)\|_2^2 \leq 2 \sum_{i=1}^n \left( f_i(x) - f_i(x^*) - \langle \nabla f_i(x^*), x - x^* \rangle \right) \overset{(1)}{=} 2n \left( f(x) - f(x^*) \right)$,

where in the last step we used the fact that $\sum_{i=1}^n \nabla f_i(x^*) = n \nabla f(x^*) = 0$.

Theorem 2

(Expected smoothness) Let $\mathbf{S} = \mathbf{I}_S$ be a minibatch sketch where $S$ is a sampling satisfying Assumption 4.1 (in particular, the support of $S$ is $c_1$-uniform). Consider the bias-correcting random variable $\theta_S$ given in (74). Further, let $f$ satisfy Assumption 4.2. Then the expected smoothness assumptions (Assumptions 3.1 and 3.2) are satisfied with constants $L_1$ and $L_2$ given by12

$L_1 = \frac{1}{n c_1^2} \max_i \sum_{C \in \mathrm{supp}(S) : i \in C} \frac{|C| L_C}{p_C}, \qquad L_2 = n \max_i \frac{p_i L_i}{w_i}$,   (84)

where $L_i = L_{\{i\}}$. If moreover $S$ is the $\tau$-nice sampling, then13

$L_1 = L_{\max}^G \stackrel{\text{def}}{=} \max_i \frac{1}{c_1} \sum_{C \in \mathrm{supp}(S) : i \in C} L_C, \qquad L_2 = \tau \max_i \frac{L_i}{w_i}$.   (85)

Proof

Let $\mathbf{R} = \nabla F(x) - \nabla F(x^*)$ and $A = \mathbb{E}_{\mathcal{D}}\left[ \|\nabla f_{\mathbf{S}}(x) - \nabla f_{\mathbf{S}}(x^*)\|_2^2 \right]$. Then

$A \overset{(44)}{=} \mathbb{E}_{\mathcal{D}}\left[ \frac{\theta_{\mathbf{S}}^2}{n^2} \|\mathbf{R} \Pi_{\mathbf{S}} e\|_2^2 \right] \overset{(74)}{=} \sum_{C \in \mathrm{supp}(S)} p_C \frac{1}{c_1^2 p_C^2 n^2} \|\mathbf{R} \Pi_{\mathbf{I}_C} e\|_2^2 = \sum_{C \in \mathrm{supp}(S)} \frac{1}{c_1^2 p_C n^2} \mathrm{Tr}\left( e^\top \Pi_{\mathbf{I}_C}^\top \mathbf{R}^\top \mathbf{R} \Pi_{\mathbf{I}_C} e \right) = \sum_{C \in \mathrm{supp}(S)} \frac{1}{c_1^2 p_C n^2} \mathrm{Tr}\left( \mathbf{R}^\top \mathbf{R} \Pi_{\mathbf{I}_C} e e^\top \Pi_{\mathbf{I}_C} \right) \overset{\text{Lem 7(iii)}}{=} \sum_{C \in \mathrm{supp}(S)} \frac{1}{c_1^2 p_C n^2} \mathrm{Tr}\left( \mathbf{R}^\top \mathbf{R} e_C e_C^\top \right) = \sum_{C \in \mathrm{supp}(S)} \frac{1}{c_1^2 p_C n^2} \|(\nabla F(x) - \nabla F(x^*)) e_C\|_2^2 = \sum_{C \in \mathrm{supp}(S)} \frac{|C|^2}{c_1^2 p_C n^2} \|\nabla f_C(x) - \nabla f_C(x^*)\|_2^2$.

Using (82) and (81), we can continue:

$A \overset{(82)}{\leq} \sum_{C \in \mathrm{supp}(S)} \frac{2 L_C |C|^2}{c_1^2 p_C n^2} \left( f_C(x) - f_C(x^*) - \langle \nabla f_C(x^*), x - x^* \rangle \right) \overset{(81)}{=} \frac{2}{c_1^2 n^2} \sum_{C \in \mathrm{supp}(S)} \frac{L_C |C|}{p_C} \sum_{i \in C} \left( f_i(x) - f_i(x^*) - \langle \nabla f_i(x^*), x - x^* \rangle \right) = \frac{2}{c_1^2 n^2} \sum_{i=1}^n \left( f_i(x) - f_i(x^*) - \langle \nabla f_i(x^*), x - x^* \rangle \right) \sum_{C \in \mathrm{supp}(S) : i \in C} \frac{L_C |C|}{p_C} \leq \frac{2}{c_1^2 n} \max_i \sum_{C \in \mathrm{supp}(S) : i \in C} \frac{L_C |C|}{p_C} \cdot \frac{1}{n} \sum_{i=1}^n \left( f_i(x) - f_i(x^*) - \langle \nabla f_i(x^*), x - x^* \rangle \right)$,   (86)

where in this last inequality we have used convexity of $f_i$ for $i \in [n]$. Since

$\frac{1}{n} \sum_{i=1}^n \left( f_i(x) - f_i(x^*) - \langle \nabla f_i(x^*), x - x^* \rangle \right) = f(x) - f(x^*) - \langle \nabla f(x^*), x - x^* \rangle = f(x) - f(x^*)$,

the formula for $L_1$ now follows by comparing (86) to (43). In order to establish the formula for $L_2$, we estimate

$\mathbb{E}_{\mathcal{D}}\left[ \|\mathbf{R} \Pi_{\mathbf{S}}\|^2_{\mathbf{W}^{-1}} \right] \overset{(10)}{=} \mathbb{E}_{\mathcal{D}}\left[ \|\mathbf{R} \Pi_{\mathbf{S}} \mathbf{W}^{-1/2}\|^2_{\mathbf{I}} \right] \overset{(10)}{=} \mathrm{Tr}\left( \mathbf{R}^\top \mathbf{R} \, \mathbb{E}_{\mathcal{D}}[\Pi_{\mathbf{S}} \mathbf{W}^{-1} \Pi_{\mathbf{S}}^\top] \right) \overset{(57)}{=} \mathrm{Tr}\left( \mathbf{R}^\top \mathbf{R} \, \mathbb{E}_{\mathcal{D}}[\mathbf{H}_{\mathbf{S}}] \right) = \mathrm{Tr}\left( \mathbf{D}_L^{-1/2} \mathbf{R}^\top \mathbf{R} \mathbf{D}_L^{-1/2} \mathbf{D}_L^{1/2} \mathbb{E}_{\mathcal{D}}[\mathbf{H}_{\mathbf{S}}] \mathbf{D}_L^{1/2} \right) \leq \|\mathbf{R}\|^2_{\mathbf{D}_L^{-1}} \lambda_{\max}\left( \mathbf{D}_L^{1/2} \mathbb{E}_{\mathcal{D}}[\mathbf{H}_{\mathbf{S}}] \mathbf{D}_L^{1/2} \right) \overset{(83)}{\leq} 2n \, \lambda_{\max}\left( \mathbf{D}_L^{1/2} \mathbb{E}_{\mathcal{D}}[\mathbf{H}_{\mathbf{S}}] \mathbf{D}_L^{1/2} \right) \left( f(x^k) - f(x^*) \right)$.   (87)

From Lemma 7(iv) we have $\mathbb{E}_{\mathcal{D}}[\mathbf{H}_{\mathbf{S}}] = \mathbb{E}_{\mathcal{D}}[\Pi_{\mathbf{S}}] \mathbf{W}^{-1} = \mathrm{Diag}(\mathbf{P}) \mathbf{W}^{-1} = \mathrm{Diag}\left( \frac{p_1}{w_1},\ldots,\frac{p_n}{w_n} \right)$, and hence $\mathbf{D}_L^{1/2} \mathbb{E}_{\mathcal{D}}[\mathbf{H}_{\mathbf{S}}] \mathbf{D}_L^{1/2} = \mathrm{Diag}\left( \frac{p_1 L_1}{w_1},\ldots,\frac{p_n L_n}{w_n} \right)$. Comparing the definition of $L_2$ in (45) with (87), we conclude that

$L_2 = n \, \lambda_{\max}\left( \mathbf{D}_L^{1/2} \mathrm{Diag}(\mathbf{P}) \mathbf{W}^{-1} \mathbf{D}_L^{1/2} \right) = n \max_i \frac{p_i L_i}{w_i}$.

The specialized formulas (85) for the $\tau$-nice sampling follow as special cases of the general formulas (84), since $\frac{|C|}{p_C} = \tau \binom{n}{\tau} = \frac{n!}{(\tau-1)!(n-\tau)!} = n \binom{n-1}{\tau-1} = n c_1$ and $p_i = \tau/n$ for all $i$.
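Since $L_{\max}^G$ in (85) is a maximum of averages over the support, the following Python sketch (ours, feasible only for small $n$) evaluates it by brute force for the $\tau$-nice sampling; by default it uses the averaged bound on $L_C$ from the remark after Assumption 4.2, and a problem-specific $L_C$ oracle can be passed instead.

```python
import numpy as np
from itertools import combinations
from math import comb

def L_max_G(L, tau, L_C=None):
    """Brute-force evaluation of L_1 = L_max^G from (85) for tau-nice
    sampling; |supp(S)| = binom(n, tau), so small n only."""
    n = len(L)
    if L_C is None:                      # default: L_C <= mean of L_i over C
        L_C = lambda C: float(np.mean([L[i] for i in C]))
    c1 = comb(n - 1, tau - 1)            # each i lies in c1 support sets
    return max(
        sum(L_C(C) for C in combinations(range(n), tau) if i in C) / c1
        for i in range(n)
    )

print(L_max_G(np.array([1.0, 2.0, 5.0, 10.0]), tau=2))
```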

In the next result we establish some inequalities relating the quantities $L$, $L_{\max}$, $L_C$ and $L_{\max}^G$. In particular, the result says that for a certain family of samplings $S$ (the same for which we have defined the quantity $L_{\max}^G$ in (85)), the expected smoothness constant $L_{\max}^G$ is lower-bounded by the average of $L_C$ over $C \in G = \mathrm{supp}(S)$, and upper-bounded by $L_{\max}$.

Theorem 3

Let $S$ be a $\tau$-uniform sampling ($\tau \geq 1$) with $c_1$-uniform support ($c_1 \geq 1$). Let $G = \mathrm{supp}(S)$. Then

$f(x) = \frac{1}{|G|} \sum_{C \in G} f_C(x)$.   (88)

Moreover,

$L \leq \frac{1}{|G|} \sum_{C \in G} L_C \leq L_{\max}^G \leq L_{\max}$.   (89)

The last inequality holds without the need to assume $\tau$-uniformity.

Proof

Using the fact that $S$ has $c_1$-uniform support, and utilizing a double-counting argument, we observe that $\sum_{C \in G} |C| f_C(x) = c_1 \sum_{i=1}^n f_i(x)$. Multiplying both sides by $\frac{1}{n c_1}$, and since $|C| = \tau$ for all $C \in G$, we get $\frac{\tau |G|}{c_1 n} \cdot \frac{1}{|G|} \sum_{C \in G} f_C(x) = \frac{1}{n} \sum_{i=1}^n f_i(x) = f(x)$. To obtain (88), it now only remains to use the identity

$\frac{\tau |G|}{c_1 n} = 1$,   (90)

which was shown in Lemma 8. The first inequality in (89) follows from (88) using standard arguments (identical to those that lead to the inequality $L \leq \bar{L}$).

Let us now establish the second inequality in (89). Define $L_i^G \stackrel{\text{def}}{=} \frac{1}{c_1} \sum_{C \in G : i \in C} L_C$. Again using a double-counting argument we observe that $\tau \sum_{C \in G} L_C = c_1 \sum_{i=1}^n L_i^G$. Dividing both sides of this equality by $c_1 n$ and using identity (90), we get $\frac{1}{|G|} \sum_{C \in G} L_C = \frac{1}{n} \sum_{i=1}^n L_i^G \leq \max_i L_i^G = L_{\max}^G$. We will now establish the last inequality by proving that $L_i^G \leq L_{\max}$ for any $i$:

$L_i^G = \frac{1}{c_1} \sum_{C \in G : i \in C} L_C \leq \frac{1}{c_1} \sum_{C \in G : i \in C} \frac{1}{|C|} \sum_{j \in C} L_j \leq \frac{1}{c_1} \sum_{C \in G : i \in C} \frac{1}{|C|} \sum_{j \in C} L_{\max} = L_{\max} \frac{1}{c_1} \sum_{C \in G : i \in C} 1 = L_{\max}$.

Note that we did not need to assume $\tau$-uniformity to prove that $L_{\max}^G \leq L_{\max}$.

Estimating the sketch residual ρ

In this section we compute the sketch residual $\rho$ for several classes of samplings $S$. Let $G = \mathrm{supp}(S)$. We will assume throughout this section that $S$ is non-vacuous, has $c_1$-uniform support (with $c_1 \geq 1$), and is $\tau$-uniform.

Further, we assume that $\mathbf{W} = \mathrm{Diag}(w_1,\ldots,w_n)$, and that the bias-correcting random variable $\theta_S$ is chosen as $\theta_S = \frac{1}{c_1 p_S} = \frac{|G|}{c_1}$ (see (75) and Lemma 8). In view of the above, since $\Pi_{\mathbf{I}_C} e = e_C$, the sketch residual is given by

$\rho \overset{(51)}{=} \lambda_{\max}\left( \mathbf{W}^{1/2} \left( \frac{|G|^2}{c_1^2} \mathbb{E}_{\mathcal{D}}[\Pi_{\mathbf{S}} e e^\top \Pi_{\mathbf{S}}] - e e^\top \right) \mathbf{W}^{1/2} \right) = \lambda_{\max}\left( \mathbf{W}^{1/2} \left( \frac{|G|}{c_1^2} \sum_{C \in G} e_C e_C^\top - e e^\top \right) \mathbf{W}^{1/2} \right) = \lambda_{\max}\left( \left( \frac{|G|}{c_1^2} \sum_{C \in G} e_C e_C^\top - e e^\top \right) \mathbf{W} \right)$,   (91)

where the last equality follows by permuting the matrices within the $\lambda_{\max}$, which leaves the spectrum unchanged.

In the following text we calculate upper bounds for ρ for τ—partition and τ—nice samplings. Note that Theorem 1 still holds if we use an upper bound of ρ in place of ρ.

Theorem 4

If $S$ is the $\tau$-partition sampling, then

$\rho \leq \frac{n}{\tau} \max_{C \in G} \sum_{i \in C} w_i$.   (92)

Proof

Using Lemma 8, and since $c_1 = 1$, we get $\frac{|G|}{c_1^2} = \frac{n}{\tau}$. Consequently,

$\rho \overset{(91)}{\leq} \frac{n}{\tau} \lambda_{\max}\left( \sum_{C \in G} e_C e_C^\top \mathbf{W} \right) = \frac{n}{\tau} \lambda_{\max}\left( \sum_{C \in G} e_C w_C^\top \right)$,   (93)

where $w_C = \sum_{i \in C} w_i e_i$ and we used that $-\mathbf{W}^{1/2} e e^\top \mathbf{W}^{1/2}$ is negative semidefinite. When $\mathbf{W} = \mathbf{I}$, the above bound is tight. By Gershgorin's theorem, every eigenvalue $\lambda$ of the matrix $\sum_{C \in G} e_C w_C^\top$ satisfies $\lambda \leq \sum_{i \in C} w_i$ for at least one $C \in G$. Consequently, from (93) we have that $\rho \leq \frac{n}{\tau} \max_{C \in G} \sum_{i \in C} w_i$.

Next we give a useful upper bound on $\rho$ for a large family of uniform samplings (for the proof, see "Appendix C").

Theorem 5

Let $G$ be a collection of subsets of $[n]$ with the property that the number of sets $C \in G$ containing distinct elements $i, j \in [n]$ is the same for all $i \neq j$. In particular, define

$c_2 \stackrel{\text{def}}{=} |\{C : \{1,2\} \subseteq C, \; C \in G\}|$.   (94)

Now define a sampling $S$ by setting $S = C \in G$ with probability $\frac{1}{|G|}$. Moreover, assume that the support of $S$ is $c_1$-uniform. Consider the minibatch sketch $\mathbf{S} = \mathbf{I}_S$.

  • (i)
    If $\mathbf{W} = \mathrm{Diag}(w_1,\ldots,w_n)$, then
    $\rho \leq \max_{i=1,\ldots,n} \left\{ \left( \frac{|G|}{c_1} - 1 \right) w_i + \sum_{j \neq i} w_j \left| \frac{|G| c_2}{c_1^2} - 1 \right| \right\}$.   (95)
  • (ii)
    If $\mathbf{W} = \mathbf{I}$, then
    $\rho = \max\left\{ \frac{|G|}{c_1} \left( 1 + (n-1) \frac{c_2}{c_1} \right) - n, \;\; \frac{|G|}{c_1} \left( 1 - \frac{c_2}{c_1} \right) \right\}$.   (96)

Note that as long as $\tau \geq 2$, the $\tau$-nice sampling $S$ satisfies the assumptions of the above theorem. Indeed, $G$ is the support of $S$ consisting of all subsets of $[n]$ of size $\tau$, $|G| = \binom{n}{\tau}$, $c_1 = \binom{n-1}{\tau-1}$, and $c_2 = \binom{n-2}{\tau-2}$. As a result, bound (95) simplifies to

$\rho \leq \left( \frac{n}{\tau} - 1 \right) \max_{i=1,\ldots,n} \left\{ w_i + \frac{1}{n-1} \sum_{j \neq i} w_j \right\}$,   (97)

and (96) simplifies to

$\rho = \frac{n}{\tau} \cdot \frac{n - \tau}{n - 1}$.   (98)
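The closed form (98) is easy to sanity-check numerically. The following Python snippet (ours; small $n$ only) builds the matrix inside (91) for the $\tau$-nice sampling with $\mathbf{W} = \mathbf{I}$ and compares its largest eigenvalue with (98).

```python
import numpy as np
from itertools import combinations
from math import comb

n, tau = 6, 3
G = list(combinations(range(n), tau))
c1 = comb(n - 1, tau - 1)
M = -np.ones((n, n))                      # the -e e^T part of (91)
for C in G:
    eC = np.zeros(n)
    eC[list(C)] = 1.0
    M += (len(G) / c1**2) * np.outer(eC, eC)
rho_numeric = np.linalg.eigvalsh(M).max()
rho_formula = n * (n - tau) / (tau * (n - 1))
print(rho_numeric, rho_formula)           # both equal 1.2 for n=6, tau=3
```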

Calculating the iteration complexity for special cases

In this section we consider minibatch SAGA (Algorithm 2) and calculate its iteration complexity in special cases using Theorem 1 by pulling together the formulas for $L_1, L_2, \kappa$ and $\rho$ established in previous sections. In particular, assume $S$ is $\tau$-uniform and has $c_1$-uniform support with $c_1 \geq 1$. In this case, formula (85) for $L_1, L_2$ from Theorem 2 applies and we have $L_1 = L_{\max}^G$ and $L_2 = \tau \max_i \frac{L_i}{w_i}$.

Moreover, by Lemma 8, $\kappa = \frac{\tau}{n}$. By Theorem 1, if we use the stepsize

$\alpha = \min\left\{ \frac{1}{4 L_1}, \; \frac{\kappa}{\frac{4 L_2 \rho}{n^2} + \mu} \right\} = \frac{1}{4} \min\left\{ \frac{1}{L_{\max}^G}, \; \frac{1}{\frac{\rho}{n} \max_{j=1,\ldots,n} \frac{L_j}{w_j} + \frac{\mu n}{4 \tau}} \right\}$,   (99)

then the iteration complexity is given by

$\max\left\{ \frac{4 L_1}{\mu}, \; \frac{1}{\kappa} + \frac{4 \rho L_2}{\kappa \mu n^2} \right\} \log\frac{1}{\epsilon} = \max\left\{ \frac{4 L_{\max}^G}{\mu}, \; \frac{n}{\tau} + \frac{4 \rho}{\mu n} \max_i \frac{L_i}{w_i} \right\} \log\frac{1}{\epsilon}$.   (100)

Complexity (100) is listed in line 9 of Table 1. The complexities in lines 3, 5 and 10–13 arise as special cases of (100) for specific choices of S:

  • In line 3 we have gradient descent. This arises for the choice $\mathbf{W} = \mathbf{I}$ and $S = [n]$ with probability 1. In this case, $\tau = n$, $L_{\max}^G = L$ and $\rho = 0$. So, (100) simplifies to $\frac{4L}{\mu} \log\frac{1}{\epsilon}$.

  • In line 5 we have uniform SAGA. We choose $\mathbf{W} = \mathbf{I}$ and $S = \{i\}$ with probability $1/n$. We have $\tau = 1$ and $L_{\max}^G = L_{\max}$. In view of Theorem 4, $\rho \leq n$. So, (100) simplifies to $\left( n + \frac{4 L_{\max}}{\mu} \right) \log\frac{1}{\epsilon}$.

  • In line 10 we choose $\mathbf{W} = \mathbf{I}$ and $S$ is the $\tau$-nice sampling. In this case, Theorem 5 says that $\rho = \frac{n}{\tau} \cdot \frac{n-\tau}{n-1}$ (see (98)). Therefore, (100) reduces to
    $\max\left\{ \frac{4 L_{\max}^G}{\mu}, \; \frac{n}{\tau} + \frac{n-\tau}{(n-1)\tau} \cdot \frac{4 L_{\max}}{\mu} \right\} \log\frac{1}{\epsilon}$.   (101)
  • In line 11 we choose $\mathbf{W} = \mathrm{Diag}(L_i)$ and $S$ is the $\tau$-nice sampling. Theorem 5 says that $\rho \leq \frac{n-\tau}{\tau} \left( \frac{n-2}{n-1} L_{\max} + \frac{n}{n-1} \bar{L} \right)$ (see (97)). Therefore, (100) reduces to
    $\max\left\{ \frac{4 L_{\max}^G}{\mu}, \; \frac{n}{\tau} + \frac{n-\tau}{\tau n} \cdot \frac{4 \left( \frac{n-2}{n-1} L_{\max} + \frac{n}{n-1} \bar{L} \right)}{\mu} \right\} \log\frac{1}{\epsilon}$.   (102)
    To simplify the above expression, one may further use the bound $\frac{n-2}{n-1} L_{\max} + \frac{n}{n-1} \bar{L} \leq L_{\max} + \bar{L}$. In Table 1 we have listed the complexity in this simplified form.
  • In line 12 of Table 1 we let $\mathbf{W} = \mathbf{I}$ and $S$ is the $\tau$-partition sampling. In view of Theorem 4, $\rho \leq \frac{n}{\tau} \cdot \tau = n$ and hence (100) reduces to
    $\max\left\{ \frac{4 L_{\max}^G}{\mu}, \; \frac{n}{\tau} + \frac{4 L_{\max}}{\mu} \right\} \log\frac{1}{\epsilon}$.   (103)
  • In line 13 of Table 1 we let $\mathbf{W} = \mathrm{Diag}(L_i)$ and $S$ is the $\tau$-partition sampling. In view of Theorem 4, $\rho \leq \frac{n}{\tau} \max_{C \in G} \sum_{i \in C} L_i$ and hence (100) reduces to
    $\max\left\{ \frac{4 L_{\max}^G}{\mu}, \; \frac{n}{\tau} + \frac{4 \max_{C \in G} \sum_{i \in C} L_i}{\mu \tau} \right\} \log\frac{1}{\epsilon}$.   (104)
    Note that this bound is at least as good as the previous bound for $\mathbf{W} = \mathbf{I}$, since $\max_{C \in G} \sum_{i \in C} L_i \leq \tau L_{\max}$.

Comparison with previous mini-batch SAGA convergence results

Recently in [14], a method that includes a mini-batch variant of SAGA was proposed. This work is the most closely related to our minibatch SAGA. The methods described in [14] can be cast in our framework. In the language of our paper, in [14] the authors update the Jacobian estimate according to (77), where $S_k$ is sampled according to a uniform probability with $p_i = \tau/n$ for all $i = 1,\ldots,n$. What [14] does differently is that, instead of introducing the bias-correcting random variable $\theta_S$ to maintain an unbiased gradient estimate, the gradient estimate is updated using the standard SAGA update (78), and this sampling process is done independently of how $S_k$ is sampled for the Jacobian update. Thus at every iteration a gradient $\nabla f_i(x^k)$ is sampled to compute (78), but is then discarded and not used in the Jacobian update, so as to maintain the independence between $\mathbf{J}^k$ and $g^k$. By introducing the bias-correcting random variable $\theta_S$ in our method we avoid the data-hungry strategy used in [14].

The analysis provided in [14] shows that, by choosing the stepsize appropriately, the expectation of a Lyapunov function similar to (52) is less than $\epsilon > 0$ after

$\frac{1}{2} \left( \frac{n}{\tau} + K + \sqrt{\frac{n^2}{\tau^2} + K^2} \right) \log\frac{1}{\epsilon}$   (105)

iterations, where $K \stackrel{\text{def}}{=} \frac{4 L_{\max}}{\mu}$. When $\tau = 1$ this gives an iteration complexity of $O(n + K) \log\frac{1}{\epsilon}$, which is essentially the same complexity as the standard SAGA method. The main issue with this complexity is that it decreases only very modestly as $\tau$ increases. In particular, on the extreme end when $\tau = n$, since $K \geq 4$, we can approximate $(1 + K)^2 \approx 1 + K^2$, and the resulting complexity (105) becomes

$\left( 1 + \frac{4 L_{\max}}{\mu} \right) \log\frac{1}{\epsilon}$.

Yet we know that $\tau = n$ corresponds to gradient descent, and thus the iteration complexity should be $O\left( \frac{L}{\mu} \log(1/\epsilon) \right)$, which is what we recover in the analysis of all our mini-batch variants. In Fig. 1a-c in the experiments in Sect. 6 we illustrate how (105) decreases very modestly as $\tau$ increases.
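The contrast between the two bounds is easy to tabulate. Below is a small Python sketch (ours) that evaluates our bound (101), with the crude upper bound $L_{\max}^G \leq L_{\max}$ from Theorem 3, against the bound (105) of [14] as $\tau$ grows; the constants are arbitrary placeholders.

```python
import numpy as np

def hofmann_bound(n, Lmax, mu, tau, eps=1e-4):      # (105)
    K = 4 * Lmax / mu
    return 0.5 * (n / tau + K + np.sqrt((n / tau) ** 2 + K ** 2)) * np.log(1 / eps)

def tau_nice_bound(n, Lmax, mu, tau, eps=1e-4):     # (101), with L_max^G <= L_max
    return max(4 * Lmax / mu,
               n / tau + (n - tau) / ((n - 1) * tau) * 4 * Lmax / mu) * np.log(1 / eps)

n, Lmax, mu = 1000, 10.0, 0.1
for tau in (1, 10, 100, n):
    print(tau, tau_nice_bound(n, Lmax, mu, tau), hofmann_bound(n, Lmax, mu, tau))
```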

Fig. 1  The iteration complexity of minibatch SAGA (80) vs the mini-batch size $\tau$ for two ridge regression problems (132). We used $\lambda = L_{\max}/n$

A refined analysis with a stochastic Lyapunov function

In this section we perform a refined analysis of JacSketch applied with a minibatch sketch $\mathbf{S} = \mathbf{I}_S$, where the sampling $S$ is over partitions of $[n]$ into sets of size $\tau$.14

Assumption 5.1

Let $G$ be a partition of $[n]$ into sets of size $\tau$. Assume that the sampling $S$ picks sets from the partition $G$ uniformly at random. That is, $p_C \stackrel{\text{def}}{=} \mathbb{P}[S = C]$ for $C \in G = \mathrm{supp}(S)$. A sampling with these properties is called a $\tau$-partition sampling.

In the terminology introduced in Sect. 4.1, a $\tau$-partition sampling is non-vacuous, proper and $\tau$-uniform. Its support is a partition of $[n]$, and is $1$-uniform. It satisfies Assumption 4.1. Restricting our attention to $\tau$-partition samplings will allow us to perform a more in-depth analysis of JacSketch using a stochastic Lyapunov function.

One of the key reasons why we restrict our attention to $\tau$-partition samplings is the fact that

$\mathbf{I}_{C_1}^\top \mathbf{I}_{C_2} = \begin{cases} \mathbf{I} \in \mathbb{R}^{\tau \times \tau}, & C_1 = C_2, \\ 0 \in \mathbb{R}^{\tau \times \tau}, & C_1 \neq C_2, \end{cases}$   (106)

for $C_1, C_2 \in G$. Recall from Lemma 7 that if $\mathbf{W} = \mathbf{I}$, then $\Pi_{\mathbf{I}_C} = \mathbf{I}_C \mathbf{I}_C^\top$. Consequently, for $C_1, C_2 \in G$ we have

$C_1 \neq C_2 \;\Rightarrow\; \Pi_{\mathbf{I}_{C_1}} \Pi_{\mathbf{I}_{C_2}} = 0, \qquad C_1 = C_2 \;\Rightarrow\; (\mathbf{I} - \Pi_{\mathbf{I}_{C_1}}) \Pi_{\mathbf{I}_{C_2}} = 0$.   (107)

This orthogonality property will be fundamental for controlling the convergence of the gradient estimate in Lemma 10.
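A two-line Python check of the orthogonality property (107) for a partition with $\mathbf{W} = \mathbf{I}$, using $\Pi_{\mathbf{I}_C} = \mathbf{I}_C \mathbf{I}_C^\top$ from Lemma 7(i) (our own sanity check, not from the paper):

```python
import numpy as np

n = 6
Pi = lambda C: np.eye(n)[:, list(C)] @ np.eye(n)[:, list(C)].T  # Pi_{I_C}
C1, C2 = (0, 1, 2), (3, 4, 5)                                   # a 3-partition
print(np.allclose(Pi(C1) @ Pi(C2), 0))                 # C1 != C2 case of (107)
print(np.allclose((np.eye(n) - Pi(C1)) @ Pi(C1), 0))   # C1 == C2 case of (107)
```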

Convergence theorem

Recall from (32) that the stochastic gradient of the controlled stochastic reformulation (28) of the original finite-sum problem (1) is given by

$\nabla f_{\mathbf{I}_S, \mathbf{J}}(x) = \frac{1}{n} \mathbf{J} e + \frac{1}{p_S n} (\nabla F(x) - \mathbf{J}) \Pi_{\mathbf{I}_S} e$,   (108)

provided that we use the minibatch sketch $\mathbf{S} = \mathbf{I}_S$ and the bias-correcting variable $\theta_{\mathbf{S}} = \theta_{\mathbf{I}_S} = 1/p_S$ given by Lemma 7(vi). This object will appear in our Lyapunov function, evaluated at $x = x^*$ and $\mathbf{J} = \mathbf{J}^k$. We are now ready to present the main result of this section.

Theorem 6

(Convergence for minibatch sketches with τ-partition samplings) Let

  • (i)

    $\mathbf{S}$ be a minibatch sketch (i.e., $\mathbf{S} = \mathbf{I}_S$),15 where $S$ is a $\tau$-partition sampling with support $G = \mathrm{supp}(S)$.

  • (ii)

    $f_C \stackrel{\text{def}}{=} \frac{1}{|C|} \sum_{i \in C} f_i$ be $L_C$-smooth and $\mu$-strongly convex (for $\mu > 0$) for all $C \in G$.

  • (iii)

    $\mathbf{W} = \mathbf{I}$, $\theta_S = \frac{1}{p_S}$.

  • (iv)

    $\{x^k, \mathbf{J}^k\}$ be the iterates produced by JacSketch.

Consider the stochastic Lyapunov function

$\Psi_S^k \stackrel{\text{def}}{=} \|x^k - x^*\|_2^2 + 2 \sigma_S \alpha \left\| \frac{1}{n} \mathbf{J}^k e - \nabla f_{\mathbf{I}_S, \mathbf{J}^k}(x^*) \right\|_2^2$,   (109)

where $\sigma_S = \frac{n}{4 \tau L_S}$ is a stochastic Lyapunov constant. If we use a stepsize that satisfies

$\alpha \leq \min_{C \in G} \frac{p_C}{\mu + \frac{4 L_C \tau}{n}}$,   (110)

then

$\mathbb{E}[\Psi_S^k] \leq (1 - \mu \alpha)^k \cdot \mathbb{E}[\Psi_S^0]$.   (111)

This means that if we choose the stepsize equal to the upper bound (110), then

$k \geq \max_{C \in G} \left\{ \frac{1}{p_C} + \frac{4 L_C}{\mu} \frac{\tau}{n p_C} \right\} \log\frac{1}{\epsilon} \quad \Rightarrow \quad \mathbb{E}[\Psi_S^k] \leq \epsilon \cdot \mathbb{E}[\Psi_S^0]$.   (112)

Gradient estimate contraction

Here we will show that our gradient estimate contracts in the following sense.

Lemma 10

Let $S$ be the $\tau$-partition sampling, and let $\sigma(S) = \sigma_S \geq 0$ be any non-negative random variable. Then

$\mathbb{E}\left[ \sigma_S \left\| \frac{1}{n} \mathbf{J}^{k+1} e - \nabla f_{\mathbf{I}_S, \mathbf{J}^{k+1}}(x^*) \right\|_2^2 \right] \leq \mathbb{E}\left[ \sigma_S (1 - p_S) \left\| \frac{1}{n} \mathbf{J}^k e - \nabla f_{\mathbf{I}_S, \mathbf{J}^k}(x^*) \right\|_2^2 \right] + \mathbb{E}\left[ \sigma_S p_S \left\| \nabla f_{\mathbf{I}_S, \mathbf{J}^k}(x^k) - \nabla f_{\mathbf{I}_S, \mathbf{J}^k}(x^*) \right\|_2^2 \right]$.   (113)

Proof

For simplicity, in this proof we let $\nabla F^k = \nabla F(x^k)$ and $\nabla F^* = \nabla F(x^*)$. Rearranging (108), we have

$\frac{1}{n} \mathbf{J}^{k+1} e - \nabla f_{\mathbf{I}_S, \mathbf{J}^{k+1}}(x^*) \overset{(108)}{=} \frac{1}{n p_S} (\mathbf{J}^{k+1} - \nabla F^*) \Pi_{\mathbf{I}_S} e \overset{(39)}{=} \frac{1}{n p_S} \left( \mathbf{J}^k - (\mathbf{J}^k - \nabla F^k) \Pi_{\mathbf{I}_{S_k}} - \nabla F^* \right) \Pi_{\mathbf{I}_S} e = \frac{1}{n p_S} (\mathbf{J}^k - \nabla F^*)(\mathbf{I} - \Pi_{\mathbf{I}_{S_k}}) \Pi_{\mathbf{I}_S} e + \frac{1}{n p_S} (\nabla F^k - \nabla F^*) \Pi_{\mathbf{I}_{S_k}} \Pi_{\mathbf{I}_S} e$.   (114)

Taking the norm squared on both sides gives

$\left\| \frac{1}{n} \mathbf{J}^{k+1} e - \nabla f_{\mathbf{I}_S, \mathbf{J}^{k+1}}(x^*) \right\|_2^2 = \underbrace{\frac{1}{n^2 p_S^2} \left\| \mathbf{A} (\mathbf{I} - \Pi_{\mathbf{I}_{S_k}}) \Pi_{\mathbf{I}_S} e \right\|_2^2}_{\text{I}} + \underbrace{\frac{1}{n^2 p_S^2} \left\| \mathbf{R} \Pi_{\mathbf{I}_{S_k}} \Pi_{\mathbf{I}_S} e \right\|_2^2}_{\text{II}} + \underbrace{\frac{2}{n^2 p_S^2} \left\langle \mathbf{A} (\mathbf{I} - \Pi_{\mathbf{I}_{S_k}}) \Pi_{\mathbf{I}_S} e, \; \mathbf{R} \Pi_{\mathbf{I}_{S_k}} \Pi_{\mathbf{I}_S} e \right\rangle}_{\text{III}}$,   (115)

where $\mathbf{A} \stackrel{\text{def}}{=} \mathbf{J}^k - \nabla F^*$ and $\mathbf{R} \stackrel{\text{def}}{=} \nabla F^k - \nabla F^*$.

First, it follows from (107) that expression III is zero. We now multiply expressions I and II by $\sigma_S$ and bound certain conditional expectations of these terms. Since $S$ and $S_k$ are independent samplings, we have

$\mathbb{E}\left[ \frac{\sigma_S}{n^2 p_S^2} \left\| \mathbf{A} (\mathbf{I} - \Pi_{\mathbf{I}_{S_k}}) \Pi_{\mathbf{I}_S} e \right\|_2^2 \,\middle|\, \mathbf{A} \right] = \sum_{C \in G} \sum_{C' \in G} p_C p_{C'} \frac{\sigma_{C'}}{n^2 p_{C'}^2} \left\| \mathbf{A} (\mathbf{I} - \Pi_{\mathbf{I}_C}) \Pi_{\mathbf{I}_{C'}} e \right\|_2^2 \overset{(107)}{=} \sum_{C' \in G} \frac{\sigma_{C'}}{n^2 p_{C'}} \left\| \mathbf{A} \Pi_{\mathbf{I}_{C'}} e \right\|_2^2 \sum_{C \in G, \, C \neq C'} p_C = \sum_{C' \in G} p_{C'} \sigma_{C'} (1 - p_{C'}) \frac{1}{n^2 p_{C'}^2} \left\| \mathbf{A} \Pi_{\mathbf{I}_{C'}} e \right\|_2^2 \overset{(114)}{=} \mathbb{E}\left[ \sigma_S (1 - p_S) \left\| \frac{1}{n} \mathbf{J}^k e - \nabla f_{\mathbf{I}_S, \mathbf{J}^k}(x^*) \right\|_2^2 \,\middle|\, \mathbf{J}^k \right]$.   (116)

Taking the conditional expectation of expression II yields

$\mathbb{E}\left[ \frac{\sigma_S}{n^2 p_S^2} \left\| \mathbf{R} \Pi_{\mathbf{I}_{S_k}} \Pi_{\mathbf{I}_S} e \right\|_2^2 \,\middle|\, \mathbf{R}, S_k \right] = \sum_{C \in G} p_C \frac{\sigma_C}{n^2 p_C^2} \left\| \mathbf{R} \Pi_{\mathbf{I}_{S_k}} \Pi_{\mathbf{I}_C} e \right\|_2^2 \overset{(107)}{=} \frac{\sigma_{S_k}}{n^2 p_{S_k}} \left\| \mathbf{R} \Pi_{\mathbf{I}_{S_k}} \Pi_{\mathbf{I}_{S_k}} e \right\|_2^2 = \frac{\sigma_{S_k}}{n^2 p_{S_k}} \left\| \mathbf{R} \Pi_{\mathbf{I}_{S_k}} e \right\|_2^2 = \sigma_{S_k} p_{S_k} \left\| \nabla f_{\mathbf{I}_{S_k}, \mathbf{J}^k}(x^k) - \nabla f_{\mathbf{I}_{S_k}, \mathbf{J}^k}(x^*) \right\|_2^2$,   (117)

where in the last equation we used the identity

$\left\| \nabla f_{\mathbf{I}_C, \mathbf{J}}(x) - \nabla f_{\mathbf{I}_C, \mathbf{J}}(y) \right\|_2^2 = \frac{1}{n^2 p_C^2} \left\| (\nabla F(x) - \nabla F(y)) \Pi_{\mathbf{I}_C} e \right\|_2^2, \quad \forall \mathbf{J} \in \mathbb{R}^{d \times n}, \; C \in G$,   (118)

which in turn is a specialization of (44) to the minibatch sketch $\mathbf{S} = \mathbf{I}_S$ and the specific choice of the bias-correcting variable $\theta_S = 1/p_S$. It remains to take the expectation of (116) and (117), apply the tower property, and combine this with (115).

Bounding the second moment of $g^k$

In the next lemma we bound the second moment of our gradient estimate $g^k$.

Lemma 11

The second moment of the gradient estimate is bounded by

$\mathbb{E}\left[ \|g^k\|_2^2 \,\middle|\, \mathbf{J}^k, x^k \right] \leq 2 \, \mathbb{E}\left[ \left\| \nabla f_{\mathbf{I}_S, \mathbf{J}^k}(x^k) - \nabla f_{\mathbf{I}_S, \mathbf{J}^k}(x^*) \right\|_2^2 \,\middle|\, \mathbf{J}^k, x^k \right] + 2 \, \mathbb{E}\left[ \left\| \nabla f_{\mathbf{I}_S, \mathbf{J}^k}(x^*) - \frac{1}{n} \mathbf{J}^k e \right\|_2^2 \,\middle|\, \mathbf{J}^k, x^k \right]$.   (119)

Proof

Adding and subtracting $\frac{1}{n p_{S_k}} \nabla F(x^*) \Pi_{\mathbf{I}_{S_k}} e$ in (108) gives

$g^k = \frac{1}{n} \mathbf{J}^k e - \frac{1}{n p_{S_k}} (\mathbf{J}^k - \nabla F(x^*)) \Pi_{\mathbf{I}_{S_k}} e + \frac{1}{n p_{S_k}} (\nabla F(x^k) - \nabla F(x^*)) \Pi_{\mathbf{I}_{S_k}} e$.

Taking the norm squared on both sides, and using the bound $\|a + b\|_2^2 \leq 2\|a\|_2^2 + 2\|b\|_2^2$, gives

$\|g^k\|_2^2 \leq \frac{2}{n^2 p_{S_k}^2} \left\| (\nabla F(x^k) - \nabla F(x^*)) \Pi_{\mathbf{I}_{S_k}} e \right\|_2^2 + \frac{2}{n^2} \left\| \frac{1}{p_{S_k}} (\mathbf{J}^k - \nabla F(x^*)) \Pi_{\mathbf{I}_{S_k}} e - \mathbf{J}^k e \right\|_2^2 \overset{(118)}{=} 2 \left\| \nabla f_{\mathbf{I}_{S_k}, \mathbf{J}^k}(x^k) - \nabla f_{\mathbf{I}_{S_k}, \mathbf{J}^k}(x^*) \right\|_2^2 + \underbrace{\frac{2}{n^2} \left\| \frac{1}{p_{S_k}} (\mathbf{J}^k - \nabla F(x^*)) \Pi_{\mathbf{I}_{S_k}} e - \mathbf{J}^k e \right\|_2^2}_{A}$.   (120)

Taking the expectation of the $A$ term, and writing $X \stackrel{\text{def}}{=} \frac{1}{p_S} (\mathbf{J}^k - \nabla F(x^*)) \Pi_{\mathbf{I}_S} e$, we have $\mathbb{E}[X \mid \mathbf{J}^k, x^k] = \mathbf{J}^k e$, since $\sum_{C \in G} \Pi_{\mathbf{I}_C} = \mathbf{I}$ for a partition and $\nabla F(x^*) e = n \nabla f(x^*) = 0$. Consequently,

$\mathbb{E}\left[ \left\| X - \mathbf{J}^k e \right\|_2^2 \,\middle|\, \mathbf{J}^k, x^k \right] \leq \mathbb{E}\left[ \|X\|_2^2 \,\middle|\, \mathbf{J}^k, x^k \right] \overset{(114)}{=} n^2 \, \mathbb{E}\left[ \left\| \nabla f_{\mathbf{I}_S, \mathbf{J}^k}(x^*) - \frac{1}{n} \mathbf{J}^k e \right\|_2^2 \,\middle|\, \mathbf{J}^k, x^k \right]$,

where we used the inequality $\mathbb{E}\|X - \mathbb{E}X\|_2^2 \leq \mathbb{E}\|X\|_2^2$. The result follows by combining the above with (120).

Smoothness and strong convexity of $f_{\mathbf{I}_C, \mathbf{J}}$

Recalling the setting of Theorem 6, we assume that each $f_C$ is $\mu$-strongly convex and $L_C$-smooth:

$f_C(y) + \langle \nabla f_C(y), x - y \rangle + \frac{\mu}{2} \|x - y\|_2^2 \leq f_C(x) \leq f_C(y) + \langle \nabla f_C(y), x - y \rangle + \frac{L_C}{2} \|x - y\|_2^2$

for all $C \in G$. It is known (see Section 2.1 in [19]) that the above conditions imply the following inequality:

$\langle \nabla f_C(x) - \nabla f_C(y), x - y \rangle \geq \frac{\mu L_C}{\mu + L_C} \|x - y\|_2^2 + \frac{1}{\mu + L_C} \left\| \nabla f_C(x) - \nabla f_C(y) \right\|_2^2$,   (121)

for all $x, y \in \mathbb{R}^d$. A consequence of these assumptions that will be useful to us is that the function $f_{\mathbf{I}_C, \mathbf{J}}$ is $\frac{\tau \mu}{n p_C}$-strongly convex and $\frac{\tau L_C}{n p_C}$-smooth. This can in turn be used to establish the next lemma, which will be used in the proof of Theorem 6:

Lemma 12

Under the assumptions of Theorem 6 (in particular, the assumptions on $f$ and $S$), we have

$\langle \nabla f(x) - \nabla f(y), x - y \rangle \geq \frac{\mu}{2} \|x - y\|_2^2 + \mathbb{E}_{\mathcal{D}}\left[ \frac{n p_S}{2 \tau L_S} \left\| \nabla f_{\mathbf{I}_S, \mathbf{J}}(x) - \nabla f_{\mathbf{I}_S, \mathbf{J}}(y) \right\|_2^2 \right]$,   (122)

for all $x, y \in \mathbb{R}^d$ and $\mathbf{J} \in \mathbb{R}^{d \times n}$.

Proof

Applying (121) to the function $f_{\mathbf{I}_S, \mathbf{J}}$ gives

$\langle \nabla f_{\mathbf{I}_S, \mathbf{J}}(x) - \nabla f_{\mathbf{I}_S, \mathbf{J}}(y), x - y \rangle \geq \frac{\tau}{n p_S} \frac{\mu L_S}{\mu + L_S} \|x - y\|_2^2 + \frac{n p_S}{\tau (\mu + L_S)} \left\| \nabla f_{\mathbf{I}_S, \mathbf{J}}(x) - \nabla f_{\mathbf{I}_S, \mathbf{J}}(y) \right\|_2^2 \geq \frac{\tau \mu}{2 n p_S} \|x - y\|_2^2 + \frac{n p_S}{2 \tau L_S} \left\| \nabla f_{\mathbf{I}_S, \mathbf{J}}(x) - \nabla f_{\mathbf{I}_S, \mathbf{J}}(y) \right\|_2^2$.

Taking expectations over $S$ on both sides, noting that $\mathbb{E}_{\mathcal{D}}\left[ \frac{1}{p_S} \right] = \sum_{C \in G} 1 = \frac{n}{\tau}$, and recalling that $\nabla f_{\mathbf{I}_S, \mathbf{J}}(x)$ is an unbiased estimator of $\nabla f(x)$, we get the result.

Proof of Theorem 6

Let $\mathbb{E}_k[\cdot]$ denote the expectation conditional on $\mathbf{J}^k$ and $x^k$. We can write

$\mathbb{E}_k \|x^{k+1} - x^*\|_2^2 \overset{(2)}{=} \mathbb{E}_k \|x^k - x^* - \alpha g^k\|_2^2 \overset{(33)}{=} \|x^k - x^*\|_2^2 - 2\alpha \langle \nabla f(x^k), x^k - x^* \rangle + \alpha^2 \mathbb{E}_k \|g^k\|_2^2 \overset{(122)}{\leq} (1 - \mu\alpha) \|x^k - x^*\|_2^2 - \alpha \, \mathbb{E}_k\left[ \frac{n p_S}{\tau L_S} \left\| \nabla f_{\mathbf{I}_S, \mathbf{J}^k}(x^k) - \nabla f_{\mathbf{I}_S, \mathbf{J}^k}(x^*) \right\|_2^2 \right] + \alpha^2 \mathbb{E}_k \|g^k\|_2^2 \overset{(119)}{\leq} (1 - \mu\alpha) \|x^k - x^*\|_2^2 + 2\alpha^2 \, \mathbb{E}_k \left\| \frac{1}{n} \mathbf{J}^k e - \nabla f_{\mathbf{I}_S, \mathbf{J}^k}(x^*) \right\|_2^2 + 2\alpha \, \mathbb{E}_k\left[ \left( \alpha - \frac{n p_S}{2 \tau L_S} \right) \left\| \nabla f_{\mathbf{I}_S, \mathbf{J}^k}(x^k) - \nabla f_{\mathbf{I}_S, \mathbf{J}^k}(x^*) \right\|_2^2 \right]$.   (123)

Next, after taking the expectation in (123), applying the tower property, and subsequently adding the term $2\alpha \mathbb{E}\left[ \sigma_S \left\| \frac{1}{n} \mathbf{J}^{k+1} e - \nabla f_{\mathbf{I}_S, \mathbf{J}^{k+1}}(x^*) \right\|_2^2 \right]$ to both sides of the resulting inequality, we get

$\mathbb{E}[\Psi_S^{k+1}] \leq \mathbb{E}\left[ (1 - \mu\alpha) \|x^k - x^*\|_2^2 \right] + 2\alpha \, \mathbb{E}\left[ \left( \alpha - \frac{n p_S}{2 \tau L_S} \right) \left\| \nabla f_{\mathbf{I}_S, \mathbf{J}^k}(x^k) - \nabla f_{\mathbf{I}_S, \mathbf{J}^k}(x^*) \right\|_2^2 \right] + 2\alpha^2 \, \mathbb{E} \left\| \frac{1}{n} \mathbf{J}^k e - \nabla f_{\mathbf{I}_S, \mathbf{J}^k}(x^*) \right\|_2^2 + 2\alpha \, \mathbb{E}\left[ \sigma_S \left\| \frac{1}{n} \mathbf{J}^{k+1} e - \nabla f_{\mathbf{I}_S, \mathbf{J}^{k+1}}(x^*) \right\|_2^2 \right] \overset{(113)}{\leq} \mathbb{E}\left[ \underbrace{(1 - \mu\alpha)}_{\text{I}} \|x^k - x^*\|_2^2 \right] + 2\alpha \, \mathbb{E}\left[ \sigma_S \underbrace{\left( 1 - p_S + \frac{\alpha}{\sigma_S} \right)}_{\text{II}} \left\| \frac{1}{n} \mathbf{J}^k e - \nabla f_{\mathbf{I}_S, \mathbf{J}^k}(x^*) \right\|_2^2 \right] + 2\alpha \, \mathbb{E}\left[ \underbrace{\left( \alpha + \sigma_S p_S - \frac{n p_S}{2 \tau L_S} \right)}_{\text{III}} \left\| \nabla f_{\mathbf{I}_S, \mathbf{J}^k}(x^k) - \nabla f_{\mathbf{I}_S, \mathbf{J}^k}(x^*) \right\|_2^2 \right]$.   (124)

Next, we determine a bound on $\alpha$ so that III $\leq 0$. Choosing

$\alpha + \sigma_C p_C - \frac{n p_C}{2 \tau L_C} \leq 0, \;\forall C \in G \quad \Longleftrightarrow \quad \alpha \leq \frac{n p_C}{2 \tau L_C} - \sigma_C p_C, \;\forall C \in G$,   (125)

guarantees that III $\leq 0$, and thus the last term in (124) can be safely dropped. Next, to build a recurrence and conclude the convergence proof, we bound the stepsize $\alpha$ so that II $\leq$ I; that is,

$1 - p_C + \frac{\alpha}{\sigma_C} \leq 1 - \alpha\mu, \;\forall C \in G \quad \Longleftrightarrow \quad \alpha \leq \frac{\sigma_C p_C}{\mu \sigma_C + 1}, \;\forall C \in G$.   (126)

Consequently,

$\mathbb{E}[\Psi_S^{k+1}] \leq \mathbb{E}\left[ (1 - \mu\alpha) \|x^k - x^*\|_2^2 \right] + 2\alpha \, \mathbb{E}\left[ \sigma_S (1 - \mu\alpha) \left\| \frac{1}{n} \mathbf{J}^k e - \nabla f_{\mathbf{I}_S, \mathbf{J}^k}(x^*) \right\|_2^2 \right] = (1 - \mu\alpha) \, \mathbb{E}[\Psi_S^k]$.

Since $\sigma_S = \frac{n}{4 \tau L_S}$, in view of (125) and (126) the combined bound on $\alpha$ is

$\alpha \leq \min\left\{ \frac{n p_C}{4 \tau L_C}, \; \frac{p_C}{\mu + \frac{4\tau}{n} L_C} \right\} = \frac{p_C}{\mu + \frac{4\tau}{n} L_C}, \quad \forall C \in G$.

Hence, we have established the recursion (111).

Calculating the iteration complexity in special cases

In this section we consider the special case of JacSketch analyzed via Theorem 6 (minibatch SAGA with $\tau$-partition sampling) and look at further special cases by varying the minibatch size $\tau$ and the probabilities. Our aim is to justify the complexities appearing in Table 1. In view of Theorem 6, the iteration complexity is given by

$\max_{C \in G} \left\{ \frac{1}{p_C} + \frac{\tau}{n p_C} \frac{4 L_C}{\mu} \right\} \log\frac{1}{\epsilon}$,   (127)

where $G = \mathrm{supp}(S)$. Complexity (127) is listed in line 2 of Table 1. The complexities in lines 4, 6, 8 and 14 arise as special cases of (127) for specific choices of $\tau$ and the probabilities $p_C$:

  • In line 4 we have gradient descent. This is obtained by choosing $G = \{[n]\}$ (whence $p_{[n]} = 1$, $\tau = n$ and $L_{[n]} = L$), which is why (127) simplifies to $\left( 1 + \frac{4L}{\mu} \right) \log\frac{1}{\epsilon}$.

  • In line 6 we consider uniform SAGA. That is, we choose $\tau = 1$ and $p_i = 1/n$ for all $i$. We have $G = \{\{1\}, \{2\}, \ldots, \{n\}\}$ and $L_{\{i\}} = L_i$. Therefore, (127) simplifies to $\left( n + \frac{4 L_{\max}}{\mu} \right) \log\frac{1}{\epsilon}$. This is essentially the same16 complexity result given in [6].

  • In line 8 we consider SAGA with importance sampling. This is the same setup as above, except we choose
    $p_i = \frac{\mu n + 4 L_i}{\sum_{j=1}^n (\mu n + 4 L_j)}$,   (128)
    which is the optimal choice minimizing the complexity bound in $p_1,\ldots,p_n$. With these optimal probabilities, the stepsize bound becomes $\alpha \leq \frac{1}{n\mu + 4\bar{L}}$, and by choosing the maximum allowed stepsize the resulting iteration complexity is
    $\left( n + \frac{4\bar{L}}{\mu} \right) \log\frac{1}{\epsilon}$.   (129)
    Now consider the probabilities $p_i = \frac{L_i}{\sum_{j=1}^n L_j}$ suggested in [30]. Using our bound, these lead to the complexity
    $\max_{i=1,\ldots,n} \left\{ \frac{\sum_{j=1}^n L_j}{L_i} + \frac{4 \sum_{j=1}^n L_j}{\mu n} \right\} \log\frac{1}{\epsilon} = \left( \frac{n \bar{L}}{L_{\min}} + \frac{4 \bar{L}}{\mu} \right) \log\frac{1}{\epsilon}$.   (130)
    Comparing this with (129), we see that this non-uniform sampling offers a significant speedup over uniform sampling if $n\mu \leq L_{\min}$. However, our complexity (129) is always better than both the uniform sampling complexity $\left( n + \frac{4 L_{\max}}{\mu} \right) \log\frac{1}{\epsilon}$ and (130). (A small numerical sketch comparing these probabilities and bounds follows this list.)
  • Finally, in line 14 of Table 1 we optimize over the probabilities $p_C$ directly; that is, we extend the importance sampling described above to any $\tau$. Minimizing the complexity bound over the probabilities, and noting that $|G| = \frac{n}{\tau}$, this leads to the rate
    $\left( \frac{n}{\tau} + \frac{4 \frac{1}{|G|} \sum_{C \in G} L_C}{\mu} \right) \log\frac{1}{\epsilon}$.   (131)
    This iteration complexity also applies to the reduced memory variant of SAGA (18). This is because Theorem 6 also holds for sketches $\mathbf{S} = e_S$ where $S$ is a $\tau$-partition sampling. To see this, note that our analysis in this section relies on the orthogonality property (107), which also holds for $\mathbf{S} = e_S$ since (for $\mathbf{W} = \mathbf{I}$) we have
    $\Pi_{e_{C_1}} \Pi_{e_{C_2}} = \frac{1}{\tau} e_{C_1} ( \underbrace{e_{C_1}^\top e_{C_2}}_{=0} ) e_{C_2}^\top \frac{1}{\tau} = 0, \quad \text{for } C_1, C_2 \in G, \; C_1 \neq C_2$.
    Lemmas 10, 11 and 12 depend on the sketch through $\nabla f_{\mathbf{S}, \mathbf{J}}(x)$ only, which in turn depends on the sketch through $\Pi_{\mathbf{S}} e$, and it is easy to see that if either $\mathbf{S} = \mathbf{I}_S$ or $\mathbf{S} = e_S$, we have $\Pi_{\mathbf{S}} e = e_S$.
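As promised above, here is a small Python sketch (ours) of the optimized importance probabilities (128) next to the $p_i \propto L_i$ rule of [30], together with the corresponding complexity bounds (129) and (130); the constants are arbitrary placeholders.

```python
import numpy as np

def optimized_probs(L, mu):               # (128)
    w = mu * len(L) + 4 * L
    return w / w.sum()

def bound_opt(L, mu, eps=1e-4):           # (129)
    return (len(L) + 4 * L.mean() / mu) * np.log(1 / eps)

def bound_Li(L, mu, eps=1e-4):            # (130)
    n = len(L)
    return (n * L.mean() / L.min() + 4 * L.mean() / mu) * np.log(1 / eps)

L, mu = np.array([0.1, 1.0, 10.0, 100.0]), 0.01
print(optimized_probs(L, mu))             # much flatter than L / L.sum()
print(bound_opt(L, mu), bound_Li(L, mu))  # (129) is never worse than (130)
```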

Experiments

We perform several experiments to validate the theory, and also to test the practical relevance of non-uniform SAGA (79) with the optimized probability distribution (128). All of our code for these experiments was written in Julia and can be found on GitHub at https://github.com/gowerrobert/StochOpt.jl.

In our experiments we test either ridge regression

$f(x) = \frac{1}{2n} \|\mathbf{A}^\top x - y\|_2^2 + \frac{\lambda}{2} \|x\|_2^2$,   (132)

or logistic regression

$f(x) = \frac{1}{n} \sum_{i=1}^n \log\left( 1 + e^{-y_i \langle a_i, x \rangle} \right) + \frac{\lambda}{2} \|x\|_2^2$,   (133)

where $\mathbf{A} = [a_1,\ldots,a_n] \in \mathbb{R}^{d \times n}$ and $y \in \mathbb{R}^n$ are the given data, and $\lambda > 0$ is the regularization parameter.

New non-uniform sampling using optimal probabilities

First we compare non-uniform SAGA using the new optimized importance probabilities (128) against using the probabilities $p_i = L_i / (n\bar{L})$ as suggested in [30]. When $n\mu$ is significantly smaller than $L_i$ for all $i$, the two samplings are very similar. But when $n\mu$ is relatively large, the optimized probabilities (128) can be much closer to a uniform distribution than $p_i = L_i / (n\bar{L})$. We illustrate this by solving a ridge regression problem (132), using generated data such that

$\mathbf{A}^\top x = y + \epsilon$,   (134)

where the elements of $\mathbf{A}$ and $x$ are sampled from the standard Gaussian distribution $N(0,1)$, and the elements of $\epsilon$ are sampled from $N(0, 10^{-3})$. It is not hard to see that the smoothness constants $\{L_i\}$ are given by $L_i = \|a_i\|_2^2 + \lambda$ for $i \in [n]$. We scale the columns of $\mathbf{A}$ so that $\|a_1\|_2^2 = 1$ and $\|a_i\|_2^2 = \frac{1}{n^2}$ for $i = 2,\ldots,n$, and set the regularization parameter $\lambda = \frac{1}{n^2}$. Consequently, $L_{\max} = 1 + \frac{1}{n^2}$, $L_i = \frac{2}{n^2}$ for $i = 2,\ldots,n$, $\bar{L} = \frac{(n+1)^2 - 1}{n^3}$ and $\mu = \frac{1}{n} \lambda_{\min}(\mathbf{A}\mathbf{A}^\top) + \frac{1}{n^2}$. In this case the iteration complexity of non-uniform SAGA with the optimal probabilities (129) is given by

$\left( n + \frac{4 \left( (n+1)^2 - 1 \right)}{\mu n^3} \right) \log\frac{1}{\epsilon}$.   (135)

The complexity (130) which results from using the probabilities $p_i = L_i / (n\bar{L})$ is given by

$\frac{(n+1)^2 - 1}{n^3} \left( \frac{n^3}{2} + \frac{4}{\mu} \right) \log\frac{1}{\epsilon}$.   (136)

Now we consider the regime where $n \to \infty$, in which case $\mu = O\left( \frac{1}{n^2} \right)$, and consequently (135) $= O(n) \log\frac{1}{\epsilon}$ while, in contrast, (136) $= O(n^2) \log\frac{1}{\epsilon}$.
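The constants of this construction are easy to reproduce. Here is a Python sketch (ours; the column scaling follows the description above, and $d$ is an arbitrary placeholder) that generates the data and evaluates the two complexities (135) and (136) numerically.

```python
import numpy as np

def ridge_complexities(n, d=10, eps=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((d, n))                    # columns are the a_i
    A[:, 0] /= np.linalg.norm(A[:, 0])                 # ||a_1||^2 = 1
    A[:, 1:] /= np.linalg.norm(A[:, 1:], axis=0) * n   # ||a_i||^2 = 1/n^2
    lam = 1.0 / n**2
    L = np.sum(A**2, axis=0) + lam                     # L_i = ||a_i||^2 + lam
    mu = np.linalg.eigvalsh(A @ A.T).min() / n + lam
    Lbar = L.mean()
    opt = (n + 4 * Lbar / mu) * np.log(1 / eps)                      # cf. (135)
    li = (n * Lbar / L.min() + 4 * Lbar / mu) * np.log(1 / eps)      # cf. (136)
    return opt, li

for n in (10, 100, 1000):
    print(n, ridge_complexities(n))
```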

We illustrate this in Fig. 1a-c where we set $n = 10$, $n = 100$ and $n = 1000$, respectively, and plot the complexities given in (135) and (136). To accompany this plot, in Fig. 2a-c we also plot an execution of SAGA-uni (SAGA with uniform probabilities), SAGA-Li (SAGA with $p_i = L_i / (n\bar{L})$) and SAGA-opt (SAGA with the optimized probabilities). In all figures we see that SAGA-opt is the fastest method. We can also see that SAGA-Li stalls in Fig. 2b and c, where $n$ is larger, performing even worse than SAGA-uni.

Fig. 2  Comparing the performance of SAGA with importance sampling based on the optimized probabilities (128) (SAGA-opt), $p_i = L_i / (n\bar{L})$ (SAGA-Li) and $p_i = 1/n$ (SAGA-uni) for an artificially constructed ridge regression problem as $n$ grows. Markers represent monitored points and not the iterations of the algorithms

Optimal mini-batch size

Our analysis of mini-batch SAGA is precise enough to inform the choice of an optimal mini-batch size. For instance, consider the $\tau$-nice sampling and the resulting iteration complexity (102). Theorem 3 states that for any $\tau \in [n]$, the terms within the maximum in (102) are bounded by

$L_{\max} \geq L_{\max}^G \geq L$   (137)

$L_{\max} + \frac{\mu n}{4} \geq C(\tau) \stackrel{\text{def}}{=} \frac{1}{\tau} \frac{n - \tau}{n - 1} L_{\max} + \frac{\mu}{4} \frac{n}{\tau} \geq \frac{\mu}{4}$.   (138)

Moreover, the upper and lower bounds are realized for $\tau = 1$ and $\tau = n$, respectively. Consequently, for $\tau$ small, we have $L_{\max}^G \leq C(\tau)$. On the other hand, for $\tau$ large we have $L_{\max}^G \geq C(\tau)$. Furthermore, $C(\tau)$ decreases super-linearly in $\tau$, while $L_{\max}^G$ tends to decrease more modestly. Consequently, the point where $L_{\max}^G$ overtakes $C(\tau)$ is often the best for the overall complexity of the method. To better appreciate these observations, we plot the evolution of the iteration complexity (102), the total complexity, and the iteration complexity as predicted by Hofmann et al. [14] (see (105)) as $\tau$ increases in Fig. 3a-c for three different linear least squares problems. Since each step of mini-batch SAGA computes $\tau$ stochastic gradients, the total complexity is $\tau$ times the iteration complexity. In each figure we can see that our iteration complexity initially decreases super-linearly; then at some point the complexity is dominated by $L_{\max}^G$ and the iteration complexity decreases sublinearly. Up to this point we can observe an improvement in the overall total complexity. This is in contrast to the iteration complexity given by Hofmann et al., which shows practically no improvement even in the iteration complexity as $\tau$ increases.

Fig. 3  Comparison of the methods on logistic regression problems (133) with data taken from LIBSVM [4]

Though these experiments indicate only modest improvements in total complexity, and suggest that $\tau = 2$ or $\tau = 3$ is optimal, we must bear in mind that this corresponds to 10% and 20% of the data for these small dimensional problems. We conjecture that for larger problems, this improvement in total complexity will also be larger.

To use these insights in practice, we need to be able to efficiently determine the $\tau$ which corresponds to the point at which the convergence regime switches from being dominated by $C(\tau)$ to being dominated by $L_{\max}^G$. This amounts to choosing $\tau$ so that $L_{\max}^G = \frac{1}{\tau} \frac{n - \tau}{n - 1} L_{\max} + \frac{\mu}{4} \frac{n}{\tau}$. Estimating $L_{\max}$ and $\mu$ is often possible, but the cost of computing $L_{\max}^G$ has a combinatorial dependency on $n$ and $\tau$. Thus, to have a practical way of choosing $\tau$, we first need to bound $L_{\max}^G$. This can be done for losses with linear classifiers using concentration bounds. We leave this for future work.
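For illustration, here is a naive Python sketch (ours) of this switching rule. The L_max_G argument is a user-supplied upper-bound oracle for $L_{\max}^G(\tau)$; by default it falls back on the crude bound $L_{\max}$ from Theorem 3.

```python
def best_tau(n, Lmax, mu, L_max_G=None):
    """Return the smallest tau at which L_max^G overtakes C(tau) from (138)."""
    if L_max_G is None:
        L_max_G = lambda tau: Lmax          # crude bound: L_max^G <= L_max
    C = lambda tau: (n - tau) / ((n - 1) * tau) * Lmax + mu * n / (4 * tau)
    for tau in range(1, n + 1):
        if L_max_G(tau) >= C(tau):          # regime now dominated by L_max^G
            return tau
    return n

print(best_tau(n=1000, Lmax=10.0, mu=0.1))
```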

Comparative experiments

We now compare the performance of SAGA-opt to several known methods, namely SVRG [15], grad (gradient descent with fixed stepsize) and AMprev (an improved version of SVRG that uses second order information) [28]. For the stepsize of SAGA-opt and SAG-opt, we found the stepsize $\alpha \leq \frac{1}{n\mu + 4\bar{L}}$ given by the theory to be a bit too conservative. Instead we did away with the 4 and used $\alpha = \frac{1}{n\mu + \bar{L}}$. For the remaining methods we used a grid search over $L_{\max} \times 2^m$ for $m = 21, 19, 17, \ldots, -10, -11$.

To illustrate how biased gradient estimates can perform well in practice, we also test SAG-opt: a method that uses the same Jacobian updates as SAGA-opt, but instead uses the biased gradient estimate $g^k = \frac{1}{n} \mathbf{J}^{k+1} e$. See Sect. 2.5 for more details on biased gradient estimates.

In Fig. 3a-c we compare the methods on three logistic regression problems (133) based on three different data sets taken from LIBSVM [4]. In all these problems the two methods with optimized non-uniform sampling, SAG-opt and SAGA-opt, were faster in terms of both epochs and time. The next best method was AMprev, followed by SVRG and grad. It is interesting to see how well SAG-opt performs in practice, despite having biased gradient estimates. This is why we believe it is important to advance the analysis of biased gradient estimates in future work.

Conclusion

We now provide a brief summary of some of the key contributions of this paper and a few selected pointers to possible future research directions.

We developed and analyzed JacSketch, a novel family of variance reduced methods based on Jacobian sketching, and provided a link between variance reduction for empirical risk minimization and recent results from the field of randomized numerical linear algebra on sketch-and-project type methods for solving linear systems. In particular, it turns out that variance reduction is obtained by taking an SGD step on a stochastic optimization problem whose solution is the unknown Jacobian. As a consequence of our analysis, we resolved the conjecture of [30] in the affirmative by proving that a properly designed importance sampling for SAGA leads to an iteration complexity of $O\left( n + \frac{\bar{L}}{\mu} \right) \log\frac{1}{\epsilon}$. For this purpose we developed a new proof technique based on a stochastic Lyapunov function. Our complexity result for uniform mini-batch SAGA perfectly interpolates between the best known convergence rates of SAGA and gradient descent, and is sufficiently precise as to inform the choice of the batch size that minimizes the overall complexity of the method. Additionally, we designed and analyzed a reduced memory variant of SAGA as a special case.

For future work we see many possible avenues including the following.

Structured sparse weight matrices One may wish to explore combinations of a weight matrix and different sketches to design new efficient methods further improving iteration complexity. For this the weighting matrix will have to be highly structured (e.g., block diagonal or very sparse) so that the Jacobian update (39) can be computed efficiently.

Bias-variance trade-off One can try to explore the bias-variance trade-off, as opposed to focusing merely on the two extremes: SAG (minimum variance) and SAGA (no bias). There is also no empirical evidence that unbiased estimators outperform biased ones.

Johnson–Lindenstrauss sketches One can design completely new methods using different sparse sketches, such as the fast Johnson–Lindenstrauss transform [2] or the Achlioptas transform [1]. The resulting method can then be analyzed through Theorem 1. But first these sketches need to be adapted to ensure we get an efficient method. In particular, computing $\nabla F(x) \mathbf{S}$ is only efficient if most of the rows of $\mathbf{S}$ are zero.

Acknowledgements

Funding was provided by Fondation de Sciences Mathématiques de Paris, European Research Council (Grant No. ERC SEQUOIA), LabEx LMH (Grant No. ANR-11-LABX-0056-LMH).

Appendix A: Proof of inequality (20)

Lemma 13

Let $S$ be a sampling whose support $G = \mathrm{supp}(S)$ is a partition of $[n]$. Moreover, assume all sets of this partition have cardinality $\tau$. Then

$\frac{1}{|G|} \sum_{C \in G} L_C \leq \bar{L} \leq \max_{C \in G} \frac{1}{\tau} \sum_{i \in C} L_i$.

Proof

By assumption, $|G| = \frac{n}{\tau}$. The first inequality follows from $\sum_{C \in G} L_C \leq \sum_{C \in G} \frac{1}{\tau} \sum_{i \in C} L_i = \frac{1}{\tau} \sum_{i=1}^n L_i = \frac{n}{\tau} \bar{L}$. On the other hand,

$\bar{L} = \frac{1}{n} \sum_{i=1}^n L_i = \frac{1}{n} \sum_{C \in G} \sum_{i \in C} L_i = \frac{1}{|G|} \sum_{C \in G} \frac{1}{\tau} \sum_{i \in C} L_i \leq \max_{C \in G} \frac{1}{\tau} \sum_{i \in C} L_i$.

Appendix B: Duality of sketch-and-project and constrain-and-approximate

Lemma 14

Let $\mathbf{J}^k, \mathbf{F} \in \mathbb{R}^{d \times n}$ and $\mathbf{S} \in \mathbb{R}^{n \times \tau}$. The sketch-and-project problem

$\mathbf{J}^{k+1} = \arg\min_{\mathbf{J} \in \mathbb{R}^{d \times n}} \frac{1}{2} \|\mathbf{J} - \mathbf{J}^k\|^2_{\mathbf{W}^{-1}} \quad \text{subject to} \quad \mathbf{F} \mathbf{S} = \mathbf{J} \mathbf{S}$,   (139)

and the constrain-and-approximate problem

$\mathbf{J}^{k+1} = \arg_{\mathbf{J} \in \mathbb{R}^{d \times n}} \min_{\mathbf{Y} \in \mathbb{R}^{d \times \tau}} \frac{1}{2} \|\mathbf{J} - \mathbf{F}\|^2_{\mathbf{W}^{-1}} \quad \text{subject to} \quad \mathbf{J} = \mathbf{J}^k + \mathbf{Y} \mathbf{S}^\top \mathbf{W}$,   (140)

have the same solution, given by

$\mathbf{J}^{k+1} = \mathbf{J}^k - (\mathbf{J}^k - \mathbf{F}) \mathbf{S} (\mathbf{S}^\top \mathbf{W} \mathbf{S})^{\dagger} \mathbf{S}^\top \mathbf{W}$.   (141)

Proof

The proof is given in Theorem 4.1 in [12].
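A quick numerical check (ours, with $\mathbf{W} = \mathbf{I}$ and random placeholder data) that the closed form (141) satisfies the constraint of the sketch-and-project problem (139):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, tau = 5, 8, 3
Jk = rng.standard_normal((d, n))
F = rng.standard_normal((d, n))
S = rng.standard_normal((n, tau))
Pi = S @ np.linalg.pinv(S.T @ S) @ S.T        # projection matrix, W = I
J_next = Jk - (Jk - F) @ Pi                   # the closed form (141)
print(np.allclose(J_next @ S, F @ S))         # constraint FS = JS of (139)
```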

Appendix C: Proof of Theorem 5

First we will establish that

$\frac{|G|}{c_1^2} \sum_{C \in G} e_C e_C^\top \mathbf{W} = \begin{pmatrix} \frac{|G|}{c_1} w_1 & \frac{|G| c_2}{c_1^2} w_2 & \cdots & \frac{|G| c_2}{c_1^2} w_n \\ \frac{|G| c_2}{c_1^2} w_1 & \frac{|G|}{c_1} w_2 & \cdots & \frac{|G| c_2}{c_1^2} w_n \\ \vdots & & \ddots & \vdots \\ \frac{|G| c_2}{c_1^2} w_1 & \frac{|G| c_2}{c_1^2} w_2 & \cdots & \frac{|G|}{c_1} w_n \end{pmatrix}$.   (142)

Indeed, for every $i$ we have $e_i^\top \left( \frac{|G|}{c_1^2} \sum_{C \in G} e_C e_C^\top \mathbf{W} \right) e_i = w_i \frac{|G|}{c_1^2} \sum_{C \in G : i \in C} 1 = w_i \frac{|G|}{c_1}$, and for every $i \neq j$ we have $e_i^\top \left( \frac{|G|}{c_1^2} \sum_{C \in G} e_C e_C^\top \mathbf{W} \right) e_j = w_j \frac{|G|}{c_1^2} \sum_{C \in G : i, j \in C} 1 = w_j \frac{|G| c_2}{c_1^2}$. Using (142), (91) and the Gershgorin circle theorem to bound $\rho$ from above, we get $\rho \leq \max_i \left\{ \left( \frac{|G|}{c_1} - 1 \right) w_i + \sum_{j \neq i} w_j \left| \frac{|G| c_2}{c_1^2} - 1 \right| \right\}$, as claimed. When $\mathbf{W} = \mathbf{I}$ we can get tighter results by using the fact that $\frac{|G|}{c_1^2} \sum_{C \in G} e_C e_C^\top - e e^\top$ is a circulant matrix with associated vector $v = \left( \frac{|G|}{c_1} - 1, \frac{|G| c_2}{c_1^2} - 1, \ldots, \frac{|G| c_2}{c_1^2} - 1 \right) \in \mathbb{R}^n$. There is an elegant formula for calculating the eigenvalues $\lambda_j$ of circulant matrices [34] using $v$, given by

$\lambda_j = v_1 + \sum_{k=1}^{n-1} \omega_j^k v_{n-k+1} = \frac{|G|}{c_1} - 1 + \left( \frac{|G| c_2}{c_1^2} - 1 \right) \sum_{k=1}^{n-1} \omega_j^k, \quad \text{for } j = 0, \ldots, n-1$,   (143)

where $\omega_j = e^{2\pi \mathrm{i} j / n}$ are the $n$-th roots of unity and $\mathrm{i}$ is the imaginary unit. From (143) we see that there are only two distinct eigenvalues. Namely, for $j = 0$ we have

$\lambda_0 \overset{(143)}{=} \frac{|G|}{c_1} - 1 + \left( \frac{|G| c_2}{c_1^2} - 1 \right)(n - 1) = \frac{|G|}{c_1} \left( 1 + (n-1) \frac{c_2}{c_1} \right) - n$.

The other eigenvalue is given by any $j \neq 0$, since

$\lambda_j \overset{(143)}{=} \frac{|G|}{c_1} - 1 - \left( \frac{|G| c_2}{c_1^2} - 1 \right) + \left( \frac{|G| c_2}{c_1^2} - 1 \right) \underbrace{\sum_{k=0}^{n-1} \omega_j^k}_{=0} = \frac{|G|}{c_1} \left( 1 - \frac{c_2}{c_1} \right)$.

Appendix D: Notation glossary

See Table 2.

Table 2.

Frequently used notation

$f(x)$   $\frac{1}{n}\sum_{i=1}^n f_i(x)$ (convex loss function $f:\mathbb{R}^d \to \mathbb{R}$)   (1)
$x^*$   Minimizer of $f$   (1)
$\mu$   Strong convexity constant of $f$   Table 1, Assumption 3.3, Theorem 6
$\alpha$   Stepsize   (2)
$g^k$   Stochastic estimator of $\nabla f(x^k)$   (2), (13), (16), (33)
$[n]$   $\{1,2,\ldots,n\}$
$F(x)$   $(f_1(x),\ldots,f_n(x)) \in \mathbb{R}^n$ (function $F:\mathbb{R}^d \to \mathbb{R}^n$)   (3)
$\nabla F(x)$   $[\nabla f_1(x),\ldots,\nabla f_n(x)] \in \mathbb{R}^{d \times n}$ (Jacobian of $F$ at $x$)   (4)
$e$   $(1,1,\ldots,1)^\top \in \mathbb{R}^n$ (vector of all ones)   (5)
$\nabla f^* / \nabla f^k$   Shorthand for $\nabla f(x^*)$ / $\nabla f(x^k)$
$\mathbf{W}$   $n \times n$ symmetric positive definite "weight" matrix   (10), (12)
$\|\mathbf{X}\|_{\mathbf{W}^{-1}}$   $(\mathrm{Tr}(\mathbf{X}\mathbf{W}^{-1}\mathbf{X}^\top))^{1/2}$ (weighted Frobenius norm)   (10)
$\mathbf{S}$   A random (sketching) $n \times \tau$ matrix picked from $\mathcal{D}$
$\Pi_{\mathbf{S}}$   $\mathbf{S}(\mathbf{S}^\top \mathbf{W} \mathbf{S})^{\dagger} \mathbf{S}^\top \mathbf{W}$ (stochastic projection matrix)
$\theta_{\mathbf{S}}$   Bias-correcting random variable   (15), Assumption 2.1
$\mathbb{E}_{\mathcal{D}}[\cdot]$   $\mathbb{E}_{\mathbf{S} \sim \mathcal{D}}[\cdot]$ (expectation over $\mathbf{S} \sim \mathcal{D}$)
$S$ or $S_k$   Sampling (a random subset of $[n]$)
$\tau$   $\mathbb{E}[|S|]$ (minibatch size)
$C$   Subset of $[n]$
$e_C$   $\sum_{i \in C} e_i$ ($e_i$ is the $i$th unit coordinate vector in $\mathbb{R}^n$)
$p_C / p_i$   $\mathbb{P}[S = C]$ / $\mathbb{P}[i \in S]$   Sections 1.4 and 4
$\mathbf{I}_C$   Column submatrix of $\mathbf{I}$ with columns indexed by $C$   Section 4, Theorem 6
$G = \mathrm{supp}(S)$   $\{C \subseteq [n] : p_C > 0\}$ (support of sampling $S$)   Section 4
$f_C$   $\frac{1}{|C|}\sum_{i \in C} f_i$ (subsampled loss function)   Section 4, Theorems 3 and 6
$L_C$   Smoothness constant of $f_C$   Sections 1.5 and 4.4, Theorems 3 and 6
$L_i$   Smoothness constant of $f_i$   Sections 1.5 and 4.4
$L_{\max}$   $\max_i L_i$   Sections 1.5 and 4.4, Theorem 3
$L$   Smoothness constant of $f = \frac{1}{n}\sum_i f_i$   Sections 1.5 and 4.4, Theorem 3
$\bar{L}$   $\frac{1}{n}\sum_{i=1}^n L_i$   Sections 1.5 and 4.4, Theorem 3
$L_1$   Expected smoothness constant of the stochastic gradient   Assumption 3.1, Theorem 1
$L_2$   Expected smoothness constant of the Jacobian   Assumption 3.2, Theorem 1
$L_i^G$   $\frac{1}{c_1}\sum_{C \in G : i \in C} L_C$
$L_{\max}^G$   $\max_i L_i^G$ ($= L_1$ for $\tau$-uniform $S$ with $c_1$-uniform support)   Sections 1.5 and 4.4, Theorems 2 and 3
$\kappa$   Stochastic contraction number   Section 3.2, Lemma 2, Theorem 1
$\rho$   Sketch residual   (37), Theorem 1, Lemma 6
$\Psi^k$ / $\Psi_S^k$   Lyapunov function / stochastic Lyapunov function   (52) / (109)
$c_1$   $|\{C : C \in \mathrm{supp}(S), 1 \in C\}|$   Definition 2
$c_2$   $|\{C : C \in \mathrm{supp}(S), \{1,2\} \subseteq C\}|$   (94)

Footnotes

1

For the purposes of this narrative it suffices to assume that stochastic gradients can be sampled at cost O(d).

2

We will not bother about the distribution from which it is picked at the moment. It suffices to say that virtually all distributions are supported by our theory. However, if we wish to obtain a practical method, some distributions will make much more sense than others.

3

The term “quasi-gradient methods” was popular in the 1980s [21], and refers to algorithms for solving certain stochastic optimization problems which rely on stochastic estimates of function values and their derivatives. In this paper we give the term a different meaning by drawing a direct link with quasi-Newton methods.

4

For some prior results on importance sampling for minibatches, in the context of QUARTZ, see [5].

5

A formal definition can be found in Assumption 4.2.

6

In this paper, a sampling is a random set-valued mapping with the sets being subsets of [n].

7

We prove inequality (20) in the Appendix; see Lemma 13.

8

SVRG is also built on a linear covariate model [15].

9

Excluding such trivial cases as when S is an invertible matrix and θS=1 with probability one, in which case B=0.

10

A similar relation to (43) holds for the stochastic optimization reformulation of linear systems studied by Richtárik and Takáč [26]. Therein, this relation holds as an identity with $L_1 = 1$ (see Lemma 3.3 in [26]). However, the function $f_{\mathbf{S}}$ considered there is entirely different and, moreover, $\nabla f(x^*) = 0$ and $\nabla f_{\mathbf{S}}(x^*) = 0$ for all $\mathbf{S}$.

11

The notion of a probability matrix associated with a sampling was first introduced in [25] in the context of parallel coordinate descent methods, and further studied in [22].

12

Recall that pi=PiS for i[n], pC=PS=C for C[n] and W=Diag(w1,,wn)0.

13

Note that $c_1 = |\{C \in \mathrm{supp}(S) : 1 \in C\}|$, and hence $L_1$ has the form of a maximum over averages.

14

This is only possible when n is a multiple of τ.

15

We can alternatively set S=eS and the same results will hold.

16

With the difference being that in [6] the iteration complexity is $2\left( n + \frac{L_{\max}}{\mu} \right) \log\frac{1}{\epsilon}$, thus a small constant change.

The first results of this paper were obtained in Fall 2015 and most key results were obtained by Fall 2016. All key results were obtained by Fall 2017. The first author gave a series of talks on the results (before the paper was released online) in November 2016 (Machine learning seminar at Télécom ParisTech), December 2016 (CORE seminar, Université catholique de Louvain), March 2017 (Optimization, machine learning, and pluri-disciplinarity workshop, Inria Grenoble - Rhone-Alpes), May 2017 (SIAM Conference on Optimization, Vancouver), September 2017 (Optimization 2017, Faculdade de Ciencias of the Universidade de Lisboa), and November 2017 (PGMO Days 2017, session on Continuous Optimization for Machine Learning, EDF’Lab Paris-Saclay).

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1.Achlioptas D. Database-friendly random projections: Johnson–Lindenstrauss with binary coins. J. Comput. Syst. Sci. 2003;66(4):671–687. doi: 10.1016/S0022-0000(03)00025-4.
  • 2.Ailon N, Chazelle B. The fast Johnson–Lindenstrauss transform and approximate nearest neighbors. SIAM J. Comput. 2009;39(1):302–322. doi: 10.1137/060673096.
  • 3.Allen-Zhu, Z.: Katyusha: the first direct acceleration of stochastic gradient methods. In: Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017, pp. 1200–1205 (2017)
  • 4.Chang CC, Lin CJ. LIBSVM: a library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011;2(3):1–27. doi: 10.1145/1961189.1961199.
  • 5.Csiba D, Richtárik P. Importance sampling for minibatches. J. Mach. Learn. Res. 2018;19(1):962–982.
  • 6.Defazio A, Bach F, Lacoste-Julien S. SAGA: a fast incremental gradient method with support for non-strongly convex composite objectives. Adv. Neural Inf. Process. Syst. 2014;27:1646–1654.
  • 7.Defazio, A.J., Caetano, T.S., Domke, J.: Finito: a faster, permutable incremental gradient method for big data problems. In: CoRR arXiv:1407.2710 (2014)
  • 8.Goldfarb D. Modification methods for inverting matrices and solving systems of linear algebraic equations. Math. Comput. 1972;26(120):829–829. doi: 10.1090/S0025-5718-1972-0317527-4.
  • 9.Goldfarb D. A family of variable-metric methods derived by variational means. Math. Comput. 1970;24(109):23–26. doi: 10.1090/S0025-5718-1970-0258249-6.
  • 10.Gower, R.M., Richtárik, P., Bach, F.: Stochastic quasi-gradient methods: variance reduction via Jacobian sketching. arXiv:1805.02632 (2018)
  • 11.Gower RM, Richtárik P. Randomized iterative methods for linear systems. SIAM J. Matrix Anal. Appl. 2015;36(4):1660–1690. doi: 10.1137/15M1025487.
  • 12.Gower RM, Richtárik P. Randomized quasi-Newton updates are linearly convergent matrix inversion algorithms. SIAM J. Matrix Anal. Appl. 2017;38(4):1380–1409. doi: 10.1137/16M1062053.
  • 13.Hickernell FJ, Lemieux C, Owen AB. Control variates for quasi-Monte Carlo. Stat. Sci. 2005;20(1):1–31. doi: 10.1214/088342304000000468.
  • 14.Hofmann, T., Lucchi, A., Lacoste-Julien, S., McWilliams, B.: Variance reduced stochastic gradient descent with neighbors. In: Cortes, C., Lawrence, N.D., Lee, D.D., Sugiyama, M., Garnett, R. (eds.) NIPS, pp. 2305–2313 (2015)
  • 15.Johnson, R., Zhang, T.: Accelerating stochastic gradient descent using predictive variance reduction. In: Advances in Neural Information Processing Systems 26, pp. 315–323. Curran Associates, Inc. (2013)
  • 16.Konečný J, Richtárik P. Semi-stochastic gradient descent methods. Front. Appl. Math. Stat. 2017;3:9. doi: 10.3389/fams.2017.00009.
  • 17.Lin H, Mairal J, Harchaoui Z. Catalyst acceleration for first-order convex optimization: from theory to practice. J. Mach. Learn. Res. 2017;18(1):7854–7907.
  • 18.Mairal J. Incremental majorization–minimization optimization with application to large-scale machine learning. SIAM J. Optim. 2015;25(2):829–855. doi: 10.1137/140957639.
  • 19.Nesterov Y. Introductory Lectures on Convex Optimization: A Basic Course, 1st edn. Berlin: Springer; 2014.
  • 20.Nguyen, L.M., Liu, J., Scheinberg, K., Takáč, M.: SARAH: a novel method for machine learning problems using stochastic recursive gradient. In: Precup, D., Teh, Y.W. (eds.) Proceedings of the 34th International Conference on Machine Learning, vol. 70, pp. 2613–2621. Proceedings of Machine Learning Research (PMLR) (2017)
  • 21.Novikova N. A stochastic quasi-gradient method of solving optimization problems in Hilbert space. U.S.S.R. Comput. Math. Math. Phys. 1984;24(2):6–16. doi: 10.1016/0041-5553(84)90077-6.
  • 22.Qu, Z., Richtárik, P.: Coordinate descent with arbitrary sampling II: expected separable overapproximation. arXiv:1412.8063 (2014)
  • 23.Qu, Z., Richtárik, P., Zhang, T.: Quartz: randomized dual coordinate ascent with arbitrary sampling. In: Proceedings of the 28th International Conference on Neural Information Processing Systems, Volume 1, NIPS'15, pp. 865–873. MIT Press, Cambridge (2015)
  • 24.Richtárik P, Takáč M. Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. Math. Program. 2014;144(1):1–38. doi: 10.1007/s10107-012-0614-z.
  • 25.Richtárik, P., Takáč, M.: Parallel coordinate descent methods for big data optimization problems. In: Mathematical Programming, pp. 1–52 (2015)
  • 26.Richtárik, P., Takáč, M.: Stochastic reformulations of linear systems: algorithms and convergence theory. arXiv:1706.01108 (2017)
  • 27.Robbins H, Monro S. A stochastic approximation method. Ann. Math. Stat. 1951;22:400–407. doi: 10.1214/aoms/1177729586.
  • 28.Gower, R.M., Le Roux, N., Bach, F.: Tracking the gradients using the Hessian: a new look at variance reducing stochastic methods. In: Proceedings of the 21st International Conference on Artificial Intelligence and Statistics. Proceedings of Machine Learning Research (2018)
  • 29.Schmidt M, Le Roux N, Bach F. Minimizing finite sums with the stochastic average gradient. Math. Program. 2017;162(1):83–112. doi: 10.1007/s10107-016-1030-6.
  • 30.Schmidt, M.W., Babanezhad, R., Ahmed, M.O., Defazio, A., Clifton, A., Sarkar, A.: Non-uniform stochastic average gradient method for training conditional random fields. In: Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, AISTATS 2015, San Diego, California, USA, May 9–12, 2015 (2015)
  • 31.Shalev-Shwartz, S.: SDCA without duality, regularization, and individual convexity. arXiv:1602.01582 (2016)
  • 32.Shalev-Shwartz S, Zhang T. Accelerated mini-batch stochastic dual coordinate ascent. Adv. Neural Inf. Process. Syst. 2013;26:378–385.
  • 33.Shalev-Shwartz S, Zhang T. Stochastic dual coordinate ascent methods for regularized loss. J. Mach. Learn. Res. 2013;14(1):567–599.
  • 34.Varga RS. Eigenvalues of circulant matrices. Pac. J. Math. 1954;4:151–160. doi: 10.2140/pjm.1954.4.151.
  • 35.Wang, C., Chen, X., Smola, A.J., Xing, E.P.: Variance reduction for stochastic gradient optimization. In: Burges, C.J.C., Bottou, L., Welling, M., Ghahramani, Z., Weinberger, K.Q. (eds.) Advances in Neural Information Processing Systems, vol. 26, pp. 181–189. Curran Associates Inc. (2013)
  • 36.Xiao, L., Zhang, T.: A proximal stochastic gradient method with progressive variance reduction. arXiv:1403.4699 (2014)
