Abstract
We develop a new family of variance reduced stochastic gradient descent methods for minimizing the average of a very large number of smooth functions. Our method—JacSketch—is motivated by novel developments in randomized numerical linear algebra, and operates by maintaining a stochastic estimate of a Jacobian matrix composed of the gradients of individual functions. In each iteration, JacSketch efficiently updates the Jacobian matrix by first obtaining a random linear measurement of the true Jacobian through (cheap) sketching, and then projecting the previous estimate onto the solution space of a linear matrix equation whose solutions are consistent with the measurement. The Jacobian estimate is then used to compute a variance-reduced unbiased estimator of the gradient. Our strategy is analogous to the way quasi-Newton methods maintain an estimate of the Hessian, and hence our method can be seen as a stochastic quasi-gradient method. Our method can also be seen as stochastic gradient descent applied to a controlled stochastic optimization reformulation of the original problem, where the control comes from the Jacobian estimates. We prove that for smooth and strongly convex functions, JacSketch converges linearly with a meaningful rate dictated by a single convergence theorem which applies to general sketches. We also provide a refined convergence theorem which applies to a smaller class of sketches, featuring a novel proof technique based on a stochastic Lyapunov function. This enables us to obtain sharper complexity results for variants of JacSketch with importance sampling. By specializing our general approach to specific sketching strategies, JacSketch reduces to the celebrated stochastic average gradient (SAGA) method, and its several existing and many new minibatch, reduced memory, and importance sampling variants. Our rate for SAGA with importance sampling is the current best-known rate for this method, resolving a conjecture by Schmidt et al. (Proceedings of the eighteenth international conference on artificial intelligence and statistics, AISTATS 2015, San Diego, California, 2015). The rates we obtain for minibatch SAGA are also superior to existing rates and are sufficiently tight as to show a decrease in total complexity as the minibatch size increases. Moreover, we obtain the first minibatch SAGA method with importance sampling.
Keywords: Stochastic gradient descent, Sketching, Variance reduction, Covariates
Introduction
We consider the problem of minimizing the average of a large number of differentiable functions
| 1 |
where f is —strongly convex and L—smooth. In solving (1), we restrict our attention to first-order methods that use a (variance-reduced) stochastic estimate of the gradient to take a step towards minimizing (1) by iterating
| 2 |
where is a stepsize.
In the context of machine learning, (1) is an abstraction of the empirical risk minimization problem; x encodes the parameters/features of a (statistical) model, and is the loss of example/data point i incurred by model x. The goal is to find the model x which minimizes the average loss on the n observations.
Typically, n is so large that algorithms which rely on scanning through all n functions in each iteration are too costly. The need for incremental methods for the training phase of machine learning models has revived the interest in the stochastic gradient descent (SGD) method [27]. SGD sets , where i is an index chosen from uniformly at random. SGD therefore requires only a single data sample to complete a step and make progress towards the solution. Thus SGD scales well in the number of data samples, which is important in several machine learning applications since there many be a large number of data samples. On the downside, the variance of the stochastic estimates of the gradient produced by SGD does not vanish during the iterative process, which suggests that a decreasing stepsize regime needs to be put into place if SGD is to converge. Furthermore, for SGD to work efficiently, this decreasing stepsize regime needs to be tuned for each application area, which is costly.
Variance-reduced methods
Stochastic variance-reduced versions of SGD offer a solution to this high variance issue, which improves the theoretical convergence rate and solves the issue with ad hoc stepsize regimes. The first variance reduced method for empirical risk minimization is the stochastic average gradient (SAG) method of Schmidt, Le Roux and Bach [29], closely followed by Finito [7] and Miso [18]. The analysis of SAG is notoriously difficult, which is perhaps due to the estimator of gradient being biased. Soon afterwards, the SAG gradient estimator was modified into an unbiased one, which resulted in the SAGA method [6]. The analysis of SAGA is dramatically simpler than that of SAG. Another popular method is SVRG of Johnson and Zhang [15] (see also S2GD [16]). SVRG enjoys the same theoretical complexity bound as SAGA, but has a much smaller memory footprint. It is based on an inner–outer loop procedure. In the outer loop, a full pass over data is performed to compute the gradient of f at the current point. In the inner loop, this gradient is modified with the use of cheap stochastic gradients, and steps are taken in the direction of the modified gradients. A notable recent addition to the family of variance reduced methods, developed by Nguyen et al. [20], is known as SARAH. Unlike other methods, SARAH does not use an estimator that is unbiased in the last step. Instead, it is unbiased over a long history of the method.
A fundamentally different way of designing variance reduced methods is to use coordinate descent [24, 25] to solve the dual. This is what the SDCA method [33] and its various extensions [32] do. The key advantage of this approach is that the dual often has a seperable structure in the coordinate space, which in turn means that each iteration of coordinate descent is cheap. Furthermore, SDCA is a variance-reduced method by design since the coordinates of the gradient tend to zero as one approaches the solution. One of the downsides of SDCA is that it requires calculating Fenchel duals and their derivatives. This issue was later solved by introducing approximations and mapping the dual iterates to the primal space as pointed out in [6]. This resulted in primal variants of SDCA such as dual-free SDCA [31]. A primal-dual variant which enables the use of arbitrary minibatch strategies was developed by Qu et al. [23], and is known as QUARTZ.
Finally, variance reduced methods can also be accelerated, as has been shown for the loop based methods such as Katyusha [3] or using the Universal catalyst [17].
Gaps in our understanding of SAGA
Despite significant research into variance-reduced stochastic gradient descent methods for solving (1), there are still big gaps in our understanding of variance reduction. For instance, the current theory supporting the SAGA algorithm is far from complete.
SAGA with uniform probabilities enjoys the iteration complexity , where and is the smoothness constant of . While importance sampling versions of SAGA have proved in practice to produce a speed-up over uniform SAGA [30], a proof of this speed-up has been elusive. It was conjectured by Schmidt et al. [30] that a properly designed importance sampling strategy for SAGA should lead to the rate , where . However, no such result was proved. This rate is achieved by, for instance, importance sampling variants of SDCA, QUARTZ [23] and SVRG [36]. However, the analysis only applies to a more specialized version of problem (1) (e.g., one needs an explicit strongly convex regularizer).
Second, existing minibatch variants of SAGA do not enjoy the same rate as that offered by methods such as SDCA and QUARTZ. Are the above issues with SAGA unavoidable, or is it the case that our understanding of the method is far from complete? Lastly, no minibatch variant of SAGA with importance sampling is known.
One of the contributions of this paper is giving positive answers to all of the above questions.
Jacobian sketching: a new approach to variance reduction
Our key contribution in this paper is the introduction of a novel approach—which we call Jacobian sketching—to designing and understanding variance-reduced stochastic gradient descent methods for solving (1). We refer to our method by the name JacSketch. We shall now briefly introduce some of the key insights motivating our approach. Let be defined by
| 3 |
and further let
| 4 |
be the Jacobian of F at x.
The starting point of our new approach is the following trivial observation: the gradient of f at x can be computed from the Jacobian by a simple linear transformation:
| 5 |
where is the vector of all ones in . This alone is not useful to come up with a better way of estimating the gradient. Indeed, formula (5) has two issues. First, the Jacobian is not available. If we wanted to compute it, we would need to pay the cost of one pass through the data. Second, even if the Jacobian was available, merely multiplying it by the vector of all ones would cost operations, which is again a cost equivalent to one pass over data.
Now, let us replace the vector of all ones in (5) by , the unit coordinate/basis vector in . If the index i is chosen randomly from [n], then
| 6 |
which is a stochastic gradient of f at x. In other words, by performing a random linear transformation of the Jacobian, we have arrived at the classical stochastic estimate of the gradient. This approach does not suffer from the first issue mentioned above as the Jacobian is not needed at all in order to compute . Likewise, it does not suffer from the second issue; namely, the cost of computing the stochastic gradient is merely , and we can avoid a costly pass through the data.1
However, this approach suffers from a new issue: by constructing the estimate this way, we do not learn from the (random) information collected about the Jacobian in prior iterations, through having access to random linear transformations thereof. In this paper we take the point of view that this is the reason why SGD suffers from large variance. Our approach towards alleviating this problem is to maintain and update an estimate of the Jacobian
Given , ideally we would like to satisfy
| 7 |
that is, we would like it to be equal to the true Jacobian. However, at the same time we do not wish to pay the price of computing it. Hence, assuming we have an estimate of the Jacobian available, we instead pick a random matrix from some distribution of matrices2 and consider the following sketched version of the linear system (7), with unknown :
| 8 |
This equation generalizes both (5) and (6). The left hand side contains the sketched system matrix and the unknown matrix , and the right hand side contains a quantity we can measure (through a random linear measurement of the Jacobian, which we assume is cheap). Of course, the true Jacobian solves (8). However, in general, and in particular when which is the regime we want to be in for practical reasons, the system (8) will have infinite solutions.
We pick a unique solution as the closest solution of (8) to our previous estimate , with respect to a weighted Frobenius norm with a positive definite weight matrix :
| 9 |
where
| 10 |
In doing so, we have built a learning mechanism whose goal is to maintain good estimates of the Jacobian throughout the run of method (2). These estimates can be used to efficiently estimate the gradient by performing a linear transformation similar to (5), but with replaced by the latest estimate of the Jacobian. In practice, it is important to design sketching matrices so that the Jacobian sketch can be calculated efficiently.
The “sketch-and-project” strategy (9) for updating our Jacobian estimate is analogous to the way quasi-Newton methods update the estimate of the Hessian (or inverse Hessian) [8, 9, 12]. From this perspective, our method can be viewed as a stochastic quasi-gradient method.3
Problem (9) admits the explicit closed-form solution (see Lemma 14):
| 11 |
where
| 12 |
is a projection matrix, and denotes the Moore–Penrose pseudoinverse.
The key insight of our work is to propose an efficient Jacobian learning mechanism based on ideas borrowed from recent results in randomized numerical linear algebra.
Having established our update of the Jacobian estimate, we now need to use this to form an estimate of the gradient. Unfortunately, using in place of in (5) leads to a biased gradient estimate (something we explore later in Sect. 2.5). To obtain an unbiased estimator of the gradient, we introduce a stochastic relaxation parameter and use
| 13 |
as an approximation of the gradient. Taking expectations in (13) over (for this we use the notation ), we get
| 14 |
Thus provided that
| 15 |
we have , and hence, is an unbiased estimate of the gradient. If (15) holds, we say that is a bias-correcting random variable and is an unbiased sketch. Our new JacSketch method is method (2) with computed via (13) and the Jacobian estimate updated via (11). This method is formalized in Sect. 2 as Algorithm 1.
This strategy indeed works, as we show in detail in this paper. Under appropriate conditions (on the stepsize , properties of f and randomness behind the sketch matrices and so on), the variance of diminishes to zero (e.g., see Lemma 6), which means that JacSketch is a variance-reduced method. We perform an analysis for smooth and strongly convex functions f, and obtain a linear convergence result (Theorem 1). We summarize our complexity results in detail in Sect. 1.5.
SAGA as a special case of JacSketch
Of particular importance in this paper are minibatch sketches, which are sketches of the form , where is a random subset of [n], and is a random column submatrix of the identity matrix with columns indexed by . For minibatch sketches, JacSketch corresponds to minibatch variants of SAGA. Indeed, in this case, and if , we have , where (see Lemma 7). Therefore,
| 16 |
In view of (11), and since (see Lemma 7), the Jacobian estimate gets updated as follows
| 17 |
Standard uniform SAGA is obtained by setting with probability 1/n for each , and letting . SAGA with arbitrary probabilities is obtained by instead choosing with probability for each , and letting . However, virtually all minibatching and importance sampling strategies can be treated as special cases of our general approach.
The theory we develop answers the open questions raised earlier. In particular, we answer the conjecture of Schmidt et al. [30] about the rate of SAGA with importance sampling in the affirmative. In particular, we establish the iteration complexity This complexity is obtained for different importance sampling distributions that have not been proposed in the current literature for SAGA. In order to achieve this, we develop a new analysis technique which makes use of a stochastic Lyapunov function (see Sect. 5). That is, our Lyapunov function has a random element which is independent of the randomness inherited from the iterates of the method. This is unlike any other Lyapunov function used in the analysis of stochastic methods we are aware of. Further, we prove that SAGA converges with any initial matrix in place of the matrix of gradients of functions at the starting point. We also show that our results give better rates for minibatch SAGA than are currently known, even for uniform minibatch strategies. We also allow for a family of completely new uniform minibatching strategies which were not considered in connection with SAGA before, and consider also SAGA with importance sampling for minibatches4 (based on a partition of [n]). Lastly, as a special case, our method recovers standard gradient descent, together with the sharp iteration complexity of .
Our general approach also enables a novel reduced memory variant of SAGA as a special case. Let , and choose Since , the formula for is the same as in the case of SAGA, and is given by (16). What is notably different about this sketch (compared to ) is that, since the update of the Jacobian estimate is given by
Thus, the same update is applied to all the columns of that belong to . Equivalently, this update can be written as
| 18 |
In particular, if only ever picks sets which correspond to a partition of [n], and we initialize so that all the columns belonging to the same partition are the same, then they will be the same within in each partition for all k. In such a case, we do not need to maintain all the identical copies. Instead, we can update and use a condensed/compressed version of the Jacobian, with one column per partition set only, to reduce the total memory usage. This method, with non-uniform probabilities, is analyzed in our framework in Sect. 5.6.
Summary of complexity results
All convergence results obtained in this paper are summarized in Table 1.
Table 1.
Special cases of our JacSketch method, and the associated iteration complexity
| ID | Method | Sketch | Iteration complexity () | Reference |
|---|---|---|---|---|
| 1 | JacSketch | Any unbiased | Theorem 1 | |
| Any | ||||
| 2 | JacSketch | Theorem 6 | ||
| (Any probabilities for —partition) | ||||
| 3 | Gradient descent | Theorems 1 and 6 | ||
| Sections 4.6 and 5.6 | ||||
| 4 | SAGA | Theorems 1 and 6 | ||
| (Uniform sampling) | Sections 4.6 and 5.6 | |||
| 5 | SAGA | Theorem 6 | ||
| (Importance sampling) | (129) | |||
| 6 | Minibatch SAGA | Theorem 1 | ||
| (—uniform sampling) | (100) | |||
| 7 | Minibatch SAGA | Theorem 1 | ||
| (—nice sampling) | (101) | |||
| 8 | Minibatch SAGA | Theorem 1 | ||
| (—nice sampling) | (102) | |||
| 9 | Minibatch SAGA | Theorem 1 | ||
| (—partition sampling) | (103) | |||
| 10 | Minibatch SAGA | Theorem 1 | ||
| (—partition sampling) | (104) | |||
| 11 | Minibatch SAGA | Theorem 6 | ||
| (Importance —partition sampling) | (131) |
All methods converge linearly. In the iteration complexity column we list the number of iterations sufficient to obtain an accurate solution, ignoring a factor
Our convergence results depend on several constants which we will now briefly introduce. The precise definitions can be found in the main text. For , define . We assume is —smooth.5 We let , , and . Note that , , and . For a sampling6, we let . That is, the support of a sampling are all the sets which are selected by this sampling with positive probability. Finally, , where is the cardinality of the set (which is assumed to be the same for all i). So, is the maximum over i of averages of values for those sets C which are picked by S with positive probability and which contain i. Clearly, (see Theorem 3).
General theorem. Theorem 1 is our most general result, allowing for any(unbiased) sketch (see (15)), and any weight matrix . The resulting iteration complexity given by this theorem is
and is also presented in the first row of Table 1. This result depends on two expected smoothness constants (measuring the expected smoothness of the stochastic gradient of our stochastic reformulation; see Assumption 3.1) and (measuring the expected smoothness of the Jacobian; see Assumption 3.2). The complexity also depends on the stochastic contraction number (see (48)) and the sketch residual (see (37) and (55)). We devote considerable effort to give simple formulas for these constants under some specialized settings (for special combinations of sketches and weight matrices ). In fact, the entire Sect. 4 is devoted to this. In particular, all rows of Table 1 where the last column mentions Theorem 1 arise as special cases of the general iteration complexity in the first row.
Gradient descent As a starting point, in row 3 we highlight that one can recover gradient descent as a special case of JacSketch with the choice (with probability 1) and . We get the rate , which is tight.
SAGA with uniform sampling Let us now focus on a slightly more interesting special case: row 4. We see that SAGA with uniform probabilities appears as a special case, and enjoys the rate , recovering an existing result.
SAGA with importance sampling Unfortunately, the generality of Theorem 1 comes at a cost: we are not able to obtain an importance sampling version of SAGA as a special case which would have a better iteration complexity than uniform SAGA. This will be remedied by our second complexity theorem, which we shall discuss later below.
Minibatch SAGA Rows 6–11 correspond to minibatch versions of SAGA. In particular, row 6 contains a general statement (albeit still a special case of the statement in row 1), covering virtually all minibatch strategies. Rows 7–11 specialize this result to two particular minibatch sketches (i.e., ), each with two choices of . The first sketch corresponds to samplings S which choose from among all subsets of [n] uniformly at random. This sampling is known in the literature as -nice sampling [22, 25]. The second sketch corresponds to S being a —partition sampling. This sampling picks uniformly at random subsets of [n] which form a partition of [n], and are all of cardinality . The complexities in rows 7 and 8 are comparable (each can be slightly better than the other, depending on the values of the smoothness constants ). On the other hand, in the case of —partition, the choice is better than : the complexity in row 10 is better than that in row 9 because
Optimal minibatch size for SAGA Our analysis for mini-batch SAGA also gives the first iteration complexities that interpolate between the complexity of SAGA and the complexity of gradient descent, as increases from 1 to n. Indeed, consider the complexity in rows 7 and 8 for and Our iteration complexity of mini-batch SAGA is the first result that is precise enough to inform an optimal mini-batch size (see Sect. 6.2). In contrast, the previous best complexity result for mini-batch SAGA [14] interpolates between and as increases from 1 to n, and thus is not precise enough as to inform the best minibatch size. We make a more detailed comparison between our results and [14] in Sect. 4.7.
Specialized theorem We now move to the second main complexity result of our paper: Theorem 6. The general complexity statement is listed in row 2 of Table 1:
| 19 |
where . This theorem is a refined result specialized to minibatch sketches () with —partition samplings S. This is a sampling which picks subsets of [n] of size forming a partition of [n], uniformly at random. This theorem also includes gradient descent as special case since when with probability 1 (hence, ) we have that and . Hence, (19) specializes to . But more importantly, our focus on —partition samplings enables us to provide stronger iteration complexity guarantees for non-uniform probabilities.
SAGA with importance sampling The first remarkable special case of (19) is summarized in row 5, and corresponds to SAGA with importance sampling. The complexity obtained, , answers a conjecture of Schmidt et al. [30] in the affirmative. In this case, the support of S are the singletons , , for all i, and . Optimizing the complexity bound over the probabilities , we obtain the importance sampling
- Minibatch SAGA with importance sampling In row 11 we state the complexity for a minibatch SAGA method with importance sampling. This is the first result for this method in the literature. Note that by comparing rows 4 and 10, we can conclude that the complexity of minibatch SAGA with importance sampling is better than for minibatch SAGA with uniform probabilities. Indeed, this is because7
20
Outline of the paper
We present an alternative narrative motivating the development of JacSketch in Sect. 2. This narrative is based on a novel technical tool which we call controlled stochastic optimization reformulations of problem (1). We then develop a general convergence theory of JacSketch in Sect. 3. This theory admits practically any sketches (including minibatch sketches mentioned in the introduction) and weight matrices . The main result in this section is Theorem 1. In Sect. 4 we specialize the general results to minibatch sketches. Here we also compute the various constants appearing in the general complexity result for JacSketch for specific classes of minibatch samplings. In Sect. 5 we develop an alternative theory for JacSketch, one based on a novel stochastic Lyapunov function. The main result in this section is Theorem 6. Computational experiments are included in Sect. 6.
Notation
We will introduce notation when and as needed. If the reader would like to recall any notation, for ease of reference we have a notation glossary in Sect. 1. As a general rule, all matrices are written in upper-case bold letters. By we refer to the natural logarithm of t.
Controlled stochastic reformulations
In this section we provide an alternative narrative behind the development of JacSketch; one through the lens of what we call controlled stochastic reformulations.
We design our family of methods so that two keys properties are satisfied, namely unbiasedness, and diminishing variance: as . These are both favoured statistical properties. Moreover, currently only methods that have diminishing variance exhibt fast linear convergence (exponential decay of the error) on strongly convex problems. On the other hand, unbiasedness is not necessary for a fast method in practice since several biased stochastic gradient methods such as SAG [29] perform well in practice. Still, the absence of bias greatly facilitates the analysis of JacSketch.
Stochastic reformulation using sketching
It will be useful to formalize the condition mentioned in Sect. 1.3 which leads to being an unbiased estimator of the gradient.
Assumption 2.1
(Unbiased sketch) Let be a weighting matrix and let be the distribution from which the sketch matrices are drawn. There exists a random variable such that
| 21 |
When this assumption is satisfied, we say that constitutes an “unbiased sketch”, and we call the bias-correcting random variable. When the triple is obvious from the context, sometimes we shall simply say that is an unbiased sketch.
The first key insight of this section is that besides producing unbiased estimators of the gradient, unbiased sketches produce unbiased estimators of the loss function as well. Indeed, by simply observing that , we get
In other words, we can rewrite the finite-sum optimization problem (1) as an equivalent stochastic optimization problem where the randomness comes from rather than from the representation-specific uniform distribution over the n loss functions:
| 22 |
The stochastic optimization problem (22) is a stochastic reformulation of the original problem (1). Further, the stochastic gradient of this reformulation is given by
| 23 |
With these simple observations, our options at designing stochastic gradient-type algorithms for (1) have suddenly broadened dramatically. Indeed, we can now solve the problem, at least in principle, by applying SGD to any stochastic reformulation:
| 24 |
But now we have a parameter to play with, namely, the distribution of . The choice of this parameter will influence both the iteration complexity of the resulting method as well as the cost of each iteration. We now give a few examples of possible choices of to illustrate this.
Example 1
(gradient descent) Let be equal to (or any other invertible matrix) with probability 1 and let be chosen arbitrarily. Then is bias-correcting since
With this setup, the SGD method (24) becomes gradient descent:
| 25 |
Example 2
(SGD with non-uniform sampling) Let (unit basis vector in ) with probability and let . Then is bias-correcting since
Let be picked at iteration k. Then the SGD method (24) becomes SGD with non-uniform sampling:
| 26 |
Note that with this setup, and when for all i, the stochastic reformulation is identical to the original finite-sum problem. This is the case because .
Example 3
(minibatch SGD) Let , where with probability . Let . Assume that the cardinality of the set does not depend on i (and is equal to ). Then is bias-correcting since
Note that . Assume that set is picked in iteration k. Then the SGD method (24) becomes minibatch SGD with non-uniform sampling:
| 27 |
Finally, note that gradient descent (25) is a special case of (27) if we set and for all other subsets C of [n]. Likewise, SGD with non-uniform probabilities (26) is a special case of (27) if we set for all i and for all other subsets C of [n].
The controlled stochastic reformulation
Though SGD applied to the stochastic reformulation can generate several known algorithms in special cases, there is no reason to believe that the gradient estimates will have diminishing variance (excluding the extreme case such as gradient descent). Here we handle this issue using control variates, a commonly used tool to reduce variance in Monte Carlo methods [13] and introduced in [35] for designing variance reduced stochastic gradient algorithm.
Given a random function , we introduce the controlled stochastic reformulation:
| 28 |
Since
| 29 |
is an unbiased estimator of the gradient , we can apply SGD to the controlled stochastic reformulation instead, which leads to the method
Reformulation (22) and method (24) is recovered as a special case with the choice . However, we now have the extra freedom to choose so as to control the variance of this stochastic gradient. In particular, if and are sufficiently correlated, then (29) will have a smaller variance than For this reason, we choose a linear model for that mimicks the stochastic function
Let be a matrix of parameters of the following linear model
| 30 |
Note that this linear model has the same structure as in (22) except that F(x) has been replaced by the linear function .8 If is an unbiased sketch (see (21)), we get , which plugged into (28) and (29) together with the definition (22) of gives the following unbiased estimate of f(x) and :
| 31 |
and
| 32 |
We collect this observation that (32) is unbiased in the following lemma for future reference.
Lemma 1
If is an unbiased sketch (see Definition 2.1), then
| 33 |
for every and . That is, (32) is an unbiased estimate of the gradient (1).
Now it remains to choose the matrix , which we do by minimizing the variance of our gradient estimate.
The Jacobian estimate, variance reduction and the sketch residual
Since (32) gives an unbiased estimator of for all , we can attempt to choose that minimizes its variance. Minimizing the variance of (32) in terms of will, for all sketching matrices of interest, lead to This follows because
| 34 |
where
| 35 |
and we have used the weighted Frobenius norm with weight matrix (see (10)).
For most distributions of interest, the matrix is positive definite.9 Letting , we can bound the largest eigenvalue of matrix via Jensen’s inequality as follows:
Combined with (34), we get the following bound on the variance of :
This suggests that the variance is low when is close to the true Jacobian , and when the second moment of is small. If is an unbiased sketch, then , and hence is the variance of . So, the lower the variance of as an estimator of , the lower the variance of as an estimator of .
Let us now return to the identity (34) and its role in choosing . Minimizing the variance in a single step is overly ambitious, since it requires setting , which is costly. So instead, we propose to minimize (34) iteratively. But first, to make (34) more manageable, we upper-bound it using a norm defined by the weight matrix as follows
| 36 |
where
| 37 |
is the largest eigenvalue of . We refer to the constant as the sketch residual, and it is a key constant affecting the convergence rate of JacSketch as captured by Theorem 1. The sketch residual represents how much information is “lost” on average due to sketching and due to how well approximates . We develop formulae and estimates of the sketch residual for several specific sketches of interest in Sect. 4.5.
Example 4
(Zero sketch residual) Consider the setup from Example 1 (gradient descent). That is, let be invertible with probability one and let be the bias-reducing variable. Then and hence , which means that .
Example 5
(Large sketch residual) Consider the setup from Example 2 (SGD with non-uniform probabilities). That is, let (unit basis vector in ) with probability and let . Then is a bias-reducing variable, and it is easy to show that . If we choose for all i, then .
We have switched from the norm to a user-controlled norm because minimizing under the norm will prove to be impractical because is a dense matrix for most all practical sketches. With this norm change we now have the option to set as a sparse matrix (e.g., the identity, or a diagonal matrix), as we explain in Remark 1 further down. However, the theory we develop allows for any symmetric positive definite matrix .
We can now minimize (36) iteratively by only using a single sketch of the true Jacobian at each iteration. Suppose we have a current estimate of the true Jacobian and a sketch of the true Jacobian . With this we can calculate an improved Jacobian estimate using a projection step
| 38 |
the solution of which, as it turns out, depends on through its sketch only. That is, we choose the next Jacobian estimate as close as possible to the true Jacobian while restricted to a matrix subspace that passes through . Thus in light of (36), the variance is decreasing. The explicit solution to (38) is given by
| 39 |
See Lemma B.1 in the appendix of an extended preprint version of this paper [10] or Theorem 4.1 in [12] for the proof. Note that, as alluded to before, depends on through its sketch only. Note that (39) updates the Jacobian estimate by re-using the sketch which we also use when calculating the stochastic gradient (32).
Note that (39) gives the same formula for as (11) which we obtained by solving (9); i.e., by projecting onto the solution set of (8). This is not a coincidence. In fact, the optimization problems (9) and (38) are mutually dual. This is also formally stated in Lemma B.1 in [10].
In the context of solving linear systems, this was observed in [11]. Therein, (9) is called the sketch-and-project method, whereas (38) is called the constrain-and-approximate problem. In this sense, the Jacobian sketching narrative we followed in Sect. 1.3 is dual to the Jacobian sketching narrative we are pursuing here.
Remark 1
(On the weight matrix and the cost) Loosely speaking, the denser the weighting matrix , the higher the computational cost for updating the Jacobian using (39). Indeed, the sparsity pattern of controls how many elements of the previous Jacobian estimate need to be updated. This can be seen by re-arranging (39) as
| 40 |
where Although we have no control over the sparsity of , the matrix can be sparse when both and are sparse. This will be key in keeping the update (40) at a cost propotional to , as oppossed to when is dense. This is why we consider a diagonal matrix in all of the special complexity results in Table 1. While it is clear that some non-diagonal sparse matrices could also be used, we leave such considerations to future work.
JacSketch algorithm
Combining formula (32) for the stochastic gradient of the controlled stochastic reformulation with formula (39) for the update of the Jacobian estimate, we arrive at our JacSketch algorithm (Algorithm 1).
Typically, one should not implement the algorithm as presented above. The most efficient implementation of JacSketch will depend heavily on the structure of , distribution and so on. For instance, in the special case of minibatch SAGA, as presented in Sect. 1.4, the update of the Jacobian (77) has a particularly simple form. That is, we maintain a single matrix and keep replacing its columns by the appropriate stochastic gradients, as computed. Moreover, in the case of linear predictors, as is well known, a much more memory-efficient implementation is possible. In particular, if for some loss function and a data vector and all i, then , which means that the gradient always points in the same direction. In such a situation, it is sufficient to keep track of the scalar loss derivatives only. Similar comments can be made about the step (16) for computing the gradient estimate .
A window into biased estimates and SAG
We will now take a small detour from the main flow of the paper to develop an alternative viewpoint of Algorithm 1 and also make a bridge to biased methods such as SAG [29].
The simple observation that
| 41 |
suggests that , where would give a good estimate of the gradient. To decrease the variance of , we can also use the same update of the Jacobian estimate (39) since
Thus, if converges to zero, so will Though unfortunately, the combination of the gradient estimate and a Jacobian estimate updated via (39) will almost always give a biased estimator. For example, if we define by setting with probability and let , then we recover the celebrated SAG method [29] and its biased estimator of the gradient.
The issue with using as an estimator of the gradient is that it decreases the variance too aggressively, neglecting the bias. However, this can be fixed by trading off variance for bias. One way to do this is to introduce the random variable as a stochastic relaxation parameter
| 42 |
If is bias correcting, we recover the unbiased SAGA estimator (13). By allowing to be closer to one, however, we will get more bias and lower variance. We leave this strategy of building biased estimators for future work. It is conceivable that SAG could be analyzed using reasonably small modifications of the tools developed in this paper. Doing this would be important due to at least four reasons: (i) SAG was the first variance-reduced method for problem (1), (ii) the existing analysis of SAG is not satisfying, (iii) one may be able to obtain a better rate, (iv) one may be able to develop and analyze novel variants of SAG.
Convergence analysis for general sketches
In this section we establish a convergence theorem (Theorem 1) which applies to general sketching matrices (that is, arbitrary distributions from which they are sampled). By design, we keep the setting in this section general, and only deal with specific instantiations and special cases in Sect. 4.
Two expected smoothness constants
We first formulate two expected smoothness assumptions tying together f, its Jacobian and the distribution from which we pick sketch matrices . These assumptions, and the associated expected smoothness constants, play a key role in the convergence result.
Our first assumption concerns the expected smoothness of the stochastic gradients of the stochastic reformulation (22).10
Assumption 3.1
(Expected smoothness of the stochastic gradient) There is a constant such that
| 43 |
It is easy to see from (23) and (32) that
| 44 |
for all and , and hence the expected smoothness assumption can equivalently be understood from the point of view of the controlled stochastic reformulation. The above assumption is not particularly restrictive. Indeed, in Theorem 2 we provide formulae for for smooth functions f and for a class of minibatch samplings . These formulae can be seen as proofs that Assumption 3.1 is satisfied for a large class of practically relevant sketches and functions f.
Our second expected smoothness assumption concerns the Jacobian of F.
Assumption 3.2
(Expected smoothness of the Jacobian) There is a constant such that
| 45 |
where the norm is the weighted Frobenius norm defined in (10).
It is easy to see (see Lemma 4, Eq. (60)) that for any matrix , we have where
| 46 |
Therefore, (45) can be equivalently written in the form
| 47 |
which suggests that the above condition indeed measures the variation/smoothness of the Jacobian under a specific weighted Frobenius norm.
Stochastic contraction number
By the stochastic contraction number associated with and we mean the constant defined by
| 48 |
In the next lemma we show that for all distributions for which the expectation (48) exists.
Lemma 2
For all distributions we have the bounds
Proof
It is not difficult to show that is the orthogonal projection matrix that projects onto . Consequently, and, after taking expectation, we get Finally, this implies that
| 49 |
In our convergence theorem we will assume that . This can be achieved by choosing a suitable distribution and it holds trivially for all the examples we develop. The condition essentially says that the distribution is sufficiently rich. This contraction number was first proposed in [11] in the context of randomized algorithms for solving linear systems. We refer the reader to that work for details on sufficient assumptions about guaranteeing . Below we give an example.
Example 6
Let , and let be given by setting with probability . Then
Since the vectors span and for all i, the matrix is positive definite and hence . In particular, when , then the expected projection matrix is equal to and . If instead of unit basis vectors we use vectors that span , using similar arguments we can also conclude that .
Convergence theorem
Our main convergence result, which we shall present shortly, holds for -strongly convex functions. However, it turns out our results hold for the somewhat larger family of functions that are quasi-strongly convex.
Assumption 3.3
(Quasi-strong convexity) Function f for some satisfies
| 50 |
where
We are now ready to present the main result of this section.
Theorem 1
(Convergence of JacSketch for General Sketches) Let . Let f satisfy Assumption 3.3. Let Assumption 2.1 be satisfied (i.e., is an unbiased sketch and is the associated bias-correcting random variable). Let the expected smoothness assumptions be satisfied: Assumptions 3.1 and 3.2. Assume that . Let the sketch residual be defined as in (37), i.e.,
| 51 |
Choose any and . Let be the random iterates produced by JacSketch (Algorithm 1). Consider the Lyapunov function
| 52 |
If the stepsize satisfies
| 53 |
then
| 54 |
If we choose to be equal to the upper bound in (53), then
| 55 |
Recall that the iteration complexity expression from (55) is listed in row 1 of Table 1.
The Lyapunov function we use is simply the sum of the squared distance between to the optimal and the distance of our Jacobian estimate to the optimal Jacobian Hence, the theorem says that both the iterates and the Jacobian estimates converge.
Projection lemmas and the stochastic contraction number
In this section we collect some basic results on projections. Recall from (12) that and from (46) that .
Lemma 3
| 56 |
Furthermore,
| 57 |
Proof
Using the pseudoinverse property we have that
| 58 |
and as a consequence (56) holds. Moreover,
| 59 |
Lemma 4
For any matrices we have the identities
and
| 60 |
Furthermore,
| 61 |
Proof
First, note that
By taking expectations in , we get
where in the last step we used the estimate
Key lemmas
We first establish two lemmas. The first lemma provides an upper bound on the quality of new Jacobian estimate in terms of the quality of the current estimate and function suboptimality. If the second term on the right hand side was not there, the lemma would be postulating a contraction on the quality of the Jacobian estimate.
Lemma 5
Let Assumption 3.2 be satisfied. Then iterates of Algorithm 1 satisfy
| 62 |
where is defined in (48).
Proof
Subtracting from both sides of (39) gives
| 63 |
Taking norms on both sides, then expectation with respect to and then using Lemma 4, we get
We now bound the second moment of . The lemma implies that as approaches and approaches , the variance of approaches zero. This is a key property of JacSketch which elevates it into the ranks of variance-reduced methods.
Lemma 6
Let be an unbiased sketch. Let Assumption 3.1 be satisfied (i.e., assume that inequality (43) holds for some ). Then the second moment of the estimated gradient is bounded by
| 64 |
where is defined in (51).
Proof
Adding and subtracting in (13) gives
Taking norms on both sides and using the bound gives
| 65 |
In view of Assumption 3.1 (combine (43) and (44)), we have
| 66 |
where the expectation is taken with respect to . Let us now bound . Using the fact that , we can write
If we now let and , then we can continue:
| 67 |
where in the last step we have used the assumption that is bias-correcting:
| 68 |
It now only remains to substitute (66) and (67) into (65) to arrive at (64).
Proof of Theorem 1
With the help of the above lemmas, we now proceed to the proof of the theorem. In view of (50), we have
| 69 |
By using the relationship , the fact that is an unbiased estimate of the gradient , and using one-point strong convexity (69), we get
| 70 |
Next, applying Lemma 6 leads to the estimate
| 71 |
Let . Adding to both sides of the above inequality and substituting in the definition of from (52), it follows that
| 72 |
We now choose so that and , which can be written as
| 73 |
If satisfies the above two inequalities, then (72) takes on the simplified form By taking expectation again and using the tower rule, we get . Note that as long as , we have . Recalling that , and choosing to be the minimum of the two upper bounds (73) gives the upper bound on (53), which in turn leads to (55).
Minibatch sketches
In this section we focus on special cases of Algorithm 1 where one computes for , where is a random subset (mini-batch) of [n] chosen in each iteration according to some fixed probability law. As we have seen in the introduction, this is achieved by choosing .
We say that is a minibatch sketch if for some random set (sampling) S, where is a column submatrix of the identity matrix associated with columns indexed by the set S. That is, the distribution from which the sketches are sampled is defined by
where and for all C.
Samplings
We now formalize the notion of a random set, which we will refer to by the name sampling. A sampling is a random set-valued mapping with values being the subsets of [n]. A sampling S is uniquely characterized by the probabilities associated with every subset C of [n].
Definition 1
(Types of samplings) We say that sampling S is non-vacuous if (i.e., ). Let . We say that S is proper if for all i. We say that S is uniform if for all i, j. We say that S is —uniform if it is uniform and with probability 1. In particular, the unique sampling which assigns equal probabilities to all subsets of [n] of cardinality and zero probabilities to all other subsets is called the —nice sampling.
We refer the reader to [22, 25] for a background reading on samplings and their properties.
Definition 2
(Support) The support of a sampling S is the set of subsets of [n] which are chosen by S with positive probability: . We say that S has uniform support if
for all . In such a case we say that the support is —uniform.
To illustrate the above concepts, we now list a few examples with .
Example 7
The sampling defined by setting is non-vacuous, proper, 2—uniform ( for all i and with probability 1), and has 1—uniform support. If we change the probabilities to and , the sampling is no longer uniform (since ), but it still has 1—uniform support, is proper and non-vacuous. Hence, a sampling with uniform support need not be uniform. On the other hand, a uniform sampling need not have uniform support. As an example, consider sampling S defined via , . It is uniform (since for all i). However, while element 1 appears in a single set of its support, elements 2, 3 and 4 each appear in two sets. So, this sampling does not have uniform support.
Example 8
A uniform sampling need not be —uniform for any . For example, the sampling defined by setting , and is uniform (since for all i), but as it assigns positive probabilities to sets of at least two different cardinalities, it is not —uniform for any .
Example 9
Further, the sampling defined by setting , , , , , is non-vacuous, 2—uniform ( for all i and with probability 1), and has 3—uniform support. The sampling defined by setting , , is non-vacuous, proper, 2—uniform ( for all i and with probability 1) and has 2—uniform support.
Note that a sampling with uniform support is necessarily proper as long as . However, it need not be non-vacuous. For instance, the sampling S defined by setting has 0—uniform support and is vacuous. From now on, we only consider samplings with the following properties.
Assumption 4.1
S is non-vacuous and has —uniform support with .
Note that if S is a non-vacuous sampling with 1—uniform support, then its support is necessary a partition of [n]. We shall pay specific attention to such samplings in Sect. 5 as for them we can develop a stronger analysis than that provided by Theorem 1.
Minibatch sketches and projections
In the next result we describe some basic properties of the projection matrix associated with a minibatch sketch .
Lemma 7
Let . Let S be any sampling, be the associated minibatch sketch, and let be the probability matrix11 associated with sampling S: . Then
Proof
-
(i)
This follows by noting that is the diagonal matrix with diagonal entries corresponding to for , which in turn can be used to show that .
-
(ii)
This follows from (i) by noting that is the vector of all ones in .
-
(iii)
Using (ii), we have . By linearity of expectation, , where if and otherwise.
-
(iv)
This follows from (i) by taking expectations of the diagonal elements of .
-
(v)
Follows from (iv).
-
(vi)Indeed,
where the last equation follows from the assumption that the support of S is —uniform.75
The following simple observation will be useful in the computation of the constant . The proof is straightforward and involves a double counting argument.
Lemma 8
Let S be a sampling satisfying Assumption 4.1. Moreover, assume that S is a —uniform sampling. Then . Consequently, , where is the stochastic contraction number associated with the minibatch sketch .
JacSketch for minibatch sampling = minibatch SAGA
As we have mentioned in Sect. 1.4 already, JacSketch admits a particularly simple form for minibatch sketches, and corresponds to known and new variants of SAGA. Assume that S satisfies Assumption 4.1 and let . In view of Lemma 7(vi), this means that the random variable is bias-correcting, and due to Lemma 7(ii), we have . Therefore,
![]() |
76 |
By Lemma 7(i), . In view of (11), the Jacobian estimate gets updated as follows
| 77 |
The resulting minibatch SAGA method is formalized as Algorithm 2.
Below we specialize the formula for to a few interesting special cases.
Example 10
(Standard SAGA) Standard uniform SAGA is obtained by setting with probability 1/n for each . Since the support of this sampling is 1—uniform, we set . This leads to the gradient estimate
| 78 |
Example 11
(Non-uniform SAGA) However, we can use non-uniform probabilities instead. Let with probability for each . Since the support of this sampling is 1—uniform, we have . So, the gradient estimate has the form
| 79 |
Example 12
(Uniform minibatch SAGA, version 1) Let be nonempty subsets of forming a partition [n]. Let with probability . The support of this sampling is 1—uniform, and hence we can choose . This leads to the gradient estimate
Example 13
(Uniform minibatch SAGA, version 2) Let be chosen uniformly at random from all subsets of [n] of cardinality . That is, is the -nice sampling, and the probabilities are equal to . This sampling has —uniform support with . Thus, , and we have
| 80 |
Example 14
(Gradient descent) Consider the same situation as in Example 13, but with . That is, we choose with probability 1, and . Then
Expected smoothness constants and
Here we compute the expected smoothness constants and in the case of being a minibatch sketch , and assuming that f is convex and smooth. We first formalize the notion of smoothness we will use.
Assumption 4.2
For define
| 81 |
For each and all , the function is —smooth and convex. That is, there exists such that the following inequality holds
| 82 |
Let for .
The above assumption is somewhat non-standard. Note that, however, if we instead assume that each is convex and -smooth, then the above assumption holds for . In some cases, however, we may have better estimates of the constants than those provided by the averages of the values. The value of these constants will have a direct influence on and , which is why we work with this more refined assumption instead.
Lemma 9
(Smoothness of the Jacobian) Assume that is convex and —smooth for all . Define and Then
| 83 |
Proof
Indeed,
where in the last step we used the fact that
Theorem 2
(Expected smoothness) Let be a minibatch sketch where S is a sampling satisfying Assumption 4.1 (in particular, the support of S is —uniform). Consider the bias-correcting random variable given in (74). Further, let f satisfy Assumption 4.2. Then the expected smoothness assumptions (Assumptions 3.1 and 3.2) are satisfied with constants and given by12
| 84 |
where . If moreover, S is —nice sampling, then13
| 85 |
Proof
Let and . Then
Using (82) and (81), we can continue:
| 86 |
where in this last inequality we have used convexity of for . Since
the formula for now follows by comparing (86) to (43). In order to establish the formula for , we estimate
| 87 |
From Lemma 7(iv) we have , and hence . Comparing to the definition of in (45) to (87), we conclude that
The specialized formulas (85) for —nice sampling follow as special cases of the general formulas (84) since and for all i.
In the next result we establish some inequalities relating the quantities L, , and In particular, the results says that for a certain family of samplings S (the same for which we have defined the quantity in (85)), the expected smoothed constant is lower-bounded by the average of over , and upper-bounded by .
Theorem 3
Let S be a —uniform sampling () with —uniform support (). Let . Then
| 88 |
Moreover,
| 89 |
The last inequality holds without the need to assume —uniformity.
Proof
Using the fact that S has —uniform support, and utilizing a double-counting argument, we observe that . Multiplying both sides by , and since for all , we get To obtain (88), it now only remains to use the identity
| 90 |
which was shown in Lemma 8. The first inequality in (89) follows from (88) using standard arguments (identical to those that lead to the inequality ).
Let us now establish the second inequality in (89). Define . Again using a double-counting argument we observe that Multiplying both sides of this equality by and using identity (90), we get We will now establish the last inequality by proving that for any i:
Note that we did not need to assume —uniformity to prove that .
Estimating the sketch residual
In this section we compute the sketch residual for several classes of samplings S. Let . We will assume throughout this section that S is non-vacuous, has —uniform support (with ), and is —uniform.
Further, we assume that , and that the bias-correcting random variable is chosen as (see (75) and Lemma 8). In view of the above, since , the sketch residual is given by
| 91 |
where the last equality follows by permuting the multiplication of matrices within the
In the following text we calculate upper bounds for for —partition and —nice samplings. Note that Theorem 1 still holds if we use an upper bound of in place of .
Theorem 4
If S is the —partition sampling, then
| 92 |
Proof
Using Lemma 8, and since , we get . Consequently,
| 93 |
where and we used that is negative semidefinite. When , the above bound is tight. By Gershgorin’s theorem, every eigenvalue of the matrix is bounded by at least one of the inequalities for . Consequently, from (93) we have that
Next we give an useful upper bound on for a large family of uniform samplings (for proof, see “Appendix C”).
Theorem 5
Let be a collection of subsets of [n] with the property that the number of sets containing distinct elements is the same for all i, j. In particular, define
| 94 |
Now define a sampling S by setting with probability . Moreover, assume that the support of S is —uniform. Consider the minibatch sketch .
-
(i)If , then
95 -
(ii)If , then
96
Note that as long as , the —nice sampling S satisfies the assumptions of the above theorem. Indeed, is the support of S consisting of all subsets of [n] of size , , , and . As a result, bound (95) simplifies to
| 97 |
and (96) simplifies to
| 98 |
Calculating the iteration complexity for special cases
In this section we consider minibatch SAGA (Algorithm 2) and calculate its iteration complexity in special cases using Theorem 1 by pulling together the formulas for and established in previous sections. In particular, assume S is —uniform and has —uniform support with . In this case, formula (85) for from Lemma 2 applies and we have and .
Moreover, by Lemma 8, . By Theorem 1, if we use the stepsize
| 99 |
then the iteration complexity is given by
| 100 |
Complexity (100) is listed in line 9 of Table 1. The complexities in lines 3, 5 and 10–13 arise as special cases of (100) for specific choices of S:
Comparison with previous mini-batch SAGA convergence results
Recently in [14], a method that includes a mini-batch variant of SAGA was proposed. This work is the most closely related to our minibatch SAGA. The methods described in [14] can be cast in our framework. In the language of our paper, in [14] the authors update the Jacobian estimate according to (77), where is sampled according to a uniform probability with for all What [14] do differently is that instead of introducing the bias-corecting random variable to maintain an unbiased gradient estimate, the gradient estimate is updated using the standard SAGA update (78) and this sampling process is done independently of how is sampled for the Jacobian update. Thus at every iteration a gradient is sampled to compute (78), but is then discarded and not used to update the Jacobian update so as to maintain the independence between and By introducing the bias-correcting random variable in our method we avoid the data-hungry strategy used in [14].
The analysis provided in [14] shows that, by choosing the stepsize appropriately, the expectation of a Lyapunov function similar to (52) is less than after
| 105 |
iterations, where . When this gives an iteration complexity of , which is essentially the same complexity as that of the standard SAGA method. The main issue with this complexity is that it decreases only very modestly as increases. In particular, at the extreme end when , since , we can approximate , and the resulting complexity (105) becomes
Yet we know that corresponds to gradient descent, and thus the iteration complexity should be , which is what we recover in the analysis of all our mini-batch variants. In Fig. 1a–c in the experiments in Sect. 6 we illustrate how (105) decreases very modestly as increases.
Fig. 1.
The iteration complexity of minibatch SAGA (80) vs the mini-batch size for two ridge regression problems (132). We used
A refined analysis with a stochastic Lyapunov function
In this section we perform a refined analysis of JacSketch applied with a minibatch sketch where the sampling S is over partitions of [n] into sets of size .14
Assumption 5.1
Let be a partition of [n] into sets of size . Assume that the sampling S picks sets from this partition at random; that is, for . A sampling with these properties is called a —partition sampling.
In the terminology introduced in Sect. 4.1, a —partition sampling is non-vacuous, proper and —uniform. Its support is a partition of [n], and is 1—uniform. It satisfies Assumption 4.1. Restricting our attention to —partition samplings will allow us to perform a more in-depth analysis of JacSketch using a stochastic Lyapunov function.
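To make the sampling concrete, here is a minimal Julia sketch of a —partition sampling over consecutive blocks. The set probabilities p are passed in explicitly so that the nonuniform variants studied below are also covered; the helper names and the StatsBase dependency are our choices, not the paper’s code:

```julia
using StatsBase  # external package, provides sample(..., Weights(...))

# Build a fixed partition {C_1, ..., C_{n/τ}} of [n] into consecutive
# blocks of size τ; n must be a multiple of τ.
make_partition(n::Int, τ::Int) = [collect(j:j+τ-1) for j in 1:τ:n]

# τ-partition sampling: draw the set C_j with probability p[j].
sample_partition(partition, p) = sample(partition, Weights(p))

partition = make_partition(12, 3)                   # four blocks of size 3
p = fill(1 / length(partition), length(partition))  # uniform over the blocks
C = sample_partition(partition, p)                  # e.g. [4, 5, 6]
```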
One of the key reasons why we restrict our attention to —partition samplings is the fact that
| 106 |
for . Recall from Lemma 7 that if , then . Consequently, for we have
| 107 |
This orthogonality property will be fundamental for controlling the convergence of the gradient estimate in Lemma 10.
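The orthogonality in question is elementary: distinct sets of a partition are disjoint, so the corresponding column blocks cannot overlap. With \(\mathbf{I}_C\) denoting the column submatrix of the identity indexed by C, the property behind (107) can be written as follows (our rendering):

```latex
\big\langle \mathbf{M} \, \mathbf{I}_{C} \mathbf{I}_{C}^{\top} ,\;
            \mathbf{N} \, \mathbf{I}_{C'} \mathbf{I}_{C'}^{\top} \big\rangle
  = \operatorname{Tr}\!\big( \mathbf{I}_{C'}^{\top} \mathbf{I}_{C}
      \mathbf{I}_{C}^{\top} \mathbf{M}^{\top} \mathbf{N} \, \mathbf{I}_{C'} \big) = 0
  \qquad \text{whenever } C \cap C' = \emptyset,
```

since \(\mathbf{I}_{C'}^{\top} \mathbf{I}_{C} = 0\) for disjoint C and C'.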
Convergence theorem
Recall from (32) that the stochastic gradient of the controlled stochastic reformulation (28) of the original finite-sum problem (1) is given by
| 108 |
provided that we use the minibatch sketch and bias-correcting variable given by Lemma 7(vi). This object will appear in our Lyapunov function, evaluated at and . We are now ready to present the main result of this section.
Theorem 6
(Convergence for minibatch sketches with -partition samplings) Let
(i) be a minibatch sketch (i.e., ),15 where S is a —partition sampling with support .
(ii) be —smooth and —strongly convex (for ) for all .
(iii) , .
(iv) be the iterates produced by JacSketch.
Consider the stochastic Lyapunov function
| 109 |
where is a stochastic Lyapunov constant. If we use a stepsize that satisfies
| 110 |
then
| 111 |
This means that if we choose the stepsize equal to the upper bound (110), then
| 112 |
Gradient estimate contraction
Here we will show that our gradient estimate contracts in the following sense.
Lemma 10
Let S be the —partition sampling, and be any non-negative random variable. Then
| 113 |
Proof
For simplicity, in this proof we let and . Rearranging (108), we have
| 114 |
Taking norm squared on both sides gives
| 115 |
First, it follows from (107) that expression III is zero. We now multiply expressions I and II by and bound certain conditional expectations of these terms. Since S and are independent samplings, we have
| 116 |
Taking conditional expectation over expression II yields
| 117 |
where in the last equation we used the identity
| 118 |
which in turn is a specialization of (44) to the minibatch sketch and the specific choice of the bias-correcting variable . It remains to take expectation of (116) and (117), apply the tower property, and combine this with (115).
Bounding the second moment of
In the next lemma we bound the second moment of our gradient estimate .
Lemma 11
The second moment of the gradient estimate is bounded by
| 119 |
Proof
Adding and subtracting from (108) gives
Taking norm squared on both sides, and using the bound gives
| 120 |
Taking expectation of the A term, we get
where we used the inequality . The result follows by combining the above with (120).
Smoothness and strong convexity of
Recalling the setting of Theorem 6, we assume that each is —strongly convex and —smooth:
for all . It is known (see Section 2.1 in [19]) that the above conditions imply the following inequality:
| 121 |
for all . A consequence of these assumptions that will be useful to us is that the function is —strongly convex and —smooth. This can in turn be used to establish Lemma 12 below, which is needed in the proof of Theorem 6.
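For the reader’s convenience, we record a standard inequality of this type from [19, Sect. 2.1], stated for an \(L_C\)-smooth convex function \(f_C\); we believe (121) takes this form up to notation:

```latex
f_C(x) \;\ge\; f_C(y) \;+\; \langle \nabla f_C(y),\, x - y \rangle
       \;+\; \frac{1}{2 L_C}\,\big\| \nabla f_C(x) - \nabla f_C(y) \big\|_2^2,
\qquad \forall\, x, y \in \mathbb{R}^d.
```

Rearranged, it bounds \(\|\nabla f_C(x) - \nabla f_C(y)\|_2^2\) by \(2 L_C\) times the Bregman divergence of \(f_C\), which is exactly the shape of the bound (122) claimed in Lemma 12.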
Lemma 12
Under the assumptions of Theorem 6 (in particular, assumptions on f and S), we have
| 122 |
for all and .
Proof
Applying (121) to the function gives
Taking expectations over S on both sides, noting that , and recalling that is an unbiased estimator of , we get the result.
Proof of Theorem 6
Let denote expectation conditional on and . We can write
| 123 |
Next, after taking expectation in (123), applying the tower property, and subsequently adding the term to both sides of the resulting inequality, we get
| 124 |
Next, we determine a bound on so that III . Choosing
| 125 |
guarantees that III , and thus the last term in (124) can be safely dropped. Next, to build a recurrence and conclude the convergence proof, we bound the stepsize so that II I; that is,
| 126 |
Consequently,
Since , in view of (125) and (126) the combined bound on is
Hence, we have established the recursion (111).
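The step from the recursion (111) to the complexity statement (112) is the standard one; we spell it out here. If a nonnegative sequence contracts in expectation by a factor \(1 - \theta\) per iteration, then:

```latex
\mathbb{E}\big[\Psi^{k}\big] \;\le\; (1 - \theta)^{k}\, \Psi^{0}
\;\le\; e^{-\theta k}\, \Psi^{0}
\qquad \Longrightarrow \qquad
k \;\ge\; \frac{1}{\theta} \log\!\Big(\frac{1}{\epsilon}\Big)
\;\;\Rightarrow\;\;
\mathbb{E}\big[\Psi^{k}\big] \;\le\; \epsilon\, \Psi^{0}.
```

Applying this with the contraction factor delivered by the stepsize bound (110) yields the iteration complexity quoted in (127) below.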
Calculating the iteration complexity in special cases
In this section we consider the special case of JacSketch analyzed via Theorem 6—minibatch SAGA with —partition sampling—and look at further special cases by varying the minibatch size and probabilities. Our aim is to justify the complexities appearing in Table 1. In view of Theorem 6 the iteration complexity is given by
| 127 |
where . Complexity (127) is listed in line 2 of Table 1. The complexities in lines 4, 6, 8 and 14 arise as special cases of (127) for specific choices of and probabilities .
- In line 4 we have gradient descent. This is obtained by choosing (whence , and ), which is why (127) simplifies to
- In line 6 we consider uniform SAGA. That is, we choose and for all i. We have and . Therefore, (127) simplifies to . This is essentially the same16 complexity result given in [6].
- In line 8 we consider SAGA with importance sampling. This is the same setup as above, except we choose
| 128 |
which is the optimal choice, minimizing the complexity bound in . With these optimal probabilities the stepsize bound becomes , and by choosing the maximum allowed stepsize the resulting iteration complexity is
| 129 |
Now consider instead the probabilities suggested in [30]. Using our bound, these lead to the complexity
| 130 |
Comparing this with (129), we see that this non-uniform sampling offers a significant speed up over uniform sampling if . However, our complexity (129) is always better than both the uniform sampling complexity and (130).
- Finally, in line 14 of Table 1 we optimize over the probabilities directly; that is, we extend the importance sampling described above to any . Minimizing the complexity bound over the probabilities, and noting that , this leads to the rate
This iteration complexity also applies to the reduced memory variant of SAGA (18). This is because Theorem 6 also holds for sketches where S is a —partition sampling. To see this, note that our analysis in this section relies on the orthogonality property (107), which also holds for , since (for ) we have
| 131 |
Lemmas 10, 11 and 12 depend on the sketch through only, which in turn depends on the sketch through , and it is easy to see that if either or , we have
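To make the method of this section concrete, the following is a minimal Julia sketch of non-uniform SAGA with (i.e., single-element sets): the Jacobian column of the sampled index is overwritten with the fresh gradient, and the factor 1/(n p_i) makes the gradient estimate unbiased. All names are ours, the oracle grad(i, x) is assumed to return the i-th gradient, and the optimized probabilities of (128) can be passed in as p:

```julia
using LinearAlgebra
using StatsBase  # external package, provides sample(..., Weights(...))

# Non-uniform SAGA (a sketch): J is a d×n estimate of the Jacobian
# [∇f_1(x) ... ∇f_n(x)], maintained one column at a time.
function saga_nonuniform(grad, x0, n, p, γ, iters)
    x = copy(x0)
    J = zeros(length(x0), n)           # Jacobian estimate, here initialized at zero
    Jbar = vec(sum(J, dims = 2)) ./ n  # running average (1/n) J e
    for k in 1:iters
        i = sample(1:n, Weights(p))    # draw index i with probability p[i]
        gi = grad(i, x)                # fresh gradient ∇f_i(x)
        g = Jbar .+ (gi .- J[:, i]) ./ (n * p[i])  # unbiased: E[g] = ∇f(x)
        Jbar .+= (gi .- J[:, i]) ./ n  # keep the average up to date in O(d)
        J[:, i] = gi                   # Jacobian update: overwrite column i
        x .-= γ .* g                   # gradient step
    end
    return x
end
```

Note that the estimate g is formed before column i is overwritten; swapping the two steps would silently destroy unbiasedness.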
Experiments
We perform several experiments to validate the theory, and also to test the practical relevance of non-uniform SAGA (79) with the optimized probability distribution (128). All of our code for these experiments was written in Julia and can be found on GitHub at https://github.com/gowerrobert/StochOpt.jl.
In our experiments we test either ridge regression
| 132 |
or logistic regression
| 133 |
where is the given data and is the regularization parameter.
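For reference, here is a sketch of the ridge regression objective (132) in finite-sum form, with each summand a regularized squared residual; under this form the per-example smoothness constant is the squared norm of the corresponding data column plus the regularization parameter (our reading; the exact scaling used in (132) may differ):

```julia
using LinearAlgebra

# Ridge regression in finite-sum form, with A a d×n matrix whose columns
# a_i are the data points and b the response vector:
#   f_i(x) = ½ (⟨a_i, x⟩ - b_i)² + (λ/2) ‖x‖².
fi(A, b, λ, i, x)    = 0.5 * (dot(A[:, i], x) - b[i])^2 + 0.5 * λ * dot(x, x)
gradi(A, b, λ, i, x) = (dot(A[:, i], x) - b[i]) .* A[:, i] .+ λ .* x

# Per-example smoothness constants L_i = ‖a_i‖² + λ.
Li(A, λ) = [norm(A[:, i])^2 + λ for i in 1:size(A, 2)]
```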
New non-uniform sampling using optimal probabilities
First we compare non-uniform SAGA using the new optimized importance probabilities (128) against the probabilities suggested in [30]. When is significantly smaller than for all i, the two samplings are very similar. But when is relatively large, the optimized probabilities (128) can be much closer to a uniform distribution than . We illustrate this by solving a ridge regression problem (132), using generated data such that
| 134 |
where the elements of and x are sampled from the standard Gaussian distribution , and the elements of are sampled from . It is not hard to see that the smoothness constants are given by for . We scale the columns of so that and for , and we set the regularization parameter . Consequently, , for , and . In this case the iteration complexity (129) of non-uniform SAGA with the optimal probabilities (128) is given by
| 135 |
The complexity (130) which results from using the probabilities is given by
| 136 |
Now we consider the regime where , in which case , and consequently (135) , while in contrast (136) .
We illustrate this in Fig. 1a–c, where we set , and , respectively, and plot the complexities given in (135) and (136). To accompany this plot, in Fig. 2a–c we also plot an execution of SAGA-uni (SAGA with uniform probabilities), SAGA-Li (SAGA with ) and SAGA-opt (SAGA with optimized probabilities). In all figures we see that SAGA-opt is the fastest method. We can also see that SAGA-Li stalls in Fig. 2b and c when n is larger, performing even worse than SAGA-uni.
Fig. 2.
Comparing the performance of SAGA with importance sampling based on the optimized probabilities (128) (SAGA-opt), (SAGA-Li) and (SAGA-uni) for an artificially constructed ridge regression problem as n grows. Markers represent monitored points and not the iterations of the algorithms
Optimal mini-batch size
Our analysis of mini-batch SAGA is precise enough to inform the choice of an optimal mini-batch size. For instance, consider the —nice sampling and the resulting iteration complexity (102). Theorem 3 states that for any , the terms within the maximum in (102) are bounded by
| 137 |
| 138 |
Moreover, the upper and lower bounds are realized for and , respectively. Consequently, for small, we have . On the other hand, for large we have . Furthermore, decreases super-linearly in , while tends to decrease more modestly. Consequently, the point where overtakes is often the best for the overall complexity of the method. To better appreciate these observations, we plot the evolution of the iteration complexity (102), the total complexity, and the iteration complexity as predicted by Hofmann et al. [14] (see (105)) as increases in Fig. 3a–c for three different linear least squares problems. Since each step of mini-batch SAGA computes stochastic gradients, the total complexity is times the iteration complexity. In each figure we can see that our iteration complexity initially decreases super-linearly; then at some point the complexity is dominated by and the iteration complexity decreases sublinearly. Up to this point we observe an improvement in overall total complexity. This is in contrast with the iteration complexity given by Hofmann et al., which shows practically no improvement even in the iteration complexity as increases.
Fig. 3.
Comparison of the methods on logistic regression problems (133) with data taken from LIBSVM [4]
Though these experiments indicate only modest improvements in total complexity, and suggest that or is optimal, we must bear in mind that this corresponds to 10% and 20% of the data for these small-dimensional problems. We conjecture that for larger problems, this improvement in total complexity will also be larger.
To use these insights in practice, we need to be able to efficiently determine the which corresponds to the point at which the convergence regime switches from being dominated by to being dominated by . This amounts to choosing so that . Estimating and is often possible, but the cost of computing has a combinatorial dependency on n and . Thus, to have a practical way of choosing , we first need to bound . This can be done for losses with linear classifiers using concentration bounds; we leave this for future work.
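In code, the selection rule described above reduces to a one-line scan. The two terms inside the maximum of (102) are passed in as functions, since their exact expressions depend on the estimated constants of the problem at hand (this is our illustrative scaffolding, not the paper’s code):

```julia
# Pick the mini-batch size minimizing total complexity τ · K(τ), where
# K(τ) = max(term1(τ), term2(τ)) is the iteration complexity bound (102).
# Each step computes τ stochastic gradients, hence the factor τ.
function best_minibatch(term1, term2, n)
    total(τ) = τ * max(term1(τ), term2(τ))
    return argmin(total, 1:n)  # argmin of a function over a collection (Julia ≥ 1.7)
end
```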
Comparative experiments
We now compare the performance of SAGA-opt to several known methods: SVRG [15], grad (gradient descent with fixed stepsize) and AM-prev (an improved version of SVRG that uses second order information) [28]. For the stepsize of SAGA-opt and SAG-opt, we found the stepsize given by theory to be a bit too conservative; instead, we did away with the 4 and used . For the remaining methods we used a grid search over for .
To illustrate how biased gradient estimates can perform well in practice, we also test SAG-opt: a method that uses the same Jacobian updates as SAGA-opt, but instead uses the biased gradient estimate . See Sect. 2.5 for more details on biased gradient estimates.
In Fig. 3a–c we compare the methods on three logistic regression problems (133) based on three different data sets taken from LIBSVM [4]. In all these problems, the two methods with optimized non-uniform sampling, SAG-opt and SAGA-opt, were the fastest in terms of both epochs and time. The next best method was AM-prev, followed by SVRG and grad. It is interesting to see how well SAG-opt performs in practice despite having biased gradient estimates. This is why we believe it is important to advance the analysis of biased gradient estimates in future work.
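The only difference between SAG-opt and SAGA-opt lies in how the gradient estimate is assembled from the same Jacobian estimate J; a sketch of the two estimates, in the notation of the SAGA sketch above (our rendering of the SAG estimate; see Sect. 2.5 for the precise form):

```julia
# Given the Jacobian estimate J (d×n), a sampled index i drawn with
# probability p[i], and the fresh gradient gi = ∇f_i(x):

# SAGA: unbiased estimate — average of stored gradients plus a
# probability-weighted correction for the freshly sampled column.
g_saga(J, gi, i, p, n) = vec(sum(J, dims = 2)) ./ n .+ (gi .- J[:, i]) ./ (n * p[i])

# SAG: biased estimate — the plain average after column i is refreshed.
g_sag(J, gi, i, n) = (vec(sum(J, dims = 2)) .- J[:, i] .+ gi) ./ n
```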
Conclusion
We now provide a brief summary of some of the key contributions of this paper and a few selected pointers to possible future research directions.
We developed and analyzed JacSketch—a novel family of variance reduced methods based on Jacobian sketching—and provided a link between variance reduction for empirical risk minimization and recent results from the field of randomized numerical linear algebra on sketch-and-project type methods for solving linear systems. In particular, it turns out that variance reduction is obtained by taking an SGD step on a stochastic optimization problem whose solution is the unknown Jacobian. As a consequence of our analysis, we resolved the conjecture of [30] in the affirmative by proving that a properly designed importance sampling for SAGA leads to the iteration complexity of . For this purpose we developed a new proof technique using a stochastic Lyapunov function. Our complexity result for uniform mini-batch SAGA perfectly interpolates between the best known convergence rates of SAGA and gradient descent, and is sufficiently precise as to inform the choice of the batch size that minimizes the overall complexity of the method. Additionally, we designed and analyzed a reduced memory variant of SAGA as a special case.
For future work we see many possible avenues including the following.
Structured sparse weight matrices One may wish to explore combinations of a weight matrix and different sketches to design new efficient methods further improving iteration complexity. For this the weighting matrix will have to be highly structured (e.g., block diagonal or very sparse) so that the Jacobian update (39) can be computed efficiently.
Bias-variance trade-off One can try to explore the bias-variance trade-off, as opposed to focusing on the extremes only: SAG (minimum variance) and SAGA (no bias). There is also no empirical evidence that unbiased estimators outperform biased ones.
Johnson–Lindenstrauss sketches One can design completely new methods using different sparse sketches, such as the fast Johnson–Lindenstrauss transform [2] or the Achlioptas transform [1]. The resulting method can then be analyzed through Theorem 1. But first these sketches need to be adapted to ensure we get an efficient method. In particular, computing is only efficient if most of the rows of are zeros.
Acknowledgements
Funding was provided by Fondation de Sciences Mathématiques de Paris, European Research Council (Grant No. ERC SEQUOIA), LabEx LMH (Grant No. ANR-11-LABX-0056-LMH).
Appendix A: Proof of inequality (20)
Lemma 13
Let S be a sampling whose support is a partition of [n]. Moreover, assume all sets of this partition have cardinality . Then
Proof
By assumption, . The first inequality follows from . On the other hand,
Appendix B: Duality of sketch-and-project and constrain-and-approximate
Lemma 14
Let and . The sketch-and-project problem
| 139 |
and the constrain-and-approximate problem
| 140 |
have the same solution, given by:
| 141 |
Proof
The proof is given in Theorem 4.1 in [12].
Appendix C: Proof of Theorem 5
First we will establish that
| 142 |
Indeed, for every i we have that , and for every we have . Using (142), (91) and the Gershgorin circle theorem to bound from above, we get , as claimed. When , we can get tighter results by using the fact that is a circulant matrix with associated vector . There is an elegant formula for calculating the eigenvalues of circulant matrices [34] using v, given by
| 143 |  \(\lambda_k = \sum_{j=0}^{n-1} v_j \,\omega_k^{\,j}, \qquad k = 0, 1, \dots, n-1,\)
where \(\omega_k = e^{2\pi \mathrm{i} k/n}\) are the n-th roots of unity and \(\mathrm{i}\) is the imaginary unit. From (143) we see that there are only two distinct eigenvalues. Namely, for we have
The other eigenvalue is given by any since
Appendix D: Notation glossary
See Table 2.
Table 2.
Frequently used notation
| f(x) | (convex loss function ) | (1) |
| Minimizer of f | (1) | |
| Strong convexity constant of f | Table 1 and Assumption 3.3 and Theorem 6 | |
| Stepsize | (2) | |
| Stochastic estimator of | (2), (13), (16), (33) | |
| [n] | ||
| F(x) | (function ) | (3) |
| (Jacobian of F at x) | (4) | |
| (vector of all ones) | (5) | |
| / | Shorthand for / | |
| symmetric positive definite “weight” matrix | (10), (12) | |
| (weighted Frobenius norm) | (10) | |
| A random (sketching) matrix picked from | ||
| (stochastic projection matrix) | ||
| Bias-correcting random variable | (15) and Assumption 2.1 | |
| (expectation over ) | ||
| S or | Sampling (a random subset of [n]) | |
| (minibatch size) | ||
| C | Subset of [n] | |
| ( is the ith unit coordinate vector in ) | ||
| / | / | Sections 1.4 and 4 |
| Column submatrix of with columns indexed by C | Section 4 and Theorem 6 | |
| (support of sampling S) | Section 4 | |
| (subsampled loss function) | Section 4 and Theorems 3 and 6 | |
| Smoothness constant of | Sections 1.5 and 4.4 and Theorems 3 and 6 | |
| Smoothness constant of | Sections 1.5 and 4.4 | |
| Sections 1.5 and 4.4 and Theorem 3 | ||
| L | Smoothness constant of | Sections 1.5 and 4.4 and Theorem 3 |
| Sections 1.5 and 4.4 and Theorem 3 | ||
| Expected smoothness constant of the stochastic gradient | Assumption 3.1 and Theorem 1 | |
| Expected smoothness constant of the Jacobian | Assumption 3.2 and Theorem 1 | |
| (= for —uniform S with —uniform support) | Sections 1.5 and 4.4 and Theorems 2 and 3 | |
| Stochastic contraction number | Section 3.2 and Lemma 2 and Theorem 1 | |
| Sketch residual | (37) and Theorem 1 and Lemma 6 | |
| / | Lyapunov function/stochastic Lyapunov function | (52)/(109) |
| Definition 2 | ||
| (94) |
Footnotes
For the purposes of this narrative it suffices to assume that stochastic gradients can be sampled at cost .
We will not bother about the distribution from which it is picked at the moment. It suffices to say that virtually all distributions are supported by our theory. However, if we wish to obtain a practical method, some distributions will make much more sense than others.
The term “quasi-gradient methods” was popular in the 1980s [21], and refers to algorithms for solving certain stochastic optimization problems which rely on stochastic estimates of function values and their derivatives. In this paper we give the term a different meaning by drawing a direct link with quasi-Newton methods.
For some prior results on importance sampling for minibatches, in the context of QUARTZ, see [5].
A formal definition can be found in Assumption 4.2.
In this paper, a sampling is a random set-valued mapping with the sets being subsets of [n].
SVRG is also built on a linear covariate model [15].
Excluding such trivial cases as when is an invertible matrix and with probability one, in which case .
A similar relation to (43) holds for the stochastic optimization reformulation of linear systems studied by Richtárik and Takáč [26]. Therein, this relation holds as an identity with (see Lemma 3.3 in [26]). However, the function considered there is entirely different and, moreover, and for all .
The notion of a probability matrix associated with a sampling was first introduced in [25] in the context of parallel coordinate descent methods, and further studied in [22].
Recall that for , for and .
Note that , and hence has the form of a maximum over averages.
This is only possible when n is a multiple of .
We can alternatively set and the same results will hold.
With the difference being that in [6] the iteration complexity is ; thus a small constant change.
The first results of this paper were obtained in Fall 2015 and most key results were obtained by Fall 2016. All key results were obtained by Fall 2017. The first author gave a series of talks on the results (before the paper was released online) in November 2016 (Machine learning seminar at Télécom ParisTech), December 2016 (CORE seminar, Université catholique de Louvain), March 2017 (Optimization, machine learning, and pluri-disciplinarity workshop, Inria Grenoble - Rhone-Alpes), May 2017 (SIAM Conference on Optimization, Vancouver), September 2017 (Optimization 2017, Faculdade de Ciencias of the Universidade de Lisboa), and November 2017 (PGMO Days 2017, session on Continuous Optimization for Machine Learning, EDF’Lab Paris-Saclay).
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1.Achlioptas D. Database-friendly random projections: Johnson–Lindenstrauss with binary coins. J. Comput. Syst. Sci. 2003;66(4):671–687. doi: 10.1016/S0022-0000(03)00025-4. [DOI] [Google Scholar]
- 2.Ailon N, Chazelle B. The fast Johnson–Lindenstrauss transform and approximate nearest neighbors. SIAM J. Comput. 2009;39(1):302–322. doi: 10.1137/060673096. [DOI] [Google Scholar]
- 3.Allen-Zhu, Z.: Katyusha: the first direct acceleration of stochastic gradient methods. In: Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing. STOC 2017, pp. 1200–1205 (2017)
- 4.Chang CC, Lin CJ. LIBSVM: a library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011;2(3):1–27. doi: 10.1145/1961189.1961199. [DOI] [Google Scholar]
- 5.Csiba D, Richtárik P. Importance sampling for minibatches. J. Mach. Learn. Res. 2018;19(1):962982. [Google Scholar]
- 6.Defazio A, Bach F, Lacoste-julien S. SAGA: a fast incremental gradient method with support for non-strongly convex composite objectives. Adv. Neural Inf. Process. Syst. 2014;27:1646–1654. [Google Scholar]
- 7.Defazio, A.J., Caetano, T.S., Domke, J.: Finito: a faster, permutable incremental gradient method for big data problems. In: CoRR arXiv:1407.2710 (2014)
- 8.Goldfarb D. Modification methods for inverting matrices and solving systems of linear algebraic equations. Math. Comput. 1972;26(120):829–829. doi: 10.1090/S0025-5718-1972-0317527-4. [DOI] [Google Scholar]
- 9.Goldfarb D. A family of variable-metric methods derived by variational means. Math. Comput. 1970;24(109):23–26. doi: 10.1090/S0025-5718-1970-0258249-6. [DOI] [Google Scholar]
- 10.Gower, R.M., Richtárik, P., Bach, F.: Stochastic quasi-gradient methods: variance reduction via Jacobian sketching. arXiv:1805.02632 (2018) [DOI] [PMC free article] [PubMed]
- 11.Gower RM, Richtárik P. Randomized iterative methods for linear systems. SIAM J. Matrix Anal. Appl. 2015;36(4):1660–1690. doi: 10.1137/15M1025487. [DOI] [Google Scholar]
- 12.Gower RM, Richtárik P. Randomized quasi-newton updates are linearly convergent matrix inversion algorithms. SIAM J. Matrix Anal. Appl. 2017;38(4):1380–1409. doi: 10.1137/16M1062053. [DOI] [Google Scholar]
- 13.Hickernell FJ, Lemieux C, Owen AB. Control variates for quasi-Monte Carlo. Stat. Sci. 2005;20(1):1–31. doi: 10.1214/088342304000000468. [DOI] [Google Scholar]
- 14.Hofmann, T., Lucchi, A., Lacoste-Julien, S., McWilliams, B.: Variance reduced stochastic gradient descent with neighbors. In: Cortes, C., Lawrence, N.D., Lee, D.D., Sugiyama, M., Garnett, R. (eds) NIPS, pp. 2305–2313 (2015)
- 15.Johnson, R., Zhang, T.: Accelerating stochastic gradient descent using predictive variance reduction. In: Advances in Neural Information Processing Systems 26, pp. 315–323. Curran Associates, Inc. (2013)
- 16.Konečný J, Richtárik P. Semi-stochastic gradient descent methods. Front. Appl. Math. Stat. 2017;3:9. doi: 10.3389/fams.2017.00009. [DOI] [Google Scholar]
- 17.Lin H, Mairal J, Harchaoui Z. Catalyst acceleration for first-order convex optimization: from theory to practice. J. Mach. Learn. Res. 2017;18(1):7854–7907. [Google Scholar]
- 18.Mairal J. Incremental majorization–minimization optimization with application to large-scale machine learning. SIAM J. Optim. 2015;25(2):829–855. doi: 10.1137/140957639. [DOI] [Google Scholar]
- 19.Nesterov Y. Introductory Lectures on Convex Optimization: A Basic Course. 1. Berlin: Springer; 2014. [Google Scholar]
- 20.Nguyen, L.M., Liu, J., Scheinberg, K., Takáč, M.: SARAH: a novel method for machine learning problems using stochastic recursive gradient. In: Precup D., Teh Y.W. (eds) Proceedings of the 34th International Conference on Machine Learning, Vol. 70, pp. 2613–2621. Proceedings of Machine Learning Research (PMLR) (2017)
- 21.Novikova N. A stochastic quasi-gradient method of solving optimization problems in Hilbert space. U.S.S.R. Comput. Math. Math. Phys. 1984;24(2):6–16. doi: 10.1016/0041-5553(84)90077-6. [DOI] [Google Scholar]
- 22.Qu, Z., Richtárik, P.: Coordinate descent with arbitrary sampling II: expected separable overapproximation. arXiv:1412.8063 (2014)
- 23.Qu, Z., Richtárik, P., Zhang, T.: Quartz: Randomized dual coordinate ascent with arbitrary sampling. In: Proceedings of the 28th International Conference on Neural Information Processing Systems-Volume 1. NIPS’15, pp. 865–873. MIT Press, Cambridge (2015)
- 24.Richtárik P, Takáč M. Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. Math. Program. 2014;144(1):1–38. doi: 10.1007/s10107-012-0614-z. [DOI] [Google Scholar]
- 25.Richtárik, P., Takáč, M.: Parallel coordinate descent methods for big data optimization problems. In: Mathematical Programming, pp. 1–52 (2015)
- 26.Richtárik, P., Takáč, M.: Stochastic reformulations of linear systems: algorithms and convergence theory. arXiv:1706.01108 (2017)
- 27.Robbins H, Monro S. A stochastic approximation method. Ann. Math. Stat. 1951;22:400–407. doi: 10.1214/aoms/1177729586. [DOI] [Google Scholar]
- 28.Gower, R.M., Le Roux, N., Bach, F.: Tracking the gradients using the Hessian: a new look at variance reducing stochastic methods. In: Proceedings of the 21st International Conference on Artificial Intelligence and Statistics. Proceedings of Machine Learning Research (2018)
- 29.Schmidt M, Le Roux N, Bach F. Minimizing finite sums with the stochastic average gradient. Math. Program. 2017;162(1):83–112. doi: 10.1007/s10107-016-1030-6. [DOI] [Google Scholar]
- 30.Schmidt, M.W., Babanezhad, R., Ahmed, M.O., Defazio, A., Clifton, A., Sarkar, A.: Non-uniform stochastic average gradient method for training conditional random fields. In: Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, AISTATS 2015, San Diego, California, USA, May 9–12, 2015 (2015)
- 31.Shalev-Shwartz, S.: SDCA without duality, regularization, and individual convexity. arXiv:1602.01582 (2016)
- 32.Shalev-Shwartz S, Zhang T. Accelerated mini-batch stochastic dual coordinate ascent. Adv. Neural Inf. Process. Syst. 2013;26:378–385. [Google Scholar]
- 33.Shalev-Shwartz S, Zhang T. Stochastic dual coordinate ascent methods for regularized loss. J. Mach. Learn. Res. 2013;14(1):567–599. [Google Scholar]
- 34.Varga RS. Eigenvalues of circulant matrices. Pac. J. Math. 1954;4:151–160. doi: 10.2140/pjm.1954.4.151. [DOI] [Google Scholar]
- 35.Wang, C., Chen, X., Smola, A.J., Xing, E.P.: Variance reduction for stochastic gradient optimization. In: Burges, C.J.C., Bottou, L., Welling, M., Ghahramani, Z., Weinberger, K.Q. (eds) Advances in Neural Information Processing Systems, vol. 26, pp. 181–189. Curran Associates Inc. (2013)
- 36.Xiao, L., Zhang, T.: A proximal stochastic gradient method with progressive variance reduction. arXiv:1403.4699 (2014)