Abstract
The phenomenon of benign overfitting is one of the key mysteries uncovered by deep learning methodology: deep neural networks seem to predict well, even with a perfect fit to noisy training data. Motivated by this phenomenon, we consider when a perfect fit to training data in linear regression is compatible with accurate prediction. We give a characterization of linear regression problems for which the minimum norm interpolating prediction rule has near-optimal prediction accuracy. The characterization is in terms of two notions of the effective rank of the data covariance. It shows that overparameterization is essential for benign overfitting in this setting: the number of directions in parameter space that are unimportant for prediction must significantly exceed the sample size. By studying examples of data covariance properties that this characterization shows are required for benign overfitting, we find an important role for finite-dimensional data: the accuracy of the minimum norm interpolating prediction rule approaches the best possible accuracy for a much narrower range of properties of the data distribution when the data lie in an infinite-dimensional space vs. when the data lie in a finite-dimensional space with dimension that grows faster than the sample size.
Keywords: statistical learning theory, overfitting, linear regression, interpolation
Deep learning methodology has revealed a surprising statistical phenomenon: overfitting can perform well. The classical perspective in statistical learning theory is that there should be a tradeoff between the fit to the training data and the complexity of the prediction rule. Whether complexity is measured in terms of the number of parameters, the number of nonzero parameters in a high-dimensional setting, the number of neighbors averaged in a nearest neighbor estimator, the scale of an estimate in a reproducing kernel Hilbert space, or the bandwidth of a kernel smoother, this tradeoff has been ubiquitous in statistical learning theory. Deep learning seems to operate outside the regime where results of this kind are informative since deep neural networks can perform well even with a perfect fit to the training data.
As one example of this phenomenon, consider the experiment illustrated in figure 1C in ref. 1: standard deep network architectures and stochastic gradient algorithms, run until they perfectly fit a standard image classification training set, give respectable prediction performance, even when significant levels of label noise are introduced. The deep networks in the experiments reported in ref. 1 achieved essentially zero cross-entropy loss on the training data. In statistics and machine learning textbooks, an estimate that fits every training example perfectly is often presented as an illustration of overfitting [“…interpolating fits…[are] unlikely to predict future data well at all” (ref. 2, p. 37)]. Thus, to arrive at a scientific understanding of the success of deep learning methods, it is a central challenge to understand the performance of prediction rules that fit the training data perfectly.
In this paper, we consider perhaps the simplest setting where we might hope to witness this phenomenon: linear regression. That is, we consider quadratic loss and linear prediction rules, and we assume that the dimension of the parameter space is large enough that a perfect fit is guaranteed. We consider data in an infinite-dimensional space (a separable Hilbert space), but our results apply to a finite-dimensional subspace as a special case. There is an ideal value of the parameters, $\theta^*$, corresponding to the linear prediction rule that minimizes the expected quadratic loss. We ask when it is possible to fit the data exactly and still compete with the prediction accuracy of $\theta^*$. Since we require more parameters than the sample size in order to fit exactly, the solution might be underdetermined, and therefore, there might be many interpolating solutions. We consider the most natural: choose the parameter vector with the smallest norm among all vectors that give perfect predictions on the training sample. (This corresponds to using the pseudoinverse to solve the normal equations; see below.) We ask when it is possible to overfit in this way—and embed all of the noise of the labels into the parameter estimate $\hat\theta$—without harming prediction accuracy.
Our main result is a finite sample characterization of when overfitting is benign in this setting. The linear regression problem depends on the optimal parameters $\theta^*$ and the covariance $\Sigma$ of the covariates $x$. The properties of $\Sigma$ turn out to be crucial since the magnitude of the variance in different directions determines both how the label noise gets distributed across the parameter space and how errors in parameter estimation in different directions in parameter space affect prediction accuracy. There is a classical decomposition of the excess prediction error into two terms. The first is rather standard: provided that the scale of the problem (that is, the sum of the eigenvalues of $\Sigma$) is small compared with the sample size $n$, the contribution to $\hat\theta$ that we can view as coming from $\theta^*$ is not too distorted. The second term is more interesting since it reflects the impact of the noise in the labels on prediction accuracy. We show that this part is small if and only if the effective rank of $\Sigma$ in the subspace corresponding to low-variance directions is large compared with $n$. This necessary and sufficient condition of a large effective rank can be viewed as a property of significant overparameterization: fitting the training data exactly but with near-optimal prediction accuracy occurs if and only if there are many low-variance (and hence, unimportant) directions in parameter space where the label noise can be hidden.
The details are more complicated. The characterization depends in a specific way on two notions of effective rank, $r_k(\Sigma)$ and $R_k(\Sigma)$; the smaller one, $r_k(\Sigma)$, determines a split of $\Sigma$ into large and small eigenvalues, and the excess prediction error depends on the effective rank, as measured by the larger notion $R_k(\Sigma)$, of the subspace corresponding to the smallest eigenvalues. For the excess prediction error to be small, the smallest eigenvalues of $\Sigma$ must decay slowly.
Studying the patterns of eigenvalues that allow benign overfitting reveals an interesting role for large but finite dimensions: in an infinite-dimensional setting, benign overfitting occurs only for a narrow range of decay rates of the eigenvalues. On the other hand, it occurs with any suitably slowly decaying eigenvalue sequence in a finite-dimensional space with dimension that grows faster than the sample size. Thus, for linear regression, data that lie in a large but finite-dimensional space exhibit the benign overfitting phenomenon with a much wider range of covariance properties than data that lie in an infinite-dimensional space.
The phenomenon of interpolating prediction rules has been an object of study by several authors over the last two years since it emerged as an intriguing mystery at the Simons Institute program on Foundations of Machine Learning in the spring of 2017. Belkin et al. (3) described an experimental study demonstrating that this phenomenon of accurate prediction for functions that interpolate noisy data also occurs for prediction rules chosen from reproducing kernel Hilbert spaces and explained the mismatch between this phenomenon and classical generalization bounds. Belkin et al. (4) gave an example of an interpolating decision rule—simplicial interpolation—with an asymptotic consistency property as the input dimension gets large. That work and subsequent work of Belkin et al. (5) studied kernel smoothing methods based on singular kernels that both interpolate and, with suitable bandwidth choice, give optimal rates for nonparametric estimation [building on earlier consistency results (6) for these unusual kernels]. Liang and Rakhlin (7) considered minimum norm interpolating kernel regression with kernels defined as nonlinear functions of the Euclidean inner product and showed that, with certain properties of the training sample (expressed in terms of the empirical kernel matrix), these methods can have good prediction accuracy. Belkin et al. (8) studied experimentally the excess risk as a function of the dimension of a sequence of parameter spaces for linear and nonlinear classes.
Subsequent to our work, ref. 9 considered the properties of the interpolating linear prediction rule with minimal expected squared error. After this work was presented at the NAS Colloquium on the Science of Deep Learning (10), we became aware of the concurrent work of Belkin et al. (11) and of Hastie et al. (12). Belkin et al. (11) calculated the excess risk for certain linear models (a regression problem with identity covariance and sparse optimal parameters, both with and without noise, and a problem with random Fourier features with no noise), and Hastie et al. (12) considered linear regression in an asymptotic regime, where the sample size $n$ and input dimension $p$ go to infinity together with asymptotic ratio $p/n \to \gamma$. They assumed that, as $p$ gets large, the empirical spectral distribution of $\Sigma$ (the discrete measure on its set of eigenvalues) converges to a fixed measure, and they applied random matrix theory to explore the range of behaviors of the asymptotics of the excess prediction error as $\gamma$, the noise variance, and the eigenvalue distribution vary. They also studied the asymptotics of a model involving random nonlinear features. In contrast, we give upper and lower bounds on the excess prediction error for arbitrary finite sample size, for arbitrary covariance matrices, and for data of arbitrary dimension.
The next section introduces notation and definitions used throughout the paper, including definitions of the problem of linear regression and of various notions of effective rank of the covariance operator. The following section gives the characterization of benign overfitting, illustrates why the effective rank condition corresponds to significant overparameterization, and presents several examples of patterns of eigenvalues that allow benign overfitting, suggesting that slowly decaying covariance eigenvalues in input spaces of growing but finite dimension are the generic example of benign overfitting. Then we discuss the connections between these results and the benign overfitting phenomenon in deep neural networks and outline the proofs of the results.
Definitions and Notation
We consider linear regression problems, where a linear function of covariates $x$ from a (potentially infinite-dimensional) Hilbert space $\mathbb{H}$ is used to predict a real-valued response variable $y$. We use vector notation, so that $x^\top\theta$ denotes the inner product between $x$ and $\theta$ and $xx^\top$ denotes the tensor product of $x$ with itself.
Definition 1 (Linear Regression): A linear regression problem in a separable Hilbert space $\mathbb{H}$ is defined by a random covariate vector $x \in \mathbb{H}$ and outcome $y \in \mathbb{R}$. We define
1) the covariance operator $\Sigma = \mathbb{E}\big[x x^\top\big]$ and
2) the optimal parameter vector $\theta^* \in \mathbb{H}$, satisfying $\mathbb{E}\big[(y - x^\top\theta^*)^2\big] = \min_{\theta}\mathbb{E}\big[(y - x^\top\theta)^2\big]$.
We assume that
1) $x$ and $y$ are mean zero;
2) $x = V\Lambda^{1/2}z$, where $\Sigma = V\Lambda V^\top$ is the spectral decomposition of $\Sigma$ and $z$ has components that are independent $\sigma_x$-sub-Gaussian with $\sigma_x$ a positive constant: that is, for all $\lambda \in \mathbb{H}$,
$$\mathbb{E}\exp\big(\lambda^\top z\big) \le \exp\big(\sigma_x^2\|\lambda\|^2/2\big),$$
where $\|\cdot\|$ is the norm in the Hilbert space $\mathbb{H}$;
3) the conditional noise variance is bounded below by some constant $\sigma^2$,
$$\mathbb{E}\big[(y - x^\top\theta^*)^2 \mid x\big] \ge \sigma^2;$$
4) $y - x^\top\theta^*$ is $\sigma_y$-sub-Gaussian conditionally on $x$: that is, for all $\lambda \in \mathbb{R}$,
$$\mathbb{E}\big[\exp\big(\lambda\,(y - x^\top\theta^*)\big) \mid x\big] \le \exp\big(\sigma_y^2\lambda^2/2\big)$$
(note that this implies $\mathbb{E}[y \mid x] = x^\top\theta^*$); and
5) almost surely, the projection of the data $X$ on the space orthogonal to any eigenvector of $\Sigma$ spans a space of dimension $n$.
Given a training sample $(x_1, y_1), \ldots, (x_n, y_n)$ of $n$ independent pairs with the same distribution as $(x, y)$, an estimator returns a parameter estimate $\hat\theta \in \mathbb{H}$. The excess risk of the estimator is defined as
$$R(\hat\theta) = \mathbb{E}_{x,y}\big[(y - x^\top\hat\theta)^2 - (y - x^\top\theta^*)^2\big],$$
where $\mathbb{E}_{x,y}$ denotes the conditional expectation given all random quantities other than $(x, y)$ (in this case, given the estimate $\hat\theta$). Define the vectors $y \in \mathbb{R}^n$ with entries $y_i$ and $\varepsilon \in \mathbb{R}^n$ with entries $\varepsilon_i = y_i - x_i^\top\theta^*$. We use infinite matrix notation: $X$ denotes the linear map from $\mathbb{H}$ to $\mathbb{R}^n$ corresponding to $x_1, \ldots, x_n$, so that $X\theta$ has $i$th component $x_i^\top\theta$. We use similar notation for the linear map $X^\top$ from $\mathbb{R}^n$ to $\mathbb{H}$.
Notice that Assumptions 1 to 5 are satisfied when $x$ and $y$ are jointly Gaussian with zero mean and the rank of $\Sigma$ is larger than $n$.
We shall be concerned with situations where an estimator can fit the data perfectly: that is, $X\hat\theta = y$. Typically, this implies that there are many such vectors. We consider the interpolating estimator with minimal norm in $\mathbb{H}$. We use $\|\cdot\|$ to denote both the Euclidean norm of a vector in $\mathbb{R}^n$ and the norm in the Hilbert space $\mathbb{H}$.
Definition 2 (Minimum Norm Estimator): Given data $X$ and $y$, the minimum norm estimator $\hat\theta$ solves the optimization problem
$$\min_{\theta \in \mathbb{H}} \ \|\theta\|^2 \quad \text{subject to} \quad \|X\theta - y\|^2 = \min_{\beta}\|X\beta - y\|^2.$$
By the projection theorem, parameter vectors that solve the least squares problem solve the normal equations, and therefore, we can equivalently write $\hat\theta$ as the minimum norm solution to the normal equations,
$$\hat\theta = \big(X^\top X\big)^{\dagger} X^\top y,$$
where $\big(X^\top X\big)^{\dagger}$ denotes the pseudoinverse of the bounded linear operator $X^\top X$ (for infinite-dimensional $\mathbb{H}$, the existence of the pseudoinverse is guaranteed because $X^\top X$ is bounded and has a closed range) (13). When $\mathbb{H}$ has dimension $p$ with $p < n$ and $X^\top X$ has rank $p$, there is a unique solution to the normal equations. In contrast, Assumption 5 in Definition 1 implies that we can find many solutions to the normal equations that achieve $X\theta = y$. The minimum norm solution is given by
$$\hat\theta = X^\top\big(XX^\top\big)^{-1} y. \qquad [1]$$
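As a concrete illustration, the following minimal sketch (Python/NumPy; the data-generating choices are hypothetical, not from the paper) forms the minimum norm interpolating estimator via the pseudoinverse and checks that it fits the training data exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 500                                   # sample size and an overparameterized dimension

# Illustrative data: Gaussian covariates with diagonal covariance and noisy linear responses.
lam = 1.0 / (1.0 + np.arange(p))                 # hypothetical covariance eigenvalues
X = rng.standard_normal((n, p)) * np.sqrt(lam)   # rows x_i have covariance diag(lam)
theta_star = np.zeros(p)
theta_star[0] = 1.0                              # hypothetical optimal parameter vector
y = X @ theta_star + 0.5 * rng.standard_normal(n)

# Minimum norm solution to the normal equations: theta_hat = X^T (X X^T)^+ y.
theta_hat = X.T @ np.linalg.pinv(X @ X.T) @ y

print(np.max(np.abs(X @ theta_hat - y)))         # ~1e-13: a perfect fit to the training data
theta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.max(np.abs(theta_hat - theta_lstsq)))   # ~1e-13: lstsq returns the same interpolator
```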
Our main result gives tight bounds on the excess risk of this minimum norm estimator in terms of certain notions of effective rank of the covariance that are defined in terms of its eigenvalues.
We use $\mu_1(\Sigma) \ge \mu_2(\Sigma) \ge \cdots$ to denote the eigenvalues of $\Sigma$ in descending order, and we denote the operator norm of $\Sigma$ by $\|\Sigma\|$. We use $I_{\mathbb{H}}$ to denote the identity operator on $\mathbb{H}$ and $I_n$ to denote the $n \times n$ identity matrix.
Definition 3 (Effective Ranks): For the covariance operator $\Sigma$, define $\lambda_i = \mu_i(\Sigma)$ for $i = 1, 2, \ldots$. If $\sum_{i > k}\lambda_i < \infty$ and $\lambda_{k+1} > 0$ for $k \ge 0$, define
$$r_k(\Sigma) = \frac{\sum_{i > k}\lambda_i}{\lambda_{k+1}}, \qquad R_k(\Sigma) = \frac{\big(\sum_{i > k}\lambda_i\big)^2}{\sum_{i > k}\lambda_i^2}.$$
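For concreteness, the two effective ranks are straightforward to compute from a sequence of eigenvalues. The sketch below implements the formulas of Definition 3 directly (zero-based indexing, so `lam[k]` plays the role of $\lambda_{k+1}$); the example sequence is only for illustration.

```python
import numpy as np

def effective_ranks(lam, k):
    """Return (r_k, R_k) for a descending eigenvalue sequence lam, as in Definition 3."""
    tail = lam[k:]                       # lambda_{k+1}, lambda_{k+2}, ...
    if tail[0] <= 0:
        raise ValueError("r_k and R_k require lambda_{k+1} > 0")
    s = tail.sum()
    return s / tail[0], s ** 2 / (tail ** 2).sum()

lam = 1.0 / np.arange(1, 10_001) ** 2    # example: lambda_i = i^{-2} (truncated)
print(effective_ranks(lam, 0))           # r_0 close to pi^2/6, R_0 close to (pi^2/6)^2 / (pi^4/90)
```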
Main Results
The following theorem establishes nearly matching upper and lower bounds for the risk of the minimum norm interpolating estimator.
Theorem 1. For any $\sigma_x$, there are constants $b, c, c_1 > 1$ for which the following holds. Consider a linear regression problem from Definition 1. Define
$$k^* = \min\{k \ge 0 : r_k(\Sigma) \ge bn\},$$
where the minimum of the empty set is defined as $\infty$. Suppose that $\delta < 1$ with $\log(1/\delta) < n/c$. If $k^* \ge n/c_1$, then $\mathbb{E}\,R(\hat\theta) \ge \sigma^2/c_1$. Otherwise,
$$R(\hat\theta) \le c\,\|\theta^*\|^2\|\Sigma\|\max\left\{\sqrt{\frac{r_0(\Sigma)}{n}},\ \frac{r_0(\Sigma)}{n},\ \sqrt{\frac{\log(1/\delta)}{n}}\right\} + c\,\log(1/\delta)\,\sigma_y^2\left(\frac{k^*}{n} + \frac{n}{R_{k^*}(\Sigma)}\right)$$
with probability at least $1 - \delta$, and
$$\mathbb{E}\,R(\hat\theta) \ge \frac{\sigma^2}{c}\left(\frac{k^*}{n} + \frac{n}{R_{k^*}(\Sigma)}\right).$$
Moreover, there are universal constants $a_1, a_2$ such that, for all $n \ge a_1$, for all covariance operators $\Sigma$, and for all $t \ge 0$, there is a $\theta^*$ with $\|\theta^*\| = t$ such that, for $x$ and $y - x^\top\theta^*$ jointly Gaussian, with probability at least $1/4$, a matching lower bound on $R(\hat\theta)$ involving $\|\theta^*\|^2\|\Sigma\|$ holds.
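The quantities appearing in Theorem 1 can be evaluated numerically for any given eigenvalue sequence. The sketch below computes $k^* = \min\{k \ge 0 : r_k(\Sigma) \ge bn\}$ and the term $k^*/n + n/R_{k^*}(\Sigma)$ that must be small for benign overfitting; the constant $b$ and the isotropic examples are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

def k_star(lam, n, b=1.0):
    """Smallest k with r_k(lam) >= b*n for a descending, positive eigenvalue sequence."""
    for k in range(len(lam)):
        tail = lam[k:]
        if tail.sum() / tail[0] >= b * n:
            return k
    return len(lam)                          # convention: the minimum of the empty set is infinity

def noise_term(lam, n, b=1.0):
    """k*/n + n/R_{k*}: the part of the bound reflecting how well the noise can be hidden."""
    k = k_star(lam, n, b)
    if k >= len(lam):
        return np.inf
    tail = lam[k:]
    R_k = tail.sum() ** 2 / (tail ** 2).sum()
    return k / n + n / R_k

n = 1000
iso = lambda p: np.ones(p) / p               # isotropic covariance in dimension p
print(noise_term(iso(100 * n), n))           # heavily overparameterized: about n/p = 0.01
print(noise_term(iso(n), n))                 # dimension equal to n: the term is about 1
```

With these isotropic examples, the first setting leaves many low-variance directions to absorb the label noise (the term is of order $n/p$), while the second does not.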
Effective Ranks and Overparameterization.
In order to understand the implications of Theorem 1, we now study relationships between the two notions of effective rank, $r_k$ and $R_k$, and establish sufficient and necessary conditions for the sequence of eigenvalues of $\Sigma$ to lead to small excess risk.
The following lemma shows that the two notions of effective rank are closely related. SI Appendix, section H has its proof and other properties of $r_k$ and $R_k$.
Lemma 1. , , and
Notice that $r_0(I_n) = R_0(I_n) = n$. More generally, if all of the nonzero eigenvalues of $\Sigma$ are identical, then $r_0(\Sigma) = R_0(\Sigma) = \mathrm{rank}(\Sigma)$. For $\Sigma$ with finite rank, we can express both $r_0(\Sigma)$ and $R_0(\Sigma)$ as a product of the rank and a notion of symmetry. In particular, for $\mathrm{rank}(\Sigma) = p$, we can write
$$r_0(\Sigma) = p\,s(\Sigma), \qquad R_0(\Sigma) = p\,S(\Sigma), \qquad \text{where } s(\Sigma) = \frac{\sum_i\lambda_i}{p\,\lambda_1}, \quad S(\Sigma) = \frac{\big(\sum_i\lambda_i\big)^2}{p\sum_i\lambda_i^2}.$$
Both notions of symmetry $s(\Sigma)$ and $S(\Sigma)$ lie between $1/p$ (when the eigenvalues are maximally asymmetric) and 1 (when the $\lambda_i$ are all equal).
Theorem 1 shows that, for the minimum norm estimator to have near-optimal prediction accuracy, $r_0(\Sigma)$ should be small compared with the sample size $n$ (from the first term) and $r_{k^*}(\Sigma)$ and $R_{k^*}(\Sigma)$ should be large compared with $n$. Together, these conditions imply that overparameterization is essential for benign overfitting in this setting: the number of nonzero eigenvalues should be large compared with $n$, they should have a small sum compared with $n$, and there should be many eigenvalues no larger than $\lambda_{k^*+1}$. If the number of these small eigenvalues is not much larger than $n$, then they should be roughly equal, but they can be more asymmetric if there are many more of them.
The following theorem shows that the kind of overparameterization that is essential for benign overfitting requires $\Sigma$ to have a heavy tail. (The proof—and some other examples illustrating the boundary of benign overfitting—are in SI Appendix, section I.) In particular, if we fix $\Sigma$ in an infinite-dimensional Hilbert space and ask when the excess risk of the minimum norm estimator approaches zero as $n \to \infty$, it imposes tight restrictions on the eigenvalues of $\Sigma$. However, there are many other possibilities for these asymptotics if $\Sigma$ can change with $n$. Since rescaling $\Sigma$ affects the accuracy of the least norm interpolant in an obvious way, we may assume without loss of generality that $\|\Sigma_n\| = 1$. If we restrict our attention to this case, then informally, Theorem 1 implies that, when the covariance operator for data with $n$ examples is $\Sigma_n$, the least norm interpolant converges if $r_0(\Sigma_n) = o(n)$, $k^*_n = o(n)$, and $R_{k^*_n}(\Sigma_n) = \omega(n)$, and only if these same three conditions hold, where $k^*_n = \min\{k \ge 0 : r_k(\Sigma_n) \ge bn\}$ for the universal constant $b$ in Theorem 1. This motivates the following definition.
Definition 4: A sequence of covariance operators $\Sigma_n$ with $\|\Sigma_n\| = 1$ is benign if
$$\lim_{n\to\infty}\frac{r_0(\Sigma_n)}{n} = \lim_{n\to\infty}\frac{k^*_n}{n} = \lim_{n\to\infty}\frac{n}{R_{k^*_n}(\Sigma_n)} = 0.$$
We give some examples of benign and nonbenign settings.
Theorem 2.
1) If $\lambda_k = k^{-\alpha}\ln^{-\beta}(k + 1)$, then $\Sigma$ is benign if and only if $\alpha = 1$ and $\beta > 1$.
2) If
$$\lambda_{n,k} = \begin{cases} \gamma_k + \epsilon_n & \text{if } k \le p_n, \\ 0 & \text{otherwise}, \end{cases}$$
and $\gamma_k = \Theta\big(e^{-k/\tau}\big)$, then $\Sigma_n$ with eigenvalues $\lambda_{n,k}$ is benign if and only if $p_n = \omega(n)$ and $n e^{-o(n)} = \epsilon_n p_n = o(n)$. Furthermore, for such $p_n$ and $\epsilon_n$, the rate at which the excess risk approaches zero can be read off from Theorem 1.
Compare the situations described by Theorem 2.1 and 2.2. Theorem 2.1 shows that, for infinite-dimensional data with a fixed covariance, benign overfitting occurs if and only if the eigenvalues of the covariance operator decay just slowly enough for their sum to remain finite. Theorem 2.2 shows that the situation is very different if the data have finite dimension and a small amount of isotropic noise is added to the covariates. In that case, even if the eigenvalues of the original covariance operator (before the addition of isotropic noise) decay very rapidly, benign overfitting occurs if and only if both the dimension is large compared with the sample size and the isotropic component of the covariance is sufficiently small—but not exponentially small—compared with the sample size.
These examples illustrate the tension between the slow decay of eigenvalues that is needed for $n/R_{k^*}(\Sigma)$ to be small and the summability of eigenvalues that is needed for $r_0(\Sigma)/n$ to be small. There are two ways to resolve this tension. First, in the infinite-dimensional setting, slow decay of the eigenvalues suffices—decay just fast enough to ensure summability—as shown by Theorem 2.1. (SI Appendix, section I gives another example—Theorem S14.2—where the eigenvalue decay is allowed to vary with $n$; in that case, $\Sigma_n$ is benign iff the decay rate gets close—but not too close—to this critical rate as $n$ increases.) Second, the other way to resolve the tension is to consider a finite-dimensional setting (which ensures that the eigenvalues are summable), and in this case, arbitrarily slow decay is possible. Theorem 2.2 gives an example of this: eigenvalues that are all at least as large as a small constant. SI Appendix, section I gives other examples with a similar flavor, including a truncated infinite series that decays sufficiently slowly that the sum does not converge (SI Appendix, section I, Theorem S14.3). Theorem 2.1 shows that a very specific decay rate is required in infinite dimensions, which suggests that this is an unusual phenomenon in that case. The more generic scenario where benign overfitting will occur is demonstrated by Theorem 2.2, with eigenvalues that are either at least a constant or slowly decaying in a very high—but finite-dimensional—space.
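The contrast between the two regimes can also be seen numerically by evaluating the three quantities of Definition 4 for eigenvalue profiles shaped like those in Theorem 2. In the sketch below, the infinite sequence is truncated for computation, and the constants ($b$, $\tau$, and the choices of $p_n$ and $\epsilon_n$) are arbitrary illustrative choices rather than values from the theorem.

```python
import numpy as np

def benign_diagnostics(lam, n, b=1.0):
    """Return (r_0/n, k*/n, n/R_{k*}) for a descending, positive eigenvalue sequence."""
    tails = np.cumsum(lam[::-1])[::-1]        # tails[k] = sum of lam[k], lam[k+1], ...
    r = tails / lam                           # r[k] = r_k (zero-based: lam[k] is lambda_{k+1})
    ks = np.nonzero(r >= b * n)[0]
    if len(ks) == 0:
        return r[0] / n, np.inf, np.inf
    k = int(ks[0])
    tail = lam[k:]
    return r[0] / n, k / n, n * (tail ** 2).sum() / tails[k] ** 2

for n in (100, 1_000, 10_000):
    # Theorem 2.1-style profile (truncated): lambda_k = 1 / (k * ln^2(k + 1)).
    idx = np.arange(1, 100 * n + 1)
    lam_inf = 1.0 / (idx * np.log(idx + 1) ** 2)
    # Theorem 2.2-style profile: fast exponential decay plus a small isotropic component.
    p_n, eps_n, tau = 50 * n, 1.0 / n, 5.0
    lam_fin = np.exp(-np.arange(1, p_n + 1) / tau) + eps_n
    print(n, benign_diagnostics(lam_inf, n), benign_diagnostics(lam_fin, n))
```

In this truncated illustration, the diagnostics for the finite-dimensional profile fall quickly with $n$, while those for the infinite-dimensional profile decrease only slowly, consistent with the discussion above.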
Proof
Throughout the proofs, we treat $\sigma_x$ (the sub-Gaussian norm of the covariates) as a constant. Therefore, we use the symbols $b, c, c_1, c_2, \ldots$ to refer to constants that only depend on $\sigma_x$. Their values are suitably large (and always at least one) but do not depend on any parameters of the problems that we consider other than $\sigma_x$. For universal constants that do not depend on any parameters of the problem at all, we use the symbol $a$. Also, whenever we sum over eigenvectors of $\Sigma$, the sum is restricted to eigenvectors with nonzero eigenvalues.
Outline.
The first step is a standard decomposition of the excess risk into two pieces, a term that corresponds to the distortion that is introduced by viewing $\theta^*$ through the lens of the finite sample and a term that corresponds to the distortion introduced by the noise $\varepsilon$. The impact of both sources of error in $\hat\theta$ on the excess risk is modulated by the covariance $\Sigma$, which gives different weight to different directions in parameter space.
Lemma 2. The excess risk of the minimum norm estimator satisfies with probability at least over , and , where
The proof of this lemma is in SI Appendix, section A. SI Appendix, sections J and K give bounds on the term that involves $\theta^*$. The heart of the proof is controlling the trace term.
Before continuing with the proof, let us make a quick digression to note that Lemma 2 already begins to give an idea that many low-variance directions are necessary for the least norm interpolator to succeed. Let us consider the extreme case that $\Sigma$ is the identity on a space whose dimension is proportional to (and at least as large as) $n$. In this case, the trace term in Lemma 2 is $\mathrm{tr}\big((XX^\top)^{-1}\big)$. For Gaussian data, for instance, standard random matrix theory implies that, with high probability, the eigenvalues of $XX^\top$ will all be within a constant factor of $n$, which implies that $\mathrm{tr}\big((XX^\top)^{-1}\big)$ is bounded below by a constant, and then, Lemma 2 implies that the least norm interpolant's excess risk is at least a constant.
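The effect described in this digression is visible in a small simulation. The sketch below (Gaussian data; all parameter choices are illustrative) estimates the excess risk $(\hat\theta - \theta^*)^\top\Sigma(\hat\theta - \theta^*)$ of the least norm interpolant for an isotropic covariance in dimension proportional to $n$, where the risk stays at least a constant, and for a covariance with a few strong directions plus very many low-variance directions, where it is small.

```python
import numpy as np

rng = np.random.default_rng(1)

def min_norm_excess_risk(lam, n, n_trials=20, noise=1.0):
    """Median Monte Carlo excess risk of the least norm interpolant for Gaussian data
    with covariance diag(lam) and a unit-norm theta* along the leading eigendirection."""
    p = len(lam)
    theta_star = np.zeros(p)
    theta_star[0] = 1.0
    risks = []
    for _ in range(n_trials):
        X = rng.standard_normal((n, p)) * np.sqrt(lam)
        y = X @ theta_star + noise * rng.standard_normal(n)
        theta_hat = X.T @ np.linalg.solve(X @ X.T, y)        # least norm interpolant
        delta = theta_hat - theta_star
        risks.append(float(np.sum(lam * delta ** 2)))        # (theta_hat - theta*)' Sigma (theta_hat - theta*)
    return float(np.median(risks))

n = 200
# Isotropic covariance in dimension 2n: the excess risk does not vanish.
print(min_norm_excess_risk(np.ones(2 * n), n))
# A few strong directions plus many low-variance directions: the excess risk is small.
lam = np.concatenate([np.ones(5), np.full(30 * n, 1.0 / (30 * n))])
print(min_norm_excess_risk(lam, n))
```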
To prove that the trace term can be controlled for suitable $\Sigma$, the first step is to express it in terms of sums of outer products of unit-covariance, independent, sub-Gaussian random vectors. We show that, when there is a $k$ with $k/n$ small and $r_k(\Sigma)/n$ large, all of the smallest eigenvalues of these matrices are suitably concentrated, and this implies that the trace term is bounded above by
$$c\left(\frac{k}{n} + \frac{n}{R_k(\Sigma)}\right).$$
(Later, we show that the minimizer is $k = k^*$.) Next, we show that this expression is also a lower bound on the trace term provided that there is such a $k$. Conversely, we show that, for any $k$ for which $r_k(\Sigma)$ is not large compared with $n$, the trace term is at least as big as a constant times $k/n$. Combining shows that, when $k^*/n$ is small, the trace term is upper and lower bounded by constant factors times
$$\frac{k^*}{n} + \frac{n}{R_{k^*}(\Sigma)}.$$
Unit Variance Sub-Gaussians.
Our assumptions allow the trace term to be expressed as a function of many independent sub-Gaussian vectors.
Lemma 3. Consider a covariance operator with and . Write its spectral decomposition , where the orthonormal are the eigenvectors corresponding to the . For with , define . Then,
and these are independent sub-Gaussian. Furthermore, for any with , we have
where .
Proof: By Assumption 2 in Definition 1, the random variables are independent sub-Gaussian. We consider in the basis of eigenvectors of , , to see that
and therefore, we can write
For the second part, we use SI Appendix, section B, Lemma S3, which is a consequence of the Sherman–Morrison–Woodbury formula
by SI Appendix, section B, Lemma S3 for the case and . Note that is invertible by Assumption 5 in Definition 1.
The weighted sum of outer products of these sub-Gaussian vectors plays a central role in the rest of the proof. Define
$$A = \sum_{j}\lambda_j z_j z_j^\top,$$
where the $z_j \in \mathbb{R}^n$ are the independent vectors with independent sub-Gaussian coordinates with unit variance defined in Lemma 3. Note that the vector $z_j$ is independent of the matrix $A_{-j} = A - \lambda_j z_j z_j^\top$, and therefore, in the last part of Lemma 3, all of the random quadratic forms are independent of the points where those forms are evaluated.
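The identity underlying this construction can be checked numerically: with a diagonal covariance (so the eigenbasis is the standard basis), the Gram matrix $XX^\top$ equals the weighted sum of outer products of the whitened per-direction vectors $z_j \in \mathbb{R}^n$. The sketch below is only an illustration of this algebraic identity, not of the sub-Gaussian assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 30, 200
lam = 0.9 ** np.arange(p)                       # diagonal covariance: eigenbasis = standard basis
Z = rng.standard_normal((n, p))                 # whitened coordinates: unit variance, independent
X = Z * np.sqrt(lam)                            # covariates x_i with covariance diag(lam)

# Gram matrix as a weighted sum of outer products of the per-direction vectors z_j in R^n.
A = sum(lam[j] * np.outer(Z[:, j], Z[:, j]) for j in range(p))
print(np.max(np.abs(A - X @ X.T)))              # ~1e-13: A = X X^T
```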
Concentration of $A$.
The next step is to show that the eigenvalues of $A$ and of closely related matrices (such as $A_{-i}$ and the tail sums $A_k = \sum_{j > k}\lambda_j z_j z_j^\top$) are concentrated. The proof of the following inequality is in SI Appendix, section C. Recall that $\mu_1(A)$ and $\mu_n(A)$ denote the largest and the smallest eigenvalues of the matrix $A$.
Lemma 4. There is a constant such that, for any with probability at least ,
The following lemma uses this result to give bounds on the eigenvalues of $A_k$, which in turn, give bounds on some eigenvalues of $A$ and $A_{-i}$. For these upper and lower bounds to match up to a constant factor, the sum of the eigenvalues in the tail of $\Sigma$ should dominate the term involving its leading eigenvalue $\lambda_{k+1}$, which is a condition on the effective rank $r_k(\Sigma)$. The lemma shows that, after $r_k(\Sigma)$ is sufficiently large, all of the eigenvalues of $A_k$ are identical up to a constant factor.
Lemma 5. There are constants such that, for any , with probability at least ,
1) for all $i \ge 1$, $\mu_{k+1}(A_{-i}) \le \mu_{k+1}(A) \le \mu_1(A_k) \le c\big(\sum_{j>k}\lambda_j + \lambda_{k+1}n\big)$,
2) for all $1 \le i \le n$, $\mu_n(A) \ge \mu_n(A_{-i}) \ge \mu_n(A_k) \ge \frac{1}{c}\sum_{j>k}\lambda_j - c\,\lambda_{k+1}n$, and
3) if $r_k(\Sigma) \ge bn$, then $\frac{1}{c}\lambda_{k+1}r_k(\Sigma) \le \mu_n(A_k) \le \mu_1(A_k) \le c\,\lambda_{k+1}r_k(\Sigma)$.
Proof: By Lemma 4, we know that, with probability at least ,
First, the matrix has rank at most (as a sum of matrices of rank 1). Thus, there is a linear space of dimension such that, for all , and therefore, .
Second, by the Courant–Fischer–Weyl Theorem, for all and , (SI Appendix, section G, Lemma S11). On the other hand, for , , and therefore, all of the eigenvalues of are lower bounded by .
Finally, if ,
Choosing and gives the third claim of the lemma.
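The conclusion of this lemma is easy to check in simulation for Gaussian data: once the tail effective rank $r_k(\Sigma)$ is much larger than $n$, all eigenvalues of the tail Gram matrix $A_k = \sum_{j>k}\lambda_j z_j z_j^\top$ are within a constant factor of $\sum_{i>k}\lambda_i$. The dimensions and eigenvalues in the sketch below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 100, 10
lam = np.concatenate([np.ones(k), np.full(50 * n, 1e-3)])   # tail with r_k = 50n >> n
p = len(lam)

Z = rng.standard_normal((n, p))              # whitened coordinates, one column per eigendirection
A_k = (Z[:, k:] * lam[k:]) @ Z[:, k:].T      # A_k = sum_{j>k} lam_j z_j z_j^T  (an n x n matrix)

evals = np.linalg.eigvalsh(A_k)
tail_sum = lam[k:].sum()
print(evals.min() / tail_sum, evals.max() / tail_sum)       # both ratios are of constant order
```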
Upper Bound on the Trace Term.
Lemma 6. There are constants such that, if , , and , then with probability at least ,
The proof uses the following lemma and its corollary. Their proofs are in SI Appendix, section C.
Lemma 7. Suppose that is a nonincreasing sequence of nonnegative numbers such that and that are independent centered -subexponential random variables. Then, for some universal constant for any , with probability at least ,
Corollary 1. Suppose that is a centered random vector with independent sub-Gaussian coordinates with unit variances, is a random subspace of of codimension , and is independent of . Then, for some universal constant and any , with probability at least ,
where is the orthogonal projection on .
Proof of Lemma 6: Fix to its value in Lemma 5. By Lemma 3,
[2]
First, consider the sum up to . If , Lemma 5 shows that, with probability at least for all , and for all , . The lower bounds on the imply that, for all and ,
and the upper bounds on the give
where is the span of the eigenvectors of corresponding to its smallest eigenvalues. Therefore, for ,
[3]
Next, we apply Corollary 1 times together with a union bound to show that, with probability at least for all ,
[4]
[5]
provided that and for some sufficiently large (note that and only depend on , , and , and we can still take large enough in the end without changing and ). Combining Eqs. 3–5, with probability at least ,
Second, consider the second sum in Eq. 2. Lemma 5 shows that, on the same high-probability event that we considered in bounding the first half of the sum, . Hence,
Notice that is a weighted sum of -subexponential random variables, with the weights given by the in blocks of size . Lemma 7 implies that, with probability at least ,
because . Combining the above gives
Finally, putting both parts together and taking gives the lemma.
Lower Bound on the Trace Term.
We first give a bound on a single term in the expression for in Lemma 3 that holds regardless of . The proof is in SI Appendix, section D.
Lemma 8. There is a constant such that, for any with and any , with probability at least ,
We can extend these bounds to a lower bound on using the following lemma. The proof is in SI Appendix, section E.
Lemma 9. Suppose that , is a sequence of nonnegative random variables, and that is a sequence of nonnegative real numbers (at least one of which is strictly positive) such that, for some and any , . Then,
These two lemmas imply the following lower bound.
Lemma 10. There are constants such that, for any and any with probability at least ,
1) if , then ; and
2) if , then
In particular, if all choices of give , then implies that, with probability at least , .
Proof: From Lemmas 3, 8, and 9, with probability at least ,
Now, if , then the second term in the minimum is always bigger than the third term, and in that case,
On the other hand, if ,
where the equality follows from the fact that the are nonincreasing.
A Simple Choice of $k$.
Recall that $b$ is a constant. If no $k \le n/c_1$ has $r_k(\Sigma) \ge bn$, then Lemmas 2 and 10 imply that the expected excess risk is at least a constant times $\sigma^2$, which proves the first paragraph of Theorem 1 for large $k^*$. If some $k \le n/c_1$ does have $r_k(\Sigma) \ge bn$, then the upper and lower bounds of Lemmas 6 and 10 are constant multiples of
$$\frac{k}{n} + \frac{n}{R_k(\Sigma)}.$$
It might seem surprising that any suitable choice of $k$ suffices to give upper and lower bounds: what prevents one choice of $k$ from giving an upper bound that falls below the lower bound that arises from another choice of $k$? However, the freedom to choose $k$ is somewhat illusory: Lemma 5 shows that, for any qualifying value of $k$, the smallest eigenvalue of $A_k$ is within a constant factor of $\sum_{i>k}\lambda_i$. Thus, any two choices of $k$ satisfying $k \le n/c_1$ and $r_k(\Sigma) \ge bn$ must have values of $\sum_{i>k}\lambda_i$ within constant factors. The smallest such $k$, which is $k^*$, simplifies the bound on the trace term as the following lemma shows. The proof is in SI Appendix, section F.
Lemma 11. For any and , if , we have
Finally, we can finish the proof of Theorem 1. Set $b$ in Lemma 10 and Theorem 1 to the constant from Lemma 6. Take $c_1$ to be the maximum of the constants from Lemmas 6 and 10.
By Lemma 10, if $k^* \ge n/c_1$, then with high probability the trace term is at least a constant. However, by Lemma 10.2 and by Lemma 6, if $k^* < n/c_1$, then with high probability the trace term is within a constant factor of the common bound of Lemmas 6 and 10, which by Lemma 11, is within a constant factor of $k^*/n + n/R_{k^*}(\Sigma)$. Taking $c$ sufficiently large and combining these results with Lemma 2 and with the upper bound on the term involving $\theta^*$ in SI Appendix, section J completes the proof of the first paragraph of Theorem 1.
The proof of the second paragraph is in SI Appendix, section K.
Deep Neural Networks
How relevant are Theorems 1 and 2 to the phenomenon of benign overfitting in deep neural networks? One connection appears by considering regimes where deep neural networks are well approximated by linear functions of their parameters. This so-called neural tangent kernel (NTK) viewpoint has been vigorously pursued recently in an attempt to understand the optimization properties of deep learning methods. Very wide neural networks, trained with gradient descent from a suitable random initialization, can be accurately approximated by linear functions in an appropriate Hilbert space, and in this case, gradient descent finds an interpolating solution quickly (14–19). (Note that these papers do not consider prediction accuracy, except when there is no noise; for example, ref. 14, Assumption A1 implies that the network can compute a suitable real-valued response exactly, and the data-dependent bound of ref. 19, Theorem 5.1 becomes vacuous when independent noise is added to the labels $y_i$.) The eigenvalues of the covariance operator in this case can have a heavy tail under reasonable assumptions on the data distribution (20, 21), and the dimension is very large but finite as required for benign overfitting. However, the assumptions of Theorem 1 do not apply in this case. In particular, the assumption that the random elements of the Hilbert space are a linearly transformed vector with independent components is not satisfied. Thus, our results are not directly applicable in this—somewhat unrealistic—setting. Note that the slow decay of the eigenvalues of the NTK is in contrast to the case of the Gaussian and other smooth kernels, where the eigenvalues decay nearly exponentially quickly (22).
The phenomenon of benign overfitting was first observed in deep neural networks. Theorems 1 and 2 are steps toward understanding this phenomenon by characterizing when it occurs in the simple setting of linear regression. Those results suggest that covariance eigenvalues that are constant or slowly decaying in a high (but finite)-dimensional space might be important in the deep network setting also. Some authors have suggested viewing neural networks as finite-dimensional approximations to infinite-dimensional objects (23–25), and there are generalization bounds—although not for the overfitting regime—that are applicable to infinite-width deep networks with parameter norm constraints (26–30). However, the intuition from the linear setting suggests that truncating to a finite-dimensional space might be important for good statistical performance in the overfitting regime. Confirming this conjecture by extending our results to the setting of prediction in deep neural networks is an important open problem.
Conclusions and Further Work
Our results characterize when the phenomenon of benign overfitting occurs in high-dimensional linear regression with Gaussian data and more generally. We give finite sample excess risk bounds that reveal the covariance structure that ensures that the minimum norm interpolating prediction rule has near-optimal prediction accuracy. The characterization depends on two notions of the effective rank of the data covariance operator. It shows that overparameterization (that is, the existence of many low-variance and hence, unimportant directions in parameter space) is essential for benign overfitting and that data that lie in a large but finite-dimensional space exhibit the benign overfitting phenomenon with a much wider range of covariance properties than data that lie in an infinite-dimensional space.
There are several natural future directions. Our main theorem requires the conditional expectation $\mathbb{E}[y \mid x]$ to be a linear function of $x$, and it is important to understand whether the results are also true in the misspecified setting, where this assumption does not hold. Our main result also assumes that the covariates are distributed as a linear function of a vector of independent random variables. We would like to understand the extent to which this assumption can be relaxed since it rules out some important examples, such as infinite-dimensional reproducing kernel Hilbert spaces with continuous kernels defined on finite-dimensional spaces. We would also like to understand how our results extend to loss functions other than squared error and what we can say about overfitting estimators beyond the minimum norm interpolating estimator. The most interesting future direction is understanding how these ideas could apply to nonlinearly parameterized function classes, such as neural networks, the methodology that uncovered the phenomenon of benign overfitting.
Data Availability.
There are no data associated with this manuscript.
Acknowledgments
We acknowledge the support of NSF Grant IIS-1619362 and of a Google research award. G.L. was supported by the Spanish Ministry of Economy and Competitiveness, Grant PGC2018-101643-B-I00; “High-dimensional problems in structured probabilistic models - Ayudas Fundación BBVA a Equipos de Investigación Cientifica 2017”; and Google Focused Award “Algorithms and Learning for AI.” Part of this work was done as part of the fall 2018 program on Foundations of Data Science at the Simons Institute for the Theory of Computing.
Footnotes
The authors declare no competing interest.
This article is a PNAS Direct Submission. R.B. is a guest editor invited by the Editorial Board.
This paper results from the Arthur M. Sackler Colloquium of the National Academy of Sciences, “The Science of Deep Learning,” held March 13–14, 2019, at the National Academy of Sciences in Washington, DC. NAS colloquia began in 1991 and have been published in PNAS since 1995. From February 2001 through May 2019 colloquia were supported by a generous gift from The Dame Jillian and Dr. Arthur M. Sackler Foundation for the Arts, Sciences, & Humanities, in memory of Dame Sackler’s husband, Arthur M. Sackler. The complete program and video recordings of most presentations are available on the NAS website at http://www.nasonline.org/science-of-deep-learning.
This article contains supporting information online at https://www.pnas.org/lookup/suppl/doi:10.1073/pnas.1907378117/-/DCSupplemental.
References
- 1. Zhang C., Bengio S., Hardt M., Recht B., Vinyals O., “Understanding deep learning requires rethinking generalization” in 5th International Conference on Learning Representations. https://openreview.net/forum?id=Sy8gdB9xx. Accessed 30 March 2020.
- 2. Hastie T., Tibshirani R., Friedman J. H., Elements of Statistical Learning (Springer, 2001).
- 3. Belkin M., Ma S., Mandal S., “To understand deep learning we need to understand kernel learning” in Proceedings of the 35th International Conference on Machine Learning (Proceedings of Machine Learning Research, 2018), vol. 80, pp. 540–548.
- 4. Belkin M., Hsu D., Mitra P., “Overfitting or perfect fitting? Risk bounds for classification and regression rules that interpolate” in Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, S. Bengio et al., Eds. (NIPS, 2018), pp. 2306–2317.
- 5. Belkin M., Rakhlin A., Tsybakov A. B., Does data interpolation contradict statistical optimality? arXiv:1806.09471 (25 June 2018).
- 6. Devroye L., Györfi L., Krzyżak A., The Hilbert kernel regression estimate. J. Multivariate Anal. 65, 209–227 (1998).
- 7. Liang T., Rakhlin A., Just interpolate: Kernel “ridgeless” regression can generalize. arXiv:1808.00387 (1 August 2018).
- 8. Belkin M., Hsu D., Ma S., Mandal S., Reconciling modern machine learning and the bias-variance trade-off. arXiv:1812.11118 (28 December 2018).
- 9. Muthukumar V., Vodrahalli K., Sahai A., Harmless interpolation of noisy data in regression. arXiv:1903.09139 (21 March 2019).
- 10. Bartlett P. L., “Accurate prediction from interpolation: A new challenge for statistical learning theory (presentation at the National Academy of Sciences workshop, The Science of Deep Learning)” (video recording, 2019). https://www.youtube.com/watch?v=1y2sB38T6FU&feature=youtu.be. Accessed 14 March 2019.
- 11. Belkin M., Hsu D., Xu J., Two models of double descent for weak features. arXiv:1903.07571 (18 March 2019).
- 12. Hastie T., Montanari A., Rosset S., Tibshirani R. J., Surprises in high-dimensional ridgeless least squares interpolation. arXiv:1903.08560 (19 March 2019).
- 13. Desoer C. A., Whalen B. H., A note on pseudoinverses. J. Soc. Ind. Appl. Math. 11, 442–446 (1963).
- 14. Li Y., Liang Y., Learning overparameterized neural networks via stochastic gradient descent on structured data. arXiv:1808.01204 (3 August 2018).
- 15. Du S. S., Poczós B., Zhai X., Singh A., Gradient descent provably optimizes over-parameterized neural networks. arXiv:1810.02054 (4 October 2018).
- 16. Du S. S., Lee J. D., Li H., Wang L., Zhai X., Gradient descent finds global minima of deep neural networks. arXiv:1811.03804 (9 November 2018).
- 17. Zou D., Cao Y., Zhou D., Gu Q., Stochastic gradient descent optimizes over-parameterized deep ReLU networks. arXiv:1811.08888 (21 November 2018).
- 18. Jacot A., Gabriel F., Hongler C., “Neural tangent kernel: Convergence and generalization in neural networks” in 32nd Conference on Neural Information Processing Systems, Bengio S. et al., Eds. (NeurIPS, 2018), pp. 8580–8589.
- 19. Arora S., Du S. S., Hu W., Li Z., Wang R., Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks. arXiv:1901.08584 (24 January 2019).
- 20. Xie B., Liang Y., Song L., Diverse neural network learns true target functions. arXiv:1611.03131 (9 November 2016).
- 21. Cao Y., Fang Z., Wu Y., Zhou D. X., Gu Q., Towards understanding the spectral bias of deep learning. arXiv:1912.01198 (3 December 2019).
- 22. Belkin M., “Approximation beats concentration? An approximation view on inference with smooth radial kernels” in Conference on Learning Theory, 2018, Stockholm, Sweden, 6–9 July 2018, S. Bubeck, V. Perchet, P. Rigollet, Eds. (PMLR, 2018), vol. 75, pp. 1348–1361.
- 23. Lee W. S., Bartlett P. L., Williamson R. C., Efficient agnostic learning of neural networks with bounded fan-in. IEEE Trans. Inf. Theor. 42, 2118–2132 (1996).
- 24. Bengio Y., Roux N. L., Vincent P., Delalleau O., Marcotte P., “Convex neural networks” in Advances in Neural Information Processing Systems 18, Weiss Y., Schölkopf B., Platt J. C., Eds. (MIT Press, Cambridge, MA, 2006), pp. 123–130.
- 25. Bach F., Breaking the curse of dimensionality with convex neural networks. J. Mach. Learn. Res. 18, 1–53 (2017).
- 26. Bartlett P. L., The sample complexity of pattern classification with neural networks: The size of the weights is more important than the size of the network. IEEE Trans. Inf. Theor. 44, 525–536 (1998).
- 27. Bartlett P. L., Mendelson S., Rademacher and Gaussian complexities: Risk bounds and structural results. J. Mach. Learn. Res. 3, 463–482 (2002).
- 28. Neyshabur B., Tomioka R., Srebro N., “Norm-based capacity control in neural networks” in Proceedings of the 28th Conference on Learning Theory, Proceedings of Machine Learning Research, Grünwald P., Hazan E., Kale S., Eds. (PMLR, Paris, France, 2015), vol. 40, pp. 1376–1401.
- 29. Bartlett P., Foster D., Telgarsky M., “Spectrally-normalized margin bounds for neural networks” in Advances in Neural Information Processing Systems 30, Guyon I., et al., Eds. (Curran Associates, Inc., 2017), pp. 6240–6249.
- 30. Golowich N., Rakhlin A., Shamir O., “Size-independent sample complexity of neural networks” in Proceedings of the 31st Conference on Learning Theory, Proceedings of Machine Learning Research, Bubeck S., Perchet V., Rigollet P., Eds. (PMLR, 2018), vol. 75, pp. 297–299.