Neural Networks. 2020 Dec;132:428–446. doi: 10.1016/j.neunet.2020.08.022

Fig. 1.

Learning from a noisy linear teacher. (A) A dataset $\mathcal{D}=\{x^\mu, y^\mu\}$, $\mu=1,\dots,P$, of $P$ examples is created by providing random inputs $x$ to a teacher network with a weight vector $\bar{w}$, and corrupting the teacher outputs with noise of variance $\sigma_\epsilon^2$. (B) A student network is then trained on this dataset $\mathcal{D}$. (C) Example dynamics of the student network during full-batch gradient descent training. Training error (blue) decreases monotonically. Test error, also referred to as generalization error (yellow), here computable exactly (Eq. (4)), decreases to a minimum $E_g$ at the optimal early stopping time $t$ before increasing at longer times ($E_g^{\text{late}}$), a phenomenon known as overtraining. Because of noise in the teacher's output, the best possible student network attains finite generalization error ("oracle", green) even with infinite training data; this error is the approximation error $E$. The difference between the test error and this best-possible error is the estimation error $E_{\text{est}}$. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
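To make the setup in panels (A)–(C) concrete, the following is a minimal sketch, not the paper's code or its exact Eq. (4) parameterization. It assumes unit-variance Gaussian inputs and a linear student trained by full-batch gradient descent on squared error, and uses the fact that for such inputs the expected test error is $\|w-\bar{w}\|^2+\sigma_\epsilon^2$, so the noise floor $\sigma_\epsilon^2$ plays the role of the "oracle" error. The dimensions, learning rate, and variable names such as `w_bar` and `sigma_eps` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N, P = 100, 120          # input dimension and number of training examples (assumed values)
sigma_eps = 0.5          # std of the teacher output noise (sigma_epsilon in the caption)
lr, steps = 0.05, 5000   # gradient-descent step size and number of full-batch steps

# (A) Teacher: fixed weight vector w_bar; labels are noisy teacher outputs.
w_bar = rng.standard_normal(N) / np.sqrt(N)
X = rng.standard_normal((P, N))                      # random unit-variance Gaussian inputs
y = X @ w_bar + sigma_eps * rng.standard_normal(P)   # corrupted teacher outputs

# (B) Student: a linear network with weights w, trained on the dataset D = (X, y).
w = np.zeros(N)

def train_error(w):
    """Mean squared error on the training set (blue curve in panel C)."""
    return np.mean((X @ w - y) ** 2)

def test_error(w):
    """Expected error on fresh noisy examples: ||w - w_bar||^2 + sigma_eps^2.
    The sigma_eps^2 floor is the best achievable ("oracle") error."""
    return np.sum((w - w_bar) ** 2) + sigma_eps ** 2

# (C) Full-batch gradient descent on the training loss, tracking both errors over time.
train_curve, test_curve = [], []
for t in range(steps):
    grad = X.T @ (X @ w - y) / P
    w -= lr * grad
    train_curve.append(train_error(w))
    test_curve.append(test_error(w))

t_opt = int(np.argmin(test_curve))
print(f"optimal early-stopping step: {t_opt}")
print(f"test error at optimum:  {test_curve[t_opt]:.3f}")
print(f"test error at the end:  {test_curve[-1]:.3f}  (larger => overtraining)")
print(f"oracle error floor (sigma_eps^2): {sigma_eps**2:.3f}")
```

Under these assumed settings the training error decreases monotonically while the tracked test error passes through a minimum and then rises, reproducing qualitatively the early-stopping and overtraining behavior described in panel (C).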