Front Neurosci. 2022 Oct 4;16:949934. doi: 10.3389/fnins.2022.949934

FIGURE 4.


A model trained on the Modified National Institute of Standards and Technology (MNIST) database with no dropout (red in all panels), with standard dropout (green in all panels), and with the fractional Brownian motion (FBM)-based dropout (blue in all panels). (A) The training loss after each training epoch. (B) The testing loss after each training epoch. (C) The testing accuracy after each training epoch. The insets show zoomed-in plot segments after epoch 60. (D) The generalization gap (the difference between the testing loss and the training loss) after each training epoch. The network consisted of 784 input neurons (28 × 28 grayscale images of digits), 1,024 neurons in the input and hidden layers, and 10 output neurons. The number of epochs was 100, with 16 mini-batches of 64 samples each. The Adam optimizer with a learning rate of 0.0001 was used. In all conditions, the dropout probability was adjusted to be around 0.2. N = 32, n = 60, H = 0.9, T = 2, Δt = 1/500, L = 50, s = 50. The same dropout parameters were used in the input and hidden layers (the two sets of fibers were independent in the layers). (E) Examples of the images used in the training and testing.
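The standard-dropout baseline described in the caption can be sketched as follows. The layer sizes (784 → 1,024 → 1,024 → 10), the mini-batch size of 64, and the dropout probability of 0.2 follow the caption; the ReLU activations, the He-style weight initialization, and the inverted-dropout formulation are illustrative assumptions, and the FBM-based dropout itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_mask(shape, p=0.2, rng=rng):
    # Inverted dropout (an assumed formulation): zero each unit with
    # probability p and rescale survivors by 1/(1 - p), so the expected
    # activation matches the dropout-free network at test time.
    keep = rng.random(shape) >= p
    return keep / (1.0 - p)

def forward(x, W1, W2, W3, p=0.2, train=True):
    # 784 -> 1,024 -> 1,024 -> 10, with dropout applied after the input
    # and hidden layers, as the caption describes.
    h1 = np.maximum(x @ W1, 0.0)
    if train:
        h1 = h1 * dropout_mask(h1.shape, p)
    h2 = np.maximum(h1 @ W2, 0.0)
    if train:
        h2 = h2 * dropout_mask(h2.shape, p)
    return h2 @ W3  # logits for the 10 digit classes

# Illustrative weights (He-style scaling is an assumption, not from the paper).
W1 = rng.standard_normal((784, 1024)) * np.sqrt(2 / 784)
W2 = rng.standard_normal((1024, 1024)) * np.sqrt(2 / 1024)
W3 = rng.standard_normal((1024, 10)) * np.sqrt(2 / 1024)

x = rng.standard_normal((64, 784))  # one mini-batch of 64 samples
logits = forward(x, W1, W2, W3)
```

Training would then minimize a cross-entropy loss on these logits with Adam at the stated learning rate of 0.0001; the generalization gap in panel (D) is simply the testing loss minus the training loss at each epoch.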