
Table 1.

Summary of experimental results showing the final test accuracy (in percentages) for the RBP algorithms after 100 epochs of training on MNIST and CIFAR-10. For the experiments in this section, training was repeated five times with different weight initializations; in these cases the mean is provided, with the sample standard deviation in parentheses. Also included are the quantization results from Section 5, and the experiments applying dropout to the learning channel from Section 6.

                           BP            RBP           SRBP          Top layer only

MNIST
  Baseline                 97.9 (0.1)    97.2 (0.1)    97.2 (0.2)    84.7 (0.7)
  No-f                     89.9 (0.3)    88.3 (1.1)    88.4 (0.7)    -
  Adaptive                 -             97.3 (0.1)    97.3 (0.1)    -
  Sparse-8                 -             96.0 (0.4)    96.9 (0.1)    -
  Sparse-2                 -             96.3 (0.5)    95.8 (0.2)    -
  Sparse-1                 -             90.3 (1.1)    94.6 (0.6)    -
  Quantized error 5-bit    97.6          95.4          95.1          -
  Quantized error 3-bit    96.5          92.5          93.2          -
  Quantized error 1-bit    94.6          89.8          91.6          -
  Quantized update 5-bit   95.2          94.0          93.3          -
  Quantized update 3-bit   96.5          91.0          92.2          -
  Quantized update 1-bit   92.5          9.6           90.7          -
  LC Dropout 10%           97.7          96.5          97.1          -
  LC Dropout 20%           97.8          96.7          97.2          -
  LC Dropout 50%           97.7          96.7          97.1          -

CIFAR-10
  Baseline                 83.4 (0.2)    70.2 (1.1)    72.7 (0.8)    47.9 (0.4)
  No-f                     54.8 (3.6)    32.7 (6.2)    39.9 (3.9)    -
  Sparse-8                 -             46.3 (4.3)    70.9 (0.7)    -
  Sparse-2                 -             62.9 (0.9)    65.7 (1.9)    -
  Sparse-1                 -             56.7 (2.6)    62.6 (1.8)    -

A dash marks a combination for which no result is reported.
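To make the columns of Table 1 concrete, the sketch below illustrates in NumPy how the three backward passes differ for a small two-hidden-layer network, together with one possible k-bit quantizer for the error signal. This is an illustrative sketch only, not the authors' code: the layer sizes, the tanh nonlinearity, the quantization scheme, and all names (C_rbp, C_srbp, quantize, and so on) are assumptions made for this example.

```python
# Illustrative sketch (not the paper's implementation) of the backward passes
# compared in Table 1, for a 2-hidden-layer MLP in NumPy.
import numpy as np

rng = np.random.default_rng(0)
sizes = [784, 128, 128, 10]  # input, two hidden layers, output (assumed sizes)

# Forward weights (trained) and fixed random learning-channel matrices (never trained).
W = [rng.normal(scale=0.1, size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
C_rbp  = [rng.normal(scale=0.1, size=(n, m)) for m, n in zip(sizes[1:-1], sizes[2:])]  # layer-to-layer
C_srbp = [rng.normal(scale=0.1, size=(sizes[-1], m)) for m in sizes[1:-1]]             # top-to-layer skips

f = np.tanh
fprime = lambda s: 1.0 - np.tanh(s) ** 2

def forward(x):
    s, h = [], [x]
    for Wl in W[:-1]:
        s.append(h[-1] @ Wl)
        h.append(f(s[-1]))
    logits = h[-1] @ W[-1]
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return s, h, p / p.sum(axis=1, keepdims=True)

def backward(x, t, mode="BP"):
    """Return activations and per-layer deltas; mode in {"BP", "RBP", "SRBP"}."""
    s, h, p = forward(x)
    d_top = p - t                              # cross-entropy error at the output
    deltas = [d_top]
    for l in reversed(range(len(W) - 1)):      # hidden layers, top to bottom
        if mode == "BP":
            d = deltas[0] @ W[l + 1].T         # exact transpose of the forward weights
        elif mode == "RBP":
            d = deltas[0] @ C_rbp[l]           # fixed random matrix, chained layer by layer
        else:                                  # SRBP: top error skips straight to layer l
            d = d_top @ C_srbp[l]
        deltas.insert(0, d * fprime(s[l]))     # the "no-f" rows drop this fprime factor
    return h, deltas

def quantize(e, bits=3):
    """k-bit symmetric quantization of an error signal (one possible scheme, assumed here)."""
    scale = np.abs(e).max() + 1e-12
    if bits == 1:
        return np.sign(e) * scale              # 1-bit: keep only the sign
    levels = 2 ** (bits - 1) - 1
    return np.round(e / scale * levels) / levels * scale

# One SGD step on random data, e.g. RBP with 3-bit quantized errors.
x = rng.normal(size=(32, 784))
t = np.eye(10)[rng.integers(10, size=32)]
h, deltas = backward(x, t, mode="RBP")
lr = 0.05
for l in range(len(W)):
    W[l] -= lr * h[l].T @ quantize(deltas[l], bits=3) / len(x)
```

With this reading, the "Quantized update" rows correspond to quantizing the computed weight change rather than the error signal, and "LC Dropout" applies dropout to the signals carried by the learning channel; the paper's exact experimental settings are given in Sections 5 and 6.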