Sci Rep. 2021 Oct 5;11:19797. doi: 10.1038/s41598-021-99191-2

Table 1. Comparison of accuracy for all tasks.

| Tasks          | Networks    | Method            | Binarization            | w-Reduction | Accuracy (%) |
|----------------|-------------|-------------------|-------------------------|-------------|--------------|
| MNIST          | 2-layer DNN | Full precision    | –                       | –           | 98.2 (0.1)   |
|                |             | Ours              | Weights only            | Manual      | 97.2 (0.1)   |
|                |             | Ours              | Activation only         | Manual      | 97.1 (0.1)   |
|                |             | Ours              | Weights and activations | Manual      | 96.0 (0.2)   |
| Dogs vs Cats   | 4-layer CNN | Full precision    | –                       | –           | 89.7 (0.2)   |
|                |             | Ours              | Weights only            | V1          | 86.0 (0.3)   |
|                |             | Ours              | Activation only         | V1          | 84.6 (0.1)   |
|                |             | Ours              | Weights and activations | V1          | 85.5 (0.6)   |
| Spoken numbers | 4-layer CNN | Full precision    | –                       | –           | 93.7 (0.2)   |
|                |             | Ours              | Weights only            | V1          | 91.5 (0.5)   |
|                |             | Ours              | Activation only         | V1          | 91.4 (0.8)   |
|                |             | Ours              | Weights and activations | V1          | 93.0 (0.2)   |
| CIFAR-10       | ResNet-20   | Full precision    | –                       | –           | 92.4 [12]    |
|                |             | Ours              | Weights only            | V1          | 90.2 (0.1)   |
|                |             | DoReFa [14]       | Weights only            | –           | 90.0         |
|                |             | LQ-Net [12]       | Weights only            | –           | 90.1         |
|                |             | DSQ [13]          | Weights only            | –           | 90.2         |
|                |             | ProxQuant [16]    | Weights only            | –           | 90.7         |
|                |             | Ours              | Activation only         | V2          | 86.2 (0.3)   |
|                |             | Ours              | Weights and activations | V2          | 84.1 (0.2)   |
|                |             | DoReFa [14]       | Weights and activations | –           | 79.9         |
|                |             | DSQ [13]          | Weights and activations | –           | 84.1         |
|                |             | CL-BCNN [42]      | Weights and activations | –           | 91.1         |
|                | VGG-Small   | Full precision    | –                       | –           | 93.8 [12]    |
|                |             | Ours              | Weights only            | V1          | 93.3 (0.1)   |
|                |             | BWN [11]          | Weights only            | –           | 90.1         |
|                |             | BinaryConnect [9] | Weights only            | –           | 91.7         |
|                |             | LQ-Net [12]       | Weights only            | –           | 93.5         |
|                |             | Ours              | Activation only         | V1          | 92.4 (0.2)   |
|                |             | Ours              | Weights and activations | V1          | 90.7 (0.2)   |
|                |             | XNOR-Net [11]     | Weights and activations | –           | 89.8         |
|                |             | BNN [10]          | Weights and activations | –           | 89.9         |
|                |             | DSQ [13]          | Weights and activations | –           | 91.7         |
|                |             | CL-BCNN [42]      | Weights and activations | –           | 92.5         |

The classes in all datasets are completely balanced.
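For orientation, the sketch below illustrates what the "Binarization" column distinguishes: binarizing only the weights versus binarizing weights and activations. It is not the authors' training code; it uses the generic sign binarizer with a straight-through estimator, as in BinaryConnect [9] and BNN [10], and the layer widths (784, 512, 10) for the 2-layer MNIST DNN are assumed for illustration only.

```python
# Minimal sketch of the binarization settings compared in Table 1
# (generic sign/STE scheme, not the paper's specific method).
import torch
import torch.nn as nn
import torch.nn.functional as F


class BinarizeSTE(torch.autograd.Function):
    """Forward: sign(x) in {-1, +1}. Backward: straight-through estimator,
    passing gradients through unchanged where |x| <= 1."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)


binarize = BinarizeSTE.apply


class BinaryLinear(nn.Linear):
    """Linear layer whose weights are binarized on the forward pass;
    the full-precision weights are retained for the gradient update."""

    def forward(self, x):
        return F.linear(x, binarize(self.weight), self.bias)


class TwoLayerBinaryDNN(nn.Module):
    """2-layer DNN in the spirit of the MNIST row (sizes assumed).
    binarize_activations=False -> "Weights only";
    binarize_activations=True  -> "Weights and activations"."""

    def __init__(self, binarize_activations=False):
        super().__init__()
        self.fc1 = BinaryLinear(784, 512)
        self.fc2 = BinaryLinear(512, 10)
        self.binarize_activations = binarize_activations

    def forward(self, x):
        h = self.fc1(x.flatten(1))
        h = binarize(h) if self.binarize_activations else F.relu(h)
        return self.fc2(h)
```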