Front Comput Neurosci. 2021 Oct 18;15:697469. doi: 10.3389/fncom.2021.697469

Table 1.

Comparison with other methods for reducing inference latency.

| Dataset | Method | Accuracy (decline vs. CNN) | Inference latency (time steps) | Acceleration ratio |
|---|---|---|---|---|
| MNIST (Neil et al., 2016) | Sparse Coding | 98.00% | 631 | - |
| MNIST (Neil et al., 2016) | Activation Cost | 98.00% | 602 | - |
| MNIST (Neil et al., 2016) | Dropout | 98.00% | 641 | - |
| MNIST (Neil et al., 2016) | Dropout Learning Sched. | 98.00% | 602 | - |
| MNIST (Neil et al., 2016) | Stacked AE | 98.00% | 788 | - |
| MNIST (Avg 0b Analog) | Stopping criterion | 98.50% (0.06%) | 24 | 1.88X |
| MNIST (Avg 0b Poisson) | Stopping criterion | 98.48% (0.08%) | 27 | 1.48X |
| MNIST (Max 0b Analog) | Stopping criterion | 97.91% (0.74%) | 30 | 1.97X |
| MNIST (Avg BN Analog) | Stopping criterion | 98.73% (0.09%) | 70 | 1.39X |
| MNIST (Yang et al., 2020) | Conversion rule | 99.03% (0.08%) | 67 | 1.49X |
| CIFAR-10 (Avg 0b Analog) | Stopping criterion | 87.72% (0.23%) | 267 | 1.87X |
| CIFAR-10 (Avg 0b Analog) | Stopping criterion | 87.25% (0.70%) | 146 | 1.81X |
| CIFAR-10 (Yang et al., 2020) | Conversion rule | 80.03% (0.78%) | 245 | 1.63X |

The acceleration ratio in the table is measured relative to the original converted model (Rueckauer et al., 2017) at the same accuracy.
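A minimal sketch of how such an acceleration ratio is presumably computed: the baseline model's inference latency (in time steps) divided by the latency of the accelerated model at matched accuracy. The baseline latency below (45 steps) is a hypothetical illustration chosen to reproduce the 1.88X entry, not a figure from the paper.

```python
def acceleration_ratio(baseline_steps: float, reduced_steps: float) -> float:
    """Speed-up from a reduction in inference latency (time steps).

    baseline_steps: latency of the original converted model at a given accuracy
                    (hypothetical value for illustration).
    reduced_steps:  latency of the accelerated model at the same accuracy.
    """
    return baseline_steps / reduced_steps

# Example: a hypothetical baseline of 45 time steps reduced to 24 time steps
# gives 45 / 24 = 1.875, reported as 1.88X.
print(f"{acceleration_ratio(45, 24):.2f}X")  # → 1.88X
```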