Front Neurosci. 2016 Nov 8;10:508. doi: 10.3389/fnins.2016.00508

Table 2.

Comparison of accuracy of different models on PI MNIST.

Network                                   # units in hidden layers   Test accuracy (%)
ANN (Srivastava et al., 2014)             800                        98.4
ANN (Srivastava et al., 2014), drop-out   4096–4096                  98.99
ANN (Wan et al., 2013), drop-connect      800–800                    98.8
ANN (Goodfellow et al., 2013), maxout     240×5–240×5                99.06
SNN (O'Connor et al., 2013)^a,b           500–500                    94.09
SNN (Hunsberger and Eliasmith, 2015)^a    500–300                    98.6
SNN (Diehl et al., 2015)                  1200–1200                  98.64
SNN (O'Connor and Welling, 2016)          200–200                    97.8
SNN (SGD, this work)                      800                        [98.56, 98.64, 98.71]*
SNN (SGD, this work)                      500–500                    [98.63, 98.70, 98.76]*
SNN (ADAM, this work)                     300–300                    [98.71, 98.77, 98.88]*

We compare only to models that do not use unsupervised pre-training or data augmentation, with the exception of O'Connor et al. (2013) and Hunsberger and Eliasmith (2015).

^a uses unsupervised pre-training; ^b uses data augmentation; * [min, average, max] values over epochs 181–200.
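The starred entries report a [min, average, max] summary of test accuracy over the final training epochs (181–200) rather than a single number. A minimal sketch of how such a summary could be computed from a per-epoch accuracy log; the function and variable names are illustrative, not taken from the paper:

```python
# Sketch: summarize test accuracy over a trailing window of epochs as
# [min, average, max], as in the starred table entries (epochs 181-200).

def summarize_epochs(accuracies, first_epoch=181, last_epoch=200):
    """Return [min, average, max] of per-epoch accuracies (%) over the
    1-indexed, inclusive epoch range [first_epoch, last_epoch]."""
    window = accuracies[first_epoch - 1:last_epoch]
    return [min(window), sum(window) / len(window), max(window)]

# Example with a synthetic 200-epoch accuracy log (stand-in values):
acc_log = [98.0 + 0.03 * (i % 7) for i in range(200)]
lo, avg, hi = summarize_epochs(acc_log)
```

Reporting the spread over the last epochs, rather than a single best epoch, gives a more honest picture of how much the test accuracy fluctuates once training has converged.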