Front Neurosci. 2018 May 23;12:331. doi: 10.3389/fnins.2018.00331

Table 2.

Comparison with state-of-the-art spiking networks of similar architecture on MNIST.

| Model | Network structure | Training techniques | Accuracy (%) |
|---|---|---|---|
| Spiking RBM (STDP) (Neftci et al., 2013) | 784-500-40 | None | 93.16 |
| Spiking RBM (pre-training*) (Peter et al., 2013) | 784-500-500-10 | None | 97.48 |
| Spiking MLP (pre-training*) (Diehl et al., 2015) | 784-1200-1200-10 | Weight normalization | 98.64 |
| Spiking MLP (pre-training*) (Hunsberger and Eliasmith, 2015) | 784-500-200-10 | None | 98.37 |
| Spiking MLP (BP) (O'Connor and Welling, 2016) | 784-200-200-10 | None | 97.66 |
| Spiking MLP (STDP) (Diehl and Cook, 2015) | 784-6400 | None | 95.00 |
| Spiking MLP (BP) (Lee et al., 2016) | 784-800-10 | Error normalization / parameter regularization | 98.71 |
| Spiking MLP (STBP) | 784-800-10 | None | 98.89 |

We mainly compare with methods that have a similar network architecture; * indicates that the model is based on a pre-trained ANN.
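To make the "Network structure" notation concrete, here is a minimal sketch of a forward pass through the 784-800-10 fully connected spiking MLP of the STBP row, with leaky integrate-and-fire (LIF) dynamics unrolled over time. Only the layer sizes come from the table; the time window, decay factor, firing threshold, rate-coded input, and random weights are illustrative assumptions, not the authors' trained model.

```python
# Illustrative sketch of a 784-800-10 spiking MLP forward pass.
# Layer sizes come from Table 2; all other constants are assumed.
import numpy as np

rng = np.random.default_rng(0)

LAYERS = [784, 800, 10]   # "Network structure" column for the STBP row
T = 30                    # simulation time steps (assumed)
TAU = 0.8                 # membrane decay factor (assumed)
V_TH = 1.0                # firing threshold (assumed)

# Random weights stand in for trained parameters.
weights = [rng.normal(0.0, 0.1, size=(n_in, n_out))
           for n_in, n_out in zip(LAYERS[:-1], LAYERS[1:])]

def forward(x_rates):
    """Run one sample through the network and return output spike counts.

    x_rates: (784,) firing probabilities in [0, 1], e.g. normalized pixels.
    """
    v = [np.zeros(n) for n in LAYERS[1:]]          # membrane potentials
    spike_counts = np.zeros(LAYERS[-1])
    for _ in range(T):
        # Bernoulli rate coding of the input at this time step.
        s = (rng.random(LAYERS[0]) < x_rates).astype(float)
        for l, w in enumerate(weights):
            v[l] = TAU * v[l] + s @ w              # leaky integration
            s = (v[l] >= V_TH).astype(float)       # emit spike at threshold
            v[l] = np.where(s > 0, 0.0, v[l])      # hard reset after a spike
        spike_counts += s
    return spike_counts

# Example: a random "image"; the predicted class is the most active output.
x = rng.random(784)
print("predicted class:", int(np.argmax(forward(x))))
```

As in the accuracy comparison above, classification is read out from the output layer's activity accumulated over the simulation window; the 784-6400 row (Diehl and Cook, 2015) differs in having no separate 10-unit readout layer.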