Table 2.
| Model | Network structure | Training skills | Accuracy (%) |
|---|---|---|---|
| Spiking RBM (STDP) (Neftci et al., 2013) | 784-500-40 | None | 93.16 |
| Spiking RBM (pre-training*) (Peter et al., 2013) | 784-500-500-10 | None | 97.48 |
| Spiking MLP (pre-training*) (Diehl et al., 2015) | 784-1200-1200-10 | Weight normalization | 98.64 |
| Spiking MLP (pre-training*) (Hunsberger and Eliasmith, 2015) | 784-500-200-10 | None | 98.37 |
| Spiking MLP (BP) (O'Connor and Welling, 2016) | 784-200-200-10 | None | 97.66 |
| Spiking MLP (STDP) (Diehl and Cook, 2015) | 784-6400 | None | 95.00 |
| Spiking MLP (BP) (Lee et al., 2016) | 784-800-10 | Error normalization / parameter regularization | 98.71 |
| Spiking MLP (STBP) | 784-800-10 | None | 98.89 |
We mainly compare with methods that use similar network architectures; * indicates that the model is based on a pre-trained ANN model.
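For concreteness, the sketch below illustrates what a "784-800-10" network structure in the table denotes: a fully connected spiking MLP with 784 inputs, one 800-neuron hidden layer, and 10 outputs, simulated with leaky integrate-and-fire dynamics. The random weights, threshold, decay factor, Poisson-like input encoding, and simulation length are illustrative assumptions, not the trained models or training procedures cited in the table.

```python
import numpy as np

# Minimal sketch of a 784-800-10 spiking MLP, matching the layer sizes in Table 2.
# Weights are random placeholders (not trained); threshold, decay, input encoding,
# and simulation length are assumed values, not taken from the compared papers.
rng = np.random.default_rng(0)
sizes = [784, 800, 10]                                      # layer widths from the table
W = [rng.normal(0.0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]

def classify(x_rate, T=100, v_th=1.0, decay=0.9):
    """Run T timesteps of leaky integrate-and-fire dynamics; return the predicted class."""
    v = [np.zeros(n) for n in sizes[1:]]                    # membrane potentials per layer
    counts = np.zeros(sizes[-1])                            # output spike counts
    for _ in range(T):
        s = (rng.random(sizes[0]) < x_rate).astype(float)   # Poisson-like input spikes
        for layer, w in enumerate(W):
            v[layer] = decay * v[layer] + s @ w             # leaky integration of weighted spikes
            s = (v[layer] >= v_th).astype(float)            # fire where the threshold is crossed
            v[layer] *= 1.0 - s                             # reset membrane of fired neurons
        counts += s                                         # accumulate output-layer spikes
    return int(counts.argmax())                             # most active output neuron wins

print(classify(rng.random(784) * 0.2))                      # random 784-pixel firing rates
```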