
Table 4.

Average classification accuracy of different models with multi-condition training.

SNR      MLP              CNN              RNN              LSTM             SOM-SNN
Clean    96.10 ± 1.18%    97.60 ± 0.89%    94.30 ± 3.04%    98.15 ± 0.71%    99.80 ± 0.22%
20 dB    98.45 ± 0.61%    99.50 ± 0.22%    94.30 ± 2.70%    99.10 ± 0.89%    100.00 ± 0.00%
10 dB    99.35 ± 0.45%    99.70 ± 0.33%    95.25 ± 2.49%    99.05 ± 1.25%    100.00 ± 0.00%
0 dB     98.20 ± 1.45%    99.45 ± 0.75%    93.65 ± 2.82%    95.80 ± 3.93%    99.45 ± 0.55%
-5 dB    92.50 ± 1.53%    98.35 ± 0.78%    86.85 ± 5.20%    91.35 ± 4.82%    98.70 ± 0.48%
Average  96.92%           98.92%           92.87%           96.69%           99.59%

Experiments are conducted over 10 runs with random weight initialization.

The bold values indicate the best classification accuracy under each SNR condition.
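For reference, the "Average" row follows directly from the per-SNR means (the standard deviations are not involved). The short Python sketch below, with values copied from Table 4 and hypothetical variable names, recomputes the per-model averages and flags the best-scoring model at each SNR.

```python
# Sketch: recompute the "Average" row of Table 4 from the per-SNR means
# and report the best-performing model at each SNR (means only; the
# reported standard deviations are not used here).

means = {  # mean classification accuracy (%) per model, rows ordered as in Table 4
    "MLP":     [96.10, 98.45, 99.35, 98.20, 92.50],
    "CNN":     [97.60, 99.50, 99.70, 99.45, 98.35],
    "RNN":     [94.30, 94.30, 95.25, 93.65, 86.85],
    "LSTM":    [98.15, 99.10, 99.05, 95.80, 91.35],
    "SOM-SNN": [99.80, 100.00, 100.00, 99.45, 98.70],
}
snrs = ["Clean", "20 dB", "10 dB", "0 dB", "-5 dB"]

# Per-model average across the five noise conditions (the "Average" row).
for model, accs in means.items():
    print(f"{model:8s} average: {sum(accs) / len(accs):.2f}%")

# Best model at each SNR (ties reported together).
for i, snr in enumerate(snrs):
    best = max(accs[i] for accs in means.values())
    winners = [m for m, accs in means.items() if accs[i] == best]
    print(f"{snr:6s} best: {', '.join(winners)} ({best:.2f}%)")
```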