2020 Feb 13;11(11):2895–2906. doi: 10.1039/c9sc06145b

Investigated neural network (NN) structures, with the corresponding number of weights and biases, the storage size of the network, and the achieved accuracy. The quantized two-layer NN was used on the IoT nodes. The maximum difference in accuracy between the quantized NN and its full-precision counterpart was 0.07%, observed in both directions, i.e. both better and worse than the full-precision network.

| Network | Layers | Weights and biases | Computations per inference | Accuracy [%] |
|---|---|---|---|---|
| Deep six-layer NN by Cireşan et al.⁶⁴ | 784–2500–2000–1500–1000–500–10 | ∼12 million (∼46 MB) | ∼24 million | 99.65 |
| Large two-layer NN (15 epochs) | 784–800–10 | 636 010 (∼2.5 MB) | 1 270 400 | 98.3 |
| Small two-layer NN (5 epochs) | 784–64–10 | 50 890 (∼200 kB) | 101 632 | 97 |
| Two-layer NN on small images (5 epochs) | 196–32–10 | 6634 (∼26 kB) | 13 184 | 95.00 ± 0.17 |
| Quantized two-layer NN on small images | 196–32–10 | 6634 (∼6.5 kB) | 13 184 | 94.99 ± 0.16 |
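The counts in the table follow from standard bookkeeping for fully connected networks: the weight count is the sum of products of consecutive layer widths, the bias count is the sum of all non-input layer widths, and each weight contributes one multiply and one add per inference. A minimal sketch (the helper name `nn_stats` and the assumption of 4-byte full-precision vs. 1-byte quantized parameters are ours, not from the paper):

```python
def nn_stats(layers, bytes_per_param=4):
    """Count parameters, storage size, and multiply/add operations
    per inference for a fully connected network given its layer widths."""
    # one weight per connection between consecutive layers
    weights = sum(a * b for a, b in zip(layers, layers[1:]))
    # one bias per neuron in every non-input layer
    biases = sum(layers[1:])
    params = weights + biases
    storage_bytes = params * bytes_per_param
    # each weight contributes one multiply and one add
    ops = 2 * weights
    return params, storage_bytes, ops

# Two-layer NN on small images (full precision): 6634 params, ~26 kB, 13 184 ops
print(nn_stats([196, 32, 10]))                     # (6634, 26536, 13184)
# Quantized to one byte per parameter: same counts, ~6.5 kB
print(nn_stats([196, 32, 10], bytes_per_param=1))  # (6634, 6634, 13184)
# Large two-layer NN: 636 010 params, ~2.5 MB, 1 270 400 ops
print(nn_stats([784, 800, 10]))                    # (636010, 2544040, 1270400)
```

The same function reproduces the remaining rows, e.g. 50 890 parameters for the 784–64–10 network and roughly 12 million for the deep six-layer network.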