
Table 1. Performance on benchmark datasets compared to published results on sparse networks.

We replicate the published architecture in each case for a fair comparison. For the MNIST and CIFAR-10 datasets, Mocanu et al. (2018) [29] used three sparsely connected layers of 1000 neurons each, with 4% of possible connections present; Pieterse & Mocanu (2019) [30] used the same architecture for the COIL-100 dataset. For the Fashion-MNIST dataset, Pieterse & Mocanu (2019) [30] used three sparsely connected layers of 200 neurons each, with 20% of possible connections present. A minimal sketch of this sparse-layer pattern is given after the table.

Dataset             Training   Test     Classes   Control accuracy (source)   Normalisation accuracy
MNIST [40]          60,000     10,000   10        98.74% [29]                 99.63%
Fashion-MNIST [41]  60,000     10,000   10        89.01% [30]                 92.23%
CIFAR-10 [67]       50,000     10,000   10        74.84% [29]                 77.43%
COIL-100 [68]       5,764      1,436    100       98.68% [30]                 98.47%
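
To make the replicated architecture concrete, the sketch below shows the MNIST/CIFAR-10 configuration (three sparse hidden layers of 1000 neurons at 4% connectivity) as a PyTorch model. This is an illustrative sketch only: the published results use sparse evolutionary training (Mocanu et al., 2018 [29]), which rewires connections during training, whereas this version fixes a random binary mask at construction. The `SparseLinear` class, the `density` parameter, and the treatment of the output layer as sparse are all assumptions introduced here for illustration.

```python
import torch
import torch.nn as nn

class SparseLinear(nn.Module):
    """Linear layer with a fixed random binary mask (hypothetical helper).

    Keeps roughly `density` of the possible input-output connections;
    masked weights are zeroed at every forward pass.
    """
    def __init__(self, in_features: int, out_features: int, density: float = 0.04):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))
        # Fixed connectivity mask: ~`density` of entries are nonzero.
        mask = (torch.rand(out_features, in_features) < density).float()
        self.register_buffer("mask", mask)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return nn.functional.linear(x, self.weight * self.mask, self.bias)

# Layer sizes matching the MNIST setup described in the caption:
# three sparse hidden layers of 1000 neurons, 4% of connections present.
model = nn.Sequential(
    SparseLinear(784, 1000), nn.ReLU(),
    SparseLinear(1000, 1000), nn.ReLU(),
    SparseLinear(1000, 1000), nn.ReLU(),
    SparseLinear(1000, 10),
)
```

The Fashion-MNIST configuration from the caption would follow the same pattern with 200-neuron layers and `density=0.2`. Note that a static mask reproduces only the topology statistics (layer widths and connection density), not the evolutionary rewiring that the cited control results depend on.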