Front Neurosci. 2018 Aug 31;12:608. doi: 10.3389/fnins.2018.00608

Table 3.

CIFAR10 final train and test accuracy after 100 training epochs on a 5-layer network.

| Method | | conv1 | conv2 | conv3 | FC1 | FC2 |
| --- | --- | --- | --- | --- | --- | --- |
| LEL(SYM) | Train | 49.8 ± 0.8% | 98.8 ± 0.2% | 100.0 ± 0.0% | 100.0 ± 0.0% | 100.0 ± 0.0% |
| | Test | 47.3 ± 1.0% | 71.0 ± 0.3% | 78.7 ± 0.3% | 79.0 ± 0.3% | 78.9 ± 0.2% |
| LEL(SYM)-Fix 1 | Train | 22.0 ± 0.8% | 91.2 ± 1.0% | 100.0 ± 0.0% | 100.0 ± 0.0% | 100.0 ± 0.0% |
| | Test | 22.5 ± 0.7% | 66.5 ± 0.5% | 75.2 ± 0.2% | 75.5 ± 0.1% | 75.4 ± 0.1% |
| LEL(SYM)-Fix 2 | Train | 22.7 ± 1.0% | 27.0 ± 1.3% | 99.2 ± 0.1% | 100.0 ± 0.0% | 100.0 ± 0.0% |
| | Test | 23.1 ± 0.9% | 27.4 ± 1.3% | 68.2 ± 0.2% | 68.5 ± 0.1% | 68.3 ± 0.2% |
| LEL(SYM)+DO | Train | 46.3 ± 0.7% | 75.2 ± 0.9% | 87.2 ± 0.7% | 88.8 ± 0.6% | 89.1 ± 0.6% |
| | Test | 44.8 ± 0.9% | 67.8 ± 0.8% | 76.4 ± 0.5% | 78.1 ± 0.5% | 78.1 ± 0.4% |
| LEL(SYM)/GI | Train | 50.0 ± 0.4% | 98.7 ± 0.3% | 100.0 ± 0.0% | 100.0 ± 0.0% | 100.0 ± 0.0% |
| | Test | 47.5 ± 0.3% | 71.1 ± 0.3% | 78.6 ± 0.2% | 78.8 ± 0.3% | 78.6 ± 0.3% |
| LEL(SYM)+DO/GI | Train | 46.6 ± 1.1% | 75.6 ± 0.7% | 87.4 ± 0.5% | 89.0 ± 0.4% | 89.2 ± 0.4% |
| | Test | 44.8 ± 1.3% | 68.3 ± 0.5% | 76.6 ± 0.5% | 78.5 ± 0.2% | 78.3 ± 0.4% |
| LEL(SCFB)+DO | Train | 39.8 ± 1.0% | 72.4 ± 0.9% | 85.4 ± 0.8% | 87.7 ± 0.6% | 88.1 ± 0.7% |
| | Test | 38.5 ± 1.2% | 65.4 ± 0.5% | 75.6 ± 0.6% | 78.1 ± 0.5% | 78.3 ± 0.6% |
| LEL(SCFB) | Train | 43.5 ± 0.9% | 96.5 ± 0.3% | 100.0 ± 0.0% | 100.0 ± 0.0% | 100.0 ± 0.0% |
| | Test | 41.4 ± 0.9% | 66.9 ± 0.3% | 77.1 ± 0.1% | 78.0 ± 0.3% | 77.9 ± 0.3% |
| LEL(TLC)+DO | Train | 70.9 ± 2.1% | 86.5 ± 2.5% | 97.2 ± 1.2% | 98.2 ± 0.7% | 97.6 ± 1.0% |
| | Test | 62.4 ± 0.7% | 73.8 ± 0.8% | 78.0 ± 1.2% | 79.2 ± 1.1% | 79.1 ± 1.2% |
| LEL(TLC) | Train | 80.8 ± 9.8% | 99.8 ± 0.3% | 100.0 ± 0.0% | 100.0 ± 0.0% | 100.0 ± 0.0% |
| | Test | 63.3 ± 1.6% | 75.2 ± 2.8% | 80.0 ± 2.0% | 80.5 ± 1.9% | 80.5 ± 1.9% |
| FA | Train | – | – | – | – | 99.6 ± 0.2% |
| | Test | – | – | – | – | 62.0 ± 1.6% |
| BP | Train | – | – | – | – | 100.0 ± 0.0% |
| | Test | – | – | – | – | 84.4 ± 0.1% |
| BP+DO | Train | – | – | – | – | 99.6 ± 0.06% |
| | Test | – | – | – | – | 84.0 ± 0.4% |

When learning with local errors, the accuracies of the local classifiers in all layers are reported; for FA and BP, only the accuracy at the network output is available. Means and standard deviations are from 4 runs. LEL, Local error learning; SYM, Symmetric feedback weights; SCFB, Sign-concordant feedback weights; TLC, Trainable local classifier; DO, Dropout; FA, Feedback alignment; BP, Backpropagation; GI, Gaussian initialization of local classifier weights. For local error learning, local classifier weights were initialized from a uniform distribution, except where GI is indicated. "Fix n" means that the parameters of the first n convolutional layers in the network were random and non-trainable.
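To make the training scheme behind these numbers concrete, the following is a minimal sketch of local error learning with fixed random local classifiers. It uses a toy two-layer fully connected network and a synthetic linear-teacher task in place of the paper's 5-layer convolutional network on CIFAR10; the dimensions, learning rate, and task are illustrative assumptions, not the paper's setup. The defining property shown here is that each layer is trained only by the error of its own local classifier, with no error propagated across layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

n_in, n_hid, n_classes, n_samples = 20, 64, 10, 512

# Synthetic task (assumption, not the paper's data): labels come from a
# random linear teacher applied to Gaussian inputs.
X = rng.normal(size=(n_samples, n_in))
teacher = rng.normal(size=(n_in, n_classes))
y = (X @ teacher).argmax(axis=1)
Y = np.eye(n_classes)[y]                      # one-hot targets

W1 = rng.normal(0, 0.1, (n_in, n_hid))        # trainable layer 1
W2 = rng.normal(0, 0.1, (n_hid, n_hid))       # trainable layer 2
M1 = rng.normal(0, 0.1, (n_hid, n_classes))   # fixed random local classifier 1
M2 = rng.normal(0, 0.1, (n_hid, n_classes))   # fixed random local classifier 2

def local_accuracies():
    """Accuracy of each layer's local classifier, as reported in Table 3."""
    h1 = relu(X @ W1)
    h2 = relu(h1 @ W2)
    a1 = ((h1 @ M1).argmax(axis=1) == y).mean()
    a2 = ((h2 @ M2).argmax(axis=1) == y).mean()
    return a1, a2

acc1_init, acc2_init = local_accuracies()

lr = 1.0
for step in range(500):
    # Layer 1: forward pass, local softmax/cross-entropy gradient through
    # the fixed classifier M1 only.
    h1 = relu(X @ W1)
    d1 = ((softmax(h1 @ M1) - Y) @ M1.T) * (h1 > 0)
    # Layer 2 treats h1 as a constant input: no error flows back across
    # the layer boundary, only through the local classifier M2.
    h2 = relu(h1 @ W2)
    d2 = ((softmax(h2 @ M2) - Y) @ M2.T) * (h2 > 0)
    W1 -= lr * (X.T @ d1) / n_samples
    W2 -= lr * (h1.T @ d2) / n_samples

acc1_final, acc2_final = local_accuracies()
print(acc1_final, acc2_final)
```

Freezing W1 (leaving it at its random initialization) while still training W2 corresponds to the "Fix 1" rows above; replacing the fixed M1, M2 with trained classifiers corresponds to the TLC variant.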