Front Neurosci. 2018 Aug 31;12:608. doi: 10.3389/fnins.2018.00608

Table 5.

CIFAR10 final train and test accuracy after 100 training epochs at the last 5 layers of a 10-layer network.

                       conv6          conv7          conv8          conv9          conv10
LEL(SYM)         Train 99.9 ± 0.0%    100.0 ± 0.0%   100.0 ± 0.0%   100.0 ± 0.0%   100.0 ± 0.0%
                 Test  75.9 ± 0.2%    75.9 ± 0.3%    76.3 ± 0.3%    75.7 ± 0.3%    75.7 ± 0.4%
LEL(SYM)-Fix 1   Train 100.0 ± 0.0%   100.0 ± 0.0%   100.0 ± 0.0%   100.0 ± 0.0%   100.0 ± 0.0%
                 Test  75.3 ± 0.1%    75.2 ± 0.2%    75.6 ± 0.2%    75.0 ± 0.2%    74.9 ± 0.2%
LEL(SYM)-Fix 2   Train 100.0 ± 0.0%   100.0 ± 0.0%   100.0 ± 0.0%   100.0 ± 0.0%   100.0 ± 0.0%
                 Test  71.0 ± 1.0%    70.8 ± 0.8%    71.2 ± 0.8%    70.6 ± 0.8%    70.5 ± 0.7%
LEL(SYM)+DO      Train 66.2 ± 2.4%    69.9 ± 2.4%    70.5 ± 2.2%    67.8 ± 2.2%    67.9 ± 2.3%
                 Test  61.5 ± 2.1%    63.2 ± 2.1%    64.2 ± 1.9%    62.7 ± 1.9%    62.8 ± 2.0%
LEL(SYM)/GI      Train 99.9 ± 0.0%    100.0 ± 0.0%   100.0 ± 0.0%   100.0 ± 0.0%   100.0 ± 0.0%
                 Test  76.0 ± 0.3%    76.1 ± 0.5%    76.2 ± 0.5%    75.7 ± 0.6%    75.6 ± 0.6%
LEL(SYM)+DO/GI   Train 66.4 ± 4.7%    70.0 ± 4.7%    69.6 ± 5.0%    66.9 ± 5.2%    66.9 ± 5.2%
                 Test  61.4 ± 4.2%    63.1 ± 3.7%    63.1 ± 3.9%    61.6 ± 4.2%    61.6 ± 4.3%
LEL(SCFB)+DO     Train 60.1 ± 6.7%    63.2 ± 5.6%    63.8 ± 5.8%    61.7 ± 5.9%    61.8 ± 6.0%
                 Test  56.2 ± 6.0%    57.8 ± 4.8%    58.5 ± 4.9%    57.4 ± 5.3%    57.6 ± 5.4%
LEL(SCFB)        Train 99.2 ± 0.1%    99.8 ± 0.0%    99.9 ± 0.0%    99.9 ± 0.0%    99.9 ± 0.0%
                 Test  74.1 ± 0.4%    75.1 ± 0.1%    75.3 ± 0.2%    74.9 ± 0.2%    74.7 ± 0.1%
LEL(TLC)+DO      Train 93.2 ± 1.9%    97.0 ± 1.4%    96.0 ± 1.5%    94.5 ± 1.9%    94.5 ± 1.9%
                 Test  79.4 ± 1.5%    80.5 ± 1.4%    80.9 ± 1.3%    80.8 ± 1.4%    80.8 ± 1.3%
LEL(TLC)         Train 100.0 ± 0.0%   100.0 ± 0.0%   100.0 ± 0.0%   100.0 ± 0.0%   100.0 ± 0.0%
                 Test  82.7 ± 0.5%    82.9 ± 0.4%    83.1 ± 0.5%    82.9 ± 0.5%    82.9 ± 0.5%
FA               Train 76.7 ± 4.0%
                 Test  51.3 ± 1.5%
BP               Train 100.0 ± 0.0%
                 Test  86.7 ± 0.3%
BP + DO          Train 98.4 ± 0.4%
                 Test  87.3 ± 0.5%

When learning using local errors, the accuracy of the local classifier at each of the last five layers (conv6–conv10) is reported. Values are mean ± standard deviation over 4 runs. LEL, Local error learning; SYM, Symmetric feedback weights; SCFB, Sign-concordant feedback weights; TLC, Trainable local classifier; DO, Dropout; FA, Feedback alignment; BP, Backpropagation; GI, Gaussian initialization of local classifier weights. For local error learning, local classifier weights were initialized from a uniform distribution, except in the cases where GI is indicated. "Fix n" means the parameters of the first n convolutional layers of the network were kept random and non-trainable.
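To make the local-error configurations in the table easier to parse, the following is a minimal sketch, in PyTorch, of the basic idea: each convolutional block carries its own local classifier, the local classification loss updates only that block, and the activation passed onward is detached so no error signal crosses block boundaries. This is an illustration under stated assumptions, not the paper's exact architecture: the class and function names are hypothetical, activations are average-pooled before the local classifier, and the SYM/SCFB feedback-weight variants and dropout placement are not modeled.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalErrorConvBlock(nn.Module):
    """Sketch of one conv block trained with a local classification error.

    The block's weights receive gradients only from its own local classifier;
    the activation handed to the next block is detached, so no gradient
    propagates across blocks. Pooling before the classifier is an assumption
    made here for brevity.
    """

    def __init__(self, in_ch, out_ch, num_classes, trainable_classifier=False):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        # Local classifier; kept at its random initialization unless the
        # trainable-local-classifier (TLC-style) option is enabled.
        self.classifier = nn.Linear(out_ch, num_classes)
        if not trainable_classifier:
            for p in self.classifier.parameters():
                p.requires_grad = False

    def forward(self, x, targets=None):
        h = F.relu(self.conv(x))
        local_loss = None
        if targets is not None:
            pooled = F.adaptive_avg_pool2d(h, 1).flatten(1)
            local_loss = F.cross_entropy(self.classifier(pooled), targets)
        # Detach: the next block sees the activations but gets no gradient path.
        return h.detach(), local_loss


# Usage sketch: summing the per-block losses still keeps learning local,
# because each loss can only reach its own block's parameters.
if __name__ == "__main__":
    blocks = nn.ModuleList([
        LocalErrorConvBlock(3, 32, num_classes=10),
        LocalErrorConvBlock(32, 64, num_classes=10),
    ])
    opt = torch.optim.Adam(blocks.parameters(), lr=1e-3)
    x = torch.randn(8, 3, 32, 32)           # a CIFAR10-sized batch
    y = torch.randint(0, 10, (8,))
    total_loss = 0.0
    for block in blocks:
        x, loss = block(x, y)
        total_loss = total_loss + loss
    opt.zero_grad()
    total_loss.backward()
    opt.step()
```

In this reading, the "Fix n" rows correspond to leaving the first n blocks' convolutional weights at their random initialization, and the backpropagation (BP) and feedback alignment (FA) baselines instead train the whole stack from a single output error.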