
Table 1.

Test accuracy of a linear classifier trained on CIFAR10 embeddings from the base encoder trained with self-supervised learning (SSL) on CIFAR100.

Loss           Learning   Update    Acc.      Acc. (+ grad. block and 5 negatives)
Contr. Hinge   E2E        BP        71.44%    70.35%
Contr. Hinge   E2E        DTP       71.29%    67.74%
Contr. Hinge   E2E        RF        61.70%    63.61%
Contr. Hinge   GLL        BP/URF    72.76%    71.14%
Contr. Hinge   GLL        RF        67.83%    66.52%
Contr. Hinge   RLL        BP/URF    71.35%    71.17%
Contr. Hinge   RLL        RF        65.94%    65.49%
SimCLR         E2E        BP        72.44%    N/A
CLAPP          E2E        BP        69.05%    N/A
CLAPP          GLL        N/A       68.93%    N/A
Rnd. encoder                        61.23%

We compare our contrastive hinge loss (Contr. Hinge) with SimCLR and with an encoder with randomly generated weights (Rnd. encoder). We also compare results from three learning methods, End-to-End (E2E), Greedy Layer-wise Learning (GLL), and Randomized Layer-wise Learning (RLL), and four updating methods, back-propagation (BP), Updated Random Feedback (URF), Random Feedback (RF), and Difference Target Propagation (DTP). We further compare the models with and without gradient blocking and with a reduced number of negatives (five), as well as the CLAPP loss (Illing et al., 2021) with our deformations.
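The central contrast in the table is between end-to-end training and greedy layer-wise learning, in which gradients are blocked between blocks and each block is trained on its own local contrastive loss. The sketch below is not the authors' code: the architecture, block sizes, optimizer, and the exact form of the contrastive hinge loss are illustrative assumptions; it only shows how a gradient block between locally trained blocks can be realized.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy convolutional encoder split into blocks so that each block can be
# trained with its own local loss (GLL). Sizes are illustrative only.
blocks = nn.ModuleList([
    nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU()),
    nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU()),
    nn.Sequential(nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU()),
])
opts = [torch.optim.SGD(b.parameters(), lr=1e-2) for b in blocks]


def contrastive_hinge(z, z_pos, z_neg, margin=1.0):
    # Generic hinge loss on cosine similarities between an anchor, a positive,
    # and a negative embedding; the paper's exact formulation may differ.
    f = lambda t: F.normalize(t.flatten(1), dim=1)
    z, zp, zn = f(z), f(z_pos), f(z_neg)
    return torch.clamp(margin - (z * zp).sum(1) + (z * zn).sum(1), min=0.0).mean()


def gll_step(x, x_pos, x_neg):
    # Greedy layer-wise learning: each block minimizes its local loss, and the
    # activations are detached before being passed on, so no gradient crosses
    # block boundaries ("gradient block"). E2E training would instead
    # back-propagate a single global loss through the whole encoder.
    h, hp, hn = x, x_pos, x_neg
    for block, opt in zip(blocks, opts):
        h, hp, hn = block(h), block(hp), block(hn)
        loss = contrastive_hinge(h, hp, hn)
        opt.zero_grad()
        loss.backward()
        opt.step()
        h, hp, hn = h.detach(), hp.detach(), hn.detach()
    return h
```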
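The evaluation protocol described in the caption, freezing the encoder and training only a linear classifier on its embeddings, can be sketched as follows. This is a generic linear-probe loop, not the authors' evaluation code; `encode`, the data loaders, and `emb_dim` are assumed placeholders for the frozen encoder's forward pass and the CIFAR10 train/test splits.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def linear_probe(encode, train_loader, test_loader, emb_dim, n_classes=10, epochs=1):
    # Train a linear classifier on frozen embeddings (e.g., CIFAR10 images
    # passed through the CIFAR100-trained SSL encoder), then report test accuracy.
    clf = nn.Linear(emb_dim, n_classes)
    opt = torch.optim.SGD(clf.parameters(), lr=1e-2)
    for _ in range(epochs):
        for x, y in train_loader:
            with torch.no_grad():          # the encoder stays frozen
                z = encode(x).flatten(1)
            loss = F.cross_entropy(clf(z), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    correct = total = 0
    for x, y in test_loader:
        with torch.no_grad():
            pred = clf(encode(x).flatten(1)).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total                 # corresponds to the "Acc." column of Table 1
```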