Front Comput Neurosci. 2020 Oct 14;14:83. doi: 10.3389/fncom.2020.00083

Table 1.

Performance of our model compared with other non-episodic unsupervised feature learning methods on Omniglot and MiniImageNet.

| Methods | Clustering | Metric | Omniglot (5,1) | Omniglot (5,5) | Omniglot (20,1) | Omniglot (20,5) | MiniImageNet (5,1) | MiniImageNet (5,5) | MiniImageNet (5,20) | MiniImageNet (5,50) |
|---|---|---|---|---|---|---|---|---|---|---|
| Baseline | N/A | N/A | 57.97 | 79.25 | 34.17 | 59.33 | 25.91 | 32.38 | 37.01 | 38.95 |
| AutoEncoder | N/A | N/A | 53.63 | 77.34 | 32.98 | 55.01 | 26.17 | 33.01 | 37.98 | 39.39 |
| Denoising AutoEncoder | N/A | N/A | 59.63 | 79.89 | 34.78 | 60.88 | 27.81 | 34.19 | 39.01 | 40.11 |
| InfoGAN | N/A | N/A | 51.49 | 76.38 | 31.01 | 53.99 | 29.81 | 36.47 | 40.17 | 42.46 |
| BiGAN+KNN | N/A | N/A | 49.55 | 68.06 | 27.37 | 46.70 | 25.56 | 31.10 | 37.31 | 43.60 |
| BiGAN+LC | N/A | N/A | - | - | - | - | 27.08 | 33.91 | 44.00 | 50.41 |
| DeepClustering | Kmeans | Euclidean | 59.07 | 79.81 | 34.05 | 60.12 | 28.91 | 36.01 | 39.29 | 41.98 |
| UFLST | Kmeans | Euclidean | 69.54 | 86.18 | 47.11 | 69.19 | 31.77 | 43.03 | 51.35 | 55.72 |
| UFLST | DBSCAN | KRJD | **96.51** | **99.23** | **90.27** | **97.22** | **37.75** | **50.95** | **59.18** | **62.27** |

(M, K) denotes M-way K-shot classification. Baseline denotes training from scratch. Results based on BiGAN are adapted from Hsu et al. (2018). For complete results with confidence intervals, see Appendix 6. The best performances are in bold.
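To make the column headings concrete, the sketch below shows a generic M-way K-shot evaluation protocol: sample M classes per episode, build a class centroid from K support embeddings, and classify held-out query embeddings by nearest centroid. This is a minimal illustration of what numbers like (5,1) or (20,5) measure, not the paper's exact evaluation pipeline; the function name, episode counts, and nearest-centroid rule are assumptions for the example.

```python
import numpy as np

def few_shot_accuracy(embeddings, labels, M=5, K=1,
                      n_query=15, n_episodes=100, rng=None):
    """Estimate M-way K-shot accuracy over random episodes using a
    nearest-centroid classifier (a generic protocol, for illustration)."""
    rng = np.random.default_rng(rng)
    labels = np.asarray(labels)
    classes = np.unique(labels)
    accs = []
    for _ in range(n_episodes):
        # Sample M distinct classes for this episode.
        episode_classes = rng.choice(classes, size=M, replace=False)
        centroids, queries = [], []
        for c_idx, c in enumerate(episode_classes):
            idx = rng.permutation(np.where(labels == c)[0])
            # K support examples -> one class centroid.
            centroids.append(embeddings[idx[:K]].mean(axis=0))
            # Remaining examples serve as queries.
            queries.append((embeddings[idx[K:K + n_query]], c_idx))
        centroids = np.stack(centroids)  # shape (M, D)
        correct = total = 0
        for q, c_idx in queries:
            # Euclidean distance from each query to each centroid.
            d = np.linalg.norm(q[:, None, :] - centroids[None, :, :], axis=-1)
            correct += int((d.argmin(axis=1) == c_idx).sum())
            total += len(q)
        accs.append(correct / total)
    return float(np.mean(accs))
```

With well-separated features the estimate approaches 100%, mirroring how stronger unsupervised features raise the table's few-shot scores.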