Baldassi et al. 10.1073/pnas.0700324104.
Fig. 5. Performance of the modified perceptron algorithm with N = 4,001 synapses and K = 100 hidden states as the secondary threshold θm varies. (A) Convergence time, averaged over 50 samples of 0.5N patterns each. (B) Maximum achieved capacity (at least 90% successes on 25 samples, with a cutoff time of 1,000 presentations per pattern).
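For orientation, here is a minimal sketch of the kind of rule this legend refers to: binary weights taken as the signs of bounded integer hidden states, with the secondary threshold θm triggering an extra update when a pattern is classified correctly but with a small margin. The function name, the step size of 2, the exact form of the secondary update, and the way convergence is detected are assumptions made for illustration, not details taken from the figure.

```python
import numpy as np

def train_bounded_perceptron(xi, sigma, K=100, theta_m=10.0,
                             max_presentations=1000, seed=0):
    """Hypothetical sketch: perceptron-style learning on binary synapses
    whose weights are the signs of bounded integer hidden states.

    xi    : (P, N) array of +/-1 input patterns
    sigma : (P,)   array of +/-1 desired outputs
    Returns (binary weights, number of sweeps used).
    """
    rng = np.random.default_rng(seed)
    P, N = xi.shape
    h = rng.choice([-1, 1], size=N)              # integer hidden states
    for sweep in range(max_presentations):
        errors = 0
        for mu in rng.permutation(P):
            w = np.sign(h)                       # binary synaptic weights
            stability = sigma[mu] * np.dot(w, xi[mu])
            if stability <= 0:
                # misclassified: push every hidden state toward the target
                h += 2 * sigma[mu] * xi[mu]
                errors += 1
            elif stability <= theta_m:
                # correct but with a small margin: secondary update (assumed
                # form), reinforcing synapses that already agree with the target
                agree = w * xi[mu] * sigma[mu] > 0
                h[agree] += 2 * sigma[mu] * xi[mu][agree]
            np.clip(h, -(K - 1), K - 1, out=h)   # keep hidden states bounded
        if errors == 0:
            return np.sign(h), sweep + 1         # all patterns learned
    return np.sign(h), max_presentations         # cutoff reached
```

In this reading, the sweep count returned on success would play the role of the convergence time in panel A, and the cutoff of 1,000 presentations per pattern corresponds to `max_presentations`.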
Fig. 6. Optimal value of ps as a function of α. For every α ∈ {0, 0.05, . . . , 1}, 10 sets of random patterns were classified by the stochastic belief propagation-inspired (SBPI) algorithm with ps ∈ {0, 0.1, . . . , 1}. Crosses show the value of ps that achieves the minimum mean convergence time, as a function of α. The line is a linear fit.
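The legend describes, for each α, selecting the ps that minimizes the mean convergence time over a grid and then fitting a line. Below is a small sketch of just that selection-and-fit step, assuming the convergence times have already been measured and stored in an array; `optimal_ps_fit` and its arguments are illustrative names, not from the paper. In the SBPI rule itself, ps would enter as the probability of performing the secondary (barely correct) update, e.g. by gating the `elif` branch of the sketch above with a coin flip of probability ps.

```python
import numpy as np

def optimal_ps_fit(mean_times, alphas, ps_grid):
    """mean_times[i, j]: mean convergence time measured at alpha = alphas[i]
    and ps = ps_grid[j].  Returns the minimizing ps for each alpha and the
    slope/intercept of a linear fit of ps_opt against alpha, as in Fig. 6."""
    mean_times = np.asarray(mean_times, dtype=float)
    ps_opt = np.asarray(ps_grid)[np.argmin(mean_times, axis=1)]
    slope, intercept = np.polyfit(alphas, ps_opt, 1)   # linear fit
    return ps_opt, slope, intercept

# Grids matching those quoted in the legend
alphas = np.arange(0.0, 1.0 + 1e-9, 0.05)   # {0, 0.05, ..., 1}
ps_grid = np.arange(0.0, 1.0 + 1e-9, 0.1)   # {0, 0.1, ..., 1}
```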
Fig. 7. Distribution of hidden variables after convergence in the belief propagation-inspired algorithm for α = 0.3. (A) Effect of the bounds imposed on the hidden variable h on its distribution, shown for N = 64,001. The number of hidden states is indicated. Histogram bin width, 20. (B) The width of the distribution is proportional to √N. Histograms obtained with different values of N are shown rescaled by 1/√N.
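A brief sketch of the rescaling described in panel B, under the assumption that one has the converged hidden-state vector for each value of N: dividing h by √N before histogramming collapses the distributions if their width indeed grows as √N. The function name and binning are arbitrary choices for illustration.

```python
import numpy as np

def rescaled_histogram(h, bins=50):
    """Histogram of converged hidden states rescaled by 1/sqrt(N), so that
    curves obtained at different N can be overlaid as in Fig. 7B."""
    h = np.asarray(h, dtype=float)
    scaled = h / np.sqrt(h.size)            # h.size = N, the number of synapses
    density, edges = np.histogram(scaled, bins=bins, density=True)
    return density, edges
```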
Fig. 8. Optimal number of hidden states vs. N for SBPI and SBPI01. The number of samples ranges from 100 to 20. The fits show that the scaling is close to √N in both cases. The fit parameters are as follows: SBPI coefficient, 1.15 ± 0.06; SBPI exponent, 0.521 ± 0.005; SBPI01 coefficient, 0.87 ± 0.02; SBPI01 exponent, 0.502 ± 0.003.
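The quoted coefficients and exponents are the kind of parameters produced by a power-law fit Kopt ≈ c·N^γ; a minimal sketch of such a fit (linear regression in log-log space) is given below. The function name and the use of np.polyfit are assumptions; the legend does not specify how the fit was performed.

```python
import numpy as np

def power_law_fit(N_values, K_opt):
    """Fit K_opt ~ c * N**gamma by least squares in log-log space and return
    (c, gamma); an exponent near 0.5 corresponds to sqrt(N) scaling."""
    logN = np.log(np.asarray(N_values, dtype=float))
    logK = np.log(np.asarray(K_opt, dtype=float))
    gamma, logc = np.polyfit(logN, logK, 1)
    return np.exp(logc), gamma
```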
Fig. 9. Robustness to noise of various algorithms. In all tests, we used N = 4,001 synapses trained on 0.55N patterns. Red lines, SBPI with parameter ps = 0.4; green lines, standard perceptron; blue lines, modified perceptron with the optimal value of θm. Results for both the bounded (K = 100) and unbounded cases are shown. Points were obtained by averaging over 25 samples for protocol 1 and 100 samples for protocol 2. (A) Protocol 1, unbounded case. (B) Protocol 1, bounded case. (C) Protocol 2, unbounded case. (D) Protocol 2, bounded case.