Proc Natl Acad Sci U S A. 2020 Nov 11;117(47):29948–29958. doi: 10.1073/pnas.1918674117

Fig. 4.

Sparse sequences with a nonlinear learning rule. (A) Probability density of the Gaussian input pattern ξ (black), and the step function f that binarizes the input patterns prior to storage in the connectivity matrix (blue). (B) Firing rate of several representative units as a function of time. (C) Firing rates of 4,000 neurons (out of 40,000) as a function of time, with "silent" neurons shown on top and active neurons on the bottom, sorted by time of peak firing rate. (D) Correlation of network activity with each stored pattern. (E) Average population rate as a function of "coding level" (the probability that an input pattern exceeds x_f). The average of f(x) is held fixed by varying q_f together with x_f. In red, the average of f(x) is constrained to 0.15, its value in A–D; in black, the average of f(x) is fixed to zero. All other parameters are as in A–D. For A–D, the parameters of the learning rule were x_f = 1.645, x_g = 1.645, q_f = 0.8, and q_g = 0.95, with S = 1 and P = 30.
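As a minimal sketch of the quantities in the caption (not the paper's actual code): the coding level is the Gaussian tail probability P(ξ > x_f), and a binarizing step function f can take the value q_f above threshold, with the below-threshold value chosen so that the mean of f(ξ) matches a target (0.15 in panels A–D). The below-threshold value and the function names here are illustrative assumptions.

```python
import math

def coding_level(x_f):
    """P(xi > x_f) for a standard Gaussian input pattern xi."""
    return 0.5 * (1.0 - math.erf(x_f / math.sqrt(2.0)))

def step_f(x, x_f=1.645, q_f=0.8, target_mean=0.15):
    """Binarizing step function: returns q_f above threshold x_f.
    The below-threshold value b is an assumed free parameter, set so that
    E[f(xi)] = c * q_f + (1 - c) * b equals target_mean, where c is the
    coding level. target_mean=0 reproduces the zero-mean case in panel E."""
    c = coding_level(x_f)
    b = (target_mean - c * q_f) / (1.0 - c)
    return q_f if x > x_f else b

# With x_f = 1.645 (the 95th percentile of N(0,1)), the coding level is ~0.05.
c = coding_level(1.645)
mean_f = c * step_f(2.0) + (1 - c) * step_f(0.0)  # E[f(xi)] by construction
```

With these parameters the constraint fixes the sub-threshold value at roughly 0.116, so the stored patterns are sparse but not strictly binary {0, q_f}; any scheme that holds the mean of f fixed while x_f (and hence the coding level) varies would produce the comparison shown in panel E.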