Front Neurosci. 2014 Jan 30;7:272. doi: 10.3389/fnins.2013.00272

Figure 9.

The recurrent structure of the network allows it to classify, reconstruct, and infer from partial evidence. (A) Raster plot of an experiment illustrating these features. Before time 0 s, the neural RBM runs freely, with no input. Due to the stochasticity in the network, the activity wanders from attractor to attractor. At time t = 0 s, the digit 3 is presented (i.e., layer v_d is driven by the data d), activating the correct class label in v_c. At time t = 0.3 s, the class neurons associated with 5 are clamped to high activity and the remaining class neurons are strongly inhibited, driving the network to reconstruct its version of that digit in layer v_d. At time t = 0.6 s, the right half of a digit 8 is presented, and the class neurons are stimulated such that only 3 or 6 can activate (all others are strongly inhibited, as indicated by the gray shading). Because the stimulus is inconsistent with 6, the network settles on a 3 and attempts to reconstruct it. The top panels show the digits reconstructed in layer v_d. (B) Digits 0–9, reconstructed in the same manner. The columns correspond to clamping the class labels 0–9, and each column is a different, independent run. (C) Population firing rate for the experiment presented in (A). The network activity typically reaches equilibrium after about 10τ_r = 40 ms (black bar).
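To connect the clamping protocol described in the caption to the underlying operation, below is a minimal sketch of inference-by-clamping in an ordinary (non-spiking) binary RBM, using NumPy Gibbs sampling. All names (W, b_vis, b_hid, run_clamped) and the layer sizes are illustrative assumptions, and the random weight matrix is a placeholder where a trained model would normally be loaded; the paper's network is a spiking implementation of this idea, not this code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: 784 data units + 10 class units in the visible layer, 500 hidden.
N_DATA, N_CLASS, N_HID = 784, 10, 500
N_VIS = N_DATA + N_CLASS

# Placeholder parameters; in practice these come from contrastive-divergence training.
W = rng.normal(0.0, 0.01, size=(N_VIS, N_HID))
b_vis = np.zeros(N_VIS)
b_hid = np.zeros(N_HID)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v):
    """One alternating Gibbs update: sample hidden given visible, then visible given hidden."""
    h = (sigmoid(v @ W + b_hid) > rng.random(N_HID)).astype(float)
    v = (sigmoid(h @ W.T + b_vis) > rng.random(N_VIS)).astype(float)
    return v

def run_clamped(v0, clamp_mask, clamp_vals, n_steps=100):
    """Gibbs sampling while holding the clamped visible units fixed,
    mirroring how the caption drives v_d with a digit or v_c with a label."""
    v = v0.copy()
    for _ in range(n_steps):
        v[clamp_mask] = clamp_vals   # re-impose the clamp on every step
        v = gibbs_step(v)
    v[clamp_mask] = clamp_vals
    return v

# Reconstruction by clamping the class label "5" (caption, t = 0.3 s):
v0 = (rng.random(N_VIS) < 0.5).astype(float)
clamp = np.zeros(N_VIS, dtype=bool)
clamp[N_DATA:] = True               # clamp all 10 class units
label = np.zeros(N_CLASS)
label[5] = 1.0                      # unit for "5" held high, the rest inhibited
v_final = run_clamped(v0, clamp, label)
digit_reconstruction = v_final[:N_DATA]  # read out the reconstructed digit
```

Classification (panel A, t = 0 s) is the same loop with the data units clamped instead, reading the label off `v_final[N_DATA:]`; pattern completion (t = 0.6 s) clamps only the known half of the data units together with the allowed class units.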