The spiking neural network learns a generative model of the MNIST dataset using the event-driven CD procedure. (A) Learning curve, shown here up to 10,000 samples. (B) Details of the training procedure, before and after training (20,000 samples). During the first half of each 0.1s epoch, the visible layer v is driven by the sensory layer, and the gating variable g is +1, so the synapses undergo LTP. During the second half of each epoch, the sensory stimulus is removed and g is set to −1, so the synapses undergo LTD. The top panels of both figures show the mean of the entries of the weight matrix. The second panels show the modulatory signal g(t). The third panels show the synaptic currents of a visible neuron, where Ih is the current caused by feedback from the hidden layer and the bias, and Id is the current driven by the data. The timing of the clamping (Id) and of g differ because of an interval τbr during which no weight update is undertaken, in order to avoid transients (see section 2). Before learning, and during the reconstruction phase, the activity of the visible layer is random. As learning progresses, however, the activity in the visible layer during the reconstruction phase comes to reflect the presented data. This is particularly visible in the class label neurons vc, whose activity persists after the sensory stimulus is removed. Although the firing rates of the hidden layer neurons before training are high (113 Hz on average), this merely reflects the initial conditions of the recurrent couplings W. By the end of training, the activity in both layers becomes much sparser (average firing rate 9.31 Hz).
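A minimal sketch of the per-epoch schedule described in the caption, assuming an illustrative value for τbr and hypothetical names (T, tau_br, gating_signal); it is not the paper's implementation, only one way to realize the clamp/gate timing: data clamped with g = +1 in the first half of each 0.1 s epoch, free reconstruction with g = −1 in the second half, and g = 0 for an interval τbr after each phase switch so that no weight update occurs during transients.

```python
import numpy as np

T = 0.1        # epoch duration (s), as in the caption
tau_br = 0.01  # assumed transient interval (s); the actual value is given in section 2

def gating_signal(t):
    """Return (g, clamp) at time t: g in {+1, 0, -1}; clamp = sensory input on/off."""
    phase = t % T
    clamp = phase < T / 2                          # data drives v during first half-epoch
    if phase < tau_br or (T / 2 <= phase < T / 2 + tau_br):
        g = 0                                      # suppress weight updates during transients
    else:
        g = +1 if clamp else -1                    # LTP while clamped, LTD during reconstruction
    return g, clamp

# Example: sample the schedule at 1 ms resolution over one epoch
ts = np.arange(0.0, T, 1e-3)
gs = np.array([gating_signal(t)[0] for t in ts])
```

Sampling g over one epoch this way reproduces the offset between the clamping signal and g shown in the second and third panels: the gate switches sign only after the τbr transient has elapsed.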