Proc Natl Acad Sci U S A. 2019 Mar 29;116(16):7723–7731. doi: 10.1073/pnas.1820458116

Fig. 2.

The pipeline of the training algorithm. Inputs v_i are converted to a set of input currents I_μ. These currents define the dynamics (Eq. 8) that lead to the steady-state activations of the hidden units. These activations are used to update the synapses using the learning rule (Eq. 3). The learning activation function changes sign at h*, which separates the Hebbian and anti-Hebbian learning regimes. The second term in the plasticity rule (Eq. 3), the product of the input current I_μ and the weight W_{μi}, corresponds to another path from the data to the synapse update; this path is not shown here and does not go through Eq. 8.
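A minimal sketch of one training step may help make the pipeline concrete. The code below is not the paper's implementation; it assumes simple placeholder forms for the pieces named in the caption: currents I_μ = Σ_i W_{μi} v_i, the steady state of the dynamics (Eq. 8) approximated by the currents themselves, a learning activation function g that changes sign at h*, and a plasticity rule whose second term is the product I_μ W_{μi}.

```python
import numpy as np

def g(h, h_star=0.5, delta=0.4):
    """Hypothetical learning activation function: positive (Hebbian) above
    the threshold h_star, negative (anti-Hebbian) below it. The exact form
    used in the paper is not reproduced here."""
    return np.where(h >= h_star, 1.0, -delta)

def training_step(W, v, lr=1e-3):
    """One step of the Fig. 2 pipeline under the stated assumptions:
    inputs v -> currents I -> steady-state activations h -> weight update."""
    I = W @ v                 # input currents I_mu driven by the inputs v_i
    h = I                     # placeholder for the steady state of Eq. 8
    # Plasticity update: Hebbian term g(h_mu) * v_i minus the second term
    # g(h_mu) * I_mu * W_mu_i (the product of current and weight from the caption).
    dW = g(h)[:, None] * (v[None, :] - I[:, None] * W)
    return W + lr * dW

# Example usage on random data.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(20, 784))   # 20 hidden units, 784 inputs
v = rng.random(784)                          # one input vector
W = training_step(W, v)
```

With these placeholder choices, units whose activation exceeds the threshold move their weights toward the current input, while units below it move away, illustrating the two regimes separated by h*.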