PLoS ONE. 2013 Aug 2;8(8):e69952. doi: 10.1371/journal.pone.0069952

Figure 1. Transformation-invariance with the CT learning mechanism.


In the initial position, at the time of the first transform (t1), the input neurons randomly activate a set of postsynaptic neurons (due to the random synaptic weight initialisation), and the synaptic connections between the active input and output neurons are strengthened through Hebbian learning. If the second transform, at t2, is similar enough to the first, the same postsynaptic neurons will be encouraged to fire by some of the same connections potentiated at t1, and the input neurons of the second transform will in turn have their synapses potentiated onto the same set of output neurons. This process may continue through subsequent transforms (t3, t4, …) until there is very little or no resemblance between the current and the initial transforms. In addition to changes in retinal location, the same principles will apply to build other types of transformation-invariance. For example, changes in view and scale will be accommodated through the same process, provided that there is sufficient overlap of afferent neurons between successive transforms.
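The mechanism described above can be sketched in a few lines of code. The following is a minimal illustration, not the authors' actual simulation: it assumes a single feed-forward layer with winner-take-all competition, a shifting bar of active input neurons standing in for a translating stimulus, and Hebbian potentiation of the winning output neuron's synapses followed by weight normalisation (all parameter values are arbitrary choices for demonstration). Because consecutive bar positions overlap heavily, the output neuron that wins on the first transform keeps winning on later transforms, which is the CT effect.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 20, 5                    # illustrative layer sizes
width = 8                              # active input neurons per transform
lr = 0.5                               # Hebbian learning rate (arbitrary)

# Random initial synaptic weights, one normalised row per output neuron
W = rng.random((n_out, n_in))
W /= np.linalg.norm(W, axis=1, keepdims=True)

def transform(t):
    """Binary input pattern: a bar of `width` active neurons shifted by t.
    Consecutive transforms overlap, which CT learning requires."""
    x = np.zeros(n_in)
    x[t:t + width] = 1.0
    return x

winners = []
for t in range(n_in - width + 1):      # shift the stimulus one step at a time
    x = transform(t)
    y = W @ x                          # feed-forward activation
    win = int(np.argmax(y))            # winner-take-all competition
    winners.append(win)
    W[win] += lr * x                   # Hebbian potentiation of the winner's synapses
    W[win] /= np.linalg.norm(W[win])   # normalisation keeps learning stable

# With sufficient overlap between transforms, a single output neuron
# comes to respond to the stimulus at every shifted position.
print(winners)
```

Reducing the overlap (e.g. shifting the bar by `width` per step instead of 1) breaks the chain of shared potentiated connections, and the winning neuron no longer stays constant across transforms, consistent with the caption's requirement of sufficient afferent overlap.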