Phil. Trans. R. Soc. B. 2019 Apr 22;374(1774):20180369. doi: 10.1098/rstb.2018.0369

Figure 7.

A neural network (NN) model of associative learning that performs the same task as the GRN in figure 5. The NN was adapted from [139], but we supplied the appropriate model parameters (electronic supplementary material, Supplement 1). The model consists of the two stimuli, US and CS, and the response (p), just as described for the GRN model above. The main difference is that here w1 and w2 are not molecular species, as they are in the GRN, but the synaptic weights of the US-p and CS-p connections, respectively. Furthermore, this NN follows the Hebbian principle that ‘neurons that fire together wire together’. The dynamical portrait of the behaviour is very similar to that of the GRN (electronic supplementary material, Supplement 1, figure S2). Finally, as an example of how information theory can be used to quantify cognition, we show the normalized mutual information (MI) between the behaviours of w1 and w2 (both change with time, as described above): it increases significantly during the learning step and remains higher after learning than before it. (a) Schematic of the NN. US, CS and p (response) represent the same quantities as in the GRN above. The differences are that (1) the dynamics of p follow the sigmoidal activation traditionally used to model the integrate-and-fire behaviour of neurons and (2) the synaptic weights are updated according to the activities of the pre-synaptic and post-synaptic neurons, following the Hebbian principle. (b) The behaviour (response p) of the NN before, during and after the association step (middle box). The normalized MI between the behaviours of w1 and w2 is also shown for the three phases. The MI clearly increases during and after learning, even though there is no direct connection between w1 and w2, demonstrating the power of information-theoretic tools. (Online version in colour.)
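
To make the mechanics concrete, the following is a minimal, self-contained sketch (in Python, not the authors' supplementary code) of such a Hebbian associative-learning network: a sigmoidal response neuron p driven by US and CS through plastic weights w1 and w2, a Hebbian update with passive decay, and a histogram-based estimate of the normalized MI between the w1 and w2 trajectories before, during and after the pairing phase. The stimulus protocol, learning rate, decay constant, sigmoid gain/threshold and the MI normalization (MI divided by the geometric mean of the marginal entropies) are illustrative assumptions, not the parameters of Supplement 1.

```python
# Minimal sketch (not the authors' supplementary code) of a Hebbian
# associative-learning network in the spirit of figure 7: a sigmoidal
# response neuron p, inputs US and CS, and plastic weights w1 (US -> p)
# and w2 (CS -> p). All parameter values below are illustrative assumptions.
import numpy as np


def sigmoid(x, gain=5.0, threshold=0.5):
    """Sigmoidal activation, a smooth stand-in for integrate-and-fire firing."""
    return 1.0 / (1.0 + np.exp(-gain * (x - threshold)))


def simulate(T=3000, dt=0.01, eta=0.5, decay=0.05):
    """Three phases of equal length: US alone, US+CS paired, CS alone."""
    t = np.arange(T) * dt
    third = T // 3
    US = np.zeros(T)
    CS = np.zeros(T)
    pulses = (np.sin(2 * np.pi * 0.5 * t) > 0.8).astype(float)  # periodic pulses
    US[:2 * third] = pulses[:2 * third]   # US present before and during pairing
    CS[third:] = pulses[third:]           # CS present during pairing and after

    w1, w2 = 1.0, 0.0                     # US -> p starts effective, CS -> p naive
    w1_tr, w2_tr, p_tr = np.empty(T), np.empty(T), np.empty(T)
    for i in range(T):
        p = sigmoid(w1 * US[i] + w2 * CS[i])
        # Hebbian rule with passive decay: dw/dt = eta * pre * post - decay * w
        w1 += dt * (eta * US[i] * p - decay * w1)
        w2 += dt * (eta * CS[i] * p - decay * w2)
        w1_tr[i], w2_tr[i], p_tr[i] = w1, w2, p
    return w1_tr, w2_tr, p_tr, third


def normalized_mi(x, y, bins=16):
    """Histogram estimate of MI(X;Y) / sqrt(H(X) * H(Y)) for two time series."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    mi = np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return mi / np.sqrt(hx * hy) if hx > 0 and hy > 0 else 0.0


if __name__ == "__main__":
    w1, w2, p, third = simulate()
    phases = {"before": slice(0, third),
              "during": slice(third, 2 * third),
              "after": slice(2 * third, None)}
    for name, sl in phases.items():
        print(f"{name:>6}: normalized MI(w1, w2) = {normalized_mi(w1[sl], w2[sl]):.3f}")
```

With these illustrative settings, w2 should grow only while US and CS are paired, p should come to respond to CS alone, and the normalized MI between the weight trajectories should be near zero before pairing and substantially higher during and after it, qualitatively mirroring the pattern described for panel (b).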