Table 1.
Effect of the Hebbian-like error learning rule on the alignment αℓ of the modulatory signals for different layers
Feedback alignment | Hebbian rule (Eq. (8)) | α0 | α1 | α2 |
---|---|---|---|---|
W0,1, W1,2, W2,3 | – | 89.89 | 76.69 | 82.04 |
W0,1, W2,3 | W1,2 | 89.95 | 59.95 | 72.14 |
W0,1, W1,2 | W2,3 | 90.03 | 75.18 | 29.02 |
W2,3 | W0,1, W1,2 | 75.29 | 61.23 | 72.56 |
W0,1 | W1,2, W2,3 | 90.2 | 49.4 | 27.9 |
W1,2 | W0,1, W2,3 | 84.86 | 74.25 | 30.33 |
– | W0,1, W1,2, W2,3 | 77.93 | 49.93 | 28.4 |
The leftmost column lists the parameters updated with feedback alignment, and the next column indicates the layers trained with the Hebbian-like error learning rule (Eq. (8)). The angles αℓ give the alignment (in degrees) between the modulatory signal eℓ and its backpropagated counterpart at each layer. Since e0 is a synthetic error, the configuration applying the Hebbian-like rule to W0,1 alone has been excluded. The model is trained for 500 episodes, and the reported angles are averaged after a burn-in period of 100 episodes.
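For reference, a minimal sketch of how an alignment angle of this kind can be computed: the angle between two error signals follows from their cosine similarity, and the table's averages discard the first 100 of 500 episodes as burn-in. The function name `alignment_angle_deg` and the flat-array representation of eℓ are illustrative assumptions, not code from the paper.

```python
import numpy as np

def alignment_angle_deg(e_local: np.ndarray, e_bp: np.ndarray) -> float:
    """Angle in degrees between a modulatory signal e_l and its
    backpropagated counterpart, via cosine similarity.
    (Hypothetical helper; signal names are placeholders.)"""
    a, b = e_local.ravel(), e_bp.ravel()
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    # Clip guards against floating-point values just outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Averaging after a burn-in period, as in the table:
# angles_l holds one angle per episode for layer l (500 entries).
# mean_alpha_l = np.mean(angles_l[100:])
```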