2023 Mar 31;14:1805. doi: 10.1038/s41467-023-37562-1

Table 1.

Effect of the Hebbian-like error learning rule FeHebb on the alignment of the modulating signals α for different layers

| Trained with F0 (FA) | Trained with FeHebb | α0 (deg) | α1 (deg) | α2 (deg) |
|---|---|---|---|---|
| W0,1, W1,2, W2,3 | – | 89.89 | 76.69 | 82.04 |
| W0,1, W2,3 | W1,2 | 89.95 | 59.95 | 72.14 |
| W0,1, W1,2 | W2,3 | 90.03 | 75.18 | 29.02 |
| W2,3 | W0,1, W1,2 | 75.29 | 61.23 | 72.56 |
| W0,1 | W1,2, W2,3 | 90.2 | 49.4 | 27.9 |
| W1,2 | W0,1, W2,3 | 84.86 | 74.25 | 30.33 |
| – | W0,1, W1,2, W2,3 | 77.93 | 49.93 | 28.4 |

The leftmost column lists the parameters updated with F0 using feedback alignment, and the next column lists the layers trained with FeHebb (Eq. (8)). Angles α give the alignment (in degrees) between the modulatory signal e and its backpropagated counterpart eBP at each layer; smaller angles indicate closer alignment with backpropagation. Since e0 is a synthetic error, the case of FeHebb acting on W0,1 alone is excluded. The model is trained for 500 episodes, and the reported angles are averaged after a burn-in period of 100 episodes.
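As a concrete illustration of how such alignment angles are typically computed, the sketch below measures the angle between a modulatory error vector e and a backpropagated error vector eBP via their cosine similarity. This is a minimal sketch, not the paper's code: the function name `alignment_angle` and the example vectors are assumptions for illustration; the burn-in averaging described in the caption would be applied over such per-episode angles.

```python
import numpy as np

def alignment_angle(e, e_bp):
    """Angle in degrees between two error vectors.

    0 deg means the modulatory signal points exactly along the
    backpropagated error; 90 deg means they are orthogonal
    (i.e., the signal carries no backprop-aligned information).
    """
    cos = np.dot(e, e_bp) / (np.linalg.norm(e) * np.linalg.norm(e_bp))
    # Clip to guard against floating-point values slightly outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hypothetical usage: average the per-episode angle after a burn-in period,
# mirroring the procedure described in the table caption.
angles = [alignment_angle(np.random.randn(10), np.random.randn(10))
          for _ in range(500)]
burn_in = 100
mean_angle = float(np.mean(angles[burn_in:]))
```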