Nat Commun. 2020 Jul 17;11:3625. doi: 10.1038/s41467-020-17236-y

Fig. 6. Computational graph and gradient propagations.


a Assumed mathematical dependencies between hidden neuron states $h_j^t$, neuron outputs $z^t$, network inputs $x^t$, and the loss function $E$ through the mathematical functions $E(\cdot)$, $M(\cdot)$, $f(\cdot)$ are represented by colored arrows. b–d The flow of computation for the two components $e^t$ and $L^t$ that merge into the loss gradients of Eq. (3) can be represented in similar graphs. b Following Eq. (14), the computation of the eligibility traces $e_{ji}^t$ flows forward in time. c In contrast, the ideal learning signals $L_j^t = \frac{dE}{dz_j^t}$ require propagating gradients backward in time. d Hence, while $e_{ji}^t$ is computed exactly, $L_j^t$ is approximated in e-prop applications to yield an online learning algorithm.
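The forward-in-time computation sketched in panels b and d can be illustrated with a minimal NumPy sketch. The snippet below is not the paper's implementation: it assumes a simple leaky tanh unit in place of the spiking neuron model, a squared-error readout, and fixed random broadcast weights B as a hypothetical stand-in for the approximate learning signal; all names and sizes are illustrative. It accumulates the per-synapse gradient as the product of an online learning signal $L_j^t$ and an eligibility trace $e_{ji}^t$, both updated strictly forward in time, so no gradient ever needs to be propagated backward as in panel c.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes; a leaky (non-spiking) tanh unit stands in for the
# spiking neuron model of the paper.
n_in, n_rec, n_out, T = 3, 5, 2, 20
alpha = 0.9  # leak factor of the hidden state h_j^t

W_in  = rng.normal(scale=0.5, size=(n_rec, n_in))   # input weights
W_rec = rng.normal(scale=0.5, size=(n_rec, n_rec))  # recurrent weights
W_out = rng.normal(scale=0.5, size=(n_out, n_rec))  # readout weights
B     = rng.normal(scale=0.5, size=(n_rec, n_out))  # fixed broadcast weights (hypothetical
                                                    # stand-in for the approximate learning signal)
W = np.concatenate([W_in, W_rec], axis=1)           # acts on the presynaptic vector [x^t; z^{t-1}]

x        = rng.normal(size=(T, n_in))   # network inputs x^t
y_target = rng.normal(size=(T, n_out))  # targets defining the loss E

h      = np.zeros(n_rec)                  # hidden states h_j^t
z      = np.zeros(n_rec)                  # neuron outputs z_j^t
eps    = np.zeros((n_rec, n_in + n_rec))  # eligibility vectors, one per synapse (j, i)
grad_W = np.zeros((n_rec, n_in + n_rec))  # accumulated gradient dE/dW_ji

for t in range(T):
    pre = np.concatenate([x[t], z])  # presynaptic signals at time t (z still holds z^{t-1})
    h   = alpha * h + W @ pre        # h^t = M(h^{t-1}, z^{t-1}, x^t)
    z   = np.tanh(h)                 # z^t = f(h^t)

    # Forward recursion of the eligibility trace (panel b): needs no future information.
    eps     = alpha * eps + pre[np.newaxis, :]  # eligibility vector update
    psi     = 1.0 - z**2                        # derivative dz_j^t / dh_j^t
    e_trace = psi[:, np.newaxis] * eps          # eligibility trace e_ji^t

    # Online approximation of the learning signal (panel d): broadcast of the
    # instantaneous output error instead of the exact L_j^t = dE/dz_j^t (panel c).
    y_out = W_out @ z
    L     = B @ (y_out - y_target[t])

    # Contribution of time step t to the loss gradient: L_j^t * e_ji^t.
    grad_W += L[:, np.newaxis] * e_trace

print(grad_W.shape)  # (n_rec, n_in + n_rec): one gradient entry per synapse
```

Because both factors are available at every time step, the weight update can be applied online or accumulated over a trial, which is the property panels b–d are meant to convey.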