Author manuscript; available in PMC: 2019 Jul 1.
Published in final edited form as: Artif Intell. 2018 Apr 3;260:1–35. doi: 10.1016/j.artint.2018.03.003

Table 2.

Postsynaptic information required by deep synapses for optimal learning. $I_{ij}^h$ denotes the signal carried by the deep learning channel and the postsynaptic term in the learning rules considered here. The different algorithms reveal the essential ingredients of this signal and how it can be simplified. In the last row, the function $F$ can be implemented with sparse or adaptive matrices, can carry low-precision signals, or can include non-linear transformations in the learning channel (see also [4]).

| Information | Algorithm |
|---|---|
| $I_{ij}^h = I_{ij}^h\big(T,\, O,\, w_{rs}^l\,(l>h),\, f\,(l \le h)\big)$ | General form |
| $I_{ij}^h = I_{i}^h\big(T,\, O,\, w_{rs}^l\,(l>h),\, f\,(l \le h)\big)$ | BP (symmetric weights) |
| $I_{ij}^h = I_{i}^h\big(T-O,\, w_{rs}^l\,(l>h),\, f\,(l \le h)\big)$ | BP (symmetric weights) |
| $I_{ij}^h = I_{i}^h\big(T-O,\, w_{rs}^l\,(l>h+1),\, w_{ki}^{h+1},\, f\,(l \le h)\big)$ | BP (symmetric weights) |
| $I_{ij}^h = I_{i}^h\big(T-O,\, r_{rs}^l\,(l>h+1),\, r_{ki}^{h+1},\, f\,(l \le h)\big)$ | RBP (random weights) |
| $I_{ij}^h = I_{i}^h\big(T-O,\, r_{ki}^{h},\, f\,(l \le h)\big)$ | SRBP (random skipped weights) |
| $I_{ij}^h = I_{i}^h\big(T-O,\, r_{ki}^{h},\, f\,(l = h)\big)$ | SRBP (random skipped weights) |
| $I_{ij}^h = I_{i}^h\big(F(T-O),\, f\,(l = h)\big)$ | $F$ sparse / low-precision / adaptive / non-linear |
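To make the contrast between rows concrete, here is a minimal numpy sketch (not the authors' code; the layer sizes, the tanh nonlinearity, and the names `W1`, `W2`, `R` are illustrative assumptions). It computes the postsynaptic signal $I_i^h$ for a hidden layer in two ways: BP, where the error $T-O$ is propagated back through the transposed forward weights, and SRBP, where it skips directly back through a fixed random matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network (sizes are arbitrary, for illustration only).
n_in, n_hid, n_out = 4, 5, 3
W1 = rng.normal(size=(n_hid, n_in))   # forward weights into the hidden layer
W2 = rng.normal(size=(n_out, n_hid))  # forward weights into the output layer
R = rng.normal(size=(n_hid, n_out))   # fixed random matrix used by SRBP

def tanh_deriv(a):
    # Derivative of tanh at pre-activation a.
    return 1.0 - np.tanh(a) ** 2

x = rng.normal(size=n_in)             # input
T = np.array([0.0, 1.0, 0.0])         # target

# Forward pass: f(l <= h) in the table corresponds to a1, h1 here.
a1 = W1 @ x
h1 = np.tanh(a1)
O = np.tanh(W2 @ h1)                  # network output

err = T - O                           # the (T - O) term shared by all rows

# BP row: the postsynaptic signal depends on the downstream weights W2
# (symmetric weights: the backward channel reuses W2 transposed).
I_bp = (W2.T @ err) * tanh_deriv(a1)

# SRBP row: same role, but the error reaches layer h through the fixed
# random matrix R (the r_{ki}^h term), skipping all intermediate layers.
I_srbp = (R @ err) * tanh_deriv(a1)

# Either signal drives the same local update: dW1 proportional to the
# outer product of the postsynaptic signal and the presynaptic activity.
dW1_bp = np.outer(I_bp, x)
dW1_srbp = np.outer(I_srbp, x)
```

The point of the table is that only the postsynaptic factor changes across algorithms; the local learning rule (the outer product above) is identical in every row.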