Proc Natl Acad Sci U S A. 2019 May 6;116(21):10537–10546. doi: 10.1073/pnas.1813416116

Fig. 5.

Optimal network size for linear and nonlinear networks in the presence of intrinsic synaptic noise. (A) Network expansion for a linear network, given by an embedding into a larger network followed by a rotation of the weight matrix. This corresponds to transforming inputs u by a projection B and outputs y by a semiorthogonal mapping D. (B) Plots show N_opt for linear and nonlinear networks, computed using Eqs. 17 and 19. In both cases the learning rule has γ_1 = 0.01 and T = 2. Low task-irrelevant plasticity corresponds to γ_2 = 0.05, while high task-irrelevant plasticity corresponds to γ_2 = 1.
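The expansion in panel A can be sketched numerically. The snippet below is a minimal illustration, not the paper's exact construction: it assumes the expanded network should compute the same input-output map as the original, using a column-orthonormal input projection B and a semiorthogonal output map D (the sizes n_in, n_out, and N are arbitrary choices for the demo).

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, N = 3, 2, 8  # original sizes and expanded network size (illustrative)

W = rng.standard_normal((n_out, n_in))  # original weight matrix

# B has orthonormal columns (B^T B = I): projects inputs into the larger space.
B, _ = np.linalg.qr(rng.standard_normal((N, n_in)))
# D is semiorthogonal (D D^T = I): maps expanded activity back to outputs.
D = np.linalg.qr(rng.standard_normal((N, n_out)))[0].T

# Embedded, rotated weight matrix for the larger network.
W_big = D.T @ W @ B.T

u = rng.standard_normal(n_in)
y_small = W @ u               # original network output
y_big = D @ (W_big @ (B @ u))  # expanded network output

print(np.allclose(y_small, y_big))  # the two networks agree
```

Because D @ W_big @ B = (D D^T) W (B^T B) = W, the expanded network reproduces the original linear map exactly, which is what makes the comparison of noise effects across network sizes meaningful.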