Author manuscript; available in PMC: 2015 Sep 17.
Published in final edited form as: Neuron. 2014 Sep 17;83(6):1329–1334. doi: 10.1016/j.neuron.2014.08.040

Figure 2. Separate classes of neurons encoding the same input dimension can appear with decreasing neural noise.

The noise values, measured relative to the input standard deviation, are $\nu = 0.1, 0.3, 0.5$, top to bottom row. The probability of a spike from a neuron with threshold $\mu$ and noise value $\nu$ is modeled by the logistic function $P(\mathrm{spike} \mid x) = \frac{1}{1+\exp\left(\frac{\mu - x}{\nu}\right)}$, where $x$ is a Gaussian input of unit variance. For two neurons, there are four response patterns (00, 01, 10, 11, where each neuron is assigned the response 0 or 1 according to whether it produces a spike). The neurons are assumed to be conditionally independent given $x$. With these assumptions, the mutual information can be evaluated as the difference between the response entropy $R = -p_{00}\log_2 p_{00} - p_{01}\log_2 p_{01} - p_{10}\log_2 p_{10} - p_{11}\log_2 p_{11}$ and the noise entropy $N = \int dx\, P(x) \left\{ \frac{\log_2\left[1+\exp\left(\frac{\mu_1-x}{\nu}\right)\right]}{1+\exp\left(\frac{\mu_1-x}{\nu}\right)} + \frac{\log_2\left[1+\exp\left(\frac{x-\mu_1}{\nu}\right)\right]}{1+\exp\left(\frac{x-\mu_1}{\nu}\right)} + \frac{\log_2\left[1+\exp\left(\frac{\mu_2-x}{\nu}\right)\right]}{1+\exp\left(\frac{\mu_2-x}{\nu}\right)} + \frac{\log_2\left[1+\exp\left(\frac{x-\mu_2}{\nu}\right)\right]}{1+\exp\left(\frac{x-\mu_2}{\nu}\right)} \right\}$, where $P(x)$ is the Gaussian density of the input and $\mu_1$ and $\mu_2$ are the thresholds of neurons 1 and 2, respectively. The mutual information $I = R - N$ is plotted here as a function of the threshold difference $\mu_1 - \mu_2$. For each value of $\mu_1 - \mu_2$ and $\nu$, the average threshold of the two neurons is adjusted to ensure that the spike rate remains constant for all information values shown. In these simulations, the average spike rate across all inputs $x$, summed over neurons 1 and 2, was set to 0.2. In this case, the transition from redundant coding (observed at large noise values) to coding based on different thresholds (observed at low noise values) occurs at $\nu \sim 0.4$.
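The calculation behind the figure can be sketched numerically. The following is an illustrative reconstruction, not the authors' code: the integrals over the Gaussian input are replaced by grid integration, and the fixed summed spike rate of 0.2 is enforced by bisection on the mean threshold; all function names are ours.

```python
import numpy as np

# Grid for the unit-variance Gaussian input x and its density P(x).
X = np.linspace(-10.0, 10.0, 8001)
PX = np.exp(-X**2 / 2) / np.sqrt(2 * np.pi)
DX = X[1] - X[0]

def spike_prob(mu, nu):
    # Logistic tuning curve: P(spike | x) = 1 / (1 + exp((mu - x)/nu)).
    # Clipping the argument avoids overflow warnings for small nu.
    z = np.clip((mu - X) / nu, -60.0, 60.0)
    return 1.0 / (1.0 + np.exp(z))

def entropy_terms(p):
    # Elementwise -p * log2(p), with the convention 0*log(0) = 0.
    p = np.asarray(p, dtype=float)
    out = np.zeros_like(p)
    m = p > 0
    out[m] = -p[m] * np.log2(p[m])
    return out

def mutual_info(mu1, mu2, nu):
    p1 = spike_prob(mu1, nu)
    p2 = spike_prob(mu2, nu)
    # Four response patterns (00, 01, 10, 11), conditionally independent given x.
    joint = [(1 - p1) * (1 - p2), (1 - p1) * p2, p1 * (1 - p2), p1 * p2]
    # Response entropy R from the marginal pattern probabilities.
    pbars = np.array([np.sum(PX * pr) * DX for pr in joint])
    R = entropy_terms(pbars).sum()
    # Noise entropy N: conditional pattern entropy averaged over P(x);
    # by conditional independence this equals the sum of the two
    # single-neuron terms in the caption.
    N = sum(np.sum(PX * entropy_terms(pr)) * DX for pr in joint)
    return R - N

def info_at_fixed_rate(dmu, nu, total_rate=0.2):
    # Bisect on the mean threshold mu0 so that the mean spike rate,
    # summed over both neurons, equals total_rate (0.2 in the figure).
    def rate(mu0):
        p = spike_prob(mu0 - dmu / 2, nu) + spike_prob(mu0 + dmu / 2, nu)
        return np.sum(PX * p) * DX
    lo, hi = -10.0, 15.0          # rate is decreasing in mu0 on this range
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if rate(mid) > total_rate:
            lo = mid
        else:
            hi = mid
    mu0 = 0.5 * (lo + hi)
    return mutual_info(mu0 - dmu / 2, mu0 + dmu / 2, nu)

if __name__ == "__main__":
    for nu in (0.1, 0.3, 0.5):
        for dmu in (0.0, 1.0, 2.0):
            print(f"nu={nu:.1f}  dmu={dmu:.1f}  I={info_at_fixed_rate(dmu, nu):.4f}")
```

Scanning `dmu` at each noise level reproduces the qualitative transition described in the caption: at low noise the information peaks at a nonzero threshold difference, while at high noise identical thresholds (redundant coding) carry the most information.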