The noise values, measured relative to the standard deviation of the input, are ν = 0.1, 0.3, 0.5 from top to bottom row. The probability of a spike from a neuron with threshold μ and noise value ν is modeled by the logistic function p(x) = 1/(1 + exp(−(x − μ)/ν)), where x is a Gaussian input of unit variance. For two neurons there are four response patterns (00, 01, 10, 11, where each neuron is assigned the response 1 if it produces a spike and 0 otherwise). The neurons are assumed to be conditionally independent given x, so the pattern probabilities conditioned on the input factorize, e.g. p11(x) = p1(x)p2(x), where p1 and p2 are the logistic spike probabilities with thresholds μ1 and μ2 for neurons 1 and 2, respectively; averaging over the Gaussian input gives the marginal pattern probabilities p00, p01, p10, p11. With these assumptions, the mutual information can be evaluated as the difference between the response entropy R = −p00 log2 p00 − p01 log2 p01 − p10 log2 p10 − p11 log2 p11 and the noise entropy N = −⟨p00(x) log2 p00(x) + p01(x) log2 p01(x) + p10(x) log2 p10(x) + p11(x) log2 p11(x)⟩x, where ⟨·⟩x denotes the average over the Gaussian input. The mutual information I = R − N is plotted here as a function of the threshold difference μ1 − μ2. For each value of μ1 − μ2 and ν, the average threshold of the two neurons is adjusted so that the spike rate remains constant across all information values shown. In these simulations, the average spike rate, taken over all inputs x and summed over neurons 1 and 2, was set to 0.2. In this case, the transition from redundant coding (observed at large noise values) to coding based on different thresholds (observed at low noise values) occurs at ν ≈ 0.4.
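The computation described above can be reproduced numerically. The following Python sketch (the function names, the quadrature grid, and the bisection scheme are my own choices, not taken from the figure) evaluates I = R − N for two logistic neurons on a discretized Gaussian input and bisects on the average threshold so that the summed spike rate stays at the target value of 0.2:

```python
import numpy as np

def spike_prob(x, mu, nu):
    """Logistic spike probability with threshold mu and noise nu."""
    z = np.clip((x - mu) / nu, -60.0, 60.0)  # clip to avoid exp overflow
    return 1.0 / (1.0 + np.exp(-z))

def info_two_neurons(mu1, mu2, nu, n_grid=2001):
    """Return (I, summed spike rate) for two conditionally independent neurons."""
    # Discretize the unit-variance Gaussian input on a grid
    x = np.linspace(-6.0, 6.0, n_grid)
    w = np.exp(-0.5 * x**2)
    w /= w.sum()  # normalized quadrature weights

    p1 = spike_prob(x, mu1, nu)
    p2 = spike_prob(x, mu2, nu)
    # Pattern probabilities given x (00, 01, 10, 11), using conditional independence
    pats = np.stack([(1 - p1) * (1 - p2), (1 - p1) * p2,
                     p1 * (1 - p2), p1 * p2])

    def h(p):  # elementwise -p log2 p, safe at p = 0
        p = np.clip(p, 1e-300, 1.0)
        return -p * np.log2(p)

    p_marg = (pats * w).sum(axis=1)        # marginal pattern probabilities
    R = h(p_marg).sum()                    # response entropy
    N = (h(pats).sum(axis=0) * w).sum()    # noise entropy, averaged over x
    rate = ((p1 + p2) * w).sum()           # spike rate summed over both neurons
    return R - N, rate

def info_at_fixed_rate(delta, nu, target_rate=0.2):
    """I at threshold difference delta, bisecting the average threshold
    so the summed spike rate equals target_rate (rate decreases with threshold)."""
    lo, hi = -5.0, 10.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        _, rate = info_two_neurons(mid + delta / 2, mid - delta / 2, nu)
        if rate > target_rate:
            lo = mid  # rate too high: raise the average threshold
        else:
            hi = mid
    I, _ = info_two_neurons(mid + delta / 2, mid - delta / 2, nu)
    return I
```

Scanning `info_at_fixed_rate(delta, nu)` over a range of `delta` for each noise level reproduces the qualitative behavior in the figure: at low noise the information is maximized at a nonzero threshold difference, while at high noise identical thresholds (redundant coding) are optimal.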