The Journal of Neuroscience. 1990 Jul 1;10(7):2281–2299. doi: 10.1523/JNEUROSCI.10-07-02281.1990

Neural model of stereoacuity and depth interpolation based on a distributed representation of stereo disparity

SR Lehky, TJ Sejnowski
PMCID: PMC6570385  PMID: 2376775

Abstract

We have developed a model for the representation of stereo disparity by a population of neurons that is based on tuning curves similar in shape to those measured physiologically (Poggio and Fischer, 1977). Signal detection analysis was applied to the model to generate predictions of depth discrimination thresholds. Agreement between the model and human psychophysical data was possible only when the population size representing disparity in a small patch of visual field was in the range of about 20-200 units. Interval encoding and rate encoding were found to be inconsistent with these data. Psychophysical data on stereo interpolation (Westheimer, 1986a) suggest that there are short-range excitatory and long-range inhibitory interactions between disparity-tuned units at nearby spatial locations. We extended our population model of disparity coding at a single spatial location to include such lateral interactions. When there was a small disparity gradient between stimuli at 2 locations, units in the intermediate, unstimulated position developed a pattern of activity corresponding to the average of the 2 lateral disparities. When there was a large disparity gradient, units at the intermediate position developed a pattern of activity corresponding to an independent superposition of the 2 lateral disparities, so that both disparities were represented simultaneously. This mixed population pattern may underlie the perception of depth discontinuities and transparent surfaces. Similar types of distributed representations may be applicable to other parameters, such as orientation, motion, stimulus size, and motor coordinates.
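The averaging-versus-superposition behavior described above can be illustrated with a minimal sketch. This is not the authors' model: for simplicity it assumes Gaussian tuning curves rather than the physiologically measured Poggio-Fischer tuning shapes, and the unit count, tuning width, and helper names (`make_tuning_curves`, `peaks`) are all illustrative. It shows only the core idea that summing two population patterns yields a single peak near the mean disparity when the disparity gradient is small, but two separate peaks (both disparities represented simultaneously) when the gradient is large.

```python
import math

def make_tuning_curves(n_units=40, d_min=-1.0, d_max=1.0, sigma=0.15):
    """Population of n_units with Gaussian disparity tuning.

    Gaussian shape and all parameter values are illustrative assumptions,
    not taken from the paper. Returns the tuning centers and a function
    mapping a disparity d to the population activity pattern.
    """
    centers = [d_min + i * (d_max - d_min) / (n_units - 1)
               for i in range(n_units)]
    def response(d):
        return [math.exp(-((d - c) ** 2) / (2 * sigma ** 2)) for c in centers]
    return centers, response

centers, response = make_tuning_curves()

def peaks(pattern, thresh=0.5):
    """Count local maxima above thresh * max: a crude uni/bimodality check."""
    m = max(pattern)
    count = 0
    for i in range(1, len(pattern) - 1):
        if (pattern[i] > thresh * m
                and pattern[i] >= pattern[i - 1]
                and pattern[i] > pattern[i + 1]):
            count += 1
    return count

# Small disparity gradient: the combined pattern is unimodal,
# peaking near the average of the two disparities.
small = [a + b for a, b in zip(response(-0.05), response(0.05))]

# Large disparity gradient: the combined pattern is a superposition
# in which both disparities remain represented (two peaks).
large = [a + b for a, b in zip(response(-0.5), response(0.5))]

print(peaks(small))  # 1
print(peaks(large))  # 2
```

The transition between the two regimes here is set entirely by the assumed tuning width `sigma`; in the paper's model the regime is instead shaped by the short-range excitatory and long-range inhibitory lateral interactions.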


Articles from The Journal of Neuroscience are provided here courtesy of Society for Neuroscience