Published in final edited form as: Neuron. 2011 Sep 8;71(5):926–940. doi: 10.1016/j.neuron.2011.06.032

Figure 8.

Model of visual texture perception (a variant of Portilla and Simoncelli, 2000), depicted in a format analogous to the auditory texture model of Figure 1. An image of beans (top row) is filtered into spatial frequency bands by center-surround filters (second row), as occurs in the retina/LGN. The spatial frequency bands (third row) are filtered again by orientation-selective filters (fourth row), analogous to V1 simple cells, yielding scale- and orientation-filtered bands (fifth row). The envelopes of these bands are extracted (sixth row) to produce analogues of V1 complex cell responses (seventh row). The linear function at the envelope extraction stage indicates the absence of the compressive nonlinearity present in the auditory model. As in Figure 1, red icons denote statistical measurements: marginal moments of a single signal (M) or correlations between two signals (AC, C1, or C2, denoting autocorrelation, cross-band correlation, and phase-adjusted correlation, respectively). C1 and C2 here and in Figure 1 denote conceptually similar statistics. The autocorrelation (AC) is identical to C1 except that it is computed within a channel.
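The caption describes the analysis pipeline in words; the following is a minimal sketch of that pipeline, not the authors' implementation. It assumes a grayscale image supplied as a 2-D NumPy array and substitutes simple difference-of-Gaussians and oriented derivative-of-Gaussian kernels for the steerable-pyramid filters of Portilla and Simoncelli (2000). Filter scales, correlation lags, and the statistic set are illustrative placeholders, and the phase-adjusted C2 statistic is omitted for brevity.

```python
# Minimal, illustrative sketch of the visual texture analysis pipeline
# described in Figure 8 (not the Portilla & Simoncelli, 2000 code).
import numpy as np
from scipy import ndimage

def center_surround_bands(img, sigmas=(1, 2, 4, 8)):
    """Difference-of-Gaussians filtering into spatial-frequency bands
    (a stand-in for retina/LGN center-surround filtering)."""
    img = np.asarray(img, dtype=float)
    return [ndimage.gaussian_filter(img, s) - ndimage.gaussian_filter(img, 2 * s)
            for s in sigmas]

def oriented_bands(band, n_orient=4):
    """Orientation-selective filtering (analogue of V1 simple cells),
    implemented here with oriented derivative-of-Gaussian filters."""
    gx = ndimage.gaussian_filter(band, 2, order=(0, 1))  # d/dx
    gy = ndimage.gaussian_filter(band, 2, order=(1, 0))  # d/dy
    return [np.cos(k * np.pi / n_orient) * gx + np.sin(k * np.pi / n_orient) * gy
            for k in range(n_orient)]

def envelope(subband):
    """Envelope extraction (analogue of V1 complex cells): rectify and
    smooth, with no compressive nonlinearity (the linear stage noted in
    the caption)."""
    return ndimage.gaussian_filter(np.abs(subband), 2)

def texture_statistics(img):
    """Compute the caption's statistic classes: marginal moments (M),
    within-channel autocorrelation (AC), and cross-band envelope
    correlations (C1). C2 (phase-adjusted correlation) is omitted."""
    stats, envs = {}, []
    for i, band in enumerate(center_surround_bands(img)):
        for j, sub in enumerate(oriented_bands(band)):
            env = envelope(sub)
            envs.append(env)
            # M: marginal moments of this channel's envelope.
            stats[f"M_{i}_{j}"] = (env.mean(), env.var())
            # AC: autocorrelation within the channel at a few spatial lags.
            e = env - env.mean()
            stats[f"AC_{i}_{j}"] = [(e * np.roll(e, lag, axis=1)).mean()
                                    for lag in (1, 2, 4)]
    # C1: correlations between envelopes of different channels.
    for a in range(len(envs)):
        for b in range(a + 1, len(envs)):
            stats[f"C1_{a}_{b}"] = np.corrcoef(envs[a].ravel(),
                                               envs[b].ravel())[0, 1]
    return stats
```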
