J Neurophysiol. 2013 Dec 18;111(6):1183–1189. doi: 10.1152/jn.00637.2013

Fig. 5.

Schematic representation of our findings and a possible implication for hierarchical object-recognition models. A, left: schematic drawing of 2 connected layers representing a fragment of a hierarchical, multilayer object-recognition model such as HMAX or convolutional nets. Each layer contains units that are assigned a specific operation, such as AND-like Gaussian tuning or a MAX-like function. A, right: although we find the MAX function in the auditory system, we also find evidence against exclusive “MAX” neurons. Constraining and exploiting this flexibility to produce invariant responses is a challenge for future models. B: schematic illustration of one potential solution incorporating operational flexibility. Left: the same 2 layers of neurons as in A, but with a flexible neuron-to-computation mapping. Before a stimulus activates the inputs, no specific operation can be assigned to any given neuron. Once the inputs are activated, different stimuli (middle and right) produce different computation-to-neuron mappings, depending on the activation levels. With sparse coding, there will be little overlap between the neuronal populations that perform the same set of computations for different stimuli. It remains to be tested whether this principle can be implemented and perform well for object recognition.
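To make the two operations and the flexible mapping concrete, the following is a minimal Python sketch. The AND-like Gaussian tuning and MAX pooling follow the standard HMAX-style formulation referenced in panel A; the activation-threshold rule that switches a unit between operations is a hypothetical stand-in for the caption's "depending on their activation level," and all parameter values are illustrative rather than taken from the paper.

```python
import numpy as np

def gaussian_tuning(x, preferred, sigma=1.0):
    """AND-like operation: response peaks when the input vector
    matches the unit's preferred pattern (template matching)."""
    return np.exp(-np.sum((x - preferred) ** 2) / (2 * sigma ** 2))

def max_pooling(inputs):
    """MAX-like operation: the unit transmits only its strongest
    afferent, yielding invariance across the pooled inputs."""
    return np.max(inputs)

def flexible_unit(inputs, preferred, threshold=0.5, sigma=1.0):
    """Flexible neuron-to-computation mapping (panel B): which
    operation the unit performs is determined only once its inputs
    are active. The mean-activation threshold here is a hypothetical
    rule, not the paper's mechanism."""
    if np.mean(inputs) > threshold:
        return max_pooling(inputs)             # behaves MAX-like
    return gaussian_tuning(inputs, preferred)  # behaves AND-like

# Illustrative stimulus and template: a strongly driven input set
# makes the unit pool (MAX), a weak one makes it match (AND).
stimulus = np.array([0.9, 0.2, 0.7])
template = np.array([1.0, 0.0, 1.0])
print(flexible_unit(stimulus, template))
```

Under this toy rule, two different stimuli can recruit the same unit for different computations, so the computation-to-neuron mapping changes with the input, as the middle and right parts of panel B depict.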