The Journal of Neuroscience. 2014 Aug 6;34(32):10462–10464. doi: 10.1523/JNEUROSCI.2248-14.2014

What's the Difference between a Tiger and a Cat? From Visual Object to Semantic Concept via the Perirhinal Cortex

Marieke Mur
PMCID: PMC4122795  PMID: 25100581

When we see an object, we interpret the visual image that is projected on the retina, giving it meaning. The transformation from visual image to meaningful object takes place along the human ventral visual stream. Early in the ventral stream, brain activity represents low-level visual information, such as orientation and contrast. Higher up in the ventral stream, in posterior inferior temporal (IT) cortex, brain activity represents high-level object information. This includes a broad categorical organization of individual objects based on biological relevance, reflecting, for example, the divisions between animate and inanimate objects and between faces and places (Kriegeskorte et al., 2008b; Mur et al., 2012). Category membership is an important piece of information for object recognition. However, knowing which category an object belongs to is not sufficient for identifying individual objects, especially those that belong to the same category. Successful recognition of individual objects requires object-specific knowledge: knowledge that allows us to distinguish, for example, a tiger from a domestic cat. The combination of four legs, fur, and a carnivorous diet is not specific enough to tell these two animals apart. Additional information is needed, in particular about features that are not shared between them, for example, size and potential threat. In fact, many objects we encounter in life cannot be distinguished based on a few simple feature combinations. Successful discrimination requires a rich multidimensional representation that captures the complex feature conjunctions that make each object unique.

Where in the brain is this object-specific knowledge represented? One strong candidate is the anterior temporal lobe (ATL). Damage to the medial ATL, especially perirhinal cortex, has been reported to impair the discrimination of objects that have a high degree of feature overlap (Bussey et al., 2005). Semantic dementia, a degenerative neuropathology characterized by an impaired capacity to identify specific objects, primarily affects neurons in the ATL. Critically, this impairment in object-specific knowledge does not appear to affect one sensory modality in particular: for instance, both an image of a tiger and the sound of a tiger's growl might fail to elicit recognition of the tiger. The ATL is therefore thought to be critical for associating object information across sensory modalities, supporting abstract conceptual representations (Patterson et al., 2007). Together, these findings point to the ATL as a candidate key neural substrate for integrating and representing object-specific semantic knowledge. Nevertheless, the fine-grained representational content of the ATL has not been thoroughly explored. Are neuronal responses in the ATL sensitive to the degree of semantic similarity among distinct visual objects?

In a recent issue of The Journal of Neuroscience, Clarke and Tyler (2014) took a significant step forward in the search for the representation of object-specific knowledge in the human brain. In their study, healthy participants were shown images of real-world objects from a range of categories, including animals, fruits, and tools, while performing a basic-level naming task in the fMRI scanner. The use of basic-level naming (e.g., cat, tiger, apple, pear) ensured that participants processed the objects at an individual level, rather than at a category level (animal, fruit). Importantly, descriptive features for each object shown in the experiment were drawn from an independent, publicly available database (e.g., is green, tastes sweet, used for cider, eaten in pies, for “apple”). The list of descriptive features, on average 13 per object, defines a multidimensional semantic representation for that object. Objects with many shared features are predicted to evoke more semantically confusable representations.
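
To make this feature-based format concrete, the sketch below represents a few objects as binary semantic feature vectors and quantifies their overlap with a cosine measure. The feature names and values are illustrative assumptions for this sketch, not entries from the property-norm database used in the study.

```python
import numpy as np

# Illustrative binary feature vectors; features are assumptions, not
# entries from the actual property-norm database used by the authors.
features = ["has_four_legs", "has_fur", "eats_meat", "is_large",
            "is_dangerous", "is_a_pet", "is_green", "tastes_sweet"]

objects = {
    "tiger": np.array([1, 1, 1, 1, 1, 0, 0, 0]),
    "cat":   np.array([1, 1, 1, 0, 0, 1, 0, 0]),
    "apple": np.array([0, 0, 0, 0, 0, 0, 1, 1]),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Tiger and cat share most of their features and are therefore
# semantically confusable; tiger and apple share none.
print(cosine_similarity(objects["tiger"], objects["cat"]))    # ~0.67
print(cosine_similarity(objects["tiger"], objects["apple"]))  # 0.0
```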

The authors subsequently examined the neural response patterns elicited by each object image. Each pattern can be thought of as a point in multidimensional voxel space. In this space, distances reflect dissimilarities among the response patterns elicited by different objects, and the set of all pairwise pattern dissimilarities summarizes the information content of a given brain region (in this case, a multivoxel searchlight, which is moved through the brain). For example, if a region shows two clusters of response patterns, one for living things and one for nonliving things, we can conclude that the region emphasizes, and might therefore represent, object animacy. The advantage of this summary, or representational similarity structure, is that it allows comparison with similarity structures generated in different representational spaces, for example, a space based on perceived similarity of objects (Mur et al., 2013). This is the main tenet of representational similarity analysis (RSA; Kriegeskorte et al., 2008a).
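
The core RSA computation can be sketched as follows, under simple assumptions: the response patterns in one searchlight are summarized as a vector of pairwise pattern dissimilarities (a representational dissimilarity matrix, RDM), which is then compared with a model similarity structure using a rank correlation. All data below are simulated, and the dimensions are illustrative.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_objects, n_voxels = 20, 100
patterns = rng.standard_normal((n_objects, n_voxels))  # simulated fMRI patterns

# Brain RDM: all pairwise pattern dissimilarities (1 - Pearson correlation),
# a summary of the information content of this searchlight.
brain_rdm = pdist(patterns, metric="correlation")

# Model RDM, e.g., derived from semantic feature vectors (simulated here).
model_rdm = pdist(rng.random((n_objects, 8)), metric="cosine")

# Compare the two similarity structures with a rank correlation, which
# assumes only a monotonic relationship between the two spaces.
rho, p = spearmanr(brain_rdm, model_rdm)
print(f"brain-model RDM correlation: rho = {rho:.3f}, p = {p:.3f}")
```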

In Clarke and Tyler's (2014) study, the representational space of interest was defined by the semantic feature descriptions obtained for each object. Objects with many shared features, i.e., semantically confusable objects, cluster together in this space. Since semantic confusability might be correlated with low-level visual properties (e.g., fruits are often round and warm-colored) and category membership (an apple and a pear share many features), the authors also computed similarity structures generated in these two additional representational spaces. They used partial correlation analysis to fit all three models (semantic, low-level visual, categorical) simultaneously to determine the unique contribution of each model to explaining the brain similarity structure at each searchlight location.
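
The logic of this partial correlation analysis can be sketched as follows: the unique semantic contribution corresponds to the correlation between the brain and semantic similarity structures after the visual and categorical model structures have been regressed out of both. The code below is a simplified illustration of the general technique under simulated data, not a reconstruction of the authors' exact pipeline.

```python
import numpy as np

def residualize(y, X):
    """Remove the least-squares fit of the columns of X (plus an
    intercept) from y, returning the residuals."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def partial_corr(brain_rdm, semantic_rdm, control_rdms):
    """Correlation between brain and semantic RDM vectors after the
    control model RDMs have been regressed out of both."""
    b_res = residualize(brain_rdm, control_rdms)
    s_res = residualize(semantic_rdm, control_rdms)
    return np.corrcoef(b_res, s_res)[0, 1]

rng = np.random.default_rng(0)
n_pairs = 190  # number of object pairs among 20 objects
brain_rdm = rng.standard_normal(n_pairs)      # searchlight RDM (simulated)
semantic_rdm = rng.standard_normal(n_pairs)   # semantic model RDM
controls = rng.standard_normal((n_pairs, 2))  # visual and category model RDMs

print(f"unique semantic contribution: "
      f"{partial_corr(brain_rdm, semantic_rdm, controls):.3f}")
```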

The main region showing a significant match with the semantics model was bilateral perirhinal cortex. This finding corroborates an earlier report showing that response patterns elicited by words in left perirhinal cortex cluster based on semantic similarity (Bruffaerts et al., 2013). Clarke and Tyler (2014) extend this finding to pictures, and convincingly show that the effect is (almost) exclusively located in perirhinal cortex. The authors further report that perirhinal cortex activation is positively correlated with semantic confusability. This effect was not exclusive to perirhinal cortex: frontoparietal cognitive-control regions showed a similar effect, which suggests that it could be related to task difficulty, i.e., semantically confusable objects might be harder to name. No behavioral data are presented to test this hypothesis, but the fact that frontoparietal regions did not show a match with the object-specific semantics model supports the interpretation that perirhinal cortex is not simply driven by task difficulty.
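
One plausible formalization of this activation analysis is to summarize an object's semantic confusability as its mean semantic similarity to all other objects, and then correlate that value with the object's mean regional activation. The sketch below uses simulated data and illustrates this logic only; it is not the authors' exact computation.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_objects = 20
semantic_features = rng.random((n_objects, 8))   # simulated feature vectors

# Pairwise semantic similarity (1 - cosine distance), diagonal excluded.
similarity = 1 - squareform(pdist(semantic_features, metric="cosine"))
np.fill_diagonal(similarity, np.nan)

# An object's confusability: its mean similarity to all other objects.
confusability = np.nanmean(similarity, axis=1)

# Simulated mean regional activation per object (e.g., perirhinal BOLD).
mean_activation = rng.standard_normal(n_objects)

r, p = pearsonr(confusability, mean_activation)
print(f"confusability-activation correlation: r={r:.3f}, p={p:.3f}")
```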

The analyses and results presented in this study are exciting, but they also raise further questions. First, the principal finding of perirhinal involvement in representing semantic confusability points to the intriguing possibility that this region is centrally involved in the integration of perceptual and semantic information. What remains open is how perirhinal cortex accomplishes this feat. Does it integrate information from visual regions lower in the ventral hierarchy? Does it communicate with the hippocampus and other temporal lobe structures during object recognition? These questions motivate an investigation into the functional and computational role of perirhinal cortex within a larger network of regions supporting object recognition and semantic memory.

The anatomical and functional connectivity of perirhinal cortex makes it an ideal candidate for associating, encoding, and retrieving object-specific information. Perirhinal cortex sits at the interface between perception and cognition: it receives input from high-level visual and other sensory regions and projects to the hippocampus, which in turn projects to many cortical sites, including the ATL (Brown and Aggleton, 2001). Could this circuit be responsible for the acquisition of object-specific information that is too abstract to be represented in posterior IT? Recent evidence indicates that perirhinal cortex and hippocampus interact during successful retrieval of object-specific associations (Staresina et al., 2012), which suggests that the same circuit might be involved in retrieval processes, e.g., retrieving an object's name or semantic properties when seeing its picture. Whether the functional role of perirhinal cortex is mostly integrative (that is, acting as a semantic hub that represents object-specific information) or executive (encoding and retrieving abstract object-specific information stored elsewhere) remains an open question.

Another question underscored by this study concerns the extent to which object-specific knowledge depends on perirhinal function. Although Clarke and Tyler (2014) did not find a match for the object-specific semantics model in ATL sites other than perirhinal cortex, these other sites, including inferior lateral ATL, remain strong candidates for representing object-specific knowledge. Consistent with neuropsychological reports (Patterson et al., 2007), response patterns in inferior lateral ATL have been reported to carry conceptual object information (Peelen and Caramazza, 2012), possibly encoded and retrieved in collaboration with perirhinal cortex. fMRI distortion artifacts in inferior lateral ATL, however, might have prevented Clarke and Tyler (2014) from detecting effects in this region. The authors also did not detect object-specific semantic information in posterior IT. It is possible that this information is present at a spatial scale beyond that of a local searchlight. Another possible explanation is that although the within-category object representations in posterior IT might be specific enough to support discrimination of individual objects (Kriegeskorte et al., 2008b), the representations are predominantly visual. There is some evidence pointing toward a role for posterior IT in representing object-specific semantics, i.e., objects eliciting similar response patterns in this region are perceived as semantically similar (Carlson et al., 2014), but conclusive evidence would require de-confounding high-level visual and semantic similarity.

Finally, if the function of perirhinal cortex is to discriminate objects that are semantically or perceptually similar, these objects should be distinguishable based on perirhinal response patterns. The reported results indicate that more semantically similar objects elicit more similar perirhinal response patterns. Are these more similar patterns still distinct? This question cannot be answered conclusively based on the current results. The strength of RSA lies in aggregating representational information over many object representations, but this aggregation makes it difficult to draw conclusions about the discriminability of individual object pairs. A more rigorous approach would be to test, for each within-category object pair, whether the two objects can be discriminated based on their perirhinal response patterns. If this approach yielded negative results, it could mean that the effects are subtle and difficult to detect in noisy data, but it could also mean that it is semantic confusability, not object-specific knowledge per se, that is coded in perirhinal cortex.
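
Such a pairwise discrimination test could, for example, take the following form: for each object pair, a linear classifier is trained to distinguish the two objects' response patterns across stimulus repetitions and evaluated with cross-validation. The sketch below uses simulated data and assumes that repetition-wise pattern estimates are available; it illustrates the general approach rather than a specific published analysis.

```python
import numpy as np
from itertools import combinations
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_objects, n_reps, n_voxels = 6, 10, 50
# patterns[i, r] = response pattern for object i on stimulus repetition r
patterns = rng.standard_normal((n_objects, n_reps, n_voxels))

# For each object pair, cross-validated accuracy of a linear classifier
# trained to tell the two objects' patterns apart. Chance level is 0.5.
for i, j in combinations(range(n_objects), 2):
    X = np.vstack([patterns[i], patterns[j]])    # (2 * n_reps) x n_voxels
    y = np.array([0] * n_reps + [1] * n_reps)    # object labels
    accuracy = cross_val_score(LinearSVC(), X, y, cv=5).mean()
    print(f"objects {i} vs {j}: decoding accuracy = {accuracy:.2f}")
```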

Together, Clarke and Tyler (2014) provide an interesting clue as to how the brain maps visual objects onto semantic concepts. At the center of this mapping sits the perirhinal cortex, where neural response patterns reflect the conceptual similarity among distinct visual objects. More work is needed to clarify the computational and neural basis of perirhinal function within the distributed temporal system it occupies. Nevertheless, the current findings underscore the critical contribution of perirhinal cortex to the rich multidimensional abstractions that define our perceptions.

Footnotes

Editor's Note: These short, critical reviews of recent papers in the Journal, written exclusively by graduate students or postdoctoral fellows, are intended to summarize the important findings of the paper and provide additional insight and commentary. For more information on the format and purpose of the Journal Club, please see http://www.jneurosci.org/misc/ifa_features.shtml.

This work was funded by the Medical Research Council of the UK, and by a Wellcome Trust Project Grant to N. Kriegeskorte (Grant WT091540MA). I thank T.W. Schmitz for helpful comments on the manuscript.

References

  1. Brown MW, Aggleton JP. Recognition memory: what are the roles of the perirhinal cortex and hippocampus? Nat Rev Neurosci. 2001;2:51–61. doi: 10.1038/35049064.
  2. Bruffaerts R, Dupont P, Peeters R, De Deyne S, Storms G, Vandenberghe R. Similarity of fMRI activity patterns in left perirhinal cortex reflects semantic similarity between words. J Neurosci. 2013;33:18597–18607. doi: 10.1523/JNEUROSCI.1548-13.2013.
  3. Bussey TJ, Saksida LM, Murray EA. The perceptual-mnemonic/feature conjunction model of perirhinal cortex function. Q J Exp Psychol B. 2005;58:269–282. doi: 10.1080/02724990544000004.
  4. Carlson TA, Simmons RA, Kriegeskorte N, Slevc LR. The emergence of semantic meaning in the ventral temporal pathway. J Cogn Neurosci. 2014;26:120–131. doi: 10.1162/jocn_a_00458.
  5. Clarke A, Tyler LK. Object-specific semantic coding in human perirhinal cortex. J Neurosci. 2014;34:4766–4775. doi: 10.1523/JNEUROSCI.2828-13.2014.
  6. Kriegeskorte N, Mur M, Bandettini P. Representational similarity analysis—connecting the branches of systems neuroscience. Front Syst Neurosci. 2008a;2:4. doi: 10.3389/neuro.01.016.2008.
  7. Kriegeskorte N, Mur M, Ruff DA, Kiani R, Bodurka J, Esteky H, Tanaka K, Bandettini PA. Matching categorical object representations in inferior temporal cortex of man and monkey. Neuron. 2008b;60:1126–1141. doi: 10.1016/j.neuron.2008.10.043.
  8. Mur M, Ruff DA, Bodurka J, De Weerd P, Bandettini PA, Kriegeskorte N. Categorical, yet graded—single-image activation profiles of human category-selective cortical regions. J Neurosci. 2012;32:8649–8662. doi: 10.1523/JNEUROSCI.2334-11.2012.
  9. Mur M, Meys M, Bodurka J, Goebel R, Bandettini PA, Kriegeskorte N. Human object-similarity judgments reflect and transcend the primate-IT object representation. Front Psychol. 2013;4:128. doi: 10.3389/fpsyg.2013.00128.
  10. Patterson K, Nestor PJ, Rogers TT. Where do you know what you know? The representation of semantic knowledge in the human brain. Nat Rev Neurosci. 2007;8:976–987. doi: 10.1038/nrn2277.
  11. Peelen MV, Caramazza A. Conceptual object representations in human anterior temporal cortex. J Neurosci. 2012;32:15728–15736. doi: 10.1523/JNEUROSCI.1953-12.2012.
  12. Staresina BP, Fell J, Do Lam AT, Axmacher N, Henson RN. Memory signals are temporally dissociated in and across human hippocampus and perirhinal cortex. Nat Neurosci. 2012;15:1167–1173. doi: 10.1038/nn.3154.
