Human Brain Mapping. 2011 May 12;33(6):1375–1383. doi: 10.1002/hbm.21296

Exploring commonalities across participants in the neural representation of objects

Svetlana V Shinkareva 1, Vicente L Malave 2, Marcel Adam Just 3, Tom M Mitchell 4
PMCID: PMC6870121  PMID: 21567662

Abstract

The question of whether the neural encodings of objects are similar across different people is one of the key questions in cognitive neuroscience. This article examines the commonalities in the internal representation of objects, as measured with fMRI, across individuals in two complementary ways. First, we examine the commonalities in the internal representation of objects across people at the level of interobject distances, derived from whole brain fMRI data, and second, at the level of spatially localized anatomical brain regions that contain sufficient information for identification of object categories, without making the assumption that their voxel patterns are spatially matched in a common space. We examine the commonalities in internal representation of objects on 3T fMRI data collected while participants viewed line drawings depicting various tools and dwellings. This exploratory study revealed the extent to which the representation of individual concepts, and their mutual similarity, is shared across participants. Hum Brain Mapp, 2011. © 2011 Wiley‐Liss, Inc.

Keywords: fMRI, logistic regression, machine learning, multivariate data analysis, eigen decomposition, RV‐coefficient, STATIS

INTRODUCTION

The way that concrete objects are represented in the human brain is an important question in cognitive neuroscience. Recently, multivoxel pattern analysis has been applied to fMRI‐measured brain activity to associate the brain activity patterns with presented stimuli (see [Haynes and Rees, 2006; Norman et al., 2006; O'Toole et al., 2007; Pereira et al., 2009] for reviews of this approach). This approach has the potential to be particularly useful in determining how semantic information about objects is represented in the cerebral cortex. Using multivoxel pattern analysis, previous studies succeeded in identifying the cognitive states associated with viewing categories of objects [Carlson et al., 2003; Cox and Savoy, 2003; Hanson and Halchenko, 2007; Hanson et al., 2004; Haxby et al., 2001; O'Toole et al., 2005; Polyn et al., 2005]. Moreover, the category of an object that a participant was viewing [Shinkareva et al., 2008, 2011] or a concrete noun that a participant was reading [Just et al., 2010; Shinkareva et al., 2011] can be identified based only on other participants' characteristic neural activation patterns, establishing the commonality in how different people's brains represent the same object. The similarity of object representation across individuals is of particular interest.

Most studies of object representation that use multivoxel pattern analysis focus on the accuracy with which the stimulus object can be predicted from observed fMRI activation. The distributed nature of object representation makes it challenging to report voxel locations from pattern‐classification results, and the locations can be unstable depending on the numerical techniques used to fit a classifier [Carroll et al., 2009]. The questions of where the discriminating information is located in the brain and how internal representations of objects vary across people have not been addressed in a systematic fashion.

In this work we explore the degree of commonality in object representation across people. Describing the commonalities in the way objects are represented by different people is complicated by potentially nonintersecting voxel‐level activations [Kriegeskorte and Bandettini, 2007], due to individual differences in functional organization as well as the methodological difficulty of normalizing the morphological differences among different people. In this work we examine the similarities in object representation across people without making the assumption that their voxel patterns are spatially matched in a common space. We consider two complementary approaches that examine the commonalities in the neural representation of objects across people: at the level of similarities between internal representations of objects, in terms of their whole brain neural signature, and at the level of spatially localized anatomical brain regions that contain sufficient information for identification of object categories.

Similarity between internal representations of a pair of objects for an individual can be derived from fMRI data, making it possible to examine the similarity structure for a set of objects in a lower‐dimensional space using scaling techniques, for example multidimensional scaling [Edelman et al., 1998; Kriegeskorte et al., 2008; Tzagarakis et al., 2009]. In this work we are interested in examining the similarity structure for internal representations of objects across people, thus focusing on multiway data (i.e., objects‐by‐voxels‐by‐people). Multiway data analysis requires special considerations; standard two‐way analysis methods often fail to find underlying structure in multiway arrays [Acar and Yener, 2009]. Several solutions have been proposed to examine similarity structure for multiway data, such as the INDSCAL procedure [Carroll and Chang, 1970] and the PARAFAC procedure [Harshman, 1970]. In this work we employ a generalization of principal component analysis for multiple matrices, STATIS, which stands for Structuration des Tableaux A Trois Indices de la Statistique [Lavit et al., 1994], to compare similarities between objects—in terms of their whole brain neural signatures—across participants. STATIS is an exploratory data analysis method for the comparison of multiple matrices, and it offers several advantages. First, unlike many multiway data analysis techniques, it is a noniterative procedure. Second, STATIS is based on the cross‐product matrix, thus allowing the number of voxels used in the analysis to vary between participants. The STATIS procedure has been shown to be robust and computationally efficient [Stanimirova et al., 2004], and has previously been applied in the neuroimaging literature [Abdi et al., 2009; O'Toole et al., 2007; Shinkareva et al., 2008]. For an in‐depth, nonneuroimaging treatment of the STATIS procedure the reader is referred to Abdi and Valentin [2007].
The commonality in the representations revealed by STATIS analysis is a commonality in the distances between internal representations of objects, derived from whole‐brain fMRI data, and does not imply that objects are encoded in the same spatial locations across different people. Therefore, additional analysis localizing the similarities in object representation across participants is needed.

High classification accuracies for the two categories have previously been shown for the whole brain treated as a single large‐scale ROI [Shinkareva et al., 2008]. In this work we introduce an analysis of interobject distances based on the whole brain (a single large‐scale ROI). The analysis of interobject distances could also be done locally, for each of the ROIs; however, we chose not to do that analysis because of the limited stimulus set. Thus we first explored a commonality in the internal representation of objects across participants at the whole brain level, and showed that part of the variability in the data was explained by the category structure of the objects. Second, to spatially localize these similarities, we identified single anatomical brain regions that on average contain sufficient information to identify object categories across participants. The level of anatomical regions (as compared with a searchlight approach [Kriegeskorte et al., 2006]) is appropriate for the comparison of the neural representation of objects across participants because, due to variations in anatomy, functional areas may not overlap exactly across participants. By focusing on distinct anatomical regions we trade the ability to make fine spatial distinctions for a straightforward, meaningful interpretation.

METHODS

Similarities in Interobject Distances

Our aim is to examine consistencies in object representations in terms of interobject distances, across individuals. Let X g, g = 1,…,G be an I × V g preprocessed fMRI data matrix, where I is the number of objects and V g is the number of voxels available for the gth participant. Let S g = X g X gT be a cross‐product matrix for the gth participant, normalized by the first eigenvalue [Abdi et al., 2009], that represents the similarity between objects. The interobject similarity structure for each participant can be examined in a lower‐dimensional space with principal component analysis. In this work, however, instead of focusing on one participant, we are interested in principles of neural organization which are consistent across individuals. Therefore, we examine a cross‐product matrix that is combined across individuals to account for biologically plausible interparticipant differences. We combine participant cross‐product matrices into a compromise cross‐product matrix, which is a weighted average of individual cross‐product matrices. The weights are chosen such that the compromise is as representative of all the participant cross‐product matrices as possible [Abdi et al., 2009]. Thus the compromise matrix expresses the agreement among the interobject distances across participants, and is constructed such that participants with configurations of objects similar to those of other participants are assigned larger weights, and participants with configurations of objects most different from those of others are assigned lower weights. As a consequence, unusual or atypical observations have less influence on the result. Construction of the compromise matrix is illustrated in Figure 1. The compromise matrix is further analyzed by the eigen‐decomposition. Together these steps constitute the STATIS procedure.
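As a minimal sketch of this first step (assuming NumPy, with random data standing in for a real objects‐by‐voxels fMRI matrix), the per‐participant cross‐product matrix normalized by its first eigenvalue can be computed as:

```python
import numpy as np

def crossproduct_matrix(X):
    """Cross-product matrix S = X X^T for one participant's objects-by-voxels
    data, normalized by its first (largest) eigenvalue so that participants
    with different voxel counts and signal scales remain comparable."""
    S = X @ X.T
    first_eig = np.linalg.eigvalsh(S)[-1]  # eigvalsh returns ascending order
    return S / first_eig

# toy example: 10 objects x 500 voxels (random stand-in for fMRI data)
rng = np.random.default_rng(0)
S = crossproduct_matrix(rng.standard_normal((10, 500)))
```

After this normalization the largest eigenvalue of each participant's S is exactly 1, which is what makes the later weighted averaging across participants well behaved.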

Figure 1. Schematic illustration of the object‐by‐object compromise matrix construction from object‐by‐voxels matrices for each of the participants.

Formally, the compromise matrix S+ is defined as S+ = Σ_{g=1}^{G} α_g S_g, where α_g is the weight for the gth participant, subject to the constraints Σ_{g=1}^{G} α_g = 1 and α_g ≥ 0, g = 1,…,G. The weights are derived from a G × G between‐participants cosine matrix C, whose (g, g′) entry is the value of the RV‐coefficient [Escoufier, 1973; Robert and Escoufier, 1976], computed as:

$$\mathbf{C}(g,g') = \mathrm{RV}\left(\mathbf{S}_g,\,\mathbf{S}_{g'}\right) = \frac{\mathrm{tr}\left(\mathbf{S}_g^{T}\mathbf{S}_{g'}\right)}{\left[\mathrm{tr}\left(\mathbf{S}_g^{T}\mathbf{S}_g\right)\,\mathrm{tr}\left(\mathbf{S}_{g'}^{T}\mathbf{S}_{g'}\right)\right]^{1/2}}$$

The RV‐coefficient has been previously used in the neuroimaging literature [Abdi et al., 2009; Glascher et al., 2009; Kherif et al., 2003; O'Toole et al., 2007; Shinkareva et al., 2006, 2008] and is described in detail in Abdi [2007] and Ramsay et al. [1984]. It measures the overall similarity of two matrices and is analogous to a squared coefficient of correlation. RV‐coefficient values vary from 0 to 1, and larger values indicate higher similarity between the two matrices. The cosine matrix C is therefore positive semidefinite with positive elements, and the first principal component of C has only positive elements (a Perron‐Frobenius result; e.g., Rencher [2002], pp. 34 and 402). Consequently, the eigen‐decomposition of C corresponds to a noncentered principal component analysis of C [Abdi et al., 2009]. The weights α are given by the elements of the first eigenvector of C, rescaled to sum to one. The compromise matrix S+ is analyzed by eigen‐decomposition: S+ = QΛQ^T, where Q is the matrix of eigenvectors, such that Q^T Q = I, and Λ is the diagonal matrix of eigenvalues of S+. The quality of the compromise can be assessed from the first eigenvalue of C. The objects are then represented as points with coordinates QΛ^{1/2} in the compromise space that is common to all participants (this is equivalent to PCA on the compromise matrix). Furthermore, cross‐product matrices for individual participants can be projected into the compromise space as S_g QΛ^{−1/2}, allowing for a direct visual comparison.
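The procedure as a whole (RV‐coefficient matrix, weights from its first eigenvector, weighted compromise, eigendecomposition) can be sketched as follows. This is an illustrative NumPy implementation on synthetic data; the function names are ours, not from any STATIS software:

```python
import numpy as np

def rv_coefficient(S1, S2):
    """RV-coefficient between two cross-product matrices (ranges 0..1)."""
    num = np.trace(S1.T @ S2)
    den = np.sqrt(np.trace(S1.T @ S1) * np.trace(S2.T @ S2))
    return num / den

def statis(S_list):
    """STATIS: weighted-average compromise of participant cross-product
    matrices, followed by eigendecomposition of the compromise."""
    G = len(S_list)
    C = np.array([[rv_coefficient(S_list[g], S_list[h])
                   for h in range(G)] for g in range(G)])
    evals, evecs = np.linalg.eigh(C)           # ascending eigenvalue order
    u = np.abs(evecs[:, -1])                   # first eigenvector; entrywise
    alpha = u / u.sum()                        # positive (Perron-Frobenius),
                                               # rescaled to sum to one
    S_plus = sum(a * S for a, S in zip(alpha, S_list))   # compromise matrix
    lam, Q = np.linalg.eigh(S_plus)
    lam, Q = lam[::-1], Q[:, ::-1]             # sort descending
    coords = Q * np.sqrt(np.maximum(lam, 0))   # object coordinates Q Lambda^1/2
    return alpha, S_plus, coords

# synthetic stand-in: 12 participants, 10 objects, varying voxel counts
rng = np.random.default_rng(1)
S_list = []
for g in range(12):
    X = rng.standard_normal((10, 400 + 10 * g))
    S = X @ X.T
    S_list.append(S / np.linalg.eigvalsh(S)[-1])
alpha, S_plus, coords = statis(S_list)
```

Note that the voxel count differs across the synthetic participants, which the cross‐product formulation accommodates; individual participants could then be projected into the compromise space via S_g QΛ^{−1/2} (restricted to the nonzero eigenvalues).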

Localizing Category Identification Accuracies Across Participants

Next, we identify brain regions that on average support within‐participant object‐category identification across participants. For each participant and for each region, we learn a classification function of the form f: fMRI image → Y, where fMRI image denotes the preprocessed fMRI data for one example (e.g., castle), and Y is a set of semantic categories to be discriminated (e.g., tools and dwellings). Then, we test for significance of mean accuracies across participants, accounting for multiple comparisons. Thus each of the anatomical regions selected by this process on average contains sufficient information for category identification, although within‐region organization may vary across participants. These steps are described in more detail below.

To obtain an unbiased estimate of classification accuracy the data is divided into training and test sets. K‐fold cross validation is used, in which a set of examples per class is left out for each fold (in our illustration, a classifier was trained on five presentations of each object—50 exemplars, and tested on the sixth one—10 exemplars). A classifier is built from the training set, and classification performance is evaluated on only the left‐out test set. Classification accuracy can then be estimated as the average accuracy across folds:

$$\mathrm{Accuracy} = \frac{1}{k}\sum_{i=1}^{k}\left(1 - E_i\right)$$

where k is the number of folds and E i is the error rate for the ith fold.
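In code, the aggregation over folds is a one‐liner (the error rates below are invented for illustration, not results from the study):

```python
def cv_accuracy(error_rates):
    """Mean accuracy over k cross-validation folds, given per-fold error
    rates E_i: accuracy = (1/k) * sum(1 - E_i)."""
    k = len(error_rates)
    return 1.0 - sum(error_rates) / k

# six folds, one presentation of each object left out per fold
acc = cv_accuracy([0.1, 0.0, 0.2, 0.1, 0.0, 0.2])
```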

In this analysis we consider anatomical regions defined by the Anatomical Automatic Labeling (AAL) system [Tzourio‐Mazoyer et al., 2002]. In addition to existing AAL regions, left and right intraparietal sulcus regions were defined, and superior, middle, and inferior temporal gyrus regions were separated into anterior, middle, and posterior sections based on planes F and D from the Rademacher scheme [Rademacher et al., 1992], resulting in a total of 71 regions. For each anatomical region, a classifier is trained on all voxels in that brain region to discriminate between the two categories (tools and dwellings in our example) using a training data set. Within each participant, a cross‐validated accuracy for each individual region is computed with a logistic regression classifier with L2 regularization [Bishop, 2006], using all the voxels from that region. Although all of the voxels in a region are included in the analysis, the logistic regression assigns greater weights to those voxels that contribute most to the classification.
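A per‐region classification step along these lines might look like the following sketch. It assumes scikit‐learn as a stand‐in for the authors' L2‐regularized logistic regression (the original implementation is not specified here), and the fold scheme leaves one presentation of every object out per fold:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def region_cv_accuracy(X, y, presentation, region_cols):
    """Cross-validated accuracy for one anatomical region.

    X            : (n_examples, n_voxels) preprocessed fMRI data
    y            : (n_examples,) category labels (e.g., tool vs. dwelling)
    presentation : (n_examples,) presentation index, used as the fold label
    region_cols  : column indices of the voxels in this region
    """
    Xr = X[:, region_cols]
    fold_accs = []
    for fold in np.unique(presentation):
        train, test = presentation != fold, presentation == fold
        clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
        clf.fit(Xr[train], y[train])
        fold_accs.append(clf.score(Xr[test], y[test]))
    return float(np.mean(fold_accs))
```

Running this once per region (and per participant) yields the region‐by‐participant accuracy table that the significance test below operates on.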

To assess whether an anatomical region contains sufficient information to decode the object category on average across participants, the classification accuracy for each anatomical region is compared with a binomial distribution B(n, p), where n is the number of exemplars (60 in our illustration) and p is the probability of successful category identification under the hypothesis that exemplars are randomly assigned to the two categories (0.5 in our illustration) [Pereira et al., 2009]. P‐values (computed using a normal approximation) are obtained for the mean classification accuracy, computed across participants for each region. The P‐values are then compared with the 0.001 level of significance using the Bonferroni correction to account for multiple comparisons, which is appropriate for a map at the anatomical region level.
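The significance test can be sketched with the standard library alone (the accuracy values used below are our illustrations, not the paper's results):

```python
import math

def pvalue_normal_approx(accuracy, n=60, p=0.5):
    """Upper-tail P-value for the observed number of correct identifications
    under B(n, p), using the normal approximation to the binomial."""
    mu = n * p
    sigma = math.sqrt(n * p * (1 - p))
    z = (accuracy * n - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))  # P(X >= observed)

# Bonferroni-corrected threshold over the 71 anatomical regions
threshold = 0.001 / 71
```

For example, with n = 60 and p = 0.5, an accuracy of 0.80 clears this corrected threshold while 0.55 does not.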

fMRI Data: Representation of Tools and Dwellings

We examine the commonality in neural representation of objects across participants on an fMRI data set reported in Shinkareva et al. [2008]. Twelve participants viewed line drawings of 10 objects from tools and dwellings categories. Ten line drawings were presented six times, each time in a different random permutation order. Participants were asked to think of the same object properties each time they saw a given object, to encourage activation of multiple attributes of the depicted object, besides those used for visual recognition. Each stimulus was presented for 3 s, followed by a 7 s rest period, during which the participants were instructed to fixate on an X displayed in the center of the screen.

Functional images were acquired on a Siemens Allegra 3.0T scanner (Siemens, Erlangen, Germany) at the Brain Imaging Research Center of Carnegie Mellon University and the University of Pittsburgh using a gradient echo EPI pulse sequence with TR = 1,000 ms, TE = 30 ms and a 60° flip angle. Seventeen 5‐mm thick oblique‐axial slices were imaged with a gap of 1 mm between slices. The acquisition matrix was 64 × 64 with 3.125 × 3.125 × 5 mm3 voxels. Data preprocessing, typical for fMRI data, was performed with Statistical Parametric Mapping software (SPM99, Wellcome Department of Cognitive Neurology, London, UK). The data were corrected for slice timing, motion, linear trend, and were temporally smoothed with a high‐pass filter using a 190‐s cutoff. So that brain regions can be compared systematically across participants, the data were normalized to the MNI template brain image using a 12‐parameter affine transformation and resampled to 3 × 3 × 3 mm3 voxels.

The percent signal change relative to the fixation condition was computed at each voxel for each stimulus presentation. A single fMRI image was created for each of the 60 item presentations by taking the mean of the images collected 4 s, 5 s, 6 s, and 7 s after stimulus onset (to account for the delay in the hemodynamic response).
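As a sketch of this step (NumPy, with invented array shapes; the real data are 4‑D volumes, flattened here to a time‐by‐voxels matrix):

```python
import numpy as np

def presentation_image(bold, onset, fixation_mean):
    """One image per stimulus presentation: percent signal change relative
    to fixation, averaged over the volumes acquired 4, 5, 6, and 7 s after
    stimulus onset (TR = 1 s), to account for the hemodynamic delay.

    bold          : (T, V) time-by-voxels array of raw signal values
    onset         : index of the volume at stimulus onset
    fixation_mean : (V,) mean fixation-condition value per voxel
    """
    window = bold[onset + 4 : onset + 8]                    # 4 volumes
    psc = 100.0 * (window - fixation_mean) / fixation_mean  # % signal change
    return psc.mean(axis=0)
```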

The data for each presentation of an object were further normalized across all voxels to have zero mean and unit variance to equate between‐participants variation in the exemplars. There were a total of 60 example images for each participant (10 objects, six presentations). For the analysis of interobject distances, the data were averaged across the six presentations of each object, to get a more reliable estimate of neural activity. Furthermore, to reduce the dimensionality of the data, only voxels inside the brain's gray matter present in all participants, a total of 4,561 voxels, were included in the analysis. All voxels within each of the anatomical regions were used for localizing category identification accuracies across participants. A classifier was trained on five presentations of each object—50 exemplars, and tested on the sixth presentation—10 exemplars. Classification performance was evaluated using sixfold cross‐validation, such that one exemplar per class (10 exemplars) was left out on each cross‐validation fold. Thus training and test sets were independent [Mitchell et al., 2004].
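The two per‐exemplar steps above, normalization across voxels and averaging over presentations, can be sketched as follows (NumPy; the array layout is our assumption):

```python
import numpy as np

def normalize_exemplars(X):
    """Scale each example (row) across voxels to zero mean and unit
    variance, equating between-participant variation in the exemplars."""
    mu = X.mean(axis=1, keepdims=True)
    sd = X.std(axis=1, keepdims=True)
    return (X - mu) / sd

def average_presentations(X, object_ids):
    """Average the presentations of each object (used for the interobject
    distance analysis), yielding one row per unique object."""
    return np.vstack([X[object_ids == o].mean(axis=0)
                      for o in np.unique(object_ids)])
```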

RESULTS

First, we examined how similar the internal representation of objects is across participants, in terms of the consistency of interobject distances derived from whole‐brain fMRI data. The internal representation of objects that was common to 12 participants was revealed by the principal components analysis of the compromise matrix. Despite the considerable variation in the whole‐brain activation patterns among participants, the compromise matrix explained 61% of the variability in the set of individual cross‐product matrices. Thus the agreement among the participants was large enough to warrant an analysis of the compromise matrix S+. There was a common component, and the interpretation of the compromise was therefore meaningful. At the same time, some portion of the internal representation of objects was idiosyncratic to individual participants. Examination of the participants' similarity structure based on the analysis of the between‐participants' similarity matrix revealed that most participants were similar in terms of their internal representation of objects. Participants with the largest projections on the first eigenvector (e.g., P6, P8) were more similar in terms of the internal representation of objects, and only a few participants (for example, participant 12) differed from the rest in their internal representations of objects. Participant weights are shown in Figure 2. Participant differences in this analysis could not be explained by either motion during the scan or gender. Next, we examined the object structure common to all participants. The first principal component of the compromise matrix explained 16% of the total variance and represented a contrast between the tools and dwellings categories (see Fig. 3). The remaining components were less amenable to interpretation; moreover, the comparison of lower‐dimensional representations derived for each of the participants separately showed a substantial amount of individual variation beyond the first component.
There was a commonality in the internal representation of objects across participants, and part of the variability in the data was explained by the category structure of the objects.

Figure 2. Participant weights derived from the eigen‐decomposition of the between‐participants' similarity matrix C. Participants with configurations of objects similar to those of other participants were assigned larger weights (e.g., P6, P8), and participants with configurations of objects most different from those of others (e.g., P12) were assigned lower weights.

Figure 3. Objects, shown as word labels, in the space defined by the first two principal components of the compromise matrix. The first component accounts for 16% of the variability in the data, and contrasts tools (shown in red) and dwellings (shown in blue) categories. The second component accounts for 13%, and is not easily interpretable.

Individual differences in the internal representation of objects can be seen from the examination of the projections for each of the participants into the compromise space. Each object's position in the compromise space is the centroid of this object's positions for each of the 12 participants. Generally, there was more agreement among the participants for the representation of some objects than for others, highlighting individual differences among participants. For example, there was more agreement on castle, and less on hut (see Fig. 4). The object representations of Participant 12 (shown in red) differed most from those of the other participants. This is the participant who was assigned the lowest weight and contributed less to the construction of the compromise (see Fig. 2).

Figure 4. Projection of participants into the space defined by the first two principal components of the compromise matrix. Each object location (shown as word label) is a weighted center of participants' locations for that object. Each participant is shown in a unique color. Participant 12 (shown in red) differs the most from other participants.

Examination of similarities in interobject distances revealed a common component in the internal representation of objects that is stable across participants and contains category structure. Importantly, we note that the commonality in the representations revealed by this analysis is a commonality in the distances between internal representations of objects, derived from whole brain fMRI data, and does not imply that objects are encoded in the same spatial locations across different people.

Second, to spatially localize the commonalities in representation, we identified anatomical regions that contain information for category identification. For each participant, a classifier was trained using voxels from only one anatomical region (such as left inferior parietal lobule) at a time. Out of 71 anatomical regions, 25 of them contained adequate information for meaningful object category identification on average across participants (see Fig. 5). Thus, the object categories are represented in many regions of the cortex, and those regions are similar across participants. The regions that generated the highest accuracies across participants in this single‐region identification were the bilateral primary and secondary visual areas, cerebellum, parietal and posterior temporal areas, and left frontal areas: inferior, superior, and precentral gyri and insula (Figs. 5 and 6). One participant (P4) had very high category identification accuracies for most of the selected anatomical regions, compared with the rest of the participants. Participants P4, P8, P6, and P11 had similar identification accuracies for the selected anatomical regions and had higher category identification accuracies compared with other participants. Notice that these are the same participants who were assigned larger weights in the analysis of interobject distances, i.e., had similar patterns of interobject distances (see Fig. 2).

Figure 5. Mean classification accuracy for classification of tools versus dwellings over 12 participants shown for each of the anatomical regions. Vertical line indicates a threshold at α = 0.001 level of significance with Bonferroni correction for multiple comparisons. Filled circles correspond to anatomical regions where mean accuracy values across participants were significant (at α = 0.001). Anatomical regions with significant mean accuracy values across participants are shown on the three‐dimensional anatomical rendering.

Figure 6. Participant‐specific accuracies for tools versus dwellings categories for the anatomical regions with significant (α = 0.001) mean identification accuracy across participants. Participants are ordered by whole brain classification accuracy. Accuracies above chance level are shown in color.

CONCLUSIONS AND DISCUSSION

We examined the commonalities in the neural representation of objects across participants in two complementary ways: first, at the level of similarities between objects based on internal representations derived from whole‐brain fMRI data, and second, at the level of spatially localized anatomical brain regions. We found that, despite the considerable variation in the whole‐brain activation patterns among participants, there was a commonality in the internal representation of objects, and part of the variability in the data was explained by the category structure of the objects. We also identified anatomical regions that supported category identification of objects, despite the individual differences in functional organization and the methodological difficulty of normalizing the morphological differences among participants. Thus we have demonstrated the extent to which representations of objects and their mutual similarities are shared across participants. These findings indicate the commonality of the neural basis of this type of object knowledge across participants at the level of semantic property representations (and not just visual features).

Examining the internal representation of objects derived from fMRI data opens possibilities for future investigations of individual differences in representations. Similarities based on the internal representation of objects derived from fMRI data can be compared with internal representations derived from empirically obtained judgment data or other models of semantic space, for example those based on feature norming studies [Cree and McRae, 2003; McRae et al., 2005], or lexical co‐occurrence models [Andrews et al., 2009; Church and Hanks, 1990; Landauer and Dumais, 1997; Lund and Burgess, 1996] using representational similarity analysis [Kriegeskorte et al., 2008].

Semantic categories of an object viewed by participants were accurately identified from fMRI activity in several regions, possibly reflecting the distributed representation across cortical areas that are specialized for various types of object properties [Goldberg et al., 2006; Haxby et al., 2001; Martin et al., 2000]. For example, ventral premotor cortex and posterior parietal cortex were previously implicated in motor representations associated with tool usage [Chao and Martin, 2000; Culham et al., 2006; Phillips et al., 2002]. The inclusion of motor and somatosensory areas in object representations is also consistent with "embodied cognition," a theoretical position holding that conceptual representations contain perceptual and motor components corresponding to human interactions with real entities in the physical environment [e.g., Glenberg, 1997]. There are multiple brain regions, besides classical object‐selective cortex, that contain information about the object category. These results are consistent with previous findings of the distributed patterns of activation evoked by objects [Ishai et al., 1999, 2000; Mechelli et al., 2004].

It is quite striking that single regions contain, on their own, enough information to decode the object category. We make no claim that the information in the different regions is equivalent. Methods such as repetition priming [Grill‐Spector et al., 1999; James et al., 2002; Vuilleumier et al., 2002] or dynamically adaptive imaging [Cusack et al., 2010] may be useful to further investigate what object properties are represented in various regions, and to link the observed neural data to the statistics of the stimuli that were used in the experiment. Furthermore, although beyond the scope of this paper, it might be of interest to identify whether the information contained in the identified regions is used in behavioral performance [Williams et al., 2007].

In summary, the application of various quantitative techniques to fMRI is increasingly revealing the existence of semantically organized structure in the pattern of fMRI‐measured brain activation during the perception of objects, as well as revealing a degree of commonality across people in this semantic organization.

Acknowledgements

The authors thank the anonymous reviewers for helpful comments on previous versions of this article.

REFERENCES

  1. Abdi H ( 2007): RV Coefficient and congruence coefficient In: Salkind NJ, editor. Encyclopedia of Measurement and Statistics. Thousand Oaks, CA: Sage; pp 849–853. [Google Scholar]
  2. Abdi H, Dunlop JP, Williams LJ ( 2009): How to compute reliability estimates and display confidence and tolerance intervals for pattern classifiers using the Bootstrap and 3‐way multidimensional scaling (DISTATIS). Neuroimage 45: 89–95. [DOI] [PubMed] [Google Scholar]
  3. Abdi H, Valentin D ( 2007): STATIS In: Salkind NJ, editor. Encyclopedia of Measurement and Statistics. Thousand Oaks, CA: Sage; pp 955–962. [Google Scholar]
  4. Acar E, Yener B ( 2009): Unsupervised multiway data analysis: A literature survey. IEEE Trans Knowl Data Eng 21: 6–20. [Google Scholar]
  5. Andrews M, Vigliocco G, Vinson D ( 2009): Integrating experiential and distributional data to learn semantic representations. Psychol Rev 116: 463–498. [DOI] [PubMed] [Google Scholar]
  6. Bishop CM ( 2006): Pattern Recognition and Machine Learning. New York: Springer. [Google Scholar]
  7. Carlson TA, Schrater P, He S ( 2003): Patterns of activity in the categorical representations of objects. J Cogn Neurosci 15: 704–717. [DOI] [PubMed] [Google Scholar]
  8. Carroll JD, Chang JJ ( 1970): Analysis of individual differences in multidimensional scaling via an N‐way generalization of ‘Eckard‐Young’ decomposition. Psychometrika 35: 283–320. [Google Scholar]
  9. Carroll MK, Cecchi GA, Rish I, Garg R, Rao AR (2009): Prediction and interpretation of distributed neural activity with sparse models. Neuroimage 44: 112–122.
  10. Chao LL, Martin A (2000): Representation of manipulable man‐made objects in the dorsal stream. Neuroimage 12: 478–484.
  11. Church KW, Hanks P (1990): Word association norms, mutual information, and lexicography. Comput Linguist 16: 22–29.
  12. Cox DD, Savoy RL (2003): Functional magnetic resonance imaging (fMRI) “brain reading”: Detecting and classifying distributed patterns of fMRI activity in human visual cortex. Neuroimage 19: 261–270.
  13. Cree GS, McRae K (2003): Analyzing the factors underlying the structure and computation of the meaning of chipmunk, cherry, chisel, cheese, and cello (and many other such concrete nouns). J Exp Psychol Gen 132: 163–201.
  14. Culham JC, Cavina‐Pratesi C, Singhal A (2006): The role of parietal cortex in visuomotor control: What have we learned from neuroimaging? Neuropsychologia 44: 2668–2684.
  15. Cusack R, Veldsman M, Naci L, Mitchell D (2010): Using dynamically adaptive imaging with fMRI to rapidly characterize neural representations. In: ISMRM, 18th Scientific Meeting, Stockholm. p 2346.
  16. Edelman S, Grill‐Spector K, Kushnir T, Malach R (1998): Toward direct visualization of the internal shape representation space by fMRI. Psychobiology 26: 309–321.
  17. Glascher J, Tranel D, Paul LK, Rudrauf D, Rorden C, Hornaday A, Grabowski T, Damasio H, Adolphs R (2009): Lesion mapping of cognitive abilities linked to intelligence. Neuron 61: 681–691.
  18. Glenberg AM (1997): What memory is for. Behav Brain Sci 20: 1–55.
  19. Goldberg RF, Perfetti CA, Schneider W (2006): Perceptual knowledge retrieval activates sensory brain regions. J Neurosci 26: 4917–4921.
  20. Grill‐Spector K, Kushnir T, Edelman S, Avidan G, Itzchak Y (1999): Differential processing of objects under various viewing conditions in the human lateral occipital complex. Neuron 24: 187–203.
  21. Hanson SJ, Halchenko YO (2007): Brain reading using full brain support vector machines for object recognition: There is no ‘face’ identification area. Neural Comput 20: 486–503.
  22. Hanson SJ, Matsuka T, Haxby JV (2004): Combinatorial codes in ventral temporal lobe for object recognition: Haxby (2001) revisited: Is there a “face” area? Neuroimage 23: 156–166.
  23. Harshman RA (1970): Foundations of the PARAFAC procedure: Models and conditions for an ‘exploratory’ multi‐modal factor analysis. UCLA Work Pap Phonetics 16: 1–84.
  24. Haxby JV, Gobbini MI, Furey ML, Ishai A, Schouten JL, Pietrini P (2001): Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science 293: 2425–2430.
  25. Haynes JD, Rees G (2006): Decoding mental states from brain activity in humans. Nat Rev Neurosci 7: 523–534.
  26. Ishai A, Ungerleider LG, Martin A, Haxby JV (2000): The representation of objects in the human occipital and temporal cortex. J Cogn Neurosci 12(Suppl 2): 35–51.
  27. Ishai A, Ungerleider LG, Martin A, Schouten JL, Haxby JV (1999): Distributed representation of objects in the human ventral visual pathway. Proc Natl Acad Sci USA 96: 9379–9384.
  28. James TW, Humphrey GK, Gati JS, Menon RS, Goodale MA (2002): Differential effects of viewpoint on object‐driven activation in dorsal and ventral streams. Neuron 35: 793–801.
  29. Just MA, Cherkassky VL, Aryal S, Mitchell TM (2010): A neurosemantic theory of concrete noun representation based on the underlying brain codes. PLoS One 5: e8622.
  30. Kherif F, Poline JB, Meriaux S, Benali H, Flandin G, Brett M (2003): Group analysis in functional neuroimaging: Selecting subjects using similarity measures. Neuroimage 20: 2197–2208.
  31. Kriegeskorte N, Bandettini P (2007): Analyzing for information, not activation, to exploit high‐resolution fMRI. Neuroimage 38: 649–662.
  32. Kriegeskorte N, Goebel R, Bandettini P (2006): Information‐based functional brain mapping. Proc Natl Acad Sci USA 103: 3863–3868.
  33. Kriegeskorte N, Mur M, Bandettini P (2008): Representational similarity analysis—Connecting the branches of systems neuroscience. Front Syst Neurosci 2: 1–28.
  34. Landauer TK, Dumais ST (1997): A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychol Rev 104: 211–240.
  35. Lavit C, Escoufier Y, Sabatier R, Traissac P (1994): The ACT (STATIS method). Comput Stat Data Anal 18: 97–119.
  36. Lund K, Burgess C (1996): Producing high‐dimensional semantic spaces from lexical co‐occurrence. Behav Res Methods Instrum Comput 28: 203–208.
  37. Martin A, Ungerleider LG, Haxby JV (2000): Category specificity and the brain: The sensory/motor model of semantic representations of objects. In: Gazzaniga MS, editor. The New Cognitive Neurosciences. Cambridge: The MIT Press; pp 1023–1035.
  38. McRae K, Cree GS, Seidenberg MS, McNorgan C (2005): Semantic feature production norms for a large set of living and nonliving things. Behav Res Methods 37: 547–559.
  39. Mechelli A, Price CJ, Friston KJ, Ishai A (2004): Where bottom‐up meets top‐down: Neuronal interactions during perception and imagery. Cereb Cortex 14: 1256–1265.
  40. Mitchell TM, Hutchinson R, Niculescu RS, Pereira F, Wang X, Just MA, Newman S (2004): Learning to decode cognitive states from brain images. Mach Learn 57: 145–175.
  41. Norman KA, Polyn SM, Detre GJ, Haxby JV (2006): Beyond mind‐reading: Multi‐voxel pattern analysis of fMRI data. Trends Cogn Sci 10: 424–430.
  42. O'Toole A, Jiang F, Abdi H, Haxby JV (2005): Partially distributed representations of objects and faces in ventral temporal cortex. J Cogn Neurosci 17: 580–590.
  43. O'Toole AJ, Jiang F, Abdi H, Penard N, Dunlop JP, Parent MA (2007): Theoretical, statistical, and practical perspectives on pattern‐based classification approaches to the analysis of functional neuroimaging data. J Cogn Neurosci 19: 1735–1752.
  44. Pereira F, Mitchell T, Botvinick M (2009): Machine learning classifiers and fMRI: A tutorial overview. Neuroimage 45: S199–S209.
  45. Phillips JA, Noppeney U, Humphreys GW, Price CJ (2002): Can segregation within the semantic system account for category‐specific deficits? Brain 125: 2067–2080.
  46. Polyn SM, Natu VS, Cohen JD, Norman KA (2005): Category‐specific cortical activity precedes retrieval during memory search. Science 310: 1963–1966.
  47. Rademacher J, Galaburda AM, Kennedy DN, Filipek PA, Caviness VS (1992): Human cerebral cortex: Localization, parcellation and morphometry with magnetic resonance imaging. J Cogn Neurosci 4: 352–374.
  48. Ramsay JO, Ten Berge J, Styan GPH (1984): Matrix correlation. Psychometrika 49: 403–423.
  49. Rencher AC (2002): Methods of Multivariate Analysis. Wiley‐Interscience.
  50. Shinkareva SV, Malave VL, Mason RA, Mitchell TM, Just MA (2011): Commonality of neural representations of words and pictures. Neuroimage 54: 2418–2425.
  51. Shinkareva SV, Mason RA, Malave VL, Wang W, Mitchell TM, Just MA (2008): Using fMRI brain activation to identify cognitive states associated with perception of tools and dwellings. PLoS One 3: e1394.
  52. Shinkareva SV, Ombao HC, Sutton BP, Mohanty A, Miller GA (2006): Classification of functional brain images with a spatio‐temporal dissimilarity map. Neuroimage 33: 63–71.
  53. Stanimirova I, Walczak B, Massart DL, Simeonov V, Saby CA, Crescenzo ED (2004): STATIS, a three‐way method for data analysis. Application to environmental data. Chemometr Intell Lab Syst 73: 219–233.
  54. Tzagarakis C, Jerde TA, Lewis SM, Uğurbil K, Georgopoulos AP (2009): Cerebral cortical mechanisms of copying geometrical shapes: A multidimensional scaling analysis of fMRI patterns of activation. Exp Brain Res 194: 369–380.
  55. Tzourio‐Mazoyer N, Landeau B, Papathanassiou D, Crivello F, Etard O, Delcroix N, Mazoyer B, Joliot M (2002): Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single‐subject brain. Neuroimage 15: 273–289.
  56. Vuilleumier P, Henson RN, Driver J, Dolan RJ (2002): Multiple levels of visual object constancy revealed by event‐related fMRI of repetition priming. Nat Neurosci 5: 491–499.
  57. Williams MA, Dang S, Kanwisher NG (2007): Only some spatial patterns of fMRI response are read out in task performance. Nat Neurosci 10: 685–686.
