Abstract
In a recent study we found that multivariate pattern analysis (MVPA) of functional magnetic resonance imaging (fMRI) data could predict which of several touch-implying video clips a subject saw, using only voxels from primary somatosensory cortex. Here, we re-analyzed the same dataset using cross-individual MVPA to locate patterns of information that were common across participants’ brains. In this procedure, a classifier learned to distinguish the neural patterns evoked by each stimulus based on the data from a sub-group of the subjects and was then tested on data from an individual who was not part of that sub-group. We found prediction performance to be significantly above chance both when using voxels from the whole brain and when using only voxels from the postcentral gyrus. SVM voxel weight maps derived from the whole-brain analysis, as well as a separate searchlight analysis, suggested foci of especially high information content in medial and lateral occipital cortex and around the intraparietal sulcus. Classification across individuals appeared to rely on similar brain areas as classification within individuals. These data show that observing touch leads to stimulus-specific patterns of activity in sensorimotor networks and that these patterns are similar across individuals. More generally, the results suggest that cross-individual MVPA can succeed even when applied to restricted regions of interest.
Keywords: fMRI, touch observation, MVPA, pattern analysis, multisensory, perception
Introduction
Multivariate pattern analysis (MVPA) has recently emerged as an effective method of analyzing functional brain imaging data (Haynes and Rees, 2006; Norman et al., 2006; O'Toole et al., 2007). In contrast to traditional (univariate) approaches that analyze the time course of each voxel independently, MVPA is able to uncover patterns of activity across populations of voxels. The vast majority of MVPA studies involve analyses performed on an individual subject basis: a classifier algorithm is trained on part of the data obtained from a subject and then tested on the remaining (independent) data from the same person. The applicability of MVPA to data from individual subjects is often considered an advantage, as it allows the characterization of information contained within a single brain.
It is also desirable, however, to describe how results from pattern analysis generalize across individuals. This is commonly done by showing that the mean classification accuracy across a group of subjects is significantly higher than chance. However, this approach does not reveal whether information is represented in the same way across individuals. In other words, each brain may have its own idiosyncratic multi-voxel patterns that allow the classifier to learn, but those patterns may not be the same across individuals. One way to test whether there are consistent patterns across individuals would be to train a classifier on data from multiple individuals and then test it on data from a new individual.
There are several factors that stack the deck against the success of inter-individual MVPA. First, variations in anatomy combined with the imprecision of inter-subject co-registration make it unreasonable to expect a voxel-to-voxel correspondence across subjects. Second, MVPA is sensitive to neural patterns at spatial frequencies finer than the voxel size of current fMRI, such as those generated by orientation columns in V1 (Haynes and Rees, 2005; Kamitani and Tong, 2005). This phenomenon has been dubbed “fMRI hyper-acuity” (Op de Beeck, 2010) and has been explained in terms of specific spatio-temporal filter properties that exist at the level of each individual voxel due to micro-variations in anatomy and vasculature (Kriegeskorte et al., 2010). Thus, even if we were able to achieve perfect voxel-to-voxel co-registration among the data obtained from individual brains, we would still expect the level of classification performance across subjects to be limited by anatomical differences at the sub-voxel level.
In light of these considerations, patterns of lower spatial frequency would be more likely to be detected with MVPA across individuals, and indeed this has been demonstrated in the literature. For instance, using cross-participant classification on voxels from the whole brain, Mourao-Miranda et al. (2005) successfully predicted whether a subject performed a face-matching task or a visual location task. Poldrack et al. (2009) recently extended these findings by distinguishing eight different cognitive tasks a person was engaged in, again using voxels from the whole brain. A similar approach has also been used to demonstrate common neural patterns across subjects associated with lying and telling the truth (Davatzikos et al., 2005). Mitchell et al. (2004) demonstrated successful cross-participant performance in predicting whether subjects were looking at sentences or pictures, and whether or not a presented sentence was ambiguous. To accomplish this, they averaged the activity levels of voxels from multiple circumscribed regions of interest into “supervoxels”; the analysis was successful when 4 to 7 such supervoxels were supplied to the classifier. All of these studies have in common that they used data from the whole brain, or at least distributed regions of interest, to perform inter-individual pattern classification. Therefore, the success of the classifiers may rely on very coarse information: the analysis will succeed, for instance, if the cognitive tasks or the stimuli under investigation activate grossly different regions of the brain. Consequently, these studies do not necessarily speak to the consistency of neural patterns across subjects on a smaller spatial scale.
In order to address the latter, one would need to provide evidence that cross-individual classifiers can succeed within smaller regions of interest. A few more recent studies have now accomplished this. One such study used MVPA to predict whether participants had received monetary or social rewards on individual trials of a reward task (Clithero et al., 2010). In this study, within- and between-individual classification was directly compared. A searchlight analysis revealed different spatial distributions of information for the two approaches: the voxels that were most informative within individuals were not identical with those that were most informative across individuals. Nevertheless, inter-individual MVPA was successful even when restricted to voxels within circumscribed ROIs, in this case searchlight spheres in the fusiform face area and the ventromedial prefrontal cortex. Two recent studies in the visual domain by Shinkareva and colleagues also showed that classification can succeed across individuals within anatomically defined regions of interest (Shinkareva et al., 2011; Shinkareva et al., 2008). Shinkareva et al. (2008) performed cross-subject MVPA on fMRI data obtained while participants looked at pictures containing either tools or dwellings. When trained on the data from all but one subject, their classifier successfully discriminated categories and, at a lower performance level, individual stimulus exemplars based on the data from the last subject. In the second study (Shinkareva et al., 2011), participants viewed either the names or pictures of the same objects, and a classifier was able to distinguish categories of objects even when trained on one presentation mode and tested on the other within several circumscribed regions of interest distributed throughout the brain. Again, classification was successful both within and across subjects. These studies are among the few (see also Etzel et al., 2011; Quadflieg et al., 2011) that demonstrate successful cross-individual pattern analysis performed within localized regions of interest.
We recently found that when stimuli presented in the visual modality imply sensations in another modality, they lead to stimulus-specific activity patterns in the low-level sensory cortex of the other modality. For instance, when subjects watched silent video clips of objects and events that implied sound, patterns in low-level auditory cortex permitted classification of the visually presented stimuli (Meyer et al., 2010). A second study obtained a similar finding in the somatosensory domain: when subjects watched short video clips of various textured objects being manipulated by human hands, a classifier was able to distinguish the neural activity patterns induced by the individual clips in primary somatosensory cortex (Meyer et al., 2011). These data extended previous work that showed that visual, auditory, and somatosensory mental imagery can lead to activity in the early sensory cortices of the respective modalities (Kosslyn et al., 1999; Kosslyn et al., 1995; Kraemer et al., 2005; Yoo et al., 2003; Yoo et al., 2001). However, the within-individual classifications presented in those studies do not reveal the extent to which the patterns evoked in low-level sensory cortices are similar across individuals. Here, we present an inter-individual pattern classification of the data from our touch observation study. If the somatosensory activity patterns underlying successful classification are similar among individuals, then MVPA across individuals should achieve above-chance prediction performance.
Materials and methods
Participants
Data from eight right-handed adults (four female, four male) were reported in the Meyer et al. (2011) study. The present investigation re-analyzes the data from the same eight subjects.
Experimental procedure
Participants watched a series of 5-second video clips, each of which depicted the bimanual exploration of an everyday object. Although the participants were simply told to “watch the video clips attentively” and did not receive an instruction to imagine the shape and texture of the depicted objects, they all reported after the experiment that this occurred automatically. Five different videos showed the manipulation of a plant, a tennis ball, a skein of yarn, a light bulb, and a set of keys. Stimuli were presented 16 seconds apart in random order. Stimulus timing and presentation were controlled by MATLAB (The Mathworks) and the freely available Psychophysics Toolbox Version 3 (Brainard, 1997). The clips were projected onto a rear-projection screen at the end of the scanner bore, which the subjects viewed through a mirror mounted on the head coil. Each participant completed ten 5.8-minute functional scans. During each scan there were four presentations of every video, amounting to 40 repetitions of each stimulus and a total of 200 video trials across the experiment for each participant.
Image acquisition
Images were acquired with a 3-Tesla Siemens MAGNETOM Trio System. Echo-planar volumes were acquired continuously with the following parameters: TR = 2,000 ms, TE = 25 ms, flip angle = 90°, 64 × 64 matrix, in-plane resolution 3.0 mm × 3.0 mm, 41 transverse slices, each 3.0 mm thick, covering the whole brain. We also acquired a structural T1-weighted MPRAGE in each subject (TR = 2,530 ms, TE = 3.09 ms, flip angle = 10°, 256 × 256 matrix, 208 coronal slices, 1 mm isotropic resolution).
Data pre-processing
For pattern analysis, data were co-registered to a common reference image in the 64 × 64 × 41 functional space using FSL’s FLIRT tool for linear registration (Jenkinson et al., 2002; Jenkinson and Smith, 2001). To this end, each subject’s data were first motion corrected by referencing all functional volumes for that subject to the middle volume of the entire 10-scan time series. Then, the reference (middle) volume for each subject was registered to the reference volume from the first subject using a 12-degree-of-freedom linear transformation. The eight co-registered reference volumes were then averaged to create a new functional space reference image, and the eight subjects’ volumes were registered to this average volume. This procedure was repeated iteratively three times to create a common average functional reference space. All of the functional data were transformed into this common average reference space for pattern analysis. We chose this method in order to preserve the original resolution of the data provided to the classifier. Next, the data were linearly de-trended and converted to z-scores by scan. No spatial smoothing or additional filtering was applied. For display purposes, the average reference image was co-registered to the standard MNI 152 atlas with a 12-degree-of-freedom linear transformation. Volume renderings were created with the MRIcroGL software (Rorden et al., 2007).
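To make the template-building step concrete, the following is a minimal sketch of the iterative procedure. It assumes hypothetical file names, motion-corrected per-subject reference volumes, and FSL’s flirt available on the command line; it is an illustration, not the original pipeline code.

```python
# Sketch of the iterative average-template registration (hypothetical names).
import subprocess
import nibabel as nib
import numpy as np

def flirt(src, ref, out):
    # 12-degree-of-freedom linear registration of src to ref (FSL FLIRT)
    subprocess.run(["flirt", "-in", src, "-ref", ref, "-out", out,
                    "-dof", "12"], check=True)

refs = [f"sub{i}_refvol.nii.gz" for i in range(1, 9)]  # hypothetical file names
template = refs[0]                                     # initial reference
for it in range(3):                                    # three iterations
    outs = []
    for r in refs:
        out = r.replace(".nii.gz", f"_iter{it}.nii.gz")
        flirt(r, template, out)
        outs.append(out)
    # rebuild the group-average reference volume from the registered images
    mean = np.mean([nib.load(o).get_fdata() for o in outs], axis=0)
    template = f"template_iter{it}.nii.gz"
    nib.save(nib.Nifti1Image(mean, nib.load(outs[0]).affine), template)
```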
MVPA analysis
MVPA was performed using the PyMVPA software package version 0.4.3 (Hanke et al., 2009) in combination with LibSVM’s (http://www.csie.ntu.edu.tw/~cjlin/libsvm/) implementation of the linear support vector machine (SVM). We used PyMVPA’s default C parameter, which automatically scales C according to the Euclidean norm of the data. The input to the classifier for each trial was the average of the 4th and 5th volumes after stimulus onset (i.e. the data acquired between 6 and 10 s post stimulus onset). These volumes can be expected to capture the peak of the hemodynamic response to the stimulus, and our previous work has confirmed that the information content in this time window is maximal (Meyer et al., 2011).
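As an illustration of the trial-sample extraction, the sketch below assumes a (n_volumes, n_voxels) array per scan, already de-trended and z-scored as described above, with stimulus onsets given as volume indices (so that, at TR = 2 s, the 4th and 5th volumes after an onset at index `on` are `on + 3` and `on + 4`):

```python
# Sketch of per-trial sample extraction (assumed data layout; TR = 2 s).
import numpy as np

def trial_samples(scan_data, onset_volumes):
    """scan_data: (n_volumes, n_voxels), de-trended and z-scored by scan."""
    # the 4th and 5th volumes after onset span 6-10 s post stimulus onset
    return np.stack([scan_data[on + 3:on + 5].mean(axis=0)
                     for on in onset_volumes])
```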
We carried out two-way discriminations among all stimulus pairs (n = 10, given there were 5 different video clips), as well as a five-way discrimination in which all five stimuli were classified simultaneously. Although SVMs are ultimately binary classifiers, multi-way classification can be achieved by combining the results of several binary classifiers; LibSVM accomplishes this using the “one-against-one method” (Hsu and Lin, 2002).
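The one-against-one scheme can be made explicit with a short sketch, using scikit-learn’s SVC as a stand-in for the LibSVM classifier used in the study: one binary SVM is trained per stimulus pair (10 pairs for 5 stimuli), and a majority vote across the pairwise predictions decides the label.

```python
# Sketch of one-against-one multiclass voting over pairwise linear SVMs.
from itertools import combinations
import numpy as np
from sklearn.svm import SVC  # stand-in for the LibSVM linear SVM

def one_vs_one_predict(X_train, y_train, X_test, classes):
    votes = np.zeros((len(X_test), len(classes)))
    for a, b in combinations(range(len(classes)), 2):    # all stimulus pairs
        mask = np.isin(y_train, [classes[a], classes[b]])
        clf = SVC(kernel="linear").fit(X_train[mask], y_train[mask])
        for i, p in enumerate(clf.predict(X_test)):
            votes[i, a if p == classes[a] else b] += 1   # tally pairwise votes
    return [classes[i] for i in votes.argmax(axis=1)]    # majority vote
```

(In practice, scikit-learn’s SVC, like LibSVM, performs this one-against-one voting internally for multiclass problems; the explicit loop is purely didactic.)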
For cross-individual MVPA, we used a leave-one-subject-out cross-validation procedure, in which an SVM was trained on data from seven subjects and then tested on data from the eighth subject. This procedure was repeated eight times, leaving each subject out once. Overall classifier performance was obtained by averaging the results of all eight cross-validation folds.
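A minimal sketch of this loop, assuming hypothetical per-subject sample arrays Xs[i] with matching label arrays ys[i]:

```python
# Sketch of leave-one-subject-out cross-validation over eight subjects.
import numpy as np
from sklearn.svm import SVC

def loso_accuracy(Xs, ys):
    accs = []
    for test in range(len(Xs)):              # hold each subject out once
        train = [i for i in range(len(Xs)) if i != test]
        clf = SVC(kernel="linear").fit(
            np.concatenate([Xs[i] for i in train]),
            np.concatenate([ys[i] for i in train]))
        accs.append(clf.score(Xs[test], ys[test]))  # accuracy on held-out subject
    return float(np.mean(accs))              # average over all folds
```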
Within-individual MVPA was performed according to a cross-validation procedure as well. In this case, the classifier was trained on nine of the ten imaging runs acquired for a subject and tested on the tenth. This procedure was repeated ten times, using each run as the test run once. Again, the results from all the cross-validation steps were averaged to obtain an overall performance value.
We performed each of the following three analyses both within and across individuals: 1) classification using voxels from the whole brain; 2) classification using only voxels from within the postcentral gyrus; and 3) a searchlight analysis which performed classification on small spheres of voxels throughout the entire brain.
Whole brain analysis
In this analysis we trained the classifier using voxels from the whole brain without any feature selection. To examine which voxels contributed most to classification, we established voxel weight maps by extracting the linear SVM weights for a classifier trained on all of the data in a five-way classification. It has recently been argued that linear SVM weight vectors may not be the best possible measure of the importance of each voxel to classification, since they do not reflect the contribution of the activity level at each voxel (Lee et al., 2010). Nevertheless, the SVM weights should contain information—though perhaps incomplete—about the relative importance of each voxel to the classification. We note that while this procedure does provide an illustration of the distribution of voxel weights in our analysis, it does not support a statistical inference on which voxels were more important.
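For illustration, the weight extraction might look as follows with a scikit-learn linear SVM. The function name and arguments are hypothetical, and the max-absolute-weight summary across the pairwise classifiers is one plausible choice rather than necessarily the one used in the original analysis.

```python
# Sketch of voxel weight-map extraction from a trained linear SVM.
import numpy as np
from sklearn.svm import SVC

def svm_weight_map(X_all, y_all):
    """X_all: (n_trials, n_voxels) samples; y_all: stimulus labels."""
    clf = SVC(kernel="linear").fit(X_all, y_all)
    pairwise_w = clf.coef_                   # (10 pairwise classifiers, n_voxels)
    return np.abs(pairwise_w).max(axis=0)    # one summary weight per voxel
```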
Region of interest analysis: postcentral gyrus
In this analysis we attempted to predict the touch-implying videos based only on the voxels contained in primary somatosensory cortex. The postcentral gyrus was traced for each subject using his or her high-resolution T1 scan. The anatomical criteria used to delineate S1 were described in detail in Meyer et al. (2011). However, for inter-individual MVPA it was necessary to have a common mask in the average reference space. To create this common mask, we transformed the masks of all individual subjects into the averaged functional reference space and summed them. This common mask was then thresholded to include only voxels that were present in 3 or more subjects. The resulting mask contained a number of voxels comparable to that of the individual anatomical masks.
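The mask construction reduces to a few array operations. A sketch, assuming the individual masks have already been resampled into the average reference space under hypothetical file names:

```python
# Sketch of the common postcentral gyrus mask: sum the individual binary
# masks and keep voxels present in at least 3 of the 8 subjects.
import nibabel as nib
import numpy as np

masks = [nib.load(f"sub{i}_pcg_mask_refspace.nii.gz") for i in range(1, 9)]
count = np.sum([m.get_fdata() > 0 for m in masks], axis=0)
common = (count >= 3).astype(np.uint8)
nib.save(nib.Nifti1Image(common, masks[0].affine), "pcg_common_mask.nii.gz")
```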
Searchlight procedure
In addition to the whole-brain and the ROI analyses described earlier, we performed a searchlight procedure as described by Kriegeskorte et al. (2006), in which a classifier was trained and tested on a sphere of voxels (radius = 8 mm, containing an average of 75 voxels) centered at each brain coordinate in the average reference space. The prediction performance at each voxel was then visualized in the form of searchlight maps. Because the searchlight procedure is very computationally intensive, we only performed it for the simultaneous five-way discrimination among all stimuli, rather than all ten possible two-way discriminations. In the case of the within-subjects analysis, the eight individual searchlight maps were transformed into the standard MNI space and averaged in order to illustrate which voxels contributed most to successful classification across the group of participants. As it is conceivable that performance peaks in this averaged map might be influenced disproportionately by very high values in only one or a few subjects, we combined the individual searchlight maps in yet another way: we thresholded the searchlight maps of all participants individually and then summed them in order to create an overlap map showing the voxels that were consistently informative at the within-subjects level.
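A naive sketch of the searchlight loop is given below; real implementations such as PyMVPA’s precompute neighborhoods and run far faster. Here cv_accuracy stands for any routine that returns a cross-validated accuracy given a (n_trials, n_features) array and its labels.

```python
# Naive searchlight sketch: score an 8-mm sphere centered at every brain voxel.
import numpy as np

def searchlight(data, labels, mask, cv_accuracy, radius_mm=8.0, voxdim=3.0):
    """data: (n_trials, x, y, z) array; mask: boolean brain mask."""
    r = int(np.ceil(radius_mm / voxdim))
    offsets = [(i, j, k)
               for i in range(-r, r + 1)
               for j in range(-r, r + 1)
               for k in range(-r, r + 1)
               if (i * i + j * j + k * k) ** 0.5 * voxdim <= radius_mm]
    # 81 offsets at 3-mm resolution; spheres near the brain edge lose voxels,
    # consistent with the ~75 voxels per sphere reported in the text
    out = np.zeros(mask.shape)
    for cx, cy, cz in zip(*np.nonzero(mask)):
        vox = [(cx + i, cy + j, cz + k) for i, j, k in offsets
               if 0 <= cx + i < mask.shape[0]
               and 0 <= cy + j < mask.shape[1]
               and 0 <= cz + k < mask.shape[2]
               and mask[cx + i, cy + j, cz + k]]
        X = np.stack([data[:, x, y, z] for x, y, z in vox], axis=1)
        out[cx, cy, cz] = cv_accuracy(X, labels)  # accuracy at sphere center
    return out
```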
Statistical thresholding
The statistical significance of the results was assessed using a combination of three methods: permutation tests, t-tests, and binomial tests. For permutation testing, null distributions were generated for the discriminations of interest by randomly shuffling the pattern labels of the training and testing data sets before supplying them to the classifier (for a recent discussion of permutation testing in MVPA, see Pereira and Botvinick, 2011). This procedure was carried out a large number of times in order to determine how likely a certain classification accuracy was to occur by pure chance. To maintain an even proportion of trials of each type within training and testing sets, labels were only permuted within, rather than across, data splits, i.e. within each subject for between-subject analyses and within each run for within-subject analyses.
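The within-split shuffling can be sketched as follows, reusing the hypothetical per-split arrays Xs/ys and a generic cross-validation routine from the sketches above:

```python
# Sketch of the permutation test: labels are shuffled only within each data
# split (subject or run), preserving the trial-type balance of every fold.
import numpy as np

rng = np.random.default_rng(0)

def permutation_null(Xs, ys, cv_accuracy, n_perm=10_000):
    null = np.empty(n_perm)
    for p in range(n_perm):
        ys_perm = [rng.permutation(y) for y in ys]  # within-split shuffling only
        null[p] = cv_accuracy(Xs, ys_perm)
    return null

# p-value: fraction of null accuracies at or above the observed accuracy,
# e.g. p = (np.sum(null >= observed) + 1) / (n_perm + 1)
```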
For some of the analyses we present, permutation testing was not feasible. For example, due to the large number of voxels and training/testing trials in the whole brain analysis it was not computationally practical to generate a null distribution by permutation. In these instances, one-tailed t-tests or binomial tests were conducted instead. In the case of within-subjects analyses we tested the hypothesis that the eight individual subjects’ performances were greater than chance using one-tailed t-tests. Whenever t-tests were used we first subjected the data to a Lilliefors test for normality, and in all cases failed to reject the null hypothesis that the samples came from a normal distribution. Given that performance on the cross-subject classification cannot be assessed independently for each subject (because all subjects contribute to all cross-validation folds either in terms of training or testing data), we did not assess the significance of these results using t-tests, but employed binomial tests instead, using the number of testing trials within a cross-validation fold as the number of independent tests. The binomial distribution describes the probability of a number of correct classifier guesses given the overall number of independent tests and the probability of being correct on each of these tests and is commonly used for assessing the significance of classifier performance in MVPA studies (Pereira et al., 2009).
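The binomial test itself reduces to a tail probability. A sketch using SciPy, applied for concreteness to the whole-brain 5-way result reported in the Results section:

```python
# Sketch of the binomial test on classifier accuracy: probability of at least
# n_correct successes out of n_trials at chance level p_chance.
from scipy.stats import binom

def binomial_p(n_correct, n_trials, p_chance):
    return binom.sf(n_correct - 1, n_trials, p_chance)  # P(X >= n_correct)

# e.g. the whole-brain 5-way result: 50.3% of 200 test trials, chance = 0.20;
# the resulting p-value is astronomically small, consistent with p < 1e-21
print(binomial_p(round(0.503 * 200), 200, 0.20))
```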
In order to give the reader an overview, we will briefly summarize the statistical tests carried out for the individual discriminations. For the whole brain analyses, we used the binomial test for the between-subjects analysis and t-tests for the within-subjects analysis. For the ROI analysis in the postcentral gyrus, we performed a permutation test (n = 10,000) for the 2-way discriminations, and a binomial test (for the between-subjects analysis) or t-test (for the within-subjects analysis) for the 5-way discrimination, as, once again, the large number of training and testing trials in the five-way discrimination made permutation testing impractical. For the cross-subjects searchlight analysis, we generated null distributions by permuting the labels on a 5-way discrimination within three different 8-mm spheres. We chose three spheres located in distant brain regions (one in the frontal lobe, one in the occipital lobe, and one in the temporal lobe) to verify that chance performance was similar across different spatial locations. The permutation procedure was iterated 15,000 times in each sphere in order to reach the precision necessary for multiple comparisons correction. For the within-subjects searchlight analysis, we performed a separate permutation test for each subject in an occipital sphere.
Results
Whole brain analysis
Between subjects
We performed all ten possible two-way discriminations among the five video clips using voxels from the whole brain, without any further feature selection. Between-subjects performance for each of the discriminations was high and ranged from 70.1% for yarn vs. light bulb to 81.8% for tennis ball vs. keys, as compared with a chance level of 50% (gray bars in Figure 1). The average performance across all ten pairwise discriminations was 75.3%. Each of these comparisons was significant according to a binomial test with 80 trials (p < 1 × 10−4 for the least accurate pair).
Figure 1. Classifier performance on within- and across-subject classifications using voxels from the whole brain.
All pairs of stimuli were discriminated significantly above the chance level of 50%.
We also performed a 5-way discrimination on voxels from the whole brain, which yielded 50.3% correct classifications, as compared with a chance level of 20%. This is again well beyond what is expected by chance according to the binomial distribution (p < 1 × 10−21, given the 200 testing trials in each cross-validation fold). To visualize which voxels contributed most to classifier performance in this analysis, we extracted the linear SVM weights for every voxel in the brain from a classifier trained on the data from all eight subjects. The resulting map is shown in Figure 2A. The most highly weighted voxels were located in the posterior regions of the brain, with clusters throughout the medial and lateral occipital cortices, the postero-inferior part of the temporal lobe bilaterally, and the superior parietal lobule bilaterally. Additional clusters with somewhat lower weights were seen in the inferior parietal lobule bilaterally, the posterior part of the postcentral gyrus bilaterally, and the precentral gyrus extending into the precentral sulcus bilaterally.
Figure 2. Whole-brain SVM voxel weight maps.
A) Between-subjects analysis: Shown are the SVM weights for all voxels in the brain, generated based on the 5-way classification among all stimuli in the between-subjects analysis. Whiter voxels were weighted more heavily by the classifier in discriminating among the five video clips (w: weight). B) Within-subjects analysis: For each subject, we selected the voxels with the 10% highest SVM weights in the within-subjects classification and then summed the thresholded maps. In the resulting overlap, the color of each voxel denotes the number of subjects who had one of their most heavily weighted voxels at that location.
Within subjects
For comparison, we performed the same discriminations within subjects. Averaged across the eight subjects, pairwise classification accuracy ranged from 64.7% for yarn vs. light bulb to 81.7% for tennis ball vs. keys (black bars in Figure 1). Each of these comparisons was significant according to a t-test across subjects (p < 0.01 for the least accurate pair and p < 0.00001 for the most accurate pair). The average performance across all pairwise discriminations was 74.6%.
The 5-way discrimination yielded an average classification accuracy of 49.6% across subjects, as compared to a chance level of 20%. Again, this performance was significantly greater than chance (p < 0.0001). To illustrate the distribution of information across the brain, we generated a voxel weight map for each individual subject, transformed all of these maps into the standard space and smoothed them with a 5-mm Gaussian kernel. We then selected the 10% highest ranked voxels from each subject’s map and created an overlap map showing those voxels that were most frequently ranked in the top 10%. The distribution of the most highly weighted voxels was very similar to that obtained for the cross-subject analysis, as can be gleaned from the comparable maps in Figures 2A and 2B. Again, despite the visual nature of the stimuli employed, highly informative voxels can be found in the postcentral gyrus, in particular along its posterior part.
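The overlap-map construction can be sketched in a few lines, assuming one smoothed 3-D weight map per subject and, purely for illustration, treating nonzero voxels as the brain mask:

```python
# Sketch of the top-10% overlap map: binarize each subject's weight map at
# its own 90th percentile, then sum across subjects.
import numpy as np

def overlap_map(weight_maps):            # list of 3-D arrays, one per subject
    out = np.zeros(weight_maps[0].shape, dtype=int)
    for w in weight_maps:
        thresh = np.percentile(w[w != 0], 90)  # per-subject 90th percentile
        out += (w >= thresh).astype(int)
    return out  # voxel value = number of subjects with a top-10% weight there
```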
ROI analysis: primary somatosensory cortex
Between subjects
We again performed all ten two-way discriminations according to a leave-one-participant-out cross-validation procedure, but this time used only voxels from within the postcentral gyrus. Average pairwise classification within the mask was 59.4%, close to the maximum value from 10,000 iterations of the permutation test (59.5%). Individual comparisons ranged from 54.2% for plant vs. yarn to 68.8% for tennis ball vs. keys (gray bars in Figure 3). According to the permutation test, all ten comparisons reached significance at the p < 0.05 level, with seven of the comparisons being significant at the p < 0.001 level. The 5-way discrimination yielded 28.2% accuracy, which was significant according to a binomial test with 200 trials per cross-validation fold (p < 0.01).
Figure 3. Classifier performance on within- and across-subjects classifications using only voxels from within the postcentral gyrus.
All pairs of stimuli were discriminated significantly above chance level (50%).
To test whether classification performance was due only to the overall activity levels within the mask, we also trained and tested a classifier on the mean values from within the postcentral gyrus mask (see Supplementary Materials). As shown in Supplementary Figure 1, accuracy using only the mean values was not significantly different from chance, indicating that classifier performance in somatosensory cortex indeed benefits from information contained in the spatial activity profile.
Within subjects
This analysis was reported in Meyer et al. (2011). Briefly, average pairwise prediction performance was 64.7% across all subjects, and ranged from 52.1% for tennis ball vs. light bulb to 73.2% for tennis ball vs. keys (black bars in Figure 3). In Meyer et al. (2011) we reported a t-test across individuals indicating that eight of the ten comparisons reached statistical significance.
Searchlight analysis
Between subjects
Permutation tests carried out in 8-mm searchlight spheres located in the occipital, frontal, and temporal lobes produced very similar chance distributions: the maximum value achieved by chance in 15,000 iterations of the 5-way permutation test was 24.5% in the occipital lobe, 24.5% in the temporal lobe, and 24.6% in the frontal lobe (chance level: 20%). Given that the actual searchlight analysis involved performing MVPA in spheres centered at each brain voxel in the average functional reference space (n = 53,387), correction for multiple comparisons was necessary. Note, however, that while the total number of tests was equal to the number of voxels in the brain, these tests were not independent of each other: neighboring spheres overlapped and thus shared information. Following Clithero et al. (2010), we estimated the number of independent resolution elements by dividing the total number of voxels by the average size of a sphere, and used this number in a Bonferroni correction. A Bonferroni correction for the number of resels at the α = 0.05 level yields a corrected threshold of α = 7.02 × 10−5. Thus, values that occur less often than 1 in 15,000 permutations (p < 6.67 × 10−5) satisfy correction for multiple comparisons. Accordingly, we used the maximum value from the three permutation tests (24.6%) as a conservative threshold for the between-subjects searchlight.
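The arithmetic behind this correction can be spelled out explicitly:

```python
# The resel-based Bonferroni arithmetic from the preceding paragraph.
n_voxels = 53_387                    # searchlight centers in the reference space
sphere_size = 75                     # average voxels per 8-mm sphere
n_resels = n_voxels / sphere_size    # ~712 effectively independent tests
alpha_corrected = 0.05 / n_resels    # ~7.02e-5
p_min = 1 / 15_000                   # ~6.67e-5, smallest resolvable p-value
assert p_min < alpha_corrected       # 1-in-15,000 events survive correction
```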
Figure 4A shows the voxels that surpassed this threshold, projected onto the standard MNI brain. Above-threshold performance was found in medial and lateral occipital cortices bilaterally, as well as in the right parietal lobe. The parietal cluster is located around the junction of the intraparietal sulcus and the postcentral sulcus; it includes parts of the postcentral gyrus, the inferior parietal lobule and, to a minor extent, the superior parietal lobule. The locations of the most accurate voxels (i.e. the voxels which defined the center of the spheres that yielded the highest performance) in these clusters are described in Table 1.
Figure 4. Whole brain searchlight maps for within- and between-subjects classification.
A) Classifier performance for every location in the across-individual searchlight. B) Mean performance level across all within-individual searchlight maps. C) Map of the overlap among thresholded individual searchlight maps: the color code represents the number of subjects in whom a specific voxel surpassed the threshold of a permutation test. Only voxels where at least four of eight subjects had significant searchlight spheres are shown. Both the between- and within-subjects maps show above-chance classification in the medial and lateral occipital cortices, as well as in a cluster in right parietal cortex near the junction of the intraparietal sulcus and the postcentral sulcus. The within-subjects searchlights also show an additional cluster of significant voxels in the left parietal lobe.
Table 1. Locations of peak performance in the searchlight analyses.
Coordinates refer to the Montreal Neurological Institute (MNI) space and denote the center of the 8-mm radius sphere that yielded maximal classifier performance within each local cluster. Accuracy numbers derive from a 5-way discrimination in which the level of chance performance is 20%.
| Location | x | y | z | Classifier Accuracy |
|---|---|---|---|---|
| Between individuals | ||||
| Medial occipital cortex | −2 | −80 | −6 | 33.9% |
| Right lateral occipital cortex | 30 | −96 | −8 | 27.8% |
| Left lateral occipital cortex | −30 | −92 | −14 | 27.9% |
| Right parietal cortex | 44 | −32 | 46 | 28.0% |
| Within individuals | ||||
| Medial occipital cortex | 4 | −86 | −2 | 8.4% |
| Right lateral occipital cortex | 28 | −96 | 2 | 34.8% |
| Left lateral occipital cortex | −28 | −94 | 0 | 35.9% |
| Right parietal cortex | 48 | −20 | 36 | 35.5% |
| Left parietal cortex | −50 | −34 | 54 | 32.7% |
We previously performed a univariate GLM analysis of these data contrasting each stimulus type with rest (Meyer et al., 2011). That analysis showed that the “keys” stimulus, in particular, led to a greater overall signal change in several brain regions, as compared with the other stimuli. To rule out the possibility that inter-individual classifier performance resulted solely from distinctions involving the “keys” stimulus, we produced both a searchlight map and a whole-brain voxel weight map for a four-way discrimination among the remaining stimuli (see Supplementary Materials). As is evident from Supplementary Figures 2 and 3, these maps appeared very similar to their respective counterparts obtained from the 5-way discrimination among all stimuli. We therefore conclude that our results do not depend disproportionately on the “keys” stimulus.
Within subjects
We computed a separate null distribution (n = 15,000 permutations) for a searchlight sphere in occipital cortex in each individual subject. The maximum values achieved by chance in these permutation tests ranged from 32.5% to 34.7%, with a mean of 33.7% across subjects (note that these values are higher than in the cross-individual analysis because each within-subject fold contains fewer testing trials). Figure 4B shows the locations of the voxels that surpassed this threshold for the average of the individual searchlight maps. As it is conceivable that peak values on this averaged map could have resulted from very high performance values in only one or a few subjects, we combined the individual searchlight maps in yet another way: we thresholded the individual maps according to the maximum permutation value for each subject and then summed them to produce an overlap map which shows all those voxels which surpassed this threshold in at least half the subjects (Figure 4C). Note that we present these maps as descriptive of the within-individual classifier performance in our subjects and not to support an inferential conclusion about within-individual searchlight maps in general. As is evident from Figures 4B and 4C, the average of the individual searchlight maps and the overlap of the thresholded individual maps look very similar. Furthermore, both maps also resemble the searchlight map of the inter-individual classification shown in Figure 4A: they illustrate high performance in the medial and lateral aspects of the occipital lobe and in the parietal lobe near the junction of the intraparietal sulcus and the postcentral sulcus.
Discussion
In a recent study (Meyer et al. 2011), we demonstrated that MVPA can predict the content of touch-implying video clips from neural activity patterns in various brain regions. Notably, in spite of the visual nature of the stimuli, we showed that successful discrimination of the clips was possible in both primary and secondary somatosensory cortices. In other words, low-level somatosensory areas exhibited content-specific information about visual stimuli that implied touch. In the present analysis of the same data set, we addressed the question of whether these content-specific activity patterns are consistent across subjects. Specifically, we tested whether a classifier trained on the data of some subjects would successfully classify stimuli when tested on novel data from a different subject.
Using voxels from the whole brain, our SVM was very successful at this task, producing classification accuracies of close to 80% for some of the two-way discriminations. Performance on whole-brain MVPA across subjects was comparable to, and in some cases even exceeded, within-subjects performance; however, this comparison must be tempered by the fact that the between-subjects classifier was trained on many more training examples than the within-subjects classifiers. When restricted to voxels from the postcentral gyrus, inter-subject prediction performance remained greater than chance, indicating that neural activity patterns in primary somatosensory cortex reflect the content of touch-implying visual stimuli, and that these patterns share features across individuals.
While the subjects in our study were not directly engaged in manipulating objects, it is increasingly evident that simulating action and perception engages the brain in a fashion similar to action and perception itself (see Rizzolatti and Craighero, 2004 for a review), possibly through a mechanism of top-down retro-activation (Damasio and Meyer, 2008; Meyer and Damasio, 2009). According to this view, memory and imagery invoke the conscious experience of an object by reactivating low-level sensorimotor representations. For example, in our study, when the idea of a tennis ball is evoked by the corresponding visual stimulus, tactile and kinesthetic images associated with that object would be re-expressed in visual-tactile motor integration regions in the parietal lobes and, ultimately, in low-level somatosensory cortex. Other imaging studies have shown that the observation of touch activates somatosensory cortex (Blakemore et al., 2005; Ebisch et al., 2008; Keysers et al., 2004; Schaefer et al., 2009), and that kinesthetic imagery activates the superior parietal lobule (Guillot et al., 2009), among other regions. In our previous study, we extended these findings by showing that the neural activity patterns in these regions specifically reflect the content of observed, touch-implying stimuli (Meyer et al., 2011). The results of the present study now suggest that these content-specific representations in low-level somatosensory cortex share similarities across individuals.
Spatial distribution of cross-individual information
Information that was common across individuals was not uniformly distributed throughout the brain, as evidenced both by the voxel weight map resulting from the cross-individual whole-brain classification and by the cross-individual searchlight analysis. The voxel weights from the whole-brain analysis reveal an information-containing neural network ranging from the occipital cortices into the superior parietal lobule, and extending into the postcentral and precentral gyri. This network corresponds well with activations found in previous studies investigating the observation of touch (Blakemore et al., 2005; Ebisch et al., 2008; Keysers et al., 2004; Schaefer et al., 2009) and tactile and visual object recognition (Amedi et al., 2001; Deibert et al., 1999; Habas and Cabanis, 2007; James et al., 2002; Peltier et al., 2007; Reed et al., 2004; Stoeckel et al., 2004; Tal and Amedi, 2009; Valenza et al., 2001). These regions, which form a network connecting somatosensory and visual cortices via the posterior parietal lobe, seem to be important for using visual information to guide action and, specifically, for integrating haptic and visual information (Dijkerman and de Haan, 2007; James and Kim, 2010; James et al., 2007b). The SVM voxel weight maps also agree closely with the group-level GLM contrast between the observation of the video clips and fixation, as reported in our previous study (Meyer et al. 2011). In other words, the same brain regions which, at the group level, are “activated” during touch observation appear to contain information consistent enough to allow for cross-individual multivariate classification.
The cross-individual searchlight analysis suggests that the similarity in neural patterns across brains is especially high in three well-circumscribed areas located, respectively, in the medial occipital cortex, the lateral occipital cortex, and around the junction of the intraparietal and the postcentral sulci in the right hemisphere. The medial surface of the occipital lobe corresponds to the primary and early visual cortices, which contain topographical maps of the visual field that can be expected to be similar across individuals. Thus, this area lends itself well to cross-individual pattern recognition and, indeed, our analyses indicate that information content about the video clips was highest in this area. The lateral occipital cluster may correspond to higher-level retinotopic areas such as V2 or V3 (Wandell et al., 2007). However, located just anterolateral to these retinotopic regions is the lateral occipital complex (LOC), which has been found to play an important role in object perception (Grill-Spector et al., 1999; Kourtzi and Kanwisher, 2000; Malach et al., 1995), and, in particular, is sensitive to object shape (Kim et al., 2009). Object identity has been predicted from LOC activity using MVPA within individuals (Eger et al., 2008). A sub-region of LOC has even been implicated in tactile processing. The latter region, which has come to be known as the lateral-occipital tactile-visual area (LOtv), is one of the main areas for the convergence of visual and haptic object processing (Amedi et al., 2001; Amedi et al., 2007; James et al., 2002; Reed et al., 2004; Tal and Amedi, 2009); however, the location of LOtv seems to be somewhat more anterior and lateral than the cluster we observe here. Without having performed functional localizer tasks we cannot confidently assign the activation we observe to a specific visual area; however, either of these regions (LOC or LOtv) seems plausible for supporting cross-individual pattern analysis of visually observed object manipulation.
The intraparietal sulcus (IPS), particularly its anterior portion, is involved in reaching and grasping behavior, and seems to be especially important for guiding hand actions associated with visuo-haptic sensory information (Binkofski et al., 1999; Grefkes and Fink, 2005; Grefkes et al., 2002; James et al., 2007a). Recently, MVPA applied to the anterior IPS has accurately predicted both executed and observed actions (Dinstein et al., 2008; Oosterhof et al., 2010). Shinkareva et al. (2008) also reported above-chance classification for viewing drawings of individual objects in the IPS. Our cross-individual searchlight found a peak of information content in this region in the right hemisphere. It is not clear to us why this focus is lateralized to the right, as most activation studies of this region have shown either bilateral or left-lateralized activations.
Comparison of between-individual and within-individual classification
Our data suggest that between- and within-individual MVPA in our task generally rely on information from similar brain regions. The average within-subject SVM voxel weight map closely resembled the corresponding between-subjects map, suggesting that both analyses relied on the same brain regions for successful stimulus prediction. The largest differences between the two searchlight maps appear in the occipital cortex, where the within-subjects analysis shows more left-lateralized and diffuse patches of information than the between-subjects analysis. The pattern of performance across individual pairwise comparisons also appears similar (e.g. high performance for tennis ball vs. keys and low performance for tennis ball vs. light bulb).
Our findings contrast with those of Clithero et al. (2010), who argued that classification within and between individuals implicates different brain regions. In their study a classifier predicted which of several kinds of rewards participants had received, and results showed differing patterns of information distribution for the between-individual and within-individual analyses. At least as far as our touch observation task is concerned, a similar network of brain regions appears to underlie within-subject and cross-subject classification.
As mentioned in the Introduction, while some studies have shown good classifier performance across individuals at a coarser level of detail (Davatzikos et al., 2005; Mourao-Miranda et al., 2005; Poldrack et al., 2009), a growing number now show common patterns across individuals on a smaller scale as well (Clithero et al., 2010; Etzel et al., 2011; Quadflieg et al., 2011; Shinkareva et al., 2008). The present data confirm these previous studies in showing that MVPA can uncover invariant patterns of neural activity across subjects even within restricted sets of voxels, and, additionally, they reveal an interesting trend. For between-individual pattern classification, the best performance was obtained when using voxels from the whole brain. In this analysis between-individual classification even surpassed the performance of within-subject classification. When restricted to a smaller anatomical region, the postcentral gyrus, accuracy for the cross-individual analysis fell below that for the within-individual analysis. Finally, in the searchlight analysis, where the patterns examined are restricted to a small sphere of voxels, the advantage of within-subject classification was greatest. This trend could be explained by the relative amount of signal in proportion to noise included in each analysis: at the cross-subject level, where each voxel reflects differences in anatomy in addition to subject-to-subject variability in activation levels, the signal may be too weak for a classifier to succeed based on a small number of voxels. Expressed in different terms, overall activity levels across macroscopic brain regions may be similar among individuals, but when focusing in on smaller and smaller regions, the idiosyncrasies of individual brains begin to take precedence over shared patterns of organization. This speculation could be tested in future studies by sampling activity patterns at different levels of spatial resolution before supplying them to the classifier.
Supplementary Material
Acknowledgements
The authors wish to thank Hanna and Antonio Damasio for their support and for their valuable feedback on this manuscript, as well as Ryan Essex for assistance with data collection. This work was supported by the Mathers Foundation and by National Institutes of Health grant number 5P50NS019632-27.
References
- Amedi A, Malach R, Hendler T, Peled S, Zohary E. Visuo-haptic object-related activation in the ventral visual pathway. Nature Neuroscience. 2001;4:324–330. doi: 10.1038/85201.
- Amedi A, Stern WM, Camprodon JA, Bermpohl F, Merabet L, Rotman S, Hemond C, Meijer P, Pascual-Leone A. Shape conveyed by visual-to-auditory sensory substitution activates the lateral occipital complex. Nature Neuroscience. 2007;10:687–689. doi: 10.1038/nn1912.
- Binkofski F, Buccino G, Stephan KM, Rizzolatti G, Seitz RJ, Freund HJ. A parieto-premotor network for object manipulation: evidence from neuroimaging. Exp Brain Res. 1999;128:210–213. doi: 10.1007/s002210050838.
- Blakemore S-J, Bristow D, Bird G, Frith C, Ward J. Somatosensory activations during the observation of touch and a case of vision-touch synaesthesia. Brain. 2005;128:1571–1583. doi: 10.1093/brain/awh500.
- Brainard DH. The Psychophysics Toolbox. Spat Vis. 1997;10:433–436.
- Clithero JA, Smith DV, Carter RM, Huettel SA. Within- and cross-participant classifiers reveal different neural coding of information. Neuroimage. 2010. doi: 10.1016/j.neuroimage.2010.03.057.
- Damasio A, Meyer K. Behind the looking-glass. Nature. 2008;454:167–168. doi: 10.1038/454167a.
- Davatzikos C, Ruparel K, Fan Y, Shen DG, Acharyya M, Loughead JW, Gur RC, Langleben DD. Classifying spatial patterns of brain activity with machine learning methods: application to lie detection. Neuroimage. 2005;28:663–668. doi: 10.1016/j.neuroimage.2005.08.009.
- Deibert E, Kraut M, Kremen S, Hart J. Neural pathways in tactile object recognition. Neurology. 1999;52:1413–1417. doi: 10.1212/wnl.52.7.1413.
- Dijkerman HC, de Haan EH. Somatosensory processes subserving perception and action. Behav Brain Sci. 2007;30:189–201; discussion 201–239. doi: 10.1017/S0140525X07001392.
- Dinstein I, Gardner JL, Jazayeri M, Heeger DJ. Executed and observed movements have different distributed representations in human aIPS. J Neurosci. 2008;28:11231–11239. doi: 10.1523/JNEUROSCI.3585-08.2008.
- Ebisch SJH, Perrucci MG, Ferretti A, Del Gratta C, Romani GL, Gallese V. The sense of touch: embodied simulation in a visuotactile mirroring mechanism for observed animate or inanimate touch. Journal of Cognitive Neuroscience. 2008;20:1611–1623. doi: 10.1162/jocn.2008.20111.
- Eger E, Ashburner J, Haynes J-D, Dolan RJ, Rees G. fMRI activity patterns in human LOC carry information about object exemplars within category. Journal of Cognitive Neuroscience. 2008;20:356–370. doi: 10.1162/jocn.2008.20019.
- Etzel JA, Valchev N, Keysers C. The impact of certain methodological choices on multivariate analysis of fMRI data with support vector machines. Neuroimage. 2011;54:1159–1167. doi: 10.1016/j.neuroimage.2010.08.050.
- Grefkes C, Fink GR. The functional organization of the intraparietal sulcus in humans and monkeys. J Anat. 2005;207:3–17. doi: 10.1111/j.1469-7580.2005.00426.x.
- Grefkes C, Weiss PH, Zilles K, Fink GR. Crossmodal processing of object features in human anterior intraparietal cortex: an fMRI study implies equivalencies between humans and monkeys. Neuron. 2002;35:173–184. doi: 10.1016/s0896-6273(02)00741-9.
- Grill-Spector K, Kushnir T, Edelman S, Avidan G, Itzchak Y, Malach R. Differential processing of objects under various viewing conditions in the human lateral occipital complex. Neuron. 1999;24:187–203. doi: 10.1016/s0896-6273(00)80832-6.
- Guillot A, Collet C, Nguyen VA, Malouin F, Richards C, Doyon J. Brain activity during visual versus kinesthetic imagery: an fMRI study. Human Brain Mapping. 2009;30:2157–2172. doi: 10.1002/hbm.20658.
- Habas C, Cabanis EA. The neural network involved in a bimanual tactile-tactile matching discrimination task: a functional imaging study at 3 T. Neuroradiology. 2007;49:681–688. doi: 10.1007/s00234-007-0239-8.
- Hanke M, Halchenko YO, Sederberg PB, Hanson SJ, Haxby JV, Pollmann S. PyMVPA: A Python toolbox for multivariate pattern analysis of fMRI data. Neuroinformatics. 2009;7:37–53. doi: 10.1007/s12021-008-9041-y.
- Haynes J-D, Rees G. Predicting the orientation of invisible stimuli from activity in human primary visual cortex. Nature Neuroscience. 2005;8:686–691. doi: 10.1038/nn1445.
- Haynes J-D, Rees G. Decoding mental states from brain activity in humans. Nature Reviews Neuroscience. 2006;7:523–534. doi: 10.1038/nrn1931.
- Hsu CW, Lin CJ. A comparison of methods for multiclass support vector machines. IEEE Trans Neural Netw. 2002;13:415–425. doi: 10.1109/72.991427.
- James TW, Kim S, Fisher JS. The neural basis of haptic object processing. Canadian Journal of Experimental Psychology. 2007a;61:219–229. doi: 10.1037/cjep2007023.
- James TW, Humphrey GK, Gati JS, Servos P, Menon RS, Goodale MA. Haptic study of three-dimensional objects activates extrastriate visual areas. Neuropsychologia. 2002;40:1706–1714. doi: 10.1016/s0028-3932(02)00017-9.
- James TW, Kim S. Dorsal and ventral cortical pathways for visuo-haptic shape integration revealed using fMRI. In: Naumer MJ, Kaiser J, editors. Multisensory Object Perception in the Primate Brain. New York: Springer; 2010.
- James TW, Kim S, Fisher JS. The neural basis of haptic object processing. Can J Exp Psychol. 2007b;61:219–229. doi: 10.1037/cjep2007023.
- Jenkinson M, Bannister P, Brady M, Smith S. Improved optimization for the robust and accurate linear registration and motion correction of brain images. Neuroimage. 2002;17:825–841. doi: 10.1016/s1053-8119(02)91132-8.
- Jenkinson M, Smith S. A global optimisation method for robust affine registration of brain images. Med Image Anal. 2001;5:143–156. doi: 10.1016/s1361-8415(01)00036-6.
- Kamitani Y, Tong F. Decoding the visual and subjective contents of the human brain. Nature Neuroscience. 2005;8:679–685. doi: 10.1038/nn1444.
- Keysers C, Wicker B, Gazzola V, Anton J-L, Fogassi L, Gallese V. A touching sight: SII/PV activation during the observation and experience of touch. Neuron. 2004;42:335–346. doi: 10.1016/s0896-6273(04)00156-4.
- Kim JG, Biederman I, Lescroart MD, Hayworth KJ. Adaptation to objects in the lateral occipital complex (LOC): shape or semantics? Vision Res. 2009;49:2297–2305. doi: 10.1016/j.visres.2009.06.020.
- Kosslyn SM, Pascual-Leone A, Felician O, Camposano S, Keenan JP, Thompson WL, Ganis G, Sukel KE, Alpert NM. The role of area 17 in visual imagery: convergent evidence from PET and rTMS. Science. 1999;284:167–170. doi: 10.1126/science.284.5411.167.
- Kosslyn SM, Thompson WL, Kim IJ, Alpert NM. Topographical representations of mental images in primary visual cortex. Nature. 1995;378:496–498. doi: 10.1038/378496a0.
- Kourtzi Z, Kanwisher N. Cortical regions involved in perceiving object shape. J Neurosci. 2000;20:3310–3318. doi: 10.1523/JNEUROSCI.20-09-03310.2000.
- Kraemer DJ, Macrae CN, Green AE. Sound of silence activates auditory cortex. Nature. 2005;434:158. doi: 10.1038/434158a.
- Kriegeskorte N, Cusack R, Bandettini P. How does an fMRI voxel sample the neuronal activity pattern: compact-kernel or complex spatiotemporal filter? Neuroimage. 2010;49:1965–1976. doi: 10.1016/j.neuroimage.2009.09.059.
- Kriegeskorte N, Goebel R, Bandettini P. Information-based functional brain mapping. Proc Natl Acad Sci U S A. 2006;103:3863–3868. doi: 10.1073/pnas.0600244103.
- Lee S, Halder S, Kubler A, Birbaumer N, Sitaram R. Effective functional mapping of fMRI data with support-vector machines. Hum Brain Mapp. 2010;31:1502–1511. doi: 10.1002/hbm.20955.
- Malach R, Reppas JB, Benson RR, Kwong KK, Jiang H, Kennedy WA, Ledden PJ, Brady TJ, Rosen BR, Tootell RB. Object-related activity revealed by functional magnetic resonance imaging in human occipital cortex. Proc Natl Acad Sci U S A. 1995;92:8135–8139. doi: 10.1073/pnas.92.18.8135.
- Meyer K, Damasio A. Convergence and divergence in a neural architecture for recognition and memory. Trends Neurosci. 2009;32:376–382. doi: 10.1016/j.tins.2009.04.002.
- Meyer K, Kaplan JT, Essex R, Damasio H, Damasio A. Seeing touch is correlated with content-specific activity in primary somatosensory cortex. Cereb Cortex. 2011;21:2113–2121. doi: 10.1093/cercor/bhq289.
- Meyer K, Kaplan JT, Essex R, Webber C, Damasio H, Damasio A. Predicting visual stimuli on the basis of activity in auditory cortices. Nature Neuroscience. 2010;13:667–668. doi: 10.1038/nn.2533.
- Mitchell TM, Hutchinson R, Niculescu RS, Pereira F, Wang X, Just MA, Newman S. Learning to decode cognitive states from brain images. Machine Learning. 2004;57:145–175.
- Mourao-Miranda J, Bokde ALW, Born C, Hampel H, Stetter M. Classifying brain states and determining the discriminating activation patterns: Support Vector Machine on functional MRI data. Neuroimage. 2005;28:980–995. doi: 10.1016/j.neuroimage.2005.06.070.
- Norman KA, Polyn SM, Detre GJ, Haxby JV. Beyond mind-reading: multi-voxel pattern analysis of fMRI data. Trends in Cognitive Sciences. 2006;10:424–430. doi: 10.1016/j.tics.2006.07.005.
- O'Toole AJ, Jiang F, Abdi H, Pénard N, Dunlop JP, Parent MA. Theoretical, statistical, and practical perspectives on pattern-based classification approaches to the analysis of functional neuroimaging data. Journal of Cognitive Neuroscience. 2007;19:1735–1752. doi: 10.1162/jocn.2007.19.11.1735.
- Oosterhof NN, Wiggett AJ, Diedrichsen J, Tipper SP, Downing PE. Surface-based information mapping reveals crossmodal vision-action representations in human parietal and occipitotemporal cortex. J Neurophysiol. 2010. doi: 10.1152/jn.00326.2010.
- Op de Beeck H. Against hyperacuity in brain reading: spatial smoothing does not hurt multivariate fMRI analyses? Neuroimage. 2010;49:1943–1948. doi: 10.1016/j.neuroimage.2009.02.047.
- Peltier S, Stilla R, Mariola E, LaConte S, Hu X, Sathian K. Activity and effective connectivity of parietal and occipital cortical regions during haptic shape perception. Neuropsychologia. 2007;45:476–483. doi: 10.1016/j.neuropsychologia.2006.03.003.
- Pereira F, Mitchell T, Botvinick M. Machine learning classifiers and fMRI: a tutorial overview. Neuroimage. 2009;45:S199–S209. doi: 10.1016/j.neuroimage.2008.11.007.
- Pereira F, Botvinick M. Information mapping with pattern classifiers: a comparative study. Neuroimage. 2011;56:476–496. doi: 10.1016/j.neuroimage.2010.05.026.
- Poldrack RA, Halchenko YO, Hanson SJ. Decoding the large-scale structure of brain function by classifying mental states across individuals. Psychological Science. 2009;20:1364–1372. doi: 10.1111/j.1467-9280.2009.02460.x.
- Quadflieg S, Etzel JA, Gazzola V, Keysers C, Schubert TW, Waiter GD, Macrae CN. Puddles, parties, and professors: linking word categorization to neural patterns of visuospatial coding. J Cogn Neurosci. 2011. doi: 10.1162/jocn.2011.21628.
- Reed CL, Shoham S, Halgren E. Neural substrates of tactile object recognition: an fMRI study. Human Brain Mapping. 2004;21:236–246. doi: 10.1002/hbm.10162.
- Rizzolatti G, Craighero L. The mirror-neuron system. Annu Rev Neurosci. 2004;27:169–192. doi: 10.1146/annurev.neuro.27.070203.144230.
- Rorden C, Karnath HO, Bonilha L. Improving lesion-symptom mapping. J Cogn Neurosci. 2007;19:1081–1088. doi: 10.1162/jocn.2007.19.7.1081.
- Schaefer M, Xu B, Flor H, Cohen LG. Effects of different viewing perspectives on somatosensory activations during observation of touch. Human Brain Mapping. 2009;30:2722–2730. doi: 10.1002/hbm.20701.
- Shinkareva SV, Malave VL, Mason RA, Mitchell TM, Just MA. Commonality of neural representations of words and pictures. Neuroimage. 2011;54:2418–2425. doi: 10.1016/j.neuroimage.2010.10.042.
- Shinkareva SV, Mason RA, Malave VL, Wang W, Mitchell TM, Just MA. Using fMRI brain activation to identify cognitive states associated with perception of tools and dwellings. PLoS ONE. 2008;3:e1394. doi: 10.1371/journal.pone.0001394.
- Stoeckel MC, Weder B, Binkofski F, Choi H-J, Amunts K, Pieperhoff P, Shah NJ, Seitz RJ. Left and right superior parietal lobule in tactile object discrimination. Eur J Neurosci. 2004;19:1067–1072. doi: 10.1111/j.0953-816x.2004.03185.x.
- Tal N, Amedi A. Multisensory visual-tactile object related network in humans: insights gained using a novel crossmodal adaptation approach. Exp Brain Res. 2009;198:165–182. doi: 10.1007/s00221-009-1949-4.
- Valenza N, Ptak R, Zimine I, Badan M, Lazeyras F, Schnider A. Dissociated active and passive tactile shape recognition: a case study of pure tactile apraxia. Brain. 2001;124:2287–2298. doi: 10.1093/brain/124.11.2287.
- Wandell BA, Dumoulin SO, Brewer AA. Visual field maps in human cortex. Neuron. 2007;56:366–383. doi: 10.1016/j.neuron.2007.10.012.
- Yoo S-S, Freeman DK, McCarthy JJ, Jolesz FA. Neural substrates of tactile imagery: a functional MRI study. Neuroreport. 2003;14:581–585. doi: 10.1097/00001756-200303240-00011.
- Yoo S-S, Lee CU, Choi BG. Human brain mapping of auditory imagery: event-related functional MRI study. Neuroreport. 2001;12:3045–3049. doi: 10.1097/00001756-200110080-00013.