Author manuscript; available in PMC: 2016 Jan 31.
Published in final edited form as: Brain Cogn. 2014 Dec 18;93:54–63. doi: 10.1016/j.bandc.2014.11.007

Separability of Abstract-Category and Specific-Exemplar Visual Object Subsystems: Evidence from fMRI Pattern Analysis

Brenton W McMenamin 1, Rebecca G Deason 2, Vaughn R Steele 3, Wilma Koutstaal 4, Chad J Marsolek 5
PMCID: PMC4281302  NIHMSID: NIHMS648170  PMID: 25528436

Abstract

Previous research indicates that dissociable neural subsystems underlie abstract-category (AC) recognition and priming of objects (e.g., cat, piano) and specific-exemplar (SE) recognition and priming of objects (e.g., a calico cat, a different calico cat, a grand piano, etc.). However, the degree of separability between these subsystems is not known, despite the importance of this issue for assessing relevant theories. Visual object representations are widely distributed in visual cortex; thus, a multivariate pattern analysis (MVPA) approach to analyzing functional magnetic resonance imaging (fMRI) data may be critical for assessing the separability of different kinds of visual object processing. Here we examined the neural representations of visual object categories and visual object exemplars using multi-voxel pattern analyses of brain activity elicited in visual object processing areas during a repetition-priming task. In the encoding phase, participants viewed visual objects and the printed names of other objects. In the subsequent test phase, participants identified objects that were either same-exemplar primed, different-exemplar primed, word-primed, or unprimed. In visual object processing areas, classifiers were trained to distinguish same-exemplar primed objects from word-primed objects. Then, the abilities of these classifiers to discriminate different-exemplar primed objects from word-primed objects (reflecting AC priming) and to discriminate same-exemplar primed objects from different-exemplar primed objects (reflecting SE priming) were assessed. Results indicated that (a) repetition priming in occipital-temporal regions is organized asymmetrically, such that AC priming is more prevalent in the left hemisphere and SE priming is more prevalent in the right hemisphere, and (b) AC and SE subsystems are weakly modular, not strongly modular or unified.

Keywords: fMRI, MVPA, Repetition priming, Object Identification, Category, Exemplar

1. Introduction

The nature of visual object representations remains controversial. On the one hand, visual object representations may be relatively abstract, in that a common representation can be activated by multiple object exemplars or by multiple views of the same object exemplar (e.g., Amira, Biederman, & Hayworth, 2012; Biederman, 1987; Biederman & Bar, 1999; Biederman & Cooper, 2009; Cooper et al., 1992; Hayworth & Biederman, 2006; Hummel & Biederman, 1992; Hummel & Stankiewicz, 1996; Wagemans et al. 1996, often but not always positing structural description representations). On the other hand, visual object representations may be relatively specific, in that different representations are activated by different exemplars or by different views of the same object exemplar (e.g., Bülthoff & Edelman, 1992; Gauthier et al. 2002; Poggio & Edelman, 1990; Tarr, 1995; Tarr & Gauthier, 1998; Tarr, Williams, Hayward, & Gauthier, 1998; Ullman, 1996, often but not always positing view- or image-based representations). Moreover, both relatively abstract and relatively specific representations may exist within a single, unified processing system (e.g., Farah, 1992; Hayward & Williams, 2000; Tarr & Bülthoff, 1995). Alternatively, abstract and specific visual object representations may exist in dissociable neural subsystems (e.g., Burgund & Marsolek, 2000; Marsolek, 1995, 1999; Marsolek & Burgund, 1997, 2008). Here we use pattern analysis of functional magnetic resonance imaging (fMRI) data to test the separability of visual subsystems involved in representing abstract categories versus specific exemplars of objects, providing evidence for weakly modular visual subsystems.

The dissociable neural subsystems theory (Marsolek, 1999, 2003) posits that an abstract-category (AC) subsystem recognizes the visual category to which an object stimulus belongs (e.g., piano, cat, pen, etc.), whereas a specific-exemplar (SE) subsystem identifies the individuated visual exemplar to which an object stimulus corresponds (e.g., a calico cat, a different calico cat, a grand piano, etc.). The AC subsystem stores category-invariant features and disregards within-category variability in object shape so that different exemplars can be mapped to the same categorical representation (Marsolek, 1995). In contrast, the SE subsystem stores visually distinctive information and relies strongly on within-category variability so that different exemplars can be mapped to different representations (Marsolek & Burgund, 2003, 2005). The contradictory processing demands associated with object categorization versus individuated exemplar identification can be alleviated if at least partially dissociable neural subsystems underlie the two abilities.

Evidence for multiple dissociable object representation systems comes mostly from repetition priming experiments using divided visual field presentations (e.g., Marsolek, 1999; Marsolek & Burgund, 2003, 2005; see also related evidence from unilateral auditory presentations, Gonzalez & McLennan, 2007, 2009) and fMRI (e.g., Koutstaal et al., 2001; Simons, Koutstaal, Prince, Wagner, & Schacter, 2003). However, convergent support for the dissociability of these systems also has been obtained using other methods, such as neuromodulatory evidence (Burgund, Marsolek, & Luciana, 2003), neuropsychological dissociations (Beeri et al. 2004; Vaidya et al. 1998), interhemispheric transfer of visual information (Marsolek, Nicholas, & Andresen, 2002), visual word identification (Deason & Marsolek, 2005), asymmetries in relevant amygdala activations (McMenamin & Marsolek, 2013), and assessments of encoding-related activity in relation to subsequent memory performance (Garoff, Slotnick, & Schacter, 2005). Parallel functional arguments and repetition priming evidence for lateralized form-specific vs. form-abstract subsystems also have recently been extended beyond the processing of individual objects to the processing of complex visual scenes (Epstein & Morgan, 2012; Stevens, Kahn, Wig, & Schacter, 2012).

In repetition priming paradigms, SE priming is demonstrated when test objects (e.g., a grand piano) are identified more readily after they have been primed by an earlier exposure to the same exemplars (e.g., the same grand piano) than when they have been primed by an earlier exposure to different exemplars in the same abstract categories as the test objects (e.g., an upright piano). Divided visual field studies have shown SE priming when test objects are presented directly to the right hemisphere (briefly in the left visual field) but not when test objects are presented directly to the left hemisphere (briefly in the right visual field; Marsolek, 1999). In contrast, AC visual-object priming is observed when test objects (e.g., a grand piano) are identified more readily after they have been different-exemplar primed (e.g., by an upright piano) than after they have been “word-primed” by exposure to the printed names corresponding to the test objects (e.g., by the printed word “piano”). AC visual-object priming has been observed when test objects are presented directly to the left hemisphere but not when they are presented directly to the right hemisphere (Marsolek, 1999). The opposite patterns of laterality for SE priming and AC priming support the dissociable neural subsystems theory.

Functional neuroimaging studies have used similar conditions to examine hemispheric asymmetries in visual-object priming. Koutstaal et al. (2001) and Simons et al. (2003) found that the difference in occipitotemporal cortical activity between a same-exemplar primed condition and a different-exemplar primed condition was greater in the right hemisphere than in the left (but see also Vuilleumier, Henson, Driver, & Dolan, 2002). This supports the hypothesis that SE processing is more effective in the right hemisphere than in the left (see also the selectively greater stimulus-level than basic-level priming in the right anterior fusiform gyrus reported by Fairhall, Anzellotti, Pajtas, & Caramazza, 2011). However, these studies did not include a word-primed condition, which is important for measuring AC visual-object priming. Differential activity between a different-exemplar primed condition and a completely unprimed condition could reflect visual processing in an AC visual-object subsystem or it could reflect post-visual processing of objects (e.g., involving non-perceptual semantic information). Comparisons between a different-exemplar primed condition and a word-primed condition are needed to isolate processing in an AC visual-object subsystem. This is important in part because Simons et al. found that lexical processing concurrent with visual object processing affected the differential neural activity between same-exemplar primed and different-exemplar primed conditions in occipitotemporal cortex in the left hemisphere but not in the right. They concluded that the differential activity in the left hemisphere was due to lexical/semantic processing in addition to visual object processing. In the present experiment, we included the important word-primed condition to enable assessment of more purely visual processing in an AC visual-object subsystem.

An important unanswered question involves the degree of separability of AC and SE subsystems. They may be strongly modular, such that AC processing takes place only in left hemisphere visual areas and SE processing takes place only in right hemisphere visual areas. Alternatively, they may be weakly modular, such that both processes can take place in either hemisphere but with AC processing being relatively more effective in left hemisphere visual areas and SE processing being relatively more effective in right-hemisphere visual areas. Finally, they may be unified or non-modular, such that both processes can take place equally well in both hemispheres. The answer to this question is crucial for arbitrating among the theories of visual object representation cited above and for applications to more effective understanding of visual object agnosias following brain damage (e.g., Farah, 1990, 1991).

There are limitations, however, in using the extant results to address the degree of separability of dissociated subsystems. Hemispheric asymmetries measured in divided visual field experiments are too coarse and indirect to test the degree of neural separability of processes. Perhaps more importantly, any attempt to test the separability of visual processes using fMRI must take into account that visual object representations are widely distributed in visual cortex and anatomically overlapping such that single voxels do not distinctly participate in representing one object category (e.g., Haxby et al. 2001; Ishai et al. 1999) nor one object exemplar (Cichy, Chen & Haynes, 2011; Eger, Ashburner, Haynes, Dolan, & Rees, 2008). Thus, a multivariate pattern analysis (MVPA) approach, in which multi-voxel patterns of activity are examined (e.g., Haynes & Rees, 2006; Norman, Polyn, Detre, & Haxby, 2006; O'Toole et al. 2007; Poldrack, 2008), may be needed to assess the separability of different kinds of visual processing. Specifically, we used support vector machines (SVM; Norman et al. 2006) to measure the strength of AC and SE priming effects in ventral visual object processing areas and to provide tests of subsystem separability.

Another virtue of the MVPA analysis strategy is that it may enable more sensitive measures of the AC and SE priming effects of interest. This is important because several aspects of the repetition-priming paradigm that we adopted were such that, although they had important benefits for allowing cleaner inferences from the data, they also had concomitant costs in the likelihood of obtaining strong blood-oxygen-level dependent (BOLD) signal differences between conditions. First, participants were presented with only a single block of encoding phase trials, with each object presented only once rather than multiple times during encoding, followed by a single block of test phase trials rather than multiple interleaved study-test blocks or cycles. This was done to minimize the possibility that explicit memory for previously viewed objects would be responsible for putatively implicit-memory priming effects measured during the test phase. Second, during the test phase, objects were presented very briefly with low visual contrast and participants were asked only to indicate if they could visually identify the objects, rather than make more complex semantic judgments about visual objects presented for a relatively long time, in order to increase the degree to which the purely visual processes of interest were engaged. Third, the tasks differed between the initial encoding phase and the subsequent test phase. This was done because rapid response learning, rather than repetition priming within visual object identification subsystems, can underlie putative behavioral and neural priming effects when the encoding and test tasks are the same (e.g., Dobbins, Schnyer, Verfaellie & Schacter, 2004), and because such response learning can generalize from same to different object exemplars (Denkinger & Koutstaal, 2009) and from object names to object exemplars (Horner & Henson, 2011). 
Fourth, AC priming was measured as the difference between activity in a different-exemplar primed condition and activity in a word-primed condition (as opposed to an unprimed condition), in order to home in on AC representations that are primarily visual-object in nature (and hence are not activated by a corresponding printed word) rather than post-visual, phonological, or conceptual in nature. Each of these procedural aspects (a single study-test cycle with a longer study-to-test interval, briefer test presentations, different encoding and test tasks, and a more tightly controlled measure of AC priming) constitutes a change from the previous fMRI experiments that, while associated with the indicated inferential advantages, likely led to weaker overall BOLD signal differences between conditions. In fact, when Harvey and Burgund (2012) used both a single study-test cycle and different encoding and test tasks in a similar object priming study, they found no hemisphere difference in SE priming in univariate analyses. For these reasons, we used MVPA to enable more sensitive statistical testing.

The main questions that we addressed were (a) do the previously observed hemispheric asymmetries of AC and SE subsystems replicate when the aforementioned procedural changes (such as using a difficult perceptual identification task) are implemented, and (b) what is the degree of separability between these neural subsystems; in particular, if they are not unified, are they weakly modular or strongly modular?

2. Method

2.1 Participants

Thirty-two right-handed participants were recruited from the University of Minnesota community in exchange for $20 an hour. All participants gave written, informed consent in accordance with procedures and protocols approved by the human subjects review committee of the University of Minnesota. Each participant reported normal or corrected-to-normal vision and no history of traumatic head injury. Eight participants were removed from the analysis because they were unable to identify a large number of the objects across all conditions (failing to identify 50–85% of trials; N = 5) or had a large number of exceptionally rapid (< 200 ms) responses (N = 3), indicating that they were unable to complete the difficult perceptual identification test task or did not follow the test instructions correctly. Although excluding eight of 32 participants may seem a high rate, the trial-wise MVPA analysis requires a large number of successfully identified trials for each participant.

The final sample consisted of twenty-four right-handed participants (12 female; mean age = 21.8 years, range 18–28 years; mean laterality quotient = 0.84, range 0.6–1.0, according to the Edinburgh Handedness Inventory; Oldfield, 1971). Due to a data-collection error, responses in the explicit memory recognition test were not recorded for one subject, resulting in N = 23 for that behavioral analysis.

2.2 Materials

A total of 400 gray-scaled images of familiar visual objects (two exemplars from each of 200 categories) and their corresponding entry-level names (Jolicœur, Gluck & Kosslyn, 1984) were used as visual stimuli. For each participant, the visual test objects used to represent the four main conditions were balanced on several stimulus dimensions that were assessed in separate behavioral norming sessions, including: (a) the level of agreement of the best category name for each object, (b) the typicality of that particular exemplar with respect to all others in its category, (c) the frequency with which participants judged they saw instances of that category in everyday life, and (d) the visual similarity of the two exemplars in each category. Full counterbalancing assured that each visual object image represented each test condition an equal number of times across participants.

2.3 Procedure

Each experimental session had an initial encoding phase and a subsequent test phase. In both phases, stimuli were presented using E-Prime (Psychology Software Tools; Pittsburgh, PA), and participants used a mirror mounted on the head coil to view images projected onto a screen behind the scanner. Participants responded by pressing buttons on a scanner-compatible button box using the right hand.

During the initial encoding phase, participants viewed 100 visual objects (e.g., a grand piano, a hardcover book, etc.) intermixed with the printed names of 50 other objects (e.g., the word “airplane”, etc.), and they rated how much they liked each item. In each trial, a fixation cross appeared for 125 ms, then an object or a word appeared for 1750 ms, and then a second fixation cross appeared for 125 ms (Figure 1). Participants judged on a four-point scale how much they liked the referent of the object shown, considering the meaning of the object rather than how it looked or the sound of its name, and they pressed one of four buttons to make each response.

Figure 1. Experimental Paradigm.

During the subsequent test phase, participants attempted to identify 200 briefly presented grey-scale objects (16 ms presentations) and indicated whether they were confident that they identified each object. Test objects were either same-exemplar primed (the test object was the same exemplar as one presented during the encoding phase; e.g., the grand piano), different-exemplar primed (the test object was a different exemplar compared with one presented during the encoding phase, but in the same abstract category; e.g., a softcover book), word-primed (the printed name of the test object had been presented during the encoding phase; e.g., a picture of an airplane), or unprimed (the test object was not related in any way to an item presented during the encoding phase). There were 50 trials for each of the four trial types (same-exemplar primed, different-exemplar primed, word primed, and unprimed) that were intermixed within a single run. In each trial, an object appeared for 16 ms, followed by a fixation cross for 1984 ms. Participants attempted to visually identify each object, and they pressed one button when they confidently identified it and another button when they could not confidently identify it. This task was used to encourage visual object identification while avoiding motion artifacts that would occur from speaking aloud responses if an object-naming task was used (see also Marsolek et al., 2010). Fixation-only trials were included and interspersed in an order and timing that allowed for optimal deconvolution of the hemodynamic response (optseq2, MGH NMR Center, Charlestown, MA).

Sixteen-millisecond presentations may seem too fast to allow accurate identification of objects, but we have found high identification rates (> 70%) with such presentations in previous work (Marsolek, 1999; Marsolek, Schnyer, Deason, Ritchey, & Verfaellie, 2006; Marsolek et al., 2010). Nonetheless, to check that participants indeed attempted to identify objects during the test phase, we tested explicit memory for the test objects following the test phase, while anatomical scans were acquired. Sixty-four objects were presented, and participants indicated whether or not they had seen each object previously in any portion of the experiment. Thirty-two of the stimuli presented in this recognition test were new (not previously presented in any way), and eight from each of the four test-phase conditions (i.e., same-exemplar primed, different-exemplar primed, word primed, and unprimed) were old.

2.4 Data Acquisition

Imaging was performed with a 3.0 Tesla Siemens Trio MRI scanner using an 8-channel head coil. Functional data were acquired during the test phase using a standard echo planar imaging (EPI) sequence (35 interleaved slices, TR = 2 s, TE = 28 ms, flip angle = 90 degrees, slice thickness = 3.5 mm, base resolution = 64, FOV = 224 mm), with axial slices aligned parallel to the AC-PC transverse plane. Structural images were collected using a standard T1-weighted MPRAGE sequence (with the same center slice as the functional scans, 160 slices, slice thickness = 1 mm).

2.5 Behavioral Analytic Strategy

Response times for visually identified objects and proportions of objects identified were the dependent variables in separate one-way repeated-measures analyses of variance. The within-participants independent variable was priming condition (same-exemplar primed, different-exemplar primed, word primed, and unprimed). Response time outliers were defined as trials in which response times were faster than 200 ms or greater than 2.5 standard deviations from the participant's mean response time and were eliminated from analyses. Performance on the explicit memory recognition task was measured using the signal detection measure d’ (Wickens, 2001), which compares the probability of hits (correctly labeling previously seen stimuli as ‘old’) with the probability of false alarms (incorrectly labeling new stimuli as ‘old’).
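As an illustration, the d’ computation can be sketched in Python; the function name and the hit/false-alarm counts below are hypothetical examples, not taken from the authors' analysis code.

```python
# Hedged sketch of the d' sensitivity measure: z-transformed hit rate
# minus z-transformed false-alarm rate. Counts below are hypothetical.
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# e.g., 28 of 32 old items called 'old', 6 of 32 new items called 'old'
example = d_prime(28 / 32, 6 / 32)   # roughly 2.0: well above chance
```

A d’ of 0 corresponds to chance discrimination, since the hit and false-alarm rates are then equal.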

2.6 fMRI Analytic Strategy

All processing was performed using AFNI (Cox, 1996; http://afni.nimh.nih.gov/) and MATLAB (http://www.mathworks.com/).

2.6.1 Preprocessing Functional Data

Slice-timing correction was applied to functional scans for each participant, and then they were realigned to the first scan to correct for head movement using a six-parameter rigid body transformation. Functional data were normalized to Talairach space (Talairach & Tournoux, 1988) by first using a 12-parameter transformation to register each participant's anatomical scan with the TT_N27 template and then using the same transformation on the functional data. Spatial smoothing was applied to all volumes with a 6 mm FWHM Gaussian filter, and the average intensity at each voxel was scaled to 100.

2.6.2 Defining the Visual Regions of Interest

Regions of interest were defined as an intersection between a functional localizer and regions defined with an anatomical atlas. The functional localizer was performed using the unprimed trials to identify voxels activated during visual object processing when none of the objects had been primed in any way. Functional data were analyzed using a deconvolution model in AFNI for each subject that modeled the BOLD response in each voxel with regressors for each of the four trial types (same-exemplar primed, different-exemplar primed, word primed, and unprimed). Separate regressors were used for trials on which the participant indicated that they could or could not identify the object. Each regressor modeled the response to the stimulus with cubic spline basis functions that allowed the HRF to assume any shape for the 16 seconds following stimulus onset. Constant, linear, and quadratic terms were included as covariates of no interest to correct for drift in the scanner signal during the experiment, and the six rigid-body head motion parameters were also included as model covariates. The activation for unprimed trials relative to fixation periods was calculated at the group level using random-effects analysis. Because the signal associated with fixation trials was not explicitly modeled, fixation trials served as an implicit baseline in the GLM analysis and the comparison was made by testing where the mean level of activation six seconds after the onset of an unprimed trial differed from zero (p < 0.05, uncorrected).

An anatomical atlas was used to mask-out voxels from the localizer task that did not correspond to early visual processing regions in the occipital lobe or to regions in the ventral visual stream that contain high-level object representations. The Desai Maximum Probability atlas was used to identify left and right hemisphere voxels in occipital visual processing regions (inferior occipital gyrus and sulcus, middle occipital gyrus, and occipital pole) and inferior temporal (inferior temporal, and lateral fusiform) regions. Regions of interest were defined within each of these four anatomical areas (left/right, occipital/temporal) as clusters of contiguous voxels with suprathreshold localizer effects. To ensure that the regions in each hemisphere were equivalent in size and location, the final regions of interest were defined as the intersection between a particular region of interest (e.g., a right hemisphere occipital ROI) and a mirrored region from the contralateral hemisphere (e.g., a left hemisphere occipital ROI mirrored across the x-axis).
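The symmetrization step can be illustrated with a toy binary mask; the array shape and the assumption that axis 0 is the left-right axis are purely for demonstration, not properties of the actual atlas data.

```python
# Toy illustration of the ROI symmetrization: intersect a binary ROI mask
# with its left-right mirror so that homologous regions in the two
# hemispheres have identical size and location. Assumes axis 0 is the
# left-right axis; the small array is a stand-in for a real mask.
import numpy as np

def symmetric_roi(mask):
    """Keep only voxels whose mirror-image voxel is also in the mask."""
    return mask & mask[::-1, :, :]

mask = np.zeros((4, 3, 3), dtype=bool)
mask[0, 1, 1] = True                    # voxel with no contralateral match
mask[1, 1, 1] = mask[2, 1, 1] = True    # homologous pair of voxels
sym = symmetric_roi(mask)               # keeps only the homologous pair
```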

2.6.3 Estimating Activation on Single Trials

Pattern classification requires estimates of BOLD signal for each stimulus at each voxel (Figure 2), which is difficult in rapid event-related designs because the slow rise and fall of the HRF ensures that each volume of data contains signal from several previous trials. We employed a deconvolution approach that has proven successful (Mumford, Turner, Ashby, & Poldrack, 2012; Turner, Mumford, Ashby, & Poldrack, 2012) to estimate the neural response evoked by each trial in the experiment.

Figure 2. Analysis pipeline. Flowchart depicting the process of measuring abstract-category (AC) and specific-exemplar (SE) priming effects from single-trial activation estimates and classifiers.

A separate deconvolution analysis was performed for every trial, in which the trial of interest (e.g., trial #4) was modeled using a canonical HRF, and the contributions from all other trials (e.g., trials 1, 2, 3, 5, ...) and nuisance variables were modeled with other regressors. Eight regressors were used to model the trials of non-interest using canonical HRFs based on priming condition (same-exemplar primed, different-exemplar primed, word primed, and unprimed) and identification response (identified or not identified). Constant, linear, and quadratic drift terms and six rigid-body head motion parameters were included as model covariates of no interest. To control for possible response time and motor-related differences across conditions, regressors were used to model the onset of every trial that was identified or not identified with a cubic-spline basis function for 16 seconds from stimulus onset, and a separate cubic-spline regressor had its amplitude modulated by that trial's response time. The inclusion of these RT-modulated regressors and the predictors for each individual trial means that the individual trial estimates do not reflect the total activation evoked by a single trial. Rather, they estimate how the activation on a particular trial deviates from an average trial, so positive values indicate above-average signal and negative values indicate below-average signal. All subsequent MVPA analyses used only trials on which objects were claimed to be identified and response times were not outliers.
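A minimal sketch of this single-trial estimation logic follows, using toy boxcar regressors in place of HRF-convolved predictors and spline bases; the function name and simulated data are illustrative assumptions, not the authors' AFNI implementation.

```python
# Least-squares-single idea (after Mumford et al., 2012): for each trial,
# fit a GLM with one regressor for that trial, one for all other trials
# combined, and nuisance covariates, then keep that trial's beta.
import numpy as np

def single_trial_betas(bold, trial_regressors, nuisance):
    """bold: (T,) timeseries; trial_regressors: (n_trials, T);
    nuisance: list of (T,) covariates. Returns one beta per trial."""
    n_trials = trial_regressors.shape[0]
    betas = np.empty(n_trials)
    for i in range(n_trials):
        others = trial_regressors.sum(axis=0) - trial_regressors[i]
        X = np.column_stack([trial_regressors[i], others] + list(nuisance))
        coef, *_ = np.linalg.lstsq(X, bold, rcond=None)
        betas[i] = coef[0]
    return betas

rng = np.random.default_rng(0)
n_time, n_trials = 200, 10
regs = np.zeros((n_trials, n_time))
for i in range(n_trials):
    regs[i, i * 18:i * 18 + 6] = 1.0          # toy boxcar per trial
true_amps = rng.normal(1.0, 0.3, n_trials)    # simulated trial amplitudes
bold = true_amps @ regs + rng.normal(0, 0.02, n_time)
est = single_trial_betas(bold, regs, [np.ones(n_time)])  # recovers amplitudes
```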

2.6.4 Measuring Visual Repetition Priming with SVMs

For each participant and each of the four ROIs, linear support vector machines (SVMs) were trained to detect visual repetition priming by distinguishing the same-exemplar primed and word primed trials (see Figure 2). The SVMs were implemented using the LIBSVM library (Chang & Lin, 2011) with soft-margin classifiers (C = 1; Chu, Hsu, Chou, Bandettini & Lin, 2012). Ideally, the training and testing datasets for the classifiers would come from separate runs of functional data to ensure independence of data samples (Pereira, Mitchell & Botvinick, 2009). However, all of the task trials in the present analysis were in a single, extended run. We used only a single run for the test phase of the experiment to make the experimental procedure in the fMRI experiment as similar as possible to the previous behavioral work for greater comparability. In addition, having participants perform in two time-separated test phases would have allowed them to consider the nature of the first test phase (e.g., guess about its purpose) in ways that could change their strategy for processing in the second test phase (e.g., decide to use explicit memory to look for objects repeated from the study phase to help them perform the difficult perceptual identification task, something we worked hard to avoid as described in the Introduction). To reduce the statistical dependence between trials in the testing and training datasets for the classifiers, (a) the activation for each trial was measured using a GLM model to account for (and remove) variance attributed to preceding trials (Section 2.6.3), and (b) data partitions for testing and training were formed by splitting the experimental run into early/late halves to ensure that testing and training trials were not temporally interleaved.
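The split-half SVM scheme can be sketched as follows; this is an illustrative reconstruction using scikit-learn rather than LIBSVM, with simulated activation patterns standing in for the single-trial beta estimates.

```python
# Sketch of split-half cross-validation with a linear soft-margin SVM
# (C = 1): train on one half of the run, test on the other, then swap.
# The simulated data and class separation are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_per, n_vox = 40, 50
X = np.vstack([rng.normal(0.5, 1.0, (n_per, n_vox)),    # "same-exemplar primed"
               rng.normal(-0.5, 1.0, (n_per, n_vox))])  # "word primed"
y = np.array([1] * n_per + [0] * n_per)

idx = rng.permutation(2 * n_per)   # toy stand-in for the temporal trial order
half = len(idx) // 2
folds = [(idx[:half], idx[half:]), (idx[half:], idx[:half])]

accs = []
for train, test in folds:
    clf = SVC(kernel="linear", C=1.0)   # linear soft-margin classifier
    clf.fit(X[train], y[train])
    accs.append(clf.score(X[test], y[test]))
# np.mean(accs) is the cross-validated accuracy over both folds
```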

Because there are unequal numbers of trials in each condition, ‘chance’ performance of a classifier would not occur at 50% accuracy. To simplify interpretation and statistics, the signal detection measure d’ (Wickens, 2001) was used to quantify classifier performance because d’ = 0 occurs for chance performance regardless of the number of trials. To facilitate interpretation, we also report an “equivalent accuracy” for each d’ that corresponds to the expected accuracy if there were equal numbers of trials in each condition. The permutation-based approach developed by Stelzer, Chen, and Turner (2012) was used to measure whether the d’ statistic was biased (i.e., chance d’ ≠ 0) and whether the group mean d’ was significantly greater than chance. The same-exemplar primed and word primed trials in the training set were relabeled for each participant (Etzel & Braver, 2013), and an SVM was re-trained to calculate a new cross-validated d’. This was repeated 1,000 times for each participant. The participant-level scores were combined to estimate the group-level null distribution by randomly selecting one of the 1,000 d’ scores from each participant and averaging. This group-level averaging was repeated 10,000 times to estimate the group-level null distribution for classifier performance in each ROI. Because trials with outlier response times or stimuli that participants could not identify were excluded, the average participant retained 45.0 same-exemplar primed trials (range 35–50) and 42.8 word primed trials (range 25–49).
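The label-permutation logic can be sketched on toy data as follows; the clamping of extreme hit/false-alarm rates is an illustrative choice, not necessarily the correction used by the authors.

```python
# Sketch of a permutation null for classifier d': with labels shuffled,
# the mean d' should sit near zero even with unequal trial counts.
import numpy as np
from statistics import NormalDist

def classifier_dprime(y_true, y_pred):
    """d' = z(hit rate) - z(false-alarm rate) for binary predictions,
    with rates clamped away from 0 and 1 (illustrative correction)."""
    n1 = (y_true == 1).sum()
    n0 = (y_true == 0).sum()
    hit = ((y_pred == 1) & (y_true == 1)).sum() / n1
    fa = ((y_pred == 1) & (y_true == 0)).sum() / n0
    hit = min(max(hit, 0.5 / n1), 1 - 0.5 / n1)
    fa = min(max(fa, 0.5 / n0), 1 - 0.5 / n0)
    z = NormalDist().inv_cdf
    return z(hit) - z(fa)

rng = np.random.default_rng(2)
y = np.array([1] * 45 + [0] * 43)   # unequal counts, as in the average participant
# Null distribution: score fixed "predictions" against shuffled labels
null = [classifier_dprime(rng.permutation(y), y) for _ in range(1000)]
```

Comparing an observed d’ against this null distribution both corrects for any bias in chance-level d’ and yields a permutation p-value.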

2.6.5 Measuring Contributions of AC and SE Subsystems to Visual Repetition Priming

Both AC and SE subsystems should distinguish same-exemplar primed and word primed objects, albeit in putatively different ways and for putatively different purposes. In addition to differences between same-exemplar primed and word primed activity, AC priming was characterized by different-exemplar primed activity that differs from word primed activity, and SE priming was characterized by same-exemplar primed activity that differs from different-exemplar primed activity. Therefore, the strength of AC and SE contributions to the visual repetition priming effect was assessed in each ROI by measuring how classifiers treat the previously un-classified different-exemplar primed trials (average number of different-exemplar primed trials was 43.2, range 34-50). Thus, if the SVM decision values differentiate different-exemplar primed trials from word primed trials, the pattern of repetition priming in that region was at least partially driven by changes in the representations of abstract categories of visual objects. Alternatively, if the SVM decision values differentiate different-exemplar trials from same-exemplar primed trials, the pattern of repetition priming in that region was at least partially driven by changes in the representations of specific exemplars of objects.

The AC and SE effects were measured using an SVM with split-half cross validation. The SVM was trained to discriminate same-exemplar primed and word primed trials using the training data partition, and then the continuous-valued classifier decision values were calculated for each same-exemplar primed, different-exemplar primed, and word primed trial in the test data partition. After both folds of cross-validation, every trial had a continuous classifier decision value that indexed how much visual-object repetition priming was present on that trial. This continuous decision score served as a trial-by-trial index of visual repetition priming: the SVM was trained to assign values near 0 to word primed trials (i.e., not visually primed) and values near 1 to same-exemplar primed trials (i.e., highly visually primed). When the decision values for different-exemplar primed trials were greater than those for word primed trials, this was evidence for AC priming. When the decision values for different-exemplar primed trials were less than those for same-exemplar primed trials, this was evidence for SE priming.
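A minimal sketch of this split-half analysis, assuming scikit-learn's linear SVM and simulated single-trial voxel patterns in place of the real ROI data (the voxel count, trial counts, and priming shifts are all illustrative assumptions):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_vox = 150  # voxels in an ROI (illustrative)

def simulate(n_trials, shift):
    """Toy single-trial voxel patterns; `shift` mimics a priming-related
    change in the activation pattern."""
    return rng.normal(0.0, 1.0, (n_trials, n_vox)) + shift

same = simulate(44, 0.25)  # same-exemplar primed trials
diff = simulate(44, 0.15)  # different-exemplar primed trials
word = simulate(44, 0.00)  # word primed trials

def decision_values(train_idx, test_idx):
    """Train on same-exemplar vs. word primed trials from one half,
    return decision values for all three conditions in the other half."""
    X = np.vstack([same[train_idx], word[train_idx]])
    y = np.r_[np.ones(len(train_idx)), np.zeros(len(train_idx))]
    clf = SVC(kernel="linear").fit(X, y)
    return (clf.decision_function(same[test_idx]),
            clf.decision_function(diff[test_idx]),
            clf.decision_function(word[test_idx]))

# first-half / second-half cross-validation, as in the text
first, second = np.arange(22), np.arange(22, 44)
dv_same, dv_diff, dv_word = map(np.concatenate, zip(
    decision_values(first, second), decision_values(second, first)))

# AC priming: different-exemplar values shifted away from word primed;
# SE priming: same-exemplar values shifted away from different-exemplar
ac_effect = dv_diff.mean() - dv_word.mean()
se_effect = dv_same.mean() - dv_diff.mean()
```

The key point is that the classifier is never trained on different-exemplar primed trials; where their held-out decision values fall relative to the two trained classes is what carries the AC and SE evidence.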

The signal detection measure d’ (Wickens, 2001) was used to determine whether the decision values for different-exemplar primed trials differed from those for word primed trials (evidence for AC priming) and from those for same-exemplar primed trials (evidence for SE priming). A permutation-based approach was used to estimate the magnitude of these d’ priming effects under the null hypothesis (Etzel & Braver, 2013; Mourão-Miranda, Bokde, Born, Hampel & Stetter, 2005; Nichols & Holmes, 2001). For each ROI and participant, the same-exemplar primed and word primed trials in the SVM training set were randomly relabeled, and the SVM decision values were re-calculated for each of the trial types. This was repeated 1,000 times to generate a distribution of the expected magnitude of the AC and SE priming effects under the null hypothesis that there was no visual repetition priming in the region of interest. The median1 of this null distribution served as an estimate of baseline (i.e., “chance”) performance for AC and SE priming and was subtracted from the observed d’ scores. These bias-corrected AC and SE priming scores were used in the following statistical analyses.

It is important to note that these measures of AC and SE priming both involved identifying regions that underlie general (AC or SE) processing abilities, not identifying particular object representations (e.g., the representation of the category of cat only or the representation of an exemplar cat only; see Vindiola & Wolmetz, 2010). The AC priming measure reflects the degree to which a region participates in the representation of a number of different visual object categories (e.g., the category cat, the category piano, etc.), and the SE priming measure reflects the degree to which a region participates in the representation of a number of different individual exemplars of objects (e.g., a calico cat, a different cat, a grand piano, etc.).

2.6.6 Testing Asymmetries in Priming Effects

The AC and SE priming effects were not defined with orthogonal contrasts, so the AC and SE priming scores are expected to be negatively correlated. Therefore, the analysis used a two-way repeated-measures MANOVA with the bias-corrected AC and SE priming scores as the dependent variables and region (occipital and inferior temporal) and cerebral hemisphere (left and right) as the within-participants independent variables. Significant effects of hemisphere were followed up by paired t-tests across hemispheres on the bias-corrected AC and SE priming scores individually.
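For the main effect of hemisphere, this repeated-measures MANOVA is equivalent to a one-sample Hotelling's T² test on the within-participant LH − RH differences of the bivariate (AC, SE) score vector. A sketch on simulated scores (not the study data; the means and spread are illustrative):

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(3)
n = 24       # participants
p_dims = 2   # dependent variables: (AC, SE)

# simulated bias-corrected priming scores, averaged over region
ac_lh = rng.normal(0.33, 0.15, n)
ac_rh = rng.normal(0.20, 0.15, n)
se_lh = rng.normal(0.10, 0.15, n)
se_rh = rng.normal(0.22, 0.15, n)

# within-participant LH - RH differences on the bivariate score
D = np.column_stack([ac_lh - ac_rh, se_lh - se_rh])
dbar = D.mean(axis=0)              # mean difference vector
S = np.cov(D, rowvar=False)        # sample covariance of differences

t2 = n * dbar @ np.linalg.solve(S, dbar)    # Hotelling's T^2
F = (n - p_dims) / (p_dims * (n - 1)) * t2  # F with (p, n - p) df
p_value = f_dist.sf(F, p_dims, n - p_dims)
```

Because the test is multivariate, it remains valid even though the two dependent variables are negatively correlated by construction; the covariance matrix S absorbs that correlation.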

3. Results

3.1 Behavioral Effects

In the analysis of response times for visually identified objects, the main effect of priming condition was significant (F(3,69) = 10.02, MSe = 639.50, p < .001). In particular, the linear trend of decreasing response times from unprimed (512 ms) to word primed (504 ms) to different-exemplar primed (495 ms) to same-exemplar primed (474 ms) conditions was significant (F(1,69) = 28.11, MSe = 639.50, p < .001). This pattern of results was expected as a combination of AC and SE behavioral priming effects, but pairwise comparisons can be used to test specifically for AC and SE priming effects. The difference between different-exemplar primed and word primed objects indicated a marginally significant AC priming effect (t(23) = 1.93, p = 0.07 uncorrected), and the difference between same-exemplar primed and different-exemplar primed objects indicated a significant SE priming effect (t(23) = 2.74, p = 0.01 uncorrected).

The main effect of priming condition on the proportion of trials identified was also significant (F(3,69) = 5.88, MSe = 0.016, p < .005). A linear trend of increasing proportion of trials identified from unprimed (80.6%) to word primed (83.1%) to different-exemplar primed (84.2%) to same-exemplar primed (86.8%) trials was significant (F(1,69) = 17.24, MSe = 0.003, p < .001). This pattern of results was expected as a combination of AC and SE behavioral priming effects. However, pairwise comparisons on this measure revealed a significant SE priming effect (t(23) = 2.71, p = 0.01 uncorrected) but not a significant AC priming effect (t(23) = 0.66, p = 0.52 uncorrected).

Finally, participants reliably discriminated between new and old stimuli during the explicit-memory recognition test (mean d’ = 2.14, equivalent accuracy = 85.7%, t(22) = 14.39, p < 0.001). This provided an admittedly imperfect manipulation check that participants had indeed attended to the stimuli during the preceding task and that they could in fact perceive many of the rapidly presented objects. The recognition scores were calculated separately for each of the priming conditions, and there was a significant linear trend of decreasing scores from same-exemplar primed (d’ = 2.49, equivalent accuracy = 89.3%) to different-exemplar primed (d’ = 2.34, equivalent accuracy = 87.9%) to word primed (d’ = 2.05, equivalent accuracy = 84.7%) to unprimed (d’ = 1.98, equivalent accuracy = 83.9%) conditions (t(22) = -5.17, p < 0.001). The pairwise comparison between the different-exemplar primed and word primed conditions was significant (t(22) = 2.34, p = 0.03 uncorrected), but the pairwise comparison between the same-exemplar primed and different-exemplar primed conditions was not (t(22) = 1.42, p = 0.17 uncorrected). This pattern indicated that the number of times a visual object category was viewed (twice for same-exemplar primed and different-exemplar primed objects versus once for word primed and unprimed objects, with the single presentations for word primed and unprimed objects being brief test-related exposures only) is important for determining the magnitude of subsequent explicit memory.

3.2 fMRI Regions of Interest

There was widespread activation in the atlas regions of interest during unprimed trials. The final regions of interest (Figure 3) in the occipital lobe each had 169 voxels centered at (±39, −78, 0) and the final regions of interest in the inferior temporal lobe had 138 voxels centered at (±39, −44, −16).

Figure 3.


Regions of interest. Four regions identified by the conjoined anatomical and functional localizer for visual object processing and then used for MVPA. The four areas were LH occipital (blue), RH occipital (green), LH inferior temporal (yellow), and RH inferior temporal (red).

3.3 Visual Repetition Priming Effects Measured via SVM

The SVMs could reliably distinguish (ps < 0.001) same-exemplar primed and word primed trials in all four regions of interest: left occipital (mean d’ = 0.47, equivalent accuracy = 59.3%; chance d’ = 0.004, equivalent accuracy = 50.1%), right occipital (mean d’ = 0.45, equivalent accuracy = 58.9%; chance d’ = 0.001, equivalent accuracy = 50.0%), left temporal (mean d’ = 0.44, equivalent accuracy = 58.7%; chance d’ = 0.021, equivalent accuracy = 50.0%), and right temporal (mean d’ = 0.44, equivalent accuracy = 58.7%; chance d’ = 0.018, equivalent accuracy = 50.4%). This indicated that each of the four regions exhibited visual repetition priming, and the next stage of analysis was performed to determine the relative contributions of AC and SE representations to the priming in each ROI.

3.4 Relative Contributions of AC and SE Priming to Visual Repetition Priming in each ROI

The bias-corrected AC and SE priming scores (see Figure 4) were analyzed using a two-way repeated-measures MANOVA. The effect of hemisphere was significant (F(2,22) = 5.11, p = 0.02), indicating that the pattern of AC and SE priming differed between the left and right hemisphere ROIs. Follow-up univariate tests indicated that AC priming scores were greater in the left hemisphere than in the right hemisphere (t(23) = 2.68, p = 0.01), whereas SE priming scores were greater in the right hemisphere than in the left hemisphere (t(23) = −2.24, p = 0.03). The effect of region (occipital versus temporal) was not significant (F(2,22) = 1.12, p = 0.34), and the region-by-hemisphere interaction was not significant (F(2,22) = 0.65, p = 0.53). Despite the lack of a significant interaction, we were interested in whether there was any trend for the hemispheric asymmetries to be greater in one region than the other. As shown in Table 1, the hemispheric asymmetries in priming were greater in the occipital regions (occipital AC asymmetry: t(23) = 2.62, p = 0.02; occipital SE asymmetry: t(23) = -2.10, p = 0.05) than in the temporal regions (temporal AC asymmetry: t(23) = 1.15, p = 0.26; temporal SE asymmetry: t(23) = -0.96, p = 0.35). Because of the lack of a significant interaction, however, we are cautious and do not place confidence in the differences between regions.

Figure 4.


Priming effects. Average abstract-category priming effect (AC) and specific-exemplar priming effect (SE) in terms of bias-corrected priming scores (d’) obtained from pattern classifiers. Results are shown averaged across the occipital and inferior temporal regions of interest in the left hemisphere (LH) and in the right hemisphere (RH). Error bars depict within-subjects standard errors of the mean (Loftus & Masson, 1994).

Table 1.

Mean bias-corrected priming scores in each region of interest.

           Occipital                   Temporal
       LH           RH            LH           RH
AC   0.33 (0.05)  0.16 (0.05)   0.33 (0.07)  0.25 (0.07)
SE   0.10 (0.06)  0.28 (0.06)   0.09 (0.07)  0.15 (0.06)

Note: standard error of the mean in parentheses.

4. Discussion

The aim of this study was to examine the neural representation of visual object categories and visual object exemplars using multi-voxel pattern analyses of brain activity in the ventral visual stream during a repetition-priming task. Results indicated that dissociable neural subsystems underlie AC and SE visual object processing, but these subsystems are only weakly modular, not strongly modular. Recognition of the AC to which an image belongs is supported to a greater degree in the left hemisphere than in the right, whereas identification of the SE to which an image corresponds is supported to a greater degree in the right hemisphere than in the left. This double dissociation supports the hypothesis that AC and SE processing do not occur in a single or unified system. The fact that both AC and SE processing can be supported by both left and right hemisphere areas supports the hypothesis that AC and SE subsystems should be understood as weakly separable or weakly modular, not strongly separable or strongly modular. This conclusion is important in light of other evidence that each subsystem is capable of operating in each hemisphere. For example, previous studies have found that the right hemisphere can play a role in object categorization (Gerlach, Law, Gade & Paulson, 1999; Zannino et al., 2010).

It should be noted that the object images in the same-exemplar primed trials were the same as the images viewed during the initial encoding phase. Thus, we cannot use our results to make any claims about various high-level visual properties of the specific-exemplar subsystem, such as invariance to orientations, rotations, translations, reflections, etc. However, our previous divided-visual-field experiments have indicated that greater exemplar-specific priming in the right hemisphere than in the left is observed when changes in translation, orientation, and rotation take place between encoding and test phases (Burgund & Marsolek, 2000; Marsolek, 1995, 1999; Marsolek & Burgund, 2003, 2005, 2008). In addition, previous studies have used MVPA to study object representation, and they have found that the lateral occipital cortex (LOC) is a critical object representation hub with dissociable representations of different object categories and of different object exemplars within categories (Cichy et al., 2011; Eger et al., 2008). Interestingly, this is consistent with our result of stronger priming asymmetries in the occipital regions relative to the temporal regions, so future studies may benefit from explicitly testing hemispheric asymmetries in LOC. Moreover, these studies have found that the LOC uses object representations that may be invariant to viewpoint.

Interestingly, recent work by Ward, Chun, and Kuhl (2013) examined repetition priming effects in LOC and temporal visual regions using a different type of pattern analysis from the type we used. They used a relatively unsupervised form of pattern analysis and found that the similarity of the evoked activation patterns for the first and second presentations of each stimulus related to explicit memory but not to implicit memory. At first glance, this may seem to run counter to our conclusion that pattern analysis was very useful for assessing repetition priming in a manner that avoided influences from explicit memory, but there are important reasons why it does not. First, the task for the participants in Ward et al. was the same for the first and second presentations, thus rapid response learning rather than repetition priming in visual object identification subsystems may have been measured during the putative “implicit memory” task (Denkinger & Koutstaal, 2009; Dobbins, Schnyer, Verfaellie & Schacter, 2004; Horner & Henson, 2011). Rapid response learning has been linked with explicit memory processes (Schnyer, Dobbins, Nicholls, Schacter, & Verfaellie, 2006) thus it may not be surprising that activation similarities related to explicit memory. Second, we suspect that explicit memory processes may have contributed much to the overall patterns of activation measured in Ward et al., but this does not mean that no activation patterns could have been associated with implicit memory. Implicit memory processes may have a much smaller effect on the overall activation patterns, so that supervised forms of pattern analysis (such as the SVM used in the present article) are needed to identify these patterns by identifying commonalities across many objects.

Although it is suboptimal to test and train classifiers using data from the same scanning run (Pereira et al., 2009), we used only a single run for the test phase of the experiment. As described above, we did this to make our procedure comparable with previous repetition priming work and to avoid interpretation problems (such as might arise from an increased reliance on explicit memory) with using two time-separated test phases. It is important to note that we took multiple steps to reduce the impact of dependence across trials. We used Turner et al.'s (2012) methods for estimating single-trial activation in a way that “unmixes” the activity from each trial to reduce signal contamination from preceding trials. In addition, the split-half cross validation that we used partitioned the data based on first-half/second-half rather than odd/even trial numbers. This approximated the effect of training/testing on two separate blocks by minimizing the potential overlap between signals from the training/testing blocks, helping to adhere to the assumption in MVPA that training and testing sets are drawn independently (Pereira et al.). Furthermore, any effect of statistical dependence between training and testing trials despite our precautions may not be a critical limitation to our results. Pereira et al. showed that non-independent trials should not be used when classifier performance is modeled as a series of independent Bernoulli trials; however, we tested our classifiers using nonparametric methods that do not model performance as independent Bernoulli trials. Moreover, Pereira et al. noted that non-independence does not prohibit the use of contiguous trials when the purpose is not solely to obtain estimates of classifier accuracy. In our study, the absolute level of decoding accuracy was not of primary importance for testing our hypotheses. The important tests concerned how classifiers differed in performance across hemispheres on discriminations that they were not trained to perform.

As in all putative implicit memory experiments conducted with non-amnesic participants, it is possible that our results reflect involuntary explicit memory rather than implicit memory. Despite our efforts to minimize explicit memory contamination (described above), the primed objects may have automatically cued explicit memories for having seen their corresponding stimuli during the encoding phase in an unintentional manner. Even if so, our results may still address our main research questions involving AC and SE visual object representations. We (Marsolek, Schacter, & Nicholas, 1996; Marsolek, Squire, Kosslyn, & Lulenski, 1994) and others (e.g., Garoff et al., 2005) have hypothesized that the same visual form representations that underlie repetition priming in neocortex may also be used—in interaction with medial temporal and other memory subsystems—to help subserve explicit memories involving those visual forms.

Lastly, the AC and SE “priming effects” observed in the present study may be due to combinations of priming and antipriming effects (Marsolek, 2008). Given that different objects have overlapping or superimposed representations in visual cortex (e.g., Haxby et al., 2001; Ishai et al., 1999), strengthening one representation can also weaken other representations with which it is superimposed. Priming refers to the enhanced processing of a repeated stimulus due to strengthening of its representation during the prior encounter. Antipriming refers to impaired processing of a non-repeated stimulus due to weakening of its representation during prior encounters with other stimuli whose representations overlap with those of the non-repeated stimulus. An unbiased baseline condition is needed to tease apart the measurable benefit of priming and the measurable impairment of antipriming (Marsolek et al., 2006). No such baseline condition was included in the current study, in order to keep the main questions of interest tractable and to keep the design similar to those used in previous neuroimaging work on repetition priming. Because recent work indicates that neural activity differences between repeated and non-repeated objects may, in some cases, be more strongly attributed to antipriming than to priming (Marsolek et al., 2010), future work should be aimed at uncovering how much of the AC “priming effects” and the SE “priming effects” reported here are due to antipriming versus priming. For example, relative to appropriate baseline conditions, how much of the AC “priming effect” is due to priming of the same-exemplar primed and different-exemplar primed objects versus antipriming of the word primed objects, and how much of the SE “priming effect” is due to priming of the same-exemplar primed objects versus antipriming of the different-exemplar primed and word primed objects? It would be interesting if observed differences in these ratios between the dissociable AC and SE subsystems could be leveraged to help uncover their differing representational properties.

The results supported a multiple-systems theory of visual object representation in which dissociable yet overlapping subsystems exist for recognizing abstract object categories and identifying specific object exemplars. The evidence that these subsystems are dissociable, but only weakly separable/modular and not strongly modular, helps to arbitrate among theories of visual object representation. It may also help to clarify an outstanding issue in neuropsychology, in particular why the combinations of impaired and intact abilities in individual patients with associative agnosias indicate that two visual processors (not one or three) are susceptible to impairment from brain damage (Farah, 1990, 1991). One is primarily used for visual word and sometimes visual object processing and hence usually requires AC processes, and the other is primarily used for visual face and sometimes visual object processing and hence usually requires SE processes. In addition, patients with left hemisphere damage are more impaired in object categorization tasks that cannot be accomplished by comparing physical identity (implicating AC processing) than in tasks requiring physical identity comparisons (implicating more SE processing), yet patients with right hemisphere damage show the opposite pattern (DeRenzi, Scotti, & Spinnler, 1969; Warrington & Taylor, 1978). Moreover, achieving invariance across orientations and other image transformations (or “object constancy”) can be accomplished by left hemisphere processes through structural descriptions of objects that use an object-centered reference frame involving the principal axis of elongation (Humphreys & Bruce, 1989; Humphreys & Riddoch, 1984). Future research with patients suffering from associative agnosias may be usefully aimed at testing AC and SE processing abilities in patients with primarily left and right hemisphere visual cortical damage.

Highlights.

  • Multivariate fMRI analyses were used to measure repetition priming in visual cortex

  • Priming for object categories was greater in the left hemisphere than in the right

  • Priming for object exemplars was greater in the right hemisphere than in the left

  • Dissociable, weakly modular systems process visual object categories and exemplars

Acknowledgments

Funding for this research came from the National Institutes of Health (MH60442, HD-07151) and from the Center for Cognitive Sciences in conjunction with the National Institute of Child Health and Human Development (HD-07151, T32-HD007151), P30 NS057091, and the Office of the Vice President for Research and Dean of the Graduate School of the University of Minnesota. We also thank the Center for Magnetic Resonance Research (BTRR P41 RR008079). Additionally, we would like to thank Casey Tuck for assistance with data collection and Josh Kinnison for help implementing the AFNI single-trial estimates.

Footnotes


1. Other methods of bias correction were employed with no meaningful changes to the results: (a) subtracting the mean of the null distribution, and (b) converting each priming score to a z-score using its quantile in the null distribution.

Contributor Information

Brenton W. McMenamin, Department of Psychology University of Maryland – College Park

Rebecca G. Deason, Department of Psychology Texas State University

Vaughn R. Steele, The Mind Research Network Lovelace Biomedical and Environmental Research Institute University of New Mexico

Wilma Koutstaal, Department of Psychology Center for Cognitive Sciences University of Minnesota – Twin Cities.

Chad J. Marsolek, Department of Psychology Center for Cognitive Sciences University of Minnesota – Twin Cities

References

  1. Amira O, Biederman I, Hayworth KJ. Sensitivity to nonaccidental properties across various shape dimensions. Vision Research. 2012;62:35–43. doi: 10.1016/j.visres.2012.03.020. [DOI] [PubMed] [Google Scholar]
  2. Beeri MS, Vakil E, Adonsky A, Levenkron S. The role of the cerebral hemispheres in specific versus abstract priming. Laterality. 2004;9:313–323. doi: 10.1080/13576500342000176. [DOI] [PubMed] [Google Scholar]
  3. Biederman I. Recognition-by-components: A theory of human image understanding. Psychological Review. 1987;94:115–147. doi: 10.1037/0033-295X.94.2.115. [DOI] [PubMed] [Google Scholar]
  4. Biederman I, Bar M. One-shot viewpoint invariance in matching novel objects. Vision Research. 1999;39:2885–2899. doi: 10.1016/s0042-6989(98)00309-5. [DOI] [PubMed] [Google Scholar]
  5. Biederman I, Cooper EE. Translational and reflectional priming invariance: a retrospective. Perception. 2009;38(6):809–817. doi: 10.1068/pmkbie. [DOI] [PubMed] [Google Scholar]
  6. Burgund ED, Marsolek CJ. Viewpoint-invariant and viewpoint-dependent recognition in dissociable neural subsystems. Psychonomic Bulletin & Review. 2000;7:480–489. doi: 10.3758/bf03214360. [DOI] [PubMed] [Google Scholar]
  7. Burgund ED, Marsolek CJ, Luciana M. Serotonin levels influence patterns of repetition priming. Neuropsychology. 2003;17:161–170. [PubMed] [Google Scholar]
  8. Bülthoff HH, Edelman S. Psychophysical support for a two-dimensional view interpolation theory of object recognition. Proceedings of the National Academy of Sciences USA. 1992;89:60–64. doi: 10.1073/pnas.89.1.60. [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Chang CC, Lin CJ. LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST) 2011;2:27. [Google Scholar]
  10. Chu C, Hsu AL, Chou KH, Bandettini P, Lin C. Does feature selection improve classification accuracy? Impact of sample size and feature selection on classification using anatomical magnetic resonance images. Neuroimage. 2012;60:59–70. doi: 10.1016/j.neuroimage.2011.11.066. [DOI] [PubMed] [Google Scholar]
  11. Cichy RM, Chen Y, Haynes JD. Encoding the identity and location of objects in human LOC. NeuroImage. 2011;54(3):2297–2307. doi: 10.1016/j.neuroimage.2010.09.044. [DOI] [PubMed] [Google Scholar]
  12. Cooper EE, Biederman I, Hummel JE. Metric invariance in object recognition: A review and further evidence. Canadian Journal of Psychology. 1992;46:191–214. doi: 10.1037/h0084317. [DOI] [PubMed] [Google Scholar]
  13. Cox RW. AFNI: software for analysis and visualization of functional magnetic resonance neuroimages. Computers and Biomedical Research. 1996;29:162–173. doi: 10.1006/cbmr.1996.0014. [DOI] [PubMed] [Google Scholar]
  14. Deason RG, Marsolek CJ. A critical boundary to the left-hemisphere advantage in visual word processing. Brain and Language. 2005;92:251–261. doi: 10.1016/j.bandl.2004.06.105. [DOI] [PubMed] [Google Scholar]
  15. Denkinger B, Koutstaal W. Perceive-decide-act, perceive-decide-act: How abstract is repetition-related decision learning? Journal of Experimental Psychology: Learning, Memory, and Cognition. 2009;35:742–756. doi: 10.1037/a0015263. [DOI] [PubMed] [Google Scholar]
  16. DeRenzi E, Scotti G, Spinnler H. Perceptual and associative disorders of visual recognition. Neurology. 1969;19:634–642. doi: 10.1212/wnl.19.7.634. [DOI] [PubMed] [Google Scholar]
  17. Dobbins IG, Schnyer DM, Verfaellie M, Schacter DL. Cortical activity reductions during repetition priming can result from rapid response learning. Nature. 2004;428:316–319. doi: 10.1038/nature02400. [DOI] [PubMed] [Google Scholar]
  18. Eger E, Ashburner J, Haynes JD, Dolan RJ, Rees G. fMRI activity patterns in human LOC carry information about object exemplars within category. Journal of Cognitive Neuroscience. 2008;20(2):356–370. doi: 10.1162/jocn.2008.20019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Epstein RA, Morgan LK. Neural responses to visual scenes reveals inconsistencies between fMRI adaptation and multivoxel pattern analysis. Neuropsychologia. 2012;50:530–543. doi: 10.1016/j.neuropsychologia.2011.09.042. [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Etzel JA, Braver TS. Pattern Recognition in Neuroimaging (PRNI), 2013 International Workshop. IEEE; Jun, 2013. MVPA permutation schemes: Permutation testing in the land of cross-validation. pp. 140–143. [Google Scholar]
  21. Fairhall SL, Anzellotti S, Pajtas PE, Caramazza A. Concordance between perceptual and categorical repetition effects in the ventral visual stream. Journal of Neurophysiology. 2011;106:398–408. doi: 10.1152/jn.01138.2010. [DOI] [PubMed] [Google Scholar]
  22. Farah MJ. Visual agnosia: Disorders of object recognition and what they tell us about normal vision. MIT Press; Cambridge: 1990. [Google Scholar]
  23. Farah MJ. Patterns of co-occurrence among the associative agnosias: Implications for visual object representation. Cognitive Neuropsychology. 1991;8:1–19. [Google Scholar]
  24. Farah MJ. Is an object an object an object? Cognitive and neuropsychological investigations of domain specificity in visual object recognition. Current Directions in Psychological Science. 1992;1:164–169. [Google Scholar]
  25. Garoff RJ, Slotnick SD, Schacter DL. The neural origins of specific and general memory: The role of the fusiform cortex. Neuropsychologia. 2005;43:847–859. doi: 10.1016/j.neuropsychologia.2004.09.014. [DOI] [PubMed] [Google Scholar]
  26. Gauthier I, Hayward WG, Tarr MJ, Anderson A, Skudlarski P, Gore JC. BOLD activity during mental rotation and viewpoint-dependent object recognition? Neuron. 2002;34:161–171. doi: 10.1016/s0896-6273(02)00622-0. [DOI] [PubMed] [Google Scholar]
  27. Gerlach C, Law I, Gade A, Paulson OB. Perceptual differentiation and category effects in normal object recognition: A PET study. Brain. 1999;122(11):2159–2170. doi: 10.1093/brain/122.11.2159. [DOI] [PubMed] [Google Scholar]
  28. González J, McLennan CT. Hemispheric differences in indexical specificity effects in spoken word recognition. Journal of Experimental Psychology: Human Perception and Performance. 2007;33:410–424. doi: 10.1037/0096-1523.33.2.410. [DOI] [PubMed] [Google Scholar]
  29. González J, McLennan CT. Hemispheric differences in the recognition of environmental sounds. Psychological Science. 2009;20:887–894. doi: 10.1111/j.1467-9280.2009.02379.x. [DOI] [PubMed] [Google Scholar]
  30. Harvey DY, Burgund ED. Neural adaptation across viewpoint and exemplar in fusiform cortex. Brain and Cognition. 2012;80:33–44. doi: 10.1016/j.bandc.2012.04.009. [DOI] [PubMed] [Google Scholar]
  31. Hayward WG, Williams P. Viewpoint dependence and object discriminability. Psychological Science. 2000;11:7–12. doi: 10.1111/1467-9280.00207. [DOI] [PubMed] [Google Scholar]
  32. Hayworth KJ, Biederman I. Neural evidence for intermediate representations in object recognition. Vision Research. 2006;46:4024–4031. doi: 10.1016/j.visres.2006.07.015. [DOI] [PubMed] [Google Scholar]
  33. Haxby JV, Gobbini MI, Furey ML, Ishai A, Schouten JL, Pietrini P. Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science. 2001;293(5539):2425–--2430. doi: 10.1126/science.1063736. [DOI] [PubMed] [Google Scholar]
  34. Haynes JD, Rees G. Decoding mental states from brain activity in humans. Nature Reviews Neuroscience. 2006;7(7):523–534. doi: 10.1038/nrn1931.
  35. Horner AJ, Henson RN. Stimulus-response bindings code both abstract and specific representations of stimuli: Evidence from a classification priming design that reverses multiple levels of response representation. Memory & Cognition. 2011;39:1457–1471. doi: 10.3758/s13421-011-0118-8.
  36. Hummel JE, Biederman I. Dynamic binding in a neural network for shape recognition. Psychological Review. 1992;99:480–517. doi: 10.1037/0033-295x.99.3.480.
  37. Hummel JE, Stankiewicz BJ. Categorical relations in shape perception. Spatial Vision. 1996;10:201–236. doi: 10.1163/156856896x00141.
  38. Humphreys GW, Bruce V. Visual cognition: Computational, experimental, and neuropsychological perspectives. Erlbaum; Hillsdale, NJ: 1989.
  39. Humphreys GW, Riddoch MJ. Routes to object constancy: Implications from neurological impairments of object constancy. Quarterly Journal of Experimental Psychology. 1984;36A:385–415. doi: 10.1080/14640748408402169.
  40. Ishai A, Ungerleider LG, Martin A, Schouten JL, Haxby JV. Distributed representation of objects in the human ventral visual pathway. Proceedings of the National Academy of Sciences. 1999;96(16):9379–9384. doi: 10.1073/pnas.96.16.9379.
  41. Jolicœur P, Gluck MA, Kosslyn SM. Pictures and names: Making the connection. Cognitive Psychology. 1984;16(2):243–275. doi: 10.1016/0010-0285(84)90009-4.
  42. Koutstaal W, Wagner AD, Rotte M, Maril A, Buckner RL, Schacter DL. Perceptual specificity in visual object priming: Functional magnetic resonance imaging evidence for a laterality difference in fusiform cortex. Neuropsychologia. 2001;39:184–199. doi: 10.1016/s0028-3932(00)00087-7.
  43. Loftus GR, Masson ME. Using confidence intervals in within-subject designs. Psychonomic Bulletin & Review. 1994;1:476–490. doi: 10.3758/BF03210951.
  44. Marsolek CJ. Abstract visual-form representations in the left cerebral hemisphere. Journal of Experimental Psychology: Human Perception and Performance. 1995;21:375–386. doi: 10.1037//0096-1523.21.2.375.
  45. Marsolek CJ. Dissociable neural subsystems underlie abstract and specific object recognition. Psychological Science. 1999;10:111–118.
  46. Marsolek CJ. What is priming and why? In: Bowers JS, Marsolek CJ, editors. Rethinking implicit memory. Oxford University Press; Oxford: 2003. pp. 41–64.
  47. Marsolek CJ. What antipriming reveals about priming. Trends in Cognitive Sciences. 2008;12:176–181. doi: 10.1016/j.tics.2008.02.005.
  48. Marsolek CJ, Burgund ED. Computational analyses and hemispheric asymmetries in visual-form recognition. In: Christman S, editor. Cerebral asymmetries in sensory and perceptual processing. Elsevier; Amsterdam: 1997. pp. 125–158.
  49. Marsolek CJ, Burgund ED. Visual recognition and priming of incomplete objects: The influence of stimulus and task demands. In: Bowers JS, Marsolek CJ, editors. Rethinking implicit memory. Oxford University Press; Oxford: 2003. pp. 139–156.
  50. Marsolek CJ, Burgund ED. Initial storage of unfamiliar objects: Examining memory stores with signal detection analyses. Acta Psychologica. 2005;119:81–106. doi: 10.1016/j.actpsy.2004.11.001.
  51. Marsolek CJ, Burgund ED. Dissociable neural subsystems underlie visual working memory for abstract categories and specific exemplars. Cognitive, Affective, and Behavioral Neuroscience. 2008;8:17–24. doi: 10.3758/cabn.8.1.17.
  52. Marsolek CJ, Deason RG, Ketz NA, Ramanathan P, Bernat EM, Steele VR, Patrick CJ, Verfaellie M, Schnyer DM. Identifying objects impairs knowledge of other objects: A relearning explanation for the neural repetition effect. NeuroImage. 2010;49:1919–1932. doi: 10.1016/j.neuroimage.2009.08.063.
  53. Marsolek CJ, Nicholas CD, Andresen DR. Interhemispheric communication of abstract and specific visual-form information. Neuropsychologia. 2002;40:1983–1999. doi: 10.1016/s0028-3932(02)00065-9.
  54. Marsolek CJ, Schacter DL, Nicholas CD. Form-specific visual priming for new associations in the right cerebral hemisphere. Memory & Cognition. 1996;24:539–556. doi: 10.3758/bf03201082.
  55. Marsolek CJ, Schnyer DM, Deason RG, Ritchey M, Verfaellie M. Visual antipriming: Evidence for ongoing adjustments of superimposed visual object representations. Cognitive, Affective, and Behavioral Neuroscience. 2006;6:163–174. doi: 10.3758/cabn.6.3.163.
  56. Marsolek CJ, Squire LR, Kosslyn SM, Lulenski ME. Form-specific explicit and implicit memory in the right cerebral hemisphere. Neuropsychology. 1994;8:588–597.
  57. McMenamin BW, Marsolek CJ. Can theories of visual representation help to explain asymmetries in amygdala function? Cognitive, Affective, and Behavioral Neuroscience. 2013;13:211–224. doi: 10.3758/s13415-012-0139-1.
  58. Mourão-Miranda J, Bokde AL, Born C, Hampel H, Stetter M. Classifying brain states and determining the discriminating activation patterns: support vector machine on functional MRI data. NeuroImage. 2005;28:980–995. doi: 10.1016/j.neuroimage.2005.06.070.
  59. Mumford JA, Turner BO, Ashby FG, Poldrack RA. Deconvolving BOLD activation in event-related designs for multivoxel pattern classification analyses. NeuroImage. 2012;59:2636–2643. doi: 10.1016/j.neuroimage.2011.08.076.
  60. Nichols TE, Holmes AP. Nonparametric permutation tests for functional neuroimaging: a primer with examples. Human Brain Mapping. 2002;15:1–25. doi: 10.1002/hbm.1058.
  61. Norman KA, Polyn SM, Detre GJ, Haxby JV. Beyond mind-reading: Multi-voxel pattern analysis of fMRI data. Trends in Cognitive Sciences. 2006;10:424–430. doi: 10.1016/j.tics.2006.07.005.
  62. Oldfield RC. The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia. 1971;9(1):97–113. doi: 10.1016/0028-3932(71)90067-4.
  63. O'Toole AJ, Jiang F, Abdi H, Pénard N, Dunlop JP, Parent MA. Theoretical, statistical, and practical perspectives on pattern-based classification approaches to the analysis of functional neuroimaging data. Journal of Cognitive Neuroscience. 2007;19:1735–1752. doi: 10.1162/jocn.2007.19.11.1735.
  64. Pereira F, Mitchell T, Botvinick M. Machine learning classifiers and fMRI: a tutorial overview. NeuroImage. 2009;45:S199–S209. doi: 10.1016/j.neuroimage.2008.11.007.
  65. Poggio T, Edelman S. A network that learns to recognize three-dimensional objects. Nature. 1990;343:263–266. doi: 10.1038/343263a0.
  66. Poldrack RA. The role of fMRI in cognitive neuroscience: where do we stand? Current Opinion in Neurobiology. 2008;18(2):223–227. doi: 10.1016/j.conb.2008.07.006.
  67. Schnyer DM, Dobbins IG, Nicholls L, Schacter DL, Verfaellie M. Rapid response learning in amnesia: Delineating associative learning components in repetition priming. Neuropsychologia. 2006;44:140–149. doi: 10.1016/j.neuropsychologia.2005.03.027.
  68. Simons JS, Koutstaal W, Prince S, Wagner AD, Schacter DL. Neural mechanisms of visual object priming: Evidence for perceptual and semantic distinctions in fusiform cortex. NeuroImage. 2003;19:613–626. doi: 10.1016/s1053-8119(03)00096-x.
  69. Stelzer J, Chen Y, Turner R. Statistical inference and multiple testing correction in classification-based multi-voxel pattern analysis (MVPA): random permutations and cluster size control. NeuroImage. 2013;65:69–82. doi: 10.1016/j.neuroimage.2012.09.063.
  70. Stevens WD, Kahn I, Wig GS, Schacter DL. Hemispheric asymmetry of visual scene processing in the human brain: Evidence from repetition priming and intrinsic activity. Cerebral Cortex. 2012;22:1935–1949. doi: 10.1093/cercor/bhr273.
  71. Talairach J, Tournoux P. Co-planar stereotaxic atlas of the human brain. Thieme; Stuttgart: 1988.
  72. Tarr MJ. Rotating objects to recognize them: A case study on the role of viewpoint dependency in the recognition of three-dimensional objects. Psychonomic Bulletin & Review. 1995;2:55–82. doi: 10.3758/BF03214412.
  73. Tarr MJ, Bülthoff HH. Is human object recognition better described by geon structural descriptions or by multiple views? Comment on Biederman and Gerhardstein (1993). Journal of Experimental Psychology: Human Perception and Performance. 1995;21:1494–1505. doi: 10.1037//0096-1523.21.6.1494.
  74. Tarr MJ, Gauthier I. Do viewpoint-dependent mechanisms generalize across members of a class? Cognition. 1998;67:73–110. doi: 10.1016/s0010-0277(98)00023-7.
  75. Tarr MJ, Williams P, Hayward WG, Gauthier I. Three-dimensional object recognition is viewpoint dependent. Nature Neuroscience. 1998;1:275–277. doi: 10.1038/1089.
  76. Turner BO, Mumford JA, Poldrack RA, Ashby FG. Spatiotemporal activity estimation for multivoxel pattern analysis with rapid event-related designs. NeuroImage. 2012;62:1429–1438. doi: 10.1016/j.neuroimage.2012.05.057.
  77. Ullman S. High-level vision: Object recognition and visual cognition. MIT Press; Cambridge, MA: 1996.
  78. Vaidya CJ, Gabrieli JDE, Verfaellie M, Fleischman D, Askari N. Font-specific priming following global amnesia and occipital lobe damage. Neuropsychology. 1998;12:183–192. doi: 10.1037//0894-4105.12.2.183.
  79. Vindiola M, Wolmetz M. Mental encoding and neural decoding of abstract cognitive categories: A commentary and simulation. NeuroImage. 2010;54:2822–2827. doi: 10.1016/j.neuroimage.2010.09.091.
  80. Vuilleumier P, Henson RNA, Driver J, Dolan RJ. Multiple levels of visual object constancy revealed by event-related fMRI of repetition priming. Nature Neuroscience. 2002;5:491–499. doi: 10.1038/nn839.
  81. Wagemans J, Van Gool L, Lamote C. The visual system's measurement of invariants need not itself be invariant. Psychological Science. 1996;7:232–236.
  82. Ward EJ, Chun MM, Kuhl BA. Repetition suppression and multi-voxel pattern similarity differentially track implicit and explicit visual memory. Journal of Neuroscience. 2013;33(37):14749–14757. doi: 10.1523/JNEUROSCI.4889-12.2013.
  83. Warrington EK, Taylor AM. Two categorical stages of object recognition. Perception. 1978;7:695–705. doi: 10.1068/p070695.
  84. Wickens TD. Elementary signal detection theory. Oxford University Press; New York, NY: 2001.
  85. Zannino GD, Buccione I, Perri R, Macaluso E, Gerfo EL, Caltagirone C, Carlesimo GA. Visual and semantic processing of living things and artifacts: An fMRI study. Journal of Cognitive Neuroscience. 2010;22(3):554–570. doi: 10.1162/jocn.2009.21197.