Author manuscript; available in PMC: 2014 Jun 1.
Published in final edited form as: Cogn Neurosci. 2013 Apr 27;4(2):107–114. doi: 10.1080/17588928.2013.787056

Using fMR-Adaptation to Track Complex Object Representations in Perirhinal Cortex

Rachael D Rubin 1,2, Samantha Chesney 1, Neal J Cohen 1,2,3, Brian D Gonsalves 1,2,3
PMCID: PMC3753104  NIHMSID: NIHMS465658  PMID: 23997832

Abstract

Research on medial temporal lobe regions has shifted in emphasis from their role in long-term declarative memory to an appreciation of their role in cognitive domains beyond declarative memory, such as implicit memory, working memory, and perception. Recent theoretical accounts emphasize the function of perirhinal cortex in terms of its role in the ventral visual stream. Here, we used functional magnetic resonance adaptation (fMRa) to show that brain structures in the visual processing stream can bind item features prior to the involvement of hippocampal binding mechanisms. Evidence for perceptual binding was assessed by comparing BOLD responses to fused objects with responses to variants of the same objects presented as separate, non-fused forms (e.g. physically separate objects). Adaptation of the neural response to fused, but not non-fused, objects was observed in left fusiform cortex and left perirhinal cortex, indicating the involvement of these regions in the perceptual binding of item representations.


There is a wealth of evidence in support of the idea that declarative memory depends on the integrity of structures in medial temporal lobe (MTL), in particular hippocampus. Recent research has focused on a theoretical debate about the possibility of different structures within MTL subserving different kinds of declarative memory. Specifically, there is growing evidence that while hippocampus is critical for relational or associative memory, the adjacent perirhinal cortex may be able to support memory for individual items (Brown and Aggleton, 2001; Cohen and Eichenbaum, 1993; Davachi, Mitchell, & Wagner, 2003; Rugg and Yonelinas, 2003). While earlier work focused on defining functions of hippocampus and adjacent MTL cortices in terms of putative cognitive processes such as recollection and familiarity, more recent work has focused on understanding the nature of representations supported by these distinct regions. Conceptually, the main difference between item memory and relational memory is that item memory consists of a configural or inflexible representation of an item in isolation, whereas relational memory flexibly represents items, along with other elements of an experience, including the spatial and temporal context (Cohen and Eichenbaum, 1993).

One effect of this focus on representations supported by MTL regions has been a shift in emphasis from the role of these regions in long-term declarative memory to an appreciation of their role in cognitive domains beyond declarative memory, such as implicit memory (Hannula and Greene, 2012), working memory (Ryan and Cohen, 2004) and perception (Lee et al., 2005). Several recent theoretical accounts (Murray, Bussey, & Saksida, 2007; Saksida and Bussey, 2010) have emphasized the function of perirhinal cortex in terms of its role in the ventral visual stream, with a corresponding emphasis on a role for perirhinal cortex in perception rather than strictly memory. Such accounts emphasize that perirhinal cortex is situated at the top of a hierarchically organized system, in which representations of visual objects are built from simple features combined into more and more complex conjunctions as one moves from posterior to anterior in the ventral visual stream (Desimone and Ungerleider, 1989). Under such a view, perirhinal cortex participates in both perception and memory, by virtue of its role in forming representations of complex objects (Saksida and Bussey, 2010). Much of this work is based on lesion studies in non-human animals, though recent human lesion and neuroimaging studies have converged on a role for perirhinal cortex in the representation of complex objects. On the other hand, much of the research investigating the role of perirhinal cortex in item memory has utilized verbal or verbalizable visual stimuli, and indeed neuroimaging work has suggested a role for perirhinal cortex in conceptual implicit memory (Voss, Hauner, & Paller, 2009; Wang, Lazzara, Ranganath, Knight, & Yonelinas, 2010).

The current study uses fMR-adaptation (fMRa) to index the sensitivity of brain regions to complex object representations, taking advantage of the phenomenon of repetition suppression, which is a general property of neurons at the single-cell and population levels in the ventral visual stream (Desimone, 1996; Grill-Spector, Henson, & Martin, 2006). The basic logic of fMRa is to record the combined hemodynamic response to a train of stimuli, with the idea that if the same stimulus is repeated, each subsequent response will show “adaptation” and be smaller than the previous response in regions that represent that property of the stimulus. The combined hemodynamic response to such repetition is then compared to conditions in which the stimuli, or some aspect of the stimuli, change across repetitions. This logic can be used to probe the nature of representations in cortical regions by manipulating some property of the stimulus on the last repetition and observing which regions recover from adaptation. Those regions that recover from adaptation are sensitive to changes in that property, while those regions that do not recover from adaptation are insensitive to that property, suggesting that they do not code that kind of information (Grill-Spector and Malach, 2001).
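The adaptation logic can be sketched as a toy simulation (a minimal sketch: the linear summation of responses and the value of the suppression factor are illustrative assumptions, not measured quantities from this study):

```python
import numpy as np

def trial_amplitudes(stimuli, adapt=0.6):
    # Each exact repetition of a stimulus the region represents is
    # attenuated (repetition suppression); a changed stimulus recovers
    # to full amplitude. `adapt` is an illustrative suppression factor.
    amps, prev = [], None
    for s in stimuli:
        amps.append(adapt if s == prev else 1.0)
        prev = s
    return amps

def combined_response(stimuli, adapt=0.6):
    # With rapid presentation, the measured BOLD signal approximates the
    # sum of the responses to the individual stimuli in the train.
    return float(np.sum(trial_amplitudes(stimuli, adapt)))

repeat = combined_response(["A", "A", "A"])  # adaptation throughout
change = combined_response(["A", "A", "B"])  # recovery on the last stimulus
```

A region coding the manipulated property shows `repeat < change`; a region insensitive to it shows no such difference.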

In this experiment, we manipulated the specific combination of visual forms that composed an object to bias the perception of a set of abstract features as either a single coherent object or as two separate objects. To encourage the perception of features as a single object, two visual forms were ‘fused’ and moved together across the screen; to encourage perception of features as two separate objects, the visual forms were non-overlapping and moved separately across the screen (see examples in Fig. 1 a) and d), respectively). The critical comparison was between the condition in which the same fused object was presented across the duration of the trial (i.e. Fused item condition) and the condition in which a non-fused variant of two visual forms was presented immediately followed by a fused variant of the same visual forms (i.e. Paired item condition). In this way, the final stimulus is equivalent across conditions; what differs is the viewing history. Differences in activation between these conditions, revealed by adaptation or recovery from adaptation, would indicate which brain regions were sensitive to the fused versus non-fused viewing history of the visual forms.

Figure 1.


Experimental Stimuli and Trial Structure. a) Fused item condition (red box), b) Size-Changing item condition (purple box), c) Novel item condition (blue box), d) Paired item condition (yellow box). The critical comparison is between the Fused item condition and the Paired item condition, in which the final stimulus is equivalent but the viewing history differs. It is essential to the design that in both the Fused item and Paired item conditions the visual forms are similar across the trial, but only in the Paired item condition are the visual forms physically separated. Differences in activation between these conditions indicate which brain regions are sensitive to the fused versus non-fused viewing history of the visual forms (and not the perceptual similarity of the forms).

In early visual areas, we expected little difference between the conditions, because the conditions differ little in the properties to which early visual regions are sensitive (e.g. color, general shape, size, motion). In more anterior visual regions, especially perirhinal cortex, however, we expected differences between the conditions to emerge depending on whether the visual forms are fused in the movie portion of the trial. In the Fused item condition, the visual forms are presented as the same fused object in both the movie and the subsequent static image; therefore we expected adaptation of the neural response in item processing areas that are sensitive to coherent complex visual objects. In the Size-Changing condition, we expected a similar pattern of adaptation: although the size of the fused object changes during the movie, the specific combination of visual forms does not change throughout the trial. In contrast, in the Novel item condition, the visual forms that compose the object in the movie are completely different from the visual forms that compose the object in the static image; therefore we expected recovery from adaptation of the neural response in item processing areas that are sensitive to the specific combination of features that compose an object. In the Paired item condition, we expected a similar pattern of recovery from adaptation: even though the features of the visual forms overlap, the specific combination of the visual forms changes, in that they appear separate during the movie and then fused during the static image. Such results would inform theories about the nature of complex object representations supported by perirhinal cortex, without requiring the relational binding mechanisms of hippocampus, which would be expected to be necessary for storing the link between visual forms that are spatially discontiguous.

Method

Participants

Participants were recruited through local advertisements in the University of Illinois community. Prior to enrollment in the study, participants were screened for contraindications to MRI examination. After an explanation of the study was given, informed written consent was obtained from all participants prior to initiation of the study. All procedures used were approved by the Institutional Review Board of the University of Illinois at Urbana-Champaign.

The study included 14 right-handed, native English-speaking adults aged 18–28 (mean = 21.5; 8 female). In addition to the 14 participants included in the study, 4 participants were excluded from analysis due to excessive movement artifacts in their fMRI data. All participants were financially compensated for their involvement in the study.

Stimuli and Design

The stimuli were computer-rendered novel objects composed of various combinations of separate visual forms (for examples see Fig. 1). Individual study trials consisted of a 4-second movie showing the object(s) moving across the screen, a half-second fixation, and then a 1.5-second display of a static object. Study trials came from one of four conditions: Fused item, Size-Changing item, Novel item, and Paired item. The name of each condition describes the kind of item(s) in the movie, as all trials ended with the presentation of a static object. In the Fused item condition, two visual forms were slightly overlapping, and the combined (“fused”) object translated across the screen as a single object, followed by the presentation of the static version of the same fused object. In the Paired item condition, the two visual forms were non-overlapping and translated separately across the screen, followed by the presentation of a static version of the same two visual forms combined into a fused object. In the Size-Changing item condition, a fused object expanded and contracted on the screen (size transformation), followed by a static version of the same fused object. This condition served as a control for the Fused item condition, controlling for movement across the screen, and thus across the visual field. Finally, in the Novel item condition, a fused object translated across the screen (as in the Fused item condition), but was followed by a static version of a completely different fused object. Each study block was followed by a test block in which individual test trials consisted of a 4-second presentation of a static object. Some of the static objects were the same static objects seen in the previous study block, while others were completely novel objects.

Procedure

The experiment consisted of four study-test blocks inside the scanner. Trials were intermixed in a pseudorandom order, including null fixation events, to facilitate deconvolution of the hemodynamic response to individual conditions. Before each study block, participants were instructed to study the following displays and to pay attention to the movement. They were also informed that there would be a subsequent memory test. Each study block consisted of 28 trials as described above (7 from each condition). The study block was immediately followed by instructions for the test block. Participants were told they would make judgments about which displays they had previously seen, and they made combined recognition confidence judgments (1 = “confident new”, 2 = “unsure new”, 3 = “unsure old”, 4 = “confident old”). Each test block consisted of 42 trials, each containing a static object; 28 objects had been seen in the previous study block and 14 were new.

fMRI Data Acquisition

Imaging was performed using a 3-Tesla Siemens Magnetom Allegra MRI scanner (Siemens Medical Solutions, Erlangen, Germany) at the University of Illinois Biomedical Imaging Center. Participants viewed visual stimuli on a back-projection screen using an angled mirror mounted on the head coil.

After acquisition of a T2 localizer scan, four functional gradient echo-planar imaging (EPI) runs were collected, each 11:16 minutes long (TR = 2000ms, TE = 25ms, 38 interleaved oblique coronal slices, 0.42mm interslice gap, 3.4 × 3.4 × 3 mm3 voxels, flip angle = 80°, field of view = 220 mm, 336 volumes per run). Oblique coronal slice acquisition perpendicular to the main axis of the hippocampus was used to minimize susceptibility artifacts in anterior temporal lobe regions during fMRI data acquisition. Slices were positioned to ensure complete coverage of the occipital lobe, at the expense of excluding the frontal poles for participants for whom whole-brain coverage was not possible. Following the four functional runs, a high-resolution T1 MPRAGE anatomical image was acquired (scan time = 6:58min, TR = 2000ms, TE = 2.22ms, 112 ascending sagittal slices, 0.75mm slice-gap, 1.1 × 1.1 × 1.5mm voxels, flip angle = 8°, field of view = 220 mm).

fMRI Data Analysis

The data were preprocessed using SPM5 (Wellcome Department of Cognitive Neurology, London, UK). For each participant, functional images were adjusted for interleaved slice acquisition and were then subjected to affine motion correction. Resulting images were visually inspected for quality of motion correction. Functional volumes were then normalized to the SPM echo-planar imaging template and resampled to 3 × 3 × 3 mm3 voxels. T2-weighted localizer images were then coregistered to the mean EPI volume across runs, and high-resolution T1 MPRAGE images were coregistered to T2-weighted images. Finally, functional images were smoothed with an 8mm full-width at half maximum isotropic Gaussian kernel to reduce noise.

It is important to note that a study block “event” in this event-related fMRI design consisted of a compound response to the object movie and the subsequent static object. In this sense, the design is akin to adaptation paradigms that consider the combined BOLD response to trains of stimuli. Functional data were modeled using the Finite Impulse Response (FIR) model in SPM5 because (1) we expected an extended hemodynamic response to our compound events and (2) the FIR model makes no assumptions about the shape of the HRF. Direct contrasts between conditions were carried out using t-contrasts at the subject level. Subsequent to individual subject analyses, random-effects group analyses were performed for each contrast using one-sample t-tests, comparing the value of the contrast images against zero.
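As an illustration of the FIR approach (not SPM5's implementation; the onsets, scan count, and bin count below are hypothetical), each condition is modeled by one delta regressor per post-onset time bin, so per-bin response estimates are obtained without assuming an HRF shape:

```python
import numpy as np

def fir_design(onsets_by_cond, n_scans, n_bins=12):
    # One delta regressor per condition x post-onset TR bin; unlike a
    # canonical-HRF model, no assumption is made about response shape.
    cols = []
    for onsets in onsets_by_cond:
        for lag in range(n_bins):
            col = np.zeros(n_scans)
            for t in onsets:
                if t + lag < n_scans:
                    col[t + lag] = 1.0
            cols.append(col)
    return np.column_stack(cols)

# Hypothetical toy run: two conditions, 60 scans, non-overlapping trials.
X = fir_design([[5, 30], [17, 45]], n_scans=60)
true_beta = np.arange(X.shape[1], dtype=float)    # per-bin response heights
y = X @ true_beta                                 # noiseless toy timecourse
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # per-bin estimates
```

With real data the fit is least-squares rather than exact, but the estimated bins trace out the response to each condition's compound event.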

In addition to these voxel-based random-effects analyses, targeted ROI analyses were performed to examine activity in functionally defined brain regions using the MARSBAR toolbox (Brett, Anton, Valabregue, & Poline, 2002). Functional ROIs were used to characterize the responses of regions predicted to be involved in item processing. ROIs were defined in an unbiased manner by contrasting all study trials with fixation trials to identify regions generally involved in processing the stimuli. For sufficiently large ROIs, 8mm spheres were constructed around peak activations; otherwise the entire functional activation was used. Peak BOLD responses were then calculated for each condition in each ROI and subjected to repeated-measures ANOVA F-tests to compare activity across conditions.
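A minimal sketch of the spherical-ROI step, assuming the 8mm figure refers to the sphere radius and using a hypothetical peak coordinate on the 3 × 3 × 3 mm normalized grid (the MARSBAR toolbox's actual implementation differs):

```python
import numpy as np

def sphere_mask(shape, center_vox, radius_mm, voxel_size_mm):
    # Boolean mask of voxels whose centers lie within radius_mm of the
    # peak voxel; anisotropic voxel sizes are handled in millimeters.
    grid = np.indices(shape).astype(float)
    dist2 = sum(((grid[i] - center_vox[i]) * voxel_size_mm[i]) ** 2
                for i in range(3))
    return dist2 <= radius_mm ** 2

# 8 mm sphere on a 3 mm isotropic grid; peak coordinate is hypothetical.
mask = sphere_mask((40, 48, 40), (20, 24, 20), 8.0, (3.0, 3.0, 3.0))

def roi_mean(volume, mask):
    # Mean signal across the ROI for one volume (e.g. the peak FIR bin),
    # which is then compared across conditions.
    return float(volume[mask].mean())
```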

Results

Behavioral Test Block Results

The mean percent correct for the recognition memory data was calculated (Fused item = 76%, Size-Changing item = 73%, Novel item = 60%, Paired item = 75%, and New item = 81%) and a one-way repeated measures ANOVA of condition with five levels was performed. The omnibus F-test showed significant differences between conditions (F4,52 = 7.88, p < 0.005¹). Planned pairwise comparisons between Fused item vs. Novel item, Size-Changing item vs. Novel item, Paired item vs. Novel item, and New item vs. Novel item were all significant (4.38 < |t13| < 6.14, all ps ≤ 0.001). The mean confidence ratings for the recognition memory data were also calculated (Fused item = 3.28, Size-Changing item = 3.20, Novel item = 2.83, Paired item = 3.21, and New item = 1.83) and a one-way repeated measures ANOVA of condition with five levels was performed. The omnibus F-test showed significant differences between conditions (F4,52 = 69.07, p < 0.001). Planned pairwise comparisons between Fused item vs. Novel item, Novel item vs. New item, Paired item vs. Novel item, Size-Changing vs. Novel item, Fused item vs. New item, Paired item vs. New item, and Size-Changing vs. New item were all significant (5.84 < |t13| < 10.45, all ps < 0.001).
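The omnibus tests reported above are one-way repeated-measures ANOVAs; their variance partitioning can be sketched as follows (the toy data are hypothetical, and this sketch omits the Greenhouse-Geisser sphericity correction noted in footnote 1):

```python
import numpy as np

def rm_anova_oneway(data):
    # One-way repeated-measures ANOVA; data has shape (subjects, conditions).
    n, k = data.shape
    grand = data.mean()
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()  # condition effect
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()  # subject effect
    ss_err = ((data - grand) ** 2).sum() - ss_cond - ss_subj
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    f = (ss_cond / df_cond) / (ss_err / df_err)
    return f, df_cond, df_err

# Hypothetical toy data: 3 subjects x 2 conditions.
f, df1, df2 = rm_anova_oneway(np.array([[1., 2.], [2., 4.], [3., 6.]]))
```

Removing between-subject variance from the error term is what makes the within-subjects F-test more sensitive than a between-groups ANOVA on the same scores.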

fMRI Study Block Results

To assess differences in neural activity across the four conditions, a region-of-interest (ROI) analysis was performed in targeted brain regions that were active across all conditions, comparing relative amounts of peak activation. Differences between regions in degree of peak activation were predicted depending on whether adaptation or recovery from adaptation (of the neural response) to the static image was expected during the trial. All regions that showed significant activation to the objects in the four conditions compared to fixation ([Fused item, Size-Changing item, Paired item, Novel item] > fixation, p < 0.005, 5 contiguous voxels) were identified. From this general set of regions, we selected three a priori ROIs based on regions implicated in item processing, including areas in visual cortex, fusiform cortex and perirhinal cortex (Davachi, 2006; Gonsalves, Kahn, Curran, Norman, & Wagner, 2005; Tanaka, 1997). Each ROI was an 8mm sphere around the peak of the functional activation cluster for all trials greater than fixation (p < 0.001, 5 contiguous voxels), except in perirhinal cortex, in which the entire functional activation was used. Coordinates indicate the center of mass for each cluster.

Region-of-Interest Analysis

First, we focused on the critical comparison between the Fused item and Paired item conditions, where there was the largest overlap between the individual features of either a single visual form (i.e. Fused item) or non-fused visual forms (i.e. Paired item). As predicted, there were no differences between these two conditions in early bilateral visual areas: left middle occipital gyrus (MNI coordinates: −36, −87, 9) and right middle occipital gyrus (MNI coordinates: 39, −84, 6), t = 0.89, p > 0.05² and t = 1.24, p > 0.05, respectively. Differences between the Fused item and Paired item conditions, however, emerged as early as left fusiform cortex (MNI coordinates: −33, −60, −15), t = −2.91, p < 0.05, as well as left perirhinal cortex (MNI coordinates: −24, −6, −39), t = −1.87, p < 0.05 (see Fig. 2). In these regions, the Paired item condition showed recovery from adaptation relative to the Fused item condition.

Figure 2.


Selected Item Processing ROIs (same color coding as Fig. 1). ROIs Insensitive to Various Objects (left panel): Left Middle Occipital Gyrus and Right Middle Occipital Gyrus. ROIs Sensitive to Complex (i.e. Fused Forms) Objects (right panel): Left Fusiform and Left Perirhinal. Highlighted regions indicate 8mm sphere around the peak of the functional activation cluster for all trials greater than fixation (p < 0.001, 5 contiguous voxels), except Left Perirhinal in which the entire functional activation cluster was used. Coordinates indicate the center of mass for each cluster. Bar graphs depict average activity of the highlighted region for the peak timepoint of the modeled FIR timecourse. Error bars represent within-subjects SE. *p<.05.

ROI results from the Size-Changing item and Novel item conditions further support the adaptation results from the critical comparison between the Fused item and Paired item conditions. There was a main effect in left fusiform cortex (F3,39 = 5.70, p < 0.05), and all planned comparisons were significant between conditions in which adaptation was expected versus conditions in which recovery was expected: Size-Changing item vs. Paired item (t = −3.45, p < 0.05), Size-Changing item vs. Novel item (t = −2.18, p < 0.05), Fused item vs. Novel item (t = −4.26, p < 0.05). Furthermore, our predictions were supported by the absence of differences between the two conditions in which adaptation was predicted, Size-Changing item vs. Fused item (t = 0.72, p > 0.05), as well as between the two conditions in which recovery from adaptation was predicted, Novel item vs. Paired item (t = 0.71, p > 0.05). In left perirhinal cortex, the main effect failed to reach significance (F3,39 = 0.93, p > 0.05), although, as mentioned previously, the critical comparison between the Fused item and Paired item conditions was significant, t = −1.87, p < 0.05.

Discussion

In the present study, the information driving perception of combinations of visual features as a single object versus two separate objects was provided by short videos showing the object(s) moving together or separately. The results of this study suggest that fMR-adaptation can be used to show differences in item processing throughout the visual system and early MTL regions. In early visual areas, the BOLD response elicited by the Fused item and Paired item conditions was similar. This suggests that early bilateral visual areas processed low-level object features (e.g. general object shape, color), which indeed were designed to be similar between the two critical conditions. In contrast, late visual areas and early MTL regions represented specific combinations of item features as a fused object prior to hippocampal processing. Specifically, BOLD activity in fusiform and perirhinal cortex showed recovery from adaptation in the Paired item condition, in which the two non-fused visual forms were presented and then immediately followed by a fused variant of the same two visual forms. This suggests that these regions are sensitive to specific high-level object features (e.g. object identity, specific feature combinations of an item). In contrast, BOLD activity showed adaptation in these same regions in the Fused item condition, in which the same fused visual form was shown throughout the trial. These results are consistent with the notion that fusiform and perirhinal cortex play a role in representing the complex combinations of visual features that comprise objects.

One implication of this result is that perirhinal cortex, in this respect, behaves like other regions in the ventral visual stream. It forms object representations that take into account information such as spatial contiguity, and it does not obligatorily create fused object representations out of spatially discontiguous features present in the visual world, which is consistent with accounts that emphasize the role of perirhinal cortex in the representation of complex objects (Bussey, Saksida, & Murray, 2002, 2005). Other work in primates, however, has provided evidence for a role of perirhinal cortex in the representation of pairs of objects (Fujimichi, Naya, Koyano, Takeda, Takeuchi, & Miyashita, 2010; Miyashita, 1988; Sakai and Miyashita, 1991). One difference between that work and the present study is that in the work of Miyashita and colleagues, pair-coding neurons in perirhinal cortex develop response selectivity to pairs of objects over a relatively large number of repeated encounters with the same object pairs. Thus, perirhinal cortex may come to represent object pairs after repeated exposure to those pairs, rather than during a single trial as in our present experiment.

Questions also remain about the role of perirhinal cortex in the online creation of fused items from initially disparate elements, especially under task instructions to mentally fuse the elements into a single item. Results from the literature on “unitization” support such a role for perirhinal cortex in conceptual unitization in humans, though most of these studies have relied on verbal stimuli (Haskins, Yonelinas, Quamme, & Ranganath, 2008; though see Staresina and Davachi, 2010). Other studies of unitization have used either amnesic patients or event-related potentials and thus can provide only indirect evidence concerning the specific role of perirhinal cortex in unitization (Bader, Mecklinger, Hoppstädter, & Meyer, 2010; Diana, Van den Boom, Yonelinas, & Ranganath, 2011; Opitz & Cornell, 2006; Pilgrim, Murray, & Donaldson, 2012; Quamme, Yonelinas, & Norman, 2007; Rhodes & Donaldson, 2007, 2008). Further research is needed to establish the role of perirhinal cortex in unitization in which more purely visual forms must be fused online to form novel visual object representations, as well as to directly contrast the role of perirhinal cortex in perceptual versus conceptual unitization.

An intriguing possibility is that the adaptation technique employed in the current study could be used as a marker of online unitization in situations that require the volitional combination of spatially discontiguous items, more akin to the studies cited above that address the role of unitization in associative memory. Such a neural marker of unitization could potentially be assessed for its relationship to subsequent memory performance, in that it may be possible to predict subsequent memory performance based on the amount of adaptation that occurred during unitization. The prediction in this case would be that the more successfully the items were unitized during study, as indexed by neural adaptation, the more likely the unitized version would be recognized during subsequent memory testing on the basis of familiarity, without the need for relational memory representations supported by the hippocampus. If such a relationship is found, this method may prove a powerful tool for assessing the nature of representations implemented in cortical regions surrounding the hippocampus, and thus provide valuable data informing our understanding of the roles of these regions in long-term memory and in other cognitive domains.

ACKNOWLEDGEMENTS

This research was funded by R03 MH082086 and NIMH R01 MH062500.

Footnotes

1

Hereafter, where violations of sphericity occurred, Greenhouse-Geisser corrected p-values are reported; for clarity, unadjusted df are reported.

2

One-tailed thresholds were used for the remaining contrasts given that fMRI adaptation lends itself to clearly directional predictions.

References

  1. Bader R, Mecklinger A, Hoppstädter M, Meyer P. Recognition memory for one-trial-unitized word pairs: evidence from event-related potentials. NeuroImage. 2010;50(2):772. doi: 10.1016/j.neuroimage.2009.12.100. [DOI] [PubMed] [Google Scholar]
  2. Brett M, Anton JL, Valabregue R, Poline JB. Region of interest analysis using an SPM toolbox. 8th International Conference on Functional Mapping of the Human Brain; Sendai, Japan. 2002. [Google Scholar]
  3. Brown MW, Aggleton JP. Recognition memory: What are the roles of the perirhinal cortex and hippocampus? Nature Reviews Neuroscience. 2001;2(1):51–61. doi: 10.1038/35049064. [DOI] [PubMed] [Google Scholar]
  4. Bussey TJ, Saksida LM, Murray EA. Perirhinal cortex resolves feature ambiguity in complex visual discriminations. European Journal of Neuroscience. 2002;15(2):365–374. doi: 10.1046/j.0953-816x.2001.01851.x. [DOI] [PubMed] [Google Scholar]
  5. Bussey TJ, Saksida LM, Murray EA. The perceptual-mnemonic/feature conjunction model of perirhinal cortex function. The Quarterly Journal of Experimental Psychology Section B. 2005;58(3–4):269–282. doi: 10.1080/02724990544000004. [DOI] [PubMed] [Google Scholar]
  6. Cohen NJ, Eichenbaum H. Memory, amnesia, and the hippocampal system. Cambridge, MA: MIT Press; 1993. [Google Scholar]
  7. Davachi L. Item, context and relational episodic encoding in humans. Current Opinions in Neurobiology. 2006;16:693–700. doi: 10.1016/j.conb.2006.10.012. [DOI] [PubMed] [Google Scholar]
  8. Davachi L, Mitchell JP, Wagner AD. Multiple routes to memory: distinct medial temporal lobe processes build item and source memories. Proceedings of the National Academy of Sciences. 2003;100(4):2157–2162. doi: 10.1073/pnas.0337195100. [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Desimone R. Neural mechanisms for visual memory and their role in attention. Proceedings of the National Academy of Sciences. 1996;93:13494–13499. doi: 10.1073/pnas.93.24.13494. [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Desimone R, Ungerleider LG. Neural mechanisms of visual processing in monkeys. In: Boller E, Grafman J, editors. Handbook of neuropsychology. vol. II. Amsterdam: Elsevier; 1989. pp. 267–299. [Google Scholar]
  11. Diana RA, Van den Boom W, Yonelinas AP, Ranganath C. ERP correlates of source memory: Unitized source information increases familiarity-based retrieval. Brain Research. 2011;1367:278–286. doi: 10.1016/j.brainres.2010.10.030. [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Fujimichi R, Naya Y, Koyano KW, Takeda M, Takeuchi D, Miyashita Y. Unitized representation of paired objects in area 35 of the macaque perirhinal cortex. European Journal of Neuroscience. 2010;32(4):659–667. doi: 10.1111/j.1460-9568.2010.07320.x. [DOI] [PubMed] [Google Scholar]
  13. Gonsalves BD, Kahn I, Curran T, Norman KA, Wagner AD. Memory strength and repetition suppression: multimodal imaging of medial temporal cortical contributions to recognition. Neuron. 2005;47:751–761. doi: 10.1016/j.neuron.2005.07.013. [DOI] [PubMed] [Google Scholar]
  14. Grill-Spector K, Henson R, Martin A. Repetition and the brain: neural models of stimulus-specific effects. Trends in Cognitive Science. 2006;10:14–23. doi: 10.1016/j.tics.2005.11.006. [DOI] [PubMed] [Google Scholar]
  15. Grill-Spector K, Malach R. fMRadaptation: a tool for studying the functional properties of human cortical neurons. Acta Psychologica. 2001;107:293–321. doi: 10.1016/s0001-6918(01)00019-1. [DOI] [PubMed] [Google Scholar]
  16. Hannula DE, Greene AJ. The hippocampus reevaluated in unconscious learning and memory: at a tipping point? Frontiers in Human Neuroscience. 2012;6:80. doi: 10.3389/fnhum.2012.00080. [DOI] [PMC free article] [PubMed] [Google Scholar]
  17. Haskins AL, Yonelinas AP, Quamme JR, Ranganath C. Perirhinal cortex supports encoding and familiarity-based recognition of novel associations. Neuron. 2008;59:554–560. doi: 10.1016/j.neuron.2008.07.035. [DOI] [PubMed] [Google Scholar]
  18. Lee AC, Bussey TJ, Murray EA, Saksida LM, Epstein RA, Kapur N, Hodges JR, Graham KS. Perceptual deficits in amnesia: challenging the medial temporal lobe ‘mnemonic’view. Neuropsychologia. 2005;43(1):1–11. doi: 10.1016/j.neuropsychologia.2004.07.017. [DOI] [PubMed] [Google Scholar]
  19. Miyashita Y. Neuronal correlate of visual associative long-term memory in the primate temporal cortex. Nature. 1988;335(6193):817–820. doi: 10.1038/335817a0. [DOI] [PubMed] [Google Scholar]
  20. Murray EA, Bussey TJ, Saksida LM. Visual Perception and Memory: A New View of Medial Temporal Lobe Function in Primates and Rodents. Annual Review of Neuroscience. 2007;30:99–122. doi: 10.1146/annurev.neuro.29.051605.113046. [DOI] [PubMed] [Google Scholar]
  21. Opitz B, Cornell S. Contribution of familiarity and recollection to associative recognition memory: Insights from event-related potentials. Journal of Cognitive Neuroscience. 2006;18(9):1595–1605. doi: 10.1162/jocn.2006.18.9.1595. [DOI] [PubMed] [Google Scholar]
  22. Pilgrim LK, Murray JG, Donaldson DI. Characterizing episodic memory retrieval: Electrophysiological evidence for diminished familiarity following unitization. Journal of Cognitive Neuroscience. 2012;24(8):1671–1681. doi: 10.1162/jocn_a_00186. [DOI] [PubMed] [Google Scholar]
  23. Quamme JR, Yonelinas AP, Norman KA. Effect of unitization on associative recognition in amnesia. Hippocampus. 2007;17:192–200. doi: 10.1002/hipo.20257. [DOI] [PubMed] [Google Scholar]
  24. Rhodes SM, Donaldson D. Electrophysiological evidence for the influence of unitization on the processes engaged during episodic retrieval: Enhancing familiarity based remembering. Neuropsychologia. 2007;45(2):412–424. doi: 10.1016/j.neuropsychologia.2006.06.022. [DOI] [PubMed] [Google Scholar]
  25. Rhodes SM, Donaldson D. Electrophysiological evidence for the effect of interactive imagery on episodic memory: Encouraging familiarity for non-unitized stimuli during associative recognition. Neuroimage. 2008;39(2):873–884. doi: 10.1016/j.neuroimage.2007.08.041. [DOI] [PubMed] [Google Scholar]
  26. Rugg MD, Yonelinas AP. Human recognition memory: a cognitive neuroscience perspective. Trends in Cognitive Science. 2003;7:313–319. doi: 10.1016/s1364-6613(03)00131-1. [DOI] [PubMed] [Google Scholar]
  27. Ryan JD, Cohen NJ. Processing and short-term retention of relational information in amnesia. Neuropsychologia. 2004;42(4):497–511. doi: 10.1016/j.neuropsychologia.2003.08.011. [DOI] [PubMed] [Google Scholar]
  28. Sakai K, Miyashita Y. Neural organization for the long-term memory of paired associates. Nature. 1991;354:152–155. doi: 10.1038/354152a0. [DOI] [PubMed] [Google Scholar]
  29. Staresina BP, Davachi L. Object unitization and associative memory formation are supported by distinct brain regions. The Journal of Neuroscience. 2010;30(29):9890–9897. doi: 10.1523/JNEUROSCI.0826-10.2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  30. Tanaka K. Mechanisms of visual object recognition: Monkey and human studies. Current Opinions in Neurobiology. 1997;7(4):523–529. doi: 10.1016/s0959-4388(97)80032-3. [DOI] [PubMed] [Google Scholar]
  31. Voss JL, Hauner KK, Paller KA. Establishing a relationship between activity reduction in human perirhinal cortex and priming. Hippocampus. 2009;19(9):773–778. doi: 10.1002/hipo.20608. [DOI] [PubMed] [Google Scholar]
  32. Wang WC, Lazzara MM, Ranganath C, Knight RT, Yonelinas AP. The medial temporal lobe supports conceptual implicit memory. Neuron. 2010;68(5):835–842. doi: 10.1016/j.neuron.2010.11.009. [DOI] [PMC free article] [PubMed] [Google Scholar]
