Author manuscript; available in PMC: 2021 Aug 23.
Published in final edited form as: J Cogn Neurosci. 2020 Jun 12;32(9):1796–1812. doi: 10.1162/jocn_a_01593

Examining the neural basis of congruent and incongruent configural contexts during associative retrieval

Courtney R Gerver 1, Amy A Overman 2, Harini J Babu 1, Chloe E Hultman 2, Nancy A Dennis 1
PMCID: PMC8381215  NIHMSID: NIHMS1604308  PMID: 32530379

Abstract

Disrupting the configural context, or relative organization and orientation of paired stimuli, between encoding and retrieval negatively impacts memory. Using univariate and multivariate fMRI analyses, we examined the effect of retaining and manipulating the configural context on neural mechanisms supporting associative retrieval. Behavioral results showed participants had significantly higher hit rates for recollecting pairs in a contextually congruent, versus incongruent, configuration. Additionally, contextual congruency between memory phases was a critical determinant of both the magnitude and patterns of neural activation within visual and parietal cortices. Regions within visual cortices also exhibited higher correlations between patterns of activity at encoding and retrieval when the configural context was congruent across memory phases than when it was incongruent. Collectively, these findings shed light on how manipulating the configural context between encoding and retrieval affects associative recognition, with changes in the configural context leading to reductions in information transfer and increases in task difficulty.

Introduction

With respect to memory, context has often been defined as background information that is irrelevant, incidental, or peripheral to the cognitive task being performed (Murnane, Phelps, & Malmberg, 1999; Hayes, 2007). Researchers have characterized context in many different ways, including the background color behind an item (Mori & Graf, 1996; Dulsky, 1935), word font style, item color, item orientation (Graf & Ryan, 1990; Murnane & Phelps, 1993, 1994, 1995), and item location on a screen (Murnane & Phelps, 1995). Decades of memory research have shown that item retrieval is more difficult when contextual information shifts such that the congruency between study and test becomes disrupted (Hayes, Nadel, & Ryan, 2007; Tulving & Thompson, 1970; DaPolito, Barker, & Wiant, 1972; Thompson, 1972; Mandler, 1980; Eich, 1985; Godden & Baddeley, 1975; Nixon & Kanak, 1981; Smith, 1979, 1982, 1988; Murnane, Phelps & Malmberg, 1999; Gruppuso, Lindsay, & Masson, 2007; McKenzie & Tiberghien, 2004; Macken, 2002). For example, researchers have found a decrease in memory for the item itself when contextual details such as those described previously (e.g., background color behind an item, item color, location of information on a screen) change between encoding and retrieval phases, compared to when contextual information is congruent between the two memory phases (Eich, 1985; Godden & Baddeley, 1975; Smith, 1988). This disruption of context is also associated with slower response times (RTs; Criss, 2010; Ratcliff & McKoon, 2008), even when the items are accurately remembered (Overman et al., 2018).

A critical aspect of item retrieval difficulty in the presence of a changed context is the fact that the item is not encoded and stored in isolation. Rather, it is often incidentally bound to the context irrespective of any instructions to do so. For example, Nakashima & Yokosawa (2011) found that when participants were instructed to encode an item placed in the foreground of a scene, memory was poorer for foreground items when the background was removed at time of test compared to when the background was reinstated at test, suggesting that the scene was incidentally encoded alongside the item. Hayes and colleagues (2004; 2007) found a similar decrement in item retrieval, showing that item memory was impaired by a change in background color at the time of test. Item retrieval absent of the original context in this study was associated with more right-lateralized activity in the parahippocampal cortex (PHC), compared to bilateral activity in the same region when the background remained consistent between encoding and retrieval (Hayes et al., 2004, 2007). The authors interpret this increase in PHC activation in the absence of scenes as evidence that the scenes are being retrieved during item recognition, indicating that they were implicitly bound to the items during encoding (Epstein & Ward, 2010; Goh et al., 2004; Hayes et al., 2007). Thus, past research suggests that reinstatement of encoding background, or context, facilitates item retrieval, and when context is removed, individuals may attempt to retrieve it in order to support item retrieval.

It has been posited that context reinstatement improves item retrieval because context cues facilitate recollection, a more deliberate and detailed retrieval process than familiarity (DaPolito et al., 1972; Mandler, 1980). Mandler’s (1980) classic “butcher on the bus” example illustrates this principle by noting that a feeling of familiarity may arise when an item, such as one’s butcher, is unexpectedly encountered in an unusual context (the bus), but recollection of the individual’s identity is hindered when the retrieval context differs from the encoding context. In another example, an empirical study of this phenomenon found that for face-context pairs, switching contexts at test significantly reduced recollection of the face while leaving item-based familiarity intact (Gruppuso et al., 2007). This effect is seen across verbal (McKenzie & Tiberghien, 2004; Thomson, 1972) and visual (Bar, 2004; Bar & Aminoff, 2003; Tibon et al., 2012) stimuli. Thus, the absence of contextual information negatively impacts the ability of the retrieval cue to support successful recollection (Tiberghien & Cauzinille, 1979).

Associative memory shows similar declines in accuracy and increases in RT to those of item memory when contextual shifts occur between encoding and retrieval phases, such as when the location and relative positioning of the items that comprise the pair are changed between encoding and retrieval (e.g., De Brigard et al., 2020; Giovanello et al., 2009; Overman et al., 2018). While context can be defined as background when referring to item memory, in associative memory, context can refer to the way individual parts of the associative pair are organized with respect to one another. We label the relative organization and orientation of paired stimuli on a computer screen as configural context. When there is a disruption to configural context such that information is shifted from its original presentation format during encoding to a different format at retrieval, memory itself is disrupted (De Brigard et al., 2020; Giovanello et al., 2009; Naveh-Benjamin et al., 2004; Overman et al., 2018; Rhodes et al., 2008; Siegel & Castel, 2018). For example, in Overman and colleagues (2018), a change in the configural context (see Figure 1) of face-scene pairs from encoding to retrieval significantly reduced associative hit rates and led to longer RTs. This behavioral cost occurred even though the configural context of the face-scene pairs between encoding and retrieval was irrelevant to the memory decision regarding face-scene associations. Similar results have been observed when the ordering of word pairs or face pairs undergoes a comparable configural shift between encoding and retrieval (Giovanello et al., 2009; Rhodes et al., 2008). Like the item memory studies discussed above, these findings imply that in associative memory tasks, a lack of encoding-retrieval congruency of the configural context can negatively impact memory across various stimulus types and complexities.

Figure 1.

Figure 1.

Depiction of task showing both encoding conditions (side-by-side and superimposed) and retrieval conditions as a function of configural context congruency. Encoding trials displayed the question, “How welcoming is the face and scene?” and the four response choices below the face-scene pair on every trial. Retrieval trials displayed the text “Please identify the pairings that have been presented previously.” on every trial above the three response choices: Remember, Know, New.

With respect to the neural correlates underlying contextual changes in associative memory, studies (Giovanello et al., 2009; Hayes et al., 2007; Pihlajamäki et al., 2004) have shown that shifts in the configural context from encoding to retrieval have an impact on activation within the medial temporal lobe (MTL), a region critical to processing associative memory (Achim et al., 2007; Mayes et al., 2007). For example, simple inversion of word pairs between encoding and retrieval (e.g., a switch from clock-river to river-clock) was found to lead to differences in MTL recruitment during retrieval. Specifically, reversed pairs were associated with greater anterior hippocampal activity, whereas activity for intact pairs was greater in posterior hippocampus (Giovanello et al., 2009). The authors interpreted this finding as suggesting that activity in the posterior hippocampus reflects processing of the exact reinstatement of a study episode, while activity in the anterior hippocampus reflects the flexible retrieval of learned associations. While this study is important in demonstrating the relationship between neural activity at retrieval and behavioral accuracy when manipulating verbal associations, additional replication and expansion are warranted in order to investigate whether a similar manipulation using complex visual stimuli would result in a similar dissociation within the MTL, as well as differences across the larger retrieval network.

Behavioral consequences of shifts in the configural context (e.g., reduced hit rates and increased RTs) suggest that retrieval in an incongruent configural context is a more difficult process. Difficulty with respect to memory retrieval has been associated with increased activity across the retrieval network, including increased activity in the medial and lateral prefrontal cortex (PFC), lateral and medial MTL (hippocampus, parahippocampal cortex, and perirhinal cortex), ventral parietal cortex, and posterior cingulate cortex (Cabeza & St Jacques, 2007; McDermott et al., 2009; Spreng et al., 2009; Svoboda et al., 2006). Upregulation of these regions has been interpreted as a need to increase retrieval monitoring and evaluation processes associated with memory decisions of greater complexity. Similar increases in activity within these regions have been observed for successful recollection (Achim et al., 2007; Johnson & Rugg, 2007; Mayes et al., 2007; McDermott et al., 2000; Poppenk et al., 2010; Schacter et al., 1996; Vilberg & Rugg, 2007; Von Zerssen et al., 2001), which itself is a difficult retrieval process that relies on a great deal of monitoring, decision-making, and evaluation (Egner et al., 2005; Fellows & Farah, 2005; Philiastides et al., 2006; Ranganath & Paller, 2000).

The benefits of contextual congruency at retrieval are highlighted by the principles of transfer appropriate processing (TAP) and cortical reinstatement. Based on the idea that encoding and retrieval processes are interdependent, it has long been suggested that successful memory is associated with the recapitulation of cognitive processes between encoding and retrieval (Gisquet-Verrier & Riccio, 2012; Morris et al., 1977; Rasch & Born, 2007; Ritchey et al., 2013; Rugg et al., 2008; Tulving & Thomson, 1973; Vaidya et al., 2002). In the model put forth by Norman & O’Reilly (2003), not only was encoding-retrieval overlap in activation critical to memory success, but so was the reactivation of encoded representations, reflected in the reinstatement at retrieval of the pattern of cortical activity elicited during encoding. The principles of this theory can be tested using encoding-retrieval similarity (ERS) analysis, which evaluates the neural pattern similarity between individual trials across encoding and retrieval (Cabeza et al., 2002; Nairne, 2002; Ranganath & Ritchey, 2012; Rugg et al., 2008). For example, using ERS, Ritchey et al. (2013) found that overlap in neural patterns within middle occipital gyrus, middle temporal gyrus, and supramarginal gyrus was associated with memory success, suggesting that retrieval-related recapitulation of neural patterns elicited at the time of encoding benefits memory. TAP theory suggests that this reinstatement should be greater when the configural context is reinstated at retrieval; however, it is unclear how manipulations of contextual congruency affect cortical reinstatement. Like ERS analysis, multivoxel pattern analysis (MVPA) has been able to examine neural processes at the level of cortical patterns of activation.
With respect to context processing, MVPA analyses have been able to discriminate differences in stimulus orientation in visual processing regions (Harrison & Tong, 2009), memories of context locations in parahippocampal gyrus and posterior cingulate (Polyn et al., 2005), and perceptually distinct targets and lures in midline occipital cortex (Bowman et al., 2019). The application of MVPA to manipulations of contextual configurations at retrieval is useful for elucidating how context congruency affects neural patterns of activation, and in turn how these patterns support successful memory.

The current study aims to extend our original work on the role of configural context in associative memory retrieval (Overman et al., 2018) and broadens that of Giovanello, Schnyer, and Verfaellie (2009) to better understand the neural signature of contextual reinstatement when manipulating the configural context of complex object pairs from encoding to retrieval. We first aim to replicate previous findings in both the item and associative memory fields, showing that disruptions of context have a detrimental effect on memory retrieval. With respect to the neural basis of associative retrieval across congruent and incongruent configural contexts, we hypothesize that, in line with Giovanello, Schnyer, and Verfaellie (2009), congruent retrieval will be associated with greater activation in posterior hippocampus, due to the fact that this region has been shown to be involved with processing the exact reinstatement of a study episode irrespective of memory decision (Giovanello, Schnyer, & Verfaellie, 2009). By comparison, we predict viewing pairs in an incongruent configural context will exhibit increased activation in anterior hippocampus (Giovanello, Schnyer, & Verfaellie, 2009). Based on the position that incongruent retrieval is a more difficult memory process, we also posit that, compared to congruent retrieval, incongruent retrieval will be associated with increased activity in the anterior cingulate, right inferior frontal cortex, and visual processing regions (Gould et al., 2003). If congruency at retrieval is facilitated by the fact that the original encoding context is recapitulated at retrieval, we predict that congruent and incongruent targets should be discriminable with respect to neural patterns representing both types of associations. Furthermore, we would predict greater encoding-retrieval similarity in neural patterns for congruent compared to incongruent associations across the retrieval network.

Method

Participants

Thirty-one right-handed adults were recruited from The Pennsylvania State University community and received $25 for their participation. Participants were screened for history of psychiatric and neurological illness, head injury, stroke, learning disability, medication that affects cognitive and physiological function, and substance abuse. On the day of the study, all participants provided written informed consent for a protocol approved by The Pennsylvania State University Institutional Review Board. All participants were native English speakers with normal or corrected-to-normal vision. All participants were enrolled in college or post-graduate education. Three participants were removed from the study due to claustrophobia during scanning, one due to excess movement in the scanner, and one due to misunderstanding the paradigm. Thus, the reported results are based on data from 26 participants (19 female, Mage = 20.5 years, SDage = 1.95 years).

Stimuli

Stimuli consisted of 170 color photographs of faces and 170 color photographs of scenes paired together. Face stimuli consisted of both male and female faces, each exhibiting a neutral expression, taken from the following online databases: the Color FERET database (Phillips, Moon, Rizvi, & Rauss, 2000), the adult face database from Dr. Denise Park’s lab (Minear & Park, 2004), the AR face database (Martinez & Benavente, 1998), and the FRI CVL Face Database (Solina, Peer, Batageli, Juvan, & Kovac, 2003). Scene stimuli consisted of outdoor and indoor scenes collected from an Internet image search. Using Adobe Photoshop CS2 version 9.0.2 and Irfanview 4.0 (http://www.irfanview.com/), we edited face stimuli to a uniform size (320 × 240 pixels) and background (black), and scene stimuli were standardized to 576 × 432 pixels.

During encoding, half of the face-scene pairs were presented side-by-side and the other half with the face superimposed on top of the scene (see Figure 1). At retrieval, half of the pairs were presented in a manner congruent with their original contextual configuration and the other half in the opposite configuration (i.e., incongruent contextual configuration). Each encoding and retrieval block consisted of 36 total pairs. Within the 36 pairs per retrieval block, 10 were lures (5 side-by-side configured and 5 superimposed configured) in which the face was rearranged with a different scene than was originally presented at encoding. Pairs were presented for 4 s at encoding and for 4 s at retrieval. A jittered interstimulus interval (2–8 s) separated the presentation of each image. Each encoding and retrieval block lasted 4 minutes and 18 seconds. A second version of this task was created to counterbalance the design such that the same stimuli were presented in the opposite configuration across versions (e.g., a face-scene pair presented side-by-side in Version A was presented as superimposed in Version B).

Procedure


The encoding phase of this experiment was reported in Dennis et al., 2019. Prior to scanning, all participants practiced several trials of encoding and retrieval. Participants were encouraged to ask questions during this time. The scanning session began with a structural scan (MPRAGE) that took approximately 7 minutes. During this time, the participants were asked to remain as still as possible. Following the structural scan, participants completed 5 encoding and 5 retrieval blocks presented in an alternating order. Instruction screens were presented prior to each block reiterating the verbal instructions participants received prior to entering the scanner. Presentation of all instruction screens was self-paced, meaning that participants pressed “1” on the handheld button box to advance to the next screen once they had read the instructions and were ready to begin the task. When the instruction slides appeared on the screen, the participant was asked to explain the instructions verbally before proceeding with the experiment in order to verify an accurate understanding of the task.

After advancing past the instruction slides in encoding, the participant was presented with a series of face and scene pairings displayed on the screen in either an item-item or an item-context configuration. Each pair was presented for 4 seconds, during which time the participant responded to the question: “How welcoming are the scene and face?” presented in text below each pair by utilizing a rating scale from 1–4 (1 = not at all; 4 = very) and making a key press on their hand-held button box. The instructions emphasized that the participants should choose the rating based on how welcoming the face and scene pairing was together, to facilitate encoding of both the face and the scene rather than just one or the other. This question also helped ensure that participants paid attention to the scene, even when it was configured behind the face in the item-context condition, because we did not want the scene to be merely incidentally encoded while the face was intentionally encoded. Across versions, faces and scenes were counterbalanced for their inclusion in either an item-item or an item-context pair. No differences across versions were noted and all analyses are collapsed across versions.

Each encoding block was followed by a retrieval block. Similar to encoding, each face-scene pair at retrieval was presented for 4 seconds. During this time participants were asked to respond to the question: “Please identify the pairings that have been presented previously.” Displayed below the question were the following choices: 1 = remember, 2 = know, 3 = new. A Remember-Know-New design was chosen in order to isolate recollection-related activity, associated with ‘remember’ responses, from that of familiarity, associated with ‘know’ responses. This distinction has been shown to be critical when assessing memory-related activity, particularly within the MTL (Yonelinas, 2002; Yonelinas, Otten, Shaw, & Rugg, 2005; Yonelinas et al., 2007). Participants were instructed to make their memory judgments based on the co-occurrence of the face and scene and not to base their judgments on the configuration of the display. Specifically, participants were instructed to select ‘Remember’ if they were able to retrieve details of the face and scene appearing together at study, select ‘Know’ if they thought the face and scene appeared together previously but they could not remember specific details about the original appearance, and select ‘New’ if they believed that the face-scene pair had not been presented together previously. Thus, whether a retrieval configuration was congruent or incongruent, the Remember/Know/New labels applied equally to both types of configurations, with a participant’s response simply dependent on the vividness of their memory. Similar to encoding, all responses were made using the button box (only retrieval data are analyzed in the current analysis). With respect to retrieval configurations, half of the trials were congruent with their encoding configuration and half incongruent (such that, for example, a side-by-side trial at encoding was presented as a superimposed trial at retrieval).

Image Acquisition.

Structural and functional images were acquired using a Siemens 3T scanner equipped with a 12-channel head coil, with slices oriented parallel to the AC-PC plane. Structural images were acquired with a 1,650 ms TR, 2.03 ms TE, 256 mm field of view (FOV), 256 × 256 matrix, 160 axial slices, and 1.0 mm slice thickness for each participant. Echo-planar functional images were acquired using a descending acquisition, 2,500 ms TR, 25 ms TE, 240 mm FOV, an 80 × 80 matrix, 90 degree flip angle, and 42 axial slices with 3.0 mm slice thickness, resulting in 3.0 mm isotropic voxels.

Image Processing.

For univariate analyses, raw anatomical and functional images were first skull stripped using the Brain Extraction Tool (Smith, 2002) in the FMRIB Software Library (FSL) version 5.0.10 (www.fmrib.ox.ac.uk/fsl). FSL’s MCFLIRT function (Jenkinson, Bannister, Brady, & Smith, 2002) was then applied for realignment and motion correction within each functional run. All volumes were aligned to the middle volume of the middle run of encoding. The realigned functional images were then processed by FSL’s fMRI Expert Analysis Tool (FEAT; Woolrich, Ripley, Brady, & Smith, 2001), where they were high-pass filtered and spatially smoothed at 6 mm FWHM. These data were then prewhitened to account for temporal autocorrelations within voxels. Lastly, the structural data underwent non-linear transformation into the standardized Montreal Neurological Institute (MNI) space using the warping function in FSL’s FNIRT (Andersson, Jenkinson, & Smith, 2010). For multivariate analyses, the raw data underwent the same steps as above, absent smoothing.

Behavioral analyses

All behavioral analyses were conducted in RStudio (RStudio Team, 2018; http://www.rstudio.com/). Since this study aimed to examine pure associative memory, behavioral and neuroimaging analyses focused on accurately recollected memory decisions. To test for congruency-related differences in memory, recollection values were entered into a paired-samples t-test. Response times for configurally congruent and incongruent recollection were subjected to t-tests to examine whether differences in visual presentation affect processing speed. Last, response times were correlated against recollection rates to examine whether there was a speed-accuracy tradeoff.
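The structure of these tests can be sketched as follows. This is a minimal Python/SciPy illustration with simulated per-participant values, not the authors' R code; all variable names and data are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 26  # participants retained after exclusions

# Hypothetical per-participant recollection hit rates (proportions)
congruent = rng.uniform(0.50, 0.85, n)
incongruent = congruent - rng.uniform(0.05, 0.25, n)

# Paired-samples t-test for the congruency effect on recollection
t_val, p_val = stats.ttest_rel(congruent, incongruent)

# Correlate overall recollection with response time (hypothetical RTs)
rt = rng.uniform(1.4, 2.2, n)
r_val, p_r = stats.pearsonr((congruent + incongruent) / 2, rt)
```

The same `ttest_rel` call applies to the RT comparison; a negative Pearson r between recollection and RT would indicate that better memory accompanied faster responding rather than a speed-accuracy tradeoff.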

Univariate Analyses

At the first level, trial-related activity was modeled in SPM12 using two general linear models (GLMs) with a stick function corresponding to trial onset convolved with a canonical hemodynamic response function. In order to address our first aim, namely to better understand the neural signature of contextual reinstatement when manipulating the configural presentation of complex object pairs from encoding to retrieval, two second-level random effects GLMs were created and one-sample t-tests were conducted to investigate contrasts of interest. The first model focused on two trial types of interest: 1) Congruent Targets, which were defined as any target trial presented in a configuration congruent with encoding; and 2) Incongruent Targets, which were defined as any target trial presented in a configuration incongruent with encoding. Configurally congruent and incongruent targets each contained two regressors that were combined at the contrast stage: side-by-side congruent and superimposed congruent for the former, and side-by-side incongruent and superimposed incongruent for the latter. All lures were coded as a regressor of no interest.

The second model focused on 4 trial types of interest: 1) Congruent Recollection, which were defined as ‘Remembered’ trials that were configurally congruent with encoding; 2) Incongruent Recollection, which were defined as ‘Remembered’ trials that were configurally incongruent from encoding; 3) Combined Familiar, which were defined as targets to which participants accurately responded ‘Know’; and 4) Combined Miss, which were defined as targets that participants misidentified as ‘New’. Similar to the above model, congruent and incongruent recollection each contained two regressors: side-by-side congruent and superimposed congruent for the former, and side-by-side incongruent and superimposed incongruent for the latter. With respect to trial types 3 and 4, low trial counts for ‘Know’ responses precluded us from modeling and examining Familiarity within each visual condition. Thus, the decision was made to isolate Recollection-related activity for each condition of interest, combining Know and New responses within each condition into an ‘Other’ regressor against which Recollection was contrasted to obtain Recollection success effects. All other trial types, along with no-response trials, were coded together as a regressor of no interest.

Based on our a priori hypotheses regarding the role of MTL subregions in processing congruent and incongruent trials, we investigated all univariate and multivariate effects within the bilateral hippocampus, PHC, and perirhinal cortex (PrC). The hippocampus and PHC masks were derived from the aal pickatlas (Lancaster et al., 2000) and the bilateral PrC mask was taken from Holdstock and colleagues (Holdstock, Hocking, Notley, Devlin, & Price, 2009).

For all univariate contrasts, we employed Monte Carlo simulations as implemented by 3dClustSim in AFNI version 16.0 (Cox & Hyde, 1997), to determine activation that was corrected for multiple comparisons at p < 0.05, using an uncorrected p threshold (p < 0.005). An additional simulation was run to determine a correction specific to the MTL.

Multivariate pattern classification and analysis

In order to estimate neural activity associated with individual trials, an additional GLM was estimated in SPM12 defining one regressor for each trial at retrieval (170 in total). An additional 6 nuisance regressors were included in each run corresponding to motion. Whole-brain beta parameter maps were generated for each trial at retrieval for each subject. In a given parameter map, the value in each voxel represents the regression coefficient for that trial’s regressor in a multiple regression containing all other trials in the run and the motion parameters. These beta parameter maps were next concatenated across runs and submitted to the CoSMoMVPA toolbox (Oosterhof et al., 2016) for pattern classification analyses. Given our interest in determining which regions in the associative retrieval network discriminated between congruent and incongruent targets, separate classification accuracies were computed in regions of interest (ROIs) previously identified as critical to associative memory retrieval. These included separate classifications within the aforementioned regions of the MTL (bilateral hippocampus, PHC, and PrC), prefrontal cortex (PFC), posterior parietal cortex, early visual cortex, and late visual cortex. Regions that comprised the PFC were first identified from meta-analyses of memory, including inferior, medial, and middle frontal gyri (Kim, 2011; Maillet & Rajah, 2014). These regions were then combined in the aal pickatlas using their anatomically defined boundaries to create a single ROI for the PFC. Due to our previous work indicating that encoding configuration significantly influences patterns of neural activity in the inferior occipital cortex (IOC; BA 17/18) and middle occipital cortex (MOC; BA 19; Dennis, Overman et al., 2019), these regions were also selected using the aal pickatlas.
Finally, parts of the posterior parietal cortex, including BA5 and BA7, were chosen based on previous work showing consistent involvement of these regions in memory retrieval (Davis et al., 2008; Sestieri et al., 2017; Wagner et al., 2005). In addition, recent MVPA studies suggest that regions within the posterior parietal cortex, which includes the angular gyrus, BA5, and BA7 (Brodt et al., 2018; Sestieri et al., 2017), can represent content-specific episodic information during retrieval (Kuhl & Chun, 2014; Xue et al., 2013). We therefore created a posterior parietal mask using bilateral angular gyrus, BA5, and BA7 labels, again using the aal pickatlas.

Classification analyses were computed for retrieval runs using a support vector machine classifier with a linear kernel applied to all voxels within an ROI (Mumford et al., 2012). Training and testing followed a leave-one-run-out cross-validation procedure, with 4 runs used as training data and 1 run as testing data. Group-level results were generated by averaging across validation folds from all possible train-data/test-data permutations at the individual subject level. We first tested whether a classifier could discriminate between all configurally congruent and incongruent targets significantly above theoretical chance (two trial types; 50%) using a one-tailed one-sample t-test on classification accuracy within each ROI. We also ran a separate analysis that tested whether a classifier could discriminate above theoretical chance between targets presented across the four configural trial types (side-by-side congruent, side-by-side incongruent, superimposed congruent, and superimposed incongruent; 25%), again using a one-tailed one-sample t-test on classification accuracy within each ROI. Last, to assess whether classifier accuracy was related to behavioral performance, separate repeated-measures ANOVAs were computed for each ROI that showed significant classification, with overall recollection rates as the dependent measure.
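The cross-validation scheme can be sketched as follows. The published analysis used CoSMoMVPA in MATLAB; this scikit-learn version with simulated single-trial beta patterns for one ROI only illustrates the train-on-4-runs/test-on-1-run logic, and all dimensions and labels are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(1)
n_trials, n_voxels = 150, 500            # simulated single-trial beta maps in one ROI
X = rng.standard_normal((n_trials, n_voxels))
y = rng.integers(0, 2, n_trials)         # 0 = congruent target, 1 = incongruent target
runs = np.repeat(np.arange(5), 30)       # 5 retrieval runs define the CV folds

# Linear SVM: train on 4 runs, test on the held-out run, average over folds
clf = SVC(kernel="linear")
acc = cross_val_score(clf, X, y, groups=runs, cv=LeaveOneGroupOut()).mean()
# At the group level, per-subject accuracies would then be compared to
# theoretical chance (0.50 for two classes) with a one-tailed one-sample t-test.
```

With genuinely uninformative (random) patterns, as here, `acc` hovers near chance; reliable above-chance accuracy across subjects is what licenses the inference that the ROI carries congruency information.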

All significant findings were confirmed using permutation testing in order to correct for false positives (Etzel & Braver, 2013; Gaonkar & Davatzikos, 2012; Kriegeskorte et al., 2006; Stelzer et al., 2013). Specifically, we ran a follow-up test that repeatedly randomized the retrieval labels and reran the classification analysis on the permuted data. This was done 1,000 times for each significant region to produce a null distribution that simulates the accuracy scores that could be obtained if the retrieval manipulation had no effect (Dennis et al., 2019).
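The label-shuffling logic can be sketched as below. The `accuracy_fn` callback and the toy "perfect classifier" check are hypothetical stand-ins for the real classification pipeline; only the shuffle-and-rescore structure reflects the procedure described above.

```python
import numpy as np

def permutation_p(observed_acc, y, accuracy_fn, n_perm=1000, seed=0):
    """p-value for observed_acc against a null distribution built by
    re-scoring the classifier with randomly permuted condition labels."""
    rng = np.random.default_rng(seed)
    null = np.array([accuracy_fn(rng.permutation(y))
                     for _ in range(n_perm)])
    # proportion of permuted accuracies at least as large as observed
    # (+1 correction so p is never exactly zero)
    return (np.sum(null >= observed_acc) + 1) / (n_perm + 1)

# Toy check: predictions that match the true labels perfectly should
# beat essentially every label shuffle.
y = np.array([0, 1] * 50)
preds = y.copy()
score = lambda labels: np.mean(preds == labels)
p = permutation_p(score(y), y, score)
```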

ERS analysis

In order to examine whether shifts in the configural context had an effect on the similarity of target representations between encoding and retrieval, we directly compared neural patterns of activation between encoding and retrieval across configurally congruent and incongruent conditions. Based on our previous work indicating that encoding configuration (i.e., side-by-side; superimposed) significantly influences patterns of neural activity during encoding (Dennis et al., 2019), we separated all targets into four conditions of interest: side-by-side congruent targets, superimposed congruent targets, side-by-side incongruent targets, and superimposed incongruent targets. Activation for each individual trial of a given condition at encoding was correlated with every trial of the same type at retrieval (e.g., targets presented in a side-by-side configuration at encoding that were congruently re-presented side-by-side at retrieval). This produced similarity scores, operationalized as Pearson's r correlation values, for each trial. These correlations were then averaged within condition for each subject, and group-level results were generated by averaging within condition across all participants.
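The trial-by-trial correlation step can be sketched as follows, assuming trials-by-voxels pattern matrices for one condition in one ROI. This is an illustrative implementation of set-level ERS, not the authors' code.

```python
import numpy as np

def ers(enc, ret):
    """Set-level encoding-retrieval similarity: the mean Pearson r
    between every encoding trial pattern and every same-condition
    retrieval trial pattern (inputs are trials x voxels arrays)."""
    def zscore(p):
        p = p - p.mean(axis=1, keepdims=True)
        return p / p.std(axis=1, keepdims=True)
    # dot product of z-scored patterns divided by the number of
    # voxels equals the Pearson correlation for each trial pair
    r = zscore(enc) @ zscore(ret).T / enc.shape[1]
    return r.mean()
```

Averaging these per-subject values within condition, and then across participants, yields the group-level scores entered into the ANOVA reported in the Results.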

Results

Behavioral.

A paired-samples t-test comparing the effect of retrieval presentation on recollection rates revealed that participants recollected significantly more configurally congruent pairs (M = 68.53%, SD = 13.30%) than incongruent pairs (M = 54.43%, SD = 15.76%), t(25) = 6.34, p < .001, 95% CI [.10, .19]. The same pattern emerged for a t-test comparing corrected recollection rates (recollection rate − false alarm rate), such that corrected recollection rates were significantly higher for configurally congruent pairs (M = 61.54%, SD = 13.63%) than incongruent pairs (M = 50.00%, SD = 15.14%), t(25) = 5.25, p < .001, 95% CI [.07, .16]. Another paired-samples t-test revealed that participants false alarmed more often to congruently presented pairs (M = 18.20%, SD = 8.46%) than to incongruently presented pairs (M = 14.18%, SD = 7.91%), t(25) = 4.26, p < .001, 95% CI [.02, .06]. Response time for recollection also differed significantly between conditions, such that participants were faster at recollecting congruently presented trials (M = 1.69 s, SD = .30 s) than incongruently presented trials (M = 1.87 s, SD = .37 s), t(25) = −.24, p < .001, 95% CI [−.24, −.12]. Last, higher overall recollection rate was significantly correlated with faster RT (r = −.36, p < .01).

For the sake of thoroughness, we also computed the adjusted familiarity hit rate (pKnow/(1 − pRemember)) and the adjusted familiarity false alarm rate (pKnow FA/(1 − pRemember FA)). There was no significant difference in the identification of configurally congruent pairs as familiar (using the 'know' response; M = 42.90%, SD = 15.18%) compared to configurally incongruent pairs (M = 44.77%, SD = 14.60%), t(25) = 1.60, p = .12, 95% CI [−.01, .11]. Similarly, there was no significant difference in the familiarity false alarm rate for configurally congruent pairs (M = 12.07%, SD = 8.36%) compared to configurally incongruent pairs (M = 10.20%, SD = 8.31%), t(25) = 1.75, p = .09, 95% CI [.00, .04] (see Table 1 for all behavioral results).
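The independence correction above is a direct transcription of the stated formula; the rates plugged into the example call are illustrative values, not the observed data.

```python
def adjusted_familiarity(p_know, p_remember):
    """Independence-corrected familiarity estimate:
    F = pKnow / (1 - pRemember).
    The same formula applies to false alarms, substituting
    pKnow FA and pRemember FA."""
    return p_know / (1.0 - p_remember)

# Illustrative (made-up) rates: 30% 'know' hits with 50% 'remember'
# hits yields an adjusted familiarity of 0.60.
f = adjusted_familiarity(p_know=0.30, p_remember=0.50)
```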

Table 1.

Behavioral data

                         Congruent       Incongruent
Overall Hit*             .88 (.06)       .79 (.09)
Overall Hit RT*          1.85s (.32s)    2.07s (.042s)
Recollection
  Hit*                   .69 (.13)       .54 (.16)
  False Alarm*           .07 (.03)       .04 (.03)
  Adjusted Hit*          .62 (.14)       .50 (.15)
  RT*                    1.69s (.30s)    1.87s (.37s)
Adjusted Familiarity
  Hit                    .60 (.13)       .55 (.15)
  False Alarm            .12 (.08)       .10 (.08)
  Adjusted Hit           .48 (.06)       .45 (.07)
  RT                     2.57s (.49s)    2.56s (.50s)

Table 1 reports the mean (and standard deviation) hit rate, false alarm rate, adjusted hit rate, and response time (RT) broken down by recognition response and condition. Significant differences between configurally congruent and incongruent trials are noted by * = p < .001.

Adjusted recollection hit rate = recollection rate − false alarm (FA) rate. Adjusted familiarity hit rate = pKnow/(1 − pRemember). Adjusted familiarity FA rate = pKnow FA/(1 − pRemember FA).

Univariate.

Overall successful recollection exhibited neural activity within the typical associative memory network, including bilateral occipital cortex, bilateral superior parietal cortex, bilateral medial and superior PFC, left PHC, and right hippocampus. While retrieval success was not a planned analysis, we conducted it to confirm that our task design elicited typical recollection-related activity throughout the associative retrieval network.

Examination of the effect of configural congruency found that congruent compared to incongruent targets exhibited significantly greater activation in right medial superior frontal gyrus, bilateral middle temporal gyrus, left angular gyrus, and left cuneus. Alternatively, incongruent compared to congruent targets exhibited significantly greater activation in bilateral middle frontal gyrus, right inferior frontal gyrus, right middle cingulate gyrus, bilateral posterior parietal cortex, left inferior temporal gyrus, and left cerebellum. Neither contrast exhibited significant differences in MTL activation.

When focusing on recollection-related activation, successful recollection in the congruent compared to incongruent condition exhibited significantly greater activity in the right medial superior frontal gyrus, left middle temporal gyrus, and left caudate nucleus. The reverse contrast showed that incongruent compared to congruent recollection elicited greater activity in bilateral posterior parietal cortex, bilateral inferior frontal gyrus, right middle frontal gyrus, right fusiform gyrus, right cerebellum, and bilateral inferior temporal gyrus. Within the MTL, congruent recollection exhibited significantly greater activity than incongruent recollection in the left PHC; no MTL region showed greater activity for incongruent compared to congruent recollection. See Figure 2 and Table 2 for a depiction of all univariate results.

Figure 2.

Figure 2.

Whole-brain activity for (a) congruent and incongruent targets and (b) congruent and incongruent recollection at retrieval. Red = greater activity for congruent compared to incongruent; blue = greater activity for incongruent compared to congruent.

Table 2.

Congruency Activation

Congruent > Incongruent

Recollection
Region                                    BA   H     x     y     z     k     t
Caudate nucleus                           11   L   −12    14   −14   267  5.92
Superior frontal gyrus, medial            10   R     2    54     2   514  4.47
Middle temporal gyrus                     39   L   −54   −56    22   253  3.62
Parahippocampal gyrus*                    36   L   −26   −26   −16    52  4.75

Targets
Medial prefrontal gyrus                   10   R     8    58    12  2021  5.27
Middle temporal gyrus                     38   L   −60     2   −28   556  5.05
Angular gyrus                             39   L   −60   −54    36  1052  4.62
Middle cingulate gyrus                    31   R     4   −22    40   281  4.55
Middle temporal gyrus                     22   R    62   −34     6   385  4.41
Cuneus                                    18   L    −8   −92    24  1008  4.22

Incongruent > Congruent

Recollection
Superior parietal gyrus                    7   R    26   −66    50  3184  8.55
Superior parietal gyrus                    7   L   −24   −66    42  2877  6.70
Inferior frontal gyrus, opercular part    44   L   −40    10    24  1860  6.34
Middle frontal gyrus                       6   R    32     4    48   783  5.93
Inferior frontal gyrus, triangular part   46   R    50    40    14  1404  5.48
Fusiform gyrus                            19   R    32   −48    −6   294  5.24
Cerebellum                                     L   −10   −78   −36   580  4.94
Inferior temporal gyrus                   37   R    52   −54    −8   319  4.59
Inferior temporal gyrus                   37   L   −52   −64   −10   482  4.48

Targets
Superior/inferior parietal lobule    7/39/40   R    26   −66    50  2800  6.83
Superior/inferior parietal lobule    7/39/40   L   −20   −66    50  2019  6.21
Middle frontal gyrus                       6   L   −26     4    48   452  5.99
Inferior temporal gyrus                   37   R    48   −54   −12   515  5.72
Inferior frontal gyrus, triangular part   45   R    44    24    14  1510  5.62
Cerebellum                                18   L   −10   −80   −34   301  5.09
Middle frontal gyrus                       8   R    26    12    48   734  4.95
Middle cingulate gyrus                     8   R    12    18    38   438  4.57
Inferior temporal gyrus                   37   L   −48   −60    −8   246  4.51

BA: Brodmann's area; H: hemisphere (L: left; R: right); x, y, z: peak MNI coordinates; k: cluster extent; t: statistical t value; * corrected at MTL threshold.

MVPA.

We first examined whether there was an effect of configural congruency on classification of targets at retrieval, irrespective of the trial's encoding configuration (i.e., side-by-side or superimposed). No region showed significant above-chance classification, although accuracy was marginal in two regions: the IOC (M = 0.52), t(25) = 1.99, p = .057, and the posterior parietal cortex (M = 0.53), t(25) = 2.05, p = .051.

Based on our previous work showing that the configural context at encoding is a critical factor in classifying neural patterns of activation (Dennis et al., 2019), we next examined the effect of configural congruency on classification while taking the configural context at encoding into account. Specifically, we examined whether a classifier could accurately classify targets into the four conditions of interest above theoretical chance (25%) in each ROI. The classifier predicted retrieval condition above chance in the IOC (M = 0.31, t(25) = 8.12, p < .001), MOC (M = 0.36, t(25) = 10.86, p < .001), and posterior parietal cortex (M = 0.29, t(25) = 4.03, p < .001). The classifier was marginally above chance in the PFC (M = 0.27, t(25) = 1.89, p = .07) and significantly below chance in the PHC (M = 0.23, t(25) = −2.10, p = .047). No other region reached significance. Classification accuracy did not correlate with recollection rates in any of the foregoing regions. All multivariate results reported above were confirmed through permutation testing. See Table 3 for a depiction of all MVPA results.
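The test of per-subject accuracies against theoretical chance can be sketched as follows. The accuracy values are invented for illustration (the study had n = 26; the sketch uses five values only to keep the arithmetic visible), and SciPy's one-tailed option stands in for whatever statistics package was used.

```python
import numpy as np
from scipy import stats

def above_chance(accuracies, chance):
    """One-tailed one-sample t-test of per-subject classifier
    accuracies against theoretical chance (e.g., 0.25 for the
    four-way classification, 0.50 for the two-way)."""
    result = stats.ttest_1samp(accuracies, chance, alternative="greater")
    return result.statistic, result.pvalue

# Illustrative per-subject accuracies for one ROI; 4-way chance = 0.25
acc = np.array([0.31, 0.29, 0.33, 0.27, 0.30])
t, p = above_chance(acc, 0.25)
```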

Table 3.

Classifier accuracy at retrieval

Classifier Accuracy t p value
Combined (Chance = 0.50)
 IOC 0.523 2.00 0.057
 MOC 0.515 1.31 0.203
 Posterior Parietal Cortex 0.529 2.05 0.051
 HC 0.512 1.09 0.288
 PHC 0.485 −1.70 0.101
 PrC 0.496 −0.46 0.650
 PFC 0.516 1.352 0.189
Separate (Chance = 0.25)
 IOC 0.309 8.12 0.000
 MOC 0.36 10.86 0.000
 Posterior Parietal Cortex 0.293 4.03 0.000
 HC 0.246 −0.61 0.549
 PHC 0.233 −2.10 0.046
 PrC 0.246 −0.52 0.605
 PFC 0.272 1.89 0.071

Results of a one-sample t-test for classifier accuracy. Combined = combined congruent vs. combined incongruent targets. Separate = congruent side-by-side vs. congruent superimposed vs. incongruent side-by-side vs. incongruent superimposed. All df = 25. IOC = inferior occipital cortex; MOC = middle occipital cortex; posterior parietal cortex = angular gyrus and BA5/7; HC = hippocampus; PHC = parahippocampal cortex; PrC = perirhinal cortex; PFC = prefrontal cortex.

ERS.

We examined the effect of contextual congruency and configural context on neural pattern similarity for targets by computing Pearson's r correlation values for each target type (side-by-side congruent, superimposed congruent, side-by-side incongruent, and superimposed incongruent) between encoding and retrieval for each ROI. We then computed a repeated measures ANOVA to determine whether pattern similarity within each ROI significantly differed by target type, correcting for multiple comparisons across the four conditions within each ROI using the Bonferroni method (significance at p < .0125). There was a significant effect of target type within the IOC, F(3, 23) = 10.07, p < .001, and the MOC, F(3, 23) = 28.07, p < .001 (Figure 3). Pattern similarity did not significantly differ between target types in any other region, nor did the ERS of any target type within the IOC or MOC significantly predict or correlate with recollection.

Figure 3.

Figure 3.

Encoding-Retrieval Similarity for targets broken down by condition in the inferior and middle occipital cortex.

Paired samples t-tests were used to make post hoc comparisons between conditions in the IOC and MOC. Using the same corrected significance threshold as above, ERS within the IOC significantly differed for side-by-side congruent and side-by-side incongruent targets (t(26) = 3.97, p < .005), as well as superimposed congruent and side-by-side incongruent targets (t(26) = 4.02, p < .001). Within the MOC, ERS significantly differed for side-by-side congruent and side-by-side incongruent targets (t(26) = 7.88, p < .001), superimposed congruent and side-by-side incongruent targets (t(26) = 6.48, p < .001), side-by-side congruent and superimposed incongruent targets (t(26) = 5.24, p < .001), and superimposed congruent and superimposed incongruent targets (t(26) = 4.02, p < .001). See Table 4 for all pairwise comparisons and Figure 3 for a depiction of ERS results.
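The post hoc comparisons above follow a standard paired-samples design with a Bonferroni-corrected alpha. A sketch, using made-up per-subject values (not the study's data) chosen so the arithmetic is checkable:

```python
import numpy as np
from scipy import stats

def paired_test(cond_a, cond_b, n_comparisons=4):
    """Paired-samples t-test on per-subject ERS values for two
    conditions, flagged against a Bonferroni-corrected alpha
    (.05 / 4 = .0125, as in the text)."""
    t, p = stats.ttest_rel(cond_a, cond_b)
    return t, p, p < 0.05 / n_comparisons

# Toy per-subject ERS values for two conditions (n = 3 for brevity);
# with these numbers the paired t statistic is exactly 4.0.
a = np.array([1.0, 2.0, 4.0])
b = np.array([0.0, 1.0, 2.0])
t, p, sig = paired_test(a, b)
```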

Table 4.

Pairwise comparisons for ERS correlations

t p value
Inferior Occipital Cortex
 side-by-side congruent - superimposed congruent −1.70 0.102
 side-by-side congruent - side-by-side incongruent 3.97 0.001
 side-by-side congruent - superimposed incongruent 2.07 0.049
 superimposed congruent - side-by-side incongruent 4.02 0.000
 superimposed congruent - superimposed incongruent 2.58 0.016
 side-by-side incongruent - superimposed incongruent −2.50 0.019
Middle Occipital Cortex
 side-by-side congruent - superimposed congruent −1.71 0.100
 side-by-side congruent - side-by-side incongruent 7.88 0.000
 side-by-side congruent - superimposed incongruent 5.24 0.000
 superimposed congruent - side-by-side incongruent 6.48 0.000
 superimposed congruent - superimposed incongruent 4.54 0.000
 side-by-side incongruent - superimposed incongruent −1.59 0.123

Pairwise comparisons for ERS correlations within regions which produced a main effect of configural context.

Discussion

The focus of the current investigation was to examine the behavioral and neural differences associated with maintaining versus manipulating configural context when remembering associative pairs. Behaviorally, participants recollected, and also false alarmed to, significantly more associative pairs when the configural context was maintained between encoding and retrieval than when it was disrupted. Remembering congruent pairs was also associated with faster response times than remembering associates whose contextual congruency with encoding was disrupted. Further, manipulations of contextual congruency affected neural activation across the retrieval network. Specifically, viewing contextually congruent targets at retrieval was associated with greater activation within the visuospatial processing network (Kravitz et al., 2011), including right medial superior frontal gyrus, bilateral middle temporal gyrus, left angular gyrus, and left cuneus. Incongruent targets, compared to congruent targets, exhibited significantly greater activation across a frontoparietal monitoring network (Iidaka et al., 2006), including bilateral middle frontal gyrus, right inferior frontal gyrus, right middle cingulate gyrus, and bilateral superior/inferior parietal lobule. When examining the effect of configural context within recollection, recollection of congruent compared to incongruent pairs exhibited significantly greater activity in the left PHC, right medial superior frontal gyrus, left middle temporal gyrus, and left caudate nucleus. The reverse contrast showed that recollection of incongruent compared to congruent pairs again elicited greater activity across the frontoparietal monitoring network, including bilateral posterior parietal cortex, bilateral inferior frontal gyrus, and right middle frontal gyrus.
Finally, we also showed that both configural context and contextual congruency were critical determinants to characterizing patterns of neural activation within visual and parietal cortices. The aforementioned visual regions also had significantly different patterns of activity across encoding and retrieval for congruent and incongruent associative targets. Taken together, our results support our hypothesis that contextual congruency at retrieval affects both behavior and neural processing within associative memory.

Behaviorally, our results are consistent with the larger literature indicating that context is important in memory retrieval (Bar, 2004; Bar & Aminoff, 2003; DaPolito et al., 1972; Gruppuso et al., 2007; Mandler, 1980; Tibon et al., 2012). Specifically, participants exhibited lower recollection rates and slower reaction times for associations that were presented in a configural context that was incongruent with respect to how the pair was learned at encoding compared to pairs in which the configural context was consistent with that of encoding. Replicating our previous work examining contextual congruency (Overman et al., 2018), this suggests there is increased difficulty involved with retrieval of associative information when contextual congruency is disrupted between study and test. Collectively, our results add to the longstanding literature highlighting the benefits of context reinstatement to memory (DaPolito et al., 1972; Eich, 1985; Godden & Baddeley, 1975; Gruppuso et al., 2007; Hayes et al., 2007; Macken, 2002; Mandler, 1980; McKenzie & Tiberghien, 2004; Murnane et al., 1999; Nixon & Kanak, 1981; Smith, 1979, 1982, 1988; Thomson, 1972; Thomson & Tulving, 1970) by also identifying the benefits of configural context to associative memory.

Of interest to our findings regarding the effect of configural congruency on memory is the notion that all individual aspects of the original associative information are recapitulated from encoding to retrieval; it is merely the orientation and configuration of the information that differs. To this point, we posit that a disruption to configural congruency parallels work in the domains of working memory and mental rotation (Anguera et al., 2010; Cepeda & Kramer, 1999). That is, mental rotation is a process by which one needs to mentally re-orient a visual image until it matches a previously encountered state or target image (Cohen et al., 1996; Corballis, 1997; Podzebenko et al., 2002; Shepard & Metzler, 1971). Just as mental rotation tasks require participants to mentally rotate a given object back to its original state, we posit that a similar mental reconfiguration is required during associative memory retrieval when responding to associative pairs that are configurally incongruent with those shown at encoding. Importantly, this process may be fundamentally different from the need to recall a missing context, as in past work investigating the effect of context changes on item retrieval (Bar et al., 2008; Hayes et al., 2004, 2007). The current study extends the definition and role of context in memory retrieval to consider not just the presence and absence of specific configural elements, but the orientation of those elements and the influence of that orientation on retrieval success.

We extend these behavioral results to show that configural congruency also affects neural processing involved in associative retrieval. While our overall associative memory success results are consistent with typical associative retrieval studies showing recollection-related activation in the hippocampus, parahippocampal gyrus, medial superior frontal gyrus, anterior and posterior cingulate gyrus, angular gyrus, cuneus, and cerebellum (Dennis & McCormick-Huhn, 2018; Takashima et al., 2007; van Kesteren et al., 2010; Wang et al., 2014), we see that configural congruency modifies this activation. Extending our investigation of configural congruency to neural functioning, we found that the retrieval of associative pairs presented in an incongruent configural context was met with widespread activity across frontoparietal regions, including bilateral superior and inferior parietal cortex, bilateral inferior frontal gyrus, right middle frontal gyrus, and right fusiform gyrus, compared to those retrieved within a congruent context. Within the general retrieval literature, increases in frontoparietal activity have been associated with increases in task difficulty (Gould et al., 2003) and with a need for greater monitoring, decision-making, and evaluation (Achim & Lepage, 2005; Fellows & Farah, 2005; Ranganath & Paller, 2000; Rugg et al., 2002, 2003). We posit that the mismatch between the configural context displayed at retrieval and that stored in memory from encoding must be resolved through monitoring and evaluation processes prior to producing a memory response. Thus, consistent with the behavioral findings, the neuroimaging results also support the conclusion that the incongruent condition poses a greater cognitive load on individuals as they attempt to retrieve the associative link across individual items.

The fact that heightened activation throughout the frontoparietal monitoring network was observed not only for general associative retrieval (across all targets) but also for successful recollection in the current study suggests that this activity supports the successful recollection of incongruent associative pairs. In other words, if heightened activity had been observed for all targets but was not prominent during recollection, it would suggest that the observed activity supports search and monitoring processes engaged during any retrieval attempt. The similar pattern of heightened activity during both general retrieval and successful recollection suggests that recollection is the driving force behind the differences. As noted above, we suggested a parallel between mental rotation tasks and incongruent associative retrieval, as individuals are likely attempting to mentally rearrange the pair at retrieval to match its original configural context when responding to configurally incongruent trials. Supporting this interpretation, mental rotation tasks are also associated with increased activation across the same frontal and parietal regions observed in the current study (Christophel et al., 2015; Diwadkar et al., 2000; Gogos et al., 2010; Levin et al., 2005; see Zacks et al., 2007 for a review). Moreover, increased activation across the frontoparietal network is modulated as a function of the degree of rotation and has been shown to support successful behavior during mental rotation tasks (Gogos et al., 2010; Milivojevic et al., 2008). Additional work investigating the role of mental rotation in incongruent associative retrieval is needed to fully understand the meaning underlying the observed frontoparietal activation.

Despite the absence of univariate differences in overall levels of neural activation within visual cortices, multivariate analyses measuring differences in patterns of neural activity identified differences across both configural context and contextual congruency at retrieval across occipital ROIs. Specifically, considering both configural context at encoding and contextual congruency at retrieval, patterns of neural activity were distinguishable in both visual cortex, specifically the IOC and MOC, and posterior parietal cortex. Thus, our findings suggest neural patterns at retrieval are discriminable only when separated by both the configural context in which they were originally encoded and whether that specific context was maintained at retrieval. As noted in our Methods, our attempt to train a classifier to identify differences between incongruent and congruent retrieval configurations, irrespective of the configural context shown at encoding (mirroring the foregoing univariate analysis), resulted in no significant classification across our a priori ROIs. This is not altogether surprising, considering our classification results from encoding found that configural context influenced neural patterns of activation across the same visual regions (Dennis et al., 2019). The current finding is consistent with a larger literature showing that the occipital cortex is sensitive to the physical and categorical properties of visual stimuli (Dennis et al., 2019; Gibson, 1969; Harrison & Tong, 2009; Kanwisher et al., 1997; Kapadia et al., 2000), as well as with mental imagery paradigms finding that early visual cortices maintain information regarding the orientation of a visual image (Albers et al., 2013; Lee et al., 2012; Naselaris et al., 2015).

The fact that the posterior parietal cortex shows this same pattern of results is also consistent with a larger retrieval literature showing that the parietal cortex is responsible for processing and storing spatial information of visual stimuli (Cabeza et al., 2009; Crowe, Averbeck, & Chafee, 2010; Kesner, 2009; Ramanan & Bellana, 2019; Vilberg & Rugg, 2008). Considering that neural activity within the parietal cortex can also discriminate between retrieval conditions when accounting for configural context at encoding and subsequent contextual congruency, our results complement past work showing that the angular gyrus can discriminate patterns of activity elicited by two or more study conditions (Johnson et al., 2009; Kuhl & Chun, 2014). Specifically, this past work has shown that frontoparietal regions can discriminate between individual events within memory, part of which is due to the contextual reinstatement of the encoded episode.

Interestingly, while univariate contrasts showed greater activity across both frontal and parietal regions for incongruent configural contexts, multivariate analyses found discriminable neural patterns between all four retrieval conditions in parietal cortex, but not our frontal ROI. With respect to the role of each region in mental rotation tasks that have parallel demands to our retrieval task, it has been suggested that while the parietal lobe mediates the specific rotation aspects of the task (Alivisatos & Petrides, 1997; Carpenter et al., 1999; Cohen et al., 1996; Harris et al., 2000; Jordan et al., 2001; Koshino et al., 2005; Milivojevic et al., 2008; Podzebenko et al., 2002; Zacks, 2008), frontal regions underlie monitoring and comparison between the rotated image and the stored representation of the target image. Following from this literature, the current results would suggest that while the occipital and parietal cortex clearly represent differences in configural context and contextual congruency, the underlying evaluation processes engaged in by frontal cortices may be similar in nature across conditions (e.g., Christophel et al., 2015), albeit heightened for incongruent retrieval. Future work will be needed to further examine this dissociation with respect to contextual congruency in retrieval processes.

In addition to significant differences in neural activation patterns in the IOC and MOC at retrieval, our ERS analysis showed that patterns of neural activity were similar across encoding and retrieval in the aforementioned visual regions when considering configural context and configural reinstatement. These findings are indicative of greater recapitulation of visual processes and of the stored mental representation of associative pairs when associative targets are presented in the same configural context across both memory phases. Stronger recapitulation for congruent as opposed to incongruent trials supports transfer-appropriate processing (TAP) theory, which posits that 1) memories are represented in terms of the cognitive operations engaged by an event as it is initially processed, and 2) successful memory retrieval occurs when those earlier operations are recapitulated (Kolers, 1973; Morris et al., 1977; Rugg et al., 2008). Accordingly, TAP suggests that the effectiveness of a retrieval cue depends on the similarity between the cognitive operations engaged by the cue and those that occurred during study (Roediger & Challis, 1989; Roediger & Guynn, 1996). Previous empirical applications of this principle have shown that recapitulation, by way of greater neural pattern similarity between encoding and retrieval, is associated with better memory (e.g., Ritchey et al., 2013; Xue et al., 2010). The current results extend this work to show that recapitulation of neural activation within occipital cortices also underlies associative retrieval, and that these processes benefit from maintaining not only the reoccurrence of all individual items but also the contextual congruency of their configuration.
Interestingly, not only did we observe significantly greater correlations between encoding and retrieval activation patterns for congruent associations, but we also saw significant anticorrelations in our ERS analysis for incongruent targets in the IOC and MOC. This suggests that, despite the fact that the information in the associative pairs was retained from study to test, altering the configural context of the pairs from encoding to retrieval leads to unique neural patterns that systematically differ across memory stages (Fox et al., 2005). While disruptions of contextual congruency across memory phases clearly disrupt the recapitulation of neural patterns from encoding to retrieval, it is unclear whether greater deviation from encoding processes leads to greater detriments in behavior. Future work is needed to understand the behavioral consequences of such anticorrelations in neural activation.

Contrary to our predictions, we did not see any univariate or multivariate differences within the hippocampus. This runs counter to work by Giovanello, Schnyer, and Verfaellie (2009), who demonstrated greater activation in posterior hippocampus for congruent compared to incongruent word pairs, and greater activity in anterior hippocampus for the reverse contrast. This absence of hippocampal differences may be due to the fact that the previous study used words, whereas the present study used more salient face-scene associative pairs. Further, the incongruent condition in the previous study involved a reversal of the placement of the word pairs on the screen. As such, it could be that this switch induced a different overall interpretation of the word meanings by the participant, leading to more complex or varied meanings of the associative pair from encoding to retrieval. While past work has also found that activity in the hippocampus at encoding can be reactivated at retrieval for content-specific information for perceptually associated word pairs (Prince et al., 2005), our findings speak to the idea that contextual congruency may not impact hippocampal processing when the associative meaning is not disrupted, as is the case with objects or face-scene associations. In other words, the meaning-based mental representation of associative word pairs in Giovanello et al. (2009) may be more susceptible to disruption at retrieval by a change in configuration (e.g., word order) than the mental representation of a face-scene pair would be; an adjustment to the relative location of the face and scene likely has little impact on their interdependent meaning. Future studies might examine the effect of disrupting meaning within associative pairs on the neural processing supporting associative retrieval. Of note, if our threshold were lowered to that reported in the Giovanello study (p < .01 and 5 contiguous voxels), univariate differences would emerge. Thus, it may also be that the effect of contextual congruency along the long axis of the hippocampus is relatively small and emerges only at lower thresholds.

Future Directions

In light of the current findings, we suggest several future directions for elucidating the neural processes underlying the effect of configural context on associative retrieval. First, given the difficulties older adults face with associative memory (Naveh-Benjamin, 2000), especially when configural context is manipulated (Overman et al., 2018), examining the extent to which a change in configural context impacts the neural processing underlying their memory deficit would be highly beneficial for understanding whether older adults' errors arise from similar disruptions in TAP and ERS processing. Additionally, it would be interesting to examine whether manipulating configural context in other ways affects neural processing, such as by introducing a third retrieval condition in which the pair is presented in a novel configuration that does not mirror either of the encoding conditions. Last, since the current paradigm involved retrieving highly salient items whose associative meaning did not change as a function of the configural context, it may be interesting to explore whether the degree of stimulus complexity, or disruptions to the associative meaning across encoding and retrieval, impacts differences in visual and MTL processing during associative retrieval.

Conclusions

The current results extend prior work showing that configural context within associative memory is a critical determinant of memory success. In doing so, we expand the definition of context to include the orientation and configuration of associated pairs. The current study also demonstrates that configural context, as well as configural congruency between encoding and retrieval, are critical factors in determining the neural processes supporting associative memory retrieval. Mirroring work in the field of mental rotation, we showed that disruptions to the configural context between encoding and retrieval are met with increased neural activation across frontoparietal regions. We posit that this activation supports the need to reconfigure the associative pair to match the encoding configuration prior to making a memory decision. This increased processing is supported by behavioral findings of greater errors and increased RTs for incongruent compared to congruent associative retrieval. Additionally, patterns of neural activation at retrieval were distinguishable in occipital and parietal cortex only when both configural context and contextual congruency were considered. Finally, the ERS results support the behavioral findings and past work promoting transfer appropriate processing, showing greater neural recapitulation of activation patterns across encoding and retrieval with configural reinstatement.

Acknowledgements

We wish to thank Catherine Carpenter and Valeria Martinez Goodman for help with data collection and analyses, as well as Jordan Chamberlain and Daniel Elbich for support in analyses and comments on an earlier version of the paper. This work was supported by the NIH under Grant R15AG052903 awarded to Amy A. Overman & Nancy A. Dennis. Nancy A. Dennis was also supported in part by the NSF under Grant BCS1025709. Portions of the research in this article used the Color FERET (Facial Recognition Technology) database of facial images collected under the FERET program, sponsored by the Department of Defense Counterdrug Technology Development Program Office.

References

  1. Achim AM, Bertrand M-C, Montoya A, Malla AK, & Lepage M. (2007). Medial temporal lobe activations during associative memory encoding for arbitrary and semantically related object pairs. Brain Research, 1161, 46–55. [DOI] [PubMed] [Google Scholar]
  2. Achim AM, & Lepage M. (2005). Dorsolateral prefrontal cortex involvement in memory post-retrieval monitoring revealed in both item and associative recognition tests. Neuroimage, 24(4), 1113–1121. [DOI] [PubMed] [Google Scholar]
  3. Albers AM, Kok P, Toni I, Dijkerman HC, & De Lange FP (2013). Shared representations for working memory and mental imagery in early visual cortex. Current Biology, 23(15), 1427–1431. [DOI] [PubMed] [Google Scholar]
  4. Alivisatos B, & Petrides M. (1997). Functional activation of the human brain during mental rotation. Neuropsychologia, 35(2), 111–118. [DOI] [PubMed] [Google Scholar]
  5. Anguera JA, Reuter-Lorenz PA, Willingham DT, & Seidler RD (2010). Contributions of Spatial Working Memory to Visuomotor Learning. Journal of Cognitive Neuroscience, 22(9), 1917–1930. 10.1162/jocn.2009.21351 [DOI] [PubMed] [Google Scholar]
  6. Bar M, Aminoff E, & Schacter DL (2008). Scenes Unseen: The Parahippocampal Cortex Intrinsically Subserves Contextual Associations, Not Scenes or Places Per Se. Journal of Neuroscience, 28(34), 8539–8544. 10.1523/JNEUROSCI.0987-08.2008 [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Bar M. (2004). Visual objects in context. Nature Reviews Neuroscience, 5(8), 617–629. [DOI] [PubMed] [Google Scholar]
  8. Bar M, & Aminoff E. (2003). Cortical Analysis of Visual Context. Neuron, 38(2), 347–358. 10.1016/S0896-6273(03)00167-3 [DOI] [PubMed] [Google Scholar]
  9. Boldini A, Russo R, & Avons SE (2004). One process is not enough! A speed-accuracy tradeoff study of recognition memory. Psychonomic Bulletin & Review, 11(2), 353–361. [DOI] [PubMed] [Google Scholar]
  10. Bowman CR, Chamberlain JD, & Dennis NA (2019). Sensory Representations Supporting Memory Specificity: Age Effects on Behavioral and Neural Discriminability. The Journal of Neuroscience, 39(12), 2265–2275. 10.1523/JNEUROSCI.2022-18.2019 [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Brodt S, Gais S, Beck J, Erb M, Scheffler K, & Schönauer M. (2018). Fast track to the neocortex: A memory engram in the posterior parietal cortex. Science, 362(6418), 1045–1048. 10.1126/science.aau2528 [DOI] [PubMed] [Google Scholar]
  12. Cabeza R, Dolcos F, Graham R, & Nyberg L. (2002). Similarities and differences in the neural correlates of episodic memory retrieval and working memory. Neuroimage, 16(2), 317–330. [DOI] [PubMed] [Google Scholar]
  13. Cabeza R, & St Jacques P. (2007). Functional neuroimaging of autobiographical memory. Trends in Cognitive Sciences, 11(5), 219–227. [DOI] [PubMed] [Google Scholar]
  14. Carpenter PA, Just MA, Keller TA, Eddy W, & Thulborn K. (1999). Graded functional activation in the visuospatial system with the amount of task demand. Journal of Cognitive Neuroscience, 11(1), 9–24. [DOI] [PubMed] [Google Scholar]
  15. Cepeda NJ, & Kramer AF (1999). Strategic effects on object-based attentional selection. Acta Psychologica, 103(1), 1–19. 10.1016/S0001-6918(99)00021-9 [DOI] [PubMed] [Google Scholar]
  16. Christophel TB, Cichy RM, Hebart MN, & Haynes J-D (2015). Parietal and early visual cortices encode working memory content across mental transformations. NeuroImage, 106, 198–206. 10.1016/j.neuroimage.2014.11.018 [DOI] [PubMed] [Google Scholar]
  17. Cohen MS, Kosslyn SM, Breiter HC, DiGirolamo GJ, Thompson WL, Anderson AK, Bookheimer SY, Rosen BR, & Belliveau JW (1996). Changes in cortical activity during mental rotation: A mapping study using functional MRI. Brain, 119(1), 89–100. [DOI] [PubMed] [Google Scholar]
  18. Corballis MC (1997). Mental rotation and the right hemisphere. Brain and Language, 57(1), 100–121. [DOI] [PubMed] [Google Scholar]
  19. DaPolito F, Barker D, & Wiant J. (1972). The effects of contextual changes on component recognition. The American Journal of Psychology, 431–440. [Google Scholar]
  20. Davis SW, Dennis NA, Daselaar SM, Fleck MS, & Cabeza R. (2008). Qué PASA? The Posterior–Anterior Shift in Aging. Cerebral Cortex, 18(5), 1201–1209. 10.1093/cercor/bhm155 [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. De Brigard F, Langella S, Stanley ML, Castel AD, & Giovanello KS (2020). Age-related differences in recognition in associative memory. Aging, Neuropsychology, and Cognition, 27(2), 289–301. [DOI] [PubMed] [Google Scholar]
  22. Dennis NA, & McCormick-Huhn JM (2018). Item and associative memory decline in healthy aging. [Google Scholar]
  23. Dennis NA, Overman AA, Gerver CR, McGraw KE, Rowley MA, & Salerno JM (2019). Different types of associative encoding evoke differential processing in both younger and older adults: Evidence from univariate and multivariate analyses. Neuropsychologia, 135, 107240. [DOI] [PMC free article] [PubMed] [Google Scholar]
  24. Diwadkar VA, Carpenter PA, & Just MA (2000). Collaborative activity between parietal and dorso-lateral prefrontal cortex in dynamic spatial working memory revealed by fMRI. Neuroimage, 12(1), 85–99. [DOI] [PubMed] [Google Scholar]
  25. Egner T, Jamieson G, & Gruzelier J. (2005). Hypnosis decouples cognitive control from conflict monitoring processes of the frontal lobe. NeuroImage, 27(4), 969–978. 10.1016/j.neuroimage.2005.05.002 [DOI] [PubMed] [Google Scholar]
  26. Eich E. (1985). Context, memory, and integrated item/context imagery. Journal of Experimental Psychology: Learning, Memory, and Cognition, 11(4), 764–770. 10.1037/0278-7393.11.1-4.764 [DOI] [Google Scholar]
  27. Epstein RA, & Ward EJ (2010). How Reliable Are Visual Context Effects in the Parahippocampal Place Area? Cerebral Cortex, 20(2), 294–303. 10.1093/cercor/bhp099 [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Etzel JA, & Braver TS (2013). MVPA Permutation Schemes: Permutation Testing in the Land of Cross-Validation. Proceedings of the 2013 International Workshop on Pattern Recognition in Neuroimaging, 140–143. 10.1109/PRNI.2013.44 [DOI] [Google Scholar]
  29. Fellows LK, & Farah MJ (2005). Dissociable elements of human foresight: A role for the ventromedial frontal lobes in framing the future, but not in discounting future rewards. Neuropsychologia, 43(8), 1214–1221. 10.1016/j.neuropsychologia.2004.07.018 [DOI] [PubMed] [Google Scholar]
  30. Gaonkar B, & Davatzikos C. (2012). Deriving Statistical Significance Maps for SVM Based Image Classification and Group Comparisons. In Ayache N, Delingette H, Golland P, & Mori K. (Eds.), Medical Image Computing and Computer-Assisted Intervention – MICCAI 2012 (Vol. 7510, pp. 723–730). Springer; Berlin Heidelberg. 10.1007/978-3-642-33415-3_89 [DOI] [PMC free article] [PubMed] [Google Scholar]
  31. Gibson E. (1969). Principles of Perceptual Learning and Development. Science, 168(3934), 958–959. 10.1126/science.168.3934.958 [DOI] [Google Scholar]
  32. Giovanello KS, Schnyer D, & Verfaellie M. (2009). Distinct hippocampal regions make unique contributions to relational memory. Hippocampus, 19(2), 111–117. 10.1002/hipo.20491 [DOI] [PMC free article] [PubMed] [Google Scholar]
  33. Gisquet-Verrier P, & Riccio DC (2012). Memory reactivation effects independent of reconsolidation. Learning & Memory, 19(9), 401–409. [DOI] [PubMed] [Google Scholar]
  34. Godden DR, & Baddeley AD (1975). Context-dependent memory in two natural environments: On land and underwater. British Journal of Psychology, 66(3), 325–331. 10.1111/j.2044-8295.1975.tb01468.x [DOI] [Google Scholar]
  35. Gogos A, Gavrilescu M, Davison S, Searle K, Adams J, Rossell SL, Bell R, Davis SR, & Egan GF (2010). Greater superior than inferior parietal lobule activation with increasing rotation angle during mental rotation: An fMRI study. Neuropsychologia, 48(2), 529–535. [DOI] [PubMed] [Google Scholar]
  36. Goh JO, Siong SC, Park D, Gutchess A, Hebrank A, & Chee MW (2004). Cortical areas involved in object, background, and object-background processing revealed with functional magnetic resonance adaptation. Journal of Neuroscience, 24(45), 10223–10228. [DOI] [PMC free article] [PubMed] [Google Scholar]
  37. Gould RL, Brown RG, Owen AM, ffytche DH, & Howard RJ (2003). FMRI BOLD response to increasing task difficulty during successful paired associates learning. NeuroImage, 20(2), 1006–1019. 10.1016/S1053-8119(03)00365-3 [DOI] [PubMed] [Google Scholar]
  38. Gruppuso V, Lindsay DS, & Masson ME (2007). I’d know that face anywhere! Psychonomic Bulletin & Review, 14(6), 1085–1089. [DOI] [PubMed] [Google Scholar]
  39. Harris IM, Egan GF, Sonkkila C, Tochon-Danguy HJ, Paxinos G, & Watson JD (2000). Selective right parietal lobe activation during mental rotation: A parametric PET study. Brain, 123(1), 65–73. [DOI] [PubMed] [Google Scholar]
  40. Harrison SA, & Tong F. (2009). Decoding reveals the contents of visual working memory in early visual areas. Nature, 458(7238), 632–635. 10.1038/nature07832 [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Hayes SM, Nadel L, & Ryan L. (2007). The effect of scene context on episodic object recognition: Parahippocampal cortex mediates memory encoding and retrieval success. Hippocampus, 17(9), 873–889. 10.1002/hipo.20319 [DOI] [PMC free article] [PubMed] [Google Scholar]
  42. Hayes SM, Ryan L, Schnyer DM, & Nadel L. (2004). An fMRI Study of Episodic Memory: Retrieval of Object, Spatial, and Temporal Information. Behavioral Neuroscience, 118(5), 885–896. [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. Hofstetter C, Achaibou A, & Vuilleumier P. (2012). Reactivation of visual cortex during memory retrieval: Content specificity and emotional modulation. NeuroImage, 60(3), 1734–1745. 10.1016/j.neuroimage.2012.01.110 [DOI] [PubMed] [Google Scholar]
  44. Iidaka T, Matsumoto A, Nogawa J, Yamamoto Y, & Sadato N. (2006). Frontoparietal Network Involved in Successful Retrieval from Episodic Memory. Spatial and Temporal Analyses Using fMRI and ERP. Cerebral Cortex, 16(9), 1349–1360. 10.1093/cercor/bhl040 [DOI] [PubMed] [Google Scholar]
  45. Ivanoff J, Branning P, & Marois R. (2008). FMRI Evidence for a Dual Process Account of the Speed-Accuracy Tradeoff in Decision-Making. PLOS ONE, 3(7), e2635. 10.1371/journal.pone.0002635 [DOI] [PMC free article] [PubMed] [Google Scholar]
  46. Johnson JD, McDuff SGR, Rugg MD, & Norman KA (2009). Recollection, Familiarity, and Cortical Reinstatement: A Multivoxel Pattern Analysis. Neuron, 63(5), 697–708. 10.1016/j.neuron.2009.08.011 [DOI] [PMC free article] [PubMed] [Google Scholar]
  47. Johnson JD, & Rugg MD (2007). Recollection and the reinstatement of encoding-related cortical activity. Cerebral Cortex, 17(11), 2507–2515. [DOI] [PubMed] [Google Scholar]
  48. Jordan K, Heinze H-J, Lutz K, Kanowski M, & Jäncke L. (2001). Cortical activations during the mental rotation of different visual objects. Neuroimage, 13(1), 143–152. [DOI] [PubMed] [Google Scholar]
  49. Kanwisher N, McDermott J, & Chun MM (1997). The Fusiform Face Area: A Module in Human Extrastriate Cortex Specialized for Face Perception. 17(10), 10. [DOI] [PMC free article] [PubMed] [Google Scholar]
  50. Kapadia MK, Westheimer G, & Gilbert CD (2000). Spatial distribution of contextual interactions in primary visual cortex and in visual perception. Journal of Neurophysiology, 84(4), 2048–2062. 10.1152/jn.2000.84.4.2048 [DOI] [PubMed] [Google Scholar]
  51. Kim H. (2011). Neural activity that predicts subsequent memory and forgetting: A meta-analysis of 74 fMRI studies. Neuroimage, 54(3), 2446–2461. [DOI] [PubMed] [Google Scholar]
  52. Kolers PA (1973). Remembering operations. Memory & Cognition, 1(3), 347–355. [DOI] [PubMed] [Google Scholar]
  53. Koshino H, Carpenter PA, Keller TA, & Just MA (2005). Interactions between the dorsal and the ventral pathways in mental rotation: An fMRI study. Cognitive, Affective, & Behavioral Neuroscience, 5(1), 54–66. [DOI] [PubMed] [Google Scholar]
  54. Kravitz DJ, Saleem KS, Baker CI, & Mishkin M. (2011). A new neural framework for visuospatial processing. Nature Reviews Neuroscience, 12(4), 217–230. 10.1038/nrn3008 [DOI] [PMC free article] [PubMed] [Google Scholar]
  55. Kriegeskorte N, Goebel R, & Bandettini P. (2006). Information-based functional brain mapping. Proceedings of the National Academy of Sciences, 103(10), 3863–3868. 10.1073/pnas.0600244103 [DOI] [PMC free article] [PubMed] [Google Scholar]
  56. Kuhl BA, & Chun MM (2014). Successful Remembering Elicits Event-Specific Activity Patterns in Lateral Parietal Cortex. Journal of Neuroscience, 34(23), 8051–8060. 10.1523/JNEUROSCI.4328-13.2014 [DOI] [PMC free article] [PubMed] [Google Scholar]
  57. Lee S-H, Kravitz DJ, & Baker CI (2012). Disentangling visual imagery and perception of real-world objects. Neuroimage, 59(4), 4064–4073. [DOI] [PMC free article] [PubMed] [Google Scholar]
  58. Levin SL, Mohamed FB, & Platek SM (2005). Common ground for spatial cognition? A behavioral and fMRI study of sex differences in mental rotation and spatial working memory. Evolutionary Psychology, 3(1), 147470490500300130. [Google Scholar]
  59. Macken WJ (2002). Environmental context and recognition: The role of recollection and familiarity. Journal of Experimental Psychology: Learning, Memory, and Cognition, 28(1), 153. [DOI] [PubMed] [Google Scholar]
  60. Maillet D, & Rajah MN (2014). Age-related differences in brain activity in the subsequent memory paradigm: A meta-analysis. Neuroscience & Biobehavioral Reviews, 45, 246–257. [DOI] [PubMed] [Google Scholar]
  61. Mandler G. (1980). Recognizing: The judgment of previous occurrence. Psychological Review, 87(3), 252. [Google Scholar]
  62. Mayes A, Montaldi D, & Migo E. (2007). Associative memory and the medial temporal lobes. Trends in Cognitive Sciences, 11(3), 126–135. [DOI] [PubMed] [Google Scholar]
  63. McDermott KB, Jones TC, Petersen SE, Lageman SK, & Roediger HL (2000). Retrieval Success is Accompanied by Enhanced Activation in Anterior Prefrontal Cortex During Recognition Memory: An Event-Related fMRI Study. Journal of Cognitive Neuroscience, 12(6), 965–976. 10.1162/08989290051137503 [DOI] [PubMed] [Google Scholar]
  64. McDermott KB, Szpunar KK, & Christ SE (2009). Laboratory-based and autobiographical retrieval tasks differ substantially in their neural substrates. Neuropsychologia, 47(11), 2290–2298. [DOI] [PubMed] [Google Scholar]
  65. McKenzie WA, & Tiberghien G. (2004). Context effects in recognition memory: The role of familiarity and recollection. Consciousness and Cognition, 13(1), 20–38. [DOI] [PubMed] [Google Scholar]
  66. Milivojevic B, Hamm JP, & Corballis MC (2008). Functional Neuroanatomy of Mental Rotation. Journal of Cognitive Neuroscience, 21(5), 945–959. 10.1162/jocn.2009.21085 [DOI] [PubMed] [Google Scholar]
  67. Morris CD, Bransford JD, & Franks JJ (1977). Levels of Processing Versus Transfer Appropriate Processing. Journal of Verbal Learning and Verbal Behavior, 16, 519–533. [Google Scholar]
  68. Mulligan N, & Hirshman E. (1995). Speed-accuracy trade-offs and the dual process model of recognition memory. Journal of Memory and Language, 34(1), 1–18. [Google Scholar]
  69. Mumford JA, Turner BO, Ashby FG, & Poldrack RA (2012). Deconvolving BOLD activation in event-related designs for multivoxel pattern classification analyses. NeuroImage, 59(3), 2636–2643. 10.1016/j.neuroimage.2011.08.076 [DOI] [PMC free article] [PubMed] [Google Scholar]
  70. Murnane K, Phelps MP, & Malmberg K. (1999). Context-Dependent Recognition Memory: The ICE Theory. 12(4), 403–415. [DOI] [PubMed] [Google Scholar]
  71. Nairne JS (2002). Remembering over the short-term: The case against the standard model. Annual Review of Psychology, 53(1), 53–81. [DOI] [PubMed] [Google Scholar]
  72. Naselaris T, Olman CA, Stansbury DE, Ugurbil K, & Gallant JL (2015). A voxel-wise encoding model for early visual areas decodes mental images of remembered scenes. Neuroimage, 105, 215–228. [DOI] [PMC free article] [PubMed] [Google Scholar]
  73. Naveh-Benjamin M, Guez J, Kilb A, & Reedy S. (2004). The associative memory deficit of older adults: Further support using face-name associations. Psychology and Aging, 19(3), 541. [DOI] [PubMed] [Google Scholar]
  74. Nixon SJ, & Kanak NJ (1981). The interactive effects of instructional set and environmental context changes on the serial position effect. Bulletin of the Psychonomic Society, 18(5), 237–240. [Google Scholar]
  75. Norman KA, & O’Reilly RC (2003). Modeling hippocampal and neocortical contributions to recognition memory: A complementary-learning-systems approach. Psychological Review, 110(4), 611. [DOI] [PubMed] [Google Scholar]
  76. Oosterhof NN, Connolly AC, & Haxby JV (2016). CoSMoMVPA: Multi-modal multivariate pattern analysis of neuroimaging data in Matlab/GNU Octave. Frontiers in Neuroinformatics, 10, 27. [DOI] [PMC free article] [PubMed] [Google Scholar]
  77. Overman AA, McCormick-Huhn JM, Dennis NA, Salerno JM, & Giglio AP (2018). Older adults’ associative memory is modified by manner of presentation at encoding and retrieval. Psychology and Aging, 33(1), 82–92. 10.1037/pag0000215 [DOI] [PMC free article] [PubMed] [Google Scholar]
  78. Philiastides MG, Ratcliff R, & Sajda P. (2006). Neural Representation of Task Difficulty and Decision Making during Perceptual Categorization: A Timing Diagram. Journal of Neuroscience, 26(35), 8965–8975. 10.1523/JNEUROSCI.1655-06.2006 [DOI] [PMC free article] [PubMed] [Google Scholar]
  79. Pihlajamäki M, Tanila H, Könönen M, Hänninen T, Hämäläinen A, Soininen H, & Aronen HJ (2004). Visual presentation of novel objects and new spatial arrangements of objects differentially activates the medial temporal lobe subareas in humans. European Journal of Neuroscience, 19(7), 1939–1949. [DOI] [PubMed] [Google Scholar]
  80. Podzebenko K, Egan GF, & Watson JD (2002). Widespread dorsal stream activation during a parametric mental rotation task, revealed with functional magnetic resonance imaging. Neuroimage, 15(3), 547–558. [DOI] [PubMed] [Google Scholar]
  81. Polyn SM, Natu VS, Cohen JD, & Norman KA (2005). Category-Specific Cortical Activity Precedes Retrieval During Memory Search. Science, 310(5756), 1963–1966. 10.1126/science.1117645 [DOI] [PubMed] [Google Scholar]
  82. Poppenk J, McIntosh AR, Craik FIM, & Moscovitch M. (2010). Past Experience Modulates the Neural Mechanisms of Episodic Memory Formation. Journal of Neuroscience, 30(13), 4707–4716. 10.1523/JNEUROSCI.5466-09.2010 [DOI] [PMC free article] [PubMed] [Google Scholar]
  83. Prince SE, Daselaar SM, & Cabeza R. (2005). Neural Correlates of Relational Memory: Successful Encoding and Retrieval of Semantic and Perceptual Associations. Journal of Neuroscience, 25(5), 1203–1210. 10.1523/JNEUROSCI.2540-04.2005 [DOI] [PMC free article] [PubMed] [Google Scholar]
  84. Ranganath C, & Paller KA (2000). Neural correlates of memory retrieval and evaluation. Cognitive Brain Research, 9(2), 209–222. 10.1016/S0926-6410(99)00048-8 [DOI] [PubMed] [Google Scholar]
  85. Ranganath C, & Ritchey M. (2012). Two cortical systems for memory-guided behaviour. Nature Reviews Neuroscience, 13(10), 713–726. [DOI] [PubMed] [Google Scholar]
  86. Rasch B, & Born J. (2007). Maintaining memories by reactivation. Current Opinion in Neurobiology, 17(6), 698–703. [DOI] [PubMed] [Google Scholar]
  87. Reed AV (1973). Speed-accuracy trade-off in recognition memory. Science, 181(4099), 574–576. [DOI] [PubMed] [Google Scholar]
  88. Rhodes MG, Castel AD, & Jacoby LL (2008). Associative recognition of face pairs by younger and older adults: The role of familiarity-based processing. Psychology and Aging, 23(2), 239. [DOI] [PubMed] [Google Scholar]
  89. Ritchey M, Wing EA, LaBar KS, & Cabeza R. (2013). Neural similarity between encoding and retrieval is related to memory via hippocampal interactions. Cerebral Cortex, 23(12), 2818–2828. [DOI] [PMC free article] [PubMed] [Google Scholar]
  90. Roediger HL, & Challis BH (1989). Hypermnesia: Improvements in recall with repeated testing. [Google Scholar]
  91. Roediger HL, & Guynn MJ (1996). Retrieval processes. In Memory (pp. 197–236). Elsevier. [Google Scholar]
  92. Rugg MD, Henson RNA, & Robb WGK (2003). Neural correlates of retrieval processing in the prefrontal cortex during recognition and exclusion tasks. Neuropsychologia, 41(1), 40–52. 10.1016/S0028-3932(02)00129-X [DOI] [PubMed] [Google Scholar]
  93. Rugg MD, Johnson JD, Park H, & Uncapher MR (2008). Chapter 21 Encoding-retrieval overlap in human episodic memory: A functional neuroimaging perspective. In Sossin WS, Lacaille J-C, Castellucci VF, & Belleville S. (Eds.), Progress in Brain Research (Vol. 169, pp. 339–352). Elsevier. 10.1016/S0079-6123(07)00021-0 [DOI] [PubMed] [Google Scholar]
  94. Rugg MD, Otten LJ, & Henson RNA (2002). The neural basis of episodic memory: Evidence from functional neuroimaging. Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences, 357(1424), 1097–1110. 10.1098/rstb.2002.1102 [DOI] [PMC free article] [PubMed] [Google Scholar]
  95. Schacter DL, Verfaellie M, & Pradere D. (1996). The Neuropsychology of Memory Illusions: False Recall and Recognition in Amnesic Patients. Journal of Memory and Language, 35(2), 319–334. 10.1006/jmla.1996.0018 [DOI] [Google Scholar]
  96. Sestieri C, Shulman GL, & Corbetta M. (2017). The contribution of the human posterior parietal cortex to episodic memory. Nature Reviews Neuroscience, 18(3), 183–192. 10.1038/nrn.2017.6 [DOI] [PMC free article] [PubMed] [Google Scholar]
  97. Shepard RN, & Metzler J. (1971). Mental rotation of three-dimensional objects. Science, 171(3972), 701–703. [DOI] [PubMed] [Google Scholar]
  98. Siegel AL, & Castel AD (2018). Memory for important item-location associations in younger and older adults. Psychology and Aging, 33(1), 30. [DOI] [PMC free article] [PubMed] [Google Scholar]
  99. Slotnick SD (2004). Visual Memory and Visual Perception Recruit Common Neural Substrates. Behavioral and Cognitive Neuroscience Reviews, 3(4), 207–221. 10.1177/1534582304274070 [DOI] [PubMed] [Google Scholar]
  100. Smith SM (1979). Remembering in and out of context. Journal of Experimental Psychology: Human Learning and Memory, 5(5), 460. [Google Scholar]
  101. Smith SM (1982). Enhancement of recall using multiple environmental contexts during learning. Memory & Cognition, 10(5), 405–412. 10.3758/BF03197642 [DOI] [PubMed] [Google Scholar]
  102. Smith SM (1988). Environmental context—Dependent memory. In Memory in context: Context in memory (pp. 13–34). John Wiley & Sons. [Google Scholar]
  103. Spreng RN, Mar RA, & Kim AS (2009). The common neural basis of autobiographical memory, prospection, navigation, theory of mind, and the default mode: A quantitative meta-analysis. Journal of Cognitive Neuroscience, 21(3), 489–510. [DOI] [PubMed] [Google Scholar]
  104. Stelzer J, Chen Y, & Turner R. (2013). Statistical inference and multiple testing correction in classification-based multi-voxel pattern analysis (MVPA): Random permutations and cluster size control. NeuroImage, 65, 69–82. 10.1016/j.neuroimage.2012.09.063 [DOI] [PubMed] [Google Scholar]
  105. Svoboda E, McKinnon MC, & Levine B. (2006). The functional neuroanatomy of autobiographical memory: A meta-analysis. Neuropsychologia, 44(12), 2189–2208. [DOI] [PMC free article] [PubMed] [Google Scholar]
  106. Takashima A, Nieuwenhuis IL, Rijpkema M, Petersson KM, Jensen O, & Fernández G. (2007). Memory trace stabilization leads to large-scale changes in the retrieval network: A functional MRI study on associative memory. Learning & Memory, 14(7), 472–479. [DOI] [PMC free article] [PubMed] [Google Scholar]
  107. Thomson DM (1972). Context Effects in Recognition Memory. Journal of Verbal Learning and Verbal Behavior, 11(4), 497–511. [Google Scholar]
  108. Thomson DM, & Tulving E. (1970). Associative encoding and retrieval: Weak and strong cues. Journal of Experimental Psychology, 86(2), 255. [Google Scholar]
  109. Tiberghien G, & Cauzinille E. (1979). Pre-decision and conditional search in long-term recognition memory. Acta Psychologica, 43(4), 329–343. [Google Scholar]
  110. Tibon R, Vakil E, Goldstein A, & Levy DA (2012). Unitization and temporality in associative memory: Evidence from modulation of context effects. Journal of Memory and Language, 67(1), 93–105. [Google Scholar]
  111. Tulving E, & Thomson DM (1973). Encoding specificity and retrieval processes in episodic memory. Psychological Review, 80(5), 352–373. 10.1037/h0020071 [DOI] [Google Scholar]
  112. Vaidya CJ, Zhao M, Desmond JE, & Gabrieli JD (2002). Evidence for cortical encoding specificity in episodic memory: Memory-induced re-activation of picture processing areas. Neuropsychologia, 40(12), 2136–2143. [DOI] [PubMed] [Google Scholar]
  113. van Kesteren MT, Rijpkema M, Ruiter DJ, & Fernández G. (2010). Retrieval of associative information congruent with prior knowledge is related to increased medial prefrontal activity and connectivity. Journal of Neuroscience, 30(47), 15888–15894. [DOI] [PMC free article] [PubMed] [Google Scholar]
  114. Vilberg KL, & Rugg MD (2007). Dissociation of the neural correlates of recognition memory according to familiarity, recollection, and amount of recollected information. Neuropsychologia, 45(10), 2216–2225. [DOI] [PMC free article] [PubMed] [Google Scholar]
  115. Von Zerssen GC, Mecklinger A, Opitz B, & Von Cramon DY (2001). Conscious recollection and illusory recognition: An event-related fMRI study. European Journal of Neuroscience, 13(11), 2148–2156. [DOI] [PubMed] [Google Scholar]
  116. Wagner AD, Shannon BJ, Kahn I, & Buckner RL (2005). Parietal lobe contributions to episodic memory retrieval. Trends in Cognitive Sciences, 9(9), 445–453. [DOI] [PubMed] [Google Scholar]
  117. Waldhauser GT, Braun V, & Hanslmayr S. (2016). Episodic Memory Retrieval Functionally Relies on Very Rapid Reactivation of Sensory Information. Journal of Neuroscience, 36(1), 251–260. 10.1523/JNEUROSCI.2101-15.2016 [DOI] [PMC free article] [PubMed] [Google Scholar]
  118. Wang JX, Rogers LM, Gross EZ, Ryals AJ, Dokucu ME, Brandstatt KL, Hermiller MS, & Voss JL (2014). Targeted enhancement of cortical-hippocampal brain networks and associative memory. Science, 345(6200), 1054–1057. [DOI] [PMC free article] [PubMed] [Google Scholar]
  119. Wheeler ME, Petersen SE, & Buckner RL (2000). Memory’s echo: Vivid remembering reactivates sensory-specific cortex. Proceedings of the National Academy of Sciences, 97(20), 11125–11129. 10.1073/pnas.97.20.11125 [DOI] [PMC free article] [PubMed] [Google Scholar]
  120. Xue G, Dong Q, Chen C, Lu Z, Mumford JA, & Poldrack RA (2010). Greater Neural Pattern Similarity Across Repetitions Is Associated with Better Memory. Science, 330(6000), 97–101. 10.1126/science.1193125 [DOI] [PMC free article] [PubMed] [Google Scholar]
  121. Xue G, Dong Q, Chen C, Lu Z-L, Mumford JA, & Poldrack RA (2013). Complementary Role of Frontoparietal Activity and Cortical Pattern Similarity in Successful Episodic Memory Encoding. Cerebral Cortex, 23(7), 1562–1571. 10.1093/cercor/bhs143 [DOI] [PMC free article] [PubMed] [Google Scholar]
  122. Zacks JM (2008). Neuroimaging studies of mental rotation: A meta-analysis and review. Journal of Cognitive Neuroscience, 20(1), 1–19. [DOI] [PubMed] [Google Scholar]
  123. Zacks JM, Speer NK, Swallow KM, Braver TS, & Reynolds JR (2007). Event perception: A mind-brain perspective. Psychological Bulletin, 133(2), 273. [DOI] [PMC free article] [PubMed] [Google Scholar]