Author manuscript; available in PMC: 2018 Jul 15.
Published in final edited form as: Brain Res. 2017 May 8;1667:46–54. doi: 10.1016/j.brainres.2017.05.003

Event-related Potentials during Encoding: Comparing Unitization to Relational Processing

Hsiao-Wei Tu 1, Emma E Alty 1, Rachel A Diana 1
PMCID: PMC5543772  NIHMSID: NIHMS878084  PMID: 28495307

Abstract

Context details are typically encoded into episodic memory via arbitrary associations to the relevant item, known as relational binding. Subsequent retrieval of those context details is primarily supported by recollection. Research suggests that context retrieval can rely on familiarity if the context details are “unitized” and thereby encoded as features of the item itself in a single new representation. With most investigations into unitization focusing on the contributions of familiarity and recollection during retrieval, little is known about unitization during encoding. In an effort to begin understanding unitization as an encoding process, we used event-related potentials to monitor brain activity while participants were instructed to encode words with color information using relational association or unitization. Results showed that unitization-based encoding elicited significantly more negative potentials in the left parietal region than relational encoding during presentation of the second segment of strategy-specific sentences. This difference continued through presentation of the third sentence segment, becoming less lateralized, and ended before the final two segments were presented. During the mental imagery period, unitization-based encoding elicited significantly more positive potentials than relational encoding in the first 200 ms centrally and from 400 through 1000 ms in left fronto-temporal and parieto-occipital regions. Our findings indicate that unitization and relational processing diverged at approximately the time that the context item was presented in the relational condition. During mental imagery, unitization diverged from relational processing immediately, suggesting that unitization affected the nature of the item representation, and possibly the brain regions involved, during memory encoding.

Keywords: source recognition, ERP, familiarity, recollection, encoding strategy, unitization

1. Introduction

Recognition memory is the ability to identify an object or an event encountered previously. It is supported by both familiarity, which is based on the quantitative signal strength of an item, and recollection, which is based on subjective judgments of the type of information retrieved about the prior event (for review see Yonelinas, 2002). Typical encoding processes allow familiarity, recollection, or a combination of the two to support single item retrieval, whereas retrieval of the association between two random items relies strongly on recollection (Hockley & Consoli, 1999; Jacoby, 1991; Yonelinas, 1997). Likewise, memory for the arbitrary association between an item and its context details is strongly supported by recollection under typical encoding conditions, a process termed “relational encoding” (Cohen & Eichenbaum, 1993; Davachi, 2006). However, if arbitrary item-item or item-context pairs are encoded as a single meaningful and cohesive representation through a strategy termed “unitization” (Graf & Schacter, 1989; Yonelinas, Kroll, Dobbins, & Soltani, 1999), the associations can be recognized via familiarity processes (Diana, Yonelinas, & Ranganath, 2008; Giovanello, Keane, & Verfaellie, 2006). For example, imagine that you store your USB drive and your grocery “rewards” card on your key chain. You might remember the association between those two separate items as an arbitrary, non-meaningful relationship (relational processing), or as a unitized representation of “things on my keychain,” which includes information about both items within a single representation1.

Unitization and relational association have been investigated extensively using behavioral paradigms (e.g. Diana et al., 2008; Quamme, Yonelinas, & Norman, 2007; Tu & Diana, 2016) and functional magnetic resonance imaging both at encoding (e.g. Davachi, Mitchell, & Wagner, 2003; Staresina & Davachi, 2006) and at retrieval (e.g. Bader, Opitz, Reith, & Mecklinger, 2014; Ford, Verfaellie, & Giovanello, 2010). In these studies, participants have typically been instructed to make random associations between two words (or a word and its background color) on “non-unitized” trials and to imagine a new meaning for a combination of the two items (or a meaningful combination of the item with its context details) on “unitized” trials. Increased activity in the perirhinal cortex (PRC) has been associated with encoding of unitized word pairs when compared to relationally-bound pairs (Haskins, Yonelinas, Quamme, & Ranganath, 2008; Staresina & Davachi, 2010), which is consistent with the finding of PRC involvement during subsequent retrieval of unitized information (Diana, Yonelinas, & Ranganath, 2007; Ford et al., 2010). On the other hand, hippocampal activation was greater during relational encoding as compared to unitized encoding (Davachi et al., 2003), and activation of both parahippocampal cortex (PHC) and the hippocampus was correlated with recollection-based retrieval of relationally bound, non-unitized, context details (Ranganath et al., 2004; Weis et al., 2004).

Event-related potentials (ERPs) have also been adopted to provide information about the timecourse of unitized or relational processing during retrieval, with participants being asked to recognize context details encountered earlier (see review in Rugg & Curran, 2007). The FN400, an early-onset (300–500 ms) bilateral frontal component, has been correlated with familiarity-based recognition in item memory (Curran, 2000) and was modulated by unitized word pair retrieval (e.g., traffic-jam) rather than retrieval of semantically-associated word pairs (e.g., bread-cereal) (Rhodes & Donaldson, 2007). This component has also been interpreted as a correlate of conceptual or semantic processing (Stróżak, Abedzadeh, & Curran, 2016; Voss & Federmeier, 2011; Voss & Paller, 2009). The late positive component (LPC), a late-onset (400–800 ms) parietal component, has been associated with recollection-based recognition in item memory (Curran, 2000) and was modulated by retrieval of word pairs in both conditions, regardless of the level of unitization (Rhodes & Donaldson, 2007). Similar ERP effects with delayed latencies were found in a paradigm that tested item-context unitization rather than item-item unitization (Diana, Van den Boom, Yonelinas, & Ranganath, 2011). Participants were instructed to adopt either a “high unitization” strategy or a “low unitization” strategy to encode individual words and their associated background colors. During retrieval, participants showed a parietally-distributed positivity correlated with recollection-based source memory in both high and low unitization trials, whereas a frontally-distributed positivity was only correlated with high unitization trials, indicating the contribution of familiarity to source recognition.

Although previous studies have demonstrated dissociable ERP responses when retrieving unitized and non-unitized information, ERP techniques have not been used to examine the temporal characteristics of unitization-based encoding as compared to relational encoding. One difficulty in examining this question is the challenge of pinpointing when unitization actually occurs during encoding. Unitized encoding is defined by re-conceptualizing previously unrelated items into a single, unified representation. In item unitization, participants have been required to unitize two random words by generating a reason why/how they could be presented together as a compound word (e.g. Quamme et al., 2007). In item-context unitization, participants have been required to imagine the item as if it were the same color as the background and create a meaningful explanation for that imagery (Staresina & Davachi, 2006; Tu & Diana, 2016). Both processes require several seconds, making it difficult to precisely measure the temporal correlates elicited by unitization.

The current study used a unique procedure that allowed us to time-lock the unitization/relational scenarios and unitization/relational mental imagery with ERP recording. Specifically, the meaningful explanation for the unitized representation was provided by the experimenter, via a sentence, thus removing the additional creative demands present in some previous unitization paradigms. We also presented the sentences in segments that were visible for 800 ms each, rather than presenting a complete sentence and being unable to determine when the participant finished reading the scenario. Pilot data indicated that this segmentation-based presentation forced participants to read and collect sentence fragments before they could begin imagining the described scene and thereby begin unitizing the to-be-remembered word and the corresponding color. This procedure was then matched to the relational encoding control condition, with sentences describing an arbitrary relationship also presented as segments. We hypothesized that unitization and relational associations elicit different cognitive processing and therefore different underlying neural responses during encoding, which leads to qualitative differences in the representations created in episodic memory. In particular, we expected that encoding effects in an ERP paradigm would be similar to previous ERP findings at retrieval. That is, early frontal effects, which predict familiarity-based source recognition, should occur when participants were unitizing an item and its corresponding color, whereas late parietal effects, which predict recollection-based source recognition, should appear during both unitization and arbitrary association. The particular time period during which differences between unitized and relational encoding occur will provide insight into the stage of processing at which unitization diverges from relational encoding.

2. Results

2.1 Behavioral results

Participants’ measured d’ scores for the source memory task were significantly higher in the unitized condition (M = 1.44, SD = .44) than in the relational condition (M = 0.78, SD = .46), paired t(25) = 8.42, p < .001, dz = 1.62. This difference was also evident in higher vividness ratings for the unitized encoding sentences (M = 2.47 out of 3), compared to the association sentences (M = 2.36 out of 3), Wilcoxon signed-rank test Z = 3.61, p < .001, dz = .71. There was no difference between recollection estimates for green and recollection estimates for red in either the unitized condition, paired t(25) = 0.21, p = .83, or the relational condition, paired t(25) = .87, p = .39. Therefore, the red and green recollection estimates were averaged for further analyses.
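The measured d′ reported above is the standard signal-detection accuracy measure: the difference between the z-transformed hit rate and false-alarm rate. As a minimal illustration (this is not the authors' analysis code, and the rates below are invented for the example, not taken from the reported data):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection accuracy: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates for one participant in one condition:
print(round(d_prime(0.80, 0.30), 2))  # → 1.37
```

In practice, extreme rates of 0 or 1 are typically adjusted (e.g., with a log-linear correction) before the z-transform, since the inverse CDF is undefined at those values.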

In addition, behavioral responses were used to create receiver operating characteristic (ROC) curves and individual ROC curves were fit with the dual-process signal detection (DPSD) model (Yonelinas, 1999). The DPSD model fits provided estimates of the contribution from familiarity and recollection to each individual participant’s memory judgments (see Yonelinas, 1999 for details of the model). This familiarity estimate in the DPSD model is known as d’. We will use the term “familiarity estimate” rather than d’ to avoid confusion with the signal detection measure of accuracy described above. This model has been used to fit observed ROCs in many recognition paradigms, including source memory paradigms (e.g. Diana et al., 2008; Tu & Diana, 2016; for a review see Yonelinas & Parks, 2007). The DPSD model proposes that recollection is a threshold process that supports high confidence responses, whereas familiarity is a signal detection process contributing to a full range of confidence responses. Thus, if source memory decisions are based on recollection, the model predicts linear source ROCs in probability space. If source memory decisions are based on familiarity, the model predicts curvilinear source ROCs in probability space (Yonelinas, 1999).
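The linear-versus-curvilinear prediction can be made concrete by generating the DPSD model's predicted ROC points. The sketch below is an illustrative reconstruction of the standard Yonelinas (1999) equations, not code from this study; the criterion values are arbitrary:

```python
from statistics import NormalDist

phi = NormalDist().cdf  # standard-normal CDF

def dpsd_roc(R, d, criteria):
    """DPSD predicted ROC points: recollection R is a threshold process,
    familiarity d is an equal-variance signal-detection process."""
    points = []
    for c in criteria:
        fa = phi(-c)                    # false-alarm rate at criterion c
        hit = R + (1 - R) * phi(d - c)  # recollection plus residual familiarity
        points.append((fa, hit))
    return points

criteria = [1.5, 1.0, 0.5, 0.0, -0.5]
pure_recollection = dpsd_roc(R=0.5, d=0.0, criteria=criteria)  # linear ROC
pure_familiarity = dpsd_roc(R=0.0, d=1.5, criteria=criteria)   # curvilinear ROC
```

With familiarity set to zero, the predicted points fall on a straight line with intercept R and slope (1 − R); with recollection set to zero, they trace the curvilinear equal-variance signal-detection ROC. Fitting the model reverses this: R and the familiarity estimate are adjusted until the predicted points best match each participant's observed ROC.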

Observed ROC curve aggregates are shown in Figure 2A with the DPSD model fits overlaid. The ROC curve in the unitized condition was, on average, more curvilinear than that in the relational condition. This difference can also be seen in the average familiarity estimates (Figure 2B, calculated based on DPSD model fits to individual observed ROC curves, with the parameter estimates being averaged across participants), which indicated a larger familiarity estimate for the unitized condition than the relational condition. Statistically, both familiarity estimates and recollection estimates were significantly higher in the unitized condition than in the non-unitized condition, familiarity: paired t(25) = 4.40, p < .001, dz = .86; recollection: paired t(25) = 4.04, p < .001, dz = .79.

Figure 2.

Figure 2

The aggregate ROC curves (A) and DPSD model estimates (B) in the unitized and relational conditions. Note that recollection estimates are a proportion of responses, whereas familiarity estimates are in arbitrary units (only interpretable relative to one another). Error bars indicate standard errors.

2.2 ERP results

A 3-way repeated measures ANOVA (20 time bins × 32 electrodes × 2 conditions) was used to examine the sentence presentation phase of encoding in 200 ms intervals.2 This analysis revealed significant main effects of time, F(19, 77.65) = 4.84, p < .005, ηp² = .16, and electrode, F(31, 208.28) = 3.78, p < .001, ηp² = .13, as well as a trend toward a main effect of condition, F(1, 25) = 4.06, p = .06, ηp² = .14. Three interactions were also significant: time by condition, F(19, 121.27) = 2.65, p < .05, ηp² = .10, time by electrode, F(589, 305.50) = 7.83, p < .001, ηp² = .24, and time by condition by electrode, F(589, 346.76) = 2.46, p < .005, ηp² = .09. Based on these interactions, we performed 2-way ANOVAs examining each time bin (32 electrodes × 2 conditions). The main effect of electrode was significant in all time bins except 3800 to 4000 ms, where it trended toward significance, p = .05. The main effect of condition was significant from 1000 ms to 1800 ms, trended toward significance from 1800 to 2000 ms, and was again significant from 2000 ms to 2200 ms. The statistical findings, including means, for these main effects are presented in Table 1. Examination of the means in each condition across electrodes revealed that ERPs in the relational encoding condition were more positive from 1000 to 2200 ms than those in the unitized encoding condition. Figure 3 shows the topographical pattern of the difference between the Rcorrect and Ucorrect conditions.

Table 1.

Means, standard deviations (in parentheses), and ANOVA statistics for the significant main effects of condition during sentence presentation. Time indicates the beginning in ms of each 200 ms time bin that produced a significant main effect of condition or a trend toward significance.

Time Ucorrect Rcorrect F p ηp²

1000 2.01 (.54) 2.77 (.60) 7.25 0.012 .23
1200 .71 (.53) 1.90 (.62) 12.79 0.001 .34
1400 1.10 (.66) 2.26 (.69) 10.52 0.003 .30
1600 .71 (.68) 1.72 (.69) 7.00 0.014 .22
1800 1.63 (.73) 2.46 (.70) 3.82 0.062 .13
2000 −.02 (.69) 1.01 (.67) 6.06 0.021 .20

Figure 3.

Figure 3

Whole-brain topographic maps of difference potentials (Rcorrect – Ucorrect) during the sentence presentation phase of encoding. Black squares indicate time periods during which the main effect of condition was significant (see Table 1 for statistics). Black circles indicate electrodes that were significantly more positive for Rcorrect than Ucorrect (also indicates an overall condition × electrode interaction). Sample sentence divisions and their onsets are indicated for both the Unitized and Relational encoding conditions.

The interaction between condition and electrode was found to be significant in four time bins: 1200 to 1400 ms, F(9.14, 228.52) = 2.00, p < .05, ηp² = .07, 1400 to 1600 ms, F(9.05, 226.34) = 2.57, p < .01, ηp² = .09, 1600 to 1800 ms, F(8.50, 212.51) = 2.18, p < .05, ηp² = .08, and 2000 to 2200 ms, F(9.5, 237.49) = 2.17, p < .05, ηp² = .08. These time periods were further interrogated with follow-up t-tests, which revealed significant effects of condition at the electrodes indicated in Figure 3 on the appropriate topographical maps. For the 1200 to 1400 ms time bin, Fp1, AF3, CP5, P7, P3, Pz, PO3, PO4, P8, CP6, CP2, C4, T8, AF4, Cz, FC1, FC5, T7, C3, and CP1 were all significantly more positive for Rcorrect than Ucorrect, all t(25) > 2.10, all p < .05. For the 1400 to 1600 ms time bin, Fp1, CP5, P7, P3, Pz, PO3, O1, O2, PO4, P8, CP6, CP2, C4, T8, AF4, Cz, FC1, C3, and CP1 were all significantly more positive for Rcorrect than Ucorrect, all t(25) > 2.15, all p < .05. For the 1600 to 1800 ms time bin, Fp1, CP5, P7, P3, Pz, PO3, O1, PO4, C4, T8, AF4, Cz, C3, and CP1 were all significantly more positive for Rcorrect than Ucorrect, all t(25) > 2.11, all p < .05. From 2000 to 2200 ms, Pz, PO3, O1, O2, PO4, CP6, CP2, C4, T8, AF4, Cz, C3, and CP1 were all significantly more positive for Rcorrect than Ucorrect, all t(25) > 2.11, all p < .05. Figure 4 shows waveforms at three electrodes that produced significant differences between Rcorrect and Ucorrect during sentence presentation.

Figure 4.

Figure 4

Timecourses of Ucorrect and Rcorrect during the sentence presentation phase of encoding at three electrode sites that produced significant differences between the conditions (during time periods indicated with *, p < .05). Note that a new visual stimulus (sentence segment) appeared at 800 ms intervals throughout the larger time window.

We also calculated the correlation between participants’ measured d’ in each encoding condition and their average ERP amplitude at each electrode within the four time windows that showed condition × electrode interactions (1200 to 1400 ms, 1400 to 1600 ms, 1600 to 1800 ms, and 2000 to 2200 ms). For a significance level of p < .05, the minimum significant correlation is .33. All four time windows showed significant, but moderate, negative correlations between measured d’ and Ucorrect amplitudes. The largest correlation at each time window was: −.49 (CP2), −.56 (CP2), −.51 (CP2), and −.46 (Fz), respectively. We found similar correlations between measured d’ and Rcorrect amplitudes during these four time windows as well. The largest correlation for Rcorrect at each time window occurred in electrode F3: −.38, −.50, −.51, and −.36, respectively.
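The reported .33 cutoff for n = 26 participants (df = 24) can be recovered from the t distribution; a one-tailed α = .05 criterion reproduces it, whereas a two-tailed criterion would give roughly .39 (the article does not state which was used, so the one-tailed assumption here is ours). A sketch, assuming SciPy is available:

```python
from math import sqrt
from scipy.stats import t

def critical_r(n, alpha=0.05, tails=1):
    """Smallest Pearson correlation significant at alpha for n pairs,
    via the identity r = t / sqrt(t^2 + df) with df = n - 2."""
    df = n - 2
    t_crit = t.ppf(1 - alpha / tails, df)
    return t_crit / sqrt(t_crit**2 + df)

print(round(critical_r(26), 2))  # → 0.33
```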

A repeated measures ANOVA (5 time bins × 32 electrodes × 2 conditions) examining the mental imagery phase of encoding did not indicate a statistically significant main effect of condition (i.e., unitized vs. relational), F(1, 25) = 0.13, p = .73, ηp² = 0.01. There was also no interaction between condition and electrode, F(31, 465) = 0.72, p = .87, ηp² = 0.05, or condition and time, F(4, 60) = 0.67, p = .62, ηp² = 0.04. However, it is possible that the noise in the frontal electrodes (FP1, FP2, AF3, and AF4) from closing the eyes masked any effects. When the four frontal electrodes were eliminated from the analysis, a significant interaction between condition and electrode appeared, F(27, 459) = 1.52, p = .046, ηp² = 0.08. Based on this interaction, we examined the topographic maps and conducted analyses of individual electrodes.

Follow-up analyses of individual electrodes revealed that the ERP signals elicited during encoding of unitized information were significantly more positive than the ERP signals elicited during encoding of relational information in the left parietal region (FC1, C3, Cz, CP1, and P3) within 200 ms of the beginning of the visualization period, paired ts > 2.12, ps < .05. This difference then diverged into two regions: an increased positivity for Ucorrect as compared to Rcorrect in the left temporo-frontal region, F7 (400–600 ms): paired t(25) = 2.08, p < .05, dz = .40; T7 (400–1000 ms): paired ts > 2.12, ps < .05, dz > .41, and an increased positivity for Ucorrect as compared to Rcorrect in the left occipito-parietal region, P3 (600–800 ms): paired t(25) = 2.09, p < .05, dz = .40; PO3 (800–1000 ms): paired t(25) = 2.16, p < .05, dz = .42. There were no significant differences in frontal electrodes (Fp1, Fp2, AF3, or AF4) during any time bin. Figure 5A presents waveform plots of electrodes Cz, Pz, and PO3 during the mental imagery phase of encoding.
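The electrode-wise follow-up comparisons above are paired t-tests across the 26 participants, with effect size reported as Cohen's dz (mean of the per-participant differences divided by their standard deviation). A sketch of that computation, assuming SciPy and NumPy are available; the amplitude values are simulated, not the study's data:

```python
import numpy as np
from scipy.stats import ttest_rel

def compare_conditions(ucorrect, rcorrect):
    """Paired t-test across participants for one electrode/time bin,
    returning (t, p, Cohen's dz)."""
    t_stat, p_val = ttest_rel(ucorrect, rcorrect)
    diff = np.asarray(ucorrect, dtype=float) - np.asarray(rcorrect, dtype=float)
    dz = diff.mean() / diff.std(ddof=1)  # dz = mean diff / SD of diffs
    return t_stat, p_val, dz

# Simulated mean amplitudes for 26 participants in each condition:
rng = np.random.default_rng(0)
u = rng.normal(1.0, 1.0, 26)            # Ucorrect
r = u - rng.normal(0.5, 1.0, 26)        # Rcorrect, less positive on average
print(compare_conditions(u, r))
```

Note that for a paired design, t and dz are linked by t = dz · √n, which is why the reported t(25) > 2.1 thresholds correspond to dz values around .4.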

Figure 5.

Figure 5

(A) Timecourses of Ucorrect and Rcorrect during the mental imagery phase of encoding at three electrode sites that produced significant differences between the conditions (during time periods indicated with *, p < .05): Cz, Pz, and PO3. (B) Whole-brain topographic maps of mean difference potentials (Ucorrect – Rcorrect) during the mental imagery phase of encoding in 200 ms time windows. Note that frontal electrodes (FP1, FP2, AF3, and AF4) had high variability due to eyelid movement and were therefore excluded from the topo maps.

Figure 5B shows the topographic maps derived from the difference potentials comparing the two conditions of interest (Ucorrect – Rcorrect). Qualitatively, the topographic maps indicate a similar pattern to that identified in the individual electrode analysis. The left parietal electrodes produced more positive potentials for unitized than relational encoding in the first time bin, which then split into a fronto-temporal stream and an occipito-parietal stream, both of which were localized to the left hemisphere. This finding is consistent with the electrode-specific analyses described in the previous paragraph. The right hemisphere electrodes also show a slightly more positive potential for unitized as compared to relational encoding in the right fronto-parietal region (C4 and FC2), but this began much later (approximately 800 ms after the start of the visualization period).

3. Discussion

The present study is the first to examine ERP signals during encoding of unitized information as compared to relationally bound information. In doing so, it is also the first to provide fine-grained information about the timescale of unitization as an encoding process. The experiment was designed to examine unitization and relational binding specifically, with the ERPs reflecting visualization of a unitized or relational representation rather than stimulus perception or initial scenario processing. The absence of red or green perceptual cues allows us to conclude that unitization was based on semantic processing rather than an extrinsically bonded perceptual image of the stimulus word on a red/green background or in a red/green font. We found that the ERPs showed a widespread effect that was significantly more negative in amplitude during unitized sentence processing than relational sentence processing. This differentiation between the encoding strategies began during the second and third sentence segment presentations. The ERP signals in the two conditions then converged following the presentation of the fourth and fifth sentence segments. We also found that ERPs during mental imagery of the scenario described in the sentence were more positive in amplitude within 200 ms of visualizing a unitized encoding scenario as opposed to an arbitrary relational scenario between an item and its associated context detail. This positivity initially appeared in the left fronto-parietal electrodes. After 600 ms of processing, this increased positivity was evident in the left fronto-temporal region and, more posteriorly, in the occipito-parietal region.

We propose that the differences in ERP signals between the unitized and relational conditions reflect the distinct cognitive processing, and potentially differences in underlying neural activity, that is required to support these encoding strategies. The familiarity estimates derived from the DPSD model are consistent with the use of distinct encoding strategies in that the sentences provided at encoding successfully increased the contribution from familiarity to retrieval of unitized context information as compared to retrieval of relationally-bound context information. Visualization of the studied item in a new mental representation that included the context detail (in this case the color of the item) as a meaningful feature of that item allowed source judgments to be supported by familiarity. Visualization of the studied item and an arbitrary association with a context feature (stop sign or dollar bill) resulted in a smaller contribution from familiarity during retrieval.

Previous studies focused on the ERP patterns during memory retrieval have shown converging evidence that familiarity and recollection involve two independent, dissociable electrophysiological responses (Curran, 2000; Opitz & Cornell, 2006; Woodruff, Hayama, & Rugg, 2006). However, studies focused on ERPs at encoding have generated inconclusive data. Some studies showed no significant difference between encoding of items later retrieved via familiarity or recollection (Smith, 1993), suggesting that all items were initially processed in the same way, whereas others showed differences in both scalp topography and timecourse for subsequent familiarity-based and recollection-based recognition (Duarte, Ranganath, Winward, Hayward, & Knight, 2004; Mangels, Picton, & Craik, 2001). Our data are consistent with the latter studies, but it should be noted that our experiment differed in that we explicitly manipulated encoding strategies rather than sorting encoding trials exclusively based on subsequent memory judgments and that we attempted to distinguish between perceptual processing and memory encoding.

Encoding is usually assumed to start as soon as a stimulus is presented in the visual field, which makes it difficult to distinguish the perceptual processing of that stimulus from formation of the mental representation in long-term memory. Our speeded sentence segment presentation procedure was intended to make it difficult for participants to do anything other than read the sentences during the initial phase of encoding. In fact, our participants reported, during informal conversations, that they were focused on collecting as much information as they could during sentence presentation and therefore were unable to visualize the described image until they were cued to close their eyes. Therefore, we interpret the ERP differences during sentence encoding as being related to working memory encoding of the information that would later be unitized or relationally encoded into episodic memory representations during the mental imagery phase.

Sentences designed to apply the relational encoding strategy produced widespread increases in ERP amplitude as compared to sentences designed to apply the unitized encoding strategy beginning 200 ms after the second sentence segment occurred and continuing until 200 ms before the fourth sentence segment occurred. Our post hoc analysis of the sentence segmentation revealed that unitized sentences included the word “red” or “green” in the first sentence segment (along with the target word) on 69% of the trials with the remaining trials including the color word in the second sentence segment. In contrast, relational sentences included at least the first word of the phrase “stop sign” or “dollar bill” in the first sentence segment (along with the target word) on only 20% of trials. The majority of relational sentences included the color information in the second sentence segment (58%) with only a few trials including that information in the third sentence segment. The divergence of the two ERP signals occurred 200 ms after the second sentence segment was presented, suggesting that the appearance of the context detail (“stop sign” or “dollar bill”) may have provoked the increased positivity in the relational trials as compared to the unitized trials. This may be related to the need to maintain multiple independent items in working memory as opposed to a single item and its perceptual descriptor.

We were surprised to find that the meaningful details describing the reason for the target item taking on the specific color (red or green), which occurred during the third through fifth sentence segments, did not produce differentiated ERP signals during sentence processing. That is, the information indicating why the “stain” was red (because it was a ketchup stain) appeared to be processed similarly to the information indicating why the “bike” was near the stop sign (because it was parked there). We have previously proposed that the meaningfulness of the relationship between the context detail and the target item is a critical factor in the effect of unitization on encoding representations. Although our data do not definitively refute this possibility, they do suggest that the key distinguishing aspect of sentence processing occurred before any logical explanation for the unitized relationship was available.

Unfortunately, due to the large potentials provoked by the “eyes closed” procedure, the analyses of the mental imagery period were limited to non-frontal electrodes. In future studies, it would be beneficial to develop a procedure by which the process of closing the eyes is separated from the visual imagery task to allow analysis of frontal electrodes. One of our findings during the mental imagery portion of the encoding trial, the early-onset significant increase in positivity at left parietal sites in the unitization condition, is similar to the strongly left-lateralized fronto-temporal potentials identified by Duarte and colleagues (2004) during 150–450 ms at encoding of words later given a familiarity response. This, together with the behavioral findings of curvilinear ROCs and increased familiarity estimates, reinforces the interpretation that the increased contribution of familiarity to source recognition induced by unitization can be predicted by the positive fronto-parietal ERPs at encoding. The consistency of our early ERP finding with this 2004 result leads us to interpret the effect as representative of familiarity-specific processing that is not necessarily unique to unitization. That is, because the Duarte study (2004) did not manipulate encoding strategies and did not include an emphasis on unitization of context details, we assume that the overlapping effects in the current study reflect more typical encoding processes.

On the other hand, our finding that unitization led to a late-onset increase in positivity at left occipito-parietal sites during mental imagery is less consistent with prior findings. Duarte and colleagues (2004) found that encoding of items subsequently retrieved by recollection was associated with positivity in the right anterior region from 300 to 450 ms and bilateral activity later in the recording period. Similarly, Mangels et al. (2001) contrasted items correctly retrieved by familiarity and by recollection, respectively, with later missed items. A fronto-temporal negativity was found around 340 ms (N340) post-stimulus in both conditions but was enhanced (more negative) for recollected items. These previous studies did not manipulate encoding strategies. Therefore, the results in the current experiment that differ from these two prior studies may indicate processes that are specific to unitization rather than to unconstrained encoding strategies. The lateralized, early-onset increase in positivity, followed by the late-onset increase in positivity, in the current study suggests that unitization of item and context information is initially driven by left fronto-parietal electrical potentials and continued by late (400 ms and later) occipito-parietal electrical potentials. We also found that right hemisphere differences occurred quite late in processing (after 800 ms), suggesting that unitization may lead to a change in the timecourse of hemispheric processing (left hemisphere followed by bilateral) as compared to unconstrained encoding strategies (left-lateralized familiarity and right-lateralized recollection).

An alternative interpretation of our findings is that the increased negativity in the ERP waveforms for the unitized encoding condition as compared to the relational encoding condition during sentence processing, and the increased positivity during mental imagery, were driven solely by the overall difference in accuracy on the behavioral task. That is, more accurate memory judgments led to more negative waveforms during sentence processing and more positive waveforms during mental imagery. Although we cannot conclusively rule out this interpretation, we think there is some evidence that overall accuracy is not the primary factor driving our results. First, our EEG data were recorded during encoding rather than retrieval. Therefore, the ERP signals cannot be interpreted as correlates of memory retrieval success in general, as might be possible had we recorded during retrieval. Second, we only included in the analyses the encoding ERP signals that resulted in correct responses at retrieval, which makes it less likely that differences in accuracy drove our results. Finally, the characteristics of our early ERP effect are consistent with encoding that led to familiarity-based retrieval in a prior experiment (Duarte et al., 2004, discussed above). In that prior study, the familiarity-based retrieval condition was lower in accuracy than the comparison (recollection) condition, suggesting that a similar ERP effect was not accuracy-based in a prior experiment.

It is important to note that our behavioral data indicated that contributions from both recollection and familiarity were significantly higher in the unitized condition than in the relational encoding condition, indicating that familiarity was not the only process that supported source recognition in the unitized condition. The neural activity that we observed may reflect a combination of both familiarity and recollection, rather than a single process. Encoding ERPs that support subsequent familiarity-based recognition and those that support subsequent recollection-based recognition are dissociable (Duarte et al., 2004; Woodruff et al., 2006), but it is unclear whether they are independent. Future investigations may focus on the interaction (or lack thereof) of encoding processes by creating various levels of familiarity and recollection involved at retrieval.

In conclusion, our results show that the ERP correlates of unitized and relational encoding differ both during initial processing of to-be-remembered stimuli and during creation of the episodic representation. Our findings are consistent with the neuroimaging findings that distinct subregions in the medial temporal lobe are activated by unitized as compared to relational encoding processes (Haskins et al., 2008; Staresina & Davachi, 2008). Our study provides the first information about the timecourse of these encoding processes. We found differences in the ERPs produced by unitization and relational sentences during the period when the context detail was first presented and also in the first 200 ms period after mental imagery of the scenario began. Unitization during mental imagery primarily affected left central electrodes during this early time window but moved more laterally and, in a distinct stream, posteriorly after 400 ms. All effects were left lateralized until 800 ms following the onset of encoding, when right hemisphere electrodes began to distinguish the two conditions. Comparison with previous encoding findings in episodic memory (Duarte et al., 2004; Mangels et al., 2001) revealed that the left fronto-temporal and left parieto-occipital differences that began after 400 ms are the most distinct from encoding that leads to familiarity processing in general. Therefore, we propose that one or both of these findings may reflect unitization-based encoding specifically, rather than familiarity-based encoding in general. These results provide a more complete portrait of the timecourse of unitization during encoding, as compared to relational processing. The ERP components identified here, particularly those with early onset in the fronto-parietal regions, could provide a marker for unitized encoding in future studies.

4. Experimental Procedure

4.1 Participants

Thirty-three participants, ages 18 to 40, were recruited from the Virginia Tech community and all signed the consent form approved by the Virginia Tech IRB. Participants were given a choice of either a monetary reward or class credit for their participation. Seven participants were dropped due to either excessive movement during recording (N = 5) or poor memory performance as assessed by the behavioral task (N = 2; measured d’ was two standard deviations lower than the group average) for a final N of 26, including 12 males. Participants were required to be fluent English readers given the speed of stimulus presentation but not restricted with respect to first language learned.

4.2 Stimuli

Participants were asked to complete two blocks in the unitization condition (U) and two blocks in the relational condition (R). Sixty concrete nouns were studied in each encoding block, half of which were unitized or associated with red and the other half with green. Each word was incorporated into an experimenter-created sentence that facilitated use of the encoding strategy designated in that block. The sentence stimuli were presented in white on a black background with no red or green color cues.

The unitization condition included sentences designed to create a meaningful, holistic representation of an item including the context detail (color) that we manipulated. Therefore, in the unitization condition, each sentence provided a meaningful scenario in which the study item itself was red or green. For example, “The COTTON is red because the doctor used it to wipe away the blood” or “The BOWL is green because it was filled with pea soup.” The relational condition included sentences designed to create an arbitrary association between an item and a contextual feature (stop sign or dollar bill) within a scene. Therefore, in the relational condition, each sentence described a scenario that included both the study item and a stop sign (if the color was red) or a dollar bill (if the color was green). For example, “The BALL is associated with a stop sign because the kids playing in the street accidentally hit the sign” or “The CAR is associated with dollar bills because the bank uses it to transport funds between branches.” The study word was always presented in capital letters in the sentence, and each sentence was divided into five segments of similar length for serial presentation during EEG recording in the study session. Two examples of sentence segmentation, one from each condition, are seen in Figure 3. Post hoc analysis of the sentence segments revealed that among 120 unitized trials, the color information (red or green) was presented in the first sentence segment for 83 trials and in the second sentence segment for 37 trials, whereas among 120 relational trials, the color information (stop sign or dollar bill) began in the first sentence segment for 26 trials, the second sentence segment for 90 trials, and the third sentence segment for 4 trials.

4.3 Procedure

The order of the two unitization and two relational study sessions was counterbalanced using an ABBA design. Electrophysiological data were only recorded during the study sessions, using a BioSemi ActiveTwo system with 32 active Ag/AgCl electrodes mounted in an elastic cap according to the international 10/20 system. The sampling rate for EEG data was 2048 Hz, with off-line digital band-pass filtering from 0.1 to 30 Hz.

Each of the study sessions began with five practice trials to familiarize the participant with the procedure. During the study session, each trial started with a fixation cross, presented for 1 s, after which each of the five segments of a sentence appeared on the screen for 0.8 s consecutively. Participants were instructed to close their eyes immediately following the last sentence segment and to remain as still as possible. A four-second delay followed the last sentence segment, during which participants were asked to imagine the scene described in that sentence. Participants were prompted to open their eyes by a “ding” sound after 4 s and then asked to rate the vividness of the mental image they created from 1 (very vivid) to 3 (not vivid) within the following 6 s, after which the next trial began (Figure 1). The sixty sentences were randomly ordered for each participant’s study sessions.

Figure 1. Experimental procedure.

Figure 1

Each participant went through four study sessions, followed by four test sessions. There were two unitized study sessions and two relational study sessions, organized in an ABBA design. All 240 words were pooled and randomly assigned to the four test sessions. The top line shows an example of a stimulus from the Unitized condition and the second line shows an example of a stimulus from the Relational condition.

After all four study sessions were completed, participants were given the opportunity to remove the electrodes and take a short break before they returned for the test sessions. All 240 words from all four encoding sessions were pooled and randomly sorted into four test sessions. Each test trial began with a fixation cross on the screen for 1 s, followed by concurrent presentation of a word, a test question (“What was the color of this word?”), and 6 confidence levels. Participants were required to judge their confidence in their answer on a scale of 1–6, with 1 representing the most confident “green” response and 6 representing the most confident “red” response. Participants were allowed a maximum of 6 s to answer these questions (Figure 1). No new words were presented during the test sessions.

4.4 Analysis

Confidence judgments were pooled so that numbers 1–3 represented “green” responses and numbers 4–6 represented “red” responses. Hit rate, false alarm rate, and measured d’ (defined as Z(hit rate) – Z(false alarm rate)) were calculated for the evaluation of behavioral performance. In addition, behavioral responses were used to create individual receiver operating characteristic (ROC) curves, which were fit with the dual-process signal detection (DPSD) model (Yonelinas, 1999).
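The d’ computation, and one common parameterization of the DPSD model’s predicted ROC points, can be sketched as follows. The hit and false-alarm rates here are illustrative only, not the study’s data, and the exact DPSD parameterization (recollection as a threshold process with probability R, familiarity as an equal-variance signal detection process) is a standard textbook form rather than a claim about the fitting routine the authors used:

```python
from scipy.stats import norm

def d_prime(hit_rate, fa_rate):
    """Measured d' = Z(hit rate) - Z(false alarm rate), as defined in the text."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

def dpsd_roc_point(R, dprime, c):
    """DPSD-predicted (hit, false alarm) rates at criterion c:
    recollection succeeds with probability R; otherwise an equal-variance
    familiarity process with strength d' governs the response."""
    hit = R + (1 - R) * norm.cdf(dprime / 2 - c)
    fa = norm.cdf(-dprime / 2 - c)
    return hit, fa

# Illustrative rates only: "red" responses to red-studied words as hits,
# "red" responses to green-studied words as false alarms.
print(round(d_prime(0.85, 0.20), 2))  # prints 1.88
```

Sweeping the criterion `c` while holding `R` and `dprime` fixed traces out the curvilinear ROC shape that the DPSD model predicts when recollection contributes.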

EEG data were processed and analyzed with ERPLAB (Lopez-Calderon & Luck, 2014), a MATLAB (R2012a) toolbox used in conjunction with EEGLAB (Delorme & Makeig, 2004). The recordings were referenced offline to the average of the left and right mastoids. The sentence encoding portions of the trials were segmented into epochs of 200 ms beginning with the first sentence segment and continuing through 4000 ms, when the last sentence segment was removed from the screen. This time period was not initially intended to be analyzed and therefore suffered from extensive blink artifacts. Therefore, these epochs were analyzed with the independent component analysis (ICA) function in EEGLAB. The ICA components were inspected manually and those that reflected blink artifacts were removed from the dataset. The resulting data were further inspected for other artifacts before being averaged by condition and back-sorted according to retrieval accuracy. The average number of artifact-free trials in each condition was 77.89 (SD = 11.98, Ucorrect) and 67.33 (SD = 9.78, Rcorrect).
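The epoching and back-sorting logic can be sketched in a few lines. This is a toy NumPy illustration of the bookkeeping only (the real pipeline ran inside EEGLAB/ERPLAB, and the sampling rate, trial counts, and labels below are fabricated for the example): cut fixed-length epochs at known event onsets, select trials by encoding condition and subsequent source-memory accuracy, and average the survivors into a condition ERP.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical continuous recording: 32 channels at an illustrative rate.
fs, n_channels = 256, 32
eeg = rng.standard_normal((n_channels, fs * 60))   # 60 s of fake data
onsets = np.arange(0, fs * 50, fs * 10)            # five fake trial onsets

def epoch(data, onsets, fs, dur_s):
    """Cut fixed-length epochs -> array of shape (trials, channels, samples)."""
    n = int(fs * dur_s)
    return np.stack([data[:, o:o + n] for o in onsets])

epochs = epoch(eeg, onsets, fs, 4.0)               # 0-4000 ms per trial

# Back-sort by encoding condition and subsequent source-memory accuracy,
# then average the artifact-free correct trials into the condition ERP.
condition = np.array(["U", "R", "U", "R", "U"])    # fake labels
correct = np.array([True, True, False, True, True])
u_correct_erp = epochs[(condition == "U") & correct].mean(axis=0)
```

The same selection step, run with `condition == "R"`, yields the Rcorrect average; incorrect and artifact-rejected trials are simply excluded by the boolean mask.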

The mental imagery portions of the trials were segmented into 200 ms epochs beginning 200 ms before the participant closed his or her eyes and ending 1000 ms after the eyes were closed. The time at which the participant closed his or her eyes was visually identified by a sharp increase in positivity in the upper and lower eye electrodes. Artifact rejection (including muscle tension and slow or unclear timing for closing the eyes) was determined using visual inspection. Each epoch was first categorized as unitized or relational depending on the encoding strategy used with the item during study and then categorized according to subsequent memory as a correct or incorrect source judgment based on the participant’s eventual test response. All artifact-free, correct trials were averaged within the same encoding condition, yielding event-related potentials (ERPs) for the mean unitized (Ucorrect) and mean relational (Rcorrect) encoding trials at each of the 32 electrode sites.

Highlights.

  • Recognition of unitized context details was supported by familiarity.

  • Event-related potentials compared for unitized and relational encoding strategies.

  • Relational encoding ERPs more positive than unitized during context presentation.

  • Unitization ERPs more positive than relational within 200 ms after imagery begins.

  • Early left parietal difference extends to temporo-occipital later in imagery.

Acknowledgments

This work was supported by National Institute of Mental Health Grant R00MH083945. We thank Fang Wang, Vanessa L. Brayman and Tanner M. Hurley for assistance with data collection.

Footnotes

1

These representations can also co-exist. It is likely that relational associations are more flexible than unitized representations and therefore the two types are useful in different situations.

2

All ANOVA results, except the main effects of condition, are Greenhouse-Geisser corrected due to violations of sphericity.


References

  1. Bader R, Opitz B, Reith W, Mecklinger A. Is a novel conceptual unit more than the sum of its parts?: fMRI evidence from an associative recognition memory study. Neuropsychologia. 2014;61:123–134. doi: 10.1016/j.neuropsychologia.2014.06.006.
  2. Cohen NJ, Eichenbaum H. Memory, amnesia, and the hippocampal system. Cambridge, MA: MIT Press; 1993.
  3. Curran T. Brain potentials of recollection and familiarity. Memory & Cognition. 2000;28(6):923–938. doi: 10.3758/bf03209340.
  4. Davachi L. Item, context and relational episodic encoding in humans. Current Opinion in Neurobiology. 2006;16:693–700. doi: 10.1016/j.conb.2006.10.012.
  5. Davachi L, Mitchell JP, Wagner A. Multiple routes to memory: Distinct medial temporal lobe processes build item and source memories. Proceedings of the National Academy of Sciences. 2003;100:2157–2162. doi: 10.1073/pnas.0337195100.
  6. Delorme A, Makeig S. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. Journal of Neuroscience Methods. 2004;134(1):9–21. doi: 10.1016/j.jneumeth.2003.10.009.
  7. Diana RA, Van den Boom W, Yonelinas AP, Ranganath C. ERP correlates of source memory: unitized source information increases familiarity-based retrieval. Brain Research. 2011;1367:278–286. doi: 10.1016/j.brainres.2010.10.030.
  8. Diana RA, Yonelinas AP, Ranganath C. Imaging recollection and familiarity in the medial temporal lobe: a three-component model. Trends in Cognitive Sciences. 2007;11(9):379–386. doi: 10.1016/j.tics.2007.08.001.
  9. Diana RA, Yonelinas AP, Ranganath C. The effects of unitization on familiarity-based source memory: testing a behavioral prediction derived from neuroimaging data. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2008;34(4):730–740. doi: 10.1037/0278-7393.34.4.730.
  10. Duarte A, Ranganath C, Winward L, Hayward D, Knight RT. Dissociable neural correlates for familiarity and recollection during the encoding and retrieval of pictures. Cognitive Brain Research. 2004;18:255–272. doi: 10.1016/j.cogbrainres.2003.10.010.
  11. Ford JH, Verfaellie M, Giovanello KS. Neural correlates of familiarity-based associative retrieval. Neuropsychologia. 2010;48(10):3019–3025. doi: 10.1016/j.neuropsychologia.2010.06.010.
  12. Giovanello KS, Keane MM, Verfaellie M. The contribution of familiarity to associative memory in amnesia. Neuropsychologia. 2006;44:1859–1865. doi: 10.1016/j.neuropsychologia.2006.03.004.
  13. Haskins AL, Yonelinas AP, Quamme JR, Ranganath C. Perirhinal cortex supports encoding and familiarity-based recognition of novel associations. Neuron. 2008;59(4):554–560. doi: 10.1016/j.neuron.2008.07.035.
  14. Lopez-Calderon J, Luck SJ. ERPLAB: an open-source toolbox for the analysis of event-related potentials. Frontiers in Human Neuroscience. 2014;8. doi: 10.3389/fnhum.2014.00213.
  15. Mangels JA, Picton TW, Craik FI. Attention and successful episodic encoding: An event-related potential study. Cognitive Brain Research. 2001;11:77–95. doi: 10.1016/s0926-6410(00)00066-5.
  16. Opitz B, Cornell S. Contribution of familiarity and recollection to associative recognition memory: insights from event-related potentials. Journal of Cognitive Neuroscience. 2006;18(9):1595–1605. doi: 10.1162/jocn.2006.18.9.1595.
  17. Quamme JR, Yonelinas AP, Norman KA. Effect of unitization on associative recognition in amnesia. Hippocampus. 2007;17:192–200. doi: 10.1002/hipo.20257.
  18. Ranganath C, Yonelinas AP, Cohen MX, Dy CJ, Tom SM, D’Esposito M. Dissociable correlates of recollection and familiarity within the medial temporal lobes. Neuropsychologia. 2004;42(1):2–13. doi: 10.1016/j.neuropsychologia.2003.07.006.
  19. Rhodes SM, Donaldson DI. Electrophysiological evidence for the influence of unitization on the processes engaged during episodic retrieval: Enhancing familiarity based remembering. Neuropsychologia. 2007;45:412–424. doi: 10.1016/j.neuropsychologia.2006.06.022.
  20. Rugg MD, Curran T. Event-related potentials and recognition memory. Trends in Cognitive Sciences. 2007;11(6):251–257. doi: 10.1016/j.tics.2007.04.004.
  21. Smith ME. Neurophysiological manifestations of recollective experience during recognition memory judgments. Journal of Cognitive Neuroscience. 1993;5(1):1–13. doi: 10.1162/jocn.1993.5.1.1.
  22. Staresina BP, Davachi L. Differential encoding mechanisms for subsequent associative recognition and free recall. Journal of Neuroscience. 2006;26:9162–9172. doi: 10.1523/JNEUROSCI.2877-06.2006.
  23. Staresina BP, Davachi L. Selective and shared contributions of the hippocampus and perirhinal cortex to episodic item and associative encoding. Journal of Cognitive Neuroscience. 2008. doi: 10.1162/jocn.2008.20104.
  24. Staresina BP, Davachi L. Object unitization and associative memory formation are supported by distinct brain regions. Journal of Neuroscience. 2010;30(29):9890–9897. doi: 10.1523/JNEUROSCI.0826-10.2010.
  25. Stróżak P, Abedzadeh D, Curran T. Separating the FN400 and N400 potentials across recognition memory experiments. Brain Research. 2016;1635:41–60. doi: 10.1016/j.brainres.2016.01.015.
  26. Tu H-W, Diana RA. Two are not better than one: Combining unitization and relational encoding strategies. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2016;42(1):114–126. doi: 10.1037/xlm0000170.
  27. Voss JL, Federmeier KD. FN400 potentials are functionally identical to N400 potentials and reflect semantic processing during recognition testing. Psychophysiology. 2011;48(4):532–546. doi: 10.1111/j.1469-8986.2010.01085.x.
  28. Voss JL, Paller KA. Remembering and knowing: electrophysiological distinctions at encoding but not retrieval. NeuroImage. 2009;46(1):280–289. doi: 10.1016/j.neuroimage.2009.01.048.
  29. Weis S, Specht K, Klaver P, Tendolkar I, Willmes K, Ruhlmann J, Fernandez G. Process dissociation between contextual retrieval and item recognition. NeuroReport. 2004;15:2729–2733.
  30. Woodruff CC, Hayama HR, Rugg MD. Electrophysiological dissociation of the neural correlates of recollection and familiarity. Brain Research. 2006;1100(1):125–135. doi: 10.1016/j.brainres.2006.05.019.
  31. Yonelinas AP. The contribution of recollection and familiarity to recognition and source-memory judgments: A formal dual-process model and an analysis of receiver operating characteristics. Journal of Experimental Psychology: Learning, Memory, and Cognition. 1999;25(6):1415–1434. doi: 10.1037//0278-7393.25.6.1415.
  32. Yonelinas AP. The nature of recollection and familiarity: A review of 30 years of research. Journal of Memory and Language. 2002;46:441–517.
  33. Yonelinas AP, Parks CM. Receiver operating characteristics (ROCs) in recognition memory: a review. Psychological Bulletin. 2007;133(5):800–832. doi: 10.1037/0033-2909.133.5.800.
