eLife. 2022 Dec 8;11:e60190. doi: 10.7554/eLife.60190

Figure 2. Trial-level neural reactivation of initial learning activity during emotional learning.

(A) An illustration of the trial-level reactivation analysis. Example data are from one subject. During initial learning (left), sagittal views of activation maps for four trials are shown. During emotional learning (right), sagittal views of activation maps for the corresponding trials are shown, with two in the aversive condition and two in the neutral condition. Solid lines indicate correlations for the pair-specific similarity measure, and dashed lines indicate correlations for the across-pair similarity measure. The correlations from each similarity measure were then averaged across trials for each participant, separately for the aversive and neutral conditions. (B) Bar graphs depict the average pair-specific and across-pair pattern similarities in the aversive and neutral conditions for the bilateral hippocampus (left), bilateral ventral LOC (vLOC, middle), and bilateral FFA (right) ROIs. ‘X’ indicates a significant interaction (p<0.05). Error bars represent the standard error of the mean (n=28). (C) Scatter plots depict correlations of observed associative memory performance (i.e. associations remembered with high confidence) with the memory outcome predicted by a machine-learning analysis based on hippocampal pair-specific pattern similarity in the aversive and neutral conditions. Dashed lines indicate 95% confidence intervals, and solid lines indicate the best linear fit. Dots represent data from individual participants (n=28). Notes: ~p<0.10; *p<0.05; **p<0.01; two-tailed tests.
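The pair-specific versus across-pair distinction in (A) can be sketched in a few lines of numpy. This is a minimal illustration on randomly generated patterns, not the authors' pipeline; the array shapes, trial counts, and variable names are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multivoxel ROI patterns (trials x voxels), where trial i in
# each learning phase involves the same face cue.
n_trials, n_voxels = 4, 50
initial = rng.standard_normal((n_trials, n_voxels))    # face-object pairs
emotional = rng.standard_normal((n_trials, n_voxels))  # face-voice pairs

def pattern_corr(a, b):
    """Pearson correlation between two voxel-wise activity patterns."""
    return np.corrcoef(a, b)[0, 1]

pair_specific, across_pair = [], []
for i in range(n_trials):
    # Pair-specific: the same association, correlated across phases.
    pair_specific.append(pattern_corr(emotional[i], initial[i]))
    # Across-pair: mean correlation with all *other* initial-learning trials.
    others = [pattern_corr(emotional[i], initial[j])
              for j in range(n_trials) if j != i]
    across_pair.append(np.mean(others))

# Per-participant summaries would then be averaged within each condition.
print(np.mean(pair_specific), np.mean(across_pair))
```

A reliably higher pair-specific than across-pair similarity is what indicates reactivation of the specific association rather than generic phase-to-phase similarity.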


Figure 2—figure supplement 1. Brain systems involved in memory encoding effect during initial and emotional learning phases.


Widespread brain activation associated with the encoding effect (i.e. all encoding trials, 2 s duration from the onset of each face-object/face-voice association, vs. fixation) during (A) the initial learning phase and (B) the emotional learning phase. Significant clusters are shown at a height threshold of p<0.0001 and an extent threshold of p<0.05, with family-wise error correction for multiple comparisons based on nonstationary suprathreshold cluster-size distributions computed by Monte Carlo simulations. Notes: L, left; R, right; A, anterior; P, posterior. Color bar represents t values.
Figure 2—figure supplement 2. Condition-level neural reactivation of initial learning activity during emotional learning.


(A) An illustration of the condition-level reactivation analysis. Example data are from one subject. (B) Bar graphs depict the average pattern similarities in the aversive and neutral conditions during the presentation of face cues only (i.e. Face cue) and face-voice associations (i.e. Face+Voice), separately for the bilateral hippocampus (left), bilateral vLOC (middle), and bilateral FFA (right) (i.e. the same ROIs used in the trial-level pattern similarity analysis). The similarities were entered into separate 2 (Emotion: aversive vs. neutral) by 2 (Presentation: face cue vs. face-voice association) repeated-measures ANOVAs for each ROI. These analyses revealed significant Emotion-by-Presentation interaction effects in all three ROIs (hippocampus: F(1, 27)=5.68, p=0.024, partial η2=0.17; vLOC: F(1, 27)=5.33, p=0.029, partial η2=0.17; FFA: F(1, 27)=5.61, p=0.025, partial η2=0.17). Error bars represent the standard error of the mean. ‘X’ indicates a significant interaction (p<0.05). Notes: NS, non-significant; *p<0.05; **p<0.01; two-tailed t-tests.
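For a fully within-subject 2 × 2 design like the Emotion-by-Presentation ANOVA above, the interaction F with (1, n−1) degrees of freedom equals the squared paired t statistic on each participant's difference of differences. A minimal numpy sketch on simulated data (cell means and participant count are assumptions, not the study's values):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 28  # participants

# Hypothetical per-participant mean similarities in the four cells of the
# 2 (Emotion) x 2 (Presentation) within-subject design.
avs_face = rng.normal(0.10, 0.05, n)  # aversive, face cue
avs_pair = rng.normal(0.15, 0.05, n)  # aversive, face-voice association
neu_face = rng.normal(0.10, 0.05, n)  # neutral, face cue
neu_pair = rng.normal(0.10, 0.05, n)  # neutral, face-voice association

# Interaction contrast: per-subject difference of differences.
contrast = (avs_pair - avs_face) - (neu_pair - neu_face)
t = contrast.mean() / (contrast.std(ddof=1) / np.sqrt(n))
F = t ** 2  # distributed as F(1, n-1) under the null
print(F)
```

In practice a statistics package (e.g. a repeated-measures ANOVA routine) would be used, but the contrast form makes clear what the interaction tests: whether the face-cue-to-association change differs between conditions.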
Figure 2—figure supplement 3. Trial-level pattern similarity analysis approach.


Several similarity measures were computed for each trial. (1) Pair-specific similarity (black) was computed by correlating each face-voice pair’s multivoxel activity pattern during emotional learning with the corresponding face-object pair’s pattern during initial learning. (2) Across-pair within-condition similarity (purple) was computed by averaging all correlations between each face-voice pair’s pattern and the patterns of all other face-object pairs within the same aversive (or neutral) condition. (3) Across-pair between-condition similarity (grey) was computed by averaging all correlations between each face-voice pair’s pattern and the patterns of all face-object pairs from the other condition. (4) Within-encoding similarity (green) was computed by averaging all correlations between each face-object pair’s pattern and the patterns of all other face-object pairs within the initial learning phase. (5) Within-arousal similarity (yellow) was computed by averaging all correlations between each face-voice pair’s pattern and the patterns of all other face-voice pairs within the emotional learning phase. The correlations from each similarity measure were then averaged across trials for each participant, separately for the aversive and neutral conditions.
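The five measures above can be expressed compactly in numpy. The sketch below uses random patterns and an assumed condition labeling purely for illustration; only the indexing logic corresponds to the definitions in the caption.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pairs, n_voxels = 6, 50
condition = np.array([0, 0, 0, 1, 1, 1])  # 0 = aversive, 1 = neutral (assumed)

initial = rng.standard_normal((n_pairs, n_voxels))    # face-object patterns
emotional = rng.standard_normal((n_pairs, n_voxels))  # face-voice patterns

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

measures = {k: [] for k in ["pair_specific", "across_within",
                            "across_between", "within_encoding",
                            "within_arousal"]}

for i in range(n_pairs):
    same = [j for j in range(n_pairs) if j != i and condition[j] == condition[i]]
    diff = [j for j in range(n_pairs) if condition[j] != condition[i]]
    other = [j for j in range(n_pairs) if j != i]

    # (1) same association, emotional vs. initial learning
    measures["pair_specific"].append(corr(emotional[i], initial[i]))
    # (2) other associations, same condition
    measures["across_within"].append(
        np.mean([corr(emotional[i], initial[j]) for j in same]))
    # (3) associations from the other condition
    measures["across_between"].append(
        np.mean([corr(emotional[i], initial[j]) for j in diff]))
    # (4) other trials within the initial learning phase
    measures["within_encoding"].append(
        np.mean([corr(initial[i], initial[j]) for j in other]))
    # (5) other trials within the emotional learning phase
    measures["within_arousal"].append(
        np.mean([corr(emotional[i], emotional[j]) for j in other]))

# Average per condition, as in the caption.
for name, vals in measures.items():
    vals = np.array(vals)
    print(name, vals[condition == 0].mean(), vals[condition == 1].mean())
```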
Figure 2—figure supplement 4. Additional trial-level pattern similarity results.


(A–C) The bilateral hippocampal, vLOC, and FFA ROIs used in the trial-level pattern similarity analyses. (D–F) Bar graphs depict the average pair-specific, across-pair within-condition, and across-pair between-condition pattern similarities in the aversive and neutral conditions for the three ROIs. We conducted separate 2 (Emotion: aversive vs. neutral) by 3 (Measure: pair-specific vs. across-pair within-condition vs. across-pair between-condition) repeated-measures ANCOVAs for each ROI, with each individual’s univariate activation differences (i.e. aversive vs. neutral) in the initial and emotional learning phases as two covariates of no interest. For hippocampal pattern similarity, this analysis revealed a significant main effect of Emotion (F(1, 25)=11.10, p=0.003, partial η2=0.31) and an Emotion-by-Measure interaction (F(2, 50)=3.72, p=0.031, partial η2=0.13), but no main effect of Measure (F(2, 50)=0.62, p=0.541, partial η2=0.02). For vLOC pattern similarity, the analysis revealed a significant main effect of Emotion (F(1, 25)=8.45, p=0.008, partial η2=0.25), but neither a main effect of Measure (F(2, 50)=0.16, p=0.849, partial η2=0.01) nor an Emotion-by-Measure interaction (F(2, 50)=1.87, p=0.165, partial η2=0.07). For FFA pattern similarity, the analysis revealed no main effect of Emotion (F(1, 25)=2.30, p=0.142, partial η2=0.08) or Measure (F(2, 50)=0.53, p=0.591, partial η2=0.02), nor an interaction (F(2, 50)=0.06, p=0.938, partial η2=0.003). Bar graphs inside the red frame depict the average across-pair within-condition and across-pair between-condition pattern similarities in the aversive and neutral conditions for the three ROIs. To directly compare the two across-pair similarities, we further conducted separate 2 (Emotion: aversive vs. neutral) by 2 (Measure: across-pair within-condition vs. across-pair between-condition) repeated-measures ANCOVAs for each ROI, with the above-mentioned covariates of no interest. Error bars represent the standard error of the mean.
Notes: ~p < 0.10; *p<0.05; **p<0.01; two-tailed tests.
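The covariate adjustment behind an ANCOVA can be illustrated by residualizing the dependent measure on the covariates of no interest before comparing conditions. This is a conceptual numpy sketch with simulated values, not the authors' procedure; full repeated-measures ANCOVAs would be run in a statistics package.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 28  # participants

# Hypothetical per-participant similarity scores (one Emotion x Measure
# cell) and two covariates of no interest: aversive-vs-neutral univariate
# activation differences in the initial and emotional learning phases.
y = rng.normal(0.1, 0.05, n)
cov = rng.standard_normal((n, 2))

# Residualize: remove variance explained by the covariates (plus an
# intercept) via ordinary least squares.
X = np.column_stack([np.ones(n), cov])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# The residuals are orthogonal to both covariates by construction.
print(np.corrcoef(resid, cov[:, 0])[0, 1])
```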
Figure 2—figure supplement 5. Prediction relationships between hippocampal between-phases pattern similarity and associative memory performance.


(A) Scatter plots depict correlations between observed memory performance (i.e. face-object associative memory with high confidence) and the outcomes predicted by a machine-learning analysis based on across-pair pattern similarity in the aversive and neutral conditions. (B) Scatter plots depict correlations between observed memory performance and the outcomes predicted from pair-specific (vs. across-pair) similarity. Dashed lines indicate 95% confidence intervals, and solid lines indicate the best linear fit. Notes: *p<0.05; ***p<0.001; p values were calculated using permutation tests (1000 permutations); two-tailed tests.
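A prediction analysis of this kind is typically built from leave-one-out prediction plus a permutation test on the observed-vs-predicted correlation. The sketch below is one common numpy implementation on simulated data; the predictor, the linear model, and all parameter values are assumptions for illustration, not the study's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 28  # participants

# Hypothetical per-participant predictor (e.g. hippocampal pair-specific
# similarity) and observed high-confidence associative memory scores.
x = rng.normal(0.1, 0.05, n)
memory = 0.5 + 2.0 * x + rng.normal(0, 0.05, n)

def loo_predict(x, y):
    """Leave-one-out linear-regression prediction of y from x."""
    pred = np.empty(len(y))
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        slope, intercept = np.polyfit(x[mask], y[mask], 1)
        pred[i] = slope * x[i] + intercept
    return pred

pred = loo_predict(x, memory)
r_obs = np.corrcoef(pred, memory)[0, 1]

# Permutation test: shuffle the memory scores 1000 times and rebuild the
# null distribution of observed-vs-predicted correlations.
r_null = []
for _ in range(1000):
    perm = rng.permutation(memory)
    r_null.append(np.corrcoef(loo_predict(x, perm), perm)[0, 1])
p = np.mean(np.abs(np.array(r_null)) >= abs(r_obs))  # two-tailed
print(r_obs, p)
```

The permutation p value counts how often a shuffled pairing produces a prediction correlation at least as strong as the observed one.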
Figure 2—figure supplement 6. Prediction relationships between hippocampal within-phase pattern similarity and associative memory performance.


(A) Scatter plots depict correlations between observed memory performance (i.e. face-object associative memory with high confidence) and the outcomes predicted by a machine-learning analysis based on within-encoding pattern similarity in the aversive and neutral conditions. (B) Scatter plots depict correlations between observed memory performance and the outcomes predicted from within-arousal pattern similarity in the aversive and neutral conditions. Dashed lines indicate 95% confidence intervals, and solid lines indicate the best linear fit. Notes: p values were calculated using permutation tests (1000 permutations); two-tailed tests.