PLOS ONE. 2022 Dec 30;17(12):e0279823. doi: 10.1371/journal.pone.0279823

The neural correlates of context driven changes in the emotional response: An fMRI study

Brigitte Biró 1,2,3, Renáta Cserjési 3, Natália Kocsel 3, Attila Galambos 2,3, Kinga Gecse 1,4, Lilla Nóra Kovács 2,3, Dániel Baksa 1,4, Gabriella Juhász 1,4, Gyöngyi Kökönyei 1,3,4,*
Editor: Fausta Lui
PMCID: PMC9803168  PMID: 36584048

Abstract

Emotional flexibility reflects the ability to adjust the emotional response to the changing environmental context. To understand how context can trigger a change in emotional response, i.e., how it can upregulate the initial emotional response or trigger a shift in its valence, we used a task consisting of picture pairs during functional magnetic resonance imaging sessions. In each pair, the first picture was a smaller detail (a decontextualized photograph depicting emotions using primarily facial and postural expressions) from the second (contextualized) picture, and the neural response to a decontextualized picture was compared with the response to the same picture in context. Thirty-one healthy participants (18 females; mean age: 24.44 ± 3.4) were involved in the study. In general, context (vs. pictures without context) increased activation in areas involved in facial emotional processing (e.g., middle temporal gyrus, fusiform gyrus, and temporal pole) and affective mentalizing (e.g., precuneus, temporoparietal junction). After excluding the general effect of context by using an exclusive mask with activation to context vs. no context, the automatic shift from positive to negative valence induced by the context was associated with increased activation in the thalamus, caudate, medial frontal gyrus and lateral orbitofrontal cortex. When the meaning changed from negative to positive, it resulted in a less widespread activation pattern, mainly in the precuneus, middle temporal gyrus, and occipital lobe. Providing context cues to facial information thus recruited brain areas that automatically changed the emotional response and the interpretation of the emotional situation, supporting emotional flexibility.

Introduction

Emotional flexibility refers to the ability to modulate one’s emotional responses to fit the changing demands of the environmental context, and, thus, to change–generate, inhibit, down- or upregulate–one’s initial emotional responses according to the contextual demands [1,2]. Context can automatically direct emotional processing and can easily override facial expressions [3]. Learned associations between emotional responses and contexts lead to the appraisal of situations and shape the emotional responses and/or the regulatory processes [4,5]. For illustration, crying at a funeral expresses sadness, whereas crying after achieving great success expresses happiness or pride. Thus, the meaning of a stimulus may depend on the context and may change when the context changes, so one of the key elements of emotional flexibility is shifting between meanings to adapt our behavior, e.g., our emotional response, to the context; in short, to give an appropriate emotional response when the context changes [6,7].

Decoding emotions from faces has been extensively studied, but there is now a large body of evidence showing that the situational context (e.g., physical and social environment), along with the perceiver’s emotional/social knowledge about the situation, automatically guides perception [8], causing even radical categorical changes in the perceived emotion (e.g., pride instead of sadness) [9]. Contextual information is processed and integrated with facial affective information in the early phase of perception [10]; thus, even the perception of a basic facial emotion can be categorically changed automatically by the context [9].

There are many ways to test the effect of context on emotion perception experimentally. For instance, knowledge about the situation of the observed person can be manipulated by semantic-linguistic labels [3,11] and information given before presenting even neutral faces [12]. Emotional faces can also be presented on different, even artificial backgrounds [13,14], or in naturalistic scenes. Results of electrophysiological [10] and magneto-encephalographic [15] studies suggest that facial perception is influenced by contextual cues even in the early stage of visual processing. On the basis of the available evidence, Aviezer and colleagues [16] conclude that context does not simply have a modulating effect on the processing and perception of emotions, but can actually lead to a categorical shift in the perception of an emotional expression.

Research on cognitive reappraisal can also help to understand the impact of context on emotional information processing. In reappraisal studies [17,18], when participants are asked to give a different meaning to a negative or positive stimulus, they are instructed to create a new cognitive context for the stimulus. Shifting from emotional to non-emotional or from negative to positive meaning (or vice versa) definitely alters the emotional trajectory, causing changes in the initial emotional responses [19]. In the multi-level framework proposed by Braunstein, Gross, and Ochsner [20], reappraisal studies are considered to address controlled emotion regulation with explicit regulatory goals: participants are instructed to regulate their emotions (explicit regulatory goal) using effortful processes to change the cognitive context (controlled processes). However, reappraisal or a shift in meaning, or more generally, emotion regulation, can happen without explicit regulatory goals and/or in a more automatic manner (see [20,21]).

To extend previous works, our aim was to investigate a shift or change in emotional perception triggered by the context that occurs automatically and without explicit instructions to change. We used the Emotional Shifting Task (EST) [22,23], in which pairs of pictures are presented: the first one is a small detail of the second picture, as depicted in Fig 1. The presentation of the first picture, which is a decontextualized part of the whole picture, generates an emotional response, but when this picture is put into a context, the context itself may cause a change (and in some cases a shift) in the meaning and valence of the stimulus. For instance, a picture of a smiling girl is generally evaluated as a positive stimulus, but if it turns out that she is smiling while bullying a peer, the evaluation will probably shift toward a negative direction automatically and without any explicit regulatory goals. This technique was first developed by Munn in 1940 [24], who selected 16 pictures depicting emotional expressions from Life and Look magazines and then prepared two sets of pictures: one set with the full picture including the context, and another set with only the face from the full picture. Munn found that the extra contextual information could change the judgment of an emotional face when participants were asked to name the emotion appearing on the face. The EST also uses naturalistic scenes in order to mimic real-life emotion perception and increase the ecological validity of our task.

Fig 1. Schematic representation of a trial sequence in the Emotional Shifting Task.


An example of shifting from positive to negative emotion.

Our aim was to explore the neuronal responses to the previously seen decontextualized photograph (depicting emotions using primarily facial and postural expressions) when it was presented in a naturalistic context. We expected that areas involved in processing facial and contextual cues would be recruited by our task. For instance, studies using facial expressions to investigate emotion perception found that facial expressions activated the so-called face-selective regions, including the inferior occipital gyrus (occipital face area, OFA), lateral fusiform gyrus (fusiform face area), and posterior superior temporal sulcus (pSTS) as core regions of a widely distributed network [8]. Other brain regions such as the amygdala, anterior inferior temporal cortex [25], insula, and inferior frontal gyrus [26] also play a role in facial emotional processing [27]. As for the context, it is well established that complex social situations can easily trigger mentalizing, i.e., attributing mental and affective states to others (often called theory of mind) [28,29], and/or empathic responding, i.e., vicariously experiencing the feelings and emotional states of other people [30]. Core brain regions of the neural network of mentalization are the bilateral temporoparietal junction (TPJ) and medial prefrontal cortex (mPFC) [31], whereas the core network of empathy involves the dorsal anterior cingulate cortex, anterior midcingulate cortex and supplementary motor area (SMA) [32].

The EST task we used allowed us to distinguish different conditions, as presented in Table 1. On the basis of two dimensions, (1) the valence (positive or negative) of the initial emotional stimulus and (2) the valence (positive or negative) of its contextualized presentation, four different types of automatic changes in emotional response can be targeted. We argue that emotional flexibility can be investigated with this design because, as Coifman and Summers [2] pointed out, the initial emotional response (to the decontextualized pictures) must be modulated (upregulated or shifted) to fit the context.

Table 1. Understanding emotional flexibility in terms of context-driven emotional response.

                                      Second picture (with context)
                                      Positive                                        Negative
First picture (no context)  Positive  Upregulation of positive emotions               Shift from positive emotions to negative ones
                            Negative  Shift from negative emotions to positive ones   Upregulation of negative emotions

By shift we refer to categorical changes where the valence of the initial emotional response is reversed, i.e., the initial negative emotional response becomes positive and vice versa. The term upregulation is used when the valence of the same initial emotional response is increased by the context; thus, a negative stimulus becomes more negative, or a positive one becomes more positive. Accordingly, the EST contained two shift and two non-shift (upregulation) conditions.
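The 2 × 2 design above maps directly onto the four condition codes; a minimal helper (function and label names are ours, purely illustrative, not part of the task software) could look like:

```python
# Hypothetical helper labeling EST trial types from the valences of the
# two pictures in a pair, following the 2x2 design in Table 1.
def condition_label(first_valence: str, second_valence: str) -> str:
    """Map (first, second) picture valences to the four EST conditions."""
    codes = {
        ("positive", "positive"): "P1P2",  # upregulation of positive emotions
        ("positive", "negative"): "P1N2",  # shift from positive to negative
        ("negative", "positive"): "N1P2",  # shift from negative to positive
        ("negative", "negative"): "N1N2",  # upregulation of negative emotions
    }
    return codes[(first_valence, second_valence)]

def is_shift(label: str) -> bool:
    """A trial is a shift condition when the two valences differ."""
    return label in ("P1N2", "N1P2")
```

So, for example, `condition_label("positive", "negative")` yields the shift condition `"P1N2"`.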

On the basis of the theory by Saxe and Houlihan [33], different emotional responses can be expected to stimuli in context vs. without context. They argue that forward inferences are used to attribute emotions to the target when an emotional expression is processed in a context; thus, we automatically infer that the cause of the emotional state of the target, reflected in their emotionally expressive behavior, is the context/event. On this basis, we expected that context itself would recruit areas involved in emotional processing and in understanding complex social situations; thus, we first simply compared the neural responses to whole pictures vs. decontextualized (cropped) pictures. We refer to this as a general context effect in our study. Then we used this activation map as an exclusive mask to explore the four different types of automatic changes in emotional responses specifically. This allowed us to explore neural activation to changes in meaning triggered by the context as a passive cue, independent of the context vs. no-context differences. On the basis of previous studies, we hypothesized that prefrontal regions, especially the mPFC and dorsolateral prefrontal cortex (dlPFC) [17,34], would be recruited when the context induced a shift in the emotional valence of the pictures. More specifically, on the basis of a recent study on the automatic regulation of negative emotions by Yang and coworkers [35], we expected that visual areas, striatal areas, precentral/postcentral gyri and the dlPFC would be activated when context resulted in a shift from negative towards positive meaning.

Method

Participants

Thirty-two healthy adult volunteers recruited through social media sites and journal advertisements were included in the present study; however, one participant was excluded from the first-level analysis due to excessive movement during the fMRI measurement; thus, the data of the final 31 participants, 18 females and 13 males (mean age: 24.44 ± 3.4), were analyzed. The participants were right-handed, as assessed by the Edinburgh Handedness Inventory [36], and had normal or corrected-to-normal vision. All participants were examined by a senior psychiatrist and neurologist, and anyone with a history of psychiatric or neurological disorders or chronic medical conditions was excluded.

The present study was approved by the Scientific and Research Ethics Committee of the Medical Research Council (Hungary), and written informed consent was received from all subjects in accordance with the Declaration of Helsinki.

Psychological task: The Emotional Shifting Task

The EST [22] consists of 24 picture pairs. In each pair, the first picture is always a smaller detail from the second (whole) picture. In most cases the cropped image expressed emotion primarily through facial expression and/or posture. The valence of the first picture either remains the same or changes when it is placed into context, and the elicited emotion should change accordingly (Fig 1). For the upregulation conditions (P1P2 and N1N2), pictures were selected from the International Affective Picture System [37]. Their identification numbers were 1340, 2091, 2141, 2205, 2216, 2340, 2530, 2700, 6242, 6838, 8497, and 9050. For the shift conditions (P1N2 and N1P2), pictures were selected from the internet. Six criteria were used to select the images: (1) free for non-commercial use, (2) depicting social interactions, (3) evoking an emotional response without being shocking or extreme, (4) not depicting famous person(s), (5) eligible for shift conditions, i.e., the valence of the facial expression and of the whole picture should be opposite, and (6) the images should represent as many different situations as possible.

After each pair, a happy and a sad smiley/emoji appeared on the screen (Fig 2), and participants had to choose one of them by pressing the corresponding button to indicate the valence (positive or negative) of the second (whole) picture. We decided to use emojis in the scanner to mimic the two endpoints of the valence ratings in the Self-Assessment Manikin [37].

Fig 2. The design of the Emotional Shifting Task.


Four conditions were defined in the task: two conditions in which participants were expected to alter their emotions either from positive to negative (P1N2) or from negative to positive (N1P2), and two conditions where no shift (but upregulation) was expected in the valence (i.e., both pictures presented were either positive (P1P2) or negative (N1N2)). Each condition consisted of six pairs of pictures, presented in pseudo-random order. During functional magnetic resonance imaging, two behavioral variables were registered: the reaction time (RT) of the emoji selection and the number of “correct” answers. An answer was considered correct if the valence of the second picture matched the valence of the selected smiley. Stimulus presentation and data registration were conducted in E-Prime 2.0 Software (Psychology Software Tools, Inc., Pittsburgh, PA, USA).
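The scoring of the two behavioral variables could be sketched as follows; the trial-record field names here are hypothetical, not taken from the authors' E-Prime logs:

```python
# A minimal sketch of scoring the two behavioural variables from trial
# records; field names are hypothetical, for illustration only.
def score_trials(trials):
    """Return (accuracy in %, mean RT in ms) from a list of trial dicts.

    A response counts as correct when the valence of the chosen emoji
    matches the valence of the second (contextualized) picture.
    """
    correct = [t["chosen_emoji"] == t["second_valence"] for t in trials]
    accuracy = 100.0 * sum(correct) / len(trials)
    mean_rt = sum(t["rt_ms"] for t in trials) / len(trials)
    return accuracy, mean_rt
```

With, say, two trials of which one matches the second picture's valence, this returns 50% accuracy and the mean of the two reaction times.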

Procedure

Data collection included three steps. First, participants completed a short practice session, which was explained and presented on a laptop outside the scanner and consisted of three pairs of stimuli that were not used in the main task, including both shift and non-shift conditions. In this part, participants could read the instructions and ask questions in case of uncertainty regarding the instructions or the operation of the task.

In the next step, participants were instructed to get as emotionally involved in the presented situations as possible while viewing the pictures in the scanner. To measure baseline brain activity, a white fixation cross was presented on a black background at the beginning and at the end of the task for 20 seconds. Each emotional stimulus was shown for 8 seconds. The timing was based on laboratory pilot studies and previous studies [11]. To avoid artifacts due to expectations, fixation crosses were presented with jittered duration (from 5 to 11 seconds, mean presentation time: 8 seconds) before each emotional stimulus. At the end of each trial, the answer screen was presented for 4 seconds (Fig 2).

Lastly, a post-test was completed after the fMRI measurement (outside the scanner) to examine whether participants observed changes in the valence of the pictures. In this part, the pairs of pictures were presented in the same order on a laptop, and participants were asked to rate them on a 7-point Likert scale. Valence and arousal were each measured from 1 to 7 (1 being very unpleasant and 7 very pleasant; 1 being calm and 7 very excited, respectively).

FMRI acquisition

Functional MRI data were collected on a 3T Siemens MAGNETOM Prisma scanner (syngo MR D13D) with a 20-channel head coil. A BOLD-sensitive T2*-weighted echo-planar imaging sequence was used (TR = 2220 ms, TE = 30 ms, FOV = 222 mm) with 3 mm × 3 mm in-plane resolution and contiguous 3-mm slices providing whole-brain coverage. Four hundred and nine volumes were acquired during the task. For the structural data, a series of high-resolution anatomical images were acquired before the functional imaging using a T1-weighted 3D TFE sequence with 1 × 1 × 1 mm resolution.

Statistical analysis of self-report and post-test data

To analyze demographic and behavioral data, SPSS version 28.0 (IBM SPSS, IBM Corp, Armonk, NY, USA) was used, and descriptive and non-parametric statistics were performed. As the distribution of valence and arousal ratings was non-normal, we used the Wilcoxon signed-rank test to compare the valence and arousal ratings of the first and second pictures in each condition (P1N2, N1P2, P1P2, and N1N2). However, as it is easier to interpret changes in means than in ranks, we repeated these analyses using a series of bootstrapped paired t-tests. A repeated measures ANOVA was performed on the reaction times collected during the fMRI scan.
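The rating comparisons described above could be reproduced on synthetic data roughly as follows; the ratings here are simulated for illustration, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated 7-point valence ratings for the first and second picture of a
# shift condition (synthetic values, 31 "participants" as in the study)
first = rng.integers(5, 8, size=31).astype(float)   # e.g. positive first pictures
second = rng.integers(1, 4, size=31).astype(float)  # e.g. negative in context

# Non-parametric paired comparison, as the ratings are non-normal
w_stat, w_p = stats.wilcoxon(first, second)

# Bootstrapped paired t-test analogue: resample the paired differences
# with replacement and build a 95% CI for the mean difference
diffs = first - second
boot_means = np.array([
    rng.choice(diffs, size=diffs.size, replace=True).mean()
    for _ in range(5000)
])
ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])
```

A confidence interval excluding zero corresponds to a significant mean difference, which is easier to interpret than the rank-based statistic alone.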

FMRI data analyses

Preprocessing

The Statistical Parametric Mapping (SPM12) analysis software package (Wellcome Department of Imaging Neuroscience, Institute of Neurology, London, UK; http://www.fil.ion.ucl.ac.uk/spm12/), implemented in Matlab 2016b (MathWorks, Natick, MA, USA), was used to analyze the imaging data. Preprocessing contained the following steps: realignment, co-registration to the structural image, segmentation, normalization into the Montreal Neurological Institute (MNI) space, and spatial smoothing with an 8-mm full-width-at-half-maximum Gaussian kernel. These preprocessing steps were performed on the functional images. Finally, the images were visually inspected to exclude poor-quality images.
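Of these steps, the smoothing kernel is easy to make concrete. The sketch below shows only the final smoothing step on a toy volume, assuming 3-mm isotropic voxels as in the acquisition; SPM12 performs realignment, coregistration, segmentation and normalization internally:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

FWHM_MM = 8.0   # kernel width from the paper
VOXEL_MM = 3.0  # isotropic voxel size from the acquisition

# Convert FWHM to the Gaussian standard deviation, in voxel units
sigma_vox = (FWHM_MM / VOXEL_MM) / (2.0 * np.sqrt(2.0 * np.log(2.0)))

volume = np.zeros((20, 20, 20))
volume[10, 10, 10] = 1.0  # a single "active" voxel
smoothed = gaussian_filter(volume, sigma=sigma_vox)
```

The point response spreads into a Gaussian blob while the total signal is preserved, which is the intended effect of smoothing before group analysis.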

First level model

During first-level analyses, BOLD (blood oxygenation level-dependent) hemodynamic responses were modeled in a general linear model. In the event-related single-subject analysis, fixation screens, both stimuli (positive and negative), the disposition of the shift (the valence of the first stimulus: positive/negative, and the nature of the condition: shift/non-shift) and the two possible answers (happy and sad emojis) were modeled as separate regressors of interest. High-pass temporal filtering with a cut-off of 128 s was included in the model to remove the effects of low-frequency physiological noise, and serial correlations in the data series were estimated using an autoregressive AR(1) model. Motion outliers (threshold of global signal > 3 SD and motion > 1 mm) were identified with the Artifact Detection Tools (ART; www.nitrc.org/projects/artifact_detect/), and the six motion parameters were used as regressors of no interest in the fMRI model.
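A toy version of such a first-level model, for one condition with a noiseless signal, might look like the following; the onsets and HRF parameters are illustrative, not the study's, and a real SPM model adds filtering and AR(1) whitening:

```python
import numpy as np

TR = 2.22      # repetition time in seconds (TR = 2220 ms)
N_SCANS = 409  # volumes acquired during the task

def canonical_hrf(tr, duration=32.0):
    """Double-gamma hemodynamic response function sampled at the TR
    (SPM-style shape: an early peak minus a scaled late undershoot)."""
    from scipy.stats import gamma
    t = np.arange(0, duration, tr)
    hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
    return hrf / hrf.sum()

# Hypothetical onsets (seconds) for one condition; 8-s stimuli as in the task
onsets = [20.0, 60.0, 100.0]
boxcar = np.zeros(N_SCANS)
for onset in onsets:
    start = int(onset / TR)
    boxcar[start:start + int(round(8.0 / TR))] = 1.0

# Convolve the stimulus boxcar with the HRF to get the model regressor
regressor = np.convolve(boxcar, canonical_hrf(TR))[:N_SCANS]

# GLM: design matrix with the task regressor and an intercept, fit by OLS
X = np.column_stack([regressor, np.ones(N_SCANS)])
y = 2.0 * regressor + 0.5  # toy noiseless "voxel" time series
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

The estimated betas (here exactly the amplitude and baseline of the toy signal) are what enter the contrasts computed at the next level.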

Four contrasts were created to analyze whether an increased activation could be detected to stimuli that were placed into a context (2nd picture) compared to the ones without contextual background (1st picture), and also focusing on the valence of the stimuli (see Table 1).

Second-level analyses

During second-level analyses (whole-brain t-tests), the threshold was set to p < .05, family-wise error (FWE) corrected for multiple comparisons. The automated anatomical labeling (AAL) atlas [38] was used to anatomically identify the activated clusters, whereas the MNI 152 template brain provided in MRIcroGL (http://www.mccauslandcenter.sc.edu/mricrogl/) was used to visualize the statistical maps [39].
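SPM's FWE correction relies on random field theory; as a simpler (and more conservative) illustration of the same idea, a Bonferroni bound over all tested voxels could be sketched as follows, with an invented voxel count:

```python
import numpy as np

# Bonferroni sketch of family-wise error control: divide the desired
# family-wise alpha by the number of voxel-level tests.
ALPHA_FWE = 0.05
N_VOXELS = 60_000                       # illustrative whole-brain voxel count
voxelwise_alpha = ALPHA_FWE / N_VOXELS  # per-voxel threshold

# Toy uncorrected p-value map: under the null, (almost) nothing survives
p_map = np.random.default_rng(1).uniform(size=N_VOXELS)
significant = p_map < voxelwise_alpha
```

Random field theory gives a less conservative threshold by accounting for the spatial smoothness of the statistic map, which is why it is the default in SPM.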

Results

Behavioral results

Descriptive statistics

To track the changes in valence and arousal in picture pairs, the valence and arousal ratings registered in the post-task after the scan were analyzed (S1 Table). The valence ratings of all pictures were in the expected directions: positive pictures (P1, P2) were rated as more pleasant and negative pictures (N1, N2) as more unpleasant. The Wilcoxon signed-rank tests (and bootstrapped paired t-tests) showed that valence ratings differed significantly between the first and second pictures in each condition (P1N2, P1P2, N1P2, N1N2); similarly, arousal ratings showed a significant increase for the second pictures (P2, N2) compared to the first ones (P1, N1) in each condition (see S1 Table, Figs 3 and 4).

Fig 3. Changes in valence ratings in the post-task.


Note. P1: First picture of the picture pairs is positive. N1: First picture of the picture pairs is negative. P2: Second picture of the picture pairs is positive. N2: Second picture of the picture pairs is negative. * p < .001.

Fig 4. Changes in arousal ratings in the post-task.


Note. P1: First picture of the picture pairs is positive. N1: First picture of the picture pairs is negative. P2: Second picture of the picture pairs is positive. N2: Second picture of the picture pairs is negative. * p < .001.

Descriptive data collected during the EST provided information on the mean reaction times in the different conditions (see S2 Table) and on the “accuracy” of the participants’ answers (whether they picked the expected smiley/emoji shown on the screen). This accuracy was 95.54% (range: 79–100%). A repeated measures ANOVA performed on the reaction times collected during the fMRI scan showed a significant difference across the four conditions (F(3, 90) = 16.273, p < .001). Post hoc analyses showed that subjects pressed the button more slowly in trials of shifting from positive to negative (P1N2) compared to the other three types of trials. Reaction times in the trials when both pictures were negative (N1N2) were longer compared to the trials when both pictures were positive (P1P2) and to the trials of shifting from negative to positive (N1P2) (see S2 Table and Fig 5).
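A one-way repeated measures ANOVA of this kind (F(3, 90) with 31 subjects and 4 conditions) can be computed by hand on simulated reaction times; the condition effects below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
N_SUBJ, N_COND = 31, 4  # participants x EST conditions

# Synthetic RTs (ms): per-subject baseline plus hypothetical condition slowing
subj_effect = rng.normal(1000, 100, size=(N_SUBJ, 1))
cond_effect = np.array([300.0, 150.0, 0.0, 0.0])  # e.g. P1N2, N1N2, P1P2, N1P2
rt = subj_effect + cond_effect + rng.normal(0, 50, size=(N_SUBJ, N_COND))

# Partition the total sum of squares into condition, subject and error parts
grand = rt.mean()
ss_cond = N_SUBJ * ((rt.mean(axis=0) - grand) ** 2).sum()
ss_subj = N_COND * ((rt.mean(axis=1) - grand) ** 2).sum()
ss_error = ((rt - grand) ** 2).sum() - ss_cond - ss_subj

df_cond = N_COND - 1                   # 3
df_error = (N_SUBJ - 1) * (N_COND - 1) # 90
F = (ss_cond / df_cond) / (ss_error / df_error)
```

Removing the subject sum of squares from the error term is what distinguishes the repeated-measures F from a between-subjects one-way ANOVA.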

Fig 5. Mean reaction times (and standard deviations in milliseconds) to the second picture of the picture pairs in the scanner by the type of picture pairs in the Emotional Shifting Task.


Note. P1: First picture of the picture pairs is positive. N1: First picture of the picture pairs is negative. P2: Second picture of the picture pairs is positive. N2: Second picture of the picture pairs is negative. * p < .05.

Task-related activations

Main effect of context

The main effect of context was checked by comparing brain activation to the second images (full pictures with context) with activation to the first images (pictures without context), regardless of valence changes. Widespread activations were found in the brainstem, lingual and fusiform gyri, precuneus, calcarine cortex, middle and superior temporal gyri, middle and superior occipital gyri, inferior parietal gyrus, middle, superior, medial and inferior frontal gyri, precentral gyrus, SMA, anterior cingulate cortex (ACC) and postcentral gyrus (S3 Table and S1 Fig).

From positive to negative: Activation to positive pictures in a negative context compared to positive pictures without context (P1N2)

To reveal the increased brain activation specific to the context-triggered change in meaning from positive to negative, the context main-effect regions were used as an exclusive mask on the widespread increased activations to positive pictures in a negative context compared to positive pictures without context (Table 2 and Fig 6). The regions activated outside the mask were the superior medial frontal gyrus, inferior orbitofrontal gyrus, superior temporal pole, middle and superior temporal gyri, middle occipital gyrus, SMA, anterior cingulum, thalamus, caudate and amygdala.

Table 2. Brain regions showing increased activation to positive pictures in a negative context compared to positive pictures without context and brain regions showing significantly increased activation to negative pictures in a positive context compared to negative pictures without context.
Cluster size (voxel)  Region  Side  Peak T-value  MNI coordinates (x, y, z)
Positive pictures in a negative context > Positive pictures without context
228 Superior Frontal Gyrus, medial L 9.03 0 56 23
Pregenual ACC R 7.95 6 50 17
Superior Frontal Gyrus, medial R 7.94 6 47 26
40 Inferior frontal Gyrus, pars orbitalis L 7.65 -45 23 -10
Superior Temporal Pole L 7.04 -36 17 -25
Superior Temporal Pole L 6.80 -42 17 -19
14 Thalamus R 7.54 9 -25 -1
66 Caudate R 7.35 12 11 8
Caudate R 7.19 6 8 -1
Thalamus R 6.54 6 -7 8
12 Middle Occipital Gyrus L 7.24 -39 -76 11
17 Rectus L 7.10 0 50 -16
Superior Frontal Gyrus, medial orbital L 6.31 0 59 -13
26 Thalamus L 6.78 -6 -7 5
Caudate L 6.56 -9 8 5
Caudate L 6.10 -12 14 -1
2 Amygdala L 6.68 -21 -7 -16
48 Supplementary Motor Area R 6.66 6 17 68
Superior Frontal Gyrus, medial R 6.59 15 32 59
Supplementary Motor Area R 6.42 3 14 56
14 Middle Temporal Gyrus L 6.37 -60 -10 -16
12 Middle Temporal Gyrus R 6.37 57 -37 -1
Superior Temporal Gyrus R 6.18 48 -37 5
Negative pictures in a positive context > negative pictures without context
71 Middle Temporal gyrus R 9.05 51 -73 2
Middle Temporal gyrus R 8.08 39 -67 20
Middle Temporal gyrus R 7.31 42 -73 8
45 Superior Occipital Gyrus R 7.75 33 -70 41
18 Precuneus R 7.40 3 -52 56
Precuneus R 6.17 6 -64 47
25 Calcarine R 7.40 12 -91 8
Calcarine R 6.88 18 -88 -1
11 Middle Occipital Gyrus L 7.22 -39 -79 2
16 Midcingulate R 6.17 6 -52 32
Precuneus 6.06 0 -55 20
Precuneus R 5.93 3 -61 29

Note. R = right, L = left; the statistical threshold was set to p < .05, family-wise error (FWE) corrected for multiple comparisons; ACC = anterior cingulate cortex.

Fig 6. Activated regions when the context categorically changed the valence of the emotional stimuli (from negative to positive and from positive to negative) after excluding the general effect of context.


Increased activations when a negative stimulus was put in a positive context are shown in green whereas increased activations when a positive stimulus was put in a negative context are shown in red. Coordinates are in the Montreal Neurological Institute (MNI) space. Statistical maps were visualized on the MNI 152 template brain provided in MRIcroGL [39].

From negative to positive: Brain activations to negative pictures in a positive context compared to negative pictures without context (N1P2)

In picture pairs where the second, positively valenced stimulus (P2) was compared to the first, negatively valenced stimulus (N1), increased activations were found in the middle temporal gyrus, middle cingulum, middle occipital gyrus, precuneus and calcarine cortex when we used the above-mentioned exclusive mask (Table 2 and Fig 6).

From positive to positive: Brain activations when both the first and second pictures were positive (P1P2)

Increased BOLD signals were found in visual areas including the calcarine cortex and the lingual gyrus (Table 3) when positive pictures in a positive context were compared to positive pictures without context, using the exclusive mask.

Table 3. Brain regions showing significantly increased activation to positive pictures in a positive context compared to positive pictures without context (P1P2 picture pairs), and to negative pictures in a negative context compared to negative pictures without context (N1N2 picture pairs).
Cluster size (voxel)  Region  Side  Peak T-value  MNI coordinates (x, y, z)
Negative pictures in a negative context > Negative pictures without context
15 Superior Occipital Gyrus R 7.42 27 -73 44
Positive pictures in a positive context > positive pictures without context
17 Lingual Gyrus R 7.24 12 -88 -4
Calcarine R 7.23 15 -94 2
13 Calcarine L 7.18 -6 -94 2
-3 -85 -4

Note. R = right, L = left; the statistical threshold was set to p < .05, family-wise error (FWE) corrected for multiple comparisons.

From negative to negative: Brain activations when both the first and second pictures were negative (N1N2)

The superior occipital gyrus showed increased activation (Table 3) to negative pictures in a negative context compared to negative pictures without context when the context main-effect regions were used as an exclusive mask.

Discussion

In the present study, we aimed to measure emotional flexibility, defined here as a change in the emotional response elicited by a specific context. The task is built on the notion that the context may give an entirely different interpretation to the stimulus, resulting in even categorical changes in the valence and/or arousal of the elicited emotion [2]. In our task, the context appeared as a passive cue that automatically induced changes in the emotional responses and in the interpretation of the emotional situations; the task is therefore considered to explore changes in the spontaneous emotional output guided by the context. On the basis of the review by Coifman and Summers [2], we argue that the EST is an appropriate task to capture emotional flexibility. Post-task data on valence and arousal, and the changes in BOLD responses, support that this task can induce changes in the emotional response.

Context effect on valence, arousal rating and reaction times

To better understand the fMRI results, valence and arousal ratings of the emotional stimuli were collected during the out-of-scanner post-task: participants were asked to rate the pictures used in the fMRI task after the scan. Significant changes in the valence and arousal ratings of the stimuli were observed after they were placed into context, indicating that participants reinterpreted the emotional stimuli. These changes in valence and arousal were detected in all four types of picture sets, indicating that the context not only shaped the categorization of the emotional states [9], but also might affect valence and intensity (arousal). The context, or more precisely the appraisal of the overall context (e.g., bullying, being in a hospital with a sick person, or childbirth), i.e., its semantic features, might provide extra affective information, as it gives an explanation for the emotion; thus, it guides our interpretation [33]. For instance, seeing a crying woman in a hospital, compared to seeing just a crying woman, could elicit a more intense emotional response, as it can activate additional emotional meaning or knowledge, such as her relative being sick.

Reaction times in the scanner were also registered and showed that participants responded more slowly to the second picture in trials of shifting from positive to negative compared to the other three types of trials. Reaction times to the second picture were longer in the trials when both pictures were negative compared to trials when both pictures were positive or when shifting from negative to positive was required. Thus, reaction time was longer when the second picture was negative than when it was positive. We did not ask our participants to choose an emoji after the first picture in the scanner, which limits our interpretation of these reaction times.

This result is in line with the findings of Sakaki and coworkers [40], who presented negative, neutral and positive pictures and found that participants had longer reaction times to negative pictures than to neutral or positive pictures in a task requiring semantic processing. On the basis of these results, Sakaki et al. [40] concluded that viewing pictures of negative emotional events interfered with the semantic processing of subsequent stimuli even more than with their perceptual processing.

General context effect on brain activation

Overall, we found increased occipital cortex activation when emotional stimuli were put into context, indicating heightened perception and attention [41], presumably evoked by the information about the social and physical environment surrounding the expressor, along with existing emotional knowledge about the context. Lateral occipital cortex activation has been reported in studies of emotional scene processing [42], and the role of the calcarine cortex in visual information processing [43,44] and in visual-imagery processes is well-documented [45].

In our task, context provided new knowledge (information) about the emotional state of the protagonist. Indeed, the activation of areas such as the middle temporal gyrus (MTG), fusiform gyrus, SMA [34], temporal pole [46], caudate [47], brainstem [48] and thalamus [49,50] suggests that the social and emotional meaning of facial/postural information, possibly along with other faces in the picture, required increased emotional reprocessing when the context appeared.

The mental states we attribute to others, and the extent to which we resonate with their emotional states, can guide our behavior in complex social situations. Accordingly, the contextual presentation of emotional stimuli may activate brain areas involved in mentalization and/or the empathic response. Thorough inspection of the activation maps revealed that the MTG activation extended to the TPJ. Several meta-analyses [31,51,52] suggest that the TPJ is one of the core regions associated with social cognition, or more specifically mentalizing (often called theory of mind) [28,53], and the TPJ is also activated in empathy studies [51]. Furthermore, the precuneus was also activated; this area has previously been associated with affective mentalizing [54], i.e., inferring the affective states of others, occasionally also called cognitive empathy. The recruitment of the TPJ and the precuneus in our task might suggest that when context was added to the first picture, the observers (our participants) reflected on the emotional meaning of the situation.

We detected increased anterior insula activation when context was added to the first picture. The insula's role in processing interoceptive information, which is key to representing emotional experience [55], is well-established, and insula activation has been correlated with self-reported arousal [56]. In a recent study [57], activations in other regions, such as the superior temporal sulcus, fusiform gyrus and lateral occipital cortex, have also been associated with arousal. Note that our post-test after scanning revealed that the second (whole) pictures were more arousing than the first ones.

We also found increased activation in the right inferior frontal gyrus and lateral orbitofrontal regions when context was added to the first picture. According to a meta-analysis [58], both regions, along with the insula and temporal structures, show major overlaps in emotional processing, interoceptive signaling, and social cognition, supporting previous neuroimaging data and our expectation that adding context would recruit areas involved in emotional processing and in understanding complex social situations. These results indirectly support the theory of Saxe and Houlihan [33], which proposes that information about the event in which the target is expressing emotion is used as a cause when inferring the target's emotion(s).

Specific context effect

Placing positive stimulus in negative context

After excluding the general effect of context by using a mask with activation to context vs. no-context, the automatic shift from positive to negative valence was associated with increased BOLD response in the dorsomedial PFC, SMA, lateral orbitofrontal cortex (OFC), gyrus rectus, caudate, thalamus, amygdala, and MTG. The activation of areas such as the MTG, SMA [34], orbitofrontal cortex [46], caudate [47], and thalamus [49,50] suggests that the context required increased emotional reprocessing (beyond the general context effect) when it turned a previously positive picture negative.

In the emotion domain, the lateral OFC activations in our study correspond to the face-selective part of the OFC observed in a face discrimination reversal task [59]: choosing the correct face turned its expression into a smile, whereas choosing the wrong face turned it into an angry expression, and this part of the OFC was activated when a formerly correct face was no longer the correct choice. Activation in this region was therefore proposed to be error-related, reflecting a discrepancy between expected and perceived feedback, and it was recruited when facial expression signaled a need for behavior change (i.e., a change in choices) [60]. On this basis, seeing a formerly positive facial expression in an overall negative context, e.g., a smiling girl in a bullying context, may also trigger error-related processes and signal a need for behavior change. In our study, according to the post-test results, the biggest change in valence emerged when positive pictures were put into a negative context. This shift from a positive to a negative meaning might require the reformulation of mental representations, reflected in the increased activation of the supplementary motor area [34].

The medial PFC, especially its dorsomedial part, has been activated in empathy studies together with other regions such as the SMA and thalamus [51]. In addition, medial prefrontal regions are recruited in reappraisal studies [61]. For instance, previous studies suggest that the regulation of negative emotions requires the allocation of cognitive resources provided by the dorsomedial prefrontal cortex and dorsal cingulate gyrus [62]. The automatic shift from positive to negative in the meaning of the stimulus in our study might thus be accompanied by the recruitment of regions providing cognitive resources to regulate the resulting negative emotions. However, we did not find activation in the dorsolateral prefrontal cortex commonly observed in reappraisal studies [17,34,63].

We detected a small activation cluster in the amygdala, a region often associated with negative/fearful emotional experiences and faces [64], emotional events, personal affective importance [65] and motivational relevance [66]. As this was the first fMRI study using the EST, we decided to present all significant activations; however, the cluster size was too small to interpret this result in the context of shifting.

The above results suggest that, after excluding the general effect of context, the context-induced shift in the meaning of a positive stimulus to negative was supported in our task by areas involved in emotional processing, the reformulation of mental representations, mentalizing, empathy and, to some extent, error-related processes and cognitive control.

Placing negative stimuli in positive context

Interestingly, trials in which a negative facial expression was placed into a positive context resulted in a less widespread activation pattern than trials in which a positive facial expression turned out to have a negative meaning. The increased activation of the MTG and fusiform gyrus suggests that this shift was also associated with increased emotional processing [67], whereas the recruitment of the precuneus supports that the processing of others' affective mental states was also increased [51]. However, in this case we did not detect activations in the medial or lateral frontal cortex, suggesting that this automatic shift, or the resulting positive emotions, did not require extra cognitive resources or regulation, at least in our task. Thus, the differences and similarities between effortful and automatic regulation of negative emotions require further study.

Our results differ from those of Yang and colleagues [35]; however, they investigated spontaneous recovery from a negative emotional state as an implicit form of emotion regulation, whereas we used context as a passive cue to trigger a change in the emotional response, suggesting that different forms of automatic emotion regulation need to be tested in further studies.

Placing negative stimuli in negative context and positive stimuli in positive context

Interestingly, when a positive stimulus was placed into a positive context, or a negative stimulus into a negative context, increased activation was detected primarily in occipital regions (calcarine, lingual gyrus and superior occipital lobule). In the literature, the upregulation of negative [68] or positive [69] emotions is mainly examined in reappraisal studies in which participants are instructed to use certain tactics; thus, effortful and controlled processes are targeted. In our task, the upregulation of emotions, confirmed by the valence and arousal ratings, was induced by the context and did not require effortful processes, which might explain our results. However, only small clusters were found in these two conditions after masking, so these results should be interpreted with caution.

Limitations

We asked participants to rate the valence and arousal of the pictures in the post-task, not during the fMRI measurement; however, the results showed that the emotional valence ratings of the second, whole pictures changed in the expected direction. In addition, we did not record participants' eye movements, so we cannot rule out the possibility that the four conditions differed in the amount of eye movements and that this affected our results.

To better understand the impact of the complex context in an emotional situation, adding conditions that include the context without the face might have been useful [70]. That would have allowed us to see whether adding the face to the context modifies the valence beyond the information already evident in the context itself. However, the aim of the study was not to study facial and contextual information processing per se, but to use the context as a passive cue that promotes a shift in the meaning of a certain set of images, thereby supporting emotion regulation.

Participants did not have to choose from emojis after the first picture, so we could not calculate reaction time differences within the pairs and could only compare reaction times to the second pictures, which limits our understanding; however, reaction time differences were not the focus of our study. Additionally, it would have been ideal to place the same first picture in both a negative and a positive context to directly compare how different contexts influence the emotional processing of the same emotional stimulus.

Another limitation is that participants might have formed expectations regarding the upcoming context, which might have influenced the activations to the second pictures. However, to reduce this possibility, participants were specifically instructed to focus solely on the stimulus at hand.

For the non-shift trials, stimuli were selected from the IAPS database [37], whereas pictures for the shift trials were collected from the internet and are thus not from a standardized set of emotional stimuli; however, they were validated in several pilot studies [22]. Individual differences in emotional reactivity, empathy, or ToM might affect the perception of emotional stimuli [71], but we did not assess these characteristics in our participants.

Conclusion

We aimed to capture emotional flexibility with a task using context as a cue to trigger an (automatic) change in the emotional response. The affective information and social knowledge activated by the context had a major impact on the neural processing of the emotional visual stimuli. Presenting a previously seen decontextualized emotional stimulus in a context recruited areas involved in emotional processing and in understanding complex social situations, probably indicating that the context itself narrows the range of probable emotions attributed to the decontextualized stimulus [72]. Thus, information about the context might be used as a cause of emotions [33].

Additionally, our results highlight that sensitivity and appropriate responses to context depend on many different processes; thus, emotional inflexibility may stem from different underlying mechanisms. Therefore, understanding emotional inflexibility in psychopathologies requires the dissection of these underlying mechanisms first.

Supporting information

S1 Fig. General context effect: Full pictures with the context vs. firstly presented images (pictures without the context) at p < .05, family-wise error (FWE) corrected for multiple comparisons.

Coordinates are in Montreal Neurological Institute (MNI) space. Statistical maps were visualized on the MNI 152 template brain provided in MRIcroGL [39].

(DOCX)

S1 Table. Valence and arousal values for the pictures of the post-task by contrasts.

P1: First picture of the picture pair is positive. N1: First picture of the picture pair is negative. P2: Second picture of the picture pair is positive. N2: Second picture of the picture pair is negative. + Results of the bootstrapped paired t-test. ++ Results of the Wilcoxon signed-rank test. *p < .001. Valence and arousal were rated on scales from 1 to 7 (valence: 1 = very unpleasant, 7 = very pleasant; arousal: 1 = calm, 7 = very excited).

(DOCX)
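S1 Table reports bootstrapped paired t-tests (and Wilcoxon tests) comparing the ratings of the first and second pictures. As a minimal sketch of how such a bootstrapped paired comparison works, the snippet below resamples the per-participant rating differences and derives a 95% percentile confidence interval; the rating values, variable names, and the 10,000-resample default are illustrative assumptions, not the authors' analysis code.

```python
import random
import statistics

def bootstrap_paired_ci(x, y, n_boot=10_000, seed=0):
    """Bootstrapped paired comparison: resample the per-participant
    differences (y - x) with replacement and return the observed mean
    difference plus a 95% percentile confidence interval."""
    rng = random.Random(seed)
    diffs = [b - a for a, b in zip(x, y)]
    boot = sorted(
        statistics.fmean(rng.choices(diffs, k=len(diffs)))
        for _ in range(n_boot)
    )
    lo = boot[int(0.025 * n_boot)]
    hi = boot[int(0.975 * n_boot) - 1]
    return statistics.fmean(diffs), (lo, hi)

# Hypothetical 1-7 valence ratings from 31 participants: positive first
# pictures (P1) vs. the same pictures inside a negative context (N2).
rng = random.Random(42)
p1 = [min(7.0, max(1.0, rng.gauss(5.5, 0.6))) for _ in range(31)]
n2 = [min(7.0, max(1.0, rng.gauss(2.5, 0.6))) for _ in range(31)]

mean_diff, (lo, hi) = bootstrap_paired_ci(p1, n2)
# A confidence interval that excludes zero indicates a reliable
# context-driven shift in the valence ratings.
```

Here a 95% interval for the mean N2 − P1 difference lying entirely below zero would mirror the valence drop the table reports for the positive-to-negative shift; the published analysis may of course have used different bootstrap settings.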

S2 Table. Mean reaction times (in milliseconds) to the second picture in the scanner by the type of picture pairs in the Emotional Shifting Task.

P1: First picture of the picture pair is positive. N1: First picture of the picture pair is negative. P2: Second picture of the picture pair is positive. N2: Second picture of the picture pair is negative. Different letters (a, b, c) indicate a significant (p < .05) difference between mean scores, whereas identical letters indicate a non-significant difference, according to the paired post hoc tests of the repeated-measures ANOVA.

(DOCX)
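The pairwise reaction-time comparisons summarized by the letter codes in S2 Table can be illustrated with a small, purely hypothetical sketch. The exact post hoc procedure is not reproduced here; the example below substitutes a nonparametric sign-flip permutation test for the paired comparisons, with Holm correction across the six condition pairs. All reaction-time values, condition labels, and test settings are invented for illustration.

```python
import random
import statistics
from itertools import combinations

def signflip_pvalue(x, y, n_perm=5_000, seed=0):
    """Two-sided sign-flip permutation test for paired samples: under
    the null, each paired difference is equally likely to carry either
    sign, so we compare the observed |mean difference| to its
    sign-flipped permutation distribution."""
    rng = random.Random(seed)
    diffs = [b - a for a, b in zip(x, y)]
    observed = abs(statistics.fmean(diffs))
    hits = sum(
        abs(statistics.fmean([d if rng.random() < 0.5 else -d for d in diffs]))
        >= observed
        for _ in range(n_perm)
    )
    return (hits + 1) / (n_perm + 1)

# Hypothetical per-participant mean reaction times (ms) for the four
# trial types (e.g., "P1-N2" = positive first picture, negative second).
rng = random.Random(7)
rts = {
    "P1-P2": [rng.gauss(650, 40) for _ in range(31)],
    "N1-P2": [rng.gauss(660, 40) for _ in range(31)],
    "N1-N2": [rng.gauss(700, 40) for _ in range(31)],
    "P1-N2": [rng.gauss(760, 40) for _ in range(31)],
}

# All pairwise comparisons with Holm (step-down Bonferroni) correction.
pairs = list(combinations(rts, 2))
raw = {pair: signflip_pvalue(rts[pair[0]], rts[pair[1]]) for pair in pairs}
m = len(raw)
adjusted, running = {}, 0.0
for rank, (pair, p) in enumerate(sorted(raw.items(), key=lambda kv: kv[1])):
    running = max(running, min(1.0, (m - rank) * p))
    adjusted[pair] = running
```

Each entry of `adjusted` would then be compared against .05, analogous to how the letter codes in the table mark which condition pairs differ significantly.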

S3 Table. General context effect: Increased activations to the 2nd pictures with context compared to the 1st pictures without context.

L = left; the initial statistical threshold was set to p < .05, family-wise error (FWE) corrected for multiple comparisons.

(DOCX)

Acknowledgments

The authors thank Mária Kelner for drawing the first figure and Tamás Smahajcsik-Szabó for the figures with behavioural data.

Data Availability

All data underlying the findings described in the manuscript, i.e., post-scan valence and arousal ratings of the picture pairs and reaction times to the second pictures in the MR scanner, along with the main fMRI contrast maps, are fully available at https://osf.io/hgdky/. However, we are not allowed to share the raw imaging dataset publicly because, at the time our study started, the consent forms contained no information on open-access data availability (the study was approved by the Scientific and Research Ethics Committee of the Medical Research Council (Hungary)); therefore, participants were not able to accept or refuse sharing their imaging data in an open-access repository. Raw imaging data are available from the corresponding author (Gyöngyi Kökönyei, kokonyei.gyongyi@ppk.elte.hu) or from the Department of Pharmacodynamics, Faculty of Pharmacy, Semmelweis University (titkarsag.gyhat@pharma.semmelweis-univ.hu) on reasonable request.

Funding Statement

This study was supported by the Hungarian Academy of Sciences (MTA-SE Neuropsychopharmacology and Neurochemistry Research Group), the Hungarian Brain Research Program (Grant: 2017-1.2.1-NKP-2017-00002), the Hungarian Brain Research Program 3.0 (NAP2022-I-4/2022), and the Hungarian National Research, Development and Innovation Office (Grant No. FK128614, K 143391). Project no. TKP2021-EGA-25 has been implemented with support provided by the Ministry of Innovation and Technology of Hungary from the Hungarian National Research, Development and Innovation Fund, financed under the TKP2021-EGA funding scheme. DB was supported by the ÚNKP-20-3-II-SE-51 New National Excellence Program of the Ministry for Innovation and Technology from the source of the National Research, Development and Innovation Fund. The sponsors had no role in the design of the study; in the collection, analysis, or interpretation of data; or in writing the manuscript.

References

  • 1. Beshai S, Prentice JL, Huang V. Building blocks of emotional flexibility: Trait mindfulness and self-compassion are associated with positive and negative mood shifts. Mindfulness. 2018;9(3):939–48.
  • 2. Coifman KG, Summers CB. Understanding emotion inflexibility in risk for affective disease: Integrating current research and finding a path forward. Frontiers in Psychology. 2019;10. doi: 10.3389/fpsyg.2019.00392
  • 3. Kayyal M, Widen S, Russell JA. Context is more powerful than we think: Contextual cues override facial cues even for valence. Emotion. 2015;15(3):287–91. doi: 10.1037/emo0000032
  • 4. Malooly AM, Genet JJ, Siemer M. Individual differences in reappraisal effectiveness: the role of affective flexibility. Emotion. 2013;13(2):302. doi: 10.1037/a0029980
  • 5. Robinson MD, Watkins ER, Harmon-Jones E. Cognition and emotion: An introduction. In: Robinson MD, Watkins ER, Harmon-Jones E, editors. Handbook of cognition and emotion. The Guilford Press; 2013. pp. 3–16.
  • 6. Fu F, Chow A, Li J, Cong Z. Emotional flexibility: Development and application of a scale in adolescent earthquake survivors. Psychological Trauma: Theory, Research, Practice, and Policy. 2018;10(2):246. doi: 10.1037/tra0000278
  • 7. Waugh CE, Thompson RJ, Gotlib IH. Flexible emotional responsiveness in trait resilience. Emotion. 2011;11(5):1059. doi: 10.1037/a0021786
  • 8. Wieser MJ, Brosch T. Faces in context: a review and systematization of contextual influences on affective face processing. Frontiers in Psychology. 2012;3:471. doi: 10.3389/fpsyg.2012.00471
  • 9. Aviezer H, Hassin RR, Ryan J, Grady C, Susskind J, Anderson A, et al. Angry, disgusted, or afraid? Studies on the malleability of emotion perception. Psychological Science. 2008;19(7):724–32. doi: 10.1111/j.1467-9280.2008.02148.x
  • 10. Hietanen JK, Astikainen P. N170 response to facial expressions is modulated by the affective congruency between the emotional expression and preceding affective picture. Biological Psychology. 2013;92(2):114–24. doi: 10.1016/j.biopsycho.2012.10.005
  • 11. Deak A, Bodrogi B, Biro B, Perlaki G, Orsi G, Bereczkei T. Machiavellian emotion regulation in a cognitive reappraisal task: An fMRI study. Cognitive, Affective, & Behavioral Neuroscience. 2017;17(3):528–41.
  • 12. Schwarz JM, Smith SH, Bilbo SD. FACS analysis of neuronal–glial interactions in the nucleus accumbens following morphine administration. Psychopharmacology. 2013;230(4):525–35. doi: 10.1007/s00213-013-3180-z
  • 13. Righart R, De Gelder B. Rapid influence of emotional scenes on encoding of facial expressions: an ERP study. Social Cognitive and Affective Neuroscience. 2008;3(3):270–8. doi: 10.1093/scan/nsn021
  • 14. Righart R, De Gelder B. Recognition of facial expressions is influenced by emotional scene gist. Cognitive, Affective, & Behavioral Neuroscience. 2008;8(3):264–72. doi: 10.3758/cabn.8.3.264
  • 15. Morel S, Beaucousin V, Perrin M, George N. Very early modulation of brain responses to neutral faces by a single prior association with an emotional context: evidence from MEG. Neuroimage. 2012;61(4):1461–70. doi: 10.1016/j.neuroimage.2012.04.016
  • 16. Aviezer H, Ensenberg N, Hassin RR. The inherently contextualized nature of facial emotion perception. Current Opinion in Psychology. 2017;17:47–54. doi: 10.1016/j.copsyc.2017.06.006
  • 17. Ochsner KN, Silvers JA, Buhle JT. Functional imaging studies of emotion regulation: a synthetic review and evolving model of the cognitive control of emotion. Annals of the New York Academy of Sciences. 2012;1251:E1. doi: 10.1111/j.1749-6632.2012.06751.x
  • 18. Ochsner KN, Gross JJ. Cognitive emotion regulation: Insights from social cognitive and affective neuroscience. Current Directions in Psychological Science. 2008;17(2):153–8. doi: 10.1111/j.1467-8721.2008.00566.x
  • 19. Gross JJ. The emerging field of emotion regulation: An integrative review. Review of General Psychology. 1998;2(3):271–99.
  • 20. Braunstein LM, Gross JJ, Ochsner KN. Explicit and implicit emotion regulation: a multi-level framework. Social Cognitive and Affective Neuroscience. 2017;12(10):1545–57. doi: 10.1093/scan/nsx096
  • 21. Gyurak A, Gross JJ, Etkin A. Explicit and implicit emotion regulation: a dual-process framework. Cognition and Emotion. 2011;25(3):400–12. doi: 10.1080/02699931.2010.544160
  • 22. Biro B, Kokonyei G, De Oliveira Negrao R, Dancsik A, Karsai S, Logemann HNA, et al. Interaction between emotional context-guided shifting and cognitive shifting: Introduction of a novel task. Neuropsychopharmacol Hung. 2021;23(3):319–30.
  • 23. Lacroix A, Dutheil F, Logemann A, Cserjesi R, Peyrin C, Biro B, et al. Flexibility in autism during unpredictable shifts of socio-emotional stimuli: Investigation of group and sex differences. Autism. doi: 10.1177/13623613211062776
  • 24. Munn NL. The effect of knowledge of the situation upon judgment of emotion from facial expressions. The Journal of Abnormal and Social Psychology. 1940;35(3):324–38. doi: 10.1037/h0063680
  • 25. Zhang H, Japee S, Nolan R, Chu C, Liu N, Ungerleider LG. Face-selective regions differ in their ability to classify facial expressions. Neuroimage. 2016;130:77–90. doi: 10.1016/j.neuroimage.2016.01.045
  • 26. Haxby JV, Hoffman EA, Gobbini MI. The distributed human neural system for face perception. Trends in Cognitive Sciences. 2000;4(6):223–33. doi: 10.1016/s1364-6613(00)01482-0
  • 27. Duchaine B, Yovel G. A revised neural framework for face processing. Annual Review of Vision Science. 2015;1:393–416. doi: 10.1146/annurev-vision-082114-035518
  • 28. Premack D, Woodruff G. Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences. 1978;1(4):515–26.
  • 29. Frith CD, Frith U. The neural basis of mentalizing. Neuron. 2006;50(4):531–4. doi: 10.1016/j.neuron.2006.05.001
  • 30. Davis MH. Measuring individual differences in empathy: Evidence for a multidimensional approach. Journal of Personality and Social Psychology. 1983;44(1):113.
  • 31. Molenberghs P, Johnson H, Henry JD, Mattingley JB. Understanding the minds of others: A neuroimaging meta-analysis. Neuroscience & Biobehavioral Reviews. 2016;65:276–91.
  • 32. Fan Y, Duncan NW, de Greck M, Northoff G. Is there a core neural network in empathy? An fMRI based quantitative meta-analysis. Neuroscience & Biobehavioral Reviews. 2011;35(3):903–11.
  • 33. Saxe R, Houlihan SD. Formalizing emotion concepts within a Bayesian model of theory of mind. Current Opinion in Psychology. 2017;17:15–21. doi: 10.1016/j.copsyc.2017.04.019
  • 34. Kohn N, Eickhoff SB, Scheller M, Laird AR, Fox PT, Habel U. Neural network of cognitive emotion regulation—an ALE meta-analysis and MACM analysis. Neuroimage. 2014;87:345–55. doi: 10.1016/j.neuroimage.2013.11.001
  • 35. Yang Y, Zhang X, Peng Y, Bai J, Lei X. A dynamic causal model on self-regulation of aversive emotion. Brain Informatics. 2020;7(1):20. doi: 10.1186/s40708-020-00122-0
  • 36. Oldfield RC. The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia. 1971;9(1):97–113. doi: 10.1016/0028-3932(71)90067-4
  • 37. Lang PJ, Bradley MM, Cuthbert BN. International Affective Picture System (IAPS): Technical manual and affective ratings. Washington, DC: NIMH Center for the Study of Emotions and Attention; 1997.
  • 38. Tzourio-Mazoyer N, Landeau B, Papathanassiou D, Crivello F, Etard O, Delcroix N, et al. Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. Neuroimage. 2002;15(1):273–89. doi: 10.1006/nimg.2001.0978
  • 39. Rorden C, Brett M. Stereotaxic display of brain lesions. Behavioural Neurology. 2000;12(4):191–200. doi: 10.1155/2000/421719
  • 40. Sakaki M, Gorlick MA, Mather M. Differential interference effects of negative emotional states on subsequent semantic and perceptual processing. Emotion. 2011;11(6):1263–78. doi: 10.1037/a0026329
  • 41. Keil A, Costa V, Smith JC, Sabatinelli D, McGinnis EM, Bradley MM, et al. Tagging cortical networks in emotion: A topographical analysis. Human Brain Mapping. 2012;33(12):2920–31. doi: 10.1002/hbm.21413
  • 42. Sabatinelli D, Fortune EE, Li Q, Siddiqui A, Krafft C, Oliver WT, et al. Emotional perception: Meta-analyses of face and natural scene processing. NeuroImage. 2011;54(3):2524–33. doi: 10.1016/j.neuroimage.2010.10.011
  • 43. Engel SA, Glover GH, Wandell BA. Retinotopic organization in human visual cortex and the spatial precision of functional MRI. Cerebral Cortex. 1997;7(2):181–92. doi: 10.1093/cercor/7.2.181
  • 44. Onitsuka T, Shenton ME, Salisbury DF, Dickey CC, Kasai K, Toner SK, et al. Middle and inferior temporal gyrus gray matter volume abnormalities in chronic schizophrenia: An MRI study. American Journal of Psychiatry. 2004;161(9):1603–11. doi: 10.1176/appi.ajp.161.9.1603
  • 45. Klein I, Paradis A-L, Poline J-B, Kosslyn SM, Le Bihan D. Transient activity in the human calcarine cortex during visual-mental imagery: An event-related fMRI study. Journal of Cognitive Neuroscience. 2000;12(Supplement 2):15–23. doi: 10.1162/089892900564037
  • 46. Barat E, Wirth S, Duhamel J-R. Face cells in orbitofrontal cortex represent social categories. Proceedings of the National Academy of Sciences. 2018;115(47):E11158–E67. doi: 10.1073/pnas.1806165115
  • 47. Almeida I, van Asselen M, Castelo-Branco M. The role of the amygdala and the basal ganglia in visual processing of central vs. peripheral emotional content. Neuropsychologia. 2013;51(11):2120–9. doi: 10.1016/j.neuropsychologia.2013.07.007
  • 48. Buhle JT, Kober H, Ochsner KN, Mende-Siedlecki P, Weber J, Hughes BL, et al. Common representation of pain and negative emotion in the midbrain periaqueductal gray. Social Cognitive and Affective Neuroscience. 2012;8(6):609–16. doi: 10.1093/scan/nss038
  • 49. Sambuco N, Bradley MM, Herring DR, Lang PJ. Common circuit or paradigm shift? The functional brain in emotional scene perception and emotional imagery. Psychophysiology. 2020;57(4):e13522. doi: 10.1111/psyp.13522
  • 50. Kober H, Barrett LF, Joseph J, Bliss-Moreau E, Lindquist K, Wager TD. Functional grouping and cortical–subcortical interactions in emotion: A meta-analysis of neuroimaging studies. NeuroImage. 2008;42(2):998–1031. doi: 10.1016/j.neuroimage.2008.03.059
  • 51. Bzdok D, Schilbach L, Vogeley K, Schneider K, Laird AR, Langner R, et al. Parsing the neural correlates of moral cognition: ALE meta-analysis on morality, theory of mind, and empathy. Brain Structure and Function. 2012;217(4):783–96. doi: 10.1007/s00429-012-0380-y
  • 52. Schurz M, Radua J, Aichhorn M, Richlan F, Perner J. Fractionating theory of mind: A meta-analysis of functional brain imaging studies. Neuroscience & Biobehavioral Reviews. 2014;42:9–34. doi: 10.1016/j.neubiorev.2014.01.009
  • 53. Frith CD, Wolpert DM, Frith U, Frith CD. Development and neurophysiology of mentalizing. Philosophical Transactions of the Royal Society of London Series B: Biological Sciences. 2003;358(1431):459–73. doi: 10.1098/rstb.2002.1218
  • 54. Takahashi HK, Kitada R, Sasaki AT, Kawamichi H, Okazaki S, Kochiyama T, et al. Brain networks of affective mentalizing revealed by the tear effect: The integrative role of the medial prefrontal cortex and precuneus. Neuroscience Research. 2015;101:32–43. doi: 10.1016/j.neures.2015.07.005
  • 55. Craig AD. Emotional moments across time: a possible neural basis for time perception in the anterior insula. Philosophical Transactions of the Royal Society B: Biological Sciences. 2009;364(1525):1933–42. doi: 10.1098/rstb.2009.0008
  • 56. Knutson B, Greer SM. Anticipatory affect: neural correlates and consequences for choice. Philosophical Transactions of the Royal Society B: Biological Sciences. 2008;363(1511):3771–86. doi: 10.1098/rstb.2008.0155
  • 57. Muller-Bardorff M, Bruchmann M, Mothes-Lasch M, Zwitserlood P, Schlossmacher I, Hofmann D, et al. Early brain responses to affective faces: A simultaneous EEG-fMRI study. Neuroimage. 2018;178:660–7. doi: 10.1016/j.neuroimage.2018.05.081
  • 58. Adolfi F, Couto B, Richter F, Decety J, Lopez J, Sigman M, et al. Convergence of interoception, emotion, and social cognition: A twofold fMRI meta-analysis and lesion approach. Cortex. 2017;88:124–42. doi: 10.1016/j.cortex.2016.12.019
  • 59. Kringelbach ML, Rolls ET. Neural correlates of rapid reversal learning in a simple model of human social interaction. Neuroimage. 2003;20(2):1371–83. doi: 10.1016/S1053-8119(03)00393-8
  • 60. Rolls ET. The orbitofrontal cortex and emotion in health and disease, including depression. Neuropsychologia. 2019;128:14–43. doi: 10.1016/j.neuropsychologia.2017.09.021
  • 61. Ochsner KN, Bunge SA, Gross JJ, Gabrieli JD. Rethinking feelings: an fMRI study of the cognitive regulation of emotion. Journal of Cognitive Neuroscience. 2002;14(8):1215–29. doi: 10.1162/089892902760807212
  • 62. Urry HL, van Reekum CM, Johnstone T, Davidson RJ. Individual differences in some (but not all) medial prefrontal regions reflect cognitive demand while regulating unpleasant emotion. NeuroImage. 2009;47(3):852–63. doi: 10.1016/j.neuroimage.2009.05.069
  • 63. Picó-Pérez M, Radua J, Steward T, Menchón JM, Soriano-Mas C. Emotion regulation in mood and anxiety disorders: A meta-analysis of fMRI cognitive reappraisal studies. Progress in Neuro-Psychopharmacology and Biological Psychiatry. 2017;79:96–104. doi: 10.1016/j.pnpbp.2017.06.001
  • 64. Davis M, Whalen PJ. The amygdala: vigilance and emotion. Molecular Psychiatry. 2001;6(1):13–34. doi: 10.1038/sj.mp.4000812
  • 65. Balleine BW, Killcross S. Parallel incentive processing: an integrated view of amygdala function. Trends in Neurosciences. 2006;29(5):272–9. doi: 10.1016/j.tins.2006.03.002
  • 66. Cunningham WA, Van Bavel JJ, Johnsen IR. Affective flexibility: Evaluative processing goals shape amygdala activity. Psychological Science. 2008;19(2):152–60. doi: 10.1111/j.1467-9280.2008.02061.x
  • 67. Kensinger EA, Schacter DL. Processing emotional pictures and words: effects of valence and arousal. Cognitive, Affective, & Behavioral Neuroscience. 2006;6(2):110–26. doi: 10.3758/cabn.6.2.110
  • 68. Ochsner KN, Ray RD, Cooper JC, Robertson ER, Chopra S, Gabrieli JDE, et al. For better or for worse: neural systems supporting the cognitive down- and up-regulation of negative emotion. Neuroimage. 2004;23(2):483–99. doi: 10.1016/j.neuroimage.2004.06.030
  • 50.Kober H, Barrett LF, Joseph J, Bliss-Moreau E, Lindquist K, Wager TD. Functional grouping and cortical–subcortical interactions in emotion: A meta-analysis of neuroimaging studies. NeuroImage. 2008;42(2):998–1031. doi: 10.1016/j.neuroimage.2008.03.059 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 51.Bzdok D, Schilbach L, Vogeley K, Schneider K, Laird AR, Langner R, et al. Parsing the neural correlates of moral cognition: ALE meta-analysis on morality, theory of mind, and empathy. Brain Structure and Function. 2012;217(4):783–96. doi: 10.1007/s00429-012-0380-y [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 52.Schurz M, Radua J, Aichhorn M, Richlan F, Perner J. Fractionating theory of mind: A meta-analysis of functional brain imaging studies. Neuroscience & Biobehavioral Reviews. 2014;42:9–34. doi: 10.1016/j.neubiorev.2014.01.009
  • 53.Frith U, Frith CD. Development and neurophysiology of mentalizing. Philosophical Transactions of the Royal Society of London Series B: Biological Sciences. 2003;358(1431):459–73. doi: 10.1098/rstb.2002.1218
  • 54.Takahashi HK, Kitada R, Sasaki AT, Kawamichi H, Okazaki S, Kochiyama T, et al. Brain networks of affective mentalizing revealed by the tear effect: The integrative role of the medial prefrontal cortex and precuneus. Neuroscience Research. 2015;101:32–43. doi: 10.1016/j.neures.2015.07.005
  • 55.Craig AD. Emotional moments across time: a possible neural basis for time perception in the anterior insula. Philosophical Transactions of the Royal Society B: Biological Sciences. 2009;364(1525):1933–42. doi: 10.1098/rstb.2009.0008
  • 56.Knutson B, Greer SM. Anticipatory affect: neural correlates and consequences for choice. Philosophical Transactions of the Royal Society B: Biological Sciences. 2008;363(1511):3771–86. doi: 10.1098/rstb.2008.0155
  • 57.Müller-Bardorff M, Bruchmann M, Mothes-Lasch M, Zwitserlood P, Schlossmacher I, Hofmann D, et al. Early brain responses to affective faces: A simultaneous EEG-fMRI study. NeuroImage. 2018;178:660–7. doi: 10.1016/j.neuroimage.2018.05.081
  • 58.Adolfi F, Couto B, Richter F, Decety J, Lopez J, Sigman M, et al. Convergence of interoception, emotion, and social cognition: A twofold fMRI meta-analysis and lesion approach. Cortex. 2017;88:124–42. doi: 10.1016/j.cortex.2016.12.019
  • 59.Kringelbach ML, Rolls ET. Neural correlates of rapid reversal learning in a simple model of human social interaction. NeuroImage. 2003;20(2):1371–83. doi: 10.1016/S1053-8119(03)00393-8
  • 60.Rolls ET. The orbitofrontal cortex and emotion in health and disease, including depression. Neuropsychologia. 2019;128:14–43. doi: 10.1016/j.neuropsychologia.2017.09.021
  • 61.Ochsner KN, Bunge SA, Gross JJ, Gabrieli JD. Rethinking feelings: an fMRI study of the cognitive regulation of emotion. Journal of Cognitive Neuroscience. 2002;14(8):1215–29. doi: 10.1162/089892902760807212
  • 62.Urry HL, van Reekum CM, Johnstone T, Davidson RJ. Individual differences in some (but not all) medial prefrontal regions reflect cognitive demand while regulating unpleasant emotion. NeuroImage. 2009;47(3):852–63. doi: 10.1016/j.neuroimage.2009.05.069
  • 63.Picó-Pérez M, Radua J, Steward T, Menchón JM, Soriano-Mas C. Emotion regulation in mood and anxiety disorders: A meta-analysis of fMRI cognitive reappraisal studies. Progress in Neuro-Psychopharmacology and Biological Psychiatry. 2017;79:96–104. doi: 10.1016/j.pnpbp.2017.06.001
  • 64.Davis M, Whalen PJ. The amygdala: vigilance and emotion. Molecular Psychiatry. 2001;6(1):13–34. doi: 10.1038/sj.mp.4000812
  • 65.Balleine BW, Killcross S. Parallel incentive processing: an integrated view of amygdala function. Trends in Neurosciences. 2006;29(5):272–9. doi: 10.1016/j.tins.2006.03.002
  • 66.Cunningham WA, Van Bavel JJ, Johnsen IR. Affective flexibility: Evaluative processing goals shape amygdala activity. Psychological Science. 2008;19(2):152–60. doi: 10.1111/j.1467-9280.2008.02061.x
  • 67.Kensinger EA, Schacter DL. Processing emotional pictures and words: effects of valence and arousal. Cognitive, Affective, & Behavioral Neuroscience. 2006;6(2):110–26. doi: 10.3758/cabn.6.2.110
  • 68.Ochsner KN, Ray RD, Cooper JC, Robertson ER, Chopra S, Gabrieli JDE, et al. For better or for worse: neural systems supporting the cognitive down- and up-regulation of negative emotion. NeuroImage. 2004;23(2):483–99. doi: 10.1016/j.neuroimage.2004.06.030
  • 69.Li F, Yin S, Feng P, Hu N, Ding C, Chen A. The cognitive up- and down-regulation of positive emotion: Evidence from behavior, electrophysiology, and neuroimaging. Biological Psychology. 2018;136:57–66. doi: 10.1016/j.biopsycho.2018.05.013
  • 70.Chen Z, Whitney D. Tracking the affective state of unseen persons. Proceedings of the National Academy of Sciences of the United States of America. 2019;116(15):7559–64. doi: 10.1073/pnas.1812250116
  • 71.Brosch T, Pourtois G, Sander D. The perception and categorisation of emotional stimuli: A review. Cognition and Emotion. 2010;24(3):377–400. doi: 10.1080/02699930902975754
  • 72.Anzellotti S, Houlihan SD, Liburd S, Saxe R. Leveraging facial expressions and contextual information to investigate opaque representations of emotions. Emotion. 2021;21(1):96–107. doi: 10.1037/emo0000685

Decision Letter 0

Hugh Cowley

6 Sep 2022

PONE-D-22-11714
The neural correlates of context driven changes in the emotional response: an fMRI study
PLOS ONE

Dear Dr. Kökönyei,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

When revising your manuscript, please in particular ensure you address the comments raised below regarding reporting details of the methodology and data.

Please submit your revised manuscript by Oct 20 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Hugh Cowley

Staff Editor

PLOS ONE

Journal requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf.

2. "We note that Figure 1 includes an image of a [patient / participant / in the study].

As per the PLOS ONE policy (http://journals.plos.org/plosone/s/submission-guidelines#loc-human-subjects-research) on papers that include identifying, or potentially identifying, information, the individual(s) or parent(s)/guardian(s) must be informed of the terms of the PLOS open-access (CC-BY) license and provide specific permission for publication of these details under the terms of this license. Please download the Consent Form for Publication in a PLOS Journal (http://journals.plos.org/plosone/s/file?id=8ce6/plos-consent-form-english.pdf). The signed consent form should not be submitted with the manuscript, but should be securely filed in the individual's case notes. Please amend the methods section and ethics statement of the manuscript to explicitly state that the patient/participant has provided consent for publication: “The individual in this manuscript has given written informed consent (as outlined in PLOS consent form) to publish these case details”.

If you are unable to obtain consent from the subject of the photograph, you will need to remove the figure and any other textual identifying information or case descriptions for this individual.

3. We note that the grant information you provided in the ‘Funding Information’ and ‘Financial Disclosure’ sections do not match. When you resubmit, please ensure that you provide the correct grant numbers for the awards you received for your study in the ‘Funding Information’ section.

4. Please note that in order to use the direct billing option the corresponding author must be affiliated with the chosen institute. Please either amend your manuscript to change the affiliation or corresponding author, or email us at plosone@plos.org with a request to remove this option.

5. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability.

Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized.

Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access.

We will update your Data Availability statement to reflect the information you provide in your cover letter.

6. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For more information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.

In your revised cover letter, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories.

We will update your Data Availability statement on your behalf to reflect the information you provide.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: No

Reviewer #3: No

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: OVERALL IMPRESSION

The authors have investigated the neural correlates of emotional flexibility. The paper is innovative with its two focal points combined in one study: the effect of context and the direction of emotion regulation.

Below, please find some suggestions to improve the manuscript.

SPECIFIC COMMENTS

ABSTRACT

1. As the direction of emotion regulation is a key aspect in the study (besides the effect of context), I recommend clarifying it in the abstract (before line 36, in which a ‘shift from positive to negative valence’ is mentioned).

2. It is unclear in the abstract (but clear in the main text) why facial emotional processing and responsible brain regions are relevant. (Because the decontextualized details of the pictures are faces.) It would be great to clarify it in the abstract.

3. Brain structures are identified while the authors report the results in the abstract, except for line 34 where functions are mentioned (‘areas involved in facial emotional processing and affective mentalizing’). Some important regions could be added here.

INTRODUCTION

4. Why is it an upregulation to have a positive first picture followed by a positive second picture? Does the valence rating increase (i.e., is the valence for the second picture with context > the valence for the first picture without context)? It seems to be ‘only’ a consistency between the first and second picture compared to the inconsistent trials (negative first picture – positive second picture and vice versa) (see from line 122 – Table 1). Of course, the Results part gives an answer to the question above, but there is a slight inconsistency in the use of terms in different parts of the manuscript, such as upregulation, shifting, and context effect. Please correct it.

METHODS

5. Lines 156–159: Although I understand that Reference #29 contains the methodological background of the EST, it would be useful to add some more information about the stimulus material (e.g., database, ID numbers of the selected pictures, selection criteria for depicting social scenes/faces/emotional expressions, etc.).

6. Was the smaller detail of the picture pairs (the decontextualized picture) always a face/facial expression? If yes, it would be great to clarify it in the text (lines 156–159).

7. Line 159: What was the purpose of the happy/sad emojis? Please add a short explanation.

RESULTS

8. Lines 193–194 and 241–242: Please check the valence ratings in the non-shift condition (S1 Table: 5.850 vs 6.129) to see whether there is a significant difference between them. (This difference seems to be too small.)

9. Lines 251–252 and S1 Table: The changes in valence ratings are the focus of the study; however, it is very interesting that arousal ratings are always higher for the second pictures than for the first ones. What could be the reason for that? Do you have an explanation?

DISCUSSION

10. Lines 338–342: Please explain the top-down manner of the changes in valence and arousal (due to the context) in more detail.

11. Lines 472–473: It is unclear what the exact limitation related to the pictures (from the IAPS database and the pictures collected from the internet) is.

IN SUM

This is a well-designed, innovative fMRI-study to explore the effect of context and valence. I recommend it for acceptance after minor changes.

Reviewer #2: In this paper, the Authors aim to study the mechanisms of emotional flexibility (i.e., the change of emotional responses according to the context), performing both behavioral tests (valence, arousal, reaction times) and functional brain imaging (fMRI).

Major comments:

Line 359: Eye movements should be taken into consideration as a possible explanation of the occipital activity; the Authors should mention the lack of eye-movement recording among the limitations of the study

"facial expression" appears quite late in the manuscript; it should be mentioned in Keywords and Abstract (and/or title?)

The mask used in the analysis is referred to in different ways: e.g., line 35 "explicit", line 131 "external", line 273 "exclusive"; the Authors should consistently use the most appropriate term and explain clearly what it means
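For clarity on the last of those terms: an exclusive mask removes from one contrast map every voxel that was significant in another contrast. The snippet below is only a toy numerical sketch of that idea, not the authors' actual SPM pipeline; the arrays, threshold, and function name are invented for illustration.

```python
import numpy as np

def apply_exclusive_mask(contrast_map, exclusion_map, threshold):
    """Zero out voxels of contrast_map that exceed threshold in exclusion_map."""
    masked = contrast_map.copy()
    masked[exclusion_map > threshold] = 0.0
    return masked

# Toy 3-voxel example: t-values for a shift contrast and for the
# general context contrast (both arrays are made up for illustration).
shift = np.array([4.2, 3.8, 5.1])
context = np.array([1.0, 6.0, 0.5])

out = apply_exclusive_mask(shift, context, threshold=3.1)
# The middle voxel is removed because it is also driven by the general
# context effect, leaving only shift-specific activation.
```

This mirrors the logic described in the abstract ("excluding the general effect of context by using an exclusive mask"), where shift-related activation is reported only outside voxels showing the context vs. no-context effect.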

Table 1 not necessary, same concept is adequately explained in text

Line 443: it should probably read "context"?

Line 445: should it refer to "negative to negative", not "negative to positive"?

Minor points:

line 147: "standardized handedness questionnaire": commonly quoted as "Edinburgh inventory"

Several language imperfections, too many to mention them all (the Authors should have the English checked throughout); some examples:

line 38 and 138: it should read "it resulted IN a"

line 61: it should read: "pride instead of sadness"

line 128: "/even": ?

line 146 and 166: parenthesis missing

line 151: "and were excluded with any history...": grammar!?

line 187: "crosses were presented with variable duration"

line 199: "Four hundred and nine volumes were acquired...."

line 372: "inferring affective states OF others"

line 376 and following: "Beyond its role ... reported arousal": grammar!?

Reviewer #3: Biró et al. explored the neural correlates of the emotional response during context shifting. This work links emotional context shifts to changes in the BOLD response. This work may be of interest to communities investigating the role of context during emotional processing. I have comments and questions about the manuscript that need clarification.

1. I recommend that the authors review the grammar of the manuscript. Some errors disrupt the clarity of writing, e.g., hyphens (line 57, 118, 119, 415) and parentheses (line 88, 411, 430, 440) when not necessary.

2. Further, the authors should address the structure of the manuscript. There are several one-sentence paragraphs throughout the manuscript, e.g., lines 203, 241, 364, 393, 427, and 453. The flow of the introduction and discussion should be improved to paint a clearer picture of the existing literature and interpretation of the results.

3. On lines 245/256 the authors discuss the accuracy of participants' responses. The values in parentheses (mean: 22.93, range 19-24) do not seem to reflect the accuracy rate which leads me to believe they are an error, could the authors please explain these values?

4. In the methods section of the manuscript, the authors detail how behavioural data will be analysed with “descriptive and non-parametric statistics”. I suggest the authors use non-parametric tests where the assumptions for parametric tests have been violated and specify this in-text. Further, I assume non-parametric tests have been employed due to violation of normality, have the authors considered transforming the data to better fit the normal distribution?
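As an illustration of the workflow this comment suggests (check normality first, then fall back to a non-parametric paired test if it is violated), here is a minimal sketch; the data are simulated and the 0.05 cutoff is an assumption, not the authors' actual analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.normal(5.0, 1.0, 30)         # simulated ratings without context
post = pre + rng.exponential(0.5, 30)  # skewed paired differences (non-normal)

# Test normality of the paired differences (Shapiro-Wilk)
diff = post - pre
_, p_norm = stats.shapiro(diff)

if p_norm < 0.05:
    # Normality violated: Wilcoxon signed-rank test (non-parametric paired test)
    stat, p = stats.wilcoxon(post, pre)
else:
    # Normality plausible: paired t-test
    stat, p = stats.ttest_rel(post, pre)
```

Transforming the data toward normality (e.g., a log transform), as the reviewer mentions, would be an alternative to the non-parametric branch.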

5. It would be beneficial to the reader to specify the levels of the ANOVA in the results section.

6. In the supplementary material, table S2 specifies in the legend: “Different letters (a, b, c) represent significant (p < .05) difference between mean scores, whereas the same letters represent non-significant difference between mean scores according to the paired post hoc test of repeated measure of ANOVA". I find this difficult to follow and suggest the authors present all behavioural results as figures in-text, with significance denoted by asterisks.

7. When using acronyms, use the full term first followed by the acronym in parentheses, e.g., Theory of Mind (ToM).

8. There are details of stimuli selection from the International Affective Picture System (IAPS) in the discussion section, which is not included in the methods section of the manuscript. Please ensure any relevant information about the stimuli and/or task is included in the methods section.

9. The authors should ensure they use the same referencing style throughout the manuscript, e.g., lines 397–399.

10. There are several small clusters after masking, e.g., cluster size = 2 for amygdala activation for positive pictures in a negative context > positive pictures without context. How can the authors be sure they are not over-interpreting small clusters?

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Reviewer #3: Yes: Jessica Henderson

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2022 Dec 30;17(12):e0279823. doi: 10.1371/journal.pone.0279823.r002

Author response to Decision Letter 0


16 Nov 2022

Dear Editor-in-Chief, Dear Dr. Hugh Cowley,

We would like to thank you for giving us the opportunity to improve our manuscript. The comments and suggestions of the reviewers helped us to structure the Introduction in a more logical way. Questions on the task and results helped us to provide a clearer overview of our study and to complete the discussion with further relevant thoughts. As a result, we would like to take this opportunity to resubmit the revised version of our manuscript, entitled “The neural correlates of context driven changes in the emotional response: an fMRI study”.

Please find below our detailed answers to the reviewers’ comments (written in blue in the response to reviewers.doc). In the revised manuscript the original text is presented in black, and modifications are indicated in tracked-changes mode in MS Word.

We used a professional English language editing service that substantially helped to increase the readability of the manuscript. Please note that these modifications are not marked by tracked changes in the manuscript.

We were asked to address the following additional requirements:

1. We checked all of PLOS ONE's style requirements, and the manuscript meets the requirements, as does the naming of the files.

2. We would like to note that Figure 1 did not include an image of a patient/participant in the study. Instead, Figure 1 contained pictures from the International Affective Picture System and pictures from the internet which were collected for the shifting of trials. However, in order to avoid copyright issues, we have decided to restructure Figure 1 and present our task schematically. The revised manuscript contains the new schematic presentation of the Emotional Shifting Task.

3. The grant information provided in the ‘Funding Information’ and ‘Financial Disclosure’ sections now matches.

Funding

This study was supported by the Hungarian Academy of Sciences (MTA-SE Neuropsychopharmacology and Neurochemistry Research Group), the Hungarian Brain Research Program (Grant: 2017-1.2.1-NKP-2017-00002), and the Hungarian Brain Research Program 3.0 (NAP2022-I-4/2022), and by the Hungarian National Research, Development and Innovation Office (Grant No. FK128614, K 143391). Project no. TKP2021-EGA-25 has been implemented with the support provided by the Ministry of Innovation and Technology of Hungary from the Hungarian National Research, Development and Innovation Fund, financed under the TKP2021-EGA funding scheme. DB was supported by the ÚNKP-20-3-II-SE-51 New National Excellence Program of the Ministry for Innovation and Technology from the source of the National Research, Development and Innovation Fund. The sponsors had no role in the design of the study; in the collection, analysis, and interpretation of data; or in the writing of the manuscript.

4. In order to use the direct billing option, the corresponding author amended her affiliation to the chosen institute (Semmelweis University).

5. Regarding the data availability statement, we have specified that the data on valence and arousal ratings and reaction times, along with the fMRI contrast maps, are fully available at the following link: https://osf.io/hgdky/. However, we are not allowed to share the raw imaging dataset publicly because, at the time our study started, the consent forms contained no information on open-access data sharing (the study was approved by the Scientific and Research Ethics Committee of the Medical Research Council (Hungary)); therefore, participants could not consent to or decline the sharing of their imaging data in an open-access repository. Nevertheless, raw imaging data are available from the corresponding author (Gyöngyi Kökönyei, kokonyei.gyongyi@ppk.elte.hu) or from the Department of Pharmacodynamics, Faculty of Pharmacy, Semmelweis University (titkarsag.gyhat@pharma.semmelweis-univ.hu) on reasonable request.

Should any other issues with the manuscript arise, please let us know and we are prepared to address them.

Sincerely,

the authors

Reviewers' comments:

Reviewer #1: OVERALL IMPRESSION

The authors have investigated the neural correlates of emotional flexibility. The paper is innovative with its two focal points combined in one study: the effect of context and the direction of emotion regulation.

Answer# Thank you for your positive feedback and for taking the time to review our manuscript!

Below, please find some suggestions to improve the manuscript.

SPECIFIC COMMENTS

ABSTRACT

1. As the direction of emotion regulation is a key aspect in the study (besides the effect of context), I recommend clarifying it in the abstract (before line 36, in which a ‘shift from positive to negative valence’ is mentioned).

Answer# Thank you for your comment. Since emotion regulation and its direction are key aspects of the study, they should definitely be included in the abstract.

In the text: To understand how context can trigger a change in emotional response, i.e. how it can up-regulate the initial emotional response or trigger a shift in the valence of emotional response, we used a task consisting of picture pairs during functional magnetic resonance imaging sessions.

2. It is unclear in the abstract (but clear in the main text) why facial emotional processing and responsible brain regions are relevant. (Because the decontextualized details of the pictures are faces.) It would be great to clarify it in the abstract.

Answer# We added this information to the abstract.

In the text: In each pair the first picture was a smaller detail (a decontextualized photograph depicting emotions using primarily facial and postural expressions) from the second (contextualized) picture and the neural response to a decontextualized picture was compared with the same picture in a context.

3. Brain structures are identified while the authors report the results in the abstract, except for line 34 where functions are mentioned (‘areas involved in facial emotional processing and affective mentalizing’). Some important regions could be added here.

Answer# Thank you for your suggestions. We added some regions relevant to facial emotional processing and affective mentalizing.

In the text: In general, context (vs. pictures without context) increased activation in areas involved in facial emotional processing (e.g., middle temporal gyrus, fusiform gyrus, and temporal pole) and affective mentalizing (e.g., precuneus, temporoparietal junction).

INTRODUCTION

4. Why is it an upregulation to have a positive first picture followed by a positive second picture? Does the valence rating increase (i.e., is the valence for the second picture with context > the valence for the first picture without context)? It seems to be ‘only’ a consistency between the first and second picture compared to the inconsistent trials (negative first picture – positive second picture and vice versa) (see from line 122 – Table 1).

Answer# We used the term upregulation in cases where the valence of the same initial emotional response increased. We tested the changes in valence and arousal after the scan. Indeed, when a positive first picture was followed by a positive second picture, both the valence and the arousal post-scan ratings increased. In S1 Table we present all the data relevant to these comparisons. Since Reviewer 3 suggested using figures to help readers understand the supplementary tables and the changes in valence and arousal induced by the context, separate figures (Fig 3 and 4) show the valence and arousal data for the shift and upregulation conditions (see below). The figure shows that when a first positive picture was followed by a second positive picture (P1P2 condition), the valence became more positive. Similarly, when a first negative picture was followed by a second negative picture (N1N2 condition), the valence became more negative. These changes were significant. In addition, the changes in arousal were also significant in both the P1P2 and N1N2 conditions.

Fig 3. Changes in valence ratings in the post-task. Note. P1: First picture of the picture pairs is positive. N1: First picture of the picture pairs is negative. P2: Second picture of the picture pairs is positive. N2: Second picture of the picture pairs is negative. * p < .001.

Fig 4. Changes in arousal ratings in the post-task. Note. P1: First picture of the picture pairs is positive. N1: First picture of the picture pairs is negative. P2: Second picture of the picture pairs is positive. N2: Second picture of the picture pairs is negative. * p < .001

Of course, the Results part gives an answer to the question above, but there is a slight inconsistency in using terms in different parts of the manuscript, such as upregulation, shifting, context effect. Please, correct it.

Answer# Thank you for your comment. Indeed, we use all three terms in the text: upregulation, shift, and context effect. However, the three terms describe completely different phenomena. The term upregulation is used in cases where the valence of the same initial emotional response increases, i.e., when a negative image is presented in a negative context or a positive image is presented in a positive context. In these cases, the initial negative response becomes more negative and the initial positive response becomes more positive. By shift we refer to the process whereby the valence of the initial emotional response is reversed, i.e., the initial negative emotional response becomes positive and vice versa. The term general context effect, on the other hand, is used in a general sense, i.e., it refers only to the process of responding to an emotional stimulus in a context vs. responding to the same stimulus without context.

Based on your comment, these three terms have been clarified in the introduction to make it easier to follow which process is being discussed. We also checked that the correct term was used in every part of the manuscript.

In the text: By shift we refer to categorical changes where the valence of the initial emotional response is reversed, i.e., the initial negative emotional response becomes positive and vice versa. The term upregulation is used when the valence of the same initial emotional response is increased by the context; thus, a negative stimulus becomes more negative, or a positive one becomes more positive. Accordingly, the EST contained two shift and two non-shift (upregulation) conditions.

On the basis of the theory by Saxe and Houlihan [33], different emotional responses can be expected to stimuli presented in context vs. without context. They argue that forward inferences are used to attribute emotions to the target when an emotional expression is processed in a context; thus, we automatically infer that the cause of the target's emotional state, reflected in their emotionally expressive behavior, is the context/event. On this basis, we expected that context itself would recruit areas involved in emotional processing and in understanding complex social situations; thus, first we simply compared the neural responses to whole pictures vs. decontextualized (cropped) pictures. We refer to this as a general context effect in our study.

METHODS

5. Lines 156 – 159: Although, I understand that Reference #29 contains the methodological background of EST, it would be useful to add some more information about the stimulus material (e.g., database, ID number of selected pictures; selection criteria for depicting social scenes/faces/emotional expression etc).

Answer# As you suggested we added more information about the stimulus material to the Method section. We would like to mention that in order to avoid copyright issues, we have decided to restructure Figure 1 and present our task schematically. The revised manuscript contains the new schematic presentation of the Emotional Shifting Task.

In the text: For the upregulation conditions (P1P2 and N1N2), pictures were selected from the International Affective Picture System [37]. Their identification numbers were 1340, 2091, 2141, 2205, 2216, 2340, 2530, 2700, 6242, 6838, 8497, and 9050. For the shift conditions (P1N2 and N1P2) pictures were selected from the internet. Six criteria were used to select the images: (1) free for non-commercial use, (2) depicting social interactions, (3) evoking an emotional response without being shocking or extreme, (4) not depicting famous person(s), (5) eligible for shifting conditions, i.e., the valence of facial expression and the whole picture should be opposite, and (6) the images should represent as many different situations as possible.

6. Was the smaller detail of the pairs of picture (contextualized) always a face/facial expression? If yes, it would be great to clarify it in the text. (lines 156-159).

Answer# Not always, but in most cases the cropped image expressed emotion primarily through facial expression and/or posture. We added this information to the text:

In the text: The EST [22] consists of 24 picture pairs. In each pair, the first picture is always a smaller detail from the second (whole) picture. In most cases the cropped image expressed emotion primarily through facial expression and/or posture. The valence of the firstly presented picture either remains the same or changes when it is placed into a context, and the elicited emotion is expected to change accordingly (Fig 1).

7. Line 159: What was the purpose of the happy/sad emojis? Please, add a short explanation.

Answer# Thank you for your question. We decided to use happy/sad emojis in the scanner to mimic the two endpoints of the valence ratings of the Self-Assessment Manikin. This was an easy and quick method to check the valence of the second (whole) picture while performing the task.

In the text: After each pair, a happy and a sad smiley/emoji was shown on the screen (Fig 2), and participants had to choose one of them by pressing the corresponding button to indicate the valence (positive or negative) of the second (whole) picture. We decided to use emojis in the scanner to mimic the two endpoints of valence ratings in Self-Assessment Manikin (Lang et al., 1997).

RESULTS

8. Lines 193-194 and 241-242: Please, check the valence ratings in the non-shift condition (S1 Table: 5.850 vs 6.129), if there is a significant difference between them. (This difference seems to be too small.)

Answer# Thank you for your comment. We have checked all the analyses, and all the differences were significant. We also repeated our analyses with bootstrapped paired t-tests, as it is easier to interpret changes in means than in ranks. We supplemented S1 Table with these t-test results.

9. Lines 251-252 and S1 Table: The changes in valence ratings are in the focus of the study, however, it is very interesting that arousal ratings are always higher for the second pictures than for the first ones. What could be the reason for that? Do you have an explanation?

Answer# Thank you for your question. Based on the paper by Saxe and Houlihan (2017), our idea is that the context might provide extra affective information, since it gives an explanation for the emotion; thus, it guides our interpretation. For instance, seeing a crying woman in a hospital, compared to seeing just a crying woman, could elicit a more intense emotional response, as it can activate additional emotional meaning or knowledge, such as her relative being sick.

In the text: Significant changes in the valence and arousal ratings of the stimuli were observed after they were placed into a context, indicating that participants reinterpreted the emotional stimuli. These changes in valence and arousal were detected in all four types of picture sets, indicating that the context not only shaped the categorization of the emotional states [9] but might also have affected valence and intensity (arousal) in a top-down manner. The context, or more precisely the appraisal of the overall context (e.g., bullying, being in a hospital with a sick person, or childbirth), i.e., the semantic features, might provide extra affective information, as it gives an explanation for the emotion; thus, it guides our interpretation [33]. For instance, seeing a crying woman in a hospital, compared to seeing just a crying woman, could elicit a more intense emotional response, as it can activate additional emotional meaning or knowledge, such as her relative being sick.

DISCUSSION

10. Lines 338-342: Please, explain the top-down manner of changes in valence and arousal (due to the context) more detailed.

Answer# Thank you for your question. We used the term top-down manner because we believed that the appraisal/meaning of the overall context (e.g., bullying, being in a hospital with a sick person, or childbirth), i.e., the semantic features, guided the changes. We included this idea above (see our answer to question 9). However, this does not necessarily mean that the changes happened in a top-down manner, since declarative knowledge and experiences could be retrieved automatically as well as actively (Osada et al., 2008, Philos Trans R Soc Lond B Biol Sci, 363(1500):2187-99). In addition, we cannot exclude that low-level image features were relevant to the observed changes. Therefore, we decided to delete "in a top-down manner" from the text.

11. Lines 472-473: It is unclear, what is the exact limitation related to the pictures (from IAPS database and the collected pictures from the internet)?

Answer# We wanted to mention that the pictures in the shift trials are not from a standardized set of stimuli; however, we piloted the pictures before the fMRI study.

In the text: For the non-shift trials, stimuli were selected from the IAPS [37] database, whereas pictures for the shift trials were collected from the internet, so they are not from a standardized set of emotional stimuli; however, they went through several pilot studies [22].

This is a well-designed, innovative fMRI-study to explore the effect of context and valence. I recommend it for acceptance after minor changes.

Answer# Thank you very much for your positive and encouraging feedback, for your comments, suggestions and questions. They helped us a lot in improving our manuscript.

Reviewer #2: In this paper, the Authors aim to study the mechanisms of emotional flexibility (i.e., the change of emotional responses according to the context), performing both behavioral tests (valence, arousal, reaction times) and functional brain imaging (fMRI).

Major comments:

Line 359: Eye movements should be taken into consideration as a possible explanation of occipital activity; the Authors should mention the lack of eye movement recording among the limitations of the study

Answer # Thank you for bringing this to our attention. We added the lack of eye movement recording to the section of Limitations.

In the text: In addition, we did not record the eye movements of the participants, so we cannot rule out the possibility that the four different conditions differed in the amount of eye movements, and that this may have affected our results.

"facial expression" appears quite late in the manuscript; it should be mentioned in Keywords and Abstract (and/or title?)

Answer# Thank you for your suggestion. We added this information to the abstract and “facial expression” to the Keywords.

In the text: In each pair, the first picture was a smaller detail (a decontextualized photograph depicting emotions using primarily facial and postural expressions) from the second (contextualized) picture, and the neural response to a decontextualized picture was compared with the same picture in a context.

Keywords: emotional flexibility; context; neuroimaging; emotion processing; mentalization; empathy; facial expression;

The mask used in the analysis is mentioned in different manners: e.g., line 35 "explicit", line 131 "external", line 273 "exclusive"; the Authors should use consistently the most appropriate one and explain clearly what it means

Answer# Thank you for your comment. Indeed, we referred to the mask used in the analysis in different manners in the manuscript. We decided to use the term exclusive mask, as we wanted to exclude from the analysis the activation observed for the general context effect. We have changed the wording to this term everywhere in the text.

For instance, in the text: On the basis of this, we expected that context itself would recruit areas involved in emotional processing and understanding complex social situations; thus, first we simply compared the neural responses to whole pictures vs. decontextualized (cropped) pictures. We refer to this as a general context effect in our study. Then we used this activation map as an exclusive mask to be able to explore the four different types of automatic changes specifically in emotional responses. This allowed us to explore neural activation to changes in meaning triggered by the context as a passive cue, independent of the context vs. no-context differences.
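The exclusive-masking logic described above can be sketched as a simple voxelwise operation. This is a minimal illustration with random arrays and a hypothetical masking threshold, not the authors' actual pipeline (the analysis was performed in a dedicated fMRI package):

```python
import numpy as np

# Toy statistical maps standing in for real voxelwise results.
rng = np.random.default_rng(0)
context_map = rng.normal(size=(4, 4, 4))  # general context effect (context > no context)
shift_map = rng.normal(size=(4, 4, 4))    # a shift contrast (e.g., positive-to-negative)

mask_threshold = 1.64  # hypothetical masking threshold

# Exclusive mask: voxels significant for the general context effect
# are EXCLUDED, so any surviving shift-related activation is
# independent of the context vs. no-context difference.
exclusive_mask = context_map < mask_threshold
masked_shift_map = np.where(exclusive_mask, shift_map, 0.0)

# Every excluded voxel is zeroed out in the masked shift map.
assert masked_shift_map[~exclusive_mask].sum() == 0.0
```

The key design point is the direction of the mask: an inclusive mask would keep only the context-effect voxels, whereas the exclusive mask used here removes them.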

Table 1 not necessary, same concept is adequately explained in text

Answer# Thank you for your comment. Indeed, we aimed to explain the main concepts in the text, but following the suggestion of Reviewer 1, we decided to keep Table 1 as well, to clarify the terms used in the manuscript.

We would like to mention that in order to avoid copyright issues, we have decided to restructure Figure 1 and present our task schematically. The revised manuscript contains the new schematic presentation of the Emotional Shifting Task.

Line 443: it should probably read "context"?

Answer# Yes, of course. Thank you, we corrected it accordingly.

In the text: Placing negative stimuli in negative context and positive stimuli in positive context

Line 445: is it referred to "negative to negative", not " negative to positive"?

Answer# Thank you. We corrected it accordingly.

In the text: Interestingly, when a positive stimulus was placed into a positive context, or a negative stimulus was placed into a negative context, increased activation primarily in the occipital regions (calcarine, lingual gyrus and superior occipital lobule, respectively) was detected.

Minor points:

line 147: "standardized handedness questionnaire": commonly quoted as "Edinburgh inventory"

Answer# We added the name of the inventory to the text.

In the text: The participants were right-handed, as assessed by the Edinburgh Handedness Inventory [35], and had normal or corrected-to-normal vision.

Several language imperfections, too many to mention them all (the Authors should have the English checked throughout); some examples:

line 38 and 138: it should read "it resulted IN a"

line 61: it should read: "pride instead of sadness"

line 128: "/even": ?

line 146 and 166: parenthesis missing

line 151: "and were excluded with any history...": grammar!?

line 187: "crosses were presented with variable duration"

line 199: "Four hundred and nine volumes were acquired...."

line 372: "inferring affective states OF others"

line 376 and following: "Beyond its role ... reported arousal": grammar!?

Answer# Thank you for this suggestion, we used a professional English language editing service that substantially helped to increase the readability of the manuscript. Please note that these modifications are not marked by tracked changes in the manuscript.

Thank you very much for your critiques, comments and questions. They helped us improve our manuscript. We hope that we have managed to address your concerns.

Reviewer #3: Biró et al., explored the neural correlates of emotional response during context shifting. This work links emotional context shift to changes in the BOLD response. This work may be of interest to communities investigating the role of context during emotional processing. I have comments and questions about the manuscript that need clarification.

1. I recommend that the authors review the grammar of the manuscript. Some errors disrupt the clarity of writing, e.g., hyphens (line 57, 118, 119, 415) and parentheses (line 88, 411, 430, 440) when not necessary.

Answer# Thank you for this suggestion, we used a professional English language editing service that substantially helped to increase the readability of the manuscript. Please note that these modifications are not marked by tracked changes in the manuscript.

2. Further, the authors should address the structure of the manuscript. There are several one-sentence paragraphs throughout the manuscript, e.g., lines 203, 241, 364, 393, 427, and 453.

Answer# Thank you for the comment. Indeed, there are some one-sentence paragraphs in the Method and Discussion sections. In the Method section we added sentences to these paragraphs, since you asked us to provide further details about the analyses of the behavioral data (e.g., about the non-parametric/parametric analyses). We also went through the Discussion and expanded the one-sentence paragraphs.

The flow of the introduction and discussion should be improved to paint a clearer picture of the existing literature and interpretation of the results.

Answer# Thank you for your feedback. We restructured the introduction to show why the effect of context on emotional processing may be relevant for understanding emotional flexibility. Thus, we first introduced the concept of emotional flexibility, highlighting the role of adjusting our emotional response to the context. Next, we argued that emotion perception is context-dependent, and then we cited cognitive reappraisal studies to support the idea that creating a new cognitive context for an emotional stimulus is relevant for understanding the context effect on emotional trajectories. Next, we introduced the EST task and described how it can capture the shift and upregulation of emotional responses. Finally, we presented our expectations regarding neural activity based on the literature.

Regarding the discussion, we have kept the original order of the topics, but we have added a number of points to the text to make the argument more fluid.

3. On lines 245/256 the authors discuss the accuracy of participants' responses. The values in parentheses (mean: 22.93, range 19-24) do not seem to reflect the accuracy rate which leads me to believe they are an error, could the authors please explain these values?

Answer# Thank you! Of course, the data in the brackets were wrong. We corrected it.

In the text: This accuracy was 95.54% (range: 79-100%).

4. In the methods section of the manuscript, the authors detail how behavioural data will be analysed with “descriptive and non-parametric statistics”. I suggest the authors use non-parametric tests where the assumptions for parametric tests have been violated and specify this in-text. Further, I assume non-parametric tests have been employed due to violation of normality, have the authors considered transforming the data to better fit the normal distribution?

Answer# When analysing the changes in valence and arousal, we used non-parametric tests due to violation of normality. However, as it is easier to interpret changes in means than in ranks, we repeated these analyses using bootstrapped paired t-tests. In S1 Table, means are reported instead of ranks. Thus, instead of transforming the data to better fit the normal distribution, we preferred to use bootstrapped paired t-tests, since bootstrapping improves the power of the t-test under violation of normality (e.g., Konietschke & Pauly, 2013).

We added these pieces of information to the section of Statistical analysis of self-report and post-test data and completed S1 Table with changes in means and the corresponding 95% confidence intervals.

In the text: Since the distribution of valence and arousal ratings was non-normal, we used Wilcoxon Signed Rank Test to compare the valence and arousal ratings of the first and second pictures in each condition (P1N2, N1P2, P1P2, and N1N2). However, as it was easier to interpret changes in means than in ranks, we repeated these analyses using a series of bootstrapped paired t-tests. A repeated measures ANOVA was performed on the reaction times collected during the fMRI scan.
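As a rough illustration of the bootstrapped paired comparison described above, the following sketch resamples paired differences to obtain a mean change with a percentile confidence interval. The ratings are hypothetical and this is a generic illustration, not the authors' exact procedure or data:

```python
import numpy as np

def bootstrap_paired(first, second, n_boot=5000, seed=0):
    """Bootstrapped paired comparison: resample the paired differences
    with replacement and return the mean difference together with a
    percentile 95% confidence interval."""
    rng = np.random.default_rng(seed)
    diffs = np.asarray(second, dtype=float) - np.asarray(first, dtype=float)
    boot_means = np.array([
        rng.choice(diffs, size=diffs.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    lo, hi = np.percentile(boot_means, [2.5, 97.5])
    return diffs.mean(), (lo, hi)

# Hypothetical valence ratings (1-7 scale) for a P1P2-like condition:
first = [5.0, 5.5, 6.0, 5.5, 6.0, 5.0]   # first (cropped) pictures
second = [6.0, 6.5, 6.5, 6.0, 6.5, 6.0]  # second (whole) pictures
mean_diff, ci = bootstrap_paired(first, second)
```

If the confidence interval excludes zero, the change in means is taken as significant, which is easier to communicate than the rank-based statistic of the Wilcoxon test.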

5. It would be beneficial to the reader to specify the levels of the ANOVA in the results section.

Answer# We used a repeated measures ANOVA on the reaction times collected during the fMRI scan; thus, we compared the reaction times across the four conditions (P1N2, P1P2, N1P2, N1N2). To make this clearer, we have added this information to the section Statistical analysis of self-report and post-test data.

In the text: A repeated measures ANOVA was performed on the reaction times collected during the fMRI scan.

6. In the supplementary material, table S2 specifies in the legend: “Different letters (a, b, c) represent significant (p < .05) difference between mean scores, whereas the same letters represent non-significant difference between mean scores according to the paired post hoc test of repeated measure of ANOVA". I find this difficult to follow and suggest the authors present all behavioural results as figures in-text, with significance denoted by asterisks.

Answer# As you suggested, we now present all the behavioral results (including results on arousal, valence and reaction times) as figures in the text (Figs 3-5: Fig 3 and Fig 4 above, and Fig 5 below), while the tables with the behavioral results are supplementary ones.

Fig 5. Mean reaction times (and standard deviations in milliseconds) to the second picture of the picture pairs in the scanner by the type of picture pairs in the Emotional Shifting Task. Note. P1: First picture of the picture pairs is positive. N1: First picture of the picture pairs is negative. P2: Second picture of the picture pairs is positive. N2: Second picture of the picture pairs is negative. * p < .05

7. When using acronyms, use the full term first followed by the acronym in parentheses, e.g., Theory of Mind (ToM).

Answer# We checked all the acronyms in the text.

8. There are details of stimuli selection from the International Affective Picture System (IAPS) in the discussion section, which is not included in the methods section of the manuscript. Please ensure any relevant information about the stimuli and/or task is included in the methods section.

Answer# Thank you for your comment. We completed the Methods section with the relevant information. We would like to mention that in order to avoid copyright issues, we have decided to restructure Figure 1 and present our task schematically. The revised manuscript contains the new schematic presentation of the Emotional Shifting Task.

In the text: For the upregulation conditions (P1P2 and N1N2), pictures were selected from the International Affective Picture System [37]. Their identification numbers were 1340, 2091, 2141, 2205, 2216, 2340, 2530, 2700, 6242, 6838, 8497, and 9050. For the shift conditions (P1N2 and N1P2) pictures were selected from the internet. Six criteria were used to select the images: (1) free for non-commercial use, (2) depicting social interactions, (3) evoking an emotional response without being shocking or extreme, (4) not depicting famous person(s), (5) eligible for shifting conditions, i.e., the valence of facial expression and the whole picture should be opposite, and (6) the images should represent as many different situations as possible.

9. The authors should ensure they use the same referencing style throughout the manuscript, e.g., lines 397- 399.

Answer# Thank you. We corrected the text accordingly.

In the text: Many of the activated areas, such as the MTG, SMA [34], orbitofrontal cortex [46], caudate [47], and thalamus [49, 50], suggest that the context required an increased emotional reprocessing (beyond the general context effect) when it made a previously positive picture negative.

10. There are several small clusters after masking, e.g., cluster size = 2 for amygdala activation for positive pictures in a negative context > positive pictures without context. How can the authors be sure they are not over-interpreting small clusters?

Answer# We hesitated over whether to present such small clusters. As this was the first test of the task, we finally added them to the tables, since we set the threshold to p < .05, family-wise error (FWE) corrected for multiple comparisons. However, we agree that we should avoid overinterpreting our results, so we deleted the amygdala from the abstract and added some sentences to the discussion. We also have to admit that we found small clusters in the upregulation conditions after masking, so we should interpret those results with caution.

In the text: We detected a small activation cluster in the amygdala, which is often associated with negative/fearful emotional experiences and faces [64], emotional events and personal affective importance [65], or motivational relevance [66]. As this was the first fMRI study using the EST, we decided to present all significant activations. However, the cluster size was too small to interpret this result in the context of shifting.

In the text: However, only small clusters were found in these two conditions after masking, so we should interpret these results with caution.

Answer# Thank you very much for your critiques, comments and questions; they helped us improve our manuscript. We hope that we have managed to address your concerns.

Attachment

Submitted filename: Response to Reviewers.docx

Decision Letter 1

Fausta Lui

15 Dec 2022

The neural correlates of context driven changes in the emotional response: an fMRI study

PONE-D-22-11714R1

Dear Dr. Kökönyei,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Fausta Lui

Guest Editor

PLOS ONE


Acceptance letter

Fausta Lui

21 Dec 2022

PONE-D-22-11714R1

The neural correlates of context driven changes in the emotional response: an fMRI study

Dear Dr. Kökönyei:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Fausta Lui

Guest Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Fig. General context effect: Full pictures with the context vs. firstly presented images (pictures without the context) at p < .05, family-wise error (FWE) corrected for multiple comparisons.

    Coordinates are in Montreal Neurological Institute (MNI) space. Statistical maps were visualized on the MNI 152 template brain provided in MRIcroGL [39].

    (DOCX)

    S1 Table. Valence and arousal values for the pictures of the post-task by contrasts.

    P1: First picture of the picture pairs is positive. N1: First picture of the picture pairs is negative. P2: Second picture of the picture pairs is positive. N2: Second picture of the picture pairs is negative. + Results of the bootstrapped paired t-test. ++ Results of the Wilcoxon Signed Rank Test. *p < .001. Valence and arousal were measured from 1 to 7 (1 being very unpleasant and 7 very pleasant; 1 being calm and 7 very excited, respectively).

    (DOCX)

    S2 Table. Mean reaction times (in milliseconds) to the second picture in the scanner by the type of picture pairs in the Emotional Shifting Task.

    P1: First picture of the picture pairs is positive. N1: First picture of the picture pairs is negative. P2: Second picture of the picture pairs is positive. N2: Second picture of the picture pairs is negative. Different letters (a, b, c) represent significant (p < .05) difference between mean scores, whereas the same letters represent non-significant difference between mean scores according to the paired post hoc test of repeated measure of ANOVA.

    (DOCX)

    S3 Table. General context effect: Increased activations to the 2nd pictures with context compared to the 1st pictures without context.

    L = left; the initial statistical threshold was set to p < .05, family-wise error (FWE) corrected for multiple comparisons.

    (DOCX)


    Data Availability Statement

    All data underlying the findings described in the manuscript, i.e., post-scan valence and arousal ratings of the picture pairs and reaction times to the second pictures in the MR scanner along with main fMRI contrast maps, are fully available at https://osf.io/hgdky/. However, we are not allowed to share raw imaging dataset publicly, because at the time our study started, there was no information on open access data availability in the consent forms (the study was approved by the Scientific and Research Ethics Committee of the Medical Research Council (Hungary)), therefore participants were not able to accept or refuse their assent to share imaging data in an open access repository. However, raw imaging data are available from the corresponding author (Gyöngyi Kökönyei, kokonyei.gyongyi@ppk.elte.hu) or from the Department of Pharmacodynamics, Faculty of Pharmacy, Semmelweis University (titkarsag.gyhat@pharma.semmelweis-univ.hu) on reasonable request.

