Author manuscript; available in PMC: 2018 May 15.
Published in final edited form as: Brain Res. 2017 Mar 14;1663:38–50. doi: 10.1016/j.brainres.2017.03.013

Effects of task demands on the early neural processing of fearful and happy facial expressions

Roxane J Itier 1,*, Karly N Neath-Tavares 1
PMCID: PMC5756067  CAMSID: CAMS6610  PMID: 28315309

Abstract

Task demands shape how we process environmental stimuli, but their impact on the early neural processing of facial expressions remains unclear. In a within-subject design, ERPs were recorded to the same fearful, happy and neutral facial expressions presented during a gender discrimination task, an explicit emotion discrimination task and an oddball detection task, the three most studied tasks in the field. Using an eye tracker, fixation on the nose of the face was enforced with a gaze-contingent presentation. Task demands modulated amplitudes from 200–350ms at occipito-temporal sites spanning the EPN component. Amplitudes were more negative for fearful than neutral expressions from 150–350ms, starting at the N170 and with a temporo-occipital distribution, whereas no clear effect of happy expressions was seen. Task and emotion effects never interacted, in any time window or for any of the ERP components analyzed (P1, N170, EPN). Thus, whether emotion is explicitly discriminated or irrelevant to the task at hand, the neural correlates of fearful and happy facial expressions appear immune to these task demands during the first 350ms of visual processing.

Keywords: facial expressions, fearful, happy, ERPs, task demands, P1, N170, EPN

1. Introduction

Among sources of social information, facial expressions of emotions are particularly salient stimuli, conveying essential nonverbal signals regarding others’ dispositions and intentions that are critical for proper social interactions. For example, being able to tell whether the person coming towards you is happy or fearful will influence your interactions with them. Due to the biological and social relevance of facial expressions of emotion (called interchangeably facial emotions or facial expressions in the remainder of this paper for convenience), information derived from emotional faces should be processed rapidly to help regulate behaviour. In fact, due to their importance for survival, threat-related expressions such as fearful ones might even be processed automatically, i.e. in a mandatory way regardless of the demands imposed by the task (for a review see Palermo and Rhodes, 2007). The time course of facial expression processing has been evaluated using event-related potentials (ERPs), which provide excellent temporal resolution, but results remain partially inconsistent (Vuilleumier & Pourtois, 2007). This inconsistency might be due, in part, to the use of various tasks in previous reports. The present study directly investigated the effects of task demands on well-known ERPs recorded to facial expressions in a within-subject design, to address the question of automatic processing and to test the idea that inconsistencies regarding early ERP sensitivity to emotion might stem from the processing demands imposed by specific tasks.

A well-established marker of emotion processing is the Early Posterior Negativity (EPN). The EPN is a relative negativity over occipito-temporal sites ~150–350ms post-stimulus onset (though most often measured between ~200–300ms) for emotional relative to neutral stimuli, whether verbal (Kissler et al., 2007; Schacht & Sommer, 2009) or non-verbal (Junghöfer et al., 2001; Schupp et al., 2003, 2004a, 2007a,b). For faces, more negative amplitudes at those posterior sites are usually seen for fearful and angry (i.e. threat-related) expressions compared to neutral and happy expressions (e.g., Rellecke et al., 2011; Schupp et al., 2004b; Schupp et al., 2006), although some have also reported a larger negativity for happy than neutral faces (e.g. Marinkovic & Halgren, 1998). As more negative amplitudes for pleasant than unpleasant pictures have also been reported, with both eliciting larger responses than neutral stimuli (Schupp et al., 2003; 2004a; Schupp et al., 2007a,b), the EPN has been suggested to reflect enhanced processing of emotionally salient stimuli in general, with a particular sensitivity for threatening faces (Schupp et al., 2004b). The EPN would reflect activity linked to the appraisal of emotional valence in occipito-temporal visual areas (Schupp et al., 2007a; Vuilleumier & Pourtois, 2007).

There have also been reports of earlier sensitivity to facial emotions, namely on the preceding P1 and face-sensitive N170 components, although this sensitivity remains debated. The P1 component occurs between 80–120ms post-stimulus onset at occipital sites, is thought to be generated within extrastriate visual cortex (Clark et al., 1995), and is known to be sensitive to attention (Luck, 1995; Luck et al., 2000; Mangun, 1995) and low-level stimulus properties (i.e. colour, contrast, luminance and spatial frequencies – Johannes et al., 1995; Rossion & Jacques, 2012). Early emotion effects on the P1 have been reported, mostly for fearful expressions that elicit larger P1 than neutral or happy expressions (e.g., Batty & Taylor, 2003; Jetha et al., 2012; Pourtois et al., 2004; Sato et al., 2001; Smith et al., 2013; Wijers & Banis, 2012); however, many studies also failed to report this effect (Vuilleumier & Pourtois, 2007). This early fear-related P1 modulation has been suggested to reflect the coarse detection of threatening fearful faces, driven by fast processing of low spatial frequencies via a subcortical route involving the amygdala (see Palermo & Rhodes, 2007 and Vuilleumier & Pourtois, 2007 for reviews; Vlamings et al., 2009).

Some neuroimaging studies have reported larger amygdala activation to facial expressions during emotion-irrelevant tasks compared to emotion-relevant tasks (e.g. Critchley et al., 2000; Hariri et al., 2000; Lange et al., 2003), and the P1 fear effect has also been most commonly reported in studies using tasks where emotion was irrelevant, including oddball detection tasks (e.g., Batty & Taylor, 2003; Williams et al., 2004) and passive viewing of emotional faces (e.g., Pizzagalli et al., 2002; Smith et al., 2013). It is thus possible that this P1 effect is modulated by task demands, and might be largest in emotion-irrelevant tasks when attention is directed away from the emotional content of the face, compared to emotion-relevant tasks such as explicit emotion discrimination judgements where attention is focused on the emotional content of the face. However, direct comparison of P1 modulations by fearful facial expressions across tasks in a within-subject design is currently lacking.

Following the P1, the face-sensitive N170 ERP component, recorded over occipito-temporal electrodes ~130–200ms post-face onset, is thought to reflect encoding of the face structure and its configuration (Bentin et al., 1996; Bentin & Deouell, 2000; Eimer, 2000; Itier & Taylor, 2002, 2004; for a review see Rossion & Jacques, 2012). Proposed generators of this component include areas of the cortical face perception network such as the fusiform gyrus, the inferior occipital gyrus and possibly the superior temporal sulcus (e.g. Itier & Taylor, 2002; 2004; Rossion et al., 1999; Rossion & Jacques, 2012). As with the P1, findings regarding the N170 sensitivity to facial emotions have been inconsistent. Larger N170s have been reported for fearful compared to neutral and happy faces in some studies (e.g. Batty & Taylor, 2003; Blau et al., 2007; Calvo & Beltrán, 2014; Leppänen et al., 2007; Leppänen et al., 2008; Morel et al., 2014; Neath-Tavares & Itier, 2016 – Exp.2), a finding often interpreted as an increase in activation of the face perception network driven by amygdalar projections (Vuilleumier & Pourtois, 2007). However, a lack of such N170 sensitivity to fearful expressions has also been reported in many studies (e.g. Eimer et al., 2003; Hermann et al., 2002; Meaux et al., 2014; Neath & Itier, 2015; Neath-Tavares & Itier, 2016-Exp.1; Rellecke et al., 2013; Schupp et al., 2004b; Smith et al., 2013). Note that a few studies have also put forth the idea that the emotion effects seen on the N170 are simply due to superimposed EPN activity (Rellecke et al., 2011; Rellecke, Sommer, & Schacht, 2012; Schacht & Sommer, 2009), i.e. that they reflect processing of emotional content superimposed onto the face sensitivity indexed by the N170.

The current literature suggests that some experimental factors modulate emotion-related N170 effects. These include the type of reference used in the EEG recordings, with the greatest emotion effects seen with the average reference (e.g., Hinojosa et al., 2015; Rellecke et al., 2013), but also possibly the nature of task demands (Hinojosa et al., 2015). A variety of emotion-relevant and emotion-irrelevant tasks have been employed, the most common being oddball detection tasks (Batty & Taylor, 2003; Leppänen et al., 2007; Neath-Tavares & Itier, 2016, Exp.2; Williams et al., 2004), pure passive viewing of emotional faces with no explicit task (Blau et al., 2007; Hermann et al., 2002; Pizzagalli et al., 2002; Schupp et al., 2004b; Smith et al., 2013), categorization of face gender (Neath & Itier, 2015; Pourtois et al., 2005; Sato et al., 2001; Wijers & Banis, 2012) and categorization of facial emotion (Calvo & Beltrán, 2014; Eimer et al., 2003; Leppänen et al., 2008; Neath-Tavares & Itier, 2016, Exp.1; Schacht & Sommer, 2009). A recent meta-analysis reported greater effect sizes for emotion effects on the N170 in tasks where attention was not directed to the facial emotion (e.g. passive viewing) or was directed away from it, compared to tasks where attention was drawn onto the facial expression, such as expression categorization tasks (Hinojosa et al., 2015). Thus, as for the P1, differences in attention to the facial expression imposed by task demands may be partly responsible for the lack of consistent N170 emotion modulations. However, within-subject designs directly comparing the effects of task demands on neural responses to facial expressions are scarce.

To the best of our knowledge, only two ERP studies have directly investigated the impact of task demands on the early neural response to facial expressions in a within-subject design. The first study, by Wronka and Walentowska (2011), required participants to categorize angry, happy and neutral expressions as either emotional or neutral, and to categorize face gender, in separate blocks. A three-way interaction between task, emotion and hemisphere was found, such that N170 amplitude was larger for emotional compared to neutral faces during the emotion discrimination task over the right hemisphere, whereas no emotion effect was seen during the gender discrimination task. In contrast, the EPN measured between 240–340ms was enhanced for emotional compared to neutral faces in both tasks. In the second study, by Rellecke et al. (2012), participants viewed angry, happy and neutral expressions and were asked to either passively view the faces, discriminate faces from words, identify face gender or explicitly identify the emotional expressions (five different tasks tested in total). Angry expressions elicited larger P1, N170 and EPN amplitudes than neutral faces in all tasks, while an increased response to happy compared to neutral expressions was seen on the EPN (150–300ms) during the gender and emotion discrimination tasks only. These results were interpreted as reflecting automatic processing of threat-related angry expressions regardless of task demands, whereas attention to the face or emotion seems required to differentiate happy from neutral expressions at the neural level (Rellecke et al., 2012). These contradictory findings highlight the need to investigate further the impact of task demands on early neural responses to facial expressions in general, and to fearful expressions in particular, which have not yet been investigated in a within-subject design.

To address this gap, the present study tested the impact of task demands on the neural processing of fearful, happy and neutral facial expressions presented in three task conditions: (1) a Gender Discrimination (GD) task requiring categorization of face gender, (2) an Emotion Discrimination (ED) task requiring categorization of the facial expression, and (3) an Oddball Detection (ODD) task requiring a response to infrequent flower stimuli. The emotion discrimination task is emotion-relevant while the other two are emotion-irrelevant; the tasks thus differ in the overall attention to the face they require and in the face cues used to perform them. In addition, the comparison of different emotion-irrelevant tasks such as the gender and oddball tasks is important as they differ in their level of processing (Rellecke et al., 2012), the oddball task arguably requiring shallower processing of the face than the gender task. If the amygdala response to threatening faces is indeed largest in emotion-irrelevant tasks (e.g. Lange et al., 2003), and if P1 and N170 modulations by fearful expressions are driven by amygdala-mediated stimulation of the cortical face network (e.g. Vuilleumier & Pourtois, 2007), then fear effects on the P1 and N170 should be larger in the GD and ODD tasks than in the ED task. However, based on the two previous within-subject studies investigating angry expressions, it was also possible that a larger emotion effect would be seen in the ED than in the GD (and ODD) tasks (over the right hemisphere, Wronka & Walentowska, 2011), or that emotional responses would be observed to the same extent in all three tasks (Rellecke et al., 2012). Finally, the possibility remained that no emotional modulation would be seen on these early components. Based on the previous conflicting findings (Rellecke et al., 2012; Wronka & Walentowska, 2011), it was also unclear whether task demands would modulate the emotion-related EPN component.

2. Methods

2.1. Participants

A total of 52 undergraduate students from the University of Waterloo (UW) were tested and received course credit for their participation. All had lived in North America for at least 10 years and were thus accustomed to seeing Caucasian faces. All reported normal or corrected-to-normal vision, no history of head injury or neurological disease, and no current medication use. All gave written informed consent. The study was approved by the Research Ethics Board at UW and was done in compliance with the Code of Ethics of the World Medical Association (Declaration of Helsinki). Twenty-three participants were rejected: four due to high anxiety scores (see procedure below), three due to completion of less than half of the study, three due to too many artefacts resulting in too few trials per condition, nine due to too few trials after removing trials with eye movements (see procedure section below), one due to problems recording the EEG file and three due to failure to calibrate participants’ eye movements with the eye-tracker. The results from the remaining 29 participants were retained for the final analysis (20.4 ± 1.8 years, 15 males, 26 right-handed).

2.2. Stimuli

Stimuli consisted of fearful, happy and neutral facial expressions of 8 males and 8 females from the “NimStim” database (MacBrain Face Stimulus Set1, Tottenham et al., 2009) and 6 flower stimuli (see Figure 1). We also included a flipped version of each face to control for any minor differences in low-level contrast and pixel intensity between the left and right face halves. All images were converted to grayscale and an elliptical mask was applied using the GNU Image Manipulation Program (GIMP) to remove hair, ears and other paraphernalia. The faces subtended 4.74° horizontally and 10.12° vertically when viewed from a distance of 70cm and were presented on a white background, for an overall image visual angle of 8.93° horizontally and 13.26° vertically. Root mean square (RMS) contrast and normalized pixel intensity (PI) of the pictures were calculated using custom Matlab (Mathworks, Inc.) scripts. Paired t-tests (two-tailed) revealed no differences between emotions for mean normalized PI and RMS contrast (PI=.616 (S.D.=.003); RMS=.371 (S.D.=.01); p > .1 for all comparisons). Participants fixated on the tip of the nose, as commonly done in face research. The coordinates of the fixation location corresponding to the tip of the nose were calculated for each identity and expression, with minor variations between the 16 identities and the three expressions used, and were aligned with the center of the fixation cross coordinates on every trial.
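The authors computed these statistics with custom Matlab scripts; purely as an illustration, a minimal Python/NumPy sketch of the two image measures and the paired test, assuming grayscale images scaled to [0, 1] and treating “normalized pixel intensity” as the mean normalized gray level, could look like this:

```python
import numpy as np
from scipy import stats

def image_stats(img):
    """Return (mean normalized pixel intensity, RMS contrast) for a grayscale image.

    Assumes `img` is a 2D float array scaled to [0, 1]; in practice the white
    background outside the elliptical mask would be excluded first.
    """
    pi = img.mean()          # normalized pixel intensity
    rms = img.std()          # RMS contrast = SD of the normalized intensities
    return pi, rms

# Hypothetical per-image values for two expression conditions (16 identities each)
rng = np.random.default_rng(0)
pi_fear, pi_neutral = rng.random(16), rng.random(16)

# Paired two-tailed t-test between emotions, as described in the text
t, p = stats.ttest_rel(pi_fear, pi_neutral)
```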

Figure 1.

Top panel: trial example. The fixation cross was displayed on the screen and participants had to fixate it for 306ms before the face could be presented (gaze-contingent procedure). The grayscale picture was then flashed for 259ms, immediately followed by a response screen. During the Gender Discrimination (GD) and Oddball Detection (ODD) tasks, a white screen with a fixation point appeared for 2000ms, during which participants indicated their response. For the Emotion Discrimination (ED) task, the response screen remained until participants made their response. Bottom left panel: exemplars of the fearful, happy and neutral expressions and of the flower stimuli (oddball task only) used in the present study (from the NimStim database).

2.3. Apparatus and Procedure

Participants sat in a sound-attenuated, Faraday-cage protected booth, 70cm from a ViewSonic G225f 21-inch color monitor (85Hz refresh rate) driven by an Intel Core i7-3820 computer. Task conditions were presented in separate experimental blocks and the order of the tasks (gender discrimination (GD), emotion discrimination (ED) and oddball detection (ODD)) was counterbalanced across participants (see Figure 1 for a trial example). At the beginning of each task participants received specific instructions followed by a 12-trial practice session. For the GD task, participants responded whether the stimulus was male or female by pressing one of two buttons on a game controller using their index fingers; button order was counterbalanced across participants. In the ED task, participants selected the expression (fearful, happy or neutral) from a vertically-presented forced-choice response screen using a mouse click. Participants were told to keep their hand on the mouse during the entire experimental block to reduce response times. Emotion order on the response screen was counterbalanced between participants. In the ODD task, participants were told they would see a series of images including faces and flowers and were asked to press the space bar for flowers. In all tasks, participants were instructed to respond as quickly and accurately as possible.

Participants were instructed to fixate on the black fixation cross in the center of the screen to initiate each trial and to remain fixated there until the response screen appeared. On each trial, the fixation cross was presented in a gaze-contingent fashion: participants had to fixate the cross for 306ms2 before the face was displayed, positioned such that the nose fell where the fixation was, ensuring fixation on the nose when the face appeared. This procedure was employed because recent studies have shown that 1) the N170 is larger when fixation is on the eyes compared to the nose or mouth (de Lissa et al., 2014; Nemrodov et al., 2014; Neath & Itier, 2015; Neath-Tavares & Itier, 2016) and 2) participants move their eyes toward the eyes of emotional faces even when stimuli are presented for as little as 150ms (Gamer et al., 2013), thus possibly impacting the neural recordings to facial expressions. The target face stimulus (or flower in the ODD task) was then presented for 259ms. In the ODD and GD tasks, the target stimulus was immediately followed by a fixation cross presented for 2000ms. This timing was chosen to keep trial duration as consistent as possible between tasks. In the ED task, the target stimulus was immediately followed by the response screen, which remained until response. To minimize artefacts, participants were instructed to avoid eye movements and to blink only after making their response.
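As a schematic illustration (not the authors’ actual code), the gaze-contingent trial logic could be sketched as follows; `get_gaze`, `show_fixation_cross` and `show_face` are hypothetical stand-ins for the real eye-tracker and display calls, and the pixel-to-degree factor is a placeholder:

```python
import math
import time

FIX_HOLD_MS = 306      # required continuous fixation before face onset
TOLERANCE_DEG = 1.4    # allowed gaze deviation around the fixation location
FACE_MS = 259          # face presentation duration

# --- hypothetical stand-ins for the real Eyelink/display API ---
def show_fixation_cross(pos): pass
def show_face(nose_at, duration_ms): pass
def get_gaze(): return (512, 384)   # would poll the eye tracker in reality

def within_tolerance(gaze, target, deg_per_px):
    dx, dy = gaze[0] - target[0], gaze[1] - target[1]
    return math.hypot(dx, dy) * deg_per_px <= TOLERANCE_DEG

def run_trial(target=(512, 384), deg_per_px=0.03):
    show_fixation_cross(target)
    hold_start = None
    while True:
        if within_tolerance(get_gaze(), target, deg_per_px):
            hold_start = hold_start or time.monotonic()
            if (time.monotonic() - hold_start) * 1000 >= FIX_HOLD_MS:
                break                  # fixation criterion met: show the face
        else:
            hold_start = None          # fixation broken: restart the clock
    # the face is drawn so that its nose lands exactly on the fixation location
    show_face(nose_at=target, duration_ms=FACE_MS)
```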

A block consisted of 96 face trials (3 emotions × 16 identities × 2 face presentations, standard and mirror-reversed) and, for the ODD task, each block also contained 12 flowers. For each task, each block was repeated three times with a different (randomized) trial order, for a total of 96 trials per facial expression per task. Practice trials were given before each task and, following the computer tasks, participants completed the 21-item trait test from the State-Trait Inventory for Cognitive and Somatic Anxiety (STICSA; Ree et al., 2008). The STICSA is a Likert-scale questionnaire assessing cognitive and somatic symptoms of anxiety as they pertain to one’s mood in general. Only participants scoring in the normal range, below 43, were kept in the analyses (a score of 43 or above is a likely indication of a clinical anxiety disorder; Van Dam et al., 2013). Trait anxiety was monitored as it has been shown to interact with emotion processing (e.g. Bar-Haim et al., 2007).

2.4. Electrophysiological Recordings

The EEG was recorded continuously at 1024Hz with an ActiveTwo Biosemi system from 72 recording sites: 66 channels in an electrode cap following the extended 10/20 system (the standard 64 locations plus custom CB1/CB2 sites) and three additional pairs of electrodes (two pairs on the outer canthi and infra-orbital ridges to monitor ocular artifacts, and one pair over the mastoids). A Common Mode Sense (CMS) active electrode and a Driven Right Leg (DRL) passive electrode acted as the ground during recordings (for details see www.biosemi.com/faq/cms&drl.html). The electrodes were average-referenced offline.

2.5. Eye-Tracking Recordings

Eye movements were monitored using a remote Eyelink 1000 eye-tracker (SR Research) with a sampling rate of 1000Hz. The eye-tracker was calibrated to each participant’s dominant eye, but viewing was binocular. If a participant spent over 10s without successfully fixating the cross, a drift correction was applied. After two drift corrections, a mid-block recalibration was performed. Calibration used a nine-point automated calibration accuracy test and was repeated if the error at any point exceeded 1°, or if the average error across all points exceeded 0.5°. Participants’ head position was stabilized with a head and chin rest to keep viewing position and distance constant.
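For illustration, the calibration acceptance rule described above amounts to a simple check like the following sketch (not the Eyelink software’s actual implementation):

```python
def calibration_ok(errors_deg):
    """Accept a nine-point calibration only if no point errs by more than 1 degree
    and the average error across all points stays at or below 0.5 degrees."""
    return max(errors_deg) <= 1.0 and sum(errors_deg) / len(errors_deg) <= 0.5

# Example: one point at 1.2 deg forces a recalibration
print(calibration_ok([0.3, 0.4, 0.2, 1.2, 0.3, 0.4, 0.3, 0.2, 0.4]))  # False
```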

2.6. Data Processing and Analyses

Only correctly answered trials were used for ERP analysis in the gender discrimination (GD) and emotion discrimination (ED) tasks, and only correctly rejected trials in the ODD task (i.e. the face trials to which participants did not respond, given the task required a response only to flower stimuli). A correct ERP trial was further required to have an RT within 2.5 standard deviations of the mean of its condition (Van Selst & Jolicoeur, 1994), as a way to eliminate anticipatory responses (which would overlap with the EPN component) or late responses; this excluded 6.6% of the total number of trials.

To maintain foveation at the defined central fixation location (i.e., the tip of the nose), trials in which a saccadic eye movement was recorded beyond 1.4° of visual angle around the fixation location were removed from further analysis. An average of 2.9% of trials were removed during this step across the 29 participants included in the final sample.
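A minimal NumPy sketch of these two per-condition trial-exclusion criteria (RTs beyond 2.5 SD of the condition mean; maximal gaze deviation beyond 1.4°); the arrays below are hypothetical examples:

```python
import numpy as np

def rt_mask(rts, n_sd=2.5):
    """Keep trials whose RT lies within n_sd standard deviations of the condition mean."""
    mu, sd = rts.mean(), rts.std(ddof=1)
    return np.abs(rts - mu) <= n_sd * sd

def gaze_mask(max_dev_deg, dev_deg):
    """Keep trials whose largest gaze deviation from fixation stays within tolerance."""
    return dev_deg <= max_dev_deg

rts = np.array([650.0, 700.0, 690.0, 1500.0, 640.0])   # hypothetical RTs (ms), one condition
dev = np.array([0.3, 0.5, 2.0, 0.4, 0.6])              # hypothetical max deviations (deg)

keep = rt_mask(rts) & gaze_mask(1.4, dev)              # boolean mask of retained trials
```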

The ERP data were processed offline using the EEGLab (Delorme & Makeig, 2004) and ERPLab (http://erpinfo.org/erplab) toolboxes implemented in Matlab (Mathworks, Inc.). Epochs of 500ms were generated with a 100ms pre-stimulus baseline and were digitally band-pass filtered (0.01–30Hz, two-way least-squares FIR filter). Artifact rejection was done in two steps. First, using an automated procedure, trials containing artifacts exceeding ±70μV were rejected. Second, visual inspection was performed and trials still containing artefacts were manually rejected. Participants with fewer than 30 trials in any condition (out of 96 initial trials) were rejected. The average number of trials per condition in the final sample was 67 (SD=12) and did not significantly differ across emotions or tasks.
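The authors used EEGLAB/ERPLAB in Matlab; purely as an illustration of the same steps (average reference, 0.01–30Hz band-pass, 500ms epochs with a 100ms baseline, ±70μV rejection), a rough MNE-Python equivalent might look as follows. The file name and trigger setup are hypothetical, and note that MNE’s `reject` criterion is peak-to-peak rather than the absolute ±70μV threshold used here:

```python
import mne

# Hypothetical Biosemi recording; real trigger codes would map onto conditions
raw = mne.io.read_raw_bdf("subject01.bdf", preload=True)
raw.set_eeg_reference("average")            # offline average reference, as in the paper
raw.filter(l_freq=0.01, h_freq=30.0)        # band-pass comparable to the 0.01-30Hz FIR filter

events = mne.find_events(raw)               # assumes a standard stimulus trigger channel
epochs = mne.Epochs(
    raw, events,
    tmin=-0.1, tmax=0.4,                    # 500ms epochs with a 100ms pre-stimulus baseline
    baseline=(None, 0),
    reject=dict(eeg=70e-6),                 # drop epochs with >70 microvolt peak-to-peak EEG
    preload=True,
)
evoked = epochs.average()                   # grand ERP; per-condition averages in practice
```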

The approach taken for data analysis followed our previous work on facial expressions in which the exact same tasks were used (Neath & Itier, 2015; Neath-Tavares & Itier, 2016), to allow for better comparison. Based on the group averages, the P1 component was maximal at electrodes O1, O2 and Oz and was thus measured at these sites between 80 and 120ms post-stimulus onset using automatic peak detection. The N170 component was maximal at different electrodes across participants, and within a given participant the N170 was often maximal at different electrodes across the two hemispheres (but maximal at the same electrodes across conditions). Thus, to best capture that component, the N170 peak was measured between 120–200ms at the electrode where it was maximal for each subject and each hemisphere, using automatic peak detection (see Table 1; see also Rousselet & Pernet, 2011). To measure the time course of the task and emotion effects, mean amplitudes were also calculated within six 50ms windows from 50ms to 350ms. This approach allowed us to monitor neural activity between the P1 (which peaked on average around 100ms) and the N170 (which peaked on average around 150ms), a transition period that has been suggested to be important in previous work (e.g. Itier et al., 2004; Rousselet et al., 2008; Schyns et al., 2007). In addition, monitoring the entire waveform allowed for a more complete picture and was especially important to track the EPN, which has been analyzed in different time windows in previous studies (e.g. Leppänen et al., 2008; Schacht & Sommer, 2009; Schupp et al., 2004a,b).
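For illustration, both measurement approaches (automatic peak detection in a latency window, and mean amplitudes over successive 50ms windows) reduce to a few lines of NumPy; the `erp` array below is a hypothetical (electrodes × time points) average at a 1000Hz sampling rate:

```python
import numpy as np

times = np.arange(-100, 400)                  # ms relative to stimulus onset (1000Hz)
erp = np.random.randn(3, times.size)          # hypothetical ERP at e.g. O1/Oz/O2

def peak_amplitude(erp, times, tmin, tmax, polarity):
    """Per-electrode amplitude of the most positive (polarity=+1) or most
    negative (polarity=-1) point within the [tmin, tmax] ms window."""
    win = (times >= tmin) & (times <= tmax)
    return polarity * (polarity * erp[:, win]).max(axis=1)

p1 = peak_amplitude(erp, times, 80, 120, polarity=+1)     # P1 at occipital sites
n170 = peak_amplitude(erp, times, 120, 200, polarity=-1)  # N170, per subject/hemisphere

# Mean amplitudes in six consecutive 50ms windows from 50 to 350ms
windows = [(t, t + 50) for t in range(50, 350, 50)]
means = np.array([erp[:, (times >= a) & (times < b)].mean(axis=1)
                  for a, b in windows])       # shape: (6 windows, n_electrodes)
```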

Table 1.

Number of subjects for whom the N170 was maximal at left (P9, CB1, PO7) and right hemisphere (P10, CB2, PO8) electrodes.

Left Hemisphere Right Hemisphere
P9 16 P10 20
CB1 11 CB2 8
PO7 2 PO8 1

Preliminary exploration of the data revealed that ERP variations occurred at occipital sites and at lateral posterior sites. Based on our previous studies where similar effects were seen at these same locations (Neath & Itier, 2015; Neath-Tavares & Itier, 2016), we extracted mean amplitudes separately for each window at occipital sites (O1, O2 and Oz), at left lateral-posterior sites (CB1, P9, P7, PO7) and at right lateral-posterior sites (CB2, P10, P8 and PO8). We then averaged the amplitudes across the electrodes of each of these three clusters. Note that the lateral-posterior electrodes encompassed the N170, the visual P2 component and the EPN.

All analyses used SPSS Statistics 22. The P1 and N170 peak amplitudes were analyzed separately, as were the mean amplitudes. For mean amplitudes, we first ran an omnibus ANOVA with the within-subject factors time window (6), expression (3: fearful, happy, neutral), task (3: GD, ED, ODD) and cluster (3: occipital, left lateral, right lateral). For the P1 peak, an electrode factor was used (3: O1, O2, Oz). For the N170, there was no electrode factor but a hemisphere factor was used (2: LH, RH). Where necessary, interactions were followed up with separate ANOVAs for each time window, task, emotion or cluster.

All ANOVAs used Greenhouse-Geisser adjusted degrees of freedom when Mauchly’s test of sphericity was significant, and pairwise comparisons used Bonferroni corrections for multiple comparisons.
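The analyses were run in SPSS; as an illustrative sketch only, an analogous repeated-measures ANOVA with Greenhouse-Geisser correction and Bonferroni-corrected pairwise comparisons can be run with the pingouin package (≥0.5; data frame and values hypothetical). The full design additionally crossed task, time window and cluster:

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one mean amplitude per subject and emotion
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "subject": np.repeat(np.arange(29), 3),
    "emotion": np.tile(["fearful", "happy", "neutral"], 29),
    "amp":     rng.normal(size=29 * 3),
})

# Repeated-measures ANOVA; correction=True also reports the
# Greenhouse-Geisser corrected p-value used when sphericity is violated
aov = pg.rm_anova(data=df, dv="amp", within="emotion",
                  subject="subject", correction=True)

# Bonferroni-corrected pairwise comparisons, as in the paper
post = pg.pairwise_tests(data=df, dv="amp", within="emotion",
                         subject="subject", padjust="bonf")
```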

3. Results

3.1. Behavioural Results

The percentage of errors for each task can be found in Table 2. As the oddball task required a response to flower stimuli rather than to faces, the few errors made in that task are not directly comparable to errors made in the other two tasks. Overall detection of flower stimuli was excellent (~99%), demonstrating that participants were attending to the task, and very few false alarms were made to the non-target face stimuli (<1%).

Table 2.

Behavioural responses obtained in the three tasks, as a function of facial expression. Mean percent errors are reported for all three tasks, while mean reaction times (RT) are reported for the Gender Discrimination (GD) and Emotion Discrimination (ED) tasks only.

            A. Gender Discrimination (GD)    B. Emotion Discrimination (ED)   C. Oddball Detection (ODD)
            Error % (SE)   RT ms (SE)        Error % (SE)   RT ms (SE)        Error % (SE)

Fear        2.97 (.53)     699 (20)          4.72 (1.16)    1158 (29)         .31 (.09)
Happy       2.75 (.49)     686 (18)          .92 (.23)      1031 (38)         .29 (.12)
Neutral     2.85 (.49)     687 (20)          3.29 (.84)     1113 (46)         .15 (.07)

Errors made in the Gender Discrimination and Emotion Discrimination tasks were directly compared using a 2 (task) × 3 (emotion) repeated-measures ANOVA. No main effect of task was seen. A main effect of emotion (F(1.33, 37.32)=4.18, MSE=21.4, p=.037, ηp2=.13) was qualified by an emotion by task interaction (F(1.72, 48.31)=4.57, MSE=12.11, p=.019, ηp2=.14). While the number of errors did not differ between facial emotions in the Gender task (F=.1, p=.87, ηp2=.004), fewer errors were made for happy than for fearful and neutral expressions in the ED task (effect of emotion, F(1.45, 40.72)=4.99, MSE=29.42, p=.01, ηp2=.15; happy-fearful comparison p=.007; happy-neutral comparison p=.04).

Reaction times were also directly compared between these two tasks. As can be seen in Table 2, RTs were much longer in the ED than in the GD task (main effect of task, F(1, 28)=174.18, MSE=42063.3, p<.001, ηp2=.86). A main effect of emotion was also found (F(2, 56)=7.84, MSE=9085.9, p=.001, ηp2=.22), which was qualified by a task by emotion interaction (F(2, 56)=6.82, MSE=7307.7, p=.003, ηp2=.19). When each task was analyzed separately using a one-way ANOVA with the factor emotion, no emotion effect was seen for the GD task (F(2, 56)=2.81, p=.077, ηp2=.091), while for the ED task a main effect of emotion (F(2, 56)=7.54, MSE=16233, p=.001, ηp2=.21) was due to overall faster RTs for happy than fearful or neutral faces (happy-fearful comparison, p=.001; happy-neutral comparison, p=.061).

3.2. Event-Related Potential (ERP) Results

3.2.1. P1 Peak Amplitude

No effects were found on the P1 peak (Figure 2): the main effects of electrode (F=2.52, p=.099, ηp2=.083), task (F=2.15, p=.129, ηp2=.071) and emotion (F=.213, p=.792, ηp2=.008), and the interactions between these factors, in particular the emotion by task interaction (F=.341, p=.835, ηp2=.012), were all non-significant.

Figure 2.

(A) Group averages for the three tasks (across emotions) at occipital sites O1, Oz and O2, showing the task effect between 300–350ms, with smaller amplitudes for the ED and ODD tasks compared to the GD task. (B) Group averages for the three emotions (across tasks) at the same occipital sites. The P1 did not show any effect.

3.2.2. N170 Peak Amplitude

The N170 amplitude (Fig. 3), analyzed at the electrode where it was maximal for each participant, was larger in the right than in the left hemisphere (main effect of hemisphere, F(1, 28)=9.51, MSE=65.1, p=.005, ηp2=.25). No main effect of task was seen (F=.35, p=.69, ηp2=.012)3.

Figure 3.

Group averages featuring the N170 across (A) tasks and (B) emotions, at P9 and P10 electrodes. (C) Group difference waveforms showing the fear effect (Fear minus Neutral [F-N], solid line) and happiness effect (Happy minus Neutral [H-N], dashed line) at P10. The fear effect was significant between 150–350ms and largest around 180ms (map). Task effects were significant between 200–350ms.

As seen in Figure 3B, the N170 amplitude was also larger for fearful compared to both happy and neutral expressions, which did not differ from each other (main effect of emotion, F(1.93, 54.11)=9.51, MSE=7.13, p<.001, ηp2=.25; fear-neutral comparison p=.001, fear-happy comparison p=.024, happy-neutral comparison p=.29). No interaction between task and emotion was seen (F=.71, p=.57, ηp2=.025). Figure 4 displays the N170 and EPN for the three emotions separately for each task.

Figure 4.

Group averages featuring the ERP waveform including the N170 component, at P9 and P10 electrodes for each task (ODD: Oddball task; ED: Emotion Discrimination; GEN: Gender discrimination).

3.2.3. Mean Amplitudes over Six Time Windows

The omnibus ANOVA revealed a main effect of time window (F(2.97, 83.04)=27.48, MSE=319.1, p<.0001, ηp2=.495), a main effect of cluster (F(2, 56)=17.79, MSE=169.8, p<.0001, ηp2=.388) and a main effect of emotion (F(2, 56)=7.42, MSE=18.8, p=.002, ηp2=.21). Most importantly, time window interacted with all other factors: time window by cluster (F(4.4, 123.2)=8.17, MSE=49.4, p<.0001, ηp2=.226), time window by task (F(5.5, 154.1)=3.99, MSE=10.3, p=.001, ηp2=.125); time window by emotion (F(5.53, 154.73)=8.4, MSE=3.5, p<.0001, ηp2=.231), time window by cluster by task (F(7.43, 208.1)=13.1, MSE=2.7, p<.0001, ηp2=.319) and time window by cluster by emotion (F(7.81, 218.5)=2.72, MSE=1.1, p=.008, ηp2=.089). These interactions justified analyzing each time window separately. Table 3 reports the statistics for each time window.

Table 3.

Statistical effects on mean amplitudes analyzed over six 50ms time windows at the occipital (Occ: O1, O2, Oz), left lateral (Llat: CB1, P7, PO7, P9) and right lateral (Rlat: CB2, P8, PO8, P10) clusters, with F, p, MSE and ηp2 values. ODD, oddball detection task; ED, emotion discrimination task; GD, gender discrimination task; F, fearful; H, happy; N, neutral. Bonferroni-corrected paired comparisons are also reported. Note that only significant effects or interactions are reported (time windows without an entry showed no significant effect), except the non-significant emotion × task interactions, which we deemed important to report. For interactions, follow-up ANOVA results are also reported.

Task
  200–250ms: F(1.6,44.93)=4.37, MSE=17.93, p=.025, ηp2=.135; GD<ODD (p=.003)
  250–300ms: F(1.59,44.64)=6.81, MSE=21.24, p=.005, ηp2=.196; GD<ODD (p=.001)

Emotion
  150–200ms: F(2,56)=18.18, MSE=4.8, p<.0001, ηp2=.394; F<H, F<N (ps<.0001)
  200–250ms: F(2,56)=9.57, MSE=4.87, p<.0005, ηp2=.26; F<H, F<N (ps<.005)
  250–300ms: F(1.63,45.62)=7.37, MSE=6.86, p=.003, ηp2=.21; F<N (p<.005)
  300–350ms: F(2,56)=5.75, MSE=5.48, p=.005, ηp2=.17; F<N (p=.004)

Cluster × Task
  250–300ms: F(3.1,86.95)=4.37, MSE=4.07, p=.006, ηp2=.135
    Occ: no task effect
    Llat: task effect, F(1.6,45.3)=7.74, MSE=12.5, p=.002, ηp2=.217; GD<ODD (p=.002), GD<ED (p=.036)
    Rlat: task effect, F(1.9,52.02)=8.53, MSE=7.62, p=.001, ηp2=.234; GD<ODD (p<.0005)
  300–350ms: F(4,112)=16.98, MSE=4.9, p<.0001, ηp2=.378
    Occ: task effect, F(2,56)=11.76, MSE=9.78, p=.0001, ηp2=.296; GD>ODD (p<.0001), GD>ED (p=.003)
    Llat: task effect, F(2,56)=3.47, MSE=9.04, p=.04, ηp2=.11; GD<ODD (p=.024)
    Rlat: no task effect

Emotion × Task (non-significant in all six windows)
  50–100ms: F=.987, p=.41, ηp2=.034
  100–150ms: F=.843, p=.48, ηp2=.029
  150–200ms: F=.436, p=.75, ηp2=.015
  200–250ms: F=.38, p=.78, ηp2=.013
  250–300ms: F=.994, p=.41, ηp2=.034
  300–350ms: F=.348, p=.81, ηp2=.012

Cluster × Task × Emotion
  100–150ms: F(5.5,154.2)=2.37, MSE=1.32, p=.036, ηp2=.078; follow-ups: no effects at Occ, Llat or Rlat

No task effect was seen before 200ms. Task demands modulated amplitudes between 200–300ms, with more negative amplitudes for the GD task than for the ODD task and amplitudes for the ED task falling in between (Fig. 3A; Table 3). However, between 250–300ms this GD-ODD difference was seen at lateral sites but not at occipital sites (cluster × task interaction, Table 3), and more negative amplitudes were also seen for the GD than the ED task at left lateral sites. Between 300–350ms, the task effect again interacted with cluster (Table 3): at occipital sites, amplitudes were smallest for the ODD and ED tasks compared to the GD task (Fig. 2A), while at left lateral sites the same pattern as earlier was seen (GD<ODD), with no task effect at right lateral sites.

A significant effect of emotion was seen from 150–350ms (Table 3). More negative amplitudes for fearful than neutral expressions were seen across this interval, most clearly at posterior-lateral sites (Fig. 3B–C, Fig. 4). This fear effect is also visible on the topographic maps as bilateral negativities at occipito-temporal sites, along with positivities at fronto-central sites, as well as on the difference waveforms at all posterior-lateral sites (Figure 5). The effect peaked at posterior-lateral sites around 180ms (Fig. 3C), i.e. after the N170 (which peaked around 150ms). Between 150–250ms, amplitudes were also more negative for fearful than happy faces. In contrast, amplitudes for happy faces were never significantly more negative than those for neutral faces in any time window (and no cluster by emotion interaction was seen in any time window).

Figure 5.

Mean voltage distribution maps (top) and waveforms (bottom) of the group difference waveforms generated by subtracting the neutral condition from the fearful and happy conditions (F-N and H-N, averaged across tasks) at lateral-posterior sites (left and right lateral clusters). The grey zone highlights the interval during which the fear effect was significant (150–350ms).
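For illustration, such difference waveforms are simple condition subtractions on the per-condition averages; a minimal NumPy sketch with hypothetical array shapes:

```python
import numpy as np

# Hypothetical grand-average ERPs (n_electrodes x n_times), averaged across tasks
rng = np.random.default_rng(0)
fear, happy, neutral = (rng.normal(size=(8, 500)) for _ in range(3))

fear_effect = fear - neutral     # F-N difference waveform (cf. Fig. 3C and Fig. 5)
happy_effect = happy - neutral   # H-N difference waveform
```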

Importantly, emotion never interacted with task in any time window (Table 3).

4. Discussion

The study of the temporal dynamics of facial expression processing using the ERP technique has yielded inconsistent findings regarding modulations of the early neural markers of perception by emotion. These inconsistencies might stem from the use of various task demands in the literature, which might impact the level of processing of the emotion displayed by the face. Only two previous studies directly compared task effects on the neural activity elicited by facial expressions in a within-subject design, and both used angry, happy and neutral faces (Wronka & Walentowska, 2011; Rellecke et al., 2012). As most of the reported inconsistencies concern early fear effects rather than anger effects, the present study sought to directly test the impact of three of the main tasks used in the literature on the neural processing of fearful expressions presented along with happy and neutral expressions, in a within-subject design.

Participants discriminated the gender of the face (Gender Discrimination – GD), discriminated the facial expression of emotion (Emotion Discrimination – ED) or simply responded to infrequently presented flower stimuli (Oddball task – ODD). The classic P1 and N170 component peaks were analyzed, in addition to mean amplitudes across the entire epoch (from 50ms until 350ms, in 50ms windows) at occipital and posterior-lateral electrodes, spanning the EPN. Fixation was enforced on the nose of the face with an eye tracker (gaze-contingent procedure) and eye movements were monitored to prevent possible small gaze shifts toward facial features from impacting the neural recordings. This was important in light of recent studies showing that where on the face participants fixate matters, with the P1 and N170 components being modulated by fixation location on the face (e.g. de Lissa et al., 2014; Nemrodov et al., 2014; Neath & Itier, 2015; Neath-Tavares & Itier, 2016). We found that task demands and facial expressions impacted neural activity separately, but never interacted with each other. We discuss these findings and their implications in turn.

Effects of task demands

Errors on the three tasks were low (<5%), suggesting the tasks were not difficult. Flower detection was nearly perfect in the oddball task. This task requires a face/non-face judgement that can be made easily without deep processing. In contrast, gender discrimination and explicit emotion categorization usually require more cognitive resources (although see Reddy, Wilken & Koch, 2004, for the suggestion that gender discrimination can be done with virtually no attention). Direct comparison of the latter two tasks revealed no main effect of task on error rates, suggesting a similar level of task difficulty, and no main effect of emotion. However, task and emotion interacted, with no emotion effect seen in the GD task but overall fewer errors and shorter reaction times for happy than neutral faces in the ED task (although, given the use of a mouse in the ED task, the RT results should be interpreted cautiously).

A lack of emotion effect in the GD task was also reported by Wronka and Walentowska (2011), while Rellecke et al. (2012) reported more errors for angry than happy faces in their GD task. In our previous GD task using stimuli similar to those used here (Neath & Itier, 2015), we also reported a trend for more errors and longer RTs for fearful than neutral faces, but the specific design used, with fixation location enforced on a different feature on every trial, might have affected task performance. The better performance found for happy faces in the present ED task is in line with previous results on the same task using similar stimuli (Neath-Tavares & Itier, 2016, Exp.1) and with the facial expression recognition literature featuring a “happy superiority effect”, with typically the best and fastest responses for happy expressions (Calvo & Nummenmaa, 2016; Neath & Itier, 2014; Tottenham et al., 2009; Palermo & Coltheart, 2004). Wronka and Walentowska (2011) also reported fewer errors for emotional than neutral faces, although responses to happy and angry faces were collapsed. In contrast, Rellecke et al. (2012) did not find any emotion effect in their ED task. It is difficult to directly relate our findings to the previous within-subject studies given our use of fearful rather than angry facial expressions, and our use of the fixation trigger and eye tracker to prevent micro eye movements and ensure correct foveation on the nose. Overall, however, our findings are in line with those we reported recently using the same tasks and similar design and stimuli (Neath & Itier, 2015; Neath-Tavares & Itier, 2016) and with the broader literature on facial expression recognition.

Task demands also modulated neural recordings from 200ms until 350ms, i.e. after the N170 but including the EPN. The lack of a main task effect on the N170 is in line with the studies of Rellecke et al. (2012) and Wronka and Walentowska (2011), which also used within-subject designs, and these findings support the view of a face-sensitive N170 component that is insensitive to task demands. After the N170, from around 200ms, a reduced negativity was seen for the oddball task compared to the gender discrimination task, with the explicit emotion discrimination task falling in between. This task effect is likely the result of differences in the amount and/or level of processing (e.g., Craik & Lockhart, 1972) between the tasks, with the ODD task requiring the least processing. This clear task effect on the EPN, similar to that reported by Rellecke et al. (2012), contrasts with the lack of task effect reported by Wronka and Walentowska (2011) between 240–340ms, where their EPN was measured. In that study, the EPN was measured at occipital sites (O1, O2, Oz), whereas in the present study and in Rellecke et al. (2012) the EPN was also measured at posterior-lateral sites. The electrodes used to measure the EPN may thus account for the differences in findings between studies. Our results support the view that task demands modulate neural recordings during the time interval in which the EPN is seen at posterior-lateral sites, presumably reflecting the depth of processing of the stimuli.

The emotion expressed by the face also modulated the neural activity, but only clearly for fearful expressions, as discussed below.

Processing of happy facial expressions

Happy facial expressions did not modulate amplitudes significantly compared to neutral expressions in any time window. In a previous gender discrimination task (Neath & Itier, 2015) and in explicit emotion discrimination and oddball detection tasks (Neath-Tavares & Itier, 2016), we used a similar gaze-contingent procedure except that fixation location was also enforced on the eyes and the mouth, in addition to the nose. We found that a happiness effect (more negative amplitude for happy than neutral faces) was seen only when participants fixated the mouth, an effect that started early at occipital sites (120ms) and lasted until 350ms. The fact that the happiness effect was not found in the current study, where fixation was restricted to the nose, is in line with the idea that the happiness effect reported in those previous studies was driven by cues from the mouth (as in the current study, stimuli had wide open smiles with visible teeth).

Following Halgren et al. (2000), who reported an occipital source differentiating the magnetic response between happy and neutral faces around 100–120ms, we previously interpreted this early happiness effect as reflecting the fast discrimination of diagnostic cues such as the smile, based on local luminance and contrast, in early visual areas. It is plausible that this neural activity is captured maximally when fixation is on the mouth, but weakly or not at all when fixation falls elsewhere on the face, such as on the nose, as the smile cues would then be in the visual periphery (however, see Calvo et al., 2014, for the suggestion that the smile can also modulate early ERPs around 90–130ms in the periphery). An early processing of the salient smile before 200ms post-stimulus onset at the brain level has also been suggested by a recent ERP study (Beltrán & Calvo, 2015) and makes sense in view of the rapid discrimination of happy faces from other facial expressions based on saccadic latencies of 200ms to 280ms (Calvo & Nummenmaa, 2011). A growing body of evidence suggests that the happy superiority effect, which includes faster processing of happy faces than other facial expressions (see Palermo & Coltheart, 2004; Calvo & Nummenmaa, 2016), is linked to the saliency of the mouth (e.g. Calvo & Nummenmaa, 2011), and that the saliency of the mouth, conveyed by visible teeth, modulates early ERP components recorded to pictures of smiles or full faces (Beltrán & Calvo, 2015; DaSilva et al., 2016). The present study suggests that when fixation is enforced on the nose using a gaze-contingent procedure, these early effects of the smile are not detected.

Processing of fearful facial expressions

Early effects for fearful faces have been debated. Many studies have reported no modulation of the P1 by emotion (Palermo & Rhodes, 2007; Vuilleumier & Pourtois, 2007); however, other studies have reported enhanced P1 for fearful compared to neutral faces in gender discrimination tasks (Pourtois et al., 2005; Wijers & Banis, 2012), oddball detection tasks (Batty & Taylor, 2003; Williams et al. 2004), and passive viewing of emotional faces (Blau et al., 2007; Smith et al., 2013). Some studies have suggested that this early effect might reflect automatic (i.e., involuntary) attention capture by threatening faces, mediated by low spatial frequencies and rapid feedback from the amygdala onto early visual areas (e.g. Pourtois et al., 2005; Vlamings et al., 2009). We reasoned that if this was the case, and in light of the neuroimaging literature suggesting largest response of the amygdala in emotion-irrelevant tasks (Hariri et al., 2000; Critchley et al., 2000; Lange et al., 2003), then we might see largest P1 increases to fearful faces in the oddball and gender discrimination tasks in which emotion was task-irrelevant.

In contrast, in the current study, no modulation of the P1 by fearful expressions was seen regardless of whether the task was emotion-relevant or emotion-irrelevant. Although null findings should be interpreted cautiously, the present results suggest task demands are unlikely to be the reason for previous inconsistent fear effects on the P1. This lack of a P1 fear effect was also noted in our previous studies (Neath & Itier, 2015; Neath-Tavares & Itier, 2016). Some previous reports of P1 modulations by fearful faces may have been driven by differences in low-level stimulus characteristics such as contrast or luminance, which were not always controlled for or even measured. Alternatively, previous modulations of the P1 might be attributable to micro eye movements, given that the P1 is modulated by fixation location (Neath & Itier, 2015; Neath-Tavares & Itier, 2016). Here, mean contrast and normalized pixel intensity did not vary between expressions, and micro eye movements were prevented by the use of a gaze-contingent procedure. Another possibility is that individual differences were driving this very early effect in previous studies. It has been suggested that trait anxiety influences the processing of threat-related information (including fearful faces, see Bar-Haim et al., 2007), and neuroimaging studies have shown increased amygdala activity in high compared to low trait anxious individuals during unconscious processing of fearful stimuli (Bishop, 2007; Etkin et al., 2004). Anxiety level has also been shown to modulate the P1, although results remain inconsistent (Walentowska & Wronka, 2012; Morel et al., 2014; see also Jetha et al., 2012 with shyness traits). For instance, the P1 was larger for high than low trait anxious participants regardless of emotion in one study (main effect of group, Walentowska & Wronka, 2012), while in another study the P1 was larger for happy than neutral faces only in high trait anxious participants, with no effects seen for fearful faces (emotion by group interaction, Morel et al., 2014). In the present study, highly anxious participants were excluded. Whether and how micro eye movements or individual differences in anxiety might be associated with early effects on the P1 provides a future direction for this ERP emotion research.

In the present experiment, modulations of ERPs by fearful faces started at the N170 component (~150ms) and were seen bilaterally as more negative amplitudes for fearful than neutral faces, all the way until 350ms, at both lateral-posterior and occipital sites. This fear effect was in fact maximal after the N170 peak, around 180ms (Figs. 3 and 5), and did not interact with task demands. This enhanced negativity for fearful faces likely reflects activity linked to the processing of fear superimposed onto the normal activity related to the processing of neutral faces, as proposed by other groups (Rellecke et al., 2012; 2013; Schacht & Sommer, 2009). This superimposed negativity started at the N170 but was seen mostly after the peak, during the timing of the visual P2 and EPN (Neath & Itier, 2015; Neath-Tavares & Itier, 2016; Leppänen et al., 2008; Rellecke et al., 2011, 2013; Schupp et al., 2004a,b). Very similar topographic effects were reported recently (Neath & Itier, 2015; Neath-Tavares & Itier, 2016), with a bilateral negativity at occipital and posterior-temporal sites between 150–350ms for fear, peaking around 180–200ms, seen along with a simultaneous positivity at midline sites. This topographic distribution was also reported for angry faces (e.g. Schacht & Sommer, 2009; Rellecke et al., 2012) and might be driven by similar or overlapping underlying generators related to threat processing. It seems that this superimposed negativity is sometimes caught on the N170 peak, which then shows an effect of fear as in the present study (see also Neath-Tavares & Itier, 2016, Exp.2), and sometimes not, resulting in no modulation of the N170 by fearful expressions (e.g. Neath & Itier, 2015; Neath-Tavares & Itier, 2016, Exp.1). The reason for this remains unclear, although the current study suggests it is unlikely due to differences in attention to the face imposed by task demands, given the lack of an emotion by task interaction on the N170 component. Individual differences in sensitivity to threat, possibly linked to anxiety, might be at play, a possibility that future studies could investigate.

Our finding of a fear effect on the N170 is in line with a recent meta-analysis reporting modulation of the N170 by facial expressions of emotion, with the largest effect sizes seen for fearful and angry faces (Hinojosa et al., 2015). That same study also found larger effect sizes for the emotion effect in emotion-irrelevant tasks than in emotion-relevant tasks, which points to a possible interaction between emotion and task effects. In contrast, the present within-subject design does not support an emotion effect modulated by task demands, at least with the three tasks we used. Wronka and Walentowska (2011) reported an interaction between emotion and task demands on the N170; however, the component was measured at some unusual electrodes, including PO3 and PO4, and a linked-mastoid reference was used. In contrast, the current lack of interaction between task demands and facial expression was also reported by Rellecke et al. (2012) using angry faces and the same average reference montage as used here. These results support the view that processing of threat-related expressions (fearful, angry) might be mandatory and independent of attention to the face (Rellecke et al., 2012; Schupp et al., 2004b).

It should be noted that the order of task presentation may have influenced the results. For example, completing the ED task first could have primed participants to attend to the emotional faces in the following ODD task differently than when completing the ODD task first. Task order was fully counterbalanced between participants; however, the number of participants in each task order condition was too small for a meaningful analysis. An investigation of task order, requiring many more participants, is an opportunity for future studies. Furthermore, as acknowledged above, the present lack of interaction between task demands and emotion might be specific to the three tasks used, which are the most common tasks in the facial expression literature. It is possible that with different, or much more difficult, tasks such an interaction would emerge. Finally, all the conclusions drawn are restricted to the first 350ms of processing and do not exclude possible later interactions between task demands and emotion. For instance, such an interaction might be seen on the P300, which is modulated by task demands and attention (see Polich, 2007, for a review), or on the Late Positive Complex (LPC), which is involved in emotion processing at more elaborate cognitive stages (Schupp et al., 2004b, 2007b) and has been shown to vary with task demands (Rellecke et al., 2012). This later timing would make sense in view of the task by emotion interaction that we obtained at the behavioural level. However, the focus of the present study was restricted to early components.

5. Conclusions

The current study investigated the impact of task demands on the neural processing of fearful and happy facial expressions. Task demands modulated neural recordings after 200ms. An enhanced negativity at lateral-posterior sites was seen for fearful expressions from ~150–350ms post-stimulus, while no clear modulation was seen for happy faces, likely due to the enforced fixation on the nose. Importantly, no interaction between emotion and task was seen during the entire epoch analyzed, including the P1, N170 and EPN components. The current results suggest that early emotion effects for fearful expressions occur irrespective of task relevance, at least when comparing the commonly used gender discrimination, explicit emotion categorization and oddball detection tasks. These results also suggest that previous inconsistencies in the facial expression ERP literature regarding the early modulation of the P1 and N170 components by fearful expressions are unlikely due to task demands.

Acknowledgments

We would like to warmly thank Frank Preston for his assistance with data collection and programming. This research was possible thanks to funding from the Natural Sciences and Engineering Research Council of Canada (NSERC Discovery Grant #418431), the Ontario government (Early Researcher Award, ER11-08-172), the Canada Foundation for Innovation (CFI, #213322), and the Canada Research Chair (CRC, #213322; #230407) program to RJI, as well as by a doctoral NSERC grant to KNNT (CGS D Grant # 210825634).

Footnotes

1. Development of the MacBrain Face Stimulus Set was overseen by Nim Tottenham and supported by the John D. and Catherine T. MacArthur Foundation Research Network on Early Experience and Brain Development. Please contact Nim Tottenham at tott0006@tc.umn.edu for more information concerning the stimulus set. The models used in the present study were models # 2, 3, 5, 6, 7, 8, 15, 16, 20, 21, 23, 24, 27, 32, 33, 34.

2. On average, participants took 864ms (SD = 909ms) between fixation-cross onset and stimulus presentation.

3. Note that a small effect of task can be seen in Fig. 3A, which displays the grand averages at the P9 and P10 sites and thus does not reflect the statistics computed using the maximum-peak electrodes. When only P9/P10 were used to measure the N170, a small effect of task was found (F=3.46, p=.05, ηp2=.11), with smaller amplitude for the ODD than the ED task, but no paired comparisons reached significance. This methodological difference in measurement is important to keep in mind when comparing studies across the literature; nevertheless, we believe our data support the lack of a true task effect at the N170 level. With the P9/P10 sites, the effect of emotion (F=4.65, p=.014, ηp2=.142) was still significant (Fear < Neutral, p=.011) and the task by emotion interaction was still non-significant (F=.255, p=.83, ηp2=.009).
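For readers relating these effect sizes to the F ratios, partial eta squared follows directly from the F value and its degrees of freedom; in LaTeX notation:

    \eta_p^2 = \frac{F \cdot df_{\text{effect}}}{F \cdot df_{\text{effect}} + df_{\text{error}}}

The degrees of freedom are not restated in this footnote, so the following check relies on inferred values: assuming df_effect = 2 and df_error = 56 for the task and emotion effects, the emotion F of 4.65 yields (4.65 × 2)/(4.65 × 2 + 56) ≈ .142, matching the reported ηp2.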

References

  1. Bar-Haim Y, Lamy D, Pergamin L, Bakermans-Kranenburg MJ, van IJzendoorn MH. Threat-related attentional bias in anxious and non-anxious individuals: A meta-analytic study. Psychological Bulletin. 2007;133:1–24. doi: 10.1037/0033-2909.133.1.1.
  2. Batty M, Taylor MJ. Early processing of the six basic facial emotional expressions. Cognitive Brain Research. 2003;17(3):613–620. doi: 10.1016/s0926-6410(03)00174-5.
  3. Beltrán D, Calvo MG. Recognition advantage of happy faces: Tracing the neurocognitive processes. Human Brain Mapping. 2015;36(11):4287–4303.
  4. Bentin S, Allison T, Puce A, Perez E, McCarthy G. Electrophysiological studies of face perception in humans. Journal of Cognitive Neuroscience. 1996;8:551–565. doi: 10.1162/jocn.1996.8.6.551.
  5. Bentin S, Deouell LY. Structural encoding and identification in face processing: ERP evidence for separate mechanisms. Cognitive Neuropsychology. 2000;17(1–3):35–55. doi: 10.1080/026432900380472.
  6. Bishop SJ. Neurocognitive mechanisms of anxiety: an integrative account. Trends in Cognitive Sciences. 2007;11(7):307–316. doi: 10.1016/j.tics.2007.05.008.
  7. Blau VC, Maurer U, Tottenham N, McCandliss BD. The face-specific N170 component is modulated by emotional facial expression. Behavioral and Brain Functions. 2007;3(7):1–13. doi: 10.1186/1744-9081-3-7.
  8. Calvo MG, Nummenmaa L. Time course of discrimination between emotional facial expressions: the role of visual saliency. Vision Research. 2011;51:1751–1759. doi: 10.1016/j.visres.2011.06.001.
  9. Calvo MG, Beltrán D. Brain lateralization of holistic versus analytic processing of emotional facial expressions. Neuroimage. 2014;92:237–247. doi: 10.1016/j.neuroimage.2014.01.048.
  10. Calvo MG, Beltrán D, Fernández-Martín A. Processing of facial expressions in peripheral vision: neurophysiological evidence. Biological Psychology. 2014;100:60–70. doi: 10.1016/j.biopsycho.2014.05.007.
  11. Calvo MG, Nummenmaa L. Perceptual and affective mechanisms in facial expression recognition: an integrative review. Cognition and Emotion. 2016;30(6):1081–1106. doi: 10.1080/02699931.2015.1049124.
  12. Clark VP, Fan S, Hillyard SA. Identification of early visual evoked potential generators by retinotopic and topographic analyses. Human Brain Mapping. 1995;2(3):170–187.
  13. Craik FI, Lockhart RS. Levels of processing: A framework for memory research. Journal of Verbal Learning and Verbal Behavior. 1972;11(6):671–684.
  14. Critchley HD, Daly E, Phillips M, Brammer M, Bullmore E, Williams S, van Amelsvoort T, Robertson D, David A, Murphy D. Explicit and implicit neural mechanisms for processing of social information from facial expressions: a functional magnetic resonance imaging study. Human Brain Mapping. 2000;9:93–105. doi: 10.1002/(SICI)1097-0193(200002)9:2<93::AID-HBM4>3.0.CO;2-Z.
  15. DaSilva EB, Crager K, Geisler D, Newbern P, Orem B, Puce A. Something to sink your teeth into: the presence of teeth augments ERPs to mouth expressions. Neuroimage. 2016;127:227–241. doi: 10.1016/j.neuroimage.2015.12.020.
  16. Delorme A, Makeig S. EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. Journal of Neuroscience Methods. 2004;134:9–21. doi: 10.1016/j.jneumeth.2003.10.009.
  17. de Lissa P, McArthur G, Hawelka S, Palermo R, Mahajan Y, Hutzler F. Fixation location on upright and inverted faces modulates the N170. Neuropsychologia. 2014;57:1–11. doi: 10.1016/j.neuropsychologia.2014.02.006.
  18. Eimer M. Event-related brain potentials distinguish processing stages involved in face perception and recognition. Clinical Neurophysiology. 2000;111(4):694–705. doi: 10.1016/s1388-2457(99)00285-0.
  19. Eimer M, Holmes A, McGlone FP. The role of spatial attention in the processing of facial expression: An ERP study of rapid brain responses to six basic emotions. Cognitive, Affective, & Behavioral Neuroscience. 2003;3(2):97–110. doi: 10.3758/cabn.3.2.97.
  20. Etkin A, Klemenhagen KC, Dudman JT, Rogan MT, Hen R, Kandel ER, Hirsch J. Individual differences in trait anxiety predict the response of the basolateral amygdala to unconsciously processed fearful faces. Neuron. 2004;44(6):1043–1055. doi: 10.1016/j.neuron.2004.12.006.
  21. Gamer M, Schmitz AK, Tittgemeyer M, Schilbach L. The human amygdala drives reflexive orienting towards facial features. Current Biology. 2013;23(20):R917–R918. doi: 10.1016/j.cub.2013.09.008.
  22. Halgren E, Raij T, Marinkovic K, Jousmäki V, Hari R. Cognitive response profile of the human fusiform face area as determined by MEG. Cerebral Cortex. 2000;10:69–81. doi: 10.1093/cercor/10.1.69.
  23. Hariri AR, Bookheimer SY, Mazziotta JC. Modulating emotional responses: effects of a neocortical network on the limbic system. Neuroreport. 2000;11:43–48. doi: 10.1097/00001756-200001170-00009.
  24. Herrmann MJ, Aranda D, Ellgring H, Mueller TJ, Strik WK, Heidrich A, Fallgatter AJ. Face-specific event-related potential in humans is independent from facial expression. International Journal of Psychophysiology. 2002;45(3):241–244. doi: 10.1016/s0167-8760(02)00033-8.
  25. Hinojosa JA, Mercado F, Carretié L. N170 sensitivity to facial expression: A meta-analysis. Neuroscience & Biobehavioral Reviews. 2015;55:498–509. doi: 10.1016/j.neubiorev.2015.06.002.
  26. Itier RJ, Taylor MJ. Inversion and contrast polarity reversal affect both encoding and recognition processes of unfamiliar faces: A repetition study using ERPs. NeuroImage. 2002;15(2):353–372. doi: 10.1006/nimg.2001.0982.
  27. Itier RJ, Taylor MJ. Source analysis of the N170 to faces and objects. Neuroreport. 2004;15(8):1261–1265. doi: 10.1097/01.wnr.0000127827.73576.d8.
  28. Itier RJ, Taylor MJ, Lobaugh NJ. Spatiotemporal analysis of event-related potentials to upright, inverted and contrast-reversed faces: effects on encoding and recognition. Psychophysiology. 2004;41:643–653. doi: 10.1111/j.1469-8986.2004.00183.x.
  29. Jetha MK, Zheng X, Schmidt LA, Segalowitz SJ. Shyness and the first 100ms of emotional face processing. Social Neuroscience. 2012;7(1):74–89. doi: 10.1080/17470919.2011.581539.
  30. Johannes S, Münte TF, Heinze HJ, Mangun GR. Luminance and spatial attention effects on early visual processing. Cognitive Brain Research. 1995;2(3):189–205. doi: 10.1016/0926-6410(95)90008-x.
  31. Junghöfer M, Bradley MM, Elbert TR, Lang PJ. Fleeting images: A new look at early emotion discrimination. Psychophysiology. 2001;38(2):175–178.
  32. Kissler J, Herbert C, Peyk P, Junghöfer M. Buzzwords – Early cortical responses to emotional words during reading. Psychological Science. 2007;18:475–480. doi: 10.1111/j.1467-9280.2007.01924.x.
  33. Lange K, Williams LM, Young AW, Bullmore ET, Brammer MJ, Williams SCR, Gray JA, Phillips ML. Task instructions modulate neural responses to fearful facial expressions. Biological Psychiatry. 2003;53(3):226–232. doi: 10.1016/s0006-3223(02)01455-5.
  34. Leppänen JM, Moulson MC, Vogel-Farley VK, Nelson CA. An ERP study of emotional face processing in the adult and infant brain. Child Development. 2007;78(1):232–245. doi: 10.1111/j.1467-8624.2007.00994.x.
  35. Leppänen JM, Hietanen JK, Koskinen K. Differential early ERPs to fearful versus neutral facial expressions: a response to the salience of the eyes? Biological Psychology. 2008;78:150–158. doi: 10.1016/j.biopsycho.2008.02.002.
  36. Luck SJ. Multiple mechanisms of visual-spatial attention: recent evidence from human electrophysiology. Behavioural Brain Research. 1995;71(1):113–123. doi: 10.1016/0166-4328(95)00041-0.
  37. Luck SJ, Woodman GF, Vogel EK. Event-related potential studies of attention. Trends in Cognitive Sciences. 2000;4(11):432–440. doi: 10.1016/s1364-6613(00)01545-x.
  38. Mangun GR. Neural mechanisms of visual selective attention. Psychophysiology. 1995;32(1):4–18. doi: 10.1111/j.1469-8986.1995.tb03400.x.
  39. Marinkovic K, Halgren E. Human brain potentials related to the emotional expression, repetition, and gender of faces. Psychobiology. 1998;26(4):348–356.
  40. Meaux E, Roux S, Batty M. Early visual ERPs are influenced by individual emotional skills. Social Cognitive and Affective Neuroscience. 2014;9(8):1089–1098. doi: 10.1093/scan/nst084.
  41. Morel S, George N, Foucher A, Chammat M, Dubal S. ERP evidence for an early bias towards happy faces in trait anxiety. Biological Psychology. 2014;99:183–192. doi: 10.1016/j.biopsycho.2014.03.011.
  42. Neath KN, Itier RJ. Facial expression discrimination varies with presentation time but not with fixation on features: a backward masking study using eye-tracking. Cognition and Emotion. 2014;28(1):115–131. doi: 10.1080/02699931.2013.812557.
  43. Neath KN, Itier RJ. Fixation to features and neural processing of facial expressions in a gender discrimination task. Brain and Cognition. 2015;99:97–111. doi: 10.1016/j.bandc.2015.05.007.
  44. Neath-Tavares KN, Itier RJ. Neural processing of fearful and happy facial expressions during emotion-relevant and emotion-irrelevant tasks: a fixation-to-feature approach. Biological Psychology. 2016;119:122–140. doi: 10.1016/j.biopsycho.2016.07.013.
  45. Nemrodov D, Anderson T, Preston FF, Itier RJ. Early sensitivity for eyes within faces: A new neuronal account of holistic and featural processing. NeuroImage. 2014;97:81–94. doi: 10.1016/j.neuroimage.2014.04.042.
  46. Palermo R, Coltheart M. Photographs of facial expression: Accuracy, response times, and ratings of intensity. Behavior Research Methods, Instruments, & Computers. 2004;36(4):634–638. doi: 10.3758/bf03206544.
  47. Palermo R, Rhodes G. Are you always on my mind? A review of how face perception and attention interact. Neuropsychologia. 2007;45(1):75–92. doi: 10.1016/j.neuropsychologia.2006.04.025.
  48. Pizzagalli DA, Lehmann D, Hendrick AM, Regard M, Pascual-Marqui RD, Davidson RJ. Affective judgments of faces modulate early activity (~160 ms) within the fusiform gyri. Neuroimage. 2002;16(3):663–677. doi: 10.1006/nimg.2002.1126.
  49. Polich J. Updating P300: an integrative theory of P3a and P3b. Clinical Neurophysiology. 2007;118(10):2128–2148. doi: 10.1016/j.clinph.2007.04.019.
  50. Pourtois G, Grandjean D, Sander D, Vuilleumier P. Electrophysiological correlates of rapid spatial orienting towards fearful faces. Cerebral Cortex. 2004;14(6):619–633. doi: 10.1093/cercor/bhh023.
  51. Pourtois G, Dan ES, Grandjean D, Sander D, Vuilleumier P. Enhanced extrastriate visual response to bandpass spatial frequency filtered fearful faces: Time course and topographic evoked-potentials mapping. Human Brain Mapping. 2005;26:65–79. doi: 10.1002/hbm.20130.
  52. Reddy L, Wilken P, Koch C. Face-gender discrimination is possible in the near-absence of attention. Journal of Vision. 2004;4(2):106–117. doi: 10.1167/4.2.4.
  53. Ree MJ, French D, MacLeod C, Locke V. Distinguishing cognitive and somatic dimensions of state and trait anxiety: Development and validation of the State-Trait Inventory for Cognitive and Somatic Anxiety (STICSA). Behavioural and Cognitive Psychotherapy. 2008;36(3):313–332.
  54. Rellecke J, Sommer W, Schacht A. Does processing of emotional facial expressions depend on intention? Time-resolved evidence from event-related brain potentials. Biological Psychology. 2012;90(1):23–32. doi: 10.1016/j.biopsycho.2012.02.002.
  55. Rellecke J, Sommer W, Schacht A. Emotion effects on the N170: A question of reference? Brain Topography. 2013;26(1):62–71. doi: 10.1007/s10548-012-0261-y.
  56. Rellecke J, Palazova M, Sommer W, Schacht A. On the automaticity of emotion processing in words and faces: Event-related brain potential evidence from a superficial task. Brain and Cognition. 2011;77(1):23–32. doi: 10.1016/j.bandc.2011.07.001.
  57. Rossion B, Campanella S, Gomez CM, Delinte A, Debatisse D, Liard L, … Guerit JM. Task modulation of brain activity related to familiar and unfamiliar face processing: An ERP study. Clinical Neurophysiology. 1999;110(3):449–462. doi: 10.1016/s1388-2457(98)00037-6.
  58. Rossion B, Jacques C. The N170: Understanding the time course of face perception in the human brain. In: Luck SJ, Kappenman ES, editors. The Oxford Handbook of Event-Related Potential Components. Oxford: Oxford University Press; 2012. pp. 115–141.
  59. Rousselet GA, Husk JS, Bennett PJ, Sekuler AB. Time course and robustness of ERP object and face differences. Journal of Vision. 2008;8(12):3, 1–18. doi: 10.1167/8.12.3.
  60. Rousselet GA, Pernet CR. Quantifying the time course of visual object processing using ERPs: it’s time to up the game. Frontiers in Psychology. 2011;2:107. doi: 10.3389/fpsyg.2011.00107.
  61. Sato W, Kochiyama T, Yoshikawa S, Matsumura M. Emotional expression boosts early visual processing of the face: ERP recording and its decomposition by independent component analysis. Neuroreport. 2001;12(4):709–714. doi: 10.1097/00001756-200103260-00019.
  62. Schacht A, Sommer W. Emotions in word and face processing: Early and late cortical responses. Brain and Cognition. 2009;69:538–550. doi: 10.1016/j.bandc.2008.11.005.
  63. Schupp HT, Junghöfer M, Weike AI, Hamm AO. Emotional facilitation of sensory processing in the visual cortex. Psychological Science. 2003;14:7–13. doi: 10.1111/1467-9280.01411.
  64. Schupp HT, Junghöfer M, Weike AI, Hamm AO. The selective processing of briefly presented affective pictures: An ERP analysis. Psychophysiology. 2004a;41(3):441–449. doi: 10.1111/j.1469-8986.2004.00174.x.
  65. Schupp HT, Öhman A, Junghöfer M, Weike AI, Stockburger J, Hamm AO. The facilitated processing of threatening faces: an ERP analysis. Emotion. 2004b;4(2):189–200. doi: 10.1037/1528-3542.4.2.189.
  66. Schupp HT, Flaisch T, Stockburger J, Junghöfer M. Emotion and attention: event-related brain potential studies. Progress in Brain Research. 2006;156:31–51. doi: 10.1016/S0079-6123(06)56002-9.
  67. Schupp HT, Stockburger J, Codispoti M, Junghöfer M, Weike AI, Hamm AO. Selective visual attention to emotion. The Journal of Neuroscience. 2007a;27:1082–1089. doi: 10.1523/JNEUROSCI.3223-06.2007.
  68. Schupp HT, Stockburger J, Bublatzky F, Junghöfer M, Weike AI, Hamm AO. Explicit attention interferes with selective emotion processing in human extrastriate cortex. BMC Neuroscience. 2007b;8:16. doi: 10.1186/1471-2202-8-16.
  69. Schyns PG, Petro LS, Smith ML. Dynamics of visual information integration in the brain for categorizing facial expressions. Current Biology. 2007;17(18):1580–1585. doi: 10.1016/j.cub.2007.08.048.
  70. Smith E, Weinberg A, Moran T, Hajcak G. Electrocortical responses to NIMSTIM facial expressions of emotion. International Journal of Psychophysiology. 2013;88(1):17–25. doi: 10.1016/j.ijpsycho.2012.12.004.
  71. Tottenham N, Tanaka JW, Leon AC, McCarry T, Nurse M, Hare TA, … Nelson C. The NimStim set of facial expressions: Judgments from untrained research participants. Psychiatry Research. 2009;168(3):242–249. doi: 10.1016/j.psychres.2008.05.006.
  72. Van Dam NT, Gros DF, Earleywine M, Antony MM. Establishing a trait anxiety threshold that signals likelihood of anxiety disorders. Anxiety, Stress, & Coping. 2013;26(1):70–86. doi: 10.1080/10615806.2011.631525.
  73. Van Selst M, Jolicoeur P. A solution to the effect of sample size on outlier elimination. The Quarterly Journal of Experimental Psychology Section A: Human Experimental Psychology. 1994;47(3):631–650.
  74. Vlamings PH, Goffaux V, Kemner C. Is the early modulation of brain activity by fearful facial expressions primarily mediated by coarse spatial frequency information? Journal of Vision. 2009;9(5):12, 1–13. doi: 10.1167/9.5.12.
  75. Vuilleumier P, Pourtois G. Distributed and interactive brain mechanisms during emotion face perception: Evidence from functional neuroimaging. Neuropsychologia. 2007;45:174–194. doi: 10.1016/j.neuropsychologia.2006.06.003.
  76. Wijers AA, Banis S. Foveal and parafoveal spatial attention and its impact on the processing of facial expressions: An ERP study. Clinical Neurophysiology. 2012;123:513–526. doi: 10.1016/j.clinph.2011.07.040.
  77. Williams LM, Liddell BJ, Rathjen J, Brown KJ, Gray J, Phillips M, … Gordon E. Mapping the time course of nonconscious and conscious perception of fear: an integration of central and peripheral measures. Human Brain Mapping. 2004;21(2):64–74. doi: 10.1002/hbm.10154.
  78. Wronka E, Walentowska W. Attention modulates emotional expression processing. Psychophysiology. 2011;48(8):1047–1056. doi: 10.1111/j.1469-8986.2011.01180.x.
  79. Wronka E, Walentowska W. Trait anxiety and involuntary processing of facial emotions. International Journal of Psychophysiology. 2012;85(1):27–36. doi: 10.1016/j.ijpsycho.2011.12.004.
