Author manuscript; available in PMC 2014 Apr 1. Published in final edited form as: Int J Psychophysiol. 2013 Jan 9;88(1):40–46. doi:10.1016/j.ijpsycho.2013.01.003

Emotional processing modulates attentional capture of irrelevant sound input in adolescents

B. Gulotta, G. Sadia, E. Sussman
PMCID: PMC3648586  NIHMSID: NIHMS434576  PMID: 23313604

Abstract

The main goal of this study was to investigate how emotional processing modulates the allocation of attention to irrelevant background sound events in adolescence. We examined the effect of viewing positively and negatively valenced video clips on components of event-related brain potentials (ERPs) while irrelevant sounds were presented to the ears. All sounds evoked the P1, N1, P2, and N2 components. The infrequent, randomly occurring novel environmental sounds evoked the P3a component in all trial types. The main finding was that the P3a component evoked by salient, distracting background sound events was larger in amplitude when participants were watching negatively charged video clips than when they were viewing positive or neutral video clips. The results suggest that the threshold for involuntary attention to the novel sounds was lowered during viewing of the negative movie contexts. This is consistent with a survival mechanism, in which more automatic processing of irrelevant sounds would be needed to monitor the unattended environment in situations perceived as threatening.

Keywords: emotions, P3a, event-related potentials, attention, auditory

Introduction

Emotional valence influences how attention is allocated and perceptual information is processed (Cuthbert et al., 1998; Lang et al., 1990; Lang et al., 1998; Wong et al., 2012). Two intertwined systems are proposed to be engaged in perceiving environmental events (Hopfinger et al., 2006; Oray et al., 2002; Sugimoto et al., 2007). One is an exogenous system, generating involuntary responses to external events (e.g., a car honking), and the other is an endogenous system, driving responses to the environment by a current internal state (e.g., sadness). The manner in which these interacting systems are modulated by emotional state is not fully clear.

Imagine being in a darkened movie theater, watching the protagonist on the screen clamber to the end of a long hallway, unaware that the enemy waits just around the corner. Suddenly you hear "crinkle, crinkle," the sound of your seat mate opening a candy bar, and you jump out of your seat. In all likelihood, any salient, unexpected sound occurring during this type of heightened state would lead to a similar response. That is, the negative valence induced by the context of the movie would affect the exogenous response to an irrelevant sound event (Cuthbert et al., 1998; Lang et al., 1990). The purpose of a mechanism that lowers the threshold for detecting unattended sensory information is to expeditiously alert to potentially threatening events. External events are more likely to be perceived as imminent during a portentous situation or negative emotional state. However, evidence to the contrary suggests that any emotional state (negative or positive) lowers the threshold for detecting unattended environmental events when compared to a neutral state (Cuthbert et al., 1998; Keil et al., 2002; Smith et al., 2003). The purpose of the current study was to test the influence of a visually induced emotional state, such as that described in the movie theater example, in altering neurophysiologic processes of sound detection for irrelevant but attention-capturing auditory events. To that end, we tested the hypothesis that a negatively induced state lowers the threshold for detecting unattended sound events compared to positive or neutral states.

Processing irrelevant auditory events can be affected by the context in which they are heard (Alexandrov et al., 2007; Domínguez-Borrás et al., 2008; Johnson & Zatorre, 2005; Katayama & Polich, 1998; Sussman & Steinschneider, 2006; Wronka, Kaiser & Coenen, 2008), and by the emotional state of the induced context, even if the irrelevant sounds themselves do not carry any particular emotional value (Alexandrov et al., 2007). Moreover, negative emotional stimuli have been shown to garner an attentional bias, in that negatively charged information is processed more automatically than positively charged information (Alexandrov et al., 2005; Alexandrov et al., 2007; Carretié et al., 2001; Domínguez-Borrás et al., 2008; Eastwood, Smilek & Merikle, 2003; Hajcak et al., 2008; Hansen & Hansen, 1988; Lang et al., 2009; Paul et al., 2001; Pratto et al., 1991; Rinick et al., 2009; Schupp et al., 2004; Schupp et al., 2007; Smith et al., 2003).

There are contradictory results, however, regarding the effects of emotional state on brain processes associated with unattended sound detection, specifically with regard to whether emotional engagement enhances or detracts from exogenous processing of unexpected input. Activation of more extensive brain areas and larger amplitude responses during emotional engagement induced by external input have been reported (Cuthbert et al., 1998). This enhancement of brain responses has been demonstrated in larger amplitude event-related brain potentials (ERPs), such as P1 (Smith et al., 2003), N1 (Keil et al., 2002; Sugimoto et al., 2007; Suzuki et al., 2005), P2 (Sugimoto et al., 2007), and P3 (Fichtenholtz et al., 2007; Keil et al., 2002). Although a negative response bias has been repeatedly demonstrated by response enhancement occurring only for negatively charged input compared to positive and neutral (Alexandrov et al., 2007; Domínguez-Borrás et al., 2008), other studies have shown that any emotionally charged input (positive and negative) produces enhanced responses compared to neutral or other control-type stimuli (Keil et al., 2002; Smith et al., 2003; Surakka et al., 1998). Moreover, differential timing effects of response enhancement have additionally been demonstrated. Some of the studies showed only early effects, reflected in enhancement of the P1 (Smith et al., 2003) or the N1 components (Cuthbert et al., 1998; Sugimoto et al., 2007); others showed only later enhancement effects reflected in the P3 component (Fichtenholtz et al., 2007); whereas others demonstrated both early and late enhancements to irrelevant stimuli (Keil et al., 2002). Some of the disparities among results of these studies are likely due to the considerable variation in experimental paradigms used to engage emotion in participants.

Response reductions, rather than enhancements, to irrelevant stimuli have also been reported (Oray et al., 2002; Rosenfeld et al., 1992; Suzuki et al., 2005; Vardi et al., 2006). In these studies, reduction of evoked responses was attributed to an attention bias, suggesting that attention to emotionally charged stimuli reduced the exogenous responses rather than enhanced them (Suzuki et al., 2005; Rosenfeld et al., 1992; Vardi et al., 2006). Suzuki et al., for example, found that when subjects viewed video clips that were "interesting", the P3 amplitude was reduced compared to when the viewed video clips were neutral. They suggested that less attention was allocated to the irrelevant sound probes when the clips were interesting, which resulted in a smaller P3 component amplitude during viewing of the interesting clips.

Responses to irrelevant auditory events can reflect the ability to regulate one's actions (Andrés et al., 2006). Neural mechanisms associated with self-regulation are subject to maturational changes that extend into adolescence, and are thought to depend on the interaction between affect and cognitive control (Berger et al., 2007; Dolcos & McCarthy, 2006; Ladouceur et al., 2010; Posner & Rothbart, 1998). Emotions have also been shown to impair cognition (Dolcos & McCarthy, 2006; Singhal et al., 2012), which could have detrimental effects during development. In line with this, cognitive and behavioral disorders in adolescence have reportedly involved interactions between emotional and attentional states (Dolcos & McCarthy, 2006; Singhal et al., 2012). To delineate the aspects of attentional control that may be aberrant in clinical populations, it is necessary to first identify links between specific types of attention, in this case attentional capture, and emotional engagement in non-affected adolescents. The continued changes in neural activity to sounds observed neurophysiologically during adolescence (Ponton et al., 2000; Sussman et al., 2008) further illustrate adolescents' potential as a unique age group for investigating the development of attention and emotions.

The current study tested the hypothesis that unattended environmental events garner attention depending upon the type of context that induces an emotional state. Participants watched video clips and rated them according to whether they were perceived as negative, positive, or neutral. Novel environmental sounds were presented randomly amongst standard sounds and used to investigate how perceived emotional state influences processing of attention-capturing irrelevant sound events via the P3a component of ERPs. ERPs elicited by the standard tones were additionally used to assess the obligatory responses (P1, N1, P2, and N2 components) during viewing of the different types of video clips. We predicted that emotional processing would alter the threshold for detecting unattended sounds, such that a negatively induced state would produce the most heightened automatic surveillance of the unattended background, consistent with the idea of a survival mechanism. When in a threatening situation, a higher degree of vigilance is needed to assess threatening events. Thus, we expected that ERP indices would be more robust during negative viewing than less threatening situations perceived as positive or neutral.

Methods

Participants

Twelve healthy adolescents, ages 15–21 years (8 males), volunteered for the study. Participants over 18 years gave informed consent. Participants under 18 years gave written assent, and their accompanying parents gave written consent after the protocol was explained to them. The protocol was approved by the Committee for Clinical Investigations at the Albert Einstein College of Medicine, where the study was conducted, and all procedures were carried out according to the Declaration of Helsinki. Participants were right-handed, had no reported history of neurological disorders, were in age-appropriate grades in school, and had no reported use of special educational services. All participants passed a hearing screen (hearing threshold ≤ 20 dB HL for pure tones at 500, 1000, 2000, & 4000 Hz, bilaterally) and were paid for their participation ($10 per hour). Two participants' data were rejected due to excessive EEG artifact. The data from the remaining ten participants are reported (M=17 years).

Stimuli

Visual Stimuli

Video clips were excerpted from Disney™ films with a G rating from the Motion Picture Association of America (meaning that the movie is appropriate for audiences of all ages), using Microsoft Windows Movie Maker (2007, Version 5.1). The clips were chosen to evoke a positive, neutral, or negative feeling in participants. Clips with negative valence included images of mild violence or lamentation; those with positive valence included images of merriment and smiling characters; and those intended to be neutral (evoking no emotion) contained images of abstract art or nature (Fig. 1). Clips were played silently for approximately 40 seconds each. To verify the association between each video clip and the emotion it was intended to evoke, a behavioral pilot test was conducted in which nine participants, none of whom participated in the main study, rated 90 video clips as evoking a positive, neutral, or negative feeling. The 54 clips that demonstrated at least 88% agreement on the evoked emotion were used in the main experiment.

Figure 1.

Schematic depiction of the stimulus paradigm. Visual (V) and Auditory (A) tracks are presented with simultaneous onsets and offsets. Time scale is represented in seconds (s) on the x-axis. Open rectangles on the audio track depict randomly occurring novel sounds presented amongst pure tones, represented by tick marks. Subject responses are recorded during the 5 s blank screen period.

Sound sequences (described below) were paired with the video sequences (Fig 1). The paired video clips were presented in a randomized order across the three trial types (positive, negative, neutral) in six stimulus blocks. The order of the six blocks was also randomized in presentation across participants. During the experiment, each of the video clips was separated by a blank screen of approximately 5 seconds. No sounds were presented during the blank screen period (Fig 1).

Auditory Stimuli

Three auditory stimuli were used: two pure tones and one complex environmental sound. The two pure tones differed in frequency and probability: one had a frequency of 880 Hz (p=.80) and the other a frequency of 988 Hz (p=.10). Tones were created using Adobe Audition® 1 software (Adobe Systems, Inc., San Jose, CA). The environmental sounds ("novels", p=.10) were sounds from nature and man-made sources (e.g., bird tweeting, telephone ringing). Sound duration was 200 ms (10 ms rise/fall time). Sounds were equated for loudness using the root mean square (RMS) amplitude and presented at 70 dB SPL, measured using a sound pressure level meter (Brüel & Kjær 2209). Sounds were presented with a stimulus onset asynchrony (SOA) of 1.2 s. The order of the sounds in the sequences was quasi-randomized, with the exception that at least two standard sounds (880 Hz tones) intervened between any two infrequently occurring sounds. A total of 601 sounds (standard/deviant/novel) were presented during the Positive video trials, 574 during the Negative trials, and 635 during the Neutral trials.
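The gap constraint on infrequent sounds can be sketched as follows. This is a hypothetical generator written for illustration; the paper does not describe its randomization software, and the realized probabilities only approximate the nominal .80/.10/.10 because the constraint forces extra standards:

```python
import random

def make_sequence(n_sounds, p_deviant=0.10, p_novel=0.10, min_gap=2, seed=1):
    """Hypothetical quasi-random sound sequence: at least `min_gap`
    standard tones must intervene between any two infrequent sounds
    (deviants or novels), as in the paradigm described above."""
    rng = random.Random(seed)
    seq = []
    since_rare = min_gap  # warm-up: a rare sound may occur immediately
    for _ in range(n_sounds):
        r = rng.random()
        if since_rare >= min_gap and r < p_deviant + p_novel:
            seq.append("deviant" if r < p_deviant else "novel")
            since_rare = 0
        else:
            seq.append("standard")
            since_rare += 1
    return seq

seq = make_sequence(601)  # e.g., the Positive-trial sequence length
```

Any two non-standard entries in `seq` are separated by at least two standards, matching the constraint stated above.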

Procedure

Participants sat in a sound attenuated booth (Industrial Acoustics Corp., Bronx, NY) and were instructed to watch the video clips presented on a video monitor placed approximately 1.6 meters directly in front of them. The contemporaneous tones were presented bilaterally with insert earphones (E-A-R-tone 3A®), and participants were told to ignore them. After each video clip ended, during the 5 s blank screen, participants were instructed to write their evaluation of the emotional content of the clip as evoking a positive, neutral, or negative feeling. The task had two purposes: to keep participants focused on the video, and to assure that the emotion evoked by the clips was consistent with what was intended.

Electroencephalogram (EEG) Recording and Data Analysis

EEG was recorded with an electrode cap (Electro-Cap International, Inc.) from 31 electrode sites: Fpz, Fp1, Fp2, Fz, F3, F4, F7, F8, FC1, FC2, FC5, FC6, Cz, C3, C4, CP1, CP2, CP5, CP6, T7, T8, Pz, P3, P4, P7, P8, Oz, O1, O2 (10–20 system) and the left and right mastoids (LM and RM). Eye movements were monitored with the electrooculogram (EOG) using a bipolar electrode configuration between F7 and F8 for horizontal eye movements (HEOG), and between Fp1 and an external electrode positioned below the left eye for vertical eye movements (VEOG). PO9 was used as the ground electrode. All impedances were maintained below 5 kΩ. EEG and EOG were digitized (Neuroscan Synamps amplifier, Compumedics Corp., El Paso, Texas) at a 500 Hz sampling rate (0.05–100 Hz band pass) and low-pass filtered off-line at 15 Hz. Epochs were 600 ms, starting 100 ms before stimulus onset. The artifact rejection criterion was set at ±100 μV on all electrodes after baseline correction was applied to the EEG epochs.
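The epoching, baseline correction, and ±100 μV rejection steps can be sketched with plain numpy. This is a minimal illustration under our own naming, not the authors' Neuroscan pipeline:

```python
import numpy as np

def epoch_and_reject(eeg, events, sfreq=500.0, tmin=-0.1, tmax=0.5, thresh=100.0):
    """Cut epochs around stimulus onsets, baseline-correct to the
    pre-stimulus interval, and reject any epoch exceeding +/-thresh (uV).
    eeg: (n_channels, n_samples) array; events: onset sample indices."""
    pre = int(round(-tmin * sfreq))    # samples before onset (100 ms -> 50)
    post = int(round(tmax * sfreq))    # samples after onset (500 ms -> 250)
    kept = []
    for onset in events:
        if onset - pre < 0 or onset + post > eeg.shape[1]:
            continue  # epoch falls outside the recording
        ep = eeg[:, onset - pre:onset + post].astype(float)
        ep -= ep[:, :pre].mean(axis=1, keepdims=True)  # baseline correction
        if np.abs(ep).max() <= thresh:  # artifact rejection criterion
            kept.append(ep)
    return np.array(kept)  # (n_kept_epochs, n_channels, n_samples)
```

With a 500 Hz sampling rate, each kept epoch spans 300 samples (600 ms), 50 of them pre-stimulus, matching the epoch definition above.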

Grand-averaged ERPs were calculated separately by stimulus type (standard and novel sounds), and separately for each trial type (positive, negative, and neutral). There were no hypotheses with respect to the deviant sounds for the current study. Thus, responses to deviant sounds are not reported. The ERP responses to the first 10 sounds of each of the six stimulus blocks were excluded from analysis to avoid potential orienting responses that might have occurred at the start of each stimulus block.

The obligatory components were measured from the ERPs evoked by the standard sounds, separately by trial type. P1 mean amplitude was measured in each individual using a 30 ms interval centered on the grand-mean peak latency of the component (60 ms). N1 mean amplitude was measured in each individual using a 40 ms interval centered on the grand-mean peak latency of the component (96 ms). P2 mean amplitude was measured in each individual using a 40 ms interval centered on the grand-mean peak latency of the component (160 ms). The N2 obligatory response was visually identified in the standard waveforms (consistent with Ponton et al., 2000, and Sussman et al., 2008, for adolescents); its mean amplitude was measured in each individual using a 40 ms interval centered on the grand-mean peak of the waveforms (246 ms).
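The mean-amplitude measurement can be sketched as a window average on a single-subject waveform; the function name and defaults (here the N1 case: 40 ms around 96 ms) are ours:

```python
import numpy as np

def mean_amplitude(erp, sfreq=500.0, epoch_start=-0.1,
                   center_ms=96.0, width_ms=40.0):
    """Mean amplitude of one subject's ERP in a window centered on the
    grand-mean peak latency (e.g., N1: 40 ms window around 96 ms).
    erp: 1-D waveform at one electrode; time 0 of the epoch is
    `epoch_start` seconds relative to stimulus onset."""
    t0 = (center_ms - width_ms / 2) / 1000.0 - epoch_start
    t1 = (center_ms + width_ms / 2) / 1000.0 - epoch_start
    i0, i1 = int(round(t0 * sfreq)), int(round(t1 * sfreq))
    return erp[i0:i1 + 1].mean()
```

The same function covers P1, P2, and N2 by changing `center_ms` and `width_ms` to the values reported above.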

Effects of trial type (condition) on involuntary attention were measured with the P3a component evoked by the novel sounds (Escera et al., 1998; Friedman et al., 2001; Polich, 2003). To measure the P3a, difference waveforms were created by subtracting the grand-mean ERP response to the standard sounds from the grand-mean ERP response to the novel sounds, for each condition separately. The peak latency was chosen from the grand-mean difference waveforms (238ms). Mean amplitude in each individual was obtained using a 50 ms interval centered on the peak latency.
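The difference-wave procedure above can be sketched as follows. The peak-search window is our assumption (the paper reports only the resulting grand-mean peak); function and argument names are hypothetical:

```python
import numpy as np

def p3a_mean_amplitudes(novel, standard, sfreq=500.0, epoch_start=-0.1,
                        width_ms=50.0, search_from_ms=150.0):
    """Novel-minus-standard difference waves per subject. The P3a peak
    latency is taken from the grand-mean difference wave (most positive
    point after `search_from_ms`), then each subject's mean amplitude is
    measured in a `width_ms` window centered on that peak.
    novel, standard: (n_subjects, n_samples) ERPs at one electrode."""
    diff = novel - standard
    grand = diff.mean(axis=0)
    start = int(round((search_from_ms / 1000.0 - epoch_start) * sfreq))
    peak = start + int(np.argmax(grand[start:]))
    half = int(round(width_ms / 2000.0 * sfreq))
    amps = diff[:, peak - half:peak + half + 1].mean(axis=1)
    latency_ms = (peak / sfreq + epoch_start) * 1000.0
    return amps, latency_ms
```

Note that the peak is chosen once, from the grand-mean difference wave, and the same window is then applied to every individual, as described above.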

Presence of the P3a component was determined using two-tailed, one-sample t-tests calculated from the Cz electrode (greatest signal-to-noise ratio) on the difference waveforms to determine whether the mean amplitude was significantly larger than zero.
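The presence test can be sketched with scipy; the amplitude values below are illustrative stand-ins, not the study's data:

```python
import numpy as np
from scipy import stats

# Two-tailed one-sample t-test against zero on each subject's mean
# P3a difference-wave amplitude at Cz. These ten values are made up
# for illustration (the paper reports, e.g., M=10.51, t(9)=7.65 for
# the negative condition).
amps = np.array([9.1, 12.3, 7.4, 11.0, 8.8, 10.2, 6.9, 13.1, 9.7, 8.0])
t_stat, p_val = stats.ttest_1samp(amps, popmean=0.0)
```

A significantly positive `t_stat` (with df = n − 1 = 9) indicates that the P3a was reliably present in that condition.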

Two-way repeated-measures ANOVAs with factors of condition (positive, negative, neutral) and electrode (Fz, Cz, Pz) were calculated to compare the amplitudes and latencies of the ERP components (P1, N1, P2, N2, and P3a separately for each component and separately for amplitude and latency), across the three conditions, and to assess the scalp distribution of each component. Post hoc calculations were conducted using Tukey HSD analysis. Greenhouse-Geisser method was used to correct for violations of sphericity, and the p and epsilon values are reported.

Results

Behavioral data

Table 1 displays the mean distribution of evaluative responses for the three trial types. Participant assignment of the trial type was consistent with the intended valence: 93% of the positive clips were rated as positive, 80% of the neutral clips, and 90% of the negative clips. Data were not excluded on the basis of behavioral responses due to the low number of incorrectly labeled video clips, and because clips were not consistently mislabeled across participants.

Table 1.

Behavioral Responses

Label given   Positive clips   Neutral clips   Negative clips
Positive      .93              .16             .01
Neutral       .07              .80             .09
Negative      .00              .04             .90

EEG data

Trial type had no effect on the latency of the obligatory responses or P3a components (Table 2).

Table 2.

ERP latencies. ERP components, electrode and interval used to measure the peaks, mean peak latency (in msec) with standard deviation (in parentheses) obtained during Positive, Neutral, and Negative movie clips.

ERP Component   Measuring Electrode   Measurement Interval (ms)   Positive clips   Neutral clips   Negative clips
P1              Fz                    45–75                       58 (8)           57 (9)          59 (9)
N1              Fz                    76–116                      99 (8)           96 (8)          96 (7)
P2              Cz                    140–180                     160 (13)         162 (11)        160 (11)
N2              Fz                    226–266                     245 (12)         249 (12)        247 (15)
P3a             Cz                    213–263                     225 (10)         224 (10)        227 (9)

P3a component

A P3a component, indexing involuntary allocation of attention to novel sounds, was elicited in all trial types: Positive (M=9.65, df=9, t=5.16, p<0.001); Neutral (M=7.77, df=9, t=4.18, p=0.002); and Negative (M=10.51, df=9, t=7.65, p<0.001). The P3a component had a central scalp distribution (main effect of electrode: F2,18=9.26, ε=0.75, p=0.005) (Figs. 2–3, Tables 2–3). Post hoc calculations revealed that the P3a amplitude was largest at the vertex electrode site (Cz). The amplitude of the P3a was affected by emotional processing (main effect of condition: F2,18=4.95, ε=0.92, p=0.023). Post hoc analysis showed that the amplitude of the P3a component was larger when elicited in the negative condition than in the neutral condition. Although there was a linear trend of amplitude negative > positive > neutral (largest to smallest, respectively), it did not quite reach significance (Fig. 2). There was no interaction between condition and electrode (p>0.82).

Figure 2.

Event-related brain potentials (ERPs) displayed for the midline Fz (top row), Cz (middle row), and Pz (bottom row) electrodes. ERP responses evoked by the standard tones (left column) and by the novel sounds (right column) are displayed for the Negative (solid line), Positive (dotted line), and Neutral (dashed line) video clip trials. The P1 component is labeled at the Fz electrode in the standard sounds (top row, left column), the P2 component at the Cz electrode (middle row, left column), and the P3a component at the Cz electrode in the difference waveform (middle row, right column), where signal-to-noise ratio is greatest for each respective component. X-axis depicts the time scale in milliseconds. Y-axis depicts microvolts, positive is up.

Figure 3.

Difference waveforms are displayed at the Cz electrode, overlaying the P3a components elicited during viewing of the Negative (solid line), Positive (dotted line), and Neutral (dashed line) movie clips. X-axis depicts the time scale in milliseconds. Y-axis depicts microvolts, positive is up. P3a component is labeled.

Table 3.

ERP amplitudes. ERP component mean amplitude (in μV) and standard deviation (in parentheses) obtained during Positive, Neutral, and Negative movie clips.

ERP Component Electrode Positive clips Neutral clips Negative clips
P1 Fz 0.72 (1.12) 0.53 (1.31) 0.21 (1.15)
N1 Fz −1.53 (1.72) −1.47 (1.81) −2.01 (2.02)
P2 Cz 1.45 (0.81) 2.04 (1.09) 1.28 (1.34)
N2 Fz −1.72 (1.71) −1.34 (1.91) −2.14 (1.99)
P3a Cz 9.22 (5.36) 7.77 (5.58) 10.51 (4.12)

Obligatory responses

The P1 component (Fig 2, top row, left column), elicited by standard sounds in all three conditions, had a fronto-central scalp distribution (main effect of electrode: F2,18=6.35, ε=0.56, p=0.028). Post hoc calculations revealed that the amplitude at Fz and Cz was larger than at Pz (Fz=Cz>Pz). There was no significant condition effect on amplitude (condition effect: F2,18=1.53, p=0.25). Thus, the P1 component was not modulated by trial type.

The amplitude of the N1 component, elicited by standard sounds in all three conditions, was largest for the negative trials (−1.57 μV) compared to positive (−1.22 μV) or neutral (−0.97 μV), but this difference did not reach significance (condition effect: F2,18=2.39, ε=0.757, p=0.137). N1 amplitude at Fz and Cz was visibly larger than at Pz (Fig 2, top row, left column), but this did not reach significance after correction (effect of electrode: F2,18=4.43, ε=0.53, p=0.06). There was no interaction between factors (p>0.2). Thus, there was no significant effect on N1.

The P2 component (Fig 2, middle row, left column), elicited by standard sounds in all three conditions, was modulated by trial type (main effect of condition: F2,18=4.14, ε=0.87, p=0.041). Post hoc calculations showed that the amplitude was largest for the neutral trials (1.52 μV) compared to the negative (0.82 μV) and positive (0.97 μV) trials (Fig 2, middle row, left column). Amplitudes evoked during the positive and negative trials did not significantly differ. The P2 amplitude was largest at the vertex (Cz) (main effect of electrode: F2,18=6.94, ε=0.60, p=0.02). The interaction between factors did not reach significance (p=0.085).

N2 amplitude, elicited by standard sounds in all three conditions, was affected by trial type (main effect of condition: F2,18=5.98, ε=0.88, p=0.014) (Fig 2, top row, left column). Post hoc analysis revealed that the N2 was significantly larger in the negative (−1.56 μV) than in the neutral (−0.58 μV) trials. The N2 amplitude elicited during positive trials (−1.16 μV) was not significantly different than either negative or neutral (though N2 in positive was trending toward being larger than in neutral trials: p=0.058). The N2 amplitude was largest at the fronto-central electrodes (main effect of electrode: F2,18=6.66, ε=0.51, p=0.029). Post hoc calculation showed that the N2 amplitude was larger at Fz and Cz electrodes than Pz.

The scalp distributions of the obligatory components (fronto-central for the P1, N1, and N2 components; central for the P2 component) are consistent with our previous study in adolescents (Sussman et al., 2008).

Discussion

The main goal of this study was to test whether emotional processing modulates the allocation of attention to irrelevant background sounds in adolescence. Emotional states were induced by having participants watch video clips with negative, positive, or neutral valence, while irrelevant sounds were presented to the ears. The main finding was that when participants watched negatively charged video clips, the P3a amplitude to irrelevant novel sounds was significantly larger than when they watched neutral clips. This finding suggests that negative valence lowers the threshold for processing irrelevant sounds, which is consistent with a survival point of view. If the perceived situation is negatively charged, there is a higher likelihood of potentially negative consequences; therefore, automatic processing of the unattended environment for potentially threatening events is crucial. In contrast, in a neutral or positively charged situation, the perceived imminent threat of danger would be lower and automatic monitoring of the background environment for potentially threatening events less needed. Thus, the larger amplitude P3a, reflecting involuntary attention, suggests that the irrelevant salient events garnered more automatic attention during perceived negative viewing.

The early obligatory components, P1 and N1, showed no effect of the induced emotion: there were no differences in amplitude or latency of these components when viewing positive, neutral, or negative video clips. However, response effects induced by emotional processing were not limited to the occurrence of the unexpected novel sounds. The later obligatory components (P2 and N2) were modulated by the type of clip that was viewed. P2 amplitudes evoked during viewing of positive and negative video clips were attenuated compared to neutral clips. The N2 component, in contrast, was enhanced during the emotionally charged video clips compared to the neutral clips. During the emotional conditions, the standard waveforms were overall more negative across the whole epoch than the neutral waveform, which could account for the attenuation of the P2 component and the enhancement of the N2 component. Thus, in contrast to the P3a component, the P2 and N2 components were differentiated according to any emotional content in the viewed clips compared to neutral. This suggests differential attention to the overall background stimulation during viewing of emotionally charged video clips, and may indicate that the ongoing state of the viewer is altered during emotional viewing; further investigation is needed. It is also possible that specific mechanisms were modulated by emotional processing, in line with the evidence that P2 and N2 are distinct components with different neural generators, reflecting different processes (Crowley & Colrain, 2004). A larger N2 amplitude induced during negatively charged video clips is consistent with results of Ladouceur et al. (2007), who found larger N2 amplitudes in adolescents resulting from an interaction between negative affect and attentional control.

These results may have implications for clinical disorders. Children diagnosed with major depression (MD) demonstrate increased distraction when compared with controls (Lepistö et al., 2004; Sévigny et al., 2003). This may suggest that a more negative internal state alters processing of the ongoing input, such that irrelevant background sounds induce larger responses than in individuals in a more neutral or positive state. In addition, children and adults diagnosed with attention-deficit/hyperactivity disorder (ADHD), whose attention to exogenous stimuli is thought to be impaired, appear to have a higher incidence of depression or anxiety when compared to the general population (Eyestone & Howell, 1994; Kayser et al., 2000; Satterfield et al., 1994; Williams et al., 2008). These reports potentially suggest that processing systems normally allocated only for negative states are persistently switched on in patients with ADHD, MD, or generalized anxiety. Further, this may partially explain results showing that medications used to treat depression and ADHD induce alterations in the auditory ERP responses of those patients who respond to them (Alhaj et al., 2011; Daviss et al., 2001; Vaughan et al., 2009).

Finally, the age of the participants did not appear to be a factor when considering effects of emotion processing on involuntary attentional allocation. Enhanced P3a was also demonstrated in young adults during negative trials (Domínguez-Borrás et al., 2008). However, there may be a difference in the timing of the effects, as the adult participants exhibited effects of emotion on processing standard tones earlier than adolescent participants in the current study, reflected in the P1 and N1 components (Alexandrov et al., 2007). Older adults may display less distractibility during emotional tasks than younger adults (Samanez-Larkin et al., 2009).

Highlights

  • Emotional processing alters processing of ongoing, unattended background sounds.

  • Attention bias was seen in modulation of P2 and N2 obligatory components to standard sounds during viewing of emotional compared to neutral video clips.

  • Viewing negatively-charged videos modulated involuntary allocation of attention to irrelevant novel environmental sounds.

  • Negative valence lowers the threshold for processing irrelevant sounds.

Acknowledgements

This research was supported by the National Institutes of Health (R01 DC004263). We thank Jean DeMarco for her assistance with subject recruitment and EEG set-up.


References

  1. Alhaj H, Wisniewski G, McAllister-Williams R. The use of the EEG in measuring therapeutic drug action: focus on depression and antidepressants. J Psychopharmacol. 2011;25(9):1175–1191. doi: 10.1177/0269881110388323.
  2. Alexandrov YI, Klucharev V, Sams M. Effect of emotional context in auditory-cortex processing. Int J Psychophysiol. 2007;65(3):261–271. doi: 10.1016/j.ijpsycho.2007.05.004.
  3. Alexandrov YI, Sams ME. Emotion and consciousness: Ends of continuum. Brain Res Cogn Brain Res. 2005;25(2):387–405. doi: 10.1016/j.cogbrainres.2005.08.006.
  4. Andrés P, Parmentier FB, Escera C. The effect of age on involuntary capture of attention by irrelevant sounds: a test of the frontal hypothesis of aging. Neuropsychologia. 2006;44(12):2564–2568. doi: 10.1016/j.neuropsychologia.2006.05.005.
  5. Berger A, Kofman O, Livneh U, Henik A. Multidisciplinary perspectives on attention and the development of self-regulation. Prog Neurobiol. 2007;82(5):256–286. doi: 10.1016/j.pneurobio.2007.06.004.
  6. Carretié L, Mercado F, Tapia M, Hinojosa JA. Emotion, attention, and the `negativity bias', studied through event-related potentials. Int J Psychophysiol. 2001;41(1):75–85. doi: 10.1016/s0167-8760(00)00195-1.
  7. Crowley KE, Colrain IM. A review of the evidence for P2 being an independent component process: age, sleep and modality. Clin Neurophysiol. 2004;115(4):732–744. doi: 10.1016/j.clinph.2003.11.021.
  8. Cuthbert BN, Schupp HT, Bradley M, McManis M, Lang PJ. Probing affective pictures: attended startle and tone probes. Psychophysiology. 1998;35(3):344–347. doi: 10.1017/s0048577298970536.
  9. Daviss WB, Bentivoglio P, Racusin R, Brown KM, Bostic JQ, Wiley L. Bupropion sustained release in adolescents with comorbid attention-deficit/hyperactivity disorder and depression. J Am Acad Child Adolesc Psychiatry. 2001;40(3):307–314. doi: 10.1097/00004583-200103000-00010.
  10. Dolcos F, McCarthy G. Brain systems mediating cognitive interference by emotional distraction. J Neurosci. 2006;26(7):2072–2079. doi: 10.1523/JNEUROSCI.5042-05.2006.
  11. Domínguez-Borrás J, Garcia-Garcia M, Escera C. Emotional context enhances auditory novelty processing: behavioural and electrophysiological evidence. Eur J Neurosci. 2008;28(6):1199–1206. doi: 10.1111/j.1460-9568.2008.06411.x.
  12. Eastwood JD, Smilek D, Merikle PM. Negative facial expression captures attention and disrupts performance. Percept Psychophys. 2003;65(3):352–358. doi: 10.3758/bf03194566.
  13. Escera C, Alho K, Winkler I, Näätänen R. Neural mechanisms of involuntary attention to acoustic novelty and change. J Cogn Neurosci. 1998;10(5):590–604. doi: 10.1162/089892998562997.
  14. Eyestone LL, Howell RJ. An epidemiological study of attention-deficit hyperactivity disorder and major depression in a male prison population. Bull Am Acad Psychiatry Law. 1994;22(2):181–193.
  15. Fichtenholtz HM, Hopfinger JB, Graham R, Detwiler JM, LaBar KS. Happy and fearful emotion in cues and targets modulate event-related potential indices of gaze-directed attentional orienting. Soc Cogn Affect Neurosci. 2007;2(4):323–333. doi: 10.1093/scan/nsm026.
  16. Friedman D, Cycowicz YM, Gaeta H. The novelty P3: an event-related brain potential (ERP) sign of the brain's evaluation of novelty. Neurosci Biobehav Rev. 2001;25(4):355–373. doi: 10.1016/s0149-7634(01)00019-7.
  17. Hopfinger J, West V. Interactions between endogenous and exogenous attention on cortical visual processing. Neuroimage. 2006;31(2):774–789. doi: 10.1016/j.neuroimage.2005.12.049.
  18. Johnson JA, Zatorre RJ. Attention to simultaneous unrelated auditory and visual events: behavioral and neural correlates. Cereb Cortex. 2005;15(10):1609–1620. doi: 10.1093/cercor/bhi039.
  19. Kayser J, Bruder GE, Tenke CE, Stewart JE, Quitkin FM. Event-related potentials (ERPs) to hemifield presentations of emotional stimuli: differences between depressed patients and healthy adults in P3 amplitude and asymmetry. Int J Psychophysiol. 2000;36(3):211–236. doi: 10.1016/s0167-8760(00)00078-7.
  20. Katayama J, Polich J. Stimulus context determines P3a and P3b. Psychophysiology. 1998;35(1):23–33.
  21. Keil A, Bradley MM, Hauk O, Rockstroh B, Elbert T, Lang PJ. Large-scale neural correlates of affective picture processing. Psychophysiology. 2002;39(5):641–649. doi: 10.1017/S0048577202394162.
  22. Ladouceur CD, Conway A, Dahl RE. Attentional control moderates relations between negative affect and neural correlates of action monitoring in adolescence. Dev Neuropsychol. 2010;35(2):194–211. doi: 10.1080/87565640903526553.
  23. Lang PJ, Bradley MM, Cuthbert BN. Emotion, attention, and the startle reflex. Psychol Rev. 1990;97(3):377–395.
  24. Lang PJ, Bradley MM, Cuthbert BN. Emotion, motivation, and anxiety: brain mechanisms and psychophysiology. Biol Psychiatry. 1998;44(12):1248–1263. doi: 10.1016/s0006-3223(98)00275-3.
  25. Lepistö T, Soininen M, Čeponienė R, Almqvist F, Näätänen R, Aronen ET. Auditory event-related potential indices of increased distractibility in children with major depression. Clin Neurophysiol. 2004;115(3):620–627. doi: 10.1016/j.clinph.2003.10.020.
  26. Oray S, Lu ZL, Dawson ME. Modification of sudden onset auditory ERP by involuntary attention to visual stimuli. Int J Psychophysiol. 2002;43:213–224. doi: 10.1016/s0167-8760(01)00174-x.
  27. Polich J. Overview of P3a and P3b. In: Detection of Change: Event-Related Potential and fMRI Findings. Kluwer Academic Press; Boston: 2003. pp. 83–98.
  28. Ponton CW, Eggermont JJ, Kwong B, Don M. Maturation of human central auditory system activity: evidence from multi-channel evoked potentials. Clin Neurophysiol. 2000;111(2):220–236. doi: 10.1016/s1388-2457(99)00236-9.
  29. Posner MI, Rothbart MK. Attention, self-regulation and consciousness. Philos Trans R Soc Lond B Biol Sci. 1998;353(1377):1915–1927. doi: 10.1098/rstb.1998.0344.
  30. Rosenfeld JP, Bhat K, Miltenberger A, Johnson M. Event-related potentials in the dual task paradigm: P300 discriminates engaging and non-engaging films when film-viewing is the primary task. Int J Psychophysiol. 1992;12(3):221–232. doi: 10.1016/0167-8760(92)90060-o.
  31. Samanez-Larkin GR, Robertson ER, Mikels JA, Carstensen LL, Gotlib IH. Selective attention to emotion in the aging brain. Psychol Aging. 2009;24(3):519–529. doi: 10.1037/a0016952.
  32. Satterfield JH, Schell AM, Nicholas T. Preferential neural processing of attended stimuli in attention-deficit hyperactivity disorder and normal boys. Psychophysiology. 1994;31(1):1–10. doi: 10.1111/j.1469-8986.1994.tb01018.x.
  33. Schupp H, Cuthbert B, Bradley M, Hillman C, Hamm A, Lang P. Brain processes in emotional perception: Motivated attention. Cogn Emot. 2004;18(5):593–611.
  34. Schupp HT, Stockburger J, Codispoti M, Junghöfer M, Weike AI, Hamm AO. Selective visual attention to emotion. J Neurosci. 2007;27(5):1082–1089. doi: 10.1523/JNEUROSCI.3223-06.2007.
  35. Sévigny MC, Everett J, Grondin S. Depression, attention, and time estimation. Brain Cogn. 2003;53(2):351–353. doi: 10.1016/s0278-2626(03)00141-6.
  36. Singhal A, Shafer AT, Russell M, Gibson B, Wang L, Vohra S, Dolcos F. Electrophysiological correlates of fearful and sad distraction on target processing in adolescents with attention deficit-hyperactivity symptoms and affective disorders. Front Integr Neurosci. 2012;6:119. doi: 10.3389/fnint.2012.00119.
  37. Smith NK, Cacioppo JT, Larsen JT, Chartrand TL. May I have your attention, please: electrocortical responses to positive and negative stimuli. Neuropsychologia. 2003;41:171–183. doi: 10.1016/s0028-3932(02)00147-1.
  38. Sugimoto S, Nittono H, Hori T. Visual emotional context modulates brain potentials elicited by unattended tones. Int J Psychophysiol. 2007;66(1):1–9. doi: 10.1016/j.ijpsycho.2007.05.007.
  39. Surakka V, Tenhunen-Eskelinen M, Hietanen JK, Sams M. Modulation of human auditory information processing by emotional visual stimuli. Brain Res Cogn Brain Res. 1998;7(2):159–163. doi: 10.1016/s0926-6410(98)00021-4.
  40. Sussman E, Steinschneider M, Gumenyuk V, Grushko J, Lawson K. The maturation of human evoked brain potentials to sounds presented at different stimulus rates. Hear Res. 2008;236(1–2):61–79. doi: 10.1016/j.heares.2007.12.001.
  41. Sussman E, Steinschneider M. Neurophysiological evidence for context-dependent encoding of sensory input in human auditory cortex. Brain Res. 2006;1075(5):165–174. doi: 10.1016/j.brainres.2005.12.074.
  42. Suzuki J, Nittono H, Hori T. Level of interest in video clips modulates event-related potentials to auditory probes. Int J Psychophysiol. 2005;55(1):35–43. doi: 10.1016/j.ijpsycho.2004.06.001.
  43. Vardi Y, Volos M, Sprecher E, Granovsky Y, Gruenwald I, Yarnitsky D. A P300 event-related potential technique for assessment of sexually oriented interest. J Urol. 2006;176(6):2736–2740. doi: 10.1016/j.juro.2006.07.134.
  44. Vaughan B, Kratochvil CJ, Fegert J. Update on atomoxetine in the treatment of attention-deficit/hyperactivity disorder. Expert Opin Pharmacother. 2009;10(4):669–676. doi: 10.1517/14656560902762873.
  45. Williams LM, Hermens DF, Palmer D, Kohn M, Clarke S, Keage H, Clark CR, Gordon E. Misinterpreting emotional expressions in attention deficit/hyperactivity disorder: evidence for a neural marker and stimulant effects. Biol Psychiatry. 2008;63(10):917–926. doi: 10.1016/j.biopsych.2007.11.022.
  46. Wong G, Dolcos S, Denkova E, Morey R, Wang L, McCarthy G, Dolcos F. Brain imaging investigation of the impairing effect of emotion on cognition. J Vis Exp. 2012;(60):2434. doi: 10.3791/2434.
  47. Wronka E, Kaiser J, Coenen AML. The auditory P3 from passive and active three-stimulus oddball paradigm. Acta Neurobiol Exp (Wars). 2008;68(3):362–372. doi: 10.55782/ane-2008-1702.