Abstract
How task focus affects the recognition of change in vocal emotion remains under debate. In this study, we investigated the role of task focus in the detection of change in emotional prosody by measuring changes in event-related electroencephalogram (EEG) power. EEG was recorded while participants listened to prosodies with and without emotion change and performed either an emotion change detection task (explicit) or a visual probe detection task (implicit). We found that vocal emotion change induced theta event-related synchronization during 100–600 ms regardless of task focus. More importantly, vocal emotion change induced significant beta event-related desynchronization during 400–750 ms under the explicit but not the implicit task condition. These findings suggest that the detection of emotional change is independent of task focus, whereas the effect of task focus on the neural processing of vocal emotion change is specific to the integration of emotional deviations.
Keywords: Vocal emotion, Change detection, ERS, ERD, Task effect
Introduction
Identifying changes in vocal emotional expressions is important for coordinating social interactions (Salovey and Mayer 1989; Van Kleef 2009). For example, in a negotiation, an unexpected obstacle may result in a shift from happiness to anger. One must continuously monitor and rapidly detect changes in an interlocutor’s vocal emotion in order to adapt one’s behavior accordingly. In fact, human beings are geared to recognize emotion change in vocal expressions quickly. For instance, deviant emotional notes, which produce emotion change in the oddball paradigm, elicited early negative responses (MMN) around 200 ms under passive conditions (Goydke et al. 2004; Jiang et al. 2014; Schirmer and Escoffier 2010; Schirmer et al. 2008; Schirmer et al. 2005; Thonnessen et al. 2010). In contrast, deviant emotional notes elicited a late positive deflection (P300) in the active oddball paradigm (Thierry and Roberts 2007; Wambacq et al. 2004). Similar results were reported with the cross-splicing paradigm, in which the context preceding the splicing point allows participants to build an expectation and the cross-spliced part deviates from that context, thus producing emotion change (Chen et al. 2011; Kotz and Paulmann 2007; Paulmann and Kotz 2008). Kotz and Paulmann (2007) first reported that emotional expectancy violation in prosody elicited a positive deflection 350 ms after the changing point (Prosodic Expectancy Positivity, PEP). Subsequent studies showed that emotional expectancy violation elicited the PEP response irrespective of speaker voice, emotional category, and task demands (Paulmann et al. 2012; Paulmann and Kotz 2008). Furthermore, studies in Mandarin Chinese also reported a positive deflection resembling the PEP, preceded by an early negative response (N2) (Chen et al. 2012; Chen et al. 2011). In short, the studies addressing the time course of vocal emotion change perception suggest that the brain detects vocal emotion change at several stages, indexed by an early negative deflection (MMN/N2) and a late positive deflection (PEP/P300).
While there is consensus that the brain detects emotion change in vocal expressions, debate remains regarding the role of task focus in this processing. As stated above, the brain can recognize vocal emotion change in both passive and active oddball paradigms, but the time course of the brain response varies as a function of task demands, suggesting a critical role of task focus (Goydke et al. 2004; Wambacq and Jerger 2004). However, early studies adopting the cross-splicing paradigm reported that vocal emotion expectancy violation elicited the PEP independent of task relevance (Kotz and Paulmann 2007; Paulmann and Kotz 2008), implying that the processing of vocal emotion change is not primarily influenced by task focus. Nevertheless, our previous studies showed that prosodies with emotion change elicited an early negativity (N2) and a late positivity (P300) relative to prosodies without emotion change (Chen et al. 2011, 2012). Further, while the early negativity was immune to task focus, the late positivity was modulated by it (Chen et al. 2011). Accordingly, it was proposed that these conflicting findings could be reconciled within the multistage model of vocal emotion perception (Schirmer and Kotz 2006; Wildgruber et al. 2009): the early negativity is associated with deriving emotional significance, whereas the late positivity corresponds to emotion evaluation and reintegration (Chen et al. 2011). Thus, task focus modulates the late stage of vocal emotion change perception, but not the early stage. Nevertheless, this conclusion rests mainly on ERP studies, which focus on phase-locked neural activity (Chen et al. 2011, 2012; Goydke et al. 2004; Wambacq and Jerger 2004). Supporting evidence from EEG oscillatory dynamics, as suggested by Cacace and McFarland (2003), is therefore needed.
Specifically, although the ERP data in previous studies (Chen et al. 2011; Jiang et al. 2014; Wambacq and Jerger 2004) effectively depicted the time course of processing emotional change information, the acquisition of ERP data is restricted to phase-locked averaging (Luck 2005; Pfurtscheller and da Silva 1999). In contrast, time–frequency analysis provides non-phase-locked indexes of cognitive processing and thus delineates neural oscillatory dynamics, an inherent characteristic of synchronizing or desynchronizing ongoing activity in various frequency bands (Makeig et al. 2004; Neuper and Klimesch 2006; Pfurtscheller and da Silva 1999). Neural oscillatory dynamics can be quantified with the event-related spectral perturbation (ERSP), a temporally sensitive index of the relative increase or decrease in mean EEG power from baseline associated with stimulus presentation or response execution, termed event-related synchronization and desynchronization (ERS and ERD), respectively (Delorme and Makeig 2004; Makeig et al. 2004).
Indeed, time–frequency analysis has been applied to the perception of auditory changes other than vocal emotion change. For instance, auditory change detection has been associated with a theta band power increase (Fuentemilla et al. 2008; Hsiao et al. 2009). Moreover, Cacace and McFarland (2003) reported that deviant auditory stimulation in an oddball paradigm induced theta ERS and beta ERD. While the magnitude of theta ERS depended on stimulus and task demands, beta ERD occurred only for easily discriminable stimuli in attention-related target conditions. Additionally, beta ERD has also been associated with deviant syllable detection (Kim and Chung 2008), syntactic unification (Davidson and Indefrey 2007), and the reintegration of abnormal rhythmic patterns (Luo et al. 2010). Given the similarity between emotion change perception and other auditory change processing, it is reasonable to expect that time–frequency analysis would depict the neural oscillatory dynamics of vocal emotion change perception and help to elucidate the role of task focus.
Therefore, in this study we conducted a time–frequency analysis of our previously reported data (Chen et al. 2011) to obtain neural oscillatory evidence concerning the role of task focus in vocal emotion change perception. Based on the neural oscillatory profiles associated with auditory change detection (Cacace and McFarland 2003; Chen et al. 2012; Fuentemilla et al. 2008; Hsiao et al. 2009; Kim and Chung 2008) and deviant stimuli reintegration (Davidson and Indefrey 2007; Luo et al. 2010), we hypothesized that vocal emotion change would induce theta ERS and beta ERD. Moreover, based on the findings that task focus modulated the late integration stage but not the early detection stage of vocal emotion perception (Chen et al. 2011; Schirmer and Kotz 2006), and the finding that beta ERD was modulated by the direction of attention in auditory change detection (Cacace and McFarland 2003), we hypothesized that task focus would modulate the late beta ERD but not the early theta ERS.
Method
Participants
Thirty university students (mean age = 22.25 years, range = 20–26 years, 15 males) were recruited to participate in the experiment. Half of the participants were randomly assigned to the explicit task and the other half to the implicit task. One participant in each task was excluded because of heavy artifacts during the EEG recording session, leaving 14 participants per task in the final data analysis. All participants were self-reported right-handed and free of any affective disorders. They had normal hearing and normal or corrected-to-normal vision. All participants signed an informed consent form prior to the experiment and were paid ¥50 for their participation. The study was approved by the local ethics committee of the Institute of Psychology, Chinese Academy of Sciences, and was conducted following the ethical principles regarding human experimentation (Declaration of Helsinki).
Stimuli
Fifty sentences of neutral content (see Fig. 1 for an example) were produced in neutral and angry prosodies by a male actor who is a native speaker of Mandarin Chinese. Each sentence consists of 12 syllables, each lasting about 200 ms. All materials were recorded in a soundproof chamber at a sampling rate of 22 kHz. All materials had the classical acoustic features of emotional prosodies and could be distinguished accurately (for details see Chen et al. 2011). The emotional prosodies with emotion change (“neutral-to-angry”, NA, and “angry-to-neutral”, AN) were obtained by cross-splicing (Kotz and Paulmann 2007) the first part of a neutral prosody with the second part of an angry prosody, and vice versa (see Fig. 1 for a graphical illustration). Two splicing positions were used to increase the variability of the prosody development, such that the occurrence of the deviation was unpredictable (see Chen et al. 2011 for details). In the present study, only the data for NA and its control stimuli (AA) were included.
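At its core, cross-splicing is waveform concatenation at a syllable onset. A minimal sketch of the operation is given below, assuming the two recordings share a common sampling rate and time-aligned syllable onsets; the file names and splice time are hypothetical, not from the original study:

```python
# Sketch of cross-splicing two prosodies at a syllable onset (illustrative only;
# file names and splice times are hypothetical placeholders).
import numpy as np
import soundfile as sf

def cross_splice(neutral_path, angry_path, splice_s, out_path):
    """Join the pre-splice part of the neutral prosody to the
    post-splice part of the angry prosody (neutral-to-angry, NA)."""
    neutral, sr = sf.read(neutral_path)
    angry, sr2 = sf.read(angry_path)
    assert sr == sr2, "both recordings must share one sampling rate"
    cut = int(round(splice_s * sr))                  # sample index of splice point
    spliced = np.concatenate([neutral[:cut], angry[cut:]])
    sf.write(out_path, spliced, sr)

# e.g., splice at the onset of the fifth syllable (~4 syllables x 200 ms)
cross_splice("neutral_01.wav", "angry_01.wav", splice_s=0.8,
             out_path="NA_01_splice5.wav")
```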
Fig. 1.
Example materials and design. Illustration a explains the splicing procedure. The NA prosodies were obtained by splicing the first part of neutral prosodies to the second part of angry prosodies (green to red), at the onset of the fifth syllable (*) or at the onset of the ninth syllable (#). Triggers were placed at the splicing point and at the corresponding point in the control prosodies (* or #). Illustration b shows the acoustic features of an example NA prosody with emotion change and its control AA prosody without emotion change (only the splicing point at the fifth syllable is illustrated), as oscillograms (top) and voice spectrograms (bottom) with uncorrected pitch contours (blue lines) superimposed (AA all Angry, NA Neutral-to-Angry). (Color figure online)
Procedure
All prosodies were presented in four blocks of 50 trials. In each block, prosodies were presented in a pseudo-randomized order, that is, prosodies from the same type of prosody were presented up to three consecutive trials. Each participant was seated comfortably at a distance of 115 cm from a computer monitor in a sound-attenuating chamber. Each trial began with a fixation cross in the center of the monitor for 300 ms, and then the sentence was presented aurally via headphones while the cross remained on the screen. The sound volume was adjusted for each participant to ensure that all sentences were heard clearly. The offset of sentences was followed by a two-syllable word probe for 300 ms in the implicit condition and a question mark “?” for 300 ms in explicit condition. In the implicit condition, participants were asked to respond as quickly and accurately as possible whether the visual probe had occurred in the preceding sentence while ignoring the prosody. In the explicit task condition, the participants had to decide whether the emotion expressed by the prosody has changed or not as quickly and accurately as possible. The “yes” or “no” response were made by pressing the “J” or “F” button on the keyboard, and the buttons for “yes” or “no” were counterbalanced between participants. Participants were asked to look at the fixation cross, avoiding eye movements during stimuli presentation. The inter-trial interval was 1,500 ms. Practice trails were used to familiarize participants with the procedure.
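The pseudo-randomization constraint (no more than three consecutive trials of one type) can be implemented by rejection sampling. The sketch below illustrates the idea under assumed trial labels and block composition; it is not the authors' presentation software:

```python
# Sketch of the pseudo-randomization constraint via rejection sampling
# (illustrative; actual trial types and block composition may have differed).
import random

def pseudo_randomize(trials, max_run=3, seed=None):
    """Shuffle until no more than `max_run` consecutive trials share a type."""
    rng = random.Random(seed)
    order = list(trials)
    while True:
        rng.shuffle(order)
        # every window of max_run + 1 trials must contain at least two types
        if all(len(set(order[i:i + max_run + 1])) > 1
               for i in range(len(order) - max_run)):
            return order

# hypothetical block: 25 change (NA) and 25 no-change (AA) trials
block = pseudo_randomize(["NA"] * 25 + ["AA"] * 25, max_run=3, seed=1)
```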
EEG recording and analysis
Electroencephalogram (EEG) was recorded at a sampling rate of 500 Hz from 64 Ag–AgCl electrodes mounted in an elastic cap. EEG data were referenced online to the left mastoid. Vertical electrooculograms (EOGs) were recorded supra- and infra-orbitally at the left eye. Horizontal EOG was recorded from the left versus right orbital rim. Impedances were kept below 5 kΩ. EEG and EOG recordings were amplified with a high cutoff of 100 Hz.
Since detailed analyses of the behavioral and ERP results have been reported in a previous study (Chen et al. 2011), here we focus on the time–frequency analysis only. After the data were screened offline for eye movements, muscle artifacts, and electrode drift using NeuroScan 4.3, the EEG was segmented into 3,000 ms epochs time-locked to the splicing points of NA and the corresponding points of AA, starting 1,000 ms prior to the splicing point. The epoched data were imported into the EEGLAB toolbox (Delorme and Makeig 2004, http://sccn.ucsd.edu/eeglab/) running under Matlab 7.8.0 (MathWorks, Natick, MA, USA). The data were high-pass filtered at 0.05 Hz and re-referenced to the average of all scalp electrodes. Epochs with large artifacts (exceeding ±100 μV) were removed. Independent component analysis (ICA) using the Infomax algorithm was applied to the scalp EEG, and independent components (ICs) representing artifacts were rejected.
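For readers working outside EEGLAB, the same preprocessing steps (epoching around the splice trigger, 0.05 Hz high-pass, average reference, ±100 μV rejection, Infomax ICA) can be sketched with MNE-Python. The file name and event codes below are hypothetical placeholders, and the sketch is not the authors' pipeline:

```python
# Sketch of the preprocessing pipeline in MNE-Python (illustrative only).
import mne

raw = mne.io.read_raw_fif("raw.fif", preload=True)   # placeholder file name
raw.filter(l_freq=0.05, h_freq=None)                 # 0.05 Hz high-pass
raw.set_eeg_reference("average")                     # average of scalp electrodes

events = mne.find_events(raw)                        # triggers at splice points
event_id = {"NA": 1, "AA": 2}                        # hypothetical event codes
epochs = mne.Epochs(raw, events, event_id,
                    tmin=-1.0, tmax=2.0,             # 3,000 ms epochs, -1,000 ms start
                    baseline=None, preload=True,
                    reject=dict(eeg=100e-6))         # peak-to-peak rejection,
                                                     # approximating the ±100 uV rule

# Infomax ICA to isolate ocular/muscle components, then remove them
ica = mne.preprocessing.ICA(method="infomax", random_state=0)
ica.fit(epochs)
ica.exclude = [0, 1]                                 # indices chosen after inspection
epochs_clean = ica.apply(epochs.copy())
```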
Trial-by-trial time–frequency analysis was computed with Morlet wavelets using linearly increasing cycles, from 3 cycles at the lowest frequency analyzed (3 Hz) to 62.5 cycles at the highest (125 Hz). Changes in event-related spectral power (in dB) were computed with the ERSP index (Makeig 1993):

$$\mathrm{ERSP}(f,t) = \frac{1}{n}\sum_{k=1}^{n}\left|F_{k}(f,t)\right|^{2} \qquad (1)$$

where, for n trials, $F_k(f,t)$ is the spectral estimate of trial k at frequency f and time t. Power values were normalized with respect to a 100 ms pre-stimulus baseline and transformed to a decibel scale (10·log10 of the signal). ERSPs were averaged over trials for each condition and rendered as time–frequency plots (see Fig. 2). For conciseness and given the aim of the present study, only data from 4–30 Hz during 100–800 ms are presented here.
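As a worked sketch of Eq. (1), the per-trial Morlet power estimates can be averaged over trials and baseline-normalized to decibels. The code continues from the preprocessing sketch above (`epochs_clean` is that sketch's output) and all names are illustrative:

```python
# Sketch of the ERSP computation in Eq. (1) using MNE's Morlet transform.
import numpy as np
from mne.time_frequency import tfr_array_morlet

data = epochs_clean.get_data()                   # (n_trials, n_channels, n_times)

freqs = np.arange(3.0, 126.0)                    # 3-125 Hz, as analyzed
n_cycles = np.linspace(3.0, 62.5, freqs.size)    # 3 cycles at 3 Hz -> 62.5 at 125 Hz

# |F_k(f,t)|^2 for every trial k: (n_trials, n_channels, n_freqs, n_times)
power = tfr_array_morlet(data, sfreq=500.0, freqs=freqs,
                         n_cycles=n_cycles, output="power")

ersp = power.mean(axis=0)                        # Eq. (1): average over the n trials

# normalize to a 100 ms pre-stimulus baseline and convert to dB (10*log10)
times = epochs_clean.times
baseline = ersp[..., (times >= -0.1) & (times < 0.0)].mean(axis=-1, keepdims=True)
ersp_db = 10.0 * np.log10(ersp / baseline)
```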
Fig. 2.
ERSP results. a Average oscillatory activity for NA and AA prosodies as a function of task focus at CZ, over time (x-axis; 0 is the cross-splicing point) and frequency (y-axis). Red colors indicate ERS and blue colors indicate ERD relative to baseline. b Topographical maps of the permutation test between the two kinds of prosody over the theta and beta bands during the time windows of interest. As depicted, vocal emotion change induced theta band ERS during 100–600 ms regardless of task demands, but only under the explicit task condition did vocal emotion change induce significant beta ERD during 400–750 ms over central areas. c Histograms of theta and beta activity as a function of change under the two task conditions, after collapsing across the corresponding brain regions. (Color figure online)
Based on previous studies depicting the neural oscillatory dynamics of auditory change detection (Chen et al. 2012; Fuentemilla et al. 2008; Hsiao et al. 2009), and on the results of a permutation test implemented in the statcond function of the EEGLAB toolbox, ERSPs in the range of 4–8 Hz during 100–600 ms (theta) and 18–26 Hz during 400–750 ms (beta) were averaged for further statistical analysis. To depict differences in topographic distribution, nine scalp regions of interest were defined1 (for regional averaging, see Dien and Santuzzi 2004). The data were then analyzed by repeated measures ANOVA with Change (NA vs. AA), Region (anterior, central, and posterior), and Hemisphere (left, middle, and right) as within-subject factors and Task (explicit vs. implicit) as a between-subject factor. The degrees of freedom of the F-ratio were corrected according to the Greenhouse–Geisser method, and multiple comparisons were Bonferroni adjusted in all analyses. Effect sizes are reported as partial eta squared (η_p²).
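The paired permutation logic of statcond (shuffling condition labels within participants and recomputing the statistic) can be sketched as follows, here with the mean paired difference as the test statistic. The variable names and the simple sign-flipping scheme are illustrative assumptions, not a reimplementation of statcond:

```python
# Sketch of a paired (within-subject) permutation test in the spirit of
# EEGLAB's statcond; band-averaged ERSP values per subject are assumed.
import numpy as np

rng = np.random.default_rng(0)

def paired_permutation_test(x, y, n_perm=5000):
    """Two-sided p-value for the mean of paired differences (x - y),
    obtained by randomly flipping the sign of each subject's difference."""
    diff = np.asarray(x) - np.asarray(y)         # one value per subject
    observed = diff.mean()
    count = 0
    for _ in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=diff.size)
        if abs((signs * diff).mean()) >= abs(observed):
            count += 1
    return observed, count / n_perm

# e.g., theta ERS (4-8 Hz, 100-600 ms) averaged per subject and condition:
# theta_NA, theta_AA = arrays of shape (n_subjects,)
# d, p = paired_permutation_test(theta_NA, theta_AA)
```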
Results
As can be seen in Fig. 2a, prosodies with emotion change induced larger ERS in the theta band and larger ERD in the beta band than those without emotion change. A permutation test was conducted using the statcond function of the EEGLAB toolbox (Fig. 2b). As the figure shows, vocal emotion change induced larger theta ERS regardless of the location of attention, an effect that was largest over frontal-central regions. Additionally, vocal emotion change induced significant beta ERD over central regions only under the explicit task condition.
Complementing the permutation results, the repeated measures ANOVA for theta activity revealed a significant main effect of Change [F(1,26) = 9.51, p < .01], with emotion change inducing larger theta ERS (0.50 dB) than the no-change condition (0.14 dB). Moreover, the two-way interaction between Change and Task [F(1,26) = 0.34, p = .57], the three-way interaction among Change, Region, and Task [F(2,52) = 1.67, p = .20], and the three-way interaction among Change, Hemisphere, and Task [F(2,52) = 0.26, p = .77] were all non-significant, suggesting that the emotion change effect is insensitive to task focus.
The repeated measures ANOVA for beta activity yielded a marginally significant main effect of Change [F(1,26) = 3.60, p = .07], with emotion change inducing larger beta ERD (−0.32 vs. −0.17 dB). Further, the three-way interaction among Change, Region, and Task [F(2,52) = 5.32, p < .01] and the four-way interaction among Change, Region, Hemisphere, and Task [F(2,52) = 6.27, p < .01] were both significant. To break down these interactions, we analyzed the effect of change under the explicit and implicit task conditions separately. The analysis for the explicit task condition revealed a significant interaction between Change and Region [F(2,26) = 5.82, p < .05] and a significant interaction among Change, Region, and Hemisphere [F(4,52) = 4.08, p < .05]. Further simple-effects tests found that emotion change elicited larger beta ERD over the right-anterior, left-central, middle-central, right-central, and left-posterior regions (ps < .05). However, under the implicit task condition, no main or interaction effects involving Change reached significance [Fs < 2.5, ps > .1].
To clearly delineate the effects of emotion change and task focus, the ERS and ERD values for each condition were depicted as histograms (Fig. 2c), after collapsing the neural oscillatory values across the corresponding brain regions (theta activity was collapsed across all nine regions; beta activity was collapsed across the five regions listed above). As depicted, vocal emotion change induced conspicuous theta ERS regardless of task-relevant attention direction, but induced significant beta ERD only under the explicit task condition. In short, the neural oscillatory responses showed a task effect similar to that in the ERP results (Chen et al. 2011).
Discussion
The present study aimed to further elucidate the role of task focus in vocal emotion change perception by conducting a time–frequency analysis of our previously reported data. We found that vocal emotion change induced theta ERS regardless of task focus, whereas significantly larger beta ERD was induced only under the explicit task condition. Moreover, the frequency ranges and time windows for both the theta and beta activity roughly coincided with those of previous studies (Cacace and McFarland 2003; Chen et al. 2012; Fuentemilla et al. 2008; Hsiao et al. 2009), suggesting that the current neural oscillatory dynamics associated with vocal emotion change are valid.
We observed significantly larger theta ERS for prosodies with emotion change relative to prosodies without emotion change in both the explicit and implicit conditions. This result is consistent with previous findings that a theta band power increase is associated with auditory change detection (Cacace and McFarland 2003; Fuentemilla et al. 2008; Hsiao et al. 2009; Schirmer et al. 2008). Moreover, theta band power increases have been associated with expectation violation (Cavanagh et al. 2010; Cohen et al. 2007; Tzur and Berger 2007). In the present study, we created vocal emotion change by cross-splicing two types of emotional prosodies: the context before the splicing point establishes an expectation, and the cross-spliced part violates it. The recognition of vocal emotion change is therefore also based on expectation violation, similar to auditory change detection (Chen et al. 2011). In this regard, it is reasonable that vocal emotion change induces larger theta ERS, reflecting detection of the sudden change, or expectancy violation, in emotional prosody. More importantly, the theta ERS was not influenced by task focus. This result is consistent with ERP studies showing that the early components indexing change detection are not sensitive to task demands (Chen et al. 2011; Goydke et al. 2004; Koelsch 2009; Thonnessen et al. 2010). It is also in line with a previous study in which emotion change induced similar theta ERS under both emotion-relevant and emotion-irrelevant task conditions (Chen et al. 2012). Therefore, the current results indicate that the detection of vocal emotion change is not affected by task focus. However, Cacace and McFarland (2003) reported that the theta ERS associated with deviant auditory stimulation in an oddball paradigm depended on stimulus and task demands. This discrepancy might result from the greater salience of vocal emotion change in comparison with pure auditory change.
We observed that vocal emotion change induced significant beta ERD under the explicit but not the implicit condition. Beta desynchronization has been reported in association with deviant syllable detection in an oddball paradigm (Cacace and McFarland 2003; Kim and Chung 2008), reintegration of abnormal rhythmic patterns (Luo et al. 2010), and syntactic unification (Davidson and Indefrey 2007). In this regard, the larger beta desynchronization in the explicit condition might reflect reintegration of the cross-spliced part into the preceding context. That significant beta ERD was observed only in the explicit condition suggests that reintegration occurred only when participants were required to allocate attention to the vocal emotion change. This finding is consistent with the study by Cacace and McFarland (2003), in which deviant stimuli induced beta ERD only for easily discriminable stimuli in attention-related target conditions. It is also in line with our ERP data, in which the P300, reflecting reintegration and context updating, was observed only under the explicit condition (Chen et al. 2011).
Taken together, the current theta ERS and beta ERD provide converging evidence for the proposition that task focus plays different roles at different stages of vocal emotion change perception (Chen et al. 2011). Indeed, task focus is believed to be one of the most important factors modulating vocal emotion perception (Schirmer and Kotz 2006; Wildgruber et al. 2009). Quite a few studies have indicated that the brain response evoked by vocal emotion varies as a function of task demands. Specifically, Wambacq et al. (2004) found that explicit vocal emotion processing was indexed by the P300, while implicit processing was indexed by the P200. Our ERP data likewise suggested that the recognition of emotional expectancy violation was not affected by task focus at the early stage but was influenced by it at the late stage (Chen et al. 2011). While the ERP data delineated only the time course, the current data extend these findings by characterizing the neural oscillatory dynamics associated with the modulatory role of task focus in vocal emotion change perception.
At the same time, the current data also extend the findings on pure auditory change (Cacace and McFarland 2003) to vocal emotion change, indicating that the neural oscillatory dynamics of EEG activity can be a useful tool for assessing vocal emotion perception. Neural oscillatory dynamics preserve non-phase-locked information that is not captured by time-domain ERPs, yet non-phase-locked EEG rhythms are reactive to both sensory stimulation (external events) and cognitive processing (internal events) in various attention and memory tasks (Makeig et al. 2004; Neuper and Klimesch 2006; Pfurtscheller and da Silva 1999). Thus, neural oscillatory dynamics should be a useful tool for depicting vocal emotion perception.
However, a few limitations should be noted before drawing firm conclusions. First, only neutral-to-angry emotion change was included, which might constrain the generalization of the current results to other kinds of emotion change. Second, only young university students were recruited as participants; such a biased sample might also limit the generalization of the results. Third, although small samples of vocal stimuli prevail in the literature (Chen et al. 2011; Kotz and Paulmann 2007; Paulmann et al. 2012), the fact that all stimuli were produced by one male actor may also be a limitation.
In conclusion, vocal emotion change induced theta ERS independent of task focus, but induced beta ERD only when attention was allocated to the emotion change. These results provide converging evidence for the proposition that task focus plays different roles at different stages of vocal emotion change perception (Chen et al. 2011; Schirmer and Kotz 2006): while the detection of emotion change is insensitive to task focus, the late reintegration of emotion change is modulated by it. Moreover, these findings suggest that neural oscillatory profiles can usefully supplement classical ERP evidence in the study of vocal emotion perception.
Acknowledgments
This study was supported by National Natural Science Foundation of China (31300835), General Projects for Humanities and Social Science Research of Ministry of Education, China (12XJC190002), Natural Science Foundation of Shaanxi Province, China (2012JQ4010), and Fundamental Research Funds for the Central Universities (14SZYB07).
Footnotes
1. Left anterior: F3, F5, F7, FC3, FC5, and FT7; middle anterior: F1, FZ, F2, FC1, FCZ, and FC2; right anterior: F4, F6, F8, FC4, FC6, and FT8; left central: C3, C5, T7, CP3, CP5, and TP7; middle central: C1, CZ, C2, CP1, CPZ, and CP2; right central: C4, C6, T8, CP4, CP6, and TP8; left posterior: P3, P5, P7, PO3, PO7, and O1; middle posterior: P1, PZ, P2, POZ, and OZ; right posterior: P4, P6, P8, PO4, PO8, and O2.
References
- Cacace AT, McFarland DJ. Spectral dynamics of electroencephalographic activity during auditory information processing. Hear Res. 2003;176(1–2):25–41. doi: 10.1016/S0378-5955(02)00715-3.
- Cavanagh JF, Frank MJ, Klein TJ, Allen JJB. Frontal theta links prediction errors to behavioral adaptation in reinforcement learning. NeuroImage. 2010;49(4):3198–3209. doi: 10.1016/j.neuroimage.2009.11.080.
- Chen X, Zhao L, Jiang A, Yang Y. Event-related potential correlates of the expectancy violation effect during emotional prosody processing. Biol Psychol. 2011;86(3):158–167. doi: 10.1016/j.biopsycho.2010.11.004.
- Chen X, Yang J, Gan S, Yang Y. The contribution of sound intensity in vocal emotion perception: behavioral and electrophysiological evidence. PLoS ONE. 2012;7(1):e30278. doi: 10.1371/journal.pone.0030278.
- Cohen MX, Elger CE, Ranganath C. Reward expectation modulates feedback-related negativity and EEG spectra. NeuroImage. 2007;35(2):968–978. doi: 10.1016/j.neuroimage.2006.11.056.
- Davidson DJ, Indefrey P. An inverse relation between event-related and time–frequency violation responses in sentence processing. Brain Res. 2007;1158:81–92. doi: 10.1016/j.brainres.2007.04.082.
- Delorme A, Makeig S. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J Neurosci Methods. 2004;134(1):9–21. doi: 10.1016/j.jneumeth.2003.10.009.
- Dien J, Santuzzi A. Application of repeated measures ANOVA to high-density ERP datasets: a review and tutorial. In: Handy TC, editor. Event-related potentials: a methods handbook. Cambridge, MA: MIT Press; 2004. pp. 57–82.
- Fuentemilla L, Marco-Pallares J, Munte TF, Grau C. Theta EEG oscillatory activity and auditory change detection. Brain Res. 2008;1220:93–101. doi: 10.1016/j.brainres.2007.07.079.
- Goydke KN, Altenmüller E, Möller J, Münte TF. Changes in emotional tone and instrumental timbre are reflected by the mismatch negativity. Cogn Brain Res. 2004;21(3):351–359. doi: 10.1016/j.cogbrainres.2004.06.009.
- Hsiao FJ, Wu ZA, Ho LT, Lin YY. Theta oscillation during auditory change detection: an MEG study. Biol Psychol. 2009;81(1):58–66. doi: 10.1016/j.biopsycho.2009.01.007.
- Jiang A, Yang J, Yang Y. MMN responses during implicit processing of changes in emotional prosody: an ERP study using Chinese pseudo-syllables. Cogn Neurodyn. 2014;8(6):499–508. doi: 10.1007/s11571-014-9303-3.
- Kim JS, Chung CK. Language lateralization using MEG beta frequency desynchronization during auditory oddball stimulation with one-syllable words. NeuroImage. 2008;42(4):1499–1507. doi: 10.1016/j.neuroimage.2008.06.001.
- Koelsch S. Music-syntactic processing and auditory memory: similarities and differences between ERAN and MMN. Psychophysiology. 2009;46(1):179–190. doi: 10.1111/j.1469-8986.2008.00752.x.
- Kotz SA, Paulmann S. When emotional prosody and semantics dance cheek to cheek: ERP evidence. Brain Res. 2007;1151:107–118. doi: 10.1016/j.brainres.2007.03.015.
- Luck SJ. An introduction to the event-related potential technique. Cambridge, MA: MIT Press; 2005.
- Luo Y, Zhang Y, Feng X, Zhou X. EEG oscillations differentiate semantic and prosodic processes during sentence reading. Neuroscience. 2010;169:654–664. doi: 10.1016/j.neuroscience.2010.05.032.
- Makeig S. Auditory event-related dynamics of the EEG spectrum and effects of exposure to tones. Electroencephalogr Clin Neurophysiol. 1993;86(4):283–293. doi: 10.1016/0013-4694(93)90110-H.
- Makeig S, Debener S, Onton J, Delorme A. Mining event-related brain dynamics. Trends Cogn Sci. 2004;8(5):204–210. doi: 10.1016/j.tics.2004.03.008.
- Neuper C, Klimesch W. Event-related dynamics of brain oscillations. Amsterdam: Elsevier; 2006.
- Paulmann S, Kotz SA. An ERP investigation on the temporal dynamics of emotional prosody and emotional semantics in pseudo- and lexical-sentence context. Brain Lang. 2008;105(1):59–69. doi: 10.1016/j.bandl.2007.11.005.
- Paulmann S, Jessen S, Kotz SA. It’s special the way you say it: an ERP investigation on the temporal dynamics of two types of prosody. Neuropsychologia. 2012;50(7):1609–1620. doi: 10.1016/j.neuropsychologia.2012.03.014.
- Pfurtscheller G, da Silva FHL. Event-related EEG/MEG synchronization and desynchronization: basic principles. Clin Neurophysiol. 1999;110(11):1842–1857. doi: 10.1016/S1388-2457(99)00141-8.
- Salovey P, Mayer JD. Emotional intelligence. Imagin Cogn Personal. 1989;9(3):185–211. doi: 10.2190/DUGG-P24E-52WK-6CDG.
- Schirmer A, Escoffier N. Emotional MMN: anxiety and heart rate correlate with the ERP signature for auditory change detection. Clin Neurophysiol. 2010;121(1):53–59. doi: 10.1016/j.clinph.2009.09.029.
- Schirmer A, Kotz SA. Beyond the right hemisphere: brain mechanisms mediating vocal emotional processing. Trends Cogn Sci. 2006;10(1):24–30. doi: 10.1016/j.tics.2005.11.009.
- Schirmer A, Striano T, Friederici A. Sex differences in the preattentive processing of vocal emotional expressions. NeuroReport. 2005;16(6):635–639. doi: 10.1097/00001756-200504250-00024.
- Schirmer A, Escoffier N, Li QY, Li H, Strafford-Wilson J, Li W-I. What grabs his attention but not hers? Estrogen correlates with neurophysiological measures of vocal change detection. Psychoneuroendocrinology. 2008;33(6):718–727. doi: 10.1016/j.psyneuen.2008.02.010.
- Thierry G, Roberts M. Event-related potential study of attention capture by affective sounds. NeuroReport. 2007;18(3):245–248. doi: 10.1097/WNR.0b013e328011dc95.
- Thonnessen H, Boers F, Dammers J, Chen YH, Norra C, Mathiak K. Early sensory encoding of affective prosody: neuromagnetic tomography of emotional category changes. NeuroImage. 2010;50(1):250–259. doi: 10.1016/j.neuroimage.2009.11.082.
- Tzur G, Berger A. When things look wrong: theta activity in rule violation. Neuropsychologia. 2007;45(13):3122–3126. doi: 10.1016/j.neuropsychologia.2007.05.004.
- Van Kleef GA. How emotions regulate social life: the emotions as social information (EASI) model. Curr Dir Psychol Sci. 2009;18(3):184–188. doi: 10.1111/j.1467-8721.2009.01633.x.
- Wambacq IJA, Jerger JF. Processing of affective prosody and lexical-semantics in spoken utterances as differentiated by event-related potentials. Cogn Brain Res. 2004;20(3):427–437. doi: 10.1016/j.cogbrainres.2004.03.015.
- Wambacq IJA, Shea-Miller KJ, Abubakr A. Non-voluntary and voluntary processing of emotional prosody: an event-related potentials study. NeuroReport. 2004;15(3):555–559. doi: 10.1097/00001756-200403010-00034.
- Wildgruber D, Ethofer T, Grandjean D, Kreifelts B. A cerebral network model of speech prosody comprehension. Int J Speech-Lang Pathol. 2009;11(4):277–281. doi: 10.1080/17549500902943043.


