Abstract
Understanding the temporal dynamics of brain function contributes to models of learning and memory as well as the processing of emotions and habituation. In this article, we present a novel analysis technique to investigate spatiotemporal patterns of activation in response to blocked presentations of emotional stimuli. We modeled three temporal response functions (TRFs), which were maximally sensitive to the onset, early, or sustained temporal component of a given block type. This analysis technique was applied to a data set of 29 subjects who underwent functional magnetic resonance imaging while responding to fearful, happy, and sad facial expressions. We identified brain regions that uniquely fit each of the three TRFs for each emotional condition and compared the results to the standard approach, which was based on the canonical hemodynamic response function. We found that, in all the emotional conditions, voxels within the precuneus fit the onset TRF but did not fit the early or the sustained TRF. On the other hand, voxels within the amygdala fit the sustained TRF, but not the onset or early TRF, during presentation of fearful stimuli, suggesting a spatiotemporal dissociation between these structures. This technique provides researchers with an additional tool to investigate the temporal dynamics of neural circuits.
INTRODUCTION
Functional magnetic resonance imaging (fMRI) allows for the characterization of changes in brain function over space and time (Menon & Kim, 1999). Novel methods directed at investigating changes in the temporal dynamics of brain activation are valuable to researchers interested in models of brain function such as adaptation (Krekelberg, Boynton, & van Wezel, 2006), repetition suppression (Grill-Spector, Henson, & Martin, 2006) and sensitization (Strauss et al., 2005). The differential time-course of brain activation is also of particular interest to those investigating models of emotion-related brain function (Garrett & Maddock, 2001, 2006; Ishai, Pessoa, Bikle, & Ungerleider, 2004; Phillips et al., 2001; Winston, Henson, Fine-Goulden, & Dolan, 2004).
Previous studies investigating the temporal dynamics of brain activation during emotional tasks have mainly used two approaches. One approach has been applied to event-related designs, and compares the signal obtained during presentations of individual trials (Ishai et al., 2004; Winston et al., 2004). However, most studies to date continue to use blocked designs, which are statistically more powerful (Bandettini & Cox, 2000; Miezin, Maccotta, Ollinger, Petersen, & Buckner, 2000), and which are particularly well suited for emotion studies that seek to generate sustained emotional feeling states. Thus, approaches that analyze temporal dynamics of brain response to blocked stimulus presentations continue to be useful and important.
A second approach has been applied to blocked fMRI designs, and compares early versus late blocks of stimuli (Breiter et al., 1996; Feinstein, Goldin, Stein, Brown, & Paulus, 2002; Phan, Liberzon, Welsh, Britton, & Taylor, 2003; Protopopescu et al., 2005; Strauss et al., 2005; Wright et al., 2001). This approach is well suited to identify brain regions that are activated in the early or late stages of the experiment, but does not reveal any information about short-term temporal dynamics within blocks. Our approach is designed to accomplish this latter goal: specifically, to identify brain regions that respond only to the onset of a given block (“onset”), only to the early (but not late) component of the block (“early”), or with a sustained pattern of activation throughout the duration of the block (“sustained”).
We applied this approach to an fMRI data set collected during the performance of a gender discrimination task of emotional facial expressions (fearful, happy, sad, and neutral). We then compared these results to those obtained with the canonical hemodynamic response function (HRF; contrasting emotional and neutral blocks). A priori regions of interest included those previously reported in emotional face studies (Breiter et al., 1996; Critchley et al., 2000; Lange et al., 2003; Morris et al., 1998; Phillips et al., 2004; Whalen et al., 2001; Winston et al., 2004), including the amygdala (Morris et al., 1998) and fusiform gyrus (Dolan et al., 1996; Pessoa, McKenna, Gutierrez, & Ungerleider, 2002).
Furthermore, we predicted specific temporal response patterns to emotional stimuli in different brain regions, based on studies that identified differential time-courses of signal in visual areas using electrophysiological measures (Braeutigam, Bailey, & Swithenby, 2001; McCarthy, Puce, Belger, & Allison, 1999; Noguchi, Inui, & Kakigi, 2004; Puce, Allison, & McCarthy, 1999) and limbic areas using fMRI (Breiter et al., 1996; Feinstein et al., 2002; Phan et al., 2003; Protopopescu et al., 2005; Strauss et al., 2005; Wright et al., 2001). In particular, we predicted that visual areas such as the occipital and posterior parietal/temporal cortices would exhibit transient temporal patterns of activation, while limbic areas would exhibit more sustained temporal patterns of activation, primarily during the negative emotional (fear and sad) facial expression conditions.
METHODS
Modeling and comparison of temporal response functions
Considerable research has characterized the amplitude, latency, and width of the blood-oxygen-level-dependent (BOLD) response to stimulus presentations. This canonical pattern of signal change has been termed the hemodynamic response function (HRF) (Bellgowan, Saad, & Bandettini, 2003). The HRF is used to predict BOLD signal change in response to stimuli presented in both event-related (Buckner, 2003) and blocked designs (Menon & Kim, 1999), and has been used to dissociate activation based on temporal characteristics (Henson, Price, Rugg, Turner, & Friston, 2002a; Visscher et al., 2003). Accordingly, the temporal response functions used in the current analysis technique were derived from the properties of the canonical (standard) HRF.
Figure 1 displays the three temporal response functions (TRFs), which were generated by convolving the canonical HRF with three different functions to reflect BOLD signal maximally sensitive to the onset, early or sustained temporal components within a given block of 18 seconds (s) (six stimulus presentations). The onset TRF (oTRF) was modeled to be predictive of signal primarily driven by the presentation of the first trial (one presentation, 3 s) within a given block. In particular, the peak of the oTRF was modeled to occur roughly 6 s following onset of the initial stimulus within a block. This time-course of signal is consistent with event-related research demonstrating that the peak of the HRF occurs roughly 4–8 s following the presentation of a single stimulus (Buckner, 2003; Buckner et al., 1996). The early TRF (eTRF) was modeled to be reflective of signal primarily driven by the early (first half: three presentations, 9 s) component within a given block. The peak of the eTRF was modeled to occur roughly 10 s following the initial stimulus within a block. This was based on previous research identifying peak BOLD signal to occur 8–14 s following the initial presentation of a stimulus presented for a comparable duration (Boynton, Engel, Glover, & Heeger, 1996). The sustained TRF (sTRF) was modeled to be reflective of signal that was consistently active throughout the entire block (six presentations, 18 s). The latency and shape of the sTRF were based on research identifying BOLD signal response to stimuli presented for a similar duration (Boynton et al., 1996).
Figure 1.
Temporal response functions (TRFs) fitted to represent sensitivity to the onset, early or sustained temporal components within a block of stimuli. The oTRF was modeled to reflect activation driven by the first presentation within a block. The eTRF was modeled to reflect activation driven by the first half of presentations within a block. The sTRF was modeled in order to reflect activation driven by all of the presentations throughout a block.
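The construction of these functions can be made concrete in code. The following Python sketch shows one way to generate the three TRFs under the assumptions stated above (an SPM-style double-gamma canonical HRF, a TR of 1.5 s, and boxcars of 3, 9, and 18 s for the onset, early, and sustained components); the parameter values and function names are illustrative rather than the authors' actual implementation.

```python
import numpy as np
from scipy.stats import gamma

TR = 1.5   # repetition time in seconds (see Image acquisition)
DT = 0.1   # fine time grid for convolution, in seconds

def canonical_hrf(t):
    """Double-gamma HRF in the style of SPM's canonical function:
    a positive peak around 5-6 s minus a late undershoot around 15-16 s."""
    h = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
    return h / h.max()

def make_trf(stim_duration_s, block_duration_s=18.0, hrf_length_s=32.0):
    """Convolve a boxcar of the given duration with the canonical HRF,
    normalize to unit peak, and resample onto the TR grid."""
    t = np.arange(0.0, hrf_length_s, DT)
    hrf = canonical_hrf(t)
    n = round((block_duration_s + hrf_length_s) / DT)
    boxcar = np.zeros(n)
    boxcar[: round(stim_duration_s / DT)] = 1.0
    trf = np.convolve(boxcar, hrf)[:n]
    return (trf / trf.max())[:: round(TR / DT)]

# Onset TRF: the first trial only (one presentation, 3 s).
# Early TRF: the first half of the block (three presentations, 9 s).
# Sustained TRF: the entire block (six presentations, 18 s).
oTRF, eTRF, sTRF = make_trf(3.0), make_trf(9.0), make_trf(18.0)
```

Normalizing each TRF to unit peak keeps the three regressors comparable in scale before they enter the design matrix.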
To identify regions displaying signal change reflective of the temporal components described above within a given block type, the parameters for each function (oTRF, eTRF, and sTRF) were entered separately for each stimulus condition (fear, happy, and sad, averaged across blocks of each type) within a single model using SPM software (Wellcome Department of Cognitive Neurology, London, UK). Condition-specific signal for each block type was defined as the change in BOLD response during a particular emotional condition (fear, happy, or sad) relative to a neutral baseline condition (neutral faces); the neutral baseline was modeled as constant (sustained). This analysis therefore identifies voxels that fit a defined temporal function specific to a particular emotional condition (relative to neutral), and not simply to task blocks in general.
Our analysis aimed to localize regions whose signal fit a specific TRF to a greater extent than the other two TRFs within a given emotional condition block type. Given that each TRF overlaps to some degree with the other two (Figure 1), and the three are therefore correlated with one another, we used a masking approach to isolate which TRF most appropriately fit the time course of activity in a given voxel. Specifically, for any given TRF and contrast condition, we masked out all voxels that were significantly activated for either of the other two TRFs. Thus, voxels were only considered to fit a given TRF in a given condition if they did not significantly fit either of the other TRFs within that condition.
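As a concrete illustration of this exclusive masking step, the sketch below assumes the three whole-brain t-maps for one emotional condition are available as NumPy arrays; the array names and threshold variable are hypothetical.

```python
import numpy as np

def unique_fit_masks(t_o, t_e, t_s, t_crit):
    """t_o, t_e, t_s: t-maps for the oTRF, eTRF, and sTRF within one
    emotional condition; t_crit: significance cutoff. Returns boolean
    maps of voxels that uniquely fit each TRF."""
    sig_o, sig_e, sig_s = t_o > t_crit, t_e > t_crit, t_s > t_crit
    only_o = sig_o & ~sig_e & ~sig_s   # onset, and only onset
    only_e = sig_e & ~sig_o & ~sig_s   # early, and only early
    only_s = sig_s & ~sig_o & ~sig_e   # sustained, and only sustained
    return only_o, only_e, only_s
```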
Subjects
Twenty-nine healthy right-handed subjects (14 females) were recruited. The subjects’ mean age was 22.4 years (SD=2.8; range: 18–29). The subjects had no history of brain injury, reported no substance abuse within the past 6 months, were not on any mood-altering medication, and had no physical limitations that prohibited them from participating in an fMRI experiment. This study was approved by the Stony Brook University and Yale University Institutional Review Boards.
Experimental design
Subjects made gender discriminations of emotional facial expressions while being scanned. Face stimuli were presented in blocks of fearful, sad, happy, and neutral emotional facial expressions. The duration of each trial was 3 s, consisting of a 1-s fixation cross followed by 2 s of stimulus presentation. Trials for each condition were presented in four blocks of six trials each (18 s duration) within a single run. The order of block presentation was pseudo-randomized such that within a group of four blocks, each condition was selected once at random, and no condition block was immediately repeated anywhere in the experiment. This design should therefore reduce carryover effects by randomizing the probability of a particular condition preceding another condition. Throughout the entire experiment, no trials were repeated (all stimuli were novel). Each run began and ended with a 30-s fixation period.
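As an illustration, a pseudo-randomized order of this kind could be generated along the following lines. This is a sketch of our reading of the constraint (each group of four blocks contains every condition once, with no immediate repetition across group boundaries), not the authors' actual randomization code.

```python
import random

CONDITIONS = ["fear", "happy", "sad", "neutral"]

def block_order(n_groups=4, seed=None):
    """Sketch of the pseudo-randomization: each group of four blocks
    contains every condition exactly once, and no condition appears in
    two consecutive blocks across a group boundary."""
    rng = random.Random(seed)
    order = []
    for _ in range(n_groups):
        group = CONDITIONS[:]
        rng.shuffle(group)
        # Reshuffle if the new group would immediately repeat the
        # last block of the previous group.
        while order and group[0] == order[-1]:
            rng.shuffle(group)
        order.extend(group)
    return order
```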
Image acquisition
Whole-brain imaging data were acquired on a 3 T Siemens Trio scanner. For structural whole-brain images, a three-dimensional high-resolution spoiled gradient (SPGR) scan and a T1 scan (24 slices, 5 mm thickness; oriented parallel to the line between the anterior and posterior commissures) were conducted. Functional images were acquired using a gradient-echo T2*-weighted echoplanar imaging (EPI) sequence with a flip angle of 80°, repetition time (TR)=1.5 s, echo time (TE)=30 ms, and a field of view (FOV) of 220×220 mm.
Image analysis
Functional data were preprocessed and statistically analyzed using SPM2 (Wellcome Department of Imaging Neuroscience, London, UK). The images were slice-timing corrected (temporally realigned) to the middle slice, spatially realigned to the first volume in the time series, and coregistered to the T1 image, which was segmented and normalized to the gray matter template. The spatial transformations derived from normalizing the segmented gray matter were then applied to all functional volumes, which were spatially smoothed with an 8 mm full-width at half-maximum isotropic Gaussian filter.
Fixed-effects models were used at the individual-subject level of analysis and random-effects models were used for group-level analyses (Friston, Jezzard, & Turner, 1994). At the individual level, general linear models were created to represent all conditions and TRFs, and all data were then high-pass filtered. On an individual level, contrasts were created comparing each TRF for each emotional condition to the neutral face condition, resulting in a total of nine comparisons for each subject (3 emotional conditions×3 TRFs). For comparisons between the onset TRFs and the neutral condition, movement parameters for each direction (x, y, and z) were entered to control for head motion that may have occurred at the onset of each block. The TRFs within each emotional condition were then used to localize regions that uniquely fit these functions. Voxels were considered to uniquely fit a specific TRF for a given emotional condition if they were significant at p<.01 (uncorrected) with a 20-voxel extent threshold, but were not significant (at the same threshold) for either of the other two TRFs within the same emotional condition (whole brain). We used a slightly reduced statistical threshold (p<.05, uncorrected; 10-voxel extent) within a priori regions of interest (amygdala and fusiform gyrus).
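The structure of this single-subject model can be sketched as follows: a design matrix with one column per condition-by-TRF regressor (nine in total) plus motion parameters and a constant, fit with a general linear model. The sketch below uses plain ordinary least squares with hypothetical variable names; SPM2's actual estimation additionally applies high-pass filtering and serial-correlation modeling, which are omitted here.

```python
import numpy as np

def fit_glm(Y, X):
    """Sketch of a single-subject GLM. Y: (time x voxels) BOLD data;
    X: (time x regressors) design matrix whose columns are the nine
    condition-by-TRF regressors plus motion parameters and a constant.
    Returns beta estimates and a t-statistic per regressor and voxel."""
    beta, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    dof = X.shape[0] - np.linalg.matrix_rank(X)
    sigma2 = (resid ** 2).sum(axis=0) / dof       # residual variance per voxel
    var_b = np.diag(np.linalg.pinv(X.T @ X))      # per-regressor variance factor
    t = beta / np.sqrt(np.outer(var_b, sigma2))   # (regressors x voxels)
    return beta, t

# The t-maps for the oTRF, eTRF, and sTRF regressors of one condition
# would then be thresholded and exclusively masked as in the earlier
# unique_fit_masks sketch.
```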
The results obtained via the procedure outlined above were compared to results obtained via the standard approach to fMRI blocked-design analysis. Specifically, t-contrasts between each emotional condition (fear, happy, sad) and the neutral condition were created at the individual level, and random-effects models were used for group-level analyses. We used a statistical threshold of p<.05 (uncorrected) with a 20-voxel extent threshold for this analysis.
RESULTS
Spatiotemporal activation in response to fearful facial expressions
Table 1 lists and Figure 2 displays activity that uniquely fit each temporal response function during the processing of fearful relative to neutral facial expressions. A comparison of regions of activation between the temporal response functions reveals that several brain regions implicated in affective attention and facial processing display signal change fitting certain temporal response functions but not others during the fearful facial expression condition. In particular, voxels within the left and right precuneus were found to fit the onset TRF (Montreal Neurological Institute (MNI) coordinates: −22, −74, 24; 3226 voxels; t=4.24, p<.001) but not the early or the sustained TRF. Voxels within the left angular gyrus were also found to fit the onset TRF (MNI: −40, −78, 32; 83 voxels; t=4.57, p<.001) but not the early or the sustained TRF. Voxels within the right rostral anterior cingulate were found to fit the early TRF (MNI: 10, 24, −8; 225 voxels; t=3.87, p<.001) but not the onset or the sustained TRF. Voxels within the right amygdala were found to fit the sustained TRF (MNI: 28, −6, −14; 12 voxels; t=2.36, p=.013) but not the onset or the early TRF. Figure 3 plots data from two loci identified in this analysis: the right precuneus and the right fusiform gyrus. As can be seen, the signal in the right precuneus exhibits a pattern of activity reflective of the oTRF, and the signal in the right fusiform gyrus exhibits a pattern of activity reflective of the sTRF.
TABLE 1.
Spatiotemporal activation in response to fearful facial expressions
| Condition and loci | Cluster size | T score | p value | x | y | z |
|---|---|---|---|---|---|---|
| Fear: oTRF (Red) | ||||||
| L/R. Medial frontal and precuneus | 3442 | 5.12 | <.001 | 10 | −22 | 46 |
| L/R. Precuneus and cuneus | 3226 | 4.24 | <.001 | −22 | −74 | 24 |
| R. Superior temporal gyrus | 1123 | 4.82 | <.001 | 38 | −34 | 16 |
| L. Angular gyrus | 83 | 4.57 | <.001 | −40 | −78 | 32 |
| L. Superior temporal gyrus | 735 | 4.38 | <.001 | −42 | −28 | 4 |
| R. Insula | 215 | 4.13 | <.001 | 48 | −8 | −20 |
| L. Inferior parietal lobule | 43 | 4.00 | <.001 | −64 | −30 | 36 |
| R. Precentral gyrus | 125 | 3.93 | <.001 | 58 | −2 | 10 |
| R. Thalamus | 144 | 3.85 | <.001 | 18 | −24 | 6 |
| L. Thalamus | 47 | 3.61 | .001 | −16 | −32 | 8 |
| R. Middle temporal gyrus | 62 | 3.55 | .001 | 46 | −70 | 28 |
| L. Inferior frontal gyrus | 28 | 3.40 | .001 | −38 | 30 | 8 |
| L. Middle temporal gyrus | 30 | 3.10 | .002 | −36 | −62 | 18 |
| L. Fusiform | 90 | 3.25 | .001 | −30 | −46 | −12 |
| R. Fusiform | 45 | 2.29 | .015 | 20 | −36 | −16 |
| Fear: eTRF (yellow) | ||||||
| R. Posterior cingulate | 657 | 4.73 | <.001 | 12 | −56 | 26 |
| R. Insula | 186 | 3.92 | <.001 | 38 | −14 | 2 |
| R. Rostral anterior cingulate | 225 | 3.87 | <.001 | 10 | 24 | −8 |
| L. Middle occipital gyrus | 111 | 3.80 | <.001 | −16 | −92 | 14 |
| R. Lingual gyrus | 38 | 3.50 | .001 | 10 | −76 | −2 |
| R. Cuneus | 103 | 3.45 | .001 | 18 | −80 | 22 |
| R. Precentral gyrus | 67 | 3.38 | .001 | 20 | −28 | 72 |
| R. Precentral gyrus | 139 | 3.34 | .001 | 54 | −8 | 6 |
| L. Lingual gyrus | 48 | 3.17 | .002 | −12 | −78 | −4 |
| R. Insula | 27 | 3.05 | .002 | 42 | −6 | −10 |
| L. Superior occipital gyrus | 49 | 2.99 | .003 | −36 | −82 | 28 |
| L. Fusiform | 16 | 2.42 | .011 | −30 | −54 | −4 |
| Fear: sTRF (green) | ||||||
| R. Middle occipital gyrus | 191 | 4.43 | <.001 | 26 | −96 | 4 |
| R. Fusiform | 60 | 3.74 | <.001 | 42 | −48 | −20 |
| R. Middle temporal gyrus | 80 | 3.56 | <.001 | 52 | −10 | −14 |
| L. Middle occipital gyrus | 49 | 3.44 | .001 | −28 | −90 | 2 |
| R. Inferior frontal gyrus | 38 | 3.17 | .002 | 52 | 34 | 4 |
| R. Superior temporal gyrus | 70 | 3.00 | .003 | 50 | −42 | 10 |
| R. Amygdala | 12 | 2.36 | .013 | 28 | −6 | −14 |
| R. Fusiform | 46 | 2.52 | .001 | 38 | −34 | −20 |
Note: L, left; R, right. The x, y, and z coordinates (MNI), T score, and p value apply to the most significant voxel within each cluster.
Figure 2.
Spatiotemporal activation in response to fearful facial expressions. Voxels significantly fitting each TRF are overlaid on axial slices of a template brain taken every 8 mm from z=−8 to 56. Voxels that significantly fit the oTRF, eTRF, and sTRF are displayed in red, yellow, and green, respectively.
Figure 3.
Extracted averaged time-course from two regions of interest, the right precuneus (A) and the right fusiform gyrus (B), during the fearful face condition. (A) Signal obtained from the right precuneus (MNI: 10, −22, 46) identified to fit the oTRF during the fearful face condition. Data are plotted with the x-axis representing scan number (throughout averaged blocks) and y-axis representing percentage signal change relative to neutral. (B) Signal obtained from the right fusiform gyrus (MNI: 42, −48, −20) identified to fit the sTRF during the fearful face condition. Data are plotted with the x-axis representing scan number (throughout averaged blocks) and y-axis representing percentage signal change relative to neutral. Error bars represent standard error of the mean.
Spatiotemporal activation in response to happy facial expressions
Table 2 lists and Figure 4 displays activity that uniquely fit each temporal response function during the processing of happy relative to neutral facial expressions. Several brain regions implicated in affective attention and facial processing contained voxels that fit certain temporal response functions but not others during the happy facial expression condition. In particular, voxels within the right precuneus were found to fit the onset TRF (MNI: 22, −76, 24; 387 voxels; t=4.12, p<.001) but not the early or the sustained TRF. Voxels within the left insula were also found to fit the onset TRF (MNI: −42, −22, −4; 72 voxels, t=3.60, p<.001) but not the early or the sustained TRF. Voxels within the right medial frontal gyrus were found to fit the early TRF (MNI: 10, 18, 54; 703 voxels; t=5.70, p<.001) but not the onset or the sustained TRF. Voxels within the left and right inferior parietal lobule were found to fit the sustained TRF (left: MNI: −40, −28, 36; 53 voxels; t=3.25, p=.002; right: MNI: 52, −40, 36; 81 voxels; t=3.01, p=.003) but not the onset or the early TRF. Voxels within the left fusiform were also found to fit the sustained TRF (MNI: −24, −44, −14; 22 voxels; t=2.48, p=.010) but not the onset or the early TRF.
TABLE 2.
Spatiotemporal activation in response to happy facial expressions
| Condition and loci | Cluster size | T score | p value | x | y | z |
|---|---|---|---|---|---|---|
| Happy: oTRF (red) | ||||||
| R. Precentral gyrus | 260 | 4.34 | <.001 | 20 | −30 | 72 |
| R. Precuneus | 387 | 4.12 | <.001 | 22 | −76 | 24 |
| R. Superior temporal gyrus | 133 | 3.98 | <.001 | 60 | −24 | −2 |
| L. Insula | 72 | 3.60 | .001 | −42 | −22 | −4 |
| L. Cuneus | 156 | 3.53 | .001 | −10 | −76 | 30 |
| R. Superior temporal gyrus | 37 | 3.46 | .001 | 52 | −10 | 10 |
| R. Middle cingulate gyrus | 94 | 3.39 | .001 | 4 | −26 | 48 |
| R. Precuneus | 25 | 3.19 | .002 | 10 | −62 | 68 |
| Happy: eTRF (yellow) | ||||||
| R. Medial frontal gyrus | 703 | 5.70 | <.001 | 10 | −18 | 54 |
| L. Cuneus | 943 | 4.88 | <.001 | −18 | −86 | 28 |
| L. Lingual gyrus | 242 | 4.80 | <.001 | −14 | −74 | −4 |
| R. Cuneus | 698 | 4.46 | <.001 | 10 | −78 | 30 |
| R. Rostral anterior cingulate | 298 | 4.29 | <.001 | 14 | 22 | −8 |
| R. Cuneus | 158 | 3.35 | .001 | 10 | −74 | 6 |
| L. Superior temporal gyrus | 33 | 3.31 | .001 | −52 | −12 | −4 |
| R. Precentral gyrus | 185 | 3.28 | .001 | 54 | −10 | 14 |
| L. Superior temporal gyrus | 24 | 3.22 | .002 | −46 | −16 | −2 |
| L. Parahippocampal gyrus | 30 | 3.15 | .002 | −26 | −46 | −6 |
| R. Precentral gyrus | 33 | 3.11 | .002 | 40 | −16 | 40 |
| Happy: sTRF (green) | ||||||
| L. Inferior parietal lobule | 53 | 3.25 | .002 | −40 | −28 | 36 |
| R. Inferior parietal lobule | 81 | 3.01 | .003 | 52 | −40 | 36 |
| L. Fusiform | 22 | 2.48 | .010 | −24 | −44 | −14 |
Note: L, left; R, right. The x, y, and z coordinates (MNI), T score, and p value apply to the most significant voxel within each cluster.
Figure 4.
Spatiotemporal activation in response to happy facial expressions. Voxels significantly fitting each TRF are overlaid on axial slices of a template brain taken every 8 mm from z=−8 to 56. Voxels that significantly fit the oTRF, eTRF, and sTRF are displayed in red, yellow, and green, respectively.
Spatiotemporal activation in response to sad facial expressions
Table 3 lists and Figure 5 displays activity that uniquely fit each temporal response function during the processing of sad relative to neutral facial expressions. Again, several brain regions implicated in affective attention and facial processing contained voxels that fit certain temporal response functions but not others during the sad facial expression condition. In particular, voxels within the right precuneus were found to fit the onset TRF (MNI: 6, −46, 48; 7168 voxels; t=7.03, p<.001) but not the early or the sustained TRF. Voxels within the right rostral anterior cingulate were also found to fit the onset TRF (MNI: 10, 40, −6; 91 voxels; t=3.65, p=.001) but not the early or the sustained TRF. Voxels within the left and right fusiform gyrus were found to fit the early TRF (left, MNI: −22, −44, −12; 61 voxels; t=2.98, p=.003; right, MNI: 30, −42, −10; 27 voxels; t=2.57, p=.008) but not the onset or the sustained TRF. Voxels within the right inferior frontal gyrus were found to fit the sustained TRF (MNI: 54, 32, 0; 40 voxels; t=3.22, p=.002) but not the onset or the early TRF.
TABLE 3.
Spatiotemporal activation in response to sad facial expressions
| Condition and locus | Cluster size | T score | p value | x | y | z |
|---|---|---|---|---|---|---|
| Sad: oTRF (red) | ||||||
| R/L. Precuneus | 7168 | 7.03 | <.001 | 6 | −46 | 48 |
| L. Lingual gyrus | 540 | 5.22 | <.001 | −10 | −74 | −4 |
| R. Superior frontal gyrus | 80 | 4.51 | <.001 | 22 | 58 | 6 |
| L. Middle cingulate gyrus | 22 | 3.83 | <.001 | −10 | −14 | 36 |
| R. Rostral anterior cingulate | 91 | 3.65 | .001 | 10 | 40 | −6 |
| L. Inferior parietal lobule | 58 | 3.47 | .001 | −64 | −32 | 34 |
| R. Inferior parietal lobule | 66 | 3.39 | .001 | 44 | −40 | 28 |
| L. Postcentral gyrus | 48 | 3.37 | .001 | −24 | −32 | 64 |
| R. Insula | 33 | 3.21 | .002 | 40 | −18 | −8 |
| L. Insula | 23 | 2.84 | .004 | −38 | −18 | −2 |
| Sad: eTRF (yellow) | ||||||
| R. Superior temporal gyrus | 323 | 5.13 | <.001 | 34 | −36 | 12 |
| R. Cuneus | 92 | 4.06 | <.001 | 20 | −76 | 32 |
| L. Angular gyrus | 111 | 3.69 | <.001 | −38 | −80 | 32 |
| L. Parahippocampal gyrus | 29 | 3.22 | .002 | −22 | −44 | −10 |
| L. Fusiform | 61 | 2.98 | .003 | −22 | −44 | −12 |
| R. Fusiform | 27 | 2.57 | .008 | 30 | −42 | −10 |
| Sad: sTRF (green) | ||||||
| R. Middle temporal gyrus | 855 | 4.02 | <.001 | 48 | −12 | −14 |
| R. Inferior frontal gyrus | 40 | 3.22 | .002 | 54 | 32 | 0 |
Note: L, left; R, right. The x, y, and z coordinates (MNI), T score, and p value apply to the most significant voxel within each cluster.
Figure 5.
Spatiotemporal activation in response to sad facial expressions. Voxels significantly fitting each TRF are overlaid on axial slices of a template brain taken every 8 mm from z=−8 to 56. Voxels that significantly fit the oTRF, eTRF, and sTRF are displayed in red, yellow, and green, respectively.
Activation revealed by the standard approach of blocked design analysis
Table 4 lists activation revealed with the standard HRF. This method revealed several clusters previously identified to display increased signal change during emotional (fear, happy, and sad), relative to neutral, facial expressions. In particular, voxels within the right amygdala displayed greater activation during the fearful, relative to the neutral, facial expression condition (MNI: 28, −12, −16; 83 voxels; t=3.32; p=.001). Voxels within the left fusiform gyrus displayed greater activation during the fearful (MNI: −40, −54, −24; 374 voxels; t=3.36; p=.001) and the sad (MNI: −44, −48, −12; 31 voxels; t=3.13; p=.002), relative to the neutral, facial expression condition. This approach also revealed that voxels within the right middle temporal gyrus displayed greater activation during each of the emotional (fear, happy, and sad), relative to the neutral, facial expression conditions (fear: MNI: 42, −46, −14; 2453 voxels; t=5.10; p<.001; happy: MNI: 50, −36, 10; 108 voxels; t=2.56; p=.008; sad: MNI: 54, −42, 4; 839 voxels; t=3.77; p<.001).
TABLE 4.
Activation in response to emotional facial expressions (standard approach)
| Condition and locus | Cluster size | T score | p value | x | y | z |
|---|---|---|---|---|---|---|
| Fear–Neu | ||||||
| R. Middle temporal gyrus, extending into R. fusiform | 2453 | 5.10 | <.001 | 42 | −46 | −14 |
| L. Middle occipital gyrus | 402 | 4.55 | <.001 | −28 | −90 | 2 |
| L. Fusiform | 374 | 3.36 | .001 | −40 | −54 | −24 |
| R. Hippocampus, extending into R. amygdala | 83 | 3.32 | .001 | 28 | −12 | −16 |
| R. Inferior frontal gyrus | 162 | 2.94 | .003 | 50 | 30 | 8 |
| Happy–Neu | ||||||
| L. Postcentral gyrus, extending into L. inferior parietal lobule | 520 | 3.33 | .001 | −36 | −26 | 42 |
| R. Superior temporal gyrus | 128 | 3.32 | .001 | 36 | −34 | 10 |
| R. Middle frontal gyrus | 23 | 3.08 | .002 | 40 | 42 | 28 |
| R. Middle cingulate cortex | 151 | 2.94 | .003 | 8 | −24 | 42 |
| R. Inferior frontal cortex | 115 | 2.86 | .004 | 54 | 26 | −2 |
| R. Anterior cingulate | 182 | 2.63 | .007 | 12 | 10 | 34 |
| L. Hippocampus | 70 | 2.60 | .007 | −24 | −26 | −6 |
| R. Middle temporal gyrus | 108 | 2.56 | .008 | 50 | −36 | 10 |
| R. Precuneus | 298 | 2.53 | .009 | 12 | −72 | 42 |
| L. Inferior frontal gyrus | 101 | 2.43 | .011 | −48 | 20 | −4 |
| R/L. Medial frontal gyrus | 368 | 2.41 | .011 | 0 | −4 | 58 |
| R. Middle frontal gyrus | 26 | 2.31 | .014 | 40 | 26 | 44 |
| L. Lingual gyrus | 47 | 2.31 | .014 | −8 | −70 | −4 |
| L. Middle occipital gyrus | 21 | 2.20 | .018 | −28 | −92 | 0 |
| R. Middle frontal gyrus | 23 | 2.01 | .027 | 40 | −2 | 54 |
| Sad–Neu | ||||||
| R. Middle temporal gyrus | 839 | 3.77 | <.001 | 54 | −42 | 4 |
| R. Middle frontal gyrus | 65 | 3.65 | .001 | 38 | 24 | 48 |
| R. Cuneus | 350 | 3.55 | .001 | 6 | −70 | 26 |
| L. Inferior parietal lobule, extending into L. postcentral gyrus | 2318 | 3.20 | .002 | −34 | −4 | 48 |
| R. Inferior frontal gyrus | 108 | 3.15 | .002 | 56 | 22 | 28 |
| L. Fusiform | 31 | 3.13 | .002 | −44 | −48 | −12 |
| R. Supplementary motor area | 123 | 3.11 | .002 | 8 | −20 | 54 |
| L. Lingual gyrus | 48 | 3.04 | .003 | −10 | −68 | −4 |
| R. Superior temporal gyrus | 76 | 2.50 | .009 | 36 | −34 | 12 |
| L. Precentral gyrus | 101 | 2.50 | .009 | −36 | −2 | 26 |
| R. Superior parietal lobule | 137 | 2.48 | .010 | 32 | −64 | 44 |
| L. Cuneus | 160 | 2.46 | .010 | −22 | −82 | 42 |
| L. Middle temporal gyrus | 32 | 2.41 | .011 | −50 | −52 | 6 |
| R. Middle frontal gyrus | 97 | 2.38 | .012 | 40 | 0 | 54 |
| R. Postcentral gyrus | 46 | 2.17 | .019 | 26 | −40 | 62 |
| L. Precuneus | 21 | 1.98 | .028 | −6 | −76 | 40 |
| L. Medial frontal gyrus | 30 | 1.94 | .031 | −4 | −4 | 54 |
Note: L, left; R, right. The x, y, and z coordinates (MNI), T score, and p value apply to the most significant voxel within each cluster.
DISCUSSION
In this article, we have introduced a technique that spatially dissociates brain regions based on differential temporal responses to blocks of emotional stimuli. By applying this method to an fMRI data set of blocked presentations, we have identified unique sets of brain structures that are primarily responsive to the onset, early or sustained temporal components of blocks of fearful, happy and sad facial expressions. The use of this technique may be of particular interest to researchers investigating brain function in relation to models of sensitization or habituation/adaptation.
A comparison of the results obtained using the temporal response functions to those obtained with the standard HRF here, or by others (Breiter et al., 1996; Critchley et al., 2000; Killgore & Yurgelun-Todd, 2004; Lange et al., 2003; Morris et al., 1998; Phillips et al., 2004; Whalen et al., 2001), demonstrates that this technique may provide additional temporal resolution of fMRI data sets. For example, recent meta-analyses have reported variability between studies of emotional processing (Costafreda, Brammer, David, & Fu, in press; Phan, Wager, Taylor, & Liberzon, 2004). Our data suggest that these inconsistent findings may arise in part because these studies used experimental designs with different block lengths. The results presented here demonstrate that separate neural networks are engaged depending on the temporal structure with which emotional stimuli are presented. The function of these networks may correspond to psychological mechanisms such as encoding and the online maintenance of this information.
Other studies have investigated changes in relatively short time-courses of activation in response to a wide variety of tasks using different methods (Chen & Desmond, 2005; Henson, Price, Rugg, Turner, & Friston, 2002a; Henson, Shallice, Gorno-Tempini, & Dolan, 2002b; Ishai et al., 2004; Seifritz et al., 2002). For example, Seifritz et al. (2002) used an elegant blind data-driven independent component analysis (ICA) to characterize the temporal response of the auditory cortex to sounds, and Henson et al. (2002a) developed a novel whole-brain analysis technique for event-related studies that identifies voxels displaying latency differences relative to the canonical HRF. Future work could directly compare divergent and convergent results obtained from these techniques to yield a more complete picture of brain temporal dynamics.
We found that the pattern of precuneus activity fit the onset temporal response function during all the emotional conditions but not the early or sustained functions. The precuneus has been implicated in memory processes (Henson, Rugg, Shallice, Josephs, & Dolan, 1999; Lundstrom, Ingvar, & Petersson, 2005) and visuo-spatial imagery (Cavanna & Trimble, 2006). The fact that we observed precuneus activity to be greatest during the beginning of each block may suggest that this structure is engaged in attributing greater salience to these initial visually presented stimuli compared to those stimuli presented later. Behaviorally, this may correspond to greater retention of the “onset” presented stimuli (primacy effect) relative to those presented later (Hay, Smyth, Hitch, & Horton, 2007).
The fusiform gyrus exhibited activity that fit all of the TRFs during the fearful condition, only the early TRF during the sad condition and only the sustained TRF during the happy condition. The fact that we observed fusiform activation regardless of emotional valence is consistent with prior work (Dolan et al., 1996; Pessoa, McKenna, Gutierrez, & Ungerleider, 2002). However, we now show that in the temporal domain, the fusiform gyrus does discriminate between facial expressions of differing valence. This is a clear illustration that obtaining information about temporal characteristics of brain regions can reveal processes that were previously unknown. The challenge for future work is to develop models that take this temporal information into account to predict brain dynamics or explain existing patterns of brain dynamics.
The signal obtained from the right amygdala fit the sustained TRF, but neither of the other TRFs, during the fearful condition; nor did it significantly fit any TRF during any of the other emotional conditions. On the one hand, this finding is consistent with the view that the amygdala is particularly engaged in response to fear-inducing conditions in the environment (Whalen, 1998) and is consistently activated by exposure to fearful facial expressions (Johnstone et al., 2005). On the other hand, our observation of sustained amygdala activation contradicts other accounts that reported rapid habituation of the amygdala, such as those that compared amygdala activation to fearful stimuli during early versus late blocks (Breiter et al., 1996; Wright et al., 2001). These inconsistencies likely reflect methodological differences in the temporal and stimulus parameters. With respect to temporal parameters, both of these studies used much longer blocks and longer experimental runs than we did. For example, blocks in the study by Breiter et al. (1996) lasted 36 s and were interspersed with 4-min rest periods; block duration in the study by Wright et al. (2001) was 80 s. To determine whether a similar comparison of early-versus-late blocks would replicate their results, we conducted such an analysis but failed to see any evidence for amygdala habituation. It is therefore possible that the amygdala does not exhibit habituation when shorter (18-s) blocks and experimental runs are used. A second possibility as to why we did not replicate the observations by Breiter et al. (1996) and Wright et al. (2001) may be the choice of stimulus parameters. Both of these studies used repeated presentations of face stimuli, which may have increased the likelihood of amygdala habituation. In contrast, our study did not repeat face presentations, which may have minimized the likelihood of amygdala habituation. The parameters that enhance or reduce amygdala habituation remain to be fully elucidated.
Differences in the temporal dynamics of the BOLD response may be a neural marker of habituation (adaptation) (Grill-Spector, Henson, & Martin, 2006; Winston et al., 2004). For example, the structures that fit the onset TRF but not the early or the sustained TRF may be habituating more quickly than structures found to fit the sustained TRF. Differences in habituation are thought to moderate several psychological processes, such as rumination (Siegle, Steinhauer, Thase, Stenger, & Carter, 2002) and novelty seeking (Martindale, Anderson, Moore, & West, 1996). One prediction derived from the evidence presented here is that those who have a high affinity for novel experiences may exhibit greater onset activation compared to those with a lower affinity for novelty. Clearly, studying the temporal dynamics of brain function during emotional processing may provide insights into how people differ in social and affective tendencies.
The technique presented here is subject to several constraints imposed by the experimental design as well as by the analysis procedure itself. The fMRI data set used in the current analysis consisted of blocks of only one length (six presentations; 18 s). It would be of particular interest to apply this analysis technique to a data set with longer blocks of varying duration. Such an analysis may clarify some of the time-course characteristics of amygdala and fusiform function discussed above. We also did not directly statistically compare the TRFs to one another within the same emotional condition, because of the relatively high correlation between the TRFs. We instead used a masking and threshold approach in which activations were only considered significant if they fit one TRF but neither of the other two. In addition, our analysis technique was constrained to only three temporal response functions (onset, early, and sustained), which were assumed to be linearly related to one another. Studies have shown that the HRF to single events may peak anywhere from 2 to 8 s and is variable across people (Aguirre, Zarahn, & D’Esposito, 1998). Certain theoretical models of brain function may predict that activation occurs later rather than earlier within a block of stimuli; testing such models would require TRFs modeled to be sensitive to late rather than early trials within a block.
In conclusion, we have presented a technique for blocked-design fMRI studies that localizes brain regions by their temporal activation characteristics. We applied this technique to an existing data set collected during the processing of emotional facial expressions and demonstrated that regions previously implicated in affective attention and facial processing displayed unique and dissociable time-courses of activation during blocks of emotional stimuli, as in the case of the precuneus and the amygdala. We also showed that temporal information can yield novel insights into brain processes of emotion. Most notably, we replicated earlier reports that the fusiform gyrus activates to various emotional facial expressions, as measured with the canonical HRF. However, we then showed that the fusiform gyrus does discriminate emotional facial expressions when temporal dynamics of its activation are taken into account. Current models of brain emotional processing need to be updated to incorporate this kind of temporal information. The technique we introduced here will be useful in generating new data for temporal models of brain affective processing and hopefully in testing future predictions generated by these models.
Acknowledgments
The authors would like to thank J. Ferri for excellent assistance in collecting fMRI data. This research was supported by the National Science Foundation, NSF Grant No. 0224221, to TC and by the Stony Brook University Retirees’ Dissertation Fellowship awarded to BWH.
Footnotes
The research reported in this article served to fulfill the dissertation requirements of Brian W. Haas in the Department of Psychology at Stony Brook University.
Contributor Information
Brian W. Haas, Stanford University School of Medicine, Stanford, CA, USA, and Stony Brook University, New York, NY, USA
R. Todd Constable, Yale University School of Medicine, New Haven, CT, USA.
Turhan Canli, Stony Brook University, New York, NY, USA.
References
- Aguirre GK, Zarahn E, D’Esposito M. The variability of human, BOLD hemodynamic responses. NeuroImage. 1998;8(4):360–369. doi: 10.1006/nimg.1998.0369.
- Bandettini PA, Cox RW. Event-related fMRI contrast when using constant interstimulus interval: Theory and experiment. Magnetic Resonance in Medicine. 2000;43(4):540–548. doi: 10.1002/(sici)1522-2594(200004)43:4<540::aid-mrm8>3.0.co;2-r.
- Bellgowan PS, Saad ZS, Bandettini PA. Understanding neural system dynamics through task modulation and measurement of functional MRI amplitude, latency, and width. Proceedings of the National Academy of Sciences of the United States of America. 2003;100(3):1415–1419. doi: 10.1073/pnas.0337747100.
- Boynton GM, Engel SA, Glover GH, Heeger DJ. Linear systems analysis of functional magnetic resonance imaging in human V1. Journal of Neuroscience. 1996;16(13):4207–4221. doi: 10.1523/JNEUROSCI.16-13-04207.1996.
- Braeutigam S, Bailey AJ, Swithenby SJ. Task-dependent early latency (30–60 ms) visual processing of human faces and other objects. NeuroReport. 2001;12(7):1531–1536. doi: 10.1097/00001756-200105250-00046.
- Breiter HC, Etcoff NL, Whalen PJ, Kennedy WA, Rauch SL, Buckner RL, et al. Response and habituation of the human amygdala during visual processing of facial expression. Neuron. 1996;17:875–887. doi: 10.1016/s0896-6273(00)80219-6.
- Buckner RL. The hemodynamic inverse problem: Making inferences about neural activity from measured MRI signals. Proceedings of the National Academy of Sciences of the United States of America. 2003;100(5):2177–2179. doi: 10.1073/pnas.0630492100.
- Buckner RL, Bandettini PA, O’Craven KM, Savoy RL, Petersen SE, Raichle ME, et al. Detection of cortical activation during averaged single trials of a cognitive task using functional magnetic resonance imaging. Proceedings of the National Academy of Sciences of the United States of America. 1996;93(25):14878–14883. doi: 10.1073/pnas.93.25.14878.
- Cavanna AE, Trimble MR. The precuneus: A review of its functional anatomy and behavioural correlates. Brain. 2006;129(3):564–583. doi: 10.1093/brain/awl004.
- Chen SH, Desmond JE. Temporal dynamics of cerebro-cerebellar network recruitment during a cognitive task. Neuropsychologia. 2005;43(9):1227–1237. doi: 10.1016/j.neuropsychologia.2004.12.015.
- Costafreda SG, Brammer MJ, David AS, Fu CH. Predictors of amygdala activation during the processing of emotional stimuli: A meta-analysis of 385 PET and fMRI studies. Brain Research Reviews. (in press). doi: 10.1016/j.brainresrev.2007.10.012.
- Critchley H, Daly E, Phillips M, Brammer M, Bullmore E, Williams S, et al. Explicit and implicit neural mechanisms for processing of social information from facial expressions: A functional magnetic resonance imaging study. Human Brain Mapping. 2000;9(2):93–105. doi: 10.1002/(SICI)1097-0193(200002)9:2<93::AID-HBM4>3.0.CO;2-Z.
- Dolan RJ, Fletcher P, Morris J, Kapur N, Deakin JF, Frith CD. Neural activation during covert processing of positive emotional facial expressions. NeuroImage. 1996;4(3):194–200. doi: 10.1006/nimg.1996.0070.
- Feinstein JS, Goldin PR, Stein MB, Brown GG, Paulus MP. Habituation of attentional networks during emotion processing. NeuroReport. 2002;13(10):1255–1258. doi: 10.1097/00001756-200207190-00007.
- Friston KJ, Jezzard P, Turner R. Analysis of functional MRI time series. Human Brain Mapping. 1994;1:153–171.
- Garrett AS, Maddock RJ. Time course of the subjective emotional response to aversive pictures: Relevance to fMRI studies. Psychiatry Research. 2001;108(1):39–48. doi: 10.1016/s0925-4927(01)00110-x.
- Garrett AS, Maddock RJ. Separating subjective emotion from the perception of emotion-inducing stimuli: An fMRI study. NeuroImage. 2006;33(1):263–274. doi: 10.1016/j.neuroimage.2006.05.024.
- Grill-Spector K, Henson R, Martin A. Repetition and the brain: Neural models of stimulus-specific effects. Trends in Cognitive Science. 2006;10(1):14–23. doi: 10.1016/j.tics.2005.11.006.
- Hay DC, Smyth MM, Hitch GJ, Horton NJ. Serial position effects in short-term visual memory: A SIMPLE explanation? Memory and Cognition. 2007;35(1):176–190. doi: 10.3758/bf03195953.
- Henson RN, Price CJ, Rugg MD, Turner R, Friston KJ. Detecting latency differences in event-related BOLD responses: Application to words versus nonwords and initial versus repeated face presentations. NeuroImage. 2002a;15(1):83–97. doi: 10.1006/nimg.2001.0940.
- Henson RN, Rugg MD, Shallice T, Josephs O, Dolan RJ. Recollection and familiarity in recognition memory: An event-related functional magnetic resonance imaging study. Journal of Neuroscience. 1999;19(10):3962–3972. doi: 10.1523/JNEUROSCI.19-10-03962.1999.
- Henson RN, Shallice T, Gorno-Tempini ML, Dolan RJ. Face repetition effects in implicit and explicit memory tests as measured by fMRI. Cerebral Cortex. 2002b;12(2):178–186. doi: 10.1093/cercor/12.2.178.
- Ishai A, Pessoa L, Bikle PC, Ungerleider LG. Repetition suppression of faces is modulated by emotion. Proceedings of the National Academy of Sciences of the United States of America. 2004;101(26):9827–9832. doi: 10.1073/pnas.0403559101.
- Johnstone T, Somerville LH, Alexander AL, Oakes TR, Davidson RJ, Kalin NH, et al. Stability of amygdala BOLD response to fearful faces over multiple scan sessions. NeuroImage. 2005;25(4):1112–1123. doi: 10.1016/j.neuroimage.2004.12.016.
- Killgore WD, Yurgelun-Todd DA. Activation of the amygdala and anterior cingulate during nonconscious processing of sad versus happy faces. NeuroImage. 2004;21(4):1215–1223. doi: 10.1016/j.neuroimage.2003.12.033.
- Krekelberg B, Boynton GM, van Wezel RJ. Adaptation: From single cells to BOLD signals. Trends in Neuroscience. 2006;29(5):250–256. doi: 10.1016/j.tins.2006.02.008.
- Lange K, Williams LM, Young AW, Bullmore ET, Brammer MJ, Williams SC, et al. Task instructions modulate neural responses to fearful facial expressions. Biological Psychiatry. 2003;53(3):226–232. doi: 10.1016/s0006-3223(02)01455-5.
- Lundstrom BN, Ingvar M, Petersson KM. The role of precuneus and left inferior frontal cortex during source memory episodic retrieval. NeuroImage. 2005;27(4):824–834. doi: 10.1016/j.neuroimage.2005.05.008.
- Martindale C, Anderson K, Moore K, West AN. Creativity, oversensitivity, and rate of habituation. Personality and Individual Differences. 1996;20(4):423–427.
- McCarthy G, Puce A, Belger A, Allison T. Electrophysiological studies of human face perception. II: Response properties of face-specific potentials generated in occipitotemporal cortex. Cerebral Cortex. 1999;9(5):431–444. doi: 10.1093/cercor/9.5.431.
- Menon RS, Kim SG. Spatial and temporal limits in cognitive neuroimaging with fMRI. Trends in Cognitive Science. 1999;3(6):207–216. doi: 10.1016/s1364-6613(99)01329-7.
- Miezin FM, Maccotta L, Ollinger JM, Petersen SE, Buckner RL. Characterizing the hemodynamic response: Effects of presentation rate, sampling procedure, and the possibility of ordering brain activity based on relative timing. NeuroImage. 2000;11(6):735–759. doi: 10.1006/nimg.2000.0568.
- Morris JS, Friston KJ, Buchel C, Frith CD, Young AW, Calder AJ, et al. A neuromodulatory role for the human amygdala in processing emotional facial expressions. Brain. 1998;121(1):47–57. doi: 10.1093/brain/121.1.47.
- Noguchi Y, Inui K, Kakigi R. Temporal dynamics of neural adaptation effect in the human visual ventral stream. Journal of Neuroscience. 2004;24(28):6283–6290. doi: 10.1523/JNEUROSCI.0655-04.2004.
- Pessoa L, McKenna M, Gutierrez E, Ungerleider LG. Neural processing of emotional faces requires attention. Proceedings of the National Academy of Sciences of the United States of America. 2002;99(17):11458–11463. doi: 10.1073/pnas.172403899.
- Phan KL, Liberzon I, Welsh RC, Britton JC, Taylor SF. Habituation of rostral anterior cingulate cortex to repeated emotionally salient pictures. Neuropsychopharmacology. 2003;28(7):1344–1350. doi: 10.1038/sj.npp.1300186.
- Phan KL, Wager TD, Taylor SF, Liberzon I. Functional neuroimaging studies of human emotions. CNS Spectrums. 2004;9(4):258–266. doi: 10.1017/s1092852900009196.
- Phillips ML, Medford N, Young AW, Williams L, Williams SC, Bullmore ET, et al. Time courses of left and right amygdalar responses to fearful facial expressions. Human Brain Mapping. 2001;12(4):193–202. doi: 10.1002/1097-0193(200104)12:4<193::AID-HBM1015>3.0.CO;2-A.
- Phillips ML, Williams LM, Heining M, Herba CM, Russell T, Andrew C, et al. Differential neural responses to overt and covert presentations of facial expressions of fear and disgust. NeuroImage. 2004;21(4):1484–1496. doi: 10.1016/j.neuroimage.2003.12.013.
- Protopopescu X, Pan H, Tuescher O, Cloitre M, Goldstein M, Engelien W, et al. Differential time courses and specificity of amygdala activity in posttraumatic stress disorder subjects and normal control subjects. Biological Psychiatry. 2005;57(5):464–473. doi: 10.1016/j.biopsych.2004.12.026.
- Puce A, Allison T, McCarthy G. Electrophysiological studies of human face perception. III: Effects of top-down processing on face-specific potentials. Cerebral Cortex. 1999;9(5):445–458. doi: 10.1093/cercor/9.5.445.
- Seifritz E, Esposito F, Hennel F, Mustovic H, Neuhoff JG, Bilecen D, et al. Spatiotemporal pattern of neural processing in the human auditory cortex. Science. 2002;297(5587):1706–1708. doi: 10.1126/science.1074355.
- Siegle GJ, Steinhauer SR, Thase ME, Stenger VA, Carter CS. Can’t shake that feeling: Event-related fMRI assessment of sustained amygdala activity in response to emotional information in depressed individuals. Biological Psychiatry. 2002;51(9):693–707. doi: 10.1016/s0006-3223(02)01314-8.
- Strauss MM, Makris N, Aharon I, Vangel MG, Goodman J, Kennedy DN, Gasic GP, Breiter HC. fMRI of sensitization to angry faces. NeuroImage. 2005;26(2):389–413. doi: 10.1016/j.neuroimage.2005.01.053.
- Visscher KM, Miezin FM, Kelly JE, Buckner RL, Donaldson DI, McAvoy MP, et al. Mixed blocked/event-related designs separate transient and sustained activity in fMRI. NeuroImage. 2003;19(4):1694–1708. doi: 10.1016/s1053-8119(03)00178-2.
- Whalen PJ. Fear, vigilance, and ambiguity: Initial neuroimaging studies of the human amygdala. Current Directions in Psychological Science. 1998;7(6):177–188.
- Whalen PJ, Shin LM, McInerney SC, Fischer H, Wright CI, Rauch SL. A functional MRI study of human amygdala responses to facial expressions of fear versus anger. Emotion. 2001;1(1):70–83. doi: 10.1037/1528-3542.1.1.70.
- Winston JS, Henson RN, Fine-Goulden MR, Dolan RJ. fMRI-adaptation reveals dissociable neural representations of identity and expression in face perception. Journal of Neurophysiology. 2004;92(3):1830–1839. doi: 10.1152/jn.00155.2004.
- Wright CI, Fischer H, Whalen PJ, McInerney SC, Shin LM, Rauch SL. Differential prefrontal cortex and amygdala habituation to repeatedly presented emotional stimuli. NeuroReport. 2001;12(2):379–383. doi: 10.1097/00001756-200102120-00039.