Hum Brain Mapp. 2015 Feb 25;36(6):2338–2351. doi: 10.1002/hbm.22774

From personal fear to mass panic: The neurological basis of crowd perception

Elisabeth M. J. Huis in 't Veld 1, Beatrice de Gelder 1,2
PMCID: PMC6869145  PMID: 25716010

Abstract

Recent studies have investigated the neural correlates of how we perceive the emotions of individuals, or of a group of individuals, using images of individual bodily expressions. However, it is still largely unknown how we perceive the emotion of a dynamic crowd. This fMRI study used realistic videos of a large group of people expressing fearful, happy or neutral emotions. Furthermore, the emotions were expressed either by unrelated individuals in the group or by an interacting group. It was hypothesized, first, that the dynamics between the people in a crowd are a more salient signal than the emotion of the crowd alone and, second, that the group interaction is of special importance in a fearful or "panic" situation, as opposed to a happy or neutral situation. Using a fast event‐related design, it was revealed that observing interactive individuals, more so than independently expressive individuals, activated networks related to the perception, execution and integration of action and emotion. Most importantly, the interactive or panicked crowds, as opposed to the individually fearful crowds, triggered more anticipatory and action preparation activity, whereas the brain was less sensitive to the dynamics of individuals in a happy or neutral crowd. This is the first study to assess the effect of the dynamics between people and the collectively displayed emotion as an important aspect of emotional crowd perception. Hum Brain Mapp 36:2338–2351, 2015. © 2015 Wiley Periodicals, Inc.

Keywords: emotion, fMRI, social interaction, body, movement

INTRODUCTION

In day‐to‐day situations, one often sees a few individuals at the same time, acting either as individuals or as a group. In potentially critical situations, it becomes very important to quickly perceive the mood of a crowd, for example when panic breaks out [Helbing et al., 2000]. Whole body expressions are primary carriers of emotion and action information and may thus play a more important role in a crowd situation than facial expressions [de Gelder et al., 2010; Frijda, 2010]. In the last decade, many studies have investigated the neural correlates of how we perceive the mood of an individual from bodily expressions. But how do we perceive the mood of a crowd? Simmons et al. [2006] used a "Wall of Faces" paradigm to study which brain regions were recruited when subjects viewed 32 faces simultaneously. Areas such as the ventromedial prefrontal cortex and ventral anterior cingulate cortex (ACC) appear important for the processing of such complex scenes. However, does the brain perceive a group of people merely as the sum of the individuals in the group? In a task similar to the Wall of Faces paradigm, McHugh et al. [2010] presented displays consisting of multiple whole body dynamic avatars and found that the emotion of a group is quickly perceived, especially for happiness, fear and sadness. However, these scenarios still do not come close to everyday situations, where the movements, interactions and behaviors of other people are very salient signals. For example, observers are able to detect fake or unnatural group behaviors from subtle body motion cues alone [Ennis et al., 2010; McDonnell et al., 2009], and observers are sensitive to the interaction between group members when judging an individual agent's movement and emotion [Clarke et al., 2005; Hirai and Kakigi, 2009; Manera et al., 2011; Neri et al., 2006]. In fact, subtle movement indicators are enough to gauge whether an interaction between two people is a tease or a threat [Sinke et al., 2010]. Previous fMRI studies assessed the underlying neural correlates of these social interactions using stimuli of two humans facing each other [Kujala et al., 2012], inviting versus avoiding interactions [Dolcos et al., 2012; Sung et al., 2011] and interacting versus noninteracting (point‐light or human) figures [Centelles et al., 2011; Iacoboni et al., 2004; Pierno et al., 2008]. These studies indicate that action [extrastriate body area (EBA), hMT+/V5, fusiform gyrus (FG), premotor cortex (PM), precuneus, inferior frontal gyrus (IFG), superior temporal sulcus (STS)] and emotion processing networks [amygdala (AMG), insula, ACC] play a role in the processing of natural social interactions.

Interestingly, all these studies include humans or human figures, but movement alone can be a strong indicator of interaction, even when the agents are nonhuman shapes. For example, Castelli et al. [2000] used animations in which a red and a blue triangle played out different scenarios (chasing, mocking or surprising each other) or moved randomly. Again, areas such as the FG and STS showed a preference for the socially interactive stimuli. Additionally, areas related to social perception, such as the amygdala, hMT+/V5, dmPFC, posterior cingulate cortex (PCC) and intraparietal sulcus (IPS), were found to respond to these kinds of stimuli in similar studies [Blakemore et al., 2003; Chaminade et al., 2011; Gobbini et al., 2007; Martin and Weisberg, 2003; Schultz et al., 2005; Tavares et al., 2008].

In conclusion, there is ample evidence that the dynamics between individuals are as important as the emotions of the individuals themselves when it comes to accurately judging social interactions. However, not much is known about the neural correlates of perceiving large groups of people, or about the effect of interactions between the people in a crowd. The current study used realistic videos of a large group of people expressing emotion either as individuals or as a group. We tested two hypotheses: first, whether the brain is sensitive to the difference between individual and interactive expression, and second, whether this sensitivity is a function of the emotion expressed. To assess this question, videos showing a crowd of people expressing neutral, fearful or happy bodily expressions, either interacting (e.g., "You are at a football stadium together with a group of supporters of your club") or ignoring those around them (e.g., "You are happy or fearful because of personal news you just received, but it has nothing to do with the people around you"), were presented to participants in a fast event‐related design. Interactively fearful crowds more closely resemble panic situations, whereas interactively happy crowds are more akin to a happy crowd during a sports event. Based on the previous studies, it is expected that action perception and body motion networks [among others, the precentral gyrus and the superior and inferior parietal regions; Grosbras et al., 2012], but foremost those networks involved in body motion, kinematics and emotion (especially fear) processing, such as the premotor and supplementary motor area (SMA), amygdala, anterior insula, STS, EBA, FG, IFG and the cerebellum [de Gelder, 2006; de Gelder et al., 2010; Grezes et al., 2007; McAleer et al., 2014; Pichon et al., 2008, 2012], will be more sensitive to the more salient interactively fearful crowds than to interactively happy crowds.

MATERIALS AND METHODS

Subjects

Sixteen right‐handed participants (3 male; between 19 and 27 years old, M = 22.7, SD = 2.4) were recruited by an advertisement at Maastricht University. All participants were healthy with no history of neurological or psychiatric illness and had normal or corrected‐to‐normal vision. All subjects gave informed consent and were paid €10 per hour. The study was performed in accordance with the Declaration of Helsinki and was approved by the local medical ethical committee.

Stimulus Materials

Video recordings were made of a group of 17 professional actors (nine of whom were women) expressing happy, fearful, or neutral emotions in either an interactive or individual manner. For the interactive videos, the actors were instructed to express the emotion while interacting with the other members of the group. For the individual condition, the actors were instructed to express the emotion while ignoring the other actors. See Figure 1 for example frames. The recordings were made with an HD digital camera (25 frames/s) and edited with Ulead VideoStudio into 2.5 s segments (63 frames, 632 × 416 pixels). The videos were converted to greyscale and low‐pass filtered in the spatial frequency domain using Matlab software. This Fourier‐based technique filters out high spatial frequencies, resulting in a blurred video clip from which confounding information, such as facial expressions and details of clothing, is removed. The video clips were tested in a validation study in which 18 first‐year students of Tilburg University categorized the emotion in each clip and indicated whether the people in the group were expressing the emotion interactively or individually. The eight video clips with the best recognition rates (all above 80% correct) per condition were selected, resulting in a total of 48 video clips. These video clips were processed further by adding two colored dots (blue or yellow, lasting 80 ms each) in random frames and at a random location on the frame.
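The spatial low‐pass filtering step can be sketched in a few lines. The study used Matlab; the following is a minimal Python/NumPy equivalent applied to a single greyscale frame, and the cutoff value is an arbitrary illustrative placeholder, not the study's actual filter parameter:

```python
import numpy as np

def lowpass_filter_frame(frame, cutoff=0.05):
    """Low-pass filter a greyscale frame in the spatial frequency domain.

    frame:  2D array of pixel intensities.
    cutoff: radius of the retained frequency band as a fraction of the
            sampling frequency (illustrative value, not the study's).
    """
    h, w = frame.shape
    # Centre the zero-frequency component so a radial mask can be applied.
    spectrum = np.fft.fftshift(np.fft.fft2(frame))
    # Build a circular mask that keeps only low spatial frequencies.
    fy = np.fft.fftshift(np.fft.fftfreq(h))
    fx = np.fft.fftshift(np.fft.fftfreq(w))
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    mask = radius <= cutoff
    # Discard high frequencies (facial details, clothing texture), invert.
    blurred = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return np.real(blurred)
```

Applying such a filter to every frame of a clip yields blurred videos of the kind shown in Figure 1.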

Figure 1. Example frame of each of the stimulus conditions. The videos were blurred using a low‐pass Fourier‐based filtering technique.

Twenty‐eight other participants from Tilburg University also rated the valence and arousal of these clips on a scale from 1 to 5 using a self‐assessment manikin [Bradley and Lang, 1994] and indicated on a scale from 1 to 5 how much movement each video contained. The amount of movement in each video clip was also estimated objectively, using a procedure [Pichon et al., 2008] that consists of calculating per‐pixel luminance differences between consecutive frames. Happy videos were rated as more arousing, higher in positive valence and containing more movement than fearful videos, which in turn were more arousing, higher in negative valence and rated as having more movement than neutral videos. Furthermore, interactive fearful and happy videos were seen as more arousing and containing more movement than their individual counterparts, while the dynamics did not influence the valence or arousal ratings of the neutral videos. Lastly, 15 other participants from Tilburg University rated the amount of movement in still frames taken from the movies. The same pattern of results was found as for the video stimuli. Notably, the still frames of the fearful movies were perceived as having even more implied movement than the actual videos. See Figure 2.
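The objective movement measure can be illustrated with a short sketch. The exact implementation of the procedure from Pichon et al. [2008] is not specified here, so the following Python version assumes the plain reading of the text: average the absolute luminance change between consecutive frames.

```python
import numpy as np

def movement_index(frames):
    """Estimate movement in a greyscale clip as the mean absolute
    luminance change between consecutive frames.

    frames: array of shape (n_frames, height, width).
    Returns one scalar per clip; higher values mean more movement.
    """
    frames = np.asarray(frames, dtype=float)
    # Per-pixel luminance difference between frame t and frame t+1.
    diffs = np.abs(np.diff(frames, axis=0))
    return diffs.mean()
```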

Figure 2. Mean arousal, valence and subjective movement ratings of the stimuli, and mean movement calculated as luminance differences per frame. *P < 0.05, ***P < 0.001.

Experimental Design

We created six experimental conditions in a three emotion (neutral, happy and fearful) by two dynamics (interactive or individual) fast event‐related design. The experiment was divided into four functional runs of 96 randomly presented trials (48 clips by two repetitions), for a total of 384 trials. Each condition therefore contained a total of 64 trials (eight repetitions of eight unique clips). A trial consisted of the video presentation (2500 ms), an answer screen showing the Dutch words "zelfde" (same) or "anders" (different) in white letters on a black background for 1500 ms, and an intertrial interval of 2, 4, or 6 s showing a white fixation cross on a black background. The forced‐choice task for the participants consisted of indicating whether the two dots in the movie were of the same or different color. The response alternatives "same" and "different" appeared randomly left or right of the fixation cross to prevent anticipatory responses. The stimuli were back‐projected onto a frosted screen at the back end of the scanner tunnel and viewed through a mirror attached to the head coil. Stimuli were presented using Presentation software (Neurobehavioral Systems, version 11.0).
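For concreteness, the trial structure can be sketched as follows. This is a hypothetical Python reconstruction of how one run's trial list could be generated from the design described above; the clip naming scheme is invented for illustration.

```python
import random

EMOTIONS = ["neutral", "happy", "fearful"]
DYNAMICS = ["interactive", "individual"]
CLIPS_PER_CONDITION = 8        # eight unique clips per condition
REPETITIONS_PER_RUN = 2        # each clip shown twice per run
ITI_OPTIONS = [2.0, 4.0, 6.0]  # jittered intertrial intervals (s)

def build_run():
    """Return one run of 96 randomly ordered trials (48 clips x 2)."""
    trials = []
    for emotion in EMOTIONS:
        for dynamics in DYNAMICS:
            for clip in range(CLIPS_PER_CONDITION):
                for _ in range(REPETITIONS_PER_RUN):
                    trials.append({
                        "emotion": emotion,
                        "dynamics": dynamics,
                        # placeholder clip name, not the study's filenames
                        "clip": f"{emotion}_{dynamics}_{clip:02d}",
                        "video_s": 2.5,   # video presentation
                        "answer_s": 1.5,  # "zelfde"/"anders" answer screen
                        "iti_s": random.choice(ITI_OPTIONS),
                    })
    random.shuffle(trials)
    return trials

runs = [build_run() for _ in range(4)]  # four functional runs
assert sum(len(r) for r in runs) == 384
```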

Image Acquisition

fMRI images of brain activity were acquired using a 3 Tesla head scanner (Siemens Allegra, AG, Erlangen, Germany) located at Maastricht University, the Netherlands. High resolution anatomical MRI data were acquired using a three‐dimensional (3D) T1‐weighted image according to the ADNI (Alzheimer's Disease Neuroimaging Initiative) MPRAGE sequence protocol: TR = 2250 ms, TE = 2.6 ms, flip angle (FA) = 90°, FoV = 256 × 256 mm2, matrix size = 256 × 256, slice thickness = 1 mm, sagittal orientation, total scan time = 8 min 26 s. The anatomical scan of eight of the participants was acquired during a different run (see Hortensius and de Gelder [2014]). The functional images were acquired with the following repeated single‐shot echo‐planar imaging sequence: repetition time (TR) = 2000 ms, echo time (TE) = 30 ms, FA = 90°, field of view (FOV) = 224 × 224 mm2, matrix size = 64 × 64, slice thickness = 3.5 mm, transversal orientation, number of volumes = 397 per run, total scan time per run = 13 min 14 s.

Data Analyses

Data were preprocessed and analyzed using BrainVoyager QX 2.3 (Brain Innovation, Maastricht, the Netherlands). The first two volumes of each run were discarded. The functional data were slice scan time corrected using cubic spline interpolation, aligned to the first nondiscarded volume, 3‐dimensionally motion corrected using trilinear/sinc interpolation and temporally high‐pass filtered (GLM‐Fourier) with two cycles per time course. The ADNI scan was transformed into Talairach space [Talairach and Tournoux, 1988]. After coregistration, the functional runs were also Talairach‐transformed and spatially smoothed with an 8‐mm Gaussian kernel. At the single‐subject level, the data of each functional run were deconvolved, using the standard procedure for analyzing fast event‐related designs in BrainVoyager. In short, the hemodynamic responses to the overlapping events are separated by modeling 10 shifted "delay" predictors per condition, resulting in a design with 66 predictors: 10 predictors per condition to model the BOLD response at several delays, plus an additional six z‐transformed movement predictors. Then, a random effects multisubject general linear model was conducted.
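The deconvolution step can be sketched as a finite impulse response (FIR) design: each condition contributes 10 "delay" predictors, one per volume following stimulus onset. The Python sketch below shows the general construction under that assumption; BrainVoyager's internal implementation may differ in detail, and the onsets and dimensions are illustrative.

```python
import numpy as np

def fir_design_matrix(onsets_by_condition, n_volumes, n_delays=10):
    """Build an FIR (deconvolution) design matrix.

    onsets_by_condition: dict mapping condition name -> list of stimulus
                         onset volume indices (in TR units).
    n_volumes:           number of volumes in the run.
    n_delays:            number of shifted 'delay' predictors per condition.

    Returns a design matrix of shape (n_volumes, n_conditions * n_delays)
    and the list of column labels.
    """
    conditions = sorted(onsets_by_condition)
    X = np.zeros((n_volumes, len(conditions) * n_delays))
    labels = []
    for c, cond in enumerate(conditions):
        for d in range(n_delays):
            col = c * n_delays + d
            labels.append(f"{cond}_delay{d}")
            for onset in onsets_by_condition[cond]:
                t = onset + d  # BOLD response sampled d TRs after onset
                if t < n_volumes:
                    X[t, col] = 1.0
    return X, labels

# With 6 conditions this yields 60 delay predictors; appending six
# z-scored motion parameters gives the 66-predictor design described above.
```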

As contrasts are not a suitable analysis method for this design, random effects ANOVAs were performed. These analyses show in which areas a main effect of emotion, a main effect of dynamics or an interaction between emotion and dynamics is found. The resulting clusters were corrected for multiple comparisons using a cluster‐level threshold analysis with an initial P‐value of P < 0.005 [Goebel et al., 2006]. Four very extensive clusters covering most of the occipital lobe were found for the main effect of emotion using this threshold, so these four clusters were corrected more stringently at a false discovery rate (FDR) of q < 0.01. These are indicated in the results (see Table 1). Also, two very large clusters in the inferior occipital gyrus and cerebellum were found for the main effect of dynamics; these were corrected more strictly with a cluster‐level threshold analysis with an initial P‐value of P < 0.001 (as indicated in Table 2). The ANOVA only specifies in which areas a main or an interaction effect can be found, but does not show which conditions are higher or lower than the others. To assess the underlying pattern of the results, beta values of the clusters were exported and explored in SPSS using repeated measures GLMs with a three emotion (fear, happy, neutral) by two dynamics (interactive, individual) within‐subject design. It is important to note that these statistics are not reported but were only used to explore the main and interaction effects and to order the results in the tables.
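The FDR correction applied to the largest clusters is, in the standard case, the Benjamini–Hochberg procedure. A generic sketch (not BrainVoyager's own code) of finding the voxelwise P‐value cutoff at rate q:

```python
import numpy as np

def fdr_threshold(p_values, q=0.01):
    """Benjamini-Hochberg FDR: return the p-value cutoff at rate q.

    p_values: 1D array of voxelwise p-values.
    Voxels with p <= cutoff survive; returns 0.0 if none do.
    """
    p = np.sort(np.asarray(p_values))
    m = p.size
    # Find the largest k such that p_(k) <= (k/m) * q.
    below = p <= (np.arange(1, m + 1) / m) * q
    if not below.any():
        return 0.0
    return p[below.nonzero()[0].max()]
```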

Table 1.

Main effect of emotion.

Anatomic region  Hem.  x  y  z  Voxels (1 × 1 × 1 mm)  F  Peak voxel P
A. Fear > happy + neutral
Fusiform gyrus L −46 −35 −15 1557 13.49 P = 0.000066
Insula L −28 19 3 191 8.16 P = 0.001476
Inferior frontal gyrus L −43 52 −6 108 7.46 P = 0.002342
B. Fear + happy > neutral
Middle occipital gyrus (a) L −52 −71 9 2732 33.73 P = 0.000001
Temporal pole R 44 7 −36 186 9.26 P = 0.000737
C. Fear + neutral > happy
Postcentral gyrus R 35 −26 42 203 7.08 P = 0.003029
Inferior parietal lobule R 38 −68 33 287 10.42 P = 0.000365
Superior parietal lobule R 17 −59 51 591 10.97 P = 0.000265
Ventral anterior cingulate cortex L −7 10 24 346 8.25 P = 0.001389
Precuneus R 26 −59 9 230 9.36 P = 0.000692
Precuneus L −10 −62 6 275 8.62 P = 0.001097
Superior frontal gyrus R 29 43 3 653 9.08 P = 0.000824
D. Happy > fear + neutral
Middle occipital gyrus (a) R 47 −65 0 3636 27.80 P = 0.000001
Cuneus (a) R 17 −89 9 2071 26.18 P = 0.000001
Cuneus (a) L −10 −95 9 429 20.64 P = 0.000002
E. Neutral > fear + happy
Precentral gyrus R 29 −14 54 107 9.66 P = 0.000576
Cerebellum R 11 −44 −9 247 8.66 P = 0.001074
Superior frontal gyrus L −25 40 0 467 11.30 P = 0.000219

Cluster sizes, peak voxel locations and P‐values of clusters with a significant main effect of emotion. Cluster size threshold corrected, P < 0.005; (a) = corrected at FDR < 0.01.

Table 2.

Main effect of dynamics.

Anatomic region  Hem.  x  y  z  Voxels (1 × 1 × 1 mm)  F  Peak voxel P
Posterior ACC R 8 −11 42 425 19.84 P = 0.000463
Fusiform Gyrus R 38 −47 −15 1203 20.65 P = 0.000387
Fusiform Gyrus L −46 −41 −15 366 16.57 P = 0.001002
Inferior Occipital Gyrus (a) L −43 −74 3 6056 33.44 P = 0.000036
Lingual Gyrus (a) R 26 −89 −6 3294 38.13 P = 0.000018
Extrastriate cortex L −34 −80 −12 13192 34.82 P = 0.000029
Middle Temporal Gyrus R 41 −62 9 5775 34.07 P = 0.000033
Supramarginal Gyrus R 62 −32 24 131 14.24 P = 0.001837
Precuneus L −31 −62 36 856 22.15 P = 0.000281
Postcentral Gyrus L −49 −35 45 163 13.32 P = 0.002367
Precentral gyrus L −25 −14 60 126 16.99 P = 0.000904
Superior temporal sulcus L −49 −41 18 801 18.00 P = 0.000709
Cerebellum L −10 −50 −30 362 20.79 P = 0.000376
Cerebellum L −16 −35 −27 430 19.31 P = 0.000522
Cerebellum (a) L −40 −65 −27 86 27.77 P = 0.000094

Cluster sizes, peak voxel locations and P‐values of clusters with a significant main effect of dynamics. Cluster size threshold corrected, P < 0.005; (a) = cluster size threshold corrected, P < 0.001.

Table 3.

Emotion by dynamics interaction.

Anatomic region  Hem.  x  y  z  Voxels (1 × 1 × 1 mm)  F  Peak voxel P
A. Interactive fear > individual fear
Parahippocampal gyrus R 23 −38 −6 267 8.81 P = 0.000975
Parahippocampal gyrus L −28 −47 −6 364 10.69 P = 0.000312
Extrastriate visual cortex R 41 −74 24 962 10.13 P = 0.000434
Extrastriate visual cortex L −34 −80 24 122 7.12 P = 0.002929
Insula R 35 22 15 605 8.90 P = 0.000920
Precuneus R 23 −62 42 1483 9.56 P = 0.000611
Inferior temporal gyrus R 50 −50 0 276 7.68 P = 0.002013
Lingual gyrus L −19 −89 6 2144 15.06 P = 0.000030
B. Interactive fear > individual fear + interactive happy > individual happy
Lingual gyrus R 14 −9 0 6383 26.37 P = 0.000000
C. Interactive fear > individual fear + interactive neutral > individual neutral
Middle occipital gyrus L −28 −86 6 234 8.11 P = 0.001524
Middle occipital gyrus (hMT) L −49 −68 6 1114 11.57 P = 0.000188
D. Interactive fear > individual fear + individual neutral > interactive neutral
Insula R 41 16 −6 4516 19.20 P = 0.000004
Insula L −37 16 −6 242 8.00 P = 0.001642
Posterior dorsal ACC L −7 4 27 363 9.69 P = 0.000565
E. Interactive neutral > individual neutral
Cuneus R 14 −83 21 436 12.53 P = 0.000111
Cuneus L −4 −89 36 211 8.65 P = 0.001079

Cluster sizes, peak voxel locations and P‐values of clusters with a significant emotion by dynamics interaction effect. Cluster size threshold corrected, P < 0.005.

RESULTS

All participants scored higher than 99% correct on the dot detection task; hence, those results were not analyzed further.

Main Effect of Emotion

The BOLD responses were higher in the fear condition than in the happy and neutral conditions in the left FG, the left insula and the left inferior frontal gyrus (IFG) (see Table 1A). Two other regions showed a preference for both the fearful and happy conditions compared to neutral: the left middle occipital gyrus (MOG), probably corresponding to hMT+/V5, extending into the middle temporal gyrus (MTG) and the superior temporal gyrus (STG), and the right temporal pole (TP) (see Table 1B). The majority of brain areas were more active during the fear and neutral conditions than during the happy condition. These included the right postcentral gyrus (somatosensory cortex), the right inferior parietal lobule (IPL), the right superior parietal lobule (SPL), the left ventral anterior cingulate cortex (ACC), the left precuneus, the right precuneus and the right superior frontal gyrus (SFG) (see Table 1C). Three regions showed the highest activation in response to the happy stimuli and the lowest activation in the neutral condition: the right MOG (or hMT+/V5), extending into the right middle, inferior and superior temporal gyri, and the right and left cuneus. In the left cuneus, almost the same pattern was found, except that there were no differences between the neutral and the fearful conditions (see Table 1D). Finally, three regions showed the highest BOLD response in the neutral condition: the precentral gyrus (premotor cortex), a region in the anterior lobe of the cerebellum and the left SFG (see Table 1E). See also Figure 3.

Figure 3. Areas with a main effect of emotion in the whole brain analysis. (A) right SFG, bilateral MOG, bilateral cuneus, right precuneus, left lingual gyrus, right postcentral gyrus, right IPL, and right TP. (B) bilateral SFG, left insula, bilateral MOG, right cuneus. (C) right cerebellum, right cuneus, left FG. (D) right precentral gyrus, right SPL, left ACC.

Main Effect of Dynamics

Only areas with a higher BOLD response in the interactive than in the individual condition were found. See Table 2 and Figure 4.

Figure 4. Areas with a main effect of dynamics (interactive > individual) in the whole brain analysis as described in Table 2. (A) right ACC, right lingual gyrus, left precuneus, right MTG, right FG, left IOG, left cerebellum. (B) left postcentral gyrus, left STS, left IOG, left FG, right posterior ACC, bilateral FG, right MTG, right lingual gyrus.

The Interaction Between Emotion and Dynamics

In a large number of areas, the interaction effect was the result of a stronger response to the interactive fear condition than to the individual fear condition, whereas there was no such difference in the other two emotion conditions. These areas include the bilateral parahippocampal gyrus, the bilateral extrastriate visual cortex, the right inferior temporal gyrus (ITG), a cluster in the right insula, the right precuneus and the left lingual gyrus (see Table 3A). The right lingual gyrus showed an increased BOLD response in the interactive fear condition as compared to the individual fear condition and in the interactive happy condition as compared to the individual happy condition, but no difference between the two neutral conditions (see Table 3B). Two clusters in the left MOG, one of which corresponds to hMT+/V5, showed a preference for the interactive fearful and neutral conditions as compared to their individual counterparts, without any difference between interactive and individual happy (see Table 3C). Lastly, the bilateral insula and a cluster in the posterior ACC responded to both the interactive fear and the individual neutral conditions (see Table 3D), and the bilateral cuneus activated more strongly in response to the interactive neutral condition (see Table 3E). See Figure 5.

Figure 5. Areas with an emotion by dynamics interaction effect in the whole brain analysis as described in Table 3. (A) Left insula, right ITG, bilateral lingual gyrus, right MOG (hMT). (B) Right insula, bilateral lingual gyrus, left MOG, left MOG (hMT), right extrastriate area. (C) Bilateral insula, bilateral parahippocampal gyrus, right lingual gyrus, right precuneus.

DISCUSSION

The aim of this study was to assess the neural correlates of emotional crowd perception and the effect of the behavioral dynamics between individuals. First, observing emotional crowds activates an extensive network of areas involved in emotional facial and bodily expressions, imitation and emotion contagion (insula, cingulate cortex, IPL, somatosensory areas) [de Gelder, 2006; Iacoboni, 2009; Keysers et al., 2010; Leslie et al., 2004; Nummenmaa et al., 2008; Tamietto and de Gelder, 2010], motion processing (SPL, MOG), and action perception, preparation and execution (STS, SFG, premotor cortex, cerebellum) [Blake and Shiffrar, 2007; Caspers et al., 2010; Lestou et al., 2008]. More importantly, however, our first hypothesis pertained to whether the brain is sensitive to the behavioral dynamics of the individuals in a crowd. Indeed, regardless of the emotion of the crowd, observing interactive as compared to individually behaving crowds activated a broad network of areas, including those related to the perception of emotional body language, biological motion and social interaction processing, such as the lingual gyrus, supramarginal gyrus, extrastriate cortex, and inferior occipital gyrus [Blakemore et al., 2003; Centelles et al., 2011; Chaminade et al., 2011; Dolcos et al., 2012; Grezes et al., 2007; Iacoboni et al., 2004; Kujala et al., 2012; Pfeiffer et al., 2013; Pichon et al., 2012; Sinke et al., 2010; Tavares et al., 2008]. Additionally, interactive crowds more strongly activated networks related to action observation, understanding and execution, such as the precentral and postcentral gyri, the superior and inferior parietal lobules, the STS, MTG, FG, fusiform area, and even the cerebellum [Caligiore et al., 2013; Caspers et al., 2010; Pelphrey et al., 2004]. Interestingly, the cerebellum is crucial for action‐perception coupling [Christensen et al., 2014], and this response, in combination with the precuneus, somatosensory and primary motor cortex activations, may indicate that interactive behavior between people is an important determinant of whether emotional states are shared [Nummenmaa et al., 2012] or a signal that something relevant may be happening [Hadjikhani et al., 2008; Schulte‐Ruther et al., 2007; Tamura et al., 2013]. It was previously found that precuneus activity increases in response to the speed of a geometric shape when the movement is believed to be intentional, but decreases when it is thought to be random [Zacks et al., 2006]. Also, the precuneus, premotor and somatosensory areas are strongly interconnected and in turn linked to the inferior and superior parietal lobules and to subcortical areas (thalamus and pulvinar) involved in unconscious emotion processing [Cavanna and Trimble, 2006; Tamietto and de Gelder, 2010]. Finally, the dorsal ACC is connected to premotor areas and the limbic system [Etkin et al., 2011] and has been found to integrate emotional and behavioral responses [Pereira et al., 2010]. This activity, together with motor and SMA activations, indicates that interactive crowds might have a greater emotional impact than crowds of individuals and may increase action preparation and sympathetic nervous system arousal [Etkin et al., 2011; Gentil et al., 2009].
In addition, previous studies using controlled, nonhuman (geometrical) shapes have also found that movement alone can be a strong indicator of interaction, and reported similar activations [Blakemore et al., 2003; Chaminade et al., 2011; Gobbini et al., 2007; Martin and Weisberg, 2003; Schultz et al., 2005; Tavares et al., 2008]. The finding that the interaction between individuals influences these action perception and execution networks also corroborates psychophysiological findings that interactions between people increase corticospinal excitability [Bucchioni et al., 2013; Sartori et al., 2011] and influence mu rhythm oscillations, thought to reflect mirror neuron activity [Oberman et al., 2007]. In short, it seems that not only the collectively expressed emotion, but also the dynamics between the individuals, are very important characteristics of crowd perception.

We also assessed whether there is an interaction between the emotion of the crowd and the behavioral dynamics. Behaviorally, interactive fear or "panic" is experienced as more arousing and less pleasant than individual fear, and happy people interacting are seen as more pleasant and arousing than merely a collection of happy individuals, but the dynamics are not of importance for the perception of neutral crowds. Furthermore, interactive fearful crowds activated a specific network of brain areas consisting of the parahippocampal gyrus (PPA), EBA, the precuneus, insula, lingual gyrus, and ITG. The PPA is a well‐known area responsive to stimuli of scenes and is involved in encoding the layout of the environment [Epstein and Kanwisher, 1998], whereas the EBA is sensitive to the perception of bodies and body parts [Downing et al., 2001]. The EBA, PPA and ITG in the occipitotemporal cortex are crucial for visual perception, but also play an important role in self‐representation: the successful integration of one's own body into the environment by integrating visual, spatial and motor‐sensory information and preparing the body for action in the environment [Astafiev et al., 2004; Gallivan et al., 2013]. This is corroborated by studies showing that the EBA plays a role not only in perceiving a body, but also in action and goal perception [Downing et al., 2006; Herrington et al., 2012; Jastorff and Orban, 2009], and that damage to the extrastriate cortex produces illusory perceptions of one's own body in space [Heydrich and Blanke, 2013]. Similarly, both the EBA and the PPA are sensitive to emotion [Atkinson et al., 2012; Peelen et al., 2007; Sinke et al., 2012]. In conclusion, in a panic situation, these areas may be responsible for quickly assessing the gist of the scene in relation to the behaviors of others and, more importantly, our own body and its position. Additionally, the insula and the precuneus, with their connections to the limbic, somatosensory and motor systems [Augustine, 1996], may integrate all important emotion and action information into a meaningful whole [Carr et al., 2003; Kurth et al., 2010] and prepare the body to take the appropriate action [Zhang and Li, 2012].

Lastly, without taking the dynamics between the individuals into account, many of the regions involved in the processing of fearful crowds overlap with a network important to the perception of danger [Tamura et al., 2013; Zurcher et al., 2013]. Specifically, it is noteworthy that the FG, insula and IFG were activated in response to fearful crowds. Insula activation was previously found when participants viewed people facing each other [Kujala et al., 2012], in response to threatening scenes [Nummenmaa et al., 2008] and to stimuli showing a threatening response [Pichon et al., 2012]. The insula may therefore play a role in detecting threat in social interactions. It is well connected to other areas involved in emotion processing, such as the ACC and vmPFC [Grupe and Nitschke, 2013]. In addition, the IFG and FG have frequently been found to be engaged in the perception of social interactions [Centelles et al., 2011; Dolcos et al., 2012; Iacoboni et al., 2004; Kujala et al., 2012], and specifically in threatening interactions [Pichon et al., 2008, 2012; Sinke et al., 2010], even when there were no humans present in the stimuli [Gobbini et al., 2007; Schultz et al., 2005]. Additionally, activity in the IFG, a key region for action observation, imitation [Molnar‐Szakacs et al., 2005] and action understanding [De Lange et al., 2008], is related to the perception of group‐like behavior of dots [Chaminade et al., 2011] and of displays containing multiple faces [Simmons et al., 2006]. Most importantly, in the present case the IFG may play a role in risk aversion [Christopoulos et al., 2009].

An important issue when using dynamic, natural social interactions is the fact that they cannot be fully controlled on all levels. Thus, some of the activations could be explained by differences in the arousal, valence or movement of the stimuli, such as the clusters found in the cuneus, the middle occipital gyrus (likely corresponding to hMT+/V5) and the temporal poles, as these areas are also important for biological motion detection [Grossman et al., 2000; Watson et al., 1993] and for emotion processing and autonomic reactivity [Ongur and Price, 2000]. It is possible that merely the higher movement or arousal levels of the fearful and happy stimuli are responsible for activation in these areas [Lane et al., 1999]. Unfortunately, this is a major problem in dynamic emotion expression research, and it is well established that arousal [Adolphs et al., 1999] and movement information such as speed or gait [Chouchourelou et al., 2006; Roether et al., 2009] are each by themselves critical for accurate emotion recognition. This means that equalizing stimuli on these inherent dimensions impairs the correct recognition and perception of the emotion. It is important to point out that when images of movement are used, even these still frames contain different levels of implied motion. This is clearly reflected in the finding that the subjective ratings of the inherent movement in the still frames were similar to those of the actual movement in the video clips. However, even though there are differences in movement, valence and arousal between the videos, it is worth stressing that the activity patterns found in the BOLD signal did not match the movement patterns in the videos. For example, even though the (especially interactive) happy videos contain the most movement and are high in valence and arousal, only the lingual gyrus showed higher activation for interactive happy than for individual happy videos. In short, if the movement, arousal and valence of the stimuli were the main factors driving the BOLD responses, we should have found most activations in response to happy videos, or in response to interactive fear and interactive happy simultaneously. Nonetheless, it would be very beneficial if future studies unraveled the unique contributions of each of these processes. In addition, it would be interesting to assess more closely how people look at crowds and what exactly they pay attention to. Eye tracking experiments could shed light on the question of whether people look at crowds in different ways, and electroencephalography studies could assess when exactly the interaction dynamics influence the perception of the group. Furthermore, a study with motion‐captured groups could not only provide better controlled stimuli, by changing the position of the individuals without changing the movement information, but also provide a means to correlate activity with quantifiable movement aspects (e.g., distance between people, velocity, trajectories, and movement synchronization).

Note that the majority of studies into the perception of emotional body language or emotional social interactions find amygdala activation [Dolcos et al., 2012; Grezes et al., 2007; Kujala et al., 2012; Pichon et al., 2008; Sinke et al., 2010; Sung et al., 2011; Tavares et al., 2008]. A possible explanation for the lack of amygdala activation in the current study pertains to the fact that attention was devoted to the dot‐detection task the participants performed. Attention is known to reduce the level of activation of the amygdala, as we found in a study using individual video clips [Pichon et al., 2012; see de Gelder et al., 2012 for a review]. A dot detection task was used for a number of reasons. First, as compared to passive viewing, it not only engages the participants but also ensures that they watched the video. Second, the task is unrelated to the content of the stimuli and thus does not stimulate the participants to overtly think and reason about what is happening in the video.

This is the first study to assess the effect of the social interactions between people as an important aspect of emotional crowd perception. Taken together, these results reveal that the brain is sensitive to the difference between interactive and noninteractive expressions. As expected, the salience of the interactive emotion expressions relative to the individual ones was higher for fearful "panic" expressions than for happy or neutral expressions. This study again highlights that the perception of emotion and action are closely linked, and that the emotions and social interactions between people in turn influence our own responses to and anticipations of others, and the consequences for the self [Wolpert et al., 2003]. A better understanding of how the brain copes with complex social situations [Pfeiffer et al., 2013; Schilbach et al., 2013] adds a new dimension to understanding social communication and its deficits and is beneficial for the study of, for example, autism [Centelles et al., 2013; Pavlova, 2012; Zurcher et al., 2013]. Additionally, taking the social interaction and movement dynamics between individuals into account, rather than focusing solely on the facial or bodily expressions of individuals, could improve the efficiency and accuracy of, for example, camera‐based crowd surveillance.

ACKNOWLEDGMENTS

The authors are grateful to R. Hortensius and C.B.A. Sinke for assistance in data collection, to R. Hortensius and A. de Borst for comments on the manuscript, to the actors who participated in the video recordings, to S. Bell for video recording and to the Brussels opera house De Munt (http://www.demunt.be) for hosting the video recordings for this project. Author contributions: B.d.G. and E.H. designed the research, E.H. performed the research and analyzed the data, E.H. and B.d.G. wrote the paper. Conflict of interest: The authors declare no competing financial interests.

REFERENCES

  1. Adolphs R, Russell JA, Tranel D (1999): A role for the human amygdala in recognizing emotional arousal from unpleasant stimuli. Psychol Sci 10:167–171. [Google Scholar]
  2. Astafiev SV, Stanley CM, Shulman GL, Corbetta M (2004): Extrastriate body area in human occipital cortex responds to the performance of motor actions. Nat Neurosci 7:542–548. [DOI] [PubMed] [Google Scholar]
  3. Atkinson AP, Vuong QC, Smithson HE (2012): Modulation of the face‐ and body‐selective visual regions by the motion and emotion of point‐light face and body stimuli. Neuroimage 59:1700–1712. [DOI] [PubMed] [Google Scholar]
  4. Augustine JR (1996): Circuitry and functional aspects of the insular lobe in primates including humans. Brain Res Rev 22:229–244. [DOI] [PubMed] [Google Scholar]
  5. Blake R, Shiffrar M (2007): Perception of human motion. Annu Rev Psychol 58:47–73. [DOI] [PubMed] [Google Scholar]
  6. Blakemore SJ, Boyer P, Pachot‐Clouard M, Meltzoff A, Segebarth C, Decety J (2003): The detection of contingency and animacy from simple animations in the human brain. Cereb Cortex 13:837–844. [DOI] [PubMed] [Google Scholar]
  7. Bradley MM, Lang PJ (1994): Measuring emotion: The self‐assessment manikin and the semantic differential. J Behav Ther Exp Psy 25:49–59. [DOI] [PubMed] [Google Scholar]
  8. Bucchioni G, Cavallo A, Ippolito D, Marton G, Castiello U (2013): Corticospinal excitability during the observation of social behavior. Brain Cogn 81:176–182. [DOI] [PubMed] [Google Scholar]
  9. Caligiore D, Pezzulo G, Miall RC, Baldassarre G (2013): The contribution of brain sub‐cortical loops in the expression and acquisition of action understanding abilities. Neurosci Biobehav Rev 37:2504–2515. [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Carr L, Iacoboni M, Dubeau MC, Mazziotta JC, Lenzi GL (2003): Neural mechanisms of empathy in humans: A relay from neural systems for imitation to limbic areas. Proc Natl Acad Sci USA 100:5497–5502. [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Caspers S, Zilles K, Laird AR, Eickhoff SB (2010): ALE meta‐analysis of action observation and imitation in the human brain. Neuroimage 50:1148–1167. [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Castelli F, Happe F, Frith U, Frith C (2000): Movement and mind: A functional imaging study of perception and interpretation of complex intentional movement patterns. Neuroimage 12:314–325. [DOI] [PubMed] [Google Scholar]
  13. Cavanna AE, Trimble MR (2006): The precuneus: a review of its functional anatomy and behavioural correlates. Brain 129:564–583. [DOI] [PubMed] [Google Scholar]
  14. Centelles L, Assaiante C, Etchegoyhen K, Bouvard M, Schmitz C (2013): From action to interaction: Exploring the contribution of body motion cues to social understanding in typical development and in autism Spectrum Disorders. J Autism Dev Disord 43:1140–1150. [DOI] [PubMed] [Google Scholar]
  15. Centelles L, Assaiante C, Nazarian B, Anton JL, Schmitz C (2011): Recruitment of both the mirror and the mentalizing networks when observing social interactions depicted by point‐lights: A neuroimaging study. PLoS One 6:e15749. [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Chaminade T, Kawato M, Frith C (2011): Individuals' and groups' intentions in the medial prefrontal cortex. Neuroreport 22:814–818. [DOI] [PubMed] [Google Scholar]
  17. Chouchourelou A, Matsuka T, Harber K, Shiffrar M (2006): The visual analysis of emotional actions. Soc Neurosci 1:63–74. [DOI] [PubMed] [Google Scholar]
  18. Christensen A, Giese MA, Sultan F, Mueller OM, Goericke SL, Ilg W, Timmann D (2014): An intact action‐perception coupling depends on the integrity of the cerebellum. J Neurosci 34:6707–6716. [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Christopoulos GI, Tobler PN, Bossaerts P, Dolan RJ, Schultz W (2009): Neural correlates of value, risk, and risk aversion contributing to decision making under risk. J Neurosci 29:12574–12583. [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Clarke TJ, Bradshaw MF, Field DT, Hampson SE, Rose D (2005): The perception of emotion from body movement in point‐light displays of interpersonal dialogue. Perception 34:1171–1180. [DOI] [PubMed] [Google Scholar]
  21. de Gelder B (2006): Towards the neurobiology of emotional body language. Nat Rev Neurosci 7:242–249. [DOI] [PubMed] [Google Scholar]
  22. de Gelder B, Hortensius R, Tamietto M (2012): Attention and awareness each influence amygdala activity for dynamic bodily expressions—A short review. Front Integr Neurosci 6:54. [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. de Gelder B, Van den Stock J, Meeren HKM, Sinke CBA, Kret ME, Tamietto M (2010): Standing up for the body. Recent progress in uncovering the networks involved in the perception of bodies and bodily expressions. Neurosci Biobehav R 34:513–527. [DOI] [PubMed] [Google Scholar]
  24. De Lange FP, Spronk M, Willems RM, Toni I, Bekkering H (2008): Complementary systems for understanding action intentions. Curr Biol 18:454–457. [DOI] [PubMed] [Google Scholar]
  25. Dolcos S, Sung K, Argo JJ, Flor‐Henry S, Dolcos F (2012): The power of a handshake: Neural correlates of evaluative judgments in observed social interactions. J Cognitive Neurosci 24:2292–2305. [DOI] [PubMed] [Google Scholar]
  26. Downing PE, Jiang YH, Shuman M, Kanwisher N (2001): A cortical area selective for visual processing of the human body. Science 293:2470–2473. [DOI] [PubMed] [Google Scholar]
  27. Downing PE, Peelen MV, Wiggett AJ, Tew BD (2006): The role of the extrastriate body area in action perception. Soc Neurosci 1:52–62. [DOI] [PubMed] [Google Scholar]
  28. Ennis C, McDonnell R, O'Sullivan C (2010): Seeing is believing: Body motion dominates in multisensory conversations. ACM Trans Graph 29. [Google Scholar]
  29. Epstein R, Kanwisher N (1998): A cortical representation of the local visual environment. Nature 392:598–601. [DOI] [PubMed] [Google Scholar]
  30. Etkin A, Egner T, Kalisch R (2011): Emotional processing in anterior cingulate and medial prefrontal cortex. Trends Cogn Sci 15:85–93. [DOI] [PMC free article] [PubMed] [Google Scholar]
  31. Frijda NH (2010): Impulsive action and motivation. Biol Psychol 84:570–579. [DOI] [PubMed] [Google Scholar]
  32. Gallivan JP, Chapman CS, McLean DA, Flanagan JR, Culham JC (2013): Activity patterns in the category‐selective occipitotemporal cortex predict upcoming motor actions. Eur J Neurosci 38:2408–2424. [DOI] [PubMed] [Google Scholar]
  33. Gentil AF, Eskandar EN, Marci CD, Evans KC, Dougherty DD (2009): Physiological responses to brain stimulation during limbic surgery: Further evidence of anterior cingulate modulation of autonomic arousal. Biol Psychiatry 66:695–701. [DOI] [PubMed] [Google Scholar]
  34. Gobbini MI, Koralek AC, Bryan RE, Montgomery KJ, Haxby JV (2007): Two takes on the social brain: A comparison of theory of mind tasks. J Cognitive Neurosci 19:1803–1814. [DOI] [PubMed] [Google Scholar]
  35. Goebel R, Esposito F, Formisano E (2006): Analysis of Functional Image Analysis Contest (FIAC) data with BrainVoyager QX: From single‐subject to cortically aligned group general linear model analysis and self‐organizing group independent component analysis. Hum Brain Mapp 27:392–401. [DOI] [PMC free article] [PubMed] [Google Scholar]
  36. Grezes J, Pichon S, de Gelder B (2007): Perceiving fear in dynamic body expressions. Neuroimage 35:959–967. [DOI] [PubMed] [Google Scholar]
  37. Grosbras MH, Beaton S, Eickhoff SB (2012): Brain regions involved in human movement perception: A quantitative voxel‐based meta‐analysis. Hum Brain Mapp 33:431–454. [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. Grossman E, Donnelly M, Price R, Pickens D, Morgan V, Neighbor G, Blake R (2000): Brain areas involved in perception of biological motion. J Cognitive Neurosci 12:711–720. [DOI] [PubMed] [Google Scholar]
  39. Grupe DW, Nitschke JB (2013): Uncertainty and anticipation in anxiety: An integrated neurobiological and psychological perspective. Nat Rev Neurosci 14:488–501. [DOI] [PMC free article] [PubMed] [Google Scholar]
  40. Hadjikhani N, Hoge R, Snyder J, de Gelder B (2008): Pointing with the eyes: The role of gaze in communicating danger. Brain Cogn 68:1–8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Helbing D, Farkas I, Vicsek T (2000): Simulating dynamical features of escape panic. Nature 407:487–490. [DOI] [PubMed] [Google Scholar]
  42. Herrington J, Nymberg C, Faja S, Price E, Schultz R (2012): The responsiveness of biological motion processing areas to selective attention towards goals. Neuroimage 63:581–590. [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. Heydrich L, Blanke O (2013): Distinct illusory own‐body perceptions caused by damage to posterior insula and extrastriate cortex. Brain 136:790–803. [DOI] [PubMed] [Google Scholar]
  44. Hirai M, Kakigi R (2009): Differential orientation effect in the neural response to interacting biological motion of two agents. BMC Neurosci 10:39. [DOI] [PMC free article] [PubMed] [Google Scholar]
  45. Hortensius R, de Gelder B (2014): The neural basis of the bystander effect–the influence of group size on neural activity when witnessing an emergency. Neuroimage 93 Pt 1:53–8. [DOI] [PubMed] [Google Scholar]
  46. Iacoboni M (2009): Neurobiology of imitation. Curr Opin Neurobiol 19:661–665. [DOI] [PubMed] [Google Scholar]
  47. Iacoboni M, Lieberman MD, Knowlton BJ, Molnar‐Szakacs I, Moritz M, Throop CJ, Fiske AP (2004): Watching social interactions produces dorsomedial prefrontal and medial parietal BOLD fMRI signal increases compared to a resting baseline. Neuroimage 21:1167–1173. [DOI] [PubMed] [Google Scholar]
  48. Jastorff J, Orban GA (2009): Human functional magnetic resonance imaging reveals separation and integration of shape and motion cues in biological motion processing. J Neurosci 29:7315–7329. [DOI] [PMC free article] [PubMed] [Google Scholar]
  49. Keysers C, Kaas JH, Gazzola V (2010): Somatosensation in social perception. Nat Rev Neurosci 11:417–428. [DOI] [PubMed] [Google Scholar]
  50. Kujala MV, Carlson S, Hari R (2012): Engagement of amygdala in third‐person view of face‐to‐face interaction. Hum Brain Mapp 33:1753–1762. [DOI] [PMC free article] [PubMed] [Google Scholar]
  51. Kurth F, Zilles K, Fox PT, Laird AR, Eickhoff SB (2010): A link between the systems: functional differentiation and integration within the human insula revealed by meta‐analysis. Brain Struct Funct 214:519–534. [DOI] [PMC free article] [PubMed] [Google Scholar]
  52. Lane RD, Chua PML, Dolan RJ (1999): Common effects of emotional valence, arousal and attention on neural activation during visual processing of pictures. Neuropsychologia 37:989–997. [DOI] [PubMed] [Google Scholar]
  53. Leslie KR, Johnson‐Frey SH, Grafton ST (2004): Functional imaging of face and hand imitation: Towards a motor theory of empathy. Neuroimage 21:601–607. [DOI] [PubMed] [Google Scholar]
  54. Lestou V, Pollick FE, Kourtzi Z (2008): Neural substrates for action understanding at different description levels in the human brain. J Cognitive Neurosci 20:324–341. [DOI] [PubMed] [Google Scholar]
  55. Manera V, Becchio C, Schouten B, Bara BG, Verfaillie K (2011): Communicative interactions improve visual detection of biological motion. PLoS One 6:e14594. [DOI] [PMC free article] [PubMed] [Google Scholar]
  56. Martin A, Weisberg J (2003): Neural foundations for understanding social and mechanical concepts. Cogn Neuropsychol 20:575–587. [DOI] [PMC free article] [PubMed] [Google Scholar]
  57. McAleer P, Pollick FE, Love SA, Crabbe F, Zacks JM (2014): The role of kinematics in cortical regions for continuous human motion perception. CABN 14:307–318. [DOI] [PMC free article] [PubMed] [Google Scholar]
  58. McDonnell R, Ennis C, Dobbyn S, O'Sullivan C (2009): Talking bodies: Sensitivity to desynchronization of conversations. ACM Trans Appl Percept 6:22. [Google Scholar]
  59. McHugh JE, McDonnell R, O'Sullivan C, Newell FN (2010): Perceiving emotion in crowds: The role of dynamic body postures on the perception of emotion in crowded scenes. Exp Brain Res 204:361–372. [DOI] [PubMed] [Google Scholar]
  60. Molnar‐Szakacs I, Iacoboni M, Koski L, Mazziotta JC (2005): Functional segregation within pars opercularis of the inferior frontal gyrus: Evidence from fMRI studies of imitation and action observation. Cereb Cortex 15:986–994. [DOI] [PubMed] [Google Scholar]
  61. Neri P, Luu JY, Levi DM (2006): Meaningful interactions can enhance visual discrimination of human agents. Nat Neurosci 9:1186–1192. [DOI] [PubMed] [Google Scholar]
  62. Nummenmaa L, Glerean E, Viinikainen M, Jaaskelainen IP, Hari R, Sams M (2012): Emotions promote social interaction by synchronizing brain activity across individuals. Proc Natl Acad Sci USA 109:9599–9604. [DOI] [PMC free article] [PubMed] [Google Scholar]
  63. Nummenmaa L, Hirvonen J, Parkkola R, Hietanen JK (2008): Is emotional contagion special? An fMRI study on neural systems for affective and cognitive empathy. Neuroimage 43:571–580. [DOI] [PubMed] [Google Scholar]
  64. Oberman LM, Pineda JA, Ramachandran VS (2007): The human mirror neuron system: A link between action observation and social skills. Soc Cogn Affect Neur 2:62–66. [DOI] [PMC free article] [PubMed] [Google Scholar]
  65. Ongur D, Price JL (2000): The organization of networks within the orbital and medial prefrontal cortex of rats, monkeys and humans. Cereb Cortex 10:206–219. [DOI] [PubMed] [Google Scholar]
  66. Pavlova MA (2012): Biological motion processing as a hallmark of social cognition. Cereb Cortex 22:981–995. [DOI] [PubMed] [Google Scholar]
  67. Peelen MV, Atkinson AP, Andersson F, Vuilleumier P (2007): Emotional modulation of body‐selective visual areas. Soc Cogn Affect Neur 2:274–283. [DOI] [PMC free article] [PubMed] [Google Scholar]
  68. Pelphrey KA, Viola RJ, McCarthy G (2004): When strangers pass—Processing of mutual and averted social gaze in the superior temporal sulcus. Psychol Sci 15:598–603. [DOI] [PubMed] [Google Scholar]
  69. Pereira MG, de Oliveira L, Erthal FS, Joffily M, Mocaiber IF, Volchan E, Pessoa L (2010): Emotion affects action: Midcingulate cortex as a pivotal node of interaction between negative emotion and motor signals. Cogn Affect Behav Neurosci 10:94–106. [DOI] [PMC free article] [PubMed] [Google Scholar]
  70. Pfeiffer UJ, Timmermans B, Vogeley K, Frith CD, Schilbach L (2013): Towards a neuroscience of social interaction. Front Hum Neurosci 7:22. [DOI] [PMC free article] [PubMed] [Google Scholar]
  71. Pichon S, de Gelder B, Grezes J (2008): Emotional modulation of visual and motor areas by dynamic body expressions of anger. Soc Neurosci 3:199–212. [DOI] [PubMed] [Google Scholar]
  72. Pichon S, de Gelder B, Grezes J (2012): Threat prompts defensive brain responses independently of attentional control. Cereb Cortex 22:274–285. [DOI] [PubMed] [Google Scholar]
  73. Pierno AC, Becchio C, Turella L, Tubaldi F, Castiello U (2008): Observing social interactions: The effect of gaze. Soc Neurosci 3:51–59. [DOI] [PubMed] [Google Scholar]
  74. Roether CL, Omlor L, Christensen A, Giese MA (2009): Critical features for the perception of emotion from gait. J Vis 9:15. [DOI] [PubMed] [Google Scholar]
  75. Sartori L, Cavallo A, Bucchioni G, Castiello U (2011): Corticospinal excitability is specifically modulated by the social dimension of observed actions. Exp Brain Res 211:557–568. [DOI] [PubMed] [Google Scholar]
  76. Schilbach L, Timmermans B, Reddy V, Costall A, Bente G, Schlicht T, Vogeley K (2013): Toward a second‐person neuroscience. Behav Brain Sci 36:393–414. [DOI] [PubMed] [Google Scholar]
  77. Schulte‐Ruther M, Markowitsch HJ, Fink GR, Piefke M (2007): Mirror neuron and theory of mind mechanisms involved in face‐to‐face interactions: A functional magnetic resonance imaging approach to empathy. J Cognitive Neurosci 19:1354–1372. [DOI] [PubMed] [Google Scholar]
  78. Schultz J, Friston KJ, O'Doherty J, Wolpert DM, Frith CD (2005): Activation in posterior superior temporal sulcus parallels parameter inducing the percept of animacy. Neuron 45:625–635. [DOI] [PubMed] [Google Scholar]
  79. Simmons A, Stein MB, Matthews SC, Feinstein JS, Paulus MP (2006): Affective ambiguity for a group recruits ventromedial prefrontal cortex. Neuroimage 29:655–661. [DOI] [PubMed] [Google Scholar]
  80. Sinke CBA, Sorger B, Goebel R, de Gelder B (2010): Tease or threat? Judging social interactions from bodily expressions. Neuroimage 49:1717–1727. [DOI] [PubMed] [Google Scholar]
  81. Sinke CBA, Van den Stock J, Goebel R, de Gelder B (2012): The constructive nature of affective vision: seeing fearful scenes activates extrastriate body area. PLoS One 7:e38118. [DOI] [PMC free article] [PubMed] [Google Scholar]
  82. Sung K, Dolcos S, Flor‐Henry S, Zhou C, Denkova E, Masuda T, Argo J, Dolcos F (2011): Brain imaging investigation of the neural correlates of observing virtual social interactions. J Vis Exp 53:e2379. [DOI] [PMC free article] [PubMed] [Google Scholar]
  83. Talairach J, Tournoux P (1988): Co‐planar stereotaxic atlas of the human brain: 3‐dimensional proportional system: An approach to cerebral imaging. New York: Thieme Medical. [Google Scholar]
  84. Tamietto M, de Gelder B (2010): Neural bases of the non‐conscious perception of emotional signals. Nat Rev Neurosci 11:697–709. [DOI] [PubMed] [Google Scholar]
  85. Tamura M, Moriguchi Y, Higuchi S, Hida A, Enomoto M, Umezawa J, Mishima K (2013): Activity in the action observation network enhances emotion regulation during observation of risk‐taking: An fMRI study. Neurol Res 35:22–28. [DOI] [PubMed] [Google Scholar]
  86. Tavares P, Lawrence AD, Barnard PJ (2008): Paying attention to social meaning: An fMRI study. Cereb Cortex 18:1876–1885. [DOI] [PubMed] [Google Scholar]
  87. Watson JDG, Myers R, Frackowiak RSJ, Hajnal JV, Woods RP, Mazziotta JC, Shipp S, Zeki S (1993): Area‐V5 of the human brain—Evidence from a combined study using positron emission tomography and magnetic‐resonance‐imaging. Cereb Cortex 3:79–94. [DOI] [PubMed] [Google Scholar]
  88. Wolpert DM, Doya K, Kawato M (2003): A unifying computational framework for motor control and social interaction. Philos Trans R Soc B 358:593–602. [DOI] [PMC free article] [PubMed] [Google Scholar]
  89. Zacks JM, Swallow KM, Vettel JM, McAvoy MP (2006): Visual motion and the neural correlates of event perception. Brain Res 1076:150–162. [DOI] [PubMed] [Google Scholar]
  90. Zhang S, Li CSR (2012): Functional connectivity mapping of the human precuneus by resting state fMRI. Neuroimage 59:3548–3562. [DOI] [PMC free article] [PubMed] [Google Scholar]
  91. Zurcher NR, Rogier O, Boshyan J, Hippolyte L, Russo B, Gillberg N, Helles A, Ruest T, Lemonnier E, Gillberg C, et al. (2013): Perception of social cues of danger in autism spectrum disorders. PLoS One 8:e81206. [DOI] [PMC free article] [PubMed] [Google Scholar]
