Human Brain Mapping. 2013 Jun 29;35(4):1362–1378. doi: 10.1002/hbm.22259

Perceiving nonverbal behavior: Neural correlates of processing movement fluency and contingency in dyadic interactions

Alexandra L Georgescu 1, Bojana Kuzmanovic 1,2, Natacha S Santos 1, Ralf Tepest 1, Gary Bente 3, Marc Tittgemeyer 4, Kai Vogeley 1,5
PMCID: PMC6869512  PMID: 23813661

Abstract

Despite the fact that nonverbal dyadic social interactions are abundant in the environment, the neural mechanisms underlying their processing are not yet fully understood. Research in the field of social neuroscience has suggested that two neural networks appear to be involved in social understanding: (1) the action observation network (AON) and (2) the social neural network (SNN). The aim of this study was to determine the differential contributions of the AON and the SNN to the processing of nonverbal behavior as observed in dyadic social interactions. To this end, we used short computer animation sequences displaying dyadic social interactions between two virtual characters and systematically manipulated two key features of movement activity, which are known to influence the perception of meaning in nonverbal stimuli: (1) movement fluency and (2) contingency of movement patterns. A group of 21 male participants rated the “naturalness” of the observed scenes on a four‐point scale while undergoing fMRI. Behavioral results showed that both fluency and contingency significantly influenced the “naturalness” experience of the presented animations. Neurally, the AON was preferentially engaged when processing contingent movement patterns, but did not discriminate between different degrees of movement fluency. In contrast, regions of the SNN were engaged more strongly when observing dyads with disturbed movement fluency. In conclusion, while the AON is involved in the general processing of contingent social actions, irrespective of their kinematic properties, the SNN is preferentially recruited when atypical kinematic properties prompt inferences about the agents' intentions. Hum Brain Mapp 35:1362–1378, 2014. © 2013 Wiley Periodicals, Inc.

Keywords: nonverbal behavior, dyadic social interaction, action observation network, social neural network, fMRI

INTRODUCTION

It is widely accepted that nonverbal behavior constitutes a central component of human communication [Burgoon, 1993]: When watching interactions between other people, humans evaluate the social communicative intentions of others by relying heavily on nonverbal cues. However, meaningful information is not only conveyed by specific gestures, facial expressions, or body postures, but also by the kinematic properties of perceived movement (i.e., spatiotemporal dynamics). Such properties can describe both individual characteristics like the quality of motion (e.g., movement fluency) and dyadic characteristics like the interactive dynamics between objects (e.g., movement contingency) [Blakemore et al., 2003]. Fluency is an important kinematic characteristic of biological motion [Flash and Hogan, 1985; Lacquaniti et al., 1983]. Research has found that the visual system is biased toward movements that follow a smooth velocity profile [Bidet‐Ildei et al., 2006; Hirai and Hiraki, 2007; Viviani and Stucchi, 1992] and that this sensitivity is innate [Johansson, 1973]. Moreover, such articulated movements are usually perceived as intentional and animate [Morewedge et al., 2007; Pyles et al., 2007]. Apart from the physical properties of biological motion, the contingencies of movement patterns also facilitate the perception of meaning in a visual stimulus: Graphical displays of simple moving geometrical figures were interpreted as social encounters due to their interactive dynamics [Castelli et al., 2000; Gobbini et al., 2007; Santos et al., 2008, 2010; Schultz et al., 2004, 2005]. In the context of research on social interaction, the term “social contingency” has been used to describe an above‐chance probabilistic mutual relationship between the actions of two interactants [Moran et al., 1992]. Thus, we refer to contingency as the noncoincidental bidirectional coordination of movement patterns, in both the temporal and the spatial domain, between two interacting agents, which results in meaningful patterns of mutual social coordination.

Although human nonverbal social interactions contain complex information with respect to both movement fluency and contingency of movement patterns, brain imaging studies investigating the neural mechanisms of the perception of such social stimuli are still rare, as nonverbal behavior is hard to capture and very difficult to control experimentally [Bente et al., 2001a,b; Choi et al., 2005; Krumhuber and Kappas, 2005]. Indeed, up to now, most neuroimaging studies investigating the perception of human nonverbal interactions have used static stimuli (either photographs or comics) [Canessa et al., 2012; Kujala et al., 2011; Pierno et al., 2008; Walter et al., 2004]. To our knowledge, only four neuroimaging studies used dynamic stimuli of nonverbally interacting dyads [point light displays, Centelles et al., 2011; Hirai and Kakigi, 2009; and videos, Iacoboni et al., 2004; Sinke et al., 2010]. However, none of these studies addressed the role of the kinematics or of the contingency factor per se for the perception of communicative nonverbal interactions, despite the fact that the weight of each of these two factors for the perception of social meaning is still unclear. Thus, the investigation of nonverbal communicative interactions may help clarify the role of these two factors and of the two main brain networks, the social neural network (SNN) and the action observation network (AON), in social perception.

The attribution of meaning to perceived movement has been associated with activation in the so‐called AON [Caspers et al., 2010; Decety and Grèzes, 1999; Grèzes et al., 2001; Marsh et al., 2010; Rizzolatti et al., 1996; Saygin, 2007]. This network is thought to comprise the bilateral posterior superior temporal sulcus (pSTS) and the inferior parietal lobe (IPL). It also includes a premotor node, which encompasses the inferior frontal gyrus (IFG, pars opercularis), the adjacent ventral and dorsal premotor cortices (PMv, PMd), and the supplementary motor area (SMA). Interestingly, it has been suggested that the AON might be tuned specifically to biological motion and that it would respond to a lesser extent to nonbiological or robotic movements [e.g., Casile et al., 2010; Dayan et al., 2007; Tai et al., 2004]. However, research on this issue is still inconclusive [for a recent review, see Press et al., 2011].

Studies investigating the perception of the contingent information between interacting agents have mainly found activations that seem to form another neural network, the so‐called SNN [Castelli et al., 2000, 2002; Martin and Weisberg, 2003; Ohnishi et al., 2004; Santos et al., 2010; Tavares et al., 2008]. The SNN is thought to include regions along the cortical midline and in the temporal lobes, namely the medial prefrontal cortex (mPFC), the posterior cingulate cortex (PCC), the temporoparietal junction (TPJ) and adjacent pSTS as well as the temporal poles [Adolphs, 2009; Frith, 2007]. It has been proposed that the AON is required for automatic detection of intentionality from motion via kinematic analyses, whereas the SNN is required for the evaluation of social stimuli, including inferential processes [Brass et al., 2007; de Lange et al., 2008; Keysers and Gazzola, 2007; Santos et al., 2010; Spunt et al., 2011; Thioux et al., 2008; Uddin et al., 2007; Van Overwalle and Baetens, 2009].

The major objective of this fMRI study was to clarify (i) the relevance of movement fluency and movement contingency for the perception of nonverbal communicative interactions and (ii) the contribution of these two factors to the recruitment of the AON and the SNN. To our knowledge, this is the first study that explores the involvement of the two movement‐related factors and of the two neural networks in a social context. For this purpose, the movement fluency and the contingency information present in short videos of nonverbal dyadic interactions were systematically manipulated in a two‐by‐two factorial design. During fMRI, participants watched short videos of communicative nonverbal interactions and were asked to rate how natural they perceived each one to be on a four‐point scale. Although the AON and SNN appear to serve complementary functional roles [Brass et al., 2007; Canessa et al., 2012; for a meta‐analysis see Van Overwalle and Baetens, 2009], a recent study has found that both systems might be involved in the processing of whole‐body nonverbal behavior during social interactions [Centelles et al., 2011]. Considering the conclusions of the latter study, we hypothesized that both the AON and the SNN would be involved in the processing of contingent information in the context of dyadic social interactions. With respect to the kinematics manipulation, we hypothesized, based on previous research [e.g., Engel et al., 2008a, 2008b; Gazzola et al., 2007; Oberman et al., 2007a; Stanley et al., 2007, 2010], that the AON would not respond selectively to biological motion trajectories.

METHODS

Subjects

A group of twenty‐eight right‐handed male participants with normal or corrected‐to‐normal vision and no past medical history of neurologic or psychiatric illness was recruited. Handedness was assessed by the Edinburgh Handedness Inventory [Oldfield, 1971]. All participants were naïve with respect to the purpose of the experiment. Written informed consent was obtained from all participants. They received a monetary compensation of 10 euro per hour for their participation. The study was conducted with the approval of the local ethics committee of the Medical Faculty of the University Hospital of Cologne, Germany. Five participants were excluded from further data analyses due to excessive head movement, which caused significant signal spiking along with uncorrectable motion artifacts. Two participants were excluded due to noncompliance with the instruction. The 21 remaining participants were between 23 and 33 years of age (mean age = 26.86 ± 2.56 years).

Stimuli

The stimulus material of this study was based on that of a previous paradigm for the investigation of nonverbal behavior [Bente et al., 2001b, 2008, 2010]. It was developed by converting 3‐min long videos depicting dyadic role‐play interactions between two seated persons into silent animations. Two virtual 3D mannequin models were considered appropriate to standardize the appearance of all actors of the original videos. By keeping the appearance information constant over all videos, we avoided the confounding of appearance and motion. In addition, a fully rendered (polygonal) character was preferred to point light displays, since we assumed that it would enable participants to better discriminate subtle motion variations [Dittrich, 1993; Hodgins et al., 1998]. Moreover, it enabled us to avoid the uncanny valley effect, a phenomenon by which artificial characters that are too realistic appear eerie and strange [Mori, 1970]. Movement behavior was transcribed from the original video sequences onto the virtual characters using the key framing technique and specially developed computer‐assisted coding software, as described by Bente et al. [2001b, 2008]. For this purpose, a special movement transcription plug‐in for the commercially available character animation software Autodesk MotionBuilder 2011 (Autodesk, San Rafael, CA) was developed. Finally, animations were rendered from these protocols by interpolating the key‐frame data to a frame rate of 30 frames per second, using a cubic spline algorithm (Bézier curve) to guarantee the smooth flow of movements. The Bézier function has been found to have general utility for human motion simulation [Faraway et al., 2007] and is generally used in character animation to approximate a smooth minimum‐jerk trajectory [Pocock and Rosebush, 2002], which is characteristic of human movement [Flash and Hogan, 1985]. Animations were further optimized for the fMRI environment and validated in a series of prestudies. Finally, 10 ecologically valid social interaction animations, lasting 10 s each, were chosen as stimulus material for this fMRI study (for examples see Supporting Information).
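To make the rendering step concrete, the following is a minimal Python sketch (not the actual MotionBuilder pipeline) that resamples hypothetical key‐frame data to 30 frames per second with a cubic spline, producing the smooth velocity profile described above; the key‐frame times and values are illustrative assumptions:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical key-frame data for a single joint angle (degrees); the real
# stimuli interpolated full-body pose data in Autodesk MotionBuilder.
key_times = np.array([0.0, 0.8, 1.5, 2.6, 4.0])       # key-frame times (s)
key_angles = np.array([10.0, 35.0, 20.0, 42.0, 15.0])  # key-frame values

# Resample to 30 frames per second; a cubic spline, like the Bezier curves
# used for the stimuli, yields a smooth, approximately minimum-jerk
# trajectory with continuous velocity and acceleration.
fps = 30
t = np.arange(key_times[0], key_times[-1], 1.0 / fps)
smooth_trajectory = CubicSpline(key_times, key_angles)(t)
```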

Study Design

By systematically manipulating key features of movement patterns in a two‐by‐two design (see Fig. 1A), we wanted to characterize the contribution of two important factors in social perception to the processing of social interactions: (i) movement fluency and (ii) contingency of movement patterns. First, to manipulate fluency, an artificial version of each original video was needed. To achieve this, the smooth movement velocity of the original agents was changed by linearly interpolating between turning points. A linear interpolation produces a second‐derivative discontinuity, namely a jerk in the action at the start and end of the shot [Pocock and Rosebush, 2002]. This resulted in rigid, robot‐like movements, which did not simulate the acceleration and deceleration manifested in human actions and violated the kinematic laws of biological movement [e.g., Viviani and Flash, 1995]. Second, to manipulate the contingency information, one of the two agents of each of the original dyads was substituted by the mirrored image of the other, thus effectively eliminating the contribution of one of the two agents from the interaction. We consider the resulting perfectly mirrored movement patterns to be neither statistically probable nor interactively meaningful and hence noncontingent. Third, to provide a high‐level baseline, scrambled videos were created using a Matlab‐based algorithm (The MathWorks, Natick, MA) dividing the original videos into 2 × 4 × 8 arrays and systematically rearranging them crosswise (see Fig. 1B). In this way, it was possible to present videos with the same luminance, color, and amount of motion across all video categories.
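As an illustration of the two movement manipulations, here is a minimal Python sketch under assumed data layouts (a single hypothetical joint‐angle track for the fluency manipulation; pose arrays of shape (frames, joints, 3) with x as the left–right axis for the contingency manipulation); the actual manipulations were applied to the full animation data:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Fluency manipulation: linear interpolation between the same key frames
# (turning points) removes acceleration/deceleration, yielding rigid,
# robot-like motion with a velocity discontinuity (jerk) at each key frame.
key_times = np.array([0.0, 0.8, 1.5, 2.6, 4.0])       # hypothetical key frames
key_angles = np.array([10.0, 35.0, 20.0, 42.0, 15.0])
t = np.arange(0.0, 4.0, 1.0 / 30)                     # 30 fps timeline

smooth = CubicSpline(key_times, key_angles)(t)        # fluent version
rigid = np.interp(t, key_times, key_angles)           # constant velocity
                                                      # between turning points

# Contingency manipulation: replace agent B with agent A's mirror image.
def mirror_partner(agent_a: np.ndarray) -> np.ndarray:
    """Given agent A's poses (frames, joints, 3), return a 'partner' that is
    A's reflection across the sagittal midplane (x = left-right axis)."""
    partner = agent_a.copy()
    partner[..., 0] *= -1.0
    return partner
```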

Figure 1.

A: Sample stimuli and the 2 × 2 factorial experimental design. CS = contingent + smooth; CR = contingent + rigid; MS = mirrored + smooth; MR = mirrored + rigid. B: Example of a still frame from a scrambled video. C: Example of an experimental trial: The participants' task was to observe each video and rate the perceived naturalness of each scene on a 4‐point scale.

Experimental Procedure

An experimental trial consisted of a 10 s long stimulus presentation followed by a rating scale lasting for a maximum of 3 s. Each trial entailed two randomly jittered interstimulus intervals (ISIs): one between each stimulus presentation and the rating scale, to enable statistical isolation of the behavioral response (applied ISI durations: 1.5, 1.75, 2, 2.25, and 2.5 s; mean ISI: 2 s), and the other between single trials, to increase condition‐specific BOLD signal discriminability [Dale, 1999; Serences, 2004] (applied ISI durations: 5.4, 6.3, 7.2, 8.1, and 9 s; mean ISI: 7.2 s; see Fig. 1C). The experiment was conducted in an event‐related fashion and split into two runs, each lasting about 20 min. Each of the ten animations was presented twice: once in its original arrangement and once with the positions of the two agents swapped, to ensure that each of the characters was presented equally often on each side of the screen. Each run consisted of 10 events per condition, summing up to 50 events per run and 100 events in total. Both runs consisted of equivalent numbers of condition‐specific events, shown in randomized order. There was a 2‐min break between runs. Prior to the fMRI experiment, all participants were familiarized with the task in a standardized instruction and practice session presented on a computer screen outside the MRI environment. None of the animations used in the instruction session were used in the subsequent fMRI experiment. Participants were told that they would see presentations of 10 s long silent animations of two interacting characters and that they would be asked to answer the question “How natural did the scene appear to you?” on a four‐point scale, ranging from 1 (“very unnatural”) to 4 (“very natural”). They were instructed to base their judgments on the perceived plausibility and familiarity of a scene. They were also told that the animations were based on real interactions but that sometimes the original scenes had been computer‐manipulated to achieve variation. Additionally, subjects were instructed to focus on the fixation cross between trials and on both agents during the presented videos, and to respond as intuitively and quickly as possible after the display of the scale. To balance for lateralized motor‐related activations, participants alternately used the right or left hand across runs. The sequence of the two runs was randomized as well. Stimulus presentation and response recording were performed by the software package Presentation (version 12.2; Neurobehavioral Systems, Inc., http://www.neurobs.com/). Responses were assessed using four buttons of an MR‐compatible handheld response device (LUMItouch™, Photon Control, BC, Canada).
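For illustration, a minimal Python sketch of one run's trial timeline with the jitters described above (the actual experiment was controlled by the Presentation software; function and variable names here are illustrative):

```python
import random

STIM_DUR = 10.0                                 # video duration (s)
RATING_MAX = 3.0                                # maximum rating duration (s)
ISI_STIM_RATING = [1.5, 1.75, 2.0, 2.25, 2.5]   # jitter set, mean 2.0 s
ITI = [5.4, 6.3, 7.2, 8.1, 9.0]                 # jitter set, mean 7.2 s

def build_run(conditions):
    """conditions: randomized list of condition labels for one run."""
    t, schedule = 0.0, []
    for cond in conditions:
        schedule.append((round(t, 2), "video", cond))
        t += STIM_DUR + random.choice(ISI_STIM_RATING)
        schedule.append((round(t, 2), "rating", cond))
        t += RATING_MAX + random.choice(ITI)
    return schedule

# One run: 10 events per condition, 50 trials, in randomized order.
trials = ["CS", "CR", "MS", "MR", "scrambled"] * 10
random.shuffle(trials)
run_schedule = build_run(trials)
```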

Data Acquisition

Functional and structural magnetic resonance images were acquired on a Siemens 3T whole‐body scanner (Siemens TRIO, Medical Solutions, Erlangen, Germany), equipped with a standard head coil and a custom‐built head holder for movement reduction. For the fMRI scans, we used a T2*‐weighted gradient echo planar imaging (EPI) sequence with the following imaging parameters: TR = 2200 ms, TE = 30 ms, field of view = 200 × 200 mm2, 36 axial slices, slice thickness = 3.0 mm, in‐plane resolution = 3.1 × 3.1 mm2. Four additional images were collected at the beginning of each session and discarded prior to further image processing to allow for magnetic saturation. For the structural images, we used a high‐resolution T1‐weighted magnetization‐prepared rapid acquisition gradient echo (MPRAGE) sequence with TR = 2250 ms, TE = 3.93 ms, field of view = 256 × 256 mm2, 176 sagittal slices, slice thickness = 1.0 mm, in‐plane resolution = 1.0 × 1.0 mm2.

Behavioral Data Analyses

The effect of factors of interest on individual ratings was tested by a two‐way repeated measures analysis of variance (ANOVA) using SPSS (PASW Statistics 18) with contingency (contingent vs. mirrored) and fluency (smooth vs. rigid) as within‐subject independent variables and the naturalness ratings as a dependent variable.
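An equivalent analysis can be sketched in Python with statsmodels (the original analysis was run in SPSS; the column names below are assumptions about the data layout):

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format data: one mean naturalness rating per subject and design cell.
# Assumed columns: subject, contingency ("contingent"/"mirrored"),
# fluency ("smooth"/"rigid"), rating (1-4).
ratings = pd.read_csv("naturalness_ratings.csv")

# Two-way repeated measures ANOVA: two within-subject factors and their
# interaction, mirroring the SPSS analysis reported in the text.
result = AnovaRM(ratings, depvar="rating", subject="subject",
                 within=["contingency", "fluency"]).fit()
print(result)
```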

FMRI Data Analyses

FMRI data were spatially preprocessed and analyzed using SPM8 (The Wellcome Trust Centre for Neuroimaging) implemented in Matlab 7.1 (The MathWorks Inc., Natick, MA, USA). After the functional images were corrected for head movements using realignment and unwarping, each structural MRI was coregistered to the participant's mean realigned functional image. All images were then normalized to the Montreal Neurological Institute (MNI) reference space using the unified segmentation function in SPM8 and were resampled to a voxel size of 2 × 2 × 2 mm3. The transformation was also applied to each participant's structural image. Functional images were then spatially smoothed with an isotropic Gaussian filter (8 mm full width at half maximum) to meet the statistical requirements of further analysis and to account for macroanatomical interindividual differences across participants.
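As a sketch of the final smoothing step (the actual pipeline used SPM8), the 8 mm FWHM kernel translates into a Gaussian standard deviation via sigma = FWHM / (2·sqrt(2·ln 2)); in Python:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

FWHM_MM = 8.0    # smoothing kernel, full width at half maximum
VOXEL_MM = 2.0   # isotropic voxel size after normalization

# Convert FWHM (mm) to a Gaussian standard deviation in voxel units.
sigma_vox = (FWHM_MM / VOXEL_MM) / (2.0 * np.sqrt(2.0 * np.log(2.0)))

volume = np.random.rand(91, 109, 91)  # stand-in for one normalized EPI volume
smoothed = gaussian_filter(volume, sigma=sigma_vox)
```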

The data were analyzed using a General Linear Model as implemented in SPM8. In all single subject analyses, effects of interest were modeled separately using a boxcar reference vector convolved with the canonical hemodynamic response function and its time derivatives. Trials were classified according to five event types: (1) contingent and smooth (CS), (2) contingent and rigid (CR), (3) mirrored and smooth (MS), (4) mirrored and rigid (MR), and (5) scrambled videos. Durations for events of interest were set at 10 s, corresponding to the video duration. A 128 s temporal high‐pass filter was applied to account for subject‐specific, low‐frequency drifts. For each subject and each condition, a comparison with the implicit baseline was implemented as an individual contrast image, by weighting only the regressor corresponding to that particular condition with 1 and all other regressors with 0. The single subject contrasts were fed into the second level analyses using a flexible factorial ANOVA (factors: condition and subject), employing a random‐effects model [Penny et al., 2003]. First, the group‐level analysis evaluated which brain regions were differentially active while watching meaningful compared with scrambled videos (CS + CR + MS + MR > scrambled videos). Second, for the study of the main effect of movement fluency, comparisons were collapsed across contingencies; for the study of the main effect of contingency, comparisons were collapsed across velocity profiles. Consequently, the following contrasts were computed: (i) CS + MS > CR + MR (smooth compared with rigid motion); (ii) CR + MR > CS + MS (rigid compared with smooth motion); (iii) CS + CR > MS + MR (contingent compared with mirrored movements); (iv) MS + MR > CS + CR (mirrored compared with contingent movements); (v) (CS > MS) > (CR > MR) (interaction: contingent compared with mirrored movements for smooth compared with rigid motion); (vi) (CR > MR) > (CS > MS) (interaction: contingent compared with mirrored movements for rigid compared with smooth motion). At the group level, all effects are reported as significant at P < 0.05, corrected for multiple comparisons at the cluster level, with a voxel‐level threshold of P < 0.001, uncorrected [Friston et al., 1996].
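A minimal sketch of the model ingredients follows: a boxcar regressor convolved with a canonical double‐gamma HRF, plus the contrast vectors above expressed over regressors ordered [CS, CR, MS, MR, scrambled]. This is a simplified illustration, not the SPM8 implementation (it omits the time derivatives and SPM's exact HRF parameterization):

```python
import numpy as np
from scipy.stats import gamma

TR = 2.2  # repetition time (s)

def canonical_hrf(tr: float, duration: float = 32.0) -> np.ndarray:
    """Double-gamma HRF sampled at TR (a simplified canonical shape)."""
    t = np.arange(0.0, duration, tr)
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

def boxcar_regressor(onsets, dur, n_scans, tr):
    """Boxcar for one condition (1 during its events) convolved with the HRF."""
    times = np.arange(n_scans) * tr
    box = np.zeros(n_scans)
    for onset in onsets:
        box[(times >= onset) & (times < onset + dur)] = 1.0
    return np.convolve(box, canonical_hrf(tr))[:n_scans]

# Contrast vectors over regressors ordered [CS, CR, MS, MR, scrambled]:
contrasts = {
    "meaningful > scrambled":  [1, 1, 1, 1, -4],
    "smooth > rigid":          [1, -1, 1, -1, 0],
    "rigid > smooth":          [-1, 1, -1, 1, 0],
    "contingent > mirrored":   [1, 1, -1, -1, 0],
    "mirrored > contingent":   [-1, -1, 1, 1, 0],
    "(CS > MS) > (CR > MR)":   [1, -1, -1, 1, 0],
    "(CR > MR) > (CS > MS)":   [-1, 1, 1, -1, 0],
}
```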

Significant activations were anatomically localized using the brain atlas by Duvernoy [1999] and the SPM Anatomy Toolbox, version 1.7 [Eickhoff et al., 2005]. Group activation maps were superimposed on a mean T1 image constructed from the individual T1 images of the 21 participants. Reported coordinates refer to maximum values in a given cluster according to the MNI 1 mm isotropic brain template.

BEHAVIORAL RESULTS

Behavioral results showed that people are sensitive to contingency information and, to a lesser degree, also to movement velocity. The ANOVA revealed significant main effects of contingency (F(1,20) = 64.9, P < 0.001) and movement fluency (F(1,20) = 57.4, P < 0.001) on the dependent variable “naturalness rating,” with higher naturalness ratings for contingent (M = 2.91; SE = 0.07) than for mirrored movement patterns (M = 1.60; SE = 0.12), as well as higher naturalness ratings for videos where characters moved with smooth (M = 2.62; SE = 0.09) than with rigid velocities (M = 1.89; SE = 0.05; see Fig. 2). Furthermore, there was a significant interaction effect between contingency and movement fluency (F(1,20) = 31.2, P < 0.001). This effect reflects that contingent (compared with mirrored) videos increased naturalness ratings more in videos with smooth kinematics than in videos with rigid kinematics (see Fig. 2).

Figure 2.

The plot illustrates the effects of video type on naturalness ratings. The y‐axis indicates the mean stimulus ratings. A score of 1 refers to rating a video as “unnatural” and a score of 4 as “natural.” CS = contingent + smooth; CR = contingent + rigid; MS = mirrored + smooth; MR = mirrored + rigid.

NEURAL RESULTS

The comparison of all meaningful videos to scrambled videos revealed robust AON activity (see Table 1). Direct comparisons of the different kinds of movement contingency and fluency revealed striking differences, as described in the following and in Tables 2 and 3.

Table 1.

Regions more responsive to meaningful than scrambled videos

Region  Cluster size  P (FWE‐corr)  Side  x  y  z  T
Supramarginal gyrus 3870 0.000 R 60 −34 24 9.30
Middle temporal gyrus 0.000 R 58 −58 4 9.05
Posterior superior temporal sulcus 0.000 R 56 −44 6 8.72
Middle temporal gyrus 1936 0.000 L −52 −64 10 8.86
Superior temporal gyrus 0.000 L −58 −42 18 5.12
Inferior frontal gyrus (p. triang.) 8291 0.000 R 56 22 22 8.62
Inferior frontal gyrus (p. orbit.) 0.000 R 46 26 −6 8.60
Inferior frontal gyrus (p. operc.) 0.000 R 52 18 14 8.36
Fusiform gyrus 228 0.031 R 44 −46 −18 8.60
Fusiform gyrus 208 0.043 L −42 −48 −20 6.96
Inferior frontal gyrus (p. triang.) 1132 0.000 L −56 26 4 5.77
Inferior frontal gyrus (p. orbit.) 0.000 L −44 32 −4 5.51
Insula 0.000 L −32 22 −2 5.26
Superior medial frontal gyrus 976 0.000 R 6 28 44 5.15
Dorsal medial prefrontal cortex 0.000 R 4 36 32 4.64
Supplementary motor area 0.000 R 8 14 56 4.44
Superior parietal lobe 517 0.001 R 40 −48 62 5.11
Inferior parietal lobe 0.001 R 44 −52 46 4.13
Posterior cingulate gyrus 198 0.050 R 2 −26 30 4.68
Thalamus 436 0.002 L −8 −6 −2 3.95

Abbreviations: T: t‐values of regions active in each contrast; L: left hemisphere; R: right hemisphere; p. operc.: pars opercularis; p. orbit.: pars orbitalis; p. triang.: pars triangularis.

Table 2.

Main effects of movement contingency

Region  Cluster size  P (FWE‐corr)  Side  x  y  z  T
1. Contingent > Mirrored
Inferior frontal gyrus (p. operc.) 5791 0.000 R 56 16 6 6.35
Inferior frontal gyrus (p. triang.) R 56 28 2 6.35
Inferior frontal gyrus (p.orbit.) R 46 26 −4 6.06
Superior temporal gyrus 2878 0.000 R 58 −38 16 7.16
Superior temporal sulcus R 48 −22 −8 6.83
Posterior superior temporal sulcus R 54 −42 8 6.77
Postcentral gyrus R 46 −30 52 5.83
Inferior frontal gyrus (p. triang.) 1444 0.000 L −38 28 2 6.19
Inferior frontal gyrus (p. operc.) L −60 12 24 5.23
Midbrain 1298 0.000 8 −22 −12 5.69
Thalamus R 10 −20 4 5.07
Posterior superior temporal sulcus 1043 0.000 L −48 −50 12 6.50
Middle temporal gyrus/EBA L −50 −66 10 5.09
Postcentral gyrus 710 0.000 L −38 −32 42 5.69
Intraparietal sulcus L −64 −16 32 4.75
Inferior parietal lobule L −48 −34 42 4.28
Fusiform gyrus 231 0.029 L −40 −44 −14 6.34
Globus pallidus 217 0.037 R 20 2 0 3.24
2. Mirrored > Contingent
Lingual gyrus 1326 0.000 R 18 −68 −8 5.83
Parahippocampal gyrus R 28 −44 −6 4.88
Posterior cingulate gyrus/ Isthmus R 10 −52 10 4.54
Parahippocampal gyrus 850 0.000 L −32 −34 −14 4.78
Posterior cingulate gyrus/ Isthmus L −8 −56 12 4.45
Lingual gyrus L −16 −60 −2 4.02
Middle frontal gyrus 727 0.000 L −24 16 46 5.31
Superior frontal gyrus L −20 20 40 4.94
Cuneus 688 0.000 R 14 −82 28 6.10
Cuneus L −8 −86 18 5.36
Middle occipital gyrus 439 0.002 L −38 −76 30 5.46
Angular gyrus L −40 −62 30 3.66
Posterior cingulate gyrus 364 0.004 L −10 −40 38 4.43
R 8 −36 44 3.92

1. Regions more responsive to contingent than mirrored movement patterns. 2. Regions more responsive to mirrored than contingent movement patterns. Abbreviations: T: t‐values of regions active in each contrast; L: left hemisphere; R: right hemisphere; p. operc.: pars opercularis; p. orbit.: pars orbitalis; p. triang.: pars triangularis; EBA: extrastriate body area.

Table 3.

Main effects of movement fluency and interaction

Region  Cluster size  P (FWE‐corr)  Side  x  y  z  T
1. Rigid > Smooth
Inferior frontal gyrus (p. triang.) 818 0.000 L −54 24 24 4.85
Middle orbital gyrus L −30 46 −14 4.39
Angular gyrus 726 0.000 L −42 −56 40 4.72
Inferior parietal lobule L −50 −56 46 4.64
Superior parietal lobule L −36 −64 54 4.50
Dorsal medial prefrontal cortex 452 0.001 R 10 46 28 4.70
Dorsal medial prefrontal cortex L −6 38 32 4.59
2. Interaction: (Contingent > Mirrored) for smooth > rigid motion
Cingulate gyrus 370 0.004 L −10 −20 46 5.84
Cingulate gyrus R 8 −14 44 4.23
Supplementary motor area L −10 −8 64 3.61
Precentral gyrus 592 0.000 L −28 −20 50 5.04
Postcentral gyrus L −42 −28 60 4.35
Cingulate gyrus 342 0.006 L −8 4 40 4.80
Cingulate gyrus R 6 2 42 4.61

1. Regions more responsive to rigid compared with smooth kinematics. 2. Interaction effect: regions more responsive to contingent than to mirrored videos when the motion was smooth than when it was rigid. Abbreviations: T: t‐values of regions active in each contrast; L: left hemisphere; R: right hemisphere; p. triang.: pars triangularis.

We found that the perception of contingent compared with mirrored dyads was associated with a significant increase of neural activity in the AON, involving bilaterally the IFG (extending to the premotor cortex), the STG and pSTS (extending to the extrastriate cortices), as well as the left IPL and the left fusiform gyrus (FG). Other regions identified as differentially responsive to contingency information were the midbrain, the right thalamus, and the right pallidum (see Fig. 3A, Table 2). In contrast, the perception of mirrored compared with contingent dyads revealed activations bilaterally in the parahippocampal gyrus, the cuneus, and the PCC, as well as in the left angular gyrus and the left middle to superior frontal gyrus (see Table 2).

Figure 3.

A: Differential neural activity for observing contingent compared with mirrored movement patterns. B: Plots illustrate corresponding contrast estimates obtained for the four stimulus categories for three different local maxima: left IPL (−48, −34, 42), right pSTS (54, −42, 8), and right IFG (56, 16, 6). Error bars represent standard errors. The principally activated voxels are overlaid on the mean structural anatomic image of the 21 male participants: P < 0.05, cluster‐level corrected; CS = contingent + smooth; CR = contingent + rigid; MS = mirrored + smooth; MR = mirrored + rigid; L = left hemisphere; R = right hemisphere; IFG = inferior frontal gyrus; pSTS = posterior superior temporal sulcus; IPL = inferior parietal lobule.

The observation of videos where characters were moving with smooth compared with rigid kinematics did not reveal any differential neural response. The opposite contrast, investigating the perception of rigid compared with smooth movements revealed activations in the left IFG (pars triangularis), the left angular gyrus, corresponding to the left TPJ, as well as, bilaterally, the dorsomedial prefrontal cortex (dmPFC; see Fig. 4A, Table 3).

Figure 4.

A: Regions of the SNN associated with the observation of videos with rigid compared with smooth movement velocity. B: Plots illustrate corresponding contrast estimates obtained for the four stimulus categories for two different local maxima: right dmPFC (10, 46, 28) and left TPJ (−42, −56, 40). Error bars represent standard errors. C: Interaction effect evaluating brain regions more responsive to contingent than to noncontingent videos when the motion was biological than when it was nonbiological. The principally activated voxels are overlaid on the mean structural anatomic image of the 21 male participants: P < 0.05, cluster‐level corrected; CS = contingent + smooth; CR = contingent + rigid; MS = mirrored + smooth; MR = mirrored + rigid; L = left hemisphere; R = right hemisphere; TPJ = temporo‐parietal junction; dmPFC = dorsomedial prefrontal cortex.

The interaction evaluating brain regions in which the response difference between contingent and mirrored movement patterns was greater for smooth than for rigid motion revealed activations in the middle cingulate cortex, bilaterally, as well as in a cluster encompassing the left precentral and postcentral gyri (see Fig. 4C, Table 3). The second interaction, which evaluated brain regions more responsive to the contrast between contingent and mirrored videos when the motion was rigid than when it was smooth, did not reveal any differential neural response.

DISCUSSION

This study focused on the influence of the two factors, movement fluency and movement contingency, on the perception of naturalness in nonverbal communicative interactions and on the neural activation patterns related to their processing. Behavioral results revealed that naturalness ratings were higher for both contingent and fluent movements. The neural results can be summarized as follows: First, the AON was engaged more strongly by the processing of movement contingency (contingent compared with mirrored movements). Second, the AON did not discriminate between different types of kinematic information (fluent compared with rigid movements or vice versa). Third, regions of the SNN were preferentially engaged by nonbiological kinematics (rigid vs. fluent motion). We argue that, while the AON is involved in the general processing of contingent social actions, irrespective of their kinematic properties, the SNN is preferentially recruited when atypical kinematic properties prompt inferences about the agents' intentions.

Behavioral Findings

As expected, videos with contingent movement patterns compared with those displaying mirrored movements were rated as more natural, showing that the relational information in a dyadic interaction influences perceptual judgments. This complements research showing that spatiotemporal factors are associated with increases in the perception of mindfulness and animacy [i.e., aliveness, Dittrich and Lea, 1994; Santos et al., 2008, 2010; Scholl and Tremoulet, 2000]. Since human nonverbal social interactions are characterized by a high degree of automatic interpersonal coordination [Bernieri and Rosenthal, 1991; Burgoon et al., 1993; Cappella, 1998], it is likely that human observers, based on an innate sensitivity, implicitly learn to extract information about social contingencies using spacing and timing cues [Gergely and Watson, 1999]. Indeed, research has robustly demonstrated that observers make use of such spatiotemporal dynamics to judge social interactions they observe [Balas et al., 2012; Becchio et al., 2012; Berry et al., 1992; Blythe et al., 1996; Clarke et al., 2005; Heider and Simmel, 1944; Manera et al., 2011; McAleer and Pollick, 2008; Michotte, 1946; Rimé et al., 1985; Santos et al., 2008, 2010; Sartori et al., 2011; Scholl and Tremoulet, 2000].

In addition, we report higher naturalness ratings for videos in which characters moved with a smooth compared with a rigid velocity. This is in line both with the view that people have an innate sensitivity for the kinematics of biological motion [Johansson, 1973] and with findings showing that smooth movements are more likely to be perceived as intentional and animate [Morewedge et al., 2007; Pyles et al., 2007].

Finally, there was a significant interaction effect between contingency and movement fluency, indicating that contingent (compared with mirrored) videos increased naturalness ratings more when movements were performed with smooth rather than rigid kinematics. This may be due to a ceiling effect, considering that the difference between contingent and mirrored videos was reported as more easily detectable than that between fluent and rigid movements. Nevertheless, the highest mean naturalness rating was received by videos with both contingent and fluent movements. Given that human social interactions are characterized by both contingent social dynamics and fluent movement kinematics, stimuli complying with these requirements would also be most plausible and hence most likely to be perceived as natural.

Effects of Contingency

No SNN engagement during the processing of contingent interactions

In contrast to our initial hypothesis, our results revealed no differential engagement of the SNN for the contingent compared with the mirrored movement videos. This might be a surprising finding, particularly since there is robust evidence for this network's involvement in the observation of social interactions presented in different formats [Castelli et al., 2000; Centelles et al., 2011; Iacoboni et al., 2004; Kujala et al., 2011; Pierno et al., 2008; Santos et al., 2010; Schultz et al., 2004, 2005; Tavares et al., 2008; Walter et al., 2004]. However, since the processing of observed actions is sensitive to different cognitive strategies, which may be triggered by task demands and/or contextual information [de Lange et al., 2008; Spunt et al., 2011; Tavares et al., 2008; Wheatley et al., 2007; Zaki et al., 2010], the stimuli and design of this study might explain this apparent contradiction. The engagement of the SNN has been robustly attested when participants were explicitly or implicitly prompted to deliberate on the intentions of observed agents [Brass et al., 2007; Buccino et al., 2007; de Lange et al., 2008; Liepelt et al., 2008; Marsh and Hamilton, 2011; Tavares et al., 2008; Wheatley et al., 2007]. In addition, the SNN is modulated by increasing degrees of inferential computation [Spunt et al., 2011]. Thus, a possible interpretation for the lack of SNN engagement in our study is that the presented contingent communicative interactions were plausible and “typical” from an everyday perspective and hence required no additional inferential computation. Moreover, we could argue that the actions of one agent contextualized the other's actions, so that an increased effort in the computation of these types of social encounters was not required. An alternative interpretation is related to the task of this study. Participants were asked to rate the naturalness of the interaction, hence targeting a global impression of the scenes by paying attention to the observed movement patterns. They were not asked to judge the social content of such interactions or to infer the mental states or feelings of the agents. Since top‐down effects have been shown to influence AON activity [Engel et al., 2008a, 2008b; Stanley et al., 2007], we assume that our task rather stimulated intuitive evaluation processes. Such evaluations, compared with deliberate and reflective ones, do not rely on the integration of a wide range of social information and decision‐making processes and would rather trigger a prereflective process via the AON.

The AON is engaged by contingent movement patterns

Confirming the initial hypothesis, we found stronger engagement of the AON during the processing of contingency of movement patterns: The comparison of contingent to mirrored interaction sequences revealed clusters of differential activation bilaterally in the STG/pSTS, extending posterior to the occipitotemporal region, the IPL and the IFG, as well as the left FG.

The strongest increase in activity was shown in the right STG/pSTS. This region is typically associated with the perception of biological motion [for reviews see Allison et al., 2000; Pavlova et al., 2012], but it is also activated by perceiving movements of nonbiological agents when these exhibit intentionality, as reflected by interactive dynamics [Castelli et al., 2000; Gobbini et al., 2007; Santos et al., 2008, 2010; Schultz et al., 2004, 2005]. The finding of this study supports previous research attesting to this region's involvement in the observation of human nonverbal interactions [Centelles et al., 2011; Iacoboni et al., 2004; Kujala et al., 2011; Walter et al., 2004]. This result corroborates the idea that the pSTS plays a key role in social interaction [Noordzij et al., 2009; Redcay et al., 2010] by being specifically involved in processing the social significance of motion cues and their contribution to social communication [Zilbovicius et al., 2006].

The multimodal information in the STS is further processed by the IPL and by the IFG (pars opercularis) [Rizzolatti and Craighero, 2004]. Together, these two regions of the AON are considered to facilitate the understanding of intention from action [Hamilton and Grafton, 2007]. Paralleling the sensitivity of the IPL and IFG to social contingencies reported in this work, a top‐down modulation of these regions by social interaction has been attested by previous research [Centelles et al., 2011; Gobbini et al., 2007; Sinke et al., 2010]. Oberman et al. [2007a,b], for example, used EEG to demonstrate modulations of the activity in the AON by the degree of social interaction present in 80 s long videos of a ball‐tossing game.

The FG and occipitotemporal regions have been implicated in processing configurations of bodies in motion [Grossman and Blake, 2002; Michels et al., 2005; Peelen et al., 2006]. The hierarchical neural model of biological motion perception proposed by Giese and Poggio [2003] suggests that movement patterns may be encoded as sequences of body postures in the ventral processing stream. Thus, the network of brain areas involved in processing human movement may not be confined to the so‐called “motion” dorsal processing stream but may extend to the so‐called “form” ventral processing stream. We would interpret the activation of the FG and the occipitotemporal region in this study as reflecting additional body‐ and posture‐processing that is needed for representing two moving bodies in relation to each other, as opposed to one body and its identical reflection.

It has been proposed that the stronger recruitment of the AON for processing communicative interactions is likely due to the fact that processing the movements of a dyad requires more complex action representations than processing those of agents performing individual actions [Centelles et al., 2011]. In this study, we extend previous findings by showing that such complexity of action representation is not merely determined by the communicative nature of the observed behavior [Centelles et al., 2011] but indeed by the relational context in which such behavior is performed. Our findings suggest that the AON could be considered an early key processing component that supports and contributes to the understanding of nonverbal social interaction, and that an automatic movement analysis might be performed to adequately understand an observed agent's social intentions [Gallese, 2006; Gallese and Goldman, 1998; Jacob and Jeannerod, 2005].

Increased visual processing for mirrored movement patterns

The inverse contrast of mirrored versus contingent scenes demonstrated greater recruitment of the medial visual cortex, centered on the right lingual gyrus, bilateral cuneus, and parahippocampal gyrus, as well as the PCC and the dlPFC. The activation of the medial aspects of the extrastriate cortex suggests an increased demand on visual analysis, which may be related to the perception of symmetry. Sasaki et al. [2005] have previously found that symmetric compared with random dot stimuli activate the extrastriate visual cortex. These results are also consistent with studies investigating not only texture but also shape symmetry discrimination. For instance, Wilkinson et al. [2000] used concentric radial frequency patterns, which are characteristic of complex biological shapes, and found that they produced strong fMRI activation of human extrastriate area V4 and the FG. The results of our study are in line with such findings, by showing that the perception of symmetrical moving bodies is processed, among other regions, in medial extrastriate areas. Moreover, the parahippocampal gyrus has also been differentially recruited during the processing of visual complexity and may be tuned to representing the differences among stimuli with a high degree of visual overlap or featural ambiguity [e.g., Mundy et al., 2012]. Indeed, in this study, the noncontingent condition consists of videos displaying stimuli with “perfect visual symmetry” (i.e., twice the same body performing identical movements simultaneously). When processing these videos, it may be more challenging for participants to represent differences between the two bodies, which is needed to judge the plausibility and naturalness of the situation. Furthermore, the activation of the PCC and the parahippocampal gyrus, two regions that are strongly interconnected [Vogt et al., 1992], might also point to the assignment of mnemonic associations to sensory input [PCC, e.g., Vogt et al., 1992; parahippocampal gyrus, e.g., Bar et al., 2008]. Recent findings have demonstrated the involvement of the parahippocampal gyrus in the re‐activation of visual context (e.g., a café) to mediate successful episodic memory retrieval [Hayes et al., 2007], which may be important for associating a stimulus with actions that have been frequently experienced in a given context. In the present task, this may be necessary to judge whether a perceived scene is natural and plausible. This is likely to be more challenging for the mirrored movement patterns than for the contingent ones, which allow for a much faster association with experience in a prototypically similar location. In this line, we assume that a mirrored dyad, compared with a contingent one, poses a greater challenge to the assessment of the positions of two bodies relative to each other and to the environment.

In the following, we consider the possibility of an alternative operationalization of noncontingency. For instance, instead of replacing one agent's actions with the mirror image of the other agent, we could have time‐lagged their actions relative to each other. In that case, the contrast between contingent and noncontingent interactions might not have shown a differential engagement of neural networks, since viewers might interpret a relationship among people moving on screen [Iacoboni et al., 2004] irrespective of contingency information. This, in turn, is most likely because social contingencies in communicative interactions are highly complex [Bigelow, 1999] and observers might have a higher tolerance for variability and noise. Interpersonal predictive coding [i.e., the perception of one agent's action has predictive value for the other agent's actions, Hirai and Kakigi, 2009; Neri et al., 2006] may extend to socially interactive activities [Manera et al., 2011]. However, this seems to be the case only for ritualized behaviors with a learned social response expectancy (e.g., directives like “come here” or “sit down”) and not for more complex nonverbal contingencies. Thus, we argue that alternative operationalizations of noncontingency in complex nonverbal social behaviors would probably not have completely eliminated the relational information between the two agents.

EFFECTS OF KINEMATICS

No biological bias for the AON

It has been suggested that the AON might be tuned specifically to biological motion, since this type of kinematics is what humans are most familiar with, both due to experience and exposure [Bouquet et al., 2007; Casile et al., 2010; Chaminade et al., 2010, 2012; Dayan et al., 2007; Kilner et al., 2003; Press, 2011; Press et al., 2007; Tai et al., 2004; Tsai and Brass, 2007]. However, research in this area is still inconclusive [for contradicting results see: Cross et al., 2012; Engel et al., 2008a, 2008b; Gazzola et al., 2007; Gobbini et al., 2011; Kupferberg et al., 2011; Oberman et al., 2007a; Oztop et al., 2005; Saygin et al., 2011]. This lack of consistency in previous findings may be due to differences in top‐down factors, which are known to modulate motor simulation effects [Liepelt and Brass, 2010; Müller et al., 2011; Stanley et al., 2010; but see also Press et al., 2006]. Confirming our initial hypothesis, the results of this study contribute to this ongoing debate by clearly showing no differential engagement of the AON for processing different motion kinematics when the appearance information is kept constant and the task requires intuitive judgments. Our finding is supported by previous research that used an implicit behavioral paradigm to show that, in a humanoid agent, an artificial movement velocity profile might be familiar enough to be simulated and is sufficient to cause motor interference [Kilner et al., 2007; Kupferberg et al., 2011]. In this line, it has been suggested that such effects are driven more by the end goals than by movement kinematics [Gazzola et al., 2007; Hamilton and Grafton, 2008; Liepelt et al., 2008, 2010; Longo et al., 2008]. We argue that biological motion, as operationalized by fluent movements, does not influence the cognitive processes related to understanding actions in a dyadic social interaction context.

An alternative account for this result is inspired by a recent proposal by Cross et al. [2012]. The authors suggest that there may be a nonlinear relationship between the activity of the AON and action familiarity, such that a heightened AON response can be associated with both highly unfamiliar and highly familiar actions compared with actions at neither end of the familiarity continuum. Considering that the task instructions in this study required participants to judge the “naturalness” of perceived scenes based on their plausibility and familiarity, we could assume that scenes perceived as natural were also perceived as highly familiar and scenes perceived as unnatural as highly unfamiliar. Since our results show considerable overlap in AON engagement for both smooth (familiar) and rigid (unfamiliar) movements compared with the scrambled videos, the direct contrast of scenes containing fluent movements with those containing rigid ones would indeed reveal no differential AON activation.

Enhanced SNN activation for rigid motion

The processing of rigid, nonbiological compared with fluent, biological kinematics revealed clusters of increased neural activation in the left vlPFC (IFG) and in regions of the SNN, namely, the left TPJ and the bilateral dmPFC. In concordance with Zaki et al. [2010], we suggest that the current activation pattern is related to cognitive control processing and specific social inferential processing. The engagement of the SNN is known to be triggered by contextual incongruencies and by tasks focusing on mentalizing or related capacities [Frith, 2007; Van Overwalle and Baetens, 2009]. The perceived awkwardness of the rigid movements is caused by a conflict between expected biological and perceived nonbiological kinematics. This stimulus‐context conflict engages the pars triangularis of the IFG, which has previously been implicated in processing the semantic content of an action that is incongruent with its context [Willems et al., 2007]. Subsequently, the involvement of the TPJ and the dmPFC as fundamental SNN components is required [Zaki et al., 2010]. Evidence from functional neuroimaging studies shows that the TPJ is associated with mental state attribution [e.g., Castelli et al., 2000; Gallagher et al., 2000; Schultz et al., 2004; Vogeley et al., 2001] and reasoning about others' beliefs [e.g., Aichhorn et al., 2009; Samson et al., 2004]. We assume that our left‐hemispheric TPJ activation might be due to the fact that nonbiologically moving humanoid characters pose a greater challenge to the processing of communicative intentions [Centelles et al., 2011; Ciaramidaro et al., 2007]. With regard to the dmPFC, Frith [2007] summarizes that it is recruited by mentalizing tasks that require the processing of nonobservable mental states. Indeed, person perception tasks [e.g., Kuzmanovic et al., 2009; Mitchell et al., 2002] and the perception of social nonverbal interactions [Centelles et al., 2011; Iacoboni et al., 2004; Kujala et al., 2011; Walter et al., 2004] have all been found to engage this region. In addition, the present dmPFC activation is located in the same region of the dorsal paracingulate cortex that has been implicated in the ascertainment of human or intentional agency during the observation of ambiguous stimuli [Stanley et al., 2010].

Alternatively, apart from suggesting mentalizing about the characters on screen, the finding of SNN engagement for rigid compared with smooth movements could also indicate mentalizing about the experimenter. Since the manipulation of the videos was announced, participants may have attributed to the experimenter the intention of deluding or misleading them in all instances when the artificial movement kinematics were presented. However, participants were not asked to detect which videos were based on the original live‐action videos and which were computer‐manipulated. Instead, they were instructed to make subjective judgments as to which nonverbal situations seemed plausible and natural to them, irrespective of the origin of the video. Thus, we assume there was no substantial need to think about the origin of the video or the experimenter's intention.

This result corroborates findings of a study by Chaminade et al. [2007], who found that reporting a motion as artificial is more cognitively demanding. While they did not find any additional SNN engagement, this may be due to the nature of their stimulus material: The judgment of running motion, be it animated or motion‐captured, does not require observers to mentalize about the underlying intentions of the agents. Our results are, however, in conflict with those of a study by Gobbini et al. [2011], which revealed that it was the observation of moving human compared with robotic emotional faces that evoked stronger activity in regions of the SNN. It is important to note that in the study by Gobbini et al. [2011] the effects of the agents' appearance might have overridden more subtle effects of their movement kinematics. This interpretation is in concert with the idea that appearance information, as a top‐down factor, modulates SNN engagement [Chaminade et al., 2007, 2012; Krach et al., 2008] and argues for the dissociation of shape and motion in experimental designs [Cross et al., 2012; Saygin et al., 2011; Shimada, 2010].

We conclude that the present SNN finding reflects an increased need for the inferential processing of intentions “behind” other persons' actions or, in short, for “social computation.” Our results show that this need increases with atypical kinematic features of motion, which may render social encounters ambiguous and/or awkward.

EFFECTS OF INTERACTION

The interaction evaluating brain regions in which the contrast between contingent and mirrored movement patterns was stronger for smooth than for rigid motion yielded activations in the left premotor and somatosensory cortices, the left paracentral lobule (extending to the SMA), as well as bilaterally in the middle cingulate cortex.

The SMA is part of the premotor node of the AON, and its involvement in the perception of nonverbal body movements has been previously attested [Cross et al., 2006; Decety and Grèzes, 1999; Zentgraf et al., 2005]. The middle cingulate cortex (MCC) is a so‐called “cortical midline structure,” which has been suggested to respond more in conditions that are more self‐relevant than the comparison condition [Northoff et al., 2006]. Taken together, it is possible, though speculative, that an implicit self‐reference is established for scenes with contingent and smooth movement patterns: Such scenes may be considered more similar to scenes that one has experienced oneself in the past. This is additionally reflected by the behavioral interaction effect, with the highest “naturalness” ratings for these videos.

Considering that individuals with Autism Spectrum Disorder have been shown to have both a compromised perception of biological motion [for a review, see Kaiser and Pelphrey, 2012] and a deficit in the detection of social contingencies [e.g., Castelli et al., 2002; Gergely, 2001; Klin, 2000], the current paradigm could be a useful tool to investigate social interaction perception in this disorder of social communication.

CONCLUSION

This research used a novel method to explore brain activations during the perception of nonverbal behavior in communicative interactions. First, our study shows that people are sensitive both to contingency information and (to a lesser degree) to movement fluency, but that they consider interactions to be most natural when movements are both fluent and contingent. Second, we found that the AON is preferentially engaged by contingency within movement patterns and that this effect is driven by the contingency information per se, not merely by the social or communicative nature of the nonverbal behavior. In addition, we found that the AON does not discriminate between different movement velocities. These data can be taken to suggest that a tight kinematics match is not required to represent the end goal of a social action, hence arguing against a potential biological bias of this network. Furthermore, regions of the SNN are engaged when a mismatch between the agents' known biological nature and their nonbiological movements prompts inferences about the agents' intentions.

Supporting information

Supporting Information Video 1

Supporting Information Video 2

Supporting Information Video 3

Supporting Information Video 4

Supporting Information Video 5

ACKNOWLEDGMENTS

The authors would like to thank Timm Wetzel, Kurt Wittenberg, Elke Bannemer, and Anke Rühling for their assistance with the fMRI scanning, and Ursula Juchellek for her help with participant recruitment. Björn Günter, Haug Leuschner, Mathis Jording, and Katja Weber deserve much appreciation for their help with stimulus generation and evaluation. Thanks also go to Gayanée Kédia and Karl MacDorman for valuable comments during the conceptual phase of the study. Katharina Krämer, Ulrich Pfeiffer, and Bert Timmermans deserve the authors' gratitude for their helpful feedback on an earlier version of this article.

REFERENCES

1. Adolphs R (2009): The social brain: Neural basis of social knowledge. Annu Rev Psychol 60:693–716.
2. Aichhorn M, Perner J, Weiss B, Kronbichler M, Staffen W, Ladurner G (2009): Temporo‐parietal junction activity in theory‐of‐mind tasks: Falseness, beliefs, or attention. J Cogn Neurosci 21:1179–1192.
3. Allison T, Puce A, McCarthy G (2000): Social perception from visual cues: Role of the STS region. Trends Cogn Sci 4:267–278.
4. Balas B, Kanwisher N, Saxe R (2012): Thin‐slice perception develops slowly. J Exp Child Psychol 112:257–264.
5. Bar M, Aminoff E, Schacter DL (2008): Scenes unseen: The parahippocampal cortex intrinsically subserves contextual associations, not scenes or places per se. J Neurosci 28:8539–8544.
6. Becchio C, Cavallo A, Begliomini C, Sartori L, Feltrin G, Castiello U (2012): Social grasping: From mirroring to mentalizing. Neuroimage 61:240–248.
7. Bente G, Krämer NC, Petersen A, de Ruiter JP (2001a): Computer‐animated movement and person perception: Methodological advances in nonverbal behavior research. J Nonverbal Behav 25:151–166.
8. Bente G, Krämer NC, Petersen A, de Ruiter JP (2001b): Transcript‐based computer animation of movement: Evaluating a new tool for nonverbal behavior research. Behav Res Meth Instrum Comput 33:303–310.
9. Bente G, Senokozlieva M, Pennig S, Al‐Issa A, Fischer O (2008): Deciphering the secret code: A new methodology for the cross‐cultural analysis of nonverbal behavior. Behav Res Methods 40:269–277.
10. Bente G, Leuschner H, Al Issa A, Blascovich JJ (2010): The others: Universals and cultural specificities in the perception of status and dominance from nonverbal behavior. Conscious Cogn 19:762–777.
11. Bernieri FJ, Rosenthal R (1991): Interpersonal coordination: Behavior matching and interactional synchrony. In: Feldman RS, Rime B, editors. Fundamentals of Nonverbal Behavior. Cambridge: Cambridge University Press; p 401–432.
12. Berry DS, Misovich SJ, Kean KJ, Baron RM (1992): Effects of disruption of structure and motion on perceptions of social causality. Pers Soc Psychol B 18:237–244.
13. Bidet‐Ildei C, Orliaguet J‐P, Sokolov AN, Pavlova M (2006): Perception of elliptic biological motion. Perception 35:1137–1147.
14. Bigelow A (1999): Infant's sensitivity to imperfect contingency in social interaction. In: Rochat P, editor. Early Social Cognition. Hillsdale, NJ: Erlbaum; p 137–154.
15. Blakemore S‐J, Boyer P, Pachot‐Clouard M, Meltzoff A, Segebarth C, Decety J (2003): The detection of contingency and animacy from simple animations in the human brain. Cereb Cortex 13:837–844.
16. Blythe PW, Miller GF, Todd PM (1996): Human simulation of adaptive behavior: Interactive studies of pursuit, evasion, courtship, fighting, and play. In: From Animals to Animats 4: Proceedings of the Fourth International Conference on Simulation of Adaptive Behavior. p 13–22.
17. Bouquet CA, Gaurier V, Shipley T, Toussaint L, Blandin Y (2007): Influence of the perception of biological or non‐biological motion on movement execution. J Sports Sci 25:519–530.
18. Brass M, Schmitt RM, Spengler S, Gergely G (2007): Investigating action understanding: Inferential processes versus action simulation. Curr Biol 17:2117–2121.
19. Buccino G, Baumgaertner A, Colle L, Buechel C, Rizzolatti G, Binkofski F (2007): The neural basis for understanding non‐intended actions. Neuroimage 36(Suppl 2):T119–127.
20. Burgoon JK (1994): Nonverbal signals. In: Knapp ML, Miller GR, editors. Handbook of Interpersonal Communication. Thousand Oaks, CA: Sage; p 229–285.
21. Burgoon JK, Dillman L, Stern LA (1993): Adaptation in dyadic interaction: Defining and operationalizing patterns of reciprocity and compensation. Commun Theory 3:295–316.
22. Canessa N, Alemanno F, Riva F, Zani A, Proverbio AM, Mannara N, Perani D, Cappa SF (2012): The neural bases of social intention understanding: The role of interaction goals. PLoS One 7:e42347.
23. Cappella JN (1998): The dynamics of nonverbal coordination and attachment: Problems of causal direction and causal mechanism. In: Palmer MT, Barnett GA, editors. Progress in Communication Sciences, Vol. 14. Stamford, CT: Ablex; p 19–37.
24. Casile A, Dayan E, Caggiano V, Hendler T, Flash T, Giese MA (2010): Neuronal encoding of human kinematic invariants during action observation. Cereb Cortex 20:1647–1655.
25. Caspers S, Zilles K, Laird AR, Eickhoff SB (2010): ALE meta‐analysis of action observation and imitation in the human brain. Neuroimage 50:1148–1167.
26. Castelli F, Happé F, Frith U, Frith C (2000): Movement and mind: A functional imaging study of perception and interpretation of complex intentional movement patterns. Neuroimage 12:314–325.
27. Castelli F, Frith C, Happé F, Frith U (2002): Autism, Asperger syndrome and brain mechanisms for the attribution of mental states to animated shapes. Brain 125(Pt 8):1839–1849.
28. Centelles L, Assaiante C, Nazarian B, Anton J‐L, Schmitz C (2011): Recruitment of both the mirror and the mentalizing networks when observing social interactions depicted by point‐lights: A neuroimaging study. PLoS One 6:e15749.
29. Chaminade T, Hodgins J, Kawato M (2007): Anthropomorphism influences perception of computer‐animated characters' actions. Soc Cogn Affect Neurosci 2:206–216.
30. Chaminade T, Zecca M, Blakemore S‐J, Takanishi A, Frith CD, Micera S, Dario P, Rizzolatti G, Gallese V, Umilta MA (2010): Brain response to a humanoid robot in areas implicated in the perception of human emotional gestures. PLoS One 5:e11577.
31. Chaminade T, Rosset D, Da Fonseca D, Nazarian B, Lutcher E, Cheng G, Deruelle C (2012): How do we think machines think? An fMRI study of alleged competition with an artificial intelligence. Front Hum Neurosci 6:103. doi:10.3389/fnhum.2012.00103.
32. Choi VS, Gray HM, Ambady N (2005): The glimpsed world: Unintended communication and unintended perception. In: Hassin RR, Uleman JS, Bargh JA, editors. The New Unconscious. New York: Oxford University Press; p 309–333.
33. Ciaramidaro A, Adenzato M, Enrici I, Erk S, Pia L, Bara BG, Walter H (2007): The intentional network: How the brain reads varieties of intentions. Neuropsychologia 45:3105–3113.
34. Clarke TJ, Bradshaw MF, Field DT, Hampson SE, Rose D (2005): The perception of emotion from body movement in point‐light displays of interpersonal dialogue. Perception 34:1171–1180.
35. Cross ES, Hamilton AF de C, Grafton ST (2006): Building a motor simulation de novo: Observation of dance by dancers. Neuroimage 31:1257–1267.
36. Cross ES, Liepelt R, Hamilton AF de C, Parkinson J, Ramsey R, Stadler W, Prinz W (2012): Robotic movement preferentially engages the action observation network. Hum Brain Mapp 33:2238–2254.
37. Dale AM (1999): Optimal experimental design for event‐related fMRI. Hum Brain Mapp 8:109–114.
38. Dayan E, Casile A, Levit‐Binnun N, Giese MA, Hendler T, Flash T (2007): Neural representations of kinematic laws of motion: Evidence for action‐perception coupling. Proc Natl Acad Sci USA 104:20582–20587.
39. de Lange FP, Spronk M, Willems RM, Toni I, Bekkering H (2008): Complementary systems for understanding action intentions. Curr Biol 18:454–457.
40. Decety J, Grèzes J (1999): Neural mechanisms subserving the perception of human actions. Trends Cogn Sci 3:172–178.
41. Dittrich WH (1993): Action categories and the perception of biological motion. Perception 22:15–22.
42. Dittrich WH, Lea SE (1994): Visual perception of intentional motion. Perception 23:253–268.
43. Duvernoy HM (1999): The Human Brain: Surface, Three‐Dimensional Sectional Anatomy with MRI, and Blood Supply. Vienna: Springer.
44. Eickhoff SB, Stephan KE, Mohlberg H, Grefkes C, Fink GR, Amunts K, Zilles K (2005): A new SPM toolbox for combining probabilistic cytoarchitectonic maps and functional imaging data. Neuroimage 25:1325–1335.
45. Engel A, Burke M, Fiehler K, Bien S, Rösler F (2008a): How moving objects become animated: The human mirror neuron system assimilates non‐biological movement patterns. Soc Neurosci 3:368–387.
46. Engel A, Burke M, Fiehler K, Bien S, Rösler F (2008b): What activates the human mirror neuron system during observation of artificial movements: Bottom‐up visual features or top‐down intentions? Neuropsychologia 46:2033–2042.
47. Faraway JJ, Reed MP, Wang J (2007): Modelling three‐dimensional trajectories by using Bézier curves with application to hand motion. J Roy Stat Soc Ser C 56:571–585.
48. Flash T, Hogan N (1985): The coordination of arm movements: An experimentally confirmed mathematical model. J Neurosci 5:1688–1703.
49. Friston KJ, Holmes A, Poline JB, Price CJ, Frith CD (1996): Detecting activations in PET and fMRI: Levels of inference and power. Neuroimage 4(3 Pt 1):223–235.
50. Frith CD (2007): The social brain? Philos Trans R Soc Lond B Biol Sci 362:671–678.
51. Gallagher HL, Happé F, Brunswick N, Fletcher PC, Frith U, Frith CD (2000): Reading the mind in cartoons and stories: An fMRI study of ‘theory of mind’ in verbal and nonverbal tasks. Neuropsychologia 38:11–21.
52. Gallese V (2006): Intentional attunement: A neurophysiological perspective on social cognition and its disruption in autism. Brain Res 1079:15–24.
53. Gallese V, Goldman A (1998): Mirror neurons and the simulation theory of mind‐reading. Trends Cogn Sci 2:493–501.
54. Gazzola V, Rizzolatti G, Wicker B, Keysers C (2007): The anthropomorphic brain: The mirror neuron system responds to human and robotic actions. Neuroimage 35:1674–1684.
55. Gergely G (2001): The obscure object of desire: ‘Nearly, but clearly not, like me’: Contingency preference in normal children versus children with autism. Bull Menninger Clin 65:411–426.
56. Gergely G, Watson JS (1999): Early social‐emotional development: Contingency perception and the social biofeedback model. In: Rochat P, editor. Early Social Cognition: Understanding Others in the First Months of Life. Mahwah, NJ: Lawrence Erlbaum Associates; p 101–137.
57. Giese MA, Poggio T (2003): Neural mechanisms for the recognition of biological movements. Nat Rev Neurosci 4:179–192.
58. Gobbini MI, Koralek AC, Bryan RE, Montgomery KJ, Haxby JV (2007): Two takes on the social brain: A comparison of theory of mind tasks. J Cogn Neurosci 19:1803–1814.
59. Gobbini MI, Gentili C, Ricciardi E, Bellucci C, Salvini P, Laschi C, Guazzelli M, Pietrini P (2011): Distinct neural systems involved in agency and animacy detection. J Cogn Neurosci 23:1911–1920.
60. Grèzes J, Fonlupt P, Bertenthal B, Delon‐Martin C, Segebarth C, Decety J (2001): Does perception of biological motion rely on specific brain regions? Neuroimage 13:775–785.
61. Grossman ED, Blake R (2002): Brain areas active during visual perception of biological motion. Neuron 35:1167–1175.
62. Hamilton AF de C, Grafton ST (2007): The motor hierarchy: From kinematics to goals and intentions. In: Haggard P, Rosetti Y, Kawato M, editors. Attention and Performance XXII. Oxford: Oxford University Press; p 381–408.
63. Hamilton AF de C, Grafton ST (2008): Action outcomes are represented in human inferior frontoparietal cortex. Cereb Cortex 18:1160–1168.
64. Hayes SM, Nadel L, Ryan L (2007): The effect of scene context on episodic object recognition: Parahippocampal cortex mediates memory encoding and retrieval success. Hippocampus 17:873–889.
65. Heider F, Simmel M (1944): An experimental study of apparent behavior. Am J Psychol 57:243–259.
66. Hirai M, Hiraki K (2007): Differential neural responses to humans vs. robots: An event‐related potential study. Brain Res 1165:105–115.
67. Hirai M, Kakigi R (2009): Differential orientation effect in the neural response to interacting biological motion of two agents. BMC Neurosci 10:39.
68. Hodgins JK, O'Brien JF, Tumblin J (1998): Perception of human motion with different geometric models. IEEE Trans Vis Comput Graph 4:307–316.
69. Iacoboni M, Lieberman MD, Knowlton BJ, Molnar‐Szakacs I, Moritz M, Throop CJ, Fiske AP (2004): Watching social interactions produces dorsomedial prefrontal and medial parietal BOLD fMRI signal increases compared to a resting baseline. Neuroimage 21:1167–1173.
70. Jacob P, Jeannerod M (2005): The motor theory of social cognition: A critique. Trends Cogn Sci 9:21–25.
71. Johansson G (1973): Visual perception of biological motion and a model for its analysis. Percept Psychophys 14:201–211.
72. Kaiser MD, Pelphrey KA (2012): Disrupted action perception in autism: Behavioral evidence, neuroendophenotypes, and diagnostic utility. Dev Cogn Neurosci 2:25–35.
73. Keysers C, Gazzola V (2007): Integrating simulation and theory of mind: From self to social cognition. Trends Cogn Sci 11:194–196.
74. Klin A (2000): Attributing social meaning to ambiguous visual stimuli in higher‐functioning autism and Asperger syndrome: The social attribution task. J Child Psychol Psychiatry 41:831–846.
75. Kilner JM, Paulignan Y, Blakemore SJ (2003): An interference effect of observed biological movement on action. Curr Biol 13:522–525.
76. Kilner JM, Hamilton AF de C, Blakemore S‐J (2007): Interference effect of observed human movement on action is due to velocity profile of biological motion. Soc Neurosci 2:158–166.
77. Krach S, Hegel F, Wrede B, Sagerer G, Binkofski F, Kircher T (2008): Can machines think? Interaction and perspective taking with robots investigated via fMRI. PLoS One 3:e2597.
78. Krumhuber E, Kappas A (2005): Moving smiles: The role of dynamic components for the perception of the genuineness of smiles. J Nonverbal Behav 29:3–24.
79. Kujala MV, Carlson S, Hari R (2011): Engagement of amygdala in third‐person view of face‐to‐face interaction. Hum Brain Mapp 33:1753–1762.
80. Kupferberg A, Glasauer S, Huber M, Rickert M, Knoll A, Brandt T (2011): Biological movement increases acceptance of humanoid robots as human partners in motor interaction. AI Soc 26:339–345.
81. Kuzmanovic B, Georgescu AL, Eickhoff SB, Shah NJ, Bente G, Fink GR, Vogeley K (2009): Duration matters: Dissociating neural correlates of detection and evaluation of social gaze. Neuroimage 46:1154–1163.
82. Lacquaniti F, Terzuolo C, Viviani P (1983): The law relating the kinematic and figural aspects of drawing movements. Acta Psychol (Amst) 54:115–130.
83. Liepelt R, Brass M (2010): Top‐down modulation of motor priming by belief about animacy. Exp Psychol 57:221–227.
84. Liepelt R, Von Cramon DY, Brass M (2008): How do we infer others' goals from non‐stereotypic actions? The outcome of context‐sensitive inferential processing in right inferior parietal and posterior temporal cortex. Neuroimage 43:784–792.
85. Liepelt R, Prinz W, Brass M (2010): When do we simulate non‐human agents? Dissociating communicative and non‐communicative actions. Cognition 115:426–434.
86. Longo MR, Kosobud A, Bertenthal BI (2008): Automatic imitation of biomechanically possible and impossible actions: Effects of priming movements versus goals. J Exp Psychol Hum Percept Perform 34:489–501.
87. Manera V, Becchio C, Schouten B, Bara BG, Verfaillie K (2011): Communicative interactions improve visual detection of biological motion. PLoS One 6:e14594.
88. Marsh LE, Hamilton AF de C (2011): Dissociation of mirroring and mentalising systems in autism. Neuroimage 56:1511–1519.
89. Marsh AA, Kozak MN, Wegner DM, Reid ME, Yu HH, Blair RJR (2010): The neural substrates of action identification. Soc Cogn Affect Neurosci 5:392–403.
90. Martin A, Weisberg J (2003): Neural foundations for understanding social and mechanical concepts. Cogn Neuropsychol 20:575–587.
91. McAleer P, Pollick FE (2008): Understanding intention from minimal displays of human activity. Behav Res Methods 40:830–839.
92. Michels L, Lappe M, Vaina LM (2005): Visual areas involved in the perception of human movement from dynamic form analysis. Neuroreport 16:1037–1041.
93. Michotte A (1946): La perception de la causalité. Études de Psychologie, Vol. VI. Louvain, France: Institut Supérieur de Philosophie.
94. Mitchell JP, Heatherton TF, Macrae CN (2002): Distinct neural systems subserve person and object knowledge. Proc Natl Acad Sci USA 99:15238–15243.
95. Moran G, Dumas JE, Symons DK (1992): Approaches to sequential analysis and the description of contingency in behavioral interaction. Behav Assess 14:65–92.
96. Morewedge CK, Preston J, Wegner DM (2007): Timescale bias in the attribution of mind. J Pers Soc Psychol 93:1–11.
97. Mori M (1970): The uncanny valley. Energy 7:33–35 (translated by MacDorman KF, Minato T).
98. Müller BCN, Brass M, Kühn S, Tsai C‐C, Nieuwboer W, Dijksterhuis A, van Baaren RB (2011): When Pinocchio acts like a human, a wooden hand becomes embodied: Action co‐representation for non‐biological agents. Neuropsychologia 49:1373–1377.
99. Mundy ME, Downing PE, Graham KS (2012): Extrastriate cortex and medial temporal lobe regions respond differentially to visual feature overlap within preferred stimulus category. Neuropsychologia 50:3053–3061.
100. Neri P, Luu JY, Levi DM (2006): Meaningful interactions can enhance visual discrimination of human agents. Nat Neurosci 9:1186–1192.
101. Noordzij ML, Newman‐Norlund SE, de Ruiter JP, Hagoort P, Levinson SC, Toni I (2009): Brain mechanisms underlying human communication. Front Hum Neurosci 3:14. doi:10.3389/neuro.09.014.2009.
102. Northoff G, Heinzel A, de Greck M, Bermpohl F, Dobrowolny H, Panksepp J (2006): Self‐referential processing in our brain: A meta‐analysis of imaging studies on the self. Neuroimage 31:440–457.
103. Oberman LM, McCleery JP, Ramachandran VS, Pineda JA (2007a): EEG evidence for mirror neuron activity during the observation of human and robot actions: Toward an analysis of the human qualities of interactive robots. Neurocomput 70:2194–2203.
104. Oberman LM, Pineda JA, Ramachandran VS (2007b): The human mirror neuron system: A link between action observation and social skills. Soc Cogn Affect Neurosci 2:62–66.
105. Ohnishi T, Moriguchi Y, Matsuda H, Mori T, Hirakata M, Imabayashi E, Hirao K, Nemoto K, Kaga M, Inagaki M, Yamada M, Uno A (2004): The neural network for the mirror system and mentalizing in normally developed children: An fMRI study. Neuroreport 15:1483–1487.
106. Oldfield RC (1971): The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia 9:97–113.
107. Oztop E, Franklin DW, Chaminade T, Cheng G (2005): Human‐humanoid interaction: Is a humanoid robot perceived as a human? Int J Hum Robot 2:537–559.
108. Pavlova MA (2012): Biological motion processing as a hallmark of social cognition. Cereb Cortex 22:981–995.
109. Peelen MV, Wiggett AJ, Downing PE (2006): Patterns of fMRI activity dissociate overlapping functional brain areas that respond to biological motion. Neuron 49:815–822.
110. Penny WD, Holmes AP, Friston KJ (2003): Random effects analysis. In: Frackowiak RSJ, Friston KJ, Frith CD, Dolan R, Price CJ, Zeki S, Ashburner J, Penny WD, editors. Human Brain Function. San Diego, London: Academic Press; p 843–850.
111. Pierno AC, Becchio C, Turella L, Tubaldi F, Castiello U (2008): Observing social interactions: The effect of gaze. Soc Neurosci 3:51–59.
112. Pocock L, Rosebush J (2002): Motion pathways, key frame animation, and easing. In: The Computer Animator's Technical Handbook, Chapter 10. San Francisco: Morgan Kaufmann Publishers Inc; p 247–274.
113. Press C (2011): Action observation and robotic agents: Learning and anthropomorphism. Neurosci Biobehav Rev 35:1410–1418.
114. Press C, Gillmeister H, Heyes C (2006): Bottom‐up, not top‐down, modulation of imitation by human and robotic models. Eur J Neurosci 24:2415–2419.
115. Press C, Gillmeister H, Heyes C (2007): Sensorimotor experience enhances automatic imitation of robotic action. Proc Biol Sci 274:2509–2514.
116. Pyles JA, Garcia JO, Hoffman DD, Grossman ED (2007): Visual perception and neural correlates of novel ‘biological motion.’ Vision Res 47:2786–2797.
117. Redcay E, Dodell‐Feder D, Pearrow MJ, Mavros PL, Kleiner M, Gabrieli JDE, Saxe R (2010): Live face‐to‐face interaction during fMRI: A new tool for social cognitive neuroscience. Neuroimage 50:1639–1647.
118. Rimé B, Boulanger B, Laubin P, Richir M, Stroobants K (1985): The perception of interpersonal emotions originated by patterns of movement. Motiv Emotion 9:241–260.
119. Rizzolatti G, Craighero L (2004): The mirror neuron system. Ann Rev Neurosci 27:169–192.
120. Rizzolatti G, Fadiga L, Matelli M, Bettinardi V, Paulesu E, Perani D, Fazio F (1996): Localization of grasp representations in humans by PET: 1. Observation versus execution. Exp Brain Res 111:246–252.
121. Samson D, Apperly IA, Chiavarino C, Humphreys GW (2004): Left temporoparietal junction is necessary for representing someone else's belief. Nat Neurosci 7:499–500.
122. Santos NS, David N, Bente G, Vogeley K (2008): Parametric induction of animacy experience. Conscious Cogn 17:425–437.
123. Santos NS, Kuzmanovic B, David N, Rotarska‐Jagiela A, Eickhoff SB, Shah JN, Fink GR, Bente G, Vogeley K (2010): Animated brain: A functional neuroimaging study on animacy experience. Neuroimage 53:291–302.
124. Sartori L, Becchio C, Castiello U (2011): Cues to intention: The role of movement information. Cognition 119:242–252.
125. Sasaki Y, Vanduffel W, Knutsen T, Tyler C, Tootell R (2005): Symmetry activates extrastriate visual cortex in human and nonhuman primates. Proc Natl Acad Sci USA 102:3159–3163.
126. Saygin AP (2007): Superior temporal and premotor brain areas necessary for biological motion perception. Brain 130:2452–2461.
127. Saygin AP, Chaminade T, Ishiguro H, Driver J, Frith C (2011): The thing that should not be: Predictive coding and the uncanny valley in perceiving human and humanoid robot actions. Soc Cogn Affect Neurosci 7:413–422.
128. Scholl BJ, Tremoulet PD (2000): Perceptual causality and animacy. Trends Cogn Sci 4:299–309.
129. Schultz J, Imamizu H, Kawato M, Frith CD (2004): Activation of the human superior temporal gyrus during observation of goal attribution by intentional objects. J Cogn Neurosci 16:1695–1705.
130. Schultz J, Friston KJ, O'Doherty J, Wolpert DM, Frith CD (2005): Activation in posterior superior temporal sulcus parallels parameter inducing the percept of animacy. Neuron 45:625–635.
131. Serences JT (2004): A comparison of methods for characterizing the event‐related BOLD timeseries in rapid fMRI. Neuroimage 21:1690–1700.
132. Shimada S (2010): Deactivation in the sensorimotor area during observation of a human agent performing robotic actions. Brain Cogn 72:394–399.
133. Sinke CBA, Sorger B, Goebel R, de Gelder B (2010): Tease or threat? Judging social interactions from bodily expressions. Neuroimage 49:1717–1727.
134. Spunt RP, Satpute AB, Lieberman MD (2011): Identifying the what, why, and how of an observed action: An fMRI study of mentalizing and mechanizing during action observation. J Cogn Neurosci 23:63–74.
135. Stanley J, Gowen E, Miall RC (2007): Effects of agency on movement interference during observation of a moving dot stimulus. J Exp Psychol Hum Percept Perform 33:915–926.
136. Stanley J, Gowen E, Miall RC (2010): How instructions modify perception: An fMRI study investigating brain areas involved in attributing human agency. Neuroimage 52:389–400.
137. Tai YF, Scherfler C, Brooks DJ, Sawamoto N, Castiello U (2004): The human premotor cortex is ‘mirror’ only for biological actions. Curr Biol 14:117–120.
138. Tavares P, Lawrence AD, Barnard PJ (2008): Paying attention to social meaning: An fMRI study. Cereb Cortex 18:1876–1885.
139. Thioux M, Gazzola V, Keysers C (2008): Action understanding: How, what and why. Curr Biol 18:R431–434.
140. Tsai C‐C, Brass M (2007): Does the human motor system simulate Pinocchio's actions? Coacting with a human hand versus a wooden hand in a dyadic interaction. Psychol Sci 18:1058–1062.
141. Uddin LQ, Iacoboni M, Lange C, Keenan JP (2007): The self and social cognition: The role of cortical midline structures and mirror neurons. Trends Cogn Sci 11:153–157.
142. Van Overwalle F, Baetens K (2009): Understanding others' actions and goals by mirror and mentalizing systems: A meta‐analysis. Neuroimage 48:564–584.
143. Viviani P, Flash T (1995): Minimum‐jerk, two‐thirds power law, and isochrony: Converging approaches to movement planning. J Exp Psychol Hum Percept Perform 21:32–53.
144. Viviani P, Stucchi N (1992): Biological movements look uniform: Evidence of motor‐perceptual interactions. J Exp Psychol Hum Percept Perform 18:603–623.
145. Vogeley K, Bussfeld P, Newen A, Herrmann S, Happé F, Falkai P, Maier W, Shah NJ, Fink GR, Zilles K (2001): Mind reading: Neural mechanisms of theory of mind and self‐perspective. Neuroimage 14(1 Pt 1):170–181.
146. Vogt BA, Finch DM, Olson CR (1992): Functional heterogeneity in cingulate cortex: The anterior executive and posterior evaluative regions. Cereb Cortex 2:435–443.
147. Walter H, Adenzato M, Ciaramidaro A, Enrici I, Pia L, Bara BG (2004): Understanding intentions in social interaction: The role of the anterior paracingulate cortex. J Cogn Neurosci 16:1854–1863.
148. Wheatley T, Milleville SC, Martin A (2007): Understanding animate agents: Distinct roles for the social network and mirror system. Psychol Sci 18:469–474.
149. Wilkinson F, James TW, Wilson HR, Gati JS, Menon RS, Goodale MA (2000): An fMRI study of the selective activation of human extrastriate form vision areas by radial and concentric gratings. Curr Biol 10:1455–1458.
150. Willems RM, Ozyürek A, Hagoort P (2007): When language meets action: The neural integration of gesture and speech. Cereb Cortex 17:2322–2333.
151. Zaki J, Hennigan K, Weber J, Ochsner KN (2010): Social cognitive conflict resolution: Contributions of domain‐general and domain‐specific neural systems. J Neurosci 30:8481–8488.
152. Zentgraf K, Stark R, Reiser M, Kunzell S, Schienle A, Kirsch P, Walter B, Vaitl D, Munzert J (2005): Differential activation of pre‐SMA and SMA proper during action observation: Effects of instructions. Neuroimage 26:662–672.
153. Zilbovicius M, Meresse I, Chabane N, Brunelle F, Samson Y, Boddaert N (2006): Autism, the superior temporal sulcus and social perception. Trends Neurosci 29:359–366.
