Author manuscript; available in PMC: 2021 Dec 17.
Published in final edited form as: Cogn Affect Behav Neurosci. 2014 Mar;14(1):307–318. doi: 10.3758/s13415-013-0192-4

The role of kinematics in cortical regions for continuous human motion perception

Phil McAleer 1,2, Frank E Pollick 2, Scott A Love 2,3, Frances Crabbe 1, Jeffrey M Zacks 4
PMCID: PMC8679008  NIHMSID: NIHMS1745759  PMID: 23943513

Abstract

It has been proposed that we make sense of the movements of others by observing fluctuations in the kinematic properties of their actions. At the neural level, activity in the human motion complex (hMT+) and the posterior Superior Temporal Sulcus (pSTS) has been implicated in this relationship. However, previous neuroimaging studies have largely utilised brief, impoverished stimuli, and the role of relevant kinematic parameters in the processing of human action remains unclear. We address this issue by showing extended-duration natural displays of an actor engaged in two common activities to 12 participants in an fMRI study under passive viewing conditions. Region of Interest (ROI) analysis focused on three neural areas (hMT+, pSTS, and the Fusiform Face Area (FFA)) and was accompanied by a whole brain analysis. Analyses indicated that the kinematic properties of the actor's movements, most notably speed, were related to activity in hMT+ and pSTS, with additional areas related to human motion and action perception being highlighted by the whole brain analysis. Results indicate that the kinematic properties of people's movements are continually monitored during everyday activity as a step towards determining actions and intent.

Keywords: biological motion, fMRI, natural events, kinematic properties, perception


The world is a fluid, ever-changing stream of ongoing activity and movement that the brain must transform into an understanding of intent. How this transformation takes place is a complex problem that can be approached in a bottom-up fashion by exploring the kinematic properties of actions and the brain areas responsive to changes in those properties. The actions in question are the actual physical movements that people perform in order to achieve their intent or goal (Baldwin, Andersson, Saffran and Meyer, 2008). Much of the research in this area has taken the approach of contrasting the percept, or brain activity, between different classes of elemental movements (e.g. walking and running), utilising brief video displays or impoverished representations of actions in the form of point-light displays and geometric shapes (cf. Grosbras, Beaton & Eickhoff, 2012, for a recent meta-analysis). Only a handful of studies have examined the processing of an ongoing stream of activity over longer durations (Baldwin, Baird, Saylor & Clark, 2001; Baldwin et al, 2008; Zacks, Kumar, Abrams & Mehta, 2009) and related this observation to brain activity (Schubotz, Korb, Schiffer, Stadler & Von Cramon, 2012; Zacks, Swallow, Vettel & McAvoy, 2006). However, these studies neglect how cortical areas respond to relevant kinematics whilst observers view human activity. In this research, we directly explore how the kinematics of observed continuous actions relate to brain activity in cortical regions previously established as areas for the general processing of human action and motion. Our approach closely approximates the processes involved in natural viewing and, taken together with studies of elemental movements, complements our insight into the mechanisms of biological motion perception.

Studies comparing different classes of elemental actions with changing kinematics provide insight into which motion properties relate to judgments in behavioural experiments. For example, studies of male versus female point-light walking movements have revealed the relative contributions of form and motion in judging gender from point-light walkers (Kozlowski & Cutting, 1977; Mather & Murdoch, 1994; Pollick, Kay, Heim, & Stringer, 2005; Troje, 2002). In addition, studies of affective door-knocking movements have shown how the velocity of the wrist can explain the structure of affective judgments from point-light arms (Johnson, McKay, & Pollick, 2011; Pollick, Paterson, Bruderlin, & Sanford, 2001). Ongoing activity has been examined in the context of animacy displays (Heider & Simmel, 1944), which investigated what motion properties inform the recognition of intent from two interacting geometric objects. Blythe, Todd and Miller (1999) used a computational analysis to show that seven kinematic features distilled from the trajectories were sufficient to categorise the type of interaction between the two objects. These cues were: velocity, relative distance, relative angle, relative heading, absolute velocity, absolute vorticity (change in heading) and relative vorticity. Similarly, Zacks (2004) showed that kinematic properties could be related to the way in which observers perceived event boundaries in an animacy display. Event boundaries are proposed as a means for understanding ongoing activity, first developed in infancy (Baldwin et al, 2001; Newtson, 1976; Zacks, Tversky & Iyer, 2001; Zacks, 2004). Movement features, including the velocity and acceleration of the animated shapes, predicted when observers perceived event boundaries and influenced action comprehension (see also Hard, Recchia & Tversky, 2011).

Zacks, Kumar, Abrams, and Mehta (2009) explored key motion properties for biological motion perception in natural scenes, using video displays (mean duration = 370 s) of an actor performing common actions, e.g. a man sitting at a desk paying bills. Based on regression models in previous work (Zacks, 2004; Zacks et al, 2006), they investigated motion properties including changes in speed, position and acceleration of the actor's two arms and head, and the relative changes between the pairwise combinations. When viewing natural activities, motion properties were indeed correlated with the event boundaries perceived by observers: the most significant predictors of event boundaries were the speed and acceleration of body parts, and relative distances incorporating the left hand. This left-hand bias was partially explained by the visual salience of that hand: the actor was left-handed and the hand was always closer to the viewer. This study shows that, behaviourally, in natural displays, event boundaries, and the related action perception, are predictable via bottom-up processing of low-level motion properties.

Studies of brain activity during human motion observation have revealed a variety of brain regions specialised for this purpose. Research into how we obtain meaning and intention from observed action has revealed two networks involving substantial frontal, parietal and temporal regions (Van Overwalle & Baetens, 2009). One of these networks has been implicated in Theory of Mind (ToM) and involves medial prefrontal cortex and temporo-parietal cortex (Amodio & Frith, 2006; Frith & Frith, 1999). The other network has been associated with mirror neurons and involves the inferior frontal gyrus and the inferior parietal cortex (Rizzolatti & Sinigaglia, 2010). While these two networks appear to be involved with more complex processing of actions, other regions in temporal cortex have been shown to be involved with processing aspects of the form and motion of an action (Grosbras, Beaton, & Eickhoff, 2012). The areas associated with selectivity for the processing of form include the extrastriate body area (EBA) (Downing, Jiang, Shuman, & Kanwisher, 2001) and the fusiform body area (FBA) (Peelen & Downing, 2005). These regions respond to photorealistic displays of bodies and body parts, with evidence showing that the representation is more part-based in EBA than in FBA (Downing & Peelen, 2011). Areas associated with processing motion include the human motion complex (hMT+) and the posterior region of the Superior Temporal Sulcus (pSTS), which has been demonstrated to be more active when viewing intact displays of biological motion than scrambled displays (Beauchamp, Lee, Haxby, & Martin, 2002; Grossman & Blake, 2002). It is important to point out, though, that the distinction between areas and networks is not clear-cut: for example, pSTS is largely considered part of the ToM network (Castelli, Happe, Frith & Frith, 2000; Frith & Frith, 2010), whilst it is also considered the visual input to the mirror neuron network (see Iacoboni & Dapretto, 2006).
Likewise, recent evidence points to a large, if not complete, overlap of hMT+ with EBA (Ferri, Kolster, Jastorff & Orban, 2013; Weiner & Grill-Spector, 2011), suggesting that hMT+ may be more attuned to the integration of human motion and form than previously thought (Ferri et al, 2013; Gilaie-Dotan, Bentin, Harel, Rees & Saygin, 2011). Ultimately, however, research collectively suggests that cognitive aspects of human motion and action interpretation are performed by fronto-parietal networks, while early visual processing of human movement involves dorsal regions sensitive to motion and ventral regions sensitive to form. This role of frontal areas was further explored by Schubotz and colleagues (2012) using extended natural displays of human activity (mean duration = 81 s). Comparing intention-driven displays (e.g. doing laundry) with purely human-motion displays (e.g. tai chi movements), with scrambled point-light versions as controls, they found that fronto-parietal networks were invoked only when displays afforded prior knowledge of intent as opposed to knowledge of movement (i.e. laundry > tai chi), suggesting top-down modulation to comprehend intent (Schubotz et al, 2012; Zacks, 2004). Perception of human movement compared to control stimuli did, however, show activation in hMT+ as expected.

To move beyond simply identifying activated cortical regions, studies using long display durations have examined the relationship of the kinematic properties of animated movement to brain activity. For example, Dayan et al (2007) investigated how brain activity is modulated by changing the relationship between the speed and geometry of 9-second movements. They contrasted displays of a cloud of points moving in an elliptical trajectory with natural covariation of speed and shape that conformed to a 1/3 power law against other speed-shape relations. They found that regardless of the speed-shape relation there was a bilateral fMRI signal increase in posterior visual areas including occipito-temporal cortex and bilateral inferior parietal lobe. However, when the speed-shape relation was natural they found activation in bilateral STS/STG and in bilateral posterior cerebellum. Using simple animations of two interacting geometric objects, Zacks and colleagues reported that speed, distance between objects, and relative speed were related to the fMRI response in areas including hMT+. They also showed that activity in hMT+ and pSTS increased at times when observers identified event boundaries (Zacks, Swallow, Vettel, & McAvoy, 2006). Similarly, Jola and colleagues (2013) found that brain activity in these regions, as revealed by fMRI, was correlated among a group of observers as they watched a 6-minute solo dance, suggesting that this activity might result from observers synchronizing their brain activity to the motion of the dancer (Jola, McAleer, Grosbras, Love, Morison & Pollick, 2013). One possibility is that hMT+ and pSTS both perform human motion analyses, and play a bottom-up role in action perception. However, Zacks et al (2006) used simple geometric stimuli that are impoverished compared to extended displays of human activity.
Everyday activity incorporates substantial additional information, such as non-rigid articulation, and the relationship between hMT+, pSTS and observed kinematics may dissipate when such additional information is freely available, despite the behavioural percept remaining largely intact (Zacks et al, 2009).

The current study extends previous results by exploring the relationship between a set of kinematic properties of viewed naturalistic movement and brain activity as revealed via fMRI. Visual stimuli consisted of two extended live-action movies taken from Zacks et al (2009) of an actor performing two activities: a) “playing with duplo (lego)” and b) “paying bills”. Analysis was conducted at both the whole brain level and the ROI level, focusing on areas previously highlighted: pSTS and hMT+. Furthermore, an additional ROI, the Fusiform Face Area (FFA), was studied. The FFA is specialised for the processing and recognition of faces (Kanwisher, McDermott & Chun, 1997; Sergent, Ohta & MacDonald, 1992) and thus would not be expected to be greatly influenced by motion. We proposed that in regions related to motion processing, BOLD activity would be predicted by changes in the speed and acceleration of the actor’s limbs. Furthermore, we expected this relationship to be found in both the ROI and whole brain analyses, with the whole brain analysis revealing additional areas that may be involved in processing observed natural actions. No change in neural activation in the FFA relating to the motion properties of the actor was expected.

Methods

Participants

Twelve participants (6 male; mean age = 22.25 years, SD = 2.83) were recruited from the University of Glasgow Subject Pool. All participants self-reported being right-handed and neurologically healthy. They were paid £30 in total for their participation. Ethics permission was granted by the Ethics Board of the Faculty of Information and Mathematical Sciences, University of Glasgow.

Stimuli

Stimuli for the functional scan consisted of two movies taken from the set previously described by Zacks et al. (2009). In brief, both movies depicted one man performing common activities at a table: a) the actor paid a set of bills (Figure 1A – “Bills”); b) the actor built model figures using Duplo blocks (www.lego.com) (Figure 1B – “Duplo”). The displays lasted 371 s and 388 s, respectively. Three markers for motion tracking were visible on the head and hands of the actor during the displays. These allowed magnetic motion tracking (www.ascension-tech.com) of the actor’s movements for later analysis of kinematic properties in three-dimensional space.

Figure 1.


One frame from each of the displayed experimental movies. A – taken from the display showing the actor paying bills; B – taken from the display showing the actor building a model with Duplo.

Image Acquisition

Participants were scanned using a 3T Siemens TimTrio MRI scanner (Erlangen, Germany). Participants completed two scanning sessions lasting approximately 1 hr each, separated by a break of ~30 min during which the participant was removed from the scanner. Both sessions contained an anatomical image acquisition of the whole brain structure via a 3D magnetization prepared rapid acquisition gradient recalled echo (MP-RAGE) T1-weighted sequence (192 slices; 1 mm³ isovoxel; TR = 1900 ms; TE = 2.52 ms; 256×256 image resolution). The sessions included the functional scans described here as well as others that comprised a separate experiment.

Functional and localiser scans were divided up between sessions: Session 1 consisted of the functional scan for seeing the two movies (Echo Planar 2D imaging; PACE-MoCo; TR = 2000ms; TE = 30ms; 32 Sagittal Slices (near whole brain); 3mm3 isovoxel; 70×70 matrix; 391 Volumes), as well as a hMT+ localiser (as previous; 182 volumes) and a pSTS localiser (as previous; 245 volumes). Session 2 consisted of two unreported functional scans and the FFA localiser (as previous; TR = 3000ms; TE = 30ms; 152 volumes).

Functional Scan

The two displays (“Bills” and “Duplo”) were presented to participants in the scanner via NordicNeuroimagingLab presentation goggles (www.nordicneuroimgaginglab.com). The goggles have a viewing area of 800w by 600h pixels, covering 30×22.5 degrees of visual angle. Both displays were presented in one functional scan with 10 s of fixation (white cross on a uniform black background) separating the displays. A further 10 s of fixation was presented immediately before the onset of the first display and immediately after the offset of the second display. Order of displays within the run was pseudo-randomized across participants: 4 participants saw “Duplo” first in the sequence. Timing and display presentation were controlled via Presentation (www.neurobs.com). Participants viewed all displays passively.

Functional Localizers

hMT+

hMT+ is defined as a region in the lateral posterior cortex where brain activation is stronger to dynamic displays than to static displays (Tootell, Reppas et al, 1995) and is neither shape-sensitive (Albright et al, 1984; Zeki et al, 1974) nor contrast-sensitive (Beauchamp et al, 2002; Huk et al, 2002; Zacks et al, 2006). To identify hMT+, a block-design paradigm, in line with previous reports (Beauchamp et al 2002; Swallow et al 2003; Zacks et al, 2006), was used with stimuli that alternated between a high-contrast static black-and-white circular checkerboard pattern and a low-contrast dynamic circular dot display constructed from alternating concentric circles of black and white dots. The static checkerboard subtended 21.6 degrees of visual angle and alternated between two versions of the same image fifteen times over 30 s, with the black and white segments of the checkerboard switching location across images. The total span of the moving dot display was equivalent to that of the static display. Each dot within the moving display was 2×2 pixels. The dynamic display contracted and expanded every 1.5 s, with each block of dynamic display lasting 30 s. In total, 5 blocks each of dynamic and static images were shown, with periods of baseline before and after the initial and final blocks. Block order was fixed across participants and always alternated from static to dynamic. All images were presented on a uniform gray background. No task was given other than to attend to the displays and maintain fixation. Participants completed two runs of this 400 s hMT+ localizer. Prior to starting, subjects were asked to confirm that they could indeed perceive the low-contrast dynamic display.

Localization analysis was carried out at the single-subject level using BrainVoyager QX (2.1) in a standardized atlas space (Talairach and Tournoux, 1988). hMT+ was defined as voxels that: a) survived the contrast of dynamic greater than static (p-value < 0.005 uncorr.); b) had a continuous area greater than 108mm2 (3mm isotropic voxels) and c) were within a restricted anatomical region (Zacks et al, 2006).

pSTS

pSTS is defined as a region located in the lateral posterior cortex that responds more strongly to coherent than to scrambled biological motion. A block-design paradigm was used to localize pSTS, alternating blocks of coherent and scrambled biological motion with periods of fixation (Beauchamp, et al, 2002; Grossman and Blake, 2002; Grossman et al., 2000; Zacks et al, 2006). Coherent biological motion displays (black dots on a uniform gray background) were created using 12 marker points on virtual actors as they performed actions such as walking and jumping; scrambled biological motion displays were created by perturbing the initial point positions. Images when viewed subtended a visual angle of 9.45w by 15h degrees. During the fixation intervals participants viewed a black cross on a uniform gray background. Each motion stimulus block lasted 16 s, consisting of eight pairs of a 1 s display followed by a 1 s blank screen, with 10 repetitions per condition. Block order within the runs was fixed: run 1 presented coherent, scrambled, baseline; run 2 the reverse order. Blocks were separated by 2 s of blank screen, with 10 s of fixation before the first block commenced. A fixation cross was displayed centrally throughout. No task was given other than to attend to the displays and maintain fixation. Participants completed two runs of this 490 s pSTS localizer.

pSTS was defined as voxels that: a) survived the contrast of coherent greater than scrambled motion (p-value < 0.005 uncorr.); b) had a continuous area greater than 108mm2 (3mm isotropic voxels) and c) were within a restricted anatomical region (Zacks et al, 2006). Given the proximity of pSTS and hMT+, any overlapping voxels were removed from both regions to achieve region-specific voxels.

FFA

A final block-design paradigm was used to localize cortical regions sensitive to face perception. Participants viewed alternating blocks of faces, houses, and noise patterns; the noise patterns were constructed from the two other conditions (Vizioli, Smith, Muckli & Caldara, 2010). All images were shown in uniform grey on a white background and measured 11.25 degrees of visual angle; faces were cropped using an elliptical annulus to remove the neck, ears and hairline from the images.

Blocks of the three categories lasted 18 s and were made up of 20 image presentations lasting 750 ms each, separated by 250 ms of blank white screen. Five blocks of each category were shown. Twelve seconds of a fixation cross on a uniform background commenced each run and separated each condition block. Participants completed 2 runs of the FFA localizer, each lasting 456 s, with a fixed order: run 1) faces, noise, then houses; run 2) the reverse order. No task was given other than to attend to the displays and maintain fixation.

FFA was defined as voxels that a) survived the contrast of faces greater than houses (p-value < 0.005 uncorr.); b) had a continuous area greater than 108mm2 (3mm isotropic voxels) and c) were located around the mid-fusiform gyrus (Kanwisher, McDermott, Chun, 1997; McCarthy, Puce, Gore, Allison, 1997).

A summary of the average localized regions, from the three functional localisers, across all 12 participants, in both hemispheres, can be seen in Table 1, and a schematic depiction of all localiser stimuli and regions of activation in Figure 2.

Table 1:

Summary positional co-ordinates and cluster size of group activation from the three functional localizer scans. pSTS – posterior Superior Temporal Sulcus; hMT+ - human motion complex; FFA – Fusiform Face Area; n - number of participants who showed activation at given contrast. Standard Deviations in parenthesis.

Localiser Summary — Avg. Talairach co-ordinates and voxels in ROI (SD in parentheses)

Right Hemisphere
  pSTS (t485): n = 11/12; x = 43.4 (6.1); y = −51.7 (6.8); z = 11.4 (7.5); voxels = 524 (261.1)
  hMT+ (t360): n = 11/12; x = 41.5 (4.8); y = −65.6 (4.5); z = 4.8 (5.7); voxels = 908 (490.1)
  FFA (t295): n = 8/12; x = 34.5 (2.8); y = −46.8 (10.5); z = −12.0 (8.1); voxels = 406 (222)

Left Hemisphere
  pSTS (t485): n = 9/12; x = −48.2 (6.9); y = −58 (10.1); z = 13 (6.9); voxels = 281 (146.3)
  hMT+ (t360): n = 10/12; x = −41.1 (5.3); y = −69.4 (4.7); z = 5.8 (6.8); voxels = 811 (376.4)
  FFA (t295): n = 7/12; x = −36.4 (3.6); y = −52 (13.1); z = −12.3 (3.8); voxels = 358 (263.1)

Co-ordinates in Tal88 space at p < 0.0005; k > 5; voxels = 3 mm³.
Figure 2.


Frame from each condition of the 3 functional localisers, with a schematic diagram of the paradigm and a diagram of the location of mean activation on a generic normalized brain atlas. Tal88 co-ordinates are as in Table 1.

fMRI Data Analysis

Two separate analyses were performed, characterizing the relations between movement features and brain activity.

ROI

All functional and anatomical images were analysed using BrainVoyager QX 2.1 (Brain Innovations, Maastricht, Netherlands). Functional images were initially pre-processed via slice scan-time correction (cubic-spline interpolation), and temporal high-pass filtering to remove low frequency non-linear drifts. An additional 3D motion correction (trilinear interpolation) was used to remove head motion: translation correction never exceeded 3mm. All images were transformed into Talairach stereotaxic space (Talairach and Tournoux, 1988) by initially co-aligning all functional images to the first volume of the functional run closest to the relevant anatomical. The two session anatomical scans were then co-aligned via intersession alignment methods within BrainVoyager, resulting in all functional images being registered to the stereotaxic space.

For each ROI, in all participants, the mean time-course of the region during each display was extracted and a linear model predicting brain activity from four movement variables was fitted. Four variables were selected to minimize multi-collinearity and degrees of freedom while retaining as much as possible of the variance in the movement signals. The three-dimensional positional co-ordinates of the actor were tracked using magnetic sensors attached to the actor’s hands and head. Based on stepwise regression models from previous research informing the most relevant parameters for anthropomorphized motion analysis (Zacks, 2004; Zacks et al, 2006; Zacks et al, 2009), the selected variables were:

  1. distance between each pair of body parts (i.e. left hand, right hand and head),

  2. speed of each body part (the norm of the first derivative of position),

  3. relative speed of each pair of body parts (the first derivative of distance), and

  4. relative acceleration of each pair of body parts (the second derivative of distance).

For example, if the actor were to have his left hand resting on his head and then reach for a pen on the table, the movement variables would change as follows. The distance between the hand and head would increase. The speed of the left hand would increase from zero, and then decrease as the hand approached the pen. The relative speed between the left hand and head would likewise increase from zero, then decrease. The relative acceleration of the left hand and head would initially be zero, then would be positive as the hand sped up, then turn negative as the hand slowed, approaching the pen.
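Under the assumption that the tracker yields a three-dimensional position time series per body part, the four feature types could be computed as in the following sketch (array layout and function names are our own illustrations, not the original analysis code):

```python
import numpy as np

def movement_features(pos, fps=29.97):
    """Compute the twelve movement features from tracked 3-D positions.

    pos: dict mapping body-part name -> (T, 3) array of positions
         (a hypothetical input format; the paper does not specify one).
    Returns a dict of twelve 1-D feature time series of length T.
    """
    parts = ["left_hand", "right_hand", "head"]
    pairs = [("left_hand", "right_hand"),
             ("left_hand", "head"),
             ("right_hand", "head")]
    dt = 1.0 / fps
    feats = {}
    # Speed of each body part: norm of the first derivative of position.
    for p in parts:
        vel = np.gradient(pos[p], dt, axis=0)
        feats[f"speed_{p}"] = np.linalg.norm(vel, axis=1)
    for a, b in pairs:
        # Distance between each pair of body parts.
        d = np.linalg.norm(pos[a] - pos[b], axis=1)
        feats[f"dist_{a}_{b}"] = d
        # Relative speed: first derivative of distance.
        feats[f"relspeed_{a}_{b}"] = np.gradient(d, dt)
        # Relative acceleration: second derivative of distance.
        feats[f"relacc_{a}_{b}"] = np.gradient(np.gradient(d, dt), dt)
    return feats
```

In the worked example above, the left hand moving away from the head would raise `dist_left_hand_head` and `speed_left_hand`, with `relacc_left_hand_head` positive while the hand accelerates and negative as it slows.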

The left hand, right hand, and head each had a speed feature, and each of the three pairs of body parts had distance, relative speed, and relative acceleration features; thus, there were twelve movement features in total. Each movement feature was averaged over the duration of each image acquisition (video frame rate = 29.97 fps) and convolved with a model hemodynamic response function (Boynton et al., 1996) to produce a set of predictor variables. These variables were entered into linear models together with variables coding for effects of no interest: the presence of the movies and the linear trend across the scan. The dependent measure was the preprocessed blood oxygen level dependent (BOLD) signal for each participant for each ROI. Mean regression weights per participant for distance, speed, relative speed, and relative acceleration were calculated by averaging the regression weights across body parts (for speed) or pairs of body parts (for distance, relative speed and relative acceleration). To provide significance tests with participant as the random effect, the averaged regression weights from each participant’s linear models were subjected to one-sample t-tests. Only participants who showed, at minimum, unilateral localization of the ROIs were included in this analysis. Furthermore, in order to compare relationships between motion parameters within and across the two main experimental ROIs (i.e. pSTS and hMT+), a two-way mixed-design ANOVA (between: hMT+, pSTS; within: speed, distance, relative speed, relative acceleration) was conducted using participants who showed at least unilateral localization of both ROIs. Finally, inter-hemispheric differences were not considered due to the reduced power of this test arising from unilateral localization in a number of subjects (see Table 1). All tests were corrected for multiple comparisons using Bonferroni correction.
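The convolve-and-regress step can be sketched as follows. This is a simplified illustration: the gamma-variate HRF parameters and the helper names are assumptions, not the paper's exact implementation.

```python
import math
import numpy as np

def hrf(tr=2.0, duration=24.0, delta=2.0, tau=1.2, n=3):
    """Gamma-variate hemodynamic response function in the spirit of
    Boynton et al. (1996); parameter values here are illustrative."""
    t = np.arange(0.0, duration, tr)
    s = np.clip(t - delta, 0.0, None) / tau
    return (s ** (n - 1)) * np.exp(-s) / (tau * math.factorial(n - 1))

def fit_roi_model(bold, features, tr=2.0):
    """Regress an ROI time course on HRF-convolved movement features.

    bold: (T,) preprocessed BOLD signal for one ROI.
    features: (T, K) per-acquisition movement features.
    Returns the K feature weights; an intercept and a linear trend are
    included as effects of no interest, as in the paper.
    """
    h = hrf(tr)
    # Convolve each feature with the HRF, truncated to scan length.
    X = np.column_stack([np.convolve(f, h)[:len(bold)] for f in features.T])
    nuisance = np.column_stack([np.ones(len(bold)),
                                np.linspace(-1.0, 1.0, len(bold))])
    design = np.hstack([X, nuisance])
    beta, *_ = np.linalg.lstsq(design, bold, rcond=None)
    return beta[:features.shape[1]]
```

The returned weights would then be averaged across body parts (or pairs) per feature type before the one-sample t-tests described above.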

Whole Brain

Brain activity was also analyzed at the single voxel level across the whole brain. The raw BOLD data were preprocessed to remove artifacts due to slice timing, corrected for participant motion, and mapped into a standard atlas space (see Speer et al., 2007 and Yarkoni et al., 2008 for the details of the procedure). Linear models were fitted and t-tests were conducted as for the ROI analysis. The results were corrected for multiple comparisons by converting the t statistics to z statistics, selecting a threshold of z = 3.5 and a cluster size of 9 to control the overall map-wise false positive rate at p= .05 based on the Monte Carlo simulations of McAvoy et al. (2001).
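The cluster-extent criterion (z > 3.5 with a minimum cluster size of 9 voxels) can be illustrated with a simple face-connectivity labelling, as below. This sketch shows only the thresholding logic; it does not reproduce the Monte Carlo simulations of McAvoy et al. (2001) that justified those particular values.

```python
import numpy as np
from scipy import ndimage

def cluster_threshold(zmap, z_thresh=3.5, min_cluster=9):
    """Keep suprathreshold voxels only if they belong to a face-connected
    cluster of at least `min_cluster` voxels.

    zmap: 3-D array of z statistics. Returns a boolean mask.
    """
    mask = zmap > z_thresh
    # Label connected components (default structure = face connectivity).
    labels, nlab = ndimage.label(mask)
    keep = np.zeros_like(mask)
    for lab in range(1, nlab + 1):
        cluster = labels == lab
        if cluster.sum() >= min_cluster:
            keep |= cluster
    return keep
```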

Results

Region of Interest Analysis

The localized regions of interest are consistent with previous research on these areas: hMT+ (Beauchamp et al, 2002; Zacks et al, 2006); pSTS (Grossman & Blake, 2002; Zacks et al, 2006); FFA (Grill-Spector, Knouf & Kanwisher, 2004). The strengths of the relations between movement features and brain activity in hMT+, pSTS, and FFA are depicted in Figure 3. To contain the number of multiple comparisons, and due to unilateral localization in a number of participants, statistical tests were conducted after averaging the effects across the two hemispheres. In hMT+, the distance between body parts, speed of body parts, and relative speed were all significantly related to brain activity (smallest t(10) = 3.58, corrected p = .02). Relative acceleration was not (t(10) = −.24, n.s.). In pSTS, speed was significantly related to activity (t(10) = 2.98, corrected p = .04). The relation for distance was significant before correcting for multiple comparisons, but did not survive the correction (t(10) = 2.36, corrected p = .12). Neither relative speed nor relative acceleration was significantly related (largest t(10) = 1.80, n.s.). In FFA, no movement features were significantly related to brain activity (largest t(9) = −2.02, n.s.).

Figure 3.


Strength of relations between movement features and brain activity in independently identified functional regions of interest. Units are percent signal change per unit of distance (pixel), speed (pixel/s), relative speed (pixel/s) and relative acceleration (pixel/s2). Error bars are standard errors.

To compare the strength of relationships of motion parameters across the two main experimental ROIs (i.e. pSTS and hMT+), a two-way mixed-design ANOVA (between: hMT+, pSTS; within: speed, distance, relative speed, relative acceleration) was conducted using all subjects who activated both regions (n/ROI = 10). After Bonferroni correction for multiple comparisons, neither the interaction between ROI and kinematics (F(3,54) = 0.3, n.s.) nor the between-subjects effect of ROI (F(1,18) = 1.7, n.s.) was significant. A significant main effect of the within-subjects variable, kinematics, was found, F(3,54) = 11.24, p < 0.01: post-hoc comparisons revealed that mean regression weights for speed were significantly higher than those for distance, relative speed, and relative acceleration. Two similar follow-up ANOVAs comparing FFA to the two main experimental ROIs (n/ROI = 9) showed a similar pattern of results, except that this time the between-ROI comparison was significant: activation in hMT+ was greater than in FFA (F(1,16) = 8.0, p < 0.05), as was activation in pSTS (F(1,16) = 5.3, p < 0.05).

Finally, to assess the collinearity between the kinematics, we computed correlations between the averaged kinematic regressors, as a summary, after convolution with the HRF. Pearson correlations revealed a moderate positive correlation between speed and distance (r = 0.68, p < 0.01), and weak negative correlations between relative acceleration and distance (r = −0.18, p < 0.05) and between relative acceleration and speed (r = −0.16, p < 0.05). No other relationships were significant.
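A collinearity check of this kind amounts to pairwise Pearson correlations over the convolved regressors, e.g. (a sketch with hypothetical variable names standing in for the real time series):

```python
from itertools import combinations

import numpy as np
from scipy import stats

def regressor_correlations(regressors):
    """Pairwise Pearson correlations among HRF-convolved kinematic
    regressors.

    regressors: dict name -> 1-D time series.
    Returns {(name_a, name_b): (r, p)} for every unordered pair.
    """
    out = {}
    for a, b in combinations(sorted(regressors), 2):
        r, p = stats.pearsonr(regressors[a], regressors[b])
        out[(a, b)] = (r, p)
    return out
```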

Whole Brain Analysis

Using a similar analytical method to the ROI analysis, the whole-brain analysis revealed a number of regions in the posterior cortex whose neural activity was predicted by movement features, along with one region each in the frontal cortex and cerebellum. Areas are listed in Table 2 and the cortical regions are depicted in Figure 3. Distance between the head and hands of the actor was positively related to activity in a number of regions in the superior parietal cortex, another cluster at the juncture of the parietal, temporal, and occipital lobes, and small regions in the medial temporal lobes and the cerebellum. There were no regions whose activity was negatively related to distance between the head and hands of the actor. Speed of the same body parts was positively related to activity in a pair of temporoparietal regions situated proximally to hMT+, as well as in an early visual area in the left lingual gyrus (likely corresponding to V2/V3) and in the left cuneus. There were no regions whose activity was significantly negatively related to speed. The relative speed with which the body parts moved was positively related to activity in small clusters in the superior parietal cortex in the left hemisphere, and in the somatosensory cortex and premotor cortex in the right hemisphere. There were no regions whose activity was negatively related to the relative speed of the body parts. No significant clusters relating to changes in relative acceleration were found.

Table 2:

Neural regions from the whole-brain analysis that were significantly correlated with movement features: R – right hemisphere; L – left hemisphere; BA – Brodmann’s Area

X (peak) | Y (peak) | Z (peak) | Description | BA | Volume (cm3) | Z statistic at peak

Distance
26 | −27 | −27 | R. medial temporal | 28/36 | 0.73 | 4.91
−22 | −39 | −21 | L. medial temporal | 28/34 | 0.24 | 3.94
20 | −78 | −21 | R. cerebellum | | 0.41 | 4.10
−46 | −54 | −15 | L. fusiform gyrus | 20/37 | 1.08 | 4.54
38 | −51 | −12 | R. fusiform gyrus | 20/37 | 1.00 | 4.48
−22 | −54 | −15 | L. fusiform gyrus | 20/37 | 0.59 | 4.60
28 | −69 | −18 | R. fusiform gyrus | 19 | 0.30 | 4.05
46 | −69 | −3 | R. occipitotemporal junction | 19/37 | 2.02 | 4.88
32 | −84 | 6 | R. lateral occipital | 19 | 0.89 | 3.95
−40 | −81 | 12 | L. lateral occipital | 19 | 0.49 | 3.80
−26 | −81 | 15 | L. temporoparietal junction | 19/39 | 0.27 | 4.01
26 | −66 | 45 | R. inferior parietal lobule | 7 | 0.65 | 4.47
−22 | −57 | 48 | L. inferior/superior parietal lobule | 7 | 0.35 | 3.94
26 | −60 | 57 | R. superior parietal lobule | 7 | 1.05 | 4.33

Speed
−10 | −93 | −9 | L. lingual gyrus | 18 | 0.81 | 4.59
−40 | −69 | 6 | L. middle temporal gyrus (hMT+) | 37 | 0.62 | 4.13
−14 | −96 | 9 | L. cuneus | 18 | 0.54 | 4.16
44 | −63 | 12 | R. middle temporal gyrus (hMT+) | 37 | 0.68 | 3.97

Relative Speed
28 | −36 | 51 | R. somatosensory cortex | 4 | 0.32 | 4.98
−20 | −57 | 57 | L. superior parietal lobule | 7 | 0.89 | 4.48
32 | −12 | 57 | R. premotor cortex | 6 | 0.76 | 4.36

Discussion

From the brain activity of observers viewing extended-duration natural displays of human movement, a relationship was found between brain activation and the motion of the hands and head of an observed actor. This relationship was shown in both a focused ROI analysis and an unguided whole-brain analysis. The convergence of the two analytical methods highlights the association between limb movements and brain activity: both showed a positive relationship between the speed of the actor's hands and head and BOLD activity in the bilateral human motion complex (hMT+). In addition, the ROI analyses highlighted relationships between hMT+ and distance, between hMT+ and relative speed, and between pSTS and speed. Speed appeared to be the main predictor of brain activity in hMT+ and pSTS. However, speed and distance were highly correlated, and thus disentangling the full role of each predictor is difficult. Finally, hMT+ and pSTS showed no significant difference in how their activity was modulated by the motion properties, but both regions showed stronger relationships to the kinematic parameters than did FFA, which showed no relationship with any kinematic parameter. These results indicate that motion properties, primarily the speed of human motion, are processed in a related fashion within pSTS and hMT+, and that the FFA is not involved in biological motion processing.

Overall, the findings advance current thinking on the relationship between brain activity and kinematic properties in action understanding. Behaviorally, using the same displays as the present study, the prevalent kinematic parameters enabling the understanding of human actions are the speed and distance properties of an observed agent's limbs and head (Zacks et al., 2009). At the neural level, Zacks and colleagues (2006) showed that, during passive viewing of animated displays, BOLD changes in hMT+ and in pSTS were indicative of times in the displays that participants would later explicitly perceive as the end of one event and the beginning of the next. In turn, Schubotz et al. (2012), using natural displays and an online segmentation task, found that hMT+ was in general related to changes in human motion, but not to goals and intents; goal comprehension instead appears to be modulated by frontal memory networks. The current results advance this theory by showing that, when passively observing extended natural displays of a human actor, as opposed to elemental or animated displays, the speed and distance properties previously proposed as relevant for event comprehension correlate with changes in brain activation in the previously highlighted occipital-temporal regions (Schubotz et al., 2012; Zacks et al., 2006).

One caveat is that the present study does not include a direct behavioral test linking action understanding to kinematic parameters: instead, we link activation associated with passive viewing in the current study to behavioral measures obtained previously by Zacks and colleagues (Zacks et al., 2009) using the identical stimuli. This link is consistent with previous work in which brain activation in occipital-temporal regions obtained via passive viewing related to the role of kinematic parameters in event segmentation (Zacks et al., 2006). Future studies may consider direct tests between behavior and brain activation, giving consideration to activation arising from task demands (cf. Grosbras et al., 2012). Ultimately, this does not detract from the proposal that such bottom-up processing of motion parameters must combine with top-down action knowledge and memory (Schubotz et al., 2012) or statistical learning of actions (Baldwin et al., 2008) for us to comprehend the continuous activity around us.

The neural areas highlighted by the whole-brain analysis are wholly consistent with previous findings on biological motion perception. A relationship between speed of movement and neural activation was observed in both hMT+ and pSTS, and, in the whole-brain analysis, in bilateral occipital-temporal regions. Speed correlates were also observed in early visual cortex (V2/V3), with acceleration correlates proximal to hMT+. Distance, akin to posture, was related to brain activity mostly in temporal and occipital regions, with some activation in superior parietal cortex and the cerebellum. These results are consistent with a distinction between the processing of form and motion (Giese & Poggio, 2003; Jastorff & Orban, 2009; Grosbras et al., 2012; Lange & Lappe, 2006), although cross-over of regions is to be expected given the correlation between the speed and distance parameters. This correlation may mask the veridical relationship of each parameter to brain activation; disentangling them, for example through increased power, would clarify their respective roles in the processing of form and motion. Finally, the relationship between distance and brain activity in parietal regions likely reflects activity in body-part-centered neurons encoding hand and head position (Calvo-Merino, Glaser, Grezes, Passingham, & Haggard, 2005; Caspers, Zilles, Laird, & Eickhoff, 2010; Colby, 1998; Graziano & Gross, 1998; Wagner, Dal Cin, Sargent, Kelley, & Heatherton, 2011; Willems & Hagoort, 2009).

Activations in right premotor and inferior parietal regions are consistent with the existence of a fronto-parietal mirror-neuron circuit (Rizzolatti & Sinigaglia, 2010; Fogassi, Ferrari, Gesierich, Rozzi, Chersi & Rizzolatti, 2005; Gallese, Fadiga, Fogassi & Rizzolatti, 1996). Similarly, activation in right somatosensory cortex is consistent with reports of mirror function in this region (Keysers, Kaas & Gazzola, 2010). These right-lateralised activations may be driven by the actor being left-handed and by the prominence of this hand in the video displays: the influence of the left hand in these displays on event perception has been shown previously (Zacks et al., 2009).

Finally, given the lack of relationship between motion parameters and activation in the FFA, the activation witnessed in the fusiform gyrus/lateral occipital cortex (LOC) likely relates in part to general face perception of the actor (Kanwisher et al., 1997; Sergent et al., 1992) or to object recognition (Grill-Spector, Kourtzi & Kanwisher, 2001) as the viewer identifies what the actor is manipulating (Grosbras et al., 2012). For example, this may occur in the 'Duplo' scenario, where recognition of the object changes as the video progresses.

In conclusion, the ROI analysis showed that fMRI activity in areas associated with biological motion perception is indeed modulated by specific kinematic properties of actors in continuous natural displays. The most prevalent parameter appears to be the speed of the observed actor's moving hands and head, though further distinction of the roles of speed and distance is required. A secondary whole-brain analysis supported these findings in hMT+ and showed involvement of neural areas expected from previous findings on human motion perception. This study moves beyond mere extrapolation of a relationship between kinematics and neural activation from constrained or animated displays, to firmly establishing that this relationship exists when viewing actual human motion over long durations. The monitoring of motion parameters, such as the speed and distance of limbs, by areas including hMT+ and pSTS is proposed as a key bottom-up process in how the brain parses the ongoing streams of activity that make up our environment.

Figure 4.

Regions that were significantly correlated with movement features in the whole-brain analysis. Red: Distance; Green: Speed; Blue: Relative speed.

References

  1. Albright TD (1984). Direction and orientation selectivity of neurons in visual area MT of the macaque. Journal of Neurophysiology, 52, 1106–1130.
  2. Amodio DM, & Frith CD (2006). Meeting of minds: The medial frontal cortex and social cognition. Nature Reviews Neuroscience, 7(4), 268–277.
  3. Baldwin D, Andersson A, Saffran J, & Meyer M (2008). Segmenting dynamic human action via statistical structure. Cognition, 106, 1382–1407.
  4. Baldwin DA, Baird JA, Saylor MM, & Clark MA (2001). Infants parse dynamic action. Child Development, 72(3), 708–717.
  5. Beauchamp MS, Lee KE, Haxby JV, & Martin A (2002). Parallel visual motion processing streams for manipulable objects and human movements. Neuron, 34, 149–159.
  6. Blythe PW, Todd PM, & Miller GF (1999). How motion reveals intention: Categorizing social interactions. In Gigerenzer G et al. (Eds.), Simple Heuristics That Make Us Smart (pp. 257–285). Oxford University Press.
  7. Boynton GM, Engel SA, Glover GH, & Heeger DJ (1996). Linear systems analysis of functional magnetic resonance imaging in human V1. Journal of Neuroscience, 16, 4207–4221.
  8. Calvo-Merino B, Glaser DE, Grezes J, Passingham RE, & Haggard P (2005). Action observation and acquired motor skills: An fMRI study with expert dancers. Cerebral Cortex, 15(8), 1243–1249.
  9. Caspers S, Zilles K, Laird AR, & Eickhoff SB (2010). ALE meta-analysis of action observation and imitation in the human brain. NeuroImage, 50(3), 1148–1167.
  10. Castelli F, Happé F, Frith U, & Frith C (2000). Movement and mind: A functional imaging study of perception and interpretation of complex intentional movement patterns. NeuroImage, 12, 314–325.
  11. Colby CL (1998). Action-oriented spatial reference frames in cortex. Neuron, 20(1), 15–24.
  12. Dayan E, Casile A, Levit-Binnun N, Giese MA, Hendler T, & Flash T (2007). Neural representations of kinematic laws of motion: Evidence for action-perception coupling. Proceedings of the National Academy of Sciences USA, 104(51), 20582–20587.
  13. Downing PE, Jiang YH, Shuman M, & Kanwisher N (2001). A cortical area selective for visual processing of the human body. Science, 293(5539), 2470–2473.
  14. Downing PE, & Peelen MV (2011). The role of occipitotemporal body-selective regions in person perception. Cognitive Neuroscience, 2(3–4), 186–203.
  15. Ferri S, Kolster H, Jastorff J, & Orban GA (2013). The overlap of the EBA and the MT/V5 cluster. NeuroImage, 66, 412–425.
  16. Fogassi L, Ferrari PF, Gesierich B, Rozzi S, Chersi F, & Rizzolatti G (2005). Parietal lobe: From action organization to intention understanding. Science, 308(5722), 662–667.
  17. Frith CD, & Frith U (1999). Cognitive psychology - Interacting minds - a biological basis. Science, 286(5445), 1692–1695.
  18. Frith U, & Frith C (2010). The social brain: Allowing humans to boldly go where no other species has been. Philosophical Transactions of the Royal Society B, 365(1537), 165–176.
  19. Gallese V, Fadiga L, Fogassi L, & Rizzolatti G (1996). Action recognition in the premotor cortex. Brain, 119, 593–609.
  20. Gazzola V, Rizzolatti G, Wicker B, & Keysers C (2007). The anthropomorphic brain: The mirror neuron system responds to human and robotic actions. NeuroImage, 35, 1674–1684.
  21. Giese MA, & Poggio T (2003). Neural mechanisms for the recognition of biological movements. Nature Reviews Neuroscience, 4, 179–192.
  22. Gilaie-Dotan S, Bentin S, Harel M, Rees G, & Saygin AP (2011). Normal form from biological motion despite impaired ventral stream function. Neuropsychologia, 49, 1033–1043.
  23. Graziano MSA, & Gross CG (1998). Spatial maps for the control of movement. Current Opinion in Neurobiology, 8(2), 195–201.
  24. Grezes J, Fonlupt P, Bertenthal B, Delon-Martin C, Segebarth C, & Decety J (2001). Does perception of biological motion rely on specific brain regions? NeuroImage, 13, 775–785.
  25. Grosbras MH, Beaton S, & Eickhoff SB (2012). Brain regions involved in human movement perception: A quantitative voxel-based meta-analysis. Human Brain Mapping, 33(2), 431–454.
  26. Grossman E, Donnelly M, Price R, Pickens D, Morgan V, Neighbor G, & Blake R (2000). Brain areas involved in perception of biological motion. Journal of Cognitive Neuroscience, 12, 711–720.
  27. Grossman ED, & Blake R (2002). Brain areas active during visual perception of biological motion. Neuron, 35(6), 1167–1175.
  28. Grill-Spector K, Knouf N, & Kanwisher N (2004). The fusiform face area subserves face perception, not generic within-category identification. Nature Neuroscience, 7(5), 555–562.
  29. Grill-Spector K, Kourtzi Z, & Kanwisher N (2001). The lateral occipital complex and its role in object recognition. Vision Research, 41, 1409–1422.
  30. Hard BM, Recchia G, & Tversky B (2011). The shape of action. Journal of Experimental Psychology: General, 140, 586–604.
  31. Heider F, & Simmel M (1944). An experimental study of apparent behavior. American Journal of Psychology, 57, 243–259.
  32. Huk AC, & Heeger DJ (2002). Pattern-motion responses in human visual cortex. Nature Neuroscience, 5(1), 72–75.
  33. Jastorff J, & Orban GA (2009). Human functional magnetic resonance imaging reveals separation and integration of shape and motion cues in biological motion processing. Journal of Neuroscience, 29, 7315–7329.
  34. Johnson KL, Mckay LS, & Pollick FE (2011). He throws like a girl (but only when he's sad): Emotion affects sex-decoding of biological motion displays. Cognition, 119(2), 265–280.
  35. Jola C, McAleer P, Grosbras M, Love SA, Morison G, & Pollick FE (2013). Uni- and multisensory brain areas are synchronized across spectators when watching unedited dance recordings. i-Perception, 4, 1–20.
  36. Kanwisher N, McDermott J, & Chun MM (1997). The fusiform face area: A module in human extrastriate cortex specialized for face perception. Journal of Neuroscience, 17(11), 4302–4311.
  37. Keysers C, Kaas JH, & Gazzola V (2010). Somatosensation in social perception. Nature Reviews Neuroscience, 11, 417–428.
  38. Kozlowski LT, & Cutting JE (1977). Recognizing the sex of a walker from a dynamic point-light display. Perception & Psychophysics, 21(6), 575–580.
  39. Lange J, & Lappe M (2006). A model of biological motion perception from configural form cues. Journal of Neuroscience, 26(11), 2894–2906.
  40. Mather G, & Murdoch L (1994). Gender discrimination in biological motion displays based on dynamic cues. Proceedings of the Royal Society of London Series B: Biological Sciences, 258(1353), 273–279.
  41. McCarthy G, Puce A, Gore J, & Allison T (1997). Face-specific processing in the human fusiform gyrus. Journal of Cognitive Neuroscience, 9, 605–610.
  42. McAvoy M, Ollinger JM, & Buckner RL (2001). Cluster size thresholds for assessment of significant activation in fMRI. NeuroImage, 13, S198.
  43. Newtson D (1976). Foundations of attribution: The perception of ongoing behavior. In Harvey JH, Ickes WJ, & Kidd RF (Eds.), New directions in attribution research (pp. 223–248). Hillsdale, NJ: Lawrence Erlbaum Associates.
  44. Peelen MV, & Downing PE (2005). Selectivity for the human body in the fusiform gyrus. Journal of Neurophysiology, 93(1), 603–608.
  45. Pollick FE, Kay JW, Heim K, & Stringer R (2005). Gender recognition from point-light walkers. Journal of Experimental Psychology: Human Perception and Performance, 31(6), 1247–1265.
  46. Pollick FE, Paterson HM, Bruderlin A, & Sanford AJ (2001). Perceiving affect from arm movement. Cognition, 82(2), B51–B61.
  47. Rizzolatti G, & Sinigaglia C (2010). The functional role of the parieto-frontal mirror circuit: Interpretations and misinterpretations. Nature Reviews Neuroscience, 11(4), 264–274.
  48. Schubotz RI, Korb FM, Schiffer A, Stadler W, & von Cramon DY (2012). The fraction of an action is more than a movement: Neural signatures of event segmentation in fMRI. NeuroImage, 61(4), 1195–1205.
  49. Sergent J, Ohta S, & MacDonald B (1992). Functional neuroanatomy of face and object processing: A positron emission tomography study. Brain, 115, 15–36.
  50. Speer NK, Reynolds JR, & Zacks JM (2007). Human brain activity time-locked to narrative event boundaries. Psychological Science, 18, 449–455.
  51. Swallow KM, Braver TS, Snyder AZ, Speer NK, & Zacks JM (2003). Reliability of functional localization using fMRI. NeuroImage, 20, 1561–1577.
  52. Talairach J, & Tournoux P (1988). Co-planar Stereotaxic Atlas of the Human Brain: 3-Dimensional Proportional System - an Approach to Cerebral Imaging. New York: Thieme Medical Publishers.
  53. Tootell RB, Mendola JD, Hadjikhani NK, Ledden PJ, Liu AK, Reppas JB, Sereno MI, & Dale AM (1997). Functional analysis of V3A and related areas in human visual cortex. Journal of Neuroscience, 17, 7060–7078.
  54. Troje NF (2002). Decomposing biological motion: A framework for analysis and synthesis of human gait patterns. Journal of Vision, 2(5), 371–387.
  55. Van Overwalle F, & Baetens K (2009). Understanding others' actions and goals by mirror and mentalizing systems: A meta-analysis. NeuroImage, 48(3), 564–584.
  56. Vizioli L, Smith F, Muckli L, & Caldara R (2010). Face encoding representations are shaped by race. HBM 2010.
  57. Wagner DD, Dal Cin S, Sargent JD, Kelley WM, & Heatherton TF (2011). Spontaneous action representation in smokers when watching movie characters smoke. Journal of Neuroscience, 31(3), 894–898.
  58. Weiner KS, & Grill-Spector K (2011). Not one extrastriate body area: Using anatomical landmarks, hMT+, and visual field maps to parcellate limb-selective activations in human lateral occipitotemporal cortex. NeuroImage, 56, 2183–2199.
  59. Willems RM, & Hagoort P (2009). Hand preference influences neural correlates of action observation. Brain Research, 1269, 90–104.
  60. Yarkoni T, Speer N, Balota D, McAvoy M, & Zacks J (2008). Pictures of a thousand words: Investigating the neural mechanisms of reading with extremely rapid event-related fMRI. NeuroImage, 42, 973–987.
  61. Zacks JM (2004). Using movement and intentions to understand simple events. Cognitive Science, 28(6), 979–1008.
  62. Zacks JM, Kumar S, Abrams RA, & Mehta R (2009). Using movement and intentions to understand human activity. Cognition, 112(2), 201–216.
  63. Zacks JM, Swallow KM, Vettel JM, & McAvoy MP (2006). Visual motion and the neural correlates of event perception. Brain Research, 1076(1), 150–162.
  64. Zacks JM, Tversky B, & Iyer G (2001). Perceiving, remembering, and communicating structure in events. Journal of Experimental Psychology: General, 130, 29–58.
  65. Zeki SM (1974). Functional organization of a visual area in the posterior bank of the superior temporal sulcus of the rhesus monkey. Journal of Physiology, 236, 549–573.