Hum Brain Mapp. 2013 Aug 24;35(5):2178–2190. doi: 10.1002/hbm.22319

Action observers implicitly expect actors to act goal‐coherently, even if they do not: An fMRI study

Mari Hrkać 1,2, Moritz F Wurm 2,3, Ricarda I Schubotz 1,2
PMCID: PMC6869124  PMID: 23983202

Abstract

Actions observed in everyday life normally consist of one person performing sequences of goal‐directed actions. The present fMRI study tested the hypotheses that observers are influenced by the actor's identity, even when this information is task‐irrelevant, and that this information shapes their expectations about subsequent actions of the same actor. Participants watched short video clips of action steps that either pertained to a common action with an overarching goal or not, and that were performed either by one actor or by varying actors (2 × 2 design). Independent of goal coherence, actor coherence elicited activation in dorsolateral and ventromedial frontal cortex, together pointing to a spontaneous attempt to integrate all actions performed by one actor. Interestingly, watching an actor perform unrelated actions elicited additional activation in left inferior frontal gyrus, suggesting a search in semantic memory in an attempt to construct an overarching goal that can reconcile the disparate action steps with a coherent intention. Post‐experimental surveys indicate that these processes occur mostly unconsciously. Findings strongly suggest a spontaneous expectation bias toward actor‐related episodes in action observers, and hence point to the immense impact of actor information on action observation. Hum Brain Mapp 35:2178–2190, 2014. © 2013 Wiley Periodicals, Inc.

Keywords: functional MRI, action recognition, person perception, episodic knowledge, ventrolateral prefrontal cortex


Abbreviations

FFA: fusiform face area

vmPFC: ventromedial prefrontal cortex

INTRODUCTION

When we witness an action, we usually see one person acting step by step in a goal‐directed fashion. We soon recognize her overarching goal from a stream of subgoals [Keele et al., 1990; Long and Golding, 1993; Hamilton and Grafton, 2006; Botvinick, 2008]. For instance, preparing breakfast might be recognized as the overarching goal of taking a bun, cutting it, buttering it, putting cheese on it, and so on. As observers, we are able to deal with a remarkable amount of variance or noise that comes along with long signals like this. For example, the stream of action may be delayed or even interrupted by other actions, for example, answering the phone, and parts of it may be performed by another person, for example, when someone helps prepare the meal. Still, we are quite able to recognize the inherent goal‐driven relationship that binds the single acts together. In the present fMRI study, we investigated whether actor information can act as a cue for coherence between separate action steps.

Coherence of actions probably builds on more than one single mechanism, but it particularly relates to sequential event knowledge [Grafman, 2002; Knutson et al., 2004] or script knowledge [Schank and Abelson, 1977], a subtype of the semantic (long‐term) memory system [Kintsch, 1980; Funnell, 2001]. According to Schank and Abelson [1977], a script represents a specifically structured sequence of events in everyday situations. Importantly, a script can be recognized from parts of the sequence and missing events can be filled in; that is, scripts provide connectivity to otherwise single actions. There is experimental evidence that script knowledge is a powerful and spontaneous mechanism in action observation. Thus, the persistent tendency to accept action slips as valid actions (cf. action slip rating errors [Schubotz and von Cramon, 2004]) points to a strong bias toward goal‐based explanations for actions before abandoning the action as nonsense, even if this explanation comes at the cost of postulating fairly unlikely goals. For instance, when participants observed someone trying to put a coin in a piggy bank, but obviously in a wrong orientation and hence failing, they still often judged the action to be purposeful, supposing that the actor tried to widen the piggy bank's slot. Moreover, participants tend to infer overarching goals from actually unrelated action steps, even when these are performed by different actors [Wurm et al., 2011].

However, script knowledge may not be the only drive to construe relations between actions we encounter. A comparably strong motive may arise from the actor or actress of an action. Thus, when we encounter the same actor again after a short (sub‐minute) delay, we may tend (whether consciously or not) to relate his current action to others we saw him perform before. There is some experimental evidence for this assumption. For instance, observers seem to automatically track an actor's intentions [Frith and Frith, 2012]. Observers can infer traits from one single behavior [Todorov and Uleman, 2002] and use experience of past behavior to predict next actions [Frith and Frith, 2012]. Brothers [1990] suggested that a face leads to an automatic representation of the corresponding person and their intentions even before becoming conscious. Todorov et al. [2007] reported data from an fMRI study showing that knowledge about a person was accessed automatically when this person was seen a second time, even if not remembered explicitly.

Building on these observations, the present fMRI study tested the hypothesis that observers spontaneously take action steps to pertain to the same action script when they are performed by the same actor. Thus, we assumed that when participants repeatedly encounter the same actor performing single action steps, they spontaneously tend to integrate this behavior with previously seen action steps performed by the same actor. Importantly, we sought to exclude any relation between these action steps that would be implied by an overarching goal or by direct temporal succession. Accordingly, we used a 2 × 2 design, with sequences of actions that pertained to the same actor (condition actor‐coherent goal‐incoherent, AC), to the same overarching goal (condition actor‐incoherent goal‐coherent, GC), to both (condition actor‐coherent goal‐coherent = both coherent, BC), or to none of them (condition actor‐incoherent goal‐incoherent = none coherent, NC) (Table 1). Actor‐ or goal‐coherent action steps were not presented in direct temporal succession but interleaved with trials of other conditions (see Fig. 1 and Methods for details). Moreover, to make sure that we addressed spontaneous behavior, we used an implicit task that did not address coherence. Note that we did not request the willful detection of coherence, but simply tested whether brain activity reflects the power of actor information to provoke the spontaneous, even unconscious attempt to integrate separate action steps.

Table 1.

Experimental design

Actor coherence +, Goal coherence + (BC): one goal (five to seven actions) followed by one actor; 10 actors → 60 trials

Actor coherence +, Goal coherence − (AC): five to seven independent actions performed by one actor; 10 actors → 60 trials

Actor coherence −, Goal coherence + (GC): one goal (five to seven actions) followed by five to seven actors; 10 actors → 60 trials

Actor coherence −, Goal coherence − (NC): five to seven independent actions performed by five to seven actors; 10 actors → 60 trials

Figure 1.

Stimuli and trial structure. Video clips showed action steps that either pertained to an overarching goal or not. Actions were performed either by one or by varying actors resulting in four conditions BC (here depicted in green), GC (red), AC (yellow), NC (blue). Video trials were occasionally followed by question trials (here depicted in orange) that required participants to confirm or reject a verbal action description (e.g., salting a tomato) with respect to the preceding trial. Trial succession was arranged such that five to seven films of one condition added up to one episode. Episodes were partially overlapping with episodes of other conditions, that is, video clips of one episode were mostly not shown in direct succession. A representative video sequence is provided in Supporting Information.

We expected that brain areas involved in semantic integration would be particularly and increasingly challenged by actor‐coherent goal‐incoherent episodes (AC), as compared to all other factor level combinations. Thereby, we would show that coherence of an actor triggers the spontaneous attempt to integrate action steps into a common goal, even when they actually are not parts of a common goal. More specifically, we predicted activation to increase parametrically, i.e., with every additional action step of an episode, in the left inferior frontal gyrus (IFG) for goal‐incoherent but not for goal‐coherent episodes pertaining to the same actor. We chose a parametric approach, as coherence between action steps is dynamic: it either increases or actually becomes less probable from one action step to the next. The left IFG is suggested to contribute to semantic retrieval (BA 47) and selection (BA 45) processes in memory [Badre and Wagner, 2007]. Both should be increasingly involved when trying to find an overarching goal for a goal‐incoherent episode performed by one actor, since more and more possible goals of the action steps could be retrieved with every movie and selected to fit to the whole episode or script. In a recent study [Wurm and Schubotz, 2012], left IFG was found to be activated when participants observed actions in incompatible spatial contexts compared to compatible and neutral contexts. Thus, activity in left IFG increased for instance when observing a typical kitchen action at the office, or an office action at the kitchen. Hence, IFG seems to reflect the increased demands in semantic integration. Moreover, we expected the same effect in dorsolateral prefrontal cortex (dlPFC) due to increasing working memory load, i.e., increasing demands on maintaining the accumulating number of subgoals to generate a higher‐level goal [Owen, 1997; Rypma et al., 2002].

In addition, we explored the main effect of actor coherence. On the one hand, when actors are seen repeatedly over a short time, they should become more familiar to participants, even when they are not remembered explicitly (cf. [Todorov et al., 2007]). This familiarization could be dynamically reflected by a parametric signal change, particularly in face‐sensitive regions, e.g., the fusiform face area (FFA; [Kanwisher et al., 1997]) and the hippocampal formation [Trinkler et al., 2009]. On the other hand, coherence building has been associated with several areas of the medial wall including BA 9/10/11, posterior cingulate cortex, and precuneus, both in texts (for a meta‐analysis, see [Ferstl et al., 2008]) and in nonlinguistic paradigms [Werheid et al., 2003; van der Graaf et al., 2006; Wurm et al., 2011; Kühn and Schubotz, 2012]. This supported our hypothesis that these structures could be activated for AC compared to actor‐incoherent trials.

METHODS

Participants

Twenty‐two right‐handed, healthy and naïve volunteers participated in the study. Four participants were excluded from the analysis due to technical problems or falling asleep during the experiment, resulting in 18 participants (11 females, mean age = 25.89 years, range 22–30 years). Participants were informed about potential risks of magnetic resonance imaging and screened by a physician. All participants gave informed written consent to participate in this study. The study was performed according to the Declaration of Helsinki. Data were handled anonymously.

Stimuli

Participants were presented with video clips showing actions (action trials) and with written action descriptions referring to these actions (question trials). Each trial (6 s) started with a video clip or a question (3 s) followed by a fixation phase (3 s).

The video clips showed single action steps that were performed either by one or by varying actors (factor ACTOR COHERENCE). Action steps either pertained to an action sequence with an overarching goal or did not (factor GOAL COHERENCE). This 2 × 2 design resulted in four experimental conditions (BC, AC, GC, NC) (Table 1), each consisting of 60 action trials that were presented in a pseudo‐random trial design. The trials were arranged in sequences of six action steps on average (two sequences of five action steps, six sequences of six action steps, and two sequences of seven action steps). The number of action steps belonging to one sequence varied to prevent participants from building up expectations about the structure of episodes. These episodes comprised overarching goals in the conditions BC and GC but exposed independent subgoals in conditions AC and NC.

To provide an example, an episode in condition BC looked as follows (compare Fig. 1, green squares): an actor takes a bun, cuts it, butters it, cuts cheese and puts it on the bun. In condition GC (cf. Fig. 1, red squares), the first actor takes a coffee filter, next an actress puts it in a plastic filter, a second actress puts this on a cup, a third actress fills coffee powder in the filter, an actor pours hot water over it, and a fourth actress sugars the coffee. In the first movie of condition AC (cf. Fig. 1, yellow squares), an actress is salting a tomato. Next, she strings pearls. The third movie shows her making tea. Afterward, she staples some papers. In the last movie, she laces a shoe. In condition NC (cf. Fig. 1, blue squares) an actor turns a screw in a board, an actress applies tooth paste on a brush, a second actor uses a calculator, a third actor riffles, a second actress scrunches newspaper up and a third actress sharpens a pencil. A representative video sequence is provided in Supporting Information.

Task

Participants were instructed to watch the presented video clips attentively. They were told that after some of the video clips an action description would appear that either corresponded to the content of the preceding video clip or not, and that they were to indicate whether they accepted or rejected the description. The response was given on a two‐button response box, using the index finger to accept and the middle finger to reject the action description (e.g., salting a tomato). Half of the descriptions matched the object manipulation shown in the preceding trial; the other half did not. Reaction times (RTs) and error rates were analyzed to assess the behavioral performance.

Analyzed parameters and arrangement of video sequences

To analyze the effect of episode building, the position of a video within its sequence was used as the parameter POSITION IN SEQUENCE. That is, in each actor‐coherent episode (BC and AC), the second occurrence of the actor was assigned the value “2,” the third occurrence the value “3,” and so on until the end of this episode (max. 7). Correspondingly, in a goal‐coherent episode (BC and GC), the second action step was assigned the value “2,” the third action step the value “3,” and so on. In order to control for unspecific changes developing across several videos, these values were also assigned to videos belonging to episodes of incoherent goals and incoherent actors (NC).
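For illustration, the following minimal sketch shows how such position values could be assigned to an interleaved trial list. It is a hypothetical reconstruction for clarity, not the original analysis code; the variable and field names are assumptions.

```python
# Minimal sketch of the POSITION IN SEQUENCE coding described above.
# Assumption: each trial record carries the episode it belongs to; field
# names are illustrative, not taken from the original analysis scripts.

def assign_position_in_sequence(trials):
    """Annotate each trial with its ordinal position within its episode.

    `trials` is a list of dicts with the key 'episode_id', given in
    presentation order. The first trial of an episode receives position 1,
    the second 2, and so on (maximally 7 in the present study).
    """
    counters = {}
    for trial in trials:
        ep = trial["episode_id"]
        counters[ep] = counters.get(ep, 0) + 1
        trial["position_in_sequence"] = counters[ep]
    return trials

# Example: two interleaved episodes, as in the experiment.
example = [
    {"episode_id": "AC_03"},
    {"episode_id": "NC_07"},
    {"episode_id": "AC_03"},
    {"episode_id": "AC_03"},
]
assign_position_in_sequence(example)   # positions: 1, 1, 2, 3
```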

The sequences of each condition were partially overlapping, and the overlap was balanced across conditions to avoid confounds due to different working memory loads. On average, the video clips overlapped with video clips from 1.18 ± 0.04 (mean ± standard error) episodes of other conditions. In addition, the gaps between video clips of a sequence had a maximum length of four video clips from other conditions. The gap length was balanced across conditions. On average, 1.82 ± 0.09 trials (i.e., movies of other episodes or question trials) lay between two movies of one episode. The mean number of video clips shown for the completion of an episode was 12.9 ± 0.36, including the overlapping video clips of other episodes.

All in all, 40 actors performed different sets of actions in the experiment. Every actor was assigned to one condition and did not appear in any other. Overall, each actor occurred six times on average during the experiment. In conditions GC and NC, in which the actions in a sequence were performed by different actors, the actors occurred evenly distributed over the course of the experiment with a gap length of 23 trials on average, while in conditions BC and AC the gap was four trials at most (on average 1.82 ± 1.2).

In this study, only the effects of “position in sequence” for the actor‐coherent conditions (BC and AC) and the main effect of actor coherence will be reported, while further effects are reported in Wurm and colleagues (in prep.).

Post fMRI session survey

In a post fMRI session survey, participants were presented with a questionnaire to establish their ability to recognize the actors' faces. Participants were asked to guess how many different actors had occurred in the movies. In addition, they were presented with 80 pictures of faces, 40 of which were faces of the actors and 40 of which were new. Recognition performance was measured by the corrected discrimination index P(r), which is the difference between hit rate and false alarm rate [Snodgrass and Corwin, 1988]. For the face recognition task, the hit rate was defined as the number of correctly recognized actor faces relative to the total number of actor faces shown in the videos, and the false alarm rate as the number of unrelated faces falsely indicated as familiar relative to the total number of unrelated faces. Finally, participants were asked to rate the attractiveness of the actors on a four‐point Likert scale running from 1 “not attractive” to 4 “very attractive.” On a further four‐point Likert scale, participants rated how peculiar looking the actors were, from 1 “not peculiar” to 4 “very peculiar.” These ratings were conducted because both the attractiveness and the peculiarity of actors have the potential to draw participants' attention away from the actions. In case they were not evenly spread across conditions, we planned to include attractiveness and peculiarity as regressors of nuisance in our design matrix.
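The corrected discrimination index reduces to a simple difference of rates; the following sketch (illustrative function and variable names, not the authors' code) computes it exactly as defined above.

```python
# Corrected discrimination index P(r) = hit rate - false alarm rate
# (Snodgrass and Corwin, 1988). Names and example numbers are illustrative.

def discrimination_index(hits, n_old, false_alarms, n_new):
    """hits / n_old: actor faces correctly recognized out of those shown (here 40);
    false_alarms / n_new: new faces wrongly called familiar out of those shown (here 40)."""
    hit_rate = hits / n_old
    false_alarm_rate = false_alarms / n_new
    return hit_rate - false_alarm_rate

# Example with counts close to the group means reported in the Results
# (hit rate ~0.47, false alarm rate ~0.09):
print(discrimination_index(hits=19, n_old=40, false_alarms=4, n_new=40))  # ~0.38
```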

Analyses of variance (repeated measures ANOVA) were conducted to examine differences between the conditions. In all analyses, the Greenhouse‐Geisser epsilon was used to correct the degrees of freedom where the assumption of sphericity was violated.
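As a rough illustration of this analysis step, the sketch below sets up a 2 × 2 repeated‐measures ANOVA using statsmodels' AnovaRM. This is a generic stand‐in with hypothetical column names, not the software actually used; with only two levels per factor, the Greenhouse‐Geisser correction is in any case not needed for the main effects.

```python
# Generic sketch of a 2 x 2 repeated-measures ANOVA with within-subject
# factors ACTOR COHERENCE and GOAL COHERENCE, using statsmodels as a
# stand-in for the analysis software actually used. Column names are
# hypothetical assumptions.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

def rm_anova(df: pd.DataFrame, depvar: str = "rating"):
    """df: long-format table with one row per participant and condition,
    with columns 'subject', 'actor_coherence', 'goal_coherence', and `depvar`."""
    model = AnovaRM(df, depvar=depvar, subject="subject",
                    within=["actor_coherence", "goal_coherence"])
    return model.fit().anova_table  # F values, degrees of freedom, P values
```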

MRI Data Acquisition

Imaging was performed on a 3‐T scanner (Siemens Magnetom TRIO, Erlangen, Germany) equipped with a standard birdcage head coil. Participants lay supine on the scanner bed with their right index and middle fingers positioned on the response buttons of a response box. To prevent head, arm, and hand movements, form‐fitting cushions were used. Participants wore earplugs to attenuate scanner noise. Twenty‐eight axial slices (4‐mm thickness; 1‐mm spacing; 200‐mm field‐of‐view; 64 × 64 pixel matrix; in‐plane resolution of 3 × 3 mm²) covering the whole brain were acquired using a single‐shot gradient EPI sequence (2 s repetition time; 30 ms echo time; 90° flip angle; 116 kHz acquisition bandwidth) sensitive to blood oxygen level dependent (BOLD) contrast. There was one functional run including 941 volumes, resulting in 31.37 min recording time. After functional imaging, 28 anatomical T1‐weighted MDEFT images [Ugurbil et al., 1993; Norris, 2000] were acquired. In a different session, high‐resolution whole‐brain images were acquired from each participant to improve the localization of activation foci, using a T1‐weighted 3D‐segmented MDEFT sequence with 128 slices.

MRI Data Analysis

After motion correction using rigid‐body registration to the central volume, the fMRI data were processed using the software package LIPSIA [Lohmann et al., 2001]. A cubic‐spline interpolation was used to correct for the temporal offset between the slices acquired in one image. To remove low‐frequency signal changes and baseline drifts, a temporal high‐pass filter with a cutoff frequency of 1/115 Hz was used. Spatial smoothing with a Gaussian filter of 5.65 mm FWHM was applied. A rigid linear registration with six degrees of freedom (three rotational, three translational) was performed to align the functional data slices with a 3D stereotactic coordinate reference system. The rotational and translational parameters were obtained on the basis of the MDEFT and the EPI‐T1 slices to achieve an optimal match between these slices and the individual 3D reference dataset. The MDEFT volume dataset with 128 slices and 1‐mm slice thickness was standardized to the Talairach stereotactic space [Talairach and Tournoux, 1988]. The rotational and translational parameters were subsequently normalized by linear scaling to a standard size. The resulting parameters were then used to transform the functional slices using trilinear interpolation, so that the resulting functional slices were aligned with the stereotactic coordinate system, thus generating isotropic voxels with a spatial resolution of 3 × 3 × 3 mm³. The statistical evaluation was based on a least‐squares estimation using the general linear model for serially autocorrelated observations [Friston et al., 1994; Worsley and Friston, 1995]. The design matrix was generated with a gamma function, convolved with the hemodynamic response function. Brain activations were analyzed time‐locked to the onset of the movies, and the analyzed epoch comprised the full duration of the presented movies (3 s), the duration of the null events (6 s), and the RT in question trials (max. 3 s). In addition, the parameter POSITION IN SEQUENCE and the parameter PECULIARITY, as a regressor of nuisance, were included. For the main effect of actor coherence, the first trial of each episode was analyzed as belonging to condition NC, as coherence did not emerge before the second trial of an episode. In the parametric analyses, all trials were included, as we were interested in the change of activity during the unfolding of the whole episode. We here report the results of the parametric analysis for the condition AC and a conjunction analysis of the parametric effects of conditions AC and BC, as our hypotheses addressed the effects of actor coherence, especially when the goal was incoherent. To investigate the effect of actor coherence in general, we conducted a conjunction analysis of the main effects [(BC>GC) ∩ (AC>NC)].
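To make the parametric part of the model concrete, the following sketch shows one generic way to build such a regressor: onset deltas weighted by the (mean‐centred) POSITION IN SEQUENCE value and convolved with a gamma‐shaped hemodynamic response. It is a simplified illustration under assumed HRF parameters, not the LIPSIA implementation used in the study.

```python
# Generic sketch of a parametric regressor such as POSITION IN SEQUENCE:
# onset deltas scaled by mean-centred position values, convolved with a
# gamma-shaped HRF. HRF parameters and names are illustrative assumptions;
# in a full model this regressor would accompany an unmodulated onset
# regressor and the nuisance regressors described above.
import numpy as np
from scipy.stats import gamma

TR = 2.0          # repetition time in seconds (as in this study)
N_SCANS = 941     # number of volumes in the functional run

def gamma_hrf(tr=TR, duration=30.0, shape=6.0, scale=1.0):
    t = np.arange(0.0, duration, tr)
    h = gamma.pdf(t, a=shape, scale=scale)
    return h / h.sum()

def parametric_regressor(onsets_s, positions, tr=TR, n_scans=N_SCANS):
    """onsets_s: movie onsets in seconds; positions: POSITION IN SEQUENCE values."""
    x = np.zeros(n_scans)
    weights = np.asarray(positions, dtype=float)
    weights -= weights.mean()                     # mean-centre the modulator
    for onset, w in zip(onsets_s, weights):
        x[int(round(onset / tr))] += w
    return np.convolve(x, gamma_hrf(), mode="full")[:n_scans]
```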

A Gaussian kernel of dispersion of 4 s FWHM was applied to the model equation, including the observation data, the design matrix, and the error term, to account for the temporal autocorrelation [Worsley and Friston, 1995]. Contrast images, that is, beta value estimates of the raw‐score differences between specified conditions, were generated for each participant. As the individual functional datasets were aligned to the same stereotactic reference space, the single‐subject contrast images were entered into a second‐level random effects analysis for each of the contrasts.

For the group analyses, one‐sample t‐tests were used across the contrast images of all participants that indicated whether observed differences between conditions were significantly distinct from zero. The t values were then transformed into Z scores.

To correct for false‐positive results, in a first step, an initial voxel‐wise z threshold was set to z = 2.576 (P = 0.005). In a second step, the results were corrected for multiple comparisons using cluster‐size and cluster‐value thresholds obtained by Monte Carlo simulations at a significance level of P < 0.05; that is, the reported activations are significant at P < 0.05, corrected for multiple comparisons at the cluster level.
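The logic of such a Monte Carlo correction can be sketched generically: smoothed Gaussian noise volumes are thresholded at the voxel‐wise cutoff and the distribution of the largest resulting cluster determines the extent needed for a corrected P < 0.05. The code below is a simplified illustration with assumed volume and smoothing parameters (and only a cluster‐size criterion), not the LIPSIA routine used in the study.

```python
# Simplified Monte Carlo estimate of a cluster-extent threshold: simulate
# smoothed null volumes, threshold at the voxel-wise z-cutoff, and take the
# 95th percentile of the maximum cluster size. Shape, smoothing, and the
# restriction to a size criterion are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter, label

def cluster_extent_threshold(shape=(53, 63, 46), fwhm_vox=5.65 / 3.0,
                             z_thresh=2.576, n_sim=1000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    sigma = fwhm_vox / 2.3548                      # FWHM -> Gaussian sigma
    max_sizes = []
    for _ in range(n_sim):
        noise = gaussian_filter(rng.standard_normal(shape), sigma)
        noise /= noise.std()                       # re-standardize after smoothing
        labels, n_clusters = label(noise > z_thresh)
        sizes = np.bincount(labels.ravel())[1:] if n_clusters else np.array([0])
        max_sizes.append(sizes.max())
    return int(np.quantile(max_sizes, 1.0 - alpha))  # minimum cluster size in voxels
```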

Conjunctions were calculated by extracting the minimum Z value of the two input contrasts for each voxel [Nichols et al., 2005].
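In practice this amounts to a voxel‐wise minimum over the two z‐maps, as in the following minimal sketch (array names are hypothetical).

```python
# Conjunction as the voxel-wise minimum statistic (Nichols et al., 2005).
import numpy as np

def conjunction(z_map_a: np.ndarray, z_map_b: np.ndarray) -> np.ndarray:
    """Return the minimum of two z-maps; a voxel survives the conjunction
    only if it exceeds the threshold in both input contrasts."""
    return np.minimum(z_map_a, z_map_b)

# e.g., conj = conjunction(z_bc_gt_gc, z_ac_gt_nc); voxels with
# conj > 2.576 exceed the threshold in both contrasts.
```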

To test the hypothesis that IFG and dlPFC are specific for the condition AC, we used the following analysis: we first calculated independent parametric contrasts, one for each of the four factor level combinations. In a second step, we contrasted the parametric effect of AC episodes with each of the remaining parametric contrasts. Finally, we built a conjunction of these three contrasts. The 3D T1‐weighted whole‐brain scans were used to segment the left IFG and bilateral dlPFC separately. The areas (for dlPFC, superior and middle frontal gyrus) were delimited according to anatomical landmarks using the Talairach atlas [Talairach and Tournoux, 1988]. Small volume correction was performed, correcting the results for a restricted search volume using the segmentation. The volume was used to calculate the alpha level within the IFG and the dlPFC. To correct for false‐positive activation, a voxel‐wise z‐threshold was set to z = 2.33 (P = 0.01), with a minimum activation volume of 195 mm³ for IFG and 408 mm³ for dlPFC.

To test the two expectations on actor coherence effects, we performed two analyses: First, to tap dynamic changes due to the encounter of actor coherence, we calculated a conjunction of the two actor‐coherent parametric contrasts. Second, to test the main effect of actor coherence, we contrasted actor‐coherent with actor‐incoherent trials, collapsing across factor levels of goal coherence [(BC>GC) ∩ (AC>NC)].

RESULTS

Behavioral Results

Performance during the fMRI session was assessed by RTs of correctly answered trials and the rate of incorrectly answered trials (Table 2). Repeated measures ANOVAs were conducted for error rates and RT with the within subject factors actor coherence and goal coherence.

Table 2.

Means and standard error of reaction times, and error rates for conditions BC, GC, AC, and NC

Condition RT (ms) SE (ms) Error rate (%) SE (%)
BC 1,052 92 5.1 1.5
GC 1,157 90 9.7 3
AC 1,136 93 4.6 1.2
NC 1,082 91 5.6 2.1

RT: mean reaction times, SE: standard error.

For the RTs a main effect of ACTOR COHERENCE was found, F (1, 17) = 4.55, P = 0.048. Bonferroni adjusted post hoc tests showed that the latencies for actor‐coherent trials (1,094 ± 91 ms) were significantly shorter than for actor‐incoherent trials (1,119 ± 89 ms). The interaction was also significant, F (1, 17) = 14.14, P = 0.002. Bonferroni adjusted post hoc tests showed that the latencies for condition BC (1,052 ± 92 ms) were significantly shorter compared with conditions AC (1,136 ± 93 ms) and GC (1,157 ± 90 ms), and latencies for condition NC (1,082 ± 91 ms) were significantly shorter than for conditions AC and GC. There were no significant main effects or interaction for error rates. The average error rate was low (6.2 ± 1.1%).

Face recognition was measured in a post‐session recognition test comprising two steps. First, participants were asked to guess how many actors had been shown during the experiment. On average, participants estimated 13.8 ± 1.7 actors to have appeared in the experiment, although there were in fact 40 actors.

Subsequently, they performed an old/new recognition task by differentiating between faces belonging to the actors and to unfamiliar persons. Because of technical problems, only the face recognition data of 11 participants was recorded. The average probability of recognition was 0.39 ± 0.06 (hit rate 0.47 ± 0.06, false alarm rate 0.09 ± 0.08), with no significant differences between conditions. A one sample t‐test revealed that the probability of recognition differed significantly from chance level (0), t (10) = 6.64, P < 0.001. Participants needed 1,145 ± 25 ms for their response with no significant differences between conditions.

The attractiveness rating (1 “not attractive” to 4 “very attractive”) revealed no significant differences in attractiveness of the actors between conditions (1.8 ± 0.08). A repeated measures ANOVA with the within subject factors ACTOR COHERENCE and GOAL COHERENCE for the peculiarity rating (1 “not peculiar” to 4 “very peculiar”) revealed a significant main effect for ACTOR COHERENCE, F (1, 17) = 6.44, P = 0.021, a significant main effect for GOAL COHERENCE, F (1, 17) = 6.19, P = 0.024, and a significant disordinal interaction, F (1, 17) = 31.48, P < 0.001. Bonferroni adjusted t‐tests showed that actors in condition NC (2.48) were rated as significantly more peculiar than in conditions GC (2.05; P < 0.001) and AC (2.06; P < 0.001). Therefore, peculiarity was included in the fMRI analysis as a regressor of nuisance.

FMRI Results

In a first step, we tested the hypothesis that actor‐coherent episodes with no overarching goal would parametrically increase activity in IFG and dlPFC. We conducted a parametric contrast POSITION IN SEQUENCE for the condition AC. As a result, POSITION IN SEQUENCE covaried positively with activity in left IFG (BA 47/45), left dlPFC (superior frontal sulcus), as well as ventromedial prefrontal cortex (vmPFC; BA 10/11) and the adjacent posterior mesial orbital sulcus (orbitofrontal cortex, mOFC, hereafter) (for a list of all activations, see Table 3; Fig. 2). To ensure that activations in IFG and dlPFC were specific for condition AC, we calculated paired t‐tests of condition AC with conditions BC, GC, and NC and subsequently computed a conjunction of these three contrasts, so that only activations common to all three contrasts would be retained. Left IFG was activated significantly more strongly for condition AC than for any other condition, but dlPFC was not.

Table 3.

Areas activated in the parametric analysis for goal‐incoherent/actor‐coherent episodes (AC) during episode unfolding, that is, increase in activation from the first to the last clip of an episode; in the conjunction of parametric analyses for goal‐incoherent/actor‐coherent episodes (AC) and goal‐/actor‐coherent episodes (BC); and in the conjunction analysis of actor coherence independent of goal coherence [(BC>GC) ∩ (AC>NC)].

Localization x y z Z mm³
Parametric contrast for condition AC
 vmPFC −8 44 −3 3.74 1,107
 vmPFC 7 35 −12 3.30 216
 mOFC −11 23 −12 3.59 243
 mOFC 7 20 −12 3.19 l.m.
 MFG/SFS −32 23 51 3.62 216
 IFG (BA 45/47) −47 26 3 3.31 270
 IFG (BA 45) −47 23 15 2.6 l.m.
 Postcentral gyrus −29 −31 69 3.45 405
 Precentral gyrus −38 −19 63 3.95 567
 Posterior insula 37 −19 18 4.03 999
 Posterior IFG/precentral gyrus 52 −4 24 3.91 756
 Occipital areas 28 −94 15 3.85 1,026
Conjunction of parametric contrasts for conditions AC and BC
 vmPFC −2 44 −12 3.17 837
 aSFS −29 20 48 2.95 378
Conjunction analysis actor‐coherence independent of goal‐coherence (BC>GC) ∩ (AC>NC)
 vmPFC 1 29 −12 −3.15 1,431
 FFA −35 −43 −15 −3.56 1,404
 FFA 34 −43 −15 −3.59 1,431

Macroanatomical specification, Brodmann area (BA), Talairach coordinates (x, y, z), and maximal Z scores (Z); corrected for multiple comparisons at P < 0.05. vmPFC ventromedial prefrontal cortex, mOFC medial orbitofrontal cortex, MFG middle frontal gyrus, (a)SFS (anterior) superior frontal sulcus, IFG inferior frontal gyrus, FFA fusiform face area, l.m. local maxima.

Figure 2.

Parametric effect of unfolding of episodes with incoherent goals but coherent actors (AC), that is, increase in activation from the first to the last clip of an episode (z > 2.576; corrected cluster threshold P < 0.05); IFG inferior frontal gyrus, vmPFC ventromedial prefrontal cortex, SFS/MFG superior frontal sulcus/middle frontal gyrus.

To investigate our second hypothesis regarding the effects of actor coherence independently of goal coherence during episodic unfolding, we computed a conjunction of the parametric contrasts POSITION IN SEQUENCE of conditions AC and BC. This analysis yielded common activation in vmPFC (BA 11) and the left anterior superior frontal sulcus (aSFS) (Fig. 3; Table 3). Thus, activation increased in these areas with the number of times that the same actor reappeared, no matter whether the actions that he or she performed built a coherent overarching goal or not.

Figure 3.

Conjunction of the parametric effects of unfolding of actor‐coherent episodes, independent of goal coherence [BC ∩ AC] (z > 2.576; corrected cluster threshold P < 0.05); aSFS anterior superior frontal sulcus, vmPFC ventromedial prefrontal cortex.

To ensure that activation in vmPFC and aSFS was specific for actor coherence, we conducted ROI analyses. To prevent double dipping [Kriegeskorte et al., 2009], ROIs were determined by a trial split procedure, in which odd and even episodes were divided into two subsets. A first parametric contrast was calculated for odd episodes and a second parametric contrast for even episodes. Peak activation voxels were identified in one contrast and used as ROI in the other contrast and vice versa (illustrated schematically below). Talairach coordinates derived from these analyses were, for the aSFS, x = −29, y = 23, z = 45 for odd episodes and x = −23, y = 29, z = 45 for even episodes, and, for vmPFC, x = −2, y = 38, z = −12 for odd episodes and x = −5, y = 26, z = −12 for even episodes. Two repeated measures ANOVAs with the factors ACTOR COHERENCE (actor coherence vs. incoherence), GOAL COHERENCE (goal coherence vs. incoherence), and TRIAL GROUP (odd vs. even episodes) were conducted (Fig. 4). For aSFS there was a significant main effect for ACTOR COHERENCE, F (1, 17) = 10.16, P = 0.005, such that actor coherence (mean beta = 0.006) showed significantly more activation in aSFS than actor incoherence (mean beta = −0.001). No other main effect or interaction became significant, all P > 0.3.
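The following schematic sketch illustrates the cross‐validated readout logic of this trial split (hypothetical array names; not the original analysis code): the peak from one half of the episodes defines the voxel at which beta values are extracted from the other half.

```python
# Schematic of the trial-split ROI logic described above: the peak voxel of
# the odd-episode contrast defines where betas are read out from the
# even-episode data, and vice versa. Array names are hypothetical.
import numpy as np

def peak_voxel(z_map):
    """(i, j, k) coordinates of the maximum of a statistical map."""
    return np.unravel_index(np.argmax(z_map), z_map.shape)

def trial_split_roi_betas(z_odd, z_even, betas_odd, betas_even):
    """z_odd, z_even: group z-maps (x, y, z) from odd / even episodes;
    betas_odd, betas_even: per-subject beta maps (n_subjects, x, y, z).
    Returns per-subject betas averaged across the two independent folds."""
    i1, j1, k1 = peak_voxel(z_odd)    # ROI applied to the even-episode data
    i2, j2, k2 = peak_voxel(z_even)   # ROI applied to the odd-episode data
    return (betas_even[:, i1, j1, k1] + betas_odd[:, i2, j2, k2]) / 2.0
```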

Figure 4.

Main effect of actor coherence in ROI analyses for left anterior superior frontal sulcus (aSFS) and ventromedial prefrontal cortex (vmPFC). Coordinates of the ROIs were derived from a trial split procedure to prevent double dipping [Kriegeskorte et al., 2009]. The effect of actor coherence is demonstrated by mean beta values extracted from conditions actor‐coherent goal‐coherent (BC), actor‐incoherent goal‐coherent (GC), actor‐coherent goal‐incoherent (AC), and actor‐incoherent goal‐incoherent (NC) (error bars indicate standard errors).

For vmPFC, there was a marginally significant main effect for ACTOR COHERENCE, F (1, 17) = 4.13, P = 0.058, such that actor coherence (mean beta = 0.008) showed more activation in vmPFC than actor incoherence (mean beta = 0.002). No other main effect or interaction was significant, all P > 0.1.

Finally, to examine if areas of the medial wall would be activated for the main effect of actor coherence, independently of the presence of an overarching goal, a conjunction analysis for [(BC>GC) ∩ (AC>NC)] was conducted (for the results of the two main effects see Supporting Information). We found decreases of activation for actor coherence in the vmPFC (BA 11) and the bilateral fusiform gyrus in the FFA (Fig. 5; Table 3). No regions showed significant increases of activation.

Figure 5.

Conjunction of the main effects of actor coherence versus incoherence, independent of goal coherence [(BC>GC) ∩ (AC>NC)] (z > 2.576; corrected cluster threshold P < 0.05); FFA fusiform face area, vmPFC ventromedial prefrontal cortex.

DISCUSSION

The present fMRI study was built on the assumption that the continuity of the actor provides a cue for coherence of the actions that we perceive, and hence shapes our expectations toward upcoming action steps that pertain to the same actor (cf. [Frith and Frith, 2012]). Based on our own and others' findings on semantic integration in left prefrontal sites [Badre and Wagner, 2007; Wurm and Schubotz, 2012], we used fMRI to study this hypothesized bias. We used a 2 × 2 design with the factors ACTOR COHERENCE (coherent, incoherent) and GOAL COHERENCE (coherent, incoherent), with both factors being task‐irrelevant in order to tap a spontaneous bias.

As hypothesized, activity in the left IFG parametrically increased with unfolding of goal‐incoherent episodes only when they were actor‐coherent (AC). The same interaction‐specific increase was recorded in the mOFC.

When exploring effects of actor‐coherent episode unfolding independent of goal coherence, activity increased with re‐encounters of the same actor in vmPFC and left dlPFC (in particular aSFS). The latter finding is at variance with our hypothesis, because this effect was expected to occur for actor‐coherent episode unfolding only in goal‐incoherent episodes.

Finally, there was a main effect for actor incoherence in vmPFC and the FFA, that is, both areas showed overall more activation in actor‐incoherent than in actor‐coherent episodes. These findings are discussed in turn in the following.

Goal Incoherence Disturbs Observation of Actor‐Coherent but Not Actor‐Incoherent Actions

The left IFG is related to semantic memory recall, especially semantic retrieval (BA 47) and selection (BA 45) [Badre and Wagner, 2007], and its activity was found to be enhanced when the spatial context of an action rendered it difficult to come up with a meaningful interpretation of its goal (e.g., when a typical kitchen action was performed in an office) [Wurm and Schubotz, 2012]. Left IFG was suggested to reflect the increased demand in semantic integration in such a situation. In goal‐incoherent episodes performed by the same actor (AC), the situation is similar with regard to the induced semantic conflict. Here, more and more action steps were seen that could not be integrated easily into one overarching goal. The common actor, however, obviously implied that these action steps belonged together. Importantly, this IFG enhancement was neither observed for action sequences connected by a single actor and a common goal (BC), nor for action sequences connected by a common goal only (GC).

Although semantic integration based on script knowledge is classified as a subtype of semantic knowledge/memory [Kintsch, 1980; Funnell, 2001], it strongly alludes to the temporal structure of an action sequence. That is, the meaning of an action in the sense of the action's goal depends not only on the sum of applied manipulations but also on their correct (efficient) sequential order. This temporal structure of action, or action sequencing, is often also referred to as “action syntax.” Therefore, “action syntax” and “action semantics” are often difficult to disentangle, and action semantics may even sometimes derive from action syntax and vice versa. To avoid confusion here, it is important to note that in the present study, incoherent episodes were not simply randomized coherent episodes, but also differed with regard to the identity of manipulated objects.

We suggest that a possible strategy to reconcile the conflict induced by an actor performing unrelated action steps is to retrieve possible goals from long‐term memory that may help to generate a plausible, coherent episode. Interestingly, in language processing, left IFG (BA 45/47) is sensitive to violations against world knowledge, even if the discourse context indicates a situation in which the violation is acceptable [Menenti et al., 2009]. In the current study, goal‐incoherent actions performed by one actor may have the same effects as a violation of world knowledge, although in the context of the experiment, each single action step was absolutely acceptable. Thus, unrelated action steps performed by one single actor are only weakly associated by world knowledge and hence are not obviously connected by an overarching goal. For instance, one may engage in stringing pearls after salting a tomato, but this sequence of action steps may be much rarer in everyday life than eating the tomato or slicing bread.

Likewise, weak associative strength in word pairs (note—scale vs. bouquet—flower) during decisions about analogical and semantic relations increases both task difficulty and activation in IFG [Bunge et al., 2005]. Against this background, our findings strongly support the notion that observers implicitly expect coherence in an observed actor's behavior—even against all odds. That is, information about the actor is processed, no matter if this processing is conscious, required or even advantageous for the task at hand. Behavioral observations from related studies support this interpretation. For instance, participants perceive variable behavior of a person as coherent if they can identify an overarching goal [Plaks et al., 2003]. Further, infants at the age of one expect actors to continue to pursue similar goals, even if there are minor changes to the context [Kuhlmeier et al., 2003; Song and Baillargeon, 2007].

Interestingly, RTs recorded in the present fMRI study seemed to endorse the special status of episodes with mixed coherence, and hence the fMRI findings. Thus, we found significantly longer RTs for questions following video clips of conditions GC and AC as compared to the conditions BC and NC. Accordingly, either entirely coherent or entirely incoherent action sequences were responded to faster than those that were mixed coherent/incoherent. One can speculate that here, additional mental processes were triggered in an attempt to resolve this conflict between the goal level and the actor level. Note that trial‐related RTs entered the design matrix as a regressor of nuisance, and hence their effect was removed from the fMRI data.

Although the behavioral data dovetail with our fMRI findings, two caveats must be noted with regard to their interpretation. First, participants' task was not related to the manipulation that we investigated with fMRI, as we aimed to tap fMRI effects of spontaneous processing of task‐irrelevant (actor) information. The employed task simply served to ensure that participants attended to the presented object manipulation. Second, responses were not required or delivered during action observation or immediately afterwards, but only after a question trial had started, that is, about 3 s after the end of the video. As actions can be recognized very quickly after video onset (cf. [Wurm and Schubotz, 2012]), this delay between recognition and reaction (about 5 s) calls the interpretability of the RTs in terms of condition‐related effects into question. Future studies are required to further explore these effects.

Actor Re‐encounters Spontaneously Trigger Further Frontal Responses

Our hypothesis‐driven results point to a strong and spontaneous tendency to process actor information when observing actions, no matter if this is advantageous or even required. In case of goal‐incoherent actions, it even leads to additional processing costs (i.e., BOLD increase) that did not pay off for the given task.

Interestingly, we found that this evidence was fostered when exploring further, non‐hypothesis‐guided contrasts. We would like to close our discussion by briefly considering these post hoc findings, also seeking to identify working hypotheses for future research.

First, we found activation in mOFC to increase together with that in left IFG. mOFC and the adjacent anterior cingulate were found to be structurally altered in depression [Ballmaier et al., 2004] and may reflect regulation of negative affect [Cooney et al., 2007; Kross et al., 2009; Newman‐Norlund et al., 2009]. Hence, co‐active mOFC lends further support to the notion that the brain tried to reconcile actually goal‐unrelated action steps, a mostly fruitless and hence frustrating endeavor.

Second, re‐encountering an actor after a short (within‐minute) delay gave rise to a lower BOLD response in the FFA [Kanwisher et al., 1997; Larsson and Smith, 2012]. Notably, this effect cannot be based on a difference in the number of different faces between the actor‐coherent and actor‐incoherent conditions, since these conditions overall contained the same number of actors. Instead, this effect again points to a particular challenge to memorize and retrieve familiar actors, though not on a conscious level.

Finally, two frontal areas—left aSFS in dlPFC, and vmPFC—were found to be sensitive to actor re‐encounter but, in contrast to left IFG or mOFC, independently of goal coherence. As activity increased with the number of times participants encountered the same actor again, these findings imply that the (task‐irrelevant) information about the actor was spontaneously coded.

Speculating on the general meaning of these activations, we suggest that the brain memorizes an actor because this may help to shape expectations about upcoming actions of the same actor. Obviously, such a shaping of expectations is only viable when actor information is processed very quickly. Such fast processing is indicated by EEG [Barbeau et al., 2008] and behavioral studies [Usakli et al., 2011], which report effects of face recognition within 200 ms after stimulus onset.

For the dlPFC, we hypothesized that an actor‐coherence‐driven BOLD increase should be restricted to goal‐incoherent episodes (AC), reasoning that recollecting all actions associated with an actor should (a) occur spontaneously and (b) be particularly challenging when the single actions do not pertain to one common goal. However, this latter restriction was not observed. The aSFS, belonging to the dlPFC, is known to be involved in a core feature of working memory, that is, the selection operation that retrieves the most relevant item from memory [Bledowski et al., 2009]. This operation is needed when some items maintained in working memory become transiently more important than others. During the course of actor‐coherent episodes (BC and AC), probably more and more selection operations were needed as the participant tried to select movies with the same actor from the movies maintained in working memory. As the dlPFC increase was not bound to goal coherence, we take this effect to reflect that actor information serves as a cue triggering the retrieval of associated information (here: actions performed by this actor) from working memory. However, whether it was possible to integrate this set of actions into a global goal‐directed action (BC) or not (AC) did not modulate dlPFC activity.

In contrast, vmPFC is clearly not a classical component of working memory. Anterior regions of the PFC are suggested to subserve relational integration [Ramnani and Owen, 2004; Bunge et al., 2005; Green et al., 2006; Cho et al., 2010], that is, considering multiple relations or representations simultaneously [Christoff et al., 2001; Bunge et al., 2005]. This is particularly relevant when evaluating new information's consistency with long‐term memory contents. An observed action might be compared to long‐term memory content about the same action and/or the same actor. When re‐encountering an actor, all occasions in which the actor was seen before might be reconsidered and, if possible, integrated (cf. [Frith and Frith, 2012]). This interpretation would also account for the vmPFC main effect when contrasting actor‐coherent with actor‐incoherent episodes (BC and AC vs. GC and NC). For the latter conditions, the gap between re‐encounters of an actor was much larger than for actor‐coherent ones (on average 23 trials vs. 2 trials), and hence, more actions and actors were seen in between the re‐encounters of the same actor. Thus, in an attempt to integrate the actions accomplished by this actor, even when goal‐incoherent, more interim actions might have been considered in actor‐incoherent (GC, NC) than in actor‐coherent (BC, AC) episodes.

However, this study did not guide participants' attention to the actor or to episodic structures between action steps; and indeed, behavioral findings of the post‐fMRI face recognition test showed that actor information was not necessarily processed on a conscious level: participants estimated that they had been presented with 14 actors altogether in our study, whereas they actually saw 40. Thus, if participants used actor information to shape their expectations on the action‐goal level, they probably did so in a largely nondeliberate way.

It is likewise conceivable that the brain uses the actor's identity as an indirect predictor of the upcoming action structure, that is, as a bias trigger. This is because actor identity was indeed indicative of either goal‐coherent or goal‐incoherent action. Similar biasing mechanisms have been investigated in the area of decision making [Volz and von Cramon, 2006; Rushworth et al., 2011], and may be based on a common or overlapping network of brain areas, with frontomedian areas as a core structure. Thus, a bias in expectation reduces the value of alternative expectations, no matter whether these alternatives lead to different decisions or not. For instance, in a recent fMRI study [Schiffer et al., in press], participants were shown video clips of actions while their expectation toward the course of these actions was manipulated. Here, activity in vmPFC and adjacent ACC increased when expectation became biased towards one course of the presented actions.

CONCLUDING REMARKS

Present findings suggest that actor information modulates brain activity during action observation, even when task‐irrelevant. Unrelated action steps performed by one actor appear to trigger a search in semantic memory in an attempt to construe an overarching goal that can reconcile the disparate action steps with a coherent intention. Thus, action steps assigned to one actor are expected to lead to a coherent, overarching goal. Even interruptions by other actions and actors do not prevent the buildup of a memory trace, pointing to a spontaneous expectation bias toward episodes.

Supporting information

Supporting Information

Supporting Figure 1.

ACKNOWLEDGMENTS

The authors thank Yuka Morikawa for generating the stimulus material, Kirsten Volz for fruitful discussions and Sarah Weigelt for helpful comments on the manuscript.

REFERENCES

1. Badre D, Wagner AD (2007): Left ventrolateral prefrontal cortex and the cognitive control of memory. Neuropsychologia 45:2883–2901.
2. Ballmaier M, Toga AW, Blanton RE, Sowell ER, Lavretsky H, Peterson J, Pham D, Kumar A (2004): Anterior cingulate, gyrus rectus, and orbitofrontal abnormalities in elderly depressed patients: An MRI‐based parcellation of the prefrontal cortex. Am J Psychiatry 161:99–108.
3. Barbeau EJ, Taylor MJ, Regis J, Marquis P, Chauvel P, Liegeois‐Chauvel C (2008): Spatio temporal dynamics of face recognition. Cereb Cortex 18:997–1009.
4. Bledowski C, Rahm B, Rowe JB (2009): What "works" in working memory? Separate systems for selection and updating of critical information. J Neurosci 29:13735–13741.
5. Botvinick MM (2008): Hierarchical models of behavior and prefrontal function. Trends Cogn Sci 12:201–208.
6. Brothers L (1990): The social brain: A project for integrating primate behavior and neurophysiology in a new domain. Concepts Neurosci 1:27–51.
7. Bunge SA, Wendelken C, Badre D, Wagner AD (2005): Analogical reasoning and prefrontal cortex: Evidence for separable retrieval and integration mechanisms. Cereb Cortex 15:239–249.
8. Cho S, Moody TD, Fernandino L, Mumford JA, Poldrack RA, Cannon TD, Knowlton BJ, Holyoak KJ (2010): Common and dissociable prefrontal loci associated with component mechanisms of analogical reasoning. Cereb Cortex 20:524–533.
9. Christoff K, Prabhakaran V, Dorfman J, Zhao Z, Kroger JK, Holyoak KJ, Gabrieli JD (2001): Rostrolateral prefrontal cortex involvement in relational integration during reasoning. Neuroimage 14:1136–1149.
10. Cooney RE, Joormann J, Atlas LY, Eugene F, Gotlib IH (2007): Remembering the good times: Neural correlates of affect regulation. Neuroreport 18:1771–1774.
11. Ferstl EC, Neumann J, Bogler C, von Cramon DY (2008): The extended language network: A meta‐analysis of neuroimaging studies on text comprehension. Hum Brain Mapp 29:581–593.
12. Friston KJ, Holmes AP, Worsley KJ, Poline JP, Frith CD, Frackowiak RSJ (1994): Statistical parametric maps in functional imaging: A general linear approach. Hum Brain Mapp 2:189–210.
13. Frith CD, Frith U (2012): Mechanisms of social cognition. Annu Rev Psychol 63:287–313.
14. Funnell E (2001): Evidence for scripts in semantic dementia: Implications for theories of semantic memory. Cogn Neuropsychol 18:323–341.
15. Grafman J (2002): The structured event complex and the human prefrontal cortex. In: Stuss DT, Knight RT, editors. Principles of Frontal Lobe Function. New York: Oxford University Press; pp 292–310.
16. Green AE, Fugelsang JA, Kraemer DJ, Shamosh NA, Dunbar KN (2006): Frontopolar cortex mediates abstract integration in analogy. Brain Res 1096:125–137.
17. Hamilton AF, Grafton ST (2006): Goal representation in human anterior intraparietal sulcus. J Neurosci 26:1133–1137.
18. Kanwisher N, McDermott J, Chun MM (1997): The fusiform face area: A module in human extrastriate cortex specialized for face perception. J Neurosci 17:4302–4311.
19. Keele SW, Cohen A, Ivry R (1990): Motor programs: Concepts and issues. In: Jeannerod M, editor. Attention and Performance 13: Motor Representation and Control. Hillsdale, NJ: Lawrence Erlbaum Associates; pp 77–109.
20. Kintsch W (1980): Semantic memory: A tutorial. In: Nickerson RS, editor. Attention and Performance VIII. Cambridge, MA: Bolt Beranek and Newman; pp 595–620.
21. Knutson KM, Wood JN, Grafman J (2004): Brain activation in processing temporal sequence: An fMRI study. Neuroimage 23:1299–1307.
22. Kriegeskorte N, Simmons WK, Bellgowan PS, Baker CI (2009): Circular analysis in systems neuroscience: The dangers of double dipping. Nat Neurosci 12:535–540.
23. Kross E, Davidson M, Weber J, Ochsner K (2009): Coping with emotions past: The neural bases of regulating affect associated with negative autobiographical memories. Biol Psychiatry 65:361–366.
24. Kuhlmeier V, Wynn K, Bloom P (2003): Attribution of dispositional states by 12‐month‐olds. Psychol Sci 14:402–408.
25. Kühn AB, Schubotz RI (2012): Temporally remote destabilization of prediction after rare breaches of expectancy. Hum Brain Mapp 33:1812–1820.
26. Larsson J, Smith AT (2012): fMRI repetition suppression: Neuronal adaptation or stimulus expectation? Cereb Cortex 22:567–576.
27. Lohmann G, Müller K, Bosch V, Mentzel H, Hessler S, Chen L, Zysset S, von Cramon DY (2001): LIPSIA‐a new software system for the evaluation of functional magnetic resonance images of the human brain. Comput Med Imaging Graphics 25:449–457.
28. Long DL, Golding JM (1993): Superordinate goal inferences: Are they automatically generated during comprehension? Discourse Processes 16:55–73.
29. Menenti L, Petersson KM, Scheeringa R, Hagoort P (2009): When elephants fly: Differential sensitivity of right and left inferior frontal gyri to discourse and world knowledge. J Cogn Neurosci 21:2358–2368.
30. Newman‐Norlund RD, Ganesh S, van Schie HT, De Bruijn ER, Bekkering H (2009): Self‐identification and empathy modulate error‐related brain activity during the observation of penalty shots between friend and foe. Soc Cogn Affect Neurosci 4:10–22.
31. Nichols T, Brett M, Andersson J, Wager T, Poline JB (2005): Valid conjunction inference with the minimum statistic. Neuroimage 25:653–660.
32. Norris DG (2000): Reduced power multislice MDEFT imaging. J Magn Reson Imaging 11:445–451.
33. Owen AM (1997): The functional organization of working memory processes within human lateral frontal cortex: The contribution of functional neuroimaging. Eur J Neurosci 9:1329–1339.
34. Plaks JE, Shafer JL, Shoda Y (2003): Perceiving individuals and groups as coherent: How do perceivers make sense of variable behavior? Soc Cogn 21:26–60.
35. Ramnani N, Owen AM (2004): Anterior prefrontal cortex: Insights into function from anatomy and neuroimaging. Nat Rev Neurosci 5:184–194.
36. Rushworth MF, Noonan MP, Boorman ED, Walton ME, Behrens TE (2011): Frontal cortex and reward‐guided learning and decision‐making. Neuron 70:1054–1069.
37. Rypma B, Berger JS, D'Esposito M (2002): The influence of working‐memory demand and subject performance on prefrontal cortical activity. J Cogn Neurosci 14:721–731.
38. Schank RC, Abelson RP (1977): Scripts, Plans, Goals and Understanding. Hillsdale, NJ: Erlbaum.
39. Schiffer AM, Ahlheim C, Ulrichs K, Schubotz RI (in press): Neural changes when actions change: Adaptation of strong and weak expectations. Hum Brain Mapp. doi: 10.1002/hbm.22023.
40. Schubotz RI, von Cramon DY (2004): Sequences of abstract nonbiological stimuli share ventral premotor cortex with action observation and imagery. J Neurosci 24:5467–5474.
41. Snodgrass JG, Corwin J (1988): Pragmatics of measuring recognition memory: Applications to dementia and amnesia. J Exp Psychol Gen 117:34–50.
42. Song HJ, Baillargeon R (2007): Can 9.5‐month‐old infants attribute to an agent a disposition to perform a particular action on objects? Acta Psychol (Amst) 124:79–105.
43. Talairach J, Tournoux P (1988): Co‐planar Stereotaxic Atlas of the Human Brain. New York: Thieme.
44. Todorov A, Gobbini MI, Evans KK, Haxby JV (2007): Spontaneous retrieval of affective person knowledge in face perception. Neuropsychologia 45:163–173.
45. Todorov A, Uleman JS (2002): Spontaneous trait inferences are bound to actors' faces: Evidence from a false recognition paradigm. J Pers Soc Psychol 83:1051–1065.
46. Trinkler I, King JA, Doeller CF, Rugg MD, Burgess N (2009): Neural bases of autobiographical support for episodic recollection of faces. Hippocampus 19:718–730.
47. Ugurbil K, Garwood M, Ellermann J, Hendrich K, Hinke R, Hu X, Kim SG, Menon R, Merkle H, Ogawa S (1993): Imaging at high magnetic fields: Initial experiences at 4 T. Magn Reson Q 9:259–277.
48. Usakli AB, Susac A, Gurkan S (2011): Fast face recognition: Eye blink as a reliable behavioral response. Neurosci Lett 504:49–52.
49. van der Graaf FH, Maguire RP, Leenders KL, de Jong BM (2006): Cerebral activation related to implicit sequence learning in a Double Serial Reaction Time task. Brain Res 1081:179–190.
50. Volz KG, von Cramon DY (2006): What neuroscience can tell about intuitive processes in the context of perceptual discovery. J Cogn Neurosci 18:2077–2087.
51. Werheid K, Zysset S, Müller A, Reuter M, von Cramon DY (2003): Rule learning in a serial reaction time task: An fMRI study on patients with early Parkinson's disease. Cogn Brain Res 16:273–284.
52. Worsley KJ, Friston KJ (1995): Analysis of fMRI time‐series revisited—Again. Neuroimage 2:173–181.
53. Wurm MF, Schubotz RI (2012): Squeezing lemons in the bathroom: Contextual information modulates action recognition. Neuroimage 59:1551–1559.
54. Wurm MF, von Cramon DY, Schubotz RI (2011): Do we mind other minds when we mind other minds' actions? A functional magnetic resonance imaging study. Hum Brain Mapp 32:2141–2150.
