Human Brain Mapping. 2004 Jan 14;21(3):119–142. doi: 10.1002/hbm.10161

Imaging a cognitive model of apraxia: The neural substrate of gesture‐specific cognitive processes

Philippe Peigneux 1,2, Martial Van der Linden 3, Gaëtan Garraux 1,4, Steven Laureys 1,4, Christian Degueldre 1, Joël Aerts 1, Guy Del Fiore 1, Gustave Moonen 4, André Luxen 1, Eric Salmon 1,4
PMCID: PMC6872064  PMID: 14755833

INTRODUCTION

The organization of the perception and production of movements at the cognitive and neuroanatomical levels is central to our comprehension of upper limb apraxia. Cognitive models of the praxis system were developed to account for the many dissociations in apraxic symptoms observed in individual patients [Buxbaum et al., 2000; Rothi et al., 1991, 1997b; Roy and Square, 1985; Roy et al., 1996], but the neuroanatomical correlates of the cognitive architecture posited by such models remain poorly understood [Koski et al., 2002]. Functional brain imaging studies may reconcile these perspectives, for regional cerebral blood flow (rCBF) measurements during the performance of various gesture‐related tasks make it possible to map the neuroanatomical substrates of specific cognitive processes in healthy volunteers.

We sought to ascertain the neuroanatomical basis of an influential neuropsychological model of upper limb praxis [Rothi et al., 1991, 1997b], depicted in Figure 1. A salient feature of this model is the postulate that two segregated routes interpose between perception and motor execution during imitation of familiar and novel gestures, respectively. Imitation of familiar gestures is accomplished via access to the long‐term memory representations of their physical attributes (i.e., their shape and kinetic parameters) stored in the input and output praxicons. Access to these praxicons supposedly provides a processing advantage for familiar gesture imitation in that it avoids the cost of computing “on‐the‐fly,” at each imitation attempt, all the parameters needed to analyse and implement the spatial and temporal characteristics of the familiar movement [Rothi and Heilman, 1996]. This representational route cannot be used when imitating novel gestures, however, for these have no representations in long‐term memory. Rothi et al. [1991, 1997b] presumed that the alternative route directly links perception to motor control, but neuropsychological data suggest that knowledge about the organization of the human body mediates the transition from visual perception to motor execution [Goldenberg, 1995, 2001; Peigneux et al., 2000b]. A human body knowledge representation for novel gestures was therefore incorporated into the alternative route [Peigneux and Van der Linden, 2000; Peigneux et al., 2000b]. Beyond these differences, the representational and alternative routes for gesture imitation share the perceptual input and motor output components. At the visuo‐gestural analysis stage, a structural description of the visually perceived gesture is worked out and fed into the input praxicon and the body knowledge components. At the innervatory pattern stage, the gestural representation issued by the output praxicon or the body knowledge component is implemented in its motor form for effective gesture production.

Figure 1.

Figure 1

Cognitive neuropsychological model of upper limb apraxia. Adapted from Rothi et al. [1991, 1997b] and Goldenberg [1995] by Peigneux and Van der Linden [2000]. Solid arrows indicate the directional flow of information between dedicated components. Successful completion of any gesture‐related task (e.g., pantomime to verbal command, familiar or novel gesture imitation, etc.) requires serial access to several of these components. Dotted arrows indicate potential alternate connections, not normally used (see details in text).

Turning to the processing of familiar gestures in perception and production tasks other than gesture imitation, another distinctive feature of the model is that the codes for the physical attributes of “to‐be perceived” and “to‐be performed” gestures are kept apart in the input praxicon and output praxicon, respectively [Rothi and Heilman, 1996]. The input praxicon receives the “to‐be perceived” information arising from the visuo‐gestural analysis stage and supplies, in turn, the action semantic system, in which the conceptual features associated with the physical attributes of the gesture are recollected. Access to the gestural memories stored in the input praxicon permits successful discrimination between familiar and unfamiliar gestures at a pre‐semantic level, and naming of meaningful gestures after access to the action semantic and verbal systems. Likewise, the visual information arising from object visual analysis is processed in the object recognition system and then transmitted to the action semantic system, which incorporates the functional knowledge related to transitive actions and to the objects that participate in these actions [Roy and Square, 1985, 1994]. Conversely, the activation of the conceptual characteristics of a gesture in the action semantic system triggers the corresponding “to‐be performed” visuo‐kinaesthetic code, stored in the output praxicon for the purpose of motor implementation at the innervatory pattern stage. Access to the output praxicon through the action semantic system is mandatory for meaningful gesture production upon verbal command or upon presentation of gesture‐related objects. On the other hand, successful completion of semantic association tasks in which functionally compatible objects have to be matched (e.g., a stone can be matched with a hammer because both objects are graspable and strong enough to perform the hammering action) necessitates access to the action semantic system through the verbal system.

Thus, according to the model outlined here, carrying out each of the tasks alluded to above implies recruiting one or several dedicated cognitive units. For the purpose of upper limb apraxia assessment, ad hoc batteries testing gesture production and perception have been designed [Cubelli et al., 2000; Peigneux and Van der Linden, 2000; Rothi et al., 1992, 1997a]. Controlled comparisons between tests in which performance is either preserved or impaired make it possible to segregate the cognitive component(s) whose deficit causes the apraxic behaviour in stroke patients and in degenerative diseases [Hanna‐Pladdy et al., 2001; Jacobs et al., 1999; Ochipa et al., 1994; Peigneux et al., 2000b, 2001]. In the present study, we used a similar strategy to map the neuroanatomical bases of the normal cognitive architecture for upper limb praxis. Using H2 15O positron emission tomography (PET), regional cerebral blood flow (rCBF) was measured in 15 healthy volunteers during performance of four tasks used for testing upper limb apraxia: pantomime of familiar gestures on verbal command, imitation of familiar gestures, imitation of novel gestures, and matching of functionally compatible objects (i.e., semantic functional associations). In addition, we re‐analysed PET data from a prior study [Peigneux et al., 2000a] in which 12 volunteers were similarly scanned during performance of visual perception tasks, including the naming of intransitive familiar postures or objects and an orientation decision on novel postures or objects. Within each population, specific contrasts looked for the brain areas in which rCBF differed between conditions, in relation to the presence or absence of one or several cognitive component(s) of interest, as predicted by the model. Then, inter‐ and intra‐population conjunction analyses aimed to identify the brain region(s) in which neural activity predominantly supports the processes devoted to each particular component.

SUBJECTS AND METHODS

Subjects

Fifteen right‐handed, healthy volunteers (Group A: 7 men, 8 women; mean age 23.4 years, range 19–29 years) gave their full, written informed consent to take part in this PET experiment approved by the Ethical Committee of the University of Liège. In addition, data obtained in 12 other right‐handed healthy volunteers (Group B: 5 men, 7 women; mean age 22.4 years, range 19–26 years), from a previously published PET study [Peigneux et al., 2000a], were re‐analysed using a random effect model to be incorporated in a multi‐study design.

Experimental Tasks

In Group A, subjects were scanned during four experimental conditions: pantomime of familiar gestures to verbal command (FC), familiar gesture imitation (FI), novel gesture imitation (NI), and semantic functional associations (SA). Each condition was repeated three times and introduced in a pseudo‐randomised order, counterbalanced across subjects. All items were visually displayed on a 21‐inch monitor (mean field of view 31.4 degrees horizontally and 22.1 degrees vertically). Four example stimuli per condition, different from the experimental material, were administered prior to scanning, after general explanations had been given. Condition‐specific instructions were given verbally 1 min before each scan.

In a preliminary validation test, the perceived familiarity or novelty of the gestures used in this experiment was assessed in 20 undergraduate volunteers who did not participate in the PET study. They were presented with a series of pictures, displayed for 5 sec each, and their task was to decide whether the displayed gesture was unknown to them and carried no particular meaning (i.e., novel) or was familiar, in which case they were asked to report its meaning. Only gestures correctly categorized as familiar or novel by at least 95% of the validation sample were used in this experiment.

In the FC, FI, and NI conditions, 6 upper‐limb gestures were successively performed during each scan, and the gestures differed in each replication of the condition (18 gestures/condition). During the scan, the arm and hand had to return to the resting position (beside the body) after gesture execution, when a warning signal appeared at the centre of the screen to announce the next command. The gestures to execute involved either the hand (i.e., a specific configuration of the fingers), the entire upper limb (i.e., a positioning or a movement of the whole arm in extrapersonal space), or a combination of hand and arm components. If the gesture involved reaching a point in space and holding a static position (e.g., “swear an oath in court”), the position had to be held during the 10 sec allowed for execution. If the gesture involved a dynamic repetitive component (e.g., “hitch‐hiking”), the movement had to be performed continuously throughout this period. Familiar gestures were identical in the FC and FI conditions. Novel gestures in the NI condition were individually matched with the familiar gestures used in the FI and FC conditions on the basis of the upper limb components recruited (whole limb or hand/finger complex) and the movement kinematics (dynamic or static gesture). All gestures were performed with the right hand (left arm movements were hampered by the presence of a venous catheter in the forearm for oxygen‐15 labelled water infusion), and participants were asked to perform each gesture as if they were standing upright, i.e., the arm and hand had to be positioned with respect to the subject's body rather than with reference to the environment.

In the Familiar Gesture to Command (FC) condition, a warning signal was displayed for 3 sec, followed by a short written sentence describing the gesture to perform (e.g., “knock on the door”) for 7 sec. Participants were asked to pantomime the gesture as accurately as possible during the 10 sec following the moment when the written sentence was replaced by a fixation cross displayed at the centre of the screen.

In the Familiar Gesture Imitation (FI) and Novel Gesture Imitation (NI) conditions, the timeline was identical to that of the FC condition. After the 3‐sec warning signal, a videotape in which an actress demonstrated a familiar (or novel) gesture was played for 7 sec. Participants then had to imitate the gesture during the 10 sec in which the fixation cross was displayed. Subjects were not informed in advance of the novel or familiar status of the gesture, but were simply told to imitate the demonstrated movement as closely as possible. The same actress, a young woman filmed in front perspective, performed all videotaped gestures. The field of view encompassed the body from the waist to the top of the head, with constant size and background on display. Facial expression was always kept neutral.

In the Semantic Functional Association (SA) condition, after a 3‐sec warning signal, two transitive object names (e.g., “hammer” and “stone”) were displayed simultaneously for 7 sec in the upper and lower fields of the screen, respectively. Participants had to decide whether the object named in the lower field could serve the same function as the object named in the upper field. For example, a stone shares with a hammer the property of being solid enough to drive a nail, which is not the case for, e.g., a handkerchief. Responses were given while the objects' names were displayed, by pressing a key with the right hand for “yes” and with the left hand for “no.”

To control for the effect of ocular saccade density on cerebral activity, participants were requested to keep their eyes continuously on the fixation cross in all conditions, including during gesture production.

Group B

An extensive presentation of the experimental protocol used in Group B can be found elsewhere [Peigneux et al., 2000a]; only the relevant information is briefly presented here. Subjects were scanned during four experimental conditions, each presented three times in a pseudo‐randomised order (12 scans per subject). In all conditions, still pictures were successively displayed for 5 sec each (24 items per scan). In the Familiar Posture Naming (PN) condition, participants were instructed to name aloud, in a brief sentence, intransitive meaningful upper limb postures (e.g., a military salute). In the Familiar Object Naming (ON) condition, the task was identical, but intransitive meaningful three‐dimensional objects (e.g., a mountain) were displayed instead. In the Posture Orientation Decision (PO) condition, participants had to decide on the right–left orientation of meaningless upper limb postures by pressing the key corresponding to the side toward which the posture was oriented. In the Object Orientation Decision (OO) condition, subjects were likewise instructed to decide on the orientation of three‐dimensional meaningless objects. Note that only intransitive gestures (i.e., movements that do not involve object grasping and manipulation) and intransitive objects (i.e., objects that cannot be directly grasped or manipulated) were used in this prior experiment [Peigneux et al., 2000a], which aimed to segregate the visual processing of gestures from the visual processing of other types of stimuli.

PET Data Acquisition

PET data were acquired on a Siemens CTI 951 R 16/31 scanner in 3D mode, using an identical procedure in Groups A and B. The subject's head was stabilized by a thermoplastic facemask secured to the head holder (Truscan Imaging, MA), and a venous catheter was inserted into a left antebrachial vein. A transmission scan was performed to allow a measured attenuation correction. Regional CBF was estimated during twelve 90‐sec emission scans, each consisting of two frames: a 30‐sec background frame and a 90‐sec acquisition frame. The slow intravenous infusion of oxygen‐15 labelled water (H2 15O) began 10 sec before the second frame. Six mCi (222 MBq) were injected for each scan, in 5 cc saline, over a period of 20 sec. Data were reconstructed using a Hanning filter (cut‐off frequency: 0.5 cycle/pixel) and corrected for attenuation and background activity.

PET Data Analysis

PET data were analysed using the statistical parametric mapping software SPM99 (Wellcome Department of Cognitive Neurology, London, UK; online at http://www.fil.ion.ucl.ac.uk/spm) implemented in MATLAB (Mathworks Inc., Sherborn, MA). For each subject, all scans were realigned together, then normalized to a standard PET template and smoothed using a Gaussian kernel of 16 mm full width at half maximum (FWHM). The condition and subject (block) effects were estimated according to the general linear model at each voxel [Frackowiak et al., 1997], using a random effect (RFX) model [Friston et al., 1999a; Holmes and Friston, 1998] to accommodate within‐ and between‐individual variability of rCBF changes, thus accounting explicitly for subject by condition interaction effects.
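The FWHM-to-sigma conversion underlying such a smoothing step can be sketched as follows (a minimal illustration in Python with NumPy/SciPy, assuming voxel sizes given in mm; this is not the SPM99 implementation, whose preprocessing also includes realignment and spatial normalization):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_fwhm(volume, fwhm_mm, voxel_size_mm):
    """Smooth a 3-D image with a Gaussian kernel specified by its FWHM in mm."""
    # Convert FWHM to the standard deviation expected by gaussian_filter:
    # sigma = FWHM / (2 * sqrt(2 * ln 2)) ~= FWHM / 2.355
    sigma_mm = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    # Express sigma in voxel units, per axis (allows anisotropic voxels).
    sigma_vox = sigma_mm / np.asarray(voxel_size_mm, dtype=float)
    return gaussian_filter(volume, sigma=sigma_vox)

# Example: a point source smoothed with a 16-mm kernel on a 2-mm grid.
vol = np.zeros((40, 40, 40))
vol[20, 20, 20] = 1.0
smoothed = smooth_fwhm(vol, fwhm_mm=16.0, voxel_size_mm=(2.0, 2.0, 2.0))
```

Smoothing spreads the unit of activity over neighbouring voxels while preserving its total, which is why the peak value drops as the FWHM grows.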

At the first level of the RFX analysis, primary contrasts were used to estimate in each individual separately the main effects of subtractions and interactions between selected conditions (see details below). Realignment parameters (translations in x, y, z directions and rotations around x, y, z axes) were incorporated as nuisance variables in the design matrix [Brett et al., 1999] to account for residual movement artefacts. Global flow adjustment was performed by analysis of covariance (subject‐specific ancova). The resulting contrast images, obtained in each individual, fitted the within‐subject component of the variance and were used in the second level analysis, where subjects are considered as random variables. Two types of analyses were performed at the random effect level. One‐sample t‐tests aimed to highlight the brain areas consistently activated in all individuals for a given primary contrast. Conjunction analyses between different primary contrasts aimed to reveal the brain areas in which neural activity predominantly and consistently supports the processes executed by the cognitive component shared in common by these contrasts. Note that when using intra‐population conjunction at the random effect level, we implicitly assumed sphericity of the error variance. The resulting set of voxel values for each contrast constituted a map of the t statistic [SPM(T)], thresholded at P ≤ 0.001 (T ≥ 1.93). Statistical inferences were then obtained at the voxel level corrected for multiple comparisons in the brain volume (P corr < .05). Uncorrected values (P < .001) are reported in the tables for completeness but will not be discussed.
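The second-level random-effects logic can be sketched as follows (a simplified illustration in Python with NumPy/SciPy on simulated data; it assumes a plain one-sample t-test per voxel and a minimum-statistic conjunction, and omits SPM99's ANCOVA global-flow adjustment and corrected voxel-level inference):

```python
import numpy as np
from scipy import stats

def random_effects_ttest(contrast_images):
    """One-sample t-test over subjects at every voxel (second-level RFX).

    contrast_images: (n_subjects, n_voxels) array of first-level contrast
    estimates; subjects are treated as the random variable.
    """
    con = np.asarray(contrast_images, dtype=float)
    n = con.shape[0]
    t = con.mean(axis=0) / (con.std(axis=0, ddof=1) / np.sqrt(n))
    p = stats.t.sf(t, df=n - 1)  # one-tailed: rCBF increases only
    return t, p

def conjunction(t_maps, t_threshold):
    """Minimum-statistic conjunction: keep voxels exceeding the
    threshold in every contrast entered."""
    return np.min(np.stack(t_maps), axis=0) >= t_threshold

# Simulated example: 15 subjects, 200 voxels, a true effect at voxel 0.
rng = np.random.default_rng(1)
data = rng.normal(size=(15, 200))
data[:, 0] += 2.0
t_map, p_map = random_effects_ttest(data)
active = conjunction([t_map, t_map], t_threshold=1.93)
```

The conjunction here simply intersects supra-threshold maps; a voxel survives only if it is consistently activated in every contrast, mirroring the intent of the analyses reported below.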

Representational and Alternative Routes for Gesture Imitation

First, we investigated the functional neuroanatomy of the representational and alternative routes for gesture imitation by comparing the rCBF patterns during imitation of familiar and novel gestures, respectively. Familiar gesture imitation, as compared to novel gesture imitation (FI vs. NI), should highlight the brain areas in which neural activity predominantly supports the input praxicon, the output praxicon, and the action semantic system (see Fig. 1). Conversely, novel gesture imitation, as compared to familiar gesture imitation (NI vs. FI), should reveal the brain areas that predominantly participate in the mediation, by human body knowledge, between visual perception and motor production. The neural grounds of visuo‐gestural analysis and innervatory patterns are not highlighted in this comparison, for these components are shared by both routes.

Visuo‐Gestural Analysis

Second, we looked for the brain areas in which neural activity predominantly supports the visuo‐gestural analysis stage. This component is shared by the comparison (contrast) between novel posture and object orientation decision conditions (PO vs. OO) and the comparison between novel gesture imitation and familiar gesture production on command (NI vs. FC). As a supplementary control, brain areas activated in the (FI vs. NI) or (NI vs. FI) contrasts were excluded from the analysis (exclusive mask set at P < .05, uncorrected) to avoid including activations associated with human body knowledge or input praxicon entries, because both might be covertly accessed during the visual processing of novel gestures.

Input Praxicon

Third, we aimed to segregate the neural basis of the input praxicon component using a conjunction analysis between the interaction [(PN vs. ON) by (PO vs. OO)] and the subtraction (FI vs. FC) contrasts. The interaction should highlight the brain areas more active during familiar posture vs. object naming (PN vs. ON) than during novel posture vs. object orientation decision (PO vs. OO), i.e., a contrast that singles out both the input praxicon and action semantic system components. The subtraction contrast between familiar gesture imitation and familiar pantomime to command (FI vs. FC) reveals the visuo‐gestural analysis and input praxicon components. Therefore, the input praxicon is the only remaining cognitive component shared in common in the conjunction between the [(PN vs. ON) by (PO vs. OO)] and (FI vs. FC) contrasts.

Output Praxicon

Fourth, we designed a conjunction to segregate the brain regions whose predominant activity supports the output praxicon component. Pantomime of familiar gestures to command, as compared to the semantic functional associations condition (FC vs. SA), should highlight both the output praxicon and innervatory patterns components. As shown above, the subtraction between familiar and novel gesture imitation (FI vs. NI) should recruit the action semantic, input praxicon, and output praxicon components, involved in the representational route for familiar gesture imitation. Hence, the conjunction between (FC vs. SA) and (FI vs. NI) should reveal the rCBF correlates of the output praxicon, i.e., the only cognitive component shared by both contrasts.

Innervatory Patterns

Fifth, we attempted to highlight the neural substrate of the innervatory pattern component. This component shows up, together with the output praxicon, in the comparison between pantomime to verbal command and semantic functional associations conditions (FC vs. SA). The output praxicon, but not the innervatory pattern, component is highlighted in the (FI vs. NI) contrast. Therefore, the interaction [(FC vs. SA) by (FI vs. NI)] excludes from the analysis the brain areas more active during familiar than novel gesture imitation (including the output praxicon) and segregates the regional cerebral substrate for the innervatory pattern component.

Action Semantic System

Sixth, the action semantic system participates in the pantomime of familiar gestures to command (FC) and in the naming of visually displayed familiar gestures (PN). We segregated the action semantic system in a conjunction including the subtraction between pantomime to verbal command and novel gesture imitation (FC vs. NI), which discriminates the output praxicon and action semantic system components, and the subtraction between posture and object naming (PN vs. ON), which discriminates the visuo‐gestural analysis, input praxicon, and action semantic system components.
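The component-isolation logic used in the contrasts above can be made explicit with a toy set-algebra sketch in Python. The component sets below are our transcription of the task decompositions stated in the text and Figure 1 (an illustrative assumption, not part of the original analysis); a subtraction highlights components recruited by one task but not the other, and a conjunction keeps only the shared component:

```python
# Each task is encoded as the set of model components (Fig. 1) it is
# assumed to recruit. Labels are transcribed from the text; this is an
# illustrative toy model, not measured data.
TASKS = {
    "FC": {"verbal system", "action semantics", "output praxicon",
           "innervatory patterns"},                     # pantomime to command
    "FI": {"visuo-gestural analysis", "input praxicon", "action semantics",
           "output praxicon", "innervatory patterns"},  # familiar imitation
    "NI": {"visuo-gestural analysis", "body knowledge",
           "innervatory patterns"},                     # novel imitation
    "SA": {"verbal system", "action semantics"},        # semantic associations
    "PN": {"visuo-gestural analysis", "input praxicon", "action semantics",
           "verbal system"},                            # posture naming
    "ON": {"object visual analysis", "object recognition",
           "verbal system"},                            # object naming
    "PO": {"visuo-gestural analysis"},                  # posture orientation
    "OO": {"object visual analysis"},                   # object orientation
}

def contrast(a, b):
    """Components highlighted when task a is compared to task b."""
    return TASKS[a] - TASKS[b]

def conjunction(*contrasts):
    """Component(s) shared by every contrast entered."""
    return set.intersection(*contrasts)

# Each targeted component falls out as the unique shared (or residual) element:
output_praxicon = conjunction(contrast("FC", "SA"), contrast("FI", "NI"))
action_semantics = conjunction(contrast("FC", "NI"), contrast("PN", "ON"))
innervatory = contrast("FC", "SA") - contrast("FI", "NI")  # interaction-style exclusion
input_praxicon = conjunction(contrast("PN", "ON") - contrast("PO", "OO"),
                             contrast("FI", "FC"))
visuo_gestural = conjunction(contrast("PO", "OO"), contrast("NI", "FC"))
```

Under these assumed task decompositions, each variable evaluates to a singleton set containing exactly the component the corresponding contrast was designed to isolate.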

Validation of the Model's Prediction

Finally, as noted above, the representational route used for familiar gesture imitation theoretically encompasses several upper limb praxis processing components, namely the input and output praxicons and the action semantic system. We therefore verified a posteriori that the activation peaks associated with each of these components were related to the representational route and not activated in the alternative route. To do so, an inclusive mask was applied to each contrast to restrict the analyses to the brain areas specifically engaged in familiar or novel gesture imitation [(FI vs. NI) or (NI vs. FI), set at P < .001]. This approach offers a further check on the validity of the cognitive architecture posited in the model depicted in Figure 1.

We did not attempt to highlight the neural substrate of the object visual analysis and object recognition system components, since only objects that could not be manipulated were used in the ON and OO conditions.

RESULTS

Representational and Alternative Routes for Imitation of Gestures

The comparisons conducted between the FI and NI conditions showed that two sets of discrete brain areas preponderantly support the imitation of familiar and novel gestures, respectively (Fig. 2A). Familiar gesture imitation (vs. NI) was characterized by rCBF increases in the angular and middle frontal gyri in the left hemisphere, and in the supramarginal gyrus and inferior parietal lobule in the right hemisphere. Conversely, novel gesture imitation (vs. FI) elicited rCBF increases in the left inferior parietal lobule, centred below the anterior portion of the intraparietal sulcus (Ps corr < .05).

Figure 2.

Figure 2

A: Imitation of familiar and novel gestures. Brain areas specifically activated during imitation of familiar (vs. novel) gestures (FI vs. NI; in red) and during imitation of novel (vs. familiar) gestures (NI vs. FI; in green), superimposed on left, superior, and right views of a 3‐D MRI. Activations are displayed thresholded at P < 0.001, uncorrected, in clusters > 50 contiguous voxels (see Table I for activation details). B: Action semantic system. Left, superior, and right render views of a standard brain with superimposed foci of activity associated with the conjunction contrast aiming to segregate the action semantic component. Activations are displayed thresholded at P corr < 0.05 (see Table IIE for activation details).

Additionally, at a lower statistical threshold (P < .001, uncorrected), brain activations showed that the representational route encompasses the inferior parietal lobule and the supramarginal, middle temporal, middle frontal, and superior frontal gyri bilaterally, as well as the inferior frontal and orbital gyri in the right hemisphere. Conversely, activations associated with the alternative route (for novel gesture imitation) were found bilaterally in the cerebellum, precuneus, and postcentral gyrus, and in the inferior and superior parietal lobules in the right hemisphere (Table IA and B).

Table I.

Representational and alternative routes for familiar and novel gesture imitation

Side Structure Coordinates (mm) x, y, z * t
Representational route for familiar gesture imitation (FI vs. NI)
 L Angular gyrus −42 −76 36 5.34a
 L Middle frontal gyrus −38 14 34 5.44a
 R Supramarginal gyrus 54 −58 26 5.87a
 R Inferior parietal lobule 50 −58 50 5.84a
 L Precuneus 0 −64 32 4.32
 L Supramarginal gyrus −62 −56 26 4.49
 L Inferior parietal lobule −40 −68 40 4.28
 L Middle temporal gyrus −62 −42 −10 4.36
 L Middle temporal gyrus −60 −38 2 3.48
 L Superior temporal gyrus −48 −62 28 3.82
 L Superior frontal gyrus −14 38 46 3.77
 R Inferior parietal lobule 46 −66 42 4.69
 R Inferior temporal gyrus 62 −16 −20 4.31
 R Middle temporal gyrus 60 −14 −10 4.60
 R Inferior frontal gyrus 58 26 16 4.11
 R Inferior frontal gyrus 50 18 22 3.92
 R Middle frontal gyrus 36 28 48 4.67
 R Middle frontal gyrus 28 56 −8 3.83
 R Medial frontal gyrus 2 54 −10 3.24
 R Medial frontal gyrus 2 62 −8 3.73
 R Superior frontal gyrus 16 58 −24 4.49
 R Superior frontal gyrus 22 56 14 4.13
 R Orbital gyrus 14 50 −26 3.93
Alternative route for novel gesture imitation (NI vs. FI)
 L Inferior parietal lobule −40 −44 54 5.56a
 L Inferior parietal lobule −46 −40 58 5.16a
 L Precuneus −18 −52 58 3.98
 L Postcentral gyrus −44 −32 48 4.04
 L Posterior cerebellum, lobule VI −2 −60 −20 3.83
 R Precuneus 14 −52 52 4.18
 R Superior parietal lobule 24 −66 50 4.93
 R Postcentral gyrus 66 −24 32 4.54
 R Inferior parietal lobule 68 −30 40 3.75
 R Anterior cerebellum, lobule V 4 −52 −16 3.48
* Coordinates in standard ICBM space.

a P < 0.05, corrected for multiple comparisons in the whole brain volume. Other results are significant at P < 0.001, uncorrected, in clusters > 50 contiguous voxels.

Neural Substrates of Upper Limb Praxis Components

Here, we report results of the conjunctions and interactions analyses aiming to segregate the neural substrates that predominantly support each of the cognitive components proposed in the model depicted in Figure 1.

Visuo‐gestural analysis

The conjunction analysis between novel posture vs. novel object orientation decision (PO vs. OO) and novel gesture imitation vs. familiar gesture production on command (NI vs. FC) disclosed bilateral activation in the middle occipito‐temporal gyrus, encroaching on the location of area MT/V5. In the right hemisphere, the activation extended into the posterior middle and superior temporal gyri, and into the inferior parietal lobule and supramarginal gyrus close to the superior temporal cortex (Ps corr < .05; Fig. 3).

Figure 3.

Figure 3

Functional segregation of gesture‐related cognitive processes. Foci of cerebral activity associated with visuo‐gestural analysis (in red), input praxicon (in green; renamed body‐parts coding), and output praxicon (in blue; renamed praxicon) components, superimposed on posterior (top left), anterior (top right), left (bottom left), and right (bottom right) render views of 3‐D MRI. Activations are displayed thresholded at P < 0.001, uncorrected.

Additional activations (P < 0.001, uncorrected) were found in the left superior parietal lobule and cerebellum, and in the right middle frontal gyrus (Table IIA). Increased activation was observed bilaterally in the occipito‐temporal gyrus in all conditions in which gestures are displayed in the visual modality (PN, ON, FI, NI; see Fig. 4).

Table II.

rCBF correlates of discrete cognitive components for upper limb praxis processing

Side Structure Coordinates (mm) x, y, z * t
A. Visuo‐gestural analysis (PO vs. OO and NI vs. FC)
 L Middle occipital/temporal gyrus −54 −78 6 4.13a
 R Middle occipital/temporal gyrus 52 −74 2 5.92a
 R Middle temporal gyrus 58 −54 6 5.18a
 R Superior temporal gyrus 60 −44 14 4.11a
 R Supramarginal gyrus 64 −46 34 3.47a
 R Inferior parietal lobule 68 −44 26 3.37a
 L cerebellum, lobule VI −40 −52 −28 2.52
 L Superior parietal lobule −30 −82 46 2.46
 R Middle temporal gyrus 68 −26 −4 2.64
 R Precuneus 6 −50 48 2.04
 R Middle frontal gyrus 32 10 48 2.16
B. Input praxicon (renamed body parts coding), FI vs. FC and PN vs. ON by PO vs. OO
 L Inferior temporal gyrus −48 −78 0 4.01a
 L Superior parietal lobule −34 −52 60 3.40a
 R Precentral gyrus 30 −18 54 2.74
 R Postcentral gyrus 40 −30 42 2.46
 R Insula 36 −24 22 2.15
 R Middle occipital gyrus 56 −76 −14 2.81
 R Inferior temporal gyrus 60 −70 −4 2.11
 R Middle frontal gyrus 56 10 42 2.78
 R Middle temporal gyrus 50 −4 −16 2.75
 R Superior temporal gyrus 48 −2 −6 2.49
 R Inferior/Superior parietal lobule 40 −56 60 2.48
C. Output praxicon (renamed Praxicon), FC vs. SA and FI vs. NI
 L Superior temporal sulcus −56 −54 20 3.44a
 L Superior temporal sulcus −46 −60 14 3.41a
 R Inferior frontal gyrus (pars triangularis) 56 22 10 3.47a
 R Inferior frontal gyrus 44 10 24 3.46a
 R Superior temporal sulcus 58 −54 14 3.25a
 R Inferior frontal gyrus 44 12 16 3.24a
 L Inferior frontal gyrus −44 26 −2 1.96
 L Caudate body −12 −2 20 2.17
 R Supramarginal gyrus 58 −48 22 3.03
 R Thalamus 4 −26 12 2.80
 R Superior temporal gyrus 34 12 −22 2.32
D. Innervatory patterns (FC vs. SA by FI vs. NI)
 L Cuneus −16 −84 38 6.46a
 L Medial frontal gyrus (SMA) −16 −4 60 6.06a
 L cerebellum, lobule V 0 −62 −18 5.48a
 R Superior parietal lobule 26 −64 54 5.07a
 L Superior parietal lobule −18 −52 58 4.95
 L Middle temporal gyrus −56 −66 2 3.71
 L Postcentral gyrus −40 −36 54 4.34
 L Precentral gyrus −30 −32 58 3.81
 L Precentral gyrus −40 −12 48 3.58
 L Cingulate gyrus −16 −20 40 3.71
 L Thalamus −10 −22 0 5.00
 L Medial globus pallidus −16 −8 −2 4.55
 L Anterior cerebellum, lobule VI −18 −62 −32 4.51
 R Precuneus 14 −52 52 4.50
 R Middle temporal gyrus 58 −62 −2 4.05
 R Inferior parietal lobule 66 −30 28 4.70
 R cerebellum, lobule VI 16 −58 −16 4.80
 R cerebellum, lobule VI 36 −50 −30 3.99
 R Superior frontal gyrus 20 2 62 4.26
 R Superior frontal gyrus 16 −18 64 3.75
E. Action semantic system (FC vs. NI and PN vs. ON)
 L Middle temporal gyrus −56 −46 2 5.85a
 L Superior temporal gyrus −60 −56 18 5.66a
 L Superior temporal gyrus −46 −60 18 4.78a
 L Inferior frontal gyrus −48 16 6 4.72a
 L Inferior frontal gyrus −54 28 −4 4.51a
 L Inferior frontal gyrus (pars triangularis) −54 18 12 4.14a
 L Inferior frontal gyrus −52 10 16 3.74a
 L Inferior frontal gyrus −40 18 −8 3.56a
 L Middle frontal gyrus −36 20 28 4.53a
 L Middle frontal gyrus −36 56 −2 4.53a
 L Superior frontal gyrus −30 56 −12 3.60a
 L Superior frontal gyrus −8 20 64 4.49a
 L Thalamus −6 −4 6 3.50a
 R Cerebellum, lobule VII 24 −76 −32 3.64a
 R Cingulate gyrus 6 28 34 4.15a
 R Inferior frontal gyrus 50 20 0 3.51a
 R Medial frontal gyrus 16 48 14 3.63a
* Coordinates in standard ICBM space.

a P < 0.05, corrected for multiple comparisons in the whole brain volume. Other results are significant at P < 0.001, uncorrected, in clusters > 10 contiguous voxels.
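The uncorrected criterion in the footnote combines a voxel-level threshold with a cluster-extent cutoff (> 10 contiguous voxels). As a simplified sketch only (real SPM clusters are defined over 3-D voxel connectivity, and the toy data below are invented), the extent filter can be illustrated in one dimension:

```python
import numpy as np

def cluster_extent_filter(suprathreshold, min_extent):
    """Keep only runs of contiguous suprathreshold voxels (1-D)
    whose length exceeds min_extent."""
    out = np.zeros_like(suprathreshold, dtype=bool)
    run_start = None
    for i, v in enumerate(suprathreshold):
        if v and run_start is None:
            run_start = i
        if (not v or i == len(suprathreshold) - 1) and run_start is not None:
            run_end = i + 1 if v else i
            if run_end - run_start > min_extent:
                out[run_start:run_end] = True  # cluster is large enough
            run_start = None
    return out

# Toy example: one 4-voxel cluster and one 12-voxel cluster.
supra = np.array([False] * 3 + [True] * 4 + [False] * 5 + [True] * 12 + [False] * 2)
kept = cluster_extent_filter(supra, min_extent=10)
print(int(kept.sum()))  # only the 12-voxel cluster survives
```

With the extent cutoff at 10, the 4-voxel run is discarded and only the 12 voxels of the larger cluster are reported.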

Figure 4.

Visuo‐gestural analysis. Condition‐specific parameter estimates in voxels representative of activation (P corr < 0.05) in the left (top) and right (bottom) middle occipito‐temporal gyrus in the conjunction contrast aiming to segregate the visuo‐gestural analysis component (Table II). Condition abbreviations: PN, posture naming; ON, object naming; PO, posture orientation decision; OO, object orientation decision; FC, pantomime to verbal command; FI, familiar gestures imitation; NI, novel gestures imitation; SA, semantic functional associations.

Input praxicon

The conjunction analysis between the interaction contrast focused on familiar gesture naming ((PN vs. ON) by (PO vs. OO)) and the subtraction of familiar pantomime to command from imitation (FI vs. FC) disclosed rCBF increases in the left inferior temporal gyrus and the superior parietal lobule (Ps corr < 0.05; Fig. 3). The peak of activity in the inferior temporal gyrus is located near, but ventral to, the peak associated with visuo‐gestural analysis. At a lower statistical threshold (P < 0.001, uncorrected), middle occipital and inferior temporal gyri and superior parietal lobule activations were found in the right hemisphere, as well as activations in pre‐ and post‐central, frontal and temporal areas (Table IIB). Increased activation was observed in the left inferior temporal gyrus in all conditions in which gestures were displayed in the visual modality (PN, ON, FI, NI). In contrast, activation in the left superior parietal lobule was less consistent, being present only during novel object orientation decision and novel gesture imitation (OO, NI; Fig. 5).
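The conjunction analyses reported here were computed with SPM; as a rough illustration of the underlying logic only (not the actual implementation, and with invented toy values), a minimum-statistic conjunction over two voxel-wise t-maps can be sketched as follows:

```python
import numpy as np

def conjunction(t_map_a, t_map_b, t_threshold):
    """Minimum-statistic conjunction of two voxel-wise t-maps.

    A voxel survives only if it exceeds the threshold in BOTH
    contrasts, i.e. min(t_a, t_b) > threshold.
    """
    return np.minimum(t_map_a, t_map_b) > t_threshold

# Toy example: 1-D "brains" with 5 voxels (values are illustrative).
t_fi_vs_fc = np.array([4.2, 1.0, 3.6, 0.5, 2.9])     # e.g. FI vs. FC
t_interaction = np.array([3.9, 3.8, 0.7, 0.2, 3.1])  # e.g. (PN vs. ON) by (PO vs. OO)

mask = conjunction(t_fi_vs_fc, t_interaction, t_threshold=3.09)  # ~P < 0.001
print(mask.tolist())  # only voxels significant in both contrasts survive
```

Only the first voxel exceeds the threshold in both maps, so it alone enters the conjunction.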

Figure 5.

Body‐parts coding (“Input Praxicon”). Condition‐specific parameter estimates in voxels representative of activation (P corr < 0.05) in the left inferior temporal gyrus (top) and superior parietal lobule (bottom) in the conjunction contrast aiming to segregate the input praxicon (renamed body‐parts coding) component (Table IIB). In contrast to the inferior temporal activity, parietal activation is present neither in the FI condition (imitation of familiar gestures) nor in gesture reception tasks, suggesting that this area is engaged by the higher spatial requirements of the NI and OO conditions. Condition abbreviations are given in the Figure 3 legend.

Output praxicon

The conjunction analysis between pantomime to command vs. semantic functional associations (FC vs. SA) and familiar vs. novel gesture imitation (FI vs. NI) disclosed rCBF increases bilaterally near the superior temporal sulcus and in the right inferior frontal gyrus (Ps corr < 0.05; Fig. 3). The bilateral superior temporal activations are located near, but superior and anterior to, the site dedicated to visuo‐gestural analysis. At a lower statistical threshold (P < 0.001, uncorrected), activations were found in the left inferior frontal gyrus, the right superior temporal and supramarginal gyri, and in the thalamus and the body of the caudate nucleus (Table IIC). Increased activations were observed bilaterally in the superior temporal sulcus and inferior frontal gyrus in all gesture production conditions in which familiar gestures were processed, irrespective of their modality (FC, FI; Fig. 6).

Figure 6.

Praxicon (Output Praxicon). Condition‐specific parameter estimates in voxels representative of activation in the left superior temporal sulcus (top) and in the right inferior frontal gyrus (bottom) in the conjunction contrast aiming to segregate the output praxicon (renamed praxicon) component (Table II). Condition abbreviations are given in the Figure 3 legend.

Innervatory patterns

The interaction analysis looking for brain areas more active during pantomime to command vs. semantic functional associations than during familiar vs. novel gesture imitation, i.e., (FC vs. SA) by (FI vs. NI), disclosed rCBF increases in the cuneus and medial frontal gyrus (supplementary motor area) in the left hemisphere, in the superior parietal lobule in the right hemisphere, and in the cerebellar vermis bilaterally (Ps corr < 0.05; Fig. 7). At a lower statistical threshold (P < 0.001, uncorrected), the main activations were found bilaterally in the middle temporal gyrus, in the left cingulate, precentral and postcentral areas, left thalamus and globus pallidus, and right superior frontal gyrus (Table IID).
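At a single voxel, an interaction contrast such as (FC vs. SA) by (FI vs. NI) amounts to applying the weight vector [1, −1, −1, 1] to the four condition estimates. A minimal worked sketch (the condition means below are invented, not real data):

```python
import numpy as np

# Hypothetical condition-wise mean rCBF estimates for one voxel,
# ordered [FC, SA, FI, NI] (illustrative values only).
betas = np.array([82.0, 74.0, 79.0, 78.0])

# Interaction contrast (FC - SA) - (FI - NI): weights [1, -1, -1, 1].
weights = np.array([1.0, -1.0, -1.0, 1.0])

effect = weights @ betas  # (82 - 74) - (79 - 78) = 7.0
print(effect)
```

A positive effect means the FC-over-SA advantage exceeds the FI-over-NI advantage at that voxel, which is exactly what the interaction analysis tests.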

Figure 7.

Innervatory patterns. Transverse MRI sections of the brain with superimposed foci of activity (P corr < 0.05) in the supplementary motor area (left), cuneus (middle), and cerebellum (right), associated with the interaction contrast aiming to segregate the innervatory patterns component (Table II).

Action semantic system

The conjunction analysis between the contrasts (FC vs. NI) and (PN vs. ON) highlighted a large set of activated brain areas, mainly in the left hemisphere, including the middle and superior temporal gyri, the inferior, middle and superior frontal gyri, and the thalamus (Fig. 2B). Cingulate, inferior and medial frontal, and cerebellar activations were found in the right hemisphere (Ps corr < 0.05; Table IIE).

Post‐hoc Validity Test on the Representational Route

In this validation step, the above analyses aiming to segregate the neural substrates of the input praxicon, output praxicon, and action semantic system were restricted to the brain areas specifically engaged in the familiar vs. novel gesture imitation (FI vs. NI) comparison. In this restricted framework, activations associated with the action semantic system and the output praxicon remained significant at the same peak locations, confirming that these areas participated in familiar gesture imitation (Fig. 8A). However, no significant activation was found in association with the input praxicon within the representational route (Fig. 8B), even when the inclusive mask was set at a very low statistical threshold (P < 0.05, uncorrected).
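Restricting an analysis to the areas engaged in the FI vs. NI comparison corresponds to inclusive masking: a voxel is reported only if it is significant in the contrast of interest and also survives a (possibly lenient) threshold in the masking contrast. A minimal sketch with invented toy values, not the actual SPM computation:

```python
import numpy as np

def inclusive_mask(t_map, mask_t_map, t_thresh, mask_thresh):
    """Keep voxels significant in t_map AND above a (possibly lenient)
    threshold in the masking contrast."""
    return (t_map > t_thresh) & (mask_t_map > mask_thresh)

# Toy 1-D maps: a component contrast and the FI-vs-NI masking contrast.
t_component = np.array([4.5, 3.8, 1.2, 5.0])
t_fi_vs_ni = np.array([2.0, 0.3, 3.5, 1.9])

# Lenient mask threshold (roughly P < 0.05 uncorrected, t ~ 1.65).
surviving = inclusive_mask(t_component, t_fi_vs_ni, t_thresh=3.09, mask_thresh=1.65)
print(surviving.tolist())
```

Lowering `mask_thresh` (as was done here down to P < 0.05, uncorrected) makes the mask more permissive, so a component that still fails to appear is convincingly absent from the masked comparison.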

Figure 8.

Representational and alternative routes for gesture imitation. SPM glass brains thresholded at P < 0.001, uncorrected. Top: imitation of familiar gestures (FI–NI). Bottom: imitation of novel gestures (NI–FI). Superimposed light gray (green) triangles show voxels associated (P corr < 0.05) with the input praxicon (renamed body‐parts coding) component. These voxels (see Table II) do not superimpose on activity clusters associated with familiar gesture imitation (top). In contrast, the superior parietal activation superimposes on activity clusters associated with novel gesture imitation (bottom). Superimposed dark gray (red) triangles show voxels associated with the output praxicon (renamed praxicon) component. The bilateral superior temporal and right inferior frontal locations (P corr < 0.05; see Table II) superimpose on activity clusters associated with familiar (top) but not novel (bottom) gesture imitation.

Unexpectedly, the same analysis restricted to the brain areas preferentially engaged in novel gesture imitation (NI vs. FI) highlighted a significant activation (P corr < 0.05) in the left superior parietal lobule at the same peak coordinate (−34 −52 60 mm) reported for the input praxicon in Table IIB. Activation in the inferior temporal gyrus was also found when the inclusive mask was set at a lower statistical threshold (P < 0.05, uncorrected). Hence, the input praxicon component was even more activated during novel than during familiar gesture imitation, contrary to the model's prediction. Note that the left superior parietal and inferior temporal activation peaks were also activated when the imitation conditions (NI, FI) were directly contrasted with the gesture production on command (FC) or action semantic (SA) tasks, suggesting that the so‐called “input praxicon” component plays a role in the analysis of visually presented gestures. As clearly shown in Figure 4, the inferior temporal gyrus is activated in all conditions involving a visual gestural input, irrespective of its familiarity.

DISCUSSION

The aim of the present study was to ascertain the neuroanatomical basis of a popular model of upper limb apraxia [Rothi et al., 1991, 1997b], adapted as depicted in Figure 1 [Peigneux et al., 2000b]. Our results can be summarized as follows. First, as predicted by the model, the routes linking perception to motor execution in the imitation of familiar and novel gestures, respectively, depend on partially distinct neuroanatomical structures. Second, we found a neuroanatomical segregation of the neural substrates of the various upper limb praxis components, each responsible for specific cognitive processes. This confirms neuropsychological observations at the behavioural level [for reviews, see Peigneux and Van der Linden, 2000; Rothi et al., 1997b; Roy and Square, 1994], indicating that cognitive modules are dedicated to the processing of gestures “ … in relation to their nature and the modality through which the instructions eliciting the appropriate response are conveyed” [De Renzi et al., 1982]. Third, the controversial distinction between the so‐called input and output praxicons (i.e., the memory stores for “to‐be‐perceived” and “to‐be‐produced” gestural representations, respectively) was apparently supported at the neuroanatomical level. However, departing from the model's prediction, we found that the neural substrate of the input praxicon participates more in the alternative route for novel gesture imitation than in the representational route for familiar gesture imitation. Therefore, a more parsimonious proposal might be that a single praxicon is actually responsible for holding in memory the codes for the visual and kinetic features of familiar gestures, while the remaining component (i.e., the “input praxicon”) plays a more restricted role in the visual perception of gestures.
These findings and their consequences for our cognitive neuropsychological model of apraxia, however, have to be interpreted in the framework of the functional brain imaging approach used in this experiment.

How Far Can Brain Imaging Inform Us About the Functional Architecture of Praxis Processing?

In the present report, we used functional brain imaging in a group of normal volunteers to identify the neural correlates of several cognitive processes posited by a model of apraxia built mostly on the basis of detailed single‐case neuropsychological analyses in brain‐damaged patients. However, the fundamentals of these two approaches (i.e., single‐case neuropsychology and functional brain imaging) differ radically. In an influential position paper, Caramazza and McCloskey argued that “ … only single‐patients studies allow valid inferences about normal cognitive processes from the analysis of acquired cognitive disorders” [Caramazza and McCloskey, 1989, p 517], mainly because “ … averaging data across brain‐damaged subjects requires both the assumption of homogeneity of relevant cognitive mechanisms prior to brain damage … and the assumption of homogeneity with respect to functional lesions” [McCloskey and Caramazza, 1989, p 592]. For experimental purposes, there are circumstances in which the first assumption may be taken for granted. That is, one can deliberately choose to neglect rare cognitive processes that are highly variable across individuals, in order to concentrate on the predominant ones shared by most subjects in a population. However, homogeneity with respect to functional lesions can never be taken for granted a priori, because the functional property of the lesion has to be defined with regard to the cognitive process of interest, which is precisely the purpose of the experimental phase of the study. That is, subjects would have to be tested during the selection phase using the experimental tasks in order to create a homogeneous group. But in that case, there is no further gain in averaging the selected subjects as a group rather than considering them as multiple case studies already performed during the selection phase. Conversely, if this step is not performed, subjects will likely differ in their functional lesion sites, and averaging dissimilar patients would merely obscure the identification of the process of interest. Hence, the authors stated that only single‐case analyses can reveal a functional cognitive architecture applicable to the entire population.

In a similar manner, averaging data across healthy volunteers in a functional brain imaging study requires both the assumption of homogeneity of the relevant cognitive mechanisms and that of homogeneity of the functional neuroanatomy supporting these cognitive processes. Although there are arguments for taking the first assumption for granted in our experimental design (see above), the second assumption is more disputable. Indeed, many brain imaging studies have emphasized a large anatomical and functional variability (or heterogeneity) in the normal population; one can therefore sense a tension between the assumptions behind the single‐case neuropsychological approach used to create the cognitive model under examination and the neuroimaging approach that made it possible to study the neural bases of this model. On the other hand, all the statistical analyses reported in the present study were performed using a mixed (random) effects model, which is explicitly intended to control for interindividual variability [Friston et al., 1999b] and to highlight the brain areas in which condition‐related CBF variations are detected consistently in each and every subject. In other words, we have looked for the homogeneous component of the functional neuroanatomy of praxis processing, which does not mean that we deny the existence of functional heterogeneity in the neuroanatomy of cognitive processes. Therefore, our results and the discussion that follows have to be apprehended in this precise context: the brain areas showing preferential activity for a specific cognitive process, as identified in the present study, are those reliably and consistently activated across our population, and they represent only a subset of the brain areas that may be engaged in this process by different individuals.

Finally, one should be aware that we have limited ourselves to the particular explanation of praxis offered by cognitive neuropsychology. Alternative interpretations can be found in the motor control, especially computational, literature [e.g., Doya, 2000; Flanagan and Wing, 1997; Haruno et al., 2001; Schall and Bichot, 1998; Wolpert and Ghahramani, 2000; Wolpert and Kawato, 1998], which shows how sensory and motor experiences may interact in movement representation, generation, and production. However, probing such models was beyond the scope of the present study centred on current cognitive models of apraxia.

A Dedicated Route for Novel Gestures Imitation Through Human Body Knowledge?

Imitation is a complex process. The observer perceives the demonstrator's acts, uses visual perception as the basis for an action plan, and executes the motor output [Meltzoff, 2002]. If the observed gesture is already in the observer's repertoire, the imitative response is facilitated because the action plan does not need to be built anew [Rothi and Heilman, 1996]. If the gesture is novel, transformations from vision to action must be computed de novo, a computational process possibly mediated by knowledge of the structure of the human body [Goldenberg, 1995]. Human body knowledge provides a classification of body parts and specifies their boundaries [Meltzoff and Moore, 1997; Sirigu et al., 1991b]. Application of this knowledge permits the coding of the features of novel and meaningless upper limb gestures into the simpler form of a combination of familiar elements. This reduction provides equivalence between imitation and demonstration, which facilitates both the comparison between visually presented gestures and their mapping onto motor programs [Goldenberg, 1999; Goldenberg and Hagmann, 1997]. Our results suggest that activity in the left inferior parietal lobule predominantly supports these computational processes during novel gesture imitation, in keeping with the proposal that this region plays a prominent role in coding the topographical organization of body parts [Goldenberg, 2001; Hermsdorfer et al., 2001]. The activation peak, located below the anterior portion of the intraparietal sulcus, is only 10 mm medial and superior to the peak activated during the discrimination of photographs of meaningless upper limb postures (−54 −40 46 mm) [Hermsdorfer et al., 2001]. The similarity of these results is remarkable given that both studies aimed to probe the neural substrate of the human body knowledge component using totally different approaches.

These findings may seem to depart from previous reports suggesting that familiar and novel actions share the same neural network when the aim of visual perception is to imitate [Decety et al., 1997; Grèzes et al., 1998, 1999], and that the right occipito‐parietal pathway is predominant for the imitation [Decety et al., 1997; Grèzes et al., 1999] or recognition [Decety et al., 1997] of meaningless actions. The divergence is of interest because those findings raised substantial doubts about the functional validity of the hypothesized division between alternative and representational routes in Rothi's model [Grèzes et al., 1999; Grèzes and Decety, 1998]. However, a prominent feature of the studies mentioned above is that regional cerebral activity was recorded during observation and/or mental simulation of actions, whereas we scanned volunteers during actual production of large upper limb movements. Although considerable evidence supports the assumption that common mechanisms are involved in motor imagery and motor behaviour [e.g., Cochin et al., 1999; Decety, 1996; Decety et al., 1994; Parsons, 1994; Sirigu et al., 1996; Stephan et al., 1995], this is not true in all cases, and functional differences may be found at the neuroanatomical level [Deiber et al., 1998; Gerardin et al., 2000]. A recent meta‐analysis of the literature clearly showed that, despite a generally good overlap between action execution, simulation, and observation, each of these target processes additionally recruits specific brain areas [Grèzes and Decety, 2001]. Therefore, care must be taken when drawing conclusions relevant to the mechanisms involved in the actual performance of gestures and the production of apraxic symptoms on the sole basis of movement perception and simulation studies. Apraxia is clinically assessed using real gesture production tasks, and the hypothesis of an alternative route is supported by neuropsychological studies that have documented imitation deficits restricted to [Goldenberg and Hagmann, 1997; Mehler, 1987], or dramatically aggravated for [Peigneux et al., 2000b], novel upper limb gestures following left parietal lesions, in line with the results of the present experiment.

Regarding hemispheric dominance, few brain imaging studies have investigated cerebral activity during real imitation of actions, and they have concentrated mostly on the imitation of meaningless finger configurations [Iacoboni et al., 1999, 2001; Krams et al., 1998; Tanaka et al., 2001] or the digital manipulation of small geometric objects [Chaminade et al., 2002; Decety et al., 2002]. Within this framework, imitation of novel finger configurations was associated with activations in several brain areas, including the inferior parietal lobule bilaterally [Chaminade et al., 2002; Krams et al., 1998; Tanaka et al., 2001] or predominantly in the right hemisphere [Iacoboni et al., 1999]. Note that activation was nearly bilateral in the latter study [see Iacoboni et al., 2001, p. 13998]. Bilateral or right‐sided activations when imitating finger configurations are consistent with the proposal that the conceptual and hemispheric/anatomical requirements for body‐part coding differ between arm and finger gestures [Goldenberg, 1996; Goldenberg et al., 2001]. Arm configurations are determined by the relationships between a number of perceptually distinct body parts (e.g., palm or back of hand, forearm, cheek, nose, ear, etc.), and coding them necessitates knowledge about the classification and boundaries of these body parts. In contrast, fingers are perceptually similar except for the thumb, and coding their configuration is primarily achieved by determining their spatial position with respect to the other fingers. It follows that imitation of finger configurations does not rely heavily on body‐part coding. Rather, it places a higher demand on visual perceptual analysis and fine‐grained discrimination processes, which depend on the integrity of the right hemisphere [Goldenberg, 1997, 2001]. In the present study, most of the gestures to imitate were arm configurations. Moreover, subjects were asked to imitate the demonstrated gesture as closely as possible in all aspects, including arm position, even when the focus of the imitation was on the finger configuration. Therefore, the predominance of left parietal activation during imitation of novel gestures is much more likely to be due to body‐part coding requirements. This interpretation is consistent with neuropsychological data. Patient HT demonstrated comprehension deficits restricted to body parts following left parietal lesions [Suzuki et al., 1997], and imitation of meaningless arm movements and postures is particularly defective in left‐brain [Goldenberg, 1995; Mehler, 1987; Weiss et al., 2001] but not right‐brain [Hermsdorfer et al., 1996] damaged patients, in whom imitation of finger configurations is predominantly impaired [Goldenberg, 1996, 1999; Goldenberg and Hagmann, 1997; Goldenberg and Strauss, 2002].

It should be noted that when imitation of novel or familiar gestures is directly contrasted with pantomime of familiar gestures to verbal command or with the semantic functional association condition (NI vs. FC, FI vs. FC, NI vs. SA, FI vs. SA), similar patterns of rCBF increases are observed bilaterally in the occipito‐parietal pathway, with more extensive activations in the right hemisphere. Hence, our data do not contradict, but rather refine, gesture observation and simulation studies suggesting that a similar dorsal network may be involved for visually perceived novel and familiar gestures when the aim of the visual perception is to imitate [Decety et al., 1997; Grèzes et al., 1998, 1999]. Activation of the dorsal network during imitation is consistent with studies showing that the parietal cortex houses transformational processes linking perception to action [Andersen et al., 1997; Freund, 2001; Goodale and Haffenden, 1998; Goodale and Milner, 1992], participates in sensorimotor integration by maintaining an internal representation of the body's state, as predicted by computational models [Wolpert et al., 1998], and is involved in mental transformations of the body‐in‐space [Bonda et al., 1995, 1996a].

Visuo‐Gestural Analysis and Area MT/V5

Our results showed that bilateral activation of the occipito‐temporal junction, encroaching on area MT/V5, predominantly supports the visuo‐gestural analysis stage. The occipito‐temporal junction was more activated in all conditions in which novel or familiar gestures, either static or dynamic, were visually perceived, i.e., during orientation decision (PO), naming (PN), and imitation (FI and NI) (Fig. 4). The finding of occipito‐temporal junction activity associated with visuo‐gestural analysis replicates our prior study [Peigneux et al., 2000a] and is in line with the role of this region in the observation of hand/arm postures [Hermsdorfer et al., 2001] and actions [Grèzes et al., 1998; Perani et al., 2001; Rizzolatti et al., 1996b], implied actions [Kourtzi and Kanwisher, 2000], biological motion [Grèzes et al., 1998; Grossman and Blake, 2001], and action affordances evoked by object visual perception [Grèzes and Decety, 2002]. Hence, occipito‐temporal and MT/V5 activity participates in an early cognitive level of visuo‐gestural analysis [for a discussion of this topic, see Peigneux et al., 2000a] that is mandatory in perception for description (e.g., gesture recognition) as well as in perception for action (e.g., imitation). Also, a functional segregation between gesture and object visual processing at this stage is concordant with neuropsychological studies. Several patients unable to recognize pantomimes of tool‐use gestures performed the related gesture flawlessly upon visual presentation of the object [Rothi et al., 1986]. Conversely, other brain‐damaged patients correctly recognized or named gestures but not the related objects [Manning and Campbell, 1992; Schwartz et al., 1998].

Not yet reported to our knowledge, but not unexpected, is the finding that area MT/V5 is also activated during actual imitation. However, two different processes were involved during the scanning time window: (1) watching the gesture demonstration on the video monitor, and (2) performing the demonstrated movement. It is therefore conceivable that activation of area MT/V5 during actual imitation was due to the gesture observation component. However, area MT/V5 might be additionally activated during gesture production because one's own movement is partially monitored visually, especially when the performed gesture involves positioning the arm in extrapersonal space. Visuo‐gestural analysis may thus combine with proprioceptive and kinaesthetic information to inform the parietal areas about the current state of the movement. The fact that area MT/V5 is activated bilaterally when gesture production to verbal command is contrasted with the semantic functional association condition (FC vs. SA) (data not shown) supports this proposal, because there is no visuo‐gestural input other than the subject's own movement in the FC condition. Alternatively, activation of area MT/V5 may result from motion processing on the basis of proprioceptive inputs, in keeping with the demonstration that tactile stimulation of the hand with a moving brush activates the MT/V5 complex in blindfolded subjects [Hagen et al., 2002]. Further studies are needed to disentangle these interpretations.

Do Segregated “Input” and “Output” Praxicons Exist?

One controversial proposal in the Rothi et al. [1991, 1997b] model is the segregation between the input and output praxicons, used for entering and issuing gestural representations, respectively. Support for this proposal originates from the description of a patient whose pantomime to verbal command was superior to pantomime imitation in the context of preserved gesture naming, a syndrome named conduction apraxia [Ochipa et al., 1990, 1994]. According to the authors, conduction apraxia could not be accommodated by an impairment of a single praxicon, because correct gesture naming implied the integrity of visuo‐gestural analysis, stored gestural representations, and action semantics. Therefore, the aggravated performance during imitation (vs. pantomime to verbal command) could not be explained by a gesture perception deficit. Consequently, Rothi et al. posited a separation between the representations for familiar gesture production and perception to explain this particular observation. However, replication studies are still awaited, and other interpretations are plausible. Compared to the imitation condition, naming familiar gestures seems an easier task that might merely require access to a few salient features of the perceived movement for successful performance. Hence, a moderate deficit in the visual analysis of gestures might explain the higher number of errors during imitation than during pantomime to verbal command in the context of preserved gesture naming. If this is the case, there is no need to invoke two segregated stores for the physical attributes of familiar gestures. Moreover, such a separation between two memory stores is questionable given the results of numerous brain imaging studies showing that similar brain regions are engaged in action perception and preparation [e.g., Buccino et al., 2001; Di Pellegrino et al., 1992; Iacoboni et al., 1999; Nishitani and Hari, 2000; Rizzolatti et al., 1996b]. The results of these studies are in agreement with the simulation theory [Gallese and Goldman, 1998] and the common‐coding hypothesis [Prinz, 1997; Sturmer et al., 2000], which predict that perceived events and planned actions share a common representational domain.

Input praxicon or visual analysis of static body parts?

Contrary to the model's prediction, we found that the brain areas associated with the so‐called “input praxicon” are not predominantly activated during familiar gesture imitation. Rather, an inferior temporal area is activated during all conditions in which a familiar or a novel gesture is visually presented (FI, NI, PN, ON), more than when the gesture is solely evoked from memory (FC) (Fig. 5). This activation pattern is functionally inconsistent with the role attributed to a praxicon in the model proposed by Rothi et al. [1991], i.e., a memory store for visuo‐kinaesthetic gestural representations. As an alternative interpretation, we suggest that the so‐called “input praxicon” is actually involved in the visual analysis of static body parts. In this respect, one may note that static information, available at any instant in time, is analysed by cortical systems different from those analysing dynamic information, available only through the passage of time. Actions can involve both static (i.e., postures) and dynamic (i.e., gestures) phases, and an action can be imitated or evoked from memory by copying its end state (the final phase) and/or its motion (the dynamic phase transitions) [Carey et al., 1997; Perrett et al., 1989]. As discussed above, visuo‐gestural analysis involves activity in the middle occipito‐temporal gyrus and area MT/V5, a region that participates in real and implied motion processing. A slightly more ventral activation in the inferior temporal gyrus is associated with the input praxicon, whose role predominates during the visual perception of gestures. Scrutiny of the stimuli used in our experiment shows that only postures (i.e., static stimuli) were used in the PN and ON conditions. Therefore, it is arguable that the between‐group conjunction analysis (Table IIB) aiming to segregate the input praxicon mainly isolated the static component of the visual analysis of actions. Activation in the inferior temporal region is, therefore, consistent with the interpretation that this area is involved in the high‐level visual analysis of static structured stimuli in both monkeys and humans [for a review, see Tanaka, 1997]. Moreover, the locus of inferior temporal activation associated with the input praxicon falls close to the location of the extrastriate body area, a region of the lateral occipito‐temporal cortex selectively involved in processing the appearance of human bodies [Downing et al., 2001]. Likewise, cellular neurophysiology has shown in macaques that the inferotemporal cortex includes cell populations selectively activated by the sight of static body parts [Perrett et al., 1990; Wachsmuth et al., 1994]. A specialisation of the inferior temporal area in the visual analysis of body parts is more consistent with its higher activity during novel than familiar gesture imitation (Fig. 5), because human body knowledge mediation for novel gestures puts a higher demand on body‐parts coding, as discussed in the preceding section. Presumably, the additional activation found in the left superior parietal cortex during imitation of novel gestures reflects access to this latter component of body‐parts analysis. Note that a specialisation of the inferior temporal area in the visual analysis of body parts should not be confused with the topographical organization of these body parts, i.e., the human body knowledge discussed above, which predominates in the left inferior parietal lobule.

Output praxicon and gestural representations

At variance with the so‐called input praxicon, whose associated brain areas were mainly activated during visual perception of gestures, the brain areas associated with the output praxicon were activated more during production of familiar (FC, FI) than of novel (NI) gestures, i.e., when a representation was available in long‐term memory (Fig. 6). Increased rCBF in superior temporal and inferior frontal regions during familiar (vs. novel) gesture production suggests that the output praxicon houses the codes representing the physical attributes of gestures (i.e., their shape and kinetic parameters) in a distributed bilateral temporal and frontal network. The superior temporal sulcus (STS) is principally known for its complex sensory properties and for the presence of a large number of neurons responding to the observation of various biological actions in monkeys [Perrett et al., 1985, 1989] and humans [Beauchamp et al., 2002; Bonda et al., 1996b; Grossman et al., 2000]. It is also activated by static images of the face and body, suggesting that it is sensitive to implied motion and, more generally, to stimuli that signal the actions of another individual [for reviews see Allison et al., 2000; Carey et al., 1997]. Activation of the STS during gesture production seems, therefore, unexpected. However, the superior temporal gyrus was found to be activated when subjects observed actions for deferred imitation [Grèzes et al., 1998] and during movement preparation for learned visuomotor associations [Toni et al., 2001] or copied hand movements [Krams et al., 1998]. A recent study reported increased activity in the human STS during finger movement observation and imitation, even in the absence of direct vision of the imitator's hand [Iacoboni et al., 2001].
The peak coordinate (57, −50, 16 mm) of STS activation in this latter report closely overlaps with the bilateral STS activation reported in the present study (58, −54, 14 mm and −56, −54, 20 mm) during imitation and pantomime to verbal command of familiar gestures. Its location appears to correspond to the rhesus monkey STS region anatomically connected to the inferior parietal lobule [Seltzer and Pandya, 1994]. According to Iacoboni et al. [2001], the visual representation of biological motion located in the STS is activated during action execution to provide an early description of the action to parietal mirror neurons, and then to the inferior frontal cortex, in which the goal of the action is coded. Sensory copies of the imitated or pantomimed action are then sent back to the STS. Hence, the STS region may be a site at which observed or imagined actions and reafferent sensory copies of the performed action can be compared for monitoring purposes [Iacoboni et al., 2001]. Accordingly, the left superior temporal gyrus is activated more when subjects imitate the actions of other individuals [Chaminade et al., 2002] than when the subject's own actions are imitated by others [Decety et al., 2002]. Activations within the inferior frontal gyrus, especially in Brodmann areas (BA) 44 and 45, were likewise reported during observation [Decety et al., 1997; Grafton et al., 1996; Rizzolatti et al., 1996b], imagination and execution [Gerardin et al., 2000] of gestures, imitation and preparation of imitation [Iacoboni et al., 1999; Krams et al., 1998], imagery of motion and of hand movements [Binkofski et al., 2000; Parsons et al., 1995], object manipulation [Binkofski et al., 1999], and visual perception of tools and manipulable objects [Chao and Martin, 2000; Grèzes and Decety, 2002; Murata et al., 1997].
Others have found that the bilateral premotor cortex is activated in a somatotopic manner during both observation and execution of various mouth, hand, and foot actions [Buccino et al., 2001]. Brodmann areas 44 and 45 are considered homologous to the region labelled F5 in the macaque [Matelli et al., 1985], in which mirror neurons code both for specific grasping movements performed by the monkey and for the same movements seen performed by other individuals [e.g., Di Pellegrino et al., 1992; Gallese et al., 1996; Rizzolatti et al., 1996a; see Rizzolatti et al., 2002, for a detailed review]. Resonance behaviour in STS and BA 44/45 mirror neurons [Rizzolatti et al., 1999] entails that the observed action is already part of the observer's behavioural repertoire. Therefore, combined activation of these areas may participate in the encoding and maintenance in long‐term memory of the visual and kinetic features of familiar gestures and postures, i.e., the function of a praxicon.

Does functional segregation exist? Yes. Between "input" and "output" praxicons? No.

To summarize, distinct neural bases were found for the so‐called input and output praxicons, and these components probably support different functions. Indeed, regional cerebral activity is modulated by gesture familiarity in the "output" praxicon component only. Conversely, activity in the "input" praxicon mainly depends on the visual presentation modality, irrespective of gesture familiarity. According to brain imaging and neurophysiological data, the latter seems related to the high‐level visual analysis of static postural stimuli and body parts. Given its anatomical position in the inferotemporal pathway, it plausibly participates in the recognition of gestures on the basis of the analysis of static action phases. By contrast, detailed representations of the visual and kinetic features of familiar gestures and postures are stored in the output praxicon, which is implemented in a network mainly encompassing the superior temporal and ventral premotor regions. Therefore, only the output praxicon truly deserves the appellation praxicon, whereas the so‐called input praxicon component should rather be labelled body‐parts coding. Consequently, we will use these new labels throughout the rest of this report.

Interestingly, our data yielded evidence that the codes for the physical attributes of a gesture are partly stored close to the location involved in elaborating the visual description of the gesture's configuration, around the occipito‐temporal junction and area MT/V5. This effect is particularly clear in the left hemisphere (Fig. 3), in which the cerebral correlates of static body‐parts coding and of the praxicon are centred on the inferior temporal gyrus and the superior temporal sulcus, respectively. This finding extends to human gestures the demonstration that knowledge about the attributes of an object is stored close to the regions of the cortex that mediate the perception of those attributes [Martin et al., 1995].

Innervatory Patterns

According to Rothi et al. [1997b], representations of familiar movements are coded in the praxicon in a three‐dimensional supramodal code (spatial and kinetic parameters). These supramodal space–time representations must, therefore, be transposed into an innervatory pattern before the movement actually takes place. Likewise, human body knowledge permits the coding of the physical characteristics of novel, meaningless upper limb gestures into the simpler form of a combination of familiar elements [Goldenberg, 1995], a code that should subsequently be mapped onto motor programs at the innervatory patterns stage. Our results suggest that a set of cortico‐subcortical brain areas, including the pre‐ and post‐central gyri of the frontal cortex, the supplementary motor area (SMA), the superior parietal lobule, and the cerebellum, participates in this transcoding process. Left hemispheric predominance of activations was expected, since all subjects performed with the right (contralateral) hand only. However, we did not design a specific control task for the motor efferent component in this experiment. It is, therefore, plausible that these brain regions support not only the transcoding process per se, but also various processes involved in movement implementation and sensorimotor regulation.

In the monkey, the SMA and superior parietal lobule are described as part of the multiple parallel parieto‐frontal circuits devoted to sensorimotor transformations [Houk and Wise, 1995]. In humans, these regions are hypothesized to be similarly involved in the transformation of sensory information into action [Rizzolatti et al., 1998]. Neuropsychological studies suggest that SMA dysfunction contributes to the characteristic pattern of ideo‐motor and/or limb‐kinetic apraxia observed in patients suffering from corticobasal degeneration (CBD) [Blondel et al., 1997; Jacobs et al., 1999; Leiguarda et al., 1994], and a significant metabolic decrease is observed in the SMA and superior parietal areas in CBD patients with significant apraxic symptoms [Peigneux et al., 2001]. However, few studies have reported clear cases of apraxia with circumscribed SMA lesions [Marchetti and Dellasala, 1997; Watson et al., 1986]. This paucity was tentatively explained by the fact that a defect caused by a unilateral dominant premotor lesion may be compensated by the contralateral hemisphere [Leiguarda and Marsden, 2000]. Such compensation cannot occur in CBD because of the bilateral extent of cortical hypometabolism in lateral premotor and supplementary motor areas [Garraux et al., 2000; Peigneux et al., 2001]. Therefore, our results are in line with the proposal that SMA activity participates in the transcoding process, which provides information to the motor cortex for implementation [Rothi et al., 1997b]. Implementation may be assisted by encapsulated motor programs for highly automated movements, as suggested early on by Liepmann [1908], who believed that kinetic memories are stored in the sensorimotorium encompassing parietal and motor (BA 6 and 8) areas. For instance, writing one's own signature with the big toe activates the secondary sensorimotor cortices of the hand, the extremity with which this action is usually performed [Rijntjes et al., 1999].
This suggests an effector‐independent storage of highly skilled motor programs that can facilitate motor implementation. Accordingly, electrical microstimulation at different sites in the motor and premotor cortices causes monkeys to make coordinated actions that follow a functional topography of the spatial locations to which the movements are directed [Graziano et al., 2002].

Finally, apraxic symptoms have never been associated with lesions restricted to the cerebellum. Nevertheless, increased cerebellar activation has been reported during movement imagination [Decety et al., 1994; Jueptner et al., 1997], preparation [Deiber et al., 1996; Krams et al., 1998], and observation with the aim to imitate [Grèzes et al., 1998, 1999], although the highest activations are usually found during actual movement execution [Deiber et al., 1996; Jueptner et al., 1997; Krams et al., 1998]. The cerebellum also plays a significant role in the early phases of acquisition and planning of motor sequences [Doyon et al., 2002], and is known to participate in a wide variety of cognitive and emotional processes [e.g., see Marien et al., 2001; Middleton and Strick, 1998; Rapoport et al., 2000; Salman, 2002]. Moreover, a modular organization of internal models of tool manipulation has recently been reported in the cerebellum using fMRI [Imamizu et al., 2003], extending the predictions of the MOSAIC computational model [Haruno et al., 2001; Wolpert and Kawato, 1998] from the "motor" to the "cognitive" cerebellum. In the present study, we suggest that the vermian cerebellar activation associated with the innervatory pattern stage merely reflects its contribution to movement regulation [Deiber et al., 1996; Jueptner et al., 1996; Kitazawa, 2002] and to the integration of simple movements into more complex ones [Ramnani et al., 2001; Thach, 1998] during actual movement production.

Action Semantic System

Rothi et al. [1991, 1997b] incorporated into their model the proposal by Roy and Square [1985] of a core conceptual, or action semantic, system that involves three kinds of knowledge: knowledge of the functions that tools and objects may serve, knowledge of actions independent of tools, and knowledge about the organization of single actions into sequences. For instance, the semantic functional association task (SA), in which objects must be matched on the basis of their functional use, relies on the first form of knowledge. Posture naming (PN) and pantomime to verbal command (FC) are other conditions in which access to the other components of the action semantic system seems mandatory for successful performance. Clearly, probing a hypothetical neural differentiation between these components of the action semantic system is beyond the scope of our experiment and deserves further investigation. Likewise, we do not believe that these data are sufficient to ascertain the segregation of a dedicated action‐semantic system according to the multiple semantic systems hypothesis, although numerous studies suggest that knowledge from different conceptual categories depends on partially segregated neural systems [e.g., Coslett et al., 2002; Martin et al., 1996; Perani et al., 1999; Silveri et al., 1997; Tranel et al., 1997; Warrington and McCarthy, 1987; for critical discussion see Phillips et al., 2002].

In the present study, action‐semantic–related rCBF increases predominated in a large‐scale temporo‐frontal network in the left hemisphere. Accordingly, observation of meaningful actions [Decety et al., 1997; Grèzes et al., 1998] and silent verbalization of tools or action verbs [e.g., Grabowski et al., 1998; Grafton et al., 1997; Grèzes and Decety, 2002; Martin et al., 1995; Warburton et al., 1996], which demand access to the semantic domain, mainly activate the inferior frontal gyrus (particularly Broca's area) and the inferior and middle temporal gyri [for a meta‐analysis review of brain imaging data, see Grèzes and Decety, 2001].

Likewise, action naming activates the left frontal cortex [Damasio et al., 2001], and left frontal cortex stimulation (using repetitive TMS) shortens response latencies when naming actions [Cappa et al., 2002]. Naming [Grafton et al., 1997; Martin et al., 1996] or recognition [Grafton et al., 1997; Perani et al., 1995] of man‐made tools similarly activates left frontal regions. Finally, strongly left‐sided activations in frontal areas were observed in two studies in which tool‐use gestures were compared with simple movements during real or imagined pantomimes upon verbal command [Choi et al., 2001; Moll et al., 2000].

Disruptions of the action semantic or conceptual system have been reported in the literature. However, these reports are difficult to interpret because of terminological confusion and the paucity of neuroanatomical information. It was claimed that ideational apraxia reflects an impairment of the praxis conceptual system, including the loss of knowledge related to tools [De Renzi and Lucchelli, 1988], but others argued that the term must be restricted to action sequencing disorders [Poeck, 1983]. Subsequently, Ochipa et al. [1989] proposed the term conceptual apraxia to qualify disturbances of the action semantic system other than action‐sequencing disorders. Conceptual apraxia was reported in patients suffering from Alzheimer's disease [Dumont and Ska, 2000; Foundas et al., 1999; Greenwald et al., 1992; Ochipa et al., 1992; Schwartz et al., 2000], in which temporal cerebral structures are often damaged early. However, a high correlation exists between performance in pantomime to verbal command and verbal comprehension deficits [Dumont and Ska, 1998; but see Ochipa et al., 1992, for divergent results]. It is, therefore, unclear whether conceptual apraxia in Alzheimer's disease reflects the disruption of a specific action semantic subsystem rather than a global conceptual impairment. Note that occipito‐temporal infarctions seem to spare gesture recognition [Ferreira et al., 1997; Rothi et al., 1986], further suggesting that the areas most important for action semantics are located in more anterior temporal and in ventral premotor regions, whose importance for action understanding has been emphasized [Iacoboni et al., 2001; Rizzolatti et al., 2002].

Implications for a Cognitive Model of Upper Limb Apraxia

Given the considerations discussed above, our results partially support the neuropsychological model of apraxia adapted from Rothi et al. [1991, 1997b] (see Fig. 1). However, the neuroanatomical segregation between the so‐called input and output praxicons is not functionally supported. Therefore, a more parsimonious proposal is that a single praxicon is responsible for holding in memory the codes for the physical features of familiar gestures. In addition, we suggest a new component dedicated to the visual perception of body parts, whose location corresponds to the extrastriate body area described by Downing et al. [2001]. Although its role looks very similar to that played by the visuo‐gestural analysis component, its activity would be restricted to the processing of static information. Moreover, its preferential activation vanishes when the analysis is restricted to the brain areas specifically involved in familiar (vs. novel) gesture imitation, which is not the case for novel (vs. familiar) gesture imitation. This suggests that the visuo‐gestural analysis component is able to trigger directly the codes for the visual and kinetic features of familiar gestures contained in the praxicon. Of course, a complementary interpretation is that the coding of novel gestures into the form of a topographical combination of significant body parts (which takes place in the human body knowledge component) puts a higher demand on the visual analysis of these body parts.

Also, the relation between the conceptual features of a gesture and the codes for its physical properties should be reconsidered in the framework of a single praxicon. When pantomiming on verbal command, the action semantic system triggers the information contained in the praxicon for motor implementation at the innervatory pattern stage, as already stated in the model. However, what happens during gesture naming upon visual demonstration? The most likely hypothesis is that the information passes from visuo‐gestural analysis to the praxicon component and is then transmitted to the action semantic system. Therefore, there is a bidirectional flow of information between the action semantic and praxicon components, although schematic, complementary information arising from the link between the body‐parts coding and action semantic components can serve as a substitute for gesture recognition. These modifications are tentatively illustrated in the revised model depicted in Figure 9.

Figure 9

Revised neuropsychological cognitive model of upper limb praxis processing. Solid arrows indicate the directional flow of information between gesture‐dedicated components. Successful completion of any gesture‐related task (e.g., pantomime to verbal command, familiar or novel gesture imitation, etc.) requires serial access to several of these components. Dotted arrows indicate potential alternate connections, either not normally used or awaiting experimental or neuropsychological confirmation (see details in text).

This revised model accounts as well as, or better than, the original for the neuropsychological dissociations reported in the apraxia literature. For instance, a relative disconnection between the visuo‐gestural analysis and praxicon components may explain poorer performance in imitation than in pantomime to command for the same set of familiar gestures [Ochipa et al., 1994; Peigneux et al., 2000b]. Conversely, a partial or complete disconnection between the action semantic and praxicon components accounts for improved performance during imitation as compared to pantomime to verbal command [Alexander et al., 1992; Belanger et al., 1996; Lehmkuhl et al., 1983; Schnider et al., 1997]. Note that a similar performance profile could be observed following impairment of gestural representations in the praxicon, because familiar gestures may be imitated through access to the alternative route (normally available for imitation of novel gestures). In this revised model, we have kept the distinction between the visual analysis of objects and of gestures. We also acknowledge the possibility of a direct link between the praxicon and the object visual analysis or object recognition system. Although not specifically investigated in our experiments, these distinctions are supported by several neuropsychological dissociations. Patients with spared object naming may be impaired in the visual recognition of gestures [Rothi et al., 1986], or may be unable to pantomime the use of an object from a picture of that object while being able to imitate and to pantomime to command [Pena‐Casanova et al., 1985]. Other patients, although severely impaired in naming objects displayed in pictures, are able to identify flawlessly the same objects through pantomimes demonstrating their use [Ferreira et al., 1997, 1998; Magnie et al., 1999; Schwartz et al., 1998], or even to initiate the gesture corresponding to the object's use from the unnamed picture [Riddoch and Humphreys, 1987; Sirigu et al., 1991a].
Interestingly, patient JB, who could not name objects on visual presentation but flawlessly demonstrated their use in the same modality, often succeeded in retrieving the name of the object after having performed the gesture [Riddoch and Humphreys, 1987]. In the framework of our revised model, impaired object naming suggests that object visual information cannot access the action semantic system. Spared gesture production on object visual presentation is performed on the basis of an affordance [Gibson, 1979; Riddoch et al., 1998] from the object visual analysis component to the praxicon, or alternatively through the link between the praxicon and the object's structural description in the object recognition system. Successful gesture naming after gesture production suggests that the action semantic system received from the praxicon the information that it could not gain from the object analysis systems, in line with our assumption of a bidirectional flow between these components.

Conclusions

For many years, neuropsychological studies in brain‐damaged patients have demonstrated that the richness of apraxic symptoms cannot be reduced to the few classical clinical labels defined at the beginning of the twentieth century. In this respect, the development of neuropsychological and cognitive models [Rothi et al., 1991, 1997b; Roy and Square, 1985, 1994] has offered more advanced theoretical frameworks to account for symptom diversity and to overcome fruitless taxonomical debates. However, little is known of their underlying neuroanatomical organization, and only a few attempts have been made to provide a wide‐range synthesis of gesture‐specific processes [e.g., see Leiguarda, 2001; Leiguarda and Marsden, 2000], despite the considerable extent of our current knowledge about the neuroanatomy of motor and sensory systems. Here, we provide evidence that theoretically grounded brain‐imaging studies in healthy subjects may not only extend our understanding of the neural processes involved in normal gesture processing and in upper limb apraxia symptoms, but also contribute to the development of cognitive models that generate testable predictions. Hopefully, further studies in apraxic patients and healthy subjects will test the pertinence of these predictions and lead to further revisions of this work‐in‐progress neuropsychological model of upper limb apraxia.

Acknowledgements

We thank the technical staff of the Centre de Recherches du Cyclotron for kind and professional assistance, A. Komaromi for support and gesture demonstration on videotapes, P. Maquet for helpful comments, E.A. Roy for insightful advice on the (SA) condition, R.S.J. Frackowiak and K.J. Friston for providing SPM software, and an anonymous discussant for very insightful criticisms on the static/dynamic distinction in gesture processing. P.P. was supported by PAI and FMRE. G.G. and S.L. are supported by FNRS.

REFERENCES

  1. Alexander MP, Baker E, Naeser MA, Kaplan E, Palumbo C (1992): Neuropsychological and neuroanatomical dimensions of ideomotor apraxia. Brain 115: 87–107.
  2. Allison T, Puce A, McCarthy G (2000): Social perception from visual cues: role of the STS region. Trends Cogn Sci 4: 267–278.
  3. Andersen RA, Snyder LH, Bradley DC, Xing J (1997): Multimodal representation of space in the posterior parietal cortex and its use in planning movements. Annu Rev Neurosci 20: 303–330.
  4. Beauchamp MS, Lee KE, Haxby JV, Martin A (2002): Parallel visual motion processing streams for manipulable objects and human movements. Neuron 34: 149–159.
  5. Belanger SA, Duffy RJ, Coelho CA (1996): The assessment of limb apraxia: an investigation of task effects and their cause. Brain Cogn 32: 384–404.
  6. Binkofski F, Buccino G, Stephan KM, Rizzolatti G, Seitz RJ, Freund HJ (1999): A parieto‐premotor network for object manipulation: evidence from neuroimaging. Exp Brain Res 128: 210–213.
  7. Binkofski F, Amunts K, Stephan KM, Posse S, Schormann T, Freund HJ, Zilles K, Seitz RJ (2000): Broca's region subserves imagery of motion: a combined cytoarchitectonic and fMRI study. Hum Brain Mapp 11: 273–285.
  8. Blondel A, Eustache F, Schaeffer S, Marie RM, Lechevalier B, de la Sayette V (1997): Etude clinique et cognitive de l'apraxie dans l'atrophie cortico‐basale. Rev Neurol (Paris) 153: 737–747.
  9. Bonda E, Petrides M, Frey S, Evans A (1995): Neural correlates of mental transformations of the body‐in‐space. Proc Natl Acad Sci USA 92: 11180–11184.
  10. Bonda E, Frey S, Petrides M (1996a): Evidence for a dorso‐medial parietal system involved in mental transformations of the body. J Neurophysiol 76: 2042–2048.
  11. Bonda E, Petrides M, Ostry DJ, Evans A (1996b): Specific involvement of human parietal systems and the amygdala in the perception of biological motion. J Neurosci 16: 3737–3744.
  12. Brett M, Bloomfield P, Brooks DJ, Stein JF, Grasby P (1999): Scan order effects in PET activation studies are caused by motion artefact. Neuroimage 9: S56.
  13. Buccino G, Binkofski F, Fink GR, Fadiga L, Fogassi L, Gallese V, Seitz RJ, Zilles K, Rizzolatti G, Freund HJ (2001): Action observation activates premotor and parietal areas in a somatotopic manner: an fMRI study. Eur J Neurosci 13: 400–404.
  14. Buxbaum LJ, Giovannetti T, Libon D (2000): The role of the dynamic body schema in praxis: evidence from primary progressive apraxia. Brain Cogn 44: 166–191.
  15. Cappa S, Sandrini M, Rossini PM, Sosta K, Miniussi C (2002): The role of the left frontal lobe in action naming. Neurology 59: 720–723.
  16. Caramazza A, McCloskey M (1989): The case for single‐patient studies. Cogn Neuropsychol 5: 517–527.
  17. Carey DP, Perrett DI, Oram MW (1997): Recognizing, understanding and reproducing action. In: Jeannerod M, Grafman JE, editors. Handbook of neuropsychology, Vol. 11. Amsterdam: Elsevier Science; p 111–130.
  18. Chaminade T, Meltzoff AN, Decety J (2002): Does the end justify the means? A PET exploration of the mechanisms involved in human imitation. Neuroimage 15: 318–328.
  19. Chao LL, Martin A (2000): Representation of manipulable man‐made objects in the dorsal stream. Neuroimage 12: 478–484.
  20. Choi SH, Na DL, Kang E, Lee KM, Lee SW, Na DG (2001): Functional magnetic resonance imaging during pantomiming tool‐use gestures. Exp Brain Res 139: 311–317.
  21. Cochin S, Barthelemy C, Roux S, Martineau J (1999): Observation and execution of movement: similarities demonstrated by quantified electroencephalography. Eur J Neurosci 11: 1839–1842.
  22. Coslett HB, Saffran EM, Schwoebel J (2002): Knowledge of the human body: a distinct semantic domain. Neurology 59: 357–363.
  23. Cubelli R, Marchetti C, Boscolo G, Della Sala S (2000): Cognition in action: testing a model of limb apraxia. Brain Cogn 44: 144–165.
  24. Damasio H, Grabowski TJ, Tranel D, Ponto LL, Hichwa RD, Damasio AR (2001): Neural correlates of naming actions and of naming spatial relations. Neuroimage 13: 1053–1064.
  25. De Renzi E, Lucchelli F (1988): Ideational apraxia. Brain 3: 1173–1185.
  26. De Renzi E, Faglioni P, Sorgato (1982): Modality‐specific and supramodal mechanisms of apraxia. Brain 101: 301–312.
  27. Decety J (1996): Do imagined and executed actions share the same neural substrate? Brain Res Cogn Brain Res 3: 87–93.
  28. Decety J, Perani D, Jeannerod M, Bettinardi V, Tadary B, Woods R, Mazziotta JC, Fazio F (1994): Mapping motor representations with positron emission tomography. Nature 371: 600–602.
  29. Decety J, Grèzes J, Costes N, Perani D, Jeannerod M, Procyk E, Grassi F, Fazio F (1997): Brain activity during observation of actions: influence of action content and subject's strategy. Brain 120: 1763–1777.
  30. Decety J, Chaminade T, Grezes J, Meltzoff AN (2002): A PET exploration of the neural mechanisms involved in reciprocal imitation. Neuroimage 15: 265–272.
  31. Deiber MP, Ibanez V, Sadato N, Hallett M (1996): Cerebral structures participating in motor preparation in humans: a positron emission tomography study. J Neurophysiol 75: 233–247.
  32. Deiber MP, Ibanez V, Honda M, Sadato N, Raman R, Hallett M (1998): Cerebral processes related to visuomotor imagery and generation of simple finger movements studied with positron emission tomography. Neuroimage 7: 73–85.
  33. di Pellegrino G, Fadiga L, Fogassi L, Gallese V, Rizzolatti G (1992): Understanding motor events: a neurophysiological study. Exp Brain Res 91: 176–180.
  34. Downing PE, Jiang Y, Shuman M, Kanwisher N (2001): A cortical area selective for visual processing of the human body. Science 293: 2470–2473.
  35. Doya K (2000): Complementary roles of basal ganglia and cerebellum in learning and motor control. Curr Opin Neurobiol 10: 732–739.
  36. Doyon J, Song AW, Karni A, Lalonde F, Adams MM, Ungerleider LG (2002): Experience‐dependent changes in cerebellar contributions to motor sequence learning. Proc Natl Acad Sci USA 99: 1017–1022.
  37. Dumont C, Ska B (1998): Limb apraxia and verbal comprehension in Alzheimer's disease. Brain Cogn 37: 93–96.
  38. Dumont C, Ska B (2000): Pantomime recognition impairment in Alzheimer's disease. Brain Cogn 43: 177–181.
  39. Ferreira CT, Giusiano B, Ceccaldi M, Poncet M (1997): Optic aphasia: evidence of the contribution of different neural systems to object and action naming. Cortex 33: 499–513.
  40. Ferreira CT, Ceccaldi M, Giusiano B, Poncet M (1998): Separate visual pathways for perception of actions and objects: evidence from a case of apperceptive agnosia. J Neurol Neurosurg Psychiat 65: 382–385.
  41. Flanagan JR, Wing AM (1997): The role of internal models in motion planning and control: evidence from grip force adjustments during movements of hand‐held loads. J Neurosci 17: 1519–1528.
  42. Foundas AL, Macauley BL, Raymer AM, Maher LM, Rothi LJ, Heilman KM (1999): Ideomotor apraxia in Alzheimer disease and left hemisphere stroke: limb transitive and intransitive movements. Neuropsychiatry Neuropsychol Behav Neurol 12: 161–166.
  43. Frackowiak RSJ, Friston KJ, Frith CD, Dolan RJ, Mazziotta JC (1997): Human brain function. San Diego: Academic Press; 528 p.
  44. Freund HJ (2001): The parietal lobe as a sensorimotor interface: a perspective from clinical and neuroimaging data. Neuroimage 14: S142–146.
  45. Friston K, Holmes A, Worsley KJ (1999a): How many subjects constitute a study? Neuroimage 10: 1–5.
  46. Friston KJ, Holmes AP, Price CJ, Buchel C, Worsley KJ (1999b): Multi‐subject fMRI studies and conjunction analyses. Neuroimage 10: 385–396.
  47. Gallese V, Goldman A (1998): Mirror neurons and the simulation theory of mind‐reading. Trends Cogn Sci 2: 493–501.
  48. Gallese V, Fadiga L, Fogassi L, Rizzolatti G (1996): Action recognition in the premotor cortex. Brain 119: 593–609.
  49. Garraux G, Salmon E, Peigneux P, Kreisler A, Degueldre C, Lemaire C, Destée A, Franck G (2000): Voxel‐based distribution of metabolic impairment in corticobasal degeneration. Mov Disord 15: 894–904.
  50. Gerardin E, Sirigu A, Lehericy S, Poline JB, Gaymard B, Marsault C, Agid Y, Le Bihan D (2000): Partially overlapping neural networks for real and imagined hand movements. Cereb Cortex 10: 1093–1104.
  51. Gibson JJ (1979): The ecological approach to visual perception. Boston, MA: Houghton Mifflin; 336 p.
  52. Goldenberg G (1995): Imitating gestures and manipulating a mannikin. The representation of the human body in ideomotor apraxia. Neuropsychologia 33: 63–72. [DOI] [PubMed] [Google Scholar]
  53. Goldenberg G (1996): Defective imitation of gestures in patients with damage in the left or right hemispheres. J Neurol Neurosurg Psychiatry 61: 176–180. [DOI] [PMC free article] [PubMed] [Google Scholar]
  54. Goldenberg G (1997): Disorders of body perception In: Feinberg TE, Farah MJ, editors. Behavioral neurology and neuropsychology. New York: McGraw Hill; p 289–296. [Google Scholar]
  55. Goldenberg G (1999): Matching and imitation of hand and finger postures in patients with damage in the left or right hemispheres. Neuropsychologia 37: 559–566. [DOI] [PubMed] [Google Scholar]
  56. Goldenberg G (2001): Imitation and matching of hand and finger postures. Neuroimage 14: S132–136. [DOI] [PubMed] [Google Scholar]
  57. Goldenberg G, Hagmann S (1997): The meaning of meaningless gestures: A study of visuo‐imitative apraxia. Neuropsychologia 35: 333–341. [DOI] [PubMed] [Google Scholar]
  58. Goldenberg G, Strauss S (2002): Hemisphere asymmetries for imitation of novel gestures. Neurology 59: 893–897. [DOI] [PubMed] [Google Scholar]
  59. Goldenberg G, Laimgruber K, Hermsdorfer JP (2001): Imitation of gestures by disconnected hemispheres. Neuropsychologia 39: 1432–1443. [DOI] [PubMed] [Google Scholar]
  60. Goodale MA, Haffenden A (1998): Frames of reference for perception and action in the human visual system. Neurosci Biobehav Rev 22: 161–172. [DOI] [PubMed] [Google Scholar]
  61. Goodale MA, Milner AD (1992): Separate visual pathways for perception and action. Trends Neurosci 15: 20–25. [DOI] [PubMed] [Google Scholar]
  62. Grabowski TJ, Damasio H, Damasio AR (1998): Premotor and prefrontal correlates of category‐related lexical retrieval. Neuroimage 7: 232–243. [DOI] [PubMed] [Google Scholar]
  63. Grafton ST, Arbib MA, Fadiga L, Rizzolatti G (1996): Localization of grasp representations in humans by positron emission tomography. 2. Observation compared with imagination. Exp Brain Res 112: 103–111. [DOI] [PubMed] [Google Scholar]
  64. Grafton ST, Fadiga L, Arbib MA, Rizzolatti G (1997): Premotor cortex activation during observation and naming of familiar tools. Neuroimage 6: 231–236. [DOI] [PubMed] [Google Scholar]
  65. Graziano MS, Taylor CS, Moore T, Cooke DF (2002): The cortical control of movement revisited. Neuron 36: 349–362. [DOI] [PubMed] [Google Scholar]
  66. Greenwald ML, Rothi LG, Maher LM, Chatterjee A, Ochipa C, Heilman KM (1992): Impaired tool knowledge with preserved object and action knowledge in limb apraxia. J Clin Exp Neuropsychol 14: 375. [Google Scholar]
  67. Grèzes J, Decety J (1998): L'approche neurobiologique de la perception des mouvements est‐elle pertinente à la compréhension de l'apraxie ? [Is the neurobiological approach to movement perception relevant to the understanding of apraxia?] Rev Neuropsychol (Paris) 8: 241–270. [Google Scholar]
  68. Grèzes J, Decety J (2001): Functional anatomy of execution, mental simulation, observation, and verb generation of actions: a meta‐analysis. Hum Brain Mapp 12: 1–19. [DOI] [PMC free article] [PubMed] [Google Scholar]
  69. Grèzes J, Decety J (2002): Does visual perception of object afford action? Evidence from a neuroimaging study. Neuropsychologia 40: 212–22. [DOI] [PubMed] [Google Scholar]
  70. Grèzes J, Costes N, Decety J (1998): Top down effect of strategy on the perception of human biological motion: A PET investigation. Cogn Neuropsychol 15: 553–582. [DOI] [PubMed] [Google Scholar]
  71. Grèzes J, Costes N, Decety J (1999): The effects of learning and intention on the neural network involved in the perception of meaningless actions. Brain 122: 1875–1887. [DOI] [PubMed] [Google Scholar]
  72. Grossman ED, Blake R (2001): Brain activity evoked by inverted and imagined biological motion. Vision Res 41: 1475–1482. [DOI] [PubMed] [Google Scholar]
  73. Grossman E, Donnelly M, Price R, Pickens D, Morgan V, Neighbor G, Blake R (2000): Brain areas involved in perception of biological motion. J Cogn Neurosci 12: 711–720. [DOI] [PubMed] [Google Scholar]
  74. Hagen MC, Franzen O, McGlone F, Essick G, Dancer C, Pardo JV (2002): Tactile motion activates the human middle temporal/V5 (MT/V5) complex. Eur J Neurosci 16: 957–964. [DOI] [PubMed] [Google Scholar]
  75. Hanna‐Pladdy B, Heilman KM, Foundas AL (2001): Cortical and subcortical contributions to ideomotor apraxia: analysis of task demands and error types. Brain 124: 2513–2527. [DOI] [PubMed] [Google Scholar]
  76. Haruno M, Wolpert DM, Kawato M (2001): Mosaic model for sensorimotor learning and control. Neural Comput 13: 2201–2220. [DOI] [PubMed] [Google Scholar]
  77. Hermsdorfer J, Mai N, Spatt J, Marquardt C, Veltkamp R, Goldenberg G (1996): Kinematic analysis of movement imitation in apraxia. Brain 119: 1575–1586. [DOI] [PubMed] [Google Scholar]
  78. Hermsdorfer J, Goldenberg G, Wachsmuth E, Conrad B, Ceballos‐Baumann A, Bartenstein P, Schwaiger M, Boecker H (2001): Cortical correlates of gesture processing: clues to the cerebral mechanisms underlying apraxia during the imitation of meaningless gestures. Neuroimage 14: 149–161. [DOI] [PubMed] [Google Scholar]
  79. Holmes A, Friston K (1998): Generalisability, random effects and population inference. Neuroimage 7: 754. [Google Scholar]
  80. Houk JC, Wise SP (1995): Distributed modular architectures linking basal ganglia, cerebellum and cerebral cortex: their role in planning and controlling action. Cereb Cortex 5: 95–110. [DOI] [PubMed] [Google Scholar]
  81. Iacoboni M, Woods RP, Brass M, Bekkering H, Mazziotta JC, Rizzolatti G (1999): Cortical mechanisms of human imitation. Science 286: 2526–2528. [DOI] [PubMed] [Google Scholar]
  82. Iacoboni M, Koski LM, Brass M, Bekkering H, Woods RP, Dubeau MC, Mazziotta JC, Rizzolatti G (2001): Reafferent copies of imitated actions in the right superior temporal cortex. Proc Natl Acad Sci USA 98: 13995–13999. [DOI] [PMC free article] [PubMed] [Google Scholar]
  83. Imamizu H, Kuroda T, Miyauchi S, Yoshioka T, Kawato M (2003): Modular organization of internal models of tools in the human cerebellum. PNAS 100: 5461–5466. [DOI] [PMC free article] [PubMed] [Google Scholar]
  84. Jacobs DH, Adair JC, Macauley B, Gold M, Rothi LJG, Heilman KM (1999): Apraxia in corticobasal degeneration. Brain Cogn 40: 336–354. [DOI] [PubMed] [Google Scholar]
  85. Jueptner M, Jenkins IH, Brooks DJ, Frackowiak RS, Passingham RE (1996): The sensory guidance of movement: a comparison of the cerebellum and basal ganglia. Exp Brain Res 112: 462–474. [DOI] [PubMed] [Google Scholar]
  86. Jueptner M, Ottinger S, Fellows SJ, Adamschewski J, Flerich L, Muller SP, Diener HC, Thilmann AF, Weiller C (1997): The relevance of sensory input for the cerebellar control of movements. Neuroimage 5: 41–48. [DOI] [PubMed] [Google Scholar]
  87. Kitazawa S (2002): Optimization of goal‐directed movements in the cerebellum: a random walk hypothesis. Neurosci Res 43: 289–294. [DOI] [PubMed] [Google Scholar]
  88. Koski L, Iacoboni M, Mazziotta JC (2002): Deconstructing apraxia: understanding disorders of intentional movement after stroke. Curr Opin Neurol 15: 71–77. [DOI] [PubMed] [Google Scholar]
  89. Kourtzi Z, Kanwisher N (2000): Activation in human MT/MST by static images with implied motion. J Cogn Neurosci 12: 48–55. [DOI] [PubMed] [Google Scholar]
  90. Krams M, Rushworth MF, Deiber MP, Frackowiak RS, Passingham RE (1998): The preparation, execution and suppression of copied movements in the human brain. Exp Brain Res 120: 386–398. [DOI] [PubMed] [Google Scholar]
  91. Lehmkuhl G, Poeck K, Willmes K (1983): Ideomotor apraxia and aphasia: An examination of types and manifestations of apraxic symptoms. Neuropsychologia 21: 199–212. [DOI] [PubMed] [Google Scholar]
  92. Leiguarda R (2001): Limb apraxia: cortical or subcortical. Neuroimage 14: S137–141. [DOI] [PubMed] [Google Scholar]
  93. Leiguarda RC, Marsden CD (2000): Limb apraxias: higher‐order disorders of sensorimotor integration. Brain 123: 860–879. [DOI] [PubMed] [Google Scholar]
  94. Leiguarda R, Lees AJ, Merello M, Starkstein S, Marsden CD (1994): The nature of apraxia in corticobasal degeneration. J Neurol Neurosurg Psychiatry 57: 455–459. [DOI] [PMC free article] [PubMed] [Google Scholar]
  95. Liepmann H (1908): Drei Aufsätze aus dem Apraxiegebiet. Berlin: Karger. [Google Scholar]
  96. Magnie MN, Ferreira CT, Giusiano B, Poncet M (1999): Category specificity in object agnosia: Preservation of sensorimotor experiences related to objects. Neuropsychologia 37: 67–74. [DOI] [PubMed] [Google Scholar]
  97. Manning L, Campbell R (1992): Optic aphasia with spared action naming: a description and possible loci of impairment. Neuropsychologia 30: 587–592. [DOI] [PubMed] [Google Scholar]
  98. Marchetti C, Della Sala S (1997): On crossed apraxia. Description of a right‐handed apraxic patient with right supplementary motor area damage. Cortex 33: 341–354. [DOI] [PubMed] [Google Scholar]
  99. Marien P, Engelborghs S, Fabbro F, De Deyn PP (2001): The lateralized linguistic cerebellum: a review and a new hypothesis. Brain Lang 79: 580–600. [DOI] [PubMed] [Google Scholar]
  100. Martin A, Haxby JV, Lalonde FM, Wiggs CL, Ungerleider LG (1995): Discrete cortical regions associated with knowledge of color and knowledge of action. Science 270: 102–105. [DOI] [PubMed] [Google Scholar]
  101. Martin A, Wiggs CL, Ungerleider LG, Haxby JV (1996): Neural correlates of category‐specific knowledge. Nature 379: 649–652. [DOI] [PubMed] [Google Scholar]
  102. Matelli M, Luppino G, Rizzolatti G (1985): Patterns of cytochrome oxidase activity in the frontal agranular cortex of the macaque monkey. Behav Brain Res 318: 125–137. [DOI] [PubMed] [Google Scholar]
  103. McCloskey M, Caramazza A (1989): Theory and methodology in cognitive neuropsychology: a response to our critics. Cogn Neuropsychol 5: 583–623. [Google Scholar]
  104. Mehler FM (1987): Visuo‐imitative apraxia. Neurology 37: 129. [PubMed] [Google Scholar]
  105. Meltzoff AN (2002): Elements of a developmental theory of imitation In: Meltzoff AN, Prinz W, editors. The imitative mind. Development, evolution and brain bases. Cambridge, UK: Cambridge University Press; p 19–41. [Google Scholar]
  106. Meltzoff AN, Moore MK (1997): Explaining facial imitation: A theoretical model. Early Dev Parent 6: 179–192. [DOI] [PMC free article] [PubMed] [Google Scholar]
  107. Middleton FA, Strick PL (1998): The cerebellum: an overview. Trends Neurosci 21: 367–369. [DOI] [PubMed] [Google Scholar]
  108. Moll J, de Oliveira SR, Passman LJ, Cunha FC, Souza LF, Andreiuolo PA (2000): Functional MRI correlates of real and imagined tool‐use pantomimes. Neurology 54: 1331–1336. [DOI] [PubMed] [Google Scholar]
  109. Murata A, Fadiga L, Fogassi L, Gallese V, Raos V, Rizzolatti G (1997): Object representation in the ventral premotor cortex (area F5) of the monkey. J Neurophysiol 78: 2226–2230. [DOI] [PubMed] [Google Scholar]
  110. Nishitani N, Hari R (2000): Temporal dynamics of cortical representation for action. Proc Natl Acad Sci USA 97: 913–918. [DOI] [PMC free article] [PubMed] [Google Scholar]
  111. Ochipa C, Rothi LJ, Heilman KM (1989): Ideational apraxia: A deficit in tool selection and use. Ann Neurol 25: 190–193. [DOI] [PubMed] [Google Scholar]
  112. Ochipa C, Rothi LJ, Heilman KM (1990): Conduction apraxia. J Clin Exp Neuropsychol 12: 89. [Google Scholar]
  113. Ochipa C, Rothi LJ, Heilman KM (1992): Conceptual apraxia in Alzheimer's disease. Brain 115: 1061–1071. [DOI] [PubMed] [Google Scholar]
  114. Ochipa C, Rothi LJ, Heilman KM (1994): Conduction apraxia. J Neurol Neurosurg Psychiatry 57: 1241–1244. [DOI] [PMC free article] [PubMed] [Google Scholar]
  115. Parsons LM (1994): Temporal and kinematic properties of motor behavior reflected in mentally simulated actions. J Exp Psychol Hum Percept Perform 20: 709–730. [DOI] [PubMed] [Google Scholar]
  116. Parsons LM, Fox PT, Downs JH, Glass T, Hirsch TB, Martin CC, Jerabek PA, Lancaster JL (1995): Use of implicit motor imagery for visual shape discrimination as revealed by PET. Nature 375: 54–58. [DOI] [PubMed] [Google Scholar]
  117. Peigneux P, Van der Linden M (2000): [Presentation of a cognitive neuropsychological battery for limb apraxia assessment]. Rev Neuropsychol (Paris) 10: 311–362. [Google Scholar]
  118. Peigneux P, Salmon E, Van der Linden M, Garraux G, Aerts J, Delfiore G, Degueldre C, Luxen A, Orban GA, Franck G (2000a): The role of lateral occipito‐temporal junction and area MT/V5 in the visual analysis of upper limb postures. Neuroimage 11: 644–655. [DOI] [PubMed] [Google Scholar]
  119. Peigneux P, Van der Linden M, Andres‐Benito P, Sadzot B, Franck G, Salmon E (2000b): [A neuropsychological and functional brain imaging study of visuo‐imitative apraxia]. Rev Neurol (Paris) 156: 459–472. [PubMed] [Google Scholar]
  120. Peigneux P, Salmon E, Garraux G, Laureys S, Willems S, Dujardin K, Degueldre C, Lemaire C, Luxen A, Moonen G, Franck G, Destee A, Van der Linden M (2001): Neural and cognitive bases of upper limb apraxia in corticobasal degeneration. Neurology 57: 1259–1268. [DOI] [PubMed] [Google Scholar]
  121. Pena‐Casanova J, Roig R, Bermudez A, Tolosa S (1985): Optic aphasia, optic apraxia, and loss of dreaming. Brain Lang 26: 63–71. [DOI] [PubMed] [Google Scholar]
  122. Perani D, Cappa SF, Bettinardi V, Bressi S, Gorno‐Tempini M, Matarrese M, Fazio F (1995): Different neural systems for the recognition of animals and man‐made tools. Neuroreport 6: 1637–1641. [DOI] [PubMed] [Google Scholar]
  123. Perani D, Schnur T, Tettamanti M, Gorno‐Tempini M, Cappa SF, Fazio F (1999): Word and picture matching: a PET study of semantic category effects. Neuropsychologia 37: 293–306. [DOI] [PubMed] [Google Scholar]
  124. Perani D, Fazio F, Borghese NA, Tettamanti M, Ferrari S, Decety J, Gilardi MC (2001): Different brain correlates for watching real and virtual hand actions. Neuroimage 14: 749–758. [DOI] [PubMed] [Google Scholar]
  125. Perrett DI, Smith PA, Mistlin AJ, Chitty AJ, Head AS, Potter DD, Broennimann R, Milner AD, Jeeves MA (1985): Visual analysis of body movements by neurones in the temporal cortex of the macaque monkey: a preliminary report. Behav Brain Res 16: 153–170. [DOI] [PubMed] [Google Scholar]
  126. Perrett DI, Harries MH, Bevan R, Thomas S, Benson PJ, Mistlin AJ, Citty AJ, Hietanen JK, Ortega JE (1989): Framework of analysis for the neural representation of animate objects and actions. J Exp Biol 146: 87–113. [DOI] [PubMed] [Google Scholar]
  127. Perrett DI, Mistlin AJ, Harries MH, Chiity AJ (1990): Understanding the visual appearance and consequence of actions In: Goodale MA, editor. Vision and action. Norwood, NJ: Ablex Publishing Corp; p 163–180. [Google Scholar]
  128. Phillips JA, Noppeney U, Humphreys GW, Price CJ (2002): Can segregation within the semantic system account for category‐specific deficits? Brain 125: 2067–2080. [DOI] [PubMed] [Google Scholar]
  129. Poeck K (1983): Ideational apraxia. J Neurol 230: 1–5. [DOI] [PubMed] [Google Scholar]
  130. Prinz W (1997): Perception and action planning. Eur J Cogn Psychol 9: 129–154. [Google Scholar]
  131. Ramnani N, Toni I, Passingham RE, Haggard P (2001): The cerebellum and parietal cortex play a specific role in coordination: a PET study. Neuroimage 14: 899–911. [DOI] [PubMed] [Google Scholar]
  132. Rapoport M, van Reekum R, Mayberg H (2000): The role of the cerebellum in cognition and behavior: a selective review. J Neuropsychiatry Clin Neurosci 12: 193–198. [DOI] [PubMed] [Google Scholar]
  133. Riddoch MJ, Humphreys GW (1987): Visual object processing in optic aphasia: a case of semantic access agnosia. Cogn Neuropsychol 4: 131–186. [Google Scholar]
  134. Riddoch MJ, Edwards D, Humphreys GW, West R, Heafield T (1998): Visual affordances direct action: Neuropsychological evidence from manual interference. Cogn Neuropsychol 15: 645–684. [DOI] [PubMed] [Google Scholar]
  135. Rijntjes M, Dettmers C, Buchel C, Kiebel S, Frackowiak R, Weiller C (1999): A blueprint for movement: functional and anatomical representations in the human motor system. J Neurosci 19: 8043–8048. [DOI] [PMC free article] [PubMed] [Google Scholar]
  136. Rizzolatti G, Fadiga L, Gallese V, Fogassi L (1996a): Premotor cortex and the recognition of motor actions. Brain Res Cogn Brain Res 3: 131–141. [DOI] [PubMed] [Google Scholar]
  137. Rizzolatti G, Fadiga L, Matelli M, Bettinardi V, Paulesu E, Perani D, Fazio F (1996b): Localization of grasp representations in humans by PET: 1. Observation versus execution. Exp Brain Res 111: 246–252. [DOI] [PubMed] [Google Scholar]
  138. Rizzolatti G, Luppino G, Matelli M (1998): The organization of the cortical motor system: new concepts. Electroencephalogr Clin Neurophysiol 106: 283–296. [DOI] [PubMed] [Google Scholar]
  139. Rizzolatti G, Fadiga L, Fogassi L, Gallese V (1999): Resonance behaviors and mirror neurons. Arch Ital Biol 137: 85–100. [PubMed] [Google Scholar]
  140. Rizzolatti G, Fogassi L, Gallese V (2002): Motor and cognitive functions of the ventral premotor cortex. Curr Opin Neurobiol 12: 149–154. [DOI] [PubMed] [Google Scholar]
  141. Rothi LJ, Heilman KM (1996): Liepmann (1900 and 1905): A definition of apraxia and a model of praxis In: Chris C, Claus WW, Yves J, Andre RL, editors. Classic cases in neuropsychology. Hove, UK: Psychology/Erlbaum Taylor & Francis; p 111–122. [Google Scholar]
  142. Rothi LJ, Mack L, Heilman KM (1986): Pantomime agnosia. J Neurol Neurosurg Psychiatry 49: 451–454. [DOI] [PMC free article] [PubMed] [Google Scholar]
  143. Rothi LJ, Ochipa C, Heilman KM (1991): A cognitive neuropsychological model of limb praxis. Cogn Neuropsychol 8: 443–458. [Google Scholar]
  144. Rothi LG, Raymer AM, Ochipa C, Maher LM, Greenwald ML, Heilman KM (1992): Apraxia battery. Experimental edition. Gainesville, FL: University of Florida College of Medicine. [Google Scholar]
  145. Rothi LG, Raymer AM, Heilman KM (1997a): Limb praxis assessment In: Rothi LG, Heilman KM, editors. Apraxia: The neuropsychology of action. Hove, UK: Psychology Press; p 61–74. [Google Scholar]
  146. Rothi LJ, Ochipa C, Heilman KM (1997b): A cognitive neuropsychological model of limb praxis In: Rothi LG, Heilman KM, editors. Apraxia: The neuropsychology of action. Hove, UK: Psychology Press; p 29–50. [Google Scholar]
  147. Roy EA, Square P (1985): Common considerations in the studies of limb, verbal and oral apraxia In: Roy EA, editor. Neuropsychological studies of apraxia and related disorders. Amsterdam: Elsevier, p 111–156. [Google Scholar]
  148. Roy EA, Square PA (1994): Neuropsychology of movement sequencing disorders and apraxia In: Zandel DW, editor. Neuropyschology. San Diego: Academic Press; p 183–218. [Google Scholar]
  149. Roy EA, Black SE, Winchester TR, Barbour KL (1996): Gestural imitation following stroke. Brain Cogn 30: 343–346. [Google Scholar]
  150. Salman MS (2002): The cerebellum: it's about time! But timing is not everything: new insights into the role of the cerebellum in timing motor and cognitive tasks. J Child Neurol 17: 1–9. [DOI] [PubMed] [Google Scholar]
  151. Schall JD, Bichot NP (1998): Neural correlates of visual and motor decision processes. Curr Opin Neurobiol 8: 211–217. [DOI] [PubMed] [Google Scholar]
  152. Schnider A, Hanlon RE, Alexander DN, Benson DF (1997): Ideomotor apraxia: Behavioral dimensions and neuroanatomical basis. Brain Lang 58: 125–136. [DOI] [PubMed] [Google Scholar]
  153. Schwartz RL, Barrett AM, Crucian GP, Heilman KM (1998): Dissociation of gesture and object recognition. Neurology 50: 1186–1188. [DOI] [PubMed] [Google Scholar]
  154. Schwartz RL, Adair JC, Raymer AM, Williamson DJG, Crosson B, Rothi LJG, Nadeau SE, Heilman KM (2000): Conceptual apraxia in probable Alzheimer's disease as demonstrated by the Florida Action Recall Test. J Int Neuropsychol Soc 6: 265–270. [DOI] [PubMed] [Google Scholar]
  155. Seltzer B, Pandya DN (1994): Parietal, temporal, and occipital projections to cortex of the superior temporal sulcus in the rhesus monkey: a retrograde tracer study. J Comp Neurol 343: 445–463. [DOI] [PubMed] [Google Scholar]
  156. Silveri MC, Gainotti G, Perani D, Cappelletti JY, Carbone G, Fazio F (1997): Naming deficit for non‐living items: neuropsychological and PET study. Neuropsychologia 35: 359–367. [DOI] [PubMed] [Google Scholar]
  157. Sirigu A, Duhamel JR, Poncet M (1991a): The role of sensorimotor experience in object recognition. Brain 114: 2555–2573. [DOI] [PubMed] [Google Scholar]
  158. Sirigu A, Grafman J, Bressler K, Sunderland T (1991b): Multiple representations contribute to body knowledge processing. Brain 114: 629–642. [DOI] [PubMed] [Google Scholar]
  159. Sirigu A, Duhamel JR, Cohen L, Pillon B, Dubois B, Agid Y (1996): The mental representation of hand movements after parietal cortex damage. Science 273: 1564–1568. [DOI] [PubMed] [Google Scholar]
  160. Stephan KM, Fink GR, Passingham RE, Silbersweig D, Ceballos‐Baumann AO, Frith CD, Frackowiak RS (1995): Functional anatomy of the mental representation of upper extremity movements in healthy subjects. J Neurophysiol 73: 373–386. [DOI] [PubMed] [Google Scholar]
  161. Sturmer B, Aschersleben G, Prinz W (2000): Correspondence effects with manual gestures and postures: a study of imitation. J Exp Psychol Hum Percept Perform 26: 1746–1759. [DOI] [PubMed] [Google Scholar]
  162. Suzuki K, Yamadori A, Fujii T (1997): Category‐specific comprehension deficit restricted to body parts. Neurocase 3: 193–200. [Google Scholar]
  163. Tanaka K (1997): Mechanisms of visual object recognition: monkey and human studies. Curr Opin Neurobiol 7: 523–529. [DOI] [PubMed] [Google Scholar]
  164. Tanaka S, Inui T, Iwaki S, Konishi J, Nakai T (2001): Neural substrates involved in imitating finger configurations: an fMRI study. Neuroreport 12: 1171–1174. [DOI] [PubMed] [Google Scholar]
  165. Thach WT (1998): What is the role of the cerebellum in motor learning and cognition. Trends Cogn Sci 2: 331–337. [DOI] [PubMed] [Google Scholar]
  166. Toni I, Thoenissen D, Zilles K (2001): Movement preparation and motor intention. Neuroimage 14: S110–117. [DOI] [PubMed] [Google Scholar]
  167. Tranel D, Damasio H, Damasio AR (1997): A neural basis for the retrieval of conceptual knowledge. Neuropsychologia 35: 1319–1327. [DOI] [PubMed] [Google Scholar]
  168. Wachsmuth E, Oram MW, Perrett DI (1994): Recognition of objects and their component parts: responses of single units in the temporal cortex of the macaque. Cereb Cortex 4: 509–522. [DOI] [PubMed] [Google Scholar]
  169. Warburton E, Wise RJ, Price CJ, Weiller C, Hadar U, Ramsay S, Frackowiak RS (1996): Noun and verb retrieval by normal subjects. Studies with PET. Brain 119: 159–179. [DOI] [PubMed] [Google Scholar]
  170. Warrington EK, McCarthy RA (1987): Categories of knowledge. Further fractionations and an attempted integration. Brain 110: 1273–96. [DOI] [PubMed] [Google Scholar]
  171. Watson RT, Fleet WS, Gonzalez‐Rothi L, Heilman KM (1986): Apraxia and the supplementary motor area. Arch Neurol 43: 787–792. [DOI] [PubMed] [Google Scholar]
  172. Weiss PH, Dohle C, Binkofski F, Schnitzler A, Freund H, Hefter H (2001): Motor impairment in patients with parietal lesions: disturbances of meaningless arm movement sequences. Neuropsychologia 39: 397–405. [DOI] [PubMed] [Google Scholar]
  173. Wolpert DM, Ghahramani Z (2000): Computational principles of movement neuroscience. Nat Neurosci 3: 1212–1217. [DOI] [PubMed] [Google Scholar]
  174. Wolpert DM, Kawato M (1998): Multiple paired forward and inverse models for motor control. Neural Netw 11: 1317–1329. [DOI] [PubMed] [Google Scholar]
  175. Wolpert DM, Goodbody SJ, Husain M (1998): Maintaining internal representations. The role of the human superior parietal lobe. Nat Neurosci 1: 529–533. [DOI] [PubMed] [Google Scholar]
