Cerebral Cortex. 2019 Feb 15;29(4):1816–1833. doi: 10.1093/cercor/bhz011

Anterior Intraparietal Area: A Hub in the Observed Manipulative Action Network

Marco Lanzilotto 1, Carolina Giulia Ferroni 1, Alessandro Livi 1, Marzio Gerbella 1, Monica Maranesi 1, Elena Borra 1, Lauretta Passarelli 2, Michela Gamberini 2, Leonardo Fogassi 1, Luca Bonini 1, Guy A Orban 1
PMCID: PMC6418391  PMID: 30766996

Abstract

Current knowledge regarding the processing of observed manipulative actions (OMAs) (e.g., grasping, dragging, or dropping) is largely limited to grasping, and the underlying neural circuitry remains controversial. Here, we addressed these issues by combining chronic neuronal recordings along the anteroposterior extent of monkeys’ anterior intraparietal (AIP) area with tracer injections into the recorded sites. We found robust neural selectivity for 7 distinct OMAs, particularly in the posterior part of AIP (pAIP), where it was associated with motor coding of grip type and own-hand visual feedback. This cluster of functional properties appears to be specifically grounded in stronger direct connections of pAIP with the temporal regions of the ventral visual stream and the prefrontal cortex, as connections with skeletomotor-related areas and regions of the dorsal visual stream exhibited opposite or no rostrocaudal gradients. Temporal and prefrontal areas may provide visual and contextual information relevant for manipulative action processing. These results revise existing models of the action observation network, suggesting that pAIP constitutes a parietal hub for routing information about OMA identity to the other nodes of the network.

Keywords: action observation, anatomical connectivity, macaque monkey, parietal cortex, visuomotor processing

Introduction

The interest in the brain networks underlying others’ observed action processing has been triggered by the discovery, in the monkey ventral premotor area F5, of the so-called “mirror neurons,” which respond during both action execution and others’ action observation (di Pellegrino et al. 1992; Gallese et al. 1996; Rizzolatti et al. 1996). This finding demonstrated that observed action processing is not limited to visual brain areas (Perrett et al. 1989; Vangeneugden et al. 2009; Singer and Sheinberg 2010), but involves frontoparietal areas belonging to the motor system as well (see Materials and Methods for the anatomofunctional criteria defining the areas of interest). Subsequently, neurons responding to the observation of another individual’s action have been described in 2 parietal areas: PFG, in the inferior parietal lobule (IPL) convexity (Fogassi et al. 2005; Rozzi et al. 2008; Bonini et al. 2010) and the anterior intraparietal (AIP) area (Pani et al. 2014; Maeda et al. 2015). Until recently, based on fMRI and connectional evidence (Nelissen et al. 2011), PFG and AIP were believed to operate in parallel with distinct visual inputs from the superior temporal sulcus (STS), each preferentially associated with the processing of information related to observed agents (upper bank of the STS–PFG–F5 convexity) or target objects (lower bank of the STS–AIP–F5 bank sector). However, the direct anatomical link between area PFG and the STS is highly variable (Rozzi et al. 2006; Frey et al. 2014) or even undetectable in a recent study (Bruni et al. 2018) in which tracers were injected in the core of functionally identified IPL sites hosting neurons responding to others’ observed actions. In contrast, area AIP is strongly connected with the STS (Lewis and Van Essen 2000; Nakamura et al. 2001; Borra et al. 2008), even with the region in which hand action observation neurons have been found (Perrett et al. 1989). Altogether, these findings suggest that the existing models, which assign a prominent role to the STS–PFG–F5 circuit in the action observation network, may need a revision. Area AIP is a likely candidate for being the main parietal node of the monkey’s action observation network; yet, in the absence of combined anatomofunctional evidence, its role remains controversial.

The importance of area AIP in observed action processing is also in line with human fMRI studies, which have documented that the putative human AIP, the homolog of monkey AIP (Grefkes et al. 2002; Orban 2016), is activated by observed manipulative actions (OMAs) such as grasping, dragging, or dropping (Shmuelof and Zohary 2005, 2006, 2008; Jastorff et al. 2010; Abdollahi et al. 2013; Ferri et al. 2015; Corbo and Orban 2017). In contrast, single neuron evidence for observed action processing in AIP is sparse and mostly limited to the encoding of own hand visual feedback (HVF) during grasping (Sakata et al. 1995; Murata et al. 2000) or the observation of another’s grasping action seen from a first-person visual perspective (Pani et al. 2014; Maeda et al. 2015). The only study that has tested AIP single neuron activity during observation of others’ actions from an allocentric (side-view) perspective reported that <10% of the recorded neurons responded in this condition (Maeda et al. 2015), leaving the relevance of AIP neurons in others’ action processing unclear. Furthermore, most likely because of the widely recognized role of AIP in the visual guidance of reaching-grasping actions (Janssen and Scherberger 2015), grasping is the only observed action tested thus far (Nelissen et al. 2011; Pani et al. 2014; Maeda et al. 2015). Here, we investigated whether AIP plays a major role in the visual processing of a wider variety of OMAs, beyond grasping, in accordance with the richness of primates’ behavioral repertoire. Furthermore, we predicted that neuronal responsiveness and selectivity for OMAs would be prevalent in posterior AIP (pAIP), where previous studies reported greater selectivity for several types of visual information other than observed actions (Orban et al. 2006; Durand et al. 2007; Baumann et al. 2009; Premereur et al. 2015). The presence of a rostrocaudal gradient in visual tuning of AIP neurons is also supported by combined electrical microstimulation and fMRI experiments (Premereur et al. 2015), which demonstrated stronger activation of visual areas following pAIP stimulation. Nevertheless, anatomical evidence of rostrocaudal connectional gradients supporting this functional organization is lacking.

Other studies have reported an opposite rostrocaudal motor gradient in AIP in addition to the visual one (Baumann et al. 2009), raising intriguing questions about the integration of visual and motor information at the single neuron and network level. Indeed, visuomotor matching is a hallmark of observed action processing in the parietofrontal motor system, and mirror neuron studies indicate that there is a generally broad correspondence between the action evoking the strongest discharge during observation and execution (Gallese et al. 1996; Ferrari et al. 2003; Rozzi et al. 2008; Maeda et al. 2015; Papadourakis and Raos 2017; Mazurek et al. 2018). Overall, these studies strongly emphasize the convergence between motor and visual representations of action up to the single-neuron level. However, whether and how such integration occurs in AIP remains unknown.

On these bases, this study hypothesizes that 1) area AIP may encode a variety of OMAs, particularly in its caudal part; 2) among OMAs, grasping may be the preferred exemplar, possibly linked to its widespread motor representation in AIP; and 3) rostrocaudal gradients of visual and motor properties should be linked to corresponding gradients in anatomical connections. To address these issues, we performed chronic single neuron recordings along the entire anteroposterior extent of AIP in 2 monkeys, while they passively viewed videos depicting 7 OMA exemplars (i.e., drag, drop, grasp, push, roll, rotate, and squeeze), and actively performed a visuomotor reaching/grasping task (Bonini et al. 2014b). We found evidence of neural selectivity for various OMAs, particularly in pAIP, where it was associated with motor coding of grip type and own HVF. After the recordings, we injected neural tracers into each functionally characterized AIP site, revealing that regions hosting OMA-selective neurons showed strong direct connections with a set of temporal and prefrontal areas deemed to be involved in context-dependent visual processing of information regarding others’ actions and objects. These results revise current models of the action observation network, suggesting that pAIP constitutes a parietal hub for routing information about OMA identity to the other nodes of the network.

Materials and Methods

Experiments were carried out on 2 Macaca mulatta, 1 female (Mk1, 4 kg) and 1 male (Mk2, 7 kg). Before recordings, monkeys were habituated to sit in a primate chair and to interact with the experimenters. They were then trained to perform a visuomotor task (VMT) (Bonini et al. 2014a; Maranesi et al. 2015) and an observation task (OT), both described below. When the training was completed, a head fixation system was implanted under general anesthesia (ketamine hydrochloride, 5 mg/kg i.m. and medetomidine hydrochloride, 0.1 mg/kg i.m.), followed by postsurgical pain medications. Surgical procedures were the same as previously described (Bruni et al. 2017). All experimental protocols complied with the European law on the humane care and use of laboratory animals (directives 86/609/EEC, 2003/65/CE, and 2010/63/EU); they were approved by the Veterinarian Animal Care and Use Committee of the University of Parma (Prot. 78/12 17/07/2012) and authorized by the Italian Ministry of Health (D.M. 294/2012-C, 11/12/2012).

Apparatus and Behavioral Paradigm

During the VMT, the monkey was seated on a primate chair in front of a box, shown in Figure 1C from the monkey’s point of view, whereas during the OT it sat in front of a video monitor located on the opposite side. The 2 tasks were carried out in distinct, subsequent blocks during the same recording session.

Figure 1.

Anatomical reconstruction and behavioral paradigms. (A) Reconstruction of the probe locations along the intraparietal sulcus of Mk1 and Mk2. Vertical dashed lines indicate the position of each probe’s track, illustrated in the coronal sections below (a–d for Mk1 and a′–c′ for Mk2). The asterisk indicates the location of the 2 probes that were considered together for the analysis of neuronal responses. Cgs, cingulate sulcus; cs, central sulcus; ias, inferior arcuate sulcus; ips, intraparietal sulcus; ls, lateral sulcus; lus, lunate sulcus; ps, principal sulcus; sas, superior arcuate sulcus; sts, superior temporal sulcus. (B) Observation task (OT). (C) Behavioral setup for the visuomotor task (VMT). The temporal sequence of task events is shown in Figure S1A. (D) Examples of initial (Epoch 1) and middle (Epoch 2) frames for each OMA exemplar. (E) Average speed of motion (degree/second) for the 4 variants of each OMA exemplar during the 2.6 s video presentation period.

Visuomotor Task

The VMT was performed using a box divided horizontally into 2 sectors by a half-mirror: the upper sector contained a small black tube with a white light-emitting diode (LED) that could project a spot of light on the half-mirror surface; the lower sector contained a sliding plane hosting 3 different objects. When the LED was turned on (in complete darkness), the half-mirror reflected the spot of light so that it appeared to the monkey as located in the lower sector (fixation point), at the very position of the center of mass of the not-yet-visible target object. The objects—a ring, a small cone, and a large cone—were chosen because they afforded 3 different grip types: hook grip (in which the index finger enters the ring); side grip (performed by opposing the thumb and the lateral surface of the index finger); and whole-hand prehension (achieved by opposing all the fingers to the palm). Objects were presented one per trial, through a 7 cm opening located on the monkey’s sagittal plane within reach of its hand’s starting position. A strip of white LEDs located on the lower sector of the box allowed us to illuminate the objects during specific phases of the task. Note that, because of the half-mirror, the fixation point remained visible in the middle of the object even when the lower sector of the box was illuminated.

The VMT included 3 fully randomized conditions, as illustrated in Figure S1A: grasping in the light, grasping in the dark, and a no-go condition. Each condition started when the monkey held its hand on a starting button, after a variable intertrial period ranging from 1 to 1.5 s from the end of the previous trial.

Grasping in the Light

The fixation point was presented and the monkey was required to acquire it within 1.2 s. Fixation onset resulted in the presentation of a cue sound (a pure high tone, a 1200 Hz sine wave), which instructed the monkey to grasp the subsequently presented object (go-cue). After 0.8 s, the lower sector of the box was illuminated and one of the objects became visible. Then, after a variable time lag (0.8–1.2 s), the sound ceased (go-signal), at which point the monkey had to reach, grasp, and pull the object within 1.2 s. It then had to hold the object steadily for 0.8 s. If the task was performed correctly without breaking fixation, the reward was automatically delivered. We collected 12 correctly performed trials with each object.

Grasping in the Dark

The temporal sequence of events in this condition was identical to that of grasping in the light. However, when the cue sound (the same high tone as in grasping in the light) ceased (go signal), the light inside the box was automatically switched off and the monkey performed the subsequent motor acts in complete darkness. The fixation point was visible for the entire duration of each trial, providing spatial guidance for reaching the object in the absence of visual feedback. In this paradigm, grasping in the light and grasping in the dark trials were identical and unpredictable until the occurrence of the go signal, ensuring that action planning was the same in both conditions. We collected 12 correctly performed trials with each object.

No-Go Condition

The basic sequence of events was the same as in the go conditions, but a different cue sound (a pure low tone, a 300 Hz sine wave) instructed the monkey to remain still and continue fixating the object for 1.2 s in order to receive the reward. We collected 12 correctly performed trials with each object.

Observation Task

The OT was performed with the monkey chair rotated by 180° to face a video monitor (1920 × 1080, 60 Hz). The monitor was located 57 cm from the monkey’s face, where the video took up an area of 13.04° × 9.85° of the visual field in the horizontal and vertical dimensions, respectively. Videos of 7 different OMA exemplars, each performed by 2 actors, one male and one female, on 2 target objects of the same size and different color (4 variants for each exemplar), were presented. First, the monkey had to gaze at a red square on a scrambled background. Then, the video stimulus started and lasted 2.6 s. The monkey was required only to remain still, with its hand on the starting button, and to maintain fixation for the entire duration of the trial. Details on the OMA exemplars administered are provided in Figure 1D and E. If the monkey maintained fixation within a 3° spatial window centered on the fixation point for the entire duration of the trial, reward was automatically delivered. The stimuli were randomly presented 3 times each, for a total of 12 trials for each exemplar.
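As a sanity check on the viewing geometry, the reported stimulus size follows from the standard visual-angle formula. A minimal Python sketch, where the physical video size (about 13.0 × 9.8 cm) is back-computed by us rather than stated in the text:

```python
import math

def visual_angle_deg(size_cm: float, distance_cm: float) -> float:
    """Visual angle subtended by a stimulus of a given physical size."""
    return 2 * math.degrees(math.atan(size_cm / (2 * distance_cm)))

# At the 57 cm viewing distance used here, a ~13.0 x 9.8 cm video patch
# subtends roughly the reported 13.04 x 9.85 degrees of visual field.
print(visual_angle_deg(13.0, 57))  # ~13.0 deg
print(visual_angle_deg(9.8, 57))   # ~9.8 deg
```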

The phases of both tasks were automatically controlled and monitored by LabView-based software, enabling the interruption of the trial if the monkey broke fixation, made an incorrect movement or did not respect the task temporal constraints described above. In all these cases, no reward was delivered. After correct completion of a trial, the monkey was automatically rewarded with the same amount of juice in all conditions (pressure reward delivery system, Crist Instruments, Hagerstown, MD).

Recording Techniques

Neuronal recordings were performed by means of chronically implanted arrays of linear silicon probes with 32 recording channels per shaft. Probes were implanted by estimating the angle of penetration with MRI-based reconstruction of the outline of the intraparietal sulcus at the selected site of insertion (Fig. 1A). Previous reports provide more details on the methodology of probe fabrication, assembly, and implantation (Herwik et al. 2011; Barz et al. 2014; Bonini, Maranesi, Livi, Bruni, et al. 2014), as well as on probes’ recording performance over time in chronic applications (Barz et al. 2017).

The signal from the 128 channels was simultaneously amplified and sampled at 30 kHz with four 32-channel Intan amplifier boards (Intan Technologies, Los Angeles, CA, USA), controlled in parallel via the electrophysiology platform Open Ephys (http://open-ephys.org/). All formal signal analyses were performed off-line with fully automated software, MountainSort (Chung et al. 2017), using −3.0 standard deviations of each channel’s signal-to-noise ratio as the threshold for detecting units. To distinguish single- from multiunits we used the noise overlap, a parameter that can vary between 0 and 1; units with a value below 0.1 were considered single units. Single unit isolation was further verified using standard criteria (ISI distribution, refractory period >1 ms, and absence of cross-correlated firing with a time lag of ≈0 relative to other isolated units, to avoid oversampling), possible artifacts were removed, and all the remaining waveforms that could not be classified as single units formed the multiunit activity.
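A minimal sketch of the unit triage just described, assuming per-unit noise-overlap values exported by the sorter and arrays of inter-spike intervals; the 1% tolerance for refractory-period violations is an illustrative assumption, not a threshold stated in the text:

```python
import numpy as np

def classify_units(noise_overlap, isi_ms_per_unit,
                   overlap_thresh=0.1, refractory_ms=1.0):
    """Split sorted units into single- and multiunits.

    noise_overlap   : array, noise-overlap metric (0-1) per unit
    isi_ms_per_unit : list of arrays, inter-spike intervals (ms) per unit
    """
    single, multi = [], []
    for u, (overlap, isis) in enumerate(zip(noise_overlap, isi_ms_per_unit)):
        # Single units need low noise overlap AND a clean refractory period
        # (assumed here as <1% of ISIs shorter than 1 ms).
        refractory_ok = np.mean(isis < refractory_ms) < 0.01
        if overlap < overlap_thresh and refractory_ok:
            single.append(u)
        else:
            multi.append(u)
    return single, multi
```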

Recording of Behavioral Events and Definition of Epochs of Interest

Distinct contact sensitive devices (Crist Instruments) were used to detect when the monkey’s hand (grounded) touched the metal surface of the starting button or one of the target objects. To signal the onset and tonic phase of object pulling, an additional device was connected to the switch located behind each object. Each of these devices provided a TTL signal, which was used by the LabView-based software to monitor the monkey’s performance and to control the generation and presentation of the behavioral paradigm’s auditory and visual cue signals.

Eye position was monitored in parallel with neuronal activity with an eye tracking system consisting of a 50 Hz CCD video camera provided with an infrared filter and 2 spots of infrared light. Two identical but independent systems were used for monitoring eye position during VMT and OT. Analog signal related to horizontal and vertical eye position was fed to a computer equipped with dedicated software, enabling calibration and basic processing of eye position signals. The monkey was required to maintain fixation throughout each task, and the eye position signal was monitored by the same LabView-based software dedicated to the control of the behavioral paradigm.

The same software also generated different digital output signals associated with various input and output events of both the VMT and OT. These signals were recorded and stored together with the neuronal activity and subsequently used to construct the response histograms and the data files for statistical analysis.

Unit activity was analyzed in relation to the digital signals associated with the main behavioral events. In the VMT we considered the following epochs of interest: 1) baseline, 500 ms before object presentation; 2) object presentation, from 0 to 500 ms after switching on the light; 3) reaching-grasping, from −500 to 0 ms before pulling onset; and 4) object pulling, from pulling onset to 500 ms after this event. Note that during baseline the monkey kept its hand immobile on the starting button, was staring at the fixation point and was already aware of whether the ongoing trial was a go or a no-go trial: these features enabled us to assess possible variation in neural discharge specifically linked with the subsequent task stages within the ongoing behavioral set.

In the OT we considered the following epochs of interest: 1) baseline, 500 ms before video presentation onset; 2) Epoch 1, 300 ms from video onset; and 3) Epoch 2, including the subsequent 1200 ms of the video.

Data Analyses

Single- and Multiunit Classification

Units (single- and multi-) were primarily classified, using OT responses, based on possible modulation of the activity in Epoch 1 and/or 2 of video presentation relative to baseline as facilitation (when the response was stronger than baseline) or suppression (when the response was weaker than baseline), according to Vigneswaran et al. (2013). The choice of these 2 epochs was motivated by the fact that in the videos of some OMA exemplars the amount of motion differed markedly between the 2 epochs, with Epoch 1 including primarily static information about the depicted action conveyed by the actor’s initial body posture. The analysis was carried out by means of a 7 × 3 repeated measures ANOVA (factors: Exemplar and Epoch): we classified as action-related all units showing a significant effect (P < 0.05) of the factor Epoch, either as a main or interaction effect with the factor Exemplar. Action-related units also showing a significant effect (P < 0.05) of the factor Exemplar, either as a main or interaction effect with the factor Epoch, were classified as OMA-selective units.
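To make the classification logic concrete, here is a minimal Python sketch. It uses an ordinary fixed-effects two-way ANOVA (statsmodels) as a simplified stand-in for the repeated-measures design used in the paper; the data-frame layout and column names are our assumptions:

```python
import statsmodels.api as sm
from statsmodels.formula.api import ols

def classify_unit(df):
    """df: pandas DataFrame for one unit, one row per trial x epoch,
    with hypothetical columns 'rate' (firing rate), 'exemplar' (1-7),
    and 'epoch' ('baseline', 'epoch1', 'epoch2').

    Returns (action_related, oma_selective), mirroring the criteria in
    the text, via a fixed-effects approximation of the design.
    """
    tbl = sm.stats.anova_lm(
        ols("rate ~ C(exemplar) * C(epoch)", data=df).fit(), typ=2)
    p_epoch = tbl.loc["C(epoch)", "PR(>F)"]
    p_exemplar = tbl.loc["C(exemplar)", "PR(>F)"]
    p_inter = tbl.loc["C(exemplar):C(epoch)", "PR(>F)"]
    action_related = p_epoch < 0.05 or p_inter < 0.05
    oma_selective = action_related and (p_exemplar < 0.05 or p_inter < 0.05)
    return action_related, oma_selective
```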

Next, units were classified as motor-related by means of the VMT responses based on their possible modulation (facilitated or suppressed) in one (or both) epoch/s of action execution in the dark relative to baseline (i.e., reaching-grasping and object pulling). The analysis was carried out by means of a 3 × 3 repeated measures ANOVA (factors: Object and Epoch). The same analysis was performed to assess possible responses during grasping in the light as well. Object presentation response during the VMT was assessed by means of a 3 × 2 × 2 repeated measures ANOVA (factors: Object, Condition, and Epoch). All ANOVAs were followed by Bonferroni post hoc tests (P < 0.05) in the case of significant interaction effects or to identify specific effects of factors with more than 2 levels.

Heat maps were constructed to show the temporal activation profile of individual units in selected neuronal populations. Each line represents the activity of a unit averaged across trials of a given condition. The color code represents the net normalized activity, computed as follows: for each unit, a mean baseline value across trials was calculated, then subtracted bin-by-bin from the task period to be plotted, and finally normalized to the absolute maximum bin value across the compared conditions. All final plots were made using a bin size of 60 ms and steps of 20 ms. The same data used to produce the heat maps (averaged in 60 ms bins slid forward in steps of 20 ms) were also used to plot the time course of the net normalized population activity. To represent the population selectivity for a given variable (i.e., OMA or grip type), we performed one-way repeated measures sliding ANOVAs (P < 0.05 uncorrected) on each unit’s firing rate over time. This analysis was performed in 200 ms bins, advanced in steps of 20 ms for the entire task-unfolding period. The results of this analysis were plotted (relative to the center of each epoch) by calculating the percentage of significantly tuned units in each epoch in the entire neuronal population.
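The net normalization step can be written compactly; the array shapes below are assumptions for illustration:

```python
import numpy as np

def net_normalized(act, baseline):
    """Net normalized activity for heat maps.

    act      : (n_units, n_conditions, n_bins) trial-averaged rates,
               binned at 60 ms with 20 ms steps
    baseline : (n_units,) mean baseline rate per unit
    """
    net = act - baseline[:, None, None]                 # baseline-subtract per bin
    peak = np.abs(net).max(axis=(1, 2), keepdims=True)  # max across conditions/bins
    return net / np.where(peak == 0, 1, peak)           # guard against divide-by-zero
```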

Preference Indices

Neural selectivity for a given variable of interest (e.g., OMA, grip type, object, own HVF) was quantified for each unit by calculating a preference index (PI) for that variable using the activity in a specific epoch of interest with the following equation:

$$\mathrm{PI}_V = \frac{n - \sum_{i=1}^{n}\left(r_i / r_{\mathrm{pref}}\right)}{n - 1}$$

where $V$ is the selected variable of interest, $n$ is the number of conditions of $V$, $r_i$ is the unit response associated with condition $i$, and $r_{\mathrm{pref}}$ is the unit response associated with the preferred condition. Regardless of the number of conditions of the selected variable, the PI ranges from 0 to 1, with a value of 0 corresponding to identical response magnitudes for all conditions and a value of 1 corresponding to a response to only one condition.
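A minimal Python translation of this formula, with the two boundary cases from the text checked by assertions:

```python
import numpy as np

def preference_index(responses):
    """Preference index for one unit: responses is a 1-D array of mean
    firing rates, one per condition of the variable of interest."""
    r = np.asarray(responses, dtype=float)
    n = r.size
    return (n - np.sum(r / r.max())) / (n - 1)

# Sanity checks: identical responses -> 0; response to one condition -> 1
assert np.isclose(preference_index([5, 5, 5]), 0.0)
assert np.isclose(preference_index([5, 0, 0]), 1.0)
```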

Decoding Analyses

We assessed the decoding accuracy with a maximum correlation coefficient classifier trained to discriminate between the 7 OMA exemplars, using the methodology previously described by Meyers (2013) and adopted in other studies (Zhang et al. 2011; Rutishauser et al. 2015; Kaminski et al. 2017).

For each neuron, data were first converted from raster format into binned format. Specifically, we created binned data that contained the average firing rate in 150 ms bins sampled at 50 ms intervals for each trial (data-point). We obtained a population of binned data characterized by a number of data points corresponding to the minimum number of correct trials in all units × exemplars (i.e., 10 × 7 = 70 data-points for OMA exemplar decoding during OT) in an N-dimensional space (where N is the total number of neurons considered for the analysis). Next, we randomly grouped the 70 available data points into a number of splits equal to the total number of data points per condition (n = 10), with each split corresponding to one replication of all 7 conditions and containing a “pseudopopulation,” that is, a population of neurons that were partially recorded separately but treated as recorded simultaneously. Before sending the data to the classifier, they were normalized by means of z-score conversion so that neurons with higher levels of activity did not dominate the decoding procedure. We used a 10-fold cross-validation procedure whereby a pattern classifier was trained using all but one of the 10 splits of the data and then tested on the remaining one: this procedure was repeated as many times as the number of splits (i.e., 10), leaving out a different test split each time. To increase the robustness of the results, the overall decoding procedure was run 50 times with different selections of data in the training and test splits, and the decoding accuracy from all these runs was then averaged. The decoding results were based on the use of a maximum correlation-coefficient classifier. The analysis was performed on data collected from the 2 monkeys.
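The core of this pipeline (maximum correlation coefficient classification with stratified 10-fold cross-validation over pseudopopulation splits) can be sketched as follows; function and variable names are ours, and the sketch omits the time-resolved binning:

```python
import numpy as np

def max_corr_classify(train_X, train_y, test_X):
    """Maximum correlation coefficient classifier: each test point gets
    the label of the most correlated class-mean template."""
    labels = np.unique(train_y)
    templates = np.stack([train_X[train_y == c].mean(axis=0) for c in labels])
    corrs = np.array([[np.corrcoef(x, t)[0, 1] for t in templates]
                      for x in test_X])
    return labels[np.argmax(corrs, axis=1)]

def cv_accuracy(X, y, n_splits=10, n_runs=50, seed=0):
    """X: (n_datapoints, n_units) pseudopopulation matrix; y: labels.
    Assumes exactly n_splits trials per condition (10 x 7 = 70 here)."""
    rng = np.random.default_rng(seed)
    std = X.std(axis=0)
    X = (X - X.mean(axis=0)) / np.where(std == 0, 1, std)  # z-score each unit
    accs = []
    for _ in range(n_runs):
        # Stratified splits: one trial of each condition per split,
        # mirroring the pseudopopulation construction in the text.
        folds = [[] for _ in range(n_splits)]
        for c in np.unique(y):
            idx = rng.permutation(np.flatnonzero(y == c))
            for k in range(n_splits):
                folds[k].append(idx[k])
        folds = [np.array(f) for f in folds]
        for k in range(n_splits):          # leave one split out each time
            train = np.concatenate(folds[:k] + folds[k + 1:])
            preds = max_corr_classify(X[train], y[train], X[folds[k]])
            accs.append(np.mean(preds == y[folds[k]]))
    return float(np.mean(accs))
```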

To assess whether the classification accuracy was above chance, we performed a permutation test in which we randomly shuffled the attribution of the labels to the different trials (50 repetitions) and then ran the full decoding procedure to obtain a null distribution to be compared with the accuracy of the real decoding: the P-value was obtained by assessing how many of the points in the null distribution were greater than those in the real decoding distribution, selecting only periods of at least 3 consecutive significant bins. The decoding results were considered statistically significant only if the accuracy was greater than all the shuffled data in the null distribution (P = 0).
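Continuing the sketch above, the permutation test amounts to rerunning cv_accuracy with shuffled labels (here collapsed over time bins for brevity, so the 3-consecutive-bins criterion is omitted):

```python
def permutation_pvalue(X, y, n_perm=50, seed=1):
    """P-value as the fraction of label-shuffled decoding accuracies
    that reach or exceed the real one; significance requires that no
    shuffled run does (P = 0)."""
    rng = np.random.default_rng(seed)
    real_acc = cv_accuracy(X, y)
    null = np.array([cv_accuracy(X, rng.permutation(y))
                     for _ in range(n_perm)])
    return float(np.mean(null >= real_acc))
```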

Tracer Injections and Histological Procedures

The anatomofunctionally investigated region was defined based on previous studies (Murata et al. 2000; Borra et al. 2008), which identified as AIP the cortical sector ranging from anterior 6 to posterior −2 mm. Electrode implantation was thus performed within this region. At the end of the electrophysiological experiments, the probes were removed and neural tracers were injected within the functionally characterized region taking into account the following criteria: 1) achieving a match between injection sites and probe locations and 2) avoiding overlap between adjacent injection sites (minimum 1.5 mm in AP between injections). To fulfill these criteria, we could inject 3 different tracers in each animal, as detailed in Table 1.

Table 1.

Injected regions and details for each site in both animals

| Monkey | Species | Hemisphere | Injected site | Tracer | Amount |
|--------|---------|------------|---------------|--------|--------|
| Mk1 | Macaca mulatta | Right | AP 4 mm | FB 3% | 1 × 0.2 μL |
| Mk1 | Macaca mulatta | Right | AP 2 mm | CTBg Alexa 488 1% | 1 × 1.2 μL |
| Mk1 | Macaca mulatta | Right | AP 0 mm | DY 2% | 1 × 0.2 μL |
| Mk2 | Macaca mulatta | Left | AP 4.5 mm | FB 3% | 1 × 0.2 μL |
| Mk2 | Macaca mulatta | Left | AP 3 mm | DY 2% | 1 × 0.2 μL |
| Mk2 | Macaca mulatta | Left | AP 1 mm | CTBg Alexa 488 1% | 1 × 1.2 μL |

Before tracer injection, each monkey was anesthetized (ketamine, 5 mg/kg i.m. and medetomidine, 0.08–0.1 mg/kg i.m.) and tracers were slowly pressure injected at the desired depth using a Hamilton microsyringe (Reno, NV, USA). In the right hemisphere of Mk1 we injected 2 retrograde tracers, Fast Blue (FB, 3% in distilled water, Drilling Plastics GmbH, Breuberg, Germany) and Diamidino Yellow (DY, 2% in distilled water, Drilling Plastics GmbH, Breuberg, Germany), and an anterograde/retrograde tracer, analyzed here as a retrograde one, cholera toxin B subunit conjugated with Alexa 488 (CTBg, 1% in phosphate-buffered saline; Molecular Probes). In the left hemisphere of Mk2 we injected FB, DY, and CTBg. After an appropriate survival period for tracer transport (21 days), each animal was deeply anesthetized with an overdose of sodium thiopental and perfused through the left cardiac ventricle with saline, 3.5% paraformaldehyde, and 5% glycerol (in this order), prepared in phosphate buffer 0.1 M, pH 7.4. Each brain was then coronally blocked on a stereotaxic apparatus, removed from the skull, photographed, and placed in 10% buffered glycerol for 4 days. Finally, each brain was cut frozen into coronal sections of 60 μm thickness. For visualizing DY and FB by fluorescence microscopy, one section out of every 5 was mounted, air-dried, and quickly cover-slipped. For visualizing CTBg by bright-field microscopy, one section out of every 5 was immunohistochemically processed as follows. After inactivation of the endogenous peroxidase (methanol:hydrogen peroxide = 4:1), selected sections were incubated for 72 h at 4 °C in a primary antibody solution of rabbit anti-Alexa 488 (1:15 000, Life Technologies) in 0.3% Triton, 5% normal goat serum in phosphate buffer solution (PBS), and then incubated in biotinylated secondary antibody (1:200, Vector Laboratories, Burlingame, CA, USA) in 0.3% Triton, 5% normal goat serum in PBS. Finally, CTBg labeling was visualized using the Vectastain ABC kit (Vector) and the Vector SG peroxidase substrate kit (SK-4700, Vector) as a chromogen. For both monkeys, 1 section out of 5 was stained using the Nissl method (thionin, 0.1% in 0.1 M acetate buffer, pH 3.7).

Reconstruction of the Injection Sites, Identification of the Recorded Regions, Distribution of Labeled Neurons and Quantitative Analysis

The locations of the probes’ tracks and injection sites were assessed under an optical microscope in a series of Nissl-stained coronal sections, then plotted and digitized together with the outer and inner borders of the cerebral cortex using a computer-based charting system. The distribution of retrograde cortical labeling was plotted in sections spaced 600 μm apart, together with the outer and inner cortical borders. The digitized sections were then imported into CARET software to create reconstructions of the cortical surface (http://www.nitrc.org/projects/caret/, Van Essen et al. 2001), as described in other studies (Galletti et al. 2005; Gamberini et al. 2009). The same software was used to prepare the density maps of labeled neurons by projecting the location of each neuron to the nearest mid-thickness contour used to build the 3D reconstruction (Bakola et al. 2010; Passarelli et al. 2011). The strength of projections from various cortical areas was quantified as the percentage of labeled cells in each area relative to the total number of labeled cells in the whole brain, excluding the halo and core regions of the injection site.
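The projection-strength measure is a simple proportion; a minimal pandas sketch, with hypothetical column names:

```python
import pandas as pd

def projection_strength(cells: pd.DataFrame) -> pd.Series:
    """cells: one row per labeled neuron, with an 'area' column and a
    boolean 'in_injection_site' flag (True for core/halo cells, which
    are excluded from both the counts and the denominator)."""
    outside = cells.loc[~cells["in_injection_site"], "area"]
    return (100 * outside.value_counts() / len(outside)).sort_values(
        ascending=False)
```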

Identification of Cortical Areas Containing Extrinsic Labeled Cells

The nomenclature and boundaries of the cortical areas that contained labeled cells were based on published criteria or sulcal landmarks, using previously published maps as a guide. The architectonic criteria of Pandya and Seltzer (1982) were used to subdivide the superior parietal lobule into areas PE and PEc. The IPL was subdivided according to Gregoriou et al. (2006). Areas of the lateral intraparietal sulcus were identified based on the descriptions of Blatt et al. (1990), Durand et al. (2007), and Lewis and Van Essen (2000). The subdivision of the medial parietal areas and cingulate sulcus (PGm, 23, 24) was defined according to the criteria of Kobayashi and Amaral (2000), Luppino et al. (2005), Matelli et al. (1991), Morecraft et al. (2004), Passarelli et al. (2018), and Vogt et al. (2005). The temporal cortex and the STS were subdivided according to Boussaoud et al. (1990), Lewis and Van Essen (2000), and Saleem and Tanaka (1996). The frontal motor and premotor cortices were subdivided into areas F1–F7 according to the criteria of Belmalih et al. (2007) and Matelli et al. (1991). The lateral prefrontal cortex was subdivided based on the architectonic and connectional definitions of Borra et al. (2011, 2017), Carmichael and Price (1994), and Gerbella et al. (2007, 2010, 2013).

Results

We recorded neuronal activity from 4 locations along the anteroposterior extent of area AIP in Monkey 1 (Mk1) and from 3 locations in Monkey 2 (Mk2) using linear multielectrode (32-channel) silicon probes. The probes were spaced at 2 to 3 mm intervals along the lower bank of the intraparietal sulcus (Fig. 1A), covering most of the rostrocaudal extent of the area. Note that the data recorded from intermediate probes b and c in Mk1 have been combined in subsequent analyses to facilitate comparisons between animals and to evaluate correspondences with the results of the tracing study (see below). The entire recorded region corresponds to the functionally defined area AIP (Sakata et al. 1995; Murata et al. 2000), as even the most posterior probes of both monkeys show neuronal properties known to characterize area AIP (Fig. S1). During all recording sessions (n = 6, 2 in Mk1 and 4 in Mk2), monkeys performed the OT (Fig. 1B). In addition, in 4 out of the 6 sessions (n = 2 in Mk1 and n = 2 in Mk2), we also recorded neuronal activity while monkeys performed the visuomotor reaching-grasping task (VMT, see Fig. S1A) using the apparatus (Fig. 1C) originally devised for studies of other areas (Bonini et al. 2014b; Lanzilotto et al. 2016).

During OT, monkeys had to fixate a square in the middle of a screen in a dark room while they were randomly presented with one of the 4 variants (2 objects and 2 actors) of 7 different manipulative action exemplars (Fig. 1D). Each video lasted 2.6 s and was preceded by a 1.5 s blank screen with the fixation point (Fig. 1B). Each OMA exemplar was characterized by a specific speed profile (Fig. 1E), used as a proxy for the dynamic body shape changes characterizing the action (Vangeneugden et al. 2009, 2011; Theusner et al. 2014). Within the first 300 ms of video presentation (Epoch 1), OMA exemplars were characterized by the appearance of specific postures involving the actor’s body and right arm with relatively little dynamic information (Fig. 1D, gray squared pictures and Fig. 1E). In contrast, during the subsequent 1200 ms (Epoch 2) the dynamic body shape changes varied widely, with some of the exemplars (Fig. 1D, light-blue squared pictures and Fig. 1E) being characterized by greater changes. These 2 epochs (1 and 2) were subsequently used in most analyses comparing neural activity to baseline, defined as the 500 ms preceding video onset. It is worth noting that, despite the wide differences in OMA dynamics, monkeys correctly performed most OT trials, and the frequency of fixation breaks during video presentation did not differ significantly (Kruskal–Wallis, χ2 = 3.61, P = 0.73) amongst exemplars (Fig. S2A).

Neuronal activity was analyzed off-line with automated spike sorting software (Chung et al. 2017). We extracted both multi- and single unit activity, here defined together as “units”. Because of their similarity in the encoding of functional properties (see Fig. 2 and Fig. S2B and C), we pooled all (single- and multi-) units in most of the subsequent analyses of functional data to match the unbiased sampling of the tracing study performed on the physiologically characterized sites.

Figure 2.

Single neuron examples, population activity, and tuning properties of AIP units in the OT. (A) Percentage of OMA-selective, -nonselective, and -unresponsive units in each monkey. (B) Examples of facilitation (Neuron 1) and suppression (Neuron 2) OMA-selective neurons. Each neuron’s raster and peri-stimulus response is aligned to the video presentation (green triangles and dashed lines). Red triangles, reward delivery. (C) Time course of the net normalized population activity (including single- and multiunits) of OMA-selective facilitated (left) and suppressed (right) units. The shaded area around each line indicates 1 standard error; the gray and light-blue shaded areas superimposed on each plot represent Epochs 1 and 2 used for statistical analysis.

Neuronal Selectivity for OMAs in Area AIP

We isolated 643 units (Fig. 2A), 131 of which were classified as well-isolated single units (n = 89 in Mk1 and n = 42 in Mk2) based on strict standard criteria (see Materials and Methods). Of all isolated units, 455 (70.8%, including 123 single units) responded to OMAs (7 × 3 repeated measures ANOVA, factors: Exemplar and Epoch, P < 0.05) and, of the latter, 171 (including 40 single units) exhibited action selectivity (see Materials and Methods). Thus, over a quarter (26.6%) of all AIP recorded units (30.5% of single units) exhibited OMA selectivity, despite interindividual differences (44% and 11% of units in Mk1 and Mk2, respectively).

Most AIP OMA-selective units showed increased activity in response to the video presentation (facilitation units, N = 153, 89.5%), whereas a few exhibited suppressed activity (suppression units, N = 18, 10.5%). Figure 2B shows examples of facilitation (Neuron 1) and suppression (Neuron 2) OMA-selective single units. In line with these examples, the population activity of both units (Fig. 2C) and single neurons (Fig. S2B) discriminated between best and worst OMA in a similar manner. Interestingly, the time course of facilitation units (Fig. 2C) and single neurons (Fig. S2B) showed a strong and transient activation, peaking in Epoch 1, followed by sustained activity during Epoch 2. Considering the differences in motion strength (taken as a proxy for the magnitude of body shape changes) characterizing Epochs 1 and 2 of the videos (Fig. 1E), the early population response (Epoch 1) may derive largely from static information conveyed by the initially different body posture of the actor, whereas the later response (Epoch 2) should reflect only dynamic visual information. To directly test this issue, we performed a series of single unit and population analyses.

OMA selectivity was evident during both Epochs 1 and 2: the OMA PI for Epoch 2 was slightly greater than for Epoch 1 (Fig. 3A), but the 2 indices did not differ significantly (t = 0.81, P = 0.42) and were positively correlated (r = 0.57, P < 0.001). These findings suggest that the distinctive initial static body postures in some of the OMA exemplars were sufficient to elicit OMA selectivity, which was sustained during the subsequent dynamic changes in body shape. To test this idea more directly, we independently ranked the 7 OMA exemplars based on the discharge rates of each unit in Epochs 1 and 2. OMA exemplars with lower ranks based on the units’ responses during Epoch 2 also showed the lowest ranks when scored based on their responses during Epoch 1 (Fig. 3B and S2D). Thus, the representation of OMA exemplars in area AIP at the single unit level appears to be relatively invariant with regard to the strength of the dynamic information characterizing them. AIP neurons may therefore code OMA identity regardless of the prevalence of static or dynamic information conveying it.

Figure 3.

Temporal dynamics of OMA processing in AIP. (A) Regression plot of preference index (PI) values calculated on OMA-selective unit activity during Epochs 1 and 2. See also Figure S2D. (B) Cross-validation of the ranking of all OMA exemplars performed with the average activity (±1 standard error) during Epoch 1 as a function of the same ranking performed with the average activity during Epoch 2 (Kruskal–Wallis, χ2 = 139.34, P < 0.001). (C) Percentage of units selective for each OMA exemplar in each monkey during Epoch 2. (D) Classification accuracy of OMA exemplars as a function of test and training time. The superimposed white line represents the classification accuracy of the population along the diagonal (scale on the right). The red line in the lower part of the plot indicates the period of time during which the decoding accuracy is significantly above chance level (see Materials and Methods). (E) Cross-validation of the best OMA exemplar (rank = 1) calculated with the activity during Epoch 2 (E2) as a function of time during the entire action unfolding period (bin width 300 ms, step 20 ms). For each bin, the color code (see inset) represents the local rank of the OMA ranking 1 in Epoch 2. Bins in which neural activity was not significantly different from baseline (sliding window ANOVA, bin width 300 ms, step 20 ms, P > 0.05 uncorrected) have been blanked out (E1 = Epoch 1).

Encoding OMAs requires not only selective units but also adequate coverage of the various exemplars, which we investigated using responses in Epoch 2. Figure 3C demonstrates that grasping was the most represented OMA: indeed, for more than 30% of the recorded units (34% in Mk1 and 36% in Mk2) grasping evoked the strongest response (see also Fig. S2C). On the other hand, Roll, Squeeze, and Rotate were the least well-represented OMAs amongst selective AIP units. Accordingly, the similarity between ranks in Epochs 1 and 2 (Fig. 3B) was greater for the 4 most frequently preferred exemplars than for the 3 least well-represented ones (Fig. S2E). Further analyses were performed to rule out the possibility that neuronal selectivity for OMAs could be accounted for by stimulus features other than the action (i.e., actor gender, type of object, or interaction between these factors): Figure S2F shows that action exemplar was the only manipulated factor significantly represented by AIP neuronal activity.

Finally, we trained a classifier (Meyers 2013) to discriminate between the 7 OMA exemplars using the entire AIP neuronal population. The results (Fig. 3D) indicate that the classification accuracy was high and significantly above chance level during the entire video presentation period, reaching a maximum of nearly 80% accuracy near the end of Epoch 1 and then showing close-to-maximal values during most of Epoch 2. By training the classifier at one point in time (using 150 ms bins of data shifted by 50 ms) and then testing its decoding performance at either the same or a different time point, we investigated whether a dynamic or static population code underlies OMA representation in AIP (Fig. 3D). The high decoding accuracy restricted along the diagonal indicates that the neural representation of OMA identity in AIP emerges mainly from a dynamic code, consistent with previous studies of perceptual and cognitive processes (Meyers et al. 2008, 2012; Crowe et al. 2010). This dynamic representation of OMA identity in AIP (Fig. 3D) essentially reflects the contribution of OMA-selective (Fig. S3A) as compared with task-related (but OMA-unselective) (Fig. S3B) and task-unrelated (Fig. S3C) units. Although accurate decoding of the AIP population activity is time dependent, most OMA-selective units display remarkable stability over time in their preference for a given exemplar (Fig. 3E), in line with the comparison of the selectivity in the 2 epochs reported above. Thus, OMA representation emerges dynamically at the population level from individual units with a relatively stable code for exemplar identity.
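The train-at-one-time, test-at-another analysis behind Figure 3D is commonly called temporal generalization. A minimal sketch, assuming binned pseudopopulation data and reusing a classifier such as the max_corr_classify function sketched in Materials and Methods; the simple half-split stands in for the full cross-validation for brevity:

```python
import numpy as np

def temporal_generalization(X_t, y, decode_fn):
    """Train at one time bin, test at every other, as in Fig. 3D.

    X_t       : (n_bins, n_trials, n_units) binned pseudopopulation data
    y         : (n_trials,) exemplar labels
    decode_fn : callable(train_X, train_y, test_X) -> predicted labels
    Returns an (n_bins, n_bins) accuracy matrix; high accuracy confined
    to the diagonal indicates a dynamic population code."""
    n_bins, n_trials, _ = X_t.shape
    half = n_trials // 2                   # simple split for illustration
    acc = np.zeros((n_bins, n_bins))
    for t_train in range(n_bins):
        for t_test in range(n_bins):
            preds = decode_fn(X_t[t_train, :half], y[:half],
                              X_t[t_test, half:])
            acc[t_train, t_test] = np.mean(preds == y[half:])
    return acc
```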

Relationship Between OMA Selectivity and Visuomotor Properties in AIP

A subset of 487 units (Mk1, n = 306; Mk2, n = 181) was tested in the VMT in addition to the OT; the vast majority of them (91%) proved task-related. Table 2 summarizes the relationship between visual responses to OMAs and selectivity for the grip type in the dark, which is considered the most reliable marker of hand grasping motor coding. A few AIP units (n = 25, 5%) discharged only during visual presentation of the target object, many (n = 133, 27%) responded during at least 1 of the 2 action execution epochs in the dark, but most (n = 287, 59%) responded during both object presentation and action execution (in the dark).

Table 2.

Properties of all units recorded during both VMT in the dark (reaching-grasping/pulling epoch) and OT (Epochs 1/2)

| | Grip selective | Grip nonselective | Unresponsive | Total |
|---|---|---|---|---|
| OMA-selective | 67 | 75 | 15 | 157 |
| OMA-nonselective | 68 | 125 | 25 | 218 |
| OMA-unresponsive | 34 | 51 | 27 | 112 |
| Total | 169 | 251 | 67 | 487 |

Grip-selective units showed OMA selectivity more frequently (49.6%) than grip-nonselective ones (37.5%; χ2 = 4.86, P = 0.028), and this association remained significant even when unresponsive units were included (χ2 = 6.5, P = 0.011). Since grasping was the most well-represented OMA (Fig. 3C and S2C), we next investigated the possible association between neuronal selectivity for observed and executed grasping. Our results show no evidence of an association between grip selectivity during the VMT and visual selectivity for observed grasping relative to other OMAs in the OT (χ2 = 2.75, P = 0.097, see Table 3). Indeed, the proportion of grip-selective units is even smaller (though not significantly so) amongst those with OMA selectivity for grasp (36%) relative to those preferring other OMAs (51%).

Table 3.

Properties of OMA-selective units recorded during both VMT in the dark (reaching-grasping/pulling epoch) and OT (Epochs 1/2)

Observed manipulative action selectivity:

| | Grasp | Other OMAs | Total |
|---|---|---|---|
| Grip selective | 14 | 53 | 67 |
| Grip nonselective | 25 | 50 | 75 |
| Total | 39 | 103 | 142 |
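Both reported contingency tests can be reproduced from the table counts; a short sketch using scipy (Pearson χ2 without continuity correction):

```python
import numpy as np
from scipy.stats import chi2_contingency

# 2 x 2 table from Table 2, unresponsive units excluded:
# rows = grip selective / nonselective, cols = OMA-selective / nonselective
table2 = np.array([[67, 68], [75, 125]])
chi2, p, _, _ = chi2_contingency(table2, correction=False)
print(round(chi2, 2), round(p, 3))   # -> 4.86 0.028

# Table 3: grip selectivity vs. visual preference for grasp vs. other OMAs
table3 = np.array([[14, 53], [25, 50]])
chi2, p, _, _ = chi2_contingency(table3, correction=False)
print(round(chi2, 2), round(p, 3))   # -> 2.75 0.097
```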

To address this issue further, we investigated the relationship between motor (grip type) and visual (OMA) selectivity as a function of time. Figure 4A shows that, relative to grip-nonselective units (red), greater numbers of grip-selective units (blue) not only exhibit grip selectivity over time but also show OMA selectivity in the later phase of video presentation (Fig. 4B), confirming the epoch-based contingency analysis. Nonetheless, this effect cannot be specifically accounted for by greater visual selectivity for grasping, since the proportion of grasping OMA-selective units (Fig. 4B) did not differ significantly between grip-selective (light blue) and nonselective (orange) units at any moment during video presentation (sliding χ2 test, P > 0.05 during the whole period).

Figure 4.

Relationship between motor selectivity for the grip type and visual selectivity for observed grasping actions. (A) Percentage of grip-selective (blue) and nonselective (red) units showing grip selectivity over time during the VMT performed in darkness. The dashed line below the plot indicates the time bins in which the relative number of tuned units was significantly different between the 2 subpopulations (sliding χ2 test performed on 20 ms bins, P < 0.05, only sets of at least 5 contiguous bins are shown). (B) Percentage of grip-selective (blue) and nonselective (red) units (same as in panel A) showing OMA selectivity over time. All conventions as in panel A. The additional curves indicate the percentage of units with specific selectivity for observed grasping (sliding χ2 tests, P < 0.05) among grip-selective (light blue) and nonselective (orange) units. (C, D) Heat maps of facilitation OMA-selective units with visual preference for grasping (C) or for OMAs other than grasping (D) during the OT. Units in the heat maps have been ordered (from bottom to top) according to the timing of their peak activity after video presentation onset (vertical dashed line in the panels on the right). Superimposed on each heat map, the black lines in the left panels represent the percentage of units of the entire subpopulation showing significant tuning for grip type (sliding window one-way ANOVA, bin size 200 ms, step 20 ms, P < 0.05 uncorrected). In the right panels, the colored curves on the heat maps indicate the percentage of units in the subpopulation displaying preference for a specific OMA exemplar (i.e., the exemplar with the highest activity value; see color code in the legend; sliding window one-way ANOVA with 7 levels of the factor “OMA,” bin size 200 ms, step 20 ms, P < 0.05 uncorrected). Further analyses on the same set of data are provided in Figure S4. (E) Time course of the net normalized mean activity for each subpopulation illustrated in panels C and D during the VMT (left) and OT (right). Shaded regions around each line represent 1 standard error.

In the analyses described so far, the relationship between observed and executed grasping was explored using motor selectivity as a criterion for grouping the units. We also investigated this relationship by grouping the units based on their selectivity for OMAs. Figure 4C shows, for units selective for observed grasping, the temporal activation pattern in both the VMT in the dark (left) and the OT (right). Figure 4D shows the same information for units with selectivity for OMAs other than grasping. A sliding χ2 test (P < 0.05) carried out over the entire VMT period provided no evidence for a greater proportion of grip-selective units among those with visual selectivity for observed grasping relative to those with visual selectivity for OMAs other than grasping. Furthermore, there was no difference in the relative levels of visual (t = 0.42, P = 0.68) and motor (t = −0.77, P = 0.44) activity in the VMT between units with visual selectivity for grasping and those with selectivity for the other 6 OMA exemplars (Fig. 4E). Since nongrasp OMA exemplars can be subdivided into high- versus low-motion ones (reflecting the magnitude of dynamic body shape changes), we also compared the visuomotor properties of units with visual preference for either of these 2 categories (Fig. S4) but found no significant difference between them. The response pattern of OMA-selective suppression units tested in the VMT was radically different from that of OMA-selective facilitation units (Fig. S5), showing no significant modulation during grasping execution, suggesting that they were essentially visual in nature.

The present findings show that AIP neurons with grasp-selective motor response play a major role in encoding OMA identity. Although grasping was the most frequently represented OMA, we found no privileged association between motor and visual representations of grasping relative to other OMAs, suggesting that the convergence of visual and motor signals onto the same neurons in AIP does not appear to be grounded in a visuomotor congruence between the 2 formats of action exemplar representation.

Rostrocaudal Distribution of OMA Selectivity and Visuomotor Properties in AIP

Previous functional evidence obtained with static visual stimuli suggested the presence of a rostrocaudally increasing gradient of visual processing within area AIP (Durand et al. 2007; Baumann et al. 2009). Here, we compared the functional properties of the 3 populations of units located in the rostral, intermediate, and caudal parts of area AIP of each monkey (see Fig. 1A and Materials and Methods), by exploiting electrophysiological data collected with both the OT and VMT. For this purpose, we computed PIs for each factor of interest (OMA selectivity, object visual selectivity, grip selectivity, and HVF), using the same procedure for all factors (see Materials and Methods) and including all task-related units.

Figure 5A illustrates OMA selectivity along AIP for the 2 monkeys. The caudal populations show greater OMA selectivity relative to those located more rostrally (F(2) = 4.78, P = 0.009). This rostrocaudal gradient in OMA coding is accompanied by a rostrocaudal increase in shape selectivity during the object presentation epoch in the VMT (Fig. 5B), although this effect did not reach significance (F(2) = 2.39, P = 0.09). Nevertheless, we found significantly greater grip selectivity in the caudal sites during the reaching-grasping epoch, both in the dark (F(2) = 5.09, P = 0.006) and in the light (F(2) = 7.19, P = 0.0008) (Fig. 5C), but not during the object pulling epoch (dark, F(2) = 1.26, P = 0.28; light, F(2) = 0.81, P = 0.45). The stronger gradient evidenced during the VMT in the light may derive from input regarding the monkey’s own HVF during grasping. Indeed, HVF preference (Fig. 5D) was greater in neural activity recorded more caudally (F(2) = 26.01, P = 0). It is interesting to note that the proportion of AIP units showing combined grip and OMA selectivity also varied as a function of rostrocaudal position, from only 7% (6/90) at the rostral position up to 16% (37/238) and 15% (24/159) at the intermediate and caudal levels, respectively (χ2 = 4.68, P < 0.05). These findings demonstrate a rostrocaudal gradient in AIP for a variety of visual information related to own and others’ actions.

Figure 5.

Rostrocaudal differences in neural selectivity for OMA, visually presented object and grip type along AIP. (A) Preference index for OMA during OT. (B) Preference index for the visually presented object during go trials of the VMT. (C) Preference index for the grip type during reaching/grasping execution (left column) and during object pulling (right column) in the dark (top) and in the light (bottom). (D) Preference index for hand visual feedback (HVF, light vs. dark) calculated from reaching/grasping (left column) and object pulling (right column) responses. Each set of data has been analyzed by means of a 2 × 3 factorial ANOVA (factors: monkey, position), **P < 0.001 for the factor position.

Rostrocaudal Connectivity of Functionally Characterized AIP Sites

To elucidate the rostrocaudal changes in AIP connectivity, at the end of the neurophysiological experiments we injected 3 different neural tracers at the anteroposterior positions (see Materials and Methods and Fig. 6A) corresponding to the locations of the explanted probes (Fig. 1A). Consistent with previous studies (Lewis and Van Essen 2000; Nakamura et al. 2001; Borra et al. 2008), all the injections showed the connectivity pattern typical of AIP, which includes areas of the IPL, the IPS, the parietal operculum (PO), as well as different subdivisions of the premotor cortex (Fig. 6A). In particular, all the injected sites shared similar connectivity with the anatomical subdivisions of area F5, which is the most well-established connectional hallmark of AIP (Borra et al. 2008). Nonetheless, we were able to observe quantitative differences within AIP regarding specific connectivity patterns, depending on the position of the injected site along the intraparietal sulcus (Fig. 6B).

Figure 6.

Anatomical connectivity of the rostral, intermediate, and caudal sectors of area AIP in the 2 monkeys. (A) Three-dimensional anatomical reconstructions illustrating the distribution of labeled cells after injections at the different AIP levels of Mk1 (top) and Mk2 (bottom). For each monkey, connectivity maps are presented for the rostral (left), intermediate (center), and caudal (right) injections. To facilitate the comparison, the maps of Mk1 (right hemisphere) were flipped and shown as a left hemisphere and the mesial walls were shown as right hemispheres. The color scale indicates the relative density of labeled cells, counted within regions of 600 × 600 μm², and expressed as a percentage of the maximum value obtained within the cortical surface for any given injection. Cgs, cingulate sulcus. Other abbreviations as in Figure 1. (B) Graphical representation of the strength of the main (>1%) connections of the rostral, intermediate, and caudal sectors of AIP (data from corresponding injection positions in the 2 monkeys have been merged). The width of the wedges indicates the percentage of labeled neurons following injections at the various anteroposterior positions (see scale on the left). Acronyms: rIPL, rostral inferior parietal lobule; cIPL, caudal inferior parietal lobule; cIPS, caudal intraparietal sulcus; Par.Op, parietal operculum; SPL, superior parietal lobule; mIPS, medial intraparietal sulcus; vIPS, ventral intraparietal sulcus.

The most caudal injections were characterized by stronger connections with temporal areas (PITd/PITv, TEa/TEm, and IPa/PGa), ventrolateral prefrontal areas 12r/46v and areas FEF/8A/45B, in addition to caudal parietal areas Opt and LIP. The intermediate and, even more so, the rostral injections exhibited weaker (or no) connectivity with the temporal and prefrontal regions listed above. In contrast, the intermediate and, even more so, the rostral injections yielded strong connections with ventral premotor area F5p, inferior parietal areas PFG and PF, the PO, as well as with areas PEip and MIP in the superior parietal lobule and the medial intraparietal sulcus (Fig. 6B).

To clarify the possible relationship between specific pathways and the information they may convey to the different AIP sectors, we calculated the relative percentage of retrograde labeling within 6 functional clusters of areas following tracer injections at the 3 rostrocaudal positions in each monkey. This analysis revealed 3 connectivity gradients (Fig. 7, Table S1 and Fig. S6) increasing in the rostrocaudal direction, involving 1) ventral visual areas, 2) oculomotor parietofrontal areas (particularly area LIP, see Table S1), and 3) prefrontal areas (particularly areas 46v/12r, see Table S1). In addition, we found an opposite, caudorostral gradient for a large set of sensorimotor, mainly parietal, regions processing somatosensory information. It is worth noting that connections with dorsal visual areas, including the MT cluster, CIP, PIP, and V6A, showed little or no gradient, being strongest for the intermediate AIP injection.
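
The logic of this cluster analysis can be summarized in a few lines of Python. The cluster membership below is illustrative shorthand for the groupings named in the text, not the exact assignments (which are given in Table S1):

from collections import Counter

# Illustrative stand-in for the 6 functional clusters named in the text.
CLUSTERS = {
    "ventral visual": {"PITd", "PITv", "TEa", "TEm", "IPa", "PGa"},
    "oculomotor": {"LIP", "FEF", "8A", "45B"},
    "prefrontal": {"46v", "12r"},
    "somatosensory": {"PF", "PFG", "PEip", "Par.Op"},
    "dorsal visual": {"MT", "CIP", "PIP", "V6A"},
    "skeletomotor": {"F5p", "MIP"},
}

def cluster_percentages(labeled_areas):
    # labeled_areas: one area name per retrogradely labeled cell.
    counts = Counter(labeled_areas)
    total = sum(counts.values())
    return {name: 100.0 * sum(counts[a] for a in areas) / total
            for name, areas in CLUSTERS.items()}

# e.g., a caudal injection dominated by ventral-stream and LIP label:
print(cluster_percentages(["TEa"] * 40 + ["LIP"] * 35 + ["PFG"] * 25))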

Figure 7.

Rostrocaudal gradients in AIP anatomical connectivity. Each bar represents the percentage of labeled cells observed in the various functional territories, each defined on the basis of functional similarities among the labeled areas. The areas included in each cluster are listed under the histograms. The bars in each cluster are arranged in rostral-to-caudal order relative to the location of the injection sites. Acronyms as in Figures 1 and 6.

Discussion

In this study we show that, as we predicted, AIP neurons encode the identity of specific OMAs, whether conveyed by mostly static (body shape) or by dynamic (body motion) information. Visual selectivity for OMA identity was stronger in pAIP, where it was associated with motor selectivity for grip type as well as with high sensitivity to visual feedback of the monkey's own hand during grasping execution. The rostrocaudal increase in preference for the visual encoding of manual actions of self and others parallels a rostrocaudal increase in anatomical connectivity with temporal areas of the ventral visual stream, oculomotor regions, and prefrontal cortex, which may provide visual and contextual information relevant for manipulative action processing. These results revise current models of the action observation network in the macaque, indicating that pAIP constitutes a parietal hub for routing information about OMA identity to the other parietal, premotor, and prefrontal nodes of the network (Bonini 2017; Rozzi and Fogassi 2017).

AIP Neurons Encode a Variety of OMAs

Previous single-neuron studies reported that 23% (Maeda et al. 2015) to 59% (Pani et al. 2014) of AIP neurons respond to observed grasping. Here we found much greater percentages of both single units (94%) and multiunits (71%) exhibiting facilitated or suppressed responses to OMAs, with over a quarter of them displaying selectivity for the various exemplars. Importantly, our chronic recording approach is unbiased, because it excludes any preselection of the recorded neurons, making these percentages highly reliable. The discrepancy with the 2 previous studies may be reconciled by considering that both focused only on the rostral half of area AIP and investigated a single OMA exemplar (grasping).

Different OMA exemplars are primarily characterized by specific patterns of body-shape changes. However, the distinctive static body posture of the actor before action onset allows, in many cases, predicting the action that will be observed (Theusner et al. 2014), even if the prediction may not be particularly accurate (Platonov and Orban 2016). Interestingly, a general feature of AIP neurons evidenced in the present study is that they exhibit tuning for specific OMAs both during the initial epoch, dominated by the actor's static body posture at video onset, and during the subsequent epoch, in which the pattern of body-shape changes characterizes each OMA. The fact that OMA selectivity was quantitatively (i.e., magnitude of preference) and qualitatively (i.e., action exemplar) similar between the 2 epochs indicates that AIP neurons can identify a given OMA regardless of the specific type of (static or dynamic) visual information available and independently of the magnitude of the body-shape changes. Indeed, OMA identity can be accurately decoded from AIP neuronal population activity during the entire video presentation period. Interestingly, the AIP population code appears to be mostly dynamic (Mendoza-Halliday and Martinez-Trujillo 2017; Meyers 2018), suggesting that distinct AIP neurons provide specific contributions to the representation of OMA exemplars at distinct periods in time. Nonetheless, this does not mean that single-neuron tuning for OMAs changes randomly over time. Indeed, we showed that the OMA preference of selective units displays remarkable temporal stability, despite changes in the magnitude of neuronal activity, suggesting that the OMA representation emerges dynamically at the population level from individual units with a relatively stable code for exemplar identity.
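
The distinction between a dynamic and a temporally stable population code is typically assessed with cross-temporal decoding: a classifier is trained on activity from one time bin and tested on every other. The sketch below illustrates this logic on synthetic data; it is not the authors' pipeline (which would rely on tools such as the Neural Decoding Toolbox, Meyers 2013), and all data shapes are hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
n_trials, n_units, n_bins, n_omas = 140, 50, 20, 7
X = rng.normal(size=(n_trials, n_units, n_bins))       # binned firing rates
y = np.repeat(np.arange(n_omas), n_trials // n_omas)   # OMA label per trial

acc = np.zeros((n_bins, n_bins))  # rows: training bin; columns: testing bin
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in cv.split(X[:, :, 0], y):
    for t_train in range(n_bins):
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X[train_idx, :, t_train], y[train_idx])
        for t_test in range(n_bins):
            acc[t_train, t_test] += clf.score(X[test_idx, :, t_test], y[test_idx])
acc /= cv.get_n_splits()
# A mostly diagonal accuracy matrix indicates a dynamic code; broad
# off-diagonal generalization would indicate a temporally stable code.
print(acc.round(2))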

In spite of the neuronal coverage of all OMA exemplars, a greater number of units showed selectivity for the exemplars characterized by large changes in body shape. Among them, grasping was by far the most widely represented in both animals, although it was not the exemplar with the largest body motion, and all 7 OMAs were evenly sampled. The overrepresentation of observed grasping is consistent with its crucial ethological role in primates (Hashimoto et al. 2013; Graziano 2016; Tia et al. 2017). Furthermore, the complexity of its biomechanical control (Grafton 2010) and the extent of its motor representation in the cerebral cortex (Filimon 2010; Nelissen and Vanduffel 2011; Baldwin et al. 2017) may require devoting a large number of neurons even to its visual processing. Indeed, previous studies have shown that distinct sets of parietal (Fogassi et al. 2005; Maeda et al. 2015) and premotor (Caggiano et al. 2009; Bonini et al. 2010; Papadourakis and Raos 2017; Mazurek et al. 2018) neurons represent specific types of observed grasps as well as the specific context in which the observed grasp is embedded. Thus, the finer granularity of the neural representation of grasping relative to other manual actions may explain its overrepresentation.

Because the visual representation of grasping predominates in AIP, one might expect its motor and visual formats to converge at the single-neuron level: even though grasping was the only manipulative action tested in the motor task, grip-selective neurons should then exhibit selectivity for observed grasping more frequently than for other OMAs. In contrast, we found that motor specificity for grasping in AIP neurons was preferentially associated with visual selectivity for OMAs other than grasping. A possible interpretation of this lack of congruence at the exemplar level is that the neuronal population investigated may be involved in the motor planning not only of grasping but also of other manipulative actions. Indeed, a recent study using long-train intracortical microstimulation (Baldwin et al. 2017) suggested that the IPL, including AIP, hosts the neural substrates underlying a variety of ethologically relevant hand and digit movements beyond grasping, consistent with previous proposals (Tunik et al. 2007). Intriguingly, our evidence of OMA selectivity in AIP suggests that object size and shape should no longer be considered the only visual information used for action planning in AIP. Indeed, the observed actions of others may also play a role in the selection and planning of appropriate manipulative actions, especially in social contexts, a view supported by recent evidence that AIP is a crucial component of the cortical network underlying the visual processing of social interactions (Sliwa and Freiwald 2017). Finally, a recent study on ventral and dorsal premotor mirror neurons demonstrated that in both these nodes of the cortical action observation network the probability of strict visuomotor congruence between motor and visual representations of grip type in single neurons was at chance level (Papadourakis and Raos 2018). Although in the execution mode we could not test the same variety of actions as in the observation mode, the present and previous (Papadourakis and Raos 2018) findings suggest that in most areas of the action observation network (Bonini 2017; Bruni et al. 2018; Fiave et al. 2018) the encoding of executed and observed actions recruits largely overlapping sets of neurons, which may nonetheless specify highly distinct variants of the encoded features when switching between the visual and motor modes of action representation.

The neural machinery underlying observed action processing revealed by the present study also suggests that area AIP should prove an ideal focus for future investigations into the neural basis of monkeys' ability to discriminate and categorize hand actions (Nelissen and Vanduffel 2017), as previously investigated for 3D shapes (Verhoef et al. 2015). Interestingly, this function may also benefit from the activity of suppression OMA-selective units, which, unlike facilitation units, did not respond during grasping in the dark but did so in the light. This suggests that, in spite of their essentially visual nature, these units differentiate the visual feedback of the subject's own hand during active grasping from others' observed manual actions.

Rostrocaudal Anatomofunctional Gradients Within AIP

One of the most important contributions of our study, beyond the demonstration of the role played by AIP in the neural representation of OMAs, is the finding that visual selectivity for observed actions prevails in the caudal portion of the area, where it is associated with stronger visuomotor selectivity for grip type and, most interestingly, with stronger tuning for the visual feedback of the monkey's own hand during active grasping. These findings are consistent with previous AIP studies indicating a rostral-to-caudal increase in selectivity for visual features such as object shape and orientation (Durand et al. 2007; Baumann et al. 2009). In our study, object selectivity exhibited the same rostrocaudal trend, although it did not reach significance, probably because we used only 3 objects that the monkey grasped with the same wrist orientation (pronation). Furthermore, previous single-neuron AIP studies reported responses to the visual image of the monkey's own hand during grasping (Sakata et al. 1995; Maeda et al. 2015): here, we provide the first evidence that this effect is more marked in the caudal part of the area, where it is combined with increased sensitivity to OMAs. Our findings therefore suggest that pAIP is more specifically committed to the processing of visual information about self and others' manipulative actions.

Significantly, we were able to directly match our neurophysiological finding of pAIP neuron tuning to self and others' observed actions with the neuroanatomical evidence, obtained in the same animals, of 3 rostral-to-caudally increasing connectivity gradients. Compared with the intermediate and rostral levels of AIP, pAIP displays stronger connections with 1) a set of ventral-stream visual areas that convey information about object features (Sary et al. 1993; Logothetis et al. 1995; Saleem and Tanaka 1996; Koteles et al. 2008; Hong et al. 2016) and observed actions (Perrett et al. 1989; Nelissen et al. 2011), in particular the dynamic body-shape changes defining the action (Vangeneugden et al. 2009); 2) prefrontal cortical areas, including the visual-recipient areas 12r and 46v (Borra et al. 2011; Gerbella et al. 2013), involved in manual action planning (Bruni et al. 2015; Simone et al. 2015) and observation (Raos and Savaki 2017; Simone et al. 2017; Fiave et al. 2018); and 3) oculomotor regions, including area LIP, which may drive spatial attention processes aimed at proactively capturing the goals and targets of others' observed actions (Flanagan and Johansson 2003; Falck-Ytter et al. 2006; Elsner et al. 2013; Maranesi et al. 2013; Lanzilotto et al. 2017). The specificity of this anatomofunctional association is underscored by the absence of a gradient in the connections with dorsal visual and skeletomotor-related areas, as well as by a reversed, caudal-to-rostral gradient in the connections with a large set of mainly parietal somatosensory regions, consistent with previous studies (Lewis and Van Essen 2000; Borra et al. 2008; Baumann et al. 2009). These somatosensory connections provide the rostral sector of AIP with rich information about the state of the body parts (Buneo and Andersen 2012) and the relationship between body state and objects in the outside world, allowing the updating of action planning and control (Tunik et al. 2007; Borra et al. 2017; Gerbella et al. 2017). Of course, all these connectional patterns may also subserve several other functions not directly linked to observed action processing, such as 3D-shape processing (Taira et al. 2000; Durand et al. 2007; Rosenberg et al. 2013), eye-hand coordination (Lehmann and Scherberger 2013; Borra et al. 2014), and spatial attention toward possible targets of planned actions (Bisley and Goldberg 2010). These additional functions do not, however, detract from the present evidence indicating that pAIP plays a major role in OMA coding.

Conclusions

The well-established visuomotor properties and the previously described functional gradients of AIP have supported the view of a gradual transformation of visual shape information into motor signals underlying the planning and execution of grasping actions (Murata et al. 2000; Janssen and Scherberger 2015). Our results suggest that a similar mechanism may link the motor representations of a variety of manipulative actions with visual and contextual information about others' observed actions, with pAIP acting as a hub in this process. To explore the variety of manual actions that characterize the monkey's behavioral repertoire, further studies may have to record neuronal activity during unconstrained behaviors in individual and social contexts. Our results underscore the need for such studies, using wireless recording techniques in freely moving monkeys, to remove the biases introduced in the literature by the focus on a very limited set of actions. These studies may also provide single-neuron evidence for "social affordances" (Loveland 1991): just as observed objects trigger a variety of motor affordances depending on their physical properties (Maranesi et al. 2014), others' observed actions may trigger different possible reactive actions in the observer's brain depending on the social context.

Supplementary Material

Supplementary Data

Funding

This work was supported by the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement 600925 to G.A.O. and by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement 678307 to L.B.

Notes

Conflict of Interest: None declared.

References

1. Abdollahi RO, Jastorff J, Orban GA. 2013. Common and segregated processing of observed actions in human SPL. Cereb Cortex. 23(11):2734–2753.
2. Bakola S, Gamberini M, Passarelli L, Fattori P, Galletti C. 2010. Cortical connections of parietal field PEc in the macaque: linking vision and somatic sensation for the control of limb action. Cereb Cortex. 20(11):2592–2604.
3. Baldwin MKL, Cooke DF, Goldring AB, Krubitzer L. 2017. Representations of fine digit movements in posterior and anterior parietal cortex revealed using long-train intracortical microstimulation in macaque monkeys. Cereb Cortex. 9:1–20.
4. Barz F, Livi A, Lanzilotto M, Maranesi M, Bonini L, Paul O, Ruther P. 2017. Versatile, modular 3D microelectrode arrays for neuronal ensemble recordings: from design to fabrication, assembly, and functional validation in non-human primates. J Neural Eng. 14(3):1741–2552.
5. Barz F, Paul O, Ruther P. 2014. Modular assembly concept for 3D neural probe prototypes offering high freedom of design and alignment precision. Conf Proc IEEE Eng Med Biol Soc. 80(10):6944495.
6. Baumann MA, Fluet MC, Scherberger H. 2009. Context-specific grasp movement representation in the macaque anterior intraparietal area. J Neurosci. 29(20):6436–6448.
7. Belmalih A, Borra E, Contini M, Gerbella M, Rozzi S, Luppino G. 2007. A multiarchitectonic approach for the definition of functionally distinct areas and domains in the monkey frontal lobe. J Anat. 211(2):199–211.
8. Bisley JW, Goldberg ME. 2010. Attention, intention, and priority in the parietal lobe. Annu Rev Neurosci. 33:1–21.
9. Blatt GJ, Andersen RA, Stoner GR. 1990. Visual receptive field organization and cortico-cortical connections of the lateral intraparietal area (area LIP) in the macaque. J Comp Neurol. 299(4):421–445.
10. Bonini L. 2017. The extended mirror neuron network: anatomy, origin, and functions. Neuroscientist. 23(1):56–67.
11. Bonini L, Maranesi M, Livi A, Bruni S, Fogassi L, Holzhammer T, Paul O, Ruther P. 2014. Application of floating silicon-based linear multielectrode arrays for acute recording of single neuron activity in awake behaving monkeys. Biomed Tech. 59(4):273–281.
12. Bonini L, Maranesi M, Livi A, Fogassi L, Rizzolatti G. 2014a. Space-dependent representation of objects and other's action in monkey ventral premotor grasping neurons. J Neurosci. 34(11):4108–4119.
13. Bonini L, Maranesi M, Livi A, Fogassi L, Rizzolatti G. 2014b. Ventral premotor neurons encoding representations of action during self and others' inaction. Curr Biol. 24(14):1611–1614.
14. Bonini L, Rozzi S, Serventi FU, Simone L, Ferrari PF, Fogassi L. 2010. Ventral premotor and inferior parietal cortices make distinct contribution to action organization and intention understanding. Cereb Cortex. 20(6):1372–1385.
15. Borra E, Belmalih A, Calzavara R, Gerbella M, Murata A, Rozzi S, Luppino G. 2008. Cortical connections of the macaque anterior intraparietal (AIP) area. Cereb Cortex. 18(5):1094–1111.
16. Borra E, Gerbella M, Rozzi S, Luppino G. 2017. The macaque lateral grasping network: a neural substrate for generating purposeful hand actions. Neurosci Biobehav Rev. 75:65–90.
17. Borra E, Gerbella M, Rozzi S, Luppino G. 2011. Anatomical evidence for the involvement of the macaque ventrolateral prefrontal area 12r in controlling goal-directed actions. J Neurosci. 31(34):12351–12363.
18. Borra E, Gerbella M, Rozzi S, Tonelli S, Luppino G. 2014. Projections to the superior colliculus from inferior parietal, ventral premotor, and ventrolateral prefrontal areas involved in controlling goal-directed hand actions in the macaque. Cereb Cortex. 24(4):1054–1065.
19. Boussaoud D, Ungerleider LG, Desimone R. 1990. Pathways for motion analysis: cortical connections of the medial superior temporal and fundus of the superior temporal visual areas in the macaque. J Comp Neurol. 296(3):462–495.
20. Bruni S, Gerbella M, Bonini L, Borra E, Coude G, Ferrari PF, Fogassi L, Maranesi M, Roda F, Simone L, et al. 2018. Cortical and subcortical connections of parietal and premotor nodes of the monkey hand mirror neuron network. Brain Struct Funct. 223(4):1713–1729.
21. Bruni S, Giorgetti V, Bonini L, Fogassi L. 2015. Processing and integration of contextual information in monkey ventrolateral prefrontal neurons during selection and execution of goal-directed manipulative actions. J Neurosci. 35(34):11877–11890.
22. Bruni S, Giorgetti V, Fogassi L, Bonini L. 2017. Multimodal encoding of goal-directed actions in monkey ventral premotor grasping neurons. Cereb Cortex. 27(1):522–533.
23. Buneo CA, Andersen RA. 2012. Integration of target and hand position signals in the posterior parietal cortex: effects of workspace and hand vision. J Neurophysiol. 108(1):187–199.
24. Caggiano V, Fogassi L, Rizzolatti G, Thier P, Casile A. 2009. Mirror neurons differentially encode the peripersonal and extrapersonal space of monkeys. Science. 324(5925):403–406.
25. Carmichael ST, Price JL. 1994. Architectonic subdivision of the orbital and medial prefrontal cortex in the macaque monkey. J Comp Neurol. 346(3):366–402.
26. Chung JE, Magland JF, Barnett AH, Tolosa VM, Tooker AC, Lee KY, Shah KG, Felix SH, Frank LM, Greengard LF. 2017. A fully automated approach to spike sorting. Neuron. 95(6):1381–1394.
27. Corbo D, Orban GA. 2017. Observing others speak or sing activates Spt and neighboring parietal cortex. J Cogn Neurosci. 29(6):1002–1021.
28. Crowe DA, Averbeck BB, Chafee MV. 2010. Rapid sequences of population activity patterns dynamically encode task-critical spatial information in parietal cortex. J Neurosci. 30(35):11640–11653.
29. di Pellegrino G, Fadiga L, Fogassi L, Gallese V, Rizzolatti G. 1992. Understanding motor events: a neurophysiological study. Exp Brain Res. 91(1):176–180.
30. Durand JB, Nelissen K, Joly O, Wardak C, Todd JT, Norman JF, Janssen P, Vanduffel W, Orban GA. 2007. Anterior regions of monkey parietal cortex process visual 3D shape. Neuron. 55(3):493–505.
31. Elsner C, D'Ausilio A, Gredeback G, Falck-Ytter T, Fadiga L. 2013. The motor cortex is causally related to predictive eye movements during action observation. Neuropsychologia. 51(3):488–492.
32. Falck-Ytter T, Gredeback G, von Hofsten C. 2006. Infants predict other people's action goals. Nat Neurosci. 9(7):878–879.
33. Ferrari PF, Gallese V, Rizzolatti G, Fogassi L. 2003. Mirror neurons responding to the observation of ingestive and communicative mouth actions in the monkey ventral premotor cortex. Eur J Neurosci. 17(8):1703–1714.
34. Ferri S, Rizzolatti G, Orban GA. 2015. The organization of the posterior parietal cortex devoted to upper limb actions: an fMRI study. Hum Brain Mapp. 36(10):3845–3866.
35. Fiave PA, Sharma S, Jastorff J, Nelissen K. 2018. Investigating common coding of observed and executed actions in the monkey brain using cross-modal multi-variate fMRI classification. Neuroimage. 178:306–317.
36. Filimon F. 2010. Human cortical control of hand movements: parietofrontal networks for reaching, grasping, and pointing. Neuroscientist. 16(4):388–407.
37. Flanagan JR, Johansson RS. 2003. Action plans used in action observation. Nature. 424(6950):769–771.
38. Fogassi L, Ferrari PF, Gesierich B, Rozzi S, Chersi F, Rizzolatti G. 2005. Parietal lobe: from action organization to intention understanding. Science. 308(5722):662–667.
39. Frey S, Mackey S, Petrides M. 2014. Cortico-cortical connections of areas 44 and 45B in the macaque monkey. Brain Lang. 131:36–55.
40. Gallese V, Fadiga L, Fogassi L, Rizzolatti G. 1996. Action recognition in the premotor cortex. Brain. 119(2):593–609.
41. Galletti C, Gamberini M, Kutz DF, Baldinotti I, Fattori P. 2005. The relationship between V6 and PO in macaque extrastriate cortex. Eur J Neurosci. 21(4):959–970.
42. Gamberini M, Passarelli L, Fattori P, Zucchelli M, Bakola S, Luppino G, Galletti C. 2009. Cortical connections of the visuomotor parietooccipital area V6Ad of the macaque monkey. J Comp Neurol. 513(6):622–642.
43. Gerbella M, Belmalih A, Borra E, Rozzi S, Luppino G. 2007. Multimodal architectonic subdivision of the caudal ventrolateral prefrontal cortex of the macaque monkey. Brain Struct Funct. 212(3–4):269–301.
44. Gerbella M, Belmalih A, Borra E, Rozzi S, Luppino G. 2010. Cortical connections of the macaque caudal ventrolateral prefrontal areas 45A and 45B. Cereb Cortex. 20(1):141–168.
45. Gerbella M, Borra E, Tonelli S, Rozzi S, Luppino G. 2013. Connectional heterogeneity of the ventral part of the macaque area 46. Cereb Cortex. 23(4):967–987.
46. Gerbella M, Rozzi S, Rizzolatti G. 2017. The extended object-grasping network. Exp Brain Res. 235(10):2903–2916.
47. Grafton ST. 2010. The cognitive neuroscience of prehension: recent developments. Exp Brain Res. 204(4):475–491.
48. Graziano MSA. 2016. Ethological action maps: a paradigm shift for the motor cortex. Trends Cogn Sci. 20(2):121–132.
49. Grefkes C, Weiss PH, Zilles K, Fink GR. 2002. Crossmodal processing of object features in human anterior intraparietal cortex: an fMRI study implies equivalencies between humans and monkeys. Neuron. 35(1):173–184.
50. Gregoriou GG, Borra E, Matelli M, Luppino G. 2006. Architectonic organization of the inferior parietal convexity of the macaque monkey. J Comp Neurol. 496(3):422–451.
51. Hashimoto T, Ueno K, Ogawa A, Asamizuya T, Suzuki C, Cheng K, Tanaka M, Taoka M, Iwamura Y, Suwa G, et al. 2013. Hand before foot? Cortical somatotopy suggests manual dexterity is primitive and evolved independently of bipedalism. Philos Trans R Soc Lond B Biol Sci. 368(1630):19.
52. Herwik S, Paul O, Ruther P. 2011. Ultrathin silicon chips of arbitrary shape by etching before grinding. J Microelectromech Syst. 20(4):791–793.
53. Hong H, Yamins DL, Majaj NJ, DiCarlo JJ. 2016. Explicit information for category-orthogonal object properties increases along the ventral stream. Nat Neurosci. 19(4):613–622.
54. Janssen P, Scherberger H. 2015. Visual guidance in control of grasping. Annu Rev Neurosci. 38:69–86.
55. Jastorff J, Begliomini C, Fabbri-Destro M, Rizzolatti G, Orban GA. 2010. Coding observed motor acts: different organizational principles in the parietal and premotor cortex of humans. J Neurophysiol. 104(1):128–140.
56. Kaminski J, Sullivan S, Chung JM, Ross IB, Mamelak AN, Rutishauser U. 2017. Persistently active neurons in human medial frontal and medial temporal lobe support working memory. Nat Neurosci. 20(4):590–601.
57. Kobayashi Y, Amaral DG. 2000. Macaque monkey retrosplenial cortex: I. Three-dimensional and cytoarchitectonic organization. J Comp Neurol. 426(3):339–365.
58. Koteles K, De Maziere PA, Van Hulle M, Orban GA, Vogels R. 2008. Coding of images of materials by macaque inferior temporal cortical neurons. Eur J Neurosci. 27(2):466–482.
59. Lanzilotto M, Gerbella M, Perciavalle V, Lucchetti C. 2017. Neuronal encoding of self and others' head rotation in the macaque dorsal prefrontal cortex. Sci Rep. 7(1):017–08936.
60. Lanzilotto M, Livi A, Maranesi M, Gerbella M, Barz F, Ruther P, Fogassi L, Rizzolatti G, Bonini L. 2016. Extending the cortical grasping network: pre-supplementary motor neuron activity during vision and grasping of objects. Cereb Cortex. 26(12):4435–4449.
61. Lehmann SJ, Scherberger H. 2013. Reach and gaze representations in macaque parietal and premotor grasp areas. J Neurosci. 33(16):7038–7049.
62. Lewis JW, Van Essen DC. 2000. Mapping of architectonic subdivisions in the macaque monkey, with emphasis on parieto-occipital cortex. J Comp Neurol. 428(1):79–111.
63. Logothetis NK, Pauls J, Poggio T. 1995. Shape representation in the inferior temporal cortex of monkeys. Curr Biol. 5(5):552–563.
64. Loveland KA. 1991. Social affordances and interaction II: autism and the affordances of the human environment. Ecol Psychol. 3(2):99–119.
65. Luppino G, Ben Hamed S, Gamberini M, Matelli M, Galletti C. 2005. Occipital (V6) and parietal (V6A) areas in the anterior wall of the parieto-occipital sulcus of the macaque: a cytoarchitectonic study. Eur J Neurosci. 21(11):3056–3076.
66. Maeda K, Ishida H, Nakajima K, Inase M, Murata A. 2015. Functional properties of parietal hand manipulation-related neurons and mirror neurons responding to vision of own hand action. J Cogn Neurosci. 27(3):560–572.
67. Maranesi M, Bonini L, Fogassi L. 2014. Cortical processing of object affordances for self and others' action. Front Psychol. 5:538.
68. Maranesi M, Livi A, Bonini L. 2015. Processing of own hand visual feedback during object grasping in ventral premotor mirror neurons. J Neurosci. 35(34):11824–11829.
69. Maranesi M, Ugolotti Serventi F, Bruni S, Bimbi M, Fogassi L, Bonini L. 2013. Monkey gaze behaviour during action observation and its relationship to mirror neuron activity. Eur J Neurosci. 38(12):3721–3730.
70. Matelli M, Luppino G, Rizzolatti G. 1991. Architecture of superior and mesial area 6 and the adjacent cingulate cortex in the macaque monkey. J Comp Neurol. 311(4):445–462.
71. Mazurek KA, Rouse AG, Schieber MH. 2018. Mirror neuron populations represent sequences of behavioral epochs during both execution and observation. J Neurosci. 38(18):4441–4455.
72. Mendoza-Halliday D, Martinez-Trujillo JC. 2017. Neuronal population coding of perceived and memorized visual features in the lateral prefrontal cortex. Nat Commun. 8:15471.
73. Meyers EM. 2013. The neural decoding toolbox. Front Neuroinform. 7:8.
74. Meyers EM. 2018. Dynamic population coding and its relationship to working memory. J Neurophysiol. 12:10.
75. Meyers EM, Freedman DJ, Kreiman G, Miller EK, Poggio T. 2008. Dynamic population coding of category information in inferior temporal and prefrontal cortex. J Neurophysiol. 100(3):1407–1419.
76. Meyers EM, Qi XL, Constantinidis C. 2012. Incorporation of new information into prefrontal cortical activity after learning working memory tasks. Proc Natl Acad Sci USA. 109(12):4651–4656.
77. Morecraft RJ, Cipolloni PB, Stilwell-Morecraft KS, Gedney MT, Pandya DN. 2004. Cytoarchitecture and cortical connections of the posterior cingulate and adjacent somatosensory fields in the rhesus monkey. J Comp Neurol. 469(1):37–69.
78. Murata A, Gallese V, Luppino G, Kaseda M, Sakata H. 2000. Selectivity for the shape, size, and orientation of objects for grasping in neurons of monkey parietal area AIP. J Neurophysiol. 83(5):2580–2601.
79. Nakamura H, Kuroda T, Wakita M, Kusunoki M, Kato A, Mikami A, Sakata H, Itoh K. 2001. From three-dimensional space vision to prehensile hand movements: the lateral intraparietal area links the area V3A and the anterior intraparietal area in macaques. J Neurosci. 21(20):8174–8187.
80. Nelissen K, Borra E, Gerbella M, Rozzi S, Luppino G, Vanduffel W, Rizzolatti G, Orban GA. 2011. Action observation circuits in the macaque monkey cortex. J Neurosci. 31(10):3743–3756.
81. Nelissen K, Vanduffel W. 2011. Grasping-related functional magnetic resonance imaging brain responses in the macaque monkey. J Neurosci. 31(22):8220–8229.
82. Nelissen K, Vanduffel W. 2017. Action categorization in rhesus monkeys: discrimination of grasping from non-grasping manual motor acts. Sci Rep. 7(1):017–15378.
83. Orban GA. 2016. Functional definitions of parietal areas in human and non-human primates. Proc Biol Sci. 283:1828.
84. Orban GA, Claeys K, Nelissen K, Smans R, Sunaert S, Todd JT, Wardak C, Durand JB, Vanduffel W. 2006. Mapping the parietal cortex of human and non-human primates. Neuropsychologia. 44(13):2647–2667.
85. Pandya DN, Seltzer B. 1982. Intrinsic connections and architectonics of posterior parietal cortex in the rhesus monkey. J Comp Neurol. 204(2):196–210.
86. Pani P, Theys T, Romero MC, Janssen P. 2014. Grasping execution and grasping observation activity of single neurons in the macaque anterior intraparietal area. J Cogn Neurosci. 26(10):2342–2355.
87. Papadourakis V, Raos V. 2017. Evidence for the representation of movement kinematics in the discharge of F5 mirror neurons during the observation of transitive and intransitive actions. J Neurophysiol. 118(6):3215–3229.
88. Papadourakis V, Raos V. 2018. Neurons in the macaque dorsal premotor cortex respond to execution and observation of actions. Cereb Cortex. 7:5232540.
89. Passarelli L, Rosa MGP, Bakola S, Gamberini M, Worthy KH, Fattori P, Galletti C. 2018. Uniformity and diversity of cortical projections to precuneate areas in the macaque monkey: what defines area PGm? Cereb Cortex. 28(5):1700–1717.
90. Passarelli L, Rosa MG, Gamberini M, Bakola S, Burman KJ, Fattori P, Galletti C. 2011. Cortical connections of area V6Av in the macaque: a visual-input node to the eye/hand coordination system. J Neurosci. 31(5):1790–1801.
91. Perrett DI, Harries MH, Bevan R, Thomas S, Benson PJ, Mistlin AJ, Chitty AJ, Hietanen JK, Ortega JE. 1989. Frameworks of analysis for the neural representation of animate objects and actions. J Exp Biol. 146(1):87–113.
92. Platonov A, Orban GA. 2016. Action observation: the less-explored part of higher-order vision. Sci Rep. 6:36742.
93. Premereur E, Van Dromme IC, Romero MC, Vanduffel W, Janssen P. 2015. Effective connectivity of depth-structure-selective patches in the lateral bank of the macaque intraparietal sulcus. PLoS Biol. 13:2.
94. Raos V, Savaki HE. 2017. The role of the prefrontal cortex in action perception. Cereb Cortex. 27(10):4677–4690.
95. Rizzolatti G, Fadiga L, Gallese V, Fogassi L. 1996. Premotor cortex and the recognition of motor actions. Cogn Brain Res. 3(2):131–141.
96. Rosenberg A, Cowan NJ, Angelaki DE. 2013. The visual representation of 3D object orientation in parietal cortex. J Neurosci. 33(49):19352–19361.
97. Rozzi S, Calzavara R, Belmalih A, Borra E, Gregoriou GG, Matelli M, Luppino G. 2006. Cortical connections of the inferior parietal cortical convexity of the macaque monkey. Cereb Cortex. 16(10):1389–1417.
98. Rozzi S, Ferrari PF, Bonini L, Rizzolatti G, Fogassi L. 2008. Functional organization of inferior parietal lobule convexity in the macaque monkey: electrophysiological characterization of motor, sensory and mirror responses and their correlation with cytoarchitectonic areas. Eur J Neurosci. 28(8):1569–1588.
99. Rozzi S, Fogassi L. 2017. Neural coding for action execution and action observation in the prefrontal cortex and its role in the organization of socially driven behavior. Front Neurosci. 11:492.
100. Rutishauser U, Ye S, Koroma M, Tudusciuc O, Ross IB, Chung JM, Mamelak AN. 2015. Representation of retrieval confidence by single neurons in the human medial temporal lobe. Nat Neurosci. 18(7):1041–1050.
101. Sakata H, Taira M, Murata A, Mine S. 1995. Neural mechanisms of visual guidance of hand action in the parietal cortex of the monkey. Cereb Cortex. 5(5):429–438.
102. Saleem KS, Tanaka K. 1996. Divergent projections from the anterior inferotemporal area TE to the perirhinal and entorhinal cortices in the macaque monkey. J Neurosci. 16(15):4757–4775.
103. Sary G, Vogels R, Orban GA. 1993. Cue-invariant shape selectivity of macaque inferior temporal neurons. Science. 260(5110):995–997.
104. Shmuelof L, Zohary E. 2005. Dissociation between ventral and dorsal fMRI activation during object and action recognition. Neuron. 47(3):457–470.
105. Shmuelof L, Zohary E. 2006. A mirror representation of others' actions in the human anterior parietal cortex. J Neurosci. 26(38):9736–9742.
106. Shmuelof L, Zohary E. 2008. Mirror-image representation of action in the anterior parietal cortex. Nat Neurosci. 11(11):1267–1269.
107. Simone L, Bimbi M, Roda F, Fogassi L, Rozzi S. 2017. Action observation activates neurons of the monkey ventrolateral prefrontal cortex. Sci Rep. 7:44378.
108. Simone L, Rozzi S, Bimbi M, Fogassi L. 2015. Movement-related activity during goal-directed hand actions in the monkey ventrolateral prefrontal cortex. Eur J Neurosci. 42(11):2882–2894.
109. Singer JM, Sheinberg DL. 2010. Temporal cortex neurons encode articulated actions as slow sequences of integrated poses. J Neurosci. 30(8):3133–3145.
110. Sliwa J, Freiwald WA. 2017. A dedicated network for social interaction processing in the primate brain. Science. 356(6339):745–749.
111. Taira M, Tsutsui KI, Jiang M, Yara K, Sakata H. 2000. Parietal neurons represent surface orientation from the gradient of binocular disparity. J Neurophysiol. 83(5):3140–3146.
112. Theusner S, de Lussanet M, Lappe M. 2014. Action recognition by motion detection in posture space. J Neurosci. 34(3):909–921.
113. Tia B, Takemi M, Kosugi A, Castagnola E, Ansaldo A, Nakamura T, Ricci D, Ushiba J, Fadiga L, Iriki A. 2017. Cortical control of object-specific grasp relies on adjustments of both activity and effective connectivity: a common marmoset study. J Physiol. 595(23):7203–7221.
114. Tunik E, Rice NJ, Hamilton A, Grafton ST. 2007. Beyond grasping: representation of action in human anterior intraparietal sulcus. Neuroimage. 36(2):28.
115. Van Essen DC, Drury HA, Dickson J, Harwell J, Hanlon D, Anderson CH. 2001. An integrated software suite for surface-based analyses of cerebral cortex. J Am Med Inform Assoc. 8(5):443–459.
116. Vangeneugden J, De Maziere PA, Van Hulle MM, Jaeggli T, Van Gool L, Vogels R. 2011. Distinct mechanisms for coding of visual actions in macaque temporal cortex. J Neurosci. 31(2):385–401.
117. Vangeneugden J, Pollick F, Vogels R. 2009. Functional differentiation of macaque visual temporal cortical neurons using a parametric action space. Cereb Cortex. 19(3):593–611.
118. Verhoef BE, Michelet P, Vogels R, Janssen P. 2015. Choice-related activity in the anterior intraparietal area during 3-D structure categorization. J Cogn Neurosci. 27(6):1104–1115.
119. Vigneswaran G, Philipp R, Lemon RN, Kraskov A. 2013. M1 corticospinal mirror neurons and their role in movement suppression during action observation. Curr Biol. 23(3):236–243.
120. Vogt BA, Vogt L, Farber NB, Bush G. 2005. Architecture and neurocytology of monkey cingulate gyrus. J Comp Neurol. 485(3):218–239.
121. Zhang Y, Meyers EM, Bichot NP, Serre T, Poggio TA, Desimone R. 2011. Object decoding with attention in inferior temporal cortex. Proc Natl Acad Sci USA. 108(21):8850–8855.
