Proceedings of the National Academy of Sciences of the United States of America
2024 Jul 24;121(31):e2403445121. doi: 10.1073/pnas.2403445121

Dynamic spatial representation of self and others’ actions in the macaque frontal cortex

Taihei Ninomiya a,b,1, Masaki Isoda a,b,1
PMCID: PMC11295024  PMID: 39047041

Significance

Motor actions are a key channel for interactions with the world, including other individuals. Although actions are ubiquitous in everyday social life, how their spatial locations are encoded in the brain is poorly understood. Using a turn-taking choice task for monkeys facing different types of partners (real monkey, filmed monkey, and filmed object), we show that spatial representation of actions can be dynamically changed at the level of single neurons depending on social contexts and agents of action. Our work suggests that two frontal cortical nodes in the primate social brain are involved in coordinate transformation for embodied spatial cognition, such as imitation and perspective-taking, which are known to be impaired in various clinical conditions including autism spectrum disorder.

Keywords: ventral premotor cortex, medial prefrontal cortex, embodied spatial cognition, single neurons

Abstract

Modulation of neuronal firing rates by the spatial locations of physical objects is a widespread phenomenon in the brain. However, little is known about how neuronal responses to the actions of biological entities are spatially tuned and whether such spatially tuned responses are affected by social contexts. These issues are of key importance for understanding the neural basis of embodied social cognition, such as imitation and perspective-taking. Here, we show that spatial representation of actions can be dynamically changed depending on others’ social relevance and agents of action. Monkeys performed a turn-taking choice task with a real monkey partner sitting face-to-face or a filmed partner in prerecorded videos. Three rectangular buttons (left, center, and right) were positioned in front of the subject and partner as their choice targets. We recorded from single neurons in two frontal nodes in the social brain, the ventral premotor cortex (PMv) and the medial prefrontal cortex (MPFC). When the partner was filmed rather than real, spatial preference for partner-actions was markedly diminished in MPFC, but not PMv, neurons. This social context-dependent modulation in the MPFC was also evident for self-actions. Strikingly, a subset of neurons in both areas switched their spatial preference between self-actions and partner-actions in a diametrically opposite manner. This observation suggests that these cortical areas are associated with coordinate transformation in ways consistent with an actor-centered perspective-taking coding scheme. The PMv may subserve such functions in context-independent manners, whereas the MPFC may do so primarily in social contexts.


Motor actions are crucial for interactions with the world, including other agents. The actions of others provide useful information for inferring their intentions and goals, i.e., their mental states. Accumulating evidence suggests that single-neuron substrates of others’ actions are widespread in the human and nonhuman primate brains, ranging from the cerebral cortex to subcortical nuclei (1–3). Among these, the most intensively studied regions include the ventral premotor cortex (PMv) and the medial prefrontal cortex (MPFC) in the frontal cortex (4, 5). Single neurons in the PMv and MPFC encode observed, as well as executed, actions in several representation formats (6, 7). Thus, these cortical areas seem to be well suited for linking the monitoring of others’ actions to the planning of one’s own actions in context-dependent manners. However, the functional similarities and differences between the two areas remain a matter of debate (8).

In the PMv and MPFC, responses of single neurons to others’ actions are selective for, or influenced by, various contextual factors. Such factors include the content of observed actions and its congruency with executed actions (6, 9), the necessity of withholding one’s own actions (10–12), distance from the observer in terms of peripersonal vs. extrapersonal space (13, 14), viewpoint of observed actions in terms of lateral vs. subjective perspective (13, 14), the observer’s predicted outcome value (15, 16), social relevance in terms of real vs. filmed actions (8), and spatial location of observed actions and targets in terms of laterality viewed from a neuron under study (7, 17, 18). These factors provide important clues for understanding the goal of observed actions in a given social context. However, the spatial selectivity (laterality) of other-action-responsive neurons has been only briefly described (7, 17, 18). Thus, little is known about how the spatial locations of others’ actions are represented in the brain and whether such spatially tuned neuronal responses are affected by social contexts. These issues are of key importance in social neuroscience and cognitive science because the spatial coding principles of others’ actions are closely associated with embodied social cognition, such as imitation and spatial perspective-taking (19–22). A central idea of embodied social cognition is that social cognition is tightly linked to an agent’s physical body and its interactions with the environment, including others (23). According to this notion, social cognition is an inherently interactive, embodied practice (23). Indeed, people perceive what others intend and feel from their bodily movements, facial gestures, eye direction, and so on (24).

To address these fundamental issues, we examined the framework of spatial coding in the PMv and MPFC while macaque monkeys made a reaching movement toward one of three targets at different locations (Fig. 1A). To create and manipulate social contexts, the monkeys performed the task in a turn-taking manner with one of three partners facing the subject. The three partners included a real monkey sitting directly in front of the subject [real agent (RA) condition], a filmed monkey replayed on a monitor (FM condition), and a filmed object (wooden stick) replayed on the monitor (FO condition). The FM condition lacked real-time social interactions, yet the partner’s appearance and size were the same as those in the RA condition. The partner in the FO condition was a nonbiological entity with no body coordinate system. Using this experimental procedure, we show that many neurons in the MPFC, but not PMv, are rendered spatially untuned in the filmed partner conditions. This response modulation is evident for both self-actions and partner-actions. Notably, a subset of neurons in both the PMv and MPFC dynamically change their spatial preference between self-actions and partner-actions in ways consistent with an actor-centered perspective-taking coding scheme (e.g., preference for the contralateral reach during self-action but for the ipsilateral reach during partner-action in the face-to-face condition). These findings delineate coding properties of agent-related neurons in the frontal cortical nodes in the social brain.

Fig. 1.

Behavioral task and gaze behavior. (A) Spatial configuration of three target buttons. B1, B2, and B3 are located to the Left (L), Center (C), and Right (R) as seen from a recorded monkey (M1). M2 denotes M1’s partner. Ipsi and Contra indicate ipsilateral and contralateral to a neuron under study, respectively. (B) Top, sequence of events in a single trial of the role-reversal choice task, in which M1 was the observer and M2 was the actor. Bottom, sequence of role alternation and block change. The actor and observer alternated their roles every three trials. The correct target position (B1, B2, or B3) was changed every 11 to 17 trials without prior notice. RT, response time. (C) Spatial arrangements of the target buttons from M1’s viewpoint. (D) A single session example of M1’s gaze positions in self-correct and partner-correct trials. Each heatmap was plotted separately depending on the location of the correct target. Red square, region of interest set for the correct target. (E) Comparisons of M1’s gaze durations between the correct region of interest and the five incorrect regions of interest. Mean ± SEM. **P < 0.01; two-tailed Welch’s t-test.

Results

Neurons Exhibiting Spatial Preference for Self-Action and Other-Action.

Two monkeys (Macaca fuscata; A and B, both designated as M1) were trained to perform a role-reversal choice task (Fig. 1) while directly facing another monkey (designated as M2). M1 and M2 alternated the roles of “actor” and “observer” every three trials. Three target buttons were placed in front of each monkey at the left, center, and right from that monkey’s viewpoint, along with a start button on the near side. Each trial started when both M1 and M2 pressed their start buttons. After 0.7 to 1.3 s, the three target buttons on the actor’s side were illuminated, and the actor had to press one of them within 3 s (Fig. 1B, top). The observer was required to do nothing but hold the start button until the end of the trial. Both actor and observer were rewarded with a drop of water when the actor chose the correct target. The correct target position (i.e., reward-associated target) was fixed for 11 to 17 consecutive trials and then was changed without prior notice (Fig. 1B, bottom).
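The block and role structure described above can be sketched in a few lines of Python. This is purely illustrative: the function name and the randomization details are our assumptions, not the authors' task code.

```python
import random

def make_schedule(n_trials, seed=0):
    """Sketch of the role-reversal schedule: actor/observer roles swap
    every 3 trials; the rewarded target (B1/B2/B3) is fixed for a block
    of 11-17 trials, then changes without prior notice."""
    rng = random.Random(seed)
    roles, targets = [], []
    correct = rng.choice(["B1", "B2", "B3"])
    block_len = rng.randint(11, 17)
    trials_in_block = 0
    for t in range(n_trials):
        # roles alternate in groups of three trials
        roles.append("M1_actor" if (t // 3) % 2 == 0 else "M2_actor")
        if trials_in_block == block_len:
            # block change: pick a different rewarded target
            correct = rng.choice([b for b in ("B1", "B2", "B3") if b != correct])
            trials_in_block = 0
            block_len = rng.randint(11, 17)
        targets.append(correct)
        trials_in_block += 1
    return roles, targets
```

Running `make_schedule(60)` yields a role sequence that flips every three trials and target blocks whose lengths all fall in the 11-to-17 range.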

In this RA condition, we recorded from 615 and 527 single neurons in the PMv and MPFC, respectively, in the left hemisphere using multicontact probes with no sampling bias (Fig. 2A). Of them, 301 PMv neurons and 226 MPFC neurons were judged as agent-related. Consistent with our previous work (25), these agent-related neurons exhibited a statistically significant preference for self-action (“self type”), for partner-action (“partner type”), or for neither agent in particular (“mirror type”) (Table 1; see also Materials and Methods for the definition of each neuronal type). We then attempted to characterize the spatial preference of each agent-related neuron and its dependence on the social relevance of the partner. The activity was analyzed for both self-action and partner-action regardless of neuronal type because spatial preference could in theory exist for the action of both the preferred and less-preferred agents (e.g., self-action for a partner-type neuron).

Fig. 2.

Examples of neurons with spatial preference. (A) Recording sites for the MPFC (a) and PMv (b). The approximate anteroposterior levels of the sections are indicated in the lateral view of the left hemisphere. (B and C) Spiking activities of PMv (B) and MPFC (C) neurons. Rasters and spike density functions are aligned to the time of choice (i.e., target button press; vertical line at time 0). Red and blue dots indicate the times of individual action potentials for self-correct and partner-correct trials, respectively. Black dots indicate the target onset. Different shades of color indicate different target locations.

Table 1.

Number of neurons showing spatial preference in the RA condition

                                    Spatial preference
                              Self-action              Partner-action
Area   Agent preference  n    Contra  Ipsi  Total      Contra  Ipsi  Total
PMv    Self              105  15      11    26         19      10    29
       Mirror            100  10      7     17         10      7     17
       Partner           96   16      17    33         11      18    29
       Total             301  41      35    76         40      35    75
MPFC   Self              63   7       8     15         9       5     14
       Mirror            68   7       7     14         7       6     13
       Partner           95   12      8     20         14      13    27
       Total             226  26      23    49         30      24    54

A sizable number of neurons showed a statistically significant spatial preference during the peri-action period (−400 to +200 ms relative to the button press), with comparable proportions between the two frontal areas (PMv, n = 124, 41%; MPFC, n = 79, 35%; P = 0.15; chi-square test; see Materials and Methods for details of the criteria). In an example of a mirror-type PMv neuron (Fig. 2B), activities associated with the self-action were not systematically different depending on the target locations (ρ = −0.15, P = 0.77; Spearman correlation test), whereas activities associated with the partner-action exhibited a contralateral preference (B3 > B2 > B1; ρ = 2.07, P = 5.7 × 10⁻⁵; Spearman correlation test). In an example of an MPFC partner-type neuron (Fig. 2C), activities associated with the self-action were not systematically different depending on the target locations (ρ = −0.26, P = 0.44; Spearman correlation test), whereas activities associated with the partner-action exhibited an ipsilateral preference (B1 > B2 > B3; ρ = −18.61, P = 1.2 × 10⁻¹⁴; Spearman correlation test). Overall, almost 20% of neurons in each agent-related type showed spatial preference for self-generated and/or partner-generated actions (Table 1). The numbers of neurons exhibiting a contralateral preference and an ipsilateral preference were not significantly different in either area, albeit with a slight contralateral predominance (Table 1; contralateral vs. ipsilateral; PMv self-action, n = 41 vs. 35, P = 0.58; PMv partner-action, n = 40 vs. 35, P = 0.68; MPFC self-action, n = 26 vs. 23, P = 0.67; MPFC partner-action, n = 30 vs. 24, P = 0.41; chi-square goodness-of-fit test). The lack of a clear contralateral bias in the PMv was consistent with previous work (26).

Activity of spatially tuned neurons in the PMv was significantly larger when the partner made a correct action toward the preferred target location than when the partner made an incorrect action toward the same preferred target (P = 4.0 × 10⁻⁷, paired t-test). Such a difference in activity was not observed in spatially tuned MPFC neurons (P = 0.15, paired t-test), suggesting that the impact of reward expectation, or the correctness of others’ actions, on spatially tuned neurons differed between the two areas.

The Spearman correlation approach employed above assumes that neuronal responses vary monotonically with target location. To examine whether this assumption was necessary for the observed spatial preference, we applied an alternative procedure, i.e., one-way ANOVA (P < 0.05) followed by the Tukey–Kramer post hoc test (P < 0.05), to the activities of the same set of neurons using target location as the main factor. This procedure yielded comparable results regarding contralateral versus ipsilateral preference in each area (SI Appendix, Table S1). Thus, the neuronal populations captured by the two independent approaches are largely consistent. In the following sections, we report two unusual response properties related to space coding in these frontal cortical areas.
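The two tuning criteria can be sketched roughly as follows with SciPy. This is illustrative only; the function names, data layout, and thresholds (alpha = 0.05, as in the text) are our assumptions, not the authors' analysis code.

```python
import numpy as np
from scipy import stats

def spearman_tuning(rates, target_pos, alpha=0.05):
    """Monotonic criterion: Spearman correlation between single-trial
    peri-action firing rates and target position coded ordinally
    (1 = ipsilateral ... 3 = contralateral)."""
    rho, p = stats.spearmanr(target_pos, rates)
    if p >= alpha:
        return None
    return "contra" if rho > 0 else "ipsi"

def anova_tuning(rates, target_pos, alpha=0.05):
    """Alternative criterion: one-way ANOVA over the three target
    locations, followed by Tukey-Kramer post hoc comparisons."""
    groups = [rates[target_pos == k] for k in (1, 2, 3)]
    if stats.f_oneway(*groups).pvalue >= alpha:
        return False
    post = stats.tukey_hsd(*groups)
    iu = np.triu_indices(3, 1)  # the three pairwise comparisons
    return bool((post.pvalue[iu] < alpha).any())
```

Both criteria agree on strongly monotonic responses; the ANOVA route additionally admits nonmonotonic tuning profiles, which is why the text treats it as a check on the monotonicity assumption.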

Effect of the Partner’s Social Relevance on Spatial Preference.

The first property concerns the dependence of spatial preference on the social relevance of the partner. We examined the extent to which the spatial preference of individual neurons was consistent between task conditions in which the social relevance of the partner was manipulated. For this purpose, we devised FM and FO conditions (Fig. 3A). Similar to the RA condition, the partner in the FM condition was a life-sized monkey but its task performance was prerecorded and replayed on a monitor in front of M1. Therefore, the RA and FM partners had the same biological appearance with the same body coordinate system; however, they had different social relevance because only the RA partner was a live agent. The partner in the FO condition was a wooden stick (nonbiological object) making a similar target choice replayed on the monitor. The FO partner was considered to have the lowest social relevance with no body coordinate system. Both M1 monkeys showed high performance rates across the three partner conditions (monkey A, RA 83%, n = 47, FM 81%, n = 44, FO 77%, n = 45, F(2,133) = 13.8, P = 3.7 × 10⁻⁶; monkey B, RA 78%, n = 64, FM 81%, n = 70, FO 79%, n = 71, F(2,202) = 2.5, P = 0.084; one-way ANOVA).

Fig. 3.

Population analysis of spatially tuned neurons. (A) Three types of partners with different social relevance. (B and C) Z-score-normalized population-averaged spike density functions of PMv neurons (B) and MPFC neurons (C) with spatial preference for the self-action (red) and partner-action (blue). In these plots, activities were averaged after flipping the spatial alignment and sign for ipsilateral-preferred neurons and inhibitory neurons, respectively (Materials and Methods for details). P, preferred; NP, nonpreferred. Insets denote proportions of spatially tuned neurons to agent-related neurons. Other conventions are as in Fig. 2. (D) Regression slopes of the neuron in Fig. 2B derived from Spearman correlation tests (self, ρ = −0.15, P = 0.77; partner, ρ = 2.07, P = 5.7 × 10⁻⁵). Each dot represents firing rates during the peri-action period in a single trial. (E) Distributions of response slopes for the partner-action (ordinate) and the self-action (abscissa) derived from individual PMv neurons (Left) and MPFC neurons (Right).

To our surprise, the social relevance of the partner affected the spatial preference in MPFC, but not PMv, neurons. Specifically, the proportions of the spatially tuned MPFC neurons were significantly lower in the two filmed conditions than in the RA condition (Fig. 3C, inset and Table 2 and SI Appendix, Tables S2 and S3). Notably, this phenomenon was observed for not only the partner-action (RA 24%, FM 12%, FO 11%; RA vs. FM, P = 2.7 × 10⁻³; RA vs. FO, P = 2.4 × 10⁻³; FM vs. FO, P = 0.81; chi-square test), but also the self-action (RA 22%, FM 16%, FO 9%; RA vs. FM, P = 0.12; RA vs. FO, P = 1.6 × 10⁻³; FM vs. FO, P = 0.086; chi-square test). In contrast, the proportions of PMv neurons exhibiting spatial preference were not significantly different between the real and filmed partner conditions for either the partner-action (RA vs. FM, P = 0.88; RA vs. FO, P = 0.41; FM vs. FO, P = 0.48; chi-square test) or the self-action (RA vs. FM, P = 0.16; RA vs. FO, P = 0.64; FM vs. FO, P = 0.35; chi-square test; Fig. 3B, inset and Table 2 and SI Appendix, Tables S2 and S3). In the MPFC, the influence of the partner’s social relevance on spatial preference was observed in all types of agent-related neurons (Table 1 and SI Appendix, Tables S2 and S3).

Table 2.

Number of spatially tuned neurons according to agent selectivity across the partner conditions

                                        Spatial preference
Condition  Area   Total agent-related   Total      Self-only  Partner-only  Congruent  Incongruent
RA         PMv    301                   124 (41)   49         48            19         8
           MPFC   226                   79 (35)    25         30            15         9
FM         PMv    344                   123 (36)   39         52            23         9
           MPFC   174                   42 (24)    21         15            3          3
FO         PMv    330                   127 (39)   54         49            20         4
           MPFC   143                   28 (20)    12         15            1          0

Values in parentheses denote the percentages of total agent-related neurons.
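The condition-wise comparisons of tuned-neuron proportions reported above are chi-square tests on 2 × 2 count tables. The sketch below is our own illustration using the overall counts from Table 2 (not the per-action tests reported in the text); the function name is an assumption.

```python
import numpy as np
from scipy.stats import chi2_contingency

def compare_proportions(tuned_a, total_a, tuned_b, total_b):
    """Chi-square test on a 2 x 2 table of tuned vs. untuned neuron
    counts in two partner conditions; returns the P value."""
    table = np.array([[tuned_a, total_a - tuned_a],
                      [tuned_b, total_b - tuned_b]])
    return chi2_contingency(table)[1]

# MPFC, RA vs. FM: 79/226 tuned vs. 42/174 tuned (Table 2)
p_mpfc = compare_proportions(79, 226, 42, 174)
# PMv, RA vs. FM: 124/301 tuned vs. 123/344 tuned (Table 2)
p_pmv = compare_proportions(124, 301, 123, 344)
```

On these overall counts, the MPFC difference is significant and the PMv difference is not, mirroring the pattern of the per-action tests in the text.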

We found that the social relevance of the partner also affected the strength of spatial preference in the spatially tuned MPFC neurons. As a measure of the strength of spatial preference, a regression slope derived from Spearman correlation tests was computed for each neuron (Fig. 3D), with a positive value indicative of a contralateral preference and a negative value indicative of an ipsilateral preference. In the MPFC (Fig. 3E, right), the distribution of regression slopes was scattered significantly more widely in the RA condition than in the filmed conditions for the self-action (x-axis; RA vs. FM, P = 1.4 × 10⁻⁶; RA vs. FO, P = 7.3 × 10⁻³; F test) and the partner-action (y-axis; RA vs. FM, P = 1.2 × 10⁻³; RA vs. FO, P = 1.2 × 10⁻¹; F test), suggesting that the MPFC contained spatially tuned neurons exhibiting greater strength of spatial preference in the RA condition compared to the filmed conditions. As opposed to the MPFC, the distribution in the PMv for the partner-action was scattered significantly more widely in the filmed conditions than in the RA condition (Fig. 3E, left; RA vs. FM, P = 3.6 × 10⁻²; RA vs. FO, P = 9.4 × 10⁻³; F test), while the variance for the self-action was significantly smaller in the FM condition than in the RA condition (RA vs. FM, P = 3.1 × 10⁻³; RA vs. FO, P = 6.2 × 10⁻²; F test). The same neurons displayed similar strength of spatial selectivity between the FM and FO conditions in the PMv (SI Appendix, Fig. S1, left; self-action FM vs. FO, P = 0.96; partner-action FM vs. FO, P = 0.87; paired t-test) and MPFC (SI Appendix, Fig. S1, right; self-action FM vs. FO, P = 0.24; partner-action FM vs. FO, P = 0.96; paired t-test).
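The F tests comparing the spread of the slope distributions are variance-ratio tests. A minimal two-sided version can be sketched as follows (our sketch, not the authors' code; the function name is an assumption):

```python
import numpy as np
from scipy import stats

def variance_f_test(slopes_a, slopes_b):
    """Two-sided F test for equality of variances of two regression-slope
    distributions (e.g., RA-condition vs. FM-condition slopes).
    Returns the variance ratio and the two-sided P value."""
    a = np.asarray(slopes_a, float)
    b = np.asarray(slopes_b, float)
    f = np.var(a, ddof=1) / np.var(b, ddof=1)
    dfa, dfb = a.size - 1, b.size - 1
    # tail probability on the side of the observed ratio, then doubled
    p_one = stats.f.sf(f, dfa, dfb) if f > 1 else stats.f.cdf(f, dfa, dfb)
    return f, min(1.0, 2.0 * p_one)
```

By symmetry of the F distribution, swapping the two samples leaves the two-sided P value unchanged, so the labeling of conditions does not matter.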

The alteration of spatial preference might be ascribable to differences in motor behavior, such as response times (SI Appendix, Table S4), across the three partner conditions. To examine this possibility, spatial selectivity, as indexed by a regression slope, was compared between trials with longer response times and those with shorter response times (Materials and Methods for details of the criteria). This analysis revealed the lack of a significant difference between the two trial groups for the self-action or the partner-action in either area (SI Appendix, Fig. S2; PMv self-action, P = 0.74; PMv partner-action, P = 0.10; MPFC self-action, P = 0.13; MPFC partner-action, P = 0.77; paired t-test), suggesting that kinematic differences of the observed as well as executed actions were unlikely to account for the decreased number of spatially tuned MPFC neurons in the filmed conditions. We also considered the possibility that levels of arousal or attention might have decreased in the filmed conditions, leading to a diminished spatial preference for the self-action and partner-action. However, M1’s gaze duration at the correct target in fact increased as the social relevance of the partner decreased; gaze duration was significantly longer in the FO condition than in the RA condition during both the self-action (SI Appendix, Fig. S3, top; monkey A, RA vs. FO, P = 1.7 × 10⁻⁴; monkey B, RA vs. FO, P = 2.0 × 10⁻⁶; two-tailed Welch’s t-test) and the partner-action (SI Appendix, Fig. S3, bottom; monkey A, RA vs. FO, P = 1.0 × 10⁻⁵; monkey B, RA vs. FO, P = 5.8 × 10⁻⁶; two-tailed Welch’s t-test).

Incongruent Spatial Preference Between Self-Action and Partner-Action.

The second response property concerns the incongruency of spatial preference between the self-action and partner-action. In the present study, a subset of neurons exhibited spatial preference for both the self-action and partner-action. In the RA condition, the number of neurons with such a dual spatial preference was comparable between the two areas (PMv, n = 27; MPFC, n = 24; Table 2). If the spatial preference of a neuron is determined on the basis of stimulus locations, as has been generally thought, its spatial preference should be consistent between the self-action and partner-action from M1’s viewpoint. As shown in Fig. 4A, we indeed found such neurons (congruent type; PMv, n = 19; MPFC, n = 15). Interestingly, however, the spatial preference of the remaining neurons was opposite between the self-action and partner-action (incongruent type; PMv, n = 8; MPFC, n = 9). A PMv neuron shown in Fig. 4B exhibited a preference for M1’s left side (B1) during the self-action but a preference for M1’s right side (B3) during the partner-action.

Fig. 4.

Congruent-type and incongruent-type neurons. (A) A congruent-type MPFC neuron. (B) An incongruent-type PMv neuron. Same conventions as in Fig. 2. (C and D) Z-score-normalized population activities for congruent-type and incongruent-type neurons in the PMv (C) and MPFC (D). For combined plots (Right column), P denotes the preferred location for the self-action, and NP denotes the nonpreferred location for the self-action. Mean ± SEM.

For congruent neurons, the regression slopes derived from Spearman correlation tests were distributed in the first and third quadrants (Fig. 3E, orange dots), whereas those for incongruent neurons were distributed in the second and fourth quadrants (Fig. 3E, green dots). To further compare these responses at the population level, the activities during the peri-action period were plotted separately for the target position viewed from a neuron under study (B1, ipsilateral; B3, contralateral). Note that, as described above, neuronal recordings were made in the left hemisphere. This analysis revealed that congruent neurons exhibited a monotonic change in activity in the same direction between the self-action and partner-action in overlapping manners (Fig. 4 C and D, top). In contrast, incongruent neurons exhibited a monotonic change in activity in the opposite direction between the self-action and partner-action (Fig. 4 C and D, bottom). When contralateral-preferring and ipsilateral-preferring neurons were combined to increase the statistical power (Fig. 4 C and D, right), the difference in activity between the preferred and nonpreferred positions was not significantly different between the self-action and partner-action at the population level (PMv congruent, P = 0.65; PMv incongruent, P = 0.34; MPFC congruent, P = 0.94; MPFC incongruent, P = 0.78; Welch’s t-test). The numbers of neurons with contralateral and ipsilateral preferences were comparable for both congruent and incongruent neurons in both areas (contralateral vs. ipsilateral; 9 vs. 10 for PMv congruent neurons; 5 vs. 3 for PMv incongruent neurons; 7 vs. 8 for MPFC congruent neurons; 5 vs. 4 for MPFC incongruent neurons).
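The pooling convention used for the population plots (flip left-right for ipsilateral-preferring neurons, flip sign for inhibitory neurons, then average z-scored activity) can be sketched as follows. This is a simplified illustration of the convention stated in the Fig. 3 legend; the function name and data layout are our assumptions.

```python
import numpy as np

def pooled_tuning_curve(curves, prefs, signs):
    """Combine per-neuron tuning curves (rows = neurons, columns =
    [ipsi, center, contra] mean firing rates) into one population curve:
    z-score each neuron, flip left-right for ipsilateral-preferring
    neurons so the preferred side is always last, and flip sign for
    inhibitory neurons so responses align with excitation."""
    out = []
    for curve, pref, sign in zip(np.asarray(curves, float), prefs, signs):
        z = (curve - curve.mean()) / curve.std()
        if pref == "ipsi":
            z = z[::-1]   # align preferred side to the right
        if sign == "inhibitory":
            z = -z        # align response sign to excitation
        out.append(z)
    return np.mean(out, axis=0)
```

After these flips, an excitatory contralateral neuron, an excitatory ipsilateral neuron, and an inhibitory contralateral neuron all contribute the same rising profile, so heterogeneous neurons can be averaged meaningfully.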

We considered the possibility that the occurrence of incongruent-type neurons might be associated with idiosyncratic gaze behavior in the experimental sessions in which those neurons were identified. We believe that this is unlikely because 12 out of 17 incongruent-type neurons were recorded simultaneously with congruent-type neurons. In fact, the M1 monkeys spontaneously looked at the correct target button consistently longer than at the nontarget buttons (Fig. 1 C–E) during both self-action (monkey A, P = 1.1 × 10⁻⁵; monkey B, P = 2.5 × 10⁻¹⁵; Welch’s t-test) and partner-action (monkey A, P = 2.5 × 10⁻⁴; monkey B, P = 4.8 × 10⁻¹⁶). This would not be surprising given that information about the partner’s choice can be used to determine subsequent self-choices. Moreover, gaze durations at each target were not significantly different between experimental sessions in which incongruent-type, but not congruent-type, neurons were isolated (incongruent sessions, n = 3) and those in which congruent-type, but not incongruent-type, neurons were isolated (congruent sessions, n = 5) for either the self-action (SI Appendix, Fig. S4, left; B1, P = 0.41; B2, P = 0.29; B3, P = 0.20; Welch’s t-test) or the partner-action (SI Appendix, Fig. S4, right; B1, P = 0.28; B2, P = 0.60; B3, P = 0.40; Welch’s t-test). This finding further supports the notion that the occurrence of incongruent-type neurons was not accounted for by unusual gaze behavior. We also considered the possibility that incongruent-type neurons occurred by chance, simply because their number was small. To evaluate this, we performed a bootstrap analysis to test whether the number of each neuronal type was higher than expected from the null distribution. In all cases except two (MPFC congruent-type and incongruent-type neurons in the FO condition), the observed number of neurons in our dataset was significantly higher than that expected by chance (SI Appendix, Figs. S5 and S6).
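The logic of such a chance-level test can be sketched with a simple resampling null in which self-tuning and partner-tuning are assigned independently with their observed marginal probabilities. This is our simplified stand-in, not the authors' bootstrap, whose details may differ.

```python
import numpy as np

def count_null_distribution(n_neurons, p_self_tuned, p_partner_tuned,
                            n_boot=10000, seed=0):
    """Null distribution for the number of neurons tuned for BOTH the
    self-action and the partner-action, assuming the two tunings occur
    independently with the observed marginal probabilities."""
    rng = np.random.default_rng(seed)
    self_t = rng.random((n_boot, n_neurons)) < p_self_tuned
    part_t = rng.random((n_boot, n_neurons)) < p_partner_tuned
    return (self_t & part_t).sum(axis=1)

# PMv in the RA condition (Tables 1 and 2): 301 agent-related neurons,
# 76 tuned for self-action, 75 for partner-action, and 27 tuned for both
# (19 congruent + 8 incongruent)
null = count_null_distribution(301, 76 / 301, 75 / 301)
```

Under this independence null, the expected number of dual-tuned PMv neurons is about 19, and the observed count of 27 lies above the 95th percentile of the null distribution.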

The paucity of MPFC neurons with dual spatial preference in the filmed conditions, both congruent and incongruent types, was consistent with our finding that the social relevance of the partner affected spatial coding in the MPFC. These results suggest that a subset of PMv and MPFC neurons may use a spatial coordinate frame anchored to the actor’s, rather than M1’s, viewpoint for encoding the self-action and partner-action.

Population Decoding of Spatial Location.

Finally, we investigated whether activities of the subpopulations of neurons described above can be used to decode spatial information about the partner-action. For this purpose, we constructed a machine learning-based decision tree classifier using activities in the peri-action period for neurons exhibiting spatial preference during the partner-action (PMv, n = 75; MPFC, n = 54), congruent-type neurons (PMv, n = 19; MPFC, n = 15), or incongruent neurons (PMv, n = 8; MPFC, n = 9). Our decoding analysis was performed based on responses of pseudopopulations of neurons, as many of these neurons were recorded in separate sessions. Here, we trained a model (classifier) for each neuronal type using a training dataset (80% of the observed data) and then examined classification accuracy using a test dataset (the remaining 20%; Materials and Methods).

Decoding performance for the spatially tuned PMv neurons exceeded the 95th percentile of the permutation distribution (SI Appendix, Fig. S7, top left; P = 0.042). Decoding performance also exceeded the 95th percentile of the permutation distribution for the congruent-type MPFC neurons (SI Appendix, Fig. S7, middle center; P = 0.037) and the incongruent-type PMv neurons (SI Appendix, Fig. S7, bottom left; P = 0.006). These results support the notion that subpopulations of PMv and MPFC neurons can be used to decode spatial information about others’ actions. Decoding performance did not improve further when both PMv and MPFC neurons were used (SI Appendix, Fig. S7, right column).
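The decoding pipeline (train on 80% of pseudopopulation trials, test on 20%, compare with a label-shuffled permutation null) can be sketched as follows. Note that the study used a decision-tree classifier; the nearest-centroid decoder below is a simpler stand-in that keeps the same cross-validation and permutation logic, and all names are our assumptions.

```python
import numpy as np

def decode_accuracy(X, y, train_frac=0.8, seed=0):
    """Nearest-centroid decoding of target location from pseudopopulation
    activity (trials x neurons), with an 80/20 train/test split."""
    X, y = np.asarray(X, float), np.asarray(y)
    rng = np.random.default_rng(seed)
    idx = rng.permutation(y.size)
    n_train = int(train_frac * y.size)
    tr, te = idx[:n_train], idx[n_train:]
    classes = np.unique(y[tr])
    centroids = np.array([X[tr][y[tr] == c].mean(axis=0) for c in classes])
    # squared Euclidean distance from each test trial to each centroid
    dist = ((X[te][:, None, :] - centroids[None]) ** 2).sum(axis=2)
    pred = classes[dist.argmin(axis=1)]
    return float((pred == y[te]).mean())

def permutation_pvalue(X, y, n_perm=200, seed=0):
    """Observed accuracy against a label-shuffled null distribution."""
    rng = np.random.default_rng(seed)
    obs = decode_accuracy(X, y)
    null = [decode_accuracy(X, rng.permutation(y)) for _ in range(n_perm)]
    return obs, float(np.mean([acc >= obs for acc in null]))
```

With separable simulated data, the observed accuracy far exceeds the shuffled null, mirroring the comparison against the 95th percentile of the permutation distribution described above.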

Discussion

In the present study, we examined spatial coding of agent-related neurons in the PMv and MPFC, two frontal cortical nodes in the social brain. The proportion of neurons with spatial preference did not differ between the two areas in the RA condition. However, there was a notable difference between the two areas with respect to the impact of the partner’s social relevance on neuronal activity. In particular, the proportion of spatially tuned neurons in the PMv remained unchanged across the three types of partners, whereas in the MPFC it was significantly lower in the filmed partner conditions (FM and FO) than in the RA condition. Another notable finding was that a subset of the spatially tuned neurons with dual spatial preference exhibited incongruent spatial preference between the self-action and partner-action. These findings delineate notable aspects of action coding and the similarities and differences in the functional roles of the PMv and MPFC in social action processing.

The MPFC of the macaque shows a preference for social actions (8, 27). In particular, blood-oxygen-level-dependent signals in the MPFC increase exclusively while subjects watch movies of two monkeys acting interactively (27). The response of partner-type neurons in the MPFC is significantly greater when facing another real monkey than when facing a monkey in prerecorded movies during the performance of a turn-taking task (8). The present results extend these findings by demonstrating that the spatial preference of agent-related neurons is also affected by the social relevance of the partner. Interestingly, this response modulation is observable for not only the partner-action but also the self-action. Our findings suggest that, for MPFC neurons, the spatial coding of action is most sensitive during real-time social interactions, whereas the spatial coding of PMv neurons is determined faithfully by the movement per se, regardless of whether its origin is a social agent or a nonsocial object.

It is not clear exactly what aspects of the social relevance affected spatial coding in the MPFC. There are several possible accounts including sensory factors (e.g., vision, audition, and olfaction) and contextual factors (e.g., real-time interactions and real-space sharing). In our task, the size of the partner monkey was equivalent between the RA and FM conditions, and task-related sounds were generated in the same manner between the RA, FM, and FO conditions. However, the smell of a monkey was only available in the RA condition. Given that the proportion of spatially tuned MPFC neurons did not differ significantly between the FM and FO conditions despite a remarkable difference in the partner’s appearance, olfaction might have played a dominant role among the sensory factors in determining the social relevance in our task condition. Regarding the contextual factors, the real-time interactions and real-space sharing seem to be equally important. These two components were present in the RA condition but were absent in the FM and FO conditions. A human neuroimaging study demonstrated that the MPFC activity increases during live task performance with another person compared to nonlive task performance with the same person (28). In addition, the mere presence of another person at rest in the same room elicits brain activity states in frontal electrodes that are different from those elicited by another person in a separate room (29). The development of experimental procedures that can distinguish between the contributions of these two contextual components will provide insights into what really determines the social relevance.

A subset of the agent-related neurons showed spatial preference for both self-action and partner-action (i.e., neurons with a dual spatial preference). Of particular interest, when viewed from M1’s perspective, the spatial preference for the partner-action could be diametrically opposite to the spatial preference for the self-action (incongruent neurons). This response property is noteworthy because the preferred direction of the partner-action becomes consistent with that of one’s own action if the partner-action is seen from the partner’s location and orientation in space. This may be related to a cognitive operation known as spatial perspective-taking, in which one mentally rotates oneself into the position of another agent. Human neuroimaging studies have demonstrated that the PMv and MPFC are involved in spatial perspective-taking (30–35), leading us to hypothesize that incongruent neurons might be a neuronal substrate of spatial perspective-taking for processing actions in social contexts.

Spatial perspective-taking (19–22) is considered a type of embodied social cognition, the notion that social cognition is closely associated with an agent’s physical body and its interactions with the environment, including others (23). Indeed, spatial perspective-taking is achieved through analogous transformational mechanisms sharing at least some properties of actual self-body motion (36, 37), and can occur spontaneously in the mere presence of another person with the potential for action, such as looking and reaching (21). Although spatial perspective-taking promotes successful social interactions, the neural mechanisms underlying this high-level social ability remain poorly understood. The present findings suggest a role of incongruent neurons in spatial perspective-taking. To test this hypothesis, it would be important to examine how incongruent neurons change their responses depending on the angular disparity between the visuospatial perspectives of the self and other. It would also be important to examine the effect of inactivating PMv and MPFC neurons on the ability to attribute a visual perception to others (38). The computation for taking the perspective of others would be facilitated by several neural signals, such as those associated with others’ face view and gaze direction, the angular disparity between self and others’ perspectives, and the mental rotation of one’s own body. Neurons at least in the macaque temporal cortex encode the direction of others’ gaze (39).

Congruent and incongruent neurons are present in the same brain areas. The coexistence of the two types of neurons might play a complementary role in decoding spatial information of others’ actions and in grasping what the action environment looks like from their perspectives. Although such complementarity was not identified in the present study, its detection may depend on several factors, including the task context at hand and the number of functionally interconnected neurons available for analysis. We observed that the majority of neurons with dual spatial preference were of congruent type. This observation is consistent with the claim that an ego-centric perspective is primary and natural (40). Interestingly, however, it is also known that calling attention to others’ actions can increase the frequency of adopting the others’ perspective (21). An interesting possibility is that the proportion of incongruent neurons may dynamically change by experimentally manipulating the amount of attention to others’ actions and their target objects.

The aforementioned possibilities need to be discussed with caution because spatial perspective-taking was not explicitly required in the present task. Indeed, the monkeys could retrieve task-relevant information from the spatial position of actions and their consequences from M1’s viewpoint. Importantly, however, in the parietal cortex of the macaque, visuotactile bimodal neurons with receptive fields on the monkey’s body part (e.g., left forearm) can exhibit visual responses to the corresponding body part (left forearm) of the experimenter facing the monkey (41). Such neurons, called “anatomical image matching neurons,” and our incongruent neurons seem to share similar response properties. Although the optimal stimuli for evoking neuronal responses differ between the two studies (visual stimulation over the arm vs. arm movement), the existence of such neurons suggests that the macaque brain is equipped with neural machinery useful for spatial matching between one’s own and others’ bodies. Moreover, available evidence suggests that macaque monkeys can attribute a visual perception to others and make predictions on the basis of others’ visual perspective (38, 42, 43). Furthermore, inactivation of the MPFC in the macaque eliminates spontaneous gaze behavior that anticipates others’ false-belief-driven actions (43). Therefore, exploring the neuronal mechanisms underlying spatial perspective-taking for social interactions is now technically feasible using macaque models.

A limitation of the present study should be addressed in future work. In our task, the monkeys were not required to fixate on a certain location during either the execution or the observation of action. Therefore, the coordinate system (e.g., eye-centered or body-centered) in which neurons in the PMv and MPFC represent actions remains unclear. However, previous studies have reported that the PMv contains neurons whose visual receptive fields are independent of eye position (44–46), suggesting that PMv neurons encode the locations of visual stimuli in a body-centered manner. This issue is also relevant to the functional role of incongruent neurons. One possibility is that the reference point for spatial coding is consistently one’s own physical body for congruent neurons, whereas for incongruent neurons it can flexibly switch between one’s own physical body and a virtual body that is mentally rotated into the position of the partner. Addressing this issue will help clarify the coordinate transformation framework implemented in each cortical area for encoding actions and other behavioral variables in social contexts (47).

The present findings highlight the spatial coding properties of PMv and MPFC neurons associated with the reach-to-target hand action of the self and other, a key piece of information for inferring the intentions and goals of social agents. Behavioral information useful for productive social exchanges can originate from other body parts as well, such as the eyes and head. Such information is known to be distributed across various brain regions, including frontal and temporal cortical areas (48, 49). An important next step is to investigate how social signals derived from multiple body parts are integrated in the brain to achieve higher-level social cognition, including mind reading.

Materials and Methods

All animal care and experimentation protocols were approved by the Institutional Animal Care and Use Committee of the National Institutes of Natural Sciences and were conducted in accordance with the Guidelines for the Care and Use of Nonhuman Primates in Neuroscience Research of The Japan Neuroscience Society. Methodological details, including behavioral and surgical procedures and data collection, have been described in part elsewhere (8, 50). This study is reported in accordance with the ARRIVE guidelines.

Subjects.

Four male macaques (Macaca fuscata) were used in this study. Two of them were used to collect neuronal data from the PMv and MPFC [monkey A (age 6, 5.1 kg) and monkey B (age 6, 5.0 kg), both referred to as M1]. The remaining two monkeys [monkey C (age 8, 8.1 kg) and monkey Q (age 5, 6.3 kg)] participated solely as partners.

Behavioral Procedures.

Behavioral tasks were controlled by a personal computer running the MonkeyLogic Matlab toolbox (51).

Role-reversal choice task.

M1 performed the role-reversal choice task with three different types of partners (referred to as M2). In the RA condition, the partner was a real monkey: monkey A was paired with monkey B or C, and monkey B was paired with monkey A, C, or Q. Monkey Q served as a partner only during behavioral training. The two monkeys sat in individual primate chairs facing each other in a sound-attenuated room. Four buttons were assigned to each monkey: a circular one as a start button and three rectangular ones as target buttons (B1, B2, and B3 from left to right from M1’s viewpoint, for both M1’s and M2’s buttons; Fig. 1A). In each trial, only one monkey was designated to make a choice (actor), and the other was required to hold the start button down throughout the trial (observer). The roles of actor and observer alternated every three trials. Each trial started when both monkeys pressed their start buttons. The target buttons on the actor’s side were illuminated after both monkeys had held the start buttons for 0.7 to 1.3 s. The actor was then required to press one of the three target buttons within 3 s. A high-pitched tone (1 kHz) was presented as feedback whenever a target was pressed. Only one of the three targets was correct, and its position changed every 11 to 17 trials without prior notice. When the actor chose the correct target, a drop of water was delivered to both monkeys 1.3 s after the target button press. When an incorrect button was pressed, no reward was given to either monkey.
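
For concreteness, the trial logic described above can be sketched as follows. This is a minimal simplification with random choices standing in for the monkeys' actual behavior; only the 3-trial role alternation, the 11-to-17-trial target blocks, and the shared-reward rule are taken from the task description.

```python
import random

random.seed(0)

BUTTONS = ["B1", "B2", "B3"]
correct = random.choice(BUTTONS)
block_len = random.randint(11, 17)   # correct target moves without notice
trial_in_block = 0
actors, rewards = [], []

for trial in range(30):
    # Roles of actor and observer alternate every three trials.
    actor = "M1" if (trial // 3) % 2 == 0 else "M2"
    choice = random.choice(BUTTONS)  # stand-in for the animal's real policy
    rewarded = (choice == correct)   # a correct choice rewards both monkeys
    actors.append(actor)
    rewards.append(rewarded)
    trial_in_block += 1
    if trial_in_block == block_len:  # reposition the correct target
        correct = random.choice(BUTTONS)
        block_len = random.randint(11, 17)
        trial_in_block = 0

print(actors[:6])  # first two role blocks
```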

Two more partner conditions were introduced in this study: the FM and FO conditions. In these conditions, the partners were filmed and replayed on a large LCD monitor (W67.41 × H99.56 cm) placed in front of M1. The filmed partner was either a monkey sitting in a primate chair (FM condition) or a wooden stick (FO condition). The task structure was fundamentally identical across conditions. The monkey or the stick on the monitor pressed one of the three targets in its actor trials and held the start button down in its observer trials. A blank screen was interleaved between trials in the filmed conditions. The partner in the FM and FO conditions made only choice-related errors, because in the RA condition “start errors” (release of the start button before target onset) and “target errors” (failure to respond to the illuminated targets within 3 s) together accounted for less than 1% of all errors. The overall correct rates of the filmed partners were adjusted to be comparable to those of the RA partner (RA, 80%; FM, 78%; FO, 77%; P = 0.31, one-way ANOVA).

To create visual stimuli for the filmed conditions, actions performed by the filmed partners were recorded in advance using a video camera (HDR-CX470, Sony, Tokyo, Japan) at 30 frames/s with a resolution of 1920 × 1080 pixels in an uncompressed format. The video sequences were then edited into clips for replay during data collection; each clip was 7 to 8 s long with a resolution of 800 × 600 pixels, starting ~1 s before start button onset and ending ~3 s after target button press. Eight to ten different video clips were prepared for the choice of each target button. During data collection, the LCD monitor was placed at the far end of M1’s chair panel so that the target buttons for M1 and those for M2 (on the monitor) were aligned in the same direction and looked continuous from M1’s viewpoint (Fig. 3A).

The filmed condition to be performed first in each experimental session was randomly determined. A total of at least 18 blocks were performed, alternating the two filmed conditions every nine blocks.

Surgical Procedures.

The M1 monkeys underwent surgery for implantation of a head holder and a recording chamber once their performance reached a predetermined level in the RA condition (>75% overall correct rate). A plastic headpost and a plastic chamber were implanted on the skull under aseptic conditions for head fixation and neuronal recording. The monkeys were anesthetized with intramuscular injections of ketamine HCl (10 mg/kg) and xylazine (1 to 2 mg/kg) and maintained under general anesthesia with isoflurane (1 to 2%) during surgery. The coordinates of the chamber were determined, aided by magnetic resonance images, such that the PMv and MPFC in the left hemisphere were accessible. Antibiotics and analgesics were administered after surgery.

Data collection.

Behavior.

All task events including stimulus presentation and reward delivery were controlled and recorded by a personal computer running the MonkeyLogic Matlab toolbox (51). The water reward was delivered through a spout attached to the primate chair under the control of a solenoid valve. Eye movements were continuously monitored using a commercially available eye-tracking system at a sampling rate of 1 kHz (EyeLink II; SR Research, Ontario) and streamed to the computer. Overt movements of the monkeys were monitored using a video-capturing system.

Neuronal activities.

Single-unit activities were recorded in the PMv and MPFC of the left hemisphere using 16-channel linear multicontact probes (U-probe or S-probe, Plexon Inc., TX, USA). In most sessions, probes were placed in both areas for simultaneous recordings. The probes had 200-μm interelectrode spacing and 0.3 to 0.5 MΩ impedance at 1 kHz. An oil-driven micromanipulator (MO-97A or MO-971A; Narishige, Tokyo, Japan) was used to advance each probe through a stainless-steel guide tube. The guide tube was held in place by a grid attached to the chamber, which allowed recordings every 0.5 mm between penetrations. Extracellular voltages were measured in reference to the probe shaft and then amplified and bandpass-filtered (150 Hz to 8 kHz; OmniPlex system; Plexon Inc.) to discriminate single-unit activity. Each unit was isolated using an online template-matching spike discriminator (SortClient; Plexon Inc.), followed by offline manual sorting using spike sorting software (Offline Sorter, Plexon Inc.). All well-isolated neurons were sampled. A total of 41 and 55 recording sessions were performed for monkey A (RA, n = 17; FM, n = 16; FO, n = 8) and monkey B (RA, n = 15; FM, n = 22; FO, n = 18), respectively. In these sessions, monkey A was paired with monkey B (RA, n = 6; FM, n = 3) and monkey C (RA, n = 11; FM, n = 13), whereas monkey B was paired with monkey A (RA, n = 9; FM, n = 3) and monkey C (RA, n = 6; FM, n = 19). The partner monkeys in the RA and FM conditions were identical in more than 80% of the sessions in which the two conditions were performed on the same day. Most of the neuronal data (PMv neurons, n = 565; MPFC neurons, n = 480) were collected in our previous work (8).

Data analysis (Dataset S1).

Gaze behavior.

M1’s gaze behaviors were analyzed in trials in which M2 made a correct choice and the outcome of the preceding trial was also correct. The region of interest was set at each target position. The duration of M1’s gaze within each region of interest was measured during a period in which M2 was making a choice (−400 to −200 ms from button press). The proportions of gaze time were compared across the three partner conditions (P = 0.01, one-way ANOVA).
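
A minimal sketch of this comparison, using synthetic per-session gaze proportions (the beta distributions and session counts are illustrative assumptions, not the recorded data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Hypothetical per-session proportions of M1's gaze time within the target
# regions of interest while M2 was making a choice (values are invented).
gaze_ra = rng.beta(8, 2, 15)   # real-agent (RA) sessions
gaze_fm = rng.beta(6, 4, 15)   # filmed-monkey (FM) sessions
gaze_fo = rng.beta(6, 4, 15)   # filmed-object (FO) sessions

# One-way ANOVA across the three partner conditions.
f, p = stats.f_oneway(gaze_ra, gaze_fm, gaze_fo)
print(f"one-way ANOVA: F = {f:.2f}, p = {p:.4f}")
```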

Neuronal activity.

We recorded spike activities of 615 neurons in the PMv and 527 neurons in the MPFC. As previously reported, neurons in these areas are activated by self-actions and/or partner-actions (6–8, 25). Accordingly, we first performed the same analyses as in our previous work to classify individual neurons in the RA condition (8, 25). Briefly, as a first step, firing rates were obtained during a control period (600 to 0 ms before target onset) and a peri-action period (from 400 ms before to 200 ms after target button press). A two-way ANOVA (P < 0.05) was performed to test the effects of two factors, agent (self or partner) and performance outcome (correct or incorrect), on activity in the peri-action period. Any neuron with a significant main effect of agent was judged to be agent-selective. Activities of the agent-selective neurons were further compared between self-correct trials and partner-correct trials in the peri-action period (self-type or partner-type) as well as between the peri-action period and the control period (excitatory or inhibitory). For example, agent-selective neurons were defined as excitatory self-type if their activities in the peri-action period were significantly higher in the self-correct trials than in the partner-correct trials (P < 0.05, Tukey–Kramer post hoc test), and their activities in the peri-action period were significantly higher than in the control period (P < 0.05, paired t-test). Partner-type neurons were further classified as partner-error type if a significant main effect of performance outcome was found. Finally, neurons with no significant main effect of agent were also judged to be agent-related if their activities in the peri-action period differed significantly from those in the control period (P < 0.05, paired t-test) in both self-correct and partner-correct trials (mirror type).
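
The decision logic for labeling a single neuron can be illustrated with synthetic firing rates. Note that this sketch substitutes two-sample and paired t-tests for the full two-way ANOVA with Tukey–Kramer post hoc tests used above, and all rate distributions are invented:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic peri-action firing rates (spikes/s) for one hypothetical neuron.
self_correct    = rng.normal(20, 3, 60)   # elevated during self-action
partner_correct = rng.normal(10, 3, 60)
control         = rng.normal(10, 3, 60)   # pre-target control period

# Agent selectivity (simplified stand-in for the ANOVA main effect of agent).
_, p_agent = stats.ttest_ind(self_correct, partner_correct)

# Excitatory vs. inhibitory: peri-action vs. control period (paired t-test).
_, p_ctrl = stats.ttest_rel(self_correct, control)

label = "unclassified"
if p_agent < 0.05 and self_correct.mean() > partner_correct.mean():
    label = "self-type"
    if p_ctrl < 0.05 and self_correct.mean() > control.mean():
        label = "excitatory self-type"
print(label)
```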

Spatial preference of individual agent-related neurons was evaluated in the peri-action period. In the following analyses, activities in self-correct and partner-correct trials were used for self-type, mirror-type, and partner-type neurons, and those in self-correct and partner-error trials were used for partner-error-type neurons. All trials were preceded by a correct choice made either by M1 or M2. Spearman rank correlation coefficients (ρ) for self-action and partner-action were computed using neuronal activities for the three target positions (left, center, and right for B1, B2, and B3, respectively), with a positive regression slope indicating a right-sided preference. Each neuron was judged to have spatial preference if its activity passed the Spearman rank correlation test (P < 0.05) for self-action, partner-action, or both. Furthermore, a neuron was judged to have congruent spatial preference if its correlation coefficients were significant for both self-action and partner-action and the signs of the regression slopes were the same (both positive or both negative). Conversely, a neuron was judged to have incongruent spatial preference if its correlation coefficients were significant for both self-action and partner-action but the signs of the regression slopes differed. To examine the effect of target location (B1, B2, or B3) with no assumption in the space domain, a one-way ANOVA was also performed on the activity of individual neurons during the peri-action period in the self-action and partner-action trials. A neuron was judged to have a preferred location if it showed a significant main effect (P < 0.05) and its maximal activity among the three targets differed significantly from the activity for at least one of the remaining two targets (P < 0.05, Tukey–Kramer post hoc test).
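
The congruent/incongruent assignment reduces to comparing the signs of two Spearman correlations. A minimal sketch with synthetic tuning (the slopes and noise levels are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical trial-by-trial firing rates for one neuron; targets coded
# 1, 2, 3 for B1 (left), B2 (center), B3 (right) from M1's viewpoint.
def simulate(slope, n=30):
    targets = rng.integers(1, 4, n)
    return targets, 10 + slope * targets + rng.normal(0, 1, n)

self_t, self_fr = simulate(+2.0)   # right-sided preference for self-action
part_t, part_fr = simulate(-2.0)   # left-sided preference for partner-action

rho_self, p_self = stats.spearmanr(self_t, self_fr)
rho_part, p_part = stats.spearmanr(part_t, part_fr)

if p_self < 0.05 and p_part < 0.05:
    # Same slope sign -> congruent; opposite signs -> incongruent.
    kind = "congruent" if np.sign(rho_self) == np.sign(rho_part) else "incongruent"
else:
    kind = "untuned or singly tuned"
print(kind)
```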

To examine the influence of kinematic differences on spatial preference, self-correct and partner-correct trials were each divided into two equally sized groups, one with the shorter response times and the other with the longer response times. The response time was defined as the duration between target onset and target press. Regression slopes were calculated for the two groups in the same way as described above. These procedures were performed for each spatially tuned neuron recorded in the RA condition, and the resulting regression slopes for the shorter and longer response times were compared statistically as a population (P < 0.05, paired t-test; SI Appendix, Fig. S2).
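
This kinematic control amounts to a paired comparison of slopes recomputed on the two response-time halves. A sketch with synthetic slopes (all values invented) in which tuning does not depend on response time:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# For 40 hypothetical spatially tuned neurons, simulate the regression slope
# (firing rate vs. target position) recomputed on each half of the trials.
true_slope  = rng.normal(2.0, 0.5, 40)
slope_short = true_slope + rng.normal(0, 0.2, 40)   # shorter-RT half
slope_long  = true_slope + rng.normal(0, 0.2, 40)   # longer-RT half

# If spatial tuning reflected movement kinematics, the halves should differ.
t, p = stats.ttest_rel(slope_short, slope_long)
print(f"paired t-test: p = {p:.3f}")
```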

To assess whether the proportion of each neuronal type was significantly higher than chance, the actual numbers of these neuronal types were compared with null distributions computed by a bootstrapping method. For each task-related neuron, shuffled activities were obtained by randomizing the target positions across trials and were tested as described above to determine whether the neuron would be judged to have spatial preference and, if so, congruent or incongruent spatial preference. The proportions of neurons with spatial preference, congruent-type neurons, and incongruent-type neurons were calculated by applying these steps to all agent-related neurons in the PMv and MPFC. This procedure was iterated 10,000 times to compute the null distribution (SI Appendix, Figs. S5 and S6).
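
The shuffle test can be sketched for a single hypothetical neuron. The analysis above iterates 10,000 times over whole populations; 2,000 single-neuron iterations keep this example fast, and the tuning parameters are invented:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# One hypothetical agent-related neuron whose rate depends on target (1-3).
targets = rng.integers(1, 4, 60)
fr = 10 + 2.0 * targets + rng.normal(0, 1, 60)

rho_true = stats.spearmanr(targets, fr)[0]

# Null distribution: randomize the target labels, recompute the statistic.
null = np.array([stats.spearmanr(rng.permutation(targets), fr)[0]
                 for _ in range(2000)])

p_perm = np.mean(np.abs(null) >= np.abs(rho_true))
print(f"observed rho = {rho_true:.2f}, permutation p = {p_perm:.4f}")
```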

In constructing population-averaged spike-density functions, each spike was convolved with a Gaussian kernel (SD = 30 ms) for individual spatially tuned neurons. The resulting spike densities were normalized by the activity during the control period using a z-score. These z-scored spike density functions were averaged across all neurons with spatial preference, i.e., regardless of the laterality of spatial preference (contralateral or ipsilateral) or the direction of the response modulation (excitatory or inhibitory). The z-scored spike density functions of contralateral- and ipsilateral-preferred neurons were averaged after flipping the spatial alignment of the activities of ipsilateral-preferred neurons. Similarly, the z-scored spike density functions of excitatory and inhibitory neurons were averaged after flipping the sign of the spike density functions for inhibitory neurons.
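
The smoothing and normalization steps can be sketched for a single synthetic spike train (the bin size, trial length, and firing rate are illustrative assumptions; `gaussian_filter1d` stands in for explicit kernel convolution):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(4)

# Synthetic spike train in 1-ms bins over 1,000 ms (~20 spikes/s).
spikes = (rng.random(1000) < 0.02).astype(float)

# Smooth with a Gaussian kernel (SD = 30 ms = 30 bins); the factor of
# 1,000 converts spike counts per 1-ms bin into spikes/s.
sdf = gaussian_filter1d(spikes, sigma=30) * 1000

# z-score against a control period (first 600 ms in this sketch).
mu, sd = sdf[:600].mean(), sdf[:600].std()
z = (sdf - mu) / sd
```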

To examine whether the spatial information of the partner’s actions was accurately represented by neurons in the PMv and MPFC, we performed a decoding analysis. Three groups of neurons were used for this analysis: neurons exhibiting spatial preference during the partner-action, congruent-type neurons, or incongruent-type neurons in each area. First, for individual neurons in each group, all trials were randomly split into a training dataset (80%) and a test dataset (the remaining 20%). These datasets included unit IDs, z-scored average firing rates in the peri-action period, and the location of the partner-action targets (B1, B2, or B3). The classification model was trained on the training dataset, and the performance of the trained model was assessed on the test dataset. These steps were repeated 1,000 times, and the average value was taken as the decoder’s classification performance. To determine the chance level of classification performance, the same procedure was applied to the same neuronal groups after shuffling the target locations in the training dataset; these steps were also repeated 1,000 times. If the decoder’s classification performance using the true datasets exceeded the 95th percentile of the shuffled distribution, the performance was considered significant.
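
Because the classifier type is not specified above, the following sketch uses a nearest-centroid decoder as a stand-in, with a synthetic pseudo-population (neuron count, noise level, trial counts, and repetition count are invented, and the labels are shuffled wholesale for simplicity); it reproduces the 80/20 split, repetition, and shuffled-label control:

```python
import numpy as np

rng = np.random.default_rng(5)

# Pseudo-population of 20 hypothetical neurons: each has a mean z-scored
# peri-action rate per partner-target (B1, B2, B3) plus trial-to-trial noise.
n_neurons, n_trials = 20, 90
tuning = rng.normal(0, 1, (3, n_neurons))         # target x neuron means
labels = np.repeat([0, 1, 2], n_trials // 3)
X = tuning[labels] + rng.normal(0, 0.8, (n_trials, n_neurons))

def decode(X, y, n_rep=200):
    """80/20 split, nearest-centroid decoder, averaged over repetitions."""
    acc = []
    for _ in range(n_rep):
        idx = rng.permutation(len(y))
        cut = int(0.8 * len(y))
        tr, te = idx[:cut], idx[cut:]
        centroids = np.array([X[tr][y[tr] == c].mean(0) for c in range(3)])
        d = ((X[te][:, None, :] - centroids[None]) ** 2).sum(-1)
        acc.append((d.argmin(1) == y[te]).mean())
    return float(np.mean(acc))

true_acc = decode(X, labels)
shuf_acc = decode(X, rng.permutation(labels))     # shuffled-label control
print(f"decoded {true_acc:.2f} vs shuffled {shuf_acc:.2f} (chance = 0.33)")
```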

Supplementary Material

Appendix 01 (PDF)

Dataset S01 (XLSX)


Acknowledgments

We thank A. Noritake, S. Tomatsu, and A. Uematsu for helpful discussions, and M. Togawa, Y. Yamanishi, S. Jochi, and A. Shibata for technical assistance. Japanese monkeys used in this study were provided by the National Bio-Resource Project “Japanese Macaques” of Japan Agency for Medical Research and Development (AMED). This research was supported by Grants-in-Aid for Japan Society for the Promotion of Science KAKENHI Grant Numbers 19H05467 (to T.N.), 21K07267 (to T.N.), and 22H04931 (to M.I.), and by AMED under Grant Number JP23wm0525001 (to M.I.).

Author contributions

T.N. and M.I. designed research; T.N. and M.I. performed research; T.N. analyzed data; and T.N. and M.I. wrote the paper.

Competing interests

The authors declare no competing interest.

Footnotes

This article is a PNAS Direct Submission.

Contributor Information

Taihei Ninomiya, Email: ninomiya@nips.ac.jp.

Masaki Isoda, Email: isodam@nips.ac.jp.

Data, Materials, and Software Availability

All study data are included in the article and/or supporting information.

Supporting Information

References

1. Mukamel R., Ekstrom A. D., Kaplan J., Iacoboni M., Fried I., Single-neuron responses in humans during execution and observation of actions. Curr. Biol. 20, 750–756 (2010).
2. Ninomiya T., Noritake A., Ullsperger M., Isoda M., Performance monitoring in the medial frontal cortex and related neural networks: From monitoring self actions to understanding others’ actions. Neurosci. Res. 137, 1–10 (2018).
3. Noritake A., Ninomiya T., Isoda M., Subcortical encoding of agent-relevant associative signals for adaptive social behavior in the macaque. Neurosci. Biobehav. Rev. 125, 78–87 (2021).
4. Rizzolatti G., Craighero L., The mirror-neuron system. Annu. Rev. Neurosci. 27, 169–192 (2004).
5. Isoda M., The role of the medial prefrontal cortex in moderating neural representations of self and other in primates. Annu. Rev. Neurosci. 44, 295–313 (2021).
6. Gallese V., Fadiga L., Fogassi L., Rizzolatti G., Action recognition in the premotor cortex. Brain 119, 593–609 (1996).
7. Yoshida K., Saito N., Iriki A., Isoda M., Representation of others’ action by neurons in monkey medial frontal cortex. Curr. Biol. 21, 249–253 (2011).
8. Ninomiya T., Noritake A., Kobayashi K., Isoda M., A causal role for frontal cortico-cortical coordination in social action monitoring. Nat. Commun. 11, 5233 (2020).
9. Pomper J. K., Shams M., Wen S., Bunjes F., Thier P., Non-shared coding of observed and executed actions prevails in macaque ventral premotor mirror neurons. eLife 12, 1–26 (2023).
10. Kraskov A., et al., Corticospinal mirror neurons. Philos. Trans. R. Soc. B Biol. Sci. 369, 20130174 (2014).
11. Kraskov A., Dancause N., Quallo M. M., Shepherd S., Lemon R. N., Corticospinal neurons in macaque ventral premotor cortex with mirror properties: A potential mechanism for action suppression? Neuron 64, 922–930 (2009).
12. Bonini L., Maranesi M., Livi A., Fogassi L., Rizzolatti G., Ventral premotor neurons encoding representations of action during self and others’ inaction. Curr. Biol. 24, 1611–1614 (2014).
13. Caggiano V., Fogassi L., Rizzolatti G., Thier P., Casile A., Mirror neurons differentially encode the peripersonal and extrapersonal space of monkeys. Science 324, 403–406 (2009).
14. Maranesi M., Livi A., Bonini L., Spatial and viewpoint selectivity for others’ observed actions in monkey ventral premotor mirror neurons. Sci. Rep. 7, 2–8 (2017).
15. Caggiano V., et al., Mirror neurons encode the subjective value of an observed action. Proc. Natl. Acad. Sci. U.S.A. 109, 11848–11853 (2012).
16. Pomper J. K., et al., Representation of the observer’s predicted outcome value in mirror and nonmirror neurons of macaque F5 ventral premotor cortex. J. Neurophysiol. 124, 941–961 (2020).
17. Falcone R., Cirillo R., Ceccarelli F., Genovesio A., Neural representation of others during action observation in posterior medial prefrontal cortex. Cereb. Cortex 32, 4512–4523 (2022).
18. Falcone R., Cirillo R., Ferraina S., Genovesio A., Neural activity in macaque medial frontal cortex represents others’ choices. Sci. Rep. 7, 12663 (2017).
19. Kessler K., Thomson L. A., The embodied nature of spatial perspective taking: Embodied transformation versus sensorimotor interference. Cognition 114, 72–88 (2010).
20. Levine M., Jankovic I. N., Palij M., Principles of spatial problem solving. J. Exp. Psychol. Gen. 111, 157–175 (1982).
21. Tversky B., Hard B. M., Embodied and disembodied cognition: Spatial perspective-taking. Cognition 110, 124–129 (2009).
22. Michelon P., Zacks J. M., Two kinds of visual perspective taking. Percept. Psychophys. 68, 327–337 (2006).
23. Spaulding S., Introduction to debates on embodied social cognition. Phenomenol. Cogn. Sci. 11, 431–448 (2012).
24. Gallagher S., How the Body Shapes the Mind (Oxford Univ. Press, 2005).
25. Ninomiya T., Noritake A., Tatsumoto S., Go Y., Isoda M., Cognitive genomics of learning delay and low level of social performance monitoring in macaque. Sci. Rep. 12, 16539 (2022).
26. Hoshi E., Tanji J., Contrasting neuronal activity in the dorsal and ventral premotor areas during preparation to reach. J. Neurophysiol. 87, 1123–1128 (2002).
27. Sliwa J., Freiwald W. A., A dedicated network for social interaction processing in the primate brain. Science 356, 745–749 (2017).
28. Rice K., Redcay E., Interaction matters: A perceived social partner alters the neural processing of human speech. Neuroimage 129, 480–488 (2016).
29. Rolison M. J., Naples A. J., Rutherford H. J. V., McPartland J. C., The presence of another person influences oscillatory cortical dynamics during dual brain EEG recording. Front. Psychiatry 11, 246 (2020).
30. David N., et al., Neural representations of self versus other: Visual-spatial perspective taking and agency in a virtual ball-tossing game. J. Cogn. Neurosci. 18, 898–910 (2006).
31. Mazzarella E., Ramsey R., Conson M., Hamilton A., Brain systems for visual perspective taking and action perception. Soc. Neurosci. 8, 248–267 (2013).
32. Schurz M., Aichhorn M., Martin A., Perner J., Common brain areas engaged in false belief reasoning and visual perspective taking: A meta-analysis of functional brain imaging studies. Front. Hum. Neurosci. 7, 1–14 (2013).
33. Schurz M., et al., Clarifying the role of theory of mind areas during visual perspective taking: Issues of spontaneity and domain-specificity. Neuroimage 117, 386–396 (2015).
34. Martin A. K., Dzafic I., Ramdave S., Meinzer M., Causal evidence for task-specific involvement of the dorsomedial prefrontal cortex in human social cognition. Soc. Cogn. Affect. Neurosci. 12, 1209–1218 (2017).
35. Gunia A., Moraresku S., Vlček K., Brain mechanisms of visuospatial perspective-taking in relation to object mental rotation and the theory of mind. Behav. Brain Res. 407, 113247 (2021).
36. Zacks J. M., Michelon P., Transformations of visuospatial images. Behav. Cogn. Neurosci. Rev. 4, 96–118 (2005).
37. Lenggenhager B., Lopez C., Blanke O., Influence of galvanic vestibular stimulation on egocentric and object-based mental transformations. Exp. Brain Res. 184, 211–221 (2008).
38. Flombaum J. I., Santos L. R., Rhesus monkeys attribute perceptions to others. Curr. Biol. 15, 447–452 (2005).
39. Perrett D. I., et al., Visual cells in the temporal cortex sensitive to face view and gaze direction. Proc. R. Soc. Lond. B Biol. Sci. 223, 293–317 (1985).
40. Epley N., Morewedge C. K., Keysar B., Perspective taking in children and adults: Equivalent egocentrism but differential correction. J. Exp. Soc. Psychol. 40, 760–768 (2004).
41. Ishida H., Nakajima K., Inase M., Murata A., Shared mapping of own and others’ bodies in visuotactile bimodal area of monkey parietal cortex. J. Cogn. Neurosci. 22, 83–96 (2010).
42. Drayton L. A., Santos L. R., What do monkeys know about others’ knowledge? Cognition 170, 201–208 (2018).
43. Hayashi T., et al., Macaques exhibit implicit gaze bias anticipating others’ false-belief-driven actions via medial prefrontal cortex. Cell Rep. 30, 4433–4444.e5 (2020).
44. Fogassi L., et al., Space coding by premotor cortex. Exp. Brain Res. 89, 686–690 (1992).
45. Gentilucci M., Scandolara C., Pigarev I. N., Rizzolatti G., Visual responses in the postarcuate cortex (area 6) of the monkey that are independent of eye position. Exp. Brain Res. 50, 464–468 (1983).
46. Graziano M. S. A., Gross C. G., Visual responses with and without fixation: Neurons in premotor cortex encode spatial locations independently of eye position. Exp. Brain Res. 118, 373–380 (1998).
47. Chang S. W. C., Coordinate transformation approach to social interactions. Front. Neurosci. 7, 1–9 (2013).
48. Lanzilotto M., Gerbella M., Perciavalle V., Lucchetti C., Neuronal encoding of self and others’ head rotation in the macaque dorsal prefrontal cortex. Sci. Rep. 7, 8571 (2017).
49. Perrett D. I., Harris M., Social signals analyzed at the single cell level: Someone is looking at me, something moved! Int. J. Comp. Psychol. 4, 25–55 (1990).
50. Ninomiya T., Noritake A., Isoda M., Live agent preference and social action monitoring in the macaque mid-superior temporal sulcus region. Proc. Natl. Acad. Sci. U.S.A. 118, e2109653118 (2021).
  • 51.Asaad W. F., Eskandar E. N., A flexible software tool for temporally-precise behavioral control in Matlab. J. Neurosci. Methods 174, 245–258 (2008). [DOI] [PMC free article] [PubMed] [Google Scholar]

Associated Data

Supplementary Materials

Appendix 01 (PDF)

Dataset S01 (XLSX)

Data Availability Statement

All study data are included in the article and/or supporting information.
