Abstract
Neuroscience has long relied on macaque studies to infer human brain function, yet identifying functionally corresponding brain regions across species and measurement modalities remains a fundamental challenge. This is especially true for higher-order cortex, where functional interpretations are constrained by narrow hypotheses and anatomical landmarks are often non-homologous. We present a data-driven approach for mapping functional correspondence across species using rich, naturalistic stimuli. By directly comparing macaque electrophysiology with human fMRI responses to 700 natural scenes, we identify fine-grained alignment based on response pattern similarity, without relying on predefined tuning concepts or hand-picked stimuli. As a test case, we examine the ventral face patch system, a well-studied but contested domain in cross-species alignment. Our approach resolves a longstanding ambiguity, yielding a correspondence consistent with full-brain anatomical warping but inconsistent with prior studies limited by narrow functional hypotheses. These findings show that natural image-evoked response patterns provide a robust foundation for cross-species functional alignment, supporting scalable comparisons as large-scale primate recordings become more widespread.
Introduction
Neuroscience depends on cross-species and cross-measurement comparisons to understand the principles of human brain function. Research in humans is mostly based on non-invasive techniques such as functional magnetic resonance imaging (fMRI), which measures broad, indirect signals throughout the entire brain. This technique has been widely used to localize function to cortex, for instance to identify visual regions with response biases for edges, shapes, faces, bodies, or other categories1–6. Disentangling the neural computations in these regions requires more focal recording techniques that are largely restricted to animal models. Therefore, much of our understanding of human visual processing has been inferred indirectly from electrophysiological measures in the macaque visual system. A central challenge for systems neuroscience is to connect those findings by aligning responses across measurement modalities and identifying functionally corresponding brain regions across species. Establishing such correspondence is not only critical for interpreting human brain function based on findings from animal models but also provides insights into the evolutionary organization of the cortex7.
Existing approaches often rely on narrowly defined functional hypotheses or a small number of diagnostic stimuli to infer correspondence. But such features may fail to capture the full complexity of cortical representations, especially in higher-order areas where conceptual distinctions between regions remain poorly understood8. As large-scale, brain-wide recordings in non-human primates become more widely available9, and as neuroscience expands into more abstract cortical territories, these limitations become more acute, highlighting the need for scalable, hypothesis-agnostic methods that can leverage such data. Here, we propose one such framework: a flexible, data-driven approach for identifying functionally corresponding brain regions across species using response selectivity to a large number of naturalistic stimuli. By leveraging the rich, graded response patterns evoked by diverse natural scenes, this method identifies functional alignment without relying on predefined tuning axes, stimulus categories, or conceptual heuristics.
To evaluate this approach, we focus on the ventral face patch system, a well-characterized yet still unresolved test case for cross-species alignment. This system comprises a series of patches that respond more to faces than to other objects (“face selectivity”3,10). In humans, the ventral face patches form a posterior-to-anterior sequence that begins at the occipital face area (OFA), continues through the posterior (FFA-1) and anterior (FFA-2) parts of the fusiform face area, and culminates at the anterior temporal lobe face patch (ATL). In the macaque inferotemporal cortex (IT), a comparable sequence unfolds from the posterior lateral face patch (PL) through the middle lateral (ML) and anterior lateral (AL) face patches, leading to the anterior medial face patch (AM)11,12. These patches contain a high proportion of face-selective neurons called “face cells”, which have been studied extensively in macaque visual cortex10,13–30. While both species exhibit a broadly conserved posterior-to-anterior axis, the precise correspondence of specific patches remains unresolved12. The macaque central IT face patch ML and human FFA are often considered homologous11,31–33. This notion is supported by predictions based on full-brain anatomical warping, which suggest an ML to FFA and an AL to ATL correspondence33,34. Nevertheless, evidence from the surrounding cortical topography suggests a possible PL to OFA, ML to FFA-1, and AL to FFA-2 correspondence35 or even a PL/ML to OFA and AL to FFA correspondence36,37.
To further complicate the picture, anatomical evidence may be insufficient on its own to establish the correspondence of face areas, as regions can reorganize, duplicate, segregate, or enlarge through cortical expansion during evolution38,39,45. Functional evidence is therefore essential to determine whether two regions in macaques and humans serve similar roles in visual processing. Traditionally, functional correspondence is determined based on a conceptual interpretation of the role that a region may serve in brain processing. For macaque face regions, one such conceptual distinction is how neurons respond to heads presented at different orientations. Whereas ML neurons show viewpoint-specific responses, more anterior AL neurons show mirror-symmetric viewpoint invariance16, a tuning property that can be retrieved from fMRI activation patterns40. In humans, one study found evidence for this mirror-symmetric coding in FFA, but not in OFA41, suggesting an ML to OFA and AL to FFA functional correspondence. An alternative interpretation is that ML may correspond to the more posterior part FFA-1 and AL to the more anterior part FFA-212. Others have suggested that evidence for mirror-symmetric tuning could merely be explained by low-level confounds42,43. The lack of consistency among studies calls into question the validity of relying on a single, hand-picked tuning property to arbitrate between scenarios of functional correspondence across species. Indeed, recent findings have shown that, even for face cells, the neural tuning should not be reduced to a few human-interpretable features or concepts29,30.
Here, we test whether we can establish the functional correspondence of human and macaque face areas without relying on specific assertions of functional specialization. Rather than focusing on only face images or pre-defined functional properties, we compared neural responses from macaque and human cortex to a shared, diverse set of natural scenes. Specifically, we leveraged an existing large-scale dataset of human fMRI responses to complex natural scenes44, and recorded neural responses in macaque central and anterior face regions using the same stimuli. This allowed us to evaluate whether response selectivity across a large number of natural images can resolve competing hypotheses about the alignment of face-selective regions across species. We found that a direct comparison of these responses supported a mapping in which macaque ML corresponds to human FFA and AL to ATL, consistent with predictions from full-brain anatomical warping alone33. Thus, our approach resolved prior inconsistencies and demonstrated that broad, natural image-evoked response profiles can establish functional alignment across species, without assuming conceptual distinctions between regions.
Results
We presented stimuli from the Natural Scenes Dataset44 to five macaque monkeys (initials A, OG, P, R, and B1; hereafter referred to as M1–M5), one with an array in primary visual cortex (V1; M1: N=11 reliably responsive multiunit sites, see Methods), two with arrays in CIT at the location of the middle lateral face patch (ML; M2: N=23 units; M3: N=33 units), and two with arrays in AIT targeting the anterior lateral face patch (AL; M4: N=74 units; M5: N=40 units). The stimulus set consisted of 700 photographs depicting a rich variety of things in a scene context (including animals, humans, sports, and food, among others), captured at varying distances. Most of these stimuli did not prominently feature faces. On average, each of the IT arrays (but not the V1 array) responded most strongly to images with prominent faces (Fig. 1a). To evaluate the face selectivity of individual units, we computed a face versus no-face d’ metric comparing the response to images with prominent faces to those without faces or animals (see Methods). The units recorded from IT arrays had a high level of face selectivity (mean face versus no-face d’; M2, CIT: 1.46, 95% CI[1.24,1.67]; M3, CIT: 2.12, 95% CI[1.76,2.48]; M4, AIT: 2.30, 95% CI[2.14,2.47]; M5, AIT: 3.71, 95% CI[3.44,3.98]), unlike the V1 array (M1: 0.15, 95% CI[−0.03,0.34]).
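The face versus no-face d’ used here can be sketched in a few lines. The pooled-variance formulation below is the common definition of d’ and is an assumption on our part (the paper defines its exact variant in the Methods); the variable names and toy data are illustrative only.

```python
import numpy as np

def face_dprime(face_resps, nonface_resps):
    """Face vs. no-face d': separation of mean responses in
    pooled-standard-deviation units (assumed formulation):
        d' = (mu_face - mu_nonface) / sqrt((var_face + var_nonface) / 2)
    """
    mu_f, mu_n = np.mean(face_resps), np.mean(nonface_resps)
    var_f = np.var(face_resps, ddof=1)
    var_n = np.var(nonface_resps, ddof=1)
    return (mu_f - mu_n) / np.sqrt((var_f + var_n) / 2.0)

# hypothetical trial-averaged responses of a strongly face-selective unit
rng = np.random.default_rng(0)
face = rng.normal(10.0, 2.0, size=100)    # responses to images with prominent faces
nonface = rng.normal(4.0, 2.0, size=300)  # responses to images without faces or animals
d = face_dprime(face, nonface)
```

With these toy parameters the unit lands around d’ ≈ 3, comparable to the IT arrays reported above, whereas overlapping distributions would give a d’ near zero, as for the V1 array.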
Figure 1. Data and hypotheses.
a. Central and anterior IT array locations show face-selective responses. Middle: images ranked according to the array response averaged across units. Right: average face selectivity. Grey markers represent individual units. Black markers indicate 95% CI. b. Human face-selective ROIs from the functional localizer experiment of ref. 44. Middle: images ranked according to the ROI response averaged across voxels and subjects. Right: average ROI face selectivity. Grey markers represent individual subjects. Black markers indicate 95% CI. c. Three scenarios of posterior-anterior functional alignment of the face-selective cascade in the macaque and human ventral streams.
Similarly, for the human fMRI data, all four localized face area regions of interest (ROIs) responded most strongly to images with prominent faces (Fig. 1b). Note that we excluded the mid temporal lobe face ROI, since it was localized in only two subjects. We computed a face versus no-face d’ per ROI of each individual subject, confirming that these regions have strongly face selective responses (mean face versus no-face d’; OFA: 1.70, 95% CI[1.09,2.31]; FFA-1: 2.78, 95% CI[2.15,3.40]; FFA-2: 2.72, 95% CI[1.92,3.52]; ATL: 1.93, 95% CI[1.38,2.47]). Thus, the CIT and AIT arrays were highly face selective, like human face areas, confirming that we successfully targeted the middle and anterior face-selective regions in IT cortex.
To determine whether response patterns across natural scenes can resolve the correspondence between macaque and human face-selective regions, we evaluated three hypothesized alignments (Fig. 1c): (1) ML to OFA and AL to FFA; (2) ML to FFA-1 and AL to FFA-2; (3) ML to FFA and AL to ATL. That is, does macaque ML correspond best to human OFA (scenario 1) or to (posterior) human FFA (scenario 2 or 3), and does macaque AL correspond best to human (anterior) FFA (scenario 1 or 2) or to human ATL (scenario 3)? We used array-to-fMRI response pattern similarity to arbitrate between these scenarios.
Macaque IT arrays correlate with human fMRI face areas and beyond.
Having established the face selectivity of the monkey arrays, where in the human brain do we find similar tuning? Rather than characterizing and interpreting the tuning of neural or fMRI responses as a function of specific visual features or categories, we take an agnostic approach in which we compare the full selectivity profile across all images. This approach is motivated by recent work from our lab showing that the tuning of face cells is more complex than can be characterized with faces alone, extending in a meaningful way to all kinds of objects29,30. The assumption we make with this approach is that the stimulus set of 700 images is rich enough to differentiate between distinct, complex neural tuning profiles, even if that tuning is too complex to be human-interpretable.
We computed the similarity of responses at each specific array location (i.e., V1, CIT, or AIT) to the responses of each vertex of a subject-averaged human brain (Fig. 2a; see Methods, Array-to-fMRI similarity). Briefly, for a given vertex in the human brain, we took the trial-then-subject-averaged response vector and calculated the Pearson correlation with each individual unit’s trial-averaged response vector. We also calculated a joint reliability for each unit-to-vertex combination, based on the unit’s trial-wise split-half reliability and the vertex’s subject-wise split-half reliability. The macaque array-to-human brain similarity was then computed by averaging for each vertex the Pearson correlations separately across V1, CIT, and AIT units, and normalizing by the noise ceiling given by the unit-averaged joint reliability values.
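As a rough sketch of this computation for a single vertex: the geometric-mean combination of unit and vertex reliabilities into a joint reliability is an assumption on our part (this excerpt does not spell out the combination rule), and all function names and the sanity-check data are hypothetical.

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation between two 1-D response vectors."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def array_to_vertex_similarity(unit_resps, unit_rel, vertex_resp, vertex_rel):
    """Noise-ceiling-normalized similarity between one array and one vertex.

    unit_resps:  (n_units, n_images) trial-averaged unit responses
    unit_rel:    (n_units,) corrected trial-wise split-half reliabilities
    vertex_resp: (n_images,) trial-then-subject-averaged vertex response
    vertex_rel:  scalar corrected subject-wise split-half reliability

    The joint unit-vertex reliability is taken here as the geometric mean
    sqrt(r_unit * r_vertex) -- an assumed choice, not stated in the text.
    """
    rs = np.array([pearson(u, vertex_resp) for u in unit_resps])
    joint = np.sqrt(np.clip(unit_rel, 0.0, None) * max(vertex_rel, 0.0))
    return rs.mean() / joint.mean()

# sanity check: noiseless units that match the vertex give a similarity of 1
imgs = np.sin(np.arange(20.0))
sim = array_to_vertex_similarity(np.stack([imgs, imgs]),
                                 np.array([1.0, 1.0]), imgs, 1.0)
```

The normalization means that a unit whose correlation with a vertex reaches the joint reliability (the noise ceiling) scores 1, regardless of how noisy either measurement is.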
Figure 2. Macaque array-to-human brain similarity maps.
a. Illustration of how we computed the similarity between array unit responses and human fMRI vertex responses (subject-averaged in fsaverage space). For each vertex, the unit-to-vertex correlations were averaged across units and then normalized by the Spearman-Brown-corrected joint reliability values, also averaged across units. b-d. Full right hemisphere human brain maps of similarities with macaque V1 array responses (b), CIT array responses (c), and AIT array responses (d). Vertices for which the noise ceiling (unit-averaged joint unit-vertex reliability) was below 0.2 were masked out. White outlines indicate clusters of vertices for which the fraction of subjects with a given ROI present exceeds 0.33.
This analysis yielded three macaque array-to-human brain similarity maps, based on the right human hemisphere: one for V1 units (Fig. 2b), one for CIT units (Fig. 2c), and one for AIT units (Fig. 2d). The array-to-brain similarity maps showed a smooth, graded range of negative to positive correlation values, extending well beyond visual areas. This is consistent with a widespread, correlated engagement of the cortex in response to these images. The map based on the macaque V1 array showed similarity biased towards earlier visual areas, although it also showed marked similarity with the higher human face areas. The maps based on macaque CIT and AIT arrays, in contrast, were biased towards higher human visual areas, particularly face regions, as well as regions beyond visual cortex. The perceptual difference between the maps based on CIT and AIT was less clear and required a more direct comparison.
To better assess these differences, we Fisher Z-transformed the maps (see Methods, Fisher Z-transformation) and subtracted the map based on CIT from the map based on AIT (Fig. 3a). We further masked out vertices that were only weakly or negatively correlated with both AIT and CIT (Pearson’s r / reliability < 0.1). In this AIT minus CIT map, red hues indicate vertices in the human brain that responded more like AIT units than like CIT units. Blue hues indicate vertices that responded more like CIT. This visualization more clearly highlights the fine-grained differences between response similarities with AIT and with CIT. As a sanity check, we subtracted a map based on all IT units from a map based on V1 units (Fig. 3b). This V1 minus IT map confirmed that vertices in early visual cortex responded more like V1 units (red hues), whereas vertices in higher visual cortex and beyond responded more like IT units (blue hues).
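The Fisher Z-transform and masking step can be sketched as follows; the 0.1 threshold mirrors the cutoff described above, while the function names and the three example vertices are hypothetical.

```python
import numpy as np

def fisher_z(r):
    """Fisher Z-transform; clipping guards against |r| = 1."""
    return np.arctanh(np.clip(r, -0.999999, 0.999999))

def ait_minus_cit_map(r_ait, r_cit, mask_thresh=0.1):
    """Difference map z(r_AIT) - z(r_CIT) across vertices.

    r_ait, r_cit: per-vertex noise-ceiling-normalized correlations.
    Vertices below mask_thresh for BOTH arrays are set to NaN,
    mirroring the masking rule described in the text.
    """
    r_ait = np.asarray(r_ait, dtype=float)
    r_cit = np.asarray(r_cit, dtype=float)
    diff = fisher_z(r_ait) - fisher_z(r_cit)
    diff[(r_ait < mask_thresh) & (r_cit < mask_thresh)] = np.nan
    return diff

# three hypothetical vertices: AIT-like, masked out, CIT-like
diff = ait_minus_cit_map([0.5, 0.05, 0.3], [0.2, 0.02, 0.4])
```

The Z-transform makes differences between correlations comparable across the range of r, so a positive value indicates a vertex responding more like AIT and a negative value one responding more like CIT.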
Figure 3. Monkey AIT matched higher human visual (face) areas better than did monkey CIT.
a. Full right hemisphere human brain map showing the difference in similarity with macaque AIT and CIT array responses (subject-averaged in fsaverage space). Red hues indicate higher similarity to AIT units than to CIT units. Blue hues indicate higher similarity to CIT units than to AIT units. b. Human brain map showing the difference in similarity with macaque V1 and IT array responses (red hues: higher similarity to V1 units; blue hues: higher similarity to IT units). c. AIT minus CIT array-to-brain similarity values averaged for the ROI contours indicated in (a). Error bars indicate 95% CIs based on a two-sample t-test comparing AIT units (N=116) to CIT units (N=48). Filled markers denote p<0.05. Insets show the consistency of this trend across monkeys by repeating the analysis while systematically leaving out one monkey at a time (akin to jackknife resampling).
Visual inspection of the response similarities with the target human face regions indicates that human OFA and FFA-1 responses were more like macaque CIT than AIT (predominantly blue hues), whereas human ATL responses were more like macaque AIT than CIT (predominantly red hues). This was less clear for FFA-2: some vertices corresponded better to CIT, others to AIT. For a quantitative analysis, we averaged unit-to-vertex correlations across vertices for which the fraction of subjects that had a given ROI present exceeded 0.33. This analysis confirmed the posterior-to-anterior trend observed through visual inspection, culminating in the largest AIT minus CIT difference in ATL (Fig. 3c).
Since FFA vertices did not show a higher similarity to AIT than to CIT units, these results are inconsistent with scenario 1, in which the macaque AIT face region AL corresponds to human FFA (and the CIT face region ML to human OFA). However, thus far these results remain consistent with both scenarios 2 and 3. Next, we used each human subject’s individually localized ROIs in both the left and right hemispheres to determine whether macaque AIT units are better aligned with human FFA-2 (scenario 2) or with ATL (scenario 3).
Macaque CIT maps best onto human FFA, and AIT onto ATL
To further arbitrate between scenarios 2 and 3, and to account for variability across human subjects, we computed the similarity of monkey array units to each human subject’s individually defined ROIs. The method for computing monkey array-to-human brain similarity was analogous to Fig. 2a but based on the fMRI voxel responses (trial-averaged beta values) in each human subject’s native space. For each human subject, we then averaged array-to-brain similarity values across all voxels within each ROI (see Fig. 4a).
Figure 4. Monkey AIT matched ATL better than it matched FFA.

a. Average array-to-ROI similarity. Grey markers represent individual human subjects. Error bars indicate 95% CIs based on a one-sample t-test (N=7 human subjects). b. Regression interaction terms between monkey array (AIT vs. CIT) and a one-step transition in the human ROI cascade (ATL vs. FFA-2, FFA-2 vs. FFA-1, or FFA-1 vs. OFA). For example, for the interaction with ATL vs. FFA-2 (indicated as FFA-2 -> ATL in the figure), the positive value means that the increase in array-to-brain similarity from human FFA-2 to ATL is larger for macaque AIT than for CIT. Error bars indicate 95% CIs based on a paired-sample t-test (N=7 human subjects). Filled markers denote p<0.05. Insets show the consistency of this trend across monkeys by repeating the analysis while systematically leaving out one monkey at a time (akin to jackknife resampling).
For both CIT arrays and AIT arrays, there was no significant difference in similarity to human brain responses in FFA-2 versus FFA-1 (CIT: t(6)=0.61, p=0.5639; AIT: t(6)=1.11, p=0.3094). There was also no statistically significant interaction between AIT versus CIT arrays and the difference in similarity to human brain responses in FFA-2 versus FFA-1 (t(6)=2.35, p=0.0571; see Fig. 4b). These results are inconsistent with the notion that macaque ML and AL are best distinguished in how well they match human FFA-1 and FFA-2 (scenario 2).
In contrast, there was a significant interaction between AIT versus CIT arrays and the difference in similarity to human brain responses in ATL versus FFA-2 (t(6)=4.17, p=0.0059). Indeed, when pooling similarity to FFA-1 and FFA-2, we found for AIT arrays that similarity to ATL was significantly higher than similarity to FFA (t(6)=2.71, p=0.0353) but not for CIT arrays (t(6)=0.55, p=0.6010). Similarity to OFA was significantly lower than similarity to FFA for both AIT arrays (t(6)=3.81, p=0.0089) and CIT arrays (t(6)=2.66, p=0.0374; confirming the rejection of scenario 1). Thus, overall, these results support the notion that macaque AL is functionally best aligned to human ATL, and macaque ML to human FFA (scenario 3).
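The interaction tests above can be approximated with a paired comparison of per-subject similarity differences. This is a plain paired t statistic rather than the paper's regression formulation, so it is a simplification; the data layout, names, and per-subject values are all hypothetical.

```python
import numpy as np

def paired_t(x, y):
    """Paired-sample t statistic over subjects (df = n - 1)."""
    d = np.asarray(x) - np.asarray(y)
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

def interaction(sim, rois=("ATL", "FFA-2")):
    """Per-subject interaction term: the FFA-2 -> ATL similarity
    increase for AIT minus the same increase for CIT."""
    upper, lower = rois
    return ((sim[("AIT", upper)] - sim[("AIT", lower)])
            - (sim[("CIT", upper)] - sim[("CIT", lower)]))

# hypothetical per-subject (N=7) array-to-ROI similarities in which the
# FFA-2 -> ATL step is larger for AIT than for CIT (scenario 3 pattern)
rng = np.random.default_rng(1)
def noise():
    return 0.02 * rng.standard_normal(7)
sim = {("AIT", "ATL"): 0.50 + noise(), ("AIT", "FFA-2"): 0.30 + noise(),
       ("CIT", "ATL"): 0.30 + noise(), ("CIT", "FFA-2"): 0.32 + noise()}
inter = interaction(sim)
t = paired_t(sim[("AIT", "ATL")] - sim[("AIT", "FFA-2")],
             sim[("CIT", "ATL")] - sim[("CIT", "FFA-2")])
```

A positive mean interaction with a large t, as in this toy example, corresponds to the pattern reported for the ATL versus FFA-2 transition (t(6)=4.17).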
Discussion
We introduce a general framework for identifying fine-grained functional correspondence across species by leveraging population-level neural responses to naturalistic stimuli. Applying this approach to the ventral face patch system, we demonstrate that rich, graded response patterns provide a robust and scalable basis for mapping functional correspondence across species and measurement modalities, without requiring predefined hypotheses or hand-picked stimuli. By directly comparing macaque electrophysiology with human fMRI responses to a shared set of natural scenes, we identified a functional alignment of face-selective areas in which macaque ML corresponds to human FFA and AL to ATL. Despite being derived purely from neural responses, this mapping was consistent with predictions from anatomical topology, suggesting that large-scale cortical layout remains a strong predictor of functional organization across primates. These findings illustrate that neural responses to natural scenes can serve as a reliable basis for cross-species functional inference, offering a flexible, hypothesis-agnostic alternative to traditional methods for comparative neuroscience.
These results help resolve a long-standing ambiguity in the posterior-to-anterior cross-species alignment of face-selective regions. Previous accounts have proposed three competing scenarios: (1) ML aligns with OFA and AL with FFA; (2) ML aligns with FFA-1 and AL with FFA-2; (3) ML aligns with FFA and AL with ATL. Using response patterns across a large set of natural images, we found that recordings from macaque CIT (targeting ML) correlated best with human FFA (both posterior and anterior), with no additional increase for ATL, thereby rejecting scenario 1. In contrast, recordings from macaque AIT (targeting AL) correlated most strongly with human ATL, ruling out scenario 2. Finally, our analyses showed that ML and AL are best distinguished by their correspondence to anterior FFA versus ATL. Together, these findings support scenario 3: a functional alignment of ML with FFA and AL with ATL. This alignment suggests a more anterior human homologue of AL than is often assumed, underscoring the importance of considering broad representational properties in higher-order cortex.
One potential explanation for previous discrepancies based on cortical topography35–37 is that evolutionary changes may reshape cortical organization38,39,45. Such changes can undermine the reliability of using local topography to infer functional correspondence. On the other hand, our conclusions do converge with a posterior-to-anterior mapping derived from full-brain anatomical warping33, suggesting that more holistic anatomical approaches may better capture conserved organizational principles. Previous studies have also compared a range of functional properties, such as mirror-symmetric tuning, the face inversion effect, face familiarity, and face selectivity, often using small sets of diagnostic stimuli11,12,37,40,46,47. While these approaches are attractive for their simplicity and interpretability, they are constrained by their reliance on small and highly curated stimulus sets and may miss broader feature tuning. Moreover, recent work has suggested that observations of mirror-symmetric tuning in the human face selective areas may originate from low-level confounds and analysis choices42,43. Importantly, these observations highlight a common theme: broader approaches, whether leveraging full-brain anatomical alignment or a comprehensive range of natural stimuli, converge on a consistent cross-species mapping, whereas narrower methods based on local topography or curated stimuli can yield divergent conclusions48.
More broadly, our findings illustrate how data-driven approaches using naturalistic stimuli offer a powerful tool for fine-grained functional alignment well beyond the domain of face-selective regions. Traditional methods that rely on small, highly curated stimulus sets risk drawing inaccurate conclusions when the chosen stimuli fail to capture the richness of underlying neural representations. Under stimulus-poor conditions, the inclusion or exclusion of a single image can drastically impact observed response profiles. For example, while face-selective regions in both species responded more strongly to faces on average, the specific images that elicited the strongest responses (Fig. 1, “best” images) differed across species—highlighting nuanced differences in feature tuning that could be either amplified or overlooked with only a few stimuli. These challenges are exacerbated when the stimuli are overly constrained by a conceptual notion such as discrete stimulus categories49. Growing evidence suggests that high-level visual cortex, including face-selective cells, is best characterized by an integrated feature space, where category selective responses are carried by domain-general features29,30,50–53. By sampling broadly from the natural image space, our approach captures these representational subtleties without imposing categorical assumptions. This strategy enables a more flexible, scalable, and hypothesis-agnostic framework for assessing functional alignment, particularly in higher-order areas where feature tuning is complex or poorly understood.
Establishing cross-species functional correspondence in primates is a long-standing challenge, complicated by issues such as differences in measurement techniques7. The advent of fMRI, usable in both monkeys and humans, has enabled an explosion of comparative studies inferring correspondence by comparing fMRI maps of individual functional properties or stimulus contrasts7,8,11,12,33,35,37–39,46,54–59. Our approach demonstrates that direct comparisons between electrophysiological and fMRI data can contribute meaningfully to this line of research. By leveraging population-level neural responses and correlating them with human fMRI responses at each vertex, we were able to delineate functional alignment without requiring parallel imaging experiments in both species. Although the physiological differences between spiking and BOLD signals remain, our results show that image-level tuning in multi-unit activity aligns meaningfully with human fMRI responses60–62. Indeed, the array-to-brain correlation maps show the highest similarity in regions consistent with each array’s location: human early visual cortex for the macaque V1 array, and ventral temporal cortex (near OFA, FFA and ATL) for the IT arrays. Furthermore, our results converge with previously hypothesized homologies derived from anatomical warping of fMRI-localized macaque face patches onto a human flat map33,34. Overall, the fact that our framework captured meaningful alignment across both coarse and fine levels of cortical organization – from early versus high-level visual areas to distinctions among neighboring face regions – suggests that this approach can generalize to systems beyond the ventral stream, wherever shared representations exist across species.
Our study has several limitations. First, we recorded from only one array location per monkey, which means that potential inter-individual differences are confounded with between-area differences. To mitigate this, we repeated the main analyses while systematically leaving out one monkey at a time, confirming that our results are not driven by any single individual. Second, although we focused on two array locations targeting ML and AL, it remains to be seen whether similar cross-species correspondences can be established in other high-level visual areas or cognitive systems. Third, while natural scenes cover a rich and diverse stimulus space, they do not capture the full range of real-world dynamics and behaviors that shape neural activity. Future work could extend this approach by incorporating dynamic stimuli and behavioral context, by sampling simultaneously and more densely from a larger swath of IT or even multiple visual areas9, or by applying monkey fMRI to enable full-brain functional warping. Finally, while our approach addresses the question of functional alignment, it does not, by itself, establish phylogenetic homology in the strict evolutionary sense. Such claims require complementary evidence from cytoarchitecture, connectivity, and developmental lineage, which is often inaccessible in humans or for brain regions with ambiguous structural boundaries. In such contexts, our method provides functional evidence to inform hypotheses about homology.
In conclusion, our results point toward a flexible, data-driven framework for establishing functional correspondences across species and modalities. By using naturalistic stimuli to capture rich, high-dimensional and graded response patterns, this approach overcomes the limitations of traditional alignment methods that rely on narrow stimulus sets or predefined functional heuristics. Although we demonstrated this in the ventral visual face patch system, this method is broadly applicable, offering a general framework for establishing cross-species alignment in domains where naturalistic data can be collected. As technological advances continue to enable more large-scale and brain-wide recordings in non-human primates9, frameworks that scale with this complexity and support cross-species comparisons of increasingly abstract, hard-to-define functions will be essential. Our findings highlight the value of naturalistic, data-driven alignment as a foundation for such efforts, offering both a principled path toward unified models of brain function and a powerful complement to anatomical and developmental approaches in the study of brain evolution.
Methods
NSD data
The human data analyzed in this study were obtained from the Natural Scenes Dataset (NSD), a large-scale fMRI dataset comprising whole-brain, high-resolution measurements collected from eight subjects who viewed natural scenes while engaged in a continuous recognition task. Details regarding data collection and preprocessing can be found in the original publication44. Face-selective regions were defined based on the results of the category-selective functional localizer included in the NSD experiments. Labels for these regions were assigned by the NSD authors. For the subject-averaged human brain maps, the provided probabilistic localizer maps were used with a threshold of 0.33 (i.e., only vertices corresponding to a face-selective region in at least one-third of the subjects were included). We additionally defined each subject’s face-selective ROIs individually, based on their native-space face localizer t-maps, using a threshold of t > 3. One subject was excluded because no anterior face region could be defined based on the functional localizer, resulting in a final sample of seven subjects. In addition, the mid temporal lobe face area, which could be defined in only two of the remaining subjects, was excluded.
Animals and arrays
Five adult male macaques were used in this experiment: four rhesus macaques (Macaca mulatta; initials A, OG, P, and B1—referred to as M1, M2, M3, and M5, respectively) aged 8–16 years and one pigtailed macaque (Macaca nemestrina; initial R, referred to as M4), aged 13 years. All five monkeys were implanted with chronic microelectrode arrays: one in V1 and four in the lower bank of the superior temporal sulcus. Specifically, two monkeys were implanted with 32-channel floating microelectrode arrays (FMA; Microprobes for Life Sciences, Gaithersburg, MD): V1 of M1 and CIT of M3. Three monkeys were implanted with 64-channel NiCr microwire bundle arrays (Microprobes for Life Sciences, Gaithersburg, MD)63: CIT of M2 and AIT of M4 and M5. The target location for the face patch arrays was identified using fMRI face localizers (M3, M4, M5) or anatomical landmarks (M2; STS ‘bumps’: Arcaro et al.64). All procedures were approved by the Harvard Medical School Institutional Animal Care and Use Committee and conformed to National Institutes of Health guidelines provided in the Guide for the Care and Use of Laboratory Animals.
Experiments
During recordings, the monkeys performed a fixation task in which they were rewarded with drops of juice for holding their gaze on a spot at the center of a 53-cm LCD monitor. Gaze position was monitored using an ISCAN system (ISCAN, Woburn, MA). The experiments were controlled with MonkeyLogic (https://monkeylogic.nimh.nih.gov/). During fixation, images were presented at a size of 8 visual degrees and a rate of 100 ms on and 100 ms off. The images were presented at the center of the mapped receptive field, with a jitter of ±1 to ±2 visual degrees for IT arrays. An average of 32 trials were presented per stimulus.
Stimuli
The stimulus set consisted of a subset of 1000 images that were shared across human subjects in the NSD dataset44. The NSD image set contains a rich variety of photographs taken from Microsoft’s Common Objects in Context (MS COCO65) image database. For this paper, we focus on 700 images for which there were at least two presentations for each human subject, to be able to compute a split-half reliability using the exact same images for each subject.
fMRI-guided array targeting
The target location for the array placement for three of the four face-patch monkeys (M3, M4, M5) was identified using an fMRI face localizer. The details of the fMRI experiments are described in Arcaro and Livingstone66 and will only be summarized here briefly. The monkeys were scanned using custom-made four-channel surface coils (by A. Maryam at the Martinos Imaging Center), in a 3T TIM Trio scanner with an AC88 gradient insert. Functional images were acquired using a repetition time = 2 s, echo time = 13 ms, flip angle = 72°, iPAT = 2, matrix size = 96 × 96, resolution = 1-mm isotropic, and 67 contiguous sagittal slices. To enhance signal-to-noise ratio and increase contrast67, the monkeys were injected with 12 mg/kg of monocrystalline iron oxide nanoparticles (Feraheme, AMAG Pharmaceuticals, Cambridge, MA, USA) before each scanning session. The face localizer consisted of randomly shuffled 20-s blocks of face or nonface objects, interleaved with 20 s of a neutral gray screen.
Data analysis
Firing rates
To compute average firing rates, we counted the number of spikes in a 200 ms window following stimulus presentation onset. The response latency, which ranged from 65 ms to 125 ms, was selected based on a visual assessment of the global average (across channels and stimuli) peristimulus time histogram. Following previous procedures29,30, we used an a priori response reliability criterion of >0.4 to include only visually driven, selective neural units for further analysis. This yielded 11 multiunit sites from V1 recordings, 48 multiunit sites from CIT recordings, and 116 multiunit sites from AIT recordings.
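The spike-count step can be sketched as follows, a minimal NumPy example in which the helper name, latency, and example spike times are illustrative (the latency used for each array was chosen by visual assessment, as described above):

```python
import numpy as np

def firing_rates(spike_times, onsets, latency=0.1, window=0.2):
    """Count spikes in a `window`-long bin starting `latency` s after each
    stimulus onset, and convert counts to rates (spikes/s).

    spike_times : spike times (s) for one unit
    onsets      : stimulus onset times (s)
    """
    spike_times = np.sort(np.asarray(spike_times, dtype=float))
    starts = np.asarray(onsets, dtype=float) + latency
    # Spikes falling in [start, start + window) per trial, via searchsorted.
    counts = (np.searchsorted(spike_times, starts + window)
              - np.searchsorted(spike_times, starts))
    return counts / window

# Two trials: windows [0.1, 0.3) and [0.3, 0.5) contain 2 and 3 spikes.
rates = firing_rates(spike_times=[0.12, 0.15, 0.31, 0.33, 0.35],
                     onsets=[0.0, 0.2], latency=0.1, window=0.2)
```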
Response reliability
We determined a trial-wise split-half reliability per neural unit and per voxel in each NSD subject brain, and we determined a subject-wise split-half reliability per vertex in the NSD ‘fsaverage’ space. To obtain the trial-wise split-half reliability, we first computed the correlation r between the average response vector based on odd trials and the one based on even trials. We then applied the Spearman-Brown correction to obtain a reliability ρ = 2r / (1 + r).
To obtain the subject-wise split-half reliability, r was computed as the correlation between the trial-and-subject-averaged response vectors of the two halves, for every possible way to split the NSD subjects in two, and averaged across splits.
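The trial-wise computation above can be expressed in a short NumPy sketch (the function name is hypothetical; the odd/even split across trials follows the description in the text):

```python
import numpy as np

def split_half_reliability(responses):
    """Trial-wise split-half reliability for one unit/voxel.

    responses : (n_trials, n_stimuli) array. Correlate the even-indexed and
    odd-indexed trial-averaged response vectors, then apply the
    Spearman-Brown correction rho = 2r / (1 + r).
    """
    responses = np.asarray(responses, dtype=float)
    half_a = responses[0::2].mean(axis=0)  # trials 0, 2, 4, ...
    half_b = responses[1::2].mean(axis=0)  # trials 1, 3, 5, ...
    r = np.corrcoef(half_a, half_b)[0, 1]
    return 2 * r / (1 + r)
```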
Face selectivity
We assessed face selectivity by computing the face versus no-face d′ sensitivity index comparing trial-averaged responses to a subset of images with prominent primate faces (N = 26 out of all 700 images) versus all images without faces or animals (N = 255 out of all 700 images):

d′ = (μ_faces − μ_nonfaces) / √((σ²_faces + σ²_nonfaces) / 2),

where μ_faces and μ_nonfaces are the across-stimulus means for faces and non-faces, and σ_faces and σ_nonfaces are the across-stimulus SDs.
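The d′ index can be computed with a minimal NumPy helper (the function name is hypothetical, and the use of sample SDs with ddof=1 is an assumption, since the SD convention is not stated):

```python
import numpy as np

def dprime(face_resp, nonface_resp):
    """Face-selectivity index: difference of across-stimulus means divided
    by the root-mean-square of the two across-stimulus SDs."""
    face_resp = np.asarray(face_resp, dtype=float)
    nonface_resp = np.asarray(nonface_resp, dtype=float)
    mu_f, mu_nf = face_resp.mean(), nonface_resp.mean()
    var_f = face_resp.var(ddof=1)      # sample variance (assumed)
    var_nf = nonface_resp.var(ddof=1)
    return (mu_f - mu_nf) / np.sqrt((var_f + var_nf) / 2)
```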
Array-to-fMRI similarity
We quantified the similarity between microelectrode array responses and fMRI vertex/voxel responses as follows. For each fMRI vertex/voxel v and array multiunit u, we computed the Pearson correlation r_uv between their respective response vectors across images. For the flat-maps, we then averaged the unit-to-vertex correlation values across all array units to obtain a single correlation value for each vertex v. For the ROIs, we averaged across all units and voxels to obtain a single correlation value for each ROI.
To estimate a noise ceiling, we computed joint reliability values as √(ρ_u · ρ_v), where ρ_u and ρ_v represent the response reliability of each unit and each vertex/voxel, respectively (see Methods, Response reliability). Similar to the correlation values, we averaged these joint reliability values across units (and voxels in the case of ROIs) to obtain a single noise ceiling value for each vertex or each ROI.
Finally, we normalized each array-to-vertex/ROI correlation by its corresponding noise ceiling to obtain a single array-to-vertex/ROI similarity value.
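These three steps can be sketched for a single vertex as follows, a NumPy example assuming the joint reliability is the geometric mean √(ρ_u · ρ_v) of the unit and vertex reliabilities (function and argument names are illustrative):

```python
import numpy as np

def array_to_vertex_similarity(unit_resp, vertex_resp, unit_rel, vertex_rel):
    """Noise-ceiling-normalized similarity between an array and one vertex.

    unit_resp   : (n_units, n_images) trial-averaged responses
    vertex_resp : (n_images,) response vector of one vertex
    unit_rel    : (n_units,) split-half reliabilities of the units
    vertex_rel  : split-half reliability of the vertex (scalar)
    """
    unit_resp = np.asarray(unit_resp, dtype=float)
    vertex_resp = np.asarray(vertex_resp, dtype=float)
    # Pearson correlation of each unit with the vertex, across images.
    u = unit_resp - unit_resp.mean(axis=1, keepdims=True)
    v = vertex_resp - vertex_resp.mean()
    r = (u @ v) / (np.linalg.norm(u, axis=1) * np.linalg.norm(v))
    # Joint reliability (noise ceiling) per unit, averaged like r.
    ceiling = np.sqrt(np.asarray(unit_rel, dtype=float) * vertex_rel)
    return r.mean() / ceiling.mean()
```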
Fisher Z-transformation
The correlation-based array-to-fMRI similarity values are inherently bounded between −1 and 1 and exhibit a skewed distribution, especially near the extremes. Therefore, for statistical comparison between similarity values associated with a given vertex/ROI, we applied the Fisher Z-transformation, z = arctanh(r) = ½ ln((1 + r) / (1 − r)), to normalize the distribution and stabilize variance across the range of similarity values68.
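In NumPy, the Fisher Z-transformation is simply the inverse hyperbolic tangent (the helper name is illustrative):

```python
import numpy as np

def fisher_z(r):
    """Fisher Z-transformation of a correlation:
    z = arctanh(r) = 0.5 * log((1 + r) / (1 - r))."""
    return np.arctanh(np.asarray(r, dtype=float))
```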
Acknowledgments
Funding:
This work was supported by an Alice and Joseph Brooks Fund Postdoctoral Fellowship (to K.V.), a Gordon Fellowship (to S.S.), and NIH grant R01 EY025670 (to M.S.L.).
Footnotes
Competing interests: The authors declare no competing interests.
References
- 1. Malach R., Reppas J.B., Benson R.R., Kwong K.K., Jiang H., Kennedy W.A., Ledden P.J., Brady T.J., Rosen B.R., and Tootell R.B. (1995). Object-related activity revealed by functional magnetic resonance imaging in human occipital cortex. Proceedings of the National Academy of Sciences 92, 8135–8139. 10.1073/pnas.92.18.8135.
- 2. Kourtzi Z., and Kanwisher N. (2000). Cortical Regions Involved in Perceiving Object Shape. The Journal of Neuroscience 20, 3310–3318. 10.1523/JNEUROSCI.20-09-03310.2000.
- 3. Kanwisher N., McDermott J., and Chun M.M. (1997). The Fusiform Face Area: A Module in Human Extrastriate Cortex Specialized for Face Perception. The Journal of Neuroscience 17, 4302–4311. 10.1523/JNEUROSCI.17-11-04302.1997.
- 4. Downing P.E., Jiang Y., Shuman M., and Kanwisher N. (2001). A Cortical Area Selective for Visual Processing of the Human Body. Science 293, 2470–2473. 10.1126/science.1063414.
- 5. Grill-Spector K., Kushnir T., Hendler T., Edelman S., Itzchak Y., and Malach R. (1998). A sequence of object-processing stages revealed by fMRI in the human occipital lobe. Hum Brain Mapp 6, 316–328.
- 6. Vinberg J., and Grill-Spector K. (2008). Representation of Shapes, Edges, and Surfaces Across Multiple Cues in the Human Visual Cortex. J Neurophysiol 99, 1380–1393. 10.1152/jn.01223.2007.
- 7. Sereno M.I., and Tootell R.B. (2005). From monkeys to humans: what do we now know about brain homologies? Curr Opin Neurobiol 15, 135–144. 10.1016/j.conb.2005.03.014.
- 8. Tootell R.B.H., Tsao D., and Vanduffel W. (2003). Neuroimaging Weighs In: Humans Meet Macaques in “Primate” Visual Cortex. The Journal of Neuroscience 23, 3981–3989. 10.1523/JNEUROSCI.23-10-03981.2003.
- 9. Trautmann E.M., Hesse J.K., Stine G.M., Xia R., Zhu S., O’Shea D.J., Karsh B., Colonell J., Lanfranchi F.F., Vyas S., et al. (2023). Large-scale high-density brain-wide neural recording in nonhuman primates. Preprint. 10.1101/2023.02.01.526664.
- 10. Tsao D.Y., Freiwald W.A., Tootell R.B.H., and Livingstone M.S. (2006). A cortical region consisting entirely of face-selective cells. Science 311, 670–674. 10.1126/science.1119983.
- 11. Tsao D.Y., Moeller S., and Freiwald W.A. (2008). Comparing face patch systems in macaques and humans. Proceedings of the National Academy of Sciences 105, 19514–19519. 10.1073/pnas.0809662105.
- 12. Yovel G., and Freiwald W.A. (2013). Face recognition systems in monkey and human: are they the same thing? F1000Prime Rep 5. 10.12703/P5-10.
- 13. Freiwald W.A., Tsao D.Y., and Livingstone M.S. (2009). A face feature space in the macaque temporal lobe. Nat Neurosci 12, 1187–1196. 10.1038/nn.2363.
- 14. Moeller S., Crapse T., Chang L., and Tsao D.Y. (2017). The effect of face patch microstimulation on perception of faces and objects. Nat Neurosci 20, 743–752. 10.1038/nn.4527.
- 15. Grimaldi P., Saleem K.S., and Tsao D. (2016). Anatomical Connections of the Functionally Defined “Face Patches” in the Macaque Monkey. Neuron 90, 1325–1342. 10.1016/j.neuron.2016.05.009.
- 16. Freiwald W.A., and Tsao D.Y. (2010). Functional Compartmentalization and Viewpoint Generalization Within the Macaque Face-Processing System. Science 330, 845–851. 10.1126/science.1194908.
- 17. Chang L., and Tsao D.Y. (2017). The Code for Facial Identity in the Primate Brain. Cell 169, 1013–1028.e14. 10.1016/j.cell.2017.05.011.
- 18. Sadagopan S., Zarco W., and Freiwald W.A. (2017). A causal relationship between face-patch activity and face-detection behavior. Elife 6. 10.7554/eLife.18558.
- 19. Azadi R., Lopez E., Taubert J., Patterson A., and Afraz A. (2024). Inactivation of face-selective neurons alters eye movements when free viewing faces. Proceedings of the National Academy of Sciences 121. 10.1073/pnas.2309906121.
- 20. Afraz S.-R., Kiani R., and Esteky H. (2006). Microstimulation of inferotemporal cortex influences face categorization. Nature 442, 692–695. 10.1038/nature04982.
- 21. Afraz A., Boyden E.S., and DiCarlo J.J. (2015). Optogenetic and pharmacological suppression of spatial clusters of face neurons reveal their causal role in face gender discrimination. Proc Natl Acad Sci U S A 112, 6730–6735. 10.1073/pnas.1423328112.
- 22. Waidmann E.N., Koyano K.W., Hong J.J., Russ B.E., and Leopold D.A. (2022). Local features drive identity responses in macaque anterior face patches. Nat Commun 13, 5592. 10.1038/s41467-022-33240-w.
- 23. Khandhadia A.P., Murphy A.P., Romanski L.M., Bizley J.K., and Leopold D.A. (2021). Audiovisual integration in macaque face patch neurons. Curr Biol 31, 1826–1835.e3. 10.1016/j.cub.2021.01.102.
- 24. Premereur E., Taubert J., Janssen P., Vogels R., and Vanduffel W. (2016). Effective Connectivity Reveals Largely Independent Parallel Networks of Face and Body Patches. Current Biology 26, 3269–3279. 10.1016/j.cub.2016.09.059.
- 25. Taubert J., Van Belle G., Vanduffel W., Rossion B., and Vogels R. (2015). The effect of face inversion for neurons inside and outside fMRI-defined face-selective cortical regions. J Neurophysiol 113, 1644–1655. 10.1152/jn.00700.2014.
- 26. Taubert J., Goffaux V., Van Belle G., Vanduffel W., and Vogels R. (2016). The impact of orientation filtering on face-selective neurons in monkey inferior temporal cortex. Sci Rep 6, 21189. 10.1038/srep21189.
- 27. Bardon A., Xiao W., Ponce C.R., Livingstone M.S., and Kreiman G. (2022). Face neurons encode nonsemantic features. Proceedings of the National Academy of Sciences 119. 10.1073/pnas.2118705119.
- 28. Ponce C.R., Xiao W., Schade P.F., Hartmann T.S., Kreiman G., and Livingstone M.S. (2019). Evolving Images for Visual Neurons Using a Deep Generative Network Reveals Coding Principles and Neuronal Preferences. Cell 177, 999–1009.e10. 10.1016/j.cell.2019.04.005.
- 29. Sharma S., Vinken K., Jagadeesh A.V., and Livingstone M.S. (2024). Face cells encode object parts more than facial configuration of illusory faces. Nat Commun 15, 9879. 10.1038/s41467-024-54323-w.
- 30. Vinken K., Prince J.S., Konkle T., and Livingstone M.S. (2023). The neural code for “face cells” is not face-specific. Sci Adv 9. 10.1126/sciadv.adg1736.
- 31. Aparicio P.L., Issa E., and DiCarlo J.J. (2016). Neurophysiological organization of the middle face patch in macaque inferior temporal cortex. J Neurosci 36, 12729–12745. 10.1523/JNEUROSCI.0237-16.2016.
- 32. Vinken K., and Vogels R. (2019). A behavioral face preference deficit in a monkey with an incomplete face patch system. Neuroimage 189, 415–424. 10.1016/j.neuroimage.2019.01.043.
- 33. Rajimehr R., Young J.C., and Tootell R.B.H. (2009). An anterior temporal face patch in human cortex, predicted by macaque maps. Proceedings of the National Academy of Sciences 106, 1995–2000. 10.1073/pnas.0807304106.
- 34. Tsao D.Y., Freiwald W.A., Knutsen T.A., Mandeville J.B., and Tootell R.B.H. (2003). Faces and objects in macaque cerebral cortex. Nat Neurosci 6, 989–995. 10.1038/nn1111.
- 35. Lafer-Sousa R., Conway B.R., and Kanwisher N.G. (2016). Color-Biased Regions of the Ventral Visual Pathway Lie between Face- and Place-Selective Regions in Humans, as in Macaques. The Journal of Neuroscience 36, 1682–1697. 10.1523/JNEUROSCI.3164-15.2016.
- 36. Caspari N., Popivanov I.D., De Mazière P.A., Vanduffel W., Vogels R., Orban G.A., and Jastorff J. (2014). Fine-grained stimulus representations in body selective areas of human occipito-temporal cortex. Neuroimage 102, 484–497. 10.1016/j.neuroimage.2014.07.066.
- 37. Pinsk M.A., Arcaro M., Weiner K.S., Kalkus J.F., Inati S.J., Gross C.G., and Kastner S. (2009). Neural Representations of Faces and Body Parts in Macaque and Human Cortex: A Comparative fMRI Study. J Neurophysiol 101, 2581–2600. 10.1152/jn.91198.2008.
- 38. Orban G.A., Van Essen D., and Vanduffel W. (2004). Comparative mapping of higher visual areas in monkeys and humans. Trends Cogn Sci 8, 315–324. 10.1016/j.tics.2004.05.009.
- 39. Meyer E.E., Martynek M., Kastner S., Livingstone M.S., and Arcaro M.J. (2025). Expansion of a conserved architecture drives the evolution of the primate visual cortex. Proceedings of the National Academy of Sciences 122. 10.1073/pnas.2421585122.
- 40. Dubois J., de Berker A.O., and Tsao D.Y. (2015). Single-Unit Recordings in the Macaque Face Patch System Reveal Limitations of fMRI MVPA. The Journal of Neuroscience 35, 2791–2802. 10.1523/JNEUROSCI.4037-14.2015.
- 41. Axelrod V., and Yovel G. (2012). Hierarchical Processing of Face Viewpoint in Human Visual Cortex. Journal of Neuroscience 32, 2442–2452. 10.1523/JNEUROSCI.4770-11.2012.
- 42. Ramírez F.M., Cichy R.M., Allefeld C., and Haynes J.-D. (2014). The Neural Code for Face Orientation in the Human Fusiform Face Area. The Journal of Neuroscience 34, 12155–12167. 10.1523/JNEUROSCI.3156-13.2014.
- 43. Revsine C., Gonzalez-Castillo J., Merriam E.P., Bandettini P.A., and Ramírez F.M. (2024). A unifying model for discordant and concordant results in human neuroimaging studies of facial viewpoint selectivity. The Journal of Neuroscience 44, e0296232024. 10.1523/JNEUROSCI.0296-23.2024.
- 44. Allen E.J., St-Yves G., Wu Y., Breedlove J.L., Prince J.S., Dowdle L.T., Nau M., Caron B., Pestilli F., Charest I., et al. (2022). A massive 7T fMRI dataset to bridge cognitive neuroscience and artificial intelligence. Nat Neurosci 25, 116–126. 10.1038/s41593-021-00962-x.
- 45. Halley A.C., and Krubitzer L. (2019). Not all cortical expansions are the same: the coevolution of the neocortex and the dorsal thalamus in mammals. Curr Opin Neurobiol 56, 78–86. 10.1016/j.conb.2018.12.003.
- 46. Bell A.H., Hadj-Bouziane F., Frihauf J.B., Tootell R.B.H., and Ungerleider L.G. (2009). Object Representations in the Temporal Cortex of Monkeys and Humans as Revealed by Functional Magnetic Resonance Imaging. J Neurophysiol 101, 688–700. 10.1152/jn.90657.2008.
- 47. Axelrod V., and Yovel G. (2013). The challenge of localizing the anterior temporal face area: A possible solution. Neuroimage 81, 371–380. 10.1016/j.neuroimage.2013.05.015.
- 48. Baker C., and Kravitz D. (2024). Insights from the Evolving Model of Two Cortical Visual Pathways. J Cogn Neurosci 36, 2568–2579. 10.1162/jocn_a_02192.
- 49. Ritchie J.B., Wardle S.G., Vaziri-Pashkam M., Kravitz D.J., and Baker C.I. (2024). Rethinking category-selectivity in human visual cortex.
- 50. Doshi F.R., and Konkle T. (2023). Cortical topographic motifs emerge in a self-organized map of object space. Sci Adv 9. 10.1126/sciadv.ade8187.
- 51. Bao P., She L., McGill M., and Tsao D.Y. (2020). A map of object space in primate inferotemporal cortex. Nature 583, 103–108. 10.1038/s41586-020-2350-5.
- 52. Jagadeesh A.V., and Gardner J.L. (2022). Texture-like representation of objects in human visual cortex. Proceedings of the National Academy of Sciences 119. 10.1073/pnas.2115302119.
- 53. Lugtmeijer S., Sobolewska A.M., de Haan E.H.F., and Scholte H.S. (2025). Visual feature processing in a large stroke cohort: evidence against modular organization. Brain 148, 1144–1154. 10.1093/brain/awaf009.
- 54. Kanwisher N. (2025). Animal models of the human brain: Successes, limitations, and alternatives. Curr Opin Neurobiol 90, 102969. 10.1016/j.conb.2024.102969.
- 55. Vanduffel W., Fize D., Peuskens H., Denys K., Sunaert S., Todd J.T., and Orban G.A. (2002). Extracting 3D from Motion: Differences in Human and Monkey Intraparietal Cortex. Science 298, 413–415. 10.1126/science.1073574.
- 56. Tsao D.Y., Vanduffel W., Sasaki Y., Fize D., Knutsen T.A., Mandeville J.B., Wald L.L., Dale A.M., Rosen B.R., Van Essen D.C., et al. (2003). Stereopsis Activates V3A and Caudal Intraparietal Areas in Macaques and Humans. Neuron 39, 555–568. 10.1016/S0896-6273(03)00459-8.
- 57. Orban G.A., Fize D., Peuskens H., Denys K., Nelissen K., Sunaert S., Todd J., and Vanduffel W. (2003). Similarities and differences in motion processing between the human and macaque brain: evidence from fMRI. Neuropsychologia 41, 1757–1768. 10.1016/S0028-3932(03)00177-5.
- 58. Orban G.A., Claeys K., Nelissen K., Smans R., Sunaert S., Todd J.T., Wardak C., Durand J.-B., and Vanduffel W. (2006). Mapping the parietal cortex of human and non-human primates. Neuropsychologia 44, 2647–2667. 10.1016/j.neuropsychologia.2005.11.001.
- 59. Kourtzi Z., Tolias A.S., Altmann C.F., Augath M., and Logothetis N.K. (2003). Integration of local features into global shapes: monkey and human FMRI studies. Neuron 37, 333–346. 10.1016/s0896-6273(02)01174-1.
- 60. Goense J.B.M., and Logothetis N.K. (2008). Neurophysiology of the BOLD fMRI Signal in Awake Monkeys. Current Biology 18, 631–640. 10.1016/j.cub.2008.03.054.
- 61. Logothetis N.K., and Wandell B.A. (2004). Interpreting the BOLD Signal. Annu Rev Physiol 66, 735–769. 10.1146/annurev.physiol.66.082602.092845.
- 62. Logothetis N.K., Pauls J., Augath M., Trinath T., and Oeltermann A. (2001). Neurophysiological investigation of the basis of the fMRI signal. Nature 412, 150–157. 10.1038/35084005.
- 63. McMahon D.B.T., Bondar I.V., Afuwape O.A.T., Ide D.C., and Leopold D.A. (2014). One month in the life of a neuron: longitudinal single-unit electrophysiology in the monkey visual system. J Neurophysiol 112, 1748–1762. 10.1152/jn.00052.2014.
- 64. Arcaro M.J., Mautz T., Berezovskii V.K., and Livingstone M.S. (2020). Anatomical correlates of face patches in macaque inferotemporal cortex. Proceedings of the National Academy of Sciences 117, 32667–32678. 10.1073/pnas.2018780117.
- 65. Lin T., Maire M., Belongie S., Hays J., Perona P., Ramanan D., Dollár P., and Zitnick C.L. (2014). Microsoft COCO: Common Objects in Context. In European Conference on Computer Vision, Fleet D., Pajdla T., Schiele B., and Tuytelaars T., eds. (Springer), pp. 740–755. 10.1007/978-3-319-10602-1_48.
- 66. Arcaro M.J., and Livingstone M.S. (2017). A hierarchical, retinotopic proto-organization of the primate visual system at birth. Elife 6, 1–24. 10.7554/eLife.26196.
- 67. Vanduffel W., Fize D., Mandeville J.B., Nelissen K., Van Hecke P., Rosen B.R., Tootell R.B.H., and Orban G.A. (2001). Visual Motion Processing Investigated Using Contrast Agent-Enhanced fMRI in Awake Behaving Monkeys. Neuron 32, 565–577. 10.1016/S0896-6273(01)00502-5.
- 68. Sharma S., Schaeffer D.J., Vinken K., Everling S., and Nelissen K. (2021). Intrinsic functional clustering of ventral premotor F5 in the macaque brain. Neuroimage 227, 117647. 10.1016/j.neuroimage.2020.117647.



