Human Brain Mapping. 2026 Mar 9;47(4):e70494. doi: 10.1002/hbm.70494

Visual Cortical Lateralization in Activations and Functional Connectivity to the Sight of Faces, Scenes, Body Parts, and Tools

Edmund T Rolls 1,2, Jianfeng Feng 1, Ruohan Zhang 3
PMCID: PMC12971613  PMID: 41804040

ABSTRACT

The lateralization of cortical activations and functional connectivities was analyzed when 833 Human Connectome Project (HCP) right‐handed participants were viewing faces, spatial scenes, body parts, and tools, using the HCP‐Multimodal Parcellation atlas. Spatial scenes produce stronger activations (Bonferroni corrected) in the right hemisphere, especially in the ventromedial visual cortical stream from early visual cortical regions via ventromedial visual cortical regions (VMV1–3) and medial parahippocampal regions (PHA1–3) to the hippocampus, in inferior parietal visual cortical regions (PGi, PGs, and PFm), and in posterior cingulate division regions. Faces, tools, and body parts produce stronger activations in the left hemisphere in some of the ventrolateral temporal lobe and superior temporal sulcus (STS) visual cortical regions. Some lateralizations were independent of the stimulus type: language regions and anterior temporal lobe STS semantic regions consistently have higher activations and/or functional connectivities on the left, consistent with the importance of the left hemisphere in language in right‐handed people. Also, early visual cortical regions, V2–V4 and POS1, have higher activations in the right hemisphere independently of stimulus type. The lateralizations of the functional connectivities were largely consistent with the activations, but additionally showed that groups of functional connectivities lateralize together (e.g., inferior parietal PGi, PGs, and PFm on the right for scenes but not for any other stimuli), providing further evidence on computational units of the cerebral cortex.

Keywords: activations to faces, places, body parts, and tools; cortical scene regions; human connectome; human visual cortex; ventromedial visual cortical stream


The visual ventromedial cortical stream and inferior parietal regions are activated more in the right hemisphere in humans by scenes. In contrast, faces, body parts, and tools activate some ventrolateral visual cortical regions more in the left hemisphere.


1. Introduction

Lateralization of function in the human brain is well established, with language typically lateralized more to the left hemisphere, and some spatial functions to the right hemisphere, based on the effects of brain damage and on brain activations in different tasks (Barkas et al. 2010; Dalton et al. 2016; Gainotti 2021; Rossion and Lochy 2022; Labache et al. 2023; Quin‐Conroy et al. 2024). The lateralization may arise because, if a computation such as language is not related to the left versus right sides of the body or brain, it is adaptive to have a specialized module for each such function on one side of the brain, in order to minimize the connection lengths between the neurons involved in the computation (Rolls 2023b, 2026b; Rolls et al. 2026). But there are many unanswered questions about brain lateralization. For example, is the lateralization of processing in some brain regions always preferential on one side of the brain, or are regions on each side of the brain recruited preferentially depending on the task? This is a fundamental issue in how the human brain functions: are groups of cortical regions in different hemispheres recruited into being preferentially involved depending on the task being performed and, for example, the stimuli being viewed? If so, lateralization of function could be seen as rather dynamic, with modules (groups of cortical regions) from either hemisphere being recruited depending on the computations being performed. In this case, human cortical function could be implemented by different modules of cortical regions, on the same or different sides of the brain, recruited in a way that depends on the task being performed. Moreover, analysis of such right–left brain differences could provide fundamental evidence about which cortical regions operate together as a group in a cortical module, on either the right or left side of the brain.

To address these fundamental issues in human cortical brain function, we analyzed the lateralization of activations in the cerebral cortex and of the functional connectivity (FC) between cortical regions when four different types of visual stimuli were being presented in a short‐term memory task. The functional magnetic resonance imaging (fMRI) neuroimaging data for the investigation were from 833 right‐handed participants in the Human Connectome Project (HCP) performing a short‐term memory task when viewing either faces, visual scenes, body parts, or tools (Barch et al. 2013; Glasser, Smith, et al. 2016). We also compared the brain BOLD signals when the same participants were in the resting state, to help understand any differences in lateralization during tasks and in the resting state. We have analyzed the activations and FC in this task previously (Rolls, Feng, et al. 2024), but as far as we know there has been no comprehensive investigation, for these types of visual stimuli, of the laterality differences in the activations and FC in these HCP participants using the HCP Multimodal Parcellation (HCP‐MMP), which defines 360 cortical regions in the human brain, 180 of which are on the right, with 180 corresponding cortical regions on the left (Glasser, Coalson, et al. 2016). The regions defined in this atlas, and their abbreviations, are shown in Supporting Information S1. This HCP‐MMP atlas is used for this investigation because it uses multimodal information, including cortical myelin, cortical thickness, FC, and task‐related fMRI, to delineate what are potentially different computational modules in the cerebral cortex (Glasser, Coalson, et al. 2016; Rolls 2023b).
In this investigation, we analyzed the difference in the activations between the right and left hemisphere for these 180 cortical regions, and the right–left differences in the FC between visual cortical regions and all other cortical regions in the HCP‐MMP cortical parcellation when four different types of visual stimuli were being presented.

Some of the key visual cortical pathways in which effects might be produced by these visual stimuli include the following (see Figure S4), which have been analyzed with HCP data and using the HCP‐MMP atlas and effective connectivity, FC, and diffusion tractography, but without systematic investigation of right–left hemisphere differences (Rolls, Deco, Huang, et al. 2023b; Rolls 2024).

First, a ventrolateral visual cortical pathway connects from V2 to V4 to FFC (the fusiform face cortex) and V8 and PIT (the posterior inferior temporal cortex region); and then to posterior inferior temporal cortex regions TE1p and TE2p, which are the last primarily unimodal visual cortical regions in the ventrolateral stream (Rolls, Deco, Huang, et al. 2023b; Rolls, Deco, Zhang, et al. 2023; Rolls 2024, 2026b; Rolls and Turova 2025) (Figure S4). This pathway is involved computationally in invariant object and face processing (Rolls 2012, 2021, 2026a, 2026b; Zhang et al. 2026).

Second, a ventromedial visual cortical pathway connects via V1–V4 to the ProStriate region (ProS) and POS1, where the retrosplenial scene area is located, and then to the ventromedial visual cortical regions VMV1–3 and VVC; these then connect to the medial parahippocampal place (or better, scene) regions PHA1–3 (Epstein and Julian 2013; Epstein and Baker 2019; Celik et al. 2021; Natu et al. 2021; Piza et al. 2024; Rolls, Feng, et al. 2024), which then connect to the hippocampus (Rolls et al. 2022b; Rolls, Deco, Huang, et al. 2023b; Rolls 2024, 2025a, 2025b, 2026b, 2026c; Rolls, Yan, et al. 2024; Rolls and Turova 2025) (Figure S4). This pathway is involved in building scene representations using spatial view cells (Rolls 2025b, 2026c) and provides the “where” input for episodic memory (Rolls, Zhang, et al. 2024; Rolls 2025a, 2026b, 2026c).

Third, the cortex in the superior temporal sulcus (STS) receives from both the ventrolateral and dorsal visual streams (Rolls, Deco, Huang, et al. 2023b; Rolls 2024), and has neurons specialized for social signals such as face expression and head and body motion (Perrett et al. 1985; Hasselmo, Rolls, and Baylis 1989; Hasselmo, Rolls, Baylis, et al. 1989; Pitcher et al. 2019; Pitcher and Ungerleider 2021). More anteriorly, the cortex in the STS and anterior temporal lobe regions is involved in multimodal semantic representations (Rolls et al. 2022a, 2025; Rolls, Deco, Huang, et al. 2023b; Rolls 2026b) (Figure S4).

Fourth, the Dorsal Cortical Visual Stream, a “Where” stream for motion and visuomotor action in space (Ungerleider and Haxby 1994; Milner and Goodale 1995; Gallivan and Goodale 2018), connects V1 > V2 > V3 > LO3 + MT > MST > FST, which then connects to intraparietal regions (e.g., LIPd, LIPv, VIP, and MIP) and parietal area 7 regions (e.g., 7PC) (Rolls et al. 2022a; Rolls, Deco, Huang, et al. 2023b, 2023c; Rolls 2026b) (Figure S4).

Key aims of the investigation are as follows:

  1. Are there right‐hemisphere compared to left‐hemisphere differences in cortical activations measured with the HCP‐MMP atlas (Glasser, Coalson, et al. 2016) to the following four types of visual stimuli: faces, scenes (viewed places), body parts, and tools in a large cohort of 833 right‐handed participants? (The activations were measured in a 0‐back working memory task, which had a minimal memory load, but ensured that the participants were looking at and processing the four types of visual stimuli.) Use of a large cohort here is important, in that some of the previous research used only tens of participants when studying the regions activated by these types of stimuli (Kanwisher et al. 1997; Epstein and Kanwisher 1998; Liu et al. 2010; Weiner and Grill‐Spector 2013; Epstein and Baker 2019; Rossion and Lochy 2022).

  2. If so, do groups of cortical regions lateralize similarly for a given stimulus type, and what can be learned from these grouped lateralizations?

  3. Are there right hemisphere compared to left hemisphere differences in cortical FC, another measure of cortical function, to the following four types of visual stimuli: faces, scenes (viewed places), body parts, and tools?

  4. An aim is to present the activation results not only for well‐known visual cortical pathways (Rolls 2023b, 2024; Rolls, Deco, Huang, et al. 2023b), but for all 180 cortical regions in the HCP‐MMP atlas, because we have recently found that with this HCP data set, activations can also be found in some other cortical regions that relate to the semantic properties of the stimuli, such as whether the stationary visual stimuli used in the memory task are normally associated with motion, such as tools (Rolls, Feng, et al. 2024).

This investigation focused on activation and FC measures, as both are likely to be affected by the type of stimuli, faces, scenes, tools, and body parts being presented (Rolls 2023b; Rolls, Feng, et al. 2024), whereas diffusion tractography is a structural and not a functional measure. Moreover, how activations and FC relate to each other is not simple, and FC is likely to at least add some useful information, because it is not about the activity of a single cortical region, but reflects how each cortical region interacts with all the other cortical regions (Rolls 2023b).

2. Methods

2.1. HCP Task and Working Memory Paradigm

The HCP dataset provides task fMRI data for seven cognitive tasks, one of which is the working memory task (Barch et al. 2013), which provided the data analyzed here. In the working memory task, participants were presented with separate task blocks of trials for faces, places, body parts, and tools (Barch et al. 2013). Most of the analyses described here used the 0‐back working memory condition (Figure 1), which ensured that the visual stimuli were processed, but which had only a small memory requirement, so that cortical responses to the visual stimuli could be analyzed. The interest here is in effects related to the four different visual stimulus types, rather than in the 2‐back memory condition that has been analyzed elsewhere (Rolls, Feng, et al. 2024). The “place” stimuli were views of spatial scenes, and are termed “scene stimuli” here (details of the task, and the stimuli used, are available at https://www.humanconnectome.org/hcp‐protocols‐ya‐task‐fmri and https://db.humanconnectome.org/app/action/ChooseDownloadResources?project=HCP_Resources&resource=Scripts&filePath=HCP_TFMRI_scripts.zip). Within each task block, first an instruction/cue image was presented for 2.5 s to indicate the stimulus type and whether that block was 0‐back or 2‐back. Then, 10 trials were run for a given stimulus type, with each stimulus shown for 2.0 s followed by an interstimulus interval of 0.5 s in which a cross was shown. The 10 stimuli in each block thus lasted for 25 s, and the whole duration of a block, including the cue, was 27.5 s. In the analyses described here, the activations and functional connectivities were measured as described below during these 25‐s periods, which, with a TR of 0.72 s, provided 35 volumes. There were two runs in which data were acquired, and each run included eight task blocks: four task blocks for 0‐back, and four for 2‐back. Each stimulus type (faces, scenes, etc.) thus had 20 trials as 0‐back.
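The block timing just described can be checked with a short sketch (a minimal illustration using the durations stated above; the volume count assumes the stated TR of 0.72 s):

```python
import math

# Block timing for the HCP working-memory task (values from the text)
cue_s = 2.5                 # instruction/cue image at block start
stim_s, isi_s = 2.0, 0.5    # stimulus duration and interstimulus cross
n_trials = 10
tr_s = 0.72                 # repetition time

stim_period_s = n_trials * (stim_s + isi_s)   # 25.0 s of stimuli
block_s = cue_s + stim_period_s               # 27.5 s per block

# Volumes needed to cover the 25-s stimulus period at TR = 0.72 s
n_volumes = math.ceil(stim_period_s / tr_s)   # 35

print(stim_period_s, block_s, n_volumes)
```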

FIGURE 1.

The Human Connectome Project Working Memory task for the 0‐back condition (Barch et al. 2013). Four stimulus types were used in a block design: faces, places, tools, and body parts. + indicates a fixation cross presented in the intertrial interval. Examples of the large set of stimuli used are shown in this figure. In the 0‐back condition used for most of the analyses described here, a target cue was presented at the start of each block in the cue period, and the participant had to respond “target” to any presentation of that stimulus in the block. There were two runs in which data were acquired, and each run included eight task blocks: four task blocks for 0‐back and four task blocks for 2‐back. Each stimulus type (faces, scenes, etc.) thus had 20 trials as 0‐back, and 20 trials as 2‐back. (A) The task design in which runs of a task, such as the 0‐back task, were performed. Each run consisted of a 2.5‐s cue period followed by 10 trials in which a stimulus was shown for 2 s, followed by a 0.5‐s fixation period. The 10 stimuli in each run were thus presented over a 25 s period. Each run consisted of either faces, places, body parts, or tools. On 50% of runs, 0‐back faces, places, and tools were preceded by a 15 s screen showing only a fixation cross. (B–E) Examples of the different 0‐back runs.

2.2. HCP Data Acquisition

fMRI data were acquired from a large cohort of individuals participating in the working memory task of the HCP (Barch et al. 2013). The data were obtained from the publicly available S1200 release (last updated April 2018) of the HCP (Van Essen et al. 2013). Participants provided written informed consent, and the scanning protocol was approved by the Institutional Review Board of Washington University in St. Louis, MO, USA (IRB #201204036). In this study, we utilized the task‐based fMRI data of the working memory task from all 956 participants who completed both runs of the task, with data quality approved by the HCP, and which had covariates available.

The whole‐brain EPI acquisitions were performed using a 32‐channel head coil on a modified 3 T Siemens Skyra scanner. The imaging parameters included a TR of 720 ms, TE of 33.1 ms, a flip angle of 52 degrees, a bandwidth of 2290 Hz/Px, and an in‐plane FOV of 208 × 180 mm. Each functional volume comprised 72 slices with a voxel size of 2.0 mm isotropic. A multiband acceleration factor of 8 was used during image acquisition (Feinberg et al. 2010; Moeller et al. 2010). Two runs of each task were acquired, one with right‐to‐left phase encoding and the other with left‐to‐right phase encoding (Barch et al. 2013).

2.3. Use of the HCP‐MMP Atlas

The HCP‐MMP (Glasser, Coalson, et al. 2016) is a well‐founded parcellation of the human cerebral cortex into 360 cortical regions that utilizes evidence from anatomy (cortical thickness and cortical myelin), FC, and task‐related fMRI (Glasser, Coalson, et al. 2016). This atlas provides a reference system that could be used in many investigations of human cortical function, to provide a reference standard to enable findings from different investigations to be compared. The HCP‐MMP (Glasser, Coalson, et al. 2016) has been extended to include 66 subcortical areas (Huang et al. 2022). The HCP‐MMP is the best cortical atlas we know for delineating the smallest cortical regions that can be reliably identified in humans, which may be building blocks of cortical function and provide a basis for advancing our understanding of cortical function (Rolls 2023b). It contrasts with many earlier parcellations of the cerebral cortex that are less computationally useful, as they are based on gross topology (Rolls et al. 2015, 2020) or on cortical regions categorized primarily by FC (Power et al. 2011). The order used here for the regions in this atlas is that of Huang et al. (2022), as that has been used extensively in previous investigations (Ma et al. 2022; Rolls et al. 2022a, 2022b, 2022c; Rolls 2023a, 2023b; Rolls, Deco, Huang, et al. 2023a, 2023b, 2023c, 2023d; Rolls, Deco, Zhang, et al. 2023; Rolls, Rauschecker, et al. 2023; Rolls, Wirth, et al. 2023; Rolls, Deco, et al. 2024; Rolls, Feng, et al. 2024; Rolls, Yan, et al. 2024; Rolls, Zhang, et al. 2024), and its cortical regions and abbreviations for each cortical region are provided in Supporting Information S1.

2.4. The Visual Cortical Regions Analyzed

The 55 visual cortical regions selected for analysis of right–left differences were those previously selected (Rolls, Deco, Huang, et al. 2023b), with the HCP‐MMP division indicated where relevant. These 55 regions were selected because they comprise primarily the cortical regions in the divisions listed below, which are the main visual cortical divisions in the HCP‐MMP atlas (Glasser, Coalson, et al. 2016). Some additional regions with visual responses, such as the eye fields and the parahippocampal gyrus regions, were also included to provide evidence about how visual inputs reach these regions. These cortical regions are shown clearly on the surface maps of the human brain in Figures S1‐5, S1‐6, and S4, and are also clearly shown in the coronal slices in Figures S1‐1–S1‐4. In addition, a table of all the 180 cortical regions in each hemisphere in the HCP‐MMP atlas, including these 55 regions, is provided in Table S1.

Primary Visual division: Primary visual cortex V1.

Early Visual cortical division: V2, V3, and V4.

Dorsal Stream Visual Division: Intraparietal Sulcus Area 1 IPS1, V3A, V3B, V6, V6A, and V7.

Ventral Stream Visual Division: Fusiform face Complex FFC, Posterior Inferotemporal complex PIT, V8, Ventromedial Visual Areas 1–3 VMV1–VMV3, and Ventral Visual Complex VVC.

MT+ complex division: FST, Lateral Occipital Areas 1–3 LO1–LO3, Medial Superior Temporal Area MST, Middle Temporal Area MT, PH, V3CD, and V4t. (It is noted that an MT cluster has been described that includes FST, MST, MT, and V4t (Kolster et al. 2010), but the MT+ complex division of the HCP‐MMP includes more cortical regions, as just specified (Glasser, Coalson, et al. 2016).)

Eye Field regions: Supplementary and Cingulate Eye Field SCEF, Frontal Eye Fields FEF, and Premotor Eye Fields PEF.

STS regions with visual responses: STGa, STS dorsal anterior STSda, STS dorsal posterior STSdp, STS ventral anterior STSva, and STS ventral posterior STSvp.

Parahippocampal gyrus regions with visual responses: TF, Parahippocampal area 1–3 PHA1–PHA3 (which correspond to macaque TH).

Lateral Temporal division: PHT, TE1 anterior TE1a, TE1 middle TE1m, TE1 posterior TE1p, TE2 anterior TE2a, TE2 posterior TE2p, temporal pole TG dorsal TGd, and temporal pole ventral TGv.

Intraparietal sulcus regions in the Superior Parietal division: Anterior IntraParietal Area AIP, Lateral Intraparietal dorsal region LIPd, Lateral Intraparietal ventral region LIPv, Medial Intraparietal area MIP, and Ventral IntraParietal complex VIP. Intraparietal area 0–Intraparietal Area 2 IP0–IP2 from the inferior parietal division are also included for completeness.

2.5. Calculation of the Activations and Functional Connectivities for Each Type of Visual Stimulus

The current study employed surface‐based time series data from the HCP for the working memory task. We parcellated the timeseries data into the 360 cortical regions defined by the surface‐based HCP‐MMP atlas (Glasser, Coalson, et al. 2016). We extracted the timeseries for each task block, which lasted for 27.5 s as described above, using the timing information for each block provided by the HCP (https://www.humanconnectome.org/hcp‐protocols‐ya‐task‐fmri).

Within each task block, the BOLD signal showed a consistently high level to the set of stimuli in that task block for the last 20 time points in a block (with TR = 0.72 s) (see fig. S3B of Rolls, Feng, et al. (2024)), and that period was used for the analysis of the cortical responses to the visual stimuli. The first 15 time points served as a baseline, with the activation calculated for each participant as the mean BOLD signal across the last 20 time points minus the mean for the first 15 baseline time points (see fig. S3B of Rolls, Feng, et al. (2024), which shows a flat BOLD signal in the first 15 timebins of a run, and then a very clear increase to the visual stimuli after that). The average activation for each cortical region for each stimulus type for 0‐back was calculated for each subject as the mean across the two runs. For comparison with the task‐related activations, the resting state BOLD signal right–left differences were calculated from the resting state fMRI in the same HCP participants.
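The activation measure just described can be sketched as follows (a minimal illustration, assuming the block timeseries has already been parcellated into regions; the function name is hypothetical):

```python
import numpy as np

def block_activation(block_ts: np.ndarray) -> np.ndarray:
    """Activation per cortical region for one task block.

    block_ts: (35, n_regions) BOLD timeseries -- the 35 volumes
    (TR = 0.72 s) spanning a block's 25-s stimulus period.
    Returns, per region, the mean of the last 20 time points minus
    the mean of the first 15 (baseline) time points.
    """
    baseline = block_ts[:15].mean(axis=0)
    response = block_ts[-20:].mean(axis=0)
    return response - baseline
```

In the study, this per-block value would then be averaged for each subject over the blocks of each stimulus type and over the two runs.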

Additionally, the FC matrices for each participant were constructed by assessing the Pearson correlation between the last 20 time points of the timeseries for the 180 cortical regions in each hemisphere (again using the mean between the two runs available for each stimulus type for both 0‐back and 2‐back).
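A minimal sketch of the FC construction (assuming a parcellated per-hemisphere timeseries; the Pearson correlation between regional time courses is as stated above):

```python
import numpy as np

def fc_matrix(ts: np.ndarray) -> np.ndarray:
    """Functional connectivity matrix for one hemisphere, one block.

    ts: (20, 180) array -- the last 20 time points for the 180
    cortical regions of one hemisphere. Returns the 180 x 180
    matrix of Pearson correlations between regional time courses.
    """
    # np.corrcoef treats rows as variables, so transpose to
    # (regions, time points) before correlating
    return np.corrcoef(ts.T)
```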

2.6. Statistical Analyses

We have just described how the data for individual subjects were extracted. For the population‐based statistical analysis, the aim was to examine the differences in the activation of the 180 cortical regions between the right and left hemispheres for each of the four stimulus types: faces, scenes, body parts, and tools. To implement this, paired t‐tests were performed to examine the differences in the BOLD signal level for each participant for the right minus the left hemisphere. The covariates of no interest in this analysis that were regressed out were sex, age, drinker status, smoking status, educational qualification, and head motion. We then assessed across the participants the significance of the difference between the right and left hemispheres, using Bonferroni correction for multiple comparisons. The effect sizes of the significant differences for the right–left hemisphere activations were measured using Cohen's d (Cohen 1992). These analyses were performed for 833 right‐handed HCP participants (450 females and 383 males), selected to have a laterality quotient ≥ 40 (following Schachter et al. (1987)) for the Edinburgh Handedness questionnaire (Oldfield 1971). It is emphasized that only right‐handed participants were included in the analyses, as including participants with different handedness might have confounded the findings.
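The per-region test can be sketched as below (a simplified illustration: the covariate regression used in the study is omitted, and Cohen's d is computed on the paired right-minus-left differences):

```python
import numpy as np
from scipy import stats

def lateralization_test(right: np.ndarray, left: np.ndarray,
                        alpha: float = 0.05):
    """Paired right-vs-left test for each cortical region.

    right, left: (n_participants, n_regions) activation arrays.
    Returns t values, a Bonferroni-corrected significance mask,
    and Cohen's d of the paired right-minus-left differences.
    (Covariates of no interest are not regressed out here.)
    """
    diff = right - left
    t, p = stats.ttest_rel(right, left, axis=0)
    sig = p < alpha / diff.shape[1]                     # Bonferroni
    d = diff.mean(axis=0) / diff.std(axis=0, ddof=1)    # Cohen's d
    return t, sig, d
```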

For the functional connectivities, paired t‐tests were conducted to identify FCs that were significantly different for the right minus the left hemisphere for the links between the 55 visual cortical regions and the 180 cortical regions in the atlas. This was performed separately for each of the four stimulus conditions: faces, scenes, body parts, and tools. FDR correction (rather than Bonferroni) was applied to account for multiple comparisons, given that each FC matrix was 55 × 180, with the above covariates of no interest regressed out. The effect size, measured with Cohen's d, was calculated as the number of standard deviations between the means of the two conditions. These FC analyses were for the same 833 right‐handed HCP participants as the activation analyses.
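FDR correction over the FC links could follow the standard Benjamini–Hochberg procedure, sketched below (a generic implementation for illustration, not the study's exact code):

```python
import numpy as np

def fdr_bh(pvals, q: float = 0.05) -> np.ndarray:
    """Benjamini-Hochberg FDR: boolean mask of rejected tests.

    pvals: array of p values (e.g., the 55 x 180 matrix of FC
    comparisons). Rejects the largest k ordered p values
    satisfying p_(k) <= k * q / m, where m is the number of tests.
    """
    p = np.asarray(pvals, dtype=float).ravel()
    m = p.size
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    k = below.nonzero()[0].max() + 1 if below.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True          # reject the k smallest p values
    return mask.reshape(np.shape(pvals))
```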

3. Results

The activation differences for the right–left hemispheres are shown in Figure 2 for faces, scenes, body parts, and tools, using the effect size as a measure, with only regions with significant differences (Bonferroni corrected) shown. The activations themselves for the right and left hemispheres for each cortical region for faces, scenes, body parts, and tools are shown in Figure 3A–D. These figures not only make the differences in the activations evident, but also show the actual activations for each cortical region produced by faces, scenes, body parts, and tools. The FC differences for right–left are shown in Figures 4, 5, 6, 7 for faces, scenes, body parts, and tools. In presenting the results, we take each stimulus type (scenes, faces, etc.) and consider the right–left differences together for both the activations and the functional connectivities. For early visual cortical regions (V2, V3, and typically V4), all four visual stimuli produced greater activations in the right hemisphere, so the focus is on higher visual and other cortical regions where differences were found.

FIGURE 2.

Activation differences for faces, places, body parts, and tools for the right hemisphere–the left hemisphere for the 180 cortical regions in the surface‐based Human Connectome Project (HCP)‐MMP atlas (Glasser, Coalson, et al. 2016) in 833 right‐handed HCP participants. Higher activations in the right hemisphere are thus yellow/red, and higher activations in the left hemisphere are thus blue for all cortical regions in which the difference was significant after Bonferroni correction. The effect sizes of the differences are shown with Cohen's d. A list of the cortical regions and their abbreviations is shown in Table S1.

FIGURE 3.

(A) The activations for faces are shown separately for the right hemisphere and for the left hemisphere for the 180 cortical regions in the surface‐based Human Connectome Project (HCP)‐MMP atlas (Glasser, Coalson, et al. 2016) in the 833 right‐handed HCP participants so that the activations in the two hemispheres can be compared. The activations are shown as the percent BOLD signal change produced by the visual stimuli from the BOLD signal in the 15 time points before the visual stimuli were shown. The cortical regions with right–left hemisphere differences that were significant after Bonferroni correction are indicated by *(red for right hemisphere > left; blue for left hemisphere > right). A list of the cortical regions and their abbreviations is shown in Table S1, and the order used is that in the extended HCP‐MMP atlas (Huang et al. 2022). (B) The activations for scenes (viewed places) are shown separately for the right hemisphere and for the left hemisphere with the same conventions as in (A). (C) The activations for body parts are shown separately for the right hemisphere and for the left hemisphere with the same conventions as in (A). (D) The activations for tools are shown separately for the right hemisphere and for the left hemisphere with the same conventions as in (A).

FIGURE 4.

Functional connectivity differences for faces for the right hemisphere–the left hemisphere between visual cortical regions and 180 other cortical regions in 833 right‐handed Human Connectome Project (HCP) participants. Higher functional connectivity in the right hemisphere is thus yellow/red, and higher functional connectivity in the left hemisphere is thus blue. The upper figure shows the functional connectivity of the visual cortical regions with the first half of the cortical regions; the lower figure shows the functional connectivity with the second half of the cortical regions. Note that each link that is significant after FDR correction is shown only once in this matrix; that is, the matrix is asymmetric. Abbreviations: See Table S1. The groups of visual cortex regions are separated by red lines. Group 1 (top) Early Visual division of the HCP‐MMP atlas (labeled on the left Early); Group 2 Dorsal Visual division (Dorsal); Group 3 Ventral Visual division (Ventral); Group 4 MT+ division (MT+); Group 5 Eye Field regions (EF); Group 6 Superior Temporal Sulcus regions (STS); Group 7 Medial Temporal Regions (PHG, parahippocampal gyrus); Group 8 Lateral Temporal division including the Temporal Pole (L Temp); and Group 9 Intraparietal sulcus regions (IntraPar). The colored labeled bars show the cortical divisions in the HCP‐MMP atlas (Glasser, Coalson, et al. 2016). The order of the cortical regions is as in Huang et al. (2022).

FIGURE 5.

Functional connectivity differences for scenes (places) for the right hemisphere–the left hemisphere between visual cortical regions and 180 other cortical regions. The upper figure shows the functional connectivity of the visual cortical regions with the first half of the cortical regions; the lower figure shows the functional connectivity with the second half of the cortical regions. Conventions as for Figure 4. Abbreviations: See Table S1.

FIGURE 6.

Functional connectivity differences for body parts for the right hemisphere–the left hemisphere between visual cortical regions and 180 other cortical regions. The upper figure shows the functional connectivity of the visual cortical regions with the first half of the cortical regions; the lower figure shows the functional connectivity with the second half of the cortical regions. Conventions as for Figure 4. Abbreviations: See Table S1.

FIGURE 7.

Functional connectivity differences for tools for the right hemisphere–the left hemisphere between visual cortical regions and 180 other cortical regions. The upper figure shows the functional connectivity of the visual cortical regions with the first half of the cortical regions; the lower figure shows the functional connectivity with the second half of the cortical regions. Conventions as for Figure 4. Abbreviations: See Table S1.

3.1. Scenes

For scenes (viewed places), activations are greater in the right hemisphere for ventromedial visual cortical regions VMV1–3 and VVC, and in the medial parahippocampal gyrus (PHA1–3) (Figure 2B). This is the ventromedial scene pathway to the hippocampus (Rolls, Deco, Huang, et al. 2023b; Rolls 2024, 2025a, 2025b; Rolls, Yan, et al. 2024), and it is clearly organized to be right lateralized in terms of the activations (Figure 2B). The results in Figure 2B establish that when scenes (places) are being viewed, there are higher activations in the right hemisphere than the left hemisphere for these named regions (with red/yellow colors) after rigorous Bonferroni correction for multiple comparisons, and that the effect sizes as shown by Cohen's d are up to 2.3, which are large (a value of d > 0.8 counts as a large effect size; Cohen 1992).

Further quantitative evidence to provide details of the results is in Figure 3B, which shows the change of BOLD signal produced by seeing scenes, and indicates how high the activations are to scenes in each of the 180 cortical regions in the right and left hemispheres. Figure 3B emphasizes that the activations to the VMV and PHA regions are not only right lateralized, but that the activations of these regions to scenes are large, and higher than for any of the other stimuli. Further analysis and context are provided in Figure S5, which shows the activations for scenes–faces, which are larger in regions including VMV1–3, VVC, and PHA1–3, which are in the ventromedial cortical visual scene pathway (Rolls, Deco, Huang, et al. 2023b; Rolls 2024; Rolls, Yan, et al. 2024). This is found in both hemispheres. But what is clearly established in Figures 2B and 3B is that the activations to scenes for these regions are very significantly greater in the right than in the left hemisphere, with a large effect size, and that this is a finding emphasized in this research.

It is interesting that the functional connectivities between these groups of regions are high in the right hemisphere; that is, some of the early visual cortical regions V1–V4 have higher FC in the right hemisphere with some of the ventromedial cortical visual regions VMV1–3; and in turn, VMV3 has higher FC in the right hemisphere with the medial parahippocampal scene area PHA2 (Figure 5). Correspondingly, the effective connectivity is from V1 to V4 to VMV regions, and then to PHA regions (Rolls, Deco, Huang, et al. 2023b; Rolls, Yan, et al. 2024; Rolls and Turova 2025). This is all consistent with visual inputs for scenes received through early visual regions V1–V4 having high activations in the right hemisphere, which then drive VMV regions strongly in the right hemisphere, which in turn activate the PHA regions in the right hemisphere.
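To make the FC measure concrete, the following is a minimal sketch of functional connectivity as the Pearson correlation between two regions' BOLD timeseries, computed separately for each hemisphere. The timeseries are synthetic, and the stronger shared signal assumed for the right hemisphere pair (V4 and VMV3) is an illustrative assumption based on the pattern described above, not measured data.

```python
import numpy as np

# Sketch of functional connectivity as Pearson correlation between two
# regions' BOLD timeseries, per hemisphere. Timeseries are synthetic;
# the stronger right-hemisphere coupling is an illustrative assumption.

rng = np.random.default_rng(1)
n_timepoints = 1200  # e.g., the length of one HCP run

def fc(x, y):
    """Functional connectivity: Pearson correlation of two timeseries."""
    return float(np.corrcoef(x, y)[0, 1])

# Right hemisphere: strong shared signal between V4 and VMV3
shared_r = rng.normal(size=n_timepoints)
v4_right = shared_r + 0.5 * rng.normal(size=n_timepoints)
vmv3_right = shared_r + 0.5 * rng.normal(size=n_timepoints)

# Left hemisphere: weaker shared signal
shared_l = rng.normal(size=n_timepoints)
v4_left = 0.4 * shared_l + rng.normal(size=n_timepoints)
vmv3_left = 0.4 * shared_l + rng.normal(size=n_timepoints)

fc_right = fc(v4_right, vmv3_right)
fc_left = fc(v4_left, vmv3_left)
print(f"FC V4-VMV3: right = {fc_right:.2f}, left = {fc_left:.2f}")
```

In a group analysis, such correlations would typically be Fisher z-transformed before comparing hemispheres across participants.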

This whole ventromedial visual cortical pathway thus shows right lateralized activations for scenes (Figure 2B) more than for faces and tools, though body parts also preferentially activate some right hemisphere regions (Figure 2C). However, the functional connectivities between VMV regions and PHA (plus TF and perirhinal) regions (Figure 5) show more consistent right lateralization for scenes than for body parts, faces, and tools (Figures 4, 5, 6, 7), so overall the visual spatial scene system is more strongly right lateralized than for the other three types of visual stimulus: faces, body parts, and tools.

Another set of cortical regions that is right lateralized for scenes is the inferior parietal cortex visual regions, with higher activations of the right hemisphere for PGi, PGs, and PFm (Figure 2B). This inferior parietal right lateralization is unique for scenes and is not found for faces, tools, and body parts (Figure 2A,C,D). This provides key evidence that the human inferior parietal cortex is involved in spatial scene representations (on the right), and this contrasts with its activations when tools, faces, and body parts are viewed, which produce more activation on the left than the right (Figure 2). In addition, PGi and PGs have higher FC with many right intraparietal regions such as AIP, LIPd, and LIPv (Figure 5). A related part of the inferior parietal cortex, PGp, has higher FC on the right with visual cortical regions including V2–V4; VMV2; FFC, PIT, and V8; and some MT+ regions including V3CD, where the occipital place area may be located (Figure 5).

Some inferior temporal cortex regions, such as TE1p and TE1m, are more strongly activated on the right by scenes (Figure 2B), and this contrasts with generally stronger activation of inferior temporal visual cortex regions on the left for faces, body parts, and tools (Figure 2).

Another set of cortical regions that is right lateralized for scenes is the posterior cingulate cortical regions, with higher activations on the right, especially POS1, POS2, 31pv, 31pd, 7m, PCV, 7pm, and 31a, which, as a group, do not show right lateralization of activations for faces, tools, and body parts. Correspondingly, for scenes the functional connectivities of the posterior cingulate cortex regions are higher on the right, especially with intraparietal and some MT+ regions. This is of interest in understanding the functions of the right versus left posterior cingulate cortical regions (Rolls, Wirth, et al. 2023).

Interestingly, language areas 44, 45, and 47l (Rolls et al. 2022a) did not show strong lateralization for scenes on the left (Figure 3B), whereas for all other stimuli, the activations were higher in the left hemisphere. The anterior temporal lobe semantic regions in and connected with the STS (Rolls et al. 2022a) had higher activations (Figure 2B) and higher functional connectivities (Figure 3B) even for scenes on the left, suggesting that these semantic regions are not part of the right lateralized spatial scene processing system, which instead involves the right‐biased ventromedial spatial scene system via VMV and PHA regions, the right inferior parietal visual cortical regions, and the right posterior cingulate cortical regions.

The actual activation values shown in Figure 3B for scenes emphasize large right‐lateralized activation differences especially for V1, V2, VMV2, VMV3, PHA1, PHA2, PHA3, POS1, POS2, 31pv, 31pd, and 7m, and for a32pr, d32, p24, 46, 8Ad, 8Av, 9a, and 9p.

3.2. Faces

For faces, activations in many cortical regions are left lateralized (Figure 2A), rather than right lateralized as for scenes. For example, lateral inferior temporal cortical regions such as VVC, TE1m, and TE2a, and the STS visual face‐related regions STSvp and STSva (Rolls, Deco, Huang, et al. 2023b; Rolls 2024) are more strongly activated on the left by faces (Figure 2A) (VVC is adjacent to the FFC and becomes part of the ventrolateral cortical visual pathway, which includes FFC in terms of effective connectivity when faces vs. scenes are viewed; Rolls and Turova 2025).

The results in Figure 2A establish that when faces are being viewed, there are higher activations in the left hemisphere than the right hemisphere for these named regions (with blue colors) after rigorous Bonferroni correction for multiple comparisons, and that the effect sizes, as shown by Cohen's d, are moderate.

Further quantitative evidence to provide further details of the results is in Figure 3A, which shows the change of BOLD signal produced by seeing faces and indicates how high the activations are to faces in each of the 180 cortical regions in the right and left hemispheres. Further analysis and context are provided in Figure S5, which shows the activations for faces minus scenes in blue; these are larger in regions including FFC, TE2p, TF, and TE2a, in both hemispheres. But what is established clearly in Figure 2A is that the activations to faces in regions such as VVC, TE1m, TE2a, STSvp, and STSva are significantly greater in the left than in the right hemisphere, and that is a finding that is emphasized in this research.

The FFC has higher FC on the left with TE1m and TE2a (Figure 4). Another comparison is that FFC has higher FC on the left with language areas such as 44, 45, and 47l (Figure 4), whereas this is not the case for scenes (Figure 5).

The ventromedial visual cortical regions VMV1–3 are overall much less activated on the right for faces compared to scenes (Figure 2A vs. Figure 2B), and correspondingly, the FCs for faces are not right lateralized for VMV2 with other VMV regions or with MT+ complex regions (Figure 4), though they are for scenes (Figure 5).

The inferior parietal cortex visual regions PGi, PGs, and PFm (Rolls, Deco, Huang, et al. 2023c) have higher activations of the left hemisphere for faces (Figure 2A), and this is in contrast to scenes, which show greater activations of the right hemisphere for these inferior parietal regions (Figure 2B). Correspondingly, some FCs between inferior parietal regions are higher on the left with face‐related temporal cortex regions such as FFC and PIT (Figure 4), and that is not the case for scenes (Figure 5).

The activations in the posterior cingulate cortical regions are much less overall right lateralized for faces (Figure 2A) compared to scenes (Figure 2B), with differences for POS2, 7pm, PCV, 31pv, and 31pd not evident for faces (Figures 2A and 3).

However, it is interesting that face processing was not greater in the left hemisphere for all visual cortical regions. For example, V2–V4, POS1, v23ab, 7m, d23ab, and 31a were more activated by faces in the right hemisphere, as were some dorsal visual division, MT+, and TPO division regions, including LO1–3, MST, and FST, and TPOJ2 and TPOJ3 (Figure 2A). Interestingly, faces also activated parts of the right medial orbitofrontal cortex, pOFC, and 13l, whereas these regions were activated more on the left for all the other stimuli (scenes, body parts, and tools). For the FCs for faces, though many of those for the visual cortical areas are higher on the left, there are some exceptions, including V3, and the connectivity of dorsal visual division regions with many somatomotor, premotor, and opercular division regions, which is right lateralized (Figure 4).

The actual activation values shown in Figure 3A for faces emphasize large left‐lateralized activation differences especially for regions STSdp, STSva, STSvp, 7PC, 7PI, IP2, PF, PFm, PFt, 10r, 10v, 44, 45, and 47l, and for SFL and 9p.

3.3. Body Parts

For body parts, activations in many ventral visual cortical regions are right lateralized, including V1–V4, V8, PIT, FFC, VVC, and TE2p (Figure 2C). However, these right lateralizations do not extend to the ventromedial visual pathway regions VMV2, PHA2, and PHA1 that are right lateralized for scenes (Figure 2C vs. Figure 2B). Interestingly, right lateralization was also found for visual motion regions MST, FST, and LO1–3 (Figure 2C), and this is remarkable because the body part stimuli were stationary, so it is just the semantic association with motion that accounts for these activations (Rolls et al. 2025). The higher right hemisphere activations for body parts for MT+ regions are very evident in Figure 3C. As expected, this was not found for scenes, which are also typically stationary. The activations for body parts were right lateralized for TPO division regions, including TPOJ1–3 and STV, which was less generally the case for faces, scenes, and tools.

For the sight of body parts it is interesting that somatosensory cortical regions (Rolls, Deco, Huang, et al. 2023c) are activated (Rolls, Feng, et al. 2024), and here we reveal that these activations are more on the right, and include regions such as 1, 2, PFt, and PFm (Figures 2C and 3C), whereas for all other visual stimuli, the activations of these somatosensory regions are higher on the left.

For the sight of body parts, the orbitofrontal cortex activations were higher on the left for almost all orbitofrontal cortex regions (Rolls et al. 2022c), and this is not the case for faces.

Although some of the functional connectivities of V1–V4 with ventral visual cortical regions such as FFC, PIT, V4, and VMV regions were left lateralized for body parts (Figure 6) (unlike scenes), V1–V4 had higher FC on the right with somatosensory regions, premotor regions, lateral temporal, superior parietal, inferior parietal, and posterior cingulate regions, and parts of the inferior frontal gyrus and DLPFC; this is unique, and is not found for faces, scenes, and tools. The higher FCs in the right hemisphere for V1–V4 regions are consistent with the right lateralization for many cortical regions produced by body parts.

3.4. Tools

Although early visual cortical regions showed right hemisphere lateralization for tools, as for most other visual stimuli, most of the other activations for tools are left lateralized, including the higher ventral stream cortical regions FFC, TE2p, TE1m, TE2a, and PIT (Figures 2D and 3D). The visual motion regions, such as MST, FST, and LO2, also show left lateralization for these stationary stimuli with their semantic association with motion. The activations for language‐related regions TPOJ1 and TPOJ2 (as well as 44, 45, SFL, and 55b) are left lateralized for tools. Tools produce significant left lateralization of activations in more cortical regions than any of the other visual stimuli: faces, scenes, and body parts (Figures 2D and 3D).

The functional connectivities are consistent with this, with right lateralization for FCs involving V1–V4 with ventral stream visual regions such as PIT, V8, and VMV1–3 (Figure 7). The functional connectivities are higher on the left for the lateral temporal cortical visual regions and temporal pole regions, such as TE2a, TGd, and TGv, with some inferior parietal visual regions, such as PGs, PGi, and PFm. All of these regions are involved in visual semantic processing (Rolls et al. 2022a). The orbitofrontal cortex FCs for tools with most lateral temporal (TE) regions and with STS regions are also left lateralized (Figure 7).

3.5. Comparison of Activations to Scenes Versus Faces

The analyses shown in Figures 2 and 3 are for right versus left hemisphere differences for each of the scenes, faces, body parts, and tools. We also tested whether the laterality effects were significantly different from each other for different stimuli, such as scenes versus faces. For example, for which cortical regions were the activations significantly greater in the right versus left hemisphere for the differences between scenes and faces? In one such comparison, it was shown that the differences in the activations for scenes minus faces were greater in the right than left medial parahippocampal cortex scene regions PHA1, PHA2, and PHA3 after Bonferroni correction.
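The stimulus‑by‑hemisphere comparison described here can be sketched as a test on the per‑participant difference of differences. This is an illustrative sketch on synthetic data, not the authors' pipeline; the region name and the effect sizes are assumptions.

```python
import numpy as np
from scipy import stats

# Sketch of the stimulus-by-hemisphere interaction test: is the
# (scenes - faces) activation difference larger on the right than the left?
# Tested as a one-sample t-test on the per-participant difference of
# differences. Data are synthetic; effect sizes are assumptions.

rng = np.random.default_rng(2)
n = 833  # participants

# Hypothetical activations for one region (e.g., PHA2)
scenes_right = rng.normal(1.5, 1.0, n)
scenes_left = rng.normal(0.8, 1.0, n)
faces_right = rng.normal(0.3, 1.0, n)
faces_left = rng.normal(0.3, 1.0, n)

interaction = (scenes_right - faces_right) - (scenes_left - faces_left)
t, p = stats.ttest_1samp(interaction, 0.0)
p_bonf = min(p * 180, 1.0)  # Bonferroni over the 180 regions
print(f"t = {t:.2f}, Bonferroni-corrected p = {p_bonf:.3g}")
```

A positive t surviving Bonferroni correction would indicate that the scenes‑minus‑faces difference is reliably larger in the right hemisphere for that region.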

3.6. Hemisphere Differences of the BOLD Signals in the Resting State Compared With Those in the Visual Tasks

The hemisphere differences for the resting state BOLD signal are shown in Figure S2, with the values shown separately for the right and left hemispheres in Figure S3. In the resting state, many cortical regions have a significantly higher BOLD signal in the left hemisphere. The regions with greater BOLD signal in the left hemisphere in the resting state include especially the inferior frontal gyrus language‐related regions, such as 44, 45, and 47l, and those closely related to them, such as IFJa, IFJp, IFSa, and IFSp (Rolls et al. 2022a), and also some more dorsolateral prefrontal cortex regions (Figure S3). These higher left hemisphere BOLD signals may relate to the fact that, with no visual stimuli or task to process in the resting state, participants may be thinking and reflecting using language cortical and prefrontal short‐term/working memory systems, which are predominant in the left hemisphere in right‐handed participants. Somewhat curiously, some of the semantic regions in the anterior temporal lobe (Rolls et al. 2022a), such as the STS regions, have a little more signal in the resting state in the right hemisphere (Figure S3). Other regions with more BOLD signal in the left hemisphere in the resting state include somatosensory cortical regions such as FOP2–5 and insular regions, and also some superior parietal regions such as area 7 and intraparietal regions, and this may relate to the right‐handedness of these participants and hence stronger representation in these left somatosensory and related cortical regions.

4. Discussion

Some of the key findings are as follows. First, when viewing scenes, activations are greater in the right hemisphere for key regions in the human ventromedial cortical scene “where” stream (Figure S4) (Rolls, Deco, Huang, et al. 2023b; Rolls 2024; Rolls, Yan, et al. 2024; Rolls and Turova 2025), VMV1–3 and PHA1–3 (Figures 2B and 3B), and the functional connectivities are consistent. Some other visual cortical regions are also more strongly activated on the right by scenes, including visual regions in the right inferior parietal gyrus (PGi, PGs, and PFm), which are not selectively activated in the right for the other three visual stimuli.

Second, when viewing faces, activations are higher in the left hemisphere for a region that can be part of the ventrolateral visual cortical “what” stream VVC (Rolls, Deco, Huang, et al. 2023b; Rolls and Turova 2025) (Figures 2A and 3A), and the FC of the FFC and the related PIT is higher on the left with VVC and with anterior temporal lobe semantic regions TE1m and TE2a. In addition, regions STSvp and STSva, STS regions implicated in face expression and face motion processing (Hasselmo, Rolls, and Baylis 1989; Hasselmo, Rolls, Baylis, et al. 1989; Pitcher and Ungerleider 2021; Rolls 2024), are more strongly activated in the left hemisphere by faces.

Third, when viewing body parts, ventrolateral visual cortical regions such as FFC, VVC, TE2p, PIT, and V8 are more strongly activated in the right hemisphere (Figures 2C and 3C), and the functional connectivities of some of the ventrolateral stream regions (Figure S4), such as TE1p and TE2p, with V1–V4 and with dorsal visual stream regions are higher on the right. Interestingly, right lateralization was also found for activations of visual motion regions MST, FST, and LO1–3 (Figure 2C), and this is remarkable because the body part stimuli were stationary, so it is just the semantic association that accounts for these activations in visual motion‐related cortical regions (Rolls et al. 2025).

Fourth, when viewing tools, ventrolateral visual cortical regions such as FFC, TE2p, and PIT are more strongly activated in the left hemisphere (Figures 2D and 3D). The visual motion regions, such as MST, FST, and LO2, also show left lateralization of their activations for these stationary stimuli with the semantic association of motion.

Fifth, the activations of many of the anterior temporal lobe regions such as TE1a, TE1m, TE2a, and the temporal pole regions TGv and TGd that are implicated in semantic representations (Rolls et al. 2022a; Rolls, Deco, Huang, et al. 2023b) are stronger on the left to most of the visual stimuli (faces, body parts, and tools), but not to scenes (Figures 2 and 3). This is consistent with the key functions of the left hemisphere in language and the connectivity of these anterior temporal lobe regions to Broca's area implicated in syntax/speech production (Rolls et al. 2022a; Rolls 2023b).

Sixth, the lateralizations of the functional connectivities were largely consistent with the activations, but additionally showed that groups of functional connectivities lateralize together, providing further evidence on computational units of the cerebral cortex.

Seventh, we do not suggest that the laterality effects described here imply that the processing takes place in only one hemisphere. Instead, we describe highly significant differences in the activations and functional connectivities between the two hemispheres, with moderate effect sizes, for different types of visual stimuli, which provide very interesting evidence about how groups of cortical regions operate together (by being grouped in one hemisphere or the other) to process different types of visual stimuli.

Eighth, for none of these four stimuli were activations higher in one hemisphere in all cortical regions, let alone all visual cortical regions. For example, even for tools and faces, for which a number of visual cortical regions were left lateralized, the activations of early visual cortical regions such as V2 and V3 were higher in the right hemisphere, as they were for all four types of visual stimuli (Figure 2). The implication is that each type of visual stimulus most strongly recruits a subset of visual cortical regions that are especially suited to performing the computations for that stimulus type, but with clusters of cortical regions that keep together in either the left or right hemisphere, as illustrated in Figure 2.

The findings described here make a number of fundamental points about cortical organization. First, whereas viewing scenes tends to activate right visual cortical regions (including VMV regions connecting to the medial parahippocampal gyrus), and viewing faces does activate some left visual cortical regions (e.g., VVC, STSvp, and STSva activations, and functional connectivities of FFC with superior parietal, inferior parietal, and some posterior cingulate regions), lateralization is not simply greater for all regions in one hemisphere: certain parts of the processing are especially lateralized in the way described. For example, for scenes, some regions have activations and FCs that are higher on the left (e.g., MT+ regions; STS, TG, and TPOJ1 semantic regions; and medial orbitofrontal cortex 13l, OFC, and pOFC); and for faces, some regions have activations and FCs that are higher on the right (e.g., V1–V4, and orbitofrontal cortex regions 13l, pOFC, and 47m; Figure 2). Cortical processing for these stimuli is far from completely greater across all regions in one hemisphere than the other. An implication for understanding laterality effects related to brain damage is that, at least in the cases described, where there are very clear and statistically significant right–left differences in cortical activations and connectivity, the actual cortical processing in typical tasks may have different components that are specialized to operate in different hemispheres.

This is highly relevant to the evidence, particularly from the effects of brain damage, indicating that damage to an occipito‐temporal region in the right hemisphere is especially associated with face recognition deficits such as prosopagnosia (Rossion and Lochy 2022). Much previous fMRI evidence of responses to faces was consistent with greater activations in the right hemisphere (Rossion and Lochy 2022), though there are some exceptions (Hervais‐Adelman et al. 2019). Moreover, left hemisphere abnormalities in face‐selective activation and FC have been found in developmental prosopagnosia (Campbell et al. 2025). In the investigation described here, the cohort of participants was very much larger than in most previous investigations, and here we found evidence for at least some greater left hemisphere activation to faces. But the present investigation provides a possible resolution to this issue. For a higher cortical region in the ventrolateral processing stream, VVC, which is adjacent to and highly connected with the FFC (Rolls, Deco, Huang, et al. 2023b), we found that activations were higher on the left than the right, and the FC evidence was consistent. In another part of the face processing system, the cortex in the STS implicated in face expression and face motion, the regions STSvp and STSva had higher activations in the left hemisphere than in the right. However, the early stages of visual cortical processing, V2–V4, had greater activations in the right hemisphere to faces (and to the other visual stimuli). The evidence from the present large‐scale investigation is thus that the different parts of the face processing system have different lateralization, with early stages right lateralized and later, higher stages left lateralized. Thus, the interpretation of the effects of brain damage on face perception may depend on exactly which parts of the whole system are most damaged in particular patients.
From the present finding, we would predict that damage to the left STS regions would be more likely to impair emotional and social responses to faces because these regions are involved in face expression and gesture (Hasselmo, Rolls, and Baylis 1989; Hasselmo, Rolls, Baylis, et al. 1989; Cheng et al. 2015; Pitcher and Ungerleider 2021; Rolls 2024).

Further possible reasons for differences from some previous studies that found greater activation to faces in the right hemisphere are as follows: the present study used a large sample of 833 participants, greater than in most previous studies; these 833 participants were selected to be right‐handed, which could be relevant; the participants had a clearly defined task, a 0‐back memory task that required careful processing of each face; we used the high‐quality surface‐based HCP‐MMP parcellation of the cortex, which enabled all 180 regions in each hemisphere to be statistically compared not only for activations but also for FC, which supported the left activations found for some key cortical regions involved in face processing; and the left lateralized regions are high‐level visual cortical regions involved in face recognition (VVC, adjacent to FFC) and in the cortex in the STS implicated in face expression and movement processing, whereas we found some right lateralization for some other cortical regions, including early visual cortical regions.

Second, the activations and FCs for these stationary visual stimuli extend far beyond visual cortical areas, and when they do, these semantic associations can be strongly lateralized. For scenes, activations are greater in the left hemisphere for what may be described as semantic associations, in that the regions are not purely visual: language‐related regions such as STSdp, TPOJ1, TGd, TGv, and 45; the medial orbitofrontal cortex 13l, OFC, and pOFC; and very many somatosensory cortical regions. Comparably, for faces, some right hemisphere semantically related activations and FCs are higher (e.g., orbitofrontal cortex regions 13l, pOFC, and 47m; visual motion regions MST and FST; and language regions TPOJ1 and TPOJ2).

An extension of these concepts is that some language‐related regions are not always more activated in the left hemisphere by these stimuli. For example, for faces, TPOJ2 and TPOJ3 are more strongly activated on the right (Figure 2A); for scenes, TPOJ3 and STV are on the right; for body parts, TPOJ1–3, STV, and SFL are on the right; and for tools, TPOJ3 is more strongly activated on the right.

Another interesting finding is that lateralization of function does appear in the orbitofrontal cortex and is different for different visual stimuli. For example, for faces, some right orbitofrontal regions are preferentially activated (13l, pOFC, and 47m), and some left (a47r and 47l); for scenes, some left orbitofrontal cortex regions are preferentially activated (OFC, 13l, and pOFC), and some right (11l, a47r, and 47l); and for body parts and tools, almost all orbitofrontal cortex regions are preferentially activated on the left (Figure 2).

Another striking finding is that whole groups of cortical regions were consistently found to have higher FC in a given hemisphere, depending on the type of stimulus. This is a useful clue to what groups of cortical regions are involved in communication with each other to perform particular computations. One group identified in this way included regions in the ventromedial visual cortical pathways, including VMV and PHA regions, which were especially activated or had FC with each other as a group in the right hemisphere when scenes were shown. Another example is the FFC and some TE areas in the left hemisphere when viewing faces. Another group is the visual inferior parietal regions PGi, PGs, and PFm, which were together right lateralized for scenes and left lateralized for faces (Figure 2). This shows that cortical regions are allocated typically as groups to have higher activations and/or FCs in one hemisphere or the other, which indicates how such groups of cortical regions are linked together for particular computations, and for that matter how they are linked as a group to other cortical regions when a particular type of computation (scene‐related, face‐related, etc.) is performed.

Some of the other findings of this investigation can be summarized as follows:

First, some cortical divisions had activations and/or functional connectivities that were higher in one hemisphere than the other, independent of the type of visual stimulus being viewed: faces, scenes, body parts, or tools. One example is the early cortical visual regions V2, V3, and V4, and also POS1, which were activated more in the right hemisphere for all four stimulus types (Figure 2). Another example is that the inferior frontal gyrus language regions 44, 45, and 47l had functional connectivities with visual cortical regions that were higher on the left (Figures 4, 5, 6, 7). Similarly, the temporal lobe STS semantic regions, such as STSda, STSdp, STSva, and STSvp (Rolls et al. 2022a), had FCs that were generally higher in the left hemisphere with other cortical regions. These regularities may be summarized as follows: the language and temporal lobe STS semantic regions consistently have higher activations and/or FCs on the left, consistent with the importance of the left hemisphere in language in right‐handed people; and early visual cortical regions, V2–V4 and POS1, have higher activations in the right hemisphere independently of stimulus type (faces, scenes, body parts, or tools).

Second, other cortical divisions and regions have activations and FCs that depend on the type of visual stimulus that is being shown. Overall, faces and tools tend to produce stronger effects in the left than the right hemisphere, and spatial scenes tend to produce stronger effects in the right hemisphere, especially in the ventromedial visual cortical stream from early visual cortical regions via ventromedial visual cortical regions (VMV1–3) and medial parahippocampal regions (PHA1–3) to the hippocampus (Rolls 2024, 2025b; Rolls, Yan, et al. 2024; Rolls and Turova 2025). Body parts activate early and intermediate visual cortical regions, such as FFC, TE2p, and VVC, more on the right, but activate anterior temporal lobe semantic cortical regions (Rolls et al. 2022a), such as TE1p, TE1m, TE2a, and STSvp, more on the left. A general summary is that the sight of most objects (faces, tools, and body parts) tends to activate higher order visual cortical regions, especially in the ventrolateral (lateral temporal lobe) and STS visual cortical streams (Rolls 2024), more on the left; whereas the sight of spatial scenes tends to activate ventromedial cortical visual stream regions (Rolls, Deco, Huang, et al. 2023b; Rolls 2024; Rolls, Yan, et al. 2024; Rolls and Turova 2025) more on the right.

Third, the activations and the hemisphere differences are very different in a task in which visual stimuli are being presented (Figures 2 and 3A–D) than in the resting state (Figure S2). Moreover, when visual stimuli are being presented, which hemisphere is more activated, and in which regions, depends on the type of visual stimulus being shown. The research described here thus shows how the lateralization of cortical processing depends on which visual stimuli are being shown, and reveals the particular regions in the high‐resolution HCP‐MMP atlas in which these asymmetries of processing are found. For example, when viewing scenes, some of the key regions activated asymmetrically are in the right hemisphere, in cortical regions in the ventromedial visual cortical stream, including medial parahippocampal regions PHA1, PHA2, and PHA3; ventromedial cortical regions VMV2, VMV3, and VVC; posterior cingulate division regions including 31a, 31pd, 31pv, 7m, d23ab, PCV, POS1, POS2, ProS, and RSC; and inferior parietal visual cortical regions PGi, PGs, and PFm (see Figures 2 and 3). The inferior parietal region activation on the right by scenes is of great interest, for it implicates this cortical region, so greatly developed in humans, in spatial processing related to scene perception. Similarly, the right hemisphere lateralization for scenes of many posterior cingulate division regions is of interest in relation to understanding the functions of the posterior cingulate cortex in processing related to episodic memory and hippocampal function (Rolls, Wirth, et al. 2023). The right lateralization for scenes in each of these ventromedial cortical visual regions, visual inferior parietal regions, and posterior cingulate regions is a key finding presented in this research.
In another example, when viewing faces, some of the key regions activated asymmetrically are in the left hemisphere, including cortical regions STSva, STSvp (implicated in face expression and motion processing Hasselmo, Rolls, and Baylis 1989; Pitcher and Ungerleider 2021; Rolls 2024), and anterior temporal lobe regions TE1m and TE2a, and these were not activated more on the left for scenes (see Figures 2 and 3).

The results described here are also consistent with the development and revolution (Rolls 2024, 2026b, 2026c) of our understanding beyond the dual stream “what” and “where” model of visual cortical function (Ungerleider and Mishkin 1982; Mishkin et al. 1983; Haxby et al. 1991; Ungerleider and Haxby 1994).

First, we have produced evidence and proposed that there is a third cortical visual pathway involving cortical regions in the STS that is involved in face expression and motion processing and that is involved in social behavior (Perrett et al. 1985; Baylis et al. 1987; Hasselmo, Rolls, and Baylis 1989; Hasselmo, Rolls, Baylis, et al. 1989; Rolls 2011, 2024, 2026a, 2026b; Rolls, Deco, Huang, et al. 2023b) (and implicated in autism Cheng et al. 2015), and that proposal has been accepted (Pitcher et al. 2019; Pitcher and Ungerleider 2021).

Second, we have identified a fourth visual cortical pathway that builds spatial view cells (Rolls et al. 1989, 1997, 1998; Feigenbaum and Rolls 1991; Rolls and O'Mara 1995; Robertson et al. 1998; Georges‐François et al. 1999; Rolls 2025a, 2025b) in the medial parahippocampal gyrus (where the parahippocampal place area, or better the parahippocampal scene area, is located: Epstein and Kanwisher 1998; Epstein 2005; Epstein and Julian 2013; Epstein and Baker 2019; Rolls 2024; Rolls, Feng, et al. 2024) for scene perception and for use in episodic memory (Rolls and Treves 2024; Rolls 2026b). This ventromedial cortical visual pathway for scenes runs from early visual cortical regions such as V2 via the ProStriate cortex (where the retrosplenial scene area is located) and the ventromedial visual cortical regions VMV1–3 to reach the medial parahippocampal regions PHA1–3 (Rolls 2024, 2025b; Rolls, Yan, et al. 2024; Rolls and Turova 2025). In a theory and model of the mechanisms by which scene representations are built with spatial view cells, classical ventral stream feature hierarchy mechanisms are supplemented by gain modulation by gaze direction (allocentric eye direction) from the dorsal visual stream (Rolls 2025b). It is somewhat revolutionary that “where” representations of locations in spatial scenes are built in a ventral visual cortical stream, the ventromedial visual cortical pathway (Rolls 2024, 2026b, 2026c). These spatial view representations are in allocentric space (Feigenbaum and Rolls 1991; Rolls and O'Mara 1995; Rolls et al. 1997, 1998; Georges‐François et al. 1999; Rolls 2025b).
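The gain modulation mechanism referred to here can be illustrated with a minimal sketch (this is not the published model of Rolls 2025b; the function name, Gaussian tuning form, and parameter values are illustrative assumptions): a ventral‐stream feature response is multiplied by a gain that depends on gaze direction, so the same retinal input yields gaze‐dependent responses from which allocentric spatial view tuning could be constructed.

```python
import numpy as np

def gain_modulated_response(feature_response, gaze_deg, preferred_gaze_deg, sigma=20.0):
    """Multiplicative gain modulation: scale a ventral-stream feature
    response by a Gaussian tuning function of gaze direction (degrees)."""
    gain = np.exp(-0.5 * ((gaze_deg - preferred_gaze_deg) / sigma) ** 2)
    return feature_response * gain

# Identical retinal feature input, two gaze directions:
r_preferred = gain_modulated_response(1.0, gaze_deg=0.0, preferred_gaze_deg=0.0)
r_shifted = gain_modulated_response(1.0, gaze_deg=40.0, preferred_gaze_deg=0.0)
# The response at the preferred gaze direction is larger than at a shifted one.
```

A population of such units with different preferred gaze directions and different feature inputs could, under this assumption, provide the gaze‐gated signals that downstream regions combine into allocentric spatial view representations.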

Third, the dorsal visual “where” stream, now one of two “where” streams, is involved in a different type of “where” processing: where actions should be made, typically in egocentric space (Goodale and Milner 1992; Milner and Goodale 1995; Gallivan and Goodale 2018; Rolls, Deco, Huang, et al. 2023b; Rolls 2026b).

We now term the “what” pathway to the temporal lobe for invariant object and face perception the ventrolateral cortical visual pathway (Rolls, Deco, Huang, et al. 2023b; Rolls 2024, 2026a, 2026b; Rolls and Turova 2025; Zhang et al. 2026), to distinguish it from the ventromedial cortical pathway for scenes. There are thus (at least) two cortical “what” pathways and two cortical “where” pathways (Rolls 2024, 2026b).

Overall, the research described here greatly extends our understanding of cortical lateralization for visual stimuli by showing, in a very large cohort of 833 participants, that there are moderate to large effect sizes for hemisphere differences in activations to scenes, faces, tools, and body parts, with specialized subgroups of regions more evident in one hemisphere than the other for each type of stimulus, and with none of the stimuli processed primarily in only one hemisphere.

Author Contributions

E.T.R. designed and led the research, performed many of the analyses, and wrote the paper. R.Z. discussed the implementation of the research, performed the statistical analyses for the activations, and made those figures. All authors approved the paper.

Funding

The authors have nothing to report.

Ethics Statement

The authors have nothing to report.

Consent

The data were from the Human Connectome Project; the WU‐Minn HCP Consortium obtained full informed consent from all participants, and research procedures and ethical guidelines were followed in accordance with the relevant institutional review boards (IRBs), with details at the HCP website http://www.humanconnectome.org/.

Conflicts of Interest

The authors declare no conflicts of interest.

Supporting information

Data S1: Supporting Information.

HBM-47-e70494-s001.pdf (2.6MB, pdf)

Acknowledgments

The neuroimaging data were provided by the Human Connectome Project, WU‐Minn Consortium (Principal Investigators: David Van Essen and Kamil Ugurbil; 1U54MH091657), funded by the 16 NIH Institutes and Centers that support the NIH Blueprint for Neuroscience Research; and by the McDonnell Center for Systems Neuroscience at Washington University. They and the participants are thanked. Details of the working memory task and the stimuli used are available at https://www.humanconnectome.org/hcp‐protocols‐ya‐task‐fmri and https://db.humanconnectome.org/app/action/ChooseDownloadResources?project=HCP_Resources&resource=Scripts&filePath=HCP_TFMRI_scripts.zip.

Contributor Information

Edmund T. Rolls, Email: edmund.rolls@oxcns.org.

Ruohan Zhang, Email: ruohan.zhang.2@warwick.ac.uk.

Data Availability Statement

The data are available at the HCP website http://www.humanconnectome.org/. Standard MATLAB functions were used to calculate the functional connectivity, to perform the paired t‐test analyses, and to perform the Bonferroni and FDR corrections for multiple comparisons.
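The analyses named above (paired t‐tests across participants, with Bonferroni and FDR corrections for multiple comparisons) were performed with standard MATLAB functions. For readers working in Python, an equivalent minimal sketch with NumPy and SciPy follows; the synthetic data and the dimensions (833 participants, 180 paired left/right regions) are illustrative assumptions, not the study's data or code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_participants, n_regions = 833, 180  # hypothetical dimensions

# Synthetic per-participant activations for paired left and right regions
left = rng.normal(0.0, 1.0, (n_participants, n_regions))
right = left + rng.normal(0.1, 1.0, (n_participants, n_regions))

# Paired t-test across participants for each region (right vs. left)
t, p = stats.ttest_rel(right, left, axis=0)

# Cohen's d for paired samples: mean difference / SD of the differences
diff = right - left
d = diff.mean(axis=0) / diff.std(axis=0, ddof=1)

# Bonferroni correction across regions
bonferroni_sig = p < 0.05 / n_regions

# Benjamini-Hochberg FDR correction across regions
order = np.argsort(p)
below = p[order] <= 0.05 * np.arange(1, n_regions + 1) / n_regions
fdr_sig = np.zeros(n_regions, dtype=bool)
if below.any():
    kmax = np.max(np.where(below)[0])
    fdr_sig[order[: kmax + 1]] = True
```

At the same alpha level, the Benjamini–Hochberg procedure rejects at least as many hypotheses as Bonferroni, so `fdr_sig` marks a superset of the regions in `bonferroni_sig`.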

References

1. Barch, D. M. , Burgess G. C., Harms M. P., et al. 2013. “Function in the Human Connectome: Task‐fMRI and Individual Differences in Behavior.” NeuroImage 80: 169–189.
2. Barkas, L. J. , Henderson J. L., Hamilton D. A., Redhead E. S., and Gray W. P.. 2010. “Selective Temporal Resections and Spatial Memory Impairment: Cue Dependent Lateralization Effects.” Behavioural Brain Research 208: 535–544.
3. Baylis, G. C. , Rolls E. T., and Leonard C. M.. 1987. “Functional Subdivisions of the Temporal Lobe Neocortex.” Journal of Neuroscience 7: 330–342.
4. Campbell, A. , Li X., Rothlein D., Esterman M., and DeGutis J.. 2025. “Left Hemisphere Abnormalities in Face‐Selective Activation and Functional Connectivity in Developmental Prosopagnosia.” Imaging Neuroscience 3: IMAG.a.971.
5. Celik, E. , Keles U., Kiremitci I., Gallant J. L., and Cukur T.. 2021. “Cortical Networks of Dynamic Scene Category Representation in the Human Brain.” Cortex 143: 127–147.
6. Cheng, W. , Rolls E. T., Gu H., Zhang J., and Feng J.. 2015. “Autism: Reduced Functional Connectivity Between Cortical Areas Involved in Face Expression, Theory of Mind, and the Sense of Self.” Brain 138: 1382–1393.
7. Cohen, J. 1992. “A Power Primer.” Psychological Bulletin 112: 155–159.
8. Dalton, M. A. , Hornberger M., and Piguet O.. 2016. “Material Specific Lateralization of Medial Temporal Lobe Function: An fMRI Investigation.” Human Brain Mapping 37: 933–941.
9. Epstein, R. 2005. “The Cortical Basis of Visual Scene Processing.” Visual Cognition 12: 954–978.
10. Epstein, R. , and Kanwisher N.. 1998. “A Cortical Representation of the Local Visual Environment.” Nature 392: 598–601.
11. Epstein, R. A. , and Baker C. I.. 2019. “Scene Perception in the Human Brain.” Annual Review of Vision Science 5: 373–397.
12. Epstein, R. A. , and Julian J. B.. 2013. “Scene Areas in Humans and Macaques.” Neuron 79: 615–617.
13. Feigenbaum, J. D. , and Rolls E. T.. 1991. “Allocentric and Egocentric Spatial Information Processing in the Hippocampal Formation of the Behaving Primate.” Psychobiology 19: 21–40.
14. Feinberg, D. A. , Moeller S., Smith S. M., et al. 2010. “Multiplexed Echo Planar Imaging for Sub‐Second Whole Brain FMRI and Fast Diffusion Imaging.” PLoS One 5: e15710.
15. Gainotti, G. 2021. “Is There a Causal Link Between the Left Lateralization of Language and Other Brain Asymmetries? A Review of Data Gathered in Patients With Focal Brain Lesions.” Brain Sciences 11: 1644.
16. Gallivan, J. P. , and Goodale M. A.. 2018. “The Dorsal “Action” Pathway.” In Handbook of Clinical Neurology, edited by Vallar G., and Branch Coslett H., vol. 151, 449–466. Elsevier.
17. Georges‐François, P. , Rolls E. T., and Robertson R. G.. 1999. “Spatial View Cells in the Primate Hippocampus: Allocentric View Not Head Direction or Eye Position or Place.” Cerebral Cortex 9: 197–212.
18. Glasser, M. F. , Coalson T. S., Robinson E. C., et al. 2016. “A Multi‐Modal Parcellation of Human Cerebral Cortex.” Nature 536: 171–178.
19. Glasser, M. F. , Smith S. M., Marcus D. S., et al. 2016. “The Human Connectome Project's Neuroimaging Approach.” Nature Neuroscience 19: 1175–1187.
20. Goodale, M. A. , and Milner A. D.. 1992. “Separate Visual Pathways for Perception and Action.” Trends in Neurosciences 15: 20–25.
21. Hasselmo, M. E. , Rolls E. T., and Baylis G. C.. 1989. “The Role of Expression and Identity in the Face‐Selective Responses of Neurons in the Temporal Visual Cortex of the Monkey.” Behavioural Brain Research 32: 203–218.
22. Hasselmo, M. E. , Rolls E. T., Baylis G. C., and Nalwa V.. 1989. “Object‐Centred Encoding by Face‐Selective Neurons in the Cortex in the Superior Temporal Sulcus of the Monkey.” Experimental Brain Research 75: 417–429.
23. Haxby, J. V. , Grady C. L., Horwitz B., et al. 1991. “Dissociation of Object and Spatial Visual Processing Pathways in Human Extrastriate Cortex.” Proceedings of the National Academy of Sciences of the United States of America 88: 1621–1625.
24. Hervais‐Adelman, A. , Kumar U., Mishra R. K., et al. 2019. “Learning to Read Recycles Visual Cortical Networks Without Destruction.” Science Advances 5: eaax0262.
25. Huang, C. C. , Rolls E. T., Feng J., and Lin C. P.. 2022. “An Extended Human Connectome Project Multimodal Parcellation Atlas of the Human Cortex and Subcortical Areas.” Brain Structure & Function 227: 763–778.
26. Kanwisher, N. , McDermott J., and Chun M. M.. 1997. “The Fusiform Face Area: A Module in Human Extrastriate Cortex Specialized for Face Perception.” Journal of Neuroscience 17: 4302–4311.
27. Kolster, H. , Peeters R., and Orban G. A.. 2010. “The Retinotopic Organization of the Human Middle Temporal Area MT/V5 and Its Cortical Neighbors.” Journal of Neuroscience 30: 9801–9820.
28. Labache, L. , Ge T., Yeo B. T. T., and Holmes A. J.. 2023. “Language Network Lateralization Is Reflected Throughout the Macroscale Functional Organization of Cortex.” Nature Communications 14: 3405.
29. Liu, J. , Harris A., and Kanwisher N.. 2010. “Perception of Face Parts and Face Configurations: An FMRI Study.” Journal of Cognitive Neuroscience 22: 203–211.
30. Ma, Q. , Rolls E. T., Huang C.‐C., Cheng W., and Feng J.. 2022. “Extensive Cortical Functional Connectivity of the Human Hippocampal Memory System.” Cortex 147: 83–101.
31. Milner, A. D. , and Goodale M. A.. 1995. The Visual Brain in Action. Oxford University Press.
32. Mishkin, M. , Ungerleider L. G., and Macko K. A.. 1983. “Object Vision and Spatial Vision: Two Cortical Pathways.” Trends in Neurosciences 6: 414–417.
33. Moeller, S. , Yacoub E., Olman C. A., et al. 2010. “Multiband Multislice GE‐EPI at 7 Tesla, With 16‐Fold Acceleration Using Partial Parallel Imaging With Application to High Spatial and Temporal Whole‐Brain fMRI.” Magnetic Resonance in Medicine 63: 1144–1153.
34. Natu, V. S. , Arcaro M. J., Barnett M. A., et al. 2021. “Sulcal Depth in the Medial Ventral Temporal Cortex Predicts the Location of a Place‐Selective Region in Macaques, Children, and Adults.” Cerebral Cortex 31: 48–61.
35. Oldfield, R. C. 1971. “The Assessment and Analysis of Handedness: The Edinburgh Inventory.” Neuropsychologia 9: 97–113.
36. Perrett, D. I. , Smith P. A., Mistlin A. J., et al. 1985. “Visual Analysis of Body Movements by Neurones in the Temporal Cortex of the Macaque Monkey: A Preliminary Report.” Behavioural Brain Research 16: 153–170.
37. Pitcher, D. , Ianni G., and Ungerleider L. G.. 2019. “A Functional Dissociation of Face‐, Body‐ and Scene‐Selective Brain Areas Based on Their Response to Moving and Static Stimuli.” Scientific Reports 9: 8242.
38. Pitcher, D. , and Ungerleider L. G.. 2021. “Evidence for a Third Visual Pathway Specialized for Social Perception.” Trends in Cognitive Sciences 25: 100–110.
39. Piza, D. B. , Corrigan B. W., Gulli R. A., et al. 2024. “Primacy of Vision Shapes Behavioral Strategies and Neural Substrates of Spatial Navigation in Marmoset Hippocampus.” Nature Communications 15: 4053.
40. Power, J. D. , Cohen A. L., Nelson S. M., et al. 2011. “Functional Network Organization of the Human Brain.” Neuron 72: 665–678.
41. Quin‐Conroy, J. E. , Bayliss D. M., Daniell S. G., and Badcock N. A.. 2024. “Patterns of Language and Visuospatial Functional Lateralization and Cognitive Ability: A Systematic Review.” Laterality 29: 63–96.
42. Robertson, R. G. , Rolls E. T., and Georges‐François P.. 1998. “Spatial View Cells in the Primate Hippocampus: Effects of Removal of View Details.” Journal of Neurophysiology 79: 1145–1156.
43. Rolls, E. T. 2011. “Face Neurons.” In The Oxford Handbook of Face Perception, edited by Calder A. J., Rhodes G., Johnson M. H., and Haxby J. V., 51–75. Oxford University Press.
44. Rolls, E. T. 2012. “Invariant Visual Object and Face Recognition: Neural and Computational Bases, and a Model, VisNet.” Frontiers in Computational Neuroscience 6: 35.
45. Rolls, E. T. 2021. “Learning Invariant Object and Spatial View Representations in the Brain Using Slow Unsupervised Learning.” Frontiers in Computational Neuroscience 15: 686239.
46. Rolls, E. T. 2023a. “Emotion, Motivation, Decision‐Making, the Orbitofrontal Cortex, Anterior Cingulate Cortex, and the Amygdala.” Brain Structure & Function 228: 1201–1257.
47. Rolls, E. T. 2023b. Brain Computations and Connectivity. Oxford University Press, Open Access.
48. Rolls, E. T. 2024. “Two What, Two Where, Visual Cortical Streams in Humans.” Neuroscience and Biobehavioral Reviews 160: 105650.
49. Rolls, E. T. 2025a. “Hippocampal Discoveries: Spatial View Cells, Connectivity, and Computations for Memory and Navigation, in Primates Including Humans.” Hippocampus 35: e23666.
50. Rolls, E. T. 2025b. “A Theory and Model of Scene Representations With Hippocampal Spatial View Cells.” Hippocampus 35: e70013.
51. Rolls, E. T. 2026a. Neuroscience Discoveries. MIT Press.
52. Rolls, E. T. 2026b. Brain Computations and Principles; and AI. Oxford University Press, Open Access.
53. Rolls, E. T. 2026c. “Hippocampal Revolutions.” Neuroscience and Biobehavioral Reviews 180: 106492.
54. Rolls, E. T. , Deco G., Huang C. C., and Feng J.. 2022a. “The Effective Connectivity of the Human Hippocampal Memory System.” Cerebral Cortex 32: 3706–3725.
55. Rolls, E. T. , Deco G., Huang C. C., and Feng J.. 2022b. “The Human Orbitofrontal Cortex, vmPFC, and Anterior Cingulate Cortex Effective Connectome: Emotion, Memory, and Action.” Cerebral Cortex 33: 330–356.
56. Rolls, E. T. , Deco G., Huang C. C., and Feng J.. 2022c. “The Human Language Effective Connectome.” NeuroImage 258: 119352.
57. Rolls, E. T. , Deco G., Huang C. C., and Feng J.. 2023a. “The Human Posterior Parietal Cortex: Effective Connectome, and Its Relation to Function.” Cerebral Cortex 33: 3142–3170.
58. Rolls, E. T. , Deco G., Huang C. C., and Feng J.. 2023b. “Human Amygdala Compared to Orbitofrontal Cortex Connectivity, and Emotion.” Progress in Neurobiology 220: 102385.
59. Rolls, E. T. , Deco G., Huang C. C., and Feng J.. 2023c. “Prefrontal and Somatosensory‐Motor Cortex Effective Connectivity in Humans.” Cerebral Cortex 33: 4939–4963.
60. Rolls, E. T. , Deco G., Huang C. C., and Feng J.. 2023d. “Multiple Cortical Visual Streams in Humans.” Cerebral Cortex 33: 3319–3349.
61. Rolls, E. T. , Deco G., Huang C. C., and Feng J.. 2024. “The Connectivity of the Human Frontal Pole Cortex, and a Theory of Its Involvement in Exploit Versus Explore.” Cerebral Cortex 34: 1–19.
62. Rolls, E. T. , Deco G., Zhang Y., and Feng J.. 2023. “Hierarchical Organization of the Human Ventral Visual Streams Revealed With Magnetoencephalography.” Cerebral Cortex 33: 10686–10701.
63. Rolls, E. T. , Feng J., and Zhang R.. 2024. “Selective Activations and Functional Connectivities to the Sight of Faces, Scenes, Body Parts and Tools in Visual and Non‐Visual Cortical Regions Leading to the Human Hippocampus.” Brain Structure & Function 229: 1471–1493.
64. Rolls, E. T. , Huang C. C., Lin C. P., Feng J., and Joliot M.. 2020. “Automated Anatomical Labelling Atlas 3.” NeuroImage 206: 116189.
65. Rolls, E. T. , Huang Y., and Huang C. C.. 2026. “Cortical Folding.”
66. Rolls, E. T. , Joliot M., and Tzourio‐Mazoyer N.. 2015. “Implementation of a New Parcellation of the Orbitofrontal Cortex in the Automated Anatomical Labeling Atlas.” NeuroImage 122: 1–5.
67. Rolls, E. T. , Miyashita Y., Cahusac P. M. B., et al. 1989. “Hippocampal Neurons in the Monkey With Activity Related to the Place in Which a Stimulus Is Shown.” Journal of Neuroscience 9: 1835–1845.
68. Rolls, E. T. , and O'Mara S. M.. 1995. “View‐Responsive Neurons in the Primate Hippocampal Complex.” Hippocampus 5: 409–424.
69. Rolls, E. T. , Rauschecker J. P., Deco G., Huang C. C., and Feng J.. 2023. “Auditory Cortical Connectivity in Humans.” Cerebral Cortex 33: 6207–6227.
70. Rolls, E. T. , Robertson R. G., and Georges‐François P.. 1997. “Spatial View Cells in the Primate Hippocampus.” European Journal of Neuroscience 9: 1789–1794.
71. Rolls, E. T. , and Treves A.. 2024. “A Theory of Hippocampal Function: New Developments.” Progress in Neurobiology 238: 102636.
72. Rolls, E. T. , Treves A., Robertson R. G., Georges‐François P., and Panzeri S.. 1998. “Information About Spatial View in an Ensemble of Primate Hippocampal Cells.” Journal of Neurophysiology 79: 1797–1813.
73. Rolls, E. T. , and Turova T. S.. 2025. “Visual Cortical Networks for ‘What’ and ‘Where’ to the Human Hippocampus Revealed With Dynamical Graphs.” Cerebral Cortex 35: bhaf106.
74. Rolls, E. T. , Wirth S., Deco G., Huang C. C., and Feng J.. 2023. “The Human Posterior Cingulate, Retrosplenial and Medial Parietal Cortex Effective Connectome, and Implications for Memory and Navigation.” Human Brain Mapping 44: 629–655.
75. Rolls, E. T. , Yan X., Deco G., Zhang Y., Jousmaki V., and Feng J.. 2024. “A Ventromedial Visual Cortical ‘Where’ Stream to the Human Hippocampus for Spatial Scenes Revealed With Magnetoencephalography.” Communications Biology 7: 1047.
76. Rolls, E. T. , Zhang C., and Feng J.. 2025. “Slow Semantic Learning in the Cerebral Cortex, and Its Relation to the Hippocampal Episodic Memory System.” Cerebral Cortex 35: bhah107.
77. Rolls, E. T. , Zhang R., Deco G., Vatansever D., and Feng J.. 2024. “Selective Brain Activations and Connectivities Related to the Storage and Recall of Human Object‐Location, Reward‐Location, and Word‐Pair Episodic Memories.” Human Brain Mapping 45: e70056.
78. Rossion, B. , and Lochy A.. 2022. “Is Human Face Recognition Lateralized to the Right Hemisphere due to Neural Competition With Left‐Lateralized Visual Word Recognition? A Critical Review.” Brain Structure & Function 227: 599–629.
79. Schachter, S. C. , Ransil B. J., and Geschwind N.. 1987. “Associations of Handedness With Hair Color and Learning Disabilities.” Neuropsychologia 25: 269–276.
80. Ungerleider, L. G. , and Haxby J. V.. 1994. “‘What’ and ‘Where’ in the Human Brain.” Current Opinion in Neurobiology 4: 157–165.
81. Ungerleider, L. G. , and Mishkin M.. 1982. “Two Cortical Visual Systems.” In Analysis of Visual Behavior, edited by Ingle D. J., Goodale M. A., and Mansfield R. J. W., 549–586. MIT Press.
82. Van Essen, D. C. , Smith S. M., Barch D. M., et al. 2013. “The WU‐Minn Human Connectome Project: An Overview.” NeuroImage 80: 62–79.
83. Weiner, K. S. , and Grill‐Spector K.. 2013. “Neural Representations of Faces and Limbs Neighbor in Human High‐Level Visual Cortex: Evidence for a New Organization Principle.” Psychological Research 77: 74–97.
84. Zhang, C. , Rolls E. T., and Feng J.. 2026. “Invariant Visual Object and Face Learning in the Ventral Cortical Visual Pathway: A Biologically Plausible Model.” PLoS Computational Biology 22: e1013959.
