
This is a preprint.

It has not yet been peer reviewed by a journal.

[Preprint]. 2024 Sep 27:2024.09.25.615084. [Version 1] doi: 10.1101/2024.09.25.615084

Retinotopic coding organizes the opponent dynamic between internally and externally oriented brain networks

Adam Steel 1,2,3,6,*, Peter A Angeli 3, Edward H Silson 4, Caroline E Robertson 3,5,**
PMCID: PMC11463438  PMID: 39386717

Abstract

How the human brain integrates internally- (i.e., mnemonic) and externally-oriented (i.e., perceptual) information is a long-standing puzzle in neuroscience. In particular, the internally-oriented networks like the default network (DN) and externally-oriented dorsal attention networks (dATNs) are thought to be globally competitive, which implies DN disengagement during cognitive states that drive the dATNs and vice versa. If these networks are globally opposed, how is internal and external information integrated across these networks? Here, using precision neuroimaging methods, we show that these internal/external networks are not as dissociated as traditionally thought. Using densely sampled high-resolution fMRI data, we defined individualized whole-brain networks from participants at rest, and the retinotopic preferences of individual voxels within these networks during an independent visual mapping task. We show that while the overall network activity between the DN and dATN is opponent at rest, a latent retinotopic code structures this global opponency. Specifically, the anti-correlation (i.e., global opponency) between the DN and dATN at rest is structured at the voxel-level by each voxel’s retinotopic preferences, such that the spontaneous activity of voxels preferring similar visual field locations are more anti-correlated than those that prefer different visual field locations. Further, this retinotopic scaffold integrates with the domain-specific preferences of subregions within these networks, enabling efficient, parallel processing of retinotopic and domain-specific information. Thus, DN and dATN dynamics are opponent, but not competitive: voxel-scale anti-correlation between these networks preserves and encodes information in the negative BOLD responses, even in the absence of visual input or task demands. These findings suggest that retinotopic coding may serve as a fundamental organizing principle for brain-wide communication, providing a new framework for understanding how the brain balances and integrates internal cognition with external perception.


A fundamental goal of neuroscience is to understand how activity distributed across the brain’s functional networks gives rise to cognition1–8. Central to this aim is understanding principles that govern interactions between different brain networks, particularly those involved in externally-oriented attention (e.g., processing sensory input) and internally-oriented attention (e.g., introspection and memory)9–12. This knowledge gap confounds our understanding of human cognition: the interaction between internally- and externally-oriented neural systems is foundational to how we perceive, remember, and navigate our world, yet the ‘common language’ facilitating their communication remains a mystery.

One reason this knowledge gap exists is that the neural systems that subserve internally- and externally-oriented attention, such as the Default Network (DN) and dorsal attention network (dATN) respectively, are typically considered in competition9–11. Seminal neuroimaging studies investigating externally-oriented attention (visual processing, working memory, etc.) showed that visual tasks reliably activate brain areas in lateral occipital temporal cortex, dorsal parietal cortex, and prefrontal cortex, now collectively referred to as the dATN9,13,14. In contrast, visual tasks reliably deactivate regions in the internally-oriented DN, including lateral and medial parietal cortex, anterior temporal lobe, and medial prefrontal cortex12,15. This pattern reverses during introspective tasks, e.g., scene construction or theory of mind tasks: the DN systematically activates and the dATN deactivates6,16–20. Together, this is thought to reflect a network-level opponency: these networks are thought to globally inhibit each other, possibly to facilitate attending to external stimuli without competition from internal mnemonic representations, and vice versa9,11,21,22. However, if internally- and externally-oriented networks compete, it is not clear how the brain accomplishes tasks that require integrating perceptual and mnemonic information (e.g., anticipatory saccades, memory-based attention tasks, and mental imagery).

Two recent findings have shed light on this question. First, while the internally-oriented DN is traditionally thought to use an abstract or semantic neural code2,22–24, recent work suggests that a neural code that is typically associated with externally-oriented processing -- retinotopy -- also manifests in the DN25–28. Crucially, compared to classic visually responsive areas, including the posterior portion of the dATN, the visual response of the DN is an “inverted retinotopic code.” Specifically, while stimulation of the retina causes typical visual areas to increase neural activity in a position-dependent manner, the DN exhibits position-specific decreases in activity25,27,28. Thus, the DN’s deactivation reflects specific properties of the attended external stimulus and is informative during visual attention tasks.

Second, an inverted retinotopic code has recently been shown to play a functional role in structuring mnemonic-perceptual interactions at the cortical apex28. During familiar scene processing, activity in brain areas specialized for scene perception and memory differentially increases during perception and recall tasks, respectively. However, at the voxel level, these areas exhibit an interlocked retinotopically-specific opponent dynamic. In other words, stimuli in a specific visual field location activate perception voxels and suppress memory area voxels monitoring that location. This pattern reverses during memory tasks. This challenges the traditional view of internally- and externally-oriented brain networks as functionally-opposed via global opponent dynamics, suggesting instead that they are mutually engaged in a common information processing stream. It also emphasizes the importance of voxel-wise activity patterns in uncovering neural codes that underpin global opponent dynamics.

Based on these findings, we reasoned that opponent retinotopic coding could be a widespread mechanism that scaffolds global opponent interactions across large-scale internally- and externally-oriented brain networks29. We leveraged a high-resolution 7T fMRI dataset30 and voxel-wise modeling to test whether retinotopic coding structures the global opponent dynamic between the DN and dATN. We tested three hypotheses. First, the voxel-wise retinotopic push-pull dynamic observed in the highly-specialized subareas of the DN/dATN during visual tasks28 will generalize to the overall networks’ spontaneous neural activity, even in the absence of experimenter-imposed task demands. Second, among subareas of the DN/dATN, retinotopic coding will be integrated (multiplexed) with an area’s functional domain to allow fine-grained control of information exchange across cortex. Finally, retinotopy should be intrinsic throughout the brain, even in areas associated with internal attention. Thus, the retinotopic code will be evident in both top-down and bottom-up interactions between perceptual and mnemonic areas. Together, these results would suggest retinotopy is a unifying framework organizing brain-wide information processing and internal versus external attention dynamics.

Retinotopic coding in internally and externally oriented networks

To investigate the role of retinotopic coding in structuring activity between internally- and externally-oriented brain networks, we used voxel-wise visual population receptive field (pRF) modeling and resting-state fMRI data from the Natural Scenes Dataset30 (Fig. 1A). We assessed the retinotopic responsiveness of all voxels in the brain by modeling BOLD activity in response to a sweeping bar stimulus31. Then, we used resting-state fMRI data to identify each participant’s individually parcellated cortical networks8,32 and assess the retinotopic biases of voxels within these networks (Fig. 1B; Fig. S1). We considered any voxel with >8% variance explained by our pRF model to be exhibiting a retinotopic code, and hereafter we refer to these voxels as “pRFs”. pRF amplitude maps for all participants are shown in Fig. S2. Because we did not observe any hemispheric differences in any pRF features or subsequent analyses (ps > 0.55), data are presented collapsed across hemispheres.
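For concreteness, the voxel-selection step above can be sketched in a few lines of Python/NumPy. This is an illustrative reconstruction, not the paper's actual pipeline (the pRF fits themselves were produced with AFNI; see Methods), and the array names are hypothetical:

```python
import numpy as np

def classify_prfs(r2, amplitude, r2_threshold=0.08):
    """Label voxels as +pRFs or -pRFs from pRF model fits.

    r2        : (n_voxels,) variance explained by the pRF model (0-1)
    amplitude : (n_voxels,) signed pRF amplitude from the model fit
    Returns boolean masks for retinotopic, positive, and negative voxels.
    """
    retinotopic = r2 > r2_threshold           # >8% variance explained
    pos_prfs = retinotopic & (amplitude > 0)  # positive BOLD response (+pRF)
    neg_prfs = retinotopic & (amplitude < 0)  # inverted response (-pRF)
    return retinotopic, pos_prfs, neg_prfs

# Toy usage with random values, for illustration only
rng = np.random.default_rng(0)
retino, pos, neg = classify_prfs(rng.uniform(0, 0.5, 1000), rng.normal(0, 1, 1000))
print(f"{pos.sum()} +pRFs and {neg.sum()} -pRFs among {retino.sum()} retinotopic voxels")
```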

Fig. 1.


Inversion of retinotopic coding between externally- and internally-oriented networks in the human brain. A. Population receptive field (pRF) modeling with fMRI. A visual pRF model was fit for all participants to establish visual responsiveness and visual field preferences for each voxel. Voxels with positive BOLD responses to the visual stimulus are referred to as positive pRFs (+pRFs), and those with negative BOLD responses to visual stimulation are referred to as negative pRFs (-pRFs). B. Individualized resting-state network parcellation. Resting-state fMRI was collected in all participants and used to derive individualized cortical network parcellations. Participants (N=7) had between 34–102 minutes of resting-state data. Networks were parcellated33 using the multi-session hierarchical Bayesian modelling approach8,32 with the Yeo 15 HCP atlas33 as a prior. C. Task-negative and task-positive (internally/externally oriented) brain networks contain differential concentrations of +/−pRFs. Bars show the concentration of −pRFs within each individual’s cortical networks. The Default Networks A and B (DN-A/B), canonically internally-oriented, contained the highest proportion of -pRFs, while the Dorsal Attention Networks (dATN-A/B), canonically externally-oriented, contained the highest proportion of +pRFs.

Consistent with prior reports25, we observed a strong distinction in the visual response of canonical internally- and externally-oriented brain networks, the Default Networks A-B and the Dorsal Attention Networks A-B, respectively (hereafter, DNs and dATNs) (Fig. 1C). On average, 55.3% of dATN voxels were retinotopic and 26.95% of DN voxels were retinotopic, consistent with their role as externally- and internally-oriented networks, respectively. However, the nature of retinotopy in these two networks differed mainly in amplitude (i.e., whether a stimulus in their preferred visual field location evoked a positive or a negative BOLD response) (Fig. 1C). On average, more than half of pRFs in the DNs were inverted (i.e., had a negative BOLD response to visual stimulation in their population receptive field, -pRFs). In contrast, less than 20% of the voxels in the dATN were −pRFs (i.e., the majority of the voxels had positive BOLD responses to visual stimulation in their receptive field, +pRFs). This distinction is particularly remarkable given the proximity of the dATN and DN clusters in posterior cerebral cortex (Fig. 1B). The response amplitude of these pRF populations was reliable (Fig. S3). Together, these results establish the necessary foundation to assess whether the voxel-wise retinotopic code structures the global opponent dynamic between the internally- and externally-oriented networks.

To test whether the retinotopic code structured the opponent interaction between the internally- and externally-oriented brain networks at a voxel level, we examined whether the visual field preferences of individual voxels within these networks predicted their anti-correlation at rest. In other words, would the opponent dynamic between voxels in the DNs and dATNs be stronger for voxels with similar visual field preferences? During resting-state fMRI, participants simply fixate, with no externally imposed stimulus (aside from a fixation cross)34. This allowed us to examine whether the retinotopic code structures the intrinsic organization of neural interactions absent an experimentally imposed external or internal task4,35. Resting-state data were preprocessed using ICA with manual noise component selection36,37, and no global signal regression was performed38.

For each pRF in the DNs and dATNs, we calculated the pairwise distance between the RF center position (x, y parameter estimates) of −pRFs in the DNs and +pRFs in dATNs (Fig. 2A). For each DN -pRF, we found the 10 closest (matched) and furthest (anti-matched) dATN +pRF centers. Interestingly, matched pRFs were distributed across the dATN, spanning posterior areas in retinotopic cortex through prefrontal cortical regions not typically associated with visual analysis (Fig. S4). We then averaged these matched and anti-matched pRFs’ resting-state time series together, and we compared the correlation of these matched/anti-matched time series with the time series from the DN −pRFs (see Methods). This resulted in two correlation values per resting-state run, which represented the correlation of the average matched and anti-matched pRFs’ time series between these areas. Importantly, the primary statistics relevant to the conclusions in the manuscript replicated when considering randomly sampled pRF pairs, and therefore our conclusions do not depend on matching only the bottom 10 voxels in a specific region or network (see Supplemental Methods and Results).

Fig. 2.


Retinotopic coding organizes spontaneous interaction between internally and externally oriented brain networks. A. Spatially-matched pRFs in dATNs and DNs. We assessed the influence of retinotopic coding on the interaction between internally- and externally-oriented brain areas’ spontaneous activity during resting-state fMRI, by comparing the correlation in activation between pRFs in these networks that represent similar regions of visual space. For each −DN pRF, we established the top 10 closest +dATN pRF voxels’ centers (“matched”) and the 10 furthest pRF centers (“antimatched”). In each resting-state fMRI scan, we extracted the average time series from −DN pRFs and correlated that time series with the average time series from the +dATN matched and antimatched pRFs. We repeated this procedure for all resting-state runs in all participants. Plot shows one example resting-state time series from a participant’s -DN, +dATN matched and +dATN antimatched pRFs and the associated correlation values. B. Spatially matched −DN/+dATN pRFs have a greater opponent interaction than anti-matched pRFs, showing that opponent dynamics depend on retinotopic preferences. Histogram shows the distribution in correlation values between matched (dark green) and antimatched (light green) pRF pairs for each resting-state run in all participants, which were significantly different (matched versus antimatched: D(392)=0.245, p<0.001). Bar plot shows the average correlation for each participant (matched versus anti-matched: t(6)=3.49, p=0.011). C. DN subnetworks A and B both evidenced a retinotopic opponent interaction (DN-A: t(6)=3.02, p=0.023, DN-B: t(6)=2.72, p=0.034; DN-A vs. DN-B: t(6)=1.64, p<0.15), although the opponency was stronger overall in DN-A compared to DN-B (difference in average correlation between −DN/+dATN pRFs: t(6)=5.41, p=0.002).

Retinotopic coding scaffolds DN and dATN interactions

If the −DN and +dATN opponent interaction is scaffolded by a retinotopic code, the correlation between spatially matched pRFs should be significantly more negative than the anti-matched pRFs. Our results were consistent with this hypothesis. We observed a negative correlation between both matched and anti-matched −DN and +dATN pRFs in the overwhelming majority of resting state runs (Fig. 2B, left). Critically, the distribution of matched +dATN and −DN pRFs was significantly shifted compared to anti-matched pRFs (D(392)=0.22, p<0.001), confirming an overall stronger negative correlation, and thus a stronger opponent interaction, between matched compared to anti-matched pRFs. The stronger opponent interaction for matched versus anti-matched pRFs was clear when resting-state runs were averaged within each participant (7/7 participants; t(6)=3.63, p=0.010, Fig. 2B, right). Interestingly, retinotopy played a similarly strong role in structuring the opponent dynamic between the dATN and both subnetworks of the DN (DN-A: t(6)=3.02, p=0.023, DN-B: t(6)=2.72, p=0.034; DN-A vs. DN-B: t(6)=1.511, p=0.182), although the opponency was stronger overall in DN-A compared to DN-B (t(6)=5.532, p=0.002). Importantly, tSNR of the matched and anti-matched pRFs’ resting-state time series did not differ (t(6)=1.007, p=0.353), suggesting that differences in signal quality did not underlie the difference in correlation. Together, these results suggest that the retinotopic code plays a role in structuring the opponent dynamic between the brain’s internally- and externally-oriented cortical networks.

Retinotopic coding organizes activity within functional domains

Our results so far clearly establish that retinotopic coding structures spontaneous interactions between internally- and externally-oriented neural systems in the absence of task demands. However, high-level cortical areas generally associate into networks based on their functional domain. For example, within the visual system, brain areas with differing retinotopic preferences (e.g., the scene-selective areas on the lateral and ventral surfaces of the brain)39,40 nevertheless form functional networks based on their apparent preference for specific visual categories (e.g., faces, objects, or scenes)41,42. This raises a crucial question: do retinotopic and domain-specific organizational principles interact to facilitate or constrain information flow across internally- and externally-oriented networks? Addressing this question would shed light on mechanisms that enable the brain to integrate information while maintaining functional specialization.

To address this question, we focused on the functional interplay between a set of areas in posterior cerebral cortex that are established models for mnemonic and visual processing in the domains of scene and face perception. Specifically, we considered the mnemonic lateral place memory area (LPMA43,44), an area on the brain’s lateral surface that is implicated in processing mnemonic information relevant to visual scenes, located at the border between DN-A and the dATNs (Fig. 3A). We examined how LPMA activation co-fluctuates with the adjacent scene-perception area on the brain’s lateral surface, the “occipital place area” (OPA45,46), compared to two face-perception regions on the lateral and ventral surfaces, the occipital and fusiform face areas (OFA and FFA47,48). At a group level, these perceptual regions are situated within the dATNs and are at the same level of the visual hierarchy (Fig. 3A), but they are differentially associated with the domains of scene (OPA) and face (OFA, FFA) processing, making them ideal model systems to examine the impact of domain-specificity and retinotopic coding in organizing neural activity.

Fig. 3.


Retinotopic coding structures the spontaneous opponent interaction between functionally-coupled mnemonic and perceptual areas during resting-state fMRI. A. Isolating functionally coupled internally- and externally-oriented brain areas within the DNs and dATNs. We established brain areas relevant to two cognitive domains, visual analysis of 1) scenes and 2) faces. Specifically, we focused on a memory area in the domain of scene perception (the lateral place memory area (LPMA; from43), white) at the posterior edge of the DN-A (purple; from33). We examined LPMA’s relationship to a set of perceptual areas in the dATN (green; from33), 1) the occipital place area (OPA; from43), an area within the domain of scene perception, along with 2) the occipital face area (OFA) and 3) the fusiform face area (FFA), two areas involved in the domain of face perception (white; from51). B-C. We localized LPMA in all participants by contrasting the correlation in resting-state activity between anterior and posterior parahippocampal place area (PPA) (B). This yielded a region in lateral occipital-parietal cortex that overlapped with the LPMA defined in an independent group of participants (C). D-E. Consistent with prior work, the connectivity-defined LPMA had greater concentration of −pRFs compared to OPA (D), and exhibited a lower visual field bias to OPA (E), consistent with an opponent interaction between these areas during perception. F. We assessed the influence of retinotopic coding on the interaction between −pRFs in mnemonic and +pRFs in perceptual areas using the same pRF matching and correlation procedure described above. We compared pRFs within functional domain (scene memory × perception –LPMA to OPA) as well as across domains (scene memory × face perception –LPMA to the occipital face area (OFA) and fusiform face area (FFA)). G. Within functional domain opponent interaction reflects voxel-wise retinotopic coding. We observed a stronger negative correlation between matched compared to anti-matched −LPMA/+OPA pRFs (-LPMA × +OPA matched versus anti-matched pRFs: D(392)=0.22, p<0.001). H. Retinotopic coding did not impact the interaction between areas across functional domains. We found no significant difference between matched and anti-matched pRFs between the scene memory area LPMA and the face perception areas FFA and OFA (-LPMA × +OFA: D(392)=0.09, p=0.44; −LPMA × +FFA: D(392)=0.11, p=0.15). Histograms depict the distribution of correlation values between matched (dark) and antimatched (light) pRFs for all runs in all participants. I. Retinotopic coding organizes interactions within a domain, but not across domains. When the correlation values were averaged within each participant, we observed a significant difference between matched versus anti-matched pRFs within functional domain (-LPMA × +OPA: t(6)=4.45, p=0.004) but not across domains (-LPMA × +OFA: t(6)=1.04, p=0.34; −LPMA x +FFA: t(6)=1.48, p=0.188).

We defined the mnemonic area LPMA in the NSD participants using resting-state data by contrasting functional connectivity between the anterior and posterior halves of the parahippocampal place area49,50 (Fig. 3B), which revealed a cluster in lateral parietal cortex with a similar topographic profile as LPMA based on a group analysis from our prior work43 (Fig. 3C). Replicating our previous findings28, this connectivity-defined LPMA had a higher concentration of robust −pRFs compared to OPA (t(7)=5.26, p=0.002) (Fig. 3D, Supplemental Fig. S3, S5) and exhibited a lower visual field bias similar to OPA (OPA: 8/8 participants, t(7)=3.13, p=0.016; LPMA: 7/8 participants, t(7)=2.11, p=0.07; OPA v LPMA: t(7)=0.441, p=0.67) (Fig. 3E).

We first considered whether the retinotopic opponent dynamic we have previously shown in the domain of scenes (i.e., between −LPMA and +OPA pRFs) during perceptual and mnemonic tasks28 was also present at rest (Fig. 3F). Like the overall DN and dATN networks, we found that −LPMA and +OPA pRFs are interlocked in a retinotopically-grounded opponent interaction. Resting-state activity of −LPMA and +OPA pRFs was reliably negatively correlated, and this negative correlation was stronger for matched compared with anti-matched pRFs (K-S test: D(392)=0.22, p<0.001; t(6)=4.45, p=0.004) (Fig. 3I; Fig. S6). This pattern was consistent when matched/anti-matched pRFs were equated for eccentricity and size (matched vs. anti-matched: D(392)=0.148, p=0.028; t(6)=2.03, p=0.087; 5/7 participants) (Fig. S7). Importantly, matched and anti-matched +OPA pRFs’ resting-state tSNR (t(6)=1.922, p=0.103) and variance explained by the pRF model (t(6)=0.47, p=0.65) did not differ, suggesting that idiosyncratic voxels did not drive these results. Additionally, the opponent interaction was specific to −pRFs in LPMA: the activity of the best matched +pRFs in LPMA and OPA was positively correlated, and significantly more so than anti-matched +pRFs in LPMA and OPA (t(6)=3.22, p=0.018; Supplementary Fig. S8). Taken together, these results show that the retinotopic code scaffolds the spontaneous interaction between perceptual and mnemonic brain areas within a functional domain, conceptually replicating our previous findings from task-fMRI28.

Having established that the retinotopic opponent dynamic is present among functionally-paired brain areas within the domain of scenes, we next tested whether this opponent dynamic was modified by functional domain (i.e., the scene memory area LPMA paired with the face perception areas FFA and OFA). Remarkably, we observed no significant difference between the distribution of correlation values for matched and anti-matched pRFs across functional domains (matched versus anti-matched pRFs, −LPMA × +OFA: D(392)=0.09, p=0.44, t(6)=1.04, p=0.34; FFA: D(392)=0.11, p=0.15, t(6)=1.48, p=0.18; Fig. 3G–I), and retinotopic opponency was greater within- compared to across-domain matching (scene-memory × scene-perception versus average scene-memory × face-perception: t(6)=2.46, p=0.049). Importantly, matched pRFs within domain had a significantly stronger opponent interaction than across domains (within vs. across domains, −LPMA × +OPA v −LPMA × +FFA: t(6)=5.94, p=0.001; −LPMA × +OPA v −LPMA × +OFA: t(6)=7.03, p<0.0005). These results indicate that retinotopic coding does not structure the interaction between pairs of regions associated with distinct functional domains. Instead, retinotopic scaffolding appears to be selective, operating only within a given functional domain. This coding scheme could allow for efficient, parallel processing of domain-specific information (e.g., faces, scenes) that can be flexibly adapted depending on task demands.

Retinotopic coding is inherent to internally-oriented areas

Our findings support the crucial role of retinotopic coding in scaffolding spontaneous interactions between functionally-coupled mnemonic and perceptual brain areas. However, a fundamental question remains: is retinotopy intrinsic to mnemonic cortical areas or merely adopted in response to perceptual input? Resolving this distinction is critical to understanding whether the retinotopic scaffold is a general-purpose mechanism for cross-network interaction that transcends specific task demands and cognitive domains (i.e., memory vs. perception).

To address this question, we developed an analytical approach to disambiguate bottom-up perceptual signals from top-down mnemonic signals, in which we identified spontaneous neural “events” at individual −LPMA (top-down) or +OPA (bottom-up) pRFs and examined the co-activation of the target area at the event time. We defined an “event” as any time point where the z-scored BOLD signal of a given voxel exceeded the 99th percentile of activity in a given resting-state run. We then analyzed the peri-event activation in the target area’s matched and anti-matched pRFs (Fig. 4A). Importantly, because participants do not have any experimentally imposed task demands during resting-state fMRI, any structured interaction between areas will reflect these regions’ spontaneous dynamics.

Our analysis detected 20,958 top-down events and 7,193 bottom-up events. Top-down and bottom-up events occurred at the same rate per pRF (t(6)=0.71, p=0.50). Individual participants averaged 116±70.4 top-down and 32±11.76 bottom-up (mean±sd) events per run. All pRFs had between 0 and 4 events per run (Fig. S8A–C). The wide distribution of events in time suggested that individual pRF-level correlation could be isolated from global fluctuations in regional activity (Fig. S8A), making this approach suitable for evaluating distinct interactions at the individual voxel level.

We found that both top-down and bottom-up events tended to reduce target area activity via a retinotopic code, supporting the hypothesis that retinotopic coding is intrinsic to mnemonic cortical areas. During top-down events, +OPA pRFs showed significant deactivation that was more pronounced in matched compared to anti-matched pRFs (Fig. 4B). On the other hand, −LPMA pRFs had elevated activity during bottom-up events, but, crucially, the activation of matched pRFs was significantly reduced compared to anti-matched pRFs (Fig. 4C). Thus, bottom-up events also showed evidence of an inhibitory, retinotopic influence: the elevated activity in −LPMA (e.g., due to mentation and mind wandering during rest) is reduced after input from retinotopically-matched +OPA pRFs. This is clear evidence that both top-down and bottom-up events evoke retinotopically-specific inhibitory responses in the target region, suggesting that retinotopic coding is intrinsic to memory areas, even in the absence of task demands or overt visual input.

Importantly, the influence of retinotopic coding was similar for both bottom-up and top-down events, indicating a symmetric opponent interaction. Target area activity was significantly lower for matched compared to anti-matched pRFs in both directions (top-down: t(6)=4.13, p=0.006; bottom-up: t(6)=3.17, p=0.02), with no difference between event types (t(6)=0.86, p=0.42). The balanced dynamic between −LPMA and +OPA pRFs mirrors other nervous system interactions52–55, extending this well-established framework from sensory/motor domains to higher-order cognitive functions.

Discussion

In summary, it is well-established that internally- and externally-oriented distributed brain networks, including DNs and dATNs, support higher cognition in humans1–8,10–12. Yet we lack an understanding of what coding principles, if any, underpin interactions across these distributed brain networks24,56–60. Here, we show that a retinotopic code scaffolds the voxel-scale opponent interaction between internally- and externally-oriented brain networks, even in the absence of overt visual demands. Moreover, by examining functionally-linked perceptual and mnemonic areas straddling the boundary between the DNs and dATNs, we found that this retinotopic information is multiplexed with domain-specific information, which may enable effective parallel processing of representations depending on retinotopic location, attentional state, and task demands. Finally, analysis of neural events in these functionally-linked regions showed that the retinotopic opponency is present in top-down as well as bottom-up events, suggesting that the retinotopic code is intrinsic to both perceptual and mnemonic cortical areas. Collectively, our results provide a unified framework for understanding the flow of neural activity between brain areas, whereby macro-scale neural dynamics are organized at the meso-scale by functional domain and at the voxel-scale by a low-level retinotopic code. This multi-scale view of information processing has broad implications for understanding how the brain’s distributed networks give rise to attention, perception, and memory.

Because of the DN’s importance in many cognitive processes, including spatial and episodic memory, social processing, and executive functioning12,61–66, resolving the coding principles inherent to the DN is central to understanding human cognition. The traditional view of neural coding posits that sensory codes like retinotopy are shed in favor of abstract, amodal codes moving up the cortical hierarchy towards the DN2,23,24. Our data contrast strikingly with this view. Instead of shedding the retinotopic code at the cortical apex, our prior work has shown that the low-level retinotopic code structures interactions between functionally paired perceptual and mnemonic areas in posterior cerebral cortex involved in visual scene analysis during visual and memory tasks28. Here we significantly extend that earlier finding by showing that the retinotopic code organizes spontaneous interactions across the brain’s large-scale cortical networks, even in the absence of task demands. The importance of retinotopic coding is further underscored when assessing “top-down” events (i.e., events that originate in the DN) detected at rest, which had a retinotopically-specific suppressive influence on pRFs in their downstream target area. This result joins mounting evidence demonstrating retinotopic coding in the DN25–28, as well as new evidence supporting the importance of visual coding in the DN during visual tasks28,67. Together, these findings show that the retinotopic code is a “native language” of the DN, and they suggest that the DN plays an active role in perceptual experience and behavior by shaping visual responses via the retinotopic code.

Our findings of retinotopic, voxel-scale opponency between DN and dATN pRFs are at odds with theories that posit a global competition between these networks and a general disengagement of the DN from sensory processing9–12. Instead of network-level opponency, where the DN globally disengages during cognitive states that drive the dATNs and vice versa, the opponent interaction between the DN and dATN reflects their common engagement in retinotopic information processing at a voxel level. Without considering voxel-wise information, this opponency appears to “average out” into a global task-negative/task-positive response11, which has led to the conclusion that these networks are engaged in fundamentally different cognitive processes. In contrast, our results emphasize that the DN and dATNs are mutually engaged in processing common, retinotopic information. Taken together with prior results25–28, our findings prompt a reevaluation of the role of the DN in perceptual processing, and the extent to which roles in “internally- and externally-oriented attention” adequately capture the DN’s and dATN’s contributions to cognition.

Our findings align with other recent studies suggesting an important role of the DN in shaping sensory responses beyond retinotopy. For example, recent studies have shown that ongoing prestimulus DN activity influences the sensitivity of near-threshold visual object recognition67. Other work has shown that subareas of the DN represent the visuospatial context that is associated in memory with a perceived scene28,43,44,68. Relatedly, DN activity has been shown to reflect semantic-level attentional priorities during visual search69. Taken together, these findings show that, rather than being disengaged during visual tasks, the DN actively shapes responses in perceptually-oriented cortex. This raises a fundamental question: is the DN a “visual” network? One possibility is that the DN directly and obligatorily represents retinal position, like low-level visual areas, and is engaged in processing visual features in addition to more abstract features31,70. Under this hypothesis, the retinotopic code indicates that the information represented in the DN is, in part, sensory. Alternatively, the apparent visual coding may simply represent an underlying connectivity structure71, which enables associative72 and semantic24,73 information that the DN directly represents to be effectively transferred to sensory areas of the brain. Under this alternative hypothesis, the retinotopic code serves as a “highway” between networks that are engaged in fundamentally different functions: abstract (DN) and sensory (dATN) processing. Crucially, in either case the retinotopic scaffolding represents a latent structure linking high-level and low-level areas through consistent, spatially-organized interactions71. Further studies investigating this framework with diverse tasks and stimuli could reshape our understanding of how the brain processes and integrates information across different levels of cognitive complexity.

Finally, our data paint a clear picture of cortical dynamics spanning levels of analysis, from macro-scale brain networks, to meso-scale domain-specific subareas, to small-scale voxel-level interactions. Prior work has emphasized the importance of each of these organizational scales independently1,8,13,25,26,28,31,33,42,47,74,75. Here, our data reveal a comprehensive framework for understanding the brain’s functional organization that spans these levels of description. Specifically, we demonstrate that retinotopic coding underpins voxel-scale interactions that are observable across the whole brain. However, these retinotopic interactions are constrained at the mesoscale by specific brain areas’ functional domains (e.g., processing visual scenes). Further, interplay among these domain-specific regions underpins the organization of large-scale brain networks, whose interactions give rise to specific complex behavior, like memory recall or visual attention. This nested hierarchy of neural interactions accounts for the efficiency of parallel information processing in the brain and the flexibility to adapt to ongoing task demands.

In summary, our results show that a retinotopic code organizes the spontaneous interactions of large-scale internally- and externally-oriented networks in the human brain. These findings challenge our classic understanding of internally-oriented networks like the DN, showing that the global opponent dynamic between the DN and dATN does not reflect disengagement from visual processing. Rather, the global opponent dynamic is structured by a voxel-wise retinotopic code that scaffolds interactions across these large-scale internally- and externally-oriented networks. Taken together, these results indicate that retinotopic coding, the human brain’s foundational visuo-spatial reference frame29,71, structures large-scale neural dynamics and may be a “common currency” or subspace for information exchange across the brain’s functional networks.

Methods

The data analyzed here are part of the Natural Scenes Dataset (NSD), a large 7T dataset of precision MRI data from 8 participants, including retinotopic mapping, anatomical segmentations, functional localizers for visual areas, and task and resting-state fMRI. A full description of the dataset can be found in the original manuscript30. Here we detail the data processing and analysis steps relevant to the present work.

Subjects

The NSD comprises data from 8 participants collected at the University of Minnesota (two male, six female, ages 19–32). One subject (subj03) was excluded from resting-state analyses for having an insufficient number of resting-state runs that passed our quality metrics. All participants had normal or corrected-to-normal vision and no known neurological impairments. Informed consent was collected from all participants, and the study was approved by the University of Minnesota institutional review board.

MRI acquisition and processing

For this study we made use of the following data from the NSD: anatomical data (T1 and FreeSurfer segmentation/reconstruction76,77), functional regions of interest (ROIs), and minimally preprocessed retinotopy and resting-state time series. All analyses were conducted in original subject volume space, and data were projected onto the surface for visualization purposes only.

Anatomical data

Anatomical data was collected using a 3T Siemens Prisma scanner and a 32-channel head coil. We used the anatomical data provided at 0.8mm resolution as well as the registered output from FreeSurfer recon-all, aligned to the 1.8mm functional data. For visualization purposes, we projected statistical and retinotopy data to the cortical surface using SUMA78 from the afni software package79.

Defining functional regions of interest (PPA, OPA, OFA, FFA)

We used the volumetric functional regions of interest provided with the NSD at 1.8mm resolution. Specifically, we used the parahippocampal place area (PPA), occipital place area (OPA), iog-faces (referred to here as occipital face area (OFA)), and pfus-faces (fusiform face area, FFA1) regions of interest. In brief, these regions were defined in the NSD using within-subject data collected from 6 runs of a multi-category visual localizer paradigm30.

Defining functional region LPMA

Because the NSD did not include mnemonic localizers43, we used the resting-state data to define the lateral place memory area (LPMA). Briefly, the LPMA is a region anterior to OPA and near the caudal inferior parietal lobule49,50 that selectively responds during recall of personally familiar places compared to other stimulus types. We have previously shown that OPA and LPMA are functionally-linked and work jointly to process knowledge of visuospatial context out of view during scene perception.

Prior work has suggested that a mnemonic area linked to scenes on the lateral surface can be localized by comparing resting-state co-fluctuations of anterior versus posterior PPA (aPPA and pPPA, respectively)16,49,50, and we adopted that approach here. We preprocessed the resting-state fixation fMRI data, runs 1 and 14 from NSD sessions 22 and 23, in all participants (prior to data exclusion) and extracted the average time series of aPPA, pPPA, aFFA, and pFFA. We used these time series as regressors in a general linear model, and we compared the beta-values from the aPPA and pPPA. We considered any voxels with a t-statistic > 5 within posterior parietal-occipital cortex on the lateral surface as an LPMA ROI (individual ROIs can be found in Supplemental Fig. 4). Across subjects, there was considerable overlap between this connectivity-defined area and a group-level LPMA defined based on our prior work (Fig. 3C).
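As a rough sketch of this connectivity contrast, the snippet below (Python/NumPy, hypothetical variable names; the paper does not specify the GLM software used for this step) computes, for a single voxel, the t-statistic of the aPPA-minus-pPPA contrast from an ordinary least-squares fit with the four seed time series as regressors:

```python
import numpy as np

def lpma_contrast_tstat(voxel_ts, aPPA_ts, pPPA_ts, aFFA_ts, pFFA_ts):
    """t-statistic of the aPPA > pPPA connectivity contrast for one voxel.

    voxel_ts : (T,) resting-state time series of a candidate voxel
    *_ts     : (T,) average time series of each seed ROI, used as regressors
    """
    T = voxel_ts.shape[0]
    X = np.column_stack([np.ones(T), aPPA_ts, pPPA_ts, aFFA_ts, pFFA_ts])
    beta, _, _, _ = np.linalg.lstsq(X, voxel_ts, rcond=None)
    resid = voxel_ts - X @ beta
    sigma2 = resid @ resid / (T - X.shape[1])            # residual variance
    c = np.array([0.0, 1.0, -1.0, 0.0, 0.0])             # aPPA minus pPPA
    var_contrast = sigma2 * c @ np.linalg.pinv(X.T @ X) @ c
    return (c @ beta) / np.sqrt(var_contrast)

# Voxels with t > 5 within lateral parietal-occipital cortex would form the LPMA ROI.
```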

Functional MRI data acquisition and processing

All analyses were conducted on 1.8mm isotropic resolution minimally processed runs of a sweeping-bar retinotopy task and resting-state fixation provided in the NSD.

Quality assessment

To ensure only high-quality resting-state data were included, we trimmed the first 25 TRs (40s) from each run68, which left 4.25 min of resting-state data per run. We then used afni’s quality control assessment tool (APQC80) on the raw trimmed resting-state time series to assess the degree of motion in the resting-state scans. We excluded runs with greater than 1.8mm maximum displacement per run or 0.12mm framewise displacement from analysis and assessed runs with greater than 1mm displacement or 0.10mm framewise displacement on a case-by-case basis8. After exclusions, one participant (subj03) had only a single run that survived our criteria (4.25 minutes of data), so we excluded them from resting-state analyses. The remaining 7 participants had at least 8 resting-state runs (> 34 minutes of data; mean number of runs: 14±6.13 (sd), range: 8–24 runs).
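A minimal sketch of these run-level exclusion criteria is given below (Python, hypothetical inputs). Note that the paper does not state whether the framewise-displacement cutoff applies to the mean or the maximum of the trace; the mean is assumed here:

```python
import numpy as np

def keep_run(run_ts, max_displacement_mm, framewise_displacement_mm,
             hard_max=1.8, hard_fd=0.12, n_trim=25):
    """Trim the first 25 TRs and apply the hard motion-exclusion criteria.

    run_ts                    : (T, n_voxels) time series of one resting-state run
    max_displacement_mm       : maximum displacement over the run (mm)
    framewise_displacement_mm : (T,) framewise displacement trace (mm)
    Returns (keep, trimmed_ts); runs between the soft (1.0 mm / 0.10 mm) and
    hard limits were reviewed case by case in the paper (not reproduced here).
    """
    trimmed_ts = run_ts[n_trim:]                          # drop the first 40 s
    mean_fd = float(np.mean(framewise_displacement_mm))   # assumed: mean FD
    keep = (max_displacement_mm <= hard_max) and (mean_fd <= hard_fd)
    return keep, trimmed_ts
```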

ICA denoising

To further denoise the retinotopy and remaining resting-state data, we used manual ICA classification of signal and noise on the minimally-preprocessed time series36,37,43. We used manual classification because automated tools perform poorly on the high spatial and temporal resolution data of the NSD81. For each retinotopy and resting state run, we decomposed the data into independent spatial components and their associated temporal signals using ICA (FSL’s melodic82,83). We then manually classified each component as signal or noise using the criteria established in37. Noise signals were projected out of the data using fsl_regfilt36.

No global signal regression was performed on these data38. Data were normalized to percent signal change. A 2.5mm FWHM smoothing kernel was applied to the resting-state fixation data used in functional connectivity network identification; otherwise, no spatial smoothing was applied to the data.

Data analysis - retinotopy

pRF Modelling

The NSD retinotopy stimulus features a mosaic of faces, houses, and objects superimposed on pink noise that is revealed through a continuously drifting aperture. For our analysis, we considered only the bar stimulus time series, which is consistent with other studies investigating −pRFs in high-level cortical areas25,27,28. We did not consider the wedge/ring stimulus for any analyses.

After denoising the retinotopy data, we averaged the three bar stimulus retinotopy runs together to form the final retinotopy time series. We performed population receptive field modeling using afni following the procedure described in39. First, because the pRF stimulus in the NSD is continuous, we resampled the stimulus time series to the fMRI temporal resolution (TR = 1.333s). Next, we implemented afni’s pRF mapping procedure (3dNLfim). Given the position of the stimulus in the visual field at every time point, the model estimates the pRF parameters that yield the best fit to the data: pRF amplitude (positive, negative), pRF center location (x, y) and size (diameter of the pRF). Both Simplex and Powell optimization algorithms are used simultaneously to find the best time series/parameter sets (amplitude, x, y, size) by minimizing the least-squares error of the predicted time series with the acquired time series for each voxel. Relevant to the present work, the amplitude measure refers to the signed (positive or negative) degree of linear scaling applied to the pRF model, which reflects the sign of the neural response to visual stimulation of its receptive field.
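To make the model concrete, the following Python/NumPy sketch generates the predicted time series for one candidate parameter set. The actual fitting was done with afni's 3dNLfim, which searches amplitude, x, y, and size with Simplex and Powell optimizers; here "size" is treated as the Gaussian width, and the stimulus apertures and HRF are hypothetical inputs:

```python
import numpy as np

def prf_prediction(stim, xs, ys, x0, y0, size, amplitude, hrf):
    """Predicted BOLD time series for one candidate pRF parameter set.

    stim      : (T, H, W) binary stimulus apertures resampled to the TR
    xs, ys    : (H, W) visual-field coordinates (deg) of each stimulus pixel
    x0, y0    : candidate pRF center (deg); size: Gaussian width (deg)
    amplitude : signed scaling factor (negative for -pRFs)
    hrf       : (L,) hemodynamic response function sampled at the TR
    """
    gauss = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * size ** 2))
    # Overlap between the stimulus aperture and the Gaussian at each TR
    neural = stim.reshape(stim.shape[0], -1) @ gauss.ravel()
    bold = np.convolve(neural, hrf)[: stim.shape[0]]   # convolve with the HRF
    return amplitude * bold

# Fitting amounts to minimizing sum((prf_prediction(...) - voxel_ts) ** 2)
# over (amplitude, x0, y0, size) for each voxel.
```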

Visual field coverage

Visual-field coverage (VFC) plots represent the sensitivity of an ROI across the visual field. We followed the procedure in Steel et al., 2024 to compute these28, which we have reproduced here. Individual participant VFC plots were first derived. These plots combine the best Gaussian receptive field model for each suprathreshold voxel within each ROI. Here, a max operator is used, which stores, at each point in the visual field, the maximum value from all pRFs within the ROI. The resulting coverage plot thus represents the maximum envelope of sensitivity across the visual field. Individual participant VFC plots were averaged across participants to create group-level coverage plots.

To compute the elevation biases, we calculated the mean pRF value (defined as the mean value in a specific portion of the visual-field coverage plot) in the contralateral upper visual field (UVF) and contralateral lower visual field (LVF) and computed the difference (UVF–LVF) for each participant, ROI and amplitude (+/−) separately. A positive value thus represents an upper visual-field bias, whereas a negative value represents a lower visual-field bias. Analysis of the visual-field biases considers pRF center, as well as pRF size and R2.
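The coverage and bias computations can be sketched as follows (Python/NumPy; a simplification that uses the full visual field rather than restricting to the contralateral hemifield, with hypothetical inputs):

```python
import numpy as np

def visual_field_coverage(x0s, y0s, sizes, extent=12.0, n=101):
    """Max-envelope visual-field coverage from an ROI's suprathreshold pRFs."""
    grid = np.linspace(-extent, extent, n)
    xs, ys = np.meshgrid(grid, grid)              # rows index y, columns index x
    vfc = np.zeros_like(xs)
    for x0, y0, s in zip(x0s, y0s, sizes):
        g = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * s ** 2))
        vfc = np.maximum(vfc, g)                  # max operator across pRFs
    return grid, vfc

def elevation_bias(grid, vfc):
    """Mean coverage in the upper minus the lower visual field (UVF - LVF)."""
    upper = vfc[grid > 0, :].mean()               # rows with y > 0
    lower = vfc[grid < 0, :].mean()               # rows with y < 0
    return upper - lower                          # positive => upper-field bias
```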

Reliability of pRF amplitude estimate

To assess the reliability of pRF amplitude (i.e., positive versus negative), we iteratively compared the amplitude of significant pRFs from individual runs of pRF data. Specifically, for each single run of pRF data, we fit our pRF model. We then binarized voxels according to significance and amplitude: voxels that surpassed our significance threshold (R2 > 0.08) in the full model were assigned a value of 0 or 1. When investigating +pRF reliability in OPA, significant voxels with a positive amplitude were assigned a value of 1, and all other voxels 0. For -pRF reliability in LPMA, the opposite was done: significant negative-amplitude voxels were assigned a value of 1, and all other voxels were set to 0.

For each participant, after binarization, we calculated a Dice-like coefficient within each ROI that considered all three retinotopy runs: coefficient = 3|run1 ∩ run2 ∩ run3| / (|run1| + |run2| + |run3|). We compared this value against 5000 iterations of the same number of voxels randomly sampled from all voxels, both significant and non-significant, in the ROI. For each participant, this resulted in one “observed” Dice coefficient, along with 5000 bootstrapped values representing the distribution of Dice coefficients expected by chance, which we used to evaluate the significance of each pRF amplitude’s run-to-run consistency.
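A sketch of this reliability metric and its permutation baseline (Python/NumPy; inputs are the binarized maps described above, names hypothetical):

```python
import numpy as np

def dice3(m1, m2, m3):
    """Three-way Dice-like coefficient for binarized pRF maps (boolean arrays)."""
    overlap = np.sum(m1 & m2 & m3)
    return 3.0 * overlap / (m1.sum() + m2.sum() + m3.sum())

def dice3_null(m1, m2, m3, n_iter=5000, seed=0):
    """Chance distribution: the same number of voxels sampled at random per run."""
    rng = np.random.default_rng(seed)
    n_vox = m1.size
    null = np.empty(n_iter)
    for i in range(n_iter):
        rand = []
        for m in (m1, m2, m3):
            r = np.zeros(n_vox, dtype=bool)
            r[rng.choice(n_vox, int(m.sum()), replace=False)] = True
            rand.append(r)
        null[i] = dice3(*rand)
    return null

# The observed dice3(run1, run2, run3) is then compared against this null distribution.
```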

Voxel-wise pRF matching

We matched −pRFs in a source area (e.g., the DN) with +pRFs in a target area (e.g., the dATN) using the following procedure. Within each participant, we computed the pairwise Euclidean distance between the center (x, y) of each source pRF and each target pRF. For each source pRF, we considered the top 10 closest target pRFs the “matched pRFs” and the 10 furthest target pRFs the “anti-matched pRFs.” Thus, each pRF within a memory area yielded 10 matched and 10 anti-matched pRFs.
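In code, this matching step reduces to a pairwise distance matrix and an argsort (Python/SciPy sketch with hypothetical inputs):

```python
import numpy as np
from scipy.spatial.distance import cdist

def match_prfs(source_xy, target_xy, k=10):
    """Top-k closest ('matched') and furthest ('anti-matched') target pRFs.

    source_xy : (n_source, 2) pRF centers (x, y) of -pRFs in the source area
    target_xy : (n_target, 2) pRF centers of +pRFs in the target area
    Returns (matched_idx, antimatched_idx), each of shape (n_source, k).
    """
    dist = cdist(source_xy, target_xy)        # pairwise Euclidean distances
    order = np.argsort(dist, axis=1)          # nearest targets first
    return order[:, :k], order[:, -k:]
```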

To investigate the opponent interaction between areas for top-down interactions (below), we conducted this procedure using −LPMA pRFs as the source and +OPA pRFs as targets. To investigate bottom-up interactions, we considered +OPA pRFs as the source and −LPMA pRFs as targets. To investigate the importance of functional domain in retinotopic interactions, we considered −LPMA pRFs as the source and 1) +OFA and 2) +FFA pRFs as targets.

Resting-state analyses

Individual-specific cerebral network estimation

In each participant, we identified a set of 15 distributed networks on the cerebral cortical surface using a multi-session hierarchical Bayesian model (MS-HBM) functional connectivity approach8,32. Briefly, the MS-HBM approach calculates a connectivity profile for every vertex on the cortical surface derived from that vertex’s correlation to all other vertices during each run of resting-state fixation. The MS-HBM then uses the resulting run- and participant-level profiles, along with a 15 network group-level prior created from a portion of the HCP S900 data33, to create a unique 15-network parcellation for each individual.

The MS-HBM has two primary advantages compared to other approaches used to parcellate functional connectivity data, such as k-means clustering. First, it accounts for differences in connectivity both within an individual (likely the result of confounding variables such as scanner variability, time of day, etc.) and between participants (reflecting potentially meaningful individual differences), allowing for more reliable estimates. Second, by incorporating a group prior that includes all the networks of interest, we ensure that all networks will be identified in all participants, while allowing for idiosyncratic topographic differences.

Opponent interaction at rest

To assess the influence of retinotopy on the correlation of areas at rest, we used the following procedure. First, we established matched source (i.e., −DN or -LPMA) and target (i.e., +dATN or +OPA) pRF pairs using the procedure described above. For this analysis, our primary focus was on the −pRFs in the DN or LPMA and their relationship with perceptual areas (dATN, OPA, OFA, FFA). Among these highly connected areas, large-scale fluctuations due to attention and motion will cause these voxels to be highly correlated. To control for this, consistent with prior work, we extracted the average time series of all +pRFs in the DN or LPMA and partialed out the variance associated with these pRFs from the −pRFs in the DN or LPMA and +pRFs in the perceptual areas of interest from each resting-state run28,84.

To examine the opponent interaction for matched pRFs, we considered each resting state run separately. In each resting state run, we first calculated the average time series of all −pRFs in LPMA. We then calculated the average time series of the top 10 best-matched +pRFs in OPA across all source voxels. For example, if a participant had 211 −pRFs in LPMA, these voxels were averaged together to get a single −LPMA pRF time series, and the top 10 matched +pRFs in OPA for each of these voxels (i.e., 2,110 time series) would be averaged together to constitute the +OPA time series. Note that the matched +OPA pRFs were not unique for each voxel. We then correlated these time series (−LPMA and +OPA pRFs). A negative correlation was considered an opponent interaction.

To compare the importance of retinotopy in structuring the interaction between LPMA and OPA, we performed the same averaging and correlation procedure described above with the 10 worst matched pRFs (anti-matched). For each participant, all Fisher-transformed correlation values (z) for matched versus anti-matched pRFs were averaged together and we compared these matched and anti-matched correlation values using a paired t-test. To examine the specificity within each functional network, we repeated this procedure for −LPMA matched/anti-matched with OFA and FFA-1.
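A simplified end-to-end sketch of this run-level correlation analysis is given below (Python/NumPy, hypothetical array names; the actual analysis was run in Matlab). It partials the source area's +pRF signal out of both time series, averages the matched and anti-matched target pRFs, and returns Fisher-transformed correlations for one run:

```python
import numpy as np

def regress_out(ts, nuisance):
    """Remove variance explained by a nuisance time series (with an intercept)."""
    X = np.column_stack([np.ones(len(nuisance)), nuisance])
    beta, _, _, _ = np.linalg.lstsq(X, ts, rcond=None)
    return ts - X @ beta

def opponent_correlations(src_neg_ts, tgt_pos_ts, matched_idx, anti_idx, src_pos_mean):
    """Fisher-z correlations between source -pRFs and matched/anti-matched +pRFs.

    src_neg_ts   : (T, n_src) time series of -pRFs in the source area (e.g., LPMA)
    tgt_pos_ts   : (T, n_tgt) time series of +pRFs in the target area (e.g., OPA)
    matched_idx  : (n_src, k) matched target indices per source pRF
    anti_idx     : (n_src, k) anti-matched target indices per source pRF
    src_pos_mean : (T,) average time series of +pRFs in the source area (nuisance)
    """
    src = regress_out(src_neg_ts.mean(axis=1), src_pos_mean)
    out = {}
    for name, idx in (("matched", matched_idx), ("antimatched", anti_idx)):
        # Duplicated targets are kept, matching the paper's averaging (e.g., 211 x 10 series)
        tgt = regress_out(tgt_pos_ts[:, idx.ravel()].mean(axis=1), src_pos_mean)
        r = np.corrcoef(src, tgt)[0, 1]
        out[name] = np.arctanh(r)             # Fisher z-transform
    return out
```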

To confirm that retinotopic coding structured the opponent interaction, we repeated this analysis with two key differences: 1) to ensure that choosing the bottom 10 voxels did not drive our results, we randomly sampled 10 pRFs from the furthest 33% of pRFs for each participant, and 2) we performed the analysis without averaging the pRFs’ resting-state time series within each ROI. For each resting state run, we extracted each -pRF time series and correlated this time series with the average time series of its top 10 best matched +pRFs in OPA. We performed this matching and random sampling procedure 1000 times for each pRF in the source area. For example, if a participant had 211 −pRFs in LPMA, we would correlate each of these pRFs with the average time series from their top 10 best matched +pRFs in OPA, resulting in 211 individual r-values for each resting-state run, and compare each of these 211 values with the correlation of 1000 randomly paired pRFs for that source pRF. We then Fisher-transformed these matched and randomly sampled values and averaged them to constitute a single value for each participant. The mean Fisher-transformed values were compared using a paired t-test.

Resting-state event detection

We reasoned that the influence of top-down versus bottom-up drive on the spontaneous interaction between regions could be isolated by examining periods of unusually high activity in voxels in these respective areas (“events”)85,86. Specifically, we tested the hypothesis that top-down events in −LPMA pRFs would co-occur with periods of lower activity in +OPA pRFs, and that the suppressive influence of top-down events would be stronger compared with bottom-up events. To test these hypotheses, we isolated neural events in each source region’s (−LPMA (top-down) and +OPA (bottom-up)) pRF time series and examined the activity in the corresponding target region at these event times.

Event detection was performed for each +/− pRF independently. To detect events, we z-scored all pRFs’ resting-state time series and identified TRs with unusually high activity in source region pRFs. Specifically, we considered each time point with a z-value greater than 2.4 (i.e., the 99.18th percentile) as a neural event. Results were comparable with varying thresholds between 2.1 < z < 2.9. At each event, we extracted the average time series from that pRF’s top-10 matched pRFs in the target region for the 6 TRs before and after the event time (i.e., 13 TRs surrounding the event). To make time series comparable across events, we normalized the event time series to the mean of the first 4 TRs. We repeated this procedure for all -LPMA/+OPA pRFs for top-down and bottom-up events.
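A minimal sketch of the event detection and peri-event extraction (Python/NumPy, hypothetical inputs):

```python
import numpy as np

def detect_events(prf_ts, z_thresh=2.4):
    """TR indices where a source pRF's z-scored time series exceeds the threshold."""
    z = (prf_ts - prf_ts.mean()) / prf_ts.std()
    return np.flatnonzero(z > z_thresh)

def peri_event(target_ts, event_trs, half_window=6, baseline_trs=4):
    """Baseline-normalized peri-event segments of the matched target time series.

    target_ts : (T,) average time series of the source pRF's top-10 matched target pRFs
    event_trs : TR indices of detected events in the source pRF
    Returns an (n_events, 2*half_window + 1) array; events too close to the run
    edges are skipped (edge handling is not described in the paper).
    """
    segments = []
    for t in event_trs:
        if t - half_window < 0 or t + half_window >= len(target_ts):
            continue
        seg = target_ts[t - half_window : t + half_window + 1].astype(float)
        segments.append(seg - seg[:baseline_trs].mean())   # normalize to early TRs
    return np.array(segments)
```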

For top-down and bottom-up events, we compared the activity of the target region (top-down: +OPA pRFs; bottom up: −LPMA pRFs) at event time using paired t-tests. We only considered matched pRFs for this analysis.

Statistical tests

Statistical analyses were implemented in Matlab (Mathworks, Inc). Given the small number of subjects in this dataset, we used two statistical analysis methods to ensure the robustness of any detected effects. First, borrowing analytical methods from neuroscientific studies using animal models, we leveraged the large amount of within-participant data by pooling observations (e.g., resting-state runs or pRFs) from each participant. We then tested for differences in distributions using two-sample Kolmogorov-Smirnov goodness-of-fit hypothesis tests. Second, we adopted more classic statistical methods to test for effects within participants. We used paired-sample t-tests and corrected for multiple comparisons where appropriate.
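For illustration, both test families can be reproduced with standard SciPy calls (a sketch; the paper's analyses were run in Matlab, and the variable names here are hypothetical):

```python
import numpy as np
from scipy import stats

def run_stats(matched_by_run, anti_by_run, participant_ids):
    """Pooled KS test across runs plus a within-participant paired t-test.

    matched_by_run, anti_by_run : (n_runs,) arrays of Fisher-z correlations per run
    participant_ids             : (n_runs,) array of participant labels per run
    """
    # Run-level distributions pooled across participants
    ks = stats.ks_2samp(matched_by_run, anti_by_run)
    # Average runs within each participant, then compare with a paired t-test
    subjects = np.unique(participant_ids)
    subj_matched = [matched_by_run[participant_ids == s].mean() for s in subjects]
    subj_anti = [anti_by_run[participant_ids == s].mean() for s in subjects]
    ttest = stats.ttest_rel(subj_matched, subj_anti)
    return ks, ttest
```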

Supplementary Material


Fig. 4.


Top-down vs. bottom-up neural events detected in spontaneous resting-state dynamics show evidence for retinotopically-specific suppression. A. Event detection and analysis procedure and example events from a single resting-state run. To detect events, we extracted the time series from each pRF in the source regions (top-down: -LPMA; bottom-up: +OPA) and isolated time points where the z-scored time series exceeded 2.4 s.d. (99th percentile). We then examined the activity of matched and anti-matched pRFs from the target region in this peri-event time frame (6 TRs (8 s) before and after the event). Overall, this event detection procedure yielded 20,958 top-down and 7,985 bottom-up events that were well distributed in time (Fig. S8). B. −LPMA pRF events co-occur with suppression of retinotopically-matched +OPA pRFs. Peri-event time series depicts the grand average activity of matched (dark) and anti-matched +OPA pRFs. Time series are baselined to the mean of the first three TRs (TRs −6 to −4 relative to event onset, dotted line). Red significance line shows time points with a significant difference between matched and anti-matched activation, corrected for multiple comparisons (alpha-level: p<0.05/13 = 0.0038). C. Suppression of ongoing activity in retinotopically matched −LPMA pRFs during +OPA events. Peri-event time series depicts the grand average activity of matched (dark) and anti-matched −LPMA pRFs. As predicted, −LPMA activity is elevated during resting-state. This elevated ongoing activity is suppressed during events in retinotopically matched +OPA pRFs. Time series are baselined to the mean of the first three TRs (TRs −6 to −4 relative to event onset, dotted line). Blue significance line shows time points with a significant difference between matched and anti-matched activation, corrected for multiple comparisons. D. Target area shows retinotopically-specific suppression of activity for both top-down and bottom-up events. Bars show the average activation at event time of the target areas’ matched and anti-matched pRFs for each participant. Activity in matched pRFs was significantly lower than in anti-matched pRFs for both top-down (t(6)=4.13, p=0.006) and bottom-up (t(6)=3.17, p=0.02) events, and there was no difference in the influence of retinotopic coding between these event types (t(6)=0.86, p=0.42).

Acknowledgements:

The authors would like to thank the authors of the NSD for making these data publicly available. AS was supported by the Neukom Institute for Computational Sciences. This work was supported by funding from the National Institute of Mental Health under award number R01MH130529 to CER and from the Biotechnology and Biological Sciences Research Council, award BB/V003917/1 to EHS.

Footnotes

Competing interests: The authors declare no competing interests.

Code availability: No original code was used in this study. Any additional information required to reanalyze the data reported in this paper is available from the lead contact upon request.

During the preparation of this work, the author(s) used Claude (Anthropic) to assist in revising the manuscript. After using this tool/service, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the publication.

Data availability:

All data are publicly available via the Natural Scenes Dataset (https://naturalscenesdataset.org/).

