J Cogn Neurosci. 2024 Dec 1;36(12):2594–2617. doi: 10.1162/jocn_a_02141

From Motion to Emotion: Visual Pathways and Potential Interconnections

Aina Puce

Abstract

The two visual pathway description of Ungerleider and Mishkin changed the course of late 20th century systems and cognitive neuroscience. Here, I try to reexamine our laboratory's work through the lens of Pitcher and Ungerleider's new third visual pathway. I also briefly review the literature on brain responses to static and dynamic visual displays and to visual stimulation involving multiple individuals, and I compare existing models of social information processing for the face and body. In this context, I examine how the posterior STS might generate unique social information relative to other brain regions that also respond to social stimuli. I discuss some of the existing challenges we face in assessing how information flows between structures in the proposed functional pathways, and how some stimulus types and experimental designs may have complicated our data interpretation and model generation. I also note a series of outstanding questions for the field. Finally, I examine the idea of a potential expansion of the third visual pathway to include aspects of previously proposed “lateral” visual pathways. Doing this would yield a more general entity for processing motion/action (i.e., “[inter]action”) that deals with interactions between people, as well as between people and objects. In this framework, I briefly discuss potential hemispheric biases for function and the different forms of neuropsychological impairment created by focal lesions in the posterior brain, to help situate various brain regions within an expanded [inter]action pathway.

INTRODUCTION: A TRIBUTE

Cognitive, social, and systems neuroscientists who study the characteristics of the visual system in human and nonhuman primates owe so much to the late Dr. Leslie Ungerleider. For decades, her groundbreaking work in primate neurophysiology, neuroanatomy, and neuroimaging of visual system function has laid the cornerstone for how we think about visual information processing in the primate brain. I dedicate this article to Dr. Ungerleider's memory and honor her by first trying to put our work into the scientific context that she created, and then considering how that context might be expanded. I would also like to acknowledge the influence and contribution of two close colleagues who are also no longer with us—Drs. Truett Allison and Shlomo Bentin. We all stand on the shoulders of giants.

VISUAL PATHWAYS: AND THEN THERE WERE THREE …

In 1982, I was embarking on a graduate career and starting to perform studies of the human visual system when the landmark article on parallel visual pathways in the primate brain was published (Ungerleider & Mishkin, 1982). The discussion and implications for the field in that article helped channel and shape my research directions for the decades to come. Ungerleider and Mishkin (1982) had a clear “What?” and “Where?” emphasis for the main functional divisions of the respective ventral and dorsal pathways. Their work was predominantly based on the nonhuman primate literature—on painstaking studies of single-unit neurophysiology and structural neuroanatomy investigating the recognition of objects and their locations in space. A slightly different interpretation of the ventral and dorsal visual systems was proposed a decade later (Goodale & Milner, 1992), where the dorsal system was examined from the point of view of “How?” In this formulation, based heavily on the apraxia literature, both spatial location and how an object was handled were important. To non-experts, the two visual pathway model made vision seem simple. However, when the now classic schema of known anatomical areal interconnections in the primate brain was viewed through the data lens of the early 1990s, the story was far from simple even then (Felleman & Van Essen, 1991)!

One vexing issue, which still looms large to this day, relates to exactly how information is transferred between the pathways and used by each visual system. Our everyday world appears seamlessly continuous and complete, and today, there are still many questions about how we achieve this holistic view. Existing knowledge of white matter tracts in human cerebral cortex (e.g., Zekelman et al., 2022; Wang, Metoki, Alm, & Olson, 2018; Mori, Oishi, & Faria, 2009) indicates that there are sometimes no direct white matter connections between brain structures that share common functions. For example, the human posterior superior temporal sulcus (pSTS) and the mid-fusiform gyrus (FG; Figure 1A) exhibit strong sensitivity to faces but have no direct interconnections (Ethofer, Gschwind, & Vuilleumier, 2011). Even today, existing structural interconnections between human brain areas sensitive to faces have been challenging to document clearly (Figure 1A; Babo-Rebelo et al., 2022; Grill-Spector, Weiner, Kay, & Gomez, 2017).

Figure 1.

Potential structural and functional connections between main brain structures in the face processing network. (A) Known structural connections between main structures (solid lines), based mainly in the ventral system. Question marks highlight unknown structural connections in the network. Core (purple) and extended (blue) systems are color-coded, as are other (black) important brain regions. Reproduced with CC BY 4.0 Deed from Babo-Rebelo and colleagues (2022). (B) The third visual pathway of Pitcher and Ungerleider (2021). The general directions of the dorsal and ventral pathways are displayed by respective blue and green arrows as they emerge from primary visual cortex (V1). For the third visual pathway, key component structures MT/V5, p(osterior) STS, and a(nterior) STS are shown by red-brick-colored circles in a pathway to the ATL.

Perhaps we lack a real understanding of the interconnections between brain regions sensitive to dynamic faces (and bodies) and other visually sensitive regions dealing with object motion? MRI-guided electrical microstimulation of “face-patches” in monkey inferior temporal cortex highlights their strong interconnections and separation from nonface regions (Moeller, Freiwald, & Tsao, 2008), yet microstimulation in face-patches influences activity in “object” regions when “face-like” objects or abstract faces are viewed (Moeller, Crapse, Chang, & Tsao, 2017). What seems critical here is the study of visually sensitive cortex that is not directly responsive to either faces or objects. Recent elegant work taking this line of reasoning has proposed a complex object map, or space, in monkey inferior temporal cortex where these category-specific properties can be observed (Bao, She, McGill, & Tsao, 2020). This approach echoes a now classic human fMRI study, albeit at a coarser spatial scale (Haxby et al., 2001), now refined with a state-of-the-art machine learning analysis technique known as “hyperalignment.” This computationally demanding method effectively scrubs out individual participant idiosyncrasies in high-resolution fMRI data, showcasing across-subject similarities in category-specific activation patterns in human occipitotemporal cortex (Haxby, Guntupalli, Nastase, & Feilong, 2020; Haxby, Connolly, & Guntupalli, 2014).
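
To give a concrete feel for what hyperalignment involves, one of its core ingredients can be illustrated with an orthogonal Procrustes fit that rotates one participant's voxel space into another's. The sketch below is a deliberately minimal two-participant toy on simulated data; the variable names and setup are mine, not those of the cited studies, which use considerably richer iterative, many-subject procedures.

```python
# Minimal sketch of one hyperalignment step: align participant B's
# voxel space to participant A's with an orthogonal Procrustes fit.
# Response matrices are (n_timepoints x n_voxels); all names here are
# illustrative, not taken from the cited studies.
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(0)
shared = rng.standard_normal((200, 50))                   # latent shared responses
R_true, _ = np.linalg.qr(rng.standard_normal((50, 50)))   # B's idiosyncratic rotation

subj_a = shared + 0.1 * rng.standard_normal((200, 50))
subj_b = shared @ R_true + 0.1 * rng.standard_normal((200, 50))

# Fit the rotation mapping B's voxel space into A's, then apply it.
R, _ = orthogonal_procrustes(subj_b, subj_a)
subj_b_aligned = subj_b @ R

def mean_corr(x, y):
    """Mean voxelwise correlation between two time x voxel matrices."""
    xc = (x - x.mean(0)) / x.std(0)
    yc = (y - y.mean(0)) / y.std(0)
    return float((xc * yc).mean())

# Alignment should raise the mean voxelwise correlation with A.
print(mean_corr(subj_a, subj_b), "->", mean_corr(subj_a, subj_b_aligned))
```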

A second issue for the original dorsal/ventral visual pathway scheme was that it was not clear where structures such as the pSTS sat. The pSTS is highly active in many studies dealing with dynamic human form (Yovel & O'Toole, 2016; Puce & Perrett, 2003; Allison, Puce, & McCarthy, 2000), and for this reason, it was seen as part of the dorsal system (Bernstein & Yovel, 2015; O'Toole, Roark, & Abdi, 2002). Working with Truett Allison, we had always regarded the pSTS as an important information integration point between the two visual pathways (Allison et al., 2000).

Perhaps the uncertainty in classifying and connecting other brain structures to the two visual pathways came about because this was not the complete picture? What were we missing? Was this the motivation for David Pitcher and Leslie Ungerleider when they proposed their “third visual pathway,” in which the pSTS was a major feature? The third pathway is a freeway linking primary visual cortex with area MT/V5, pSTS, and anterior STS (aSTS; Figure 1B; Pitcher & Ungerleider, 2021).

Pitcher and Ungerleider's (2021) questioning of the status quo is not unique: Current thinking on brain pathways devoted to emotion (de Gelder & Poyo Solanas, 2021), and on how emotions arise or progress (Li & Keil, 2023; Critchley & Garfinkel, 2017), for example, has also undergone “remodeling.” In terms of visual pathways themselves, the idea of a pathway additional to the ventral and dorsal systems is not new. Weiner and Grill-Spector (2013) proposed an additional lateral pathway that selectively processed information related to faces and limbs and integrated vision, haptics, action, and language. Perplexingly, dynamic visual stimulation was not considered, so structures strongly driven by human face and body motion (such as the pSTS) are not included in this model. The pSTS is also not explicitly considered in other “lateral” visual pathway formulations that center on the lateral occipito-temporal complex (LOTC; with a left-hemisphere bias) and in which MT/V5, the extrastriate body area (EBA), and middle temporal gyrus (MTG) feature prominently (Wurm & Caramazza, 2022; Lingnau & Downing, 2015)—in contrast to the pSTS-centered third visual pathway, which has a right hemisphere bias (Pitcher & Ungerleider, 2021). To complicate the picture still further, the idea that the dorsal system contributes unique knowledge regarding object representations has also been advanced (Freud, Behrmann, & Snow, 2020; Freud, Plaut, & Behrmann, 2016).

INVASIVE HUMAN BRAIN RESPONSES TO OBSERVED FACIAL AND BODY MOTION

Our facial movements provide clear social signals about our emotional states and foci of social attention. Close-up, information related to emotions comes from characteristic changes in upper and lower face parts (e.g., Waller, Julle-Daniere, & Micheletta, 2020; Müri, 2016). In the non-emotional domain, the eyes (via gaze direction) signal the focus of (social) attention and can shift the (visual) attention of the viewer (Dalmaso, Castelli, & Galfano, 2020). In humans, rhythmic mouth movements (in the 3- to 8-Hz range) are tightly correlated with rhythmic vocalizations, unlike in nonhuman primates, whose vocalizations are not accompanied by rhythmic facial motion (Ghazanfar & Takahashi, 2014). Therefore, mouth movements provide supplementary information about verbal output. To improve comprehension, even people with normal hearing lipread in noisy environments, or when listening to speakers in their nonnative language (Campbell, 2008). So, an opening mouth might be attention-grabbing, as it could signal the onset of an utterance (Carrick, Thompson, Epling, & Puce, 2007; Puce, Smith, & Allison, 2000).

The Human Brain's Response to Viewing Gaze Changes of Others

It has been known for a long time that non-invasive and invasive neurophysiological responses to viewing dynamic gaze aversions and mouth opening movements are significantly larger than those to direct gaze shifts or mouth closing movements (Caruana et al., 2014; Ulloa, Puce, Hugueville, & George, 2014; Allison et al., 2000). fMRI studies from many laboratories have consistently shown that the pSTS is a critical locus for facial motion signals (Yovel & O'Toole, 2016; Campbell et al., 2001; Puce, Allison, Bentin, Gore, & McCarthy, 1998). Neurophysiologically, MT/V5 also shows some selectivity for dynamic faces relative to nonface controls (Miki & Kakigi, 2014; Watanabe, Kakigi, & Puce, 2001; Campbell, Zihl, Massaro, Munhall, & Cohen, 1997). These older findings are consistent with the proposed active loci in the third visual pathway.

What about the time course of this neural activity? The non-invasive neurophysiological effects discussed above occur in the 170- to 220-msec post-motion onset range. Intracranial responses recorded to viewed dynamic gaze changes concur with the non-invasive data: Significantly larger field potentials occur in the pSTS to gaze aversions relative to direct gaze transitions. In contrast, modulation by dynamic emotions (happiness vs. fear) is not a prominent feature of the STS response (Figure 2; Babo-Rebelo et al., 2022). This pattern of results was seen in four patients (of 11 studied).

Figure 2.

Effects of gaze and emotion in pSTS field potentials. Left image: Data from an epilepsy patient display field potentials from three electrode contacts within pSTS to viewing a dynamic face changing its gaze (direct, averted) and expression (from neutral to either fear or happiness). Significant differences between averted and direct gaze (top line of plots) were seen, with largest responses for averted gaze. These significant differences could persist beyond the displayed 400-msec epoch (not shown). Emotion conditions (bottom line of plots) do not show prolonged amplitude differences post-emotion change. Two phase reversals in the potential at ∼200 msec seen across the three sites are a signal of local generators at these locations. The respective MNI coordinate locations (x, y, z) for the three electrode contacts were: Site 1: +43, −53, +9; Site 2: +48, −53, +9; Site 3: +54, −53, +9. *p < .05, **p < .01, ***p < .005, corrected-over-time Monte Carlo p values. Right: Locations of left image electrode sites appear on coronal views of the post-implant structural MRI. Reproduced with CC BY 4.0 Deed from Babo-Rebelo and colleagues (2022).

In addition to the superior temporal cortex (STC) ROI, which included cortex on the superior temporal gyrus and in the pSTS, three other occipitotemporal ROIs were studied: an inferior temporal cortical region (ITC; including the inferior temporal sulci and gyri), a fusiform cortical region (FC; including the midfusiform sulcus, and the occipitotemporal and collateral sulci), and an inferior occipital region (IOC; composed of the inferior occipital gyrus [IOG]). Effect sizes for viewing facial changes were calculated from normalized amplitudes of bipolar field potentials at active electrode pairs in the four ROIs in the 11 patients (Figure 3). All four ROIs responded to both gaze and emotion transitions, but the pSTS (STC ROI in Figure 3) was most sensitive to gaze relative to emotion. These effects are not due to motion extent per se: For these same stimuli, the largest facial changes occurred for emotion transitions—specifically in the lower part of the face (Huijgen et al., 2015).
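
For readers unfamiliar with the metric, the sketch below shows one plausible way an absolute Cohen's d could be computed from single-trial amplitudes at one site. The trial counts, amplitudes, and pooled-variance formula are illustrative assumptions, not the published pipeline.

```python
# Illustrative computation of an absolute Cohen's d between two conditions
# from single-trial normalized amplitudes at one bipolar site. A sketch
# only; the published analysis may differ in detail.
import numpy as np

def abs_cohens_d(x, y):
    """Absolute Cohen's d using a pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1)
                  + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return abs(np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

rng = np.random.default_rng(1)
gaze_trials = rng.normal(1.2, 1.0, size=60)      # hypothetical amplitudes
emotion_trials = rng.normal(0.6, 1.0, size=60)
print(f"|d| = {abs_cohens_d(gaze_trials, emotion_trials):.2f}")
```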

Figure 3.

Effect sizes for gaze and emotion within four occipitotemporal ROIs. Bottom image: Schematic axial slices for each ROI (IOC, FC, ITC, and STC) showing bipolar site pair locations (indicated by dots) responding significantly to emotion (left side) or gaze (right side). Color legend for individual patients is at the right. Top image: effect size (absolute Cohen's d) plotted as a function of ROI for each patient's bipolar electrode sites. The dark gray open circles denote mean effect size across sites within each ROI, for emotion and gaze, respectively. Statistical comparison of effect sizes between emotion and gaze in each ROI (gray bars between open circles) and across ROIs for emotion and gaze effects (top bars) was performed. Broken lines on the plot represent commonly accepted evaluations of effect size values. FC = fusiform cortex; ITC = inferior temporal cortex; ns = not significant. **p < .01. Reproduced with CC BY 4.0 Deed from Babo-Rebelo and colleagues (2022).

Our data indicate that initial gaze processing in pSTS is already underway ∼1/5 of a second after the gaze change. Typically, field potentials in human V1 occur at ∼100 msec post-stimulus onset (Allison, Puce, Spencer, & McCarthy, 1999), and presumably this information travels to MT/V5 (Watanabe et al., 2001) and then to the pSTS, consistent with the information flow in the third visual pathway. The pSTS is clearly important for processing gaze in real life: Lesions of human pSTS can produce deficits in judging gaze direction/social attention in others (Akiyama, Kato, Muramatsu, Saito, Nakachi, et al., 2006; Akiyama, Kato, Muramatsu, Saito, Umeda, et al., 2006).

Amygdala recordings in five patients have shown small-amplitude responses selective to gaze aversion but not to facial emotion, using the same stimuli and task. The early response latency of ∼120 msec in the right amygdala (Huijgen et al., 2015) was earlier than that in extrastriate cortex (Babo-Rebelo et al., 2022), raising questions about alternate information flow, perhaps via pulvinar-collicular routes. The left amygdala is sensitive to increased eye white area (as seen in fear), whereas the right amygdala responds to various changes in eye white area (including gaze aversions and depicted fear; Hardee, Thompson, & Puce, 2008). Notably, patients with amygdala injury can have difficulties judging gaze direction (Gosselin, Spezio, Tranel, & Adolphs, 2011) as well as emotions, especially fear (Gosselin et al., 2011; Adolphs, Tranel, Damasio, & Damasio, 1994), so the absence of field potentials to fearful faces in the amygdala is puzzling.

Insular cortex is also sensitive to dynamic eye gaze transitions. Averted gaze changes produce larger invasive neurophysiological responses than do direct gaze transitions. Gaze extent per se is not a factor—as evidenced by smaller evoked responses to spatially large extreme left–right gaze shifts (Caruana et al., 2014).

The Human Brain's Response to Viewing the Mouth Movements of Others

Earlier, I mentioned the similarity in morphology and latency of non-invasive neural activity and pSTS field potentials to dynamic gaze changes. An outstanding question has been whether there are field potentials in the pSTS and/or other brain regions that are selectively elicited to viewing mouth movements.

In the late 1990s, we recorded field potentials to faces, face parts, objects, and scrambled versions of these stimuli in more than 20 activation tasks in ∼100 epilepsy surgery patients (Allison et al., 1999; McCarthy, Puce, Belger, & Allison, 1999; Puce, Allison, & McCarthy, 1999). We began some new studies, including a dynamic facial motion task from which we had already collected non-invasive data (i.e., Puce et al., 2000). Here, I include data originally published in abstract form (Puce & Allison, 1999)—data that generated substantial interest at the Memorial Symposium for Dr. Leslie Ungerleider (at the National Institutes of Health in September 2022). Although these data remain anecdotal, I present them here to stimulate further thinking and to seed further studies.

For the dynamic facial motion study, we recorded data from epilepsy surgery patients (who provided informed consent in a study approved by the Human Investigations Committee at the Yale School of Medicine). Talairach coordinates of active electrodes were calculated (Allison et al., 1999). Figure 4 displays data recorded from two depth electrodes in a patient—one electrode was in the pSTS, and the other was in the Sylvian fissure, abutting insular cortex (Figure 4A). Averaged field potentials selective to mouth opening were seen at ∼400 msec after motion onset and were negative in polarity at both sites. At these latencies, no prominent evoked activity occurred to dynamic gaze changes (Figure 4B), static isolated mouths or eyes, full faces (Figure 4C), or other static visual stimuli (Figure 4D). Some slow, nondescript, late activity (after 500 msec) to the static full face and face parts might be present at STS 6 (Figure 4C), and general static visual stimuli (Figure 4D) appear to produce small deflections at earlier (400 msec) and later (800 msec) latencies, suggesting that although these sites show a distinct preference for moving mouths, they retain some degree of general visual responsivity.

Figure 4.

Field potentials from STS and Sylvian fissure (insular cortex) from three experiments. (A) Two partial coronal slices of a structural MRI scan display electrode contacts in the STS (top image) and Sylvian fissure (SF; insular cortex; bottom image). (B) The dynamic facial motion study elicits strong responses to mouth opening movements at ∼400 msec post-motion onset and negligible responses to gaze aversions. (C) Isolated static face parts (eyes and mouth) and full faces elicit very long latency responses, particularly in STS Contact 6. (D) Visual stimuli in general (faces, flowers, scrambled faces, or words) do not appear to induce prominent responses at these sites. Talairach coordinates (x, y, z) for the electrode contacts were: STS 4: −12, −41, −6; STS 5: −8, −49, −7; STS 6: −8, −55, −7; SF 3: −15, −46, +12; SF 4: −15, −53, +13; SF 5: −15, −60, +14. Data from Puce and Allison (1999).

The Sylvian fissure/insula responses (Figure 4B) are intriguing, given this general region's known rich functional neuroanatomy (Gogolla, 2017; Uddin, Nomi, Hébert-Seropian, Ghaziri, & Boucher, 2017). Unfortunately, insular recordings are uncommon, as a number of major branches of the middle cerebral artery course through this region (Türe, Yaşargil, Al-Mefty, & Yaşargil, 2000). In a rare study of intracranial recordings from insular cortex, sensitivity to gaze aversion in the posterior insula was reported, with field potentials in the 200- to 600-msec range post-motion onset (Caruana et al., 2014), paralleling the anecdotal data to moving mouths presented here.

The location of the depth probe in the Sylvian fissure/insula (Figure 4) is likely posterior to primary auditory cortex (Heschl's gyrus) in the temporal lobe, and posterior to gustatory cortex, secondary somatosensory cortex, and cortex related to vestibular function in the insula (Gogolla, 2017; Uddin et al., 2017). It is more likely to be near a region sensitive to coughing in the Sylvian fissure (Simonyan, Saad, Loucks, Poletto, & Ludlow, 2007), and a region in the posterior insula known to be sensitive to visual stimuli (Frank & Greenlee, 2018). Notably, epilepsy surgery of the insula (but not temporal lobe) is known to affect emotion recognition—for happiness and surprise (Boucher et al., 2015), emotions where mouth configuration is a prominent component.

DO LOW-LEVEL VISUAL FEATURES DRIVE BRAIN RESPONSES TO OBSERVED FACIAL EMOTIONS?

I have already noted that mouth (opening and closing) movements appearing in a neutral face devoid of emotion generate reliable fMRI activation and neurophysiological activity in the pSTS, as well as robust non-invasive EEG activity. Studies in our laboratory using impoverished visual stimuli—that is, biological motion displays of faces with opening and closing mouths—elicit neurophysiological responses identical to those elicited by full (grayscale) faces (Rossi, Parada, Kolchinsky, & Puce, 2014; Puce et al., 2000). In stark contrast, when biological motion displays of faces with averting and direct dynamic gaze are contrasted with the same motion in full faces, the brain responses are very different (Rossi, Parada, Latinus, & Puce, 2015; Rossi et al., 2014). These striking neurophysiological differences suggest that multiple low-level visual mechanisms might drive these respective effects. Mouth movements arise from the action of an articulated joint—the mandible is physically linked to the cranium via the temporomandibular joint. Hence, a strong response to a biological motion display of a moving mouth might be expected (Rossi et al., 2014). In contrast, a biological motion effect should not be present for (impoverished) eye motion, which does not involve joint action but arises from the coordinated action of a suite of ocular muscles. This is exactly what we see in our studies (Rossi et al., 2014, 2015).

The biological motion effect is not the only low-level visual factor that could generate stimulus-driven activity from viewing a dynamic face. Such activity could also come about because of local luminance and contrast changes. For example, when a person is very happy, their smile (or laugh) will likely show teeth. White teeth can be clearly seen against the darker aspect of the lips and mouth cavity. Sometimes the teeth might also be seen in fear. Indeed, displayed teeth can be clearly seen at a distance. The presence, or absence, of teeth in a mouth expression affects neurophysiological sensory responses in the latency range of ∼100–200 msec (i.e., P100 and N200). Participants also rate mouths with visible teeth as more arousing relative to those without visible teeth (daSilva, Crager, Geisler, et al., 2016). So, this additional low-level visual effect may partly explain why neural studies of emotion consistently report larger responses to happiness (e.g., daSilva, Crager, & Puce, 2016): Teeth are more likely to be visible in happiness in its canonical form (Smith et al., 2008).

A local luminance/contrast effect, when discriminating between emotions, could also apply to the eye region. The human sclera is bright white relative to the iris and pupil—unusual relative to other primates, which typically do not have such a high-luminance/contrast eye structure (Kobayashi & Kohshima, 1997). In gaze aversions, and in the display of emotions such as fear and surprise, high luminance/contrast changes in local visual space occur from iris movement or expansion of the eye—resulting in an increase in eye white area relative to a neutral face (Hardee et al., 2008). Like the teeth in a smile, gaze aversions or widened eyes in fearful and surprised expressions can also be seen well at a distance. We believe that the local luminance/contrast change in the eye is the major driver of the larger neurophysiological response in the pSTS during a gaze aversion. Although there is a significant fMRI signal increase in the pSTS, the amygdala appears to be even more sensitive than the pSTS (Hardee et al., 2008).
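
To make the notion of a local luminance/contrast change concrete, here is a toy index one could compute over an eye region between two video frames. The frames, ROI coordinates, and metrics below are hypothetical illustrations, not stimuli or measures from the cited work.

```python
# Toy index of local luminance/contrast change between two frames within
# an eye region of interest (ROI). Purely illustrative; frame content and
# ROI coordinates are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
frame_neutral = rng.uniform(0.2, 0.4, size=(128, 128))  # grayscale, 0-1
frame_averted = frame_neutral.copy()
frame_averted[40:55, 30:60] += 0.4                      # brighter: more visible sclera

roi = (slice(35, 60), slice(25, 65))                    # eye region

def mean_lum(img):
    """Average luminance within the ROI."""
    return float(img[roi].mean())

def rms_contrast(img):
    """RMS contrast (std/mean) within the ROI."""
    patch = img[roi]
    return float(patch.std() / patch.mean())

print("luminance change:", mean_lum(frame_averted) - mean_lum(frame_neutral))
print("contrast change :", rms_contrast(frame_averted) - rms_contrast(frame_neutral))
```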

A third low-level effect that might drive part of the response to a dynamic facial expression is the extent of the facial movement itself. Mouth movements can produce large changes in mouth configuration, as seen in facial images depicting the net change in the face (Huijgen et al., 2015). The mouth may also take up more area on the face than the eyes, depending on its configuration—for example, in a wide smile or a grimace of extreme fear. In contrast, in a gaze change, or widening of the eyes, the main shape of the eye is preserved. In the eye gaze data of Caruana and colleagues (2014), gaze aversions from direct gaze produced larger neurophysiological effects than did large extreme left–right/right–left gaze transitions, so motion extent, in terms of excursion of the iris, does not appear to be a factor here. One further possibility to consider is that the attention-grabbing nature of the facial motion simply makes us foveate the potentially most informative location in space.

A final point on “low-level” effects: Some facial features function as social cues only when the face is upright. Multiple factors can affect identity recognition, and inverted faces typically serve as control stimuli (e.g., McKone & Yovel, 2009; Rossion, 2009; Rhodes, Brake, & Atkinson, 1993). In the Thatcher Illusion, the eyes and mouth are inverted within an upright face. When the face is viewed upright, the result is grotesque, but when it is viewed inverted, no glaring irregularities are noted (Thompson, 1980), suggesting that mouth and eye orientation, and therefore configuration, matters. When human face pairs are compared on judgments of gender or emotional expression, inversion impairs difference judgments of expressions and gender, as well as sameness judgments of expression (see Pallett & Meng, 2015). These data imply that configuration matters when evaluating social cues such as gender and viewed emotions, and that there may be holistic aspects to processing both.

WHERE WE LOOK ON SOMEONE'S FACE MATTERS FOR COMPREHENDING SOCIAL INTERACTIONS

In the abovementioned studies, participants typically fixated a cross placed at the nose bridge of a (full) face. Alternatively, if face parts were presented in isolation, participants gazed at a fixation cross at the screen's center. In everyday life, our eyes rove continuously about the visual scene, and when they land on a person's face, they will not necessarily look at the bridge of the nose. In a social interaction, our eyes travel to the most informative parts of the face. In paintings and naturalistic scenes with people in them, participants often fixate on faces, and on the eye and mouth regions in particular (Bush & Kennedy, 2015; Yarbus, 1967). In the 1960s, the Russian scientist Alfred Yarbus clearly showed that observers “triangulate” a face when viewing it; that is, they focus their gaze on the two eyes and the mouth, and their scanning eye movements form a triangular shape as they examine a face (Yarbus, 1967).

Much has been made of the information provided by the eyes in emotion recognition and theory of mind tasks (Baron-Cohen, Wheelwright, Hill, Raste, & Plumb, 2001). Therefore, it is surprising that when an observer's gaze is tracked as they successfully recognize dynamic (basic) emotions, healthy participants can spend more time looking at the mouth region relative to the eyes (Blais, Roy, Fiset, Arguin, & Gosselin, 2012).

SOME ISSUES THAT MAY HAVE COMPLICATED THE SCIENCE

Implicit versus Explicit Tasks in Experiments with Facial Stimuli

The brain responses that I mainly focused on above are involuntary and are consistently observed during implicit tasks, and likely arise from low-level visual factors. This implicit way of functioning seems more ecologically valid and might closely approximate what we do in everyday life unconsciously (Puce et al., 2015; Smith & Lane, 2016). Yet, when we read the social attention and emotion recognition literature, so much of it is built on explicit tasks, for example, requiring emotions to be categorized or named, often by forced choice. How do brain responses to these explicit tasks vary relative to those in implicit tasks, when identical stimulus material is presented?

We studied how neurophysiology is modified across the implicit–explicit task dimension for social attention, low-level visual factors, and emotion. First, in a social attention task, gaze in neutral faces changed with different degrees of aversion. In the implicit task, participants indicated by button press whether gaze deviated to the “left” or “right,” and we replicated our averted gaze > direct gaze N170 effect. In the explicit task, using the same stimuli, participants indicated whether gaze moved “toward” or “away” from them. This time, the N170 was equally large to gaze aversions and to gaze returning to look directly at the observer (Latinus et al., 2015).

Second, in three different mouth configurations, we changed the presence/absence of teeth—I already mentioned effects of visible teeth earlier. In the implicit task, participants detected infrequent grayscale versions (target) of any of the (color) mouth stimuli. In the explicit task, participants saw color stimuli only and pressed one of three response buttons to indicate if the mouth formed an “O,” an arc, or a straight line—mouth configurations typically seen in surprise, happiness, or fear. Mouth shapes could occur with, or without, visible teeth. A robust main effect of teeth for P100, N170, and VPP occurred for both tasks, but there were also Teeth × Task interactions in the explicit task. For later potentials: (1) P250 showed no main effects for teeth, but showed Task × Teeth and Task × Mouth Configuration interactions; (2) LPP/P350 was only seen in the implicit task; (3) Slow positive wave (SPW) was seen only in the explicit task (daSilva, Crager, Geisler, et al., 2016).

Third, faces portraying positive emotions (happiness and pride) and neutral expressions were studied. In the implicit task, participants looked for a freckle on the face (between the eyes and mouth; an infrequent target). N170, VPP, and P250 ERPs were significantly greater for both emotions relative to neutral but did not differ between emotions. The late SPW potential significantly differentiated between happy and proud expressions. In the explicit task, participants pressed one of three buttons to differentiate the neutral, happy, and proud faces. The same main effects occurred for N170, VPP, P250, LPP, and SPW, but this time, we also saw Emotion × Task interactions involving P250 and SPW (daSilva, Crager, & Puce, 2016).

Across the above three experiments, task interactions with main effects occurred mainly in the longer latency responses, raising questions about how these neurophysiological changes might impact hemodynamic activation patterns in fMRI studies.
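
As an illustration of how such a task-by-condition interaction on a windowed ERP component might be quantified, the sketch below simulates condition-mean waveforms, averages amplitude in an N170-like window, and tests the difference of differences. The windows, amplitudes, and the simple paired-test approach are assumptions for illustration; the published analyses used full factorial designs.

```python
# Sketch of testing a Task x Teeth interaction on a windowed ERP component
# (e.g., N170): average amplitude in a latency window per subject/condition,
# then run a paired t-test on the difference of differences. Simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_sub, n_times = 20, 300
times = np.linspace(-0.1, 0.5, n_times)              # seconds
win = (times >= 0.15) & (times <= 0.20)              # N170-like window

def component(amp):
    """Condition-mean waveform with a negative peak at ~170 msec."""
    return amp * -np.exp(-((times - 0.17) ** 2) / (2 * 0.02 ** 2))

# subjects x (task: implicit/explicit) x (teeth: absent/present) x time
erp = np.empty((n_sub, 2, 2, n_times))
amps = np.array([[1.0, 1.4],     # implicit: modest teeth effect
                 [1.0, 2.0]])    # explicit: larger teeth effect -> interaction
for s in range(n_sub):
    for t in range(2):
        for k in range(2):
            erp[s, t, k] = component(amps[t, k]) + 0.2 * rng.standard_normal(n_times)

mean_amp = erp[..., win].mean(-1)                    # windowed mean amplitude
teeth_effect = mean_amp[:, :, 1] - mean_amp[:, :, 0] # teeth effect per task
t_val, p_val = stats.ttest_rel(teeth_effect[:, 1], teeth_effect[:, 0])
print(f"Task x Teeth interaction: t({n_sub - 1}) = {t_val:.2f}, p = {p_val:.4f}")
```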

How generalizable are the results we observe in the laboratory to what we experience in everyday life? From our social attention studies, we posited that, in real life, we might function in two main modes—a “nonsocial” or default mode (not the same as the resting-state [RS] concept) and a “socially aware” mode (where we explicitly judge others on some social attribute). These modes might function somewhat akin to the implicit and explicit evaluations of faces we use in the laboratory. When we are in a nonsocial mode (out in the world and interacting with objects or with others at a superficial level), our sensory systems do some of the hard work for us and differentiate between certain socially salient stimuli—just in case we might wish to explicitly evaluate them further (by switching to social mode). In social mode, our sensory systems likely increase their input gain, so incoming social signals are augmented, enabling us to better evaluate rapidly unfolding social situations (Puce et al., 2015).

Use of Morphs in Studies of Emotion

Creating stimulus sets with genuine dynamic emotional expressions is incredibly challenging, so for many years, experimenters resorted to using static faces displayed at the peak of the emotion (e.g., Biehl et al., 1997). Early attempts at creating emotional displays with real faces showed how dependent participants' behavioral performance was on technical aspects of animated stimuli, such as frame rate (Kamachi et al., 2013). Morphing between static faces of the same identity, each with a different expression at its peak, could create blended emotional stimuli. Similarly, neutral faces and emotional expressions at their peak were blended, with different proportions of the emotion being mixed with the neutral face. Unfortunately, dynamic morphed displays using different morphing strategies could produce different experimental results (Vikhanova, Mareschal, & Tibber, 2022). These morphed stimuli were criticized for not being ecologically valid, as real emotions can be initiated by one part of the face and then progressively involve other face parts. So, nonlinear facial changes occur in real emotions, but not in morphed displays; it has been argued that this is an important cue for perceiving real emotions (Korolkova, 2018). Indeed, quite different neurophysiological activity can be elicited with real versus artificially created dynamic facial expressions (Perdikis et al., 2017). A recent review examines some of these issues and challenges for studying emotional expressions (Straulino, Scarpazza, & Sartori, 2023).
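
The simplest morphing strategy mentioned above—blending a neutral face with a peak expression in fixed proportions—amounts to a linear cross-dissolve, which is easy to sketch and makes the ecological validity criticism concrete: every pixel follows the same linear trajectory, with none of the nonlinear, part-by-part dynamics of a real expression. The images below are random stand-ins, not real face stimuli.

```python
# Minimal sketch of the simplest morphing strategy: a linear cross-dissolve
# between a neutral face image and the emotion at its peak. Every pixel
# changes along the same linear trajectory, unlike real expressions, in
# which face parts change nonlinearly and at different times.
import numpy as np

rng = np.random.default_rng(4)
neutral = rng.uniform(0, 1, size=(256, 256))   # stand-in for a neutral face
peak = rng.uniform(0, 1, size=(256, 256))      # stand-in for the peak expression

def linear_morph(a, b, alpha):
    """Blend with proportion `alpha` of the emotional face (0 = neutral)."""
    return (1.0 - alpha) * a + alpha * b

# A 10-frame "dynamic" morph from neutral to peak expression.
frames = [linear_morph(neutral, peak, a) for a in np.linspace(0.0, 1.0, 10)]
print(len(frames), frames[0].shape)
```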

Feigned emotional stimuli have also been criticized, as studies using real versus feigned emotional faces can report inconsistent findings. In real life, we can generate emotional expressions via two motor pathways—a spontaneous and involuntary one, and a volitional one. Two upper motoneuron (UMN) tracts project to the facial nerve nucleus in the pons. In evolutionary terms, the extrapyramidal UMN tract is the older one—it produces involuntary and automatic facial expressions that arise rapidly and can be short lasting. In contrast, the pyramidal UMN tract, the evolutionarily newer route, allows us to make volitional expressions that can onset more slowly and be longer lasting (see Straulino et al., 2023).

A Disconnect between Literatures?

Our face and body movements display our emotions, intentions, and actions. Social interactions are dynamic and involve the orchestrated dance of a suite of facial and bodily muscles that signal one's inner mental states rapidly, spontaneously, and involuntarily. Alternatively, we can also conceal these important aspects of our inner mental life. As observers parsing the face and bodily movements of others, we do not need to explicitly name or note them—we register them effortlessly and unconsciously, and adjust our behavior to suit the social situation we are in. So, in one sense, from motion comes emotion. Unfortunately, the bulk of studies have been based on viewing isolated static stimuli on computer screens and collecting impoverished behavioral measures (e.g., simple button presses). Only relatively recently have naturalistic tasks—which are challenging to perform—become part of mainstream social neuroscience (Morgenroth et al., 2023; Saarimäki, 2021).

Another major issue for the field? How do we make sense of varied scientific findings that are complicated by an apparent disconnect between the “low-level sensory” and “affective” literatures? This seems most pertinent for studying how the brain perceives and recognizes portrayed emotions. The famous 1977 Sydney Harris cartoon comes to mind, in which two scientists stand at a blackboard solving a complex problem. Between two sets of detailed formulas, a text fragment reads, “a miracle happens ….” One scientist says to the other, “I think you should be more specific in Step 2.” (https://www.sciencecartoonsplus.com/gallery/physics/index.php). So, at the core of our emotion problem, “the miracle” occurs between sensory stimulus registration and recognition of the emotion. It seems that many scientists working on low-level visual effects (e.g., local contrast or brightness) often ignore some important higher order cognitive and affective confounds, whereas others working at higher levels of functional brain organization often do not consider potentially significant low-level confounds.

That said, a number of investigators studying top–down and bottom–up interactions, to better understand how we see an integrated, seamless world, report interesting findings. First, (newer) category-specific regions in posterior cortex differ in their retinotopic properties. For example, some higher level brain areas (e.g., the occipital place area and LOTC) have multiple overlapping retinotopic maps, whereas others, such as the OFA (occipital face area), do not (Silson, Groen, Kravitz, & Baker, 2016). These results could be clues to the differential computations these areas might provide. Second, “typical” spatial locations (e.g., eyes placed higher and mouth lower in a display) produce larger retinotopic hemodynamic signals and better behavioral measures (de Haas et al., 2016)—a finding with direct implications for the face inversion literature. Third, contralateral bias varies across higher level, category-specific visual regions. For instance, the right FFA (fusiform face area) and FBA (fusiform body area) integrate more information from both hemifields relative to their left hemisphere homologs (Herald, Yang, & Duchaine, 2023)—helping explain why prosopagnosias have a right hemisphere basis (Meadows, 1974).

Fourth, still on biases but at a higher level, top–down social cognitive factors (e.g., in–out group attributes, attitudes, stereotypes, prejudices) can provide a powerful lens that shapes our perception and how we look at others. Specifically, information from the anterior temporal lobe (ATL) on social knowledge and stereotypes from a lifetime of experience can be accessed by OFC and fed down to the FFA, affecting how sensory information is processed (Brooks & Freeman, 2019). This top–down drive can be unconscious and rapid (Freeman & Johnson, 2016): The facial appearance of an unfamiliar individual can drive strong (right or wrong) impressions (e.g., intelligent, trustworthy)—as shown by behavioral and hemodynamic studies of face-trait spaces (Stolier, Hehman, & Freeman, 2018; Todorov, Said, Engell, & Oosterhof, 2008). The culture that we grow up in shapes our social cognitive impressions (Freeman, Rule, & Ambady, 2009). We form these social impressions quickly—from “thin slices of behavior”—in about 30 sec or so (Ambady & Rosenthal, 1992). A number of years ago, an interacting top–down/bottom–up connectionist model for person construal was described, in which partial parallel interactions were probabilistic in nature, allowing continuously evolving activation states consistent with an individual's goals (Freeman & Ambady, 2011). It would be good to see this model used more in multimodal social neuroscience data interpretation.
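
In the spirit of the Freeman and Ambady (2011) dynamic interactive account, the toy sketch below captures the general idea—it is not their actual model: bottom-up cue evidence and top-down bias feed mutually inhibiting category units whose activations settle gradually over time. All categories, weights, and inputs are invented for illustration.

```python
# Toy sketch in the spirit of a dynamic interactive (connectionist) model
# of person construal: category units receive bottom-up cue input and
# top-down bias, compete via mutual inhibition, and settle gradually.
# Categories, weights, and inputs are invented for illustration.
import numpy as np

categories = ["trustworthy", "untrustworthy"]
bottom_up = np.array([0.6, 0.4])       # facial-cue evidence
top_down = np.array([0.2, 0.0])        # e.g., a positive stereotype bias
act = np.zeros(2)                      # continuously evolving activations

for step in range(50):
    inhibition = 0.5 * act[::-1]       # each unit suppresses its rival
    drive = bottom_up + top_down - inhibition
    act += 0.1 * (drive - act)         # gradual settling toward the drive
    act = np.clip(act, 0.0, 1.0)

winner = categories[int(np.argmax(act))]
print(dict(zip(categories, act.round(2))), "->", winner)
```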

The Disembodied Brain and … Face?

In the latter half of the 20th century, reductionist approaches in cognitive and social neuroscience focused on identifying the basic building blocks for cognitive and social functions. Although this work was fundamental and important, it seemed like we had laid out the jigsaw puzzle pieces on the table, but we had no idea of the overall image that the jigsaw presented. Perhaps part of the problem was that we had disembodied the brain? There it was sitting in a glass jar separated from the body—isolated from interoceptive messages from the body's internal milieu and from integrated multisensory input from the external world.

During this century, literature on the “brain–heart” and “brain–gut” axes has highlighted the key role of the vagus nerve in interoception and in modulation of the viscera. Bidirectional vagal messaging to and from the brain affects both physical and mental function and has major implications for disease (Mayer, Nance, & Chen, 2022; Manea, Comsa, Minca, Dragos, & Popa, 2015). The regular electrical activity of the heart can be recorded as the electrocardiogram (ECG)—a series of waves labeled by alphabetic letters from P to T, with the R-wave signaling the main (ventricular) contraction of the heart (Hari & Puce, 2023). In addition, electrical activity from smooth muscle contractions of the gut can be recorded with electrogastrography. The electrogastrographic signal is a complex entity in which periodic respiratory activity (from electrical activity of the diaphragm) and cardiac activity are sampled along with the electrical activity of the gut, because of volume conduction in the body. The three electrophysiological signals can easily be teased apart from their different power spectral content (for a review, see Wolpert, Rebollo, & Tallon-Baudry, 2020).
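
A minimal sketch of this spectral separation is below: a synthetic mixture of a gastric slow wave (~0.05 Hz, i.e., 3 cycles/min), respiration (~0.25 Hz), and a cardiac rhythm (~1.1 Hz) is decomposed with Welch's method, and each component is recovered from its own frequency band. The frequencies and band edges are typical illustrative values, not parameters from the cited review.

```python
# Sketch of separating gastric, respiratory, and cardiac components by
# their distinct spectral bands. The signal here is synthetic: a gastric
# slow wave (~0.05 Hz), respiration (~0.25 Hz), and a cardiac rhythm
# (~1.1 Hz), mixed by volume conduction plus noise.
import numpy as np
from scipy import signal

fs = 10.0                                       # Hz, ample for these rhythms
t = np.arange(0, 600, 1 / fs)                   # 10 minutes of "recording"
rng = np.random.default_rng(5)
mixed = (1.0 * np.sin(2 * np.pi * 0.05 * t)     # gastric slow wave
         + 0.5 * np.sin(2 * np.pi * 0.25 * t)   # respiration
         + 0.3 * np.sin(2 * np.pi * 1.10 * t)   # cardiac rhythm
         + 0.2 * rng.standard_normal(t.size))   # noise

freqs, psd = signal.welch(mixed, fs=fs, nperseg=4096)

# Each physiological signal dominates its own spectral band.
for name, (lo, hi) in {"gastric": (0.03, 0.07),
                       "respiratory": (0.15, 0.40),
                       "cardiac": (0.8, 1.5)}.items():
    band = (freqs >= lo) & (freqs <= hi)
    peak = freqs[band][np.argmax(psd[band])]
    print(f"{name:12s} peak at {peak:.2f} Hz")
```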

Studies in social neuroscience are starting to investigate interoceptive neurophysiology. For example, the brain is sensitive to the beating of the heart: A biphasic response in primary somatosensory cortex occurs 280–360 msec after the contraction of the heart's ventricles (Kern, Aertsen, Schulze-Bonhage, & Ball, 2013). Interoception of cardiac activity can modulate brain activity in somatosensory cortex and behavior during the detection and localization of somatosensory stimuli (Al et al., 2020). In addition, preliminary data from our laboratory suggest that: (1) there is a direct dependence of brain functional connectivity on the systolic–diastolic parts of the cardiac cycle; and (2) this dependence can remain stable over a prolonged RS period, showing a stable core–periphery network topology (Salibayeva, Sporns, Gitton, George, & Puce, 2023). This latter work clearly indicates that our future studies will have to consider interactions between the body and brain when estimating brain functional connectivity.

With respect to the “disembodied” face, the social neuroscience literature has tended to focus on the face alone, which is somewhat ironic given that we see our (whole) conspecifics in everyday interactions. In the next section, I discuss some face and body processing models to illustrate how thinking has progressively changed as knowledge of structural and functional neuroanatomy has grown and as (non-invasive) techniques for studying brain and body activity in vivo have improved.

It would be remiss of me not to mention the cerebellum. In some ways, it is the “Cinderella of the brain.” The cerebellum has more neurons than cerebral cortex and is an important contributor to social cognitive function (Schmahmann, 2019), but our focus has always been on cerebral cortex. Some relatively new work indicates that cerebellar activity can be reliably detected and modeled with magnetoencephalography (MEG) and EEG methods (Samuelsson, Sundaram, Khan, Sereno, & Hämäläinen, 2020). Perhaps Cinderella will now be able to wear her glass slippers?

“FACING UP” TO SOME EXISTING MODELS OF PROCESSING INCOMING INFORMATION FROM THE FACE AND BODY

In 1967, Polish neurophysiologist Jerzy Konorski proposed the idea of “gnostic units and areas” in the central nervous system, predicting neurons and brain areas with selectivity for faces, objects, places, and words. His nine-category model (Konorski, 1967) foreshadowed ideas of category specificity and pre-dated the “grandmother cell” concept by a couple of years (Gross, 2002). The first non-invasive neurophysiological evidence for human category-specific responsivity came from E.R. John—published in a memorial special journal issue (following Dr. Konorski's untimely death; John, 1975). Konorski's predictions included where on the human scalp selective ERP responses would occur in the 10–20 EEG electrode system (Jasper, 1958). E. R. John's anecdotal data (Figure 5) showed a clear negative potential at 150–200 msec to a vertical line signifying the letter “I,” which was less evident when read as the number “1” (John, 1975)! At the time, neuropsychological studies were amassing evidence on visual agnosias, especially to faces, in patients with acquired brain lesions (Meadows, 1974). In the laboratory of Truett Allison and Greg McCarthy in the 1990s, a discussion of Konorski's categories for selective stimulus evaluation was quite common.

Figure 5.

A single-subject, category-selective non-invasive neurophysiological response. Averaged EEG data from 10–20 system sites on the right parietal (P4), temporal (T6), and occipital (O2) scalp. The top two traces show waveforms elicited by 50 presentations of a vertical line described to the participant as the letter “I” and as the number “1,” respectively. Site T6 shows a prominent potential (first peak) in the old “negative polarity is up” display convention. A voltage calibration bar is absent, but activity appears to be in μV. A time calibration marker (bottom of the figure) shows total time in milliseconds. Trace 3 displays the difference waveform between the two stimuli. A larger response to the letter relative to the number is seen halfway through the epoch, at ∼200 msec, at all sites. Trace 4 depicts point-by-point t-test values between the two conditions, with significant differences (p < .01; identified within the box outlined with broken lines) for sites T6 and P4, but not O2. The original figure included data from another participant with similar findings over the left temporal scalp. Reproduced from John (1975), with kind permission from the current Editor-in-Chief of Acta Neurobiologiae Experimentalis.

In the 1980s, it was noted that a prosopagnosic patient could generate an (unconscious) autonomic galvanic skin response to a familiar face, although he could not identify it (Bauer, 1984). The patient was a motorcycle accident victim who had sustained extensive bilateral occipito-temporal lesions, as seen from the computerized tomography images (Bauer, 1982). It appears that the left pSTS and bilateral V1 were spared, leaving a potential route for the unconscious response to the familiar face.

Not long after Bauer's (1984) case report, a model for familiar face recognition was proposed by British neuropsychologists Vicky Bruce and Andy Young, based on decades of neuropsychological investigations of patients with facial processing deficits (Bruce & Young, 1986). The model also included facial speech processing and facial expression analysis—in a pathway parallel to that for familiar face recognition. Following human and monkey fMRI and neurophysiological studies devoted to face perception in the latter two decades of the 20th century, this model was refined (Figure 6A; see Gobbini & Haxby, 2007). Interestingly, the “emotion” component of this later model focused on the emotional reactions generated in the viewer of the familiar face, and not on the emotions or other dynamic social signals present on the familiar individual's face. The original model by Haxby, Hoffman, and Gobbini (2000) was modified by both Gobbini and Haxby (2007) and O'Toole and colleagues (2002). O'Toole's model expanded on how dynamic faces are analyzed, with identity signals from the dynamic face being sent from the dorsal pathway (where the STS was said to be) to the ventral pathway (Figure 6B).

Figure 6.

Two models of active brain regions in face perception and recognition. In both models, an initial branch point occurs where invariant facial features (important for identity recognition) are separated from dynamic aspects of the face (important for emotional expressions and social attention). (A) The refined Gobbini and Haxby (2007) familiar face recognition model has a core system that decodes visual appearance via two streams: one for invariant feature identification and another for dynamic face feature perception. Information from the core system passes to the extended system, activating either aspects of person knowledge or our own emotions elicited by that person. (B) The O'Toole and colleagues (2002) model expands the original Haxby and colleagues (2000) model. Here, dynamic aspects of a familiar (and unfamiliar) face (e.g., an expressed emotion) are processed in the dorsal visual pathway, and an identity signal is sent to the ventral pathway. The STS is assumed to be part of the dorsal pathway.

Years later, Bernstein and Yovel (2015) worked with the Haxby and colleagues (2000) and O'Toole and colleagues (2002) models to try to address some existing issues. For example, the OFA is a ventral pathway region that extracts form information from faces and is strongly connected to the FFA, but neither area shares strong connections with the pSTS. Hence, the pSTS was placed in the other pathway, devoted to extracting information from dynamic faces. Bernstein and Yovel (2015) placed the aSTS and inferior frontal gyrus—structures reactive to dynamic faces—in the dorsal face pathway because, in their view, “the primary functional division between the dorsal and ventral pathways of the face perception network is the dissociation of motion and form information” (Bernstein & Yovel, 2015).

These models typically have not included human voice processing and have remained largely visuocentric, although the original Bruce and Young (1986) model included speech analysis. A model for the “auditory” face—that is, for processing auditory information from human voices—proposed a structure parallel to the analysis of visual information from the human face (Figure 7; Belin, Bestelmeyer, Latinus, & Watson, 2011; Belin, Fecteau, & Bédard, 2004).

Figure 7.

A model of voice perception. The Belin and colleagues (2004) VOICE perception model (left: brown/gold flowchart) shows parallelism with the original components of the Bruce and Young (1986) familiar FACE perception pathway (right: green flowchart). There are paths of intercommunication between the two. This model is based on extensive unimodal processing (within each pathway, colored arrows) and multimodal interactions (cross-pathway interactions, black arrows).

As already noted, bodies are important messengers of emotional state and action intent, as well as signals of identity. Not surprisingly, fMRI studies identified areas of occipitotemporal cortex sensitive to the human body, such as the EBA (Downing, Jiang, Shuman, & Kanwisher, 2001) and FBA (Peelen & Downing, 2005). Intracranial field potential studies have reported selective responses to human hands and bodies in ventral and lateral occipitotemporal regions (Pourtois, Peelen, Spinelli, Seeck, & Vuilleumier, 2007; Puce & Allison, 1999). Parallel fMRI studies in our laboratory also showed activation in lateral regions—most prominently the inferior temporal sulcus. The EBA was sensitive to both body parts and whole bodies (Downing & Peelen, 2016), whereas the FBA responded more vigorously to whole bodies, although it could respond to body parts (Peelen & Downing, 2007). The pSTS also showed a vigorous, selective fMRI response both to realistic human hand and leg motion and to animated avatars of whole mannequin bodies, faces, and hands (Thompson, Clarke, Stewart, & Puce, 2005; Wheaton, Thompson, Syngeniotis, Abbott, & Puce, 2004). In contrast, the MTG is highly active to man-made object/tool motion (Beauchamp, Lee, Haxby, & Martin, 2003).

A challenge for the field has been to place activation to (dynamic) bodies and their parts into existing (face-centric) models of social information processing. It is no surprise that these body-related findings prompted substantial revisions to existing models; for example, a multisensory model for person recognition using dynamic information from faces, bodies, and voices has been proposed. Here, the pSTS acts as a “neural hub for dynamic person recognition,” sending multisensory information to the aSTS and then on to the ATL—a region critical for person recognition. Unisensory auditory (pSTS and aSTS) and visual (OFA and FFA) pathways also send information to the ATL for person recognition (Yovel & O'Toole, 2016). The EBA and FBA are not part of this model. Another important aspect of interpreting “body language” is the unconscious nature and speed with which we make sense of this information—proposed to be possible via subcortical and cortical pathways that are intertwined in three interconnected brain networks (Figure 8; de Gelder, 2006).

Figure 8.

Emotional body language (EBL) processing across three interrelated brain networks. EBL visual information enters subcortical (red) and cortical (blue) routes in parallel. The subcortical, reflex-like EBL network (red) is rapid and comprises the superior colliculus (SC), pulvinar (Pulv), striatum, and amygdala (Amyg). Its output is not amenable to conscious awareness. The (cortical) visuomotor perception of EBL network (blue) has the core areas of LOC, STS, IPS, premotor cortex (PM), FG, and amygdala. (The amygdala is common to two networks in this scheme.) The third network is the (cortical) body awareness of EBL network (green). Its core structures are the insula, somatosensory cortex (SS), ACC, and ventromedial prefrontal cortex. It processes incoming information from others, as well as interoceptive information from the individual. The subcortical (reflex-like) EBL network sends feedforward connections (red lines) to the two cortical networks. Reciprocal interactions (blue lines) exist between the two cortical systems.

An updated model for recognizing emotion from body motion has been proposed (de Gelder & Poyo Solanas, 2021). The EBA and FBA are not in this model. Processing of body movements starts in the IOG, dividing into a ventral and a “dorsal” route where pSTS plays a key role, relaying information to the limbic system and intraparietal sulcus (IPS). This “radically distributed” model includes a subset of brain regions sensitive to human face motion (Figure 9A). A major departure from more established models is the addition of a mid-level of feature analysis dealing with affect, which no longer passes through a structural analysis of the body (Figure 9B). This would allow affective information to be processed more rapidly. Identity recognition would be dealt with via the body-specific structural analysis route, but there is no direct link between it and affective analysis of body posture. Given that we have idiosyncratic bodily movements and postures, it is not clear how this information would be extracted in this formulation.

Figure 9.

A model for recognizing emotional expressions from an individual's movements. (A) Brain regions involved in recognizing emotions and their connections. TP = temporal pole. (B) Side-by-side flowcharts of the classical hierarchical model for recognizing emotional expressions versus an alternative proposal (radically distributed model) that does not rely on a uniquely hierarchical progression of information from lower to higher level brain regions. In this newer model, the fainter identity box indicates that this element was not present in the original figure in de Gelder and Poyo Solanas (2021), but from the discussion in the article, I surmise that this would be the case.

The pathway of de Gelder and Poyo Solanas (2021) has its starting point in the IOG—similar to that of the original suggestion by Haxby and colleagues (2000), their updated model (Gobbini & Haxby, 2007), and that of Ishai (2008). In our recent intracranial study of four occipitotemporal ROIs and dynamic changes in gaze direction and emotion (see Figures 2 and 3), we evaluated the likely white matter pathways that might carry information between these regions (Babo-Rebelo et al., 2022). Endpoints of the major posterior brain white matter pathways were identified (Bullock et al., 2019) from 1066 healthy brains in the Human Connectome Project (Figure 10A). An overlap analysis was then performed between these endpoints and active intracerebral sites in the 11 epilepsy surgery patients. From the overlap analysis and field potential latencies, we proposed a potential information flow in part of the occipitotemporal cortex (Figure 10B).

Figure 10.

Putative information flow routes for faces in the posterior brain. (A) Schematic figure of white matter pathways routing information from visually sensitive brain regions. Meyer's loop, part of the optic radiation connecting the lateral geniculate nucleus and occipital lobe, is also included. (B) Cartoon of putative routes of information flow relating to faces, focusing mainly on fusiform cortex and STC. This schematic is based on an overlap analysis of white matter tract endpoints (from 1066 healthy participants) and coordinates of active bipolar sites (from epilepsy patients). All data are in MNI space. Solid lines represent routes with overlap between tract endpoints and active sites. Broken lines show connections with overlap at one tract end only, as seen in the data of Babo-Rebelo and colleagues (2022). Short-range fibers aiding information flow across ventral occipitotemporal cortex were not included in the tract endpoint analysis and are not represented here. Tract abbreviations are identical to those in Part A. Parts A and B are reproduced under a CC BY 4.0 Deed from Babo-Rebelo and colleagues (2022). SLF = superior longitudinal fasciculus; TP-SPL = temporoparietal connection of the superior parietal lobule; Arc = arcuate fasciculus; pArc = posterior arcuate fasciculus; ILF = inferior longitudinal fasciculus; VOF = vertical occipital fasciculus; MdLF-Ang = middle longitudinal fasciculus branch of the angular gyrus; MdLF-SPL = middle longitudinal fasciculus branch of the superior parietal lobule; FC = fusiform cortex; ITC = inferior temporal cortex.

Our data-driven model (Figure 10) is incomplete: It focused on how information might be routed between the pSTS and fusiform cortex. We posit that inferior temporal cortex may act as the mediator between the two, with information transfer via the posterior arcuate fasciculus. Our invasive neurophysiological data set was limited—sampling only cortex implanted for clinical needs. Therefore, other connective links in the face pathway could not be evaluated. That said, we propose that combined neurophysiological and neuroanatomical investigations are one way forward for making sense of the complex network of interconnections that allows processing of the human form, and for visual function more generally.
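
To make the logic of this endpoint–site overlap analysis concrete, here is a minimal sketch in Python. It is illustrative only: the coordinates are toy values, and the 5-mm overlap radius is my assumption rather than the criterion used by Babo-Rebelo and colleagues (2022).

```python
# Minimal sketch of a tract-endpoint / recording-site overlap analysis.
# Coordinates are assumed to be in MNI space (mm); the 5 mm radius is a
# hypothetical choice, not the threshold from Babo-Rebelo et al. (2022).
import numpy as np
from scipy.spatial.distance import cdist

def endpoint_overlap(tract_endpoints, active_sites, radius_mm=5.0):
    """Fraction of active sites lying within radius_mm of at least one
    tract endpoint, plus a per-site boolean mask."""
    d = cdist(active_sites, tract_endpoints)  # site x endpoint distances
    near = (d <= radius_mm).any(axis=1)       # does each site overlap any endpoint?
    return near.mean(), near

# Toy example: two endpoints of one tract, three bipolar recording sites.
endpoints = np.array([[-42.0, -58.0, -12.0], [-40.0, -62.0, -10.0]])
sites = np.array([[-41.0, -60.0, -11.0], [-30.0, -90.0, 0.0], [-43.0, -57.0, -13.0]])
frac, mask = endpoint_overlap(endpoints, sites)
print(f"{frac:.2f} of sites overlap tract endpoints: {mask}")
```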

Looking beyond the visual pathway scheme and existing models of face and body processing, we also need to consider existing social brain network models. In one four-network model, the pSTS sits in a mentalizing (theory-of-mind) network, together with the EBA, FFA, TPJ, temporal pole, posterior cingulate cortex (PCC), and parts of dorsomedial pFC. The other three networks are the amygdala network (amygdala, middle fusiform/inferior temporal cortex, and parts of ventromedial prefrontal/OFC), the empathy network (parts of the insula and middle cingulate cortex), and the mirror/simulation/action perception network (parts of inferior parietal and inferior frontal cortex; Stanley & Adolphs, 2013). Another literature-review-based model makes the pSTS a central hub for three neural social information processing systems: social perception, action observation, and theory of mind (Yang, Rosenblau, Keifer, & Pelphrey, 2015). This latter formulation does not consider the EBA or FFA.

One other important consideration relates to the actual nature of a real-life social interaction. Such interactions do not happen in isolation and involve at least one other individual. For this reason, the need for “2-person” (or dyadic) social neuroscience studies has been emphasized (Quadflieg & Koldewyn, 2017; Schilbach et al., 2013; Hari & Kujala, 2009). Dynamic dyadic stimuli contain information along (at least) three dimensions (perceptual, action, and social), therefore requiring multiple control conditions. For example, the perceptual dimension includes interpersonal signals such as mutual smiles or coordinated movement patterns. In the action dimension, actions can be independent or joint (e.g., reading vs. discussing), have shared or opposing goals (e.g., collaborating vs. competing), and be positive or negative (e.g., kissing vs. punching someone). In the social dimension, acquaintance type (e.g., strangers vs. acquaintances/family), interaction type (e.g., formal, casual, or intimate), and interaction level (e.g., subordinate or dominant) all matter (Quadflieg & Koldewyn, 2017).

Just being an observer of a social interaction is also not enough—our science needs to study the neural sequelae of real-life human interactions. Fortunately, we now have portable technology to perform such studies, although data acquisition and analysis methods are not without pitfalls (Hari & Puce, 2023). In addition, new dynamic stimulus sets with large numbers of exemplars are being generated—including social and nonsocial interactions, and interactions with objects (e.g., Monfort et al., 2020).

SLURPING FROM THE BRAIN REGION ‘BOWL OF ALPHABET SOUP’

Altogether, we now have a large alphabet soup of brain regions, including the pSTS, OFA, FFA, FBA, and EBA—regions known for evaluating social stimuli. Relevant to the social brain models discussed earlier, where the EBA and FBA belong is important (Taubert, Ritchie, Ungerleider, & Baker, 2022). The pSTS has been proposed to be critical for the analysis of social scenarios involving multiple individuals (Quadflieg & Koldewyn, 2017), which would site it in a mentalizing network (Stanley & Adolphs, 2013) or make it a central hub for social information processing networks (Yang et al., 2015). The EBA is active when viewing multiple people. Its proposed role is to generate “perceptual predictions about compatible body postures and movements between people that result in enhanced processing when these predictions are violated” (Quadflieg & Koldewyn, 2017). Both the STS and EBA augment their activation when individuals' body postures and movements face each other (see Taubert et al., 2022). Notably, when such facing dyads are inverted, reliable behavioral decrements occur relative to their upright counterparts (Papeo & Abassi, 2019). When facing body stimuli are evaluated, effective connectivity (EC) increases between the EBA and pSTS (Bellot, Abassi, & Papeo, 2021), suggesting that delineating their respective roles will be challenging.

Where does the EBA belong in the visual streams? Some investigators argue that it does not belong in the ventral stream (Zimmermann, Mars, de Lange, Toni, & Verhagen, 2018). Is the EBA heteromodal? This does not seem to be the case. The pSTS, in contrast, has subregions with multimodal capability (Landsiedel & Koldewyn, 2023). It also has a complex anterior–posterior gradient of functionality, with considerable overlap between functions (Deen, Koldewyn, Kanwisher, & Saxe, 2015). Gradient complexity does not increase in a simple posterior-to-anterior direction, because the proximity of the TPJ, which is active in theory-of-mind tasks such as false belief, complicates the functionality gradient. Additional heterogeneity of pSTS functionality is evident from multivoxel pattern analysis (MVPA): although MVPA shows similarities in EBA and pSTS function during the observation of dyadic interactions, the EBA shows unique functionality in dyadic interaction conditions that the pSTS does not (Walbrin & Koldewyn, 2019). To add to this complex story, human EBA and pSTS function can be doubly dissociated. In a clever manipulation of visual psychophysics, fMRI-guided TMS delivered to the EBA disrupted body form discrimination, whereas TMS to the pSTS disrupted body motion discrimination (Vangeneugden, Peelen, Tadin, & Battelli, 2014). There are two additional alphabet soup ingredients: the pSTS and aSTS have been subdivided to include “social interaction” subregions (i.e., pSTS-SI and aSTS-SI). These subregions respond to interactions between multiple individuals, but not to affective information per se (McMahon, Bonner, & Isik, 2023; see also Walbrin & Koldewyn, 2019).
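
For readers unfamiliar with MVPA, the sketch below shows the general shape of such a decoding analysis on simulated single-trial ROI patterns. This is a generic cross-validated classification scheme, not the specific pipeline of Walbrin and Koldewyn (2019); all data and parameters are invented for illustration.

```python
# Generic MVPA sketch: cross-validated decoding of two dyadic-interaction
# conditions (e.g., facing vs. non-facing dyads) from simulated ROI patterns.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200                 # single trials, voxels in the ROI
labels = np.repeat([0, 1], n_trials // 2)    # two hypothetical conditions
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[labels == 1, :20] += 0.5            # weak condition signal in 20 voxels

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
acc = cross_val_score(clf, patterns, labels, cv=cv)
print(f"Mean decoding accuracy: {acc.mean():.2f} (chance = 0.50)")
```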

What about dynamic human interactions with objects? The lateral occipital complex (LOC) activates to human interactions with objects—but primarily represents the object information in the interaction, or possibly the features of the interaction (Baldassano, Beck, & Fei-Fei, 2017), leading Walbrin and Koldewyn (2019) to suggest that the EBA/LOTC may play a role in representing distinct people and objects (or the distinct ways in which to use them; see also Wu, Wang, Wei, He, & Bi, 2020). Here, the term LOTC refers to LOC cortex in close proximity to the EBA. We also need to remember that the classic ventral LOC, essential for object identification, has visuo-haptic properties as well, so it should be considered a multisensory region (Amedi, Malach, Hendler, Peled, & Zohary, 2001).

HOW WILL A THIRD VISUAL PATHWAY CHANGE THE EXISTING LANDSCAPE IN SOCIAL AND SYSTEMS NEUROSCIENCE?

How does the third visual pathway proposed by Pitcher and Ungerleider (2021) fit into the existing context of the other two pathways, dorsal and ventral, originally proposed by Ungerleider and Mishkin (1982)? It has been proposed to be a pathway for social perception, with component structures strongly activated by human motion and action. The ventral pathway has always been the stalwart for form processing and as such includes human (faces, bodies) and nonhuman (objects, animals) forms; acquired lesions of this pathway produce various types of visual agnosia. The dorsal pathway is devoted to processing space, and different types of spatial neglect are likely the best-known sequelae of acquired lesions of this pathway. The dorsal pathway also codes space in various coordinate systems relative to eye, head, hand, and body.

An Expanded Third/Lateral Pathway?

For me at least, it might make sense for the third pathway to have the (multisensory) capability to decode:

  1. other human, animal, and object (including tool) motion and action;

  2. other human interactions with conspecifics (including dyads or groups);

  3. other human interactions with other animate beings (e.g., animals) or the natural environment;

  4. other human interactions with objects (including tools);

  5. self–other interactions (with other humans);

  6. self-interactions with animals; and

  7. self-interactions with tools.

With the above expanded features, the third/lateral pathway would preserve some parallelism with the ventral form pathway. It might therefore be regarded as an “[inter]action” pathway, and the proposed social perception pathway of Pitcher and Ungerleider (2021) would be an essential component within that scheme.

In Figure 11, based on the above proposition, I have tried to sort the ingredients of our brain region alphabet soup into two basic divisions—assigning putative membership to the ventral pathway or to the expanded third pathway. I have not considered the classical dorsal pathway and its putative elements, as this is beyond the scope of this review. Note that I have not included the lower level visual areas V2, V3, and V4 here, for the sake of simplicity. My purpose is to start a discussion on what the main tasks of the ventral and lateral/third pathways might be, and who the card-carrying members of each might be. I have based the general distinctions, or biases (Figure 11), not only on human functional neuroimaging and neurophysiological studies but also on the sequelae of acquired human lesions. I have focused shamelessly on the human side of the fence, as some of the capabilities I have listed cannot easily be tested in nonhuman primates. At the very least, shining a spotlight on some human impairments in the clinical literature might stimulate future experimentation in healthy participants that could help fill out the general classification scheme. I would posit that once there is a better consensus on which structures sit in the ventral and third/lateral pathways, it might then be time to tackle the dorsal pathway in a similar exercise, determining pathway members and their standing relative to the other two pathways.

Figure 11.

Putative members of the ventral and third/lateral pathways in the human brain? A schematic of a lateral (top) and partial inferior (bottom) view of a human cerebral hemisphere, segregating some selected known functional brain areas into third/lateral or ventral pathways. The dorsal pathway appears as an outline and is not considered further here. The green, red-brick, and blue colors identify the three pathways for which primary visual cortex (V1) is the departure point. The inferior parietal lobule (IPL) is presented as an outlier and hence does not fit the color scheme. V1 = primary visual cortex; VWFA = visual word form area; MT/V5 = motion-sensitive fifth visual area; MST = human homolog of macaque medial superior temporal area, with high-level motion sensitivity; MTG/ITS = middle temporal gyral and inferior temporal sulcal cortex (sensitive to motion of animals and tools); TPJ = temporoparietal junction; Amyg = amygdala.

Below, I consider in which division of the visual pathways some of the ingredients of our alphabet soup might sit. For the structures appearing in the ventral pathway in Figure 11, a large literature with converging evidence exists, from neuropsychological lesion studies of various types of agnosia, epilepsy surgery patients, and neuroimaging studies in healthy participants, in addition to the monkey studies. With respect to the idea of an expanded lateral (or third) pathway, a number of uncertainties arise from the fact that:

  1. some of the newer category-selective regions have never been definitively seated within the original visual pathway scheme; and

  2. the expanded scheme of the third/lateral pathway (Figure 11) considers functionality that has not been included in previous pathway classifications.

From the task-related work of Vangeneugden and colleagues (2014) described in the previous section, the EBA would seem to fit best in the ventral stream. That said, RS investigations coupled with diffusion-weighted MRI data site it in the dorsal stream, based on EC measures (Zimmermann et al., 2018). If someone forced my hand on the issue, I would place the EBA in the third/lateral pathway, not the dorsal stream, based on its task-related response properties. If we regard the third/lateral pathway as an [inter]action stream, with social processing as a central component, then the EBA would be a member—because, like parts of the pSTS, it activates to dyadic and multiperson interactions.

In real life, we evaluate animate (biological) motion as well as inanimate motion (i.e., from man-made objects such as tools), to which the middle temporal gyral (MTG) and inferior temporal sulcal cortex, respectively, are sensitive (Beauchamp et al., 2003; Beauchamp & Martin, 2007). Therefore, I would advocate that brain regions with these response properties sit in this third/lateral [inter]action pathway. That said, this raises a lot of questions. What about complex motion deficits in stroke patients, such as the inability to process form from motion, or to recognize motion per se (e.g., discussed by Cowey & Vaina, 2000)—rare cases with posterior brain lesions and very specific deficits? In addition, given a three-visual-pathway scheme, where would first- and second-order motion processing now sit? In the ventral/dorsal visual pathway model, these were proposed to sit in the ventral and dorsal systems, respectively, based on the monkey literature and on rare acquired lesions in patients (Vaina & Soloviev, 2004).

Social perception involves evaluating interactions with others relative to our integrated (multisensory and embodied) self. Even our personal space is delimited by our arm length. The TPJ is an important locus for multisensory integration within the self. Notably, when the functionality of the TPJ is disrupted by focal epileptic seizures, bizarre phenomena such as the out-of-body experience (OBE) can occur, and complex visual hallucinations involving the self, as well as the self relative to others, can be experienced (Blanke & Arzy, 2005; Blanke, 2004). Out-of-body-like sensations can also be elicited in epilepsy patients by direct cortical stimulation of the TPJ (Blanke, Ortigue, Landis, & Seeck, 2002), or in healthy participants using TMS (Blanke et al., 2005). The OBE is an extreme example of an autoscopic phenomenon, in which different degrees of multisensory disintegration of the visual, proprioceptive, and vestibular senses can take place at the lower level. These lower level features can also interact with higher level features such as egocentric visuospatial perspective taking, self-location, and agency (Blanke & Arzy, 2005). The TPJ is also active in theory-of-mind tasks. Therefore, given that successful interactions with others in the world cannot occur without an intact self, I would situate the TPJ in the [inter]action pathway.

Some Outstanding Questions

What is the impact of the “other” route from retina to cortex on the visual pathway model? The three-pathway model's input in its current formulation is V1—acknowledging input from the retina via the lateral geniculate route. However, a more rapid, lower resolution pathway from the retina passes through the superior colliculus and pulvinar to “extrastriate cortex.” Currently, its exact terminations with respect to the functional areas making up the ventral and third/lateral pathways are not known. Knowing where these projections terminate in individual participants would not only be important for understanding structural connectivity (SC) but would also have implications for functionality.

Issues related to underlying short- and long-range white matter fiber connections, and relationships between structural and functional connectivity. Wang and colleagues (2020) performed a heroic study evaluating data from 677 Human Connectome Project healthy participants across three dimensions of MRI-based data: SC, functional connectivity (FC) using RS and face localizer task data, and EC. Their conclusions need to be interpreted with caution, as the included Human Connectome Project fMRI data comprise only RS and a face localizer task (a 0-back and 2-back working memory task with no social evaluation). Their analyses included nine face network ROIs per hemisphere, and they estimated short- versus long-range white matter fibers in their SC connectome analysis. Over ∼60% of ROI–ROI connections could be regarded as short range, with the rest labeled as long range, that is, tracts appearing in a white matter atlas. Greater physical distance between any two face ROIs was associated with a greater number of long-range fiber connections.
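
The distance logic can be sketched in a few lines: given ROI centroids in MNI space, each ROI–ROI connection is labeled short- or long-range by the Euclidean distance between centroids. The centroid coordinates and the 30-mm cutoff below are illustrative assumptions of mine, not values from Wang and colleagues (2020).

```python
# Toy sketch: label ROI-ROI connections short- vs. long-range from the
# Euclidean distance between ROI centroids in MNI space. All coordinates
# and the 30 mm cutoff are hypothetical illustration values.
import numpy as np
from itertools import combinations

roi_centroids = {
    "OFA": (-38.0, -80.0, -10.0),
    "FFA": (-40.0, -52.0, -18.0),
    "pSTS": (-52.0, -48.0, 8.0),
    "IFG": (-46.0, 26.0, 12.0),
}

def classify_connections(centroids, cutoff_mm=30.0):
    out = {}
    for a, b in combinations(centroids, 2):
        d = float(np.linalg.norm(np.subtract(centroids[a], centroids[b])))
        out[(a, b)] = ("short" if d <= cutoff_mm else "long", round(d, 1))
    return out

for pair, (label, d) in classify_connections(roi_centroids).items():
    print(f"{pair[0]}-{pair[1]}: {label} range ({d} mm)")
```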

The early visual cortex (presumably V1?), OFA, FFA, STS, inferior frontal gyrus (IFG), and PCC were observed to form a (functional) six-region core subnetwork, which was active and synchronized across RS and task-related contexts (Wang et al., 2020). Given the low level of task requirements related to social judgments, perhaps this might correspond to the “default mode of social processing” in implicit tasks that I mentioned earlier (Latinus et al., 2015; Puce et al., 2015)?

Overall, Wang and colleagues (2020) reported that the organization of the nine-ROI face network was highly homogeneous across their 677 participants from the point of view of SC and of RS or task-related functional connectivity. These results give a real shot in the arm for data analysis using hyperalignment methods (Haxby et al., 2014, 2020).
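
The core computational step in hyperalignment is easy to sketch: an orthogonal Procrustes rotation that maps one participant's voxel response matrix into another's response space. The full method iterates this across many participants to build a common model space (Haxby et al., 2014, 2020); the simulation below illustrates only the single pairwise alignment step, on invented data.

```python
# Pairwise-alignment sketch of the core hyperalignment step: find the
# orthogonal rotation R that best maps participant B's time x voxel response
# matrix onto participant A's. Data are simulated for illustration.
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(1)
n_timepoints, n_voxels = 300, 100
target = rng.normal(size=(n_timepoints, n_voxels))                 # participant A
true_rot, _ = np.linalg.qr(rng.normal(size=(n_voxels, n_voxels)))  # hidden rotation
source = target @ true_rot + 0.1 * rng.normal(size=target.shape)   # participant B

R, _ = orthogonal_procrustes(source, target)  # rotation aligning B to A
aligned = source @ R
r_before = np.corrcoef(source.ravel(), target.ravel())[0, 1]
r_after = np.corrcoef(aligned.ravel(), target.ravel())[0, 1]
print(f"Correlation with target: {r_before:.2f} before, {r_after:.2f} after alignment")
```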

Wang and colleagues (2020) noted three structural routes (their “pathways”) between structures in the nine-ROI face network: the ventral route, composed of the OFA, FFA, and ATL, for processing static face information; the “dorsal” route, consisting of the STS and IFG, for dealing with dynamic information; and the “medial” route, composed of the PCC, amygdala, and OFC, dealing with the “social, motivational and emotional importance” of faces. They also noted that there did not seem to be a gateway or clear entry point to either the FFA or the STS. Interestingly, the pattern of EC varied as a function of hemisphere (see below).

Issues related to laterality. Pitcher and Ungerleider (2021) noted that their pathway was predominantly right-hemisphere biased. Although there is a clear right-sided bias in many studies of dynamic face and body perception, activation is not confined to the right hemisphere: 6- to 8-year-old children show a clear right-sided pSTS bias for dynamic faces/bodies relative to older children aged 9–12 years and healthy adults, whose activation patterns are more bilateral (Walbrin, Mihai, Landsiedel, & Koldewyn, 2020). In contrast, the left EBA is preferentially engaged in adults viewing interacting (face-to-face) dyads. This activation can be disrupted by stimulus inversion, and the inversion effect can itself be disrupted by fMRI-guided TMS to the left EBA (Gandolfo et al., 2024). The literature related to category selectivity for animal motion, tool motion, and human–object interactions overall tends to report stronger left occipitotemporal activation—suggesting that these functionalities may show a left hemisphere bias.

I have already mentioned the study of Wang and colleagues (2020) above. Relevant to the hemispheric asymmetry issue, their EC analysis (psychophysiological interaction [PPI]) indicated that face subnetworks were present in both hemispheres, but that the connectivity patterns were quite different. Consistent with the right hemisphere bias for faces, the right-hemisphere connectivity pattern had mainly reciprocal (or bidirectional) connections, whereas the left hemisphere pattern was predominantly feedforward. These results were obtained for RS and for working memory tasks involving faces. Will this connectivity pattern persist for more demanding facial or social judgments?
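
For readers unfamiliar with PPI, the sketch below shows the basic construction: an interaction regressor formed as the product of a (centered) task regressor and a seed-region time series, entered into a GLM alongside the main effects. It is deliberately simplified (standard implementations, e.g., in SPM or FSL, first deconvolve the seed signal to the neural level), and all data here are simulated.

```python
# Simplified PPI sketch: build an interaction (task x seed) regressor and
# estimate its weight in an ordinary least-squares GLM. Simulated data only;
# real PPI pipelines deconvolve the seed BOLD signal before multiplying.
import numpy as np

rng = np.random.default_rng(2)
n_scans = 200
task = np.tile(np.r_[np.ones(10), np.zeros(10)], n_scans // 20)  # boxcar task
seed = rng.normal(size=n_scans)                # seed (e.g., pSTS) time series
ppi = (task - task.mean()) * seed              # the PPI interaction regressor

# Simulated target ROI with genuine task-dependent coupling to the seed.
target = 0.8 * seed + 0.5 * ppi + rng.normal(size=n_scans)
X = np.column_stack([np.ones(n_scans), task, seed, ppi])  # GLM design matrix
betas, *_ = np.linalg.lstsq(X, target, rcond=None)
print(f"PPI beta (task-dependent coupling): {betas[3]:.2f}")
```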

We seem to be developing a parallelism with the language literature here. For the longest time, the dominant left hemisphere contribution to language was championed, particularly in the neuropsychological sphere. Today, although there is an extensive language model with a left-hemisphere bias, we appreciate the unique and important contributions made by the right hemisphere (see Hickok, 2009). In the case of faces, “… areas in the right hemisphere were more anatomically connected, more synchronized during rest and more actively communicating with each other during face perception. Furthermore, we found a critical association between the ratio of intra- and interhemispheric connection and the degree of lateralization, which lends support to an older theory that suggests that hemispheric asymmetry arises from interhemispheric conduction delay.” (Wang et al., 2020)

Inclusion of other structures in the [inter]action pathway? The inferior parietal lobule. I have included the inferior parietal lobule (IPL), which is composed of the angular and supramarginal gyri, in Figure 11, labeling it as a multicolored outlier. I raise some of the issues here—their resolution is way beyond the scope of this article. First, OBEs and related experiences can also occur with angular gyral stimulation, similar to the TPJ (Blanke et al., 2002). Second, Gerstmann syndrome is a complex neuropsychological deficit that classically involves (left) angular gyral lesions and exhibits complex visuo-motor signs (Arbuse, 1947; Gerstmann, 1924). Classically, it has a tetrad of signs: finger agnosia (not only of one's own fingers, but also those of others), agraphia (without alexia), acalculia, and left–right confusion. The visual agnosia for fingers might be expected to belong in the ventral stream together with its other agnosic cousins. The spatial impairments, for example, the left–right confusion, are more consistent with the dorsal stream, and the acalculia can have a spatial component (where there can be an inability to carry the “1” in addition or subtraction). Finally, there is the agraphia—an inability to write. The signs of Gerstmann syndrome can also be induced with focal cortical stimulation during presurgical mapping, and some of the signs, for example, the finger agnosia and acalculia, can extend into the neighboring supramarginal gyrus (Roux, Boetto, Sacko, Chollet, & Trémoulet, 2003). Third, IPL lesions are well known to produce apraxias, most commonly of the upper limb. Interestingly, right IPL lesions produce more extensive apraxias (and agnosias) and can produce visual distortions that can include the arms and even the lower limbs (Nielsen, 1938)—arguably, one could consider these as distortions of the self and others. In general, apraxias can range from difficulties with using physical objects spontaneously or on verbal command, to higher level cognitive impairments where patients cannot form an action plan for how to sequence multistep events, for example, toasting a slice of bread and buttering it (for a recent brief review of apraxias involving the upper limb, see Heilman, 2021).

In summary, the IPL's complex functionality makes it difficult to situate completely in either the lateral/third or the dorsal pathway. Perhaps it will turn out to be an important gateway between the two?

Other structures? Amygdala, insular cortex, IFG. The amygdala, with its key role in responding to socially salient stimuli (including fear and social attention) in the environment, would argue for its inclusion in the three-pathway scheme. It is not clear, however, how the amygdala conveys information to the structures in the ventral and third/lateral pathways. Working this out will be complicated, because the amygdala is a complex of nine nuclei—which can be loosely split into centromedial and basolateral groupings, with respective roles in autonomic function and in processing visual salience. There are abundant interconnections between the two nuclear groupings. Given that short-latency field potentials can be recorded in the human amygdala in response to emotional visual stimuli (Huijgen et al., 2015), one would expect a direct pathway carrying visual information to the amygdala. Indeed, a direct human superior colliculus–pulvinar–amygdala pathway has recently been demonstrated, dealing predominantly with visual and auditory information related to negative affect, but not with positive images or noxious stimuli (Kragel et al., 2021).

Among its many functions (Uddin et al., 2017), the insular cortex is important for visceral sensation and interoception, as well as for processing information related to affect (e.g., different forms of disgust), social cognition (e.g., Figure 4), and empathy. It also houses secondary somatosensory cortex and gustatory cortex. Given that so many of these functions are important for the self, it would seem relevant to include parts of this cortex in the three-pathway scheme.

The inferior frontal gyrus (IFG) and posterior superior temporal gyrus are important for their roles in communication with others. The pars opercularis and pars triangularis of the IFG are well known colloquially as Broca's area, with the posterior superior temporal gyrus and part of the angular gyrus forming Wernicke's area. These areas are critical for verbal communication with others, in terms of understanding incoming speech and also producing one's own coherent and appropriate verbal output. In addition, the pars opercularis and pars triangularis of the IFG, together with the IPL, are part of the human mirror/action perception network (Bonini, Rotunno, Arcuri, & Gallese, 2022)—important for representing the actions of the self and others, and for imitation learning.

CONCLUDING REMARKS

Perhaps we should be flipping our approach and using dynamic stimuli as the default for future studies? Isolated, static visual stimuli were traditionally used because of technical ease and because of limitations in technology that no longer apply. We know that static face and body stimuli elicit relatively meager brain activation, in the third/lateral pathway in particular. Is our situation similar to that of the neurophysiologists who performed studies in anesthetized primates in the 1960s–1970s? Testing in awake, behaving animals years later revealed many additional response properties of brain regions, as neural responses were no longer obliterated by anesthesia. Furthermore, it took a long time to acknowledge how complex (multisensory) influences could affect activity in “primary” sensory regions (see Ghazanfar & Schroeder, 2006).

Dynamic stimuli depicting naturalistic social interactions are now demonstrating unique functionality within brain regions responsive to dynamic faces and bodies (e.g., McMahon et al., 2023). Future studies like these will be needed to distinguish between the subtle flavors of the ingredients of our alphabet soup. However, this will likely require dogged experimentation across multiple experimental sessions in the same participants, as the set of localizer tasks alone will be daunting. Furthermore, multimodal studies using fMRI-defined targets for MRI-guided TMS or focused ultrasound might also clarify unique areal specializations. These TMS/focused ultrasound stimulation studies could also target neural sources identified in combined MEG/EEG investigations. Studies of long-range and short-range white matter connections will also be important for clearly identifying the structural “bones” on which our functional “muscles” sit, and, if possible, these could be combined with analyses of functional data (e.g., Wang et al., 2020), either fMRI or source-space MEG/EEG data.

As already discussed, the artificial separation of faces from bodies in experimental studies has been problematic (see Taubert et al., 2022). Given that the brain's biological motion processing systems recognize the entire living organism (Giese & Poggio, 2003), this division lacks ecological validity. Although prosopagnosia is primarily thought of as an impairment of face recognition, it can take forms where the core complaint is an inability to recognize people or to retrieve their identities (i.e., names) from memory (Meadows, 1974). Traditional neuropsychological tests were based on recognizing faces (e.g., the Benton Facial Recognition Test [Benton & Van Allen, 1968]) but omitted routine testing for potential impairments with bodies. Recent studies of patients with EBA lesions, at least, are starting to explore these questions (see discussion in Taubert et al., 2022).

I expect pushback on my attempts to reassess the composition of structures in the visual pathways and to add functionality to a third pathway that would deal with [inter]action. My choices for region membership in a particular visual pathway might be controversial. That said, I do this to start a conversation about whether the pathways we have are appropriate and complete and, if not, what needs to be changed and what is missing. Ultimately, we need to think about interconnections between the (three) parallel visual pathways. This discussion might force us to clarify exactly what kind of information is exchanged between regions and between pathways. At that point, we might be in a better position to generate a fully integrated model of general visual function, and perhaps also of social brain function. To fully achieve this, we will need to bring in research on human white matter pathways and not focus only on functional neuroanatomy (see Wang et al., 2020).

Overall, it is my belief that we are living in a truly exciting time for doing experiments in systems and social neuroscience, and I am very optimistic about the future. Indeed, if Leslie were still with us today, I am sure that she would smile her enigmatic smile and emphatically tell us that there is still an awful lot of work to do.

Acknowledgments

I sincerely thank the two anonymous reviewers who provided discerning and very thoughtful and probing questions, as well as general excellent constructive feedback on an earlier version of this article.

Corresponding author: Aina Puce, Department of Psychological and Brain Sciences, Indiana University, 1101 E 10th St, Bloomington, IN 47405, or via e-mail: ainapuce@iu.edu.

Data Availability Statement

This article is a review and presents published findings from the literature. There is one figure with data (previously presented in a conference abstract), whose publication in this article has been approved by the IRB that originally approved the study. The IRB has determined that these data cannot be shared, as they are patient data, which have been de-identified in this article.

Author Contributions

Aina Puce: Conceptualization; Visualization; Writing—Original draft; Writing—Review & editing.

Funding Information

The field potential data from Allison et al. (1999) were acquired under National Institute of Mental Health Research Grant R01 MH-05286 (Localization of function in the human brain. [1996–2001] PI: G. McCarthy, Co-Is: T. Allison, A. Puce, and A. Adrignolo). Dr. Greg McCarthy has received permission from the Yale University Institutional Review Board for the inclusion of these data from a de-identified epilepsy surgery patient in this article. Aina Puce is supported by National Institute of Biomedical Imaging and Bioengineering (U.S.) Grant R01 EB030896. She acknowledges the generous support of Eleanor Cox Riggs and the College of the Arts and Sciences at Indiana University.

Diversity in Citation Practices

Retrospective analysis of the citations in every article published in this journal from 2010 to 2021 reveals a persistent pattern of gender imbalance: Although the proportions of authorship teams (categorized by estimated gender identification of first author/last author) publishing in the Journal of Cognitive Neuroscience (JoCN) during this period were M(an)/M = .407, W(oman)/M = .32, M/W = .115, and W/W = .159, the comparable proportions for the articles that these authorship teams cited were M/M = .549, W/M = .257, M/W = .109, and W/W = .085 (Postle and Fulvio, JoCN, 34:1, pp. 1–3). Consequently, JoCN encourages all authors to consider gender balance explicitly when selecting which articles to cite and gives them the opportunity to report their article's gender citation balance.

REFERENCES

  1. Adolphs, R., Tranel, D., Damasio, H., & Damasio, A. (1994). Impaired recognition of emotion in facial expressions following bilateral damage to the human amygdala. Nature, 372, 669–672. 10.1038/372669a0, [DOI] [PubMed] [Google Scholar]
  2. Akiyama, T., Kato, M., Muramatsu, T., Saito, F., Nakachi, R., & Kashima, H. (2006). A deficit in discriminating gaze direction in a case with right superior temporal gyrus lesion. Neuropsychologia, 44, 161–170. 10.1016/j.neuropsychologia.2005.05.018, [DOI] [PubMed] [Google Scholar]
  3. Akiyama, T., Kato, M., Muramatsu, T., Saito, F., Umeda, S., & Kashima, H. (2006). Gaze but not arrows: A dissociative impairment after right superior temporal gyrus damage. Neuropsychologia, 44, 1804–1810. 10.1016/j.neuropsychologia.2006.03.007, [DOI] [PubMed] [Google Scholar]
  4. Al, E., Iliopoulos, F., Forschack, N., Nierhaus, T., Grund, M., Motyka, P., et al. (2020). Heart-brain interactions shape somatosensory perception and evoked potentials. Proceedings of the National Academy of Sciences, U.S.A., 117, 10575–10584. 10.1073/pnas.1915629117, [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Allison, T., Puce, A., & McCarthy, G. (2000). Social perception from visual cues: Role of the STS region. Trends in Cognitive Sciences, 4, 267–278. 10.1016/S1364-6613(00)01501-1, [DOI] [PubMed] [Google Scholar]
  6. Allison, T., Puce, A., Spencer, D. D., & McCarthy, G. (1999). Electrophysiological studies of human face perception. I: Potentials generated in occipitotemporal cortex by face and non-face stimuli. Cerebral Cortex, 9, 415–430. 10.1093/cercor/9.5.415, [DOI] [PubMed] [Google Scholar]
  7. Ambady, N., & Rosenthal, R. (1992). Thin slices of expressive behavior as predictors of interpersonal consequences: A meta-analysis. Psychological Bulletin, 111, 256–274. 10.1037/0033-2909.111.2.256 [DOI] [Google Scholar]
  8. Amedi, A., Malach, R., Hendler, T., Peled, S., & Zohary, E. (2001). Visuo-haptic object-related activation in the ventral visual pathway. Nature Neuroscience, 4, 324–330. 10.1038/85201, [DOI] [PubMed] [Google Scholar]
  9. Arbuse, D. I. (1947). The Gerstmann syndrome: Case report and review of the literature. Journal of Nervous and Mental Disease, 105, 359–371. 10.1097/00005053-194704000-00002, [DOI] [PubMed] [Google Scholar]
  10. Babo-Rebelo, M., Puce, A., Bullock, D., Hugueville, L., Pestilli, F., Adam, C., et al. (2022). Visual information routes in the posterior dorsal and ventral face network studied with intracranial neurophysiology and white matter tract endpoints. Cerebral Cortex, 32, 342–366. 10.1093/cercor/bhab212, [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Baldassano, C., Beck, D. M., & Fei-Fei, L. (2017). Human–object interactions are more than the sum of their parts. Cerebral Cortex, 27, 2276–2288. 10.1093/cercor/bhw077, [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Bao, P., She, L., McGill, M., & Tsao, D. Y. (2020). A map of object space in primate inferotemporal cortex. Nature, 583, 103–108. 10.1038/s41586-020-2350-5, [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Baron-Cohen, S., Wheelwright, S., Hill, J., Raste, Y., & Plumb, I. (2001). The “Reading the Mind in the Eyes” Test revised version: A study with normal adults, and adults with Asperger syndrome or high-functioning autism. Journal of Child Psychology and Psychiatry, 42, 241–251. 10.1111/1469-7610.00715, [DOI] [PubMed] [Google Scholar]
  14. Bauer, R. M. (1982). Visual hypoemotionality as a symptom of visual-limbic disconnection in man. Archives of Neurology, 39, 702–708. 10.1001/archneur.1982.00510230028009, [DOI] [PubMed] [Google Scholar]
  15. Bauer, R. M. (1984). Autonomic recognition of names and faces in prosopagnosia: A neuropsychological application of the Guilty Knowledge Test. Neuropsychologia, 22, 457–469. 10.1016/0028-3932(84)90040-x, [DOI] [PubMed] [Google Scholar]
  16. Beauchamp, M. S., Lee, K. E., Haxby, J. V., & Martin, A. (2003). FMRI responses to video and point-light displays of moving humans and manipulable objects. Journal of Cognitive Neuroscience, 15, 991–1001. 10.1162/089892903770007380, [DOI] [PubMed] [Google Scholar]
  17. Beauchamp, M. S., & Martin, A. (2007). Grounding object concepts in perception and action: Evidence from fMRI studies of tools. Cortex, 43, 461–468. 10.1016/s0010-9452(08)70470-2, [DOI] [PubMed] [Google Scholar]
  18. Belin, P., Bestelmeyer, P. E. G., Latinus, M., & Watson, R. (2011). Understanding voice perception. British Journal of Psychology, 102, 711–725. 10.1111/j.2044-8295.2011.02041.x, [DOI] [PubMed] [Google Scholar]
  19. Belin, P., Fecteau, S., & Bédard, C. (2004). Thinking the voice: Neural correlates of voice perception. Trends in Cognitive Sciences, 8, 129–135. 10.1016/j.tics.2004.01.008, [DOI] [PubMed] [Google Scholar]
  20. Bellot, E., Abassi, E., & Papeo, L. (2021). Moving toward versus away from another: How body motion direction changes the representation of bodies and actions in the visual cortex. Cerebral Cortex, 31, 2670–2685. 10.1093/cercor/bhaa382, [DOI] [PubMed] [Google Scholar]
  21. Benton, A. L., & Van Allen, M. W. (1968). Impairment in facial recognition in patients with cerebral disease. Transactions of the American Neurological Association, 93, 38–42. [PubMed] [Google Scholar]
  22. Bernstein, M., & Yovel, G. (2015). Two neural pathways of face processing: A critical evaluation of current models. Neuroscience and Biobehavioral Reviews, 55, 536–546. 10.1016/j.neubiorev.2015.06.010, [DOI] [PubMed] [Google Scholar]
  23. Biehl, M., Matsumoto, D., Ekman, P., Hearn, V., Heider, K., Kudoh, T., et al. (1997). Matsumoto and Ekman's Japanese and Caucasian Facial Expressions of Emotion (JACFEE): Reliability data and cross-national differences. Journal of Nonverbal Behavior, 21, 3–21. 10.1023/A:1024902500935 [DOI] [Google Scholar]
  24. Blais, C., Roy, C., Fiset, D., Arguin, M., & Gosselin, F. (2012). The eyes are not the window to basic emotions. Neuropsychologia, 50, 2830–2838. 10.1016/j.neuropsychologia.2012.08.010, [DOI] [PubMed] [Google Scholar]
  25. Blanke, O. (2004). Out of body experiences and their neural basis. BMJ, 329, 1414–1415. 10.1136/bmj.329.7480.1414, [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. Blanke, O., & Arzy, S. (2005). The out-of-body experience: Disturbed self-processing at the temporo-parietal junction. Neuroscientist, 11, 16–24. 10.1177/1073858404270885, [DOI] [PubMed] [Google Scholar]
  27. Blanke, O., Mohr, C., Michel, C. M., Pascual-Leone, A., Brugger, P., Seeck, M., et al. (2005). Linking out-of-body experience and self processing to mental own-body imagery at the temporoparietal junction. Journal of Neuroscience, 25, 550–557. 10.1523/JNEUROSCI.2612-04.2005, [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Blanke, O., Ortigue, S., Landis, T., & Seeck, M. (2002). Stimulating illusory own-body perceptions. Nature, 419, 269–270. 10.1038/419269a, [DOI] [PubMed] [Google Scholar]
  29. Bonini, L., Rotunno, C., Arcuri, E., & Gallese, V. (2022). Mirror neurons 30 years later: Implications and applications. Trends in Cognitive Sciences, 26, 767–781. 10.1016/j.tics.2022.06.003, [DOI] [PubMed] [Google Scholar]
  30. Boucher, O., Rouleau, I., Lassonde, M., Lepore, F., Bouthillier, A., & Nguyen, D. K. (2015). Social information processing following resection of the insular cortex. Neuropsychologia, 71, 1–10. 10.1016/j.neuropsychologia.2015.03.008, [DOI] [PubMed] [Google Scholar]
  31. Brooks, J. A., & Freeman, J. B. (2019). Neuroimaging of person perception: A social-visual interface. Neuroscience Letters, 693, 40–43. 10.1016/j.neulet.2017.12.046, [DOI] [PubMed] [Google Scholar]
  32. Bruce, V., & Young, A. (1986). Understanding face recognition. British Journal of Psychology, 77, 305–327. 10.1111/j.2044-8295.1986.tb02199.x, [DOI] [PubMed] [Google Scholar]
  33. Bullock, D., Takemura, H., Caiafa, C. F., Kitchell, L., McPherson, B., Caron, B., et al. (2019). Associative white matter connecting the dorsal and ventral posterior human cortex. Brain Structure & Function, 224, 2631–2660. 10.1007/s00429-019-01907-8, [DOI] [PubMed] [Google Scholar]
  34. Bush, J. C., & Kennedy, D. P. (2015). Aberrant social attention and its underlying neural correlates in adults with autism spectrum disorder. In Puce A. & Bertenthal B. I. (Eds.), The many faces of social attention: Behavioral and neural measures (pp. 179–220). Cham, Switzerland: Springer. 10.1007/978-3-319-21368-2_7 [DOI] [Google Scholar]
  35. Campbell, R. (2008). The processing of audio-visual speech: Empirical and neural bases. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 363, 1001–1010. 10.1098/rstb.2007.2155, [DOI] [PMC free article] [PubMed] [Google Scholar]
  36. Campbell, R., MacSweeney, M., Surguladze, S., Calvert, G., McGuire, P., Suckling, J., et al. (2001). Cortical substrates for the perception of face actions: An fMRI study of the specificity of activation for seen speech and for meaningless lower-face acts (gurning). Cognitive Brain Research, 12, 233–243. 10.1016/S0926-6410(01)00054-4, [DOI] [PubMed] [Google Scholar]
  37. Campbell, R., Zihl, J., Massaro, D., Munhall, K., & Cohen, M. M. (1997). Speechreading in the akinetopsic patient, L.M. Brain, 120, 1793–1803. 10.1093/brain/120.10.1793, [DOI] [PubMed] [Google Scholar]
  38. Carrick, O. K., Thompson, J. C., Epling, J. A., & Puce, A. (2007). It's all in the eyes: Neural responses to socially significant gaze shifts. NeuroReport, 18, 763–766. 10.1097/WNR.0b013e3280ebb44b, [DOI] [PMC free article] [PubMed] [Google Scholar]
  39. Caruana, F., Cantalupo, G., Lo Russo, G., Mai, R., Sartori, I., & Avanzini, P. (2014). Human cortical activity evoked by gaze shift observation: An intracranial EEG study. Human Brain Mapping, 35, 1515–1528. 10.1002/hbm.22270, [DOI] [PMC free article] [PubMed] [Google Scholar]
  40. Cowey, A., & Vaina, L. M. (2000). Blindness to form from motion despite intact static form perception and motion detection. Neuropsychologia, 38, 566–578. 10.1016/S0028-3932(99)00117-7, [DOI] [PubMed] [Google Scholar]
  41. Critchley, H. D., & Garfinkel, S. N. (2017). Interoception and emotion. Current Opinion in Psychology, 17, 7–14. 10.1016/j.copsyc.2017.04.020, [DOI] [PubMed] [Google Scholar]
  42. Dalmaso, M., Castelli, L., & Galfano, G. (2020). Social modulators of gaze-mediated orienting of attention: A review. Psychonomic Bulletin & Review, 27, 833–855. 10.3758/s13423-020-01730-x, [DOI] [PubMed] [Google Scholar]
  43. daSilva, E. B., Crager, K., Geisler, D., Newbern, P., Orem, B., & Puce, A. (2016). Something to sink your teeth into: The presence of teeth augments ERPs to mouth expressions. Neuroimage, 127, 227–241. 10.1016/j.neuroimage.2015.12.020, [DOI] [PubMed] [Google Scholar]
  44. daSilva, E. B., Crager, K., & Puce, A. (2016). On dissociating the neural time course of the processing of positive emotions. Neuropsychologia, 83, 123–137. 10.1016/j.neuropsychologia.2015.12.001, [DOI] [PubMed] [Google Scholar]
  45. Deen, B., Koldewyn, K., Kanwisher, N., & Saxe, R. (2015). Functional organization of social perception and cognition in the superior temporal sulcus. Cerebral Cortex, 25, 4596–4609. 10.1093/cercor/bhv111, [DOI] [PMC free article] [PubMed] [Google Scholar]
  46. de Gelder, B. (2006). Towards the neurobiology of emotional body language. Nature Reviews Neuroscience, 7, 242–249. 10.1038/nrn1872, [DOI] [PubMed] [Google Scholar]
  47. de Gelder, B., & Poyo Solanas, M. (2021). A computational neuroethology perspective on body and expression perception. Trends in Cognitive Sciences, 25, 744–756. 10.1016/j.tics.2021.05.010, [DOI] [PubMed] [Google Scholar]
  48. de Haas, B., Schwarzkopf, D. S., Alvarez, I., Lawson, R. P., Henriksson, L., Kriegeskorte, N., et al. (2016). Perception and processing of faces in the human brain is tuned to typical feature locations. Journal of Neuroscience, 36, 9289–9302. 10.1523/JNEUROSCI.4131-14.2016, [DOI] [PMC free article] [PubMed] [Google Scholar]
  49. Downing, P. E., Jiang, Y., Shuman, M., & Kanwisher, N. (2001). A cortical area selective for visual processing of the human body. Science, 293, 2470–2473. 10.1126/science.1063414, [DOI] [PubMed] [Google Scholar]
  50. Downing, P. E., & Peelen, M. V. (2016). Body selectivity in occipitotemporal cortex: Causal evidence. Neuropsychologia, 83, 138–148. 10.1016/j.neuropsychologia.2015.05.033, [DOI] [PubMed] [Google Scholar]
  51. Ethofer, T., Gschwind, M., & Vuilleumier, P. (2011). Processing social aspects of human gaze: A combined fMRI-DTI study. Neuroimage, 55, 411–419. 10.1016/j.neuroimage.2010.11.033, [DOI] [PubMed] [Google Scholar]
  52. Felleman, D. J., & Van Essen, D. C. (1991). Distributed hierarchical processing in the primate cerebral cortex. Cerebral Cortex, 1, 1–47. 10.1093/cercor/1.1.1-a, [DOI] [PubMed] [Google Scholar]
  53. Frank, S. M., & Greenlee, M. W. (2018). The parieto-insular vestibular cortex in humans: More than a single area? Journal of Neurophysiology, 120, 1438–1450. 10.1152/jn.00907.2017, [DOI] [PubMed] [Google Scholar]
  54. Freeman, J. B., & Ambady, N. (2011). A dynamic interactive theory of person construal. Psychological Review, 118, 247–279. 10.1037/a0022327, [DOI] [PubMed] [Google Scholar]
  55. Freeman, J. B., & Johnson, K. L. (2016). More than meets the eye: Split-second social perception. Trends in Cognitive Sciences, 20, 362–374. 10.1016/j.tics.2016.03.003, [DOI] [PMC free article] [PubMed] [Google Scholar]
  56. Freeman, J. B., Rule, N. O., & Ambady, N. (2009). The cultural neuroscience of person perception. Progress in Brain Research, 178, 191–201. 10.1016/S0079-6123(09)17813-5, [DOI] [PubMed] [Google Scholar]
  57. Freud, E., Behrmann, M., & Snow, J. C. (2020). What does dorsal cortex contribute to perception? Open Mind, 4, 40–56. 10.1162/opmi_a_00033, [DOI] [PMC free article] [PubMed] [Google Scholar]
  58. Freud, E., Plaut, D. C., & Behrmann, M. (2016). ‘What’ is happening in the dorsal visual pathway. Trends in Cognitive Sciences, 20, 773–784. 10.1016/j.tics.2016.08.003, [DOI] [PubMed] [Google Scholar]
  59. Gandolfo, M., Abassi, E., Balgova, E., Downing, P. E., Papeo, L., & Koldewyn, K. (2024). Converging evidence that left extrastriate body area supports visual sensitivity to social interactions. Current Biology, 34, 343–351. 10.1016/j.cub.2023.12.009, [DOI] [PubMed] [Google Scholar]
  60. Gerstmann, J. (1924). Finger agnosie: Eine umschreibene Störung der Orientierung am eigenen Körper. Wiener Klinische Wochenschrift, 37, 1010–1012. [Google Scholar]
  61. Ghazanfar, A. A., & Schroeder, C. E. (2006). Is neocortex essentially multisensory? Trends in Cognitive Sciences, 10, 278–285. 10.1016/j.tics.2006.04.008, [DOI] [PubMed] [Google Scholar]
  62. Ghazanfar, A. A., & Takahashi, D. Y. (2014). Facial expressions and the evolution of the speech rhythm. Journal of Cognitive Neuroscience, 26, 1196–1207. 10.1162/jocn_a_00575, [DOI] [PMC free article] [PubMed] [Google Scholar]
  63. Giese, M. A., & Poggio, T. (2003). Neural mechanisms for the recognition of biological movements. Nature Reviews Neuroscience, 4, 179–192. 10.1038/nrn1057, [DOI] [PubMed] [Google Scholar]
  64. Gobbini, M. I., & Haxby, J. V. (2007). Neural systems for recognition of familiar faces. Neuropsychologia, 45, 32–41. 10.1016/j.neuropsychologia.2006.04.015, [DOI] [PubMed] [Google Scholar]
  65. Gogolla, N. (2017). The insular cortex. Current Biology, 27, R580–R586. 10.1016/j.cub.2017.05.010, [DOI] [PubMed] [Google Scholar]
  66. Goodale, M. A., & Milner, A. D. (1992). Separate visual pathways for perception and action. Trends in Neurosciences, 15, 20–25. 10.1016/0166-2236(92)90344-8, [DOI] [PubMed] [Google Scholar]
  67. Gosselin, F., Spezio, M. L., Tranel, D., & Adolphs, R. (2011). Asymmetrical use of eye information from faces following unilateral amygdala damage. Social Cognitive and Affective Neuroscience, 6, 330–337. 10.1093/scan/nsq040, [DOI] [PMC free article] [PubMed] [Google Scholar]
  68. Grill-Spector, K., Weiner, K. S., Kay, K., & Gomez, J. (2017). The functional neuroanatomy of human face perception. Annual Review of Vision Science, 3, 167–196. 10.1146/annurev-vision-102016-061214, [DOI] [PMC free article] [PubMed] [Google Scholar]
  69. Gross, C. G. (2002). Genealogy of the “grandmother cell”. Neuroscientist, 8, 512–518. 10.1177/107385802237175, [DOI] [PubMed] [Google Scholar]
  70. Hardee, J. E., Thompson, J. C., & Puce, A. (2008). The left amygdala knows fear: Laterality in the amygdala response to fearful eyes. Social Cognitive and Affective Neuroscience, 3, 47–54. 10.1093/scan/nsn001, [DOI] [PMC free article] [PubMed] [Google Scholar]
  71. Hari, R., & Kujala, M. V. (2009). Brain basis of human social interaction: From concepts to brain imaging. Physiological Reviews, 89, 453–479. 10.1152/physrev.00041.2007, [DOI] [PubMed] [Google Scholar]
  72. Hari, R., & Puce, A. (2023). MEG-EEG primer (2nd ed.). New York, NY: Oxford University Press. 10.1093/med/9780197542187.001.0001 [DOI] [Google Scholar]
  73. Haxby, J. V., Connolly, A. C., & Guntupalli, J. S. (2014). Decoding neural representational spaces using multivariate pattern analysis. Annual Review of Neuroscience, 37, 435–456. 10.1146/annurev-neuro-062012-170325, [DOI] [PubMed] [Google Scholar]
  74. Haxby, J. V., Gobbini, M. I., Furey, M. L., Ishai, A., Schouten, J. L., & Pietrini, P. (2001). Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 293, 2425–2430. 10.1126/science.1063736, [DOI] [PubMed] [Google Scholar]
  75. Haxby, J. V., Guntupalli, J. S., Nastase, S. A., & Feilong, M. (2020). Hyperalignment: Modeling shared information encoded in idiosyncratic cortical topographies. eLife, 9, e56601. 10.7554/eLife.56601, [DOI] [PMC free article] [PubMed] [Google Scholar]
  76. Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. (2000). The distributed human neural system for face perception. Trends in Cognitive Sciences, 4, 223–233. 10.1016/S1364-6613(00)01482-0, [DOI] [PubMed] [Google Scholar]
  77. Heilman, K. M. (2021). Upper limb apraxia. Continuum: Lifelong Learning in Neurology, 27, 1602–1623. 10.1212/CON.0000000000001014, [DOI] [PubMed] [Google Scholar]
  78. Herald, S. B., Yang, H., & Duchaine, B. (2023). Contralateral biases in category-selective areas are stronger in the left hemisphere than the right hemisphere. Journal of Cognitive Neuroscience, 35, 1154–1168. 10.1162/jocn_a_01995, [DOI] [PubMed] [Google Scholar]
  79. Hickok, G. (2009). The functional neuroanatomy of language. Physics of Life Reviews, 6, 121–143. 10.1016/j.plrev.2009.06.001, [DOI] [PMC free article] [PubMed] [Google Scholar]
  80. Huijgen, J., Dinkelacker, V., Lachat, F., Yahia-Cherif, L., El Karoui, I., Lemaréchal, J.-D., et al. (2015). Amygdala processing of social cues from faces: An intracrebral EEG study. Social Cognitive and Affective Neuroscience, 10, 1568–1576. 10.1093/scan/nsv048, [DOI] [PMC free article] [PubMed] [Google Scholar]
81. Ishai, A. (2008). Let's face it: It's a cortical network. Neuroimage, 40, 415–419. 10.1016/j.neuroimage.2007.10.040
82. Jasper, H. (1958). The ten-twenty electrode system of the International Federation. Electroencephalography and Clinical Neurophysiology, 10, 371–375.
83. John, E. R. (1975). Konorski's concept of gnostic areas and units: Some electrophysiological considerations. Acta Neurobiologiae Experimentalis, 35, 417–429.
84. Kamachi, M., Bruce, V., Mukaida, S., Gyoba, J., Yoshikawa, S., & Akamatsu, S. (2013). Dynamic properties influence the perception of facial expressions. Perception, 42, 1266–1278. 10.1068/p3131n
85. Kern, M., Aertsen, A., Schulze-Bonhage, A., & Ball, T. (2013). Heart cycle-related effects on event-related potentials, spectral power changes, and connectivity patterns in the human ECoG. Neuroimage, 81, 178–190. 10.1016/j.neuroimage.2013.05.042
86. Kobayashi, H., & Kohshima, S. (1997). Unique morphology of the human eye. Nature, 387, 767–768. 10.1038/42842
87. Konorski, J. (1967). Integrative activity of the brain: An interdisciplinary approach. University of Chicago Press.
88. Korolkova, O. A. (2018). The role of temporal inversion in the perception of realistic and morphed dynamic transitions between facial expressions. Vision Research, 143, 42–51. 10.1016/j.visres.2017.10.007
89. Kragel, P. A., Čeko, M., Theriault, J., Chen, D., Satpute, A. B., Wald, L. W., et al. (2021). A human colliculus-pulvinar-amygdala pathway encodes negative emotion. Neuron, 109, 2404–2412. 10.1016/j.neuron.2021.06.001
90. Landsiedel, J., & Koldewyn, K. (2023). Auditory dyadic interactions through the “eye” of the social brain: How visual is the posterior STS interaction region? Imaging Neuroscience, 1, 1–20. 10.1162/imag_a_00003
91. Latinus, M., Love, S. A., Rossi, A., Parada, F. J., Huang, L., Conty, L., et al. (2015). Social decisions affect neural activity to perceived dynamic gaze. Social Cognitive and Affective Neuroscience, 10, 1557–1567. 10.1093/scan/nsv049
92. Li, W., & Keil, A. (2023). Sensing fear: Fast and precise threat evaluation in human sensory cortex. Trends in Cognitive Sciences, 27, 341–352. 10.1016/j.tics.2023.01.001
93. Lingnau, A., & Downing, P. E. (2015). The lateral occipitotemporal cortex in action. Trends in Cognitive Sciences, 19, 268–277. 10.1016/j.tics.2015.03.006
94. Manea, M. M., Comsa, M., Minca, A., Dragos, D., & Popa, C. (2015). Brain–heart axis—Review article. Journal of Medicine and Life, 8, 266–271.
95. Mayer, E. A., Nance, K., & Chen, S. (2022). The gut–brain axis. Annual Review of Medicine, 73, 439–453. 10.1146/annurev-med-042320-014032
96. McCarthy, G., Puce, A., Belger, A., & Allison, T. (1999). Electrophysiological studies of human face perception. II: Response properties of face-specific potentials generated in occipitotemporal cortex. Cerebral Cortex, 9, 431–444. 10.1093/cercor/9.5.431
97. McKone, E., & Yovel, G. (2009). Why does picture-plane inversion sometimes dissociate perception of features and spacing in faces, and sometimes not? Toward a new theory of holistic processing. Psychonomic Bulletin & Review, 16, 778–797. 10.3758/PBR.16.5.778
98. McMahon, E., Bonner, M. F., & Isik, L. (2023). Hierarchical organization of social action features along the lateral visual pathway. Current Biology, 33, 5035–5047. 10.1016/j.cub.2023.10.015
99. Meadows, J. C. (1974). The anatomical basis of prosopagnosia. Journal of Neurology, Neurosurgery, and Psychiatry, 37, 489–501. 10.1136/jnnp.37.5.489
100. Miki, K., & Kakigi, R. (2014). Magnetoencephalographic study on facial movements. Frontiers in Human Neuroscience, 8, 550. 10.3389/fnhum.2014.00550
101. Moeller, S., Crapse, T., Chang, L., & Tsao, D. Y. (2017). The effect of face patch microstimulation on perception of faces and objects. Nature Neuroscience, 20, 743–752. 10.1038/nn.4527
102. Moeller, S., Freiwald, W. A., & Tsao, D. Y. (2008). Patches with links: A unified system for processing faces in the macaque temporal lobe. Science, 320, 1355–1359. 10.1126/science.1157436
103. Monfort, M., Andonian, A., Zhou, B., Ramakrishnan, K., Bargal, S. A., Yan, T., et al. (2020). Moments in time dataset: One million videos for event understanding. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42, 502–508. 10.1109/TPAMI.2019.2901464
104. Morgenroth, E., Vilaclara, L., Muszynski, M., Gaviria, J., Vuilleumier, P., & Van De Ville, D. (2023). Probing neurodynamics of experienced emotions—A hitchhiker's guide to film fMRI. Social Cognitive and Affective Neuroscience, 18, nsad063. 10.1093/scan/nsad063
105. Mori, S., Oishi, K., & Faria, A. V. (2009). White matter atlases based on diffusion tensor imaging. Current Opinion in Neurology, 22, 362–369. 10.1097/WCO.0b013e32832d954b
106. Müri, R. M. (2016). Cortical control of facial expression. Journal of Comparative Neurology, 524, 1578–1585. 10.1002/cne.23908
107. Nielsen, M. (1938). Gerstmann syndrome: Finger agnosia, agraphia, confusion of right and left and acalculia. Comparison of this syndrome with disturbance of body scheme resulting from lesions of the right side of the brain. Archives of Neurology and Psychiatry, 39, 536–560. 10.1001/archneurpsyc.1938.02270030114009
108. O'Toole, A. J., Roark, D. A., & Abdi, H. (2002). Recognizing moving faces: A psychological and neural synthesis. Trends in Cognitive Sciences, 6, 261–266. 10.1016/s1364-6613(02)01908-3
109. Pallett, P. M., & Meng, M. (2015). Inversion effects reveal dissociations in facial expression of emotion, gender, and object processing. Frontiers in Psychology, 6, 1029. 10.3389/fpsyg.2015.01029
110. Papeo, L., & Abassi, E. (2019). Seeing social events: The visual specialization for dyadic human-human interactions. Journal of Experimental Psychology: Human Perception and Performance, 45, 877–888. 10.1037/xhp0000646
111. Peelen, M. V., & Downing, P. E. (2005). Selectivity for the human body in the fusiform gyrus. Journal of Neurophysiology, 93, 603–608. 10.1152/jn.00513.2004
112. Peelen, M. V., & Downing, P. E. (2007). The neural basis of visual body perception. Nature Reviews Neuroscience, 8, 636–648. 10.1038/nrn2195
113. Perdikis, D., Volhard, J., Müller, V., Kaulard, K., Brick, T. R., Wallraven, C., et al. (2017). Brain synchronization during perception of facial emotional expressions with natural and unnatural dynamics. PLoS One, 12, e0181225. 10.1371/journal.pone.0181225
114. Pitcher, D., & Ungerleider, L. G. (2021). Evidence for a third visual pathway specialized for social perception. Trends in Cognitive Sciences, 25, 100–110. 10.1016/j.tics.2020.11.006
115. Pourtois, G., Peelen, M. V., Spinelli, L., Seeck, M., & Vuilleumier, P. (2007). Direct intracranial recording of body-selective responses in human extrastriate visual cortex. Neuropsychologia, 45, 2621–2625. 10.1016/j.neuropsychologia.2007.04.005
116. Puce, A., & Allison, T. (1999). Differential processing of mobile and static faces by temporal cortex. Neuroimage, 9, S801.
117. Puce, A., Allison, T., Bentin, S., Gore, J. C., & McCarthy, G. (1998). Temporal cortex activation in humans viewing eye and mouth movements. Journal of Neuroscience, 18, 2188–2199. 10.1523/JNEUROSCI.18-06-02188.1998
118. Puce, A., Allison, T., & McCarthy, G. (1999). Electrophysiological studies of human face perception. III: Effects of top–down processing on face-specific potentials. Cerebral Cortex, 9, 445–458. 10.1093/cercor/9.5.445
119. Puce, A., Latinus, M., Rossi, A., daSilva, E., Parada, F., Love, S., et al. (2015). Neural bases for social attention in healthy humans. In Puce A. & Bertenthal B. I. (Eds.), The many faces of social attention: Behavioral and neural measures (pp. 93–127). Cham, Switzerland: Springer. 10.1007/978-3-319-21368-2_4
120. Puce, A., & Perrett, D. (2003). Electrophysiology and brain imaging of biological motion. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 358, 435–445. 10.1098/rstb.2002.1221
121. Puce, A., Smith, A., & Allison, T. (2000). ERPs evoked by viewing facial movements. Cognitive Neuropsychology, 17, 221–239. 10.1080/026432900380580
122. Quadflieg, S., & Koldewyn, K. (2017). The neuroscience of people watching: How the human brain makes sense of other people's encounters. Annals of the New York Academy of Sciences, 1396, 166–182. 10.1111/nyas.13331
123. Rhodes, G., Brake, S., & Atkinson, A. P. (1993). What's lost in inverted faces? Cognition, 47, 25–57. 10.1016/0010-0277(93)90061-y
124. Rossi, A., Parada, F. J., Kolchinsky, A., & Puce, A. (2014). Neural correlates of apparent motion perception of impoverished facial stimuli: A comparison of ERP and ERSP activity. Neuroimage, 98, 442–459. 10.1016/j.neuroimage.2014.04.029
125. Rossi, A., Parada, F. J., Latinus, M., & Puce, A. (2015). Photographic but not line-drawn faces show early perceptual neural sensitivity to eye gaze direction. Frontiers in Human Neuroscience, 9, 185. 10.3389/fnhum.2015.00185
126. Rossion, B. (2009). Distinguishing the cause and consequence of face inversion: The perceptual field hypothesis. Acta Psychologica, 132, 300–312. 10.1016/j.actpsy.2009.08.002
127. Roux, F.-E., Boetto, S., Sacko, O., Chollet, F., & Trémoulet, M. (2003). Writing, calculating, and finger recognition in the region of the angular gyrus: A cortical stimulation study of Gerstmann syndrome. Journal of Neurosurgery, 99, 716–727. 10.3171/jns.2003.99.4.0716
128. Saarimäki, H. (2021). Naturalistic stimuli in affective neuroimaging: A review. Frontiers in Human Neuroscience, 15, 675068. 10.3389/fnhum.2021.675068
129. Salibayeva, K., Sporns, O., Gitton, C., George, N., & Puce, A. (2023). Cardiac cycle-related changes in MEG-EEG resting state functional connectivity. Submitted.
130. Samuelsson, J. G., Sundaram, P., Khan, S., Sereno, M. I., & Hämäläinen, M. S. (2020). Detectability of cerebellar activity with magnetoencephalography and electroencephalography. Human Brain Mapping, 41, 2357–2372. 10.1002/hbm.24951
131. Schilbach, L., Timmermans, B., Reddy, V., Costall, A., Bente, G., Schlicht, T., et al. (2013). Toward a second-person neuroscience. Behavioral and Brain Sciences, 36, 393–414. 10.1017/S0140525X12000660
132. Schmahmann, J. D. (2019). The cerebellum and cognition. Neuroscience Letters, 688, 62–75. 10.1016/j.neulet.2018.07.005
133. Silson, E. H., Groen, I. I. A., Kravitz, D. J., & Baker, C. I. (2016). Evaluating the correspondence between face-, scene-, and object-selectivity and retinotopic organization within lateral occipitotemporal cortex. Journal of Vision, 16, 14. 10.1167/16.6.14
134. Simonyan, K., Saad, Z. S., Loucks, T. M. J., Poletto, C. J., & Ludlow, C. L. (2007). Functional neuroanatomy of human voluntary cough and sniff production. Neuroimage, 37, 401–409. 10.1016/j.neuroimage.2007.05.021
135. Smith, F. W., Muckli, L., Brennan, D., Pernet, C., Smith, M. L., Belin, P., et al. (2008). Classification images reveal the information sensitivity of brain voxels in fMRI. Neuroimage, 40, 1643–1654. 10.1016/j.neuroimage.2008.01.029
136. Smith, R., & Lane, R. D. (2016). Unconscious emotion: A cognitive neuroscientific perspective. Neuroscience and Biobehavioral Reviews, 69, 216–238. 10.1016/j.neubiorev.2016.08.013
137. Stanley, D. A., & Adolphs, R. (2013). Toward a neural basis for social behavior. Neuron, 80, 816–826. 10.1016/j.neuron.2013.10.038
138. Stolier, R. M., Hehman, E., & Freeman, J. B. (2018). A dynamic structure of social trait space. Trends in Cognitive Sciences, 22, 197–200. 10.1016/j.tics.2017.12.003
139. Straulino, E., Scarpazza, C., & Sartori, L. (2023). What is missing in the study of emotion expression? Frontiers in Psychology, 14, 1158136. 10.3389/fpsyg.2023.1158136
140. Taubert, J., Ritchie, J. B., Ungerleider, L. G., & Baker, C. I. (2022). One object, two networks? Assessing the relationship between the face and body-selective regions in the primate visual system. Brain Structure & Function, 227, 1423–1438. 10.1007/s00429-021-02420-7
141. Thompson, J. C., Clarke, M., Stewart, T., & Puce, A. (2005). Configural processing of biological motion in human superior temporal sulcus. Journal of Neuroscience, 25, 9059–9066. 10.1523/JNEUROSCI.2129-05.2005
142. Thompson, P. (1980). Margaret Thatcher: A new illusion. Perception, 9, 483–484. 10.1068/p090483
143. Todorov, A., Said, C. P., Engell, A. D., & Oosterhof, N. N. (2008). Understanding evaluation of faces on social dimensions. Trends in Cognitive Sciences, 12, 455–460. 10.1016/j.tics.2008.10.001
144. Türe, U., Yaşargil, M. G., Al-Mefty, O., & Yaşargil, D. C. (2000). Arteries of the insula. Journal of Neurosurgery, 92, 676–687. 10.3171/jns.2000.92.4.0676
145. Uddin, L. Q., Nomi, J. S., Hébert-Seropian, B., Ghaziri, J., & Boucher, O. (2017). Structure and function of the human insula. Journal of Clinical Neurophysiology, 34, 300–306. 10.1097/WNP.0000000000000377
146. Ulloa, J. L., Puce, A., Hugueville, L., & George, N. (2014). Sustained neural activity to gaze and emotion perception in dynamic social scenes. Social Cognitive and Affective Neuroscience, 9, 350–357. 10.1093/scan/nss141
147. Ungerleider, L. G., & Mishkin, M. (1982). Two cortical visual systems. In Ingle D. J., Goodale M. A., & Mansfield R. J. W. (Eds.), Analysis of visual behavior (pp. 549–586). Cambridge, MA: MIT Press.
148. Vaina, L. M., & Soloviev, S. (2004). First-order and second-order motion: Neurological evidence for neuroanatomically distinct systems. Progress in Brain Research, 144, 197–212. 10.1016/S0079-6123(03)14414-7
149. Vangeneugden, J., Peelen, M. V., Tadin, D., & Battelli, L. (2014). Distinct neural mechanisms for body form and body motion discriminations. Journal of Neuroscience, 34, 574–585. 10.1523/JNEUROSCI.4032-13.2014
150. Vikhanova, A., Mareschal, I., & Tibber, M. (2022). Emotion recognition bias depends on stimulus morphing strategy. Attention, Perception, & Psychophysics, 84, 2051–2059. 10.3758/s13414-022-02532-0
151. Walbrin, J., & Koldewyn, K. (2019). Dyadic interaction processing in the posterior temporal cortex. Neuroimage, 198, 296–302. 10.1016/j.neuroimage.2019.05.027
152. Walbrin, J., Mihai, I., Landsiedel, J., & Koldewyn, K. (2020). Developmental changes in visual responses to social interactions. Developmental Cognitive Neuroscience, 42, 100774. 10.1016/j.dcn.2020.100774
153. Waller, B. M., Julle-Daniere, E., & Micheletta, J. (2020). Measuring the evolution of facial ‘expression’ using multi-species FACS. Neuroscience and Biobehavioral Reviews, 113, 1–11. 10.1016/j.neubiorev.2020.02.031
154. Wang, Y., Metoki, A., Alm, K. H., & Olson, I. R. (2018). White matter pathways and social cognition. Neuroscience and Biobehavioral Reviews, 90, 350–370. 10.1016/j.neubiorev.2018.04.015
155. Wang, Y., Metoki, A., Smith, D. V., Medaglia, J. D., Zang, Y., Benear, S., et al. (2020). Multimodal mapping of the face connectome. Nature Human Behaviour, 4, 397–411. 10.1038/s41562-019-0811-3
156. Watanabe, S., Kakigi, R., & Puce, A. (2001). Occipitotemporal activity elicited by viewing eye movements: A magnetoencephalographic study. Neuroimage, 13, 351–363. 10.1006/nimg.2000.0682
157. Weiner, K. S., & Grill-Spector, K. (2013). Neural representations of faces and limbs neighbor in human high-level visual cortex: Evidence for a new organization principle. Psychological Research, 77, 74–97. 10.1007/s00426-011-0392-x
158. Wheaton, K. J., Thompson, J. C., Syngeniotis, A., Abbott, D. F., & Puce, A. (2004). Viewing the motion of human body parts activates different regions of premotor, temporal, and parietal cortex. Neuroimage, 22, 277–288. 10.1016/j.neuroimage.2003.12.043
159. Wolpert, N., Rebollo, I., & Tallon-Baudry, C. (2020). Electrogastrography for psychophysiological research: Practical considerations, analysis pipeline, and normative data in a large sample. Psychophysiology, 57, e13599. 10.1111/psyp.13599
160. Wu, W., Wang, X., Wei, T., He, C., & Bi, Y. (2020). Object parsing in the left lateral occipitotemporal cortex: Whole shape, part shape, and graspability. Neuropsychologia, 138, 107340. 10.1016/j.neuropsychologia.2020.107340
161. Wurm, M. F., & Caramazza, A. (2022). Two ‘what’ pathways for action and object recognition. Trends in Cognitive Sciences, 26, 103–116. 10.1016/j.tics.2021.10.003
162. Yang, D. Y.-J., Rosenblau, G., Keifer, C., & Pelphrey, K. A. (2015). An integrative neural model of social perception, action observation, and theory of mind. Neuroscience and Biobehavioral Reviews, 51, 263–275. 10.1016/j.neubiorev.2015.01.020
163. Yarbus, A. L. (1967). Eye movements and vision. New York: Springer. 10.1007/978-1-4899-5379-7
164. Yovel, G., & O'Toole, A. J. (2016). Recognizing people in motion. Trends in Cognitive Sciences, 20, 383–395. 10.1016/j.tics.2016.02.005
165. Zekelman, L. R., Zhang, F., Makris, N., He, J., Chen, Y., Xue, T., et al. (2022). White matter association tracts underlying language and theory of mind: An investigation of 809 brains from the Human Connectome Project. Neuroimage, 246, 118739. 10.1016/j.neuroimage.2021.118739
166. Zimmermann, M., Mars, R. B., de Lange, F. P., Toni, I., & Verhagen, L. (2018). Is the extrastriate body area part of the dorsal visuomotor stream? Brain Structure & Function, 223, 31–46. 10.1007/s00429-017-1469-0


Data Availability Statement

This article is a review of published findings. One figure contains data previously presented in a conference abstract; the IRB that originally approved that study has granted permission for the data to be published here. Because these are patient data, the IRB has determined that they cannot be shared; they appear in de-identified form in this article.

