Author manuscript; available in PMC: 2024 Nov 30.
Published in final edited form as: J Cogn Neurosci. 2024 Dec 1;36(12):2594–2617. doi: 10.1162/jocn_a_02141

From motion to emotion: visual pathways and potential interconnections

Aina Puce 1
PMCID: PMC11416577  NIHMSID: NIHMS2004548  PMID: 38527078

Abstract

The two visual pathway description of Ungerleider and Mishkin (1982) changed the course of late 20th century systems and cognitive neuroscience. Here, I try to re-examine our lab’s work through the lens of Pitcher and Ungerleider’s (2021) new third visual pathway. I also briefly review the literature related to brain responses to static and dynamic visual displays and to visual stimulation involving multiple individuals, and compare existing models of social information processing for the face and body. In this context, I examine how the posterior superior temporal sulcus (pSTS) might generate unique social information relative to other brain regions that also respond to social stimuli. I discuss some of the existing challenges we face with assessing how information flow progresses between structures in the proposed functional pathways, and how some stimulus types and experimental designs may have complicated our data interpretation and model generation. I also note a series of outstanding questions for the field. Finally, I examine the idea of a potential expansion of the third visual pathway, to include aspects of previously proposed ‘lateral’ visual pathways. Doing this would yield a more general entity for processing motion/action (i.e., ‘[inter]action’) that deals with interactions between people, as well as people and objects. In this framework, a brief discussion of potential hemispheric biases for function, and of different forms of neuropsychological impairment created by focal lesions in the posterior brain, helps situate various brain regions within an expanded [inter]action pathway.

INTRODUCTION: A TRIBUTE

Cognitive, social and systems neuroscientists who study the characteristics of the visual system in human and non-human primates owe so much to the late Dr. Leslie Ungerleider. For decades her groundbreaking work in primate neurophysiology, neuroanatomy and neuroimaging in visual system function has laid the cornerstone for how we think about visual information processing in the primate brain. I dedicate this article to Dr. Ungerleider’s memory and honor her by first trying to put our work into the scientific context that she created, and then considering how that context might be expanded. I would also like to acknowledge the influence and contribution of two close colleagues who are also no longer with us today — Drs. Truett Allison and Shlomo Bentin. We all stand on the shoulders of giants.

VISUAL PATHWAYS: AND THEN THERE WERE THREE…

In 1982, I was embarking on a graduate career and starting to perform studies in the human visual system when the landmark manuscript on parallel visual pathways in the human brain was published (Ungerleider & Mishkin, 1982). The discussion and implications for the field in that paper helped channel and shape my research directions for the decades to come. Ungerleider and Mishkin (1982) had a clear ‘What?’ and ‘Where?’ emphasis for the main functional divisions of the respective ventral and dorsal pathways. Their work was predominantly based on the non-human primate literature – on painstaking studies of single-unit neurophysiology and structural neuroanatomy – based on investigations of object recognition and their locations in space. A slightly different interpretation of the ventral and dorsal visual systems was proposed a decade later (Goodale & Milner, 1992), where the dorsal system was examined from the point of view of ‘How?’ In this formulation, based heavily on the apraxia literature, both spatial location and how an object was handled were important. To non-experts, the two visual pathway model made vision seem simple. However, when the now classic schema of known anatomical areal interconnections in the primate brain was viewed through the data lens of the early 1990s, the story was far from simple even then (Felleman & Van Essen, 1991)!

One vexing issue, which still looms large to this day, relates to exactly how information between the pathways is transferred and used by each visual system. For us, our everyday world is a seamlessly continuous and complete one, and today there are still many questions about how we achieve this holistic view. From existing white matter tract knowledge of human cerebral cortex (e.g., Mori, Oishi, & Faria, 2009; Wang, Metoki, Alm, & Olson, 2018; Zekelman et al., 2022), sometimes there might be no direct white matter connections between brain structures that share common functions. For example, the human posterior superior temporal sulcus (pSTS) and the midfusiform gyrus (FG) (Fig. 1a) exhibit strong sensitivity to faces but have no direct interconnections (Ethofer, Gschwind, & Vuilleumier, 2011). Even today, existing structural interconnections between human brain areas sensitive to faces have been challenging to document clearly (Fig 1a; Babo-Rebelo et al., 2022; Grill-Spector, Weiner, Kay, & Gomez, 2017).

Figure 1. Potential structural and functional connections between main brain structures in the face processing network.


a. Known structural connections between main structures (solid lines), based mainly in the ventral system. Question marks highlight unknown structural connections in the network. Core (purple) and extended (blue) systems are color-coded, as are other (black) important brain regions. Reproduced with CC BY 4.0 Deed from Babo-Rebelo, M., Puce, A., Bullock, D., Hugueville, L., Pestilli, F., Adam, C., … George, N. (2022). Visual Information routes in the posterior dorsal and ventral face network studied with intracranial neurophysiology and white matter tract endpoints. Cereb Cortex, 32(2), 342–366. b. The third visual pathway of Pitcher and Ungerleider (2021). The general directions of the dorsal and ventral pathways are displayed by respective blue and green arrows as they emerge from primary visual cortex (V1). For the third visual pathway, key component structures MT/V5, p(osterior) STS and a(nterior) STS are shown by red-brick colored circles in a pathway to the anterior temporal lobe.

Perhaps a real understanding of the interconnections between dynamic face (and body) regions and other visually-sensitive brain regions dealing with object motion is lacking? MRI-guided electrical micro-stimulation of ‘face-patches’ in monkey IT cortex highlights their strong interconnections and separation from non-face regions (Moeller, Freiwald, & Tsao, 2008), yet micro-stimulation in face-patches influences activity in ‘object’ regions when ‘face-like’ objects or abstract faces are viewed (Moeller, Crapse, Chang, & Tsao, 2017). What seems to be critical here is the study of visually-sensitive cortex that is not directly responsive to either faces or objects. Recent elegant work following this line of reasoning has proposed a complex object map, or space, in monkey IT where these category-specific properties can be observed (Bao, She, McGill, & Tsao, 2020). This approach channels a now classic human fMRI study, albeit at a coarser spatial scale (Haxby et al., 2001), now refined with a state-of-the-art machine learning data analysis technique known as ‘hyperalignment’. This computationally-demanding method effectively scrubs out individual subject idiosyncrasies in high-resolution fMRI data, showcasing across-subject similarities in category-specific activation patterns in human occipitotemporal cortex (Haxby, Connolly, & Guntupalli, 2014; Haxby, Guntupalli, Nastase, & Feilong, 2020).

A second issue for the original dorsal/ventral visual pathway scheme was that it was not clear where structures such as the posterior STS (pSTS) sat. The pSTS is highly active in many studies dealing with dynamic human form (Allison, Puce, & McCarthy, 2000; Puce & Perrett, 2003; Yovel & O’Toole, 2016), and for this reason, it was seen as part of the dorsal system (Bernstein & Yovel, 2015; O’Toole, Roark, & Abdi, 2002). Working with Truett Allison, we had always regarded the pSTS as an important information integration point between the two visual pathways (Allison et al., 2000).

Perhaps the uncertainty in classifying and connecting other brain structures to the two visual pathways came about because this was not the complete picture? What were we missing? Was this the motivation for David Pitcher and Leslie Ungerleider when they proposed their ‘third visual pathway’, in which the pSTS was a major feature? The third pathway is a freeway linking primary visual cortex with area MT/V5, pSTS and anterior STS (aSTS) (Fig. 1b; Pitcher & Ungerleider, 2021).

Pitcher and Ungerleider’s (2021) questioning of the status quo is not unique: current thinking relating to brain pathways devoted to emotion (de Gelder & Poyo Solanas, 2021), and to how emotions arise or progress (Critchley & Garfinkel, 2017; Li & Keil, 2023), for example, has also undergone ‘remodeling’. In terms of visual pathways themselves, the idea of an additional pathway beyond the ventral and dorsal systems is not new. Weiner and Grill-Spector (2013) proposed an additional lateral pathway that selectively processed information related to faces and limbs and integrated vision, haptics, action, and language. Perplexingly, dynamic visual stimulation was not considered, so structures strongly driven by human face and body motion (such as the pSTS) are not included in this model. The pSTS is also not explicitly considered in other ‘lateral’ visual pathway formulations that center on the LOTC (with a left-hemisphere bias) and in which MT/V5, the extrastriate body area (EBA) and the middle temporal gyrus (MTG) feature prominently (Lingnau & Downing, 2015; Wurm & Caramazza, 2022), in contrast to the pSTS-centered third visual pathway, which has a right-hemisphere bias (Pitcher & Ungerleider, 2021). To complicate the picture still further, the idea of the dorsal system contributing unique knowledge regarding object representations has also been advanced (Freud, Behrmann, & Snow, 2020; Freud, Plaut, & Behrmann, 2016).

INVASIVE HUMAN BRAIN RESPONSES TO OBSERVED FACIAL AND BODY MOTION

Our facial movements provide clear social signals about our emotional states and foci of social attention. Close up, this information related to emotions comes from characteristic changes in upper and lower face parts (e.g., Muri, 2016; Waller, Julle-Daniere, & Micheletta, 2020). In the non-emotional domain, the eyes (via gaze direction) signal the focus of (social) attention, and can shift the (visual) attention of the viewer (Dalmaso, Castelli, & Galfano, 2020). In humans, rhythmic mouth movements (in the 3–8 Hz range) are tightly correlated with rhythmic vocalizations, unlike in non-human primates, where rhythmic facial motion is absent (Ghazanfar & Takahashi, 2014). Therefore, mouth movements provide supplementary information on verbal output. To improve comprehension, even people with normal hearing lipread in noisy environments, or when listening to speakers in their non-native language (Campbell, 2008). So, an opening mouth might be attention-grabbing, as it could signal the onset of an utterance (Carrick, Thompson, Epling, & Puce, 2007; Puce, Smith, & Allison, 2000).

The human brain’s response to viewing gaze changes of others

It has been known for a long time that non-invasive and invasive neurophysiological responses to viewing dynamic gaze aversions and mouth opening movements are significantly larger than those to direct gaze transitions or mouth closing movements (Allison et al., 2000; Caruana et al., 2014; Ulloa, Puce, Hugueville, & George, 2014). Functional MRI studies from many laboratories have consistently shown that the pSTS is a critical locus for facial motion signals (Campbell et al., 2001; Puce, Allison, Bentin, Gore, & McCarthy, 1998; Yovel & O’Toole, 2016). Neurophysiologically, MT/V5 also shows some selectivity to dynamic faces relative to non-face controls (Campbell, Zihl, Massaro, Munhall, & Cohen, 1997; Miki & Kakigi, 2014; Watanabe, Kakigi, & Puce, 2001). These older findings are consistent with the proposed active loci in the third visual pathway.

What about the time course for this neural activity? Non-invasive neurophysiological effects discussed above occur in the 170–220 ms post-motion onset range. Intracranial responses recorded to viewed dynamic gaze changes concur with non-invasive data: significantly larger field potentials in pSTS occur to gaze aversions relative to direct gaze transitions. In contrast, modulation of dynamic emotions (happiness versus fear) is not a prominent feature in the STS response (Fig. 2; Babo-Rebelo et al., 2022). This pattern of results was seen in 4 patients (of a total of 11 studied).

Figure 2. Effects of gaze and emotion in pSTS field potentials.


Left panel: Data from an epilepsy patient show field potentials from 3 electrode contacts within the pSTS while viewing a dynamic face changing its gaze (direct, averted) and expression (from neutral to either fear or happiness). Significant differences between averted and direct gaze (top line of plots) were seen, with the largest responses for averted gaze. These significant differences could persist beyond the displayed 400 ms epoch (not shown). Emotion conditions (bottom line of plots) do not show prolonged amplitude differences post-emotion change. Two phase reversals in the potential at ~200 ms were seen across the 3 sites – a signal of local generators at these locations. The respective MNI co-ordinate locations (x,y,z) for the 3 electrode contacts were: Site 1: +43,−53,+9; Site 2: +48,−53,+9; Site 3: +54,−53,+9. LEGEND: *: P <0.05, **: P <0.01, ***: P <0.005, corrected-over-time Monte Carlo P values. Right panel: Locations of left panel electrode sites appear on coronal views of the post-implant structural MRI. Reproduced with CC BY 4.0 Deed from Babo-Rebelo, M., Puce, A., Bullock, D., Hugueville, L., Pestilli, F., Adam, C., … George, N. (2022). Visual Information Routes in the Posterior Dorsal and Ventral Face Network Studied with Intracranial Neurophysiology and White Matter Tract Endpoints. Cereb Cortex, 32(2), 342–366.

In addition to the superior temporal cortex (STC) region of interest (ROI), which included cortex on the superior temporal gyrus and in the pSTS, 3 other occipitotemporal ROIs were studied: an inferior temporal cortical region (‘ITC’, which included the inferior temporal sulci and gyri), a fusiform cortical region (‘FC’, including the midfusiform sulcus, and the occipitotemporal and collateral sulci), and an inferior occipital region (‘IOC’, comprised mainly of the inferior occipital gyrus). Effect sizes to viewing facial changes were calculated for normalized amplitudes of bipolar field potentials at active electrode pairs in the 4 ROIs in the 11 patients (Fig. 3). All four ROIs showed responses to both gaze and emotion transitions, but the pSTS (STC ROI in the plot in Fig. 3) was most sensitive to gaze relative to emotion. These effects are not due to motion extent per se: for these same stimuli, the largest facial changes occurred for emotion transitions – specifically in the lower part of the face (Huijgen et al., 2015).

Figure 3. Effect sizes for gaze and emotion within 4 occipitotemporal ROIs.


Bottom panel: Schematic axial slices for each ROI (IOC, FC, ITC, and STC) showing bipolar site pair locations (indicated by dots) responding significantly to Emotion (left side) or Gaze (right side). Color legend for individual patients is at right. Top panel: Effect size (absolute Cohen’s d) plotted as a function of ROI for each patient’s bipolar electrode sites. The dark gray open circles denote mean effect size across sites within each ROI, for Emotion and Gaze, respectively. Statistical comparison of effect sizes between Emotion and Gaze in each ROI (gray bars between open circles) and across ROIs for Emotion and Gaze effects (top bars) was performed. Broken lines on the plot represent commonly accepted evaluations of effect size values. LEGEND: IOC, Inferior Occipital Cortex; FC, Fusiform Cortex; ITC, Inferior Temporal Cortex; STC, Superior Temporal Cortex. NS: not significant; **: P <0.01. Reproduced with CC BY 4.0 Deed from Babo-Rebelo, M., Puce, A., Bullock, D., Hugueville, L., Pestilli, F., Adam, C., … George, N. (2022). Visual Information Routes in the Posterior Dorsal and Ventral Face Network Studied with Intracranial Neurophysiology and White Matter Tract Endpoints. Cereb Cortex, 32(2), 342–366.

Our data indicate that initial gaze processing in pSTS is already underway ~1/5 of a second after the gaze change. Typically, field potentials in human V1 occur at ~100 ms post-stimulus onset (Allison, Puce, Spencer, & McCarthy, 1999), and presumably this information travels to MT/V5 (Watanabe et al., 2001) and then to the pSTS, consistent with the information flow in the third visual pathway. The pSTS is clearly important for processing gaze in real life: lesions of human pSTS can produce deficits in judging gaze direction/social attention in others (Akiyama, Kato, Muramatsu, Saito, Nakachi, & Kashima, 2006; Akiyama, Kato, Muramatsu, Saito, Umeda, & Kashima, 2006).

Amygdala recordings in 5 patients have shown small-amplitude selective responses to gaze aversion, but not to facial emotion, using the same stimuli and task. The early response latency of ~120 ms in the right amygdala (Huijgen et al., 2015) was earlier than that in extrastriate cortex (Babo-Rebelo et al., 2022), raising questions about alternate information flow, perhaps via pulvinar-collicular routes. The left amygdala’s sensitivity to increased eye white area (as seen in fear), relative to the right amygdala, which responds to various changes in eye white area (including gaze aversions and depicted fear), has previously been reported (Hardee, Thompson, & Puce, 2008). Notably, patients with amygdala injury can have difficulties judging gaze direction (Gosselin, Spezio, Tranel, & Adolphs, 2011) as well as emotions, especially fear (Adolphs, Tranel, Damasio, & Damasio, 1994; Gosselin et al., 2011), so the absence of field potentials to fearful faces in the amygdala is puzzling.

Insular cortex is also sensitive to dynamic eye gaze transitions. Averted gaze changes produce larger invasive neurophysiological responses than do direct gaze transitions. Gaze extent per se is not a factor – as evidenced by smaller evoked responses to spatially-large extreme left-right gaze shifts (Caruana et al., 2014).

The human brain’s response to viewing the mouth movements of others

Earlier, I mentioned the similarity in morphology and latency of non-invasive neural activity and pSTS field potentials to dynamic gaze changes. An outstanding question has been whether there are field potentials in the pSTS and/or other brain regions that are selectively elicited by viewing mouth movements.

In the late 1990s, we recorded field potentials to faces, face parts, objects, and scrambled versions of these stimuli in more than 20 activation tasks in ~ 100 epilepsy surgery patients (Allison et al., 1999; McCarthy, Puce, Belger, & Allison, 1999; Puce, Allison, & McCarthy, 1999). We began some new studies, including a dynamic facial motion task from which we had already collected non-invasive data (i.e., Puce et al., 2000). Here I include data originally published in abstract form (Puce & Allison, 1999), data that generated substantial interest at the Memorial Symposium devoted to Dr. Leslie Ungerleider’s memory (at NIH in September 2022). While these data remain anecdotal, I present them here to stimulate further thinking and seed further studies.

For the dynamic facial motion study, we recorded data from epilepsy surgery patients (who provided informed consent in a study approved by the Human Investigations Committee at the Yale School of Medicine). Talairach coordinates of active electrodes were calculated (Allison et al., 1999). Figure 4 displays data recorded from two depth electrodes in a patient – one electrode was in pSTS, and the other in the Sylvian fissure, abutting insular cortex (Fig. 4a). Averaged field potentials selective to mouth opening were seen at ~400 ms after motion onset and were polarity negative at both sites. At these latencies, no prominent evoked activity occurred to dynamic gaze changes (Fig. 4b), static isolated mouths or eyes, full faces (Fig. 4c), or other static visual stimuli (Fig. 4d). Some slow, non-descript late activity (after 500 ms) to the static full face and face parts might be present in STS 6 (Fig. 4c), and general static visual stimuli (Fig. 4d) appear to produce small deflections at earlier (~400 ms) and later (~800 ms) latencies, suggesting that while these sites show a distinct preference for moving mouths, they nevertheless retain some degree of general visual responsivity.

Figure 4. Field potentials from STS and Sylvian fissure (insular cortex) from three experiments.


a. Two partial coronal slices of a structural MRI scan display electrode contacts in the STS (top image) and Sylvian fissure (SF; insular cortex) (bottom image). b. The dynamic facial motion study elicits strong responses to mouth opening movements at ~ 400 ms post-motion onset and negligible responses to gaze aversions. c. Isolated static face parts (eyes and mouth) and full faces elicit very long latency responses, particularly in STS contact 6. d. Visual stimuli in general (faces, flowers, scrambled faces or words) do not appear to induce prominent responses at these sites. LEGEND: Talairach co-ordinates (x,y,z) for the electrode contacts were: STS 4 −12,−41,−6; STS 5 −8,−49,−7; STS 6 −8,−55,−7; SF 3 −15,−46,+12; SF 4 −15,−53,+13; SF 5 −15, −60, +14. Data from Puce, A., & Allison, T. (1999). Differential processing of mobile and static faces by temporal cortex (OHBM Annual Meeting abstract). Neuroimage 9(6), S801.

The Sylvian fissure/insula responses (Fig. 4b) are intriguing, given this general region’s known rich functional neuroanatomy (Gogolla, 2017; Uddin, Nomi, Hebert-Seropian, Ghaziri, & Boucher, 2017). Unfortunately, insular recordings are not common, as a number of major branches of the middle cerebral artery course through this region (Ture, Yasargil, Al-Mefty, & Yasargil, 2000). In a rare study of intracranial recordings from insular cortex, a sensitivity to gaze aversion in the posterior insula was reported, with field potentials in the 200–600 ms range post-motion onset (Caruana et al., 2014), paralleling the anecdotal data to moving mouths presented here.

The location of the depth probe in the Sylvian fissure/insula (Fig. 4) is likely posterior to primary auditory cortex (Heschl’s gyrus) in the temporal lobe, and posterior to gustatory cortex, secondary somatosensory cortex and cortex related to vestibular function in the insula (Gogolla, 2017; Uddin et al., 2017). It is more likely to be near a region sensitive to coughing in the Sylvian fissure (Simonyan, Saad, Loucks, Poletto, & Ludlow, 2007), and a region in the posterior insula known to be sensitive to visual stimuli (Frank & Greenlee, 2018). Notably, epilepsy surgery of the insula (but not temporal lobe) is known to affect emotion recognition – for happiness and surprise (Boucher et al., 2015), emotions where mouth configuration is a prominent component.

DO LOW-LEVEL VISUAL FEATURES DRIVE BRAIN RESPONSES TO OBSERVED FACIAL EMOTIONS?

I have already noted that mouth (opening and closing) movements appearing in a neutral face devoid of emotion generate reliable fMRI activation and neurophysiological activity in the pSTS, and robust non-invasive EEG activity. In studies from our lab, impoverished visual stimuli, i.e., biological motion displays of faces with opening and closing mouths, elicit neurophysiological responses identical to those elicited by full (greyscale) faces (Puce et al., 2000; Rossi, Parada, Kolchinsky, & Puce, 2014). In stark contrast, when biological motion displays of faces with averting and direct dynamic gaze are contrasted with the same motion in full faces, the brain responses are very different (Rossi et al., 2014; Rossi, Parada, Latinus, & Puce, 2015). These striking neurophysiological differences suggest that multiple low-level visual mechanisms might drive these respective effects. Mouth movements occur from the action of an articulated joint – the mandible is physically linked to the cranium via the temporomandibular joint. Hence, a strong response to a biological motion display of a moving mouth might be expected (Rossi et al., 2014). In contrast, a biological motion effect should not be present for (impoverished) eye motion, which does not involve joint action but arises from the coordinated action of a suite of ocular muscles. This is exactly what we see in our studies (Rossi et al., 2014; Rossi et al., 2015).

The biological motion effect is not the only low-level visual factor that could generate stimulus-driven activity from viewing a dynamic face. Such activity could also come about due to local luminance and contrast changes. For example, when a person is very happy, their smile (or laugh) will likely show teeth. White teeth can be clearly seen against the darker aspect of the lips and mouth cavity. Sometimes the teeth might also be seen in fear. Indeed, displayed teeth can be clearly seen at a distance. The presence, or absence, of teeth in a mouth expression affects neurophysiological sensory responses in the latency range of ~100–200 ms (i.e., P100 and N200). Subjects also rate mouths with visible teeth as being more arousing relative to those without visible teeth (daSilva, Crager, et al., 2016). So, this additional low-level visual effect may explain in part why neural studies of emotion consistently report larger responses to happiness (e.g., daSilva, Crager, & Puce, 2016), as teeth are more likely to be visible in happiness in its canonical form (F. W. Smith et al., 2008).

A local luminance/contrast effect, when discriminating between emotions, could also apply to the eye region. The human sclera is bright white relative to the iris and pupil, which is unusual relative to other primates, who typically do not have such a high luminance/contrast structure in the eye (Kobayashi & Kohshima, 1997). In gaze aversions, and in the display of emotions such as fear and surprise, high luminance/contrast changes in local visual space occur from iris movement or expansion of the eye – resulting in an increase in eye white area relative to a neutral face (Hardee et al., 2008). Like the teeth in a smile, gaze aversions or widened eyes in fearful and surprised expressions can also be seen well at a distance. We believe that the local luminance/contrast change in the eye is the major driver of the larger neurophysiological response in the pSTS during a gaze aversion. Although there is a significant fMRI signal increase in the pSTS, the amygdala appears to be even more sensitive than the pSTS (Hardee et al., 2008).

A third low-level effect that might drive part of the response to a dynamic facial expression may be the extent of the facial movement itself. Mouth movements can produce large changes in mouth configuration, as seen when one looks at facial images depicting the net change in the face (Huijgen et al., 2015). The mouth may also take up more area on the face than the eyes, depending on its configuration, e.g., as in a wide smile or a grimace of extreme fear. In contrast, in a gaze change, or widening of the eyes, the main shape of the eye is preserved. In the eye gaze data of Caruana et al. (2014), gaze aversions from direct gaze produced larger neurophysiological effects than did large extreme left-right/right-left gaze transitions, so motion extent, in terms of excursion of the iris, does not appear to be a factor here. One further possibility to consider is that the attention-grabbing nature of the facial motion simply makes us foveate the potentially most informative location in space.

A final point on ‘low-level’ effects: some facial features function as social cues only when the face is upright. Multiple factors can affect identity recognition, and inverted faces typically serve as control stimuli (e.g., McKone & Yovel, 2009; Rhodes, Brake, & Atkinson, 1993; Rossion, 2009). In the Thatcher Illusion, the eyes and mouth are inverted within an upright face. When the face is viewed upright the result is grotesque, but when it is viewed inverted no glaring irregularities are noted (P. Thompson, 1980), suggesting that mouth and eye orientation, and therefore configuration, matters. When human face pairs are compared on judgments of gender or emotional expression, inversion impairs difference judgments of expression and gender, and sameness judgments of expression (see Pallett & Meng, 2015). These data imply that, in evaluating social cues such as gender or viewed emotions, configuration matters and that there may be holistic aspects to processing gender and emotion.

WHERE WE LOOK ON SOMEONE’S FACE MATTERS FOR COMPREHENDING SOCIAL INTERACTIONS

In the abovementioned studies, subjects typically fixated on a fixation cross placed at the nose bridge on a [full] face. Alternatively, if face parts were presented in isolation, subjects gazed at a fixation cross at the screen’s center. In everyday life, our eyes rove continuously about the visual scene, and when they land on a person’s face they will not necessarily look at the bridge of the nose. In a social interaction, our eyes travel to the most informative parts of the face. In paintings and naturalistic scenes with people in them, subjects often fixate on faces, and on the eye and mouth regions in particular (Bush & Kennedy, 2016; Yarbus, 1967). In the 1960s, the Russian scientist Alfred Yarbus clearly showed that observers ‘triangulate’ a face when viewing it, i.e., they focus their gaze on the two eyes and the mouth, and their eye scanning movements form a triangular shape when they examine a face (Yarbus, 1967).

Much has been made of the information provided by the eyes in emotion recognition and Theory of Mind tasks (Baron-Cohen, Wheelwright, Hill, Raste, & Plumb, 2001). Therefore, it is surprising that when an observer’s gaze is tracked as they successfully recognize dynamic (basic) emotions, healthy subjects can spend more time looking at the mouth region relative to the eyes (Blais, Roy, Fiset, Arguin, & Gosselin, 2012).

SOME ISSUES THAT MAY HAVE COMPLICATED THE SCIENCE

Implicit versus explicit tasks in experiments with facial stimuli

The brain responses that I mainly focused on above are involuntary, are consistently observed during implicit tasks, and likely arise from low-level visual factors. This implicit way of functioning seems more ecologically valid and might more closely approximate what we do unconsciously in everyday life (Puce et al., 2016; R. Smith & Lane, 2016). Yet, when we read the social attention and emotion recognition literature, so much of it is built on explicit tasks, e.g., requiring emotions to be categorized or named, often by forced choice. How do brain responses in these explicit tasks vary relative to those in implicit tasks, when identical stimulus material is presented?

We studied how neurophysiology is modified across the implicit–explicit task dimension for social attention, low-level visual factors, and emotion. First, in a social attention task, gaze in neutral faces changed with different degrees of aversion. In the implicit task, subjects indicated by button press if gaze deviated to the ‘left’ or ‘right’ and we replicated our averted gaze > direct gaze N170 effect. In the explicit task using the same stimuli, subjects indicated if gaze moved ‘toward’ or ‘away’ from them. This time, N170 was equally large to gaze aversions and gaze returning to look directly at the observer (Latinus et al., 2015).

Second, we varied the presence/absence of teeth across 3 different mouth configurations – I already mentioned effects of visible teeth earlier. In the implicit task, subjects detected infrequent greyscale versions [target] of any of the [color] mouth stimuli. In the explicit task, subjects saw color stimuli only and pressed one of 3 response buttons to indicate whether the mouth formed an ‘O’, an arc, or a straight line – mouth configurations typically seen in surprise, happiness, or fear. Mouth shapes could occur with, or without, visible teeth. A robust main effect of teeth on P100, N170 and VPP occurred for both tasks, but there were also Teeth X Task interactions driven by the explicit task. For later potentials: (1) P250 showed no main effect of teeth, but showed Task X Teeth and Task X Mouth Configuration interactions; (2) the LPP/P350 was seen only in the implicit task; (3) the SPW was seen only in the explicit task (daSilva, Crager, et al., 2016).

Third, faces portraying positive emotions (happiness and pride) and neutral expressions were studied. In the implicit task, subjects monitored the face for a freckle (an infrequent target placed between the eyes and mouth). N170, VPP, and P250 ERPs were significantly larger for both emotions relative to neutral, but did not differ between the emotions. The late SPW potential significantly differentiated between happy and proud expressions. In the explicit task, subjects pressed one of 3 buttons to differentiate the neutral, happy and proud faces. The same main effects occurred for N170, VPP, P250, LPP and SPW, but this time we also saw Emotion X Task interactions involving P250 and SPW (daSilva, Crager, & Puce, 2016).

Across the above 3 experiments, task interactions with main effects occurred mainly in the longer-latency responses, raising questions about how these neurophysiological changes might impact hemodynamic activation patterns in fMRI studies.

How generalizable are the results we observe in the laboratory to what we experience in everyday life? From our social attention studies, we posited that in real life we might function in two main modes – a ‘non-social’ or default mode (not the same as the resting-state concept), and a ‘socially-aware’ mode (where we explicitly judge others on some social attribute). These modes might function somewhat akin to the implicit and explicit evaluations of faces we use in the lab. When we are in a non-social mode (out in the world and interacting with objects or with others at a superficial level), our sensory systems do some of the hard work for us and differentiate between certain socially salient stimuli – just in case we might wish to explicitly evaluate them further (by switching to social mode). In social mode, our sensory systems likely increase their input gain, so incoming social signals are augmented, enabling us to better evaluate rapidly unfolding social situations (Puce et al., 2016).

Use of morphs in studies of emotion

Creating stimulus sets with genuine dynamic emotional expressions is incredibly challenging, so for many years experimenters resorted to using static faces displayed at the peak of the emotion (e.g., Ekman & Matsumoto, 1993). Early attempts at creating emotional displays with real faces showed how dependent the behavioral performance of subjects was on technical aspects of animated stimuli, such as frame rate (Kamachi et al., 2013). Morphing between static faces of the same identity, but with different expressions at their peak, could create blended emotional stimuli. Similarly, neutral faces and emotional expressions at their peak were blended, with different proportions of the emotion being mixed with the neutral face. Unfortunately, dynamic morphed displays using different morphing strategies could produce different experimental results (Vikhanova, Mareschal, & Tibber, 2022). These morphed stimuli were criticized for not being ecologically valid, as real emotions can be initiated by one part of the face and then progressively involve other face parts. Thus, non-linear facial changes occur in real emotions, but not in morphed displays; it has been argued that this non-linearity is an important cue for perceiving real emotions (Korolkova, 2018). Indeed, quite different neurophysiological activity can be elicited by real versus artificially-created dynamic facial expressions (Perdikis et al., 2017). A recent review examines some of these issues and challenges for studying emotional expressions (Straulino, Scarpazza, & Sartori, 2023).

Feigned emotional stimuli have also been criticized, as studies using real versus feigned emotional faces can report inconsistent findings. In real life we can generate emotional expressions via 2 motor pathways – a spontaneous and involuntary one, and a volitional one. Two upper motoneuron (UMN) tracts project to the facial nerve nucleus in the pons. In evolutionary terms, the extrapyramidal UMN tract is the older one – it produces involuntary and automatic facial expressions that arise rapidly and can be short lasting. In contrast, the pyramidal UMN tract, the evolutionarily newer route, allows us to make volitional expressions that can onset more slowly and be longer lasting (see Straulino et al., 2023).

A disconnect between literatures?

Our face and body movements display our emotions, intentions and actions. Social interactions are dynamic and involve the orchestrated dance of a suite of facial and bodily muscles that signal one’s inner mental states rapidly, spontaneously and involuntarily. Alternatively, we can also conceal these important aspects of our inner mental life. As observers parsing the face and bodily movements of others we do not need to explicitly name or note them – we register them effortlessly and unconsciously, and adjust our behavior to suit the social situation we are in. So, in one sense, from motion comes emotion… Unfortunately, the bulk of studies have been based on viewing isolated static stimuli on computer screens and collecting impoverished behavioral measures (e.g., simple button presses). Only relatively recently have naturalistic tasks - which are challenging to perform - become part of mainstream social neuroscience (Morgenroth et al., 2023; Saarimaki, 2021).

Another major issue for the field? How do we make sense of the varied scientific findings that are complicated by an apparent disconnect between the ‘low-level sensory’ and ‘affective’ literatures? This seems most pertinent for studying how the brain perceives and recognizes portrayed emotions. The famous 1977 Sydney Harris cartoon comes to mind where two scientists are at a blackboard solving a complex problem. Between 2 sets of detailed formulae, a text fragment reads, ‘a miracle happens…’. One scientist says to the other ‘I think you should be more specific in Step 2.’ (http://www.sciencecartoonsplus.com/gallery/physics/index.php). So, at the core of our emotion problem, ‘the miracle’ occurs between sensory stimulus registration and recognition of the emotion. It seems that many scientists working on the side of low-level visual effects (e.g., local contrast or brightness) often ignore some important higher-order cognitive and affective confounds, whereas others working at higher levels of functional brain organization often do not consider potentially significant low-level confounds.

That all said, a number of investigators who are studying top-down and bottom-up interactions to better understand how we see an integrated, seamless world report interesting findings. First, (newer) category-specific regions in posterior cortex differ in their retinotopic properties. For example, some ‘higher-level’ brain areas (e.g., OPA [occipital place area] and LOTC [lateral occipito-temporal complex]) contain multiple overlapping retinotopic maps, whereas others such as OFA [occipital face area] do not (Silson, Groen, Kravitz, & Baker, 2016). These results could be clues to the differential computations these areas might perform. Second, ‘typical’ spatial locations (e.g., eyes placed higher and mouth lower in a display) produce larger retinotopic hemodynamic signals and better behavioral measures (de Haas et al., 2016) – a finding with direct implications for the face inversion literature. Third, contralateral bias varies in higher-level category-specific visual regions. For instance, the right FFA [fusiform face area] and FBA [fusiform body area] integrate more information from both hemifields relative to their left hemisphere homologs (Herald, Yang, & Duchaine, 2023) – helping explain why prosopagnosias have a right hemisphere basis (Meadows, 1974).

Fourth, still on biases but at a higher level, top-down social cognitive factors (e.g., in-/out-group attributes, attitudes, stereotypes, prejudices) provide a powerful lens that shapes our perception and how we look at others. Specifically, information from the anterior temporal lobe (ATL) on social knowledge and stereotypes, accumulated over a lifetime of experience, can be accessed by orbitofrontal cortex and fed down to the FFA, affecting how sensory information is processed (Brooks & Freeman, 2019). This top-down drive can be unconscious and rapid (Freeman & Johnson, 2016): the facial appearance of an unfamiliar individual can drive strong (right or wrong) impressions (e.g., intelligent, trustworthy, etc.) – as shown by behavioral and hemodynamic studies of face-trait spaces (Stolier, Hehman, & Freeman, 2018; Todorov, Said, Engell, & Oosterhof, 2008). The culture that we grow up in shapes our social cognitive impressions (Freeman, Rule, & Ambady, 2009). We form these social impressions quickly – from ‘thin slices of behavior’ – in about 30 seconds (Ambady & Rosenthal, 1992). A number of years ago, an interacting top-down/bottom-up connectionist model for person construal was described, in which partial parallel interactions were probabilistic in nature, allowing continuously evolving activation states consistent with an individual’s goals (Ambady & Freeman, 2011). It would be good to see this model used more in multimodal social neuroscience data interpretation.

The disembodied brain and … face?

In the latter half of the 20th century, reductionist approaches in cognitive and social neuroscience focused on identifying the basic building blocks for cognitive and social functions. While this work was fundamental and important, it seemed like we had laid out the jigsaw puzzle pieces on the table, but we had no idea of the overall image that the jigsaw presented. Perhaps part of the problem was that we had disembodied the brain? There it was sitting in a glass jar separated from the body – isolated from interoceptive messages from the body’s internal milieu and from integrated multisensory input from the external world.

During this century, literature on the ‘brain-heart’ and ‘brain-gut’ axes has highlighted the key role of the vagus nerve in interoception and modulation of the viscera. Bidirectional vagal messaging to and from the brain affects both physical and mental function and has major implications for disease (Manea, Comsa, Minca, Dragos, & Popa, 2015; Mayer, Nance, & Chen, 2022). Regular electrical activity of the heart can be recorded as the electrocardiogram (ECG) – a series of waves labelled with the letters P through T, with the R-wave signaling the main (ventricular) contraction of the heart (Hari & Puce, 2023). Additionally, electrical activity from smooth muscle contractions of the gut can be recorded with electrogastrography (EGG). The EGG signal is a complex entity: due to volume conduction in the body, periodic respiratory activity from electrical activity of the diaphragm, as well as cardiac activity, is sampled along with the electrical activity of the gut. The three electrophysiological signals can be easily teased apart based on their different power spectral content (for a review see Wolpert, Rebollo, & Tallon-Baudry, 2020).
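The spectral separation just mentioned can be illustrated with a minimal sketch (not the cited authors' pipeline): the gastric slow wave, respiration, and the heartbeat occupy well-separated frequency bands (roughly ~0.05 Hz, ~0.2–0.4 Hz, and ~1–1.5 Hz, respectively), so a single mixed recording can be decomposed via its power spectrum. The signal below is synthetic; the band edges are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

fs = 10.0                      # Hz; these are slow signals, a low rate suffices
t = np.arange(0, 600, 1 / fs)  # 10 minutes of synthetic data

# Hypothetical mixed abdominal recording: gastric + respiratory + cardiac.
mixed = (2.0 * np.sin(2 * np.pi * 0.05 * t)    # gastric slow wave, 3 cycles/min
         + 1.0 * np.sin(2 * np.pi * 0.25 * t)  # respiration, 15 breaths/min
         + 0.5 * np.sin(2 * np.pi * 1.2 * t))  # cardiac, 72 beats/min

freqs, psd = welch(mixed, fs=fs, nperseg=4096)

def dominant_freq(lo, hi):
    """Frequency of the largest spectral peak inside the band [lo, hi] Hz."""
    band = (freqs >= lo) & (freqs <= hi)
    return freqs[band][np.argmax(psd[band])]

gastric = dominant_freq(0.03, 0.10)   # EGG band
resp = dominant_freq(0.15, 0.45)      # respiratory band
cardiac = dominant_freq(0.80, 2.00)   # cardiac band
print(f"gastric={gastric:.3f} Hz, resp={resp:.3f} Hz, cardiac={cardiac:.3f} Hz")
```

Each band recovers the rhythm that was mixed in, which is the essence of the Wolpert et al. (2020) point that the three signals can be teased apart spectrally.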

Studies in social neuroscience are starting to investigate interoceptive neurophysiology. For example, the brain is sensitive to the beating of the heart: a biphasic response occurs in primary somatosensory cortex 280–360 ms after the contraction of the heart’s ventricles (Kern, Aertsen, Schulze-Bonhage, & Ball, 2013). Interoception of cardiac activity can modulate brain activity in somatosensory cortex, and behavior, during the detection and localization of somatosensory stimuli (Al et al., 2020).
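The generic analysis behind such heartbeat-evoked responses can be sketched as follows (this is a hedged illustration, not the pipeline of the studies cited above): detect R-peaks in a simultaneously recorded ECG, then average the neural signal time-locked to each R-peak. All signals here are synthetic, with a bump planted 300 ms after each heartbeat.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 250                               # sampling rate, Hz
t = np.arange(0, 60, 1 / fs)           # one minute of data

# Toy ECG: unit spikes once per second stand in for R-waves (60 bpm).
ecg = np.zeros_like(t)
ecg[::fs] = 1.0

# Toy 'EEG': noise plus a small bump ~300 ms after each heartbeat.
rng = np.random.default_rng(0)
eeg = 0.1 * rng.standard_normal(t.size)
for r in range(0, t.size, fs):
    onset = r + int(0.3 * fs)          # bump centered 300 ms post R-wave
    if onset - 12 >= 0 and onset + 13 <= t.size:
        eeg[onset - 12:onset + 13] += 0.5 * np.hanning(25)

# R-peak detection, with a refractory period enforced via `distance`.
r_peaks, _ = find_peaks(ecg, height=0.5, distance=int(0.4 * fs))

# Epoch from -100 ms to +500 ms around each R-peak and average.
pre, post = int(0.1 * fs), int(0.5 * fs)
epochs = [eeg[p - pre:p + post] for p in r_peaks
          if p - pre >= 0 and p + post <= t.size]
hep = np.mean(epochs, axis=0)          # heartbeat-evoked response estimate
peak_latency_s = (np.argmax(hep) - pre) / fs
print(f"{len(epochs)} heartbeats, response peak ~{peak_latency_s:.2f} s post R-wave")
```

Averaging across many heartbeats is what lifts the small cardiac-locked response out of the ongoing background activity.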

With respect to the ‘disembodied’ face: the social neuroscience literature has tended to focus on the face alone, which is somewhat ironic given that we see our conspecifics whole in everyday interactions. In the next section, I discuss some face and body processing models to illustrate how thinking has progressively changed, as knowledge of structural and functional neuroanatomy has grown and (non-invasive) techniques for studying in-vivo brain and body activity have improved.

It would be remiss of me not to mention the cerebellum. In some ways it is the ‘Cinderella of the brain’: it has more neurons than the cerebral cortex and is an important contributor to social cognitive function (Schmahmann, 2019), yet our focus has always been on the cerebral cortex. Some relatively new work indicates that cerebellar activity can be reliably detected and modeled with MEG and EEG methods (Samuelsson, Sundaram, Khan, Sereno, & Hamalainen, 2020). Perhaps Cinderella will now be able to wear her glass slippers?

‘FACING UP’ TO SOME EXISTING MODELS OF PROCESSING INCOMING INFORMATION FROM THE FACE AND BODY

In 1967 the Polish neurophysiologist Jerzy Konorski proposed the idea of ‘gnostic units and areas’ in the central nervous system, predicting neurons and brain areas with selectivity for faces, objects, places and words. His 9-category model (Konorski, 1967) foreshadowed ideas of category-specificity and pre-dated the ‘grandmother cell’ concept by a couple of years (Gross, 2002). The first non-invasive neurophysiological evidence for human category-specific responsivity came from E.R. John – published in a memorial special journal issue (following Dr. Konorski’s untimely death) (John, 1975). Konorski’s predictions included where on the human scalp selective ERP responses would occur in the 10–20 EEG electrode system (Jasper, 1958). E.R. John’s anecdotal data (Fig. 5) showed a clear negative potential at 150–200 ms to a vertical line signifying the letter ‘I’, which was less evident when the same line was read as the number ‘1’ (John, 1975)! At the time, neuropsychological studies were amassing evidence on visual agnosias, especially for faces, in patients with acquired brain lesions (Meadows, 1974). In the lab of Truett Allison and Greg McCarthy in the 1990s, discussion of Konorski’s categories for selective stimulus evaluation was quite common.

Figure 5. A single subject category-selective non-invasive neurophysiological response.

Figure 5.

Averaged EEG data from 10–20 sites on the right parietal (P4), temporal (T6) and occipital (O2) scalp. Top 2 traces show waveforms elicited to 50 presentations of a vertical line described to the subject as the letter ‘I’, and the number ‘1’, respectively. Electrode T6 shows a prominent potential (first peak) in the older ‘negative polarity is up’ display convention. The voltage calibration bar is absent, but activity appears in μV. A time calibration marker (bottom of the figure) shows total time in milliseconds (ms). Trace 3 displays difference waveforms between the two stimulus types. A larger response to the letter relative to the number is seen halfway through the epoch in all 3 sites at ~200 ms. Trace 4 depicts point-by-point t-test values between the two conditions, with significant differences (p < 0.01; identified within the outlined box with broken lines) for sites T6 and P4, but not O2. The original figure showed data from another subject with similar effects in the left temporal scalp. From John, E.R. (1975) Konorski’s concept of gnostic areas and units: some electrophysiological considerations. Acta Neurobiologiae Experimentalis (ANE) 35: 417–429, reprinted with kind permission from the Editor-in-Chief, ANE.

In the 1980s it was noted that a prosopagnosic patient could generate an (unconscious) autonomic Galvanic Skin Response (GSR) to a familiar face, even though he could not identify it (Bauer, 1984). The patient was a motorcycle accident victim who had sustained extensive bilateral occipito-temporal lesions, as seen from the computerized tomography images (Bauer, 1982). It appears that the left pSTS and bilateral V1 were spared, leaving a potential route for the unconscious response to the familiar face.

Not long after Bauer’s (1984) case report, a model for familiar face recognition was proposed by the British neuropsychologists Vicky Bruce and Andy Young, based on decades of neuropsychological investigations in patients with facial processing deficits (Bruce & Young, 1986). The model also included facial speech processing and facial expression analysis – in a pathway parallel to that for familiar face recognition. Following human and monkey fMRI and neurophysiological studies devoted to face perception in the latter two decades of the 20th century, this model was refined (Fig. 6a; see Gobbini & Haxby, 2007). Interestingly, the ‘emotion’ component of this later model focused on the emotional reactions generated in the viewer of the familiar face, not on the emotions or other dynamic social signals present on the familiar individual’s face. The original model of Haxby, Hoffman, and Gobbini (2000) was modified both by Gobbini and Haxby (2007) and by O’Toole et al. (2002). O’Toole’s model expanded on how dynamic faces are analyzed, with identity signals from the dynamic face being sent from the dorsal pathway (where the STS was said to reside) to the ventral pathway (Fig. 6b).

Figure 6. Two models of active brain regions in face perception and recognition.

Figure 6.

In both models, an initial branch point occurs where invariant facial features (important for identity recognition) are separated from dynamic aspects of the face (important for emotional expressions and social attention). a. The refined Gobbini and Haxby (2007) familiar face recognition model has a core system that decodes visual appearance via two streams: one for invariant feature identification and another for dynamic face feature perception. Information from the core system passes to the extended system, activating either aspects of person knowledge or our own emotions elicited by that person. b. The O’Toole et al. (2002) model expands the original Haxby et al. (2000) model. Here dynamic aspects of a familiar (and unfamiliar) face (e.g., an expressed emotion) are processed in the dorsal visual pathway and an identity signal is sent to the ventral pathway. The STS is assumed to be part of the dorsal pathway.

Years later, Bernstein and Yovel (2015) worked with the ‘Haxby (2000)’ and ‘O’Toole (2002)’ models to try to resolve some existing issues. For example, OFA is a ventral pathway region that extracts form information from faces and is strongly connected to FFA, but neither area shares strong connections with pSTS. Hence, the pSTS was placed in the other pathway, devoted to extracting information from dynamic faces. Bernstein and Yovel (2015) placed the aSTS and inferior frontal gyrus – structures reactive to dynamic faces – in the dorsal face pathway because, in their view, “the primary functional division between the dorsal and ventral pathways of the face perception network is the dissociation of motion and form information” (Bernstein & Yovel, 2015).

The models typically have not included human voice processing and have remained largely visuocentric, even though the original Bruce and Young (1986) model included speech analysis. A model for the ‘auditory face’, i.e., the processing of auditory information from human voices, proposed a structure parallel to the analysis of visual information from the human face (Fig. 7; Belin, Bestelmeyer, Latinus, & Watson, 2011; Belin, Fecteau, & Bedard, 2004).

Figure 7. A model of voice perception.

Figure 7.

The Belin et al., (2004) VOICE perception model (left: brown/gold flowchart) shows parallelism with the original components of the Bruce & Young (1986) familiar FACE perception pathway (right: green flowchart). There are paths of intercommunication between the two. This model is based on extensive unimodal processing (within each pathway, colored arrows) and multimodal interactions (cross-pathway interactions, black arrows).

As already noted, bodies are important messengers of emotional state and action intent, as well as signaling identity. Not surprisingly, fMRI studies identified areas of occipitotemporal cortex sensitive to human body motion, such as the Extrastriate Body Area (EBA) (Downing, Jiang, Shuman, & Kanwisher, 2001) and Fusiform Body Area (FBA) (Peelen & Downing, 2004). Intracranial field potential studies have reported selective responses to human hands and bodies in ventral and lateral occipitotemporal regions (Pourtois, Peelen, Spinelli, Seeck, & Vuilleumier, 2007; Puce & Allison, 1999). The parallel fMRI studies in our lab also showed activation in lateral regions – most prominently in the inferior temporal sulcus. The EBA was sensitive to body parts and wholes (Downing & Peelen, 2016), whereas the FBA responded more vigorously to whole bodies, although it could respond to body parts (Peelen & Downing, 2007). The pSTS also showed a vigorous selective fMRI response both to realistic human hand and leg motion and to animated avatars of whole mannequin bodies, faces and hands (J. C. Thompson, Clarke, Stewart, & Puce, 2005; Wheaton, Thompson, Syngeniotis, Abbott, & Puce, 2004). In contrast, the middle temporal gyrus (MTG) is highly active to man-made object/tool motion (Beauchamp, Lee, Haxby, & Martin, 2003).

A challenge for the field has been to place activation to (dynamic) bodies and their parts into existing (face-centric) models of social information processing. It is no surprise that these body-related findings prompted substantial revisions to existing models, e.g., a multisensory model for person recognition using dynamic information from faces, bodies and voices has been proposed. Here the pSTS acts as a ‘neural hub for dynamic person recognition’, sending multisensory information to the aSTS and then onto ATL — a region critical for person recognition. Unisensory auditory (pSTS and aSTS) and visual (OFA and FFA) pathways also send information to the ATL for person recognition (Yovel & O’Toole, 2016). The EBA and FBA are not part of this model. Another important aspect of interpreting ‘body language’ is the unconscious nature and speed with which we make sense of this information – proposed to be possible via the existence of subcortical and cortical pathways, which are intertwined in three interconnected brain networks (Fig. 8; de Gelder, 2006).

Figure 8. Emotional body language (EBL) processing across three interrelated brain networks.

Figure 8.

EBL visual information enters subcortical (red) and cortical (blue) routes in parallel. The subcortical Reflex-like EBL network (red) is rapid and comprises the superior colliculus (SC), pulvinar (Pulv), striatum and amygdala (Amyg). Its output is not amenable to conscious awareness. The (cortical) Visuomotor perception of EBL network (blue) has the core areas of lateral occipital complex (LOC), superior temporal sulcus (STS), intraparietal sulcus (IPS), premotor cortex (PM), fusiform gyrus (FG) and amygdala (Amyg). (The amygdala is common to two networks in this scheme.) The third network is the (cortical) Body awareness of EBL network (green). Its core structures are the insula, somatosensory cortex (SS), anterior cingulate cortex (ACC) and ventromedial prefrontal cortex (vmPFC). It processes incoming information from others, as well as interoceptive information from the individual. The subcortical (Reflex-like EBL) network sends feedforward connections (red lines) to the two cortical networks. Reciprocal interactions (blue lines) exist between the two cortical systems.

An updated model for recognizing emotion from body motion has been proposed (de Gelder & Poyo Solanas, 2021). The EBA and FBA are not in this model. Processing of body movements starts in the inferior occipital gyrus (IOG), dividing into a ventral route and a ‘dorsal’ route in which the pSTS plays a key role, relaying information to the limbic system and intraparietal sulcus (IPS). This ‘radically distributed’ model includes a subset of brain regions sensitive to human face motion (Fig. 9a). A major departure from more established models is the addition of a mid-level of feature analysis dealing with affect, which no longer passes through a structural analysis of the body (Fig. 9b). This would allow affective information to be processed more rapidly. Identity recognition would be dealt with via the body-specific structural analysis route, but there is no direct link between it and the affective analysis of body posture. Given that we have idiosyncratic bodily movements and postures, it is not clear how this information would be extracted in this formulation.

Figure 9. A model for recognizing emotional expressions from an individual’s movements.

Figure 9.

a. Brain regions involved in recognizing emotions and their connections. LEGEND: IOG=inferior occipital gyrus; FG=fusiform gyrus; TP=temporal pole; STS=superior temporal sulcus; IPS=intraparietal sulcus. b. Side-by-side flowcharts of the classical hierarchical model for recognizing emotional expressions versus an alternative proposal (radically distributed model) that does not rely on a uniquely hierarchical progression of information from lower- to higher-level brain regions. Note: in this newer model the fainter identity box indicates that this element was not present in the original figure in de Gelder & Poyo Solanas (2021), but from the discussion in the manuscript I surmise that this would be the case.

The pathway of de Gelder & Poyo Solanas (2021) has its starting point in the IOG - similar to that of the original suggestion by Haxby and colleagues (2000), their updated model (Gobbini & Haxby, 2007), and that of (Ishai, 2008). In our recent intracranial study of four occipitotemporal ROIs and dynamic changes in gaze direction and emotion (see Figs. 2 and 3), we evaluated likely white matter pathways that might carry information between these regions (Babo-Rebelo et al., 2022). Posterior brain major white matter pathway endpoints were identified (Bullock et al., 2019) from 1066 healthy brains from the Human Connectome Project (Fig. 10a). Then an overlap analysis between these endpoints and active intracerebral sites in the 11 epilepsy surgery patients was performed. From the overlap analysis and field potential latencies we proposed a potential information flow in part of the occipitotemporal cortex (Fig. 10b).
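The core of the overlap analysis described above can be sketched in a few lines (a minimal illustration, not the exact Babo-Rebelo et al. pipeline): given tract endpoint coordinates and active recording-site coordinates in a common (e.g., MNI) space, count the active sites lying within some spatial tolerance of any endpoint of a given tract. The coordinates and the 5 mm radius below are hypothetical values for illustration only.

```python
import numpy as np

def overlap_count(endpoints_mm, sites_mm, radius_mm=5.0):
    """Number of sites within `radius_mm` of at least one tract endpoint."""
    endpoints = np.asarray(endpoints_mm, dtype=float)   # shape (n_endpoints, 3)
    sites = np.asarray(sites_mm, dtype=float)           # shape (n_sites, 3)
    # Pairwise Euclidean distances, sites x endpoints, via broadcasting.
    d = np.linalg.norm(sites[:, None, :] - endpoints[None, :, :], axis=-1)
    # A site 'overlaps' if its nearest endpoint is within the tolerance.
    return int(np.sum(d.min(axis=1) <= radius_mm))

# Hypothetical MNI coordinates for illustration only.
parc_endpoints = [[44, -52, -14], [48, -40, 2]]   # e.g., a pArc termination zone
active_sites = [[46, -50, -12], [10, 20, 30]]     # one nearby site, one distant
print(overlap_count(parc_endpoints, active_sites))  # 1 site overlaps
```

Run per tract and per patient, counts of this kind support inferences about which white matter routes could plausibly carry the recorded signals between regions.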

Figure 10. Putative information flow routes for faces in the posterior brain.

Figure 10.

a. Schematic figure of white matter pathways routing information from visually sensitive brain regions. LEGEND: SLF, superior longitudinal fasciculus; TP-SPL, temporoparietal connection of the superior parietal lobule; Arc, arcuate fasciculus; pArc, posterior arcuate fasciculus; ILF, inferior longitudinal fasciculus; VOF, vertical occipital fasciculus; MdLF-Ang, middle longitudinal fasciculus branch of the angular gyrus; MdLF-SPL, middle longitudinal fasciculus branch of the superior parietal lobule. Meyer’s loop, the optic radiation connecting the lateral geniculate nucleus and occipital lobe, is also included. b. Cartoon of putative routes of information flow relating to faces, focusing mainly on fusiform and superior temporal cortex. This schematic is based on an overlap analysis of white matter tract endpoints (from 1066 healthy subjects) and coordinates of active bipolar sites (from epilepsy patients). All data are in MNI space. Solid lines represent routes with overlap between tract endpoints and active sites. Broken lines show connections with overlap at one tract end only, as seen from the data of Babo-Rebelo et al. (2022). Note: short-range fibers aiding information flow across ventral occipitotemporal cortex were not included in the tract endpoint analysis and are not represented here. LEGEND: IOC: inferior occipital cortex, FC: fusiform cortex, ITC: inferior temporal cortex, STC: superior temporal cortex, IPS: intraparietal sulcus. Tract abbreviations are identical to part a. Parts a and b reproduced with CC BY 4.0 Deed from Babo-Rebelo, M., Puce, A., Bullock, D., Hugueville, L., Pestilli, F., Adam, C., … George, N. (2022). Visual information routes in the posterior dorsal and ventral face network studied with intracranial neurophysiology and white matter tract endpoints. Cereb Cortex, 32(2), 342–366.

Our data-driven model (Fig. 10) is incomplete: it was focused on how information might be routed between pSTS and fusiform cortex. We posit that inferior temporal cortex may act as the mediator between the two, with information transfer via the posterior arcuate fasciculus. Our invasive neurophysiological dataset was limited – sampling cortex implanted for clinical needs. Therefore, other connective links in the face pathway could not be evaluated. That said, we propose that combined neurophysiological and neuroanatomical investigations are one way forward for making sense of the complex network of interconnections that allow processing of the human form, and also for visual function in a more general sense.

Looking beyond the visual pathway scheme and existing models of face and body processing, we also need to consider existing social brain network models. In one 4-network model, the pSTS sits in a mentalizing (theory of mind) network, together with the EBA, FFA, temporo-parietal junction, temporal pole, posterior cingulate cortex (PCC), and parts of dorso-medial prefrontal cortex (dmPFC). The other 3 networks are the amygdala network (amygdala, middle fusiform/inferior temporal cortex, and parts of ventro-medial prefrontal/orbitofrontal cortex (vmPFC/OFC)), the empathy network (parts of the insula and middle cingulate cortex), and the mirror/simulation/action perception network (parts of inferior parietal and inferior frontal cortex) (Stanley & Adolphs, 2013). Another literature review-based model makes the pSTS a central hub for 3 neural social information processing systems: social perception, action observation, and theory of mind (Yang, Rosenblau, Keifer, & Pelphrey, 2015). This latter formulation does not consider the EBA or FFA.

One other important consideration relates to the actual nature of a real-life social interaction. These do not happen in isolation: they involve at least one other individual. For this reason, the need for ‘2-person’ (or dyadic) social neuroscience studies has been emphasized (Hari & Kujala, 2009; Quadflieg & Koldewyn, 2017; Schilbach et al., 2013). Dynamic dyadic stimuli contain information along (at least) three dimensions (i.e., perceptual, action and social), and therefore require multiple control conditions. For example, the perceptual dimension includes interpersonal signals such as mutual smiles or coordinated movement patterns. In the action dimension, actions can be independent or joint (e.g., reading versus discussing), have opposing goals (e.g., collaborating versus competing), or be positive or negative (e.g., kissing versus punching someone). In the social dimension, acquaintance type (e.g., strangers or acquaintances/family), interaction type (e.g., formal, casual or intimate), and its level (e.g., subordinate or dominant) matter (Quadflieg & Koldewyn, 2017).
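A quick way to appreciate why such designs demand so many control conditions is to enumerate the factorial space. The dimension levels below are illustrative examples drawn from the discussion above, not a prescribed taxonomy:

```python
from itertools import product

# Crossing even a few example levels of the action and social dimensions
# yields a large factorial space of stimulus conditions, each of which
# would need matched perceptual controls.
dimensions = {
    "action_relation": ["independent", "joint"],
    "goal": ["collaborative", "competitive"],
    "valence": ["positive", "negative"],
    "acquaintance": ["strangers", "familiar"],
    "interaction_type": ["formal", "casual", "intimate"],
}

conditions = [dict(zip(dimensions, combo))
              for combo in product(*dimensions.values())]
print(len(conditions))  # 2*2*2*2*3 = 48 condition cells
```

Forty-eight cells before any perceptual controls are added, which is one concrete reason dyadic paradigms are so much harder to design and power than single-person ones.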

Just being an observer for a social interaction is also not enough – our science needs to study the neural sequelae of real-life human interactions. Fortunately, now we have portable technology to perform such studies, but data acquisition and analysis methods are not without pitfalls (Hari & Puce, 2023). Additionally, new dynamic stimulus sets with large numbers of exemplars are being generated – including social and non-social interactions, and interactions with objects (e.g., Monfort et al., 2020).

SLURPING FROM THE BRAIN REGION ‘BOWL OF ALPHABET SOUP’

Altogether we now have a large alphabet soup of brain regions, including the pSTS, OFA, FFA, FBA, and EBA – regions known for evaluating social stimuli. Relevant to the social brain models discussed earlier, where the EBA and FBA belong is important (Taubert, Ritchie, Ungerleider, & Baker, 2022). The pSTS has been proposed to be critical for analysis of social scenarios involving multiple individuals (Quadflieg & Koldewyn, 2017), which would site it in a mentalizing network (Stanley & Adolphs, 2013) or as a central hub for social information processing networks (Yang et al., 2015). The EBA is active for viewing multiple people. Its role is to potentially generate “perceptual predictions about compatible body postures and movements between people that result in enhanced processing when these predictions are violated” (Quadflieg & Koldewyn, 2017). Both STS and EBA augment their activation when individuals’ body postures and movements face each other (see Taubert et al., 2022). Notably, when these facing dyads are inverted, reliable behavioral decrements occur relative to their upright counterparts (Papeo & Abassi, 2019). When facing body stimuli are evaluated, effective connectivity increases between EBA and pSTS (Bellot, Abassi, & Papeo, 2021), suggesting that delineating their respective roles will be challenging.

Where does the EBA belong in the visual streams? Some investigators argue that it does not belong in the ventral stream (Zimmermann, Mars, de Lange, Toni, & Verhagen, 2018). Is the EBA heteromodal? This does not seem to be the case. The pSTS, in contrast, has subregions with multimodal capability (Landsiedel & Koldewyn, 2023). It also has a complex anterior-posterior gradient of functionality with considerable overlap between functions (Deen, Koldewyn, Kanwisher, & Saxe, 2015). Gradient complexity does not increase in a simple posterior-anterior direction, because the proximity of the temporo-parietal junction (TPJ) to the pSTS complicates the functionality gradient. The TPJ is active in Theory of Mind tasks such as false belief. Additional heterogeneity of pSTS functionality is evident from multivoxel pattern analysis (MVPA). While MVPA shows similarities in EBA and pSTS function during observation of dyadic interactions, the EBA shows unique functionality in dyadic interaction conditions that the pSTS does not (Walbrin & Koldewyn, 2019). To add to this complex story, human EBA and pSTS function can be doubly dissociated. In a clever visual psychophysics manipulation, fMRI-guided TMS delivered to the EBA disrupts body form discrimination, whereas TMS to the pSTS disrupts body motion discrimination (Vangeneugden, Peelen, Tadin, & Battelli, 2014). There are two additional alphabet soup ingredients: the pSTS and aSTS have been subdivided to have ‘social interaction’ subregions (i.e., pSTS-SI and aSTS-SI). These are subregions that respond to interactions between multiple individuals, but not to affective information per se (McMahon, Bonner, & Isik, 2023) (and see also Walbrin & Koldewyn, 2019).

What about dynamic human interactions with objects? The LOC (lateral occipital complex) activates to human interactions with objects – but primarily represents the object information in the interaction, or possibly the features of the interaction (Baldassano, Beck, & Fei-Fei, 2017), leading Walbrin and Koldewyn (2019) to suggest that the EBA/LOTC may play a role relative to distinct people and objects (or distinct ways in which to use them) (see also Wu, Wang, Wei, He, & Bi, 2020). Here the term LOTC refers to LOC cortex in close proximity to the EBA. We also need to remember that the classic ventral LOC, essential in object identification, also has visuo-haptic properties, so should be considered as a multisensory region (Amedi, Malach, Hendler, Peled, & Zohary, 2001).

HOW WILL A THIRD VISUAL PATHWAY CHANGE THE EXISTING LANDSCAPE IN SOCIAL AND SYSTEMS NEUROSCIENCE?

How does the third visual pathway proposed by Pitcher and Ungerleider (2021) fit into the existing context of the other two, dorsal and ventral, pathways originally proposed by Ungerleider and Mishkin (1982)? It has been proposed to be a pathway for social perception, with component structures strongly activated by human motion/action and social perception. The ventral pathway has always been the stalwart for form processing, and as such includes human (faces, bodies) and non-human (objects, animals) forms. Acquired lesions of this pathway produce various types of visual agnosias. The dorsal pathway is devoted to processing space, and the different types of spatial neglect are likely the most well-known sequelae of its acquired lesions. The dorsal pathway also codes space in various co-ordinate systems relative to eye, head, hand and body co-ordinates.

An expanded third/lateral pathway?

For me at least, it might make sense for the third pathway to have the (multisensory) capability to decode:

  1. other human, animal and object (including tool) motion and action;

  2. other human interactions with other humans (including dyads or groups);

  3. other human interactions with other animate beings (e.g., animals) or the natural environment;

  4. other human interactions with objects (including tools);

  5. self-other interactions (with other humans);

  6. self-interactions with animals;

  7. self-interactions with tools.

With the above expanded features, the third/lateral pathway would preserve some parallelism with the ventral form pathway. It might therefore be regarded as an ‘[inter]action’ pathway, and the proposed social perception pathway of Pitcher and Ungerleider (2021) would be an essential component within that scheme.

In Figure 11, based on the above proposition, I have tried to sort the ingredients of our brain region alphabet soup into 2 basic divisions – assigning putative membership to the ventral pathway or the expanded third pathway. I have not considered the classical dorsal pathway and its putative elements, as this is beyond the scope of this review. Note that I have not included lower-level visual areas V2, V3, V4 here for the sake of simplicity. My purpose here is to start a discussion on what the main tasks of the ventral and lateral/third pathways might be, and who the card-carrying members of each might be. I have based the general distinctions, or biases (Fig. 11), not only on human functional neuroimaging and neurophysiological studies, but also on the sequelae of acquired human lesions. I have focused shamelessly on the human side of the fence, as some of the capabilities I have listed cannot easily be tested in non-human primates. At the very least, shining a spotlight on some human impairments in the clinical literature might stimulate future experimentation in healthy subjects that could help fill out the general classification scheme. I would posit that once there is better consensus on which structures sit in the ventral and third/lateral pathways, it might then be time to tackle the dorsal pathway in a similar exercise to determine pathway members and their standing relative to the other two pathways.

Figure 11. Putative members of the ventral and third/lateral pathways in the human brain?

A schematic of a lateral (top) and partial inferior (bottom) view of a human cerebral hemisphere segregating some selected known functional brain areas into third/lateral or ventral pathways. The dorsal pathway appears as an outline and is not considered further here. The green, red-brick and blue colors identify the 3 pathways for which primary visual cortex (V1) is the departure point. The inferior parietal lobule (IPL) is presented as an outlier and hence does not fit the color scheme. LEGEND: V1=primary visual cortex; LOC=lateral occipital complex; OFA=occipital face area; FFA=fusiform face area; VWFA=visual word form area; FBA=fusiform body area; ATL=anterior temporal lobe; EBA=extrastriate body area; MT/V5=motion-sensitive fifth visual area; MST=human homolog of macaque medial superior temporal area with high-level motion sensitivity; LOTC=lateral occipitotemporal complex; MTG/ITS=middle temporal gyral and inferior temporal sulcal cortex (sensitive to motion of animals and tools); pSTS=posterior superior temporal sulcus; aSTS=anterior superior temporal sulcus; TPJ=temporoparietal junction; IPL=inferior parietal lobule; IFG=inferior frontal gyrus; Amyg=amygdala.

Below I consider in which division of the visual pathways some of the ingredients of our alphabet soup might sit. For the structures appearing in the ventral pathway in Figure 11, a large literature with converging evidence exists from neuropsychological lesion studies with various types of agnosia, epilepsy surgery patients, and neuroimaging studies in healthy subjects, in addition to the monkey studies. With respect to the idea of an expanded lateral (or third) pathway, there are a number of uncertainties that arise from the fact that:

  1. some of the newer category-selective regions have never been definitively seated into the original visual pathway scheme;

  2. the new expanded scheme of the third/lateral pathway (Fig. 11) considers functionality that has not been included in previous pathway classifications.

From the task-related work of Vangeneugden and colleagues (2014) described in the previous section, the EBA would seem to fit best in the ventral stream. That said, resting-state investigations coupled with diffusion-weighted MRI data site it in the dorsal stream, based on effective connectivity measures (Zimmermann et al., 2018). If someone forced my hand on the issue, I would place the EBA in the third/lateral pathway, and not the dorsal stream, based on its task-related response properties. If we regard the third/lateral pathway as an ‘[inter]action’ stream, with social processing as a central component, then the EBA would be a member – because, like parts of the pSTS, it activates to dyadic and multiperson interactions.

In real life we evaluate animate (biological) and also inanimate motion (i.e., from man-made objects such as tools), to which we know that the middle temporal gyral and inferior temporal sulcal cortex (MTG/ITS) have respective sensitivities (Beauchamp et al., 2003; Beauchamp & Martin, 2007). Therefore, I would advocate that brain regions with these response properties would sit in this third/lateral ‘[inter]action’ pathway. That said, however, this raises a lot of questions. What about complex motion deficits in stroke patients, such as the inability to process form from motion, or to recognize motion per se (e.g., discussed by Cowey & Vaina, 2000) – rare cases with posterior brain lesions and very specific deficits? Also, given a three visual pathway scheme, where would first- and second-order motion processing now sit? In the ventral/dorsal visual pathway model, these were proposed to sit in the ventral and dorsal systems, respectively, based on the monkey literature and rare acquired lesions in patients (Vaina & Soloviev, 2004).

Social perception involves evaluating interactions with others, relative to our integrated (multisensory and embodied) self. Even our personal space is delimited by our arm length. The TPJ is an important locus for multisensory integration within the self. Notably, when the functionality of the TPJ is disrupted by focal epileptic seizures, bizarre phenomena such as out-of-body experiences (OBEs) can occur, and complex visual hallucinations involving the self, as well as relative to others, can be experienced (Blanke, 2004; Blanke & Arzy, 2005). Out-of-body-like sensations can be elicited in epilepsy patients with direct cortical stimulation of the TPJ (Blanke, Ortigue, Landis, & Seeck, 2002), or in healthy subjects using TMS (Blanke et al., 2005). The OBE is an extreme example of an autoscopic phenomenon, where different degrees of multisensory disintegration of the visual, proprioceptive, and vestibular senses can take place at the lower level. These lower-level features can also interact with higher-level features such as egocentric visuo-spatial perspective taking and self-location, as well as agency (Blanke & Arzy, 2005). The TPJ is also active in Theory of Mind tasks. Therefore, given that successful interactions with others in the world cannot occur without an intact self, I would situate the TPJ in the [inter]action pathway.

Some outstanding questions

What is the impact of the ‘other’ route from retina to cortex on the visual pathway model?

The three-pathway model’s input in the current formulation is V1 – acknowledging input from the retina via the lateral geniculate route. However, a more rapid, lower-resolution pathway from the retina passes through the pulvinar nucleus and the superior colliculus to ‘extrastriate cortex’. Currently, its exact terminations with respect to the functional areas making up the ventral and third/lateral pathways are not known. Knowing where these projections terminate in individual subjects would not only be important for understanding structural connectivity, but would also have implications for functionality.

Issues related to underlying short- and long-range white matter fiber connections, and relationships between structural and functional connectivity.

Wang and colleagues (2020) performed a heroic study evaluating data from 677 Human Connectome Project (HCP) healthy subjects across 3 dimensions in MRI-based data: structural connectivity (SC), functional connectivity (FC) using resting state (RS) and face localizer task data, and effective connectivity (EC). Their conclusions need to be interpreted with caution, as the included HCP fMRI data consist only of RS and a face localizer task (a 0-back and 2-back working memory task with no social evaluation). Their analyses included 9 face network ROIs/hemisphere, and they estimated short- vs long-range white matter fibers in their SC connectome analysis. More than ~60% of ROI–ROI connections could be regarded as short-range, with the rest being labelled as long-range, i.e., tracts in the white matter atlas. Greater physical distance between any two face ROIs was associated with a greater number of long-range fiber connections.

The early visual cortex (presumably V1?), OFA, FFA, STS, inferior frontal gyrus (IFG) and PCC were observed to form a (functional) 6-region core subnetwork, which was active and synchronized across RS and task-related contexts (Wang et al., 2020). Given the low level of task requirements related to social judgments, perhaps this might correspond to the ‘default mode of social processing’ in implicit tasks that I mentioned earlier (Latinus et al., 2015; Puce et al., 2016)?

Overall, Wang et al. (2020) reported that the organization of the 9-ROI face network was highly homogeneous across their 677 subjects from the point of view of SC, RS FC and task-related FC. These results give a real shot in the arm for data analysis using hyperalignment methods (Haxby et al., 2014; Haxby et al., 2020).

Wang and colleagues (2020) noted 3 structural routes (their ‘pathways’) between structures in the 9-ROI face network: the ventral route, composed of the OFA, FFA and anterior temporal lobe, for processing static face information; the ‘dorsal’ route, consisting of the STS and IFG, for dealing with dynamic information; and the ‘medial’ route, dealing with the ‘social, motivational and emotional importance’ of faces, composed of the PCC, amygdala and OFC. They also noted that there did not seem to be a gateway or clear entry point to either the FFA or the STS. Interestingly, the pattern of effective connectivity varied as a function of hemisphere (see below).

Issues related to laterality.

Pitcher and Ungerleider (2021) noted that their pathway was predominantly right-hemisphere biased. While there is a clear right-sided bias in many studies of dynamic face and body perception, activation is not confined to the right hemisphere: 6–8 year-old children have a clear right-sided pSTS bias for dynamic faces/bodies relative to older children aged 9–12 years and healthy adults, whose activation patterns are more bilateral (Walbrin, Mihai, Landsiedel, & Koldewyn, 2020). In contrast, the left EBA is preferentially engaged in adults viewing interacting (face-to-face) dyads. This activation can be disrupted by stimulus inversion, and the inversion effect can in turn be disrupted by fMRI-guided TMS to the left EBA (Gandolfo et al., 2024). The literature related to category-selectivity and animal motion, tool motion and human-object interactions overall tends to report stronger left occipitotemporal activation – suggesting that these functionalities may show a left hemisphere bias.

I already mentioned the study of Wang et al. (2020) above. Relevant to the hemispheric asymmetry issue, their effective connectivity analysis (PPI) indicated that face subnetworks were present in both hemispheres, but with quite different connectivity patterns. Consistent with the right hemisphere bias for faces, the right-hemisphere pattern had mainly reciprocal (or bidirectional) connections, whereas the left-hemisphere pattern was a predominantly feedforward one. These results were obtained for RS and working memory tasks with faces. Will this connectivity pattern persist for more demanding facial or social judgments?

We seem to be developing a parallelism with the language literature here. For the longest time the very dominant left hemisphere contribution to language was championed, particularly in the neuropsychological sphere. Today, however, while there is an extensive language model with a left-hemisphere bias, we appreciate the unique and important contributions that are made by the right hemisphere (see Hickok, 2009). In the case of faces, “…areas in the right hemisphere were more anatomically connected, more synchronized during rest and more actively communicating with each other during face perception. Furthermore, we found a critical association between the ratio of intra- and interhemispheric connection and the degree of lateralization, which lends support to an older theory that suggests that hemispheric asymmetry arises from interhemispheric conduction delay.” (Wang et al., 2020)

Inclusion of other structures in the [inter]action pathway? The IPL.

I have included the inferior parietal lobule (IPL), which is composed of the angular and supramarginal gyri, in Figure 11, labelling it as a multicolored outlier. I raise some of the relevant issues here – their resolution is well beyond the scope of this manuscript. First, OBEs and related experiences can also occur with angular gyral stimulation, similar to the TPJ (Blanke et al., 2002). Second, Gerstmann syndrome is a complex neuropsychological deficit that classically involves (left) angular gyral lesions and exhibits complex visuo-motor signs (Arbuse, 1947; Gerstmann, 1924). Classically it has a tetrad of signs: finger agnosia (not only of one’s own fingers, but those of others), agraphia (without alexia), acalculia and left-right confusion. The visual agnosia for fingers might be expected to belong in the ventral stream together with its other agnosic cousins. The spatial impairments are more consistent with the dorsal stream, e.g., the left-right confusion, and the acalculia can have a spatial component (where there can be an inability to carry the ‘1’ in addition or subtraction). Finally, there is the agraphia – an inability to write. The signs of Gerstmann syndrome can also be induced with focal cortical stimulation for pre-surgical mapping, and some of the signs, e.g., the finger agnosia and acalculia, can extend into the neighboring supramarginal gyrus (Roux, Boetto, Sacko, Chollet, & Tremoulet, 2003). Third, IPL lesions are well known to produce apraxias, most commonly of the upper limb. Interestingly, right IPL lesions exhibit more extensive apraxias (and agnosias) and can produce visual distortions that can include the arms and even the lower limbs (Nielsen, 1938) – arguably one could consider these as distortions of the self and others.
In general, apraxias can range from difficulties with using physical objects spontaneously or executing verbal commands, to higher-level cognitive aspects where patients cannot form an action plan for how to sequence multi-step events, e.g., toasting a slice of bread and buttering it (for a recent brief review of apraxias involving the upper limb see Heilman, 2021).

In sum, the IPL’s complex functionality makes it difficult to situate it completely into either the lateral/third or the dorsal pathway. Perhaps it may turn out to be an important gateway between the two?

Other structures? Amygdala, insular cortex, IFG.

The amygdala, with its key role in responding to socially salient stimuli (including fear and social attention) in the environment, would argue for its inclusion in the three-pathway scheme. What is not clear, however, is how the amygdala conveys information to the structures in the ventral and third/lateral pathways. This will be complicated, because the amygdala is a complex of 9 nuclei – which can be loosely split into centromedial and basolateral groupings, with respective roles related to autonomic function and processing visual salience. There are abundant interconnections between the two nuclear groupings. Given that short-latency field potentials can be recorded to emotional visual stimuli in the human amygdala (Huijgen et al., 2015), one would expect a direct pathway of visual information to the amygdala. Indeed, a direct human superior colliculus–pulvinar–amygdala pathway has recently been demonstrated, dealing predominantly with visual and auditory information related to negative affect, but not with positive images or noxious stimuli (Kragel et al., 2021).

Among its many functions (Uddin et al., 2017), the insular cortex is important for visceral sensation and interoception, as well as for processing information related to affect (e.g., related to different forms of disgust), social cognition (e.g., Fig. 4) and empathy. It also houses secondary somatosensory cortex and gustatory cortex. Given that so many of these functions are important for the self, it would seem relevant to include parts of this cortex in the three-pathway scheme.

The inferior frontal and posterior superior temporal gyri (IFG and pSTG) are important for their roles in communication with others. The pars opercularis and pars triangularis of the IFG are well known colloquially as Broca’s area, with the pSTG and part of the angular gyrus forming Wernicke’s area. These areas are critical for verbal communication with others, in terms of understanding incoming speech and also producing one’s own coherent and appropriate verbal output. In addition, the pars opercularis and pars triangularis of the IFG, together with the IPL, are also part of the human mirror/action perception network (Bonini, Rotunno, Arcuri, & Gallese, 2022) – important for representing the actions of the self and others, and for imitation learning.

CONCLUDING REMARKS

Perhaps we should be flipping our approach and using dynamic stimuli as the default for future studies? Isolated and static visual stimuli were traditionally used for technical ease and because of limitations in technology – constraints that no longer apply. We know that static face and body stimuli elicit relatively meager brain activation, in the third/lateral pathway in particular. Is our situation similar to that of neurophysiologists who performed studies in anesthetized primates in the 1960s–70s? Testing in awake, behaving animals years later revealed many additional response properties of brain regions, as neural responses were no longer obliterated by anesthesia. Further, it took a long time to acknowledge how complex (multisensory) influences could affect activity in ‘primary’ sensory regions (see Ghazanfar & Schroeder, 2006).

Dynamic stimuli depicting naturalistic social interactions are now demonstrating unique functionality within brain regions responsive to dynamic faces and bodies (e.g., McMahon et al., 2023). Future studies like these will be needed to distinguish between the subtle flavors of the ingredients of our alphabet soup. However, this will likely require dogged experimentation across multiple experimental sessions in the same subjects, as the set of localizer tasks alone will be daunting. Further, multimodal studies using functional MRI targets in MRI-guided TMS, or focused ultrasound (FUS), might also clarify unique areal specializations. These TMS/FUS stimulation studies could also target neural sources identified in combined MEG/EEG investigations. Studies of long-range and short-range white matter connections will also be important to try to clearly identify the structural ‘bones’ on which our functional ‘muscles’ sit; if possible, these could be combined with analyses of functional data (e.g., Wang et al., 2020), either fMRI or source-space MEG/EEG data.

As already discussed, the artificial separation of faces from bodies in experimental studies has been problematic (see Taubert et al., 2022). Given that the brain’s biological motion processing systems recognize the entire living organism (Giese & Poggio, 2003), this division seems ecologically invalid. While prosopagnosia is primarily thought of as an impairment of face recognition, it can take forms where the core complaint is about not being able to recognize people or extract their identities (i.e., names) from memory (Meadows, 1974). Traditional neuropsychological tests were based on recognizing faces (e.g., the Benton Facial Recognition Test (Benton & Van Allen, 1968)), but omitted routine testing for potential impairments with bodies. Recent studies of patients with EBA lesions, at least, are starting to explore these questions (see discussion in Taubert et al., 2022).

I expect pushback for my attempts to re-assess the composition of structures in the visual pathways and add functionality into a third pathway that would deal with ‘[inter]action’. My choices for region membership in a particular visual pathway might be controversial. That said, I do this to start a conversation about whether the pathways we have are appropriate and complete, and if not, what needs to be changed, and what is missing? Ultimately, we need to think about interconnections between the (three) parallel visual pathways. This discussion might force us to clarify exactly what kind of information is exchanged between regions and between pathways. At that point we might be in a better position to generate a fully integrated model of general visual function, and perhaps also social brain function. To fully achieve this, we will need to bring in research on human white matter pathways and not focus only on functional neuroanatomy (see Wang et al., 2020).

Overall, it is my belief that we are truly living in an exciting time for doing experiments in systems and social neuroscience, and I am very optimistic about the future. Indeed, if Leslie was still with us today, I am sure that she would smile her enigmatic smile and emphatically tell us that there is still an awful lot of work to do…

ACKNOWLEDGEMENT

I sincerely thank the two anonymous Reviewers who provided discerning and very thoughtful and probing questions, as well as general excellent constructive feedback on an earlier version of this manuscript.

The field potential data from Allison & Puce (1999) were acquired under research grant NIMH R01 MH-05286 (Localization of function in the human brain. [1996–2001] PI: G McCarthy, Co-Is: T Allison, A Puce, A Adrignolo.) Dr. Greg McCarthy has received permission from the Yale University Institutional Review Board for the inclusion of these data from a de-identified epilepsy surgery patient in this manuscript.

Aina Puce is supported by NIBIB (USA) grant R01 EB030896. She acknowledges the generous support of Eleanor Cox Riggs and the College of the Arts and Sciences at Indiana University.

REFERENCES

  1. Adolphs R, Tranel D, Damasio H, & Damasio A (1994). Impaired recognition of emotion in facial expressions following bilateral damage to the human amygdala. Nature, 372(6507), 669–672.
  2. Akiyama T, Kato M, Muramatsu T, Saito F, Nakachi R, & Kashima H (2006). A deficit in discriminating gaze direction in a case with right superior temporal gyrus lesion. Neuropsychologia, 44(2), 161–170. doi:10.1016/j.neuropsychologia.2005.05.018
  3. Akiyama T, Kato M, Muramatsu T, Saito F, Umeda S, & Kashima H (2006). Gaze but not arrows: a dissociative impairment after right superior temporal gyrus damage. Neuropsychologia, 44(10), 1804–1810. doi:10.1016/j.neuropsychologia.2006.03.007
  4. Al E, Iliopoulos F, Forschack N, Nierhaus T, Grund M, Motyka P, … Villringer A (2020). Heart-brain interactions shape somatosensory perception and evoked potentials. Proc Natl Acad Sci U S A, 117(19), 10575–10584. doi:10.1073/pnas.1915629117
  5. Allison T, Puce A, & McCarthy G (2000). Social perception from visual cues: role of the STS region. Trends Cogn Sci, 4(7), 267–278.
  6. Allison T, Puce A, Spencer DD, & McCarthy G (1999). Electrophysiological studies of human face perception. I: Potentials generated in occipitotemporal cortex by face and non-face stimuli. Cereb Cortex, 9(5), 415–430.
  7. Ambady N, & Freeman JB (2011). A dynamic interactive theory of person construal. Psychological Review, 118(2), 247–279.
  8. Ambady N, & Rosenthal R (1992). Thin slices of expressive behavior as predictors of interpersonal consequences: A meta-analysis. Psychological Bulletin, 111(2), 256–274.
  9. Amedi A, Malach R, Hendler T, Peled S, & Zohary E (2001). Visuo-haptic object-related activation in the ventral visual pathway. Nat Neurosci, 4(3), 324–330.
  10. Arbuse DI (1947). The Gerstmann syndrome; case report and review of the literature. J Nerv Ment Dis, 105(4), 359–371. Retrieved from https://www.ncbi.nlm.nih.gov/pubmed/20291825
  11. Babo-Rebelo M, Puce A, Bullock D, Hugueville L, Pestilli F, Adam C, … George N (2022). Visual Information Routes in the Posterior Dorsal and Ventral Face Network Studied with Intracranial Neurophysiology and White Matter Tract Endpoints. Cereb Cortex, 32(2), 342–366. doi:10.1093/cercor/bhab212
  12. Baldassano C, Beck DM, & Fei-Fei L (2017). Human-Object Interactions Are More than the Sum of Their Parts. Cereb Cortex, 27(3), 2276–2288. doi:10.1093/cercor/bhw077
  13. Bao P, She L, McGill M, & Tsao DY (2020). A map of object space in primate inferotemporal cortex. Nature, 583(7814), 103–108. doi:10.1038/s41586-020-2350-5
  14. Baron-Cohen S, Wheelwright S, Hill J, Raste Y, & Plumb I (2001). The “Reading the Mind in the Eyes” Test revised version: a study with normal adults, and adults with Asperger syndrome or high-functioning autism. J Child Psychol Psychiatry, 42(2), 241–251.
  15. Bauer RM (1982). Visual hypoemotionality as a symptom of visual-limbic disconnection in man. Arch Neurol, 39(11), 702–708. doi:10.1001/archneur.1982.00510230028009
  16. Bauer RM (1984). Autonomic recognition of names and faces in prosopagnosia: a neuropsychological application of the Guilty Knowledge Test. Neuropsychologia, 22(4), 457–469. doi:10.1016/0028-3932(84)90040-x
  17. Beauchamp MS, Lee KE, Haxby JV, & Martin A (2003). FMRI responses to video and point-light displays of moving humans and manipulable objects. J Cogn Neurosci, 15(7), 991–1001. doi:10.1162/089892903770007380
  18. Beauchamp MS, & Martin A (2007). Grounding object concepts in perception and action: evidence from fMRI studies of tools. Cortex, 43(3), 461–468. doi:10.1016/s0010-9452(08)70470-2
  19. Belin P, Bestelmeyer PE, Latinus M, & Watson R (2011). Understanding voice perception. Br J Psychol, 102(4), 711–725. doi:10.1111/j.2044-8295.2011.02041.x
  20. Belin P, Fecteau S, & Bedard C (2004). Thinking the voice: neural correlates of voice perception. Trends Cogn Sci, 8(3), 129–135. Retrieved from http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=15301753
  21. Bellot E, Abassi E, & Papeo L (2021). Moving Toward versus Away from Another: How Body Motion Direction Changes the Representation of Bodies and Actions in the Visual Cortex. Cereb Cortex, 31(5), 2670–2685. doi:10.1093/cercor/bhaa382
  22. Benton AL, & Van Allen MW (1968). Impairment in facial recognition in patients with cerebral disease. Transactions of the American Neurological Association, 93, 38–42.
  23. Bernstein M, & Yovel G (2015). Two neural pathways of face processing: A critical evaluation of current models. Neurosci Biobehav Rev, 55, 536–546. doi:10.1016/j.neubiorev.2015.06.010
  24. Blais C, Roy C, Fiset D, Arguin M, & Gosselin F (2012). The eyes are not the window to basic emotions. Neuropsychologia, 50(12), 2830–2838. doi:10.1016/j.neuropsychologia.2012.08.010
  25. Blanke O (2004). Out of body experiences and their neural basis. BMJ, 329(7480), 1414–1415. doi:10.1136/bmj.329.7480.1414
  26. Blanke O, & Arzy S (2005). The out-of-body experience: disturbed self-processing at the temporo-parietal junction. Neuroscientist, 11(1), 16–24. doi:10.1177/1073858404270885
  27. Blanke O, Mohr C, Michel CM, Pascual-Leone A, Brugger P, Seeck M, … Thut G (2005). Linking out-of-body experience and self processing to mental own-body imagery at the temporoparietal junction. J Neurosci, 25(3), 550–557. doi:25/3/550 [pii] 10.1523/JNEUROSCI.2612-04.2005 [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Blanke O, Ortigue S, Landis T, & Seeck M (2002). Stimulating illusory own-body perceptions. Nature, 419(6904), 269–270. doi: 10.1038/419269a419269a [pii] [DOI] [PubMed] [Google Scholar]
  29. Bonini L, Rotunno C, Arcuri E, & Gallese V (2022). Mirror neurons 30 years later: implications and applications. Trends Cogn Sci, 26(9), 767–781. doi: 10.1016/j.tics.2022.06.003
  30. Boucher O, Rouleau I, Lassonde M, Lepore F, Bouthillier A, & Nguyen DK (2015). Social information processing following resection of the insular cortex. Neuropsychologia, 71, 1–10. doi: 10.1016/j.neuropsychologia.2015.03.008
  31. Brooks JA, & Freeman JB (2019). Neuroimaging of person perception: A social-visual interface. Neurosci Lett, 693, 40–43. doi: 10.1016/j.neulet.2017.12.046
  32. Bruce V, & Young A (1986). Understanding face recognition. Br J Psychol, 77(Pt 3), 305–327. doi: 10.1111/j.2044-8295.1986.tb02199.x
  33. Bullock D, Takemura H, Caiafa CF, Kitchell L, McPherson B, Caron B, & Pestilli F (2019). Associative white matter connecting the dorsal and ventral posterior human cortex. Brain Struct Funct. doi: 10.1007/s00429-019-01907-8
  34. Bush JC, & Kennedy DP (2016). Aberrant social attention and its underlying neural correlates in adults with Autism Spectrum Disorder. In Puce A & Bertenthal BI (Eds.), The Many Faces of Social Attention: Behavioral and Neural Measures (pp. 179–220). Cham, Switzerland: Springer.
  35. Campbell R (2008). The processing of audio-visual speech: empirical and neural bases. Philos Trans R Soc Lond B Biol Sci, 363(1493), 1001–1010. doi: 10.1098/rstb.2007.2155
  36. Campbell R, MacSweeney M, Surguladze S, Calvert G, McGuire P, Suckling J, … David AS (2001). Cortical substrates for the perception of face actions: an fMRI study of the specificity of activation for seen speech and for meaningless lower-face acts (gurning). Brain Res Cogn Brain Res, 12(2), 233–243.
  37. Campbell R, Zihl J, Massaro D, Munhall K, & Cohen MM (1997). Speechreading in the akinetopsic patient, L.M. Brain, 120(Pt 10), 1793–1803. Retrieved from http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=9365371
  38. Carrick OK, Thompson JC, Epling JA, & Puce A (2007). It’s all in the eyes: neural responses to socially significant gaze shifts. Neuroreport, 18(8), 763–766. Retrieved from http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=17471062
  39. Caruana F, Cantalupo G, Lo Russo G, Mai R, Sartori I, & Avanzini P (2014). Human cortical activity evoked by gaze shift observation: an intracranial EEG study. Hum Brain Mapp, 35(4), 1515–1528. doi: 10.1002/hbm.22270
  40. Cowey A, & Vaina LM (2000). Blindness to form from motion despite intact static form perception and motion detection. Neuropsychologia, 38(5), 566–578.
  41. Critchley HD, & Garfinkel SN (2017). Interoception and emotion. Curr Opin Psychol, 17, 7–14. doi: 10.1016/j.copsyc.2017.04.020
  42. Dalmaso M, Castelli L, & Galfano G (2020). Social modulators of gaze-mediated orienting of attention: A review. Psychon Bull Rev, 27(5), 833–855. doi: 10.3758/s13423-020-01730-x
  43. daSilva EB, Crager K, Geisler D, Newbern P, Orem B, & Puce A (2016). Something to sink your teeth into: The presence of teeth augments ERPs to mouth expressions. Neuroimage, 127, 227–241. doi: 10.1016/j.neuroimage.2015.12.020
  44. daSilva EB, Crager K, & Puce A (2016). On dissociating the neural time course of the processing of positive emotions. Neuropsychologia, 83, 123–137. doi: 10.1016/j.neuropsychologia.2015.12.001
  45. de Gelder B (2006). Towards the neurobiology of emotional body language. Nat Rev Neurosci, 7(3), 242–249. doi: 10.1038/nrn1872
  46. de Gelder B, & Poyo Solanas M (2021). A computational neuroethology perspective on body and expression perception. Trends Cogn Sci, 25(9), 744–756. doi: 10.1016/j.tics.2021.05.010
  47. de Haas B, Schwarzkopf DS, Alvarez I, Lawson RP, Henriksson L, Kriegeskorte N, & Rees G (2016). Perception and Processing of Faces in the Human Brain Is Tuned to Typical Feature Locations. J Neurosci, 36(36), 9289–9302. doi: 10.1523/JNEUROSCI.4131-14.2016
  48. Deen B, Koldewyn K, Kanwisher N, & Saxe R (2015). Functional Organization of Social Perception and Cognition in the Superior Temporal Sulcus. Cereb Cortex, 25(11), 4596–4609. doi: 10.1093/cercor/bhv111
  49. Downing PE, Jiang Y, Shuman M, & Kanwisher N (2001). A cortical area selective for visual processing of the human body. Science, 293(5539), 2470–2473.
  50. Downing PE, & Peelen MV (2016). Body selectivity in occipitotemporal cortex: Causal evidence. Neuropsychologia, 83, 138–148. doi: 10.1016/j.neuropsychologia.2015.05.033
  51. Ekman P, & Matsumoto D (1993). Japanese and Caucasian Facial Expressions of Emotion (JACFEE) and Japanese and Caucasian Neutral Faces (JACNeuF) CD.
  52. Ethofer T, Gschwind M, & Vuilleumier P (2011). Processing social aspects of human gaze: a combined fMRI-DTI study. Neuroimage, 55(1), 411–419. doi: 10.1016/j.neuroimage.2010.11.033
  53. Felleman DJ, & Van Essen DC (1991). Distributed hierarchical processing in the primate cerebral cortex. Cereb Cortex, 1(1), 1–47.
  54. Frank SM, & Greenlee MW (2018). The parieto-insular vestibular cortex in humans: more than a single area? J Neurophysiol, 120(3), 1438–1450. doi: 10.1152/jn.00907.2017
  55. Freeman JB, & Johnson KL (2016). More Than Meets the Eye: Split-Second Social Perception. Trends Cogn Sci, 20(5), 362–374. doi: 10.1016/j.tics.2016.03.003
  56. Freeman JB, Rule NO, & Ambady N (2009). The cultural neuroscience of person perception. Prog Brain Res, 178, 191–201. doi: 10.1016/S0079-6123(09)17813-5
  57. Freud E, Behrmann M, & Snow JC (2020). What Does Dorsal Cortex Contribute to Perception? Open Mind (Camb), 4, 40–56. doi: 10.1162/opmi_a_00033
  58. Freud E, Plaut DC, & Behrmann M (2016). ‘What’ Is Happening in the Dorsal Visual Pathway. Trends Cogn Sci, 20(10), 773–784. doi: 10.1016/j.tics.2016.08.003
  59. Gandolfo M, Abassi E, Balgova E, Downing PE, Papeo L, & Koldewyn K (2024). Converging evidence that left extrastriate body area supports visual sensitivity to social interactions. Curr Biol, 34(2), 343–351 e345. doi: 10.1016/j.cub.2023.12.009
  60. Gerstmann J (1924). Fingeragnosie: Eine umschriebene Störung am eigenen Körper. Wiener klinische Wochenschrift, 37, 1010–1012.
  61. Ghazanfar AA, & Schroeder CE (2006). Is neocortex essentially multisensory? Trends Cogn Sci, 10(6), 278–285. doi: 10.1016/j.tics.2006.04.008
  62. Ghazanfar AA, & Takahashi DY (2014). Facial expressions and the evolution of the speech rhythm. J Cogn Neurosci, 26(6), 1196–1207. doi: 10.1162/jocn_a_00575
  63. Giese MA, & Poggio T (2003). Neural mechanisms for the recognition of biological movements. Nat Rev Neurosci, 4(3), 179–192. Retrieved from http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=12612631
  64. Gobbini MI, & Haxby JV (2007). Neural systems for recognition of familiar faces. Neuropsychologia, 45(1), 32–41. doi: 10.1016/j.neuropsychologia.2006.04.015
  65. Gogolla N (2017). The insular cortex. Curr Biol, 27(12), R580–R586. doi: 10.1016/j.cub.2017.05.010
  66. Goodale MA, & Milner AD (1992). Separate visual pathways for perception and action. Trends Neurosci, 15(1), 20–25. doi: 10.1016/0166-2236(92)90344-8
  67. Gosselin F, Spezio ML, Tranel D, & Adolphs R (2011). Asymmetrical use of eye information from faces following unilateral amygdala damage. Soc Cogn Affect Neurosci, 6(3), 330–337. doi: 10.1093/scan/nsq040
  68. Grill-Spector K, Weiner KS, Kay K, & Gomez J (2017). The Functional Neuroanatomy of Human Face Perception. Annu Rev Vis Sci, 3, 167–196. doi: 10.1146/annurev-vision-102016-061214
  69. Gross CG (2002). Genealogy of the “grandmother cell”. Neuroscientist, 8(5), 512–518. Retrieved from http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=12374433
  70. Hardee JE, Thompson JC, & Puce A (2008). The left amygdala knows fear: laterality in the amygdala response to fearful eyes. Soc Cogn Affect Neurosci, 3(1), 47–54. doi: 10.1093/scan/nsn001
  71. Hari R, & Kujala MV (2009). Brain basis of human social interaction: from concepts to brain imaging. Physiol Rev, 89(2), 453–479. doi: 10.1152/physrev.00041.2007
  72. Hari R, & Puce A (2023). MEG-EEG Primer (2nd ed.). New York, NY: Oxford University Press.
  73. Haxby JV, Connolly AC, & Guntupalli JS (2014). Decoding neural representational spaces using multivariate pattern analysis. Annu Rev Neurosci, 37, 435–456. doi: 10.1146/annurev-neuro-062012-170325
  74. Haxby JV, Gobbini MI, Furey ML, Ishai A, Schouten JL, & Pietrini P (2001). Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 293(5539), 2425–2430. Retrieved from http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=11577229
  75. Haxby JV, Guntupalli JS, Nastase SA, & Feilong M (2020). Hyperalignment: Modeling shared information encoded in idiosyncratic cortical topographies. Elife, 9. doi: 10.7554/eLife.56601
  76. Haxby JV, Hoffman EA, & Gobbini MI (2000). The distributed human neural system for face perception. Trends Cogn Sci, 4(6), 223–233.
  77. Heilman KM (2021). Upper Limb Apraxia. Continuum (Minneap Minn), 27(6), 1602–1623. doi: 10.1212/CON.0000000000001014
  78. Herald SB, Yang H, & Duchaine B (2023). Contralateral Biases in Category-selective Areas Are Stronger in the Left Hemisphere than the Right Hemisphere. J Cogn Neurosci, 35(7), 1154–1168. doi: 10.1162/jocn_a_01995
  79. Hickok G (2009). The functional neuroanatomy of language. Phys Life Rev, 6(3), 121–143. doi: 10.1016/j.plrev.2009.06.001
  80. Huijgen J, Dinkelacker V, Lachat F, Yahia-Cherif L, El Karoui I, Lemarechal JD, … George N (2015). Amygdala processing of social cues from faces: an intracerebral EEG study. Soc Cogn Affect Neurosci, 10(11), 1568–1576. doi: 10.1093/scan/nsv048
  81. Ishai A (2008). Let’s face it: it’s a cortical network. Neuroimage, 40(2), 415–419. doi: 10.1016/j.neuroimage.2007.10.040
  82. Jasper H (1958). The ten-twenty electrode system of the International Federation. Electroencephalography and Clinical Neurophysiology, 10, 371–375.
  83. John ER (1975). Konorski’s concept of gnostic areas and units: Some electrophysiological considerations. Acta Neurobiologiae Experimentalis, 35(5–6), 417–429.
  84. Kamachi M, Bruce V, Mukaida S, Gyoba J, Yoshikawa S, & Akamatsu S (2013). Dynamic properties influence the perception of facial expressions. Perception, 42(11), 1266–1278. doi: 10.1068/p3131n
  85. Kern M, Aertsen A, Schulze-Bonhage A, & Ball T (2013). Heart cycle-related effects on event-related potentials, spectral power changes, and connectivity patterns in the human ECoG. Neuroimage, 81, 178–190. doi: 10.1016/j.neuroimage.2013.05.042
  86. Kobayashi H, & Kohshima S (1997). Unique morphology of the human eye. Nature, 387(6635), 767–768. doi: 10.1038/42842
  87. Konorski J (1967). Integrative activity of the brain; an interdisciplinary approach. Chicago, IL, USA: University of Chicago Press.
  88. Korolkova OA (2018). The role of temporal inversion in the perception of realistic and morphed dynamic transitions between facial expressions. Vision Res, 143, 42–51. doi: 10.1016/j.visres.2017.10.007
  89. Kragel PA, Ceko M, Theriault J, Chen D, Satpute AB, Wald LW, … Wager TD (2021). A human colliculus-pulvinar-amygdala pathway encodes negative emotion. Neuron, 109(15), 2404–2412 e2405. doi: 10.1016/j.neuron.2021.06.001
  90. Landsiedel J, & Koldewyn K (2023). Auditory dyadic interactions through the “eye” of the social brain: How visual is the posterior STS interaction region? Imaging Neurosci (Camb), 1, 1–20. doi: 10.1162/imag_a_00003
  91. Latinus M, Love SA, Rossi A, Parada FJ, Huang L, Conty L, … Puce A (2015). Social decisions affect neural activity to perceived dynamic gaze. Soc Cogn Affect Neurosci, 10(11), 1557–1567. doi: 10.1093/scan/nsv049
  92. Li W, & Keil A (2023). Sensing fear: fast and precise threat evaluation in human sensory cortex. Trends Cogn Sci, 27(4), 341–352. doi: 10.1016/j.tics.2023.01.001
  93. Lingnau A, & Downing PE (2015). The lateral occipitotemporal cortex in action. Trends Cogn Sci, 19(5), 268–277. doi: 10.1016/j.tics.2015.03.006
  94. Manea MM, Comsa M, Minca A, Dragos D, & Popa C (2015). Brain-heart axis--Review Article. J Med Life, 8(3), 266–271. Retrieved from https://www.ncbi.nlm.nih.gov/pubmed/26351525
  95. Mayer EA, Nance K, & Chen S (2022). The Gut-Brain Axis. Annu Rev Med, 73, 439–453. doi: 10.1146/annurev-med-042320-014032
  96. McCarthy G, Puce A, Belger A, & Allison T (1999). Electrophysiological studies of human face perception. II: Response properties of face-specific potentials generated in occipitotemporal cortex. Cereb Cortex, 9(5), 431–444.
  97. McKone E, & Yovel G (2009). Why does picture-plane inversion sometimes dissociate perception of features and spacing in faces, and sometimes not? Toward a new theory of holistic processing. Psychon Bull Rev, 16(5), 778–797. doi: 10.3758/PBR.16.5.778
  98. McMahon E, Bonner MF, & Isik L (2023). Hierarchical organization of social action features along the lateral visual pathway. Curr Biol, 33(23), 5035–5047 e5038. doi: 10.1016/j.cub.2023.10.015
  99. Meadows JC (1974). The anatomical basis of prosopagnosia. J Neurol Neurosurg Psychiatry, 37(5), 489–501. Retrieved from http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=4209556
  100. Miki K, & Kakigi R (2014). Magnetoencephalographic study on facial movements. Front Hum Neurosci, 8, 550. doi: 10.3389/fnhum.2014.00550
  101. Moeller S, Crapse T, Chang L, & Tsao DY (2017). The effect of face patch microstimulation on perception of faces and objects. Nat Neurosci, 20(5), 743–752. doi: 10.1038/nn.4527
  102. Moeller S, Freiwald WA, & Tsao DY (2008). Patches with links: a unified system for processing faces in the macaque temporal lobe. Science, 320(5881), 1355–1359. doi: 10.1126/science.1157436
  103. Monfort M, Andonian A, Zhou B, Ramakrishnan K, Bargal SA, Yan T, … Oliva A (2020). Moments in Time Dataset: One Million Videos for Event Understanding. IEEE Trans Pattern Anal Mach Intell, 42(2), 502–508. doi: 10.1109/TPAMI.2019.2901464
  104. Morgenroth E, Vilaclara L, Muszynski M, Gaviria J, Vuilleumier P, & Van De Ville D (2023). Probing neurodynamics of experienced emotions-a Hitchhiker’s guide to film fMRI. Soc Cogn Affect Neurosci, 18(1). doi: 10.1093/scan/nsad063
  105. Mori S, Oishi K, & Faria AV (2009). White matter atlases based on diffusion tensor imaging. Curr Opin Neurol, 22(4), 362–369. doi: 10.1097/WCO.0b013e32832d954b
  106. Müri RM (2016). Cortical control of facial expression. J Comp Neurol, 524(8), 1578–1585. doi: 10.1002/cne.23908
  107. Nielsen M (1938). Gerstmann Syndrome: finger agnosia, agraphia, confusion of right and left and acalculia. Comparison of this syndrome with disturbance of body scheme resulting from lesions of the right side of the brain. Archives of Neurology and Psychiatry, 39(3), 536–560.
  108. O’Toole AJ, Roark DA, & Abdi H (2002). Recognizing moving faces: a psychological and neural synthesis. Trends Cogn Sci, 6(6), 261–266. doi: 10.1016/s1364-6613(02)01908-3
  109. Pallett PM, & Meng M (2015). Inversion effects reveal dissociations in facial expression of emotion, gender, and object processing. Front Psychol, 6, 1029. doi: 10.3389/fpsyg.2015.01029
  110. Papeo L, & Abassi E (2019). Seeing social events: The visual specialization for dyadic human-human interactions. J Exp Psychol Hum Percept Perform, 45(7), 877–888. doi: 10.1037/xhp0000646
  111. Peelen MV, & Downing PE (2004). Selectivity for the human body in the fusiform gyrus. J Neurophysiol, 93(1), 603–608. Retrieved from http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=15295012
  112. Peelen MV, & Downing PE (2007). The neural basis of visual body perception. Nat Rev Neurosci, 8(8), 636–648. doi: 10.1038/nrn2195
  113. Perdikis D, Volhard J, Muller V, Kaulard K, Brick TR, Wallraven C, & Lindenberger U (2017). Brain synchronization during perception of facial emotional expressions with natural and unnatural dynamics. PLoS One, 12(7), e0181225. doi: 10.1371/journal.pone.0181225
  114. Pitcher D, & Ungerleider LG (2021). Evidence for a Third Visual Pathway Specialized for Social Perception. Trends Cogn Sci, 25(2), 100–110. doi: 10.1016/j.tics.2020.11.006
  115. Pourtois G, Peelen MV, Spinelli L, Seeck M, & Vuilleumier P (2007). Direct intracranial recording of body-selective responses in human extrastriate visual cortex. Neuropsychologia, 45(11), 2621–2625. doi: 10.1016/j.neuropsychologia.2007.04.005
  116. Puce A, & Allison T (1999). Differential processing of mobile and static faces by temporal cortex (OHBM Annual Meeting abstract). Neuroimage, 9(6), S801.
  117. Puce A, Allison T, Bentin S, Gore JC, & McCarthy G (1998). Temporal cortex activation in humans viewing eye and mouth movements. J Neurosci, 18(6), 2188–2199.
  118. Puce A, Allison T, & McCarthy G (1999). Electrophysiological studies of human face perception. III: Effects of top-down processing on face-specific potentials. Cereb Cortex, 9(5), 445–458. Retrieved from http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=10450890
  119. Puce A, Latinus M, Rossi A, daSilva E, Parada FJ, Love S, … Jayaraman S (2016). Neural bases for social attention in healthy humans. In Puce A & Bertenthal BI (Eds.), The Many Faces of Social Attention: Behavioral and Neural Measures (pp. 93–128). Cham, Switzerland: Springer.
  120. Puce A, & Perrett D (2003). Electrophysiology and brain imaging of biological motion. Philos Trans R Soc Lond B Biol Sci, 358(1431), 435–445. Retrieved from http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=12689371
  121. Puce A, Smith A, & Allison T (2000). ERPs evoked by viewing facial movements. Cogn Neuropsychol, 17, 221–239.
  122. Quadflieg S, & Koldewyn K (2017). The neuroscience of people watching: how the human brain makes sense of other people’s encounters. Ann N Y Acad Sci, 1396(1), 166–182. doi: 10.1111/nyas.13331
  123. Rhodes G, Brake S, & Atkinson AP (1993). What’s lost in inverted faces? Cognition, 47(1), 25–57. doi: 10.1016/0010-0277(93)90061-y
  124. Rossi A, Parada FJ, Kolchinsky A, & Puce A (2014). Neural correlates of apparent motion perception of impoverished facial stimuli: a comparison of ERP and ERSP activity. Neuroimage, 98, 442–459. doi: 10.1016/j.neuroimage.2014.04.029
  125. Rossi A, Parada FJ, Latinus M, & Puce A (2015). Photographic but not line-drawn faces show early perceptual neural sensitivity to eye gaze direction. Front Hum Neurosci, 9, 185. doi: 10.3389/fnhum.2015.00185
  126. Rossion B (2009). Distinguishing the cause and consequence of face inversion: the perceptual field hypothesis. Acta Psychol (Amst), 132(3), 300–312. doi: 10.1016/j.actpsy.2009.08.002
  127. Roux FE, Boetto S, Sacko O, Chollet F, & Tremoulet M (2003). Writing, calculating, and finger recognition in the region of the angular gyrus: a cortical stimulation study of Gerstmann syndrome. J Neurosurg, 99(4), 716–727. doi: 10.3171/jns.2003.99.4.0716
  128. Saarimäki H (2021). Naturalistic Stimuli in Affective Neuroimaging: A Review. Front Hum Neurosci, 15, 675068. doi: 10.3389/fnhum.2021.675068
  129. Samuelsson JG, Sundaram P, Khan S, Sereno MI, & Hämäläinen MS (2020). Detectability of cerebellar activity with magnetoencephalography and electroencephalography. Hum Brain Mapp, 41(9), 2357–2372. doi: 10.1002/hbm.24951
  130. Schilbach L, Timmermans B, Reddy V, Costall A, Bente G, Schlicht T, & Vogeley K (2013). Toward a second-person neuroscience. Behav Brain Sci, 36(4), 393–414. doi: 10.1017/S0140525X12000660
  131. Schmahmann JD (2019). The cerebellum and cognition. Neurosci Lett, 688, 62–75. doi: 10.1016/j.neulet.2018.07.005
  132. Silson EH, Groen II, Kravitz DJ, & Baker CI (2016). Evaluating the correspondence between face-, scene-, and object-selectivity and retinotopic organization within lateral occipitotemporal cortex. J Vis, 16(6), 14. doi: 10.1167/16.6.14
  133. Simonyan K, Saad ZS, Loucks TM, Poletto CJ, & Ludlow CL (2007). Functional neuroanatomy of human voluntary cough and sniff production. Neuroimage, 37(2), 401–409. doi: 10.1016/j.neuroimage.2007.05.021
  134. Smith FW, Muckli L, Brennan D, Pernet C, Smith ML, Belin P, … Schyns PG (2008). Classification images reveal the information sensitivity of brain voxels in fMRI. Neuroimage, 40(4), 1643–1654. doi: 10.1016/j.neuroimage.2008.01.029
  135. Smith R, & Lane RD (2016). Unconscious emotion: A cognitive neuroscientific perspective. Neurosci Biobehav Rev, 69, 216–238. doi: 10.1016/j.neubiorev.2016.08.013
  136. Stanley DA, & Adolphs R (2013). Toward a neural basis for social behavior. Neuron, 80(3), 816–826. doi: 10.1016/j.neuron.2013.10.038
  137. Stolier RM, Hehman E, & Freeman JB (2018). A Dynamic Structure of Social Trait Space. Trends Cogn Sci, 22(3), 197–200. doi: 10.1016/j.tics.2017.12.003
  138. Straulino E, Scarpazza C, & Sartori L (2023). What is missing in the study of emotion expression? Front Psychol, 14, 1158136. doi: 10.3389/fpsyg.2023.1158136
  139. Taubert J, Ritchie JB, Ungerleider LG, & Baker CI (2022). One object, two networks? Assessing the relationship between the face and body-selective regions in the primate visual system. Brain Struct Funct, 227(4), 1423–1438. doi: 10.1007/s00429-021-02420-7
  140. Thompson JC, Clarke M, Stewart T, & Puce A (2005). Configural processing of biological motion in human superior temporal sulcus. J Neurosci, 25(39), 9059–9066. Retrieved from http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=16192397
  141. Thompson P (1980). Margaret Thatcher: a new illusion. Perception, 9(4), 483–484. doi: 10.1068/p090483
  142. Todorov A, Said CP, Engell AD, & Oosterhof NN (2008). Understanding evaluation of faces on social dimensions. Trends Cogn Sci, 12(12), 455–460. doi: 10.1016/j.tics.2008.10.001
  143. Türe U, Yaşargil MG, Al-Mefty O, & Yaşargil DC (2000). Arteries of the insula. J Neurosurg, 92(4), 676–687. doi: 10.3171/jns.2000.92.4.0676
  144. Uddin LQ, Nomi JS, Hebert-Seropian B, Ghaziri J, & Boucher O (2017). Structure and Function of the Human Insula. J Clin Neurophysiol, 34(4), 300–306. doi: 10.1097/WNP.0000000000000377
  145. Ulloa JL, Puce A, Hugueville L, & George N (2014). Sustained neural activity to gaze and emotion perception in dynamic social scenes. Soc Cogn Affect Neurosci, 9(3), 350–357. doi: 10.1093/scan/nss141
  146. Ungerleider LG, & Mishkin M (1982). Two cortical visual systems. In Ingle DJ, Goodale MA, & Mansfield RJW (Eds.), Analysis of visual behavior (pp. 549–586). Cambridge, MA: MIT Press.
  147. Vaina LM, & Soloviev S (2004). First-order and second-order motion: neurological evidence for neuroanatomically distinct systems. Prog Brain Res, 144, 197–212. Retrieved from http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=14650850
  148. Vangeneugden J, Peelen MV, Tadin D, & Battelli L (2014). Distinct neural mechanisms for body form and body motion discriminations. J Neurosci, 34(2), 574–585. doi: 10.1523/JNEUROSCI.4032-13.2014
  149. Vikhanova A, Mareschal I, & Tibber M (2022). Emotion recognition bias depends on stimulus morphing strategy. Atten Percept Psychophys, 84(6), 2051–2059. doi: 10.3758/s13414-022-02532-0
  150. Walbrin J, & Koldewyn K (2019). Dyadic interaction processing in the posterior temporal cortex. Neuroimage, 198, 296–302. doi: 10.1016/j.neuroimage.2019.05.027
  151. Walbrin J, Mihai I, Landsiedel J, & Koldewyn K (2020). Developmental changes in visual responses to social interactions. Dev Cogn Neurosci, 42, 100774. doi: 10.1016/j.dcn.2020.100774
  152. Waller BM, Julle-Daniere E, & Micheletta J (2020). Measuring the evolution of facial ‘expression’ using multi-species FACS. Neurosci Biobehav Rev, 113, 1–11. doi: 10.1016/j.neubiorev.2020.02.031
  153. Wang Y, Metoki A, Alm KH, & Olson IR (2018). White matter pathways and social cognition. Neurosci Biobehav Rev, 90, 350–370. doi: 10.1016/j.neubiorev.2018.04.015
  154. Wang Y, Metoki A, Smith DV, Medaglia JD, Zang Y, Benear S, … Olson IR (2020). Multimodal mapping of the face connectome. Nat Hum Behav, 4(4), 397–411. doi: 10.1038/s41562-019-0811-3
  155. Watanabe S, Kakigi R, & Puce A (2001). Occipitotemporal activity elicited by viewing eye movements: a magnetoencephalographic study. Neuroimage, 13(2), 351–363. Retrieved from http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=11162275
  156. Weiner KS, & Grill-Spector K (2013). Neural representations of faces and limbs neighbor in human high-level visual cortex: evidence for a new organization principle. Psychol Res, 77(1), 74–97. doi: 10.1007/s00426-011-0392-x
  157. Wheaton KJ, Thompson JC, Syngeniotis A, Abbott DF, & Puce A (2004). Viewing the motion of human body parts activates different regions of premotor, temporal, and parietal cortex. Neuroimage, 22(1), 277–288. Retrieved from http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=15110018
  158. Wolpert N, Rebollo I, & Tallon-Baudry C (2020). Electrogastrography for psychophysiological research: Practical considerations, analysis pipeline, and normative data in a large sample. Psychophysiology, 57(9), e13599. doi: 10.1111/psyp.13599
  159. Wu W, Wang X, Wei T, He C, & Bi Y (2020). Object parsing in the left lateral occipitotemporal cortex: Whole shape, part shape, and graspability. Neuropsychologia, 138, 107340. doi: 10.1016/j.neuropsychologia.2020.107340
  160. Wurm MF, & Caramazza A (2022). Two ‘what’ pathways for action and object recognition. Trends Cogn Sci, 26(2), 103–116. doi: 10.1016/j.tics.2021.10.003
  161. Yang DY, Rosenblau G, Keifer C, & Pelphrey KA (2015). An integrative neural model of social perception, action observation, and theory of mind. Neurosci Biobehav Rev, 51, 263–275. doi: 10.1016/j.neubiorev.2015.01.020
  162. Yarbus AL (1967). Eye Movements and Vision (B. Haigh, Trans.). New York: Springer Science+Business Media, LLC.
  163. Yovel G, & O’Toole AJ (2016). Recognizing People in Motion. Trends Cogn Sci, 20(5), 383–395. doi: 10.1016/j.tics.2016.02.005
  164. Zekelman LR, Zhang F, Makris N, He J, Chen Y, Xue T, … O’Donnell LJ (2022). White matter association tracts underlying language and theory of mind: An investigation of 809 brains from the Human Connectome Project. Neuroimage, 246, 118739. doi: 10.1016/j.neuroimage.2021.118739
  165. Zimmermann M, Mars RB, de Lange FP, Toni I, & Verhagen L (2018). Is the extrastriate body area part of the dorsal visuomotor stream? Brain Struct Funct, 223(1), 31–46. doi: 10.1007/s00429-017-1469-0 [DOI] [PMC free article] [PubMed] [Google Scholar]