Abstract
Despite the similarity in structure, the hemispheres of the human brain have somewhat different functions. A traditional view of hemispheric organization asserts that there are independent and largely lateralized domain-specific regions in ventral occipitotemporal cortex (VOTC), specialized for the recognition of distinct classes of objects. Here, we offer an alternative account of the organization of the hemispheres, with a specific focus on face and word recognition. This alternative account relies on three computational principles: distributed representations and knowledge, cooperation and competition between representations, and topography and proximity. The crux is that visual recognition results from a network of regions with graded functional specialization that is distributed across both hemispheres. Specifically, the claim is that face recognition, which is acquired relatively early in life, is processed by VOTC regions in both hemispheres. Once literacy is acquired, word recognition, which is co-lateralized with language areas, primarily engages the left VOTC and, consequently, face recognition is primarily, albeit not exclusively, mediated by the right VOTC. We review psychological and neural evidence from a range of studies conducted with normal and brain-damaged adults and children and consider findings which challenge this account. Last, we offer suggestions for future investigations whose findings may further refine this account.
Keywords: hemispheric organization, visual object recognition, neural basis, distributed organization
Hemispheric Organization in the Service of Visual Object Recognition
The organization of the two cerebral hemispheres of the human brain, and specifically their individual and joint contributions to visual object recognition, has been the subject of decades of theoretical controversy. The empirical findings from hundreds of experiments, while important, have still not led to an obvious consensus. Although the structure of the two hemispheres appears remarkably similar, there are thought to be important functional differences between them. The nature of these lateralized differences is at the core of the dispute.
The nature of the functional specialization between hemispheres is intimately related to the nature of functional specialization within each hemisphere. A theoretical account that has largely dominated the literature in this regard claims that there are circumscribed and largely lateralized cortical “modules” subserving individual recognition functions. We first outline this domain-specific account and review the evidence that is taken to support it. We then argue that closer scrutiny of the data, as well as new evidence, suggests an alternative account in which visual recognition is the product of a distributed network of cortical regions that engages both hemispheres, and we lay out a set of computational principles that form the crux of this account. Thereafter, we describe empirical findings that are more consistent with this account than with the traditional one, and propose that the principles of this alternative account not only apply to visual object recognition but also have implications for other cognitive processes. Last, we tackle several empirical challenges that appear to run counter to this account and we lay out some future directions that might help adjudicate between different models of hemispheric organization.
Note that claims about domain-specificity and hemispheric lateralization are logically distinct. That is, a domain-specific module might span both hemispheres or, conversely, a more distributed function might nonetheless be restricted to regions within a single hemisphere. In practice, however, claims about modularity are often accompanied by claims of lateralization, whereas computational principles that might give rise to a more distributed organization would be expected to apply both within and between hemispheres. Thus, although we will need to be mindful of this distinction at various points in our analysis, a consideration of hemispheric organization is unavoidably bound together with a consideration of functional specialization.
Circumscribed Cortical Regions Subserving Recognition of Different Visual Classes
Domain-specific accounts of the organization of the hemispheres for object perception and its distinctive neural correlate have a long and compelling history. For example, Konorski (1967), perhaps the most extreme proponent of modularity and the champion of the “gnostic neuron,” argued that there are nine different subsystems engaged in visual object recognition, including, for example, subsystems for the recognition of small manipulable objects, larger partially manipulable objects, and nonmanipulable objects (see Figure 1; for further description of this view, also see Farah, 1992). The notion of independent brain areas with specific, independent functions is seemingly parsimonious, and theories that speculate about the existence of many independent abilities are intuitively appealing, well cited, and influential (Wilmer et al., 2014).
A more recent theoretical view of regional and functional independence, which has had substantial impact in visual cognitive neuroscience, is also one in which there are distinct cortical regions that are individually specialized for the high-level visuoperceptual analysis of different classes of objects (often termed “domain specificity”), although the specific functions do not align with those proposed by Konorski (1967). Much of the evidence to support this view comes from studies using functional magnetic resonance imaging (fMRI), and these investigations have revealed cortical regions with selective, perhaps even dedicated, responses to particular visual classes (Kanwisher, 2010, 2017). For example, regions have been found with a selective response to written words (Cohen & Dehaene, 2004; Cohen et al., 2002; Price & Devlin, 2003; Price & Mechelli, 2005), numerals (Shum et al., 2013), common objects (Grill-Spector, Kushnir, Edelman, Itzchak, & Malach, 1998; Malach et al., 1995), scenes or houses (Epstein & Kanwisher, 1998), hands (Bracci, Caramazza, & Peelen, 2015, 2018), tools (Almeida, Mahon, & Caramazza, 2010; Martin, 2007), and body parts (Peelen, Glaser, Vuilleumier, & Eliez, 2009; Schwarzlose, Baker, & Kanwisher, 2005), with some of these regions—for example, the visual word form area (VWFA), fusiform face area (FFA), and parahippocampal place area (PPA)—showing distinct patterns of lateralization (see Figure 2 for some of these brain-behavior mappings).
Among these candidate domains, two have received the most attention and are probably the strongest contenders for claims about domain-specificity: face recognition and word recognition, associated with the FFA and the right hemisphere (RH) and the VWFA and the left hemisphere (LH), respectively. Whereas both of these areas have been ascribed a distinct function based on extensive empirical findings, there are also several compelling a priori reasons why the processing of these two visual classes would be expected to be highly segregated and independent (Hellige, Laeng, & Michimata, 2010; Kanwisher, 2017; J. Levy, Heller, Banich, & Burton, 1983; Maurer, Rossion, & McCandliss, 2008; Mercure, Dick, Halit, Kaufman, & Johnson, 2008). First, the image properties of faces and words are completely distinct—whereas faces comprise 3D structure with more curved features and with parts which are not easily separable (e.g., eyes, nose and mouth), words are composed of 2D structure with individual letters which occur independently in their own right and are made of mostly straight edges. The image differences, and the engagement of more configural processing for faces and more part-based or compositional processing for words, have led to a view in which there are two distinct mechanisms, one a more holistic system and the other a more feature-based system, such that faces are processed entirely by the former, words are processed entirely by the latter, and other classes of objects are processed by some combination of the two (Boremanse, Norcia, & Rossion, 2014; Busigny & Rossion, 2011; Farah, 1991, 1992; Rossion et al., 2000).
A second motivation for the independence of the subsystems is that face recognition is an evolutionarily old skill and one that is likely to be conserved across species (McKone & Kanwisher, 2005; Sheehan & Nachman, 2014). In contrast, word recognition is only about 5,000 years old (Dehaene & Cohen, 2007) and, until roughly 200 years ago, was limited to a minority of the population. Moreover, word recognition capacity is largely restricted to humans (although extensive training can induce some form of symbol recognition in monkeys; Srihasam, Mandeville, Morocz, Sullivan, & Livingstone, 2012; Srihasam, Vincent, & Livingstone, 2014). Last, the differences in the acquisition of face and word representations are stark: Whereas face recognition is acquired incidentally starting at birth, word recognition usually requires explicit instruction and thousands of hours of practice, and generally starts when children begin formal schooling. Given the persuasive a priori arguments for independence of face and word processing, we now examine in detail the empirical data on this issue.
Face Selective Responses in the RH
Evidence from neuropsychology has played a large and highly informative role with respect to domain-specific claims of a RH face-selective region. Many studies have demonstrated that patients with a bilateral lesion, or even just a unilateral right-sided lesion, are impaired at recognizing known faces and even at judging the similarity of pairs of unknown faces (Barton, 2011; Busigny, Graf, Mayer, & Rossion, 2010; Sergent & Signoret, 1992). Whether the deficit is specific to faces or also affects the recognition of other objects still remains controversial—among the complications for comparing faces and another class of objects is that, relative to faces, these other classes are not well matched on exemplar homogeneity or expertise (Gauthier, Behrmann, & Tarr, 1999, 2004; Geskin & Behrmann, 2018). Last, there are behavioral signatures that are taken as evidence that face processing is domain-specific. For example, compared to other visual classes, face recognition is considered to engage more configural or holistic computations which fail to operate as the face is rotated away from the upright orientation in the picture plane (Farah, Wilson, Drain, & Tanaka, 1998; Yin, 1969; Yovel & Kanwisher, 2005) or when the face is decomposed into parts (Tanaka & Farah, 1993, 2003; for meta-analysis and review, see Richler & Gauthier, 2014).
Consistent with the domain-specificity illustrated in Figure 2, many neuroimaging studies have provided evidence for a selective neural response to the viewing of a face. Early positron emission tomography (PET; Sergent, Ohta, & MacDonald, 1992) and fMRI (Kanwisher, McDermott, & Chun, 1997) studies provided the first observations of face-selective responses in the right ventral occipitotemporal cortex (VOTC), and dozens if not hundreds of studies have replicated and extended these findings (for review, see Grill-Spector, Weiner, Kay, & Gomez, 2017). The face-selective nature of the RH has also gained substantial support from evoked response potential (ERP) studies and magnetoencephalography (MEG) studies, which uncover a specific N170 or M170 response to faces that is greater than the response to other tested categories (e.g., birds, cars, or furniture; Bentin, Allison, Puce, Perez, & McCarthy, 1996; Gao et al., 2013). Last, the distinction between face-selective and word-selective sites has also been noted in studies using intracranial recordings used to monitor pharmacologically resistant epilepsy (Allison, Puce, Spencer, & McCarthy, 1999; McCarthy, Puce, Belger, & Allison, 1999; Puce et al., 1995, 1996). Further confirmation has been obtained from more recent studies using electrocorticography (Ghuman et al., 2014) and stereotactic encephalography for recording and stimulation of neural responses in the same patient group (Parvizi et al., 2012; Rangarajan & Parvizi, 2016). Together, these findings support the domain-specific aspect of face recognition and uncover differences between the way faces are processed compared to other non-face stimuli.
Word Selective Responses in the LH
In complementary fashion to face domain-specificity, empirical evidence from neuropsychology has helped support the claim of an area in the LH that is selective for words (or letter strings). Since the classic studies of Dejerine (1891, 1892) and early descriptions of so-called “pure” alexia (Geschwind, 1965), many single case or small group studies have shown that a lesion to the left VOTC (and not necessarily to the splenium, as argued previously) can impair orthographic processing (Behrmann, Plaut, & Nelson, 1998; Damasio, 1983; Henderson, Friedman, Teng, & Weiner, 1985).
Consistent with this, neuroimaging studies have uncovered a region of the left VOTC that is preferentially activated by words (Schlaggar & McCandliss, 2007), and many early findings from PET (Petersen, Fox, Snyder, & Raichle, 1990) and fMRI studies (Carlos, Hirshorn, Durisko, Fiez, & Coutanche, 2019; Cohen et al., 2000; Cohen et al., 2002; Price, 2000) attest to the relative specificity of the response to orthographic input. The word-selective nature of the VWFA in the LH has also gained substantial support from studies using either ERPs (Appelbaum, Liotti, Perez, Fox, & Woldorff, 2009; Bentin et al., 1996; for a review, see Maurer & McCandliss, 2008) or MEG (Tarkiainen, Helenius, Hansen, Cornelissen, & Salmelin, 1999), which uncover a specific N170 or M170 response to written words. Also, cortical surface electrophysiological recordings from left VOTC in patients showed a strong LH response to words presented singly or as part of a sentence (Canolty et al., 2007; Nobre, Allison, & McCarthy, 1994), and further confirmation of the LH advantage has been obtained from recent studies using electrocorticography (Ghuman & Fiez, 2018).
A Distributed Account of Hemispheric Organization of Face and Word Perception
As the earlier brief overview makes clear, the evidence favoring the separate processing of faces in the RH and words in the LH supports a strong account of psychological and neural domain segregation. In contrast to the claims of independence of function, however, we have proposed a distributed account in which the systems supporting face and word recognition exhibit graded and overlapping functional specialization both within and, especially, between hemispheres (Behrmann & Plaut, 2013, 2015; Plaut & Behrmann, 2011). This account was initially inspired by close scrutiny of the empirical data—detailed review of many studies has revealed bilateral, rather than unilateral, activation for words (Appelbaum et al., 2009) and for faces (Allison et al., 1999; Carmel & Bentin, 2002).
Perhaps even more compelling is that, in those few studies in which cortical responses to faces and words are measured within the same individuals, as shown in Figure 3(a), there is bilateral activation for faces and for words, although there is an asymmetry, with greater activation for words than faces over the LH and greater activation for faces than words over the RH (see also Kay & Yeatman, 2017; Matsuo et al., 2015). A similar weighted asymmetry is noted in one of the relatively few ERP studies in which words and faces are displayed as a within-subject factor (Rossion, Joyce, Cottrell, & Tarr, 2003), and the same is found in the electrophysiological responses from surface electrodes in the RH and LH of patients being monitored prior to neurosurgery (Allison, McCarthy, Nobre, Puce, & Belger, 1994).
In addition to the possible overlap of neural regions, the behavioral signatures typically associated with either holistic- or part-based processing may apply to both faces and words. For example, the characteristic face-inversion effect (Farah, Tanaka, & Drain, 1995; Rossion et al., 1999; Yin, 1969)—that is, the substantial decrement in performance as a function of stimulus orientation—has been reported for words as well: reaction time slows linearly as a function of deviation from upright for both faces (Valentine, 1988) and words (Koriat & Norman, 1985; Wong, Wong, Lui, Ng, & Ngan, 2019). Last, the pattern of encoding following a RH or LH VOTC lesion is similar. For example, patients with prosopagnosia, a disproportionate impairment in face versus object recognition usually following a RH VOTC lesion, make multiple fixations across a face (Stephan & Caine, 2009), considered to reflect a breakdown in holistic processing. Similarly, patients with pure alexia, a disproportionate impairment in word versus object recognition after a LH VOTC lesion, also make multiple fixations across the word and read in a piecemeal, letter-by-letter fashion (Behrmann, Shomstein, Black, & Barton, 2001).
These observations of similarity in the neural and psychological bases of face and word perception are left unexplained by modular theories of cortical organization. To be clear, they are not incompatible with such theories: It might be the case that the two independent modules just happen to be distributed across the hemispheres in complementary fashion and to operate according to similar principles. However, a modular theory provides no insight into why the face and word modules are organized in this manner, as they have nothing to do with each other.
In contrast to a modular account, we have formulated an account that can explain the data supporting partial segregation because face and word representations are not, in fact, independent. This more graded, distributed account is based on three key principles. None of these principles is novel or particularly controversial, but they have important implications when considered together.
The first principle is that representations and knowledge are distributed. We assume that the neural system for visual recognition consists of a set of hierarchically organized cortical areas, ranging from local retinotopic information in primary visual cortex, V1, through more global, object-based and semantic information in anterior temporal cortex (see Grill-Spector & Malach, 2004). At each level, the visual stimulus is represented by the activity of a large number of neurons, and each neuron participates in coding a large number of stimuli. Generally, stimuli that are similar with respect to the information coded by a particular region evoke similar (overlapping) patterns of activity in that region. Knowledge of how features at one level combine to form features at the next level is encoded by the pattern of synaptic connections and strengths between and within the regions. Learning involves modifying these synapses in a way that alters the representations to capture the relevant information in the domain better and to support better behavioral outcomes. With extended experience, expertise develops through the refinement, specialization, and elaboration of representations, requiring the recruitment of additional neurons and regions of cortex.
The second principle is that there is cooperation and competition between representations. The ability of a set of synaptic connections to encode a large number of stimuli depends on the degree to which the relevant knowledge is consistent or systematic (i.e., similar representations at one level correspond to similar representations at another). In general, systematic domains benefit from highly overlapping neural representations that support generalization, whereas unsystematic domains require largely nonoverlapping representations to avoid interference. Thus, if a cortical region represents one type of information (e.g., faces), it is ill-suited to represent another type of information that requires unrelated knowledge (e.g., words), with the result that the domains are better represented separately (due to competition). On the other hand, effective cognitive processing requires the coordination (cooperation) of multiple levels of representation within and across domains. Of course, representations can cooperate directly only to the extent that they are connected—that is, there are synapses between the regions encoding the relevant knowledge of how they are related; otherwise, they must cooperate indirectly through mediating representations. In this way, the neural organization of cognitive processing is strongly constrained by available connectivity for both competition and cooperation (Mahon & Caramazza, 2011).
The third principle is that there are pressures on hemispheric organization associated with topography and proximity. Brain organization must permit sufficient connectivity among neurons to carry out the necessary information processing, but the total axonal volume must fit within the confines of the skull. This constraint is severe: Fully connecting the brain’s 10¹¹ neurons would require more than 20 million cubic meters of axon volume.¹ Clearly, connectivity must be as local as possible. Long-distance projections are certainly present in the brain, but they are relatively rare and presumably play a sufficiently critical functional role to offset the “cost” in volume. In fact, the organization of human neocortex as a folded sheet can be understood as a compromise between the spherical shape that would minimize long-distance axon length and the need for greater cortical area to support highly elaborated representations (see recent papers on cytoarchitectonics and receptor architecture as constraints on functional organization; Amunts & Zilles, 2015; Caspers et al., 2015; Weiner et al., 2014). The organization into two hemispheres is also relevant here, as interhemispheric connectivity is largely restricted to homologous areas and is thus vastly less dense than connectivity within each hemisphere (Suarez et al., 2018). Even at a local scale, the volume of connectivity within an area can be minimized by adopting a topographic organization so that related information is represented in as close proximity as possible (Jacobs & Jordan, 1992). This is seen most clearly in the retinotopic organization of early visual areas, given that light falling on adjacent patches of the retina is highly likely to contain related information (also see Arcaro, Schade, & Livingstone, 2019a, for a related account with strong emphasis on constraints of retinotopy). The relevant dimensions of similarity for higher-level visual areas are, of course, far less well understood, but the local connectivity constraint is no less pertinent (Jacobs, 1997).
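To make the scale of this wiring constraint concrete, a rough back-of-envelope check is sketched below. The axon diameter and mean connection length are illustrative assumptions chosen for the exercise, not the figures behind the footnoted estimate, but they yield a total in the tens of millions of cubic meters, in line with the claim in the text.

```python
# Back-of-envelope estimate of the axon volume needed to fully connect the brain.
# The diameter and mean length below are illustrative assumptions, not values
# taken from the original footnote.
import math

n_neurons = 1e11                         # approximate number of neurons
n_axons = n_neurons * (n_neurons - 1)    # one axon per ordered pair of neurons
axon_diameter_m = 0.3e-6                 # assumed axon diameter (~0.3 micrometer)
mean_length_m = 0.03                     # assumed mean connection length (~3 cm)

volume_per_axon = math.pi * (axon_diameter_m / 2) ** 2 * mean_length_m
total_volume_m3 = n_axons * volume_per_axon
print(f"Total axon volume: {total_volume_m3:.1e} cubic meters")  # ~2e7, i.e., tens of millions
```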
Despite their differences, both word and face recognition are highly overlearned and—given the high degree of visual similarity among exemplars—place extensive demands on high-acuity vision (Gomez, Natu, Jeska, Barnett, & Grill-Spector, 2018; Hasson, Levy, Behrmann, Hendler, & Malach, 2002; I. Levy, Hasson, Avidan, Hendler, & Malach, 2001). This foveal specificity also holds for many other cases of overlearned visual representations (Pokemon in Pokemon experts; Gomez, Barnett, & Grill-Spector, 2019). Thus, representations for both words and faces need to cooperate with (i.e., be connected to and, hence, adjacent to) representations of central visual information; as a result, in both hemispheres, words and faces compete for neural space in areas adjacent to retinotopic cortex in which information from central vision is encoded (Hasson, Levy, Behrmann, Hendler, & Malach, 2002; Roberts et al., 2013; Woodhead, Wise, Sereno, & Leech, 2011). These areas are sculpted further over development (Gomez et al., 2018; Nordt et al., 2019) and end up being labeled the VWFA in the LH and the FFA in the RH, although there is generally bilateral activation to both visual classes (see Figure 3).
To minimize connection length, orthographic representations are further constrained to be proximal to language-related information—especially phonology—which is left-lateralized in most right- and left-handed individuals; across the population, word-selective activation is co-lateralized with language areas (Cai, Paulignan, Brysbaert, Ibarrola, & Nazir, 2010; Gerrits, Van der Haegen, Brysbaert, & Vingerhoets, 2019; Van der Haegen, Cai, & Brysbaert, 2012). As a result, letter and word representations come to rely most heavily—albeit not exclusively—on the left VOTC region (VWFA) as an intermediate cortical region bridging between early vision and language. This claim is consistent with the interactive view in which left occipitotemporal regions become specialized for word processing because of top-down predictions from the language system integrating with bottom-up visual inputs (Carreiras et al., 2009; Devlin, Jamison, Gonnerman, & Matthews, 2006; Price & Devlin, 2011). With reading acquisition, as the LH region becomes increasingly tuned to represent words (Nordt et al., 2019), the competition with face representations in that region increases. Consequently, face representations that were initially bilateral in children (Dundas, Plaut, & Behrmann, 2014; Lochy, de Heering, & Rossion, 2019) become more lateralized to the right fusiform region (FFA) albeit, again, not exclusively. Last, the exact site of the VWFA is somewhat lateral (relative to FFA on the lateral to medial axis), and even within the VWFA there is a medial to lateral axis (Bouhali, Bezagu, Dehaene, & Cohen, 2019), with this arrangement likely a result of within-hemisphere competition to maintain close connectivity to areas engaged in phonological processing (Barttfeld et al., 2018; Dehaene et al., 2010).
Empirical Support for the Distributed Account of Hemispheric Organization
We and others have accumulated substantial evidence in support of this distributed view of face and word recognition, and, here, we describe the evidence as it pertains to the three key computational principles.
The first principle concerning the distribution of representation has already been covered earlier by the evidence favoring the spatial localization of the FFA and VWFA relative to the anterior extrapolation of the fovea in extrastriate cortex, and the medial-to-lateral organization in high visual-acuity regions (see Figure 3), and so we do not discuss this further.
Shared Representations With Weighting
The second principle of the distributed account states that representations that are compatible are coordinated and co-localized, and that incompatible information, which is subject to competition, is segregated. One possibility is that this incompatibility results in a binary separation between the systems for face and word representation. In fact, however, as shown earlier (Figure 3(a)), the cortical solution appears to be one of bihemispheric engagement, with greater weighting for face lateralization in the RH and for word lateralization in the LH.
Notwithstanding the considerable evidence for bilateral activation for both faces and words, it still remains to be determined whether both hemispheres contribute functionally to both face and word perception rather than just being activated epiphenomenally, for example, through interhemispheric connectivity. To evaluate this, we tested the performance of adults with a unilateral VOTC lesion of either the RH (three patients with prosopagnosia) or the LH (four patients with pure alexia; Behrmann & Plaut, 2014). Face recognition was tested in one task using a same/different discrimination procedure with morphed faces and in a second task in which participants matched the identity of a face across viewpoints. Word recognition was measured in a task requiring the reading aloud of words of different lengths and then in a lexical decision task with matched words and nonwords of various lengths. The key finding from these four experiments was that both patient groups were significantly impaired on all tasks relative to their own matched controls but, as predicted, a direct comparison of the two groups on some dependent measures revealed the differences associated with the postulated weighted asymmetry. For example, on word recognition, the Alexia group performed more slowly than the Prosopagnosia group, and disproportionately so as word length increased. A second example comes from the same/different face discrimination task: Although both groups made significantly more errors than their respective controls, and there was no overall difference in accuracy between the two patient groups, there was a significant interaction of Patient Group × Condition. Closer scrutiny showed that, even for easy discriminations, the Prosopagnosia group was less accurate than the Alexia group, although the groups were equally poor on the medium discrimination trials.
Much of the current research with individuals with neuropsychological deficits such as pure alexia and prosopagnosia examines the findings in terms of a double dissociation. On the traditional account (words and faces are independent and never the twain shall meet), one might predict a double dissociation, with the former group impaired at word but not face recognition and the latter group showing the converse. Indeed, established criteria differentiate between classical and strong dissociations (Shallice, 1988). For the former to hold, the patient’s score on one task should differ significantly from that of controls but be in the normal range on the other task, and there should be a statistical difference between the scores on the two tasks. For the latter to hold, the patient’s scores on both tasks should differ significantly from those of controls, but one task should be performed better than the other. The findings from the aforementioned patient study might be interpreted as satisfying the criteria for the strong but not the classical dissociation in that the patients are impaired at both face and word tasks relative to controls, albeit significantly more so on one than the other (but see McIntosh, 2018, for the value of simple dissociations in neuropsychology). The strong dissociation results we report are at odds with the double dissociation predicted by the traditional account in that both the Alexia and Prosopagnosia groups are clearly impaired at both word and face recognition, relative to controls, but the patients with LH lesions were more impaired at word than face recognition and the patients with RH lesions were more impaired at face than word recognition. These findings suggest a shared mechanism underlying face and word recognition such that, when damage to this mechanism occurs, a deficit, albeit weighted, is evident on both tasks.
The co-occurrence of pure alexia and prosopagnosia has also been reported in other studies, such as a case with a left occipital arteriovenous malformation (Y.-C. Liu, Wang, & Yen, 2011), Case 3 of Damasio, Damasio, and Van Hoesen (1982), and a few additional cases reported in the overview by Farah (1991) that may also fit this profile. Also relevant is the finding that patients with lesions to the left posterior fusiform gyrus were impaired at processing orthographic and complex nonorthographic stimuli (Roberts et al., 2013) as well as faces (Roberts et al., 2015). These authors also proposed a deficit in a common underlying mechanism, namely, the loss of high spatial frequency visual information coded in this region, with damage thus affecting both word and face recognition.
Analogous results come from a study of children between the ages of 5 and 17 years in which, following a unilateral posterior injury to the temporal lobe in infancy, there were no differences in the nature and extent of the face recognition deficit as a function of which hemisphere was affected (de Schonen et al., 2005). In addition, in recent studies of patients with impairments following posterior cerebral infarcts, all patients who exhibited word recognition difficulties also had problems in face recognition, regardless of which hemisphere was affected (Asperud, Kühn, Gerlach, Delfi, & Starrfelt, 2019; Gerlach, Marstrand, Starrfelt, & Gade, 2014).
In further support of the distributed view, in one study, adults with developmental dyslexia (DD) not only performed poorly on word recognition, but they also matched faces more slowly and discriminated between similar faces (but not cars) more poorly than controls (Gabay et al., 2017). Moreover, DD individuals showed reduced hemispheric lateralization of words and faces, as demonstrated using a half-field paradigm (see also Sigurdardottir, Ivarsson, Kristinsdottir, & Kristjansson, 2015). The neural profile of children with DD is also atypical in that, relative to controls, they evince a normal Blood-Oxygen-Level-Dependent (BOLD) response to checkerboards and houses but reduced activation to faces in the FFA and to words in the VWFA (Monzalvo, Fluss, Billard, Dehaene, & Dehaene-Lambertz, 2012). It is worth pointing out that, whereas the observation of mixed impairments following unilateral brain damage does not rule out bilateral, co-localized face and word modules, such an account provides no basis for understanding mixed and asymmetric deficits in the acquisition of faces and words, or any other functional relationship between the two domains.
Findings concerning the selectivity of extrastriate cortex in illiterate adults are also illuminating with respect to the relationship between word and face neural substrates. In one study, the BOLD fMRI response to spoken and written language, visual faces, houses, tools, and checkerboards was measured in individuals who were illiterate as well as in those who became literate in adulthood or in later childhood (Dehaene, Cohen, Morais, & Kolinsky, 2015; Dehaene et al., 2010). Most relevant is that, unsurprisingly, the illiterate individuals showed no response to written words in the left VWFA region, whereas a response to faces was apparent in this region. In those individuals in whom literacy was acquired, however, this left fusiform area was activated by written words and, concomitantly, activation to faces in this region was reduced. This competitive word-face effect was observed both in individuals who acquired literacy in childhood and in those who acquired literacy in adulthood, a finding that speaks to the possibility of ongoing competition in cortex over the lifespan. Somewhat at odds with these findings of a competitive effect, in which LH voxels become increasingly tuned to words and RH face-selective responses increase in proportion to reading scores, is a new study conducted with a large number of individuals of varying degrees of literacy (Hervais-Adelman et al., 2019). The findings revealed that the acquisition of literacy does indeed recycle existing object representations, but there was no concomitant encroachment on other stimulus categories, and face activation remained detectable in the left VOTC even after literacy was acquired.
Finally, we have shown that in children who have undergone left posterior lobectomy for the control of medically intractable epilepsy, the VWFA emerges in the RH (T. T. Liu, Freud, Patterson, & Behrmann, 2019; see also Cohen et al., 2004). This atypical localization suggests that the RH must have some capacity for word recognition and that this region can be recruited when necessary. Whether or not this RH VWFA is entirely normal in terms of its functional capability is not yet fully determined.
Developmental Emergence of the FFA and the VWFA
As has been demonstrated previously, face representations are acquired slowly over the course of development (Scherf, Behrmann, Humphreys, & Luna, 2007) and are not adult-like until just after the age of 30 years (Germine, Duchaine, & Nakayama, 2011). Critically, on the distributed account, the claim is that prior to the onset of literacy, which occurs usually around 5 or 6 years of age, there is no hemispheric specialization for face recognition. At the onset of literacy, the LH becomes increasingly tuned for word recognition under the pressure for communication between visual and language areas and, by virtue of the optimization of the left VOTC for orthographic processing, further refinement of face representations occurs primarily, although not exclusively, in the right VOTC (for related ideas and evidence, see Cantlon, Pinel, Dehaene, & Pelphrey, 2011; Dehaene et al., 2015).
To evaluate this putative sequence of events, we collected behavioral as well as ERP data from children aged 7.5 to 9.5 years, adolescents aged 11 to 13 years, and adults performing a same/different discrimination task with words and faces as stimuli. On each trial, as shown in Figure 4, an initial word is shown centrally for a duration long enough for young children to encode the input, followed by the presentation of the same or a different word, which could appear with equal probability in the right or left visual field (RVF and LVF, respectively). In other blocks of trials, the identical procedure was followed but with faces, rather than words, as input, and the order of the face and word blocks was counterbalanced.
Adults showed the expected hemispheric organization, with better performance on the matching task when words were presented to the RVF than to the LVF and better performance when faces were presented to the LVF than to the RVF (Dundas, Plaut, & Behrmann, 2013; see Figure 5(a)). Although their overall accuracy was roughly equal to that of adults, adolescents showed a hemispheric advantage for words but not for faces, and the same was true for the children, although their overall accuracy was reduced relative to the other two groups. Of particular relevance was the observation of a significant correlation in the children and adolescents between reading competence and hemispheric lateralization of faces (after regressing out age, other cognitive scores, and face discrimination accuracy; see Figure 5(b)). The same finding of a positive correlation between the reading performance of 5-year-old children and the lateralization of their electrophysiological response to faces has also been reported using fast periodic visual stimulation: The more letters known by the children, the more right-lateralized their face response (Lochy et al., 2019; for further discussion, see the section Challenges to the Distributed Account of Hemispheric Organization of Face and Word Perception). Together, these findings support the notion that face and word recognition do not develop independently and that word lateralization, which emerges earlier, may drive face lateralization.
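For readers unfamiliar with the analysis logic, a partial correlation of the kind just described can be implemented by residualizing both measures on the covariates and correlating the residuals. The sketch below illustrates this with simulated values; the variable names and numbers are hypothetical and are not the study’s data or analysis code.

```python
# A minimal sketch of a partial correlation between reading competence and face
# lateralization, controlling for age, other cognitive scores, and face
# discrimination accuracy. All values are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 60
age = rng.uniform(7.5, 13, n)
cognitive = rng.normal(100, 10, n)
face_acc = rng.uniform(0.6, 0.95, n)
reading = 2 * age + rng.normal(0, 3, n)            # toy reading scores
face_lat = 0.05 * reading + rng.normal(0, 0.5, n)  # toy lateralization index

covariates = np.column_stack([np.ones(n), age, cognitive, face_acc])

def residualize(y, X):
    """Return residuals of y after ordinary least-squares regression on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

r = np.corrcoef(residualize(reading, covariates),
                residualize(face_lat, covariates))[0, 1]
print(f"partial correlation: {r:.2f}")
```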
We also used the identical discrimination paradigm described earlier with children and adults while ERP data were collected simultaneously. We then analyzed the ERP signal only for the centrally presented face or word (as ERPs to laterally presented stimuli tend to be weaker and a motor response was required for these stimuli). The data indicated that, over posterior electrodes, the standard N170 ERP component in adults was greater over the LH than the RH for words and greater over the RH than the LH for faces (Dundas et al., 2014; see Figure 5(c)), consistent with the behavioral findings reported earlier. Although the children (aged 7–12 years) showed the same LH over RH N170 superiority for words, they showed neither a behavioral nor a neural hemispheric superiority for faces. These electrophysiological findings further support the claim that the lateralization of word recognition may precede and drive the later lateralization of face perception.
The emergence of the left VWFA and right FFA has also been documented in young children aged 6 years who were learning to read in their first trimester of school. In an imaging study, these children evinced activation of voxels specific to written words and digits in the LH VWFA location and, at the same time, RH activation in response to faces increased in proportion to reading scores (Dehaene-Lambertz, Monzalvo, & Dehaene, 2018). These findings further support the claim of an interdependence of face recognition with literacy acquisition.
Having shown that the VOTCs of the two hemispheres are weighted differently but remain interdependent, we next consider what happens when only a single VOTC is present. This is indeed the situation in children who undergo surgical resection for the management of pharmacologically intractable epilepsy. Recently, we had occasion to conduct a longitudinal study of a single case, a child with initials UD (followed from roughly age 7 to 10 years), who underwent surgical resection of the RH VOTC at age 6.9 years (T. T. Liu et al., 2018). Figure 6 shows the functional MRI data from two scans (a category localizer [CL] consisting of faces, objects, words, houses, and patterns) at two timepoints roughly 3 years apart. The VWFA was detectable in the LH at the first timepoint, at age 7 years 10 months, and by the age of 10 years 10 months, both the VWFA and the FFA were detectable in the LH. Although not shown in this figure (but see T. T. Liu et al., 2018, Figure 4), the face-selective and word-selective voxels competed for representational space: over time, the word-selective voxels (VWFA) shifted more laterally than is typical of the lateral-medial arrangement seen in controls, and the left face-selective voxels (FFA) expanded over the course of development and came to be situated more medially than is typical (see Figure 6).
Finally, we have developed computational (neural network) simulations that illustrate how the adult hemispheric differences emerge over development due to cooperative and competitive interactions in the formation of face and word representations (Plaut & Behrmann, 2011). A network, instantiating the three computational principles articulated earlier, was trained on abstract face, word, and house stimuli, and was required to identify the stimulus at output. The network exhibited the emergence of a LH-biased word recognition system by virtue of connectivity with LH phonology. Although there was some LH involvement in face recognition, the recognition system was mostly RH-biased. Damage to a region analogous to the LH fusiform cortex resulted in a deficit in word identification with a mild deficit in face identification, whereas damage to the RH analogue of the fusiform cortex produced a deficit in face identification with a concurrent mild deficit in word recognition, as observed empirically (Behrmann & Plaut, 2014).
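The simulations themselves are not reproduced here, but the sketch below illustrates, in Python (PyTorch), the general shape such a model can take: distributed inputs feed two “hemispheric” hidden pools, both pools drive an identity output, and a phonological output is connected to the left pool only. All layer sizes, stimulus codings, and training settings are illustrative assumptions rather than the parameters of the published model; in the published simulations, the graded hemispheric asymmetry emerges gradually over this kind of training.

```python
# A simplified, illustrative sketch (not the original Plaut & Behrmann, 2011 model)
# of a network with two hemispheric pools and left-lateralized phonology.
import torch
import torch.nn as nn

torch.manual_seed(0)
N_INPUT, N_HIDDEN, N_PER_CLASS, N_PHON = 80, 40, 20, 15
CLASSES = ("face", "word", "house")
N_IDENTITY = N_PER_CLASS * len(CLASSES)

class HemiNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.left = nn.Linear(N_INPUT, N_HIDDEN)      # "LH" fusiform pool
        self.right = nn.Linear(N_INPUT, N_HIDDEN)     # "RH" fusiform pool
        self.identity = nn.Linear(2 * N_HIDDEN, N_IDENTITY)
        self.phonology = nn.Linear(N_HIDDEN, N_PHON)  # connected to the LH pool only

    def forward(self, x):
        lh = torch.sigmoid(self.left(x))
        rh = torch.sigmoid(self.right(x))
        return self.identity(torch.cat([lh, rh], dim=1)), self.phonology(lh), lh, rh

# Synthetic "abstract" stimuli: each item is a noisy version of a class prototype.
prototypes = {c: torch.rand(N_PER_CLASS, N_INPUT) for c in CLASSES}

def sample_batch(cls):
    items = prototypes[cls] + 0.05 * torch.randn(N_PER_CLASS, N_INPUT)
    labels = torch.arange(N_PER_CLASS) + CLASSES.index(cls) * N_PER_CLASS
    return items, labels

net = HemiNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
ce = nn.CrossEntropyLoss()
phon_targets = torch.rand(N_PER_CLASS, N_PHON)  # arbitrary phonological codes for words

for step in range(2000):
    cls = CLASSES[step % len(CLASSES)]
    x, y = sample_batch(cls)
    ident, phon, _, _ = net(x)
    loss = ce(ident, y)
    if cls == "word":  # only words carry a phonological target
        loss = loss + nn.functional.mse_loss(torch.sigmoid(phon), phon_targets)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Compare mean activation of each pool for words vs. faces; in the published
# simulations, a graded asymmetry of this kind emerges over training.
with torch.no_grad():
    for cls in ("word", "face"):
        x, _ = sample_batch(cls)
        _, _, lh, rh = net(x)
        print(cls, "LH:", round(lh.mean().item(), 3), "RH:", round(rh.mean().item(), 3))
```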
Challenges to the Distributed Account of Hemispheric Organization of Face and Word Perception
Thus far, we have provided a theoretical account of the manner in which the division of labor for face and word recognition emerges with graded asymmetries across the two hemispheres. We have also offered empirical support for this account from a range of investigations, including ERP studies conducted with adults and with children across the course of development, behavioral half-field studies with children and adolescents as well as adults, and neuropsychological studies of patients’ impairment after a unilateral RH or LH lesion. Last, we have instantiated these principles in an artificial network which enabled us to explore the consequences of the core principles and (simulated) brain damage on face and word recognition behavior.
Since the publication of these studies, a number of empirical challenges have come to the fore. Next, we describe these challenges and our responses to them. In the end, we conclude that much work remains to be done and that a full resolution of these issues awaits further exploration.
Functional Specialization of Visual Cortex in Congenitally Blind Individuals
Our account emphasizes the importance of the nature and degree of visual experience with different types of stimuli in giving rise to graded domain-specific functional specialization among high-level visual cortical areas. A particularly intriguing challenge to our account, therefore, comes from observations that certain general aspects of the organization of visual cortex are preserved in congenitally blind individuals (Reich, Szwed, Cohen, & Amedi, 2011). For example, in blind individuals, the VWFA is activated during Braille reading (Büchel, Price, Frackowiak, & Friston, 1998; Reich et al., 2011; Sadato et al., 1996), when making other highly precise tactile discriminations (Siuda-Krzywicka et al., 2016), and in response to lexically associated auditory “sound-scapes” (Striem-Amit, Cohen, Dehaene, & Amedi, 2012; Striem-Amit, Wang, Bi & Caramazza, 2018). Similarly, the FFA is activated during tactile exploration of a face (Pietrini et al., 2004), vocal emotional expression (Fairhall et al., 2017), and in response to face-associated auditory stimuli such as laughing and whistling (van den Hurk, Van Baelen, & Op de Beeck, 2017). Indeed, visual cortical selectivity in sighted and blind individuals is similar in other respects, too, including the more lateral response to animals compared to the more medial response to tools (Mahon, Anzellotti, Schwarzbach, Zampini, & Caramazza, 2009) and to scenes (i.e., PPA; He et al., 2013; van den Hurk et al., 2017; Wang, Caramazza, Peelen, Han, & Bi, 2015).
It is important to bear in mind that all of the relevant studies were primarily concerned with establishing some statistically reliable relationship between the visual cortical organization of blind and sighted individuals. On close examination, however, the observed relationships appear to be relatively weak. For instance, Reich et al. (2011) reported only a similar peak of activation for blind Braille reading compared to sighted visual reading but provided no additional information about the distribution of the activation pattern or its degree of selectivity. In fact, Büchel et al. (1998) found massively broader activation in congenitally blind and late blind (mean age of onset 18 years) individuals compared to VWFA activation in sighted individuals reading the same words. More recently, van den Hurk et al. (2017) found that, in congenitally blind individuals, the topographic organization of responses to visual faces, objects, body parts, and scenes in sighted individuals accounted for only 1% of the variance of the analogous organization under auditory presentation (even when based on unthresholded selectivity labels). Moreover, all of the observed relationships are restricted to the medial-lateral axis and do not capture the detailed hierarchical structure observed for both faces (Grill-Spector et al., 2017) and words (Vinckier et al., 2007; Weiner et al., 2016). Thus, the organization of visual cortex in the blind appears to be only coarsely related to that in sighted individuals. Nonetheless, the fact that even very general aspects of the medial-lateral organization of visual cortex do not depend on visual input indicates that the account we have articulated to this point is incomplete.
Some researchers have suggested that the functions of high-level visual cortex can be characterized at a more abstract level that is not vision-specific (e.g., deriving category-based representations), and that, in the absence of visual input, these regions continue to carry out these functions on tactile or auditory input (Op de Beeck, Pillet, & Ritchie, 2019; Reich et al., 2011; Striem-Amit et al., 2012). While certainly a possibility, such accounts still need to specify not only how other modalities of input access occipitotemporal cortex, but how the same abstract functions emerge through a combination of innate structure and (altered) experience.
The critical question in the current context is whether the computational principles we have proposed can be extended to explain the observed findings. Most researchers ascribe the preservation of coarse domain-specific organization of visual cortex in the blind, at least in part, to pre-existing and possibly innate connectivity with domain-specific regions elsewhere in the brain (see, e.g., Mahon et al., 2009; Op de Beeck et al., 2019). This connectivity pattern accounts for the site of the emergence of future VOTC regions in children; for example, the (future) FFA is connected with RH anterior temporal cortex, superior temporal sulcus, and the amygdala (Grimaldi, Saleem, & Tsao, 2016; Saygin et al., 2012) and the (future) VWFA is connected with LH language-related areas (Bouhali et al., 2014; Saygin et al., 2016; Stevens, Kravitz, Peng, Henry Tessler, & Martin, 2017). Cross-modal activation would then result from back-activation via shared higher level representations. The influences of such connectivity on the functional organization of face and word representations in the fusiform gyrus can be viewed as extending our principle of topography and proximity to include top-down as well as bottom-up constraints and influences.
The question remains, though, as to the origin of such connectivity and whether it is properly interpreted as domain-specific. In this regard, Arcaro, Schade, and Livingstone (2019b; see also Livingstone, Arcaro, & Schade, 2019a) have recently put forth the intriguing proposal that critical aspects of this connectivity, and the categorical organization of visual cortex more generally, can be understood as the consequence of the cross-modal alignment of topographic “protomaps” (Srihasam, Vincent, & Livingstone, 2014) along high-precision to low-precision axes. The eccentricity bias that we emphasize is the visual version of this axis, and analogous distinctions can be made for auditory and tactile discrimination. This alignment—which might arise due to thalamic remapping or via cortical association areas—would allow, for example, high-precision tactile information (in Braille reading) to co-opt high-precision regions of visual cortex in the blind. Evidence for cross-modal activation in VOTC is clear, and there is growing agreement that VOTC can be activated by stimuli from other modalities such as haptics and audition (van den Hurk et al., 2017; von Kriegstein, Kleinschmidt, & Giraud, 2005; von Kriegstein, Kleinschmidt, Sterzer, & Giraud, 2005), favoring a view of multisensory alignment.
Beyond its parsimony, this alignment account has the advantage of being able to explain why difficult tactile discriminations which are not language-related also engage visual cortex (Siuda-Krzywicka et al., 2016) and why the coarse spatial similarity among blind versus sighted visual cortical areas seems to be restricted to the lateral (high-precision) to medial (low-precision) axis. Working out the computational details of how the protomaps become aligned during typical and modality-deprived development in a way that can account for the full range of observed findings remains a challenge for future work.
The Role of Literacy Acquisition as the Trigger for Lateralization
A further challenge for the distributed account concerns the finding of an early RH lateralization for face recognition. We have claimed that, prior to the onset of word recognition, there are no obvious hemispheric differences, and so, early in life, face processing is supported by both hemispheres. As a modal right-handed child starts to learn to read, however, word recognition increasingly tunes the VWFA in the LH to enable communication with language areas (and interactivity between VOTC and language areas in top-down fashion; Price & Devlin, 2011). As a product of cooperative and competitive dynamics, the representations of faces become largely but not exclusively tuned in the RH.
In contrast with our account, some data indicate that the RH lateralization for faces is present in infancy, long before the beginning of literacy. For example, one study using functional near infrared spectroscopy in 5- to 8-month-old infants reported a significant difference between the responses to visually presented faces and to control visual stimuli in the RH but not in the LH (Otsuka et al., 2007). Similarly, in an investigation that employed an electroencephalography face discrimination paradigm with lateralized presentation of faces to infants in the first six postnatal months, responses to faces were observed only in the RH, and this RH lateralization increased over those months (Adibpour, Dubois, & Dehaene-Lambertz, 2018). Because these studies employed only faces and not any other category of homogeneous objects as target stimuli, we do not yet know whether this RH lateralization is specific to faces. It may be the case, for example, that the RH is more sensitive to all visual stimuli at this age, perhaps resulting from an early RH attentional advantage or from a spatial frequency bias (see below for further details).
In a further challenge, in experiments conducted with 4- to 6-month-old infants, natural images of faces were displayed embedded within a stream of common objects and, in a second experiment, phase-scrambled versions of the faces and objects were displayed (de Heering & Rossion, 2015). A clear response at the face stimulation frequency was observed for faces but not for the phase-scrambled versions, and this response was greater over the RH than the LH (see also Leleu et al., 2019). These findings led to the conclusion that face-selective processes are present well before word recognition is acquired and hence cannot be the outcome of the hemispheric competition and cooperation that ensues over development.
It is surprising, then, that a fast periodic visual stimulation study conducted with children aged roughly 5 to 6 years showed a strong face-selective response but no lateralization or hemispheric superiority (Lochy et al., 2019), given that a study using the identical methods in infants showed a right-lateralized response for faces (de Heering & Rossion, 2015). Moreover, consistent with our account and with the data from Dundas et al. (2013) shown in Figure 5(b), in Lochy et al. (2019) there was a small positive correlation between the extent of letter knowledge and the degree of RH response superiority for faces (better letter knowledge was associated with a greater RH response). Lochy et al. did recognize the inconsistency between the studies in infants and in young children and concluded that there must exist a nonlinearity in the development of face processing, with a very early RH lateralization which then disappears in 5- to 6-year-old children and re-emerges in adulthood. Exactly why this nonlinearity exists and what purpose it might serve is unknown and requires further investigation.
Although, on the surface, some of these findings appear to contest our distributed account, an early bias toward low spatial frequencies in infancy might confer the early superiority on the RH. This early RH advantage, however, might not be specifically related to the acquisition of cortical face representations. As discussed by Johnson and colleagues (Johnson, 2005; Johnson, Senju, & Tomalski, 2015), many studies have provided evidence for a rapid, low-spatial-frequency (LSF), sub-cortical face-detection system, labeled “ConSpec,” that involves the superior colliculus, pulvinar, and amygdala. The RH advantage in infants might then be the output of this sub-cortical system, which supports the orienting of newborns and young infants to top-heavy stimuli like faces and, thus, does not reflect the cortical organization for face processing per se. This bias is akin to that suggested by Arcaro, Schade, and Livingstone (2019a), in which a preference for small dark regions on a lighter background, coupled with the upper visual field advantage observed in monkeys (Hafed & Chen, 2016), may suffice to drive what appears on the surface to be a preference for faces. In fact, the idea that infants are born with specific face-related information has been challenged by demonstrations that young infants do not reliably evince a face preference per se (Cassia, Turati, & Simion, 2004; Turati, Simion, Milani, & Umilta, 2002; for a review, see Morton & Johnson, 1991). Rather, the claim is that bottom-up information, especially with a low spatial frequency bias, might be sufficient to account for an early RH lateralization (see also the earlier, alternative but consistent, suggestion of an early RH attentional advantage).
Patients With a Selective Impairment of Either Face or Word Recognition
In support of the claim of bilateral representations of words and faces, we have presented data from patients with a lesion to the RH or LH VOTC and have shown an impairment for the patients in the recognition of both stimulus classes (see Shared Representations With Weighting section), albeit in a weighted fashion, with a greater face deficit following a RH than LH lesion and vice versa for words (Behrmann & Plaut, 2014).
One finding that seems inconsistent with this distributed account is that there are case reports of patients who are impaired in their recognition of only one of the two stimulus classes, with some showing either “pure” alexia (Cohen & Dehaene, 2004) or “pure” prosopagnosia (Busigny et al., 2010; Hills, Pancaroglu, Duchaine, & Barton, 2015; Susilo, Wright, Tree, & Duchaine, 2015) and preservation of the other stimulus class, a pattern suggestive of independence of the domains. How might the distributed account explain the presence of a selective deficit restricted to just one of the visual classes with normal performance on the other?
One possible way in which this might occur is by virtue of individual differences in the weighted asymmetry of the hemispheres. These hemispheric differences, we have argued, are a direct consequence of the cooperative and competitive forces that unfold over development, and these forces may play out differently in different individuals. As a means of exploring individual differences, we have collected pilot data from an fMRI study in which 14 participants viewed blocks of consecutively presented faces or words (or objects, houses, or scrambled patterns) in an n-back paradigm (participants pressed a response button if two consecutive images were the same). Figure 7 plots the difference in the BOLD response from a simple subtraction of RH minus LH face activation against the difference of LH minus RH word activation. As evident from this figure, there are considerable individual differences in the magnitude of the superiority of one hemisphere over the other. Although many of the points fall close to the diagonal, showing a balanced advantage for the “preferred” stimulus in each hemisphere, there are some cases with more of an imbalance. For example, in Figure 7, one participant shows a 0.4 signal advantage for faces in the RH over the LH but only a 0.2 advantage for words in the LH over the RH, and a second participant shows the reverse pattern, with greater asymmetry for words than for faces. Although no participant in this small sample shows an extreme degree of laterality for either domain, such cases are, of course, possible in the tails of a full distribution. We propose, then, that some (rare) individuals will fall in the tail of one of these distributions. Depending on whether the absolute lateralization is strong for faces or for words, such an individual, following a lesion to the RH, might become selectively prosopagnosic or, following a lesion to the LH, might become selectively pure alexic; of course, these kinds of “pure” cases are rare in their own right. An essentially identical argument has been offered in the literature to account for the presence or absence of surface dyslexia in individuals with semantic dementia (Woollams, Ralph, Plaut, & Patterson, 2007). Such an account can explain both the common association between face and word processing abilities (and their weighted deficits after a unilateral hemispheric lesion) and the possibility of dissociations in the same population.
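To make the laterality measure plotted in Figure 7 concrete, the sketch below illustrates the computation in Python. The values and variable names are hypothetical placeholders rather than the pilot data themselves; the only assumption is that each participant contributes a mean BOLD response for faces and for words in a left and a right VOTC region of interest.

```python
import numpy as np

# Hypothetical per-participant mean BOLD responses (e.g., beta estimates) in
# right- and left-hemisphere VOTC regions of interest, for face and word blocks.
rh_face = np.array([0.9, 0.7, 1.1, 0.8])
lh_face = np.array([0.5, 0.6, 0.7, 0.7])
lh_word = np.array([0.8, 0.9, 0.6, 0.85])
rh_word = np.array([0.6, 0.5, 0.55, 0.6])

# The two laterality scores plotted against one another in Figure 7.
face_rh_advantage = rh_face - lh_face   # x-axis: RH minus LH activation for faces
word_lh_advantage = lh_word - rh_word   # y-axis: LH minus RH activation for words

# Points far from the diagonal reflect an imbalanced profile: much stronger
# lateralization for one category than for the other.
imbalance = face_rh_advantage - word_lh_advantage
for i, (f, w, d) in enumerate(zip(face_rh_advantage, word_lh_advantage, imbalance)):
    print(f"participant {i}: face RH advantage = {f:.2f}, "
          f"word LH advantage = {w:.2f}, imbalance = {d:+.2f}")
```

Participants in the extreme tails of either difference score would be the candidate individuals who, after a unilateral lesion, could present with an apparently selective deficit.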
There is, in addition, another reasonably large group of individuals who appear to evince a selective impairment in face recognition (“congenital” or “developmental” prosopagnosia; CP for short). The face recognition disorder in such individuals can be as severe as that observed in individuals with a frank RH VOTC lesion and, like those individuals, they rely on voice and other cues, such as hairstyle, to support individual face recognition (Avidan & Behrmann, 2014; Avidan et al., 2014; Barton, Albonico, Susilo, Duchaine, & Corrow, 2019; Rosenthal et al., 2017). In contrast with the association of word and face recognition deficits in DD, reviewed in the Shared Representations With Weighting section above, several studies of individuals with CP have now demonstrated a dissociation between face and word recognition (Burns et al., 2017; Rubino, Corrow, Corrow, Duchaine, & Barton, 2016; Starrfelt, Klargaard, Petersen, & Gerlach, 2018).
We suggest that, from a theoretical perspective, both the dissociation between face and word processing in CP and their association in DD may be explained within the distributed hemispheric account. On this account, if the acquisition of word recognition is impaired by, for example, a phonological deficit, as in DD, the initial trigger for lateralization, namely, the optimization of the LH for connecting visual and language areas, will not be present. In the absence of this LH tuning for words, the competition that drives the lateralization of faces will not arise. In this scenario, both face and word recognition would be adversely impacted and their hemispheric organization affected. If, however, it is face recognition that is initially affected, the acquisition of word recognition can proceed apace and be preserved. The argument, then, is that there is a chronological sequence: Face lateralization is contingent on preserved literacy acquisition, but not vice versa. To examine this claim of temporal staging and the differential reliance of face lateralization on word lateralization, we collected data from adults with DD, adults with CP, and matched control participants using the same behavioral and ERP paradigms shown in Figures 3 and 4.
As depicted in Figure 8, relative to the typically developed controls, who evince a more negative waveform to faces over the RH than the LH and a more negative waveform to words over the LH than the RH in the expected N170 time window, the DD individuals showed no asymmetries over either hemisphere for words or for faces, as would be expected if face lateralization is contingent on normal lateralized word acquisition. In contrast, in the CP individuals, the ERP waveforms for faces showed no hemispheric differences, but there was a greater negative component over the LH than the RH for words in the expected time window, as in the controls. This differential relationship between face and word hemispheric lateralization, borne out by the differences between the DD and CP individuals, is predicted a priori by the distributed account.
The Nature of Bilateral Hemispheric Representation of Faces and Words
We have suggested that representations of both faces and words exist in both hemispheres and that these bilateral representations play a functional role: a lesion to either hemisphere results in a recognition deficit for both classes of stimuli (albeit weighted depending on which hemisphere is affected; Behrmann & Plaut, 2014; see Shared Representations With Weighting section above). Thus far, we have not characterized the information content of the representations in each hemisphere and, in particular, have not evaluated whether these representations are the same or different (see the Moving Forward section). One existing proposal suggests that they are not. Barton and colleagues have shown that patients with left fusiform lesions and alexia do have face processing deficits, but that the deficit is primarily in lip-reading rather than in face identification per se. In complementary fashion, they showed that patients with right fusiform lesions and prosopagnosia do have difficulty with text, but the problem is not in word recognition itself; rather, it lies in identifying the font or handwriting of the text (Barton, Fox, Sekunova, & Iaria, 2010; Barton, Sekunova, et al., 2010; Hills et al., 2015). Albonico and Barton (2017) offer a potential resolution to this apparent inconsistency. They conclude that, in addition to their role in word recognition, left VOTC regions participate in face recognition: the key claim is that the left fusiform area codes or represents linear contours at higher spatial frequencies, and thus damage affects both word recognition and the processing of facial speech patterns. Whether a comparable general process is at play in the right fusiform area, one that would give rise to both prosopagnosia and a deficit in processing font or handwriting, is yet to be determined.
The proposal of LH lip-reading and RH font perception is not necessarily at odds with the claim of bihemispheric representation of words and faces. Rather, the representations of individual faces and words, as we have suggested, may coexist with the processes engaged in lip-reading in the LH and in font perception in the RH. Clearly, further investigations are needed to confirm and elucidate this coexistence of processes.
Moving Forward
As we have pointed out at several places throughout this article, much research remains to be done. One obvious line of investigation concerns the representational format of individual faces and words in the two hemispheres and the extent to which these representations are the same or different. A multivariate analysis of BOLD data collected while observers view faces and words should help shed light on this issue.
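One way such an analysis might proceed, sketched below purely as an illustration (the response matrices are random placeholders, not data from any study discussed here), is representational similarity analysis: compute a stimulus-by-stimulus dissimilarity matrix separately within left and right VOTC and then correlate the two matrices, with a high correlation indicating similar representational content across the hemispheres.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Assumed inputs: multi-voxel response patterns (stimuli x voxels) from a left
# and a right VOTC region of interest; random placeholders are used here.
n_stimuli, n_voxels = 40, 200            # e.g., 20 face and 20 word stimuli
lh_patterns = rng.normal(size=(n_stimuli, n_voxels))
rh_patterns = rng.normal(size=(n_stimuli, n_voxels))

# Representational dissimilarity within each hemisphere: 1 - Pearson correlation
# between the patterns evoked by every pair of stimuli (condensed form).
lh_rdm = pdist(lh_patterns, metric="correlation")
rh_rdm = pdist(rh_patterns, metric="correlation")

# Rank-correlate the two dissimilarity structures; a high value would indicate
# similar representational content in the two hemispheres.
rho, p = spearmanr(lh_rdm, rh_rdm)
print(f"LH vs. RH representational similarity: rho = {rho:.2f}, p = {p:.3f}")
```

In practice, the reliability of each hemisphere's dissimilarity structure (e.g., estimated by split-half methods) would set the upper bound on the attainable cross-hemisphere correlation.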
Another line of future study concerns the individual differences in hemispheric organization outlined in the Patients With a Selective Impairment of Either Face or Word Recognition section. One hypothesis is that the mature hemispheric profile is an emergent function of the competition for representation in the two hemispheres. But what determines the nature and extent of this competition? One possible constraining factor is inter- as well as intrahemispheric structural connectivity. The prediction is that, across individuals, as the volume of the corpus callosum increases (affording fast and detailed interhemispheric transmission), word and face representations should become more evenly bilateral. Those with less callosal volume, in contrast, should evince more unilateral specialization for words in the LH and faces in the RH. Relatedly, in those with greater within-hemisphere connectivity (e.g., by virtue of greater integrity of the inferior fronto-occipital fasciculus), we would predict a more unilateral VWFA as a result of stronger connections between the left fusiform gyrus and left-hemisphere language areas and, through the competitive dynamics we have described, greater lateralization of face representations to the RH. Those with less within-hemisphere connectivity might, then, have more balanced bilateral representations. These claims regarding the relative roles of between- and within-hemisphere connectivity remain speculative but, if the predictions are upheld, they would further consolidate the distributed account of hemispheric organization and the nature of the interdependence of face and word representations.
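Purely as an illustration of how these connectivity predictions might be tested (all measures and values below are hypothetical placeholders, not data from any study discussed here), one could compute a laterality index per participant and correlate it with callosal volume and with a within-hemisphere tract measure:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 30  # hypothetical sample size

# Placeholder structural measures (arbitrary units).
callosal_volume = rng.normal(100.0, 10.0, n)   # interhemispheric connectivity proxy
ifof_integrity = rng.normal(0.5, 0.05, n)      # within-hemisphere proxy (e.g., IFOF FA)

# Placeholder functional laterality indices, scaled so that larger values mean
# more unilateral representation (LH for words, RH for faces).
word_li = rng.normal(0.2, 0.1, n)
face_li = rng.normal(0.2, 0.1, n)

# Prediction 1: larger callosal volume -> more bilateral, i.e., smaller indices.
r_cc_word, _ = pearsonr(callosal_volume, word_li)
# Prediction 2: greater IFOF integrity -> more unilateral word representation.
r_ifof_word, _ = pearsonr(ifof_integrity, word_li)

print(f"callosal volume vs. word laterality: r = {r_cc_word:.2f} (predicted negative)")
print(f"IFOF integrity vs. word laterality: r = {r_ifof_word:.2f} (predicted positive)")
```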
There is much to be learned about the functional organization of the hemispheres and the manner in which this organization emerges over development. We have offered a framework within which to begin to outline a possible mechanism, and we expect that further tests of its predictions, as well as future challenges, will help refine and extend this framework. We recognize that the description of the findings to date may appear to suffer from a confirmatory bias and that the theory articulated here may be viewed as unnecessarily complex. But this theoretical framework is open to challenge, as demonstrated earlier, and there are many ways in which the model might be tested further and refuted by new findings. The goal of this article has been to lay out a computational account and empirical findings that examine the emergence of, and constraints on, the pattern of hemispheric organization of human ventral occipitotemporal cortex. This account offers a number of testable predictions and, based on future investigations, is subject to modification or, if necessary, refutation. Above all, we hope that our focus on principles of hemispheric organization of human visual recognition helps to move the field forward.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Footnotes
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
We approximate the brain as 10¹¹ neurons uniformly distributed within a sphere of radius 6.6 cm and connected by straight axons with a cross-sectional radius of 0.1 μm (ignoring overlap). As the average distance between two random points within a sphere of radius r is 36r/35, the average volume of an axon is (36/35) × (6.6 × 10⁻² m) × π × (0.1 × 10⁻⁶ m)² ≅ 2.13 × 10⁻¹⁵ m³. Thus, the total volume of 10²² axons (full connectivity) is 2.13 × 10⁷ m³, that is, 21.3 million cubic meters. Even connecting each neuron to only 10⁴ others (as is roughly true in the brain) would require 2.13 cubic meters of axon volume if connections were distributed randomly rather than mostly locally.
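For readers who wish to verify the arithmetic, the short Python snippet below (a minimal check using only the approximations stated in this footnote; the variable names are ours) reproduces the figures:

```python
import math

N = 1e11        # approximate number of neurons in the brain
r = 6.6e-2      # sphere radius in meters (6.6 cm)
a = 0.1e-6      # axon cross-sectional radius in meters (0.1 micrometers)

mean_length = 36 * r / 35                     # mean distance between two random points in a sphere
axon_volume = mean_length * math.pi * a ** 2  # volume of one average straight axon

full = N * N * axon_volume        # full connectivity: 10^22 axons
sparse = N * 1e4 * axon_volume    # roughly 10^4 connections per neuron

print(f"volume per axon:        {axon_volume:.2e} m^3")  # ~2.13e-15
print(f"full connectivity:      {full:.2e} m^3")         # ~2.13e+07
print(f"10^4 connections each:  {sparse:.2f} m^3")       # ~2.13
```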
Based on the Perception Lecture delivered at the 42nd European Conference on Visual Perception, Leuven, 25 August 2019.
References
- Adibpour P, Dubois J, & Dehaene-Lambertz G (2018). Right but not left hemispheric discrimination of faces in infancy. Nature Human Behaviour, 2, 67–79. doi: 10.1038/s41562-017-0249-4 [DOI] [PubMed] [Google Scholar]
- Albonico A, & Barton JJS (2017). Face perception in pure alexia: Complementary contributions of the left fusiform gyrus to facial identity and facial speech processing. Cortex, 96, 59–72. doi: 10.1016/j.cortex.2017.08.029 [DOI] [PubMed] [Google Scholar]
- Allison T, McCarthy G, Nobre AC, Puce A, & Belger A (1994). Human extrastriate visual cortex and the perception of faces, words, numbers and colors. Cerebral Cortex, 5, 544–554. [DOI] [PubMed] [Google Scholar]
- Allison T, Puce A, Spencer DD, & McCarthy G (1999). Electrophysiological studies of human face perception. I: Potentials generated in occipitotemporal cortex by face and non-face stimuli. Cerebral Cortex, 9, 415–430. [DOI] [PubMed] [Google Scholar]
- Almeida J, Mahon BZ, & Caramazza A (2010). The role of the dorsal visual processing stream in tool identification. Psychological Science, 21, 772–778. doi: 10.1177/0956797610371343 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Amunts K, & Zilles K (2015). Architectonic mapping of the human brain beyond Brodmann. Neuron, 88, 1086–1107. doi: 10.1016/j.neuron.2015.12.001 [DOI] [PubMed] [Google Scholar]
- Appelbaum LG, Liotti M, Perez R, Fox SP, & Woldorff MG (2009). The temporal dynamics of implicit processing of non-letter, letter, and word-forms in the human visual cortex. Frontiers in Human Neuroscience, 3, 56. doi: 10.3389/neuro.09.056.2009 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Arcaro MJ, Schade PF, & Livingstone MS (2019a). Universal mechanisms and the development of the face network: What you see is what you get. Annual Review of Vision Science, 5, 341–372. doi: 10.1146/annurev-vision-091718-014917 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Arcaro MJ, Schade PF, & Livingstone MS (2019b). Body map proto-organization in newborn macaques. Proceedings of the National Academy of Sciences of the United States of America, 116, 24861–24871. doi: 10.1073/pnas.1912636116 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Asperud J, Kühn CD, Gerlach C, Delfi TS, & Starrfelt R (2019). Word recognition and face recognition following posterior cerebral artery stroke: Overlapping networks and selective contributions. Visual Cognition, 1, 52–65. [Google Scholar]
- Avidan G, & Behrmann M (2014). Impairment of the face processing network in congenital prosopagnosia. Frontiers in Bioscience (Elite Edition), 6, 236–257. [DOI] [PubMed] [Google Scholar]
- Avidan G, Tanzer M, Hadj-Bouziane F, Liu N, Ungerleider LG, & Behrmann M (2014). Selective dissociation between core and extended regions of the face processing network in congenital prosopagnosia. Cerebral Cortex, 24, 1565–1578. doi: 10.1093/cercor/bht007 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Barton JJS (2011). Disorder of higher visual function. Current Opinion in Neurology, 24, 1–5. doi: 10.1097/WCO.0b013e328341a5c2 [DOI] [PubMed] [Google Scholar]
- Barton JJS, Albonico A, Susilo T, Duchaine B, & Corrow SL (2019). Object recognition in acquired and developmental prosopagnosia. Cognitive Neuropsychology, 36, 54–84. doi: 10.1080/02643294.2019.1593821 [DOI] [PubMed] [Google Scholar]
- Barton JJS, Fox CJ, Sekunova A, & Iaria G (2010). Encoding in the visual word form area: An fMRI adaptation study of words versus handwriting. Journal of Cognitive Neuroscience, 22, 1649–1661. doi: 10.1162/jocn.2009.21286 [DOI] [PubMed] [Google Scholar]
- Barton JJS, Sekunova A, Sheldon C, Johnston S, Iaria G, & Scheel M (2010). Reading words, seeing style: The neuropsychology of word, font and handwriting perception. Neuropsychologia, 48, 3868–3877. doi: 10.1016/j.neuropsychologia.2010.09.012 [DOI] [PubMed] [Google Scholar]
- Barttfeld P, Abboud S, Lagercrantz H, Aden U, Padilla N, Edwards AD, … Dehaene-Lambertz G (2018). A lateral-to-mesial organization of human ventral visual cortex at birth. Brain Structure and Function. doi: 10.1007/s00429-018-1676-3 [DOI] [PubMed] [Google Scholar]
- Behrmann M, & Plaut DC (2013). Distributed circuits, not circumscribed centers, mediate visual recognition. Trends in Cognitive Sciences, 17, 210–219. doi: 10.1016/j.tics.2013.03.007 [DOI] [PubMed] [Google Scholar]
- Behrmann M, & Plaut DC (2014). Bilateral hemispheric processing of words and faces: Evidence from word impairments in prosopagnosia and face impairments in pure alexia. Cerebral Cortex, 24, 1102–1118. doi: 10.1093/cercor/bhs390 [DOI] [PubMed] [Google Scholar]
- Behrmann M, & Plaut DC (2015). A vision of graded hemispheric specialization. Annals of the New York Academy of Sciences, 1359, 30–46. doi: 10.1111/nyas.12833 [DOI] [PubMed] [Google Scholar]
- Behrmann M, Plaut DC, & Nelson J (1998). A literature review and new data supporting an interactive account of letter-by-letter reading. Cognitive Neuropsychology, 15, 7–51. [DOI] [PubMed] [Google Scholar]
- Behrmann M, Shomstein SS, Black SE, & Barton JJS (2001). The eye movements of pure alexic patients during reading and nonreading tasks. Neuropsychologia, 39, 983–1002. doi: 10.1016/S0028-3932(01)00021-5 [DOI] [PubMed] [Google Scholar]
- Bentin S, Allison T, Puce A, Perez E, & McCarthy G (1996). Electrophysiological studies of face perception in humans. Journal of Cognitive Neuroscience, 8, 551–565. doi: 10.1162/jocn.1996.8.6.551 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Boremanse A, Norcia AM, & Rossion B (2014). Dissociation of part-based and integrated neural responses to faces by means of electroencephalographic frequency tagging. European Journal of Neurosciences, 40, 2987–2997. doi: 10.1111/ejn.12663 [DOI] [PubMed] [Google Scholar]
- Bouhali F, Bezagu Z, Dehaene S, & Cohen L (2019). A mesial-to-lateral dissociation for orthographic processing in the visual cortex. Proceedings of the National Academy of Sciences of the United States of America. doi: 10.1073/pnas.1904184116 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bouhali F, Thiebaut de Schotten M, Pinel P, Poupon C, Mangin JF, Dehaene S, & Cohen L (2014). Anatomical connections of the visual word form area. Journal of Neurosciences, 34, 15402–15414. doi: 10.1523/JNEUROSCI.4918-13.2014 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bracci S, Caramazza A, & Peelen MV (2015). Representational similarity of body parts in human occipitotemporal cortex. J Neurosci, 35, 12977–12985. doi: 10.1523/JNEUROSCI.4698-14.2015 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bracci S, Caramazza A, & Peelen MV (2018). View-invariant representation of hand postures in the human lateral occipitotemporal cortex. Neuroimage, 181, 446–452. doi: 10.1016/j.neuroimage.2018.07.001 [DOI] [PubMed] [Google Scholar]
- Büchel C, Price C, Frackowiak RSJ, & Friston K (1998). Different activation patterns in the visual cortex of late and congenitally blind subjects. Brain, 121, 409–419. [DOI] [PubMed] [Google Scholar]
- Burns EJ, Bennetts RJ, Bate S, Wright VC, Weidemann CT, & Tree JJ (2017). Intact word processing in developmental prosopagnosia. Science Reports, 7, 1683. doi: 10.1038/s41598-017-01917-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Busigny T, Graf M, Mayer E, & Rossion B (2010). Acquired prosopagnosia as a face-specific disorder: Ruling out the general visual similarity account. Neuropsychologia, 48, 2051–2067. doi: 10.1016/j.neuropsychologia.2010.03.026 [DOI] [PubMed] [Google Scholar]
- Busigny T, & Rossion B (2011). Holistic processing impairment can be restricted to faces in acquired prosopagnosia: Evidence from the global/local Navon effect. Journal of Neuropsychology, 5, 1–14. doi: 10.1348/174866410X500116 [DOI] [PubMed] [Google Scholar]
- Cai Q, Paulignan Y, Brysbaert M, Ibarrola D, & Nazir TA (2010). The left ventral occipitotemporal response to words depends on language lateralization but not on visual familiarity. Cerebral Cortex, 20, 1153–1163. doi: 10.1093/cercor/bhp175 [DOI] [PubMed] [Google Scholar]
- Canolty RT, Soltani M, Dalal SS, Edwards E, Dronkers NF, Nagarajan SS, … Knight RT (2007). Spatiotemporal dynamics of word processing in the human brain. Frontiers in Neuroscience, 1, 185–196. doi: 10.3389/neuro.01.1.1.014.2007 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Cantlon JF, Pinel P, Dehaene S, & Pelphrey KA (2011). Cortical representations of symbols, objects, and faces are pruned back during early childhood. Cerebral Cortex, 21, 191–199. doi: 10.1093/cercor/bhq078 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Carlos BJ, Hirshorn EA, Durisko C, Fiez JA, & Coutanche MN (2019). Word inversion sensitivity as a marker of visual word form area lateralization: An application of a novel multivariate measure of laterality. Neuroimage, 191, 493–502. doi: 10.1016/j.neuroimage.2019.02.044 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Carmel D, & Bentin S (2002). Domain specificity versus expertise: Factors influencing distinct processing of faces. Cognition, 83, 1–29. [DOI] [PubMed] [Google Scholar]
- Carreiras M, Seghier ML, Baquero S, Estevez A, Lozano A, Devlin JT, & Price CJ (2009). An anatomical signature for literacy. Nature, 461, 983–986. doi: 10.1038/nature08461 [DOI] [PubMed] [Google Scholar]
- Caspers J, Palomero-Gallagher N, Caspers S, Schleicher A, Amunts K, & Zilles K (2015). Receptor architecture of visual areas in the face and word-form recognition region of the posterior fusiform gyrus. Brain Structure and Function, 220, 205–219. doi: 10.1007/s00429-013-0646-z [DOI] [PubMed] [Google Scholar]
- Cassia VM, Turati C, & Simion F (2004). Can a nonspecific bias toward top-heavy patterns explain newborns’ face preference? Psychological Science, 15, 379–383. [DOI] [PubMed] [Google Scholar]
- Cohen L, & Dehaene S (2004). Specialization within the ventral stream: The case for the visual word form area. Neuroimage, 22, 466–476. [DOI] [PubMed] [Google Scholar]
- Cohen L, Dehaene S, Naccache L, Lehericy S, Dehaene-Lambertz G, Henaff MA, & Michel F (2000). The visual word form area: Spatial and temporal characterization of an initial stage of reading in normal subjects and posterior split-brain patients. Brain, 123, 291–307. [DOI] [PubMed] [Google Scholar]
- Cohen L, Lehericy S, Chochon F, Lemer C, Rivaud S, & Dehaene S (2002). Language-specific tuning of visual cortex? Functional properties of the visual word form area. Brain, 125, 1054–1069. [DOI] [PubMed] [Google Scholar]
- Cohen L, Lehericy S, Henry C, Bourgeois M, Larroque C, Sainte-Rose C, … Hertz-Pannier L (2004). Learning to read without a left occipital lobe: Right-hemispheric shift of visual word form area. Annals of Neurology, 56, 890–894. [DOI] [PubMed] [Google Scholar]
- Damasio AR (1983). Pure alexia. Trends in Neurosciences, 6, 93–96. [Google Scholar]
- Damasio AR, Damasio H, & Van Hoesen GW (1982). Prosopagnosia: Anatomic basis and behavioral mechanisms. Neurology, 32, 331–341. [DOI] [PubMed] [Google Scholar]
- de Heering A, & Rossion B (2015). Rapid categorization of natural face images in the infant right hemisphere. Elife, 4. doi: 10.7554/eLife.06564 [DOI] [PMC free article] [PubMed] [Google Scholar]
- de Schonen S, Mancini J, Camps R, Maes E, & Laurent A (2005). Early brain lesions and face-processing development. Dev Psychobiol, 46, 184–208. doi: 10.1002/dev.20054 [DOI] [PubMed] [Google Scholar]
- Dehaene S, & Cohen L (2007). Cultural recycling of cortical maps. Neuron, 56, 384–398. doi: 10.1016/j.neuron.2007.10.004 [DOI] [PubMed] [Google Scholar]
- Dehaene S, Cohen L, Morais J, & Kolinsky R (2015). Illiterate to literate: Behavioral and cerebral changes induced by reading acquisition. Nature Reviews Neuroscience, 16, 234–244. [DOI] [PubMed] [Google Scholar]
- Dehaene S, Pegado F, Braga LW, Ventura P, Nunes Filho G, Jobert A, … Cohen L (2010). How learning to read changes the cortical networks for vision and language. Science, 330, 1359–1364. doi: 10.1126/science.1194140 [DOI] [PubMed] [Google Scholar]
- Dehaene-Lambertz G, Monzalvo K, & Dehaene S (2018). The emergence of the visual word form: Longitudinal evolution of category-specific ventral visual areas during reading acquisition. PLoS Biol, 16, e2004103. doi: 10.1371/journal.pbio.2004103 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Dejerine J (1891). Sur un cas de cecite verbale avec agraphie, suivi d’autopsie. [On a case of verbal blindness with agraphia followed by autopsy]. Comptes rendus des séances de la Société de biologie, 43, 197–201. [Google Scholar]
- Dejerine J (1892). Contribution a l’étude anatomo-pathologique et clinique des differentes variétés de cécité-verbale. [Contribution to the anatomo-pathological and clinical study of the different varieties of word blindness]. Mémoires Sociéte Biologique, 4, 61–90. [Google Scholar]
- Devlin JT, Jamison HL, Gonnerman LM, & Matthews PM (2006). The role of the posterior fusiform gyrus in reading. Journal of Cognitive Neuroscience, 18, 911–922. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Dundas EM, Plaut DC, & Behrmann M (2013). The joint development of hemispheric lateralization for words and faces. Journal of Experimental Psychology General, 142, 348–358. doi: 10.1037/a0029503 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Dundas EM, Plaut DC, & Behrmann M (2014). An ERP investigation of the co-development of hemispheric lateralization of face and word recognition. Neuropsychologia, 61C, 315–323. doi: 10.1016/j.neuropsychologia.2014.05.006 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Epstein R, & Kanwisher N (1998). A cortical representation of the local visual environment. Nature, 392, 598–601. [DOI] [PubMed] [Google Scholar]
- Fairhall SL, Porter KB, Bellucci C, Mazzetti M, Cipolli C, & Gobbini MI (2017). Plastic reorganization of neural systems for perception of others in the congenitally blind. Neuroimage, 158, 126–135. doi: 10.1016/j.neuroimage.2017.06.057 [DOI] [PubMed] [Google Scholar]
- Farah MJ (1991). Patterns of co-occurrence among the associative agnosias: Implications for visual object recognition. Cognitive Neuropsychology, 8, 1–19. [Google Scholar]
- Farah MJ (1992). Is an object an object an object? Cognitive and neuropsychological investigations of domain specificity in visual object recognition. Current Directions in Psychological Science, 1, 164–169. [Google Scholar]
- Farah MJ, Tanaka JW, & Drain HM (1995). What causes the face inversion effect? Journal of Experimental Psychology: Human Perception and Performance, 21, 628–634. [DOI] [PubMed] [Google Scholar]
- Farah MJ, Wilson KD, Drain M, & Tanaka JN (1998). What is “special” about face perception? Psychological Review, 105, 482–498. [DOI] [PubMed] [Google Scholar]
- Gabay Y, Dundas E, Plaut DC, & Behrmann M (2017). Atypical perceptual processing of faces in developmental dyslexia. Brain and Language, 173, 41–51. [DOI] [PubMed] [Google Scholar]
- Gao Z, Goldstein A, Harpaz Y, Hansel M, Zion-Golumbic E, & Bentin S (2013). A magneto-encephalographic study of face processing: M170, gamma-band oscillations and source localization. Human Brain Mapping, 34, 1783–1795. doi: 10.1002/hbm.22028 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Gauthier I, Behrmann M, & Tarr MJ (1999). Can face recognition really be dissociated from object recognition? Journal of Cognitive Neuroscience, 11, 349–370. [DOI] [PubMed] [Google Scholar]
- Gauthier I, Behrmann M, & Tarr MJ (2004). Are Greebles like faces? Using the neuropsychological exception to test the rule. Neuropsychologia, 42, 1961–1970. [DOI] [PubMed] [Google Scholar]
- Gerlach C, Marstrand L, Starrfelt R, & Gade A (2014). No strong evidence for lateralization of word reading and face recognition deficits following posterior brain injury. Journal of Cognitive Psychology. doi: 10.1080/20445911.2014.928713. [DOI] [Google Scholar]
- Germine LT, Duchaine BC, & Nakayama K (2011). Where cognitive development and aging meet: Face learning ability peaks after age 30. Cognition, 118, 201–210. doi: 10.1016/j.cognition.2010.11.002 [DOI] [PubMed] [Google Scholar]
- Gerrits R, Van der Haegen L, Brysbaert M, & Vingerhoets G (2019). Laterality for recognizing written words and faces in the fusiform gyrus covaries with language dominance. Cortex, 117, 196–204. doi: 10.1016/j.cortex.2019.03.010 [DOI] [PubMed] [Google Scholar]
- Geschwind N (1965). Disconnection syndromes in animals and man. Brain, 88, 237–294. [DOI] [PubMed] [Google Scholar]
- Geskin J, & Behrmann M (2018). Congenital prosopagnosia without object agnosia? A literature review. Cognitive Neuropsychology, 35, 4–54. doi: 10.1080/02643294.2017.1392295 [DOI] [PubMed] [Google Scholar]
- Ghuman AS, Brunet NM, Li Y, Konecky RO, Pyles JA, Walls SA, … Richardson RM (2014). Dynamic encoding of face information in the human fusiform gyrus. Nature Communications, 5, 5672. doi: 10.1038/ncomms6672 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ghuman AS, & Fiez JA (2018). Parcellating the structure and function of the reading circuit. Proceedings of the National Academy of Sciences of the United States of America. doi: 10.1073/pnas.1814648115 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Gomez J, Barnett M, & Grill-Spector K (2019). Extensive childhood experience with Pokémon suggests eccentricity drives organization of visual cortex. Nature Human Behaviour, 3, 611–624. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Gomez J, Natu V, Jeska B, Barnett M, & Grill-Spector K (2018). Development differentially sculpts receptive fields across early and high-level human visual cortex. Nature Communications, 9, 788. doi: 10.1038/s41467-018-03166-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Grill-Spector K, & Malach R (2004). The human visual cortex. Annual Review of Neuroscience, 27, 649–677. [DOI] [PubMed] [Google Scholar]
- Grill-Spector K, Kushnir T, Edelman S, Itzchak Y, & Malach R (1998). Cue invariant activation in object-related areas of the human occipital lobe. Neuron, 21, 191–202. [DOI] [PubMed] [Google Scholar]
- Grill-Spector K, Weiner KS, Kay KN, & Gomez J (2017). The functional neuroanatomy of human face perception. Annual Review of Vision Science, 3, 167–196. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Grimaldi P, Saleem KS, & Tsao D (2016). Anatomical connections of the functionally defined “face patches” in the macaque monkey. Neuron, 90, 1325–1342. doi: 10.1016/j.neuron.2016.05.009 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hafed ZM, & Chen CY (2016). Sharper, stronger, faster upper visual field representation in primate superior colliculus. Current Biology, 26, 1647–1658. doi: 10.1016/j.cub.2016.04.059 [DOI] [PubMed] [Google Scholar]
- Hasson U, Levy I, Behrmann M, Hendler T, & Malach R (2002). Eccentricity bias as an organizing principle for human high-order object areas. Neuron, 34, 479–490. doi: 10.1016/s0896-6273(02)00662-1 [DOI] [PubMed] [Google Scholar]
- He C, Peelen MV, Han Z, Lin N, Caramazza A, & Bi Y (2013). Selectivity for large non-manipulable objects in scene-selective visual cortex does not require visual experience. Neuroimage, 79, 1–9. doi: 10.1016/j.neuroimage.2013.04.051 [DOI] [PubMed] [Google Scholar]
- Hellige JB, Laeng B, & Michimata C (2010). Processing asymmetries in the visual system. In Hugdahl R & Westerhausen K (Eds.), The two halves of the brain: Information processing in the cerebral hemispheres (pp. 379–415). Cambridge, MA: MIT Press. [Google Scholar]
- Henderson VW, Friedman RB, Teng EL, & Weiner JM (1985). Left hemisphere pathways in reading: Inferences from pure alexia without hemianopia. Neurology, 35, 962–968. [DOI] [PubMed] [Google Scholar]
- Hervais-Adelman A, Kumar U, Mishra RK, Tripathi VN, Guleria A, Singh JP, … Huettig F (2019). Learning to read recycles visual cortical networks without destruction. Science Advances, 5, eaax0262. doi: 10.1126/sciadv.aax0262 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hills CS, Pancaroglu R, Duchaine B, & Barton JJ (2015). Word and text processing in acquired prosopagnosia. Annals of Neurology, 78, 258–271. doi: 10.1002/ana.24437 [DOI] [PubMed] [Google Scholar]
- Jacobs RA (1997). Nature, nurture and the development of functional specializations: A computational approach. Psychonomic Bulletin and Review, 4, 299–309. [Google Scholar]
- Jacobs RA, & Jordan MI (1992). Computational consequences of a bias toward short connections. Journal of Cognitive Neuroscience, 4, 323–336. [DOI] [PubMed] [Google Scholar]
- Johnson MH (2005). Subcortical face processing. Nature Reviews Neuroscience, 6, 766–774. doi: 10.1038/nrn1766 [DOI] [PubMed] [Google Scholar]
- Johnson MH, Senju A, & Tomalski P (2015). The two-process theory of face processing: Modifications based on two decades of data from infants and adults. Neuroscience & Biobehavioral Reviews, 50C, 169–179. doi: 10.1016/j.neubiorev.2014.10.009 [DOI] [PubMed] [Google Scholar]
- Kanwisher N (2010). Functional specificity in the human brain: A window into the functional architecture of the mind. Proceedings of the National Academy of Sciences of the United States of America, 107, 11163–11170. doi: 10.1073/pnas.1005062107 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kanwisher N (2017). The Quest for the FFA and Where It Led. Journal of Neurosciences, 37, 1056–1061. doi: 10.1523/JNEUROSCI.1706-16.2016 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kanwisher N, McDermott J, & Chun MM (1997). The fusiform face area: A module in human extrastriate cortex specialized for face perception. Journal of Neurosciences, 17, 4302–4311. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kay KN, & Yeatman JD (2017). Bottom-up and top-down computations in word- and face-selective cortex. Elife, 6, doi: 10.7554/eLife.22341 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Konorski J (1967). Integrative activity of the brain. Chicago, IL: University of Chicago Press. [Google Scholar]
- Koriat A, & Norman J (1985). Reading rotated words. Journal of Experimental Psychology: Human Perception and Performance, 11, 490–508. [DOI] [PubMed] [Google Scholar]
- Leleu A, Rekow D, Poncet F, Schaal B, Durand K, Rossion B, & Baudouin JY (2019). Maternal odor shapes rapid face categorization in the infant brain. Developmental Science, e12877. doi: 10.1111/desc.12877 [DOI] [PubMed] [Google Scholar]
- Levy I, Hasson U, Avidan G, Hendler T, & Malach R (2001). Center-periphery organization of human object areas. Nature Neuroscience, 4, 533–539. doi: 10.1038/87490 [DOI] [PubMed] [Google Scholar]
- Levy J, Heller W, Banich MT, & Burton LA (1983). Are variations among right-handed individuals in perceptual asymmetries caused by characteristic arousal differences between hemispheres? Journal of Experimental Psychology: Human Perception and Performance, 9, 329–359. [DOI] [PubMed] [Google Scholar]
- Liu TT, Freud E, Patterson C, & Behrmann M (2019). Perceptual function and category-selective neural organization in children with resections of visual cortex. Journal of Neurosciences. doi: 10.1523/JNEUROSCI.3160-18.2019 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Liu TT, Nestor A, Vida MD, Pyles JA, Patterson C, Yang Y, … Behrmann M (2018). Successful reorganization of category-selective visual cortex following occipito-temporal lobectomy in childhood. Cell Reports, 24, 1113–1122. e1116. doi: 10.1016/j.celrep.2018.06.099 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Liu Y-C, Wang A-G, & Yen M-Y (2011). Seeing but not identifying: Pure alexia coincident with prosopagnosia in occipital arteriovenous malformation. Graefe’s Archive for Clinical and Experimental Ophthalmology, 249, 1087–1089. doi: 10.1007/s00417-010-1586-4 [DOI] [PubMed] [Google Scholar]
- Livingstone MS, Arcaro MJ, & Schade PF (2019). Cortex is cortex: Ubiquitous principles drive face-domain development. Trends in Cognitive Sciences, 23, 3–4. doi: 10.1016/j.tics.2018.10.009 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lochy A, de Heering A, & Rossion B (2019). The non-linear development of the right hemispheric specialization for human face perception. Neuropsychologia, 126, 10–19. doi: 10.1016/j.neuropsychologia.2017.06.029 [DOI] [PubMed] [Google Scholar]
- Mahon BZ, Anzellotti S, Schwarzbach J, Zampini M, & Caramazza A (2009). Category-specific organization in the human brain does not require visual experience. Neuron, 63, 397–405. doi: 10.1016/j.neuron.2009.07.012 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Mahon BZ, & Caramazza A (2011). What drives the organization of object knowledge in the brain? Trends Cogn Sci, 15, 97–103. doi: 10.1016/j.tics.2011.01.004 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Malach R, Reppas JB, Benson RR, Kwong KK, Jiang H, Kennedy WA, … Tootell RB (1995). Object-related activity revealed by functional magnetic resonance imaging in human occipital cortex. Proceedings of the National Academy of Sciences of the United States of America, 92, 8135–8139. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Martin A (2007). The representation of object concepts in the brain. Annual Review of Psychology, 58, 25–45. doi: 10.1146/annurev.psych.57.102904.190143 [DOI] [PubMed] [Google Scholar]
- Matsuo T, Kawasaki K, Kawai K, Majima K, Masuda H, Murakami H, … Hasegawa I (2015). Alternating zones selective to faces and written words in the human ventral occipitotemporal cortex. Cerebral Cortex, 25, 1265–1277. [DOI] [PubMed] [Google Scholar]
- Maurer U, & Mccandliss BD (2008). The development of visual expertise for words: The contribution of electrophysiology. In Grigorenko EL & Naples AJ (Eds.), Single word reading: Cognitive, behavioral and biological perspectives (pp. 43–64). Mahwah, NJ: Lawrence Erlbaum. [Google Scholar]
- Maurer U, Rossion B, & McCandliss BD (2008). Category specificity in early perception: Face and word n170 responses differ in both lateralization and habituation properties. Frontiers in Human Neuroscience, 2, 18. doi: 10.3389/neuro.09.018.2008 [DOI] [PMC free article] [PubMed] [Google Scholar]
- McCarthy G, Puce A, Belger A, & Allison T (1999). Electrophysiological studies of human face perception. II: Response properties of face-specific potentials generated in occipitotemporal cortex. Cerebral Cortex, 9, 431–444. doi: 10.1093/cercor/9.5.431 [DOI] [PubMed] [Google Scholar]
- McIntosh RD (2018). Simple dissociations for a higher-powered neuropsychology. Cortex, 103, 256–265. doi: 10.1016/j.cortex.2018.03.015 [DOI] [PubMed] [Google Scholar]
- McKone E, & Kanwisher N (2005). Does the human brain process objects of expertise like faces? A review of the evidence. In Dehaene S, Duhamel JR, Hauser M, & Rizzolatti G (Eds.), From monkey brain to human brain (pp. 339–356). Cambridge, MA: MIT Press. [Google Scholar]
- Mercure E, Dick F, Halit H, Kaufman J, & Johnson MH (2008). Differential lateralization for words and faces: Category or psychophysics? Journal of Cognitive Neuroscience, 20, 2070–2087. doi: 10.1162/jocn.2008.20137 [DOI] [PubMed] [Google Scholar]
- Monzalvo K, Fluss J, Billard C, Dehaene S, & Dehaene-Lambertz G (2012). Cortical networks for vision and language in dyslexic and normal children of variable socio-economic status. Neuroimage, 61, 258–274. [DOI] [PubMed] [Google Scholar]
- Morton J, & Johnson MH (1991). CONSPEC and CONLERN: A two-process theory of infant face recognition. Psychological Review, 98, 164–181. [DOI] [PubMed] [Google Scholar]
- Nobre A, Allison T, & McCarthy G (1994). Word recognition in the inferior temporal lobe. Nature, 372, 261–263. [DOI] [PubMed] [Google Scholar]
- Nordt M, Gomez J, Natu V, Jeska B, Barnett M, & Grill-Spector K (2019). Learning to read increases the informativeness of distributed ventral temporal responses. Cerebral Cortex, 29, 3124–3139. doi: 10.1093/cercor/bhy178 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Op de Beeck HP, Pillet I, & Ritchie JB (2019). Factors determining where category-selective areas emerge in visual cortex. Trends in Cognitive Science, 23, 784–797. doi: 10.1016/j.tics.2019.06.006 [DOI] [PubMed] [Google Scholar]
- Otsuka Y, Nakato E, Kanazawa S, Yamaguchi MK, Watanabe S, & Kakigi R (2007). Neural activation to upright and inverted faces in infants measured by near infrared spectroscopy. Neuroimage, 34, 399–406. doi: 10.1016/j.neuroimage.2006.08.013 [DOI] [PubMed] [Google Scholar]
- Parvizi J, Jacques C, Foster BL, Witthoft N, Rangarajan V, Weiner KS, & Grill-Spector K (2012). Electrical stimulation of human fusiform face-selective regions distorts face perception. Journal of Neurosciences, 32, 14915–14920. doi: 10.1523/JNEUROSCI.2609-12.2012 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Peelen MV, Glaser B, Vuilleumier P, & Eliez S (2009). Differential development of selectivity for faces and bodies in the fusiform gyrus. Developmental Science, 12, F16–25. doi: 10.1111/j.1467-7687.2009.00916.x [DOI] [PubMed] [Google Scholar]
- Petersen SE, Fox PT, Snyder AZ, & Raichle ME (1990). Activation of extrastriate and frontal cortical areas by visual words and word-like stimuli. Science, 249, 1041–1044. doi: 10.1126/science.2396097 [DOI] [PubMed] [Google Scholar]
- Pietrini P, Furey ML, Ricciardi E, Gobbini MI, Wu WH, Cohen L, … Haxby JV (2004). Beyond sensory images: Object-based representation in the human ventral pathway. Proceedings of the National Academy of Sciences of the United States of America, 101, 5658–5663. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Plaut DC, & Behrmann M (2011). Complementary neural representations for faces and words: A computational exploration. Cognitive Neuropsychology, 28, 251–275. doi: 10.1080/02643294.2011.609812 [DOI] [PubMed] [Google Scholar]
- Price CJ (2000). The anatomy of language: Contributions from functional neuroimaging. Journal of Anatomy, 197, 335–359. doi: 10.1046/j.1469-7580.2000.19730335.x [DOI] [PMC free article] [PubMed] [Google Scholar]
- Price CJ, & Devlin JT (2003). The myth of the visual word form area. Neuroimage, 19, 473–481. [DOI] [PubMed] [Google Scholar]
- Price CJ, & Devlin JT (2011). The interactive account of ventral occipitotemporal contributions to reading. Trends in Cognitive Science, 15, 246–253. doi: 10.1016/j.tics.2011.04.001 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Price CJ, & Mechelli A (2005). Reading and reading disturbance. Current Opinion in Neurobiology, 15, 231–238. [DOI] [PubMed] [Google Scholar]
- Puce A, Allison T, Asgari M, Gore JC, & McCarthy G (1996). Differential sensitivity of human visual cortex to faces, letterstrings, and textures: A functional magnetic resonance imaging study. The Journal of Neuroscience, 16, 5205–5215. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Puce A, Allison T, Gore JC, & McCarthy G (1995). Face-sensitive regions in human extrastriate cortex studied by functional MRI. Journal of Neurophysiology, 74, 1192–1199. [DOI] [PubMed] [Google Scholar]
- Rangarajan V, & Parvizi J (2016). Functional asymmetry between the left and right human fusiform gyrus explored through electrical brain stimulation. Neuropsychologia, 83, 29–36. doi: 10.1016/j.neuropsychologia.2015.08.003 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Reich L, Szwed M, Cohen L, & Amedi A (2011). A ventral visual stream reading center independent of visual experience. Current Biology, 21, 363–368. doi: 10.1016/j.cub.2011.01.040 [DOI] [PubMed] [Google Scholar]
- Richler JJ, & Gauthier I (2014). A meta-analysis and review of holistic face processing. Psychological Bulletin, 140, 1281–1302. doi: 10.1037/a0037004 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Roberts DJ, Lambon Ralph MA, Kim E, Tainturier MJ, Beeson PM, Rapcsak SZ, & Woollams AM (2015). Processing deficits for familiar and novel faces in patients with left posterior fusiform lesions. Cortex, 72, 79–96. doi: 10.1016/j.cortex.2015.02.003 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Roberts DJ, Woollams AM, Kim E, Beeson PM, Rapcsak SZ, & Lambon Ralph MA (2013). Efficient visual object and word recognition relies on high spatial frequency coding in the left posterior fusiform gyrus: Evidence from a case-series of patients with ventral occipito-temporal cortex damage. Cerebral Cortex, 23, 2568–2580. doi: 10.1093/cercor/bhs224 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Rosenthal G, Tanzer M, Simony E, Hasson U, Behrmann M, & Avidan G (2017). Altered topology of neural circuits in congenital prosopagnosia. eLife, 6. doi: 10.1101/100479 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Rossion B, Delvenne JF, Debatisse D, Goffaux V, Bruyer R, Crommelinck M, & Guerit JM (1999). Spatio-temporal localization of the face inversion effect: An event-related potentials study. Biological Psychology, 50, 173–189. [DOI] [PubMed] [Google Scholar]
- Rossion B, Dricot L, Devolder A, Bodart JM, Crommelinck M, De Gelder B, & Zoontjes R (2000). Hemispheric asymmetries for whole-based and part-based face processing in the human fusiform gyrus. Journal of Cognitive Neuroscience, 12, 793–802. [DOI] [PubMed] [Google Scholar]
- Rossion B, Joyce CA, Cottrell GW, & Tarr MJ (2003). Early lateralization and orientation tuning for face, word, and object processing in the visual cortex. Neuroimage, 20, 1609–1624. [DOI] [PubMed] [Google Scholar]
- Rubino C, Corrow SL, Corrow JC, Duchaine B, & Barton JJS (2016). Word and text processing in developmental prosopagnosia. Cognitive Neuropsychology, 33, 315–328. doi: 10.1080/02643294.2016.1204281 [DOI] [PubMed] [Google Scholar]
- Sadato N, Pascual-Leone A, Grafman J, Ibanez V, Deiber MP, Dold G, & Hallett M (1996). Activation of the primary visual cortex by Braille reading in blind subjects. Nature, 380, 526–528. doi: 10.1038/380526a0 [DOI] [PubMed] [Google Scholar]
- Saygin ZM, Osher DE, Koldewyn K, Reynolds G, Gabrieli JD, & Saxe RR (2012). Anatomical connectivity patterns predict face selectivity in the fusiform gyrus. Nature Neuroscience, 15, 321–327. doi: 10.1038/nn.3001 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Saygin ZM, Osher DE, Norton ES, Youssoufian DA, Beach SD, Feather J, … Kanwisher N (2016). Connectivity precedes function in the development of the visual word form area. Nature Neuroscience. doi: 10.1038/nn.4354 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Scherf KS, Behrmann M, Humphreys K, & Luna B (2007). Visual category-selectivity for faces, places and objects emerges along different developmental trajectories. Developmental Science, 10, F15–F30. doi: 10.1111/j.1467-7687.2007.00595.x [DOI] [PubMed] [Google Scholar]
- Schlaggar BL, & McCandliss BD (2007). Development of neural systems for reading. Annual Review of Neuroscience, 30, 475–503. doi: 10.1146/annurev.neuro.28.061604.135645 [DOI] [PubMed] [Google Scholar]
- Schwarzlose RF, Baker CI, & Kanwisher N (2005). Separate face and body selectivity on the fusiform gyrus. Journal of Neurosciences, 25, 11055–11059. doi: 10.1523/JNEUROSCI.2621-05.2005 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Sergent J, Ohta S, & MacDonald B (1992). Functional neuroanatomy of face and object processing. A positron emission tomography study. Brain, 115, 15–36. [DOI] [PubMed] [Google Scholar]
- Sergent J, & Signoret JL (1992). Functional and anatomical decomposition of face processing: Evidence from prosopagnosia and PET study of normal subjects. Philosophical Transactions of the Royal Society of London B Biological Sciences, 335, 55–61; discussion 61–52. [DOI] [PubMed] [Google Scholar]
- Shallice T (1988). From neuropsychology to mental structure. Cambridge, England: Cambridge University Press. [Google Scholar]
- Sheehan MJ, & Nachman MW (2014). Morphological and population genomic evidence that human faces have evolved to signal individual identity. Nature Communications, 5, 4800. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Shum J, Hermes D, Foster BL, Dastjerdi M, Rangarajan V, Winawer J, … Parvizi J (2013). A brain area for visual numerals. Journal of Neurosciences, 33, 6709–6715. doi: 10.1523/JNEUROSCI.4558-12.2013 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Sigurdardottir HM, Ivarsson E, Kristinsdottir K, & Kristjansson A (2015). Impaired recognition of faces and objects in dyslexia: Evidence for ventral stream dysfunction? Neuropsychology. doi: 10.1037/neu0000188 [DOI] [PubMed] [Google Scholar]
- Siuda-Krzywicka K, Bola L, Paplinska M, Sumera E, Jednorog K, Marchewka A, … Szwed M (2016). Massive cortical reorganization in sighted Braille readers. Elife, 5. doi: 10.7554/eLife.10762 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Srihasam K, Mandeville JB, Morocz IA, Sullivan KJ, & Livingstone MS (2012). Behavioral and anatomical consequences of early versus late symbol training in macaques. Neuron, 73, 608–619. doi: 10.1016/j.neuron.2011.12.022 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Srihasam K, Vincent JL, & Livingstone MS (2014). Novel domain formation reveals proto-architecture in inferotemporal cortex. Nature Neuroscience, 17, 1776–1783. doi: 10.1038/nn.3855 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Starrfelt R, Klargaard SK, Petersen A, & Gerlach C (2018). Reading in developmental prosopagnosia: Evidence for a dissociation between word and face recognition. Neuropsychology, 32, 138–147. doi: 10.1037/neu0000428 [DOI] [PubMed] [Google Scholar]
- Stephan BC, & Caine D (2009). Aberrant pattern of scanning in prosopagnosia reflects impaired face processing. Brain Cogn, 69, 262–268. doi: 10.1016/j.bandc.2008.07.015 [DOI] [PubMed] [Google Scholar]
- Stevens WD, Kravitz DJ, Peng CS, Henry Tessler M, & Martin A (2017). Privileged functional connectivity between the visual word form area and the language system. Journal of Neurosciences. doi: 10.1523/JNEUROSCI.0138-17.2017 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Striem-Amit E, Cohen L, Dehaene S, & Amedi A (2012). Reading with sounds: Sensory substitution selectively activates the visual word form area in the blind. Neuron, 76, 640–652. doi: 10.1016/j.neuron.2012.08.026 [DOI] [PubMed] [Google Scholar]
- Striem-Amit E, Wang X, Bi Y, & Caramazza A (2018). Neural representation of visual concepts in people born blind. Nature Communications, 9, 5250. doi: 10.1038/s41467-018-07574-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Suarez R, Paolino A, Fenlon LR, Morcom LR, Kozulin P, Kurniawan ND, & Richards LJ (2018). A pan-mammalian map of interhemispheric brain connections predates the evolution of the corpus callosum. Proceedings of the National Academy of Sciences of the United States of America, 115, 9622–9627. doi: 10.1073/pnas.1808262115 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Susilo T, Wright V, Tree JJ, & Duchaine B (2015). Acquired prosopagnosia without word recognition deficits. Cognitive Neuropsychology, 1–19. doi: 10.1080/02643294.2015.1081882 [DOI] [PubMed] [Google Scholar]
- Tanaka JW, & Farah MJ (1993). Parts and wholes in face recognition. Quarterly Journal of Experimental Psychology, 46, 225–245. [DOI] [PubMed] [Google Scholar]
- Tanaka JW, & Farah MJ (2003). The holistic representation of faces. In Rhodes G & Peterson MA (Eds.), Analytic and holistic processes in perception of faces, objects and scenes. New York, NY: Oxford University Press. [Google Scholar]
- Tarkiainen A, Helenius P, Hansen PC, Cornelissen PL, & Salmelin R (1999). Dynamics of letter string perception in the human occipitotemporal cortex. Brain, 122, 2119–2132. [DOI] [PubMed] [Google Scholar]
- Turati C, Simion F, Milani I, & Umilta C (2002). Newborns’ preference for faces: What is crucial? Developmental Psychology, 38, 875–882. doi: 10.1037//0012-1649.38.6.875 [DOI] [PubMed] [Google Scholar]
- Valentine T (1988). Upside-down faces: A review of the effects of inversion upon face recognition. British Journal of Psychology, 79, 471–491. [DOI] [PubMed] [Google Scholar]
- van den Hurk J, Van Baelen M, & Op de Beeck HM (2017). Development of visual category selectivity in ventral visual cortex does not require visual experience. Proceedings of the National Academy of Sciences of the United States of America, 114, 4501–4510. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Van der Haegen L, Cai Q, & Brysbaert M (2012). Colateralization of Broca’s area and the visual word form area in left-handers: FMRI evidence. Brain Language, 122, 171–178. doi: 10.1016/j.bandl.2011.11.004 [DOI] [PubMed] [Google Scholar]
- Vinckier F, Dehaene S, Jobert A, Dubus JP, Sigman M, & Cohen L (2007). Hierarchical coding of letter strings in the ventral stream: Dissecting the inner organization of the visual word-form system. Neuron, 55, 143–156. doi: 10.1016/j.neuron.2007.05.031 [DOI] [PubMed] [Google Scholar]
- von Kriegstein K, Kleinschmidt A, & Giraud AL (2005). Voice recognition and cross-modal responses to familiar speakers’ voices in prosopagnosia. Cerebral Cortex, 16, 1314–1322. [DOI] [PubMed] [Google Scholar]
- von Kriegstein K, Kleinschmidt A, Sterzer P, & Giraud AL (2005). Interaction of face and voice areas during speaker recognition. Journal of Cognitive Neuroscience, 17, 367–376. [DOI] [PubMed] [Google Scholar]
- Wang X, Caramazza A, Peelen MV, Han Z, & Bi Y (2015). Reading without speech sounds: VWFA and its connectivity in the congenitally deaf. Cerebral Cortex, 25, 2416–2426. doi: 10.1093/cercor/bhu044 [DOI] [PubMed] [Google Scholar]
- Weiner KS, Barnett MA, Lorenz S, Caspers J, Stigliani A, Amunts K, … Grill-Spector K (2016). The cytoarchitecture of domain-specific regions in human high-level visual cortex. Cerebral Cortex. doi: 10.1093/cercor/bhw361 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Weiner KS, Golarai G, Caspers J, Chuapoco MR, Mohlberg H, Zilles K, … Grill-Spector K (2014). The mid-fusiform sulcus: A landmark identifying both cytoarchitectonic and functional divisions of human ventral temporal cortex. Neuroimage, 84, 453–465. doi: 10.1016/j.neuroimage.2013.08.068 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Wilmer JB, Germine LT, & Nakayama K (2014). Face recognition: a model specific ability. Front Hum Neurosci, 8, 769. doi: 10.3389/fnhum.2014.00769 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Wong AC, Wong YK, Lui KFH, Ng TYK, & Ngan VSH (2019). Sensitivity to configural information and expertise in visual word recognition. Journal of Experimental Psychology: Human Perception and Performance, 45, 82–99. doi: 10.1037/xhp0000590 [DOI] [PubMed] [Google Scholar]
- Woollams AM, Ralph MA, Plaut DC, & Patterson K (2007). SD-squared: On the association between semantic dementia and surface dyslexia. Psychological Review, 114, 316–339. doi: 10.1037/0033-295X.114.2.316 [DOI] [PubMed] [Google Scholar]
- Woodhead ZV, Wise RJ, Sereno M, & Leech R (2011). Dissociation of sensitivity to spatial frequency in word and face preferential areas of the fusiform gyrus. Cerebral Cortex, 21, 2307–2312. doi: 10.1093/cercor/bhr008 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Yin RK (1969). Looking at upside-down faces. Journal of Experimental Psychology, 81, 141–145. [Google Scholar]
- Yovel G, & Kanwisher N (2005). The neural basis of the behavioral face-inversion effect. Current Biology, 15, 2256–2262. [DOI] [PubMed] [Google Scholar]