Author manuscript; available in PMC: 2013 Mar 1.
Published in final edited form as: Neuroimage. 2011 Dec 22;60(1):661–672. doi: 10.1016/j.neuroimage.2011.12.031

Cortical plasticity for visuospatial processing and object recognition in deaf and hearing signers

Jill Weisberg 1, Daniel S Koo 1, Kelly L Crain 1,2, Guinevere F Eden 1
PMCID: PMC3288167  NIHMSID: NIHMS346630  PMID: 22210355

Abstract

Experience-dependent plasticity in deaf participants has been shown in a variety of studies focused on either the dorsal or ventral aspects of the visual system, but both systems have never been investigated in concert. Using functional magnetic resonance imaging (fMRI), we investigated functional plasticity for spatial processing (a dorsal visual pathway function) and for object processing (a ventral visual pathway function) concurrently, in the context of differing sensory (auditory deprivation) and language (use of a signed language) experience. During scanning, deaf native users of American Sign Language (ASL), hearing native ASL users, and hearing participants without ASL experience attended to either the spatial arrangement of frames containing objects or the identity of the objects themselves. These two tasks revealed the expected dorsal/ventral dichotomy for spatial versus object processing in all groups. In addition, the object identity matching task contained both face and house stimuli, allowing us to examine category-selectivity in the ventral pathway in all three participant groups. When contrasting the groups we found that deaf signers differed from the two hearing groups in dorsal pathway parietal regions involved in spatial cognition, suggesting sensory experience-driven plasticity. Group differences in the object processing system indicated that responses in the face-selective right lateral fusiform gyrus and anterior superior temporal cortex were sensitive to a combination of altered sensory and language experience, whereas responses in the amygdala were more closely tied to sensory experience. By selectively engaging the dorsal and ventral visual pathways within participants in groups with different sensory and language experiences, we have demonstrated that these experiences affect the function of both of these systems, and that certain changes are more closely tied to sensory experience, while others are driven by the combination of sensory and language experience.

Keywords: plasticity, dorsal stream, ventral stream, spatial processing, face processing, deaf, sign language

1. Introduction

The brain’s functional architecture shows remarkable reorganization following altered sensory experiences. Owing to the importance of visual motion in signed languages, particularly motion of the hands to convey linguistic information, studies of congenitally deaf individuals who use American Sign Language (ASL) have often focused on changes in the dorsal visual stream associated with visual motion perception (Bavelier et al., 2001; Bosworth & Dobkins, 2002a, 2002b; Fine, Finney, Boynton, & Dobkins, 2005; Finney & Dobkins, 2001; Finney, Fine, & Dobkins, 2001; Neville & Lawson, 1987a). These studies sought to tease apart differences in visual motion perception attributable to experience with ASL from those due to altered early sensory experience (i.e. deafness) by including groups of hearing signers as well as hearing non-signers. Recent studies have shown that although activity in primary visual cortex is comparable across these groups (Bavelier et al., 2001; Fine, Finney, Boynton, & Dobkins, 2005) differences arise in dorsal brain regions associated with visual motion processing and attention (Bavelier et al., 2001). For example, both deaf and hearing participants who acquired ASL early in life showed greater MT/V5 activation in the left than in the right hemisphere, whereas hearing subjects without ASL experience showed the opposite pattern. This finding, together with earlier visual field and ERP studies (Bosworth & Dobkins, 1999; Brozinsky & Bavelier, 2004; Neville & Lawson, 1987a, 1987b), indicates that early acquisition of a visuospatial language can bias cortical lateralization for perception of visual motion (Neville et al., 1987b, Bavelier et al., 2001).

At the same time, other cortical changes seem to stem more from sensory experience than use of ASL. Bavelier and colleagues (Bavelier et al. 2001) reported that area MT/V5 was more active when deaf signers attended to peripherally than to centrally presented visual motion stimuli (see also Bavelier et al., 2000), whereas in hearing groups (both signing and non-signing) area MT/V5 showed a preference for stimuli presented to the central visual field. In addition, regardless of the attended location, posterior parietal and superior temporal cortices showed larger responses in deaf participants than in either hearing group. Such differences were interpreted as driven by sensory experience because hearing signers do not show this pattern of responses.

While these data provide compelling evidence of sensory and language experience-driven plasticity, recently there has been speculation as to whether such plasticity is constrained to the dorsal stream or manifests in the ventral visual pathway as well. Experience-dependent changes in ventral visual cortex would be predicted based on evidence that deafness and/or ASL usage impacts behavioral performance during visual object perception, particularly for faces. Faces serve a unique functional role in deaf communication as deaf individuals are disproportionately dependent on facial cues for decoding affective communicative nuances that are conveyed by prosody and tone of voice for hearing persons. In addition, while visually monitoring the periphery to attend to the hand motions and gestures that comprise ASL, signers (deaf and hearing) attend to faces foveally (De Filippo & Lansing, 2006; Siple, 1978) to distinguish not only affective, but also lexical, semantic, and syntactic information carried by facial expressions involving eyebrow, eye, and mouth movements. Such linguistic facial expressions are unique to sign languages and distinct from facial expressions that code emotional information (see Corina, Bellugi, & Reilly, 1999; Reilly & Bellugi, 1996 for reviews).

Several investigations indicate that deaf participants outperform hearing individuals who have no sign language experience on tests of face recognition and discrimination (Arnold & Murray, 1998; Bettger, Emmorey, McCullough, & Bellugi, 1997; McCullough & Emmorey, 1997; Bellugi et al., 1990) such as the Facial Recognition Test (Benton, Sivan, Hamsher, Varney, & Spreen, 1983). These enhancements appear to be specific to faces, rather than a general improvement in visual processing (Bettger, Emmorey, McCullough, & Bellugi, 1997). In most of these studies deaf and hearing signers showed comparable performance enhancements compared to hearing non-signers, leading to the conclusion that superior face processing abilities appeared to be more related to ASL experience than deafness (Arnold & Murray, 1998; Bettger, Emmorey, McCullough, & Bellugi, 1997; McCullough & Emmorey, 1997). That conclusion is supported by an absence of any face processing advantage in deaf individuals raised with oral communication, which involves speech reading rather than use of signs (Parasnis, Samar, Bettger, & Sathe, 1996; Bettger, Emmorey, McCullough, & Bellugi, 1997).

The brain basis for enhanced performance of face processing in deaf users of sign language is poorly understood. Visual field studies suggested a redistribution of neural resources for face processing in deaf individuals, with a shift toward increased involvement of the left hemisphere (Corina, 1989; Vargha-Khadem, 1983). Thus, the linguistic salience of faces may lead to altered laterality for face processing in deaf signers. Few studies to date have directly examined the physiology of ventral stream function in deaf individuals. One required monitoring colors (Armstrong, Neville, Hillyard, & Mitchell, 2002) and another shapes (Shibata, Kwok, Zhong, Shrier, & Numaguchi, 2001), and both of these reported no differences in brain response between deaf signing and hearing non-signing groups. However, fMRI investigations of face perception by McCullough and colleagues (McCullough et al., 2005; Emmorey & McCullough, 2009) demonstrated differences in cortical activity between hearing and deaf participants in face-selective ventral temporal cortices. In these studies, hearing non-signers demonstrated bilateral activation in the fusiform gyrus, trending towards a rightward asymmetry in response to photographs of faces expressing either linguistic or affective information (contrasted to a gender judgment baseline task). In contrast, deaf subjects showed greater left than right fusiform gyrus activity for both types of facial expressions, indicating a left hemisphere bias for processing facial expressions (McCullough et al., 2005), as in the study by Corina (1989). Hearing native signers did not show the left hemisphere bias in this region (Emmorey & McCullough, 2009), indicating that ASL experience alone could not account for the leftward shift observed in the deaf group.

These reports reveal that early sensory experience not only changes the functional anatomy of dorsal stream regions, which were thought to be particularly malleable, but also induces changes in the ventral visual stream. However, none of the studies of experience-based cortical plasticity in deaf individuals described above has examined changes in both the dorsal and ventral processing streams simultaneously within the same participants. Further, it is unclear whether the ventral pathway differences identified by McCullough et al. (2005) and Emmorey & McCullough (2009) resulted from the communicative and affective nature of the stimuli (i.e. linguistic and emotional expressions), or rather, if the fundamental substrate for face processing was altered. That is, are physiological differences also found in deaf signers for non-expressive faces like those used in behavioral studies of deaf participants (Arnold & Murray, 1998; Bettger, Emmorey, McCullough, & Bellugi, 1997; McCullough & Emmorey, 1997; Bellugi et al., 1990), and like those used in brain imaging studies with hearing individuals (Gauthier, Tarr, Anderson, Skudlarski, & Gore, 1999; Haxby et al., 1994; Haxby et al., 1999; Kanwisher, McDermott, & Chun, 1997; Yovel & Kanwisher, 2004)? Finally, it is unknown if the differences observed by McCullough et al. reflect a more basic change in the physiology of object perception. That is, do similar group differences arise for other, non-face objects, known to activate discrete ventral stream regions during perception and identification, such as houses?

The present study sought to explore functional changes in dorsal and ventral stream tasks (spatial processing and object processing, respectively), and to account for them with regard to sensory experience, language experience, or their interaction. Thus, we examined the claim of disparate plasticity in these two visual pathways. Furthermore, we sought to clarify the determinants of plasticity related to face perception in the ventral visual stream by examining whether non-face objects (i.e. houses) elicit similar differences in ventral temporal cortex as do faces. To accomplish these goals we modified a paradigm previously shown to successfully differentiate the dorsal and ventral visual pathways in hearing populations (Grady et al., 1992; Grady et al., 1994; Haxby et al., 1991; Haxby et al., 1994). During scanning, hearing non-signers, deaf signers, and hearing signers performed tasks that involved either spatial processing (matching images based on their spatial arrangement) or object processing (matching images based on their identity), thereby preferentially tasking the dorsal and ventral visual pathways respectively. Just as in previous studies (e.g. Haxby et al., 1994), the actual stimuli used for engaging these two aspects of the visual system were the same and only the task requirements changed, with subjects attending to the spatial arrangement or the object identity of the stimuli in different blocks. We predicted that dorsal stream regions associated with spatial cognition (e.g. parietal regions) would exhibit greater activity in deaf and possibly hearing signers compared to non-signers during the spatial matching task. Stimuli included pictures of faces and houses, which are object categories known to elicit differential activity within the ventral visual stream (Kanwisher, McDermott, & Chun, 1997; Haxby et al., 1999; Williams, Morris, McGlone, Abbott, & Mattingley, 2004), allowing us to test our second hypothesis: we anticipated group differences for the processing of faces, but not for non-face objects (i.e. houses) in the ventral visual stream because face perception holds special significance for deaf individuals.

2. Materials and Methods

2.1 Subjects

Ten hearing participants with no sign language experience, 15 congenitally deaf native users of American Sign Language, and 11 hearing native users of American Sign Language participated in the study. The Georgetown University and the Gallaudet University Institutional Review Boards approved all experimental procedures and we obtained written informed consent from all participants. Five deaf and three hearing signers were excluded from data analysis because either they moved excessively (> 0.20 voxels) during scanning or had poor behavioral performance on the tasks (< 75% correct). Thus, data analysis included ten hearing non-signing (five female), ten deaf signing (six female), and eight hearing signing individuals (seven female). Inclusion of the hearing signing group controlled for sensory experience, which they share with hearing non-signers but not with the deaf group; this group also controlled for visuospatial language experience, which they share with the deaf group, but not with hearing non-signers. All participants were healthy adults with no neurological or psychiatric illness and were medication-free at the time of testing. All deaf participants considered their deafness genetic, all were born to two deaf parents, and all had binaural hearing loss with > 80 dB loss in the better ear. All hearing signers had at least one deaf parent who used ASL with their child from birth, and all had worked or were currently working as interpreters for the deaf community. A live 10-minute interview developed within the laboratory was administered to all signers by an ASL-fluent signer to verify that hearing and deaf signers were equally proficient in ASL and ensure group homogeneity.

Two independent raters (one deaf, one hearing) evaluated the videotaped interview using commonly accepted ASL features as scoring criteria (e.g. spatial or shape classifiers, facial grammatical markers, etc.). The scores from the two raters were significantly correlated (r = .50; p < .001) and revealed that the groups were equally proficient. We administered a battery of neuropsychological tests to all subjects to determine cognitive performance on measures relevant to our experimental tasks. Performance IQ was measured with the Matrix Reasoning and Block Design subtests of the Wechsler Abbreviated Scale of Intelligence (Wechsler, 1999). The Facial Recognition Test (Benton, Sivan, Hamsher, Varney, & Spreen, 1983) and the Judgment of Line Orientation Test (Benton, Sivan, Hamsher, Varney, & Spreen, 1983) were included to assess face and spatial processing, respectively (see Table 1). As part of a larger study all subjects participated in additional fMRI and behavioral tests not relevant for the current report.

Table 1.

Summary of participant group characteristics, standardized neuropsychological test scores, and behavioral data collected during scanning.¹

                                        Hearing Non-Signers   Hearing Signers    Deaf Signers
N                                       10 (5 female)         8 (7 female)       10 (6 female)
Age                                     23.2 (2.5)            28.5 (6.3)**       23.0 (2.9)
Performance IQ                          119.8 (7.4)           116.7 (11.5)       112.6 (8.6)
Benton Facial Recognition Test          114.4 (13.1)          119.4 (6.1)        118.7 (10.4)
Benton Judgment of Line Orientation     109.9 (8.0)           109.1 (8.8)        108.3 (4.0)

Reaction Times (ms)
Spatial Matching
    Faces                               1454 (85.4)           1468 (60.9)        1378 (40.7)
    Houses                              1462 (93.7)           1449 (50.2)        1392 (48.4)
Object Matching
    Faces                               1527 (80.0)           1533 (65.1)        1283 (59.9)
    Houses                              1402 (66.9)           1447 (70.8)        1260 (56.8)

Accuracy (% correct)
Spatial Matching
    Faces                               95 (1.7)              93 (2.7)           88 (1.5)
    Houses                              90 (1.9)              91 (1.9)           86 (1.4)
Object Matching
    Faces                               92 (1.6)              94 (1.7)           91 (2.0)
    Houses                              90 (2.0)              88 (2.6)           89 (1.4)

¹ Group-averaged median reaction times and mean accuracy scores were equivalent across groups for all tasks and conditions (F < 2). Participants were slower and more accurate performing matching tasks with Faces than with Houses (p < .01). Numbers in parentheses indicate the standard error of the mean.
** Hearing non-signers and deaf signers were slightly younger than hearing signers (p < .01).

2.2. Stimuli and tasks

During scanning, participants performed a simultaneous match-to-sample task, attending to either spatial location or object identity in separate blocks. Consistent with methods used previously to examine dorsal and ventral stream function (Haxby et al., 1994), we used the same stimuli for these two tasks (with unique exemplars for each task, counterbalanced across subjects). The advantage of this design is that visually, the tasks are identical, and the attended feature (spatial location versus object identity) is the only difference between tasks. The stimuli consisted of three large white squares outlined in black, arranged in a triangular configuration on a grey background (Figure 1). Either the right or left edge of the top square had a thickened black line. In the bottom two squares, the thickened black line always appeared along the bottom edge. Each square contained a black and white photograph of an object (faces or houses, the same category was presented for all three squares). The top square and one bottom square depicted the same exemplar from different views and the third square depicted a different exemplar from the same category.

Figure 1.

Examples of the simultaneous match-to-sample tasks. Blocks of Spatial or Object Matching depicted either Faces (A) or Houses (B) in the embedded squares. During Spatial Matching subjects decided which bottom square contained the photograph in the same position relative to the thick line as in the top square. During Object Matching, subjects decided which of the two bottom squares depicted the same object as the sample in the top square, regardless of view.

Spatial Matching

The Spatial Matching task required participants to indicate via a button press which bottom square contained the photograph in the same position, relative to the thick line, as in the top square. Instructions were to locate the thick line in the top square (along the left or right) and mentally rotate the square to decide which bottom square (thick line along the bottom edge) was a match. Faces and houses were depicted in the squares during these trials, but were not relevant to the task.

Object Matching

During the Object Matching task participants indicated via a button press which bottom square contained a picture that matched the identity of the object in the top square. Objects were faces with neutral expressions in some blocks, and houses in others. The thick lines were present, but not relevant to the task.

During Spatial and Object Matching subjects held a one-button response device in each hand and pressed the button corresponding to their choice for each trial. For the ten trials in each block, correct responses were equally divided between left and right, in random order. Each trial remained visible until the participant responded or until 2500 ms had elapsed, whichever came first, and was then replaced by an inter-trial interval of 500 ms or longer to yield a total trial length of 3 s. Each block contained only pictures of faces or pictures of houses, preceded by a 3 s cue instructing participants to attend to either “Object” or “Location”.
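To make the trial timing concrete, here is a minimal sketch (Python; the constants come from the description above, the helper names are hypothetical) of one ten-trial block and the fixed 3 s trial envelope:

```python
import random

TRIAL_LENGTH_MS = 3000   # every trial occupies 3 s in total
MAX_RESPONSE_MS = 2500   # stimulus stays up until response or 2500 ms, whichever comes first
MIN_ITI_MS = 500         # inter-trial interval is at least 500 ms
TRIALS_PER_BLOCK = 10

def make_block(seed=None):
    """Correct sides for one block: ten trials, half left and half right, in random order."""
    rng = random.Random(seed)
    sides = ["left"] * (TRIALS_PER_BLOCK // 2) + ["right"] * (TRIALS_PER_BLOCK // 2)
    rng.shuffle(sides)
    return sides

def trial_timing(reaction_time_ms):
    """Split a trial into stimulus time and ITI so the two always sum to 3000 ms."""
    stimulus_ms = min(reaction_time_ms, MAX_RESPONSE_MS)
    iti_ms = TRIAL_LENGTH_MS - stimulus_ms   # never falls below MIN_ITI_MS by construction
    return stimulus_ms, iti_ms

print(make_block(seed=1))
print(trial_timing(1400))   # e.g. a 1400 ms response -> (1400, 1600)
```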

Baseline Tasks

During the active baseline blocks, preceded by the cue “Alternate”, subjects viewed three squares, each with a thick line along the bottom edge and each containing identical phase-scrambled images of either a face or a house photograph. The unrecognizable phase-scrambled images were scrambled in the Fourier domain and preserved the contrast, luminance and frequencies of the original images. Active baseline Scrambled blocks contained 10 unique trials, and participants were instructed to alternate left and right button presses for each trial. In addition, a 12s period of Fixation followed each cycle, comprising a resting passive baseline.
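Phase scrambling of this sort is a standard Fourier-domain manipulation. The sketch below (Python/NumPy; the file path and function name are hypothetical, and the authors' exact implementation is not specified) randomizes the phase spectrum while keeping the amplitude spectrum, which preserves luminance, contrast, and spatial-frequency content while rendering the image unrecognizable.

```python
import numpy as np
from PIL import Image

def phase_scramble(img_path, seed=0):
    """Scramble an image in the Fourier domain: keep the amplitude spectrum,
    randomize the phase."""
    rng = np.random.default_rng(seed)
    img = np.asarray(Image.open(img_path).convert("L"), dtype=float)

    # Forward 2D FFT: separate amplitude and phase.
    spectrum = np.fft.fft2(img)
    amplitude = np.abs(spectrum)

    # Take the phase of FFT'd white noise; its conjugate symmetry keeps the
    # inverse transform (essentially) real-valued.
    noise = rng.standard_normal(img.shape)
    random_phase = np.angle(np.fft.fft2(noise))

    scrambled = np.real(np.fft.ifft2(amplitude * np.exp(1j * random_phase)))
    return scrambled

# Example usage (hypothetical stimulus path):
# scrambled_face = phase_scramble("stimuli/face_01.png")
```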

Inter-trial intervals for all active conditions depicted capital letters in the center of the screen denoting the current task: an F (Faces) or an H (Houses) during Object matching blocks, an L (Location) during Spatial matching blocks, and an A (Alternate) during the scrambled image blocks. Each eight minute imaging run contained two blocks each of Object Matching with Faces, Object Matching with Houses, Spatial Matching with Faces, and Spatial Matching with Houses, with task and stimulus order counterbalanced across runs and subjects. Stimuli were presented and responses recorded (accuracy and reaction time) using Presentation software (Neurobehavioral Systems, Albany, CA) running in Windows XP. Subjects viewed stimuli via a mirror mounted on the head coil that reflected the image from a screen positioned behind them at the end of the magnet bore.

Prior to scanning, all subjects received instruction in their native language and practiced with stimuli not presented during scanning. The number of practice trials was approximately equivalent to one imaging run. Due to the additional complexity of instructions for the Spatial Matching task, additional practice was given as required to attain criterion performance (> 75% correct).

2.3 Imaging parameters

Participants were scanned on a 3T Siemens Trio scanner using gradient-echo, echo-planar imaging (50 contiguous axial slices, FOV = 209 mm, TR = 3 s, TE = 40 ms, flip angle = 90 degrees, 64 × 64 matrix, yielding 3 × 3 × 3.25 mm voxels). A T1-weighted anatomical scan was collected in the same scanning session (160 contiguous sagittal slices, FOV = 256 mm, TR = 1600 ms, TE = 4.38 ms, flip angle = 15 degrees, 256 × 256 matrix, yielding 1 mm³ voxels).

2.4 Behavioral data analysis

Neuropsychological test scores and demographic measures for the three participant groups were entered into separate one-way analysis of variance (ANOVA) tests. Behavioral data collected during scanning were assessed for each trial type to derive accuracy and median reaction times for each subject. These data were then averaged for each group and entered into a 3 (Group: hearing non-signers, deaf signers, hearing signers) × 2 (Task: Spatial Matching, Object Matching) × 2 (Category: Face, House) ANOVA.
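As an illustration of the aggregation step, the following sketch (Python/pandas, with toy data standing in for the real trial log) derives each subject's median reaction time and accuracy per condition; the resulting table is what would feed the 3 (Group) × 2 (Task) × 2 (Category) mixed ANOVA.

```python
import pandas as pd

# Toy stand-in for the trial-level log: one row per trial with subject, group,
# task, category, reaction time (ms), and correct (0/1).
trials = pd.DataFrame({
    "subject":  ["s01", "s01", "s01", "s01", "s02", "s02", "s02", "s02"],
    "group":    ["deaf signer"] * 4 + ["hearing non-signer"] * 4,
    "task":     ["Spatial", "Spatial", "Object", "Object"] * 2,
    "category": ["Face", "House", "Face", "House"] * 2,
    "rt":       [1380, 1390, 1280, 1260, 1450, 1460, 1530, 1400],
    "correct":  [1, 1, 1, 0, 1, 0, 1, 1],
})

# Per-subject, per-condition summaries: median RT and percent correct.
summary = (
    trials.groupby(["subject", "group", "task", "category"])
          .agg(median_rt=("rt", "median"),
               accuracy=("correct", lambda c: 100 * c.mean()))
          .reset_index()
)

# 'summary' is the table that the 3 x 2 x 2 ANOVA would be run on
# (Group between subjects; Task and Category within subjects).
print(summary)
```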

2.5 Imaging data analysis

All imaging data pre-processing and analysis were performed using AFNI (Cox, 1996). For each participant, functional data were motion corrected, spatially smoothed using a 4.5 mm FWHM 3D Gaussian filter, and converted to percent signal change. Six regressors of interest (Object Matching for Faces; Object Matching for Houses; Spatial Matching for Faces; Spatial Matching for Houses; Scrambled Faces, Scrambled Houses) were convolved with a gamma variate estimate of the hemodynamic response and multiple regression was performed on each voxel’s time series. Cue screens for each task were included in these regressors. Fixation period time points comprised the baseline for the regression model. The beta weights generated by this analysis for each participant for each condition were spatially normalized to standardized space (Talairach & Tournoux, 1988) and entered into random-effects ANOVAs for within- and between-group analyses (each subject’s high resolution anatomical scan was transformed to the TT_27 template provided in AFNI, and this transformation was applied to the functional data). Because we found activity to be stronger than expected for the active baseline condition (Scrambled images), most likely due to the novelty and complexity of the stimuli, we did not focus on this condition as a control, as originally intended. Thus, although the Scrambled image blocks were modeled with a regressor at the individual subject level, all results discussed in this report were derived from direct contrasts between Spatial and Object Matching, or between Face and House Matching.
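For illustration, the per-voxel regression described above can be sketched in NumPy. This is a simplified stand-in with toy data, not a reproduction of the AFNI pipeline actually used; the gamma-variate parameters (p = 8.6, q = 0.547) are the commonly cited defaults of AFNI's GAM basis, and all variable names are hypothetical.

```python
import numpy as np

TR = 3.0  # seconds, matching the acquisition

def gamma_hrf(tr=TR, duration=24.0, p=8.6, q=0.547):
    """Gamma-variate HRF sampled at the TR; peaks near p*q seconds."""
    t = np.arange(0.0, duration, tr)
    hrf = (t / (p * q)) ** p * np.exp(p - t / q)
    return hrf / hrf.sum()

def build_design(boxcars):
    """Convolve each condition's boxcar (n_timepoints,) with the HRF and add
    an intercept; fixation periods are left unmodeled and serve as baseline."""
    hrf = gamma_hrf()
    columns = [np.convolve(b, hrf)[: len(b)] for b in boxcars]
    columns.append(np.ones(len(boxcars[0])))   # intercept
    return np.column_stack(columns)

def fit_glm(design, data):
    """OLS beta weights for every voxel; data is (n_timepoints, n_voxels),
    e.g. percent-signal-change time series."""
    betas, *_ = np.linalg.lstsq(design, data, rcond=None)
    return betas

# Toy example: 160 timepoints, 6 task regressors, 1000 voxels of noise.
rng = np.random.default_rng(0)
boxcars = [rng.integers(0, 2, 160).astype(float) for _ in range(6)]
data = rng.standard_normal((160, 1000))
betas = fit_glm(build_design(boxcars), data)
print(betas.shape)   # (7, 1000): six conditions plus intercept
```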

2.5.1 Within-group analysis

For each of the three groups, we identified brain regions associated with Spatial or Object processing with a random effects analysis at the group level (p < .005) by directly contrasting Spatial versus Object Matching. To ensure that these regions reflected signal increases during the task of interest (rather than decreases from baseline), we masked this contrast by voxels that were more active during the task conditions than during the Fixation passive control condition (i.e. Spatial Matching > Fixation or Object Matching > Fixation, respectively). Finally, to protect against false positives, we performed Monte Carlo simulations to determine the minimum cluster extent surviving a corrected statistical threshold of p < .05 for each effect (AlphaSim software by B. Douglas Ward, part of the AFNI analysis package).
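The cluster-extent correction can be illustrated with a toy Monte Carlo simulation in the spirit of AlphaSim (a sketch, not the AFNI code; grid size, smoothness, and iteration count are illustrative): smoothed Gaussian-noise volumes are thresholded at the voxelwise p, the largest surviving cluster is recorded per iteration, and the 95th percentile of those maxima gives the minimum cluster extent for a corrected p < .05.

```python
import numpy as np
from scipy import ndimage, stats

def min_cluster_extent(shape=(40, 48, 40), fwhm_vox=1.5, voxel_p=0.005,
                       n_iter=1000, alpha=0.05, seed=0):
    """Monte Carlo estimate of the smallest cluster size keeping the
    family-wise false-positive rate below alpha under smoothed Gaussian noise."""
    rng = np.random.default_rng(seed)
    sigma = fwhm_vox / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # FWHM -> Gaussian sigma
    z_cut = stats.norm.isf(voxel_p)                          # one-tailed voxelwise cutoff
    max_sizes = np.empty(n_iter, dtype=int)

    for i in range(n_iter):
        noise = ndimage.gaussian_filter(rng.standard_normal(shape), sigma)
        noise /= noise.std()                                 # re-standardize after smoothing
        labels, n = ndimage.label(noise > z_cut)
        sizes = ndimage.sum(noise > z_cut, labels, range(1, n + 1)) if n else [0]
        max_sizes[i] = int(np.max(sizes))

    # Cluster extent exceeded by chance in fewer than alpha of the simulations.
    return int(np.percentile(max_sizes, 100 * (1 - alpha)))

# print(min_cluster_extent(n_iter=200))   # coarse estimate, for illustration only
```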

To determine regions differentially responsive to each object category, voxels surviving the analysis described above for the contrast of Object Matching > Spatial Matching were further interrogated to identify those significantly modulated by Category (p < .05 for the contrast of Face Matching versus House Matching).

2.5.2 Between-group analysis

Clusters surviving the within-group random-effects analysis described above served as regions of interest (ROIs) for performing between-group statistical analysis. For each contrast of interest, we first examined ROIs representing similar anatomical regions across all three groups, and then examined those specific to the deaf signing or hearing non-signing group. ROIs were considered anatomically similar if, across groups, the clusters overlapped or their local maxima were located within 15 mm of each other. Clusters present only in the deaf signing or only in the hearing non-signing group were applied as ROIs to the other group(s). This approach enabled between-group comparisons even in regions that did not emerge in all groups via within-group analysis. For every ROI, we submitted the extracted MRI signal from each individual’s functional scans to a mixed-effects ANOVA as described below.
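The anatomical-similarity criterion amounts to an overlap check plus a Euclidean-distance test on the cluster maxima. A minimal sketch (hypothetical function; the example coordinates are the left IPL maxima from Table 2):

```python
import numpy as np

def rois_match(max_a, max_b, mask_a=None, mask_b=None, max_dist_mm=15.0):
    """ROIs count as anatomically similar if their local maxima (x, y, z in mm,
    Talairach space) lie within 15 mm of each other, or their voxel masks overlap."""
    close = np.linalg.norm(np.asarray(max_a, float) - np.asarray(max_b, float)) <= max_dist_mm
    overlap = bool(np.any(mask_a & mask_b)) if mask_a is not None and mask_b is not None else False
    return close or overlap

# Deaf-group left IPL maximum vs. hearing non-signers' left IPL maximum (Table 2):
print(rois_match((-52, -22, 35), (-46, -28, 29)))   # True: roughly 10 mm apart
```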

To address our hypothesis concerning plasticity in the dorsal visual pathway, we entered the data from ROIs that survived statistical thresholding in the Spatial Matching > Object Matching contrast into a 2 (Group: deaf signers, hearing non-signers) × 2 (Task: Spatial Matching, Object Matching) ANOVA comparing task-related activity between the deaf and hearing non-signing groups. For this analysis we collapsed across Category (faces and houses), as our hypothesis concerned group differences during Spatial Matching and object category is not relevant. Regions exhibiting group differences (main effect of Group or interaction with Group) were further interrogated with two additional pair-wise ANOVAs comparing 1) deaf and hearing signers, and 2) hearing signers and non-signers. These latter tests allowed us to determine whether differences between the deaf and hearing non-signing groups were related to sensory or language experience (Bavelier et al., 2001).

To address our hypothesis regarding plasticity in the ventral visual stream, we conducted between-group analyses with independent-samples, two-tailed t-tests, but this time, we examined ROIs that survived statistical thresholding in the within-group analyses for the Object Matching > Spatial Matching contrast, followed by the comparison of Face Matching versus House Matching. Regions preferentially responsive to faces (i.e. Face Matching > House Matching) and those preferentially responsive to houses (i.e. House Matching > Face Matching) were analyzed separately. Again, to examine the influence of sensory and language experience, any region exhibiting differences between the deaf signers and hearing non-signers was subsequently analyzed with pair-wise independent-samples t-tests comparing each of these groups with hearing signers. We report the significance level of all between-group differences surpassing the threshold of p < .05.

3. Results

3.1 Group characteristics and neuropsychological data

Table 1 presents a summary of group characteristics and neuropsychological test scores. Hearing signers were somewhat older than the other two groups (main effect of group, F (2, 25) = 5.14; p < .01; hearing signers vs. hearing non-signers, F = 7.70, p < .05; hearing signers vs. deaf signers F = 8.29, p < .01). All subjects performed in the normal range on the neuropsychological tests (Performance IQ, Facial Recognition Test, and Judgment of Line Orientation Test) with no differences between groups (F < 2 for each measure).

3.2 Behavioral data for Spatial and Object Matching

Analysis of the behavioral performance data collected during scanning for the Spatial Matching and Object Matching tasks indicated that all groups performed equally well (see Table 1; main effect of Group, F < 2 for accuracy and for reaction times). Moreover, neither sensory experience nor language experience interacted with the factors of Task or Category. Although the Spatial and Object Matching tasks proved to be equally difficult (F < 2), a significant main effect of Category (F (1,25) = 9.24, p < .01) revealed that subjects were more accurate when the Object Matching tasks depicted Faces than when they depicted Houses. The reaction time data suggest that this may reflect a speed/accuracy trade-off, as subjects were slower performing matching tasks with Faces than with Houses (main effect of Category, F (1,25) = 10.34, p < .01), especially during Object Matching (Task × Category interaction, F (1,25) = 8.70, p < .01).

3.3 Imaging data

3.3.1 Within-group results

Consistent with previous studies in hearing adults, within-group random effects analysis of the functional imaging data revealed that in each group the Spatial Matching and Object Matching tasks led to increased hemodynamic responses in dorsal and ventral extrastriate cortex, respectively (see Figure 2). Further, Face Matching and House Matching increased responses differentially in the lateral fusiform and parahippocampal/medial fusiform gyri, respectively (see Figure 3). Next we describe these results in detail, first focusing on the brain regions preferentially activated by Spatial Matching compared to Object Matching and vice versa, and then continuing with a description of regions demonstrating differential responses to Faces and Houses. Lastly, we address between-group differences for each of these effects.

Figure 2.

Activation patterns in the dorsal visual pathway for hearing non-signers, deaf signers, and hearing signers. Coronal sections depicting clusters of activation from each group’s random effects analysis contrasting Spatial vs. Object matching, overlaid on a single subject’s structural image (Talairach and Tournoux coordinate y = −36). Regions in bilateral inferior parietal lobe (blue) were more active for Spatial than Object Matching (right IPL activity is not visible in this slice for hearing signers). Bilateral fusiform and parahippocampal regions (red) responded more during Object than Spatial matching.

Figure 3.

Axial sections (z = −11) depicting regions that responded differentially to Faces and Houses during Object matching. Bilateral lateral fusiform gyri (red regions) were more active during Face than House matching in all groups, and bilateral parahippocampal gyri (blue regions) were more active during House than Face matching. All regions p < .05, corrected.

Spatial > Object Matching

Regions depicted in blue in Figure 2 (see also Table 2) reflect increased brain activity during the Spatial Matching task relative to the Object Matching task. In all three groups, these areas were largely confined to the dorsal visual pathway, with large clusters of activity located in the bilateral superior and inferior parietal lobules (superior parietal lobule activity not visible in Figure 2). For the deaf group, signal increases were also seen for the Spatial Matching task in bilateral premotor cortex (BA 6) and right cuneus (BA 17/18).

Table 2.

Local maxima for significantly activated clusters in each group for the Spatial Matching vs. Object Matching contrast (p < .05, corrected).

Region # voxels x y z t-value
Spatial > Object Matching
    Hearing non-signers
R inferior parietal lobule/postcentral gyrus (BA 40/2) 137 52 −22 35 −4.24
L inferior parietal lobule/postcentral gyrus (BA 40/2) 67 −46 −28 29 −3.75
R superior parietal lobule (BA 7) 92 16 −52 50 −7.19
L superior parietal lobule (BA 7) 75 −10 −70 44 −3.73
    Deaf signers
R middle frontal gyrus (BA 6) 103 22 4 44 −7.74
L middle frontal/precentral gyrus (BA 6) 35 −22 1 47 −3.8
L inferior parietal lobule/postcentral gyrus (BA 40/2) 187 −52 −22 35 −3.82
R inferior parietal lobule/postcentral gyrus (BA 40/2) 221 43 −25 23 −4.1
L superior parietal lobule (BA 7) 187 −16 −49 47 −3.87
R superior parietal lobule (BA 7) 147 19 −58 41 −5.03
R cuneus/middle occipital gyrus (BA 17/18) 50 22 −79 14 −4.51
    Hearing signers
R inferior parietal lobule/postcentral gyrus (BA 40) 13 52 −25 38 −4.4
L inferior parietal lobule/postcentral gyrus (BA 40/2) 43 −52 −31 35 −4.94
L superior parietal lobule (BA 7) 53 −10 −55 47 −4.71
R superior parietal lobule (BA 7) 78 19 −58 44 −6.1
Object > Spatial Matching
    Hearing non-signers
R middle frontal gyrus (BA 46) 20 49 43 2 4.47
R inferior frontal gyrus (BA 47) 17 31 28 −6 3.83
L inferior frontal gyrus (BA 47) 10 −28 22 −6 4.13
R inferior/middle frontal gyrus (BA 44/9) 19 34 13 23 4.11
L inferior temporal/fusiform/parahippocampal gyrus (BA 20) 11 −34 −4 −27 3.73
L thalamus 30 −4 −7 2 4.15
R fusiform gyrus (BA 36) 538 34 −13 −24 4
L fusiform gyrus (BA 36/37) 494 −34 −40 −18 4.37
L cerebellum 11 −1 −52 −30 3.72
10 −4 −73 −33 5.73
L middle occipital gyrus (BA 19) 21 −40 −79 8 3.81
R cerebellum 12 10 −79 30 4.05
    Deaf signers
R superior temporal gyrus/sulcus (BA 22) 29 46 −13 −6 3.79
L fusiform/parahippocampal gyrus (BA 35/36) 449 −25 −28 −18 3.77
R fusiform/parahippocampal gyrus (BA 36) 215 31 −34 −18 4.09
R fusiform/inferior occipital gyrus (BA 18/19) 119 37 −73 −9 3.93
    Hearing signers
L middle frontal gyrus (BA 10) 18 −28 49 2 4.27
L anterior cingulate gyrus (BA 24/32) 163 −4 34 23 4.33
R orbital gyrus (BA 11) 213 25 28 −12 4.26
R inferior frontal gyrus (BA 45) 18 37 22 17 3.89
L inferior frontal gyrus (BA 44) 50 −37 13 20 4.16
R inferior frontal gyrus (BA 44) 12 55 13 26 3.83
R lentiform nucleus 37 16 4 0 3.75
R middle frontal/precentral gyrus (BA 6) 31 37 4 29 4.01
L lentiform nucleus 34 −13 1 0 3.71
R inferior temporal gyrus (BA 20) 11 37 −7 −18 3.85
R precentral gyrus (BA 4) 12 40 −10 38 4.58
R lentiform nucleus 12 28 −19 0 4.26
R parahippocampal gyrus (BA 36) 664 25 −25 −21 3.71
L fusiform/parahippocampal gyrus (BA 35/36) 614 −25 −28 −18 3.91
R inferior parietal lobule (BA 40) 25 37 −49 35 4.84
L cuneus (BA 17) 39 −1 −70 14 4.62
R cerebellum 18 10 −73 −27 4.97
Object > Spatial Matching

In each group, an extensive region spanning the ventral occipitotemporal cortex of both hemispheres exhibited increased activity during Object Matching, compared to Spatial Matching (Figure 2, red regions). Increased task-related activity spanned bilateral fusiform and parahippocampal gyri in all groups (e.g. BA 35, 36, 37, see Table 2 for local maxima in each group). In addition, all groups showed increased activity in occipital regions: in left middle occipital gyrus in hearing non-signers; in right inferior occipital gyrus in deaf signers; and in the left cuneus in hearing signers. Consistent with previous studies, activity for the Object Matching task was not confined to posterior ventral regions (see Table 2). For both hearing groups, additional regions in bilateral inferior or middle frontal gyri (e.g. BA 44), and in the cerebellum responded more to Object than Spatial Matching. Hearing non-signers also showed relatively greater Object Matching activity in a left anterior temporal region bordering the inferior temporal, fusiform, and parahippocampal gyri (BA 20), and in the left thalamus. The deaf group showed an additional cluster spanning the right anterior superior temporal gyrus and sulcus (BA 22). Lastly, hearing signers demonstrated object-related activity increases in the right inferior parietal lobule (BA 40), bilateral lentiform nucleus, right precentral (BA 4) and orbital (BA 11) gyri, and in the left anterior cingulate gyrus (BA 24/32).

Face vs. House Matching

Consistent with previous reports, the within-group analyses of category-selectivity revealed that, in regions responding more to Object than Spatial Matching, activity was increased for faces, relative to houses, in the lateral fusiform gyrus, bilaterally (BA 19/37; Figure 3, red regions and Table 3). Face-selective responses in brain regions beyond ventral occipitotemporal cortex were not consistently activated across groups: both hearing groups showed heightened activity for faces relative to houses in middle and inferior frontal gyri (non-signers on the right, signers on the left), whereas only hearing non-signers demonstrated face selectivity in the right thalamus (pulvinar). The deaf group alone showed greater activity for faces than for houses in a region spanning the right anterior superior temporal gyrus and sulcus (BA 22) near the temporal pole, and in the right amygdala.

Table 3.

Local maxima for significantly activated clusters in each group for the comparison of Face Matching vs. House Matching (p < .05), within clusters that showed an Object Matching > Spatial Matching effect (p < .05, corrected).

Region # voxels x y z t-value
Face > House Matching
    Hearing non-signers
R inferior frontal gyrus (BA 45) 6 49 37 5 3.27
R middle/inferior frontal gyrus (BA 8/9) 7 34 15 23 2.76
L inferior temporal gyrus (BA 20) 4 −34 −4 −27 3.14
R thalamus 8 1 −16 8 3.16
R pulvinar 3 1 −25 1 4.39
L inferior temporal/fusiform gyrus (BA 36/20) 7 −37 −34 −12 2.89
R fusiform gyrus (BA 37) 63 37 −43 −18 4.15
L fusiform gyrus (BA 37) 48 −40 −49 −12 2.36
L fusiform gyrus (BA 19) 3 −43 −67 −15 2.88
R cuneus/middle occipital gyrus (BA 18) 4 19 −100 2 2.47
    Deaf signers
R amygdala 4 16 −7 −9 3.85
R superior temporal gyrus/sulcus (BA 22) 5 46 −13 −6 2.38
R fusiform gyrus (BA 37) 5 37 −40 −12 3.1
L fusiform gyrus (BA 37) 12 −37 −40 −18 2.72
L inferior occipital/fusiform gyrus (BA 18) 24 16 −73 −9 2.56
R inferior occipital/fusiform gyrus (BA 18) 4 46 −76 −6 5.77
    Hearing signers
L inferior/middle frontal gyrus (BA 8/44) 5 −31 10 26 2.32
R fusiform gyrus (BA 36/37) 23 34 −40 −15 4.1
L fusiform gyrus (BA 37) 15 −37 −40 −18 4.06
L fusiform/inferior temporal gyrus (BA 37) 11 −40 −52 −12 3.81
R fusiform gyrus (BA 37) 17 43 −58 −6 4.69
R precuneus (BA 31) 11 1 −64 23 4.47
House > Face Matching
    Hearing non-signers
R parahippocampal gyrus (BA 35) 204 19 −28 −15 −4.58
L parahippocampal/fusiform gyrus (BA 35/36) 82 −31 −28 −15 −4.04
L lingual/parahippocampal gyrus (BA 19) 21 −13 −49 5 −4.5
L inferior temporal gyrus (BA 37) 3 −49 −58 −6 −3.33
R inferior occipital/lingual gyrus (BA 19) 45 25 −82 −6 −2.71
R middle occipital gyrus (BA 19) 4 37 −82 5 −3.05
L fusiform gyrus (BA 18/19) 34 −28 −82 −12 −2.67
    Deaf signers
R parahippocampal gyrus (BA 28/36) 127 22 −22 −15 −5.94
L parahippocampal/fusiform gyrus (BA 36/35) 109 −25 −28 −18 −5.84
L fusiform/inferior temporal gyrus (BA 37) 4 −40 −55 −6 −2.62
R fusiform gyrus (BA 19) 50 34 −82 −6 −3.16
L fusiform gyrus (BA 18/19) 84 −31 −82 −12 −2.48
    Hearing signers
R middle frontal gyrus (BA 46/10) 4 43 43 −3 −2.55
L middle/inferior frontal gyrus (BA 46) 6 −25 43 11 −2.29
R parahippocampal gyrus (BA 27/28) 211 22 −22 −18 −2.84
67 10 −34 2 −3.92
L fusiform/parahippocampal gyrus (BA 35/36) 161 −25 −37 −18 −2.88
L parahippocampal gyrus (BA 30) 27 −10 −43 2 −2.3
L middle/superior temporal gyrus (BA 21/22) 5 −49 −49 −3 −3.16
R inferior parietal lobule (BA 7) 9 37 −58 38 −2.56
L fusiform/inferior occipital gyrus (BA 19) 52 −31 −82 −6 −2.37
R lingual/fusiform gyrus (BA 18) 66 19 −88 −9 −2.54

Also consistent with previous findings, houses elicited stronger responses than faces for each group in the medial aspect of the fusiform gyri (BA 18/19/37) and the parahippocampal gyri (BA 28, 35, 36; Figure 3, blue regions and Table 3). In all groups the inferior occipital gyrus (BA 19) also showed a preference for houses. Additional ventral extrastriate regions showing greater activity for houses than faces included the lingual gyrus for both hearing groups (though in opposite hemispheres) and the left inferior temporal gyrus for hearing non-signers and deaf signers. Beyond the ventral visual pathway house-selective responses were seen only in hearing signers, in the right inferior parietal lobule (BA 7), left middle/superior temporal gyrus (BA 21/ 22), and bilateral middle frontal gyri (BA 10, 46).

3.3.2 Between-group results

Spatial > Object Matching

We next examined differences between deaf and hearing non-signers in brain regions that survived the within-group analyses reported above (see Table 2), beginning with the Spatial Matching > Object Matching contrast. Analysis of the four ROIs corresponding to analogous regions identified in deaf signers and hearing non-signers, located in bilateral superior and inferior parietal lobules (see Table 2), revealed a main effect of Task (Spatial > Object Matching, p < .001 in each region) (Figure 4), but no main effect of Group (F < 3 for each region). However, a significant Task × Group interaction revealed that responses differed between the deaf signing and hearing non-signing groups in the left inferior parietal lobule (IPL) and right superior parietal lobule (SPL). In the left IPL, deaf signers exhibited greater activity than hearing non-signers during Spatial Matching (Task × Group interaction, F (1, 18) = 12.38, p < .002) (Fig. 4). This heightened response in the left IPL was also significantly greater in deaf signers than in hearing signers (Task × Group interaction, F (1,16) = 10.30, p < .005). The two hearing groups did not differ from each other (F < 2 for main effect and interaction) in this region.

Figure 4.

Group results in ROIs based on within-group analyses, showing sensory experience-related differences between deaf and hearing groups. Cell means for the deaf group (red bars) were higher than for either hearing group (blue and yellow bars) during Spatial Matching in the left IPL (A) and lower than for the hearing groups in the right SPL (B). In the left IPL (A), pair-wise between-group ANOVAs demonstrated a Task × Group interaction (deaf signers vs. hearing non-signers, p < .002; deaf signers vs. hearing signers, p < .005). The hearing groups (signing and non-signing) did not differ from each other in this region. In the right SPL region (B) pair-wise ANOVAs revealed a Task × Group interaction between the deaf group and the hearing non-signers (p < .001), and a main effect of Group when deaf and hearing signers were compared (p < .004). See Table 2 for local maxima for each group.

However, the opposite effect was found in the right SPL, where Spatial Matching elicited greater activity in hearing non-signers than in deaf signers (Task × Group interaction, F (1, 18) = 14.58, p < .001). Hearing signers also activated the right SPL more than deaf signers (main effect of Group, F (1, 16) = 11.43, p < .004) and again, no differences were identified between the two hearing groups (F < 1 for main effect of Group; F < 3, p > .1 for Task × Group interaction). These findings suggest that for visuospatial processing, activity increases in left IPL and decreases in right SPL in deaf, compared to hearing individuals, are related to sensory experience.

Additional dorsal clusters that emerged for the deaf group but not for hearing non-signers in prefrontal cortex and cuneus showed only a main effect of Task (p < .001 in each region).

Face > House Matching

We next determined whether hearing and deaf groups differed in their response to specific object categories by examining ventral stream brain regions that showed increased responses during Object Matching relative to Spatial Matching and responded preferentially to Face Matching vs. House Matching. Independent-samples t-tests comparing the responses of deaf signing and hearing non-signing groups in these face-preferring regions were followed up with pair-wise t-tests involving the hearing signers where appropriate, to separately assess the effects of sensory and language experience.

Of the fusiform clusters (left and right) showing preferential responses to faces during Object Matching (see Table 3), only one revealed a significant difference between deaf signers and hearing non-signers (Figure 5). Specifically, deaf individuals exhibited a smaller response than hearing non-signers in the lateral part of the right middle fusiform gyrus (BA 37) (t (1,18) = 2.12, p < .05, two-tailed). Pair-wise t-tests between the two signing groups and between the two hearing groups revealed no further differences (p > .1), suggesting that the reduced response for face recognition in deaf signers cannot be attributed solely to sensory or language experience, but rather, reflects the combined effects of both.

Figure 5.

Group-averaged hemodynamic responses in face-selective regions exhibiting group differences between deaf signers and hearing non-signers. The deaf group (red bars) showed a reduced response for faces in the right lateral fusiform gyrus (A) compared to hearing non-signers (blue bars) (p < .05), and an increased response in the right STG/STS region (B) (p < .05). However, comparison of these groups with hearing signers revealed no differences, suggesting that these effects are not specific to either sensory or signing experience alone. The right amygdala (C) was more active in deaf subjects than in hearing non-signers (p < .05) and hearing signers (p < .01), indicating an effect of deafness. Below each histogram are coronal slices depicting the conjunction of each group's clusters for the regions described above. Activity for each group, and their conjunctions, are color-coded as follows: orange = all groups; purple = deaf signers and hearing non-signers; green = hearing groups; blue = hearing non-signers; yellow = hearing signers; red = deaf signers.

The within-group analysis had identified two additional clusters in the hearing non-signing group, located in the left lateral fusiform gyrus of the temporal lobe, each nearby but not contiguous with the cluster in BA 37 discussed above (see Table 3). Despite increasing the extent of the left fusiform ROI for hearing non-signers by averaging the signal from these additional voxels, our results were unaltered, still indicating a trend towards a weaker response in the deaf group (t (1,18) = 1.82, p = .08).

In addition to these analogously located ROIs, six additional face-selective clusters emerged only in the hearing non-signers' within-group analysis (reported in Table 3). We found no between-group differences in these regions, indicating that although responses were similar across groups, face-selectivity was not robust enough to survive stringent within-group statistical thresholding in all groups.

Lastly, the within-group analysis identified three regions selectively responsive during Face Matching that survived cluster analysis only in the deaf group, but not in the hearing groups. Of these ROIs, those located in the right anterior superior temporal gyrus/sulcus (STG/STS) and the right amygdala revealed greater activity in deaf subjects compared to hearing non-signers (t (1,18) = 2.28, p < .05 in the STG/STS; t (1,18) = 2.38, p < .05 in the amygdala; see Figure 5). In the right amygdala the response of deaf signers was also greater than that of hearing signers (t (1,16) = 3.19, p < .01), but responses of the two hearing groups were comparable to each other (t (1,16) = 1.07, p > .1), suggesting that this difference is likely related to deafness. In the right STG/STS, there were no differences in task-related responses when comparing hearing signers to either deaf signers (t (1,16) = 1.26, p > .1) or hearing non-signers (t (1,16) = .048, p > .1). Thus, it is less clear whether the source of the heightened response in deaf participants is related to sensory or language experience.

House > Face Matching

Regions common across groups that were more responsive to houses than faces during Object Matching were all located in the ventral occipitotemporal object processing stream and included the bilateral parahippocampal gyrus, extending into the medial part of the fusiform gyrus, the posterior fusiform gyrus of the occipital lobe (BA 18, 19), and the left inferior temporal/fusiform region (BA 37). Pair-wise t-tests revealed no differences between groups in any of these regions. Two additional occipitotemporal regions (BA 19) that survived within-group analysis in hearing non-signers but not in the deaf group failed to reveal any between-group differences. Thus, the response to houses did not differ between the deaf and hearing non-signers in any region preferring houses to faces.

4. Discussion

4.1 Overall Findings

To investigate the effects of deafness and visuospatial language experience simultaneously in dorsal and ventral visual pathways, we studied subjects with differing sensory and language experience while they attended to the spatial location or object identity of static face and house photographs. Deaf native signers, hearing native signers, and hearing non-signers all demonstrated the task-specific dissociations in the dorsal and ventral cortical processing streams previously reported for spatial and object processing (e.g. Haxby et al., 1994). In addition, each group showed face and house category-related activation patterns in the ventral visual stream, with greater activity for faces than houses in the lateral fusiform gyri, and greater activity for houses than for faces in the more medial fusiform and parahippocampal gyri (Haxby, et al., 1999; Epstein & Kanwisher, 1998). As predicted, however, we also identified discrete differences between the deaf and hearing groups in both visual processing streams. Specifically, in deaf participants, the Spatial Matching task elicited less activity in the right superior parietal lobe and more activity in the left inferior parietal cortex relative to hearing participants. These differences were also observed when the deaf group was compared with native hearing signers, indicating that deafness is most likely a strong impetus for this cortical reorganization.

We also identified differences between groups during Object Matching in several face-preferring regions. Face Matching elicited less activity in deaf signers relative to hearing non-signers in the right lateral fusiform gyrus, and more activity in the right anterior temporal cortex. However, comparisons between the two signing groups and between the two hearing groups revealed no differences, suggesting that sensory and language experience combine to induce changes in these regions. Finally, activity in the right amygdala was greater in the deaf group than in either hearing group, indicating a substantial influence of deafness. During scanning, behavioral performance on each task was comparable between all three groups, suggesting that these changes in brain function reflect a re-allocation of processing resources rather than differences in task difficulty.

4.2 Cortical plasticity for spatial processing in deaf and hearing signers

All three participant groups activated bilateral parietal cortex during spatial processing, consistent with a large body of animal studies on spatial processing (Colby & Goldberg, 1999), and with clinical (Corballis, 1997) and brain imaging studies in which hearing individuals performed various spatial tasks including line bisection, location judgments, spatial navigation, and mental rotation (Fink et al., 2000; Kohler, Kapur, Moscovitch, Winocur, & Houle, 1995; Parsons, 2003; Shelton & Gabrieli, 2002). In deaf signers we observed greater recruitment of the left inferior parietal lobe and less activity in the right superior parietal lobe, compared with both groups of hearing subjects, differences that likely reflect sensory experience. Our finding is consistent with Bavelier et al.’s (2001) report in that they too attributed changes in dorsal stream function, in their case for visual motion processing, to sensory experience (rather than sign experience). Our data extend their work by showing that differences in deaf participants can be elicited by spatial processing as well as during visual motion perception. These differences may be a consequence of deaf individuals attending differently to the visuospatial world than hearing persons because of the absence of auditory cues signaling changes in the environment.

Studies indicate that the SPL is associated with action organization and attention (Colby & Goldberg, 1999; Culham & Valyear, 2006), and the IPL, while perhaps sharing these functions, is also implicated in motor processes such as eye movements, visual orienting to extra-personal space, mental rotation, distance and location judgments, and spatial navigation (Nobre et al., 2004; Vallar, 2001, Aguirre & D'Esposito, 1997; Fink et al., 2000; Kohler, Kapur, Moscovitch, Winocur, & Houle, 1995; Moscovitch, Kapur, Kohler, & Houle, 1995; Stephan et al., 2003; Vallar et al., 1999). These processes may become affected and their functional anatomy reorganized as a result of visual experiences that accompany deafness. For example, in contrast to hearing individuals who can use auditory cues to monitor extrapersonal space, deaf persons rely almost exclusively on visual monitoring to detect changes in their environment. Such constant and life-long visual vigilance may alter many aspects of spatial attention including orienting to peripheral and extrapersonal space, and judging the relative positions of objects (Bavelier, Dye & Hauser, 2006). Notably, although we instructed our participants to mentally rotate the stimuli during the Spatial Matching task, the task engages additional spatial processes associated with the IPL, including distance and location judgments.

Surprisingly, we found no evidence of dorsal stream reorganization specifically attributable to ASL experience, although the visuospatial nature of the language as well as behavioral studies showing enhanced mental imagery and mental rotation in signers might have predicted such a result (McKee 1987; Emmorey, Klima, & Hickok, 1998; Emmorey & Kosslyn, 1996; Emmorey, Kosslyn, & Bellugi, 1993; Talbot & Haude, 1993). Spatial processing has a prominent role in the perception and expression of ASL, which requires signers to attend to peripheral space to encode the gestures and hand shapes that constitute ASL, to place those gestures appropriately to convey meaning, and at times, to perform spatial transformations in order to decode them. For example, in ASL the spatial relationships among the referents of conversation (e.g. left of, behind) are usually conveyed from an egocentric perspective, and the addressee must make mental transformations in order to decode the proper positions of referents (Emmorey, Klima, & Hickok, 1998; Martin & Sera, 2006).

Evidence that the visuospatial nature of ASL influences the organization of dorsal stream function was provided by Bavelier et al.'s (2001) investigation of visual motion processing, which found a right-to-left lateralization change in recruitment of MT/V5 in deaf and hearing signers, compared to hearing non-signers. Their finding supported earlier ERP and visual field studies documenting changes in lateralization for visual motion processing in signing groups (Bosworth & Dobkins, 1999; Neville & Lawson, 1987a, 1987b) and argues for an intimate relationship between visual motion and a visuospatial language, resulting in the left hemisphere taking a more active role in non-linguistic motion processing than it does in users of spoken languages (Neville & Bavelier, 2001). However, our results did not support a strong influence of ASL usage on dorsal spatial processing areas, perhaps because the visuospatial aspects of our task have less overlap with those invoked during ASL. Consistent with the idea that for certain tasks, plasticity in dorsal spatial processing regions is shaped more by sensory than by language experience, Cattani & Clibbens (2005) reported that during location processing, two groups of deaf individuals (signers and non-signers) showed the opposite visual field preference from two hearing groups (signers and non-signers).

Our findings in parietal cortex thus offer evidence of plasticity in the dorsal visual stream for tasks that involve spatial processing rather than visual motion. We suggest that this altered recruitment of parietal cortex reflects the necessity for deaf individuals to detect changes in their environment through constant visual monitoring.

4.4 Cortical plasticity for object recognition in deaf and hearing signers

In the ventral visual stream, we identified face-selective and house-selective clusters of activity in all groups in the lateral middle fusiform gyri and parahippocampal gyri, respectively. Between-group comparisons revealed that plasticity in these ventral visual pathway areas was limited to cortex demonstrating a preference for faces (as opposed to houses), indicating a stimulus-specific effect. Specifically, group differences during Face Matching were identified within and beyond ventral temporal cortex: relative to hearing non-signers, deaf signers showed less activity in the right lateral fusiform gyrus and more activity in the right superior temporal gyrus/sulcus. However, we found no differences when we compared either of these groups with hearing signers. This suggests that the effects in face-selective areas cannot be attributed to deafness or ASL experience alone but, more likely, to a combination of both. Deaf signers also demonstrated a heightened amygdala response to faces compared with both hearing groups, indicating that this change is strongly associated with sensory experience.

Over the last decade, numerous studies using a wide variety of tasks have focused on a functionally defined region of the fusiform gyrus in hearing populations that reliably shows increased activity for faces relative to houses (and most other objects) (Haxby et al., 1999; Kanwisher, McDermott, & Chun, 1997). Termed the fusiform face area (FFA), this region is thought to be a critical node for the recognition of facial identity (Haxby, Hoffman, & Gobbini, 2000) within a larger network of regions subserving face processing. Given the evidence supporting a face processing advantage in deaf compared to hearing subjects (Arnold & Murray, 1998; Bettger, Emmorey, McCullough, & Bellugi, 1997; McCullough & Emmorey, 1997), face-responsive regions (including the FFA) can be considered primary candidates for cortical plasticity in deaf individuals. Our findings of differential activity in these regions in the deaf group bear out that prediction. In contrast, the more medial part of the fusiform gyrus and the parahippocampal gyrus are associated with the perception of visual scenes, landmarks, and houses (Epstein & Kanwisher, 1998; Haxby et al., 1999). Consistent with the few studies that have examined non-face object processing in deaf individuals (Arnold & Mills, 2001; Arnold & Murray, 1998), we did not identify between-group differences in any regions that preferred houses to faces.

This specificity (i.e., altered brain activity underlying face, but not house, processing) is likely related to the salience of faces in deaf communication, but it raises the question of why some aspects of ventral stream processing are amenable to change whereas others are more resistant. Accumulated evidence suggests that the visual processes most disposed to plasticity accompanying deafness are those that are visually demanding, require attention to specific task features, and would profit from cross-modal integration in hearing individuals (Bavelier, Dye, & Hauser, 2006). Recognition of person identity shares all three of these characteristics.

As is often the case in functional brain imaging studies of multiple populations, our results force us to grapple with how to interpret the pattern of activity increases and decreases in our deaf group. Increased skill is frequently associated with expanded representations and increased activity in topographically organized cortices. In association cortices, however, higher cognitive skill and perceptual learning are often reflected by decreased activity as a region becomes functionally more efficient (Kelly & Garavan, 2005). One mechanism thought to underlie this increased neural efficiency is sharper neuronal tuning, such that with increased experience, fewer neurons in a region respond to a particular task or stimulus (Desimone, 1996; Kelly & Garavan, 2005). Such decreases are often accompanied by changes in connectivity between nodes of a distributed network (Kelly & Garavan, 2005). Although not directly tested in the current study, within this framework the pattern of increased and decreased activity we identified in face-responsive areas in our deaf group likely reflects the heightened demand placed on face processing in these individuals. Reduced activity in the right fusiform gyrus may reflect a general increase in face processing efficiency resulting from deaf signers' experience decoding linguistic and affective facial expressions. Skill acquisition studies in diverse domains, including language (Perani et al., 2003), visual skill learning (Kourtzi, Betts, Sarkheil, & Welchman, 2005; Poldrack, Desmond, Glover, & Gabrieli, 1998; Schiltz et al., 1999), and motor skill learning in musicians over many years of training (Haslinger et al., 2004), report similar task-specific activation decreases associated with increased efficiency in association cortices. Further support for the notion that increased processing efficiency leads to reduced activity comes from studies in non-human primates showing greater neuronal selectivity for objects with which the animals have had extensive perceptual training (Baker, Behrmann, & Olson, 2002; Erickson, Jagadeesh, & Desimone, 2000).

A problem with this interpretation is our failure to observe a behavioral enhancement in face processing in our deaf participants relative to the other groups: all three groups performed equally well on the Benton Facial Recognition Test and on the Face Matching task performed during scanning. However, the latter task was intentionally designed to avoid group differences so that performance differences would not complicate interpretation of the fMRI data. Studies reporting enhanced performance on the Benton Facial Recognition Test in deaf signers examined larger samples than ours (Bellugi et al., 1990; Bettger, Emmorey, McCullough, & Bellugi, 1997), and, as can be seen in Table 1, both of our signing groups scored numerically higher on this test than did the hearing non-signers.

The neural efficiency account is also at odds with the view that signal increases in the FFA reflect greater expertise for faces than for other objects in hearing subjects (Gauthier et al., 1999), a model that would predict a signal increase in deaf compared to hearing subjects, as deaf individuals presumably have greater experience and expertise with faces. However, an alternative view of the FFA as a specialized face processing module (Kanwisher, McDermott, & Chun, 1997), rather than a process-specific region, would predict that the FFA maintains its face-selectivity in the deaf group despite their reduced response, which was the case here. McCullough et al. (2005) reported that when deaf signers matched facial expressions (compared to a gender matching task), the extent of activation was greater in the left than in the right fusiform gyrus, whereas hearing non-signers showed equal extents in both hemispheres (with a trend toward greater right hemisphere activity). Although we did not find increased activity in the left FFA, several important differences in experimental design may explain this discrepancy. First, McCullough et al. employed face stimuli in both the baseline and activation conditions; second, their activation task used faces selected for maximum expressiveness of emotional or linguistic content. Thus, attention to the communicative aspects of these facial expressions may have contributed to heightened engagement of the left hemisphere in deaf individuals. Our study provides an important baseline for the McCullough et al. study, suggesting that left ventral stream face processing is not heightened in deaf signers overall, but perhaps specifically for processing communicative aspects of the face or, as proposed by McCullough and Emmorey (1997), for signers' heightened ability to discriminate local facial features.

In the right amygdala we found greater face-related activity in the deaf group compared to either of the hearing groups. The amygdala is frequently active during face perception, and serves to modulate activity in cortical regions based on the social and emotional significance of stimuli (Phelps, 2006). We suggest that the heightened amygdala response in our deaf participants reflects the increased social salience of faces for deaf individuals.

The third face-responsive region demonstrating plasticity spanned the right anterior superior temporal gyrus and sulcus, just inferior to primary auditory cortex, where deaf signers displayed increased activity relative to hearing non-signers; hearing signers did not differ from either of these groups. This region, which responds during speechreading in hearing and deaf groups and during perception of signed languages in deaf individuals, is considered multimodal association cortex responsible, in part, for integrating visual and auditory signals (Calvert et al., 1997; Nishimura et al., 1999; Petitto et al., 2000; Capek et al., 2008). Additionally, this region is active in deaf individuals during visual motion and tactile perception (Fine, Finney, Boynton, & Dobkins, 2005; Finney & Dobkins, 2001; Lambertz, Gizewski, de Greiff, & Forsting, 2005; Levanen, 1998). Here, we demonstrate that deaf individuals also recruit this region for discriminating static images of faces. It is possible that in the absence of auditory input, visual neurons co-opt more territory in this multimodal region to support increased demands on the visual system. In this view, recruitment of this region to aid visual processing may lessen the burden on ventral temporal cortex during face processing, allowing for the reduced right fusiform gyrus response. However, we cannot definitively separate the contributions of deafness and linguistic experience to this re-mapping, and the multimodal nature of this region may make it particularly susceptible to influences from both sensory and language experience.

5. Conclusions

In conclusion, we have shown that the well-documented neural dichotomy for spatial and object processing is also observed in deaf individuals. At the same time, we have demonstrated experience-related plasticity within both the dorsal and ventral visual streams that is more closely tied to congenital deafness in some regions and, in others, to the interaction between deafness and life-long visuospatial language experience. These findings extend previous reports of changes in the dorsal pathway for visual motion processing (Bavelier et al., 2001; Bavelier & Neville, 2002) and in the ventral visual pathway for discriminating facial expressions (McCullough, Emmorey, & Sereno, 2005). Importantly, our findings of both dorsal and ventral stream plasticity following altered sensory experience were elicited within subjects, using the same static stimuli for both tasks, demonstrating that deafness leads to more widespread plasticity than previously thought.

Acknowledgements

We thank A.K. Merikangas, J.L. Rosenberg, and A. Wall for help with data collection; J.M. Maisog for help with data processing; V. Staroselskiy for technical expertise; E. Napoliello for editorial assistance; J.V. Haxby for stimuli and permission to use his paradigm; and A. Martin for stimuli and helpful comments. We would also like to thank Carol LaSasso and especially all of the individuals who participated in our study.

Funding

This work was supported by the National Institute of Child Health and Human Development [P50 HD40095 to G.E.]; the National Institute on Deafness and Other Communication Disorders [1 F31 DC006794 to J.W.; F32 DC007774 to D.K.]; and the National Science Foundation [SBE 0541953, Science of Learning Center].

Footnotes

Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

Conflict of Interest

The authors have no financial conflict of interests.

9. References

1. Aguirre GK, D'Esposito M. Environmental knowledge is subserved by separable dorsal/ventral neural areas. J Neurosci. 1997;17(7):2512–2518. doi: 10.1523/JNEUROSCI.17-07-02512.1997.
2. Armstrong BA, Neville HJ, Hillyard SA, Mitchell TV. Auditory deprivation affects processing of motion, but not color. Cogn Brain Res. 2002;14(3):422–434. doi: 10.1016/s0926-6410(02)00211-2.
3. Arnold P, Mills M. Memory for faces, shoes, and objects by deaf and hearing signers and hearing nonsigners. J Psycholinguist Res. 2001;30(2):185–195. doi: 10.1023/a:1010329912848.
4. Arnold P, Murray C. Memory for faces and objects by deaf and hearing signers and hearing nonsigners. J Psycholinguist Res. 1998;27(4):481–497. doi: 10.1023/a:1023277220438.
5. Baker CI, Behrmann M, Olson CR. Impact of learning on representation of parts and wholes in monkey inferotemporal cortex. Nat Neurosci. 2002;5(11):1210–1216. doi: 10.1038/nn960.
6. Bavelier D, Brozinsky C, Tomann A, Mitchell T, Neville H, Liu GY. Impact of early deafness and early exposure to sign language on the cerebral organization for motion processing. J Neurosci. 2001;21(22):8931–8942. doi: 10.1523/JNEUROSCI.21-22-08931.2001.
7. Bavelier D, Dye MWG, Hauser PC. Do deaf individuals see better? Trends Cogn Sci. 2006;10(11):512–518. doi: 10.1016/j.tics.2006.09.006.
8. Bavelier D, Neville HJ. Cross-modal plasticity: Where and how? Nat Rev Neurosci. 2002;3(6):443–452. doi: 10.1038/nrn848.
9. Bavelier D, Tomann A, Hutton C, Mitchell T, Corina D, Liu G, et al. Visual attention to the periphery is enhanced in congenitally deaf individuals. J Neurosci. 2000;20(17):RC93. doi: 10.1523/JNEUROSCI.20-17-j0001.2000.
10. Bellugi U, O'Grady L, Lillo-Martin D, O'Grady M, van Hoek K, Corina D. Enhancement of spatial cognition in deaf children. In: Volterra V, Erting CJ, editors. From gesture to language in hearing and deaf children. New York: Springer-Verlag; 1990. pp. 278–298.
11. Benton AL, Sivan AB, Hamsher K de S, Varney NR, Spreen O. Contributions to neuropsychological assessment: A clinical manual. 2nd ed. New York: Oxford University Press; 1983. Facial Recognition; pp. 35–52.
12. Benton AL, Sivan AB, Hamsher K de S, Varney NR, Spreen O. Contributions to neuropsychological assessment: A clinical manual. 2nd ed. New York: Oxford University Press; 1983. Judgment of Line Orientation; pp. 53–64.
13. Bettger JG, Emmorey K, McCullough SH, Bellugi U. Enhanced facial discrimination: effects of experience with American Sign Language. J Deaf Stud Deaf Educ. 1997;2(4):223–233. doi: 10.1093/oxfordjournals.deafed.a014328.
14. Bosworth RG, Dobkins KR. Left-hemisphere dominance for motion processing in deaf signers. Psychol Sci. 1999;10(3):256–262.
15. Bosworth RG, Dobkins KR. The effects of spatial attention on motion processing in deaf signers, hearing signers, and hearing nonsigners. Brain Cogn. 2002a;49(1):152–169. doi: 10.1006/brcg.2001.1497.
16. Bosworth RG, Dobkins KR. Visual field asymmetries for motion processing in deaf and hearing signers. Brain Cogn. 2002b;49(1):170–181. doi: 10.1006/brcg.2001.1498.
17. Brozinsky CJ, Bavelier D. Motion velocity thresholds in deaf signers: changes in lateralization but not in overall sensitivity. Cogn Brain Res. 2004;21(1):1–10. doi: 10.1016/j.cogbrainres.2004.05.002.
18. Calvert GA, Bullmore ET, Brammer MJ, Campbell R, Williams SCR, McGuire PK, et al. Activation of auditory cortex during silent lipreading. Science. 1997;276(5312):593–596. doi: 10.1126/science.276.5312.593.
19. Capek CM, MacSweeney M, Woll B, Waters D, McGuire PK, David AS, et al. Cortical circuits for silent speechreading in deaf and hearing people. Neuropsychologia. 2008;46(5):1233–1241. doi: 10.1016/j.neuropsychologia.2007.11.026.
20. Cattani A, Clibbens J. Atypical lateralization of memory for location: Effects of deafness and sign language use. Brain Cogn. 2005;58(2):226–239. doi: 10.1016/j.bandc.2004.12.001.
21. Colby CL, Goldberg ME. Space and attention in parietal cortex. Annu Rev Neurosci. 1999;22:319–349. doi: 10.1146/annurev.neuro.22.1.319.
22. Corballis MC. Mental rotation and the right hemisphere. Brain Lang. 1997;57(1):100–121. doi: 10.1006/brln.1997.1835.
23. Corina DP. Recognition of affective and noncanonical linguistic facial expressions in hearing and deaf subjects. Brain Cogn. 1989;9:227–237. doi: 10.1016/0278-2626(89)90032-8.
24. Corina DP, Bellugi U, Reilly J. Neuropsychological studies of linguistic and affective facial expressions in deaf signers. Lang Speech. 1999;42(2–3):307–331. doi: 10.1177/00238309990420020801.
25. Cox RW. AFNI: Software for analysis and visualization of functional magnetic resonance neuroimages. Comput Biomed Res. 1996;29:162–173. doi: 10.1006/cbmr.1996.0014.
26. Culham JC, Valyear KF. Human parietal cortex in action. Curr Opin Neurobiol. 2006;16(2):205–212. doi: 10.1016/j.conb.2006.03.005.
27. De Filippo CL, Lansing CR. Eye fixations of deaf and hearing observers in simultaneous communication perception. Ear Hear. 2006;27(4):331–352. doi: 10.1097/01.aud.0000226248.45263.ad.
28. Desimone R. Neural mechanisms for visual memory and their role in attention. Proc Natl Acad Sci U S A. 1996;93(24):13494–13499. doi: 10.1073/pnas.93.24.13494.
29. Emmorey K, Klima E, Hickok G. Mental rotation within linguistic and non-linguistic domains in users of American sign language. Cognition. 1998;68(3):221–246. doi: 10.1016/s0010-0277(98)00054-7.
30. Emmorey K, Kosslyn SM. Enhanced image generation abilities in deaf signers: A right hemisphere effect. Brain Cogn. 1996;32(1):28–44. doi: 10.1006/brcg.1996.0056.
31. Emmorey K, Kosslyn SM, Bellugi U. Visual imagery and visual-spatial language: Enhanced imagery abilities in deaf and hearing ASL signers. Cognition. 1993;46(2):139–181. doi: 10.1016/0010-0277(93)90017-p.
32. Emmorey K, McCullough S. The bilingual brain: Effects of sign language experience. Brain Lang. 2009;109:124–132. doi: 10.1016/j.bandl.2008.03.005.
33. Epstein R, Kanwisher N. A cortical representation of the local visual environment. Nature. 1998;392(6676):598–601. doi: 10.1038/33402.
34. Erickson CA, Jagadeesh B, Desimone R. Clustering of perirhinal neurons with similar properties following visual experience in adult monkeys. Nat Neurosci. 2000;3(11):1143–1148. doi: 10.1038/80664.
35. Fine I, Finney EM, Boynton GM, Dobkins KR. Comparing the effects of auditory deprivation and sign language within the auditory and visual cortex. J Cogn Neurosci. 2005;17(10):1621–1637. doi: 10.1162/089892905774597173.
36. Fink GR, Marshall JC, Shah NJ, Weiss PH, Halligan PW, Grosse-Ruyken M, et al. Line bisection judgments implicate right parietal cortex and cerebellum as assessed by fMRI. Neurology. 2000;54(6):1324–1331. doi: 10.1212/wnl.54.6.1324.
37. Finney EM, Dobkins KR. Visual contrast sensitivity in deaf versus hearing populations: exploring the perceptual consequences of auditory deprivation and experience with a visual language. Cogn Brain Res. 2001;11(1):171–183. doi: 10.1016/s0926-6410(00)00082-3.
38. Finney EM, Fine I, Dobkins KR. Visual stimuli activate auditory cortex in the deaf. Nat Neurosci. 2001;4(12):1171–1173. doi: 10.1038/nn763.
39. Gauthier I, Tarr MJ, Anderson AW, Skudlarski P, Gore JC. Activation of the middle fusiform 'face area' increases with expertise in recognizing novel objects. Nat Neurosci. 1999;2(6):568–573. doi: 10.1038/9224.
40. Grady CL, Haxby JV, Horwitz B, Schapiro MB, Rapoport SI, Ungerleider LG, et al. Dissociation of object and spatial vision in human extrastriate cortex: Age-related changes in activation of regional cerebral blood flow measured with [15O]water and positron emission tomography. J Cogn Neurosci. 1992;4(1):23–34. doi: 10.1162/jocn.1992.4.1.23.
41. Grady CL, Maisog JM, Horwitz B, Ungerleider LG, Mentis MJ, Salerno JA, et al. Age-related changes in cortical blood flow activation during visual processing of faces and location. J Neurosci. 1994;14(3):1450–1462. doi: 10.1523/JNEUROSCI.14-03-01450.1994.
42. Haslinger B, Erhard P, Altenmuller E, Hennenlotter A, Schwaiger M, von Einsiedel HG, et al. Reduced recruitment of motor association areas during bimanual coordination in concert pianists. Hum Brain Mapp. 2004;22(3):206–215. doi: 10.1002/hbm.20028.
43. Haxby JV, Grady CL, Horwitz B, Ungerleider LG, Mishkin M, Carson RE, et al. Dissociation of object and spatial visual processing pathways in human extrastriate cortex. Proc Natl Acad Sci U S A. 1991;88(5):1621–1625. doi: 10.1073/pnas.88.5.1621.
44. Haxby JV, Hoffman EA, Gobbini MI. The distributed human neural system for face perception. Trends Cogn Sci. 2000;4(6):223–233. doi: 10.1016/s1364-6613(00)01482-0.
45. Haxby JV, Horwitz B, Ungerleider LG, Maisog JM, Pietrini P, Grady CL. The functional organization of human extrastriate cortex: a PET-rCBF study of selective attention to faces and locations. J Neurosci. 1994;14(11):6336–6353. doi: 10.1523/JNEUROSCI.14-11-06336.1994.
46. Haxby JV, Ungerleider LG, Clark VP, Schouten JL, Hoffman EA, Martin A. The effect of face inversion on activity in human neural systems for face and object perception. Neuron. 1999;22(1):189–199. doi: 10.1016/s0896-6273(00)80690-x.
47. Kanwisher N, McDermott J, Chun MM. The fusiform face area: A module in human extrastriate cortex specialized for face perception. J Neurosci. 1997;17(11):4302–4311. doi: 10.1523/JNEUROSCI.17-11-04302.1997.
48. Kelly AMC, Garavan H. Human functional neuroimaging of brain changes associated with practice. Cereb Cortex. 2005;15(8):1089–1102. doi: 10.1093/cercor/bhi005.
49. Kohler S, Kapur S, Moscovitch M, Winocur G, Houle S. Dissociation of pathways for object and spatial vision: A PET study in humans. Neuroreport. 1995;6(14):1865–1868. doi: 10.1097/00001756-199510020-00011.
50. Kourtzi Z, Betts LR, Sarkheil P, Welchman AE. Distributed neural plasticity for shape learning in the human visual cortex. PLoS Biol. 2005;3(7):1317–1327. doi: 10.1371/journal.pbio.0030204.
51. Lambertz N, Gizewski ER, de Greiff A, Forsting M. Cross-modal plasticity in deaf subjects dependent on the extent of hearing loss. Cogn Brain Res. 2005;25(3):884–890. doi: 10.1016/j.cogbrainres.2005.09.010.
52. Levanen S. Neuromagnetic studies of human auditory cortex function and reorganization. Scand Audiol. 1998;27:1–6. doi: 10.1080/010503998420595.
53. Martin AJ, Sera MD. The acquisition of spatial constructions in American Sign Language and English. J Deaf Stud Deaf Educ. 2006;11(4):391–402. doi: 10.1093/deafed/enl004.
54. McCullough S, Emmorey K, Sereno M. Neural organization for recognition of grammatical and emotional facial expressions in deaf ASL signers and hearing nonsigners. Cogn Brain Res. 2005;22(2):193–203. doi: 10.1016/j.cogbrainres.2004.08.012.
55. McCullough S, Emmorey K. Face processing by deaf ASL signers: evidence for expertise in distinguishing local features. J Deaf Stud Deaf Educ. 1997;2(4):212–222. doi: 10.1093/oxfordjournals.deafed.a014327.
56. McKee D. An analysis of specialized cognitive functions in deaf and hearing signers. Unpublished doctoral dissertation. Pittsburgh, PA: University of Pittsburgh; 1987.
57. Moscovitch M, Kapur S, Kohler S, Houle S. Distinct neural correlates of visual long-term memory for spatial location and object identity: A positron emission tomography study in humans. Proc Natl Acad Sci U S A. 1995;92(9):3721–3725. doi: 10.1073/pnas.92.9.3721.
58. Neville HJ, Bavelier D. Effects of auditory and visual deprivation on human brain development. Clin Neurosci Res. 2001;1(4):248–257.
59. Neville HJ, Lawson D. Attention to central and peripheral visual space in a movement detection task. III. Separate effects of auditory deprivation and acquisition of a visual language. Brain Res. 1987a;405(2):284–294. doi: 10.1016/0006-8993(87)90297-6.
60. Neville HJ, Lawson D. Attention to central and peripheral visual space in a movement detection task: an event-related potential and behavioral study. II. Congenitally deaf adults. Brain Res. 1987b;405(2):268–283. doi: 10.1016/0006-8993(87)90296-4.
61. Nishimura H, Hashikawa K, Doi K, Iwaki T, Watanabe Y, Kusuoka H, et al. Sign language 'heard' in the auditory cortex. Nature. 1999;397(6715):116. doi: 10.1038/16376.
62. Nobre AC, Coull JT, Maquet P, Frith CD, Vandenberghe R, Mesulam MM. Orienting attention to locations in perceptual versus mental representations. J Cogn Neurosci. 2004;16(3):363–373. doi: 10.1162/089892904322926700.
63. Parasnis I, Samar VJ, Bettger JG, Sathe K. Does deafness lead to enhancement of visual spatial cognition in children? Negative evidence from deaf non-signers. J Deaf Stud Deaf Educ. 1996;1:145–152. doi: 10.1093/oxfordjournals.deafed.a014288.
64. Parsons LM. Superior parietal cortices and varieties of mental rotation. Trends Cogn Sci. 2003;7(12):515–517. doi: 10.1016/j.tics.2003.10.002.
65. Perani D, Abutalebi J, Paulesu E, Brambati S, Scifo P, Cappa SF, et al. The role of age of acquisition and language usage in early, high-proficient bilinguals: An fMRI study during verbal fluency. Hum Brain Mapp. 2003;19(3):170–182. doi: 10.1002/hbm.10110.
66. Petitto LA, Zatorre RJ, Gauna K, Nikelski EJ, Dostie D, Evans AC. Speech-like cerebral activity in profoundly deaf people processing signed languages: Implications for the neural basis of human language. Proc Natl Acad Sci U S A. 2000;97(25):13961–13966. doi: 10.1073/pnas.97.25.13961.
67. Phelps EA. Emotion and cognition: Insights from studies of the human amygdala. Annu Rev Psychol. 2006;57:27–53. doi: 10.1146/annurev.psych.56.091103.070234.
68. Poldrack RA, Desmond JE, Glover GH, Gabrieli JDE. The neural basis of visual skill learning: An fMRI study of mirror reading. Cereb Cortex. 1998;8(1):1–10. doi: 10.1093/cercor/8.1.1.
69. Reilly JS, Bellugi U. Competition on the face: Affect and language in ASL motherese. J Child Lang. 1996;23(1):219–239. doi: 10.1017/s0305000900010163.
70. Schiltz C, Bodart JM, Dubois S, Dejardin S, Michel C, Roucoux A, et al. Neuronal mechanisms of perceptual learning: Changes in human brain activity with training in orientation discrimination. Neuroimage. 1999;9(1):46–62. doi: 10.1006/nimg.1998.0394.
71. Shelton AL, Gabrieli JDE. Neural correlates of encoding space from route and survey perspectives. J Neurosci. 2002;22(7):2711–2717. doi: 10.1523/JNEUROSCI.22-07-02711.2002.
72. Shibata DK, Kwok E, Zhong JH, Shrier D, Numaguchi Y. Functional MR imaging of vision in the deaf. Acad Radiol. 2001;8(7):598–604. doi: 10.1016/S1076-6332(03)80684-0.
73. Siple P. Visual constraints for sign language communication. Sign Language Studies. 1978;19:97–112.
74. Stephan KE, Marshall JC, Friston KJ, Rowe JB, Ritzl A, Zilles K, et al. Lateralized cognitive processes and lateralized task control in the human brain. Science. 2003;301(5631):384–386. doi: 10.1126/science.1086025.
75. Talairach J, Tournoux P. Co-planar stereotaxic atlas of the human brain. New York: Thieme; 1988.
76. Talbot KF, Haude R. The relation between sign language skill and spatial visualization ability: mental rotation of three-dimensional objects. Percept Mot Skills. 1993;77:1387–1391. doi: 10.2466/pms.1993.77.3f.1387.
77. Vallar G. Extrapersonal visual unilateral spatial neglect and its neuroanatomy. Neuroimage. 2001;14(1):S52–S58. doi: 10.1006/nimg.2001.0822.
78. Vallar G, Lobel E, Galati G, Berthoz A, Pizzamiglio L, Le Bihan D. A fronto-parietal system for computing the egocentric spatial frame of reference in humans. Exp Brain Res. 1999;124(3):281–286. doi: 10.1007/s002210050624.
79. Vargha-Khadem F. Visual field asymmetries in congenitally deaf and hearing children. Br J Dev Psychol. 1983;1:375–387.
80. Wechsler D. Wechsler Abbreviated Scale of Intelligence. San Antonio, TX: Psychological Corporation, Harcourt Brace and Company; 1999.
81. Weisberg J, van Turennout M, Martin A. A neural system for learning about object function. Cereb Cortex. 2007;17:513–521. doi: 10.1093/cercor/bhj176.
82. Williams MA, Morris AP, McGlone F, Abbott DF, Mattingley JB. Amygdala responses to fearful and happy facial expressions under conditions of binocular suppression. J Neurosci. 2004;24(12):2898–2904. doi: 10.1523/JNEUROSCI.4977-03.2004.
83. Yovel G, Kanwisher N. Face perception: Domain specific, not process specific. Neuron. 2004;44(5):889–898. doi: 10.1016/j.neuron.2004.11.018.
