The Journal of Neuroscience. 2012 Jul 11;32(28):9700–9705. doi: 10.1523/JNEUROSCI.1002-12.2012

Signed Words in the Congenitally Deaf Evoke Typical Late Lexicosemantic Responses with No Early Visual Responses in Left Superior Temporal Cortex

Matthew K Leonard 1,2,*, Naja Ferjan Ramirez 2,3,*, Christina Torres 1,2, Katherine E Travis 1,2, Marla Hatrak 3, Rachel I Mayberry 3, Eric Halgren 1,2,4,5
PMCID: PMC3418348  NIHMSID: NIHMS393492  PMID: 22787055

Abstract

Congenitally deaf individuals receive little or no auditory input, and when raised by deaf parents, they acquire sign as their native and primary language. We asked two questions regarding how the deaf brain in humans adapts to sensory deprivation: (1) is meaning extracted and integrated from signs using the same classical left hemisphere frontotemporal network used for speech in hearing individuals, and (2) in deafness, is superior temporal cortex encompassing primary and secondary auditory regions reorganized to receive and process visual sensory information at short latencies? Using MEG constrained by individual cortical anatomy obtained with MRI, we examined an early time window associated with sensory processing and a late time window associated with lexicosemantic integration. We found that sign in deaf individuals and speech in hearing individuals activate a highly similar left frontotemporal network (including superior temporal regions surrounding auditory cortex) during lexicosemantic processing, but only speech in hearing individuals activates auditory regions during sensory processing. Thus, neural systems dedicated to processing high-level linguistic information are used for processing language regardless of modality or hearing status, and we do not find evidence for rewiring of afferent connections from visual systems to auditory cortex.

Introduction

Neuropsychological and neuroimaging studies generally show that, when acquired as a native language from birth in congenitally deaf individuals, sign language is processed in a primarily left frontotemporal brain network, remarkably similar to the network used by hearing subjects to understand spoken words (Petitto et al., 2000; MacSweeney et al., 2008; Mayberry et al., 2011). Similarly, the N400, an event-related component correlated with lexicosemantic processing (Kutas and Federmeier, 2011), is similar when evoked by signs in deaf individuals and spoken or written words in hearing individuals (Kutas et al., 1987; Neville et al., 1997; Capek et al., 2009). Language deficits in deafness are more pronounced after lesions in the left hemisphere (Klima and Bellugi, 1979; Poizner et al., 1987; Hickok et al., 1996). Finally, direct cortical stimulation in left inferior frontal and posterior superior temporal regions in a deaf signer disrupted sign language production similar to speech disruptions in hearing individuals (Ojemann, 1983; Corina et al., 1999).

Left frontotemporal language areas include the cortex surrounding primary auditory cortex (Price, 2010), which is functionally deafferented in congenitally deaf individuals. In animal models, it has been demonstrated that afferent connections from the retina can be induced to connect with the medial geniculate nucleus of the thalamus (Sur et al., 1988), resulting in maps of visual space within primary auditory cortex (Roe et al., 1990; Barnes and Finnerty, 2010). Likewise, in congenitally deaf humans, auditory regions have been shown to exhibit hemodynamic and neurophysiological activation to low-level moving visual stimuli, particularly in the right hemisphere (Finney et al., 2001, 2003) and even to sign language narratives more than in hearing controls (Lambertz et al., 2005). However, other studies have not found such responses (Hickok et al., 1997) or have found extensive interindividual variability (Bavelier and Neville, 2002).

If auditory cortex is actually rewired in deaf individuals to receive visual input directly, then the similar activation patterns evoked by signed words in deaf signers and spoken words in hearing individuals would be a natural consequence of neural plasticity: in both groups, low-level sensory processing in auditory cortex should be projected to adjacent superior temporal areas, and thence to the broader left frontotemporal language network for lexicosemantic processing. Alternatively, activity in the region surrounding auditory cortex to signed words in deaf individuals and to spoken words in hearing individuals may reflect higher-level semantic encoding rather than sensory analysis. In this scenario, common activations in superior temporal cortex occur only after distinct modality-specific sensory processing for sign or speech. These alternatives can be dissociated based on the timing of the activity in superior temporal regions, information that is not available from hemodynamic measures, but can be obtained using MEG. Here we show that this activity is semantic, not sensory. Only speech in hearing individuals activates auditory areas during early sensory processing. However, both speech in hearing individuals and sign in deaf native signers activate similar temporal and frontal regions in the classical language network during later semantic processing stages.

Materials and Methods

Participants.

Twelve healthy right-handed congenitally deaf native signers (6 female; age range, 17–36 years) with no history of neurological or psychological impairment were recruited for participation (Table 1). All had profound hearing loss from birth and acquired American Sign Language (ASL) as their native language from their deaf parents. In addition, eight hearing controls from an analogous task with spoken English were included for comparison (5 female; age range, 21–29 years).

Table 1.

Deaf and hearing participant information and task performance

Group   | Gender           | Age (years) | Education (years) | Accuracy (%) | Reaction time (ms)
Deaf    | 6 female, 6 male | 30 (6.37)   | 15.92 (2.87)      | 94.30 (3.93) | 619.10 (97.5)
Hearing | 5 female, 3 male | 27 (2.87)   | 19.00 (2.45)      | 98.25 (3.01) | 561.23 (94.3)

Values in parentheses are standard deviations.

Procedures.

Each deaf participant viewed single signs that were either congruously or incongruously paired with a preceding picture (Fig. 1). Stimuli were high-frequency concrete nouns in ASL presented as short video clips (range, 340–700 ms; mean, 515.3 ms). Since no frequency norms exist for ASL, the stimuli were selected from ASL developmental inventories (Schick, 1997; Anderson and Reilly, 2002) and picture naming data (Bates et al., 2003; Ferjan Ramirez et al., 2012). The signs were all concrete nouns representing highly imageable objects, and were reviewed by a panel of six deaf and hearing fluent signers to ensure they were accurately produced and highly familiar from an early age. Words that are typically fingerspelled or are compound signs were excluded. Each sign video began when all phonological parameters (handshape, location, movement, and orientation) were in place, and ended when the movement was completed. Each sign appeared in both the congruent and incongruent conditions, and if a trial from one condition was rejected due to artifacts in the MEG signal, the corresponding trial from the other condition was also rejected to ensure that sensory processing across congruent and incongruent trials was identical. Subjects were instructed to press a button when the sign matched the preceding picture in meaning; the response hand was counterbalanced across six blocks of 102 trials each. The hearing participants performed the same task, except that instead of viewing pictures and signs, subjects saw photos and then heard single auditory English words through earphones and pressed a button when they matched. The picture remained on the screen throughout the duration of the auditory word. Word duration ranged from 304 to 637 ms, with a mean of 445 ms. To analyze the response to pictures, we compared the deaf group to a different group of hearing participants who saw the same white-on-black line drawings in a separate but similar task.
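The yoked-rejection rule can be made concrete with a short sketch (illustrative only, not the authors' analysis code; the data structures and function name are assumptions): if an artifact removes a trial from one condition, the trial showing the same sign in the other condition is dropped as well, so the surviving congruent and incongruent trial sets contain identical stimuli.

def yoke_rejections(congruent, incongruent, rejected):
    """Drop a stimulus from both conditions if either of its trials was rejected.

    congruent / incongruent: dicts mapping stimulus ID -> epoch data
    rejected: iterable of (stimulus_id, condition) pairs flagged as MEG artifacts
    """
    bad = {stim_id for stim_id, _condition in rejected}
    # Keep only stimuli that appear in both conditions and were never flagged
    keep = (set(congruent) & set(incongruent)) - bad
    return ({sid: congruent[sid] for sid in keep},
            {sid: incongruent[sid] for sid in keep})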

Figure 1.

Schematic diagram of task design. Each picture and sign appeared in both the congruent and incongruent conditions. Trials were presented pseudorandomly so that repetition of a given stimulus did not occur with fewer than eight intervening trials. Incongruent pairs were not related semantically or phonologically in ASL.
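The pseudorandomization constraint can be illustrated with a small sketch (a hypothetical helper, not the authors' stimulus-delivery code): candidate orders are drawn at random and accepted only if every repetition of a stimulus is separated by at least eight intervening trials.

import random

def pseudorandom_order(trials, min_gap=8, max_tries=10000, seed=0):
    """Shuffle trials so repeats of the same stimulus have at least
    `min_gap` intervening trials (simple rejection-sampling sketch).

    trials: list of (stimulus_id, condition) tuples, each stimulus
    appearing once per condition.
    """
    rng = random.Random(seed)
    for _ in range(max_tries):
        order = trials[:]
        rng.shuffle(order)
        last_seen = {}
        ok = True
        for i, (sid, _cond) in enumerate(order):
            # difference of min_gap or less means fewer than min_gap intervening trials
            if sid in last_seen and i - last_seen[sid] <= min_gap:
                ok = False
                break
            last_seen[sid] = i
        if ok:
            return order
    raise RuntimeError("No valid ordering found; relax the constraint or add trials.")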

Neuroimaging.

While subjects performed the task, we recorded MEG from 204 planar gradiometer channels distributed over the scalp, at 1000 Hz with minimal filtering (0.1–200 Hz). Following the MEG session, each subject's structural MRI was acquired as a T1-weighted image. Sources were estimated by coregistering MEG and MRI data and using a linear minimum-norm approach, noise normalized to a prestimulus period, according to previously published procedures (Dale et al., 2000; Leonard et al., 2010; McDonald et al., 2010). Random-effects statistical analysis on the dynamic statistical parametric maps was performed using a cluster thresholding approach (Hagler et al., 2006; McDonald et al., 2010). Table 2 shows surface Talairach coordinates for peak vertices in the clusters. Two time windows were selected for analysis based on a grand average of the activity to signs and speech across both groups of participants. For the early (80–120 ms) time window, a grand average of all signed or spoken words was displayed on an average brain, and for the later time window (300–350 ms), a subtraction of congruous–incongruous words was displayed on the average brain. Regions with significant clusters (cluster threshold for signs 80–120 ms = 208.58 mm2, 300–350 ms = 212.32 mm2; cluster threshold for speech 80–120 ms = 238.60 mm2, 300–350 ms = 206.63 mm2) were selected for time course extraction (Fig. 2C,D, graphs).
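The source-estimation pipeline can be sketched with MNE-Python, which provides an analogous noise-normalized minimum-norm (dSPM) estimate; the file names, event codes, and regularization parameters below are assumptions for illustration, not the authors' exact settings.

import mne
from mne.minimum_norm import make_inverse_operator, apply_inverse

raw = mne.io.read_raw_fif("subject01_task_raw.fif", preload=True)   # hypothetical file name
raw.filter(0.1, 200.0)                                              # match the 0.1-200 Hz band
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id={"congruent": 1, "incongruent": 2},
                    tmin=-0.1, tmax=0.6, baseline=(None, 0), preload=True)

noise_cov = mne.compute_covariance(epochs, tmax=0.0)                 # prestimulus noise estimate
fwd = mne.read_forward_solution("subject01-fwd.fif")                 # from the subject's MRI
inv = make_inverse_operator(epochs.info, fwd, noise_cov, loose=0.2, depth=0.8)

evoked = epochs.average()
stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="dSPM")   # noise-normalized estimate

# Windows analyzed in the paper: 80-120 ms (sensory) and 300-350 ms (lexicosemantic)
early = stc.copy().crop(0.080, 0.120)
late = stc.copy().crop(0.300, 0.350)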

Table 2.

Talairach surface coordinates for selected ROIs shown in Figure 2

ROI name                 | Left (x, y, z) | Right (x, y, z)
Anterior insula          | −31, 13, 6     | 31, 16, 10
Planum temporale         | −35, −31, 22   | 36, −30, 17
Superior temporal sulcus | −47, −35, 0    | 46, −20, −7
Temporal pole            | −25, −1, −24   | 35, 5, −30
Intraparietal sulcus     | −33, −47, 34   | 37, −42, 34

Figure 2.

Superior temporal areas surrounding auditory cortex are active for both sign and speech during lexicosemantic processing, but only for speech during sensory processing. A, Grand average activity to signs at ∼100 ms in deaf subjects is localized to occipital cortex in calcarine and superior occipital sulci. B, Grand average activity to speech at ∼100 ms in hearing subjects is localized to posterior temporal cortex. C, Center, Grand average activity to incongruent–congruent signs at 300–350 ms (black arrow) in deaf subjects. Surrounding graphs, Regional time courses for congruent and incongruent conditions in five bilateral regions of interest from −100 to 600 ms (light blue arrow at 100 ms). D, Same as C for speech in hearing subjects. IPS, Intraparietal sulcus; PT, planum temporale; AI, anterior insula; STS, superior temporal sulcus; TP, temporal pole; V1, primary visual cortex. All mapped activity consists of cluster-thresholded dynamic statistical parametric maps significantly greater than the prestimulus baseline at p < 0.05, corrected.

Results

Behavioral responses

Both groups of participants performed the task with high accuracy and fast reaction times (Table 1). Deaf participants responded correctly on 94.3% of trials (SD = 3.93%) with a mean reaction time of 619.10 ms (SD = 97.5 ms). Hearing participants responded correctly on 98.25% of trials (SD = 3.01%) with a mean reaction time of 561.23 ms (SD = 94.28 ms). The between-group reaction time difference was not significant (t test, p > 0.1).
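As a worked illustration of the between-group comparison, an independent-samples t test on the reaction times can be run as follows; the per-subject values are simulated from the reported group means and SDs, since individual data are not listed in Table 1.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rt_deaf = rng.normal(619.10, 97.5, size=12)      # simulated, matching reported mean/SD (n = 12)
rt_hearing = rng.normal(561.23, 94.28, size=8)   # simulated, matching reported mean/SD (n = 8)

t, p = stats.ttest_ind(rt_deaf, rt_hearing)
print(f"t = {t:.2f}, p = {p:.3f}")               # the paper reports p > 0.1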

Anatomically constrained MEG: early time window (80–120 ms)

During early sensory processing (80–120 ms), we examined the grand average of activity for all signed words in deaf participants and all spoken words in hearing participants. Responses to signs were significant in posterior occipital regions, including the occipital pole (Fig. 2A). Responses to spoken words were strongest in bilateral superior temporal cortex, including primary auditory areas on the superior temporal plane (Fig. 2B). An auditory peak in superior temporal channels that did not differentiate between congruent and incongruent conditions was visible in individual hearing subjects, but was not present in deaf subjects (Fig. 3). Thus, at early latencies, neural responses are confined to modality-specific sensory regions and do not differentiate between semantically congruent and incongruent trials. Crucially, signs do not evoke activity in auditory cortex at ∼100 ms in deaf native signers.
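One way to quantify this dissociation is to average the noise-normalized source estimate within anatomical labels during the 80–120 ms window; the sketch below assumes FreeSurfer 'aparc' labels as stand-ins for auditory and occipital regions (not the paper's exact ROI definitions) and reuses the source estimate `stc` from the inverse-solution sketch above.

import mne

labels = mne.read_labels_from_annot("subject01", parc="aparc",
                                    subjects_dir="/path/to/subjects_dir")
auditory = [l for l in labels if l.name == "transversetemporal-lh"][0]
occipital = [l for l in labels if l.name == "lateraloccipital-lh"][0]

def window_mean(stc, label, tmin=0.080, tmax=0.120):
    """Mean dSPM value within a label over the early sensory window."""
    stc_label = stc.in_label(label)
    mask = (stc_label.times >= tmin) & (stc_label.times <= tmax)
    return stc_label.data[:, mask].mean()

# Usage (with `stc` from the earlier sketch); for signs in deaf subjects the
# auditory value is expected near baseline, the occipital value elevated:
# print(window_mean(stc, auditory), window_mean(stc, occipital))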

Figure 3.

Individual MEG sensors demonstrate the dissociation between early and late activity in auditory regions. A1, Head plot shows the location of a left superior temporal MEG channel showing significant incongruent > congruent activity in a deaf native signer. A2, The left superior temporal MEG channel shows the congruent versus incongruent difference for signs. B1, Head plot from a hearing participant. B2, The same channel shows a similar difference for speech in a single representative hearing participant. Both subjects begin to show a significant difference between conditions at ∼240 ms. C, The same channel shows a sensory peak at ∼100 ms for hearing (purple), but not deaf (green), subjects. Gray regions indicate significance at p < 0.01.

To determine whether auditory cortex activity differs between deaf and hearing individuals in response to visual stimuli, we compared the response to the pictures with that from a separate group of hearing subjects who saw the same line drawings. While both groups showed significant cluster-thresholded activity in posterior occipital cortex at ∼100 ms (minor localization differences between groups may be due to differences in the task design between the deaf group and this particular hearing group), neither group showed activity in auditory areas (Fig. 4).

Figure 4.

Direct comparison of response to pictures between deaf (A) and hearing (B) subjects. Both groups show significant activity at ∼100 ms in occipital visual areas, and neither shows activity in auditory cortex.

Anatomically constrained MEG: late time window (300–350 ms)

In contrast to early latencies, the deaf and hearing groups overlapped extensively during lexicosemantic processing. In both groups, the subtraction of congruent from incongruent trials revealed semantically modulated activity in the classical left hemisphere frontotemporal network within the a priori time window of 300–350 ms. Although words in both sign (Fig. 2C) and speech (Fig. 2D) activated some modality-specific areas [e.g., left intraparietal sulcus (IPS) for sign], most activity occurred within a shared network including the left planum temporale, superior temporal sulcus, temporal pole, and, to a lesser extent, the homologous areas in the right hemisphere. Representative single-subject waveforms from individual sensors showed that the congruent versus incongruent difference emerged with similar timing and location in left superior temporal areas surrounding auditory cortex (Fig. 3), as determined by a random-effects resampling statistic (Maris and Oostenveld, 2007).
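The single-sensor comparison can be reproduced in outline with a nonparametric cluster-based resampling test (Maris and Oostenveld, 2007) as implemented in MNE-Python; the arrays and settings below are placeholders, not the recorded data.

import numpy as np
from mne.stats import permutation_cluster_test

# n_trials x n_times arrays for one superior temporal gradiometer
# (700 samples at 1000 Hz covers the -100 to 600 ms epoch)
congruent = np.random.randn(100, 700)            # placeholder data
incongruent = np.random.randn(100, 700) + 0.1    # placeholder data with a small offset

t_obs, clusters, cluster_pv, _ = permutation_cluster_test(
    [incongruent, congruent], n_permutations=1000, threshold=None,
    tail=1, seed=0, out_type="mask")

for mask, p in zip(clusters, cluster_pv):
    if p < 0.05:
        samples = np.flatnonzero(mask)
        print("significant cluster from sample", samples[0], "to", samples[-1])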

Discussion

Sign languages possess the sublexical, word-level, syntactic, and semantic characteristics typical of spoken language (Emmorey, 2002; Sandler and Lillo-Martin, 2006). When a deaf child is reared by signing parents, the developmental trajectory of linguistic knowledge (including specific syntactic structures) follows that of spoken language in hearing children (Anderson and Reilly, 2002; Mayberry and Squires, 2006).

We examined two stages of signed and spoken word processing in deaf and hearing participants. While the early sensory processing stage (∼100 ms) is confined to modality-specific visual cortex for signs and auditory cortex for speech, both languages activate an overlapping network of left hemisphere frontotemporal regions (including areas surrounding auditory cortex) during lexicosemantic processing (∼300 ms). The similarity between sign and speech during the later time window supports the hypothesis that areas including anteroventral temporal cortex, superior temporal cortex, the superior temporal plane, and inferior prefrontal cortex are specialized for processing word meaning, regardless of modality. In contrast, the early differences between modalities provide evidence that deafness does not cause visual afferents to be directed to auditory cortex for initial sensory processing; rather, early sensory processing of signed words takes place in visual cortex.

The current study is among the first investigations of the spatiotemporal dynamics of sign processing. The timing of the activity in the present study reveals that speech in hearing participants and sign in deaf participants activate the classical left frontotemporal language network between ∼200 and 400 ms, well beyond short-latency sensory processes. These areas have been shown to be involved in processing high-level semantic information for both auditory and written words in normal individuals with fMRI (Patterson et al., 2007; Binney et al., 2010; Price, 2010; Binder et al., 2011), MEG (Marinkovic et al., 2003; Leonard et al., 2011), and direct intracranial recordings in patients with medically intractable epilepsy (Chan et al., 2011), although there is evidence for functional and modality-specific specialization within anterior temporal subregions (Visser and Lambon Ralph, 2011). These same areas are deficient or damaged in patients with semantic dementia (Binney et al., 2010; Lambon Ralph et al., 2010; Mion et al., 2010). Lexicosemantic activity in anteroventral temporal and superior temporal areas is observed in both languages of bilinguals (Leonard et al., 2010, 2011) and in 12- to 18-month-old infants (Travis et al., 2011), further demonstrating their fundamental role in processing meaning. We found only relatively minor differences in active loci, including greater activity in IPS in deaf signers, possibly related to an inherently greater praxic and biological motion component in sign (Emmorey et al., 2002; Pobric et al., 2010). Activity in this network in congenitally deaf native signers processing a visuogestural language provides additional support for the hypothesis that this processing reflects abstract, supramodal representations of word meaning, regardless of the input modality.

Capitalizing on the high spatiotemporal resolution of MEG constrained by individual cortical anatomy obtained with MRI, we also examined whether the activity observed in auditory regions in congenitally deaf individuals (Finney et al., 2001, 2003; Lambertz et al., 2005) is caused by a rewiring of visual sensory input to cortex that has been underutilized due to lifelong sensory deprivation. While previous MEG results suggested that hemodynamic activation in these regions, particularly in the right hemisphere, reflected early processing, the time window that was examined extended to 400 ms after stimulus onset, well beyond initial sensory processing for both visual and auditory stimuli (Finney et al., 2003). Furthermore, other investigations with single deaf subjects have failed to find evidence for the hypothesized cross-modal plasticity in auditory areas (Hickok et al., 1997; Nishimura et al., 1999). The present study investigated a sensory-specific, short-latency time window and found that during the first pass of sensory processing, auditory cortex is not active in deaf participants, whether they are viewing signs or static pictures. Rather, these areas show semantically modulated activity only well after first-pass sensory processing is thought to be completed. Lexicosemantic activity in the left anteroventral temporal lobe between ∼200 and 400 ms has been shown with laminar multi-microelectrode recordings from different cortical layers to reflect recurrent associative or second-pass processing (Halgren et al., 2006). The latency of the responses in superior temporal cortex in deaf signers indicates that these regions receive the output of a long chain of visual processing, rather than participating in the early encoding of sensory information (which is performed in primary and secondary visual areas).

Cortical plasticity is a hallmark of early development (Bates and Roe, 2001) and continues well into adulthood in the form of learning-induced cortical and synaptic changes (Buonomano and Merzenich, 1998). Experimental results with animals showing cross-modal plasticity in the context of sensory deprivation are intriguing and of great importance for understanding fundamental principles of neural organization (Sur et al., 1988; Roe et al., 1990; Sur, 2004; Barnes and Finnerty, 2010). While there is extensive and convincing evidence that auditory stimuli activate visual areas in blind individuals (Sadato et al., 1996; Cohen et al., 1997; Barnes and Finnerty, 2010), such clear evidence for a reorganization of auditory cortex in deafness is lacking in both human (Bavelier and Neville, 2002; Kral, 2007) and animal (Kral et al., 2003; Kral, 2007) studies. Factors such as the extent of hearing loss and age of onset of deafness may impact cortical reorganization and rewiring (Bavelier and Neville, 2002; Lambertz et al., 2005), and there may be functional distinctions between A1 and surrounding areas that do show plasticity, such as the anterior auditory field in cats (Lomber et al., 2010; Meredith and Lomber, 2011; Meredith et al., 2011). Additionally, some neurons in auditory regions may be involved in processing nonauditory information (particularly in multimodal contexts); however, the present results suggest that in humans who are born profoundly deaf and are native signers, unimodal responses in primary sensory and semantic systems remain intact.

Thus, in deaf signers who acquired sign language from birth from their deaf parents, signs are processed in a brain network that is strikingly similar to that for spoken words in hearing individuals. The timing of activity in the language network (including superior temporal regions surrounding auditory cortex) reveals that this is due to semantic encoding, rather than to a rerouting of visual-sensory input. This provides evidence that left frontotemporal regions, including the superior temporal plane surrounding the auditory cortex, are specialized for encoding word meaning regardless of input modality.

Footnotes

This work was supported by NSF Grant BCS-0924539, NIH Grant T-32 DC00041, an innovative research award from the Kavli Institute for Brain and Mind, and a UCSD Chancellor's Collaboratories grant. We thank D. Hagler, A. Lieberman, P. Lott, A. Dale, and T. Brown for assistance.

References

1. Anderson D, Reilly J. The MacArthur communicative development inventory: normative data for American Sign Language. J Deaf Stud Deaf Educ. 2002;7:83–106. doi: 10.1093/deafed/7.2.83.
2. Barnes SJ, Finnerty GT. Sensory experience and cortical rewiring. Neuroscientist. 2010;16:186–198. doi: 10.1177/1073858409343961.
3. Bates E, Roe K. Language development in children with unilateral brain injury. In: Nelson CA, Luciana M, editors. Handbook of developmental cognitive neuroscience. Cambridge: MIT; 2001.
4. Bates E, D'Amico S, Jacobsen T, Székely A, Andonova E, Devescovi A, Herron D, Lu CC, Pechmann T, Pléh C, Wicha N, Federmeier K, Gerdjikova I, Gutierrez G, Hung D, Hsu J, Iyer G, Kohnert K, Mehotcheva T, Orozco-Figueroa A, et al. Timed picture naming in seven languages. Psychon Bull Rev. 2003;10:344–380. doi: 10.3758/bf03196494.
5. Bavelier D, Neville HJ. Cross-modal plasticity: where and how? Nat Rev Neurosci. 2002;3:443–452. doi: 10.1038/nrn848.
6. Binder JR, Gross WL, Allendorfer JB, Bonilha L, Chapin J, Edwards JC, Grabowski TJ, Langfitt JT, Loring DW, Lowe MJ, Koenig K, Morgan PS, Ojemann JG, Rorden C, Szaflarski JP, Tivarus ME, Weaver KE. Mapping anterior temporal lobe language areas with fMRI: a multicenter normative study. Neuroimage. 2011;54:1465–1475. doi: 10.1016/j.neuroimage.2010.09.048.
7. Binney RJ, Embleton KV, Jefferies E, Parker GJ, Ralph MA. The ventral and inferolateral aspects of the anterior temporal lobe are crucial in semantic memory: evidence from a novel direct comparison of distortion-corrected fMRI, rTMS, and semantic dementia. Cereb Cortex. 2010;20:2728–2738. doi: 10.1093/cercor/bhq019.
8. Buonomano DV, Merzenich MM. Cortical plasticity: from synapses to maps. Annu Rev Neurosci. 1998;21:149–186. doi: 10.1146/annurev.neuro.21.1.149.
9. Capek CM, Grossi G, Newman AJ, McBurney SL, Corina D, Roeder B, Neville HJ. Brain systems mediating semantic and syntactic processing in deaf native signers: biological invariance and modality specificity. Proc Natl Acad Sci U S A. 2009;106:8784–8789. doi: 10.1073/pnas.0809609106.
10. Chan AM, Baker JM, Eskandar E, Schomer D, Ulbert I, Marinkovic K, Cash SS, Halgren E. First-pass selectivity for semantic categories in human anteroventral temporal lobe. J Neurosci. 2011;31:18119–18129. doi: 10.1523/JNEUROSCI.3122-11.2011.
11. Cohen LG, Celnik P, Pascual-Leone A, Corwell B, Falz L, Dambrosia J, Honda M, Sadato N, Gerloff C, Dolores Catalá MD, Hallett M. Functional relevance of cross-modal plasticity in blind humans. Nature. 1997;389:180–183. doi: 10.1038/38278.
12. Corina DP, McBurney SL, Dodrill C, Hinshaw K, Brinkley J, Ojemann G. Functional roles of Broca's area and SMG: evidence from cortical stimulation mapping in a deaf signer. Neuroimage. 1999;10:570–581. doi: 10.1006/nimg.1999.0499.
13. Dale AM, Liu AK, Fischl BR, Buckner RL, Belliveau JW, Lewine JD, Halgren E. Dynamic statistical parametric mapping: combining fMRI and MEG for high-resolution imaging of cortical activity. Neuron. 2000;26:55–67. doi: 10.1016/s0896-6273(00)81138-1.
14. Emmorey K. Language, cognition and the brain: insights from sign language research. Mahwah, NJ: Lawrence Erlbaum; 2002.
15. Emmorey K, Damasio H, McCullough S, Grabowski T, Ponto LL, Hichwa RD, Bellugi U. Neural systems underlying spatial language in American Sign Language. Neuroimage. 2002;17:812–824.
16. Ferjan Ramirez N, Lieberman AM, Mayberry RI. The initial stages of first-language acquisition begun in adolescence: when late looks early. J Child Lang. 2012;20:1–24. doi: 10.1017/S0305000911000535.
17. Finney EM, Fine I, Dobkins KR. Visual stimuli activate auditory cortex in the deaf. Nat Neurosci. 2001;4:1171–1173. doi: 10.1038/nn763.
18. Finney EM, Clementz BA, Hickok G, Dobkins KR. Visual stimuli activate auditory cortex in deaf subjects: evidence from MEG. Neuroreport. 2003;14:1425–1427. doi: 10.1097/00001756-200308060-00004.
19. Hagler DJ Jr, Saygin AP, Sereno MI. Smoothing and cluster thresholding for cortical surface-based group analysis of fMRI data. Neuroimage. 2006;33:1093–1103. doi: 10.1016/j.neuroimage.2006.07.036.
20. Halgren E, Wang C, Schomer DL, Knake S, Marinkovic K, Wu J, Ulbert I. Processing stages underlying word recognition in the anteroventral temporal lobe. Neuroimage. 2006;30:1401–1413. doi: 10.1016/j.neuroimage.2005.10.053.
21. Hickok G, Bellugi U, Klima ES. The neurobiology of signed language and its implications for the neural organization of language. Nature. 1996;381:699–702. doi: 10.1038/381699a0.
22. Hickok G, Poeppel D, Clark K, Buxton RB, Rowley HA, Roberts TP. Sensory mapping in a congenitally deaf subject: MEG and fMRI studies of cross-modal non-plasticity. Hum Brain Mapp. 1997;5:437–444. doi: 10.1002/(SICI)1097-0193(1997)5:6<437::AID-HBM4>3.0.CO;2-4.
23. Klima ES, Bellugi U. The signs of language. Cambridge: Harvard UP; 1979.
24. Kral A. Unimodal and cross-modal plasticity in the ‘deaf’ auditory cortex. Int J Audiol. 2007;46:479–493. doi: 10.1080/14992020701383027.
25. Kral A, Schröder JH, Klinke R, Engel AK. Absence of cross-modal reorganization in the primary auditory cortex of congenitally deaf cats. Exp Brain Res. 2003;153:605–613. doi: 10.1007/s00221-003-1609-z.
26. Kutas M, Federmeier KD. Thirty years and counting: finding meaning in the N400 component of the event-related brain potential (ERP). Annu Rev Psychol. 2011;62:621–647. doi: 10.1146/annurev.psych.093008.131123.
27. Kutas M, Neville H, Holcomb P. A preliminary comparison of the N400 response to semantic anomalies during reading, listening and signing. Electroencephalogr Clin Neurophysiol Suppl. 1987;39:325–330.
28. Lambertz N, Gizewski ER, de Greiff A, Forsting M. Cross-modal plasticity in deaf subjects dependent on the extent of hearing loss. Brain Res Cogn Brain Res. 2005;25:884–890. doi: 10.1016/j.cogbrainres.2005.09.010.
29. Lambon Ralph MA, Sage K, Jones RW, Mayberry EJ. Coherent concepts are computed in the anterior temporal lobes. Proc Natl Acad Sci U S A. 2010;107:2717–2722. doi: 10.1073/pnas.0907307107.
30. Leonard MK, Brown TT, Travis KE, Gharapetian L, Hagler DJ Jr, Dale AM, Elman JL, Halgren E. Spatiotemporal dynamics of bilingual word processing. Neuroimage. 2010;49:3286–3294. doi: 10.1016/j.neuroimage.2009.12.009.
31. Leonard MK, Torres C, Travis KE, Brown TT, Hagler DJ Jr, Dale AM, Elman JL, Halgren E. Language proficiency modulates the recruitment of non-classical language areas in bilinguals. PLoS One. 2011;6:e18240. doi: 10.1371/journal.pone.0018240.
32. Lomber SG, Meredith MA, Kral A. Cross-modal plasticity in specific auditory cortices underlies visual compensations in the deaf. Nat Neurosci. 2010;13:1421–1427. doi: 10.1038/nn.2653.
33. MacSweeney M, Capek CM, Campbell R, Woll B. The signing brain: the neurobiology of sign language. Trends Cogn Sci. 2008;12:432–440. doi: 10.1016/j.tics.2008.07.010.
34. Marinkovic K, Dhond RP, Dale AM, Glessner M, Carr V, Halgren E. Spatiotemporal dynamics of modality-specific and supramodal word processing. Neuron. 2003;38:487–497. doi: 10.1016/s0896-6273(03)00197-1.
35. Maris E, Oostenveld R. Nonparametric statistical testing of EEG- and MEG-data. J Neurosci Methods. 2007;164:177–190. doi: 10.1016/j.jneumeth.2007.03.024.
36. Mayberry RI, Squires B. Sign language: acquisition. In: Brown K, editor. Encyclopedia of language and linguistics. 2nd Edition. Oxford: Elsevier; 2006. pp. 739–743.
37. Mayberry RI, Chen JK, Witcher P, Klein D. Age of acquisition effects on the functional organization of language in the adult brain. Brain Lang. 2011;119:16–29. doi: 10.1016/j.bandl.2011.05.007.
38. McDonald CR, Thesen T, Carlson C, Blumberg M, Girard HM, Trongnetrpunya A, Sherfey JS, Devinsky O, Kuzniecky R, Doyle WK, Cash SS, Leonard MK, Hagler DJ Jr, Dale AM, Halgren E. Multimodal imaging of repetition priming: using fMRI, MEG, and intracranial EEG to reveal spatiotemporal profiles of word processing. Neuroimage. 2010;53:707–717. doi: 10.1016/j.neuroimage.2010.06.069.
39. Meredith MA, Lomber SG. Somatosensory and visual crossmodal plasticity in the anterior auditory field of early-deaf cats. Hear Res. 2011;280:38–47. doi: 10.1016/j.heares.2011.02.004.
40. Meredith MA, Kryklywy J, McMillan AJ, Malhotra S, Lum-Tai R, Lomber SG. Crossmodal reorganization in the early deaf switches sensory, but not behavioral roles of auditory cortex. Proc Natl Acad Sci U S A. 2011;108:8856–8861. doi: 10.1073/pnas.1018519108.
41. Mion M, Patterson K, Acosta-Cabronero J, Pengas G, Izquierdo-Garcia D, Hong YT, Fryer TD, Williams GB, Hodges JR, Nestor PJ. What the left and right anterior fusiform gyri tell us about semantic memory. Brain. 2010;133:3256–3268. doi: 10.1093/brain/awq272.
42. Neville HJ, Coffey SA, Lawson DS, Fischer A, Emmorey K, Bellugi U. Neural systems mediating American Sign Language: effects of sensory experience and age of acquisition. Brain Lang. 1997;57:285–308. doi: 10.1006/brln.1997.1739.
43. Nishimura H, Hashikawa K, Doi K, Iwaki T, Watanabe Y, Kusuoka H, Nishimura T, Kubo T. Sign language ‘heard’ in the auditory cortex. Nature. 1999;397:116. doi: 10.1038/16376.
44. Ojemann GA. Brain organization for language from the perspective of electrical stimulation mapping. Behav Brain Sci. 1983;6:189–230.
45. Patterson K, Nestor PJ, Rogers TT. Where do you know what you know? The representation of semantic knowledge in the human brain. Nat Rev Neurosci. 2007;8:976–987. doi: 10.1038/nrn2277.
46. Petitto LA, Zatorre RJ, Gauna K, Nikelski EJ, Dostie D, Evans AC. Speech-like cerebral activity in profoundly deaf people processing signed languages: implications for the neural basis of human language. Proc Natl Acad Sci U S A. 2000;97:13961–13966. doi: 10.1073/pnas.97.25.13961.
47. Pobric G, Jefferies E, Lambon Ralph MA. Category-specific versus category-general semantic impairment induced by transcranial magnetic stimulation. Curr Biol. 2010;20:964–968. doi: 10.1016/j.cub.2010.03.070.
48. Poizner H, Klima ES, Bellugi U. What the hands reveal about the brain. Cambridge, MA: MIT; 1987.
49. Price CJ. The anatomy of language: a review of 100 fMRI studies published in 2009. Ann N Y Acad Sci. 2010;1191:62–88. doi: 10.1111/j.1749-6632.2010.05444.x.
50. Roe AW, Pallas SL, Hahm JO, Sur M. A map of visual space induced in primary auditory cortex. Science. 1990;250:818–820. doi: 10.1126/science.2237432.
51. Sadato N, Pascual-Leone A, Grafman J, Ibañez V, Deiber MP, Dold G, Hallett M. Activation of the primary visual cortex by Braille reading in blind subjects. Nature. 1996;380:526–528. doi: 10.1038/380526a0.
52. Sandler W, Lillo-Martin D. Sign language and linguistic universals. Cambridge: Cambridge UP; 2006.
53. Schick B. The American Sign Language vocabulary test. Boulder, CO: University of Colorado at Boulder; 1997.
54. Sur M. Rewiring cortex: cross-modal plasticity and its implications for cortical development and function. In: Calvert GA, Spence C, Stein BE, editors. The handbook of multisensory processes. Cambridge, MA: MIT; 2004.
55. Sur M, Garraghty PE, Roe AW. Experimentally induced visual projections into auditory thalamus and cortex. Science. 1988;242:1437–1441. doi: 10.1126/science.2462279.
56. Travis KE, Leonard MK, Brown TT, Hagler DJ Jr, Curran M, Dale AM, Elman JL, Halgren E. Spatiotemporal neural dynamics of word understanding in 12- to 18-month-old infants. Cereb Cortex. 2011;21:1832–1839. doi: 10.1093/cercor/bhq259.
57. Visser M, Lambon Ralph MA. Differential contributions of bilateral ventral anterior temporal lobe and left anterior superior temporal gyrus to semantic processes. J Cogn Neurosci. 2011;23:3121–3131. doi: 10.1162/jocn_a_00007.
