Author manuscript; available in PMC: 2011 Aug 14.
Published in final edited form as: Sign Lang Linguist. 2010 Jan 1;13(2):183–199. doi: 10.1075/sll.13.2.03mal

Early acquisition of sign language: What neuroimaging data tell us

Evie Malaia, Ronnie B. Wilbur
PMCID: PMC3155772  NIHMSID: NIHMS311091  PMID: 21847357

Abstract

Early acquisition of a natural language, signed or spoken, has been shown to fundamentally affect both one's ability to use the first language and the ability to learn subsequent languages later in life (Mayberry 2007, 2009). This review summarizes recent neuroimaging studies in order to detail the neural bases of sign language acquisition. The logic of the review is to present research reports which, taken together, show that people who acquire a natural language, spoken or signed, on the normal developmental timeline possess specialized linguistic abilities and brain functions that are missing or deficient in people whose exposure to natural language is delayed or absent. By comparing the function of each brain region with regard to the processing of spoken and sign languages, we attempt to clarify the role each region plays in language processing in general, and to outline the challenges and remaining questions in understanding language processing in the brain.

Keywords: sign language, neuroscience, neuroimaging, language acquisition

1. Introduction

Early acquisition of a natural language, signed or spoken, has been shown to fundamentally affect both one's proficiency in the first language and one's proficiency in subsequent languages learned later in life (Mayberry 2007, 2009). For most deaf children, only sign languages provide adequate input at all levels of linguistic structure, allowing development of full linguistic proficiency on the normal timeline (Wilbur 2008). There is a special quality to natural language that separates it from other communicative systems (e.g. gestures). For example, when presented with visual linguistic input (e.g., American Sign Language, ASL) and unstructured pantomime (gesture), hearing infants demonstrate a preference for ASL as early as 6 months of age (Krentz & Corina 2008). High fluency in ASL has also been shown to correlate with successful development of literacy in English for deaf children, indicating that knowledge of one natural language contributes to acquisition of another (Hoffmeister et al. 1998; Padden & Ramsey 1998; Prinz & Strong 1998; Singleton et al. 2004; Strong & Prinz 1997).

Experimental research has demonstrated that early acquisition of signed or spoken language provides equal advantages in all abstract linguistic domains: phonology, syntax, and semantics. For example, Mayberry and Witcher (2005) have shown that in a primed lexical decision task (wherein a task is constructed to determine whether the presence of a particular stimulus item results in faster or more accurate processing of a subsequent target item), phonological priming by a sign's place of articulation is observed both in native ASL signers (deaf and hearing) and in hearing early acquirers of spoken English who learned ASL as a second language later in life, but not in deaf people who learned ASL later in life. These results indicate that the larger process of sign recognition requires abstract phonological processing, and that one's ability to conduct such processing is sensitive to the age at which the first language (L1) is acquired. Early acquisition of either a signed or a spoken language results in better processing of the sign language formational feature ‘place of articulation’; deaf individuals who have not learned a first language according to the normal developmental sequence do not show the same ability to perform formational analysis, as evidenced by their poorer performance on primed lexical decision tasks.
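To make the paradigm concrete, the sketch below shows how a priming effect of this kind is typically quantified: as the difference in mean response time between unrelated-prime and related-prime trials. The response times are hypothetical values chosen purely for illustration, not data from Mayberry and Witcher (2005).

import numpy as np

# Illustrative sketch (hypothetical numbers): in a primed lexical decision
# task, the priming effect is the difference in mean response time between
# unrelated-prime and related-prime trials. A reliable positive effect
# indicates that the prime (e.g., a sign sharing place of articulation with
# the target) speeds recognition of the target.
related_rt = np.array([612, 598, 640, 605, 587, 630])    # ms, related prime
unrelated_rt = np.array([655, 661, 648, 670, 642, 659])  # ms, unrelated prime

priming_effect = unrelated_rt.mean() - related_rt.mean()
print(f"Phonological priming effect: {priming_effect:.0f} ms")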

In the United States, children who are deaf before the acquisition of spoken English are taught English in school, laboriously, as though it were a foreign/second language (Charrow & Fletcher 1974). Mayberry and Lock (2003) also demonstrated that, despite radical differences in modality and linguistic structure between English and ASL, early learners of ASL acquire the same mastery of syntactic structures in English as a second language (L2) as early learners of spoken languages (Urdu, German, Spanish, French), replicating the findings of Charrow and Fletcher (1974). In general, these studies demonstrate that early language acquisition is a prerequisite for proficiency in processing linguistic structures in either L1 or L2. Lack of early exposure to a natural sign language leads to difficulties for deaf children in the acquisition of L2 (usually the dominant spoken language in the community), such that by age 18, deaf students generally do not have the linguistic competence of 10-year-old hearing children with respect to many syntactic structures (Wilbur 2000). Their main challenge is the acquisition of the general language skills that underlie successful use of speech, signing, reading, and writing. To this day, language barriers still deny deaf children an equal opportunity for success in school, higher education, careers, and social settings.

In recent years, neuroimaging studies of deaf signers have provided new information about the neural changes associated with early acquisition of sign language. In the following sections, we review research on sign language-related brain regions, focusing on the changes in their functions that result from early sign language acquisition.

2. Neuroimaging methods

The experiments described in this review use one of the following techniques to investigate brain function: evoked response potentials (ERP), positron emission tomography (PET), or functional magnetic resonance imaging (fMRI). ERP measures the amplitude of electrical potentials recorded from the surface of the head following presentation of a specific stimulus.

ERP recordings provide high temporal resolution of the overall intensity of electrical activity in neurons processing specific stimuli. However, this technique only allows for low-resolution localization of brain activity (where the activity is in the brain, e.g., whether it is left-lateralized, central, or right-lateralized; anterior or posterior, etc.).
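As a minimal illustration of the averaging logic behind ERPs, the sketch below simulates noisy single-trial EEG epochs containing a small stimulus-locked deflection and recovers it by averaging across trials. The sampling rate, component latency, and noise level are invented for illustration and do not come from any study reviewed here.

import numpy as np

# Sketch: ERPs are obtained by averaging many EEG epochs time-locked to
# stimulus onset, so that activity unrelated to the stimulus averages
# toward zero while the stimulus-locked response remains.
rng = np.random.default_rng(0)

fs = 250                       # sampling rate (Hz), assumed
n_trials, n_samples = 100, fs  # 100 trials, 1-second epochs
t = np.arange(n_samples) / fs  # time after stimulus onset (s)

# Simulated "true" evoked response: a negative deflection peaking ~400 ms,
# buried in trial-by-trial noise that is much larger than the response.
evoked = -3e-6 * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
epochs = evoked + 10e-6 * rng.standard_normal((n_trials, n_samples))

erp = epochs.mean(axis=0)      # averaging attenuates noise by ~1/sqrt(n_trials)

peak_latency_ms = 1000 * t[np.argmin(erp)]
print(f"Estimated peak latency: {peak_latency_ms:.0f} ms")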

PET is a nuclear imaging technique which produces a three-dimensional image of functional processes in the brain; it relies on tracers — radioactive materials — typically injected into (or, less commonly, inhaled by) the participant. fMRI also produces a three-dimensional image with high spatial resolution, but does not require tracers. Both PET and fMRI provide an indirect measure of brain activity: blood flow to specific regions of the brain. If blood flow to a particular region increases in response to a task or stimulus, it is inferred that the neurons in that area require additional oxygen, and hence that the region is more heavily engaged by the task. Since PET and fMRI rely on indirect measures of brain activity, their temporal resolution (the ability to follow the time course of events) is lower than that of ERP. However, both provide high spatial resolution, allowing researchers to identify anatomical brain structures and networks involved in specific tasks.
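The sketch below illustrates, on simulated data, the basic logic by which fMRI analyses relate a task to regional signal change: a predicted time course is built by convolving the task design with a haemodynamic response function and fit to a voxel's signal, and the estimated task weight is what a "task vs. baseline" activation reflects. The block design, the simple double-gamma response shape, and all numbers are assumptions for illustration, not the analysis pipeline of any study reviewed here.

import numpy as np
from scipy.stats import gamma

# Sketch of a task-vs-baseline contrast on one simulated voxel.
rng = np.random.default_rng(1)

tr, n_scans = 2.0, 120                     # repetition time (s), number of scans
times = np.arange(n_scans) * tr

# Block design: 20 s of task alternating with 20 s of baseline (fixation).
task = (times % 40) < 20

# Simple double-gamma haemodynamic response function, sampled at the TR.
hrf_t = np.arange(0, 30, tr)
hrf = gamma.pdf(hrf_t, 6) - 0.35 * gamma.pdf(hrf_t, 16)
hrf /= hrf.sum()

predicted = np.convolve(task.astype(float), hrf)[:n_scans]

# Simulated voxel signal: task response plus slow drift and noise.
signal = 2.0 * predicted + 0.01 * times + rng.standard_normal(n_scans)

# General linear model with regressors for task, linear drift, and a constant.
X = np.column_stack([predicted, times, np.ones(n_scans)])
betas, *_ = np.linalg.lstsq(X, signal, rcond=None)
print(f"Estimated task effect (beta): {betas[0]:.2f}")  # should be close to 2.0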

All neuroimaging studies of language draw conclusions from the difference between the activation patterns elicited by the stimuli and task and a baseline (control) measure of brain activity. Because stimuli, tasks, and baselines differ from study to study, some caution is needed in interpreting such differential activations when comparing multiple studies.

First, activations measured by fMRI, PET, or ERP studies represent only relative differences in neural activity between the experimental baseline and the linguistic task conditions. Stimulus- and task-specific properties of the experimental paradigm significantly affect both baseline and condition-related activations. Thus, some differences in activation effects between studies investigating the same linguistic phenomenon are to be expected, and can be used both to interpret the comparative results of the studies and to generate further questions about the neural bases of language processing.

Second, activation of a specific brain region means only that this region is engaged during the processing of a certain stimulus. It does not imply that the processing is successful. In fact, activation increases are often related to higher processing loads, reflecting difficulty of the task (Just et al. 1996). Thus, neuroimaging results should be interpreted in conjunction with behavioral data (such as accuracy and response times for probe questions), when available.

Third, the brain is never “at rest” — that is, there are always regions and networks of the brain that are active, even if there is no overt task or stimulus that needs to be processed (Buckner et al. 2008). Thus, neuroimaging results which rely on the assumption that a “no stimulus, no task” condition can be contrasted with the task of interest (linguistic, etc.) should be considered with caution.

Finally, small sample sizes typical of neuroimaging research, in combination with diverse backgrounds of participants, are also a source of potential variation in neuroimaging data.
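The consequence of small samples can be made concrete with a short simulation: group-level estimates of the same underlying effect scatter much more widely when averaged over eight participants than over forty. The effect size and noise level below are arbitrary values chosen for illustration only.

import numpy as np

# Sketch: variability of group-level effect estimates as a function of
# sample size, for a fixed true effect and between-subject noise.
rng = np.random.default_rng(2)
true_effect, sd = 0.5, 1.0

for n in (8, 40):
    # 5000 simulated "studies", each averaging over n participants.
    estimates = rng.normal(true_effect, sd, size=(5000, n)).mean(axis=1)
    lo, hi = np.percentile(estimates, [2.5, 97.5])
    print(f"n={n:2d}: 95% of sample means fall within [{lo:.2f}, {hi:.2f}]")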

3. Brain regions involved in linguistic processing

Early research on the neurolinguistics of sign languages concentrated on the question of how such languages would be lateralized in the brain. The central issue of concern was how the brain would treat a language that is visual/spatial in modality, as it was already known that the right hemisphere is specialized for visuospatial information. Multiple studies have confirmed that sign languages, just like spoken languages, make use of the left-lateralized language network (see Campbell et al. 2008 for a review). Several studies (Neville et al. 1997, 1998; Newman et al. 2002) have also found that early acquisition of sign language leads to greater right hemisphere involvement in sign language processing, raising the question of how the specific processing requirements of sign language shape the functional organization of the language system in the brain.
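Hemispheric asymmetries of the kind discussed throughout this section are commonly summarized with a laterality index, LI = (L − R)/(L + R), computed over activation measures (for example, suprathreshold voxel counts) in homologous left and right regions of interest; values near +1 indicate left-lateralized and values near −1 right-lateralized activity. The sketch below uses hypothetical numbers and is not drawn from any of the studies reviewed here.

# Sketch: a conventional laterality index over left/right activation measures.
def laterality_index(left_activation: float, right_activation: float) -> float:
    total = left_activation + right_activation
    if total == 0:
        return 0.0
    return (left_activation - right_activation) / total

# e.g., hypothetical counts of suprathreshold voxels in left vs. right IFG
print(laterality_index(850, 310))   # ~0.47: moderately left-lateralized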

Recent neuroimaging studies have shown that early acquisition of sign language has an effect on a broad network of brain regions, both in the left and right hemispheres. Our review will cover the brain regions involved in sign language processing (Figure 1), beginning from the automatic modulatory activity in the cerebellum (“little brain”), through areas involved in sensory perception and information integration (including visual and spatial processing), to the regions responsible for the most abstract linguistic analysis and synthesis, including syntactic and phonological feature extraction.

Figure 1. Lateral view of the human brain, marking regions implicated in sign language processing.

3.1 Cerebellum1

Neural activity in the cerebellum is most often associated with motor control and coordination (Ivry & Justus 2001); however, right cerebellar activation is also frequently observed in language-related tasks. The right cerebellum has been suggested to regulate activity in the parts of the brain to which it is reciprocally connected, i.e., the left language-dominant dorsolateral and medial frontal areas (Marien et al. 2001), with the extent and strength of the cerebellar activation dependent on the difficulty of the linguistic task (Xiang et al. 2003). The proposal that the cerebellum participates in language processing is supported by reports that patients with cerebellar lesions show greater impairment on phonological than on semantic fluency tasks (Stoodley & Schmahmann 2009).

PET studies of ASL sign production in native signers (Corina et al. 2003; San Jose-Robertson et al. 2004) demonstrate that the production of verb signs elicits activation in the right cerebellar hemisphere and at the midline cerebellum. This right hemisphere laterality of activation was not affected by whether the signing was carried out by the dominant right or the non-dominant left hand (Corina et al. 2003), suggesting that the observed cerebellar activation is independent of the motor (articulatory) component of language production. Similar right-lateralized cerebellar activation has been observed in multiple studies of spoken language production (Ackermann, Mathiak & Riecker 2007; Bohland & Guenther 2006; Christoffels, Formisano & Schiller 2007; Riecker et al. 2008). Thus, the data from both sign and spoken languages appear to support the hypothesis that the cerebellum plays a role in modulating neural activity related to modality-independent linguistic representation.

3.2 Cerebral cortex2

3.2.1 Visual cortex

The visual cortex is a sensory processing area located in the occipital lobe, which consists of several specialized regions (V1-V5). It transmits motion and location information to the posterior parietal cortex, and information on object properties to the inferior temporal cortex (the so-called dorsal/ventral pathway dichotomy).

Sign language studies which have investigated functions of the primary visual cortex in signers have shown that basic visual processing of linguistic and non-linguistic stimuli in deaf participants does not differ from that of hearing populations. A PET study demonstrated similar activations of V1 and V5 areas in deaf signers and hearing non-signers during passive viewing of signs and linguistically organized non-signs, as compared to brain activation during visual fixation (Petitto et al. 2000). Similar results were obtained by Bavelier et al. (2001), who compared deaf signers, hearing native signers, and hearing non-signers' neural responses to motion and luminance tasks using fMRI; the V1 and V2 areas of the visual cortex were recruited similarly across populations. The participants in the Bavelier et al. study, however, differed in recruitment of the motion-selective area of the visual cortex MT+/V5. This area was more active in the left hemisphere in early signers (both deaf and hearing), suggesting that the establishment/development of such lateralization could be a function of age of sign language exposure. Additionally, only deaf signers showed enhanced recruitment of MT+/V5 during tasks requiring peripheral attention.

A PET study (San Jose-Robertson et al. 2004) also showed bilateral activation of MT and its immediate output area MST (medial superior temporal) during tasks that required phonological processing for sign language production. For example, producing noun signs after seeing them, as compared to passively viewing them, elicited this activation, suggesting that MT/MST is involved in visual self-monitoring (watching oneself signing) during sign production.

Overall, sign language experience appears to affect lateralization of activation in extrastriate (BA 18/19) as well as motion-sensitive (MT+/V5) areas of the occipital cortex, but not the neural functioning of the primary visual cortex. However, more studies are needed to establish to what extent the neural changes in MT+/V5 region are sensitive to the age of sign language exposure.

3.2.2 Parietal lobe: Posterior parietal cortex, supramarginal and angular gyri

The parietal lobe, positioned above the occipital and temporal lobes, is involved in the integration of neural inputs from different sensory modalities, such as visual and auditory. Inferior portions of the parietal lobe, including the supramarginal and angular gyri, have been especially implicated in the processing of space-related information (Amorapanth, Widick & Chatterjee 2009). Signers' extensive experience with the linguistic use of space has been argued to enhance recruitment of the right parietal cortex for processing linguistic representations of spatial relations (such as “X located next to Y”). Interestingly, native hearing signers have demonstrated recruitment of the right parietal cortices while describing relative positions of objects in spoken English (Emmorey et al. 2005), unlike non-signing English speakers, who mostly recruited the left parietal cortex (Damasio et al. 2001).

Another remarkable observation is that non-linguistic visual communicative stimuli, such as pantomime (Emmorey et al. 2009), elicit superior parietal lobe activation in hearing non-signers, but not deaf signers. These data suggest that in deaf signers, the parietal lobe may be specialized for processing of visuo-spatial information that can be parsed into a limited set of units, such as phonological features in ASL (i.e., handshape, place of articulation, orientation, movement, facial expressions).

The angular gyrus (BA 39) is a part of the parietal lobe located above the temporal lobe, posterior to the supramarginal gyrus. Enhanced activation of the right angular gyrus has been observed in bimodal bilinguals (native hearing signers) in response to ASL video stimuli, but not to written English stimuli (Newman et al. 2002). Since late learners of ASL in the study did not show comparable activation, it was suggested that the right angular gyrus might be sensitive to the age of sign language acquisition. Interpretation of the results was somewhat complicated by the presence of prosody (rhythm, stress, and intonation) in the ASL but not the English stimuli. Nonetheless, the findings generally agreed with an earlier ERP study (Neville et al. 1997), which demonstrated that deaf native signers reading English sentences had left-lateralized ERP patterns, but bilateral responses to sign-by-sign ASL glosses (videos of ASL signs) of the same sentences. As neither type of stimulus in the ERP study contained suprasegmental information, the results can be interpreted as demonstrating extended right hemisphere recruitment for spatial processing of ASL stimuli. Taken together, these results suggest that right hemisphere involvement in the processing of sign language might also be affected by the age of language acquisition. It is also possible that lack of right hemisphere involvement is a limiting factor for L2 mastery of sign languages.

The supramarginal gyrus (SMG, BA 40), located anterior to the angular gyrus in the parietal lobe, has been shown to play a role in integrating auditory and visual information for phoneme and syllable recognition in hearing populations (Bernstein et al. 2008), possibly facilitating creation of post-sensory, modality-independent representations of linguistic input (Noordzij et al. 2008). Sign language data from various neuroimaging studies support the notion that SMG performs a similar function in sign languages, creating abstract phonological representations from spatial properties of signs (Corina et al. 1999; Emmorey et al. 2003; Emmorey, Mehta & Grabowski 2007; MacSweeney et al. 2002, 2004, 2008b). A cortical stimulation mapping study, in which electrodes were placed on the surface of the brain to directly stimulate specific areas (Corina et al. 1999), showed that disruption of normal left SMG function induces phonological errors of sign placement. Further, a PET study of ASL demonstrated that producing phonologically complex signs, as compared to fingerspelling the English words translating those signs, led to higher activation of SMG (Emmorey et al. 2003). Such differences in SMG activation suggested that this region might encode the place of articulation, which is phonologically relevant for signs, but not for fingerspelling. Additionally, an fMRI study of British Sign Language (BSL) and the Tic Tac gestural system3 in deaf and hearing signers (MacSweeney et al. 2004) showed preferential activation of the left SMG only in deaf signers in response to BSL stimuli, pointing to a key role of SMG in processing sign language phonology. MacSweeney et al. (2008b) also showed that the superior portion of the left SMG supports the processing of phonological similarity judgments in BSL. In summary, involvement of the parietal lobe (including SMG and angular gyrus) in multisensory integration and spatial processing appears to be especially important for sign language processing, as it plays a key role in the generation of phonological representations from visual/spatial properties of signs.

3.2.3 Temporal lobe: Planum temporale, superior temporal sulcus, fusiform gyrus

The temporal lobe contains several language-processing regions, most prominently the primary auditory cortex and the auditory association cortex (Wernicke's area) in hearing individuals. However, multiple neuroimaging studies demonstrate that several areas of the temporal lobe (including the planum temporale (PT), the posterior section of the superior temporal sulcus (pSTS), and the fusiform gyrus) are also involved in processing both spoken and sign language, suggesting that the temporal cortex carries out modality-independent processing.

Several neuroimaging studies suggest that the PT, which is located in the posterior region of the superior temporal gyrus (BA 22, the center of Wernicke's area), carries out modality-independent processing of rapid temporal alternations in response to both auditory and visual stimuli. A PET study showed that the posterior portion of the STG (although not the PT specifically) is activated bilaterally in response to passive viewing of phonologically acceptable nonsense signs in ASL and LSQ (Sign Language of Quebec) signers, but not in sign-naïve hearing subjects (Petitto et al. 2000). Additionally, several fMRI studies have demonstrated that an active task (such as sign production or decision-making) ensures PT activation in both signing and non-signing participants, regardless of hearing status, in response to a variety of stimuli, including sign language, gestural systems, and lip-reading of spoken language (MacSweeney et al. 2004; Sadato et al. 2005; San Jose-Robertson et al. 2004). Thus, the left PT appears to be responsible for abstract, amodal processing of rapid temporal alternations.

The posterior portion of the superior temporal sulcus (pSTS) is a polymodal area which receives input from both auditory and visual cortices, and it has been strongly implicated in spoken language processing (Bornkessel et al. 2005; Shetreet et al. 2007; Thompson et al. 2007). Several neurolinguistic studies have identified a distributed network, including pSTS and the inferior and middle frontal gyri (BA 44/45/47 and BA 46), as involved in processing predicates (verbs and their noun arguments) in spoken languages (Bornkessel et al. 2005; Shetreet et al. 2007; Thompson et al. 2007), although no comparable data are available for sign languages at present. Exposing participants to non-linguistic stimuli, such as walking or running, demonstrates that pSTS is also sensitive to distinctive differences in biological motion (Grossman, Battelli & Pascual-Leone 2005). Thus, interpretation of pSTS involvement in sign language processing is complicated by the fact that the area is activated in response both to linguistic structures and to visual motion stimuli. For example, Bavelier et al. (2001) reported enhanced recruitment of this area (both in extent of activation and in percentage of signal change) in deaf participants, but not hearing signers, in response to a velocity-change judgment task (detection of a 70% transient increase in velocity lasting 1 second). The authors suggested that the sensitivity of pSTS to velocity in this study can be interpreted as an effect of hearing loss.

Alternatively, MacSweeney et al. (2004) reported greater pSTS activation in response to BSL vs. the Tic Tac gestural system in both deaf and hearing signers, as compared to hearing non-signers, suggesting that enhanced pSTS activation is due to its sensitivity to the larger phonetic repertoire of the BSL as compared to the Tic Tac manual code. Corroborating evidence for the effects of sign language experience on temporal lobe function comes from a study of grammatical facial expressions (non-manuals) in ASL (McCullough, Emmorey & Sereno 2005). The study showed left-lateralized STS recruitment in deaf signers in response to static stimuli depicting adverbial mouth gestures accompanied by manual verb signs, as compared to pictures of non-linguistic emotional facial expressions, which elicited bilateral STS activation. Such specialization for linguistic non-manuals, however, appears to be affected by hearing loss, because the same linguistic stimuli did not elicit strong STS lateralization in hearing signers (Emmorey & McCullough 2009). The difference between deaf and hearing signers in this case is likely due to the effect of neuroplasticity: as the temporal cortex is not specialized for auditory processing in deaf signers, it is recruited for other forms of linguistic processing of visual language.

The fusiform gyrus, another portion of the temporal lobe, contains two specialized areas pertinent for sign language processing. One is the fusiform face area (FFA), which is engaged in face and body recognition (Kanwisher, McDermott & Chun 1997). The other is the Visual Word Form Area (VWFA), which is activated by whole-word recognition for written language forms (Glezer, Jiang & Riesenhuber 2009).

A study by Waters et al. (2007) showed greater bilateral activation of VWFA in deaf signers in response to fingerspelled stimuli as compared to BSL lexical signs. The authors proposed that these results might reflect a general role of the fusiform gyrus in mapping between the perception of meaningful stimuli and their phonological or semantic representations. Corina et al. (2003) demonstrated activation of the left fusiform gyrus in deaf signers during verb production as compared to noun repetition, which also suggests that the fusiform gyrus contributes to the formulation of signs as well as their abstract well-formedness (obeying the constraints of the language).

Sign language experience also appears to affect the processing of face-related information in the fusiform gyrus. A study of non-manuals in ASL (McCullough et al. 2005) showed left-lateralized activation in deaf participants in response to static stimuli (pictures) containing linguistic (adverbial) non-manuals. In contrast, the same stimuli led to right-lateralized activation of the fusiform gyrus in hearing non-signers. A complementary investigation, which presented the same stimuli to bimodal bilinguals (hearing signers), found bilateral activation in response to linguistic non-manuals in a verb context, with no preferential activation of either hemisphere (Emmorey & McCullough 2009). Such differences could be interpreted as a function of sign language experience: since the face serves as a linguistic articulator in ASL, it might be preferentially processed in the left hemisphere in deaf signers to facilitate integration of non-manual sign components with other linguistic properties of the stimuli. If so, a similar adaptation would lead to bilateral activation of the fusiform gyrus in bimodal bilinguals. The authors suggested that the stronger left-biased lateralization for processing facial features in deaf as compared to hearing bilingual signers might be due to deaf people's training in speechreading. In comparison with the other studies we mention, which elicited right hemisphere recruitment, the stimuli used in this study were static photographs, which might have contributed to a lower degree of right hemisphere involvement in deaf signers. In general, then, recent neuroimaging data suggest that the processing carried out in the temporal lobe is more modality-independent than previously thought, and that exposure to sign language fine-tunes its function to facilitate processing of visual information in response to sign language input.

3.2.4 Inferior frontal gyrus

The brain region most consistently identified with language perception and production (including lexical-semantic, phonological, and syntactic processing), independent of stimulus modality (signing or speaking), is the inferior frontal gyrus in the left hemisphere (IFG, BA 44/45/47), together with its right hemisphere homologue (Corina et al. 2003; MacSweeney et al. 2002, 2004, 2008b; Neville et al. 1997, 1998; Petitto et al. 2000; San Jose-Robertson et al. 2004). Sign language research has also demonstrated that the preferential left lateralization of IFG activation in language production tasks is independent of articulatory load, i.e., of whether the dominant right or the non-dominant left hand is used for signing (Corina et al. 2003). This indicates that left-biased IFG activation is related to language itself and is not merely a consequence of which hand is moving.

Another study compared sentence comprehension and detection of non-words contained in sentences in deaf Japanese Sign Language (JSL) signers, monolingual Japanese speakers, and bimodal bilinguals (Sakai et al. 2005). This study also demonstrated that recruitment of areas BA 44/45 in IFG for syntactic processing was independent of language modality (spoken or signed). Thus, the IFG appears to be involved in the most abstract cognitive tasks required for language processing. Further corroborative evidence for the abstract domain-independent function of IFG comes from a study of artificial grammar acquisition based on written language and non-linguistic symbols by (non-signing) native speakers of Italian (Tettamanti et al. 2009). The study showed that both symbolic and linguistic stimuli constructed using natural language-like syntax (i.e. string composition rules based on hierarchical complexity) activated BA 44 of the left IFG. At the same time, stimuli constructed on the basis of artificial non-language-like syntax (fixed positions of string components) did not activate this brain area. Results also showed that activation of BA 44 during the task was accompanied by improved performance on a grammaticality detection task, demonstrating specialization of the left IFG for processing structures closely resembling natural languages.
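To make the contrast concrete, the sketch below generates strings under two toy rules: a hierarchical rule that recursively nests matched elements (so that well-formedness depends on structure), and a non-hierarchical rule that simply fixes one element at an absolute position in the string. This is only a schematic analogue of the hierarchical vs. fixed-position manipulation described above; the syllables, rule details, and implementation are our own illustrative assumptions, not the materials of Tettamanti et al. (2009).

import random

# Schematic analogue (not the actual stimuli of Tettamanti et al. 2009) of a
# hierarchical, natural-language-like rule vs. a rigid fixed-position rule.
random.seed(0)
SYLLABLES = ["ba", "do", "ke", "mu"]

def hierarchical_string(depth: int) -> list[str]:
    """Recursively nest matched pairs: each opener is closed in reverse order."""
    if depth == 0:
        return []
    syl = random.choice(SYLLABLES)
    return [syl + "+"] + hierarchical_string(depth - 1) + [syl + "-"]

def fixed_position_string(length: int, marker_slot: int = 2) -> list[str]:
    """Place a marker element at a fixed absolute position, regardless of structure."""
    items = [random.choice(SYLLABLES) for _ in range(length)]
    items[marker_slot] = "MARK"
    return items

print("hierarchical   :", " ".join(hierarchical_string(3)))
print("fixed position :", " ".join(fixed_position_string(6)))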

However, recruitment of this region for language processing appears to be sensitive to the age of language acquisition. A comparison of IFG activation in native and late learners of BSL and English (MacSweeney et al. 2008b) showed greater recruitment of this area during a “location alliteration” (phonological similarity) task in late learners of BSL as compared to native signers. Although both native and non-native signers performed equally well on behavioral measures of accuracy and response time in this study, the greater activation of IFG in non-native signers probably reflects the higher processing load that the task imposed on this group.

In summary, the neural networks in the IFG, especially in the left hemisphere, support abstract language processing independent of input modality. Specialization of these networks for sign language processing appears to be dependent on exposure to complex visual linguistic input. Thus, current neuroimaging data are consistent with the claim put forth by behavioral research that linguistic proficiency (in signed or spoken language) is sensitive to the age of language acquisition.

4. Neural adaptations to processing linguistic input in a visual modality

The available neuroimaging data demonstrate a distinction between the effects of early hearing loss and those of early sign language acquisition on neural circuitry. Early hearing loss leads to recruitment of the temporal cortex for non-auditory processing tasks, as well as specialization of multimodal integration areas in the parietal lobe for the processing of visual input. Early acquisition of sign language, on the other hand, induces multiple neural changes which allow the extraction of linguistic information from complex visual input. The need to adapt to an extended use of space for encoding abstract linguistic units, such as phonological units (e.g. meaningful distinctions in handshape, place of formation, hand orientation, and movement), appears to affect multimodal integration centers in the inferior parietal lobe, including the SMG and angular gyrus (Corina et al. 1999; Emmorey et al. 2003; Emmorey et al. 2007; MacSweeney et al. 2002, 2004, 2008b). Some studies also suggest that the spatial processing requirements of sign languages lead to enhanced right hemisphere activation in signers (Emmorey et al. 2005; Neville et al. 1997; Newman et al. 2002). Because the articulators (hands, face, body) are all visible at the same time in sign languages, this parallel delivery of information leads to changes in the brain regions which facilitate processing of visual motion, such as MT+/V5 (Bavelier et al. 2001; San Jose-Robertson et al. 2004). Activity in these areas appears lateralized to the left hemisphere in signers, possibly in order to facilitate processing of visual input by other specialized language regions of the left hemisphere. Finally, brain regions responsible for multimodal information integration in language perception and production (such as the STG and pSTS, which are involved in phonological processing, or the anterior cingulate, which is associated with verbal working memory encoding) demonstrate adaptation to visual sources of linguistic information in both deaf and hearing native signers (Bavelier et al. 2001; MacSweeney et al. 2004; McCullough et al. 2005; San Jose-Robertson et al. 2004). Recruitment of these areas for linguistic processing appears to depend on early exposure to complex linguistic stimuli, regardless of modality.

Early exposure to language, irrespective of modality, is also crucial for the development of Broca's area for language processing (MacSweeney et al. 2002, 2008b). Multiple neuroimaging studies confirm that the regions of the IFG supporting linguistic tasks, including phonological, semantic, and syntactic processing, are not affected by the modality of linguistic input (Corina et al. 2003; MacSweeney et al. 2002, 2008a, b; Neville et al. 1998; Petitto et al. 2000). Additional evidence that the sensitive periods governing sign and spoken language acquisition are similar comes from ERP studies of early and late acquirers of spoken and sign languages as L2 (Neville et al. 1997; Weber-Fox & Neville 2001). These studies showed that online language processing in both signers and speakers is strongly affected by the timing of language acquisition, as evidenced by changes in evoked response potentials during linguistic tasks.

5. Conclusion

The neural changes engendered by early exposure to complex linguistic stimuli are the basis for successful first and second language acquisition. Extensive data demonstrating functional adaptations of the human brain to auditory or visual linguistic stimuli clearly show that the benefits of early language exposure do not depend on the modality of stimuli. Although neuroimaging studies show that the modality of the native language does lead to some differences in specialization of brain regions, they also demonstrate that availability of complex linguistic input in either visual or auditory modality early in life is correlated with engagement of specialized neural networks for linguistic tasks. This research supports the conclusions made on the basis of behavioral studies that linguistic proficiency in both first and second languages is determined by the age of first language acquisition but not its modality.

Acknowledgments

This work was supported in part by the National Institutes of Health grant DC00524 and the National Science Foundation Linguistics Program grant #0345314.

Footnotes

1. Illustrations of the cerebellum from multiple sources can be found at http://en.wikipedia.org/wiki/Cerebellum

2. Illustrations of cortical regions discussed in this paper can be found at http://en.wikipedia.org/wiki/Cerebral_cortex

3. Tic Tac is a gestural system used by UK bookmakers to communicate the odds on certain racehorses in real time.

References

1. Ackermann Hermann, Mathiak Klaus, Riecker Axel. The contribution of the cerebellum to speech production and speech perception: clinical and functional imaging data. Cerebellum. 2007;6(3):202–213. doi: 10.1080/14734220701266742.
2. Amorapanth Prin X, Widick Page, Chatterjee Anjan. The neural basis for spatial relations. Journal of Cognitive Neuroscience. 2009;22(8):1739–1753. doi: 10.1162/jocn.2009.21322.
3. Bavelier Daphne, Brozinsky Craig, Tomann Andrea, Mitchell Teresa, Neville Helen, Liu Guoying. Impact of early deafness and early exposure to sign language on the cerebral organization for motion processing. Journal of Neuroscience. 2001;21(22):8931–8942. doi: 10.1523/JNEUROSCI.21-22-08931.2001.
4. Bernstein Lynne E, Auer Edward T, Jr, Wagner Michael, Ponton Curtis W. Spatiotemporal dynamics of audiovisual speech processing. Neuroimage. 2008;39(1):423–435. doi: 10.1016/j.neuroimage.2007.08.035.
5. Bohland Jason W, Guenther Frank H. An fMRI investigation of syllable sequence production. Neuroimage. 2006;32(2):821–841. doi: 10.1016/j.neuroimage.2006.04.173.
6. Bornkessel Ina, Zysset Stefan, Friederici Angela D, von Cramon D Yves, Schlesewsky Matthias. Who did what to whom? The neural basis of argument hierarchies during language comprehension. Neuroimage. 2005;26(1):221–233. doi: 10.1016/j.neuroimage.2005.01.032.
7. Buckner Randy L, Andrews-Hanna Jessica R, Schacter Daniel L. The brain's default network: anatomy, function, and relevance to disease. Annals of the New York Academy of Sciences. 2008;1124:1–38. doi: 10.1196/annals.1440.011.
8. Campbell Ruth, MacSweeney Mairéad, Waters Dafydd. Sign language and the brain: a review. Journal of Deaf Studies and Deaf Education. 2008;13(1):3–20. doi: 10.1093/deafed/enm035.
9. Charrow Veda R, Fletcher J Dexter. English as the second language of deaf children. Developmental Psychology. 1974;10:463–470.
10. Christoffels Ingrid K, Formisano Elia, Schiller Niels O. Neural correlates of verbal feedback processing: an fMRI study employing overt speech. Human Brain Mapping. 2007;28(9):868–879. doi: 10.1002/hbm.20315.
11. Corina David P, Jose-Robertson Lucila S, Guillemin Andre, High Julia, Braun Allen R. Language lateralization in a bimanual language. Journal of Cognitive Neuroscience. 2003;15(5):718–730. doi: 10.1162/089892903322307438.
12. Corina David P, McBurney Susan L, Dodrill Carl, Hinshaw Kevin, Brinkley Jim, Ojemann George. Functional roles of Broca's area and SMG: Evidence from cortical stimulation mapping in a deaf signer. Neuroimage. 1999;10(5):570–581. doi: 10.1006/nimg.1999.0499.
13. Damasio Hanna, Grabowski Thomas J, Tranel Daniel, Ponto Laura L, Hichwa Richard D, Damasio Antonio R. Neural correlates of naming actions and of naming spatial relations. Neuroimage. 2001;13(6, Pt 1):1053–1064. doi: 10.1006/nimg.2001.0775.
14. Emmorey Karen, Grabowski Thomas, McCullough Stephen, Damasio Hanna, Bellugi Ursula. Neural systems underlying lexical retrieval for sign language. Neuropsychologia. 2003;41(1):85–95. doi: 10.1016/s0028-3932(02)00089-1.
15. Emmorey Karen, Grabowski Thomas, McCullough Stephen, Ponto Laura L, Hichwa Richard D, Damasio Hanna. The neural correlates of spatial language in English and American Sign Language: a PET study with hearing bilinguals. Neuroimage. 2005;24(3):832–840. doi: 10.1016/j.neuroimage.2004.10.008.
16. Emmorey Karen, McCullough Stephen. The bimodal bilingual brain: effects of sign language experience. Brain and Language. 2009;109(2–3):124–132. doi: 10.1016/j.bandl.2008.03.005.
17. Emmorey Karen, Mehta Sonya, Grabowski Thomas J. The neural correlates of sign versus word production. Neuroimage. 2007;36(1):202–208. doi: 10.1016/j.neuroimage.2007.02.040.
18. Emmorey Karen, Xu Jiang, Gannon Patrick, Goldin-Meadow Susan, Braun Allen. CNS activation and regional connectivity during pantomime observation: No engagement of the mirror neuron system for deaf signers. Neuroimage. 2009;49(1):994–1005. doi: 10.1016/j.neuroimage.2009.08.001.
19. Glezer Laurie S, Jiang Xiong, Riesenhuber Maximilian. Evidence for highly selective neuronal tuning to whole words in the "visual word form area". Neuron. 2009;62(2):199–204. doi: 10.1016/j.neuron.2009.03.017.
20. Grossman Emily D, Battelli Lorella, Pascual-Leone Alvaro. Repetitive TMS over posterior STS disrupts perception of biological motion. Vision Research. 2005;45(22):2847–2853. doi: 10.1016/j.visres.2005.05.027.
21. Hoffmeister Robert, de Villiers Peter, Engen Elizabeth, Tolol Deborah. English reading achievement and ASL skills in deaf students. Proceedings of the 21st Annual Boston University Conference on Language Development; Brookline, MA: Cascadilla Press; 1998.
22. Ivry Richard B, Justus Timothy C. A neural instantiation of the motor theory of speech perception. Trends in Neurosciences. 2001;24(9):513–515. doi: 10.1016/s0166-2236(00)01897-x.
23. Just Marcel Adam, Carpenter Patricia A, Keller Timothy A, Eddy William F, Thulborn Keith R. Brain activation modulated by sentence comprehension. Science. 1996;274:114–116. doi: 10.1126/science.274.5284.114.
24. Kanwisher Nancy, McDermott Josh, Chun Marvin M. The fusiform face area: a module in human extrastriate cortex specialized for face perception. Journal of Neuroscience. 1997;17(11):4302–4311. doi: 10.1523/JNEUROSCI.17-11-04302.1997.
25. Krentz Ursula C, Corina David P. Preference for language in early infancy: the human language bias is not speech specific. Developmental Science. 2008;11:1–9. doi: 10.1111/j.1467-7687.2007.00652.x.
26. MacSweeney Mairéad, Campbell Ruth, Woll Bencie, Giampietro Vincent, David Anthony S, McGuire Philip K, Calvert Gemma A, Brammer Michael J. Dissociating linguistic and nonlinguistic gestural communication in the brain. Neuroimage. 2004;22(4):1605–1618. doi: 10.1016/j.neuroimage.2004.03.015.
27. MacSweeney Mairéad, Capek Cheryl M, Campbell Ruth, Woll Bencie. The signing brain: the neurobiology of sign language. Trends in Cognitive Sciences. 2008a;12(11):432–440. doi: 10.1016/j.tics.2008.07.010.
28. MacSweeney Mairéad, Waters Dafydd, Brammer Michael J, Woll Bencie, Goswami Usha. Phonological processing in deaf signers and the impact of age of first language acquisition. Neuroimage. 2008b;40(3):1369–1379. doi: 10.1016/j.neuroimage.2007.12.047.
29. MacSweeney Mairéad, Woll Bencie, Campbell Ruth, McGuire Philip K, David Anthony S, Williams Steven C, Suckling John, Calvert Gemma A, Brammer Michael J. Neural systems underlying British Sign Language and audio-visual English processing in native users. Brain. 2002;125(Pt 7):1583–1593. doi: 10.1093/brain/awf153.
30. Marien Peter, Engelborghs Sebastiaan, Fabbro Franco, De Deyn Peter P. The lateralized linguistic cerebellum: a review and a new hypothesis. Brain and Language. 2001;79(3):580–600. doi: 10.1006/brln.2001.2569.
31. Mayberry Rachel I. When timing is everything: Age of first-language acquisition effects on second-language learning. Applied Psycholinguistics. 2007;28:537–549.
32. Mayberry Rachel I. Early language acquisition and adult language ability: What sign language reveals about the critical period for language. In: Marschark M, Spencer P, editors. The Oxford handbook of Deaf studies, language, and education. Vol. 2. Oxford: Oxford University Press; 2009. pp. 281–291.
33. Mayberry Rachel I, Lock Elizabeth. Age constraints on first versus second language acquisition: evidence for linguistic plasticity and epigenesis. Brain and Language. 2003;87(3):369–384. doi: 10.1016/s0093-934x(03)00137-8.
34. Mayberry Rachel I, Witcher Pamela. What age of acquisition effects reveal about the nature of phonological processing. CRL Technical Report. San Diego, CA: University of California; 2005. http://crl.ucsd.edu/newsletter/17-3/TechReports/17-3.pdf (retrieved 13/10/2010).
35. McCullough Stephen, Emmorey Karen, Sereno Martin. Neural organization for recognition of grammatical and emotional facial expressions in deaf ASL signers and hearing nonsigners. Cognitive Brain Research. 2005;22(2):193–203. doi: 10.1016/j.cogbrainres.2004.08.012.
36. Neville Helen J, Bavelier Daphne, Corina David, Rauschecker Josef, Karni Avi, Lalwani Anil, Braun Allan, Clark Vince, Jezzard Peter, Turner Robert. Cerebral organization for language in deaf and hearing subjects: Biological constraints and effects of experience. Proceedings of the National Academy of Sciences of the United States of America. 1998;95(3):922–929. doi: 10.1073/pnas.95.3.922.
37. Neville Helen J, Coffey Sharon A, Lawson Donald S, Fischer Andrew, Emmorey Karen, Bellugi Ursula. Neural systems mediating American Sign Language: effects of sensory experience and age of acquisition. Brain and Language. 1997;57(3):285–308. doi: 10.1006/brln.1997.1739.
38. Newman Aaron J, Bavelier Daphne, Corina David, Jezzard Peter, Neville Helen J. A critical period for right hemisphere recruitment in American Sign Language processing. Nature Neuroscience. 2002;5(1):76–80. doi: 10.1038/nn775.
39. Noordzij Matthijs L, Neggers Sebastiaan F, Ramsey Nick F, Postma Albert. Neural correlates of locative prepositions. Neuropsychologia. 2008;46(5):1576–1580. doi: 10.1016/j.neuropsychologia.2007.12.022.
40. Padden Carol, Ramsey Claire. Reading ability in signing deaf children. Topics in Language Disorders. 1998;18(4):30–46.
41. Petitto Laura A, Zatorre Robert J, Gauna Kristine, Nikelski EJ, Dostie Deanna, Evans Alan C. Speech-like cerebral activity in profoundly deaf people processing signed languages: implications for the neural basis of human language. Proceedings of the National Academy of Sciences of the United States of America. 2000;97(25):13961–13966. doi: 10.1073/pnas.97.25.13961.
42. Prinz Philip M, Strong Michael. ASL proficiency and English literacy within a bilingual deaf education model of instruction. Topics in Language Disorders. 1998;18(4):47–60.
43. Riecker Axel, Brendel Bettina, Ziegler Wolfram, Erb Michael, Ackermann Hermann. The influence of syllable onset complexity and syllable frequency on speech motor control. Brain and Language. 2008;107(2):102–113. doi: 10.1016/j.bandl.2008.01.008.
44. Sadato Norihiro, Okada Tomohisa, Honda Manabu, Matsuki Ken-Ichi, Yoshida Masaki, Kashikura Ken-Ichi, Takei Wataru, Sato Tetsuhiro, Kochiyama Takanori, Yonekura Yoshiharu. Cross-modal integration and plastic changes revealed by lip movement, random-dot motion and sign languages in the hearing and deaf. Cerebral Cortex. 2005;15(8):1113–1122. doi: 10.1093/cercor/bhh210.
45. Sakai Kuniyoshi L, Tatsuno Yoshinori, Suzuki Kei, Kimura Harumi, Ichida Yashiro. Sign and speech: amodal commonality in left hemisphere dominance for comprehension of sentences. Brain. 2005;128(6):1407–1417. doi: 10.1093/brain/awh465.
46. San Jose-Robertson Lucila, Corina David P, Ackerman Debra, Guillemin Andre, Braun Allen R. Neural systems for sign language production: mechanisms supporting lexical selection, phonological encoding, and articulation. Human Brain Mapping. 2004;23(3):156–167. doi: 10.1002/hbm.20054.
47. Shetreet Einat, Palti Dafna, Friedmann Naama, Hadar Uri. Cortical representation of verb processing in sentence comprehension: number of complements, subcategorization, and thematic frames. Cerebral Cortex. 2007;17(8):1958–1969. doi: 10.1093/cercor/bhl105.
48. Singleton Jenny L, Morgan Dianne, DiGello Elizabeth, Wiles Jill, Rivers Rachel. Vocabulary use by low, moderate, and high ASL-proficient writers compared to hearing ESL and monolingual speakers. Journal of Deaf Studies and Deaf Education. 2004;9(1):86–103. doi: 10.1093/deafed/enh011.
49. Stoodley Catherine J, Schmahmann Jeremy D. The cerebellum and language: Evidence from patients with cerebellar degeneration. Brain and Language. 2009;110(3):149–153. doi: 10.1016/j.bandl.2009.07.006.
50. Strong Michael, Prinz Philip M. A study of the relationship between ASL and English literacy. Journal of Deaf Studies and Deaf Education. 1997;2:37–46. doi: 10.1093/oxfordjournals.deafed.a014308.
51. Tettamanti Marco, Rotondi Irene, Perani Daniela, Scotti Giuseppe, Fazio Ferrucio, Cappa Stefano F, Moro Andrea. Syntax without language: neurobiological evidence for cross-domain syntactic computations. Cortex. 2009;45(7):825–838. doi: 10.1016/j.cortex.2008.11.014.
52. Thompson Cynthia K, Bonakdarpour Borna, Fix Stephen C, Blumenfeld Henrike K, Parrish Todd B, Gitelman Darren R, Mesulam M Marsel. Neural correlates of verb argument structure processing. Journal of Cognitive Neuroscience. 2007;19(11):1753–1767. doi: 10.1162/jocn.2007.19.11.1753.
53. Waters Dafydd, Campbell Ruth, Capek Cheryl M, Woll Bencie, David Anthony S, McGuire Philip K, Brammer Michael J, MacSweeney Mairéad. Fingerspelling, signed language, text and picture processing in deaf native signers: The role of the mid-fusiform gyrus. Neuroimage. 2007;35:1287–1302. doi: 10.1016/j.neuroimage.2007.01.025.
54. Weber-Fox Christine, Neville Helen J. Sensitive periods differentiate processing of open- and closed-class words: An ERP study of bilinguals. Journal of Speech, Language, and Hearing Research. 2001;44(6):1338–1353. doi: 10.1044/1092-4388(2001/104).
55. Wilbur Ronnie B. The use of ASL to support the development of English and literacy. Journal of Deaf Studies and Deaf Education. 2000;5(1):81–104. doi: 10.1093/deafed/5.1.81.
56. Wilbur Ronnie B. Success with deaf children: How to prevent educational failure. In: Lindgren KA, DeLuca D, Napoli DJ, editors. Signs and voices: Deaf culture, identity, language, and arts. Washington, DC: Gallaudet University Press; 2008. pp. 117–138.
57. Xiang Huadong, Lin Chongyu, Ma Xiaohai, Zhang Zhaoqi, Bower James M, Weng Xuchu, Gao Jia-Hong. Involvement of the cerebellum in semantic discrimination: an fMRI study. Human Brain Mapping. 2003;18(3):208–214. doi: 10.1002/hbm.10095.
