Proceedings of the National Academy of Sciences of the United States of America. 2000 Dec 5;97(25):13476–13477. doi: 10.1073/pnas.97.25.13476

Language-related cortex in deaf individuals: Functional specialization for language or perceptual plasticity?

David Caplan
PMCID: PMC34084  PMID: 11106391

In this issue of PNAS, Petitto et al. (1) report a positron emission tomography (PET) study of the regions of the brain that increase their regional cerebral blood flow when individuals process signed language. The essence of their results is that, in 11 profoundly deaf subjects, regional cerebral blood flow responses to a series of tasks involving signed language occurred in the same areas of the brain in which responses to similar tasks occur in hearing subjects processing spoken language. These brain regions are the auditory association cortex adjacent to the primary auditory koniocortex and the left inferior frontal cortex. Traditional views of the functional role of these areas, going back over 125 years and still widely accepted, maintain that the first of these areas is involved in speech perception and the second in planning speech production. The results reported by Petitto et al. support a quite different view of these brain regions: one that sees them as being involved in processing language, regardless of the modality in which it is presented.

The results of Petitto et al. are not totally unexpected. Both studies of the effects of stroke and previous studies using functional neuroimaging have shown considerable overlap between regions of the brain involved in processing spoken and signed language. The pioneering studies of congenitally deaf stroke victims carried out by Bellugi and her colleagues (2, 3) showed that left but not right hemisphere strokes in the perisylvian area produced aphasia in these patients, whereas right-sided lesions produced nonlinguistic visuospatial deficits. This, of course, is the usual pattern of effects of lesions seen in hearing individuals using spoken language. A particularly convincing finding was that deaf patients with left hemisphere strokes could not use space to establish the linguistic function of coreference (relating the signed equivalent of a pronoun to a noun) but could use space for other manual tasks. Activation studies by Neville and her colleagues using functional magnetic resonance imaging (fMRI) showed that both deaf and hearing native signers activated left hemisphere “language” areas when viewing sentences in American Sign Language (ASL) (4).

However, Petitto and her colleagues point out several limitations in these previous studies. The strokes in the patients studied by Bellugi and her colleagues tended to be large, and the language assessments were clinically oriented, making it unclear exactly what part of the lesions affected specific aspects of signed language processing. The tasks set by Neville and others often did not require overt responses, raising questions about how subjects processed the presented materials. Many of these tasks contrasted processing sentences with processing much more elementary linguistic entities (e.g., consonant strings in the case of written language and meaningless gestures in the case of signed language), and hemodynamic changes across conditions therefore reflected many different types of linguistic and nonlinguistic operations. It is possible that, although viewing ASL and written English sentences led to similar areas of activation, different operations activated different brain regions in deaf and hearing subjects in these studies. In addition to these methodological concerns, Petitto et al. point to contradictory results in the literature. Soderfeldt and his colleagues (5), for instance, found that bilingual Swedish-signed language native speakers activated perisylvian temporal cortex when listening to spoken stories and inferior temporo-occipital cortex when viewing signed stories.

In contrast to some previous work, the study by Petitto and her colleagues is more narrowly conceived and executed, focusing on the sublexical and lexical levels of language. Five conditions are contrasted: visual fixation; passive viewing of carefully selected signed language units that were themselves meaningless (the signed language equivalent of spoken language syllables); passive viewing of signed words; repetition (imitation) of signed words; and "verb generation," the production of a verb in response to a presented noun. These tasks were performed by five native signers of ASL and six native signers of Langue des Signes Québécoise, as well as 10 hearing, nonsigning controls (who were shown written words in the verb generation tasks). The authors emphasize two results: (i) the subtraction of fixation from verb generation activated the same parts of the left inferior frontal lobe in the deaf and hearing subjects, and (ii) the subtraction of fixation from the passive viewing, repetition, and verb generation conditions activated the superior temporal gyrus in the deaf subjects, whereas identical passive viewing and repetition tasks did not activate this region in hearing subjects.
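To make the subtraction logic concrete, here is a minimal sketch, in Python on synthetic data, of the kind of voxelwise "condition minus baseline" contrast that underlies such comparisons. The subject and voxel counts, the paired t-test, and the uncorrected threshold are simplifying assumptions for illustration, not the analysis pipeline Petitto et al. actually used.

```python
# A minimal sketch of cognitive subtraction in a PET-style analysis.
# Synthetic data; shapes, the paired t-test, and the threshold are
# illustrative assumptions, not the analysis used by Petitto et al.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n_subjects, n_voxels = 11, 10_000        # e.g., 11 deaf subjects

# Normalized rCBF images per subject for two conditions (simulated).
fixation = rng.normal(50.0, 5.0, size=(n_subjects, n_voxels))
verb_generation = fixation + rng.normal(0.0, 2.0, size=(n_subjects, n_voxels))
verb_generation[:, :100] += 4.0          # pretend 100 voxels truly activate

# Voxelwise paired t-test: verb generation minus fixation.
t_map, p_map = ttest_rel(verb_generation, fixation, axis=0)
active = p_map < 0.001                   # naive, uncorrected threshold
print(f"{active.sum()} voxels exceed the threshold")
```

Whatever survives such a subtraction reflects every process that differs between the two conditions, which is precisely the interpretive problem taken up below.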

What can we conclude from these results? Petitto and her colleagues frame their study in the context of the lateralization of language, pointing out that we do not know why language is lateralized. Unfortunately, in the deaf subjects the temporal activation was not lateralized but bilateral, as was the inferior frontal activation when the repetition condition was subtracted from the verb generation condition. Their results therefore shed little light on this issue.

The results do speak more directly to the question of the functional neuroanatomy of two brain regions involved in the perception and production of spoken language. The first region is the left inferior frontal cortex. The authors argue that their results show that this region is involved in "searching [for] and retrieving the meanings of words." However, the results obtained with the verb generation task are ambiguous. The authors subtracted fixation from verb generation, so this activation could have resulted from any or all of the operations involved in perceiving a word, understanding it, retrieving an associate, and pronouncing the associate. To complicate interpretation further, many of these operations are not even purely linguistic, such as switching from one category (a noun) to another (a verb), or verifying that the response is from the appropriate category. The authors do report a more focused subtraction of repetition from verb generation, but more research is needed to fully understand the many possible functional steps that separate these two tasks.
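This decomposition argument can be made concrete with a toy model: treat each task as a set of hypothesized component operations, so that subtracting conditions corresponds to a set difference. The operation inventory below is invented for illustration; every name in it is an assumption, not a validated task analysis.

```python
# A toy task decomposition showing why "verb generation minus fixation"
# is ambiguous. The operation names are illustrative assumptions only.
fixation = {"maintain_gaze"}
repetition = {"maintain_gaze", "perceive_word", "retrieve_form", "articulate"}
verb_generation = {
    "maintain_gaze", "perceive_word", "retrieve_form", "retrieve_meaning",
    "search_associates", "switch_category_noun_to_verb", "verify_category",
    "articulate",
}

# Subtracting fixation leaves nearly every operation in the residue,
# so the resulting activation cannot be pinned to any one of them.
print(sorted(verb_generation - fixation))

# Subtracting repetition isolates a smaller, but still composite, residue.
print(sorted(verb_generation - repetition))
```

Even the tighter contrast leaves several candidate operations in the residue, which is why the functional steps separating the two tasks need further study.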

The second area is the planum temporale (PT). Petitto and her colleagues argue that "contrary to prevailing wisdom, the PT may not be exclusively dedicated to processing speech sounds." This much seems clear, but what then is the PT specialized for? One of the important features of this study is its narrow focus on the signed equivalent of sublexical and lexical processing, which should serve to narrow down the possible interpretations of the functional source of regional cerebral blood flow increases in this brain region.

One possibility that Petitto et al. consider is that the PT is specialized for processing "more abstract properties essential to language… [the] neural specialization for aspects of language patterning appears to be neurally unmodifiable." Another possibility they entertain is that it is specialized for certain aspects of high-level perceptual processing: "it is entirely possible that complex visual stimuli per se could activate the temporal cortices in deaf people;" the properties of the signal that the PT responds to may be "specific distributions of complex low-level units in rapid temporal alternation." These are radically different notions of what the PT does. The first hypothesis maintains that the PT is specialized for representing abstract properties of language and that these properties are invariantly processed in this location. The second maintains that the PT is specialized to respond to certain temporal patterns of perceptual elements and that it shows plasticity, responding to these patterns in the auditory modality in hearing subjects and in the visual modality in deaf subjects. On this view, the PT responds to language because language happens to contain such patterns.

Evidence relevant to adjudicating between these different views could come from two sources. One is a thorough exploration of the properties of visually presented stimuli that activate the PT in deaf compared with hearing subjects. If the perceptual view is correct, a variety of visual stimuli, not all of them obviously related to the structure of language, will activate the PT in deaf subjects. Existing work (6, 7) has just begun the study of the responses of this region to such stimuli; the study of Petitto et al. should serve as a catalyst for more work in this area. The second approach is to study native bilingual speakers/signers (usually hearing offspring of deaf parents). If the view that the PT houses language is correct, this region should be activated by signed language in these subjects; if the perceptual view is correct, it should be activated neither by signed language nor by visually presented temporally complex patterns. As noted above, the literature on the responses of this region to signed language is contradictory and methodologically immature in a number of ways.
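The divergence between the two views can be stated as a pair of toy prediction rules. The sketch below is a schematic of the logic only; the feature coding and both rules are assumptions made for illustration, not a model of the PT. Its point is that the two hypotheses agree for deaf signers viewing signed language but disagree for hearing native signers, which is what makes that population the critical test.

```python
# Schematic predictions of the two hypotheses about the planum temporale.
# Feature coding and rules are illustrative assumptions, not a fitted model.
def pt_active_language_view(stimulus):
    # "Language" view: the PT responds to abstract linguistic structure,
    # irrespective of modality or of the perceiver's hearing status.
    return stimulus["is_linguistic"]

def pt_active_perceptual_view(stimulus, dominant_modality):
    # Perceptual view: the PT responds to rapidly alternating complex
    # units in whichever modality it has come to serve (plasticity).
    return (stimulus["rapid_complex_patterning"]
            and stimulus["modality"] == dominant_modality)

sign = {"is_linguistic": True, "modality": "visual",
        "rapid_complex_patterning": True}

# Deaf signer (PT serves vision): both views predict activation.
print(pt_active_language_view(sign), pt_active_perceptual_view(sign, "visual"))

# Hearing native signer (PT serves audition): the views diverge.
print(pt_active_language_view(sign), pt_active_perceptual_view(sign, "auditory"))
```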

The paper of Petitto et al. (1) reinforces the view that the functions carried out in what is widely thought of as auditory association cortex need to be reconsidered. The careful anatomical analyses and narrowly designed experimental contrasts used in this study leave little doubt that this brain region can respond to visually presented elementary linguistic stimuli in individuals deprived of auditory stimulation who use signed language. Whether this is because this region supports language or because it supports high-level visual temporal processing is a fundamental question about the neural basis of language that remains to be answered.

Acknowledgments

This work was supported by grant DC02146 from the National Institutes of Health.

Footnotes

See companion article on page 13961.

References

1. Petitto L A, Zatorre R J, Gauna K, Nikelski E J, Dostie D, Evans A C. Proc Natl Acad Sci USA. 2000;97:13961–13966. doi: 10.1073/pnas.97.25.13961.
2. Bellugi U, Poizner H, Klima E S. Trends Neurosci. 1989;12:380–388. doi: 10.1016/0166-2236(89)90076-3.
3. Hickok G, Bellugi U, Klima E S. Trends Cognit Sci. 1998;2:129–136. doi: 10.1016/s1364-6613(98)01154-1.
4. Neville H J, Bavelier D, Corina D, Rauschecker J, Karni A, Lalwani A, Braun A, Clark V, Jezzard P, Turner R. Proc Natl Acad Sci USA. 1998;95:922–929. doi: 10.1073/pnas.95.3.922.
5. Soderfeldt B, Ingvar M, Ronnberg J, Eriksson L, Serrander M, Stone-Elander S. Neurology. 1997;49:82–87. doi: 10.1212/wnl.49.1.82.
6. Neville H, Lawson D. Brain Res. 1987;405:268–283. doi: 10.1016/0006-8993(87)90296-4.
7. Hickok G, Poeppel D, Clark K, Buxton R B, Rowley H A, Roberts T P L. Hum Brain Mapp. 1997;5:437–444. doi: 10.1002/(SICI)1097-0193(1997)5:6<437::AID-HBM4>3.0.CO;2-4.

