Author manuscript; available in PMC 2007 Sep 20. Published in final edited form as: Neuroimage. 2007 Mar 6;36(1):202–208. doi: 10.1016/j.neuroimage.2007.02.040

The neural correlates of sign versus word production

Karen Emmorey 1, Sonya Mehta 2, Thomas J Grabowski 2
PMCID: PMC1987366  NIHMSID: NIHMS23018  PMID: 17407824

Abstract

The production of sign language involves two large articulators (the hands) moving through space and contacting the body. In contrast, speech production requires small movements of the tongue and vocal tract with no observable spatial contrasts. Nonetheless, both language types exhibit a sublexical layer of structure with similar properties (e.g., segments, syllables, feature hierarchies). To investigate which neural areas are involved in modality-independent language production and which are tied specifically to the input-output mechanisms of signed and spoken language, we reanalyzed PET data collected from 29 deaf signers and 64 hearing speakers who participated in a series of separate studies. Participants were asked to overtly name concrete objects from distinct semantic categories in either American Sign Language (ASL) or in English. The baseline task required participants to judge the orientation of unfamiliar faces (overtly responding ‘yes’/‘no’ for upright/inverted). A random effects analysis revealed that left mesial temporal cortex and the left inferior frontal gyrus were equally involved in both speech and sign production, suggesting a modality-independent role for these regions in lexical access. Within the left parietal lobe, two regions were more active for sign than for speech: the supramarginal gyrus (peak coordinates: −60, −35, +27) and the superior parietal lobule (peak coordinates: −26, −51, +54). Activation in these regions may be linked to modality-specific output parameters of sign language. Specifically, activation within left SMG may reflect aspects of phonological processing in ASL (e.g., selection of hand configuration and place of articulation features), whereas activation within SPL may reflect proprioceptive monitoring of motoric output.

Introduction

Speech production involves the rapid integration and sequencing of movements of the tongue, lips, velum, and vocal cords. In contrast, the production of signed language primarily involves movements of more massive and slower articulators, the hands and arms, within a much larger space (from the waist to the top of the head). Further, the perceptual targets for speech production are auditory, but visual for signing. Since phonology is traditionally characterized as the sound patterns of language and phonetic systems are described in terms of oral articulators and acoustic features, it is possible that phonological patterning only arises for oral-aural languages. However, linguistic research over the past three decades has demonstrated that signed languages do indeed exhibit a phonological level of structure with properties that parallel speech, such as segments, syllables, and feature hierarchies (for reviews see Brentari, 1998; Sandler & Lillo-Martin, 2006). Nonetheless, the articulatory-perceptual properties of signing have an impact upon phonological patterning with respect to sequentiality (more segment sequences are produced for speech than sign), the nature of syllable structure (signs tend to be monosyllabic), and the complexity of autosegmental elements (hand configuration is more complex than tone) (Sandler & Lillo-Martin, 2006). Given both the similarities and the differences between sign and speech, we investigated the extent to which the neural systems that control sign production overlap with those controlling spoken word production.

It has long been known that speech production is lateralized to the left hemisphere (e.g., Broca, 1861; Geschwind, 1970), and evidence from lesion and neuroimaging studies indicates that sign language production is also strongly left-lateralized (e.g., Corina, San Jose-Robertson, Guillemin, High, & Braun, 2003; Emmorey et al., 2003; McGuire et al., 1997; Poizner, Klima, & Bellugi, 1987). Furthermore, Broca’s area, a classic speech production region, is reliably engaged during sign language production (Emmorey et al., 2002; Emmorey et al., 2004; Horwitz et al., 2003; Petitto et al., 2000).

In a meta-analysis of neuroimaging studies, Indefrey and Levelt (2004) mapped out several regions involved in spoken word production. They argued that lexical selection is associated with the left middle temporal gyrus and that phonological code retrieval involves the right supplementary motor area (SMA), left anterior insula, and left posterior superior and middle temporal gyri (Wernicke’s area). Broca’s area (left posterior inferior frontal gyrus) was argued to be critically involved in syllabification processes during word production. Whether sign and word production both engage these neural regions to an equal extent is unknown. To investigate this question, we conducted a conjunction analysis with data from Positron Emission Tomography (PET) studies of sign production and word production that utilized the same picture-naming and standard baseline tasks.

Previously, Braun, Guillemin, Hosey, and Varga (2001) directly compared discourse production in sign and speech by asking hearing bilinguals to produce autobiographical narratives in either American Sign Language (ASL) or English, while undergoing PET scanning. Conjunction analyses using oral or limb motor baselines revealed considerable neural overlap for ASL and English production. Both languages engaged classical left hemisphere language areas, e.g., the inferior frontal gyrus and posterior superior temporal regions. In addition, both signed and spoken narrative production recruited additional left anterior regions (the anterior insula, lateral premotor cortex, and anterior SMA) and bilateral posterior brain regions (inferior parietal cortices, middle temporal gyri, and basal temporal areas). However, it is unclear which of these regions were recruited for modality-independent narrative or sentential processes and which regions were recruited for phonological or lexical processes that might be shared by sign and speech. A comparison of single word and single sign production will help tease apart which regions of overlapping activation are due to sentential versus lexical level processes.

In addition, a comparison between deaf native signers and hearing monolingual speakers is important because the neural activation for ASL-English bilingual language production does not always parallel monolingual production (Emmorey, Grabowski et al., 2005). For example, Emmorey et al. (2005) found that when ASL-English bilinguals produced English spatial prepositions, the pattern of neural activation differed from that of monolingual English speakers and was similar to that observed when deaf signers produced ASL spatial classifier constructions. Furthermore, it is possible that more regions of overlapping activation between English and ASL are observed for hearing bilinguals because both languages may always be “on” to some extent. For example, ASL-English bilinguals have been found to unintentionally produce ASL signs as co-speech gesture when conversing with monolingual English speakers (Emmorey, Borinstein, & Thompson, 2005). Many investigators have argued that the bilingual brain is not equal to two monolingual brains in one body (e.g., Grosjean, 1989; Hull & Vaid, 2005). Therefore, to investigate the extent of overlap between the neural systems that control speaking versus signing without the confound of bilingualism, it is necessary to directly compare word production by monolingual English speakers and sign production by Deaf signers for whom ASL is their primary language.

Finally, several studies of sign language production have observed activation in the left superior parietal lobule (SPL) (Corina et al., 2003; Emmorey et al., 2003; Petitto et al., 2000 [supplement tables]; San Jose-Robertson, Corina, Ackerman, Guillemin, & Braun, 2004). However, Indefrey and Levelt (2004) found that left SPL was not reliably activated during spoken word production. Furthermore, Braun et al. (2001) found that left SPL was significantly more activated during signing than during speaking. Braun et al. (2001) hypothesized that ASL production might uniquely engage left parietal cortex because the grammar of ASL relies on syntactic constructions that are “spatialized,” i.e., locations in signing space are used to express grammatical relations. If so, then we should not observe left SPL activation for single sign production. A comparison of single word and single sign production may help to clarify the role of the left superior parietal lobe in signing.

We performed a cross-cohort analysis of PET data generated by parallel neuroimaging experiments in deaf ASL signers and hearing English speakers to assess which neural regions are equally engaged for lexical production in ASL and English and which neural regions are differentially engaged for sign versus word production.

Methods

Participants

Twenty-nine deaf ASL signers participated in three separate PET studies (Emmorey et al., 2002; Emmorey et al., 2003; Emmorey et al., 2004). The participants were 14 men and 15 women, aged 20–38 years (mean = 25), with 12 or more years of formal education. All participants were right-handed and prelingually, profoundly deaf. All used ASL as their preferred and primary language, and none had any history of neurological or psychiatric disease.

Sixty-four monolingual English speakers participated in another series of PET studies (Damasio, Grabowski, Tranel, Hichwa, & Damasio, 1996; Damasio et al., 2001; Damasio, Tranel, Grabowski, Adolphs, & Damasio, 2004). The participants were 29 men and 35 women, aged 20–54 years (mean = 30), with 12 or more years of formal education. All participants were right-handed, had normal hearing, and had no history of neurological or psychiatric disease.

Procedures

Experimental Tasks

Participants were presented with a series of object pictures and were asked to overtly name each object. In the deaf group, 9 participants named photographs of animals (4m/5f), 10 named photographs of manipulable tools and utensils (5m/5f) and 10 named line-drawn concrete objects in various categories (5m/5f). In the hearing group, 29 participants named photographs of animals (14m/15f), 25 named photographs of manipulable tools and utensils (10m/15f) and 10 named line-drawn concrete objects in various categories (5m/5f). ASL signers produced signs with their right hand in a modified whisper mode so that the hand did not touch the face. English speakers produced each name aloud.

Analysis of the data across these experiments was facilitated by the fact that all of the studies employed the same standard sensorimotor baseline task, in which participants saw unfamiliar faces presented either upright or upside down and were asked to indicate YES (or “up”) for upright faces and NO (or “down”) for inverted faces. Participants thus made an overt response, but no naming was involved. In this cross-cohort analysis, the images generated during this “standard baseline” task served as an arbitrary activity standard; i.e., they were subtracted from all naming-task images in the study, deaf and hearing, to control for task-unrelated, subject-specific factors (e.g., anatomic differences remaining after coregistration). For the conjunction analysis, the baseline task also served as a control for basic sensorimotor aspects of the naming task. This approach assumes that the baseline task is equivalent across experimental contexts, something we could not test directly but that seems reasonable given that at least 15 minutes elapsed between tasks in the PET studies and that the task is easy and virtually automatic.
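To make the subtraction logic concrete, the sketch below forms one naming-minus-baseline contrast volume per participant (each task was performed twice, as described under Image Acquisition and Analysis). This is a minimal illustration, not the authors' tal_programs pipeline; the function name and array layout are assumptions.

```python
# Minimal sketch (not the authors' code) of the per-subject contrast:
# average the repeated runs of each task, then subtract the standard
# baseline from the naming image. Array names/shapes are illustrative.
import numpy as np

def contrast_image(naming_runs: np.ndarray, baseline_runs: np.ndarray) -> np.ndarray:
    """naming_runs, baseline_runs: (n_runs, x, y, z) smoothed PET volumes."""
    # One contrast volume per participant enters the random effects analysis.
    return naming_runs.mean(axis=0) - baseline_runs.mean(axis=0)
```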

Image Acquisition and Analysis

In each of the original experiments, PET data were acquired with a GE 4096B tomograph using the [15O]water method (Hichwa, Ponto, & Watkins, 1995). Reconstructed images of the distribution of radioactive counts from each injection were coregistered with each other using AIR (Woods, Mazziotta, & Cherry, 1993). PET and MR data were coregistered using PET-Brainvox fiducials (Damasio et al., 1994; Grabowski et al., 1995) and AIR (Woods et al., 1993). Talairach space was constructed directly for each participant via user identification of the anterior and posterior commissures and the midsagittal plane on the 3D MRI data set in Brainvox. An automated planar search routine defined the bounding box, and a piecewise linear transformation was applied (Frank, Damasio, & Grabowski, 1997), as defined in the Talairach atlas (Talairach & Tournoux, 1988). After Talairach transformation, the MR data sets were warped (AIR fifth-order nonlinear algorithm) to an atlas space constructed by averaging 50 normal Talairach-transformed brains, rewarping each brain to the average, and finally averaging them again, analogous to the procedure described in Woods et al. (1999). The PET data were warped to the atlas space using the AIR warping parameters generated from the registration of the structural MR images to the atlas space. The coregistered MR images were used to mask away extracerebral voxels from the PET images; the PET data were then smoothed with an isotropic 16 mm Gaussian kernel by Fourier transformation, complex multiplication, and reverse Fourier transformation. Participants performed each task (naming, standard baseline) twice, but these images were averaged, so that one contrast image per participant was entered into the random effects analysis (i.e., the dependent variable was the difference between the naming and standard baseline tasks). The statistical analysis was performed with tal_programs, a suite of modular, general-purpose, custom image-processing software that we have used for a number of PET and morphological imaging studies (Frank et al., 1997; Grabowski et al., 1996; Emmorey et al., 2003, 2004, 2005). The multiple regression module, tal_regress, is based on Gentleman's least-squares routines (Miller, 1991) and was cross-validated against SAS (Frank et al., 1997; Grabowski et al., 1996). Group membership (hearing, deaf) was the covariate of interest. Gender and task type (naming animals, naming tools, naming line-drawn objects) were included as covariates of no interest.
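The frequency-domain smoothing step can be illustrated compactly. The sketch below is a hypothetical implementation, not the authors' code: the 2 mm voxel size is an assumed example value, and scipy's fourier_gaussian (which expects the already-transformed volume and a sigma in voxels) stands in for the custom routine.

```python
# Sketch of 16 mm FWHM isotropic Gaussian smoothing via forward FFT,
# multiplication by the kernel's transform, and inverse FFT, as described
# above. Voxel size is an assumption for illustration.
import numpy as np
from scipy.ndimage import fourier_gaussian

def smooth_fft(volume: np.ndarray, fwhm_mm: float = 16.0, voxel_mm: float = 2.0) -> np.ndarray:
    # Convert FWHM in mm to a standard deviation in voxels.
    sigma_vox = (fwhm_mm / voxel_mm) / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    spectrum = np.fft.fftn(volume)                    # Fourier transformation
    spectrum = fourier_gaussian(spectrum, sigma_vox)  # complex multiplication by the kernel
    return np.fft.ifftn(spectrum).real                # reverse Fourier transformation
```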

Main effects and interactions were tested with t-tests and thresholded using random field theory (Worsley, 1994; Worsley, Evans, Marrett, & Neelin, 1992). The search volume was restricted a priori to the left frontal, temporal, and parietal lobes and the right parietal lobe. The critical t value (familywise error rate p < 0.05, corrected for multiple spatial comparisons over 25 resels) was 3.82. A conjunction analysis identified voxels that showed a significant contrast between the naming and standard baseline tasks in both subject groups (Nichols, Brett, Andersson, Wager, & Poline, 2005). We also performed an interaction analysis to identify voxels at which the contrast between the naming and standard baseline tasks differed significantly between the subject groups. The interaction image was also used to help interpret the conjunction analysis: voxels that showed a conjunction of effects across subject groups but no evidence of interaction at an uncorrected threshold (|t| > 1.67) were taken to demonstrate an equivalent degree of activation for signing and speaking (Price & Friston, 1997). Data were displayed on average MR images in Talairach space. Exploratory conjunction and interaction analyses were also performed over the entire scanned volume, using the more conservative threshold dictated by random field theory for the larger number of resels (203 resels, critical t(88) = 4.54).
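The voxel classification just described can be summarized in a short sketch. This illustrates the minimum-statistic conjunction (Nichols et al., 2005) and the interaction screen under the thresholds stated above; it is not the tal_programs implementation, and the input t maps are assumed to have been computed already.

```python
# Sketch of the conjunction, interaction, and "equivalent activation" logic.
import numpy as np

T_CRIT = 3.82          # RFT-corrected critical t (25 resels, p < 0.05)
T_UNCORRECTED = 1.67   # uncorrected screen used to rule out an interaction

def classify_voxels(t_deaf, t_hearing, t_diff):
    """t_deaf, t_hearing: per-group naming-vs-baseline t maps;
    t_diff: t map of the group difference (signing minus speaking)."""
    # Conjunction (Nichols et al., 2005): the minimum statistic must itself
    # exceed the corrected threshold, i.e. both groups activate the voxel.
    conjunction = np.minimum(t_deaf, t_hearing) > T_CRIT
    # Interaction: the group difference exceeds the corrected threshold.
    interaction = np.abs(t_diff) > T_CRIT
    # Equivalent activation: conjunction with no evidence of interaction
    # even at the lenient uncorrected threshold (Price & Friston, 1997).
    equivalent = conjunction & (np.abs(t_diff) <= T_UNCORRECTED)
    return conjunction, interaction, equivalent
```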

Finally, we also tested explicitly for an interaction between subject group (deaf signers, hearing speakers) and task type (naming animals, naming tools, naming line-drawn objects). No significant interactions were found, inside or outside the search volume.
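For readers who want the regression design spelled out, the sketch below assembles a design matrix of the kind described above, with group as the covariate of interest, gender and task type as covariates of no interest, and optional group-by-task columns for the interaction test just reported. The column names and dummy coding are illustrative assumptions, not the tal_regress implementation.

```python
# Hypothetical design matrix for the random-effects regression: one row per
# participant's contrast image. Column labels are assumptions.
import numpy as np
import pandas as pd

def design_matrix(df: pd.DataFrame, with_interaction: bool = False) -> np.ndarray:
    """df columns: 'group' (deaf/hearing), 'gender' (m/f),
    'task' (animals/tools/objects)."""
    X = [np.ones(len(df)),                        # intercept
         (df['group'] == 'deaf').astype(float),   # covariate of interest
         (df['gender'] == 'f').astype(float)]     # nuisance covariate
    task_dummies = pd.get_dummies(df['task'], drop_first=True)
    X += [task_dummies[c].astype(float) for c in task_dummies]  # nuisance covariates
    if with_interaction:                          # group x task-type terms
        g = (df['group'] == 'deaf').astype(float)
        X += [g * task_dummies[c].astype(float) for c in task_dummies]
    return np.column_stack(X)
```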

Results

The results are presented in Table 1 and Figure 1. The conjunction analysis revealed that the left inferior frontal gyrus was equivalently engaged by deaf and hearing subjects during speech and sign production (−45, +28, +16); see Figure 1A. A region within the left mesial temporal lobe (−34, −30, −14) and another in the left parieto-occipital transition zone (−28, −78, +37) were also engaged equivalently for both languages during the naming tasks. The left posterior inferotemporal cortex (−51, −51, −10) was engaged significantly by both groups, though there was a nonsignificant trend (interaction t > 1.67 but < 3.82) toward more activity in the deaf group. An exploratory analysis at the whole-brain level, using the more conservative threshold, revealed a conjunction of activity in the mesial occipital cortex (+1, −70, +10), again with a nonsignificant trend toward more activity in the deaf group.

Table 1.

Results of the conjunction and interaction analyses between ASL sign production and English word production. Talairach coordinates indicate peak activation maxima. The conjunction column lists activations present for both signing and speaking (with t values for the deaf and hearing groups); the interaction column lists activations greater for signing than for speaking (t, 88 df). Asterisks mark regions that showed no evidence of an interaction with subject group. No regions were more active for speaking than for signing.

Region | Conjunction: X, Y, Z; t (deaf), t (hearing) | Interaction: X, Y, Z; t (88 df)
Frontal lobe
 L IFG (BA 45)* | −45, +28, +16; t = 4.89, 4.95 |
Parietal lobe
 L SMG (BA 40) | | −60, −35, +27; t = 4.25
 L SPL (BA 7) | | −32, −46, +52; t = 4.59
 L SPL (BA 7) | | −10, −62, +59; t = 4.53
Temporal lobe
 L mesial temporal* | −34, −30, −14; t = 4.38, 4.00 |
 L inferotemporal (BA 37) | −51, −51, −10; t = 6.96, 5.72 |
Occipital lobe
 L parieto-occipital (BA 19)* | −28, −78, +37; t = 5.27, 5.74 |
 Mesial occipital (BA 17, 18) | −8, −73, +17; t = 6.00, 5.36 |

Figure 1.

A) Regions displaying the conjunction effect of lexical production during picture naming for Deaf ASL signers and hearing English speakers. B) Regions displaying greater activation for signing than for speaking. No regions were significantly more active for speaking than for signing. Color scale and Z levels of the axial slices are shown. Only the deep red represents significant activation; the other levels are shown to provide a sense of the regions with subthreshold activation.

The interaction analysis revealed that two regions within the left parietal lobe were more active for sign than for word production (see Figure 1B): the supramarginal gyrus (−60, −35, +27) and the superior parietal lobule (two maxima: −26, −51, +54; −11, −63, +57). The superior parietal lobule loci are near the boundary of the volume of brain in which all subjects were scanned. If the whole brain had been scanned, it is possible that these regions would have been more extensive and confluent. No region was found to be more active during word production than during sign production. The exploratory, whole brain analysis for interaction effects revealed no additional areas of interaction outside the search volume.

Discussion

As predicted, both sign and word production engaged the left inferior frontal gyrus, specifically BA 45, the anterior portion of Broca’s area. The fact that Broca’s area was activated to an equal extent for both sign and speech indicates a modality-independent role for this region in language production and is consistent with previous research. Thus, the function of Broca’s area is not strongly tied to the oral-acoustic phonological features of spoken language. Despite the anatomical proximity of Broca’s area to the sensorimotor representation of the orofacial articulators and its anatomical connections to auditory cortices, these results indicate that this neural region is nonetheless intimately involved in the production of a visual-manual language.

Indefrey and Levelt (2004) proposed that Broca’s area plays a critical role in syllabification during word production. However, multisyllabic signs (more than two syllables) are rare in ASL and in other signed languages. In fact, most ASL signs are monosyllabic (Brentari, 1998), and there is little evidence for onset-rhyme distinctions, for the existence of a syllabary, or for resyllabification processes. The process of syllabification for signs and for words thus appears to be quite different, and we therefore suggest that the equal engagement of Broca’s area during sign and word production does not arise from shared syllabification processes. Rather, Broca’s area appears to be recruited for a number of different cognitive and linguistic functions. Other candidate lexical processes that might engage Broca’s area for both sign and word production include lexical-semantic functions related to lexical retrieval (e.g., Cappa & Perani, 2006) and phonological, syntactic, and semantic feature binding (Hagoort, 2006). In addition, Broca’s area may subserve modality-independent, domain-general processes, such as cognitive control functions (Thompson-Schill, Bedny, & Goldberg, 2005) or the selection and inhibition of hierarchically organized action plans (Koechlin & Jubault, 2006).

The conjunction analyses also indicated that both sign and word production engaged left temporal regions, which are likely to be involved in conceptually driven lexical access (Indefrey & Levelt, 2004). For both speakers and signers, activation within the left inferior temporal gyrus may reflect prelexical conceptual processing of the pictures to be named, while activation within the more mesial temporal regions may reflect lemma selection, prior to phonological code retrieval. Overall, the conjunction results argue for a modality-independent fronto-temporal network that subserves both sign and word production.

The conjunction analysis reported in Braun et al. (2001) for signed and spoken narratives detected several additional regions of joint activation that were not observed in our study of lexical production. Many regions of joint activation identified by the Braun et al. (2001) study may be tied to the production of autobiographical narratives. For example, the SMA has been associated with internal self-generated speech, as opposed to external stimulus-generated speech (Guenther, Ghosh, & Tourville, 2006). Premotor cortices and the insula may be more strongly engaged during narrative production than during single word/sign production because of the more complex articulation processes required for sentence production. In addition, our results are consistent with Braun et al.’s (2001) hypothesis that bilateral posterior temporoparietal brain regions are engaged in the encoding of discourse-related semantic information.

Lastly, the conjunction analysis revealed activation in occipital cortex (Table 1; Figure 1A). This region extended from the mesial temporal area in both hemispheres through the left supracalcarine region to the left parieto-occipital border zone (BA 17, 18, 19). We hypothesize that activation of these regions, among which dorsal visual stream regions are prominent, is not due to lexical production processes. Rather, this activation is likely associated with visual attention and search processes, including more exploratory eye movements, that were required for the picture naming tasks, but not for the much less effortful face-orientation decision task that was used as the standard baseline condition.

The interaction results indicated that left parietal cortices are uniquely recruited for the production of lexical signs (see Figure 1B). One region of sign-specific activation was the left supramarginal gyrus (SMG). This region is not typically activated in word production studies that utilize picture naming (Indefrey & Levelt, 2004). However, activation in left SMG has been reported during spoken word repetition (Shuster & Lemieux, 2005), delayed picture naming (Kemeny et al., 2006), and under conditions of delayed auditory feedback (Hashimoto & Sakai, 2003). All of these tasks involve the temporary storage of phonological representations, and left SMG has been shown to play a role in phonological working memory for both speech and sign (Buchsbaum et al., 2005; Jacquemot & Scott, 2006). However, the confrontation naming task used in our study did not involve the temporary storage of signs or words, and it is unlikely that the sign-specific SMG activation we observed reflected working memory processes.

Within the DIVA (Directions Into Velocities of Articulators) model of speech production proposed by Guenther and colleagues (Bohland & Guenther, 2006; Guenther et al., 2006), the left inferior SMG is hypothesized to play a role in monitoring and guiding speech articulator movements. Specifically, somatosensory error maps are hypothesized to lie within the inferior parietal cortex along the anterior supramarginal gyrus, posterior to the primary somatosensory representations of the speech articulators. Somatosensory error maps are utilized during somatosensory target learning and feedback-based control. However, the primary somatosensory representations of the sign articulators (the hands and arms) are located more superiorly, adjacent to superior parietal cortex (Hlustik, Solodkin, Gullapalli, Noll, & Small, 2001; Maldjian, Gottschalk, Patel, Detre, & Alsop, 1999).

Interestingly, Rumiati et al. (2004) found activation within left SMG at a nearly identical site (−58, −32, +30) when hearing participants were asked to imitate or perform pantomimes of object use. Hesse, Thiel, Stephan, and Fink (2006) argue that neural activity in left SMG underlies the selection and planning of motor movements of the hand and arm, independent of the actual execution of the movement. For ASL, left SMG may play a role in the phonological encoding of signs. In a cortical stimulation mapping study, Corina et al. (1999) reported that stimulation within left SMG (site PO) resulted in phonological and semantic substitutions during a picture-naming task by a deaf ASL signer. For example, when producing the ASL sign PIG, the signer produced a clearly articulated 3-handshape (thumb, index, and middle fingers extended) instead of the correct B-handshape (all fingers extended and touching). Corina et al. (1999) hypothesized that left SMG supports aspects of phonological encoding for sign language, such as selection of the hand configuration, place of articulation, and movement features of a sign.

Finally, the peak activation site within left SMG was close to the site where MacSweeney and colleagues (MacSweeney et al., 2002) reported greater activation for the perception of British Sign Language compared to audio-visually perceived spoken English (−55, −47, +34). Thus, left SMG appears to play a greater role in both the perception and the production of sign language compared to spoken language. This region within the inferior parietal lobule may function to bind together the disparate spatial, temporal, and configural phonological elements of sign (i.e., locations on the body, movements of the hands/fingers, and handshapes). The integration of these elements into a phonological representation is necessary for both the perception and production of sign language. In support of this hypothesis, the left supramarginal gyrus has been shown to play a crucial role in the integration of spatial and temporal information when perceiving action (Assmus, Marshall, Noth, Zilles, & Fink, 2005; Assmus et al., 2003) and when planning hand movements (Hesse et al., 2006).

As predicted, the left superior parietal lobule was uniquely engaged for sign production. However, the peak activation sites were more posterior and superior than the activation sites observed by Braun et al. (2001) for signed narratives. Therefore, our findings are consistent with the hypothesis that the SPL activation observed by Braun et al. (2001) may be due to the use of signing space to express syntactic relations in ASL. Petitto et al. (2000; supplement table 8) reported SPL activation at sites very similar to ours (−29, −46, +54; see Table 1) when deaf signers repeated ASL nouns (−27, −47, +63) or generated verbs (−23, −49, +66) relative to a fixation baseline. Other studies of single sign production also report activation in left parietal cortex, but in slightly more inferior regions (Corina et al., 2003; San Jose-Robertson et al., 2004).

We hypothesize that sign production, unlike word production, may recruit left SPL for the proprioceptive monitoring of language output. Lesion, neuroimaging, and TMS data indicate a role for the superior parietal lobule in proprioception and the assessment and monitoring of self-generated movements (e.g., MacDonald & Paus, 2003; Pellijeff, Bonilha, Morgan, McKenzie, & Jackson, 2006; Wolpert, Goodbody, & Husain, 1998). Interestingly, Corina et al. (2003) found increased activation in left superior parietal cortex when right-handed signers produced signs with their left hand. It is possible that this increase in activation was due to the need for increased monitoring and assessment of the movement and hand configuration of the non-dominant hand.

Furthermore, proprioceptive monitoring may play a more important role in sign production because visual monitoring of signing (unlike auditory monitoring of speech) provides an unusual signal for language perception. For spoken language, speakers can monitor their speech output by listening to their own voice: a perceptual loop feeds back to the speech comprehension mechanism (Levelt, 1989). In contrast, signers do not look directly at their hands and cannot see their own faces (note that facial expressions convey grammatical information). The visual input from one’s own signing is quite distinct from the visual input of another’s signing. Therefore, a simple perceptual loop that feeds back to the sign comprehension mechanism is problematic. Sign production may crucially involve proprioceptive monitoring of hand and arm movements, hand posture, and body-part location, particularly because sign production is not visually guided.

Another clue to the function of the neural activity in left parietal cortices for sign production can be found in our previous research comparing signing with finger-spelling (Emmorey et al., 2003). Although both signing and finger-spelling involve manual articulation, lexical signs differ from finger-spelled words with respect to phonological structure and complexity. Finger-spelled words contain sequences of handshapes that represent English letters. In contrast, phonological constraints limit the number and type of handshape sequences that can appear in lexical ASL signs (Brentari & Padden, 2001). Although finger-spelled words contain more complex hand configuration sequences, lexical signs exhibit a richer and more varied phonological structure because place of articulation on the body must be specified (e.g., forehead, nose, chin, chest, arm, etc.), along with movement features (e.g., path movement and/or secondary movements such as finger wiggling, wrist twisting, etc.). When the production of lexical signs was contrasted with finger-spelled words, Emmorey et al. (2003) found greater activation for signs in left SMG (−51, −36, +26) and SPL (−15, −59, +55). We hypothesize that greater engagement of left parietal regions during signing compared to finger-spelling reflects the greater phonological complexity of signs with respect to variable body locations and movement features that must be encoded and monitored during sign production.

Finally, the interaction analysis did not reveal any regions that were more engaged for word than for sign production. In contrast, Braun et al. (2001) found greater activity for speech in left prefrontal and subcortical areas. They proposed that speech preferentially engages a prefrontal corticostriatal-thalamocortical circuit, which plays a role in the timing and sequencing of cognitive and motor behaviors. Greater activity in this circuit for speech may reflect more intense sequencing and timing demands for spoken language at all linguistic levels. Words tend to contain more segments and syllables than signs, and English morphemes are arrayed linearly rather than simultaneously as in ASL: English uses prefixes and suffixes, whereas ASL superimposes movements on sign stems to create morphologically complex signs. English sentences also tend to contain more words that must be ordered, because ASL is a pro-drop language, does not require determiners, and contains few function words. These additional sequencing and timing demands above the level of the word may yield significant differential activation within the prefrontal corticostriatal-thalamocortical circuit for spoken narratives.

In sum, the comparison of sign production by deaf ASL signers and word production by hearing English speakers revealed regions of modality-independent neural activity within the left inferior frontal gyrus and left temporal regions, which are hypothesized to be involved in conceptually driven lexical access processes. Within the left parietal lobe, two regions were differentially engaged for sign production, reflecting modality-specific output parameters for sign language. We hypothesize that activation within the left supramarginal gyrus reflects phonological assembly and encoding in ASL (e.g., the selection of hand configuration and location features), while activation in the left superior parietal lobule reflects proprioceptive monitoring of manual and brachial language output.

Acknowledgments

This research was supported by grants from the National Institute on Deafness and Other Communication Disorders: 1 P50 DC 03189, awarded to the University of Iowa and The Salk Institute for Biological Studies, and R01 DC006708, awarded to San Diego State University.


References

  1. Assmus A, Marshall JC, Noth J, Zilles K, Fink GR. Difficulty of perceptual spatiotemporal integration modulates the neural activity of left inferior parietal cortex. Neuroscience. 2005;132(4):923–927. doi: 10.1016/j.neuroscience.2005.01.047.
  2. Assmus A, Marshall JC, Ritzl A, Noth J, Zilles K, Fink GR. Left inferior parietal cortex integrates time and space during collision judgments. Neuroimage. 2003;20(Suppl 1):S82–S88. doi: 10.1016/j.neuroimage.2003.09.025.
  3. Bohland JW, Guenther FH. An fMRI investigation of syllable sequence production. Neuroimage. 2006;32(2):821–841. doi: 10.1016/j.neuroimage.2006.04.173.
  4. Braun AR, Guillemin A, Hosey L, Varga M. The neural organization of discourse: an H2(15)O-PET study of narrative production in English and American Sign Language. Brain. 2001;124:2028–2044. doi: 10.1093/brain/124.10.2028.
  5. Brentari D. A prosodic model of sign language phonology. The MIT Press; Cambridge, MA: 1998.
  6. Brentari D, Padden C. A lexicon of multiple origins: Native and foreign vocabulary in American Sign Language. In: Brentari D, editor. Foreign vocabulary in sign languages: A crosslinguistic investigation of word formation. Lawrence Erlbaum Associates; Mahwah: 2001. pp. 87–119.
  7. Broca P. Remarques sur le siège de la faculté du langage articulé, suivies d'une observation d'aphémie (perte de la parole). Bulletin de la Société Anatomique de Paris. 1861;6:330–357.
  8. Buchsbaum B, Pickell B, Love T, Hatrak M, Bellugi U, Hickok G. Neural substrates for verbal working memory in deaf signers: fMRI study and lesion case report. Brain Lang. 2005;95(2):265–272. doi: 10.1016/j.bandl.2005.01.009.
  9. Cappa SF, Perani D. Broca's area and lexical-semantic processing. In: Grodzinsky Y, Amunts K, editors. Broca's Region. Oxford University Press; Oxford: 2006. pp. 187–195.
  10. Corina DP, San Jose-Robertson L, Guillemin A, High J, Braun AR. Language lateralization in a bimanual language. J Cogn Neurosci. 2003;15(5):718–730. doi: 10.1162/089892903322307438.
  11. Damasio H, Grabowski TJ, Frank R, Knosp B, Hichwa RD, Watkins GL, et al. PET-Brainvox, a technique for neuroanatomical analysis of positron emission tomography images. In: Uemura K, Lassen NA, Jones T, Kanno I, editors. Quantification of brain function. Elsevier; Amsterdam: 1994. pp. 465–474.
  12. Damasio H, Grabowski TJ, Tranel D, Hichwa RD, Damasio AR. A neural basis for lexical retrieval. Nature. 1996;380(6574):499–505. doi: 10.1038/380499a0.
  13. Damasio H, Grabowski TJ, Tranel D, Ponto LL, Hichwa RD, Damasio AR. Neural correlates of naming actions and of naming spatial relations. Neuroimage. 2001;13(6 Pt 1):1053–1064. doi: 10.1006/nimg.2001.0775.
  14. Damasio H, Tranel D, Grabowski T, Adolphs R, Damasio A. Neural systems behind word and concept retrieval. Cognition. 2004;92(1–2):179–229. doi: 10.1016/j.cognition.2002.07.001.
  15. Emmorey K, Borinstein H, Thompson R. Bimodal bilingualism: Code-blending between spoken English and American Sign Language. In: Proceedings of the 4th International Symposium on Bilingualism; 2005.
  16. Emmorey K, Damasio H, McCullough S, Grabowski T, Ponto LL, Hichwa RD, et al. Neural systems underlying spatial language in American Sign Language. NeuroImage. 2002;17(2):812–824.
  17. Emmorey K, Grabowski T, McCullough S, Damasio H, Ponto L, Hichwa R, et al. Motor-iconicity of sign language does not alter the neural systems underlying tool and action naming. Brain and Language. 2004;89(1):27–37. doi: 10.1016/S0093-934X(03)00309-2.
  18. Emmorey K, Grabowski T, McCullough S, Damasio H, Ponto LL, Hichwa RD, et al. Neural systems underlying lexical retrieval for sign language. Neuropsychologia. 2003;41(1):85–95. doi: 10.1016/s0028-3932(02)00089-1.
  19. Emmorey K, Grabowski T, McCullough S, Ponto LL, Hichwa RD, Damasio H. The neural correlates of spatial language in English and American Sign Language: a PET study with hearing bilinguals. NeuroImage. 2005;24(3):832–840. doi: 10.1016/j.neuroimage.2004.10.008.
  20. Frank RJ, Damasio H, Grabowski TJ. Brainvox: an interactive, multimodal visualization and analysis system for neuroanatomical imaging. Neuroimage. 1997;5(1):13–30. doi: 10.1006/nimg.1996.0250.
  21. Geschwind N. The organization of language in the brain. Science. 1970;170:940–944. doi: 10.1126/science.170.3961.940.
  22. Grabowski TJ, Damasio H, Frank R, Hichwa RD, Ponto LL, Watkins GL. A new technique for PET slice orientation and MRI-PET coregistration. Human Brain Mapping. 1995;2:123–133.
  23. Grabowski TJ, Frank R, Brown CK, Damasio H, Boles Ponto LL, Watkins GL, et al. Reliability of PET activation across statistical methods, subject groups, and sample sizes. Human Brain Mapping. 1996;4:23–46. doi: 10.1002/(SICI)1097-0193(1996)4:1<23::AID-HBM2>3.0.CO;2-R.
  24. Grosjean F. Neurolinguists, beware! The bilingual is not two monolinguals in one person. Brain and Language. 1989;36(1):3–15. doi: 10.1016/0093-934x(89)90048-5.
  25. Guenther FH, Ghosh SS, Tourville JA. Neural modeling and imaging of the cortical interactions underlying syllable production. Brain and Language. 2006;96(3):280–301. doi: 10.1016/j.bandl.2005.06.001.
  26. Hagoort P. On Broca, brain, and binding. In: Grodzinsky Y, Amunts K, editors. Broca's Region. Oxford University Press; Oxford: 2006. pp. 242–253.
  27. Hashimoto Y, Sakai KL. Brain activations during conscious self-monitoring of speech production with delayed auditory feedback: an fMRI study. Hum Brain Mapp. 2003;20(1):22–28. doi: 10.1002/hbm.10119.
  28. Hesse MD, Thiel CM, Stephan KE, Fink GR. The left parietal cortex and motor intention: An event-related functional magnetic resonance imaging study. Neuroscience. 2006;140(4):1209–1221. doi: 10.1016/j.neuroscience.2006.03.030.
  29. Hichwa RD, Ponto LL, Watkins GL. Clinical blood flow measurement with [15O]water and positron emission tomography (PET). In: Emran AM, editor. Chemists' views of imaging centers: symposium proceedings of the International Symposium on “Chemists' Views of Imaging Centers”. Plenum Publishing; New York: 1995.
  30. Hlustik P, Solodkin A, Gullapalli RP, Noll DC, Small SL. Somatotopy in human primary motor and somatosensory hand representations revisited. Cereb Cortex. 2001;11(4):312–321. doi: 10.1093/cercor/11.4.312.
  31. Horwitz B, Amunts K, Bhattacharyya R, Patkin D, Jeffries K, Zilles K, et al. Activation of Broca's area during the production of spoken and signed language: A combined cytoarchitectonic mapping and PET analysis. Neuropsychologia. 2003;41(14):1868–1876. doi: 10.1016/s0028-3932(03)00125-8.
  32. Hull R, Vaid J. Clearing the cobwebs from the study of the bilingual brain. In: Kroll JF, de Groot AMB, editors. Handbook of Bilingualism: Psycholinguistic Approaches. Oxford University Press; Oxford: 2005. pp. 480–496.
  33. Indefrey P, Levelt WJ. The spatial and temporal signatures of word production components. Cognition. 2004;92(1–2):101–144. doi: 10.1016/j.cognition.2002.06.001.
  34. Jacquemot C, Scott SK. What is the relationship between phonological short-term memory and speech processing? Trends Cogn Sci. 2006;10(11):480–486. doi: 10.1016/j.tics.2006.09.002.
  35. Kemeny S, Xu J, Park GH, Hosey LA, Wettig CM, Braun AR. Temporal dissociation of early lexical access and articulation using a delayed naming task: an fMRI study. Cereb Cortex. 2006;16(4):587–595. doi: 10.1093/cercor/bhj006.
  36. Koechlin E, Jubault T. Broca's area and the hierarchical organization of human behavior. Neuron. 2006;50(6):963–974. doi: 10.1016/j.neuron.2006.05.017.
  37. Levelt WJM. Speaking: From Intention to Articulation. The MIT Press; Cambridge: 1989.
  38. MacDonald PA, Paus T. The role of parietal cortex in awareness of self-generated movements: A transcranial magnetic stimulation study. Cereb Cortex. 2003;13(9):962–967. doi: 10.1093/cercor/13.9.962.
  39. MacSweeney M, Woll B, Campbell R, McGuire PK, David AS, Williams SC, et al. Neural systems underlying British Sign Language and audio-visual English processing in native users. Brain. 2002;125(Pt 7):1583–1593. doi: 10.1093/brain/awf153.
  40. Maldjian JA, Gottschalk A, Patel RS, Detre JA, Alsop DC. The sensory somatotopic map of the human hand demonstrated at 4 Tesla. Neuroimage. 1999;10(1):55–62. doi: 10.1006/nimg.1999.0448.
  41. McGuire PK, Robertson D, Thacker A, David AS, Kitson N, Frackowiak RS, et al. Neural correlates of thinking in sign language. Neuroreport. 1997;8(3):695–698. doi: 10.1097/00001756-199702100-00023.
  42. Miller AJ. Least squares routines to supplement those of Gentleman [AS 274]. Applied Statistics. 1991;41:458–478.
  43. Nichols T, Brett M, Andersson J, Wager T, Poline JB. Valid conjunction inference with the minimum statistic. NeuroImage. 2005;25:653–660. doi: 10.1016/j.neuroimage.2004.12.005.
  44. Pellijeff A, Bonilha L, Morgan PS, McKenzie K, Jackson SR. Parietal updating of limb posture: An event-related fMRI study. Neuropsychologia. 2006. doi: 10.1016/j.neuropsychologia.2006.01.009.
  45. Petitto LA, Zatorre RJ, Gauna K, Nikelski EJ, Dostie D, Evans AC. Speech-like cerebral activity in profoundly deaf people processing signed languages: implications for the neural basis of human language. Proc Natl Acad Sci U S A. 2000;97(25):13961–13966. doi: 10.1073/pnas.97.25.13961.
  46. Poizner H, Klima E, Bellugi U. What the Hands Reveal About the Brain. MIT Press; Cambridge, MA: 1987.
  47. Price CJ, Friston KJ. Cognitive conjunction: a new approach to brain activation experiments. NeuroImage. 1997;5:261–270. doi: 10.1006/nimg.1997.0269.
  48. Rumiati RI, Weiss PH, Shallice T, Ottoboni G, Noth J, Zilles K, et al. Neural basis of pantomiming the use of visually presented objects. Neuroimage. 2004;21(4):1224–1231. doi: 10.1016/j.neuroimage.2003.11.017.
  49. San Jose-Robertson L, Corina DP, Ackerman D, Guillemin A, Braun AR. Neural systems for sign language production: Mechanisms supporting lexical selection, phonological encoding, and articulation. Hum Brain Mapp. 2004;23(3):156–167. doi: 10.1002/hbm.20054.
  50. Sandler W, Lillo-Martin D. Sign language and linguistic universals. Cambridge University Press; Cambridge: 2006.
  51. Shuster LI, Lemieux SK. An fMRI investigation of covertly and overtly produced mono- and multisyllabic words. Brain Lang. 2005;93(1):20–31. doi: 10.1016/j.bandl.2004.07.007.
  52. Talairach J, Tournoux P. Co-planar stereotaxic atlas of the human brain. Thieme; New York: 1988.
  53. Thompson-Schill SL, Bedny M, Goldberg RF. The frontal lobes and the regulation of mental activity. Curr Opin Neurobiol. 2005;15(2):219–224. doi: 10.1016/j.conb.2005.03.006.
  54. Wolpert DM, Goodbody SJ, Husain M. Maintaining internal representations: the role of the human superior parietal lobe. Nat Neurosci. 1998;1(6):529–533. doi: 10.1038/2245.
  55. Woods RP, Mazziotta JC, Cherry SR. MRI-PET registration with automated algorithm. Journal of Computer Assisted Tomography. 1993;17:536–546. doi: 10.1097/00004728-199307000-00004.
  56. Worsley KJ. Local maxima and the expected Euler characteristic of excursion sets of chi-squared, F and t fields. Advances in Applied Probability. 1994;26:13–42.
  57. Worsley KJ, Evans AC, Marrett S, Neelin P. A three-dimensional statistical analysis for CBF activation studies in human brain. Journal of Cerebral Blood Flow and Metabolism. 1992;12:900–918. doi: 10.1038/jcbfm.1992.127.
