Abstract
Signed languages such as American Sign Language (ASL) are natural human languages that share all of the core properties of spoken human languages, but differ in the modality through which they are communicated. Neuroimaging and patient studies have suggested similar left hemisphere (LH)-dominant patterns of brain organization for signed and spoken languages, indicating that the linguistic nature of the information, rather than its modality, drives brain organization for language. However, the role of the right hemisphere (RH) in sign language has been less explored. In spoken languages, the RH supports the processing of numerous types of narrative-level information, including prosody, affect, facial expression, and discourse structure. In the present fMRI study, we contrasted the processing of ASL sentences that contained these types of narrative information with similar sentences without marked narrative cues. For all sentences, Deaf native signers showed robust bilateral activation of perisylvian language cortices, as well as the basal ganglia, medial frontal and medial temporal regions. However, RH activation in the inferior frontal gyrus and superior temporal sulcus was greater for sentences containing narrative devices, and included areas involved in processing narrative content in spoken languages. These results provide additional support for the claim that all natural human languages rely on a core set of LH brain regions, and extend our knowledge to show that narrative linguistic functions typically associated with the RH in spoken languages are similarly organized in signed languages.
Introduction
A variety of distinct visual-manual signed languages have emerged, independently of the surrounding spoken languages, in Deaf communities around the globe. These languages possess all of the linguistic complexity and levels of structure of spoken languages, but rely on visuo-spatial, rather than acoustic, perception for their understanding. Insofar as particular brain areas possess predispositions for certain types of processing relevant to language (e.g., learning associations between arbitrary symbols and meanings; combining words into structured sentences), we would expect that the neural organization of spoken and signed languages would be similar. On the other hand, the perceptual and cognitive processing demands of a particular language may impose particular patterns of brain organization, leading to differences in the neural apparatus for processing spoken and signed languages that extend beyond sensory cortices. This paper examines the neural network engaged by narrative processing in signers, in particular prosody, facial expression, and role-shifting, to determine whether this narrative processing network is similar across language modalities in spite of differences in the way in which the information is conveyed.
The visual-manual modality affords options for expression that are not available for spoken languages, such as patterns of hand and body movement in space, and facial expressions to encode linguistic information. Facial and spatial information processing for non-linguistic materials is dependent on the right hemisphere (RH) (Kanwisher and Yovel, 2006; Vogel et al., 2003). The question arises whether the right and left hemispheres play the same relative roles during language processing in signers as they do in speakers. Neuropsychological and neuroimaging studies have largely suggested that in spite of modality differences, the brain organization for spoken and signed languages is quite similar. Left hemisphere (LH) damage in signers results in typical patterns of aphasia (e.g., nonfluent, agrammatic aphasia with anterior LH damage; fluent aphasias with posterior LH damage), while RH lesions have more subtle, if any, effects on grammar, fluency, or semantics (Corina, 1998; Corina et al., 1999; Hickok et al., 1996; Hickok et al., 1999; Poizner et al., 1987). Neuroimaging studies of signed language production and comprehension have similarly revealed a left-lateralized pattern of activation in classical language areas including the inferior frontal gyrus (IFG, or Broca’s area), the superior temporal sulcus (STS) and inferior parietal lobe (Wernicke’s area), and motor/premotor areas (Bavelier et al., 1998; Bavelier et al., 2008; Braun et al., 2001; Corina et al., 2003; Emmorey et al., 2003; Kassubek et al., 2004; Lambertz et al., 2005; MacSweeney et al., 2002; Meyer et al., 2004; Neville et al., 1998; Newman et al., 2002; Petitto et al., 2000; Sakai et al., 2005; San Jose-Robertson et al., 2004).
Other aspects of language processing have been shown to be more dependent on the RH. These include discourse-level processing such as interpretation of prosody and facial expressions, and the ability to properly maintain topics and comprehend narratives across several sentences (Beeman and Chiarello, 1997; Brownell et al., 1986; Gorelick and Ross, 1987; Rehak et al., 1992; Ross, 1981; Wymer et al., 2002). Neuroimaging studies have indicated that the key RH regions involved in processing these aspects of language are those homologous to classical LH language areas, including the IFG, STS, and inferior parietal lobe (Awad et al., 2007; Baum and Pell, 1999; Bloom et al., 1992; Caplan and Dapretto, 2001; Gandour et al., 2003b; Gur et al., 1994; Kotz et al., 2003; Meyer et al., 2002; Mitchell et al., 2003; Narumoto et al., 2001; Schmitt et al., 1997; St George et al., 1999). Neuropsychological evidence suggests that the primary role of the RH in processing narrative information holds for signed languages as well, including for topic coherence, the ability to maintain referential coherence by properly situating signs in the space in front of the signer and referring to the same locations consistently, and by properly signing the orientations, spatial relationships, and movement paths of objects (Atkinson et al., 2004; Emmorey et al., 1995; Hickok et al., 1999; Poizner et al., 1987). Taken together, this evidence suggests a universal pattern of brain organization for language irrespective of modality.
However, the neural bases of narrative processing in sign language have only been investigated in a relatively small number of patient studies, and not in neurologically intact native signers. It therefore remains possible that the LH may play a greater role in narrative processing in ASL as compared to speech. This possibility finds support in a few neuroimaging studies of signers that have demonstrated LH dominance for some functions that normally show greater RH activation. A leftward-shifted dominance has been reported, for example, in response to visual motion in signers as compared to non-signers (Bavelier et al., 2001; Bavelier et al., 2000; Fine et al., 2005; Finney et al., 2001; Neville and Lawson, 1987). The case of facial expression is also notable, with some aspects of its processing controlled by the left hemisphere in signers, but other aspects controlled by the right hemisphere, as in non-signers. Using chimeric stimuli, Corina et al. (1999) found that ASL linguistic expressions are perceived as most intense when produced by the LH of a signer (i.e., on the right side of the face), but affective expressions are viewed as more intense when produced by the RH (on the left side of the face). Corina et al. (1999) further reported a neuropsychological double dissociation for linguistic and affective facial expressions in signers. While RH damage led to a notable decrease in affective facial expressions produced by a congenitally deaf signer, linguistic facial expressions including adverbials and grammatical markers were still produced. In contrast, a congenitally deaf signer with LH damage produced affective facial expressions but not linguistic ones. McCullough et al. (2005) found similar results using fMRI, with an overall shift towards left-lateralization of activation within face-processing regions of the STS and fusiform gyrus that was most pronounced for ASL linguistic facial expressions. In sum, then, as motion and facial cues come to serve linguistic purposes, their processing may occur predominantly in the language-dominant left hemisphere. Since prosody in sign language is conveyed through face and body movements rather than through sound, some aspects of narrative processing in sign language may also come to depend on the LH. Thus, at present, it is unclear how similar the neural organization for discourse-level information, such as affective and prosodic markers, is for signed and spoken languages.
The present study was designed to determine whether the neural organization for the processing of narrative devices (including affective prosody and facial expression) in American Sign Language is similar to that observed in spoken languages. We constructed a set of ASL sentences, with two versions of each that differed in the presence or absence of a cluster of discourse/narrative features, including affective facial expressions, role marking using shifts of orientation of the torso and accompanying eye gaze, and narrative prosodic markers including facially-marked topicalized, specified, and emphasized phrases (see Videos 1 & 2). The narrative condition added linguistic and meta-linguistic features that reinforced or enlivened the content of the sentences, but these were neither grammatically required nor did they alter the basic propositional meaning of the sentences. The non-narrative sentences contained very little affective facial expression, though they did contain facial markers required by ASL grammar including topicalization and question markers, as well as some adverbial facial expressions1.
It is important to stress that across the two versions of each sentence, the semantic and propositional content, as well as most of the lexical items and syntactic devices, were held as constant as possible. However, differences imposed by narrative style in ASL led to some changes in word order and some differences of lexical item choice. For example, in Video 1, a teacher informs students in a sewing class of their grades. In the narrative and non-narrative versions, the same signs are used in the same order. In the narrative condition, however, the signer employs role-shifting to assume the point of view of a narrator at the event. This is effected through the addition of eye gaze direction, head tilt, and facial affect cues. As another example, in Video 2 the non-narrative version started with SUPPOSE (someone is) SLEEPY, followed by the suggestion that one should get up and walk around; in contrast, the narrative version involved role-shifting (the signer assuming the point of view of the speaker of the sentence), saying HEY, (are you) SLEEPY?, followed by the suggestion to get up and walk around.
Thus the sentences in the two conditions differed in the presence or absence of narrative/meta-linguistic devices, but not in the number of referents, the basic propositions, or in syntactic complexity.
Of interest in this study is the contrast between the brain systems recruited by narrative and non-narrative sentences in native signers. A direct contrast of the activation produced by each sentence type would not achieve this aim, because the narrative sentences tended to include overall more and larger hand, arm, body, and head movements and more marked and active expressions of the face. To control for these differences, we developed control stimuli matched to each sentence type that contained all of the visual information in the ASL sentences, but that were not processed linguistically. This was achieved by digitally overlaying three semi-transparent ASL sentence video clips of the same sentence type and playing them backward (hereafter "backward layered"). Upon viewing these stimuli, subjects were asked to press a button whenever they detected instances of bimanual symmetry (i.e., two hands with the same handshapes). This symmetry detection task ensured subjects' attention remained focused on the primary articulators, but was directed away from linguistic analysis.
We expected robust activation of classical language cortex in the LH (including inferior frontal, temporal and inferior parietal areas) for both sentence types relative to their backward-layered control conditions, but little difference in LH activation between the narrative and non-narrative sentences. In contrast, we hypothesized that, like spoken languages, the processing of narrative-level information in ASL relies primarily on the RH temporal, inferior frontal, and inferior parietal regions, and so these areas would show greater activation for narrative sentences (relative to their matched control condition) than for non-narrative sentences.
Materials and Methods
Subjects
fMRI data were collected from 17 right-handed (Oldfield, 1971), congenitally deaf young adults who were exposed to and learned ASL from birth from their deaf parents or caregivers. All had deafness (≥ 90 dB loss in each ear) of peripheral etiology and had no other known neurological or psychological disease. All subjects gave informed consent and were free to terminate participation at any time. Procedures were approved by the Research Subjects Review Board of the University of Rochester. Data from 3 participants were excluded (see below), leaving data from 14 participants contributing to the results presented. This included 6 females and 8 males, with a mean age of 25.5 years (range 18 – 36), and an average of 3.4 years of post-secondary education (range: 0 – 8 years).
Materials
Stimuli consisted of a set of 24 ASL sentences. Each sentence was produced by author T.S. (a Deaf native ASL signer), recorded to digital video tape, edited on a Macintosh computer using Final Cut Pro (Apple Inc., Cupertino, CA), and saved to QuickTime format (Sorenson 3 video compression) for playback during the experiment. Each sentence was recorded in 3 versions, the data from only 2 of which will be discussed here2: one version (narrative) included a number of narrative devices (role shifting markers involving shift of the torso, head, and eyes; affective facial expressions; and narrative prosodic markers indicating topicalized, specified, or emphasized phrases). The other version (non-narrative) was matched in lexical and syntactic/propositional content and also contained inflections for grammatical role and aspect/number, but lacked the additional narrative cues. The use of narrative devices frequently required changes in word order (16/24 sentences) and sometimes involved the substitution of third-person verbs with first-person actions that conveyed the same information (e.g., MONKEY EAT-UP rather than FEED MONKEY), but the propositional content of the sentences was not altered. (The third sentence type, word order, which was not included in the analyses of the present paper, also used the same lexical items and semantic content but contained neither narrative nor inflectional devices; all grammatical information was conveyed through separate lexical items and word order.) Control stimuli ("backward layered") were produced by creating backward versions of each movie, and then overlaying 3 such clips using Final Cut Pro. This produced movies that had the appearance of 3 semi-transparent versions of the signer moving their arms and faces simultaneously. Importantly, the overlaid clips were chosen to be of comparable length, and all belonged to the same sentence type. Thus, two different types of control stimuli were created, one for the non-narrative condition and one for the narrative. In pilot testing, signers asked to view these backward-layered stimuli were unable to understand any of the sentences and could only rarely identify even single lexical items. For each type of sentence, twenty-four of these "backward-layered" movies were created, each using a different triplet of sentences. Each sentence and backward-layered movie was saved as a 7-second clip. Because the actual sentences were shorter than 7 sec, each ASL sentence or backward-layered movie was padded with a still image of the signer (using a smooth morphing transition to avoid apparent sudden "jumps" in the position of the signer between sentences). The average duration of actual signing in the movies was 4.63 sec for narrative movies and 4.65 sec for non-narrative movies. Examples of each type of stimuli are shown in Videos 1 and 2.
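Although the backward-layered controls were created in Final Cut Pro, the reverse-and-overlay operation itself is simple to illustrate programmatically. The sketch below uses the Python moviepy package (1.x API) purely as an example; the file names, opacity value, and codec are placeholder assumptions and not part of the original procedure.

```python
# Illustrative sketch only: the actual control stimuli were built in Final Cut Pro.
# Assumes moviepy 1.x; file names are hypothetical placeholders.
from moviepy.editor import CompositeVideoClip, VideoFileClip, vfx

def make_backward_layered(paths, out_path):
    """Reverse three same-condition sentence clips and overlay them
    semi-transparently, approximating the 'backward-layered' controls."""
    clips = [VideoFileClip(p).fx(vfx.time_mirror) for p in paths]  # play each backward
    shortest = min(c.duration for c in clips)
    # Trim to a common length and make each layer semi-transparent.
    layers = [c.subclip(0, shortest).set_opacity(1.0 / len(clips)) for c in clips]
    CompositeVideoClip(layers).write_videofile(out_path, codec="libx264", audio=False)

make_backward_layered(
    ["narrative_01.mov", "narrative_07.mov", "narrative_13.mov"],
    "backward_layered_narrative_01.mp4",
)
```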
Procedure
Subjects were familiarized with the procedure and given a practice run (using stimuli not included in the fMRI procedure). Once in the MRI scanner, they performed 4 scanning runs, each consisting of 6 blocks of sentences and 6 blocks of backward-layered movies. Two blocks of each sentence/control type (including the condition not discussed here and its matched control condition) were presented per run, with 3 sentences per block. Each of these 21 sec blocks was separated from the next by a 15 sec baseline period where a still frame of the signer was displayed. The subjects’ task while viewing the ASL sentences was to press a button whenever they detected a sign from a particular semantic category (1 category/run: “food”, “women”, “clothing” and “money”). During backward-layered control blocks, subjects’ task was to press the button whenever they detected bimanual symmetry – a left and a right hand with the same handshape and position. Subjects were reminded of the semantic category at the beginning of each run, and immediately prior to each ASL block by a small icon superimposed on the chest of the signer in the still frame being displayed. An iconic task cue was similarly presented immediately prior to each backward-layered control block. Responses were made using a button box (Rowland Inc., Boston, USA) placed in a sandal on one of the subject’s feet, with response foot counterbalanced across subjects. Because the perception of ASL engages premotor cortex in the region of the hand representation (Bavelier et al., 2008; Neville et al., 1998; Newman et al., 2002), we required foot responses so as to be able to differentiate perception- from response-related activation. The ordering of tasks across runs, as well as the particular order of blocks/sentence types presented in each run, was randomized for each subject.
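To make the block structure concrete, the sketch below generates one possible randomized block order for a run and writes FSL-style three-column timing files (onset, duration, weight) for the 21 sec blocks separated by 15 sec baselines. The condition labels and output format are illustrative assumptions rather than the authors' actual scripts.

```python
# Illustrative sketch of the block timing (21 sec blocks separated by 15 sec
# still-frame baselines). Condition names and the FSL 3-column output format
# are assumptions, not the authors' scripts.
import random

CONDITIONS = ["narrative", "non_narrative", "word_order",
              "narrative_ctrl", "non_narrative_ctrl", "word_order_ctrl"]
BLOCK_DUR, BASELINE_DUR = 21.0, 15.0
BLOCKS_PER_CONDITION = 2                                    # per run

def build_run(seed=0):
    """Return randomized block onsets (in seconds) for one run."""
    order = CONDITIONS * BLOCKS_PER_CONDITION               # 12 blocks per run
    random.Random(seed).shuffle(order)
    onsets = {c: [] for c in CONDITIONS}
    t = 0.0
    for cond in order:
        onsets[cond].append(t)
        t += BLOCK_DUR + BASELINE_DUR                       # baseline separates blocks
    return onsets

for cond, times in build_run(seed=1).items():
    with open(f"run1_{cond}.txt", "w") as f:                # FSL custom (3-column) EV file
        for onset in times:
            f.write(f"{onset:.1f}\t{BLOCK_DUR:.1f}\t1\n")
```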
MR Scanning Procedures
Data were collected on a 1.5T GE Signa LX MRI system located at the University of Rochester Medical Center. For functional runs, gradient-echo, echoplanar images were acquired with the following parameters: TR = 3 sec, TE = 40 msec, flip angle = 90 deg, interleaved slice acquisition order, 64 × 64 matrix, FOV = 24 cm, in-plane voxel resolution = 3.75 × 3.75 mm, 21 slices, slice thickness 5 mm, inter-slice gap 1 mm, anterior-posterior phase encoding, bandwidth = ±62.5 kHz. Structural images were collected using a 3D spoiled gradient recalled (SPGR) sequence, with parameters of TE = minimum, flip angle = 20 deg, 256 × 256 matrix, FOV = 24 cm, voxel size 1 × 1 × 1.2 mm, 128 slices.
Data Preprocessing and Analysis
FMRI data processing was carried out using FEAT (FMRI Expert Analysis Tool) Version 5.98, part of FSL (FMRIB’s Software Library, www.fmrib.ox.ac.uk/fsl). Prior to statistical analysis, the following preprocessing steps were applied to the data from each run, for each subject: motion correction using MCFLIRT (Jenkinson et al., 2002); non-brain removal using BET (Smith, 2002); spatial smoothing using a Gaussian kernel of FWHM 8.0 mm; grand-mean intensity normalisation of the entire 4D dataset by a single multiplicative factor; and highpass temporal filtering (Gaussian-weighted least-squares straight line fitting, with sigma=36.0s). One subject’s data contained excessive head motion (numerous movements of > 2 mm) in every run; for two other subjects the data were corrupted on the MRI scanner and were unusable. Thus data from 14 participants were used in the statistical analyses reported.
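For readers unfamiliar with this preprocessing stream, the sketch below chains the corresponding FSL tools through the nipype Python interfaces. It is an illustration only: FEAT performed these steps internally, and the file names are placeholder assumptions.

```python
# Rough nipype sketch of the FEAT preprocessing stream described above; FEAT
# performed these steps internally, and all file names here are placeholders.
from nipype.interfaces import fsl

fsl.MCFLIRT(in_file="run1.nii.gz", out_file="run1_mc.nii.gz",
            save_plots=True).run()                 # motion correction (MCFLIRT)
fsl.BET(in_file="run1_mc.nii.gz", out_file="run1_bet.nii.gz",
        functional=True, mask=True).run()          # non-brain removal (BET)
fsl.Smooth(in_file="run1_bet.nii.gz", fwhm=8.0,
           smoothed_file="run1_sm.nii.gz").run()   # 8 mm FWHM Gaussian smoothing
fsl.ImageMaths(in_file="run1_sm.nii.gz", op_string="-ing 10000",
               out_file="run1_norm.nii.gz").run()  # grand-mean intensity normalisation
# A highpass sigma of 36 s corresponds to 12 volumes at TR = 3 s.
fsl.maths.TemporalFilter(in_file="run1_norm.nii.gz", highpass_sigma=12,
                         out_file="run1_preproc.nii.gz").run()
```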
Statistical analysis proceeded through 3 levels, again using FEAT. The first level was the analysis of each individual run, using general linear modeling (GLM) (Woolrich et al., 2001). The time series representing the ‘on’ blocks for each of the 6 stimulus types (Narrative, Non-narrative, Word Order, and their respective backward-layered control stimuli; Word Order sentences were modelled in the first-level analyses to properly account for the variance associated with them, but were excluded from subsequent levels of analysis) were entered as separate regressors into the GLM, with prewhitening to correct for local autocorrelation. Coefficients were obtained for each stimulus type as well as for contrasts between each sentence type and its backward-layered control condition.
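The structure of this first-level model can be pictured with the nilearn sketch below. This is a conceptual analogue only (the study used FEAT/FILM with prewhitening, which nilearn does not replicate), and the event file, condition labels, and high-pass cutoff are assumptions.

```python
# Conceptual nilearn analogue of the first-level GLM; the study itself used
# FEAT/FILM with prewhitening. File names and column labels are placeholders.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# run1_events.csv: one row per block with columns onset, duration, trial_type,
# where trial_type is one of the six condition labels (e.g., "narrative",
# "narrative_ctrl", "non_narrative", "non_narrative_ctrl", ...).
events = pd.read_csv("run1_events.csv")

model = FirstLevelModel(t_r=3.0, hrf_model="glover", drift_model="cosine",
                        high_pass=1.0 / 72, smoothing_fwhm=None)  # data pre-smoothed
model = model.fit("run1_preproc.nii.gz", events=events)

# Contrast of each sentence condition against its matched backward-layered control.
z_narr = model.compute_contrast("narrative - narrative_ctrl", output_type="z_score")
z_nonnarr = model.compute_contrast("non_narrative - non_narrative_ctrl",
                                   output_type="z_score")
```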
To identify brain areas activated by each sentence type relative to its backward-layered control condition, a second-level analysis was performed for each participant, with the inputs being the contrast coefficients obtained from the first-level GLM for the comparison of the ASL sentence condition with its backward-layered control. This was done using a fixed effects model, by forcing the random effects variance to zero in FLAME (FMRIB’s Local Analysis of Mixed Effects) (Beckmann et al., 2003; Woolrich, 2008; Woolrich et al., 2004). A third-level analysis was then performed on the coefficients from each subject determined in the second-level GLM. This was done using FLAME (FMRIB’s Local Analysis of Mixed Effects) stage 1 and stage 2 (Beckmann et al., 2003; Woolrich et al., 2004). Z (Gaussianised T/F) statistic images were thresholded using clusters determined by z > 2.3 and a (corrected) cluster significance threshold of p < .05 (Worsley, 2001). Subsequent to thresholding the z maps were masked to include only voxels that showed greater activation for ASL sentences than the low-level baseline (a still image of the signer).
To compare activation between the two sentence types, a second-level analysis was performed for each participant, entering as inputs the two coefficients from the first-level GLM: the contrast between the Narrative sentences and their control condition, and the contrast between Non-narrative sentences and their control condition. This was done using a fixed effects model, by forcing the random effects variance to zero in FLAME. The contrast coefficients from this second-level analysis for each subject were entered into a third-level analysis using FLAME stage 1 and stage 2. The resulting statistical maps were thresholded at z > 1.96 (uncorrected) with a minimum cluster size of 0.24 mL (30 voxels), and then masked to include only voxels that were significantly more activated by ASL sentences than the low-level baseline (z > 2.3 and a (corrected) cluster significance threshold of p < .05).
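Conceptually, this comparison amounts to a paired contrast of the per-subject narrative and non-narrative effect maps. The sketch below illustrates it with nilearn rather than FLAME (so the mixed-effects variance estimation differs from the published analysis), and the per-subject input maps are placeholders.

```python
# Conceptual analogue of the narrative vs. non-narrative group comparison; the
# published analysis used FSL FLAME, so this nilearn version is illustrative
# only, and the per-subject input maps are placeholders.
import pandas as pd
from nilearn.glm import threshold_stats_img
from nilearn.glm.second_level import SecondLevelModel
from nilearn.image import math_img

n_subjects = 14
narr_maps = [f"sub{i:02d}_narr_minus_ctrl.nii.gz" for i in range(1, n_subjects + 1)]
nonnarr_maps = [f"sub{i:02d}_nonnarr_minus_ctrl.nii.gz" for i in range(1, n_subjects + 1)]

# Paired comparison: per-subject difference images tested against zero.
diff_maps = [math_img("a - b", a=a, b=b) for a, b in zip(narr_maps, nonnarr_maps)]
design = pd.DataFrame({"intercept": [1] * n_subjects})
z_map = (SecondLevelModel()
         .fit(diff_maps, design_matrix=design)
         .compute_contrast("intercept", output_type="z_score"))

# z > 1.96, uncorrected, with a 30-voxel cluster-extent cutoff (see text).
thresholded, _ = threshold_stats_img(z_map, threshold=1.96,
                                     height_control=None, cluster_threshold=30)
```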
The activations were quite extensive for the comparison of each ASL sentence type with its respective backward-layered control condition, with clusters often spanning multiple anatomically and functionally distinct regions. To decompose these large clusters of activation we first identified all anatomical regions activated in these comparisons, then extracted the results from anatomically-defined regions of interest (ROIs) based on the work of Tzourio-Mazoyer et al. (2002). ROIs from each hemisphere included inferior frontal gyri (IFG), the lateral temporal lobes (superior, middle, and inferior temporal gyri and temporal pole), medial temporal lobes (hippocampi, parahippocampal gyri, and amygdalae), temporal-parietal-occipital junctions (including the angular and supramarginal gyri as well as the posterior bifurcation of the superior temporal sulcus), the medial superior frontal gyri (SFG), and the caudate nuclei, as well as the left supplementary motor cortex (SMA) and the right globus pallidus.
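The ROI summaries reported in Table 1 (peak z and suprathreshold volume per anatomical region) can be approximated as sketched below, using the AAL atlas of Tzourio-Mazoyer et al. (2002) as distributed with nilearn; the particular label grouping for the left IFG and the file names are illustrative assumptions.

```python
# Sketch of extracting peak z and suprathreshold volume within anatomical ROIs
# from the AAL atlas (Tzourio-Mazoyer et al., 2002). The label grouping and the
# file names are illustrative assumptions.
import nibabel as nib
import numpy as np
from nilearn import datasets
from nilearn.image import resample_to_img

aal = datasets.fetch_atlas_aal()
atlas = resample_to_img(aal.maps, "group_zmap_narrative.nii.gz",
                        interpolation="nearest")
atlas_data = atlas.get_fdata()

zmap = nib.load("group_zmap_narrative.nii.gz")
z = zmap.get_fdata()
voxel_ml = np.prod(zmap.header.get_zooms()[:3]) / 1000.0    # voxel volume in mL

# Example ROI: left IFG = AAL opercular, triangular, and orbital parts.
lh_ifg_labels = ["Frontal_Inf_Oper_L", "Frontal_Inf_Tri_L", "Frontal_Inf_Orb_L"]
lh_ifg_values = [int(aal.indices[aal.labels.index(l)]) for l in lh_ifg_labels]
roi_mask = np.isin(atlas_data, lh_ifg_values)

suprathreshold = roi_mask & (z > 2.3)
print("max z:", z[roi_mask].max() if roi_mask.any() else np.nan)
print("volume (mL):", suprathreshold.sum() * voxel_ml)
```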
Results
Behavioral Data
Due to technical failures, response data were recorded during fMRI scanning from only 8 of the 14 subjects, though all were given the response button boxes and performed the task during scanning3. A one-way analysis of variance on these data revealed no difference in accuracy of detecting semantic category targets between the sentence types, F(2,12) = 1.96, p = .18. The mean number of errors was less than one per sentence type, per subject.
To further test whether the presence or absence of narrative information affected sentence processing, we conducted a follow-up behavioral study on a group of 9 deaf native ASL signers, including 2 who had been in the fMRI experiment. These participants performed a semantic monitoring task similar to the one performed in the MRI scanner, using the same sentence materials. However, rather than responding to exemplars from a semantic category that occurred in only a small percentage of the sentences, as in the fMRI task, in this behavioral paradigm subjects were cued with a target category immediately before each sentence, and an exemplar from that category always appeared in the sentence. Subjects were asked to press the response button as soon as they detected a lexical item of that category. Thus we were able to obtain reaction times (RTs) for each sentence. Targets were selected to occur late in each sentence in order to assess the speed of comprehension of each sentence. Reaction times were predicted to reflect sentence complexity on the assumption that, if greater resources were allocated to parsing the sentence, then target detection would be slowed. Similar logic has been used previously in studies of sentence processing (Friederici, 1983, 1985) and is implicit in eye movement and self-paced studies of reading (Rayner, 1998). The results showed no difference in RTs between narrative (mean 1089 msec) and non-narrative sentences (mean 1027 msec), t = 0.66, p = .69, suggesting that the fMRI effects described below cannot be attributed to differences in processing complexity between the two sentence types.
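For completeness, the two behavioral analyses amount to a one-way ANOVA over the three sentence types and a paired t-test on the narrative and non-narrative RTs, as sketched below. The input files and column names are placeholders; note also that scipy's f_oneway treats the groups as independent, whereas the original analysis may have treated sentence type as a repeated measure.

```python
# Sketch of the behavioral analyses: a one-way ANOVA on in-scanner detection
# accuracy and a paired t-test on the follow-up reaction times. The input files
# and column names are hypothetical placeholders, not the actual data.
import pandas as pd
from scipy import stats

acc = pd.read_csv("scanner_accuracy.csv")         # one row per subject
F, p = stats.f_oneway(acc["narrative"], acc["non_narrative"], acc["word_order"])

rts = pd.read_csv("followup_rts.csv")             # mean RT (ms) per subject
t, p_rt = stats.ttest_rel(rts["narrative"], rts["non_narrative"])
print(f"ANOVA: F = {F:.2f}, p = {p:.2f}; RT t-test: t = {t:.2f}, p = {p_rt:.2f}")
```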
fMRI Data
The brain regions activated for each ASL sentence type relative to its backward-layered control condition are shown in Figure 1. Both narrative and non-narrative sentences evoked bilateral activation in the IFG, STS and surrounding lateral temporal regions, the temporal-parietal-occipital junction, the medial superior frontal gyri, and medial temporal regions including the hippocampi, parahippocampal gyri, and amygdalae. Within the temporal lobes, activation included areas of the anterior fusiform gyrus and posterior STS previously implicated in face and biological motion processing, respectively. Additionally, non-narrative (but not narrative) sentences activated medial cortical and subcortical structures including the supplementary motor area (SMA) and the basal ganglia (head of the caudate nuclei bilaterally and the right pallidum). The details of these activations are reported in Table 1.
Figure 1.
Activation elicited by each ASL sentence type, relative to its matched control condition (movies of 3 ASL sentences played backward and overlaid). Z statistic images were thresholded using clusters determined by z > 2.3 and a (corrected) cluster significance threshold of p = .05. Subsequent to thresholding, these maps were masked to include only voxels that showed greater activation for ASL sentences than the low-level baseline (a still image of the signer). The maps for each condition have been overlaid to show areas of conjunction (i.e., significant activation in each condition) in purple, and areas of disjunction in blue (activation only for narrative sentences) and red (activation only for non-narrative sentences).
Table 1.
Location, spatial extent, and maximum z values in anatomically-defined (Tzourio-Mazoyer et al., 2002) regions of interest (ROIs) covering all activations. For each condition relative to its backward-layered baseline condition, z maps were thresholded at z > 2.3, with a corrected cluster significance threshold of p < .05, prior to clustering within ROIs. Subsequent to thresholding, these maps were masked to include only voxels that showed greater activation for ASL sentences than the low-level baseline (a still image of the signer). Comparisons between sentence types were performed by first subtracting the backward-layered condition activation from its respective ASL sentence condition, then thresholding at z > 1.96 (uncorrected) with a minimum cluster size of 0.3 mL (38 voxels).
| ROI | Hemi | Narrative | | | | | Non-Narrative | | | | | Narrative > Non-Narrative | | | | | Non-Narrative > Narrative | | | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | | X | Y | Z | Max z | Volume (mL) | X | Y | Z | Max z | Volume (mL) | X | Y | Z | Max z | Volume (mL) | X | Y | Z | Max z | Volume (mL) |
| IFG | L | −40 | 34 | −16 | 4.38 | 7.66 | −52 | 28 | 2 | 4.55 | 10.64 | | | | | | | | | | |
| | R | 52 | 26 | 14 | 3.45 | 4.01 | 42 | 32 | −12 | 3.92 | 2.26 | 56 | 24 | −4 | 2.84 | 0.31 | | | | | |
| Temporal lobe - lateral | L | −54 | −8 | −22 | 4.59 | 16.54 | −56 | −26 | −6 | 5.28 | 19.44 | | | | | | | | | | |
| | R | 50 | −20 | −12 | 5.31 | 29.90 | 54 | −12 | −14 | 5.83 | 24.42 | 54 | −38 | 6 | 4.03 | 3.03 | 62 | −8 | 18 | 2.99 | 0.36 |
| Temporal lobe - medial | L | −18 | −6 | −16 | 3.19 | 0.13 | −20 | −12 | −20 | 4.12 | 0.61 | | | | | | | | | | |
| | R | 16 | 0 | −18 | 3.85 | 0.50 | 28 | −16 | −20 | 4.12 | 1.39 | | | | | | | | | | |
| Temporo-parietal occipital | L | −52 | −58 | 16 | 3.86 | 3.87 | −64 | −42 | −8 | 4.48 | 5.46 | | | | | | | | | | |
| | R | 60 | −60 | 16 | 3.77 | 2.37 | 54 | −64 | 22 | 4.95 | 2.86 | | | | | | | | | | |
| SMA | L | −8 | 20 | 60 | 3.99 | 0.82 | | | | | | | | | | | | | | | |
| | R | | | | | | | | | | | | | | | | | | | | |
| SFG medial | L | −12 | 52 | 42 | 4.06 | 0.49 | | | | | | | | | | | | | | | |
| | R | 10 | 60 | 24 | 4.33 | 1.91 | 4 | 58 | 36 | 4.14 | 1.78 | | | | | | | | | | |
| Caudate | L | −8 | 10 | 10 | 3.64 | 1.14 | −6 | 14 | −16 | 2.80 | 0.46 | | | | | | | | | | |
| | R | 8 | 10 | 16 | 3.33 | 1.73 | | | | | | | | | | | | | | | |
| Globus Pallidus | L | | | | | | | | | | | | | | | | | | | | |
| | R | 10 | 0 | −6 | 3.46 | 0.15 | 4 | −10 | 10 | 4.11 | 1.82 | | | | | | | | | | |
| Total volume | | | | | | 59.23 | | | | | 62.55 | | | | | 3.03 | | | | | 2.65 |
As indicated by areas of disjunction in Figure 1, activation in the IFG and STS was more extensive for the narrative sentences, particularly in the RH. This observation was supported by the statistical comparison between the two sentence types shown in Figure 2. Narrative sentences elicited significantly greater activation than non-narrative sentences in the middle portion of the RH STS and in a smaller region of the RH IFG pars triangularis. Several other areas showed the opposite pattern, with greater activation for non-narrative sentences. These included the RH globus pallidus, the head of the LH caudate nucleus, and the RH anterior STS. Details of the differences between conditions are provided in Table 1.
Figure 2.
Areas showing greater activation for narrative than non-narrative sentences. Z maps were thresholded at z > 1.96 (uncorrected) with a minimum cluster size of 0.3 mL, and were masked to include only voxels that were significantly activated in the comparison between that sentence type and its backward-layered control condition and that showed greater activation for ASL sentences than the low-level baseline.
Discussion
The present study was designed to determine whether the processing of affective and other meta-linguistic narrative information in sign language, like spoken languages, relies primarily on the right cerebral hemisphere. We contrasted brain activation for two types of ASL sentences. The two types contained very similar propositional, lexical-semantic, and syntactic content, except where changes were necessitated by the narrative devices used. However, one sentence type (narrative) additionally contained a cluster of narrative devices including affective prosody, affective facial expression, and role shifting involving body movement, that the other sentence type did not. Both sentence types activated a broadly similar bilateral network of brain regions, including classical language areas of the LH (IFG, STS, temporal-parietal junction) and their RH homologues. This pattern of activation is similar to that reported previously for sign languages, and provides further support for the argument that all natural human languages, spoken or signed, rely on a common core set of brain regions for their processing. Critically, our data extend this knowledge by showing that sentences containing narrative devices elicited greater activation in the middle portion of the RH STS and in the RH IFG pars triangularis relative to sentences that did not contain narrative information. This predicted pattern of RH activation for processing narrative information is similar to that found in neuroimaging and neuropsychological studies of spoken language users (Baum and Pell, 1999; Bloom et al., 1992; Gandour et al., 2003b; Gur et al., 1994; Kotz et al., 2003; Meyer et al., 2002; Mitchell et al., 2003; Narumoto et al., 2001; Schmitt et al., 1997), and suggests common neural circuits are involved in signed and spoken languages for the processing of narrative markers and structure.
Our findings indicate that the RH retains its main role for narrative and meta-linguistic processing in signers. Because the narrative sentences used in this experiment differed from the non-narrative sentences in a cluster of features, we cannot selectively associate areas that showed greater activation for narrative sentences with the processing of specific narrative devices. However, given the similarities in localization of activation for ASL narrative sentences here and narrative devices previously studied in spoken languages, we can make some inferences about the organization of narrative processing in ASL. The RH temporal lobe has been recognized as playing a crucial role in the interpretation of prosody in spoken languages - both affective prosody and the overall prosodic “envelope” of sentences (Baum and Pell, 1999; Gandour et al., 2003b; Kotz et al., 2003; Meyer et al., 2002; Schmitt et al., 1997). Both superior and ventral temporal areas also play key roles in the processing of facial expressions (Gur et al., 1994; McCullough et al., 2005; Mitchell et al., 2003; Narumoto et al., 2001; Schmitt et al., 1997). Greater RH STS activation has also been found in studies that employed audiovisual speech including natural prosody compared to studies using printed words or auditory sentences with minimal prosody (Capek et al., 2004; Wright et al., 2003). Our finding of greater RH STS activation for sentences containing narrative devices, including prosody, is thus consistent with the literature from spoken languages.
The RH IFG has been implicated in the processing of prosody and intonation in speech (Gandour et al., 2003a; Meyer et al., 2002) as well as in the processing of music, including rhythm, pitch, and harmonic sequences (Koelsch et al., 2002; Maess et al., 2001; Tillmann et al., 2006; Zatorre et al., 1992). Although basic intonational prosody was present in both the narrative and non-narrative sentences used in this experiment, this prosody was more emphatic, lively, and salient in the narrative condition, where RH IFG activation was greater. The present results suggest that, at least in deaf signers, the RH IFG may serve the same function but for sequences of movement rather than sound. In a previous study (Newman et al., 2002) we found similar RH IFG activation in both deaf and hearing native signers for ASL sentences that contained narrative content similar to that in the present study, thus suggesting that auditory deprivation may not be a requirement for the RH IFG to process visual prosodic information. However, this suggestion will require further testing: because that previous study did not include a non-narrative sentence condition, we cannot definitively attribute the RH IFG activation to prosodic processing.
It is important to emphasize that activations observed for narrative relative to non-narrative sentences in the present study are directly attributable to linguistic/meta-linguistic processing, in spite of the fact that these sentences also differed markedly in their basic visual characteristics such as the overall amount and speed of motion, body shifting, etc. This is because many of the visual features that differentiated the narrative from non-narrative condition, such as affective facial expressions and body shifting, were visible in the backward-layered control condition as well. Thus the activations observed for narrative sentences here reflect greater activation when these cues co-occurred with interpretable linguistic content. This is particularly pertinent with respect to the activation observed within the temporal lobes. Even with the same facial expressions (including facial affect) and biological motion visible in both the ASL and control conditions, greater activation was observed within the anterior fusiform gyrus and the posterior STS (STSp) for both ASL sentence conditions relative to their backward-layered controls. While these foci of activation were part of a much larger cluster of temporal lobe activation in each hemisphere, these regions are of particular interest as they have previously been implicated in face and biological motion processing.
The bilateral anterior fusiform gyrus activation observed in the present study was 2 – 4 cm rostral (MNI coordinates 41, −16, −30 in the RH and −42, −16, −28 in the LH) to the area often referred to as the “fusiform face area” or FFA (Haxby et al., 2000). In a previous study of facial expression accompanying ASL, McCullough et al. (2005) reported FFA activation in response to both emotional and linguistic facial expressions, but the activation did not extend into the anterior fusiform. However, our study differed from McCullough et al.’s in a number of ways: in our study, participants viewed whole sentences, and their task directed attention to the semantic content of the sentences. In McCullough et al., the stimuli were single signs, and subjects’ task was to decide if each facial expression matched the preceding one. Retrospectively, we examined activation in each condition separately (relative to the resting baseline) and observed FFA activation for both ASL and backward-layered conditions, suggesting that the presence of faces in the stimuli activated the FFA to an equivalent degree in the presence and absence of understandable linguistic content. Thus the pattern of activation is consistent with McCullough et al.’s report of FFA activation in response to both emotional and linguistic facial expressions in signers. But, in addition, we found a region of the anterior fusiform gyrus to be more activated by narrative than non-narrative sentences. This part of the anterior fusiform has been previously shown to be activated in hearing subjects listening to narrative speech (Awad et al., 2007), suggesting again a possible overlap between the neural systems for narrative devices in sign and speech.
Activation was observed in the STSp for both ASL sentence conditions relative to backward-layered controls, but significantly more so for narrative than non-narrative sentences in the RH. Previous studies have found activation within this region associated with the processing of biological motion (e.g., Bonda et al., 1996; Grossman et al., 2000), facial emotion and other “changeable” aspects of faces (Haxby et al., 2000; McCullough et al., 2005), and interpreting others’ mental states and intentions (Saxe and Wexler, 2005; Saxe et al., 2004)4. The RH STSp activation in the present study is consistent with all of these previous findings, and given that the narrative stimuli contained more marked facial expressions (both affective and narrative), greater biological motion (e.g., body movements associated with ASL role-shifting), and more information about the mental states of the persons involved in some of the sentences (though not all sentences described mental states), it is difficult to make a specific functional interpretation of this activation. Indeed, all of these functions are typically used in the interpretation of narrative discourse, and it has been proposed that the STSp supports multiple cognitive functions depending on task demands and co-activation of other brain regions (Hein and Knight, 2008).
The role of attention cannot be overlooked either: the greater activation of anterior fusiform and posterior STS regions here for ASL sentences than for control stimuli containing equivalent, but non-linguistic, biological motion and facial expressions may indicate that in native signers these areas play some role in sign language processing. On the other hand, in the control condition subjects’ attention was directed to the configurations of the hands, and not to the overall patterns of motion or to the face. Thus, while these areas may become sensitized to facial expression and biological motion with communicative relevance, we cannot rule out a contribution from these differences in attention. This will be an important avenue for future study.
One final point of note is that several brain regions were more active for non-narrative than for narrative sentences, including the globus pallidus, caudate nucleus, and a part of the RH STS anterior to that showing greater recruitment for the narrative sentences. While we did not predict this pattern of results, we hypothesize that it may reflect an increased dependence on non-narrative cues to derive sentence meaning. Narrative devices such as prosody and role-shifting convey information that facilitates derivation of the meaning of the sentences. In the absence of such information, other cues, such as inflectional morphology, may become more important and salient, altering the processing strategies and associated brain activation for sentence processing. This notion of “cue validity” has been proposed by MacWhinney and Bates (1989) in other contexts. The anterior STS and basal ganglia have both been implicated in grammatical processing (Friederici and Kotz, 2003; Humphries et al., 2005; Meyer et al., 2000; Ullman, 2001), and so their greater activation here is consistent with the hypothesis of greater reliance on grammatical information in the absence of narrative cues.
Conclusions
American Sign Language is a natural human language that shares all of the core properties of spoken human languages, but differs in the modality through which it is transmitted. The present results provide additional support for the claim that all natural human languages rely on a common set of brain regions within the left hemisphere, including inferior frontal, lateral temporal, and inferior parietal areas. The results further extend our knowledge by showing that linguistic functions typically associated with the right hemisphere in spoken languages, including prosody, facial expression, and other narrative devices, also rely primarily on the RH STS and IFG in signers. However, sign language additionally recruits areas involved in face and biological motion perception, suggesting that these regions may assume a specific role in linguistic processing in native signers.
Supplementary Material
Video 1. Example of one sentence used in the experiment. The non-narrative version is shown first, followed by the narrative version. Subsequent to these are the backward-layered control versions of each sentence type. These contain the sentences shown in the first part of the video, each overlaid with two other videos and played backward. The gloss of the ASL sentence is: CLOTHES SEW TEACHER FINISH INFORM+AGR STUDENT SOME “Bs”. The English translation is “The sewing teacher informed the students that some of them got Bs.” For details of the differences between narrative and non-narrative versions please see the text.
Video 2. A second example sentence from the experiment. Details are as for Video 1. The gloss of the ASL sentence is: SUPPOSE/HEY SLEEPY, NS:D SUGGEST GET-UP WALK-around FEEL BETTER. The English translation is “Suppose you’re sleepy, D. says get up and walk around, you’ll feel better.” For details of the differences between narrative and non-narrative versions please see the text.
Acknowledgments
We are grateful to the following people for their help in this project: Dara Baril, Patty Clark, Nina Fernandez, Matt Hall, Angela Hauser, Vanessa Lim, Don Metlay, Emily Nichols, Aparna Sapre, Jennifer Vannest, and Hazlin Zaini. This study was supported by a grant from the James S. McDonnell Foundation to DB, EN, and TS, and by NIH grants DC00167 (EN & TS) and DC04418 (DB). AJN was supported by a postdoctoral fellowship from the Canadian Institutes of Health Research and is supported by the Canada Research Chairs program and the Natural Sciences and Engineering Research Council.
Footnotes
1. The non-narrative sentences have the standard, somewhat flat intonation produced in psycholinguistic experiments for both spoken and signed languages, but contain appropriate ASL structure and are fully grammatical. In contrast, the narrative sentences contain the same propositional information but are quite lively and more typical of casual conversation or story-telling.
2. Details of this other sentence condition are the subject of a separate report (Newman et al., submitted). The “non-narrative” sentences used here were those containing inflectional morphology discussed in Newman et al. (submitted).
3. To ensure that there was no systematic difference in the fMRI data from participants for whom behavioral data were recorded vs. those for whom they were not recorded, we conducted a post-hoc analysis of the fMRI data comparing these two subgroups of 8 and 6 participants, respectively. Using the same thresholding criteria described below, there were no significant differences between the two groups, supporting the presentation here of analyses across all participants.
4. We are grateful to an anonymous reviewer for suggesting this latter interpretation.
References
- Atkinson J, Campbell R, Marshall J, Thacker A, Woll B. Understanding ‘not’: neuropsychological dissociations between hand and head markers of negation in BSL. Neuropsychologia. 2004;42:214–229. doi: 10.1016/s0028-3932(03)00186-6. [DOI] [PubMed] [Google Scholar]
- Awad M, Warren JE, Scott SK, Turkheimer FE, Wise RJS. A Common System for the Comprehension and Production of Narrative Speech. J Neurosci. 2007;27:11455–11464. doi: 10.1523/JNEUROSCI.5257-06.2007. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Baum SR, Pell MD. The neural bases of prosody: Insights from lesion studies and neuroimaging. Aphasiology. 1999;13:581–608. [Google Scholar]
- Bavelier D, Brozinsky C, Tomann A, Mitchell T, Neville H, Liu G. Impact of early deafness and early exposure to sign language on the cerebral organization for motion processing. J Neurosci. 2001;21:8931–8942. doi: 10.1523/JNEUROSCI.21-22-08931.2001. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bavelier D, Corina DP, Jezzard P, Clark V, Karni A, Lalwani A, Rauschecker JP, Braun A, Turner R, Neville HJ. Hemispheric specialization for English and ASL: left invariance-right variability. Neuroreport. 1998;9:1537–1542. doi: 10.1097/00001756-199805110-00054. [DOI] [PubMed] [Google Scholar]
- Bavelier D, Newman AJ, Mukherjee M, Hauser P, Kemeny S, Braun A, Boutla M. Encoding, Rehearsal, and Recall in Signers and Speakers: Shared Network but Differential Engagement. Cerebral Cortex. 2008;18:2263–2274. doi: 10.1093/cercor/bhm248. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bavelier D, Tomann A, Hutton C, Mitchell T, Corina D, Liu G, Neville H. Visual attention to the periphery is enhanced in congenitally deaf individuals. J Neurosci. 2000;20:RC93. doi: 10.1523/JNEUROSCI.20-17-j0001.2000. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Beckmann CF, Jenkinson M, Smith SM. General multilevel linear modeling for group analysis in FMRI. Neuroimage. 2003;20:1052–1063. doi: 10.1016/S1053-8119(03)00435-X. [DOI] [PubMed] [Google Scholar]
- Beeman MJ, Chiarello C. Right hemisphere language comprehension: Perspectives from cognitive neuroscience. Lawrence Erlbaum; 1997. [Google Scholar]
- Bloom RL, Borod JC, Obler LK, Gerstman LJ. Impact of emotional content on discourse production in patients with unilateral brain damage. Brain Lang. 1992;42:153–164. doi: 10.1016/0093-934x(92)90122-u. [DOI] [PubMed] [Google Scholar]
- Bonda E, Petrides M, Ostry D, Evans A. Specific involvement of human parietal systems and the amygdala in the perception of biological motion. J Neurosci. 1996;16:3737–3744. doi: 10.1523/JNEUROSCI.16-11-03737.1996. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Braun AR, Guillemin A, Hosey L, Varga M. The neural organization of discourse: an H2 15O-PET study of narrative production in English and American sign language. Brain. 2001;124:2028–2044. doi: 10.1093/brain/124.10.2028. [DOI] [PubMed] [Google Scholar]
- Brownell HH, Potter HH, Bihrle AM, Gardner H. Inference deficits in right brain-damaged patients. Brain and Language. 1986;27:310–321. doi: 10.1016/0093-934x(86)90022-2. [DOI] [PubMed] [Google Scholar]
- Capek CM, Bavelier D, Corina DP, Newman AJ, Jezzard P, Neville HJ. The cortical organization of audio-visual sentence comprehension: an fMRI study at 4 Tesla. Brain Res Cogn Brain Res. 2004;20:111–119. doi: 10.1016/j.cogbrainres.2003.10.014. [DOI] [PubMed] [Google Scholar]
- Caplan R, Dapretto M. Making sense during conversation: an fMRI study. Neuroreport. 2001;12:3625–3632. doi: 10.1097/00001756-200111160-00050. [DOI] [PubMed] [Google Scholar]
- Corina DP. Aphasia in users of signed languages. In: Coppens Patrick, Lebrun Yvan, et al., editors. Aphasia in atypical populations. Lawrence Erlbaum Associates, Inc, Publishers; Mahwah, NJ, US: 1998. pp. 261–309. [Google Scholar]
- Corina DP, Bellugi U, Reilly J. Neuropsychological studies of linguistic and affective facial expressions in deaf signers. Language & Speech. 1999;42:307–331. doi: 10.1177/00238309990420020801. [DOI] [PubMed] [Google Scholar]
- Corina DP, San Jose-Robertson L, Guillemin A, High J, Braun AR. Language lateralization in a bimanual language. J Cogn Neurosci. 2003;15:718–730. doi: 10.1162/089892903322307438. [DOI] [PubMed] [Google Scholar]
- Emmorey K, Corina DP, Bellugi U. Differential processing of topographic and referential functions of space. In: Emmorey K, Reilly JS, editors. Language, gesture, and space. Lawrence Erlbaum Associates, Inc; Hillsdale, NJ, US: 1995. pp. 43–62. [Google Scholar]
- Emmorey K, Grabowski T, McCullough S, Damasio H, Ponto LL, Hichwa RD, Bellugi U. Neural systems underlying lexical retrieval for sign language. Neuropsychologia. 2003;41:85–95. doi: 10.1016/s0028-3932(02)00089-1. [DOI] [PubMed] [Google Scholar]
- Fine I, Finney EM, Boynton GM, Dobkins KR. Comparing the effects of auditory deprivation and sign language within the auditory and visual cortex. J Cogn Neurosci. 2005;17:1621–1637. doi: 10.1162/089892905774597173. [DOI] [PubMed] [Google Scholar]
- Finney EM, Fine I, Dobkins KR. Visual stimuli activate auditory cortex in the deaf. Nature Neuroscience. 2001;4:1171–1173. doi: 10.1038/nn763. [DOI] [PubMed] [Google Scholar]
- Friederici AD. Aphasics’ perception of words in sentential context: Some real-time processing evidence. Neuropsychologia. 1983;21:351–358. doi: 10.1016/0028-3932(83)90021-0. [DOI] [PubMed] [Google Scholar]
- Friederici AD. Levels of processing and vocabulary types: Evidence from online comprehension in normals and agrammatics. Cognition. 1985;19:133–166. doi: 10.1016/0010-0277(85)90016-2. [DOI] [PubMed] [Google Scholar]
- Friederici AD, Kotz SA. The brain basis of syntactic processes: functional imaging and lesion studies. Neuroimage. 2003;20(Suppl 1):S8–17. doi: 10.1016/j.neuroimage.2003.09.003. [DOI] [PubMed] [Google Scholar]
- Gandour J, Dzemidzic M, Wong D, Lowe M, Tong Y, Hsieh L, Satthamnuwong N, Lurito J. Temporal integration of speech prosody is shaped by language experience: an fMRI study. Brain Lang. 2003a;84:318–336. doi: 10.1016/s0093-934x(02)00505-9. [DOI] [PubMed] [Google Scholar]
- Gandour J, Wong D, Dzemidzic M, Lowe M, Tong Y, Li X. A cross-linguistic fMRI study of perception of intonation and emotion in Chinese. Hum Brain Mapp. 2003b;18:149–157. doi: 10.1002/hbm.10088. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Gorelick PB, Ross ED. The aprosodias: Further functional-anatomical evidence for the organisation of affective language in the right hemisphere. Journal of Neurology, Neurosurgery & Psychiatry. 1987;50:553–560. doi: 10.1136/jnnp.50.5.553. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Grossman E, Donnelly M, Price R, Pickens D, Morgan V, Neighbor G, Blake R. Brain areas involved in perception of biological motion. Journal of Cognitive Neuroscience. 2000;12:711–720. doi: 10.1162/089892900562417. [DOI] [PubMed] [Google Scholar]
- Gur RC, Skolnick BE, Gur RE. Effects of emotional discrimination tasks on cerebral blood flow: Regional activation and its relation to performance. Brain & Cognition. 1994;25:271–286. doi: 10.1006/brcg.1994.1036.
- Haxby JV, Hoffman EA, Gobbini MI. The distributed human neural system for face perception. Trends in Cognitive Sciences. 2000;4:223–233. doi: 10.1016/s1364-6613(00)01482-0.
- Hein G, Knight RT. Superior Temporal Sulcus--It’s My Area: Or Is It? Journal of Cognitive Neuroscience. 2008;20:2125–2136. doi: 10.1162/jocn.2008.20148.
- Hickok G, Bellugi U, Klima ES. The neurobiology of sign language and its implications for the neural basis of language. Nature. 1996;381:699–702. doi: 10.1038/381699a0.
- Hickok G, Wilson M, Clark K, Klima ES, Kritchevsky M, Bellugi U. Discourse deficits following right hemisphere damage in deaf signers. Brain & Language. 1999;66:233–248. doi: 10.1006/brln.1998.1995.
- Humphries C, Love T, Swinney D, Hickok G. Response of anterior temporal cortex to syntactic and prosodic manipulations during sentence processing. Human Brain Mapping. 2005;26:128–138. doi: 10.1002/hbm.20148.
- Jenkinson M, Bannister P, Brady M, Smith S. Improved optimization for the robust and accurate linear registration and motion correction of brain images. Neuroimage. 2002;17:825–841. doi: 10.1016/s1053-8119(02)91132-8.
- Kanwisher N, Yovel G. The fusiform face area: a cortical region specialized for the perception of faces. Philos Trans R Soc Lond B Biol Sci. 2006;361:2109–2128. doi: 10.1098/rstb.2006.1934.
- Kassubek J, Hickok G, Erhard P. Involvement of classical anterior and posterior language areas in sign language production, as investigated by 4T functional magnetic resonance imaging. Neuroscience Letters. 2004;364:168–172. doi: 10.1016/j.neulet.2004.04.088.
- Koelsch S, Gunter TC, v Cramon DY, Zysset S, Lohmann G, Friederici AD. Bach speaks: a cortical “language-network” serves the processing of music. Neuroimage. 2002;17:956–966.
- Kotz SA, Meyer M, Alter K, Besson M, von Cramon DY, Friederici AD. On the lateralization of emotional prosody: an event-related functional MR investigation. Brain Lang. 2003;86:366–376. doi: 10.1016/s0093-934x(02)00532-1.
- Lambertz N, Gizewski ER, de Greiff A, Forsting M. Cross-modal plasticity in deaf subjects dependent on the extent of hearing loss. Cognitive Brain Research. 2005;25:884–890. doi: 10.1016/j.cogbrainres.2005.09.010.
- MacSweeney M, Woll B, Campbell R, McGuire PK, David AS, Williams SC, Suckling J, Calvert GA, Brammer MJ. Neural systems underlying British Sign Language and audio-visual English processing in native users. Brain. 2002;125:1583–1593. doi: 10.1093/brain/awf153.
- MacWhinney B, Bates E. The crosslinguistic study of sentence processing. Cambridge University Press; Cambridge; New York: 1989.
- Maess B, Koelsch S, Gunter TC, Friederici AD. Musical syntax is processed in Broca’s area: An MEG study. Nature Neuroscience. 2001;4:540–545. doi: 10.1038/87502.
- McCullough S, Emmorey K, Sereno M. Neural organization for recognition of grammatical and emotional facial expressions in deaf ASL signers and hearing nonsigners. Brain Res Cogn Brain Res. 2005;22:193–203. doi: 10.1016/j.cogbrainres.2004.08.012.
- Meyer M, Alter K, Friederici AD, Lohmann G, von Cramon DY. FMRI reveals brain regions mediating slow prosodic modulations in spoken sentences. Hum Brain Mapp. 2002;17:73–88. doi: 10.1002/hbm.10042.
- Meyer M, Friederici AD, von Cramon DY. Neurocognition of auditory sentence comprehension: event related fMRI reveals sensitivity to syntactic violations and task demands. Cognitive Brain Research. 2000;9:19–33. doi: 10.1016/s0926-6410(99)00039-7.
- Meyer M, Toepel U, Keller J, Friederici A. The neural substrates of German Sign Language (DGS). Cognitive Neuroscience Society Abstracts. 2004:12.
- Mitchell RL, Elliott R, Barry M, Cruttenden A, Woodruff PW. The neural response to emotional prosody, as revealed by functional magnetic resonance imaging. Neuropsychologia. 2003;41:1410–1421. doi: 10.1016/s0028-3932(03)00017-4.
- Narumoto J, Okada T, Sadato N, Fukui K, Yonekura Y. Attention to emotion modulates fMRI activity in human right superior temporal sulcus. Cognitive Brain Research. 2001;12:225–231. doi: 10.1016/s0926-6410(01)00053-2.
- Neville HJ, Bavelier D, Corina D, Rauschecker J, Karni A, Lalwani A, Braun A, Clark V, Jezzard P, Turner R. Cerebral organization for language in deaf and hearing subjects: biological constraints and effects of experience. Proc Natl Acad Sci U S A. 1998;95:922–929. doi: 10.1073/pnas.95.3.922.
- Neville HJ, Lawson D. Attention to central and peripheral visual space in a movement detection task: III. Separate effects of auditory deprivation and acquisition of a visual language. Brain Research. 1987;405:284–294. doi: 10.1016/0006-8993(87)90297-6.
- Newman AJ, Bavelier D, Corina D, Jezzard P, Neville HJ. A critical period for right hemisphere recruitment in American Sign Language processing. Nature Neuroscience. 2002;5:76–80. doi: 10.1038/nn775.
- Oldfield RC. The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia. 1971;9:97–113. doi: 10.1016/0028-3932(71)90067-4.
- Petitto LA, Zatorre RJ, Gauna K, Nikelski EJ, Dostie D, Evans AC. Speech-like cerebral activity in profoundly deaf people processing signed languages: implications for the neural basis of human language. Proceedings of the National Academy of Sciences of the USA. 2000;97:13961–13966. doi: 10.1073/pnas.97.25.13961.
- Poizner H, Bellugi U, Klima ES. What the hands reveal about the brain. MIT Press; Cambridge, MA: 1987.
- Rayner K. Eye movements in reading and information processing: 20 years of research. Psychol Bull. 1998;124:372–422. doi: 10.1037/0033-2909.124.3.372.
- Rehak A, Kaplan JA, Gardner H. Sensitivity to conversational deviance in right-hemisphere-damaged patients. Brain and Language. 1992;42:203–217. doi: 10.1016/0093-934x(92)90125-x.
- Ross ED. The aprosodias: Functional-anatomic organization of affective components of language in the right hemisphere. Archives of Neurology. 1981;38:561–569. doi: 10.1001/archneur.1981.00510090055006.
- Sakai KL, Tatsuno Y, Suzuki K, Kimura H, Ichida Y. Sign and speech: amodal commonality in left hemisphere dominance for comprehension of sentences. Brain. 2005;128:1407–1417. doi: 10.1093/brain/awh465.
- San Jose-Robertson L, Corina DP, Ackerman D, Guillemin A, Braun AR. Neural systems for sign language production: mechanisms supporting lexical selection, phonological encoding, and articulation. Hum Brain Mapp. 2004;23:156–167. doi: 10.1002/hbm.20054.
- Saxe R, Wexler A. Making sense of another mind: the role of the right temporo-parietal junction. Neuropsychologia. 2005;43:1391–1399. doi: 10.1016/j.neuropsychologia.2005.02.013.
- Saxe R, Xiao DK, Kovacs G, Perrett DI, Kanwisher N. A region of right posterior superior temporal sulcus responds to observed intentional actions. Neuropsychologia. 2004;42:1435–1446. doi: 10.1016/j.neuropsychologia.2004.04.015.
- Schmitt JJ, Hartje W, Willmes K. Hemispheric asymmetry in the recognition of emotional attitude conveyed by facial expression, prosody and propositional speech. Cortex. 1997;33:65–81. doi: 10.1016/s0010-9452(97)80005-6.
- Smith SM. Fast robust automated brain extraction. Hum Brain Mapp. 2002;17:143–155. doi: 10.1002/hbm.10062.
- St George M, Kutas M, Martinez A, Sereno MI. Semantic integration in reading: engagement of the right hemisphere during discourse processing. Brain. 1999;122:1317–1325. doi: 10.1093/brain/122.7.1317.
- Tillmann B, Koelsch S, Escoffier N, Bigand E, Lalitte P, Friederici AD, von Cramon DY. Cognitive priming in sung and instrumental music: Activation of inferior frontal cortex. Neuroimage. 2006;31:1771–1782. doi: 10.1016/j.neuroimage.2006.02.028.
- Tzourio-Mazoyer N, Landeau B, Papathanassiou D, Crivello F, Etard O, Delcroix N, Mazoyer B, Joliot M. Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. Neuroimage. 2002;15:273–289. doi: 10.1006/nimg.2001.0978.
- Ullman MT. A neurocognitive perspective on language: the declarative/procedural model. Nature Reviews Neuroscience. 2001;2:717–726. doi: 10.1038/35094573.
- Vogel JJ, Bowers CA, Vogel DS. Cerebral lateralization of spatial abilities: a meta-analysis. Brain Cogn. 2003;52:197–204. doi: 10.1016/s0278-2626(03)00056-3.
- Woolrich MW. Robust group analysis using outlier inference. Neuroimage. 2008;41:286–301. doi: 10.1016/j.neuroimage.2008.02.042.
- Woolrich MW, Behrens TE, Beckmann CF, Jenkinson M, Smith SM. Multilevel linear modelling for FMRI group analysis using Bayesian inference. Neuroimage. 2004;21:1732–1747. doi: 10.1016/j.neuroimage.2003.12.023.
- Woolrich MW, Ripley BD, Brady M, Smith SM. Temporal autocorrelation in univariate linear modeling of FMRI data. Neuroimage. 2001;14:1370–1386. doi: 10.1006/nimg.2001.0931.
- Worsley K. Statistical analysis of activation images. In: Jezzard P, Matthews PM, Smith SM, editors. Functional MRI: An introduction to Methods. Oxford University Press; New York: 2001.
- Wright TM, Pelphrey KA, Allison T, McKeown MJ, McCarthy G. Polysensory interactions along lateral temporal regions evoked by audiovisual speech. Cerebral Cortex. 2003;13:1034–1043. doi: 10.1093/cercor/13.10.1034.
- Wymer JH, Lindman LS, Booksh RL. A neuropsychological perspective of aprosody: features, function, assessment, and treatment. Appl Neuropsychol. 2002;9:37–47. doi: 10.1207/S15324826AN0901_5.
- Zatorre RJ, Evans AC, Meyer E, Gjedde A. Lateralization of phonetic and pitch discrimination in speech processing. Science. 1992;256:846–849. doi: 10.1126/science.1589767.
Supplementary Materials
Video 1. An example of one sentence used in the experiment. The non-narrative version is shown first, followed by the narrative version. These are followed by the backward-layered control versions of each sentence type, in which each sentence from the first part of the video is overlaid with two other videos and played backward. The gloss of the ASL sentence is: CLOTHES SEW TEACHER FINISH INFORM+AGR STUDENT SOME “Bs”. The English translation is: “The sewing teacher informed the students that some of them got Bs.” For details of the differences between the narrative and non-narrative versions, please see the text.
Video 2. A second example sentence from the experiment. Details are as for Video 1. The gloss of the ASL sentence is: SUPPOSE/HEY SLEEPY, NS:D SUGGEST GET-UP WALK-around FEEL BETTER. The English translation is: “Suppose you’re sleepy, D. says get up and walk around, you’ll feel better.” For details of the differences between the narrative and non-narrative versions, please see the text.


