Abstract
A fundamental advance in our understanding of human language would come from a detailed account of how non-linguistic and linguistic manual actions are differentiated in real time by language users. To explore this issue, we targeted the N400, an ERP component known to be sensitive to semantic context. Deaf signers saw 120 American Sign Language sentences, each consisting of a “frame” (a sentence without the last word; e.g. BOY SLEEP IN HIS) followed by a “last item” belonging to one of four categories: a high-cloze-probability sign (a “semantically reasonable” completion to the sentence; e.g. BED), a low-cloze-probability sign (a real sign that is nonetheless a “semantically odd” completion to the sentence; e.g. LEMON), a pseudo-sign (phonologically legal but non-lexical form), or a non-linguistic grooming gesture (e.g. the performer scratching her face). We found significant N400-like responses in the incongruent and pseudo-sign contexts, while the gestures elicited a large positivity.
Keywords: sign language, ASL, ERP, N400, deaf, pseudo-word, grooming gesture
1. Introduction
While it is now widely accepted that signed languages used in deaf communities around the world represent full-fledged instantiations of human languages—languages which are expressed in the visual-manual modality rather than the aural-oral modality—the question of how a sign is recognized and integrated into a sentential context in real time has received far less attention (see Corina & Knapp, 2006; Emmorey, 2002; for some discussion). Sign language recognition may be more complicated than spoken language recognition by virtue of the fact that the primary articulators, the hands and arms, are also used in a wide range of other common everyday behaviors, including non-linguistic actions such as reaching and grasping, waving, and scratching oneself, as well as gesticulations that accompany speech (i.e. co-speech gestures) or serve non-sign-language deictic functions, such as pointing.
The formal relationship between signed languages and human gestural actions is of considerable interest to a range of disciplines. Linguists, psychologists and cognitive scientists have proposed a critical role for manual gesture in the development and evolution of human languages (Wilcox, 2004; Tomasello, 2005; Arbib, 2005, 2008; Gentilucci & Corballis, 2006; Rizzolatti & Arbib, 1998). Recently, linguists have documented compelling evidence that nascent sign languages develop from idiosyncratic gestural and pantomimic systems used by isolated communities, which in some cases may be limited to individual families who have a need to communicate with a deaf child (Kegl, Senghas & Coppola, 1999; Morford & Kegl, 2000; Senghas, 2005; Meir, Sandler, Padden & Aronoff, 2010; Frishberg, 1987; Goldin-Meadow, 2003). Even within the mature sign languages of Deaf communities, linguistic accounts of sign language structure have argued that lexical and discourse components of American Sign Language (ASL) and other signed languages may be best understood as gesturally based (Liddell, 2003). Thus diachronic and synchronic evidence from language research supports the contention that signed languages might make use of perceptual systems similar to those through which humans understand or parse human actions and gestures more generally (Corballis, 2009). In contrast, given its linguistic status, sign language perception may require the attunement of specialized systems for recognizing sign forms.
A comprehensive theory of sign language recognition will be enhanced by an account of when and how the processing of sign forms diverges from the processing of human actions in general. Recent behavioral and neuro-imaging studies have reported differences in deaf subjects’ responses to single signs compared to non-linguistic gestures (Corina, Grosvald & Lachaud, in press; Corina et al., 2007; MacSweeney et al., 2004; Emmorey et al., 2010), but no studies to our knowledge have examined the recognition of signs and gestures under sentence processing constraints. Consider, for example, a signer who, in mid-sentence, fulfills the urge to scratch his face, or perhaps swats away a flying insect. What is the fate of this non-linguistic articulation? Does the sign perceiver attempt to incorporate these manual behaviors into accruing sentential representations, or are these actions easily tagged as non-linguistic and thus rejected by the parser? The goal of the present paper was to use real-time electrophysiological measures to assess empirically the time course of sentence processing in cases where subjects encountered non-linguistic manual forms (here, “self-grooming” behaviors, e.g. scratching the face, rubbing one’s eye, adjusting the sleeves of a shirt, etc.). We sought to compare the processing of these non-linguistic gestural forms within a sentential context to cases in which deaf signers encountered violations of semantic expectancy, which have been observed to elicit a well-defined electrophysiological component, the N400.
The N400 component (Kutas & Hillyard, 1980; Holcomb & Neville, 1991) has been frequently investigated in previous ERP research on written, spoken and signed language (e.g. Kutas, Neville, & Holcomb, 1987; Capek et al., 2009). The N400 is a broad negative deflection generally seen at central and parietal scalp sites that peaks about 400 ms after the visual or auditory presentation of a word. Although all content words elicit an N400 component, the ERP response is larger for words that are semantically anomalous or less expected (Hagoort & Brown, 1994; Kutas & Hillyard, 1984); thus the N400 is often interpreted as an index of ease or difficulty in semantic conceptual integration (Brown & Hagoort, 1993; Hagoort & Van Berkum, 2007). For example, for listeners encountering the two sentences “I like my coffee with milk and sugar” and “I like my coffee with milk and mud,” the N400 response to the last word in the second item is expected to be larger.
An N400 or N400-like component can also be found in response to orthographically/phonologically legal but non-occurring “pseudo-words” (e.g. “blork”), and it has sometimes been reported that pseudo-words elicit a stronger N400 response than semantically incongruent real words (Bentin, 1987; Bentin, McCarthy & Wood, 1985; Hagoort & Kutas, 1995), consistent with the idea that the magnitude of N400 response is related to the difficulty of the ongoing process of semantic-contextual integration. However, orthographically illegal “non-words” (e.g. “rbsnk”) do not generally elicit an N400, and a positive component is sometimes seen instead (Hagoort & Kutas, 1995; Ziegler, Besson, Jacobs & Carr, 1997). This may reflect the operation of some kind of filtering mechanism during online processing, through which language users are able to quickly reject forms that lie beyond a certain point of acceptability, or plausibility, during the ongoing processing of the incoming language stream.1
The N400 (or N400-like responses) can also be observed in numerous contexts involving non-linguistic but meaningful stimuli, such as pictures (Ganis & Kutas, 2003; Ganis, Kutas & Sereno, 1996; Nigam, Hoffman, & Simons, 1992; Pratarelli, 1994), faces (Barrett & Rugg, 1989; Bobes, Valdes-Sosa, & Olivares, 1994), environmental noises (Chao, Nielsen-Bohlman, & Knight, 1995; Van Petten & Rheinfelder, 1995), movie clips (Sitnikova et al., 2008; Sitnikova, Kuperberg, & Holcomb, 2003) and co-speech gestures (Kelly, Kravitz & Hopkins, 2004; Wu & Coulson, 2005).
Linguistically anomalous stimuli are not always associated with an N400 response. For example, the left anterior negativity (LAN; Neville et al., 1991; Friederici, 2002) and P600 (Osterhout & Holcomb, 1992) are well-known ERP components that have been found in syntactic violation contexts in spoken and written language, and more recent work has shown that these components can be elicited in the manual-visual modality as well. For example, in a recent study Capek et al. (2009) compared ERP responses to semantically and syntactically well-formed and ill-formed sentences. While semantic violations elicited an N400 that was largest over central and posterior sites, syntactic violations elicited an anterior negativity followed by a widely distributed P600. These findings are consistent with the idea that within written, spoken and signed languages, semantic and syntactic processes are mediated by non-identical brain systems (Capek et al., 2009).
The present study makes use of dynamic video stimuli showing ASL sentences completed by four classes of ending item—semantically congruent signs, semantically incongruent signs, phonologically legal but non-occurring pseudo-signs, and non-linguistic grooming gestures. Based upon previous studies, we expected a gradation of N400-like responses across conditions, with N400 effects of smaller magnitude for semantically incongruent endings and of larger magnitude (i.e. more negative) for phonologically legal pseudo-signs.
The ERP response for the non-linguistic gesture condition is a priori more difficult to predict. Previous neuro-imaging studies of deaf signers have reported differences in patterns of activation associated with the perception of signs compared to non-linguistic gestures (Corina et al., 2007; Emmorey et al., 2010; MacSweeney et al., 2004), but the methodologies used in those studies lacked the temporal resolution to determine at what stage of processing these differences may occur. While N400-like responses have been elicited by co-speech gestural mismatches (Kelly, Kravitz & Hopkins, 2004; Wu & Coulson, 2005), in our study gestures occur in place of semantically appropriate sentence-ending items, rather than as a possible accompaniment. It should also be borne in mind that the relationship of signs and grooming gestures is probably not quite akin to that between standard lexical items in spoken language and the orthographically/phonotactically illegal non-words used in earlier ERP studies. Unlike grooming gestures, which are part of everyday life, illegal non-words like “dkfpst” are probably alien to most people’s routine experience. A better spoken-language analogue of our grooming action condition might be something like “I like my coffee with milk and [clearing of throat],” though we know of no spoken-language studies which have incorporated such a condition. The non-linguistic grooming gestures used in the present study may be another example of forms that language users (in this case, signers) are able to quickly reject as non-linguistic during language processing. If this is the case, then one might also expect that such forms will not elicit an N400 but rather a positive-going component (cf. Hagoort & Kutas, 1995).
In summary, to the extent that semantic processing at the sentence level is similar for signed and spoken language, despite the obvious difference in modality, the ERP responses associated with our four sentence ending conditions should be predictable. First, the incongruent signs should elicit a negative-going component relative to the baseline (congruent sign) condition, consistent with the classic N400 response seen for English and other spoken languages, as well as some previous ERP studies of ASL (Kutas, Neville & Holcomb, 1987; Neville et al., 1997). Second, the pseudo-signs should also elicit a negative-going wave, and this response can be expected to be of larger magnitude (i.e. be more negative) than that seen for the incongruent signs. Third, while the likely response to the grooming gesture condition is more difficult to predict, we may expect to see a positive-going component relative to the baseline.
2. Methodology
2.1. Participants
The 16 participants (12 female and 4 male; age range = [19, 45], mean = 25.4, SD = 8.3) were deaf users of ASL; all were students and staff at Gallaudet University in Washington DC and received a small payment for participating. Three were left-handed. Eleven were native signers (i.e., born to signing parents); the remaining five non-native signers’ mean self-reported age of ASL acquisition was 9.0 years (SD = 6.3, range = [2, 16]). All subjects were uninformed as to the purpose of the study and gave informed consent in accordance with established Institutional Review Board procedures at Gallaudet University.2
2.2. Stimuli and procedure
During the experiment, the subject was seated in a comfortable chair facing the computer screen approximately 85 cm away, at which distance the 4-inch-wide video stimuli subtended an angle of about 7 degrees. The stimuli were delivered using a program created by the first author using Presentation software (Neurobehavioral Systems).
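For reference, the reported visual angle follows directly from the viewing geometry (a stimulus width of 4 inches, i.e. about 10.2 cm, at a viewing distance of 85 cm) via the standard formula:

$$
\theta = 2\arctan\left(\frac{w}{2d}\right) = 2\arctan\left(\frac{10.2\ \text{cm}}{2 \times 85\ \text{cm}}\right) \approx 6.9^\circ,
$$

in agreement with the value of about 7 degrees given above.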
For each trial, the subject viewed an ASL sentence “frame” consisting of an entire sentence minus a last item (e.g. BOY SLEEP IN HIS; see Figure 1 for an illustration), followed by an “ending item” completing the sentence. The ending item could be one of four types (shown, respectively, to the upper left, upper right, lower left and lower right of the question mark in Figure 1): a semantically congruent sign (e.g. BED for the sentence frame just given), a semantically incongruent sign (e.g. LEMON), a phonotactically legal but non-occurring “pseudo-sign” (e.g. BARK.A, this notation indicating that this pseudo-sign was formed by articulating the real sign BARK with an A handshape3), or a grooming gesture such as eye rubbing or head scratching. All stimulus items (sentence frames and ending items) were performed by a female native signer of ASL, who also verified that each sentence frame plus last sign item was grammatically acceptable in ASL. The ending items for both sign conditions (semantically congruent and incongruent) were all nouns.
Figure 1.
Still shots taken from one of the sentence frame stimuli, along with its four possible endings (see text). Note that the actual stimuli were dynamic, not static.
As is the case with spoken languages, signed languages, including ASL, have regional variants that could potentially affect comprehension. In the present situation, this would be relevant if subjects encountering extant but unfamiliar sign forms as ending items produced an ERP response similar to that for pseudo-signs. Our experience, however, suggests that in the majority of cases, fluent users of ASL have previously encountered regional variants. This can be compared to the way a New England English speaker who does not use the word “sack” as part of his or her own dialect (instead using “bag”) would most likely comprehend a sentence such as “The grocer placed the vegetables in the sack” without difficulty. In principle, one might expect such forms to produce increased processing difficulty, similar to encountering low-frequency forms. However, cognizant of regional variation in ASL, we aimed in planning this study to use signs that were unambiguous and reflected the most frequent forms.
As a check, we asked two native signers (not participants in the main study) to scrutinize all semantically congruent and semantically incongruent ending signs, to mark whether they knew of any regional variants or alternative pronunciations, and to list any such forms. Note that without a proper sociolinguistic analysis, it is difficult to ascertain whether such differences are indeed sociolinguistic variants, so we were quite liberal in asking for any known alternative form. Out of 240 critical items, 49 were deemed to have a potential regional variant (e.g., PIZZA and FOOTBALL) or alternative pronunciation (e.g., the sign EARRINGS, which can be signed with an F handshape or a b0 handshape, aka “baby-O”). Of these potentially problematic items, we then asked whether any would go unrecognized if seen in isolation. Only 7 of those 49 sign forms were deemed potentially unrecognizable. Most importantly, none of these variants were the forms used in the actual experiment. For example, EASTER was a sign used as a semantically incongruent item. Both signers agreed that the form used in our study was the most common form (two E-handshapes held near the shoulders with a twisting motion). One of the two signers knew of an alternative form in which the E handshape rises off of the palm of a B-handshape base hand; the second signer noted she would not have known what that item meant had she seen it in isolation. Again, such forms were not used in our study. Thus we are confident that in choosing our sign ending items (a process that took into account intuitions and feedback from native signers, including one of the co-authors of this paper), we selected highly frequent forms likely to be recognized by sign users, even if they do not use the same forms themselves in each and every case (as in the bag/sack example in English).
Each sentence frame stimulus was filmed by having the signer begin with her hands in her lap, raise her hands to sign the sentence frame, then place her hands back in her lap. Each ending item was also filmed in this way, beginning and ending with the signer’s hands in her lap. Each frame was filmed just once, with no ending item in mind, rather than creating a separate instance of each frame for each of the four possible ending items. This was done to provide a consistent lead-up to each ending item, eliminating the possibility that differing coarticulatory effects or other confounds might lead to diverging processing on the part of the signer prior to the onset of each ending item. During video editing, the stimuli were trimmed slightly so that the full movement of the signer’s hands to and from her lap at the end of each sentence frame and beginning of each ending item was not seen when the stimuli were viewed during the running of the actual experiment.
Over the course of the entire experiment, the subject saw a sequence of 120 of these sign sentences, with 30 instances of each of the four types of ending items. The ordering of the sentences was randomized for each subject and the type of ending item (semantically congruent, semantically incongruent, pseudo-sign or gesture) shown for each sentence frame also varied among subjects. A complete list of all 120 sentence frames with each of their possible ending items is given in the Appendix. For most trials, no behavioral responses were required, but in order to encourage subjects to attend to the meaning of the sentences, an occasional comprehension check was given (see Behavioral task, below).
The sentence frames and ending items were separated by a 200-ms blank screen, so that the slight visual discontinuity between the two stimuli (which were filmed separately, as described earlier) would be less jarring. In the Presentation program which delivered the stimuli, the default color of the computer screen was set to match the background behind the signer in the video stimuli in both color and intensity. Blink intervals of randomly varying length between 3200 and 3700 ms were given after each trial. Longer blink intervals lasting an additional 4 s were provided after every eight trials. Fixation crosses appeared at key moments to remind subjects to maintain a consistent gaze toward the center of the screen, and to indicate when the next trial was about to begin. After every 30 trials (three times total during the experiment), open-ended break sessions were given so that subjects could rest longer if they desired. Before the experiment began, subjects were given a brief practice session, six trials long, to acquaint them with the format of the experiment. The six ASL sentences used in the practice session were different from the sentences used in the actual experiment.
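To make the trial structure concrete, the sketch below generates one subject’s randomized schedule under the parameters just described (120 frames, 30 endings per condition, a 200-ms frame-to-ending gap, blink intervals of 3200 to 3700 ms, and an extra 4-s pause after every eighth trial). The actual experiment was delivered with Presentation software; this Python sketch is an illustration only, and its rotation-based assignment of ending conditions to frames is our assumption, since the text specifies only that the assignment varied among subjects.

```python
import random

N_SENTENCES = 120
CONDITIONS = ["congruent", "incongruent", "pseudo-sign", "gesture"]

def build_trial_schedule(subject_id: int, seed: int = 0):
    """Build one subject's randomized, counterbalanced trial list.

    Each subject sees all 120 sentence frames in random order; ending
    conditions are rotated across subjects (Latin-square style, an
    assumption) so that each condition occurs exactly 30 times.
    """
    rng = random.Random(seed + subject_id)
    trials = []
    for frame in range(N_SENTENCES):
        cond = CONDITIONS[(frame + subject_id) % len(CONDITIONS)]
        trials.append({
            "frame": frame,
            "ending": cond,
            "gap_ms": 200,                       # blank between frame and ending
            "blink_ms": rng.randint(3200, 3700), # post-trial blink interval
        })
    rng.shuffle(trials)
    # Extra 4-s blink pause after every eighth trial
    for i, trial in enumerate(trials, start=1):
        if i % 8 == 0:
            trial["blink_ms"] += 4000
    return trials

schedule = build_trial_schedule(subject_id=1)
```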
2.3. Behavioral task
In order to provide an objective measure that could be used after the fact to verify that each subject had been paying attention to the sentences, occasional comprehension checks were programmed into the experiment; these appeared at random intervals, after every five to eight sentences. At each comprehension check, the subject was required to choose which of two words, presented on-screen, was more closely related to the meaning of the just-shown ASL sentence. For example, after the ASL sentence beginning with the frame “BOY SLEEP IN HIS,” the two candidate words were “fight” and “sleep,” with the latter being the correct answer in this case. Two such words were chosen for each sentence frame, and were always dependent only on the sentence frame, never on any of the four possible ending items for that sentence frame. The two quiz words always appeared side by side, with the left vs. right on-screen position of the correct and incorrect word choices chosen at random on each trial. The quiz words were presented in English, but only frequent words were used as quiz items, so that they would be familiar to users of ASL. All subjects’ scores were deemed sufficiently high (mean: 97.3%, SD: 4.6%, range: [84.0%, 100%]) that no subjects were excluded because of poor performance on the quizzes.
2.4. Electroencephalogram (EEG) recording and data analysis
EEG data were recorded continuously from 32 scalp locations at frontal, parietal, occipital, temporal and central sites, using Ag/AgCl electrodes attached to an elastic cap (BioSemi). Vertical and horizontal eye movements were monitored by means of two electrodes placed above and below the left eye and two others located adjacent to the left and right eyes. All electrodes were referenced to the average of the left and right mastoids. The EEG was digitized online at 256 Hz and band-pass filtered offline from 0.01 to 30 Hz. Scalp electrode impedance threshold values were set at 20 kΩ.
Initial analysis of the EEG data was performed using the ERPLAB plugin (Lopez-Calderon & Luck) for EEGLAB (Delorme & Makeig, 2004). Epochs began 200 ms before stimulus onset and ended 1000 ms after. Artifact rejection was performed using an ERPLAB script with thresholds set at ±120 μV, and the rejections it suggested were verified by visual inspection of each subject’s EEG data. For all 16 subjects, in each of the four sentence ending conditions, at least 20 of the original 30 trials remained after this rejection procedure. The statistical analyses reported below were carried out using the SPSS statistical package.
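The preprocessing steps just described (linked-mastoid reference, 0.01–30 Hz band-pass, 200 ms pre- to 1000 ms post-onset epochs, ±120 μV rejection) were carried out in ERPLAB/EEGLAB; for readers who work in Python, a minimal analogous sketch using the MNE library is given below. File, channel and event names are hypothetical, and note that MNE’s `reject` argument applies a peak-to-peak criterion, which only approximates ERPLAB’s absolute ±120 μV threshold.

```python
import mne

# Hypothetical file and channel names; parameter values follow the text.
raw = mne.io.read_raw_bdf("subject01.bdf", preload=True)  # BioSemi recording
raw.set_eeg_reference(["M1", "M2"])   # average of left and right mastoids
raw.filter(l_freq=0.01, h_freq=30.0)  # offline 0.01-30 Hz band-pass

events = mne.find_events(raw)         # assumes a standard stimulus channel
event_id = {"congruent": 1, "incongruent": 2, "pseudo": 3, "gesture": 4}

epochs = mne.Epochs(
    raw, events, event_id,
    tmin=-0.2, tmax=1.0,              # 200 ms pre- to 1000 ms post-onset
    baseline=(None, 0),               # baseline-correct on the pre-stimulus span
    reject=dict(eeg=120e-6),          # peak-to-peak stand-in for +/-120 uV
    preload=True,
)

# Per-condition average waveforms for one subject
evokeds = {cond: epochs[cond].average() for cond in event_id}
```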
To assess the significance of the observed effects, a column analysis was conducted (cf. Kim & Osterhout, 2005), in which a separate ANOVA was run on each of four subsets of the scalp sites, as illustrated in Figure 2. For the midline scalp sites, colored green in the figure, the two factors in the ANOVA were Electrode (one level for each of the four electrodes) and sentence Ending (semantically congruent sign, semantically incongruent sign, pseudo-sign and grooming gesture). The other three ANOVAs, corresponding to the inner (colored blue in the figure), outer (red), and outermost (orange) sites, included a third factor of Hemisphere (left or right); in addition, for these three ANOVAs the factor Electrode had one level for each pair of electrodes, from most anterior to most posterior. For the N400 analysis, the dependent measure was the mean amplitude of the EEG response within the window from 400 to 600 ms after stimulus onset. Because the latency of the gesture-related effects was somewhat greater, a second column analysis was run for the window from 600 to 800 ms after stimulus onset. For the purposes of these column analyses, data from the two frontmost sites (FP1 and FP2) and from two posterior sites (PO3 and PO4) were not used. In all cases, Greenhouse-Geisser (Greenhouse & Geisser, 1959) adjustments for non-sphericity were performed where appropriate and are reflected in the reported results.
Figure 2.
Electrode groupings for the ANOVAs.
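As an illustration of how one cell of this design can be tested, the sketch below computes window mean amplitudes and runs a two-way repeated-measures ANOVA (Ending by Electrode) for the midline column. The analyses in the paper were run in SPSS; this Python version uses synthetic stand-in data, and `statsmodels`’ `AnovaRM` reports uncorrected tests, so Greenhouse-Geisser adjustments, as used here, would be applied separately.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

def window_mean_amplitude(data, times, t_start=0.4, t_stop=0.6):
    """Mean amplitude per trial and channel within a latency window.

    data: array of shape (n_trials, n_channels, n_times); times: seconds.
    """
    mask = (times >= t_start) & (times < t_stop)
    return data[:, :, mask].mean(axis=-1)

# Synthetic stand-in data: 16 subjects x 4 Ending conditions x 4 midline
# electrodes, each cell holding a mean 400-600 ms amplitude (microvolts).
rng = np.random.default_rng(0)
endings = ["congruent", "incongruent", "pseudo", "gesture"]
electrodes = ["FZ", "CZ", "PZ", "OZ"]
rows = [
    {"subject": s, "ending": e, "electrode": el,
     "amp": rng.normal(loc=-3.0 + endings.index(e), scale=1.0)}
    for s in range(1, 17) for e in endings for el in electrodes
]
df = pd.DataFrame(rows)

# Two-way repeated-measures ANOVA for the midline column
res = AnovaRM(df, depvar="amp", subject="subject",
              within=["ending", "electrode"]).fit()
print(res)
```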
In the following section, we first introduce the results with illustrations and descriptions of the waveforms and the associated topographic maps; we then establish their significance by reporting the outcomes of the ANOVAs for each time window, in the order midline, inner, outer, and outermost.
3. Results
3.1. Waveforms and topographic maps
Pictured in Figures 3 and 4 are grand-average waveforms at selected electrode sites for the four sentence Ending conditions. Visual inspection of the waveforms reveals that all conditions evoked the exogenous potentials often associated with written words (Hagoort & Kutas, 1995): a posteriorly distributed positivity (P1) peaking at about 100 ms post-stimulus onset, followed by a posteriorly distributed negativity (N1) peaking at about 180 ms after target onset. Starting at approximately 300 ms after stimulus onset, the sentence Ending conditions begin to differentiate, a pattern we quantify in detail in the statistical analyses below. Relative to the baseline (semantically congruent sign) condition, both the semantically incongruent and pseudo-sign conditions can be seen to have elicited negative-going waves. In addition, the pseudo-sign negativity is generally greater in magnitude than the negativity elicited in the semantically incongruent condition. In contrast, beginning at approximately 400 ms we observe a large positive-going wave relative to the baseline for the grooming gesture condition. These effects appear to be long-lasting, extending beyond the end of the 1000 ms time window.
Figure 3.
Grand-average waveforms at (from top to bottom) frontal, central, parietal and occipital sites. Units on the vertical axis are microvolts; those on the horizontal axis are milliseconds. Negative is plotted downward.
Figure 4.
Grand-average waveforms at the OZ site. Units on the vertical axis are microvolts; successive tick marks on the horizontal axis are 200 ms apart. Negative is plotted downward.
Figure 5 presents topographic maps of key contrasts and reinforces the patterns seen in Figures 3 and 4. The maps show the mean amplitude differences between the indicated conditions during the two time windows we analyzed. Again, we see that the pseudo-signs elicit a negativity that is overall more pronounced than the one elicited by the semantically incongruent signs, in terms of both magnitude and distribution. In contrast, the response to the gesture condition starts to diverge clearly from the others in the earlier time window, starting at posterior sites and then spreading more generally.
Figure 5.
Grand-average topographic maps (viewed from above, with anterior oriented upward) for key contrasts between sentence Ending conditions over the two indicated time intervals. Units on the scale are microvolts.
We now continue with a presentation of the statistical results. For simplicity of presentation, only main effects and interactions related to sentence Ending are discussed in the text, but complete ANOVA results are given in Table 1. Our analyses confirm the very consistent patterning of mean ERP response with respect to sentence Ending that was seen in Figures 3 through 5 and noted in the foregoing discussion. Relative to the baseline (semantically congruent sign) condition, the incongruent signs and the pseudo-signs elicited a negative-going wave, while the grooming gestures elicited a large positivity; also, the negative-going component elicited by the pseudo-signs was overall larger than the one elicited by the incongruent signs. This general pattern was observed for almost all scalp sites; the relatively minor exceptions are noted below.
Table 1.
Summary of ANOVA results (df = degrees of freedom; MSE = mean squared error).
| | 400–600 ms | | | | 600–800 ms | | | |
|---|---|---|---|---|---|---|---|---|
| | df | F | MSE | p | df | F | MSE | p |
| Midline | | | | | | | | |
| Ending | 3, 45 | 7.19 | 23.8 | *** | 3, 45 | 26.6 | 22.6 | *** |
| Ending × Electrode | 9, 135 | 3.45 | 1.86 | *** | 4.62, 69.3 | 7.68 | 5.05 | *** |
| Inner | | | | | | | | |
| Ending | 3, 45 | 9.87 | 32.1 | *** | 3, 45 | 36.3 | 27.7 | *** |
| Ending × Electrode | 6, 90 | 1.88 | 1.94 | 0.094 | 6, 90 | 5.42 | 2.77 | *** |
| Electrode | 2, 30 | 3.25 | 7.47 | 0.053 | – | – | – | ns |
| Outer | | | | | | | | |
| Ending | 3, 45 | 8.16 | 30.1 | *** | 3, 45 | 20.3 | 30.3 | *** |
| Ending × Electrode | – | – | – | ns | 3.65, 54.8 | 3.22 | 12.7 | * |
| Hemisphere × Electrode | 3, 45 | 18.7 | 4.01 | *** | 3, 45 | 6.29 | 6.80 | ** |
| Outermost | | | | | | | | |
| Ending | 3, 45 | 7.23 | 24.0 | *** | 2.04, 30.6 | 15.9 | 37.9 | *** |
| Ending × Electrode | 12, 180 | 2.15 | 3.78 | * | 5.84, 87.5 | 5.92 | 9.24 | *** |
| Electrode | 1.40, 21.0 | 5.45 | 56.9 | * | 1.34, 20.1 | 3.76 | 67.0 | 0.056 |
| Hemisphere × Electrode | 4, 60 | 7.34 | 4.17 | *** | 4, 60 | 7.83 | 5.58 | *** |

* p<0.05, ** p<0.01, *** p<0.001.
3.2. Window from 400 to 600 ms
For the earlier of the two time windows, the midline ANOVA found a main effect of Ending (p<0.001) and an interaction of Ending by Electrode (p<0.001). Mean amplitude for Ending showed the predicted pattern among condition means, with respective means for pseudo-signs, incongruent signs, congruent signs and grooming gestures equal to −4.95, −4.20, −3.36 and −1.16 microvolts. The gesture condition mean differed significantly from the rest (p<0.05 for gesture vs. baseline, p’s<0.001 for the other two comparisons); the other differences did not reach significance. It was at the frontmost sites that the baseline differed the most from the incongruent and pseudo-sign conditions (p<0.05 and p=0.062, respectively). At the vertex electrode CZ, a departure from the predicted progression of condition means was found; incongruent signs showed the most negative amplitude here, which however was not significantly different from the amplitudes for pseudo-signs or congruent signs. At the other three midline electrode sites, the familiar pattern among the four Ending conditions was seen.
The inner-electrode ANOVA found a main effect of Ending (p<0.001) and a marginal Ending by Electrode interaction (p=0.094). Mean amplitudes for the four conditions progressed in the same way as before; for pseudo-signs, incongruent signs, congruent signs and grooming gestures, respective means were −4.54, −3.57, −2.56 and −0.302. Pairwise differences between means were all significant, except for incongruent signs vs. congruent signs (marginally significant at p=0.088) and pseudo-signs vs. incongruent signs (p=0.24). The same ordering from most negative to most positive means for the four conditions was seen at all three levels of electrode. The most anterior sites showed the greatest differences between the baseline condition mean and the incongruent and pseudo-sign means; at these sites, both pairwise differences were significant.
For the outer-electrode ANOVA, the results included a main effect of Ending (p<0.001). The respective overall mean amplitudes for the pseudo-sign, incongruent, congruent and gesture conditions were −2.93, −1.97, −1.20 and 0.345, consistent with the predicted pattern. The highly significant main effect of Ending here reflects the fact that the pairwise differences between conditions were either significant or showed near-significant trends (cases of the latter were: pseudo-sign vs. incongruent sign, p=0.11; incongruent sign vs. congruent sign, p=0.14; gesture vs. congruent sign, p=0.062). The absence of a significant Ending by Electrode interaction reflects the fact that the predicted pattern of pseudo-sign < incongruent sign < congruent sign < gesture was seen at all four levels of Electrode.
Finally, the ANOVA for the outermost set of electrodes found a significant main effect of sentence Ending (p<0.001) and an interaction of Ending by Electrode (p<0.05). The main effect of Ending reflects the pattern already seen repeatedly; for the outermost sites, mean EEG amplitudes were −2.45, −1.34, −1.03 and 0.083 microvolts, respectively, for the pseudo-signs, incongruent signs, congruent signs and grooming gestures. Follow-up comparisons showed that all of the pairwise differences among conditions were significant, except for the congruent sign vs. incongruent sign comparison (p=0.39), and the incongruent sign vs. pseudo-sign comparison, which was however marginally significant (p=0.078). The Ending by Electrode interaction reflects two exceptions to the predicted pattern for the four Ending conditions. First, at the two most posterior sites, the amplitude for the incongruent sign condition was slightly greater than for the congruent sign condition (this difference, however, did not approach significance; p=0.62). Second, at the anterior electrode sites the gesture condition was not associated with significantly greater positivity than the other three conditions.
3.3. Window from 600 to 800 ms
For the later time window, the midline-electrode ANOVA found a main effect of Ending (p<0.001) and an interaction of Ending by Electrode (p<0.001). Mean amplitude for the four Ending conditions diverged slightly here from the familiar pattern, with respective means for pseudo-signs, incongruent signs, congruent signs and grooming gestures equal to −2.43, −2.47, −1.33 and 3.97. The comparison of pseudo-sign vs. incongruent sign did not approach significance however (p=0.96), nor did the comparison of pseudo-sign vs. congruent sign (p=0.31). The congruent vs. incongruent sign comparison was marginally significant at p=0.063. The gesture means were again very different from the rest (all three p’s<0.001).
The inner-electrode ANOVA found a main effect of Ending (p<0.001) and an Ending by Electrode interaction (p<0.001). Mean amplitudes for the four conditions progressed in the predicted way, with the gesture means again very different from the rest (all three p’s<0.001). The baseline vs. incongruent comparison also reached significance (p<0.05), and the baseline vs. pseudo-sign comparison showed a near-significant trend (p=0.13). The same ordering from most negative to most positive means for the four conditions was seen at all three levels of electrode (respective mean values for pseudo-sign, incongruent sign, congruent sign, grooming gesture = −1.86, −1.60, −0.43, 5.05), with gesture means being most divergent from the other three condition means at posterior sites.
For the outer-electrode ANOVA, the results included a main effect of Ending (p<0.001) and an interaction of Ending by Electrode (p<0.05). The familiar pattern of pseudo-sign < incongruent sign < congruent sign < grooming gesture was seen at all four levels of Electrode (mean values in the usual order = −1.60, −0.85, −0.21, 3.35), but the only differences reaching significance were those between gesture and the other conditions (all three p’s<0.001). The baseline vs. pseudo-sign difference showed a near-significant trend (p=0.12). Again, gesture means showed the greatest differences from the other three condition means at posterior electrode sites.
Finally, the ANOVA for the outermost set of electrodes found a highly significant effect of sentence Ending (p<0.001), as well as an interaction of Ending by Electrode (p<0.001). The predicted pattern among condition means was seen again here (mean values in same order as before: −1.63, −0.66, −0.39, 2.12). For this later window, however, only the grooming gesture mean was significantly different from the rest (all three p’s<0.001), but the baseline vs. pseudo-sign comparison was marginally significant (p=0.085). The Ending by Electrode interaction is due to the fact that the differences between the gesture mean and the means associated with the other three conditions were greatest at posterior sites.
3.4. Earlier time windows
ANOVAs like those just described were also carried out for the following time windows: 0 to 100 ms post-stimulus onset, 100 to 200 ms, 200 to 300 ms, and 300 to 400 ms. None of these ANOVAs showed significant effects or interactions related to sentence Ending. An additional post hoc test was carried out to investigate specifically the negative-going trend for the pseudo-sign condition relative to baseline at the OZ site in the 200 to 400 ms window, seen most clearly in Figure 4. However, this contrast was found to be only marginally significant in that time window (p=0.085).
4. General discussion
This study investigated sign language users’ ERP responses when confronted with ASL sentences with four kinds of endings: semantically congruent signs, semantically incongruent signs, phonologically legal pseudo-signs and non-linguistic grooming gestures. We hypothesized that the neurophysiological responses associated with these ending types would differ in a manner consistent with findings in analogous studies of spoken language. Specifically, we predicted that the incongruent signs would elicit negative-going waves relative to the baseline (congruent sign) condition, and that the pseudo-signs would elicit a negativity of larger magnitude than the incongruent signs. In addition, we suggested that the non-linguistic gestures might elicit a positive-going wave relative to baseline. While this last prediction was more speculative than the others, the expected pattern was in fact observed very consistently in our analysis of the data. Together, these findings lead to a number of important observations.
First, the outcome of our incongruent sign vs. congruent sign comparison replicates earlier findings which have also indicated that the N400 generalizes across modalities, including the visual-manual modality of signed language (e.g. Kutas, Neville, & Holcomb, 1987; Capek et al., 2009). These findings are broadly consistent with other studies using a variety of methodologies, including positron emission tomography (PET; Corina et al., 2003), functional magnetic resonance imaging (fMRI; Neville et al., 1998), and cortical stimulation mapping (Corina et al., 1999), highlighting key neural processing similarities between signed and spoken language, in spite of the obvious physical differences in the linguistic signal.
For instance, Neville et al. (1997) also found that deaf signers exhibited an N400 response to semantically incongruent ASL sentences, relative to congruent sentences. Like the effects in the present study, this response was broadly distributed and had an onset and peak that the researchers noted were somewhat later than would be expected for written language, but consistent with earlier studies on auditory language (Holcomb & Neville, 1990, 1991). Neville et al. suggested that this delay might be due to the fact that the recognition point of different signs tends to vary more than for printed language, in which all information is made available at the same time. Capek et al. (2009) also found a relatively late N400 response to semantic incongruity in sign sentences; this bilateral and posteriorly prominent effect had an onset of about 300 ms post-stimulus onset and peaked at about 600 ms post-stimulus onset, very much like the negativities we have described in the present study.
These effects are somewhat different from those that have been described in studies incorporating incongruent co-speech gestures and other sorts of non-linguistic imagery like drawings, photographs and videos. In Wu and Coulson’s (2005) study of contextually incongruent gestures, a component described by the researchers as a “gesture N450” was observed. Wu and Coulson noted the similarity of this effect to the N450 reported by Barrett and Rugg (1990) for second items in unrelated picture pairs relative to related picture pairs (e.g. wrench/fork vs. knife/fork), stating (p. 659) that consistent with their own findings, “most such ‘picture’ ERP studies report a broadly distributed negativity largest at frontal electrode sites and not evident at occipital sites (Barrett & Rugg, 1990; Holcomb & McPherson, 1994; McPherson & Holcomb, 1999; Sitnikova, Kuperberg, & Holcomb, 2003; West & Holcomb, 2002).” In contrast, the negativity reported in the present study was quite evident at occipital sites, as can be seen clearly in Figures 3 and 4.
A second notable finding in our study concerns deaf subjects’ ERP response in the phonologically legal pseudo-sign condition, which was also consistent with an N400 response but was generally larger (more negative) than the negativity seen for semantically incongruent but fully lexical signs. This provides further evidence for broad processing similarities across linguistic modalities, in light of similar findings for pseudo-words in earlier studies (Bentin, 1987; Bentin, McCarthy & Wood, 1985; Hagoort & Kutas, 1995). It is interesting, however, that the phonologically legal pseudo-signs did not more strongly differentiate from the semantically incongruent signs in the present study. This may be an indication that our pseudo-signs (or some of their sub-lexical components) activated lexical representations to a substantial degree (cf. Friedrich, Eulitz & Lahiri, 2006), while at the same time those representations were incongruent with the sentential contexts in which they were presented. The pseudo-sign and incongruent sign conditions shared another similarity in that the effects they elicited were very prominent at occipital sites, which differs from what has traditionally been observed in studies of word processing. Whether this is a reflection of the modality of expression or of other experimental factors must await further study, though the results of Corina et al. (2007), discussed below, may offer some insights about useful directions such research might take.
A third set of findings, concerning the outcome related to our non-linguistic grooming actions, is especially provocative. In contrast to the three other kinds of sentence-final items, all of which could be considered linguistic (i.e. actual lexical items in two cases, and phonologically legal lexical gaps in the third), the non-linguistic grooming actions elicited a large positivity. As noted earlier, phonologically illegal non-words in ERP studies have in some cases elicited a positive-going component rather than an N400 (Holcomb & Neville, 1990; Ziegler et al., 1997). Holcomb and Neville (1990) examined differences between pseudo-words and non-words in the visual and auditory modalities in the context of a lexical decision experiment. Pseudo-words accorded with the phonotactic constraints of English; visually presented non-words were composed of consonant strings, and auditory non-words were words played backwards. The researchers reported that within an early time window (150–300 ms), auditory non-words (but not visual non-words) elicited a more negative response than pseudo-words, but only at anterior and right-hemisphere sites. In a later time window (300–500 ms), the response to non-words was more positive for both modalities, and like the positivity seen in the present study, this positivity was long-lasting, continuing past the 1000 ms time-point.
Ziegler et al. (1997) examined the effects of task constraints on the processing of visually presented words, pseudo-words and non-words. In a letter search task, following a post-stimulus N1-P2 complex, the researchers reported a negative component, peaking around 350 ms, which was larger for words and pseudo-words than for non-words. A late positive component (LPC) was then generated that appeared to be slightly larger for non-words than for words or pseudo-words. In a second experiment, in which subjects’ responses to the three types of stimuli were delayed, the ERP response in the 300 to 500 ms window was more positive to non-words than to words and pseudo-words; responses for words and pseudo-words did not significantly differ. In a final experiment which required a semantic categorization of the target, a negative component with a peak around 400 ms was elicited in response to words and pseudo-words. In contrast, a striking late positive component was observed in response to non-words; this lasted from 300 ms to the end of the recording period.
Thus, across multiple studies we see that illegal non-words, relative to pseudo-words and real words, appear to elicit a large late positivity. This positivity has sometimes been interpreted as a P300 response (e.g. Holcomb & Neville, 1990). In the present experiment, the centro-parietal distribution of the positive component elicited in the gesture condition also corresponds to the typical distribution of the P300 component. At least three interpretations of this effect may be relevant here. First, a P300 response is well-attested in studies making use of stimuli perceived by subjects to be in a low-probability category (e.g. Johnson & Donchin, 1980). In our experiment, grooming gestures occurred 1/4 of the time, rendering these non-linguistic events low-probability with respect to the other three (linguistic) sentence ending conditions. Second, ERP differences between non-words and words have been attributed to the fact that these non-linguistic items have little or nothing in common with lexical entries and therefore do not generate lexical activity (cf. Rugg & Nagy, 1987). Third, the ERP differences observed between pseudo-words and non-words have also been suggested to reflect a pre-lexical filtering process that quickly rejects non-linguistic items based upon aberrant physical characteristics (Holcomb & Neville, 1990). For example, Ziegler et al. (1997) suggest such a categorization may be based on a spelling check in the case of non-word consonant-string stimuli.
This last interpretation accords well with a possibility we noted in the Introduction: that such an ERP response may be due to the operation of a filtering/rejection mechanism, allowing language users to efficiently reject items in the incoming linguistic signal that do not fall within some limits of linguistic acceptability. The gesture stimuli in the present study, lacking the semantic appropriateness of semantically congruent signs, the lexicality of incongruent signs, and even the phonological legality of pseudo-signs, apparently fail to reach some “acceptability threshold,” causing them to be rejected and thus dealt with during processing in a qualitatively different way. This hypothesis also permits us to predict an answer to an interesting related question: what kind of ERP response would be elicited by phonologically illegal non-signs in sentence contexts like those explored in the present study? The positivities seen in studies incorporating non-words, the cross-modality parallels we have already noted in ERP studies of spoken, written and signed language, and the positive waveforms seen in response to the non-linguistic gesture stimuli in the present study all support the prediction that phonologically illegal non-signs would likewise elicit a positive-going waveform. However, confirmation of this must await future research.
We have already alluded to the growing number of studies which have used ERP methodology to examine the contributions of co-speech manual gestures to the interpretation of both linguistic and non-linguistic stimuli (Kelly et al., 2004; Wu & Coulson, 2005, 2007a, 2007b; Ozyürek et al., 2007; Holle & Gunter, 2007). Many of these studies have used iconic manual gestures that depict a salient visual-spatial property of concrete objects, such as their size and shape or an associated manner of movement (but see also Cornejo et al., 2009). Collectively these studies suggest that co-speech manual gestures influence semantic representations, and that discrepancies between gestural forms and the semantic contexts in which they occur lead to greater processing costs on the part of language perceivers. This in turn results in increased negativities in the time window often associated with the classic N400 effect, observed in response to word meanings that violate the wider semantic context (Kutas & Hillyard, 1980).
For example, Kelly et al. (2004) observed modulation of ERP responses to speech tokens that were accompanied by matching, complementary or mismatched hand gestures. An N400-like component was observed for mismatched gesture-speech tokens relative to the other conditions. Wu and Coulson (2005) examined ERPs for subjects who watched cartoons followed by a gestural depiction that either matched or mismatched the events shown in the cartoons. Gestures elicited an N400-like component (a so-called “gesture N450”) that was larger for incongruent than congruent items. Ozyürek et al. (2007) recorded EEG while subjects listened to sentences with a critical verb (e.g. “knock”) accompanied by a related co-speech gesture (e.g. KNOCK). Verbal/gestural semantic content either matched or mismatched the earlier part of the sentence. The researchers noted that following the N1-P2 complex, the ERP response to the mismatch conditions started to deviate from the response to the congruent condition within the latency window of the P2 component, around 225 to 275 ms post-stimulus onset, and again at around 350 ms. This was followed by a similar effect with a peak latency somewhat later than the one usually seen for the N400. These data were taken as evidence that the brain integrates both speech and gestural information simultaneously. However, it is interesting to note that double violations (speech and gesture) did not produce additive effects, suggesting parallel integration of speech and gesture in this context.
In contrast, the grooming gesture condition in the present study was not associated with any N400-like effects. We suggest that this is because subjects were unlikely to perceive these actions as akin to co-speech (or co-sign) gestures, but instead perceived them as something qualitatively different. This detection process evidently occurred quite rapidly during online processing of these stimuli, comparable to the speed at which semantic processing was carried out for the linguistic stimuli. The findings of a PET study by Corina et al. (2007) may shed some additional light on this outcome. In that study, deaf signers were found to have engaged different brain regions when processing ASL signs and self-grooming gestures, in contrast with the hearing non-signers who also took part in the study. Specifically, deaf signers engaged left-hemisphere perisylvian language areas when processing ASL sign forms, but recruited middle-occipital temporal-ventral regions when processing self-grooming actions. The latter areas are known to be involved in the detection of human bodies, faces, and movements. The present findings add temporal precision to the results of that study, enabling us to determine when such information is rejected as non-linguistic during the course of ASL sentence processing.
The findings of Corina et al. (2007) may also speak to the fact, noted earlier, that the N400 effects observed in the present study were more prominent at occipital sites than the N400 effects typically seen in analogous speech studies. The effects seen in the present study’s gesture condition were strongest in posterior areas as well, though in both cases, one must be cautious in drawing a connection between the scalp topography of ERP effects and the location of their source.4
Finally, an alternative interpretation of the positivity seen in the grooming gesture condition in the present study is that it is due to subjects’ interpreting these non-linguistic final items as missing information, i.e. as the absence of a final item, rather than a final item which is present yet enigmatic. While this cannot be entirely ruled out, it should be noted that the pseudo-signs could also potentially be considered “semantically void” items, but the responses to the pseudo-signs (which have a discernible linguistic structure consistent with that of real ASL lexical items) and the gestures (which do not) were qualitatively different. Relative to the baseline condition, no significant positivity was seen in any time window for the pseudo-signs, and no significant negativity was seen in any time window for the grooming gestures.
5. Conclusion
To the best of our knowledge, this is the first ERP study of sign language users to investigate sentential processing across such a wide range of lexical/semantic contexts. Consistent with previous research on both spoken and signed language, we found that ASL sentences ending with semantically incongruent signs were associated with significant N400-like responses relative to the baseline condition, in which sentences ended with semantically congruent signs. Furthermore, we found that phonologically legal pseudo-sign sentence endings elicited an N400-like effect that was somewhat stronger than the response to the semantically incongruent signs; this is consistent with existing work on spoken language, but represents a new finding for signed language. In contrast to the incongruent sign and pseudo-sign conditions, grooming actions elicited a very large positive-going wave; this is also a new finding, and complements earlier work on spoken language. The fact that our results largely parallel those seen in analogous ERP studies of spoken language constitutes strong evidence that high-level linguistic processing shows remarkable consistency across modalities. Moreover, our results offer important new information about the relationship between sign and action processing, particularly the topography and timing of the processes involved.
Highlights.
ERP study of Deaf signers’ processing of signs, pseudo-signs and gestures in ASL sentence context
Semantically incongruent signs elicit negativity (N400) relative to congruent sign baseline
Phonologically legal pseudo-signs elicit somewhat larger N400 than incongruent signs
Non-linguistic grooming gestures elicit very large-amplitude positivity relative to baseline
Acknowledgments
We thank the staff and students at Gallaudet University for helping make this study possible, and Kearnan Welch and Deborah Williams for their assistance in data collection. We also thank two anonymous reviewers for valuable feedback concerning the presentation of our results. This work was supported in part by grant NIH-NIDCD 2ROI-DC03099-11, awarded to David Corina.
Appendix: List of stimulus items
The table lists all 120 sentence frames and three of the corresponding endings for each sentence: the semantically congruent signs, semantically incongruent signs and pseudo-signs. In the fourth class of endings, the gesture stimuli, the sign performer was seen making brief grooming actions such as head scratching, eye rubbing or passing her fingers through her hair. To create 120 unique gesture stimuli, the actions were performed with differences in the number and configuration of hands or fingers used, the location of the body involved, and so on. Also shown in the rightmost two columns of the table are the “correct” and “incorrect” word choices for the occasional quiz items. A number sign (#) preceding an item means that item was fingerspelled. Many of these sentences were adapted from the English-language stimuli used in Johnson & Hamm (2000).
| # | FRAME | Semantically congruent | Semantically incongruent | Pseudo-sign | Quiz: correct | Quiz: incorrect |
|---|---|---|---|---|---|---|
| 1 | BOY SLEEP IN HIS | BED | LEMON | BARK.A′ | sleep | fight |
| 2 | HEAR BARK BARK LOOK RIGHT PRO | DOG | SECRET | HOUSE.V | hear | touch |
| 3 | MAN PRO CARPENTER BUILD | HOUSE | HAIRCUT | NOSE.V″ | build | eat |
| 4 | DOOR PRO LOCK LOOK-FOR | KEY | NOSE | G_2H (contact 2x) | lock | hat |
| 5 | WATER+CL:pond PRO CL:many-across | FISH | WORD | GHOST.S | water | wine |
| 6 | MOTORCYCLE BROKE-DOWN-REPEATEDLY FINALLY BOUGHT NEW | CAR | GHOST | WRIST.CS (open & close) | buy | wear |
| 7 | PRO3 BECOME-ILL SICK CAN’T GO-TO | WORK | BOTTLE | NOSE.5 (2x) | sick | one |
| 8 | MY HOUSE LIGHTS BLACKOUT-POW CL:set-up-around | CANDLE | ELEPHANT | KNOCK.8 | lights | bath |
| 9 | DOG ANGRY CHASE | CAT | EASTER | KNOCK.G | angry | green |
| 10 | #PATIENT SIT-HABITUAL ANALYZE-PRO1, PRO3 | PSYCHOLOGIST | FRANCE | MONEY.X | sit | drive |
| 11 | BOY STOLE | MONEY | INJURY | PALM_up.5 | boy | mouse |
| 12 | KIDS PRO3 CL:go-out-in-group WATCH | (theater) PLAY | WEALTH | BOOK.openB on chin | watch | help |
| 13 | DAUGHTER MY LIKE READ | BOOK | VEGETABLE | X (moving left to right) | read | steal |
| 14 | PLUMBER HIS JOB FIX | TOILET | APPLE | BEANS.C | fix | burn |
| 15 | PRIEST WORK THERE | CHURCH | BEANS | EXAM_2H. openF | work | shower |
| 16 | STUDENTS WRITE-TOOK | EXAM | RING | MUSIC.V″ | students | hat |
| 17 | POLICEMAN PRO CATCH | THIEF | MUSIC | REASON.5″ | police | student |
| 18 | MOTHER HAVE 3 | CHILDREN | REASON | PIC.X | mother | father |
| 19 | DARK ROOM PRO FOR-FOR DEVELOP | PICTURE | SLED | INTERNET.H″ | room | bread |
| 20 | HUNTER SHOOT-AT KILL | DUCK | INTERNET | PAPER.3′ | kill | love |
| 21 | AIRPLANE SEAT FULL CL:mass 300 | PEOPLE | PAPER | GOLD.F | full | hard |
| 22 | PIRATE PRO3 HUNT WHERE | GOLD | NAPKIN | EMAIL.L″ | pirate | circle |
| 23 | GOLFER STROKE CL:ball-fly-across WRONG CL:ball-into-water | POND | PICKLE | NOSE(vb).openB | ball | tennis |
| 24 | SPRING THIS YEAR MINE TAX HEAVY | DEBT | CHOCOLATE | EXAMPLE.Z | tax | card |
| 25 | KING-PRO FALL-IN-LOVE | QUEEN | EXAMPLE | STAR_2H.X | love | try |
| 26 | TELESCOPE I TELESCOPE-FOCUS SEE | STAR | RUBBISH | PENNY.flat0′ | see | hear |
| 27 | #APOLLO ROCKET-MAN GO-TO TOUCH | MOON | BANANA | MONEY.C′ | touch | dry |
| 28 | FARMER NOW CL:milk (verb) | COW | PENCIL | DANCE.openH | milk | hat |
| 29 | SCIENTIST INVENT++ HARD | EXPERIMENT | TEACHER | BUTTERFLY.bent5 | scientist | cover |
| 30 | MURDERER PRO3 CAUGHT PUT-INTO | JAIL | LOBSTER | JAIL.T | catch | draw |
| 31 | FATHER COMMAND SON GO CLEAN | BATHROOM | MATHEMATICS | WHERE.bent4 | son | aunt |
| 32 | NEW YORK TIMES NEWSPAPER ITS TENDENCY I READ MANY | ARTICLE | EARRINGS | VOICE-UP.1 | newspaper | chair |
| 33 | JUDGE PRO3 ARGUMENT LISTEN LISTEN THINK-IT-OVER READY MAKE | DECISION | DOLL | WHITE (on neck) | listen | taste |
| 34 | MY BIRTHDAY SOON COME MY MOM PRO3 BAKE | CAKE | GIRAFFE | XMAS.5″ | mom | cousin |
| 35 | #ZOO ITS-TENDENCY HAVE MANY VARIOUS | ANIMAL | MAGAZINE | TALL.3 | zoo | arm |
| 36 | MEN PRO3 CL:group-up GO-OUT CHUG | BEER | SAUSAGE | TIME.openB″ | men | rats |
| 37 | I JOIN ARMY I SHOPPING CLOTHES NEED SHIRT PANTS | #BOOTS | HAMBURGER | DANCE.openH″ | army | hill |
| 38 | TEA DRINK TASTE BITTER NEED | SUGAR | DANCE | TIRED_2H.H | drink | cut |
| 39 | PANTS CL:pull-on CL:fit-loose NEED | BELT | LUNGS | MATH_2H.B | pants | arm |
| 40 | I CALL HOTEL RESERVE | ROOM | BUTTERFLY | DAMAGE.Y | hotel | mall |
| 41 | WOMAN CL:lie-down SUNNING THERE | BEACH | LANGUAGE | BEACH.L | sun | mouse |
| 42 | MUSEUM I LOOK-AT HALLWAY LOOK-AT WOW BEAUTIFUL | PAINTING | CABBAGE | MONEY.5″ | beautiful | bad |
| 43 | #OFFICE MAX I ENTER SHOP-AROUND PRO #FAX PRO COMPUTER PRO | PRINTER | CAP | KNOW.20 | computer | wheel |
| 44 | MORNING BOY PRO CL:get-on-bike RIDE-BIKE CL:deliver | NEWSPAPER | PLANE | MONEY.IL | bike | tiger |
| 45 | MY LIVING ROOM EMPTY NONE | COUCH | SKIN | NOSE.X | empty | yellow |
| 46 | MONEY FATHER GAVE-ME I PUT-IN WHERE | WALLET | #DORM | BUTTER.1 | father | nephew |
| 47 | POPCORN I MAKE FINISH I CL:pour-over | BUTTER | SOCKS | JUMP.X | popcorn | cigarette |
| 48 | I THIRST-FOR WATER NEED | CUP | HOCKEY | CORNER_2H.1 | water | hat |
| 49 | ME ENTER BDRM I SPOT MOUSE CL:sneak-under | DRESSER | GOVERNMENT | BUG.openB | mouse | cookie |
| 50 | FROG CL:tongue-stick-out-retract GULP | BUG | #TRUCK | KNOW.5″ | frog | square |
| 51 | GRASS CL:thereabout I WALK CL:walk-around WET OH | RAIN | MEMORY | MORE.B | wet | high |
| 52 | GIRLFRIEND GO FURNITURE STORE BUY BRING | TABLE | SCAREDNESS | BIKE.4 | store | mouse |
| 53 | BIRD EAT SLEEP WHERE | NEST | BICYCLE | CHANGE.4 | sleep | move |
| 54 | WOMAN PRO3 TOP ATHLETE PARTICIPATE | RACE | SMOKING | DINNER.9 | woman | shell |
| 55 | COOK PRO3 EXPERT THEIRS COOKING | DESSERT | KEYBOARD | TIME.9 | cook | breathe |
| 56 | I ENTER HOUSE I HEAR TICK-TICK AHA! PRO3 | CLOCK | SHIRT | ENGINE.B (palms in) | house | meal |
| 57 | VW BUS I BUY DRIVE WRONG BROKEDOWN | ENGINE | SHELF | LOCK.1 | drive | eat |
| 58 | OUTSIDE GIRL CL:lie-down OBSERVE | CLOUDS | SETUP | CHECK.2 | girl | boy |
| 59 | LAWYER CL:sit-down-with-someone DISCUSS WITH PRO | CLIENT | DOG | MOVIE.V″ | lawyer | dream |
| 60 | I INFORM SISTER PLEASE PACK BRING | BOX | GIRL | EGG.7 | sister | uncle |
| 61 | MAIL/LTR I GOT OPEN-ENVELOPE I GOT | CHECK | EARTH | LET.8 | bird | |
| 62 | MAN PRO3 I SEE PRO3 FIX FINISH CONSTRUCT | BRIDGE | GRANDFATHER | FRUIT.3 | man | dream |
| 63 | TARA PRO WANT MAKE PIE NEED BUY | FRUIT | T.V. | ENGINE.9 | pie | roof |
| 64 | THIS WKND I GO HIKE I LOOK-AT WOW BEAUTIFUL | MOUNTAIN | PIZZA | WINDOW.V | hike | paint |
| 65 | HOMEWORK I WRITE CL:glass-breaking I LOOK-AT | WINDOW | FOOTBALL | GRASSHOPPER.bent5 | break | drink |
| 66 | PLANT CL:branching I LOOK PRO3 OH | SEED | METAL | BABY.1 | plant | north |
| 67 | MOM CL:walk-by CL:pick-up | BABY | BUS | DAMAGE.V″ | mom | dad |
| 68 | BUILDING PRO3 CRUMBLE OH SEEM | BOMB | IRISH | PRINCIPAL.5 | building | yellow |
| 69 | SCHOOL THERE BOY TROUBLE MUST GO-TO | PRINCIPAL | CATERPILLAR | LECTURE.I | boy | wheel |
| 70 | COLLEGE I ENTER AUDIENCE CL:sit-down WATCH GOOD | LECTURE | TREE | TELL-ME.flatO | watch | drink |
| 71 | GIRL PRO3 THIRSTY WANT DRINK | WATER | LIGHT | ARIZONA.4 | thirsty | sad |
| 72 | I HUNGRY WANT EAT I GO-TO | RESTAURANT | TABLE | EAR.A | hungry | small |
| 73 | CAR CL:car-stalls RAN-OUT | GAS | SHEEP | OPEN.F | car | head |
| 74 | #STEAK MAN EAT TASTE-FUNNY NEED SEASON SALT | PEPPER | JESUS | ACTOR.F | steak | rabbit |
| 75 | MOVIE DIRECTOR MAKE MOVIE WANT SEEK INTERVIEW GOOD | ACTOR(S) | MEAT | JAWS_2H.H | movie | hat |
| 76 | DR PRO3 SELF PLASTIC SURGEON HIS SPECIALTY | BREASTS | VERB | ARCHITECTURE.4 | doctor | wood |
| 77 | STAMP I SICK-OF ANNUAL INCREASE | PRICE | COLOR | THROAT.H | stamp | dream |
| 78 | BOY SICK I LOOK CHECK OH HURT | THROAT | EGG | HAMMER_2H.S | sick | free |
| 79 | COURT I GO-TO CL:sit-down FACE | JUDGE | #SALE | DRESS.bentV | court | mouse |
| 80 | DOWNTOWN LAUNDROMAT I GO ARRIVE I ENTER OH I NEED | COINS | CLOTHES | GAS.3 | arrive | write |
| 81 | SCHOOL CLOSED NEXT WEEK STUDENTS GO HOME (lowered eyebrows) | HOLIDAY | SODA | EAT.openB (palm in) | home | country |
| 82 | RESTAURANT PRO3 FIRST TIME I ENTER ME FEAST WHOA DELICIOUS | FOOD | DOOR | TIME.1I | restaurant | pen |
| 83 | PRO WOMAN SHORT THIN PRO3 EXPERT PRO3 CL:drink-shot | WHISKEY | WATCH (n) | TIRE.1I | drink | eat |
| 84 | OUTSIDE THERE WEATHER BAD THERE TORNADO PRO3 HIT++ DESTROY++ | SCHOOL | INTERVIEW | EARS_2H.X (2x) | bad | good |
| 85 | BOY PRO3 NOT-WANT GO SCHOOL PRO3 MOM SAY GO SCHOOL MUST TAKE | TEST | RADIO | STREET_2H.F | school | mouse |
| 86 | A-LONG-TIME-AGO STREET CL:flat-surface BUMPY-SURFACE | BRICK | #STEAK | ROCK.5″ | street | eye |
| 87 | GIRL PRO CL:foot-limping SHOE CL:shoe-taken-off OH | STONE | BOSS | FARM.F | shoe | hair |
| 88 | NEW HOME I MOVE-IN OH EMPTY NEED | FURNITURE | FARM | FEVER.5 | home | nose |
| 89 | DOWNTOWN MOVIE THERE I WATCH ANNOYED PEOPLE PRO3 RUDE CHAT++ IN | AUDIENCE | FEVER | WOOD.I | rude | tall |
| 90 | LAST_NIGHT I FEEL-LIKE SIT WATCH FIRE WRONG GONE | WOOD | IDEA | SUMMARY.H | fire | air |
| 91 | THIS MORNING CHILDREN I GO-TO PARK I LOOK-AROUND OH NONE | SWING/SLIDE | SUMMARY | BEARD.H″ | park | mouse |
| 92 | FATHER GONE 2 MONTHS CAME HOME I LOOK-STUNNED GROW | BEARD | PROBLEM | CAFETERIA.bentV″ | grow | mix |
| 93 | KNOW-THAT SMOKING MANY YEARS CAUSE (nod) | CANCER | FLOWER | NAME_2H.^20 | smoke | drink |
| 94 | RIVER I STAND LOOK-ACROSS OH MUST | BOAT | LAW | TICKET.split5 | river | hat |
| 95 | BICYCLE TIRE CL:flat NEED | PUMP | COMMUNITY | WORM.S | tire | yellow |
| 96 | CAR CL:gone-by-fast SPEED WRONG CL:pull-over GET | TICKET | WORM | GRASS.K | speed | bread |
| 97 | THIS SUMMER VACATION I WANT TRAVEL OVER-THERE | EUROPE | CLOSET | ANIMAL.B | travel | use |
| 98 | X-MAS GIFT I WRAP I LOOK-AROUND NONE | TAPE | HYPOCRITE | ELEVATOR.open8 | gift | lake |
| 99 | DAUGHTER PRO BECOME-SICK I TAP COME I GIVE | MEDICINE | ELEVATOR | WATER.1I | sick | sharp |
| 100 | WATER DRINK TOO-WARM PRO CL:cup MUST PUT-IN | ICE | TENT | FLAG.D | warm | sad |
| 101 | RESTAURANT PRO FANCY ENTER WANT MUST DRESS-UP DRESS COAT | TIE | RAINBOW | HOTEL.E | dress | snow |
| 102 | BATHROOM I ENTER MIRROR I LOOK-AT PUZZLED DIRTY | FACE | ALARM | COMPUTER.3 | mirror | sandwich |
| 103 | I TYPE++ ALL-NIGHT WRONG EYES CL:eyes-fuzzy FROM | COMPUTER | HANDBAG | DENTIST_2H.S | eyes | toes |
| 104 | CHILDREN PRO3 WANT ACTIVITY DIFF PRO3 WANT PAINTING PRO3 WANT BASKETBALL PRO3 WANT | BASEBALL | HEADACHE | APPOINTMENT_2H.bentV | children | lions |
| 105 | DAUGHTER PRO GO-TO DR TODAY NEED PRO3 WRONG I SICK CANCEL | APPOINTMENT | PERFUME | horizontal wiggle.V | daughter | father |
| 106 | RUN EVERY-MORNING RELISH I NOW MY FEET HURT BUY NEW | SHOES | JAPAN | CLOTHES.1 | feet | ears |
| 107 | GRAD PARTY COMING-UP-SOON I NEED SHOP THINGS FOOD CAKE | CHAMPAGNE | HEARING-AID | CLOTHES.1 (no contact) | shop | wait |
| 108 | #HILTON SELF FANCY (nod) | HOTEL | TAIL | EUROPE.X | Hilton | Microsoft |
| 109 | MY SON PRO I INFORM HIM TONITE EAT FINISH PRO3 MUST GO WASH | DISHES | VALLEY | FLOOR.^20 | wash | throw |
| 110 | TONITE I DO-ERRANDS MUST I COOK, VACUUM, CLEAN | FLOOR | WINK | GLASSES.1 | clean | ride |
| 111 | I LOOK-AROUND NOTICE++ CHILDREN NOWADAYS INCREASINGLY NEED | GLASSES | TRASH | CONTACT.V | children | men |
| 112 | EVERY-FRI NITE MY FAMILY LIKES WATCH BASEBALL | GAME | PLUG | CHEEK.G | watch | swim |
| 113 | BUGS CL:hovering CL:biting-me I LOOK OH | MOSQUITO | PRIEST | DENTIST.I″ | bugs | bread |
| 114 | GIRL PRO HER TOOTH BROKE GO SEE | DENTIST | POSSUM | CONTACT-LENS.A | tooth | nose |
| 115 | RESTAURANT THERE ITALIAN ITS FOOD DELICIOUS PIZZA RAVIOLI | SPAGHETTI | CONTACT-LENS | MONTH.A | pizza | taco |
| 116 | KNOW-THAT JULY MONTH ITS-TENDENCY HOT | MONTH | CRACKER | PRACTICE.C | hot | plain |
| 117 | APPLICATION YOU FILL-OUT FINISH SIGN (nod) | NAME | COP | LIBRARY.V″ | sign | sleep |
| 118 | BUY BOOK NOT-NECESSARY SIMPLY GO-TO | LIBRARY | FRIDGE | 6_2H (horizontal wiggle) | book | lamp |
| 119 | EVERY-MORNING MY FAMILY EAT EGGS #HASH BROWN | BACON | MOUSTACHE | NUT.5″ | eat | drink |
| 120 | SQUIRREL ITS FOOD BOX EMPTIED-OUT I LOOK-THERE OH RAN-OUT | NUT | MASK | RAINBOW.H | food | idea |
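As a concrete illustration of how a four-way design like this one can be assembled from the table, the following minimal sketch rotates ending categories across frames and presentation lists. The Latin-square rotation and the field names are our assumptions for illustration only, not a description of the counterbalancing procedure actually used in the study.

```python
# Minimal sketch of assembling presentation lists from the appendix table.
# Each subject sees all 120 frames, each with one of the four ending types;
# the Latin-square rotation across subjects shown here is an illustrative
# assumption, not the study's actual counterbalancing scheme.
CATEGORIES = ["congruent", "incongruent", "pseudo_sign", "gesture"]

# In practice these rows would come from the full appendix table; two
# sample frames are inlined here to keep the sketch self-contained.
rows = [
    {"frame": "BOY SLEEP IN HIS"},
    {"frame": "HEAR BARK BARK LOOK RIGHT PRO"},
]

def build_list(rows, subject_index):
    """Assign each frame one ending category, rotated by subject so that
    every category appears equally often across frames and participants."""
    return [
        {"frame": row["frame"],
         "ending": CATEGORIES[(i + subject_index) % len(CATEGORIES)]}
        for i, row in enumerate(rows)
    ]

# Four rotations cover every frame-by-category pairing across subjects.
for s in range(len(CATEGORIES)):
    print(build_list(rows, s))
```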
Footnotes
This possibility is bolstered by recent work of Albert Kim and colleagues, who have found that relative to real word controls, N400 amplitude decreases and P600 amplitude increases, parametrically, as orthographic irregularity increases (Kim & Pitkänen, submitted).
A group of 10 hearing non-signers also completed the same experiment as a control measure. All were undergraduate students at the University of California at Davis with no substantial knowledge of sign language. Like the deaf group, these subjects were uninformed as to the purpose of the study and gave informed consent in accordance with established Institutional Review Board procedures at UC Davis. However, unlike in the deaf group, no significant effects or interactions related to ending item were found in these subjects, and no further analysis of this group will be presented here.
The pseudo-signs could be one-handed or two-handed. The two-handed variants include cases where a handshape is articulated on a base hand, as well as cases in which the two hands move symmetrically; both of these patterns occur in real two-handed signs. Both the one- and two-handed pseudo-sign items are compositional forms that do not occur in ASL, and identification of the handshape alone is not sufficient to determine whether a form is a true sign or a pseudo-sign. The consensus view among the signers in our group is that our pseudo-signs are akin to legal but non-occurring items (e.g. “glack”) rather than being consistently seen as recognizable but altered lexical items (e.g. “glassu”).
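The compositional point in this footnote can be made concrete with a toy sketch: a pseudo-sign is built entirely from legal parameters, so no single parameter gives it away; only the whole combination, checked against the lexicon, decides lexicality. The parameter inventory and the tiny lexicon below are invented for illustration and are not a real ASL phonological inventory.

```python
# Toy illustration of the footnote's point: a pseudo-sign can be composed
# entirely of legal parameters (handshape, location, movement) and still be
# non-lexical, so inspecting the handshape alone cannot identify it.
# The parameter values and the "lexicon" below are invented examples,
# not a real ASL phonological inventory.
LEGAL_HANDSHAPES = {"A", "B", "5", "V", "X"}
LEGAL_LOCATIONS = {"neutral", "chin", "forehead", "chest"}
LEGAL_MOVEMENTS = {"contact", "wiggle", "circle"}

# A hypothetical lexicon of attested parameter combinations.
LEXICON = {
    ("B", "chin", "contact"),    # stands in for a real sign
    ("5", "neutral", "wiggle"),
}

def classify(handshape, location, movement):
    params = (handshape, location, movement)
    if not (handshape in LEGAL_HANDSHAPES
            and location in LEGAL_LOCATIONS
            and movement in LEGAL_MOVEMENTS):
        return "phonologically illegal"
    return "sign" if params in LEXICON else "pseudo-sign"

# Same legal handshape "B", different outcomes -- lexicality is decided by
# the whole combination, not by any single parameter:
print(classify("B", "chin", "contact"))   # sign
print(classify("B", "chest", "circle"))   # pseudo-sign (like "glack")
```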
It should also be noted that the effects seen in the present study were for gestures in a sentence context, while in Corina et al. (2007) the sign and gesture stimuli were seen in isolation.
References
- Arbib MA. Interweaving protosign and protospeech: Further developments beyond the mirror. Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems. 2005;6:145–71.
- Arbib MA. From grasp to language: Embodied concepts and the challenge of abstraction. Journal of Physiology-Paris. 2008;102:4–20. doi: 10.1016/j.jphysparis.2008.03.001.
- Barrett SE, Rugg MD. Event-related potentials and the semantic matching of faces. Neuropsychologia. 1989;27:913–922. doi: 10.1016/0028-3932(89)90067-5.
- Bentin S. Event-related potentials, semantic processes, and expectancy factors in word recognition. Brain and Language. 1987;31:308–327. doi: 10.1016/0093-934x(87)90077-0.
- Bentin S, McCarthy G, Wood CC. Event-related potentials, lexical decision, and semantic priming. Electroencephalography & Clinical Neurophysiology. 1985;60:353–355. doi: 10.1016/0013-4694(85)90008-2.
- Bobes MA, Valdés-Sosa M, Olivares E. An ERP study of expectancy violation in face perception. Brain and Cognition. 1994;26:1–22. doi: 10.1006/brcg.1994.1039.
- Brown C, Hagoort P. The processing nature of the N400: evidence from masked priming. J Cogn Neurosci. 1993;5:34–44. doi: 10.1162/jocn.1993.5.1.34.
- Capek CM, Grossi G, Newman AJ, McBurney SL, Corina D, Roeder B, et al. Brain systems mediating semantic and syntactic processing in deaf native signers: biological invariance and modality specificity. Proc Natl Acad Sci USA. 2009;106:8784–8789. doi: 10.1073/pnas.0809609106.
- Chao LL, Nielsen-Bohlman L, Knight RT. Auditory event-related potentials dissociate early and late memory processes. Electroencephalography & Clinical Neurophysiology. 1995;96:157–168. doi: 10.1016/0168-5597(94)00256-e.
- Corballis MC. The evolution of language. Ann NY Acad Sci. 2009;1156:19–43. doi: 10.1111/j.1749-6632.2009.04423.x.
- Corina DP, McBurney SL, Dodrill C, Hinshaw K, Brinkley J, Ojemann G. Functional roles of Broca’s area and SMG: evidence from cortical stimulation mapping in a deaf signer. Neuroimage. 1999;10:570–581. doi: 10.1006/nimg.1999.0499.
- Corina D, Chiu YS, Knapp H, Greenwald R, San Jose-Robertson L, Braun A. Neural correlates of human action observation in hearing and deaf subjects. Brain Research. 2007;1152:111–129. doi: 10.1016/j.brainres.2007.03.054.
- Corina DP, Knapp HP. Psycholinguistic and neurolinguistic perspectives on sign languages. In: Traxler MJ, Gernsbacher MA, editors. Handbook of psycholinguistics. 2nd ed. San Diego, CA: Academic Press; 2006. pp. 1001–1024.
- Corina DP, San Jose-Robertson L, Guillemin A, High J, Braun AR. Language lateralization in a bimanual language. Journal of Cognitive Neuroscience. 2003;15:718–730. doi: 10.1162/089892903322307438.
- Corina D, Grosvald M, Lachaud C. Perceptual invariance or orientation specificity in American Sign Language? Evidence from repetition priming for signs and gestures. Language and Cognitive Processes. (in press). doi: 10.1080/01690965.2010.549667.
- Cornejo C, Simonetti F, Ibáñez A, Aldunate N, López V, Ceric F. Gesture and metaphor comprehension: Electrophysiological evidence of cross-modal coordination by audiovisual stimulation. Brain and Cognition. 2009;70:42–52. doi: 10.1016/j.bandc.2008.12.005.
- Delorme A, Makeig S. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics. Journal of Neuroscience Methods. 2004;134:9–21. doi: 10.1016/j.jneumeth.2003.10.009.
- Emmorey K. Language, cognition, and the brain: Insights from sign language research. Mahwah, NJ: Lawrence Erlbaum; 2002.
- Emmorey K, Xu J, Gannon P, Goldin-Meadow S, Braun A. CNS activation and regional connectivity during pantomime observation: No engagement of the mirror neuron system for deaf signers. NeuroImage. 2010;49:994–1005. doi: 10.1016/j.neuroimage.2009.08.001.
- Friedrich C, Eulitz C, Lahiri A. Not every pseudoword disrupts word recognition: an ERP study. Behavioral and Brain Functions. 2006;2:36. doi: 10.1186/1744-9081-2-36.
- Friederici AD. Towards a neural basis of auditory sentence processing. Trends Cogn Sci. 2002;6:78–84. doi: 10.1016/s1364-6613(00)01839-8.
- Frishberg N. Ghanaian Sign Language. In: Van Cleve J, editor. Gallaudet encyclopaedia of deaf people and deafness. New York: McGraw-Hill Book Company; 1987.
- Ganis G, Kutas M. An electrophysiological study of scene effects on object identification. Cognitive Brain Research. 2003;16:123–144. doi: 10.1016/s0926-6410(02)00244-6.
- Ganis G, Kutas M, Sereno MI. The search for “common sense”: An electrophysiological study of the comprehension of words and pictures in reading. Journal of Cognitive Neuroscience. 1996;8:89–106. doi: 10.1162/jocn.1996.8.2.89.
- Gentilucci M, Corballis M. From manual gesture to speech: A gradual transition. Neuroscience & Biobehavioral Reviews. 2006;30:949–960. doi: 10.1016/j.neubiorev.2006.02.004.
- Goldin-Meadow S. Hearing gestures: How our hands help us think. Cambridge, MA: Harvard University Press; 2003.
- Greenhouse SW, Geisser S. On methods in the analysis of profile data. Psychometrika. 1959;24:95–112.
- Hagoort P, Brown C. Brain responses to lexical ambiguity resolution and parsing. In: Frazier L, Clifton C Jr, Rayner K, editors. Perspectives on sentence processing. Hillsdale, NJ: Lawrence Erlbaum Associates; 1994. pp. 45–80.
- Hagoort P, Kutas M. Electrophysiological insights into language deficits. In: Boller F, Grafman J, editors. Handbook of neuropsychology. Amsterdam: Elsevier; 1995. pp. 105–134.
- Hagoort P, van Berkum J. Beyond the sentence given. Philos Trans R Soc Lond B Biol Sci. 2007;362:801–11. doi: 10.1098/rstb.2007.2089.
- Holcomb PJ, McPherson WB. Event-related brain potentials reflect semantic priming in an object decision task. Brain and Cognition. 1994;24:259–276. doi: 10.1006/brcg.1994.1014.
- Holcomb PJ, Neville HJ. Auditory and visual semantic priming in lexical decision: A comparison using event-related brain potentials. Language and Cognitive Processes. 1990;5:281–312.
- Holcomb PJ, Neville HJ. Natural speech processing: An analysis using event-related brain potentials. Psychobiology. 1991;19:286–300.
- Holle H, Gunter TC. The role of iconic gestures in speech disambiguation: ERP evidence. J Cogn Neurosci. 2007;19:1175–92. doi: 10.1162/jocn.2007.19.7.1175.
- Johnson BW, Hamm JP. High-density mapping in an N400 paradigm: Evidence for bilateral temporal lobe generators. Clinical Neurophysiology. 2000;111:532–545. doi: 10.1016/s1388-2457(99)00270-9.
- Johnson R Jr, Donchin E. P300 and stimulus categorization: Two plus one is not so different from one plus one. Psychophysiology. 1980;17:167–178. doi: 10.1111/j.1469-8986.1980.tb00131.x.
- Kegl J, Senghas A, Coppola M. Creation through contact: Sign language emergence and sign language change in Nicaragua. In: DeGraff M, editor. Language creation and language change: Creolization, diachrony, and development. Cambridge, MA: MIT Press; 1999. pp. 179–237.
- Kelly SD, Kravitz C, Hopkins M. Neural correlates of bimodal speech and gesture comprehension. Brain and Language. 2004;89:243–260. doi: 10.1016/S0093-934X(03)00335-3.
- Kim A, Osterhout L. The independence of combinatory semantic processing: evidence from event-related potentials. Journal of Memory and Language. 2005;52:205–225.
- Kim A, Pitkänen I. Dissociation of ERPs to structural and semantic processing difficulty during sentence-embedded pseudoword processing. (submitted)
- Kutas M, Hillyard SA. Reading senseless sentences: Brain potentials reflect semantic incongruity. Science. 1980;207:203–208. doi: 10.1126/science.7350657.
- Kutas M, Hillyard SA. Brain potentials during reading reflect word expectancy and semantic association. Nature. 1984;307:161–163. doi: 10.1038/307161a0.
- Kutas M, Neville HJ, Holcomb PJ. A preliminary comparison of the N400 response to semantic anomalies during reading, listening, and signing. Electroencephalography and Clinical Neurophysiology, Supplement. 1987;39:325–330.
- Liddell SK. Grammar, gesture, and meaning in American Sign Language. Cambridge, UK: Cambridge University Press; 2003.
- Lopez-Calderon J, Luck S. ERPLAB: a plug-in for EEGLAB, in development at the Center for Mind and Brain, University of California, Davis. (forthcoming)
- MacSweeney M, Campbell R, Woll B, Giampietro V, David AS, McGuire PK, Calvert GA, Brammer MJ. Dissociating linguistic and nonlinguistic gestural communication in the brain. NeuroImage. 2004;22:1605–1618. doi: 10.1016/j.neuroimage.2004.03.015.
- McPherson WB, Holcomb PJ. An electrophysiological investigation of semantic priming with pictures of real objects. Psychophysiology. 1999;36:53–65. doi: 10.1017/s0048577299971196.
- Meir I, Sandler W, Padden C, Aronoff M. Emerging sign languages. In: Marschark M, Spencer P, editors. Oxford handbook of deaf studies, language, and education. Vol. 2. New York: Oxford University Press; 2010.
- Morford JP, Kegl JA. Gestural precursors to linguistic constructs: How input shapes the form of language. In: McNeill D, editor. Language and gesture. Cambridge, UK: Cambridge University Press; 2000. pp. 358–387.
- Neville HJ, Bavelier D, Corina D, Rauschecker J, Karni A, Lalwani A, Braun A, Clark V, Jezzard P, Turner R. Cerebral organization for language in deaf and hearing subjects: biological constraints and effects of experience. Proc Natl Acad Sci USA. 1998;95:922–29. doi: 10.1073/pnas.95.3.922.
- Neville HJ, Coffey SA, Lawson DS, Fischer A, Emmorey K, Bellugi U. Neural systems mediating American Sign Language: Effects of sensory experience and age of acquisition. Brain and Language. 1997;57:285–308. doi: 10.1006/brln.1997.1739.
- Neville HJ, Nicol JL, Barss A, Forster KI, Garrett MF. Syntactically based sentence processing classes: Evidence from event-related brain potentials. Journal of Cognitive Neuroscience. 1991;3:151–165. doi: 10.1162/jocn.1991.3.2.151.
- Nigam A, Hoffman JE, Simons RF. N400 and semantic anomaly with pictures and words. Journal of Cognitive Neuroscience. 1992;4:15–22. doi: 10.1162/jocn.1992.4.1.15.
- Osterhout L, Holcomb P. Event-related brain potentials elicited by syntactic anomaly. Journal of Memory and Language. 1992;31:785–806.
- Ozyürek A, Willems RM, Kita S, Hagoort P. On-line integration of semantic information from speech and gesture: Insights from event-related brain potentials. Journal of Cognitive Neuroscience. 2007;19:605–616. doi: 10.1162/jocn.2007.19.4.605.
- Pratarelli ME. Semantic processing of pictures and spoken words: Evidence from event-related brain potentials. Brain and Cognition. 1994;24:137–157. doi: 10.1006/brcg.1994.1008.
- Rizzolatti G, Arbib MA. Language within our grasp. Trends in Neurosciences. 1998;21:188–194. doi: 10.1016/s0166-2236(98)01260-0.
- Rugg MD, Nagy ME. Lexical contribution to non-word-repetition effects: Evidence from event-related potentials. Memory and Cognition. 1987;15:473–481. doi: 10.3758/bf03198381.
- Senghas A. Language emergence: Clues from a new Bedouin sign language. Current Biology. 2005;15:463–465.
- Sitnikova T, Holcomb PJ, Kiyonaga KA, Kuperberg GR. Two neurocognitive mechanisms of semantic integration during the comprehension of real-world events. Journal of Cognitive Neuroscience. 2008;20:2037–2057. doi: 10.1162/jocn.2008.20143.
- Sitnikova T, Kuperberg G, Holcomb PJ. Semantic integration in videos of real-world events: An electrophysiological investigation. Psychophysiology. 2003;40:160–164. doi: 10.1111/1469-8986.00016.
- Tomasello M. Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press; 2005.
- Van Petten C, Rheinfelder H. Conceptual relationships between spoken words and environmental sounds: Event-related brain potential measures. Neuropsychologia. 1995;33:485–508. doi: 10.1016/0028-3932(94)00133-a.
- West WC, Holcomb PJ. Event-related potentials during discourse-level semantic integration of complex pictures. Cognitive Brain Research. 2002;13:363–75. doi: 10.1016/s0926-6410(01)00129-x.
- Wilcox S. Gesture and language: Cross-linguistic and historical data from signed languages. Gesture. 2004;4:43–73.
- Wu YC, Coulson S. Meaningful gestures: Electrophysiological indices of iconic gesture comprehension. Psychophysiology. 2005;42:654–667. doi: 10.1111/j.1469-8986.2005.00356.x.
- Wu YC, Coulson S. How iconic gestures enhance communication: An ERP study. Brain Lang. 2007a;101:234–245. doi: 10.1016/j.bandl.2006.12.003.
- Wu YC, Coulson S. Iconic gestures prime related concepts: An ERP study. Psychon B Rev. 2007b;14:57–63. doi: 10.3758/bf03194028.
- Ziegler JC, Besson M, Jacobs AM, Nazir TA, Carr TH. Word, pseudoword, and nonword processing: a multitask comparison using event-related brain potentials. J Cogn Neurosci. 1997;9:758–775. doi: 10.1162/jocn.1997.9.6.758.