Abstract
One of the key questions in the study of human language acquisition is the extent to which the development of neural processing networks for different components of language is modulated by exposure to linguistic stimuli. Sign languages offer a unique perspective on this issue, because prelingually Deaf children who receive access to complex linguistic input later in life provide a window into brain maturation in the absence of language, and into the subsequent neuroplasticity of neurolinguistic networks during late language learning. While the duration of sensitive periods for the acquisition of linguistic subsystems (sound, vocabulary, and syntactic structure) is well established on the basis of L2 acquisition in spoken language, for sign languages the relative timelines for the development of neural processing networks for linguistic sub-domains are unknown. We examined the neural responses of a group of Deaf signers who received access to signed input at varying ages to three linguistic phenomena at the levels of classifier signs, syntactic structure, and information structure. The amplitude of the N400 response to the marked word order condition negatively correlated with the age of acquisition for syntax and information structure, indicating increased cognitive load in these conditions. Additionally, the combination of behavioral and neural data suggested that late learners preferentially relied on classifiers over word order for meaning extraction. This suggests that late acquisition of sign language significantly increases cognitive load during the analysis of syntax and information structure, but not word-level meaning.
Keywords: age of acquisition, plasticity, sign language, N400, word order, semantics, syntax
1. Introduction
The organization of neural circuits which process human language is amenable to rapid changes during a limited window in early development, known as the sensitive period (Hartshorne et al., 2018; Malaia and Wilbur, 2010). One of the key questions in the study of human language acquisition is the extent to which the development of neural networks for processing different components of language is modulated by exposure to linguistic stimuli.
This question is difficult to address on the basis of spoken language, because for speech, normal language acquisition can begin even before birth (cf. Partanen et al., 2013). This is not the case for sign language acquisition. Only about 5% of Deaf children are born to Deaf parents and acquire sign language from birth (Mitchell & Karchmer, 2004, for American Sign Language, ASL). The vast majority of Deaf children are born to hearing parents and have no access to sign language from birth. These children are in unique developmental circumstances: they neither hear speech nor see sign language around them. Prelingual deafness thus dissociates brain maturation from linguistic experience, the onset of which can vary widely. This naturally occurring situation provides a window into the relationship between neuroplasticity of language-processing networks at various stages of development and the relevance of complex input (Malaia, Talavage & Wilbur, 2014; Malaia & Wilbur, 2018). Thus, Deaf signers receiving access to signed input at varying ages offer a unique perspective on the effects of the age of acquisition (AoA). For spoken languages, the critical windows for acquisition of specific components of language – phonology, lexicon, and syntax – have been established almost entirely on the basis of L2 acquisition (Hartshorne et al., 2018). However, L2 proficiency depends strongly on L1 proficiency, as has been demonstrated for spoken and sign languages alike. Here we report the first study investigating the neurophysiological effects of the age of sign language acquisition on the processing of structures across linguistic domains (the interface of word order with lexical structure and topicalization).
1.1. Effects of Age of acquisition on different linguistic levels
Early acquisition of a natural language, signed or spoken, has been shown to be the basis of proficiency in the first language and of the ability to learn subsequent languages later in life (Mayberry, 2007). Neuroimaging evidence shows that people who acquire a natural language in the normal timeframe possess specialized linguistic abilities and brain functions that differ from those of people whose exposure to natural language is delayed or absent (Neville et al., 1997; Newman et al., 2002; Mayberry et al., 2011). In spoken language, later learners of a second language have been shown to have attenuated sensitivity to grammatical and semantic violations as indexed by ERP components such as the N400 and P600, with higher sensitivity to violations presented in the auditory domain (Meulman et al., 2014).
Hernandez and Li (2007) suggested an integrative theory of AoA effects on both linguistic and non-linguistic domains, based on the sensorimotor learning framework. The framework suggests that, in young learners, sensory processing and sensorimotor integration abilities underlie both language perception and production. This theory accounts for the difficulty of learning the phonological rules of a language later in life, manifested, for example, in the accents of late learners; it also accounts for a relatively longer acquisition window for word learning. Within this framework, the reliance on suprasegmental cues marking syntactic structures predicts that both phonology and syntax should be more sensitive to AoA than semantics.
A somewhat different view on syntax acquisition is offered by Hartshorne et al. (2018), who investigated behavioral proficiency data in 600,000 English speakers (both L1 and L2), and demonstrated that grammar-learning ability is preserved throughout childhood and declines rapidly in late adolescence (16–17 years of age). One caveat to this study is that all late acquirers in the participant pool were L2 learners; hence, the results are based on an interaction of variables (AoA for L1 and AoA for L2, which might be grounded in L1). Thus, as late acquisition of language rarely happens for speech, the key insights into dissociation of AoA and L1 learning have been provided by sign language research.
1.2. Effects of Age of sign language acquisition
Research on late acquisition of sign language as L1 suggests that the neural potential for the L1 acquisition mechanism might decline along a gradient, rather than at a strict cutoff, despite the cutoffs demonstrated by behavioral data. Mayberry et al. (2011) have shown that the functional organization of the adult brain is affected by AoA for sign language, such that age of acquisition (0 to 14 years) is linearly and negatively related to activation levels in anterior language regions and positively related to activation levels in posterior visual regions. These effects were observed for both shallow sensory-phonetic processing and syntactic processing in ASL. In contrast, behavioral data indicated that AoA affected sensitivity to ASL syntactic structures, but not to sensory-phonetic processing (Mayberry et al., 2011). This dichotomy between neural and behavioral results in late learners suggests that the cumulative bias toward visual features of the input in late learners reflects an underlying difference in resource allocation: late learners lean more on sensory/visual features within the language processing stream, while early learners have access to neural algorithms for deep (syntactic) processing based on frontal brain networks. Overall, generalized algorithms for syntax processing are more engaged with earlier AoA, while with late AoA the information is processed by sensorimotor networks; the cumulative effect of the change in resource allocation is reflected in less sensitive behavioral paradigms as a sharp proficiency cutoff.
There is ample behavioral evidence indicating that AoA affects sign language processing and proficiency across multiple linguistic domains. Proficiency in morphology and syntax appears to be most closely associated with AoA. Deaf non-native signers were shown to be less sensitive to verb agreement violations than Deaf native signers (Emmorey, Bellugi, Friederici, & Horn, 1995), and the accuracy of grammaticality judgments for ASL sentences declines as a linear function of AoA (Boudreault & Mayberry, 2006). The influence of AoA has also been tested in the context of morphologically complex sign language classifier constructions. Sign language classifiers are specific handshapes that are bound to verbs to express handling, motion, and/or location of the referents (Frishberg, 1975). The classifier handshape denotes physical and geometrical properties of the entities: it can refer to the participant involved in the action (e.g., a human being) and/or the shape of the entity (e.g., something that is long and thin) (Wilbur, Bernstein, & Kantor, 1985). Classifier handshapes are linked to previously introduced entities and can be used to refer back to previously established referents (Supalla, 1986). The structure of some classifier constructions can be highly iconic, i.e., their form shows a close relationship to their meaning. With regard to the production of sign language classifier constructions, it has been shown that late learners produce fewer classifiers and instead prefer to use simpler constructions (without classifiers) when compared to native signers (Newport, 1990, 1988 for ASL; Karadöller, Sümer, & Özyürek, 2017 for Turkish Sign Language, TİD).
Notably, it appears that the length of language experience in late signers does not correlate with proficiency. Newport (1990) observed that linguistic performance on a test of verb morphology declined as a linear function of AoA in Deaf signers who had used ASL for a minimum of 30 years. A similar result was obtained by Mayberry and Eichen (1991), who examined the recall of complex ASL sentences by Deaf late signers with a minimum of 20 years’ exposure to ASL. Single-case studies (Ferjan Ramirez et al., 2016) indicated that when sign language acquisition began in adolescence, neural responses to sign languages remained atypical in terms of distribution and amplitude even after 15 months of language exposure, although the localization of responses to highly familiar signed words became more concentrated in the left perisylvian language network. AoA also appears to influence phonological and lexical processing. Mayberry and Fischer (1989) reported different error patterns for native and non-native signers in narrative shadowing tasks and signed sentence recall: while native signers’ errors were primarily associated with the semantics of the stimulus, non-native signers made errors associated with the phonology of the stimuli. These phonological errors in the production of late learners were negatively correlated with signers’ comprehension accuracy as well as AoA (Mayberry & Eichen, 1991; Mayberry & Fischer, 1989). Eye-tracking studies of ASL lexical recognition indicated that native Deaf signers are sensitive to the phonological structure of signs during lexical recognition, while non-native Deaf signers are not, suggesting that late signers’ mental lexicon is organized differently than that of early learners (Lieberman, Borovsky, Hatrak, & Mayberry, 2015, 2016). The effects of AoA have been clearly demonstrated not only across linguistic domains, but also across different, unrelated sign languages. For example, in both ASL and British Sign Language (BSL), native and non-native Deaf signers weight phonological features differently (Hildebrandt & Corina, 2002 on ASL; Orfanidou, Adam, Morgan, & McQueen, 2010 on BSL). The impact of AoA on lexical decision has been demonstrated for both Spanish Sign Language (LSE) and BSL (Carreiras, Gutiérrez-Sigut, Baquero, & Corina, 2008 on LSE; Dye & Shih, 2006 on BSL). There are, however, components of language proficiency that appear relatively unaffected by AoA, such as processing of basic word order (Mayberry, Cheng, Hatrak, & Ilkbasaran, 2017; Newport, 1990). This suggests a closer interaction between perceptual and linguistic universals than previously thought.
1.3. The present study
The current investigation examined the relationship between the age of sign language acquisition, on the one hand, and neural activity during the processing of multiple interacting linguistic structures, on the other. Our goal was to identify the relative malleability of linguistic processing across domains to the age of sign language acquisition. We focused on online processing of linguistic structures in Austrian Sign Language (ÖGS) using EEG recording (cf. Neville et al., 1997). All of the participants were, by the time of the experiment, proficient users of ÖGS, but they acquired it at very different points in life, from birth to after 22 years of age.
ÖGS is organized hierarchically, like other sign and spoken languages: lexical items are combined into sentences using inflectional and derivational morphemes, and sentence structure is governed by syntactic rules (Schalber, 2006a, 2006b). The psycholinguistic processes that underlie ÖGS comprehension are similar to those observed in native signers of other sign languages (Krebs et al., 2018; Leonard et al., 2012; Malaia et al., 2012), and, with modality-dependent caveats, in users of spoken languages (Blumenthal-Dramé and Malaia, 2019; Malaia & Wilbur, 2019).
To investigate the relationship among different levels of linguistic processing (classifier signs, syntactic structure, and information structure, represented here by topicalization1), the present study used a manipulation of the basic word order (subject-object-verb, SOV) vs. marked word order (object-subject-verb, OSV) in three different sentence types: simple sentences, sentences with iconic signs (classifiers), and sentences with marked information structure (non-manual/facial topic marking). For languages with canonical SOV word order, but possible OSV word order, the less typical OSV sentence structure incurs increased processing costs during perception, reflected behaviorally in longer reading times for spoken languages (e.g. Schlesewsky, Fanselow, Kliegl, & Krems, 2000), and lower acceptability ratings with longer probe reaction times (e.g. Bornkessel, McElree, Schlesewsky, & Friederici, 2004; Haupt et al., 2008). For multiple languages, including ÖGS, a "reanalysis N400" has been observed in the neural response to OSV vs. SOV word order (e.g. Haupt et al., 2008; Krebs et al., 2018; Krebs et al., 2019). In particular, for ÖGS, which has basic SOV order, a "reanalysis N400" was observed for the processing of locally ambiguous OSV compared to SOV orders. This reanalysis effect reflects the tendency of the human parser to interpret a sentence-initial syntactically ambiguous argument as the subject of the clause (subject preference). The preference towards SOV requires reanalysis when parsing locally ambiguous OSV orders, which in turn leads to increased cognitive load for OSV in contrast to SOV orders (Krebs et al., 2018). Based on previous behavioral results (Newport, 1990; Mayberry et al., 2017), we hypothesized that AoA would be correlated with attenuated sensitivity to word order, indexed by a reanalysis-driven N400 ERP component (cf. Meulman et al., 2014 for spoken languages, and Krebs et al., 2018 for signed). Thus, we expected sign language users with earlier AoA to produce a stronger N400 in response to OSV word order and topic marking.
For the classifiers, however, two different hypotheses can be formulated on the basis of the previous literature. On the one hand, previous work showed that classifier acquisition is difficult for L1 and L2 learners of sign languages (Schick, 1987; Slobin et al., 2003). During L1 acquisition, classifier acquisition is protracted and error-prone (Schick, 1987), and correct use of classifiers is not fully acquired until children are in their early teens (e.g. Slobin et al., 2003). The correct use of classifier constructions is also challenging for hearing L2 learners (e.g. Marshall & Morgan, 2015) as well as for Deaf late signers (Newport, 1988, 1990; Karadöller et al., 2017). Thus, the processing of classifier constructions could be influenced by AoA in the present study as well. On the other hand, it has been shown that although the comprehension of classifiers is affected by AoA, it remains, similar to basic word order, relatively intact in contrast to other morphosyntactically complex constructions (Boudreault & Mayberry, 2006). In addition, classifier comprehension has been shown to be relatively unproblematic for hearing L2 learners, in contrast to classifier production (Marshall & Morgan, 2015). In the present study we examine classifier comprehension, which might (according to the previous literature) be relatively unaffected by AoA. The two mutually exclusive possibilities for the processing of classifier constructions are:
if classifier constructions were processed similarly by all participants, regardless of the age of sign language acquisition, we would expect an N400 effect for the marked word order condition (OSV) in comparison to the unmarked SOV order for early and late learners (i.e. amplitude of N400 in response to OSV would not differ between early and late learners);
if, however, signers with later AoA engaged a qualitatively different processing strategy for the classifiers, such as reliance on their iconicity, instead of their morphosyntactic properties, ERPs to sentences containing classifiers with SOV vs. OSV word order would not differ for late learners (i.e., amplitude of N400 in response to OSV would differ between early and late learners).
Combining classifiers, word order, and topicalization in this investigation of ÖGS processing allowed us to tap into the effects of age of language acquisition at the interfaces of vocabulary, syntax, and information structure.
2. Methods
2.1. Participants
From the 25 persons who participated, 20 (9 females) were included in the final analysis. Only participants who were both proficient and fluent in sign production, as evaluated by a certified sign language interpreter, and who used sign language as their primary means of communication in daily life, were accepted into the study. Four were excluded due to artifacts in the EEG data (less than 70% of critical trials remaining after artifact correction); one participant was excluded due to behavioral noncompliance. The mean age of the remaining 20 participants was 39.37 years (sd = 10.19; range = 28 to 58 years). All participants were born Deaf or lost their hearing early in life (prelingual deafness), and had no concomitant neurological disorders. Three had Deaf parents; the others had hearing parents. Due to privacy concerns, age of sign language acquisition was coded within approximate ranges: 0–3 (n=5), 4–7 (n=10), 13–17 (n=1), 18–22 (n=1), and >22 (n=3). Fifteen were right-handed, four left-handed, and one did not have a dominant hand preference (tested by an adapted German version of the Edinburgh Handedness Inventory; Oldfield, 1971). At the time of the study none showed any neurological or psychological disorders. All had normal or corrected vision and were not under the influence of medication or other substances which may impact cognitive ability. The late learners in the cohort had contact with other languages before learning ÖGS: the first language they acquired, as Deaf learners, was either a spoken language (n = 4) or another sign language (n = 1). All of the participants used ÖGS as their primary language in daily life and are members of the Deaf community in Austria. As self-reports are not considered a solid basis for evaluating participants’ skills in their “first language” (e.g. Cormier, Schembri, Vinson, & Orfanidou, 2012), the participants’ proficiency was independently evaluated by an ÖGS interpreter. Only data from proficient participants who understood and carried out the rating task correctly were used in the analysis.
2.2. Stimuli and task design
To investigate the processing of marked word order (OSV, as compared to SOV), we used material comparable to that used for testing subject/object ambiguities in spoken languages. In sign languages, discourse referents are referenced in space, e.g. by index signs (a kind of pointing). Verb agreement can be expressed either by path movement of the agreeing verb (in the regular case) from the position in space associated with the subject to the position associated with the object referent, or by facing of the palm/fingertips towards the object referent. In ÖGS, the marked OSV order can be used in specific contexts, such as in sentences with agreeing verbs. In both SOV and OSV conditions, argument noun phrases were signed in the same order and were referenced at the same points in space; i.e., the first argument was always referenced at the left side of the signer. After both arguments were referenced in space by an index sign, the disambiguating agreeing verb unambiguously marked the argument structure by path movement and/or facing (Figure 1). 40 sentences were presented in each simple word order condition (40 with SOV word order; 40 with OSV word order), signed by a Deaf native ÖGS signer.
Figure 1.
A static word-by-word representation of ÖGS sentence in SOV vs. OSV (simple word order) condition. The critical signs are marked by arrows.
In the Information Structure conditions, topic marking was expressed by a combination of non-manuals such as raised eyebrows, wide eyes, chin directed towards the chest and an enhanced mouthing. The index (pointing sign) referencing the topic (sentence-initial) argument was also followed by a pause, during which the index sign was held in space (Figure 2). 40 sentences were presented in each Information Structure condition (40 with SOV word order and topic marking; 40 with OSV word order and topic marking).
Figure 2.
A static word-by-word representation of ÖGS sentence; A denotes unmarked conditions; B denotes conditions marked by topic non-manuals. Topic marking accompanies the sentence-initial argument and the index sign referencing this argument (framed section). Topic marking is expressed by raised eyebrows, wide eyes, chin directed towards the chest and an enhanced mouthing. The critical signs disambiguating argument structure are marked by arrows.
The classifier conditions included locally ambiguous classifier constructions, which expressed a spatial relationship between two human arguments. The sentence-initial argument was always referenced on the left side of the signer by a whole entity classifier. After both arguments were referenced and located in space, a classifier predicate indicated the spatial relation between them (e.g. a man moves towards/away from another man). In the SOV word order condition, the classifier referencing the first argument moved in relation to the argument referenced second; in the OSV condition, the classifier referencing the second argument moved in relation to the argument referenced first (Figure 3). 40 sentences were presented in each classifier condition (40 with SOV word order and classifier; 40 with OSV word order and classifier).
Figure 3.
A static word-by-word representation of ÖGS sentence in SOV vs. OSV conditions with classifier predicates. After both arguments are indexed in space by classifier handshapes, the sentence-final classifier predicate indicates the spatial relation between the arguments by movement from subject to object location. The critical signs are marked by arrows.
Sentence order was pseudo-randomized among the 6 conditions (simple SOV, simple OSV, topic-marked SOV, topic-marked OSV, SOV with classifier, OSV with classifier), for a total of 240 critical sentences interspersed with 40 fillers (time-reversed videos) to ensure behavioral compliance and distract from strategic processing.
2.3. Procedure
The videos were presented on a screen (35.3 × 20 cm in size); participants sat 1 meter away from the screen. Material was presented in 20-sentence blocks (14 in total). Every trial started with the presentation of a fixation cross to focus participants’ attention. The fixation cross, on the screen for 2000 ms, was followed by an empty black screen for 200 ms. Then a stimulus sentence (one video) was presented in the middle of the screen. Each trial ended with a rating task, indicated by a green question mark shown for 3000 ms after each stimulus. Participants had to rate the videos on a scale from one to seven as to whether the stimulus was good ÖGS or not (1 stood for ‘that is not ÖGS’; 7 stood for ‘that is good ÖGS’). Ratings were given by button press on a keyboard. Instructions were given in an ÖGS video signed by one of the authors. Prior to the actual experiment, a training block was presented to familiarize subjects with the task requirements and permit them to ask questions in case anything was unclear. The duration of breaks after each block was determined by the subjects themselves. Participants were instructed to avoid eye movements and other motions during the presentation of the video material and to view the sentences with attention.
2.4. Electrophysiological Recordings
The EEG was recorded from twenty-six electrodes (Fz, Cz, Pz, Oz, F3/4, F7/8, FC1/2, FC5/6, T7/8, C3/4, CP1/2, CP5/6, P3/7, P4/8, O1/2) fixed on the participant’s scalp by means of an elastic cap (Easy Cap, Herrsching-Breitbrunn, Germany). Horizontal eye movements (HEOG) were registered by electrodes at the lateral ocular muscles and vertical eye movements (VEOG) were recorded by electrodes fixed above and below the left eye. All electrodes were referenced against the electrode on the left mastoid bone and offline re-referenced against the averaged electrodes at the left and right mastoid. The AFz electrode functioned as the ground electrode. The EEG signal was recorded with a sampling rate of 500 Hz. For amplifying the EEG signal we used a Brain Products amplifier (high pass: 0.01 Hz). In addition, a notch filter of 50 Hz was used. The electrode impedances were kept below 5 kΩ. Offline, the signal was filtered with a bandpass filter (Butterworth Zero Phase Filters; high pass: 0.1 Hz, 48 dB/Oct; low pass: 20 Hz, 48 dB/Oct).
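For illustration, the offline re-referencing and band-pass filtering steps described above could be sketched as follows. This is a minimal sketch, not the authors' pipeline: it assumes MNE-Python, a hypothetical BrainVision file name, hypothetical mastoid channel labels, and uses MNE's default FIR filter as a stand-in for the Butterworth zero-phase filters specified in the text.

```python
# Minimal sketch (not the authors' exact pipeline): load, re-reference, filter.
# The file name and mastoid channel names are assumptions for illustration only.
import mne

raw = mne.io.read_raw_brainvision("subject01.vhdr", preload=True)

# Offline re-reference to the average of the left and right mastoids
# (the online reference was the left mastoid electrode).
raw.set_eeg_reference(ref_channels=["M1", "M2"])

# Offline band-pass filter, 0.1-20 Hz, as specified in the recording section.
raw.filter(l_freq=0.1, h_freq=20.0)
```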
2.5. Analysis
The statistical evaluation of the EEG data was carried out by comparing the mean amplitude values per time window, per condition, and per participant using the following electrodes: anterior left = F7, F3, FC5; anterior right = F8, F4, FC6; central left = FC1, CP5, CP1; central right = FC2, CP6, CP2; posterior left = P7, P3, O1; and posterior right = P8, P4, O2. The signal was corrected for ocular artifacts by the Gratton and Coles method (Gratton, Coles, & Donchin, 1983) and automatically screened for artifacts (minimal/maximal amplitude at −75/+75 μV). Data were baseline-corrected to −300 to 0 ms, as appropriate for longer analysis windows in sign language (Krebs et al., 2018). For each condition no more than 15% of the trials were excluded. Participants were excluded from analysis if less than 70% of the critical trials remained after artifact correction. The mean amplitude of each participant’s N400 response was computed for each word order (SOV vs. OSV) within each condition (simple word order, topic, classifier) on all 18 electrodes listed above.
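The epoching and per-cluster mean-amplitude computation described above can be illustrated with the following minimal sketch, again assuming MNE-Python. Event codes, epoch limits beyond the stated baseline, and condition names are hypothetical; note also that MNE's built-in `reject` criterion is peak-to-peak rather than an absolute ±75 μV threshold, so it serves only as a stand-in for the screening procedure used in the study.

```python
# Minimal sketch: epoching, baseline correction, and mean-amplitude extraction
# over the electrode clusters listed above. Names and codes are hypothetical.
import mne

events, event_id = mne.events_from_annotations(raw)   # 'raw' from the previous sketch
epochs = mne.Epochs(raw, events, event_id,
                    tmin=-0.3, tmax=1.0,               # includes the -300 to 0 ms baseline
                    baseline=(-0.3, 0.0),
                    reject=dict(eeg=150e-6),           # stand-in for the +/-75 microvolt screen
                    preload=True)

clusters = {
    "anterior_left":  ["F7", "F3", "FC5"],   "anterior_right":  ["F8", "F4", "FC6"],
    "central_left":   ["FC1", "CP5", "CP1"], "central_right":   ["FC2", "CP6", "CP2"],
    "posterior_left": ["P7", "P3", "O1"],    "posterior_right": ["P8", "P4", "O2"],
}

def mean_amplitude(epochs, condition, channels, tmin, tmax):
    """Mean amplitude (in microvolts) across trials, channels, and time window."""
    data = epochs[condition].copy().crop(tmin, tmax).get_data(picks=channels)
    return data.mean() * 1e6

# e.g., a hypothetical N400 window for the simple-word-order OSV condition (200-400 ms):
# mean_amplitude(epochs, "simple_OSV", clusters["central_left"], 0.2, 0.4)
```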
Previous experimental findings revealed reanalysis effects for locally ambiguous OSV compared to SOV structures in ÖGS bound to the time point when the transitional movement of the index referencing the second argument towards the verb sign (i.e. towards the target handshape of the verb sign) was visible (Krebs et al., 2018; Krebs et al., 2019). Therefore, in the present study ERPs were measured with respect to the time point when the transitional trajectory towards the disambiguating verb was visible (for simple and topic orders involving lexical agreeing verbs) or with respect to the time point (first video frame) at which the hand that referenced the subject starts to move (for classifier predicates). Trigger markers were determined by a qualified ÖGS interpreter, who assessed the video recording frame-by-frame.
Time windows for assessing the difference between conditions were determined based on the sign language processing literature (Hosemann et al., 2018; Krebs et al., 2018). In the simple word order and topic conditions, the 200 to 400 ms window post-onset of the critical signs was used for calculation of the N400 response to the OSV condition (Krebs et al., 2018). In the classifier condition, a 300 to 800 ms window was used (Hosemann et al., 2018). To correct for violations of sphericity, the Greenhouse–Geisser (1959) correction was applied to repeated measures with greater than one degree of freedom. Kendall’s τ was used to assess the correlation between ranked ranges of AoA and the mean amplitudes of each participant’s N400 response. For the behavioral data, we applied Kendall’s τ to assess correlations between AoA and mean acceptability ratings, as well as response times2.
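As a concrete illustration of the correlation analysis described above, the minimal sketch below computes Kendall's τ between ranked AoA groups and per-participant mean N400 amplitudes using SciPy. All numbers shown are invented placeholders, not data from the study.

```python
# Minimal sketch of the rank correlation described above; values are made up.
from scipy.stats import kendalltau

aoa_rank = [1, 1, 2, 2, 3, 4, 5]                        # ranked AoA groups (1 = 0-3 ... 5 = >22 years)
n400_osv = [-3.1, -2.8, -2.2, -2.4, -1.5, -0.9, -0.4]   # hypothetical mean amplitudes (microvolts)

tau, p_value = kendalltau(aoa_rank, n400_osv)           # tau-b, which handles ties in the ranked groups
print(f"Kendall's tau = {tau:.3f}, p = {p_value:.3f}")
```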
3. Results
3.1. Behavioral results
No correlation was identified between AoA and reaction times (all ps>.09).
Later age of sign language acquisition was positively correlated with the probability of a high grammaticality rating (“this is good ÖGS”) for SOV word order in the simple word order condition (τ = .283, p<.001), the topic-marked condition (τ = .344, p<.001), and the classifier condition (τ = .301, p<.001), as well as for OSV word order in the topic-marked condition (τ = .062, p<.043) and the classifier condition (τ = .190, p<.001). Mean acceptability ratings and mean reaction times per condition are presented in Table 1.
Table 1.
Mean ratings, mean reaction times, and standard deviations (SD) for each of the experimental conditions.

Mean acceptability rating (SD)

| Condition | SOV | OSV |
| --- | --- | --- |
| Word order | 6.10 (0.90) | 5.89 (1.07) |
| Topic | 6.16 (0.84) | 5.95 (1.14) |
| Classifier | 5.76 (0.92) | 5.67 (0.90) |

Mean reaction time in ms (SD)

| Condition | SOV | OSV |
| --- | --- | --- |
| Word order | 880.06 (459.81) | 886.40 (442.82) |
| Topic | 879.35 (461.62) | 869.60 (452.82) |
| Classifier | 870.18 (446.90) | 896.84 (460.53) |
3.2. Electrophysiological results
Age of sign language acquisition was significantly negatively correlated with the mean amplitude of the N400 response in the simple word order condition (τ = .259, p<.036; see Figure 4) and the topic-marked condition (τ = .265, p<.032; see Figure 5), but not the classifier condition (τ = .118, p>.1; see Figure 6). By-condition analysis revealed that for SOV word order, no correlations were identified between AoA and the magnitude of the N400 in any of the three conditions (all ps>.2). In the marked word order condition (OSV), the EEG response was significantly correlated with age of acquisition for the simple word order condition (τ = .442, p<.014) and the topic condition (τ = .379, p<.034), but not for the classifier condition (τ = .013, p>.9).
Figure 4.
Correlation of age of acquisition and mean amplitude of the N400 ERP in simple word orders: SOV vs. OSV. Mean amplitude of the N400 ERP is represented in μV on the x-axis. Age ranges of sign language acquisition are represented on the y-axis (1 stands for acquisition at 0–3 years, 2 for 4–7, 3 for 13–17, 4 for 18–22, and 5 for >22).
Figure 5.
Correlation of age of acquisition and mean amplitude of the N400 ERP in topic-marked sentences with SOV vs. OSV word orders. Mean amplitude of the N400 ERP is represented in μV on the x-axis. Age ranges of sign language acquisition are represented on the y-axis (1 stands for acquisition at 0–3 years, 2 for 4–7, 3 for 13–17, 4 for 18–22, and 5 for >22).
Figure 6.
Correlation of age of acquisition and mean amplitude of the N400 ERP in SOV vs. OSV sentences with classifier predicates. Mean amplitude of the N400 ERP is represented in μV on the x-axis. Age ranges of sign language acquisition are represented on the y-axis (1 stands for acquisition at 0–3 years, 2 for 4–7, 3 for 13–17, 4 for 18–22, and 5 for >22).
4. Discussion
The present study investigated the effects of age of sign language acquisition (AoA) on the N400 neural response to linguistic phenomena at the levels of individual signs (classifiers), syntactic structure, and information structure. Based on prior analyses of late learning for spoken and sign languages, a less pronounced N400 response to marked word order and information structure (topic marking) was expected from later learners. With regard to the processing of classifiers, however, two competing hypotheses were put forth:
in case of traditional linguistic processing of classifiers, the marked word order condition (OSV) was expected to elicit increased cognitive load in contrast to SOV in later learners of ÖGS;
in case of engagement of a qualitatively different processing strategy for the classifiers by late learners, such as reliance on the overt iconicity, no overt modulation of N400 response was expected to OSV word order for those subjects.
Age of acquisition negatively correlated with the mean amplitude of the N400 ERP component in the OSV simple word order and topic-marked conditions (Figure 7). Behavioral data also indicated that acceptability of the marked word order (OSV) in the classifier condition was positively correlated with later AoA. The neural processing of classifiers, however, did not show an additive cognitive load of marked word order and did not correlate with AoA. These results support the hypothesis that signers with later AoA engaged a qualitatively different (non-linguistic) processing strategy for the classifiers. There are two possibilities as to the nature of this strategy. On one hand, the overt iconicity of classifier constructions might engage a non-language-specific ‘semantic’ system (akin to that engaged by non-signers viewing signs, cf. Strickland et al., 2015). On the other hand, classifiers might be represented at the morpho-lexical level in the structure of language, which is not as strongly affected by AoA as word order (syntax) and information structure (the interface of pragmatics, syntax, and prosody) (cf. Ferjan Ramirez et al., 2014).
Figure 7.
Correlation of age of acquisition and mean amplitude of the N400 ERP across SOV and OSV word orders and conditions. Mean amplitude of the N400 ERP is represented in μV on the x-axis. Age of sign language acquisition is represented on the y-axis as ranked age ranges (1 stands for acquisition at 0–3 years, 2 for 4–7, 3 for 13–17, 4 for 18–22, and 5 for >22). Statistically significant correlations are marked by a red box.
One possible reason for the observation that classifiers were unaffected by AoA may be connected to their highly iconic character, i.e. their meaning is closely related to their form. Although earlier studies suggest that iconicity has no or relatively little effect on L1 sign language acquisition by children (Meier, Mauk, Cheek, & Moreland, 2008; Newport & Meier, 1985; Orlansky & Bonvillian, 1984), recent research reports that the first signs children acquire are iconic (Thompson, Vinson, Woll, & Vigliocco, 2012). In fact, the effect of iconicity on language acquisition is not restricted to the visual-(non)manual modality. Previous research suggests that iconicity impacts the processing and development of spoken as well as sign languages (Perniss, Thompson, & Vigliocco, 2010). Iconicity has also been shown to support L1 acquisition of spoken languages (Imai & Kita, 2014; Imai, Kita, Nagumo, & Okada, 2008) as well as L2 spoken language learning (Deconinck, Boers, & Eyckmans, 2017). In the present work, the high iconicity of classifiers might have facilitated the processing of word order variations involving classifiers in late learners, suggesting that late learners used a different strategy for the processing of classifier constructions. Hence, the present data provide further support for the hypothesis that iconic words/signs are easier to learn because they are more grounded in perceptual and motoric experience (Imai & Kita, 2014; Perniss & Vigliocco, 2014; Ortega, 2017).
The findings of the present study confirm and extend the understanding of age of acquisition as a critical parameter for achieving proficiency at the levels of syntactic and information structure processing. The analysis provides novel insights into the relationship between linguistic levels: while we observed an additive effect of marked structures on cognitive load in the domains of syntax (word order) and information structure (topicalization), no such effect was observed for the interaction of word order and the processing of classifiers.
The present findings suggest that there exists substantial interaction among levels of language during online processing. These findings parallel the existing literature on the interaction between the age of sign language acquisition and processing at the interface of phonology and the lexicon. For instance, Deaf late L1 learners have been shown to be more sensitive to the visual properties of signs, as compared with native Deaf signers and hearing L2 signers (Best, Mathur, Miranda, & Lillo-Martin, 2010; see also Morford, Grieve-Smith, MacFarlane, Staley, & Waters, 2008 for similar results). Phonological processing appears to be less automatized in late learners, such that they focus on fine-grained phonetic properties of signs, which are ignored by persons acquiring an L1 in infancy. This suggests that there is a need to better understand the interaction between the spatiotemporal properties of the input (visual input, in the case of sign language) and the trajectory of maturation for sensory and cognitive brain networks.
Limited language exposure, or lack of language exposure in an appropriate sensory modality in early life (i.e. sign language for the Deaf), affects language processing through adulthood. At the neural level, later AoA results in a cumulative delay in deep processing algorithms, such as those underlying the processing of syntactic structures and information structure (in the present study, topicalization). Early acquisition of language allows language processing to mature to the degree where automatic algorithms operate on larger linguistic units, such as phrases and sentences. Late acquisition, on the other hand, contributes to a preserved reliance on sensory units of language – in this, the present EEG data corroborate the functional neuroimaging findings of Mayberry et al. (2011), in which the primary weight of neural activation patterns shifted from more posterior to more anterior brain regions in early, but not in late, acquirers. The findings are compatible with the Hernandez and Li (2007) framework indicating that early language learning relies on sensory integration. Within the field of psycholinguistics, further work on modeling behavioral manifestations of language proficiency is needed to connect the gradient effect of AoA on neural processing with the more pronounced "cutoff" effect in behavioral data for L2 learning. Existing data, however, are sufficient to clearly point to the need for early and immersive sign language exposure for non-hearing children, as the neural effects of delayed AoA are both systemic, affecting multiple levels of linguistic processing, and pervasive in their persistence into adulthood.
The limitations of the study include incomplete information on variables that could contribute to a more exhaustive modeling of the interaction between AoA and age effects, such as proficiency, fluency, and non-verbal IQ, due to the lack of normed language assessment tools adapted for ÖGS.
As sign language input differs quantitatively from the non-linguistic biological motion that humans are exposed to (Bosworth et al., 2019; Borneman et al., 2018; Malaia et al., 2016), the present findings suggest the limits of neuroplasticity as the brain matures. The results of the present study highlight the importance of comprehensive analysis of language proficiency at the interfaces between multiple linguistic domains, and possibly at the sensory-linguistic interface (e.g. temporal resolution of the visual signal), to better understand the processes that underlie the critical period (or periods) for typical and atypical language acquisition.
Highlights.
Age of language acquisition can affect linguistic sub-systems unevenly.
Later learners show qualitatively different processing for semantics and syntax.
Information structure (topic) is processed similarly to syntax.
Spatial-iconic structures (classifiers) are processed as conceptually meaningful.
Late learners of sign language rely on meaning over structure in sentence processing.
Funding and Acknowledgements
Preparation of this manuscript was partially funded by the European Union Marie S. Curie FRIAS COFUND Fellowship Programme (FCFP) award to EM, grant #1734938 from the U.S. National Science Foundation to RBW and EM, grant #1932547 from the U.S. National Science Foundation to EM, and grant R01#108306 from the National Institutes of Health to RBW. We thank all Deaf informants who took part in the present study, with special thanks to Waltraud Unterasinger for signing the stimulus materials.
Footnotes
Competing interests
The authors have no competing interests to declare.
1. Topicalization is one manifestation of the formal expression of information structure within the syntax of a sentence, such that the highlighted topic portion of the message is moved to initial position (if it is not already in that position before highlighting).
2. The choice of statistics was driven by the sparsity of data, rather than an assumption of linear effects due to AoA.
References
Best CT, Mathur G, Miranda KA, & Lillo-Martin D (2010). Effects of sign language experience on categorical perception of dynamic ASL pseudosigns. Attention, Perception, & Psychophysics, 72, 747–762.
Blumenthal-Dramé A, & Malaia E (2019). Shared neural and cognitive mechanisms in action and language: The multiscale information transfer framework. Wiley Interdisciplinary Reviews: Cognitive Science, 10(2), e1484.
Borneman JD, Malaia E, & Wilbur RB (2018). Motion characterization using optical flow and fractal complexity. Journal of Electronic Imaging, 27(5), 051229. https://doi.org/10.1117/1.JEI.27.5.051229
Bornkessel I, McElree B, Schlesewsky M, & Friederici AD (2004). Multi-dimensional contributions to garden path strength: Dissociating phrase structure from case marking. Journal of Memory and Language, 51, 495–522.
Bosworth RG, Wright CE, & Dobkins KR (2019). Analysis of the visual spatiotemporal properties of American Sign Language. Vision Research, 164, 34–43.
Boudreault P, & Mayberry RI (2006). Grammatical processing in American Sign Language: Age of first-language acquisition effects in relation to syntactic structure. Language and Cognitive Processes, 21, 608–635.
Carreiras M, Gutiérrez-Sigut E, Baquero S, & Corina D (2008). Lexical processing in Spanish Sign Language (LSE). Journal of Memory and Language, 58(1), 100–122.
Cormier K, Schembri A, Vinson D, & Orfanidou E (2012). First language acquisition differs from second language acquisition in prelingually deaf signers: Evidence from sensitivity to grammaticality judgement in British Sign Language. Cognition, 124(1), 50–65.
Deconinck J, Boers F, & Eyckmans J (2017). ‘Does the form of this word fit its meaning?’ The effect of learner-generated mapping elaborations on L2 word recall. Language Teaching Research, 21(1), 31–53.
Dye MW, & Shih S (2006). Phonological priming in British Sign Language. Laboratory Phonology, 8, 241–263.
Emmorey K, Bellugi U, Friederici A, & Horn P (1995). Effects of age of acquisition on grammatical sensitivity: Evidence from on-line and off-line tasks. Applied Psycholinguistics, 16(1), 1–23.
Ferjan Ramirez N, Leonard MK, Davenport TS, Torres C, Halgren E, & Mayberry RI (2014). Neural language processing in adolescent first-language learners: Longitudinal case studies in American Sign Language. Cerebral Cortex, 26(3), 1015–1026.
Ferjan Ramirez N, Leonard MK, Davenport TS, Torres C, Halgren E, & Mayberry RI (2016). Neural language processing in adolescent first-language learners: Longitudinal case studies in American Sign Language. Cerebral Cortex, 26(3), 1015–1026.
Frishberg N (1975). Arbitrariness and iconicity: Historical change in American Sign Language. Language, 51, 696–719.
Gratton G, Coles MG, & Donchin E (1983). A new method for off-line removal of ocular artifact. Electroencephalography and Clinical Neurophysiology, 55(4), 468–484.
Haupt FS, Schlesewsky M, Roehm D, Friederici AD, & Bornkessel-Schlesewsky I (2008). The status of subject-object reanalyses in the language comprehension architecture. Journal of Memory and Language, 59, 54–96.
Hartshorne JK, Tenenbaum JB, & Pinker S (2018). A critical period for second language acquisition: Evidence from 2/3 million English speakers. Cognition, 177, 263–277. https://doi.org/10.1016/j.cognition.2018.04.007
Hernandez AE, & Li P (2007). Age of acquisition: Its neural and computational mechanisms. Psychological Bulletin, 133(4), 638.
Hildebrandt U, & Corina D (2002). Phonological similarity in American Sign Language. Language and Cognitive Processes, 17(6), 593–612.
Hosemann J, Herrmann A, Sennhenn-Reulen H, Schlesewsky M, & Steinbach M (2018). Agreement or no agreement. ERP correlates of verb agreement violation in German Sign Language. Language, Cognition and Neuroscience, 1–21.
Imai M, & Kita S (2014). The sound symbolism bootstrapping hypothesis for language acquisition and language evolution. Philosophical Transactions of the Royal Society B: Biological Sciences, 369(1651), 20130298.
Imai M, Kita S, Nagumo M, & Okada H (2008). Sound symbolism facilitates early verb learning. Cognition, 109(1), 54–65.
Karadöller DZ, Sümer B, & Özyürek A (2017). Effects of delayed language exposure on spatial language acquisition by signing children and adults. In Gunzelmann G, Howes A, & Tenbrink T (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 2372–2376). Austin, TX: Cognitive Science Society.
Krebs J, Malaia E, Wilbur RB, & Roehm D (2018). Subject preference emerges as crossmodal strategy for linguistic processing. Brain Research. https://doi.org/10.1016/j.brainres.2018.03.029
Krebs J, Malaia E, Wilbur RB, & Roehm D (2019). Interaction between topic marking and subject preference strategy in sign language processing. Language, Cognition and Neuroscience. https://doi.org/10.1080/23273798.2019.1667001
Krebs J, Wilbur RB, Alday PM, & Roehm D (2019). The impact of transitional movements and non-manual markings on the disambiguation of locally ambiguous argument structures in Austrian Sign Language (ÖGS). Language and Speech, 62(4), 652–680. https://doi.org/10.1177/0023830918801399
Leonard MK, Ferjan Ramirez N, Torres C, Travis KE, Hatrak M, Mayberry RI, & Halgren E (2012). Signed words in the congenitally deaf evoke typical late lexicosemantic responses with no early visual responses in left superior temporal cortex. Journal of Neuroscience, 32(28), 9700–9705. https://doi.org/10.1523/JNEUROSCI.1002-12.2012
Lieberman AM, Borovsky A, Hatrak M, & Mayberry RI (2015). Real-time processing of ASL signs: Delayed first language acquisition affects organization of the mental lexicon. Journal of Experimental Psychology: Learning, Memory, and Cognition, 41(4), 1130.
Lieberman AM, Borovsky A, Hatrak M, & Mayberry RI (2016). Where to look for American Sign Language (ASL) sublexical structure in the visual world: Reply to Salverda. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42, 2002–2006.
Malaia E, Borneman JD, & Wilbur RB (2016). Assessment of information content in visual signal: Analysis of optical flow fractal complexity. Visual Cognition, 24(3), 246–251.
Malaia E, Ranaweera R, Wilbur RB, & Talavage TM (2012). Event segmentation in a visual language: Neural bases of processing American Sign Language predicates. NeuroImage, 59(4), 4094–4101.
Malaia E, Talavage TM, & Wilbur RB (2014). Functional connectivity in task-negative network of the Deaf: Effects of sign language experience. PeerJ, 2, e446.
Malaia E, & Wilbur RB (2010). Early acquisition of sign language: What neuroimaging data tell us. Sign Language and Linguistics, 13(2), 183–199.
Malaia E, & Wilbur RB (2018). Visual and linguistic components of short-term memory: Generalized Neural Model (GNM) for spoken and sign languages. Cortex. https://doi.org/10.1016/j.cortex.2018.05.020
Malaia E, & Wilbur RB (2019). Syllable as a unit of information transfer in linguistic communication: The entropy syllable parsing model. Wiley Interdisciplinary Reviews: Cognitive Science, e1518.
Marshall CR, & Morgan G (2015). From gesture to sign language: Conventionalization of classifier constructions by adult hearing learners of British Sign Language. Topics in Cognitive Science, 7, 61–80. https://doi.org/10.1111/tops.12118
Mayberry RI, Cheng Q, Hatrak M, & Ilkbasaran D (2017). Late L1 learners acquire simple but not syntactically complex structures. Poster presented at the International Association for Child Language, Lyon.
Mayberry RI, Chen JK, Witcher P, & Klein D (2011). Age of acquisition effects on the functional organization of language in the adult brain. Brain and Language, 119(1), 16–29.
Mayberry RI (2007). When timing is everything: Age of first-language acquisition effects on second-language learning. Applied Psycholinguistics, 28(3), 537–549. https://doi.org/10.1017/S0142716407070294
Mayberry RI, & Eichen EB (1991). The long-lasting advantage of learning sign language in childhood: Another look at the critical period for language acquisition. Journal of Memory and Language, 30(1), 486.
Mayberry RI, & Fischer SD (1989). Looking through phonological shape to lexical meaning: The bottleneck of non-native sign language processing. Memory & Cognition, 17(6), 740–754.
Meier RP, Mauk CE, Cheek A, & Moreland CJ (2008). The form of children’s early signs: Iconic or motoric determinants? Language Learning and Development, 4(1), 63–98.
Meulman N, Stowe LA, Sprenger SA, Bresser M, & Schmid MS (2014). An ERP study on L2 syntax processing: When do learners fail? Frontiers in Psychology, 5. https://doi.org/10.3389/fpsyg.2014.01072
Mitchell RE, & Karchmer M (2004). Chasing the mythical ten percent: Parental hearing status of deaf and hard of hearing students in the United States. Sign Language Studies, 4, 138–163.
Morford JP, Grieve-Smith AB, MacFarlane J, Staley J, & Waters G (2008). Effects of language experience on the perception of American Sign Language. Cognition, 109, 41–53.
Neville HJ, Coffey SA, Lawson DS, Fischer A, Emmorey K, & Bellugi U (1997). Neural systems mediating American Sign Language: Effects of sensory experience and age of acquisition. Brain and Language, 57(3), 285–308.
Newman AJ, Bavelier D, Corina D, Jezzard P, & Neville HJ (2002). A critical period for right hemisphere recruitment in American Sign Language processing. Nature Neuroscience, 5(1), 76.
Newport EL (1988). Constraints on learning and their role in language acquisition: Studies of the acquisition of American Sign Language. Language Sciences, 10, 147–172.
Newport EL (1990). Maturational constraints on language learning. Cognitive Science, 14(1), 11–28. https://doi.org/10.1207/s15516709cog1401_2
Newport EL, & Meier RP (1985). The acquisition of American Sign Language. Lawrence Erlbaum Associates, Inc.
Oldfield RC (1971). The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia, 9(1), 97–113.
Orfanidou E, Adam R, Morgan G, & McQueen JM (2010). Recognition of signed and spoken language: Different sensory inputs, the same segmentation procedure. Journal of Memory and Language, 62(3), 272–283.
Orlansky MD, & Bonvillian JD (1984). The role of iconicity in early sign language acquisition. Journal of Speech and Hearing Disorders, 49(3), 287–292.
Ortega G (2017). Iconicity and sign lexical acquisition: A review. Frontiers in Psychology, 8, 1280.
Partanen E, Kujala T, Näätänen R, Liitola A, Sambeth A, & Huotilainen M (2013). Learning-induced neural plasticity of speech processing before birth. Proceedings of the National Academy of Sciences, 110(37), 15145–15150. https://doi.org/10.1073/pnas.1302159110
Perniss P, Thompson R, & Vigliocco G (2010). Iconicity as a general property of language: Evidence from spoken and signed languages. Frontiers in Psychology, 1, 227.
Perniss P, & Vigliocco G (2014). The bridge of iconicity: From a world of experience to the experience of language. Philosophical Transactions of the Royal Society B: Biological Sciences, 369(1651), 20130300.
Schalber K (2006a). Event visibility in Austrian Sign Language (ÖGS). Sign Language & Linguistics, 9(1), 207–231. https://doi.org/10.1075/sll.9.1.11sch
Schalber K (2006b). What is the chin doing? An analysis of interrogatives in Austrian Sign Language. Sign Language & Linguistics, 9(1), 133–150. https://doi.org/10.1075/sll.9.1.08sch
Schick BS (1987). The acquisition of classifier predicates in American Sign Language (Doctoral dissertation, Purdue University).
Schlesewsky M, Fanselow G, Kliegl R, & Krems J (2000). The subject preference in the processing of locally ambiguous wh-questions in German. In Hemforth B, & Konieczny L (Eds.), German sentence processing (pp. 65–93). Dordrecht, Netherlands: Kluwer.
Slobin DI, Hoiting N, Kuntze M, Lindert R, Weinberg A, Pyers J, Anthony M, Biederman Y, & Thumann H (2003). A cognitive/functional perspective on the acquisition of “classifiers”. In Emmorey K (Ed.), Perspectives on classifier constructions in sign languages (pp. 271–296). Mahwah, New Jersey: Lawrence Erlbaum Associates.
Strickland B, Geraci C, Chemla E, Schlenker P, Kelepir M, & Pfau R (2015). Event representations constrain the structure of language: Sign language as a window into universally accessible linguistic biases. Proceedings of the National Academy of Sciences, 112(19), 5968–5973.
Supalla T (1986). The classifier system in American Sign Language. In Craig C (Ed.), Noun classes and categorization (pp. 181–214). Amsterdam: John Benjamins.
Thompson RL, Vinson DP, Woll B, & Vigliocco G (2012). The road to language learning is iconic: Evidence from British Sign Language. Psychological Science, 23(12), 1443–1448.
Wilbur RB, Bernstein ME, & Kantor R (1985). The semantic domain of classifiers in American Sign Language. Sign Language Studies, 46, 1–38.







