Published in final edited form as: Curr Dir Psychol Sci. 2023 May 15;32(5):387–394. doi: 10.1177/09637214231173071

Ten things you should know about sign languages

Karen Emmorey 1

Abstract

The ten things you should know about sign languages are the following. 1) Sign languages have phonology and poetry. 2) Sign languages vary in their linguistic structure and family history, but share some typological features due to their shared biology (manual production). 3) Although there are many similarities between perceiving and producing speech and sign, the biology of language can impact aspects of processing. 4) Iconicity is pervasive in sign language lexicons and can play a role in language acquisition and processing. 5) Deaf and hard-of-hearing children are at risk for language deprivation. 6) Signers gesture when signing. 7) Sign language experience enhances some visual-spatial skills. 8) The same left hemisphere brain regions support both spoken and sign languages, but some neural regions are specific to sign language. 9) Bimodal bilinguals can code-blend, rather than code-switch, which alters the nature of language control. 10) The emergence of new sign languages reveals patterns of language creation and evolution. These discoveries reveal how language modality does and does not affect language structure, acquisition, processing, use, and representation in the brain. Sign languages provide unique insights into human language that cannot be obtained by studying spoken languages alone.

Keywords: sign languages, language processing, gesture, bimodal bilinguals, language creation


Current psycholinguistic and neurobiological theories of language generally overlook or omit phenomena that are integral to the sign languages of the world. For example, theories often ignore iconicity (a resemblance between form and meaning) and phonological structure that is not based on sound – both inherent properties of sign languages. By widening our scientific lens to include languages in a different modality, we gain a deeper understanding of human language in part because we can ask questions that cannot be addressed with spoken languages. In this article, I review ten things that we have learned from the study of sign languages and highlight what these findings tell us about human language and its cognitive and neural underpinnings. Of course, more than ten important discoveries have been made about sign languages, and I chose these facts because they have significant theoretical implications and also debunk misconceptions that are sometimes held by both lay audiences and the scientific community. Numbering does not reflect the priority or importance of these facts.

1. Sign languages have phonology and poetry

The traditional definition of phonology is the study of speech sounds, but the discovery that sign languages have form-level structure has challenged this narrow view. Linguistic research has revealed a universal and fundamental level of structure in human language in which discrete, meaningless units of form are combined in rule-governed ways to create meaningful units (words or signs). For spoken languages, these units are consonants and vowels, while for sign languages they are handshapes, locations on the body, and movements (see Brentari, 2019, for a detailed account). In addition, the syllable turns out to be an amodal (or multimodal) phonological primitive defined by a peak of phonetic energy, either a vowel for speech (an acoustic energy peak) or a path movement for sign (a visual energy peak). Further, five-month-old infants are able to extract phonological reduplication rules from sign language stimuli (discriminating repeated from non-repeated sequences), as they do for spoken language (Berent et al., 2021). These findings indicate that structure at the level of meaningless form is amodal (at least in part) and forms a core aspect of our human capacity for language and language learning. Given the existence of sign language phonology, it follows that sign poets can utilize phonological patterns to create artistic expressions, just as spoken language poets create rhyme and meter (Bauman et al., 2006). Sound is not a prerequisite for poetry.

2. Sign languages vary in their linguistic structure and family history, but share some typological features due to their shared biology (manual production)

There is no universal sign language, despite the persistence of this popular myth. The website Ethnologue currently lists 150 distinct sign languages around the globe, and new sign languages are still being discovered and documented (e.g., Central Taurus Sign Language in Turkey was only recently identified). Sign languages vary at the phonetic, phonological, morphological, and syntactic levels. For example, the “t” handshape (thumb inserted between the index and middle fingers) occurs in American Sign Language (ASL), but not British Sign Language (BSL). Japanese Sign Language marks gender with different handshapes, unlike European sign languages, and Italian Sign Language is a verb-final language, in contrast to spoken Italian, ASL, and BSL.

Unlike the relationships among spoken languages, relationships between different sign languages can often be traced to the establishment of public deaf schools. For example, ASL is historically unrelated to BSL but is related to French Sign Language (LSF) because LSF was used in the first public school for the deaf in the United States, established in Connecticut in 1817. Many sign languages in West Africa are related to ASL due to the establishment and spread of deaf schools there by the Reverend Andrew Foster, a deaf African-American educator and graduate of Gallaudet College, who promoted instruction in ASL (Nyst, 2010). Using historical evidence and linguistic analyses, researchers have identified some language families, for example, BANZSL (British, Australian, and New Zealand Sign Languages) and an East Asian family (the sign languages of Korea, Japan, and Taiwan).

A full phylogenetic analysis of the many sign languages of the world has not yet been conducted, in part because sign languages were only recently (since ~1960) considered worthy of study, and historical analyses face unique challenges. There is a lack of written records (no sign language has a standard written form), and a sign-based parallel to the International Phonetic Alphabet (used to transcribe unwritten spoken languages) has not been agreed upon by sign language linguists. In addition, many signs are likely to share iconic roots (see #4), which can compromise lexical comparisons across languages. For example, the verb meaning ‘to eat’ in many unrelated sign languages is made at the mouth, and body-part terms typically involve pointing to the relevant body location. Further, several typological features are expected to be found in unrelated sign languages due to their shared modality. For example, the hands are larger and slower than the vocal articulators, which promotes simultaneous over sequential morphology. Prefixes and suffixes are rare in sign languages, but the simultaneous production of linguistic facial expressions (e.g., adverbial mouth patterns) with manual signs is relatively common.

3. Although there are many similarities between perceiving and producing speech and sign, the biology of language can impact aspects of processing

Signers quickly extract meaning from the incoming visual signal, often in ways that parallel those of speakers. Both sign and speech are segmented using the same form-based constraints (e.g., the Possible Word Constraint; Orfanidou et al., 2010). Sign and word recognition are both automatic, as evidenced by Stroop effects: naming the color of a signer’s hand is slowed when the color sign produced (e.g., GREEN1 in ASL) is incongruent with the hand color (e.g., a red hand) (Bosworth et al., 2021). Both sign and word recognition are influenced by frequency (faster identification for more frequent signs) and by phonological neighborhood density (slower recognition for signs that are similar in form to many other signs) (Caselli et al., 2021). At the sentence level, syntactic priming occurs for both language types; for example, viewing an ASL noun-adjective phrase subsequently increases the probability of producing the same syntactic structure over the (also possible) adjective-noun phrase (Hall et al., 2015).

However, biology impacts the speed of lexical recognition – signs are recognized faster than words. Early lexical recognition can occur because the manual articulators are fully visible (unlike the hidden vocal articulators), and phonological information is available simultaneously and early in the signal (Emmorey et al., 2022). With respect to production, signing (like speaking) requires phonological assembly of sublexical units, as evidenced by systematic “slips of the hand”. Both sign and speech production involve a two-stage process in which lexical semantic representations are retrieved independently of phonological representations, as evidenced by tip-of-the-tongue and tip-of-the-finger states (see Emmorey, 2023, for a comparison of sign and speech production). However, language output monitoring differs for sign and speech due to differences in perceptual feedback: speakers hear themselves speak, but signers do not see themselves sign. While speakers can use auditory feedback to catch production errors, signers cannot monitor their visual output in the same way and likely rely more on somatosensory feedback during on-line language monitoring. Somatosensory feedback appears to be effective: sign and speech errors are detected at the same rate, and because signing is slower, signers can even repair their errors earlier than speakers.

4. Iconicity is pervasive in sign language lexicons and can play a role in language acquisition and processing

Psycholinguistic models of language currently assume that form-meaning mappings are arbitrary and that there is a strict modular separation between phonological and semantic representations. The prevalence of iconic forms in sign languages challenges these assumptions. Sign languages afford greater iconicity than spoken languages because the visible bodily articulators more easily permit the depiction of actions, objects, locations, and shapes. For example, the signs for ‘bird’ in ASL, BSL, Japanese SL, Icelandic SL, German SL, and Chinese SL all depict a bird’s beak, whereas the words in the surrounding spoken languages bear little resemblance to a bird. Early linguistic work downplayed the role of iconicity in sign languages because arbitrariness was considered a hallmark of human language.

Nonetheless, recent research has revealed a role for iconicity in both spoken and sign language learning. For example, iconic signs and sound-symbolic words are acquired early by children (Caselli & Pyers, 2017; Perry et al., 2015). However, a causal explanation for the role of iconicity in language learning is lacking (Nielsen & Dingemanse, 2021). Recent studies also indicate that iconicity can impact both sign recognition (Vinson et al., 2015) and production (Sehyr & Emmorey, 2022). However, iconicity effects are not always observed, and some effects of iconicity may be task-specific (Gimeno-Martínez & Baus, 2022). Further, many variables have yet to be fully investigated, such as the type of iconicity, the role of language proficiency, and how iconic mappings are construed. Because of the high prevalence of iconic forms in sign languages, they provide a rich testing ground for investigating whether and how iconicity impacts language learning and processing, as well as linguistic structure. These domains of inquiry are more limited for spoken languages, even for those with larger iconic vocabularies (e.g., languages with ideophones, a large class of words that evoke sensory images, such as pika in Japanese, meaning “a flash of light”).

5. Deaf and hard-of-hearing children are at risk for language deprivation

Most deaf children (90–95%) are born with no access to language because their hearing parents do not know a sign language, and their deafness prevents or impairs access to a spoken language. Deaf children born into deaf families acquire a sign language as their first language, and acquisition follows the same time course as spoken language, from manual babbling, to first signs, to two-word utterances and syntactic development (see Lillo-Martin & Henner, 2021, for a recent review). However, adult deaf signers who experienced delays in access to a sign language (e.g., because doctors and/or educators recommended that families use speech only and avoid signing) perform poorly on tests of sign language ability, even if they have been signing for many years. Thus, there is a critical period for sign language acquisition, which contradicts the common misconception that deaf children can learn a sign language later if they have difficulty learning a spoken language (Mayberry & Kluender, 2018). Many deaf children face a high risk of language deprivation even with cochlear implants because spoken language outcomes for these children are variable and difficult to predict. Unfortunately, early language deprivation has serious cognitive, social, and linguistic consequences (Hall et al., 2019). Fortunately, recent research has found that early sign language input from hearing parents who are learning the language can prevent acquisition delays (Caselli et al., 2021). Overall, the evidence indicates an equal potential and similar patterns for the acquisition of sign and spoken languages by infants, a critical period for acquisition in both language modalities, and the potential for early sign language exposure to prevent the effects of language deprivation in deaf and hard-of-hearing children.

6. Signers gesture when signing

All speakers produce co-speech gestures when they talk, and these gestures facilitate spoken language communication, index readiness to learn, and shape mental representations and cognitive processes (Kita et al., 2017). Although both signs and gestures are produced in the same visual-manual modality, it is possible to distinguish the two. Signs, like words but unlike gestures, have conventional forms, internal structure (phonology), and belong to grammatical categories (e.g., nouns, adjectives, determiners, etc.). The fact that signers also gesture while signing means that the notion of ‘gesture’ needs to be expanded or re-defined to capture what type of information is typically conveyed by gesture (idiosyncratic, imagistic, gradient) versus language (e.g., conventional, discrete, categorical) (Goldin-Meadow & Brentari, 2017).

Like co-speech gestures, co-sign gestures are produced simultaneously with signing and can take several forms. Signers can produce whole-body gestures to illustrate movements of the body that co-occur with the action expressed by the manual sign, such as swaying back and forth to depict waltzing while signing DANCE in ASL. Signers can produce iconic facial gestures that depict aspects of the scene, e.g., producing puffed cheeks to depict the large size of an object. Signers can also alter the form of signs for illustrative purposes, such as modifying the movement of a verb to depict the speed of an action (similar to a speaker saying “looooong”). Further, co-sign gestures seem to have many of the same functions as co-speech gestures – see Kita and Emmorey (in press) for a review and theoretical perspective.

7. Sign language experience enhances some visual-spatial skills

The visual-spatial processing required for sign language comprehension can enhance certain non-linguistic cognitive abilities in both hearing and deaf signers. For example, highly fluent signers tend to exhibit better mental rotation skills than non-signers (e.g., Kubicek & Quandt, 2021). One explanation for this result is that comprehending spatial descriptions from the signer’s perspective requires a mental transformation of locations in signing space, and the ability to understand such spatial descriptions is correlated with mental rotation ability (Secora & Emmorey, 2020). In addition, sign language experience impacts hemispheric laterality for non-linguistic motion processing, such that signers exhibit a leftward asymmetry, in contrast to non-signers (Bavelier et al., 2001). This laterality pattern may arise from systematic processing of linguistic movements in the left hemisphere (see #8) and/or because sign movements tend to fall in the right visual field (left hemisphere) of sign perceivers (Bosworth et al., 2019). Signs also tend to fall in the lower visual field of the addressee (signers look at the face, not the hands), and this appears to lead to increased attentional resources for non-linguistic stimuli in the inferior visual field for signers (Stoll & Dye, 2019). These effects are distinct from the changes in visual attention that are associated with early deafness (Bavelier et al., 2006). Signers provide a unique window onto the interplay between language and cognition because visual processes can be compared across signers and non-signers. In contrast, it is difficult to compare auditory processes across speakers and non-speakers because there are no individuals who can hear but who do not acquire a spoken language.

8. The same left hemisphere brain regions support both spoken and sign languages, but some neural regions are specific to sign language

Damage to the left hemisphere causes sign language aphasia, but right hemisphere damage does not (Hickok et al., 1998). Within the left hemisphere, secondary auditory cortex is activated during both sign and speech comprehension, and Broca’s area (the left inferior frontal gyrus) is engaged for both signing and speaking (see Emmorey, 2021, for a recent review of the neural substrate for sign language processing). Recent research has shown that syntactic/semantic combinatorial processing engages the same left hemisphere regions for signed and spoken languages (specifically, the left superior temporal sulcus and left anterior temporal lobe). In line with the finding that some aspects of phonology are amodal (see #1), similar brain regions (inferior parietal cortex) are implicated in phonological processing for both speech and sign. However, other brain regions support modality-specific aspects of sign language phonology, such as targeting body locations (regions in superior parietal cortex). In addition, a recent electrocorticography study by Leonard et al. (2020) identified neural selectivity for phonological units specific to sign (handshapes and locations) in sensory-motor and parietal cortices. As alluded to in #7, signing space is used for spatial descriptions – spatial relations are depicted by the location of the hands in signing space, rather than by prepositions. This leads to greater involvement of the right hemisphere when producing and comprehending spatial language. In sum, investigations of the neurobiology of sign language reveal that key regions in the left hemisphere are specialized for language, not speech, and that some neural computations are modality-dependent.

9. Bimodal bilinguals can code-blend, rather than code-switch, which alters the nature of language control

Unimodal bilinguals must code-switch between their two spoken languages because they have only one output channel – a Spanish-English bilingual cannot say dog and perro at the same time. In contrast, bimodal bilinguals have the ability to code-blend, that is, to produce a word and a sign at the same time, and they overwhelmingly prefer to code-blend rather than to switch between signing and speaking (Emmorey et al., 2008). As reviewed in Emmorey et al. (2016), code-blending does not require the inhibition of one language and is not costly. For example, picture-naming times do not differ when pictures are named with ASL alone or simultaneously with ASL and English. Comprehension (assessed by semantic decisions) is faster and more accurate for code-blends than for either language alone. In addition, bimodal bilinguals sometimes produce ASL signs as co-speech “gestures” in monolingual contexts, whereas unimodal bilinguals rarely produce a word from the non-target language when speaking with monolinguals. These results indicate that bimodal bilinguals require less language control than unimodal bilinguals. However, language control is needed to switch into and out of a code-blend, which provides a novel way to investigate language control costs. When unimodal bilinguals switch between two spoken languages, they must simultaneously “turn off” one language and “turn on” the other. When ASL-English bilinguals switch from speaking English into a code-blend, they only need to turn on a language (ASL), and when they switch out of a code-blend, they only need to turn off a language (ASL). Behavioral and neuroimaging data indicate that turning on a language does not incur a processing cost nor does it recruit language control regions of the brain, but turning off (inhibiting) a language does both (Blanco-Elorrieta et al., 2018; Emmorey et al., 2020).

10. The emergence of new sign languages reveals patterns of language creation and evolution

Sign languages are the only human languages that can emerge de novo at any time (Sandler et al., 2022). New sign languages typically emerge either when deaf people come together around a deaf school (e.g., Nicaraguan Sign Language; LSN) or because there is a high incidence of deafness within a community (e.g., Al-Sayyid Bedouin Sign Language). Researchers can trace the path of language creation by studying the signing of younger and older generations within these communities (see Brentari & Coppola, 2013, for review). New sign languages begin with an ‘initial contact’ stage among deaf homesigners2 who form a linguistic community, followed by a ‘sustained contact’ stage with the addition of more deaf children. One difference between homesign systems and initial contact signing is that in the latter, pointing gestures are more consistently integrated into phrases (i.e., they become more like pronouns). For LSN, signing in the sustained contact stage uses spatial modulations of verbs to systematically mark arguments (agents, patients). Critically, this systematicity stems from the children rather than the adults (Senghas & Coppola, 2001). The study of emerging sign languages provides critical evidence regarding the roles of a linguistic community (and its size) and child learners in language creation and change. Further, this work can reveal patterns of language evolution that cannot be easily investigated with spoken languages. For example, linguists are tracking when and under what circumstances a phonological level of structure emerges for young sign languages (Brentari & Goldin-Meadow, 2017).

Conclusion

By including sign languages in our scientific inquiries, we gain valuable insights that cannot be obtained from the study of spoken languages alone (see Table 1 for a summary). The discoveries highlighted here reveal ways in which the visual-manual properties of sign languages do and do not impact a) the structure of language, b) its neurocognitive underpinnings, and c) how language is acquired, used, and created.

Table 1.

Properties of sign languages that provide insights into human language that cannot be obtained by studying spoken languages alone (numbers refer to relevant article sections).

Properties of sign languages | Theoretical significance
The primary linguistic articulators are the hands/arms, not the vocal tract | Allows for an investigation of how the biology of language production does and does not impact linguistic structure (#1, #2), processing (#3), and the neural substrate for language (#8)
The primary perceptual system for language comprehension is vision, not audition | Affords an opportunity to examine effects of language experience on sensory-based cognitive processes that is not available for spoken language users (#7). Language in the visual modality is fully accessible to deaf and hard-of-hearing children and can prevent language deprivation (#5)
Iconicity is pervasive (i.e., the form of signs is often motivated by their meaning) | Challenges traditional psycholinguistic models in which phonology and semantics are completely independent and provides greater opportunities to investigate effects of iconicity on language structure and processing (#4)
Gesture is in the same modality as language | The characterization of ‘gesture’ needs to be modality-free in order to capture functional and structural parallels between co-sign and co-speech gesture (#6)
Code-blending (simultaneous word and sign production) is possible for bimodal bilinguals | Provides a novel way to investigate language control and language mixing costs/benefits (#9)
Most (if not all) sign languages are relatively young | Represents a unique testing ground for exploring what factors drive language change over a short time-scale, e.g., 25–200 years (#2, #10)

Acknowledgments

This work was supported by an award from the National Institute on Deafness and Other Communication Disorders (R01 DC1010997) to K.E. and San Diego State University.

Footnotes

1. By convention, signs are glossed with the nearest translation equivalent and are written in uppercase. Hyperlinks to view the signs mentioned in this article are from https://www.spreadthesign.com.

2. Homesign is a basic gestural communication system created by a deaf child who has little or no exposure to an existing language (signed or spoken).

Suggested readings

  1. Brentari D (2019). Sign language phonology. Cambridge University Press. Provides a detailed and accessible account of the nature of phonology in visual-manual languages.
  2. Emmorey K (2021). New perspectives on the neurobiology of sign languages. Frontiers in Communication, 6, 748430. 10.3389/fcomm.2021.748430. Provides a recent review of the neural networks that support sign language production and comprehension.
  3. Emmorey K, Giezen MR, & Gollan TH (2016). Psycholinguistic, cognitive, and neural implications of bimodal bilingualism. Bilingualism: Language and Cognition, 19(2), 223–242. 10.1017/S1366728915000085. Provides a review of research on bimodal bilingualism.
  4. Fenlon J, & Wilkinson E (2015). Sign languages in the world. In Schembri A & Lucas C (Eds.), Sociolinguistics and Deaf communities (pp. 5–28). Cambridge University Press. Provides an overview of the types of sign language communities around the world.
  5. Gutierrez-Sigut E, & Baus C (2021). Lexical processing in comprehension and production – experimental perspectives. In Quer J, Pfau R, & Herrmann A (Eds.), The Routledge handbook of theoretical and experimental sign language research (pp. 45–69). Taylor & Francis. Provides a recent review of psycholinguistic studies of sign language processing.
  6. Lillo-Martin D, & Henner J (2021). Acquisition of sign languages. Annual Review of Linguistics, 7(1), 395–419. 10.1146/annurev-linguistics-043020-092357. Provides a recent review of sign language acquisition.

References

  1. Bauman D, Rose H, & Nelson J (Eds.). (2006). Signing the body poetic: Essays on American Sign Language literature. University of California Press.
  2. Bavelier D, Brozinsky C, Tomann A, Mitchell T, Neville H, & Liu G (2001). Impact of early deafness and early exposure to sign language on the cerebral organization for motion processing. The Journal of Neuroscience, 21(22), 8931–8942. 10.1523/JNEUROSCI.21-22-08931.2001
  3. Bavelier D, Dye MWG, & Hauser PC (2006). Do deaf individuals see better? Trends in Cognitive Sciences, 10(11), 512–518. 10.1016/j.tics.2006.09.006
  4. Berent I, de la Cruz-Pavía I, Brentari D, & Gervain J (2021). Infants differentially extract rules from language. Scientific Reports, 11(1), 20001. 10.1038/s41598-021-99539-8
  5. Blanco-Elorrieta E, Emmorey K, & Pylkkänen L (2018). Language switching decomposed through MEG and evidence from bimodal bilinguals. Proceedings of the National Academy of Sciences, 115(39), 9708–9713. 10.1073/pnas.1809779115
  6. Bosworth RG, Binder EM, Tyler SC, & Morford JP (2021). Automaticity of lexical access in deaf and hearing bilinguals: Cross-linguistic evidence from the color Stroop task across five languages. Cognition, 212, 104659. 10.1016/j.cognition.2021.104659
  7. Bosworth RG, Wright CE, & Dobkins KR (2019). Analysis of the visual spatiotemporal properties of American Sign Language. Vision Research, 164, 34–43. 10.1016/j.visres.2019.08.008
  8. Brentari D, & Coppola M (2013). What sign language creation teaches us about language. Wiley Interdisciplinary Reviews: Cognitive Science, 4(2), 201–211. 10.1002/wcs.1212
  9. Brentari D, & Goldin-Meadow S (2017). Language emergence. Annual Review of Linguistics, 3(1), 363–388. 10.1146/annurev-linguistics-011415-040743
  10. Caselli N, Emmorey K, & Cohen-Goldberg AM (2021). The signed mental lexicon: Effects of phonological neighborhood density, iconicity, and childhood language experience. Journal of Memory and Language, 121, 104282. 10.1016/j.jml.2021.104282
  11. Caselli N, & Pyers J (2017). The road to language learning is not entirely iconic: Iconicity, neighborhood density, and frequency facilitate acquisition of sign language. Psychological Science, 28(7), 979–987. 10.1177/0956797617700498
  12. Caselli N, Pyers J, & Lieberman AM (2021). Deaf children of hearing parents have age-level vocabulary growth when exposed to American Sign Language by 6 months of age. The Journal of Pediatrics, 232, 229–236. 10.1016/j.jpeds.2021.01.029
  13. Emmorey K (2023). Signing vs. speaking: How does the biology of linguistic expression affect production? In Hartsuiker R & Strijkers K (Eds.), Cognitive issues in the psychology of language. Routledge (Taylor & Francis). 10.4324/9781003145790
  14. Emmorey K, Borinstein HB, Thompson R, & Gollan TH (2008). Bimodal bilingualism. Bilingualism: Language and Cognition, 11(1), 43–61. 10.1017/S1366728907003203
  15. Emmorey K, Li C, Petrich J, & Gollan TH (2020). Turning languages on and off: Switching into and out of code-blends reveals the nature of bilingual language control. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(3), 443–454. 10.1037/xlm0000734
  16. Emmorey K, Midgley KJ, & Holcomb PJ (2022). Tracking the time course of sign recognition using ERP repetition priming. Psychophysiology, 59(3). 10.1111/psyp.13975
  17. Gimeno-Martínez M, & Baus C (2022). Iconicity in sign language production: Task matters. Neuropsychologia, 108166. 10.1016/j.neuropsychologia.2022.108166
  18. Goldin-Meadow S, & Brentari D (2017). Gesture, sign, and language: The coming of age of sign language and gesture studies. Behavioral and Brain Sciences, 40, e46. 10.1017/S0140525X15001247
  19. Hall ML, Ferreira VS, & Mayberry RI (2015). Syntactic priming in American Sign Language. PLOS ONE, 10(3), e0119611. 10.1371/journal.pone.0119611
  20. Hall ML, Hall WC, & Caselli NK (2019). Deaf children need language, not (just) speech. First Language, 39(4), 367–395. 10.1177/0142723719834102
  21. Hickok G, Bellugi U, & Klima ES (1998). The neural organization of language: Evidence from sign language aphasia. Trends in Cognitive Sciences, 2(4), 129–136. 10.1016/S1364-6613(98)01154-1
  22. Kita S, Alibali MW, & Chu M (2017). How do gestures influence thinking and speaking? The gesture-for-conceptualization hypothesis. Psychological Review, 124(3), 245–266. 10.1037/rev0000059
  23. Kita S, & Emmorey K (in press). Gesture links language and cognition for spoken and signed languages. Nature Reviews Psychology.
  24. Kubicek E, & Quandt LC (2021). A positive relationship between sign language comprehension and mental rotation abilities. The Journal of Deaf Studies and Deaf Education, 26(1), 1–12. 10.1093/deafed/enaa030
  25. Leonard MK, Lucas B, Blau S, Corina DP, & Chang EF (2020). Cortical encoding of manual articulatory and linguistic features in American Sign Language. Current Biology. 10.1016/j.cub.2020.08.048
  26. Mayberry RI, & Kluender R (2018). Rethinking the critical period for language: New insights into an old question from American Sign Language. Bilingualism: Language and Cognition, 21(5), 886–905. 10.1017/S1366728917000724
  27. Nielsen AK, & Dingemanse M (2021). Iconicity in word learning and beyond: A critical review. Language and Speech, 64(1), 52–72. 10.1177/0023830920914339
  28. Nyst V (2010). Sign languages in West Africa. In Brentari D (Ed.), Sign languages: A Cambridge language survey (pp. 405–432). Cambridge University Press.
  29. Orfanidou E, Adam R, Morgan G, & McQueen JM (2010). Recognition of signed and spoken language: Different sensory inputs, the same segmentation procedure. Journal of Memory and Language, 62(3), 272–283. 10.1016/j.jml.2009.12.001
  30. Perry LK, Perlman M, & Lupyan G (2015). Iconicity in English and Spanish and its relation to lexical category and age of acquisition. PLOS ONE, 10(9), e0137147. 10.1371/journal.pone.0137147
  31. Sandler W, Padden C, & Aronoff M (2022). Emerging sign languages. Languages, 7(4), Article 4. 10.3390/languages7040284
  32. Secora K, & Emmorey K (2020). Visual-spatial perspective-taking in spatial scenes and in American Sign Language. The Journal of Deaf Studies and Deaf Education, 25(4), 447–456. 10.1093/deafed/enaa006
  33. Sehyr ZS, & Emmorey K (2022). The effects of multiple linguistic variables on picture naming in American Sign Language. Behavior Research Methods, 54(5), 2502–2521. 10.3758/s13428-021-01751-x
  34. Senghas A, & Coppola M (2001). Children creating language: How Nicaraguan Sign Language acquired a spatial grammar. Psychological Science, 12(4), 323–328. 10.1111/1467-9280.00359
  35. Stoll C, & Dye MWG (2019). Sign language experience redistributes attentional resources to the inferior visual field. Cognition, 191, 103957. 10.1016/j.cognition.2019.04.026
  36. Vinson D, Thompson RL, Skinner R, & Vigliocco G (2015). A faster path between meaning and form? Iconicity facilitates sign recognition and production in British Sign Language. Journal of Memory and Language, 82, 56–85.
