Abstract
A rare case of a deaf signer undergoing awake craniotomy has revealed that sensorimotor cortex is functionally organized for signing. Electrocorticography recordings indicated neural tuning to linguistically relevant handshapes and body locations, as well as distinct neural activity for linguistic versus transitional movements.
A fundamental linguistic discovery is that all human languages, including sign languages, have a level of structure — phonology — in which meaningless units are combined in rule-governed ways to create an infinite number of meaningful utterances [1]. For spoken languages, these units include consonants and vowels, and for sign languages they include hand configurations, body locations, and movements of the fingers, hands, and arms. Although both speaking and signing require exquisite timing of motor movements, the actions of the linguistic articulators are directly observable for sign but not for speech, as the tongue and vocal cords are hidden from view. As they report in this issue of Current Biology, Leonard et al. [2] have exploited this property of sign languages and used electrocorticography (ECoG) to examine how the brain encodes the sensorimotor properties of phonological units in American Sign Language (ASL).
In the new study by Leonard et al. [2], neural activity was recorded directly from the cortical surface of a profoundly deaf signer who was undergoing awake craniotomy to map language areas for resection of a brain tumor. The signer was asked to make a lexical decision — is this a real sign? — in response to videos of ASL signs and pseudosigns (possible but non-existent ASL signs). Happily for this study, the participant did not always follow directions to just answer “yes” or “no.” He also produced a number of spontaneous signed responses, such as repeating the sign or pseudosign, commenting on the task, and fingerspelling (spelling an English word with a sequence of handshapes that represent letters). Thus, the researchers had a large set of manual behaviors to analyze. This unique case study revealed neural selectivity for the production of very similar, but linguistically contrastive, handshapes (Figure 1A), for distinct places of articulation (locations on the body; Figure 1B), and for linguistic versus transitional movements. In addition, neural decoding and classification analyses identified a hierarchical functional organization of phonological features within sensorimotor cortex, as well as temporally distributed patterns of neural activity that likely reflect how linguistic movements are planned and executed in sign language.
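To make the logic of such decoding and classification analyses concrete, the following minimal sketch (Python, with synthetic data) shows how handshape identity might be classified from single-trial high-gamma amplitudes across electrodes. The electrode and trial counts, the logistic-regression classifier, and all variable names are illustrative assumptions, not the authors' actual pipeline.

```python
# Sketch of a neural decoding analysis in the spirit of Leonard et al. [2]:
# classify which handshape was produced from single-trial high-gamma features.
# All data are synthetic stand-ins; counts and classifier choice are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_electrodes = 200, 64
handshape = rng.integers(0, 4, size=n_trials)        # 4 handshape categories

# Synthetic high-gamma amplitude per electrode, with weak handshape tuning
tuning = rng.normal(0, 0.5, size=(4, n_electrodes))  # category-specific means
X = tuning[handshape] + rng.normal(0, 1.0, size=(n_trials, n_electrodes))

# Cross-validated decoding accuracy; chance is 0.25 for four categories
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, handshape, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.25)")
```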
Figure 1. Illustration of signs in American Sign Language.

(A) The signs ICE-CREAM and ORAL are distinguished by handshapes that are very similar. (B) The signs EVENING and PIG differ in where they are articulated on the body. (C) The signs COMB and DRINK resemble pantomimic actions. (Photos courtesy of Brennan Terhune-Cotter.)
Signs can sometimes look like pantomimic actions; for example, the ASL sign COMB depicts the action of combing hair and the sign DRINK mimics drinking from a cup (Figure 1C). However, the brain encodes such manual productions as linguistic representations, not as pantomimic gestures. The production of such action-depicting verbs engages classic left hemisphere language regions, while the production of pantomimic gestures that look identical to these verbs does not [3]. Evidence that meaning and form are not conflated in sign language, despite frequent form-meaning overlap, comes from the finding that signers experience a tip-of-the-finger state, analogous to the tip-of-the-tongue state [4]. That is, signers can recall the meaning of a sign but fail to retrieve its form, which indicates that meaning and form are retrieved and represented separately during sign production. Evidence that signs are not holistic gestures and that signing involves the assembly of phonological units comes from the existence of ‘slips of the hand’ in which signers occasionally mis-select or exchange handshapes, locations, or movements [5]. Furthermore, signing differs from non-linguistic reaching and grasping actions because signing is not visually guided — signers gaze at their addressee and do not track the movements of their hands.
The results of Leonard et al. [2] provide strong support for the hypothesis that lexical signs, like words, are rapidly constructed on-line from sublexical component parts. Using ECoG recordings, the authors were able to identify electrodes over precentral, postcentral, and supramarginal cortex that selectively encoded distinct handshapes and body locations. Some of these electrodes showed neural activity that began before the signer moved his hand, which likely reflects planning activity prior to the actual motor movement, such as preparing to contact the face. Other electrodes exhibited activity that was locked to the onset of movement, which may reflect motor and proprioceptive feedback used to guide the formation and maintenance of a target handshape, for example. Direct electrical stimulation confirmed that these neural populations were causally involved in sign production, because stimulation induced hand- and location-specific movements and/or sensations in the participant.
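The idea of a feature-selective electrode, and of separating pre-movement from movement-locked activity, can be illustrated with a small sketch. Here each electrode's mean high-gamma amplitude is compared across body-location categories in two time windows with a one-way ANOVA; the data, window sizes, and significance threshold are stand-in assumptions rather than the study's methods.

```python
# Sketch: test each electrode for body-location selectivity, separately in a
# pre-movement window and a movement-locked window. Synthetic data throughout.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
n_trials, n_elec, n_time = 120, 32, 100
onset = 50                                    # sample index of movement onset
location = rng.integers(0, 3, size=n_trials)  # 3 body-location categories

# Trial-aligned high-gamma traces: trials x electrodes x time
hg = rng.normal(0.0, 1.0, size=(n_trials, n_elec, n_time))
hg[:, 0, :] += 0.8 * (location == 1)[:, None]  # make electrode 0 location-tuned

for name, window in [("pre-movement", slice(onset - 25, onset)),
                     ("movement", slice(onset, onset + 25))]:
    mean_amp = hg[:, :, window].mean(axis=2)   # trials x electrodes
    for e in range(n_elec):
        groups = [mean_amp[location == c, e] for c in range(3)]
        F, p = f_oneway(*groups)
        if p < 0.001:                          # crude threshold, for illustration
            print(f"electrode {e}: location-selective, {name} window (p = {p:.1e})")
```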
But to what extent were these cortical responses specific to language production, rather than simply reflecting general motor actions of the hand and arm? Leonard et al. [2] obtained several types of evidence for linguistic specificity. First, the spatial distribution of neural activity across location and handshape electrodes was organized along a linguistically relevant hierarchy, as revealed by unsupervised clustering of the neural data. One primary cluster distinguished lexical signs from fingerspelled words, which have very distinct linguistic and sensorimotor properties. Unlike signs, fingerspelling is produced at a single location in signing space, can contain sequences of several handshapes, and is based on the orthography of English. Within the lexical sign cluster, neural activity distinguished between two large, linguistically relevant classes of signs: neutral-space signs, which are produced at an unspecified location in front of the torso, and body-anchored signs, which are produced on (or near) a specific location on the face or body.
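As a schematic of how unsupervised clustering can expose such a hierarchy, the sketch below groups synthetic electrode tuning profiles (mean responses to fingerspelling, neutral-space, and body-anchored conditions) with agglomerative clustering. The three-way feature definition and the cluster count are illustrative assumptions, not the study's actual features.

```python
# Sketch: hierarchical clustering of electrode tuning profiles. With synthetic
# profiles built around three response types, the merge tree recovers a
# fingerspelling split and a neutral-space vs. body-anchored split.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)

# 30 electrodes x 3 condition means (fingerspelling, neutral, body-anchored)
profiles = np.vstack([
    rng.normal([2.0, 0.2, 0.2], 0.3, size=(10, 3)),  # fingerspelling-preferring
    rng.normal([0.2, 2.0, 0.2], 0.3, size=(10, 3)),  # neutral-space-preferring
    rng.normal([0.2, 0.2, 2.0], 0.3, size=(10, 3)),  # body-anchored-preferring
])

Z = linkage(profiles, method="ward")            # hierarchical merge tree
labels = fcluster(Z, t=3, criterion="maxclust")
print(np.bincount(labels)[1:])                  # cluster sizes, e.g. [10 10 10]
```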
Second, Leonard et al. [2] found that linguistic movements evoked neural activity patterns distinct from those of transitional (non-linguistic) movements within sensorimotor and supramarginal cortices. Linguistic movements are specified as part of the lexical representation of a sign that is stored in memory, whereas transitional movements vary across contexts and are not linguistically constrained. Importantly, these movement types were matched for sensorimotor properties in this analysis.
Third, the production of real signs exhibited neural patterns distinct from those of pseudosigns, which were again matched for low-level sensorimotor features. Signs evoked stronger neural activity than pseudosigns in most regions, except posterior parietal cortex, which was more active for pseudosigns (and also for non-lexical transitional movements).
Finally, cortical responses were correlated with sign frequency, such that greater neural activity was associated with more frequent signs, particularly in dorsal prefrontal cortex. Frequency effects are argued to be associated with phonological representations for both words and signs [6,7].
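At its simplest, a frequency analysis of this kind reduces to correlating each sign's mean neural response with its log lexical frequency, as in the sketch below with synthetic values. The positive slope is built into the simulated data to mirror the reported direction of the effect; the numbers themselves are arbitrary.

```python
# Sketch: correlate per-sign mean high-gamma amplitude with log sign frequency.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n_signs = 80
log_freq = rng.normal(0, 1, size=n_signs)               # standardized log frequency
amplitude = 0.4 * log_freq + rng.normal(0, 1, n_signs)  # per-sign mean activity

r, p = pearsonr(log_freq, amplitude)
print(f"r = {r:.2f}, p = {p:.3f}")
```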
Previous neuroimaging studies have found that very similar sensorimotor, frontal, and parietal regions are active during sign production [8,9]. The key contribution of the Leonard et al. [2] study is the discovery that discrete cortical locations within these regions map onto distinct phonological features of sign language, distinguishing between handshapes, body locations, and (to a lesser extent) types of linguistic movements (wrist versus finger movement). Further, the fine-scale temporal patterns derived from the ECoG data provide evidence for the time course of sign production. Early pre-movement activity encodes linguistic articulatory goals, and later activity (during movement) reflects feedback and error monitoring of sign production. Because signers receive little visual feedback during signing (the hands often fall outside the field of view), error monitoring for signing must occur primarily through somatosensory and motor feedback [10].
For speakers, ECoG data have identified speech-articulator representations (such as the tongue and lips) that are laid out somatotopically along sensorimotor cortex, and spatiotemporal patterns of neural activity that are hierarchically organized by articulatorily defined phonetic features, such as lip-rounding or tongue position [11]. The Leonard et al. [2] results for signing are surprisingly parallel, despite the dramatic difference in linguistic articulators. Speaking and signing both involve rapid, fine-grained coordination of multiple articulators that together encode abstract linguistic representations of form (‘phonemes’). This universal level of structure appears to be supported by the same neural principles and architecture, regardless of language modality. Future research in both linguistics (clarifying the nature of phonological features in sign language) and neuroscience (more controlled mapping of the neural encoding of these features in time and space) will reveal the extent to which phonological structure in human language is modality-independent versus specific to the manual or vocal articulators.
References
1. Brentari, D. (2019). Sign Language Phonology (Cambridge University Press).
2. Leonard, M.K., Lucas, B., Blau, S., Corina, D.P., and Chang, E.F. (2020). Cortical encoding of manual articulatory and linguistic features in American Sign Language. Curr. Biol. 30, 4342–4351.
3. Emmorey, K., McCullough, S., Mehta, S., Ponto, L.L.B., and Grabowski, T.J. (2011). Sign language and pantomime production differentially engage frontal and parietal cortices. Lang. Cogn. Process. 26, 878–901.
4. Thompson, R., Emmorey, K., and Gollan, T.H. (2005). “Tip of the fingers” experiences by deaf signers: insights into the organization of a sign-based lexicon. Psychol. Sci. 16, 856–860.
5. Klima, E., and Bellugi, U. (1979). The Signs of Language (Harvard University Press).
6. Levelt, W.J., Roelofs, A., and Meyer, A.S. (1999). A theory of lexical access in speech production. Behav. Brain Sci. 22, 1–38.
7. Emmorey, K., Winsler, K., Midgley, K.J., Grainger, J., and Holcomb, P.J. (2020). Neurophysiological correlates of frequency, concreteness, and iconicity in American Sign Language. Neurobiol. Lang. 1, 249–267.
8. Emmorey, K., Mehta, S., McCullough, S., and Grabowski, T.G. (2016). The neural circuits recruited for the production of signs and fingerspelled words. Brain Lang. 160, 30–41.
9. San José-Robertson, L., Corina, D.P., Ackerman, D., Guillemin, A., and Braun, A.R. (2004). Neural systems for sign language production: mechanisms supporting lexical selection, phonological encoding, and articulation. Hum. Brain Mapp. 23, 156–167.
10. Emmorey, K., Bosworth, R., and Kraljic, T. (2009). Visual feedback and self-monitoring of sign language. J. Mem. Lang. 61, 398–411.
11. Bouchard, K.E., Mesgarani, N., Johnson, K., and Chang, E.F. (2013). Functional organization of human sensorimotor cortex for speech articulation. Nature 495, 327–332.
