2022 Jan 5;34(2):224–235. doi: 10.1162/jocn_a_01790

The Cortical Organization of Syntactic Processing Is Supramodal: Evidence from American Sign Language

William Matchin 1,2, Deniz İlkbaşaran 1, Marla Hatrak 1, Austin Roth 1, Agnes Villwock 1,3, Eric Halgren 1, Rachel I Mayberry 1
PMCID: PMC8764739  PMID: 34964898

Abstract

Areas within the left-lateralized neural network for language have been found to be sensitive to syntactic complexity in spoken and written language. Previous research has shown that these areas are active for sign language as well, but whether they respond specifically to syntactic complexity in sign language, independent of lexical processing, has yet to be established. To investigate this question, we used fMRI to image deaf native signers' comprehension of 180 sign strings in American Sign Language (ASL) with a picture-probe recognition task. The ASL strings were all six signs in length but varied across three levels of syntactic complexity: sign lists, two-sign sentences, and complex sentences. Syntactic complexity significantly affected comprehension and memory, both behaviorally and neurally: it improved accuracy and response time on the picture-probe recognition task and elicited a left-lateralized activation pattern in anterior and posterior superior temporal sulcus (aSTS and pSTS). Minimal or absent syntactic structure reduced picture-probe recognition and elicited activation in bilateral pSTS and occipital-temporal cortex. These results provide evidence from a sign language, ASL, that the combinatorial processing of aSTS and pSTS is supramodal in nature. The results further suggest that the neurolinguistic processing of ASL is characterized by overlapping and separable neural systems for syntactic and lexical processing.

INTRODUCTION

Neuroimaging studies of multiple languages (English, French, German, and Dutch) have isolated syntactic processing by contrasting the activation elicited by syntactically structured sentences or phrases with that elicited by conditions with reduced or absent structure such as word lists. fMRI and PET studies using this subtraction paradigm have found activation in four broad left hemisphere regions, with some variability in the specific regions activated for each study: Broca's area, or the posterior two-thirds of the inferior frontal gyrus (IFG), the anterior temporal lobe (ATL), the posterior temporal lobe, and the temporal–parietal junction (Matchin, Hammerly, & Lau, 2017; Zaccarella, Meyer, Makuuchi, & Friederici, 2017; Goucha & Friederici, 2015; Pallier, Devauchelle, & Dehaene, 2011; Fedorenko, Hsieh, Nieto-Castañón, Whitfield-Gabrieli, & Kanwisher, 2010; Makuuchi, Bahlmann, Anwander, & Friederici, 2009; Rogalsky, Matchin, & Hickok, 2008; Vandenberghe, Nobre, & Price, 2002; Stowe et al., 1998; Mazoyer et al., 1993).

Although there is debate as to which regions specifically underlie syntactic versus semantic combinatorial processing, combinatorial complexity effects are observed in both spoken and written language. For instance, Uddén et al. (2019) performed an fMRI study with more than 200 participants and observed syntactic complexity effects in a left-lateralized frontal-temporal–parietal network for both spoken and written language. These left hemisphere regions have been found to tightly overlap for activation patterns to spoken and written language stimuli at the word and sentence level (Wilson, Bautista, & McCarron, 2018; Pallier et al., 2011; Marinkovic et al., 2003; Booth et al., 2002a). Recent studies using more temporally sensitive methods, such as electrocorticography and magnetoencephalography, have also found that these regions dynamically coordinate with combinatorial processing for both written and spoken language in French and English (Matchin et al., 2017; Nelson et al., 2017; Fedorenko et al., 2016; Brennan & Pylkkänen, 2012).

Together, these results demonstrate that the neural network for syntactic processing operates cross-linguistically, as would be predicted by linguistic theory, and that this network also operates cross-modally with respect to spoken versus written language. If the neural network for syntactic processing is truly abstract and supramodal, then it should also be sensitive to syntactic complexity in sign language.

The neural basis of sign languages has generally been found to be similar to that of spoken and written languages, involving similar temporal dynamics of neural activity associated with lexical, syntactic, and semantic processing (Newman, Supalla, Fernandez, Newport, & Bavelier, 2015; Leonard et al., 2012, 2013; Capek et al., 2009; Sakai, Tatsuno, Suzuki, Kimura, & Ichida, 2005; Neville et al., 1997). Left hemisphere lesions are predominantly associated with aphasia in American Sign Language (ASL), as in spoken language (Hickok, Kritchevsky, Bellugi, & Klima, 1996; Poizner, Klima, & Bellugi, 1990), although the use of space for linguistic modulation may involve the right hemisphere in British Sign Language (BSL; Atkinson, Marshall, Woll, & Thacker, 2005).

Structured stimuli, such as sentences, reliably show increased activation in the left ATL relative to unstructured word lists in spoken and written language, but studies of sign languages using this paradigm have not found such effects. Deaf and hearing native signers (individuals who acquired a sign language from birth) showed neural activation to BSL sentences in the ATL, but no differential activation for sentences compared with word lists, leading to the conclusion that BSL lacks the closed-class inflections associated with ATL processing (MacSweeney et al., 2006). In another study, deaf native and nonnative (individuals who began to acquire the language after infancy) signers of French Sign Language (LSF) showed significant effects, but these were unexpectedly located in the bilateral basal ganglia, left insula, and right IFG/insula (Moreno, Limousin, Dehaene, & Pallier, 2018).

Building on this work, we examine the neural underpinnings of syntactic processing in ASL. Doing so allows us to investigate two key questions about language processing in the brain. First, does the visual–motor channel through which language is perceived and expressed alter the neural network of syntactic processing, as argued by studies reporting unexpected results? Second, does variation in the age and setting of language acquisition affect the neural underpinnings of syntactic processing? The goal of this study is to investigate the first question as a prerequisite to answering the second. To isolate syntactic processing from lexical processing, we compared neural activation to ASL sign strings in which syntax is fully present (complex sentences [CSs]) with activation when syntax is reduced (two-sign clauses) or absent (unstructured sign lists), a paradigm that is reliably sensitive to combinatorial processing in spoken and written language (Pallier et al., 2011; Jobard, Vigneau, Mazoyer, & Tzourio-Mazoyer, 2007). Here, we used fMRI to record deaf native signers' neural responses to ASL sign strings as a function of combinatorial and syntactic complexity.

METHODS

Participants

Thirteen ASL signers (five men) ranging in age from 19 to 33 years (M = 26.6, SD = 4.7 years) participated in the study and were compensated for their time. The institutional review board of the University of California San Diego approved the experimental protocol. Each participant was born severely-to-profoundly deaf to deaf parents who used ASL with them from birth. All the participants had normal or corrected-to-normal vision and were right-handed (Edinburgh Handedness Inventory, M = 3.5/4.0, SD = 0.66; Oldfield, 1971). All participants performed within the average range on a nonverbal cognitive screening battery (Block Design, M = 12.8, SD = 2.84; Picture Arrangement, M = 10.4, SD = 2.29; ASL Digits Forward, M = 5.9, SD = 0.79; ASL Digits Backwards, M = 4.7, SD = 1.23; Wechsler & Naglieri, 2006). In a separate comprehension study using a sentence-to-picture matching task, each participant showed accurate comprehension of complex ASL syntactic structures (Mayberry et al., in preparation).

Stimuli

For this study, we controlled syntactic structure in the ASL stimuli by creating six-sign strings that varied across three levels of combinatorial complexity: 1) sign lists (SLs), in which signs were strung together with no phrasal structure; 2) two-sign sentences, each string a series of three two-sign subject + verb clauses (3SV), with structure between the signs of each pair but none across pairs; and 3) complex sentences (CSs), each string a single six-sign sentence with two hierarchically linked clauses (conditional and relative clause sentences). Each combinatorial condition consisted of 60 six-sign stimulus strings.

Lexical Items

To ensure that lexical familiarity was not confounded with syntactic structure, the lexical items of the stimulus strings were selected from ASL vocabulary familiar to young deaf signing children using the ASL version of the MacArthur–Bates Communicative Development Index (Anderson & Reilly, 2002), excluding prepositions, conjunctions, compounds, classifier signs, and fingerspelled words. The additional lexical items required to complete the experimental design were selected from a previous study of the subjective frequency ratings of a corpus consisting of 432 ASL signs (Mayberry, Hall, & Zvaigzne, 2014). Signs were selected from among those rated greater than 4 on a 7-point Likert scale by 59 deaf signers. Additional signs were obtained from vocabulary used in teaching English as a Second Language. Using these resources to control for the familiarity of the ASL signs resulted in a pool of 712 unique but familiar ASL signs. Because the experimental design required 1,080 lexical items (3 combinatorial conditions × 60 strings × 6 signs), 28% (n = 199) of the signs appeared in two conditions and another 9% (n = 68) appeared in three conditions. Crucially, no sign appeared more than once within any condition (except for the closed-class ASL sign IF in the CS condition, described below and in Table 1). The ASL signs were further arranged within each string to avoid phonological or semantic overlap between adjacent signs.1

Table 1.

Examples of the ASL Stimulus Sign Strings for Each Combinatorial Condition Showing the Serial Position of the Signs, the Type and Scope of the Nonmanual Marker, and the English Translation

SL
  NMM(a):   br (serial position 1 in this example; see note a)
  Gloss(b): MONKEY BOX LEAF WEDDING HELICOPTER ADULT
  Trans(c): monkey box leaf wedding helicopter adult

3SV
  NMM:   tm1 (serial positions 1, 3, and 5)
  Gloss: FAMILY TRAVEL DESSERT ALL-GONE LETTER SAD
  Trans: The family travels. The dessert is all gone. The letter is sad.

CS (conditional)
  NMM:   br (serial positions 4–6)
  Gloss: HUSBAND COOK DINNER IF WIFE LATE
  Trans: The husband cooks dinner if the wife is late.

CS (relativized)
  NMM:   tm3 (serial position 1), tm2 (serial positions 2–4)
  Gloss: RACCOON BEHIND TALL TREE TRUE-BUS MEAN
  Trans: The raccoon that is behind the tall tree is seriously mean.

3SV = S + V clauses; CS = complex clausal sentences. See https://osf.io/xjn4y/?view_only=819587a173d341b5bbc8668b26b99d23 for video examples of the conditions.

(a) Nonmanual grammatical marker (NMM): br = brow raise; tm1, tm2, tm3 = Topic Markers 1, 2, 3 (Sandler & Lillo-Martin, 2006); see Methods for the alternate serial placement of the br in the SL condition.

(b) ASL sign gloss.

(c) English translation.

Visual–Facial Complexity

Complex ASL sentences, such as the conditionals and relative clause sentences used in the CS condition, are marked in ASL with obligatory nonmanual markers (Sandler & Lillo-Martin, 2006; Liddell, 1978), which have been found to be processed in the language areas of the left hemisphere (McCullough, Emmorey, & Sereno, 2005). To control for visual–facial complexity across the conditions and ensure that the CS stimuli could not be distinguished from the SL and 3SV stimuli solely by the presence or absence of a nonmanual marker, we added brow raises (which were prosodic but did not indicate any particular syntactic structure) to the SL trials and topic markers to the 3SV stimulus strings, as described below. One of the co-authors, a highly experienced ASL researcher and a deaf native signer, produced the experimental stimuli multiple times while being videotaped. The most natural-looking renditions were then selected for the study. The stimuli were pilot tested with three native deaf signers and refilmed where necessary to ensure that each stimulus string was readily understood and judged to look like “natural” ASL independent of condition. Table 1 gives examples of the stimulus sign strings from the three combinatorial conditions.

Unrelated SLs

The stimulus strings in the SL condition consisted of six unrelated nouns or adjectives, for example, MONKEY, BOX, LEAF, WEDDING, HELICOPTER, ADULT or RULE, HORSE, MEETING, PLATE, SCISSORS, SUMMER. Each string was constructed so that no adjacent signs formed a phrase or overlapped phonologically. The signs were produced in sequence, as if they formed a sentence, with no pauses between them. For the SL condition, Trials 1–20 had an eyebrow raise on the first sign, Trials 21–40 on the third sign, and Trials 41–60 on the fifth sign. The mean duration of the ASL signs in the SL condition was 0.68 sec (SD = 0.14), with a mean stimulus string duration of 4.06 sec.

Two-Sign Sentences

The stimulus strings of the 3SV condition each consisted of three consecutive two-sign subject + verb clauses (six signs in total), as for example, CHILDREN FIGHT, COUNTRY BEAUTIFUL,2 CLOWN RUN. The three S + V sentences within each stimulus string were semantically and pragmatically unrelated to one another, so the three clauses did not cohere over the stimulus string syntactically or pragmatically. There was no phonological overlap between adjacent signs within each string. For each 3SV trial, a topic NMM accompanied the first, third, and fifth signs in the string. The mean duration of the ASL signs in the 3SV condition was 0.70 sec (SD = 0.18), with a mean stimulus string duration of 4.20 sec.

CSs

Each stimulus in the CS condition was a single six-sign complex sentence, either a conditional (n = 24) or a relative clause sentence (n = 36), for example, HUSBAND COOK DINNER IF WIFE LATE (“The husband cooks dinner if the wife is late.”); BLACK JACKET EVERYONE LOVE POLICE GIFT-ME (“The police gave me the black jacket that everyone loves.”). Each conditional sentence was signed with obligatory NMMs co-occurring with the sign IF and the conditional clause. The relativized clauses were accompanied by obligatory NMMs over the relativized clause (see Table 1). No adjacent signs within a stimulus sentence had phonological overlap. The mean duration of the ASL signs in the CS condition was 0.56 sec (SD = 0.20), with a mean stimulus sentence duration of 3.37 sec.

Procedure

Stimulus Presentation and Scanner Task

Stimuli were presented in blocks of three stimulus strings of the same condition. To promote attention, a fixation-cross appeared prior to each trial, to which participants responded with a thumb press. To encourage comprehension, participants performed a picture-probe recognition task after the third trial of each block. Altogether, there were 20 picture-probes for each condition, yielding 60 picture-probes across the experiment (3 conditions × 20 picture-probes). Participants decided with a finger press (index finger for yes; middle finger for no) whether a line drawing appearing at the end of the block represented a sign presented in the previous stimulus string. The response hand, right versus left, was counterbalanced across runs. Each run consisted of 15 blocks (five per condition), for a total of 60 blocks (20 per condition) across the entire experiment.
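The block bookkeeping implied above (60 strings per condition, three strings per block, one picture-probe per block) can be verified with a short sketch; the variable names are purely illustrative, not from the study:

```python
# Design arithmetic for the scanner task (illustrative sketch).
CONDITIONS = ("SL", "3SV", "CS")
STRINGS_PER_CONDITION = 60
STRINGS_PER_BLOCK = 3

blocks_per_condition = STRINGS_PER_CONDITION // STRINGS_PER_BLOCK  # 20 blocks
total_blocks = blocks_per_condition * len(CONDITIONS)              # 60 blocks
probes_per_condition = blocks_per_condition  # one picture-probe per block
total_probes = probes_per_condition * len(CONDITIONS)

print(blocks_per_condition, total_blocks, total_probes)  # 20 60 60
```

This matches the 20 picture-probes per condition (60 total) stated in the Procedure.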

To control for basic visual processing, a still image of the signer's face and body was presented for the same average duration as the stimulus blocks. A white fixation-cross appeared at the center of the image every 4 sec to which the participants responded with a thumb press (MacSweeney et al., 2006).

fMRI Data and Analyses

Data were collected with a 3 T GE MR750 fMRI scanner with a 32-channel head coil (at the Keck Center for Functional Magnetic Resonance Imaging at University of California San Diego) using echo-planar imaging (echo time = 25 msec, repetition time = 3000 msec, 90° flip angle, 1.875 × 1.875 mm in-plane resolution with a 2.5-mm slice thickness; interleaved slice acquisition, no gap). Following the experimental runs, a high-resolution anatomical image was collected (1 mm isotropic). Data were processed and analyzed with AFNI software using standard procedures. We discarded the first four volumes of each run to control for T1 saturation effects and then performed slice-timing correction. Motion correction was achieved by using a 6-parameter rigid-body transformation, with each functional volume in each run first aligned to a single volume in that run. Functional volumes were aligned to the anatomical image and subsequently aligned to the Talairach template brain (Talairach & Tournoux, 1988). Functional images were resampled to 3-mm isotropic voxels and spatially smoothed using a Gaussian kernel of 6-mm FWHM. The data were high-pass filtered with a cutoff frequency of 0.0023 Hz at the first-level analysis stage by means of AFNI's 3dDeconvolve function using the “polort” parameter with a value of 3.

First-level analyses were performed on each participant's data. Each predictor variable representing the time course of stimulus presentation was lagged by 4 sec to account for the delayed hemodynamic response. Five regressors of interest were used in the experimental analysis: SL, 3SV, CS, still face, and picture-probe. The six motion parameters were included as regressors of no interest. For all whole-brain analyses, we used a voxel-wise threshold of p < .005 (one-tailed), correcting for multiple comparisons with a family-wise error (FWE) rate of p < .05 by using a cluster size correction (40 voxels) and Monte Carlo simulations, taking into account the smoothness in the data by using AFNI's 3dFWHMx function with the acf option (Cox, Chen, Glen, Reynolds, & Taylor, 2017).
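A predictor of the kind described here can be sketched as a boxcar shifted by the hemodynamic lag. The TR below matches the stated scan parameters, but the run length and block onsets are illustrative assumptions, not the study's actual timing:

```python
import numpy as np

TR = 3.0       # repetition time in seconds (from the scan parameters above)
N_VOLS = 120   # volumes per run: illustrative assumption
LAG = 4.0      # hemodynamic lag applied to each predictor (sec)

def lagged_boxcar(onsets, durations, lag=LAG, tr=TR, n_vols=N_VOLS):
    """Return a boxcar regressor sampled at each TR, shifted by a fixed lag."""
    reg = np.zeros(n_vols)
    times = np.arange(n_vols) * tr  # acquisition time of each volume
    for onset, dur in zip(onsets, durations):
        # Mark volumes falling inside the lag-shifted stimulus window.
        reg[(times >= onset + lag) & (times < onset + dur + lag)] = 1.0
    return reg

# Hypothetical block: one condition presented at t = 0 sec for 12 sec.
r = lagged_boxcar(onsets=[0.0], durations=[12.0])
```

In the study's actual analysis, one such regressor per condition (plus the six motion parameters) would enter the 3dDeconvolve model.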

RESULTS

Analyses of the participants' performance on the probe recognition task during scanning revealed that our attempt to control ASL syntactic complexity was successful. Combinatorial complexity, or syntactic structure, had a robust effect on the native deaf signers' ASL comprehension and memory. More important, and as predicted, syntactic complexity modulated the neural response of areas in the left anterior and posterior superior temporal sulcus (aSTS, pSTS), areas previously shown to be sensitive to syntax in spoken and written languages, with lexical processing being more bilateral and posteriorly located.

Comprehension Results

Accuracy

The native deaf signers were highly accurate (mixed-effects linear regression, R2 = .40, p < .02; Table 2) on the picture-probe recognition task. They were significantly more accurate in the CS condition (M = 0.94, SD = 0.06) than in either the 3SV (M = 0.79, SD = 0.14) or SL condition (M = 0.81, SD = 0.09), which did not differ from each other (Tukey's honest significant difference [HSD], p > .05; Table 2). Thus, ASL combinatorial, or syntactic, structure clearly renders the lexical content of ASL sign strings easier to comprehend and remember.

Table 2.

Mean (SD) for Picture-Probe Recognition Accuracy (Proportion Correct) and Response Time (msec) for the ASL Scanner Comprehension Task as a Function of Syntactic Complexity Condition

            SL           3SV          CS
Accuracy    0.81 (0.09)  0.79 (0.14)  0.94 (0.06)(a)
RT          1514 (349)   1420 (312)   1354 (360)(b)

(a) R2 = .40, p < .02.

(b) R2 = .90, p < .02.

Response Time

As would be predicted from their accuracy, the participants were fastest to respond to probes in the CS condition (mixed-effects linear regression, R2 = .90, p < .01; Table 2): the CS condition (M = 1354 msec, SD = 360 msec) was faster than either the 3SV (M = 1420 msec, SD = 312 msec) or SL (M = 1514 msec, SD = 349 msec) condition, which did not differ from each other (Tukey's HSD, p > .05; Table 2). Syntactic structure clearly renders the meaning of sign strings more readily accessible and memorable.

Having found facilitative effects of syntactic complexity in ASL sign strings on comprehension and memory on the scanner task, both in terms of accuracy and response time, we next asked whether these effects correspond to a neural sensitivity to syntactic complexity and, if so, whether the locus of this neural sensitivity overlaps in cortical organization with that previously found for syntactic processing in spoken and written languages, as described above.

Neuroimaging Results

Syntactic Processing

In order to identify brain regions involved in ASL syntactic processing, we performed a whole-brain analysis for brain regions showing a linearly increasing response to syntactic complexity. We used contrast weights proportional to the maximum constituent size (the number of signs in a single phrase) for each condition (−2, −1, 3). This analysis yielded one large significant cluster in the left STS (135 voxels) that was composed of two distinct clusters, one in the pSTS and one in the aSTS, made contiguous by a few intervening voxels (Figure 1A). This effect replicates in ASL the sensitivity of these regions to syntactic complexity in spoken and written languages. To analyze the complexity effect within each of these regions separately, we manually segmented the significant cluster at these intervening voxels, yielding two clusters of nearly the same size (aSTS: 68 voxels; pSTS: 66 voxels). Each of these clusters independently surpassed the cluster extent threshold for multiple comparisons (40 voxels). We then averaged the estimated percent signal change values for each linguistic condition > still-face baseline across voxels within each ROI separately and plotted them (Figure 1B).
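The weights (−2, −1, 3) are simply the per-condition maximum constituent sizes (1, 2, and 6 signs) with the mean removed, which makes the contrast sum to zero; a minimal sketch of that derivation:

```python
import numpy as np

# Maximum constituent size per condition: SL = 1 sign, 3SV = 2, CS = 6.
constituent_size = np.array([1.0, 2.0, 6.0])

# Demeaning yields zero-sum contrast weights proportional to constituent size.
weights = constituent_size - constituent_size.mean()
print(weights)  # [-2. -1.  3.]
```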

Figure 1.

(A) Whole-brain fMRI parametric analysis of syntactic structure, displayed on a template brain in Talairach space (Talairach & Tournoux, 1988). (B) Average percent signal change values for each linguistic condition > still-face baseline extracted from the aSTS and pSTS portions of the significant clusters shown in (A). Error bars indicate standard error of the mean of each condition with subject effects removed (Cousineau, 2005).
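The Cousineau (2005) normalization behind these error bars subtracts each subject's mean and adds back the grand mean before computing the SEM, so between-subject offsets do not inflate the bars; a minimal sketch with a toy subjects × conditions matrix (illustrative values only, not the study's data):

```python
import numpy as np

# Toy data: rows = subjects, columns = conditions (SL, 3SV, CS).
data = np.array([[0.10, 0.20, 0.35],
                 [0.30, 0.40, 0.55],
                 [0.05, 0.18, 0.30]])

# Remove between-subject variability: subtract each subject's own mean,
# then add back the grand mean (Cousineau, 2005).
normalized = data - data.mean(axis=1, keepdims=True) + data.mean()

# Within-subject SEM per condition, computed on the normalized scores.
sem = normalized.std(axis=0, ddof=1) / np.sqrt(data.shape[0])
```

After normalization every subject has the same overall mean, so the SEM reflects only the condition-by-subject variability of interest.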

Table 3 lists the Talairach coordinates and cluster size of significant clusters for the complexity analyses. Table 4 lists this information for the analysis of lexical-sensory processing, SL, relative to the still-face baseline.

Table 3.

Significant Clusters from the Whole-Brain Parametric Analysis of Structure

                 Hemisphere     x     y     z    Cluster Size (mm3)
STS              Left           –     –     –    3645
  pSTS peak                   −51   −34     5
  aSTS peak                   −47    −1   −12

Coordinates (Talairach space) represent local peaks within the single significant STS cluster.

Table 4.

Significant Clusters from the Whole-Brain Analysis of Lexical and Sensory Processing, SL > Still-Face Baseline

                                                        Hemisphere     x     y     z    Cluster Size (mm3)
Lateral occipital-temporal lobe                         Right          –     –     –    10,557
  Middle temporal gyrus/middle occipital gyrus
  peak (MT/V5)                                                        46   −65    −1
  pSTS peak                                                           44   −38     7
Lateral occipital-temporal lobe                         Left           –     –     –    7695
  Middle occipital gyrus/inferior temporal gyrus
  peak (MT/V5)                                                       −47   −69    −2
  pSTS peak                                                          −46   −41    13
Fusiform gyrus                                          Left         −40   −39   −15    1755
Inferior occipital gyrus                                Right         29   −85    −6    1755

Coordinates in Talairach space (Talairach & Tournoux, 1988) represent center of mass unless noted as local peaks.

Syntactic Complexity Effects

To investigate the effects of syntactic complexity in these brain areas, we averaged the estimated percent signal change values for each condition within these significant clusters. Syntactic complexity showed a significant linear effect on BOLD signal change in left pSTS (R2 = .97, p < .001; linear contrasts [−1, 0, +1], F(1, 24) = 50.526, p < .0001). The CS condition elicited a stronger signal change (M = 0.364, SD = 0.359) than did either the 3SV (M = 0.221, SD = 0.323) or SL conditions (M = 0.164, SD = 0.316; Tukey's HSD, p < .05; Figure 1B). Likewise, syntactic complexity showed a significant linear effect on BOLD signal change in left aSTS (R2 = .985, p < .0001; linear contrasts [−1, 0, +1], F(1, 24) = 35.54, p < .0001). Again, CS structure elicited a stronger signal change (M = 0.0275, SD = 0.403) than did either the 3SV (M = −0.0746, SD = 0.415) or SL conditions (M = −0.1235, SD = 0.453; Tukey's HSD, p < .05; Figure 1B). These results parallel those previously found for neural sensitivity to syntax in spoken and written language, indicating that this neural sensitivity to syntactic structure is modality independent (Matchin et al., 2017; Nelson et al., 2017; Fedorenko et al., 2016; Brennan & Pylkkänen, 2012; Pallier et al., 2011).
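As a sanity check on the linear contrast, applying the [−1, 0, +1] weights to the reported pSTS condition means reduces to the CS − SL difference in percent signal change; a small sketch using the means quoted above:

```python
import numpy as np

# Reported mean percent signal change in left pSTS for SL, 3SV, CS.
means_psts = np.array([0.164, 0.221, 0.364])
weights = np.array([-1.0, 0.0, 1.0])  # linear contrast across complexity levels

contrast_value = float(weights @ means_psts)  # equals CS mean minus SL mean
```

The contrast value here is 0.364 − 0.164 = 0.2 percent signal change; the F-statistics reported in the text additionally account for subject-level variance.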

Figure 2.

Whole-brain fMRI analysis of lists of unconnected signs (SL condition) > the still-face baseline, displayed on a template brain in Talairach space (Talairach & Tournoux, 1988).

Lexical and Sensory Processing Effects

To determine the neural substrates of basic lexical and sensory processing for ASL, we compared activation for the SL condition relative to the still-face baseline condition, for which the scanner task was detection of an intermittent fixation cross. This contrast revealed bilateral activation of the inferior temporal sulcus/fusiform gyrus, inferior lateral occipital lobe (likely middle temporal area [MT]), and the pSTS, extending in the right hemisphere into the middle STS (Figure 2). These areas have previously been reported to be active for basic sign language processing at the lexical level in ASL (Ferjan Ramirez et al., 2014; Leonard et al., 2012, 2013).

Overlap of Lexical-Sensory and Syntactic Processing Networks

In order to clarify the relationship between lexical-sensory and sentential-syntactic processing in ASL, we performed an overlap analysis. First, we performed the conjunction of significant clusters from the contrast of each of the linguistic conditions (SL, 3SV, CS) compared with the still-face baseline, which identified shared lexical and sensory processing across these conditions. Then, we computed the overlap of this shared lexical-sensory processing with the syntactic complexity effect (Figure 3). The results indicated largely nonoverlapping networks. Lexical and sensory processing were associated with bilateral and posterior activations whereas hierarchical syntactic processing was associated with left anterior activations, with minimal overlap between these effects of linguistic processing in the left pSTS.
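The conjunction and overlap steps amount to intersecting binary significance masks. A minimal sketch with toy 1-D masks (illustrative values only; the actual analysis operates on 3-D thresholded statistical maps):

```python
import numpy as np

# Toy binary masks of significant voxels for each condition > baseline.
sl_mask  = np.array([1, 1, 0, 0, 1], dtype=bool)
sv3_mask = np.array([1, 1, 1, 0, 0], dtype=bool)
cs_mask  = np.array([1, 0, 1, 0, 1], dtype=bool)

# Conjunction: voxels significant in all three linguistic conditions,
# identifying shared lexical-sensory processing.
lexical_sensory = sl_mask & sv3_mask & cs_mask

# Overlap of that conjunction with a toy syntactic-complexity mask.
syntax_mask = np.array([1, 1, 1, 0, 0], dtype=bool)
overlap = lexical_sensory & syntax_mask
```

In this toy example only one voxel survives both the conjunction and the syntax mask, mirroring the minimal overlap observed in the left pSTS.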

Figure 3.

Overlap analysis displayed on a template brain in Talairach space (Talairach & Tournoux, 1988) of the effect of lexical-sensory processing (yellow), identified by taking the conjunction of each of the linguistic conditions > still-face baseline, and syntactic processing (blue). Overlap of these two effects is shown in green.

DISCUSSION

In this study, we used fMRI to investigate the neural correlates of combinatorial processing of ASL sign strings. For native deaf signers, BOLD signal changes in left aSTS and pSTS correlated with the degree of syntactic structure contained in ASL sign strings as participants performed a comprehension and memory task. The results indicate, first, that the brain areas sensitive to syntactic structure in spoken and written languages in the left temporal lobe, aSTS and pSTS, are sensitive to syntactic structure in a sign language, ASL. These findings provide evidence that the neural network for combinatorial processing has evolved to operate supramodally to extract meaning from lexical combinations. The neural combinatorial processing of language is unaltered by the sensory–motor origins of the units being combined. Second, our results indicate that although syntactic, or combinatorial, processing is left lateralized to areas in the temporal lobe, lexical processing, in the absence of syntactic structure, is more bilateral and posterior by comparison. We discuss each of these findings in turn.

A key neurolinguistic finding here is that syntactic structure performs the same informational and neural functions in ASL that it performs in spoken and written language. Syntactic structure in ASL sign strings significantly facilitates language comprehension and memory (Table 2) and elicits significant activity in the left aSTS and pSTS (Figure 1). These same regions have been found to be sensitive to the syntactic complexity of word strings in spoken and written language (Nelson et al., 2017; Brennan & Pylkkänen, 2012; Jobard et al., 2007) for native speakers of Dutch, German, French, and English (Uddén et al., 2019; Matchin et al., 2017; Nelson et al., 2017; Fedorenko et al., 2016; Pallier et al., 2011; Makuuchi et al., 2009).

The present results, in conjunction with previous research, suggest that the left aSTS and pSTS play a key role in the processing of combinatorial, or syntactic, structure in ASL. First, the present results show that activation in these regions correlates with the combinatorial properties of the sign stimuli on a comprehension and memory task. Second, activation of the aSTS, but not the pSTS, has also been observed when deaf native signers produce two-sign or two-word combinations while being imaged with magnetoencephalography (Blanco-Elorrieta, Kastner, Emmorey, & Pylkkänen, 2018). Whether the sensitivity of the pSTS to combinatorial structure observed in this study is unique to comprehension and memory processes, as contrasted with those that underlie production, requires more investigation.

One potential caveat for the present results is the relative lack of combinatorial or syntax-related activation in the IFG, or Broca's area, reported in previous studies of combinatorial processing in spoken and written language. This region has previously been suggested to underlie core syntactic processing abilities (Hagoort, 2014; Hagoort & Indefrey, 2014; Friederici, 2011; Grodzinsky & Santi, 2008). However, many studies of spoken language processing have failed to identify combinatorial effects in Broca's area, finding robust effects instead in the temporal lobe (Brennan & Pylkkänen, 2012; Brennan et al., 2012; Rogalsky et al., 2008; Stowe et al., 1998; Mazoyer et al., 1993). It has often been suggested that Broca's area plays a role in sentence processing restricted to working memory and/or production and cognitive control resources that assist sentence processing in difficult circumstances but are not critical for basic combinatorial processing (Matchin, 2018; Rogalsky & Hickok, 2011; January, Trueswell, & Thompson-Schill, 2009; Rogalsky et al., 2008; Novick, Trueswell, & Thompson-Schill, 2005).

Another possible interpretation of the lack of activation observed in Broca's area is that it reflects a modality effect on combinatorial processing arising from the sensory–motor origins of words in speech versus sign. However, there are two reasons to rule out this conclusion. First, a previous fMRI study in BSL (MacSweeney et al., 2006) reported a robust combinatorial effect in Broca's area (while failing to identify an effect in the ATL). Second, the null effect in Broca's area in the present study is likely because of task differences: although the picture-probe word recognition task used here did not elicit activation in this area, ongoing work in our laboratory using the same design with an anomaly detection task and ASL stimuli appears to elicit activation in the IFG as well as the pSTS, supporting this interpretation.

Variations in experimental methods may explain the inconsistency of the present results with previous studies of sign languages using a similar subtraction paradigm. Null effects for neural activation in response to sentences versus word lists in BSL may have arisen from the low performance of all the groups (deaf and hearing native signers and hearing sign-naive participants) on the sentence condition compared with the word list condition. Moreover, stimulus length was not controlled across conditions, and each condition used a different scanner task (MacSweeney et al., 2006). These uncontrolled factors could inject noise into the syntactic neural processing signal. The unexpected finding of significant syntactic complexity effects for LSF in the basal ganglia may have arisen from the stimulus presentation method, which was list-like for both the sentence and unrelated word list conditions (the format MacSweeney et al. used only for their word-list condition). Although producing stimulus signs one at a time, beginning and ending in the lap, is conceptually similar to presenting text one word at a time (Moreno et al., 2018), this presentation mode removes the phonotactic and prosodic features encoded in the dynamic visual–manual signal of LSF or BSL, perhaps prompting the signer participants to mentally fill in the articulatory and prosodic gaps in the stimuli.

In this study, all the stimuli contained ASL prosody and all the participants comprehended the complex ASL sentences with a high level of accuracy. This was indicated by their performance on the picture-probe recognition task while in the scanner (94%) and on a sentence-to-picture recognition experiment in a separate study outside the scanner (Mayberry et al., in preparation). The present results demonstrate that aSTS and pSTS sensitivity to combinatorial complexity in language stimuli operates independently of the sensory–motor origins of the units being combined.

Second, by contrasting lexical recognition with lower-level face recognition using conjunction analyses, our results indicate that, in the absence of syntactic structure, lexical processing in ASL is more bilateral and posterior by comparison. This result is consistent with previous findings showing that the initial stages of language comprehension rely on sensory–motor processes, in contrast to the downstream processes of extracting meaning from syntax (Leonard et al., 2013; Marinkovic et al., 2003; Booth et al., 2002b).

This finding is further consistent with the neural correlates of lexical processing observed in previous neuroimaging studies of sign languages, which are located more posteriorly and bilaterally compared with the more anterior loci associated with syntactic processing (Capek et al., 2009; Neville et al., 1997, 1998). The present conjunction analyses showed that lexical and sensory processing are associated with more posterior bilateral activations, with some overlap between these effects in the left pSTS (Figure 3). These contrasts suggest that separate and overlapping neural systems underlie syntactic and lexical processing in ASL.

Neural sensitivity to syntactic structure in sign languages has been shown to be modulated by at least three factors. One is the language proficiency of the participants (Mayberry, Chen, Witcher, & Klein, 2011). The participants in this study were all ASL signers who were deaf from birth and learned ASL from infancy from their deaf parents. Moreover, they all showed high accuracy levels when comprehending complex ASL sentences. ASL proficiency at levels lower than native-like proficiency is associated with reduced levels of syntactic performance and patterns of neural activation that vary from those of native and language-dominant controls in studies of spoken and sign languages (Cargnelutti, Tomasino, & Fabbro, 2019; Mayberry & Kluender, 2018; Leonard et al., 2010). An important unanswered question is the extent to which neural sensitivity to syntactic structure is modulated by the age of onset of initial language experience, a unique developmental sequela of infant deafness (Mayberry, Davenport, Roth, & Halgren, 2018; Ferjan Ramirez et al., 2014, 2016; Mayberry et al., 2011). Pinpointing the neural correlates of syntactic processing of ASL when it is learned from birth, as we do in this study, is a necessary first step to reliably identify and interpret the effects of delayed first-language acquisition along with other factors that may affect development of the brain language system.

Our findings accord with those of numerous other neuroimaging studies of sign language demonstrating that the neural network sensitive to syntactic structure is supramodal in nature. Together, these findings help explain why loosely organized gesture systems can evolve into sign languages with complex grammars as successive generations of deaf children are exposed to and add conventionalization to signs that emerge from children's gesture systems (Nyst, 2007; Sandler, Meir, Padden, & Aronoff, 2005; Senghas, Kita, & Özyürek, 2004; Kegl, 2002). This robust phenomenon of language emergence within the manual-visual modality would not be possible if the neural mechanism for extracting meaning from abstract combinatorial patterns evolved its function solely for symbols originating from within the vocal–auditory modality.

In conclusion, we find that, when hierarchical structure is present in ASL sign strings, areas in the left aSTS and pSTS show sensitivity to this complexity. We further find that these brain areas responsive to syntactic complexity in ASL overlap to a limited extent with those for lexical processing, which is more posterior and bilateral by comparison. Our results demonstrate that the integrative process of extracting meaning from hierarchical representation is supramodal and operates independently of the original sensory–motor form of the words creating the sentences.

APPENDIX

The present experiment was designed prior to the availability of the ASL-LEX database (Sehyr, Caselli, Cohen-Goldberg, & Emmorey, 2021). Note that we used subjective frequency ratings from another database, as described in the methods (Mayberry et al., 2014), to create the present experiment. As a post hoc investigation, however, we searched the ASL-LEX database for any subjective frequency and iconicity ratings available for the present stimuli (n = 712). Here, we describe the search procedure. First, direct matches between the present stimulus glosses and the ID tags in ASL-LEX were identified in a batch process using scripts in R (R Core Team, 2020). Second, all direct matches and the remaining stimuli were manually searched in the database, identifying alternate IDs as needed along with the missing items (e.g., SHOW-UP vs. APPEAR; STORE vs. shop_2).
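The batch-matching step can be sketched as follows. This is a minimal Python analogue (the original matching used R scripts, which are not reproduced here); the gloss and ID lists are hypothetical illustrations, not the study's stimuli:

```python
# Sketch of the batch gloss-matching step against ASL-LEX ID tags.
# Data below are hypothetical examples, not the study's stimulus set.

def normalize(label: str) -> str:
    """Normalize a gloss or ASL-LEX ID tag for case- and separator-insensitive comparison."""
    return label.strip().upper().replace("_", "-")

def batch_match(stimulus_glosses, asllex_ids):
    """Return (direct matches, unmatched glosses left for manual database search)."""
    index = {normalize(i): i for i in asllex_ids}
    matches, unmatched = {}, []
    for gloss in stimulus_glosses:
        key = normalize(gloss)
        if key in index:
            matches[gloss] = index[key]
        else:
            unmatched.append(gloss)  # candidates for manual search (alternate IDs, synonyms)
    return matches, unmatched

# Hypothetical example: STORE and SHOW-UP have no direct match
# (ASL-LEX lists "shop_2" and "appear"), so they fall to manual search.
glosses = ["STORE", "AUNT", "SHOW-UP"]
ids = ["aunt", "shop_2", "appear"]
matches, unmatched = batch_match(glosses, ids)
print(matches)    # {'AUNT': 'aunt'}
print(unmatched)  # ['STORE', 'SHOW-UP']
```

As in the procedure described above, only exact (normalized) matches are accepted automatically; everything else is deferred to manual evaluation.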

ASL-LEX often included more than one sign variant for a concept (e.g., GLASS, LIGHT, LUNCH, MEAT), for which we picked the identical sign and its respective ID when available. In ASL-LEX, signs with alternatives are marked with ID tags such as “sign_1 vs. sign_2,” but in some cases, the initial entry was unmarked as “sign vs. sign_2.” Manual evaluation of signs also identified some mismatches that resulted from initial glossing inconsistencies between the present stimuli and the ASL-LEX variants. Rarely, there were incorrectly matched synonyms (e.g., MINE as in “my” incorrectly matched for MINE as in the act of mining).

In several cases, there were slight phonological differences (mainly handshape or movement) between the present stimuli and the signs shown in ASL-LEX (e.g., AUNT, CITY, COUNTRY, BASKET). For the purposes of this investigation, we included these items in our statistical analysis, but noted the differences. In a few cases, there were signs with two nearly identical entries in ASL-LEX, with the only difference being the presence of an affective facial expression (e.g., FINE, CUTE, EAT), which had different ratings than the same examples without such expressions. In these instances, we picked either the unmarked sign or the closest fitting match to our stimuli.

Finally, in some cases, the ASL-LEX database included a relatively infrequent and inflected form of a sign without an entry for the base form (e.g., FORGETFUL instead of FORGET). In these cases, we did not use the frequency/iconicity ratings. Conversely, in a few cases, the present stimuli included inflected verbs not found in ASL-LEX; in these instances, we did not use the ratings for the citation/base form (e.g., SHOW-1s to 3pl vs. SHOW, or GIVE-OUT vs. GIVE) because our perusal of the database showed that base/uninflected forms have different subjective ratings from inflected ones.

Subsequent to the retrieval procedure described above, we analyzed the subjective frequency and iconicity ratings available for the present stimuli in the ASL-LEX database. The subjective frequency ratings (available for all but 91 signs, for which no match was found) did not differ between the SL and 3SV conditions but were significantly higher for the signs of the CS condition (means of 4.54, 4.68, and 5.14, respectively; t = 5.54, p < .001). This is to be expected: the complex sentences of the CS condition included closed class lexical items, which the other two conditions did not. Because closed class lexical items create syntactic structure, they are more frequent in language corpora than open class ones. For example, the closed class pronoun signs used in the CS condition (e.g., I/me, we, you, your, he/she/it, his/hers/its) all have higher subjective frequency ratings in ASL-LEX (ranging from 5.78 to 6.76 on a 7-point Likert scale) than the open class lexical items comprising the SL and 3SV conditions (nouns, verbs, and adjectives with mean ratings of 4.54 and 4.68, as given above). The subjective iconicity ratings (available for all but 101 signs, for which no match was found) did not significantly differ across the combinatorial conditions (mean ratings of 2.96, 3.74, and 2.85, respectively; p > .2).
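The frequency comparison above is a standard two-sample t-test on per-sign ratings. A minimal sketch using only Python's standard library, with hypothetical ratings (not the study's data, which yielded t = 5.54):

```python
# Welch's two-sample t-statistic from raw ratings.
# The rating lists are hypothetical illustrations, not the study's data.
from statistics import mean, variance  # variance() is the sample variance

def welch_t(a, b):
    """Welch's t-statistic for two independent samples with unequal variances."""
    na, nb = len(a), len(b)
    return (mean(a) - mean(b)) / ((variance(a) / na + variance(b) / nb) ** 0.5)

open_class = [4.5, 4.6, 4.7, 4.4, 4.8]   # e.g., signs from the SL/3SV conditions
cs_signs = [5.0, 5.2, 5.3, 5.1, 5.4]     # e.g., signs from the CS condition
t = welch_t(cs_signs, open_class)
print(round(t, 2))  # prints 6.0
```

A positive t-statistic here reflects the same pattern reported above: higher mean subjective frequency for the condition containing closed class items. In practice, the p-value would be obtained from the t-distribution with Welch–Satterthwaite degrees of freedom (e.g., via scipy.stats.ttest_ind with equal_var=False).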

Acknowledgments

The research reported in this publication was supported by National Institutes of Health grant R01DC012797. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. We thank K. Vincent for help in recruiting the participants, T. Davenport for experimental assistance, Y. Huang for help with database search, and especially the individuals who participated in the study.

Reprint requests should be sent to Rachel I. Mayberry, Department of Linguistics, University of California San Diego, 9500 Gilman Drive, La Jolla, CA 92093–0108, or via e-mail: rmayberry@ucsd.edu.

Author Contributions

William Matchin: Data curation; Formal analysis; Supervision; Visualization; Writing—Original draft. Deniz İlkbaşaran: Data curation; Investigation; Methodology; Supervision. Marla Hatrak: Conceptualization; Investigation; Methodology; Project administration; Supervision. Austin Roth: Data curation; Formal analysis; Investigation; Software; Visualization. Agnes Villwock: Data curation; Investigation; Writing—Review & editing. Eric Halgren: Conceptualization; Formal analysis; Resources; Software; Writing—Review & editing. Rachel I. Mayberry: Conceptualization; Data curation; Formal analysis; Funding acquisition; Investigation; Methodology; Project administration; Visualization; Writing—Original draft; Writing—Review & editing.

Funding Information

Rachel I. Mayberry, National Institute of Deafness and Communication Disorders, grant number: R01DC012797.

Diversity in Citation Practices

A retrospective analysis of the citations in every article published in this journal from 2010 to 2020 has revealed a persistent pattern of gender imbalance: Although the proportions of authorship teams (categorized by estimated gender identification of first author/last author) publishing in the Journal of Cognitive Neuroscience (JoCN) during this period were M(an)/M = .408, W(oman)/M = .335, M/W = .108, and W/W = .149, the comparable proportions for the articles that these authorship teams cited were M/M = .579, W/M = .243, M/W = .102, and W/W = .076 (Fulvio et al., JoCN, 33:1, pp. 3–7). Consequently, JoCN encourages all authors to consider gender balance explicitly when selecting which articles to cite and gives them the opportunity to report their article's gender citation balance.

Notes

1. 

The Appendix details post hoc analyses of the present stimuli using subjective frequency and iconicity ratings obtained from the ASL-LEX database (Sehyr et al., 2021).

2. 

ASL uses a zero copula, among other morphosyntactic devices, similar to languages such as Hebrew, Arabic, Russian, and others (Sampson & Mayberry, 2019).

REFERENCES

  1. Anderson, D., & Reilly, J. (2002). The MacArthur Communicative Development Inventory: Normative data for American Sign Language. Journal of Deaf Studies and Deaf Education, 7, 83–106. 10.1093/deafed/7.2.83, [DOI] [PubMed] [Google Scholar]
  2. Atkinson, J., Marshall, J., Woll, B., & Thacker, A. (2005). Testing comprehension abilities in users of British Sign Language following CVA. Brain and Language, 94, 233–248. 10.1016/j.bandl.2004.12.008, [DOI] [PubMed] [Google Scholar]
  3. Blanco-Elorrieta, E., Kastner, I., Emmorey, K., & Pylkkänen, L. (2018). Shared neural correlates for building phrases in signed and spoken language. Scientific Reports, 8, 5492. 10.1038/s41598-018-23915-0, [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Booth, J. R., Burman, D. D., Meyer, J. R., Gitelman, D. R., Parrish, T. B., & Mesulam, M. M. (2002a). Functional anatomy of intra- and cross-modal lexical tasks. Neuroimage, 16, 7–22. 10.1006/nimg.2002.1081, [DOI] [PubMed] [Google Scholar]
  5. Booth, J. R., Burman, D. D., Meyer, J. R., Gitelman, D. R., Parrish, T. B., & Mesulam, M. M. (2002b). Modality independence of word comprehension. Human Brain Mapping, 16, 251–261. 10.1002/hbm.10054, [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Brennan, J., Nir, Y., Hasson, U., Malach, R., Heeger, D. J., & Pylkkänen, L. (2012). Syntactic structure building in the anterior temporal lobe during natural story listening. Brain and Language, 120, 163–173. 10.1016/j.bandl.2010.04.002, [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Brennan, J., & Pylkkänen, L. (2012). The time-course and spatial distribution of brain activity associated with sentence processing. Neuroimage, 60, 1139–1148. 10.1016/j.neuroimage.2012.01.030, [DOI] [PubMed] [Google Scholar]
  8. Capek, C. M., Grossi, G., Newman, A. J., McBurney, S. L., Corina, D., Roeder, B., et al. (2009). Brain systems mediating semantic and syntactic processing in deaf native signers: Biological invariance and modality specificity. Proceedings of the National Academy of Sciences, U.S.A., 106, 8784–8789. 10.1073/pnas.0809609106, [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Cargnelutti, E., Tomasino, B., & Fabbro, F. (2019). Language brain representation in bilinguals with different age of appropriation and proficiency of the second language: A meta-analysis of functional imaging studies. Frontiers in Human Neuroscience, 13, 154. 10.3389/fnhum.2019.00154, [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Cousineau, D. (2005). Confidence intervals in within-subject designs: A simpler solution to Loftus and Masson's method. Tutorials in Quantitative Methods for Psychology, 1, 42–45. 10.20982/tqmp.01.1.p042 [DOI] [Google Scholar]
  11. Cox, R. W., Chen, G., Glen, D. R., Reynolds, R. C., & Taylor, P. A. (2017). fMRI clustering in AFNI: False-positive rates redux. Brain Connectivity, 7, 152–171. 10.1089/brain.2016.0475, [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Fedorenko, E., Hsieh, P.-J., Nieto-Castañón, A., Whitfield-Gabrieli, S., & Kanwisher, N. (2010). New method for fMRI investigations of language: Defining ROIs functionally in individual subjects. Journal of Neurophysiology, 104, 1177–1194. 10.1152/jn.00032.2010, [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Fedorenko, E., Scott, T. L., Brunner, P., Coon, W. G., Pritchett, B., Schalk, G., et al. (2016). Neural correlate of the construction of sentence meaning. Proceedings of the National Academy of Sciences, U.S.A., 113, E6256–E6262. 10.1073/pnas.1612132113, [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Ferjan Ramirez, N., Leonard, M. K., Davenport, T. S., Torres, C., Halgren, E., & Mayberry, R. I. (2016). Neural language processing in adolescent first-language learners: Longitudinal case studies in American Sign Language. Cerebral Cortex, 26, 1015–1026. 10.1093/cercor/bhu273, [DOI] [PMC free article] [PubMed] [Google Scholar]
  15. Ferjan Ramirez, N., Leonard, M. K., Torres, C., Hatrak, M., Halgren, E., & Mayberry, R. I. (2014). Neural language processing in adolescent first-language learners. Cerebral Cortex, 24, 2772–2783. 10.1093/cercor/bht137, [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Friederici, A. D. (2011). The brain basis of language processing: From structure to function. Physiological Reviews, 91, 1357–1392. 10.1152/physrev.00006.2011, [DOI] [PubMed] [Google Scholar]
  17. Goucha, T., & Friederici, A. D. (2015). The language skeleton after dissecting meaning: A functional segregation within Broca's area. Neuroimage, 114, 294–302. 10.1016/j.neuroimage.2015.04.011, [DOI] [PubMed] [Google Scholar]
  18. Grodzinsky, Y., & Santi, A. (2008). The battle for Broca's region. Trends in Cognitive Sciences, 12, 474–480. 10.1016/j.tics.2008.09.001, [DOI] [PubMed] [Google Scholar]
  19. Hagoort, P. (2014). Nodes and networks in the neural architecture for language: Broca's region and beyond. Current Opinion in Neurobiology, 28, 136–141. 10.1016/j.conb.2014.07.013, [DOI] [PubMed] [Google Scholar]
  20. Hagoort, P., & Indefrey, P. (2014). The neurobiology of language beyond single words. Annual Review of Neuroscience, 37, 347–362. 10.1146/annurev-neuro-071013-013847, [DOI] [PubMed] [Google Scholar]
  21. Hickok, G., Kritchevsky, M., Bellugi, U., & Klima, E. S. (1996). The role of the left frontal operculum in sign language aphasia. Neurocase, 2, 373–380. 10.1080/13554799608402412 [DOI] [Google Scholar]
  22. January, D., Trueswell, J. C., & Thompson-Schill, S. L. (2009). Co-localization of Stroop and syntactic ambiguity resolution in Broca's area: Implications for the neural basis of sentence processing. Journal of Cognitive Neuroscience, 21, 2434–2444. 10.1162/jocn.2008.21179, [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Jobard, G., Vigneau, M., Mazoyer, B., & Tzourio-Mazoyer, N. (2007). Impact of modality and linguistic complexity during reading and listening tasks. Neuroimage, 34, 784–800. 10.1016/j.neuroimage.2006.06.067, [DOI] [PubMed] [Google Scholar]
  24. Kegl, J. (2002). Language emergence in a language-ready brain: Acquisition. In Morgan G. & Woll B. (Eds.), Directions in sign language acquisition (pp. 207–254). Amsterdam: John Benjamins. 10.1075/tilar.2.12keg [DOI] [Google Scholar]
  25. Leonard, M. K., Brown, T. T., Travis, K. E., Gharapetian, L., Hagler, D. J., Jr., Dale, A. M., et al. (2010). Spatiotemporal dynamics of bilingual word processing. Neuroimage, 49, 3286–3294. 10.1016/j.neuroimage.2009.12.009, [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. Leonard, M. K., Ferjan Ramirez, N., Torres, C., Hatrak, M., Mayberry, R. I., & Halgren, E. (2013). Neural stages of spoken, written, and signed word processing in beginning second language learners. Frontiers in Human Neuroscience, 7, 322. 10.3389/fnhum.2013.00322, [DOI] [PMC free article] [PubMed] [Google Scholar]
  27. Leonard, M. K., Ferjan Ramirez, N., Torres, C., Travis, K. E., Hatrak, M., Mayberry, R. I., et al. (2012). Signed words in the congenitally deaf evoke typical late lexicosemantic responses with no early visual responses in left superior temporal cortex. Journal of Neuroscience, 32, 9700–9705. 10.1523/JNEUROSCI.1002-12.2012, [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Liddell, S. K. (1978). Nonmanual signals and relative clauses in American Sign Language. In Siple P. (Ed.), Understanding language through sign language research (pp. 59–90). New York: Academic Press. [Google Scholar]
  29. MacSweeney, M., Campbell, R., Woll, B., Brammer, M. J., Giampietro, V., David, A. S., et al. (2006). Lexical and sentential processing in British Sign Language. Human Brain Mapping, 27, 63–76. 10.1002/hbm.20167, [DOI] [PMC free article] [PubMed] [Google Scholar]
  30. Makuuchi, M., Bahlmann, J., Anwander, A., & Friederici, A. D. (2009). Segregating the core computational faculty of human language from working memory. Proceedings of the National Academy of Sciences, U.S.A., 106, 8362–8367. 10.1073/pnas.0810928106, [DOI] [PMC free article] [PubMed] [Google Scholar]
  31. Marinkovic, K., Dhond, R. P., Dale, A. M., Glessner, M., Carr, V., & Halgren, E. (2003). Spatiotemporal dynamics of modality-specific and supramodal word processing. Neuron, 38, 487–497. 10.1016/s0896-6273(03)00197-1, [DOI] [PMC free article] [PubMed] [Google Scholar]
  32. Matchin, W. (2018). A neuronal retuning hypothesis of sentence-specificity in Broca's area. Psychonomic Bulletin & Review, 25, 1682–1694. 10.3758/s13423-017-1377-6, [DOI] [PMC free article] [PubMed] [Google Scholar]
  33. Matchin, W., Hammerly, C., & Lau, E. (2017). The role of the IFG and pSTS in syntactic prediction: Evidence from a parametric study of hierarchical structure in fMRI. Cortex, 88, 106–123. 10.1016/j.cortex.2016.12.010, [DOI] [PubMed] [Google Scholar]
  34. Mayberry, R. I., Chen, J.-K., Witcher, P., & Klein, D. (2011). Age of acquisition effects on the functional organization of language in the adult brain. Brain and Language, 119, 16–29. 10.1016/j.bandl.2011.05.007, [DOI] [PubMed] [Google Scholar]
  35. Mayberry, R. I., Cheng, Q., Hatrak, M., İlkbaşaran, D., Hall, M. L., & Huang, Y. (in prep). Learning sentence structure after a childhood of gesture. [Google Scholar]
  36. Mayberry, R. I., Davenport, T., Roth, A., & Halgren, E. (2018). Neurolinguistic processing when the brain matures without language. Cortex, 99, 390–403. 10.1016/j.cortex.2017.12.011, [DOI] [PMC free article] [PubMed] [Google Scholar]
  37. Mayberry, R. I., Hall, M. L., & Zvaigzne, M. (2014). Subjective frequency ratings for 432 ASL signs. Behavior Research Methods, 46, 526–539. 10.3758/s13428-013-0370-x, [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. Mayberry, R. I., & Kluender, R. (2018). Rethinking the critical period for language: New insights into an old question from American Sign Language. Bilingualism: Language and Cognition, 21, 886–905. 10.1017/s1366728917000724, [DOI] [PMC free article] [PubMed] [Google Scholar]
  39. Mazoyer, B. M., Tzourio, N., Frak, V., Syrota, A., Murayama, N., Levrier, O., et al. (1993). The cortical representation of speech. Journal of Cognitive Neuroscience, 5, 467–479. 10.1162/jocn.1993.5.4.467, [DOI] [PubMed] [Google Scholar]
  40. McCullough, S., Emmorey, K., & Sereno, M. (2005). Neural organization for recognition of grammatical and emotional facial expressions in deaf ASL signers and hearing nonsigners. Cognitive Brain Research, 22, 193–203. 10.1016/j.cogbrainres.2004.08.012, [DOI] [PubMed] [Google Scholar]
  41. Moreno, A., Limousin, F., Dehaene, S., & Pallier, C. (2018). Brain correlates of constituent structure in sign language comprehension. Neuroimage, 167, 151–161. 10.1016/j.neuroimage.2017.11.040, [DOI] [PMC free article] [PubMed] [Google Scholar]
  42. Nelson, M. J., El Karoui, I., Giber, K., Yang, X., Cohen, L., Koopman, H., et al. (2017). Neurophysiological dynamics of phrase-structure building during sentence processing. Proceedings of the National Academy of Sciences, U.S.A., 114, E3669–E3678. 10.1073/pnas.1701590114, [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. Neville, H. J., Bavelier, D., Corina, D., Rauschecker, J., Karni, A., Lalwani, A., et al. (1998). Cerebral organization for language in deaf and hearing subjects: Biological constraints and effects of experience. Proceedings of the National Academy of Sciences, U.S.A., 95, 922–929. 10.1073/pnas.95.3.922, [DOI] [PMC free article] [PubMed] [Google Scholar]
  44. Neville, H. J., Coffey, S. A., Lawson, D. S., Fischer, A., Emmorey, K., & Bellugi, U. (1997). Neural systems mediating American Sign Language: Effects of sensory experience and age of acquisition. Brain and Language, 57, 285–308. 10.1006/brln.1997.1739, [DOI] [PubMed] [Google Scholar]
  45. Newman, A. J., Supalla, T., Fernandez, N., Newport, E. L., & Bavelier, D. (2015). Neural systems supporting linguistic structure, linguistic experience, and symbolic communication in sign language and gesture. Proceedings of the National Academy of Science, U.S.A., 112, 11684–11689. 10.1073/pnas.1510527112, [DOI] [PMC free article] [PubMed] [Google Scholar]
  46. Novick, J. M., Trueswell, J. C., & Thompson-Schill, S. L. (2005). Cognitive control and parsing: Reexamining the role of Broca's area in sentence comprehension. Cognitive, Affective, & Behavioral Neuroscience, 5, 263–281. 10.3758/cabn.5.3.263, [DOI] [PubMed] [Google Scholar]
  47. Nyst, V. A. S. (2007). A descriptive analysis of Adamorobe Sign Language (Ghana). Utrecht, The Netherlands: LOT. [Google Scholar]
  48. Oldfield, R. C. (1971). The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia, 9, 97–113. 10.1016/0028-3932(71)90067-4, [DOI] [PubMed] [Google Scholar]
  49. Pallier, C., Devauchelle, A.-D., & Dehaene, S. (2011). Cortical representation of the constituent structure of sentences. Proceedings of the National Academy of Sciences, U.S.A., 108, 2522–2527. 10.1073/pnas.1018711108, [DOI] [PMC free article] [PubMed] [Google Scholar]
  50. Poizner, H., Klima, E. S., & Bellugi, U. (1990). What the hands reveal about the brain. Cambridge, MA: MIT Press. 10.7551/mitpress/7206.001.0001 [DOI] [Google Scholar]
  51. R Core Team. (2020). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. https://www.r-project.org/index.html [Google Scholar]
  52. Rogalsky, C., & Hickok, G. (2011). The role of Broca's area in sentence comprehension. Journal of Cognitive Neuroscience, 23, 1664–1680. 10.1162/jocn.2010.21530, [DOI] [PubMed] [Google Scholar]
  53. Rogalsky, C., Matchin, W., & Hickok, G. (2008). Broca's area, sentence comprehension, and working memory: An fMRI study. Frontiers in Human Neuroscience, 2, 14. 10.3389/neuro.09.014.2008, [DOI] [PMC free article] [PubMed] [Google Scholar]
  54. Sakai, K. L., Tatsuno, Y., Suzuki, K., Kimura, H., & Ichida, Y. (2005). Sign and speech: Amodal commonality in left hemisphere dominance for comprehension of sentences. Brain, 128, 1407–1417. 10.1093/brain/awh465, [DOI] [PubMed] [Google Scholar]
  55. Sampson, T., & Mayberry, R. I. (2019). An emerging SELF: The copula cycle in ASL. Paper presented at 13th International Conference on Theoretical Issues in Sign Language Research (TISLR). Hamburg: Universität Hamburg. [Google Scholar]
  56. Sandler, W., & Lillo-Martin, D. (2006). Sign language and linguistic universals. Cambridge: Cambridge University Press. 10.1017/CBO9781139163910 [DOI] [Google Scholar]
  57. Sandler, W., Meir, I., Padden, C., & Aronoff, M. (2005). The emergence of grammar: Systematic structure in a new language. Proceedings of the National Academy of Sciences, U.S.A., 102, 2661–2665. 10.1073/pnas.0405448102, [DOI] [PMC free article] [PubMed] [Google Scholar]
  58. Sehyr, Z. S., Caselli, N., Cohen-Goldberg, A. M., & Emmorey, K. (2021). The ASL-LEX 2.0 Project: A database of lexical and phonological properties for 2,723 signs in American Sign Language. Journal of Deaf Studies and Deaf Education, 26, 263–277. 10.1093/deafed/enaa038, [DOI] [PMC free article] [PubMed] [Google Scholar]
  59. Senghas, A., Kita, S., & Özyürek, A. (2004). Children creating core properties of language: Evidence from an emerging sign language in Nicaragua. Science, 305, 1779–1782. 10.1126/science.1100199, [DOI] [PubMed] [Google Scholar]
  60. Stowe, L. A., Broere, C. A. J., Paans, A. M. J., Wijers, A. A., Mulder, G., Vaalburg, W., et al. (1998). Localizing components of a complex task: Sentence processing and working memory. NeuroReport, 9, 2995–2999. 10.1097/00001756-199809140-00014, [DOI] [PubMed] [Google Scholar]
  61. Talairach, J., & Tournoux, P. (1988). Co-planar stereotaxic atlas of the human brain. 3-dimensional proportional system: An approach to cerebral imaging. New York: Thieme. [Google Scholar]
  62. Uddén, J., Hultén, A., Schoffelen, J.-M., Lam, N., Harbusch, K., van den Bosch, A., et al. (2019). Supramodal sentence processing in the human brain: fMRI evidence for the influence of syntactic complexity in more than 200 participants. bioRxiv. 10.1101/576769 [DOI] [PMC free article] [PubMed] [Google Scholar]
  63. Vandenberghe, R., Nobre, A. C., & Price, C. J. (2002). The response of left temporal cortex to sentences. Journal of Cognitive Neuroscience, 14, 550–560. 10.1162/08989290260045800, [DOI] [PubMed] [Google Scholar]
  64. Wechsler, D., & Naglieri, J. A. (2006). Wechsler nonverbal scale of ability (WNV). Oxford: Pearson Clinical. 10.1037/t15176-000 [DOI] [Google Scholar]
  65. Wilson, S. M., Bautista, A., & McCarron, A. (2018). Convergence of spoken and written language processing in the superior temporal sulcus. Neuroimage, 171, 62–74. 10.1016/j.neuroimage.2017.12.068, [DOI] [PMC free article] [PubMed] [Google Scholar]
  66. Zaccarella, E., Meyer, L., Makuuchi, M., & Friederici, A. D. (2017). Building by syntax: The neural basis of minimal linguistic structures. Cerebral Cortex, 27, 411–421. 10.1093/cercor/bhv234, [DOI] [PubMed] [Google Scholar]

Articles from Journal of Cognitive Neuroscience are provided here courtesy of MIT Press
