Author manuscript; available in PMC: 2025 Aug 1.
Published in final edited form as: Brain Lang. 2024 Jul 29;255:105447. doi: 10.1016/j.bandl.2024.105447

Neural underpinnings of sentence reading in deaf, native sign language users

Justyna Kotowicz a,*,1, Anna Banaszkiewicz b,*,1, Gabriela Dzięgiel-Fivet c, Karen Emmorey d, Artur Marchewka b, Katarzyna Jednoróg c,*,1
PMCID: PMC11890182  NIHMSID: NIHMS2014446  PMID: 39079468

Abstract

The goal of this study was to investigate sentence-level reading circuits in deaf native signers (a unique group of deaf people who are immersed in a fully accessible linguistic environment from birth) and in hearing readers. Task-based fMRI, functional connectivity and lateralization analyses were conducted. Both groups exhibited overlapping brain activity in left-hemispheric perisylvian regions in response to a semantic sentence task. We found increased activity in left occipitotemporal and right frontal and temporal regions in deaf readers. Lateralization analyses did not confirm more rightward asymmetry in deaf individuals. Deaf readers exhibited weaker functional connectivity between the inferior frontal and middle temporal gyri and enhanced coupling between the temporal and insular cortex. In conclusion, despite the shared functional activity within the semantic reading network across both groups, our results suggest a greater reliance on cognitive control processes in deaf readers, possibly reflecting the greater effort the task required of this group.

Keywords: Deaf, Sentence reading, Task-based fMRI, Connectivity, Brain lateralization

1. Introduction

Deaf individuals raised by deaf families constitute around 5 % of the deaf population (Mitchell & Karchmer, 2004). Due to early immersion in sign language, they have a lower risk of language deprivation than deaf children born to hearing parents, who are surrounded by a spoken language environment only (Glickman & Hall, 2019; Humphries et al., 2012). However, unlike hearing individuals, who approach reading with developed knowledge of a spoken language, deaf native signers begin learning to read without rich and full experience with the spoken language (Hoffmeister & Caldwell-Harris, 2014). Hence, the process of reading, and possibly the neural architecture that supports reading, may differ in deaf signers compared to hearing speakers (Emmorey & Lee, 2021).

The majority of research on the neural underpinnings of reading in deaf individuals has focused on linguistic processing at the single-word level (Emmorey & Lee, 2021), while still relatively little is known about the neural architecture supporting reading in deaf individuals at the sentence level. The results from studies using single-word reading tasks with deaf and hearing individuals are mixed. Some (Emmorey et al., 2013; Waters et al., 2007) suggest that neural circuits for reading for meaning closely overlap irrespective of the presence or lack of direct access to auditory speech. Others point to certain differences, the most consistent of which are lower activation of the left inferior frontal gyrus (IFG) in deaf compared to hearing readers, accompanied by greater recruitment of the right IFG (Aparicio et al., 2007; Li et al., 2014).

Activity in the reading network of deaf individuals during single-word reading tasks has also been related to their reading proficiency. Skilled deaf readers, compared to less-skilled deaf readers, exhibited increased activation in left hemisphere language regions and bilateral ventral occipitotemporal regions (Emmorey et al., 2016; Corina et al., 2013). Emmorey et al. (2016) also reported a positive correlation between reading skills and the level of brain activity in the right IFG and left inferior temporal gyrus (ITG).

Three published studies analysed sentence-level reading and reported overlapping classical reading networks in left perisylvian areas in deaf signers and hearing individuals (Gizewski et al., 2005; Hirshorn et al., 2014; Moreno et al., 2018). In Moreno et al. (2018), sentence reading was compared to word reading in deaf readers only. Activation of the bilateral IFG (pars triangularis and opercularis), middle frontal gyrus (MFG) and middle temporal gyrus (MTG), as well as subcortical regions, replicated results from a prior study with a similar paradigm applied to a group of hearing individuals (Pallier et al., 2011). Two other studies directly compared the groups’ activation during sentence reading contrasted with meaningless characters (Gizewski et al., 2005; Hirshorn et al., 2014). Gizewski et al. (2005) recruited hearing, skilled readers and less-skilled deaf readers, who varied in age of sign language acquisition (0–6 years). Deaf readers had greater activation in the primary auditory cortex but lower activation in the MTG, the left IFG, the occipital lobe and the cerebellum compared to hearing individuals. Hirshorn et al. (2014), besides hearing controls, included two groups of deaf readers: one consisted of spoken language users with limited knowledge of sign language (‘oral deaf’) and the second of native sign language users with little capacity to process auditory spoken language. For all groups, a similar activation pattern was found in frontal and temporal areas bilaterally. Both deaf groups exhibited greater activation than hearing individuals in the bilateral superior temporal gyri (STG), including the auditory cortex. The authors argued that this reflects cross-modal plasticity: the adaptive capacity of the brain to functionally reorganise in the absence of sensory input (Bavelier & Neville, 2002). Deaf signers, compared to both hearing and oral deaf readers, had lower activation in the ventral occipitotemporal cortex (vOT, often described as the visual word form area, VWFA). Subsequently, the authors conducted a functional connectivity analysis with seeds in the left and right STG (defined based on the group difference) and in the anatomically-defined IFG (pars triangularis and opercularis). Both deaf groups had stronger connectivity between the left STG and left IFG (pars triangularis) – areas associated with semantic processing – and between the left STG and thalami when compared to hearing readers. In the hearing and oral deaf participants, connectivity between the left STG and areas involved in phonological processing (pre- and post-central gyri) was greater than in deaf native signers. Hearing readers, compared to deaf native signers, also had stronger connectivity between the left IFG (pars triangularis and opercularis) and the left posterior MTG (pMTG), and between the left IFG pars opercularis and the left posterior fusiform gyrus (FG). Hirshorn et al. (2014) concluded that both deafness and sign language experience affect the functional reorganisation of auditory areas for reading in deaf adults.

Importantly, studies examining the reading network of deaf individuals have suggested potential differences in hemispheric lateralization. Greater engagement of the right hemisphere in deaf readers compared to hearing readers was reported by Aparicio et al. (2007) and Li et al. (2014). Interestingly, greater activation of the right hemisphere in hearing readers has been associated with poorer reading skills (Shaywitz & Shaywitz, 2005). This compensatory mechanism has been described as maladaptive because orthographic representations in the right hemisphere might be less precise than those in the left hemisphere (Laszlo & Sacchi, 2015). In contrast, previous studies of skilled deaf readers (Emmorey et al., 2016; Moreno et al., 2018) reported positive correlations between reading proficiency and brain activity in the right hemisphere. Emmorey (2020) suggested that engagement of the right hemisphere could be an indicator of higher reading skill in deaf individuals rather than a marker of poor reading ability, as it is in hearing individuals (but see Corina et al., 2013). However, to our knowledge, no studies thus far have directly compared lateralization indices during reading between deaf and hearing individuals.

Here, we addressed the following questions: what are the neural underpinnings of short sentence reading in deaf native signers, and to what extent do their functional patterns resemble or differ from those of hearing individuals? The existing neuroimaging literature on deaf readers suggests that the classical reading network largely overlaps in deaf and hearing people, at least in left perisylvian areas. With respect to differences, we predicted greater bilateral recruitment of the STG in deaf readers due to functional reorganisation of auditory areas, as previously shown by Hirshorn et al. (2014; see also Cardin et al., 2020). Second, to probe the semantic reading network with greater precision, we examined task-based connectivity patterns of the core nodes of the reading network common to both groups, asking how these nodes communicate with other brain areas during sentence reading. Finally, to verify previous claims that the right hemisphere is more engaged during reading in deaf signers, we directly compared lateralization indices between deaf and hearing participants.

2. Materials and methods

2.1. Participants

Twenty hearing native Polish speakers (mean age = 23.7, SD = 1.4, range = 20.9 – 26.3) and 12 deaf participants (mean age = 27.4, SD = 4.4, range = 19.8 – 34.8) were included in the fMRI analysis reported here. Both hearing and deaf participants came from two larger longitudinal studies of sign language learning and comprehension (Banaszkiewicz et al., 2021a; Banaszkiewicz et al., 2021b). One deaf participant included in Banaszkiewicz et al. (2021b) was excluded from the current analyses due to incomplete data (a technical problem resulted in missing accuracy measurements for the in-scanner task). All of the participants were right-handed and healthy, had normal or corrected-to-normal vision, and had nonverbal IQ (Raven’s Progressive Matrices) within age norms. All participants had 13 or more years of formal education (one hearing and four deaf participants completed higher education). It is important to note that the hearing individuals underwent an 8-month-long Polish Sign Language (polski język migowy – PJM) course, and at the time of testing for the current experiment they had elementary knowledge of PJM (level A1/A2). For more details about the course see Banaszkiewicz et al. (2021a).

All of the deaf participants were born into deaf, signing families and reported PJM as their first language. Eleven individuals were congenitally deaf; one person reported hearing loss at the age of three. The mean hearing level, as determined by audiogram data, was 95.3 dB for the right ear (range = 70 – 120 dB) and 96.6 dB for the left ear (range = 80 – 120 dB). Eight participants used hearing aids and reported that their speech comprehension with the aids ranged from poor to very good. Prior to the study, deaf participants were asked to fill out a background reading experience questionnaire – self-reported reading skills varied from poor (N=1) to very good (N=6; see Table S1 in supplementary materials for details). The majority of the deaf group declared that they read books (N=7; four participants reported 3 – 10 books per year and three participants 11 – 40 books per year) and journals on a daily basis (N=8). Only one person declared that they do not read regularly. A PJM-Polish interpreter was present during the study to assist with communication between the deaf participants and the hearing experimenters.

Hearing and deaf participants had no contraindications to MRI, gave written informed consent, and were financially reimbursed for their time and effort. The study was approved by the Committee for Research Ethics of the Institute of Psychology of the Jagiellonian University.

2.2. fMRI task and stimuli

The experimental reading task (‘reading condition’) presented short two-word sentences, and participants were asked to make a semantic judgement (Semantic Judgement Task, SJT; Binder et al., 2009). The control task (‘control condition’) required a visual search for hashtags (#) within pairs of consonant strings arranged to resemble two words. In the reading condition, participants decided whether two-word, grammatically valid written phrases were semantically correct (e.g. “boy runs”) or anomalous (e.g. “table drinks”). Both correct and anomalous phrases consisted of words controlled for frequency according to the SUBTLEX-PL database (Mandera et al., 2015) and followed a Verb + Noun or Noun + Verb order. While two-word sentences are brief and simple, they are complete and include all necessary grammatical elements of a sentence. In the control condition, two random consonant strings were displayed on the screen. Half of the string pairs contained two “#” symbols (e.g. gt#j t#pk) and half did not (e.g. rgsh tncf). Participants were asked to discriminate between the two types of strings. All of the stimuli (words/strings) were 3–6 letters long.

In total, 80 stimuli (half belonging to the reading condition and half to the control condition) were displayed using Presentation software (Neurobehavioral Systems, Berkeley, CA) on a screen located at the back of the scanner and viewed via a mirror mounted on the MRI head coil. Participants’ responses were collected using an MRI-compatible ResponseGrip device (NordicNeuroLab; https://nordicneurolab.com/nordic-fmri-solution/). Participants held the ResponseGrip in their left hand and were asked to press a button with their thumb for one decision (semantically correct sentence / strings with “#”) and with their index finger for the other decision (semantically anomalous sentence / strings without “#”). All answers were saved in log files containing lists of correct, incorrect, and missing responses. The stimuli are listed in supplementary materials (Table S2).

2.3. Procedure

Stimuli were presented in a block design, with 5 reading blocks and 5 control blocks presented in alternation. Each block consisted of 8 trials presented in pseudorandomized order: 4 correct / 4 anomalous phrases, or 4 strings with “#” / 4 without “#”. Before each block, a fixation cross was presented for 6–8 s, followed by a 2 s visual cue informing participants about the type of the upcoming block (reading or control), followed by another fixation cross (1–2 s). The total duration of the SJT was 7.9 min (mean block duration = 44 s; stimulus duration = 2 s; inter-stimulus interval = 3 s, starting with a blank screen for 1 s followed by a fixation cross indicating the answer window for 2 s). A sketch of this trial bookkeeping is given below.
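For concreteness, the alternating block structure can be expressed in a few lines of code. This is a minimal, illustrative Python sketch of one possible pseudorandomization scheme, not the authors’ actual Presentation script; all names are hypothetical.

```python
import random

def build_run(n_blocks_per_cond=5, seed=0):
    """Sketch of the alternating block structure: 5 reading and 5 control
    blocks, each containing a balanced, shuffled set of 8 trials."""
    rng = random.Random(seed)
    run = []
    for i in range(n_blocks_per_cond * 2):
        condition = "reading" if i % 2 == 0 else "control"
        if condition == "reading":
            trials = ["correct"] * 4 + ["anomalous"] * 4
        else:
            trials = ["with_hash"] * 4 + ["without_hash"] * 4
        rng.shuffle(trials)  # pseudorandom trial order within the block
        run.append({"block": i + 1, "condition": condition, "trials": trials})
    return run

for block in build_run():
    print(block["block"], block["condition"], block["trials"])
```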

2.4. Imaging parameters

MRI data were acquired on a 3 T Siemens Trio Tim MRI scanner using a 12-channel head coil. T1-weighted (T1w) images were acquired with the following parameters: 176 slices, slice thickness = 1 mm, TR = 2530 ms, TE = 3.32 ms, flip angle = 7 deg, FOV = 256 mm, matrix size = 256 × 256, voxel size = 1 × 1 × 1 mm. An echo planar imaging (EPI) sequence was used for functional imaging; 41 slices were collected with the following protocol: slice thickness = 3 mm, TR = 2500 ms, flip angle = 80 deg, FOV = 216 × 216 mm, matrix size = 72 × 72, voxel size = 3 × 3 × 3 mm.

2.5. fMRI data preprocessing

The preprocessing and statistical analyses of the neuroimaging data were performed using SPM12 (Wellcome Imaging Department, University College London, UK, https://fil.ion.ucl.ac.uk/spm), run in MATLAB R2021a (The MathWorks Inc., Natick, MA, USA). For each participant, functional volumes were realigned to the mean functional image and motion corrected. Additionally, head-motion data artifacts were detected using the Art Toolbox (https://www.nitrc.org/projects/artifact_detect). Image volumes were defined as outliers when signal intensity changes were greater than 3 standard deviations (default settings). For a session to be included in the analyses, 80 % of its volumes needed to be outlier-free; all sessions met this criterion. A two-sample t-test revealed that the groups did not differ in the number of outliers (p = 0.889). The T1w images were coregistered to the mean functional image and segmented based on the tissue probability map. Next, the functional data were normalized to MNI (Montreal Neurological Institute) space using deformation fields derived from the T1w images, with a voxel size of 2 × 2 × 2 mm. Finally, the normalized images were smoothed with a 6 mm full-width-at-half-maximum Gaussian kernel.
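As an illustration of the outlier criterion, the intensity-based part of such artifact detection can be approximated as follows. This is a simplified re-implementation for exposition only (the Art Toolbox itself also applies head-motion thresholds); the function name and toy data are ours.

```python
import numpy as np

def flag_intensity_outliers(volumes, z_thresh=3.0):
    """Flag volumes whose global mean intensity deviates from the run mean
    by more than z_thresh standard deviations (cf. the 3 SD default)."""
    global_signal = volumes.reshape(volumes.shape[0], -1).mean(axis=1)
    z = (global_signal - global_signal.mean()) / global_signal.std()
    return np.abs(z) > z_thresh

# Toy example: 200 volumes of 10 x 10 x 10 voxels with two injected spikes.
rng = np.random.default_rng(0)
vols = rng.normal(1000.0, 5.0, size=(200, 10, 10, 10))
vols[50] += 100.0
vols[120] -= 100.0
outliers = flag_intensity_outliers(vols)
print(np.where(outliers)[0])                 # expected: volumes 50 and 120
print("outlier-free:", 1 - outliers.mean())  # session kept if >= 0.80
```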

2.6. Statistical analysis – Behavioural measurements

Accuracy (percent correct) in the reading and control conditions was analysed. A 2 × 2 repeated-measures ANOVA was used, with condition (reading, control) as a within-subject factor and group (deaf readers, hearing readers) as a between-subject factor. The accuracy data were not normally distributed in all conditions. Because ANOVA is quite robust to violations of the normality assumption, the data were not transformed and the ANOVA was applied to identify between-group differences in accuracy (Field, 2013). In the post-hoc analysis, bootstrapping with 1000 resamples was computed and bias-corrected and accelerated (BCa) bootstrap confidence intervals (CIs) are reported, because the parametric t-test is fragile to violations of the normality of the sampling distribution, for which bootstrapping is a standard remedy (Field, 2013). BCa CIs that do not include zero indicate a significant difference between groups.
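The BCa bootstrap step can be reproduced with standard tools. Below is a minimal sketch using simulated accuracy scores (the real data are not reproduced here); it assumes a recent SciPy release, since BCa intervals for multi-sample statistics are only available in newer versions.

```python
import numpy as np
from scipy.stats import bootstrap

# Hypothetical scores matching only the reported group means and SDs.
rng = np.random.default_rng(1)
deaf = rng.normal(89.4, 8.1, 12)
hearing = rng.normal(97.0, 4.4, 20)

def mean_diff(x, y):
    # Deaf minus hearing, matching the sign of the reported CIs.
    return np.mean(x) - np.mean(y)

res = bootstrap((deaf, hearing), mean_diff, method="BCa",
                n_resamples=1000, confidence_level=0.95, random_state=rng)
ci = res.confidence_interval
print(f"BCa 95% CI for the group difference: [{ci.low:.2f}, {ci.high:.2f}]")
# A CI excluding zero indicates a significant group difference.
```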

2.7. Statistical analysis – Task-based activation

Statistical analysis was performed at the subject (1st) and group (2nd) levels using general linear models (GLM). Timings of reading and control blocks and of visual cues were entered as separate regressors into the design matrix, with the addition of six head-movement regressors of no interest. The resulting regressors were convolved with the hemodynamic response function as implemented in SPM12. Data were high-pass filtered with a cut-off of 1/210 Hz. Reading condition > control condition contrasts were then computed for each participant and used as input to all group-level (2nd level) models. First, within-group results were computed using one-sample t-tests. Second, a conjunction analysis (conjunction null; Friston et al., 2005) was employed to identify regions commonly involved in the reading task across both groups. Finally, two-sample t-tests were performed to compare brain activity between hearing and deaf readers. Whole-brain results were considered significant at a voxel-level threshold of p < 0.001, using cluster-wise family-wise error (FWEc) correction at p < 0.05. Anatomical regions were identified according to the AAL2 atlas (Rolls et al., 2015).
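To make the first-level model concrete, the sketch below builds HRF-convolved block regressors and estimates a reading > control contrast by ordinary least squares on one simulated voxel. It is a toy approximation of the SPM12 pipeline (double-gamma HRF, boxcar convolution); the onsets, durations and all names are illustrative, not the study’s actual design files.

```python
import numpy as np
from scipy.stats import gamma

TR = 2.5  # seconds, matching the acquisition

def double_gamma_hrf(tr, duration=32.0):
    """Canonical double-gamma HRF (an approximation of SPM12's)."""
    t = np.arange(0.0, duration, tr)
    hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
    return hrf / hrf.sum()

def block_regressor(onsets, block_dur, n_scans, tr):
    """Boxcar over block onsets/durations, convolved with the HRF."""
    boxcar = np.zeros(n_scans)
    for onset in onsets:
        boxcar[int(onset / tr):int((onset + block_dur) / tr)] = 1.0
    return np.convolve(boxcar, double_gamma_hrf(tr))[:n_scans]

# Toy design: 5 reading and 5 control blocks over ~7.9 min of scans.
n_scans = 190
reading = block_regressor([10, 105, 200, 295, 390], 40, n_scans, TR)
control = block_regressor([58, 153, 248, 343, 438], 40, n_scans, TR)
X = np.column_stack([reading, control, np.ones(n_scans)])  # design matrix

# Simulated voxel that responds more strongly to reading blocks.
y = X @ np.array([1.0, 0.2, 100.0]) + np.random.default_rng(2).normal(0, 1, n_scans)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("reading > control effect:", np.array([1, -1, 0]) @ beta)
```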

2.8. Statistical analysis – Task-based connectivity

Task-related functional connectivity with seed-to-voxel correlation mapping (weighted GLM option) was performed using the CONN Toolbox v.19.c (Whitfield-Gabrieli & Nieto-Castanon, 2012). Spatial normalisation of the structural data was conducted in the toolbox, as was the default denoising of the functional data preprocessed previously in SPM12 (as described above). The functional data were high-pass filtered (0.008 Hz, as recommended by the toolbox developers for task-related connectivity analyses) to remove low-frequency signal drifts. The standard denoising procedure entered noise components (e.g., signals from cerebral white matter and cerebrospinal fluid, and subject-motion parameters) as nuisance covariates in a GLM in order to remove unwanted artifactual effects. Seeds were pre-defined as the regions activated during the reading (semantic judgement) task common to deaf and hearing participants, as identified in the conjunction analysis. A whole-brain bivariate correlation analysis was run using the seed-to-voxel approach (between the BOLD time-series of each seed and the BOLD time-series of all other voxels in the brain). The obtained Pearson’s correlation coefficients were transformed to normally distributed standardised scores using Fisher’s z-transformation. Subject-specific connectivity maps for each seed were then entered into second-level GLMs to test within- and between-group differences. Second-level results of task-based correlations are reported thresholded at p < 0.001, corrected for multiple comparisons with FWEc at p < 0.05. As functional connectivity analyses are complementary to the functional activation analyses, we focus only on positive correlations, omitting negative correlations, whose interpretation would be unclear.
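The core computation behind a seed-to-voxel map – correlating a seed time-series with every voxel and Fisher z-transforming the result – is shown below on toy data. This is a bare-bones sketch of the principle, not the CONN Toolbox implementation (which additionally applies the denoising and weighted-GLM steps described above).

```python
import numpy as np

def seed_to_voxel_fisher_z(seed_ts, brain_ts):
    """Pearson r between one seed time-series and every voxel's time-series,
    followed by Fisher's r-to-z transform.
    seed_ts: (n_timepoints,); brain_ts: (n_timepoints, n_voxels)."""
    seed = (seed_ts - seed_ts.mean()) / seed_ts.std()
    vox = (brain_ts - brain_ts.mean(axis=0)) / brain_ts.std(axis=0)
    r = (seed @ vox) / len(seed)   # per-voxel Pearson correlation
    return np.arctanh(r)           # Fisher z

# Toy data: 180 timepoints, 500 voxels, the first 50 track the seed.
rng = np.random.default_rng(3)
seed = rng.normal(size=180)
brain = rng.normal(size=(180, 500))
brain[:, :50] += 0.8 * seed[:, None]
z_map = seed_to_voxel_fisher_z(seed, brain)
print(z_map[:50].mean(), z_map[50:].mean())  # coupled voxels yield higher z
```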

2.9. Lateralization analysis

Lateralization indices (LI) were calculated using the SPM LI toolbox (Wilke & Schmithorst, 2006). The bootstrap approach with default settings was applied, and LIs were calculated within 3 anatomical masks: IFG (including pars triangularis and opercularis), STG, and MTG, created in MarsBaR (Brett et al., 2002) based on the AAL2 atlas (Rolls et al., 2015). These regions were selected because of their significance in language processing across different modalities, including sign language, speech and written language (Rueckl et al., 2015; Trettenbrein et al., 2021). Participants were included in the analyses if they had at least 10 active voxels in the mask in both hemispheres at a voxel threshold of p < 0.1. Based on this criterion, one hearing participant was excluded from the STG analysis. Positive LI values indicate leftward lateralization of activation in a given structure. The LIs of hearing and deaf participants were compared using bias-corrected and accelerated (BCa) bootstrap confidence intervals (CIs) with 1000 resamples (Field, 2013). BCa CIs that do not include zero indicate a significant difference between groups. In an additional post-hoc analysis, participants were categorised as left-lateralized, right-lateralized, or bilateral: LI values below −0.2 were considered indicative of rightward lateralization, values above 0.2 of leftward lateralization, and values between −0.2 and 0.2 of bilateral activation (Seghier, 2008). As there were < 5 cases in several lateralization categories, Fisher’s exact test was used to test for an association between group and lateralization. To investigate the relationship between hemispheric engagement and reading accuracy, a post-hoc correlation analysis of LI and in-scanner performance in the reading condition was computed for each group. Given the lack of normal distribution in some variables, Spearman’s rho was used with BCa bootstrapped confidence intervals. The results of the post-hoc analyses are presented in supplementary materials (1.2 and Table S4).
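The LI itself has a simple closed form, LI = (L − R) / (L + R), computed over activation within homologous left and right masks. The sketch below shows this formula and the ±0.2 categorisation used in the post-hoc analysis; it deliberately omits the LI toolbox’s bootstrap over thresholds, and the toy inputs are ours.

```python
import numpy as np

def lateralization_index(left_activity, right_activity):
    """LI = (L - R) / (L + R); positive values indicate leftward
    lateralization. Inputs may be suprathreshold voxel counts or
    summed t-values within homologous masks."""
    L, R = float(np.sum(left_activity)), float(np.sum(right_activity))
    return (L - R) / (L + R)

def categorise(li, cutoff=0.2):
    """Categorisation following Seghier (2008): |LI| <= 0.2 is bilateral."""
    if li > cutoff:
        return "left-lateralized"
    if li < -cutoff:
        return "right-lateralized"
    return "bilateral"

# Toy example: suprathreshold voxel t-values in left vs. right masks.
li = lateralization_index([3.1, 4.2, 2.8], [1.1, 0.9])
print(round(li, 2), categorise(li))  # 0.67 left-lateralized
```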

3. Results

3.1. Behavioural results

The main effect of condition was significant (F(1,30) = 36.086, p < 0.001, partial η2 = 0.546), showing that accuracy was significantly higher in the control condition (hashtag detection) than in the reading condition. The main effect of group was also significant (F(1,30) = 7.304, p = 0.011, partial η2 = 0.196) – hearing participants had overall higher accuracy than deaf participants. In addition, the two-way interaction between condition and group was significant (F(1,30) = 15.583, p < 0.001, partial η2 = 0.342). The post-hoc analysis revealed that hearing participants had higher accuracy in the reading condition (M = 97.0 %, SD = 4.4 %) when compared to deaf participants (M = 89.4 %, SD = 8.1 %; t(14.941) = −2.973, p = 0.005, Cohen’s d = −1.252, 95 % CIs [−12.937, −2.130], bootstrap: BCa 95 % CIs [−11.764, −3.584]). No significant between-group differences were detected in the control condition (t(30) = 0.148, p = 0.442, Cohen’s d = 0.054, 95 % CIs [−1.703, 1.969]; bootstrap: BCa 95 % CIs [−1.169, 1.557]). Fig. 1 presents accuracy in the reading and control conditions for both groups.

Fig. 1.

Behavioural results for the sentence reading task (semantic judgement). Accuracy scores in the reading condition and control condition for deaf and hearing groups. Significant differences between groups are indicated with an asterisk, p ≤ 0.001.

3.2. Neuroimaging results

3.2.1. Task-based activity

3.2.1.1. Within-group activations.

Contrasts between the reading condition and the control condition were computed for each participant and utilised as input in all group-level (2nd level) models.

One-sample t-tests revealed that deaf readers activated the left MTG and ITG, the left inferior and middle occipital gyri (IOG, MOG), and the left FG. Furthermore, bilateral recruitment of the IFG (pars triangularis and opercularis), cerebellum (crus I and II), supplementary motor area (SMA) and insula was observed (Fig. 2A, Table 1). For hearing participants, we found engagement of the left IFG and the left MTG (Fig. 2B, Table 1).

Fig. 2.

Brain activity during sentence reading (reading condition > control condition) in the deaf group, the hearing group, and the conjunction of the two groups, L – left hemisphere, R – right hemisphere. All clusters at voxel-wise p < 0.001 and cluster-wise p < 0.05 FWEc.

Table 1.

Results from the whole-brain analyses: within-group activations, conjunction, and group comparisons for deaf and hearing readers.

Brain regions | Cluster size | t-value | MNI coordinates (x, y, z)

Deaf readers
Left hemisphere
Middle temporal gyrus, Inferior occipital gyrus, Inferior temporal gyrus, Fusiform gyrus, Middle occipital gyrus | 1038 | 11.19 | −62, −50, 6
Inferior frontal gyrus (pars opercularis, pars triangularis), Precentral gyrus | 572 | 8.53 | −38, 6, 30
Inferior frontal gyrus (pars triangularis) | 272 | 10.51 | −34, 36, 12
Insula | 178 | 7.86 | −32, 26, 2
Cerebellar hemisphere (crus I & II) | 125 | 6.52 | −8, −80, −20
Middle occipital gyrus | 121 | 6.40 | −18, −92, −4
Right hemisphere
Inferior frontal gyrus (pars triangularis, pars opercularis), Middle frontal gyrus | 363 | 15.47 | 42, 30, 24
Supplementary motor area (cluster extending to the left hemisphere) | 182 | 6.08 | 6, 16, 48
Insula | 134 | 6.41 | 34, 24, −4
Cerebellar hemisphere (crus I & II, lobule VI) | 119 | 5.79 | 10, −74, −24
Hearing readers
Left hemisphere
Middle temporal gyrus | 856 | 9.32 | −58, −42, 4
Inferior frontal gyrus (pars triangularis, pars opercularis, pars orbitalis), Insula | 1222 | 7.27 | −50, 30, 2
Conjunction analysis: deaf and hearing readers
Left hemisphere
Middle temporal gyrus | 422 | 6.63 | −60, −48, 6
Inferior frontal gyrus (pars opercularis, pars triangularis) | 276 | 6.12 | −42, 12, 24
Deaf > hearing readers
Left hemisphere
Inferior occipital gyrus, Inferior temporal gyrus, Middle temporal gyrus, Middle occipital gyrus, Fusiform gyrus | 751 | 5.49 | −50, −70, −10
Right hemisphere
Middle frontal gyrus, Inferior frontal gyrus (pars triangularis) | 204 | 8.21 | 42, 32, 24
Inferior temporal gyrus | 180 | 5.60 | 52, −68, −8
Hearing > deaf readers
Right hemisphere
Precuneus | 181 | 4.86 | 16, −64, 36

3.2.1.2. Conjunction analysis.

The conjunction analysis showed overlapping patterns of activation in both groups (reading condition > control condition). The overlap occurred in left perisylvian reading-network areas, including the left pMTG and the IFG pars opercularis and triangularis (Fig. 2C, Table 1).

3.2.1.3. Group differences.

Direct comparison between the deaf and hearing groups (reading condition > control condition) revealed greater activation for deaf readers in the left hemisphere in the IOG and MOG, the ITG, the MTG and the FG (including the vOT, the site of the VWFA). Differences were also observed in the right MFG, the right IFG pars triangularis and the right ITG (Fig. 3A, Table 1). The opposite comparison showed greater activation for hearing readers only in the right precuneus (Fig. 3B, Table 1).

Fig. 3.

Group differences between deaf and hearing during sentence reading (reading condition > control condition), L – left hemisphere, R – right hemisphere. All clusters at voxel-wise p < 0.001 and cluster-wise p < 0.05 FWEc.

3.2.2. Task-based connectivity

Clusters identified in the conjunction analysis – left IFG and MTG – were used in the task-based seed-to-voxel connectivity analysis as seed regions. The analysis revealed significant between-group differences for the contrast hearing readers > deaf readers (reading condition > control condition) in connectivity between the left IFG seed region and voxels within the left MTG (Fig. 4A, Table S3 in supplementary materials). The opposite comparison (deaf > hearing) revealed greater correlation between the left pMTG seed region and voxels within the left insula (Fig. 4B and supplementary materials Table S3). For the results of within-group connectivity for reading and control conditions separately see Figures S1, S2 and Table S3 in supplementary materials.

Fig. 4.

Increased seed-to-voxel functional connectivity of A) left IFG (hearing > deaf) and B) left pMTG (deaf > hearing) seed regions (blue) for the contrast reading condition > control condition. All resulting clusters (green) are significant at voxel-wise p < 0.001 and cluster-wise p < 0.05 FWEc. Bar plots are presented to illustrate the effect size in each of the resulting clusters.

3.2.3. Lateralization analysis

Lateralization index (LI) values were computed within three anatomical regions that are essential for language processing: the IFG (pars triangularis and opercularis), STG, and MTG (Rueckl et al., 2015; Trettenbrein et al., 2021). The results of the group comparisons of lateralization indices are presented in Table 2. The groups differed significantly only in the STG, with the deaf group showing more leftward lateralization than the hearing group.

Table 2.

Bootstrap confidence intervals for the difference in lateralization indices between the hearing and deaf groups.

Region | Mean LI hearing (SD) | Mean LI deaf (SD) | Difference | BCa CI lower limit | BCa CI upper limit

IFG | 0.47 (0.27) | 0.36 (0.39) | −0.11 | −0.36 | 0.12
MTG | 0.64 (0.17) | 0.61 (0.22) | −0.03 | −0.18 | 0.11
STG | 0.10 (0.37) | 0.54 (0.26) | 0.44 | 0.23 | 0.64

4. Discussion

In this study we used whole-brain task-related neural activity, task-based functional connectivity, and lateralization analyses to investigate the neural markers of short sentence reading in deaf readers compared to hearing readers. All deaf participants were native signers raised in a fully accessible linguistic environment with deaf parent(s) using sign language, a group comprising only around 5 % of the deaf population.

At the whole-brain level, both groups showed similar and overlapping neural activity in brain regions classically described as the ventral and dorsal reading pathways: the left MTG and left IFG (pars triangularis and opercularis), regions associated with semantic processing (Binder et al., 2009; Bookheimer, 2002). Neural activation in these canonical areas of the reading network has also been observed in both children (Chyl et al., 2021) and adults (Rueckl et al., 2015) who speak highly contrasting languages. Similarities in brain activation may arise because the reading network is constrained by the organisation of the language network, which through literacy training starts to process print as well as speech (Chyl et al., 2018). Our results are consistent with studies reporting that the left fronto-temporal pathway, including the left IFG and MTG, is activated by both hearing and deaf readers when they read sentences (Hirshorn et al., 2014; Moreno et al., 2018).

Despite the overlap in the core nodes of the semantic network between deaf and hearing readers, differences in brain activation and task-based connectivity patterns indicated that deaf and hearing individuals exhibit a diverging organisation of the reading circuitry. Greater activation for deaf readers compared to hearing readers was observed in the left ventral occipitotemporal cluster. This greater activity of the left vOT in deaf readers stands in contrast to previous studies (Aparicio et al., 2007; Emmorey et al., 2013; Waters et al., 2007). The role of the left vOT in reading has been defined as integrating visual input with high-level language processing, with engagement dependent on attention and task demands (Price, 2012; Price & Devlin, 2011). It is also an area important for lexical-level processing (Glezer et al., 2009; Kronbichler et al., 2007). In the present study, the accuracy difference suggests that the Semantic Judgement Task was more demanding for deaf than for hearing readers, which might have resulted in greater engagement of the left vOT in deaf readers. It is possible that previous studies did not capture differences in vOT activation between less-skilled deaf readers and more-skilled hearing readers because less demanding lexical and semantic word-level tasks were used (Aparicio et al., 2007; Waters et al., 2007). Deaf readers also recruited the right IFG pars triangularis and the right MFG to a greater extent than hearing readers. These regions have been associated with various executive functions, including inhibition, attention control and interference suppression (Hartwigsen et al., 2019). The right IFG and MFG have been suggested to be part of the ventral attention network (Corbetta et al., 2008; Japee et al., 2015) and are activated when participants perform executively demanding semantic tasks (Noonan et al., 2013). Their increased activation during reading in deaf readers is in line with previous studies (Aparicio et al., 2007; Li et al., 2014) and might be related to greater task difficulty.

Contrary to our hypothesis based on previous research using a sentence reading task (Hirshorn et al., 2014), we did not observe significant differences between deaf and hearing readers in the engagement of the STG. This might be due to differences in the complexity of the reading tasks between the two studies: Hirshorn and colleagues used long, elaborate sentences, while our task required processing of short two-word phrases. Moreover, previous research on word-level reading did not detect greater activity in the STG in deaf compared to hearing readers (Emmorey et al., 2013; Li et al., 2014). Hence, future research is needed to investigate more precisely whether increasing the complexity of the stimuli used in a reading task modulates the level of activation in the auditory cortex of deaf readers.

Additionally, we performed connectivity analyses to investigate whether similar neural coupling occurs during short sentence reading between the core nodes of the reading network common to deaf and hearing participants (identified in the conjunction analysis) and other, anatomically remote, brain regions. Hearing readers showed greater correlation between the left IFG seed region and the left MTG compared to deaf readers. This result is in line with Hirshorn et al. (2014), who analysed two separate seeds, the IFG pars triangularis and pars opercularis, and found that both exhibited stronger connectivity with the left pMTG in hearing readers than in deaf native signers. The left IFG and the left MTG are crucial components of the Memory-Unification-Control (MUC) model, a general framework for the neural underpinnings of language (Hagoort, 2005; Snijders et al., 2009). According to the MUC model, the left MTG retrieves semantic and syntactic information from memory while the left IFG unifies words into larger structures. Perhaps the increased coupling between these two areas reflects efficient language processing and facilitates lexical-semantic comprehension (see also Xu et al., 2015). In contrast, we found that deaf readers had a greater correlation between the left pMTG seed region and the left insula compared to hearing readers. The insula has been implicated in executive functions such as cognitive flexibility (Jiang et al., 2015), and its increased coupling with the left MTG in deaf signers during sentence reading might reflect hyperconnectivity due to increased cognitive effort. In support of this hypothesis, increased insular involvement during reading has been attributed to increased task difficulty and attentional demands (Achal et al., 2016). Thus, we speculate that the task-based connectivity results are a byproduct of different cognitive processes: while the greater connectivity between the left IFG and MTG reflects domain-specific linguistic processing, the coupling between the pMTG and insula taps domain-general functions.

The comparison of lateralization indices between the groups did not show more rightward asymmetry during reading in deaf readers than in hearing readers. Even though previous research (Aparicio et al., 2007; Li et al., 2014) and the current study showed greater right-hemisphere activation (including the IFG) in deaf readers than in hearing readers at the group level, this result does not necessarily reflect differences in activation in individual participants (Olulade et al., 2020). It might simply indicate that activation in the right hemisphere is not as consistent across hearing readers as it is across deaf readers. However, we observed a larger leftward asymmetry in deaf than in hearing individuals in the STG during reading. This result, which requires replication, might reflect a broad plasticity effect in the right superior temporal cortex (STC) of deaf individuals. Namely, existing evidence suggests that while cross-modal plasticity effects in the left STC of deaf individuals are specific to language and working memory, the right STC undergoes functional plasticity across a variety of tasks, including lower-level visuospatial tasks (Cardin et al., 2013, 2020; Twomey et al., 2017). Therefore, it is plausible that the larger leftward STG asymmetry in the deaf compared to the hearing group is a consequence of a relative disengagement of the right STG from language processing in the former group. No group differences in lateralization indices were found for the MTG and IFG, but in the IFG stronger leftward lateralization was related to better performance in the reading task in both deaf and hearing participants (see supplementary materials: 1.2 and Table S4).

5. Limitations and future directions

There are no normed, standardised tests in Poland for assessing the reading competence of deaf adults. Thus, our study lacks additional measurements of reading ability that would allow analysis of different levels of reading skill across participants: phonological, lexical and syntactic. Such tests would provide a better characterisation of reading competence in hearing and deaf individuals. Future studies would greatly contribute to the field by investigating the relationship between individual differences in reading skill and neural activity patterns. Research may also benefit from incorporating in-scanner reading conditions that involve longer and more complex sentences. While two-word sentences are grammatically complete and contain all the necessary elements, their simplicity limits the exploration of more complex syntactic and semantic constructions. Using longer sentences could provide deeper insights into the neural mechanisms underlying sentence processing. Moreover, although similar sample sizes were reported in previous experiments (Aparicio et al., 2007; Emmorey et al., 2013; Waters et al., 2007), the current findings should be replicated in larger samples.

6. Conclusion

Here, we investigated the neural components of sentence reading in deaf native sign language users and hearing individuals. We observed shared patterns of brain activity in left-hemispheric perisylvian regions – the classical reading network – together with increased neural activity in left occipitotemporal and right frontotemporal areas in deaf relative to hearing readers. Connectivity analyses revealed enhanced coupling within the semantic network in hearing readers and between regions involved in cognitive control in deaf individuals. These results suggest that reading requires greater cognitive effort and less-automatized top-down processing in deaf readers, for whom the sentence reading task was most likely more challenging than for hearing individuals, as reflected by their lower performance. The analysis of lateralization indices did not support previous findings of more extensive right-hemisphere activation in deaf compared to hearing readers, which might also be due to the lower performance of the deaf individuals in the reading task. In contrast, we found more left-lateralized reading-related activation in the STG in deaf readers, which might reflect hemispheric differences in cross-modal plasticity. Future studies are needed to investigate more precisely the interplay between hemispheric specialisation, cross-modal plasticity, and reading skills in deaf signers.


Acknowledgments

The study was supported by the National Science Centre Poland (2014/14/M/HS6/00918) awarded to AM. AM was additionally supported by the National Science Centre Poland (2018/30/E/HS6/00206). AB was additionally supported by the National Science Centre Poland (2017/27/N/HS6/02722 and 2019/32/T/HS6/00529). JK was supported by the National Science Centre Poland (2020/04/X/HS6/00347, implemented at the Pedagogical University of Cracow). KE was supported in part by a grant from the National Institute on Deafness and Other Communication Disorders (R01 DC014246). We gratefully acknowledge all our participants.

Footnotes

CRediT authorship contribution statement

Justyna Kotowicz: Conceptualization, Formal analysis, Methodology, Visualization, Writing – original draft, Writing – review & editing. Anna Banaszkiewicz: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Visualization, Writing – original draft, Writing – review & editing. Gabriela Dzięgiel-Fivet: Conceptualization, Formal analysis, Methodology, Visualization, Writing – original draft, Writing – review & editing. Karen Emmorey: Conceptualization, Writing – review & editing. Artur Marchewka: Conceptualization, Data curation, Funding acquisition, Methodology, Project administration, Supervision, Writing – review & editing. Katarzyna Jednoróg: Conceptualization, Methodology, Supervision, Writing – review & editing.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Appendix A. Supplementary data

Supplementary data to this article can be found online at https://doi.org/10.1016/j.bandl.2024.105447.

Data availability

Data will be made available on request.

References

1. Achal S, Hoeft F, & Bray S. (2016). Individual differences in adult reading are associated with left temporo-parietal to dorsal striatal functional connectivity. Cerebral Cortex, 26(10), 4069–4081. https://doi.org/10.1093/cercor/bhv214
2. Aparicio M, Gounot D, Demont E, & Metz-Lutz MN (2007). Phonological processing in relation to reading: An fMRI study in deaf readers. NeuroImage, 35(3), 1303–1316. https://doi.org/10.1016/j.neuroimage.2006.12.046
3. Banaszkiewicz A, Bola Ł, Matuszewski J, Szczepanik M, Kossowski B, Mostowski P, Rutkowski P, Śliwińska M, Jednoróg K, Emmorey K, & Marchewka A. (2021a). The role of the superior parietal lobule in lexical processing of sign language: Insights from fMRI and TMS. Cortex, 135, 240–254. https://doi.org/10.1016/j.cortex.2020.10.025
4. Banaszkiewicz A, Matuszewski J, Bola Ł, Szczepanik M, Kossowski B, Rutkowski P, Szwed M, Emmorey K, Jednoróg K, & Marchewka A. (2021b). Multimodal imaging of brain reorganization in hearing late learners of sign language. Human Brain Mapping, 42(2), 384–397. https://doi.org/10.1002/hbm.25229
5. Bavelier D, & Neville HJ (2002). Cross-modal plasticity: Where and how? Nature Reviews Neuroscience, 3(6), 443–452. https://doi.org/10.1038/nrn848
6. Binder JR, Desai RH, Graves WW, & Conant LL (2009). Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies. Cerebral Cortex, 19(12), 2767–2796. https://doi.org/10.1093/cercor/bhp055
7. Bookheimer S. (2002). Functional MRI of language: New approaches to understanding the cortical organization of semantic processing. Annual Review of Neuroscience, 25, 151–188. https://doi.org/10.1146/annurev.neuro.25.112701.142946
8. Cardin V, Orfanidou E, Rönnberg J, Capek CM, Rudner M, & Woll B. (2013). Dissociating cognitive and sensory neural plasticity in human superior temporal cortex. Nature Communications, 4. https://doi.org/10.1038/ncomms2463
9. Cardin V, Grin K, Vinogradova V, & Manini B. (2020). Crossmodal reorganisation in deafness: Mechanisms for functional preservation and functional change. Neuroscience and Biobehavioral Reviews, 113, 227–237. https://doi.org/10.1016/j.neubiorev.2020.03.019
10. Chyl K, Kossowski B, Dębska A, Łuniewska M, Banaszkiewicz A, Żelechowska A, Frost SJ, Mencl WE, Wypych M, Marchewka A, Pugh KR, & Jednoróg K. (2018). Prereader to beginning reader: Changes induced by reading acquisition in print and speech brain networks. Journal of Child Psychology and Psychiatry, and Allied Disciplines, 59(1), 76–87. https://doi.org/10.1111/jcpp.12774
11. Chyl K, Kossowski B, Wang S, Dębska A, Łuniewska M, Marchewka A, Wypych M, Bunt MVD, Mencl W, Pugh K, & Jednoróg K. (2021). The brain signature of emerging reading in two contrasting languages. NeuroImage, 225, Article 117503. https://doi.org/10.1016/j.neuroimage.2020.117503
12. Corbetta M, Patel G, & Shulman GL (2008). The reorienting system of the human brain: From environment to theory of mind. Neuron, 58(3), 306–324. https://doi.org/10.1016/j.neuron.2008.04.017
13. Corina DP, Lawyer LA, Hauser P, & Hirshorn E. (2013). Lexical processing in deaf readers: An fMRI investigation of reading proficiency. PLoS ONE, 8(1). https://doi.org/10.1371/journal.pone.0054696
14. Emmorey K, Weisberg J, McCullough S, & Petrich JAF (2013). Mapping the reading circuitry for skilled deaf readers: An fMRI study of semantic and phonological processing. Brain and Language, 126(2), 169–180. https://doi.org/10.1016/j.bandl.2013.05.001
15. Emmorey K, McCullough S, & Weisberg J. (2016). The neural underpinnings of reading skill in deaf adults. Brain and Language, 160, 11–20. https://doi.org/10.1016/j.bandl.2016.06.007
16. Emmorey K. (2020). Neurobiology of reading differs for deaf and hearing adults. Oxford Handbook of Deaf Studies in Learning and Cognition, 346–359. https://doi.org/10.1093/oxfordhb/9780190054045.013.25
17. Emmorey K, & Lee B. (2021). The neurocognitive basis of skilled reading in prelingually and profoundly deaf adults. Language and Linguistics Compass, 15(2). https://doi.org/10.1111/lnc3.12407
18. Field A. (2013). Discovering statistics using SPSS. Sage.
19. Gizewski ER, Lambertz N, Ladd ME, Timmann D, & Forsting M. (2005). Cerebellar activation patterns in deaf participants for perception of sign language and written text. NeuroReport, 16(17), 1913–1917. https://doi.org/10.1097/01.wnr.0000186592.41587.3e
20. Glezer LS, Xiong J, & Riesenhuber M. (2009). Evidence for highly selective neuronal tuning to whole words in the “Visual Word Form Area”. Neuron, 62(2), 199–204. https://doi.org/10.1016/j.neuron.2009.03.017
21. Hagoort P. (2005). On Broca, brain, and binding: A new framework. Trends in Cognitive Sciences, 9(9), 416–423. https://doi.org/10.1016/j.tics.2005.07.004
22. Hartwigsen G, Neef NE, Camilleri JA, Margulies DS, & Eickhoff SB (2019). Functional segregation of the right inferior frontal gyrus: Evidence from coactivation-based parcellation. Cerebral Cortex, 29(4), 1532–1546. https://doi.org/10.1093/cercor/bhy049
23. Hirshorn EA, Dye MWG, Hauser PC, Supalla TR, & Bavelier D. (2014). Neural networks mediating sentence reading in the deaf. Frontiers in Human Neuroscience, 8, 922–929. https://doi.org/10.3389/fnhum.2014.00394
24. Hoffmeister RJ, & Caldwell-Harris CL (2014). Acquiring English as a second language via print: The task for deaf children. Cognition, 132(2), 229–242. https://doi.org/10.1016/j.cognition.2014.03.014
25. Humphries T, Kushalnagar P, Mathur G, Napoli DJ, Padden C, Rathmann C, & Smith SR (2012). Language acquisition for deaf children: Reducing the harms of zero tolerance to the use of alternative approaches. Harm Reduction Journal, 9(1), 16.
26. Japee S, Holiday K, Satyshur MD, Mukai I, & Ungerleider LG (2015). A role of right middle frontal gyrus in reorienting of attention: A case study. Frontiers in Systems Neuroscience, 9, 1–16. https://doi.org/10.3389/fnsys.2015.00023
27. Jiang J, Beck J, Heller K, & Egner T. (2015). An insula-frontostriatal network mediates flexible cognitive control by adaptively predicting changing control demands. Nature Communications, 6, 1–11. https://doi.org/10.1038/ncomms9165
28. Kronbichler M, Bergmann J, Hutzler F, Staffen W, Mair A, Ladurner G, & Wimmer H. (2007). Taxi vs. taksi: On orthographic word recognition in the left ventral occipitotemporal cortex. Journal of Cognitive Neuroscience, 19(10), 1584–1594. https://doi.org/10.1162/jocn.2007.19.10.1584
29. Laszlo S, & Sacchi E. (2015). Individual differences in involvement of the visual object recognition system during visual word recognition. Brain and Language, 145–146, 42–52. https://doi.org/10.1016/j.bandl.2015.03.009
30. Li Y, Peng D, Liu L, Booth JR, & Ding G. (2014). Brain activation during phonological and semantic processing of Chinese characters in deaf signers. Frontiers in Human Neuroscience, 8. https://doi.org/10.3389/fnhum.2014.00211
31. Mandera P, Keuleers E, Wodniecka Z, & Brysbaert M. (2015). Subtlex-pl: Subtitle-based word frequency estimates for Polish. Behavior Research Methods, 47(2), 471–483. https://doi.org/10.3758/s13428-014-0489-4
32. Mitchell RE, & Karchmer MA (2004). Chasing the mythical ten percent: Parental hearing status of deaf and hard of hearing students in the United States. Sign Language Studies, 4(2), 138–163. https://doi.org/10.1353/sls.2004.0005
33. Moreno A, Limousin F, Dehaene S, & Pallier C. (2018). Brain correlates of constituent structure in sign language comprehension. NeuroImage, 167, 151–161. https://doi.org/10.1016/j.neuroimage.2017.11.040
34. Noonan K, Jefferies E, & Visser M. (2013). Going beyond inferior prefrontal involvement in semantic control: Evidence for the additional contribution of dorsal angular gyrus and posterior middle temporal cortex. Journal of Cognitive Neuroscience, 25(11), 1824–1850.
35. Olulade OA, Seydell-Greenwald A, Chambers CE, Turkeltaub PE, Dromerick AW, Berl MM, Gaillard WD, & Newport EL (2020). The neural basis of language development: Changes in lateralization over age. Proceedings of the National Academy of Sciences of the United States of America, 117(38), 23477–23483. https://doi.org/10.1073/pnas.1905590117
36. Pallier C, Devauchelle AD, & Dehaene S. (2011). Cortical representation of the constituent structure of sentences. Proceedings of the National Academy of Sciences of the United States of America, 108(6), 2522–2527. https://doi.org/10.1073/pnas.1018711108
37. Price CJ (2012). A review and synthesis of the first 20 years of PET and fMRI studies of heard speech, spoken language and reading. NeuroImage, 62(2), 816–847. https://doi.org/10.1016/j.neuroimage.2012.04.062
38. Price CJ, & Devlin JT (2011). The Interactive Account of ventral occipitotemporal contributions to reading. Trends in Cognitive Sciences, 15(6), 246–253. https://doi.org/10.1016/j.tics.2011.04.001
39. Rolls ET, Joliot M, & Tzourio-Mazoyer N. (2015). Implementation of a new parcellation of the orbitofrontal cortex in the automated anatomical labeling atlas. NeuroImage, 122, 1–5. https://doi.org/10.1016/j.neuroimage.2015.07.075
40. Rueckl JG, Paz-Alonso PM, Molfese PJ, Kuo WJ, Bick A, Frost SJ, Hancock R, Wu DH, Mencl WE, Duñabeitia JA, Lee JR, Oliver M, Zevin JD, Hoeft F, Carreiras M, Tzeng OJ, Pugh KR, & Frost R. (2015). Universal brain signature of proficient reading: Evidence from four contrasting languages. Proceedings of the National Academy of Sciences of the United States of America, 112(50), 15510–15515. https://doi.org/10.1073/pnas.1509321112
41. Seghier ML (2008). Laterality index in functional MRI: Methodological issues. Magnetic Resonance Imaging, 26(5), 594–601. https://doi.org/10.1016/j.mri.2007.10.010
42. Shaywitz SE, & Shaywitz BA (2005). Dyslexia (specific reading disability). Biological Psychiatry, 57(11), 1301–1309. https://doi.org/10.1016/j.biopsych.2005.01.043
43. Snijders TM, Vosse T, Kempen G, Van Berkum JJA, Petersson KM, & Hagoort P. (2009). Retrieval and unification of syntactic structure in sentence comprehension: An fMRI study using word-category ambiguity. Cerebral Cortex, 19(7), 1493–1503. https://doi.org/10.1093/cercor/bhn187
44. Trettenbrein PC, Papitto G, Friederici AD, & Zaccarella E. (2021). Functional neuroanatomy of language without speech: An ALE meta-analysis of sign language. Human Brain Mapping, 42(3), 699–712. https://doi.org/10.1002/hbm.25254
45. Twomey T, Waters D, Price CJ, Evans S, & Macsweeney M. (2017). How auditory experience differentially influences the function of left and right superior temporal cortices. Journal of Neuroscience, 37(39), 9564–9573. https://doi.org/10.1523/JNEUROSCI.0846-17.2017
46. Waters D, Campbell R, Capek CM, Woll B, David AS, McGuire PK, Brammer MJ, & MacSweeney M. (2007). Fingerspelling, signed language, text and picture processing in deaf native signers: The role of the mid-fusiform gyrus. NeuroImage, 35(3), 1287–1302. https://doi.org/10.1016/j.neuroimage.2007.01.025
47. Whitfield-Gabrieli S, & Nieto-Castanon A. (2012). Conn: A functional connectivity toolbox for correlated and anticorrelated brain networks. Brain Connectivity, 2(3), 125–141. https://doi.org/10.1089/brain.2012.0073
48. Wilke M, & Schmithorst VJ (2006). A combined bootstrap/histogram analysis approach for computing a lateralization index from neuroimaging data. NeuroImage, 33(2), 522–530.
49. Xu J, Wang J, Fan L, Li H, Zhang W, Hu Q, & Jiang T. (2015). Tractography-based parcellation of the human middle temporal gyrus. Scientific Reports, 5, 18883. https://doi.org/10.1038/srep18883
