Author manuscript; available in PMC: 2022 Feb 1.
Published in final edited form as: Lang Linguist Compass. 2021 Feb 26;15(2):e12407. doi: 10.1111/lnc3.12407

The neurocognitive basis of skilled reading in prelingually and profoundly deaf adults

Karen Emmorey 1,2, Brittany Lee 1,2

Abstract

Deaf individuals have unique sensory and linguistic experiences that influence how they read and become skilled readers. This review presents our current understanding of the neurocognitive underpinnings of reading skill in deaf adults. Key behavioural and neuroimaging studies are integrated to build a profile of skilled adult deaf readers and to examine how changes in visual attention and reduced access to auditory input and phonology shape how they read both words and sentences. Crucially, the behaviours, processes, and neural circuitry of deaf readers are compared to those of hearing readers with similar reading ability to help identify alternative pathways to reading success. Overall, sensitivity to orthographic and semantic information is comparable for skilled deaf and hearing readers, but deaf readers rely less on phonology and show greater engagement of the right hemisphere in visual word processing. During sentence reading, deaf readers process visual word forms more efficiently and may have a greater reliance on and altered connectivity to semantic information compared to their hearing peers. These findings highlight the plasticity of the reading system and point to alternative pathways to reading success.

1 |. INTRODUCTION

Reading is a complex process that differs for deaf and hearing adults because of their distinct sensory and linguistic experiences. Changes in visual processing that stem from a lack of auditory input during development (e.g., enhanced visual attention in the periphery; see Pavani & Bottari, 2012, for review) and less robust phonological representations that result from reduced access to auditory speech can both alter the nature of the reading process. This review presents our current understanding of the neurocognitive underpinnings of reading skill in adults who were born severely-to-profoundly deaf (or became deaf during infancy) and who were exposed to a sign language in early childhood. Our review examines the end state of the reading process (i.e., adult readers), and we focus on deaf signers because (a) they are less likely to have experienced language deprivation during childhood (e.g., Hall et al., 2019; Humphries et al., 2012), which may have distinct effects on reading acquisition, and (b) they are less likely to access phonological codes when reading compared to deaf adults who acquired only a spoken language (e.g., Hirshorn et al., 2015; Koo et al., 2008). Thus, prelingually deaf signers are more likely to achieve reading success through alternative pathways compared to hearing speakers. To explore this issue, we focus primarily on studies that involve skilled deaf readers whose reading levels are matched with those of hearing peers.1

Skilled reading depends upon efficient recognition of visually encountered words, which frees up resources for sentence-level comprehension. Conversely, slow or effortful retrieval of low-quality word representations places demands on processing resources that could otherwise be dedicated to comprehension (Perfetti, 2007; Perfetti & Hart, 2001). Thus, we first discuss whether the neurocognitive processes that support visual word recognition in skilled deaf readers are the same as in hearing readers, examining phonological, orthographic and semantic processing. In the third section, we explore whether and how reduced access to auditory phonology and changes in visual attention associated with congenital deafness impact sentence-level reading for skilled deaf readers.

2 |. WORD-LEVEL READING

Efficient word identification is thought to involve the rapid and automatic retrieval of high-quality lexical representations (e.g., Perfetti, 2007; Perfetti & Hart, 2001). Skilled word reading in the hearing population involves accessing robust phonological, orthographic, and semantic representations that are supported by a distributed neural network (e.g., Dehaene, 2009; Pugh et al., 2001). To understand the nature of word processing in deaf adults, we examine the nature of each of these representations, their neural correlates, and how they each relate to reading ability.

2.1 |. Speech-based phonological processing in skilled deaf readers

Speech-based phonological awareness is a strong predictor of reading ability for hearing individuals (e.g., Elbro, 1996; Seidenberg, 2017). Deaf individuals can develop varying degrees of phonological knowledge depending on their language experience (Hirshorn et al., 2015) and may even employ phonological strategies in memory, rhyme detection, and pronunciation tasks (Hanson & McGarr, 1989; Perfetti & Sandak, 2000; Sehyr et al., 2017). However, it has been hotly debated whether deaf individuals need to access phonological codes in order to become successful readers (Easterbrooks, 2008; Hanson & Fowler, 1987; Paul et al., 2009; Perfetti & Sandak, 2000; Wang et al., 2008) or not (Bélanger, Baum, et al., 2012; Bélanger et al., 2013; Cripps et al., 2005; Izzo, 2002; Mayberry et al., 2011; Miller & Clark, 2011).

Several recent studies suggest that skilled adult deaf readers do not automatically access phonological representations during visual word recognition, in contrast to hearing readers. Using lexical decision and a masked priming paradigm, Bélanger, Slattery, et al. (2012) found that phonological overlap between non-word primes and target words did not impact target word recognition for either skilled or less-skilled deaf readers, in contrast to hearing readers. That is, hearing but not deaf readers exhibited greater priming with French pseudohomophone primes (baur—BORD) compared to orthographically similar, non-homophone primes (boin—BORD). Similarly, an eye-tracking study by Bélanger et al. (2013) revealed that neither skilled nor less-skilled deaf readers activated phonological representations when reading, in contrast to hearing readers. Specifically, only hearing readers showed evidence of a phonological preview benefit. For deaf readers, viewing a homophone (e.g., mail for male) in the parafovea did not influence gaze behaviour any more than seeing an orthographically similar preview word (e.g., beard for board). In both studies, reading skill in the deaf group did not predict use of phonological codes.
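
To make the logic of these priming comparisons concrete, the following is a minimal sketch of how a pseudohomophone priming effect can be computed from trial-level lexical decision data. The data frame, column names, and response times are invented for illustration; they are not the materials or results of Bélanger, Slattery, et al. (2012).

```python
import pandas as pd

# Invented trial-level data: prime type is either a pseudohomophone of the
# target (e.g., baur-BORD) or an orthographically matched control (boin-BORD).
trials = pd.DataFrame({
    "subject": [1, 1, 1, 1, 2, 2, 2, 2],
    "prime_type": ["pseudohomophone", "orthographic_control"] * 4,
    "rt_ms": [612, 648, 590, 655, 701, 733, 688, 740],
    "correct": [True] * 8,
})

# Average correct-response RTs per subject and prime type.
means = (
    trials[trials["correct"]]
    .groupby(["subject", "prime_type"])["rt_ms"]
    .mean()
    .unstack("prime_type")
)

# Phonological priming effect: the RT advantage for pseudohomophone-primed
# targets. Hearing readers typically show a positive effect; in these
# studies, deaf readers did not.
means["priming_effect_ms"] = (
    means["orthographic_control"] - means["pseudohomophone"]
)
print(means)
```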

Results from studies with deaf readers of Spanish are mixed. Fariña et al. (2017) found that the typical pseudohomophone effect in lexical decision (i.e., slower response times for non-words that sound like real words, such as brane) did not occur for skilled deaf adult readers of Spanish, unlike their hearing peers. In contrast, Gutierrez-Sigut et al. (2017) found evidence from event-related potentials (ERPs) for pseudohomophone priming in deaf readers, suggesting automatic access to phonological codes. However, the size of the pseudohomophone priming effect was not correlated with reading ability for deaf readers, unlike hearing readers for whom the correlation was significant. Similarly, several studies have now reported that phonological skills are not associated with reading ability in deaf signers (Bélanger, Baum, et al., 2012; Emmorey et al., 2017; Hirshorn et al., 2015; Mayberry et al., 2011; Sehyr et al., 2017). Together, these results support the contention that access to phonological representations is not a marker of skilled reading in these deaf adults.

When asked to make overt phonological judgments (e.g., about rhymes, syllable counts, or tests of phonological awareness), deaf readers typically perform worse than hearing readers with similar reading abilities (e.g., Gutierrez-Sigut et al., 2017; Hirshorn et al., 2015; Koo et al., 2008; MacSweeney et al., 2013). Nonetheless, the neural regions that support phonological processing appear to be similar for deaf and hearing readers. For example, during speech-based phonological tasks, several fMRI studies have found that inferior parietal cortex is engaged for both deaf and hearing readers, but deaf readers exhibit greater activation in this region (e.g., Aparicio et al., 2007; Emmorey et al., 2013; Li et al., 2014). Increased parietal activation could have occurred because the phonological tasks were more difficult for the deaf than the hearing readers (i.e., more neural resources were required). In fact, when MacSweeney et al. (2009) compared subsets of deaf and hearing participants who performed similarly on a rhyme judgment task (using picture stimuli rather than word stimuli), both groups engaged parietal cortex to an equal degree; no activation differences were observed between groups in this region.

However, MacSweeney et al. (2009) found that deaf readers with good rhyme judgment abilities exhibited increased activation in the left posterior inferior frontal cortex (IFC) compared to their hearing peers. Similarly, Emmorey et al. (2016) found that deaf readers who performed better on a syllable counting task exhibited greater activation in left posterior IFC. This region is known to be involved in the articulatory coding of speech and may therefore be recruited to a greater extent by deaf compared to hearing individuals when making explicit phonological judgments.

In addition, Glezer et al. (2016) recently found that for typical hearing readers, a region within the temporal-parietal cortex is finely tuned to phonological representations when reading single words. However, this region showed only weak selectivity to phonology in skilled deaf readers (Glezer et al., 2018). Glezer et al. (2018) concluded that deaf readers activate phonology when making phonological decisions about written words, but they rely on more coarse-grained representations compared to hearing readers. Therefore, fine-grained phonological representations are not necessary for reading success.

In sum, deaf individuals appear to develop and use coarse-grained phonological representations for some tasks, but accessing them does not appear to be an automatic or requisite component of skilled reading as it is for hearing readers.

2.2 |. Orthographic processing in skilled deaf readers

With mounting evidence that speech-based phonology is not a determinant of reading ability for deaf adults, the role of orthography may be particularly important to their reading success. Just as pseudohomophone studies can probe access to phonological representations, experiments with transposed-letter (TL) non-words (e.g., tosat from toast) assess the quality of orthographic representations. TL effects tap into the flexibility (or stability) of letter position coding in orthographic representations. Hearing readers make slower, less accurate lexical decisions to TL non-words compared to control non-words; that is, TL non-words are more likely to be misidentified as real words. While pseudohomophones are mistaken for real words because of shared phonological representations, TL non-words are mistaken for real words because they activate orthographic lexical representations due to flexible letter position coding. The size of the TL non-word effect is taken as an indication of the precision of orthographic representations. Recently, Fariña et al. (2017) found similar TL effects for both deaf and hearing Spanish readers. That is, both groups were slower and less accurate at rejecting TL non-words (e.g., mecidina from medicina) than replaced-letter control non-words (e.g., mesifina from medicina). Deaf readers may not need to activate phonological codes, but they appear to be as sensitive to orthographic codes as hearing readers.

In addition, TL priming effects are taken as evidence for less precise orthographic coding of letter position (e.g., Ktori et al., 2014; Perea & Lupker, 2004). Specifically, word targets preceded by TL primes (e.g., chikcen–CHICKEN) elicit faster lexical decision responses than those preceded by substitution primes (e.g., chidven–CHICKEN). If letters were assigned specific positions within a word, TL and substitution primes should facilitate target word recognition equally. Meade et al. (2020) used TL priming and ERPs to investigate whether deaf readers’ reduced access to spoken phonology reduces orthographic precision. More flexible (i.e., less precise) representations should be more susceptible to activation by TL primes, resulting in larger TL priming effects. However, Meade et al. (2020) found that the size of the TL priming effects (smaller amplitude negativities and faster response times for targets with TL primes compared to controls) was the same for deaf and hearing readers (who were matched on spelling ability). These findings indicate that phonological ‘tuning’ is not required to represent orthographic information in a precise manner.
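
As an illustration of how TL and matched substitution primes can be constructed, here is a short sketch. The transposition position and the consonant/vowel-preserving replacement rule are assumptions for illustration, not the stimulus-construction procedure of Meade et al. (2020).

```python
import random

VOWELS = set("aeiou")
CONSONANTS = set("bcdfghjklmnpqrstvwxyz")

def transposed_letter_prime(word: str, i: int) -> str:
    """Swap adjacent letters at positions i and i+1 (chicken -> chikcen)."""
    letters = list(word)
    letters[i], letters[i + 1] = letters[i + 1], letters[i]
    return "".join(letters)

def substitution_prime(word: str, i: int) -> str:
    """Replace the letters at positions i and i+1 with different letters of
    the same consonant/vowel status (chicken -> e.g., chidven)."""
    letters = list(word)
    for j in (i, i + 1):
        pool = VOWELS if letters[j] in VOWELS else CONSONANTS
        letters[j] = random.choice(sorted(pool - {letters[j]}))
    return "".join(letters)

print(transposed_letter_prime("chicken", 3))  # chikcen
print(substitution_prime("chicken", 3))       # e.g., chidven
```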

Neuroimaging studies of the visual word form area (VWFA) can also be used to examine orthographic tuning and sensitivity. This brain region, located in the ventral occipito-temporal cortex, preferentially responds to written words and encodes abstract orthographic representations (e.g., the neural response is insensitive to case, font or script; see Dehaene & Cohen, 2011, for review). As hearing children learn to read, this region becomes tuned to print, with better readers exhibiting stronger activation. Adults who are illiterate do not show increased neural activity in the VWFA when viewing written words compared to similar non-word visual stimuli, but VWFA activation increases as literacy is acquired.

Thus far, most studies have found no difference in VWFA activation between deaf and hearing readers (Aparicio et al., 2007; Emmorey et al., 2013; Wang et al., 2015; Waters et al., 2007). The location and extent of activation within the VWFA when recognizing visual words is similar for both groups, but how this region connects to other brain areas differs. Wang et al. (2015) found that the functional connectivity between the VWFA and auditory speech areas in the left superior temporal cortex was reduced for congenitally deaf readers, but the connectivity between the VWFA and the frontal and parietal regions of the reading circuit was similar for deaf and hearing readers. The authors concluded that auditory speech experience does not significantly alter the location or response strength of the VWFA.

However, there are mixed results with respect to whether reading skill impacts neural activity within the VWFA for deaf adults. Corina et al. (2013) found greater activation for more proficient deaf readers, while Emmorey et al. (2016) found no difference between skilled and less-skilled deaf readers in this region. Studies that compared deaf readers with more skilled hearing readers have also reported no group differences in activation in this region (Aparicio et al., 2007; Wang et al., 2015; Waters et al., 2007). These latter findings suggest that a primary marker of poor reading in hearing individuals, namely reduced activation in the VWFA (e.g., Hoeft et al., 2007), may not constitute an indicator of reading skill for deaf individuals. Rather, Emmorey et al. (2016) found that better reading ability in deaf adults was associated with increased neural activation in a region that was in front of (anterior to) the VWFA when deaf readers made semantic decisions about words. Emmorey et al. (2016) speculated that this region may be involved in mapping orthographic word-level representations onto semantic representations (e.g., Purcell et al., 2014) and that better deaf readers may have stronger or more finely tuned links between orthographic and semantic lexical representations. Better deaf readers may activate this interface more consistently or to a greater extent than less-skilled deaf readers when reading for meaning.

Glezer et al. (2018) recently examined whether the same type of neural tuning to orthography in the VWFA that has been observed in typical hearing readers (Glezer et al., 2009) is also found in skilled deaf readers. Glezer et al. (2018) used an fMRI rapid adaptation paradigm to investigate whether orthographic neural tuning occurs in the absence of robust auditory phonological input during development. The results revealed that as with hearing readers, neurons in the VWFA of skilled deaf readers were tightly tuned to known written whole words. This finding indicates that the nature of orthographic tuning in the VWFA is not altered by imprecise phonological representations.
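
The inferential logic of rapid adaptation is that a region tightly tuned to whole words should ‘release’ from adaptation when the two words in a pair differ, responding more strongly than when a word repeats. A deliberately simplified sketch of that contrast follows; the response values and the particular index formula are illustrative assumptions, not the analysis of Glezer et al. (2018).

```python
# Invented per-condition BOLD responses (percent signal change) in an ROI:
# 'same' = the word repeats within a trial pair; 'different' = two distinct words.
response_same = 0.42
response_different = 0.71

# Tight tuning predicts release from adaptation (different >> same), giving a
# positive index; a broadly tuned region responds similarly in both conditions.
adaptation_index = (response_different - response_same) / response_different
print(f"adaptation index: {adaptation_index:.2f}")
```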

In contrast to what has been found for typical hearing readers, however, Glezer et al. (2018) found that skilled deaf readers also exhibited evidence of orthographic tuning in the VWFA in the right hemisphere. For hearing readers, it is often not even possible to identify the VWFA in the right hemisphere (the VWFA is typically identified through a separate localizer scan which contrasts words with other visual stimuli). For example, only two out of 20 participants demonstrated a right VWFA in Baker et al. (2007) and only 14 out of 34 in Glezer et al. (2015). The right VWFA was identified in 10 out of the 12 skilled deaf readers in the Glezer et al. (2018) study. Thus, word reading for deaf—but not hearing—adults appears to engage the right VWFA. Interestingly, recruitment of the right hemisphere also appears to support compensatory and protective mechanisms for dyslexic readers. Wang et al. (2017) found that children with a familial risk for dyslexia who became good readers showed faster white matter development in the right superior longitudinal fasciculus (a fibre tract that connects brain regions involved in reading) compared to those who subsequently became poor readers (see also Hoeft et al., 2011). In addition, dyslexic readers who improved with reading interventions had increased activation in right-hemisphere regions compared to those who did not respond to the treatment (Barquero et al., 2014; Yu et al., 2018). Thus, right-hemisphere pathways may be important for achieving reading success in deaf readers (see also Li et al., 2014) as well as hearing readers with (or at risk for) dyslexia.

Possibly related to the VWFA, the N170 ERP response is elicited by written words and word-like stimuli and has been studied extensively in hearing readers. The N170 is left-lateralized in hearing adults and appears to index fine-tuning of orthographic representations, as the N170 amplitude is larger to words than to other visual stimuli, such as symbol strings (e.g., Maurer et al., 2005; Rossion et al., 2003). One possible explanation for the left-lateralization of the N170 in hearing readers is the phonological mapping hypothesis which proposes that the emergence of left-hemisphere processing of visual words is the result of linking written words to left-hemisphere auditory language regions in order to map orthographic onto phonological representations when learning to read (McCandliss & Noble, 2003). Recently, Sacchi and Laszlo (2016) provided support for this hypothesis by showing that in hearing children (ages 10–11 years) the degree of left lateralization of the N170 to words was predicted by phonological awareness (but not by vocabulary size).

Emmorey et al. (2017) tested the phonological mapping hypothesis by examining the laterality of the N170 response in deaf and hearing adults who were matched on reading ability but who differed in phonological awareness, with the deaf readers performing significantly worse on the Hirshorn et al. (2015) tests of phonological ability. Hearing readers exhibited the expected left-lateralized N170 (greater negativity for words than symbol strings), while deaf readers exhibited a much more bilateral N170 response at temporal electrode sites. This result supports the hypothesis that the left-lateralized N170 to words at sites near auditory language regions in hearing readers arises from the developmental process of consistently mapping orthographic representations to precise auditory-based phonological representations of speech.

Importantly, linear mixed-effects regression analyses revealed that the relation between reading ability and N170 amplitude differed for deaf and hearing readers. Better reading ability was associated with a larger right-hemisphere N170 over occipital sites for deaf readers, but for hearing readers, better reading ability was associated with a smaller right-hemisphere N170. Since the deaf and hearing groups were matched on reading ability, this finding suggests that the optimal neural dynamics of visual word recognition differ for skilled deaf and hearing readers. Recruitment of the right hemisphere has been argued to be maladaptive for typical hearing readers because the right hemisphere may process words more like visual objects, thus resulting in less differentiated orthographic representations (Laszlo & Sacchi, 2015). For deaf readers, in contrast, recruitment of the right hemisphere appears to be beneficial. In addition, Emmorey et al. (2017) and Sehyr et al. (2020) found that better spelling ability was associated with a larger right-hemisphere N170 for deaf readers, but not for hearing readers. Thus, recruitment of right-hemisphere regions is not indicative of poorly specified orthographic representations in deaf readers, possibly because orthographic representations are not fine-tuned by left-lateralized phonological mappings.
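
A minimal sketch of this kind of linear mixed-effects analysis, written with statsmodels on synthetic data, is shown below. The column names, model formula, and random-effects structure are illustrative assumptions rather than the exact specification used by Emmorey et al. (2017).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic long-format data: one row per subject x hemisphere, with a mean
# N170 amplitude (in microvolts), a standardized reading score, and group.
rng = np.random.default_rng(1)
n = 40  # subjects per group
erp = pd.DataFrame({
    "subject": np.repeat(np.arange(2 * n), 2),
    "group": np.repeat(["deaf", "hearing"], 2 * n),
    "hemisphere": ["left", "right"] * (2 * n),
    "reading_z": np.repeat(rng.standard_normal(2 * n), 2),
})
erp["n170_uv"] = -2 + rng.standard_normal(len(erp))

# Random intercepts for subjects; the three-way interaction asks whether the
# reading-ability/amplitude relation depends on group and hemisphere, the
# pattern of interest in Emmorey et al. (2017).
model = smf.mixedlm("n170_uv ~ reading_z * group * hemisphere",
                    data=erp, groups=erp["subject"])
print(model.fit().summary())
```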

Overall, deaf readers are just as sensitive to orthographic codes in word reading as hearing readers (see also Meade et al., 2019). Deaf and hearing readers make use of common neural pathways when reading, for example, recruiting the left inferior frontal gyrus and the VWFA (Emmorey et al., 2013). However, due to reduced access to auditory input and a lack of phonological tuning of orthographic representations, the neural circuits of successful deaf readers are characterized by greater engagement of the right hemisphere for processing visual word forms. Recruitment of the right hemisphere may be a protective factor in deaf readers, as in hearing children at risk for reading disorder. That is, engagement of right-hemisphere structures during reading may help to lower the risk of reading difficulties for these populations. However, to confirm this hypothesis, further research is needed with deaf children and with intervention programs; for example, do struggling deaf readers exhibit increased right-hemisphere activation after successful reading intervention?

2.3 |. Semantic word processing in skilled deaf readers

The role of semantics in word processing for deaf readers has not been studied as much as the role of phonology and orthography. Some studies report that deaf individuals perform worse than hearing individuals on semantic tasks. Such results are often interpreted as showing that the mental lexicons of deaf individuals ‘lag behind’, are less developed, less organized, or less accessible than those of their hearing counterparts (Green & Shepherd, 1975; Marschark et al., 2004; Ormel et al., 2010). However, it is important to note a number of variables at play that may explain negative associations between semantic knowledge and deafness, especially when studies include deaf individuals with highly variable backgrounds. Poor performance on semantic tasks and under-developed semantic networks may be attributed to other factors, including poor reading ability (just as in hearing readers), second language proficiency (as in other bilinguals), and the poor language ability that results from language deprivation and late acquisition of a first language (e.g., Hall et al., 2019).

Studies that carefully control some of these confounding factors can help tease apart what is actually driving poor semantic abilities for some deaf readers. When deaf and hearing readers are matched on their reading ability, they appear similar in how they process single words for meaning. ERP studies have shown behavioural and N400 semantic priming effects for deaf individuals reading English words (MacSweeney et al., 2004; Meade et al., 2017). The N400 is a well-studied ERP component that is sensitive to lexical-semantic processing. Furthermore, a recent ERP study by Gutierrez-Sigut et al. (2019) found strong evidence that lexical-semantic feedback modulates orthographic processing in deaf (and hearing) readers: words, but not pseudowords, modulated the N250 response (an ERP component associated with orthographic processing) in a priming paradigm that manipulated letter case. Correlational analyses suggested that more skilled deaf readers had a stronger connection between lexical-semantic and orthographic levels of processing. fMRI studies involving semantic decisions to single words have also shown that deaf and hearing readers appear to recruit similar neural circuits in left-hemisphere language regions when processing words for meaning (e.g., Emmorey et al., 2013). More studies are needed, but the results thus far suggest that deaf and hearing readers with similar reading abilities demonstrate similar performance, processing, and pathways for semantic tasks.

Some studies have worked to separate semantic effects from second language effects by testing deaf and hearing individuals in their respective first languages (L1). When deaf bilinguals perform semantic tasks in sign language (their L1), their performance is comparable to that of hearing individuals completing tasks with printed or spoken words (Beal-Alvarez & Figueroa, 2017; Kutas et al., 1987; Marshall et al., 2013). Deaf signing children may even have some advantages in early language development; their expressive vocabulary development largely parallels that of hearing speaking children, but first signs emerge before first words and the early sign lexicon includes more verbs (Anderson & Reilly, 2002). Indeed, a number of studies highlight the importance of developing semantic knowledge and vocabulary through sign language, and signing skill correlates with performance on semantic tasks (Chamberlain & Mayberry, 2000; Ormel et al., 2010). These findings highlight the importance of early language exposure for the development of semantic concepts and networks.

A strong foundation in a sign language can support later literacy, as signing skill also correlates with reading proficiency (e.g., Andrew et al., 2014; Chamberlain & Mayberry, 2008; Henner et al., 2016; Novogrodsky et al., 2014; Scott & Hoffmeister, 2017; Strong & Prinz, 1997). Some deaf readers may perform poorly on semantic tasks in English not because they lack semantic knowledge but because they lack English proficiency or the ability to transfer such knowledge from American Sign Language (ASL) to English. Many strategies in deaf education like chaining (associating written words with signs) and fingerspelling (representing English letters with handshapes) aim to build associations between signed and spoken languages in order to facilitate this transfer across language and semantic networks (Humphries & MacDougall, 1999; Padden & Ramsey, 2000).

Finally, studies that compare age of first language acquisition are able to tease apart the effects of language deprivation on semantics. Within the deaf population, world knowledge often differs between deaf children with deaf parents and those with hearing parents (Marschark, 1997). Aside from other benefits to their psychosocial development, deaf children with deaf parents are more likely to learn sign language from birth and receive quality language input from fluent signers, which also supports semantic development. Recent studies by Mayberry and colleagues have shown that late acquisition of a first language alters connectivity to semantic pathways (Cheng et al., 2019; Ramirez et al., 2016; Mayberry et al., 2018). Deaf and hearing adults with early first language exposure show leftward laterality of ventral white matter pathways (associated with language-related processing), regardless of their first language modality. However, deaf individuals with late first language exposure show altered white matter structure and variable activation along these left-hemisphere pathways when performing semantic tasks, perhaps related to variation in their age of first language acquisition or in their non-verbal semantic development. These studies show that late first language acquisition and language deprivation have detrimental effects on developing semantic knowledge and pathways.

In sum, variables such as language deprivation or second language learning may explain some of the reports of poorer semantic abilities in deaf compared to hearing individuals. However, all other factors being equal, semantic processing appears to be largely the same for deaf and hearing readers.

3 |. SENTENCE-LEVEL READING

The studies reviewed thus far have all examined reading at the single word level. However, natural reading typically occurs in full sentences, which provide context for top-down processing that aids in comprehension. Sentence-level reading also allows for multi-word processing, a feature that is especially important for deaf readers given differences in visual processing that influence how they read (Bélanger & Rayner, 2015). Here, we discuss the behavioural and neural signatures of sentence reading in deaf adults compared to hearing adults and with respect to reading ability.

3.1 |. Effects of deafness on eye movements during sentence reading

When reading sentences, both deaf and hearing readers begin processing upcoming words in the parafovea before they are fixated. The issue of parafoveal word processing is especially important for deaf readers because a large body of evidence indicates that early deafness results in a redistribution of attentional resources across the visual periphery (for reviews see Bavelier et al., 2006; Pavani & Bottari, 2012). For example, deaf signers respond faster to visual stimuli presented peripherally (Codina et al., 2017) and allocate more attention to the visual periphery (Dye et al., 2008; Proksch & Bavelier, 2002) compared to hearing non-signers. This enhanced visual attention appears to extend to peripheral vision during reading. Skilled adult hearing readers have a perceptual span of 14–15 characters to the right of fixation (McConkie & Rayner, 1975). By comparison, skilled adult deaf readers have a perceptual span of up to 18 characters to the right of fixation (Bélanger, Slattery, et al., 2012). In short, deaf readers are able to perceive and attend to more letters within a single fixation compared to hearing peers with similar reading ability.
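
Perceptual span estimates like these come from gaze-contingent moving-window experiments (McConkie & Rayner, 1975), in which letters outside a window around the currently fixated character are replaced with a mask that updates with each eye movement. The sketch below shows only the display logic; the 4-character leftward window, the ‘x’ mask, and the preservation of spaces are illustrative assumptions.

```python
def moving_window(text: str, fixation: int, left: int = 4, right: int = 15,
                  mask: str = "x") -> str:
    """Render one fixation's display: letters inside the window are visible,
    letters outside it are masked (word spaces left intact)."""
    return "".join(
        ch if (fixation - left <= i <= fixation + right or ch == " ") else mask
        for i, ch in enumerate(text)
    )

sentence = "The deaf readers perceived more letters within a single fixation."
print(moving_window(sentence, fixation=9, right=15))  # typical hearing span
print(moving_window(sentence, fixation=9, right=18))  # estimated deaf span
```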

Dye et al. (2008) hypothesized that deaf individuals may find increased access to parafoveal information distracting, causing slower processing in the fovea, longer fixations, and slower reading. Bélanger, Slattery, et al. (2012) tested this hypothesis in an eye-tracking experiment that measured common eye movement behaviours during reading: regressions (going back in the text to reread), saccade length (how many characters the eyes pass over between fixations), skipped words (words that are passed over during saccades and not fixated), and refixations (multiple fixations on a single word). Deaf readers’ efficiency at processing visual word forms was seen in their unique eye movement patterns during sentence reading, as skilled deaf readers had fewer regressions, longer forward saccades, more skipped words, and fewer refixations than hearing readers matched for reading ability. Importantly, sentence comprehension was equally good for the deaf and hearing readers. Results from Chinese deaf readers also do not support the hypothesis that increased visual attention in the periphery is disruptive for deaf readers. Yan et al. (2015) found that deaf readers obtained parafoveal semantic information earlier than reading-matched hearing readers.
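
The sketch below illustrates how these four measures can be scored from a sequence of fixations. The Fixation structure and the simplified definitions (e.g., counting only consecutive same-word fixations as refixations) are illustrative assumptions, not the scoring procedure of Bélanger, Slattery, et al. (2012).

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    word_index: int  # which word in the sentence was fixated
    char_index: int  # character position of the fixation in the sentence

def summarize_eye_movements(fixations: list[Fixation], n_words: int) -> dict:
    """Score regressions, mean forward saccade length (in characters),
    refixations, and skipped words from one sentence's fixation sequence."""
    pairs = list(zip(fixations, fixations[1:]))
    regressions = sum(1 for a, b in pairs if b.word_index < a.word_index)
    forward = [b.char_index - a.char_index for a, b in pairs
               if b.char_index > a.char_index]
    refixations = sum(1 for a, b in pairs if b.word_index == a.word_index)
    skipped = n_words - len({f.word_index for f in fixations})
    return {
        "regressions": regressions,
        "mean_forward_saccade": sum(forward) / len(forward) if forward else 0.0,
        "refixations": refixations,
        "skipped_words": skipped,
    }

fixes = [Fixation(0, 1), Fixation(1, 5), Fixation(3, 18), Fixation(2, 12)]
print(summarize_eye_movements(fixes, n_words=5))
```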

These findings led Bélanger and Rayner (2015) to propose the Word Processing Efficiency hypothesis, in which increased access to parafoveal information makes deaf readers more efficient at extracting the orthographic code that supports lexical access. This efficiency was attributed to the visual advantage specific to deafness and was not merely a function of reading skill, as less-skilled deaf readers had comparable perceptual spans to those of skilled hearing readers. An enhanced perceptual span (Bélanger et al., 2018) and efficient eye movements (Bélanger et al., 2014) have also been documented in young deaf readers relative to hearing peers of similar reading ability. Thus, deafness results in early changes in visual attention, which allow for efficient word processing during sentence reading.

3.2 |. Neural underpinnings of sentence reading in deaf and hearing adults

Much less research has been conducted on the neural substrates that support sentence reading in deaf individuals. Mehravari et al. (2017) investigated ERP responses to semantic violations (e.g., The huge house still listens to my aunt), which generate a large N400, and to syntactic violations (e.g., The huge house still belong to my aunt), which generate a P600, a positive-going wave that peaks about 600 ms after the grammatical error. The deaf participants in this study were all prelingually and profoundly deaf and varied in their age of exposure and use of a signed language, although the majority of participants were signers. Overall, the deaf participants had poorer reading comprehension scores than the hearing participants, but more than half of the deaf group performed within the same range as the hearing group. Both deaf and hearing readers exhibited a large N400 response to semantic violations (see also Neville et al., 1992). Similarly, Skotara et al. (2011, 2012) found that deaf native and non-native signers of German Sign Language (DGS) exhibited an N400 response to semantic violations in written German sentences that was identical to that observed in hearing German readers. These results indicate that similar neural processes are involved in semantic processing for deaf and hearing readers at the sentence level. However, Mehravari et al. (2017) found that only the hearing readers exhibited a significant P600 to syntactic violations. Importantly, this pattern also held for the subgroup of skilled deaf readers who were matched with the hearing participants on reading ability.

Furthermore, Mehravari et al. (2017) found that deaf and hearing readers exhibited different relationships between reading skill and sensitivity to semantic and grammatical information, and this differential pattern also held for the subgroup of deaf and hearing participants with similar reading ability. Specifically, the size of the N400 effect was strongly correlated with reading ability for deaf but not hearing individuals. In contrast, the size of the P600 response to syntactic violations was related to reading ability for the hearing but not the deaf participants. Together, these results suggest that equally proficient deaf and hearing readers rely on different types of linguistic information when reading sentences. The best deaf readers rely primarily on semantic information, while the best hearing readers rely on both semantic and syntactic information.
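
ERP effects like these are typically quantified as the mean voltage in a latency window, compared between violation and control sentences. The sketch below shows that computation on synthetic epochs; the 300–500 ms (N400) and 600–800 ms (P600) windows are conventional choices and not necessarily those used by Mehravari et al. (2017).

```python
import numpy as np

def mean_amplitude(epochs: np.ndarray, times: np.ndarray,
                   window: tuple[float, float]) -> np.ndarray:
    """Mean voltage per trial within a latency window, given epoched EEG
    (trials x samples) and a matching vector of sample times in seconds."""
    mask = (times >= window[0]) & (times <= window[1])
    return epochs[:, mask].mean(axis=1)

# Synthetic epochs time-locked to the critical word (500 Hz, -0.2 to 1.0 s).
times = np.arange(-0.2, 1.0, 1 / 500)
violation = np.random.randn(40, times.size)  # stand-ins for real recordings
control = np.random.randn(40, times.size)

n400 = mean_amplitude(violation, times, (0.3, 0.5)).mean() - \
       mean_amplitude(control, times, (0.3, 0.5)).mean()
p600 = mean_amplitude(violation, times, (0.6, 0.8)).mean() - \
       mean_amplitude(control, times, (0.6, 0.8)).mean()
print(f"N400 effect: {n400:.2f} uV, P600 effect: {p600:.2f} uV")
```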

However, age of sign language acquisition may play a role in whether a robust P600 effect is observed for syntactic violations in deaf readers. The small P600 effect observed by Mehravari et al. (2017) may have occurred because most of the participants (90%) were not exposed to ASL in early childhood and thus were at risk for language deprivation. Supporting this possibility, Skotara et al. (2011, 2012) found that deaf native DGS signers exhibited a P600 response for written sentences with German verb agreement violations, which was comparable to the P600 effect observed for hearing readers. Furthermore, non-native deaf signers who were exposed to DGS at the time of school enrolment exhibited much weaker P600 effects (Skotara et al., 2012). Thus, it is possible that the lack of a robust P600 effect in the deaf readers studied by Mehravari et al. could have been due, at least in part, to a lack of accessible language input during early development.

A study by Hirshorn et al. (2014) helped separate effects of deafness and language exposure on semantic processing during sentence reading. Controlling for reading ability across groups, they used fMRI to examine sentence reading in two groups of deaf adults (native signers and ‘oral’ deaf adults who had acquired only a spoken language) and hearing adults (monolingual speakers of English). Sentences were contrasted with ‘false font sentences’ created using the Wingdings font. To ensure participants were reading for meaning, sentences were occasionally followed by a picture, and participants indicated whether it matched the sentence or not (these trials were not analysed). A conjunction analysis identified regions of activation that were greater for sentences compared to false font strings in all three groups. This analysis revealed bilateral activation in the inferior frontal cortices and in temporal cortices (with greater activation on the left).

Significant group differences were observed in just two regions: bilateral superior temporal gyri (STG; specifically, primary and secondary auditory regions) and a region overlapping with the VWFA in inferior temporal cortex. Both deaf groups exhibited greater activation in left and right STG compared to hearing readers, and activation in these regions did not differ between the two deaf groups. This result suggests that absent or reduced auditory input during development leads to reorganization of function within auditory cortices when deaf individuals read, whether they are signers or non-signers (see also Cardin et al., 2016). A regression analysis revealed that greater dB loss was associated with greater activation in left STG auditory cortex, but no behavioural measures or demographic variables were linked to neural activation in right STG. This pattern suggests that increased activation within auditory cortices when deaf people read results from functional neural changes caused by deafness rather than from differences in linguistic abilities, since the groups were matched for reading ability and phonological skill did not modulate activation in these auditory regions.

Hirshorn et al. (2014) also conducted a functional connectivity analysis, which revealed stronger connectivity for deaf readers (both groups) from left auditory cortex to anterior IFC (BA 45), an area that is associated with semantic processing. This finding supports the emerging view that proficient deaf readers rely more on semantic information than their hearing peers (regardless of sign language experience). In contrast, connectivity from left STG to a region associated with speech-based processing (left postcentral gyrus) was greater for both oral deaf and hearing speakers compared to deaf signers. In addition, connectivity from posterior IFG (BA 44; a region associated with phonological processing) to the inferior temporal lobe was greater for both oral deaf and hearing readers compared to deaf signers. This finding indicates that use of speech (by hearing or deaf individuals) strengthens the connection between visual word processing (in inferior temporal cortex) and phonological speech-based processing (in IFC) during reading comprehension. Note that such neural connections are not necessary for reading success since the deaf signers (who did not use speech) were equally skilled readers.
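
At its core, a seed-based functional connectivity measure reduces to correlating the time series of two regions, with a Fisher z-transform before group comparison. The sketch below shows only that reduced form on synthetic data; it is a deliberate simplification and not the connectivity pipeline of Hirshorn et al. (2014).

```python
import numpy as np

def seed_connectivity(seed_ts: np.ndarray, roi_ts: np.ndarray) -> float:
    """Pearson correlation between the mean BOLD time series of a seed
    region and a target ROI, Fisher z-transformed."""
    r = np.corrcoef(seed_ts, roi_ts)[0, 1]
    return float(np.arctanh(r))

# Synthetic mean time series (one value per volume) for two regions,
# e.g., left STG (seed) and anterior IFC (BA 45).
rng = np.random.default_rng(0)
stg = rng.standard_normal(200)
ifc = 0.5 * stg + rng.standard_normal(200)
print(seed_connectivity(stg, ifc))
```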

The sentence-level studies presented here provide further evidence of how reduced auditory input and phonological processing influence reading networks in deaf readers. Deaf readers may rely more heavily on semantic information and orthographic-to-semantic connections during sentence reading compared to hearing readers. More studies controlling for reading ability and age of language acquisition are needed to explore different aspects of sentence reading in deaf individuals, including syntactic processing.

4 |. CONCLUSIONS

This paper reviewed key behavioural and neuroimaging studies looking at how deaf individuals read words and sentences in comparison to hearing readers with similar reading ability. Skilled deaf and hearing readers appear comparable in terms of their orthographic sensitivity and semantic knowledge for word reading. However, deaf readers rely less on phonology and show greater engagement of the right hemisphere during visual word processing. They also process visual word forms more efficiently and may have a greater reliance on and altered connectivity to semantic information during sentence reading.

These findings carry implications for both research and education. Future research involving deaf readers should be more diligent about assessing and reporting reading ability and using that information to match hearing controls. This level of rigor is necessary to distinguish reading deficits that occur in the deaf population but are caused by intervening variables such as language deprivation from reading differences that arise from deafness itself yet support skilled reading. Matching groups on reading skill allows us to focus on identifying alternative but equally effective approaches to reading and to make fair comparisons to hearing readers.

Future research should also apply our growing knowledge of deaf readers and contribute to the evidence base for effective reading interventions for deaf children. The empirical evidence reviewed here can help tailor interventions to deaf students’ needs; it stresses the benefits of early language exposure, less phono-centric approaches to word reading, and more semantic-based comprehension strategies for sentence reading. Researchers and educators can advocate for deaf readers by supporting interventions that acknowledge and leverage their unique abilities to bolster their chances of reading success.

ACKNOWLEDGEMENT

This work was supported by grants from the National Institutes of Health (DC014246) and from the National Science Foundation (BCS 1651372; BCS 1756403).

Funding information

National Science Foundation, Division of Behavioral and Cognitive Sciences, Grant/Award Numbers: BCS 1651372, BCS 1756403; National Institute on Deafness and Other Communication Disorders, Grant/Award Number: R01 DC014246

AUTHOR BIOGRAPHIES

Karen Emmorey is a Professor of Speech, Language, and Hearing Sciences at San Diego State University. She obtained her Ph.D. in Linguistics at UCLA in 1987 and was a Staff Scientist at the Salk Institute for Biological Studies until 2005. Her research focuses on what sign languages and the deaf and hearing people who use them can reveal about the nature of human language, cognition, and the brain.

Brittany Lee is a graduate student in the SDSU/UCSD Joint Doctoral Program in Language and Communicative Disorders, and she expects to receive her Ph.D. in the Spring of 2021. She studies reading and sign language processing using ERP, eye-tracking, and psycholinguistic methods. Her reading research focuses on identifying alternative approaches to achieving reading success.

Footnotes

1

Reading skill in the reviewed studies is typically assessed by standardized tests that are adapted for or are already appropriate for deaf and hearing readers, such as a timed paragraph reading test with multiple-choice questions (e.g., Bélanger et al., 2012), a timed written sentence completion task (Aparicio et al., 2007) or a sentence-picture matching task (from the Peabody Individual Achievement Test; Bélanger et al., 2018; Emmorey et al., 2016; Hirshorn et al., 2014, 2015).

REFERENCES

  1. Anderson D, & Reilly J (2002). The MacArthur communicative development inventory: Normative data for American Sign Language. Journal of Deaf Studies and Deaf Education, 7(2), 83–106. [DOI] [PubMed] [Google Scholar]
  2. Andrew KN, Hoshooley J, & Joanisse MF (2014). Sign language ability in young deaf signers predicts comprehension of written sentences in English. PLoS One, 9(2), e89994. [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Aparicio M, Gounot D, Demont E, & Metz-Lutz MN (2007). Phonological processing in relation to reading: An fMRI study in deaf readers. NeuroImage, 35(3), 1303–1316. [DOI] [PubMed] [Google Scholar]
  4. Baker CI, Liu J, Wald LL, Kwong KK, Benner T, & Kanwisher N (2007). Visual word processing and experiential origins of functional selectivity in human extrastriate cortex. Proceedings of the National Academy of Sciences, 104(21), 9087–9092. [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Barquero LA, Davis N, & Cutting LE (2014). Neuroimaging of reading intervention: A systematic review and activation likelihood estimate meta-analysis. PLoS One, 9(1), e83668. [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Bavelier D, Dye MW, & Hauser PC (2006). Do deaf individuals see better? Trends in Cognitive Sciences, 10(11), 512–518. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Beal-Alvarez JS, & Figueroa DM (2017). Generation of signs within semantic and phonological categories: Data from deaf adults and children who use American Sign Language. The Journal of Deaf Studies and Deaf Education, 22(2), 219–232. [DOI] [PubMed] [Google Scholar]
  8. Bélanger NN, Baum SR, & Mayberry RI (2012). Reading difficulties in adult deaf readers of French: Phonological codes, not guilty! Scientific Studies of Reading, 16(3), 263–285. [Google Scholar]
  9. Bélanger NN, Lee M, & Schotter ER (2018). Young skilled deaf readers have an enhanced perceptual span in reading. The Quarterly Journal of Experimental Psychology, 71(1), 291–301. [DOI] [PubMed] [Google Scholar]
  10. Bélanger NN, Mayberry RI, & Rayner K (2013). Orthographic and phonological preview benefits: Parafoveal processing in skilled and less-skilled deaf readers. The Quarterly Journal of Experimental Psychology, 66(11), 2237–2252. [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Bélanger NN, & Rayner K (2015). What eye movements reveal about deaf readers. Current Directions in Psychological Science, 24(3), 220–226. [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Bélanger NN, Schotter E, & Rayner K (2014, November). Young deaf readers’ word processing efficiency. 55th Meeting of the Psychonomic Society, Long Beach, CA. [Google Scholar]
  13. Bélanger NN, Slattery TJ, Mayberry RI, & Rayner K (2012). Skilled deaf readers have an enhanced perceptual span in reading. Psychological Science, 23(7), 816–823. [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Cardin V, Smittenaar RC, Orfanidou E, Rönnberg J, Capek CM, Rudner M, & Woll B (2016). Differential activity in Heschl’s gyrus between deaf and hearing individuals is due to auditory deprivation rather than language modality. NeuroImage, 124, 96–106. [DOI] [PubMed] [Google Scholar]
  15. Chamberlain C, & Mayberry RI (2000). Theorizing about the relation between American Sign Language and reading. Language Acquisition by Eye, 221–259. [Google Scholar]
  16. Chamberlain C, & Mayberry RI (2008). American Sign Language syntactic and narrative comprehension in skilled and less skilled readers: Bilingual and bimodal evidence for the linguistic basis of reading. Applied Psycholinguistics, 29(3), 367–388. [Google Scholar]
  17. Cheng Q, Roth A, Halgren E, & Mayberry RI (2019). Effects of early language deprivation on brain connectivity: Language pathways in deaf native and late first-language learners of American Sign Language. Frontiers in Human Neuroscience, 13, 320. [DOI] [PMC free article] [PubMed] [Google Scholar]
  18. Codina CJ, Pascalis O, Baseler HA, Levine AT, & Buckley D (2017). Peripheral visual reaction time is faster in deaf adults and British Sign Language interpreters than in hearing adults. Frontiers in Psychology, 8, 50. [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Corina DP, Lawyer LA, Hauser P, & Hirshorn E (2013). Lexical processing in deaf readers: An fMRI investigation of reading proficiency. PLoS One, 8(1), e54696. [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Cripps JH, McBride KA, & Forster KI (2005). Lexical processing with deaf and hearing: Phonology and orthographic masked priming. The Arizona Working Papers in Second Language Acquisition and Teaching, 12, 31–44. [Google Scholar]
  21. Dehaene S (2009). Reading in the brain: The new science of how we read. Penguin. [Google Scholar]
  22. Dehaene S, & Cohen L (2011). The unique role of the visual word form area in reading. Trends in Cognitive Science, 15, 254–262. [DOI] [PubMed] [Google Scholar]
  23. Dye MW, Hauser PC, & Bavelier D (2008). Visual skills and cross-modal plasticity in deaf readers. Annals of the New York Academy of Sciences, 1145(1), 71–82. [DOI] [PMC free article] [PubMed] [Google Scholar]
  24. Easterbrooks SR (2008). Knowledge and skills for teachers of individuals who are deaf and hard of hearing: Advanced set development. Communication Disorders Quarterly, 30(1), 37–48. [Google Scholar]
  25. Elbro C (1996). Early linguistic abilities and reading development: A review and a hypothesis. Reading and Writing, 8(6), 453–485. [Google Scholar]
  26. Emmorey K, McCullough S, & Weisberg J (2016). The neural underpinnings of reading skill in deaf adults. Brain and Language, 160, 11–20. [DOI] [PubMed] [Google Scholar]
  27. Emmorey K, Midgley KJ, Kohen CB, Sehyr ZS, & Holcomb PJ (2017). The N170 ERP component differs in laterality, distribution, and association with continuous reading measures for deaf and hearing readers. Neuropsychologia, 106, 298–309. [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Emmorey K, Weisberg J, McCullough S, & Petrich JA (2013). Mapping the reading circuitry for skilled deaf readers: An fMRI study of semantic and phonological processing. Brain and Language, 126(2), 169–180. [DOI] [PubMed] [Google Scholar]
  29. Fariña N, Duñabeitia JA, & Carreiras M (2017). Phonological and orthographic coding in deaf skilled readers. Cognition, 168, 27–33. [DOI] [PubMed] [Google Scholar]
  30. Glezer LS, Eden G, Jiang X, Luetje M, Napoliello E, Kim J, & Riesenhuber M (2016). Uncovering phonological and orthographic selectivity across the reading network using fMRI-RA. NeuroImage, 138, 248–256. [DOI] [PMC free article] [PubMed] [Google Scholar]
  31. Glezer LS, Jiang X, & Riesenhuber M (2009). Evidence for highly selective neuronal tuning to whole words in the “visual word form area”. Neuron, 62(2), 199–204. [DOI] [PMC free article] [PubMed] [Google Scholar]
  32. Glezer LS, Kim J, Rule J, Jiang X, & Riesenhuber M (2015). Adding words to the brain’s visual dictionary: Novel word learning selectively sharpens orthographic representations in the VWFA. Journal of Neuroscience, 35(12), 4965–4972. [DOI] [PMC free article] [PubMed] [Google Scholar]
  33. Glezer LS, Weisberg J, Farnady CO, McCullough S, Midgley KJ, Holcomb PJ, & Emmorey K (2018). Orthographic and phonological selectivity across the reading system in deaf skilled readers. Neuropsychologia, 117, 500–512. [DOI] [PMC free article] [PubMed] [Google Scholar]
  34. Green WB, & Shepherd DC (1975). The semantic structure in deaf children. Journal of Communication Disorders, 8(4), 357–365. [DOI] [PubMed] [Google Scholar]
  35. Gutierrez-Sigut E, Vergara-Martínez M, & Perea M (2017). Early use of phonological codes in deaf readers: An ERP study. Neuropsychologia, 106, 261–279. [DOI] [PubMed] [Google Scholar]
  36. Gutierrez-Sigut E, Vergara-Martínez M, & Perea M (2019). Deaf readers benefit from lexical feedback during orthographic processing. Scientific Reports, 9(1), 1–13. [DOI] [PMC free article] [PubMed] [Google Scholar]
  37. Hall ML, Hall WC, & Caselli NK (2019). Deaf children need language, not (just) speech. First Language, 39(4), 367–395. [Google Scholar]
  38. Hanson VL, & Fowler CA (1987). Phonological coding in word reading: Evidence from hearing and deaf readers. Memory & Cognition, 15(3), 199–207. [DOI] [PubMed] [Google Scholar]
  39. Hanson VL, & McGarr NS (1989). Rhyme generation by deaf adults. Journal of Speech, Language, and Hearing Research, 32(1), 2–11. [DOI] [PubMed] [Google Scholar]
  40. Henner J, Caldwell-Harris CL, Novogrodsky R, & Hoffmeister R (2016). American sign language syntax and analogical reasoning skills are influenced by early acquisition and age of entry to signing schools for the deaf. Frontiers in Psychology, 7, 1982. [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Hirshorn EA, Dye MW, Hauser PC, Supalla TR, & Bavelier D (2014). Neural networks mediating sentence reading in the deaf. Frontiers in Human Neuroscience, 8, 394. [DOI] [PMC free article] [PubMed] [Google Scholar]
  42. Hirshorn EA, Dye MWD, Hauser P, Supalla TR, & Bavelier D (2015). The contribution of phonological knowledge, memory, and language background to reading comprehension in deaf populations. Frontiers in Psychology, 6, 1153. [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. Hoeft F, McCandliss BD, Black JM, Gantman A, Zakerani N, Hulme C, … Gabrieli JD (2011). Neural systems predicting long-term outcome in dyslexia. Proceedings of the National Academy of Sciences, 108(1), 361–366. [DOI] [PMC free article] [PubMed] [Google Scholar]
  44. Hoeft F, Meyler A, Hernandez A, Juel C, Taylor-Hill H, Martindale JL, … Deutsch GK (2007). Functional and morphometric brain dissociation between dyslexia and reading ability. Proceedings of the National Academy of Sciences, 104(10), 4234–4239. [DOI] [PMC free article] [PubMed] [Google Scholar]
  45. Humphries T, Kushalnagar P, Mathur G, Napoli DJ, Padden C, Rathmann C, & Smith SR (2012). Language acquisition for deaf children: Reducing the harms of zero tolerance to the use of alternative approaches. Harm Reduction Journal, 9(1), 16. [DOI] [PMC free article] [PubMed] [Google Scholar]
  46. Humphries T, & MacDougall F (1999). Chaining and other links: Making connections between American Sign Language and English in two types of school settings. Visual Anthropology Review, 15(2), 84–94. [Google Scholar]
  47. Izzo A (2002). Phonemic awareness and reading ability: An investigation with young readers who are deaf. American Annals of the Deaf, 147, 18–28. [DOI] [PubMed] [Google Scholar]
  48. Koo D, Kelly L, LaSasso C, & Eden G (2008). Phonological awareness and short-term memory in hearing and deaf individuals of different communication backgrounds. Learning, Skill, Acquisition, Reading and Dyslexia, 1145, 83–99. [DOI] [PubMed] [Google Scholar]
  49. Ktori M, Kingma B, Hannagan T, Holcomb PJ, & Grainger J (2014). On the time-course of adjacent and non-adjacent transposed-letter priming. Journal of Cognitive Psychology, 26(5), 491–505. [DOI] [PMC free article] [PubMed] [Google Scholar]
  50. Kutas M, Neville HJ, & Holcomb PJ (1987). A preliminary comparison of the N400 response to semantic anomalies during reading, listening and signing. Electroencephalography and Clinical Neurophysiology Supplement, 39, 325–330. [PubMed] [Google Scholar]
  51. Laszlo S, & Sacchi E (2015). Individual differences in involvement of the visual object recognition system during visual word recognition. Brain and Language, 145, 42–52. [DOI] [PubMed] [Google Scholar]
  52. Li Y, Peng D, Liu L, Booth JR, & Ding G (2014). Brain activation during phonological and semantic processing of Chinese characters in deaf signers. Frontiers in Human Neuroscience, 8, 211. [DOI] [PMC free article] [PubMed] [Google Scholar]
  53. MacSweeney M, Brammer MJ, Waters D, & Goswami U (2009). Enhanced activation of the left inferior frontal gyrus in deaf and dyslexic adults during rhyming. Brain, 132(7), 1928–1940. [DOI] [PMC free article] [PubMed] [Google Scholar]
  54. MacSweeney M, Goswami U, & Neville H (2013). The neurobiology of rhyme judgment by deaf and hearing adults: An ERP study. Journal of Cognitive Neuroscience, 25(7), 1037–1048. [DOI] [PMC free article] [PubMed] [Google Scholar]
  55. MacSweeney M, Grossi G, & Neville H (2004). Semantic priming in deaf adults: An ERP study. Proceedings at the Cognitive Neuroscience Society Annual Meeting. [Google Scholar]
  56. Marschark M (1997). Psychological development of deaf children. Oxford University Press on Demand. [Google Scholar]
  57. Marschark M, Convertino C, McEvoy C, & Masteller A (2004). Organization and use of the mental lexicon by deaf and hearing individuals. American Annals of the Deaf, 149(1), 51–61. [DOI] [PubMed] [Google Scholar]
  58. Marshall C, Rowley K, Mason K, Herman R, & Morgan G (2013). Lexical organisation in deaf children who use British sign language: Evidence from a semantic fluency task. Journal of Child Language, 40(1), 193–220. [DOI] [PubMed] [Google Scholar]
59. Maurer U, Brandeis D, & McCandliss BD (2005). Fast, visual specialization for reading in English revealed by the topography of the N170 ERP response. Behavioral and Brain Functions, 1(1), 1–12.
60. Mayberry RI, Davenport T, Roth A, & Halgren E (2018). Neurolinguistic processing when the brain matures without language. Cortex, 99, 390–403.
61. Mayberry RI, Del Giudice AA, & Lieberman AM (2011). Reading achievement in relation to phonological coding and awareness in deaf readers: A meta-analysis. The Journal of Deaf Studies and Deaf Education, 16(2), 164–188.
62. McCandliss BD, & Noble KG (2003). The development of reading impairment: A cognitive neuroscience model. Mental Retardation and Developmental Disabilities Research Reviews, 9, 196–205.
63. McConkie GW, & Rayner K (1975). The span of the effective stimulus during a fixation in reading. Perception & Psychophysics, 17(6), 578–586.
64. Meade G, Grainger J, Midgley KJ, Holcomb PJ, & Emmorey K (2019). ERP effects of masked orthographic neighbour priming in deaf readers. Language, Cognition and Neuroscience, 34, 1016–1026.
65. Meade G, Grainger J, Midgley KJ, Holcomb PJ, & Emmorey K (2020). An ERP investigation of orthographic precision in deaf and hearing readers. Neuropsychologia, 146, 107542. 10.1016/j.neuropsychologia.2020.107542
66. Meade G, Midgley KJ, Sehyr ZS, Holcomb PJ, & Emmorey K (2017). Implicit co-activation of American Sign Language in deaf readers: An ERP study. Brain and Language, 170, 50–61.
67. Mehravari A, Emmorey K, Prat C, Klarman L, & Osterhout L (2017). Brain-based individual difference measures of reading skill in deaf and hearing adults. Neuropsychologia, 101, 153–168.
68. Miller P, & Clark MD (2011). Phonemic awareness is not necessary to become a skilled deaf reader. Journal of Developmental and Physical Disabilities, 23(5), 459–476.
69. Neville HJ, Mills DL, & Lawson DS (1992). Fractionating language: Different neural subsystems with different sensitive periods. Cerebral Cortex, 2(3), 244–258.
70. Novogrodsky R, Caldwell-Harris CL, Fish S, & Hoffmeister R (2014). The development of antonym knowledge in American Sign Language (ASL) and its relationship to reading comprehension in English. Language Learning, 64(4), 749–770.
71. Ormel EA, Gijsel MA, Hermans D, Bosman AM, Knoors H, & Verhoeven L (2010). Semantic categorization: A comparison between deaf and hearing children. Journal of Communication Disorders, 43(5), 347–360.
72. Padden C, & Ramsey C (2000). American Sign Language and reading ability in deaf children. In Chamberlain C, Morford JP, & Mayberry RI (Eds.), Language acquisition by eye (pp. 165–189). Mahwah, NJ: Lawrence Erlbaum Associates.
73. Paul PV, Wang Y, Trezek BJ, & Luckner JL (2009). Phonology is necessary, but not sufficient: A rejoinder. American Annals of the Deaf, 154(4), 346–356.
74. Pavani F, & Bottari D (2012). Visual abilities in individuals with profound deafness: A critical review. In Murray MM & Wallace MT (Eds.), The neural bases of multisensory processes (pp. 421–445). CRC Press/Taylor & Francis.
75. Perea M, & Lupker SJ (2004). Can CANISO activate CASINO? Transposed-letter similarity effects with nonadjacent letter positions. Journal of Memory and Language, 51, 231–246.
76. Perfetti C (2007). Reading ability: Lexical quality to comprehension. Scientific Studies of Reading, 11(4), 357–383.
77. Perfetti CA, & Hart L (2001). The lexical bases of comprehension. In Gorfein DS (Ed.), On the consequences of meaning selection (pp. 189–213). Washington, DC: American Psychological Association.
78. Perfetti CA, & Sandak R (2000). Reading optimally builds on spoken language: Implications for deaf readers. Journal of Deaf Studies and Deaf Education, 5(1), 32–50.
79. Proksch J, & Bavelier D (2002). Changes in the spatial distribution of visual attention after early deafness. Journal of Cognitive Neuroscience, 14(5), 687–701.
80. Pugh KR, Mencl WE, Jenner AR, Katz L, Frost SJ, Lee JR, … Shaywitz BA (2001). Neurobiological studies of reading and reading disability. Journal of Communication Disorders, 34(6), 479–492.
81. Purcell JJ, Shea J, & Rapp B (2014). Beyond the visual word form area: The orthography–semantics interface in spelling and reading. Cognitive Neuropsychology, 31(5–6), 482–510.
82. Ramirez NF, Leonard MK, Davenport TS, Torres C, Halgren E, & Mayberry RI (2016). Neural language processing in adolescent first-language learners: Longitudinal case studies in American Sign Language. Cerebral Cortex, 26(3), 1015–1026.
83. Rossion B, Joyce CA, Cottrell GW, & Tarr MJ (2003). Early lateralization and orientation tuning for face, word, and object processing in the visual cortex. NeuroImage, 20, 1609–1624.
84. Sacchi E, & Laszlo S (2016). An event-related potential study of the relationship between N170 lateralization and phonological awareness in developing readers. Neuropsychologia, 91, 415–425.
85. Scott JA, & Hoffmeister RJ (2017). American Sign Language and academic English: Factors influencing the reading of bilingual secondary school deaf and hard of hearing students. Journal of Deaf Studies and Deaf Education, 22(1), 59–71.
86. Sehyr ZS, Midgley KJ, Holcomb PJ, Emmorey K, Plaut DC, & Behrmann M (2020). Unique N170 asymmetries to visual words and faces reflect experience-specific adaptation in adult deaf ASL signers. Neuropsychologia, 141, 107414. 10.1016/j.neuropsychologia.2020.107414
87. Sehyr ZS, Petrich J, & Emmorey K (2017). Fingerspelled and printed words are recoded into a speech-based code in short-term memory. Journal of Deaf Studies and Deaf Education, 22(1), 72–81.
88. Seidenberg M (2017). Language at the speed of sight: How we read, why so many can’t, and what can be done about it. Basic Books.
89. Skotara N, Kügow M, Salden U, Hänel-Faulhaber B, & Röder B (2011). ERP correlates of intramodal and crossmodal L2 acquisition. BMC Neuroscience, 12(1), 48.
90. Skotara N, Salden U, Kügow M, Hänel-Faulhaber B, & Röder B (2012). The influence of language deprivation in early childhood on L2 processing: An ERP comparison of deaf native signers and deaf signers with a delayed language acquisition. BMC Neuroscience, 13(1), 44.
91. Strong M, & Prinz PM (1997). A study of the relationship between American Sign Language and English literacy. The Journal of Deaf Studies and Deaf Education, 2(1), 37–46.
92. Wang X, Caramazza A, Peelen MV, Han Z, & Bi Y (2015). Reading without speech sounds: VWFA and its connectivity in the congenitally deaf. Cerebral Cortex, 25(9), 2416–2426.
93. Wang Y, Mauer MV, Raney T, Peysakhovich B, Becker BL, Sliva DD, & Gaab N (2017). Development of tract-specific white matter pathways during early reading development in at-risk children and typical controls. Cerebral Cortex, 27(4), 2469–2485.
94. Wang Y, Trezek BJ, Luckner J, & Paul PV (2008). The role of phonology and phonologically related skills in reading instruction for students who are deaf or hard of hearing. American Annals of the Deaf, 153, 396–407.
95. Waters D, Campbell R, Capek CM, Woll B, David AS, McGuire PK, Brammer MJ, & MacSweeney M (2007). Fingerspelling, signed language, text and picture processing in deaf native signers: The role of the mid-fusiform gyrus. NeuroImage, 35(3), 1287–1302.
96. Yan M, Pan J, Bélanger NN, & Shu H (2015). Chinese deaf readers have early access to parafoveal semantics. Journal of Experimental Psychology: Learning, Memory, and Cognition, 41(1), 254–261.
97. Yu X, Zuk J, & Gaab N (2018). What factors facilitate resilience in developmental dyslexia? Examining protective and compensatory mechanisms across the neurodevelopmental trajectory. Child Development Perspectives, 12(4), 240–246.
