Author manuscript; available in PMC: 2024 Jan 1.
Published in final edited form as: Lang Cogn Neurosci. 2022 Oct 17;38(5):636–650. doi: 10.1080/23273798.2022.2135746

Electrophysiological patterns of visual word recognition in deaf and hearing readers: An ERP mega-study

Kurt Winsler 1, Phillip J Holcomb 2, Karen Emmorey 3
PMCID: PMC10249718  NIHMSID: NIHMS1903827  PMID: 37304206

Abstract

Deaf and hearing readers have different access to spoken phonology which may affect the representation and recognition of written words. We used ERPs to investigate how a matched sample of deaf and hearing adults (total n = 90) responded to lexical characteristics of 480 English words in a go/no-go lexical decision task. Results from mixed effect regression models showed a) visual complexity produced small effects in opposing directions for deaf and hearing readers, b) similar frequency effects, but shifted earlier for deaf readers, c) more pronounced effects of orthographic neighborhood density for hearing readers, and d) more pronounced effects of concreteness for deaf readers. We suggest hearing readers have visual word representations that are more integrated with phonological representations, leading to larger lexically-mediated effects of neighborhood density. Conversely, deaf readers weight other sources of information more heavily, leading to larger semantically-mediated effects and altered responses to low-level visual variables.

Keywords: deafness, neighborhood density, concreteness, event-related potentials

Introduction

Reading is a complex, culturally invented task that is likely implemented by co-opting evolutionarily older circuitry in the human cortex (i.e., neuronal recycling – Dehaene, 2005). Fundamental to reading is the process of visual word recognition, the mapping of arbitrary arrays of connected lines (letters) to learned, semantic representations. Humans, both deaf and hearing, can learn to do this remarkably efficiently, and yet this process is not fully understood. For hearing people, the ability to understand written language (and by extension, to recognize written words) is developed in part by using existing spoken language capabilities as a scaffold (Frost, 1998), since spoken language predates written language both within the individual and in the evolution of the human species. For the developing hearing reader, utilizing phonological representations for learning written language makes sense, given that these representations have already been established, so learning to read predominantly involves linking new, orthographic (visual) input to existing phonological (auditory) representations. Indeed, there is strong evidence that for hearing people learning orthographic wordforms piggybacks on phonological knowledge. For instance, developing phonological abilities predict reading ability (e.g., Caravolas, Hulme, and Snowling, 2001), and adult readers automatically activate phonological information when they read written words (e.g., Kiyonaga et al., 2007). These findings and others suggest that for hearing readers of an alphabetic script, word representations involve phonological information, which will affect the way that word representations are accessed and recognized.

However, for deaf readers, the critical role of phonological knowledge in the development of reading has been called into question (e.g., Bélanger et al., 2012; Emmorey & Lee, 2021). For instance, several studies report that phonological awareness ability is not associated with reading skill in deaf individuals, in contrast to their hearing peers (Cates et al., 2021; Mayberry et al., 2011; Sehyr & Emmorey, 2021); rather, vocabulary knowledge and spelling ability (including fingerspelling) are associated with skilled reading in this population (Cates et al., 2021; Emmorey et al., 2017; Lederberg et al., 2019). In addition, deaf adults do not activate phonology to the same extent as hearing adults when recognizing written words, as evidenced by a lack of phonological priming effects (Bélanger et al., 2013; Costello et al., 2021). If an orthographic-phonological mapping scheme is not engaged during visual word recognition, it should be expected that deaf readers must recognize visual words somewhat differently than hearing readers.

One way to disentangle the complex time-course of word recognition is to model how variability in the lexical characteristics of words affects the brain’s electrophysiological signals, as measured by event-related potentials (ERPs). For instance, the time frame in which the concreteness of a word’s meaning affects the ERP signal is related to when and to what extent semantic information is being activated. Concrete words typically generate larger negativities than abstract words in the N400 time-window during word recognition. This effect is attributed to concrete words having denser semantic networks (Kounios & Holcomb, 1994). Similarly, a word’s orthographic neighborhood density (how many other words are similar in form) affects the amount of lexical information activated, with words from larger neighborhoods coactivating more similar words. This additional lexical activation elicits larger negativities for higher density words in N400 time windows (Holcomb, Grainger, and O’Rourke, 2002), and as early as 250ms, which is thought to represent initial lexical activation prior to semantic processing (Meade et al., 2019). Lexical frequency, i.e., how often a word is encountered, is another widely studied lexical variable that affects word recognition. Lower frequency words elicit larger N400s than higher frequency words (e.g., Van Petten & Kutas, 1990), and in some studies produce increased negativity earlier in the ERP time course (e.g., Hauk et al., 2006 at ~110ms; Chen et al., 2015 at ~160ms). These effects are hypothesized to reflect the additional processing necessary to recognize orthographic and lexical forms that are less familiar. There are also variables, such as the visual complexity of a word, that are not linguistic per se, but affect word recognition. Dufau et al. (2015) found that visually complex words elicited an increased negativity starting at about 100ms.

Finally, it should be noted that the magnitude and time-course of effects of lexical variables are flexible, responding to the demands of the particular task at hand. For instance, the presence of early orthographic neighborhood density effects is dependent on a task that requires lexical-form activation (Meade et al., 2019), and the magnitude of the concreteness effect is increased in a task that requires semantic activation (Winsler et al., 2018). These interactions with task (e.g. lexical decision vs. semantic categorization) reflect how the word recognition system is optimized for different processing demands, which changes how the system responds to different dimensions of the words being recognized. We suggest that long-term pressures placed on the system can also shape word recognition. We propose that having reduced access to phonological information can change how and when lexical variables impact different ERP components.

Given the multidimensional nature of words, experimental designs that employ item-level regression analyses on ERP data from a large set of words and from a large group of participants are especially useful compared to factorial designs since they can model multiple effects simultaneously and as continuous variables. This approach enables more natural samples of words as well as many statistical benefits, including better control for collinearity, modeling item random effects, and increased power (see Baayen et al., 2008). These sorts of studies (sometimes termed “megastudies”) have been consequential in uncovering aspects of visual word recognition in hearing readers (e.g. Hauk et al., 2006; Dufau et al., 2015), as well as for auditory word recognition (Winsler et al., 2018) and sign recognition in deaf ASL signers (Emmorey et al., 2020). However, no studies to date have examined how deaf readers respond to multiple lexical variables in a large sample of visual words.

The current study

We adopted a procedure and set of stimuli similar to those of Dufau et al. (2015), who used item-level analyses to find cascaded effects of lexical variables across the ERP time-course of hearing readers recognizing visual words. Our study aims to expand our understanding of how differences in access to spoken language phonology impact written word recognition by comparing ERP data from a sample of deaf individuals reading single words to ERP data from a matched sample of hearing individuals reading the same words. The lexical variables that we examined were visual complexity, written frequency, orthographic neighborhood density, and concreteness. Each of these variables has been found to impact the ERP responses of hearing readers in distinct ways, reflecting their distinct influences on visual word recognition – ranging from low-level visual processing to lexical and semantic processing. Hence, the pattern of ERP results for deaf readers, and how it differs from that of hearing readers, will contribute valuable information about how the visual word recognition system is implemented with, and without, full access to sound.

Methods

Participants

A total of 92 participants were recruited from the San Diego and Southern California regions. Forty-six participants (25 female; 6 left-handed, 1 ambidextrous) were congenitally deaf or became deaf before age three, and all were severely or profoundly deaf (> 70 dB loss). Seventeen deaf participants had at least one deaf parent (exposed to ASL from birth), and 29 had hearing parents and were exposed to ASL at a mean age of 2.26 years (SD = 2.95). Forty-six hearing participants were tested, but two were rejected for having excessive artifacts in their EEG data. The remaining 44 hearing participants (20 female; 7 left-handed, 1 ambidextrous) all reported normal hearing. No participant reported a history of learning disability or dyslexia. The two groups did not differ in age (see Table 1).

Table 1.

Means and standard deviations (sd) for each measure for the deaf and hearing groups. The last column shows the p-value from an unpaired, two-tailed t-test comparing the deaf and hearing groups on each measure. PPVT data are missing for one hearing participant, and phonological test data are missing for one deaf participant.

Measure Deaf mean (sd) Hearing mean (sd) t-test p-value
Age (years) 33.33 (8.03) 31.09 (8.59) p = 0.206
PIAT (raw scores; max = 100) 82.22 (11.97) 83.61 (9.91) p = 0.547
Spelling (% correct) 72.70 (8.85) 70.61 (9.01) p = 0.272
PPVT (up to 228) 190.96 (22.09) 205.19 (12.63) p < 0.001
Phonological test (% correct) 62.2 (15) 87.0 (10) p < 0.001

Reading assessments

All participants underwent an assessment battery that measured reading comprehension, spelling recognition, vocabulary, and phonological awareness. The battery included the following tests, and mean scores for each group are given in Table 1.

Peabody Individual Achievement Test (PIAT) – Revised; Reading Comprehension Subtest (Markwardt, 1989).

In this subtest, participants silently read a sentence and then choose from four pictures the one that best matches the sentence. Items increase in difficulty throughout the test, and the test is discontinued when a participant makes five errors within seven consecutive responses.

Spelling Recognition Test (Andrews & Hersch, 2010).

The test contains 88 items, half correctly spelled and half misspelled. Misspellings change one to three letters of the word and often preserve the pronunciation of the base word (e.g., addmission, seperate). Items are printed in columns, and participants are instructed to circle items they think are incorrectly spelled. The recognition test score is the number of correctly classified items, both hits and correct rejections.

Phonological Awareness Test (Hirshorn, Dye, Hauser, Supalla, & Bavelier, 2015)

This test was specifically designed for profoundly deaf adults and does not require overt speech production. For one task, three pictures are displayed in a triangle formation, and participants select the “odd man out” – the item that has a different first sound or a different vowel (blocked conditions). In a second task, participants are shown two pictures (e.g., a bird and a toe) and are asked to combine the first sound of the word in the first picture with the rime of the word in the second picture to make a new word (e.g., bow). Participants type the new word that is created on a keyboard. The mean total accuracy for both groups is given in Table 1.

Peabody Picture Vocabulary Test (PPVT–IV; Dunn & Dunn, 2007) adapted for deaf individuals by Sarchet, Marschark, Borgna, Convertino, Sapere, & Dirmyer (2014).

On each trial, participants saw four pictures and a visually presented target word in the middle of the display and were asked to point to the picture that corresponded to the word. Participants began at item 157 (set 14) of the 228 items, moving to lower sets if the basal rule was not met. The test was discontinued when participants reached a ceiling (eight or more errors in a set). The score was calculated as the ceiling item minus errors.

As shown in Table 1, the two groups did not differ significantly in reading comprehension or in spelling ability. The hearing readers had higher vocabulary scores, and as expected, the deaf readers had lower phonological scores than the hearing readers.

Materials

Stimuli were a representative subset of 480 words from the stimuli used in Dufau et al. (2015). Three words were excluded from analysis because they were considered non-words by a large number of participants; thus, the final number of critical stimuli was 477 words. Zipf frequency was used to quantify the frequency of each word in English (Van Heuven et al., 2014); this is a logarithmically normed measure based on American English subtitle frequency (SUBTLEX-US frequency; Brysbaert & New, 2009) that ranges from 1 to 7. In the current sample of words, Zipf frequency ranged from 2.14 to 5.75, with a mean of 4.04 (SD = 0.78). Orthographic neighborhood density was quantified using orthographic Levenshtein distance (OLD; see Yarkoni, Balota, and Yap, 2008), obtained from the English Lexicon Project (Balota et al., 2007). This variable measures each word’s distance from all other words in the lexicon, so a high orthographic distance corresponds to a small orthographic neighborhood density. In our sample, OLD ranged from 1 to 3.95 with a mean of 2.14 (SD = 0.6). Although OLD was used in the models, for clarity the results will be presented and discussed in terms of orthographic neighborhood density (OND), i.e., density rather than distance. Concreteness was measured using a separate rating study, in which 24 undergraduates rated each word on a scale of 1 (very abstract) to 7 (very concrete). These were the same ratings used by Dufau et al. (2015) and Winsler et al. (2018). For this subset of words, concreteness ranged from 1.65 to 6.7 with a mean of 4.35 (SD = 1.15). Visual complexity was quantified as the average perimetric complexity of each letter in the word (Pelli et al., 2006). These were the same values as used in Dufau et al. (2015). Our sample of words ranged in visual complexity from 44.84 to 82.24 with a mean complexity of 65.15 (SD = 6.25). Another variable that was controlled for in the models was the number of letters in each word. The words in our sample had an even distribution of lengths, ranging from 4 to 8 letters with a mean of 5.99 (SD = 1.41).
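
For readers unfamiliar with the Zipf scale, the sketch below illustrates the standard conversion from raw subtitle counts to Zipf values (log10 of frequency per million words, plus 3). The words, counts, and corpus size shown are hypothetical placeholders rather than actual SUBTLEX-US values.

```r
# Illustrative sketch of the Zipf frequency scale (van Heuven et al., 2014).
# Counts and corpus size below are hypothetical placeholders, not SUBTLEX-US values.
counts <- c(motor = 1200, quixotic = 9)   # hypothetical raw subtitle counts
corpus_millions <- 51                     # approximate size of a subtitle corpus, in millions of tokens
freq_per_million <- counts / corpus_millions
zipf <- log10(freq_per_million) + 3       # Zipf = log10(frequency per million words) + 3
round(zipf, 2)
```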

Procedure

Participants were seated 150cm away from a computer monitor in a darkened room. A short training block was completed before the experiment. Participants completed 2 experimental blocks, each containing 240 critical stimuli and 35 non-word probes. As in Dufau et al. (2015), the task was go/no-go lexical decision, where participants pressed a button on a keypad whenever they saw a non-word. The probes were phonotactically and orthographically legal non-words and were rearranged versions of real words. They ranged in length from 4 to 8 letters as did the word stimuli.

Stimuli were presented in white Arial font in the center of a black background, and subtended between 1 and 2 degrees of visual angle (depending on the number of letters). Each item was presented for 400ms with a 600ms blank screen between stimuli. There was a brief rest break every 60 trials, and a blink cue (“- -”) was presented for 3.5s at random intervals of 7 to 12 trials to encourage participants to blink.

Data processing

EEG data were continuously recorded at 500 Hz from 29 scalp sites arranged in the standard 10-20 montage (see Figure 1), as well as from the left and right mastoids, and from a horizontal and a vertical eye channel. The left mastoid served as the reference for the other channels, and the eye channels were used to monitor for artifacts. Offline, each subject’s continuous data file was decomposed with ICA using the ‘runica’ algorithm in EEGLAB (Delorme & Makeig, 2004), and components were visually inspected and removed from the data if they were clearly blink or horizontal eye movement components. The continuous data were then epoched from −100 to 600ms relative to the onset of each visual word, with the −100 to 0ms interval serving as the baseline. Trials with remaining artifacts were rejected (~1.5% of trials in each group), and the remaining trials were bandpass filtered at 0.01–20 Hz. Data were measured from every electrode as mean amplitudes in 50ms time windows, starting with a 100-150ms window up to a 550-600ms window (for a total of 10 time windows). This measurement strategy of 50ms windows was chosen to strike a balance between sufficient temporal resolution to detect timing differences between the deaf and hearing readers and a number of statistical tests that could still feasibly be interpreted and discussed individually.
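
As a rough illustration of the 50ms-window measurement described above, the sketch below computes mean amplitudes per trial, electrode, and window from hypothetical single-trial epoched data in long format; all object and column names are assumptions, not the authors’ actual pipeline.

```r
# Sketch of the 50 ms mean-amplitude measurement, assuming epoched single-trial data
# in long format with hypothetical columns: subject, item, electrode, time_ms, amplitude.
library(dplyr)

win_starts <- seq(100, 550, by = 50)   # ten windows: 100-150ms ... 550-600ms

window_means <- epochs %>%
  filter(time_ms >= 100, time_ms < 600) %>%
  mutate(window = cut(time_ms,
                      breaks = c(win_starts, 600), right = FALSE,
                      labels = paste0(win_starts, "-", win_starts + 50, "ms"))) %>%
  group_by(subject, item, electrode, window) %>%
  summarise(mean_amp = mean(amplitude), .groups = "drop")  # one mean amplitude per trial, electrode, and window
```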

Figure 1.

Figure 1

Electrode montage showing the 29 scalp electrodes used in the analyses.

Data analysis

The data and scripts used for analysis are posted on OSF (https://osf.io/5vcxa/). The data were modeled in an exploratory fashion, similar to Winsler et al. (2018) and Emmorey et al. (2020). Each of the 10 time windows was analyzed with an identical linear mixed effect regression model, separately for each group. The fixed effect structure for each model included an effect for each of the four variables of interest (Frequency, Orthographic neighborhood density (OND), Concreteness, and Visual complexity), and the interaction of each of these variables with each of the distributional variables (X-position, Y-position, Z-position). The distributional variables correspond to the positions of each electrode in physical space. Thus, interactions with the distributional variables indicate that the distribution of an effect differs linearly along left-to-right electrodes (X), anterior-to-posterior electrodes (Y), or electrodes on the top/center vs the bottom/periphery of the scalp (Z). Note that this means interactions with Z-position pick up on whether an effect is focused centrally (e.g. around Cz). Additionally, the fixed effects included a covariate, the number of letters in each word, to control for possible differences in ERPs due to the visual intensity/size of the word stimulus. Interactions with the three distributional variables were also included for this control variable. The random effect structure included random intercepts for participants, items, and electrodes, as well as random participant slopes (uncorrelated with the participant intercepts) for the effects of the four experimental variables and the control variable. This model was fit separately for the deaf and hearing readers, and separately for each of the 10 time windows.
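
A minimal lme4/lmerTest sketch of one such group-specific model for a single time window is given below. Variable and data frame names are hypothetical; the exact specification used in the paper is in the scripts posted on OSF.

```r
# Sketch of a group-specific model for one 50 ms window (variable names hypothetical;
# see the OSF scripts for the exact specification used in the paper).
library(lmerTest)   # wraps lme4 and provides Satterthwaite-approximated t-tests

m_deaf_350 <- lmer(
  mean_amp ~ (freq + ond + concreteness + vis_complexity + n_letters) *
             (x_pos + y_pos + z_pos) +                      # lexical effects and scalp-distribution interactions
    (1 | subject) + (1 | item) + (1 | electrode) +          # random intercepts
    (0 + freq + ond + concreteness + vis_complexity + n_letters | subject),  # by-subject slopes, uncorrelated with intercepts
  data = subset(window_means_deaf, window == "350-400ms")
)
summary(m_deaf_350)
```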

To compare the deaf and hearing readers, a different set of models was fit to the data from each time window. These models had the same structure as the group-specific models, with added interactions between each of the five fixed effects (Frequency, OND, Concreteness, Visual complexity, number of letters) and Group. However, all interactions with the distributional variables were removed to reduce the complexity of the group-comparison results. Hence, significant item-variable by group interactions indicate evidence for group differences in the effect of a variable averaged across all electrode sites. The random effect structure for the group comparison models was the same as for the group-specific models (three random intercepts, five uncorrelated random by-subject slopes).
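
The group-comparison models can be sketched in the same way, with Group entering as an interaction term and the distributional terms dropped; again, the names below are hypothetical.

```r
# Sketch of a group-comparison model for one window: each lexical variable interacts
# with Group, and the distributional (electrode position) terms are omitted.
m_group_350 <- lmer(
  mean_amp ~ (freq + ond + concreteness + vis_complexity + n_letters) * group +
    (1 | subject) + (1 | item) + (1 | electrode) +
    (0 + freq + ond + concreteness + vis_complexity + n_letters | subject),
  data = subset(window_means_all, window == "350-400ms")
)
```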

The models were fit in R (R Core Team, 2014) using the package lme4 (Bates et al., 2014). If a model did not converge using default parameters, it was refit using more iterations and a different optimizer, so that every model converged. P-values for each fixed effect parameter were obtained with Wald t tests using degrees of freedom from the Satterthwaite approximation, implemented in the lmerTest package (Kuznetsova et al., 2017). Detailed model outputs, as well as more information on how to interpret the direction of distributional interactions, are included in the supplementary materials. For each of the four effects of interest, the repeated comparisons across distributional variables and the 10 time-windows (within each group model and the group comparison model) were corrected for multiple comparisons using the false discovery rate (FDR; Benjamini & Hochberg, 1995). This means, for example, that the 40 p-values for the frequency effect in the hearing group were adjusted separately from the 40 p-values for the frequency effect in the deaf group and the 10 p-values for the frequency effect group comparison.
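
For example, the FDR adjustment described above amounts to collecting the raw p-values for one effect within one model family and correcting them together, e.g. with p.adjust; the vector name below is a hypothetical placeholder.

```r
# Sketch of the FDR correction: the 40 raw p-values for the frequency effect in the
# hearing-group models (4 terms x 10 windows) are adjusted together.
p_freq_hearing_adj <- p.adjust(p_freq_hearing, method = "BH")  # Benjamini-Hochberg FDR
which(p_freq_hearing_adj < .05)                                # comparisons surviving correction
```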

The results are presented as t-value heatmaps (Figures 2, 3, 4, and 5), with significant results (FDR corrected p-value < .05) marked with a black dot. Additionally, to visualize the distribution of each of the effects over electrodes and time, topographic t-value scalp maps were generated. To do this, smaller versions of the model were fit for each electrode at each time window. The fixed effects for these models contained only the five variables of interest, and the random effects contained only intercepts for participants and items. The random slopes had to be dropped for these individual-electrode models (which had much less data) to allow them to converge. Finally, for readers who would like to see the averaged ERPs, we also plotted ERPs corresponding to the highest and lowest quartile of each variable of interest at 5 sites (Fpz, T3, Cz, T4, and Oz). These figures can be found in the supplementary materials (supplementary figures 1, 2, 3, and 4). Note that they do not reflect the way the data were analyzed and are provided only as a visual aid.
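
A sketch of one of these reduced single-electrode models, used only to generate the topographic t-value maps, is shown below; as above, data and variable names are hypothetical.

```r
# Sketch of a reduced single-electrode model used only for the topographic t-value maps;
# random slopes are dropped so these much smaller models can converge.
m_cz_350 <- lmer(
  mean_amp ~ freq + ond + concreteness + vis_complexity + n_letters +
    (1 | subject) + (1 | item),
  data = subset(window_means_deaf, window == "350-400ms" & electrode == "Cz")
)
t_freq_cz <- summary(m_cz_350)$coefficients["freq", "t value"]  # t-value plotted at Cz for this window
```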

Figure 2.

Figure 2

LME model visual complexity results and topographical t-value maps for hearing and deaf readers, and the group comparisons. Model results are shown as t-value heat maps (−7 to 7) for the effect of visual complexity and the interaction between visual complexity and each of the distributional variables for each epoch. Significant results (FDR corrected p < .05) are marked with a dot. Below each heatmap are topographical t-value (−7 to 7) scalp maps for the effect of visual complexity at each electrode site. In the scalp maps and for the effect of visual complexity over all electrodes, positive t-values indicate that higher complexity words elicited more positive voltages, while negative t-values indicate that higher complexity words elicited more negative voltages. On the bottom is marked whether the visual complexity by group interaction for each epoch was significant.

Figure 3.

Figure 3

LME model frequency results and topographical t-value maps for hearing and deaf readers, and the group comparisons. Model results are shown as t-value heat maps (−7 to 7) for the effect of frequency and the interaction between frequency and each of the distributional variables for each epoch. Significant results (FDR corrected p < .05) are marked with a dot. Below each heatmap are topographical t-value (−7 to 7) scalp maps for the effect of frequency at each electrode site. In the scalp maps and for the effect of frequency over all electrodes, positive t-values indicate that higher frequency words elicited more positive voltages (the typical direction of frequency effects), while negative t-values indicate that higher frequency words elicited more negative voltages. On the bottom is marked whether the frequency by group interaction for each epoch was significant.

Figure 4.

Figure 4

LME model OND results and topographical t-value maps for hearing and deaf readers, and the group comparisons. Model results are shown as t-value heat maps (−7 to 7) for the effect of OND and the interaction between OND and each of the distributional variables for each epoch. Significant results (FDR corrected p < .05) are marked with a dot. Below each heatmap are topographical t-value (−7 to 7) scalp maps for the effect of OND at each electrode site. In the scalp maps and for the effect of OND over all electrodes, positive t-values indicate that words with lower OND (i.e., higher orthographic Levenshtein distance) elicited more positive voltages, equivalently that higher density words elicited more negative voltages (the typical direction of OND effects), while negative t-values indicate the opposite pattern. On the bottom is marked whether the OND by group interaction for each epoch was significant.

Figure 5.

Figure 5

LME model concreteness results and topographical t-value maps for hearing and deaf readers, and the group comparisons. Model results are shown as t-value heat maps (−7 to 7) for the effect of concreteness and the interaction between concreteness and each of the distributional variables for each epoch. Significant results (FDR corrected p < .05) are marked with a dot. Below each heatmap are topographical t-value (−7 to 7) scalp maps for the effect of concreteness at each electrode site. In the scalp maps and for the effect of concreteness over all electrodes, positive t-values indicate that more concrete words elicited more positive voltages, while negative t-values indicate that more concrete words elicited more negative voltages (the typical direction of concreteness effects). On the bottom is marked whether the concreteness by group interaction for each epoch was significant.

Results

Behavioral results

Overall, deaf and hearing readers had similar behavioral performance on the lexical decision task. Deaf readers had an average accuracy of 80.77% (SD = 13.67%) and hearing readers had an average accuracy of 78.28% (SD = 11.69%). A two-sample t-test showed that this difference was not statistically significant (p = 0.35). Deaf readers had an average of 25.67 false alarms (SD = 17.5) and hearing readers had an average of 25.57 false alarms (SD = 20.46); this difference was also not significant (p = 0.98). Reaction times to probe non-word targets averaged 680 ms for hearing readers and 673 ms for deaf readers (p = .60).
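
For illustration, group comparisons of this kind correspond to simple two-sample t-tests over per-participant scores, as in the sketch below; the data frame and column names are hypothetical placeholders.

```r
# Sketch of a behavioral group comparison on per-participant accuracy
# (data frame and column names are hypothetical).
t.test(accuracy ~ group, data = behavioral)   # Welch two-sample t-test by default;
                                              # add var.equal = TRUE for a Student's t-test
```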

Visual complexity

Hearing readers

The hearing group had a significant effect of visual complexity starting in the 200-250ms epoch, with higher visual complexity associated with more negative voltages, especially in channels on the right side of the montage. In the 300-400ms epochs, the effect returns, now more focused on central-posterior sites.

Deaf readers

The effect of visual complexity occurred earlier for the deaf group in the 150-200ms epoch, with posterior sites generating more positive voltages to words with higher visual complexity. The effect returns only briefly in the 300-350ms epoch where again visually complex words generated more positivity, now with a more anterior focus.

Group comparison

In the group comparison models, every interaction between visual complexity and group was significant until the 500-550ms epoch. The interactions demonstrated that the hearing and deaf readers showed visual complexity effects in opposite directions: for the hearing readers, higher visual complexity was associated with more negative voltages, while for the deaf readers, higher visual complexity was associated with more positive voltages.

Frequency

Hearing readers

The hearing group showed an initial frequency effect in the 150-200ms epoch, with lower frequency words generating more negative voltages at anterior sites. The distribution of the effect changes to more posterior sites in the 300-350ms epoch. Through the next four epochs, 350-550ms, the frequency effect grows to be centered on left and central sites, still with lower frequency words generating more negative voltages. In the final two epochs, 500-550ms and 550-600ms, the frequency effect appears to reverse at occipital and right-side sites, with lower frequency words now producing more positive voltages.

Deaf readers

The initial frequency effect was earlier for the deaf group, appearing in the 100-150ms epoch, with lower frequency words eliciting more negative voltages, but at posterior sites. This pattern continued through the 200-350ms epochs, focused on left-occipital channels. As with the hearing readers, the frequency effect became centered on left-central sites in the 350-400ms epoch and maintained the typical direction of more negativity to lower frequency words. Also like the hearing readers, the typical frequency effect for deaf readers ends in the 500-550ms epoch and is replaced in the final epoch with a reversed effect in occipital and central channels.

Group comparison

Overall, the patterns of word frequency effects were similar for deaf and hearing readers. However, in the first epoch (100-150 ms), the frequency effect was larger for the deaf group, indicating an earlier onset for lexical frequency effects. There were no further significant differences between the two groups until the 450-500ms epoch. In the final three epochs (450-600 ms), the late “reversed” effect (i.e., lower frequency words generating more positive ERPs at occipital channels) was larger for the deaf than the hearing readers.

Orthographic Neighborhood Density (OND)

Hearing readers

The first significant effect in the hearing group was in the 200-250ms epoch: words with higher neighborhood density produced more negative voltages in central channels. This effect continued through the next two epochs, with the OND effect becoming more anterior. There was no significant OND effect in the 350-400ms epoch, but the effect returned in the remaining epochs 400-600ms, now more focused on central and right-side sites. All the effects are in the typical direction of N400 neighborhood density effects, with words from dense neighborhoods producing more negativity than words from sparse neighborhoods.

Deaf readers

The deaf group showed an early OND effect in the 150-200ms epoch, with high OND words generating more negative voltages in right frontal channels. The OND effect emerged again in the 250-300ms epoch towards the frontal channels, and persisted until the 350-400ms epoch where it was more focused in central channels. The OND effect was not significant after this epoch.

Group comparison

Starting in the 200-250ms epoch the hearing readers showed a significantly larger effect of OND than the deaf readers. This pattern continued in all the remaining epochs, except for the 350-400ms epoch, where the OND effect for both groups transitioned to a more-central distribution, and the interaction with group was not significant.

Concreteness

Hearing readers

Surprisingly, for the hearing group, there were no significant effects of concreteness throughout the time-course. Epochs during the N400 time window (e.g. 350-400ms) revealed a non-significant effect in the expected direction for typical N400 concreteness effects – more concrete words produced more-negative voltages than less concrete words. Some of the concreteness effects here would have been significant if not for the FDR correction, or if the data had been analyzed in a way that targeted the N400 time window a priori (e.g., measuring mean voltage from central electrodes in a 300-500ms time window and fitting a confirmatory model).

Deaf readers

In contrast to the hearing readers, the deaf readers showed at least one significant concreteness effect in every epoch. In the first time point, 100-150ms, the concreteness effect was graded across the Y-positions of the electrodes (anterior-posterior), with posterior sites showing higher voltages to more concrete words. However, starting in the 150-200ms epoch, the effect became more widely distributed, such that more concrete words produced more negative mean-amplitudes (the direction of typical N400 concreteness effects). The concreteness effect continued through all remaining epochs with a broad distribution, but centered on right frontal-central sites.

Group comparison

The statistical comparison between the two groups confirmed the pattern seen in the group-specific models. Starting in the 150-200ms epoch and continuing through the final 550-600ms epoch, the deaf readers showed a larger effect of concreteness than the hearing readers. Qualitatively, in the early epochs the hearing group exhibited no concreteness effect (or a reversed effect) which may drive the group interactions in these epochs. In later epochs (after ~300ms), both groups showed evidence of a similar concreteness effect, with the deaf readers showing a much larger effect.

Discussion

Overall, both deaf and hearing readers were sensitive to each of the word-level variables of interest, but there were group differences in the timing, size, and direction of several effects. Deaf readers generally showed earlier onsets for the effects of all four variables compared to hearing readers. Overall, the direction of the effects was similar across the two groups for frequency (less frequent words generated more negative voltages), orthographic neighborhood density (words from high density neighborhoods generated more negative voltages), and concreteness (more concrete words generated more negative voltages). However, the direction of the visual complexity effect was reversed, with hearing readers exhibiting a more negative response for visually complex words, replicating Dufau et al. (2015)’s results with a separate group of hearing readers. In contrast, the deaf readers showed a more negative response for words that were visually less complex. The size of the orthographic neighborhood density effect was larger for the hearing than deaf readers, whereas the size of the concreteness effect was larger for the deaf than hearing readers. Below we discuss in more detail the differences and similarities in the neural response for each of these lexical variables across deaf and hearing readers. Reflecting the exploratory data analysis strategy, we generally avoid making inferences as to which ERP component might be associated with a given effect. Nonetheless, effects later in the ERP time course are assumed to be likely related to the N400 component and lexical-level processes.

Visual complexity produced small effects for both groups. Hearing readers showed weak effects of visual complexity around 200ms, and larger effects between 300-400 ms when more visually complex words generated more negativity, especially at occipital sites. In the Dufau et al. (2015) study, which included all the words in the current study, effects of visual complexity were found in the same direction but were more widespread in time – with an initial burst after 100ms, and a more sustained effect from about 200-400ms. One likely explanation for the similar but stronger effects in the previous study is that it had higher power (the Dufau study had more participants and items). Deaf readers showed an earlier effect in occipital channels (150-200 ms), as well as a more frontally distributed effect in the 300-350 ms epoch. Across all epochs up to 500ms, the direction of the visual complexity effects for the two groups was opposite, with deaf readers showing less negative ERPs to visually complex words. One tentative possibility is to interpret the reduced negativity to complex words as reflecting less sub-lexical or lexico-semantic activity. Such an interpretation would suggest that for deaf readers more complex visual forms activate lexical or sub-lexical representations less efficiently or less robustly than for hearing readers. Another possible hypothesis is that deaf readers process the visual form of words differently than hearing readers. A recent ERP study by Gutierrez-Sigut et al. (2022) supports this idea. The authors found that only deaf readers were sensitive to the outline shape of a pseudoword (e.g., mofor, but not mosor, is shape-consistent with the target word motor). Inconsistent-shape pseudowords (mosor) elicited larger amplitude ERPs than consistent-shape pseudowords (mofor) starting at 150ms and continuing into the N400 time-window. Gutierrez-Sigut et al. (2022) suggested that deaf readers rely more on visual characteristics during word recognition than their hearing peers with similar reading ability. Our results are consistent with this hypothesis. Specifically, we speculate that words with higher visual complexity may be processed more efficiently by deaf readers, perhaps because these words have fewer feature competitors than visually ‘simple’ words, which leads to reduced negativity in the ERPs. Further research directly manipulating the visual complexity of words or other non-linguistic stimuli may elucidate this potential difference in visual recognition processes between deaf and hearing people.

Early frequency effects (greater negativity for lower frequency words) were observed for both groups, replicating findings from other ERP studies showing effects of frequency before 200ms (e.g. Hauk et al., 2006; Chen et al., 2015; Dufau et al., 2015; Winsler et al., 2018). This early effect could be attributed to the frequency of a word affecting the activation of sub-lexical orthographic representations. As with the visual complexity effect, the frequency effect onset slightly earlier for deaf readers, in the 100-150 ms epoch. This earlier onset could be interpreted as evidence that deaf readers are activating sublexical representations faster than hearing readers, perhaps due to stronger connections between sublexical and lexical orthographic representations that emerge from a reliance on visual features when reading. In the following epochs, the frequency effect became more focused in occipital channels for the deaf group, and then starting in the 350-400 ms epoch, both groups showed the typical centrally-distributed N400 frequency effect. In the final epochs after 500ms, both groups showed a reversed frequency effect (larger negativity for higher frequency words), especially in posterior electrodes, and this effect was larger in the deaf group. Such a reversed pattern in later epochs replicates the same reversed pattern found in the Dufau et al. study (2015). We speculate that this effect may be related to a sustained representation of the recently-seen word still maintained in visual cortex after the stimulus offsets, which interacts with frequency in the opposite direction as initial frequency effects - i.e. less familiar visual stimuli generating more positivity. This later effect may be easier to see in analyses where visual complexity is controlled for, explaining why it is not seen more often.

Both deaf and hearing readers showed fairly early effects of orthographic neighborhood density (OND), starting in the 150-200ms epoch for deaf readers and in the 200-250ms epoch for hearing readers. The OND effect persisted for hearing readers until the final epoch, with words from higher density orthographic neighborhoods producing more negative amplitudes. This is the typical pattern of neighborhood density effects (Holcomb et al., 2002) and is thought to reflect the activation of orthographic neighbors during word recognition. In most of the epochs the OND effect was larger for hearing than deaf readers. For deaf readers, the OND effect was smaller and shorter-lived, as it was no longer significant after the 350-400ms epoch. This group difference suggests that hearing readers activate orthographic neighborhoods to a greater extent during word recognition than deaf readers. This explanation fits with a recent priming study showing less lexical competition among deaf readers compared to hearing readers (Varga, Tóth, & Csépe, 2021). One possible reason for this group difference is that the hearing readers in our study had a larger vocabulary than the deaf readers, as evidenced by significantly better performance on the PPVT (see Table 1). Larger lexical neighborhoods could lead to larger OND effects. Another possible contributing factor to the group difference is that for hearing readers, there may be additional activation of phonological neighborhoods, which are automatically activated through orthographic-phonological connections (since phonological and orthographic neighborhoods are highly correlated in English). Thus, larger and longer-lasting OND effects for hearing readers could reflect the dual activation of orthographic and phonological lexical neighborhoods. This pattern also suggests that word recognition for hearing readers relies more on, or results in lengthier processing at, the level of whole-word lexical representations because there is more information encoded in these networks (whether through a larger vocabulary or the addition of phonological information). Additional phonological information may be beneficial because it represents a parallel format of lexical information that can be leveraged during word recognition. However, more items (from a larger vocabulary) or more phonological information may also be obstructive, leading to more coactivated information for the system to disambiguate.

Somewhat surprisingly, the concreteness effect did not reach significance for the hearing readers, although we observed the typical pattern of concrete words generating larger negative voltages than abstract words, and this effect would have been significant in the N400 time-window if a more confirmatory analysis strategy had been used. In contrast, the deaf readers showed widespread and significant effects of concreteness which were both larger and earlier than for the hearing readers. The concreteness effect started well before the N400 window where it is expected to first emerge (e.g., Kounios & Holcomb, 1994; West & Holcomb, 2000). That said, early effects of semantic variables, including concreteness, have been observed in some studies (e.g. Hauk et al., 2006). This difference between deaf and hearing readers suggests that during visual word recognition, deaf readers are accessing semantic information faster or to a greater extent than hearing readers. This explanation is in line with the hypothesis that deaf readers rely more on orthographic-to-semantic connections due to weaker phonological-to-semantic connections compared to hearing readers (e.g., Bélanger & Rayner, 2015; Emmorey and Lee, 2021). Further, larger concreteness effects for deaf readers may suggest that a greater portion of the word recognition process is leveraging information contained in semantic networks, rather than lexical networks. That is, during natural word recognition, where semantic context greatly informs the identity of individual words, all readers use semantic information. However, if deaf readers have sparser lexical networks, they might weight this information source more heavily than their hearing counterparts. One way to assess this hypothesis is to examine whether deaf readers rely more on morphological analysis than hearing readers, given that morphology provides an important degree of regularity across the mapping between printed words and their meanings (Rastle, 2019).

It is tempting to conclude that since deaf readers are accessing semantic information earlier than hearing readers, they are recognizing the words faster. Indeed, the deaf readers also showed earlier and shorter-lived effects of neighborhood density than hearing readers, as well as a pattern of lexical frequency results which was also shifted slightly earlier in time. This pattern would be consistent with faster recognition times, if deaf and hearing readers are recognizing words in exactly the same way. However, as argued here and elsewhere, this is unlikely to be the case. In addition, it is unclear whether there is an appropriate way to even measure at what point a word is completely recognized, but clearly a word must be recognized to some extent before an accurate behavioral response can be made. Further, while the concreteness effects for the deaf readers onset quite early, they do persist through the final epoch, suggesting the word recognition process is not fully resolving in early epochs. It is possible that deaf readers might recognize words both differently and earlier (at some point) than hearing readers. Currently, however, the evidence for faster responses in word recognition tasks by deaf readers is mixed, with some studies finding no group differences (Cripps et al., 2005; Brown and Brewer, 1996; Li et al., 2013) and others reporting faster decision times for deaf than hearing readers (Clark et al., 2016; Morford et al., 2017; Villwock et al., 2021).

We argue that the differences between deaf and hearing readers observed in the current study are due to fundamental differences in their lexical representations, which bias the manner in which skilled deaf and hearing adult readers recognize a word. With full access to spoken language phonology throughout their lives, the word representations of hearing readers of an alphabetic language involve close interconnections between orthographic and phonological representations. These connections serve as parallel systems of perceptual information which, interfacing with semantic information, recurrently work to find converging evidence for the best fitting word given the sensory input received (e.g., the BIAM of Grainger and Holcomb, 2009). For all readers, the system is dynamic, weighting different sources of information based on the constraints of the task at hand (Chen et al., 2015; Strijkers et al., 2015; Meade et al., 2018; Winsler et al., 2018).

For deaf readers, words have not been learned alongside extensive and automatic access to phonology, and thus their word representations contain less precise phonological information (McQuarrie and Parrila, 2009). This difference leads to smaller ortho-phonological lexically mediated effects during recognition (e.g. OND) because there is less lexical information to activate (and to inhibit). Conversely, this bias leads to larger semantically mediated effects (e.g. concreteness) since there is relatively more information within semantic networks, which are activated when reading text. This tradeoff is similar to the task effect tradeoff for spoken word recognition in Winsler et al. (2018) where during semantic categorization there were larger concreteness effects, and during lexical decision there were larger early effects of phonological neighborhood density. In that study, task constraints affected to what extent lexical variables impacted word recognition. The current study suggests that additionally, constraints on what information is available during the learning of written words affects the nature of how they are represented, which in turn is reflected by the effects of lexical variables during word recognition.

A direction for further research is to investigate whether the nature of the mapping between orthography and phonology impacts lexical representations and word recognition processes for deaf readers. English has a relatively opaque orthography, while other languages like Spanish have more transparent orthographies with regular sound-to-letter correspondences. Phonological representations may be accessed more readily by deaf readers of a transparent orthography (see Gutierrez-Sigut et al., 2017). However, transparent (shallow) orthographic systems are not acquired more easily by deaf readers than opaque (deep) systems (Clark et al., 2016), and some evidence indicates that skilled deaf readers do not rely on phonological codes even for a transparent orthography (Fariña et al., 2017).

It should be noted that there is evidence of differences in early visual processing and visual attention allocation between deaf and hearing individuals that are not necessarily related to reading or language (e.g. Bottari et al., 2011; Bavelier et al., 2000). It is possible these differences may help to explain some of the differences we observed between deaf and hearing readers in how they responded to lexical frequency and visual complexity. Deaf readers showed a slightly earlier frequency effect, and a stronger late reversal of the effect in occipital channels which may be evidence of a stronger effect of visual form familiarity in the deaf readers. Further, the deaf and hearing groups had opposing directions of the effect of visual complexity throughout the time epochs. Either of these differences could potentially be due to differences in the speed or nature of early visual processing in deaf individuals. However, these differences could additionally relate to the argument that deaf readers are weighting different information during word recognition – relying more on visual form and semantic information, and less on orthographic-phonological lexical information compared to hearing readers.

Conclusion

While deaf readers show many similarities to hearing readers in the ERP effects of lexical variables, there were some marked dissimilarities that are attributable to differences in how deaf and hearing readers may represent and recognize written words. Visual complexity showed small but opposite effects for the two groups, suggesting deaf readers may be using low-level visual information differently during word recognition, a possibility that deserves future investigation. The pattern of frequency effects was largely similar in the two groups, although there was some evidence that the time-course was shifted earlier for deaf readers, suggesting a faster progression through the initial stages of word recognition. Perhaps most interestingly, hearing readers showed larger and more widespread effects of orthographic neighborhood density, while deaf readers showed larger and more widespread effects of concreteness. We attribute these differences to hearing readers relying more on information in orthographic and phonological lexical networks, and deaf readers relying more on information in semantic networks.

Supplementary Material

Supp 1

References

  1. Andrews S, & Hersch J (2010). Lexical precision in skilled readers: Individual differences in masked neighbor priming. Journal of Experimental Psychology: General, 139(2), 299–318.
  2. Baayen RH, Davidson DJ, & Bates DM (2008). Mixed-effects modeling with crossed random effects for subjects and items. Journal of Memory and Language, 59(4), 390–412.
  3. Balota DA, Yap MJ, Hutchison KA, Cortese MJ, Kessler B, Loftis B, Neely JH, Nelson DL, Simpson GB, & Treiman R (2007). The English Lexicon Project. Behavior Research Methods, 39(3), 445–459.
  4. Bates D, Mächler M, Bolker B, & Walker S (2014). Fitting linear mixed-effects models using lme4. arXiv preprint arXiv:1406.5823.
  5. Bavelier D, Tomann A, Hutton C, Mitchell T, Corina D, Liu G, & Neville H (2000). Visual attention to the periphery is enhanced in congenitally deaf individuals. Journal of Neuroscience, 20(17), RC93. doi: 10.1523/JNEUROSCI.20-17-j0001.2000
  6. Benjamini Y, & Hochberg Y (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B (Methodological), 57(1), 289–300.
  7. Bélanger NN, Baum SR, & Mayberry RI (2012). Reading difficulties in adult deaf readers of French: Phonological codes, not guilty! Scientific Studies of Reading, 16(3), 263–285.
  8. Bélanger NN, Mayberry RI, & Rayner K (2013). Orthographic and phonological preview benefits: Parafoveal processing in skilled and less-skilled deaf readers. Quarterly Journal of Experimental Psychology, 66(11), 2237–2252.
  9. Bélanger NN, & Rayner K (2015). What eye movements reveal about deaf readers. Current Directions in Psychological Science, 24(3), 220–226.
  10. Bottari D, Caclin A, Giard MH, & Pavani F (2011). Changes in early cortical visual processing predict enhanced reactivity in deaf individuals. PLoS ONE, 6(9), e25607.
  11. Brown PM, & Brewer LC (1996). Cognitive processes of deaf and hearing skilled and less skilled readers. The Journal of Deaf Studies and Deaf Education, 1(4), 263–270.
  12. Brysbaert M, & New B (2009). Moving beyond Kučera and Francis: A critical evaluation of current word frequency norms and the introduction of a new and improved word frequency measure for American English. Behavior Research Methods, 41(4), 977–990.
  13. Caravolas M, Hulme C, & Snowling MJ (2001). The foundations of spelling ability: Evidence from a 3-year longitudinal study. Journal of Memory and Language, 45(4), 751–774.
  14. Cates DM, Traxler MJ, & Corina DP (2021). Predictors of reading comprehension in deaf and hearing bilinguals. Applied Psycholinguistics, 43(1), 81–123.
  15. Chen Y, Davis MH, Pulvermüller F, & Hauk O (2015). Early visual word processing is flexible: Evidence from spatiotemporal brain dynamics. Journal of Cognitive Neuroscience, 27(9), 1738–1751.
  16. Clark MD, Hauser PC, Miller P, Kargin T, Rathmann C, Guldenoglu B, Kubus O, Spurgeon E, & Israel E (2016). The importance of early sign language acquisition for deaf readers. Reading & Writing Quarterly, 32(2), 127–151. doi: 10.1080/10573569.2013.878123
  17. Costello B, Caffarra S, Fariña N, Duñabeitia JA, & Carreiras M (2021). Reading without phonology: ERP evidence from skilled deaf readers of Spanish. Scientific Reports, 11(1), 1–11.
  18. Cripps JH, McBride KA, & Forster KI (2005). Lexical processing with deaf and hearing: Phonology and orthographic masked priming. Journal of Second Language Acquisition and Teaching, 12, 31–44.
  19. Dehaene S, Cohen L, Sigman M, & Vinckier F (2005). The neural code for written words: A proposal. Trends in Cognitive Sciences, 9(7), 335–341.
  20. Delorme A, & Makeig S (2004). EEGLAB: An open-source toolbox for analysis of single-trial EEG dynamics. Journal of Neuroscience Methods, 134, 9–21.
  21. Dufau S, Grainger J, Midgley KJ, & Holcomb PJ (2015). A thousand words are worth a picture: Snapshots of printed-word processing in an event-related potential megastudy. Psychological Science, 26(12), 1887–1897.
  22. Dunn LM, & Dunn DM (2007). PPVT-4: Peabody Picture Vocabulary Test. Pearson Assessments.
  23. Emmorey K, & Lee B (2021). The neurocognitive basis of skilled reading in prelingually and profoundly deaf adults. Language and Linguistics Compass, 15(2), e12407.
  24. Emmorey K, Midgley KJ, Kohen CB, Sehyr ZS, & Holcomb PJ (2017). The N170 ERP component differs in laterality, distribution, and association with continuous reading measures for deaf and hearing readers. Neuropsychologia, 106, 298–309.
  25. Emmorey K, Winsler K, Midgley KJ, Grainger J, & Holcomb PJ (2020). Neurophysiological correlates of frequency, concreteness, and iconicity in American Sign Language. Neurobiology of Language, 1(2), 249–267.
  26. Fariña N, Duñabeitia JA, & Carreiras M (2017). Phonological and orthographic coding in deaf skilled readers. Cognition, 168, 27–33. doi: 10.1016/j.cognition.2017.06.015
  27. Frost R (1998). Toward a strong phonological theory of visual word recognition: True issues and false trails. Psychological Bulletin, 123(1), 71–99.
  28. Grainger J, & Holcomb PJ (2009). Watching the word go by: On the time-course of component processes in visual word recognition. Language and Linguistics Compass, 3(1), 128–156.
  29. Gutierrez-Sigut E, Vergara-Martínez M, & Perea M (2017). Early use of phonological codes in deaf readers: An ERP study. Neuropsychologia, 106, 261–279. doi: 10.1016/j.neuropsychologia.2017.10.006
  30. Gutierrez-Sigut E, Vergara-Martínez M, & Perea M (2022). The impact of visual cues during visual word recognition in deaf readers: An ERP study. Cognition, 218, 104938.
  31. Hauk O, Davis MH, Ford M, Pulvermüller F, & Marslen-Wilson WD (2006). The time course of visual word recognition as revealed by linear regression analysis of ERP data. NeuroImage, 30(4), 1383–1400.
  32. Hirshorn EA, Dye MWG, Hauser P, Supalla TR, & Bavelier D (2015). The contribution of phonological knowledge, memory, and language background to reading comprehension in deaf populations. Frontiers in Psychology, 6.
  33. Holcomb PJ, Grainger J, & O’Rourke T (2002). An electrophysiological study of the effects of orthographic neighborhood size on printed word perception. Journal of Cognitive Neuroscience, 14(6), 938–950.
  34. Kiyonaga K, Grainger J, Midgley K, & Holcomb PJ (2007). Masked cross-modal repetition priming: An event-related potential investigation. Language and Cognitive Processes, 22(3), 337–376.
  35. Kounios J, & Holcomb PJ (1994). Concreteness effects in semantic processing: ERP evidence supporting dual-coding theory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20(4), 804.
  36. Kuznetsova A, Brockhoff PB, & Christensen RH (2017). lmerTest package: Tests in linear mixed effects models. Journal of Statistical Software, 82, 1–26.
  37. Lederberg AR, Branum-Martin L, Webb M, Schick B, Antia S, Easterbrooks SR, & Connor CM (2019). Modality and interrelations among language, reading, spoken phonological awareness, and fingerspelling. The Journal of Deaf Studies and Deaf Education, 24(4), 408–423.
  38. Li D, Gao K, Wu X, Chen X, Zhang X, Li L, & He W (2013). Deaf and hard of hearing adolescents’ processing of pictures and written words for taxonomic categories in a priming task of semantic categorization. American Annals of the Deaf, 158(4), 426–437.
  39. Markwardt FC (1989). Peabody Individual Achievement Test–Revised: PIAT-R. Circle Pines, MN: American Guidance Service.
  40. Mayberry RI, del Giudice AA, & Lieberman AM (2011). Reading achievement in relation to phonological coding and awareness in deaf readers: A meta-analysis. Journal of Deaf Studies and Deaf Education, 16(2), 164–188.
  41. McQuarrie L, & Parrila R (2009). Phonological representations in deaf children: Rethinking the “functional equivalence” hypothesis. Journal of Deaf Studies and Deaf Education, 14(2), 137–154.
  42. Meade G, Grainger J, & Holcomb PJ (2019). Task modulates ERP effects of orthographic neighborhood for pseudowords but not words. Neuropsychologia, 129, 385–396.
  43. Morford JP, Occhino-Kehoe C, Piñar P, Wilkinson E, & Kroll JF (2017). The time course of cross-language activation in deaf ASL–English bilinguals. Bilingualism: Language and Cognition, 20(2), 337–350. doi: 10.1017/S136672891500067X
  44. Pelli DG, Burns CW, Farell B, & Moore-Page DC (2006). Feature detection and letter identification. Vision Research, 46(28), 4646–4674.
  45. Rastle K (2019). The place of morphology in learning to read in English. Cortex, 116, 45–54. doi: 10.1016/j.cortex.2018.02.008
  46. Sarchet T, Marschark M, Borgna G, Convertino C, Sapere P, & Dirmyer R (2014). Vocabulary knowledge of deaf and hearing postsecondary students. Journal of Postsecondary Education and Disability, 27(2), 161–176.
  47. Sehyr Z, & Emmorey K (2021). Assessing the contribution of lexical quality and sign language variables to reading comprehension in deaf adult ASL signers. Paper presented at the Society for the Scientific Study of Reading, July, virtual meeting.
  48. West WC, & Holcomb PJ (2000). Imaginal, semantic, and surface-level processing of concrete and abstract words: An electrophysiological investigation. Journal of Cognitive Neuroscience, 12(6), 1024–1037.
  49. Winsler K, Midgley KJ, Grainger J, & Holcomb PJ (2018). An electrophysiological megastudy of spoken word recognition. Language, Cognition and Neuroscience, 33(8), 1063–1082.
  50. Van Heuven WJ, Mandera P, Keuleers E, & Brysbaert M (2014). SUBTLEX-UK: A new and improved word frequency database for British English. Quarterly Journal of Experimental Psychology, 67(6), 1176–1190.
  51. Van Petten C, & Kutas M (1990). Interactions between sentence context and word frequency in event-related brain potentials. Memory & Cognition, 18(4), 380–393.
  52. Varga V, Tóth D, & Csépe V (2021). Lexical competition without phonology: Masked orthographic neighbor priming with deaf readers. The Journal of Deaf Studies and Deaf Education, 27(2), 151–165.
  53. Villwock A, Wilkinson E, Piñar P, & Morford JP (2021). Language development in deaf bilinguals: Deaf middle school students co-activate written English and American Sign Language during lexical processing. Cognition, 211, 104642. doi: 10.1016/j.cognition.2021.104642
  54. Yarkoni T, Balota D, & Yap M (2008). Moving beyond Coltheart’s N: A new measure of orthographic similarity. Psychonomic Bulletin & Review, 15(5), 971–979.
