Abstract
Previous studies with deaf adults reported reduced N170 waveform asymmetry to visual words, a finding attributed to reduced phonological mapping in left-hemisphere temporal regions compared to hearing adults. An open question is whether this pattern indeed results from reduced phonological processing or from more general neurobiological adaptations in the visual processing of deaf individuals. Deaf ASL signers and hearing nonsigners performed a same-different discrimination task with visually presented words, faces, or cars while scalp EEG time-locked to the onset of the first item in each pair was recorded. For word recognition, we replicated the typical left-lateralized N170 in hearing participants and the reduced left-sided asymmetry in deaf participants. The groups did not differ in word discrimination accuracy, but better orthographic skill was associated with a larger right-hemisphere N170 only for deaf participants. Face recognition was characterized by unique N170 signatures in each group, and deaf individuals exhibited superior face discrimination performance. Neither the laterality effects nor the discrimination advantage generalized to the N170 responses to cars, confirming that deaf signers are not inherently less lateralized in their electrophysiological responses to visual stimuli and, critically, supporting the phonological mapping hypothesis. The P1 was attenuated in deaf participants compared to hearing participants, but in both groups the P1 discriminated between highly learned, familiar categories (words and faces) and a less familiar category (cars). The distinct electrophysiological signatures to words and faces reflect experience-driven adaptations that do not generalize to object recognition.
Keywords: N170, ERPs, word recognition, face recognition, deafness, American Sign Language
1. Introduction
A large body of research using Event-Related Potentials (ERPs) has suggested that the early differentiation between words and other categories of visual stimuli, such as faces or cars, is characterized by a negative-going waveform peaking at around 170ms post-stimulus onset (e.g., Bentin, Allison, Puce, Perez, & McCarthy, 1996; Bentin, Mouchetant-Rostaing, Giard, Echaillier, & Pernier, 1999; Curran, Tanaka, & Weiskopf, 2002; Rossion, Joyce, Cottrell, & Tarr, 2003). It has been suggested that this N170, or some variant of it, might serve as a neural marker of perceptual expertise with a given stimulus category indexing learned category sensitivity in the ventral occipito-temporal cortex (Bentin et al., 1996; Rossion, Curran, & Gauthier, 2002; Tanaka & Curran, 2001).
In hearing individuals, the N170 to visually presented words exhibits a unique spatial distribution and stimulus-specific characteristics: compared to visually matched stimuli, such as strings of ASCII symbols, it is larger in the left hemisphere (LH) than the right hemisphere (RH) over occipital-temporal regions (for recent examples, see Emmorey, Midgley, Kohen, Sevcikova Sehyr, & Holcomb, 2017; Maurer, Rossion, & McCandliss, 2008; Mercure, Cohen Kadosh, & Johnson, 2011; Mercure, Dick, Halit, Kaufman, & Johnson, 2008). This N170 has been associated with automatic orthographic processing in accomplished adult readers (Dundas, Plaut, & Behrmann, 2014). The amplitude and left-sided asymmetry of the N170 increase with reading experience in developing hearing readers and are thought to reflect expertise in script-specific processing (Maurer & McCandliss, 2007; Maurer et al., 2008).
One explanation for the left-lateralization of the N170 in word recognition is the phonological mapping hypothesis which proposes that leftward asymmetries emerge due to the mapping between orthography and phonology in LH occipital-temporal (auditory) regions during reading acquisition (McCandliss & Noble, 2003). For example, among developing readers, greater phonological awareness was associated with greater left-lateralized N170 (Sacchi & Laszlo, 2016). However, congenitally deaf individuals may develop weaker or coarser connections between orthography and phonology given their reduced access to auditory spoken language. This may, in turn, lead to decreased involvement of some LH regions during visual word recognition (Emmorey et al., 2017; Neville, Kutas, & Schmidt, 1982a, 1984). The extent of phonological mapping processes during reading in deaf readers, and how necessary such mapping is for skilled reading, remains a controversial issue (Allen et al., 2009; McQuarrie & Parrila, 2009; Wang, Trezek, Luckner, & Paul, 2008).
Given the controversy of this issue, perhaps a broader question should be addressed: Are deaf readers inherently less lateralized in their N170 response to visual words? If so, is this due to reduced phonological mapping processes in left-hemisphere regions, or does the reduced asymmetry instead expose more general neurobiological adaptations in visual processing that may occur due to deafness and/or life-long sign language use? To shed light on this debate, we examined early electrophysiological responses (N170, P1) to visually presented single words and compared them with responses to other highly familiar and learned objects that do not require sound assembly (i.e., faces, cars) in a group of deaf signers and hearing nonsigners. If deaf signers were found to exhibit distinct N170 signatures to visual stimuli other than words, this would weaken the phonological mapping hypothesis account and instead implicate a more general mechanism influencing visual recognition processes.
In addition, deaf signers must encode faces for linguistic information in American Sign Language (ASL) (e.g., adverbial markers, conditional clauses, questions; see Sandler & Lillo-Martin, 2006). As a result, deaf and hearing individuals might also exhibit distinct neurophysiological patterns for faces driven by such experience. The third, control category of cars should therefore serve as a further baseline if we observe such a pattern. The core question of this study is: what are the experience-driven effects on processing highly learned visual categories?
1.1. Electrophysiological responses to visual words in deaf individuals
A recent study by Emmorey et al. (2017) provided evidence for experience-driven adaptations for orthographic stimuli by comparing the N170 response in deaf and hearing adults, matched on reading skill, as they made familiarity judgments to visually presented words and symbol strings. Whereas hearing participants produced larger N170s for words over LH compared to RH occipital-temporal sites, deaf participants showed a smaller LH asymmetry and only at occipital sites. Symbol strings in both hearing and deaf participants yielded similar and more bilaterally symmetrical N170 responses. The authors interpreted these results as further evidence for the phonological mapping hypothesis (McCandliss & Noble, 2003; Sacchi & Laszlo, 2016).
However, an alternative explanation, not considered by Emmorey et al. (2017), is that deaf individuals are inherently less lateralized in their N170 response to all types of visual stimuli due to more general neurobiological changes in visual processing that occur due to congenital deafness (e.g., Bavelier et al., 2001; Bavelier, Dye, & Hauser, 2006; Bavelier et al., 2000; Bosworth & Dobkins, 2002; Neville & Lawson, 1987a; Neville, Schmidt, & Kutas, 1983; Scott, Karns, Dow, Stevens, & Neville, 2014). For example, modulation of the N170 could arise as a result of experience-specific adaptations in the allocation of visual attention in deaf individuals. However, support for such an explanation would critically depend on the neural response to other classes of visual stimuli. In the Emmorey et al. (2017) study, such control stimuli were symbol strings, which did not show large differences in N170 asymmetry between the deaf and hearing groups. Although both groups rated these symbols as familiar, strings of ASCII symbols might not be the best control for testing for differences in lateral asymmetries because, unlike letters, such symbols are rarely, if ever, presented as a series of five side-by-side characters to form new and more complex units. Thus, such symbol strings are novel, compared to the highly overlearned visual words, and may not be optimal for eliciting an experience-driven N170. To ascertain whether deaf individuals exhibit a distinct N170 profile for all types of highly familiar visual stimuli, or whether the altered N170 pattern pertains only to the recognition of orthographic material, another category of familiar visual stimuli that does not require sound assembly, such as faces or cars, ought to be included. If a similar reduction in N170 asymmetry was found for other visual objects in deaf individuals, this would imply more general changes in early visual recognition and undermine the conclusion that group differences in the N170 response to words are due to a different strength of phonological mapping per se.
1.2. Electrophysiological responses to faces in deaf individuals
Like word recognition, face recognition has been associated with a category-specific N170 in hearing adults, with a larger and earlier peak than that found for other visual object categories (Bentin et al., 1996). The N170 for faces tends to be either larger over right compared to left occipital-temporal regions (Maurer et al., 2008; Mercure et al., 2008; Rossion et al., 2003), or bilaterally symmetrical over these regions (Maurer et al., 2008; Mercure et al., 2008; Mitchell, 2017). Importantly, face-specific N170 effects have been widely argued to reflect the near-universal expertise that sighted individuals have with faces (e.g., Bentin et al., 1996). Comparing deaf and hearing groups on N170 to faces offers an invaluable opportunity to examine experience-driven effects on the early electrophysiological responses to visual objects.
Although both deaf and hearing individuals have extensive experience processing human faces for affective information, for deaf individuals there are additional demands of encoding faces for linguistic information in ASL (or in visible speech). A variety of facial expressions convey distinct grammatical or prosodic information in ASL, e.g., brow raising marks conditional clauses (Dachkovsky & Sandler, 2009; Liddell, 1980; Sandler & Lillo-Martin, 2006). Only a few studies have tested whether such additional demands alter the characteristics of the N170 response to faces in deaf signers. In two such studies examining the N170 to faces vs. other objects, sign language experience and/or deafness did not affect the overall N170 pattern: when participants performed same-different judgments on halves of faces, the N170 was larger in the RH than the LH for both deaf ASL signers and hearing nonsigners (Mitchell, 2017; Mitchell, Letourneau, & Maslin, 2013). The N170 responses to faces and to other familiar objects during a probe detection task revealed no lateral asymmetries in either group and no overall group differences in N170 laterality or distribution (Mitchell, 2017; Mitchell et al., 2013). Based on these findings, the electrophysiology of early face processing would be expected to be similar between deaf and hearing participants. For this reason, comparing deaf and hearing individuals on the N170 to faces is useful for determining whether the differences in word-based N170 asymmetries between deaf and hearing individuals are due to altered sensory experience more generally, or are better explained by reduced phonological mapping.
However, we are not aware of any study that has directly compared early electrophysiological responses to faces, or to other types of non-symbolic objects, with N170 responses to words in deaf signing populations. We hypothesized that if deaf adults exhibit distinct laterality patterns of the N170 to faces compared to hearing nonsigners, it would suggest that the sensory experiences of deaf adults, and the functional pressures on visual recognition, alter the electrophysiology of early visual processing of faces. A lack of N170 differences for faces between groups would suggest that the N170 to faces remains unaffected by the sensory experiences or the functional pressures on visual recognition in deaf adults.
1.3. Inclusion of cars as a third category of visual stimuli
Because of the possible experience-dependent influences on face perception for deaf signers, we also included a third stimulus category, cars. Cars (a) are highly familiar to both deaf and hearing people who typically are not car experts, (b) have no direct connection with speech sounds, and (c) are more familiar than the symbol strings that were used as the baseline stimuli by Emmorey et al. (2017). A comparable pattern of N170 lateralization for cars between groups, and replication of the Emmorey et al. (2017) results with words, would suggest that the weaker N170 asymmetry for words in deaf participants was indeed due to differences in prior experience in learning to map letters to sounds, supporting the phonological mapping hypothesis. Alternatively, different N170 lateralization patterns to cars (and faces) between deaf and hearing participants would suggest that the N170 asymmetry is altered in deaf participants due to more general changes in visual processing, and would cast some doubt on the phonological mapping hypothesis as the locus of word-based N170 asymmetries.
1.4. Are hemisphere effects specific to the N170 component?
To examine whether any hemisphere effects were specific to the N170 component, we also examined the P1 component for words, faces and cars. The P1 component is generally the first positive ERP component with an onset within 60–80 ms, typically peaks between 100–130 ms over lateral occipital cortex and tends to be bilaterally distributed. P1 is an electrophysiological response elicited by visual stimuli that is sensitive to variations in stimulus parameters (Luck, 2005) and is modulated by selective attention (Hillyard, Vogel, & Luck, 1998). P1 tends to be larger (i.e., more positive) for faces than for words (Dundas et al., 2014), but patterns of hemispheric specialization observed for the subsequent N170 component have not been reported for P1 (Dundas et al., 2014).
It is unclear whether the P1 might be modulated by stimulus category or by hearing status. In some visual tasks, an enhanced P1 in deaf individuals has been observed (Bottari, Caclin, Giard, & Pavani, 2011), and such differences have tended to be interpreted as reflecting a difference in early sensory experience or enhanced perceptual processing in deaf individuals. Other studies reported no differences in P1 amplitude or scalp distribution between deaf and hearing individuals (Armstrong, Neville, Hillyard, & Mitchell, 2002; Neville & Lawson, 1987a, 1987b). In contrast, Emmorey et al. (2017) reported an attenuated P1 in deaf individuals compared to hearing individuals for orthographic stimuli (words, symbol strings), although their study did not compare the P1 for words against other types of visual stimuli. Thus, unlike for the N170, we do not have specific predictions for the P1; given that category specificity characterizes the N170 but not the P1, we do not expect the hemispheric lateralization patterns observed for the N170 to extend to the P1 component.
1.5. Is there a relationship between ERP responses to words and orthographic skill?
We examined the relationship between the amplitude of both the N170 and P1 components in each hemisphere and orthographic (spelling) skill in deaf and hearing individuals. The RH has been implicated in coarser orthographic processing during word recognition, perhaps reflecting less precise orthographic representations (for evidence with hearing readers, see Laszlo & Sacchi, 2015). The reduced N170 asymmetry to visual words in deaf participants could be due to the fact that the RH occipital-temporal regions play a distinct role in orthographic processing for deaf readers; for example, deaf readers could process words as visual objects, or have coarser orthographic representations. If so, we should see a correlation between RH amplitudes and better spelling skill in deaf readers, but not necessarily in hearing readers.
To recap, the overall aim was to establish an explanation for the reduced hemispheric asymmetry of electrophysiological responses to words in deaf individuals, with a specific focus on the N170 as the hallmark of expert processing of a familiar stimulus category. Our hypotheses were as follows. If we found the typical LH asymmetry in hearing participants but a reduced LH asymmetry in deaf participants to words but not to other types of highly learned visual stimuli, such as faces or cars, the differences in lateral asymmetries could be explained by phonological mapping processes, as initially argued by Emmorey et al. (2017). If, however, group differences in the N170 were also present for cars, this hypothesis would be refuted and differences in lateral asymmetries could be attributed to general changes in visual processing that occur with congenital deafness. In addition, faces present a special category for deaf ASL signers; thus, if a group difference in the N170 to faces was observed, it might be ascribed to deafness and/or life-long signing experience. An analysis of the P1 component was included to verify whether any effects observed on the N170 may have been an epiphenomenon of earlier visual processing stages.
2. Method
2.1. Participants
Twenty-four deaf ASL signers (M age = 30, SD = 6, age range 21–43, 12 female) participated in the experiment. These participants were congenitally deaf with severe to profound hearing loss and acquired ASL from deaf parents from birth. All indicated ASL as their primary and preferred language and completed, on average, 6.3 years of college. Thirty hearing nonsigners (M age = 24, SD = 5, age range 20–36, 15 female) participated in the experiment; all were native monolingual English speakers who completed on average 4 years of college. The difference in age between groups was significant, t (52) = 3.6, p = .001, SE = 1.45, 95% CI [2.28, 8.10]. All participants had normal or corrected-to-normal vision and no history of neurological disorders. All participants were right-handed as determined by a modified version of the Edinburgh Handedness Questionnaire (Oldfield, 1971) using a scale from 100 (extreme right-handed) to −100 (extreme left-handed). The mean handedness score for deaf participants was 87 (SD = 26) and the mean for hearing participants was 90 (SD = 12); this difference was not statistically significant (p = .547).
2.2. Stimuli
The stimuli (words, faces, and cars) and the experimental design were adapted from recent studies that examined the processing of these stimulus categories by hearing participants (Behrmann & Plaut, 2020; Collins, Dundas, Gabay, Plaut, & Behrmann, 2017; Dundas et al., 2014; Dundas, Plaut, & Behrmann, 2015). The face stimuli consisted of 48 pairs of faces (24 male and 24 female) from the Face-Place Database project (2008, Tarr, M., www.wiki.cnbc.cmu.edu/Face_Place). Face stimuli were all forward-facing with neutral expressions and hair removed, and pairs were matched on gender to increase discrimination difficulty. Faces measured 3.35 inches wide and 5.04 inches tall, which, at a viewing distance of 60 inches, yielded visual angles of 3.2 by 4.8 degrees. The faces were presented in grayscale on a black screen.
The word stimuli were 24 pairs of words presented in gray Arial font on a black screen. Word stimuli measured on average 3.35 inches wide and 1.68 inches tall, which, at 60 inches, yielded visual angles of 3.2 and 1.6 degrees, respectively (to the center of the word). Word pairs were constructed so that half of the pairs differed in the second letter position (e.g., posh-push) and half differed in the third letter position (e.g., cord-cold). The mean log10 word frequency in English was 2.74 (range = 1 to 5.2; SUBTLEX-US). The face and word categories were matched on the difficulty of discrimination (Dundas, Plaut, & Behrmann, 2013).
Twenty-four pairs of car images were presented in gray scale on a black screen at a ¾ left-facing view. The car stimuli were approximately 5.85 inches wide and 3.35 inches tall, which, at a 60 inch viewing distance, resulted in visual angles of 5.57 and 3.2 degrees respectively.
2.3. Procedure
All stimuli were presented on a 24-inch LCD monitor (ASUS VG248) set to resolution of 1920×1080 pixels with a refresh rate of 100 Hz and the monitor was located 60 inches directly in front of the participant.
Faces, cars, and words were presented in separate blocks counterbalanced across participants. There were a total of 192 trials in each block and trials were pseudorandomized for each participant. Approximately every 40 trials (2–3 minutes) participants were given a brief rest break. In each block, every stimulus item was seen four times in the central position and twice in each eccentric position: each stimulus was presented once in the left and once in the right visual field on repeated trials, and once in each visual field on unrepeated trials.
Each trial began with a fixation cross displayed in the center of the screen for a duration varying between 1500–2500ms. Following the offset of the fixation cross, a centrally presented stimulus appeared for 750ms, immediately followed by an intermediate central fixation cross for 150ms and then a second stimulus was presented for 150ms in either the left or right visual field. The lateral stimuli were centered at a visual angle of 5.3 degrees from the central fixation cross, which remained on the screen during presentation of the lateralized stimuli to prevent gaze shifting towards the lateralized image. Lateralized stimuli were used to increase the difficulty of the task and obtain a measure of behavioral performance, but only ERPs recorded to the centrally-presented stimuli (first item in the pair) were included in the analysis. The fixation cross disappeared with the eccentric stimulus offset and a black screen followed during which the participant recorded their response on a game pad.
Participants made same-different judgments by pressing one of two keys to indicate whether the second stimulus was identical to the first (50% of trials) or not. The same-different response keys were counterbalanced across participants. Immediately following the key response, the fixation cross re-appeared and initiated the next trial. To reduce blinks during trial presentations, the color of the initial fixation cross changed from purple to white after a variable interval. Participants were required to maintain fixation at all times; the presence of a central target and the uncertain location of the probe helped ensure this. A short block consisting of 10 trials was used for practice. Deaf participants received instructions in ASL and written English; hearing participants received instructions in written English.
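For illustration, the following Python sketch summarizes the trial structure described above. The timing constants follow the Procedure, but the function and variable names and the simplified randomization scheme are our own assumptions; the actual experiment balanced repetitions per stimulus and visual field more strictly than shown here.

import random

# Trial timing (ms), as described in the Procedure.
FIXATION_MS = (1500, 2500)   # variable initial fixation
CENTRAL_STIM_MS = 750        # first (central) stimulus
INTERSTIM_FIX_MS = 150       # intermediate fixation
LATERAL_STIM_MS = 150        # second (lateralized) stimulus

def make_block(stim_pairs, n_trials=192, seed=0):
    """Build a pseudorandomized list of same/different trials for one block."""
    rng = random.Random(seed)
    trials = []
    for i in range(n_trials):
        first, alternate = rng.choice(stim_pairs)
        same = (i % 2 == 0)                    # 50% identical pairs
        field = rng.choice(["left", "right"])  # uncertain probe location
        trials.append({
            "central": first,                          # shown for 750 ms
            "lateral": first if same else alternate,   # shown for 150 ms
            "same": same,
            "visual_field": field,
            "fixation_ms": rng.randint(*FIXATION_MS),
        })
    rng.shuffle(trials)
    return trials

# Example: a short block built from two of the word pairs described above.
print(make_block([("posh", "push"), ("cord", "cold")], n_trials=4))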
Additionally, participants completed the Spelling Recognition Test (Andrews & Hersch, 2010) as a measure of orthographic skill. The test contains 88 items, half correctly spelled and half misspelled. Misspellings change one to three letters of the word and often preserve the pronunciation of the base word (e.g., addmission, seperate). Items are printed in columns, and participants are instructed to circle items they think are incorrectly spelled. The recognition test score is the number of correctly classified items, both hits and correct rejections. The deaf group (M = 76, SD = 7) and hearing group (M = 74, SD = 7) did not differ in their spelling recognition ability, t (52) < 1, p = .502.
2.4. EEG recording
Participants were seated in a comfortable chair in a sound attenuated backlit room. An electro-cap fitted with tin electrodes was used to record continuous electroencephalogram (EEG) from 29 sites on the scalp (see Figure 1). Four additional electrodes were attached: one below the left eye (LE, to monitor for vertical eye movement/blinks), one to the right of the right eye (HE, to monitor for horizontal eye movements), one over the left mastoid (A1, reference), and one over the right mastoid (A2, recorded actively to monitor for differential mastoid activity). Although the number of electrodes is relatively small, we had sufficient coverage over the typical posterior sites implicated in visual processing. All electrode impedances were reduced below 5 kΩ (eye electrodes < 10 kΩ). The EEG signal was amplified by NeuroScan Synamp RT amplifier with a bandpass of DC to 200 Hz, and was continuously digitized at a sampling rate of 500 Hz (22 bit A/D).
Figure 1.
The modified 10–20 system electrode montage used in this study. The four sites used in the average reference data analyses are circled.
2.5. ERP analysis
The signal was high-pass filtered at 0.1 Hz and low-pass filtered at 20 Hz offline (see Emmorey et al., 2017). As is standard for studies of the N170, all scalp sites were re-referenced offline to the average of all 29 scalp electrodes; an average reference is considered more objective and appropriate than a mastoid reference for examining the N170 (Joyce & Rossion, 2005; Wang et al., 2019). Trials with horizontal or vertical eye movements between 100 and 600 ms were rejected (< 7% of trials); the threshold for removing ocular artifacts was 50–75 microvolts. Epochs were baseline corrected over a 100 ms pre-stimulus interval. ERP data were quantified by calculating mean amplitudes within the following latency windows: the N170 amplitude was measured between 120–240 ms for all stimulus types at four lateral posterior sites (T5, O1, T6, O2; see Figure 1), and the P1 amplitude was measured between 60–120 ms. We selected the same time windows as Emmorey et al. (2017) to permit comparisons between previous and current findings. For each epoch, the average amplitude was entered into a four-way mixed-design ANOVA with repeated-measures factors of Stimulus type (words vs. faces vs. cars), Hemisphere (left vs. right), and Anteriority (temporal vs. occipital), and a between-subjects factor of Group (deaf vs. hearing). The Greenhouse-Geisser correction was applied to all repeated-measures factors with more than one degree of freedom in the numerator.
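For readers wishing to implement a comparable pipeline, the following MNE-Python sketch illustrates the preprocessing and mean-amplitude steps described above. The file name, trigger codes, and channel labels are placeholders, and the sketch is an approximation of the analysis described in the text rather than the code actually used in the study.

import mne

POSTERIOR = ["T5", "O1", "T6", "O2"]               # sites analyzed for N170/P1
N170_WIN, P1_WIN = (0.120, 0.240), (0.060, 0.120)  # latency windows in seconds

# Placeholder file name; the study recorded with a NeuroScan SynAmps RT system.
raw = mne.io.read_raw_cnt("sub01.cnt", preload=True)
raw.filter(l_freq=0.1, h_freq=20.0)                # offline band-pass, as in the text
raw.set_eeg_reference("average")                   # average reference over scalp sites

events, _ = mne.events_from_annotations(raw)
event_id = {"words": 1, "faces": 2, "cars": 3}     # hypothetical trigger codes
epochs = mne.Epochs(raw, events, event_id, tmin=-0.1, tmax=0.6,
                    baseline=(None, 0),            # 100 ms pre-stimulus baseline
                    reject=dict(eeg=75e-6),        # artifact threshold in volts
                    preload=True)

def mean_amp(evoked, window, picks):
    """Mean amplitude (microvolts) within a latency window at the given sites."""
    data = evoked.copy().pick(picks).crop(*window).data
    return data.mean() * 1e6

for condition in event_id:
    evoked = epochs[condition].average()
    print(condition,
          "N170:", round(mean_amp(evoked, N170_WIN, POSTERIOR), 2),
          "P1:", round(mean_amp(evoked, P1_WIN, POSTERIOR), 2))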
3. Results
3.1. Same-different discrimination accuracy
Deaf and hearing participants did not differ on task accuracy for word discrimination (deaf: 83%, hearing: 80%, F (1, 52) = 2.4, p = .13) or car discrimination (deaf: 88%, hearing: 87%, F < 1, p = .39). However, deaf participants performed significantly better than hearing participants in face discrimination (deaf: 84%, hearing: 78%, F (1, 52) = 7.3, p = .009). Response time analyses revealed no significant effects or interactions (all p values > .05).
3.2. ERP results: 120–240 ms (N170 epoch)
3.2.1. Overall Results
The largest N170 response was to words, whereas the smallest N170 was to faces (main effect of Stimulus type, F (2, 104) = 74, p < .001). Overall, the N170 was also more negative over temporal than occipital sites (main effect of Anteriority, F (1, 52) = 54, p < .001), and the LH was more negative than the RH (main effect of Hemisphere, F (1, 52) = 9.1, p = .004). There were also a three-way Stimulus × Hemisphere × Anteriority interaction, F (2, 104) = 9.8, p < .001, and a four-way Stimulus × Hemisphere × Anteriority × Group interaction, F (2, 104) = 3.5, p = .034. To better understand these interactions, we conducted separate follow-up analyses (Group × Hemisphere × Anteriority) for each stimulus category (see Figure 2). Data were normally distributed (Shapiro-Wilk: all p values ≥ .05).1
Figure 2.
A) Grand mean ERPs from four posterior electrode sites for deaf and hearing participants for words, faces and cars. B) Average N170 amplitude for deaf and hearing participants (first column) and average P1 amplitude for the three stimuli types in the left hemisphere (light grey) and right hemisphere (dark grey) temporal (T) and occipital (O) sites. Error bars depict 95% CI. Negative amplitude is plotted upwards. C) Voltage maps during the P1 and N170 time epochs for deaf and hearing participants.
3.2.2. N170 to Words
The N170 response was more negative for words in the LH than the RH, F (1, 52) = 54, p < .001, and was smallest at the RH occipital sites compared to the other sites, as indicated by a significant two-way Anteriority × Hemisphere interaction, F (1, 52) = 18.4, p < .001. Overall, the two groups did not differ in N170 amplitude to words (F (1, 52) < 1, p = .44); however, the hemispheric asymmetry was reduced for deaf participants, as indicated by a significant two-way Group × Hemisphere interaction, F (1, 52) = 4.6, p = .036. We calculated the laterality difference for each group by subtracting the RH amplitude from the LH amplitude to illustrate the asymmetry difference between the groups. Hearing participants showed a greater laterality difference (Mdiff = −2 μV; range 6.6 μV; SD = 1.7) than the deaf participants (Mdiff = −1.08 μV; range 4.8 μV; SD = 1.3), t (52) = −2.2, p = .036, 95% CI [−1.7; −.06] (see Figure 2B), confirming that deaf individuals exhibited a reduced hemispheric asymmetry of the N170 to words. No other significant interactions were observed.
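To make the laterality computation concrete, the following sketch (with simulated amplitudes, not the values observed in the study) computes the LH minus RH difference per participant and compares the two groups with an independent-samples t-test, as in the analysis above.

import numpy as np
from scipy import stats

# Simulated per-participant mean N170 amplitudes (microvolts) to words,
# averaged over temporal and occipital sites within each hemisphere.
rng = np.random.default_rng(0)
lh_hearing, rh_hearing = rng.normal(-4.0, 1.5, 30), rng.normal(-2.0, 1.5, 30)
lh_deaf, rh_deaf = rng.normal(-3.5, 1.3, 24), rng.normal(-2.4, 1.3, 24)

# Laterality difference: LH minus RH (more negative = greater LH asymmetry).
lat_hearing = lh_hearing - rh_hearing
lat_deaf = lh_deaf - rh_deaf

# Independent-samples t-test comparing the asymmetry between groups.
t, p = stats.ttest_ind(lat_hearing, lat_deaf)
print(f"hearing Mdiff = {lat_hearing.mean():.2f}, deaf Mdiff = {lat_deaf.mean():.2f}, "
      f"t = {t:.2f}, p = {p:.3f}")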
3.2.3. N170 to Faces
The N170 was more negative for faces over temporal than occipital sites (main effect of Anteriority, F (1, 52) = 111, p < .001), but no other main effects were found. We found a three-way Hemisphere × Anteriority × Group interaction, F (1, 52) = 11.2, p = .002, which was followed up by a separate analysis for each group. For the hearing participants, the N170 was more negative at temporal than occipital sites (main effect of Anteriority, F (1, 29) = 55, p < .001), and there was an interaction between Anteriority and Hemisphere, F (1, 29) = 6, p = .02. This interaction was driven by the difference in the direction of the hemispheric asymmetry at temporal vs. occipital sites; that is, at temporal sites the N170 was more negative going in the RH than the LH, but at occipital sites the N170 was more negative going in the LH than the RH. The deaf group also exhibited a more negative N170 over temporal than occipital sites (main effect of Anteriority, F (1, 23) = 56, p < .001), and a Hemisphere × Anteriority interaction, F (1, 23) = 5.3, p = .03. For the deaf group, the hemispheric asymmetry at temporal sites went in the opposite direction to that of the hearing group; that is, at temporal sites the N170 was more negative going in the LH than the RH, whereas at occipital sites it was slightly more negative going in the RH than the LH (see Figure 2B)2. As for words, we calculated the hemispheric laterality difference (LH minus RH), but this time for temporal and occipital sites separately. At temporal sites, the hearing participants showed a greater hemispheric laterality difference (Mdiff = −.68 μV; range 7.6 μV; SD = 1.9) than the deaf participants (Mdiff = .24 μV; range 6.9 μV; SD = 1.7), a difference that approached statistical significance, t (52) = −1.9, p = .06, 95% CI [−1.9; .05]. At occipital sites, the laterality difference was greater for the deaf (Mdiff = −.39 μV; range 9.2 μV; SD = 2.1) than for the hearing participants (Mdiff = −.07 μV; range 8.6 μV; SD = 2.2); however, this difference was not statistically significant, t (52) < 1, p = .58, 95% CI [−.84; 1.5]. This illustrates that the Anteriority × Hemisphere interaction in each group appeared to be driven by the magnitude as well as the direction of the N170 asymmetry.
3.2.4. N170 to Cars
As expected, the N170 for cars was more negative at temporal than occipital sites (main effect of Anteriority, F (1, 52) = 36, p < .001), but there were no other main effects or interactions. Both groups thus showed a similar magnitude and pattern of scalp distribution of the N170 to cars, which was bilaterally distributed over temporal sites.
3.3. ERP results: 50–120 ms (P1 epoch)
3.3.1. Overall Results
We used the same ANOVA approach to assess whether the effects above were specific to the N170 component. The P1 amplitude was largest (i.e., most positive going) for faces and smallest for cars (main effect of Stimulus, F (2, 104) = 78.6, p < .001). The P1 was larger in the RH than the LH (main effect of Hemisphere, F (1, 54) = 7, p = .011), and larger at occipital than temporal sites (main effect of Anteriority, F (1, 54) = 48.6, p < .001). Hearing participants exhibited a larger P1 amplitude than deaf participants, F (1, 52) = 6.8, p = .012, and there was a Stimulus × Anteriority interaction, F (2, 104) = 43.7, p < .001. No other main effects or interactions were observed. We further broke down the analysis by stimulus type to examine whether distinct laterality patterns occur for each category.
3.3.2. P1 to Words
The P1 amplitude to words was larger in the RH than the LH, F (1, 52) = 5.3, p = .025, larger at occipital than temporal sites, F (1, 52) = 15.7, p < .001, and hearing participants exhibited a larger P1 than deaf participants, F (1, 52) = 7.4, p = .009. No interactions were found.
3.3.3. P1 to Faces
Similarly to words, the P1 amplitude to faces was larger in the RH than the LH, F (1, 52) = 7.5, p = .008, larger at occipital than temporal sites, F (1, 52) = 94, p < .001, and the hearing participants exhibited a larger P1 compared to the deaf participants, F (1, 52) = 4.3, p = .044. No interactions were observed.
3.3.4. P1 to Cars
The P1 amplitude was larger at occipital than temporal sites, F (1, 52) = 39, p < .001, and the hearing group exhibited a larger P1 than the deaf group, F (1, 52) = 7.1, p = .01, but no hemispheric differences and no interactions were observed.
3.4. Correlations between N170 / P1 amplitude to words and orthographic skill
To explore whether the N170 and P1 components might relate to orthographic processing, we correlated the N170 and P1 amplitude to words in each hemisphere and each site with an offline measure of orthographic skill (spelling recognition test), see Figure 3A. In the hearing group, no relationship between N170 and orthography in either hemisphere was observed, all rs ≤ .25; all ps ≥ .189. In the deaf group, there was a moderate relationship between orthographic skill and the N170 amplitude in RH, that is, better orthographic skill was associated with a more negative N170 amplitude at right occipital and temporal sites, RH overall: r = −.40; p = .056; 95% CI [−.76; .06]; RH-temporal: r = −.37, p = .07; 95% CI [−.78; .04]; RH-occipital: r = −.40, p = .056; 95% CI [−.80; .01], but no significant correlations at LH sites were observed, all rs ≤ −.36; all ps ≥ .084.
Figure 3.
Scatterplots showing the correlations between orthographic skill (spelling scores) on the X axis and average N170 amplitude (3A) and P1 amplitude (3B) on the Y axis in the left and right hemispheres.
For the P1 (Figure 3B), the hearing group showed no correlations between P1 amplitude in either hemisphere and orthographic skill, all rs ≤ −.04, all ps ≥ .769. In contrast, the deaf group again showed a relationship between P1 amplitude in the RH and orthography: P1 amplitude in the RH correlated positively with spelling scores, RH overall: r = .42, p = .043; 95% CI [.002; 0.11]; RH-temporal: r = .40, p = .051; 95% CI [0; 0.11]; RH-occipital: r = .41, p = .05; 95% CI [0; 0.11], but no correlations in the LH were found, all rs ≤ .19, all ps ≥ .384. There were no correlations between spelling scores and N170 or P1 amplitude to faces or cars in either hemisphere (all rs ≤ .31, ps ≥ .137), confirming that the relationship between orthographic skill and RH amplitude indeed pertained to words in deaf individuals.
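As an illustration of these correlational analyses, the sketch below computes Pearson correlations between spelling scores and right-hemisphere amplitudes; the data arrays are hypothetical and do not reproduce the values obtained in the study.

import numpy as np
from scipy import stats

# Hypothetical deaf-group data: spelling recognition scores and mean RH
# amplitudes (microvolts) to words for the N170 and P1 components.
spelling = np.array([70, 82, 76, 68, 85, 79, 73, 80, 77, 74, 81, 69])
n170_rh = np.array([-2.1, -4.0, -3.2, -1.8, -4.5, -3.6, -2.5, -3.9,
                    -3.1, -2.7, -3.8, -2.0])
p1_rh = np.array([1.2, 2.5, 1.9, 1.0, 2.8, 2.2, 1.5, 2.4, 1.8, 1.6, 2.3, 1.1])

# A negative r for the N170 means better spellers show a larger (more negative)
# N170; a positive r for the P1 means better spellers show a larger positivity.
for label, amplitude in [("N170 RH", n170_rh), ("P1 RH", p1_rh)]:
    r, p = stats.pearsonr(spelling, amplitude)
    print(f"{label}: r = {r:.2f}, p = {p:.3f}")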
To sum up, the magnitude of early visual ERP components in the RH, that is, the negativity of N170 and positivity of P1, was associated with skilled orthographic processing in deaf individuals, a relationship we did not observe in the hearing individuals.
4. Discussion
4.1. Discussion of main findings
We investigated whether the reduced N170 hemispheric asymmetry to visual words previously reported for deaf individuals might arise as a result of weaker phonological mapping compared with hearing individuals who typically exhibit strong left-hemisphere asymmetry (Emmorey et al., 2017; McCandliss & Noble, 2003), or whether deaf individuals are inherently less lateralized in their N170 to centrally presented visual words3 due to possible general neurobiological changes in visual processing that occur due to deafness and/or sign language use. We first summarize our main findings before discussing the results in further detail below.
In line with our main hypothesis, we found the typical LH asymmetry in hearing participants but a reduced LH asymmetry in deaf participants for centrally presented words, but not for cars, confirming that the differences in lateral asymmetries can be explained by phonological mapping processes, as initially argued by Emmorey et al. (2017). Differences in lateral asymmetries of the word-specific N170 cannot, therefore, be attributed to general changes in visual processing in deaf individuals. Further, we found interesting differences in the asymmetry of the N170 to centrally presented faces between deaf and hearing participants, accompanied by superior face discrimination performance by the deaf participants. The deaf participants’ lifelong experience with sign language, visual speech, and/or deafness were likely contributing factors, although teasing these factors apart warrants further research. With respect to the P1, both groups exhibited a similar right-sided asymmetry of the P1 to words and faces (in temporal and occipital regions) and bilateral P1 responses to cars, offering some support for category-selective modulations of P1 amplitudes (see also Dering, Martin, Moro, Pegna, & Thierry, 2011). It remains unclear whether these reflect early face-selectivity or visual differences between stimulus sets. The scalp distribution of this early component was not influenced by group membership, but was impacted by the type of stimulus. That is, early visual recognition processes indexed by the P1 component are sensitive to differences between highly learned, familiar objects (words and faces) and less familiar objects (cars). Interestingly, deaf participants had a less positive P1 than hearing participants for all stimulus types. This was not the case for the subsequent N170 component, as both groups showed comparable amplitudes. Thus, there was no overall reduction in amplitude in deaf individuals; instead, their distinct perceptual experience may attenuate, rather than enhance, the P1 amplitude, perhaps as a result of differentially distributed or modulated attentional resources. These differences likely reflect an adaptive, not maladaptive, mechanism because there were no overall group differences in the subsequent N170 amplitude. Crucially, the P1 patterns did not mirror the N170 patterns, and we thus conclude that the results we observed for the N170 were not an epiphenomenon of earlier processing stages. Next, we discuss the results for each stimulus type in further detail.
4.2. Reduced N170 lateralization to words in deaf individuals reflects experience-specific adaptation to orthographic stimulus processing
We replicated the group differences in the N170 to visually presented words (Emmorey et al., 2017; Maurer et al., 2008; Mercure et al., 2011; Mercure et al., 2008): hearing participants exhibited a left-lateralized N170 to words in both occipital and temporal regions, whereas deaf participants showed a reduced left-lateralized N170, with the asymmetry observed primarily at occipital rather than temporal sites, similarly to Emmorey et al. (2017); see Figure 2B. Further, deaf and hearing participants did not differ in the overall N170 amplitude to words, which suggests that the reduced asymmetry reflects a more bilateral distribution in the deaf group rather than a general reduction in amplitude or power.
Moreover, the right hemisphere may play an independent role in skilled orthographic processing for deaf readers. Further to our main result, we found a potential relationship between a larger (more negative) N170 amplitude in the RH occipital and temporal regions and orthographic skill (spelling) in the deaf group only (see Figure 3A), even though both deaf and hearing groups exhibited equivalent orthographic skill (p = .506) and did not differ in N170 amplitude overall. Skilled deaf spellers exhibited greater RH engagement, marked by a larger N170 in the RH. Traditionally, in hearing readers, increased RH recruitment has been associated with poorer reading skill (Emmorey et al., 2017; Laszlo & Sacchi, 2015; Shaywitz & Shaywitz, 2005) and regarded as maladaptive, perhaps related to the fact that the right occipito-temporal regions might be responsible for coarser-level processing or might process words as visual objects, which could consequently lead to less efficient or less precise orthographic representations (Laszlo & Sacchi, 2015). Neville and colleagues (Neville et al., 1982a; Neville, Kutas, & Schmidt, 1982b; Neville et al., 1984) also reported reduced N170 asymmetry to visually presented words in deaf individuals and originally attributed the outcome to the possibility that their deaf participants may not have had full mastery of English grammar because English was acquired as a second language. Given that our participants rated themselves as proficient in written English (mean 6 on a 1–7 scale; 7 = “like native”), this explanation for our results is unlikely. We argue that the RH involvement in deaf participants was not an indication of poorly specified orthographic representations or poorer language skill; on the contrary, these RH regions might contribute positively to orthographic analysis in deaf readers. The results further support the argument that phonological ability fine-tunes the N170 to words in the left hemisphere for hearing participants, while orthographic knowledge may fine-tune the N170 in the right hemisphere for deaf participants (for fMRI evidence for orthographic tuning in the right hemisphere in deaf readers, see Glezer et al., 2018).
As for the P1 component, the results revealed a similar right-sided asymmetry in both groups, and deaf participants exhibited attenuated P1 amplitude compared to hearing participants; the latter result was also reported by Emmorey et al. (2017). The right-sided asymmetry of the P1 to words in both groups was intriguing. Sacchi and Laszlo (2016) reported a right-sided P1 in hearing readers but not in deaf individuals, and Dundas et al. (2014) reported a bilateral P1 for hearing individuals. Emmorey et al. (2017) also did not report hemispheric asymmetries of the P1 to words for either deaf or hearing participants. Moreover, as mentioned in the results summary above, the right-sided P1 asymmetry was observed for words and faces, but not for cars. This asymmetry could arise from stimulus-specific encoding, perhaps in addition to the specific task demands on attention. In other words, the asymmetry did not arise solely because participants anticipated the second, laterally presented word, because otherwise we would have observed this effect for cars as well.
4.3. Unique face-sensitive N170 effects were modulated by experience-specific demands on face recognition
We found subtle yet distinctive scalp distribution patterns of the face-sensitive N170 component in the two groups, qualified by a three-way interaction overall and a Hemisphere × Anteriority interaction within each group. These effects were driven by the difference in the direction of the hemispheric asymmetry in temporal vs. occipital regions. Hearing participants were right-lateralized at temporal sites but left-lateralized at occipital sites in their N170 to faces. Deaf participants showed a different pattern; they were left-lateralized at temporal sites, but showed slightly right-lateralized, almost bilateral, N170 responses at occipital sites (see Figure 2B). Thus, the distinct perceptual experiences of deaf adults may uniquely shape their N170 signature for face recognition. But before we discuss why the N170 signature might be different for deaf individuals, let us compare our findings with the previous literature.
Only two published studies have thus far compared the N170 to faces with the N170 to other visual objects (e.g., cars, mushrooms, furniture) in adult deaf and hearing individuals (Mitchell, 2017; Mitchell et al., 2013). These studies reported bilateral N170 responses to faces in both groups, and no group differences for other visual objects, suggesting no effect of deafness and/or sign language on face recognition. However, an important limitation preventing direct comparison is that these studies averaged the N170 peak amplitude across four posterior sites in each hemisphere, which may have obscured some important nuances in the N170 scalp distribution in deaf individuals. When we likewise averaged the N170 amplitude across temporal and occipital sites, we also found no effect of hemisphere, F (1, 52) = .22, p = .64, or group, F (1, 52) = .001, p = .97. This suggests that any experience-driven effects on the N170 to faces may be relatively nuanced and best examined at the single-electrode level.
As for the hearing participants, we did not observe the canonical right-lateralized N170 for faces; rather, they exhibited a leftward shift at occipital sites. Evidence for a robust right-lateralized N170 for faces has been inconsistent in the literature (Maurer et al., 2008; Mercure et al., 2011; Mitchell, 2017). However, it is again possible that such discrepancies arise as a result of collapsing the signal across multiple electrode sites for analysis; in Dundas et al. (2014, 2015), the studies on which our experimental design was based, the factor of anteriority was not included in the analysis.
Going back to our main argument, we suggested that the distinct perceptual experiences of deaf adults could uniquely shape their N170 signature for face recognition. Support for this argument comes from the deaf participants’ superior face discrimination ability in the present study. This behavioral result matches previous reports of deaf individuals’ enhancement in some aspects of face processing, including those relevant to detecting local feature configurations that must be generalized over individual faces (e.g., similarities across the mouth region to detect ‘visemes’ in speech or adverbial expressions in ASL), and enhanced attention to lower half versus top half of faces (McCullough & Emmorey, 1997; Mitchell et al., 2013; Stoll et al., 2017). The habitual encoding of facial expressions during both signed and spoken language recognition might enhance visual or attentional resources dedicated for face processing and lead to enhanced face discrimination ability and subtle distributional shifts in the N170 for deaf signers.
But how precisely signers’ distinct perceptual experience might shape visual recognition of faces remains subject to further investigation. A possible explanation, suggested previously by Stoll and colleagues (2017), is that deaf signers do not explore faces in the same way as hearing non-signers (Watanabe, Matsuda, Nishioka, & Namatame, 2011). In other words, there may be differences at the structural encoding stage at which face features are detected, independent of face identity, a stage that is typically indexed by the N170. Ample evidence from studies with hearing non-signers suggests that hearing individuals process faces holistically. For example, face parts are easier to recognize when presented in the whole face than in isolation (Tanaka & Farah, 1993), and the N170 component “suffers” when inverted rather than upright faces must be recognized (Rossion & Gauthier, 2002). As discussed above, face parts carry different weighted importance during face recognition for deaf signers. For example, deaf signers were less affected by the face inversion effect than hearing non-signers, but only when the changes occurred in the mouth area rather than the eye area (He, Xu, & Tanaka, 2016). Such differences in face processing may have resulted in distinct (i.e., reversed) N170 asymmetries in the deaf vs. hearing participants. That is, holistic face processing may more selectively recruit RH temporal regions in hearing nonsigners, whereas in the deaf group, local-level face processing may tap LH resources and thus drive the N170 toward LH temporal sites. While this is a compelling suggestion, the extent to which the laterality of N170 responses reflects differences in face recognition, or why the laterality is reversed at occipital sites, remains subject to further investigation. For example, techniques with higher spatial resolution (e.g., fMRI) or source localization methods could shed light on the specific neuronal resources recruited for face recognition.
The P1 component was larger to faces than words, which replicated previous work by Dundas et al. (2014) – P1 to faces tends to be larger relative to other objects. A closer analysis of the P1 component to faces revealed a very similar pattern to words – both groups showed right-lateralized P1 and this laterality was not modulated by group membership. However, P1 amplitude was modulated by group as deaf participants exhibited an attenuated P1 compared to the hearing participants. The literature tends to report a bilateral distribution of P1 to faces (e.g., Dundas et al., 2014), although studies directly comparing deaf and hearing individuals on P1 are limited. Mitchell et al. (2013) reported marginally right-lateralized P1 for hearing but bilateral P1 for deaf participants (although it should be noted that they also found a right-lateralized N170 to faces in both groups). Lastly, the attenuated P1 amplitude to faces in deaf participants compared to hearing participants was also perhaps surprising. One might expect that deaf signers would exhibit an enhanced rather than decreased P1 to faces because faces represent linguistically relevant material for deaf signers, and additionally, expert processing of visual stimuli has been associated with an increase in P1 amplitude (Tanaka & Curran, 2001). It is tempting to argue here that P1 to faces might be modulated by deafness and/or lifelong sign language use. However, because deaf signers exhibited attenuated P1 across all three object categories, we conclude that such attenuation might reflect more general modulation of attentional resources in deaf individuals that pertains to early visual object recognition.
4.4. Cars are not special: The lack of experience-driven effects on car recognition
The car stimuli served as a control category. Deaf and hearing participants did not differ on the N170 to cars or on car discrimination accuracy, and we observed no hemispheric specialization of the N170 to cars. The P1 to cars was bilaterally distributed in both groups, and the deaf participants had smaller P1 responses than the hearing participants. The main finding here is that the results for cars confirmed the main hypothesis that the left-sided N170 asymmetry to words observed for hearing participants can indeed be explained by the phonological mapping hypothesis, and that the reduced N170 asymmetry to words in deaf participants is due to reduced phonological mapping in the LH temporal lobes. The bilateral N170 to cars in both groups confirmed that the reduced asymmetry pertains to word-specific adaptations in deaf readers, does not generalize to visual object recognition (cars), and is therefore a likely outcome of a reduction in the coupling between phonological and orthographic processes that occurs in hearing readers. Finally, both groups exhibited a bilateral P1 to cars, confirming that any asymmetries in the P1 to words and faces are indeed specific to these highly specialized and familiar categories and did not generalize to cars. We suggest that the P1 asymmetry might reflect early differentiation between highly familiar stimuli, like words and faces, and less familiar stimuli, such as cars, during early visual decoding stages, rather than a task effect (i.e., expecting a lateral stimulus to appear on the screen). Attending to highly familiar stimuli such as words and faces may call upon RH neuronal resources more than attending to less familiar stimuli such as cars. This claim, however, warrants further research. Finally, deaf participants again showed an attenuated P1 compared to hearing participants but no detriment to behavioral discrimination of cars, a result that suggests adaptive, not maladaptive, mechanisms in early visual recognition in deaf individuals.
Conclusion
We conclude that the attenuated left-hemisphere asymmetry of the N170 to words in deaf individuals results from word-specific adaptations and can be explained by reduced phonological mapping in the left-hemisphere temporal lobes, a result that supports the phonological mapping hypothesis. This distinct hemispheric profile in deaf individuals is adaptive: reduced phonological mapping during reading leads to reduced left-lateralization of the N170 in temporal regions without detriment to word recognition performance, and skilled orthographic processing may recruit greater right-hemisphere resources in deaf individuals. The unique pattern of hemispheric asymmetries of the face-specific N170 in the deaf and hearing groups may arise as a result of the different demands on face processing across the two groups. Unlike their sign-naïve counterparts, deaf signers must habitually attend to facial features that are relevant for encoding linguistic information in sign language and/or visual speech, and this was also manifested in the deaf participants’ enhanced face discrimination ability. Importantly, the distinct N170 signatures to words and faces reflected experience-driven adaptations that were specific to each stimulus category and did not generalize to the recognition of other visual stimuli (i.e., cars). The different functional pressures on recognizing words and faces may differentially alter the electrophysiology of early visual processing for deaf and hearing individuals. Such alterations pertain to highly specialized and familiar stimuli such as words and faces and do not generalize to object recognition. In other words, deaf signers are not inherently less lateralized in their electrophysiological responses to words; rather, changes in the lateral asymmetries of early ERP components (N170) appear to be driven by the distinct ways deaf and hearing perceivers habitually engage with these stimuli. The P1 component, which was right-lateralized for words and faces but bilateral for cars in both groups, appeared to be sensitive to the difference between highly rehearsed, familiar stimuli (words and faces) and less familiar stimuli (cars). Deaf signers showed attenuated P1 responses overall; we attribute this result to a redistribution of attentional resources that generalizes to visual object recognition in deaf individuals, an effect that is driven by their unique visual-perceptual experience.
Highlights:
Weaker N170 asymmetry to words in deaf individuals reflects stimulus specific effects
Changes to N170 to words or faces do not generalize to other visual objects
Right hemisphere N170 and spelling skill are associated in deaf readers
Reduced P1 in deaf signers suggests changes in visual attention in object recognition
Acknowledgments
This work was supported by grants from the National Science Foundation (BCS-1439257 and BCS-0923763) and the National Institutes of Health (R01DC014246). We thank Lucinda O’Grady Farnady and Allison Bassett for help with participant recruitment, and Ben Eaton for assistance with experimental design. We also thank all of the study participants, without whom this research would not be possible.
Footnotes
1. We conducted the analysis with participant age entered in the model as a covariate; however, age was not a significant factor in these models and was thus excluded from further analyses (F (1, 51) < 1, p = .63).
2. For comparison’s sake, we re-analyzed the N170 to faces using the same time window as in Dundas et al. (2014), 160–220 ms, but we found the same pattern of results as above.
3. Note that we analyzed N170 responses to the initial (centrally presented) stimulus in a pair and not the lateralized (second) stimulus to which the behavioral response was elicited. Following Dundas, Plaut, & Behrmann (2013), the purpose was to examine the N170 to the visual encoding of the stimulus in the absence of task demands, which, in and of themselves, might arguably influence the characteristics of the N170, a question that is outside the scope of the present study.
References
- Allen TE, Clark MD, Del Giudice A, Koo DS, Lieberman A, Mayberry R, & Miller P (2009). Phonology and reading: A response to Wang, Trezek, Luckner, and Paul. American Annals of the Deaf, 154(4), 338–345. doi: 10.1353/aad.0.0109
- Andrews S, & Hersch J (2010). Lexical precision in skilled readers: Individual differences in masked neighbor priming. Journal of Experimental Psychology: General, 139(2), 299–318. doi: 10.1037/a0018366
- Armstrong BA, Neville HJ, Hillyard SA, & Mitchell TV (2002). Auditory deprivation affects processing of motion, but not color. Cognitive Brain Research, 14, 422–434.
- Bavelier D, Brozinsky C, Tomann A, Mitchell TV, Neville HJ, & Liu J (2001). Impact of early deafness and early exposure to sign language on the cerebral organization for motion processing. Journal of Neuroscience, 21(22), 8931–8942. doi: 10.1523/JNEUROSCI.21-22-08931.2001
- Bavelier D, Dye MWG, & Hauser PC (2006). Do deaf individuals see better? Trends in Cognitive Sciences, 10(11), 512–518. doi: 10.1016/j.tics.2006.09.006
- Bavelier D, Tomann A, Hutton C, Mitchell TV, Corina DP, Liu G, & Neville HJ (2000). Visual attention to the periphery is enhanced in congenitally deaf individuals. Journal of Neuroscience, 20(17), RC93. doi: 10.1523/JNEUROSCI.20-17-j0001.2000
- Behrmann M, & Plaut DC (2020). Hemispheric organization for visual object recognition: A theoretical account and empirical evidence. Perception. doi: 10.1177/0301006619899049
- Bentin S, Allison T, Puce A, Perez E, & McCarthy G (1996). Electrophysiological studies of face perception in humans. Journal of Cognitive Neuroscience, 8(6), 551–565. doi: 10.1162/jocn.1996.8.6.551
- Bentin S, Mouchetant-Rostaing Y, Giard M, Echaillier JF, & Pernier J (1999). ERP manifestations of processing printed words at different psycholinguistic levels: Time course and scalp distribution. Journal of Cognitive Neuroscience, 11, 235–260. https://www.ncbi.nlm.nih.gov/pubmed/10402254
- Bosworth RG, & Dobkins KR (2002). The effects of spatial attention on motion processing in deaf signers, hearing signers, and hearing nonsigners. Brain and Cognition, 49(1), 152–169. doi: 10.1006/brcg.2001.1497
- Bottari D, Caclin A, Giard M, & Pavani F (2011). Changes in early cortical visual processing predict enhanced reactivity in deaf individuals. PLoS One, 6(9), e25607. doi: 10.1371/journal.pone.0025607
- Collins MS, Dundas E, Gabay Y, Plaut DC, & Behrmann M (2017). Hemispheric organization in disorders of development. Visual Cognition, 25(4–6), 416–429. doi: 10.1080/13506285.2017.1370430
- Curran T, Tanaka JW, & Weiskopf DM (2002). An electrophysiological comparison of visual categorization and recognition memory. Cognitive, Affective, & Behavioral Neuroscience, 2(1), 1–18. doi: 10.3758/CABN.2.1.1
- Dachkovsky S, & Sandler W (2009). Visual intonation in the prosody of a sign language. Language and Speech, 52(2/3), 287–314. doi: 10.1177/0023830909103175
- Dering B, Martin CD, Moro S, Pegna AJ, & Thierry G (2011). Face-sensitive processes one hundred milliseconds after picture onset. Frontiers in Human Neuroscience, 5(93). doi: 10.3389/fnhum.2011.00093
- Dundas EM, Plaut DC, & Behrmann M (2013). The joint development of hemispheric lateralization for words and faces. Journal of Experimental Psychology: General, 142(2), 348–358. doi: 10.1037/a0029503
- Dundas EM, Plaut DC, & Behrmann M (2014). An ERP investigation of the co-development of hemispheric lateralization of face and word recognition. Neuropsychologia, 61, 315–323. doi: 10.1016/j.neuropsychologia.2014.05.006
- Dundas EM, Plaut DC, & Behrmann M (2015). Variable left-hemisphere language and orthographic lateralization reduces right-hemisphere face lateralization. Journal of Cognitive Neuroscience, 27(5), 913–925. doi: 10.1162/jocn_a_00757
- Emmorey K, Midgley KJ, Kohen CB, Sevcikova Sehyr Z, & Holcomb PJ (2017). The N170 ERP component differs in laterality, distribution, and association with continuous reading measures for deaf and hearing readers. Neuropsychologia, 106, 298–309. doi: 10.1016/j.neuropsychologia.2017.10.001
- Glezer LS, Weisberg J, O’Grady Farnady C, McCullough S, Midgley KJ, Holcomb PJ, & Emmorey K (2018). Orthographic and phonological selectivity across the reading system in deaf skilled readers. Neuropsychologia, 117, 500–512. doi: 10.1016/j.neuropsychologia.2018.07.010
- He H, Xu B, & Tanaka JW (2016). Investigating the face inversion effect in a deaf population using the dimensions tasks. Visual Cognition, 1–11. doi: 10.1080/13506285.2016.1221488
- Hillyard S, Vogel E, & Luck S (1998). Sensory gain control (amplification) as a mechanism of selective attention: Electrophysiological and neuroimaging evidence. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 353, 1257–1270.
- Joyce CA, & Rossion B (2005). The face-sensitive N170 and VPP components manifest the same brain processes: The effect of reference electrode site. Clinical Neurophysiology, 116(11), 2613–2631. doi: 10.1016/j.clinph.2005.07.005
- Laszlo S, & Sacchi E (2015). Individual differences in involvement of the visual object recognition system during visual word recognition. Brain and Language, 145–146, 42–52. doi: 10.1016/j.bandl.2015.03.009
- Liddell SK (1980). American Sign Language syntax (Approaches to Semiotics, Vol. 52). Mouton.
- Luck S (2005). An introduction to the event-related potential technique. Cambridge, MA: MIT Press.
- Maurer U, & McCandliss BD (2007). The development of visual expertise for words: The contribution of electrophysiology. In Grigorenko EL & Naples AJ (Eds.), Single-word reading: Biological and behavioral perspectives (pp. 43–64). Mahwah, NJ: Erlbaum.
- Maurer U, Rossion B, & McCandliss BD (2008). Category specificity in early perception: Face and word N170 responses differ in both lateralization and habituation properties. Frontiers in Human Neuroscience, 2(18), 18. doi: 10.3389/neuro.09.018.2008
- McCandliss BD, & Noble KG (2003). The development of reading impairment: A cognitive neuroscience model. Mental Retardation and Developmental Disabilities Research Reviews, 9(3), 196–204. doi: 10.1002/mrdd.10080
- McCullough S, & Emmorey K (1997). Face processing by deaf ASL signers: Evidence for expertise in distinguishing local features. Journal of Deaf Studies and Deaf Education, 2(4), 212–222. http://www.jstor.org/stable/23805385
- McQuarrie L, & Parrila RK (2009). Deaf children’s awareness of phonological structure: Rethinking the “functional-equivalence” hypothesis. Journal of Deaf Studies and Deaf Education, 14(2), 137–154. doi: 10.1093/deafed/enn025
- Mercure E, Cohen Kadosh K, & Johnson MH (2011). The N170 shows differential repetition effects for faces, objects, and orthographic stimuli. Frontiers in Human Neuroscience, 5, article 6. doi: 10.3389/fnhum.2011.00006
- Mercure E, Dick F, Halit H, Kaufman J, & Johnson MH (2008). Differential lateralization for words and faces: Category or psychophysics? Journal of Cognitive Neuroscience, 20(11), 2070–2087. doi: 10.1162/jocn.2008.20137
- Mitchell TV (2017). Category selectivity of the N170 and the role of expertise in deaf signers. Hearing Research, 343, 150–161. doi: 10.1016/j.heares.2016.10.010
- Mitchell TV, Letourneau SM, & Maslin MC (2013). Behavioral and neural evidence of increased attention to the bottom half of the face in deaf signers. Restorative Neurology and Neuroscience, 31(2), 125–139. doi: 10.3233/RNN-120233
- Neville HJ, Kutas M, & Schmidt A (1982a). Event-related potential studies of cerebral specialization during reading. II. Studies of congenitally deaf adults. Brain and Language, 16(2), 316–337. doi: 10.1016/0093-934X(82)90089-X
- Neville HJ, Kutas M, & Schmidt A (1982b). Event-related potential studies of cerebral specialization during reading. Brain and Language, 16(2), 300–315. doi: 10.1016/0093-934X(82)90088-8
- Neville HJ, Kutas M, & Schmidt A (1984). Event-related potential studies of cerebral specialization during reading: A comparison of normally hearing and congenitally deaf adults. Annals of the New York Academy of Sciences, 425, 370–376.
- Neville HJ, & Lawson D (1987a). Attention to central and peripheral visual space in a movement detection task: An event-related potential and behavioral study. II. Congenitally deaf adults. Brain Research, 405(2), 268–283. doi: 10.1016/0006-8993(87)90296-4
- Neville HJ, & Lawson D (1987b). Attention to central and peripheral visual space in a movement detection task: An event-related potential and behavioral study. III. Separate effects of auditory deprivation and acquisition of a visual language. Brain Research, 405(2), 284–294. doi: 10.1016/0006-8993(87)90297-6
- Neville HJ, Schmidt A, & Kutas M (1983). Altered visual-evoked potentials in congenitally deaf adults. Brain Research, 266(1), 127–132.
- Oldfield RC (1971). The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia, 9(1), 97–113. doi: 10.1016/0028-3932(71)90067-4
- Rossion B, Curran T, & Gauthier I (2002). A defense of the subordinate-level expertise account for the N170 component. Cognition, 85(2), 189–196. doi: 10.1016/S0010-0277(02)00101-4
- Rossion B, & Gauthier I (2002). How does the brain process upright and inverted faces? Behavioral and Cognitive Neuroscience Reviews, 1(1), 63–75. doi: 10.1177/1534582302001001004
- Rossion B, Joyce CA, Cottrell GW, & Tarr MJ (2003). Early lateralization and orientation tuning for face, word, and object processing in the visual cortex. NeuroImage, 20(3), 1609–1624.
- Sacchi E, & Laszlo S (2016). An event-related potential study of the relationship between N170 lateralization and phonological awareness in developing readers. Neuropsychologia, 91, 415–425. doi: 10.1016/j.neuropsychologia.2016.09.001
- Sandler W, & Lillo-Martin D (2006). Sign language and linguistic universals. Cambridge: Cambridge University Press.
- Scott GD, Karns CM, Dow MW, Stevens C, & Neville HJ (2014). Enhanced peripheral visual processing in congenitally deaf humans is supported by multiple brain regions, including primary auditory cortex. Frontiers in Human Neuroscience, 8(177), 1–9. doi: 10.3389/fnhum.2014.00177
- Shaywitz SE, & Shaywitz BA (2005). Dyslexia (specific reading disability). Biological Psychiatry, 57(11), 1301–1309. doi: 10.1016/j.biopsych.2005.01.043
- Stoll C, Palluel-Germain R, Caldara R, Lao J, Dye MWG, Aptel F, & Pascalis O (2017). Face recognition is shaped by the use of sign language. Journal of Deaf Studies and Deaf Education, 23(1), 62–70. doi: 10.1093/deafed/enx034
- Tanaka JW, & Curran T (2001). The neural basis for expert object recognition. Psychological Science, 12(1), 43–47. doi: 10.1111/1467-9280.00308
- Tanaka JW, & Farah MJ (1993). Parts and wholes in face recognition. The Quarterly Journal of Experimental Psychology, 46A(2), 225–245. doi: 10.1080/14640749308401045
- Wang Y, Huang H, Yang H, Xu J, Mo S, Lai H, … Zhang J (2019). Influence of EEG references on the N170 component in human facial recognition. Frontiers in Neuroscience, 13(705). doi: 10.3389/fnins.2019.00705
- Wang Y, Trezek BJ, Luckner JL, & Paul PV (2008). The role of phonology and phonologically related skills in reading instruction for students who are deaf or hard of hearing. American Annals of the Deaf, 4, 396–407. doi: 10.1353/aad.0.0061
- Watanabe K, Matsuda T, Nishioka T, & Namatame M (2011). Eye gaze during observation of static faces in deaf people. PLoS One, 6(2), e16919. doi: 10.1371/journal.pone.0016919