Abstract
Objectives
This research studied whether the mode of input (auditory vs audiovisual) influenced semantic access by speech in children with sensorineural hearing impairment (HI).
Design
Participants, 31 children with HI and 62 children with normal hearing (NH), were tested with our new multi-modal picture word task. Children were instructed to name pictures displayed on a monitor and ignore auditory or audiovisual speech distractors. The semantic content of the distractors was varied to be related vs unrelated to the pictures (e.g., picture-distractor pairs of dog-bear vs dog-cheese, respectively). In children with NH, picture naming times were slower in the presence of semantically-related distractors. This slowing, called semantic interference, is attributed to the meaning-related picture-distractor entries competing for selection and control of the response [the lexical selection by competition (LSbyC) hypothesis]. Recently, a modification of the LSbyC hypothesis, called the competition threshold (CT) hypothesis, proposed that 1) the competition between the picture-distractor entries is determined by a threshold, and 2) distractors with experimentally reduced fidelity cannot reach the competition threshold. Thus, semantically-related distractors with reduced fidelity do not produce the normal interference effect, but instead no effect or semantic facilitation (faster picture naming times for semantically-related vs -unrelated distractors). Facilitation occurs because the activation level of the semantically-related distractor with reduced fidelity 1) is not sufficient to exceed the competition threshold and produce interference but 2) is sufficient to activate its concept, which then strengthens the activation of the picture and facilitates naming. This research investigated whether the proposals of the CT hypothesis generalize to the auditory domain, to the natural degradation of speech due to HI, and to participants who are children. Our multi-modal picture word task allowed us to 1) quantify picture naming results in the presence of auditory speech distractors and 2) probe whether the addition of visual speech enriched the fidelity of the auditory input sufficiently to influence results.
Results
In the HI group, the auditory distractors produced no effect or a facilitative effect, in agreement with proposals of the CT hypothesis. In contrast, the audiovisual distractors produced the normal semantic interference effect. Results in the HI vs NH groups differed significantly for the auditory mode, but not for the audiovisual mode.
Conclusions
This research indicates that the lower fidelity auditory speech associated with HI affects the normalcy of semantic access by children. Further, adding visual speech enriches the lower fidelity auditory input sufficiently to produce the semantic interference effect typical of children with NH.
Although understanding spoken language seems easy, its underpinnings are complex. For example, children must learn that words label concepts or categories of objects with common properties. The word dog, for instance, labels a group of objects within the animal category whose members share common semantic features such as breathes, has fur, four-legs, etc. This knowledge also needs to be accessed rapidly and efficiently in everyday usage because speech occurs at a rate of several words a second (Bloom 2000). In this study, we investigated how accessing a spoken word’s meaning (i.e., its lexical-semantic representation) may be affected in child listeners with sensorineural hearing impairment (HI). Our specific focus was whether this semantic access by speech is influenced by the mode of input (auditory vs audiovisual). Prior to elaborating our research focus, however, we will consider how HI may affect children’s development of semantic capabilities.
Semantic Capabilities in Children with HI
With regard to word meaning, vocabulary development in children with HI may show a reasonably normal pattern of development. However, the rate of acquisition is typically slowed and may plateau prematurely, yielding pronounced individual variability (Davis et al. 1986; Gilbertson & Kamhi 1995; Briscoe et al. 2001; Borg et al. 2007; Moeller et al. 2007; Fitzpatrick et al. 2011). With regard to categorical knowledge in children with HI, this knowledge base appears normal for categories such as those used herein that are easily perceived visually (Osberger & Hesketh 1988). As detailed below, we controlled for possible deficiencies in vocabulary or categorical knowledge in the current study by deleting all test trials containing any item that was not correctly identified or categorized on a category knowledge laboratory task (see Methods).
With regard to lexical-semantic representations in children with HI, learning words via an impaired auditory channel may result in less robust and less well structured representations, perhaps due to 1) decreased hearing/overhearing and inference from context and 2) increased intentional explicit learning of isolated word meanings (Yoshinaga-Itano & Downey 1986; Moeller 1988; Moeller et al. 1996). To the extent that more robust and richer representations have lower thresholds of activation and are more easily retrieved (Bjorklund 1987; Cowan 1995), semantic access may be more effortful and vulnerable to retrieval failure in children with HI. Learning and constructing lexical-semantic representations in children with HI may also be influenced by attentional resources. Attention is conceptualized as a capacity-limited pool of resources shared among concurrent tasks/stimuli (see, e.g., Kahneman 1973; Cowan 1995). From this viewpoint, processing lower fidelity auditory speech requires more effort, and thus more attentional resources (Hicks & Tharpe 2002), and can drain the capacity-limited pool of resources needed to learn and construct semantic representations (see Rabbitt 1968; Werker & Fennell 2004; and Wingfield et al. 2005, for similar reasoning). With regard to the current research focusing on how the mode of input affects semantic access by speech in children with HI, such higher level difficulties should reduce semantic access for both auditory and audiovisual modes. A technique that has been particularly successful in studying semantic access by words is the picture word task.
Picture Word Task
In the picture word task, participants are instructed to name pictures displayed on a monitor and ignore irrelevant seen or heard word distractors (see Schriefers et al. 1990; Damian & Martin 1999). The set of target pictures is held constant, and the content of the irrelevant distractors is systematically varied. For current purposes, the distractors were varied to represent a semantic categorical relationship vs no relationship between the picture-distractor pairs. Examples respectively are the picture-distractor pairs of dog-bear vs dog-cheese. The dependent measure is the speed of picture naming. Both adults and children require more time to name pictures presented with semantically-related (vs -unrelated) distractors, an effect called semantic interference (see Jerger et al., 2013, for review). This interference is commonly attributed to competition between the lexical-semantic representations of the picture and distractor for selection and control of the response, called the lexical selection by competition hypothesis (Levelt et al. 1999; Damian et al. 2001; Damian & Bowers 2003).
With regard to lower fidelity input, recent investigations with written as opposed to spoken word distractors in adults have focused on how experimentally reducing the fidelity of the distractors affects semantic access by words (e.g., Finkbeiner & Caramazza 2006; Piai et al. 2012). As a typical example, investigators required participants to name pictures and ignore word distractors whose visibility was manipulated (clearly visible vs masked). Results showed that the clearly visible distractors produced the typical semantic interference effect (i.e., slower naming times for related than unrelated distractors). By contrast, the masked distractors with reduced fidelity produced an unexpected semantic facilitation effect (faster naming times for related than unrelated distractors). To explain the effects produced by reducing the fidelity of the distractors, Piai and colleagues (2012) proposed the competition threshold (CT) hypothesis as a modification of the lexical selection by competition hypothesis. This hypothesis is particularly relevant to listeners hearing spoken distractors of reduced fidelity due to HI (see, e.g., Moore, 1996), and thus we consider the CT hypothesis in depth below.
Lexical Selection by Competition Hypothesis and CT Hypothesis Modification
Figure 1a illustrates the general stages of processing for the picture word task with auditory distractors, assumed by numerous models of lexical selection by competition. The solid lines represent the speech production (picture) process and the dashed lines represent the speech perception (distractor) process. Figure 1a portrays all the stages in an activated state. However, the concept of spreading activation involves a dynamic process that changes the activation levels of the stages during the time course of processing. More specifically, during the dynamics of processing, some stages will have greater activation than others. The activation levels within a stage will also vary over time, with a selected item becoming more highly activated and other items becoming less activated. The text below carefully details the dynamics of the time course characterizing the activated stages portrayed in Figure 1a.
Figure 1.
Figures 1a & b. Schematic of the general stages of processing for the picture word task that are assumed by several models of lexical access. The solid lines represent the speech production (picture) process; the dashed lines represent the speech perception (distractor word) process. Fig 1a conceptualizes the theoretical interaction between the picture and spoken word at the lexical-semantic stage. To do this, the picture input starts at the top of the graph and the spoken word input starts at the bottom. Fig 1b illustrates how manipulating the temporal relation between the onsets of the picture and the distractor (SOA) can maximize or minimize interaction between the picture and spoken word at the lexical-semantic stage. To do this, all inputs start at the top of the graph and proceed downward in time. The grey box illustrates that the SOA of −165 ms produces co-activation of the picture and word within the same time window, in contrast to the SOA of +165ms.
The speech production process (input dog) consists of four dynamic stages: conceptual, lexical-semantic, output phonological, and articulatory motor. More specifically, the picture dog 1) activates its concept and semantic features (animal: breathes, has fur, four-legs, etc), which spreads to 2) activate a set of meaning-related lexical-semantic items (dog, cat, bear, etc) with selection of the correct item dog, followed by 3) activation of output phonological representations and the articulatory motor pattern for picture naming. The dynamics of the speech perception process (input bear) proceed in the opposite direction. The perceptual process consists of acoustic/phonetic, input phonological, lexical-semantic, and conceptual stages. The speech waveform 1) activates its acoustic/phonetic and input phonological representations, which spread to 2) activate a set of phonologically-related lexical-semantic items (bear, bed, bell, etc) with selection of the correct item bear, followed by 3) activation of the word’s concept and semantic features (animal: breathes, has fur, four-legs, etc). Again, the occurrence of semantic interference is attributed to competition between the lexical-semantic representations of the picture and semantically-related distractor for selection and control of the response. This competition is illustrated in Figure 1a by the two enlarged circles at the lexical-semantic level, representing the animals dog and bear (Levelt et al. 1999; Damian et al. 2001; Damian & Bowers 2003).
With regard to the CT hypothesis, Piai et al.’s (2012) modification added a minimum threshold level that a semantically-related distractor must reach in order to engage in competition with the picture for selection and control of the response. If a distractor’s level of activation is weakened such that this competition threshold cannot be reached (imagine this by shrinking the size of the black circle bear in the upper right hand corner, lexical-semantic level, Figure 1a), the model proposes two possible outcomes: 1) the distractor will not influence picture naming or 2) the distractor will facilitate picture naming. With regard to the latter outcome, the CT hypothesis assumes interactive-activation levels of processing, with spreading activation between the stages in both feed-forward and -backward modes (bi-directional arrows, Figure 1a). The facilitation of naming is proposed to occur because the weakened activation level of the distractor bear 1) is not sufficient to exceed the competition threshold and produce competition but 2) is sufficient to spread forward and activate its concept (animal); this conceptual activation then spreads downward to boost the already existing activation of the picture’s representation and facilitate naming.
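To make the CT hypothesis concrete, the toy simulation below sketches its logic. This is a minimal illustration, not the authors' model or Piai et al.'s implementation: the threshold, baseline naming time, and effect sizes are invented placeholders chosen only to reproduce the qualitative predictions (interference above threshold, facilitation or no effect below it).

```python
# Toy sketch of the CT hypothesis; all numbers are hypothetical illustrations.

def naming_time_related(fidelity: float,
                        base_rt: float = 1400.0,     # hypothetical baseline naming time (ms)
                        threshold: float = 0.6,      # hypothetical competition threshold
                        interference: float = 90.0,  # hypothetical slowing from competition (ms)
                        facilitation: float = 60.0) -> float:
    """Toy naming time for a semantically-RELATED distractor.

    fidelity: activation (0..1) reached by the distractor's lexical-semantic
    entry; reduced by masking or, in this study, by hearing impairment.
    """
    if fidelity >= threshold:
        # Above threshold: the distractor competes for selection -> interference.
        return base_rt + interference
    if fidelity > 0.0:
        # Below threshold: activation still spreads forward to the shared concept,
        # which feeds back to boost the picture's entry -> facilitation.
        return base_rt - facilitation * fidelity
    return base_rt  # no usable distractor input -> null effect

for fidelity, label in [(0.9, "clear input"), (0.4, "reduced fidelity"), (0.0, "absent")]:
    print(f"{label:16s} -> related-distractor naming time ~ {naming_time_related(fidelity):.0f} ms")
```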
Of interest to this research is whether the CT hypothesis generalizes to the auditory domain, to the natural degradation of input due to HI, and to participants who are children. We will assess whether hearing loss reduces the fidelity of speech to the extent that the activation level produced by an auditory distractor cannot exceed the competition threshold and thus produces a null effect or semantic facilitation. Further, we will assess whether visual speech enriches the fidelity of auditory speech to the extent that the activation level of an audiovisual distractor exceeds the competition threshold and thus produces semantic interference as expected. Previous research demonstrates that visual speech benefits word recognition in listeners perceiving lower fidelity auditory speech due to HI or a degraded listening situation (Sumby & Pollack, 1954; Erber 1969; MacLeod & Summerfield 1987; Tye-Murray 2009). Our multi-modal picture word task (described below) allows evaluating the effects of both auditory and audiovisual spoken distractors for the first time (Jerger et al. 2009a).
Multi-Modal Picture Word Task
The multi-modal picture word task is the same as the picture word task with auditory distractors, called the cross-modal task, with two modifications (detailed in the Methods section). First, the to-be-named pictured object is displayed on a talker’s T-shirt along with the head and chest of the talker rather than on a blank screen as in the cross-modal task. Second, performance is assessed in the presence of both auditory-static face and audiovisual-dynamic face distractors rather than the auditory only distractors without a face of the cross-modal task. In other words, the multi-modal task shows the talker’s face (along with his chest) as a still image (auditory) or while uttering the distractor (audiovisual).
In picture word tasks, another experimental manipulation that affects whether the distractor influences performance is the stimulus onset asynchrony (SOA), the timing relation between the onset of the distractor and the onset of the picture. Figure 1b illustrates this manipulation by regraphing the model with both inputs (picture and distractor) starting at the top and proceeding downward. The figure portrays the two SOAs used in this study: −165 ms with the spoken distractor presented before the onset of the picture and +165 ms with the spoken distractor presented after the onset of the picture. The schematic illuminates the finding that adults and children typically show semantic interference at −165 ms SOA, with little or no semantic interference at +165 ms SOA (Schriefers et al. 1990; Damian & Martin 1999; Jerger et al. 2002c). The explanation for the effect of the SOA is as follows. Semantic interference is hypothesized to occur when the lexical-semantic representations of the picture and semantically-related distractor are co-activated. This co-activation is promoted by presenting the onset of the spoken distractor slightly before the onset of the picture. As depicted in Figure 1b by the grey box, the overlap between the two lexical-semantic entries is greater at −165 ms than at +165 ms. When the distractor begins slightly after the picture (+165 ms SOA), there is no effective co-activation and no interference because the picture’s lexical-semantic entry has been selected prior to the distractor’s complete lexical-semantic activation.
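The SOA logic of Figure 1b can be expressed as a simple timing check, sketched below. The stage timings are hypothetical placeholders, chosen only to reproduce the qualitative pattern described in the text (co-activation at −165 ms, none at +165 ms); they are not estimates from the data.

```python
# Sketch of the Figure 1b SOA logic; time 0 = picture onset, and
# SOA = distractor onset minus picture onset. Timings are hypothetical.

PICTURE_SELECTED_MS = 400    # hypothetical: picture's lexical-semantic entry selected by ~400 ms
WORD_LEXSEM_ONSET_MS = 250   # hypothetical: distractor's entry active ~250 ms after word onset

def coactivated(soa_ms: int) -> bool:
    """True if the distractor's lexical-semantic entry becomes active before
    the picture's entry has been selected (the grey-box overlap in Fig 1b)."""
    return soa_ms + WORD_LEXSEM_ONSET_MS < PICTURE_SELECTED_MS

for soa in (-165, +165):
    status = ("co-activation -> interference possible" if coactivated(soa)
              else "no co-activation -> little or no effect")
    print(f"SOA {soa:+d} ms: {status}")
```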
In sum, the current study will investigate effects of semantic relatedness as determined by the semantic and temporal onset relationships between the picture-distractor pairs and by the auditory vs audiovisual modes of the distractors in HI vs normal hearing (NH) groups. Thus we will have a complex factorial design. Below we predict possible results on the multi-modal picture word task in the children with HI from knowledge of the 1) lexical selection by competition hypothesis, 2) competition threshold hypothesis, 3) mode of the distractor, 4) SOA, and 5) semantic capabilities. Table 2 in the Results section condenses these predictions.
Table 2.
Predicted results in the children with HI from knowledge of the 1) lexical selection by competition hypothesis and 2) competition threshold hypothesis and from previous results for the 3) mode of the distractor, 4) SOA, and 5) semantic capabilities.
| Source of Prediction | Basis | Predicted Results in HI Group |
|---|---|---|
| Based on theory: | | |
| Lexical Selection by Competition | Cross-modal task: semantic interference at leading SOA only | If no effect of HI, semantic interference comparable to the NH group, at the leading SOA only |
| Competition Threshold Hypothesis | Cross-modal task: lower fidelity distractors cannot reach the competition threshold | No effect or semantic facilitation, rather than interference, for the lower fidelity auditory distractors |
| Based on studies of children with HI: | | |
| Mode | Multi-modal task: phonological conflicting distractors produced interference for audiovisual mode but not for auditory mode | Semantic interference for the audiovisual mode but not for the auditory mode |
| SOA | Cross-modal task: semantic distractors produced significant interference at both leading and lagging SOAs for auditory mode | Effects of semantic relatedness for the auditory distractors at both the leading and lagging SOAs |
| Semantic Capabilities | Influence of vocabulary: controlled; less robust lexical-semantic representations possible | If representations are impoverished and/or harder to access, reduced effects of semantic relatedness for both modes |
Predicted Results in HI Group
Lexical Selection by Competition Hypothesis
It is possible that the children with HI will show a semantic interference effect (i.e., slower picture naming times for semantically-related than -unrelated distractors) comparable to that of the children with NH. This pattern would indicate that lexical selection by competition was present in the HI group and not different from that in the NH group (e.g., Levelt et al. 1999; Damian et al. 2001; Damian & Bowers 2003; Jerger et al. 2013). To the extent that the HI group shows the typical semantic interference effect, the lexical selection by competition hypothesis also predicts that semantic interference will occur at −165 ms SOA, with little or no semantic interference at +165 ms SOA.
Competition Threshold Hypothesis Modification
Given that sensorineural HI creates lower fidelity auditory speech (e.g., Moore 1996), the competition threshold hypothesis predicts that the semantically-related distractors in the HI group will produce null effects or semantic facilitation, rather than interference. An important issue raised by this hypothesis is how the addition of visual speech may affect the strength or fidelity of the distractor.
Mode of the Distractor
Previous results in the HI group of this study on the multi-modal task with auditory vs audiovisual phonological distractors allow us to predict the influence of the mode (Jerger et al. 2009b). In that study, an analogous phonological interference effect was produced by distractors whose onsets conflicted in voicing or in place-of-articulation with the picture (e.g., picture-distractor: bus-duck). These results are relevant to our study of semantic interference in that activation of lexical-semantic representations by speech is indirect via phonology (see Figure 1). The results showed significant phonological interference for the audiovisual conflicting distractors, but not for the auditory conflicting distractors. In other words, adding visual speech created an interference effect, suggesting that visual speech improved the fidelity of the auditory input sufficiently to produce more normalized results. To the extent that the phonological results generalize to semantic results, we predict that the HI group will exhibit semantic interference for the audiovisual mode, but not for the auditory mode. In addition to the fidelity of the distractors, the effects of semantic relatedness may also be influenced by the SOA.
SOA
Previous results in a similar HI group of children on the picture word task with auditory only semantically-related distractors and pictures shown on a blank monitor (cross-modal task) allow us to predict how the SOA will influence performance. These results revealed pronounced semantic interference at both the leading and lagging SOAs (Jerger et al. 2002a). The unusually broad time course of semantic interference in the HI group implied that the lexical semantic stage of processing was abnormally prolonged. These results allow us to predict significant effects of semantic relatedness for the auditory distractors in the HI group at both the leading and lagging SOAs. Stated differently, results for the auditory distractors are predicted to show a significant difference in the effects of semantic relatedness between the HI vs NH groups at the lagging SOA, but not at the leading SOA. Predictions about SOA based on the lexical selection by competition hypothesis are presented above. The CT hypothesis modification did not address the effects of SOA. A novel contribution of this research may be to offer evidence about the effects of SOA on results with lower fidelity distractors. Finally, the effects of semantic relatedness may also be influenced by semantic capabilities.
Semantic Capabilities
With regard to the quality of lexical-semantic representations, we predict that the effects of semantic relatedness will be reduced in the HI group relative to the NH group if semantic representations are impoverished and/or harder to access. Such higher level difficulties should reduce semantic access for both auditory and audiovisual modes. The literature on individuals with childhood HI reports mixed results on a wide variety of semantic tasks (e.g., cross-modal picture word task, auditory and visual Stroop tasks, category verification tasks). Findings have been consistent with normal (Jerger et al. 2006), abnormal (Allen 1971; Jerger et al. 1994), and mixed normal and abnormal (Jerger et al. 1993, 2002a) semantic capabilities. With regard to vocabulary or categorical knowledge, again we controlled for possible deficiencies by deleting all test trials containing any item that was not correctly identified or categorized on a category knowledge laboratory task (see Methods).
In short, our research should yield new insights about semantic access by lower fidelity auditory speech in children with HI and whether visual speech enriches the fidelity of the auditory speech sufficiently to promote more normalized results. Positive results would support an intervention approach that emphasizes hearing and seeing the talker (i.e., lipreading) and suggest a possible disadvantage to an auditory-verbal therapy approach that does not encourage attending to visual speech (e.g., Estabrooks, 2006). Positive results would also support the idea that attending to both auditory and visual speech inputs may allow children to devote more adequate attentional resources to learning and constructing semantic representations that are more typical of children with NH.
Methods
Participants
HI Group
Participants were 31 children with prelingual sensorineural HI (65% boys) ranging in age (yr-mo) from 5-0 to 12-2 (M=8-0). The racial distribution was 74% White, 16% Black, 6% Asian, and 3% multiracial, with 6% reporting Hispanic ethnicity. Average unaided sensitivity on the better ear at 500, 1000, and 2000 Hz (pure tone average or PTA) was 50.13 dB Hearing Level (HL) (American National Standards Institute, ANSI 2004) and was distributed as follows: ≤ 20 dB (23%), 21–40 dB (16%), 41–60 dB (29%), 61–80 dB (13%), 81–100 dB (6%), and > 100 dB (13%). The PTAs in the ≤ 20 dB subgroup did not reflect these children's hearing loss because their HLs were uneven across the 500–4000 Hz range. For example, unaided sensitivity averaged across the two poorest thresholds in the 500–4000 Hz range was 26 dB on the better ear and 35 dB on the poorer ear in this subgroup. In the total group, hearing aids were used by 58% of the children and a cochlear implant or cochlear implant plus hearing aid was used by 19%. Most devices were self-adjusting digital aids with the volume control either turned off or non-existent. Participants who wore amplification were tested while wearing their devices. Auditory word recognition (with amplification) was greater than 80% correct in 81% of the children (M=87.34%). The average age at which the children who wore amplification received their first listening device was 34.65 mo (SD = 19.67 mo); the duration of device use was 60.74 mo (SD = 20.87 mo). The type of educational program was a mainstream setting in 81% of the children, with some assistance from 1) special education services in 3%, 2) deaf education in 16%, and 3) total communication in 3%.
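For readers unfamiliar with the metric, the sketch below computes the conventional three-frequency PTA used above; the audiogram values are hypothetical, chosen only to land near the group mean.

```python
# Sketch of the three-frequency pure tone average (PTA); hypothetical audiogram.

def pta(thresholds_db_hl: dict[int, float]) -> float:
    """Average unaided threshold at 500, 1000, and 2000 Hz (dB HL)."""
    return sum(thresholds_db_hl[f] for f in (500, 1000, 2000)) / 3

better_ear = {500: 40, 1000: 50, 2000: 60, 4000: 75}  # hypothetical better-ear thresholds
print(f"PTA = {pta(better_ear):.2f} dB HL")  # 50.00, near the group mean of 50.13
```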
NH Group
Participants were 62 children with NH (53% boys) who also participated in a concurrent project with the multi-modal task (Jerger et al. 2009b). Ages (yr-mo) ranged from 5-3 to 12-1 (M=7-8). The racial distribution was 76% White, 5% Asian, 2% Black, 2% Native American, and 6% multiracial, with 15% reporting Hispanic ethnicity.
Criteria for Participation
All participants met the following criteria: 1) English as a native language, 2) ability to communicate successfully aurally/orally, 3) no diagnosed or suspected disabilities other than HI and its accompanying speech and language problems, 4) auditory only phoneme discrimination of greater than 85% correct on a two-alternative forced choice test comprised of stop consonants (/p/, /b/, /t/, /d/) coupled with the vowels (/i/ and /ʌ/), and 5) ability to identify accurately on auditory only testing with the phonological distractors at least 50% of the onsets starting with a consonant and 100% of the onsets starting with a vowel. On the latter measure, average performance was 90% in the HI group and 99% in the NH group. All participants also passed measures establishing the normalcy of visual acuity (including corrected to normal, Rader 1977), oral motor function (Peltzer 1997), and hearing (NH group only). A comparison of the HI and NH groups on a set of cognitive measures is detailed in the Results section.
Materials and Instrumentation: Picture Word Task
Stimulus Preparation
The speech distractors were recorded by an 11-year-old boy actor with clearly intelligible, normal speech without pubertal characteristics, as judged by a speech pathologist. The talker looked directly into the camera, starting and ending each utterance with a neutral face/closed mouth position. His full facial image and upper chest were recorded. The audiovisual recordings were digitized via a Macintosh G4 computer with Apple FireWire, Final Cut Pro, and QuickTime software. Color video was digitized at 30 frames/sec with 24-bit resolution at 720 × 480 pixel size. Auditory input was digitized at a 22 kHz sampling rate with 16-bit amplitude resolution.
Colored pictures were scanned into a computer as 8-bit PICT files and edited to achieve objects of a similar size and complexity on a white background. Each picture was displayed on the talker’s T-shirt at shoulder level (below his neck). The total image (inner face, neck, and picture) subtended a visual angle of 10.53° vertically when viewed from 80 cm (participant’s forehead to monitor). The picture and inner face images respectively subtended visual angles of 4.78° and 5.15° (eyebrow to chin) vertically and 6.25° and 5.88° (eye level) horizontally. The visual angles are approximate because participants were free to move in their chairs. With regard to the SOA, the pictures were pasted into the video track to form SOAs of −165 ms (the onset of the distractor was 5 frames before the onset of the picture) or +165 ms (the onset of the distractor was 5 frames after the onset of the picture) (see Figure 1b). To be consistent with the cross-modal task, we defined a distractor’s onset on the basis of its auditory onset.
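The visual angle and SOA figures above follow from standard arithmetic, checked in the sketch below; the small rounding difference for the SOA (5 frames at 30 frames/sec is 166.7 ms, stated as the nominal 165 ms) is consistent with the approximations noted in the text.

```python
import math

# Check of the geometry and timing arithmetic above.

def visual_angle_deg(size_cm: float, distance_cm: float) -> float:
    """Angle subtended by an object of a given height at a given viewing distance."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

def size_for_angle_cm(angle_deg: float, distance_cm: float) -> float:
    """Inverse: on-screen size implied by a reported visual angle."""
    return 2 * distance_cm * math.tan(math.radians(angle_deg / 2))

# The 10.53 deg total image at 80 cm implies an on-screen height of ~14.7 cm:
print(f"{size_for_angle_cm(10.53, 80):.1f} cm")
# A 5-frame offset at 30 frames/sec:
print(f"{5 / 30 * 1000:.1f} ms")  # 166.7 ms, i.e., approximately the nominal 165 ms SOA
```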
The pictures were coupled to both audiovisual (dynamic face) and auditory (static face) speech distractors. As an example of a stimulus for the audiovisual condition, participants experienced a 1000 ms (get-ready) period of the talker’s still neutral face and upper chest, followed by an audiovisual utterance of one distractor word and the presentation of one picture on the chest, followed by 1000 ms of the still neutral face and the colored picture. For the auditory condition, participants experienced exactly the same stimulus except the video track was edited to contain only the still neutral face for the entire trial.
Test Materials
Development of the pictures and distractors has been detailed previously (Jerger et al. 2002c). The content of the distractors was manipulated to represent semantic or phonological relations or no relation to the pictures. Because this paper is focused on the semantic items, the phonological items are not detailed (see Jerger et al., 2009b). The semantic items consisted of 7 pictured objects and 14 word distractors that were coupled to the pictures to represent semantically-related and -unrelated picture word pairs (see Supplemental Digital Content 1 for items). Examples respectively are the picture-distractor pairs of dog-bear and dog-cheese.
In addition to the picture word task, a distractor recognition task quantified the children’s ability to recognize the spoken words of the picture-word task. The recorded items were presented both auditorily and audiovisually, and the children were instructed to repeat each item. The responses of the HI group were scored by an audiologist who was familiar with each child’s consistent mispronunciations, which were not scored as incorrect. Finally, a category knowledge (picture pointing) task quantified the children’s ability to recognize the semantically-related item pairs of the picture word task. Children were instructed to find each pair of items out of six pictured alternatives by category membership and name the items (which ones are food, animals, etc).
Experimental Instrumentation
The video track of the QuickTime movie file was routed to a high resolution monitor, and the auditory track was routed through a speech audiometer to a loudspeaker. The outer borders of the monitor contained a colorful frame, yielding an effective monitor size of about 36 cm. The monitor and loudspeaker, mounted on an adjustable height table, were directly in front of the child at eye level. Participants named pictures by speaking into a unidirectional microphone mounted on an adjustable stand. The microphone was placed approximately 30 cm from the participant’s mouth without blocking his or her view of the monitor. To obtain naming latency, the computer triggered a counter/timer with better than 1 ms resolution at the initiation of a movie file. The timer was stopped by the onset of the participant’s naming response into the microphone, which was fed through a stereo mixing console amplifier and 1 dB step attenuator to a voice-operated relay (VOR). A pulse from the VOR stopped the timing board via a data module board. The counter/timer values were corrected by the amount of silence in each movie file before the onset of the picture. We verified that the VOR was not triggered by the distractors.
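A minimal sketch of the latency correction just described follows; the function name and values are hypothetical illustrations, not the authors' software. The counter/timer runs from movie-file onset, so the silent lead time before the picture appears must be subtracted to express naming latency relative to picture onset.

```python
# Hypothetical sketch of the naming-latency correction described above.

def naming_latency_ms(vor_stop_ms: float, silence_before_picture_ms: float) -> float:
    """Naming latency = raw counter/timer value minus pre-picture silence."""
    return vor_stop_ms - silence_before_picture_ms

# e.g., the VOR stops the timer 2565 ms after movie onset; the picture
# appeared 1165 ms after movie onset:
print(naming_latency_ms(2565.0, 1165.0))  # 1400.0 ms relative to picture onset
```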
Procedure
Participants were tested in two sessions, one for auditory testing and one for audiovisual testing. For the HI group, the first session was always the audiovisual mode because pilot results indicated better recognition of the auditory distractors when the children had previously undergone audiovisual testing. For the NH group, the first session was counterbalanced across participants according to modality. The sessions were separated by about 13 days for the NH group and 5 days for the HI group. Prior to beginning, a tester showed each picture on a 5″ × 5″ card, asking children to name the picture and teaching them the target names of any pictures named incorrectly. Next the tester flashed some picture cards quickly and modeled speeded naming. The child copied the tester for another few pictures. Speeded naming practice continued until the child was naming the pictures fluently.
The children sat at a child-sized table with a co-tester alongside to keep them on task. The tester sat at a computer workstation. Each trial was initiated by the tester’s pushing the space bar (out of the participant’s sight). Participants were instructed to ignore the distractors and to name each picture as quickly and as accurately as possible. They completed one unblocked condition (in the auditory or audiovisual mode) comprised of randomly intermixed distractors—semantic or phonological relationships, no semantic or phonological relationship, or a vowel-onset (/i/ and /ʌ/) —presented at two SOAs (−165 ms and +165 ms). No individual picture or word distractor was allowed to reoccur without at least two intervening trials. The intensity level of the distractors was approximately 70 dB SPL as measured at the imagined center of the participant’s head with a sound level meter.
Results
Comparison of Groups
The children with NH were selected from a pool of 100 typically developing children (see Jerger et al. 2009a, 2013) to form a group with a mean and distribution of ages as similar as possible to those of the HI group. The purpose of this age-comparison NH group was to verify that performance in the HI group was comparable to that in the NH group on all measures except the speech and language measures. We quantified performance on a set of nonverbal and verbal measures (see Supplemental Digital Content 2 for results and citations for measures). Statistical analyses of the results and average performance in the groups are presented herein. With regard to age and the nonverbal measures, a mixed-design analysis of variance with one between-participants factor (Groups: NH vs HI) and one within-participants factor (Measures: standardized scores for age, visual motor integration, visual perception, visual simple RT) indicated no significant differences between groups. The measures × group interaction, however, approached significance, F (3, 273) = 2.45, MSE = 0.893, p = .064, partial η2 = .026, suggesting that at least one measure might differ significantly between groups. Multiple t-tests, with the problem of multiple comparisons controlled by the False Discovery Rate (FDR) procedure (Benjamini & Hochberg 1995; Benjamini et al. 2006), indicated that age, visual motor integration, and visual simple RT did not differ between the groups. Averages in both groups were about 7 yr 10 mo for age, a standard score of 100 for visual motor integration, and 725 ms for simple RT. In contrast to these findings, visual perception performance was significantly better in the NH than the HI group (average standard scores of 115 and 95, respectively).
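For reference, the sketch below implements the Benjamini-Hochberg step-up procedure used here to control the FDR across multiple t-tests; the p values in the example are hypothetical, not the study's.

```python
# Sketch of the Benjamini-Hochberg (1995) step-up FDR procedure.

def bh_reject(p_values: list[float], q: float = 0.05) -> list[bool]:
    """Return True for each hypothesis rejected at FDR level q."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])  # indices, smallest p first
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * q:  # find the largest rank passing the criterion
            k_max = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        reject[i] = rank <= k_max  # reject all hypotheses up to that rank
    return reject

print(bh_reject([0.001, 0.30, 0.04, 0.012]))  # [True, False, False, True]
```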
With regard to the verbal measures, a mixed-design analysis of variance with one between-participants factor (Groups: NH vs HI) and one within-participants factor (Measures: standardized scores for receptive vocabulary, expressive vocabulary, articulation, auditory word recognition, visual only lipreading) indicated significantly different overall performance in the groups, F (1, 91) = 5.74, MSE = 0.808, p = .019, partial η2 = .059. A significant measures × groups interaction indicated that the relationship between groups, however, was not consistent across the measures, F (4, 364) = 30.51, MSE = 0.736, p < .0001, partial η2 = .251. Multiple t-tests with the FDR procedure indicated that auditory word recognition, articulation proficiency, and receptive and expressive vocabulary were significantly better in the NH group, whereas visual only lipreading was significantly better in the HI group. Performance in the NH vs HI groups respectively averaged 99% vs 87% correct for auditory word recognition, 1 vs 5 errors for articulatory proficiency, and standard scores of 115 vs 95 for vocabulary skills. In contrast, visual only lipreading in the NH vs HI groups averaged 11% vs 23%, respectively. Enhanced lipreading ability in individuals with early-onset hearing loss has been reported previously (Lyxell & Holmberg 2000; Auer & Bernstein 2007). Overall, these data indicate that performance differed in the NH vs HI groups only on the speech/language measures, with one exception: visual perception was better in the NH group, even though performance was within the average normal range in both groups. Reasons for this difference are unclear.
Characteristics of the Picture Word Data
Picture naming responses that were incorrect (i.e., misnamed the picture) or flawed (e.g., lapses of attention; triggering the VOR with a nonspeech sound, dysfluency, etc) were deleted on-line and re-administered after intervening items. The total number of trials deleted with replacement averaged about 2.5 in both the NH and HI groups (range = 0–6). The number of missing trials remaining at the end because the replacement trial was also flawed averaged about 0.6 in both groups (range = 0–3).
To control for mishearing a distractor and for categorical knowledge deficiencies, we deleted all trials containing items that were not correct on 1) the distractor repetition task or 2) the category knowledge test. This constraint did not require any deletions in the NH group. In the HI group, performance on the distractor repetition task (N=14) averaged about 13.3 items correct for both the audiovisual and auditory modes, requiring the deletion of about 0.7 items/child (range = 0–4). Performance on the category knowledge task for the pictures and distractors (N=21) averaged about 20.9 items correct in the HI group, with two children requiring the deletion of 1 item each. Overall out of a total of 14 picture-word pairs or trials, the naming times considered below were based, on average, on 13.5 pairs for children in the NH group and 12.8 pairs for children in the HI group.
Effects of Semantic Relatedness
Tables 1a and b summarize average absolute naming times for the unrelated and related distractors in the NH and HI groups for the auditory and audiovisual modes at an SOA of −165 ms (upper panel) and +165 ms (bottom panel). Figures 2a and b depict the effects of semantic relatedness as quantified by adjusted naming times (difference between the two types of distractors) in the groups for the two modes at each SOA. The zero baseline of the ordinate represents absolute naming times for the unrelated distractors (Table 1).
Table 1.
Average absolute naming times for the semantically-related and -unrelated distractors in the NH and HI groups for the auditory and audiovisual modes at an SOA of −165 ms (a) and +165 ms (b). Values are mean naming times in ms (standard deviations in parentheses).
a. SOA of −165 ms

| Picture-Distractor Pairs | NH Group: Auditory | NH Group: Audiovisual | HI Group: Auditory | HI Group: Audiovisual |
|---|---|---|---|---|
| Related | 1481 (411) | 1502 (381) | 1546 (511) | 1624 (458) |
| Unrelated | 1389 (387) | 1434 (416) | 1507 (488) | 1537 (457) |

b. SOA of +165 ms

| Picture-Distractor Pairs | NH Group: Auditory | NH Group: Audiovisual | HI Group: Auditory | HI Group: Audiovisual |
|---|---|---|---|---|
| Related | 1612 (429) | 1711 (447) | 1692 (501) | 1834 (539) |
| Unrelated | 1614 (469) | 1697 (435) | 1766 (522) | 1804 (580) |
Figure 2.
Figures 2a & b. Effects of semantic relatedness as quantified by adjusted naming times (difference between semantically-related and -unrelated distractors) for the auditory and audiovisual distractors in the groups with NH vs HI at SOAs of −165 (2a) and +165 ms (2b). The zero baseline of the ordinate represents absolute naming times for the unrelated distractors (Table 1a & b). A star indicates significant semantic interference or facilitation. Error bars are standard errors of the mean.
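To make the link between Table 1 and Figure 2 explicit, the sketch below recomputes the adjusted naming times (related minus unrelated means; positive values indicate interference, negative values facilitation) from the group means reported in Table 1.

```python
# Adjusted naming times of Figure 2, recomputed from the Table 1 means.

table1 = {  # (group, mode, SOA ms): (related mean, unrelated mean), from Table 1
    ("NH", "auditory", -165): (1481, 1389), ("NH", "audiovisual", -165): (1502, 1434),
    ("HI", "auditory", -165): (1546, 1507), ("HI", "audiovisual", -165): (1624, 1537),
    ("NH", "auditory", +165): (1612, 1614), ("NH", "audiovisual", +165): (1711, 1697),
    ("HI", "auditory", +165): (1692, 1766), ("HI", "audiovisual", +165): (1834, 1804),
}
for (group, mode, soa), (related, unrelated) in table1.items():
    print(f"{group} {mode:11s} SOA {soa:+4d} ms: adjusted = {related - unrelated:+d} ms")
# Note the -74 ms value (facilitation) for HI auditory at +165 ms and the positive
# values (interference) at -165 ms, matching the significant contrasts reported below.
```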
We have a complex factorial design with one between-participants factor (Group: NH vs HI) and three within-participants factors (SOA: −165 ms vs +165 ms, Mode: auditory vs audiovisual, and Type of Distractor: unrelated vs related). In this circumstance, an omnibus factorial analysis of variance addressing only global effects is typically less powerful than more focused approaches that address specific predictions/effects (Rosenthal et al. 2000; Abdi et al. 2009). Thus we carried out planned orthogonal contrasts (Abdi & Williams 2010) (see Supplemental Digital Content 3 for results of omnibus analysis). The contrasts below address effects of semantic relatedness in terms of the 1) lexical selection by competition hypothesis, 2) competition threshold hypothesis, 3) mode, 4) SOA, and 5) semantic knowledge. Our predictions are summarized in Table 2.
Lexical Selection by Competition Hypothesis
Planned orthogonal contrasts evaluated whether the semantically-related vs -unrelated naming times (Figures 2a and b) differed significantly, an outcome that would indicate significant effects of semantic relatedness as predicted by the lexical selection by competition hypothesis. Results at −165 ms SOA indicated significant semantic interference 1) in the NH group for both the auditory and audiovisual modes, respectively Fcontrast (1,91) = 8.63, MSE = 21758.372, p = .004, partial η2 = .086, and Fcontrast (1,91) = 4.66, MSE = 21758.372, p = .033, partial η2 = .048, and 2) in the HI group for the audiovisual mode, Fcontrast (1,91) = 7.63, MSE = 21758.372, p = .007, partial η2 = .077. Results at +165 ms SOA indicated significant semantic facilitation in the HI group for the auditory mode, Fcontrast (1,91) = 5.58, MSE = 21758.372, p = .020, partial η2 = .058. Interestingly, Tables 1a and b show that the absolute naming times in both groups were consistently slower (about 200–300 ms) at +165 ms relative to −165 ms SOA with one exception, namely the facilitated semantically-related times in the HI group for auditory input. Thus the facilitation effect for the poorer fidelity auditory input seems to represent a true speeding up of the semantically-related times. No other significant results were observed.
Results in the NH group for the auditory and audiovisual modes and results in the HI group for the audiovisual mode showed significant semantic interference at −165 ms SOA and no effect at +165 ms SOA. This pattern of results is consistent with the lexical selection by competition hypothesis. Results in the HI group for the auditory mode, however, are not consistent with the lexical selection by competition hypothesis.
Competition Threshold Hypothesis
To address the predictions of the competition threshold hypothesis, we may apply the above planned orthogonal contrasts evaluating whether the semantically-related vs -unrelated naming times for the auditory mode in the HI group (Figure 2) differed significantly (i.e., showed semantic interference or facilitation). Results for the auditory mode in the HI group indicated no effect of semantic relatedness at −165 ms SOA and significant semantic facilitation at +165 ms SOA, p = .020 as reported above. Results support the competition threshold hypothesis.
Mode of the Distractor
To address the predictions based on our previous results in the HI group on the multi-modal task with phonological distractors, planned orthogonal contrasts evaluated whether the adjusted naming times (Figure 2) collapsed across SOA differed significantly 1) between the auditory vs audiovisual modes for the HI group and 2) between the HI vs NH groups for each mode. Results for the auditory vs audiovisual modes in the HI group indicated that adjusted naming times differed significantly, Fcontrast (1,91) = 17.82, MSE = 7065.696, p <.001, partial η2 = .164. Results in the HI vs NH groups for the different modes indicated that adjusted naming times differed significantly only for the auditory mode, Fcontrast (1,91) = 12.17, MSE = 7065.696, p <.001, partial η2 = .118. This outcome mirrors the results for the phonological distractors and supports the supposition that adding visual speech produces more normalized results.
SOA
To address the predictions based on our previous results in a similar group of children with HI on the cross-modal picture word task with semantic auditory distractors, we may apply the Fcontrast results for the competition threshold hypothesis. Results for the auditory mode in the HI group (Figure 2) indicated no effects of semantic relatedness at the leading SOA (−165 ms) and significant semantic facilitation at the lagging SOA (+165 ms), p = .020 as reported above. This pattern of results contrasts with our previous results on the cross-modal picture word task, which showed pronounced semantic interference in the HI group at both the leading and lagging SOAs (i.e., −150 ms and +150 ms).
Semantic Capabilities
To address the predictions based on our theories and research about semantic development in children with HI, we may apply the Fcontrast results for the mode of the distractor for the HI vs NH groups. Results indicated that the adjusted naming times differed significantly between groups only for the auditory mode, p <.001 as reported above. Thus results do not support the idea that semantic representations for our set of lexical items are impoverished in the current HI group. Higher level difficulties associated with less rich and robust semantic representations should have affected the results for both modes of input.
Individual Variability in the HI Group
To probe individual variability in the semantic facilitation effect (auditory mode, Figure 2b) and interference effect (audiovisual mode, Figure 2a) due to different degrees of HI and age, we conducted multiple regression analyses (see Supplemental Digital Content 4 for results). Neither the degree of HI nor age significantly influenced results. The maximum variance in performance accounted for by either the combined or unique influences of degree of HI and age ranged from only 0–7%.
Discussion
This research applied a new multi-modal picture word task to examine how poorer fidelity auditory input in children with HI may influence semantic access by speech. Our multi-modal approach allowed us to 1) quantify semantic access by lower fidelity auditory speech and 2) probe whether the addition of visual speech enriched the fidelity of the auditory input sufficiently to promote more normalized results. Below we focus on examining these issues in the HI group in terms of the competition threshold hypothesis, semantic capabilities, and previous results in the current or similar children with HI on picture-word tasks.
If we generalize the competition threshold hypothesis to our study, it suggests that the poorer fidelity auditory semantically-related (relative to -unrelated) distractors will produce no effect or semantic facilitation, rather than interference, on picture word tasks. Our results for the auditory mode offered clear support for the competition threshold hypothesis. The lower fidelity auditory speech heard by children with HI affected the normalcy of semantic access. The competition threshold hypothesis did not model the effects of SOA, but our results indicated that SOA is a critical determinant of the outcome. Results for the auditory distractors in the HI group (Figures 2a & b) indicated a null effect at −165 ms SOA and a facilitation effect at +165 ms SOA. This outcome implies that the null and facilitation effects in these children were not either/or effects. Initially the poorer fidelity auditory distractors did not produce any effect; with time the initial null effect morphed into a facilitation effect. Finally, these results for the auditory mode do not agree with our previous results on the cross-modal task in a similar group of children with HI. The previous results revealed pronounced semantic interference at both SOAs. Further research is needed to resolve this difference.
With regard to the mode of the distractor, the addition of visual speech transformed the pattern of results. In the presence of visual speech, the semantic distractors produced an interference effect at −165 ms SOA and no effect at +165 ms SOA, yielding a pattern of results typical of normal children on the multi-modal task (Figure 2) and children and adults on the cross-modal task (Schriefers et al. 1990; Jerger et al. 1994; Damian & Martin 1999; Hanauer & Brooks 2003, 2005; Jerger et al. 2002a, 2002c; Seiger-Gardner & Schwartz 2008).
Finally, a consistent implication in both the current and the Jerger et al. (2002a) picture word studies is that the organization of semantic memory is well structured in terms of categorical knowledge in children with HI. Although the items of our cross-modal and multi-modal tasks are early learned and highly familiar, the pronounced semantic relatedness effects observed in both studies suggest that the organization of semantic memory and semantic representations do not differ in children with NH vs HI. Early lexical learning appears robust over a range of early auditory sensory experiences. This idea is also consistent with our previous semantic results on a category verification task assessing category typicality and out-of-category relatedness effects in children with HI (Jerger et al. 2006).
In short, this research applied a multi-modal picture word task to investigate semantic access by auditory and audiovisual speech. A value of our newly developed on-line approach is in delineating the information that becomes available to listeners when a word is spoken. Results highlighted the critical importance of audiovisual speech in promoting the normalcy of semantic access by spoken words in children with HI.
SDC 1 pdf. Table detailing the 7 pictured objects and 14 word distractors comprising the semantically-related and -unrelated picture-word pairs.
SDC 2 pdf. Table detailing average ages and results on a set of nonverbal and verbal measures in the groups along with citations for the measures.
SDC 3 pdf. Table detailing results of an omnibus factorial analysis with one between-participants factor (Group: NH vs HI) and three within-participants factors (SOA: −165 ms vs +165 ms, Mode: auditory vs audiovisual, and Type of Distractor: unrelated vs related).
SDC 4 pdf. Summary of multiple regression analyses investigating whether the degree of HI and age affected the facilitation effect for the auditory mode at +165 ms SOA (Fig. 2b) or the interference effect for the audiovisual mode at −165 ms SOA (Fig. 2a). For both assessments, the criterion variable was adjusted naming times and the predictor variables were the standard scores for age and auditory word recognition, our proxy variable for degree of HI.
Supplementary Material
Acknowledgments
Source of Funding. This work was supported by the National Institute on Deafness and Other Communication Disorders, grant DC-00421.
We thank Dr. Alice O’Toole for her advice and assistance in recording our audiovisual stimuli. We thank the children and parents who participated and the research staff who assisted, namely Elizabeth Mauze of CID-Washington University School of Medicine and Karen Banzon, Sarah Joyce Bessonette, Carissa Dees, K. Meaghan Dougherty, Alycia Elkins, Brittany Hernandez, Kelley Leach, Michelle McNeal, Anastasia Villescas of UT-Dallas (data collection, analysis, and/or presentation), and Derek Hammons and Scott Hawkins of UT-Dallas and Brent Spehar of CID-Washington University (computer programming). We thank the anonymous reviewers for their constructive and consultative comments.
Footnotes
Conflicts of Interests: None are declared.
References
- Abdi H, Edelman B, Valentin D, et al. Experimental design and analysis for psychology. New York: Oxford University Press; 2009. [Google Scholar]
- Abdi H, Williams L. Contrast analysis. In: Salkind N, editor. Encyclopedia of Research Design. Thousand Oaks, CA: Sage; 2010. pp. 243–251. [Google Scholar]
- American, National, Standards, Institute, & (ANSI) Specifications for audiometers (ANSI S3.6-2004) New York: Author; 2004. [Google Scholar]
- Auer E, Bernstein L. Enhanced visual speech perception in individuals with early-onset hearing impairment. J Speech Lang Hear Res. 2007;50:1157–1165. doi: 10.1044/1092-4388(2007/080). [DOI] [PubMed] [Google Scholar]
- Beery K, Beery N. The Beery-Buktenica developmental test of visual-motor integration with supplemental developmental tests of visual perception and motor coordination. 5. Minneapolis: NCS Pearson, Inc; 2004. [Google Scholar]
- Benjamini Y, Hochberg Y. Controlling the false discovery rate: A practical and powerful approach to multiple testing. J Royal Stat Soc Series B (Methodological) 1995;57:289–300. [Google Scholar]
- Benjamini Y, Krieger A, Yekutieli D. Adaptive linear step-up procedures that control the false discovery rate. Biometrika. 2006;93:491–507. [Google Scholar]
- Bjorklund D. How age changes in knowledge base contribute to the development of children’s memory: An interpretive review. Developmental Review. 1987;7:93–130. [Google Scholar]
- Bloom P. How children learn the meanings of words. Cambridge, MA: The MIT Press; 2000. [Google Scholar]
- Borg E, Edquist G, Reinholdson A, et al. Speech and language development in a population of Swedish hearing-impaired pre-school children, a cross-sectional study. International Journal of Pediatric Otorhinolaryngology. 2007;71:1061–1077. doi: 10.1016/j.ijporl.2007.03.016. [DOI] [PubMed] [Google Scholar]
- Briscoe J, Bishop D, Norbury C. Phonological processing, language, and literacy: A comparison of children with mild-to-moderate sensorineural hearing loss and those with specific language impairment. J Child Psychol Psychiatry. 2001;42:329–340. [PubMed] [Google Scholar]
- Brownell R. Expressive one-word picture vocabulary test. 3. Novato, CA: Academic Therapy Publications; 2000. [Google Scholar]
- Cowan N. Attention and memory. An integrated framework. New York: Oxford University Press; 1995. [Google Scholar]
- Damian M, Bowers J. Locus of semantic interference in picture-word interference tasks. Psychonomics Bull Rev. 2003;10:111–117. doi: 10.3758/bf03196474. [DOI] [PubMed] [Google Scholar]
- Damian M, Martin R. Semantic and phonological codes interact in single word production. J Exp Psychol: Learn, Mem, and Cogn. 1999;25:345–361. doi: 10.1037//0278-7393.25.2.345. [DOI] [PubMed] [Google Scholar]
- Damian M, Vigliocco G, Levelt W. Effects of semantic context in the naming of pictures and words. Cogn. 2001;81:B77–B86. doi: 10.1016/s0010-0277(01)00135-4. [DOI] [PubMed] [Google Scholar]
- Davis J, Elfenbein J, Schum R, et al. Effects of mild and moderate hearing impairments on language, educational, and psychosocial behavior of children. J Speech Hear Dis. 1986;51:53–62. doi: 10.1044/jshd.5101.53. [DOI] [PubMed] [Google Scholar]
- Dunn L, Dunn L. The peabody picture vocabulary test-III. 3. Circle Pines, MN: American Guidance Service; 1997. [Google Scholar]
- Erber N. Interaction of audition and vision in the recognition of oral speech stimuli. J Speech Hear Res. 1969;12:423–425. doi: 10.1044/jshr.1202.423.
- Estabrooks W. Auditory-verbal therapy and practice. Washington, DC: Alexander Graham Bell Association for the Deaf; 2006.
- Finkbeiner M, Caramazza A. Now you see it, now you don’t: On turning semantic interference into facilitation in a Stroop-like task. Cortex. 2006;42:790–796. doi: 10.1016/s0010-9452(08)70419-2.
- Fitzpatrick E, Crawford L, Ni A, Durieux-Smith A. A descriptive analysis of language and speech skills in 4- to 5-yr-old children with hearing loss. Ear Hear. 2011;32:605–616. doi: 10.1097/AUD.0b013e31821348ae.
- Gilbertson M, Kamhi A. Novel word learning in children with hearing impairment. J Speech Hear Res. 1995;38:630–642. doi: 10.1044/jshr.3803.630.
- Goldman R, Fristoe M. Goldman-Fristoe 2 test of articulation. Circle Pines, MN: American Guidance Service; 2000.
- Hanauer J, Brooks P. Developmental change in the cross-modal Stroop effect. Percept Psychophys. 2003;65:359–366. doi: 10.3758/bf03194567.
- Hanauer J, Brooks P. Contributions of response set and semantic relatedness to cross-modal Stroop-like picture-word interference in children and adults. J Exp Child Psychol. 2005;90:21–47. doi: 10.1016/j.jecp.2004.08.002.
- Hicks C, Tharpe A. Listening effort and fatigue in school-age children with and without hearing loss. J Speech Lang Hear Res. 2002;45:573–584. doi: 10.1044/1092-4388(2002/046).
- Jerger S, Damian M, Mills C, et al. Effect of perceptual load on semantic access by speech in children. J Speech Lang Hear Res. 2013. doi: 10.1044/1092-4388(2012/11-0186).
- Jerger S, Damian M, Tye-Murray N, et al. Effects of childhood hearing loss on organization of semantic memory: Typicality and relatedness. Ear Hear. 2006;27:686–702. doi: 10.1097/01.aud.0000240596.56622.0c.
- Jerger S, Damian MF, Spence MJ, et al. Developmental shifts in children’s sensitivity to visual speech: A new multimodal picture-word task. J Exp Child Psychol. 2009a;102:40–59. doi: 10.1016/j.jecp.2008.08.002.
- Jerger S, Elizondo R, Dinh T, et al. Linguistic influences on the auditory processing of speech by children with normal hearing or hearing impairment. Ear Hear. 1994;15:138–160. doi: 10.1097/00003446-199404000-00004.
- Jerger S, Lai L, Marchman V. Picture naming by children with hearing loss: I. Effect of semantically-related auditory distractors. J Am Acad Audiol. 2002a;13:463–477.
- Jerger S, Lai L, Marchman V. Picture naming by children with hearing loss: II. Effect of phonologically-related auditory distractors. J Am Acad Audiol. 2002b;13:478–492.
- Jerger S, Martin R, Damian M. Semantic and phonological influences on picture naming by children and teenagers. J Mem Lang. 2002c;47:229–249.
- Jerger S, Stout G, Kent M, et al. Auditory Stroop effects in children with hearing impairment. J Speech Hear Res. 1993;36:1083–1096. doi: 10.1044/jshr.3605.1083.
- Jerger S, Tye-Murray N, Abdi H. Role of visual speech in phonological processing by children with hearing loss. J Speech Lang Hear Res. 2009b;52:412–434. doi: 10.1044/1092-4388(2009/08-0021).
- Kahneman D. Attention and effort. Englewood Cliffs, NJ: Prentice-Hall; 1973.
- Levelt W, Roelofs A, Meyer A. A theory of lexical access in speech production. Behav Brain Sci. 1999;22:1–75. doi: 10.1017/s0140525x99001776.
- Lyxell B, Holmberg I. Visual speechreading and cognitive performance in hearing-impaired and normal hearing children (11–14 years). Br J Educ Psychol. 2000;70:505–518. doi: 10.1348/000709900158272.
- MacLeod A, Summerfield Q. Quantifying the contribution of vision to speech perception in noise. Br J Audiol. 1987;21:131–141. doi: 10.3109/03005368709077786.
- Moeller M. Combining formal and informal strategies for language assessment of hearing-impaired children. J Acad Rehabil Audiol. 1988;21:S73–S99.
- Moeller M, Tomblin J, Yoshinaga-Itano C, et al. Current state of knowledge: Language and literacy of children with hearing impairment. Ear Hear. 2007;28:740–753. doi: 10.1097/AUD.0b013e318157f07f.
- Moeller M, Watkins S, Schow R. Audiologic rehabilitation for children: Assessment and management. In: Schow R, Nerbonne M, editors. Introduction to audiologic rehabilitation. Boston: Allyn and Bacon; 1996. pp. 288–360.
- Moore B. Perceptual consequences of cochlear hearing loss and their implications for the design of hearing aids. Ear Hear. 1996;17:133–160. doi: 10.1097/00003446-199604000-00007.
- Mozolic J, Hugenschmidt C, Peiffer A, et al. Multisensory integration and aging. In: Murray M, Wallace M, editors. The neural bases of multisensory processes. Boca Raton, FL: CRC Press; 2012. pp. 381–394.
- Osberger M, Hesketh L. Speech and language disorders related to hearing impairment. In: Lass N, Northern J, McReynolds L, Yoder D, editors. Handbook of speech-language pathology and audiology. Toronto: B. C. Decker; 1988. pp. 858–886.
- Peltzer L. Oral Motor Function Questionnaire. 1997. Unpublished.
- Piai V, Roelofs A, Schriefers H. Distractor strength and selective attention in picture-naming performance. Mem Cogn. 2012;40:614–627. doi: 10.3758/s13421-011-0171-3.
- Rabbitt P. Channel capacity, intelligibility, and immediate memory. Q J Exp Psychol. 1968;20:241–248. doi: 10.1080/14640746808400158.
- Rader K. Rader near point vision test. Tulsa, OK: Modern Education Corp; 1977.
- Rosenthal R, Rosnow R, Rubin D. Contrasts and effect sizes in behavioral research: A correlational approach. Cambridge: Cambridge University Press; 2000.
- Ross M, Lerman J. Word intelligibility by picture identification. Pittsburgh: Stanwix House, Inc; 1971.
- Salthouse T. Speed of behavior and its implications for cognition. In: Birren J, Schaie K, editors. Handbook of the psychology of aging. 2nd ed. New York: Van Nostrand Reinhold Co; 1985. pp. 400–426.
- Schriefers H, Meyer A, Levelt W. Exploring the time course of lexical access in language production: Picture-word interference studies. J Mem Lang. 1990;29:86–102.
- Seiger-Gardner L, Schwartz R. Lexical access in children with and without specific language impairment: A cross-modal picture-word interference study. Int J Lang Commun Disord. 2008;43:528–551. doi: 10.1080/13682820701768581.
- Stiles D, Bentler R, McGregor K. The speech intelligibility index and the pure-tone average as predictors of lexical ability in children fit with hearing aids. J Speech Lang Hear Res. 2012;55:764–778. doi: 10.1044/1092-4388(2011/10-0264).
- Sumby W, Pollack I. Visual contributions to speech intelligibility in noise. J Acoust Soc Am. 1954;26:212–215.
- Tye-Murray N. Foundations of aural rehabilitation: Children, adults, and their family members. 3rd ed. San Diego: Singular Publishing Group; 2009.
- Tye-Murray N, Geers A. Children’s audio-visual enhancement test. St. Louis, MO: Central Institute for the Deaf; 2001.
- Werker J, Fennell C. Listening to sounds versus listening to words: Early steps in word learning. In: Hall D, Waxman S, editors. Weaving a lexicon. Cambridge, MA: MIT Press; 2004. pp. 79–109.
- Wingfield A, Tun P, McCoy S. Hearing loss in older adulthood: What it is and how it interacts with cognitive performance. Curr Dir Psychol Sci. 2005;14:144–148.
- Yoshinaga-Itano C, Downey D. A hearing-impaired child’s acquisition of schemata: Something’s missing. Top Lang Disord. 1986;7:45–57.