Abstract
Objectives:
To evaluate the subconscious knowledge of between-word phonological similarities in children with cochlear implants as compared to children with typical hearing.
Design:
Participants included 30 children with cochlear implants between the ages of five and seven who used primarily spoken English to communicate, 30 children matched for chronological age, and 30 children matched for vocabulary size. Participants completed an animacy judgement task under three conditions: (a) a neutral condition; (b) a phonological prime condition, in which the consonant and vowel onset of the pictured word was presented before the visual target appeared; and (c) an inhibition prime condition, in which a consonant and vowel onset not matching the pictured word was presented before the target appeared. Reaction times were recorded.
Results:
Children with cochlear implants responded to the primes differently, and more slowly, than children with typical hearing in both comparison groups: children with typical hearing experienced a phonological facilitation effect in the phonological prime condition, whereas children with cochlear implants did not. Overall reaction times of children with cochlear implants were also slower than those of children matched for chronological age but similar to those of children matched for vocabulary size.
Conclusions:
The different pattern of phonological facilitation and inhibition effects may indicate that children with cochlear implants use phonological organization strategies that differ from those of children with typical hearing.
Cochlear implants allow unprecedented access to sound for children with severe to profound hearing loss. However, there is mounting evidence (Kenett et al. 2013; Weschler-Kashi et al. 2014) that access to phonemes alone is not sufficient for children with cochlear implants to develop phonological organization strategies that are the same as those of their peers with typical hearing. Difficulties developing an efficient phonological organization system for lexical knowledge could yield negative outcomes related to word learning and retention, or even later academic skills (e.g., Metsala & Walley 1998; Munson et al. 2005; Leach & Samuel 2007). The purpose of this study is to evaluate the subconscious knowledge of between-word phonological similarities in children with cochlear implants as compared to children with typical hearing of the same age and younger children of the same vocabulary size.
Phonological Organization
Children with cochlear implants, perhaps predictably, experience difficulties with speech perception (Pyman et al. 2000). Those difficulties may result from technological limitations of hearing devices, which make a particular sound hard to process in a given moment; from periods of auditory deprivation prior to implantation, which leave a cochlear implant user with less experience perceiving speech than peers with typical hearing; or from both factors (e.g., Kral et al. 2016). Children who experience difficulties with speech perception might then also be expected to have difficulty evaluating phonological similarities between words. However, the perception and recognition of phonemes within words represents only one part of auditory processing of spoken language. Other domains of language, including semantic characteristics (i.e., the meaning of linguistic tokens) and lexical characteristics (i.e., typical sound-sequence patterns and regularities within words, captured by constructs like neighborhood density and word frequency), influence the recognition, retention and retrieval of spoken words (e.g., Nittrouer & Boothroyd 1990).
When a person learns a new word, that word must be integrated with his or her existing word knowledge (Leach and Samuel 2007). When one is able to organize known words according to linguistic properties of those words (e.g., semantic, syntactic or phonological properties), that person can efficiently access those items for receptive recognition and expressive use (e.g., Siew & Vitevitch 2016). A first step in spoken word recognition is speech perception, whereby a stream of speech is recognized and processed for meaning (e.g., Marslen-Wilson, 1987). The TRACE model of speech perception, for example, proposes that as a person receives auditory information, the brain interprets each piece of information both according to phonological knowledge and lexical knowledge, thus activating both phoneme and word candidates until sufficient information is available to achieve word recognition (McClelland and Elman 1986). In other words, processing of a spoken word may occur by attending incrementally to word segments as they appear and in parallel via processing of phonemes that come before and after segments (e.g., forward and backward processing; Marslen-Wilson & Warren 1994; McClelland & Elman 1986). However, successful and efficient word recognition must also involve some level of organization of the lexicon: a full phoneme inventory and lexicon size for a person learning English, for example, would require millions of units and billions of connections to process (Hannagan, Magnuson & Grainger 2013). Linguistic processing is substantially improved when one can use a dynamic organizational network to apply knowledge across multiple linguistic domains (e.g., lexical, phonological, semantic and syntactic) to the task of word recognition.
The concept of a lexical storage network is derived from the spreading-activation theory of semantic processing (e.g., Collins & Loftus 1975). This theory postulates that lexical concept items are linked in a mental storage network whereby related words or concepts are connected, not only via semantic knowledge, but also additional domains of linguistic knowledge. For example, activation of the concept “dog” may in turn activate taxonomic associations (e.g., animal), syntactic associations (e.g., barks), phonological associations (e.g., dig), or semantic associations (e.g., bone; Collins & Loftus 1975).
Specific to phonologically-based lexical organization, the Lexical Restructuring Model provides a hypothesis for how children begin to attend to increasingly segmental phonological features of words (Metsala and Walley 1998). This model leads to predictions that, as children acquire increasingly dense vocabularies (e.g., words that are very similar, such as dog, dot, log and dig), they begin to attend to incremental pieces, or phonemes, within words rather than representing and processing words holistically. That is, when children acquire words in high-density networks, they must find a way to organize those words phonologically such that hearing the beginning of a word, such as /d/, efficiently activates the words that correspond to that initial phoneme (e.g., dog, dot, dig) and does not activate other lexical items (e.g., log); for further explanation, see the Neighborhood Activation Model (Luce & Pisoni, 1998). The words that are activated, particularly early in the process of speech perception, are lexical competitors, and the listener must use additional information to determine which of those competitors should be inhibited (i.e., not recognized as the word the speaker is saying) and which word should be identified as the intended word (Marslen-Wilson, 1987).
Studies of children’s development of incremental versus holistic processing do indicate changes in word representations over time as children’s vocabularies expand (Brooks and MacWhinney 2000). Because the Lexical Restructuring Model leads to predictions that children attend increasingly to phonemes within words, it has been linked to the development of phonological awareness in children, which is a predictor of later reading outcomes (e.g., Goodrich & Lonigan, 2015; Ainsworth et al. 2016). Thus, lexical organization at the phonological level plays a role in the development of later academic skills. Additionally, increases in vocabulary knowledge and the development of a phonologically-based organization system have implications for the activation of phonological competitors. Amongst children with typical hearing, it appears that vocabulary acquisition and the subsequent development of a phonological organization system follow a developmental trajectory (Rigler et al., 2015; Sekerina & Brooks, 2007). Further, when school-age children with typical hearing are presented with lower-intensity speech, lexical competitors appear to remain activated longer than they do for conversational speech (Hendrickson, Oleson, & Walker, 2021). This finding further supports the hypothesis that children who have imperfect access to sound (e.g., children with cochlear implants) are likely to experience difficulties related to speech processing and to lexical and phonological competition.
Assessment of Phonological Organization
As a construct, phonological organization has been assessed in a number of different ways. The present study focused on phonological priming. Phonological priming tasks have been used to strategically assess phonological organization systems in neurotypical children with typical hearing. For children who have access to a phonological organization system, a phonological prime that occurs prior to a target word should either enhance or inhibit responses to that target. For example, Melnick, Conture and Ohde (2003) assessed the effects of phonological primes (an auditory presentation of the initial consonant and vowel of the target word) on picture-naming reaction times for children who were three and five years old. Children, particularly five-year-old children, responded more quickly when the prime was related to the target pictures than when it was unrelated. Using a similar paradigm, Jerger, Martin and Damian (2002) also found an effect of initial consonant-plus-vowel-onset auditory distractors in five- and seven-year-old children. Other studies of priming have assessed receptive, non-speech reactions by presenting a phonological prime (word onsets) and then asking children to make a lexical decision (e.g., Is this a word?; Bonte & Blomert 2004; Sankar, Malik & Shanbal 2009) or an animacy judgement (e.g., Is this an animal?; Velez and Schwartz 2010). Priming effects are also evident for receptive decision tasks: children, particularly those who are older and/or judged to have better access to phonological organization systems, experience greater facilitation effects when the phonological prime matches the target word.
Similarly, gating paradigms are also used to assess a child’s word recognition based on phonological cues. Within a gating paradigm, children are given partial-word representations, sometimes embedded in a sentence, and must guess the intended word. Often, words in gating paradigms are presented repeatedly, with incrementally more acoustic information in each presentation, allowing researchers to track thresholds of the acoustic information necessary for word recognition (Grosjean, 1980). Children must use the phonological and lexical information from the partial-word representation and, sometimes, semantic information from the carrier sentence to infer the meaning of a target. Gating tasks indicate that children with typical hearing develop more fine-grained phonological representations of words as they age, consistent with the Lexical Restructuring Model (e.g., Metsala, 1997).
Other ways of assessing phonological organization have included verbal fluency tasks (e.g., Troyer et al. 1997; Riva et al. 2000; Sauzeon et al. 2004), repeated word association (e.g., Sheng and McGregor, 2010) and error analysis (e.g., Biran et al., 2019). With these methodologies, a child’s semantically or syntactically based organizational networks are likely to be activated alongside phonological organization networks, limiting the ability to directly analyze the strength of a phonologically-based lexical organization strategy. Expressive tasks, such as verbal fluency, repeated word association, and naming likely activate semantic associations as well as phonological associations and can be used to analyze the primacy of semantic versus phonological organization (children tend to rely more heavily on semantic organization strategies; e.g., Riva et al. 2000; Sauzeon et al. 2004), but they do not necessarily allow researchers to focus on phonological organization alone.
Phonological Organization in Children with Cochlear Implants
There is an emerging focus in the research literature on the lexical processing and organization systems of children with cochlear implants. Schwartz and colleagues (2013) reported preliminary data evaluating the effects of phonological priming on lexical access to words (necessary for word recognition and production) to gain insight into language processing differences between children with cochlear implants and children with typical hearing. In a lexical judgement task, children with cochlear implants (n = 18) experienced greater inhibition effects than children with typical hearing when the target word was preceded by a word with a shared onset. Children with cochlear implants (n = 30) also responded more quickly in a picture-picture interference priming (visual) paradigm than in a picture-word interference priming paradigm, which the authors interpreted as delayed auditory processing in children with cochlear implants. Finally, children with cochlear implants who participated in eye-tracking tasks (n = 11) tended to look longer at phonologically related competitor words (e.g., words that shared a rhyme or the same word onset) than did children with typical hearing (Schwartz et al. 2013). This series of studies indicates that children with cochlear implants are likely to hold on to competitor words during lexical processing longer than their peers with typical hearing. Delays in auditory processing, perhaps as a result of longer activation periods for competitor words, could relate to a child’s development of and access to a phonological organization system.
Studies that have attempted to evaluate lexical organization systems have indicated that children with cochlear implants may organize lexical items differently, or less efficiently, than their typical-hearing peers. Weschler-Kashi et al. (2014) and Kenett et al. (2013) determined, in analyses of verbal fluency task responses, that children with cochlear implants have access to semantic and phonological lexical organization systems, but that their phonological representations appear less mature than those of their age-matched peers with typical hearing. Lexical organization differences based on other semantic variables, such as taxonomic knowledge, have also been reported specifically for children with cochlear implants (e.g., Lund & Dinsmoor 2016). These organizational issues may relate to the processing delays observed in adults with cochlear implants. McMurray and colleagues (2017) determined that adults with cochlear implants were able to process speech incrementally, but committed to the target item more slowly and with less confidence than age-matched listeners with typical hearing. It is possible that processing delays are the result of processing a degraded sound signal via the cochlear implant device (McMurray et al. 2018); however, lexical organization strategies could also contribute to response delays.
Jerger and colleagues (2016) specifically assessed the effects of phonological priming in children with hearing loss as compared to children with typical hearing between the ages of four and fourteen. Results demonstrated that children with hearing loss experienced priming effects similar to children with typical hearing, and those children with better auditory word recognition scores experienced priming effects more strongly than children who had worse recognition scores. The nature of this study, however, does not allow for conclusions to be drawn specifically about the lexical organization skills of children with cochlear implants (who often begin life with more severe degrees of hearing loss than children who use hearing aids long term and who can have different auditory and linguistic trajectories than children with hearing aids; e.g., Ashori, 2020; Lund et al. 2021). The majority of the children wore hearing aids, and most participants experienced pure-tone average degrees of hearing loss in the range of 21 to 80 dBHL. Given the results of studies of children with cochlear implants specifically, further investigation of the effects of phonological priming as an indicator of phonological organization strategy use is warranted.
The purpose of the present study is to evaluate the subconscious awareness of between-word phonological relations for children with cochlear implants, as compared to children with typical hearing matched for age and matched for vocabulary size, via a phonological priming task. The research question this study sought to address was: Do early-elementary-aged children with typical hearing experience greater facilitation from word-onset phonological primes during animacy judgement tasks than children with cochlear implants or younger children with typical hearing?
Materials and Methods
Participants
This study was approved by the Texas Christian University Institutional Review Board. Three groups of children participated: 30 with cochlear implants (cochlear implant group), 30 matched for age (age-matched group), and 30 children matched for vocabulary size according to the raw score on the Expressive One Word Picture Vocabulary Test (Martin and Brownell 2010a; cochlear implant group M = 68.93, SD = 13.89; vocabulary-matched group M = 68.53, SD = 12.35). Children in the vocabulary-matched group were recruited after children from the cochlear implant group, and recruitment targeted the age range likely to match the raw scores of children with cochlear implants on the Expressive One Word Picture Vocabulary Test. The children in this study also participated in Lund (2019). Children were recruited via flyers sent to local school districts in the southwestern United States and posted in online discussion groups for parents (of children with and without cochlear implants). Interested families contacted the first author to receive details about the study. Those families that chose to participate received a gift card at the end of the study visit.
Children in the comparison groups completed a hearing screening at 25 dBHL across 500, 1000 and 2000 Hz frequencies. Four participants had self-reported diagnoses that were not thought to affect speech and language development. Those diagnoses included seizure disorder, oppositional defiant disorder, and attention deficit hyperactivity disorder. Norm-referenced speech, language, and cognitive scores for those four participants fell within the range of normal as compared to test norms.
All children in the cochlear implant group wore at least one cochlear implant device and used spoken English as the primary mode of communication in the home (i.e., no child or parent used only sign language or used sign language a majority of the time). No parents of children in the study had hearing loss themselves. The four parents who did report using sign language with their children reported that sign was not used as a full linguistic system, but rather included only a few words. Two parents reported that their child was exposed to a second language by a bilingual parent or grandparent, but that second-language exposure did not comprise more than 20% of the child’s regular language exposure. Children were not eligible to participate in the study if they had a diagnosis aside from hearing loss that was likely to be associated with delayed linguistic or overall cognitive development (e.g., Down syndrome). Five parents reported an additional diagnosis; those diagnoses included hypothyroidism, sensory processing disorder, a seizure disorder and attention deficit hyperactivity disorder.
In the cochlear implant group, eight children used an Advanced Bionics device, seventeen wore a Cochlear device, and four children wore a Med-El device. Ten children wore unilateral cochlear implants (two of whom used a hearing aid in the other ear), and twenty children wore bilateral cochlear implants. Within the group, average age at implantation was 22.6 months (SD = 9.17; range = 11 to 38 months) and average amount of time using the implant was 50.3 months (SD = 15.04; range = 30 to 65 months). Most children in this sample had their hearing loss identified at birth via a newborn hearing screening, but five of the total sample did not have their hearing loss identified until they were older than one year (mean age at diagnosis for the entire sample = 6.4 months). It is possible that any of those five children experienced a progressive hearing loss. Scores for those five children on the standardized, norm-referenced descriptive assessments fell within a standard deviation of the group mean for the other 25 children. All children included in this study completed the CID Early Speech Perception Test (Moog and Geers 2012) and demonstrated Consistent Word Identification, meaning they were able to recognize the difference between, and point to, very similar words with a consonant-vowel-consonant pattern from a set of twelve words (e.g., ball versus bed).
All children in the study participated in descriptive assessments, which included: the Expressive One Word Picture Vocabulary Test – Fourth Edition (Martin and Brownell 2010a), the Receptive One Word Picture Vocabulary Test – Fourth Edition (ROWPVT-4; Martin & Brownell 2010b), the Primary Test of Nonverbal Intelligence (Erhler and McGhee 2008), the Test of Early Language Development (Hresko et al. 1999), the CID Early Speech Perception Test and the Arizona Articulation Proficiency Scale – 3 (Fudala 2000). With the exception of the CID Early Speech Perception Test (scored 1–4, with 4 being the highest score), all descriptive measures were norm-referenced with a mean of 100 and standard deviation of 15. The demographic characteristics and aggregate performance of each group are listed in Tables 1 and 2.
Table 1.
Demographic characteristics by group
| | CI group (n = 30) | AM group (n = 30) | VM group (n = 30) |
|---|---|---|---|
| Chronological age (months) | M = 72.90; SD = 11.15; range = 47–84 | M = 72.86; SD = 10.09; range = 48–84 | M = 59.57; SD = 11.06; range = 42–74 |
| Age at PreK entry (months) | M = 29.65; SD = 12.46; range = 3–50 | M = 26.63; SD = 17.24; range = 3–48 | M = 24.90; SD = 9.75; range = 8–40 |
| Race | White: 23; Black: 6; Asian: 1 | White: 26; Black: 3; Asian: 1 | White: 22; Black: 3; Asian: 5 |
| Ethnicity | Hispanic: 8; Non-Hispanic: 22 | Hispanic: 6; Non-Hispanic: 24 | Hispanic: 8; Non-Hispanic: 22 |
| Years of maternal education | M = 16.43; SD = 1.52; range = 11–20 | M = 17.16; SD = 1.51; range = 12–20 | M = 17.23; SD = 1.26; range = 12–20 |
Note. CI = cochlear implant, AM = age-matched, VM = vocabulary matched
Table 2.
Descriptive Measure Performance by Group
| | CI group | AM group | VM group |
|---|---|---|---|
| PTONI standard score | M = 101.80; SD = 16.28; range = 78–131 | M = 108.90; SD = 16.70; range = 81–140 | M = 102.20; SD = 15.94; range = 77–138 |
| TELD standard score | M = 85.33**; SD = 15.50; range = 52–108 | M = 109.03; SD = 8.17; range = 92–123 | M = 111.37; SD = 13.20; range = 85–140 |
| AAPS standard score | M = 86.13**; SD = 9.09; range = 67–102 | M = 94.83; SD = 6.36; range = 78–100 | M = 96.30; SD = 9.51; range = 75–109 |
| EOWPVT standard score | M = 96.03**; SD = 11.13; range = 70–115 | M = 120.03; SD = 10.76; range = 98–145 | M = 110.03*; SD = 9.76; range = 91–132 |
| ROWPVT standard score | M = 93.76**; SD = 14.75; range = 69–121 | M = 114.46; SD = 10.90; range = 94–137 | M = 107.80; SD = 9.76; range = 93–125 |
Note. PTONI = Primary Test of Nonverbal Intelligence (Erhler and McGhee 2008), TELD = Test of Early Language Development (Hresko et al. 1999), AAPS = Arizona Articulation Proficiency Scale (Fudala 2000), ROWPVT = Receptive One Word Picture Vocabulary Test (Martin and Brownell 2010b), EOWPVT = Expressive One Word Picture Vocabulary Test (Martin and Brownell 2010a), CI = cochlear implant, AM = age-matched, VM = vocabulary-matched.
** indicates a group difference from the other two groups with p < .001. Values not marked with ** do not significantly differ from the other groups.
* indicates a group difference (p = .004) between the VM and AM groups.
Experimental Task Development
Forty target words with an age-of-acquisition rating of less than four years old (Kuperman et al. 2013) that also corresponded to pictures with high familiarity ratings (Snodgrass and Vanderwart 1980) were selected for use in the experimental task. Ten adult college students were asked to name all forty pictures identified from the Snodgrass and Vanderwart (1980) set, and naming accuracy was 100% across all adults. These forty pictures and target words were used for the experimental task.
An adult male speaker with no history of speech or hearing problems recorded the priming stimuli. The speaker’s dialect was judged to be General American English, and stimuli were recorded via a high-quality recorder (Olympus LS-100 PCM Recorder) placed approximately 12 inches from the speaker’s mouth in a quiet room. The speaker was asked to produce each of the target words. If a recording was judged to be at all distorted, the speaker re-recorded the word. Recordings were converted to digital files and cut down to only the initial consonant and approximately 50 milliseconds of the vowel onset (M = 49.00 milliseconds, SD = 24.70) using the waveform editor of Adobe Audition (2017).
Three conditions were created: (a) the neutral condition, (b) the inhibition prime condition and (c) the phonological prime condition. All trials across each of the conditions began with a fixation point (“+”) appearing in the middle of the screen and then disappearing. In the inhibition and phonological prime conditions, following the fixation point, an auditory prime played, and 500 ms later the target picture appeared. In the neutral condition, no auditory prime played before the picture appeared. In the inhibition prime condition, the sound segment that played contained an initial consonant that differed from the target picture label’s initial consonant in both place and manner of articulation. For example, if the target picture was “tiger,” the sound segment played was /m/ plus the onset of a vowel. In the phonological prime condition, the initial consonant and vowel onset matched the picture (see Supplemental Appendix A). Sound segment primes were selected, as opposed to whole-word primes, in an attempt to activate only phonological and not additional semantic priming effects, consistent with many other studies of phonological primes (e.g., Jerger et al. 2002; Melnick et al. 2003; Sankar et al. 2009).
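The structure of a single trial can be summarized schematically. The Python sketch below is an illustrative reconstruction of the event sequence only; the actual task was implemented in experiment-presentation software, and the event names here are hypothetical labels:

```python
def trial_events(condition, soa_ms=500):
    """Ordered events in one trial. In the primed conditions, the auditory
    prime (initial consonant plus ~50 ms of vowel onset) precedes the target
    picture by soa_ms; the neutral condition has no auditory prime."""
    if condition not in ("neutral", "phonological", "inhibition"):
        raise ValueError(f"unknown condition: {condition}")
    events = ["fixation_point"]          # "+" appears, then disappears
    if condition != "neutral":
        events.append("auditory_prime")  # CV-onset clip for primed trials
    events.append(f"target_picture_after_{soa_ms}ms")
    events.append("record_button_press_reaction_time")
    return events
```

For example, `trial_events("inhibition")` yields the fixation point, the mismatching CV prime, the picture 500 ms later, and the reaction-time recording.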
Procedure
Priming task. Prior to participation in the priming task, child knowledge of target picture names was confirmed. The pictures to be used in the priming task were shown to children, one at a time, and children were asked to label those pictures. Every child was able to label each of the target pictures. Seven children (two in the cochlear implant group, one in the age-matched group, and four in the vocabulary-matched group) produced a different label than one of the targets the first time it was introduced (e.g., produced “sofa” when asked to label a picture of “couch”), but all seven were able to be re-prompted (i.e., “Can you think of another word for that picture?”) to name the picture with the target label. That is, seven children made one error each when labeling pictures initially.
Child perception of the auditory primes was also assessed to ensure that all children had access to the auditory primes. During a time period that was more than an hour before the priming task was administered, children were asked to imitate the sounds that they heard played by the computer. One by one, children heard each of the sound clips that would be played during the priming task. Children were asked to imitate the sound that they heard. All children were able to approximate the sounds played by the computer. Those children who did not exactly reproduce the sound from the audio clip did reproduce an approximation consistent with that child’s articulation assessment (e.g., if a child tended to produce /w/ instead of /r/ in the articulation assessment, that child also produced /w/ when hearing /r/ as an audio clip).
Children were seated in front of a computer (HP EliteBook 850 laptop with a 15.6-inch screen), with two buttons placed in front of them, and asked to place one hand on each button. One button was green, and had a check mark and the word “yes” printed on it. The other button was red, and had an “X” mark and the word “no” printed on it. Children were given the instructions, “We are going to play a game! I want to see how fast you can decide whether or not the pictures that you will see are animals. If a picture is an animal, I want you to press the green button as fast as you can. If a picture is not an animal, I want you to press the red button as fast as you can. Are you ready? Let’s practice.” The child was then shown the fixation point, followed by a picture of an animal or non-animal (pictures that were not part of the target picture set for the experimental task). The child was asked to press either the red or the green button in response to the picture. The computer then displayed either a check mark or an “X” mark on the screen, depending on the accuracy of the child’s response. Children were allowed ten practice trials. All children demonstrated comprehension of the task and the ability to recognize animals versus non-animals by responding correctly on at least 9 of 10 trials.
After the child was comfortable with the task, the researcher began the experimental program. The priming task was run on E-Prime 2.0 (Psychology Software Tools, Inc. 2012), and consisted of five blocks, requiring 20 trials (child responses) within each block. The first block contained items only in the “neutral prime” condition to gather baseline information about child response times. Children were presented with the fixation point and then, after 500 milliseconds, shown a target picture with an animal or non-animal and asked to respond. After every child response, the computer program indicated whether or not the child was correct. After the child’s response, 3000 milliseconds passed before the next trial began. After each block, the participant was told “Great! Now you only have [number of blocks] levels left to complete.” Between blocks, children were invited to “shake out” their hands and stretch along with the researcher, and were given verbal reinforcement (e.g., “You’re doing so well!” or “Wow, you’re so fast!”).
Blocks two through five included items in the inhibition prime and the phonological prime conditions. Prior to beginning these blocks, children were told, “This time, you might hear a funny sound before you see the picture. Your job is still the same. We want you to push the green button if you see an animal, and the red button if the picture you see is not an animal.” Across the 80 trials divided into blocks of twenty, children saw each of the target pictures twice, once paired with a phonological prime and once paired with an inhibition prime. Each block contained ten phonologically-primed pictures and ten inhibition-primed pictures. No picture appeared twice within the same block. Picture presentation (and prime type) were randomized within each block. Between blocks, the children continued to receive verbal praise for their participation in the task. A video-recorder was set up to monitor the child’s focus on the task throughout each of the blocks.
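The constraints on blocks two through five (each picture appearing once with each prime type, ten trials of each prime type per 20-trial block, and no picture repeated within a block) can be satisfied with a simple rotation scheme. The sketch below is an illustrative reconstruction in Python, not the study's actual randomization code, which was handled by the experiment software:

```python
import random

def build_blocks(pictures, seed=0):
    """Construct four 20-trial blocks from 40 pictures such that every
    picture appears once with each prime type (in different blocks), each
    block has ten trials per prime type, and no picture repeats in a block."""
    assert len(pictures) == 40
    rng = random.Random(seed)
    shuffled = pictures[:]
    rng.shuffle(shuffled)
    # Four disjoint groups of ten pictures each.
    groups = [shuffled[i * 10:(i + 1) * 10] for i in range(4)]
    blocks = []
    for i in range(4):
        # Block i: phonological primes for group i, inhibition primes for
        # the next group (mod 4), so no picture appears twice in a block.
        block = ([(p, "phonological") for p in groups[i]] +
                 [(p, "inhibition") for p in groups[(i + 1) % 4]])
        rng.shuffle(block)  # randomize presentation order within the block
        blocks.append(block)
    return blocks
```

Because the two groups contributing to a block are disjoint, and each group supplies its phonological and inhibition trials to different blocks, all of the constraints described above hold by construction.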
To measure attention to task, a graduate assistant reviewed all videos of all participants during the phonological priming task. A coding manual of attentiveness behaviors was developed to allow the assistant to rate the child’s attentiveness to the task on a scale from 1 to 5, with 5 indicating that the child was fully attentive to every trial (as indicated by looking at the screen and not attempting to talk to the examiner). When the child took his or her eyes off the screen for a trial, or tried to talk to the examiner during a trial, the examiner marked the number of trials for which that occurred within a block. A “5” rating represented attentiveness during all 20 trials within a block, a “4” rating represented inattention during up to 5 trials within a block, a “3” inattention during up to 10 trials, a “2” inattention during up to 15 trials, and a “1” inattention during more than 15 trials. The graduate assistant was trained to use this rating scale by viewing three videos of children who were not participants in the present study and comparing ratings for those children (who varied in attention across blocks) with those of the first author. Agreement for the second and third videos viewed was above 95%, and at this point the graduate assistant was deemed fully trained.
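The rating rule described above maps the number of inattentive trials within a 20-trial block onto the 1-to-5 scale. It can be sketched as a short function (an illustrative sketch only; the study's raters applied the rule by hand from video, and the function name is hypothetical):

```python
def attention_rating(inattentive_trials: int, trials_per_block: int = 20) -> int:
    """Map the count of inattentive trials in a block to the 1-5 attention scale:
    0 trials -> 5; up to 5 -> 4; up to 10 -> 3; up to 15 -> 2; more than 15 -> 1."""
    if not 0 <= inattentive_trials <= trials_per_block:
        raise ValueError("inattentive trial count out of range")
    if inattentive_trials == 0:
        return 5
    if inattentive_trials <= 5:
        return 4
    if inattentive_trials <= 10:
        return 3
    if inattentive_trials <= 15:
        return 2
    return 1
```

A block with, for example, three inattentive trials would receive a rating of 4 under this rule.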
Reliability
All tasks were video recorded, and all test scores for descriptive and experimental tasks were double-entered into a database for analysis. An undergraduate assistant who had not been the initial scorer of any task was asked to review ten randomly selected videos from each of the three participant groups to determine (a) examiner fidelity of priming task administration and (b) attention scale ratings during the priming task.
To evaluate fidelity of administration of tasks as described for (a), the student observed the examiner’s behavior to determine if the examiner gave appropriate instructions, allowed the child to practice the task appropriately, and set up the computer task appropriately. This checklist was completed for thirty priming task videos and fidelity of administration was 100% across all videos.
To evaluate scoring reliability for (b), a research assistant was trained to use the attention rating scale by reading the scoring manual, then viewing and scoring three videos of non-study participants and comparing those scores with the original attention rater’s scores. For the third video, agreement was 100%. The reliability rater then viewed thirty videos and assigned an attention rating to each block within the video. Point-by-point agreement between raters was 89.16%, which was deemed sufficiently reliable to use the original scorer’s coding for analysis.
Data Analysis
To answer the first research question, reaction times during neutral trials, phonologically primed trials, and inhibition primed trials were calculated. Individual participant reaction times for each trial were accessed via the E-Prime 2.0 software. Consistent with other studies of priming, outliers were then identified and excluded. An outlier was defined as any response more than two standard deviations from the participant’s mean reaction time within a block. If a child had more than two outliers in a block, only the two outliers furthest from the mean were excluded. Nearly all children had at least one outlier response; because outliers were identified within-child, no child’s full set of responses was treated as outlying, and no child was excluded from analysis for this reason. This decision was made to be consistent with very conservative criteria for eliminating data: if there were more than two outliers in a block and that child was not removed for inattention-related issues, the author considered the child’s responses to have a naturally wide range and chose not to flag many responses as outliers. The net result of this decision may be to diminish between-group differences (because variability is maintained); however, that was considered an acceptable risk to minimize exclusion of data. Across all groups, approximately 4% of trials were identified as outliers, consistent with other studies of children in this age range (e.g., Jerger et al. 2002). Once outliers were removed from a data set, each reaction time was coded by prime type and block within each participant.
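The exclusion rule above (flag responses more than two standard deviations from the within-block mean, but drop at most the two most extreme) can be sketched as follows. This is an illustrative reconstruction, not the study's analysis code; the function and parameter names are hypothetical:

```python
from statistics import mean, stdev

def exclude_outliers(rts, sd_cutoff=2.0, max_excluded=2):
    """Remove within-block outlier reaction times.

    A response is flagged when it falls more than `sd_cutoff` standard
    deviations from the block mean; at most `max_excluded` responses
    (those furthest from the mean) are actually dropped.
    """
    m, s = mean(rts), stdev(rts)
    # indices of flagged responses
    flagged = [i for i, rt in enumerate(rts) if abs(rt - m) > sd_cutoff * s]
    # if more than `max_excluded` are flagged, keep only the most extreme ones
    flagged.sort(key=lambda i: abs(rts[i] - m), reverse=True)
    drop = set(flagged[:max_excluded])
    return [rt for i, rt in enumerate(rts) if i not in drop]
```

For a block of reaction times such as `[800, 820, 810, 805, 815, 2000]` (milliseconds), only the 2000 ms response lies more than two standard deviations from the block mean and would be excluded.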
Attention ratings were calculated within this study because they are theoretically necessary for interpreting priming data from children in this young age range. Children who are not attending to the task or have sporadic attention to task are likely to skew the results of the study. Therefore, prior to analyzing between- and within-group differences, attention ratings were analyzed. Differences in attention ratings by group and by block were analyzed via descriptive statistics (means and standard deviations for responding). Means and standard deviations for attention rating by group and block are presented in Table 3. Children were excluded from final analysis if they had a score of “3” or below, indicating they were inattentive or distracted during up to 50% of the trials in at least one block of responses. This criterion eliminated none of the age-matched group, four children from the cochlear implant group, and two children from the vocabulary-matched group. The number of outlier responses excluded from analysis was then compared across the remaining participants in each group, and no significant difference was found between groups (p = .56). The children who were excluded did not have standard scores on any descriptive, norm-referenced measure that fell more than half of a standard deviation from the mean of the rest of the included group. Mean response times by block and prime type (phonological or inhibition), along with attention rating by block for the six participants excluded from analysis, are included in the Supplemental Appendix B.
Table 3.
Mean attention ratings (standard deviations) by group and response block
| | CI group | AM group | VM group | Overall mean |
|---|---|---|---|---|
| Block 1 | 4.80 (.55) Range = 3 – 5 | 5.00 (.00) Range = 5 – 5 | 4.80 (.48) Range = 3 – 5 | 4.87 (.43) |
| Block 2 | 4.73 (.74) Range = 2 – 5 | 5.00 (.00) Range = 5 – 5 | 4.87 (.57) Range = 3 – 5 | 4.87 (.54) |
| Block 3 | 4.76 (.73) Range = 2 – 5 | 4.97 (.25) Range = 4 – 5 | 4.73 (.78) Range = 4 – 5 | 4.81 (.62) |
| Block 4 | 4.75 (.64) Range = 2 – 5 | 4.93 (.10) Range = 4 – 5 | 4.79 (.79) Range = 3 – 5 | 4.80 (.64) |
Note. CI = cochlear implant, AM = age-matched, VM = vocabulary matched
To analyze differences in the effect of group (between-subjects independent variable), prime type (within-subjects independent variable), and response block (within-subjects independent variable) on reaction time, a repeated measures analysis of covariance was conducted, with neutral prime reaction time used as a covariate, or baseline response time. Follow-up linear contrasts were planned to compare within-group differences between phonological prime and inhibition prime responses across groups (a total of three planned contrasts: age-matched versus cochlear implant group, age-matched versus vocabulary-matched group, cochlear implant versus vocabulary-matched group). The difference in phonological versus inhibition prime performance was of particular interest for theoretical reasons. It was hypothesized that responses to phonological primes would be fast as a result of close phonological connections and activation of phonological neighbors in the child’s lexicon. Similarly, responses to inhibition primes were expected to be slower as a result of activation of other, phonologically unrelated words. The difference between responses to phonological versus inhibition primes, therefore, is thought to represent the possible strength of activation for phonological versus inhibition lexical neighbors as children make lexical judgments about pictures. If the hypotheses about responses to phonological versus inhibitory primes are correct, children who efficiently activate phonological neighbors (phonological prime) and phonologically unrelated neighbors (inhibition prime) based on prime information should exhibit a large difference in reaction times between those response types. An alpha value of .01 was used as the Bonferroni-corrected standard for concluding statistical significance in a follow-up contrast.
Results
This study sought to compare reaction time responses during an animacy judgment task for trials that included a phonological prime and trials that included an inhibition prime for children with cochlear implants, children with typical hearing matched for age, and children with typical hearing matched for expressive vocabulary size.
Prior to the removal of outlier responses, response accuracy rates were compared across groups. Only accurate responses were considered in final analyses. Mean overall percent accuracy for the cochlear implant group was 90.57% (SD = 8.02), for the age-matched group 91.37% (SD = 9.91), and for the vocabulary-matched group 91.50% (SD = 6.50). A Kruskal-Wallis test revealed no significant differences between groups for overall accuracy (H(2) = .92; p = .63). A follow-up analysis with children excluded for inattention removed also revealed no significant differences between groups for overall accuracy (H(2) = .44, p = .80); within-group analyses across conditions revealed no across-condition differences in accuracy (all p values above .05). When accuracy data were evaluated for differences according to prime type or identification of an animal picture versus a non-animal picture, no significant differences were found between groups (all p values above .05).
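The Kruskal-Wallis test reported above compares groups on the ranks of their pooled scores rather than on the raw values. A minimal sketch of the H statistic (assuming untied values and no tie correction; the study presumably used standard statistical software rather than hand computation):

```python
from itertools import chain

def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic for k independent groups.

    Pools all observations, ranks them (1-based), and compares the
    per-group rank sums. Assumes all values are unique (no ties).
    """
    pooled = sorted(chain.from_iterable(groups))
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # value -> 1-based rank
    n = len(pooled)
    rank_sum_term = sum(sum(rank[v] for v in g) ** 2 / len(g) for g in groups)
    return 12.0 / (n * (n + 1)) * rank_sum_term - 3 * (n + 1)
```

When the groups' rank sums are balanced (e.g., `[1, 4]` versus `[2, 3]`), H is 0; maximally separated groups (e.g., `[1, 2]` versus `[3, 4]`) yield a larger H.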
A repeated measures analysis of covariance was used with prime type (phonological or inhibition) and trial block (Block 1, Block 2, Block 3 or Block 4) entered as within-subjects independent variables and group membership (cochlear implant, age-matched or vocabulary-matched) as the between-subjects independent variable. The dependent variable was mean reaction time by participant for trials with accurate animacy judgements. Mean reaction time during neutral trials (age-matched group: M = 1317.07 ms, SD = 728.15 ms; cochlear implant group: M = 1650.60 ms, SD = 671.37 ms; vocabulary-matched group: M = 1673.18 ms, SD = 454.84 ms) was entered into the model as a covariate. There was not a significant difference in baseline reaction time between groups during neutral trials (F(2, 87) = 3.01; p = .054). Mean neutral-prime reaction time was linearly related to the dependent variable at each level of the independent variables and met assumptions of homoscedasticity and homogeneity of slopes.
The analysis revealed a significant main effect of Group (F(2, 80) = 6.744, p = .002) but not Block (F(1, 80) = 1.586, p = .212) or Prime type (F(1, 80) = .011, p = .918). Additionally, there was a significant Group × Prime type interaction (F(1, 80) = 4.465, p = .015) and a significant Block × Group × Prime type interaction (F(2, 80) = 3.422, p = .037). Figure 1 shows child performance by prime type and group (adjusted scores that center the neutral-response covariate within each group through regression analysis), thus illustrating the overall significant main effect of group.
Figure 1.
Reaction times by group and prime type Note. Means are adjusted; centered for the covariate value for neutral trial response by group.
The cochlear implant group and vocabulary-matched groups clearly had slower overall reaction times than the age-matched group, even when controlling for a baseline (neutral) reaction time.
Planned follow-up linear contrasts were calculated to compare the difference between inhibition prime and phonological prime responses across each pair of groups. These analyses follow up on the significant effects yielded by the initial overall analysis. Difference scores were pre-planned for use in this case because they are of theoretical significance: differences between phonological and inhibition primes represent possible strength of activation for phonological versus inhibition lexical neighbors as children make lexical judgments about pictures. For more information about raw scores by block, see Supplemental Appendix A. The linear contrast in difference scores between the vocabulary-matched and cochlear implant groups approached significance but did not meet the Bonferroni-corrected comparison value (F(1, 54) = 4.05, p = .032; d = .59). The contrast between the age-matched group and cochlear implant group, however, was significant (F(1, 52) = 7.12, p = .005; d = .64). The age-matched versus vocabulary-matched comparison failed to reach significance (F(1, 56) = 2.81, p = .438, d = .26). See Figure 2.
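The per-participant difference scores and Cohen's d effect sizes reported above can be illustrated with a short sketch (hypothetical function names; d is computed here with the pooled standard deviation, one common convention):

```python
from statistics import mean, stdev

def difference_scores(inhibition_rts, phonological_rts):
    """Per-participant difference score: inhibition-prime RT minus
    phonological-prime RT. Positive values indicate faster responses
    to phonological primes (a facilitation effect)."""
    return [i - p for i, p in zip(inhibition_rts, phonological_rts)]

def cohens_d(group_a, group_b):
    """Cohen's d for two independent groups using the pooled
    (sample) standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_sd = (((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled_sd
```

Comparing two groups' difference scores with `cohens_d` then expresses the between-group contrast in standard-deviation units, as in the d values reported for each planned contrast.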
Figure 2.
Difference in reaction time between inhibitory and phonological prime trials by group and response block
Thus, children with cochlear implants did respond to phonological and inhibition primes differently than their peers with typical hearing matched for age and matched for vocabulary size. Particularly in mid-task blocks of responses (see Figure 2), children with cochlear implants appeared to exhibit a pattern of responding opposite that of children with typical hearing: children with cochlear implants responded more slowly to primes that were phonologically similar to the target than to primes that were phonologically different from the target. Within Block 1, all children appeared to be equally fast responders across primes, and within Block 4 all children appeared to show similar responses across primes; thus there were minimal differences in scores in Block 1 and in Block 4 (see Supplemental Appendix A). Children in the age-matched group and the vocabulary-matched group, conversely, tended to respond more quickly to phonologically similar primes than to inhibition (phonologically different) primes.
Discussion
The purpose of the present study was to compare the performance of children with and without cochlear implants on phonological priming tasks, allowing researchers to evaluate subconscious awareness of phonological similarities between lexical items. Children with cochlear implants, particularly in Blocks 2 and 3 of their responses, had faster reaction times in trials with inhibition primes than in trials with phonological primes. Responses in the first blocks may indicate an effect of task newness, as might be expected of children in this age range during a task with 100 trials. Children with typical hearing in the age-matched group and in the vocabulary-matched group responded more quickly to trials with phonological primes than trials with inhibition primes.
Particularly notable in the present pattern of responses was the difference in priming response between children with cochlear implants and younger children with typical hearing matched for expressive vocabulary size. There were similarities between these two groups on other variables: for example, children with cochlear implants and children matched for vocabulary size appeared to respond more slowly to both prime types (even when response means were adjusted for baseline reaction time to trials without a prime) than did children in the age-matched group. Children with cochlear implants and children matched for vocabulary size also appeared similar in the ratings of attention to the priming task: no child in the age-matched group stopped paying attention to the task during the duration of the experiment, but a few children in both the vocabulary-matched and cochlear implant group had to be removed from analysis as a result of attention issues. In other words, children with cochlear implants and children matched for vocabulary size had similar overall response times to primes and similar issues with attending to the task: however, children matched for vocabulary size still demonstrated a phonological facilitation effect and children with cochlear implants had the opposite response. Thus, children with cochlear implants showed a different, rather than delayed, pattern of responding: if children with cochlear implants were only delayed in their responses as a result of vocabulary size, they likely would have shown the same responses as children in the vocabulary-matched group.
The finding that children with cochlear implants experience a different priming effect from children with typical hearing may indicate that children with cochlear implants struggle to efficiently use phonological priming information to react to a target word. Children with typical hearing showed a pattern of response consistent with models of speech perception that would predict that hearing a word onset before having to process that word would help a child more efficiently narrow down candidate words for selection (McClelland and Elman 1986). For example, hearing /k/ prior to processing cat gives the child a “head start” on identifying the target word by eliminating target words that do not begin with a /k/ sound. Children who are able to process words incrementally, rather than holistically, and make use of a dynamic organization system that activates candidates beginning with the same sound would be likely to activate the target word more quickly than a child with a less efficient system (Metsala and Walley 1998). This finding fits with other studies evaluating the lexical organization strategies of people with cochlear implants. It is possible, as suggested by McMurray and colleagues (2018), that children with cochlear implants experience a longer period of auditory processing (needing more time and information to deal with lexical ambiguity) than do children with typical hearing, and therefore are not able to make use of priming information to efficiently select word candidates during processing in the same way as children with typical hearing. Longer periods of auditory processing may also yield the results reported by Schwartz and colleagues (2013), that children with cochlear implants activate phonological competitors during word recognition tasks for a longer period than do children with typical hearing.
The present study design does not allow one to draw conclusions about the time course of word candidate selection; however, different patterns of response to phonological versus inhibition primes add to the body of evidence that children with cochlear implants make use of phonological information, even when they can perceive it, differently than children with typical hearing.
Other studies that have not used priming to assess lexical organization support the idea that children with cochlear implants may have different lexical organization strategies, particularly those based on phonology (e.g., Kenett et al. 2013; Weschler-Kashi et al. 2014; Lund & Dinsmoor 2016). Weschler-Kashi and colleagues (2014) used verbal fluency as an assessment of lexical access and organization, and found that children with cochlear implants generated fewer words than children with typical hearing when asked to generate words grouped phonologically. Kenett and colleagues (2013) applied a computational analysis of lexical networks to verbal fluency task responses for children with cochlear implants, and concluded that semantic networks in children with cochlear implants are underdeveloped. Combined with the results of the present study, these emerging data about the processing of phonological information by children with cochlear implants may indicate that children with cochlear implants are less likely than their peers with typical hearing to subconsciously recognize and efficiently process phonological similarities between words.
Contrary to the findings of this study, Jerger and colleagues (2016) concluded that children with hearing loss responded similarly to phonological primes as compared to children with typical hearing. Differences in findings may be related to methodological differences between these studies. The Jerger et al. (2016) study included participants with a range of hearing losses (and consequently, who wore a range of devices) who were between the ages of 4 and 15 years. The present study evaluated only children with cochlear implants between the ages of 5 and 7 years. Jerger and colleagues (2016) primed children’s responses with both words and nonwords, rather than word onsets, whereas this study used only word onsets as primes. Differences in these populations in age and amplification type may be expected to yield different phonological priming results (Rigler et al. 2015). Phonological priming with whole words rather than word onsets may also trigger differing organizational strategies (and consequently, different priming effects): in particular, priming with real words may also trigger semantic organization systems.
Extant research has speculated that children with other language-based disabilities (e.g., fluency issues or language impairment) have difficulty processing phonological cues in words, as evidenced by different priming responses compared to children who are typically developing (Byrd et al. 2007; Brooks et al. 2015). The results of this study may indicate similar difficulties for five- to seven-year-old children with cochlear implants. For children with cochlear implants, different phonologically-based lexical organization strategies may be the result of a period of auditory deprivation. Differing responses could also be the result of hearing degraded auditory input via cochlear implants during the task itself; however, children’s abilities to accurately imitate that auditory input make this interpretation less likely. It is also possible, however, that imitation does not fully represent a child’s ability to incrementally process words. It is possible that delayed linguistic development, perhaps as a result of auditory deprivation and subsequent development in less-than-ideal conditions, contributes to the similar response patterns of children with cochlear implants and children with other speech/language difficulties. Future research is needed to consider how linguistic processing and organization in children with other disorders may inform our understanding of children with cochlear implants, and whether children with cochlear implants are likely to also have underlying linguistic deficits in addition to their hearing loss (e.g., Geers et al. 2016).
Theoretically, differences in phonological organization could lead to other linguistic deficits in children with cochlear implants. Children with typical hearing use lexical and phonological cues, for example, to learn and retain new vocabulary words (e.g., Gray et al. 2014). Phonological organization of vocabulary could also push children to attend to finer-grained differences between words, thus leading to more incremental than holistic processing of spoken words (Metsala and Walley 1998), which may then lead to the development of phonological awareness, the awareness of phonemes within words. Children in the cochlear implant group in this study did have smaller vocabularies than children with typical hearing, as evidenced by standard scores from the EOWPVT-4 and ROWPVT-4. Other studies have indicated that children with cochlear implants also display deficits in the development of phonological awareness (Werfel 2017; Nittrouer et al. 2018; Lund 2021). The present study adds evidence that lexical-semantic organization strategies used by children with cochlear implants should be subject to further exploration.
There are limitations of the present study that provide avenues for future studies in this area of inquiry. First, participant selection in this study was limited to children who were using spoken language as a primary means of communication, and children who participated had relatively high speech perception scores on the CID Early Speech Perception Test (Moog and Geers 2012). This study did not allow for enough variability in speech perception/ device use to make predictions about how these audiological variables would have influenced outcomes. A future study may consider a more diverse group of children with hearing loss, or a group within a different age range, to determine if differences in priming response persist for all children with hearing loss. Further, this study screened children with typical hearing at 25 dB HL, and it is possible that some of them may have had a slight hearing loss between 15 and 25 dB HL. However, if that was the case, it is unlikely that children with slight hearing loss would have decreased the group differences found in this study. Nevertheless, an interesting follow-up study could consider whether children with slight hearing loss perform differently than children with typical hearing on priming tasks.
Additionally, this study only measured phonologically-based organization via a priming task. To fully conclude that phonological organization is different in children with hearing loss, additional measures of lexical organization would need to be included and validated. The question of how to best measure lexical-semantic organization remains unanswered. In this case, the influence of word-initial auditory priming may not be the best measure of lexical-semantic organization in children with hearing loss. Factors such as attention and fatigue likely played a role in differing responses across trial blocks. Further, the methods in this study did not purposefully counterbalance the placement of response buttons or track child handedness to see how it affected results. To acquire more robust results, a replication of this study with a larger number of trials and participants may be warranted.
Conclusion
This study aimed to evaluate the subconscious knowledge of between-word phonological similarities in children with cochlear implants as compared to children with typical hearing of the same age and younger children of the same vocabulary size. On the word-onset priming reaction-time task, children with cochlear implants experienced phonological facilitation and inhibition effects differently than children with typical hearing matched for age or vocabulary size. The results of this study support findings of past research that may indicate children with cochlear implants have phonological organization strategies that are different from those of children with typical hearing. Within this study, the finding that children with cochlear implants differ even from younger children with typical hearing matched for vocabulary size indicates that phonological organization may be different and not necessarily delayed. Awareness of this difference, which could theoretically affect other linguistic skills, and continued investigation of lexical-semantic organization in children with cochlear implants could contribute to a better understanding of how to improve linguistic outcomes in this population.
Supplementary Material
Acknowledgments
Financial disclosures/ conflicts of interest: This work was supported by the National Institutes of Health – National Institute on Deafness and Other Communication Disorders [5R03DC015078 to E. L.]
References
- Adobe Systems Incorporated. (2017). Adobe Audition Sound Analysis Software [Computer software].
- Ainsworth S, Welbourne S, & Hesketh A. (2016). Lexical restructuring in preliterate children: Evidence from novel measures of phonological representation. Applied Psycholinguistics, 37, 997–1023.
- Ashori M. (2020). Speech intelligibility and auditory perception of pre-school children with hearing aid, cochlear implant and typical hearing. Journal of Otology, 15(2), 62–66.
- Biran M, Novogrodsky R, Harel-Nov E, Gil M, & Mimouni-Bloch A. (2018). What we can learn from naming errors of children with language impairment at preschool age. Clinical Linguistics & Phonetics, 32, 298–315.
- Bonte M, & Blomert L. (2004). Developmental changes in ERP correlates of spoken word recognition during early school years: A phonological priming study. Clinical Neurophysiology, 115, 409–423.
- Brooks PJ, & MacWhinney B. (2000). Phonological priming in children’s picture naming. Journal of Child Language, 27, 335–366.
- Brooks PJ, Seiger-Gardner L, Obeid R, & MacWhinney B. (2015). Phonological priming with nonwords in children with and without specific language impairment. Journal of Speech, Language, and Hearing Research, 58, 1210–1223.
- Byrd CT, Conture EG, & Ohde RN (2007). Phonological priming in young children who stutter: Holistic versus incremental processing. American Journal of Speech-Language Pathology, 16, 43–53.
- Collins AM, & Loftus EF (1975). A spreading-activation theory of semantic processing. Psychological Review, 82, 407–428.
- Ehrler DJ, & McGhee RL (2008). PTONI: Primary Test of Nonverbal Intelligence. Austin, TX: Pro-Ed.
- Fudala JB (2000). Arizona 3: Arizona Articulation Proficiency Scale, Third Revision. Torrance, CA: Western Psychological Services.
- Goodrich JM, & Lonigan CJ (2015). Lexical characteristics of words and phonological awareness skills of preschool children. Applied Psycholinguistics, 36, 1509–1531.
- Gray S, Pittman A, & Weinhold J. (2014). Effect of phonotactic probability and neighborhood density on word-learning configuration by preschoolers with typical development and specific language impairment. Journal of Speech, Language, and Hearing Research, 57, 1011–1025.
- Grosjean F. (1980). Spoken word recognition processes and the gating paradigm. Perception & Psychophysics, 28, 267–282.
- Hannagan T, Magnuson JS, & Grainger J. (2013). Spoken word recognition without a TRACE. Frontiers in Psychology, 4, 563, 1–17.
- Hendrickson K, Oleson J, & Walker E. (2021). School-age children adapt the dynamics of lexical competition in suboptimal listening conditions. Child Development, 92(2), 638–649.
- Hresko WP, Reid DK, & Hammill DD (1999). TELD-3: Test of Early Language Development. Austin, TX: Pro-Ed.
- Jerger S, Martin RC, & Damian MF (2002). Semantic and phonological influences on picture naming by children and teenagers. Journal of Memory and Language, 47, 229–249.
- Jerger S, Tye-Murray N, Damian MF, & Abdi H. (2016). Phonological priming in children with hearing loss: Effect of speech mode, fidelity and lexical status. Ear & Hearing, 37, 623–633.
- Kenett YN, Wechsler-Kashi D, Kenett DY, Schwartz RG, Ben Jacob E, & Faust M. (2013). Semantic organization in children with cochlear implants: Computational analysis of verbal fluency. Frontiers in Psychology, 4, 1–11.
- Kral A, Kronenberger WG, Pisoni DB, & O’Donoghue GM (2016). Neurocognitive factors in sensory restoration of early deafness: A connectome model. The Lancet Neurology, 15, 610–621.
- Kuperman V, Stadthagen-Gonzalez H, & Brysbaert M. (2012). Age-of-acquisition ratings for 30,000 English words. Behavior Research Methods, 44(4), 978–990.
- Leach L, & Samuel AG (2007). Lexical configuration and lexical engagement: When adults learn new words. Cognitive Psychology, 55, 306–353.
- Luce P, & Pisoni DB (1998). Recognizing spoken words: The neighborhood activation model. Ear & Hearing, 19, 1–36.
- Lund E. (2019). Comparing word characteristic effects on vocabulary of children with cochlear implants. Journal of Deaf Studies and Deaf Education, 24, 424–434.
- Lund E, & Dinsmoor J. (2016). Taxonomic knowledge of children with and without cochlear implants. Language, Speech, and Hearing Services in Schools, 47, 236–245.
- Marslen-Wilson WD (1987). Functional parallelism in spoken word recognition. Cognition, 25, 71–102.
- Marslen-Wilson W, & Warren P. (1994). Levels of perceptual representation and process in lexical access: Words, phonemes, and features. Psychological Review, 101(4), 653.
- Martin NA, & Brownell R. (2010a). Expressive One-Word Picture Vocabulary Test-4 (EOWPVT-4). Novato, CA: Academic Therapy Publications.
- Martin NA, & Brownell R. (2010b). Receptive One-Word Picture Vocabulary Test, Fourth Edition (ROWPVT-4). Novato, CA: Academic Therapy Publications.
- McClelland JL, & Elman JL (1986). The TRACE model of speech perception. Cognitive Psychology, 18(1), 1–86.
- McMurray B, Farris-Trimble A, & Rigler H. (2017). Waiting for lexical access: Cochlear implants or severely degraded input lead listeners to process speech less incrementally. Cognition, 169, 147–169.
- McMurray B, Ellis TP, & Apfelbaum KS (2018). How do you deal with uncertainty? Cochlear implant users differ in dynamics of lexical processing of noncanonical inputs. Ear & Hearing, 40, 961–980.
- Melnick KS, Conture EG, & Ohde RN (2003). Phonological priming in picture naming of young children who stutter. Journal of Speech, Language, and Hearing Research, 46, 1428–1443.
- Metsala JL, & Walley AC (1998). Spoken vocabulary growth and the segmental restructuring of lexical representations: Precursors to phonemic awareness and early reading ability. In Metsala JL & Ehri LC (Eds.), Word recognition in beginning literacy (pp. 89–120). Lawrence Erlbaum Associates Publishers.
- Moog J, & Geers A. (2012). The CID Early Speech Perception Test. St. Louis, MO: Central Institute for the Deaf. [Google Scholar]
- Munson B, Swenson CL, & Manthei SC (2005). Lexical and phonological organization in children: Evidence from repetition tasks. Journal of Speech, Language and Hearing Research, 48, 108–124. [DOI] [PubMed] [Google Scholar]
- Nittrouer S, & Boothroyd A. (1990). Context effects in phoneme and word recognition by young children and older adults. The Jrouanl fo the Acoustical Society of America, 87, 2705–2715. [DOI] [PubMed] [Google Scholar]
- Nittrouer S, Muir M, Tietgens K, Moberly AC, & Lowenstein JH (2018). Development of phonological, lexical, and syntactic abilities in children with cochlear implants across the elementary grades. Journal of Speech, Language, and Hearing Research, 61, 2561–2577. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Nittrouer S, Sansom E, Low K, Rice C, & Caldwell-Tarr A. (2014). Language structures used by Kindergartners with cochlear implants: Relationship to phonological awareness, lexical knowledge and hearing loss. Ear & Hearing, 35, 506–518. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Psychology Software Tools, Inc. (2012). [E‐Prime 2.0]. Pittsburgh, PA. [Google Scholar]
- Pyman B, Blamey P, Lacy P, Clark G, & Dowell R. (2000). The development of speech perception in children using cochlear implants: effects of etiologic factors and delayed milestones. Otology & Neurotology, 21(1), 57–61. [PubMed] [Google Scholar]
- Rigler H, Farris-Trimble A, Greiner L, Walker J, Tomblin JB, & McMurray B. (2015). The slow developmental timecourse of real-time spoken word recognition. Developmental Psychology, 51, 1690–1703. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Riva D, Nichelli F, & Devoti M. (2000). Developmental aspects of verbal fluency and confrontation naming in children. Brain and Language, 71, 267–284. [DOI] [PubMed] [Google Scholar]
- Sankar G, Malik J, & Shanbal JC (2009). Lexical processing in bilingual children: Evidence from masked phonological priming. Journal of All India Institute of Speech and Hearing, 28, 90– 96. [Google Scholar]
- Sauzeon H, Lestage P, Raboutet C, N’Kaoua B, & Claverie B. (2004). Verbal fluency output in children aged 7–16 as a function of the production criterion: Qualitative analysis of clustering, switching processes, and semantic network exploitation. Brain and Language, 89, 192–202. [DOI] [PubMed] [Google Scholar]
- Schwartz RG, Steinman S, Ying E, Mystal EY, & Houston DM (2013). Language processing in children with cochlear implants: a preliminary report on lexical access for production and comprehension. Clinical Linguistics & Phonetics, 27, 264–277. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Sekerina IA, & Brooks PJ (2007). Eye movements during spoken word recognition in Russian children. Journal of Experimental Child Psychology, 98, 20– 45. [DOI] [PubMed] [Google Scholar]
- Sheng L, & McGregor KK (2010). Lexical-semantic organization in children with specific language impairment. Journal of Speech, Language, and Hearing Research, 53, 146–159. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Siew CS, & Vitevitch MS (2016). Spoken word recognition and searial recall of words from components in the phonological network. Journal of Experimental Psychology: Learning, Memory and Cognition, 42, 394–410. [DOI] [PubMed] [Google Scholar]
- Snodgrass JG, & Vanderwart M. (1980). A standardized set of 260 pictures: norms for name agreement, image agreement, familiarity, and visual complexity. Journal of Experimental Psychology: Human Learning and Memory, 6, 174– 215. [DOI] [PubMed] [Google Scholar]
- Troyer AK, Moscovitch M, & Winocur G. (1997). Clustering and switching as two components of verbal fluency: Evidence from younger and older healthy adults. Neuropsychology, 11(1), 138–146. [DOI] [PubMed] [Google Scholar]
- Velez M, & Schwartz RG (2010). Spoken word recognition in school-age children with SLI: Semantic, Phonological, and Repetition Priming. Journal of Speech, Language, and Hearing Research, 53, 1616–1628. [DOI] [PubMed] [Google Scholar]
- Wechsler-Kashi D, Schwartz RG, & Cleary M. (2014). Picture naming and verbal fluency in children with cochlear implants. Journal of Speech, Language, and Hearing Research, 57, 1870–1882. [DOI] [PubMed] [Google Scholar]
- Werfel KL (2017). Emergent literacy skills in preschool children with hearing loss who use spoken language: Initial findings from the Early Language and Literacy Acquisition (ELLA) Study. Language, Speech, and Hearing Services in Schools, 48, 249–259. [DOI] [PMC free article] [PubMed] [Google Scholar]