Published in final edited form as: Clin Linguist Phon. 2013 Apr;27(4):264–277. doi: 10.3109/02699206.2013.765913

Language Processing in Children with Cochlear Implants: A Preliminary Report on Lexical Access for Production and Comprehension

Richard G Schwartz 1,2, Susan Steinman 1, Elizabeth Ying 1, Elana Ying Mystal 1, Derek M Houston 3
PMCID: PMC3677759  NIHMSID: NIHMS476346  PMID: 23489339

Abstract

In this plenary paper we present a review of language research in children with cochlear implants along with an outline of a five-year project designed to examine lexical access for production and recognition. The project will use auditory priming, picture naming with auditory or visual interfering stimuli (Picture-Word Interference and Picture-Picture Interference, respectively), and eye tracking paradigms to examine the role of semantic and various phonological factors. Preliminary data are presented from the auditory priming, picture-word interference, and picture-picture interference tasks. The emergence of group differences is briefly discussed.

Keywords: lexical access, spoken word recognition, children, cochlear implants, language processing


Beyond the establishment of hearing, the primary goal of cochlear implantation in children is oral language acquisition. Cochlear implants (CIs) have proved to be very effective in permitting many children with severe to profound hearing loss to acquire oral language with the support of intervention following implantation. As one might expect, the first waves of research focused on the efficacy of cochlear implantation in yielding oral language in children as a group and then on discerning individual differences in oral language outcomes.

Language outcomes in children with cochlear implants

A number of group studies have demonstrated the generally successful acquisition of language in children with CIs (Blamey et al., 2001; Geers & Moog, 1994; Geers, Nicholas, & Sedey, 2003; Svirsky, Robbins, Kirk, Pisoni, & Miyamoto, 2000; Tomblin, Spencer, Flock, Tyler, & Gantz, 1999). Despite these positive group language outcomes, there is substantial individual variation. Many children do not achieve language scores comparable to normal hearing peers. A similar pattern of positive group outcomes and wide individual variation has emerged from studies of acquisition rate (Geers & Brenner, 2004; Kirk et al., 2002; Svirsky, Teoh, & Neuburger, 2004; Tomblin, Barker, Spencer, Xuyang Zhang, & Gantz, 2005).

One important factor in language outcome variability is the age of implantation. Children implanted between 16 and 24 months of age had expected scores that appeared to match their hearing peers’ scores on the expressive portion of the Preschool Language Scale (PLS) at age 4;6, whereas children implanted later performed more poorly than their peers, even with similar durations of CI use (Nicholas & Geers, 2007). Similar findings were reported for most language sample measures, particularly for early implantees with pre-implant average thresholds above 65 dB. However, a closer examination of the expressive PLS standard scores (at 4;6) for children who received implants at 18 months (36 months of use) suggests that these scores fall about 1 SD below the mean, with many children scoring even lower. This level of performance would typically be characterized as language impairment. Children with CIs (5-15 years of age) had age-appropriate scores on a number of language measures, but overall scores that were consistently lower than those of their age-matched controls (Schorr, Roth, & Fox, 2008). For example, only 36% of the children with CIs had age-appropriate scores on structural measures (e.g., vocabulary production and comprehension as well as tests of morphology and syntax), whereas 92% of the normal hearing children had age-appropriate scores. Here too, age of implantation predicted receptive vocabulary scores and duration of CI use predicted receptive syntax scores.

Thus, although group data indicate successful oral language outcomes for children with CIs, with many children achieving age-appropriate language abilities and a typical growth rate for language acquisition, many other children with CIs do not fare as well. Early implantation leads to better language outcomes, as measured by standardized tests, than later implantation. Other variables such as duration of use, residual hearing, bilateral versus unilateral implants, and pre-implant pure-tone average thresholds may also affect outcomes.

The specific deficits experienced by children with CIs encompass phonology, morphology, syntax, and lexical development. Intelligibility remains low after four to six years of experience with the implant (63.5%) and even after seven years of experience (70%) (Tobey et al., 2003; Peng et al., 2004). Age of implantation and speech coding strategy predicted intelligibility scores.

Children with CIs also performed more poorly on spoken and written sentence formation, making more grammatical errors and using fewer words (Spencer et al., 2003). Performance differences between receptive and expressive tasks suggest that language competence was still emerging, while their normal hearing (NH) peers had already mastered these skills. Children with CIs (3;0-9;0) lag behind their NH peers in several aspects of morphological recognition and production as well, including English plural, possessive, past tense, and progressive tense markers (Geren & Snedeker, 2009). They also had deficits in syntax, including reversible transitives, passives, double-object datives, and prepositional datives, as measured by the Diagnostic Evaluation of Language Variation (DELV-NR; Seymour, Roeper, & de Villiers, 2005) and performance on a syntactic recognition task with puppets.

The vast majority of language development studies focusing on children with CIs have employed standardized, omnibus language tests to measure language performance. Although these tests provide important general information about language relative to normative data, they fail to reveal details that may distinguish among children (even those with apparently successful outcomes), to provide critical information for intervention, or to elucidate cognitive/linguistic factors that may influence speech and language outcomes. Standardized omnibus tests are designed specifically to distinguish between children with and without language impairments, rather than to provide in-depth language testing. Although language samples, elicited production, and constructed production probes are an important step forward, like standardized tests they examine off-line or endpoint language responses (pointing to, naming, or describing pictures) and reveal little about the underlying representations or processes leading up to these endpoint responses. There is a need for more in-depth and dynamic testing of language in children with and without apparently successful language outcomes.

A closer examination of language in children with CIs could begin with a focus on any of several areas (phonology, inflectional morphology, lexicon, or syntax), but one critical area is lexical representation and the processes underlying word production and comprehension (spoken word recognition). Despite some positive group outcomes for measures of vocabulary development (Nicholas & Geers, 2008), there is evidence that lexical development is a critical challenge for children with CIs. Challenges in speech perception, speech production, and language are closely related in children with CIs (Rescorla, 2002; Svirsky et al., 2000). Specifically, phonological encoding, as well as the phonological storage and retrieval of spoken words with secondary semantic effects, may be limited in these children. More recent findings suggest that toddlers’ abilities to associate speech sounds with objects or learn novel words are more strongly related to vocabulary development than speech perception abilities (Houston et al., 2003; Houston et al., submitted). In a study of word learning by 2-5 year-olds, children with CIs performed more poorly in learning names (attributes) for Beanie Babies. Children with normal hearing performed near ceiling receptively in immediate and delayed comprehension, whereas children with CIs performed near or below 50% correct. In production, the normal hearing children performed at an 80% accuracy level, but few of the children with CIs produced these names (Houston, Carter, Pisoni, Kirk, & Ying, 2005).

Early speech perception and language development predict later language abilities

At the ages when early implantation occurs, lexical acquisition and phonological production/perception are the key elements of language development. The formation of an initial phonology and lexicon depends on early speech perception, selective attention to the relevant language-specific acoustic-phonetic cues, and short- and long-term memory for those cues, which together support the establishment of phonological representations (Jusczyk & Hohne, 1997; Nittrouer & Burton, 2005; Werker & Curtin, 2005). Robust lexical representations (phonetic and phonological information) are necessary for rapid and efficient segmentation of the speech signal, word recognition, and novel word learning. Several studies have provided evidence that early speech perception and language abilities are related to later language development (e.g., Marchman & Fernald, 2008; Rescorla, 2002; Rescorla, 2005; Tsao, Liu, & Kuhl, 2004; Trehub & Henderson, 1996). Marchman and Fernald related gaze-based measures of recognition speed in a looking-while-listening paradigm and vocabulary at 25 months to language, working memory, and cognitive measures in the same children at age 8. They found relations between vocabulary at 25 months and expressive language, IQ, and working memory at 8;0, as well as between word recognition speed at 25 months and language, memory, and IQ at 8;0. The strongest relations were between the early measures and working memory at 8;0. Given the early auditory deprivation experienced by deaf and hard-of-hearing infants, the differences between the auditory signal provided by cochlear implants and that of normal hearing, the variation in pre-implant residual hearing, and the variation in early speech perception and early language abilities post implantation (e.g., Houston et al., 2003), we expect these early performance measures to predict individually varying outcomes in later lexical access.

Theories of Lexical Access

Although details vary depending on whether the focus is on spoken word recognition or on lexical production, the organization and access of the mental lexicon is commonly characterized by spreading activation models (e.g., Dell, 1986; Marslen-Wilson, 1990) beginning with acoustic-phonetic information. Spreading activation is an unconscious, automatic process that occurs within a few hundred milliseconds (Neely, 1977). Lexical access involves simultaneous activation of lexical cohorts with a rapid deactivation of candidates as new information is integrated. The revised Cohort Theory (Marslen-Wilson, 1990) involves three phases: an acoustic-phonetic stage, in which acoustic information generates a set of candidates (word-initial cohorts); a lexical selection process, in which semantic and syntactic information strengthens the activation of the target candidate while deactivating non-candidates; and integration, in which the remaining candidate is integrated into the broader context. Thus, word recognition is characterized as an interactive, staged process in which later-stage lexical processing can influence continuous processing at earlier stages. Any disruption in the time course of the process could result in acoustic-phonetic, phonological, lexical, and semantic errors. Parallel models posit simultaneous rather than sequential stages (McClelland & Elman, 1986; Gaskell & Marslen-Wilson, 1997), but even in these models the process of access is led by acoustic analysis and phonology. A number of factors in lexical access for recognition have emerged from empirical studies. For example, short stimulus onset asynchronies (SOAs) in priming studies reflect automatic processes, whereas longer SOAs reflect more attentional or controlled processes (see review in Balota, Yap, Cortese, & Watson, 2008). Task manipulations in lexical priming can also reveal the sources of priming effects. For example, picture naming is viewed as a purer measure of prelexical prime-target semantic influence (e.g., Neely, 1991), whereas lexical decision with semantically related words involves a postlexical bias (i.e., once the target is found to be related to the prime, the relationship biases a word rather than a nonword decision).
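
To make the cohort narrowing described above concrete, the following toy sketch (not any published implementation; the mini-lexicon and phoneme coding are invented for illustration) shows how a word-initial cohort shrinks as acoustic-phonetic input unfolds.

```python
# Toy illustration of word-initial cohort narrowing (Cohort Theory).
# The mini-lexicon and phoneme-by-phoneme coding are hypothetical,
# chosen only to show how candidates are deactivated as input arrives.

LEXICON = {
    "candle": ["k", "ae", "n", "d", "el"],
    "candy":  ["k", "ae", "n", "d", "i"],
    "camel":  ["k", "ae", "m", "el"],
    "sandal": ["s", "ae", "n", "d", "el"],
}

def cohort_over_time(input_phonemes):
    """Return the set of still-active candidates after each phoneme."""
    snapshots = []
    for i in range(1, len(input_phonemes) + 1):
        prefix = input_phonemes[:i]
        active = [w for w, p in LEXICON.items() if p[:i] == prefix]
        snapshots.append((prefix, active))
    return snapshots

for prefix, active in cohort_over_time(["k", "ae", "n", "d", "i"]):
    print(prefix, "->", active)
# ['k']                      -> ['candle', 'candy', 'camel']
# ['k', 'ae', 'n']           -> ['candle', 'candy']
# ['k', 'ae', 'n', 'd', 'i'] -> ['candy']
```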

Lexical access theories focusing on production differ from recognition theories in that production access begins with semantic activation, followed by the phonological activation of semantic alternatives, and then the phonological encoding of the target. The production theories differ in the proposed discreteness of the processing levels: cascading activation versus discrete, non-overlapping stages, and forward versus backward activation. The discrete, two-stage model postulates a modular lexical access system with two serially ordered, non-overlapping, and independent stages that operate on different inputs (Schriefers, Meyer, & Levelt, 1990). Only semantic information becomes available during semantic processing, and only phonological information is available during phonological encoding. The spreading activation model (Dell, 1986) and the cascaded processing model (Peterson & Savoy, 1998; Jescheniak & Schriefers, 1998) view the production system as more interactive. Activation is predominantly, but not exclusively, semantic during semantic processing and predominantly, but not exclusively, phonological during phonological encoding (multiple lexical candidates, including the target, are semantically or phonologically activated at the appropriate stage). The spreading activation model assumes a bi-directional flow of activation, whereas the cascaded model assumes only a forward flow. These lexical access models account for the time course of activation of semantic and phonological information. However, there are other factors involved in how this information is represented, organized, and accessed. The assumed architecture of the lexicon can lead to predictions regarding the ease and accuracy of lexical production under various conditions.
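
The contrast between discrete and cascaded accounts can be illustrated with a toy sketch; the activation values and candidate words below are invented, and the two functions are only schematic stand-ins for the models discussed above.

```python
# Toy contrast between a discrete two-stage account and a cascaded account
# of lexical access for production. In the cascaded sketch, phonological
# activation is passed on for all semantically active candidates, not just
# the selected target. Values and words are hypothetical.

semantic_activation = {"cat": 1.0, "dog": 0.6, "couch": 0.2}  # after the semantic stage

def discrete_phonological_activation(sem):
    """Discrete view: only the single selected candidate is phonologically encoded."""
    winner = max(sem, key=sem.get)
    return {winner: 1.0}

def cascaded_phonological_activation(sem):
    """Cascaded view: every semantically active candidate passes activation forward."""
    return {word: act for word, act in sem.items() if act > 0}

print(discrete_phonological_activation(semantic_activation))  # {'cat': 1.0}
print(cascaded_phonological_activation(semantic_activation))  # cat, dog, couch all active
```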

Factors in Lexical Representation and Access

Phonological onsets and rhymes

In production and recognition, phonological factors play a major role in representation and access. Three factors interact in interesting ways to affect lexical organization and access: phonological structure, phonotactic probability, and neighbourhood density. Early in typical development, or somewhat later in atypical language development, the holistic or less detailed lexical representations formed may be inadequate for just-in-time incremental processing. As children develop more detailed representations, the representations become organized according to segmental and structural characteristics. The relatively sparse literature on phonological recognition priming in adults has yielded mixed results, with some scattered reports of rhyme priming inhibiting lexical decisions and others of onset priming facilitating them. We have found auditory list priming for rhymes in 7-10 year-old typically developing children (Velez & Schwartz, 2010), whereas children with specific language impairment (SLI) exhibit only repetition priming. In picture-auditory word interference (PWI) tasks, in which children have to name pictures as they hear related or unrelated words, the effects of onset- and rhyme-related distracters on speeded picture naming shift with age (Brooks & MacWhinney, 2000). Children 5;0-7;0 named pictures faster when they heard a rhyming word, but this was not true for older children and adults. Onset-related auditory words speeded naming in all of the children and the adults. Children with SLI had slower picture naming when onset-related auditory words preceded the picture and faster naming when the word followed the picture (Seiger-Gardner & Schwartz, 2008). In contrast, typically developing children named pictures faster when the auditory word followed the picture, but showed no effect when it occurred before the picture (Seiger-Gardner & Brooks, 2008). Neither group exhibited facilitation from rhyming words. Other groups of children with limitations in lexical representation or access, such as children with CIs, may exhibit atypical patterns of inhibition and facilitation. Finally, the time course of activation differs for words related by onsets versus words related by rhymes. In English, the relative saliency of onsets and rhymes differs because of trochaic stress patterns, and thus we might expect to find differences attributable to developmental patterns and to the child's hearing status.

Phonotactic Probability

Phonotactic probability, the occurrence frequency of a particular sound sequence, facilitates word learning in young children (Jusczyk, Luce, & Charles-Luce, 1994). High phonotactic probability (HPP) sequences facilitate new word learning (Storkel, 2001; 2003); children (3-6 years of age) learn words that have HPP sequences (e.g., coat) more easily than words with low phonotactic probability (LPP) sequences (e.g., watch). Young children are more likely to correctly produce sounds in HPP sequences as compared with LPP sequences (Storkel, 2001; Zamuner, Gerken, & Hammond, 2004). Toddlers (20-28 mos.) imitate codas and rhymes in HPP words and nonwords more accurately than in low probability words and nonwords (Zamuner et al., 2004). Late-talking children (MacRoy-Higgins 2009), do not exhibit the HPP advantage in acquiring novel words seen in typically developing peers. Typically developing children are able to organize new words efficiently, using regularities observed in their language that facilitate access and production. Children who face challenges in language development appear to be less able to take advantage of these regularities.

Phonological neighbourhood density

Words that differ in a single segment are considered phonological neighbours (Vitevitch, Luce, Pisoni, & Auer, 1999). Dense neighbourhoods contain many neighbours; sparse neighbourhoods contain few. Word recognition is negatively affected by high neighbourhood density in adults and children because of greater competition (Vitevitch et al., 1999; Garlock, Walley, & Metsala, 2001). Phonotactic probability and neighbourhood density are obviously correlated; words from dense neighbourhoods tend to have highly probable phonotactic structure. In studies in which phonotactic probability was controlled (Vitevitch, Armbruster, & Chu, 2004), the same competitive inhibition effects of denser neighbourhood membership were observed in production. Children with and without word-finding difficulties produce words with few phonological neighbours more accurately (Newman & German, 2002; German & Newman, 2004). Two studies (Vitevitch & Luce, 1998; Vitevitch et al., 2004) have demonstrated that neighbourhood density effects are strongest for words and phonotactic effects are strongest for nonwords, suggesting two levels of processing and representation in models of spoken word recognition and production. The presence of neighbourhood effects and the absence of phonotactic effects in children with language impairments may foreshadow what we will find in children with CIs. The specific impact of each of these effects can be used to explore further the details of phonological factors in spoken word recognition and production.
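
A minimal sketch of how neighbourhood density is commonly counted (words differing from the target by one segment substitution, addition, or deletion) is given below; the mini-lexicon is hypothetical and phonemically coded.

```python
# Minimal sketch of counting phonological neighbours: words that differ
# from the target by a single segment substitution, addition, or deletion.
# The mini-lexicon is hypothetical.

def one_segment_apart(a, b):
    """True if phoneme sequences a and b differ by exactly one segment edit."""
    if a == b:
        return False
    if len(a) == len(b):                          # substitution
        return sum(x != y for x, y in zip(a, b)) == 1
    if abs(len(a) - len(b)) == 1:                 # addition or deletion
        shorter, longer = sorted((a, b), key=len)
        return any(longer[:i] + longer[i + 1:] == shorter
                   for i in range(len(longer)))
    return False

LEXICON = {
    "cat": ["k", "ae", "t"],
    "bat": ["b", "ae", "t"],
    "cap": ["k", "ae", "p"],
    "at":  ["ae", "t"],
    "dog": ["d", "ao", "g"],
}

def neighbourhood_density(target):
    return sum(one_segment_apart(LEXICON[target], p)
               for w, p in LEXICON.items() if w != target)

print(neighbourhood_density("cat"))  # 3 neighbours (bat, cap, at): denser
print(neighbourhood_density("dog"))  # 0 neighbours: sparse
```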

Semantics

There is a long history of research concerning semantic factors in lexical access using a variety of recognition and production techniques. Associative priming is well established in the literature (Meyer & Schvaneveldt, 1971). Pairs of words related categorically (e.g., frog-turtle) or associatively (e.g., frog-pond) affect naming and recognition differently, with the former inhibiting and the latter facilitating production (e.g., Alario, Segui, & Ferrand, 2000; Sailor, Brooks, Seiger-Gardner, & Guterman, 2008). When semantic lexical access tasks involve auditory or orthographic stimuli, access to semantic information is mediated by phonological information. In contrast, naming picture stimuli (with picture primes or context) reflects more direct access to semantic information when speech production processes are controlled across conditions (Jescheniak et al., 2009). In picture naming paradigms with interfering stimuli (see PWI below and Theories above), researchers differentiate among the activation of semantic information, the phonological activation of semantic alternatives, and the activation of phonological information. Although these methods provide valuable information, there are methods that are better suited to tracking the course of semantic activation and inhibition. Despite their hearing impairment, children with CIs can be assumed to have world experience similar to that of their hearing peers and thus to be generally comparable in the semantic aspects of lexical access for recognition and production. In some studies, children with hearing impairment (5;0-15;0) have exhibited performance comparable to their hearing peers in lexical access. In a PWI task (see below), children with varying levels of hearing impairment exhibited semantic effects similar to those of hearing peers, but differed in the time course of semantic activation, perhaps because auditory words were used (Jerger, Lai, & Marchman, 2002). Similar results were found for children with hearing impairment in a category verification task (Jerger et al., 2006). However, in a word fluency task in which children with CIs and age-matched peers were asked to list as many members of semantic and phonological categories as they could within a minute, children with CIs listed fewer examples than their hearing peers. Because these tasks began with an auditory category name, responses may have been mediated by phonological access. Altered access to semantic information may reflect semantic deficits due to delays in learning words for categories, or it may reflect phonological influences on access to semantic information in children with hearing impairment. With some of the methods described below we will be able to specify the nature and locus of these deficits.

Methodological Approaches

Priming

If two related stimuli are presented sequentially, task performance (e.g., lexical decision, naming, categorization) on the second stimulus is influenced relative to performance when that second stimulus is preceded by something unrelated, presumably because the subject is still processing the first stimulus when he or she encounters the second (e.g., Neely, 1977; Plaut & Booth, 2000). Most frequently, the result is facilitative, but inhibition may also occur. Variables such as the nature of the relationship, the frequency of related pairs, and the interstimulus interval (ISI) can be manipulated to reveal information about the individual's lexicon, the time course of lexical access, and the automatic or attention-directed (i.e., controlled) nature of the response. Priming has been widely used with adult subjects, but only rarely with children (e.g., Girbau & Schwartz, 2011; Radeau, 1963; Velez & Schwartz, 2010). Typically developing children exhibit semantic and phonological priming along with repetition priming in auditory lists at ISIs of 1000 ms, whereas children with specific language impairment (SLI) exhibit only repetition priming.
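
As an illustration of how priming effects are quantified, the sketch below computes the difference between mean reaction times following unrelated and related primes at each ISI; the trial records and variable names are invented for illustration and are not data from the project.

```python
# Minimal sketch of quantifying a priming effect from lexical-decision
# reaction times: mean RT after unrelated primes minus mean RT after
# related primes, computed separately for a short and a long ISI.
# The trial records below are invented.
from statistics import mean

trials = [
    # (isi_ms, prime_type, rt_ms)
    (50,  "related",   612), (50,  "related",   598),
    (50,  "unrelated", 655), (50,  "unrelated", 671),
    (250, "related",   580), (250, "related",   595),
    (250, "unrelated", 640), (250, "unrelated", 628),
]

def priming_effect(isi):
    """Positive values indicate facilitation; negative values, inhibition."""
    related   = [rt for i, p, rt in trials if i == isi and p == "related"]
    unrelated = [rt for i, p, rt in trials if i == isi and p == "unrelated"]
    return mean(unrelated) - mean(related)

print(priming_effect(50))   # effect under (presumably) automatic processing
print(priming_effect(250))  # effect under more controlled processing
```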

Picture-Word and Picture-Picture Interference paradigms (PWI, PPI)

These paradigms have been widely used to examine the time course of lexical access in adults and children (Brooks & MacWhinney, 2000; Jerger, Lai, & Marchman, 2002; Jescheniak & Schriefers, 1998; Schriefers et al., 1990; Seiger-Gardner & Brooks, 2008; Seiger-Gardner & Schwartz, 2008). They permit a more detailed examination of the points in time during which semantic and phonological inhibition and facilitation occur in spoken word production. Auditory words, written words, or pictures are presented as primes for target pictures that are to be named by the subject. These are referred to as interfering stimuli (ISs). The point at which the IS is presented relative to the target picture is the stimulus onset asynchrony (SOA). ISs that are semantically related, phonologically related, or unrelated to the target picture are presented at varying SOAs (before, simultaneously with, or after the picture presentation). Typically, semantically related ISs at early SOAs (frequently -150 ms and 0 ms) slow naming responses, as all semantic competitors of the target item are activated. Conversely, phonological facilitation is noted at later SOAs (+150 ms), with no effect of phonological ISs at the earlier SOAs (Cutting & Ferreira, 1999; Jescheniak & Schriefers, 1998; 2001). By the time phonological facilitation begins, activation of semantic information has ceased, as reflected by the absence of semantic inhibition effects at later SOAs. This method has the potential for revealing the time course of information availability as lexical production occurs. It has already been useful in demonstrating that children with SLI have a different pattern of information availability than their typically developing peers and has the potential to provide useful insights into the nature of lexical production processes in children with CIs (Seiger-Gardner & Schwartz, 2008). Jescheniak and his colleagues (2009) used a picture-picture interference (PPI) paradigm, in which the IS is an overlaid picture. This appears to activate only semantic information (no phonological interference). Comparing data from PPI and PWI paradigms may help to elucidate the extent to which any apparent differences between children with CIs and their normal hearing peers can be attributed to acoustic-phonological mediation of semantic responses.
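
The logic of the SOA analysis can be illustrated with a short sketch that summarizes naming latencies by IS type and SOA relative to the unrelated baseline; the latencies and condition labels below are invented for illustration and are not data from the studies cited.

```python
# Minimal sketch of summarising a PWI experiment: mean naming latency by
# interfering-stimulus (IS) type and SOA, expressed relative to the
# unrelated baseline at the same SOA. Positive values = slower than
# baseline (interference); negative values = faster (facilitation).
# The latencies below are invented.
from collections import defaultdict
from statistics import mean

# (soa_ms, is_type, naming_latency_ms)
trials = [
    (-150, "semantic", 910), (-150, "phonological", 872), (-150, "unrelated", 868),
    (0,    "semantic", 905), (0,    "phonological", 870), (0,    "unrelated", 866),
    (150,  "semantic", 858), (150,  "phonological", 820), (150,  "unrelated", 861),
]

by_cell = defaultdict(list)
for soa, is_type, rt in trials:
    by_cell[(soa, is_type)].append(rt)

for soa in (-150, 0, 150):
    baseline = mean(by_cell[(soa, "unrelated")])
    for is_type in ("semantic", "phonological"):
        diff = mean(by_cell[(soa, is_type)]) - baseline
        print(f"SOA {soa:+} ms, {is_type:12s}: {diff:+.0f} ms vs unrelated")
```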

Eye tracking

Although priming and PWI/PPI are useful behavioural tools for examining online lexical recognition and production processes, they are limited in that the responses obtained are single and discrete. Non-invasive continuous methods based on gaze behaviour, such as looking-while-listening (e.g., Fernald, Zangl, Portillo, & Marchman, 2008) or the visual world eye tracking paradigm (e.g., Tanenhaus, Spivey-Knowlton, Eberhard, & Sedivy, 1995), provide continuous data over the course of language comprehension and language production. Such methods permit an examination of the extent to which children consider other alternatives, data that are not accessible with discrete behavioural methods. Eye tracking has been widely used with adults to examine a wide range of topics in language processing (e.g., Altmann & Kamide, 2007). Of particular relevance here is the application of this method to lexical access. Much of this work relies on the cohort effect. If an individual is presented with an auditory word and four pictures (the referent of the word, a picture representing a related word, and two unrelated pictures), eye gaze, as the word is presented and the subject carries out some command (e.g., Move the mouse to___), will reflect the activation of information over time, including increased fixations on the related compared to the unrelated pictures (e.g., Allopenna, Magnuson, & Tanenhaus, 1998; Eberhard, Spivey-Knowlton, Sedivy, & Tanenhaus, 1995). This method has been successfully applied to children (e.g., Sekerina & Brooks, 2007; McMurray, Samelson, Lee, & Tomblin, 2010; Trueswell, Sekerina, Hill, & Logrip, 1999). Adolescents with SLI, compared to peers with typical development, with specific cognitive impairment, and with nonspecific language impairment, appear to exhibit a deficit in the competition processes of lexical access. Specifically, children with SLI over-activate competitors, show less than typical activation of the target, and appear to be slow in processing (McMurray et al., 2010). Onset and rhyme effects were also observed. Adults with cochlear implants exhibit early and late effects: they are slower to exhibit initial activation of targets and maintain activation longer (Farris-Trimble & McMurray, 2012). In a group of CI users ranging from 12 to 26 years of age, identified by the authors as paediatric, the effects were even stronger, with slower, later, and fewer looks to the target as well as reduced and later peak fixation on cohorts. These findings provide a base from which to extend eye tracking studies of lexical access in children with cochlear implants.
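
The core eye tracking analysis, converting gaze samples into proportions of looks to each picture type over time, can be sketched as follows; the gaze samples, bin size, and region labels are hypothetical and serve only to show the shape of the computation.

```python
# Minimal sketch of the standard visual-world analysis: convert gaze
# samples (time, region looked at) into the proportion of looks to each
# picture type within successive time bins after word onset.
# The gaze samples below are invented.
from collections import Counter

BIN_MS = 100
REGIONS = ("target", "competitor", "unrelated")

# (time_ms_after_word_onset, region)
samples = [
    (20, "unrelated"), (60, "competitor"), (120, "competitor"),
    (160, "target"), (220, "target"), (260, "target"), (320, "target"),
]

bins = {}
for t, region in samples:
    bins.setdefault(t // BIN_MS, Counter())[region] += 1

for b in sorted(bins):
    counts = bins[b]
    n = sum(counts.values())
    proportions = {r: counts[r] / n for r in REGIONS}
    print(f"{b * BIN_MS}-{(b + 1) * BIN_MS} ms: {proportions}")
```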

The Project and Preliminary Findings

The project includes five experiment sets. The original plan was to conduct the sets successively, but we are running them concurrently to increase the likelihood that individual children will complete all of the experiments, giving us the ability to compare performance across paradigms and across production and recognition.

Subjects

The subjects for the project will include 30 children with cochlear implants, 30 age-matched normal hearing controls, and 30 normal hearing children matched on PPVT raw scores. The children with cochlear implants will range in age from 7 to 12 years and will have had their implants for at least three years. All of the children will have nonverbal IQ scores within normal limits, and the normal hearing children will have PPVT and CELF scores within normal limits. Normal hearing children will pass a hearing screening at 20 dB and children with CIs will pass a hearing screening at 35 dB. All subjects will be administered the Multisyllabic Lexical Neighborhood Test (MLNT).

Experiment Set 1

The first set examines lexical access for recognition and production, using auditory priming and picture naming (picture-word interference, PWI, and picture-picture interference, PPI, paradigms), as it is affected by phonological variables (word position, neighborhood density, and phonotactic probability) and by semantic variables (associates and coordinates). In the auditory priming task, we also compare automatic versus controlled processing by manipulating the ISI. One goal is to compare the children's performance on the PWI and PPI paradigms to determine the influence of the modality of the interfering stimulus.

Our priming task explores differences in semantic and phonological priming between CI users and their normal hearing peers, as well as the effects of phonotactic probability on phonological priming and on accuracy in identifying real words and nonwords. To date, 35 NH children and 18 children with CIs have completed the phonological half, and 27 NH children and 13 children with CIs have completed the semantic half. Children were presented with pairs of digitized auditory words or nonwords separated by 50 ms (automatic processing) or 250 ms (controlled processing) and had to push a button indicating whether the second member of a pair was a word or a nonword. Pairs were related or unrelated. Related pairs included semantic relatives (associates or coordinates) or phonological relatives (shared onset or rhyme, high or low phonotactic probability). There were 20 pairs for each stimulus category.

Preliminary data reveal many differences in both semantic and phonological priming between CI users and their NH peers, including greater inhibition to words following shared-onset real words and to nonwords following HPP nonwords, and greater facilitation at short ISIs to words following nonwords. In addition, CI users exhibited greater inhibition to associate pairs and greater facilitation to coordinate pairs at short ISIs. Average RT and accuracy in all categories except HPP word-nonword pairs were comparable to those of the NH group. However, CI users tended to judge HPP nonwords as real words, indicating a potential strategy of overusing phonotactic probability as an indicator of lexical status. The patterns of accuracy and priming suggest that CI users are aware of, and use, phonotactic probability, and that even though they may perform on par with their peers, their phonological representations of the words and their steps in accomplishing the task may be different.

The remaining experiments in this set employ the PWI and PPI paradigms to examine the time course of phonological and semantic availability in lexical access for production. We also want to compare children's performance across these tasks to determine differences in performance for auditory versus visual interfering stimuli. To date, 36 NH children aged 9;5 (+/- 1;1) and 17 children with CIs aged 8;8 (+/- 1.0) have participated in the PWI experiment, and 31 NH children aged 9;7 (+/- 1;0) and 13 children with CIs aged 9.6 (+/- 1.1) have participated in the PPI task. These preliminary data again revealed different patterns of reaction times, suggesting differences in underlying lexical representation and processing. Children with CIs were slower to respond in the PWI task, but not in the PPI task, suggesting delays in auditory processing. Although accuracy was at ceiling for both groups, the children with CIs exhibited greater facilitation and inhibition effects than their NH peers, and these effects were present for longer periods of time, suggesting that the phonological and semantic relatives of the target remained active longer in children with CIs. While failure to inhibit the activation of these related words might not affect lexical recognition or production, it may impact sentence processing. Once additional subjects are added, data will be analysed using multilevel modelling to examine group and individual differences along with item analyses.
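
As an indication of the kind of multilevel analysis we have in mind, the sketch below fits a mixed-effects model of reaction times with subjects as a random grouping factor; the file name and column names are hypothetical, and the actual analysis may differ (e.g., crossed random effects for items).

```python
# Minimal sketch of a multilevel (mixed-effects) analysis of reaction
# times with fixed effects of group (CI vs NH), IS condition, and SOA,
# and random intercepts for subjects. Column names and the CSV file are
# hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pwi_trials.csv")  # columns: subject, group, condition, soa, rt

model = smf.mixedlm("rt ~ group * condition * soa", data=df,
                    groups=df["subject"])
result = model.fit()
print(result.summary())
```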

Experiment Sets 2 and 3

These experiments focus on the moment-by-moment time course of lexical access for recognition and for production as it is affected by phonological variables (word position (onset/rhyme), neighborhood density, and phonotactic probability), semantic variables (associates and coordinates), and the phonology-semantic interaction (Yee & Sedivy, 2006) using eye tracking. One goal is to compare recognition and production performance. We expect poorer semantic performance by children with CIs in auditory recognition tasks, as access to semantic information is filtered through their phonological system. For phonologically driven lexical access for production, we expect children with CIs to exhibit reduced cohort effects particularly for rhymes and to have an atypical time course of competitor activation compared to age-matched and vocabulary-matched groups. We expect semantic-based access to be similar to vocabulary-matched, hearing peers for production tasks.

We are currently carrying out four eye tracking studies in this set, each contributing to a different aspect of our portrait of lexical access in children with cochlear implants. The first three experiments involve recognition and production. For all recognition tasks, the child is presented with four pictures and is asked to mouse-click on the picture that matches the word played over the speaker. For the production tasks, the four pictures appear and the child is asked to say the name of the picture that has a pink border. The first set of tasks involves a target, a semantic competitor (associate or coordinate), and two unrelated stimuli. The second set of tasks involves a target, a phonological competitor (onset or rhyme, dense or sparse lexical neighborhood), and two unrelated stimuli. Our participants to date include 31 NH children aged 10.0 (+/- 1.2) and 11 children with CIs aged 10.1 (+/- 1.0). Preliminary results reveal almost 100% looking time to the target for the production task, but very different patterns for the recognition task. Almost 100% of the NH children's fixations were to the target, whereas children with CIs looked more at the phonological relative for shared-onset words from dense phonological neighborhoods and more at all of the other pictures in both rhyme conditions before selecting the correct answer. This suggests that children with CIs hold onto rhyme and dense onset competitors longer than their normal hearing peers and that they are sensitive to lexical neighborhood density in decoding speech.

The third set of tasks involves a target, a semantic competitor, a phonological competitor, and an unrelated stimulus. This design allows us to estimate a timeline of lexical neighborhood activation using the proportion of looks to the semantic and phonological relatives and the target over the time between the presentation of the target stimulus and the response. The two groups showed a similar pattern during the production task, attending to all stimuli equally until about 200 ms (presumably the point of lexical identification), after which they looked predominantly at the target. However, in the recognition task, NH children showed greater attention to the target starting at 600 ms, whereas CI users showed preferential attention to the target earlier but also spent a greater proportion of time fixating on the onset competitor.

The fourth task uses the recognition paradigm for a set of four pictures including a target, two foils, and an onset competitor of a semantic relative of the target (e.g., LOG for KEY, via semantic bridge lock, Yee & Sedivy, 2006). By examining visual attention to the competitor and comparing it to data from a recognition task where the only competitor is the semantic bridge word (in this example, lock), we can examine the effects of adding a layer of phonological processing onto a semantic task and vice versa. We do not yet have sufficient data for the fourth task.

Experiment Set 4

This experiment examines selective attention. The child is asked to attend first to one ear and then to the other, while words are played simultaneously over the speakers (Victorino & Schwartz, 2012). The child is asked to use the mouse to click on one of two pictures, selecting the one that matches the word being played on the attended side. The position of the pictures on the screen is counterbalanced, as is the match or mismatch between the non-target picture and the word played in the unattended ear. All children complete a localization task as a prerequisite to participation. Preliminary data show that both groups look predominantly at the target when they answer correctly, but accuracy is much lower in CI users. Further analyses will elucidate any differences in looking patterns with respect to whether the target picture appears on the attended or unattended side and whether the foil picture matches the word played in the unattended ear, as well as any differences in the performance of bilateral and unilateral CI users, bimodal listeners, and those with normal hearing. We have just begun this experiment (20 NH subjects, 7 CI subjects).

Experiment Set 5

Our colleagues at Indiana University tested a cohort of infants and young children on speech perception and word learning tasks. These children are now in the age range for this project. We will examine the relations between these children's early speech perception and word learning abilities and their later lexical access.

Summary

To date, we know relatively few details about the language development of children with CIs beyond their performance on standardized tests. There is good reason to suspect that even the most successful CI users have lexical processes and representations that differ from those of their hearing peers, particularly with respect to phonological representations and processing. In this plenary paper, we presented the outline of a five-year project funded by the National Institutes of Health along with preliminary data. A combination of well-established behavioural methods and eye tracking will provide important new insights concerning the semantic and phonological aspects of lexical access for recognition and production in children with cochlear implants. Because selective attention and its control also appear to be areas of vulnerability for these children, we are also exploring the relationship between these cognitive abilities and lexical access. Several aspects of our study concern the relation between access for production and access for recognition. Such comparisons are rare, even in hearing children. Most previous studies assume a reversible relationship between production and comprehension. Although these two processes share a common representation in adults, the task demands differ. The difference is particularly important in children with CIs because recognition begins with access to phonological information, whereas semantic information, assumed to be intact in children with CIs, has primacy in production. We aim to gain an even more fine-grained distinction between semantic access and phonologically mediated access. The relations between early perception and word learning and later fine-grained language processing will provide important information about the prediction of language outcomes in these children. The experiments described aim to provide a detailed characterization of lexical access in children with CIs that will lead to novel assessment and intervention approaches. Such approaches could identify and ameliorate deficits that are not currently addressed by standardized tests or by commonly used approaches to language intervention.

Acknowledgements

The authors wish to thank the children who participated and their families. We also thank Dr. Ronald Hoffman; the Children's Hearing Institute; and our summer interns Julie Winer, Jessica Schanker, and Alexandra Lewisohn for their support and assistance. A special thanks is owed to Dr. Jane Madell who initially brought the first author to NYEEI.

This article and the work reported were supported by a grant from the NIH, National Institute on Deafness and Other Communication Disorders, 5R01DC011041. The research was conducted at the New York Eye and Ear Infirmary and was presented as a plenary talk by the first author at the 2012 meeting of the International Clinical Phonetics and Linguistics Association.

Footnotes

Declarations of Interest

The authors have no conflict of interest.

References

  1. Alario F, Segui J, Ferrand L. Semantic and associative priming in picture naming. Quarterly Journal of Experimental Psychology: Section A. 2000;53:741–764. doi: 10.1080/713755907. [DOI] [PubMed] [Google Scholar]
  2. Allopenna PD, Magnuson JS, Tanenhaus MK. Tracking the time course of spoken word recognition using eye movements: Evidence for continuous mapping models. Journal of Memory & Language. 1998;38:419–439. [Google Scholar]
  3. Altmann GTM, Kamide Y. The real-time mediation of visual attention by language and world knowledge: Linking anticipatory (and other) eye movements to linguistic processing. Journal of Memory & Language. 2007;57:502–518. [Google Scholar]
  4. Blamey P, Barry J, Bow C, Sarant J, Paatsch L, Wales R. The development of speech production following cochlear implantation. Clinical Linguistics & Phonetics. 2001;15:363–382. [Google Scholar]
  5. Brooks PJ, MacWhinney B. Phonological priming in children's picture naming. Journal of Child Language. 2000;27:335–366. doi: 10.1017/s0305000900004141. [DOI] [PubMed] [Google Scholar]
  6. Cutting JC, Ferreira VS. Semantic and phonological information flow in the production lexicon. Journal of Experimental Psychology / Learning, Memory & Cognition. 1999;25:318. doi: 10.1037//0278-7393.25.2.318. [DOI] [PubMed] [Google Scholar]
  7. Dell GS. A spreading-activation theory of retrieval in sentence production. Psychological Review. 1986;93:283–321. [PubMed] [Google Scholar]
  8. Eberhard KM, Spivey-Knowlton M, Sedivy JC, Tanenhaus MK. Eye movements as a window into real-time spoken language comprehension in natural contexts. Journal of Psycholinguistic Research. 1995;24:409–436. [DOI] [PubMed] [Google Scholar]
  9. Farris-Trimble AW, McMurray B. Online speech processing and word recognition in post-lingually deafened cochlear implant users.. Poster presented at the 12th International Conference on Cochlear Implants and other Implantable Auditory Technologies; Baltimore, MD. 2012. [Google Scholar]
  10. Farris-Trimble AW, McMurray B. Online spoken word recognition in pediatric cochlear implant users.. Poster presented at the 13th Symposium on Cochlear Implants in Children; Chicago, IL. 2011. [Google Scholar]
  11. Garlock VM, Walley AC, Metsala JL. Age-of-acquisition, word frequency, and neighborhood density effects on spoken word recognition by children and adults. Journal of Memory & Language. 2001;45:468–492. [Google Scholar]
  12. Gaskell MG, Marslen-Wilson W. Integrating form and meaning: A distributed model of speech perception. Language and Cognitive Processes. 1997;12:613–656. [Google Scholar]
  13. Geers AE, Nicholas JG, Sedey AL. Language skills of children with early cochlear implantation. Ear and Hearing. 2003;24(1 Suppl):46S–58S. doi: 10.1097/01.AUD.0000051689.57380.1B. [DOI] [PubMed] [Google Scholar]
  14. Geers A, Moog J. Spoken language results: Vocabulary, syntax, and communication. Volta Review. 1994;96:131–148. [Google Scholar]
  15. Geren J, Snedeker J. Syntactic and lexical development in children with cochlear implants. Unpublished paper, Harvard University; 2009. [Google Scholar]
  16. German DJ, Newman RS. The impact of lexical factors on children's word-finding errors. Journal of Speech, Language & Hearing Research. 2004;47:624–636. doi: 10.1044/1092-4388(2004/048). [DOI] [PubMed] [Google Scholar]
  17. Houston DM, Carter AK, Pisoni DB, Kirk KI, Ying EA. Word learning in children following cochlear implantation. Volta Review. 2005;105:41–72. [PMC free article] [PubMed] [Google Scholar]
  18. Houston DM, Pisoni DB, Kirk KI, Ying EA, Miyamoto RT. Speech perception skills of deaf infants following cochlear implantation: A first report. International Journal of Pediatric Otorhinolaryngology. 2003;67:479–495. doi: 10.1016/s0165-5876(03)00005-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Jerger S, Damian MF, Tye-Murray N, Dougherty M, Mehta J, Spence M. Effects of childhood hearing loss on organization of semantic memory: typicality and relatedness. Ear and Hearing. 2006;27:686–702. doi: 10.1097/01.aud.0000240596.56622.0c. [DOI] [PubMed] [Google Scholar]
  20. Jerger S, Lai L, Marchman VA. Picture naming by children with hearing loss: I. Effect of semantically related auditory distractors. Journal of the American Academy of Audiology. 2002;13:463–477. [PubMed] [Google Scholar]
  21. Jescheniak JD, Schriefers H. Discrete serial versus cascade processing in lexical access in speech production: Further evidence from the activation of near-synonyms. Journal of Experimental Psychology / Learning, Memory & Cognition. 1998;24:1256–1274. [Google Scholar]
  22. Jescheniak J, Schriefers H. Priming effects from phonologically related distractors in picture-word interference. The Quarterly Journal of Experimental Psychology A: Human Experimental Psychology. 2001;54A:371–382. doi: 10.1080/713755981. [DOI] [PubMed] [Google Scholar]
  23. Jescheniak JD, Oppermann F, Hantsch A, Wagner V, Mädebach A, Schriefers H. Do perceived context pictures automatically activate their phonological code? Experimental Psychology. 2009;56:56–65. doi: 10.1027/1618-3169.56.1.56. [DOI] [PubMed] [Google Scholar]
  24. Jusczyk PW, Hohne EA. Infants’ memory for spoken words. Science. 1997;277(5334):1984. doi: 10.1126/science.277.5334.1984. [DOI] [PubMed] [Google Scholar]
  25. Jusczyk PW, Luce PA, Charles-Luce J. Infants’ sensitivity to phonotactic patterns in the native language. Journal of Memory and Language. 1994;33:630–645. [Google Scholar]
  26. Kirk KI, Ying E, Miyamoto RT, O'Neill T, Lento CL, Fears B. Effects of age at implantation in young children. Annals of Otology, Rhinology & Laryngology. 2002;111:69. doi: 10.1177/00034894021110s515. [DOI] [PubMed] [Google Scholar]
  27. Marchman VA, Fernald A. Speed of word recognition and vocabulary knowledge in infancy predict cognitive and language outcomes in later childhood. Developmental Science. 2008;11:1–9. doi: 10.1111/j.1467-7687.2008.00671.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Marslen-Wilson W. Activation, competition, and frequency in lexical access. In: Altmann G, editor. Cognitive models of speech processing: Psycholinguistic and computational perspectives. MIT Press; Cambridge, MA: 1990. pp. 148–172. [Google Scholar]
  29. McClelland JL, Elman JL. The TRACE model of speech perception. Cognitive Psychology. 1986;18:1–86. doi: 10.1016/0010-0285(86)90015-0. [DOI] [PubMed] [Google Scholar]
  30. McMurray B, Samelson V, Lee S, Tomblin JB. Individual differences in online spoken word recognition: Implications for SLI. Cognitive Psychology. 2010;60:1–39. doi: 10.1016/j.cogpsych.2009.06.003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  31. Meyer DE, Schvaneveldt RW. Facilitation in recognizing pairs of words: Evidence of a dependence between retrieval operations. Journal of Experimental Psychology. 1971;90:227–234. doi: 10.1037/h0031564. [DOI] [PubMed] [Google Scholar]
  32. Neely JH. Semantic priming and retrieval from lexical memory: Roles of inhibitionless spreading activation and limited-capacity attention. Journal of Experimental Psychology: General. 1977;106:226–254. [Google Scholar]
  33. Newman RS, German DJ. Effects of lexical factors on lexical access among typical language-learning children and children with word-finding difficulties. Language & Speech. 2002;45:285–317. doi: 10.1177/00238309020450030401. [DOI] [PubMed] [Google Scholar]
  34. Nicholas JG, Geers AE. Will they catch up? The role of age at cochlear implantation in the spoken language development of children with severe to profound hearing loss. Journal of Speech, Language, and Hearing Research. 2007;50:1048–62. doi: 10.1044/1092-4388(2007/073). [DOI] [PMC free article] [PubMed] [Google Scholar]
  35. Nicholas JG, Geers AE. Expected test scores for preschoolers with a cochlear implant who use spoken language. American Journal of Speech-Language Pathology. 2008;17:121–138. doi: 10.1044/1058-0360(2008/013). [DOI] [PMC free article] [PubMed] [Google Scholar]
  36. Nittrouer S, Burton LT. The role of early language experience in the development of speech perception and phonological processing abilities: Evidence from 5-year-olds with histories of otitis media with effusion. Journal of Communication Disorders. 2005;38:29–63. doi: 10.1016/j.jcomdis.2004.03.006. [DOI] [PubMed] [Google Scholar]
  37. Peterson RR, Savoy P. Lexical selection and phonological encoding during language production: Evidence for cascaded activation. Journal of Experimental Psychology: Learning, Memory & Cognition. 1998;24:539. [Google Scholar]
  38. Plaut DC, Booth JR. Individual and developmental differences in semantic priming: Empirical and computational support. Psychological Review. 2000;107:786. doi: 10.1037/0033-295x.107.4.786. [DOI] [PubMed] [Google Scholar]
  39. Rescorla L. Language and reading outcomes to age 9 in late-talking toddlers. Journal of Speech, Language & Hearing Research. 2002;45:360. doi: 10.1044/1092-4388(2002/028). [DOI] [PubMed] [Google Scholar]
  40. Rescorla L. Age 13 language and reading outcomes in late-talking toddlers. Journal of Speech, Language & Hearing Research. 2005;48(2):459–472. doi: 10.1044/1092-4388(2005/031). [DOI] [PubMed] [Google Scholar]
  41. Sailor K, Brooks PJ, Bruening PR, Seiger-Gardner L, Guterman M. Exploring the time course of semantic interference and associative priming in the picture-word interference task. The Quarterly Journal of Experimental Psychology. 2008;62:789–801. doi: 10.1080/17470210802254383. [DOI] [PubMed] [Google Scholar]
  42. Schorr EA, Roth FP, Fox NA. A comparison of the speech and language skills of children with cochlear implants and children with normal hearing. Communication Disorders Quarterly. 2008;29:195–210. [Google Scholar]
  43. Schriefers H, Meyer AS, Levelt WJ. Exploring the time course of lexical access in language production: Picture-word interference studies. Journal of Memory and Language. 1990;29:86–102. [Google Scholar]
  44. Seiger-Gardner L, Brooks PJ. Effects of onset- and rhyme-related distractors on phonological processing in children with specific language impairment. Journal of Speech, Language & Hearing Research. 2008;51:1263–1281. doi: 10.1044/1092-4388(2008/07-0079). [DOI] [PMC free article] [PubMed] [Google Scholar]
  45. Seiger-Gardner L, Schwartz RG. Lexical access in children with and without specific language impairment: A cross-modal picture-word interference study. International Journal of Language & Communication Disorders. 2008;43:528–551. doi: 10.1080/13682820701768581. [DOI] [PubMed] [Google Scholar]
  46. Sekerina IA, Brooks PJ. Eye movements during spoken word recognition in Russian children. Journal of Experimental Child Psychology. 2007;98(1):20–45. doi: 10.1016/j.jecp.2007.04.005. [DOI] [PubMed] [Google Scholar]
  47. Seymour H, Roeper T, de Villiers JG. Diagnostic Evaluation of Language Variation—Norm Referenced (DELV–NR) Harcourt Assessment; San Antonio, TX: 2005. [Google Scholar]
  48. Storkel HL, Armbrüster J, Hogan TP. Differentiating phonotactic probability and neighborhood density in adult word learning. Journal of Speech, Language, and Hearing Research. 2006;49:1175–1192. doi: 10.1044/1092-4388(2006/085). [DOI] [PMC free article] [PubMed] [Google Scholar]
  49. Storkel HL. Learning new words: Phonotactic probability in language development. Journal of Speech, Language & Hearing Research. 2001;44:1321–1337. doi: 10.1044/1092-4388(2001/103). [DOI] [PubMed] [Google Scholar]
  50. Storkel HL. Learning new words II: Phonotactic probability in verb learning. Journal of Speech, Language & Hearing Research. 2003;46:1312–1323. doi: 10.1044/1092-4388(2003/102). [DOI] [PubMed] [Google Scholar]
  51. Svirsky MA, Robbins AM, Kirk KI, Pisoni DB, Miyamoto RT. Language development in profoundly deaf children with cochlear implants. Psychological Science. 2000;11:153–158. doi: 10.1111/1467-9280.00231. [DOI] [PMC free article] [PubMed] [Google Scholar]
  52. Svirsky MA, Teoh S, Neuburger H. Development of language and speech perception in congenitally, profoundly deaf children as a function of age at cochlear implantation. Audiology & Neuro-Otology. 2004;9:224–233. doi: 10.1159/000078392. [DOI] [PubMed] [Google Scholar]
  53. Tomblin JB, Barker BA, Spencer LJ, Xuyang Zhang, Gantz BJ. The effect of age at cochlear implant initial stimulation on expressive language growth in infants and toddlers. Journal of Speech, Language & Hearing Research. 2005;48:854–867. doi: 10.1044/1092-4388(2005/059). [DOI] [PMC free article] [PubMed] [Google Scholar]
  54. Tomblin JB, Spencer L, Flock S, Tyler R, Gantz B. A comparison of language achievement in children with cochlear implants and children using hearing aids. Journal of Speech, Language & Hearing Research. 1999;42:497–511. doi: 10.1044/jslhr.4202.497. [DOI] [PMC free article] [PubMed] [Google Scholar]
  55. Trehub SE, Henderson JL. Temporal resolution in infancy and subsequent language development. Journal of Speech & Hearing Research. 1996;39:1315–1320. doi: 10.1044/jshr.3906.1315. [DOI] [PubMed] [Google Scholar]
  56. Tsao F, Liu H, Kuhl PK. Speech perception in infancy predicts language development in the second year of life: A longitudinal study. Child Development. 2004;75:1067–1084. doi: 10.1111/j.1467-8624.2004.00726.x. [DOI] [PubMed] [Google Scholar]
  57. Victorino K, Schwartz RG. Selective attention & language processing in children with SLI. Poster presented at the ASHA Convention; Atlanta. 2012. [Google Scholar]
  58. Vitevitch MS, Luce PA. When words compete: levels of processing in perception of spoken words. Psychological Science. 1998;9:325–329. [Google Scholar]
  59. Vitevitch MS, Luce PA, Pisoni DB, Auer ET. Phonotactics, neighborhood activation, and lexical access for spoken words. Brain and Language. 1999;68:306–311. doi: 10.1006/brln.1999.2116. [DOI] [PMC free article] [PubMed] [Google Scholar]
  60. Vitevitch MS, Armbruster J, Chu S. Sublexical and lexical representations in speech production: effects of phonotactic probability and onset density. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2004;30:514–529. doi: 10.1037/0278-7393.30.2.514. [DOI] [PubMed] [Google Scholar]
  61. Yee E, Sedivy J. Eye movements to pictures reveal transient semantic activation during spoken word recognition. Journal of Experimental Psychology: Learning, Memory and Cognition. 2006;32:1–14. doi: 10.1037/0278-7393.32.1.1. [DOI] [PubMed] [Google Scholar]
  62. Zamuner TS, Gerken L, Hammond M. Phonotactic probabilities in young children's speech production. Journal of Child Language. 2004;31:515–536. doi: 10.1017/s0305000904006233. [DOI] [PubMed] [Google Scholar]
