Author manuscript; available in PMC: 2012 Nov 12.
Published in final edited form as: Ear Hear. 1995 Oct;16(5):470–481. doi: 10.1097/00003446-199510000-00004

Lexical Effects on Spoken Word Recognition by Pediatric Cochlear Implant Users

Karen Iler Kirk 1, David B Pisoni 2,3, Mary Joe Osberger 4
PMCID: PMC3495322  NIHMSID: NIHMS418676  PMID: 8654902

Abstract

Objective

The purposes of this study were 1) to examine the effect of lexical characteristics on the spoken word recognition performance of children who use a multichannel cochlear implant (CI), and 2) to compare their performance on lexically controlled word lists with their performance on a traditional test of word recognition, the PB-K.

Design

In two different experiments, 14 to 19 pediatric CI users who demonstrated at least some open-set speech recognition served as subjects. Based on computational analyses, word lists were constructed to allow systematic examination of the effects of word frequency, lexical density (i.e., the number of phonemically similar words, or neighbors), and word length. The subjects’ performance on these new tests was also compared with their performance on the PB-K.

Results

The percentage of words correctly identified was significantly higher for lexically “easy” words (high frequency words with few neighbors) than for “hard” words (low frequency words with many neighbors), but there was no lexical effect on phoneme recognition scores. Word recognition performance was consistently higher on the lexically controlled lists than on the PB-K. In addition, word recognition was better for multisyllabic than for monosyllabic stimuli.

Conclusions

These results demonstrate that pediatric cochlear implant users are sensitive to the acoustic-phonetic similarities among words, that they organize words into similarity neighborhoods in long-term memory, and that they use this structural information in recognizing isolated words. The results further suggest that the PB-K underestimates these subjects’ spoken word recognition.


The Nucleus multichannel cochlear implant provides substantial auditory information to children with profound hearing impairments who are unable to benefit from conventional amplification. However, children who use the Nucleus cochlear implant vary greatly in their spoken word recognition skills (Staller, Beiter, Brimacombe, Mecklenburg, & Arndt, 1991a), depending in part on the age at onset and duration of their hearing loss (Fryauf-Bertschy, Tyler, Kelsay, & Gantz, 1992; Osberger, Todd, Berry, Robbins, & Miyamoto, 1991b; Staller et al., 1991a; Staller, Dowell, Beiter, & Brimacombe, 1991b), and on the length of cochlear implant use (Fryauf-Bertschy et al., 1992; Miyamoto et al., 1992, 1994; Osberger et al., 1991a; Waltzman, Cohen, & Shapiro, 1992; Waltzman et al., 1990). Because of this variability in performance, several different types of tests have been used to assess the perceptual benefits of cochlear implant use in children. Closed-set tests, which provide the listener with a limited number of response alternatives, have been used to measure the perception of prosodic cues, vowel and consonant identification, and word identification. According to Tyler (1993), approximately 50% of children with multichannel cochlear implants perform significantly above chance on closed-set tests of word identification, and some obtain very high levels of performance (70% to 100% correct). For this latter group, more difficult open-set tests of spoken word recognition, wherein no response alternatives are provided, are needed to assess their perceptual capabilities.

Historically, spoken word recognition tests were adapted from articulation tests used to evaluate military communications equipment during World War II (Hudgins, Hawkins, Karlin, & Stevens, 1947). Several criteria were considered essential in selecting test items, including familiarity, homogeneity of audibility, and phonetic balancing (i.e., to have phonemes within a word list represented in the same proportion as in English). Phonetic balancing was included as a criterion because it was assumed that all speech sounds must be included to test hearing (Hudgins et al., 1947), and that phonetic balancing ensured homogeneity across different lists (Hirsh et al., 1952). Subsequent research demonstrated that phonetic balancing was not necessary to achieve equivalent word lists (Carhart, 1965; Hood & Poole, 1980; Tobias, 1964) and that other nonauditory factors, such as subject age or language level, also influence spoken word recognition (Hodgson, 1985; Jerger, 1984; Smith & Hodgson, 1970). Nonetheless, phonetically balanced word recognition tests still enjoy widespread use in both clinical and research settings because their psychometric properties have been well established (Hirsh et al., 1952; Hudgins et al., 1947). These tests are also widely used because recorded versions of the test materials are available commercially, thereby facilitating comparison of results obtained at different test sites. Phonetically balanced word lists have been used to evaluate potential cochlear implant candidates, as well as to measure post-implant performance.

Spoken word recognition is often assessed in children using phonetically balanced materials such as the Phonetically Balanced Kindergarten word lists (PB-K) (Haskins, Reference Note 1). Children with multichannel cochlear implants generally perform poorly on these phonetically balanced tests (Fryauf-Bertschy et al., 1992; Miyamoto, Osberger, Robbins, Myres, & Kessler, 1993; Osberger et al., 1991a; Staller et al., 1991a). For example, Osberger et al. (1991a) reported that the mean PB-K score for 28 subjects with approximately 2 yr of cochlear implant use was 11% (range 0% to 36%). Only six of their subjects scored above 0% words correct. Similarly, Staller et al. (1991a) reported mean PB-K scores of approximately 9% words correct for 80 children who had 1 yr of multichannel cochlear implant experience. It is difficult to distinguish among children with differing spoken word recognition skills using the PB-K test, or to measure changes with increased device experience because the scores of these subjects cluster in a restricted range near 0% correct. Furthermore, the parents and educators of children with cochlear implants have sometimes reported a discrepancy between the observed performance on these phonetically balanced word lists and real-world or everyday communication abilities in more natural settings. That is, children may obtain very low scores on phonetically balanced word lists, but demonstrate relatively good performance during daily activities.

The administration of spoken word recognition tests assesses the underlying peripheral and central perceptual processes employed in spoken word recognition (Lively, Pisoni, & Goldinger, 1994; Pisoni & Luce, 1986). Models of spoken word recognition generally propose an initial stage of processing wherein the speech signal is converted to a phonetic representation, followed by a second stage wherein the phonetic representations are matched to the target words by comparing them to items stored in the mental lexicon (Luce, 1986; Luce, Pisoni, & Goldinger, 1990; Marslen-Wilson, 1987). (For an alternative view, see Klatt's Lexical Access From Spectra [LAFS] model [Klatt, 1980]). Poor performance on phonetically balanced speech identification tests may result from difficulties at either stage. If the auditory signal presented via the cochlear implant is too degraded to allow accurate phonetic encoding, word recognition performance will be impaired or reduced. The structure and organization of sound patterns in the mental lexicon can also influence word recognition (Pisoni, Nusbaum, Luce, & Slowiaczek, 1985). For example, when test item selection is constrained by phonetic balancing, the resulting lists may contain many words that are unfamiliar to children with profound hearing losses, who typically have limited vocabularies (Dale, 1974; Lach, Ling, & Ling, 1970; Quigley & Paul, 1984). Children should be able to repeat unfamiliar words if their sensory aid provides adequate auditory information for phoneme identification. If not, then children will most likely select a phonemically similar word within their working vocabulary. In addition, lexical characteristics, such as the frequency with which words occur in the language (Andrews, 1989; Elliot, Clifton, & Servi, 1983) and the number of phonemically similar words in the language (Treisman, 1978a, 1978b) have been shown to affect the speed and accuracy of spoken word recognition (Luce, 1986; Luce et al., 1990). 
Phonetically balanced word recognition tests were not designed to assess the influence of these lexical factors on word recognition.

This paper reports the development of two new word recognition tests in which lexical properties of the test items were carefully controlled; test development was motivated by several assumptions embodied in current theories of spoken word recognition, discussed below. Pediatric cochlear implant subjects’ performance on these new tests will also be compared with their performance on a phonetically balanced word recognition test, the PB-K.

Theoretical Framework for Test Development

Both word frequency and lexical similarity affect spoken word recognition performance. One measure of lexical similarity is the number of “neighbors,” or words that differ by one phoneme from the target word (Greenberg & Jenkins, 1964; Landauer & Streeter, 1973). For example, the words bat, cap, cut, scat, and at are all neighbors of the target word cat. Pisoni and his colleagues used phonetic transcriptions from a computer-readable version of Webster's Pocket Dictionary (Pisoni et al., 1985) to conduct a series of computational analyses of the acoustic-phonetic similarity of sound patterns of words. These analyses revealed that words could be organized into “similarity neighborhoods” based on both the frequency of occurrence of words in the language and density of words within the lexical neighborhoods, as measured by one phoneme substitutions. Words with many lexical neighbors come from “dense” neighborhoods, whereas those with few neighbors come from “sparse” neighborhoods.
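The one-phoneme neighbor computation described above is simple to sketch. The following is an illustration only, not the authors' actual analysis software: it generates every one-phoneme substitution, deletion, and addition of a target word and keeps those that occur in a toy lexicon. Letters stand in for phonemes here; the real analyses used phonemic transcriptions from Webster's Pocket Dictionary.

```python
def one_phoneme_neighbors(target, lexicon):
    """Return the lexicon words that differ from target by exactly one
    phoneme substitution, deletion, or addition.

    Words are tuples of phoneme symbols; letters are used below as a
    stand-in for a real phonemic transcription.
    """
    candidates = set()
    n = len(target)
    alphabet = {p for word in lexicon for p in word}
    # substitutions of one phoneme
    for i in range(n):
        for p in alphabet:
            if p != target[i]:
                candidates.add(target[:i] + (p,) + target[i + 1:])
    # deletions of one phoneme
    for i in range(n):
        candidates.add(target[:i] + target[i + 1:])
    # additions of one phoneme at any position
    for i in range(n + 1):
        for p in alphabet:
            candidates.add(target[:i] + (p,) + target[i:])
    return candidates & set(lexicon)

lexicon = [tuple(w) for w in
           ["cat", "bat", "cap", "cut", "scat", "at", "dog"]]
print(sorted("".join(w) for w in
             one_phoneme_neighbors(tuple("cat"), lexicon)))
# → ['at', 'bat', 'cap', 'cut', 'scat']
```

The neighborhood density of a word is then simply the size of the returned set.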

In a series of behavioral experiments with adult listeners, Luce (1986) and Luce et al. (1990) showed that these computational analyses have behavioral consequences, in that word frequency, neighborhood density, and the average frequency of words in the neighborhood all influence spoken word perception. Figure 1 illustrates the relationship between these three lexical characteristics. “Easy” words (high frequency words from sparse neighborhoods) were found to be recognized faster and with greater accuracy than “hard” words (low frequency words from dense neighborhoods). Similar lexical effects on spoken word recognition performance have been found in several different experimental paradigms (Cluff & Luce, 1990; Luce, 1986; Luce et al., 1990). The relationship between neighborhood density and word frequency has been formalized by Luce (1986) in terms of the Neighborhood Activation Model (NAM).

Figure 1. Lexical neighborhoods for “easy” words and “hard” words, based on computational analyses.

According to NAM, a stimulus input activates a set of similar acoustic-phonetic patterns in memory. These acoustic-phonetic representations are assumed to be activated in a multidimensional acoustic-phonetic space with activation levels proportional to the degree of similarity to the stimulus word. Over the course of processing, the pattern corresponding to the input receives successively higher levels of activation, while the activation levels of similar patterns become attenuated. This initial stage of activation is followed by a process of “lexical selection” among a large number of potential candidates that are consistent with the acoustic-phonetic input. Frequency is assumed to operate as a biasing factor by multiplicatively adjusting the activation levels of the acoustic-phonetic representations. In lexical selection, the activation levels are then summed and the probabilities of choosing each acoustic-phonetic representation are computed based on the overall activation level. Word recognition occurs when a given acoustic-phonetic representation is chosen based on the computed probabilities. Thus, NAM provides a two-stage account of how the structure and organization of the sound patterns of words in memory contributes to the perception of spoken words.
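The selection stage just described can be illustrated with a simple frequency-weighted choice rule. This is a sketch in the spirit of NAM, not Luce's (1986) implementation, and the similarity values and frequencies below are invented for illustration.

```python
def nam_prob(target_sim, target_freq, neighbors):
    """Probability of selecting the target under a frequency-weighted
    choice rule in the spirit of NAM: each candidate's activation
    (its acoustic-phonetic similarity to the input) is multiplied by
    its word frequency, and the target is chosen with probability
    proportional to its scaled activation relative to the summed
    activation of the whole neighborhood.
    """
    target_act = target_sim * target_freq
    total = target_act + sum(sim * freq for sim, freq in neighbors)
    return target_act / total

# An "easy" word: high frequency, few low-frequency neighbors.
easy = nam_prob(1.0, 300, [(0.5, 10), (0.5, 20)])
# A "hard" word: low frequency, many high-frequency neighbors.
hard = nam_prob(1.0, 5, [(0.5, 200)] * 8)
print(round(easy, 3), round(hard, 3))   # → 0.952 0.006
```

Under this rule, high-frequency neighbors depress the target's selection probability, which is the competition effect invoked later to explain the PB-K results.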

NAM has also provided a useful theoretical framework for examining the lexicons of children with normal hearing. Charles-Luce and Luce (1990) found that children aged 5 to 7 yr had relatively small lexical neighborhoods when compared with those of adults. Children also exhibited more confusions among words, suggesting that their lexical representations are “coarser.” More recently, Logan (1992) extended this work by applying the neighborhood similarity metric to language samples obtained from children between the ages of 18 mo and 5 yr. In Logan's analysis, word frequency and density referred, respectively, to the number of times each target word occurred, and to the number of lexical neighbors within the corpus. Logan found that neighborhood density increased significantly until age 2 yr, as new words were added to the lexicon, and thereafter remained relatively stable. He also found that neighborhood density in children was positively correlated with word frequency, paralleling the relationship found for these two variables in adults.

Little is currently known about the lexical representations of hearing-impaired children with multichannel cochlear implants or the organization of spoken words in their lexicons. If the structural organization of words in their memory mirrors that of listeners with normal hearing, then word recognition performance should be influenced by word frequency and lexical density in a manner similar to that of normal-hearing children. Evidence of similar lexical organization would suggest that the perceptual processes underlying word recognition are similar in children with cochlear implants and listeners with normal hearing. The goal of this research was to develop several new measures that would allow us to examine the underlying perceptual processes influencing spoken word recognition by children with multichannel cochlear implants. Previous research with this group of subjects has provided descriptive information about their spoken word recognition; our intent was to examine the structural organization and access of words stored in these subjects’ long-term lexical memory. If these subjects can encode specific acoustic-phonetic details, they should have narrow phonetic categories, resulting in better recognition of lexically “easy” than “hard” words. Conversely, if they are unable to encode fine phonetic details, the resulting broad phonetic categories should yield similar performance for “easy” and “hard” words because both “easy” and “hard” words would have many words with which they could be confused.

The present investigation consisted of three phases. First, new lexically controlled stimulus materials were developed and computational analyses were used to compare their lexical characteristics with those of the PB-K word lists. Second, behavioral measures were used to compare word recognition of lexically controlled words with phonetically balanced monosyllabic word lists. Third, the effects of word length and lexical characteristics on spoken word recognition were examined. The goal was to examine whether performance improved for multisyllabic stimuli compared with monosyllables, and to see whether lexical characteristics of the stimulus items influenced multisyllabic word recognition performance.

Preliminary Computational Analyses

Speech Materials

A primary goal in the development of these new perceptual tests was to select stimulus words that were likely to be within the vocabularies of children with profound hearing impairment. The vocabulary items needed to be familiar to children with limited lexicons, and yet meet certain lexical criteria. Most of the previous research concerning lexical effects on word recognition has been carried out with adult subjects, and this vocabulary was not suitable for the subjects in the present investigation. Logan's (1992) analysis of samples of child language obtained from the CHILDES (Child Language Data Exchange System) database (MacWhinney & Snow, 1985) is one of the few studies specifically concerned with lexical effects on spoken word recognition by young children. The CHILDES database contains data from published studies carried out by child language researchers (e.g., Brown, 1973) and consists of transcripts of verbal exchanges between a child or children and a caregiver or another child in the environment. Logan analyzed a large number of utterances obtained at regular intervals from several young children (1 to 5 yr of age). The children were in the early stages of language development, and therefore it seemed likely that the words in Logan's corpus would be familiar to children with limited vocabularies. Also, based on his computational analyses, the lexical properties of the words were known. Therefore, we selected our test items using data from Logan's computational analyses of the CHILDES database.

The subset of Logan's corpus containing words produced by children aged 3 to 5 yr was used to generate stimulus items for two new tests. A monosyllabic word test, the Lexical Neighborhood Test (LNT), was developed using two “easy” and two “hard” 25-item word lists. Monosyllabic test items for the LNT were selected from Logan's analyses. All multisyllabic words, proper nouns, possessives, contractions, plurals, and inflected forms of words were eliminated from the corpus. Next, the median value of these words was determined for word frequency and neighborhood density. In Logan's analysis, word frequency refers to the number of occurrences of a given word within the corpus that he analyzed, and neighborhood density refers to the number of neighbors for a target word that could be found in the corpus by adding, substituting, or deleting one phoneme from the target word (Logan, 1992). The median word frequency was four occurrences, with a range from 1 to 519 occurrences. Neighborhood density ranged from 0 to 19 neighbors, with a median of four neighbors per target word. “Easy” words were selected from those above the median for word frequency and below the median for neighborhood density, whereas “hard” words had the opposite characteristics. For example, old was classified as an “easy” word (frequency = 38 occurrences, density = 3 neighbors) and bed was classified as a “hard” word (frequency = 2 occurrences, density = 7 neighbors).
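The median-split selection rule described above can be sketched as follows. The frequency and density values for old and bed are those reported in the text; the other words and values are hypothetical, so the medians of this toy set differ from those of Logan's corpus (4 occurrences and 4 neighbors).

```python
import statistics

def classify_words(stats):
    """Median-split classification into lexically 'easy' and 'hard'
    words, following the selection rule in the text: 'easy' = frequency
    above the median and density below the median; 'hard' = the
    opposite. Words meeting neither criterion are left unclassified.
    """
    freqs = [f for _, f, _ in stats]
    densities = [d for _, _, d in stats]
    f_med = statistics.median(freqs)
    d_med = statistics.median(densities)
    easy, hard = [], []
    for word, f, d in stats:
        if f > f_med and d < d_med:
            easy.append(word)
        elif f < f_med and d > d_med:
            hard.append(word)
    return easy, hard

# hypothetical (word, frequency, density) triples
stats = [("old", 38, 3), ("bed", 2, 7), ("run", 10, 2), ("pin", 1, 9)]
print(classify_words(stats))   # → (['old', 'run'], ['bed', 'pin'])
```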

To quantify the differences between lexically controlled word lists and the stimulus items on the PB-K, it was necessary to obtain lexical statistics (e.g., word frequency and neighborhood density) for the individual PB-K words. Unfortunately, lexical statistics for the PB-K words could not be obtained from Logan's computational analyses of the CHILDES database because fewer than 31% of the PB-K test items were contained within his corpus. Thus, additional computational analyses were required based on a larger corpus, as described below.

Computational Analyses

New computational analyses were performed on the PB-K, the LNT “easy,” and the LNT “hard” lists using lexical statistics obtained from a computerized version of Webster's Pocket Dictionary (see Pisoni et al., 1985). Only three of the four PB-K word lists were analyzed, because these are the lists administered in our current protocol. Both lists of the LNT “easy” and “hard” words were included. The lexical characteristics of interest were word frequency, neighborhood density, neighborhood frequency, and lexical familiarity. Word frequency counts stored in the Webster's Pocket Dictionary database were obtained from Kucera and Francis (1967) and indexed to the number of occurrences of the target word per 1 million words of printed text. The frequency counts from Kucera and Francis (1967) are assumed to reflect the frequency of occurrence of words in the language (i.e., the frequency of occurrence for the adult lexicon). Here neighborhood density refers to the number of neighbors for a target that are found within the 20,000-word database by adding, substituting, or deleting one phoneme from the target word. Neighborhood frequency refers to the average word frequency of all the lexical neighbors of a target word. The familiarity ratings contained in this database were obtained from adult listeners with normal hearing (Nusbaum, Pisoni, & Davis, 1984). Familiarity was rated using a scale varying from 1 to 7. A rating of 1 indicates an unfamiliar word, and 7 indicates a highly familiar word. Four words from the PB-K lists were excluded from the present analysis because data concerning one or more of the lexical properties were not available. Thus, a total of 246 items were analyzed.

Table 1 presents a comparison of the average lexical characteristics for the three word lists. The mean familiarity rating was near 7.0 for each list, indicating that all test words were highly familiar to adults with normal hearing. The three word lists were compared in regard to word frequency, lexical density, and mean neighborhood frequency. First, univariate analyses were performed on these three variables to determine whether the assumption of normality was valid. Lexical density was the only one of the three variables that appeared to be normally distributed. Log-transformed values of word frequency and mean neighborhood frequency were used in the analyses because they met the normality requirement.

TABLE 1.

Mean lexical characteristics for the monosyllabic word tests.

                             LNT Easy          LNT Hard          PB-K
                             Mean (SD)         Mean (SD)         Mean (SD)
Familiarity^a                7.0 (0.2)         7.0 (0.0)         6.9 (0.2)
Word frequency^b             275.9 (400.6)     144.7 (214.8)     568.2 (1642.9)
Neighborhood density^c       10.4 (6.0)        18.9 (5.7)        13.7 (7.6)
Neighborhood frequency^d     97.5 (181.8)      416.0 (823.5)     522.9 (3014.9)
^a Familiarity was rated from 1 to 7, with 1 indicating an unfamiliar word and 7 indicating a highly familiar word (Nusbaum et al., 1984).

^b Word frequency refers to the number of occurrences per 1 million words of printed text (Kucera & Francis, 1967).

^c Neighborhood density refers to the number of lexical neighbors that could be found in the Webster's Pocket Dictionary database by deleting, adding, or substituting one phoneme of the target word (Pisoni et al., 1985).

^d Neighborhood frequency refers to the average word frequency of a target word's lexical neighbors.

An analysis of variance was performed to determine whether the three tests differed significantly in regard to the three variables of interest. Log-transformed word frequency was found to be only marginally significant among the measures (F[2, 243] = 2.90, p = 0.056). However, lexical density was significant (F[2, 243] = 19.16, p < 0.0001). Pairwise t-tests indicated that the LNT “hard” words had significantly more neighbors than either the LNT “easy” words or the PB-K words (p < 0.0001). In addition, the PB-K words were significantly higher in lexical density than the LNT “easy” words (p < 0.003). Log-transformed mean neighborhood frequency was also found to be significant (F[2, 243] = 6.73, p < 0.0014). Again, pairwise t-tests indicated significant differences among all three word lists. The average neighborhood frequency was highest for the LNT “hard” words, followed by the PB-K words, and then the LNT “easy” words.
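For readers who want to reproduce this style of comparison, the F ratio in a one-way design is computed directly from the between- and within-group sums of squares. The sketch below uses toy data and is not the analysis software used in the study; the log transform would be applied to each observation before forming the groups.

```python
def one_way_anova(*groups):
    """F statistic and degrees of freedom for a one-way ANOVA,
    computed from the between- and within-group sums of squares.
    """
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, means) for x in g)
    df_between, df_within = k - 1, n - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

# toy "density" samples for three hypothetical word lists
f, df_b, df_w = one_way_anova([1, 2, 3], [2, 3, 4], [5, 6, 7])
print(f, df_b, df_w)   # → 13.0 2 6
```

The resulting F would then be referred to the F distribution with (df_between, df_within) degrees of freedom to obtain the p values reported above.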

The results of our computational analyses revealed that there were significant differences in the lexical characteristics of the three word lists. Log-transformed word frequency differed only marginally among the three tests. Lexical density and log-transformed neighborhood frequency were significantly greater for the LNT “hard” words than for either of the remaining lists. However, the variability in word frequency and neighborhood frequency was much greater for the PB-K than for either LNT word list. This is not surprising, because the LNT items were specifically chosen to meet certain lexical criteria. According to NAM, word frequency should act as a biasing factor in favor of recognition. Thus, the extreme variability in word frequency and neighborhood frequency suggests that some PB-K words should be very difficult to identify. More specifically, these would be words with many lexical neighbors that are high in word frequency and thus compete during the lexical selection process. The lexical characteristics used in the present analyses were obtained from an adult lexicon. Although all three of the word lists contained words that were rated as highly familiar by adult listeners, only 31% of the PB-K words were actually contained within the corpus analyzed by Logan (1992), which is based on data from young children's vocabularies. This finding strongly suggests that many of the PB-K words simply may be unfamiliar to children with limited vocabulary skills, such as children who are very young or children who have a profound hearing impairment. With this structural information available, we can now turn to the behavioral tests of spoken word recognition.

Experiment I

The purpose of Experiment I was to determine whether differences in the lexical characteristics of word lists influence spoken word recognition by children with multichannel cochlear implants, and to compare spoken word recognition of lexically controlled and phonetically balanced word lists.

Method

Subjects

The subjects were pediatric Nucleus cochlear implant users who were seen at Indiana University Medical Center as part of their regularly scheduled postimplant appointments. Children were included as subjects if they demonstrated some spoken word recognition, even if only in a closed-set format. Children who did not demonstrate some evidence of word recognition were excluded. This eliminated very young children and some children who had used their device for less than 1 yr, as word recognition usually does not emerge prior to 1 yr of cochlear implant experience. Twenty-eight children who were evaluated as part of our research protocol met the criterion for inclusion in the study. Subject information is presented in Table 2.

TABLE 2.

Subject characteristics for Experiment I (N = 28).

Characteristic                    Mean (yr)   SD (yr)
Age at onset                      1.4         2.3
Length of auditory deprivation    5.9         3.0
Age at implant                    7.2         2.7
Length of implant use             3.0         1.6
Age at time of testing            10.2        2.4

Device Characteristics

Two subjects used the WSP speech processor programmed with the F0/F1/F2 strategy. This strategy encodes the fundamental frequency (F0) and the frequency and amplitude of the first two formants (F1, F2). For voiced sounds, the stimulation rate of the two electrodes representing F1 and F2 is equal to F0. For unvoiced sounds, the pulse rate varies around an average of 100 Hz. The remaining subjects used the Mini Speech Processor (MSP) programmed with the MultiPeak (MPEAK) strategy (McKay & McDermott, 1993; Skinner et al., 1991). In addition to estimating F0, F1, and F2, the MPEAK strategy estimates amplitudes in three additional frequency bands (Bands 3, 4, and 5) encompassing the frequency range from 2.0 to 6.0 kHz. During voiced signals, electrodes representing F1 and F2 and Bands 3 and 4 are stimulated at a pulse rate equal to the fundamental frequency. For unvoiced sounds, a nonperiodic pulse rate varying between 200 and 300 Hz is used to stimulate electrodes representing F2 and Bands 3, 4, and 5.

Stimulus Materials and Procedures

The three word lists used for the preliminary computational analyses were used to assess spoken word recognition. Order of test presentation (LNT “easy” word list, LNT “hard” word list, and PB-K) was counterbalanced across subjects. Half of the items on a PB-K list were administered so that the number of items would be consistent across tests. The tests were administered live voice with the subjects seated facing an examiner approximately 1.5 ft away. The method of stimulus presentation was the same as that used for all live voice tests in our research protocol (Miyamoto et al., 1994). The examiner held a mesh-covered screen in front of her face so that speechreading cues were not available to the subjects. A team of five examiners (two audiologists and three speech-language pathologists) who had extensive experience with live-voice testing evaluated the children. The stimuli were articulated clearly and precisely in a manner described by Picheny, Durlach, and Braida (1985) and presented at levels that ranged roughly from 70 to 75 dB SPL (loud conversational speech). The test items were presented one time only; missed items were not repeated. Subjects responded by repeating the word, which the examiner entered on a response sheet. Subjects who used total communication were asked to sign and say their response. Subjects who used oral communication were asked to spell or write their response if their speech was not intelligible. If the subject was unable to sign or write a response, it was transcribed phonemically by the examiner. Responses were scored as the percent of words and phonemes correctly identified.
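The word- and phoneme-level scoring can be sketched as below. This is a simplified illustration, not the examiners' scoring procedure: a word is credited only for an exact match, and phoneme credit is a naive position-by-position comparison (letters again stand in for phonemes), whereas real scoring aligns the target and response transcriptions before comparing.

```python
def score_responses(pairs):
    """Percent of words and phonemes correct over (target, response)
    pairs, where each word is a tuple of phoneme symbols. Phoneme
    credit is a simplified position-by-position match.
    """
    words_correct = 0
    phonemes_correct = 0
    phonemes_total = 0
    for target, response in pairs:
        if target == response:
            words_correct += 1
        phonemes_total += len(target)
        phonemes_correct += sum(t == r for t, r in zip(target, response))
    return (100 * words_correct / len(pairs),
            100 * phonemes_correct / phonemes_total)

pairs = [(tuple("cat"), tuple("cat")),   # word and all 3 phonemes correct
         (tuple("bed"), tuple("bad")),   # 2 of 3 phonemes correct
         (tuple("dog"), tuple("fog"))]   # 2 of 3 phonemes correct
pct_words, pct_phonemes = score_responses(pairs)
print(round(pct_words, 1), round(pct_phonemes, 1))   # → 33.3 77.8
```

The example shows why the two scores can dissociate: near-miss responses earn substantial phoneme credit while contributing nothing to the word score.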

Results

Table 3 presents the mean percent of words and phonemes correctly identified on each monosyllabic word list. Nine of the 28 subjects scored less than 20% correct on both the LNT “easy” and the PB-K word lists (range = 0% to 12% for both tests). Because their scores were at floor, indicating that they were unable to recognize words in an open-set format, these subjects were excluded from further analyses.

TABLE 3.

Mean percent correct scores for the three monosyllabic word tests (N = 28).

LNT Easy LNT Hard PB-K
Mean % words correct (SD) 29.6 (22.1) 23.4 (18.9) 13.9 (15.1)
Mean % phonemes correct (SD) 44.8 (24.9) 45.8 (23.4) 36.9 (22.6)

Figure 2 shows the percentage of words and phonemes correctly identified on the three lists for the remaining 19 subjects. Individual differences in word recognition were present. The percent of words correctly identified by the 19 subjects ranged from 20% to 72% on the LNT “easy” word list, from 12% to 72% on the LNT “hard” list, and from 4% to 54% on the PB-K word lists. Fourteen of the subjects showed decrements in performance on the LNT “hard” lists compared to the LNT “easy” lists, ranging from 4% to 24% words correct. Of the remaining five subjects, four had similar scores on both lists of the LNT (increases on the LNT “hard” list ranged from 0% to 4%) and one showed an increase of 28%. Those subjects for whom lexical effects were not evident were those with poorer performance on the LNT “easy” condition (e.g., their scores ranged from 20% to 28% words correct). Individual phoneme recognition scores ranged from 31% to 82% correct on the LNT “easy,” 36% to 87% on the LNT “hard,” and 4% to 54% on the PB-K word lists. Differences between the “easy” and “hard” word lists were smaller for phoneme scores than for word recognition scores. When compared with phoneme recognition on the LNT “easy” list, performance on the LNT “hard” list increased for seven subjects (range 2% to 21% higher), decreased for 11 subjects (range 3% to 15% lower), and remained unchanged for one subject. PB-K phoneme scores were consistently lower than those on the LNT “easy” list for all 19 subjects, with decrements ranging from 1% to 28%. PB-K phoneme scores were lower than LNT “hard” phoneme scores for 16 subjects, with decrements ranging from 2% to 27%.

Figure 2. Percentage of words and phonemes correctly produced on the three monosyllabic word lists.

A two-factor factorial randomized block design was used to analyze the performance results. Word list, score type (words versus phonemes), their interaction, and the blocking variable of subject were analyzed using percent correct as the dependent variable. Word list was found to be significant (F[2, 90] = 50.36, p < 0.0001), as was score type (F[1, 90] = 308.90, p < 0.0001). The interaction of word list and score type was also significant (F[2, 90] = 6.01, p < 0.0035). Because score type (word versus phoneme) was significant in the full analysis, separate subset analyses were carried out for the word scores and the phoneme scores. When only word scores were analyzed, word list was found to be highly significant (F[2, 36] = 31.62, p < 0.0001). Pairwise t-tests revealed that “easy” LNT words were identified most accurately, followed by “hard” LNT words, and then the PB-K words (p < 0.001). A significant effect of word list was also found for the phoneme scores (F[2, 36] = 14.84, p < 0.0001). When only phoneme scores were considered, performance was significantly poorer on the PB-K than on either LNT word list (p < 0.0001). However, phoneme identification performance did not differ between the LNT “easy” and “hard” word lists.

Discussion

The results of Experiment I revealed that pediatric cochlear implant users do use their lexical knowledge in word recognition tasks. That is, spoken word recognition performance was significantly better on the “easy” word list than on the “hard” word list of the LNT. However, performance on the two LNT word lists did not differ when the tests were scored by the percent of phonemes correctly identified. Thus, it appears that phoneme recognition scores do not reflect the perceptual processes underlying spoken word recognition performance, such as the way in which the lexical items are organized and selected in long-term memory.

Despite their hearing loss and the degraded sensory input provided via the cochlear implant, these subjects displayed sensitivity to the acoustic-phonetic similarity among the test words. The results demonstrate that even though these children have limited vocabularies, they appear to organize words into similarity neighborhoods in long-term memory and use this structural information in recognizing isolated words. This observation is further supported by the finding that word recognition performance differed significantly for the “easy” and “hard” word lists of the LNT, whereas phoneme recognition did not. If spoken words were recognized simply as isolated sequences of speech sounds, then similar phoneme scores would lead to similar word recognition scores. The present findings demonstrate that pediatric cochlear implant users recognize words in the context of other words in their lexicons (i.e., in terms of similarity neighborhoods) in much the same way as do children with normal hearing (Charles-Luce & Luce, 1990; Logan, 1992).

Word recognition was found to be best on the LNT “easy” words, followed by performance on the LNT “hard” and PB-K lists, respectively. The computational analyses carried out earlier provide a principled explanation for this pattern of results. According to NAM, word recognition is influenced by the number of phonetically similar words in the lexicon and by word frequency. Neighborhood density was significantly higher for the LNT “hard” words than for the other two lists; that is, there were more phonetically similar words with which the LNT “hard” words could be confused. Of course, this does not explain why the PB-K words were identified less accurately than the LNT “easy” words. Based on the earlier computational analyses, it appears that differences in neighborhood frequency (i.e., the average word frequency of a target word's lexical neighbors) among the three lists might have contributed to poor performance on the PB-K word lists. The average neighborhood frequency for the PB-K words was much higher than for the LNT “easy” words. NAM would explain this pattern of results in terms of the increased competition for selection from the lexical neighbors of the PB-K target words compared with the lexical neighbors of the LNT target words. Support for this account also comes from a recent study by Goldinger, Luce, and Pisoni (1989), who demonstrated that lexical competition among phonetically similar words strongly influences spoken word recognition performance.
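NAM captures this competition with a frequency-weighted decision rule: roughly, the probability of selecting the target word is its frequency relative to the combined frequencies of the target and its neighbors. The sketch below illustrates only that ratio (it omits the stimulus-driven acoustic-phonetic activation terms of the full model; the function name and example values are illustrative, not taken from the paper):

```python
def nam_word_score(target_freq, neighbor_freqs):
    """Simplified NAM-style score: the target's frequency divided by
    the summed frequencies of the target plus all of its lexical
    neighbors. Higher values mean less competition for selection."""
    return target_freq / (target_freq + sum(neighbor_freqs))
```

Under this rule a high-frequency word with few, low-frequency neighbors (lexically “easy”) receives a higher score than a low-frequency word in a dense, high-frequency neighborhood (lexically “hard”), which is the pattern observed in the word recognition data.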

A second possible reason for the better word recognition performance on the LNT lists is the distribution of speech sounds within each test. The LNT word lists are not phonetically balanced, and therefore they might contain more phonemes that are well conveyed via a cochlear implant, such as stop or nasal consonants, than the PB-K word lists do. Table 4 presents the average occurrence of consonants per 25 words by manner and voicing categories. The distribution of phonemes is similar across the three tests, however, suggesting that the phonemic content of the tests does not account for the observed variation in word recognition.

TABLE 4.

Mean number of consonant occurrences per 25 stimulus words for the monosyllabic word tests.

LNT Easy LNT Hard PB-K
Voiced stops 11.0 8.5 9.4
Unvoiced stops 11.5 15.5 13.7
Voiced sibilants 0.0 1.0 2.0
Unvoiced sibilants 5.0 2.5 7.4
Voiced fricatives 1.5 0.0 1.0
Unvoiced fricatives 6.0 4.0 5.0
Voiced affricates 2.0 0.0 0.5
Unvoiced affricates 2.0 0.0 0.8
Nasals 9.5 10.0 5.4
Liquids 10.0 6.5 10.9

A third possible reason why performance on the PB-K words was significantly poorer than on the LNT word lists concerns word familiarity. All test items were classified as highly familiar by adults with normal hearing. However, word familiarity varies with age (Walley, 1993), and the PB-K items therefore may not have been as familiar to children as they were to adults. As we noted earlier, only 31% of the words on the PB-K were contained within the CHILDES database, suggesting that the PB-K test contains a high proportion of words that are not familiar to young children or to children with profound hearing impairments.

Another important finding was that although word recognition performance was better overall on the LNT word lists than on the PB-K, there were still a number of children who received scores in a restricted range near 0% words correct on all three tests. These children differed from those who achieved scores of at least 20% on the LNT “easy” word lists in that they had a longer period of auditory deprivation prior to receiving a cochlear implant (8.6 yr versus 4.6 yr) and a shorter period of device use (1.7 yr versus 3.5 yr). This pattern of performance is consistent with earlier studies of word recognition in children with cochlear implants (Fryauf-Bertschy et al., 1992; Miyamoto et al., 1992, 1994; Osberger et al., 1991a; Waltzman et al., 1990, 1992). Because of their poor performance on monosyllabic word tests in these earlier studies, it was not possible to examine whether the lexical properties of the stimulus items influenced their word recognition performance in any systematic way.

Experiment II

Another experiment was conducted to determine whether the use of multisyllabic test items would yield higher word recognition scores than monosyllabic stimuli. We were also interested in determining whether multisyllabic word recognition is influenced by the lexical properties of the stimulus words. Monosyllabic words are particularly difficult to identify because the redundant linguistic and contextual cues typically present in multisyllabic words and in sentences are unavailable. The use of multisyllabic stimuli in spoken word identification tests should yield higher recognition scores overall, because these items are less easily confused with other words than monosyllabic stimuli. However, lexical characteristics may not influence multisyllabic word recognition because these words have fewer lexical neighbors with which they must compete for recognition. In Experiment II we compared word and phoneme recognition for both mono- and multisyllabic word lists, and examined the effects of word frequency and neighborhood density on word recognition performance.

Methods

Subjects

The subjects were 19 pediatric Nucleus cochlear implant users who were seen at Indiana University Medical Center as part of their regularly scheduled postimplant appointments. Subject information is presented in Table 5. All of the subjects used the MSP processor programmed in the MPEAK strategy. The stimulation mode was common ground for two subjects, bipolar for three, bipolar+1 for eight, and bipolar+2 for one subject. The number of active electrodes programmed into their processor strategy ranged from 15 to 21.

TABLE 5.

Subject characteristics for Experiment II (N = 19).

Mean (yr) SD (yr)
Age at onset 1.9 2.6
Length of auditory deprivation 5.7 3.4
Age at implant 7.6 3.0
Length of implant use 3.1 1.6
Age at time of testing 10.7 2.4

Stimulus Materials and Procedures

The LNT “easy” and “hard” word lists were used to assess monosyllabic word recognition. A new set of words, the Multisyllabic Lexical Neighborhood Test (MLNT), was developed to assess how listeners use information about word length and syllable structure in word identification tasks. The MLNT consists of an “easy” and a “hard” list, each containing 15 items varying from two to three syllables in length. Multisyllabic test items were selected from Logan's (1992) corpus using procedures similar to those in Experiment I, except that monosyllabic words were excluded from the analyses. That is, all words were highly familiar to young children, and were divided into “easy” and “hard” lists by word frequency and neighborhood density. Word frequency for the multisyllabic words within Logan's corpus ranged from 1 to 100 occurrences, with a median value of 2 occurrences. Neighborhood density ranged from 0 to 7 neighbors, with a median of 0. The MLNT “easy” list contained words with frequencies greater than 2 occurrences and neighborhood densities of 0; the MLNT “hard” list contained words with frequencies less than 2 occurrences and neighborhood densities greater than 0.
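The MLNT selection criteria reduce to a simple rule over a word's corpus frequency and neighborhood density. A sketch (hypothetical function name; thresholds taken from the text, with words meeting neither criterion excluded from both lists):

```python
def classify_mlnt(frequency, density):
    """Assign a multisyllabic word to the MLNT 'easy' or 'hard' list
    using the frequency and neighborhood-density criteria described
    above; returns None for words that fit neither list (e.g., a
    frequency of exactly 2, or a frequent word with neighbors)."""
    if frequency > 2 and density == 0:
        return "easy"
    if frequency < 2 and density > 0:
        return "hard"
    return None
```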

Order of test presentation and list difficulty was counterbalanced across subjects. All other procedures were the same as those used in Experiment I.

Results

As in Experiment I, data from subjects who did not score at least 20% words correct on the LNT “easy” word lists were excluded from the data analyses. Five subjects were eliminated using this criterion. Figure 3 presents the percent of words and phonemes correctly identified on the “easy” and “hard” lists according to syllable structure. On average, multisyllabic word recognition performance was higher than monosyllabic word recognition for both “easy” and “hard” word lists. However, the differences in performance between recognition scores for the multi- and monosyllabic stimulus items were much smaller for phoneme recognition than for word recognition.

Figure 3. Percent of words and phonemes correctly identified on the monosyllabic and multisyllabic word lists.

Table 6 presents the range of individual word and phoneme recognition scores. The number of subjects performing more poorly on the “hard” than the “easy” word lists was 12 and 13, respectively, for the LNT and MLNT. The decrements ranged from 4% to 32% for the LNT, and from 0% to 67% for the MLNT. Eleven subjects had poorer phoneme recognition on the LNT “hard” list than on the “easy” list, with decrements ranging from 3% to 15%. Only six subjects had poorer phoneme recognition on the MLNT “hard” list compared with the “easy” list, with decrements ranging from 2% to 46%. The remaining nine subjects had better phoneme recognition on the MLNT “hard” list than the MLNT “easy” list, with increases ranging from 1% to 12%.

TABLE 6.

Range of individual scores for the LNT and MLNT measures.

       Word Recognition      Phoneme Recognition
Test   Easy      Hard        Easy      Hard
LNT    20–72%    12–72%      39–82%    39–87%
MLNT   20–93%    13–87%      38–92%    25–97%

A three-factor factorial analysis using a randomized block design was used to assess the performance results. Lexical difficulty (“easy” versus “hard”), score type (word versus phoneme), syllable structure (mono- versus multisyllabic word lists), all interactions, and the blocking variable of subject were analyzed using percent correct as the dependent variable. Lexical difficulty was found to be significant (F[1,91] = 19.57, p < 0.0001), as were syllable structure (F[1,91] = 16.98, p < 0.0001) and score type (F[1,91] = 70.89, p < 0.0001). The lexical difficulty by score type interaction was also significant (F[1,91] = 8.56, p < 0.004), as was the syllable structure by score type interaction (F[1,91] = 4.36, p < 0.04).

Because score type was significant in the full analysis, separate subset analyses were carried out for the word and phoneme scores. When only word scores were considered, lexical difficulty was found to be highly significant (F[1,39] = 20.03, p < 0.0001); that is, performance on the “easy” words was significantly better than on the “hard” words. The effect of syllable structure was also significant (F[1,39] = 14.29, p < 0.0005): multisyllabic words (MLNT) were recognized with significantly greater accuracy than monosyllabic words (LNT). The interaction between lexical difficulty and syllable structure was not significant. When only phoneme scores were analyzed, none of the effects was significant. Thus, although the recognition of words decreased with increasing list difficulty, phoneme recognition remained relatively stable. Nonetheless, phoneme recognition was strongly correlated with word recognition performance (r = 0.94 and 0.91 for the “easy” and “hard” words, respectively).
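The correlations reported here are presumably Pearson product-moment coefficients computed over subjects' paired word and phoneme scores. For illustration, a generic standard-library sketch (not the authors' analysis code):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two paired score
    lists (e.g., per-subject phoneme and word percent-correct)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```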

Discussion

The results of Experiment II demonstrated that pediatric cochlear implant subjects use word length cues to assist them in word recognition. These subjects were significantly better at recognizing multisyllabic than monosyllabic words, probably because multisyllabic words have fewer lexical neighbors than monosyllabic words, thus minimizing competition in lexical selection.

For both monosyllabic and multisyllabic words, word recognition was significantly better on the “easy” than the “hard” lists. This pattern replicates the findings obtained in Experiment I, and also demonstrates that lexical properties influence multisyllabic word recognition. It seems likely that word frequency is the important factor contributing to lexical effects on multisyllabic word recognition, because the variability in neighborhood density among multisyllabic words is small.

On both the LNT and MLNT, word recognition was significantly poorer than phoneme recognition. Furthermore, word recognition decreased with increasing list difficulty, but phoneme recognition did not. The strong correlation between word and phoneme recognition performance suggests that phoneme recognition may reflect an important first stage of speech recognition and can also provide information about the speech cues that are conveyed via a cochlear implant (Tyler, Reference Note 2). However, the present results demonstrate that lexical factors also influence word recognition performance.

General Discussion

The results of this investigation demonstrate that pediatric cochlear implant users’ word recognition performance is influenced by the lexical properties of the stimulus words used on the perceptual tests. That is, words that were high in frequency and low in neighborhood density were identified with greater accuracy than words with the opposite characteristics. Improved word recognition on the lexically “easy” word lists was observed for both monosyllabic and multisyllabic stimulus words. Thus, these subjects appear to organize familiar words into similarity neighborhoods in long-term memory, and use this structural information in word recognition in a manner similar to listeners with normal hearing (Charles-Luce & Luce, 1990; Cluff & Luce, 1990; Luce, 1986; Luce et al., 1990).

In both Experiments I and II, lexical effects were observed for word recognition but not for phoneme recognition. That is, phoneme perception was similar on the “easy” and “hard” lists, whereas word recognition was affected by the lexical properties of the test items. This finding demonstrates that children with CIs perceive words in the context of other phonemically similar words in their lexicon, rather than as merely a sequence of unrelated sounds.

Multisyllabic word recognition was significantly better than monosyllabic word recognition, suggesting that children with CIs used length cues as well as spectral information in recognizing words (Charles-Luce, Luce, & Cluff, 1990; Cluff & Luce, 1990). Again, these findings replicate previous research with listeners with normal hearing (Cluff & Luce, 1990). As a group, the children who performed very poorly on the monosyllabic word recognition tests (i.e., less than 20% words correct on the LNT “easy” lists) were those who had longer periods of auditory deprivation prior to receiving a cochlear implant and who had used their devices for a much shorter period. It may be that word recognition skills will emerge in this population with increased device experience. The present results indicate that multisyllabic speech perception tests are useful in assessing the underlying perceptual processes in children with limited auditory perception skills.

The results of Experiment I revealed that some words were particularly easy for the subjects to identify (e.g., the LNT “easy” lists), whereas other words were much more difficult (e.g., the PB-K word lists). The computational analyses of the stimulus items on these tests provide a principled theoretical basis for accounting for these word recognition differences. In addition, the vocabulary items used on the PB-K may not be familiar to children with profound hearing losses, because only 31% of the PB-K words were found in the CHILDES corpus of young children's productions analyzed by Logan (1992). The vocabulary on phonetically balanced tests is constrained by the balancing requirement, and thus unfamiliar words may be included that yield lower estimates of a child's speech perception capabilities.

The observed differences in word recognition performance on these three tests can be accounted for by NAM, which assumes that both word frequency and lexical density influence word recognition. The computational analyses revealed that the PB-K lists had intermediate values for lexical density, but that the average neighborhood frequency of these lexical items was much higher than that of the other tests. Thus, the poor word recognition displayed on the PB-K may be due to the presence of unfamiliar words as well as to lexical competition from similar-sounding words in the lexicon.

The new Lexical Neighborhood tests (LNT and MLNT) developed for this investigation appear to be very useful for measuring word recognition in children with multichannel cochlear implants who exhibit varying speech perception abilities. The new tests appear to be more sensitive to changes in word recognition that occur over time because they yield a wider range of scores within and across children. In addition, these tests allow for an examination of the perceptual processes underlying spoken word recognition, and they provide a framework for accounting for differences between tests and stimulus words. More importantly, these new tests may be used to gain further knowledge about the organization of sound patterns of words in young children's lexicons and the processes used to access these patterns in traditional speech identification tests.

Acknowledgments

We gratefully acknowledge the assistance of Susan Todd, M.A., Amy Robbins, M.S., and Allyson Riley, M.S., in data collection, Terri Kerr in data analysis, and Linette Caldwell for clerical assistance. We also thank John Logan, Ph.D., for making available his database and computational analyses which were used to generate the stimulus materials. Finally, we wish to thank Susan Jerger, Ph.D., Theodore Bell, Ph.D., and an anonymous reviewer for their comments on an earlier version of this work. This research was supported by NIH-NIDCD (Grants DC-00064 and DC-00111-16).

Footnotes

Mary Joe Osberger is currently affiliated with Advanced Bionics Corporation, Sylmar, CA 91342.

1 Haskins, H. (1949). A phonetically balanced test of speech discrimination for children. Unpublished master's thesis, Northwestern University, Evanston, IL.

2 Tyler, R. S. Personal communication, January 27, 1994.

Contributor Information

Karen Iler Kirk, Department of Otolaryngology-Head and Neck Surgery, Indiana University School of Medicine, Indianapolis, Indiana.

David B. Pisoni, Department of Otolaryngology-Head and Neck Surgery, Indiana University School of Medicine, Indianapolis, Indiana Speech Research Laboratory, Department of Psychology, Indiana University, Bloomington..

Mary Joe Osberger, Department of Otolaryngology-Head and Neck Surgery, Indiana University School of Medicine, Indianapolis, Indiana.

References

  1. Andrews S. Frequency and neighborhood effects on lexical access: Activation or search? Journal of Experimental Psychology: Learning, Memory, & Cognition. 1989;15:802–814. [Google Scholar]
  2. Brown R. A first language: The early stages. Harvard University Press; Cambridge, MA: 1973. [Google Scholar]
  3. Carhart R. Problems in the measurement of speech discrimination. Archives of Otolaryngology. 1965;82:253–260. doi: 10.1001/archotol.1965.00760010255007. [DOI] [PubMed] [Google Scholar]
  4. Charles-Luce J, Luce PA. Some structural properties of words in young children's lexicons. Journal of Child Language. 1990;17:205–215. doi: 10.1017/s0305000900013180. [DOI] [PubMed] [Google Scholar]
  5. Charles-Luce J, Luce PA, Cluff MS. Retroactive influence of syllable neighborhoods. In: Altmann GTM, editor. Cognitive models of speech processing: Psycholinguistic and computational perspectives. MIT Press; Cambridge, MA: 1990. pp. 173–184. [Google Scholar]
  6. Cluff MS, Luce PA. Similarity neighborhoods of spoken two-syllable words: Retroactive effects on multiple activation. Journal of Experimental Psychology: Human Perception and Performance. 1990;16:551–563. doi: 10.1037//0096-1523.16.3.551. [DOI] [PubMed] [Google Scholar]
  7. Dale DMC. Language development in deaf and partially hearing children. Charles C Thomas; Springfield, IL: 1974. [Google Scholar]
  8. Elliot LL, Clifton LAB, Servi DG. Word frequency effects for a closed-set identification task. Audiology. 1983;22:229–240. doi: 10.3109/00206098309072787. [DOI] [PubMed] [Google Scholar]
  9. Fryauf-Bertschy H, Tyler RS, Kelsay DM, Gantz BJ. Performance over time of congenitally deaf and postlingually deafened children using a multichannel cochlear implant. Journal of Speech and Hearing Research. 1992;35:892–902. doi: 10.1044/jshr.3504.913. [DOI] [PubMed] [Google Scholar]
  10. Goldinger SD, Luce PA, Pisoni DB. Priming lexical neighbors of spoken words: Effects of competition and inhibition. Journal of Memory and Language. 1989;28:501–518. doi: 10.1016/0749-596x(89)90009-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Greenberg JH, Jenkins JJ. Studies in the psychological correlates of the sound system of American English. Word. 1964;20:157–177. [Google Scholar]
  12. Hirsh IJ, Davis H, Silverman SR, Reynolds EG, Eldert E, Benson RW. Development of materials for speech audiometry. Journal of Speech and Hearing Disorders. 1952;17:321–337. doi: 10.1044/jshd.1703.321. [DOI] [PubMed] [Google Scholar]
  13. Hodgson W. Testing infants and young children. In: Katz J, editor. Handbook of clinical audiology. Williams & Wilkins; Baltimore: 1985. pp. 642–663. [Google Scholar]
  14. Hood JD, Poole JP. Influence of the speaker and other factors affecting speech intelligibility. Audiology. 1980;19:434–455. doi: 10.3109/00206098009070077. [DOI] [PubMed] [Google Scholar]
  15. Hudgins CV, Hawkins JE, Karlin JE, Stevens SS. The development of recorded auditory tests for measuring hearing loss for speech. Laryngoscope. 1947;57:57–89. [PubMed] [Google Scholar]
  16. Jerger S. Speech audiometry. In: Jerger J, editor. Pediatric audiology. College Hill Press; San Diego: 1984. pp. 71–94. [Google Scholar]
  17. Klatt DH. Speech perception: A model of acoustic-phonetic analysis and lexical access. In: Cole RA, editor. Perception and production of fluent speech. LEA; Hillsdale, NJ: 1980. [Google Scholar]
  18. Kucera H, Francis WN. Computational analysis of present-day American English. Brown University Press; Providence, RI: 1967. [Google Scholar]
  19. Lach RD, Ling D, Ling AH. Early speech development in deaf infants. American Annals of the Deaf. 1970;115:522–526. [PubMed] [Google Scholar]
  20. Landauer TK, Streeter LA. Structural differences between common and rare words: Failure of equivalence assumptions for theories of word recognition. Journal of Verbal Learning and Verbal Behavior. 1973;12:119–131. [Google Scholar]
  21. Lively SE, Pisoni DB, Goldinger SD. Spoken word recognition: Research and theory. In: Gernsbacher M, editor. Handbook of psycholinguistics. Academic Press; New York: 1994. pp. 265–301. [Google Scholar]
  22. Logan JS. A computational analysis of young children's lexicons (Research on Spoken Language Processing Technical Report No. 8) Indiana University; Bloomington, IN: 1992. [Google Scholar]
  23. Luce P. Neighborhoods of words in the mental lexicon (Research on Speech Perception Technical Report No. 6). Speech Research Laboratory, Indiana University; Bloomington, IN: 1986. [Google Scholar]
  24. Luce PA, Pisoni DB, Goldinger SD. Similarity neighborhoods of spoken words. In: Altmann GTM, editor. Cognitive models of speech processing: Psycholinguistic and computational perspectives. MIT Press; Cambridge, MA: 1990. [Google Scholar]
  25. MacWhinney B, Snow C. The child language data exchange system. Journal of Child Language. 1985;12:271–296. doi: 10.1017/s0305000900006449. [DOI] [PubMed] [Google Scholar]
  26. Marslen-Wilson WD. Functional parallelism in spoken word-recognition. Cognition. 1987;25:71–102. doi: 10.1016/0010-0277(87)90005-9. [DOI] [PubMed] [Google Scholar]
  27. McKay CM, McDermott H. Perceptual performance of subjects with cochlear implants using the spectral maxima processor (SMSP) and the Mini Speech Processor (MSP). Ear and Hearing. 1993;14:350–367. doi: 10.1097/00003446-199310000-00006. [DOI] [PubMed] [Google Scholar]
  28. Miyamoto RT, Osberger MJ, Robbins AM, Myres WA, Kessler K. Prelingually deafened children's performance with the Nucleus multichannel cochlear implant. American Journal of Otology. 1993;14:437–445. doi: 10.1097/00129492-199309000-00004. [DOI] [PubMed] [Google Scholar]
  29. Miyamoto RT, Osberger MJ, Robbins AM, Myres WA, Kessler K, Pope ML. Longitudinal evaluation of communication skills of children with single- or multichannel cochlear implants. American Journal of Otology. 1992;13:215–222. [PubMed] [Google Scholar]
  30. Miyamoto RT, Osberger MJ, Todd SL, Robbins AM, Stroer BS, Zimmerman-Phillips S, Carney AE. Variables affecting implant performance in children. Laryngoscope. 1994;104:1120–1124. doi: 10.1288/00005537-199409000-00012. [DOI] [PubMed] [Google Scholar]
  31. Nusbaum HC, Pisoni DB, Davis CK. Sizing up the Hoosier mental lexicon: Measuring the familiarity of 20,000 words (Research on Speech Perception Progress Report No. 10). Indiana University; Bloomington, IN: 1984. [Google Scholar]
  32. Osberger MJ, Miyamoto RT, Zimmerman-Phillips S, Kemick JL, Stroer BS, Firszt JB, Novak MA. Independent evaluation of the speech perception abilities of children with the Nucleus 22-channel cochlear implant system. Ear and Hearing. 1991a;12(Suppl.):66s–80s. doi: 10.1097/00003446-199108001-00009. [DOI] [PubMed] [Google Scholar]
  33. Osberger MJ, Todd SL, Berry SW, Robbins AM, Miyamoto RT. Effect of age at onset of deafness on children's speech perception abilities with a cochlear implant. Annals of Otology, Rhinology, and Laryngology. 1991b;100:883–888. doi: 10.1177/000348949110001104. [DOI] [PubMed] [Google Scholar]
  34. Picheny MA, Durlach NI, Braida LD. Speaking clearly for the hard of hearing I: Intelligibility differences between clear and conversational speech. Journal of Speech, Hearing and Research. 1985;28:96–103. doi: 10.1044/jshr.2801.96. [DOI] [PubMed] [Google Scholar]
  35. Pisoni DB, Luce PA. Speech perception: Research, theory, and the principal issues. In: Schwab EC, Nusbaum HC, editors. Pattern Recognition by humans and machines: Speech perception. Vol. 1. Academic Press; New York: 1986. pp. 1–50. [Google Scholar]
  36. Pisoni DB, Nusbaum HC, Luce PA, Slowiaczek LM. Speech perception, word recognition and the structure of the lexicon. Speech Communication. 1985;4:75–95. doi: 10.1016/0167-6393(85)90037-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  37. Quigley SP, Paul PV. Language and deafness. College-Hill Press; San Diego: 1984. [Google Scholar]
  38. Skinner MW, Holden LK, Holden TA, Dowell RC, Seligman PM, Brimacombe JA, Beiter AL. Performance of postlinguistically deaf adults with the Wearable Speech Processor (WSP III) and Mini Speech Processor (MSP) of the Nucleus multielectrode cochlear implant. Ear and Hearing. 1991;12:3–22. doi: 10.1097/00003446-199102000-00002. [DOI] [PubMed] [Google Scholar]
  39. Smith K, Hodgson W. The effects of systematic reinforcement on the speech discrimination responses of normal and hearing-impaired children. Journal of Auditory Research. 1970;10:110–117. [Google Scholar]
  40. Staller SJ, Beiter AL, Brimacombe JA, Mecklenburg DJ, Arndt P. Pediatric performance with the Nucleus 22-channel cochlear implant system. American Journal of Otology. 1991a;12(Suppl.):126–136. [PubMed] [Google Scholar]
  41. Staller SJ, Dowell RC, Beiter AL, Brimacombe JA. Perceptual abilities of children with the Nucleus-22 channel cochlear implant. Ear and Hearing. 1991b;12(Suppl.):34S–47S. doi: 10.1097/00003446-199108001-00006. [DOI] [PubMed] [Google Scholar]
  42. Tobias JV. On phonemic analysis of speech discrimination tests. Journal of Speech and Hearing Research. 1964;7:98–100. doi: 10.1044/jshr.0701.98. [DOI] [PubMed] [Google Scholar]
  43. Treisman M. Space or lexicon? The word frequency effect and the error response frequency effect. Journal of Verbal Learning and Verbal Behavior. 1978a;17:37–59. [Google Scholar]
  44. Treisman M. A theory of the identification of complex stimuli with an application to word recognition. Psychological Review. 1978b;85:525–570. [Google Scholar]
  45. Tyler RS. Speech perception by children. In: Tyler RS, editor. Cochlear implants. Singular Publishing Group, Inc.; San Diego: 1993. pp. 191–256. [Google Scholar]
  46. Walley A. The role of vocabulary development in children's spoken word recognition and segmentation ability. Developmental Review. 1993;13:286–350. [Google Scholar]
  47. Waltzman SB, Cohen NL, Shapiro WH. Use of multichannel cochlear implant in the congenitally and prelingually deaf population. Laryngoscope. 1992;102:395–399. doi: 10.1288/00005537-199204000-00005. [DOI] [PubMed] [Google Scholar]
  48. Waltzman SB, Cohen NL, Spivak L, Ying E, Brackett D, Shapiro W, Hoffman R. Improvement in speech perception and production abilities in children using a multichannel cochlear implant. Laryngoscope. 1990;100:240–243. doi: 10.1288/00005537-199003000-00006. [DOI] [PubMed] [Google Scholar]
