Journal of Speech, Language, and Hearing Research (JSLHR)
2022 Oct 4;65(10):3934–3950. doi: 10.1044/2022_JSLHR-20-00749

Can Closed-Set Word Recognition Differentially Assess Vowel and Consonant Perception for School-Age Children With and Without Hearing Loss?

Emily Buss, Jenna Felder, Margaret K. Miller, Lori J. Leibold, Lauren Calandruccio
PMCID: PMC9927623  PMID: 36194777

Abstract

Purpose:

Vowels and consonants play different roles in language acquisition and speech recognition, yet standard clinical tests do not assess vowel and consonant perception separately. As a result, opportunities for targeted intervention may be lost. This study evaluated closed-set word recognition tests designed to rely predominantly on either vowel or consonant perception and compared results with sentence recognition scores.

Method:

Participants were children (5–17 years of age) and adults (18–38 years of age) with normal hearing and children with sensorineural hearing loss (7–17 years of age). Speech reception thresholds (SRTs) were measured in speech-shaped noise. Children with hearing loss were tested with their hearing aids. Word recognition was evaluated using a three-alternative forced-choice procedure, with a picture-pointing response; monosyllabic target words varied with respect to either consonant or vowel content. Sentence recognition was evaluated for low- and high-probability sentences. In a subset of conditions, stimuli were low-pass filtered to simulate a steeply sloping hearing loss in participants with normal hearing.

Results:

Children's SRTs improved with increasing age for words and sentences. Low-pass filtering had a larger effect for consonant-variable words than vowel-variable words for both children and adults with normal hearing, consistent with the greater high-frequency content of consonants. Children with hearing loss tested with hearing aids tended to perform more poorly than age-matched children with normal hearing, particularly for sentence recognition, but consonant- and vowel-variable word recognition did not appear to be differentially affected by the amount of high- and low-frequency hearing loss.

Conclusions:

Closed-set recognition of consonant- and vowel-variable words appeared to differentially evaluate vowel and consonant perception but did not vary by configuration of hearing loss in this group of pediatric hearing aid users. Word scores obtained in this manner do not fully characterize the auditory abilities necessary for open-set sentence recognition, but they do provide a general estimate.


Vowels and consonants play different roles in spoken language perception and acquisition. Whereas consonants are thought to play a dominant role in lexical access and new word learning, vowels are thought to convey prosodic and syntactic information, as well as cues to talker identity (Nazzi & Cutler, 2019; Nespor et al., 2002). Most research on this topic has examined these differences in adults, but there is evidence of similar effects in young children (reviewed by Nazzi & Cutler, 2019). Although both vowels and consonants contribute to speech recognition, some data indicate that consonants are more important than vowels for recognizing words in isolation, whereas vowels play a larger role in sentence recognition (e.g., Fogerty et al., 2012). Differential contributions of vowels and consonants under different stimulus conditions could be related to cues for word segmentation, semantic context, or greater intensity and duration of vowels relative to consonants. For children with hearing loss, differential audibility of cues characterizing vowels and consonants could have marked effects on speech recognition, speech production, and language development (Ambrose et al., 2014; Moeller et al., 2007; Stelmachowicz et al., 2004). However, most existing materials for clinical assessment of speech perception are not designed to evaluate these effects. Given the fundamentally different roles of vowels and consonants in spoken language perception and acquisition, characterizing vowel and consonant perception separately could be clinically informative. This could be particularly beneficial for the assessment of functional hearing in children, for whom access to high-quality auditory input supports speech and language learning (Tomblin et al., 2015).

Vowels and consonants vary both in their acoustic features and in their linguistic function (reviewed by Fry, 1979; Ladefoged, 2001). In running English speech, vowels typically occur in the middle of syllables and are produced without obstruction of outgoing air, shaped by the configuration of the vocal tract. In contrast, consonants occur more often at the beginning and end of syllables and are generated via obstruction of outgoing air. Acoustically, vowels tend to be longer in duration and more intense than consonants, although consonants contain more high-frequency energy (Turner, 1993). These trends notwithstanding, there is considerable overlap in the acoustic features of vowels and consonants.

From the perspective of speech perception, there is compelling evidence that vowels and consonants are processed in fundamentally different ways that cannot be explained entirely by distinct acoustic features (Caramazza et al., 2000; Toro et al., 2008). Whereas consonants tend to be perceived categorically, vowels are perceived on a continuum (Pisoni, 1973; van Ooijen, 1996). Listeners use cues carried by vowels to extract information related to prosody, rhythmicity of speech, and speaker identity (Kolinsky et al., 2009; Owren & Cardillo, 2006; Port, 2003). In contrast, the relative consistency of consonant production across talkers is thought to support early lexical processing (Nespor et al., 2002; Toro et al., 2005) and lexical acquisition in preschool-age children and adults (Escudero et al., 2016; Havy et al., 2014). Consonants are also more effective than vowels for marking syllable boundaries (Mehler et al., 2006; Toro et al., 2005) and, therefore, could be particularly important for word segmentation and word learning.

Based on data from adults, the relative contributions of vowels and consonants to speech recognition appear to depend on the task. This is clearly demonstrated using a paradigm where epochs of speech are replaced with noise or silence. For words presented in isolation, replacing vowels with noise and preserving consonants tends to result in better recognition than replacing consonants with noise and preserving vowels (Fogerty et al., 2012; Owren & Cardillo, 2006). The opposite result is observed for sentence recognition (Cole et al., 1996; Fogerty & Kewley-Port, 2009; Kewley-Port et al., 2007), where preserving vowels results in better recognition than preserving consonants. In sentence context, vowels provide suprasegmental information about stress patterns and syllable duration (Fogerty & Humes, 2012), and semantic context reduces the amount of information required for lexical access. Vowels also appear to be more resistant to masking by speech-shaped noise than consonants, a result observed for both forward masking and simultaneous masking (Fogerty et al., 2017). This result could be due to the greater intensity and longer duration of vowels compared with consonants. One possible explanation for the differential reliance on consonants in word recognition and vowels in sentence recognition is related to the phonetic information required for speech recognition in these two tasks; more phonetic information is required to recognize words in isolation than semantically meaningful sentences.

Given the different roles of vowels and consonants in language acquisition and speech recognition, evaluating the perception of vowels and consonants separately could be clinically informative for children with hearing loss. Data from children indicate that vowel perception is more resilient in the face of hearing loss than consonant perception (Boothroyd, 1984; Eisenberg et al., 2007). Phoneme recognition is related to pure-tone thresholds, but there are marked individual differences in that association (Boothroyd, 1984; Phatak et al., 2009), which could be related to peripheral effects (e.g., resolution), central effects (e.g., phonemic awareness), or a combination of factors. For example, reducing the fidelity of spectral and temporal cues in adults can have differential effects on vowel and consonant recognition (Boothroyd et al., 1996; Drullman et al., 1994; ter Keurs et al., 1992). This suggests that loss of resolution as a consequence of hearing loss could likewise have differential effects on vowel and consonant perception, although it is challenging to differentiate effects of reduced resolution from other factors associated with hearing loss (Dubno & Dirks, 1990). Furthermore, a history of degraded auditory input can result in delayed phonemic awareness, which can further limit performance (Carroll & Breadmore, 2018). Identification of deficits related to vowel or consonant recognition in children with hearing loss could indicate a need for targeted audiologic rehabilitation and support for development of abilities associated with these phoneme categories (Trezek & Malmgren, 2005; Werfel & Schuele, 2014).

Phoneme recognition in young school-age children has often been studied with one of two methods (reviewed by Eisenberg et al., 2007): (a) phoneme-level scoring of verbal responses, with stimuli that are either words (Kirk et al., 1995; McCreery et al., 2010) or nonwords (Danhauer et al., 1986; Johnson, 2000; Neuman & Hochberg, 1983; Nishi et al., 2010), or (b) constructing a small set of consonant–vowel words that differ with respect to one phoneme (e.g., “knee,” “me,” and “tea”) and asking children to select their response from this closed set using a picture-pointing response (Leibold & Buss, 2013; Tyler et al., 1991; Vickers et al., 2018). There are advantages and disadvantages to each method. On the one hand, scoring verbal responses can be challenging with young children and children with hearing loss, particularly in light of the higher prevalence of speech production errors among children with hearing loss compared to those with normal hearing (Eisenberg, 2007; St John et al., 2020). On the other hand, selecting a response from a fixed set of response alternatives might not provide results that are representative of listening strategies used in less contrived, natural listening environments (Clopper et al., 2006).

There is some evidence that closed-set word recognition may provide insight into vowel and consonant recognition (Boothroyd, 1984; Buss et al., 2016; Talarico et al., 2007). For example, Buss et al. (2016) measured word recognition in a four-alternative forced-choice (4AFC) task using words from the Word Intelligibility by Picture Identification (WIPI) test (Ross & Lerman, 1970). In one condition, there were 25 sets of phonetically similar words (e.g., “arm,” “car,” “barn,” and “star”); for 24 of those sets, the words shared a common vowel and differed with respect to consonants. In another condition, those same 100 words were randomly assigned to four-word sets, such that words within a set were phonetically dissimilar (e.g., “arm,” “meat,” “spring,” and “black”), notably with respect to vowel content. Masked speech reception thresholds (SRTs) were estimated for children (5–13 years of age) and adults (18–38 years of age) with normal hearing. Overall, children had poorer (higher) SRTs than adults, but the comparison of interest here was the difference in SRTs for the phonetically similar versus dissimilar word sets. When tested in a speech-shaped noise masker, SRTs for both age groups were approximately 5 dB higher for the phonetically similar word sets than for the dissimilar sets. This result was interpreted as indicating that both children and adults used the context of the four alternatives to help them identify the target, consistent with previous data for adults (e.g., Pollack et al., 1959).

One goal of this study was to test closed-set word recognition for word sets that differ with respect to either consonants (e.g., “chick,” “stick,” and “fish”) or vowels (e.g., “nut,” “night,” and “net”). This general strategy for evaluating vowel and consonant perception has been used previously to test older children and adults with response alternatives represented orthographically (e.g., Minimal Auditory Capabilities, Auditec). For example, Boothroyd (1984) evaluated word recognition in 11- to 19-year-olds using an orthographic 4AFC test and 10 sets of words that differed with respect to either vowel or consonant content. This study was designed to demonstrate feasibility of a closed-set forced-choice word recognition task with a picture-pointing response that is appropriate for evaluating vowel and consonant recognition in young school-age children, who may not be sufficiently literate to use an orthographic response set and who may not provide clear spoken responses.

Four experiments were conducted to evaluate speech recognition in speech-shaped noise for school-age children with and without hearing loss and for adults with normal hearing. Forced-choice word recognition in noise was tested in two conditions: one in which vowels varied across the response set and one in which consonants varied. Performance was also measured for noise-masked sentence recognition, with targets that had either low or high semantic context, described as low and high probability, respectively. Hearing loss was expected to affect these tasks differently, depending on whether recognition relied more heavily on consonant or vowel perception and on the configuration of the loss. This was evaluated in two ways. Participants with normal hearing were tested with low-pass (LP) filtered stimuli, simulating a steeply sloping hearing loss. Larger effects of the LP filtering were expected for tasks reliant on consonant information than for those reliant on vowel information, due to the high-frequency cues required for consonant recognition (Kasturi et al., 2002). The prediction was for greater detrimental effects of filtering for the consonant-variable than the vowel-variable word recognition task. If sentence context is responsible for adults' greater reliance on vowel content to recognize sentences as compared to words in isolation, then there should be a smaller detrimental effect of LP filtering for the high-probability than the low-probability sentence recognition task. A group of children with sensorineural hearing loss was also tested on the four speech materials. High-frequency hearing loss was predicted to have a larger effect on consonant-variable word recognition than vowel-variable word recognition, and low-frequency hearing loss was predicted to have a larger effect on vowel-variable word recognition than consonant-variable word recognition. Finally, an association between individual differences for word and sentence recognition tasks was also expected.

General Method

The four experiments used similar stimuli and test procedures. Participants were recruited in three cohorts: school-age children and adults with normal hearing and school-age children with bilateral sensorineural hearing loss. All participants with normal hearing had pure-tone thresholds of ≤ 20 dB HL at octave frequencies from 250 to 8000 Hz (American National Standards Institute, 2018). Children with hearing loss were consistent hearing aid users with a range of hearing loss configurations and severity, described in more detail below. All participants were monolingual speakers of American English, and children were neurotypical by parent report.

Target Stimuli for Vowel-Variable and Consonant-Variable Word Recognition

Stimuli were monosyllabic words that are familiar to young children and can be represented with a picture. Each corpus was organized into 25 sets of three words. In one corpus, words shared the initial and final consonants, but the vowel varied across words (vowel-variable). In the other corpus, words shared the same central vowel, but the initial consonant, the final consonant, or both varied (consonant-variable). Both corpora were balanced for lexical frequency and imageability, based on estimates obtained using N-Watch (Davis, 2005). The consonant-variable corpus was composed of nouns, but the vowel-variable list included nouns, verbs, and adjectives, due to the smaller pool of appropriate vowel-variable words. All words appear in a child corpus of American English spoken by kindergarten and first-grade children (Storkel & Hoover, 2010).

Consonant-variable words were selected from the WIPI test. The WIPI test includes 25 sets of six words each, four of which are phonetically similar; in all but one case, the four phonetically similar alternatives share a vowel. One of the four phonetically similar words in each set was identified for removal. In 15 cases, the words to be removed were those that 5-year-olds struggle to spontaneously identify based on illustrations used in the WIPI test (Dengerink & Bean, 1988). In the remaining 10 cases, words were removed due to cultural changes in language familiarity, or they were randomly selected for removal. Consonant-variable words were phonetically balanced for place, manner, and voicing of the consonants.

Vowel-variable words were constructed for this study based on children's literature, dictionaries, the Dolch word lists (Dolch, 1948), and lexical databases, including Storkel (2013). For vowel-variable words, words in each set differed with respect to the central vowel but shared initial and final consonants. Vowel-variable words were phonetically balanced for vowel height, backness, and stress. There was some overlap between consonant-variable and vowel-variable words; 22 words appeared in both corpora.

An artist drew an image for each monosyllabic word. The instructions provided to the artist specified that each picture should unambiguously differentiate each word in the associated three-word set when viewed by a young child. Illustrations from the WIPI test were provided as examples. To confirm that the new illustrations were recognizable by young children, a group of 20 children (ages 3.7–5.4 years, M = 4.6 years) was recruited from a local child care center in Chapel Hill, North Carolina. These children had normal vision, speech, and hearing by parent report. All images were printed on individual flash cards. One tester asked the child “What is this?” or “What is going on here?” depending on whether the word was a noun, a verb, or an adjective. If the child did not produce the intended word, they were prompted to produce additional labels. A second examiner recorded the child's responses. A response was only scored correct if the child spontaneously produced the intended word, regardless of whether this was the first response provided; synonyms were not scored as correct. Mean correct identification was 89% for the consonant-variable words and 77% for the vowel-variable words. Poorer spontaneous identification of vowel-variable words was expected, as some of the vowel-variable words are more abstract than the nouns used in the consonant-variable corpus (e.g., “big” or “mean”). Several illustrations were replaced based on the error patterns in children's responses, to improve visual recognition.

Target words were recorded in a double-walled sound-isolated room by a 23-year-old female native speaker of English with a mainstream American English accent. Recordings were made using a unidirectional condenser microphone (AKG C1000S) at a sampling rate of 44.1 kHz and 16-bit resolution. Twenty-five additional children (ages 3.3–5.4 years, M = 4.4 years) were recruited to evaluate these recordings and the revised illustrations. Children were tested in a quiet room, and stimuli were presented using a laptop with high-quality external speakers. A custom MATLAB script presented each target word in quiet, with pictures indicating the three alternatives. Children were instructed to point to the picture that they heard. Mean performance of 97% and 95% correct was observed for the consonant-variable and vowel-variable words, respectively. Based on these results, four words were rerecorded to improve discriminability (i.e., “chick,” “check,” “kick,” and “cake”). The final corpora of target words are shown in Table 1 and are available for download (see the Data Availability Statement).

Table 1.

Words comprising the two closed-set word corpora.

Consonant-variable | Vowel-variable
broom, moon, spoon | bike, back, book
bow, bowl, ball (a) | cone, cane, coin
smoke, goat, coat | nut, night, net
floor, door, corn | toe, tie, tea
socks, blocks, box | cake, cook, kick
hat, flag, black | store, stair, star
pan, fan, man | phone, fan, fin
bread, red, bed | mess, mouse, moose
neck, nest, dress | bat, boot, boat
bear, pear, chair | stick, steak, stack
fly, tie, pie | pen, pain, pan
key, tea, bee | cat, coat, kite
street, feet, teeth | pool, pole, pail
wing, spring, ring | shop, ship, sheep
clown, mouse, mouth | boy, bee, bow
shirt, church, skirt | read, red, road
gun, thumb, sun | moon, man, mean
bus, bug, cup | wheel, wall, whale
train, cake, plane | cap, cape, cup
car, arm, star | chick, chalk, check
chick, stick, fish | pie, pea, paw
ship, crib, lip | bell, bowl, ball
wheel, queen, green | sack, sick, sock
dog, saw, frog | bun, bean, bone
pail, jail, tail | lake, lock, leak
(a) This word set includes variability in both vowels and consonants.

Target Stimuli for Low-Probability and High-Probability Sentence Recognition

Sentence recognition was evaluated using a corpus developed by Stelmachowicz et al. (2000), which uses vocabulary appropriate for children as young as 4 years of age. These materials include 60 high-probability sentences, which are semantically correct (e.g., “Tough guys sound mean”), and 60 low-probability sentences, which are semantically anomalous (e.g., “Quick books look bright”). Each sentence comprised four monosyllabic words. These stimuli were recorded by a second 23-year-old female native speaker of English with a mainstream American English accent for a previous study (Buss et al., 2019). Recording procedures were the same as those described above.

Procedure

Targets were calibrated differently for words and sentences. The speech-shaped noise used in each experiment served as the reference for determining the root-mean-square (RMS) level associated with 70 dB SPL. Targets were then normalized based on this reference. Due to the short duration of monosyllabic words, consonant- and vowel-variable words were normalized based on the peak RMS integrated over 100 ms for each target; compared to overall RMS calibration, this procedure increased levels by an average of 5.4 dB (SD = 0.3 dB). Sentences and speech-shaped noise samples were calibrated based on their overall RMS.
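
To make the word-level calibration concrete, the sketch below shows one way to implement the peak-RMS normalization in MATLAB. It is a minimal illustration under stated assumptions (a reference noise `calNoise` whose overall RMS corresponds to 70 dB SPL and a recorded word `word`, both column vectors at the same sampling rate); the variable names are hypothetical and this is not the authors' script.

```matlab
% Minimal sketch of word-level calibration (assumed variable names): scale a
% word so its peak RMS in a 100-ms window matches the overall RMS of the
% speech-shaped calibration noise.
fs      = 44100;                           % sampling rate of the recordings, Hz
winLen  = round(0.100 * fs);               % 100-ms analysis window

refRMS  = sqrt(mean(calNoise.^2));         % reference: overall RMS of the noise

% Running RMS of the word in 100-ms windows; the maximum is the peak RMS
runRMS  = sqrt(conv(word.^2, ones(winLen, 1) / winLen, 'valid'));
peakRMS = max(runRMS);

wordCal = word * (refRMS / peakRMS);       % calibrated word
```

Sentences and noise samples would instead be scaled by their overall RMS, that is, by refRMS / sqrt(mean(x.^2)) for a stimulus vector x.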

Recognition was evaluated in noise for both the word and sentence materials. A different speech-shaped noise was generated for the word and sentence recognition tasks, each matching the long-term average magnitude spectra of the target speech. All stimuli were equalized at the outset of the experiment. For both word and sentence recognition, the overall level of the signal-plus-masker was fixed at 70 dB SPL throughout testing, and the signal-to-noise ratio (SNR) was adaptively varied using a pair of interleaved tracks with different stepping rules, an approach that samples a range of points on the psychometric function. The initial adjustments in SNR were made in steps of 8 dB; this was reduced to 4 dB after the second track reversal and then further reduced to 2 dB after the fourth reversal. The SRT was estimated by fitting a psychometric function to word-level data from both tracks. Conditions were completed in quasi-random order.
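
A noise matched to the long-term average magnitude spectrum of a corpus is commonly generated by imposing that spectrum on random-phase noise. The sketch below illustrates this standard approach; the authors' exact method is not specified, and it is assumed here that the target recordings have been concatenated into a column vector `speech`.

```matlab
% Sketch: generate noise matching the long-term average magnitude spectrum of
% a concatenated target corpus (speech). Standard approach; details assumed.
N     = 2^nextpow2(numel(speech));
spMag = abs(fft(speech, N));                  % long-term magnitude spectrum
phs   = angle(fft(randn(N, 1)));              % random phase taken from a real
                                              % noise (preserves conjugate symmetry)
noise = real(ifft(spMag .* exp(1i * phs)));   % speech-shaped noise
noise = noise / sqrt(mean(noise.^2));         % scale to unit RMS
```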

Word recognition was evaluated with an open-set response (Experiment 1) or using a three-alternative forced-choice (3AFC) picture-pointing response (Experiments 1–4). Open-set responses were scored as correct or incorrect by an experimenter. For the 3AFC task, three pictures associated with each trial (the target and two foils) appeared 750 ms before the onset of the auditory stimulus, arranged in random order in a horizontal sequence on the screen. Participants used a touch screen to indicate their responses. For both open-set and 3AFC testing, stepping rules for the two interleaved adaptive tracks were 1-down, 1-up and 3-down, 1-up. Each adaptive track contained 20 words (40 total in two interleaved tracks). All participants completed two runs for each word corpus (consonant-variable and vowel-variable), with a third estimate obtained in cases where the first two differed by 6 dB or more; in those cases, the outlier was discarded.
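
The logic of a single adaptive track can be sketched as follows. This is an illustrative MATLAB implementation of a 3-down, 1-up track with the step sizes described above (8 dB, then 4 dB after the second reversal, then 2 dB after the fourth); the function `runTrial` and the starting SNR are assumptions, and in the actual experiment two such tracks with different stepping rules were interleaved.

```matlab
% Sketch of one adaptive track (3-down, 1-up). runTrial(snr) is an assumed
% function that presents a word at the given SNR and returns true if the
% response is correct. A 1-down, 1-up track would use nDown = 1.
snr        = 10;          % starting SNR, dB (illustrative)
nDown      = 3;           % consecutive correct responses required to step down
nCorrect   = 0;
nReversals = 0;
lastDir    = 0;           % +1 = last step up, -1 = last step down
history    = [];          % [SNR, correct] pairs for later psychometric fitting

for trial = 1:20                                  % 20 words per track
    correct = runTrial(snr);
    history = [history; snr, correct];            %#ok<AGROW>
    if correct
        nCorrect = nCorrect + 1;
        dir = 0;
        if nCorrect >= nDown
            dir = -1;                             % step down (harder)
            nCorrect = 0;
        end
    else
        dir = +1;                                 % step up (easier)
        nCorrect = 0;
    end
    if dir ~= 0
        if lastDir ~= 0 && dir ~= lastDir
            nReversals = nReversals + 1;          % direction change = reversal
        end
        lastDir = dir;
        if nReversals >= 4
            step = 2;                             % 2-dB steps after 4th reversal
        elseif nReversals >= 2
            step = 4;                             % 4-dB steps after 2nd reversal
        else
            step = 8;                             % initial 8-dB steps
        end
        snr = snr + dir * step;
    end
end
```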

For sentence recognition, participants repeated each target sentence, and the experimenter scored each word in the sentence in real time. Stepping rules for the two interleaved adaptive tracks were both 1-down, 1-up, but those tracks used different criteria for what was considered a correct response; one track used a lax criterion (one or more words correct), and the other used a strict criterion (all or all but one word correct). Each adaptive track contained 30 sentences (60 total in two interleaved tracks), and participants never heard the same sentence twice. Each participant completed two runs: one for low-probability sentences and one for high-probability sentences. Conditions were not repeated due to the limited number of stimuli.

The experiment was controlled using custom MATLAB scripts. Stimuli were retrieved from disk and uploaded into a real-time processor (RZ6, Tucker-Davis Technologies). Stimuli were presented either diotically over headphones (Experiments 1, 2, and 4; HD 25, Sennheiser) or from a loudspeaker positioned 1 m directly in front of the participant (Experiment 3; Control 1 Pro, JBL). The masker turned on 500 ms before target onset and turned off 500 ms after target offset, including 20-ms raised-cosine ramps. The real-time processor ran at 48.8 kHz for word recognition and at 24.4 kHz for sentence recognition; this difference was a matter of convenience. For conditions including an LP filter, target and masker stimuli were passed through a 1024-tap finite impulse response filter. The frequency response was flat through 1 kHz and rolled off at 25 dB/octave above 1 kHz.
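
A filter with the stated characteristics could be designed with a standard FIR method. The sketch below uses `fir2` (Signal Processing Toolbox) and is an assumption about how such a response might be constructed rather than the authors' code; the sampling rate shown is the word-recognition rate, and `x` is an assumed stimulus vector.

```matlab
% Sketch of the LP filter: 1024-tap FIR, flat through 1 kHz, then a
% 25 dB/octave rolloff. Design details here are assumptions.
fs     = 48828;                                 % ~48.8 kHz (word conditions)
nTaps  = 1024;
fEdge  = 1000;                                  % corner frequency, Hz
f      = linspace(0, fs/2, 512);                % frequency grid, Hz
gdB    = zeros(size(f));
above  = f > fEdge;
gdB(above) = -25 * log2(f(above) / fEdge);      % 25-dB attenuation per octave above 1 kHz
b      = fir2(nTaps - 1, f / (fs/2), 10.^(gdB / 20));   % 1024 coefficients

yLP    = filter(b, 1, x);                       % apply to an assumed stimulus vector x
```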

Data Analysis

SRTs were defined as the midpoint of the psychometric function fitted to data for each participant and condition, corresponding to 66% correct for 3AFC data and 50% correct for open-set data. Defining SRT at 66% for both tasks does not change the basic findings reported below. Linear mixed-effects models were used to evaluate SRTs, with a random intercept for each participant. Child age was represented in units of log10(years); this choice of units is supported by the observation of lower Akaike information criterion values for models using log-transformed age than age in years, but the same pattern of significance was observed when age was not log-transformed. A criterion of α = .05 was adopted for evaluating significance, and all tests were evaluated two-tailed unless specified otherwise.
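
The SRT estimation and group-level model can be sketched as follows. The psychometric fit shown is a maximum-likelihood logistic function with a guess rate of 1/3 for the 3AFC data (so the midpoint falls at about 66% correct, or 50% for open-set data with a guess rate of 0), and the group analysis uses `fitlme` with a random intercept per participant and log-transformed age, mirroring the Experiment 2 model. Variable and table names are assumptions, and this is a simplified illustration rather than the authors' analysis code.

```matlab
% Sketch of SRT estimation from pooled trial-level data (history = [SNR, correct]
% across both interleaved tracks). Assumed variable names.
snr   = history(:, 1);
resp  = history(:, 2);                    % 1 = correct, 0 = incorrect
gamma = 1/3;                              % guess rate: 1/3 for 3AFC, 0 for open set
pFun  = @(p, x) gamma + (1 - gamma) ./ (1 + exp(-(x - p(1)) ./ p(2)));
negLL = @(p) -sum(resp .* log(pFun(p, snr)) + (1 - resp) .* log(1 - pFun(p, snr)));
pHat  = fminsearch(negLL, [mean(snr), 2]);        % [midpoint, slope] starting values
SRT   = pHat(1);                                  % SRT = midpoint of the fitted function

% Sketch of the group model: random intercept per participant, log10(age).
% Assumes a table dataTbl with columns SRT, stimulus, ageYears, and subject.
dataTbl.logAge = log10(dataTbl.ageYears);
lme = fitlme(dataTbl, 'SRT ~ stimulus*logAge + (1|subject)');
```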

Experiment 1: Vowel- and Consonant-Variable Word Recognition by Task and Spectral Content for Children and Adults

The first experiment evaluated word recognition for vowel-variable and consonant-variable words in noise for three conditions: an open-set task with full-bandwidth stimuli, a 3AFC task with full-bandwidth stimuli, and a 3AFC task with LP filtered stimuli. Participants were 10 children (ages 7.0–11.1 years, M = 9.0 years) and 10 adults (ages 22.1–33.0 years, M = 26.9 years), all with normal hearing. The open-set tasks were completed prior to the 3AFC tasks to prevent participants from limiting their responses to previously encountered targets. Apart from this constraint, conditions were completed in random order. Based on published data, SRTs were expected to be better (lower) for adults than for children and better for the closed-set task than for the open-set task (e.g., Buss et al., 2016). The LP filter was expected to elevate thresholds more for consonant-variable than vowel-variable words, due to the high-frequency content of consonants. This effect was expected for both age groups, with the caveat that high-frequency audibility may be more important for children than for adults (Stelmachowicz et al., 2001).

Results and Discussion

Figure 1 shows the distribution of SRTs for consonant-variable words (left panel) and vowel-variable words (right panel), with each of the three conditions indicated on the horizontal axis. Box shading indicates age group, and symbols indicate data for individual participants. Overall, SRTs tended to be modestly higher for children than for adults, with mean differences of 0.8 dB for the open-set task with full-bandwidth stimuli, 1.7 dB for the 3AFC task with full-bandwidth stimuli, and 3.3 dB for the 3AFC task with LP filtered stimuli. For the full-bandwidth stimuli, SRTs were higher for the open-set task than for the 3AFC task; this was observed for both children and adults, with mean differences across tasks of 4.3 dB for consonant-variable words and 5.1 dB for vowel-variable words. For the 3AFC tasks, LP filtering had a larger effect on SRTs measured with consonant-variable words (6.0 dB) than with vowel-variable words (1.7 dB).

Figure 1.

Distribution of word recognition scores by task, which was either open-set or three-alternative forced-choice (3AFC) closed-set recognition, and spectral content of the stimulus, which was either full bandwidth or low-pass (LP) filtered. Results are shown separately for the consonant-variable words (left) and vowel-variable words (right). Horizontal lines indicate the median, boxes span the 25th–75th percentiles, and whiskers span the 10th–90th percentiles. Symbols and box shading reflect group, as defined in the legend. SRT = speech reception threshold.

These observations were confirmed statistically with a model that included two levels of stimulus (consonant-variable and vowel-variable), three levels of condition (open-set, full; 3AFC, full; and 3AFC, LP), and two levels of group (adult and child). Reference conditions were the consonant-variable stimulus, the 3AFC full-bandwidth condition, and the adult group. There were significant effects of condition, F(2, 90) = 44.04, p < .001, and group, F(1, 18) = 11.19, p = .004, but not stimulus, F(1, 90) = 0.59, p = .444. There was a significant Stimulus × Condition interaction, F(2, 90) = 22.75, p < .001; a significant Group × Condition interaction, F(2, 90) = 3.39, p = .038; and no significant Stimulus × Group or three-way interactions, F(1, 90) = 2.13, p = .148; F(2, 90) = 0.11, p = .894. The Stimulus × Condition interaction reflects the fact that mean SRTs are lower for consonant- than vowel-variable words for the full-bandwidth open-set task (0.5 vs. 1.5 dB), t(35.7) = 2.67, p = .011; similar for the two word types for the full-bandwidth 3AFC task (−3.8 vs. −4.0 dB), t(30.2) = −0.29, p = .776; and higher for the consonant- than vowel-variable words for the LP filtered 3AFC task (2.2 vs. −2.3 dB), t(36.5) = −5.75, p < .001. This indicates that the provision of a closed response set and removal of high-frequency cues have different effects on recognition of consonant- and vowel-variable words. The interaction between group and condition reflects an increasing child–adult difference for the open-set full-bandwidth condition (M = 0.8 dB), t(30.9) = 1.99, p = .055; the 3AFC full-bandwidth condition (M = 1.7 dB), t(28.3) = 3.54, p = .003; and the 3AFC LP condition (M = 3.3 dB), t(36.1) = 3.59, p < .001. This could indicate maturation of the ability to take full advantage of a small response set and spectrally sparse speech cues.

Results of this experiment demonstrate comparable performance for the full-bandwidth consonant-variable and vowel-variable words for both children and adults with normal hearing. Lower SRTs for the 3AFC task than for the open-set task for children and adults replicates previous data demonstrating that both age groups benefit from a restrictive response set (e.g., Buss et al., 2016). A greater detrimental effect of LP filtering for the consonant- than vowel-variable words for both age groups is consistent with the idea that listeners rely on cues from different spectral regions for recognition for these two stimulus conditions. These preliminary results support the possibility that consonant- and vowel-variable words in a 3AFC task could be used to differentially evaluate access to the acoustic cues underlying consonant and vowel perception in children with hearing loss.

Experiment 2: Word and Sentence Recognition by Child Age

The purpose of the second experiment was to document the developmental trajectory of word recognition in a 3AFC task across child age and to compare performance for consonant- and vowel-variable words to recognition of low- and high-probability sentences. The rationale for considering sentence recognition was twofold. First, sentence recognition may be more representative of natural communication than recognition of isolated words in a forced-choice context, due to greater linguistic complexity and reduced predictability. Second, low- and high-probability sentences may differ in their reliance on consonant and vowel recognition. Recall that word recognition appears to rely heavily on consonant perception (Fogerty et al., 2012; Owren & Cardillo, 2006), whereas sentence recognition relies more heavily on vowels (Cole et al., 1996; Fogerty & Kewley-Port, 2009; Kewley-Port et al., 2007). These differences in the relative importance of vowels and consonants could reflect listeners' ability to recognize speech based on vowel information when the semantic context is available and greater reliance on consonants in the absence of that context. If this is the case, then vowel-variable word recognition scores might be a better predictor of high-probability than low-probability sentence recognition, and the opposite might be observed for consonant-variable word recognition scores. Furthermore, any difference in the developmental trajectory for the consonant- and vowel-variable words might also be observed for low- and high-probability sentences.

Participants were 22 children (ages 5.8–17.0 years, M = 10.6 years) and 21 adults (ages 19.3–37.5 years, M = 24.9 years), all with normal hearing. They completed speech-in-noise testing for four target conditions: consonant-variable words, vowel-variable words, low-probability sentences, and high-probability sentences. Word recognition was evaluated with the 3AFC task, and sentence recognition was evaluated with an open-set response. These conditions were completed in random order. For all four stimulus types, young children were expected to perform more poorly than adults, and this child–adult difference was expected to shrink with increasing age. Children and adults were expected to benefit from stimulus context, whether that context was provided by a closed set of response alternatives or semantic content (Buss et al., 2016, 2019; Fallon et al., 2002).

Results and Discussion

Figure 2 shows SRTs from child participants, plotted as a function of age; the distributions of SRTs for adults are shown at the far right of each panel for comparison. Results are shown separately for consonant- and vowel-variable word recognition and for low- and high-probability sentence recognition. Performance improved with increasing child age in all four conditions, and the effect of age was larger for word recognition than for sentence recognition. Based on line fits to child data in each condition, performance improved over the age range tested by 5.2 dB (consonant-variable words), 7.0 dB (vowel-variable words), 3.7 dB (low-probability sentences), and 2.8 dB (high-probability sentences). These fits are consistent with mature performance for the oldest children tested. Sentence recognition is broadly consistent with the results of Buss et al. (2019), who reported a reduction in SRTs of approximately 3.6 dB over the age range tested here for both low- and high-probability sentences.

Figure 2.

Speech reception thresholds (SRTs) as a function of child age in years, plotted separately for each of four stimulus conditions, as indicated in the upper right of each panel. Solid lines indicate improvement in SRT with child age. Box plots at the far right of each panel indicate the distribution of SRTs for adults, following the conventions of Figure 1.

A model including all four stimulus conditions, with consonant-variable words as reference, indicates significant main effects of stimulus type, F(3, 60) = 293.64, p < .001, and age, F(1, 20) = 17.38, p < .001, and a significant interaction between stimulus type and age, F(3, 60) = 5.77, p = .002. Reduced models including just the word or just the sentence data found no significant interaction between stimulus type and age (p ≥ .222). This supports the visual impression that age has a smaller effect on SRTs for the low- and high-probability sentences than for the consonant- and vowel-variable words. These reduced models indicated lower SRTs for high- than low-probability sentences (p < .001) and no significant difference between SRTs for consonant- and vowel-variable words (p = .175).

Residuals of the full model excluding the random intercept for participant were used to evaluate the consistency of individual differences in child data across conditions, controlling for age. Individual differences were significantly correlated for the two sentence recognition tasks (r = .71, p < .001) but not for the two word recognition tasks (r = −.01, p = .977). Theoretically, this could indicate that consonant- and vowel-variable word scores reflect different abilities related to consonant and vowel recognition, respectively. Scores for the two sentence types tended to be positively correlated with recognition scores for both consonant-variable words (low probability: r = .23, p = .299; high probability: r = .51, p = .016) and vowel-variable words (low probability: r = .32, p = .143; high probability: r = .25, p = .264), although these correlations were weak and did not reach significance in most cases. These results do not support the a priori prediction of larger correlations between scores for tasks thought to rely on consonants (low-probability sentences and consonant-variable words) or vowels (high-probability sentences and vowel-variable words) than the converse pairings. This could indicate that consonant and vowel perception contribute similarly to low- and high-probability sentence recognition; an alternative explanation is that failure to observe the predicted relationships is due to homogeneity of results among children with normal hearing.
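
The residual-correlation analysis can be sketched as follows, assuming a fitted LinearMixedModel for the child data (`lmeChild`, analogous to the model sketched earlier) and a table `childTbl` with one row per child and condition; all names here are hypothetical. Marginal residuals (fixed effects only, excluding the random intercepts) are reshaped to one row per child and correlated between pairs of conditions.

```matlab
% Sketch: correlate per-child residuals between conditions, controlling for
% age via the fixed effects of the fitted model. Assumed variable names.
childTbl.res = residuals(lmeChild, 'Conditional', false);   % exclude random intercepts
wide = unstack(childTbl(:, {'subject', 'stimulus', 'res'}), 'res', 'stimulus');
% Columns of "wide" are named for the stimulus conditions; names are assumed here.
[rWords, pWords] = corr(wide.consVarWords, wide.vowelVarWords);
[rSents, pSents] = corr(wide.lowProbSents, wide.highProbSents);
```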

Results of Experiment 2 demonstrate that performance matures between 5.8 and 17.0 years of age for both word and sentence recognition tasks, but the magnitude of this change is larger for 3AFC word recognition than for open-set sentence recognition. These data provide a normative comparison for results obtained from children with hearing loss, collected in the next experiment. They also provide tentative support for the idea that consonant- and vowel-variable word recognition scores predict different aspects of speech perception, as scores are not correlated for consonant- and vowel-variable word recognition after controlling for participant age. However, correlations between word and sentence scores were weak and failed to support the hypothesis that vowel- and consonant-variable word scores preferentially predict high- and low-probability sentence recognition, respectively.

Experiment 3: Word and Sentence Recognition for Children With Hearing Loss

The third experiment tested a cohort of children with sensorineural hearing loss and compared their results to those obtained from children with normal hearing in Experiment 2. For this experiment, stimuli were presented over a loudspeaker, and participants wore their hearing aids at user settings. The goal was to evaluate effects of hearing loss configuration on consonant- and vowel-variable word recognition. High-frequency loss was expected to have a larger effect on consonant-variable word recognition than on vowel-variable word recognition, consistent with data obtained with LP filtered stimuli in Experiment 1. Conversely, low-frequency loss was expected to have a larger detrimental effect on vowel-variable word recognition than on consonant-variable word recognition, based on the predominance of low-frequency vowel cues. Word recognition scores were expected to correlate with sentence recognition scores, although based on the results of Experiment 1, it was not clear whether to expect this association to differ for low- and high-probability sentences.

Participants were 14 children (ages 7.9–17.9 years, M = 11.2 years) with bilateral sensorineural hearing loss, confirmed by a licensed clinical audiologist. Audiograms for this cohort are shown in Figure 3, along with symbols indicating better-ear low-frequency (0.25, 0.5, and 1 kHz) and high-frequency (2, 4, and 8 kHz) pure-tone averages (PTAs). Audiograms were measured in the laboratory on the test day for 12 children; for the other two children, clinical audiograms measured no more than 3 months prior to the test day were taken from the medical record. All participants were active hearing aid users, as described in Table 2. In all cases, hearing aids were appropriately fitted to the participant's hearing loss, as confirmed on the day of testing.

Figure 3.

Audiograms for children with hearing loss. Data for individual participants are shown in separate panels, arranged from youngest (top left) to oldest (bottom right). Child ages are indicated in each panel. Air-conduction thresholds are shown for the right ear (red circles) and left ear (blue Xs). The shading of circles at the bottom of each panel indicates mean three-frequency pure-tone averages (PTAs) in dB HL, as illustrated in the legend. Shading of the left half reflects the low-frequency PTA (0.25, 0.5, and 1 kHz), and shading of the right half reflects the high-frequency PTA (2, 4, and 8 kHz). yrs = years.

Table 2.

Hearing aid (HA) and associated information for children with hearing loss.

Age (years) | Age at ID (years) | Age at first HA fit (years) | HA type | Frequency lowering | Aided SII, 65 dB SPL (L / R) | Unaided SII, 65 dB SPL (L / R) | Daily use, hours (L / R) | MAF, kHz (L / R)
7.9 | 1.2 | 1.3 | Oticon Sensei Pro | No | .83 / .83 | .48 / .49 | 5.0 / 3.2 | 8.0 / 8.0
8.3 | 2.5 | 2.5 | Phonak Nios S H20 V | Yes | .80 / .75 | .16 / .15 | 12.8 / 12.1 | 8.0 / 8.0
8.6 | 3.5 | 3.5 | Phonak Nios S H20 III | Yes | .69 / .45 | .39 / .31 | 20.6 / 19.5 | 5.0 / 2.0
9.5 | 7.3 | 7.4 | Phonak Sky Q50 M13 | No | .74 / .69 | .36 / .29 | 12.0 / 11.6 | 6.5 / 6.5
9.6 | 2.1 | 2.1 | Oticon Safari 300 | No | .87 / .92 | .48 / .63 | 7.5 / 6.7 | 8.0 / 8.0
9.8 | 7.0 | 7.0 | ReSound Up Smart 5 | No | .89 / .92 | .41 / .54 | 13* / 13* | 8.0 / 8.0
10.3 | 4.6 | 9.3 | Phonak Sky V50 P | No | .91 / .90 | .75 / .73 | 6.5 / 6.4 | 7.5 / 7.5
10.3 | 5.0 | 5.0 | Phonak Nios S H20 III | — | .76 / .68 | .33 / .24 | 16* / 16* | 6.5 / 5.0
10.6 | 0.3 | 0.3 | Phonak Nios S H20 III | No | .84 / .89 | .38 / .57 | 7.9 / 8.0 | 8.0 / 8.0
11.4 | 3.8 | 3.8 | Oticon Sensei Pro SP | No | .73 / .77 | .45 / .31 | 1.2 / 2.4 | 7.0 / 7.0
13.1 | 3.7 | 4.0 | Phonak Nios Micro III | Yes | .48 / .77 | .08 / .14 | 10.0 / 10.0 | 5.0 / 5.0
13.7 | 8.0 | 8.0 | Phonak Sky V50-SP | Yes | .80 / .94 | .03 / .94 | 6.0 / 6.2 | 7.0 / 7.9
15.8 | 4.5 | 4.6 | Phonak Nios S H20 III | No | .87 / .86 | .36 / .32 | 9.1 / 9.7 | 8.0 / 8.0
17.9 | Birth | 3.0 | Phonak Naída V50 RIC | Yes | .46 / .45 | .02 / .01 | 21.3 / 21.2 | 7.0 / 7.0

Note. In the published table, a shaded symbol at the left of each row indicated that child's low-frequency (0.25, 0.5, and 1 kHz) and high-frequency (2, 4, and 8 kHz) pure-tone averages (PTAs), following the conventions of Figures 3 and 4, with darker shading indicating higher three-frequency PTAs; those symbols are not reproduced here. Ear-specific values are provided for aided and unaided Speech Intelligibility Index (SII), daily hearing aid use, and the minimum audible frequency (MAF), reported as left (L) / right (R). Daily use is based on data logging for all but two listeners; for those two children, parents estimated HA use (indicated with asterisks). Age at identification (ID), age at first HA fitting, HA type, and use of frequency lowering were the same across ears. The em dash indicates missing data.

Results and Discussion

Figure 4 shows SRTs for individual children with hearing loss, plotted as a function of age for each of the four stimulus conditions. Solid lines indicate fits to data for the children with normal hearing tested in Experiment 2, and gray shaded regions show the associated 95% prediction intervals. Circles show individual data for children with hearing loss, and symbol shading indicates the low-frequency and high-frequency PTAs, as defined in the legend. On average, SRTs for children with hearing loss were poorer than those for children with normal hearing, but the effect of hearing loss differed across conditions. Most 3AFC word recognition data for children with hearing loss fell within the 95% prediction interval of the normal-hearing data, including 13/14 data points for the consonant-variable words and 10/14 data points for the vowel-variable words. In contrast, open-set sentence recognition scores for most children with hearing loss fell outside the 95% prediction interval for normal-hearing data; only 5/14 fell within the prediction interval for children with normal hearing for each of the two sentence conditions.

Figure 4.

Speech reception thresholds (SRTs) for children with hearing loss as a function of age. The shading of circles in each panel indicates the low-frequency pure-tone average (PTA; left: 0.25, 0.5, and 1 kHz) and high-frequency PTA (right: 2, 4, and 8 kHz), following the conventions of Figure 3. Black lines indicate fits to data from children with normal hearing tested in Experiment 2, and gray shaded regions indicate the associated 95% prediction intervals. Data for the four speech materials are shown in separate panels.

The significance of these observations was evaluated with a linear mixed model that included four levels of stimulus type (consonant- and vowel-variable words and low- and high-probability sentences), age, participant group (hearing impaired and normal hearing), and all interactions. There were significant effects of stimulus type, F(3, 96) = 155.46, p < .001; age, F(1, 32) = 17.56, p < .001; and participant group, F(1, 32) = 12.88, p = .001. Significant interactions were observed between stimulus type and group, F(3, 96) = 15.16, p < .001, and between stimulus type and age, F(3, 96) = 3.24, p = .025. The other two interactions did not approach significance (p ≥ .257). Repeating the model with just the word scores or just the sentence scores resulted in a nonsignificant interaction between stimulus and group (p ≥ .244) and a nonsignificant three-way interaction (p ≥ .235), supporting the visual impression that the effect of hearing loss was significantly greater for low- and high-probability sentence recognition than for consonant- and vowel-variable word recognition.

Model fits to the data from children with normal hearing were used to estimate effects of hearing loss for each participant. Mean values of SRT elevation due to hearing loss were 2.2 dB for consonant-variable words, 1.8 dB for vowel-variable words, 5.1 dB for low-probability sentences, and 5.4 dB for high-probability sentences. These differences were significantly greater than zero for both words (p ≤ .011) and sentences (p < .001).

A second model was constructed to test the hypothesis that low- and high-frequency hearing loss affects SRTs differently for the four stimulus types. High-frequency hearing loss was expected to have a larger effect on consonant-variable than vowel-variable word recognition and a larger effect on low-probability than high-probability sentence recognition. Conversely, low-frequency hearing loss was expected to have a larger effect on vowel-variable word recognition and high-probability sentence recognition. This model included participant age, low-frequency PTA, high-frequency PTA, stimulus type, and interactions between stimulus type and the low- and high-frequency PTAs. There was a significant effect of participant age, F(1, 10) = 6.75, p = .027, and a nonsignificant trend for an effect of stimulus type, F(3, 33) = 2.60, p = .069, but none of the other effects approached significance (p ≥ .312). Importantly, there was not a significant interaction between stimulus condition and either low-frequency PTA, F(3, 33) = 0.12, p = .945, or high-frequency PTA, F(3, 33) = 1.24, p = .312. The same pattern of significance was observed when this analysis was restricted to the consonant- and vowel-variable words, with the exception that there was no indication of an effect of stimulus type (p = .645).

This result contrasts with the data pattern observed when participants with normal hearing were tested with and without the LP filter. In those conditions, reduced access to high-frequency speech information hurt performance more for the consonant-variable words than for the vowel-variable words. One possible explanation for discrepancies between effects of LP filtering and high-frequency hearing loss is that listeners with stable hearing loss may learn over time to optimize use of the cues that they have access to (Gatehouse, 1992). Another possible explanation is that the well-fitted hearing aids worn by children in this cohort may have provided access to the speech information required to perform 3AFC word recognition. This possibility is consistent with the high proportion of children with hearing loss whose SRTs fall within the prediction interval for normal-hearing data. Larger effects of hearing loss on sentence recognition could indicate that the access to speech cues provided by those hearing aids was not sufficient to support equivalently good performance on the open-set sentence recognition task for most children.

One practical question is whether SRT elevation on the 3AFC word recognition tasks relative to children with normal hearing predicts sentence recognition for children with hearing loss and whether there is any indication of differential reliance on consonants and vowels for low- and high-predictability sentence recognition. The hypothesis at the outset of this series of experiments was that consonant-variable word recognition would be a better predictor of low-probability sentence recognition and that vowel-variable word recognition would be a better predictor of high-probability sentence recognition. This was evaluated by calculating the effect of hearing loss for each participant and condition relative to predictions of the model fitted to data from children with normal hearing in Experiment 2. This approach partials out effects of age on performance. SRT elevation associated with hearing loss was most highly correlated for consonant- and vowel-variable word recognition (r = .65, p = .012) and for low- and high-probability sentence recognition (r = .87, p < .001). Of particular interest here, effects of hearing loss for consonant-variable word recognition were no more predictive of low- than high-probability sentence recognition (r = .43 and r = .55, respectively), and effects of hearing loss on vowel-variable word recognition were no more predictive of high- than low-probability sentence recognition (r = .48 and r = .44, respectively; z = 0.30, p = .384). These results provide no support for the idea that consonant perception plays a larger role in low- than high-probability sentence recognition or that vowel recognition plays a larger role in high- than low-probability sentence recognition.

An exploratory model was fitted to evaluate how demographic and hearing aid data affected individual differences in performance. This model included stimulus type, age on the test day, age at hearing aid fitting, and auditory dosage. Auditory dosage is a metric that combines better-ear Speech Intelligibility Index (aided and unaided) and the number of hours per day of hearing aid usage to characterize auditory experience (McCreery & Walker, 2022). The age at hearing aid fitting was log-transformed to maintain parity with age on the test day, and auditory dosage was log-transformed to normalize the distribution of values. This model resulted in significant effects of stimulus, F(3, 39) = 18.87, p < .001; age on the test day, F(1, 10) = 21.36, p = .001; age at hearing aid fitting, F(1, 10) = 7.09, p = .024; and auditory dosage, F(1, 10) = 12.98, p = .005. These results are consistent with previous work reporting an association between masked speech recognition and age at hearing aid fitting, hours of hearing aid use per day, and auditory dosage (McCreery & Walker, 2022; McCreery et al., 2015; Sininger et al., 2010). Other models including interactions between these participant factors and stimulus condition did not reveal any significant differences across stimulus conditions.

Residuals of this model were used to evaluate whether age on the test day, age at hearing aid fitting, and auditory dosage account for the correlation in SRTs between conditions. There was a positive correlation between residuals for the consonant- and vowel-variable words (r = .52, p = .056) and between residuals for the low- and high-probability sentences (r = .80, p < .001), but correlations between residuals for word and sentence scores did not approach significance (r = −.02 to .30, p ≥ .294). These trends could suggest that there are additional factors affecting individual differences for word and sentence recognition that are not captured in this model. For example, sentence recognition could rely on central factors such as linguistic knowledge, short-term memory, or executive function (McCreery et al., 2019, 2020). Alternatively, the audibility requirements for 3AFC word recognition could be reduced compared to open-set sentence recognition. This second possibility was evaluated in the final experiment.

Experiment 4: Effects of LP Filtering on Word and Sentence Recognition for Adults With Normal Hearing

The final experiment evaluated the effect of LP filtering on word and sentence recognition for adults with normal hearing. These results were compared to data obtained from adults in the associated full-bandwidth conditions of Experiment 2. As in that prior experiment, stimuli were presented over headphones. The motivation for evaluating 3AFC word recognition and open-set sentence recognition with LP filtered stimuli was to confirm one possible explanation for the larger effects of sensorineural hearing loss on sentence recognition in children, namely, that correct recognition of the low- and high-probability sentences is more sensitive to cue degradation than the 3AFC consonant- and vowel-variable word recognition. Participants were 21 adults (ages 18.5–38.7 years, M = 28.3 years) with normal hearing.

Results and Discussion

Figure 5 shows the distributions of SRTs for adults tested with full-bandwidth stimuli in Experiment 2 and for the adults tested with LP filtered stimuli in the present experiment. On average, LP filtering elevated thresholds by 3.1 dB for consonant-variable words and by 0.7 dB for vowel-variable words. This is consistent with data from adults tested in Experiment 1, where the mean effect of LP filtering was 4.6 dB for consonant-variable words and 0.9 dB for vowel-variable words. The effect of filtering was more pronounced for sentences, with mean differences in SRT for the full-bandwidth and LP filter conditions of 6.3 dB for low-probability sentences and 4.6 dB for high-probability sentences.

Figure 5.

Distribution of speech reception thresholds (SRTs) for adults with normal hearing tested with full-bandwidth and low-pass filtered stimuli. Symbols indicate data for individual participants. Stimulus condition is indicated on the horizontal axis, and box fill indicates stimulus bandwidth. Full-bandwidth data are taken from Experiment 2. Horizontal lines indicate the median, boxes span the 25th–75th percentiles, and whiskers span the 10th–90th percentiles.

These observations were evaluated with a model that included data from the current experiment and data from adults tested with full-bandwidth stimuli in Experiment 2. There were two levels of bandwidth (full-bandwidth and LP) and four levels of stimulus type (consonant-variable words, vowel-variable words, low-probability sentences, and high-probability sentences). Bandwidth was a between-subjects variable, and stimulus was a within-subject variable. Results indicate significant effects of bandwidth, F(1, 40) = 34.66, p < .001, and stimulus, F(3, 120) = 86.13, p < .001, and a significant interaction between bandwidth and stimulus, F(3, 120) = 25.13, p < .001. Repeating this model with mean word data and mean sentence data resulted in the same pattern of significance. LP filtering significantly elevated SRTs for consonant-variable words and both sentence types (p < .001); there was no significant difference between SRTs for full-bandwidth and LP filtered vowel-variable words (p = .126).
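
As an illustration of this design, a mixed-design analysis of this general form (between-subjects bandwidth, within-subjects stimulus type) could be run with the pingouin package as sketched below. This is a hedged example, not the authors' analysis code; the data file, column names, and factor labels are assumptions.

```python
# Mixed-design ANOVA sketch: bandwidth between subjects, stimulus within
# subjects. All names below are hypothetical.
import pandas as pd
import pingouin as pg

adult_df = pd.read_csv("adult_srt_data.csv")  # hypothetical: one row per participant per condition

aov = pg.mixed_anova(
    data=adult_df,
    dv="srt_db",             # SRT in dB signal-to-noise ratio
    within="stimulus",       # word and sentence conditions
    between="bandwidth",     # "full" vs. "lp" (assumed labels)
    subject="participant_id",
)
print(aov.round(3))

# Follow-up comparisons of bandwidth within each stimulus condition.
for stim, g in adult_df.groupby("stimulus"):
    ttest = pg.ttest(
        g.loc[g["bandwidth"] == "full", "srt_db"],
        g.loc[g["bandwidth"] == "lp", "srt_db"],
    )
    print(stim)
    print(ttest.round(3))
```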

Results of this experiment replicate the greater detrimental effect of LP filtering on the consonant-variable words compared to the vowel-variable words, as previously shown in Experiment 1. These data also show that filtering had a larger detrimental effect on low-probability than high-probability sentence recognition. This could reflect greater reliance on high-frequency information in the consonant-variable and low-probability conditions. However, no data were collected with a high-pass filter; hence, we cannot rule out a more general requirement for greater audible bandwidth in these conditions. There was also a greater effect of LP filtering for sentence recognition than for 3AFC word recognition. This finding provides some support for the suggestion that relatively better performance for words than sentences among children with hearing loss tested in Experiment 3 may reflect greater tolerance of cue degradation for word recognition in a forced-choice format than for open-set sentence recognition.

Summary

Vowels and consonants play different roles in language acquisition and speech recognition: Vowels are less susceptible to noise masking and preferentially convey prosodic information, whereas consonants are thought to be relatively more important for language acquisition (Nazzi & Cutler, 2019; Nespor et al., 2002). Despite these differences, vowel and consonant perception is rarely evaluated separately in clinical evaluation of functional hearing abilities. This study was undertaken to collect preliminary data on a pair of 3AFC word recognition tasks designed to characterize vowel and consonant recognition in children using three-word sets that varied in either vowel or consonant content. Given the greater high-frequency content of consonants than vowels, consonant-variable word recognition should be more reliant on access to high-frequency information as compared to the vowel-variable task. Similarly, the low-frequency content of vowels should result in greater reliance on access to low-frequency information for the vowel-variable word recognition as compared to consonant-variable word recognition.

Four experiments evaluating masked speech recognition were carried out to begin exploring the psychophysical properties of the consonant- and vowel-variable closed-set speech-in-noise task. The first experiment confirmed that both children and adults with normal hearing benefit from the 3AFC response set as compared to open-set recognition and showed that LP filtering stimuli had a greater detrimental effect on recognition of consonant-variable words as compared to vowel-variable words. The second experiment showed that consonant- and vowel-variable word recognition in noise matures substantially between 5.8 and 17.0 years of age, with more modest effects of development for low- and high-probability sentences in noise. Individual differences among children after accounting for age indicate an association between low- and high-probability sentence recognition, but not between vowel- and consonant-variable word recognition. Correlations between individual differences on pairs of word and sentence conditions were weakly positive and mostly nonsignificant. These results could indicate that vowel and consonant recognition can be differentially assessed using the 3AFC word recognition task and that both contribute unique cues to sentence recognition irrespective of semantic context.

Data were also collected from a group of children with bilateral sensorineural hearing loss who were tested with appropriately fitted hearing aids. Threshold elevation associated with hearing loss was greater for sentence recognition than for word recognition. Contrary to prior predictions, there was no evidence for a greater effect of high-frequency hearing loss on consonant-variable word recognition or a greater effect of low-frequency hearing loss on vowel-variable word recognition. This could be due to the provision of amplification and to the short interval between the age of identification and the first hearing aid fitting for most children. Performance was associated with age on the test day, age at hearing aid fitting, and auditory dosage. After accounting for these factors, there was a significant correlation between sentence scores and a nonsignificant trend for a correlation between word scores, but no evidence of an association between scores for word and sentence recognition. This could reflect differences in the listener abilities necessary for closed-set word recognition and open-set sentence recognition, such as effects of cognitive and linguistic factors (McCreery et al., 2019, 2020), or greater susceptibility to cue degradation for open-set sentence recognition. Additional data from adults were collected on word and sentence recognition with LP filtered stimuli; greater detrimental effects of filtering for open-set sentence recognition than for 3AFC word recognition provide support for the idea that greater elevation of SRTs for sentences than words in children with hearing loss is due to greater susceptibility to cue degradation for the sentence recognition task.

The results obtained here are mixed with respect to the use of vowel- and consonant-variable word recognition to differentially evaluate vowel and consonant perception in children with hearing loss. Data from children with normal hearing are consistent with differential assessment of consonant and vowel perception, but data from children with hearing loss are equivocal with respect to the configuration of hearing loss, as well as the contributions of consonant and vowel discrimination to sentence recognition. While these data support the idea that closed-set word recognition and open-set sentence recognition rely on many of the same auditory abilities, there appear to be additional factors that are not shared across tasks, such as the number or quality of cues required for recognition or central factors related to linguistic knowledge, short-term memory, or executive function. These factors notwithstanding, closed-set word recognition appears to be a practical proxy for speech perception under some conditions, such as when open-set responses are challenging to score accurately due to speech production errors. Future research is needed to understand these factors before the clinical value of these methods can be established.

Data Availability Statement

Stimulus recordings, illustrations, and deidentified data for each experiment are available at https://osf.io/pw6ue/?view_only=c2e868b2dade498bb72eef6dbeac0b07.

Acknowledgments

This work was sponsored by National Institute on Deafness and Other Communication Disorders Grant R01 DC014460, awarded to Emily Buss. The authors would like to thank the members of the Human Auditory Development Lab for their contributions, particularly Sadie Cramer, Jessica Tran, and Manuel Vicente for their assistance with data collection. Stacey Kane provided helpful feedback on this article. Stimulus illustrations were made by Joan Calandruccio and Gretchen Xue.

Funding Statement

This work was sponsored by National Institute on Deafness and Other Communication Disorders Grant R01 DC014460, awarded to Emily Buss.

References

  1. Ambrose, S. E. , Unflat Berry, L. M. , Walker, E. A. , Harrison, M. , Oleson, J. , & Moeller, M. P. (2014). Speech sound production in 2-year-olds who are hard of hearing. American Journal of Speech-Language Pathology, 23(2), 91–104. https://doi.org/10.1044/2014_AJSLP-13-0039 [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. American National Standards Institute. (2018). American national standard specification for audiometers (ANSI S3.6-2018) .
  3. Boothroyd, A. (1984). Auditory perception of speech contrasts by subjects with sensorineural hearing loss. Journal of Speech and Hearing Research, 27(1), 134–144. https://doi.org/10.1044/jshr.2701.134 [DOI] [PubMed] [Google Scholar]
  4. Boothroyd, A. , Mulhearn, B. , Gong, J. , & Ostroff, J. (1996). Effects of spectral smearing on phoneme and word recognition. The Journal of the Acoustical Society of America, 100(3), 1807–1818. https://doi.org/10.1121/1.416000 [DOI] [PubMed] [Google Scholar]
  5. Buss, E. , Hodge, S. E. , Calandruccio, L. , Leibold, L. J. , & Grose, J. H. (2019). Masked sentence recognition in children, young adults, and older adults: Age-dependent effects of semantic context and masker type. Ear and Hearing, 40(5), 1117–1126. https://doi.org/10.1097/AUD.0000000000000692 [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Buss, E. , Leibold, L. J. , & Hall, J. W., III. (2016). Effect of response context and masker type on word recognition in school-age children and adults. The Journal of the Acoustical Society of America, 140(2), 968–977. https://doi.org/10.1121/1.4960587 [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Caramazza, A. , Chialant, D. , Capasso, R. , & Miceli, G. (2000). Separable processing of consonants and vowels. Nature, 403(6768), 428–430. https://doi.org/10.1038/35000206 [DOI] [PubMed] [Google Scholar]
  8. Carroll, J. M. , & Breadmore, H. L. (2018). Not all phonological awareness deficits are created equal: Evidence from a comparison between children with otitis media and poor readers. Developmental Science, 21(3), e12588. https://doi.org/10.1111/desc.12588 [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Clopper, C. G. , Pisoni, D. B. , & Tierney, A. T. (2006). Effects of open-set and closed-set task demands on spoken word recognition. Journal of the American Academy of Audiology, 17(5), 331–349. https://doi.org/10.3766/jaaa.17.5.4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Cole, R. A. , Yan, Y. H. , Mak, B. , Fanty, M. , & Bailey, T. (1996). The contribution of consonants versus vowels to word recognition in fluent speech [Paper presentation] . 1996 IEEE International Conference on Acoustics, Speech, and Signal Processing, Atlanta, GA, United States. [Google Scholar]
  11. Danhauer, J. L. , Abdala, C. , Johnson, C. , & Asp, C. (1986). Perceptual features from normal-hearing and hearing-impaired children's errors on the NST. Ear and Hearing, 7(5), 318–322. https://doi.org/10.1097/00003446-198610000-00005 [DOI] [PubMed] [Google Scholar]
  12. Davis, C. J. (2005). N-Watch: A program for deriving neighborhood size and other psycholinguistic statistics. Behavior Research Methods, 37(1), 65–70. https://doi.org/10.3758/bf03206399 [DOI] [PubMed] [Google Scholar]
  13. Dengerink, J. E. , & Bean, R. E. (1988). Spontaneous labeling of pictures on the WIPI and NU-CHIPS by 5-year-olds. Language, Speech, and Hearing Services in Schools, 19(2), 144–152. https://doi.org/10.1044/0161-1461.1902.144 [Google Scholar]
  14. Dolch, E. W. (1948). Problems in reading. Garrard Press. [Google Scholar]
  15. Drullman, R. , Festen, J. M. , & Plomp, R. (1994). Effect of temporal envelope smearing on speech reception. The Journal of the Acoustical Society of America, 95(2), 1053–1064. https://doi.org/10.1121/1.408467 [DOI] [PubMed] [Google Scholar]
  16. Dubno, J. R. , & Dirks, D. D. (1990). Associations among frequency and temporal resolution and consonant recognition for hearing-impaired listeners. Acta Oto-Laryngologica, 109(Suppl. 469), 23–29. https://doi.org/10.1080/00016489.1990.12088405 [PubMed] [Google Scholar]
  17. Eisenberg, L. S. (2007). Current state of knowledge: Speech recognition and production in children with hearing impairment. Ear and Hearing, 28(6), 766–772. https://doi.org/10.1097/AUD.0b013e318157f01f [DOI] [PubMed] [Google Scholar]
  18. Eisenberg, L. S. , Martinez, A. S. , & Boothroyd, A. (2007). Assessing auditory capabilities in young children. International Journal of Pediatric Otorhinolaryngology, 71(9), 1339–1350. https://doi.org/10.1016/j.ijporl.2007.05.017 [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Escudero, P. , Mulak, K. E. , & Vlach, H. A. (2016). Cross-situational learning of minimal word pairs. Cognitive Science, 40(2), 455–465. https://doi.org/10.1111/cogs.12243 [DOI] [PubMed] [Google Scholar]
  20. Fallon, M. , Trehub, S. E. , & Schneider, B. A. (2002). Children's use of semantic cues in degraded listening environments. The Journal of the Acoustical Society of America, 111(5), 2242–2249. https://doi.org/10.1121/1.1466873 [DOI] [PubMed] [Google Scholar]
  21. Fogerty, D. , Bologna, W. J. , Ahlstrom, J. B. , & Dubno, J. R. (2017). Simultaneous and forward masking of vowels and stop consonants: Effects of age, hearing loss, and spectral shaping. The Journal of the Acoustical Society of America, 141(2), 1133–1143. https://doi.org/10.1121/1.4976082 [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Fogerty, D. , & Humes, L. E. (2012). The role of vowel and consonant fundamental frequency, envelope, and temporal fine structure cues to the intelligibility of words and sentences. The Journal of the Acoustical Society of America, 131(2), 1490–1501. https://doi.org/10.1121/1.3676696 [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Fogerty, D. , & Kewley-Port, D. (2009). Perceptual contributions of the consonant–vowel boundary to sentence intelligibility. The Journal of the Acoustical Society of America, 126(2), 847–857. https://doi.org/10.1121/1.3159302 [DOI] [PMC free article] [PubMed] [Google Scholar]
  24. Fogerty, D. , Kewley-Port, D. , & Humes, L. E. (2012). The relative importance of consonant and vowel segments to the recognition of words and sentences: Effects of age and hearing loss. The Journal of the Acoustical Society of America, 132(3), 1667–1678. https://doi.org/10.1121/1.4739463 [DOI] [PMC free article] [PubMed] [Google Scholar]
  25. Fry, D. B. (1979). The physics of speech. Cambridge University Press. https://doi.org/10.1017/CBO9781139165747 [Google Scholar]
  26. Gatehouse, S. (1992). The time course and magnitude of perceptual acclimatization to frequency responses: Evidence from monaural fitting of hearing aids. The Journal of the Acoustical Society of America, 92(3), 1258–1268. https://doi.org/10.1121/1.403921 [DOI] [PubMed] [Google Scholar]
  27. Havy, M. , Serres, J. , & Nazzi, T. (2014). A consonant/vowel asymmetry in word-form processing: Evidence in childhood and in adulthood. Language and Speech, 57(2), 254–281. https://doi.org/10.1177/0023830913507693 [DOI] [PubMed] [Google Scholar]
  28. Johnson, C. E. (2000). Children's phoneme identification in reverberation and noise. Journal of Speech, Language, and Hearing Research, 43(1), 144–157. https://doi.org/10.1044/jslhr.4301.144 [DOI] [PubMed] [Google Scholar]
  29. Kasturi, K. , Loizou, P. C. , Dorman, M. , & Spahr, T. (2002). The intelligibility of speech with “holes” in the spectrum. The Journal of the Acoustical Society of America, 112(3, Pt. 1), 1102–1111. https://doi.org/10.1121/1.1498855 [DOI] [PubMed] [Google Scholar]
  30. Kewley-Port, D. , Burkle, T. Z. , & Lee, J. H. (2007). Contribution of consonant versus vowel information to sentence intelligibility for young normal-hearing and elderly hearing-impaired listeners. The Journal of the Acoustical Society of America, 122(4), 2365–2375. https://doi.org/10.1121/1.2773986 [DOI] [PubMed] [Google Scholar]
  31. Kirk, K. I. , Pisoni, D. B. , & Osberger, M. J. (1995). Lexical effects on spoken word recognition by pediatric cochlear implant users. Ear and Hearing, 16(5), 470–481. https://doi.org/10.1097/00003446-199510000-00004 [DOI] [PMC free article] [PubMed] [Google Scholar]
  32. Kolinsky, R. , Lidji, P. , Peretz, I. , Besson, M. , & Morias, J. (2009). Processing interactions between phonology and melody: Vowels sing but consonants speak. Cognition, 112(1), 1–20. https://doi.org/10.1016/j.cognition.2009.02.014 [DOI] [PubMed] [Google Scholar]
  33. Ladefoged, P. (2001). Vowels and consonants: An introduction to the sounds of languages. Blackwell.
  34. Leibold, L. J. , & Buss, E. (2013). Children's identification of consonants in a speech-shaped noise or a two-talker masker. Journal of Speech, Language, and Hearing Research, 56(4), 1144–1155. https://doi.org/10.1044/1092-4388(2012/12-0011) [DOI] [PMC free article] [PubMed] [Google Scholar]
  35. McCreery, R. W. , Ito, R. , Spratford, M. , Lewis, D. , Hoover, B. , & Stelmachowicz, P. G. (2010). Performance-intensity functions for normal-hearing adults and children using Computer-Aided Speech Perception Assessment. Ear and Hearing, 31(1), 95–101. https://doi.org/10.1097/AUD.0b013e3181bc7702 [DOI] [PMC free article] [PubMed] [Google Scholar]
  36. McCreery, R. W. , Miller, M. K. , Buss, E. , & Leibold, L. J. (2020). Cognitive and linguistic contributions to masked speech recognition in children. Journal of Speech, Language, and Hearing Research, 63(10), 3525–3538. https://doi.org/10.1044/2020_JSLHR-20-00030 [DOI] [PMC free article] [PubMed] [Google Scholar]
  37. McCreery, R. W. , & Walker, E. A. (2022). Variation in auditory experience affects language and executive function skills in children who are hard of hearing. Ear and Hearing, 43(2), 347–360. https://doi.org/10.1097/AUD.0000000000001098 [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. McCreery, R. W. , Walker, E. A. , Spratford, M. , Lewis, D. , & Brennan, M. (2019). Auditory, cognitive, and linguistic factors predict speech recognition in adverse listening conditions for children with hearing loss. Frontiers in Neuroscience, 13, 1093. https://doi.org/10.3389/fnins.2019.01093 [DOI] [PMC free article] [PubMed] [Google Scholar]
  39. McCreery, R. W. , Walker, E. A. , Spratford, M. , Oleson, J. , Bentler, R. , Holte, L. , & Roush, P. (2015). Speech recognition and parent ratings from auditory development questionnaires in children who are hard of hearing. Ear and Hearing, 36(Suppl. 1), 60S–75S. https://doi.org/10.1097/AUD.0000000000000213 [DOI] [PMC free article] [PubMed] [Google Scholar]
  40. Mehler, J. , Peña, M. , Nespor, M. , & Bonatti, L. (2006). The “soul” of language does not use statistics: Reflections on vowels and consonants. Cortex, 42(6), 846–854. https://doi.org/10.1016/s0010-9452(08)70427-1 [DOI] [PubMed] [Google Scholar]
  41. Moeller, M. P. , Tomblin, J. B. , Yoshinaga-Itano, C. , Connor, C. M. , & Jerger, S. (2007). Current state of knowledge: Language and literacy of children with hearing impairment. Ear and Hearing, 28(6), 740–753. https://doi.org/10.1097/AUD.0b013e318157f07f [DOI] [PubMed] [Google Scholar]
  42. Nazzi, T. , & Cutler, A. (2019). How consonants and vowels shape spoken-language recognition. Annual Review of Linguistics, 5(1), 25–47. https://doi.org/10.1146/annurev-linguistics-011718-011919 [Google Scholar]
  43. Nespor, M. , Peña, M. , & Mehler, J. (2002). On the different roles of vowels and consonants in speech processing and language acquisition. Lingue e Linguaggio, 2(2), 203–229. https://doi.org/10.1418/10879 [Google Scholar]
  44. Neuman, A. C. , & Hochberg, I. (1983). Children's perception of speech in reverberation. The Journal of the Acoustical Society of America, 73(6), 2145–2149. https://doi.org/10.1121/1.389538 [DOI] [PubMed] [Google Scholar]
  45. Nishi, K. , Lewis, D. E. , Hoover, B. M. , Choi, S. , & Stelmachowicz, P. G. (2010). Children's recognition of American English consonants in noise. The Journal of the Acoustical Society of America, 127(5), 3177–3188. https://doi.org/10.1121/1.3377080 [DOI] [PMC free article] [PubMed] [Google Scholar]
  46. Owren, M. J. , & Cardillo, G. C. (2006). The relative roles of vowels and consonants in discriminating talker identity versus word meaning. The Journal of the Acoustical Society of America, 119(3), 1727–1739. https://doi.org/10.1121/1.2161431 [DOI] [PubMed] [Google Scholar]
  47. Phatak, S. A. , Yoon, Y. S. , Gooler, D. M. , & Allen, J. B. (2009). Consonant recognition loss in hearing impaired listeners. The Journal of the Acoustical Society of America, 126(5), 2683–2694. https://doi.org/10.1121/1.3238257 [DOI] [PMC free article] [PubMed] [Google Scholar]
  48. Pisoni, D. B. (1973). Auditory and phonetic memory codes in the discrimination of consonants and vowels. Perception & Psychophysics, 13(2), 253–260. https://doi.org/10.3758/BF03214136 [DOI] [PMC free article] [PubMed] [Google Scholar]
  49. Pollack, I. , Rubenstein, H. , & Decker, L. (1959). Intelligibility of known and unknown message sets. The Journal of the Acoustical Society of America, 31(3), 273–279. https://doi.org/10.1121/1.1907712 [Google Scholar]
  50. Port, R. F. (2003). Meter and speech. Journal of Phonetics, 31(3–4), 599–611. https://doi.org/10.1016/j.wocn.2003.08.001 [Google Scholar]
  51. Ross, M. , & Lerman, J. (1970). A picture identification test for hearing-impaired children. Journal of Speech and Hearing Research, 13(1), 44–53. https://doi.org/10.1044/jshr.1301.44 [DOI] [PubMed] [Google Scholar]
  52. Sininger, Y. S. , Grimes, A. , & Christensen, E. (2010). Auditory development in early amplified children: Factors influencing auditory-based communication outcomes in children with hearing loss. Ear and Hearing, 31(2), 166–185. https://doi.org/10.1097/AUD.0b013e3181c8e7b6 [DOI] [PMC free article] [PubMed] [Google Scholar]
  53. St John, M. , Columbus, G. , Brignell, A. , Carew, P. , Skeat, J. , Reilly, S. , & Morgan, A. T. (2020). Predicting speech-sound disorder outcomes in school-age children with hearing loss: The VicCHILD experience. International Journal of Language & Communication Disorders, 55(4), 537–546. https://doi.org/10.1111/1460-6984.12536 [DOI] [PubMed] [Google Scholar]
  54. Stelmachowicz, P. G. , Hoover, B. M. , Lewis, D. E. , Kortekaas, R. W. L. , & Pittman, A. L. (2000). The relation between stimulus context, speech audibility, and perception for normal-hearing and hearing-impaired children. Journal of Speech, Language, and Hearing Research, 43(4), 902–914. https://doi.org/10.1044/jslhr.4304.902 [DOI] [PubMed] [Google Scholar]
  55. Stelmachowicz, P. G. , Pittman, A. L. , Hoover, B. M. , & Lewis, D. E. (2001). Effect of stimulus bandwidth on the perception of /s/ in normal- and hearing-impaired children and adults. The Journal of the Acoustical Society of America, 110(4), 2183–2190. https://doi.org/10.1121/1.1400757 [DOI] [PubMed] [Google Scholar]
  56. Stelmachowicz, P. G. , Pittman, A. L. , Hoover, B. M. , Lewis, D. E. , & Moeller, M. P. (2004). The importance of high-frequency audibility in the speech and language development of children with hearing loss. Archives of Otolaryngology—Head & Neck Surgery, 130(5), 556–562. https://doi.org/10.1001/archotol.130.5.556 [DOI] [PubMed] [Google Scholar]
  57. Storkel, H. L. (2013). A corpus of consonant–vowel–consonant real words and nonwords: Comparison of phonotactic probability, neighborhood density, and consonant age of acquisition. Behavior Research Methods, 45(4), 1159–1167. https://doi.org/10.3758/s13428-012-0309-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  58. Storkel, H. L. , & Hoover, J. R. (2010). An online calculator to compute phonotactic probability and neighborhood density on the basis of child corpora of spoken American English. Behavior Research Methods, 42(2), 497–506. https://doi.org/10.3758/BRM.42.2.497 [DOI] [PMC free article] [PubMed] [Google Scholar]
  59. Talarico, M. , Abdilla, G. , Aliferis, M. , Balazic, I. , Giaprakis, I. , Stefanakis, T. , Foenander, K. , Grayden, D. B. , & Paolini, A. G. (2007). Effect of age and cognition on childhood speech in noise perception abilities. Audiology and Neurotology, 12(1), 13–19. https://doi.org/10.1159/000096153 [DOI] [PubMed] [Google Scholar]
  60. ter Keurs, M. , Festen, J. M. , & Plomp, R. (1992). Effect of spectral envelope smearing on speech reception. I. The Journal of the Acoustical Society of America, 91(5), 2872–2880. https://doi.org/10.1121/1.402950 [DOI] [PubMed] [Google Scholar]
  61. Tomblin, J. B. , Harrison, M. , Ambrose, S. E. , Walker, E. A. , Oleson, J. J. , & Moeller, M. P. (2015). Language outcomes in young children with mild to severe hearing loss. Ear and Hearing, 36(Suppl. 1), 76S–91S. https://doi.org/10.1097/AUD.0000000000000219 [DOI] [PMC free article] [PubMed] [Google Scholar]
  62. Toro, J. M. , Nespor, M. , Mehler, J. , & Bonatti, L. L. (2005). Finding words and rules in a speech stream. Psychological Science, 19(2), 137–144. https://doi.org/10.1111/j.1467-9280.2008.02059.x [DOI] [PubMed] [Google Scholar]
  63. Toro, J. M. , Shukla, M. , Nespor, M. , & Endress, A. D. (2008). The quest for generalizations over consonants: Asymmetries between consonants and vowels are not the by-product of acoustic differences. Perception & Psychophysics, 70(8), 1515–1525. https://doi.org/10.3758/PP.70.8.1515 [DOI] [PubMed] [Google Scholar]
  64. Trezek, B. J. , & Malmgren, K. W. (2005). The efficacy of utilizing a phonics treatment package with middle school deaf and hard-of-hearing students. Journal of Deaf Studies and Deaf Education, 10(3), 256–271. https://doi.org/10.1093/deafed/eni028 [DOI] [PubMed] [Google Scholar]
  65. Turner, C. W. (1993). Distribution of sound levels for consonants and vowels within individual frequency bands. The Journal of the Acoustical Society of America, 93(4), 2394. https://doi.org/10.1016/j.heares.2009.06.005 [Google Scholar]
  66. Tyler, R. S. , Fryauf-Bertschy, H. , & Kelsay, D. (1991). Audiovisual feature test for young children. University of Iowa Hospitals, Department of Otolaryngology—Head and Neck Surgery. [Google Scholar]
  67. van Ooijen, B. (1996). Vowel mutability and lexical selection in English: Evidence from a word reconstruction task. Memory & Cognition, 24(5), 573–583. https://doi.org/10.3758/bf03201084 [DOI] [PubMed] [Google Scholar]
  68. Vickers, D. A. , Moore, B. C. J. , Majeed, A. , Stephenson, N. , Alferaih, H. , Baer, T. , & Marriage, J. (2018). Closed-set speech discrimination tests for assessing young children. Ear and Hearing, 39(1), 32–41. https://doi.org/10.1097/AUD.0000000000000528 [DOI] [PubMed] [Google Scholar]
  69. Werfel, K. L. , & Schuele, C. M. (2014). Improving initial sound segmentation skills of preschool children with severe to profound hearing loss: An exploratory investigation. The Volta Review, 114(2), 113–134. https://doi.org/10.17955/tvr.114.2.737 [Google Scholar]
