Journal of Speech, Language, and Hearing Research, 2018 Jun 19;61(6):1409–1425. doi: 10.1044/2018_JSLHR-L-17-0150

Cognitive Predictors of Spoken Word Recognition in Children With and Without Developmental Language Disorders

Julia L. Evans, Ronald B. Gillam, James W. Montgomery
PMCID: PMC6195089  PMID: 29800024

Abstract

Purpose

This study examined the influence of cognitive factors on spoken word recognition in children with developmental language disorder (DLD) and typically developing (TD) children.

Method

Participants included 234 children (aged 7;0–11;11 years;months), 117 with DLD and 117 TD children, propensity matched for age, gender, socioeconomic status, and maternal education. Children completed a series of standardized assessment measures, a forward gating task, a rapid automatic naming task, and a series of tasks designed to examine cognitive factors hypothesized to influence spoken word recognition including phonological working memory, updating, attention shifting, and interference inhibition.

Results

Spoken word recognition for both initial and final accept gate points did not differ for children with DLD and TD controls after controlling for target word knowledge in both groups. The 2 groups also did not differ on measures of updating, attention switching, and interference inhibition. Despite the lack of difference on these measures, for children with DLD, attention shifting and interference inhibition were significant predictors of spoken word recognition, whereas updating and receptive vocabulary were significant predictors of speed of spoken word recognition for the children in the TD group.

Conclusion

Contrary to expectations, after controlling for target word knowledge, spoken word recognition did not differ for children with DLD and TD controls; however, the cognitive processing factors that influenced children's ability to recognize the target word in a stream of speech differed qualitatively for children with and without DLDs.


The term “specific language impairment” (SLI) is one of several terms often used to refer to a developmental language disorder (DLD) of unknown etiology, characterized by an inability to master spoken and written language comprehension and production despite normal nonverbal intelligence, normal hearing acuity, and the absence of mitigating factors known to cause language disorders in children such as neurological impairment secondary to focal lesions or traumatic brain injury, intellectual disability, autism, hearing impairment, and/or other recognized syndromes (Bishop, 2014; Leonard, 2014; Tager-Flusberg & Cooper, 1999). Although numbers vary slightly across countries, in the United States, SLI is estimated to occur in approximately 7% of English-speaking school-aged children (Tomblin et al., 1997). These language deficits persist, fully or partially, into adulthood, placing individuals with SLI at risk for poor academic performance, difficulty developing and maintaining friendships and significant relationships, difficulty in the work environment, reduced earning potential and standard of living, and secondary stress-related problems (Catts, Bridges, Little, & Tomblin, 2008; Conti-Ramsden, Mok, Pickles, & Durkin, 2013; Durkin & Conti-Ramsden, 2007; Tomblin, Freese, & Records, 1992). Whereas some researchers have used the term SLI to denote a narrow theoretical characterization of a language disorder where the deficit is viewed as specific to the “language” system (Adani, Van der Lely, Forgiarini, & Guasti, 2010), other SLI researchers use a broader interpretation of the term to denote the presence of both language-based deficits and weaknesses in areas that go beyond language (Leonard, 2014; Tager-Flusberg & Cooper, 1999). Recently, to avoid potential confusion with a more narrow definition of SLI, some researchers are shifting to labels such as DLD to refer to those children who fall into the broader definition of SLI (Bishop, 2014; Lee & Tomblin, 2012). In keeping with the issues raised in recent discussions by Bishop, Snowling, Thompson, Greenhalgh, and CATALISE Consortium (2016), we use the term DLD in this article to refer to this same broader defined group of children with language-based deficits. 1

In addition to deficits in the acquisition and use of morphological and syntactic knowledge, children with SLI have well-documented lexical deficits characterized by difficulty learning new words (Alt & Plante, 2006; Dollaghan, 1987; Gray, Brinkley, & Svetina, 2012; Kan & Windsor, 2010), having smaller vocabulary than would be expected based on their chronological age (Bishop, 1997; Watkins, Kelly, Harbers, & Hollis, 1995), and being consistently slower and less accurate in accessing words from their lexicon as compared with their typically developing (TD) peers (Coady & Mainela-Arnold, 2013; Edwards & Lahey, 1996; Kail, Hale, Leonard, & Nippold, 1984; Kail & Leonard, 1986; Leonard, Nippold, Kail, & Hale, 1983; Sheng & McGregor, 2010). Spoken word recognition is also problematic for children with SLI (Dollaghan, 1985, 1987; Evans, Gillam, & Montgomery, 2015; Mainela-Arnold, Evans, & Coady, 2008; Stark & Montgomery, 1995).

Studies show that, although children with SLI are as proficient as their peers in their ability to perceive the initial sounds of target words, they appear to require more of the acoustic signal before they finally recognize a word in the stream of speech as compared with their TD peers (Dollaghan, 1998; Mainela-Arnold & Evans, 2014; Mainela-Arnold et al., 2008; McMurray, Samelson, Lee, & Tomblin, 2010; Montgomery, 1999, 2002; Stark & Montgomery, 1995). Using a forward gating paradigm, Mainela-Arnold et al. (2008) observed that children with SLI required significantly more of the speech stream to reach the final point of acceptance—the duration at which they identified the target word—as compared with the TD controls. In forward gating tasks, children hear stimulus words presented as successive gates—that is, fragments of the auditory stimulus that start from the beginning of the word and become progressively longer in duration (Grosjean, 1980). The child is told to guess the word based on the incomplete acoustic segment. Examining children's responses in detail, Mainela-Arnold and colleagues observed that, in contrast to the control group, the children with SLI vacillated between correct and incorrect word targets, with many of their responses having no phonological relationship to the target word. In a prior study with the same participants, Mainela-Arnold and colleagues observed that phonological category boundaries for the children with SLI were significantly degraded and underspecified as compared with the TD controls (Coady, Evans, Mainela-Arnold, & Kluender, 2007; Coady, Kluender, & Evans, 2005). Taking the findings of the two studies together, Mainela-Arnold and colleagues proposed that this vacillation between potential word candidates during spoken word recognition on the part of the children with SLI was due to their inability to inhibit lexical cohort competitors arising from poorly specified underlying lexical–phonological representations.

More recently, McMurray et al. (2010) examined spoken word recognition in adolescents with and without SLI using the visual world paradigm (Allopenna, Magnuson, & Tanenhaus, 1998; Tanenhaus, Spivey-Knowlton, Eberhard, & Sedivy, 1995). In the visual world paradigm, listeners hear a spoken word and select its referent from a visual display containing pictures of the target word and phonologically related words while eye fixations to each picture are measured. Similar to Mainela-Arnold et al. (2008), the adolescents with SLI in the McMurray et al. study also required more of the speech stream before they were able to recognize the target word as compared with TD controls. In contrast to Mainela-Arnold et al., however, McMurray and colleagues argued that the adolescents with SLI in their study experienced a greater rate of decay of the target word because they were unable to maintain activation of the phonological representation of the target word in memory while simultaneously processing the incoming stream of speech (e.g., McMurray et al., 2010).

Although these two accounts differ as to the hypothesized cause of spoken word recognition difficulties in SLI, both accounts propose that factors such as the inability to inhibit interference from competing cohorts, in the case of Mainela-Arnold and colleagues, or the inability to maintain phonological representations in active memory, in the case of McMurray and colleagues, underlie spoken word deficits in SLI. Both accounts are grounded in the theoretical framework of computational models of spoken word recognition such as the TRACE, cohort, and neighborhood activation models (Luce & Pisoni, 1998; Marslen-Wilson & Tyler, 1980; Marslen-Wilson & Zwitserlood, 1989; McClelland & Elman, 1986). There is a debate in some of these models regarding the degree to which speech processing, in particular for the earliest stages of phoneme recognition, is a passive and highly automated process or is instead a process that actively engages cognitive resources. For instance, in both the original cohort theory (Marslen-Wilson & Welsh, 1978) and the distributed cohort model (Gaskell & Marslen-Wilson, 1997), lexical candidates are assumed to be automatically activated from the perception of the sounds in a word and then, as each successive sound is heard, members of the lexical candidate set are automatically deactivated.

The debate as to whether speech processing is a passive, automatic process or an active process in which executive functions and cognitive resources play a key role hinges on whether one assumes that acoustic input is mapped directly onto lexical–phonological representations, with no hypothesis testing or information-contingent operations, or whether one assumes that the perception of speech is a cognitively flexible, active “information-contingent” process that involves hypothesis testing and rapid updating of hypotheses based on new incoming information from the stream of speech. For instance, Hickok and Poeppel (Hickok, 2012; Hickok & Poeppel, 2007, 2015) argue that, because the listener is required to maintain sublexical representations in an active state in memory, even in the earliest stages of speech processing, this requires both executive control and working memory. Similarly, Nusbaum and colleagues argue that, because the acoustic signal is transitory and distributed in time, fades quickly from perception, and is highly variable both within and across speakers, speech perception is an active cognitive process, mediated by central executive functions such as attentional control and working memory (Heald & Nusbaum, 2014; Heald, Van Hedger, & Nusbaum, 2017). Nusbaum and colleagues argue further that evidence that speech processing actively engages cognitive resources can be seen in particular in the case of suboptimal listening conditions and/or hearing loss, where the additional cognitive processing that is required at the sensory level is costly and affects the availability of cognitive resources needed at later stages of language comprehension.

Studies of the neurobiology of speech processing suggest that different experimental tasks used to examine speech processing may differentially engage cortical and subcortical regions engaged in executive control and working memory. In particular, the degree to which the listener is required to actively engage in speech processing has been shown to affect the degree to which executive function and cognitive control are involved. For example, Christensen, Antonucci, Lockwood, Kittleson, and Plante (2008) observed that tasks that require the listener to actively attend to speech engage different cortical and subcortical regions as compared with passively listening to speech, with active processing engaging not only primary and secondary auditory regions but additional frontal and parietal brain regions supporting executive control and working memory.

Purpose of This Study

If cognitive factors such as working memory, executive functions, and attentional control are engaged in spoken word recognition under suboptimal conditions or conditions that require the listener to actively engage in the process, this raises questions regarding the extent to which spoken word recognition is mediated by cognitive factors in children with SLI. Specifically, although by definition, children with SLI have normal pure-tone hearing, they come to the task of spoken word recognition with poorly specified phonological representations and markedly impaired executive functions, working memory, and attentional control, which suggests that children with SLI may never experience spoken word recognition under optimal conditions (e.g., Coady & Evans, 2008; Evans et al., 2015; Finneran, Francis, & Leonard, 2009; Graf-Estes, Evans, & Else-Quest, 2007; Spaulding, Plante, & Vance, 2008).

In this study, we used a forward gating task to ask if cognitive factors play a role in spoken word recognition in a propensity-matched group of children with SLI and TD controls matched for age, gender, maternal education, and socioeconomic status (SES). Forward gating tasks require children to engage in active listening and hypothesis testing as they are asked to guess what they think the target word is based on a segment of the speech stream. In particular, the gating tasks require the child to (a) maintain a representation of the speech segment in active memory long enough to activate a representation of an initial speech sound of a target word, (b) update the initial sound held in memory based on the incoming stream of speech, (c) be cognitively flexible enough to continuously actively shift perceptual attention between the acoustic cues in the input that differentiate the activation of different sublexical representations, and (d) inhibit interference from the continuously changing set of potential lexical candidates that are activated in the mental lexicon. To examine these cognitive factors directly, we examined four cognitive measures: (a) phonological working memory (pWM) = the short-term phonological store responsible for remembering speech sounds in their temporal order, (b) updating = the ability to keep track of and continuously monitor and update incoming sublexical and syllable information in the stream of speech, (c) attention shifting = the ability to switch focus of attention, and (d) interference inhibition = the ability to inhibit interference from competing auditory stimuli.

Historically, the target words in SLI studies of spoken word recognition are presumed to be equally familiar to both the group with SLI and the TD control group. Yet, in many cases, children's ability to name the target words is not assessed (Mainela-Arnold et al., 2008), or it is assessed, but predictably, the children in the group with SLI are less accurate at naming the target words as compared with those in the control group (McMurray et al., 2010). This raises a second question: To what extent are differences in the degree to which the group with SLI and the TD group know the target words in spoken word recognition tasks creating the appearance of spoken word recognition deficits in children with SLI? To control for the possibility that poor spoken word recognition performance in children with SLI might be a reflection of a child's lack of familiarity with the target words, we also controlled for knowledge of the target words for the children in both groups.

Method

Participants

The participants in this study consisted of 234 children (aged 7;0–11;11 years;months), 117 children with DLD (72 boys and 45 girls) and 117 children with typical language abilities (83 boys and 34 girls). The children were all part of a larger group of 383 children who participated in an ongoing, multisite research project investigating the role of cognitive processing factors on sentence comprehension in school-aged children with and without DLD. The children were recruited from metropolitan schools in San Diego County, California; Dallas County, Texas; Cache County, Utah; and Athens County, Ohio.

All children met the following inclusion criteria: (a) nonverbal IQ (NVIQ) at or above 75 as measured by the Leiter International Performance Scale–Revised (Roid & Miller, 1997), (b) normal-range hearing sensitivity at the time of testing (American National Standards Institute, 1997), (c) normal or corrected vision, (d) normal oral and speech production as measured by the articulation subtest on the Test of Language Development–Intermediate: Fourth Edition (Hammill & Newcomer, 2008), and (e) a monolingual, English-speaking home environment. Children were excluded from participation if parents reported that their child had (a) a neurodevelopmental disorder, (b) emotional or behavioral disturbances, (c) motor deficits or frank neurological signs, or (d) seizure disorders or use of medication to control seizures. English was the primary language spoken by all the children. The degree of exposure to a second language was assessed following Bedore et al. (2012). All children were monolingual English speakers, as defined by having had less than 5% daily usage of a language other than English. Children who spoke another language for an average of 30 min or more at home or in school each day were excluded from the study.

Four language measures were used to determine DLD/TD classification. These were the receptive and expressive portions of the Comprehensive Receptive and Expressive Vocabulary Test–Second Edition (CREVT-2; Wallace & Hammill, 2000) and the Concepts and Following Directions subtest and Recalling Sentences subtest of the Clinical Evaluation of Language Fundamentals–Fourth Edition (CELF-4; Semel, Wiig, & Secord, 2003). The CREVT is a measure of children's receptive and expressive lexical knowledge, and the two CELF subtests are indices of sentence-level receptive and expressive knowledge and abilities. Because two of the subtests were standardized with deviation quotients (M = 100, SD = 15) and two were standardized with scaled scores (M = 10, SD = 3), we converted each child's norm-referenced scores for the four subtests to a z-score scale (M = 0, SD = 1) representing the number of standard deviations from the mean on each subtest. From these four z scores, a final mean composite z score was then calculated for each child based on the lowest three of the four z scores.

DLD and TD Classification

Children were classified as DLD if their mean composite language z score on their three lowest of the four subtests was at or below −1 SD to be consistent with the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, definition of language disorder and multidimensional systems for defining DLD (e.g., Leonard, 2014; Tager-Flusberg & Cooper, 1999). Children were defined as having typical language if the mean composite language z score was greater than −1 SD. The average composite z score for the group with DLD was −1.48 with an SD of 0.39 (range = −2.73 to −1.00). The overwhelming majority of the children in the group with DLD (84.6%) had mixed receptive–expressive disorders. A few children (14.5%) exhibited expressive-only disorders, and only 1% exhibited receptive-only disorders. With respect to the language domain, 74.4% of the children performed at or below the criterion value on subtests in both lexical and sentential domains; 18.8% had difficulties on the grammatical subtests only, and 6.8% had difficulties on the lexical subtests only.
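For readers who want to see the scoring rule concretely, the following is a minimal sketch (not the authors' scoring script) of the composite z score and the −1 SD cutoff described above; the function and variable names are hypothetical.

```python
import numpy as np

def composite_z(crevt_rec, crevt_exp, celf_cfd, celf_rs):
    """Convert the four language scores to z scores and average the lowest three.

    CREVT-2 scores are deviation quotients (M = 100, SD = 15); the two CELF-4
    subtests are scaled scores (M = 10, SD = 3).
    """
    z_scores = [
        (crevt_rec - 100) / 15,
        (crevt_exp - 100) / 15,
        (celf_cfd - 10) / 3,
        (celf_rs - 10) / 3,
    ]
    lowest_three = sorted(z_scores)[:3]
    return float(np.mean(lowest_three))

def classify(composite):
    """DLD if the mean of the three lowest z scores is at or below -1 SD."""
    return "DLD" if composite <= -1.0 else "TD"

# Example: a child scoring 87 and 81 on the CREVT-2 and 6 and 5 on the CELF-4 subtests
z = composite_z(87, 81, 6, 5)
print(round(z, 2), classify(z))  # -> -1.42 DLD
```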

For the TD group, the average composite z score was 0.08 with an SD of 0.60 (range = −0.96 to 1.89), which was significantly higher than that for the group with DLD, F(1, 233) = 556.74, p < .0001, ηp2 = .71. The TD group attained a significantly higher score on each language measure (all with large or very large effect sizes): CREVT-2–Receptive, F(1, 233) = 61.85, p < .0001, ηp2 = .21; CREVT-2–Expressive, F(1, 233) = 37.31, p < .0001, ηp2 = .14; CELF Concepts and Following Directions, F(1, 233) = 50.29, p < .0001, ηp2 = .18; and CELF Recalling Sentences, F(1, 233) = 63.30, p < .0001, ηp2 = .21. Language data for the two groups are presented in Table 1. The large effect sizes for the group differences on the language measures provide strong support for our inclusion/exclusion criteria for the two groups. NVIQ was in the normal range for both groups; however, NVIQ for the children in the group with DLD was significantly lower than that of the children in the TD group, F(1, 233) = 46.22, p < .0001, η2 = .17; therefore, NVIQ was used as a covariate in the statistical analysis.

Table 1.

Summary of standard scores on norm-referenced test measures administered to the group with DLD and the TD group.

Measures DLD (n = 117), M (SD) TD (n = 117), M (SD) Cohen's d
Age (months) 113.5 (15.1) 114.2 (17.2) −0.06
Leiter a 98.0 b (13.8) 110.4 (14.1) −0.77
CREVT-2–Receptive 87.4 b (9.0) 105.6 (11.8) −1.22
CREVT-2–Expressive 81.0 b (10.1) 101.3 (12.3) −1.32
CELF-4 Concepts and Following Directions 6.0 c (3.0) 11.1 (2.0) −1.33
CELF-4 Recalling Sentences 5.5 c (2.0) 10.9 (2.5) −1.51
TNL–Receptive 8.2 c (2.4) 11.0 (2.5) −1.50
TNL–Expressive 6.5 c (2.3) 9.5 (2.57) −1.17

Note. CELF-4 = Clinical Evaluation of Language Fundamentals–Fourth Edition; CREVT-2 = Comprehensive Receptive and Expressive Vocabulary Test–Second Edition; DLD = developmental language disorder; TNL = Test of Narrative Language (Gillam & Pearson, 2004); TD = typically developing.

a Visualization and Reasoning Battery of the Leiter International Performance Scale–Revised.
b Standardized with a mean of 100 and a standard deviation of 15.
c Standardized with a mean of 10 and a standard deviation of 3.

Propensity Matching and Group Assignment

To avoid selection bias and distortion of the results due to differences in participant enrollment, propensity score matching was used to create the group with DLD and the TD group from a larger pool of 383 children (127 with DLD, 256 TD) who completed the project. Propensity matching is a quasi-experimental approach that approximates the conditions of a randomized experiment by creating TD (control) and impaired (experimental) groups that are balanced on a wide variety of confounding variables. Propensity scores represent the probability of assignment to either the group with DLD or the TD group (the counterfactual condition) based on a vector of observed covariates. 2 Our target sample size was a minimum of 100 participants per group. To achieve this sample size, we oversampled TD children by a 2:1 ratio relative to the children with DLD. Using multivariate logistic regression, a single propensity score was calculated for each of the 383 children in the total participant pool using the moderating variables of age (continuous variable), gender (dichotomous variable: male or female), mother's educational level (dichotomous variable: no college degree [high school, some college but no degree] vs. college degree [associate, bachelor's, master's, or doctorate]), and family income (dichotomous variable: annual income of less than $30K vs. annual income of greater than $30K). Mother's education and family income were used as proxy measures of SES (Shavers, 2007). The nearest neighbor matching method was then used to match individual children with DLD to a TD counterpart. This resulted in 117 DLD–TD multidimensionally matched pairs identified from the 383 sample. 3
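The matching procedure can be sketched as follows. This is an illustrative approximation only: the column names are hypothetical, and the published analysis may have been carried out in dedicated matching software rather than with the greedy nearest-neighbor loop shown here.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# `children` is assumed to hold one row per child with hypothetical columns:
# 'dld' (1 = DLD, 0 = TD), 'age_months', 'male', 'mother_college', 'income_over_30k'.
def propensity_match(children: pd.DataFrame):
    covariates = ["age_months", "male", "mother_college", "income_over_30k"]
    model = LogisticRegression().fit(children[covariates], children["dld"])
    children = children.assign(pscore=model.predict_proba(children[covariates])[:, 1])

    dld = children[children["dld"] == 1]
    td_pool = children[children["dld"] == 0].copy()
    pairs = []
    for idx, child in dld.iterrows():
        # Greedy nearest-neighbor match on the propensity score, without replacement.
        match_idx = (td_pool["pscore"] - child["pscore"]).abs().idxmin()
        pairs.append((idx, match_idx))
        td_pool = td_pool.drop(index=match_idx)
    return pairs  # list of (DLD row index, matched TD row index) tuples
```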

The 117 DLD–TD matched pairs did not differ significantly with respect to age, gender, mother's education, or family income; demographic data for the two groups are presented in Table 2. Nonparametric follow-up analyses confirmed that the groups were not significantly different on these variables: chi-square tests for the categorical variables indicated that the propensity-matched group with DLD and the TD group did not differ significantly on gender, χ2(1, 234) = 1.92, p = .21; family income, χ2(3, 234) = 4.12, p = .25; or maternal education, χ2(6, 234) = 10.91, p = .09.
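As a sketch of these balance checks (hypothetical column names; not the authors' code), each categorical demographic variable can be crossed with group membership and tested with a chi-square test of independence:

```python
import pandas as pd
from scipy.stats import chi2_contingency

# `demo` is assumed to hold one row per child with a 'group' column (DLD/TD) and
# categorical columns such as 'gender', 'income_bracket', and 'mother_education'.
def group_balance(demo: pd.DataFrame, column: str):
    table = pd.crosstab(demo["group"], demo[column])  # group x category counts
    chi2, p, dof, _expected = chi2_contingency(table)
    return chi2, dof, p

# e.g., group_balance(demo, "gender") -> (chi-square statistic, df, p value)
```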

Table 2.

Summary of participant demographics for the group with DLD and the TD group.

Measures DLD (n = 117) TD (n = 117)
Gender
 Male 57% 63%
 Female 43% 36%
Race and ethnicity
 White 61% 72%
 African American 10% 0%
 Hispanic 12% 12%
 Asian 4% 4%
 American Indian, Native Hawaiian 3% 3%
 More than one race 10% 9%
Mother's education
 No response 1% 1%
 High school degree 20% 16%
 Some college 30% 27%
 Associate degree 17% 11%
 Bachelor's degree 24% 23%
 Graduate degree 6% 20%
Family income
 0–25,000 42% 32%
 26,000–50,000 21% 22%
 51,000–75,000 16% 15%
> 75,000 21% 31%

Note. Race and ethnicity were reported as mixed for many of the participants, thus yielding the observed percentages. DLD = developmental language disorder; TD = typically developing.

General Procedure

Children were seen individually. The experimental tasks and standardized testing were completed over a series of three visits, each lasting approximately 2.5 hr. The order of the standardized assessment and experimental tasks was fixed across the participants. To avoid fatigue effects, for each of the experimental tasks, the total trials were divided into three sets of trials. Children completed one of the three sets of trials (one third of the total trials) at each of the three visits. The order of trial presentation within each of the three sets of trials was randomized. With the exception of the rapid automatic naming (RAN) task, the order of the presentation of the three trial sets across the three visits was counterbalanced for both the group with DLD and the TD group. All participants completed the RAN task at the end of the third visit after having completed all three gating lists to avoid priming effects of the naming task on the gating task. For all of the experimental tasks, children were comfortably seated at a table in front of a computer monitor. Experimental stimulus presentation and trial order were controlled by E-Prime and PsyScope software (Cohen, MacWhinney, Flatt, & Provost, 1993; Schneider, Eschman, & Zuccolotto, 2002). All children passed a hearing screening at the time of testing.

Spoken Word Recognition

The same forward gating task as Mainela-Arnold et al. (2008) was used to examine spoken word recognition. In the task, children heard stimulus words presented in successive gates—fragments of the auditory stimuli (e.g., Grosjean, 1980). Similar to Mainela-Arnold et al., the gating durations were 120, 180, 240, 300, 360, 420, 480, 540, 600, and 660 ms for 33 different words. The words were the 33 nouns from the larger experimental protocol (Montgomery, Evans, Gillam, Sergeev, & Finney, 2016). All the nouns were inanimate, monosyllabic words. To control for potential confounding factors that influence speed of spoken word recognition, the words all had spoken word frequency ratings of 6 years or younger (Moe, Hopkins, & Rush, 1982), age-of-acquisition (AoA) ratings of 5.5 years or younger, imageability ratings at or above 480, concreteness ratings at or above 480, and familiarity ratings at or above 480 (Coltheart, 1981; Garlock, Walley, & Metsala, 2001; Kuperman, Stadthagen-Gonzalez, & Brysbaert, 2012; Storkel & Hoover, 2010; Vitevitch & Luce, 2004; see Appendix).
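To make the gating manipulation concrete, the sketch below cuts a recorded word into the ten gate durations listed above. It is an illustration under stated assumptions (the soundfile package, hypothetical file names), not the stimulus-preparation procedure actually used in the study.

```python
import soundfile as sf

GATE_DURATIONS_MS = [120, 180, 240, 300, 360, 420, 480, 540, 600, 660]

def make_forward_gates(word_wav: str, out_prefix: str) -> None:
    """Write one WAV file per gate, each containing the word onset up to the gate."""
    audio, sample_rate = sf.read(word_wav)
    for ms in GATE_DURATIONS_MS:
        n_samples = int(sample_rate * ms / 1000)
        sf.write(f"{out_prefix}_{ms}ms.wav", audio[:n_samples], sample_rate)

# e.g., make_forward_gates("noun01.wav", "noun01") writes noun01_120ms.wav ... noun01_660ms.wav
```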

The gating stimuli were generated from digital recordings of the words, spoken at a normal speaking rate and with normal prosodic variation by an adult male speaker of Midwestern American English. Words were digitally recorded at 44.1 kHz, 32-bit resolution; low-pass filtered (20 kHz); and then normalized for intensity. A signal tone was inserted at the beginning of each trial to alert the child. A period of 3 s of silence occurred after each trial to allow the child sufficient time to respond. The same duration-blocked format as in Mainela-Arnold et al. (2008) was used. Three lists were created, each consisting of 11 words randomly chosen from the 33 nouns, with all 11 words presented at each of the 10 gate durations. Children completed one list at each of the three visits. List order was counterbalanced across the participants and across the three visits.

Stimuli were presented under noise-reduction headphones at the 75-dB level. For each word, two recognition points were calculated: (a) point of initial acceptance (POI), or the gate at which the children first correctly identified the target word, whether or not they changed their response at subsequent gate durations, and (b) point of final acceptance (POF), or the gate after which the child did not change from a correct response. Intercoder reliability was based on a second listener retranscribing the sound files for 10% of the participants from each of the three testing sites, with equal numbers of children with DLD and TD children. Point-to-point reliability for all gate durations was high, with agreement between the first and second coders being 96.2%.
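The two recognition points defined above can be computed from a child's gate-by-gate responses to a single word roughly as follows (a sketch with hypothetical data structures, not the authors' scoring code):

```python
GATE_MS = (120, 180, 240, 300, 360, 420, 480, 540, 600, 660)

def recognition_points(responses, target):
    """Return (POI, POF) in ms for one word, given the responses at the 10 gates in order.

    POI: the first gate at which the target was produced, regardless of later changes.
    POF: the gate from which every subsequent response remained the target.
    Either point is None if it was never reached.
    """
    correct = [resp == target for resp in responses]
    poi = next((GATE_MS[i] for i, ok in enumerate(correct) if ok), None)
    pof = None
    for i in range(len(correct) - 1, -1, -1):  # walk backward through the final run of correct responses
        if not correct[i]:
            break
        pof = GATE_MS[i]
    return poi, pof

# e.g., recognition_points(["key", "kite", "cat", "kite", "kite", ...], "kite")
# would give POI = 180 ms and POF = 300 ms for that pattern of responses.
```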

RAN

The children completed an RAN task for the 33 target nouns in the gating task to ensure that the target words were in the children's lexicons. We used this task to control for possible differences in each child's familiarity with the target words. Because children with SLI often have receptive vocabulary skills that fall within the normal range but continue to have significant expressive vocabulary deficits, we used this more stringent measure to determine that the children were able to generate the target words, thereby ensuring that children had a lexical–phonological representation of the target words in their mental lexicon. To avoid priming effects on the gating tasks, all participants completed the RAN task at the end of the third visit after all three gating lists had been completed. The pictures for the RAN task were web-based clip-art color drawings. The images were standardized for name and image agreement, familiarity, and visual complexity (Rossion & Pourtois, 2004). Naming speed was calculated for all correct trials. Interscorer reliability was high across the three testing sites (96%). Cronbach's alpha for naming accuracy was .97. Identification of the acoustic onset of correctly named pictures was also reliable, with an average difference of 37 ms between Listeners 1 and 2 in identifying word onset.

Cognitive Variables

pWM

Children's pWM was measured using the Dollaghan and Campbell (1998) nonword repetition task. To control for the possible influence of regional differences in the dialect of the examiner and the participants from the Utah, Ohio, Texas, and California testing sites, a digital version of the original Dollaghan and Campbell stimuli was used. The speaker was a female adult with a Rocky Mountain Central Colorado regional dialect. Consistent with the original protocol, children also heard the digital recording of each nonword only once. Trained research assistants, who were blind to participants' group classification assignment and treatment status, scored children's responses.

Children's responses were scored offline. Each phoneme (consonant or vowel) was scored as correct or incorrect in relation to its target phoneme. Phoneme additions, substitutions, and omissions were scored as incorrect. The total percentage of phonemes correct was calculated for each child. Ten percent of the DLD and TD subject data pool was retranscribed by a second transcriber. Agreement between transcribers at both the vowel and consonant levels was high, with recoder agreement at 95%. Cronbach's alpha, a measure of the psychometric reliability and internal consistency of the nonword repetition task, was high at .83.
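A simplified version of this scoring (position-by-position comparison of transcribed phonemes; the actual scoring was done from phonetic transcriptions and may have handled alignment differently) might look like this:

```python
def percent_phonemes_correct(target_phonemes, response_phonemes):
    """Score each target phoneme as correct only if the same phoneme appears in the
    same position of the response; substitutions, omissions, and shifted additions
    therefore lower the score."""
    correct = sum(
        1 for i, phoneme in enumerate(target_phonemes)
        if i < len(response_phonemes) and response_phonemes[i] == phoneme
    )
    return 100 * correct / len(target_phonemes)

# e.g., percent_phonemes_correct(["n", "a", "i", "b"], ["n", "a", "i", "p"]) -> 75.0
```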

Updating

Updating tasks require adding and deleting information in working memory. A “keep-track task” modeled after the Gordon diagnostic assessment system was used (Gordon, McClure, & Aylward, 1997). In the task, children heard a series of digits from 1 to 10 presented in a random order, spoken by a male speaker, and were instructed to press a button each time they heard the number 1 followed by the number 9 in the sequence. The auditory stimuli were recorded at 44.1 kHz, 32-bit resolution by an adult male speaker of Midwestern American English. Sound files were low-pass filtered (20 kHz) and normalized for intensity. Performance was measured using d′. Ten percent of the data files were reanalyzed. Agreement was 100% between the initial and reanalysis data runs. Cronbach's alpha for children's accuracy was .88.
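Sensitivity on this kind of detection task can be expressed as d′ in the usual signal-detection way. The sketch below uses a log-linear correction to keep hit and false-alarm rates away from 0 and 1; the exact correction used in the study is not reported, so treat this as an assumption.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' for detecting the 1-then-9 target sequence in the digit stream."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)                        # log-linear correction
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)                        # z(H) - z(FA)

# e.g., d_prime(hits=18, misses=2, false_alarms=3, correct_rejections=77) is roughly 2.9
```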

Attention Switching and Interference Inhibition

Children's ability to inhibit interference from competing auditory signals and their ability to shift their focus of attention between incoming streams of speech were assessed using a task modeled after cognitive switching tasks (Rogers & Monsell, 1995; Ross, Hillyard, & Picton, 2010) and adapted for school-aged children. In the version of the task created for this study, children were told that they were going to play a listening game where they would hear two speakers at the same time, a male speaker in one ear and a female speaker in the other ear. Children listened to the stimuli under headphones and were instructed that they would hear a beep in one ear or the other and that they were to listen only to the speaker in the ear where they heard the beep. Children were told that, after a certain period, the beep would switch and they would hear it in the other ear and that they were then to listen only to the speaker in the new ear. Each speaker said either a number (1–5) or a letter (A–E). Children were instructed to touch the letters or numbers on the screen depending on what the speaker was saying in the ear in which they heard the beep.

The speakers' voices were digitally generated using the AT&T speech generator at 44.1 kHz, 32-bit resolution. The visual stimuli consisted of letters (A–E) and numbers (1–5). The numbers were grouped together in a tight cluster in the upper center region of the screen, and the letters were grouped together in the lower center region of the screen. The letters and numbers were in different primary colors in 32-point Times New Roman font. Stimuli were presented under noise-reduction headphones at the 75-dB level and were paired so that the children always heard a number in one ear and a letter in the other ear. The presentation of male/female speakers to the left/right ears was counterbalanced across the children to control for speaker or preferred ear bias. The “beep” was a 1000-Hz tone. Trials were presented in the same fixed random order across the participants, and the occurrence of the tones in each ear was randomly distributed over the trials. Performance accuracy was calculated for each participant. Ten percent of the data files were reanalyzed. Agreement was 100% between the initial and reanalysis data runs. Cronbach's alpha was .94.

Two different points in the task were used to assess children's ability to switch their attention and their ability to inhibit interference from competing auditory signals. Attention switching—the child's ability to rapidly shift the focus of attention—was assessed based on children's percent correct on switch trials, the trials where the children had to shift their attention to the other ear after the “beep” in the new ear. Children's ability to inhibit interference from competing streams of speech was based on percent correct on internal trials—those trials where the child was to attend only to the speaker in the designated target ear while ignoring the speaker in the nontarget ear.
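Given a trial-level log, the two scores described above can be derived roughly as follows (a sketch with hypothetical column names, not the analysis code used in the study):

```python
import pandas as pd

def switching_and_inhibition_accuracy(trials: pd.DataFrame):
    """Percent correct on switch trials vs. internal (non-switch) trials.

    `trials` is assumed to be ordered by presentation, with 'cued_ear' (ear of the
    beep, 'L'/'R') and 'correct' (bool) columns.
    """
    is_switch = trials["cued_ear"].ne(trials["cued_ear"].shift())
    is_switch.iloc[0] = False  # the first trial has no preceding cue, so it is not a switch
    switching = 100 * trials.loc[is_switch, "correct"].mean()
    inhibition = 100 * trials.loc[~is_switch, "correct"].mean()
    return switching, inhibition
```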

Results

Although RAN accuracy was high for both groups, not all children named all the words correctly (DLD = 88%–100%, TD = 94%–100%). Spoken word recognition involves the ability to activate and maintain a lexico-phonological representation of the target word in the mental lexicon. The purpose of this study was to examine the cognitive factors influencing this ability in children. For the results of this study to be theoretically interpretable, we needed to ensure that all of the children could activate a representation of the target words in their mental lexicon. Because phonological representations and lexical–phonological networks are underspecified in the mental lexicons of children with SLI (e.g., Mainela-Arnold et al., 2008) and children with SLI are inconsistent in naming words that they comprehend correctly on picture-pointing tasks (i.e., Kail & Leonard, 1986), we used RAN accuracy as a conservative measure of children's ability to activate a representation of the target word and included only those children having 100% RAN accuracy in the analysis (DLD = 60, TD = 87).

The standardized test scores and Cohen's d values for this subset of participants with DLD and TD participants are shown in Table 3. This subset of children with DLD and TD controls did not differ in age, F(1, 146) = .03, p = .86, η2 = .00, power = 0.05; however, similar to the larger group, they continued to differ with respect to NVIQ, F(1, 146) = 25.23, p < .003, η2 = .14, power = 0.99. The results for the tasks for the subset of the participants having an RAN accuracy of 100% are presented in Table 4. Naming speed differed for this group of participants with DLD and TD participants. Analysis of covariance (ANCOVA) with age and NVIQ as covariates revealed that, although the children had 100% accuracy in naming the words, the speed with which they named the pictures was significantly slower for the children in the group with DLD as compared with children in the TD group, F(1, 146) = 13.40, p < .001, η2 = .08, power = 0.95.

Table 3.

Standardized assessment scores for the subset of participants with DLD and TD participants with 100% RAN accuracy.

Measure DLD (n = 60): M, SD TD (n = 87): M, SD d
Age (months) 115.50 14.8 115.9 16.4 0
Leiter nonverbal IQ a 98.55 b 13.3 110.3 b 14.4 −0.81
CREVT-2–Receptive 88.43 b 9.4 106.2 b 12.3 −1.69
CREVT-2–Expressive 80.96 b 11.3 101.7 b 12.2 −1.73
CELF-4, Concepts and Following Directions 6.21 c 3.0 11.1 c 1.9 −1.88
CELF-4, Recalling Sentences 5.56 c 1.9 11.0 c 2.4 −2.50
TNL–Receptive 8.46 c 2.6 11.0 c 2.5 −1.01
TNL–Expressive 6.55 c 2.8 9.4 c 2.8 −0.66

Note. CELF-4 = Clinical Evaluation of Language Fundamentals–Fourth Edition; CREVT-2 = Comprehensive Receptive and Expressive Vocabulary Test–Second Edition; DLD = developmental language disorder; TD = typically developing; TNL = Test of Narrative Language.

a Leiter International Performance Scale–Revised (reported as full-scale IQ).
b Standardized with a mean of 100 and a standard deviation of 15.
c Standardized with a mean of 10 and a standard deviation of 3.

Table 4.

Means, standard deviations, and ranges for the cognitive predictor variables for the group with DLD and the TD group having 100% correct RAN performance.

Group Word recognition (POI a, POF b) pWM c Updating d Switching e Inhibition f
DLD (n = 60)
 Mean 196.2 263.7 80.1* 3.5 82.4 82.2
 SD 223.0 75.9 15.7 1.3 13.5 16.1
TD (n = 87)
 Mean 189.3 246.0 91.2* 3.5 87.6 86.8
 SD 20.5 64.2 5.4 1.2 11.9 12.6

Note. DLD = developmental language disorder; RAN = rapid automatic naming task; TD = typically developing.

a Point of initial word recognition accept.
b Point of final word recognition accept.
c Phonological working memory: total phonemes percent correct, nonword repetition task.
d Gordon d′.
e Attention switching task: percent correct switch trials.
f Attention switching task: percent correct internal trials.
* p < .001.

Spoken Word Recognition

For the subset of the participants having an RAN accuracy of 100%, a 2 × 2 Group (DLD = 60, TD = 87) × Word Recognition Point (POI accept, POF accept) repeated-measures ANCOVA controlling for NVIQ and age was conducted. Results revealed a main effect for condition, where the POI accept word recognition points were significantly faster than the POF accept word recognition points, F(1, 143) = 9.47, p < .003, η2 = .62, power = 0.86, for both groups. There was, however, no effect of group, F(1, 143) = 0.35, p = .54, η2 = .00, power = 0.09, or Group × Condition interaction, F(1, 143) = 0.09, p = .76, η2 = .00, power = 0.06. Thus, contrary to our expectations, after controlling for target word knowledge, age, and NVIQ, neither the POI nor POF accept word recognition points were slower for the children with DLD as compared with TD controls.
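The 2 × 2 repeated-measures ANCOVA can be approximated in long format with a mixed-effects model that includes a random intercept per child; this sketch (hypothetical column names, statsmodels) is an approximation of, not a literal reproduction of, the analysis reported above.

```python
import statsmodels.formula.api as smf

# `long` is assumed to hold two rows per child (one per recognition point), with columns
# 'gate_ms' (POI or POF accept duration), 'group' (DLD/TD), 'point' (POI/POF),
# 'nviq', 'age_months', and 'child_id'.
model = smf.mixedlm(
    "gate_ms ~ group * point + nviq + age_months",  # group, condition, their interaction, covariates
    data=long,
    groups=long["child_id"],                        # random intercept for each child
).fit()
print(model.summary())
```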

pWM

The group with DLD and the TD group were significantly different in their performance on the nonword repetition task. An ANCOVA with NVIQ and age as covariates revealed that the total percentage of phonemes correct was significantly lower for the group with DLD as compared with the TD group, F(1, 147) = 21.28, p < .001, η2 = .13, observed power = 0.99.

Updating

An ANCOVA with NVIQ and age as covariates revealed that performance on the sustained attention task was no different for the group with DLD and the TD group, F(1, 147) = 0.44, p = .50, η2 = .00, observed power = 0.10. Reaction times on the sustained attention task were also no different for the group with DLD as compared with the TD group, F(1, 147) = 0.09, p = .76, η2 = .00, observed power = 0.06, indicating that the children with DLD were as quick and accurate as the TD controls in their ability to add, delete, and update incoming auditory information.

Attention Shifting and Interference Inhibition

A repeated-measures ANCOVA with NVIQ and age as covariates revealed no difference in the groups' performance on either switch trials (attention switching: DLD = 82%, TD = 87%) or internal trials (interference inhibition: DLD = 82%, TD = 86%), F(1, 143) = 1.38, p = .24, η2 = .01, observed power = 0.13. ANCOVA with NVIQ and age as covariates also revealed that there was also no difference in the reaction times for the two groups for either the switching trials (attention switching: DLD = 1990 ms, TD = 1958 ms) or internal trials (interference inhibition: DLD = 1798 ms, TD = 1726 ms), F(1, 143) = 0.44, p = .50, η2 = .00, observed power = 0.44.

Correlational Analyses

The correlation matrices between the word recognition POI and POF points, receptive vocabulary, pWM, updating, attention switching, interference inhibition, and age for the TD group and the group with DLD are shown in Tables 5 and 6. For the TD group, POI recognition—the earliest time point where the children recognized the target words—was significantly negatively correlated with receptive vocabulary, r(87) = −.31, p < .01; updating, r(87) = −.44, p < .01; and age, r(87) = −.43, p < .01. POF recognition—the time point where the children settled on the correct target word—was significantly negatively correlated with receptive vocabulary, r(87) = −.36, p < .01; updating, r(87) = −.27, p < .05; and age, r(87) = −.33, p < .01. Thus, for the TD controls, both POI and POF word recognition were faster for older children as compared with younger children, for those children with larger vocabularies, and for those children having better updating skills.
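The coefficients reported in Tables 5 and 6 are ordinary Pearson correlations; computing them from a per-child data frame (hypothetical column names, not the authors' code) is straightforward:

```python
import pandas as pd  # `td` is assumed to be a pandas DataFrame, one row per TD child
from scipy.stats import pearsonr

measures = ["poi", "pof", "receptive_vocab", "pwm", "updating",
            "attention_switching", "interference_inhibition", "age_months"]
corr_matrix = td[measures].corr(method="pearson")  # full correlation matrix

# A single coefficient with its p value, e.g., POI vs. updating:
r, p = pearsonr(td["poi"], td["updating"])
```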

Table 5.

Correlation matrix for the subset of children in the TD group having 100% RAN accuracy.

Measures Word recognition POI Word recognition POF Receptive vocabulary pWM Updating Attention switching Interference inhibition
Word recognition POI
Word recognition POF .54**
Receptive vocabulary −.31** −.36**
pWM −.17 −.13 .40**
Updating −.44** −.27* .28** .18
Attention switching −.05 .00 .02 .14 .17
Interference inhibition −.08 .03 .19 .27** .33** .83*
Age −.43** −.33** .68** .40** .51** .24** .37**

Note. POF = point of final accept; POI = point of initial accept; pWM = phonological working memory; RAN = rapid automatic naming task; TD = typically developing.

* p < .05 level (two-tailed). ** p < .01 level (two-tailed).

Table 6.

Correlation matrix for the subset of children in the group with DLD having 100% RAN accuracy.

Measures Word recognition POI Word recognition POF Receptive vocabulary pWM Updating Attention switching Interference inhibition
Word recognition POI
Word recognition POF .70**
Receptive vocabulary −.14 .09
pWM −.23 −.08 .16
Updating −.18 −.21 .19 .30*
Attention switching −.36** −.35* .17 .40** .38**
Interference inhibition −.33** −.28* .33** .26* .32* .76**
Age −.12 .07 .60** .15 .25* .27* .45**

Note. DLD = developmental language disorder; POF = point of final accept; POI = point of initial accept; pWM = phonological working memory; RAN = rapid automatic naming task.

* p < .05 level (two-tailed). ** p < .01 level (two-tailed).

This was not the case for the group with DLD. For the children with DLD, POI recognition was significantly negatively correlated with attention switching, r(56) = −.36, p < .01, and interference inhibition, r(56) = −.33, p < .01. POF recognition was also significantly negatively correlated with attention switching, r(56) = −.35, p < .01, and interference inhibition, r(56) = −.28, p < .05. Thus, older children with DLD were not faster at either POI or POF word recognition points; however, those children who were better able to switch their focus of attention and inhibit interference from competing incoming speech were quicker to recognize the target words.

Linear Regression Analyses

Because the relationship between spoken word recognition and the cognitive variables differed for the TD group and the group with DLD, multiple regression analyses were conducted to evaluate the influence of the cognitive predictor variables (pWM, updating, attention switching, and interference inhibition) on POI and POF accept points for the children with DLD and TD children separately, to determine whether the pattern of influence of these variables on spoken word recognition was qualitatively the same or different between the two groups. Inspection of histograms and normal P-P plots of residuals suggested that the analyses described below met the assumptions of linear regression for the two groups. We considered different models for both POI and POF accept points. In each case, the first model was the combination of the predictor variables. In subsequent models, we examined the influence of receptive vocabulary, pWM, updating, attention switching, and interference inhibition. Finally, we examined the additional influence of age above and beyond that of receptive vocabulary and the cognitive predictor variables. The high correlations among many of the factors for the group with DLD and the TD group raise the possibility that multicollinearity could be a potential confound in the regression models (see Tables 7 and 8).
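The hierarchical models and collinearity checks described above can be sketched with ordinary least squares and variance inflation factors; the column names are hypothetical and this is not the authors' analysis script.

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor

predictors = ["receptive_vocab", "pwm", "updating", "attention_switching", "interference_inhibition"]

# `dld` is assumed to hold one row per child with DLD, with a 'poi' column (ms).
full_model = smf.ols("poi ~ " + " + ".join(predictors), data=dld).fit()                   # cf. Model a
single_model = smf.ols("poi ~ attention_switching", data=dld).fit()                        # cf. Model e
with_age = smf.ols("poi ~ " + " + ".join(predictors + ["age_months"]), data=dld).fit()     # cf. Model g

# Variance inflation factors for the full predictor set plus age.
X = sm.add_constant(dld[predictors + ["age_months"]])
vifs = {col: variance_inflation_factor(X.values, i)
        for i, col in enumerate(X.columns) if col != "const"}
```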

Table 7.

Regression model to predict spoken word recognition point of initial accept for the group with DLD and the TD group.

Model R R 2 Adj R 2 SE of estimate R 2 change F change df 1 df 2 Sig, F change
DLD
 Model a .40 .15 .07 22.27 .16 1.73 6 53 .13
 Model b .14 .02 .00 23.04 .02 1.25 1 58 .26
 Model c .23 .05 .04 22.63 .05 3.45 1 58 .06
 Model d .18 .03 .01 22.91 .03 1.94 1 58 .16
 Model e .36 .13 .11 21.67 .13 8.97 1 58 .00*
 Model f .33 .11 .10 21.91 .11 7.53 1 58 .00**
 Model g .39 .16 .07 22.42 .15 1.99 5 54 .09
TD
 Model a .49 .24 .19 18.36 .24 5.27 5 81 .00**
 Model b .30 .09 .08 19.63 .09 8.94 1 85 .00*
 Model c .17 .03 .01 20.32 .03 2.69 1 85 .10
 Model d .44 .19 .18 18.50 .19 20.70 1 85 .00**
 Model e .05 .00 .00 20.56 .00 0.26 1 85 .61
 Model f .08 .00 .00 20.57 .00 0.58 1 85 .44
 Model g .53 .28 .22 18.04 .28 5.20 6 80 .00**

Note. Adj = adjusted; DLD = developmental language disorder; Sig = significance; TD = typically developing.

a Model predictors: (constant), receptive vocabulary, pWM, updating, attention switching, interference inhibition.
b Model predictors: (constant), receptive vocabulary.
c Model predictors: (constant), pWM.
d Model predictors: (constant), updating.
e Model predictors: (constant), attention switching.
f Model predictors: (constant), interference inhibition.
g Model predictors: (constant), receptive vocabulary, pWM, updating, attention switching, interference inhibition, age.
* p < .05. ** p < .01.

Table 8.

Regression model to predict spoken word recognition point of final accept for the group with DLD and the TD group.

Model R R 2 Adj R 2 SE of estimate R 2 change F change df 1 df 2 Sig, F change
DLD
 Model a .41 .17 .10 72.13 .17 2.27 5 54 .06
 Model b .09 .00 −.00 76.23 .00 0.53 1 58 .46
 Model c .09 .01 −.01 76.28 .01 0.45 1 58 .50
 Model d .22 .05 .03 74.78 .05 2.82 1 58 .10
 Model e .36 .13 .11 71.48 .13 8.58 1 58 .00**
 Model f .28 .08 .06 73.44 .08 5.06 1 58 .03*
 Model g .44 .19 .10 71.95 .19 2.11 6 53 .06
TD
 Model a .45 .20 .15 59.11 .20 4.14 5 81 .00**
 Model b .36 .13 .12 60.11 .13 13.30 1 85 .00**
 Model c .14 .02 .01 64.06 .02 1.60 1 85 .21
 Model d .27 .08 .06 62.19 .08 6.88 1 85 .01*
 Model e .00 .00 −.01 64.66 .00 0.00 1 85 .99
 Model f .04 .00 −.01 64.61 .00 0.13 1 85 .72
 Model g .46 .21 .15 59.20 .21 3.56 6 80 .00**

Note. Adj = adjusted; DLD = developmental language disorder; Sig = significance; TD = typically developing.

a Model predictors: (constant), receptive vocabulary, pWM, updating, attention switching, interference inhibition.
b Model predictors: (constant), receptive vocabulary.
c Model predictors: (constant), pWM.
d Model predictors: (constant), updating.
e Model predictors: (constant), attention switching.
f Model predictors: (constant), interference inhibition.
g Model predictors: (constant), receptive vocabulary, pWM, updating, attention switching, interference inhibition, age.
* p < .05. ** p < .01.

Spoken Word Recognition

POI Word Recognition

A multiple regression was first conducted to predict children's POI word recognition from the linear combination of receptive vocabulary, pWM, updating, attention switching, and interference inhibition (Table 7). Although the group with DLD was no slower than the TD controls for either the POI or POF recognition points, the pattern of factors influencing spoken word recognition differed for the group with DLD and the TD group. For the group with DLD, two predictor variables, attention switching and interference inhibition, accounted for a significant amount of variance in POI spoken word recognition, R 2 = .13, adjusted R 2 = .12, F(1, 58) = 8.97, p < .01 and R 2 = .11, adjusted R 2 = .12, F(1, 58) = 7.54, p < .01, respectively. Follow-up analysis revealed that age did not account for any additional variance in POI word recognition above and beyond the cognitive predictors examined, R 2 = .16, adjusted R 2 = .08, F(4, 54) = 1.98, p = ns. Tests for multicollinearity indicated that a very low level of multicollinearity was present for factors for the group with DLD (DLD: receptive vocabulary, variance inflation factor (VIF) = 1.6; pWM, VIF = 1.3; updating, VIF = 1.2; attention switching, VIF = 2.8; interference inhibition, VIF = 2.9; age, VIF = 2.0).

For the TD group, receptive vocabulary and updating accounted for unique amounts of variance in POI word recognition point, R 2 = .09, adjusted R 2 = .08, F(1, 85) = 8.94, p < .01 and R 2 = .19, adjusted R 2 = .18, F(1, 85) = 20.7, p < .01, respectively. Follow-up regression analysis revealed that, for the TD group, age accounted for an additional significant amount of additional variance in POI word recognition point above and beyond the cognitive predictors, R 2 = .28, adjusted R 2 = .24, F(5, 81) = 6.32, p < .01. Low level of multicollinearity was present for factors for the TD group as well (TD: receptive vocabulary, VIF = 2.1; pWM, VIF = 1.3; updating, VIF = 1.5; attention switching, VIF = 3.6; interference inhibition, VIF = 4.2; age, VIF = 2.5).

POF Word Recognition

For the group with DLD, attention switching and interference inhibition accounted for a significant amount of variance in POF spoken word recognition, R 2 = .13, adjusted R 2 = .11, F(1, 58) = 8.58, p < .01 and R 2 = .08, adjusted R 2 = .06, F(1, 58) = 5.06, p < .05, respectively. Tests for multicollinearity indicated that a very low level of multicollinearity was present for factors for the group with DLD (DLD: receptive vocabulary, VIF = 1.6; pWM, VIF = 1.3; updating, VIF = 1.2; attention switching, VIF = 2.8; interference inhibition, VIF = 2.8; age, VIF = 2.0).

For the TD group, similar to POI word recognition, receptive vocabulary, updating, and age accounted for a significant amount of variance in POF word recognition, R 2 = .13, adjusted R 2 = .12, F(1, 85) = 13.3, p < .01; R 2 = .08, adjusted R 2 = .06, F(1, 85) = 6.88, p < .05; and R 2 = .21, adjusted R 2 = .15, F(5, 80) = 3.56, p < .01, respectively. Low level of multicollinearity was present for factors for the TD group as well (TD: receptive vocabulary, VIF = 2.1; pWM, VIF = 1.3; updating, VIF = 1.4; attention switching, VIF = 3.6; interference inhibition, VIF = 4.2; age, VIF = 2.5).

The results from this study did not replicate prior studies of spoken word recognition in SLI. In this study, the target nouns had high concreteness ratings, high imageability ratings, and early AoA ratings, but we also controlled for target word familiarity by including only children who had 100% RAN accuracy for the target words. One question is whether the failure to replicate prior findings was due to our excluding those children in both the group with DLD and the TD group who did not have 100% RAN accuracy. To address this question, we conducted a follow-up analysis for those children in the study who did not have 100% naming accuracy (DLD = 57, TD = 30). Naming accuracy for these children, although not 100%, was still high (DLD = 95%, range = 88%–97%; TD = 96%, range = 94%–97%). The two groups did not differ in age, F(1, 86) = 0.335, p = ns, but did differ significantly on pWM, F(1, 86) = 12.8, p = .001; RAN-RT, F(1, 86) = 12.0, p = .55; attention switching, F(1, 86) = 5.1, p = .02; and interference inhibition, F(1, 86) = 8.46, p = .01. A 2 × 2 Group (DLD = 57, TD = 30) × Word Recognition Point (POI accept, POF accept) repeated-measures ANCOVA controlling for NVIQ and age was conducted. Results revealed a main effect for condition, where the POI accept word recognition points were significantly faster than POF accept word recognition points for both groups, F(1, 83) = 4.94, p < .05, and a main effect for group where both POI and POF word recognition points were significantly slower for the group with DLD compared with the TD group, F(1, 83) = 4.24, p < .05. There was no Group × Condition interaction, F(1, 83) = 2.55, p = ns. These findings suggest that, if children with DLD are able to access the label for the target word from their mental lexicon (i.e., 100% RAN), their spoken word recognition is no different from that of their TD peers for concrete and imageable words that have very early AoA ratings. This is not the case for children in the study who were unable to accurately label all of the target words.

Discussion

In this study, we asked if cognitive factors influenced spoken word recognition in an auditory forward gating task for children with DLD and a group of TD children propensity matched for age, gender, maternal education, and SES. The cognitive factors we examined were (a) pWM, (b) information updating, (c) attention shifting, and (d) interference inhibition. Contrary to our expectations, the results from our study show that spoken word recognition for those children who were able to activate a lexical–phonological representation of the target word in their mental lexicon did not differ for children with DLD aged 7;0–11;0 years;months and propensity-matched TD controls for either time point: (a) time to activate the target candidate word from the acoustic signal (point of initial gate duration) or (b) time to settle on a final target word (point of final gate duration). Performance on the updating, attention switching, and interference inhibition tasks also did not differ for children with DLD and TD controls; however, pWM capacity was significantly impaired for those with DLD as compared with the TD controls.

Although spoken word recognition speed was the same for the two groups, the factors that influenced spoken word recognition differed qualitatively for the group with DLD and the TD group. Children with DLD who were better able to accurately switch their focus of attention between two competing auditory stimuli and better able to inhibit interference from competing auditory stimuli were also faster to recognize words within a stream of speech as compared with those children with DLD who were not. In contrast, children in the TD group who had larger receptive vocabularies and who were better able to add, delete, and update incoming auditory information were also faster to recognize the target word in the stream of speech as compared with younger children, who had smaller receptive vocabularies and were less able to effectively update incoming information from the speech stream. Surprisingly, the ability to hold speech information in pWM did not account for any variance in spoken word recognition for either group of children.

The results from this study raise some interesting questions. Although the children in our group with DLD were able to generate a label for the target words and were able to recognize them in a stream of speech as quickly as the propensity-matched children in our TD control group, the cognitive factors that influenced their spoken word recognition differed from those of their peers. Although both the Mainela-Arnold et al. (2008) and McMurray et al. (2010) studies also suggest that cognitive factors influence spoken word recognition in their groups with SLI and their TD groups, those studies found that children with SLI were significantly slower to recognize the target words. Unlike in these studies, our two groups did not differ in the speed with which they were able to recognize the target words in a stream of speech. Mainela-Arnold and her colleagues argue that their findings suggest that their children were unable to inhibit interference from cohort competitors. In contrast, McMurray and his colleagues propose that their children experienced rapid decay of the target word, which resulted in difficulty maintaining activation of the word in the phonological store while simultaneously processing the incoming stream of speech (Mainela-Arnold et al., 2008; McMurray et al., 2010). In our study, we also found that difficulty inhibiting interference from competing speech played a role for our children with language impairments, but other aspects of executive control, including attentional control, played a role as well.

Although the differences in findings across these three studies could be the result of differences in stimuli, in the composition of the language-impaired groups, and/or in experimental methodology, if one assumes instead that the studies characterize different aspects of the spoken word recognition challenges that children with language impairments face, then together they suggest that breakdown in spoken word recognition in children with SLI may occur at multiple time points in the process. The findings from these three studies also suggest that different factors may impact spoken word recognition at these different time points. Traditionally, spoken word recognition studies like these have been grounded in theoretical computational models of spoken word recognition that assume the process is a passive one that moves along a single route from the initial perception of speech sounds to the final recognition of the target word. Hickok and Poeppel's dual-stream model of speech processing provides an interesting alternative theoretical framework from which to examine the results of this study and the inconsistent pattern of findings across this and prior studies (Hickok & Poeppel, 2000, 2004, 2007, 2015).

Hickok and Poeppel's model of speech processing differentiates speech perception (the ability to process speech sounds under ecologically valid conditions at the sublexical level) from speech recognition (the set of computations that transform acoustic signals into a representation that makes contact with the mental lexicon). In their model, the "ventral stream" supports the processing of speech signals for comprehension (i.e., speech recognition), and the "dorsal stream" supports the translation of the acoustic speech signal into the underlying articulatory representations for speech production. Hickok and Poeppel suggest that traditional speech perception tasks do not require access to the lexicon but instead rely on processes that allow the listener to maintain sublexical representations in an active state during task performance. Their model holds that the early stages of speech processing bilaterally activate auditory regions on the dorsal superior temporal gyrus and superior temporal sulcus and then diverge into two streams: a ventral stream that is bilaterally organized with a slight left-hemisphere bias and a dorsal stream that is strongly left-dominant. The more posterior regions of the ventral stream (posterior middle temporal gyrus [MTG] and posterior inferior temporal sulcus [ITS]) support the lexical interface linking phonological and semantic information, and the more anterior regions (anterior MTG and anterior ITS) support the combinatorial network. The dorsal stream maps auditory sensory representations onto articulatory motor representations and involves the Sylvian fissure at the parieto-temporal boundary (area Spt), anterior ITS, anterior MTG, posterior inferior frontal gyrus, and premotor cortex. Hickok and Poeppel argue that, although the two routes overlap in the earliest stages of speech processing, the ability to process speech sounds is a distinct process that is not necessarily correlated with, nor should it predict, spoken word recognition. Furthermore, because the dorsal stream requires the listener to maintain sublexical representations in an active state in memory, Hickok and Poeppel suggest that some degree of executive control and working memory is involved in the process.

When viewed from Hickok and Poeppel's theoretical framework, the different methodologies used to study spoken word recognition in children with SLI may provide insights into speech processing at different points along the two routes. McMurray and colleagues used a visual world eye-tracking paradigm. This experimental paradigm is well suited to examining the earliest stages of children's processing and activation of sublexical and syllable-level information, the stage that relies to some degree on executive control and working memory. This may be why they were able to discover that rapid decay of the target word, and the challenges this caused in maintaining activation of sublexical and syllable-level representations, was problematic for their group with SLI. In contrast, forward gating tasks may tap not only the dorsal stream but also the ventral stream, depending on the gate durations. Because the ventral system supports the processing of speech signals for comprehension, linking to representations in the mental lexicon, this may be why both our study and that of Mainela-Arnold and her colleagues found that inhibiting interference from cohort competitors was a problem for these children.

In forward gating tasks, listeners are presented with progressively longer "segments" of the target word and are asked to guess what the word is at each of the gates. In Hickok and Poeppel's model, there is overlap between speech perception and speech recognition in the operations leading up to and including activation of sublexical representations, but from this point forward, the ventral and dorsal systems diverge. The dorsal stream may support processing at the shortest gate durations, because the shorter gate segments contain only enough of the acoustic signal to generate sublexical and syllable-level representations. At the longest gate durations, processing shifts, and the ventral stream begins to support the computation of representations that make contact with the mental lexicon. This would predict variability both within and across individual listeners at those gate durations that fall in the "crossover" time window, where the listener shifts from the dorsal to the ventral system as the gates become progressively longer. At this "crossover" gate duration, whether processing continues via the dorsal stream or the ventral stream depends on the amount of the speech signal the listener has access to, a complex, dynamic outcome that reflects the interaction between the characteristics of the target words in the incoming stream of speech and individual differences in the listener. This means that factors that influence the earliest stages of speech perception, such as phonological representations, executive control, and working memory, and factors that influence later stages of speech recognition, such as word frequency, phonotactic probability, neighborhood density, imageability, and concreteness, all play a role in the ultimate speed and accuracy of spoken word recognition in children.
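To make the gating procedure concrete, the sketch below shows one way gated stimuli could be generated from a recording of a target word; the 60-ms gate increment and the file names are hypothetical choices for illustration, not the parameters used in this study.

```python
# A minimal sketch of generating forward-gated stimuli from a mono WAV recording
# of a target word. Gate 1 contains the first gate_ms milliseconds from word
# onset, gate 2 the first 2 * gate_ms, and so on. All names and the 60-ms
# increment are hypothetical.
import wave

def write_gates(in_path: str, out_prefix: str, gate_ms: int = 60) -> None:
    with wave.open(in_path, "rb") as src:
        params = src.getparams()                       # channels, sample width, rate, frame count
        frames_per_gate = int(params.framerate * gate_ms / 1000)
        audio = src.readframes(params.nframes)
        bytes_per_frame = params.sampwidth * params.nchannels

    n_gates = -(-params.nframes // frames_per_gate)    # ceiling division
    for gate in range(1, n_gates + 1):
        n_bytes = min(gate * frames_per_gate, params.nframes) * bytes_per_frame
        with wave.open(f"{out_prefix}_gate{gate}.wav", "wb") as dst:
            dst.setparams(params)                      # header is patched with the true length on close
            dst.writeframes(audio[:n_bytes])           # progressively longer onset portion

# Example: write_gates("ball.wav", "ball") would produce ball_gate1.wav, ball_gate2.wav, ...
```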

The purpose of the Mainela-Arnold et al. (2008) study was to investigate the impact of word frequency, phonotactic probability, and neighborhood density on spoken word recognition in children with SLI; thus, the target words in that study included words with both high and low phonotactic probability and neighborhood density. The purpose of the McMurray et al. (2010) study was to refine models of word recognition by investigating parameters predicted to influence the potential interference effects of cohort and rhyme competitors on word recognition, and children with SLI were included to examine individual differences in these effects as they relate to overall differences in language ability. The purpose of our study was to examine the role of cognitive factors in spoken word recognition; thus, characteristics of the stimuli that might influence spoken word recognition in children were all controlled. As a result, the target words in our study were all inanimate nouns with high concreteness, high imageability, and early AoA ratings. Together, these studies suggest that not only will different experimental paradigms reveal valuable insights into the spoken word recognition challenges of children with language impairments but that different stimuli may as well.

One concern is that the cognitive tasks used in this study were not distinct enough to be tapping the separate executive functions hypothesized to influence spoken word recognition. Large-scale developmental studies suggest that working memory, updating, task shifting, and inhibition, the executive functions that control and support goal-directed behavior, are related but separable factors (Miyake & Friedman, 2012). If our tasks were simply different measures of the same underlying attention and working memory abilities, then, in addition to being highly correlated in the two groups, all of the cognitive factors should have accounted for variance in spoken word recognition in both groups. This was not the case. In the group with DLD, all of the cognitive factors were correlated with one another. In the TD group, interference inhibition was significantly correlated with pWM, updating, and attention switching, but pWM, updating, and attention switching were not correlated with one another. Moreover, only three of the cognitive factors accounted for variance in spoken word recognition across the group with DLD and the TD group, and these factors differed for the two groups.

This raises a related question: Were the switching and interference components of the attention switching task measuring two distinct executive functions, attention switching and interference inhibition? The task required the children to selectively attend to one stream of speech while inhibiting interference from the competing ear and then to switch ears at random points throughout the task. Although the tolerance and VIF values for these factors did not suggest collinearity problems for either the group with DLD or the TD group, this does not address the question of whether they were measuring distinct constructs. It may be that the task, similar to other dichotic listening tasks, was instead a broader measure of different facets of executive control. It has also been suggested that gating tasks are not reflective of the natural process of real-time spoken word processing but are instead a measure of metalinguistic abilities (Montgomery, 1999). Metalinguistic tasks require not only linguistic competence but also a degree of executive control. That attention switching and interference inhibition accounted for unique variance in spoken word recognition in the group with DLD may therefore reflect the poorer linguistic competence of these children and their need to rely more heavily on executive control to complete the gating task.
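For reference, the collinearity screen mentioned above (tolerance and VIF) can be computed as in the following sketch; the data frame and column names are hypothetical, and tolerance is simply the reciprocal of VIF.

```python
# A minimal sketch of screening collinearity among the cognitive predictors with
# tolerance and VIF, assuming a data frame with one numeric column per predictor.
# Column names are hypothetical, not the article's variable names.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def collinearity_table(predictors: pd.DataFrame) -> pd.DataFrame:
    X = sm.add_constant(predictors)              # VIF is computed with an intercept column present
    rows = []
    for i, name in enumerate(X.columns):
        if name == "const":
            continue
        vif = variance_inflation_factor(X.values, i)
        rows.append({"predictor": name, "VIF": vif, "tolerance": 1.0 / vif})
    return pd.DataFrame(rows)

# Example with hypothetical predictor columns:
# df = pd.read_csv("cognitive_measures.csv")
# print(collinearity_table(df[["pwm", "updating", "switching", "inhibition"]]))
```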

Although speech processing has traditionally been viewed as a passive process that moves through the steps of speech perception to spoken word recognition along a single route, studies suggest instead that it may be an active, multiroute process. Speech processing unfolds over the course of hundreds of milliseconds, and in all but the perfect experimental setting, the listener must contend with an incoming signal that is noisy and variable as well as a noisy listening environment. In our study, children with DLD aged 7;0–11;0 (years;months) who had 100% naming accuracy for the target words, which were inanimate, highly concrete, and highly imageable nouns with early AoA, were as fast as their TD peers both to activate the target candidate word from the acoustic signal and to settle on a final target word. However, different cognitive factors influenced how they processed spoken words as compared with their TD peers.

The findings from our study, taken together with the findings from prior studies, suggest that spoken word recognition may not be a static deficit for children with DLD. Instead, spoken word recognition in these children may be a fragile skill whose success at any moment in time depends on the cognitive abilities of the child, the specificity of the lexical–phonological network, and the ease with which the child can activate the target word from the mental lexicon, coupled with the characteristics of the individual words the child is trying to comprehend. In summary, the results from this study suggest that, although children with DLD may have the words in their lexicon and appear to process spoken words as quickly as their TD peers, the factors influencing how these children process spoken words in real time may be qualitatively different from those influencing their peers.

Acknowledgments

This research was supported by a grant (R01 DC010883) from the National Institute on Deafness and Other Communication Disorders. We express our gratitude to all the children and their parents who participated in this project. We also thank Hanna Gelfand, Jenny Boyden, Andrea Fung, Erin Burns, Beula Magimairaj, Naveen Nagaraj, Katie Squires, and Allison Hancock for their invaluable assistance during various phases of this study.

Appendix

Word Frequency (WF), Phonotactic Probability (PP), Neighborhood Density (ND), Age-of-Acquisition (AoA) in Years, Concreteness (Conc), Familiarity (Fam), and Imageability (Imag) Ratings and Picture Naming Speed Norms (ms) for the Nouns

Word  WF  PP  ND  AoA  Conc  Fam  Imag  Naming RT (ms)
ball 110 3.75 15 2.9 615 575 622 886
bed 127 3.95 22 2.89 635 636 635 706
belt 29 2.72 8 4.62 602 550 494 812
boat 72 3.62 25 3.84 637 584 631 1059
book 193 3.74 13 3.68 609 643 591 656
boot 13 2.04 21 3.89 595 566 604 869
bowl 23 2.91 19 4.26 575 557 579 831
box 70 3.47 4 4.3 597 599 591 753
bread 41 3.05 9 3.58 622 611 619 773
broom 2 2.45 4 5.5 613 547 608 821
cake 13 3.21 21 3.26 624 594 624 789
car 274 3.93 17 3.37 622 634 638 751
chair 66 3.98 4 3.43 606 617 610 732
clock 20 2.91 6 4.42 591 608 614 772
door 312 3.78 6 3.05 606 630 599 719
dress 67 3.62 3 4.05 595 588 595 840
drum 11 2.77 5 4.63 602 506 599 766
fork 14 2.26 7 3.63 592 584 598 723
glove 9 1.78 1 4.3 607 575 596 848
hat 56 3.65 25 3.33 601 580 562 684
key 88 2.38 24 3.58 612 603 618 738
kite 1 2.6 16 4.58 592 481 624 796
knife 76 2.88 6 4.15 612 573 633 816
ring 47 3.22 18 4.53 593 589 601 785
shirt 27 2.81 8 3.53 616 612 612 1334
shoe 14 3.31 19 2.6 600 569 601 737
sock 4 2.26 15 2.94 581 578 553 712
spoon 6 2.78 4 2.5 614 612 584 777
square 143 2.81 1 4.11 516 576 610 na
train 82 3.78 9 4 592 548 593 838
truck 57 3.72 7 3.79 595 620 621 987
watch 81 4.01 5 4.33 487 576 525 780
wheel 56 3.2 6 4.4 573 566 576 913
M 66.79 3.13 11 3.82 597 584 598 787.97
SD 73.69 0.62 8 0.67 29.2 35.1 30.67 126.81
Max 312.00 4.01 25 5.50 637 643 638 1334.00
Min 1.00 1.78 1 2.50 487 481 494 656.00


Footnotes

1

For continuity and consistency with the existing literature, we use the term SLI when discussing previous research in this area.

2

A propensity score is the conditional probability of a child being enrolled in the group with DLD or the control (TD) group given his or her key baseline characteristics (in our case, age, gender, mother's education, and family income). Because of its ability to match groups on a high-dimensional set of characteristics, that is, to match simultaneously on several categorical and continuous variables, the propensity score technique has become a critical statistical method in modern clinical research (D'Agostino, 1998; D'Agostino & D'Agostino, 2007; Rosenbaum & Rubin, 1983, 1984).
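As an illustration of the logic of this technique (not the matching procedure used in this study), the sketch below estimates propensity scores with a logistic regression and then pairs each child with DLD to the TD child with the closest score; the column names are hypothetical.

```python
# A minimal sketch of propensity score estimation and 1:1 nearest-neighbor
# matching without replacement. Column names (group, age, gender,
# mother_education, family_income) are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def match_groups(df: pd.DataFrame, covariates: list[str]) -> pd.DataFrame:
    """Return a matched data frame: each DLD child paired with the unmatched TD
    child whose propensity score is closest."""
    X = pd.get_dummies(df[covariates], drop_first=True)   # encode categorical covariates
    y = (df["group"] == "DLD").astype(int)
    df = df.assign(pscore=LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1])

    dld = df[df["group"] == "DLD"]
    td_pool = df[df["group"] == "TD"].copy()
    matched = []
    for _, child in dld.iterrows():
        if td_pool.empty:
            break                                          # no TD children left to match
        idx = (td_pool["pscore"] - child["pscore"]).abs().idxmin()
        matched.extend([child, td_pool.loc[idx]])
        td_pool = td_pool.drop(idx)                        # match without replacement
    return pd.DataFrame(matched)

# Example with hypothetical columns:
# matched = match_groups(children, ["age", "gender", "mother_education", "family_income"])
```

Matching without replacement in this way can leave some children unmatched when no suitable partner remains, which is one reason a small number of children may be excluded from a propensity-matched sample (cf. Footnote 3).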

3

Only 10 of the 127 children with DLD were excluded because of the lack of an appropriate TD match.

References

1. Adani F., Van der Lely H. K., Forgiarini M., & Guasti M. T. (2010). Grammatical feature dissimilarities make relative clauses easier: A comprehension study with Italian children. Lingua, 120(9), 2148–2166.
2. Allopenna P. D., Magnuson J. S., & Tanenhaus M. K. (1998). Tracking the time course of spoken word recognition using eye movements: Evidence for continuous mapping models. Journal of Memory and Language, 38, 419–439.
3. Alt M., & Plante E. (2006). Factors that influence lexical and semantic fast mapping of young children with specific language impairment. Journal of Speech, Language, and Hearing Research, 49(5), 941–954. https://doi.org/10.1044/1092-4388(2006/068)
4. American National Standards Institute. (1997). Specifications of audiometers (ANSI/ANS-8.3-1997; R2003). New York, NY: Author.
5. Bedore L. M., Peña E. D., Summers C. L., Boerger K. M., Resendiz M. D., Greene K., … Gillam R. B. (2012). The measure matters: Language dominance profiles across measures in Spanish–English bilingual children. Bilingualism: Language and Cognition, 15(3), 616–629. https://doi.org/10.1017/S1366728912000090
6. Bishop D. V. (1997). Cognitive neuropsychology and developmental disorders: Uncomfortable bedfellows. The Quarterly Journal of Experimental Psychology: Human Experimental Psychology, 50A(4), 899–923.
7. Bishop D. V. (2014). Ten questions about terminology for children with unexplained language problems. International Journal of Language & Communication Disorders, 49(4), 381–415. https://doi.org/10.1111/1460-6984.12101
8. Bishop D. V., Snowling M. J., Thompson P. A., Greenhalgh T., & CATALISE Consortium. (2016). CATALISE: A multinational and multidisciplinary Delphi consensus study. Identifying language impairments in children. PLoS One, 11(7), e0158753.
9. Catts H. W., Bridges M. S., Little T. D., & Tomblin J. B. (2008). Reading achievement growth in children with language impairments. Journal of Speech, Language, and Hearing Research, 51(6), 1569–1579.
10. Christensen T. A., Antonucci S. M., Lockwood J. L., Kittleson M., & Plante E. (2008). Cortical and subcortical contributions to the attentive processing of speech. NeuroReport, 19(11), 1101–1105. https://doi.org/10.1097/WNR.0b013e3283060a9d
11. Coady J. A., & Evans J. L. (2008). Uses and interpretations of non-word repetition tasks in children with and without specific language impairments (SLI). International Journal of Language & Communication Disorders, 43(1), 1–40. https://doi.org/10.1080/13682820601116485
12. Coady J. A., Evans J. L., Mainela-Arnold E., & Kluender K. R. (2007). Children with specific language impairments perceive speech most categorically when tokens are natural and meaningful. Journal of Speech, Language, and Hearing Research, 50, 41–57.
13. Coady J. A., Kluender K. R., & Evans J. L. (2005). Categorical perception of speech by children with specific language impairments. Journal of Speech, Language, and Hearing Research, 48(4), 944–959.
14. Coady J. A., & Mainela-Arnold E. (2013). Phonological and lexical effects in verbal recall by children with specific language impairments. International Journal of Language & Communication Disorders, 48(2), 144–159.
15. Cohen J., MacWhinney B., Flatt M., & Provost J. (1993). PsyScope: An interactive graphic system for designing and controlling experiments in the psychology laboratory using Macintosh computers. Behavior Research Methods, Instruments, & Computers, 25(2), 257–271.
16. Coltheart M. (1981). The MRC psycholinguistic database. The Quarterly Journal of Experimental Psychology: Human Experimental Psychology, 33A, 497–505.
17. Conti-Ramsden G., Mok P. L., Pickles A., & Durkin K. (2013). Adolescents with a history of specific language impairment (SLI): Strengths and difficulties in social, emotional and behavioral functioning. Research in Developmental Disabilities, 34(11), 4161–4169.
18. D'Agostino R. (1998). Propensity score methods for bias reduction in the comparison of a treatment to a non-randomized control group. Statistics in Medicine, 17, 2265–2281.
19. D'Agostino R. B. Jr., & D'Agostino R. B. Sr. (2007). Estimating treatment effects using observational data. Journal of the American Medical Association, 297(3), 314–316. https://doi.org/10.1001/jama.297.3.314
20. Dollaghan C. (1985). Child meets word: "Fast mapping" in preschool children. Journal of Speech and Hearing Research, 28(3), 449–454.
21. Dollaghan C. (1987). Fast mapping in normal and language-impaired children. Journal of Speech and Hearing Disorders, 52(3), 218–222.
22. Dollaghan C. (1998). Spoken word recognition in children with and without specific language impairment. Applied Psycholinguistics, 19(2), 193–207.
23. Dollaghan C., & Campbell T. (1998). Nonword repetition and child language impairment. Journal of Speech, Language, and Hearing Research, 41, 1136–1146.
24. Durkin K., & Conti-Ramsden G. (2007). Language, social behavior, and the quality of friendships in adolescents with and without a history of specific language impairment. Child Development, 78(5), 1441–1457.
25. Edwards J., & Lahey M. (1996). Auditory lexical decisions of children with specific language impairment. Journal of Speech and Hearing Research, 39(6), 1263–1273.
26. Evans J., Gillam R., & Montgomery J. (2015, November). Relationship between spoken word recognition, lexical access, and phonological working memory in children with SLI. Poster session presented at the annual convention of the American Speech-Language-Hearing Association, Denver, CO.
27. Finneran D. A., Francis A. L., & Leonard L. B. (2009). Sustained attention in children with specific language impairment (SLI). Journal of Speech, Language, and Hearing Research, 52(4), 915–929. https://doi.org/10.1044/1092-4388(2009/07-0053)
28. Garlock V. M., Walley A. C., & Metsala J. L. (2001). Age-of-acquisition, word frequency, and neighborhood density effects on spoken word recognition by children and adults. Journal of Memory and Language, 45(3), 468–492. https://doi.org/10.1006/jmla.2000.2784
29. Gaskell M. G., & Marslen-Wilson W. D. (1997). Integrating form and meaning: A distributed model of speech perception. Language and Cognitive Processes, 12(5–6), 613–656.
30. Gillam R., & Pearson N. A. (2004). Test of Narrative Language. Austin, TX: Pro-Ed.
31. Gordon M., McClure F., & Aylward G. (1997). The Gordon Diagnostic System: Instructional manual and interpretive guide (3rd ed.). New York, NY: Gordon Systems.
32. Graf-Estes K., Evans J. L., & Else-Quest N. M. (2007). Differences in the nonword repetition performance of children with and without specific language impairment: A meta-analysis. Journal of Speech, Language, and Hearing Research, 50, 177–195. https://doi.org/10.1044/1092-4388(2007/015)
33. Gray S., Brinkley S., & Svetina D. (2012). Word learning by preschoolers with SLI: Effect of phonotactic probability and object familiarity. Journal of Speech, Language, and Hearing Research, 55(5), 1289–1300. https://doi.org/10.1044/1092-4388(2012/11-0095)
34. Grosjean F. (1980). Spoken word recognition processes and the gating paradigm. Perception & Psychophysics, 28(4), 267–283.
35. Hammill D. D., & Newcomer P. L. (2008). Test of Language Development–Intermediate: Fourth Edition. Austin, TX: Pro-Ed.
36. Heald S. L., & Nusbaum H. C. (2014). Speech perception as an active cognitive process. Frontiers in Systems Neuroscience, 8, 35.
37. Heald S. L., Van Hedger S. C., & Nusbaum H. C. (2017). Perceptual plasticity for auditory object recognition. Frontiers in Psychology, 8, 781.
38. Hickok G. (2012). The cortical organization of speech processing: Feedback control and predictive coding in the context of a dual-stream model. Journal of Communication Disorders, 45(6), 393–402.
39. Hickok G., & Poeppel D. (2000). Towards a functional neuroanatomy of speech perception. Trends in Cognitive Sciences, 4(4), 131–138.
40. Hickok G., & Poeppel D. (2004). Dorsal and ventral streams: A framework for understanding aspects of the functional anatomy of language. Cognition, 92(1–2), 67–99.
41. Hickok G., & Poeppel D. (2007). The cortical organization of speech processing. Nature Reviews Neuroscience, 8(5), 393–402.
42. Hickok G., & Poeppel D. (2015). Neural basis of speech perception. Handbook of Clinical Neurology, 129, 149–160.
43. Kail R., Hale C. A., Leonard L. B., & Nippold M. A. (1984). Lexical storage and retrieval in language-impaired children. Applied Psycholinguistics, 5(1), 37–49.
44. Kail R., & Leonard L. (1986). Word-finding abilities in language-impaired children. ASHA Monographs, (25), 1–39.
45. Kan P. F., & Windsor J. (2010). Word learning in children with primary language impairment: A meta-analysis. Journal of Speech, Language, and Hearing Research, 53, 739–756. https://doi.org/10.1044/1092-4388(2009/08-0248)
46. Kuperman V., Stadthagen-Gonzalez H., & Brysbaert M. (2012). Age-of-acquisition ratings for 30,000 English words. Behavior Research Methods, Instruments, & Computers, 44(4), 978–990.
47. Lee J. C., & Tomblin J. B. (2012). Reinforcement learning in young adults with developmental language impairment. Brain and Language, 123(3), 154–163.
48. Leonard L. B. (2014). Children with specific language impairment. Cambridge, MA: MIT Press.
49. Leonard L. B., Nippold M., Kail R., & Hale C. (1983). Picture naming in language-impaired children. Journal of Speech and Hearing Research, 26, 609–615.
50. Luce P. A., & Pisoni D. B. (1998). Recognizing spoken words: The neighborhood activation model. Ear and Hearing, 19, 1–36.
51. Mainela-Arnold E., & Evans J. L. (2014). Do statistical segmentation abilities predict lexical–phonological and lexical–semantic abilities in children with and without SLI? Journal of Child Language, 41(2), 327–351. https://doi.org/10.1017/S0305000912000736
52. Mainela-Arnold E., Evans J. L., & Coady J. A. (2008). Lexical representations in children with SLI: Evidence from a frequency-manipulated gating task. Journal of Speech, Language, and Hearing Research, 51(2), 381–393. https://doi.org/10.1044/1092-4388(2008/028)
53. Marslen-Wilson W., & Tyler L. (1980). The temporal structure of spoken language understanding. Cognition, 25, 71–102.
54. Marslen-Wilson W., & Welsh A. (1978). Processing interactions and lexical access during word recognition in continuous speech. Cognitive Psychology, 10, 29–63.
55. Marslen-Wilson W., & Zwitserlood P. (1989). Accessing spoken words: The importance of word onsets. Journal of Experimental Psychology: Human Perception and Performance, 15(3), 576–585.
56. McClelland J. L., & Elman J. L. (1986). The TRACE model of speech perception. Cognitive Psychology, 18, 1–86.
57. McMurray B., Samelson V. M., Lee S. H., & Tomblin J. B. (2010). Individual differences in online spoken word recognition: Implications for SLI. Cognitive Psychology, 60(1), 1–39. https://doi.org/10.1016/j.cogpsych.2009.06.003
58. Miyake A., & Friedman N. P. (2012). The nature and organization of individual differences in executive functions: Four general conclusions. Current Directions in Psychological Science, 21(1), 8–14.
59. Moe A., Hopkins C., & Rush R. (1982). The vocabulary of first-grade children. Springfield, IL: Thomas.
60. Montgomery J. W. (1999). Recognition of gated words by children with specific language impairment: An examination of lexical mapping. Journal of Speech, Language, and Hearing Research, 42(3), 735–743. https://doi.org/10.1044/jslhr.4203.735
61. Montgomery J. W. (2002). Examining the nature of lexical processing in children with specific language impairment: A temporal processing or processing capacity deficit? Applied Psycholinguistics, 23, 447–470.
62. Montgomery J. W., Evans J. L., Gillam R. B., Sergeev A. V., & Finney M. C. (2016). Whatdunit?: Developmental changes in children's syntactically-based sentence interpretation abilities and sensitivity to word order. Applied Psycholinguistics, 37, 1281–1309.
63. Rogers R. D., & Monsell S. (1995). Costs of a predictable switch between simple cognitive tasks. Journal of Experimental Psychology: General, 124(2), 207–231.
64. Roid G. H., & Miller L. J. (1997). Leiter International Performance Scale–Revised. Wood Dale, IL: Stoelting.
65. Rosenbaum P. R., & Rubin D. B. (1983). The central role of the propensity score in observational studies for causal effects. Biometrika, 70, 41–55.
66. Rosenbaum P. R., & Rubin D. B. (1984). Reducing bias in observational studies using subclassification on the propensity score. Journal of the American Statistical Association, 79, 516–524.
67. Ross B., Hillyard S. A., & Picton T. W. (2010). Temporal dynamics of selective attention during dichotic listening. Cerebral Cortex, 20(6), 1360–1371. https://doi.org/10.1093/cercor/bhp201
68. Rossion B., & Pourtois G. (2004). Revisiting Snodgrass and Vanderwart's object set: The role of surface detail in basic-level object recognition. Perception, 33, 217–236.
69. Schneider W., Eschman A., & Zuccolotto A. (2002). E-Prime: User's guide. Pittsburgh, PA: Psychology Software.
70. Semel E., Wiig E., & Secord W. (2003). Clinical Evaluation of Language Fundamentals–Fourth Edition. San Antonio, TX: The Psychological Corporation.
71. Shavers V. L. (2007). Measurement of socioeconomic status in health disparities research. Journal of the National Medical Association, 99(9), 1013–1023.
72. Sheng L., & McGregor K. K. (2010). Object and action naming in children with specific language impairment. Journal of Speech, Language, and Hearing Research, 53(6), 1704–1719. https://doi.org/10.1044/1092-4388(2010/09-0180)
73. Spaulding T. J., Plante E., & Vance R. (2008). Sustained selective attention skills of preschool children with specific language impairment: Evidence for separate attentional capacities. Journal of Speech, Language, and Hearing Research, 51, 16–34.
74. Stark R. E., & Montgomery J. (1995). Sentence processing in language-impaired children under conditions of filtering and time compression. Applied Psycholinguistics, 16, 137–164.
75. Storkel H., & Hoover J. (2010). On-line calculator of phonotactic probability and neighborhood density based on child corpora of spoken American English. Behavior Research Methods, Instruments, & Computers, 42, 497–506.
76. Szekely A., D'Amico S., Devescovi A., Federmeier K., Herron D., Iyer G., … Bates E. (2005). Timed action and object naming. Cortex, 41(1), 7–25.
77. Tager-Flusberg H., & Cooper J. (1999). Present and future possibilities for defining a phenotype for specific language impairment. Journal of Speech, Language, and Hearing Research, 42(5), 1275–1278.
78. Tanenhaus M., Spivey-Knowlton M., Eberhard K., & Sedivy J. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268, 1632–1634.
79. Tomblin J. B., Freese P. R., & Records N. L. (1992). Diagnosing specific language impairment in adults for the purpose of pedigree analysis. Journal of Speech and Hearing Research, 35(4), 832–843.
80. Tomblin J. B., Records N. L., Buckwalter P., Zhang X., Smith E., & O'Brien M. (1997). Prevalence of specific language impairment in kindergarten children. Journal of Speech, Language, and Hearing Research, 40(6), 1245–1260.
81. Vitevitch M., & Luce P. (2004). A web-based interface to calculate phonotactic probability for words and nonwords in English. Behavior Research Methods, Instruments, & Computers, 36, 481–487.
82. Wallace G., & Hammill D. (2000). Comprehensive Receptive and Expressive Vocabulary Test–Second Edition. Austin, TX: Pro-Ed.
83. Watkins R. V., Kelly D. J., Harbers H. M., & Hollis W. (1995). Measuring children's lexical diversity differentiating typical and impaired language learners. Journal of Speech and Hearing Research, 38, 1349–1355. https://doi.org/10.1044/jshr.3806.1349
