Author manuscript; available in PMC: 2013 Dec 16.
Published in final edited form as: J Speech Lang Hear Res. 2009 Apr;52(2). doi: 10.1044/1092-4388(2009/07-0189)

Statistical Learning in Children With Specific Language Impairment

Julia L Evans 1, Jenny R Saffran 2, Kathryn Robe-Torres 2
PMCID: PMC3864761  NIHMSID: NIHMS330584  PMID: 19339700

Abstract

Purpose

In this study, the authors examined (a) whether children with specific language impairment (SLI) can implicitly compute the probabilities of adjacent sound sequences, (b) whether this ability is related to degree of exposure, (c) whether it is domain specific or domain general, and (d) whether it is related to vocabulary.

Method

Children with SLI and normal language controls (ages 6;5–14;4 [years;months]) listened to 21 min of a language in which transitional probabilities within words were higher than those between words. In a second study, children with SLI and age–nonverbal IQ matched controls (ages 8;0–10;11) listened to the same language for 42 min and to a second 42-min "tone" language with the same statistical structure as the "speech" language.

Results

After 21 min, the SLI group's performance was at chance, whereas performance for the control group was significantly greater than chance and significantly correlated with receptive and expressive vocabulary knowledge. In the 42-min speech condition, the SLI group's performance was significantly greater than chance and correlated with receptive vocabulary but was no different from chance in the analogous 42-min tone condition. Performance for the control group was again significantly greater than chance in both the 42-min speech and tone conditions.

Conclusions

These findings suggest that poor implicit learning may underlie aspects of the language impairments in SLI.

Keywords: specific language impairment, implicit learning, statistical learning, child language development, child language disorders


Specific language impairment (SLI) refers to a group of children who have difficulty acquiring and using language in the absence of hearing, intellectual, emotional, or neurological impairments. Several theories have been proposed as accounts of SLI. Modularist accounts of SLI view grammar as distinct from other aspects of the language system, and have focused on characterizing grammatical impairments seen in SLI. These accounts include proposals that children with SLI are late in setting specific parameters of their grammatical system (e.g., Rice, Wexler, & Cleave, 1995), are missing specific grammatical features (e.g., Gopnik & Crago, 1991), or have a representational deficit for dependent relationships (e.g., van der Lely, 2003). Nonmodular accounts of SLI ask whether the auditory perceptual, working memory, and/or speed of processing deficits seen in SLI may instead be candidate causal mechanisms of the language disorders (e.g., Bishop, 1992; Ellis Weismer, Evans, & Hesketh, 1999; Gathercole & Baddeley, 1990, 1993; Joanisse & Seidenberg, 1998; Leonard et al., 2007; Merzenich et al., 1996; Miller, Kail, Leonard, & Tomblin, 2001; Montgomery, 2000; Tallal et al., 1996).

Ullman has recently proposed a modularist account of SLI based on his declarative/procedural (DP) model of language acquisition (Ullman, 2001a, 2001b; Ullman, 2004; Ullman & Pierpont, 2005). The DP model starts from the assumption that the language system is made up of a memorized mental lexicon and a structurally distinct computational mental grammar (Pinker, 1994). In the DP model, Ullman proposes that the mental lexicon is made up of memorized, arbitrary, word-specific knowledge that is learned by the declarative memory system. The mental grammar, in contrast, is supported by the brain structures that underlie one type of implicit memory—procedural learning—that is involved in the learning and processing of rule-like relations in the context of real-time serial, abstract, sensorimotor, or cognitive sequences (Eichenbaum & Cohen, 2001; Reber & Squire 1994; Schacter & Tulving, 1994; Squire, 1992; Squire & Knowlton, 2000; Squire, Knowlton, & Musen, 1993). With respect to SLI, Ullman proposes that at least a subset of these children has impaired procedural learning systems. Based on a review of SLI research and findings from a series of studies of inflectional morphology use and motor functions from the KE family, Ullman and colleagues (Ullman & Gopnik, 1999; Ullman & Pierpont, 2005) argue that children with SLI have abnormalities in the brain structures that support procedural learning in general, and more specifically, the learning of rule-based mental grammar. Accordingly, they argue that the morphosyntactic deficits, working memory involvement, and motor control impairments seen in children with SLI, in conjunction with relative strengths in the lexicon, are evidence of not only an impaired procedural learning system but also the adaptive reliance on intact brain structures that support the declarative memory system (Ullman & Pierpont, 2005).

Implicit learning—broadly construed, learning without awareness—is not a single construct but a complex, multifaceted phenomenon. It does not depend upon a single brain system but includes a collection of learning capacities that, in addition to procedural motor learning (e.g., serial reaction time [SRT]), includes probabilistic learning of categories, prototype abstraction, statistical learning, and artificial grammar learning (cf. Ashby & Ell, 2001; Perruchet & Pacton, 2006; Reber, 1989; Reber, Stark, & Squire, 1998; Squire & Knowlton, 2000; Squire & Zola, 1996). In all of these tasks, learning is incremental, unconscious, and expressed through changes in the behavioral response such as generalization to novel sequences in artificial grammar learning or increased speed of response in SRT tasks.

Two recent implicit learning studies—one using a serial reaction time (SRT) task with adolescents with SLI and one using artificial grammar learning tasks with college students with language/learning disabilities (L/LD)—suggest that implicit learning may be impaired in children with SLI (Plante, Gomez, & Gerken, 2002; Tomblin, Mainela-Arnold, & Zhang, 2007). The prototypical task for studying perceptual–motor procedural learning is the SRT task (Nissen & Bullemer, 1987). In this task, participants press a button corresponding to the spatial location of a visual stimulus on each trial. Blocks of trials in which the spatial location occurs in a random order are followed by blocks of trials in which the order is either deterministic or probabilistic. A decrease in response times is viewed as evidence of learning. Using this SRT task to study procedural learning in 85 adolescents diagnosed with SLI in kindergarten and 47 normal language (NL) peers, Tomblin et al. observed decreased reaction times for correct trials over a period of four blocks of 100 trials across all of the participants. Learning was significantly slower in the group with SLI as compared with the NL controls. Moreover, the NL group demonstrated the expected learning pattern in which learning was initially rapid followed by a gradual approach toward an asymptote, whereas the shape of the learning curve for the SLI group consisted of a period of slowed responses prior to the onset of rapid learning, with no evidence of an asymptote by the last trial block.

Further analysis of the children's composite language scores from kindergarten revealed that the rate and pattern of learning during the SRT task differed between the SLI and NL groups. Reaction times for the children with SLI who had primarily grammatical deficits in kindergarten were significantly slower than the NL group. In contrast, no differences were found in reaction times during either the random or pattern blocks for children with SLI who had poor vocabulary abilities in kindergarten as compared with children having normal vocabulary abilities in kindergarten. What is significant about the Tomblin et al. (2007) study is that the SRT task used visual materials and required no overt use of language, eliminating the involvement of auditory processing or phonological information processing. This association between language status and visual-motor procedural learning is particularly striking given the adolescents' long-term histories of poor language learning.

In the second implicit learning study, Plante et al. (2002) used a classic artificial grammar learning (AGL) task to investigate sensitivity to rules governing word order in college students with and without L/LD. Using a finite-state grammar to generate grammatical strings, novel CVC words were combined into sentences of three to six words. Eighty “sentences” were recorded (each of 10 strings occurring 8 times in random order with the constraint that identical strings never appeared consecutively). An additional set of 20 word strings recorded from each CVC list was used during testing. Half of the strings were generated by the same finite-state grammar but were not heard during the exposure phase, and half contained violations of the finite-state grammar. Comparison of the two groups' ability to judge the grammaticality of a new set of test sentences after exposure to the training sentences indicated that participants in the L/LD group were unable to determine which word strings were generated by the finite-state grammar and which were not. Not only were the L/LD participants significantly worse than the NL controls, they made significantly more false positive identifications than the NL group, indicating that the individuals with L/LD were unable to implicitly learn a novel grammar from exposure to exemplars from that grammar.

Although Ullman argues that knowledge in the mental lexicon is acquired via the declarative memory system and not the implicit memory system, words themselves consist of mappings between sounds and meanings. In order to successfully perform such mappings, infants and young children must first discover the sounds that cohere into words via fine-tuning of native language speech perception, discovery of native language phonological structure, and segmenting words from fluent speech (Saffran & Graf Estes, 2006). A growing body of research shows that implicit learning is evident and critical in this earliest stage of children's word learning, as infants begin to discover words within the continuous stream of speech.

Although implicit learning in adults has typically been studied using tasks such as priming, motor learning (e.g., SRT), category learning, and artificial grammar learning using finite-state grammars (cf. Reber, 1989; Squire & Knowlton, 2000), a paradigmatic measure of implicit learning during infancy and childhood is statistical learning—the tracking of patterns of regularity across input elements such as syllables, tones, or shapes (e.g., Aslin, Saffran, & Newport, 1998; Kirkham, Slemmer, & Johnson, 2002; Saffran, Aslin, & Newport, 1996; Saffran, Johnson, Aslin, & Newport, 1999). In these tasks, learners are exposed to a stream of elements that are organized according to a set of simple statistical regularities (e.g., the syllable /pa/ tends to be followed by the syllable /bi/). In the absence of instruction or reinforcement, infants and young children rapidly detect the regularities that link together elements in the stream, as evidenced by discrimination of familiar versus novel sequences of elements at test.

Research in this vein supports the claim that adults, children, and infants can implicitly learn statistical regularities that are hypothesized to be useful for certain aspects of language learning (e.g., Saffran, 2003). In particular, the discovery of statistical patterns linking together speech sounds may play a role in word segmentation: finding word boundaries in fluent speech. A number of studies suggest that the transitional probabilities between syllables or phonemes assist learners in discovering which sounds cohere together into words and which sounds span word boundaries (e.g., Aslin et al., 1998; Saffran et al., 1996; Saffran, Newport, & Aslin, 1996; Thiessen & Saffran, 2003). The ability to track statistical regularities in the speech stream and to use this information to discover word boundaries appears to be connected to subsequent mapping of those sounds to novel meanings (Graf Estes, Evans, Alibali, & Saffran, 2007). Importantly, these abilities are not limited to language. For example, adults and infants are able to track the statistics of tone sequences when they are organized into three-tone “words” following the same kinds of structure as the linguistic speech streams (e.g., Saffran et al., 1999; Saffran & Griepentrog, 2001).

Of particular relevance to the current project is a study by Saffran, Newport, Aslin, Tunick, and Barrueco (1997), in which children and adults were exposed to fluent speech while performing a cover task (coloring on the computer). Participants received no instruction regarding the content of the speech stream, nor that a test would follow. Despite the implicit nature of this task, both the adults and 6- to 9-year-old children were able to discriminate words from nonword foils following incidental exposure to the speech stream. This task presumably mimics the implicit learning performed by infants, who are obviously learning in the absence of experimenter-directed attention or instructions.

Notably, we see many individual differences on these tasks. For example, in the Saffran et al. (1997) article, the scores for both adults and children in Experiment 1 (21-min exposure to the fluent speech prior to test) ranged between 41% and 83%, with a range of 50% to 97% in Experiment 2 (double exposure over 2 days). It is unknown whether these individual differences are meaningful. Do they reflect real differences in learning skill and/or language skill, or are they merely artifacts of the test procedure? To the extent that learning of this kind—as measured in a laboratory task—is central to aspects of language acquisition, we might expect these individual differences to be correlated with native-language abilities.

Thus, we asked whether children with SLI are impaired in their ability to keep track of the sequences of syllables they hear in a stream of speech—an implicit learning ability fundamental to the earliest stages of word learning: the discovery of word boundaries in continuous speech. If so, then some of the linguistic challenges faced by children with SLI may go beyond implicit artificial grammar learning and serial recall to include difficulties tracking the sound sequences that are highly consistent versus those that are only occasional, an ability that is particularly relevant to the problem of word segmentation and potentially many other aspects of language learning.

There is considerable debate regarding the extent to which knowledge learned via the implicit memory system is abstract and domain general or is highly constrained and modality dependent. Clearly, this issue is highly relevant to the characterization of so-called “specific” language impairment. Some research suggests that the learning that occurs in implicit tasks is abstract and is not directly tied to the surface features or sensory instantiations of the stimuli (Altmann, Dienes, & Goode, 1995; Marcus, Vijayan, Rao, & Vishton, 1999; Reber, 1989). However, implicit memory systems appear to be sensitive to modality- or stimuli-specific features of the input (e.g., Chang & Knowlton, 2004; Christiansen & Curtin, 1999; McClelland & Plaut, 1999; Perruchet, Tyler, Galland, & Peereman, 2004). With respect to statistical learning, we see many commonalities across modalities as well as important differences that suggest that statistical learning is constrained by modality and/or by our perceptual systems (e.g., Conway & Christiansen, 2005; Saffran, 2002; Saffran & Thiessen, 2007).

If the characterization of SLI includes deficits in domain-general implicit learning abilities, we should see poor learning for both speech and matched non-speech stimuli in children with SLI. The following studies were designed to examine performance of children with SLI on the statistical learning word segmentation task.

In Experiment 1, we asked the following two questions:

  • (a) Are children with SLI sensitive to and able to store quantitative aspects of distributional information about a language corpus? Specifically, are children with SLI able to implicitly track statistical information in running speech to discover word boundaries?

  • (b) Is statistical word learning in children with and without SLI related to measures of expressive and receptive vocabulary? Links between performance on this laboratory task and native-language ability would be consistent with the claim that statistical learning is actually used for language acquisition.

In Experiment 2, we asked the following two questions:

  • (a) Is statistical word learning ability related to frequency or degree of exposure? Specifically, do children with SLI require greater exposure to the speech stream to discover word boundaries, as compared with age-matched peers?

  • (b) Is statistical language learning performance in children with SLI similar to their performance on an analogous nonlinguistic task (tone-word segmentation)? That is, how domain-specific are the learning impairments in SLI?

The answers to these questions will inform theories concerning both typical language acquisition and the nature of the deficit(s) in SLI.

Experiment 1

Method

Participants

A total of 113 children participated in Experiment 1: 35 children with SLI (ages 6;5–14;4) and 78 typically developing children with NL (ages 5;7–12;10). All children met the following criteria: (a) nonverbal intelligence of 85 or greater, as measured by the Leiter International Performance Scale (LIPS; Roid & Miller, 1997); (b) normal hearing, based on ASHA (1997) guidelines for hearing screening, on the day of the experiment (at 500, 1000, 2000, and 4000 Hz at 20 dB); (c) normal corrected vision; (d) normal oral and speech motor abilities; and (e) monolingual English-speaking status.

For the children with SLI, all subtests of the Clinical Evaluation of Language Fundamentals–Third Edition (CELF-3; Semel, Wiig, & Secord, 1995) were administered, and for the NL group, the three expressive language subtests and the Concepts and Directions receptive language subtest from the CELF-3 were administered. In addition, to investigate whether statistical word learning is related to lexical knowledge, the Peabody Picture Vocabulary Test-Third Edition (PPVT-III; Dunn & Dunn, 1997) and the Expressive Vocabulary Test (EVT; Williams, 1997) also were administered to both groups. For the group with SLI, composite expressive language scores from the CELF-3 were at or below 1.5 SD below the mean. For the NL group, standardized language measures from the CELF-3 and the PPVT-III, as well as the EVT, were all at or above age-level expectations. The SLI and NL groups' performance differed significantly on all standardized measures (see Table 1).1

Table 1.

Age and standardized scores for language assessment measures for the specific language impairment (SLI) and the normal language (NL) groups.

Variable                  SLI (n = 35)            NL (n = 78)             Comparison
                          M      SD    Range       M      SD    Range      t(110)    p
Age (in months)           115    21    77–172      95     21    67–154     4.15      < .001*
Leiter–Nonverbal IQ (a)   97     8     87–119      109    10    85–139     6.70      < .001*
CELF-3 ELS (b)            71     11    50–84       109    12    86–150     10.69     < .001*
CELF-3 RLS (c)            68     14    50–98       N/A    N/A   N/A
PPVT-III (d)              89     11    66–112      109    11    87–135     8.91      < .001*
EVT (e)                   81     9     61–109      104    10    85–124     11.56     < .001*

Note. For each standardized measure, age-scaled scores have a mean of 100 and an SD of 15. IQ = intelligence quotient; N/A = not applicable.

(a) Leiter International Performance Scale (Roid & Miller, 1997).
(b) CELF-3 ELS = Clinical Evaluation of Language Fundamentals–3: Expressive Language Score (Semel et al., 1995).
(c) CELF-3 RLS = Clinical Evaluation of Language Fundamentals–3: Receptive Language Score (Semel et al., 1995).
(d) PPVT-III = Peabody Picture Vocabulary Test, Third Edition (Dunn & Dunn, 1997).
(e) EVT = Expressive Vocabulary Test (Williams, 1997).

Stimuli

The stimuli for this study were the same as those used by Saffran et al. (1997). The language consisted of 12 CV syllables made up of seven phonemes: the consonants p, t, b, and d and the vowels a, i, and u. These CV syllables were combined into six trisyllabic "words" (dutaba, tutibu, pidabu, patubi, bupada, and babupu). The language was constructed to ensure that the transitional probabilities between syllables within the words were higher than the transitional probabilities between syllables across word boundaries. Because some of the syllables occurred in more words than others (e.g., bu occurred in four words, whereas ta occurred in only one word), the within-word transitional probabilities ranged from 0.37 to 1.0. The transitional probabilities across the word boundaries ranged from 0.1 to 0.2.

Three hundred tokens of each of the six words were combined in a random sequence, with the constraint that the same word could not occur twice in a row, and synthesized into a stream of speech using the MacInTalk (Apple, Cupertino, CA) speech synthesizer. The result was a 4,536-syllable continuous speech stream that contained no acoustic cues to word boundaries: coarticulation between syllables was equivalent throughout, and there were no prosodic cues and no pauses between or within the words. The speech stream was produced using a female monotone voice speaking at 216 syllables per min.

In addition to the speech stream, six nonword foils were created (batipa, bidata, dupitu, pubati, tapuba, and tipabu). The nonwords were made up of syllables from the language's syllable inventory that never followed each other in the speech stream. The transitional probabilities of the syllable sequences for the nonwords were thus zero. The test stimuli—both words and nonwords—were synthesized in citation form using the MacInTalk speech synthesizer. The six words and six nonwords were paired exhaustively to generate a 36-trial, two-alternative forced-choice test; half of the test items contained a word as the first member of a pair, and half contained a nonword as the first member of a pair. The test items were recorded onto a digital minidisk for subsequent playback. The stimulus words and their transitional probabilities, as well as the nonword foils, are listed in Table 2.

Table 2.

Words and nonword foils.

Words Nonwords
dutaba (1.0) batipa
tutibu (.75) bidata
pidabu (.65) dupitu
patubi (.50) pubati
bupada (.42) tapuba
babupu (.37) tipabu

Note. Transitional probability for each of the words in parentheses.
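
To make the statistical structure of the exposure language concrete, the following Python sketch estimates forward transitional probabilities from a syllable stream assembled from the six words in Table 2. It is an illustrative reconstruction under the constraints described above (300 tokens per word, no immediate repetitions), not the authors' stimulus-generation code, and the computed values will vary slightly with the random ordering.

```python
# Illustrative sketch (not the authors' stimulus-generation code): estimate
# forward transitional probabilities TP(B|A) = count(A B) / count(A) from a
# syllable stream built out of the six trisyllabic words in Table 2.
import random
from collections import Counter

WORDS = [["du", "ta", "ba"], ["tu", "ti", "bu"], ["pi", "da", "bu"],
         ["pa", "tu", "bi"], ["bu", "pa", "da"], ["ba", "bu", "pu"]]

def build_stream(tokens_per_word=300, seed=1):
    """Concatenate word tokens in random order, never repeating a word twice in a row."""
    rng = random.Random(seed)
    remaining = {i: tokens_per_word for i in range(len(WORDS))}
    stream, prev = [], None
    while any(remaining.values()):
        options = [i for i, n in remaining.items() if n > 0 and i != prev]
        if not options:  # rare dead end near the end of the stream: allow a repeat
            options = [i for i, n in remaining.items() if n > 0]
        prev = rng.choice(options)
        remaining[prev] -= 1
        stream.extend(WORDS[prev])
    return stream

def transitional_probabilities(stream):
    """TP for every attested syllable pair, within and across word boundaries."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

if __name__ == "__main__":
    tps = transitional_probabilities(build_stream())
    for word in WORDS:
        within = [round(tps[(a, b)], 2) for a, b in zip(word, word[1:])]
        print("".join(word), within)  # within-word TPs; cross-boundary TPs are much lower
```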

Procedure

The procedure was the same as that used in Saffran et al. (1997). While the tape of the continuous speech stream played in the background, children were asked to draw a picture using a computer-coloring program, Kid Pix 2 (Houghton Mifflin Harcourt Learning Technology and Riverdeep International Education, Ltd, Beijing, China). Children listened to the tape for a total of 21 min. During the 21 min of exposure, the examiner sat quietly behind the children to ensure that they sustained interest in the drawing task and were not distracted. At the end of the 21 min, children were tested using a forced-choice paradigm. On each trial, children heard a pair of trisyllables (a word paired with a nonword) and were asked to choose the item in each pair that sounded more like the sounds they heard while drawing. Prior to the testing phase, to ensure that the children understood the task, children were presented with practice trials containing word–nonword pairs derived from words in English and were asked to identify which one sounded more like a word (e.g., com-pu-ter vs. pu-ter-com). Following the practice trials, the children were presented with the 36 test pairs. All of the children were able to successfully complete the practice trials, and no children were excluded from the study due to their inability to understand the task.
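
The test phase described above can also be sketched in code. The fragment below builds the 36 two-alternative forced-choice trials by pairing each word with each nonword foil exhaustively, with the word presented first on half of the trials, and scores a hypothetical set of responses; it is a schematic reconstruction of the design described in the text, not the authors' presentation software.

```python
# Sketch of the exhaustive word/nonword pairing described in the text:
# 6 words x 6 nonword foils = 36 two-alternative forced-choice trials,
# with the word presented first on half of the trials.
import itertools
import random

WORDS = ["dutaba", "tutibu", "pidabu", "patubi", "bupada", "babupu"]
FOILS = ["batipa", "bidata", "dupitu", "pubati", "tapuba", "tipabu"]

def make_test_trials(seed=1):
    pairs = list(itertools.product(WORDS, FOILS))   # 36 word/foil pairings
    rng = random.Random(seed)
    rng.shuffle(pairs)
    trials = []
    for i, (word, foil) in enumerate(pairs):
        word_first = (i % 2 == 0)                   # word first on half of the trials
        first, second = (word, foil) if word_first else (foil, word)
        trials.append({"first": first, "second": second, "target": word})
    return trials

def percent_correct(responses, trials):
    """responses: one 'first'/'second' choice per trial; chance performance is 50%."""
    hits = sum(1 for r, t in zip(responses, trials) if t[r] == t["target"])
    return 100.0 * hits / len(trials)
```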

Results and Discussion

The results for the SLI and NL groups are presented in Figure 1. An analysis of covariance (ANCOVA) with age and nonverbal IQ as covariates revealed that the SLI group's ability to attend to transitional probabilities in the speech stream was significantly poorer than the NL group's: F(1, 109) = 5.6, p < .01, partial η2 = .05, ω = .65. The mean for the children with SLI was 52% (SD = 11%) and for the NL group was 58% (SD = 13%) where chance equals 50%. Single-sample t tests (two-tailed) indicated that the SLI group's performance did not differ from chance, t(34) = 0.97, p = .33, whereas the typical children's performance was significantly better than chance, t(77) = 5.53, p < .001.
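
The analyses reported in this paragraph can be outlined with standard statistical tools. The sketch below assumes one percent-correct score per child stored in a pandas DataFrame with hypothetical column names (score, group, age, nviq); it illustrates an ANCOVA with age and nonverbal IQ as covariates and single-sample t tests against the 50% chance level, and is not the authors' analysis script.

```python
# Hedged sketch of the reported analyses: ANCOVA on percent-correct scores with
# age and nonverbal IQ as covariates, plus two-tailed single-sample t tests
# against chance (50%). Column names (score, group, age, nviq) are hypothetical.
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

def analyze(df: pd.DataFrame):
    # ANCOVA: effect of group on score, controlling for age and nonverbal IQ
    model = smf.ols("score ~ C(group) + age + nviq", data=df).fit()
    ancova_table = sm.stats.anova_lm(model, typ=2)

    # Single-sample t tests against the 50% chance level, one per group
    chance_tests = {
        group: stats.ttest_1samp(sub["score"], 50.0)
        for group, sub in df.groupby("group")
    }
    return ancova_table, chance_tests
```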

Figure 1. Percent correct performance for children with specific language impairment (SLI) and normal language (NL) controls in Experiment 1 (21-min speech statistical word learning task).

One question is whether the strength of the transitional probabilities between the words played a role in how well individual words were learned. As noted earlier, the transitional probabilities within the words ranged from 0.37 to 1.0. Analysis of the individual target words for the SLI group indicated that none of the six words were learned significantly better than chance. For the NL group, all six words were learned significantly better than expected by chance (p < .05), suggesting that after only 21 min of exposure, the typically developing children were easily able to exploit transitional probabilities to discover the words embedded in the speech stream.
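
One way to approximate the word-by-word analyses described here is sketched below: each child contributes a proportion-correct score for the test pairs involving a given word, and each word is then tested against the 50% chance level across children. This is a plausible realization under those assumptions, not necessarily the authors' exact procedure.

```python
# Hedged sketch of a per-word chance test: rows are children, columns are the six
# words, and each cell is the proportion of test pairs involving that word on
# which the word was chosen over its foil.
import numpy as np
from scipy import stats

WORD_LABELS = ["dutaba", "tutibu", "pidabu", "patubi", "bupada", "babupu"]

def per_word_chance_tests(accuracy):
    """accuracy: array of shape (n_children, 6); tests each word against chance (.5)."""
    accuracy = np.asarray(accuracy, dtype=float)
    return {
        word: stats.ttest_1samp(accuracy[:, j], 0.5)
        for j, word in enumerate(WORD_LABELS)
    }
```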

There is reason to believe that the ability to track sequential statistics should be related to lexical knowledge. Challenges segmenting words from the speech stream would likely slow lexical development. Indeed, infant segmentation skill, broadly construed (i.e., including cues other than just sequential statistics), predicts later vocabulary outcomes (Newman, Bernstein Ratner, Jusczyk, Jusczyk, & Dow, 2006; Singh & Nestor, 2006; Singh, Nestor, Paulson, & Strand, 2007). We thus asked whether children's ability to track the transitional probabilities in the word segmentation task is related to their vocabulary. Pearson correlations for age, nonverbal IQ, and raw scores from measures of expressive (EVT) and receptive vocabulary (PPVT-III) indicate that the NL group's performance on the statistical word learning task was significantly correlated with age (p < .05), receptive vocabulary (p < .01), and expressive vocabulary (p < .01; Table 3). Given that age and statistical learning performance were significantly correlated, we conducted a second correlation analysis controlling for age. After removing age, performance on the statistical learning task remained significantly correlated with expressive vocabulary (r = .28, p < .001) and receptive vocabulary (r = .23, p < .05) for the typically developing children. Pearson correlations for age, nonverbal IQ, and raw scores from measures of expressive and receptive vocabulary (PPVT-III) indicate that the SLI group's performance on the statistical word learning task was not significantly correlated with age (p = .45), nonverbal IQ (p = .24), receptive vocabulary (p = .40), or expressive vocabulary (p = .17).
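
The age-partialed correlations reported above can be reproduced, in outline, by residualizing both variables on age before correlating them, which is equivalent to a first-order partial correlation. The sketch below uses hypothetical variable names (swl for the statistical word learning score, ppvt_raw, evt_raw, age) and is offered as an illustration, not the authors' analysis code.

```python
# Sketch of a first-order partial correlation (controlling for age) computed by
# residualizing both variables on the control variable and correlating residuals.
import numpy as np
from scipy import stats

def partial_corr(x, y, control):
    """Pearson r (and p) between x and y after removing the linear effect of `control`."""
    x, y, control = (np.asarray(v, dtype=float) for v in (x, y, control))
    residualize = lambda v: v - np.polyval(np.polyfit(control, v, 1), control)
    return stats.pearsonr(residualize(x), residualize(y))

# Hypothetical usage, given a data frame `df` with columns swl, ppvt_raw, evt_raw, age:
# r, p = partial_corr(df["swl"], df["ppvt_raw"], df["age"])
```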

Table 3.

Pearson correlations for NL and SLI groups, Experiment 1 (21-min speech).

Variable              Age in months   Nonverbal IQ standard score   PPVT-III raw score   EVT raw score

NL group (n = 75)
Age in months
Nonverbal IQ          −0.16
PPVT-III              0.80**          0.13
EVT                   0.74**          0.07                          0.82**
SWL 21-min speech     0.25*           0.06                          0.33**               0.39**

SLI group (n = 35)
Age in months
Nonverbal IQ          −0.12
PPVT-III              0.70**          0.24
EVT                   0.78**          0.10                          0.71**
SWL 21-min speech     −0.01           0.12                          0.02                 −0.16

Note. SWL = Statistical Word Learning.

* p < .05 (two-tailed). ** p < .01 (two-tailed).

The results of Experiment 1 indicate that after 21 min of exposure to a continuous speech stream, children with SLI were not able to use statistical information to implicitly discover word boundaries based on differences in transitional probabilities. In contrast, typically developing children were able to discover word boundaries after only 21 min of exposure, and this ability to use statistical information in the speech stream was also significantly correlated with both expressive and receptive vocabulary knowledge.

One question is whether the children with SLI were exposed to the speech stream for a long enough duration to discover the statistical patterns among adjacent sound sequences. Saffran et al. (1997) observed a significant increase in performance for both adults and school-aged children when they were exposed to the 21 min of the language on 2 consecutive days. Thus, it is not clear from Experiment 1 if children with SLI are unable to track the differences in transitional probabilities due to the impoverished nature of the input or if they are simply inefficient at computing the statistics, requiring more exposure to the speech stream. If the latter is the case, the children with SLI may be able to compute the statistics given longer exposure to the input. We tested this hypothesis in Experiment 2a. In Experiment 2b, we asked whether the pattern of performance observed for children with SLI in the speech exposure condition in Experiment 2a is unique to speech processing. Specifically, in Experiment 2b, we used a nonlinguistic task designed to be analogous to the word segmentation task, in which tones were substituted for the syllables in the “words,” generating a fluent stream of tones (Saffran et al., 1999).

Experiment 2

Method

Participants

Thirty children who participated in Experiment 1 were brought back into the lab 6 months later to participate in Experiments 2a and 2b. The children were selected to form two age- and nonverbal-IQ-matched groups: 15 children with SLI (ages 8;0–10;11) and 15 age- and nonverbal-IQ-matched (CA-NIQ) controls. The CA-NIQ group did not differ from the SLI group in age, t(28) = 0.35, p = .72, or nonverbal IQ, t(28) = 0.28, p = .77 (see Table 4). The children were seen for two visits with an average of 10–14 days between visits. On each visit, children participated in either Experiment 2a or Experiment 2b, with order of participation counterbalanced.

Table 4.

Age and standardized scores for language assessment measures for the SLI and the chronological age–nonverbal IQ matched (CA-NIQ) groups.

Variable                  SLI (n = 15)            CA-NIQ (n = 15)          Comparison
                          M      SD    Range       M      SD    Range       t(28)    p
Age in months             111    10    99–130      113    18    96–154      0.35     .72
Leiter–Nonverbal IQ (a)   101    8     89–119      102    7     91–113      0.28     .77
CELF-3 ELS (b)            72     12    50–84       109    12    88–132      8.62     < .01
CELF-3 RLS (c)            71     12    50–90       N/A    N/A   N/A
PPVT-III (d)              93     11    69–112      109    10    95–126      4.14     < .01
EVT (e)                   84     9     69–109      104    12    84–124      5.04     < .01

Note. For each standardized measure, age-scaled scores have a mean of 100 and an SD of 15. IQ = intelligence quotient; N/A = not applicable.

(a) Leiter International Performance Scale (Roid & Miller, 1997).
(b) CELF-3 ELS = Clinical Evaluation of Language Fundamentals–3: Expressive Language Score (Semel et al., 1995).
(c) CELF-3 RLS = Clinical Evaluation of Language Fundamentals–3: Receptive Language Score (Semel et al., 1995).
(d) PPVT-III = Peabody Picture Vocabulary Test, Third Edition (Dunn & Dunn, 1997).
(e) EVT = Expressive Vocabulary Test (Williams, 1997).

Stimuli and Procedures

Experiment 2a

The stimuli and procedures for Experiment 2a were identical to those of Experiment 1, with the exception that the children listened to the same materials twice, without a break, for 42 continuous min. As in Experiment 1, prior to the testing phase, children were presented with practice trials containing word–nonword pairs derived from words in English (e.g., com-pu-ter vs. pu-ter-com). Following the practice trials, the children were then presented with the test trials from Experiment 1. Again, all of the children were able to successfully complete all of the practice trials, and no children were excluded from the experiment due to their inability to understand the task.

Experiment 2b

The materials for Experiment 2b were identical to Tone Language 1 from Saffran et al. (1999). The tone stream was constructed out of 11 pure tones taken from the same octave (starting at middle C within a chromatic set), with the same duration (0.33 s), created using the sine wave generator in SoundEdit 16 (Adobe, San Jose, CA). The tones were combined into groups of three to form six tone words (GG#A, CC#D, D#ED, FCF#, DFE, and ADB). The tone words were not constructed in accordance with the rules of standard musical composition and did not resemble any paradigmatic melodic fragments. Transitional probabilities between tones within words averaged 0.64 (range = 0.25–1.00). In contrast, transitional probabilities between tones spanning word boundaries averaged 0.14 (range = 0.05–0.60). Although these two distributions did overlap, this overlap was rare, occurring for only 3 of the 30 across-word tone instances.

The six tone words were concatenated in a random order, with no silent junctures between words, to create six different blocks containing 18 words each. No word occurred twice in a row. The six blocks were, in turn, concatenated to produce a 7-min continuous stream of tones. As with the speech stimuli used in Experiments 1 and 2a, there were no acoustic markers of tone-word boundaries. The only consistent cue to the beginnings and ends of the tone words was the transitional probabilities between tones. In addition to the tone stream, six tone nonword foils were created (see Table 5).
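
For concreteness, the following Python sketch synthesizes a short tone stream with the structure described above: 0.33-s pure tones drawn from the chromatic octave starting at middle C, grouped into the six three-tone words and concatenated with no silent junctures. The sample rate and amplitude are assumptions, and the sketch stands in for, rather than reproduces, the original SoundEdit 16 materials.

```python
# Illustrative sketch (not the original SoundEdit 16 materials): synthesize a
# continuous tone stream from three-tone "words" built from 0.33-s pure tones.
import numpy as np
from scipy.io import wavfile

SR = 44100          # sample rate in Hz (an assumption, not reported in the article)
DUR = 0.33          # tone duration in seconds, as described in the text
C4 = 261.63         # middle C, in Hz
CHROMATIC = {name: C4 * 2 ** (i / 12) for i, name in enumerate(
    ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"])}

TONE_WORDS = [["G", "G#", "A"], ["C", "C#", "D"], ["D#", "E", "D"],
              ["F", "C", "F#"], ["D", "F", "E"], ["A", "D", "B"]]

def pure_tone(note, sr=SR, dur=DUR):
    t = np.arange(int(sr * dur)) / sr
    return 0.5 * np.sin(2 * np.pi * CHROMATIC[note] * t)

def synthesize_stream(word_sequence, sr=SR):
    """Concatenate tone words with no silent junctures between them."""
    return np.concatenate([pure_tone(note, sr) for word in word_sequence for note in word])

if __name__ == "__main__":
    # A short demonstration stream; the actual exposure stream was far longer.
    stream = synthesize_stream(TONE_WORDS * 3)
    wavfile.write("tone_stream_demo.wav", SR, (stream * 32767).astype(np.int16))
```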

Table 5.

Tone words and tone nonword foils.

Tone words Nonwords
GG#A (1.0) AC#E
CC#D (.75) F#G#E
D#ED (.65) GCD#
FCF# (.50) C#BA
DFE (.42) C#FD
ADB (.37) G#BA

Note. Transitional probability for each of the tone words in parentheses.

The procedure was identical to that used in Experiment 2a. While the tape of the continuous tone stream played in the background, children were asked to draw a picture using a computer-coloring program, Kid Pix 2. Children listened to the tape for a total of 42 min. At the end of the 42 min, children heard pairs of "word" and "nonword" tone sequences and were asked to choose the sound sequence that sounded more familiar. Prior to the testing phase, to ensure that the children understood the task, children were presented with practice trials containing tone sequences derived from familiar children's nursery rhymes presented in the correct or incorrect order (e.g., the tune, without words, from "Mary Had a Little Lamb" vs. the tune from "Lamb Little Mary Had a"). Following the practice trials, the children were then presented with the 36 test pairs. Again, all of the children were able to successfully complete all of the practice trials.

Results

The results for the SLI and CA-NIQ groups are presented in Figure 2. A repeated measures ANCOVA with age and nonverbal IQ as covariates revealed a main effect for group, F(1, 26) = 7.4, p = .003, partial η2 = .37, ω = .91, across the speech and tone conditions, with overall performance for the children with SLI being poorer than that of their typical language peers. A group × condition interaction was also observed: the performance of the children with SLI did not differ from that of the CA-NIQ group in the speech condition (Experiment 2a), F(1, 26) = 2.95, p = .11, partial η2 = .09, ω = .34, but the two groups did differ significantly in the tone condition (Experiment 2b), F(1, 26) = 12.3, p = .002, partial η2 = .09, ω = .92. In the speech condition, the mean was 56.2% (SD = 10%) for the SLI group and 64.4% (SD = 15%) for the CA-NIQ group, where chance equals 50%. Single-sample t tests (two-tailed) calculated for each group individually indicated that both groups performed significantly better than would be expected by chance: SLI, t(14) = 2.3, p < .05; CA-NIQ, t(14) = 3.74, p < .01. In the tone condition, the mean score for the SLI group was 48% (SD = 11%) and for the CA-NIQ control group was 66% (SD = 15%), where chance equals 50%. A single-sample t test (two-tailed) for the SLI group indicated that their performance after 42 min of exposure to the tone stimuli was no different from chance, t(14) = 0.62, p = .54, whereas the CA-NIQ group's performance was again significantly better than chance, t(14) = 4.09, p < .001.

Figure 2. Percent correct performance for children with SLI and age–nonverbal IQ matched (CA-NIQ) controls in Experiment 2a (42-min speech) and Experiment 2b (42-min tone) statistical word learning tasks.

In Experiment 1, we observed that, after 21 min of exposure to the speech stream, the CA-NIQ group's performance was significantly correlated with both expressive and receptive vocabulary. However, we observed no relationship between statistical learning and vocabulary knowledge for the children with SLI after 21 min of exposure. We thus asked whether the increased exposure to the speech stream in Experiment 2a would reveal a relationship between statistical learning and vocabulary knowledge for the children with SLI. Pearson correlations (two-tailed) for age, nonverbal IQ, and raw scores for the EVT and PPVT-III indicated that the SLI group's statistical word learning performance, after double the exposure to the speech stream, was not significantly correlated with expressive vocabulary, age, or IQ but was significantly correlated with receptive vocabulary (p < .03; see Table 6).

Table 6.

Pearson correlations for Experiment 2 (42-min speech), children with SLI (n = 15).

Variable              Age in months   Nonverbal IQ standard score   PPVT-III raw score   EVT raw score
Age in months
Nonverbal IQ          0.07
PPVT-III              0.46            0.04
EVT                   0.52*           0.23                          0.13
SWL 42-min speech     0.41            0.15                          0.52†                −0.07

* p < .05 (two-tailed). † p < .03 (two-tailed).

In Experiment 1, we also observed that the typically developing children learned all six words after only 21 min of exposure. With doubled exposure, the performance of the CA-NIQ group did not differ from their performance after 21 min of exposure (64% at 21 min vs. 64% at 42 min), t(14) = 0.04, p = .96, and they again learned all six words in Experiment 2a. The children with SLI learned 2 of the 6 words at, or approaching, a level significantly greater than chance: patubi, with a transitional probability of 0.5 (p < .05), and bupada, with a transitional probability of 0.42 (p = .07). That the children with SLI did not learn any of the words having the highest transitional probabilities suggests, at least on the surface, that even with double the exposure time, they were unable to use transitional probabilities to discover the word boundaries within the stream of speech. However, 10 of the 15 children with SLI had performance that was greater than 50% after double the exposure, whereas 5 of the 15 children with SLI had performance that was at or below 50%. One question is whether the pattern of learning for the children with SLI whose performance was above 50% differed from those whose performance was not above chance. In the 42-min speech condition, the children with SLI with above-chance performance learned five of the six target words at, or approaching, a level significantly greater than chance (dutaba, patubi, bupada, babupu, p < .05; pidabu, p = .09). In contrast, after double the exposure, the remaining children with SLI learned only 2 of the 6 words at a level significantly greater than chance (dutaba, pidabu, p < .05, vs. tutibu, p = .55; patubi, p = .63; bupada, p = .83; and babupu, p = .72). Thus, it appears that the children with SLI are tracking at least some statistical information in the input.

Importantly, however, for the above-chance SLI group, one word was not learned. This word, tutibu (p = .37), had the second highest transitional probability: 0.75. If the children with SLI were using transitional probability as a cue to discover word boundaries within the speech stream, why then, with double the exposure to the speech stream, were they unable to discover the boundaries for the word having the second highest transitional probability? Analysis of the response patterns of the children revealed that the target/foil test trials where the target and the foil had the identical vowel sequence—tutibu/dupitu—had the highest error rate, with 7 of the 10 children with SLI incorrectly choosing the foil dupitu. For the children to correctly choose the target over the foil, they not only had to track the transitional probabilities in the speech stream during the exposure phase but they also had to have a memory of the target words that contained enough phonological detail to enable them to differentiate the target from the foils during the testing phase. Taken together, the pattern of results for the children with SLI suggests that, with double the exposure, they are able to segment the speech stream to some extent but that their knowledge of the newly learned words may not contain sufficient phonological detail to enable them to differentiate newly learned target words from highly phonologically similar foils.

In Experiment 2b, we asked whether the pattern of performance observed for children with SLI in Experiment 2a is unique to speech. Specifically, we used a nonlinguistic task designed to be analogous to the word segmentation task, in which tones are substituted for the syllables in the “words,” generating a continuous stream of tones (Saffran et al., 1999). This task allowed us to ask whether statistical learning outcomes for children with SLI and CA-NIQ controls are the same for speech and for an analogous nonlinguistic task (e.g., tone-word segmentation) after 42 min of exposure.

As noted earlier, because the tone language is analogous to the speech stimuli (Saffran et al., 1996), we can compare the children's performance across the two modalities. The results of the ANCOVA discussed previously show that the CA-NIQ group performed above chance in both the speech and tone conditions, whereas performance of the SLI group differed as a function of the input stimuli, with above-chance performance only for the speech stimuli and not for the tone stimuli. In the 42-minute speech condition, the CA-NIQ group was able to learn all six words, and in the 42-minute tone condition, these same children were able to learn five of the six tone sequences at a level significantly greater than chance. Replicating Saffran et al.'s (1999) adult results, the one tone sequence not learned by the CA-NIQ group (the ADB tone sequence) had the lowest transitional probability (.37). Although the performance for the SLI group was no different from chance in the 42-minute tone condition, one of the six tone sequences was learned at a level greater than chance (D#ED; p < .05; transitional probability of .67). The results from Experiment 2b show that typically developing children are able to group sequences of auditory "events" in the same manner regardless of whether the input is linguistic (e.g., syllables) or nonlinguistic (e.g., tones). In Saffran et al. (1999), the adults heard the tone stimuli for a total of 21 min. With twice as much exposure, the typically developing children were able to learn five of the six three-tone sequences embedded within the tone stream. However, the children with SLI were less successful with the tone sequences, with overall performance no different from chance.

An important question is whether the children with SLI in Experiments 2a and 2b differed in some fundamental way from those children with SLI in Experiment 1. The children with SLI in Experiments 2a and 2b did not differ from the children with SLI in Experiment 1 with respect to Age, F(1, 49) = 0.32, p = .59; nonverbal IQ, F(1, 49) = 2.5, p = .11; Expressive Language, F(1, 49) = 0.35, p = .56; or Receptive Language, F(1, 49) = 0.65, p = .42. Thus, differences in age, nonverbal IQ, receptive, and/or expressive language abilities do not account for the differences in performance for the children with SLI in Experiment 1 versus Experiments 2a and 2b. This pattern of results suggests that the increased exposure to the speech stimuli in Experiment 2a played a key role in the performance of the children with SLI.

A second important question is whether the children whose performance was above chance differed in some fundamental way from those children whose performance was not. In Experiment 1, the children with SLI whose performance was above chance (n = 17) did not differ from the children with SLI whose performance was below chance (n = 18) in Age, F(1, 34) = 0.0, p = .99; nonverbal IQ, F(1, 34) = 0.35, p = .55; Expressive Language, F(1, 34) = 0.34, p = .55; or Receptive Language, F(1, 34) = 0.32, p = .57. The typically developing children whose performance was above chance (n = 26) also did not differ from the typically developing children whose performance was below chance in Age, F(1, 76) = 1.6, p = .20; nonverbal IQ, F(1, 76) = 0.64, p = .42; or Expressive Language, F(1, 76) = 0.81, p = .36.

Similarly, in Experiment 2a, the children with SLI whose performance was above chance (n = 10) did not differ from the children with SLI whose performance was below chance (n = 5) for Age, F(1, 14) = 0.53, p = .47; Expressive Language, F(1, 14) = 0.0, p = .95; or Receptive Language, F(1, 34) = 0.16, p = .69, but did differ in nonverbal IQ, F(1, 14) = 5.7, p = .03. Although nonverbal IQ was higher for the children with SLI whose performance was above chance than for the children whose performance was below chance, it was not significantly correlated with statistical word learning performance. For the CA-NIQ group, 13 of the 15 children had performance that was above chance, precluding statistical analysis.

In Experiment 2b, the children with SLI whose performance was above 50% (n = 5) did not differ from the children whose performance was below 50% (n = 10) with respect to Age, F(1, 14) = 0.79, p = .38; nonverbal IQ, F(1, 14) = 0.0, p = .98; Expressive Language, F(1, 14) = 0.18, p = .67; or Receptive Language, F(1, 14) = 3.9, p = .07. For the CA-NIQ group, the children whose performance was above 50% (n = 11) also did not differ from the children whose performance was below 50% for Age, F(1, 13) = 1.3, p = .27; nonverbal IQ, F(1, 13) = 0.05, p = .95; or Expressive Language, F(1, 13) = 0.98, p = .90. At this time, given the available behavioral data, it is not clear what factors differentiate the children who were able to segment the input using statistical sequential information implicitly from children who were unable to segment the speech stream.

General Discussion

In these studies, we asked if children with SLI are both sensitive to the quantitative aspects of distributional information in a language corpus and able to store this information to a degree that supports vocabulary development—specifically, whether they are able to implicitly track statistical information to discover word boundaries in running speech. We also asked if this ability is related to frequency or degree of exposure and to vocabulary knowledge, and if it appears to be a domain-general or domain-specific skill. The findings from our studies support the hypothesis that typically developing children are equipped with computational tools that can harness statistical information to detect word boundaries, that this ability is related to measures of receptive and expressive word knowledge, and that it appears to be a domain-general ability, operating broadly similarly across the speech and tone conditions. The findings for the children with SLI are less clear cut and suggest that the computational mechanism that allows unimpaired children to use statistical information to discover word boundaries does not function as effectively in children with SLI.

Although the children with SLI were able to track the transitional probabilities in the speech condition with double the exposure, they still had difficulty. Specifically, they were unsuccessful at differentiating newly learned target words from highly similar-sounding foils during the testing phase of the task. One possibility is that children with SLI were unable to retain in memory a sufficiently detailed phonological form of the target words. This is consistent with recent work suggesting that the phonological representations of words in the lexicons of children with SLI are more holistic and less well specified than those of typically developing children (Mainela-Arnold, Evans, & Coady, 2008). If one takes the view that representation and processing of all aspects of language (e.g., speech, words, and grammar) are dependent on a computational system where learning takes place over distributed representations, occurring through changes in the strength of these representations as a result of statistical contingencies in the environment (e.g., Elman et al., 1996), then the pattern of performance for the children with SLI suggests that even with double the exposure, the representations of newly learned words may be phonologically underspecified.

The difficulties that the children with SLI experienced segmenting the speech stream may also have been compounded by the nature of the speech stimuli themselves, which were impoverished with respect to the cues that are available to children for discovering word boundaries in naturally occurring speech. In natural speech, a variety of cues, such as prosody and coarticulation, occur in conjunction with transitional probabilities, aiding the listener in the discovery of word boundaries. Not only were these redundant cues unavailable for the children in the speech condition, but the speech stimulus was synthesized speech, possibly adding to the difficulties experienced by the children with SLI. There is a growing body of evidence that shows that children with SLI are significantly impaired across a range of speech perception tasks when the stimuli consist of synthesized speech in contrast to natural speech (Coady, Kluender, & Evans, 2005; Coady, Evans, Mainela-Arnold, & Kluender, 2007; Evans, Viele, & Kass, 2002; Joanisse et al., 2000). The results of Experiment 2b indicate, however, that the poor performance of the children with SLI in Experiments 1 and 2a was not due solely to the degraded speech stimuli. Experiment 2b consisted of highly perceptible tones, yet the children with SLI were still unable to discover the tone-word boundaries.

Taken together with prior research (Saffran et al., 1997), these studies make it clear that implicit learning is a robust phenomenon in typically developing children. Specifically, typically developing children can track transitional probabilities from a stream of speech with a level of efficiency and specificity that allows them not only to learn words having varying transitional probabilities (e.g., 0.37–1.0) but also to differentiate these newly learned words from highly similar-sounding foils during testing despite the degraded nature of the synthetic speech stimuli. The difficulty in sequence learning by children in the SLI group suggests that (a) difficulties in tracking the statistical properties of sounds in children with SLI are not limited to speech and (b) the nonlinguistic materials were actually more difficult for the children with SLI than the linguistic materials, perhaps due to the relative novelty of the tone sequences. In any event, these findings only highlight the robustness of the implicit learning mechanism in typical children and the fragile and ineffective nature of this mechanism in children with SLI.

On the surface, the difference in the performance of the children with SLI across the 42-min speech and tone conditions suggests that implicit learning is not a domain-general mechanism in children with SLI. However, the speech and tone conditions differed in important ways that may have resulted in different performance in the two conditions. First, all of the children in Experiments 2a and 2b had prior exposure to the speech stimuli because of their prior participation in Experiment 1. Thus, the children's exposure time to the speech stimuli, over the span of Experiments 1 and 2a, was actually 63 min. It could be that the children with SLI actually require not 40+ but 60+ min of exposure to the speech stream before they are able to discover the word boundaries. If the children had received 60+ min of exposure to the tone language, their performance in the speech and tone conditions may have been similar. Note, however, that there were 6 months between the two experiments. For Experiment 1 participation to have affected Experiment 2 performance, the children would have had to maintain these representations over a long time interval.

A second important difference is that there was overlap between the within-word and across-word transitional probabilities in the tone stimuli that was not present in the speech stimuli. This overlap was extremely rare, occurring for only 3 of the 30 across-word tone pairs, where the probability was .6 (when the word GG#A happened to be followed by DFE, as the cross-boundary sequence AD also occurred in the word ADB). The occurrence of such overlaps may have made segmentation more difficult in the tone language as compared with the speech language. However, recall that the typically developing children showed equivalent performance in the tone and speech conditions in Experiment 2. There is, thus, some factor that made the tone condition disproportionately harder for the children with SLI compared with their typically developing peers.

Importantly, however, there were children with SLI whose performance was above chance in all three experiments. An important question is whether these children differed in some fundamental way from the children whose performance was not above chance. With the exception of nonverbal IQ in Experiment 2a, which was not significantly correlated with statistical word learning abilities, the children did not differ by age, IQ, or receptive/expressive vocabulary or language abilities. The fact that differences in intelligence, age, or language did not account for differences in statistical word learning abilities suggests that the children differed in some other fundamental way. One possibility is that the children differed in their working memory capacity and/or attentional resources. Studies comparing implicit learning in high- versus low-load conditions for adults show that statistical word learning performance is significantly poorer when working memory/attentional resources are reduced and are not available to be dedicated to the discovery of word boundaries (Ludden & Gupta, 2000). Children with SLI have reduced working memory capacity as compared with that of their peers (cf. Coady & Evans, 2008; Graf Estes, Evans, & Else-Quest, 2007; Leonard et al., 2007; Montgomery & Evans, 2009). Moreover, there is a growing body of evidence suggesting that children with SLI may also have problems with selective auditory attention, especially at the earliest stages of sensory processing (e.g., Helzer, Champlin, & Gillam, 1996; Montgomery, Evans, & Gillam, 2009; Stevens, Sanders, & Neville, 2006; Uwer, Albrecht, & von Suchodoletz, 2002). It may be that the children differed in attention or working memory abilities and that these differences played a critical role in their statistical word learning; this is clearly an important issue that warrants further investigation.

Another interesting outcome in Experiment 1 was that statistical learning was significantly correlated with age for the typical children (aged 5;7–12;10). Given that very young children and even infants succeed on statistical word learning tasks, this is a somewhat curious finding. There are, however, differences both in the complexity of the exposure languages and in the methodologies used to measure statistical learning in infants and adults that may account for differences in the implicit learning skills of infants, children, and adults. In infant studies, exposure languages are generally much less complex than those used with adults. For example, Graf Estes et al. (2007) used a language consisting of four CVCV target words. This contrasts sharply with our study and others, where the language consisted of six CVCVCV target words. In addition to differences in the complexity of the exposure languages, learning in infant studies is often measured with paradigms such as preferential looking, which is presumably less cognitively demanding than the two-alternative forced-choice paradigm used in our study. In light of the role that working memory capacity and attention play in statistical learning (Ludden & Gupta, 2000), this mechanism may be sensitive to and influenced by developmental changes in attention and working memory capacity.

The differences in the individual words learned by the children with and without SLI, as well as the differences in the relationships between statistical word learning and expressive and receptive vocabulary in the two groups, indicate that the pattern of learning in children with SLI differs somewhat from that of typically developing children. Tomblin et al. (2007) also observed differences in the pattern of sequence learning in children with SLI compared with their NL controls. Specifically, their NL control group exhibited an initial rapid rate of learning followed by a gradual approach toward an asymptote. In contrast, the learning curve for the SLI group in Tomblin et al.'s study showed a period of slowed responses prior to the rapid onset of learning and no evidence of an asymptote by the end of training, although performance on the last block of trials did not differ between the SLI and NL groups. What is interesting is the slowing in reaction times for the SLI group after the initial block of pattern-sequence trials. Tomblin et al. suggest that in the early stages of learning the new sequences, the representations are less stable in children with SLI than in NL children. Several studies of lexical and sentence processing in individuals with SLI likewise suggest that there may be less suppression of competing candidate representations than is observed in typically developing children (Mainela-Arnold, Evans, & Coady, 2007, 2008; McMurray, Samuelson, Lee, & Tomblin, 2006). Tomblin et al. propose that their participants with SLI initially generated multiple candidate targets, which then competed for priority as the dominant representation, producing instability that was resolved only after sufficient training. Although our study does not allow us to examine the stability of the representations of the single word or tone sequence learned by the children with SLI, our findings are consistent with Tomblin et al.'s work and strongly support the contention that the learning challenges facing children with SLI are not limited to linguistic sequences (e.g., Tomblin et al., 2007).

The term implicit learning characterizes a heterogeneous collection of learning capacities that, in addition to perceptual-motor learning (e.g., procedural memory), includes probabilistic category learning, statistical learning, artificial grammar learning, and prototype abstraction (Perruchet & Pacton, 2006; Squire & Zola, 1996). The findings from our experiments, taken together with the work by Tomblin et al. (2007) and Plante et al. (2002), suggest that Ullman's procedural learning deficit hypothesis of SLI may need to be extended to include deficits in other domains of implicit learning in children with SLI. Ullman's DP model assumes that the acquisition and use of the form–meaning aspects of language (e.g., the lexicon) are supported by the declarative memory system. Our results indicate that aspects of vocabulary learning are also supported by the implicit system, particularly at the earliest stages of word learning, when infants begin to discover word boundaries in the stream of speech around them. The findings from the current study also bear on the domain specificity of implicit learning, suggesting that even when the statistical structure of the input is identical, differences in the surface features of the stimuli (e.g., speech vs. tones) result in different learning patterns in typically developing children. Finally, these results suggest that future studies need to examine implicit learning across the visual, auditory, and perceptual-motor modalities in order to characterize more precisely the challenges facing learners with SLI.

Acknowledgments

This work was supported by National Institute on Deafness and Other Communication Disorders (NIDCD) Grant R0105263, awarded to the first author; by National Institute of Child Health and Human Development (NICHD) Grant R01HD37466, awarded to the second author; and by NICHD Grant P30HD03352, awarded to the Waisman Center. We thank Lisbeth Simon-Heilmann, Karin Phillips, and Kristen Ryan for their assistance with this research. We also thank the children and the families who generously contributed their time.

Footnotes

1. Although their CELF-3 scores were within the normal range, three children were excluded from the original group of 81 typically developing participants because their PPVT or EVT scores were not at or above age-level expectations.

References

1. Altmann G, Dienes Z, Goode A. Modality independence of implicitly learned grammatical knowledge. Journal of Experimental Psychology: Learning, Memory, and Cognition. 1995;21:899–912.
2. American Speech-Language-Hearing Association. Guidelines for audiologic screening. 1997. Available from www.asha.org/policy.
3. Aslin RN, Saffran JR, Newport EL. Computation of conditional probability statistics by 8-month-old infants. Psychological Science. 1998;9:321–324.
4. Ashby FG, Ell SW. The neurobiology of category learning. Trends in Cognitive Sciences. 2001;5:204–210. doi: 10.1016/s1364-6613(00)01624-7.
5. Bishop DV. The underlying nature of specific language impairment. Journal of Child Psychology and Psychiatry and Allied Disciplines. 1992;33:3–66. doi: 10.1111/j.1469-7610.1992.tb00858.x.
6. Chang GY, Knowlton BJ. Visual feature learning in artificial grammar classification. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2004;30:714–722. doi: 10.1037/0278-7393.30.3.714.
7. Christiansen MH, Curtin S. Transfer of learning: Rule acquisition or statistical learning? Trends in Cognitive Sciences. 1999;3:289–290. doi: 10.1016/s1364-6613(99)01356-x.
8. Coady JA, Evans JL. Uses and interpretations of non-word repetition tasks in children with and without specific language impairments (SLI). International Journal of Language & Communication Disorders. 2008;43:1–40. doi: 10.1080/13682820601116485.
9. Coady J, Evans JL, Mainela-Arnold E, Kluender K. Children with specific language impairments perceive speech most categorically when tokens are natural and meaningful. Journal of Speech, Language, and Hearing Research. 2007;50:41–57. doi: 10.1044/1092-4388(2007/004).
10. Coady JA, Kluender KR, Evans JL. Categorical perception of speech by children with specific language impairments. Journal of Speech, Language, and Hearing Research. 2005;48:944–959. doi: 10.1044/1092-4388(2005/065).
11. Conway CM, Christiansen MH. Modality-constrained statistical learning of tactile, visual and auditory sequences. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2005;31:24–39. doi: 10.1037/0278-7393.31.1.24.
12. Dunn L, Dunn L. Peabody Picture Vocabulary Test. 3rd ed. AGS; Circle Pines, MN: 1997.
13. Eichenbaum H, Cohen NJ. From conditioning to conscious recollection: Memory systems of the brain. Oxford University Press; New York: 2001.
14. Ellis Weismer S, Evans JL, Hesketh L. An examination of verbal working memory capacity in children with specific language impairment. Journal of Speech, Language, and Hearing Research. 1999;42:1249–1260. doi: 10.1044/jslhr.4205.1249.
15. Elman JL, Bates EA, Johnson M, Karmiloff-Smith A, Parisi D, Plunkett K. Rethinking innateness: A connectionist perspective on development. MIT Press; Cambridge, MA: 1996.
16. Evans JL, Viele K, Kass R, Tang F. Grammatical morphology and perception of synthetic and natural speech in children with specific language impairments. Journal of Speech, Language, and Hearing Research. 2002;45:494–504. doi: 10.1044/1092-4388(2002/039).
17. Gathercole S, Baddeley A. Phonological memory deficits in language disordered children: Is there a causal connection? Journal of Memory and Language. 1990;29:336–360.
18. Gathercole S, Baddeley A. Working memory and language processing. Erlbaum; Hove, United Kingdom: 1993.
19. Gopnik M, Crago M. Familial aggregation of a developmental language disorder. Cognition. 1991;39:1–50. doi: 10.1016/0010-0277(91)90058-c.
20. Graf Estes K, Evans JL, Else-Quest M. Differences in nonword repetition performance of children with and without specific language impairment: A meta-analysis. Journal of Speech, Language, and Hearing Research. 2007;50:177–195. doi: 10.1044/1092-4388(2007/015).
21. Graf Estes K, Evans JL, Alibali MW, Saffran JR. Can infants map meaning to newly segmented words? Statistical segmentation and word learning. Psychological Science. 2007;18:254–260. doi: 10.1111/j.1467-9280.2007.01885.x.
22. Helzer J, Champlin C, Gillam R. Auditory temporal resolution in specifically language-impaired and age-matched children. Perceptual and Motor Skills. 1996;83:1171–1181. doi: 10.2466/pms.1996.83.3f.1171.
23. Joanisse MF, Manis FR, Keating P, Seidenberg MS. Language deficits in dyslexic children: Speech perception, phonology, and morphology. Journal of Experimental Child Psychology. 2000;71:30–60. doi: 10.1006/jecp.1999.2553.
24. Joanisse MF, Seidenberg MS. Specific language impairment: A deficit in grammar or processing? Trends in Cognitive Sciences. 1998;2:240–247. doi: 10.1016/S1364-6613(98)01186-3.
25. Kirkham NZ, Slemmer JA, Johnson SP. Visual statistical learning in infancy: Evidence of a domain general learning mechanism. Cognition. 2002;83:B35–B42. doi: 10.1016/s0010-0277(02)00004-5.
26. Leonard L, Ellis Weismer S, Miller C, Francis D, Tomblin B, Kail R. Speed of processing, working memory, and language impairment in children. Journal of Speech, Language, and Hearing Research. 2007;50:408–428. doi: 10.1044/1092-4388(2007/029).
27. Ludden D, Gupta P. Zen in the art of language acquisition: Statistical learning and the less is more hypothesis. In: Gleitman LR, Joshi AK, editors. Proceedings of the 22nd Annual Conference of the Cognitive Science Society. Erlbaum; Hillsdale, NJ: 2000. pp. 812–817.
28. Mainela-Arnold E, Evans JL, Coady JA. The impact of lexical competition and response inhibition control on performance on a sentence span task in children with SLI. Poster presented at the Symposium on Research in Child Language Disorders; Madison, WI. Jun, 2007.
29. Mainela-Arnold E, Evans JL, Coady J. Lexical representation in children with SLI: Evidence from a frequency-manipulated gating task. Journal of Speech, Language, and Hearing Research. 2008;51:381–393. doi: 10.1044/1092-4388(2008/028).
30. Marcus GF, Vijayan S, Rao SB, Vishton PM. Rule learning by seven-month-old infants. Science. 1999 Jan 1;283:77–79. doi: 10.1126/science.283.5398.77.
31. McClelland JL, Plaut DC. Does generalization in infant learning implicate abstract algebra-like rules? Trends in Cognitive Sciences. 1999;3:166–168. doi: 10.1016/s1364-6613(99)01320-0.
32. McMurray B, Samuelson V, Lee S, Tomblin JB. Eye movements reveal the time course of spoken word recognition in normal and language-impaired adolescents. Poster presented at the Symposium on Research in Child Language Disorders; Madison, WI. Jun, 2006.
33. Merzenich MM, Jenkins WM, Johnston P, Schreiner C, Miller SL, Tallal P. Temporal processing deficits of language-learning impaired children ameliorated by training. Science. 1996 Jan 5;271:77–80. doi: 10.1126/science.271.5245.77.
34. Miller CA, Kail R, Leonard LB, Tomblin JB. Speed of processing in children with specific language impairment. Journal of Speech, Language, and Hearing Research. 2001;44:416–433. doi: 10.1044/1092-4388(2001/034).
35. Montgomery J. Relation of working memory to off-line and real-time sentence processing in children with specific language impairment. Applied Psycholinguistics. 2000;21:117–148.
36. Montgomery J, Evans J. Complex sentence comprehension and working memory in children with specific language impairment. Journal of Speech, Language, and Hearing Research. 2009;52:xxx–xxx. doi: 10.1044/1092-4388(2008/07-0116).
37. Montgomery J, Evans J, Gillam R. Relation of auditory attention on complex sentence comprehension in children with specific language impairment: A preliminary study. Applied Psycholinguistics. 2009;30:123–151.
38. Newman RS, Bernstein Ratner N, Jusczyk AM, Jusczyk PW, Dow KA. Infants' early ability to segment the conversational speech signal predicts later language development: A retrospective analysis. Developmental Psychology. 2006;42:643–655. doi: 10.1037/0012-1649.42.4.643.
39. Nissen MJ, Bullemer P. Attentional requirements of learning: Evidence from performance measures. Cognitive Psychology. 1987;19:1–32.
40. Perruchet P, Pacton S. Implicit learning and statistical learning: One phenomenon, two approaches. Trends in Cognitive Sciences. 2006;10:233–238. doi: 10.1016/j.tics.2006.03.006.
41. Perruchet P, Tyler MD, Galland N, Peereman R. Learning nonadjacent dependencies: No need for algebraic-like computations. Journal of Experimental Psychology: General. 2004;133:573–583. doi: 10.1037/0096-3445.133.4.573.
42. Pinker S. The language instinct. William Morrow; New York: 1994.
43. Plante E, Gomez R, Gerken L. Sensitivity to word order cues by normal and language/learning disabled adults. Journal of Communication Disorders. 2002;35:453–462. doi: 10.1016/s0021-9924(02)00094-1.
44. Reber AS. Implicit learning and tacit knowledge. Journal of Experimental Psychology: General. 1989;118:219–235.
45. Reber PJ, Squire LR. Parallel brain systems for learning with and without awareness. Learning and Memory. 1994;2:1–13.
46. Reber PJ, Stark C, Squire LR. Contrasting cortical activity associated with category memory and recognition memory. Learning and Memory. 1998;5:420–428.
47. Rice M, Wexler K, Cleave P. Specific language impairment as a period of extended optional infinitive. Journal of Speech and Hearing Research. 1995;38:850–863. doi: 10.1044/jshr.3804.850.
48. Roid M, Miller L. Leiter International Performance Scale–Revised. Stoelting; Wood Dale, IL: 1997.
49. Saffran JR. Constraints on statistical language learning. Journal of Memory and Language. 2002;47:172–196.
50. Saffran JR. Statistical language learning: Mechanisms and constraints. Current Directions in Psychological Science. 2003;12:110–114.
51. Saffran JR, Aslin RN, Newport EL. Statistical learning by 8-month-old infants. Science. 1996 Dec 13;274:1926–1928. doi: 10.1126/science.274.5294.1926.
52. Saffran JR, Graf Estes KM. Mapping sound to meaning: Connections between learning about sounds and learning about words. In: Kail R, editor. Advances in child development and behavior. Elsevier; New York: 2006. pp. 1–38.
53. Saffran JR, Griepentrog GJ. Absolute pitch in infant auditory learning: Evidence for developmental reorganization. Developmental Psychology. 2001;37:74–85.
54. Saffran JR, Johnson EK, Aslin RN, Newport EL. Statistical learning of tone sequences by human infants and adults. Cognition. 1999;70:27–52. doi: 10.1016/s0010-0277(98)00075-4.
55. Saffran JR, Newport EL, Aslin RN. Word segmentation: The role of distributional cues. Journal of Memory and Language. 1996;35:606–621.
56. Saffran JR, Newport EL, Aslin RN, Tunick RA, Barrueco S. Incidental language learning: Listening (and learning) out of the corner of your ear. Psychological Science. 1997;8:101–105.
57. Saffran JR, Thiessen ED. Domain-general learning capacities. In: Hoff E, Shatz M, editors. Handbook of language development. Blackwell; Cambridge: 2007. pp. 68–86.
58. Schacter DL, Tulving E, editors. Memory systems 1994. MIT Press; Cambridge, MA: 1994.
59. Semel E, Wiig E, Secord W. Clinical Evaluation of Language Fundamentals–Third Edition (CELF-3). Psychological Corporation; San Antonio, TX: 1995.
60. Singh L, Nestor SS. Concurrent and predictive validity of infant word segmentation tasks. Paper presented at the Boston University Conference on Language Development; Boston, MA. Nov, 2006.
61. Singh L, Nestor SS, Paulson J, Strand K. Predicting childhood vocabulary from infant word segmentation abilities. Paper presented at the Boston University Conference on Language Development; Boston, MA. Nov, 2007.
62. Squire LR. Memory and the hippocampus: A synthesis from findings with rats, monkeys, and humans. Psychological Review. 1992;99:195–231. doi: 10.1037/0033-295x.99.2.195.
63. Squire LR, Knowlton BJ. The medial temporal lobe, the hippocampus, and the memory systems of the brain. In: Gazzaniga MS, editor. The new cognitive neurosciences. MIT Press; Cambridge, MA: 2000. pp. 765–780.
64. Squire LR, Knowlton B, Musen G. The structure and organization of memory. Annual Review of Psychology. 1993;44:453–495. doi: 10.1146/annurev.ps.44.020193.002321.
65. Squire LR, Zola SM. Structure and function of declarative and nondeclarative memory systems. Proceedings of the National Academy of Sciences of the United States of America. 1996;93:13515–13522. doi: 10.1073/pnas.93.24.13515.
66. Stevens C, Sanders L, Neville H. Neurophysiological evidence for selective auditory attention deficits in children with specific language impairment. Brain Research. 2006;1111:143–152. doi: 10.1016/j.brainres.2006.06.114.
67. Tallal P, Miller SL, Bedi G, Byma G, Wang X, Nagarajan SS, et al. Language comprehension in language-learning impaired children improved with acoustically modified speech. Science. 1996 Jan 5;271:81–84. doi: 10.1126/science.271.5245.81.
68. Thiessen ED, Saffran JR. When cues collide: Statistical and stress cues in infant word segmentation. Developmental Psychology. 2003;39:706–716. doi: 10.1037/0012-1649.39.4.706.
69. Tomblin B, Mainela-Arnold E, Zhang X. Procedural learning in children with and without specific language impairment. Language Learning and Development. 2007;3:269–293.
70. Ullman M. The declarative/procedural model of lexicon and grammar. Journal of Psycholinguistic Research. 2001a;30:37–69. doi: 10.1023/a:1005204207369.
71. Ullman M. The neural basis of lexicon and grammar in first and second language: The declarative/procedural model. Bilingualism: Language and Cognition. 2001b;4:105–122.
72. Ullman M. Contributions of memory circuits to language: The declarative/procedural model. Cognition. 2004;92:231–270. doi: 10.1016/j.cognition.2003.10.008.
73. Ullman MT, Gopnik M. Inflectional morphology in a family with inherited specific language impairment. Applied Psycholinguistics. 1999;20:51–117.
74. Ullman M, Pierpont R. Specific language impairment is not specific to language: The procedural deficit hypothesis. Cortex. 2005;41:399–433. doi: 10.1016/s0010-9452(08)70276-4.
75. Uwer R, Albrecht R, von Suchodoletz W. Automatic processing of tones and speech stimuli in children with specific language impairment. Developmental Medicine and Child Neurology. 2002;44:527–532. doi: 10.1017/s001216220100250x.
76. van der Lely H. Do heterogeneous deficits require heterogeneous theories? SLI subgroups and the RDDR hypothesis. In: Levy Y, Schaeffer J, editors. Language competence across populations: Toward a definition of specific language impairment. Erlbaum; Mahwah, NJ: 2003. pp. 109–133.
77. Williams K. Expressive Vocabulary Test (EVT). AGS; Circle Pines, MN: 1997.