Author manuscript; available in PMC: 2013 May 1.
Published in final edited form as: Infancy. 2011 Jul 28;17(3):247–271. doi: 10.1111/j.1532-7078.2011.00085.x

Visual Sequence Learning in Infancy: Domain-General and Domain-Specific Associations with Language

PMCID: PMC3329153  NIHMSID: NIHMS306956  PMID: 22523477

Abstract

Research suggests that non-linguistic sequence learning abilities are an important contributor to language development (Conway, Bauernschmidt, Huang, & Pisoni, 2010). The current study investigated visual sequence learning as a possible predictor of vocabulary development in infants. Fifty-eight 8.5-month-old infants were presented with a three-location spatiotemporal sequence of multi-colored geometric shapes. Early language skills were assessed using the MacArthur-Bates CDI. Analyses of children’s reaction times to the stimuli suggest that the extent to which infants demonstrated learning was significantly correlated with their vocabulary comprehension at the time of test and with their gestural comprehension abilities 5 months later. These findings suggest that visual sequence learning may have both domain-general and domain-specific associations with language learning.

Keywords: domain-general, domain-specific, sequence learning, procedural learning, language development


Language acquisition depends on the development of fundamental linguistic and cognitive processes. Because of the range of variability in language skills that exists across both healthy individuals and various clinical populations, being able to pinpoint specific cognitive processes that give rise to such variability can have important theoretical and potentially clinical implications. If language is underwritten by one or more domain-general processes, then the same information processing abilities that contribute to nonverbal cognitive abilities should also contribute to language development (Hollich, Hirsh-Pasek, & Golinkoff, 2000). Clinically this is important because understanding how nonverbal cognitive abilities relate to language development could provide valuable information about possible causes underlying language delays and disorders.

Although previous work suggests that intrinsic cognitive abilities such as working memory contribute to language outcomes (Pisoni, Cleary, Geers, & Tobey, 1999), there has been very little work investigating the contribution of procedural learning processes (see Conway et al., 2010; Misyak, Christiansen, & Tomblin, 2010 for recent studies with adults), and no such work in infants. In this paper, we assess the extent to which performance on a novel visual sequence learning task predicts receptive vocabulary development in healthy infants. Before describing the study, we first review previous evidence for domain-specific and domain-general predictors of language outcomes.

Predictors of Language Outcomes

There is a growing body of research tying various early speech processing abilities to later vocabulary abilities. For instance, Tsao, Liu, and Kuhl (2004) found positive predictive relations between speech discrimination ability at 6 months of age and vocabulary at 13, 16, and 24 months. A number of other studies have also found that speech and language abilities measured in infancy predict later language development (Fernald, Perfors, & Marchman, 2006; Marchman & Fernald, 2008; Newman, Bernstein Ratner, Jusczyk, Jusczyk, & Dow, 2006).

There is also evidence for domain-general predictors of language. A domain-general ability is one that invokes parallel learning mechanisms across different domains (Saffran & Thiessen, 2007). For example, a domain-general ability could be expressed in analogous ways for both auditory speech and visual stimuli. Some examples include recognition memory and speed of processing, which are discussed below. In general, a substantial amount of empirical research has demonstrated a strong link between nonverbal and verbal cognitive abilities (e.g., Plomin & Dale, 2000; although for one recent exception, see Newman et al., 2006).

Visual recognition memory is one example of a skill that has been found to be correlated with cognitive and linguistic outcomes (Colombo, Shaddy, Richman, Maikranz, & Blaga, 2004; Fagan & McGrath, 1981; Rose, Feldman, & Jankowski, 2009). Rose and colleagues argue that children’s abstraction of perceptual features forms the basis for their concepts of objects and that those concepts need to be in place before language may be acquired (Rose, Feldman, & Wallace, 1991). In addition to recognition memory, working memory (Leonard et al., 2007) and speed of processing during a variety of non-linguistic tasks (Miller, Kail, Leonard, & Tomblin, 2001) have been found to explain language ability in children with language impairment. Habituation rate has also been found to relate to language outcomes. Habituation is thought to involve encoding, which is a form of learning (see R. F. Thompson, 2009 for a historical review; see R. F. Thompson & Spencer, 1966 for a classic paper on habituation). Specifically, habituation to a stimulus is thought to reflect a decline in information processing—due to the stimulus being sufficiently encoded—rather than sensory fatigue. Studies on infant habituation rate and novelty preference have demonstrated a link between attention and cognitive outcomes, such that shorter looking times (i.e., faster information processing) were indicative of better vocabulary growth (Colombo et al., 2004; McCall & Carriger, 1993). Other studies of infant attention find similar results (see e.g., Kannass & Oakes, 2008; L. Thompson, Fagan, & Fulker, 1991). Taken together, these studies all suggest a positive relationship between the domain-general abilities of memory, habituation, and attention and language development.

The Role of Sequence Learning in Language

The research discussed so far demonstrates that there are domain-specific (e.g., speech discrimination) as well as domain-general memory and attention abilities (e.g., working memory, habituation behavior) that correlate with concurrent and future language ability. Another type of domain-general cognitive mechanism that may be important for language is sequence learning, a type of procedural or non-declarative memory (Clegg, DiGirolamo, & Keele, 1998). Sequence learning is the ability to acquire knowledge about complex sequential stimulus patterns in virtually any domain (music, speech, visual patterns, etc.), usually occurring under conditions without conscious intent or awareness (Berry & Dienes, 1993; Cleeremans & McClelland, 1991). This kind of learning is often studied using ‘implicit learning’ and ‘statistical learning’ tasks. Although the two terms differ, there is growing consensus that they may reflect the same underlying phenomenon (Perruchet & Pacton, 2006). For instance, Boyer and colleagues (Boyer, Destrebecqz, & Cleeremans, 2005) argued that implicit sequence learning is a type of statistical learning in that it involves “simple associative prediction mechanisms” (p. 383).

Statistical learning involves computing co-occurrence statistics among distributed elements (often occurring in sequence). For example, Saffran and colleagues demonstrated that 8-month-old infants can incidentally learn relatively complex co-occurrence statistics—specifically, transitional probability information—from a continuous speech stream (Saffran, Aslin, & Newport, 1996). Similar results have emerged from studies using non-linguistic auditory stimuli such as tones (Saffran, Johnson, Aslin, & Newport, 1999).
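To make the notion of transitional probabilities concrete, the sketch below is a minimal, hypothetical illustration (not the procedure used in the cited studies); the function name and the toy token stream are invented for this example.

```python
from collections import Counter

def transitional_probabilities(stream):
    """Estimate P(next | current) for each adjacent pair of tokens.

    The transitional probability of Y given X is
    count(X followed by Y) / count(X in any non-final position).
    """
    pair_counts = Counter(zip(stream[:-1], stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(x, y): n / first_counts[x] for (x, y), n in pair_counts.items()}

# Toy "stream" built from two artificial words, "bida" and "kupa";
# single characters stand in for syllables.
stream = list("bidakupabidakupakupabida")
tps = transitional_probabilities(stream)
print(tps[("b", "i")])  # within-word transition: 1.0
print(tps[("a", "k")])  # word-boundary transition: lower (0.6 here)
```

As in the infant studies, within-word transitions are more predictable than transitions across word boundaries, which is the statistical cue learners can exploit.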

While the initial studies focused on statistical learning using auditory stimuli, many subsequent studies have demonstrated statistical learning abilities in infants and adults using visual stimuli. For instance, Kirkham and colleagues (Kirkham, Slemmer, & Johnson, 2002) found that 2-, 5-, and 8-month-old infants were able to learn statistically predictable sequences of visual stimuli in a manner that appeared to be analogous to statistical learning with speech stimuli (see also Fiser & Aslin, 2002; Johnson et al., 2009; Kirkham, Slemmer, Richardson, & Johnson, 2007).

Although sequence learning and statistical learning have been suggested to be important for language acquisition, few studies have directly examined the relationship between such learning abilities and language outcomes. Recently, Conway et al. demonstrated that visual non-linguistic sequence learning abilities were correlated with language outcomes in a group of deaf children with cochlear implants (Conway, Pisoni, Anaya, Karpicke, & Henning, 2011). The present study aims to extend this finding to infants with normal hearing. In addition, this study aims to determine whether visual sequence learning is associated with the very early stages of language development such that it could be used as an early predictor of outcomes.

The Current Study

In the present study, we investigated visual sequence learning (VSL) and its connection to language development in 8.5-month-old infants. We used a novel VSL task that relies on reaction time to assess how well infants learned a simple repeating 3-item spatiotemporal sequence. The task is similar to paradigms used by Haith and colleagues (e.g., Wentworth & Haith, 1998; Wentworth, Haith, & Hood, 2002), McMurray (e.g., McMurray & Aslin, 2004), and Kirkham (Kirkham et al., 2007), but was modeled more directly after the paradigm in Clohessy, Posner, and Rothbart (2001). We used a 3-item temporal sequence (rather than the 2-item sequences that have been used in most infant studies that relied on reaction time) because it is more complex than a 2-item sequence, and therefore more likely to map onto cognitive processes that we were interested in (e.g., language acquisition, which involves complex sequences).

The VSL task assesses infants’ ability to learn a sequence of spatial locations. The prediction was that as infants learned the sequence they would get faster at orienting to the next stimulus location in the sequence. At the time of participation, we also used a receptive language measure, the MacArthur-Bates Communicative Development Inventory (Fenson et al., 2006), to probe the relation between VSL ability and language comprehension ability, which is developing well before infants begin to speak. Finally, additional language measures were collected at a later time point—at approximately 13.5 months old—to investigate the predictive relation between VSL and language comprehension several months after participating in the study.

Method

Participants

The participants were 58 infants (32 female). On the day of testing infants ranged in age from 8.0 to 9.8 months (M=8.6 months) and all had passed their newborn hearing screening. An additional 11 infants (7 female) were tested, but were excluded from analyses for crying/fussing (9), failing to look at the monitor on the right side (1), or for falling asleep during the study (1).

Apparatus

The VSL task was conducted within a custom-built double-walled IAC sound booth approximately 6 feet in width. Infants were tested while seated on a caregiver’s lap in front of a 55-inch wide-aspect TV monitor, with two 19-inch Dell computer monitors, one on either sidewall (see Figure 1). Infants were positioned so that the monitors were approximately at eye level; the side monitors were at an angle of 57 degrees. Experimental sessions were recorded via a hidden camera, and the experimenter, who could not see which stimulus was being presented, watched a live video feed of the infant on a monitor and controlled the stimulus presentation from outside the sound booth. The experiment was controlled by the Habit software package (Cohen, Atkinson, & Chaput, 2004) run on a Macintosh G4 desktop computer.

Figure 1.

This illustrates the sound booth setup used to run the VSL task. The experimenter is outside the booth and cannot see which stimulus the infant is viewing.

Stimuli

Although the task was modeled after Clohessy, Posner, and Rothbart (2001), we did not pair the images with sounds. Our version is visual-only so that it can be used with deaf infants in future studies. The stimuli consisted of twelve 2D visual images of colorful geometric shapes organized into four object sets (A–D; see Figure 2). Each object set consisted of three unique geometric shapes created using the custom shape tool in Adobe Photoshop CS3 (Knoll et al., 2007). Four different object sets were used in order to hold the infants’ attention during the task. The Photoshop .png files were then animated using Final Cut Express HD so that they appeared to loom in and out. We made the shapes loom rather than using static images because a previous study found that static images did not sufficiently maintain infants’ attention (see Kirkham et al., 2002). The looming images were saved as QuickTime movies. The items in each set were all different colors and shapes, selected such that no color or shape repeated within or between sequences. All stimuli loomed from small to large and back to small within 2.66 seconds, and each stimulus loomed up to five times within the course of one trial or presentation. The maximum size of each shape was either 31 cm or 34 cm, depending on whether the shape appeared on the center or side monitors, respectively. No infant saw the same shape on both the side and center monitors, so this slight difference in size is unlikely to have had any bearing on infants’ performance on the task.

Figure 2.

These are the four object sets used as the stimuli for the VSL task. Object Set A consists of a red oval, a yellow triangle, and a green flower. Object Set B consists of a dark blue pentagon and an orange heart. Object Set C consists of a pink clover, a cyan rectangle, and a violet arrow. Object Set D consists of a lime green star, an azure crescent moon, and a blue checkmark.

Procedure

The experiment consisted of one pre-test phase, one learning phase (Phase 1), and one test phase (Phase 2). In each phase, the stimulus presentation was contingent upon the infant looking at the monitor (infant controlled). Each trial (an individual stimulus presentation) began with the appearance of a stimulus and ended 700 milliseconds after the infant looked at the correct stimulus location. Stimuli within each sequence were separated by an inter-stimulus interval of 1100 milliseconds. An entire 3-item sequence thus consisted of 3 trials in 3 different spatial locations (either Left-Center-Right or Right-Center-Left). The experimental session consisted of 3 pre-test trials (1 sequence presentation), 12 learning trials (4 sequences; Phase 1), and another 12 test trials (4 sequences; Phase 2), as sketched below. The entire session lasted a maximum of 7 minutes, with each phase lasting a maximum of 3.6 minutes. The actual length of the sequences and phases varied depending on how quickly the infant looked at the monitor, with an average testing session of 3 to 4 minutes.
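The following schematic sketch lays out the ordered stimulus locations for one infant based on the description above. The constants and names are illustrative only; this is not the actual Habit configuration used in the study.

```python
# Schematic reconstruction of the VSL session described above.
# Timing constants follow the text; all names are illustrative.
POST_LOOK_TRIAL_END_MS = 700       # trial ends 700 ms after a correct look
INTER_STIMULUS_INTERVAL_MS = 1100  # gap between stimuli within a sequence

def build_session(sequence=("L", "C", "R"), pretest=("C", "L", "R")):
    """Return the ordered stimulus locations: pre-test, Phase 1, Phase 2."""
    pretest_trials = list(pretest)      # 1 sequence  = 3 pre-test trials
    phase1_trials = list(sequence) * 4  # 4 sequences = 12 learning trials
    phase2_trials = list(sequence) * 4  # 4 sequences = 12 test trials (new object set)
    return pretest_trials + phase1_trials + phase2_trials

print(len(build_session()))  # 27 trials in total
```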

All phases were presented to the infant without breaks or pauses. The parent or caregiver holding the infant was instructed to look down and keep their eyes closed to limit their influence on the infant’s direction of eye gaze at the monitors. Infants’ eye movements (sometimes relying on head movements) were analyzed offline to determine how quickly infants reacted to the correct location of the next stimulus.

Pre-test phase

To orient the infant to the task, warm-up stimuli were displayed in a particular spatiotemporal sequence. A looming blue lightning bolt on a white background was presented on each monitor, in one of two sequence orders (randomly assigned): Center, Left, Right (C-L-R) or Right, Left, Center (R-L-C). Two different Pre-test sequences were used to prevent the last trial of the Pre-test phase from appearing on the same monitor as the first trial of Phase 1 (see below). Infants were presented with a total of three pre-test trials (i.e., 1 sequence presentation). The Pre-test was not used for inclusion/exclusion purposes, but rather to familiarize the infants with the task prior to learning the test sequence.

Phase 1: Learning phase

In Phase 1, infants were presented with one of the object sets (A-D; randomly assigned) in one of two spatiotemporal patterns (L-C-R or R-C-L) that repeated continuously (e.g., L-C-R/L-C-R/L-C-R, etc.). If the infant saw C-L-R in the pre-test phase, then the spatiotemporal sequence for Phase 1 was L-C-R. If the infant saw R-L-C during the Pre-test phase, then the spatiotemporal sequence that followed was R-C-L. Shapes within each object set were always presented in the same location, even when the spatial pattern was different. For example, if one infant observed Object Set A in the L-C-R pattern and another observed Object Set A in the R-C-L pattern, both infants saw an ellipse on the left monitor, a triangle on the center, and a flower on the right; all that was different between infants was the temporal order in which these images appeared (L-C-R or R-C-L).

Phase 2: Test phase

In Phase 2, the infant was tested for her ability to predict the location of the next stimulus based upon the spatial pattern seen in Phase 1. A new set of objects was used but they were presented in the same spatiotemporal sequence as Phase 1.

Data Collection

The video recordings of the experimental sessions were recorded at 29.97 frames per second and were coded offline using Supercoder (Hollich, 2005) for right, left, and center looks. The only eye movements coded were incorrect anticipatory looks and correct looks (either anticipatory or reactionary). Thus no more than 2 eye movements were coded per trial. A first coder coded eye movements for all of the trials for all of the infants. A second coder, who was blind to the purpose of the experiment, then coded all trials for a randomly selected 25 percent of the infants (n = 15) for reliability. Coding of anticipatory looks showed 90% agreement between the two coders, and disagreements were discussed until 100% agreement was reached. The average correlation between coders on RT prior to discussion was 0.99. The coded files were then run through an Excel macro, which calculated the RTs for each trial. The RT for trial X was the time between the onset of trial X and the onset of the first correct look to the correct location for trial X. Thus some RTs were negative (if they were anticipatory).

An anticipatory look was one that began before, or within the first 150 ms after, the onset of the current stimulus (see Johnson, Amso, & Slemmer, 2003). Thus, a look was counted as anticipatory even if it ended before the onset of the stimulus. Anticipatory looks were classified as correct or incorrect depending on whether the infant looked to the location where the next stimulus would appear.

In order to test for learning of the sequence, the median RT for each phase was used as the RT for that phase. Each infant therefore had 2 data points: the median RT for Phase 1 and the median RT for Phase 2. Medians were used rather than means in order to remove the influence of outlier trials, as was done in previous research on anticipations and RT in infants (Haith & McCarty, 1990). The proportion of change in median RTs between the two phases—Phase 1 RT minus Phase 2 RT (hereafter the ‘RT difference score’)—was then calculated and formed the basis for analyses. An additional dependent variable was calculated as the increase in the number of correct anticipatory looks from Phase 1 to 2. Thus there were two dependent variables for analysis: the RT difference score and the change in correct anticipatory looks from Phase 1 to 2. The expectation was that a decrease in RT from Phase 1 to Phase 2—a speeding up of the reaction—or an increase in the number of correct anticipatory looks would indicate learning of the sequence.
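As a minimal sketch of how the two dependent variables could be computed (hypothetical function and variable names; the text defines the RT difference score as Phase 1 median RT minus Phase 2 median RT):

```python
import statistics

def vsl_scores(phase1_rts, phase2_rts,
               phase1_correct_anticipations, phase2_correct_anticipations):
    """Compute the two dependent variables described above.

    RTs are in seconds; anticipatory looks can yield negative RTs.
    Medians are used to reduce the influence of outlier trials.
    """
    rt_difference = statistics.median(phase1_rts) - statistics.median(phase2_rts)
    anticipation_change = phase2_correct_anticipations - phase1_correct_anticipations
    # Positive values on either measure are taken as evidence of learning:
    # RTs sped up, or correct anticipatory looks became more frequent.
    return rt_difference, anticipation_change

# Hypothetical infant: slightly faster in Phase 2, with more correct anticipations.
print(vsl_scores([0.55, 0.48, 0.62], [0.40, -0.05, 0.36], 1, 3))  # approx. (0.19, 2)
```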

Language Measures

At the time of testing, parents were asked to fill out a language questionnaire about their child—the MacArthur-Bates Communicative Development Inventory (CDI) ‘Words and Gestures’ form (Fenson et al., 2006).1 Parents were also mailed a follow-up CDI approximately 5 months after participating in the study. The CDI is primarily a receptive vocabulary questionnaire that consists of phrases, vocabulary words, and communicative gestures. Children’s Phrases Understood, Vocabulary Comprehension, and Gesture Comprehension raw scores were used as the language outcome measures. The parent marks whether their child understands each of the phrases (Phrases Understood; e.g., “Are you hungry?”), understands the vocabulary items (Vocabulary Comprehension), and understands and/or uses the actions and gestures for communication (Gesture Comprehension; e.g., shrugging to indicate “all gone”). At 13.5 months, CDI Vocabulary Production was used as an additional language outcome measure. This score was derived from parents’ report of whether their child both understands and says the vocabulary items. See Table 1 for CDI descriptive statistics.

Table 1.

Descriptive Statistics for CDI Measures.

Using Raw Scores
 8.5mo Phrases Understood: M = 7.54, SD = 4.88, Range = 0–19
 8.5mo Vocab Comp: M = 33.98, SD = 31.31, Range = 0–138
 8.5mo Gestures: M = 11.11, SD = 6.55, Range = 0–34
 13mo Phrases Understood: M = 16.08, SD = 6.60, Range = 1–27
 13mo Vocab Comp: M = 99.43, SD = 75.14, Range = 6–396
 13mo Gestures: M = 27.80, SD = 8.70, Range = 13–50
 13mo Vocab Production: M = 10.30, SD = 8.11, Range = 0–32

Using Percentile Scores
 8.5mo Phrases Understood: M = 44.24, SD = 26.80, Range = 5th–93rd
 8.5mo Vocab Comp: M = 46.11, SD = 28.14, Range = 5th–94th
 8.5mo Gestures: M = 39.01, SD = 26.39, Range = 5th–94th
 13mo Phrases Understood: M = 46.88, SD = 26.19, Range = 5th–95th
 13mo Vocab Comp: M = 42.02, SD = 26.57, Range = 5th–99th
 13mo Gestures: M = 42.74, SD = 25.18, Range = 5th–93rd
 13mo Vocab Production: M = 50.44, SD = 16.94, Range = 10th–85th

Results

Three sets of analyses were conducted. First, we analyzed children’s performance on the VSL task to determine whether they learned the spatiotemporal sequence. Second, we conducted correlation analyses between children’s performance on the VSL task and their concurrent CDI ability. Third, we conducted correlation analyses between children’s performance on the VSL task and their later CDI ability—as reported at approximately 13.5 months of age.

Did Infants Learn the Sequence?

In order to answer this question we conducted 2 paired-samples t tests: one on the change in RT from Phase 1 to Phase 2 (t(57) =2.08, p=.04, d= −.31, CI.95= −.68 to .06)2 and one on the change in the number of correct anticipatory looks from Phase 1 to Phase 2 (t(57) =0.76, p=.45, d= −.13, CI.95= −.50 to .23; see Table 2 for descriptive statistics). Contrary to our prediction, there was a significant increase in RT from Phase 1 to Phase 2 instead of a decrease. There was an increase in correct anticipatory looks, but it was not significant. This suggests that as a group, the 8.5-month-old infants may not have learned the visual sequence.

Table 2.

Descriptive Statistics for VSL Task Measures.

Median RT in Phase 1 (sec): M = 0.47, SD = 0.27, Range = −0.07 – 1.03
Median RT in Phase 2 (sec): M = 0.57, SD = 0.41, Range = 0 – 1.73
Correct Anticipatory Looks in Phase 1: M = 1.95, SD = 1.59, Range = 0 – 7
Correct Anticipatory Looks in Phase 2: M = 2.16, SD = 1.53, Range = 0 – 6
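For concreteness, the paired-samples t tests reported above can be reproduced with standard tools. The sketch below uses simulated data (random draws loosely matching the Table 2 descriptives, not the actual dataset) and computes Cohen's d as the standardized mean difference of the paired differences, which is one common convention; the paper follows Lipsey and Wilson (2001).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated RTs for 58 infants, loosely matching the Table 2 descriptives.
phase1_rt = rng.normal(0.47, 0.27, size=58)
phase2_rt = rng.normal(0.57, 0.41, size=58)

# Paired-samples t test on the change in RT from Phase 1 to Phase 2.
t, p = stats.ttest_rel(phase1_rt, phase2_rt)

# Cohen's d as the standardized mean of the paired differences.
diff = phase1_rt - phase2_rt
d = diff.mean() / diff.std(ddof=1)
print(f"t(57) = {t:.2f}, p = {p:.3f}, d = {d:.2f}")
```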

The raw increase in anticipatory looks (i.e., getting faster) seems contradictory to the group-level increase in RT (i.e., getting slower). The reason is that not all of the infants produced anticipatory looks. In Phase 1 there were 12 infants who had no anticipatory looks and 15 infants who had only 1 anticipatory look. In Phase 2 there were 10 infants who had no anticipatory looks and 10 who had only 1. The anticipatory-looks measure was therefore driven by only a subset of the infants, whereas all infants contributed to the RT measure, and fewer infants demonstrated learning (i.e., a speeding up of RT) than did not. However, of the 27 infants who showed an increase in anticipatory looks in Phase 2, the majority (16) also demonstrated an overall decrease in RT.

The fact that the group overall did not demonstrate learning of the sequence, and even increased their latencies, suggests that the task may have been difficult for infants this age. Indeed, only 23 of the 58 infants showed the expected RT pattern (a decrease in RT from Phase 1 to Phase 2) and only 27 showed an increase in correct anticipatory looks from Phase 1 to 2. It is possible that there were two distinct groups of infants—‘learners’ whose RTs decreased as they learned the sequence and ‘non-learners’ who did not pick up on the pattern and became bored, thus showing the unexpected pattern of increased latencies across the session. To evaluate this possibility we separated the data for the learners and the non-learners, which are analyzed separately in the following sections.

Although we expected the group as a whole to show a decrease in RT from Phase 1 to Phase 2, the main focus of this study was to investigate the relationship between RT change and reported language (CDI) ability. Thus the key finding here is that there was considerable variability in infants’ performance, with some infants demonstrating clear patterns of learning. This is in line with previous studies suggesting that children as young as 3 months can learn a visual sequence (Canfield & Haith, 1991).

Does VSL Task Performance Correlate with Infants’ Receptive Language Ability?

In order to answer this question we conducted correlation analyses between RT difference scores and scores on the 8.5 month CDI from the study visit for the 56 infants whose parents completed a CDI (age range at CDI 7.8–11.3 months old, M=8.8 months). The RT difference score is Phase 1 RT minus Phase 2 RT, so a positive difference score indicates a decrease in RT, or learning of the sequence.

Using raw CDI scores (controlling for age at CDI)3, the RT difference score was positively correlated with Vocabulary Comprehension (r=.28, p=.04, zr=.29, CI.95= .02 to .56), but not significantly correlated with Phrases Understood (r=.07, p=.62, zr=.07, CI.95= −.20 to .34) or Gesture Comprehension (r=.17, p=.22, zr=.17, CI.95= −.10 to .44). Specifically, infants whose RTs decreased from Phase 1 to Phase 2 had higher receptive vocabulary ability. This suggests that infants’ success at learning the spatiotemporal sequence was positively related to their concurrent vocabulary comprehension ability at 8.5 months of age (see Table 3). Correlations were also run between CDI scores and the increase in anticipatory looks from Phase 1 to Phase 2. None of those correlations were significant (see Table 3).

Table 3.

Partial Correlations between VSL Performance and Raw CDI Measures at 8.5 Months (Controlling for age at CDI).

Measure                                                        (1)      (2)      (3)      (4)      (5)
(1) Proportion of decrease in RT from Phase 1 to Phase 2       ---
(2) Increase in anticipatory looks from Phase 1 to Phase 2     .46**    ---
(3) CDI Phrases Understood (8.5 months)                        .07      −.07     ---
(4) CDI Vocab Comprehension (8.5 months)                       .28*     .18      .71**    ---
(5) CDI Gesture Comprehension (8.5 months)                     .17      .09      .41**    .55**    ---

Note. ** p < .01; * p < .05.
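The correlations in Tables 3 and 4 are partial correlations controlling for age at CDI. One standard way to compute such a partial correlation is to residualize both variables on the covariate and correlate the residuals; the sketch below uses hypothetical data, and the p-value shown does not adjust for the degree of freedom used by the covariate.

```python
import numpy as np
from scipy import stats

def partial_corr(x, y, covariate):
    """Correlation between x and y after removing the linear effect of one covariate."""
    def residualize(v, c):
        slope, intercept = np.polyfit(c, v, deg=1)
        return v - (slope * c + intercept)
    x, y, c = (np.asarray(a, dtype=float) for a in (x, y, covariate))
    return stats.pearsonr(residualize(x, c), residualize(y, c))

# Hypothetical data: RT difference scores, vocabulary comprehension, age (months).
rt_diff = [0.12, -0.05, 0.30, 0.08, -0.10, 0.22, 0.15, -0.02]
vocab = [40, 12, 90, 35, 10, 60, 55, 20]
age = [8.4, 8.9, 8.6, 9.1, 8.2, 8.8, 9.5, 8.5]
r, p = partial_corr(rt_diff, vocab, age)
print(f"partial r = {r:.2f}, p = {p:.2f}")
```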

Next we examined the learners and the non-learners separately. There was a significant difference between these groups in 8.5 month Vocabulary Comprehension ability (t(54)=2.95, p=.005, d= .69, CI.95= 0.15 to 1.24), with learners demonstrating greater vocabulary comprehension ability (M=47.83 words out of a possible 396, SD=40.22) than the non-learners (M=24.33 words, SD=18.42). In order to further understand the differences between the learners and non-learners, we tested for correlations between VSL performance (RT difference score) and raw CDI scores (controlling for age at CDI) for each group separately. We expected weak or no correlations between the CDI scores and VSL ability for the non-learners because if these infants simply did not learn the sequence, then the changes in their latencies are likely to be determined by other factors (e.g., fatigue) and thus should not be associated with vocabulary scores. In other words, we did not expect there to be degrees of non-learning that would be related to vocabulary development in any meaningful way. On the other hand, there likely exist degrees of learning that are meaningful: the better and faster an infant learns the sequence, the greater the decrease in latency, and, as we would predict, the better their vocabulary ability. Thus, we expected stronger correlations between the CDI scores and VSL ability for the learners than for the non-learners. The results of the correlation analyses were consistent with these predictions (see Table 4). The learners’ RT difference score correlated positively with vocabulary comprehension whereas the non-learners’ RT difference score did not, confirming the existence of two subgroups: one that learned the sequence to varying degrees and another that simply showed no learning.

Table 4.

Partial Correlations between VSL Performance and CDI Measures at 8.5 months by Learner Status (Controlling for age at CDI).

Column (1) = Proportion of change in RT from Phase 1 to Phase 2; remaining columns = the raw CDI measures in row order.

‘Learners’                                   (1)      (2)      (3)      (4)
 CDI Phrases Understood (8.5 months)         .30      ---
 CDI Vocab Comprehension (8.5 months)        .56**    .81**    ---
 CDI Gesture Comprehension (8.5 months)      .34      .56**    .68**    ---
‘Non-Learners’
 CDI Phrases Understood (8.5 months)         −.36*    ---
 CDI Vocab Comprehension (8.5 months)        −.24     .55**    ---
 CDI Gesture Comprehension (8.5 months)      .10      .29      .41*     ---

Note. ** p < .01; * p < .05.

Does VSL Task Performance Correlate with Infants’ Receptive Language Ability 5 Months after Participating in the Study?

In order to answer this question we conducted correlation analyses between the RT difference score and the CDI scores from the follow-up CDI that was mailed to parents approximately 5 months after their lab visit. Not all of the parents returned the follow-up CDI that was mailed, so these analyses were conducted for only a subset of the sample (40 infants, age range 12.8–14.5 months old, M=13.4 months). Using raw CDI scores (controlling for age at CDI), the RT difference score was not significantly correlated with Phrases Understood (r=.11, p=.50, zr =.11, CI.95= −.21 to .43), Vocabulary Comprehension (r=.24, p=.15, zr = .25, CI.95= −.08 to .57), or Vocabulary Production (r=.01, p=.96, zr = .01, CI.95= −.31 to .33), but was positively correlated with Gesture Comprehension (r=.34, p=.04, zr =.35, CI.95= .03 to .68). This suggests that infants’ success at learning the spatiotemporal sequence was positively related to their gesture comprehension ability at 13.5 months of age (see Table 5). In addition, although we may lack statistical power, the correlation value with Vocabulary Comprehension is in the predicted direction—a decrease in RT from Phase 1 to Phase 2 is associated with higher receptive language ability. Correlations were also calculated between CDI scores and the increase in anticipatory looks from Phase 1 to Phase 2. None of those correlations were significant (see Table 5).

Table 5.

Partial Correlations between VSL Performance and Raw CDI Measures at 13.5 Months (Controlling for age at CDI).

Measure                                                        (1)      (2)      (3)      (4)      (5)      (6)
(1) Proportion of decrease in RT from Phase 1 to Phase 2       ---
(2) Increase in anticipatory looks from Phase 1 to Phase 2     .46**    ---
(3) CDI Phrases Understood (13.5 months)                       .11      .09      ---
(4) CDI Vocab Comprehension (13.5 months)                      .24      .10      .68**    ---
(5) CDI Gesture Comprehension (13.5 months)                    .34*     .28T     .54**    .64**    ---
(6) CDI Vocab Production (13.5 months)                         .01      .25      .36*     .28T     .55**    ---

Note. ** p < .01; * p < .05; T p < .10.

Again we wanted to investigate potential differences between infants who demonstrated learning of the sequence and those who did not. Contrary to results from the 8.5 month CDI, there was a nonsignificant difference in 13.5 month vocabulary comprehension ability between learners and non-learners (t(38)=1.51, p=.14, d= .41, CI.95= −0.24 to 1.07), although the learners did have greater reported vocabulary comprehension ability (M=123.50 words out of a possible 396, SD=100.29) than the non-learners (M=86.46 words, SD=55.48). We conducted correlation analyses between the RT difference score and raw CDI scores (controlling for age at CDI) for each group separately. Again, we expected weak or no correlations between the CDI scores and VSL ability for the non-learners and positive correlations for the learners. As with the 8.5 month CDI, the learners’ RT difference score was significantly positively correlated with 13.5 month Vocabulary Comprehension whereas the non-learners’ RT difference score was not (see Table 6).

Table 6.

Partial Correlations between VSL Performance and CDI Measures at 13.5 months by Learner Status (Controlling for age at CDI).

Column (1) = Proportion of change in RT from Phase 1 to Phase 2; remaining columns = the raw CDI measures in row order.

‘Learners’                                   (1)      (2)      (3)      (4)      (5)
 CDI Phrases Understood (13.5 months)        .34      ---
 CDI Vocab Comprehension (13.5 months)       .67*     .78**    ---
 CDI Gesture Comprehension (13.5 months)     .32      .75**    .82**    ---
 CDI Vocab Production (13.5 months)          −.35     .43      .09      .26      ---
‘Non-Learners’
 CDI Phrases Understood (13.5 months)        −.06     ---
 CDI Vocab Comprehension (13.5 months)       −.28     .66**    ---
 CDI Gesture Comprehension (13.5 months)     .37T     .45*     .53**    ---
 CDI Vocab Production (13.5 months)          −.02     .36T     .45*     .68**    ---

Note. ** p < .01; * p < .05; T p < .10.

Discussion

In our investigation of visual sequence learning (VSL) and its connection to language development in infants, we collected receptive language measures to probe the relation between VSL and language comprehension ability. Contrary to expectations, participants as a group did not demonstrate learning of the sequence. One explanation for this pattern is that whereas some infants did show sequence learning, others did not, and their latencies actually increased because the task became tiresome for them. Overall there was a great deal of variability in infants’ performance on the VSL task, and this variability appeared to be meaningful: infants whose RTs decreased (i.e., who demonstrated learning of the sequence) tended to have higher receptive vocabulary ability at testing and higher gestural ability at follow-up. The non-learners had lower vocabulary comprehension scores than the learners, and among the learners there was a linear relationship between degree of learning and vocabulary. In the remainder of the Discussion we explore possible explanations for the correlation between VSL and vocabulary, discuss a potential modality constraint affecting sequence learning and language, and briefly review evidence linking domain-general skills to language ability.

The Correlation between VSL Ability and Language Comprehension

There are several possible explanations for why infants’ performance on the VSL task is correlated with their vocabulary ability. One is that procedural learning itself – rather than some general cognitive ability such as attention – is used to learn language. Indeed, the possibility that there is a relationship between procedural skills and language learning is supported by recent theories of language acquisition that posit an important role for non-declarative, or procedural memory, in language development (Ullman, 2004) and by neuropsychological evidence showing that procedural memory deficits result in language problems (Ullman, 2001; Ullman et al., 1997; Ullman et al., 2005). Also, previous research on sequence learning has established that it is correlated with language processing in adults (Conway et al., 2010; Misyak et al., 2010) and hearing-impaired children (Conway et al., 2011).

On the other hand, it is possible that some other factor, such as general cognitive ability, is responsible both for infants’ performance on the VSL task and on their receptive language ability. For example, infants with better information processing skills may be better at both sequence learning and language learning. In order to determine the contribution of VSL specifically, future work would need to include measures of other cognitive skills that could be partialled out in the analyses. This approach was used by Rose and colleagues (Rose, Feldman, Jankowski, & VanRossem, 2005) who used structural equation modeling (SEM) to determine which of a series of information processing skills mediated cognitive development. However, that study did not include any procedural or sequential learning measures. The results of the current study suggest that future work should also include these types of learning measures. In addition, future studies should examine various components of language development (e.g., vocabulary vs. syntax) rather than using a single measure as a proxy for ‘language’.

Domain-Generality and Modality Specificity

Current theories suggest that sequence learning may contribute to language acquisition because the latter is an unconscious developmental process (Cleeremans, Destrebecqz, & Boyer, 1998) that appears to involve brain areas associated with procedural memory (Ullman, 2001). Because people often use language without an explicit understanding of the rules of grammar dictating its structure, it is likely that much knowledge of language is gained through implicit learning mechanisms such as sequence learning (Cleeremans et al., 1998). If these abilities are important for language development, early performance on such tasks could be used for predicting language outcomes from a very young age.

It is important to note that one significant correlation found in the current study – between sequence learning and vocabulary comprehension at time of test – involved skills that do not share learning modality. Specifically, the VSL task involved the use of visual-motor skills, while vocabulary comprehension involves the use of audition. The other correlation – between sequence learning ability and gestural ability 5 months after performing the VSL task – involved skills in the same modality (both are visual-motor). This pattern of results suggests that both sequence learning and language learning involve a combination of domain-general and modality-specific neurocognitive components (Conway & Pisoni, 2008). Behavioral evidence suggests that statistical sequential learning is constrained by the sense modality in which the input patterns occur, with auditory learning proceeding in substantially different ways compared to visual or tactile learning. In particular, in a study with tactile, auditory, and visual sequential learning tasks, adults were better at learning auditory sequences compared to the other two modalities (Conway & Christiansen, 2005; Emberson, Conway, & Christiansen, 2011). Furthermore, there are qualitative differences in learning across the modalities, with each modality being differentially biased toward the beginning or final elements of a sequence (Conway & Christiansen, 2005). This behavioral evidence is supported by neuroimaging data showing that implicit learning is largely mediated by modality-specific unimodal processing mechanisms (Keele, Ivry, Mayr, Hazeltine, & Heuer, 2003; Turk-Browne, Scholl, Chun, & Johnson, 2009). Yet learning also appears to be domain-general in the sense that performance on a visual task was significantly correlated with performance on a measure of spoken language perception using auditory stimuli (Conway et al., 2010). In terms of neural mechanisms, implicit learning is known to involve supramodal brain regions, or regions that are unrestricted with regard to modality, such as the prefrontal cortex and basal ganglia (Bapi, Chandrasekhar Pammi, Miyapuram, & Ahmed, 2005; Clegg et al., 1998)—areas also used for language processing.

Importantly, this same combination of domain-generality and modality-specificity appears to also characterize language. For instance, both reading and listening tasks involve a common phonological network of brain regions, including the inferior frontal area, whereas visual and auditory unimodal and association areas are preferentially active during reading and listening tasks, respectively (Jobard, Vigneau, Mazoyer, & Tzourio-Mazoyer, 2007). This combination of domain-generality and modality-specificity in sequence learning and language may therefore explain the correlation between VSL task performance and the gesture comprehension score. Because VSL relies to some extent on the same domain-general learning mechanisms used for language processing, it is associated with global measures of language development, regardless of the domain (i.e., spoken vocabulary comprehension). On the other hand, because VSL also involves modality-specific components for learning the visual-motor sequential patterns, VSL appears to be useful for predicting aspects of visual-motor communication later in development, specifically, the comprehension of gesture. To our knowledge, this is the first evidence showing both a domain-general and modality-specific association between sequence learning and language development.

The Role of Domain-General Processes in Language

In either case, the current findings support the idea that domain-general cognitive processes are important for language development. As discussed, there is already evidence for a positive relation between visual recognition memory and cognitive and linguistic outcomes (Colombo et al., 2004; Fagan & McGrath, 1981; Rose & Feldman, 1997; Rose et al., 2009; Rose et al., 1991). In addition, studies on infant habituation rate and novelty preference have demonstrated a link between attention and cognitive outcomes, including language (Colombo et al., 2004; Kannass & Oakes, 2008; McCall & Carriger, 1993; L. Thompson et al., 1991). Taken together, and in conjunction with findings from the current study, these findings suggest a positive relation between certain domain-general abilities and language development.

Summary

The goal of this study was to investigate the relation between visual sequence learning and language outcomes in infants. Finding early predictors of later language development could allow clinicians to better focus their early therapy strategies on cognitive and linguistic skills that are important for language development. This study also opens the door for future research on how different domain-general abilities are related to different aspects of language and the role that modality may play in this transfer process. In this study we found that sequence learning (thought to rely on procedural memory ability) may contribute to vocabulary and gestural development, but it may be even more important for grammar acquisition (see Ullman, 2004)—a possibility that we are pursuing currently.

Footnotes

1. Some parents opted to take the questionnaire home and mail it to us. Therefore the age at test and the age at CDI are not the same for all children.

2. For t statistics, Cohen’s d is the effect size statistic, which is the standardized mean difference (see Lipsey & Wilson, 2001). The effect size for correlations is Fisher’s z. CI.95 denotes the 95% confidence interval for the effect size.
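For the correlation effect sizes, Fisher's z and its confidence interval can be computed as sketched below (a generic illustration assuming the usual large-sample standard error of 1/sqrt(n − 3)); with r = .28 and n = 56 it reproduces the CI.95 of .02 to .56 reported above for the 8.5-month Vocabulary Comprehension correlation.

```python
import math

def fisher_z_ci(r, n, z_crit=1.96):
    """Fisher z transform of a correlation and its 95% confidence interval."""
    zr = math.atanh(r)            # Fisher's z
    se = 1.0 / math.sqrt(n - 3)   # large-sample standard error
    return zr, (zr - z_crit * se, zr + z_crit * se)

zr, (lo, hi) = fisher_z_ci(0.28, 56)
print(f"zr = {zr:.2f}, CI.95 = {lo:.2f} to {hi:.2f}")  # zr = 0.29, CI.95 = 0.02 to 0.56
```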

3. Infants’ raw scores were used because of the lack of variability in CDI percentile scores for children this age.

References

1. Bapi RS, Chandrasekhar Pammi VS, Miyapuram KP, Ahmed A. Investigation of sequence processing: A cognitive and computational neuroscience perspective. Current Science. 2005;89:1690–1698.
2. Berry DC, Dienes Z. Implicit learning: Theoretical and empirical issues. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc; 1993.
3. Boyer M, Destrebecqz A, Cleeremans A. Processing abstract sequence structure: Learning without knowing or knowing without learning? Psychological Research. 2005;69:383–398. doi: 10.1007/s00426-004-0207-4.
4. Canfield RL, Haith MM. Young infants’ visual expectations for symmetric and asymmetric stimulus sequences. Developmental Psychology. 1991;27:198–208.
5. Cleeremans A, Destrebecqz A, Boyer M. Implicit learning: News from the front. Trends in Cognitive Sciences. 1998;2:406–416. doi: 10.1016/s1364-6613(98)01232-7.
6. Cleeremans A, McClelland JL. Learning the structure of event sequences. Journal of Experimental Psychology: General. 1991;120:235–253. doi: 10.1037//0096-3445.120.3.235.
7. Clegg BA, DiGirolamo GJ, Keele SW. Sequence learning. Trends in Cognitive Sciences. 1998;2:275–281. doi: 10.1016/s1364-6613(98)01202-9.
8. Clohessy AB, Posner MI, Rothbart MK. Development of the functional visual field. Acta Psychologica. 2001;106:51–68. doi: 10.1016/s0001-6918(00)00026-3.
9. Cohen LB, Atkinson DJ, Chaput HH. Habit X: A new program for obtaining and organizing data in infant perception and cognition studies (Version 1.0). Austin: University of Texas; 2004.
10. Colombo J, Shaddy DJ, Richman WA, Maikranz JM, Blaga OM. The developmental course of habituation in infancy and preschool outcome. Infancy. 2004;5:1–38.
11. Conway CM, Bauernschmidt A, Huang SS, Pisoni DB. Implicit statistical learning in language processing: Word predictability is the key. Cognition. 2010;114:356–371. doi: 10.1016/j.cognition.2009.10.009.
12. Conway CM, Christiansen MH. Modality-constrained statistical learning of tactile, visual, and auditory sequences. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2005;31:24–39. doi: 10.1037/0278-7393.31.1.24.
13. Conway CM, Pisoni DB. Neurocognitive basis of implicit learning of sequential structure and its relation to language processing. Annals of the New York Academy of Sciences. 2008;1145:113–131. doi: 10.1196/annals.1416.009.
14. Conway CM, Pisoni DB, Anaya EM, Karpicke J, Henning SC. Implicit sequence learning in deaf children with cochlear implants. Developmental Science. 2011;14:69–82. doi: 10.1111/j.1467-7687.2010.00960.x.
15. Emberson LL, Conway CM, Christiansen MH. Timing is everything: Changes in presentation rate have opposite effects on auditory and visual implicit statistical learning. Quarterly Journal of Experimental Psychology. 2011;64:1021–1040. doi: 10.1080/17470218.2010.538972.
16. Fagan JF, McGrath SK. Infant recognition memory and later intelligence. Intelligence. 1981;5:121–130.
17. Fenson L, Marchman V, Thal D, Dale PS, Bates E, Reznick JS. MacArthur-Bates communicative development inventories (CDIs). 2nd ed. Baltimore, MD: Brookes Publishing; 2006.
18. Fernald A, Perfors A, Marchman VA. Picking up speed in understanding: Speech processing efficiency and vocabulary growth across the 2nd year. Developmental Psychology. 2006;42:98–116. doi: 10.1037/0012-1649.42.1.98.
19. Fiser J, Aslin RN. Statistical learning of new visual feature combinations by infants. Proceedings of the National Academy of Sciences. 2002;99:15822–15826. doi: 10.1073/pnas.232472899.
20. Haith MM, McCarty ME. Stability of visual expectations at 3.0 months of age. Developmental Psychology. 1990;26:68–74.
21. Hollich GJ. Super Coder: A program for coding preferential looking (Version 1.5) [Computer software]. West Lafayette, IN: Purdue University; 2005.
22. Hollich GJ, Hirsh-Pasek K, Golinkoff RM. Breaking the language barrier: An emergentist coalition model for the origins of word learning. Monographs of the Society for Research in Child Development. 2000;65(3).
23. Jobard G, Vigneau M, Mazoyer B, Tzourio-Mazoyer N. Impact of modality and linguistic complexity during reading and listening tasks. Neuroimage. 2007;34:784–800. doi: 10.1016/j.neuroimage.2006.06.067.
24. Johnson SP, Amso D, Slemmer JA. Development of object concepts in infancy: Evidence for early learning in an eye-tracking paradigm. Proceedings of the National Academy of Sciences. 2003;100:10568–10573. doi: 10.1073/pnas.1630655100.
25. Johnson SP, Fernandes KJ, Frank MC, Kirkham NZ, Marcus GF, Rabagliati H, et al. Abstract rule learning for visual sequences in 8- and 11-month-olds. Infancy. 2009;14:2–18. doi: 10.1080/15250000802569611.
26. Kannass KN, Oakes LM. The development of attention and its relation to language in infancy and toddlerhood. Journal of Cognition and Development. 2008;9(2):222–246.
27. Keele SW, Ivry R, Mayr U, Hazeltine E, Heuer H. The cognitive and neural architecture of sequence representation. Psychological Review. 2003;110:316–339. doi: 10.1037/0033-295x.110.2.316.
28. Kirkham NZ, Slemmer JA, Johnson SP. Visual statistical learning in infancy: Evidence for a domain general learning mechanism. Cognition. 2002;83(2):B35–42. doi: 10.1016/s0010-0277(02)00004-5.
29. Kirkham NZ, Slemmer JA, Richardson DC, Johnson SP. Location, location, location: Development of spatiotemporal sequence learning in infancy. Child Development. 2007;78:1559–1571. doi: 10.1111/j.1467-8624.2007.01083.x.
30. Knoll T, Steetharam N, Coven A, Kmoch J, Byer S, et al. Adobe Photoshop CS3 (Version 10.0). Adobe Systems Inc; 2007.
31. Leonard LB, Ellis Weismer S, Miller CA, Francis DJ, Tomblin JB, Kail RV. Speed of processing, working memory, and language impairment in children. Journal of Speech, Language and Hearing Research. 2007;50:408–428. doi: 10.1044/1092-4388(2007/029).
32. Lipsey MW, Wilson DB. Practical meta-analysis. Thousand Oaks, CA: Sage Publications, Inc; 2001.
33. Marchman VA, Fernald A. Speed of word recognition and vocabulary knowledge in infancy predict cognitive and language outcomes in later childhood. Developmental Science. 2008;11:F9–F16. doi: 10.1111/j.1467-7687.2008.00671.x.
34. McCall RB, Carriger MS. A meta-analysis of infant habituation and recognition memory performance as predictors of later IQ. Child Development. 1993;64:57–79.
35. McMurray B, Aslin RN. Anticipatory eye movements reveal infants’ auditory and visual categories. Infancy. 2004;6:203–229. doi: 10.1207/s15327078in0602_4.
36. Miller CA, Kail RV, Leonard LB, Tomblin JB. Speed of processing in children with specific language impairment. Journal of Speech, Language and Hearing Research. 2001;44:416–433. doi: 10.1044/1092-4388(2001/034).
37. Misyak JB, Christiansen MH, Tomblin JB. On-line individual differences in statistical learning predict language processing. Frontiers in Psychology. 2010. doi: 10.3389/fpsyg.2010.00031.
38. Newman R, Bernstein Ratner N, Jusczyk AM, Jusczyk PW, Dow KA. Infants’ early ability to segment the conversational speech stream predicts later language development: A retrospective analysis. Developmental Psychology. 2006;42:643–655. doi: 10.1037/0012-1649.42.4.643.
39. Perruchet P, Pacton S. Implicit learning and statistical learning: Two approaches, one phenomenon. Trends in Cognitive Sciences. 2006;10:233–238. doi: 10.1016/j.tics.2006.03.006.
40. Pisoni DB, Cleary M, Geers AE, Tobey E. Individual differences in effectiveness of cochlear implants in children who are prelingually deaf. The Volta Review. 1999;101:111–164.
41. Plomin R, Dale PS. Genetics and early language development: A UK study of twins. In: Bishop DVM, Leonard BE, editors. Speech and language impairments in children: Causes, characteristics, intervention and outcome. Hove, UK: Psychology Press; 2000. pp. 35–51.
42. Rose SA, Feldman JF. Memory and speed: Their role in the relation of infant information processing to later IQ. Child Development. 1997;68:630–641.
43. Rose SA, Feldman JF, Jankowski JJ. A cognitive approach to the development of early language. Child Development. 2009;80(1):134–150. doi: 10.1111/j.1467-8624.2008.01250.x.
44. Rose SA, Feldman JF, Jankowski JJ, VanRossem R. Pathways from prematurity and infant abilities to later cognition. Child Development. 2005;76:1172–1184. doi: 10.1111/j.1467-8624.2005.00843.x.
45. Rose SA, Feldman JF, Wallace IF. Language: A partial link between infant attention and later intelligence. Developmental Psychology. 1991;27:798–805.
46. Saffran JR, Aslin RN, Newport EL. Statistical learning by 8-month-old infants. Science. 1996;274:1926–1928. doi: 10.1126/science.274.5294.1926.
47. Saffran JR, Johnson EK, Aslin RN, Newport EL. Statistical learning of tone sequences by human infants and adults. Cognition. 1999;70:27–52. doi: 10.1016/s0010-0277(98)00075-4.
48. Saffran JR, Thiessen ED. Domain-general learning capacities. In: Hoff E, Shatz M, editors. Handbook of language development. Cambridge: Blackwell; 2007. pp. 68–86.
49. Thompson L, Fagan J, Fulker D. Longitudinal prediction of specific cognitive abilities from infant novelty preference. Child Development. 1991;67:530–538.
50. Thompson RF. Habituation: A history. Neurobiology of Learning and Memory. 2009;92:127–134. doi: 10.1016/j.nlm.2008.07.011.
51. Thompson RF, Spencer WA. Habituation: A model phenomenon for the study of neuronal substrates of behavior. Psychological Review. 1966;73:16–43. doi: 10.1037/h0022681.
52. Tsao FM, Liu HM, Kuhl PK. Speech perception in infancy predicts language development in the 2nd year of life: A longitudinal study. Child Development. 2004;75:1067–1084. doi: 10.1111/j.1467-8624.2004.00726.x.
53. Turk-Browne NB, Scholl BJ, Chun MM, Johnson MK. Neural evidence of statistical learning: Efficient detection of visual regularities without awareness. Journal of Cognitive Neuroscience. 2009;21:1934–1945. doi: 10.1162/jocn.2009.21131.
54. Ullman MT. A neurocognitive perspective on language: The declarative/procedural model. Nature Reviews Neuroscience. 2001;2:717–726. doi: 10.1038/35094573.
55. Ullman MT. Contributions of memory circuits to language: The declarative/procedural model. Cognition. 2004;92:231–270. doi: 10.1016/j.cognition.2003.10.008.
56. Ullman MT, Corkin S, Coppola M, Hickok G, Growdon JH, Koroshetz WJ, et al. A neural dissociation within language: Evidence that the mental dictionary is part of declarative memory, and that grammatical rules are processed by the procedural system. Journal of Cognitive Neuroscience. 1997;9:266–276. doi: 10.1162/jocn.1997.9.2.266.
57. Ullman MT, Pancheva R, Love T, Yee E, Swinney D, Hickok G. Neural correlates of lexicon and grammar: Evidence from the production, reading, and judgment of inflection in aphasia. Brain and Language. 2005;93:185–238. doi: 10.1016/j.bandl.2004.10.001.
58. Wentworth N, Haith MM. Infants’ acquisition of spatiotemporal expectations. Developmental Psychology. 1998;34:247–257. doi: 10.1037//0012-1649.34.2.247.
59. Wentworth N, Haith MM, Hood R. Spatiotemporal regularity and interevent contingencies as information for infants’ visual expectations. Infancy. 2002;3:303–321. doi: 10.1207/S15327078IN0303_2.
