Author manuscript; available in PMC: 2015 Sep 1.
Published in final edited form as: Am Psychol. 2014 Jun 9;69(6):588–599. doi: 10.1037/a0036886

New Evidence About Language and Cognitive Development Based on a Longitudinal Study: Hypotheses for Intervention

Susan Goldin-Meadow 1, Susan C Levine 1, Larry V Hedges 1, Janellen Huttenlocher 1, Stephen W Raudenbush 1, Steven L Small 1
PMCID: PMC4159405  NIHMSID: NIHMS601254  PMID: 24911049

Abstract

We review findings from a four-year longitudinal study of language learning conducted on two samples: a sample of typically developing children whose parents vary substantially in socioeconomic status, and a sample of children with pre- or perinatal brain injury. This design enables us to study language development across a wide range of language learning environments and a wide range of language learners. We videotaped samples of children's and parents' speech and gestures during spontaneous interactions at home every four months, and then we transcribed and coded the tapes. We focused on two behaviors known to vary across individuals and environments—child gesture and parent speech—behaviors that have the potential to index, and perhaps even play a role in creating, differences across children in linguistic and other cognitive skills. Our observations have led to four hypotheses that have promise for the development of diagnostic tools and interventions to enhance language and cognitive development and brain plasticity after neonatal injury. One kind of hypothesis involves tools that could identify children who may be at risk for later language deficits. The other involves interventions that have the potential to promote language development. We present our four hypotheses as a summary of the findings from our study because there is scientific evidence behind them and because this evidence has the potential to be put to practical use in improving education.

Keywords: gesture, linguistic input, socioeconomic status, perinatal unilateral brain injury


Language learning is a robust process. Despite the magnitude of the task, and the fact that children experience very different types of learning situations, most children acquire language with relative ease. Moreover, within broad outlines, children acquire language according to a common trajectory. However, within the striking commonalities that characterize language learning in children, there are equally striking individual differences in the rate and timing of lexical and syntactic growth (e.g., Fenson et al., 1994; Hart & Risley, 1995; Huttenlocher, Haight, Bryk, Selzer, & Lyons, 1991; Miller & Chapman, 1981).

Differences in rate of language acquisition take on significance simply because language skills often lay the groundwork for other cognitive and social tasks. For example, preschoolers who do not speak clearly and have trouble communicating their ideas effectively are less able to sustain bouts of play with other children (Gertner, Rice, & Hadley, 1994). Moreover, early language milestones predict later academic achievement (Anderson & Freebody, 1981). In other words, language opens doors, and the timing of these door-openings may matter for subsequent development. It thus becomes important to understand the factors that underlie individual differences in rate of language learning.

Our goal is to report on a four-year longitudinal study of language learning conducted on two samples selected to maximize variation across children. The first sample consists of children who come from homes that vary in socioeconomic status (SES) and thus are likely to receive differing amounts and types of linguistic input, an external resource for language learning. We drew this sample to generate variation in language learning environments. The second sample consists of children who have experienced pre- or perinatal unilateral brain injury. In addition to receiving varied linguistic input, these children bring varying internal resources to the task of language learning. We aim to understand the joint effects that environmental variation (variation in linguistic input) and internal variation (variation in lesion characteristics) can have on language learning.

We examine two behaviors in each of these samples: (a) a child behavior that has the potential to serve as an early index (and perhaps source) of variation in linguistic skills—the spontaneous gestures children produce to communicate, and (b) a parent behavior that has the potential to serve as a source of variation in linguistic skills—the speech parents address to their young children. We also explore the impact that parent speech might have on variation in children's cognitive skills.

We believe that our findings have implications for prediction and diagnosis of later language deficits and for interventions that may improve language skills. Finding a marker of early variation helps identify tools for the early detection of children whose language learning is at risk for going awry. Moreover, understanding the sources of variation in language skills helps identify strategies for intervention that could increase the rate at which children learn language. For example, to preview some of our results, we find that among the children with brain injury, parental linguistic input strongly predicts the acquisition of vocabulary and syntax, controlling for lesion size, type (cerebrovascular, periventricular), and seizure history. Indeed, children with brain injury who receive high levels of linguistic input exhibit vocabulary growth at rates similar to those of typically developing children with low linguistic input. High levels of linguistic input can even accelerate syntactic growth in children with brain injury so that their syntactic skills at 46 months are comparable to those of typically developing children with high levels of input (Rowe, Levine, Fisher, & Goldin-Meadow, 2009). It is findings of this sort that lead us to recommend the development of experimental interventions to test whether these relationships are causal and hence potentially useful for improving the education of children with brain injury as well as typically developing children.

Varying the Language Learning Environment

One obvious factor that could account for individual differences in rate of acquisition is the linguistic environment to which a child is exposed. Environmental effects have been found for vocabulary (Hoff, 2003; Huttenlocher et al., 1991) and syntax (Furrow, Nelson, & Benedict, 1979; Hoff-Ginsberg, 1986; Huttenlocher, Vasilyeva, Cymerman, & Levine, 2002; Naigles & Hoff-Ginsberg, 1998; Newport, Gleitman, & Gleitman, 1977). However, not all individual differences have been easy to trace back to differences in environment. For example, Newport et al. (1977) found that the number of true verbs or noun phrases children produced within a sentence was not systematically related to variations in the talk the children heard from their mothers. However, as the authors point out, the variation in their sample of mothers may have been insufficient to detect an effect of environmental variation on language learning. Unless a sample is sufficiently heterogeneous in input, even a substantial relation between linguistic input and child output may go undetected. In many studies, families are drawn from well-educated, White, high-SES groups. That any effects of input can be detected under such conditions is impressive, but the absence of effects is merely suggestive, requiring further research.

One way to extend variability in linguistic input is to observe homes that vary in SES. Parents in low-SES homes spend, on average, less time engaged in mutual activities with their children (e.g., Heath, 1983; Hess & Shipman, 1965) and, perhaps as a result, talk less to their children, on average (Hoff-Ginsberg, 1990), than do parents in higher SES homes, resulting in differences in the vocabulary (Hart & Risley, 1995) and syntax (Huttenlocher et al., 2002) that children hear.

Here, we present data on 64 children from families selected to represent the demographic variation within the Chicago area.1 Our observations took place over a period of four years (from child age 14 months through 58 months) with frequent assessments (90-minute home observations every four months). We can thus construct a longitudinal picture of the child's trajectory of linguistic growth that can be examined in relation to the input that the children received from their parents.

Varying the Language Learner

Another obvious factor to consider in understanding individual differences in language learning is the learner. One way to extend variation in children's language learning skills is to examine children who early in development suffer an insult to the brain. In general, children with unilateral brain injury to either hemisphere tend to acquire the early appearing aspects of language on time or with minimal delays (e.g., Bates, Thal, & Janowsky, 2000; Feldman, 1994; Marchman, Miller, & Bates, 1991). However, these children appear to iteratively experience difficulty with each aspect of language development as these skills come on line (Stiles, Reilly, Levine, Trauner, & Nass, 2012; Stiles, Reilly, Paul, & Moses, 2005), including complex syntactic skills (Kiessling, Denckla, & Carlton, 1983; Levine, Huttenlocher, Banich, & Duda, 1987; Rankin, Aram, & Horwitz, 1981) and narrative skills (e.g., Demir, Levine, & Goldin-Meadow, 2010; Reilly, Bates, & Marchman, 1998).

Moreover, and not surprisingly, there is considerable variation in language learning skills across children with unilateral brain injury. Previous studies have explored the relation between biological characteristics (lesion laterality, location, and size; seizure history) and differences in language skill in children with brain injury (e.g., Feldman, 2005). However, few studies have examined the relation between language input and rate of language learning in children with brain injury. Linguistic input could be particularly important for such children. If so, the same variation in input might lead to wider variations in output in children with brain injury than in uninjured children.

We observed a sample of 40 children with unilateral brain injury interacting with their parents at home over time. The variation in language learning associated with variation in linguistic input in this group of children can then be assessed in relation to comparable effects observed in the typically developing children.

Gesture: Another Perspective on Communicative Abilities

Even before young children begin to use words, they use gestures (Bates, 1976). Moreover, gesture does not disappear from a young child's communicative repertoire after the onset of speech. Rather, it becomes integrated with speech, often serving a communicative function in its own right (e.g., the child says “gimme” while pointing at a cracker; gesture makes it clear what the object of “gimme” is). Gesture thus has the potential to extend a child's range of communicative devices. Importantly, there is variability in the way children use gesture, and this variability predicts differences in the onset of linguistic milestones (Cartmill, Hunsicker, & Goldin-Meadow, 2014; Özçalişkan & Goldin-Meadow, 2005). For example, children vary in the age at which they produce combinations in which speech conveys one idea and gesture another (e.g., gimme + point at cracker), and this variability predicts the age at which the children produce their first two-word utterances (gimme cracker; Goldin-Meadow & Butcher, 2003; Iverson & Goldin-Meadow, 2005).

There is also variability in the way gesture is used across the language learning situations to which children are exposed. Parents use gesture differently with children of different ages within a single culture (Iverson, Capirci, Longobardi, & Caselli, 1999) and also vary in how often they use gesture with young children across cultures (Goldin-Meadow & Saltzman, 2000). By exploring the impact that variability in parent gesture has on language learning, along with factors that have the potential to create this variability (SES and brain injury), we gain insight into the processes that contribute to how children learn language. We therefore examined the gestures produced by both the typically developing children and children with brain injury, and their parents.

To summarize, we examined two behaviors known to vary across individuals and environments—child gesture and parent speech—behaviors that have the potential to index, and perhaps even play a role in creating, differences in linguistic and other cognitive skills across children. Our goal was to extend the variation in these behaviors, and we accomplished this goal in two ways: (a) We observed children from families who varied widely in SES to extend the variation in language learning environments, and (b) we observed children who do and do not have brain injuries to extend the variation in language learners. Our observations have led to four hypotheses that have promise for the development of diagnostic tools and interventions to enhance language and cognitive development. We end by discussing our findings in light of these hypotheses.

Summary of Methods

We briefly review the participants, procedures, and coding schemes used in our study. Please see individual articles for additional details.

Participants

The first group included 64 families with a child who had no known physical or cognitive disabilities and thus was assumed to be typically developing (TD). TD families were recruited from the greater Chicago area through a direct mailing to roughly 5,000 families or through advertisements in a free parent magazine. We asked parents who responded to participate in a phone interview in which we gathered demographic information. We selected participants who matched as closely as possible the ethnic/racial makeup and family income reported in the 2000 U.S. Census for the Chicago area. Sixty-four families completed at least four visits (31 girls; 34 firstborns). All spoke only English in the home.

The second group included 40 families with a child who had suffered a unilateral brain injury (BI) in the pre-, peri-, or early postnatal period. Families were recruited through pediatric neurologists and rehabilitation physicians and through contacts with support groups for families with children who have brain injury. Because of the scarcity of children with unilateral brain injury (roughly 1 in 4,000), families were not excluded based on demographic characteristics, and children were enrolled in the study at various ages (from 14 to 54 months); the number of children with BI whose data are analyzed in any given study depends on how many children of the appropriate age were in the sample at the time. Children were enrolled if brain imaging results (magnetic resonance imaging or computed tomography scan) confirmed a unilateral injury of vascular etiology (stroke), either from hemorrhage (bleeding in the brain) or ischemic infarction (death of brain tissue from lack of blood delivery). The hemorrhages typically occurred in highly cellular regions of the developing brain adjacent to fluid-filled spaces (ventricles) and are called periventricular. The ischemic infarctions typically involved disruptions of blood flow (or ischemia) to parts of the brain supplied by the middle cerebral artery, including the frontal, temporal, and parietal lobes (except for medial parts of the frontal and parietal lobes and inferior parts of the temporal lobe) as well as the basal ganglia. Forty families completed at least four visits (21 girls), 15 with right-sided brain injury, 25 with left-sided injury; this distribution of left- versus right-hemisphere lesions is typical of early vascular injuries. Most children with BI had motor deficits involving hemiparesis (weakness on one side) contralateral to the lesion.

Procedure

Families were visited in their homes every four months for a total of 12 visits between 14 and 58 months (fewer for some of the children with BI depending upon the age at which they entered the study). The investigator videotaped interactions of the target child and primary caregiver (typically the mother) during ordinary daily activities for a 90-minute period at each visit. Other family members were sometimes present during the visits, but the video focused on interaction between the target child and parental caregiver(s). Investigators interacted minimally with families during the observations.

Transcription of Speech and Gesture

All child and parent speech and gestures were transcribed (see details for speech in Huttenlocher, Waterfall, Vasilyeva, Vevea, & Hedges, 2010; see details for gesture in Rowe & Goldin-Meadow, 2009b). Two reliability measures were applied:

Reliability of transcription

For a random 20% of transcripts, a second person transcribed 10% of utterances, including speech and gesture. Interrater agreement was at or above 95%.

Reliability of particular categories

A second person coded a random selection of utterances for speech and gesture, and the proportion of utterances on which the two coders agreed was calculated for each category. Agreement exceeded 88% for all categories.
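
As a concrete illustration of this category-level reliability check, the short sketch below computes the proportion of double-coded utterances on which two coders assign the same label. It is a hypothetical illustration with invented category names, not the project's actual coding software.

```python
# Minimal sketch (assumed, not the authors' tooling): per-category interrater
# agreement as the proportion of double-coded utterances with matching labels.

def agreement(coder1, coder2):
    """Proportion of items on which two coders assign the same label."""
    assert len(coder1) == len(coder2)
    matches = sum(a == b for a, b in zip(coder1, coder2))
    return matches / len(coder1)

# Hypothetical double-coded sample: gesture category assigned to each utterance.
coder_a = ["point", "point", "iconic", "none", "conventional"]
coder_b = ["point", "iconic", "iconic", "none", "conventional"]

print(f"gesture-category agreement: {agreement(coder_a, coder_b):.2f}")  # 0.80
```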

Summary of Findings

We review the major findings from our longitudinal sample. We first present studies showing that child gesture produced early in development varies as a function of external (SES) and internal (BI) factors, controlling for early child speech, and that these early gestures predict later linguistic achievements in both TD children and children with BI. We turn next to studies showing that parent speech produced early in a child's development also predicts later linguistic achievements and interacts with brain injury status—parent input holds the same relation to child outcome in TD children and in children with BI for vocabulary development but not for syntactic development, thus revealing the joint effects of environmental and organic factors in language development. Finally, we present studies showing that different types of parent speech produced early in a child's development predict later differences in cognitive abilities (understanding cardinal number, spatial relations, and abstract similarity relations).

Gesture: A Possible Early Index of Variation in Children's Linguistic Skills

Early gesture varies in homes that vary in SES and predicts later language

Children from low-SES families arrive at school with smaller vocabularies, on average, than children from high-SES families (e.g., Hart & Risley, 1995). In an effort to determine whether early gesture might be a precursor to this inequality, Rowe and Goldin-Meadow (2009a) studied 50 TD children at 14 months,2 and they used number of gesture types (defined as the number of different meanings conveyed using gesture; e.g., point at dog = dog, point at cup = cup) as the measure of early gesture use. Number of word types (defined as the number of different intelligible word roots produced by the child) served as a control for early child speech. Children's scores on the Peabody Picture Vocabulary Test—Third Edition (PPVT; Dunn & Dunn, 1997) taken at 54 months served as the measure of later child vocabulary. Family income and education were positively related to one another (r = .44, p = .001) and were combined into one variable (SES) using principal components analysis (see details in Rowe & Goldin-Meadow, 2009a).
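
The SES composite described above can be illustrated with a short sketch. This is an assumed reconstruction rather than the authors' analysis code: family income and parent education are standardized and reduced to their first principal component, which serves as the single SES score. All values and variable names are invented.

```python
# Minimal sketch (assumed): combine income and education into one SES score
# via the first principal component of the standardized variables.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

income = np.array([22, 35, 50, 75, 100, 15, 60, 90.0])    # $1,000s (hypothetical)
education = np.array([12, 14, 16, 18, 20, 12, 16, 18.0])  # years of schooling (hypothetical)

X = np.column_stack([income, education])
X_std = StandardScaler().fit_transform(X)                  # put both variables on one scale

ses = PCA(n_components=1).fit_transform(X_std).ravel()     # first principal component
print(np.round(ses, 2))                                    # one SES composite score per family
```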

Rowe and Goldin-Meadow (2009a) replicated the well-known phenomenon that child vocabulary (PPVT) varies as a function of SES. They then asked whether this relation between SES and later child vocabulary skill can be explained by the children's early gesture use. Using a mediation analysis, Rowe and Goldin-Meadow first established that SES relates to child gesture at 14 months, and that child gesture relates to later child vocabulary. The mediation analysis showed that the relation between SES and later child vocabulary (controlling for child word types at 14 months) is reduced in magnitude when child gesture is included in the model, suggesting that the relation between SES and child vocabulary at 54 months is partially mediated by child gesture at 14 months. A second mediation analysis found that the relation between SES and child gesture at 14 months was mediated by parent gesture at 14 months. However, parent gesture did not have a direct relation to child vocabulary—it was related to early child gesture, which, in turn, was related to later child vocabulary.
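
The logic of this mediation test can be sketched as two regressions: the total effect of SES on later vocabulary (controlling for early word types), and the direct effect once early gesture enters the model. The sketch below uses simulated data and hypothetical variable names; it is not the published analysis, and a full treatment would also test the indirect effect (e.g., with a bootstrap).

```python
# Minimal sketch (assumed, simulated data): does child gesture at 14 months
# carry part of the SES-vocabulary relation?

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 50
ses = rng.normal(size=n)
gesture14 = 0.5 * ses + rng.normal(size=n)                   # SES predicts early gesture
words14 = rng.normal(size=n)                                 # control: early word types
ppvt54 = 0.6 * gesture14 + 0.2 * ses + rng.normal(size=n)    # gesture predicts later vocabulary

d = pd.DataFrame(dict(ses=ses, gesture14=gesture14, words14=words14, ppvt54=ppvt54))

total = smf.ols("ppvt54 ~ ses + words14", data=d).fit()               # total effect of SES
direct = smf.ols("ppvt54 ~ ses + gesture14 + words14", data=d).fit()  # direct effect, mediator included

print("SES coefficient, total: ", round(total.params["ses"], 2))
print("SES coefficient, direct:", round(direct.params["ses"], 2))     # smaller => partial mediation
```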

Importantly, there was no relation between SES and child word types in our sample at 14 months. Thus, at a time when we do not yet see differences as a function of SES in child speech, we do see them in child gesture. As mentioned earlier, children typically do not begin to gesture until around 10 months. SES differences are thus evident a mere four months (and possibly even sooner) after the onset of child gesture.

Why does early gesture forecast later vocabulary learning? Early gesture might be an index of global communicative skill. For example, children who convey a large number of different meanings in their early gestures might be generally good language learners and/or have high levels of intelligence. If so, not only should these children have large vocabularies later in development, but their sentences ought to be relatively complex as well. Alternatively, particular types of early gestures could be specifically related to particular aspects of later spoken language use. In an analysis of 52 TD children, Rowe and Goldin-Meadow (2009b) found that, controlling for early vocabulary, the number of different meanings children conveyed in gesture early in development predicted the size of their comprehension vocabularies several years later, whereas the number of gesture-speech combinations they produced early in development did not. In contrast, controlling for early syntax, the number of gesture-speech combinations (e.g., point at hat + dada to refer to dad's hat) that children produced early in development predicted the syntactic complexity of their spoken sentences several years later, whereas the number of different meanings conveyed in gesture early in development did not. Importantly, if the number of different meanings conveyed in gesture and the number of gesture-speech combinations are pitted against one another (along with a control for spoken vocabulary) in a single model, early gesture meanings significantly predict children's later comprehension vocabulary, but early gesture–speech combinations do not. The selectivity with which gesture predicts different linguistic skills suggests that the gestures are reflecting not just general intelligence or overall language learning ability, but rather skills specific to learning vocabulary or to learning syntax.

Early gesture predicts later language in children with brain injury just as it does in children with an intact brain

We have seen that individual differences in early gesture use can predict later differences in speech in children acquiring language at a typical pace. What would happen if we were to extend the variation and examine children with early brain injury? Within this population, can we identify children who are at risk for language delay by examining their early gestures?

Sauer, Levine, and Goldin-Meadow (2010) categorized 11 children with BI into two groups based on whether their gesture use at 18 months was within or outside of the range for 53 TD children: Five children in the BI group fell below the 25th percentile for TD gesture at 18 months (low), and six children fell above the 25th percentile (high). Speech (as measured by number of different words produced) did not differ for low and high groups at 18 months; both groups were below the range of speech for TD children at that time point.

The interesting question is whether gesture use at 18 months predicts later delays in speech. It does—children with BI classified as high gesturers at 18 months went on to develop production vocabularies (as measured by number of different words produced) at 22 and 26 months, and comprehension vocabularies (as measured by PPVT scores) at 30 months that were within the TD range, indeed close to the mean. In contrast, children with BI classified as low gesturers at 18 months remained below the range for TD children at both 22 and 26 months in production, and at 30 months in comprehension. Early gesture use can thus predict subsequent spoken vocabulary not only for children learning language at a typical pace but also for those exhibiting delays.

Note that the advantage of looking at early gesture (as opposed to speech) is that we can see differences between children who will eventually catch up and those who will not (at least not without intervention) before they display differences in speech—both groups were below the norm in speech at 18 months. The bottom line is that early gesture can be used to diagnose risk for later language delays before those delays are evident in speech, opening the door to early intervention.

Parent Speech: A Possible Source of Variation in Children's Linguistic Skills

We have seen that the large individual differences children exhibit can be traced, at least in part, to differences in the way children use gesture early in development, a characteristic internal to the child. Building on prior findings (e.g., Hart & Risley, 1995; Hoff, 2003; Huttenlocher et al., 1991), we ask whether a factor external to the child—parent speech—can also be used to predict differences in language learning. Finding that the acquisition of language is sensitive to parent speech would open possibilities for effective interventions to improve language skills and help remodel an injured brain.

Quantity but not quality of parent speech varies by SES; both predict child language

Huttenlocher et al. (2010) analyzed videotapes of 47 TD families, concentrating on the period when children begin to acquire complex syntax (26–46 months). Rather than focus on total amount of parent and child speech, Huttenlocher et al. analyzed diversity—the variety of words, phrases, and clauses produced by parents and children. Lexical diversity was measured by the number of different words (i.e., word types) that a speaker produced. Syntactic diversity was measured at two hierarchically organized levels, and different types of constructions were identified at each level—within clause (constituent diversity; e.g., adjectives, prepositional phrases) and across clause (clausal diversity; e.g., subject relative and object complement clauses).
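
Of these diversity measures, lexical diversity is the simplest to operationalize. The sketch below counts word types in a stretch of talk; it is an assumed illustration with a deliberately crude tokenizer, not the study's transcription pipeline, which relied on hand-coded transcripts.

```python
# Minimal sketch (assumed): lexical diversity as the number of different
# word types in a transcript.

import re

def word_types(transcript: str) -> int:
    """Count distinct word roots, here approximated by distinct lowercase tokens."""
    tokens = re.findall(r"[a-z']+", transcript.lower())
    return len(set(tokens))

parent = "do you want the big red ball or the little ball"
child = "big ball ball want ball"

print(word_types(parent))  # 9
print(word_types(child))   # 3
```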

There were three notable results. First, parents and children both displayed large individual differences in diversity scores, and more diverse parent speech predicted more diverse child speech. Moreover, some effects were specific—child lexical diversity was best predicted by parent lexical diversity; child constituent diversity by parent constituent diversity. Child clausal diversity was predicted by all three types of parent input, perhaps because clausal diversity is a measure that taps into aspects of the entire sentence.

Second, Huttenlocher et al. (2010) addressed the important question of “who is influencing whom” by using lagged correlations (e.g., using parent speech at 26 months to predict child speech at 30 months, parent speech at 30 months to predict child speech at 34 months, etc.). Lagged correlations between parent speech at an earlier session and child speech at a later session (forward correlations) and between child speech at an earlier session and parent speech at a later session (backward correlations) allow a relatively fine-tuned assessment of directionality. For vocabulary, forward and backward correlations were both significant and equally large; that is, earlier parent speech predicted later child speech, and earlier child speech predicted later parent speech, suggesting a reciprocal relation between parent and child for vocabulary. In contrast, for syntax, forward correlations were significant, but backward correlations (early child predicting later parent syntax) were not, suggesting an unequal relation between parent and child for syntax. The different patterns for vocabulary and syntax suggest that particular parent behaviors, rather than overall parent intelligence, are behind the correlations with child language learning.
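
The forward/backward contrast can be made concrete with a small sketch: forward correlations pair a parent measure at session t with the child measure at session t + 1, and backward correlations reverse the roles. The sketch below uses simulated sessions and hypothetical variable names; it is not the authors' analysis.

```python
# Minimal sketch (assumed, simulated data): forward vs. backward lagged
# correlations across observation sessions, pooled over children.

import numpy as np

rng = np.random.default_rng(1)
n_children, n_sessions = 47, 6

parent = rng.normal(size=(n_children, n_sessions))   # e.g., clausal diversity per session
child = rng.normal(size=(n_children, n_sessions))
child[:, 1:] += 0.5 * parent[:, :-1]                 # toy data: child follows earlier parent speech

def lagged_corr(x, y):
    """Correlation of x at session t with y at session t + 1."""
    return np.corrcoef(x[:, :-1].ravel(), y[:, 1:].ravel())[0, 1]

forward = lagged_corr(parent, child)    # earlier parent speech -> later child speech
backward = lagged_corr(child, parent)   # earlier child speech -> later parent speech
print(round(forward, 2), round(backward, 2))  # forward is clearly larger in this toy setup
```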

Third, Huttenlocher et al. (2010) examined whether SES differences in child speech were mediated by differences in parent speech. They first analyzed SES effects without parent speech and then later included parent speech in the analyses. SES effects turned out to be smaller when parent speech was included in the models predicting child lexical and constituent diversity, suggesting that the relation between SES and these aspects of child speech is partially mediated by parent speech.

The lexical diversity measure used by Huttenlocher et al. (2010)—the number of different words parents use with their children—is, of course, not independent of the quantity of talk parents address to their children. Using a procedure developed by Gillette, Gleitman, Gleitman, and Lederer (1999), Cartmill et al. (2013) measured quality of parent input to vocabulary development independent of quantity. They determined how easy it was to guess from nonlinguistic context alone a randomly selected set of nouns produced by 50 of the parents in our sample. The more easily a word can be guessed, the more likely a child is to figure out, and then learn, the word—easily guessed words thus reflect high-quality word-learning experiences. There were three central findings: (a) Parents varied in the quality of word-learning experiences they gave their children at 14 and 18 months. (b) That variability in quality of input correlated with children's comprehension vocabulary (as measured by PPVT scores) three years later, controlling for quantity of parent input (number of words produced per minute) at 14 and 18 months. Quantity and quality did not correlate with each other and did not interact in predicting child vocabulary—they each accounted for different aspects of variance in the child outcome measure. (c) Quantity of parent input to word learning was positively related to SES, but quality of parent input was not. The bottom line from these studies, taken together, is that particular aspects of parent language input matter for particular aspects of child language output, and that some aspects of input vary by SES whereas others do not, which could have implications for intervention efforts.

Parent speech may matter more for children with early brain injury than for typically developing children

The question we next ask is whether parent speech has the same relation to child speech in children with brain injury as it does in typically developing children. Rowe et al. (2009) analyzed data from 27 children with BI, compared to 53 TD children. They used hierarchical linear modeling to construct two models—one for vocabulary growth (measured in number of word types) and one for syntactic growth (measured in mean length of utterance [MLU])—to predict child growth, taking into account parent education,3 parent input, brain injury status, and the interaction between parent input and brain injury status.
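
A growth model in this spirit can be sketched as a linear mixed-effects model: repeated sessions nested within children, with parent input, brain injury status, and their interactions with time as predictors. The sketch below uses simulated data and hypothetical variable names; it is not the published model specification.

```python
# Minimal sketch (assumed, simulated data): mixed-effects growth model for MLU
# with an input x brain-injury interaction on the growth term.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for child_id in range(80):
    bi = int(child_id < 27)                      # 1 = brain injury, 0 = typically developing
    parent_input = rng.normal()                  # e.g., standardized parent word tokens
    child_intercept = rng.normal(scale=0.5)
    for months in (0, 4, 8, 12, 16):             # sessions, coded from the first visit
        mlu = (1.5 + 0.05 * months
               + 0.02 * months * parent_input            # input predicts faster growth
               + 0.02 * months * parent_input * bi       # toy: extra input effect for BI group
               + child_intercept + rng.normal(scale=0.3))
        rows.append(dict(child=child_id, months=months, bi=bi,
                         parent_input=parent_input, mlu=mlu))

d = pd.DataFrame(rows)

# Random intercept and slope for time within child; the three-way interaction
# tests whether parent input matters more for children with brain injury.
model = smf.mixedlm("mlu ~ months * parent_input * bi", data=d,
                    groups="child", re_formula="~months").fit()
print(model.summary())
```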

Looking first at word types, Rowe et al. (2009) found a relation between brain injury status and vocabulary—vocabulary growth trajectories for children with BI were lower than the corresponding trajectories for TD children. They also found two relations between parent input and vocabulary: children with less input had smaller vocabularies at 30 months than children with more input, and acceleration in growth was more pronounced for children with high input than for children with low input. Interestingly, the relation between parent speech and vocabulary did not differ based on brain injury status. However, MLU showed a different pattern—a significant interaction between brain injury status and input. There was a bigger difference between rates of growth in MLU for high- and low-input children with BI than for high- and low-input TD children. Parent input thus appears to act similarly as a predictor of growth in vocabulary for the two groups, but to be a more potent predictor of growth in syntax for children with BI than for TD children.

Rowe et al. (2009) also examined lesion size, lesion type (periventricular, ischemic), and seizure history in the children with BI, and they found that these characteristics contributed to language trajectories as well. Plasticity after early lesions should therefore be thought of as a joint function of variability in the environment (linguistic input) and variability in the organism (lesion characteristics and neurological manifestations). The bottom line is that the effect linguistic input has on children with BI, compared to TD children, can differ as a function of linguistic property.

Parent Speech: A Possible Source of Variation in Children's Cognitive Skills

We have shown that variability in the language parents use with their children is associated with variability in the language children themselves use. We now ask whether parent input is related not only to child language, but also to child cognition.

Parent speech is related to children's acquisition of numerical and spatial knowledge

Levine, Suriyakham, Rowe, Huttenlocher, and Gunderson (2010) examined the cumulative number of times 44 TD children and their parents produced number words (1–10) during the 14- to 30-month observation sessions. At 46 months, children were given the Point-to-X task (children were shown two cards with different numbers of squares on them and asked to point to the card with a given number of squares, between 2 and 6; Wynn, 1992), and their scores were taken as a measure of their understanding of cardinal number. With SES controlled, parent cumulative talk about number to children during the early years was positively related to children's later cardinal number knowledge, over and above parent talk in general (which did predict children's word comprehension at 54 months, as measured by PPVT). Children's own cumulative experience talking about number also related to their later cardinal number knowledge, whereas children's talkativeness in general did not. Parent number input thus specifically predicted children's later cardinal number knowledge.

Pruden, Levine, and Huttenlocher (2011) examined the relation between parent talk about space and children's later spatial abilities, and they also explored whether the relation between parent talk and child nonverbal spatial abilities was mediated by children's own production of spatial language. Spatial language (words describing the spatial features and properties of objects; e.g., big, tall, circle, curvy, edge) was coded in 52 TD families during the 14- to 46-month sessions. At 54 months, children were given items from three nonverbal spatial tasks: a spatial transformation task (Levine, Huttenlocher, Taylor, & Langrock, 1999), the Block Design subtest from the Wechsler Preschool and Primary Scale of Intelligence–Third Edition (Wechsler, 2002), and a spatial analogies test (Huttenlocher & Levine, 1990).

Pruden et al. (2011) found a significant correlation between parents' spatial tokens through 46 months and children's scores on the three spatial tasks at 54 months. They used a mediation analysis to determine whether this relation could be explained by the children's spatial language through 46 months. They first established that parent spatial tokens were significantly correlated with child spatial tokens, and that child spatial tokens were significantly correlated with child scores on the spatial tasks, which was true for both the spatial transformation and spatial analogies tests. The mediation analysis showed that the relation between parent spatial tokens and child spatial task scores (controlling for parents' other tokens and children's other tokens) was no longer significant when child spatial tokens were included in the analysis, indicating that the relation between parent spatial tokens and child scores on these two tests was mediated by child spatial tokens. Parent spatial language thus predicts child spatial language, which, in turn, predicts at least some of the child's spatial abilities.

Parent speech is related to the acquisition of abstract similarity relations

The findings from our longitudinal sample suggest that children who receive more linguistic input tend, on the whole, to display better linguistic and cognitive skills than do children who receive less, suggesting that linguistic input plays a role in the development of these skills. Another way to explore the impact of parent linguistic input on child outcome is to study outcomes in children who have no access to a conventional language. If linguistic input is critical to developing a particular skill, having no access to language input should result in the child not developing the skill. We used our data to first establish a relation between linguistic input and the types of comparisons children make early in development. We then tested this relation by exploring types of comparisons made by children not exposed to conventional linguistic input.

Özçalişkan, Goldin-Meadow, Gentner, and Mylander (2009) explored whether learning a word for an abstract relation—the English word like, which marks similarity (the butterfly is like a rainbow)—promotes noticing and commenting on different types of similarity comparisons in 40 TD children from ages 14 to 34 months. The children began to express similarity relations as early as 18 months in speech + gesture combinations (e.g., cat + point to tiger), long before they learned the word like. At approximately 30 months, the children began producing a sizeable number of similarity comparisons with like, and the nature of their similarity comparisons changed. Although they continued to produce global comparisons (comparing objects from the same category that resemble each other on many dimensions; the cat is like a tiger), after 30 months, they dramatically increased their production of more sophisticated comparisons, drawing similarities between objects from different categories that resemble each other on only one dimension (the crayon is brown like my hair). This temporal pattern suggests that acquiring the word like may be instrumental in getting children to draw these more sophisticated comparisons.

To test this hypothesis, Özçalişkan et al. (2009) used the same coding system with four deaf children whose hearing losses prevented them from learning spoken language and whose hearing parents had not exposed them to sign language. Thus, these deaf children had not been exposed to a usable model for language. However, they did develop their own gestures, called homesigns, to communicate with the hearing individuals in their worlds. Although the homesigns had linguistic structure at many levels (Goldin-Meadow, 2003), they did not contain a term comparable to the word like. Nevertheless, all four homesigners used their gestures to express similarity comparisons (point to cat + point to tiger), and those comparisons resembled the comparisons that the 40 hearing children conveyed in their early speech + gesture combinations (cat + point to tiger). Importantly, however, the two groups of children diverged at later ages. After acquiring the word like, the hearing children shifted from expressing global similarity to also expressing single-property similarity. In contrast, the homesigners, lacking an explicit term for similarity, did not make the switch and continued to express only global similarity. These findings highlight the importance of conventional terms for comparison as likely contributors to expressing more focused similarity relations. Learning similarity language may influence the ability to talk about, and perhaps think about, more abstract similarity relations.

Taken together, the bottom line from these studies is that parent speech relates not only to children's later language skills but also to their later cognitive skills. In future work, we will explore whether these various types of linguistic input relate to cognitive skills in children with BI in the same way that they do in TD children.

Theoretical Implications of Our Findings

Language learning is a resilient process (Goldin-Meadow, 2003)—children raised in environments that vary widely in the amount and quality of linguistic input they provide all learn language. Our findings underscore this resilience. The TD children in our sample, which was chosen to reflect the SES variability in Chicago, were within the typical range for vocabulary and syntactic development. Note that our data cannot tell us whether linguistic input is essential for children to develop linguistic structure—all of the parents in our sample may have exceeded the threshold amount and quality of linguistic input needed to promote language learning. We need to remove linguistic input entirely to address this question. As described earlier, homesigners lack usable linguistic input. Nevertheless, these children develop gesture systems containing the rudimentary properties of natural language (Goldin-Meadow, 2003), suggesting that linguistic input is not essential for the development of at least some resilient properties of language.

However, the fact that language learning is resilient does not mean that the rate at which language is learned is impervious to external and internal pressures. Our findings add to many showing that variability in parent input is correlated with variability in child output—parents who use many words overall and many spatial and numerical words in particular, who offer many high-quality word-learning experiences, and who use a variety of syntactic constructions have children who make early progress in just these areas. Linguistic input may thus play an important role in the timing of language learning and may even be essential for the development of certain language properties (properties homesign does not have, e.g., single-property similarity; see also Gentner, Özyurek, Gurcanli, & Goldin-Meadow, 2013).

Our findings on children with BI underscore the resilience of language with respect to internal factors, while at the same time highlighting the importance of linguistic input. Many of the children with BI in our study fell within the range for the TD children. However, some did not. Importantly, for the children with BI, high linguistic input was associated with being within the TD range, suggesting an interaction between internal and external factors in determining language learning rate. Future research is needed to determine which linguistic properties are resilient with respect to external factors (here linguistic input) and/or internal factors (here, early brain lesions) and to explore how these factors work together to create acceptable language learning outcomes.

Hypotheses Generated by the Study That Have Implications for Interventions

Our project was not intended to generate interventions. However, our findings have implications both for prediction of later language deficits and for interventions to improve children's skills. We present the following four hypotheses as a summary of the findings from our study because there is scientific evidence behind them and because it is worth considering whether this evidence can be put to practical use.

Hypothesis 1: Charting early gesture use allows us to predict when children are likely to acquire particular linguistic constructions in speech; as such, it has the potential to serve as a diagnostic tool to identify individuals at risk for language delay.

Our findings suggest that a diagnostic tool based on the number of different meanings a child produces in gesture during the earliest stages of language learning could be used to identify children at risk for later vocabulary deficits well before those children can be identified using speech. Given that we see the first signs of sentence construction in speech + gesture combinations in TD children, we may also be able to use the number and types of speech + gesture combinations children produce prior to the onset of two-word combinations to identify individual children who are at risk for later deficits in sentence construction (e.g., Iverson, Longobardi, & Caselli, 2003). Of course, a great deal of work would need to be done to make a gesture diagnostic feasible. Clinicians and teachers do not have time to take 90-minute samples of child speech and examine them for gesture. However, it should be possible to construct elicitation tasks that generate gesture, norm those tasks on typically developing children, and then use the tasks to assess gesture production in children at risk for language delay.

The advantage of a gesture test is that potential delays can be detected even before the onset of speech, providing an earlier start for intervention, and a longer time during which to intervene, before school entry. Early identification would also help focus attention on children most at risk for language delay, and thus in need of intervention, before they display delays. Given limited resources, it would be useful to identify, within children at risk for language delay, which children are more likely to require intervention to end up within the range for typically developing children.

The first hypothesis concerns a diagnostic that may be able to predict which children are at risk for language delays or deficits, but it provides no insight about what might be done to improve children's development. The remaining hypotheses all concern interventions that might be used to improve language skill. However, these hypotheses raise problems of causal inference. Ours is a study of naturalistic variation, not a study that assigns children to different conditions at random. Hence, our study, like other observational studies, cannot offer assurances that individuals being compared are identical in every way except treatment condition. We can use (and have used) matching on observed characteristics, or statistical control based on observed characteristics, as a substitute for control by random assignment, but neither method is foolproof, and neither can address the problem of unknown or unobserved confounding variables. Moreover, even when the relevant confounding variables are known, it is not always obvious how to take the variables into account to frame a causal inference. The remaining three hypotheses should therefore not be taken as established causal relations. However, the factors we have found in early development that predict later language outcomes lay the basis for causal explanations, which, of course, require further investigation.

Hypothesis 2: Encouraging children to gesture at very early ages has the potential to increase the size of their spoken vocabularies at school entry.

There are at least two reasons why gesture might be an ideal candidate around which to design an intervention program. First, SES differences in vocabulary are already well established by the time children enter school. To alleviate social disparities, we need to intervene with low-SES children early in development and we therefore need to focus on early appearing skills—gesture is just such an early developing skill. Second, unlike SES, which is extremely difficult to alter, gesturing can be manipulated. Previous work on older children has shown that encouraging them to gesture when explaining how they solved a math problem made them receptive to subsequent instruction on that problem (Broaders, Cook, Mitchell, & Goldin-Meadow, 2007). As another example with younger children, in a 7-week intervention study conducted in children's homes, toddlers who were encouraged to gesture while looking at a book with the experimenter increased the rate at which they gestured when interacting with their parents more than toddlers who were not encouraged to gesture. Gesturing is malleable. Importantly, the toddlers who increased their rate of gesturing also increased the number of different words they produced, and did so more than children whose gestures did not increase (LeBarton, Goldin-Meadow, & Raudenbush, 2013). Gesturing can have an impact on word learning.

Child gesture has the potential to influence language learning in a direct way by giving children an opportunity to practice producing particular meanings by hand at a time when those meanings are difficult to produce by mouth (Iverson & Goldin-Meadow, 2005). Child gesture could also play a more indirect role in language learning by eliciting timely speech from listeners. Gesture has the potential to alert listeners (parents, teachers, clinicians) to the fact that a child is ready to learn a particular word or sentence; listeners might then adjust their talk, providing just the right input to help the child learn the word or sentence (e.g., a child who does not yet know the word “cat” points at it and his mother responds, “yes, that's a cat”). Because they are finely tuned to a child's current state (cf. Vygotsky's, 1986, zone of proximal development), parental responses of this sort are effective in teaching children how an idea can be expressed in the language they are learning (e.g., Goldin-Meadow, Goodrich, Sauer, & Iverson, 2007). We suggest that it may be beneficial for parents, teachers, and clinicians to encourage children to gesture and then to use those gestures to guide the linguistic input they offer the children.

Hypothesis 3: Encouraging caregivers to use more diversified vocabulary and complex syntax has the potential to facilitate children's acquisition of vocabulary and complex syntax.

There is growing evidence that the language parents use with children can be changed (Engle et al., 2011; Marulis & Neuman, 2010; Roberts & Kaiser, 2011), as can child language itself (Hart & Risley, 1980). Our findings suggest that interventions designed to encourage parents or teachers at preschool programs to use more complex syntax may enhance children's acquisition of complex syntactic forms, but such an intervention might have little effect on vocabulary. Conversely, interventions designed to induce caregivers or teachers to use more extensive vocabulary in high-quality word-learning situations may enhance children's vocabulary but may have little effect on their development of complex syntactic forms.

Moreover, the relation between input and syntactic growth appears to be even tighter for children with brain injury than for typically developing children. In terms of policy implications, because we cannot yet alter lesion characteristics, and assuming optimal neurological care, we must for the moment focus our interventions on environmental factors that can contribute to language growth. Thus, interventions that promote more extensive vocabulary and (particularly) more complex syntactic forms offer a promising route to achieving higher linguistic competence for children with brain injury.

Hypothesis 4: Encouraging caregivers to increase their use of words for number, for the spatial properties of objects, and for abstract relations like similarity has the potential to increase how often children use these words, which, in turn, can affect their thinking in domains to which the words are relevant.

Our findings provide an initial step in identifying the kinds of talk that hold promise for improving children's understanding of number and spatial thinking, and their ability to make sophisticated comparisons. Follow-up experimental studies are, of course, needed to determine ways to increase the talk children hear (either at home or at school), which would then allow policy makers to make precise, evidence-based recommendations to parents and early childhood educators about the input children need to enhance not only their language but also their thinking.

Interventions of this sort could help typically developing children (as well as children at risk due to internal factors such as brain injury) learn, earlier than they otherwise would, concepts and abstractions that all children eventually acquire, which, in turn, may affect achievement trajectories (Duncan et al., 2007). Finally, the insights gained from our study and follow-up studies could also be used as a basis for engineering instructional materials (such as educational videos, computer games, and curricula for preschools) to promote the development of a suite of robust cognitive skills.

Conclusions

We have suggested four specific hypotheses arising from our longitudinal study that have potential implications for educational practice, ranging from diagnostic testing to the theoretical basis of interventions for parents and children. None of these hypotheses has been verified experimentally. Moreover, the actual creation of the interventions (and specific diagnostic testing procedures) we propose will require considerable iterative efforts, and some, of course, may not work. However, the suggestions for intervention that we offer are grounded in extensive empirical evidence and, at the least, constitute promising directions for research and development.

Acknowledgments

The longitudinal study upon which this article is based was supported by Eunice Kennedy Shriver National Institute of Child Health and Human Development Grant P01 HD 40605; funding from the National Institute on Deafness and Other Communication Disorders (Grant R01 DC00491 to Susan Goldin-Meadow) and from the National Science Foundation (Grant SBE 0541957 to the Spatial Intelligence and Learning Center) provided additional support.

We thank Kristi Schonwald, Jodi Khan, and Jason Voigt for administrative and technical support. We also thank Laura Chang, Elaine Croft, Ashley Drake, Kristen Duboc, Lauren Graham, Jennifer Griffin, Kristen Jezior, Lauren King, Max Masich, Erica Mellum, Josey Mintel, Jana Oberholtzer, Emily Ostergaard, Lilia Rissman, Rebecca Seibel, Calla Trofatter, Kevin Uttich, Julie Wallman, and Kristin Walters for help in data collection, transcription, and coding. We also are grateful to the participating children and families.

Biographies

Susan Goldin-Meadow

Susan C. Levine

Larry V. Hedges

Janellen Huttenlocher

Stephen W. Raudenbush

Steven L. Small

Footnotes

1. We are thus using SES as a methodological device—we sampled families of varied SES, and this procedure increased the sample variation in parent input, the factor that we were hoping to vary.

2. The number of families included in each of the studies described here depended on the amount of coding that had been completed at the time of the study and the number of children who had taken the relevant standardized test—in this case, the Peabody Picture Vocabulary Test—Third Edition (PPVT; Dunn & Dunn, 1997) at 54 months. The number of participants thus varies across the studies we report.

3. It was important to control for parent education (our proxy for SES in this study) because the TD participants came from a wider SES range than did the BI participants.

References

  1. Anderson RC, Freebody P. Vocabulary knowledge. In: Guthrie JT, editor. Comprehension and teaching: Research reviews. Newark, DE: International Reading Association; 1981. pp. 77–117. [Google Scholar]
  2. Bates E. Language and context: The acquisition of pragmatics. New York, NY: Academic Press; 1976. [Google Scholar]
  3. Bates E, Thal D, Janowsky J. Early language development and its neural correlates. In: Rapin I, Segalowitz S, editors. Handbook of neuropsychology: vol 6: Child neurology. Amsterdam, the Netherlands: Elsevier; 2000. pp. 525–592. [Google Scholar]
  4. Broaders SC, Cook SW, Mitchell Z, Goldin-Meadow S. Making children gesture brings out implicit knowledge and leads to learning. Journal of Experimental Psychology: General. 2007;136:539–550. doi: 10.1037/0096-3445.136.4.539. [DOI] [PubMed] [Google Scholar]
  5. Cartmill E, Armstrong B, Gleitman L, Goldin-Meadow S, Medina T, Trueswell J. Quality of early parent input predicts child vocabulary three years later. Proceedings of the National Academy of Sciences, USA. 2013;110:11278–11283. doi: 10.1073/pnas.1309518110. [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Cartmill EA, Hunsicker D, Goldin-Meadow S. Pointing and naming are not redundant: Children use gesture to modify nouns before they modify nouns in speech. Developmental Psychology. 2014 Mar 3; doi: 10.1037/a0036003. Advance online publication. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Demir OE, Levine SC, Goldin-Meadow S. Narrative skill in children with early unilateral brain injury: A possible limit to functional plasticity. Developmental Science. 2010;13:636–647. doi: 10.1111/j.1467-7687.2009.00920.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Duncan GJ, Dowsett CJ, Claessens A, Magnuson K, Huston AC, Klebanov P, Brooks-Gunn J, et al. School readiness and later achievement. Developmental Psychology. 2007;43:1428–1446. doi: 10.1037/0012-1649.43.6.1428. [DOI] [PubMed] [Google Scholar]
  9. Dunn LM, Dunn LM. Peabody Picture Vocabulary Test. Third. Circle Pines, MN: American Guidance Service; 1997. [Google Scholar]
  10. Engle PL, Fernald LCH, Alderman H, Behrman J, O'Gara C, Yousafzai A, et al., the Global Child Development Steering Group. Strategies for reducing inequalities and improving developmental outcomes for young children in low-income and middle-income countries. The Lancet. 2011;378:1339–1353. doi: 10.1016/S0140-6736(11)60889-1. [DOI] [PubMed] [Google Scholar]
  11. Feldman HM. Language development after early unilateral brain injury: A replication study. In: Tager-Flusberg H, editor. Constraints on language acquisition: Studies of atypical children. Hillsdale, NJ: Erlbaum; 1994. pp. 75–90. [Google Scholar]
  12. Feldman HM. Language learning with an injured brain. Language Learning and Development. 2005;1:265–288. doi: 10.1080/15475441.2005.9671949. [DOI] [Google Scholar]
  13. Fenson L, Dale PS, Reznick JS, Bates E, Thal DJ, Pethick SJ. Variability in early communicative development. Monographs of the Society for Research in Child Development. 1994;59(Serial No. 242) [PubMed] [Google Scholar]
  14. Furrow D, Nelson K, Benedict H. Mothers' speech to children and syntactic development: Some simple relationships. Journal of Child Language. 1979;6:423–442. doi: 10.1017/S0305000900002464. [DOI] [PubMed] [Google Scholar]
  15. Gentner D, Özyurek A, Gurcanli O, Goldin-Meadow S. Spatial language facilitates spatial cognition: Evidence from children who lack language input. Cognition. 2013;127:318–330. doi: 10.1016/j.cognition.2013.01.003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Gertner BL, Rice ML, Hadley PA. Influence of communicative competence on peer preferences in a preschool classroom. Journal of Speech and Hearing Research. 1994;37:913–923. doi: 10.1044/jshr.3704.913. [DOI] [PubMed] [Google Scholar]
  17. Gillette J, Gleitman H, Gleitman L, Lederer A. Human simulations of vocabulary learning. Cognition. 1999;73:135–176. doi: 10.1016/S0010-0277(99)00036-0. [DOI] [PubMed] [Google Scholar]
  18. Goldin-Meadow S. The resilience of language. New York, NY: Psychology Press; 2003. [Google Scholar]
  19. Goldin-Meadow S, Butcher C. Pointing toward two-word speech in young children. In: Kita S, editor. Pointing: Where language, culture, and cognition meet. Hillsdale, NJ: Erlbaum; 2003. pp. 85–108. [Google Scholar]
  20. Goldin-Meadow S, Goodrich W, Sauer E, Iverson J. Young children use their hands to tell their mothers what to say. Developmental Science. 2007;10:778–785. doi: 10.1111/j.1467-7687.2007.00636.x. [DOI] [PubMed] [Google Scholar]
  21. Goldin-Meadow S, Saltzman J. The cultural bounds of maternal accommodation: How Chinese and American mothers communicate with deaf and hearing children. Psychological Science. 2000;11:307–314. doi: 10.1111/1467-9280.00261. [DOI] [PubMed] [Google Scholar]
  22. Hart B, Risley T. In vivo language intervention: Unanticipated general effects. Journal of Applied Behavior Analysis. 1980;13:407–432. doi: 10.1901/jaba.1980.13-407. [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Hart B, Risley TR. Meaningful differences in the everyday experiences of young children. Baltimore, MD: Brookes; 1995. [Google Scholar]
  24. Heath SB. Ways with words. Cambridge, England: Cambridge University Press; 1983. [Google Scholar]
  25. Hess RD, Shipman VC. Early experience and the socialization of cognitive modes in children. Child Development. 1965;36:869–886. doi: 10.2307/1126930. [DOI] [PubMed] [Google Scholar]
  26. Hoff E. The specificity of environmental influence: Socioeconomic status affects early vocabulary development via maternal speech. Child Development. 2003;74:1368–1378. doi: 10.1111/1467-8624.00612. [DOI] [PubMed] [Google Scholar]
  27. Hoff-Ginsberg E. Function and structure in maternal speech: Their relation to the child's development of syntax. Developmental Psychology. 1986;22:155–163. doi: 10.1037/0012-1649.22.2.155. [DOI] [Google Scholar]
  28. Hoff-Ginsberg E. Maternal speech and the child's development of syntax: A further look. Journal of Child Language. 1990;17:85–99. doi: 10.1017/S0305000900013118. [DOI] [PubMed] [Google Scholar]
  29. Huttenlocher J, Haight W, Bryk A, Selzer M, Lyons T. Early vocabulary growth: Relation to language input and gender. Developmental Psychology. 1991;27:236–248. doi: 10.1037/0012-1649.27.2.236. [DOI] [Google Scholar]
  30. Huttenlocher J, Levine SC. Primary Test of Cognitive Skills. New York, NY: Macmillan; 1990. [Google Scholar]
  31. Huttenlocher J, Vasilyeva M, Cymerman E, Levine S. Language input and child syntax. Cognitive Psychology. 2002;45:337–374. doi: 10.1016/S0010-0285(02)00500-5. [DOI] [PubMed] [Google Scholar]
  32. Huttenlocher J, Waterfall H, Vasilyeva M, Vevea J, Hedges L. Sources of variability in children's language growth. Cognitive Psychology. 2010;61:343–365. doi: 10.1016/j.cogpsych.2010.08.002. [DOI] [PMC free article] [PubMed] [Google Scholar]
  33. Iverson J, Capirci O, Longobardi E, Caselli MC. Gesturing in mother–child interactions. Cognitive Development. 1999;14:57–75. doi: 10.1016/S0885-2014(99)80018-5. [DOI] [Google Scholar]
  34. Iverson JM, Goldin-Meadow S. Gesture paves the way for language development. Psychological Science. 2005;16:367–371. doi: 10.1111/j.0956-7976.2005.01542.x. [DOI] [PubMed] [Google Scholar]
  35. Iverson JM, Longobardi E, Caselli MC. Relationship between gestures and words in children with Down's syndrome and typically developing children in the early stages of communicative development. International Journal of Language & Communication Disorders. 2003;38:179–197. doi: 10.1080/1368282031000062891. [DOI] [PubMed] [Google Scholar]
  36. Kiessling LS, Denckla MB, Carlton M. Evidence for differential hemispheric function in children with hemiplegic cerebral palsy. Developmental Medicine & Child Neurology. 1983;25:727–734. doi: 10.1111/j.1469-8749.1983.tb13840.x. [DOI] [PubMed] [Google Scholar]
  37. LeBarton ES, Goldin-Meadow S, Raudenbush S. Experimentally-induced increases in early gesture lead to increases in spoken vocabulary. Journal of Cognition and Development. 2013 Nov 12; doi: 10.1080/15248372.2013.858041. Advance online publication. [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. Levine SC, Huttenlocher J, Taylor A, Langrock A. Early sex differences in spatial ability. Developmental Psychology. 1999;35:940–949. doi: 10.1037/0012-1649.35.4.940. [DOI] [PubMed] [Google Scholar]
  39. Levine SC, Huttenlocher P, Banich M, Duda E. Factors affecting cognitive functioning of hemiplegic children. Developmental Medicine & Child Neurology. 1987;29:27–35. doi: 10.1111/j.1469-8749.1987.tb02104.x. [DOI] [PubMed] [Google Scholar]
  40. Levine SC, Suriyakham LW, Rowe ML, Huttenlocher J, Gunderson EA. What counts in the development of young children's number knowledge? Developmental Psychology. 2010;46:1309–1319. doi: 10.1037/a0019671. [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Marchman VA, Miller R, Bates E. Babble and first words in children with focal brain injury. Applied Psycholinguistics. 1991;12:1–22. doi: 10.1017/S0142716400009358. [DOI] [Google Scholar]
  42. Marulis LM, Neuman SB. The effects of vocabulary intervention on young children's word learning: A meta-analysis. Review of Educational Research. 2010;80:300–335. doi: 10.3102/0034654310377087. [DOI] [Google Scholar]
  43. Miller JF, Chapman RS. The relation between age and mean length of utterance. Journal of Speech and Hearing Research. 1981;24:154–161. doi: 10.1044/jshr.2402.154. [DOI] [PubMed] [Google Scholar]
  44. Naigles LR, Hoff-Ginsberg E. Why are some verbs learned before other verbs? Effects of input frequency and structure on children's early verb use. Journal of Child Language. 1998;25:95–120. doi: 10.1017/S0305000997003358. [DOI] [PubMed] [Google Scholar]
  45. Newport EL, Gleitman H, Gleitman LR. Mother, I'd rather do it myself: Some effects and noneffects of maternal speech style. In: Snow CE, Ferguson CA, editors. Talking to children: Language input and acquisition. Cambridge, England: Cambridge University Press; 1977. pp. 109–150. [Google Scholar]
  46. Özçalişkan S, Goldin-Meadow S. Gesture is at the cutting edge of early language development. Cognition. 2005;96:B101–B113. doi: 10.1016/j.cognition.2005.01.001. [DOI] [PubMed] [Google Scholar]
  47. Özçalişkan S, Goldin-Meadow S, Gentner D, Mylander C. Does language about similarity play a role in fostering similarity comparison in children? Cognition. 2009;112:217–228. doi: 10.1016/j.cognition.2009.05.010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  48. Pruden SM, Levine SC, Huttenlocher J. Children's spatial thinking: Does talk about the spatial world matter? Developmental Science. 2011;14:1417–1430. doi: 10.1111/j.1467-7687.2011.01088.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  49. Rankin JM, Aram D, Horwitz S. Language ability in right and left hemiplegic children. Brain and Language. 1981;14:292–306. doi: 10.1016/0093-934X(81)90081-X. [DOI] [PubMed] [Google Scholar]
  50. Reilly JS, Bates EA, Marchman VA. Narrative discourse in children with early focal brain injury. Brain and Language. 1998;61:335–375. doi: 10.1006/brln.1997.1882. [DOI] [PubMed] [Google Scholar]
  51. Roberts MY, Kaiser AP. The effectiveness of parent-implemented language interventions: A meta-analysis. American Journal of Speech-Language Pathology. 2011;20:180–199. doi: 10.1044/1058-0360(2011/10-0055). [DOI] [PubMed] [Google Scholar]
  52. Rowe ML, Goldin-Meadow S. Differences in early gesture explain SES disparities in child vocabulary size at school entry. Science. 2009a Feb 13;323:951–953. doi: 10.1126/science.1167025. [DOI] [PMC free article] [PubMed] [Google Scholar]
  53. Rowe ML, Goldin-Meadow S. Early gesture selectively predicts later language learning. Developmental Science. 2009b;12:182–187. doi: 10.1111/j.1467-7687.2008.00764.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  54. Rowe ML, Levine SC, Fisher J, Goldin-Meadow S. The joint effects of biology and input on the language development of brain-injured children. Developmental Psychology. 2009;45:90–102. doi: 10.1037/a0012848. [DOI] [PMC free article] [PubMed] [Google Scholar]
  55. Sauer E, Levine S, Goldin-Meadow S. Early gesture predicts language delay in children with pre- or perinatal brain lesions. Child Development. 2010;81:528–539. doi: 10.1111/j.1467-8624.2009.01413.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  56. Stiles J, Reilly JS, Levine SC, Trauner D, Nass RD. Neural plasticity and cognitive development: Insights from children with perinatal brain injury. Oxford, England: Oxford University Press; 2012. [Google Scholar]
  57. Stiles J, Reilly J, Paul B, Moses P. Cognitive development following early brain injury: Evidence for neural adaptation. Trends in Cognitive Sciences. 2005;9:136–143. doi: 10.1016/j.tics.2005.01.002. [DOI] [PubMed] [Google Scholar]
  58. Vygotsky L. Thought and language. Cambridge, MA: MIT Press; 1986. [Google Scholar]
  59. Wechsler D. Wechsler Preschool and Primary Scale of Intelligence. Third. San Antonio, TX: Harcourt; 2002. [Google Scholar]
  60. Wynn K. Children's acquisition of the number words and the counting system. Cognitive Psychology. 1992;24:220–251. doi: 10.1016/0010-0285(92)90008-P. [DOI] [Google Scholar]
