Author manuscript; available in PMC: 2015 Jan 1.
Published in final edited form as: Dev Neuropsychol. 2014;39(4):262–284. doi: 10.1080/87565641.2014.906602

Carving the World for Language: How Neuroscientific Research Can Enrich the Study of First and Second Language Learning

Nathan R George 1, Tilbe Göksun 2, Kathy Hirsh-Pasek 3, Roberta Michnick Golinkoff 4
PMCID: PMC4193295  NIHMSID: NIHMS585794  PMID: 24854772

Abstract

Linguistics, psychology, and neuroscience all have rich histories in language research. Crosstalk among these disciplines, as realized in studies of phonology, is pivotal for understanding a fundamental challenge for first and second language learners (SLLs): learning verbs. Linguistic and behavioral research with monolinguals suggests that infants attend to foundational event components (e.g., path, manner). Language then heightens or dampens attention to these components as children map word to world in language-specific ways. Cross-linguistic differences in semantic organization also reveal sources of struggles for SLLs. We discuss how better integrating neuroscience into this literature can unlock additional mysteries of verb learning.


Children raised in an English-speaking home learn English and those in a Spanish-speaking home learn Spanish. This obvious fact harbors deep complexities about the nature of language learning, spanning the fields of linguistics, psychology, and neuroscience. Each has a rich history of engaging questions surrounding the nature and development of language, and each has resulted in substantial literatures that provide invaluable perspective on these processes (e.g., Amorapanth, Widick, & Chatterjee, 2009; Casasola, Cohen, & Chiarello, 2003; Chatterjee, 2008; Damasio et al., 2001; Golinkoff & Hirsh-Pasek, 2008; Jackendoff, 1983; Kemmerer, 2006; Langacker, 1987; Mandler, 2012; Talmy, 2000; Wu, Morganti, & Chatterjee, 2008). Yet the study of language development has largely remained a house divided. Researchers in these fields have proceeded in parallel, lacking the crosstalk that would allow these separate literatures to build toward a common understanding.

When researchers across these domains come together, the whole becomes much more than the sum of its parts. Consider how the integration of these fields has furthered our understanding of phonemic development. Linguistically, we know that languages differ greatly in the phonemic distinctions that are both detectable and producible by native speakers (Trubetzkoy, 1969). Translating linguistics into behavioral research, we learned much about how these language-specific distinctions develop. While newborns distinguish amongst the phonemes in the world’s languages, by 12 months of age their ability to perceive foreign phonemic distinctions lessens as they note how the phonemes of their own language are united into words (Eimas, Miller, & Jusczyk, 1987; Werker & Tees, 1984). Though these findings advanced our understanding significantly, the integration of behavioral paradigms into the study of the brain has augmented our knowledge further, providing both support for behavioral findings as well as new perspectives on the underlying architecture of these changes. In summarizing the role of brain data in clarifying phonemic development, Kuhl and Rivera-Gaxiola (2008) reflect that “the fact that the earliest stages of learning affect brain processing of both the signals being learned (native patterns) and the signals to which the infant is not exposed (nonnative patterns) may play a role in our understanding of the mechanisms underlying the critical period, at least at the phonetic level, showing that learning itself, not merely time, affects one’s future ability to learn” (p. 527). What becomes evident is that while each field contributes a unique perspective on the phonology problem, it is through their integration that we make the greatest strides.

A similar opportunity is arising in the field of cognitive semantics. As Slobin (1996) eloquently stated, languages are not “neutral coding systems of an objective reality” (p. 91); that is, the difference between a child learning English and one learning Spanish does not merely boil down to identifying different words for the concept of dog. Rather, languages, particularly their relational terms like verbs and prepositions, extract and express information from events differently. For instance, the child raised in the English-speaking household must learn that their language places manner information (i.e., how entities move) within the verb and path (i.e., the trajectory along which entities travel) in a “satellite” prepositional phrase (e.g., “He’s running around the barn”; Pulverman, Golinkoff, Hirsh-Pasek, & Sootsman-Buresh, 2008). In contrast, the child raised in a Spanish-speaking household learns to favor path information in verbs, encoding manner in optional phrases, such as adverbs (e.g., “He’s circling the barn running”). Thus, children must learn to think for speaking, “packaging” information in events into the relational terms of their native tongue (Slobin, 1996; Tomasello, 1995).

The advent and wide acceptance of behavioral methodologies, primarily eye-gaze measures such as the Intermodal Preferential Looking Paradigm, allowed researchers to examine what linguists proposed to be the earliest foundations of semantic constructs (see Golinkoff, Ma, Song, & Hirsh-Pasek, 2013 for a review), resulting in vast gains in our understanding of how children master the daunting task of acquiring relational language (e.g., Casasola et al., 2003; Gleitman & Papafragou, 2013; Golinkoff & Hirsh-Pasek, 2008; Lakusta, Wagner, O’Hearn, & Landau, 2007; Mandler, 2012; Pulverman et al., 2008). Research reveals that young infants attend to both native and non-native components of events, such as paths, manners, sources, and goals, and that experience with the ambient language heightens and dampens attention to these components as children learn to package this information in accordance with the biases of their native tongue (Göksun, Hirsh-Pasek, & Golinkoff, 2010; Golinkoff & Hirsh-Pasek, 2008).

Though this research occurred largely within the context of monolingual development, the principles discovered are yielding applications beyond the learning of a primary language. The ability to learn a second or even third or fourth language is becoming more important in an increasingly global world, yet relatively little research has applied our knowledge of monolingual development to potentially assuage the difficulties faced by second language learners (SLLs) in learning relational terms. In this regard, the study of SLLs represents an exciting and practical new frontier in which to examine the processes underlying the learning of relational language. Considering the degree to which these biases in event processing are entrenched provides insight into the obstacles faced by SLLs. Relatedly, it is of interest to understand how bilingual learners, be they children or adults, learn to package event components in language in different ways. An understanding of these obstacles and how they are overcome leads to the potential for more effective educational interventions, as the processes underlying first language learning are applied to ease the struggles of SLLs.

Much as looking measures provided access to the previously inaccessible minds of preverbal infants, the advent of non-invasive procedures such as electroencephalography (EEG), magnetoencephalography (MEG), and functional magnetic resonance imaging (fMRI) stands to play a significant role in these next stages of research on first and second language acquisition. While some research has examined cognitive semantics in adults, these child-friendly methodologies open the door for much needed research in infants and children. As was the case in the study of phonological development (Kuhl & Rivera-Gaxiola, 2008), integrating these technologies with established behavioral methodologies permits the examination of how changes in cognition are accompanied by changes in brain structure. In monolingual development, identifying the common neural signatures of learning across cultures, as well as the potentially different ways of processing and storing semantic information, illuminates how children learn from their linguistic environment. For second language learning, identifying how successful instructional methods reengage these areas of the brain could be pivotal to understanding their effectiveness. Through integrating neuroscience with the foundations of linguistics and behavioral psychology, cognitive semantics can follow in the footsteps of phonological development and progress towards a more unified theory of these complex processes.

In this article, we echo a call made fifteen years ago by Hirsh-Pasek, Golinkoff, & Hollich (1999) to embrace neuroscience as the next tool with which to examine the developing linguistic mind. To that end, we divide this article into three parts. First, we review the progress of linguistics and behavioral psychology in understanding how monolingual children learn to select information from events and express it in the relational terms of their native tongue. Second, we call attention to how the theories of monolingual language development can drive forward exciting new research in second language learning that promises to provide practical solutions for improving second language instruction. Finally, we discuss how increasing neurological research in the area of language acquisition holds great promise not only for deepening our knowledge of monolingual and bilingual semantic development, but for leading the charge into the practical questions of adult second language learning.

The Verb Learning Problem

Relational terms such as verbs and prepositions are fundamental components of language that convey dynamic and static relations between objects in events (e.g., “The man kicked the ball,” or “The apple was on the table”). Verbs, in particular, are the glue that connects predicates like kicked to arguments like man and ball. Yet, these “hard words” (Gleitman, Cassidy, Papafragou, Nappa, & Trueswell, 2005) are notoriously difficult to learn, even for first language learners in so-called “verb-friendly” languages, which allow a verb to be presented alone or in the salient final position in a sentence (Hirsh-Pasek & Golinkoff, 2006; Imai et al., 2008; Waxman et al., 2013). Relational terms present unique challenges, demanding that we map a discrete representational system (e.g., a verb) onto a continuous and dynamic representation of events that unfolds through space and time (Hespos, Grossman, & Saylor, 2010). Consider a child playing on a sliding board. He runs to the base, climbs the steps, sits down, descends the slide, and runs around to repeat the whole process again. Although we use the verb slide to describe this scene, it is not so clear where sliding begins and ends. Mapping between verb and referent is particularly difficult because actions are fluid and fleeting (Golinkoff, Jacquet, Hirsh-Pasek, & Nandakumar, 1996; Hirsh-Pasek & Golinkoff, 2006). Moreover, relational terms label only part of a motion event and languages differ in the parts that are highlighted. For instance, English speakers use the verb cross to depict an object moving from one side of a surface to another, while Japanese speakers use different verbs to depict crossing a bounded surface (e.g., wataru for crossing a street) and an unbounded surface (e.g., tooru for crossing a field; Muehleisen & Imai, 1997).

The critical problem for the learning of relational terms is how “children from an initially equivalent base, end up controlling often very differently structured languages” (Bowerman & Levinson, 2001, p. 10). Research suggests that children have little problem discriminating and categorizing semantically relevant constructs from events (e.g., Gentner & Bowerman, 2009; Göksun et al., 2011; Hirsh-Pasek & Golinkoff, 2006; Lakusta et al., 2007; Pruden, Göksun, Roseberry, Hirsh-Pasek, & Golinkoff, 2012; Pruden, Roseberry, Göksun, Hirsh-Pasek, & Golinkoff, 2013; Pulverman, Song, Pruden, Golinkoff, & Hirsh-Pasek, 2013). In fact, infants appear to be attuned to event components that will be realized in language, even if that language is not their own. Five-month-old English-reared babies, for example, detect categories like tight fit (e.g., ring on finger) and loose fit (e.g., ring on table) that are marked in Korean but not in English (Hespos & Spelke, 2004). Consistent with Mandler’s (2012) Perceptual Meaning Analysis, infants seem to attend to basic spatial components of events like links (i.e., contingent interactions), start paths (i.e., where motions begin), and contact (i.e., objects touching), only later “redescribing” them to form the concepts that become codified in their native tongue.

If children readily identify event components for language, then difficulties in learning relational language reside not in discriminating or categorizing the event components to be marked in language, but in the mapping or packaging problem: learning to assemble the semantic elements of events to align with the linguistic tendencies of a given language (Tomasello, 1995), as in the Japanese versus English method of encoding ‘crossing’ (Muehleisen & Imai, 1997).

Mastering how languages package this information is a challenge for young learners, but may pose an even greater obstacle for SLLs. Monolingual children need only solve the packaging problem once; bilinguals and adult SLLs confront the potential problem of learning two languages that might package events in different ways. The “slow and errorful learning of verbs and prepositions” (Gentner, 2006, p. 554) in SLLs suggests that there may be interference between the two systems of representation held by these learners (Lennon, 1996), but to date, relatively little empirical research focuses specifically on how these populations navigate the tension between two potentially different systems of packaging events for language.

In the following sections, we review the current state of knowledge concerning the acquisition of relational terms in childhood as has been achieved primarily through linguistic and behavioral research. We then demonstrate 1) how this research can help us better understand the difficulties SLLs face when encountering these terms in a second language and 2) how applying this knowledge to instructional practices can enhance the learning environment of SLLs.

Non-Linguistic Event Processing

Talmy (2000) created a universal dictionary of semantic components that languages express in their relational terms (see also Jackendoff, 1983; Langacker, 1987). These include the figure, or movable entity within the scene, and its relation to the ground, or stationary setting. Figures travel along paths (e.g., behind, over) and do so using particular manners of motion (e.g., running, crawling). Paths have sources, or starting points, and goals, or endpoints of motion. Events also often have causes, which bring action about. Finally, events contain spatial relations such as containment (e.g., putting things in a container) and support (e.g., putting things on a surface), among others. This list of event components may not be exhaustive. It has, however, proven a starting point for the behavioral study of the formation of the semantic categories of language. Each of these components is universally represented in the languages studied to date, yet each is expressed differently across languages. Because they are perceptually salient, they are also amenable to study with young children. These shared features have created a window for scientists to begin to explore the psychological reality of these constructs in language learning (Golinkoff & Hirsh-Pasek, 2008; Hirsh-Pasek & Golinkoff, 2006). The field has made great progress across a number of constructs reviewed below. Most of this research focuses on children reared in monolingual environments, with virtually no work on bilingual children. Only recently have researchers begun to explore the neurological correlates of these constructs and the implications of these findings for second language learning.

Containment and Support

Containment refers to a relation in which an object is fully or partially surrounded by a container (e.g., apple in a bowl), whereas a support relation consists of an object resting upon a surface (e.g., apple on a table). English utilizes the categories of in and on; Korean, however, labels containment and support based on the degree of fit (tight or loose) between objects. The verb kkita refers to a tight-fitting relation, collapsing across the English categories of put in and put on (Gentner & Bowerman, 2009).
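To make the cross-cutting nature of these categories concrete, the sketch below labels a few placement events under each system. The event list and the treatment of all loose-fit relations as a single Korean category are simplifying assumptions for illustration, not a claim about the full Korean verb system.

```python
# Illustrative only: the same placement events packaged by English (in/on, i.e.,
# containment vs. support) and by Korean degree of fit (kkita = tight fit).
# The event list and the lumping of loose-fit relations are simplifying assumptions.

events = [
    {"desc": "ring on finger",   "containment": False, "tight_fit": True},
    {"desc": "apple in bowl",    "containment": True,  "tight_fit": False},
    {"desc": "cassette in case", "containment": True,  "tight_fit": True},
    {"desc": "cup on table",     "containment": False, "tight_fit": False},
]

def english_label(event):
    # English splits by containment vs. support, ignoring fit.
    return "put in" if event["containment"] else "put on"

def korean_label(event):
    # Korean kkita picks out tight fit, cutting across the English split.
    return "kkita (tight fit)" if event["tight_fit"] else "loose-fit verb"

for e in events:
    print(f'{e["desc"]:16}  English: {english_label(e):7}  Korean: {korean_label(e)}')
```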

English-reared infants notice Korean degree-of-fit relations by 5 months of age (Hespos & Spelke, 2004). They also form categories of tight-fitting and loose-fitting relations by 9 months of age (McDonough, Choi, & Mandler, 2003). Categories of containment relations appear around 6 months of age across a variety of exemplars (Casasola, Cohen, & Chiarello, 2003), but support relations are not categorized until 14 months (Casasola, 2005).

Figure and Ground

The figure in an event is “a moving or conceptually moveable entity whose path, site, or orientation is conceived as a variable” (Talmy, 2000, p. 312). The ground is “a reference entity, one that has a stationary setting relative to a reference frame” (Talmy, 2000, p. 312). In the sentence “John is walking across the street,” John is the figure and the street is the ground. English uses a single category to refer to movement from one side of a surface to another (i.e., crossing). Japanese carves finer-grained distinctions based on the type of surface traversed (Muehleisen & Imai, 1997). The verb wataru (i.e., go across) implies two things: (a) there is both a starting point and a goal, and (b) the ground is a flat extended surface with a boundary. The typical grounds for wataru are railroads, roads, or bridges. When the ground does not contain a barrier between two sides (e.g., a tennis court or grassy field), the verb tooru (i.e., go through) is used (Göksun et al., 2011). Even though these verbs incorporate path information with the ground, the distinctions between the verbs are made based on the grounds.

Research shows that English-reared infants distinguish figures in dynamic events by 11 months of age (Göksun et al., 2011). English-reared infants also differentiate between non-native ground categories (e.g., a railroad vs. a grassy field) by 14 months of age (Göksun et al., 2011) and form categories of these relations by 14 months (Göksun, 2010).

Path and Manner

Path refers to the trajectory of a figure in an event (e.g., into, away from), whereas manner refers to the way in which an action is performed (e.g., running, bending). In English, manner information typically appears in the verb, while path information is typically encoded in the prepositional phrase (e.g., “John ran out of the house”). Spanish and Greek, however, typically encode path in the verb and depict manner via an optional adverbial phrase (e.g., “John exited the house running”), if at all (Pulverman et al., 2008).

English- and Spanish-reared 7-month-olds attend to changes in path and manner in dynamic events (Pulverman et al., 2013; Pulverman et al., 2008). When habituated to an animated starfish traveling on a specific path with respect to a ball (e.g., over the ball) and with a specific manner (e.g., twisting), infants renew their visual attention when there is a change in either the path (e.g., under the ball) or manner (e.g., jumping). Pruden and colleagues (2013) show that 10-month-olds also categorize paths when they are presented across different manners; however, it is not until 13 months of age that infants categorize a consistent manner performed over a variety of paths (Pruden et al., 2012).
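The looking-time logic of these habituation studies can be sketched in a few lines of analysis code. The looking times, habituation criterion, and recovery margin below are hypothetical stand-ins rather than the actual stimuli or scoring rules of Pulverman and colleagues; the sketch only illustrates how renewed looking to a changed event is taken as evidence of discrimination.

```python
# A minimal sketch of dishabituation logic: habituate infants to one path-manner
# combination, then test whether looking time recovers when path or manner changes.
# All numbers (seconds, thresholds) are hypothetical placeholders.

HABITUATION_CRITERION = 0.5   # looking falls to 50% of its initial level
RECOVERY_MARGIN = 1.2         # test looking must exceed baseline by 20%

def habituated(looks):
    """True once mean looking over the last 3 trials drops below the criterion
    relative to the first 3 trials."""
    if len(looks) < 6:
        return False
    first, recent = sum(looks[:3]) / 3, sum(looks[-3:]) / 3
    return recent < HABITUATION_CRITERION * first

def recovered(looks, test_look):
    """Infer discrimination if looking at the test event exceeds the
    end-of-habituation baseline by the recovery margin."""
    baseline = sum(looks[-3:]) / 3
    return test_look > RECOVERY_MARGIN * baseline

# Hypothetical data: habituation to "starfish twisting OVER the ball".
habituation_looks = [18.0, 16.5, 15.0, 9.0, 7.5, 6.0]
test_events = {
    "path change (under + twisting)":  14.0,
    "manner change (over + jumping)":  13.5,
    "familiar event (over + twisting)": 7.0,
}

if habituated(habituation_looks):
    for label, look in test_events.items():
        status = "looking recovers" if recovered(habituation_looks, look) else "no recovery"
        print(f"{label}: {status}")
```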

Source and Goal

The source of motion is a location or reference object from which a figure moves, while the goal of motion is a location or reference object towards which the figure moves. Source and goal manifest themselves as both source paths (e.g., from, flee) and goal paths (e.g., to, approach; Lakusta & Landau, 2012). Source and goal provide an interesting exception to the pattern observed with previous constructs. While they are still perceptually salient and encoded in all languages studied to date, these components appear to be packaged in similar ways. Specifically, languages encode goals more often than sources for the movements of both intentional and inanimate figures (Lakusta & Landau, 2012; Regier & Zheng, 2007). One language-specific aspect is that some languages, like Japanese, differentiate source and goal with specific morphemes (e.g., ni and kara) attached to the noun (Tsujimura, 1996).

Research shows that infants discriminate between goals in motion events by 12 months of age (Lakusta et al., 2007; but see Woodward, 1998). Twelve-month-olds also identify source changes in events; however, they only do so when sources are made extremely salient (e.g., decorated with sparkles; Lakusta et al., 2007). By 14 months, infants form categories of goals across different objects, spatial relations, and agents (Lakusta & Carey, 2008). However, infants of the same age do not form categories of sources across such variation.

Basic Causality

“The basic causative situation consists of three main components: a simple event (that is, one that would otherwise be considered autonomous), something that immediately causes the event, and the causal relation between the two” (Talmy, 2000, p.480). In the sentence, “John broke the vase,” the transitive verb broke encodes the causal relation between the simple event (i.e., the vase breaking) and the event that immediately caused it (i.e., John’s action). While expressed in languages across the world (Wolff, Klettke, Ventura, & Song, 2005), languages differ in their categories of basic causality (Wolff, Jeon, Klettke, & Yi, 2010). Languages with fixed word orders, such as English, allow a wide range of agents in the subject position, including both intentional beings and tools (e.g., “The person cut the bread” or “The knife cut the bread”). On the other hand, languages with variable word order, such as Korean, restrict their category of what can be a causal agent to exclude tools (e.g., a knife) as viable agents.

Infants discriminate causal from non-causal relations in motion events. Habituated to Michotte’s (1963) traditional causal collision event, infants as young as 6 months of age will not increase attention to another causal event; however, they recover attention when presented with a non-causal event, such as when an effect occurs even though the agent (i.e., the initiator of action) does not contact the patient (i.e., the recipient of action), or when the patient does not continue along the trajectory initiated by the agent (Cohen & Oakes, 1993; Göksun et al., 2010; Leslie & Keeble, 1987; Muentener & Carey, 2010; see also Rakison & Krogh, 2012). By 8 months of age, infants also differentiate between causal and non-causal events in which the effect is a change of state (e.g., breaking; Muentener & Carey, 2010). Finally, infants as young as 7 months of age expect caused motion to be initiated by animate agents, as opposed to inanimate objects, mirroring what is seen in some languages (Saxe, Tzelnic, & Carey, 2007). No study to our knowledge has examined infants’ categorization of basic causality across these varying types of causal events (e.g., motion versus change of state effects, inanimate versus animate figures).
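A toy classifier makes the causal/non-causal contrast used in these habituation studies concrete: a Michotte-style launching display reads as causal only when the agent contacts the patient, the patient moves off without a pause, and it continues along the agent's trajectory. The numeric thresholds are arbitrary assumptions for illustration, not values from the cited studies.

```python
# Illustrative classifier for Michotte-style launching displays. A display is
# treated as causal only if there is contact, an immediate response, and
# continuation of the agent's trajectory; thresholds are arbitrary assumptions.

def looks_causal(gap_px: float, delay_ms: float, trajectory_continues: bool) -> bool:
    contact = gap_px == 0          # agent touches the patient
    immediate = delay_ms < 100     # patient moves without a noticeable pause
    return contact and immediate and trajectory_continues

print(looks_causal(gap_px=0,  delay_ms=0,   trajectory_continues=True))   # causal launch
print(looks_causal(gap_px=40, delay_ms=0,   trajectory_continues=True))   # spatial gap -> non-causal
print(looks_causal(gap_px=0,  delay_ms=600, trajectory_continues=True))   # delay -> non-causal
print(looks_causal(gap_px=0,  delay_ms=0,   trajectory_continues=False))  # deflected patient -> non-causal
```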

Force Dynamics

Force dynamics is a semantic category that encodes “how entities interact with respect to force” (Talmy, 2000, p. 409). Related to basic causality, force dynamics moves to a more complex level of representation, encoding how a patient with a force or intention towards a goal is affected by its interaction with an agent that has its own force or intention. Force dynamics encodes properties of events that make up the semantic categories of cause, prevent, enable, and despite (Note: the category of despite is often neglected in this literature). In psychological research, these categories are realized as unique patterns of forces in events, defined by three components: (1) the patient’s tendency (i.e., motion or intention) with respect to a goal, (2) the concordance between the tendency of the patient and the tendency of the agent acting upon it, and (3) the patient’s success or failure in reaching the goal (Wolff, 2007). In the sentence, “John prevented the boy from entering the street,” the verb prevent encodes a relation between the agent (i.e., John), the patient (i.e., the boy), and the goal (i.e., the street) such that the boy had a tendency towards the street, which was not concordant with John’s intention to keep him away from the street, resulting in the boy failing to reach the street. Research suggests that these categories are present in languages across the world, such as English, Spanish, German, Russian, and Arabic (Wolff et al., 2005).
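Because the analysis reduces to three components, it can be summarized as a small lookup table. The boolean encoding below is an illustrative rendering of the pattern described in the text and reported by Wolff (2007), not an implementation drawn from that work.

```python
# Sketch of the three-component force-dynamics analysis: the patient's tendency
# toward the goal, the concordance between patient and agent, and whether the
# goal is reached. The boolean encoding is an illustrative assumption.

from collections import namedtuple

ForceEvent = namedtuple("ForceEvent", ["tendency", "concordance", "goal_reached"])

CATEGORIES = {
    (False, False, True):  "cause",    # patient had no tendency toward the goal, yet reaches it
    (True,  True,  True):  "enable",   # agent acts with the patient's tendency
    (True,  False, False): "prevent",  # agent opposes the tendency; goal not reached
    (True,  False, True):  "despite",  # goal reached in spite of the agent's opposition
}

def classify(event):
    return CATEGORIES.get((event.tendency, event.concordance, event.goal_reached),
                          "uncategorized")

# "John prevented the boy from entering the street": the boy tended toward the
# street, John's intention was discordant, and the boy did not reach it.
print(classify(ForceEvent(tendency=True, concordance=False, goal_reached=False)))  # prevent
```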

The languages studied so far have categories of force dynamics, but differ in how they differentiate them. Wolff and Ventura (2009) provide evidence that Russian’s semantic category of enable differs from that of English. According to their tendency hypothesis, languages of varied word order, such as Russian, place stricter requirements on agents of action, requiring them to be animate and intentional. The emphasis on intentionality highlights forces of internal energy, leading Russian speakers to attribute intention to patients based merely on the direction they face. English speakers, who are more liberal in what they allow in their category of causal agents, also recognize the external force of friction, and consequently require a visible effort to overcome this external force before attributing intention to a patient. These differences in focus yield different interpretations of the same event. Consider a man sitting in a wheelchair facing a door; a nurse comes up behind him and rolls him through it. Russian speakers are more likely to interpret the event as helping, reading intent to go through the door from the direction the man faced, whereas English speakers interpret the event as causing, given the lack of visible effort by the man to approach the door. Importantly, these differences are not due to differences in the defining criteria of force dynamics, but rather are a consequence of how each language encodes the intent of the figures in a causal relation.

Adults identify these categories according to the predictions of force dynamics theory for both physical and social events (Wolff, 2007). However, little research has been done developmentally. Beginning research suggests that children may not be able to integrate the forces of two or more figures until around age 5 (Göksun, George, Hirsh-Pasek, & Golinkoff, 2013). If representations of these events rely on notions of force, as suggested by work with adults (Wolff, 2007), then complex causal interactions may not be fully understood until after this age. Other research suggests that children may form related categories earlier via other means such as Mandler’s (2012) Perceptual Meaning Analysis or Hamlin, Wynn, and Bloom’s (2007) social valuation.

Taken together, this body of research demonstrates that Talmy’s list of semantic constructs is an important framework for the study of event processing. Infants are keenly aware of a common set of event components that will become the semantic categories encoded by verbs and relational terms (see Table 1 for a summary). Research also suggests that not all constructs emerge at the same time (e.g., figure before ground, path before manner). This might be related to which semantic distinctions are expressed more universally than others (Gentner & Bowerman, 2009). For example, verbs that incorporate ground information are rare across languages. Thus, it might not be surprising that this distinction is learned later than others. Differences in developmental trajectories might also reflect the perceptual saliency of some components over others (Regier & Zheng, 2007). For example, goals of actions occur more recently in an event and are arguably more salient than sources of action, which may lead to a later formation of source categories. In their Typological Prevalence Hypothesis, Gentner and Bowerman (2009) suggest that both language and perception may play a role in this process, though the exact role of each is still hotly debated.

Table 1.

Processing of semantic components in infancy.

Semantic Category | Languages Studied | Discrimination | Categorization
Containment/Support | Dutch, English, Korean | Tight/Loose-fit: 5 mos; Containment: Unknown; Support: Unknown | Tight/Loose-fit: 9 mos; Containment: 6 mos; Support: 14 mos
Figure/Ground | English, Japanese | Figures: 11 mos; Grounds: 14 mos | Figures: Unknown; Grounds: 14 mos
Path/Manner | English, Greek, Japanese, Mandarin, Spanish | Path: 7 mos; Manner: 7 mos | Path: 10 mos; Manner: 13 mos
Source/Goal | Arabic, Chinese, English | Goal: 12 mos; Source: 12 mos* | Goal: 14 mos; Source: Unknown
Basic Causality | Arabic, Chinese, English, German, Korean, Russian, Spanish | 6 mos | Unknown
Force Dynamics | Arabic, English, German, Russian, Spanish | Unknown | Unknown
* Note: Discrimination of sources required extra saliency cues at 12 months of age.

How Language Highlights Semantic Relations

On the assumption that children start out from a common base, how does the ambient language influence children to adopt language-specific encodings of pre-linguistic concepts? While much of the relevant research comes from first language learning, it is worth considering what lessons might be learned for understanding second language learning.

Vocabulary Development Relates to Children’s Semantic Categories

If learning to package semantic categories in language-specific ways is essential to mapping relational terms, we would expect to see a relation between children’s vocabulary level and their attention to non-native contrasts in events. Children with larger vocabularies will likely have narrowed their perceptual space to attend only to native contrasts. For instance, Korean children with larger vocabularies should maintain degree-of-fit relations while decreasing sensitivity to the distinction between in and on. Children with smaller vocabularies, however, are likely still sensitive to a variety of semantic categories.

Research supports this hypothesis. Choi (2006) found that English-speaking 29-month-olds with larger vocabularies were less likely than their lower-vocabulary counterparts to distinguish the degree-of-fit relations encoded in Korean. Similarly, English-speaking children who knew the word in were less likely to encode degree-of-fit relations than those who did not yet know the word in. Korean children maintained these distinctions regardless of vocabulary level.

Research on figure and ground offers another example. Göksun and colleagues (2011) show that at 19 months of age, Japanese infants continue to attend to the distinction between wataru and tooru, whereas English infants, especially those with larger vocabularies, now ignore it (Göksun, 2010). Taken together, these studies suggest that the accumulation of vocabulary in the native language plays a pivotal role in heightening infants’ attention to native distinctions and dampening attention to nonnative distinctions.

Making the Switch to Native Event Encodings

When might children’s language experience turn their attention to language-specific interpretations of events? Research suggests that children shift towards the biases of their native tongue over the first 3 years of life. For instance, presented with a starfish character performing a novel manner along a novel path accompanied by a novel verb, 2.5-year-old English-, Spanish-, and Japanese-speaking children applied the new verb to the path of the action rather than to its manner (Maguire et al., 2010). By 3 years of age, language-specific interpretations emerged: while English speakers assumed the new verb now applied to the manner of motion, Spanish speakers were less likely to make this assumption. The switch is mirrored in production with other semantic components. Children’s speech utilizes native containment and support relations around age 2 (e.g., Choi, 2006), and begins to reflect native path and manner biases by age 3 (e.g., Allen et al., 2007).

What remains to be seen is how these processes operate for children speaking more than one language or for those learning a second. As with phonological development (e.g., Eimas et al., 1987; Werker & Tees, 1984), monolingual children appear to narrow the range of event components they attend to as a result of exposure to their ambient language (see Göksun et al., 2010 for a detailed discussion of the parallels). Furthermore, just as children learning two languages at the same time do not narrow their phoneme repertoire as rapidly as children exposed to one language (Werker & Byers-Heinlein, 2008), bilingual children may also remain sensitive to a wider array of event components. Perhaps knowing two languages that encode different semantic components would even allow bilingual children to be more open than monolingual children to learning yet another configuration of semantic patterns in a new language.

For children or adults who have already mastered the configuration of semantic elements in one language, learning another language that relies on different patterns should be difficult. Importantly, neither children nor adults entirely lose their ability to access these categories in non-linguistic events (Choi & Hattrup, 2012; Papafragou, Hulbert, & Trueswell, 2008; see also Hespos & Spelke, 2004; McDonough et al., 2003). Yet, unlike monolinguals, SLLs might already see events through the particular “lens” of their first language, which may interfere with their ability to identify how a second language may relate words to events in a new way. Perhaps, for example, implicitly knowing that manner is generally expressed in verbs in the first language might heighten attention to manner and impede a learner’s attention to the path aspect of the event. Through connecting what we have learned about the development of relational language in monolinguals to the study of second language learning, we can better illuminate this potential obstacle for SLLs.

What Have We Learned About Relational Language?

The literature on verb learning suggests that infants enter the world prepared to carve events into a universal set of categories that are the basis for later language. Regardless of their language community, the research so far suggests that infants detect the same array of non-linguistic components of events, including fine-grained distinctions that are not encoded in their language (Göksun et al., 2010). The way in which infants parse the world prepares them to learn any language. Therefore, the challenge in learning the relational terms of a second language cannot be blamed on an inability to process non-native categories. Research with adults supports this conclusion, showing that while adults are attuned to native semantic categories when processing events for language, they may retain, or readily re-acquire, non-native distinctions when processing non-linguistic events (Choi & Hattrup, 2012; Papafragou et al., 2008, but see Hespos & Spelke, 2004; McDonough et al., 2003).

More critically, the research suggests that as infants are exposed to their native language, they learn to package already established nonlinguistic constructs according to the demands of the grammar of their language. Tracking statistical regularities in how their native language encodes events probably allows the child to “infer how new words and sentences will relate to new objects and events” (Li, Abarbanell, Gleitman, & Papafragou, 2011, p. 51). With enough data, infants note distinctions made in their native tongue, effectively learning how to think for speaking (Slobin, 1996).

We suggest that thinking for speaking occurs through a process of semantic reorganization, in which attention to semantic categories is heightened or dampened to meet the demands of a particular language. Over the first year and a half of life, infants notice a common set of foundational components of events regardless of the language they are learning. Then, influenced by distinctions encoded in the native language, infants appear to focus on a subset of these categories that are relevant to their native tongue. Importantly, we are not arguing in favor of Whorf’s linguistic relativity, in which language itself determines our categories of events (Whorf, 1956). Language, in this case, has the function of orienting infants’ attention to some relations in events over others. Through this process, infants develop new perspectives in their interpretations of event categories, effectively trading spaces as they develop (Göksun et al., 2010). Indeed, preliminary findings suggest that infants’ ability to categorize the path and manner components of events predicts their later language ability (Roseberry et al., 2009).

The result of semantic reorganization is the creation of entrenched lexicalization biases, or strategies for mapping word to world, that are likely at the heart of the struggles that SLLs experience. Learning a new language requires that SLLs not only acquire a new lexicon, but that they begin to notice how the relational terms in the second language map onto events. Given that this new mapping is likely, though not certain, to differ from how their first language encodes events, the SLL will need to uncover the way the new language operates, perhaps having to resurrect semantic distinctions typically ignored in the first language. This realization has spurred an exciting new line of research as psychologists have sought to understand how this challenge is manifested in second language learning (see Table 2 for an overview of the literature covered in this article).

Table 2.

Index of reviewed articles concerning the processing of event components that underlie relational terms.

Research Area | Overviews | Containment/Support | Path/Manner
Linguistic Reviews | Jackendoff (1983); Langacker (1987); Slobin (1996); Talmy (2000); Tsujimura (1996) | - | -
Behavioral Research | Bowerman & Levinson (2001); Gentner (2006); Gentner & Bowerman (2009); Gleitman et al. (2005); Gleitman & Papafragou (2013); Göksun, Hirsh-Pasek, & Golinkoff (2010); Golinkoff & Hirsh-Pasek (2008); Golinkoff, Jacquet, Hirsh-Pasek, & Nandakumar (1996); Golinkoff, Ma, Song, & Hirsh-Pasek (2013); Hirsh-Pasek & Golinkoff (2006); Hirsh-Pasek, Golinkoff, & Hollich (1999); Imai et al. (2008); Landau & Jackendoff (1993); Lennon (1996); Mandler (2012); Slobin (1996); Tomasello (1995); Waxman et al. (2013); Whorf (1956) | Casasola (2005); Casasola, Cohen, & Chiarello (2003); Choi (2006); Choi & Hattrup (2012); Gentner & Bowerman (2009); Hespos & Spelke (2004); McDonough, Choi, & Mandler (2003) | Allen et al. (2007); Maguire et al. (2010); Papafragou, Hulbert, & Trueswell (2008); Pruden et al. (2012); Pruden et al. (2013); Pulverman, Golinkoff, Hirsh-Pasek, & Sootsman-Buresh (2008); Pulverman et al. (2013); Roseberry et al. (2009)
Cognitive Neuroscience Research | Amorapanth, Widick, & Chatterjee (2009); Chatterjee (2008); Goodale & Milner (1992); Kemmerer (2006); Landau & Jackendoff (1993); Pakulak & Neville (2011); Ungerleider & Mishkin (1982) | Damasio et al. (2001); Tranel & Kemmerer (2004) | Wu, Morganti, & Chatterjee (2008)
SLL Research | Krashen (1981); White, Spada, Lightbown, & Ranta (1991) | - | Cadierno (2008); Han & Cadierno (2010); Havasi & Snedeker (2004); Hohenstein, Eisenberg, & Naigles (2006); Inagaki (2002); Negueruela, Lantolf, Jordan, & Gelabert (2004); Song et al. (2014)

Lexicalization Biases and Second Language Learning

How might the challenges faced by SLLs be informed by the research on monolingual language development? Appreciating the way in which semantic categories are originally formed in infants and toddlers, and how language links up with the processing of these categories, will lead to a better understanding of the stumbling blocks SLLs may face. Even in learning a primary language, the formation of a single language-specific lexicalization bias takes time, as children learn to package semantic categories in a manner that supports understanding of their native language. When a second language is introduced, people need to juggle two possibly conflicting lexicalization biases. For young bilinguals, the biases must be teased apart or intermixed in a functional way. Adult SLLs, on the other hand, are already in possession of an established lexical and grammatical base, which they must then realign to learn the second language. While both early bilinguals and adult SLLs confront the problems of multiple ways of packaging events, we choose to focus on adult SLLs to demonstrate 1) how already established lexicalization biases can be a barrier to verb learning, and 2) how education might better approach second language instruction.

We posit that in order to learn relational terms in a new language, SLLs must overcome lexicalization biases established in infancy to learn how to package events in a different way for the second language. For instance, native English speakers, who predominantly use manner verbs, may find it difficult to learn the lexicalization patterns of Spanish, which requires that verbs differentially encode path information. Research suggests this is a daunting task. SLLs rarely approach native competency even with years of instruction or exposure to the second language (Bley-Vroman, 1990; Johnson & Newport, 1989). Moreover, most research focuses on the challenges of grammar (e.g., Johnson & Newport, 1989), vocabulary (e.g., Gass, 1988), and accent (e.g., Flege, Munro, & MacKay, 1995), leaving open the question of how taxing it is to overcome these often firmly ingrained biases in the learning of verbs and relational terms—the cornerstones of language—and whether such biases may underlie struggles in other areas, particularly grammar.

The little evidence that exists suggests there is plasticity in these lexicalization biases. In a study by Havasi and Snedeker (2004), English-speaking adults were taught nonsense verbs in English consistent with the path bias seen in Spanish. Participants heard a novel label over five video clips that maintained a single path, while varying manner. Importantly, participants were not only presented with a test of verb learning after each block of videos, but prior to them as well. The inclusion of a pretest was designed to assess changes in English speakers’ biases as the experiment progressed. For the pretest and posttest, participants heard a label applied to a single event in which a character performed a novel manner and a novel path. They then saw two new events presented together, with one maintaining path and the other maintaining manner. As expected, at the beginning of the experiment the English-speaking adults strongly preferred applying the novel verb to the consistent manner. However, as the experiment progressed, the participants gradually shifted towards a path interpretation for novel verbs, suggesting that these ingrained biases can be changed with experience. This plasticity is underscored by studies that examined motion event expressions and lexicalization biases in individuals with multiple years of second language instruction (e.g., Cadierno, 2008; Hohenstein, Eisenberg, & Naigles, 2006; Inagaki, 2002). For example, Hohenstein and colleagues (2006) found that even college students who did not begin second language instruction until after puberty demonstrated lexicalization patterns similar to those of monolinguals in both their first and second language, showing that these biases can be altered with time. Furthermore, they also used more non-native lexicalization patterns (e.g., path verbs in English) in both languages, suggesting bidirectional influences of L1 and L2 in English and Spanish bilinguals’ use of motion verbs.
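The structure of such a training procedure can be outlined schematically, as below. The verb, clip descriptions, and forced-choice scoring are hypothetical placeholders rather than the actual Havasi and Snedeker materials; the sketch is meant only to show how exposure blocks that hold path constant while varying manner could shift an initial manner bias.

```python
# Schematic outline of a path-bias training block: a novel verb is heard over
# five clips sharing a path but differing in manner, with a forced-choice test
# before and after the block. All stimuli and the "bias" parameter are hypothetical.

from dataclasses import dataclass
from typing import List

@dataclass
class Clip:
    path: str     # e.g., "around"
    manner: str   # e.g., "hopping"

@dataclass
class TrainingBlock:
    verb: str
    clips: List[Clip]   # five clips: same path, varied manner

def forced_choice(path_match: Clip, manner_match: Clip, bias: str) -> Clip:
    """Stand-in for the participant's extension of the verb: choose the event
    that preserves whichever dimension the learner currently favors."""
    return path_match if bias == "path" else manner_match

block = TrainingBlock(
    verb="blicking",
    clips=[Clip("around", m) for m in ["hopping", "spinning", "sliding", "marching", "bouncing"]],
)

# Before training, an English speaker is expected to pick the manner-consistent
# event; with repeated blocks, choices should shift toward the path-consistent one.
pre = forced_choice(Clip("around", "crawling"), Clip("over", "hopping"), bias="manner")
post = forced_choice(Clip("around", "crawling"), Clip("over", "hopping"), bias="path")
print("pretest choice:", pre, "| posttest choice:", post)
```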

Unfortunately, methods utilized in classrooms focus primarily on vocabulary and grammatical structure, leaving learners to uncover on their own the different ways their new language expresses information about the same exact events. Song, Pulverman, Infiesta, Golinkoff, and Hirsh-Pasek (2014) studied whether traditional college language classrooms are effective for highlighting the difference in these lexicalization biases over a longer period of instruction. Using English-speaking college undergraduates enrolled in Spanish courses, they examined whether years of instruction or experience abroad was sufficient for these students to adopt the new lexicalization bias of their second language. They asked beginning, intermediate, and advanced students of Spanish language education to describe in writing four pages of a wordless picture book in Spanish after “reading” the whole book. Results showed that intermediate students, who had approximately five previous Spanish courses, were still significantly different from native speakers in their lexicalization of these motion events. Only the advanced students, who had approximately seven previous semester-long courses, showed lexicalization biases that were not significantly different from native speakers. Furthermore, studying abroad had an independent effect on lexicalization, with students who studied abroad showing lexicalization patterns more similar to those of native speakers.

What does learning look like in these environments and why does it take so long to see substantial improvements? Research suggests that many students form compensation strategies, which mimic success but still rely on characteristics of the student’s native tongue. For instance, Spanish rarely uses verbs such as streamed that conflate manner and path. Native Spanish speakers learning English show a bias to maintain this separation (e.g., passed through like a river), avoiding conflations utilized by English speakers (Negueruela, Lantolf, Jordan, & Gelabert, 2004). While these strategies yield acceptable descriptions, students fail to grasp the complexities of their second language’s lexicalization patterns.

Similar findings come from studies examining motion expressions in intermediate SLLs. When learning Danish, a satellite-framed language that favors manner information in the verb and path in satellite prepositional phrases, native speakers of other satellite-framed languages, such as German and Russian, showed greater proficiency than native speakers of Spanish, a verb-framed language that emphasizes path information in the verb (Han & Cadierno, 2010). Specifically, German and Russian SLLs were not only better able to produce the satellite-framed constructions typical of Danish, but also produced a wider variety of manner-of-motion verbs than Spanish SLLs. Native Spanish speakers used more non-manner verbs with satellite path phrases (e.g., “The man comes into the house”), showing difficulty in switching their lexicalization biases to favor manner information (see also Cadierno, 2008). Similarly, studies of gesture have discovered that native English speakers learning Spanish produce a high rate of manner gestures to compensate for the lack of variety of manner verbs in Spanish relative to English (Negueruela et al., 2004). Given that SLLs rely on different neural mechanisms for syntactic processing based on age of acquisition and proficiency in the second language (Pakulak & Neville, 2011), the neural analyses of processing and learning verbs in a second language will be critical to future examinations of these learning patterns.

Why is the training done by Havasi and Snedeker (2004) so effective in such a limited time, while years of classroom instruction yield only minor improvements (Song et al., 2014)? The key may be offering contrasts between the native and new language. By providing pedagogical experiences that explicitly highlight the differences between languages, instruction can aid the learner in recognizing their current biases and how the new language may differ. In the absence of such experience, students are left to discover such patterns on their own, a task in which they are unlikely to succeed. With this knowledge, how might we help students overcome difficulties with the learning of relational terms?

One possible answer is to explicitly teach the event encoding patterns of the second language, emphasizing contrasts between the native and new language. Rather than focusing solely on vocabulary and grammatical structure, language educators might clearly highlight the lexicalization biases in the new language and contrast them with those in the students’ native tongue. Such a strategy has been successful in the past, with increased grammatical competence and quicker learning found among adult SLLs who were explicitly instructed on the rules underlying a second language (White, Spada, Lightbown, & Ranta, 1991) as opposed to those who had to discover the rules on their own (Krashen, 1981). Furthermore, comparison is a useful tool that may encourage students to examine deeper commonalities and differences between languages (e.g., Jameson & Gentner, 2003). This type of pedagogy might aid in easing what is historically a difficult aspect of mastering a new language—the learning of relational terms.

The Role of Cognitive Neuroscience

With the field of cognitive semantics established in both linguistics and behavioral psychology, there needs to be an increased focus on how the integration of neuroscience can considerably enhance the literature on how monolinguals, bilinguals, and adult SLLs learn relational terms. Improvements in noninvasive neuroscience techniques, such as EEG, MEG, and fMRI, open the door for the bridging of neuroscientific and behavioral methods in the study of language learning across development. Studies using neuroimaging techniques have already proven invaluable in corroborating and expanding on behavioral research in the areas of speech and language processing with infants and children (e.g., Conboy, Rivera-Gaxiola, Silva-Pereyra, & Kuhl, 2008; Dehaene-Lambertz, Hertz-Pannier, & Dubois, 2006; Kuhl & Rivera-Gaxiola, 2008). By extending this approach into the study of cognitive semantics, we stand to make similar advancements in our understanding of how children and adults learn to think for speaking.

As in the case of phonological development, the integration of neuroscience into the study of cognitive semantics in children is pivotal to identifying the neural signatures for learning. Though languages differ in the semantic components that are highlighted, there are likely common attentional processes that underlie children’s ability to adapt to any of the semantic spaces they encounter. Understanding the neural correlates of semantic (re)organization in the developing brain will not only add to our understanding of typical development, but also help us examine atypical development such as autism or prenatal brain injury.

A second issue surrounds how the lexicalization biases that result from these learning processes are manifested in the architecture of the brain. Linguistic and behavioral studies have identified the basic building blocks of relational language, yet understanding how these semantic components are stored in the brain has only recently become a focus of research (e.g., Amorapanth et al., 2009; Chatterjee, 2008; Damasio et al., 2001; Kemmerer, 2006; Wu et al., 2008). Though not motivated by questions of language learning, we can garner some early clues from work on visual processing (Ungerleider & Mishkin, 1982). Visual processing is posited to segregate into two pathways: the ventral (“what”) stream and the dorsal (“where”) stream. The ventral stream processes information about object properties, such as the color, shape, or size of an object. The dorsal stream processes spatial information such as the location and motion of an object. Based on this hypothesis, Landau and Jackendoff (1993) suggested that prepositions might be neurally instantiated within the dorsal stream, particularly the parietal cortex. Initial studies examined relational language, as represented by containment and support events, in brain-damaged patients. Together with neuroimaging evidence, the results indicate that the left supramarginal gyrus, the left posterior middle and left inferior frontal gyri, and the left superior temporal gyrus are indeed related to processing prepositions in English (Amorapanth et al., 2009; Damasio et al., 2001; Tranel & Kemmerer, 2004).

Other neuroimaging studies are beginning to explicitly examine semantic categories in dynamic events such as path and manner. One fMRI study tested the hypothesis that different neural networks are activated by path and manner of motion in nonlinguistic events (Wu et al., 2008). Inspired by research on infants’ processing of event components and using the same stimuli as Pulverman and colleagues (2008) and Pruden and colleagues (2012; Pruden et al., 2013), these researchers employed a one-back matching block design, in which the starfish moved with different manners and paths. In some blocks, participants were asked to attend to manner and made a key-press response to indicate whether the manner was the same as in the previous clip. The same was done for path. Results demonstrated that within regions sensitive to motion, more dorsal areas (i.e., bilateral parietal lobules and frontal areas) were preferentially activated in path conditions and more ventral areas (i.e., bilateral posterior inferior/middle temporal cortex) were preferentially activated in manner conditions. These findings confirm the segregation of manner and path processing in the brain following ‘what’ and ‘where’ pathways, respectively (see also Goodale & Milner, 1992).
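The behavioral side of this design, the one-back judgment, is simple to express. The clip sequence below is a hypothetical placeholder, not the Wu and colleagues stimuli; the point is that the same clips yield different response targets depending on whether participants attend to path or manner.

```python
# Minimal sketch of a one-back matching task over path and manner. In an
# "attend manner" block, respond whenever the current clip repeats the previous
# clip's manner; likewise for path blocks. Clips are hypothetical placeholders.

clips = [
    {"path": "over",   "manner": "twisting"},
    {"path": "under",  "manner": "twisting"},   # manner repeats -> target in manner block
    {"path": "under",  "manner": "jumping"},    # path repeats   -> target in path block
    {"path": "around", "manner": "spinning"},
]

def one_back_targets(sequence, attend):
    """Indices at which the attended dimension matches the previous clip."""
    return [i for i in range(1, len(sequence))
            if sequence[i][attend] == sequence[i - 1][attend]]

print("respond on clips (manner block):", one_back_targets(clips, "manner"))  # [1]
print("respond on clips (path block):  ", one_back_targets(clips, "path"))    # [2]
```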

Unlike other semantic categories, the processing of basic causality has received more attention in studies of cognitive neuroscience, but again this research is conducted outside of cognitive semantics. Neuroimaging studies have identified increased activation in the right superior and middle frontal cortices and in the right inferior parietal lobule when perceiving causal events as compared to non-causal events (Fugelsang, Roser, Corballis, Gazzaniga, & Dunbar, 2005). In a different task, Straube and Chatterjee (2010) manipulated the time delay before contact between two objects and/or the angle of the second object after contact, and found greater sensitivity to angle manipulations than time manipulations in the right parietal lobe.

Though we can glean something from these studies with adults, there is no research to our knowledge that examines the neural correlates of nonlinguistic processing of event components in young children, a pivotal population for understanding the neural signatures of verb learning. One exception is the recent research on goal-directed actions, though this research is not concerned primarily with the issues of cognitive semantics (Reid, Csibra, Belsky, & Johnson, 2007; Reid et al., 2009). In these studies, 8-month-old infants are sensitive to interrupted goal-directed actions (e.g., the video clip freezing before completion of an action, such as a woman eating from a soup bowl), as suggested by increased gamma-band activity over left frontal regions (Reid et al., 2007). In another study, infants and adults were presented with a sequence of pictures. The first two pictures conveyed context and the final picture showed the completion of an action, either expected or unexpected. For example, in a trial depicting “eating,” an actor held the food in the first scene, opened his mouth and held the food closer to the mouth in the second scene, and in the final scene he either ate the food (expected completion) or placed the food near his ear (unexpected condition). Reid and colleagues (2009) hypothesized that an N400 response would be observed only to the final component of an unexpected action. Nine-month-old infants, but not 7-month-olds, produced an adult-like N400 component of the event-related potential in response to unexpected action goals in a chain of actions.

There is also a growing body of literature on the neural instantiations of object processing in infancy using various methods such as event-related potentials (e.g., de Haan & Nelson, 1999) and near infrared spectroscopy (NIRS) (e.g., Wilcox, Bortfeld, Armstrong, Woods, & Boas, 2009). These results can be loosely related to the processing of figures in behavioral studies of event processing, as objects are often figures in dynamic scenes.

Though these studies offer a glimpse of the potential of neuroscientific methods for addressing how semantic components are stored in the brain, there is a great need for more research that (a) focuses on infants and young children and (b) makes use of behavioral methodologies to link cognitive function more closely to neural data. Future research in neuroscience must focus more heavily on the developmental period from infancy to early childhood in order to pair changes in brain architecture with the behavioral changes observed in children’s attention to semantic categories. This population also opens the door to comparing typically developing children with children with language impairments, allowing identification of the specific brain areas responsible for struggles in verb learning. In addition, examining the neural correlates of semantic processing through studies that align more closely with behavioral methodologies, such as the Intermodal Preferential Looking Paradigm (Golinkoff et al., 2013) or the head-turn preference paradigm (Jusczyk & Aslin, 1995), is critical for providing convergent evidence about how children organize the semantic space of events.
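
As a point of reference for how such behavioral measures could be paired with neural data, the sketch below computes the kind of looking-time preference score that preferential-looking designs yield: the proportion of total looking directed to the screen that matches the audio. The trial values and variable names are assumptions for illustration, not the scoring procedure of Golinkoff et al. (2013) or Jusczyk and Aslin (1995).

```python
# Generic sketch of a preferential-looking dependent measure: proportion of
# looking time to the matching screen on each trial. Values are assumed.

def proportion_to_match(look_match_s: float, look_nonmatch_s: float) -> float:
    """Proportion of looking time to the matching screen on one trial."""
    total = look_match_s + look_nonmatch_s
    return look_match_s / total if total > 0 else float("nan")

# Example: seconds of looking to (matching, non-matching) events per trial;
# proportions above 0.5 indicate a preference for the matching screen.
trials = [(3.2, 1.8), (2.5, 2.5), (4.0, 1.0)]
scores = [proportion_to_match(m, n) for m, n in trials]
mean_score = sum(scores) / len(scores)
print(f"Mean proportion to match: {mean_score:.2f}")  # prints ~0.65 here
```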

Neuroscience will also play a significant role in translating these issues into the study of second language learning. Isolating how lexicalization biases are instantiated in the brain will offer a better understanding of the “roadblocks” to second language learning. In addition, examining the bilingual brain could provide insight into how these conflicting systems of representation can be balanced successfully, revealing the types of change necessary for second language learning. Such findings from neuroscientific research may help us answer why, rather than merely how, SLLs confront problems in learning relational terms. Finally, neuroscience might help isolate the differences between effective and ineffective instructional methods, both for SLLs and for children with language impairments. For instance, it may be that the most successful interventions activate the same pathways in the brain that play a pivotal role in semantic reorganization in monolinguals.

Across all of these areas of research, recent theories and empirical evidence also suggest the need to move beyond region-specific language areas and to understand the neural architecture of language as a dynamic network involving both specialized core regions and domain-general regions (Fedorenko & Thompson-Schill, 2014). It is necessary to uncover how this dynamic network develops as children learn semantic components. Moreover, as in research relating dynamic auditory temporal processing to language development (Tallal & Gaab, 2006), it will be important not only to localize specific regions but also to characterize the speed of processing, in order to reveal the neural mechanisms by which new semantic information is acquired.

While this is unlikely to be an exhaustive list of the potential contributions of neuroscience to first and second language learning, these open questions highlight the power of neuroscientific tools to deepen our theoretical and practical understanding of both domains.

Conclusion

Relational terms such as verbs are fundamental components of language, as they often carry the core meaning of a sentence. However, they also represent one of the most difficult challenges in first and second language learning. A better understanding of how infants move from constructing apparently universal event categories to language-specific encodings of events can broaden our knowledge not only of language learning in infancy but also of second language learning. We have argued that this growing body of work from linguistics and behavioral psychology offers the field of neuroscience a challenge: to engage in similar research that promises to illuminate and deepen our understanding of the complicated picture of verb learning.

Acknowledgments

This work was supported by National Institute of Child Health and Human Development (NICHD) Grant 5R01HD050199 and by National Science Foundation (NSF) Grant BCS0642529 to Kathy Hirsh-Pasek and Roberta Michnick Golinkoff. This research was also supported by the NSF-funded Spatial Intelligence and Learning Center (SBE-0541957, SBE-1041707). We thank everyone at the Temple University Infant and Child Lab and the University of Delaware Infant Language Project for their invaluable contributions at various stages of this article. Special thanks to Nora Newcombe and Peter Marshall for helpful comments about this work.

Footnotes

1. Note that the semantic notion of cause as defined by force dynamics is different from the semantic notion of basic causality, which is defined as one event bringing about a second. Cause here represents a specific subtype of basic causality.

Contributor Information

Nathan R. George, Temple University.

Tilbe Göksun, Koç University.

Kathy Hirsh-Pasek, Temple University.

Roberta Michnick Golinkoff, University of Delaware.

References

  1. Allen S, Özyürek A, Kita S, Brown A, Furman R, Ishizuka T, Fujii M. Language-specific and universal influences in children’s syntactic packaging of manner and path: A comparison of English, Japanese, and Turkish. Cognition. 2007;102:16–48. doi: 10.1016/j.cognition.2005.12.006.
  2. Amorapanth PX, Widick P, Chatterjee A. The neural basis for spatial relations. Journal of Cognitive Neuroscience. 2009;22:1739–1753. doi: 10.1162/jocn.2009.21322.
  3. Bley-Vroman R. The logical problem of foreign language learning. Linguistic Analysis. 1990;20(1–2):3–49.
  4. Bowerman M, Levinson L. Language acquisition and conceptual development. Cambridge, United Kingdom: Cambridge University Press; 2001.
  5. Cadierno T. Learning to talk about motion in a foreign language. In: Robinson P, Ellis NC, editors. Handbook of cognitive linguistics and second language acquisition. New York, NY: Routledge; 2008. pp. 239–275.
  6. Casasola M. When less is more: How infants learn to form an abstract categorical representation of support. Child Development. 2005;76:279–290. doi: 10.1111/j.1467-8624.2005.00844.x.
  7. Casasola M, Cohen LB, Chiarello E. Six-month-old infants’ categorization of containment spatial relations. Child Development. 2003;74:679–693. doi: 10.1111/1467-8624.00562.
  8. Chatterjee A. The neural organization of spatial thought and language. Seminars in Speech and Language. 2008;29:226–238. doi: 10.1055/s-0028-1082886.
  9. Choi S. Influence of language-specific input on spatial cognition: Categories of containment. First Language. 2006;26:207–232.
  10. Choi S, Hattrup K. Relative contribution of perception/cognition and language on spatial categorization. Cognitive Science. 2012;36:102–129. doi: 10.1111/j.1551-6709.2011.01201.x.
  11. Cohen LB, Oakes LM. How infants perceive a simple causal event. Developmental Psychology. 1993;29:421–433.
  12. Conboy BT, Rivera-Gaxiola M, Silva-Pereyra J, Kuhl PK. Event-related potential studies of early language processing at the phoneme, word, and sentence levels. In: Friederici AD, Thierry G, editors. Trends in Language Acquisition Research. Vol. 5. Amsterdam, the Netherlands: John Benjamins Publishing; 2008. pp. 23–64.
  13. Damasio H, Grabowski TJ, Tranel D, Ponto LL, Hichwa RD, Damasio AR. Neural correlates of naming actions and of naming spatial relations. NeuroImage. 2001;13:1053–1064. doi: 10.1006/nimg.2001.0775.
  14. de Haan M, Nelson CA. Brain activity differentiates face and object processing in 6-month-old infants. Developmental Psychology. 1999;34:1114–1121. doi: 10.1037//0012-1649.35.4.1113.
  15. Dehaene-Lambertz G, Hertz-Pannier L, Dubois J. Nature and nurture in language acquisition: anatomical and functional brain-imaging studies in infants. Trends in Neurosciences. 2006;29:367–373. doi: 10.1016/j.tins.2006.05.011.
  16. Eimas PD, Miller JL, Jusczyk PW. On infant speech perception and the acquisition of language. In: Harnad S, editor. Categorical perception: The groundwork of cognition. Cambridge, United Kingdom: Cambridge University Press; 1987. pp. 161–198.
  17. Fedorenko E, Thompson-Schill SL. Reworking the language network. Trends in Cognitive Sciences. 2014;18(3):120–126. doi: 10.1016/j.tics.2013.12.006.
  18. Flege JE, Munro MJ, MacKay IR. Factors affecting strength of perceived foreign accent in a second language. Journal of the Acoustical Society of America. 1995;97:3125–3134. doi: 10.1121/1.413041.
  19. Fugelsang JA, Roser ME, Corballis PM, Gazzaniga MS, Dunbar KN. Brain mechanisms underlying perceptual causality. Cognitive Brain Research. 2005;24:41–47. doi: 10.1016/j.cogbrainres.2004.12.001.
  20. Gass SM. Second language vocabulary acquisition. Annual Review of Applied Linguistics. 1988;9:92–106.
  21. Gentner D. Why verbs are hard to learn. In: Hirsh-Pasek K, Golinkoff RM, editors. Action meets word: How children learn verbs. Oxford University Press; 2006. pp. 544–564.
  22. Gentner D, Bowerman M. Why some spatial semantic categories are harder to learn than others: The typological prevalence hypothesis. In: Guo J, Lieven E, Ervin-Tripp S, Budwig N, Özcaliskan S, Nakamura K, editors. Crosslinguistic approaches to the psychology of language: Research in the tradition of Dan Isaac Slobin. New York, NY: Erlbaum; 2009. pp. 465–480.
  23. Gleitman LR, Cassidy K, Papafragou A, Nappa R, Trueswell JT. Hard words. Journal of Language Learning and Development. 2005;1:23–64.
  24. Gleitman LR, Papafragou A. Relations between language and thought. In: Reisberg D, editor. Handbook of Cognitive Psychology. New York: Oxford University Press; 2013.
  25. Göksun T. The ‘who’ and ‘where’ of events: Infants’ processing of figures and grounds in nonlinguistic events. Doctoral dissertation. 2010. Available from ProQuest Dissertations and Theses database. (UMI No. 3423210).
  26. Göksun T, George NR, Hirsh-Pasek K, Golinkoff RM. Forces and motion: How young children understand causal events. Child Development. 2013;84:1285–1295. doi: 10.1111/cdev.12035.
  27. Göksun T, Hirsh-Pasek K, Golinkoff RM. Trading spaces: Carving up events for learning language. Perspectives on Psychological Science. 2010;5:33–42. doi: 10.1177/1745691609356783.
  28. Göksun T, Hirsh-Pasek K, Golinkoff RM, Imai M, Konishi H, Okada H. Who is crossing where? Infants’ discrimination of figures and grounds in events. Cognition. 2011;121:176–195. doi: 10.1016/j.cognition.2011.07.002.
  29. Göksun T, Tynan E, Roseberry S, George NR, Ferrara K, Stahl A, Hirsh-Pasek K, Golinkoff RM. A new angle to infant causality. Poster presented at the International Conference on Infant Studies; Baltimore, MD; 2010 Mar.
  30. Golinkoff RM, Hirsh-Pasek K. How toddlers begin to learn verbs. Trends in Cognitive Science. 2008;12:397–403. doi: 10.1016/j.tics.2008.07.003.
  31. Golinkoff RM, Jacquet R, Hirsh-Pasek K, Nandakumar R. Lexical principles may underlie the learning of verbs. Child Development. 1996;67:3101–3119.
  32. Golinkoff RM, Ma W, Song L, Hirsh-Pasek K. Twenty-five years using the intermodal preferential looking paradigm to study language acquisition: What have we learned? Perspectives on Psychological Science. 2013;8(3):316–339. doi: 10.1177/1745691613484936.
  33. Goodale MA, Milner AD. Separate visual pathways for perception and action. Trends in Neurosciences. 1992;15:20–25. doi: 10.1016/0166-2236(92)90344-8.
  34. Hamlin JK, Wynn K, Bloom P. Social evaluation by preverbal infants. Nature. 2007;450:557–559. doi: 10.1038/nature06288.
  35. Han Z, Cadierno T, editors. Linguistic relativity in SLA: Thinking for speaking. Bristol, United Kingdom: Multilingual Matters; 2010.
  36. Havasi C, Snedeker J. The adaptability of language-specific verb lexicalization biases. In: Proceedings of the Twenty-sixth Annual Conference of the Cognitive Science Society. Mahwah, NJ: Lawrence Erlbaum Associates; 2004.
  37. Hespos SJ, Grossman SR, Saylor MM. Infants’ ability to parse continuous actions: Further evidence. Neural Networks. 2010;23:1026–1032. doi: 10.1016/j.neunet.2010.07.010.
  38. Hespos SJ, Spelke ES. Conceptual precursors to language. Nature. 2004;430:453–456. doi: 10.1038/nature02634.
  39. Hirsh-Pasek K, Golinkoff RM, editors. Action meets word: How children learn verbs. New York, NY: Oxford University Press; 2006.
  40. Hirsh-Pasek K, Golinkoff RM, Hollich G. Trends and transitions in language acquisition: Looking for the missing piece. Developmental Neuropsychology. 1999;16:139–162.
  41. Hohenstein J, Eisenberg A, Naigles L. Is he floating across or crossing afloat? Cross-influence of L1 and L2 in Spanish-English bilingual adults. Bilingualism: Language and Cognition. 2006;9:249–261.
  42. Imai M, Li L, Haryu E, Okada H, Hirsh-Pasek K, Golinkoff RM. Novel noun and verb learning in Chinese, English, and Japanese children: Universality and language specificity in novel noun and verb learning. Child Development. 2008;79:979–1000. doi: 10.1111/j.1467-8624.2008.01171.x.
  43. Inagaki S. Japanese learners’ acquisition of English manner-of-motion verbs with locational/directional PPs. Second Language Research. 2002;18:3–27.
  44. Jackendoff R. Semantics and cognition: Current studies in linguistics series, No. 8. Cambridge, MA: The MIT Press; 1983.
  45. Jameson J, Gentner D. Mundane comparisons can facilitate relational understanding. In: Proceedings of the Twenty-fifth Annual Meeting of the Cognitive Science Society. Austin, TX: Cognitive Science Society; 2003.
  46. Johnson JS, Newport EL. Critical period effects in second language learning: The influence of maturational state on the acquisition of English as a second language. Cognitive Psychology. 1989;21:60–99. doi: 10.1016/0010-0285(89)90003-0.
  47. Jusczyk PW, Aslin RN. Infants’ detection of the sound patterns of words in fluent speech. Cognitive Psychology. 1995;29:1–23. doi: 10.1006/cogp.1995.1010.
  48. Kemmerer D. The semantics of space: Integrating linguistic typology and cognitive neuroscience. Neuropsychologia. 2006;44:1607–1621. doi: 10.1016/j.neuropsychologia.2006.01.025.
  49. Krashen S. Second language acquisition and second language learning. Oxford, United Kingdom: Pergamon Press; 1981.
  50. Kuhl PK, Rivera-Gaxiola M. Neural substrates of language acquisition. Annual Review of Neuroscience. 2008;31:511–534. doi: 10.1146/annurev.neuro.30.051606.094321.
  51. Lakusta L, Carey S. Infants’ categorization of sources and goals in motion events. In: Pruden S, Göksun T, chairs. Conceptual primitives for processing events and learning relational terms; Symposium conducted at the International Conference on Infant Studies; Vancouver, Canada; 2008 Mar.
  52. Lakusta L, Landau B. Language and memory for motion events: Origins of the asymmetry between source and goal paths. Cognitive Science. 2012;36:517–544. doi: 10.1111/j.1551-6709.2011.01220.x.
  53. Lakusta L, Wagner L, O’Hearn K, Landau B. Conceptual foundations of spatial language: Evidence for a goal bias in infants. Language Learning. 2007;3:179–197.
  54. Landau B, Jackendoff R. “What” and “where” in spatial language and spatial cognition. Behavioral and Brain Sciences. 1993;16:217–265.
  55. Langacker RW. Foundations of cognitive grammar. Stanford, CA: Stanford University Press; 1987.
  56. Lennon P. Getting "easy" verbs wrong at the advanced level. International Review of Applied Linguistics in Language Teaching. 1996;34:23–36.
  57. Leslie AM, Keeble S. Do six-month-old infants perceive causality? Cognition. 1987;25:265–288. doi: 10.1016/s0010-0277(87)80006-9.
  58. Li P, Abarbanell L, Gleitman L, Papafragou A. Spatial reasoning in Tenejapan Mayans. Cognition. 2011;120:33–53. doi: 10.1016/j.cognition.2011.02.012.
  59. Maguire MJ, Hirsh-Pasek K, Golinkoff RM, Haryu E, Imai M, Vengas S, Okada H, Pulverman R, Sanchez-Davis B. A developmental shift from similar to language-specific strategies in verb acquisition: A comparison of English, Spanish and Japanese. Cognition. 2010;114:299–319. doi: 10.1016/j.cognition.2009.10.002.
  60. Mandler JM. On the spatial foundations of the conceptual system and its enrichment. Cognitive Science. 2012;36:421–451. doi: 10.1111/j.1551-6709.2012.01241.x.
  61. McDonough L, Choi S, Mandler JM. Understanding spatial relations: Flexible infants, lexical adults. Cognitive Psychology. 2003;46:229–259. doi: 10.1016/s0010-0285(02)00514-5.
  62. Michotte AE. The perception of causality. Miles TR, Miles E, translators. London, United Kingdom: Methuen; 1963. (Original published in 1946).
  63. Muehleisen V, Imai M. Transitivity and the incorporation of ground information in Japanese path verbs. In: Lee K, Sweetser E, Verspoor M, editors. Lexical and syntactic constructions and the construction of meaning. Amsterdam: John Benjamins; 1997. pp. 329–346.
  64. Muentener P, Carey S. Infants’ causal representations of state change events. Cognitive Psychology. 2010;61:63–86. doi: 10.1016/j.cogpsych.2010.02.001.
  65. Negueruela E, Lantolf JP, Jordan SR, Gelabert J. The “private function” of gesture in second language communicative activity: A study on motion verbs and gesturing in English and Spanish. International Journal of Applied Linguistics. 2004;14:113–147.
  66. Pakulak E, Neville HJ. Maturational constraints on the recruitment of early processes for syntactic processing. Journal of Cognitive Neuroscience. 2011;23(10):2752–2765. doi: 10.1162/jocn.2010.21586.
  67. Papafragou A, Hulbert J, Trueswell J. Does language guide event perception? Evidence from eye movements. Cognition. 2008;108:155–184. doi: 10.1016/j.cognition.2008.02.007.
  68. Pruden SM, Göksun T, Roseberry S, Hirsh-Pasek K, Golinkoff RM. Find your manners: How do infants detect the invariant manner of motion in dynamic events? Child Development. 2012;83:977–991. doi: 10.1111/j.1467-8624.2012.01737.x.
  69. Pruden SM, Roseberry S, Göksun T, Hirsh-Pasek K, Golinkoff RM. Infant categorization of path relations during dynamic events. Child Development. 2013;84:331–345. doi: 10.1111/j.1467-8624.2012.01843.x.
  70. Pulverman R, Golinkoff RM, Hirsh-Pasek K, Sootsman-Buresh J. Infants discriminate paths and manners in non-linguistic dynamic events. Cognition. 2008;108:825–830. doi: 10.1016/j.cognition.2008.04.009.
  71. Pulverman R, Song L, Pruden SM, Golinkoff RM, Hirsh-Pasek K. Preverbal infants’ attention to manner and path: Foundations for learning relational terms. Child Development. 2013;84:241–252. doi: 10.1111/cdev.12030.
  72. Rakison DH, Krogh L. Does causal action facilitate causal perception in infants younger than 6 months of age? Developmental Science. 2012;15:43–53. doi: 10.1111/j.1467-7687.2011.01096.x.
  73. Regier T, Zheng M. Attention to endpoints: A cross-linguistic constraint on spatial meaning. Cognitive Science. 2007;31:705–719. doi: 10.1080/15326900701399954.
  74. Reid VM, Csibra G, Belsky J, Johnson MH. Neural correlates of the perception of goal-directed action in infants. Acta Psychologica. 2007;124:129–138. doi: 10.1016/j.actpsy.2006.09.010.
  75. Reid VM, Hoehl S, Grigutsch M, Groendahl A, Parise E, Striano T. The neural correlates of infant and adult goal prediction: Evidence for semantic processing systems. Developmental Psychology. 2009;45:620–629. doi: 10.1037/a0015209.
  76. Roseberry S, Göksun T, Hirsh-Pasek K, Newcombe NS, Golinkoff RM, Novack M, Brayfield R. Individual differences in early event perception predict later verb learning. Paper presented at the Society for Research in Child Development; Denver, CO; 2009 Apr.
  77. Saxe R, Tzelnic T, Carey S. Knowing who dunnit: Infants identify the causal agent in an unseen causal interaction. Developmental Psychology. 2007;43:149–158. doi: 10.1037/0012-1649.43.1.149.
  78. Slobin DI. From “thought and language” to “thinking for speaking”. In: Gumperz JJ, Levinson SC, editors. Rethinking linguistic relativity. Cambridge, United Kingdom: Cambridge University Press; 1996. pp. 70–96.
  79. Song L, Pulverman R, Infiesta C, Golinkoff RM, Hirsh-Pasek K. Does the owl fly out of the tree or does the owl exit the tree flying? How L2 learners overcome their L1 lexicalization biases. 2014. Manuscript submitted for publication. doi: 10.1080/15475441.2014.989051.
  80. Straube B, Chatterjee A. Space and time in perceptual causality. Frontiers in Human Neuroscience. 2010;4(28). doi: 10.3389/fnhum.2010.00028. Retrieved from http://journal.frontiersin.org/Journal/10.3389/fnhum.2010.00028/full.
  81. Tallal P, Gaab N. Dynamic auditory processing, musical training, and language development. Trends in Neurosciences. 2006;29:382–390. doi: 10.1016/j.tins.2006.06.003.
  82. Talmy L. Toward a cognitive semantics. Volume I: Concept structuring systems. Cambridge, MA: MIT Press; 2000.
  83. Tomasello M. Pragmatic contexts for early verb learning. In: Tomasello M, Merriman WE, editors. Beyond the names for things: Young children’s acquisition of verbs. Hillsdale, NJ: Erlbaum; 1995. pp. 115–146.
  84. Tranel D, Kemmerer D. Neuroanatomical correlates of locative prepositions. Cognitive Neuropsychology. 2004;21:719–749. doi: 10.1080/02643290342000627.
  85. Trubetzkoy NS. Principles of phonology. Berkeley, CA: University of California Press; 1969.
  86. Tsujimura N. Introduction to Japanese linguistics. NY: Blackwell; 1996.
  87. Ungerleider LG, Mishkin M. Two cortical visual systems. In: Ingle DJ, Goodale MA, Mansfield RJW, editors. Analysis of visual behavior. Cambridge, MA: MIT Press; 1982. pp. 549–586.
  88. Waxman S, Fu X, Arunachalam S, Leddon E, Geraghty K, Song H. Are nouns learned before verbs? Infants provide insight into a long-standing debate. Child Development Perspectives. 2013;7:155–159. doi: 10.1111/cdep.12032.
  89. Werker JF, Byers-Heinlein K. Bilingualism in infancy: First steps in perception and comprehension. Trends in Cognitive Sciences. 2008;12(4):144–151. doi: 10.1016/j.tics.2008.01.008.
  90. Werker JF, Tees RC. Cross-language speech perception: Evidence for perceptual reorganization during the first year of life. Infant Behavior and Development. 1984;7:49–63.
  91. White L, Spada N, Lightbown P, Ranta L. Input enhancement and L2 question formation. Applied Linguistics. 1991;12:416–432.
  92. Whorf BL. Language, thought, and reality. Cambridge, MA: MIT Press; 1956.
  93. Wilcox T, Bortfeld H, Armstrong J, Woods R, Boas DA. Hemodynamic response to featural and spatiotemporal information in the infant brain. Neuropsychologia. 2009;47:657–662. doi: 10.1016/j.neuropsychologia.2008.11.014.
  94. Wolff P. Representing causation. Journal of Experimental Psychology: General. 2007;136:82–111. doi: 10.1037/0096-3445.136.1.82.
  95. Wolff P, Jeon G, Klettke B, Li Y. Force creation and possible causers across languages. In: Malt B, Wolff P, editors. Words and the mind: How words capture human experience. New York, NY: Oxford University Press; 2010. pp. 93–110.
  96. Wolff P, Klettke B, Ventura T, Song G. Expressing causation in English and other languages. In: Ahn W, Goldstone RL, Love BC, Markman AB, Wolff P, editors. Categorization inside and outside of the lab: Festschrift in Honor of Douglas L. Medin. Washington, DC: American Psychological Association; 2005. pp. 29–48.
  97. Wolff P, Ventura T. When Russians learn English: How the semantics of causation may change. Bilingualism: Language and Cognition. 2009;12:153–176.
  98. Woodward AL. Infants selectively encode the goal object of an actor's reach. Cognition. 1998;69:1–34. doi: 10.1016/s0010-0277(98)00058-4.
  99. Wu DH, Morganti A, Chatterjee A. Neural substrates of processing path and manner information of a moving event. Neuropsychologia. 2008;46:704–713. doi: 10.1016/j.neuropsychologia.2007.09.016.
