Author manuscript; available in PMC: 2018 Jun 1.
Published in final edited form as: Psychon Bull Rev. 2017 Jun;24(3):652–665. doi: 10.3758/s13423-016-1145-z

Gesture as Representational Action: A paper about function

Miriam A Novack 1, Susan Goldin-Meadow 1
PMCID: PMC5340635  NIHMSID: NIHMS815407  PMID: 27604493

Abstract

A great deal of attention has recently been paid to gesture and its effects on thinking and learning. It is well established that the hand movements that accompany speech are an integral part of communication, ubiquitous across cultures, and a unique feature of human behavior. In an attempt to understand this intriguing phenomenon, researchers have focused on pinpointing the mechanisms that underlie gesture production. One proposal—that gesture arises from simulated action (see Hostetter & Alibali, 2008)—has opened up discussions about action, gesture, and the relation between the two. However, there is another side to understanding a phenomenon, and that is to understand its function. A phenomenon’s function is its purpose rather than its precipitating cause—the why rather than the how. This paper sets forth a theoretical framework for exploring why gesture serves the functions that it does, and reviews where the current literature fits, and fails to fit, this proposal. Our framework proposes that, whether or not gesture is simulated action in terms of its mechanism, it is clearly not reducible to action in terms of its function. Most notably, because gestures are abstracted representations and are not actions tied to particular events and objects, they can play a powerful role in thinking and learning beyond the particular, specifically, in supporting generalization and transfer of knowledge.

Keywords: Gesture, action, learning, representations


Gestures are spontaneous hand movements that accompany speech (Goldin-Meadow & Brentari, 2016; Kendon, 2004; McNeill, 1992). They have the capacity to portray actions or objects through their form (iconic gestures), to represent abstract ideas (metaphoric gestures), to provide emphasis to discourse structure (beat gestures), and to reference locations, items, or people in the world (deictic gestures). Children gesture before they can speak (Bates, 1976; Goldin-Meadow, 2014) and people all over the world have been found to gesture in one way or another (Kita, 2009). Gestures provide a spatial or imagistic complement to spoken language and are not limited to conventions and rules of formal linear-linguistic systems. Importantly, gestures play a unique role in communication, thinking, and learning, and have been shown to affect the minds of both the people who see them, and the people who produce them (Goldin-Meadow, 2003).

There are many questions that arise when we think about gesture: What makes us gesture? What types of events make gesture likely? What controls how often we gesture? These sorts of questions are all focused on the mechanism of gesture production—an important line of inquiry exploring the structures and processes that underlie how gesture is produced. Here, rather than ask about the mechanisms that lead to gesture, we focus instead on the consequences of having produced gesture—that is, on the function of gesture. What effects do gestures have on the listeners who see them and the speakers who produce them? What features of gestures contribute to these effects? How do these features and functions inform our understanding of what exactly gestures are?

We propose that gestures produce effects on thinking and learning because they are representational actions. When we say here that gestures are representational actions, we mean that they are meaningful substitutions and analogical stand-ins for ideas, objects, actions, relations, etc. This use of the term representational should not be confused with the term representational gesture—a category of gestures that look like the ideas and items to which they refer (i.e., iconic and metaphoric gestures). Our proposal that gestures are representational is meant to apply to all types of nonconventional gestures, including representational gestures (iconics, metaphorics), deictic gestures, and even beat gestures. Iconic gestures can represent actions or objects; deictic gestures draw attention to the entities to which they refer; beat gestures reflect discourse structure. Most of this paper explores the functions of iconic and deictic gestures, but we believe that our framework can be applied to all (non-conventional) gestures.

Gestures are representational in that they represent something other than themselves, and they are actions in that they involve movements of the body. Most importantly, the fact that gestures are representational actions differentiates them from full-blown instrumental actions, whose purpose is to affect the world by directly interacting with it (e.g., grabbing a fork, opening a canister). In addition, gestures are unlike movements for their own sake (see Schachner & Carey, 2013), whose purpose is the movement itself (e.g., dancing, exercising). Rather, gestures are movements whose power resides in their ability to represent actions, objects, or ideas.

Gestures have many similarities to actions simply because they are a type of action. Theories rooted in embodied cognition maintain that action experiences have profound effects on how we view objects (James & Swain, 2011), perceive others’ actions (Casile & Giese, 2006), and even understand language (Beilock, Lyons, Mattarella-Micke, Nusbaum, & Small, 2008). The Gesture as Simulated Action (GSA) framework grew out of the embodied cognition literature. The GSA proposes that gestures are the manifestation of action programs, which are simulated (but not actually carried out) when an action is imagined (Hostetter & Alibali, 2008). Following at least some accounts of embodied cognition (see Wilson, 2002, for review), the GSA suggests that when we think of an action (or an object that can be acted upon), we activate components of the motor network responsible for carrying out that action, in essence, simulating the action. If this simulation surpasses the “gesture threshold,” it will spill over and become a true motor expression—an overt gesture. The root of gesture, then, according to this framework, is simulation—partial motor activation without completion.

The GSA framework offers a useful explanation of how gesturing comes about (its mechanism) and the framework highlights gesture’s tight tie to action. However, this framework is primarily useful for understanding how gestures are produced, not for how they are understood, unless we assume that gesture comprehension (like language comprehension, see Beilock et al., 2008) also involves simulating action. More importantly, the framework does not necessarily help us understand what gestures do both for the people who produce them, and for the people who see them. We suggest that viewing gestures as simulated actions places too much emphasis on the action side of gesture and, in so doing, fails to explain the ways in which gesture’s functions differ from those of instrumental actions. The fact that gesture is an action is only one piece of the puzzle. Gesture is a special kind of action, one that represents the world rather than directly impacting the world. For example, producing a twisting gesture in the air near, but not on, a jar will not open the jar; only performing the twisting action on the jar itself will do that. Here we argue that this representational characteristic of gesture is key to understanding why gesturing occurs (its function).

Our hypothesis is that the effects gesture has on thinking and learning grow not only out of the fact that gesture is itself an action, but also out of the fact that gesture is abstracted away from action—the fact that it is representational. Importantly, we argue that this framework can account for the functions gesture serves both for producers of gesture and for perceivers of gesture. We begin by defining what we mean by gesture, and providing evidence that adults spontaneously view gesture-like movements as representational. Second, we review how gesture develops over ontogeny, and use evidence from developmental populations to suggest a need to move from thinking about gesture as simulated action to thinking about it as representational action. Finally, we review evidence that gesture can have an impact on cognitive processes, and explore this idea separately for producers of gesture and for receivers of gesture. We show that the effects that gesture has on both producers and receivers are distinct from the effects that instrumental action has. In each of these sections, our goal is to develop a framework for understanding gesture’s functions, thereby creating a more comprehensive account of the phenomenon of gesture.

Part 1: What makes a movement a gesture?

Before we can unpack how gesture’s functions relate to its classification as representational action, we must establish how people distinguish gestures from the myriad of hand movements they encounter. Gestures have a few obvious features that differentiate them from other types of movements. The most obvious is that gestures happen off objects, in the air. This feature makes gestures qualitatively different from object-directed actions (e.g., grabbing a cup of coffee, typing on a keyboard, stirring a pot of soup), which involve manipulating objects and causing changes to the external world. A long-standing body of research has established that adults (as well as children and infants) process object-directed movements in a top-down, hierarchical manner, encoding the goal of an object-directed action as most important and ignoring the particular movements used to achieve that goal (e.g., Baldwin & Baird, 2001; Bower & Rinck, 1999; Searle, 1980; Trabasso & Nickels, 1992; Woodward, 1998; Zacks, Tversky, & Iyer, 2001). For example, the goal of twisting the lid of a jar is to open the jar—not just to twist one’s hand back and forth while holding onto the jar lid.

In contrast to actions that are produced to achieve external goals, if we interpret the goal of an action to be the movement itself, we are inclined to describe that movement in detail, focusing on its low-level features. According to Schachner and Carey (2013), adults consider the goal of an action to be the movement itself if the movement is irrational (e.g., moving toward an object and then away from it without explanation) or if it is produced in the absence of objects (e.g., making the same to-and-fro movements but without any objects present). These “movements for the sake of movement” can include dancing, producing ritualized movements, or exercising. For example, the goal of twisting one’s hands back and forth in the air when no jar is present might be to just stretch or to exercise the wrist and fingers.

So where does gesture fit in? Gestures look like movements for their own sake in that they occur off objects and, in this sense, resemble dance, ritual, and exercise. However, gestures are also similar to object-directed actions in that the movements that comprise a gesture are not the purpose of the gesture—those movements are a means to accomplish something else: communicating and representing information. Gestures also differ from object-directed actions, however, in their purpose—the purpose of an object-directed action is to accomplish a goal with the object (e.g., to open a jar, grab a cup of coffee); the purpose of a gesture is to represent information and perhaps communicate that information (e.g., to show someone how to open a jar, to tell someone that you want that cup of coffee). The question then is—how is an observer to know when a movement is a communicative symbol (i.e., a gesture) and when it is an object-directed action or a movement produced for its own sake?

To better understand how people know when they have seen a gesture, we asked adults to describe scenes in which a woman moved her hands under three conditions (Novack, Wakefield, & Goldin-Meadow, 2016). In the first condition (action on objects), the woman moved two blue balls into a blue box and two orange balls into an orange box. In the second condition (action off objects with the objects present), the balls and boxes were present, but the woman moved her hands as if moving the objects without actually touching them. Finally, in the third condition (action with the objects absent), the woman moved her hands as if moving the objects, but in the absence of any objects.

In addition to the presence or absence of objects, another feature that differentiates object-directed actions from gestures is co-occurrence with speech. Although actions can be produced along with speech, they need not be. In contrast, gestures not only routinely co-occur with speech, but they are also synchronized with that speech (Kendon, 1980; McNeill, 1992). People do, at times, spontaneously produce gesture without speech and, in fact, experimenters have begun to instruct participants to describe events using their hands and no speech (e.g., Gibson, Piantadosi, Brink, Bergen, Lim & Saxe, 2013; Goldin-Meadow, So, Özyürek, & Mylander, 2008; Hall, Ferreira & Mayberry, 2013). However, these silent gestures, as they are known, look qualitatively different from the co-speech gestures that speakers produce as they talk (Goldin-Meadow, McNeill & Singleton, 1996; Özçalışkan, Lucero & Goldin-Meadow, 2016; see Goldin-Meadow & Brentari, 2016, for discussion). To explore this central feature of gesture, Novack et al. (2016) also varied whether the actor’s movements in their study were accompanied by filtered speech. Movements accompanied by speech-like sounds should be more likely to be seen as gestures (i.e., as representational actions) than the same movements produced without speech-like sounds.

Participants’ descriptions of the event in the video were coded according to whether they described external goals (e.g., “the person placed balls in boxes”), movement-based goals (e.g., “a woman waved her hands over some balls and boxes”), or representational goals (e.g., “she showed how to sort objects”). As expected, all participants described the videos in which the actor moved the objects as depicting an external goal, whereas participants never gave this type of response for the empty-handed videos (i.e., videos in which the actor did not touch the objects). However, participants gave different types of responses as a function of the presence or absence of the objects in the empty-handed movement conditions. When the objects were there (but not touched), approximately 70% of observers described the movements in terms of representational goals. In contrast, when the objects were not there (and obviously not touched), only 30% of observers mentioned representational goals. Participants increased the number of representational goals they gave when the actor’s movements were accompanied by filtered speech (which made the movement feel like part of a communicative act).

Observers thus systematically describe movements that have many of the features of gesture—no direct contact with objects, and co-occurrence with speech—as representational actions. Importantly, participants made a clear distinction between the instrumental object-directed action, and the two empty-handed movements (movements in the presence of objects and movements in the absence of objects), indicating that actions on objects have clear external goals, and actions off objects do not. Empty-handed movements are often interpreted as movements for their own sake. But if the conditions are right, observers go beyond the movements they see to make rich inferences about what those movements can represent.

Part 2: Learning from gestures over development

We now know that, under the right conditions, adults will view empty-handed movements as more than just movements for their own sake. We are thus perfectly positioned to ask how the ability to see movement as representational action develops over ontogeny. In this section, we look at both the production and comprehension of gesture in the early years, focusing on the development of two types of gestures—deictic gestures and iconic gestures.

The development of deictic gestures

We begin with deictic gestures, as these are the first gestures that children produce and understand. Although deictic gestures have a physically simple form (an outstretched arm and an index finger), their meaning is quite rich, representing social, communicative, and referential intentions (Tomasello, Carpenter, & Liszkowski, 2007). Interestingly, deictic gestures are more difficult to produce and understand than their simple form would lead us to expect.

Producing deictic gestures

Infants begin to point between 9 and 12 months, even before they say their first words (Bates, 1976). Importantly, producing these first gesture forms signals advances in children’s cognitive processes, particularly with respect to their language production. For example, lexical items for objects to which a child points are soon found in that child’s verbal repertoire (Iverson & Goldin-Meadow, 2005). Similarly, pointing to one item (e.g., a chair) while producing a word for a different object (e.g., “mommy”) predicts the onset of two-word utterances (e.g., “mommy’s chair”) (Goldin-Meadow & Butcher, 2003; Iverson & Goldin-Meadow, 2005). Not only does the act of pointing preview the onset of a child’s linguistic skills, but it also plays a causal role in the development of those skills. One-and-a-half-year-old children given pointing training (i.e., they were told to point to pictures of objects as the experimenter named them) increased their own pointing in spontaneous interactions with their caregivers, which, in turn, led to increases in their spoken vocabulary (LeBarton, Goldin-Meadow & Raudenbush, 2015). Finally, these language-learning effects are unique to pointing gestures, and do not arise in response to similar-looking instrumental actions like reaches. Eighteen-month-old children learn a novel label for an object if an experimenter says the label while the child is pointing at the object, but not if the child is reaching to the object (Lucca & Wilborn, 2016). Thus, as early as 18 months, we see that the representational status of the pointing gesture can have a unique effect on learning (here, language learning), an effect not found for a comparable instrumental act.

Perceiving deictic gestures

Children begin to understand others’ pointing gestures around the same age as they themselves begin to point. At 12 months, infants view points as goal directed (Woodward & Guajardo, 2002) and recognize the communicative function of points (Behne, Liszkowski, Carpenter, & Tomasello, 2012). Infants even understand that pointing hands, but not non-pointing fists, communicate information to those who can see them (Krehm, Onishi, & Vouloumanos, 2014). As is the case for producing pointing gestures, seeing pointing gestures results in effects that are not found for similar-looking instrumental actions. For example, Yoon, Johnson, and Csibra (2008) found that when 9-month-old children see someone point to an object, they are likely to remember the identity of that object. In contrast, if they see someone reach to an object (an instrumental act), 9-month-olds are likely to remember the location of the object, not its identity. Thus, as soon as children begin to understand pointing gestures, they seem to understand them as representational actions, rather than as instrumental actions.

The development of iconic gestures

Young children find it difficult to interpret iconic gestures, which, we argue, is an outgrowth of the general difficulty they have with interpreting representational forms (e.g., DeLoache, 1995). Interestingly, even though instrumental actions often look like iconic gestures, interpreting instrumental actions does not present the same challenges as interpreting gesture.

Producing iconic gestures

Producing iconic gestures is rare in the first years of life. Although infants do produce a few iconic gestures as early as 14 months (Acredolo & Goodwyn, 1985; 1988), these early gestures typically grow out of parent-child play routines (e.g., while singing the itsy-bitsy spider), suggesting that they are probably not child-driven representational inventions. It is not until 26 months that children begin to reliably produce iconic gestures in spontaneous settings (Özçalışkan & Goldin-Meadow, 2011) and in elicited laboratory experiments (Behne, Carpenter & Tomasello, 2014) and, even then, these iconic forms are extremely rare. Of the gestures that young children produce, only 1–5% are iconic (Iverson, Capirci & Caselli, 1994; Nicoladis, Mayberry & Genesee, 1999; Özçalışkan & Goldin-Meadow, 2005); in contrast, 30% of the gestures that adults produce are iconic (McNeill, 1992).

If gestures are simply a spillover from motor simulation (as the GSA predicts), we might expect children to begin producing a gesture for a given action as soon as they acquire the underlying action program for that action (e.g., we would expect a child to produce a gesture for eating as soon as the child is able to eat by herself). But children produce actions on objects well before they produce gestures for those actions (Özçalışkan & Goldin-Meadow, 2011). In addition, according to the GSA, gesture is produced when an inhibitory threshold is exceeded. Since young children have difficulty with inhibitory control, we might then expect them to produce more gestures than adults, which turns out not to be the case (Özçalışkan & Goldin-Meadow, 2011). The relatively late onset, and paucity, of iconic gesture production is thus not predicted by the GSA. It is, however, consistent with the proposal that gestures are representational actions. As representational actions, gestures require sophisticated processing skills to produce and thus would not be expected in very young children.

Perceiving iconic gestures

Understanding iconic gestures is also difficult for toddlers. At 18 months, children are no more likely to associate an iconic gesture (e.g., hopping two fingers up and down to represent the rabbit’s ears as it hops) than an arbitrary gesture (holding a hand shaped in an arbitrary configuration to represent a rabbit) with an object (Namy, Campbell, & Tomasello, 2004). It is not until the third year that children begin to appreciate the relation between an iconic gesture and its referent (Goodrich & Hudson Kam, 2009; Marentette & Nicoladis, 2011; Namy, Campbell, & Tomasello, 2004; Namy, 2008; Novack, Goldin-Meadow, & Woodward, 2015). In many cases, children fail to correctly see the link between an iconic gesture and its referent until age 3 or even 4 (for example, when gestures represent the perceptual properties of an object, Hodges, Özçalışkan, & Williamson, 2015; Tolar, Lederberg, Gokhale, & Tomasello, 2008).

The relatively late onset of children’s comprehension of iconic gestures is also consistent with the proposal that gestures are representational actions. If gestures were simulations of actions, then, as soon as an infant has a motor experience, the infant ought to be able to interpret that motor action as a gesture just by accessing her own motor experiences. But young children who are able to understand an instrumental action are not necessarily able to understand a gesture for that action. Consider, for example, a 2-year-old who is motorically capable of putting a ring on a post. If an adult models the ring-putting-on action for the child, she responds by putting the ring on the post (in fact, children put the ring on the post even if the adult tries to get the ring on the post but doesn’t succeed, i.e., if the adult models a failed attempt). If, however, the adult models a put-ring-on-post gesture (she shows how the ring can be put on the post without touching it), the 2-year-old frequently fails to place the ring on the post (Novack et al., 2015). In other words, at a time when a child understands the goal of an object-directed action and is able to perform the action, the child is still unable to understand a gesture for that action. This difficulty makes sense on the assumption that gestures are representational actions since children of this age are generally known to have difficulty with representation (e.g., DeLoache, 1995).

As another example, young children who can draw inferences from a hand that is used as an instrumental action (e.g., an object-directed reach) fail to draw inferences from the same hand used as a gesture. Studies of action processing find that infants as young as 6 months can use the shape of someone’s reaching hand to correctly predict the intended object of the reach (Ambrosini et al., 2013; Filippi & Woodward, 2016). For example, infants expect someone whose hand is shaped in a pincer grip to reach toward a small object, and someone whose hand is shaped in a more open grip to reach toward a large object (Ambrosini et al., 2013)—but they do so only when the handshape is embedded in an instrumental reach. Two-and-a-half-year-olds presented with the identical hand formations as gestures rather than reaches (i.e., an experimenter holding a pincer handshape or open handshape in gesture space) are unable to map the hand cue onto its referent (Novack, Filippi, Goldin-Meadow & Woodward, 2016). The fact that children can interpret handshape information accurately in instrumental actions by 6 months, but are unable to interpret handshape information in gesturing actions until 2 or 3 years, adds weight to the proposal that gestures are a special type of representational action.

Part 3: Gesture’s functions are supported by its action properties and its representational properties

Thus far, we have discussed how people come to see movements as gestures, and have used findings from the developmental literature to raise questions about whether gesture is best classified as simulated action. We suggest that, even if gesture arises from simulated action programs, in order to fully understand its effects, we need to also think about gesture as representational action. Under this account, simulated actions are considered non-representational, and it is the difference between representational gesture and veridical action that is key to understanding the effects that gesture has on producers and perceivers. In this section, we examine similarities and differences between gesture and action, and discuss the implications of these similarities and differences for communication, problem solving, and learning.

Gesture vs. action in communication

As previously mentioned, one way in which gestures differ from actions is in how they relate to spoken language. Unlike object-directed actions, gestures are seamlessly integrated with speech in both production (e.g., Bernardis & Gentilucci, 2006; Kendon, 1980; Kita & Özyürek, 2003) and comprehension (e.g., Kelly, Özyürek, & Maris, 2010), supporting the claim that speech and gesture form a single integrated system (McNeill, 1992). Indeed, the talk that accompanies gesture plays a role in determining the meaning taken from that gesture. For example, a spiraling gesture might refer to ascending a staircase when accompanied by the sentence, “I ran all the way up,” but to out-of-control prices when accompanied by the sentence, “The rates are rising every day.” Conversely, the gestures that accompany speech can influence the meaning taken from speech. For example, the sentence, “I ran all the way up,” is likely to describe mounting a spiral staircase when accompanied by an upward spiraling gesture, but a straight staircase when accompanied by an upward moving point. Here, we discuss the effects of gesture-speech integration for the speakers who produce gesture, as well as the listeners who perceive it.

Producing gesture in communication

Gesture production is spontaneous and temporally linked to speech (Loehr, 2007; McNeill, 1992). Moreover, the tight temporal relation found between speech and gesture is not found between speech and instrumental action. For example, if adults are asked to explain how to throw a dart using the object in front of them (an instrumental action) or using just their hands with no object (a gesture), they display a tighter link between speech and the accompanying dart-throwing gesture than between speech and the accompanying dart-throwing action (Church, Kelly & Holcombe, 2014). Other signatures of the gesture-speech system also seem to be unique to gesture, and are not found in instrumental actions. For example, gestures are more often produced with the right hand (suggesting a link to the left-hemisphere speech system), whereas self-touching adaptors (e.g., scratching, pushing back the hair), which are instrumental actions, are produced with both hands (Kimura, 1973).

The act of producing representational gesture along with speech has been found to have an effect on speakers themselves. Gesturing while speaking can improve the speaker’s lexical access and fluency (Graham & Heywood, 1975; Rauscher, Krauss, & Chen, 1996), help the speaker package information (Kita, 2000), and even lighten the speaker’s working memory load (Goldin-Meadow, Nusbaum, Kelly, & Wagner, 2001; Wagner, Nusbaum, & Goldin-Meadow, 2004). Moreover, here again, movements that are not gestures, such as meaningless hand movements, do not have the same load-lightening effects on the speaker as gestures do (Cook, Yip, & Goldin-Meadow, 2012).

Perceiving gesture in communication

The gestures that accompany a speaker’s talk often emphasize information found in that talk. Seeing gestures has been found to improve comprehension for listeners, particularly for bilinguals with low proficiency in their second language (Sueyoshi & Hardison, 2005) or for young children (McNeil, Alibali, & Evans, 2000). Seeing gestures has also been found to improve listeners’ mental imagery, particularly with respect to spatial topics (Driskell & Radtke, 2003). In a meta-analysis of gesture comprehension studies, messages with gesture were shown to have a moderate, but significant, comprehension advantage for the listener, compared to messages without gesture (Hostetter, 2011). But gestures can also provide non-redundant information not found in the speaker’s talk (Church, Garber, & Rogalski, 2007; Goldin-Meadow, 2003; Kelly, 2001; Kelly, Barr, Church, & Lynch, 1999; McNeill, 1992), and listeners are able to take advantage of information conveyed uniquely in gesture (e.g., Goldin-Meadow & Sandhofer, 1999). For example, listeners are more likely to infer the meaning of an indirect request (e.g., “I’m getting cold”) if that speech is accompanied by a gesture (a point to an open window) than if it is produced without the gesture (Kelly et al., 1999). Gesture thus serves a function not only for speakers but also for listeners.

Moreover, the effects of perceiving gesture are not the same as the effects of perceiving instrumental action. For example, although adults can seamlessly and easily integrate information conveyed in speech with gesture, they often fail to integrate that information with instrumental action. In particular, adults can easily ignore actions that are incongruent with the speech with which they are produced, but they have difficulty ignoring gestures that are incongruent with the speech they accompany, suggesting a difference in the relative strength of speech-gesture integration vs. speech-action integration (Kelly, Healy, Özyürek, & Holler, 2014). Thus, gesture has a different relationship to speech than instrumental action does and, in turn, has a different effect on listeners than instrumental action does.

Gesture vs. action in problem solving

Gesture not only has an impact on communication, but it also plays a role in more complex cognitive processes, such as conceptualization and problem-solving. Here again, we find that gesture and instrumental action do not influence problem-solving in the same way.

Producing gesture in problem-solving

Viewing gestures as representational action acknowledges that gesture has its base in action. Indeed, gestures often faithfully reflect our action experiences on objects in the world. Take, for example, the Tower of Hanoi task (Newell & Simon, 1972). In this task, participants are asked to move a number of disks, stacked from largest to smallest, from one peg to another peg; the goal is to re-create the stacked arrangement on the new peg, moving only one disk at a time and never placing a larger disk on top of a smaller disk. Solving the task involves actions (i.e., moving the disks), and the gestures that participants use to later explain their solution represent elements of the actions they produced while solving the task in the first place. More specifically, participants who solved the problem using a physical tower produce more grasping gestures and curved trajectories than participants who solved the problem using a computer program in which disk icons could be dragged across the screen using a mouse cursor (Cook & Tanenhaus, 2009). Gestures thus reflect a speaker’s action experiences in the world by re-presenting traces of those actions.
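
For readers unfamiliar with the task, its rule structure can be captured in a few lines of code. The sketch below is our own illustration of the standard recursive solution (the function name and peg labels are ours, not part of any study described here); it simply enumerates the legal moves that transfer a stack from one peg to another.

```python
def solve_hanoi(n, source, target, spare, moves=None):
    """Return the list of (from_peg, to_peg) moves that transfer n stacked
    disks from source to target, moving one disk at a time and never
    placing a larger disk on a smaller one."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    solve_hanoi(n - 1, source, spare, target, moves)  # clear the smaller disks out of the way
    moves.append((source, target))                    # move the largest remaining disk
    solve_hanoi(n - 1, spare, target, source, moves)  # restack the smaller disks on top
    return moves

# A three-disk tower takes 2**3 - 1 = 7 moves.
print(solve_hanoi(3, "A", "C", "B"))
```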

As noted earlier, gesturing about an action accomplishes nothing tangible—gesturing about moving disks does not actually move the disks. But even though gesture does not accomplish anything physical, it can change our cognition in ways that action does not. Using the Tower of Hanoi task again as an example, we see that individuals who gesture about how they moved the disks encode the problem differently from individuals who do not gesture. In one study using this paradigm, after explaining how they solved the task and gesturing while doing so, participants were surreptitiously given a new stack of disks that looked like the original stack but differed in weight—the largest disk was now the lightest, and the smallest disk was now the heaviest and could no longer be lifted with one hand (Goldin-Meadow & Beilock, 2010). Participants who had initially produced one-handed gestures when describing how to move the smallest disk were adversely affected by the switch in weights—the more these participants gestured about the small disk with one hand, the slower their time to solve the problem after the disk weights had been switched (recall that the small disk could no longer be moved with one hand). By gesturing about the smallest disk with one hand, participants set themselves up to think of the disk as light—the unanticipated switch in disk weights violated this expectation, leading to relatively poor performance after the switch. Importantly, if participants are not asked to provide explanations before the switch—and thus do not gesture—the switch effect disappears (Beilock & Goldin-Meadow, 2010). Moreover, participants who are asked to act on the objects and actually move the disks while explaining their solution (instead of gesturing) also do not show the switch effect (Trofatter, Kontra, Beilock, & Goldin-Meadow, 2014). Gesture can thus have an effect (in this case, a detrimental effect) on thinking, and it can have a more powerful effect on thinking than action does.

Finally, although gestures contain many components of the actions to which they refer, they also drop out components. Gestures are not, and cannot be, exact replicas of the actions to which they refer. Using the Tower of Hanoi task again as a case study, we see that one cannot veridically represent, in a single gesture, both the force needed to lift a heavy disk and the speed at which the disk is lifted. Incorporating into gesture the actual force needed to lift the disk (while lifting nothing) will necessarily result in a much faster movement than was made when the disk was actually lifted. Conversely, incorporating into gesture the speed at which the disk actually moved (while moving nothing) will not require the same force as was necessary with an object in hand. Thus, gestures are not just smaller versions of actions; they have fundamentally different features from actions and, perhaps as a result, have different functional effects on cognitive processes.

Perceiving gesture in problem-solving

The Tower of Hanoi task also exemplifies the impact that perceiving gesture has on the listener’s conceptualizations. As mentioned in the last section, participants gesture differently as a reflection of how they solved the Tower of Hanoi task, producing smaller arches to represent the movement of the disks if they had solved the task on a computer than if they had solved the task with actual disks (Cook & Tanenhaus, 2009). Participants who saw those gestured explanations, but did not act on the Tower themselves, were influenced by the gestures they saw when they were later asked to solve the problem themselves on a computer. Participants who watched someone explain how to solve the Tower of Hanoi task using gestures with high arches were more likely to produce higher arching movements themselves on the computer (even though it is not necessary to arch the movement at all on the computer) than participants who saw someone use gestures with smaller arches—in fact, the bigger the gestured arcs, the bigger the participant’s movements on the computer screen. The gestures we see can influence our own actions.

Gesture vs. action in learning

Gesture can also lead learners to new ideas or concepts, both when learners see gesture in instruction and when they produce gesture themselves. Learners are more likely to profit from a lesson in which the teacher gestures than from a lesson in which the teacher does not gesture (Cook, Duffy, & Fenn, 2013; Ping & Goldin-Meadow 2008; Singer & Goldin-Meadow, 2005; Valenzeno, Alibali, & Klatzky, 2003). And when children gesture themselves, they are particularly likely to discover new ideas (Goldin-Meadow, Cook & Mitchell, 2009), retain those ideas (Cook, Mitchell & Goldin-Meadow, 2008), and generalize the ideas to novel problem types (Novack, Congdon, Hemani-Lopez & Goldin-Meadow, 2014). We argue that gesture can play this type of role in learning because it is an action and thus engages the motor system, but also because it represents information.

Learning from producing gesture

Producing one’s own actions has been found to support learning from infancy through adulthood (see Kontra, Beilock, & Goldin-Meadow, 2012, for a review). For example, 3-month-olds given experience wearing Velcro mittens that helped them grab the objects they reached for come to successfully interpret others’ goal-directed reaches in a subsequent habituation test; in contrast, infants given experience simply watching someone else obtain objects while wearing the mittens do not come to understand others’ reaches (Gerson & Woodward, 2014; Sommerville, Woodward & Needham, 2005). Even college-aged students benefit from active experience in learning contexts. When physics students are given the chance to feel the properties of angular momentum first-hand (by holding a system of two bicycle wheels spinning around an axle), they score higher on a test of their understanding of force than their counterparts who simply had access to a visible depiction of the angular momentum (i.e., watching the deflection of a laser pointer connected to the bicycle system) (Kontra, Lyons, Fischer, & Beilock, 2015). Finally, neuroimaging data suggest that active experience manipulating objects leaves a lasting neural signature that is found when learners later view the objects without manipulating them (James, 2010; James & Swain, 2011; Longcamp et al., 2003; Prinz, 1997). For example, children given active experience writing letters later show greater activation in motor regions when just passively looking at letters in the scanner, compared to children who were given practice looking at letters without writing them (James, 2010). Given that gestures are a type of action and that action affects learning, we might expect learning from gesture to resemble learning from action.

In fact, recent work suggests that learning via producing gesture engages a similar motor network as learning via producing action. When children were taught how to solve mathematical equivalence problems while producing gesture strategies, they later showed greater activation in motor regions when passively solving the types of problems they had learned about, compared to children who learned without gesture (Wakefield, et al., 2016). The same motor regions have been implicated in studies looking at the effect of producing action on learning (e.g., James 2010; James & Atwood, 2009; James & Swain, 2011), suggesting that gesture and action are similar in the effect they have on the brain.

But gestures differ from actions in a number of ways, and these differences might influence the impact that producing gesture has on learning. First, as mentioned earlier, actions are produced on objects; gestures are not. To compare the effects of learning via gesture vs. learning via action, Novack and colleagues (2014) taught 3rd graders to produce actions on objects or gestures off objects during a math lesson. Children were shown movable number tiles placed over numbers in problems such as 4+7+2=__+2. Children in the Action condition were taught to pick up the first two number tiles (the 4 and the 7) and then hold them in the blank. Children in the Concrete Gesture condition were taught to move their hands as if they were picking up the tiles and holding them in the blank, but without actually moving them. Finally, children in the Abstract Gesture condition were taught to produce a V-point gesture to the first two numbers and then a point to the blank. In all three conditions, children were using their hands to represent a strategy for solving the problem—the grouping strategy, in which the two numbers on the left side of the equation that are not found on the right are added and the sum is put in the blank. But the conditions differed in whether the hands actually moved objects. Although children in all three conditions learned how to solve the types of problems on which they had been trained, only children in the gesture conditions were able to transfer what they had learned to problems with a different format (near-transfer problems, e.g., 4+7+2=4+__; far-transfer problems, e.g., 4+7+2=__+6). Children in the Action condition seemed to have gotten “stuck” in the concrete nature of the movements, learning how to solve the problem at a shallow level that did not lead to transfer. Even more surprising, children in the Concrete Gesture condition were less successful on far-transfer problems than children in the Abstract Gesture condition, suggesting that the closer a gesture’s form is to action, the closer the gesture comes to behaving like action.
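
To make the grouping strategy concrete, here is a minimal sketch (our own illustration; the function name is hypothetical and no such code appears in the study) that solves a trained problem the way the strategy describes: add the left-side numbers that are not repeated on the right, and put the sum in the blank.

```python
def grouping_strategy(left_addends, repeated_addend):
    """Solve problems like 4 + 7 + 2 = __ + 2 by summing the left-side
    addends that do not also appear on the right side."""
    remaining = list(left_addends)
    remaining.remove(repeated_addend)  # cancel the addend that appears on both sides
    return sum(remaining)

# 4 + 7 + 2 = __ + 2  ->  group the 4 and the 7  ->  the blank is 11
print(grouping_strategy([4, 7, 2], 2))  # 11
```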

Understanding how learners are affected by gesture compared to object-directed action is particularly important given the widespread use of manipulatives in educational settings (see Mix, 2010, for review). Manipulatives, or external symbols, are thought to help learners off-load some of the cognitive burden involved in maintaining abstract ideas in mind. Children can use concrete external symbols as a reference to be revisited, freeing up cognitive resources for other processing tasks. Importantly, external symbols can be moved and acted on, allowing for the integration of physical, motor processes with abstract conceptual ideas. Despite these potential benefits of learning through action, and consistent with findings on learning through gesture, research from the education literature casts doubt on manipulative-based learning. Interacting with a manipulative can encourage learners to focus on the object itself rather than its symbolic meaning (see, for example, Uttal, Scudder & DeLoache, 1997). The perceptual features of objects can be distracting (McNeil, Uttal, Jarvin, & Sternberg, 2009), and young children in particular may lose track of the fact that the manipulatives not only are objects, but also stand for something else (DeLoache, 1995). Gesture has the potential to distance learners from the concrete details of a manipulative, thus encouraging them to approach the concept at a deeper level.

Learning from perceiving gesture

The gestures that children see in instruction also have beneficial effects on learning (e.g., Cook, et al., 2013; Ping & Goldin-Meadow 2008; Singer & Goldin-Meadow, 2005; Valenzeno, et al., 2003). Some have suggested that seeing gestures can help learners connect abstract ideas, often presented in speech, to the concrete physical environment (Valenzeno, et al., 2003). Seeing gesture might also support learning through the same mechanisms as producing gesture, that is, by engaging the motor system. Listeners recruit their own motor systems when listening to speakers who gesture (Ping, Goldin-Meadow, & Beilock, 2014), and neuroimaging research suggests that recruiting the motor system may be key in learning. Adults learn more foreign words if they are taught those words while seeing someone produce meaningful iconic gestures, compared to seeing someone produce meaningless movements (Macedonia, Muller, & Friederici, 2011). Those adults then activate areas of their premotor cortex when later recognizing words initially learned while seeing gesture, implicating the motor cortex in learning from seeing gesture.

Another way that perceiving gesture might have an impact on learning is through its ability to integrate with speech. Children are more likely to learn from a math lesson if the teacher provides one problem-solving strategy in speech simultaneously with a different, complementary strategy in gesture (S1+G2) than if the teacher provides the same two strategies in speech (S1→S2), which, of course, must be produced sequentially (Singer & Goldin-Meadow, 2005). Moreover, it is gesture’s ability to be produced simultaneously with speech that appears to promote learning. Children are more likely to learn from the math lesson if the gesture strategy and the speech strategy occur at the same time (S1+G2) than if the speech strategy occurs first, followed by the gesture strategy (S1→G2). In other words, the benefit of simultaneous speech+gesture instruction disappears when the two strategies are presented sequentially rather than simultaneously in time (Congdon et al., 2016). A question for future work is whether learning through action will also be affected by timing—that is, will learning differ when an action problem-solving strategy is presented simultaneously with speech, compared to when the same action strategy is presented sequentially with speech? We suspect that this is yet another area where learning via gesture will differ from learning via action.

Part 4. Open questions and areas for future research

Thus far, we have shown that, although gesture may be an effective learning tool, at least in part, because it is a type of action, it is the fact that gesture is abstracted action, or representational action, that likely gives rise to its far-reaching learning outcomes. Viewing gesture as representational action explains many of the benefits gesture confers in instruction, and also may explain cases where using gesture in instruction is sub-optimal. For example, gesture instruction is less useful than action instruction for 2-year-olds (Novack et al., 2015), likely because, at this young age, children are only beginning to be able to decode representational forms. Gesture instruction has also been shown to be less useful than action instruction in children with a rudimentary understanding of a concept (e.g., Congdon & Levine, 2016), raising the possibility that a learner’s initial understanding of a task affects that learner’s ability to profit from a lesson on the task containing representational action. In the final section, we explore open questions of this sort, and discuss how their answers can inform the proposed framework.

One major topic that we have touched on in this paper, but that would benefit from additional research, is the relative effect of producing gesture vs. perceiving gesture. We have provided evidence suggesting that gesture’s functions arise from its status as representational action both for the producer of gesture and for the perceiver of gesture. Thus, we believe that our framework can be applied to both situations. However, the magnitude of gesture’s effects may not be identical for doing vs. seeing gesture (see Goldin-Meadow, et al., 2012). Moreover, there might be effects on thinking and learning that depend on whether a person is perceiving or producing a gesture. For example, gesture’s ability to support learning, retention, and generalization may depend on whether the gesture is produced or perceived. When children are shown a gesture that follows speech and is thus produced on its own, they do no better after instruction than when the same information is displayed entirely in speech (Congdon et al., 2016). In other words, learning from a seen gesture may depend on its being produced simultaneously with speech. In contrast, when children are told to produce a gesture, they profit from that instruction (Brooks & Goldin-Meadow, 2015) and retain what they have learned (Cook, et al., 2008) even when the gesture is produced on its own without speech. Learning from a produced gesture does not seem to depend on its being produced along with speech. Producing vs. perceiving gesture might then function through distinct mechanisms, although we suggest that gesture’s status as a representational form is still essential to both. Additional studies that directly compare learning from seeing vs. doing gesture are needed to determine whether the mechanisms that underlie these two processes are the same or different, and whether seeing vs. doing gesture interacts with interpreting gesture as representational action. For example, it may be easier to think of a gesture as representational when producing it (even if it’s a novel action) than when seeing someone else produce the gesture.

A related open question is whether movement is categorized as gesture in the same way for perceiving vs. producing movement. We reviewed evidence about when perceivers of a movement see the movement as representational (Novack et al., 2016). However, it is unclear whether the same features lead producers of a movement to see the movement as representational. This question is particularly relevant in tasks where learners are taught to produce movements during a lesson (e.g., Goldin-Meadow et al., 2009). These movements are meaningless to the learner at the beginning of the lesson. The question is whether the movements become meaningful, and therefore representational, during the lesson and, if so, when? Do children think of these hand movements as “gesture” when they are initially taught them, or do they think of them first as “movement-for-its-own sake” and only gradually come to see the movements as “gesture” as their conceptual understanding of the lesson shifts? If the process is gradual, might there be markers or features within the movement itself that an observer could use to determine when a rote movement has become a true gesture?

This possibility brings into focus whether learners need to be aware of the representational status of a gesture in order to benefit from that gesture during instruction. Although gesture-training studies find that, on average, instruction with gesture supports learning better than instruction without gesture (see Novack & Goldin-Meadow, 2015, for review), there is always variability in learner outcomes. Might a learner’s ability to profit from a gestural movement be related to that learner’s ability to categorize that movement as meaningful? Perhaps only learners who see a movement as gesture will benefit from incorporating that movement into instruction. Alternatively, learners may be able to benefit from gesture in instruction without being explicitly aware of its representational properties. Thomas and Lleras (2009) found that adults asked to produce arm movements that were consistent with the solution to an unrelated problem were more likely to subsequently solve the problem, compared to adults asked to produce arm movements that were inconsistent with the solution to the problem (see also Brooks & Goldin-Meadow, 2015, for similar evidence in children). Importantly, these adults were not aware of the link between the arm movements and the problem-solving task (they were told that the arm movements were for “exercise breaks”). Thus, at least in some cases, learners do not need to consciously see a movement as meaningful in order to learn from it.

Another open question related to the issue of categorizing movement as gesture or instrumental action is whether there are in-between cases. Clark (1996) has identified a class of movements called demonstrations—actions produced with the intention of showing something to someone. For example, if a mother were to show her child how to open a jar, she could hold the jar out in front of the child, twist open the lid in an exaggerated manner, and then put the lid back on the jar, handing the jar to the child to try the action himself. This object-focused movement has elements of an instrumental action—the mother’s hands directly interact with the object and cause a physical change. However, the movement also has obvious elements of representational actions—in the end, the jar is not open and the movement is clearly performed for communicative (as opposed to purely instrumental) purposes. As another example, consider “hold-ups”—gestures in which someone holds up an object to display it to someone else (e.g., a child holds up her bottle to draw it to her mother’s attention). Hold-ups have some aspects of gesture—they are intended to communicate and are like deictic pointing gestures in that they indicate a particular object. But they also have aspects of instrumental actions—they are produced directly on objects. Developmentally, hold-ups tend to emerge before pointing gestures (Bates, Camaioni, & Volterra, 1975), lending credence to the idea that hold-ups may not be as representational as empty-handed gestures. The important question from our point of view is whether hold-ups function like gestures for the child. It turns out that they do, at least in one sense—they predict the onset of various aspects of spoken language. For example, hold-ups have been counted as deictic gestures in studies finding that early gesture predicts the size of a child’s subsequent spoken vocabulary (Rowe & Goldin-Meadow, 2009), the introduction of particular lexical items into a child’s spoken vocabulary (Iverson & Goldin-Meadow, 2005), and the developmental onset of noun phrases (Cartmill, Hunsicker & Goldin-Meadow, 2014).

In terms of developmental questions, although we have reviewed evidence exploring the features that encourage adults to view a hand movement as a representational action (Novack et al., 2016), it is an open question as to what infants think about the movements they see. Infants have a special ability to process actions on objects (e.g., Woodward, 1998), but how do infants process actions off objects, that is, gestures? One possibility is that infants think of gestures as movements for their own sake—seeing them as mere handwaving (cf. Schachner & Carey, 2013). Another possibility is that, despite the fact that infants may not be able to correctly interpret the meaning of a gesture, they can nonetheless categorize the gesture as a representational act. Just as infants seem to know that speech can communicate even if they cannot understand that speech (Vouloumanos, Onishi & Pogue, 2012), infants might know that gestures are meant to represent without being able to understand what they represent. Knowing what infants think about gesture (and whether they categorize it as a unique form) would contribute to our understanding of the development of gesture processing.

Finally, with respect to how gestures affect learning, additional research is needed to determine when in the learning process, and for which content domains, gesture instruction is particularly helpful. Gesture’s status as representational action might mean that it is most useful for some content domains, and not others. For example, gesture has been shown to support generalization and retention in math instruction. But math is a relatively abstract subject. Gesture may be less useful in domains grounded in physical experience, such as physics, where direct action on objects has been found to support learning (Kontra, et al., 2015; generalization has yet to be studied in these domains). There are, however, domains that are grounded in physical experience, such as dance, where practicing gesture-like movements has been found to promote learning better than practicing the actual dance movements (Kirsh, 2010; 2012). Dancers often “mark” their movements, a form of practice in dance that involves producing attenuated versions of dance moves. Marking is comparable to gesturing in that the movements are produced to represent other movements, rather than to have a direct effect on the world (i.e., they represent movements that will, in the end, be seen by an audience, see Kirsh, 2012). Dancers use marking when practicing on their own, as well as when communicating with other dancers (Kirsh, 2010), and this marking seems to function like gesture in that it promotes learning, even though dance is grounded in physical action.

Conclusions

In this paper, we present a framework for understanding gesture’s function. We propose that gesture has unique effects on thinking and learning because of its status as representational action. More specifically, the fact that gesture is representational action, and not instrumental action, is critical to its capacity to support generalization beyond the specific and retention over time. Our proposal is agnostic about whether gesture’s role in learning depends on its being embodied, and about whether the Gesture-as-Simulated-Action (GSA) framework can account for how gesture is produced, that is, its mechanism. The proposal is designed instead to account for why gesture is produced, that is, for the functions it serves, particularly in a learning context. Our proposal is thus not inconsistent with the mechanistic account of gesture production proposed in the GSA framework (Hostetter & Alibali, 2008). But it does offer another perspective, a functional perspective, that highlights the differences between gestures and other types of actions.

Although, in some cases, mechanism and function are critically related, in other cases, they are not. For example, consider an alligator’s nightly sojourn into the Mississippi River. The functional explanation for this phenomenon is that the alligator is cold-blooded and, in the evening, the river water is warmer than the air; entering the water at night serves the function of helping the alligator maintain its body temperature during the overnight hours. However, the mechanism by which this behavior comes about has nothing to do with temperature and depends instead on changes in sunlight. The alligator heads for the water in response to fading light, a relationship that was discovered by experimentally dissociating temperature from light—alligators approach the water as the light fades whether or not the temperature changes, and do not approach the water as the temperature drops unless the light fades (Lang, 1976). Thus, the temperature-regulation function of the behavior (going into water in order to regulate temperature) is different from its light-sensitive mechanism (going into the water in response to changes in light). We therefore cannot assume that the function of a phenomenon is the complement of its mechanism, and must explore function in its own right. Our hope is that by expanding the investigation of gesture to include a framework built around its functions, we will come to a more complete understanding of how and why we move our hands when we talk.

Acknowledgments

This work was supported by NIH grant number R01-HD047450 and NSF grant number BCS-0925595 to Goldin-Meadow, NSF grant number SBE-0541957 (Spatial Intelligence and Learning Center, Goldin-Meadow is a co-PI), and a grant from the Institute of Education Sciences (R305 B090025) to S. Raudenbush in support of Novack. SGM thanks Bill Wimsatt and Martha McClintock for introducing her to the distinction between mechanism and function, and for convincing her of its importance in understanding scientific explanation, back in 1978 when they taught their first Mind course together in the Social Sciences Division at the University of Chicago.

References

  1. Acredolo LP, Goodwyn SW. Symbolic gesturing in language development: A case study. Human Development. 1985;28:40–49. doi: 10.1159/000272934. [Google Scholar]
  2. Acredolo L, Goodwyn S. Symbolic gesturing in normal infants. Child Development. 1988;59:450–466. [PubMed] [Google Scholar]
  3. Ambrosini E, Reddy V, de Looper A, Costantini M, Lopez B, Sinigaglia C. Looking Ahead: Anticipatory Gaze and Motor Ability in Infancy. PLoS ONE. 2013;8:e67916. doi: 10.1371/journal.pone.0067916. [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Baldwin DA, Baird JA. Discerning intentions in dynamic human action. Trends in Cognitive Sciences. 2001;5:171–178. doi: 10.1016/s1364-6613(00)01615-6. [DOI] [PubMed] [Google Scholar]
  5. Bates E. Language and context: The acquisition of pragmatics. Vol. 13. New York: Academic Press; 1976. [Google Scholar]
  6. Bates E, Camaioni L, Volterra V. The Acquisition of Performatives Prior to Speech. Merrill-palmer Quarterly of Behavior and Development. 1975;21:205–226. [Google Scholar]
  7. Behne T, Carpenter M, Tomasello M. Young Children Create Iconic Gestures to Inform Others. Developmental Psychology. 2014;50:2049–2060. doi: 10.1037/a0037224. [DOI] [PubMed] [Google Scholar]
  8. Behne T, Liszkowski U, Carpenter M, Tomasello M. Twelve-month-olds’ comprehension and production of pointing. British Journal of Developmental Psychology. 2012;30:359–375. doi: 10.1111/j.2044-835X.2011.02043.x. [DOI] [PubMed] [Google Scholar]
  9. Beilock SL, Goldin-Meadow S. Gesture changes thought by grounding it in action. Psychological Science. 2010;21:1605–1610. doi: 10.1177/0956797610385353. [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Beilock SL, Lyons IM, Mattarella-Micke A, Nusbaum HC, Small SL. Sports experience changes the neural processing of action language. Proceedings of the National Academy of Sciences of the United States of America. 2008;105:13269–13273. doi: 10.1073/pnas.0803424105. [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Bernardis P, Gentilucci M. Speech and gesture share the same communication system. Neuropsychologia. 2006;44:178–190. doi: 10.1016/j.neuropsychologia.2005.05.007. [DOI] [PubMed] [Google Scholar]
  12. Bower GH, Rinck M. Goals as generators of activation in narrative understanding. In: Goldman SR, Graesser AC, van den Broek P, editors. Narrative Comprehension, Causality, and Coherence: Essays in Honor of Tom Trabasso. Erlbaum; 1999. pp. 111–134. [Google Scholar]
  13. Brooks N, Goldin-Meadow S. Moving to Learn: How Guiding the Hands Can Set the Stage for Learning. Cognitive Science. 2015 doi: 10.1111/cogs.12292. [DOI] [PubMed] [Google Scholar]
  14. Cartmill EA, Hunsicker D, Goldin-Meadow S. Pointing and naming are not redundant: Children use gesture to modify nouns before they modify nouns in speech. Developmental Psychology. 2014;50:1660–1666. doi: 10.1037/a0036003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  15. Casile A, Giese MA. Nonvisual motor training influences biological motion perception. Current Biology. 2006;16:69–74. doi: 10.1016/j.cub.2005.10.071. [DOI] [PubMed] [Google Scholar]
  16. Church RB, Garber P, Rogalski K. The role of gesture in memory and social communication. Gesture. 2007;7:137–158. [Google Scholar]
  17. Church RB, Kelly S, Holcombe D. Temporal synchrony between speech, action and gesture during language production. Language, Cognition and Neuroscience. 2014;29:345–354. [Google Scholar]
  18. Clark HH. Using language. Cambridge, UK: Cambridge University Press; 1996. [Google Scholar]
  19. Cook SW, Duffy RG, Fenn KM. Consolidation and transfer of learning after observing hand gesture. Child Development. 2013;84:1863–1871. doi: 10.1111/cdev.12097. [DOI] [PubMed] [Google Scholar]
  20. Cook SW, Mitchell Z, Goldin-Meadow S. Gesturing makes learning last. Cognition. 2008;106:1047–1058. doi: 10.1016/j.cognition.2007.04.010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Cook SW, Tanenhaus MK. Embodied communication: Speaker's gestures affect listeners' actions. Cognition. 2009;113:98–104. doi: 10.1016/j.cognition.2009.06.006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Cook SW, Yip TK, Goldin-Meadow S. Gestures, but not meaningless movements, lighten working memory load when explaining math. Language and cognitive processes. 2012;27:594–610. doi: 10.1080/01690965.2011.567074. [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Congdon EL, Levine SC. Learning to measure through action and gesture: children’s starting state matters. Manuscript submitted for publication; 2016. [DOI] [PubMed] [Google Scholar]
  24. Congdon EL, Novack MA, Brooks NB, Hemani-Lopez N, O’Keefe L, Goldin-Meadow S. Better Together: Simultaneous Presentation of Speech and Gesture in Math Instruction Supports Generalization and Retention. Manuscript submitted for publication; 2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  25. DeLoache J. Rapid change in the symbolic functioning of very young children. Science. 1987;238:1556–1557. doi: 10.1126/science.2446392. [DOI] [PubMed] [Google Scholar]
  26. DeLoache JS. Early Understanding and Use of Symbols: The Model Model. Current Directions in Psychological Science. 1995;4:109–113. [Google Scholar]
  27. Driskell JE, Radtke PH. The effect of gesture on speech production and comprehension. Human Factors: The Journal of the Human Factors and Ergonomics Society. 2003;45:445–454. doi: 10.1518/hfes.45.3.445.27258. [DOI] [PubMed] [Google Scholar]
  28. Filippi C, Woodward AL. Action experience changes attention to kinematic cues. Frontiers in Psychology. 2016;7:19. doi: 10.3389/fpsyg.2016.00019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  29. Gerson SA, Woodward AL. Learning From Their Own Actions: The Unique Effect of Producing Actions on Infants’ Action Understanding. Child Development. 2014;85:264–277. doi: 10.1111/cdev.12115. [DOI] [PMC free article] [PubMed] [Google Scholar]
  30. Gibson E, Piantadosi ST, Brink K, Bergen L, Lim E, Saxe R. A noisy-channel account of crosslinguistic word order variation. Psychological Science. 2013;24:1079–1088. doi: 10.1177/0956797612463705. [DOI] [PubMed] [Google Scholar]
  31. Goldin-Meadow S. Hearing gesture: How our hands help us think. Cambridge, MA: Harvard University Press; 2003. [Google Scholar]
  32. Goldin-Meadow S. How gesture helps children learn language. In: Arnon I, Tice M, Kurumada C, Estigarribia B, editors. Language in interaction: Studies in honor of Eve V. Clark. Amsterdam: John Benjamins; 2014. pp. 157–171. [Google Scholar]
  33. Goldin-Meadow S, Beilock SL. Action’s Influence on Thought: The Case of Gesture. Perspectives on Psychological Science. 2010;5:664–674. doi: 10.1177/1745691610388764. [DOI] [PMC free article] [PubMed] [Google Scholar]
  34. Goldin-Meadow S, Brentari D. Gesture, sign and language: The coming of age of sign language and gesture studies. Behavioral and Brain Sciences (in press; published online first). doi: 10.1017/S0140525X15001247. [DOI] [PMC free article] [PubMed] [Google Scholar]
  35. Goldin-Meadow S, Butcher C. Pointing toward two-word speech in young children. In: Kita S, editor. Pointing: Where language, culture, and cognition meet. Mahwah, NJ: Erlbaum Associates; 2003. pp. 85–107. [Google Scholar]
  36. Goldin-Meadow S, Cook SW, Mitchell ZA. Gesturing gives children new ideas about math. Psychological Science. 2009;20:267–272. doi: 10.1111/j.1467-9280.2009.02297.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  37. Goldin-Meadow S, Levine SL, Zinchenko E, Yip TK-Y, Hemani N, Factor L. Doing gesture promotes learning a mental transformation task better than seeing gesture. Developmental Science. 2012;15:876–884. doi: 10.1111/j.1467-7687.2012.01185.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. Goldin-Meadow S, McNeill D, Singleton J. Silence is liberating: Removing the handcuffs on grammatical expression in the manual modality. Psychological Review. 1996;103:34–55. doi: 10.1037/0033-295x.103.1.34. [DOI] [PubMed] [Google Scholar]
  39. Goldin-Meadow S, Nusbaum H, Kelly SD, Wagner S. Explaining math: gesturing lightens the load. Psychological Science. 2001;12:516–522. doi: 10.1111/1467-9280.00395. [DOI] [PubMed] [Google Scholar]
  40. Goldin-Meadow S, Sandhofer CM. Gesture conveys substantive information about a child's thoughts to ordinary listeners. Developmental Science. 1999;2:67–74. [Google Scholar]
  41. Goldin-Meadow S, So WC, Ozyurek A, Mylander C. The natural order of events: How speakers of different languages represent events nonverbally. Proceedings of the National Academy of Sciences. 2008;105:9163–9168. doi: 10.1073/pnas.0710060105. [DOI] [PMC free article] [PubMed] [Google Scholar]
  42. Goodrich W, Hudson Kam CL. Co-speech gesture as input in verb learning. Developmental Science. 2009;12:81–87. doi: 10.1111/j.1467-7687.2008.00735.x. [DOI] [PubMed] [Google Scholar]
  43. Graham JA, Heywood S. The effects of elimination of hand gestures and of verbal codability on speech performance. European Journal of Social Psychology. 1975;5:189–195. [Google Scholar]
  44. Hall ML, Ferreira VS, Mayberry RI. Investigating constituent order change with elicited pantomime: A functional account of SVO emergence. Cognitive Science. 2013;38:943–972. doi: 10.1111/cogs.12105. [DOI] [PMC free article] [PubMed] [Google Scholar]
  45. Hodges LE, Özçalışkan Ş, Williamson RA. How Early Do Children Understand Different Types of Iconicity in Gesture? In: Proceedings of the 39th Boston University Conference on Language Development. Somerville, MA: Cascadilla Press; 2015. [Google Scholar]
  46. Hostetter AB. When do gestures communicate? A meta-analysis. Psychological Bulletin. 2011;137:297–315. doi: 10.1037/a0022128. [DOI] [PubMed] [Google Scholar]
  47. Hostetter AB, Alibali MW. Visible embodiment: gestures as simulated action. Psychonomic Bulletin & Review. 2008;15:495–514. doi: 10.3758/pbr.15.3.495. [DOI] [PubMed] [Google Scholar]
  48. Iverson JM, Capirci O, Caselli MC. From communication to language in two modalities. Cognitive Development. 1994;9:23–43. [Google Scholar]
  49. Iverson JM, Goldin-Meadow S. Gesture paves the way for language development. Psychological Science. 2005;16:368–371. doi: 10.1111/j.0956-7976.2005.01542.x. [DOI] [PubMed] [Google Scholar]
  50. James KH. Sensori-motor experience leads to changes in visual processing in the developing brain. Developmental Science. 2010;13:279–288. doi: 10.1111/j.1467-7687.2009.00883.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  51. James KH, Atwood TP. The role of sensorimotor learning in the perception of letter-like forms: tracking the causes of neural specialization for letters. Cognitive Neuropsychology. 2009;26:91–110. doi: 10.1080/02643290802425914. [DOI] [PubMed] [Google Scholar]
  52. James KH, Swain SN. Only self-generated actions create sensori-motor systems in the developing brain. Developmental Science. 2011;14:673–678. doi: 10.1111/j.1467-7687.2010.01011.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  53. Kendon A. Gesture: Visible Action as Utterance. Cambridge University Press; 2004. [Google Scholar]
  54. Kendon A. Gesticulation and speech: Two aspects of the process of utterance. In: Key MR, editor. The relationship of verbal and nonverbal communication. The Hague, the Netherlands: Mouton; 1980. pp. 207–227. [Google Scholar]
  55. Kelly SD. Broadening the units of analysis in communication: Speech and nonverbal behaviours in pragmatic comprehension. Journal of Child Language. 2001;28:325–349. doi: 10.1017/s0305000901004664. [DOI] [PubMed] [Google Scholar]
  56. Kelly SD, Barr DJ, Church RB, Lynch K. Offering a hand to pragmatic understanding: The role of speech and gesture in comprehension and memory. Journal of memory and Language. 1999;40:577–592. [Google Scholar]
  57. Kelly SD, Healy M, Özyürek A, Holler J. The processing of speech, gesture, and action during language comprehension. Psychonomic Bulletin & Review. 2014;22:517–523. doi: 10.3758/s13423-014-0681-7. [DOI] [PubMed] [Google Scholar]
  58. Kelly SD, Özyürek A, Maris E. Two sides of the same coin: Speech and gesture manually interact to enhance comprehension. Psychological Science. 2010;21:260–267. doi: 10.1177/0956797609357327. [DOI] [PubMed] [Google Scholar]
  59. Kirsh D. Thinking with the body. In: Ohlsson S, Catrambone R, editors. Proceedings of the 32nd Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society; 2010. pp. 2864–2869. [Google Scholar]
  60. Kirsh D. How marking in dance constitutes thinking with the body. In: Fusaroli R, Granelli T, Paolucci C, editors. The External Mind: Perspectives on Mediation, Distribution and Situation in Cognition and Semiosis. 2012. pp. 112–113. [Google Scholar]
  61. Kita S. Cross-cultural variation of speech-accompanying gesture: A review. Language and Cognitive Processes. 2009;24(2):145–167. [Google Scholar]
  62. Kita S. How representational gestures help speaking. In: McNeill D, editor. Language and gesture. Cambridge, UK: Cambridge University Press; 2000. pp. 162–185. [Google Scholar]
  63. Kita S, Özyürek A. What does cross-linguistic variation in semantic coordination of speech and gesture reveal?: Evidence for an interface representation of spatial thinking and speaking. Journal of Memory and Language. 2003;48:16–32. [Google Scholar]
  64. Kimura D. Manual activity during speaking: I. Right-handers. Neuropsychologia. 1973;11:45–50. doi: 10.1016/0028-3932(73)90063-8. [DOI] [PubMed] [Google Scholar]
  65. Kontra C, Goldin-Meadow S, Beilock SL. Embodied Learning Across the Life Span. Topics in Cognitive Science. 2012;4:731–739. doi: 10.1111/j.1756-8765.2012.01221.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  66. Kontra C, Lyons DJ, Fischer SM, Beilock SL. Physical Experience Enhances Science Learning. Psychological Science. 2015;26:737–749. doi: 10.1177/0956797615569355. [DOI] [PubMed] [Google Scholar]
  67. Krehm M, Onishi KH, Vouloumanos A. I see your point: infants under 12 months understand that pointing is communicative. Journal of Cognition and Development. 2014;15:527–538. [Google Scholar]
  68. Lang JW. Amphibious behavior of Alligator mississippiensis: Roles of a circadian rhythm and light. Science. 1976;191:575–577. doi: 10.1126/science.1251194. [DOI] [PubMed] [Google Scholar]
  69. LeBarton ES, Goldin-Meadow S, Raudenbush S. Experimentally-induced increases in early gesture lead to increases in spoken vocabulary. Journal of Cognition and Development. 2015;16:199–220. doi: 10.1080/15248372.2013.858041. [DOI] [PMC free article] [PubMed] [Google Scholar]
  70. Loehr D. Aspects of rhythm in gesture and speech. Gesture. 2007;7:179–214. [Google Scholar]
  71. Longcamp M, Anton JL, Roth M, Velay JL. Visual presentation of single letters activates a premotor area involved in writing. NeuroImage. 2003;19:1492–1500. doi: 10.1016/s1053-8119(03)00088-0. [DOI] [PubMed] [Google Scholar]
  72. Lucca KR, Wilborn PM. Communicating to Learn: Infants’ Pointing Gestures Facilitate Fast Mapping. Manuscript submitted for publication; 2016. [Google Scholar]
  73. Macedonia M, Muller K, Friederici AD. The impact of iconic gestures on foreign language word learning and its neural substrate. Human Brain Mapping. 2011;32:982–998. doi: 10.1002/hbm.21084. doi.org/10.1002/hbm.21084. [DOI] [PMC free article] [PubMed] [Google Scholar]
  74. Marentette P, Nicoladis E. Preschoolers’ interpretations of gesture: label or action associate? Cognition. 2011;121:386–399. doi: 10.1016/j.cognition.2011.08.012. [DOI] [PubMed] [Google Scholar]
  75. McNeil NM, Uttal DH, Jarvin L, Sternberg RJ. Should you show me the money? Concrete objects both hurt and help performance on mathematics problems. Learning and Instruction. 2009;19:171–184. [Google Scholar]
  76. McNeil N, Alibali M, Evans JL. The role of gesture in children's language comprehension: Now they need it, now they don't. Journal of Nonverbal Behavior. 2000;24:131–150. [Google Scholar]
  77. McNeill D. Hand and Mind: What Gestures Reveal about Thought. University of Chicago Press; 1992. [Google Scholar]
  78. Mix KS. Spatial tools for mathematical thought. In: Mix KS, Smith LB, Gasser M, editors. Space and Language. New York: Oxford University Press; 2010. [Google Scholar]
  79. Namy LL. Recognition of iconicity doesn’t come for free. Developmental Science. 2008;11:841–846. doi: 10.1111/j.1467-7687.2008.00732.x. [DOI] [PubMed] [Google Scholar]
  80. Namy LL, Campbell AL, Tomasello M. The Changing Role of Iconicity in Non-Verbal Symbol Learning: A U-Shaped Trajectory in the Acquisition of Arbitrary Gestures. Journal of Cognition and Development. 2004;5:37–57. [Google Scholar]
  81. Newell A, Simon HA. Human problem-solving. Englewood Cliffs, NJ: Prentice-Hall; 1972. [Google Scholar]
  82. Nicoladis E, Mayberry RI, Genesee F. Gesture and early bilingual development. Developmental Psychology. 1999;35:514. doi: 10.1037//0012-1649.35.2.514. [DOI] [PubMed] [Google Scholar]
  83. Novack MA, Congdon E, Hemani-Lopez N, Goldin-Meadow S. From action to abstraction: Using the hands to learn math. Psychological Science. 2014;25:903–910. doi: 10.1177/0956797613518351. [DOI] [PMC free article] [PubMed] [Google Scholar]
  84. Novack MA, Filippi C, Goldin-Meadow S, Woodward A. Actions speak louder than gestures when you are 2-years-old. Manuscript submitted for publication; 2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  85. Novack M, Goldin-Meadow S. Learning from Gesture: How Our Hands Change Our Minds. Educational Psychology Review. 2015;27:405–412. doi: 10.1007/s10648-015-9325-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  86. Novack MA, Goldin-Meadow S, Woodward A. Learning from gesture: How early does it happen? Cognition. 2015;142:138–147. doi: 10.1016/j.cognition.2015.05.018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  87. Novack MA, Wakefield EM, Goldin-Meadow S. What makes a movement a gesture? Cognition. 2016;146:339–348. doi: 10.1016/j.cognition.2015.10.014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  88. Özçalışkan Ş, Goldin-Meadow S. Do parents lead their children by the hand? Journal of Child Language. 2005;32:481–505. doi: 10.1017/s0305000905007002. [DOI] [PubMed] [Google Scholar]
  89. Özçalışkan Ş, Goldin-Meadow S. Is there an iconic gesture spurt at 26 months? In: Integrating gestures: The interdisciplinary nature of gesture. Amsterdam, NL: John Benjamins; 2011. [Google Scholar]
  90. Özçalışkan Ş, Lucero C, Goldin-Meadow S. Does language shape silent gesture? Cognition. 2016;148:10–18. doi: 10.1016/j.cognition.2015.12.001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  91. Ping RM, Goldin-Meadow S. Hands in the air: using ungrounded iconic gestures to teach children conservation of quantity. Developmental Psychology. 2008;44:1277–1287. doi: 10.1037/0012-1649.44.5.1277. [DOI] [PMC free article] [PubMed] [Google Scholar]
  92. Ping R, Goldin-Meadow S, Beilock S. Understanding gesture: Is the listener's motor system involved? Journal of Experimental Psychology: General. 2014;143:195–204. doi: 10.1037/a0032246. [DOI] [PMC free article] [PubMed] [Google Scholar]
  93. Prinz W. Perception and action planning. European Journal of Cognitive Psychology. 1997;9:129–154. [Google Scholar]
  94. Rauscher F, Krauss RM, Chen Y. Gesture, speech, and lexical access: The role of lexical movements in speech production. Psychological Science. 1996;7:226–231. [Google Scholar]
  95. Rowe ML, Goldin-Meadow S. Differences in early gesture explain SES disparities in child vocabulary size at school entry. Science. 2009;323:951–953. doi: 10.1126/science.1167025. [DOI] [PMC free article] [PubMed] [Google Scholar]
  96. Schachner A, Carey S. Reasoning about “irrational” actions: When intentional movements cannot be explained, the movements themselves are seen as the goal. Cognition. 2013;129:309–327. doi: 10.1016/j.cognition.2013.07.006. [DOI] [PubMed] [Google Scholar]
  97. Searle JR. The intentionality of intention and action. Inquiry. 1980;22:253–280. [Google Scholar]
  98. Singer MA, Goldin-Meadow S. Children learn when their teacher’s gestures and speech differ. Psychological Science. 2005;16:85–89. doi: 10.1111/j.0956-7976.2005.00786.x. [DOI] [PubMed] [Google Scholar]
  99. Sommerville JA, Woodward AL, Needham A. Action experience alters 3-month-old infants’ perception of others' actions. Cognition. 2005;96:B1–B11. doi: 10.1016/j.cognition.2004.07.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  100. Sueyoshi A, Hardison DM. The Role of Gestures and Facial Cues in Second Language Listening Comprehension. Language Learning. 2005;55:661–699. [Google Scholar]
  101. Thomas LE, Lleras A. Swinging into thought: Directed movement guides insight in problem solving. Psychonomic Bulletin & Review. 2009;16:719–723. doi: 10.3758/PBR.16.4.719. [DOI] [PubMed] [Google Scholar]
  102. Tolar TD, Lederberg AR, Gokhale S, Tomasello M. The development of the ability to recognize the meaning of iconic signs. Journal of Deaf Studies and Deaf Education. 2008;13:225–240. doi: 10.1093/deafed/enm045. [DOI] [PubMed] [Google Scholar]
  103. Tomasello M, Carpenter M, Liszkowski U. A new look at infant pointing. Child development. 2007;78:705–722. doi: 10.1111/j.1467-8624.2007.01025.x. [DOI] [PubMed] [Google Scholar]
  104. Trabasso T, Nickels M. The development of goal plans of action in the narration of a picture story. Discourse Processes. 1992;15:249–275. [Google Scholar]
  105. Trofatter C, Kontra C, Beilock S, Goldin-Meadow S. Gesturing has a larger impact on problem-solving than action, even when action is accompanied by words. Language, Cognition and Neuroscience. 2014;30:251–260. doi: 10.1080/23273798.2014.905692. [DOI] [PMC free article] [PubMed] [Google Scholar]
  106. Uttal D, Scudder K, DeLoache J. Manipulatives as symbols: A new perspective on the use of concrete objects to teach mathematics. Journal of Applied Developmental Psychology. 1997;18:37–54. [Google Scholar]
  107. Valenzeno L, Alibali MW, Klatzky R. Teachers’ gestures facilitate students’ learning: A lesson in symmetry. Contemporary Educational Psychology. 2003;28:187–204. [Google Scholar]
  108. Vouloumanos A, Onishi KH, Pogue A. Twelve-month-old infants recognize that speech can communicate unobservable intentions. Proceedings of the National Academy of Sciences of the United States of America. 2012;109:12933–12937. doi: 10.1073/pnas.1121057109. [DOI] [PMC free article] [PubMed] [Google Scholar]
  109. Wagner S, Nusbaum H, Goldin-Meadow S. Probing the mental representation of gesture: Is handwaving spatial? Journal of Memory and Language. 2004;50:395–407. [Google Scholar]
  110. Wakefield EM, Congdon EL, Novack MA, Goldin-Meadow S, James K. Learning Math by Hand: The neural effects of gesture-based instruction in 8-year-old children. Manuscript submitted for publication; 2016. [DOI] [PubMed] [Google Scholar]
  111. Wilson M. Six views of embodied cognition. Psychonomic Bulletin & Review. 2002;9:625–636. doi: 10.3758/bf03196322. [DOI] [PubMed] [Google Scholar]
  112. Woodward AL. Infants selectively encode the goal object of an actor’s reach. Cognition. 1998;69:1–34. doi: 10.1016/s0010-0277(98)00058-4. [DOI] [PubMed] [Google Scholar]
  113. Woodward AL, Guajardo JJ. Infants’ understanding of the point gesture as an object-directed action. Cognitive Development. 2002;17:1061–1084. [Google Scholar]
  114. Yoon JM, Johnson MH, Csibra G. Communication-induced memory biases in preverbal infants. Proceedings of the National Academy of Sciences. 2008;105:13690–13695. doi: 10.1073/pnas.0804388105. [DOI] [PMC free article] [PubMed] [Google Scholar]
  115. Zacks JM, Tversky B, Iyer G. Perceiving, remembering, and communicating structure in events. Journal of Experimental Psychology. General. 2001;130:29–58. doi: 10.1037/0096-3445.130.1.29. [DOI] [PubMed] [Google Scholar]
