Abstract
Speakers move their hands when they talk––they gesture. These gestures can signal whether the speaker is ready to learn a particular task and, in this sense, provide a window onto the speaker’s knowledge. But gesture can do more than reflect knowledge. It can play a role in changing knowledge in at least two ways: indirectly through its effects on communication with the learner, and directly through its effects on the learner’s cognition. Gesturing is, however, not limited to learners. Speakers who are proficient in a task also gesture. Their gestures have a different relation to speech than the gestures that novices produce, and seem to support cognition rather than change it. Gesturing can thus serve as a tool for thinking and for learning.
1. Introduction
When people talk, they move their hands (Kendon 1980; McNeill 1992), across all cultures (Feyereisen and de Lannoy 1991) and ages (Iverson and Goldin-Meadow 1998a) –– even when they have been blind from birth and have never seen anyone else gesture (Iverson and Goldin-Meadow 1998b). These hand movements, often called gestures, are not mere handwaving: they convey substantive information that is accessible to listeners (Beattie and Shovelton 1999; Cassell et al. 1999; Riseborough 1981). Indeed, the information conveyed in gesture is often not conveyed anywhere in the speech that it accompanies (Goldin-Meadow 2003).
Previous work has found that, when explaining their solutions to a problem, speakers who produce gestures that convey different information from speech (gesture-speech mismatches) are more ready to profit from instruction on that problem than speakers whose gestures always convey the same information as their speech (gesture-speech matches). The gestures that learners produce thus reflect the state of their knowledge. But recent research has shown that gesture can do more than reflect knowledge––it can play a role in changing that knowledge. The purpose of this paper is to review findings on gesture as a marker of readiness to learn and as a vehicle for promoting learning, and then to explore the conditions under which gesture does, or does not, promote learning.
1.1 Gesture can identify who is ready to learn
Consider a child asked to judge whether water poured from a tall, thin container into a short, fat container is still the same amount after the pouring. Children who are convinced that the answer is “no” but justify that answer by producing gestures that convey different information from their speech (e.g., saying, “they’re different because this one’s tall and that one’s short,” while producing a thin gesture followed by a wide gesture) are particularly likely to profit from instruction in conservation of quantity––more likely than children who justify their “no” answers by producing gestures that convey the same information as their speech (e.g., again saying, “they’re different because this one’s tall and that one’s short,” but while producing a tall gesture and a short gesture) (Church and Goldin-Meadow 1986). Thus, 5- to 8-year-old non-conserving children who produce gesture-speech mismatches when asked to explain how they solved conservation problems are more ready to learn about conservation than children who produce only gesture-speech matches.
As another example, 9- to 10-year-old children who solve problems such as 5+3+6=__+6 incorrectly but justify their incorrect solution by producing gestures that convey a different problem-solving strategy from their speech (e.g., saying, “I added the 5, the 3, and the 6, and put 14 in the blank,” an add-to-equal-sign strategy, while pointing at the 5, the 3, the 6 on the left side of the equation, and the 6 on the right side of the equation, an add-all-numbers strategy) are particularly likely to profit from instruction in mathematical equivalence––more likely than children who justify their incorrect answers by producing gestures that convey the same information as their speech (e.g., again saying, “I added the 5, the 3, and the 6, and put 14 in the blank,” while pointing at the 5, the 3, and the 6 on the left side of the equation, i.e., producing an add-to-equal-sign strategy in both speech and gesture) (Perry et al. 1988; Alibali and Goldin-Meadow 1993).
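To make the arithmetic behind these strategies concrete, the sketch below (an illustration, not part of the original studies' materials or coding scheme) computes the answer each strategy yields for the problem 5+3+6=__+6; only the equalizer answer of 8 is correct.

```python
# A minimal sketch of the problem-solving strategies described above, applied
# to the problem 5 + 3 + 6 = __ + 6. Only the equalizer answer is correct.

def add_to_equal_sign(left, right):
    """Add every number to the left of the equal sign."""
    return sum(left)                      # 5 + 3 + 6 = 14

def add_all_numbers(left, right):
    """Add every number in the problem."""
    return sum(left) + sum(right)         # 5 + 3 + 6 + 6 = 20

def equalizer(left, right):
    """Make the two sides equal: the blank is the left-hand sum minus the given right-side addend."""
    return sum(left) - sum(right)         # (5 + 3 + 6) - 6 = 8

left, right = [5, 3, 6], [6]              # 5 + 3 + 6 = __ + 6
print(add_to_equal_sign(left, right))     # 14 (incorrect)
print(add_all_numbers(left, right))       # 20 (incorrect)
print(equalizer(left, right))             # 8  (correct)
```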
This phenomenon is a general one, found in a variety of tasks and ages: toddlers learning to produce two-word sentences (Capirci et al. 1996; Goldin-Meadow and Butcher 2003; Iverson and Goldin-Meadow 2005; Iverson et al. 2008; Ozcaliskan and Goldin-Meadow 2005); 5- to 6-year-olds learning to mentally rotate objects (Ehrlich et al. 2006); 5-year-olds learning to balance blocks on a beam (Pine et al. 2004); and adults learning how gears work (Perry and Elder 1997).
When a speaker produces a gesture-speech mismatch, the information conveyed in gesture is, by definition, different from the information conveyed in the accompanying speech. Consider the child who produced an add-all-numbers strategy in gesture while giving an add-to-equal-sign strategy in speech. The add-all-numbers strategy was conveyed uniquely in gesture in that response. However, it is possible that this child is able to articulate the add-all-numbers strategy in speech, and does so in other responses. Alternatively, the information conveyed in gesture in a mismatch may be accessible only to gesture. If so, this child should not be able to articulate the add-all-numbers strategy in speech in any of his responses. Goldin-Meadow et al. (1993) explored these alternatives with respect to mathematical equivalence, and found that the strategies that children expressed in gesture in a mismatch were almost never found in speech on any of their responses. Thus, for children who are on the verge of learning mathematical equivalence, the information conveyed in gesture in a mismatch appears to be accessible only to gesture. The children are not able to verbalize this information and, in this sense, the information constitutes implicit knowledge for them.
2. Gesture can promote learning
The gestures that learners produce in a mismatch thus provide insight into their cognitive state––they reflect what the learner knows. But evidence is mounting that gesture goes beyond reflecting knowledge and plays a role in fostering knowledge. Gesture can play a role in learning in (at least) two ways: (1) If communication partners are able to glean information about a learner’s cognitive state from the gestures the learner produces, the partners may then alter the input they give the learner as a function of those gestures, perhaps providing just the right kind of input to facilitate learning. Gesture can thus play an indirect role in learning by influencing the kind of communicative input the learner receives. (2) Gesture can also play a more direct role in learning by altering the learner’s cognitive state. There is evidence that gesture can play both of these roles.
2.1. Gesture promotes learning through communication
The first step in making the argument that learners elicit different kinds of input as a function of the gestures they produce requires us to show that ordinary listeners, listeners who have not been trained to code gesture, are able to glean meaning from the spontaneous gestures that speakers produce. Several studies have found that listeners, both adults and children, can read the gestures produced by children participating in conservation and mathematical equivalence tasks. These effects have been found in adults and children observing child speakers on a videotape (Alibali et al. 1997; Goldin-Meadow et al. 1992; Kelly and Church 1997, 1998); in adults watching children and reacting to them on-line (Goldin-Meadow and Sandhofer 1999); and, most importantly, in adults and children interacting with one another in a naturalistic setting (Goldin-Meadow et al. 1999; Goldin-Meadow and Singer 2003). In short, listeners can read other people's gestures.
The second step in the argument is to show that adults change the input they give children as a function of the gestures that the children produce. Goldin-Meadow and Singer (2003; see also Goldin-Meadow, Kim and Singer 1999) asked teachers to interact individually with children who could not yet solve the mathematical equivalence problems. They found that the teachers gave different kinds of instruction to children who produced gesture-speech mismatches than to children who produced only gesture-speech matches. In particular, the teachers offered a wider variety of problem-solving strategies in speech to children who produced mismatches than to children who produced matches. Teachers also produced more mismatches of their own––typically containing two correct strategies, one in speech and the other in gesture––when teaching children who produced mismatches than when teaching children who produced matches. Thus, teachers do notice the gestures learners produce and they change their instruction accordingly.
The final step in the argument is to demonstrate that children profit from the input that their gestures elicit from teachers. Singer and Goldin-Meadow (2005) designed a mathematical equivalence lesson based on the instruction that teachers spontaneously gave children who produced mismatches. In particular, the lesson included either one correct strategy (equalizer) or two correct strategies (equalizer and add-subtract) in speech; in addition, the instruction either contained no gestures at all, matching gestures, or mismatching gestures. There were thus six different training groups. Interestingly, including more than one strategy in speech in the lesson turned out to be an ineffective teaching strategy––children improved significantly more after the lesson if they had been given one strategy in speech than if they had been given two. But including mismatches in the lesson was very effective––children improved significantly more after the lesson if their lesson included mismatching gestures than if it included matching gestures or no gestures at all. The lesson that was most effective contained the equalizer strategy in speech (“to solve this problem you need to make one side equal to the other side”), combined with the add-subtract strategy in gesture (pointing at the three numbers on the left side of the equation and then producing a ‘take away’ gesture under the number on the right side). In other words, a lesson containing two strategies can be effective, but only if the two strategies are produced in different modalities.
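The factorial structure of this lesson design is easy to lose in prose; the sketch below is a hypothetical enumeration (not the authors' materials) of the six training conditions, crossing the number of correct strategies in speech with the kind of gesture accompanying the spoken lesson.

```python
from itertools import product

# Hypothetical reconstruction of the 2 x 3 design in Singer and Goldin-Meadow (2005):
# speech contained one or two correct strategies, crossed with the gesture condition.
speech_conditions = ["one strategy (equalizer)", "two strategies (equalizer + add-subtract)"]
gesture_conditions = ["no gesture", "matching gesture", "mismatching gesture"]

for speech, gesture in product(speech_conditions, gesture_conditions):
    print(f"speech: {speech:42s} | gesture: {gesture}")
# The most effective cell paired equalizer-only speech with mismatching gesture
# (the add-subtract strategy conveyed in gesture).
```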
Previous studies examining the effects of teachers' gestures on students' learning have likewise found that including gesture in instruction promotes learning. The general finding is that children who are exposed to instruction that includes both speech and gesture learn more from that instruction than children who are exposed to instruction that includes only speech. The effect has been found in mathematical equivalence tasks (Church et al. 2004; Perry et al. 1995), as well as in tasks involving symmetry (Valenzeno et al. 2003). For example, Valenzeno and colleagues (2003) compared children's performance on tests of symmetry after they viewed a videotaped lesson that contained either both speech and gesture or speech alone. Children who saw the lesson that included gesture were much more successful at identifying symmetry after the lesson than children who saw the lesson containing speech alone.
Why does including gesture in instruction facilitate learning? One possibility is that gesture points out objects in the immediate context and thus helps ground the words learners hear in the world they see. The gestures used in previous studies of gesture's role in instruction were either points at objects or paths traced on objects, which is consistent with this hypothesis. But to really test the hypothesis, we need to determine whether including gesture in instruction helps children learn even when it is not produced in relation to an object but is instead produced “in the air.” Ping and Goldin-Meadow (2008) gave children instruction in Piagetian conservation problems with or without gesture and with or without concrete objects. They found that children given instruction with speech and gesture learned more about conservation than children given instruction with speech alone, whether or not objects were present during instruction. Moreover, children who received instruction in speech and gesture were, when asked to explain their solutions, more likely to express strategies that they had not been taught in either speech or gesture during the experiment; this advantage was found only when objects were absent during instruction. Gesture in instruction can thus promote learning even when those gestures do not direct attention to visible objects, suggesting that gesture can do more for learners than simply ground arbitrary, symbolic language in the physical, observable world.
Taken together, the findings suggest that the gestures learners produce convey meaning that is accessible to their communication partners. The partners, in turn, alter the way they respond to a learner as a function of that learner’s gestures. Learners then profit from those responses, which they elicited through their gestures. Gesture can thus play a causal role in learning indirectly through the effect it has on communication.
2.2. Gesture promotes learning through cognition
2.2.1. Seeing gesture makes learners gesture, which leads to learning
Gesture also has the potential to contribute to learning by having a direct effect on the learner. Indeed, one reason that including gesture in a lesson may be good for learning is because seeing a teacher gesture encourages learners to produce gestures of their own, which may, in turn, facilitate learning. Cook and Goldin-Meadow (2006) gave children instruction in mathematical equivalence. One group of children was given the equalizer strategy in speech with no gestures during the lesson (“to solve this problem, you need to make one side equal to the other side,” speech+no gesture). The other group was given the equalizer strategy in speech accompanied by the equalizer strategy in gesture (the same words plus a sweep with the left palm under the left side of the equation, followed by a sweep with the right palm under the right side of the equation, speech+gesture). Children in the two groups were equally likely to produce the equalizer strategy in speech during the lesson. But children in the speech+gesture group were significantly more likely to produce the equalizer strategy in gesture during the lesson. Seeing the teacher gesture made it more likely that the children themselves would gesture. Importantly, children who gestured during the lesson were significantly more likely to profit from the lesson than children who did not gesture. These findings suggest that gesturing can help children get the most out of a lesson.
The children in the Cook and Goldin-Meadow (2006) study saw the teacher gesture and either imitated those gestures or not. Children were not forced to gesture––they chose to. As a result, the children who gestured may have been systematically different from those who did not. In particular, the children who chose to gesture may have been more ready to learn than the children who chose not to gesture. If so, the fact that they reproduced the experimenter’s gestures may have been a reflection of that readiness to learn, rather than a causal factor in the learning itself.
2.2.2. Gesturing makes learning last
To address this concern, gesture needs to be manipulated more directly––all of the children in the gesture group must reproduce the experimenter’s hand movements during the lesson. Cook et al. (2008) solved this problem by teaching children words and hand movements prior to the mathematical equivalence lesson, and then asking the children to reproduce those words and/or gestures during the lesson itself. One group of children was taught to say the following words: “to solve this problem, I need to make one side equal to the other side,” an equalizer strategy in speech (speech group). Another group was taught to make the following hand movements: sweep with the left palm under the left side of the equation, followed by a sweep with the right palm under the right side of the equation, an equalizer strategy in gesture (gesture group). The third group was taught to say the words and produce the hand movements at the same time, an equalizer strategy in both speech and gesture (speech+gesture group).
All of the children were then given the same lesson in mathematical equivalence; the experimenter taught the children the equalizer strategy using both speech and gesture. The only difference among the groups during the lesson was the children’s own behavior––the children repeated the words and/or hand movements they were taught before and after each problem they were given to solve.
These self-produced behaviors turned out to make a big difference, not in how well the children did at posttest (children in all three groups made equal progress right after the lesson), but in how long they retained the knowledge they had been taught––children who were told to produce gestures (with or without speech) during the lesson performed significantly better on a follow-up test four weeks later than children who were told to produce only speech (Cook et al. 2008). Thus, the children’s own hand movements worked to cement what they had learned, suggesting that gesture can play a role in knowledge change by making learning last.
2.2.3. Gesturing brings new information into the system
The information that the children produced in gesture in the Cook et al. (2008) study (the equalizer strategy in gesture) was reinforced by the equalizer information they heard in both speech and gesture during the lesson. Thus, their gestures did not provide new information. To determine whether gesture can create new ideas, Goldin-Meadow et al. (2009) again taught children words and hand movements to produce before the lesson began. But this time, the hand movements instantiated a different strategy from the one conveyed in the words they were taught. All three groups were taught to say the equalizer strategy in speech, “to solve this problem, I need to make one side equal to the other side.” One group was taught only these words and no hand movements (speech+no gesture group). One group was taught to say the equalizer strategy while producing a V-hand under the 6+3 in the problem 6+3+5=__+5 and then pointing at the blank, a grouping strategy in gesture (speech+correct gesture group). The third group was taught to say the same words but to produce a partially correct version of the grouping strategy in gesture (speech+partially correct gesture group)––a V-hand under the 3+5 followed by a point at the blank (these movements are partially correct in that the V-hand highlights the fact that two numbers on the left side of the equation can be grouped, and the two gestures together highlight the fact that there are two sides to the equation; the movements are incorrect in that the V-hand isolates the wrong two numbers to be grouped). All of the children were given the same lesson in mathematical equivalence; the experimenter taught them the equalizer strategy in speech and produced no gestures. The children were required to produce the words or words+gestures they had been taught before and after each problem they solved during the lesson.
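As a worked example of the arithmetic these gestures highlight (an illustration added here, not drawn from the study's materials), the correct grouping gesture sets aside the addend that appears on both sides of the equation and adds the remaining left-side numbers, whereas the partially correct gesture groups a pair that does not give the answer that belongs in the blank:

```latex
% Correct grouping: the +5 appears on both sides of the equation, so it can be set aside.
\[
6 + 3 + 5 \;=\; \underline{\;\;\;} + 5
\quad\Longrightarrow\quad
\underline{\;\;\;} \;=\; 6 + 3 \;=\; 9
\]
% The partially correct gesture instead groups 3 + 5 = 8, which is not the value
% that makes the two sides equal.
```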
Children in the speech+correct gesture group performed better on the posttest than children in the speech+partially correct gesture group, who, in turn, performed better than children in the speech+no gesture group. Importantly, this effect was mediated by whether a child produced the grouping strategy in speech after the lesson (only one child in the speech+no gesture group produced grouping in speech prior to the lesson, and this child did not improve). Recall that the experimenter did not use the grouping strategy in either gesture or speech and that, during the lesson, the children produced the grouping strategy only in gesture, not in speech. Thus, the grouping strategy had to have come from the children's own hands, suggesting that gesture can introduce new knowledge into a child's repertoire.
2.2.4. Gesture brings out implicit knowledge, which leads to learning
We have seen that gesture can bring new knowledge into a child's system if the child is told to produce particular hand movements. But learners are rarely told how to move their hands. What would happen if children were simply told to move their hands without instruction about which movements to make? Broaders et al. (2007) addressed this question by first asking children to solve six mathematical equivalence problems without any instructions about what to do with their hands. The children were then asked to solve a second set of comparable problems but, this time, one group of children was told to move their hands as they explained their solutions to the second set of problems (gesture group). A second group was told not to move their hands (no gesture group). The third group was given no instructions whatsoever about their hands (control group). Broaders et al. (2007) then compared the types of strategies a child produced on the second set of problems with the types that child produced on the first set, and calculated how many new strategies the child added to his or her repertoire on the second set of problems.
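One way to picture this repertoire comparison is as a set difference over strategy labels. The sketch below is a minimal illustration under the assumption that each child's responses are coded as a set of strategy names; it is not the authors' coding scheme.

```python
# Hypothetical coding sketch for the repertoire comparison described above:
# count the strategies expressed on the second problem set (in speech or gesture)
# that had not appeared on the first set.

def new_strategies(set1_strategies, set2_strategies):
    """Strategies added to the child's repertoire on the second problem set."""
    return set(set2_strategies) - set(set1_strategies)

child_set1 = {"add-to-equal-sign"}                      # strategies on problems 1-6
child_set2 = {"add-to-equal-sign", "grouping"}          # strategies on problems 7-12
print(new_strategies(child_set1, child_set2))           # {'grouping'}
print(len(new_strategies(child_set1, child_set2)))      # 1 new strategy added
```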
Interestingly, children who were told to gesture on the second set of problems added significantly more new strategies to their repertoires than either children who were told not to gesture or children given no instructions at all. Most of those strategies were produced uniquely in gesture, not in speech, and, surprisingly, most were correct. The children who were told to gesture had been turned into mismatchers––they produced information in gesture that was different from the information they produced in speech. Were these created mismatchers also ready to learn?
To find out, Broaders et al. (2007) gave another group of children the same instructions to gesture or not to gesture while solving a second set of mathematical equivalence problems, and then gave all of the children a lesson in mathematical equivalence. Broaders and colleagues replicated the original phenomenon––children told to gesture added more strategies to their repertoires after the second set of problems than children told not to gesture. Moreover, children told to gesture showed significantly more improvement on the posttest than children told not to gesture, particularly if the children had added strategies to their repertoires after being told to gesture. Being told to gesture thus encouraged children to express new ideas that they had previously not expressed, which, in turn, led to learning.
2.3. Do the gestures learners see and the gestures they produce promote learning by activating implicit knowledge or creating it?
The question that the Broaders et al. (2007) study cannot answer is whether gesture created new implicit knowledge, or activated implicit knowledge that the children already had. All of the children who were told to gesture moved their hands, but only some added new and correct strategies to their repertoires. These children may have “had” these correct strategies in their repertoires before receiving the instructions to gesture. To determine whether gesture affects learning by creating implicit knowledge or activating it, Cook and Goldin-Meadow (2010) reanalyzed the data from previous studies, dividing children into those who had implicit knowledge prior to the experimental manipulation and those who did not. They used the gestures that children produced prior to instruction, evaluated in relation to the accompanying speech, as a marker for implicit knowledge. Children who produced at least some gestures that conveyed different information from their speech (i.e., children who produced gesture-speech mismatches) on a particular task were classified as “having implicit knowledge” with respect to that task. Children whose gestures always conveyed the same information as their speech on a task were classified as “not having implicit knowledge” with respect to that task.
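A minimal sketch of this classification rule follows, assuming each pre-instruction response is coded as a spoken strategy plus an optional gestured strategy; the labels and data structures here are illustrative, not the authors' coding system.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Response:
    speech_strategy: str
    gesture_strategy: Optional[str] = None   # None when no gesture accompanies the explanation

    def is_mismatch(self) -> bool:
        """True when gesture conveys a strategy different from the one in speech."""
        return (self.gesture_strategy is not None
                and self.gesture_strategy != self.speech_strategy)

def has_implicit_knowledge(responses: List[Response]) -> bool:
    """Classify a child as 'having implicit knowledge' if any pre-instruction response is a mismatch."""
    return any(r.is_mismatch() for r in responses)

child = [Response("add-to-equal-sign", "add-to-equal-sign"),
         Response("add-to-equal-sign", "add-all-numbers")]
print(has_implicit_knowledge(child))   # True: the second response is a gesture-speech mismatch
```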
If gesture is merely activating implicit knowledge, as opposed to creating it, then asking learners to gesture, or having them observe gesture, should improve learning only for children who already have implicit knowledge. However, if gesture can create new knowledge, then gesture should also be effective for children who do not yet have implicit knowledge. Cook and Goldin-Meadow (2010) found that gesture did, in fact, lead to learning not only in children who had implicit knowledge, but also in children who did not have implicit knowledge, suggesting that the gesture manipulations were not merely activating implicit knowledge but were creating it.
In addition to pinning down the mechanism by which gesture affects learning, Cook and Goldin-Meadow (2010) were able to explore whether having implicit knowledge prepares children to profit from instruction. They found that instruction of any sort, whether it contained gesture or not, led to improved learning on a task if children already had implicit knowledge on that task. In contrast, for children who did not have implicit knowledge prior to instruction, including gesture in instruction (either seeing other peoples’ gestures or producing one’s own gestures) was necessary in order for the children to show improvement. In general, the analyses showed that gesture manipulations promote learning in children who do not yet have implicit knowledge, suggesting that gesture can indeed create implicit knowledge rather than merely activate it.
3. How does gesture promote learning?
We have seen that gesture can play a role in learning, but we do not yet fully understand the mechanisms that underlie this process. The next two sections review evidence for two mechanisms that can account, at least in part, for gesture’s effect on cognition, but there are undoubtedly others that have not yet been explored. A question that remains for future research is whether the mechanisms responsible for the effect that gesture has on learning are unique to gesture. Gesture may be special only in the sense that it makes efficient use of ordinary learning mechanisms; for example, cues may be more distinctive when presented in two modalities than in one. On the other hand, it is possible that traditional principles of learning and memory (e.g., distinctiveness, elaboration, cue validity, cue salience, etc.) will, in the end, not be adequate to account for the impact that gesture has on learning. In this unlikely event, it will be necessary to search for mechanisms that are specific to gesture.
3.1. Gesture grounds thought in action
Gesturing can change speakers' thoughts by introducing action information into their mental representations of a problem, which then impacts how they solve the problem. Beilock and Goldin-Meadow (2010) asked adults to solve a Tower of Hanoi problem (TOH1) in which a stack of four disks, arranged from the largest on bottom to the smallest on top, must be moved from the leftmost of three pegs to the rightmost; only one disk can be moved at a time and larger disks cannot be placed on top of smaller disks (Newell and Simon 1972). The smallest disk weighed the least (0.8 kg), the largest disk the most (2.9 kg). Adults were then asked to explain how they solved TOH1. In the final step, adults were asked to solve the Tower of Hanoi problem a second time (TOH2). Two versions of TOH2 were used––one in which the disk weights were switched so that the smallest disk weighed the most and the largest the least (Switch condition); and one in which the disk weights were identical to TOH1 (No-Switch condition).
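For readers unfamiliar with the puzzle, the four-disk Tower of Hanoi has a minimum solution of 2^4 - 1 = 15 moves. The recursive sketch below is purely illustrative of the task's structure; it was not part of the study's procedure, and participants were of course free to solve the puzzle less efficiently.

```python
# Standard recursive Tower of Hanoi solution, printed for four disks moved from
# the leftmost to the rightmost peg (larger disks never placed on smaller ones).

def hanoi(n, source, target, spare, moves):
    """Append the moves that transfer a stack of n disks from source to target."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)   # clear the n-1 smaller disks out of the way
    moves.append((n, source, target))            # move the largest remaining disk
    hanoi(n - 1, spare, target, source, moves)   # restack the smaller disks on top of it

moves = []
hanoi(4, "left peg", "right peg", "middle peg", moves)
for disk, src, dst in moves:
    print(f"move disk {disk} from {src} to {dst}")
print(f"{len(moves)} moves")                     # 15, the minimum for four disks
```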
Adults gestured when they explained how they solved TOH1, often producing action gestures; for example, one-handed or two-handed motions mimicking actions used to move the disks (cf. Cook and Tanenhaus 2009; Garber and Goldin-Meadow 2002). Some of these gestures––in particular, one-handed gestures produced to describe moving the smallest disk––were incompatible with the actions needed to solve TOH2 in the Switch condition (where the smallest disk was now the heaviest and required two hands to move), but were not incompatible with actions needed to solve TOH2 in the No-Switch condition (where the smallest disk continued to be the lightest and could easily be moved one-handed).
The more incompatible gestures that adults in the Switch condition produced when explaining how they solved TOH1, the worse they performed on TOH2. No such relation between gesture and solution performance was found in the No-Switch condition. Gesturing thus seems to change adults' mental representation of the TOH task. After gesturing about the smallest disk with one hand, the adults mentally represented this disk as a light object. For adults in the Switch condition, this representation was incompatible with the weight of the disk they eventually encountered when solving TOH2 (the smallest disk was now too heavy to lift with one hand). The relatively poor performance of the Switch condition on TOH2 suggests that the mental representation created by gesture interfered with their subsequent performance.
There is, however, another possibility: the adults' gestures could be reflecting their representation of the smallest disk as a light object rather than creating it. But if gesture changes thought by adding action information––rather than merely reflecting action information already inherent in one's mental representation of a problem––then subjects in the Switch condition should not be impaired if they do not gesture. Beilock and Goldin-Meadow (2010) asked a second group of adults to solve TOH1 and TOH2; these adults were not asked to do the explanation task in between and, as a result, did not gesture. They performed equally well on TOH2 in the Switch and No-Switch conditions. Switching the weights of the disks interfered with performance only when subjects had previously produced action gestures relevant to the task.
Gesturing thus adds action information to speakers' mental representations––when incompatible with subsequent actions, this information interferes with problem-solving. When the information gesture adds to a speaker’s mental representations is compatible with future actions, those actions will presumably be facilitated. Gesturing introduces action into thought and, in this way, changes how we think (see Goldin-Meadow and Beilock, 2010).
3.2. Gesture lightens cognitive load
Gesturing can also have an impact on thinking by lightening the load on working memory. Gesturing while speaking seems likely to require motor planning, execution, and coordination of two separate cognitive and motor systems. If so, gesturing might increase speakers’ cognitive load. Alternatively, gesture and speech might form a single, integrated system in which the two modalities work together to convey meaning. Under this view, gesturing while speaking would reduce demands on the speaker's cognitive resources (relative to speaking without gesture), and free up cognitive capacity to perform other tasks.
To distinguish these alternatives and to determine the impact of gesturing on a speaker's cognitive load, Goldin-Meadow et al. (2001; see also Wagner et al. 2004) explored how gesturing on one task (explaining a math problem) affected performance on a second task (remembering a list of words or letters) carried out at the same time. If gesturing increases cognitive load, gesturing while explaining the math problems should take away from the resources available for remembering. Memory should then be worse when speakers gesture than when they do not gesture. Alternatively, if gesturing reduces cognitive load, gesturing while explaining the math problems should free up resources available for remembering. Memory should then be better when speakers gesture than when they do not. Both adults and children remembered significantly more items when they gestured during their math explanations than when they did not gesture. Gesturing appeared to save the speakers cognitive resources on the explanation task, permitting the speakers to allocate more resources to the memory task.
Why does gesturing lighten cognitive load? Perhaps it is the motor aspects of gesture that are responsible for the cognitive benefits associated with producing gesture. If so, the meaning of the gesture should not affect its ability to lighten cognitive load. Wagner and colleagues (2004) replicated the cognitive load effect in adults asked to remember lists of letters or locations on a grid while explaining how they solved a factoring problem. The adults remembered more letters or locations when they gestured than when they did not gesture. But the types of gestures they produced mattered. In particular, gestures that conveyed different information from the accompanying speech (mismatching gestures) lightened load less than gestures that conveyed the same information as the accompanying speech (matching gestures). If the motor aspects of gesture were solely responsible for the cognitive benefits associated with gesture production, mismatching gestures should have been as effective in promoting recall as matching gestures, since the two are physically comparable motor behaviors. The effect gesture has on working memory thus cannot be a pure motor phenomenon––it must stem instead from the coordination of motor activity and higher order conceptual processes.
Gesturing on a task thus allows speakers to conserve cognitive resources. Learners might then have more resources available to learn a new task if they gesture while tackling the task than if they do not gesture.
4. When gesture does and does not promote learning: Novices vs. experts
We have found that speakers who produce gestures that convey different information from speech (i.e., gesture-speech mismatches) on a task are typically in a transitional state with respect to that task. But there are times when speakers produce mismatches and are not in a transitional state. Take, for example, the teachers in the Goldin-Meadow and Singer (2003) study who instructed children individually in mathematical equivalence. The teachers often produced mismatches, particularly when interacting with children who produced mismatches, but the teachers were not in a state of transitional knowledge––they were all expert in solving the mathematical equivalence problems. Are the mismatches produced by expert teachers different from the mismatches produced by novice students? It turns out that they are––they differ in terms of how accessible the information conveyed in gesture is, and how much load the information conveyed in gesture imposes on working memory.
4.1. How accessible is the information conveyed in gesture in novices vs. experts?
Not surprisingly, given that their goal was to instruct, the teachers in Goldin-Meadow and Singer (2003) produced mismatches that typically contained two correct problem-solving strategies (e.g., equalizer in speech, add-subtract in gesture). In contrast, the children's mismatches typically contained either two incorrect strategies (e.g., add-to-equal-sign in speech, add-all-numbers in gesture) or a correct strategy in gesture and an incorrect strategy in speech (e.g., equalizer or grouping in gesture, add-to-equal-sign or add-all-numbers in speech).
More interestingly, the mismatches that the teachers and children produced differed in how accessible their gestured strategies were to the spoken modality. Recall that the strategies that the children produced in gesture in mismatches were typically found only in gesture, not just in that particular mismatch but across all of their responses. In other words, the information conveyed in gesture was not accessible to speech for the children. In contrast, the strategies that the teachers produced in gesture in mismatches could also be found in the teachers' speech in other responses. The information conveyed in gesture was accessible to both gesture and speech for the teachers.
Thus, the information conveyed by the novice children in gesture in their mismatches was not, for them, verbalizable knowledge––it was part of their implicit repertoire. However, the information conveyed by the expert teachers in gesture in their mismatches was verbalizable and thus part of their explicit repertoire. Mismatches may be important to learning not because the two modalities in a mismatch convey different information, but because the information conveyed in gesture is accessible only to gesture and thus implicit. Gesture may be an ideal vehicle for bringing implicit knowledge into the system.
4.2. How much does gesture lighten the load in novices vs. experts?
Recall that producing gesture along with speech lightens the speaker’s cognitive load (Goldin-Meadow et al. 2001) and that gesture’s meaning plays a role in determining how light the load is (Wagner et al. 2004). Wagner and colleagues found, in adults asked to remember letters or locations while explaining factoring problems, that mismatches lightened cognitive load less than matches. But the adults were all experts in solving factoring problems (they rarely made mistakes). Moreover, their mismatches were all of the expert kind––the information conveyed in gesture in a mismatch could be found in speech on some other trial.
In contrast, Ping and Goldin-Meadow (2010) studied the effects of gesturing on cognitive load in children explaining their responses to a liquid conservation task. Most of the children did not know how to solve the problems and many were in transition. Their mismatches were of the novice kind––the information conveyed in gesture in a mismatch could not be found in speech on any other trial. Ping and Goldin-Meadow replicated the original findings––gesturing lightened cognitive load even on this new task (a task that elicits iconic as well as deictic gestures). Interestingly, however, for the novice children, mismatching gestures lightened cognitive load more than matching gestures––the opposite pattern found for the expert adults.
Since the adult experts in the Wagner et al. (2004) study have in their repertoires the spoken equivalent of the strategy expressed in gesture in a mismatch, they may be implicitly activating this strategy, not only in gesture but also in speech, when they produce mismatches in their explanations of the factoring problems. In other words, in addition to explicitly producing strategy 1 in speech and strategy 2 in gesture in a mismatch, the adults may also be implicitly activating the spoken equivalent of strategy 2 precisely because it is in their spoken repertoire. If so, the adults are activating more strategies in their mismatches (strategy 1 in speech, strategy 2 in gesture, strategy 2 in speech) than they activate in their matches (strategy 1 in speech, strategy 1 in gesture), a difference that could explain why their mismatches were less effective than their matches in lightening cognitive load.
In contrast, the child novices in the Ping and Goldin-Meadow (2010) study do not have the option of implicitly activating the spoken equivalent of the strategy expressed in gesture in a mismatch since this strategy is not part of their spoken repertoires. They activate only two strategies (strategy 1 in speech, strategy 2 in gesture). This difference might explain why the children’s mismatches lighten cognitive load more than the adults’ mismatches.
But why then do the children’s mismatches lighten cognitive load more than their matches, which also involve activating two strategies, one in speech (strategy 1) and one in gesture (strategy 1)? Counter-intuitively, for novice children, expressing two different strategies, one in speech and the other in gesture, lightens cognitive load more than expressing the same strategy in speech and gesture. Perhaps it is necessary to link the strategy expressed in gesture with its equivalent in speech, and this link requires some cognitive effort. Such a link is not necessary in a novice’s mismatch (there is no spoken equivalent of the strategy expressed in gesture), which might make the children’s mismatches better at lightening cognitive load than their matches. Whatever the explanation, it is clear that mismatches are qualitatively different in novices and experts.
4.3. The function of gesture for novices vs. experts
Both experts and novices produce mismatches, which, by definition, instantiate variability––more than one strategy produced in a single response. But the variability in novices' mismatches serves a different function from the variability in experts' mismatches. For novices, the information conveyed in gesture in a mismatch is at the cutting edge of their knowledge––the variability in their mismatches can thus serve as an engine of change, propelling development forward (cf. Siegler 1994; Thelen 1989). But for experts, the information conveyed in gesture in a mismatch is not new knowledge––the variability in their mismatches neither reflects nor creates change, but may instead index discourse instability, a moment when speech and gesture are not completely aligned, reflecting the dynamic tension of the speaking process (McNeill 1992) or perhaps the influence that speakers and listeners have on each other (Kimbara 2006; Furuyama 2000). The expert's mismatches are best characterized in terms of the kind of variability that comes with expertise: the back-and-forth around a set point that typifies expert (as opposed to novice) performance on a task (cf. Bertenthal 1999). As such, mismatches can support cognition in a variety of ways––by, for example, facilitating lexical access (Krauss et al. 2000), helping to package information for speaking (Kita 2000), highlighting perceptual-motor information (Hostetter and Alibali 2008; Beilock and Goldin-Meadow 2010), keeping mental images active (de Ruiter 1998; Wesp et al. 2001; Morsella and Krauss 2004), and lightening cognitive load (Goldin-Meadow et al. 2001; Wagner et al. 2004)––but they do not lead to learning in experts.
Thus, experts and novices both exhibit variability in their gestures. However, the variability in gesture that experts display is in the service of adjusting to small (and perhaps unexpected) variations in the discourse. In contrast, the variability in gesture that novices display reflects experimentation with new and not-yet-solidified ways of solving a task and, in this way, has the potential to lead to cognitive change. Importantly, this difference is not a developmental difference but rather reflects the state of the speaker’s knowledge––adults produce gesture-speech mismatches when they are learning a task, that is, when they are novices (Perry and Elder 1997); and children continue to produce gesture-speech mismatches even after they have mastered a task, that is, when they are experts (Ozcaliskan and Goldin-Meadow 2009).
To summarize, the spontaneous gestures that speakers produce when they talk about a task can serve as a signal that the speaker is in a transitional state and ready to learn that task. Gesture can thus reflect the state of a speaker’s knowledge. But gesture can go beyond reflecting knowledge––it can play a role in changing knowledge, indirectly through its effects on communication or more directly through its effects on cognition. Gesturing, however, is not limited to novices. Experts gesture too but their gestures may serve different functions from the gestures that novices produce, supporting cognition rather than changing it.
Acknowledgments
Supported by R01 HD47450 from NICHD
References
- Alibali MW, Goldin-Meadow S. Transitions in learning: What the hands reveal about a child's state of mind. Cognitive Psychology. 1993;25:468–523. doi: 10.1006/cogp.1993.1012.
- Alibali MW, Flevares L, Goldin-Meadow S. Assessing knowledge conveyed in gesture: Do teachers have the upper hand? Journal of Educational Psychology. 1997;89:183–193.
- Beattie G, Shovelton H. Mapping the range of information contained in the iconic hand gestures that accompany spontaneous speech. Journal of Language and Social Psychology. 1999;18(4):438–462.
- Beilock SL, Goldin-Meadow S. Gesture grounds thought in action. 2010. Under review.
- Bertenthal B. Variation and selection in the development of perception and action. In: Savelsbergh G, editor. Nonlinear analyses of developmental processes. Amsterdam: Elsevier Science Publishers; 1999.
- Broaders SC, Cook SW, Mitchell Z, Goldin-Meadow S. Making children gesture brings out implicit knowledge and leads to learning. Journal of Experimental Psychology: General. 2007;136:539–550. doi: 10.1037/0096-3445.136.4.539.
- Capirci O, Iverson JM, Pizzuto E, Volterra V. Gestures and words during the transition to two-word speech. Journal of Child Language. 1996;23(3):645–673.
- Cassell J, McNeill D, McCullough K-E. Speech-gesture mismatches: Evidence for one underlying representation of linguistic and nonlinguistic information. Pragmatics and Cognition. 1999;7(1):1–34.
- Church RB, Goldin-Meadow S. The mismatch between gesture and speech as an index of transitional knowledge. Cognition. 1986;23(1):43–71. doi: 10.1016/0010-0277(86)90053-3.
- Church RB, Ayman-Nolley S, Mahootian S. The effects of gestural instruction on bilingual children. International Journal of Bilingual Education and Bilingualism. 2004;7(4):303–319.
- Cook SW, Goldin-Meadow S. The role of gesture in learning: Do children use their hands to change their minds? Journal of Cognition and Development. 2006;7(2):211–232.
- Cook SW, Mitchell Z, Goldin-Meadow S. Gesturing makes learning last. Cognition. 2008;106:1047–1058. doi: 10.1016/j.cognition.2007.04.010.
- Cook SW, Tanenhaus MK. Embodied communication: Speakers' gestures affect listeners' actions. Cognition. 2009;113:98–104. doi: 10.1016/j.cognition.2009.06.006.
- Cook SW, Goldin-Meadow S. Gesture leads to knowledge change by creating implicit knowledge. 2010. Under review.
- Ehrlich SB, Levine SC, Goldin-Meadow S. The importance of gesture in children's spatial reasoning. Developmental Psychology. 2006;42:1259–1268. doi: 10.1037/0012-1649.42.6.1259.
- Feyereisen P, de Lannoy J-D. Gestures and speech: Psychological investigations. Cambridge: Cambridge University Press; 1991.
- Furuyama N. Gestural interaction between the instructor and the learner in origami instruction. In: McNeill D, editor. Language and gesture. Cambridge: Cambridge University Press; 2000. pp. 99–117.
- Garber P, Goldin-Meadow S. Gesture offers insight into problem-solving in adults and children. Cognitive Science. 2002;26:817–831.
- Goldin-Meadow S. Hearing gesture: How our hands help us think. Cambridge, MA: Harvard University Press; 2003.
- Goldin-Meadow S, Wein D, Chang C. Assessing knowledge through gesture: Using children's hands to read their minds. Cognition and Instruction. 1992;9:201–219.
- Goldin-Meadow S, Alibali MW, Church RB. Transitions in concept acquisition: Using the hand to read the mind. Psychological Review. 1993;100(2):279–297. doi: 10.1037/0033-295x.100.2.279.
- Goldin-Meadow S, Kim S, Singer M. What the teachers' hands tell the students' minds about math. Journal of Educational Psychology. 1999;91:720–730.
- Goldin-Meadow S, Sandhofer C. Gestures convey substantive information about a child's thoughts to ordinary listeners. Developmental Science. 1999;2(1):67–74.
- Goldin-Meadow S, Nusbaum H, Kelly SD, Wagner S. Explaining math: Gesturing lightens the load. Psychological Science. 2001;12:516–522. doi: 10.1111/1467-9280.00395.
- Goldin-Meadow S, Butcher C. Pointing toward two-word speech in young children. In: Kita S, editor. Pointing: Where language, culture, and cognition meet. Mahwah, NJ: Erlbaum; 2003. pp. 85–107.
- Goldin-Meadow S, Singer MA. From children's hands to adults' ears: Gesture's role in the learning process. Developmental Psychology. 2003;39:509–520. doi: 10.1037/0012-1649.39.3.509.
- Goldin-Meadow S, Cook SW, Mitchell Z. Gesturing gives children new ideas about math. Psychological Science. 2009;20(3):267–272. doi: 10.1111/j.1467-9280.2009.02297.x.
- Goldin-Meadow S, Beilock SL. Action's influence on thought: The case of gesture. Perspectives on Psychological Science. 2010. doi: 10.1177/1745691610388764. In press.
- Hostetter AB, Alibali MW. Visible embodiment: Gestures as simulated action. Psychonomic Bulletin and Review. 2008;15:495–514. doi: 10.3758/pbr.15.3.495.
- Iverson JM, Goldin-Meadow S, editors. The nature and functions of gesture in children's communications. New Directions for Child Development series, No. 79. San Francisco: Jossey-Bass; 1998a.
- Iverson JM, Goldin-Meadow S. Why people gesture as they speak. Nature. 1998b;396:228. doi: 10.1038/24300.
- Iverson JM, Goldin-Meadow S. Gesture paves the way for language development. Psychological Science. 2005;16:368–371. doi: 10.1111/j.0956-7976.2005.01542.x.
- Iverson JM, Capirci O, Volterra V, Goldin-Meadow S. Learning to talk in a gesture-rich world: Early communication of Italian vs. American children. First Language. 2008;28:164–181. doi: 10.1177/0142723707087736.
- Kelly SD, Church RB. Can children detect conceptual information conveyed through other children's nonverbal behaviors? Cognition and Instruction. 1997;15:107–134.
- Kelly SD, Church RB. A comparison between children's and adults' ability to detect conceptual information conveyed through representational gestures. Child Development. 1998;69:85–93.
- Kendon A. Gesticulation and speech: Two aspects of the process of utterance. In: Key MR, editor. Relationship of verbal and nonverbal communication. The Hague: Mouton de Gruyter; 1980. pp. 207–228.
- Kimbara I. On gestural mimicry. Gesture. 2006;6:39–61.
- Kita S. How representational gestures help speaking. In: McNeill D, editor. Language and gesture: Window into thought and action. Cambridge: Cambridge University Press; 2000. pp. 162–185.
- Krauss RM, Chen Y, Gottesman RF. Lexical gestures and lexical access: A process model. In: McNeill D, editor. Language and gesture. New York: Cambridge University Press; 2000. pp. 261–283.
- McNeill D. Hand and mind: What gestures reveal about thought. Chicago: The University of Chicago Press; 1992.
- Morsella E, Krauss RM. The role of gestures in spatial working memory and speech. American Journal of Psychology. 2004;117:411–424.
- Newell A, Simon HA. Human problem solving. Englewood Cliffs, NJ: Prentice-Hall; 1972.
- Ozcaliskan S, Goldin-Meadow S. Gesture is at the cutting edge of early language development. Cognition. 2005;96(3):B101–B113. doi: 10.1016/j.cognition.2005.01.001.
- Ozcaliskan S, Goldin-Meadow S. When gesture-speech combinations do and do not index linguistic change. Language and Cognitive Processes. 2009;28:190–217. doi: 10.1080/01690960801956911.
- Perry M, Church RB, Goldin-Meadow S. Transitional knowledge in the acquisition of concepts. Cognitive Development. 1988;3(4):359–400.
- Perry M, Berch DB, Singleton JL. Constructing shared understanding: The role of nonverbal input in learning contexts. Journal of Contemporary Legal Issues. 1995;6:213–236.
- Perry M, Elder AD. Knowledge in transition: Adults' developing understanding of a principle of physical causality. Cognitive Development. 1997;12:131–157.
- Pine KJ, Lufkin N, Messer D. More gestures than answers: Children learning about balance. Developmental Psychology. 2004;40:1059–1106. doi: 10.1037/0012-1649.40.6.1059.
- Ping R, Goldin-Meadow S. Hands in the air: Using ungrounded iconic gestures to teach children conservation of quantity. Developmental Psychology. 2008;44:1277–1287. doi: 10.1037/0012-1649.44.5.1277.
- Ping R, Goldin-Meadow S. Gesturing saves cognitive resources when talking about non-present objects. Cognitive Science. 2010. doi: 10.1111/j.1551-6709.2010.01102.x. In press.
- Riseborough MG. Physiographic gestures as decoding facilitators: Three experiments exploring a neglected facet of communication. Journal of Nonverbal Behavior. 1981;5(3):172–183.
- de Ruiter JP. Gesture and speech production. Unpublished doctoral dissertation. Nijmegen, The Netherlands: Katholieke Universiteit Nijmegen; 1998.
- Siegler RS. Cognitive variability: A key to understanding cognitive development. Current Directions in Psychological Science. 1994;3:1–5.
- Singer MA, Goldin-Meadow S. Children learn when their teacher's gestures and speech differ. Psychological Science. 2005;16(2):85–89. doi: 10.1111/j.0956-7976.2005.00786.x.
- Thelen E. Self-organization in developmental processes: Can systems approaches work? In: Gunnar M, Thelen E, editors. Systems and development: The Minnesota Symposium on Child Psychology. Hillsdale, NJ: Erlbaum; 1989. pp. 77–117.
- Valenzeno L, Alibali MW, Klatzky R. Teachers' gestures facilitate students' learning: A lesson in symmetry. Contemporary Educational Psychology. 2003;28:187–204.
- Wagner S, Nusbaum H, Goldin-Meadow S. Probing the mental representation of gesture: Is handwaving spatial? Journal of Memory and Language. 2004;50:395–407.
- Wesp R, Hess J, Keutmann D, Wheaton K. Gestures maintain spatial imagery. The American Journal of Psychology. 2001;114:591–600.