Author manuscript; available in PMC: 2016 Nov 1.
Published in final edited form as: Instr Sci. 2015 Aug 19;43(6):709–735. doi: 10.1007/s11251-015-9357-6

Give me a hand: Differential effects of gesture type in guiding young children's problem-solving

Claire Vallotton 1, Maria Fusaro 2, Julia Hayden 3, Kalli Decker 1, Elizabeth Gutowski 1
PMCID: PMC4734138  NIHMSID: NIHMS716976  PMID: 26848192

Abstract

Adults’ gestures support children's learning in problem-solving tasks, but gestures may be differentially useful to children of different ages, and different features of gestures may make them more or less useful to children. The current study investigated parents’ use of gestures to support their young children (1.5 – 6 years) in a block puzzle task (N = 126 parent-child dyads), and identified patterns in parents’ gesture use indicating different gestural strategies. Further, we examined the effect of child age on both the frequency and types of gestures parents used, and on their usefulness for supporting children's learning. Children attempted to solve the puzzle independently before and after receiving help from their parent; half of the parents were instructed to sit on their hands while they helped. Parents who could use their hands appeared to use gestures in three strategies: orienting the child to the task, providing abstract information, and providing embodied information; further, they adapted their gesturing to their child's age and skill level. Younger children elicited more frequent and more proximal gestures from parents. Despite the greater use of gestures with younger children, it was the oldest group (4.5 – 6.0 years) who were most affected by parents’ gestures. The oldest group was positively affected by the total frequency of parents’ gestures and, in particular, by parents’ use of embodying gestures (indexes that touched their referents, representational demonstrations with an object in hand, and physically guiding the child's hands). Though parents rarely used the embodying strategy with older children, it was this strategy that most enhanced the problem-solving of children 4.5 – 6 years.

Keywords: gesture, embodied gesture, informal instruction, parent-child interaction, individual differences, puzzle, problem-solving


While it is evident that adults gesture during their verbal interactions with young children, it is not yet clear how gestures might impact children's cognition and learning. Moreover, different types of gestures may achieve different results. Index, or deictic, gestures, such as pointing towards, tapping, and showing a physical object in order to indicate it, may help children to focus their attention on the most relevant components of a task (Bangerter, 2004). Gestures that provide representational information that reinforces or complements corresponding speech may allow children to glean more information from the adult than just that which is presented verbally (Singer & Goldin-Meadow, 2005; Cook & Goldin-Meadow, 2006). Lozano and Tversky (2006) propose that gestures can facilitate learning because gestures are a form of embodied knowledge. Iconic gestures can serve as “miniature actions” that, when modeled, “provide a communicator with motor experience that can guide knowledge acquisition and learning” (Lozano & Tversky, 2006, p. 48). Thus gestures may facilitate learning, particularly in tasks that require manual action. Several studies have found positive learning outcomes for school-aged children when verbal lessons are accompanied by gestures (Cook & Goldin-Meadow, 2006; Kelly, Manning, & Rodak, 2008; Pozzer-Ardenghi & Roth, 2007; Singer & Goldin-Meadow, 2005). However, relatively little is known about whether and how parents’ use of gestures facilitates their very young children's performance in a teaching and learning context. Early childhood may be a particularly salient time to examine effects of adults’ gestures to help children solve problems and learn about the problem-solving process because the “problems” which children undertake at this age typically involve object manipulation. 
This paper examines parents’ use of spontaneous gestures with their toddler through kindergarten-aged children (1.5-6 years old), and tests whether these gestures facilitate children's problem-solving through their attention-directing and representational properties.

Gesture in Teaching and Learning Contexts

Parents naturally use gestures when engaging in tasks with their young children (O'Neill, Bard, Linnell, & Fluck, 2005; Clark & Estigarribia, 2011).1 In a study of mothers’ gesture use during a free play session and a counting task with their toddlers, gestures comprised 29% of mothers’ total communicative acts (O'Neill et al., 2005). Clark and Estigarribia (2011) asked parents to introduce unfamiliar objects to their 1.5 and 3-year-old children. Parents’ verbal explanations were accompanied by indicating gestures when they were talking about object properties and parts, and by demonstrating gestures when they were talking about actions and functions. In a different study, mothers attempting to teach toddler-aged children how to solve a puzzle manipulated the child's hands (by putting their hands over the child's hands and moving or shaping the child's hands), demonstrated what they wanted the child to do, and used deictic gestures (e.g., pointing, showing) to focus the child's attention on particular pieces of the puzzle (Zukow-Goldring, 2006). Although there are many studies of the communicative functions of parents’ and children's gestures in their dyadic interactions, most are in the context of unstructured play or communication-temptation tasks, rather than in teaching and learning contexts, and thus do not examine the effects of parents’ gestures on children's learning.

Much of what we know about adults’ use of gestures to communicate with children comes not from studies of parents and children but from work on teachers and students. Like parents, classroom teachers use gestures spontaneously when they teach a task (Alibali & Nathan, 2007). For example, first grade teachers used gestures such as pointing or counting on the hands during math instruction (Flevares & Perry, 2001). These gestures accompanied teachers’ speech, reinforced the verbal message, and were often deployed in response to student confusion. Notably, the teachers used gestures more often than any other nonverbal representation, including pictures, objects, and writing. In sum, adults spontaneously use gestures when interacting with children, and they may be especially likely to do so in teaching and learning contexts.

Effects of Gestures on Children's Learning

Even when gestures are not used as an intentional pedagogical strategy, they appear to help children learn (Cook & Goldin-Meadow, 2006; Kelly, Manning, & Rodak, 2008; Pozzer-Ardenghi & Roth, 2007; Singer & Goldin-Meadow, 2005). Goldin-Meadow, Kim, and Singer (1999) demonstrated that children notice and take up information that is conveyed to them via gesture alone. In a study of eight-and-a-half to eleven-and-a-half year-olds’ ability to solve mathematical equivalence problems, the authors reported that children were later able to state in words problem-solving strategies that their teachers had previously only conveyed to them via representational gesture (e.g., grouping strategy: V-shaped hand under two addends on one side of the equation). That is, children were able to make use of problem-solving strategies conveyed to them by gesture and to re-cast them in a different modality that enabled them to explain how they had solved the problem. These authors argued that gestures may be helpful because they provide a second representation of information, particularly visual information that is more difficult to convey in speech. Further work established that children learned more when their teachers’ speech and gestures conveyed two different problem-solving strategies, compared to when teachers’ talk and gestures conveyed the same strategy, demonstrating that children are able to learn from gesture over and above what they glean from speech (Singer & Goldin-Meadow, 2005). Cook and Goldin-Meadow (2006) reported that children's gesture use correlated with the gesture use of their teacher, and greater use of gestures on children's part was associated with more gains in learning, presumably because children's use of gestures had caused them to internalize the strategies their teacher was modeling.
Thus, the evidence clearly shows that school-aged children learn from teachers’ gestures in formal teaching and learning contexts; however, we do not yet know if such learning extends to younger children in less formal contexts.

Adaptation of Gestures to Children's Age and Ability

We know that parents gesture spontaneously when engaging in problem-solving tasks with their children (O'Neill, et al. 2005; Clark & Estigarribia, 2011; Zukow-Goldring, 2006). While engaged in a task, the gestural help parents provide may vary based on children's overall developmental level or based on observations of children's actual skills and needs in a specific area. This variation in gesture may be a parallel form of adults’ use of child-directed speech—a modified speech register characterized by simplified speech and variable intonation patterns (Snow, 1988). Relative to the amount and type of gestures used with adult interlocutors, mothers communicating with young children tend to use fewer and more conceptually simple (e.g., pointing or showing) gestures (Bekken, as cited in Goldin-Meadow, 2006; O'Neill et al., 2005). Along these lines, Iverson, Capirci, Longobardi and Caselli (1999) reported that mothers’ gestures to their 16-20 month-olds were relatively infrequent, concrete, and reinforced (rather than supplemented) the content of their speech. Gutmann and Turnure's (1979) investigation of slightly older children (ages 2-3 and 4-5 years) further established that mothers appear to adapt their gestures to the developmental level of their child, as mothers of the 4-5 year olds used more, and more complex, gestures than did mothers of 2-3 year-olds. Similarly, Wertsch and colleagues (1980) found that mothers provided more verbal and gestural guidance (e.g., points, handling of pieces) to younger compared to older preschoolers during a puzzle-solving task. Thus, similar to the way that parents adapt their speech to young children, there is evidence that they also adapt and simplify their gestures based on children's age or ability level.

Different Types of Gestures May be Differentially Effective in Teaching and Learning Contexts

How might gestures by those in a teacher role facilitate learning? Two distinct theoretical possibilities include the notion that gestures can help by regulating children's attention during a task (Bangerter, 2004), and that they can provide relevant conceptual information via the representational content of the gesture, particularly about action- and visually-based strategies required by a task (Zukow-Goldring, 2006). Further explanation of these viewpoints requires consideration of gesture type. First, index gestures serve as a learning aid by helping to regulate the child's attention. Pointing can draw children's visual attention back to the learning task, and cue them to important features of the task (Wang et al., 2004). Such indexical gestures are prominent in child-directed gesturing. One type of indexical gesture, the proximal gesture (such as repeatedly tapping an object), may reduce ambiguity about the target of one's communication. Bangerter (2004) demonstrated that adults relied more on a communicator's pointing gestures, and less on their speech, to target their attention as distance between the gesture and referent decreased. The use of such index gestures alongside speech may prove to be more effective for instruction than the use of speech alone, which lacks visual cues to direct the child's attention.

Secondly, iconic gestures convey representational information relevant to a task. These include gestures mimicking the actions involved in a task. In a relevant study with adult learners, Lozano and Tversky (2006) examined the role of gesture versus speech in the teaching of the highly visual and action-based task of assembling a piece of furniture, namely a TV cart. The researchers presented participants with one of four instructional videos; two presented speech-based instruction, and two presented gesture-based instruction. Half of the videos highlighted structural components of the assembly task (e.g., using speech or gesture to indicate the location of peg holes and relative placement of shelves). The other half highlighted actions involved in the assembly (e.g., describing or pantomiming the insertion of pegs into corresponding holes). When asked to complete the assembly task on their own, learners made fewer errors and completed the task more quickly following gesture-only instruction, compared to speech-only instruction. Further, videos with representational action-based information yielded better performance than those that highlighted structural features. The authors argue that this evidence supports a direct embodiment hypothesis. In this view, gesture-based instruction facilitates performance because it conveys motor, or action-based, information that learners can then translate into their own actions.

Further, unlike empty-handed iconic gestures, such as those used in the study described above, the demonstration of an action with an object in hand may make the meaning of the relevant action more concrete, by demonstrating the relationship between the affordances of the object (i.e., functions or possible actions of the object) and the effectivities of the body (i.e., actions which the body is capable of producing; Zukow-Goldring, 2006). In addition, embodied gestures in which the parent or teacher wraps their own hands around the child's hands and helps them to perform an action – what Zukow-Goldring (2006) terms assisted imitation – may be particularly effective in helping a child internalize a specific way of manipulating or using an object.

Studies of young children provide support for the facilitative role of gestural representations of actions for learning, particularly for learners over the age of two years. Goodrich and Hudson Kam (2009) found that iconic gestures representing actions helped two- to four-year-old children learn novel verbs for those actions. Kindergarten and first-graders who received instructions via speech and iconic gesture performed better on Piagetian conservation tasks than children who received instruction via speech alone (Church, Ayman-Nolley & Mahootian, 2004). Goldin-Meadow and colleagues have argued that gestures make abstract information more concrete by providing a visible representational format from which children can glean information which changes their thinking (Goldin-Meadow et al., 1999; Goldin-Meadow et al., 2009). Gestures can express notions that may not be expressed as easily or efficiently in words alone. Thus, when parents use gestures along with their words, the child develops an “overlapping but not identical representation” (Goldin-Meadow et al., 1999, p. 729) of a concept.

The effectiveness of representational gestures may vary by age. In particular, the capacity to detect iconicity develops during the first two years of life (Namy, Campbell & Tomasello, 2004). In contrast to empty-handed iconic gestures, the demonstration of an action with an object in hand may make the meaning of the relevant action more concrete, which may be particularly helpful for very young children. There is little work describing embodied gestures, let alone their effectiveness in helping children learn. However, given that parents tend to use these gestures with the youngest children (e.g., infants and toddlers; Zukow-Goldring, 2006), we may expect that this form of child-directed gesturing is attuned to children's developmental needs and that it may be most effective at these early ages. However, given the “novice” status of anyone learning a new task or novel strategies, it is also possible that embodied gestures are effective at helping a child of any age learn a physical or visual-spatial task. In sum, indexical gestures may help children learn by focusing their attention on relevant aspects of the task whereas iconic gestures are thought to facilitate learning by representing abstract information concretely. Gestures completed with an object in-hand may make the learning objective even more concrete. Further, evidence suggests that not only do different kinds of gestures affect learning in different ways, but gestures differentially affect children depending on their age and ability level.

Current Study

The current study examines parents’ use of gestures to teach their young children – ages 18 months to 6 years – how to solve a wooden block puzzle (Figure 1). A pre- and post-assessment of children's puzzle performance was used to measure change in performance before and after parent instruction. This particular puzzle had several features that made it effective for this study. For practical reasons, the puzzle was simple enough that it could be explained quickly and, if solved efficiently, could be completed in a short period of time. However, the puzzle could be solved in many different configurations; this helped eliminate a possible ceiling effect, as more skilled children could potentially solve the puzzle multiple times in different variations. Because the puzzle involved organizing the placement of multiple blocks, we expected that it would elicit a variety of gestures from parents in a teaching role, including index gestures (e.g., pointing to particular pieces or to the board) and representational gestures (e.g., conveying the idea of rotating a piece). Our analysis includes a description of the types of gestures that parents produced spontaneously in this context. A final advantage of using this puzzle is that a partial solution is presented on the cover of the box (Figure 1). Thus, an additional strategy exists for explaining and/or solving the puzzle, by referring to the picture on the box.

Figure 1. Problem-solving task within the Zone of Proximal Development for 3 to 4.5 year olds. Photo of the block puzzle used in all phases of the puzzle task for all children in the study.

To examine the impact of gesture while teaching, we introduced an experimental manipulation. Namely, parents provided support for their child's puzzle-solving while their hands were either restricted from use (no-hands condition) or unrestricted (hands condition). Comparison across these conditions allowed us to ask the following research question:

1. Does parents’ freedom to use gesture in a teaching and learning task affect children's independent performance? Does this effect vary by child age?

Given the empirical evidence that teachers’ gestures can facilitate children's learning in formal settings, we expected that the hands condition would be more supportive of children's independent performance than the no-hands condition, across our full age range. We considered two possible hypotheses for the moderating effects of child age. First, we thought that the oldest children (older than 4.5 years) might be better able to utilize the information parents provide via their hands, because they may be better able to integrate the information coming from parents’ words and gestures. However, we also considered the possibility that it would be younger children – either at preschool age (3 to 4.5 years) or even younger (< 3 years) – who gained the most from parents’ use of gestures, because they are at an age at which parents’ gestures may be more necessary to make problem-solving strategies accessible to children.

As reviewed above, there is evidence that parents’ gestures differ based on their child's age (Gutmann & Turnure, 1979; Iverson et al., 1999) and perceived competence (Wertsch et al., 1980). However, to our knowledge, no studies have simultaneously examined how the characteristics and effectiveness of parents’ gestures vary by child age in an informal teaching-and-learning task. Thus, our second and third research questions are:

2. Do the frequency and types of parents’ gestures vary based on children's ages and skill levels, reflecting child-directed gesturing?

3. Does the frequency or types of parents’ gestures affect children's independent performance? Does this vary by child age?

For those in the Hands condition, we expected parents’ use of gestures to reflect children's learning needs in the puzzle task. We anticipated that parents would use more gestures when children were younger or less skilled, and that these gestures would be simpler and more obvious (e.g., less iconic/representational, more proximal). If such child-directed gesturing occurred, we expected it to facilitate children's puzzle-solving performance, particularly for the youngest children, under age 3 years, who may need parents to target their teaching by providing simpler or more concrete information. In contrast, we expected iconic/representational gestures to be most useful for older children, who may be most capable of integrating information across communication modalities.

Restricting parents’ use of their hands may mean more than simply eliminating gestural input – it may impact broader aspects of communication for those in the no-hands condition. One possibility is that parents may compensate with increased use of verbal messages; that is, they may increase their overall use of language and total words spoken. Another possibility is that their communication is disrupted by restriction of movement. Adults are less fluent in conveying spatial information when the hands are not free to gesture (Rauscher, Krauss, & Chen, 1996). This disruption may result in more repetitive verbal messages, with less variation, which may ultimately be less useful for children. We addressed these possibilities by comparing the speech of parents assigned to the two experimental conditions, including their word tokens, word types, and type/token ratios.
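These three lexical measures are standard: tokens count every word produced, types count distinct words, and the type/token ratio indexes lexical variety, so the more repetitive verbal messages described above would show up as a lower ratio. A minimal sketch of how such measures could be computed from a transcript follows; the tokenizer, function name, and per-minute standardization are our illustrative assumptions, not the authors' procedure:

```python
import re

def speech_measures(transcript: str, minutes: float) -> dict:
    """Word tokens, word types, and type/token ratio for one parent's speech,
    with tokens and types standardized per minute of the help phase."""
    # Crude tokenizer: lowercase words, keeping internal apostrophes (e.g., "don't")
    tokens = re.findall(r"[a-z']+", transcript.lower())
    types = set(tokens)
    ratio = len(types) / len(tokens) if tokens else 0.0
    return {
        "tokens_per_minute": len(tokens) / minutes,
        "types_per_minute": len(types) / minutes,
        "type_token_ratio": ratio,
    }
```

For example, a parent who says "put the piece here, put it here" produces 7 tokens but only 5 types, so repetition lowers the type/token ratio even when overall talk is plentiful.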

Method

Overview

Parent-child dyads were given time to work on the puzzle in three phases, each of which was videotaped. During the first phase, only the child worked on the puzzle while the parent was asked to observe (pre-help phase). During the second phase, the parent was asked to assist the child (help phase). During the third phase, the child again worked on the puzzle independently (post-help phase). Dyads were randomly assigned to one of two conditions: half of the parents were free to use their hands during the help phase, but the other half were asked to sit on their hands.

Participants

Participants were 134 children and parents visiting the Boston Museum of Science (MOS). Children were between 1.5 and 6 years old (M = 3.6, SD = 1.2); 53% of children were girls, and 79% of parents were mothers. Eight children were eliminated for various reasons, including that the child was a sibling of another participant, researcher errors in the protocol, or the child had become frustrated or walked away from the task before the end of the experiment. The final sample consisted of 126 children.2 Demographic information specific to this sample is not available because researchers at the MOS do not collect individual demographic information. However, for the year these data were collected, museum data on visitor demographics indicated that the majority of visitors identified their race as white (89%), with smaller percentages of Asian/Asian-American (4%), black/African-American (3%), American Indian or Alaskan Native (1%), and other (3%); 3% of visitors identified their ethnicity as Hispanic/Latino.

Procedure

Parents who were accompanying young children in the museum's children's area were approached by a research team member and were invited to participate in a study involving a block puzzle activity. Parents were not given any information about the specific behaviors of interest nor the hypotheses of the study until after they had completed participation.

After parents signed a consent form, parent-child dyads were video recorded while sitting next to each other at a child-size table. The puzzle box, which depicted one way in which the puzzle could be partially completed, was positioned on the table across from the child. A researcher brought the assembled puzzle to the child and took the pieces off the board. The researcher said to the child, “This is a puzzle for you to do. I'm going to take the pieces out. There are many ways to solve the puzzle. Let's see if you can put the pieces back together!” Then the researcher said to the parent, “For the first minute, we'll let your child work on his/her own. Then, for the next two minutes, it will be time for you to help. For the last minute, we'll let him/her work alone again. Until I say you can help, please try not to say anything or give any help.” The researcher stepped back from the table and set a timer for one minute (pre-help phase). After one minute, the researcher approached the table and said, “Look how well you did on your own! Now Mom/Dad can help you. I'm going to take the pieces out. Can you put the puzzle back together?” Meanwhile, the researcher took all the pieces off the board. The researcher turned to the parent and said “Now you can give your child whatever help you think he/she needs.” If the dyad had been randomly assigned to the no-hands condition, the researcher added, “The only rule is that you need to sit on your hands while you help. Please don't use your hands to help.” Finally, the researcher told all parents, “In about 2 minutes, we'll let your child do the puzzle alone again.” The researcher backed away from the table and started the timer (help phase). After 90 seconds, the researcher returned to the table and said “O.k., let's have ________ (child's name) work on it on his/her own for another minute.” The researcher removed all puzzle pieces from the board while saying to the child “I'm going to take the pieces out now. O.k., now can you do it by yourself?” This third phase lasted one minute (post-help phase). When the final minute was over, the researcher let the child finish the puzzle if s/he was still working on it, then thanked the parent and child for their participation, and gave the child a sticker. At that time, the researcher also offered to tell the parent more about the study if the parent was interested.

Coding and Scoring

Peak Puzzle Score

A team of two trained researchers coded, from video, children's puzzle performance in each of the three phases. The same codes were used for each phase of the puzzle-solving session. Coders identified the point at which the puzzle was most complete in each phase. At this point, a peak score was calculated, equal to the number of puzzle pieces lying flat on the board. Pieces were excluded from the peak score if they were stacked on top of another piece, were standing upright, or were hanging off the edge of the puzzle. The highest score possible was 11, indicating that all pieces were on the board and that the puzzle was completed. See Table 2 for the ranges, means, and standard deviations of this variable during each phase of the puzzle task (Peak Scores 1, 2, and 3). Peak Scores 1 and 3 represent the child's independent performance, before and after parental help, respectively. Peak Score 2 represents the child's score with the parents’ help, either with or without the use of their hands.

Table 2.

Descriptive statistics for all dependent and independent variables (N = 126).

Values are frequency or mean (SD), range, and percentage within category.

Child age in years: 3.62 (1.21); range 1.36 – 6.00
    Child age < 36 months: n = 47; 37.3% of children
    Child age 36 – 54 months: n = 44; 34.9% of children
    Child age > 54 months: n = 35; 27.8% of children
Child sex (1 = girl): n = 67; 53% of children
Peak Score 1: Most Pieces on Board in Pre-Help Phase: 6.45 (3.13); range 0 – 11
Peak Score 2: Most Pieces Flat in Help Phase: 8.20 (3.04); range 0 – 11
Peak Score 3: Most Pieces Flat in Post-Help Phase: 7.30 (3.16); range 1 – 11
Word Types per minute (n = 63): 16.04 (6.00); range 2.59 – 29.72
Word Tokens per minute (n = 63): 37.37 (17.86); range 2.59 – 82.88
Type/Token ratio (n = 63): 0.47 (0.13); range 0.30 – 1.00

Gestures by parents in the Hands Condition: gestures per 90 seconds; percent of all gestures

Embodied actions: 0.78 (1.41), range 0 – 8.49; 7.3% (10.3%) of gestures, range 0 – 50.0%
Demonstrations: 0.36 (0.77), range 0 – 3.21; 3.8% (8.3%) of gestures, range 0 – 40.0%
Icons: 0.82 (1.40), range 0 – 6.36; 14.7% (14.6%) of gestures, range 0 – 56.0%
Indexes: 5.81 (5.44), range 0 – 24.71; 60.6% (24.1%) of gestures, range 8.0 – 100%
    Taps/Shows: 2.75 (3.36), range 0 – 17.83; 47.9% (32.5%) of indexes, range 0 – 100%
    Proximal points: 1.77 (2.55), range 0 – 12.77; 26.8% (26.6%) of indexes, range 0 – 100%
    Distal points: 0.89 (1.32), range 0 – 6.12; 16.7% (22.7%) of indexes, range 0 – 100%
    Indicating Pieces: 1.85 (2.43), range 0 – 13.24; 33.9% (29.2%) of indexes, range 0 – 100%
    Indicating Board: 3.23 (3.26), range 0 – 11.89; 54.9% (30.6%) of indexes, range 0 – 100%
    Indicating Box: 0.61 (1.54), range 0 – 7.94; 9.4% (19.4%) of indexes, range 0 – 71.0%
Other gestures: 0.80 (1.43), range 0 – 8.64; 13.7% (17.8%) of gestures, range 0 – 92%
Total Gesture Frequency (sans helping actions): 9.63 (7.62); range 0 – 37.06
Total Gesture Variety (embodied, demo, icon, tap/show, proximal index, distal index; not standardized per 90 seconds): 3.15 (1.68); range 0 – 6.00
Helping actions: 7.11 (6.56); range 0 – 24.75

Inter-coder reliability was calculated using Cohen's Kappa coefficients across 10% of videos. Kappa scores for the peak number of puzzle pieces lying flat on the puzzle board ranged from .72 to 1.0, with an average of K = .84. The number of pieces counted as flat by each coder had to be exactly the same to be considered agreement. Cohen's Kappa was used as a measure of inter-observer agreement because it takes into account the percent of agreement that could have occurred by chance, and is thus a much more conservative measure than a simple percent of agreement (Bakeman & Gottman, 1997); Kappas between 0.6 and 0.75 are considered good, and Kappas over 0.75 are considered excellent agreement (Fleiss, 1981, as described by Bakeman & Gottman, 1997).
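Cohen's Kappa corrects raw percent agreement for the agreement expected by chance given each coder's marginal category frequencies: kappa = (P_observed - P_chance) / (1 - P_chance). A minimal sketch of that computation for two coders follows; the function name and example data are illustrative, not the study's analysis code:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' paired categorical judgments."""
    assert len(coder_a) == len(coder_b) and len(coder_a) > 0
    n = len(coder_a)
    # Observed agreement: proportion of items the two coders scored identically
    p_obs = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: for each category, the product of the coders'
    # marginal proportions, summed over categories
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_chance = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_obs - p_chance) / (1 - p_chance)
```

Because P_chance rises when one category dominates (e.g., most pieces coded as flat), kappa is lower than simple percent agreement on the same data, which is why it is the more conservative index.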

Parent Gesture

A second team of coders coded, from video, the gestures produced during the help phase by parents in the group who were allowed to use their hands. There were six mutually exclusive categorical codes applied to all of parents’ hand, arm, or shoulder movements; head gestures were not coded. Five gesture codes, modeled after codes originally developed by Zukow-Goldring (2006), are described in Table 1, along with one other “Helping action” code, which captured manual actions other than gestures that were part of adults’ support of the task. Gestures were parsed as single acts, and each was assigned one mutually exclusive code; parsing was based in part on timing, including brief pauses between manual actions, and in part on transitions between types of movements. If a parent transitioned from one type of gesture to another in quick succession, both types were captured; however, if a gesture appeared to be a combination of types and a dominant type could not be determined, it was coded as “other.” Potential gesture combinations were not identified or coded as such, though these could be determined from the data because the data include both the type and timing of gestures. In the Indexical category, there were several subtypes, including pointing, showing, and what we refer to as complex indicatives. Each of these subtypes is described in Table 1. Each pointing gesture was coded for its proximity to its referent object: touching the referent (i.e., tapping), proximal pointing within two inches of the referent, and distal pointing in which the hand did not come within two inches of the referent. A team of two researchers coded the parents’ gestures. Inter-observer reliability was assessed on 15% of the episodes. Percent agreement between coders ranged from 72% to 92%, and the average Kappa was 0.62, which is considered good agreement (Bakeman & Gottman, 1997).

Table 1.

Codes for adult gestures and other scaffolding behavior.

Code Definition
Embodied Action Parent moves child's hand/body, or moves puzzle piece with child where both parents’ and child's hands are on the same piece.
Demonstration Gesture demonstrates an action with a puzzle piece in hand, but does not leave the block on the board. For example, parent rotates a puzzle piece in his/her own hand over an empty spot on the board, then sets the piece on the table.
Iconic The motion of the gesture represents some essential feature of the puzzle piece or board (either the form or the path of the object), but without an object in the hand forming the gesture. For example, parent's hand is shaped as if it were holding a puzzle piece, then parent rotates the hand as if turning the puzzle piece.
Indexical Gesture is used to indicate/refer to some part of the puzzle task. Sub-categories of indexical gestures are described below.
    Show Parent extends arm and hand to show an object held in hand, without giving the object to the child.
    Tap Pointing gesture comes in contact with the referent object, tapping it. This can be a single touch that is held to the surface of the object, a single quick touch, or a repeated tapping in short succession.
    Proximal Point At the furthest extent of the pointing gesture, the hand is within two inches of the referent object, but does not touch the object.
    Distal Point At the furthest extent of the pointing gesture, the hand does not come within two inches of the referent object.
    Complex Indicative A motion of the hand that is used to indicate multiple objects related to the puzzle task. For example, parent sweeps his/her hand over a set of pieces in a circular motion.
    Other Index A hand motion that appears to refer to an object related to the puzzle task, but does not fit within any of the other sub-code definitions.
Helping Action Action is used to help the child with the task in a physical way. These actions included: moving the board closer to the child, moving puzzle pieces closer to the child, moving the box, giving the child a puzzle piece, placing a piece on the board, moving pieces around on the board, and removing pieces from the board.
Other Gesture Any other gesture, with or without an object in hand, that was related to the puzzle task but could not be coded in one of the other categories. These other gestures included a palms up gesture which often accompanies questions such as “Where?” or “What?”, a shoulder-shrugging gesture which conventionally means “I don't know”, clapping or giving a high five when the child finished the puzzle, or taking a piece from the child.

We derived three sets of variables to describe the parents’ gestures. First, we calculated the frequency of each type of gesture per 90 seconds (the modal length of the parent help phase). Second, we calculated the percentage of parents’ gestures that fell into each category. Complex indicatives and other indexes were so rare that we did not calculate relative frequencies and percentages for them, though they are represented in the total frequency of indexical gestures. We combined tapping and showing gestures into taps/shows because both are indexes that touch their referent objects. Finally, we created a single variable for parents’ gesture variety, indicating how many of the categories and subtypes of gesture each parent used (embodied, demonstration, icon, tap/show index, proximal index, distal index). Descriptive statistics for all variables are in Table 2.

Parent Language

A third team of researchers transcribed the language used by parents during the help phase, for all parents who spoke English (n=116) and for whom the audio recording quality of the videos was sufficient to hear a majority of the language (n=63; Hands condition = 31, No Hands condition = 32). The open and public nature of the study setting precluded capturing high-quality audio from many of the dyads. The CHAT software associated with the Child Language Data Exchange System (CHILDES) project (MacWhinney, 2000) was used for transcription.

Transcripts were analyzed using CLAN software to determine the parents’ word types (number of unique words spoken; e.g., “puzzle” and “piece” are two unique words, while “piece” and “pieces” are just one unique word) and tokens (total number of words spoken, i.e., every word spoken by the parent was counted, regardless of duplications) per 90 seconds, as well as the type/token ratio. Descriptive statistics are available in Table 2.
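The type, token, and type/token measures can be sketched in a few lines. This is a simplified stand-in for CLAN's analysis, with a made-up utterance and timing: it treats any whitespace-separated string as a word and ignores morphology, so unlike the CLAN-based coding described above, "piece" and "pieces" would count as two types here.

```python
def lexical_measures(transcript, seconds):
    """Word types, tokens, and type/token ratio, scaled per 90 s.

    Simplified sketch of the measures described in the text; real CHAT
    transcripts require fuller tokenization than a lowercase split.
    """
    tokens = transcript.lower().split()   # every word, duplicates included
    types = set(tokens)                   # unique words only
    scale = 90.0 / seconds                # normalize to the 90 s modal phase length
    return {
        "types_per_90s": len(types) * scale,
        "tokens_per_90s": len(tokens) * scale,
        "type_token_ratio": len(types) / len(tokens),
    }

# Hypothetical 45-second help-phase snippet
m = lexical_measures("put the piece here try turning the piece", 45)
print(m)
```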

Analyses and Results

Preliminary Language Analysis

Our primary hypothesis was that parents’ gestures support children's learning in the puzzle task, as indicated by higher puzzle scores for children after their parents’ help when they were in the Hands condition as compared to the No Hands condition. However, before proceeding with this test, we examined the effect of the experimental condition on parents’ language, to rule out the possibility that any group differences would be an unintended consequence of the condition on parents’ language use, rather than their gesture use. We used independent samples t-tests to test the possibility that restricting use of the hands had a diminishing effect on speech; if so, the mediating variable of parental language could explain any effect of the hands condition on children's puzzle success. We used parents’ word tokens as an indicator of the amount of speech parents produced, word types (number of unique words spoken) as an indicator of the variety of parents’ language, and the type/token ratio as a proxy for potential dysfluencies in speech. We reasoned that if parents repeated themselves or used a reduced vocabulary because they were not allowed to use their hands, this would appear as fewer types per token. However, as seen in Table 3, there were no significant differences between the two conditions in word types or tokens per minute, nor in the type/token ratio. The equality of the means suggests that the No-Hands restriction led neither to compensation through excessive language nor to suppression of verbal input. Thus, we did not include parent language variables in our models testing the effects of the Hands/No-Hands conditions on children's puzzle success (Research Questions 1 and 3).

Table 3.

Results of T-tests for differences between Experimental Groups in Parent Language Use (n=62)

Mean (SD) T-test

No Hands Hands t-value (df), p value
Parent Word Types per minute 16.80 (5.70) 15.25 (6.30) 1.03 (61), p = .31
Parent Word Tokens per minute 38.76 (17.23) 35.92 (18.66) 0.63 (61), p = .53
Parent Type/Token Ratio 0.47 (0.14) 0.47 (0.12) 0.03 (61), p = .97

Effects of Parents’ Use of Hands to Support Children's Problem-Solving

For our first research question, we expected that parents’ ability to use their hands to gesture and guide children's problem-solving during the help phase would provide better support and result in more learning. This would be indicated by a greater increase in children's peak puzzle scores between the pre-help and post-help phases for children in the Hands condition. Further, we anticipated that age may render children differentially able to take advantage of parents’ support via gestures, and thus that the effects of the experimental condition may vary by child age. We therefore tested whether it is the older children (i.e., older than 4.5 years), who may be better able to integrate and utilize information provided via two modes of communication (speech and gesture), who benefit most from parents’ use of hands, or the youngest children, who may be in most need of parents’ gestures to guide their attention to relevant features and make a complex task more concrete. We used linear regression to test the effects of parents’ ability to use their hands (experimental condition) on children's learning between the pre- and post-help opportunities to solve the puzzle, as indicated by their peak scores in the post-help phase, controlling for their peak scores during the pre-help phase; thus the outcome can be interpreted as residualized change between the pre- and post-help phases. We then moderated the effect of the hands condition by child age group to test whether children are better able to take advantage of parents’ gestures at certain ages. Because we considered a non-linear effect of child age, we included dummy variables for child age group in the regression models, and interacted child age group with experimental condition, comparing the older two groups (36 – 54 months, > 54 months) to the youngest group (< 36 months), which served as the intercept.3
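The specification above (age-group dummies, pre-help score as covariate, hands condition, and a hands-by-oldest-group interaction) can be illustrated with ordinary least squares. This is a sketch only: the "children" and design below are made up, the noise-free outcome is generated from coefficients that merely echo the magnitudes in Table 4 Model B, and `np.linalg.lstsq` stands in for the authors' regression software.

```python
import numpy as np

# Columns: intercept, age 36-54m dummy, age >54m dummy, pre-help score,
# hands condition (1 = Hands), hands x (age >54m) interaction.
def design_row(age_group, pre, hands):
    mid = 1.0 if age_group == "mid" else 0.0
    old = 1.0 if age_group == "old" else 0.0
    return [1.0, mid, old, pre, hands, hands * old]

# Made-up, noise-free data: coefficients loosely echo Table 4 Model B.
beta_true = np.array([4.31, 1.15, 3.00, 0.55, 0.82, -1.90])
rows = [design_row(g, pre, h)
        for g in ("young", "mid", "old")
        for h in (0.0, 1.0)
        for pre in (2.0, 5.0)]
X = np.array(rows)
y = X @ beta_true          # post-help peak score, residualized-change form

# OLS fit; with noiseless data this recovers beta_true exactly
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta_hat, 2))
```

Because the pre-help score is a covariate on the right-hand side, the fitted coefficients on the condition and age terms describe residualized change, as in the text.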

As seen in Model A in Table 4, counter to our hypothesis, there was no main effect of the hands condition on change in children's puzzle success across all children, controlling for child age. There was a main effect of child age, with both the middle and oldest age groups showing more pre-to-post change than the youngest group. Once we accounted for the possibility of age differences in the effect of the condition (the interaction between hands condition and child age), there was a positive effect of the hands condition for the youngest two groups of children (see Model B of Table 4 and Figure 2). One unexpected finding was a negative effect of the hands condition for children 4.5 years and older. As seen in Figure 2, children in the oldest group did better than younger children in the no-hands condition, but scored the same as children between 3 and 4.5 years when parents could use their hands.

Table 4.

Effect of experimental condition (parents' ability to use hands) on children's post-help peak score.

Model A: Child Age and Hands Condition Model B: Child Age * Hands Condition
Step 1
    Constant (Child age < 36 m) 4.565***
(0.327)
4.311***
(0.390)
    Child Age 36-54 m 1.177*
(0.475)
1.150*
(0.465)
    Child Age > 54 m 2.104***
(0.522)
3.004***
(0.680)
    Pre-Help Score 0.544***
(0.078)
0.551***
(0.077)
    Hands (1), No Hands (0) 0.288
(0.397)
0.818~
(0.458)
Step 2
    Hands*Child Age> 54m −1.899*
(0.868)

Model Fit
    R2 0.532 0.555
    F 26.725*** 23.199***
    F Change -- 4.786*
~ p < .10; * p < .05; ** p < .01; *** p < .001

NOTE: Results of linear regression analyses with pairwise deletion.

Figure 3.


Effect of child age on parent gestures. This figure illustrates the group differences in parents’ helping actions and gestures (per 90 seconds) between child age groups for parents in the hands condition (n = 62), according to post-hoc Bonferroni tests following a one-way ANOVA of group differences. ~p < .10, *p < .05, ** p < .01.

Parents’ Use of Gestures to Support Children's Problem-Solving

For our second research question, we expected that parents’ gesturing would vary based on the child's age and skill level. Further, variation in parents’ use of gestures might represent particular strategies or approaches to supporting children's problem-solving in the puzzle task. To explore the possibility of specific gestural strategies, we conducted an exploratory factor analysis (EFA) including the frequencies of each feature of parents’ gestures. We included the frequency of each major gesture type (indexes, icons, demonstrations, and embodied), the index referents (box, board, pieces), and the index distances (tap/show, proximal, distal). We did not specify a number of factors, but set the eigenvalue criterion to 1; we used the correlation method of estimation with varimax rotation. The results (presented in Table 5) indicated three orthogonal factors which together explain 68.6% of the variance in the features of parents’ gestures. The first factor included proximal indexes, iconic gestures (empty-handed representational gestures), and indexes to the board and pieces. These gestures may represent a “hands off” abstracting strategy in which the parent uses gestures to point out to children what they should do, and provides abstract strategies. The second factor included embodied gestures, indexes that touched their referents (tap, show), and demonstration gestures (representations with the pieces in hand); these gestures may indicate an embodying strategy in which parents use hands-on gestures to help children internalize information. The third factor included indexes to the box and distal indexes; this strategy may represent parents’ use of gestures to orient the child to the general nature of the task. Because these dimensions are orthogonal, parents could be high or low on each one.
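The varimax rotation used in the EFA can be sketched with Kaiser's standard iterative SVD formulation. The loading matrix below is made up for illustration (it is not the study's solution); the key property the rotation preserves is each variable's communality, which is why rotated and unrotated solutions explain the same total variance.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Kaiser's varimax rotation of a factor loading matrix.

    Standard iterative SVD formulation (gamma = 1 gives varimax);
    returns the orthogonally rotated loadings.
    """
    n, k = loadings.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        Lr = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (Lr ** 3 - (gamma / n) * Lr @ np.diag((Lr ** 2).sum(axis=0))))
        R = u @ vt
        d_new = s.sum()
        if d_new < d * (1 + tol):   # criterion stopped improving
            break
        d = d_new
    return loadings @ R

# Made-up unrotated loadings for five gesture features on two factors
L = np.array([[0.7, 0.5],
              [0.6, 0.4],
              [0.5, -0.6],
              [0.4, -0.5],
              [0.6, 0.1]])
L_rot = varimax(L)
print(np.round(L_rot, 2))
```

Because the rotation matrix is orthogonal, each row's sum of squared loadings (the communality) is identical before and after rotation; varimax only redistributes loadings toward simple structure.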

Table 5.

Results of Factor Analysis of Features of Parents' Gestures

Abstracting Embodying Orienting
Indexes to the Board .876 .229 −.086
Proximal Indexes .800 −.014 .308
Iconic Gestures .617 −.117 −.151
Indexes to the Pieces .616 .385 .449
Embodied Gestures .191 .828 −.131
Tap/ Show Indexes .562 .722 .094
Demonstration Gestures −.207 .648 .098
Indexes to the Box −.101 .230 .861
Distal Indexes .108 −.210 .740

Based on these factors, we created gesture strategy composites by averaging the z-scores of each of the gestural features identified as part of each factor. For example, for the embodying composite, we averaged the z-scores of the frequencies of parents’ embodied gestures, demonstrations, and indexes that touched their referents (tap/show). In this way, though multiple features of a single gesture (e.g., referent and distance) may be taken into account in a composite score, the scores do not represent a double-counting of individual gestures. As described below, we use these composites to detect child effects on parents’ gesturing strategies (Question 2), and the effects of parents’ strategies on children's puzzle success (Question 3).
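The composite construction is simple to sketch. The per-90-second frequencies below are made up for illustration, and the sample standard deviation (`statistics.stdev`) stands in for whatever standardization the authors' software applied; the example builds the embodying composite from its three features, as described above.

```python
import statistics

def zscores(values):
    """Standardize a list of frequencies using the sample mean and SD."""
    mu = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mu) / sd for v in values]

# Made-up per-90s frequencies for four parents
embodied = [0, 1, 2, 5]
demos    = [1, 0, 1, 2]
tap_show = [2, 4, 6, 8]

# Embodying composite: average of the three features' z-scores,
# so each gesture is counted once per feature, not double-counted
composite = [statistics.mean(triple)
             for triple in zip(zscores(embodied), zscores(demos), zscores(tap_show))]
print([round(c, 2) for c in composite])
```

Because each feature is standardized before averaging, a parent's composite reflects relative standing on the strategy rather than raw gesture counts, and the composites average to zero across parents.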

Child Effects on Parents’ Gestures

For those in the Hands condition, we expected parents’ use of gestures to reflect children's needs for support. We were interested in determining which specific features of gestures varied across parents based on child age or ability. Thus, we first examined the effects of child characteristics on the frequencies of each gestural feature; then we examined effects on the strategy composites we created based on the EFA results. In each case, we used hierarchical linear regression with each feature of parents’ gestures as the dependent variable, and child characteristics as the independent variables. We tested the effects of two child characteristics – age and initial puzzle skill – as two non-mutually exclusive qualities which may explain variation in parents’ gestures.

To test these hypotheses we examined whether, for those in the Hands condition, parents’ use of gestures was influenced by child age or by the child's success on the puzzle during the first, pre-help, segment. We entered child age first as a proxy for the child's general developmental stage and skills; then we entered the child's initial puzzle success (the child's peak score during the pre-help session) to see whether it had any additional effect over child age. We explored both a continuous specification of child age and the categorical child age groups (< 36 m, 36-54 m, and > 54 m). Because of the nonlinear effects of child age on parents’ gestures, the best-fitting models were consistently those with categorical specifications of child age, rather than the continuous predictor.

Results for the effects of child age and puzzle skill on specific features of parents’ gestures are presented in Tables 6A (main categories of gesture) and 6B (features of indexes). As seen in Table 6A, child age was negatively related to the total frequency of parents’ gestures (Model A), and more specifically the frequency of icons (Model D) and of indexes (Models E-H). Child age did not impact parents’ use of embodied gestures (Model B) or demonstrations (Model C). Once the effects of child age were controlled, children's success in the pre-help puzzle task did not affect parents’ use of any of the major gesture categories.

Table 6A.

Effect of child age and pre-help puzzle success on the frequency of all gestures and each main category of parents' gestures

Model A: All Gestures Model B: Embodied Model C: Demonstrations Model D: Icons Model E: Indexes
Step 1
    Constant
(Child age < 36 m)
14.144***
(1.537)
1.218***
(0.321)
0.540**
(0.177)
0.052
(0.050)
2.027***
(0.361)
    Child Age 36-54 m −6.395**
(2.124)
−0.452
(0.444)
−0.148
(0.245)
−0.160*
(0.069)
−0.443
(0.498)
    Child Age > 54 m −7.225**
(2.306)
−0.772
(0.482)
−0.349
(0.266)
−0.252**
(0.085)
−1.369*
(0.541)

Model Fit
    R2 0.188 0.046 0.030 0.449 0.106
    F 6.372** 1.320 0.860 3.100* 3.272*

When examining the features of index gestures, differential effects of child age emerged, as did effects of children's initial puzzle success. Models F – H in Table 6B reveal that, in general, child age was associated with greater distance of gestures; this can be seen both in the greater use of tap/show and proximal indexes with younger children, and in the greater use of distal indexes with older children. Further, controlling for child age, parents used fewer distal indexes with children who showed more skill in the pre-help puzzle session. Models I – K reveal effects of child age and puzzle skills on the referents of parents’ indexes. Parents used fewer indexes to the pieces with children over 3 years, and incrementally fewer indexes to the board with children in the older two age groups compared to the youngest. However, child age did not have a significant effect on indexes to the box; instead, it was children's initial puzzle skills which influenced parents’ indexes to the box. Similar to the effects on distal indexes, children who showed greater skill in the pre-help puzzle session saw fewer indexes to the box when parents could help.

Table 6B.

Effect of child age and pre-help puzzle success on features of parents' index gestures.

Model F: Tap/ Show Indexes Model G: Proximal Indexes Model H: Distal Indexes Model I: Indexes to Pieces Model J: Indexes to Board Model K: Indexes to Box
Step 1
    Constant
(Child age < 36 m)
5.055***
(0.654)
2.671***
(0.511)
0.459
(0.277)
3.700***
(0.658)
5.550***
(0.803)
0.920*
(0.393)
    Child Age 36-54 m −3.357***
(0.904)
−1.607*
(0.707)
0.586
(0.383)
−2.427*
(0.909)
−2.277*
(1.109)
−0.558
(0.544)
    Child Age > 54 m −3.868***
(0.981)
−1.253
(0.767)
1.270**
(0.455)
−2.138*
(0.987)
−3.113*
(1.204)
0.455
(0.646)
Step 2
    Pre-Help Peak Score / Minute (mean-centered) −0.248***
(0.062)
−0.238**
(0.087)

Model Fit
    R2 0.265 0.092 0.244 0.129 0.120 0.159
    F 9.916*** 2.774~ 5.798** 4.064* 3.762* 3.406*

To confirm the differences by child age group in each gesture type and index feature, we used ANOVA with child age group as the factor, and the frequencies of parents’ use of each gesture type or feature as the dependent variables. There were significant child age group differences in parents’ total frequency of all gestures (F = 3.572, df = 61, p = .034), and in the frequency of indexes (F = 3.716, df = 61, p = .030). Further, several features of parents’ indexes also varied by child age, including the frequency of tap/show indexes (F = 7.963, df = 61, p < .001), indexes to the puzzle pieces (F = 3.385, df = 61, p = .041), and indexes to the board (F = 2.629, df = 61, p = .081). Figure 3 displays the mean number of parents’ gestures per 90 seconds by age group, with significant results of one-way ANOVA tests with post-hoc Bonferroni comparisons indicated on the figure.
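The one-way ANOVA F statistic used here has a compact between/within form, sketched below with made-up gesture frequencies for three age groups (the study's actual data are not reproduced).

```python
import statistics

def one_way_anova_F(*groups):
    """F statistic for a one-way ANOVA: between-group mean square
    divided by within-group mean square, with its degrees of freedom."""
    all_vals = [v for g in groups for v in g]
    grand = statistics.mean(all_vals)
    k, n = len(groups), len(all_vals)
    # Between-group sum of squares: group means vs. the grand mean
    ss_between = sum(len(g) * (statistics.mean(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: individuals vs. their group mean
    ss_within = sum((v - statistics.mean(g)) ** 2 for g in groups for v in g)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

# Made-up total gesture frequencies per 90 s for three child age groups
young = [14, 16, 12, 18]
mid   = [8, 10, 7, 9]
old   = [7, 6, 9, 8]
F, df_b, df_w = one_way_anova_F(young, mid, old)
print(round(F, 2), df_b, df_w)
```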

Interestingly, parents used the most tap/show indexes with children under 3 years, and less frequent tap/shows with children in the middle and oldest groups (Table 6B, Model F, and Figure 3). In fact, the greater use of tap/show indexes with the youngest group seems to account for the majority of the difference in parents’ total gesture frequency between child age groups. In contrast, parents used more distal gestures with the oldest children; but even accounting for child age, they used fewer distal gestures when children did better on their own during the pre-help puzzle task, confirming the idea of distal gestures as part of an orienting strategy.

Finally, we examined the effects of child age and puzzle skills on the gesture strategy composites we had created based on the results of the exploratory factor analysis which identified patterns in parents’ gestures. We followed the same analytic approach we had used to determine child effects on each gesture feature; we used the gesture strategy composite scores as dependent variables in a hierarchical regression, regressing each composite first on dummy variables representing two of the three child age groups, then adding child puzzle success from the pre-help session (pre-help peak score). Then, to confirm child age differences, we conducted an ANOVA with child age groups as the independent factor, using post-hoc Bonferroni tests to examine specific between-group differences.

As seen in Table 7, parents’ use of the orienting strategy did not differ significantly by age (See Model A), but was affected by children's initial puzzle success; parents of children who did more poorly on the puzzle initially used the orienting strategy more. Child age was negatively related to both the abstracting (Model B) and embodying strategies (Model C), such that parents of children in the older two groups used these strategies less. These results were confirmed in ANOVA with post-hoc Bonferroni comparisons. As seen in Figure 4, there were significant child age group differences in parents’ use of the abstracting strategy (F = 3.448, df = 61, p = .038) and the embodying strategy (F = 3.373, df = 61, p = .041), with significant differences (at the p < .05 level) between the oldest and youngest groups in the post-hoc comparisons.

Table 7.

Effect of child age and pre-help puzzle success on parents' gesture strategy composites.

Model A: Orienting Strategy Composite Model B: Abstracting Strategy Composite Model C: Embodying Strategy Composite
Step 1
    Constant (Child age < 36 m) −0.294
(0.210)
0.315~
(0.158)
0.305~
(0.159)
    Child Age 36-54 m 0.120 (0.262) −0.423~
(0.223)
−0.390~
(0.224)
    Child Age > 54 m 0.450
(0.297)
−0.595*
(0.242)
−0.604*
(0.243)
Step 2
    Pre-Help Peak Score −0.133*
(0.054)
-- --

Model Fit
    R2 0.109 0.107 0.104
    F 2.274~ 3.384* 3.307*

NOTE: Composites were derived from the results of the factor analysis and expressed as the average of the z-scores of gesture feature components.

NOTE: The continuous specification of child age was also tested, but the categorical specification produced models that fit better and explained more variance.

Figure 4.


Average z-scores of composites of the features of parents’ gestures by child age group. This figure illustrates the effects of child age on parents’ gesture strategies as represented by composites of gesture features based on the exploratory factor analysis for parents in the hands condition (n = 62); asterisks indicate significant differences between child age groups according to post-hoc Bonferroni tests following a one-way ANOVA of group differences. *p < .05.

Effects of Parents’ Gesture Strategies on Children's Independent Performance

After identifying what appear to be gesturing strategies, which parents modified according to child age and, to some extent, child ability, we examined the effects of these strategies on children's performance for those in the Hands condition. Because we had also identified child age-related differences in the effects of the hands condition on children's puzzle scores, we included child age as a moderator of the effects of parents’ gestures. We were particularly interested in potential differences in the effects of the Abstracting and Embodying dimensions, because both contained indexes (proximal versus tap/show) and representational gestures (icons versus demonstrations), but used in differential proximity to both the referents and the recipients of the gestures. We used hierarchical regression to test the effects of the overall frequency of parents’ gestures during the help session on children's peak scores in the post-help session, controlling for child age and pre-help performance (Step 1). In Step 2, we tested the effects of the total frequency of parents’ gestures and its interactions with child age.4 Finally, in Step 3, we tested whether, after controlling for the total frequency of gestures, the features of parents’ gestures influenced children's puzzle scores, and whether these effects varied by child age; we used the gesture strategy composites (averages of z-scores) as the predictors, and interacted these with the child age group variables.

As seen in Model A of Table 8, there was no main effect of the total frequency of parents’ gestures, but there was a significant interaction between child age group (> 54 months) and the frequency of parents’ gestures, such that the oldest children were those who most benefited from their parents’ gesture frequency. For a change in parents’ gesture frequency of one standard deviation, children older than 4.5 years increased their peak puzzle score by an average of 4.5 pieces, an effect size of 1.44. Interestingly, this is the group of children whose scores appeared to be diminished by their parents’ ability to use their hands freely (Figure 2), and whose parents used the fewest gestures overall (Figure 3); yet when parents used gestures more frequently, these children learned more.

Table 8.

Effect of parents' use of hands and specific types of gestures on children's post-help peak score.

Model A: Total Gesture Frequency Model B: Effects of Orienting Strategy Model C: Effects of Abstracting Strategy Model D: Effects of Embodying Strategy
Step 1
    Constant (Child age < 36 m) 5.529***
(0.571)
5.663***
(0.610)
5.457***
(0.589)
5.200***
(0.543)
    Child Age 36-54 m 2.205**
(0.795)
2.037*
(0.837)
2.273**
(0.816)
2.492**
(0.739)
    Child Age > 54 m 4.381***
(1.090)
4.136**
(1.183)
4.624***
(1.168)
6.293***
(1.234)
    Pre-Help Score 0.384**
(0.117)
0.421**
(0.134)
0.377**
(0.122)
0.307*
(0.114)
Step 2
Total Gesture Frequency 0.126
(0.385)
−0.050
(0.452)
−0.274
(0.599)
1.158~
(0.626)
Total Gesture Frequency *Age 36-54m −0.436
(1.086)
0.062
(1.234)
0.354
(1.959)
−1.854
(1.296)
Total Gesture Frequency *Age>54m 4.542*
(1.827)
4.670*
(1.879)
4.124
(2.543)
1.929
(1.869)
Step 3
    Orienting Strategy 0.534
(0.670)
    Orienting * Child Age 36-54 −1.639
(1.857)
    Orienting * Child Age > 54 −0.636
(1.144)
    Abstracting Strategy 0.688
(0.770)
    Abstracting * Child Age 36-54 −1.348
(2.674)
    Abstracting * Child Age > 54 1.011
(3.361)
    Embodying Strategy −1.563~
(0.797)
    Embodying * Child Age 36-54 2.063
(1.735)
    Embodying * Child Age > 54 9.868**
(2.964)

Model Fit
    R2 0.539 0.549 0.550 0.638
    F 8.949*** 5.806*** 5.864*** 8.427***
    F Change 2.423~ 0.317 0.398 3.945*
~ p < .10; * p < .05; ** p < .01; *** p < .001

Figure 2.


Effect of Hands Condition (parents allowed to use hands) on the puzzle success of children in three age groups. This figure displays the effect of the hands/ no hands condition on children's peak puzzle score in the post-help session, controlling for children's peak score in the pre-help session.

As seen in Models B and C of Table 8, there were no effects of the Orienting or Abstracting strategies on children's puzzle performance, neither main effects nor interactions with child age. Though we include only the final models with the strategy-by-age interactions, we did test simpler main-effects models and found that the addition of these strategies did not significantly improve the model fits. However, as seen in Model D, there was a statistically significant interaction between parents’ use of the Embodying strategy and child age, such that for the oldest children, parents’ use of the embodying strategy had a strong positive effect on their puzzle performance following parents’ help. Figure 5 depicts the average effects of parents’ use of the Embodying strategy on children's post-help puzzle performance for children in each age group; as seen there, the Embodying strategy had no effect on the middle age group, and appeared to diminish the performance of children under three years. Children over 4.5 years whose parents used few embodied gestures performed similarly to those in the younger groups, but those whose parents used more embodied gestures far outperformed both their age-mates whose parents used few embodied gestures and younger children whose parents had used this strategy more frequently. Further, when the Embodying strategy variable and its interactions with child age are in the model, the effect of parents’ total gesture frequency on the puzzle performance of the older group is diminished and no longer statistically significant. Interestingly, these were the children whose parents used this strategy the least (see Figure 4). This indicates that these early school-aged children are able to use the information parents provide via the Embodying strategy, and that only a few such gestures are sufficient for them to learn from their parents’ actions.

Figure 5.


Effect of parents’ use of Embodying strategy gestures on the puzzle success of children in three age groups. This figure illustrates the fitted results of Model D in Table 8. It displays the effect of 1 SD of parents’ embodied gestures on children's peak puzzle score in the post-help session, controlling for child age and pre-help score, and parents’ total frequency of gestures during the help session.

To further examine parents’ use of the embodying strategy, and children's responses to these types of gestures, we re-watched a set of videos containing these gestures. Figure 6 provides illustrations of parents’ use of the three types of gestures that made up the embodying strategy: indexes touching their referents, demonstrations which provide representational information while touching the object, and embodied gestures in which parents physically guide the child's actions by moving either the child's hands or an object the child is also touching. We also briefly describe each child's main approach and dominant actions for solving the puzzle before and after the parent's help. As seen in the illustrations in Figure 6, children appeared to internalize the strategies their parents provided when parents used gestures that physically touched the objects to which they referred, mimicking and demonstrating the actual action the child was to perform, and sometimes even moving the child's hands directly. We address this possibility further in the discussion section.

Figure 6.


Parents’ use of Embodying strategy gestures, each of which touch objects or their children's hands, to provide children with problem-solving strategies.

Discussion

We used a puzzle, designed for children around age four, to examine patterns in parents’ use of gestures that may indicate gesturing strategies in a teaching and learning context, and the effects of parents’ gestural help on children's learning. Overall, we found that child age is a cue for parents to change the frequency of their gestures, and specifically the proximity of index gestures to their referents: parents gestured more frequently with younger children and used gestures that were more concrete and proximal to their referents. Though parents’ freedom to use their hands to support their child's learning resulted in greater learning for children under 4.5 years, it resulted in lower scores for children between 4.5 and 6 years old. Yet parents’ use of more frequent gestures, and specifically more concrete gestures – touching referent objects and actually embodying the child's hands – resulted in greater learning for children in the oldest group. Thus, this study identified patterns in parents’ gesturing, child age effects on parents’ patterns of gesture use, and age-specific effects of parents’ gesture use on children's learning in this task.

We found patterns in the features of parents’ gestures which appear to indicate different strategies for supporting children's learning. The three strategies were: (a) orienting the child to the task at hand (pointing to the picture of the puzzle on the box), (b) providing specific but abstract information about the task (proximal pointing to the components, providing strategies via empty-handed iconic gestures), and (c) providing specific, concrete information (touching components of the task to indicate them, demonstrating in a hands-on way, and embodying the child's hands). Parents could be high or low on each of these orthogonal strategies, meaning that parents are not simply an “orienting teacher” or an “embodying teacher,” nor do parents use just “hands-on” versus just “hands-off” gestures. Instead, parents’ strategies vary based on children's ages and, to some extent, on their abilities. Parents of younger children used more tap and show indexes. Unlike distal indexes, which require an onlooker to follow the trajectory of the point and shift his or her gaze and attention from the gesturer's hand to the referent, tap or show indexes require the onlooker to gaze at only a single location. One interpretation of this pattern is that child age cues parents to shift the proximity of the gestures they use to indicate objects. Previous studies have documented characteristics of mothers’ child-directed gestures, including using more pointing and fewer abstract gestures with younger children (Bekken, 1989), using gestures specifically to emphasize or disambiguate speech with younger children (Iverson et al., 1999; O'Neill et al., 2005), and using more frequent and more complex gestures as infants’ object knowledge increases (Dimitrova & Moro, 2013).

We also found evidence that the proximity of parents’ indexes was influenced by children's ability on the puzzle task; after accounting for child age, parents used more distal indexes when children did poorly on the puzzle on their own. This appears to be part of the orienting strategy parents used with children who did not seem to understand the nature of the task: pointing out the picture on the box to show the child, very generally, what he or she was supposed to do. Upon further review of the videos, this orientation looked somewhat different for younger and older children. For the youngest children, parents appeared to use the picture to indicate that the child should focus on getting pieces on the board. For older children, parents seemed to use this same gesture to challenge their children to complete the puzzle in a particular way, e.g., putting the complicated pieces in first, or placing a particular piece in the middle of the board and working around it. In these cases, parents were making the task harder for their children, working at the edge of the child's zone of proximal development. This may be why children in the oldest two groups fared worse when their parents used the distal index more frequently. Children appear to have attempted to implement this change in the task, or the strategies their parents suggested, when working alone in the third puzzle trial; many, however, were not yet able to do so successfully, and thus their scores were lower. Older children may have been attempting to incorporate more complex strategies that they were not yet equipped to use on their own.

Although parents used more concrete, embodying gestures with children in the youngest group, these young children did not do any better in the post-help phase when their parents had used this strategy more frequently. One interpretation is that children in the youngest age group were unable to sustain the benefit of their parents’ concrete help in solving the puzzle. This may be because they did not attend to their parents’ gestures during the task, because they did not incorporate the gestures into their own puzzle-solving repertoire, or because they could not immediately integrate the information provided by parents’ gestures into their own puzzle-solving strategies. Another interpretation is that during the help phase, parents were responding to children's puzzle-solving abilities in a dynamic, moment-by-moment way; thus, among the youngest age group, greater use of embodying gestures may have been associated with the child's greater need for simplification of the task.

On the other hand, while it was the oldest children (> 4.5 years) who experienced the fewest embodying gestures from their parents, it was also this group who benefited most from parents’ use of this gestural strategy. Why might these gestures improve children's puzzle-solving? In a discussion of assisted imitation, Zukow-Goldring (2012) describes a category of caregivers’ gestures referred to as embodying: “When caregivers assist infants to become culturally embodied, they act in tandem with the infant by putting them through the motions of some activity in which the bodily ability (effectivity) and the affordance are new or relatively new.” (p. 573). To illustrate this concept, Zukow-Goldring details longitudinal observations of a Latino, Spanish-speaking parent in the US instructing her young child (6 – 39 months) in the culturally significant skill of kicking a soccer ball. The interactions involve the parent directing the child's attention to what the ball can do (its affordances), and supporting the child's weight while physically moving the child's legs through a kicking motion. The author argues that eventually “infants come to perceive, act, and know the organization and structure of everyday life” (p. 580), in part, through such simultaneously social and perceptual interactions. It would be worthwhile to determine – through careful experimentation – whether experiencing more embodying gestures would indeed effectively support learning among young children more generally, including preschool and early school-aged children.

An application of these ideas to educational settings and everyday learning opportunities involves the use of manipulatives to teach mathematics concepts in a concrete way (e.g., using a number line model, composing geometrical shapes, such as building a square from two triangle-shaped objects). Manipulatives provide opportunities for children to gain hands-on practice with underlying mathematical concepts. Adult-child embodied interaction with such manipulatives may further provide a means for educators (whether parents or teachers) to physically guide children to a deeper understanding of these concepts.

Embodying may be an underestimated strategy for early school-aged children, whom parents and teachers may consider to be more autonomous in their problem-solving efforts. In other words, parents may use fewer hands-on gestures with early school-aged children in an attempt to support their children's autonomy in solving the puzzle. Their use of fewer gestures may also reflect the fact that, for these children, a little goes a long way: though the older children in the current study experienced far fewer embodying gestures than their younger peers, these gestures had a far greater impact on their independent problem-solving. Further study could illuminate how the frequency with which parents and teachers use these gestures relates to their sensitivity to children's learning needs, to children's autonomy in problem-solving, and ultimately to children's learning.

Limitations and Future Directions

We expect that other gestural strategies, beyond the three identified here, may be found in other types of tasks. It would therefore be useful to replicate the current findings – specifically the existence of gestural strategies – in other visual-spatial teaching-and-learning tasks, and across families in different cultures. The sample for the current study was primarily white, non-Hispanic, and from a Western culture, a population which uses a more distal style of parenting and incorporates less touching between parents and children than other cultures (Keller et al., 2009), and which may thus exhibit fewer embodied gestures than would be seen in other cultures (Zukow-Goldring, 2012).

We hypothesized positive effects of the hands condition and age differences in the effects of parents’ use of gestures, but we did not anticipate that parental help might hinder the post-help performance of the older children. Based on review of the videos from the older children, it appears that parents used the help phase to make the task more challenging for these children, sometimes suggesting goals or strategies that their children were not yet ready to use competently on their own. Yet many children in this older group appeared to attempt their parents’ goals or strategies, reducing their peak scores in the post-help phase. These advanced suggestions may be useful for longer-term gains in children's problem-solving but, in the short term, produce confusion as children attempt to incorporate them into their independent problem-solving performance. It is also possible that if we had given children more post-help time (phase three was limited to 60 seconds), they could have successfully employed their parents’ strategies; that is, perhaps the parental strategies slowed children down or took more time to implement, rather than actually reducing their ability to solve the puzzle. Future studies may shed light on whether this pattern – a parental “challenge” followed by a temporary slowing of the child's progress – characterizes parent-child interaction when a child already has some level of task mastery. Indeed, such interactions may help to facilitate children's entry into higher zones of proximal development.

It may be that children's own use of gestures – not measured in the current study – mediated the relationship between parents’ gestures and children's success in this task. Children whose parents gesture more frequently also use more gestures themselves (e.g., Rowe & Goldin-Meadow, 2009), and O'Neill and Miller (2013) found that children's own use of gestures supported their performance on a series of executive function tasks. In fact, they found that children who gestured more overall did better even on tasks during which they did not gesture. Further research should therefore examine the degree to which variation in the frequency and types of parents’ gestures is matched by variation in the same features of children's gestures, and whether it is children's own gestures that support success on a problem-solving task.

Another limitation of this study is that we only accounted for non-verbal helping strategies. Although we found that the experimental condition did not change the basic frequency or richness of parents’ language (types, tokens), it is possible that parents who could not use their hands in the current study used language differently, for different communicative purposes. Though it was beyond the scope of the current study, examining the pragmatic features of parents’ speech in the two experimental groups may help to elucidate the complementary roles of language and gesture in supporting children's learning.

Implications and Conclusion

This study provides evidence that parents use particular gestural strategies in the context of a teaching-and-learning task with their young children, and that variation in their use of gestures reflects children's learning needs. The proximity of parents’ index gestures to their referents may be a feature of child-directed gesturing that has not been identified in previous studies. Further, young school-aged children between 4.5 and 6 years old may benefit more from parents’ and teachers’ use of concrete, hands-on gestures than from more abstract gestures. However, parents tended to use fewer such concrete gestures with children of this age than with younger children. Thus, parents’ attempts to support their children's autonomy in problem-solving may be preventing them from using a gestural strategy which could support their children's internalization and independent use of problem-solving strategies.

Acknowledgements

This research was conducted through Living Laboratory® in the Discovery Center at the Museum of Science, Boston. We would like to thank Paul Harris for strategic advice on study design, the staff of the Museum of Science, Boston for their generosity in facilitating our collection of these data, the children and parents who volunteered their time for our study, and the research assistants who helped to collect and code these data.

Footnotes

1. This review is limited to studies of English-speaking participants. Discussion of language and cultural effects on caregiver gesture use is beyond the scope of the current study (but see Goldin-Meadow & Saltzman, 2000; So & Lim, 2012).

2. In accordance with the requirements of the data collection site, no information was collected on the race/ethnicity, parent education, or family income of the participants.

3. We also tested the effects of child age as a continuous variable, testing first linear then quadratic specifications of child age (age*age), but found the best-fitting models were consistently those that treated child age as three categorical groups.
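The model comparison described in this footnote can be sketched in code. The following is a minimal, self-contained illustration – not the authors’ data or analysis script – of how one would compare a linear-age model against a three-group categorical-age model using AIC on simulated scores; the age bands, group means, and noise level are hypothetical.

```python
import math
import random

# Hypothetical sketch of the footnote's model comparison: does treating child
# age as three categorical groups fit better than treating it as continuous?
# AIC = n * ln(RSS / n) + 2k (constant terms dropped; lower is better).

random.seed(1)

# Simulate scores whose age effect is step-like (group means), not linear.
# Age bands and means are illustrative assumptions, not the study's values.
group_means = {0: 2.0, 1: 5.0, 2: 5.5}   # toddler, preschool, early school age
ages, groups, scores = [], [], []
for g, (lo, hi) in enumerate([(1.5, 3.0), (3.0, 4.5), (4.5, 6.0)]):
    for _ in range(40):
        a = random.uniform(lo, hi)
        ages.append(a)
        groups.append(g)
        scores.append(group_means[g] + random.gauss(0, 1.0))

n = len(scores)

def aic(rss, k):
    """AIC from residual sum of squares and parameter count k."""
    return n * math.log(rss / n) + 2 * k

# Linear model: score = b0 + b1 * age, fit by closed-form OLS.
mean_a, mean_s = sum(ages) / n, sum(scores) / n
b1 = (sum((a - mean_a) * (s - mean_s) for a, s in zip(ages, scores))
      / sum((a - mean_a) ** 2 for a in ages))
b0 = mean_s - b1 * mean_a
rss_lin = sum((s - (b0 + b1 * a)) ** 2 for a, s in zip(ages, scores))

# Categorical model: one fitted mean per age group.
fitted = {g: sum(s for gg, s in zip(groups, scores) if gg == g) / groups.count(g)
          for g in set(groups)}
rss_cat = sum((s - fitted[g]) ** 2 for g, s in zip(groups, scores))

print(f"AIC linear:      {aic(rss_lin, 3):.1f}")   # k = intercept, slope, sigma
print(f"AIC categorical: {aic(rss_cat, 4):.1f}")   # k = 3 group means, sigma
```

With a step-like age effect, the categorical model yields the lower AIC, mirroring the footnote's conclusion; a quadratic specification would be fit and compared in the same way with one additional parameter.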

4. We explored whether the variety of parents’ gestures – the number of different types of gestures used – would contribute to children's puzzle scores, controlling for gesture frequency. While there was a trend toward a significant interaction between gesture variety and child age (p < .10), this effect disappeared when parents’ gesture frequency was included in the model.

References

  1. Alibali MW, Nathan MJ. Teachers’ gestures as a means of scaffolding students’ understanding: Evidence from an early algebra lesson. In: Goldman R, Pea R, Barron BJ, Derry S, editors. Video research in the learning sciences. Erlbaum; Mahwah, NJ: 2007. pp. 349–365.
  2. Bakeman R, Gottman JM. Observing interaction: An introduction to sequential analysis. 2nd ed. Cambridge University Press; Cambridge, UK: 1997.
  3. Bangerter A. Using pointing and describing to achieve joint focus of attention in dialogue. Psychological Science. 2004;15:415–419. doi:10.1111/j.0956-7976.2004.00694.x.
  4. Bekken K. Is there motherese in gesture? Unpublished doctoral dissertation. The University of Chicago; Chicago, IL: 1989.
  5. Church B, Ayman-Nolley S, Mahootian S. The role of gesture in bilingual education: Does gesture enhance learning? International Journal of Bilingual Education and Bilingualism. 2004;7:303–319.
  6. Cook S, Goldin-Meadow S. The role of gesture in learning: Do children use their hands to change their minds? Journal of Cognition and Development. 2006;7:211–232. doi:10.1207/s15327647jcd0702_4.
  7. Crais E, Douglas DD, Campbell CC. The intersection of the development of gestures and intentionality. Journal of Speech, Language, and Hearing Research. 2004;47:678–698. doi:10.1044/1092-4388(2004/052).
  8. Dimitrova N, Moro C. Common ground on object use associates with caregivers’ gesturese. Infant Behavior & Development. 2013;36:618–626. doi:10.1016/j.infbeh.2013.06.006.
  9. Fleiss JL. Statistical methods for rates and proportions. Wiley; New York, NY: 1981.
  10. Flevares LM, Perry M. How many do you see? The use of nonspoken representations in first-grade mathematics lessons. Journal of Educational Psychology. 2001;93(2):330–345. doi:10.1037/0022-0663.93.2.330.
  11. Goldin-Meadow S. Gesture promotes learning throughout childhood. Child Development Perspectives. 2009;3:106–111. doi:10.1111/j.1750-8606.2009.00088.x.
  12. Goldin-Meadow S. Nonverbal communication: The hand's role in talking and thinking. In: Kuhn D, Siegler RS, Damon W, Lerner RM, editors. Handbook of child psychology: Vol. 2. Cognition, perception, and language. 6th ed. John Wiley & Sons; Hoboken, NJ: 2006. pp. 336–369.
  13. Goodrich W, Hudson Kam CL. Co-speech gesture as input in verb learning. Developmental Science. 2009;12:81–87. doi:10.1111/j.1467-7687.2008.00735.x.
  14. Gutmann AJ, Turnure JE. Mothers’ production of hand gestures while communicating with their preschool children under various task conditions. Developmental Psychology. 1979;15(2):197–203. doi:10.1037/0012-1649.15.2.197.
  15. Iverson JM, Capirci O, Longobardi E, Caselli MC. Gesturing in mother-child interactions. Cognitive Development. 1999;14(1):57–75. doi:10.1016/S0885-2014(99)80018-5.
  16. Keller H, Borke J, Staufenbiel T, Yovsi RD, Abels M, Su Y. Distal and proximal parenting as alternative parenting strategies during infants’ early months of life: A cross-cultural study. International Journal of Behavioral Development. 2009;23(5):412–420. doi:10.1177/0165025409338441.
  17. Kelly SD, Manning SM, Rodak S. Gesture gives a hand to language and learning: Perspectives from cognitive neuroscience, developmental psychology and education. Language and Linguistics Compass. 2008;2(4):569–588. doi:10.1111/j.1749-818X.2008.00067.x.
  18. MacWhinney B. The CHILDES project: Tools for analyzing talk. Lawrence Erlbaum Associates; Mahwah, NJ: 2000.
  19. Namy LL. Recognition of iconicity doesn't come for free. Developmental Science. 2008;11(6):841–846. doi:10.1111/j.1467-7687.2008.00732.x.
  20. Namy LL, Campbell AL, Tomasello M. The changing role of iconicity in non-verbal symbol learning: A U-shaped trajectory in the acquisition of arbitrary gestures. Journal of Cognition and Development. 2004;5(1):37–57.
  21. Nicoladis E, Pika S, Marentette P. Are number gestures easier than number words for preschoolers? Cognitive Development. 2010;25:247–261. doi:10.1016/j.cogdev.2010.04.001.
  22. O'Neill G, Miller PH. A show of hands: Relations between young children's gesturing and executive function. Developmental Psychology. 2013;49:1517–1528. doi:10.1037/a0030241.
  23. O'Neill M, Bard KA, Linnell M, Fluck M. Maternal gestures with 20-month-old infants in two contexts. Developmental Science. 2005;8(4):352–359. doi:10.1111/j.1467-7687.2005.00423.x.
  24. Ping RM, Goldin-Meadow S. Hands in the air: Using ungrounded iconic gestures to teach children. Developmental Psychology. 2008;44(5):1277–1287. doi:10.1037/0012-1649.44.5.1277.
  25. Pozzer-Ardenghi L, Roth WM. On performing concepts during science lectures. Science Education. 2007;91(1):96–114. doi:10.1002/sce.20172.
  26. Pratt MW, Kerig P, Cowan PA, Cowan CP. Mothers and fathers teaching 3-year-olds: Authoritative parenting and adult scaffolding of young children's learning. Developmental Psychology. 1988;24:832–839.
  27. Rauscher FH, Krauss RM, Chen Y. Gesture, speech and lexical access. Psychological Science. 1996;7(4):226–230.
  28. Rowe M, Goldin-Meadow S. Differences in early gesture explain SES disparities in child vocabulary size at school entry. Science. 2009;323:951–953. doi:10.1126/science.1167025.
  29. Singer MA, Goldin-Meadow S. Children learn when their teacher's gestures and speech differ. Psychological Science. 2005;16(2):85–89. doi:10.1111/j.0956-7976.2005.00786.x.
  30. Snow CE. The development of conversation between mothers and babies. In: Franklin MB, Barten SB, editors. Child language: A reader. Oxford University Press; New York, NY: 1988. pp. 20–35.
  31. Tomasello M. Origins of human communication. The MIT Press; Cambridge, MA: 2008.
  32. Valenzeno L, Alibali MW, Klatzky R. Teachers’ gestures facilitate students’ learning: A lesson in symmetry. Contemporary Educational Psychology. 2003;28:187–204. doi:10.1016/S0361-476X(02)00007-3.
  33. Vygotsky LS. Interaction between learning and development. In: Lopez-Morillas M, Cole M, John-Steiner V, Scribner S, Souberman E, editors. Mind in society: The development of higher psychological processes. Harvard University Press; Cambridge, MA: 1978. pp. 79–91.
  34. Vygotsky LS. Thinking and speech. In: Minick N, Rieber RW, Carton AS, editors. The collected works of L. S. Vygotsky: Vol. 1. Problems of general psychology. Plenum Press; New York: 1987. pp. 39–285. (Original work published 1934)
  35. Wang X, Bernas R, Eberhard P. Engaging ADHD students in tasks with hand gestures: A pedagogical possibility for teachers. Educational Studies. 2004;30(3):217–229. doi:10.1080/0305569042000224189.
  36. Wertsch JV. From social interaction to higher psychological processes: A clarification and application of Vygotsky's theory. Human Development. 1979;22:1–22.
  37. Wertsch JV, McNamee GD, McLane JB, Budwig NA. The adult-child dyad as a problem-solving system. Child Development. 1980;51:1215–1221.
  38. Zukow-Goldring P. Assisted imitation: Affordances, effectiveness, and the mirror system in early language development. In: Arbib MA, editor. Action to language via the mirror neuron system. Cambridge University Press; Cambridge, England: 2006. pp. 469–500.
  39. Zukow-Goldring P. Assisted imitation: First steps in the seed model of language development. Language Sciences. 2012;34(5):569–582. doi:10.1016/j.langsci.2012.03.012.
