Abstract
Compared with other species, human learning is distinguished by the range and complexity of the skills that can be learned and the degree of abstraction that can be achieved. Humans are also the only species that has developed formal ways to enhance learning: teachers, schools, and curricula. Human infants have an intense interest in people and their behavior, and possess powerful implicit learning mechanisms that are affected by social interaction. Neuroscientists are beginning to understand the brain mechanisms underlying learning and how shared brain systems for perception and action support social learning. Machine learning algorithms are being developed that allow robots and computers to learn autonomously. New insights from many different fields are converging to create a new science of learning that may transform educational practices.
Introduction
Cultural evolution, which is rare among species and reaches a pinnacle in Homo sapiens, became possible when new forms of learning evolved under selective pressure in our ancestors. Culture underpins achievements in language, arts, and science that are unprecedented in nature. The origin of human intelligence is still a deep mystery. However, the study of child development, the plasticity of the human brain, and computational approaches to learning are laying the foundation for a new science of learning that provides insights into the origins of human intelligence.
Human learning and cultural evolution are supported by a paradoxical biological adaptation: We are born immature. Young infants cannot speak, walk, use tools, or take the perspective of others. Immaturity comes at a tremendous cost, both to the newborn, whose brain consumes 60% of its entire energy budget (1), and to the parents. During the first year of life, the brain of an infant is teeming with structural activity as neurons grow in size and complexity and trillions of new connections are formed between them. The brain continues to grow during childhood and reaches adult size around puberty. The development of the cerebral cortex has “sensitive periods” during which connections between neurons are more plastic and susceptible to environmental influence: The sensitive periods for sensory processing areas occur early in development, higher cortical areas mature later, and the prefrontal cortex continues to develop into early adulthood (2).
Yet immaturity has value. Delaying the maturation and growth of brain circuits allows initial learning to influence the developing neural architecture in ways that support later, more complex learning. In computer simulations, starting the learning process with a low-resolution sensory system allows more efficient learning than starting with a fully developed sensory system (3).
What characterizes the exuberant learning that occurs during childhood? Three principles are emerging from cross-disciplinary work in psychology, neuroscience, machine learning, and education, contributing to a new science of learning (Fig. 1). These principles support learning across a range of areas and ages and are particularly useful in explaining children’s rapid learning in two unique domains of human intelligence: language and social understanding.
Learning is Computational
Discoveries in developmental psychology and in machine learning are converging on new computational accounts of learning. Recent findings show that infants and young children possess powerful computational skills that allow them to infer structured models of their environment, automatically, from the statistical patterns they experience. Infants use statistical patterns gleaned from experience to learn about both language and causation. Before they are 3 years old, children use frequency distributions to learn which phonetic units distinguish words in their native language (4, 5), use the transitional probabilities between syllables to segment words (6), and use covariation to infer cause-effect relationships in the physical world (7).
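To make the second of these computations concrete, here is a minimal sketch of word segmentation from transitional probabilities in the spirit of (6). The syllable stream, the three-word lexicon, and the boundary threshold are illustrative assumptions rather than materials from the cited studies: within-word transitions are perfectly predictable, across-word transitions are not, and word boundaries are posited where predictability dips.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Estimate P(next | current) from bigram counts."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

def segment(syllables, threshold=0.8):
    """Posit a word boundary wherever P(next | current) dips below threshold."""
    tps = transitional_probabilities(syllables)
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Three hypothetical two-syllable "words" presented in a varied order:
# transitions inside a word always occur (probability 1.0), whereas
# transitions across word boundaries are unpredictable (probability < 0.8).
order = "ABCBACABCACB" * 5
lexicon = {"A": ["pre", "tty"], "B": ["ba", "by"], "C": ["gol", "den"]}
stream = [syl for w in order for syl in lexicon[w]]
print(segment(stream)[:6])
# -> ['pretty', 'baby', 'golden', 'baby', 'pretty', 'golden']
```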
Machine learning has the goal of developing computer algorithms and robots that improve automatically from experience (8). For example, BabyBot, a baby doll instrumented with a video camera, a microphone, and a loudspeaker (9), learned to detect human faces using the temporal contingency between BabyBot's programmed vocalizations and the humans who tended to respond to these baby-like vocalizations. After 6 minutes of learning, BabyBot detected novel faces and generalized to the schematic faces used in studies of infant face recognition.
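The contingency computation at the heart of this approach can be sketched in a few lines. This is not the algorithm of (9), only a toy illustration under stated assumptions: a fictitious robot babbles at random moments, and the sensor region whose activity is elevated just after its own vocalizations, relative to baseline, is flagged as socially responsive.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_regions = 2000, 8           # time steps and candidate sensor regions (assumed)
vocalize = rng.random(T) < 0.1   # the robot babbles at random moments

# Baseline: every region flickers with low probability, like visual noise.
activity = rng.random((T, n_regions)) < 0.05
# Region 3 acts like a responsive person: it often "answers" one time step
# after a vocalization (the 0.8 response rate is an illustrative assumption).
activity[:, 3] |= np.roll(vocalize, 1) & (rng.random(T) < 0.8)

after = np.roll(vocalize, 1)     # time steps immediately following a babble
contingency = activity[after].mean(axis=0) - activity[~after].mean(axis=0)
print(contingency.round(2))      # region 3 stands out from the noise
print(int(np.argmax(contingency)))  # -> 3: the contingent, "human" region
```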
Statistical regularities and covariations in the world thus provide a richer source of information than previously thought. Infants’ pickup of this information is implicit; it occurs without parental training, and begins before infants can manipulate the physical world or speak their first words. New machine learning programs also succeed without direct reinforcement or supervision. Learning from probabilistic input provides an alternative to Skinnerian reinforcement learning and Chomskian nativist accounts (10, 11).
Learning is Social
Children do not compute statistics indiscriminately. Social cues highlight what and when to learn. Even young infants are predisposed to attend to people, and are motivated to copy the actions they see others do (12). They more readily learn and re-enact an event when it is produced by a person than by an inanimate device (13, 14).
Machine learning studies show that systematically increasing a robot’s social-like behaviors and contingent responsivity elevates young children’s willingness to connect with and learn from it (15). Animal models may help explain how social interaction affects learning: In birds, neurosteroids that affect learning modulate brain activity during social interaction (16). Social interaction can extend the sensitive period for learning in birds (17). Social factors also play a role in life-long learning—new social technologies (text messaging, Facebook, Twitter) tap humans’ drive for social communication. Educational technology is increasingly embodying the principles of social interaction in intelligent tutoring systems to enhance student learning (18).
Learning is Supported by Brain Circuits Linking Perception and Action
Human social and language learning are supported by neural-cognitive systems that link the actions of self and other. Moreover, the brain machinery needed to perceive the world and move our bodies in response to the movements of people and objects is complex, requiring continuous adaptation and plasticity. Consider what is necessary to explain human imitative learning. Newborns as young as 42 minutes old match gestures shown to them, including tongue protrusion and mouth opening (19). This is remarkable because infants cannot see their own faces, and newborns have never seen their reflection in a mirror. Yet newborns can map from observed behavior to their own matching acts, suggesting shared representations for the acts of self and others (14, 19). Neuroscientists have discovered a striking overlap in the brain systems recruited for both the perception and the production of actions (20, 21). For example, in human adults, observing articulatory movements activates the cortical areas responsible for producing those articulations (22). Social learning, imitation, and sensorimotor experience may initially generate, as well as modify and refine, shared neural circuitry for perception and action. The emerging field of social neuroscience aims to discover the brain mechanisms supporting the close coupling and attunement between self and other that is the hallmark of seamless social communication and interaction.
Social Learning and Understanding
Human children readily learn through social interactions with other people. Three social skills are foundational to human development and rare in other animals: Imitation, shared attention, and empathic understanding.
Imitation
Learning by observing and imitating experts in the culture is a powerful social learning mechanism. Children imitate a diverse range of acts, including parental mannerisms, speech patterns, and the use of instruments to get things done. For example, a toddler may see her father using a telephone or computer keyboard and crawl up on the chair and babble into the receiver or poke the keys. Such behavior is not explicitly trained (it may be discouraged by the parent) and there is no inborn tendency to treat plastic boxes in this way—the child learns by watching others and imitating.
Imitation accelerates learning and multiplies learning opportunities. It is faster than individual discovery and safer than trial-and-error learning. Children can use third-person information (observation of others) to create first-person knowledge: Instead of having to work out causal relations themselves, children can learn from watching experts. Imitative learning is valuable because the behavioral actions of others “like me” serve as a proxy for one's own (14).
Children do not slavishly duplicate what they see but re-enact a person's goals and intentions. For example, suppose an adult tries to pull apart an object but his hand slips off the ends. Even at 18 months of age, infants can use the pattern of unsuccessful attempts to infer the unseen goal of another. They produce the goal that the adult was striving to achieve, not the unsuccessful attempts (13). Children choose whom, when, and what to imitate and seamlessly mix imitation and self-discovery to solve novel problems (23, 24).
Imitation is a challenging computational problem that is being intensively studied in the robotics and machine learning communities (25, 26). It requires algorithms capable of inferring complex sensorimotor mappings that go beyond the repetition of observed movements. The match must be achieved despite the fact that the teacher may differ from the observer in several ways (e.g., size, spatial orientation, morphology, dexterity). The ultimate aim is to build robots that can learn like infants, through observation and imitation (27). Current computational approaches to imitation can be divided into direct and goal-based approaches: Direct approaches learn input-action mappings that reproduce the observed behaviors (25); goal-based approaches, which are more recent and less explored, infer the goal of the observed behaviors and then produce motor plans that achieve those goals (28, 29).
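A deliberately minimal sketch of this contrast, under illustrative assumptions (a one-dimensional world in which an expert walks from position 0 to a goal; none of this comes from the cited systems): the direct learner memorizes the expert's state-to-action mapping and fails off the demonstrated path, whereas the goal-based learner infers the endpoint and can plan toward it from states the expert never visited.

```python
demo_states = [0, 1, 2, 3, 4]      # observed expert trajectory (hypothetical)

# Direct approach: learn a state -> action mapping from the demonstration.
direct_policy = {s: s_next - s for s, s_next in zip(demo_states, demo_states[1:])}

# Goal-based approach: infer the goal (here, simply the trajectory's
# endpoint) and plan toward it from any start state.
goal = demo_states[-1]
def goal_based_policy(state):
    if state == goal:
        return 0                   # goal reached: stop
    return 1 if state < goal else -1

print(direct_policy.get(7))        # None: no action was learned for state 7
print(goal_based_policy(7))        # -1: still moves toward the inferred goal
```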
Shared Attention
Social learning is facilitated when people share attention. Shared attention to the same object or event provides a common ground for communication and teaching. An early component of shared attention is gaze following (Fig. 2). Infants in the first half-year of life look more often in the direction of an adult's head turn when peripheral targets are in the visual field (30). By 9 months of age, infants interacting with a responsive robot follow its head movements, and the timing and contingencies of the interaction, not just the robot's visual appearance, appear to be key (31). It is unclear, however, whether young infants are trying to look at what another is seeing or are simply tracking head movements. By 12 months, sensitivity to the direction and state of the eyes exists, not just sensitivity to the direction of head turning: If a person with eyes open turns to one of two equidistant objects, 12-month-old infants look at that particular target, but not if the person makes the same head movement with eyes closed (32).
However, a blindfold covering the person's eyes causes 12-month-olds to make the mistake of following the head movements: They understand that eye closure blocks the other person's view, but not that a blindfold does. Self-experience corrects this error. In a training study, 1-year-olds were given experience with a blindfold so they understood that it made it impossible to see. When the adult subsequently wore the blindfold, infants who had received self-experience with it treated the adult as if she could not see (33), whereas control infants did not. Infants project their own experience onto other people. The ability to interpret the behavior and experience of others by using oneself as a model is a highly effective learning strategy that may be unique to humans and impaired in children with autism (34, 35). It would be useful if this strategy could be exploited in machine learning, and preliminary progress is being made (36).
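As a toy rendering of this self-as-model strategy (purely illustrative; not the method of (33) or (36)): an agent records which conditions blocked its own vision and projects that record onto another agent, defaulting to "can see" for conditions it has never experienced, which reproduces the 1-year-olds' blindfold error before training.

```python
# Conditions the agent has experienced first-hand, and whether it could
# see the target under each one (hypothetical values).
self_experience = {"eyes_open": True, "eyes_closed": False}

def predict_other_sees(condition):
    """Project self-experience onto another agent. Unfamiliar conditions
    default to True, mirroring infants' error before blindfold training."""
    return self_experience.get(condition, True)

print(predict_other_sees("blindfold"))   # True: the pre-training error
self_experience["blindfold"] = False     # self-experience with the blindfold
print(predict_other_sees("blindfold"))   # False: the error is corrected
```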
Empathy and Social Emotions
The capacity to feel and regulate emotions is critical to understanding human intelligence and has become an active area of research in human-computer interaction (37). In humans many affective processes are uniquely social. Controlled experiments lead to the conclusion that prelinguistic toddlers engage in altruistic, instrumental helping (38). Children also show primitive forms of empathy. When an adult appears to hurt a finger and cry in pain, children under 3 years of age comfort the adult, sometimes offering a band-aid or teddy bear (39). Related behavior has been observed with children helping and comforting a social robot that was “crying” (15, 40).
Brain imaging studies in adults show an overlap in the neural systems activated when people receive a painful stimulus themselves and when they perceive another person in pain (41, 42). These neural reactions are modulated by cultural experience, training, and perceived similarity between self and other (42, 43). Atypical neural patterns have been documented in antisocial adolescents (44). Discovering the origins of individual differences in empathy and compassion is a grand challenge for developmental social-cognitive neuroscience.
Language Learning
Language acquisition poses a major challenge for theories of learning. The world’s most powerful computers have been unable to crack the speech code—no computer has achieved fluent speech understanding across talkers, which children master by 3 years of age (10).
Human language acquisition sheds light on the interaction among computational learning, social facilitation of learning, and shared neural circuitry for perception and production.
Behavioral Development
Early in development, infants can distinguish all the sounds used across the world's languages, a capacity shared by non-human primates (45). However, these universal capacities narrow with development: By one year of age, infants' ability to perceive sound distinctions used in foreign languages but not in their native language is weakened. Infants' universal capacities become language-specific between 9 and 12 months of age. American and Japanese infants, who at 7 months of age discriminated /ra/ from /la/ equally well, both change by 11 months: American infants improve significantly, whereas Japanese infants' skills show a sharp decline (46).
This transition in infant perception is strongly influenced by the distributional frequency of sounds in the ambient language (4, 5). Infants' computational skills are sufficiently robust that laboratory exposure to artificial syllables, in which the distributional frequencies are experimentally manipulated, changes infants' ability to discriminate the sounds, as the sketch below illustrates.
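The following sketch captures distributional learning in the spirit of (5), with illustrative parameters throughout (the continuum values, sample sizes, and the choice of a two-component Gaussian mixture are assumptions, not the cited methods): the same learner exposed to bimodal versus unimodal frequency distributions over one acoustic continuum arrives at very different category structures.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# One acoustic continuum (e.g., a voicing cue on an arbitrary 0-9 scale).
bimodal = np.r_[rng.normal(2.0, 0.5, 400), rng.normal(7.0, 0.5, 400)]
unimodal = rng.normal(4.5, 1.0, 800)

for name, exposure in [("bimodal", bimodal), ("unimodal", unimodal)]:
    gmm = GaussianMixture(n_components=2, random_state=0).fit(exposure[:, None])
    means = np.sort(gmm.means_.ravel())
    print(f"{name:8s} category centers: {means.round(1)}")
# Bimodal exposure recovers two well-separated categories (near 2 and 7);
# unimodal exposure yields two heavily overlapping ones near the single mode.
```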
However, experiments also show that the computations involved in language learning are “gated” by social processes (47). In foreign-language learning experiments, social interaction strongly influences infants' statistical learning. Infants exposed to a foreign language at 9 months learn rapidly, but only when experiencing the new language during social interchanges with other humans: American infants exposed in the laboratory to Mandarin Chinese rapidly learned phonemes and words from the foreign language, but only if exposed to the new language by a live human being during naturalistic play. Infants exposed to the same auditory input at the same age and for the same duration via television or audiotape showed no learning (48, Fig. 3). Why infants learn better from people, and what components of social interactivity support language learning, are currently being investigated. Determining the key stimulus and interactive features will be important for theory. Temporal contingencies may be critical (49).
Other evidence that social input advances language learning comes from studies showing that infants vocally imitate adult vowel sounds by 5 months, but not acoustically matched non-speech sounds that are not perceived as human speech (50, 51). By 10 months, even before speaking words, the imitation of social models results in a change in the types of vocalizations children produce. Children raised in Beijing hearing Mandarin babble using the tone-like pitches characteristic of Mandarin, which makes them sound distinctly Chinese. Children raised in Seattle hearing English do not babble using such tones and sound distinctly American.
Children react to a social audience by increasing the complexity of their vocal output. When mothers' responses to their infants' vocalizations are controlled experimentally, a mother's immediate social feedback results in both greater numbers of vocalizations and more mature, adult-like vocalizations from infants (52). Sensory impairments also affect infant vocalizations: Children with hearing impairments produce a greater preponderance of sounds (such as ‘ba’) whose articulation they can see on the lips of the talker, whereas infants who are blind babble using a greater proportion of sounds that do not rely on visible articulations (‘ga’) (53).
Birdsong provides a neurobiological model of vocal learning that integrates self-generated sensorimotor experience and social input. Passerine birds learn conspecific song by listening to and imitating adult birds. Like humans, young birds listen to adult conspecific birds sing during a sensitive period in development and then practice that repertoire during a “sub-song” period (akin to babbling) until it is crystallized (54). Neural models of birdsong learning can account for this gradual process of successive refinement (55). In birds, as in humans, a social context enhances vocal learning (56).
Neural Plasticity
In humans, a sensitive period exists between birth and 7 years of age when language is learned effortlessly; after puberty, new language learning is more difficult and native-language levels are rarely achieved (57, 58). In birds, the duration of the sensitive period is extended in richer social environments (17, 59). Human learning beyond the sensitive period may also benefit from social interaction. Adult foreign-language learning improves under more social learning conditions (60).
A candidate mechanism governing the sensitive period for language in humans is “neural commitment” (10). Neural commitment is the formation of neural architecture and circuitry dedicated to the detection of phonetic and prosodic characteristics of the particular ambient language(s) to which the infant is exposed. The neural circuitry maximizes detection of a particular language, and when fully developed, interferes with the acquisition of a new language.
Neural signatures of children's early language learning can be documented using event-related potentials (ERPs): Phonetic learning can be documented at 11 months of age, responses to known words at 14 months, and semantic and syntactic learning at 2.5 years (61). Early mastery of the sound patterns of one's native language provides a foundation for later language learning: Children who show enhanced ERP responses to phonemes at 7.5 months advance faster in language acquisition between 14 and 30 months of age (62).
Children become both native-language listeners and speakers, and brain systems that link perception and action may help children achieve parity between the two. In adults, fMRI studies show that watching lip movements appropriate for speech activates the speech motor areas of the brain (63). The early formation of linked perception-production brain systems for speech has been investigated using magnetoencephalography (MEG), which reveals nascent neural links between speech perception and production. At 6 months of age, listening to speech activates higher auditory brain areas (superior temporal), as expected, but also simultaneously activates Broca's area, which controls speech production; listening to non-speech sounds does not (64, see also 65). MEG will allow linguists to explore how social interaction and sensorimotor experience affect the cortical processing of language in children, and why young children can learn foreign-language material from a human tutor but not from a television.
New interactive robots are being designed to teach language to children in a social-like manner. Engineers created a social robot that autonomously interacts with toddlers, recognizing their moods and activities (15) (Fig. 4). Interaction with the social robot over a 10-day period produced a 10-percentage-point increase in vocabulary in 18- to 24-month-old children compared with an age-matched control group (40). This robotic technology is now being used to test whether children might learn foreign-language words through social games with the robot.
Education
During their long period of immaturity, human brains are sculpted by implicit social and statistical learning. Children progress from relatively helpless, observant newborns to walking, talking, empathetic people, who perform everyday experiments on cause and effect. Educators are turning to psychology, neuroscience, and machine learning to ask: Can the principles supporting early exuberant and effortless learning be applied to improve education?
Progress is being made in three areas: Early intervention programs, learning outside of school, and formal education.
Children are born learning, and how much they learn depends on environmental input, both social and linguistic. Many children entering kindergarten in the U.S. are not ready for school (66), and children who start behind in school-entry academic skills tend to stay behind (67). Neuroscience work suggests that differences in learning opportunities before first grade are correlated with neural differences that may impact school learning (68, 69).
The recognition that the right input at the right time has cascading effects led to early interventions for children at risk for poor academic outcomes. Programs enhancing early social interactions and contingencies produce significant long-term improvements in academic achievement, social adjustment, and economic success, and are highly cost-effective (70–72).
The science of learning has also shaped the design of interventions for children with disabilities. Speech perception requires the ability to perceive changes in the speech signal on the time scale of milliseconds, and neural mechanisms for plasticity in the developing brain are tuned to these signals. Behavioral and brain imaging experiments suggest that children with dyslexia have difficulty processing rapid auditory signals; computer programs that train the neural systems responsible for such processing are helping children with dyslexia improve language and literacy (73). The temporal “stretching” of acoustic distinctions that these programs use is reminiscent of the infant-directed speech (“motherese”) spoken to infants in natural interaction (74). Children with autism spectrum disorders (ASD) have deficits in imitative learning and gaze following (75–77). This cuts them off from the rich socially mediated learning opportunities available to typically developing children, with cascading developmental effects. Young children with ASD prefer an acoustically matched non-speech signal over motherese, and the degree of preference predicts the severity of their clinical autistic symptoms (78). Children with ASD are attracted to humanoid robots with predictable interactivity, which are beginning to be used in diagnosis and intervention (79).
K-12 educators are attempting to harness the intellectual curiosity and avid learning that occur during natural social interaction. The emerging field of informal learning (80) is based on the idea that informal settings are venues for a significant amount of childhood learning: Children spend nearly 80% of their waking hours outside of school, learning at home, in community centers and clubs, on the Internet, and at museums, zoos, and aquariums, as well as through digital media and gaming. Informal learning venues are often highly social and offer a form of mentoring, apprenticeship, and participation that maximizes motivation and engages the learner's sense of identity: Learners come to think of themselves as good at technology or as budding scientists, and such self-concepts influence children's interests, goals, and future choices (81, 82). A recent National Research Council study on science education (80) catalogued factors that enliven learning in informal venues, with the long-term goal of using them to enhance learning in school.
In formal school settings, research shows that individual face-to-face tutoring is the most effective form of instruction: Students taught one-on-one by professional tutors show achievement levels two standard deviations higher than students in conventional instruction (83). New learning technologies are being developed that embody key elements of individual human tutoring while avoiding its extraordinary financial cost. For example, learning researchers have developed intelligent tutoring systems, based on cognitive psychology, that provide an interactive environment with step-by-step feedback, feedforward instructional hints to the user, and dynamic problem selection (18). These automatic tutors have been shown to approximate the benefits of human tutoring by adapting to the needs of individual students, as good teachers do. Classrooms are becoming living laboratories as researchers and educators use technology to track and collect data from individual children and use this information to test theories and design curricula.
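To make “dynamic problem selection” concrete, here is a deliberately simplified sketch. Real cognitive tutors (18) maintain far richer student models; the mastery estimates, update rule, and skill names below are illustrative assumptions: the tutor tracks an estimated mastery per skill and always poses a problem exercising the weakest one.

```python
def update(mastery, skill, correct, rate=0.2):
    """Nudge the mastery estimate toward 1 on success, toward 0 on error."""
    target = 1.0 if correct else 0.0
    mastery[skill] += rate * (target - mastery[skill])

def next_problem(mastery):
    """Pose a problem exercising the skill with the lowest estimated mastery."""
    return min(mastery, key=mastery.get)

mastery = {"fractions": 0.5, "decimals": 0.5, "percents": 0.5}  # hypothetical skills
for correct in [True, False, True, True]:       # simulated student answers
    skill = next_problem(mastery)
    update(mastery, skill, correct)
    print(skill, round(mastery[skill], 2))
# The tutor cycles problems toward whichever skill currently lags behind.
```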
Conclusions
A convergence of discoveries in psychology, neuroscience, and machine learning has resulted in principles of human learning that are leading to changes in educational theory and the design of learning environments. Reciprocally, educational practice is leading to the design of new experimental work. A key component is the role of “the social” in learning. What makes social interaction such a powerful catalyst for learning, and can key elements be embodied in technology to improve learning? How can we capitalize on social factors to better teach children and foster their natural curiosity about people and things? These are deep questions at the leading edge of the new science of learning.
References and Notes
- 1. Allman JM. Evolving brains. New York: W. H. Freeman; 1999.
- 2. Quartz SR, Sejnowski TJ. Behav. Brain Sci. 1997;20:537. doi: 10.1017/s0140525x97001581.
- 3. Jacobs RA, Dominguez M. Neural Comput. 2003;15:761–781. doi: 10.1162/08997660360581895.
- 4. Kuhl PK, Williams KA, Lacerda F, Stevens KN, Lindblom B. Science. 1992;255:606. doi: 10.1126/science.1736364.
- 5. Maye J, Werker JF, Gerken L. Cognition. 2002;82:B101. doi: 10.1016/s0010-0277(01)00157-3.
- 6. Saffran JR, Aslin RN, Newport EL. Science. 1996;274:1926. doi: 10.1126/science.274.5294.1926.
- 7. Gopnik A, et al. Psychol. Rev. 2004;111:3. doi: 10.1037/0033-295X.111.1.3.
- 8. Mitchell TM. Machine learning. New York: McGraw-Hill; 1997.
- 9. Butko NJ, Fasel IR, Movellan JR. Proc. 5th IEEE Int. Conf. Dev. and Learning; 2006.
- 10. Kuhl PK. Nature Rev. Neurosci. 2004;5:831. doi: 10.1038/nrn1533.
- 11. Gopnik A. Dev. Sci. 2007;10:281. doi: 10.1111/j.1467-7687.2007.00584.x.
- 12. Meltzoff AN, Moore MK. Science. 1977;198:75. doi: 10.1126/science.198.4312.75.
- 13. Meltzoff AN. Dev. Psychol. 1995;31:838. doi: 10.1037/0012-1649.31.5.838.
- 14. Meltzoff AN. Acta Psychol. 2007;124:26. doi: 10.1016/j.actpsy.2006.09.005.
- 15. Tanaka F, Cicourel A, Movellan JR. Proc. Natl. Acad. Sci. U.S.A. 2007;104:17954. doi: 10.1073/pnas.0707769104.
- 16. Remage-Healey L, Maidment NT, Schlinger BA. Nature Neurosci. 2008;11:1327. doi: 10.1038/nn.2200.
- 17. Brainard MS, Knudsen EI. J. Neurosci. 1998;18:3929. doi: 10.1523/JNEUROSCI.18-10-03929.1998.
- 18. Koedinger KR, Aleven V. Educ. Psychol. Rev. 2007;19:239.
- 19. Meltzoff AN, Moore MK. Early Dev. Parenting. 1997;6:179. doi: 10.1002/(SICI)1099-0917(199709/12)6:3/4<179::AID-EDP157>3.0.CO;2-R.
- 20. Hari R, Kujala M. Physiol. Rev. 2009;89:453. doi: 10.1152/physrev.00041.2007.
- 21. Rizzolatti G, Fogassi L, Gallese V. Nature Rev. Neurosci. 2001;2:661. doi: 10.1038/35090060.
- 22. Möttönen R, Järveläinen J, Sams M, Hari R. NeuroImage. 2004;24:731. doi: 10.1016/j.neuroimage.2004.10.011.
- 23. Williamson RA, Meltzoff AN, Markman EM. Dev. Psychol. 2008;44:275. doi: 10.1037/0012-1649.44.1.275.
- 24. Repacholi BM, Meltzoff AN, Olsen B. Dev. Psychol. 2008;44:561. doi: 10.1037/0012-1649.44.2.561.
- 25. Schaal S. Trends Cognit. Sci. 1999;3:233. doi: 10.1016/s1364-6613(99)01327-3.
- 26. Shon AP, Storz JJ, Meltzoff AN, Rao RPN. Int. J. Humanoid Robotics. 2007;4:387.
- 27. Demiris Y, Meltzoff AN. Infant and Child Dev. 2008;17:43. doi: 10.1002/icd.543.
- 28. Ng AY, Russell S. Proc. 17th Int. Conf. on Machine Learning; San Francisco, CA: Morgan Kaufmann; 2000. pp. 663–670.
- 29. Verma D, Rao RPN. Advances in Neural Information Processing Systems. Cambridge, MA: MIT Press; 2006. pp. 1393–1400.
- 30. Flom R, Lee K, Muir D, editors. Gaze-following. Mahwah, NJ: Erlbaum; 2007.
- 31. Movellan JR, Watson JS. Proc. 2nd IEEE Int. Conf. Dev. and Learning; 2002.
- 32. Brooks R, Meltzoff AN. Dev. Psychol. 2002;38:958. doi: 10.1037//0012-1649.38.6.958.
- 33. Meltzoff AN, Brooks R. Dev. Psychol. 2008;44:1257. doi: 10.1037/a0012888.
- 34. Meltzoff AN. Dev. Sci. 2007;10:126. doi: 10.1111/j.1467-7687.2007.00574.x.
- 35. Tomasello M, Carpenter M, Call J, Behne T, Moll H. Behav. Brain Sci. 2005;28:675. doi: 10.1017/S0140525X05000129.
- 36. Bongard J, Zykov V, Lipson H. Science. 2006;314:1118. doi: 10.1126/science.1133687.
- 37. Picard RW. Affective computing. Cambridge, MA: MIT Press; 1997.
- 38. Warneken F, Tomasello M. Science. 2006;311:1301. doi: 10.1126/science.1121448.
- 39. Zahn-Waxler C, Radke-Yarrow M, King RA. Child Dev. 1979;50:319. doi: 10.1111/j.1467-8624.1979.tb04112.x.
- 40. Movellan JR, Eckhart M, Virnes M, Rodriguez A. Proc. Int. Conf. Human Robot Interaction; 2009.
- 41. Singer T, et al. Science. 2004;303:1157. doi: 10.1126/science.1093535.
- 42. Hein G, Singer T. Curr. Opin. Neurobiol. 2008;18:153. doi: 10.1016/j.conb.2008.07.012.
- 43. Lamm C, Meltzoff AN, Decety J. J. Cogn. Neurosci. (in press). doi: 10.1162/jocn.2009.21186.
- 44. Decety J, Michalska KJ, Akitsuki Y, Lahey B. Biol. Psychol. 2009;80:203. doi: 10.1016/j.biopsycho.2008.09.004.
- 45. Kuhl PK, Miller JD. Science. 1975;190:69. doi: 10.1126/science.1166301.
- 46. Kuhl PK, et al. Dev. Sci. 2006;9:F13. doi: 10.1111/j.1467-7687.2006.00468.x.
- 47. Kuhl PK. Dev. Sci. 2007;10:110. doi: 10.1111/j.1467-7687.2007.00572.x.
- 48. Kuhl PK, Tsao F-M, Liu H-M. Proc. Natl. Acad. Sci. U.S.A. 2003;100:9096. doi: 10.1073/pnas.1532872100.
- 49. Montague PR, Sejnowski TJ. Learning & Memory. 1994;1:1.
- 50. Kuhl PK, Meltzoff AN. Science. 1982;218:1138. doi: 10.1126/science.7146899.
- 51. Kuhl PK, Meltzoff AN. J. Acoust. Soc. Am. 1996;100:2425. doi: 10.1121/1.417951.
- 52. Goldstein M, King A, West M. Proc. Natl. Acad. Sci. U.S.A. 2003;100:8030. doi: 10.1073/pnas.1332441100.
- 53. Stoel-Gammon C. In: Phonological development. Ferguson CA, Menn L, Stoel-Gammon C, editors. Timonium, MD: York Press; 1992. pp. 273–282.
- 54. Brainard MS, Doupe AJ. Nature. 2002;417:351. doi: 10.1038/417351a.
- 55. Doya K, Sejnowski TJ. In: Advances in Neural Information Processing Systems. Tesauro G, Touretzky DS, Leen T, editors. Cambridge, MA: MIT Press; 1995. pp. 101–108.
- 56. Doupe AJ, Kuhl PK. In: Neuroscience of Birdsong. Zeigler HP, Marler P, editors. Cambridge, England: Cambridge Univ. Press; 2008.
- 57. Johnson JS, Newport EL. Cognit. Psychol. 1989;21:60. doi: 10.1016/0010-0285(89)90003-0.
- 58. Mayberry RI, Lock E. Brain Lang. 2003;87:369. doi: 10.1016/s0093-934x(03)00137-8.
- 59. Baptista LF, Petrinovich L. Anim. Behav. 1986;34:1359.
- 60. Zhang Y, et al. NeuroImage. 2009;46:226. doi: 10.1016/j.neuroimage.2009.01.028.
- 61. Kuhl PK, Rivera-Gaxiola M. Annu. Rev. Neurosci. 2008;31:511. doi: 10.1146/annurev.neuro.30.051606.094321.
- 62. Kuhl PK, Conboy BT, Coffey-Corina S, Padden D, Rivera-Gaxiola M, Nelson T. Philos. Trans. R. Soc. London Ser. B. 2008;363:979. doi: 10.1098/rstb.2007.2154.
- 63. Nishitani N, Hari R. Neuron. 2002;36:1211. doi: 10.1016/s0896-6273(02)01089-9.
- 64. Imada T, et al. Neuroreport. 2006;17:957. doi: 10.1097/01.wnr.0000223387.51704.89.
- 65. Dehaene-Lambertz G, et al. Proc. Natl. Acad. Sci. U.S.A. 2006;103:14240. doi: 10.1073/pnas.0606302103.
- 66. Shonkoff JP, Phillips DA, editors. From neurons to neighborhoods. Washington, DC: Natl. Acad. Press; 2000.
- 67. Duncan GJ, et al. Dev. Psychol. 2007;43:1428. doi: 10.1037/0012-1649.43.6.1428.
- 68. Raizada RD, Richards TL, Meltzoff AN, Kuhl PK. NeuroImage. 2008;40:1392. doi: 10.1016/j.neuroimage.2008.01.021.
- 69. Hackman DA, Farah MJ. Trends Cognit. Sci. 2008;13:65. doi: 10.1016/j.tics.2008.11.003.
- 70. Ramey CT, Ramey SL. Merrill-Palmer Q. 2004;50:471.
- 71. Heckman JJ. Science. 2006;312:1900. doi: 10.1126/science.1128898.
- 72. Knudsen EI, Heckman JJ, Cameron JL, Shonkoff JP. Proc. Natl. Acad. Sci. U.S.A. 2006;103:10155. doi: 10.1073/pnas.0600888103.
- 73. Tallal P. Nature Rev. Neurosci. 2004;5:721. doi: 10.1038/nrn1499.
- 74. Kuhl PK, et al. Science. 1997;277:684. doi: 10.1126/science.277.5326.684.
- 75. Rogers SJ, Williams JHG, editors. Imitation and the social mind: Autism and typical development. New York: Guilford; 2006.
- 76. Mundy P, Newell L. Curr. Dir. Psychol. Sci. 2007;16:269. doi: 10.1111/j.1467-8721.2007.00518.x.
- 77. Toth K, Munson J, Meltzoff AN, Dawson G. J. Autism and Dev. Disorders. 2006;36:993. doi: 10.1007/s10803-006-0137-7.
- 78. Kuhl PK, Coffey-Corina S, Padden D, Dawson G. Dev. Sci. 2005;8:F1. doi: 10.1111/j.1467-7687.2004.00384.x.
- 79. Scassellati B. Proc. IEEE Int. Workshop on Robots and Human Interactive Comm.; 2005.
- 80. Bell P, Lewenstein B, Shouse AW, Feder MA, editors. Learning science in informal environments. Washington, DC: Natl. Acad. Press; 2009.
- 81. Lee CD. Educ. Researcher. 2008;37:267.
- 82. Bruner J. The culture of education. Cambridge, MA: Harvard Univ. Press; 1996.
- 83. Bloom BS. Educ. Researcher. 1984;13:4.
- 84. Supported by NSF Science of Learning Center grants SBE-0354453 (A.M., P.K.) and SBE-0542013 (J.M., T.S.), and NIH grants HD-22514 (A.M.) and HD-37954 (P.K.). The views in this article do not necessarily reflect those of NSF or NIH. We thank A. Gopnik, J. Bongard, P. Marshall, S. Cheryan, P. Tallal, and J. Watson for valuable comments, and the members of the NSF Science of Learning Centers for helpful discussions.