Author manuscript; available in PMC: 2013 Feb 1. Published in final edited form as: Trends Cogn Sci. 2012 Jan 3;16(2):114–121. doi: 10.1016/j.tics.2011.12.007

Brain-to-brain coupling: a mechanism for creating and sharing a social world

Uri Hasson 1,2, Asif A Ghazanfar 1,2, Bruno Galantucci 3,4, Simon Garrod 5, Christian Keysers 6,7
PMCID: PMC3269540  NIHMSID: NIHMS348090  PMID: 22221820

Abstract

Cognition materializes in an interpersonal space. The emergence of complex behaviors requires the coordination of actions among individuals according to a shared set of rules. Despite the central role of other individuals in shaping our minds, most cognitive studies focus on processes that occur within a single individual. We call for a shift from a single-brain to a multi-brain frame of reference. We argue that in many cases the neural processes in one brain are coupled to the neural processes in another brain via the transmission of a signal through the environment. Brain-to-brain coupling constrains and simplifies the actions of each individual in a social network, leading to complex joint behaviors that could not have emerged in isolation.

Keywords: action-perception, mother-infant, coupled oscillations, social neuroscience, joint action, speaker-listener neural coupling

Why two (or more) brains are better than one

Although the scope of cognitive neuroscience research is vast and rich, the experimental paradigms used are primarily concerned with studying the neural mechanisms of one individual’s behavioral processes. Typical experiments isolate humans or animals from their natural environments by placing them in a sealed room where interactions occur solely with a computerized program. This egocentric framework is reminiscent of the Ptolemaic geocentric frame of reference for the solar system. For most of recorded history, celestial bodies were not believed to have any influence on geophysical processes on earth. The present understanding of gravity, orbits and the tides emerged only after the Copernican revolution, which brought the realization that the earth is just another element in a complex, interacting system of planets. Along the same lines, we argue here that the dominant focus on single individuals in cognitive neuroscience paradigms obscures the forces that operate between brains to shape behavior.

Verbal communication is an excellent example to illustrate the role that other individuals play in one’s cognitive processes. As Wittgenstein argued, the meaning of a word is defined by its use [1]. The word’s correct use, however, can vary across eras, cultures and contexts [1]. Thus, the appropriate use of a word is grounded in a set of inter-related norms shared by a community of speakers. To master a language, one has to learn the correct uses of words by interacting with other members of the community. Such interactions fundamentally shape the way individuals think and act in the world [2, 3]. This is by no means limited to language. Several other non-verbal social and cognitive skills, such as courting, dancing, or tool manipulation, require the collaboration of multiple agents that coordinate their behavior according to a shared set of rules and customs. With so many cognitive faculties emerging from interpersonal space, a complete understanding of the cognitive processes within a single individual’s brain cannot be achieved without examining and understanding the interactions among individuals [4]. In this article, we call for a shift from a single-brain to a multi-brain frame of reference.

Brain-to-brain coupling

The premise of brain-to-brain coupling is that the perceptual system of one brain can be coupled to the motor system of another. This binding mechanism is similar to action-perception coupling within a single brain. Action-perception (or stimulus-to-brain) coupling relies on the ability of brains to actively interact with the physical world (Figure 1A). Different objects in the environment emit different forms of energy (mechanical, chemical, electromagnetic), and sensory receptors convert this energy into electrical impulses that the brain can use to infer information about the state of the world and generate appropriate behaviors. Furthermore, organisms are not passive receivers of sensory input but rather actively move their sensory receptor surfaces (hands, eyes, tongues, etc.) to sample information from the environment [5, 6]. Thus, stimulus-to-brain coupling is fundamental to our ability to retrieve information about the world to guide actions.

Figure 1. Two types of coupling.

A) Stimulus-to-brain coupling. B) Brain-to-brain coupling.

Brain-to-brain coupling also relies on stimulus-to-brain coupling as a vehicle for conveying information. However, in brain-to-brain coupling, the signal is generated by another brain and body that resemble our own, rather than by inanimate objects in the physical environment (Figure 1B). Brain-to-brain coupling is analogous to a wireless communication system in which two brains are coupled via the transmission of a physical signal (light, sound, pressure or chemical compound) through the shared physical environment.

The coordination of behavior between the sender and receiver enables specific mechanisms for brain-to-brain coupling that are unavailable during interactions with the inanimate world. For example, seeing or hearing the actions, sensations or emotions of an agent triggers cortical representations in the perceiver (so-called vicarious activations [7, 8]). If the agent has a similar brain and body, vicarious activations in the perceiver will approximate those of the agent, and the neural responses will become coupled [7]. If, however, the agent has a brain and body that are fundamentally different from those of the perceiver, the vicarious activation pattern will differ substantially from that in the agent and the brain responses will not be coupled. Vicarious activation, of course, is only one particular mechanism by which the neural responses can be coupled across two brains. In other cases, the neural responses in the receiver can be coupled to the neural responses in the sender in a lawful, but more complex, manner [9].

Acquiring communication

The emergence of any communication system, as Wittgenstein suggested [1], requires a shared understanding of the signal’s meaning (i.e., uses) within a particular context among a community of users. Such common ground is established through learning, which often takes place in the form of early interactions between a tutor’s brain and a learner’s brain. This hypothesis is supported by developmental evidence.

Many joint behaviors, such as mating, group cohesion, and predator avoidance, depend on the accurate production and perception of social signals. As a result, the development of these behaviors is strongly influenced by interactions with other group members. Developmental processes must ultimately result in the coupling of one individual’s sensory systems with the signals produced by another individual’s motor system. How might this coupling occur? We illustrate how a multi-brain frame of reference might provide an answer to this question using studies of birds and human infants.

Early studies of song learning in captive birds explicitly excluded social factors, in line with a single-brain frame of reference. Young birds were exposed only to taped recordings of adult songs. This practice enabled great experimental control and reflected the assumption that song learning was based on an imprinting mechanism. However, it obscured the fact that songbirds learn much more effectively in social contexts, to the extent that one species of bird can learn the song of another species provided that the tutor is a live bird rather than a tape-recording [10]. Perhaps the best evidence that social interactions mediate vocal learning in birds comes from studies of cowbirds [11]. Male cowbirds learn to sing a potent (attractive) song by watching the reactions of females [12]. When female cowbirds (who do not sing) hear an attractive song or song element, they produce a small wing movement. This visual signal reinforces the males’ behavior so that they are more likely to repeat the song elements that elicited the female wing movement. This ultimately leads males to sing a more advanced song that will successfully attract many females. Females, in turn, learn their preferences for certain male songs by watching and hearing the responses of other females in the group [13, 14]. Thus, both song production and song preferences emerge through social interactions.

In human infant communication, it is typically assumed that social interactions play a role in children’s language learning only after children learn how to produce words. It turns out, however, that the social environment also influences infants’ earliest pre-linguistic vocalizations. The babbling of seven- to twelve-month-old infants exhibits a pitch, rhythm, and even a syllable structure similar to those of the ambient language [15]. This acoustic shift in babbling towards the ambient language occurs through interactions with caregivers [15]. Consistent caregiver responses to babbling can reinforce certain acoustic structures, allowing infants to learn from the consequences of their vocalizations. This type of learning has two reciprocal requirements: 1) adult caregivers must be sensitive to the acoustic characteristics of babbling and respond to them, and 2) infants must perceive the reactions of caregivers and adjust their vocalizations accordingly. Indeed, caregivers respond readily to babbling during the first year of life, often by mimicking the infants’ vocalizations and establishing turn-taking during face-to-face interactions. Furthermore, caregivers are more likely to respond to vocalizations that contain more speech-like elements, such as consonant-vowel structured syllables [16]. Recent studies of contingent versus non-contingent social reinforcement show that infants care about these reactions, demonstrating robust effects of caregiver responses on infant vocal production [17, 18].

Perhaps the most compelling evidence that communication emerges through interaction comes from adults trying to communicate with wholly original signals [19]. In a set of experiments examining the emergence of novel communication systems, pairs of human participants were invited to play cooperative games through interconnected computers. The games required players to communicate, but players could not see, hear, or touch each other. The only means of contact were devices that allowed for the exchange of visual signals but could not be used to exchange standard graphic forms (such as letters or numbers). Thus, in order to communicate, players had to craft a novel visual communication system [20]. These experiments showed that, when there was direct interaction between two (or more) players, symbols (i.e., signs that bear a conventionalized relation to their referents) quickly emerged [19, 21–23]. However, these hallmarks of human communication did not emerge when the signs were developed by isolated individuals who played with an offline partner [22]. Moreover, the abstraction of the signs increased as they were transmitted through generations of players [23]. Finally, bystanders who watched the novel communication system develop, but were not actively involved in its creation, were not as effective at using the system [22, 24]. This provides clear evidence for the crucial role of behavioral interaction in the generation of a novel communication system.

Together, these data from songbirds and human adults and infants show that the development of communication is fundamentally embedded in social interactions across individual brains.

Speech emerges through coupled oscillations

In the typical human scenario, much of communication is mediated by speech. How are speech signals transmitted and received when two adult individuals communicate? Notably, the transmission of information between two individuals resembles re-afferent transmission of information between two areas within a single brain. Although we typically assume that signaling between parts of the brain requires anatomical connections, neural states can also be influenced by physical signals that were generated by another part of the brain and transmitted through the environment. For example, a speaker delivering a monologue, without any awareness of doing so, adjusts the production of her speech patterns by monitoring how they sound in a given context. The motor areas of the brain generate a program of speech, and the body produces an acoustic signal that mixes in the air with ambient noise. This sound travels back to the speaker’s ears and auditory system, and thus acts as a feedback signal directing the motor system to adjust vocal output if necessary. This is why we (and other primates) reflexively raise our voices in noisy environments. In this scenario, communication between the motor and the auditory parts of the brain is coordinated through the vocal signal, with the air as a conduit. This idea naturally extends to both sides of a conversation—the speaker and the listener.
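
This auditory feedback loop can be captured in a few lines. The sketch below is a toy simulation of our own (all gains and decibel values are illustrative assumptions, not taken from the article): the motor system sets a vocal intensity, the auditory system monitors the resulting signal-to-noise ratio in the shared air, and the vocal command is corrected until the speaker can hear herself over the noise.

    def vocal_feedback_loop(noise_db, target_snr_db=10.0, gain=0.5, steps=20):
        """Toy auditory-motor loop: adjust vocal intensity (in dB) until the
        speaker's own voice exceeds the ambient noise by a target margin."""
        vocal_db = 60.0                      # comfortable speaking level (assumed)
        for _ in range(steps):
            heard_snr = vocal_db - noise_db  # what travels back to the speaker's ears
            error = target_snr_db - heard_snr
            vocal_db += gain * error         # motor correction driven by auditory feedback
        return vocal_db

    print(vocal_feedback_loop(noise_db=55.0))  # quiet room: settles near 65 dB
    print(vocal_feedback_loop(noise_db=75.0))  # noisy room: settles near 85 dB, the "raised voice"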

During speech communication, two brains are coupled through an oscillatory signal. Across all languages and contexts, the speech signal has its own amplitude modulation (i.e., it goes up and down in intensity), with a rhythm in the range of 3–8 Hz [25–27]. This rhythm roughly matches the timescale of the speaker’s syllable production (3 to 8 syllables per second). The brain, in particular the neocortex, also produces stereotypical rhythms or oscillations [28]. Recent theories of speech perception point out that the amplitude modulations in speech closely match the structure of the 3–8 Hz theta oscillation [25, 29, 30] (Figure 2A). This suggests that the speech signal could be coupled to [30] and/or resonate (amplify) [25] with ongoing oscillations in the auditory regions of a listener’s brain [31, 32] (Figure 2B). Resonance, for example, could increase the signal-to-noise ratio of neural signals and thus help to improve hearing. In addition, disrupting the speech signal by making it faster than 8 Hz reduces the intelligibility of speech [33–35] and decreases the entrainment of the auditory cortex [36, 37].
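
The 3–8 Hz amplitude modulation is easy to verify on a recording. The sketch below is a minimal analysis of our own, assuming a hypothetical mono file speech.wav; the processing choices are illustrative and not taken from [25–27]. It extracts the amplitude envelope with a Hilbert transform, downsamples it to roughly 100 Hz, and reports how much of the envelope's modulation energy falls in the theta band, which for natural speech is typically where the spectrum peaks.

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import hilbert

    rate, audio = wavfile.read("speech.wav")     # hypothetical mono recording
    audio = audio.astype(float)

    # Amplitude envelope, then block-average down to ~100 Hz.
    envelope = np.abs(hilbert(audio))
    hop = rate // 100
    env_100hz = envelope[: len(envelope) // hop * hop].reshape(-1, hop).mean(axis=1)

    # Spectrum of the envelope: natural speech is dominated by 3-8 Hz modulations.
    spectrum = np.abs(np.fft.rfft(env_100hz - env_100hz.mean()))
    freqs = np.fft.rfftfreq(len(env_100hz), d=0.01)
    theta = (freqs >= 3) & (freqs <= 8)
    print("fraction of modulation energy in 3-8 Hz:", spectrum[theta].sum() / spectrum[1:].sum())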

Figure 2.

The 3–8 Hz rhythm of speech couples with ongoing auditory cortical oscillations in a similar frequency band. A) Even in silence, there are ongoing auditory cortical oscillations in the receiver’s brain. B) The signal-to-noise ratio of this cortical oscillation increases when it is coupled to the auditory-only speech of the signaler. C) Finally, because in humans the mouth moves at the same rhythmic frequency as the speech envelope, audiovisual speech can further enhance the signal-to-noise ratio of the cortical oscillation.

The coupling hypothesis also extends to the visual modality [25] (Figure 2C). Human conversations, which typically occur face-to-face, are perceived through both visual and auditory channels. It is well established that watching a speaker’s face improves speech intelligibility [38]. Indeed, in noisy situations such as a cocktail party, watching a speaker’s face is the equivalent of turning up the volume by 15 decibels. This amplification of speech by vision is due, at least in part, to the fact that mouth movements during speech are tightly coupled with the speech signal’s amplitude modulations [25]. Thus, the speaker’s mouth oscillations at a rate of 3–8 Hz are captured by the listener’s visual system, and amplify (via numerous multisensory neural pathways [39]) the auditory signals in the listener’s brain [25, 30, 40].
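
One way to quantify this audiovisual coupling is the spectral coherence between a mouth-aperture time course (e.g., tracked from video) and the acoustic envelope. The sketch below is our own illustration using synthetic stand-in signals driven by a shared 5 Hz rhythm, since no real tracking data accompany this article; with real recordings the same coherence call would be applied to the measured time courses.

    import numpy as np
    from scipy.signal import coherence

    fs = 100.0                                   # both time courses sampled at 100 Hz (assumed)
    t = np.arange(0, 30, 1 / fs)
    rng = np.random.default_rng(0)

    shared_rhythm = np.sin(2 * np.pi * 5 * t)    # a syllable-rate rhythm driving both signals
    mouth_aperture = shared_rhythm + 0.5 * rng.standard_normal(t.size)
    speech_envelope = 0.8 * shared_rhythm + 0.5 * rng.standard_normal(t.size)

    f, cxy = coherence(mouth_aperture, speech_envelope, fs=fs, nperseg=256)
    band = (f >= 3) & (f <= 8)
    print("mean mouth-voice coherence in the 3-8 Hz band:", cxy[band].mean())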

In the scenario described above, the speaker is like a radio station broadcasting an AM speech signal via the mouth, and this signal and background noises are picked up by the antenna-like ears of the listener. The listener’s brain is pre-tuned, so to speak, to the specific AM frequency of the speech signal and thus amplifies the speech signal alone through resonance. The structure of the speech signal is modulated at a 3–8 Hz frequency that resonates (or samples, or entrains) with ongoing oscillations in the auditory regions of the listener’s brain that are also 3–8 Hz in frequency. To further amplify this vocal signal in noisy environments, the brain takes advantage of the coupling between visible mouth movements and the oscillatory speech sound waves. Thus, the mouth motion helps divide up or ‘chunk’ the speech into syllables so that the listener’s brain can efficiently extract meaningful information from the signal. According to the brain-to-brain coupling framework, vocal communication emerges through the interactions or entrainment between the brains and bodies of signaler and receiver.

Coordinated, hierarchical alignment during speech

Once brains are coupled to each other via speech signals, information can be shared and exchanged more efficiently. Human communication protocols can be divided into two types: monologues, in which only one speaker sends information and listeners receive it, and dialogues, in which interlocutors have to interweave their activities with precise timing. Garrod and Pickering argue that communication protocols in general, and dialogues in particular, are made easy by the largely unconscious process of interactive alignment [41, 42]. In this scheme, two interlocutors simultaneously align their representations at different linguistic levels and do so by imitating each other’s choices of speech sounds [43], grammatical forms [44], and words and meanings [45]. For example, if Peter says to Mary, with reference to their child, ‘I handed John his lunch box today,’ Mary is more likely to respond with ‘And I handed him his coat’ than with ‘And I gave him his coat,’ even though the two alternative responses have equivalent meaning.

Interactive alignment occurs for two related reasons. First, within an individual’s brain, production (speaking) and comprehension (listening) are coupled to each other and rely on shared linguistic representations [46]. Second, when a speaker produces a particular representation, it primes (activates) both the comprehension and the corresponding production of the same representation in the listener, making the listener more likely to use it in his own speech [42]. Crucially, interactive alignment occurs at all linguistic levels, from the phonological and syntactic up to the semantic and the contextual. Additionally, alignment at one linguistic level leads to greater alignment at other levels [44, 47]. For example, “low-level” alignment of words or grammatical forms can lead to alignment at the critical level of the situation model (i.e., the level at which the speaker and the listener understand that they are referring to the same state of affairs [48]). This interactive alignment, as we argue below, is achieved by coupling the speaker’s brain responses during speech production with the listener’s brain responses during speech comprehension.

Coupling of two brains via verbal communication

Such interactive alignment during natural communication is reflected in direct coupling between the speaker’s and the listener’s brain responses. Using functional MRI, Stephens et al. recently recorded the brain activity of a speaker telling an unrehearsed real-life story [49]. Next, they measured the brain activity of subjects listening to the recorded audio of the spoken story, thereby capturing the time-locked neural dynamics from both sides of the communication. Finally, they asked the listeners to complete a detailed questionnaire that assessed their level of comprehension.

Using a dynamic model of neural coupling based on inter-subject correlation analysis (Figure 3A), the authors found that, during successful communication, the speaker’s and listener’s brains exhibited joint, temporally coupled response patterns. This brain-to-brain coupling substantially diminished in the absence of communication, for instance when the speaker told the story in a language the listener did not know. On average, the listener’s brain responses mirrored the speaker’s brain responses with some temporal delay (Figure 3B, blue). The delays matched the flow of information across interlocutors, implying a causal relationship by which the speaker’s production-based processes induce and shape the neural responses in the listener’s brain. However, the analysis also identified a subset of brain regions in which the responses in the listener’s brain actually preceded the responses in the speaker’s brain (Figure 3B, red). These anticipatory responses suggest that listeners actively predict the speaker’s upcoming utterances. Such predictions may compensate for problems with noisy or ambiguous input [41]. Indeed, the more extensive the coupling between a speaker’s brain responses and a listener’s anticipatory brain responses, the better the comprehension (Figure 3C).
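
The core of such a lag analysis can be written as a time-shifted correlation between a speaker time course and a listener time course. The sketch below is a simplified stand-in, not the spatially resolved general linear model actually used in [49]: the lag at which the correlation peaks indicates whether a region in the listener follows the speaker (positive lag) or anticipates the speaker (negative lag).

    import numpy as np

    def lagged_coupling(speaker, listener, max_lag=6):
        """Correlation between speaker and listener time courses at shifts of
        -max_lag..+max_lag samples (e.g., TRs). Positive lag: listener follows."""
        coupling = {}
        for lag in range(-max_lag, max_lag + 1):
            if lag >= 0:
                s, l = speaker[: len(speaker) - lag], listener[lag:]
            else:
                s, l = speaker[-lag:], listener[: len(listener) + lag]
            coupling[lag] = np.corrcoef(s, l)[0, 1]
        return coupling

    # Synthetic check: the listener mirrors the speaker two samples later.
    rng = np.random.default_rng(1)
    speaker_ts = rng.standard_normal(300)
    listener_ts = np.roll(speaker_ts, 2) + 0.5 * rng.standard_normal(300)
    coupling = lagged_coupling(speaker_ts, listener_ts)
    print(max(coupling, key=coupling.get))       # correlation peaks at lag = +2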

Figure 3. Speaker–listener brain-to-brain coupling.

A) Speaker–listener neural coupling was assessed with a general linear model in which the time series in the speaker’s brain are used to predict the activity in the listeners’ brains. B) The speaker–listener temporal coupling varies across brain areas. In early auditory areas (A1+), the speaker–listener brain coupling is time-locked to the moment of vocalization (yellow). In posterior areas, the activity in the speaker’s brain preceded the activity in the listeners’ brains (blue), whereas in the mPFC, dlPFC, and striatum the listeners’ brain activity preceded the speaker’s (red). C) The listeners’ behavioral scores and the extent of significant speaker–listener brain coupling were found to be strongly correlated (r = 0.54, p < 0.07). These results suggest that the stronger the neural coupling between interlocutors, the better the understanding. The extent of brain areas where the listeners’ activity preceded the speaker’s activity (red areas in Figure 3B) provided the strongest correlation with behavior (r = 0.75, p < 0.01). These results provide evidence that prediction is an important aspect of successful communication. (Adapted from [49].)

The speaker-listener neural coupling reveals a shared neural substrate that exhibits temporally aligned response patterns across interlocutors. Previous studies showed that during free viewing of a movie or listening to a story, shared external input can induce similar brain responses across different individuals [5055]. Verbal communication enables us to directly convey information across brains, even when the information is unrelated to the current external environment. Thus, while stimulus-to-brain coupling (Figure 1A) is locked to momentary states of affairs in the environment, brain-to-brain coupling (Figure 1B) can provide a mechanism for transmitting information regarding temporally and spatially remote events. This mechanism works by directly inducing similar brain patterns in another listening individual in the absence of any stimulation other than speech. Such freedom from the immediate physical environment is one of the prime benefits of the human communication system.

Coupling of two brains via nonverbal communication

Brain-to-brain coupling is also possible through hand gestures and facial expressions. This was first demonstrated in an experiment in which participants played the game ‘charades’ in the fMRI scanner. A signaler had to transmit non-verbal cues about the identity of a word while her brain activity was measured and her hand gestures were video-recorded [56]. Later, an observer was shown the video footage while his brain activity was measured. Using between-brain Granger causality (Figure 4), the study found that the temporal variation in the brain activity of the observer carried information about the temporal variation in that of the signaler, demonstrating brain-to-brain coupling during gestural communication. Furthermore, between-brain Granger causality identified two networks in the observer’s brain that coupled with activity in the signaler’s motor system—regions associated with the mirror neuron system and regions associated with theory-of-mind processes—providing support for the idea that these two networks collaborate during social perception [57]. Interestingly, separate analyses of the brain activity of the signaler and observer failed to identify the full extent of the networks involved, illustrating the power of brain-to-brain approaches to advance our understanding of brain function [58].
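
A bare-bones between-brain Granger analysis asks whether the gesturer's past activity improves prediction of the observer's current activity beyond what the observer's own past already provides. The sketch below implements that logic with ordinary least squares on synthetic time courses; it is an illustration of the principle, not the voxel-wise implementation used in [56], and all parameters and variable names are ours.

    import numpy as np

    def granger_gain(source, target, order=3):
        """Log ratio of residual variances when the source's past is added to an
        autoregressive model of the target: a simple between-brain Granger index."""
        n = len(target)
        Y = target[order:]
        own_past = np.column_stack([target[order - k : n - k] for k in range(1, order + 1)])
        src_past = np.column_stack([source[order - k : n - k] for k in range(1, order + 1)])

        def residual_var(X):
            X = np.column_stack([np.ones(len(Y)), X])     # add an intercept term
            beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
            return np.var(Y - X @ beta)

        restricted = residual_var(own_past)                       # target's own past only
        full = residual_var(np.hstack([own_past, src_past]))      # ...plus the source's past
        return np.log(restricted / full)                          # > 0: source "Granger-causes" target

    # Synthetic check: the observer's activity follows the gesturer's with a 1-sample delay.
    rng = np.random.default_rng(2)
    gesturer = rng.standard_normal(500)
    observer = 0.8 * np.concatenate([[0.0], gesturer[:-1]]) + 0.3 * rng.standard_normal(500)
    print(granger_gain(gesturer, observer))   # clearly positive: gesturer's past predicts observer
    print(granger_gain(observer, gesturer))   # near zero: no flow in the reverse direction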

Figure 4. Gesturer-observer brain-to-brain coupling.

During gestural communication, in the game of charades, brain activity in the gesturer triggers muscle movements (gestures) that are seen by the observer and trigger activity in brain regions of the observer similar to those that caused the gestures in the gesturer. Between-brain Granger causality can map such information transfer by calculating how much brain activity in the gesturer helps predict brain activity in the observer.

In another study, Anders et al. had women express emotions in the fMRI scanner and then showed movies of their expressions to their romantic partners [59]. They found that the temporal sequence of activation was coupled across the brains of the women and their partners, and analyses provided evidence for a ‘shared coding’ of emotion-specific information in a distributed network across the brains of the signaler and observer.

In both the gesture and emotion experiments, each pair of participants was free to choose idiosyncratic signals, ensuring that the interaction was unique. Indeed, control analyses in both experiments revealed that the brain-to-brain coupling was tighter within a pair than across pairs [56, 59]. Thus, brain-to-brain approaches can provide an exciting new tool to study the idiosyncratic aspects of human interactions that emerge between a dyad in the context of a communicative act.
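
The logic of that control analysis is easy to sketch: compute the sender-receiver coupling for the true pairs, recompute it after scrambling the pairing, and check that the matched pairs couple more tightly. The data below are synthetic stand-ins (one time course per participant), not data from [56] or [59], and the numbers of pairs and time points are arbitrary.

    import numpy as np

    def mean_coupling(senders, receivers):
        """Average correlation between matched sender/receiver time courses."""
        return np.mean([np.corrcoef(s, r)[0, 1] for s, r in zip(senders, receivers)])

    rng = np.random.default_rng(4)
    n_pairs, n_time = 18, 200
    pair_signal = rng.standard_normal((n_pairs, n_time))          # idiosyncratic, pair-specific signal
    senders = pair_signal + 0.8 * rng.standard_normal((n_pairs, n_time))
    receivers = pair_signal + 0.8 * rng.standard_normal((n_pairs, n_time))

    within = mean_coupling(senders, receivers)
    across = mean_coupling(senders, np.roll(receivers, 1, axis=0))    # deliberately mismatched pairs
    print(f"within-pair coupling {within:.2f} vs across-pair coupling {across:.2f}")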

Synergy through joint action

Coupled systems can generate complex behaviors that cannot be performed in isolation. Many human actions, such as playing basketball or operating a sail-boat, require tight spatiotemporal coordination across team members [60]. Moreover, even actions that can be performed in isolation, such as playing a musical instrument or dancing, are faster and more accurate when performed with others.

An increasing body of evidence shows that, during joint actions, people become implicitly coupled at motor, perceptual and cognitive levels [61]. At a motoric level, for example, two people on rocking chairs synchronize their rocking as if they were mechanically coupled [62]. Pianists synchronize their playing during a duet with accuracies that rival those of the two hands of a single musician [63]. Moreover, the interpersonally coordinated actions of pairs of guitarists playing a short melody together are accompanied by between-brain oscillatory couplings [64]. At a perceptual level, when two people view an object from different sides and are asked to mentally rotate the object, they adopt the perspective of their partner [65]. At a cognitive level, implicit coupling occurs when two people respond to different aspects of a stimulus using either the same effectors (e.g. the same hand, compatible condition) or different effectors (e.g. the opposite hand, incompatible condition). Although the assigned stimulus-response tasks did not require participants to coordinate their responses, response times sped up when the other participant’s responses were compatible and slowed down when they were incompatible [66].
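
Unintentional synchronization of this kind is often modeled with weakly coupled oscillators. The toy simulation below is a Kuramoto-style sketch of our own, not the model used in [62]: two rockers with slightly different natural frequencies are treated as phase oscillators, and a modest coupling term (standing in for visual information about the partner) is enough to pull them into lockstep.

    import numpy as np

    def rocking_chairs(coupling, freq_a=0.55, freq_b=0.60, dt=0.01, duration=120.0):
        """Two rockers as coupled phase oscillators; returns a 0-1 phase-locking index."""
        phase_a, phase_b = 0.0, np.pi / 2
        locking = []
        for _ in range(int(duration / dt)):
            phase_a += dt * (2 * np.pi * freq_a + coupling * np.sin(phase_b - phase_a))
            phase_b += dt * (2 * np.pi * freq_b + coupling * np.sin(phase_a - phase_b))
            locking.append(np.exp(1j * (phase_b - phase_a)))
        return abs(np.mean(locking))          # 1 = perfectly phase-locked, near 0 = drifting

    print(rocking_chairs(coupling=0.0))   # near zero: uncoupled rockers drift in and out of phase
    print(rocking_chairs(coupling=1.0))   # close to 1: coupling pulls the rockers into sync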

A recent proposal views joint action as the linking of degrees of freedom (DF) across individuals’ motor systems into synergies [9]. In this scheme, two motor systems working together in joint action result in dimensional compression (the two individuals constrain each other’s degrees of freedom) and reciprocal compensation (whereby one component of the synergy can react to changes in other components).
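
Dimensional compression can be made concrete with a toy example: if a single task-level synergy drives most of the variance in both partners' joint angles, a principal component analysis of the pooled degrees of freedom needs far fewer components than the pooled dimensionality suggests. The data below are simulated for illustration only; this is not an analysis from [9], and the joint counts and noise levels are arbitrary assumptions.

    import numpy as np

    rng = np.random.default_rng(3)
    t = np.linspace(0, 10, 1000)
    synergy = np.sin(2 * np.pi * 1.0 * t)            # one shared task-level rhythm

    def partner_joints(rng, synergy, n_joints=3):
        """Joint-angle traces for one partner, mostly driven by the shared synergy."""
        weights = rng.uniform(0.5, 1.0, size=n_joints)
        noise = 0.1 * rng.standard_normal((synergy.size, n_joints))
        return np.outer(synergy, weights) + noise

    joint_data = np.hstack([partner_joints(rng, synergy), partner_joints(rng, synergy)])  # (time, 6 DF)

    # Dimensional compression: most variance of the pooled 6 degrees of freedom
    # is captured by a single principal component.
    centered = joint_data - joint_data.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(centered.T))[::-1]
    print("variance explained by the 1st component:", eigvals[0] / eigvals.sum())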

Joint decision making

The choices an individual makes are often influenced and modified by the decisions of others. Individuals playing a strategy game keep track not only of the actions of the opponent, but also of how opponents are influenced in response to their own actions [67]. For example, while playing the game “rock-paper-scissors” players automatically imitate each other’s strategic decisions when competing, although imitation actually reduces the chance of winning [68]. The influence of one brain on the decision-making processes of another brain was measured concurrently using a hyperscanning-fMRI setup [64, 6972]. In one study, subjects played an investor-trustee economic exchange game [69]. Neural responses, as well as the decisions, of the trustee were influenced by the social signal expressed by the investor. In particular, responses in the caudate nucleus were stronger for benevolent reciprocity than for malevolent reciprocity. Moreover, after playing a few games together, caudate brain activity in the trustee predicted the investor’s expected behavior, responding even before the investor revealed his decision.
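
For readers unfamiliar with the task, a single round of a generic investor-trustee exchange looks like the sketch below. These are standard trust-game rules with illustrative numbers; the exact endowments, multiplier, and round structure used in [69] may differ.

    def trust_game_round(endowment=20, invested=10, return_fraction=0.5, multiplier=3):
        """One round of a generic investor-trustee exchange: the investment is
        multiplied on its way to the trustee, who decides how much to send back."""
        transferred = invested * multiplier
        repaid = transferred * return_fraction
        investor_payoff = endowment - invested + repaid
        trustee_payoff = transferred - repaid
        return investor_payoff, trustee_payoff

    print(trust_game_round(invested=10, return_fraction=0.5))   # (25.0, 15.0): benevolent reciprocity
    print(trust_game_round(invested=10, return_fraction=0.1))   # (13.0, 27.0): malevolent reciprocity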

However, it is not always the case that two brains are better than one [73–75]. A first step toward modeling how two individuals combine the information they communicate to each other was made in an elegant study in which two observers performed a collective low-level perceptual decision-making task [73]. The experiment revealed that sharing information between two observers with equal visual sensitivity improves their joint performance, whereas sharing information between individuals with markedly different visual sensitivities actually worsens it.
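
This pattern falls out of one of the signal-detection models considered in [73], in which the dyad's sensitivity under confidence sharing is approximately s_dyad = (s1 + s2) / sqrt(2). The short sketch below simply evaluates that rule: two equally sensitive observers beat either individual, whereas pairing a sensitive observer with a much less sensitive one pulls performance below the better observer's.

    import numpy as np

    def dyad_sensitivity(s1, s2):
        """Predicted dyad sensitivity under a weighted-confidence-sharing rule,
        s_dyad = (s1 + s2) / sqrt(2), one of the models examined in [73]."""
        return (s1 + s2) / np.sqrt(2)

    print(dyad_sensitivity(1.0, 1.0))   # ~1.41: better than either observer alone
    print(dyad_sensitivity(1.0, 0.3))   # ~0.92: worse than the better observer (1.0)

Under this rule the dyad loses out whenever the weaker observer's sensitivity falls below roughly 0.41 of the stronger one's (the break-even point is sqrt(2) - 1).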

Concluding remarks

The structure of the shared external environment shapes our neural responses and behavior. Some aspects of this environment are determined by physical processes alone. Other aspects, however, are determined by a community of individuals, who together establish a shared set of rules (behaviors) that shape and constrain the perception and actions of each member of the group. For example, human infants undergo a period of perceptual narrowing whereby younger infants can discriminate between social signals from multiple species and cultures, but older infants fine-tune their perception following experience with their native social signals [76]. Coupled brains can create new phenomena, including verbal and non-verbal communication systems and interpersonal social institutions, that could not have emerged in species that lack brain-to-brain coupling. Thus, just as the Copernican revolution simplified rather than complicated our understanding of the physical world, embracing brain-to-brain coupling as a reference system may simplify our understanding of behavior by illuminating new forces that operate among individuals and shape our social world (Box 1).

Box 1. Questions for future research.

  1. How can we further develop principled methods for exploring information transfer across two (or more) brains?

  2. What is the full range of neural transformations that could govern or characterize brain-to-brain coupling (e.g., “mirroring”, reducing the degrees of freedom, synergy, etc.)?

  3. How does the tuning between brains emerge during development?

  4. Why are some individuals better able to couple to other individuals?

  5. What are the minimum brain-to-brain coupling requirements for inter-species interactions?

Acknowledgments

UH was supported by the National Institute of Mental Health award (R01MH094480). AAG was supported by the National Institute of Neurological Disorders and Stroke (R01NS054898) and the James S McDonnell Scholar Award. BG was supported by the National Science Foundation (BCS-1026943). CK was supported by a national initiative for the brain and cognition N.W.O. grant (433-09-253).


References

1. Wittgenstein L. Philosophical Investigations. Prentice Hall; 1973.
2. Vygotsky LS, Cole M. Mind in Society: The Development of Higher Psychological Processes. Harvard University Press; 1978.
3. Boroditsky L, Gaby A. Remembrances of times east. Psychological Science. 2010;21:1635–1639. doi: 10.1177/0956797610386621.
4. Hari R, Kujala MV. Brain basis of human social interaction: from concepts to brain imaging. Physiological Reviews. 2009;89:453–479. doi: 10.1152/physrev.00041.2007.
5. Schroeder CE, et al. Dynamics of active sensing and perceptual selection. Curr Opin Neurobiol. 2010;20:172–176. doi: 10.1016/j.conb.2010.02.010.
6. Gibson JJ. The Ecological Approach to Visual Perception. Houghton Mifflin; 1979.
7. Keysers C. The Empathic Brain. Createspace; 2011.
8. Keysers C, Gazzola V. Expanding the mirror: vicarious activity for actions, emotions, and sensations. Curr Opin Neurobiol. 2009;19:666–671. doi: 10.1016/j.conb.2009.10.006.
9. Riley MA, et al. Interpersonal synergies. Front Psychol. 2011;2:38. doi: 10.3389/fpsyg.2011.00038.
10. Baptista LF, Petrinovich L. Social interaction, sensitive phases and the song template hypothesis in the white-crowned sparrow. Animal Behaviour. 1984;32:172–181.
11. White DJ. The form and function of social development: insights from a parasite. Current Directions in Psychological Science. 2010;19:314–318.
12. West MJ, King AP. Female visual displays affect the development of male song in the cowbird. Nature. 1988;334:244–246. doi: 10.1038/334244a0.
13. Freed-Brown G, White DJ. Acoustic mate copying: female cowbirds attend to other females’ vocalizations to modify their song preferences. Proceedings of the Royal Society of London Series B: Biological Sciences. 2009;276:3319–3325. doi: 10.1098/rspb.2009.0580.
14. West MJ, et al. The development of local song preferences in female cowbirds (Molothrus ater): flock living stimulates learning. Ethology. 2006;112:1095–1107.
15. Goldstein MH, Schwade JA. From birds to words: perception of structure in social interaction guides vocal development and language learning. In: Blumberg MS, et al., editors. The Oxford Handbook of Developmental and Comparative Neuroscience. Oxford University Press; 2010. pp. 708–729.
16. Gros-Louis J, et al. Mothers provide differential feedback to infants’ prelinguistic vocalizations. International Journal of Behavioral Development. 2006;30:509–516.
17. Goldstein MH, et al. Social interaction shapes babbling: testing parallels between birdsong and speech. Proceedings of the National Academy of Sciences of the United States of America. 2003;100:8030–8035. doi: 10.1073/pnas.1332441100.
18. Goldstein MH, Schwade JA. Social feedback to infants’ babbling facilitates rapid phonological learning. Psychological Science. 2008;19:515–523. doi: 10.1111/j.1467-9280.2008.02117.x.
19. Galantucci B. An experimental study of the emergence of human communication systems. Cognitive Science. 2005;29:737–767. doi: 10.1207/s15516709cog0000_34.
20. Galantucci B, Garrod S. Experimental semiotics: a review. Frontiers in Human Neuroscience. 2011. doi: 10.3389/fnhum.2011.00011.
21. Healey PGT, et al. Graphical language games: interactional constraints on representational form. Cognitive Science. 2007;31:285–309. doi: 10.1080/15326900701221363.
22. Garrod S, et al. Foundations of representation: where might graphical symbol systems come from? Cognitive Science. 2007;31:961–987. doi: 10.1080/03640210701703659.
23. Fay N, et al. The fitness and functionality of culturally evolved communication systems. Philos Trans R Soc Lond B Biol Sci. 2008;363:3553–3561. doi: 10.1098/rstb.2008.0130.
24. Galantucci B. Experimental semiotics: a new approach for studying communication as a form of joint action. Topics in Cognitive Science. 2009;1:393–410. doi: 10.1111/j.1756-8765.2009.01027.x.
25. Chandrasekaran C, et al. The natural statistics of audiovisual speech. PLoS Computational Biology. 2009;5:e1000436. doi: 10.1371/journal.pcbi.1000436.
26. Drullman R. Temporal envelope and fine structure cues for speech intelligibility. The Journal of the Acoustical Society of America. 1995;97:585–592. doi: 10.1121/1.413112.
27. Greenberg S, et al. Temporal properties of spontaneous speech--a syllable-centric perspective. Journal of Phonetics. 2003;31:465–485.
28. Buzsaki G, Draguhn A. Neuronal oscillations in cortical networks. Science. 2004;304:1926–1929. doi: 10.1126/science.1099745.
29. Poeppel D. The analysis of speech in different temporal integration windows: cerebral lateralization as ‘asymmetric sampling in time’. Speech Communication. 2003;41:245–255.
30. Schroeder CE, et al. Neuronal oscillations and visual amplification of speech. Trends Cogn Sci. 2008;12:106–113. doi: 10.1016/j.tics.2008.01.002.
31. Giraud AL, et al. Endogenous cortical rhythms determine cerebral specialization for speech perception and production. Neuron. 2007;56:1127–1134. doi: 10.1016/j.neuron.2007.09.038.
32. Lakatos P, et al. An oscillatory hierarchy controlling neuronal excitability and stimulus processing in the auditory cortex. Journal of Neurophysiology. 2005;94:1904–1911. doi: 10.1152/jn.00263.2005.
33. Saberi K, Perrott DR. Cognitive restoration of reversed speech. Nature. 1999;398:760. doi: 10.1038/19652.
34. Shannon RV, et al. Speech recognition with primarily temporal cues. Science. 1995;270:303–304. doi: 10.1126/science.270.5234.303.
35. Smith ZM, et al. Chimaeric sounds reveal dichotomies in auditory perception. Nature. 2002;416:87–90. doi: 10.1038/416087a.
36. Ahissar E, et al. Speech comprehension is correlated with temporal response patterns recorded from auditory cortex. Proceedings of the National Academy of Sciences of the United States of America. 2001;98:13367–13372. doi: 10.1073/pnas.201400998.
37. Luo H, Poeppel D. Phase patterns of neuronal responses reliably discriminate speech in human auditory cortex. Neuron. 2007;54:1001–1010. doi: 10.1016/j.neuron.2007.06.004.
38. Sumby WH, Pollack I. Visual contribution to speech intelligibility in noise. Journal of the Acoustical Society of America. 1954;26:212–215.
39. Ghazanfar AA, Schroeder CE. Is neocortex essentially multisensory? Trends in Cognitive Sciences. 2006;10:278–285. doi: 10.1016/j.tics.2006.04.008.
40. Luo H, et al. Auditory cortex tracks both auditory and visual stimulus dynamics using low-frequency neuronal phase modulation. PLoS Biology. 2010;10:e1000445. doi: 10.1371/journal.pbio.1000445.
41. Garrod S, Pickering MJ. Why is conversation so easy? Trends Cogn Sci. 2004;8:8–11. doi: 10.1016/j.tics.2003.10.016.
42. Pickering MJ, Garrod S. Toward a mechanistic psychology of dialogue. Behav Brain Sci. 2004;27:169–190; discussion 190–226. doi: 10.1017/s0140525x04000056.
43. Pardo JS. On phonetic convergence during conversational interaction. J Acoust Soc Am. 2006;119:2382–2393. doi: 10.1121/1.2178720.
44. Branigan HP, et al. Syntactic co-ordination in dialogue. Cognition. 2000;75:B13–B25. doi: 10.1016/s0010-0277(99)00081-5.
45. Garrod S, Anderson A. Saying what you mean in dialogue: a study in conceptual and semantic co-ordination. Cognition. 1987;27:181–218. doi: 10.1016/0010-0277(87)90018-7.
46. Liberman AM, Whalen DH. On the relation of speech to language. Trends Cogn Sci. 2000;4:187–196. doi: 10.1016/s1364-6613(00)01471-6.
47. Cleland AA, Pickering MJ. The use of lexical and syntactic information in language production: evidence from the priming of noun-phrase structure. Journal of Memory and Language. 2003;49:214–230.
48. Zwaan RA, Radvansky GA. Situation models in language comprehension and memory. Psychol Bull. 1998;123:162–185. doi: 10.1037/0033-2909.123.2.162.
49. Stephens G, et al. Speaker-listener neural coupling underlies successful communication. Proc Natl Acad Sci U S A. 2010. doi: 10.1073/pnas.1008662107.
50. Hasson U, et al. A hierarchy of temporal receptive windows in human cortex. J Neurosci. 2008;28:2539–2550. doi: 10.1523/JNEUROSCI.5487-07.2008.
51. Golland Y, et al. Extrinsic and intrinsic systems in the posterior cortex of the human brain revealed during natural sensory stimulation. Cereb Cortex. 2007;17:766–777. doi: 10.1093/cercor/bhk030.
52. Hasson U, et al. Intersubject synchronization of cortical activity during natural vision. Science. 2004;303:1634–1640. doi: 10.1126/science.1089506.
53. Wilson SM, et al. Beyond superior temporal cortex: intersubject correlations in narrative speech comprehension. Cereb Cortex. 2008;18:230–242. doi: 10.1093/cercor/bhm049.
54. Hanson SJ, et al. Solving the brain synchrony eigenvalue problem: conservation of temporal dynamics (fMRI) over subjects doing the same task. J Comput Neurosci. 2008. doi: 10.1007/s10827-008-0129-z.
55. Jaaskelainen PI, et al. Inter-subject synchronization of prefrontal cortex hemodynamic activity during natural viewing. The Open Neuroimaging Journal. 2008;2:14–19. doi: 10.2174/1874440000802010014.
56. Schippers MB, et al. Mapping the information flow from one brain to another during gestural communication. Proc Natl Acad Sci U S A. 2010;107:9388–9393. doi: 10.1073/pnas.1001791107.
57. Keysers C, Gazzola V. Integrating simulation and theory of mind: from self to social cognition. Trends Cogn Sci. 2007;11:194–196. doi: 10.1016/j.tics.2007.02.002.
58. Schippers MB, et al. Playing charades in the fMRI: are mirror and/or mentalizing areas involved in gestural communication? PLoS One. 2009;4:e6801. doi: 10.1371/journal.pone.0006801.
59. Anders S, et al. Flow of affective information between communicating brains. Neuroimage. 2010;54:439–446. doi: 10.1016/j.neuroimage.2010.07.004.
60. Hutchins E. Cognition in the Wild. MIT Press; 1995.
61. Knoblich G, et al. Psychological research on joint action: theory and data. In: Ross B, editor. The Psychology of Learning and Motivation. Academic Press; 2011. pp. 59–101.
62. Richardson MJ, et al. Rocking together: dynamics of intentional and unintentional interpersonal coordination. Hum Mov Sci. 2007;26:867–891. doi: 10.1016/j.humov.2007.07.002.
63. Keller PE. Joint action in music performance. In: Morganti A, et al., editors. Enacting Intersubjectivity: A Cognitive and Social Perspective to the Study of Interactions. IOS Press; 2008. pp. 205–221.
64. Lindenberger U, et al. Brains swinging in concert: cortical phase synchronization while playing guitar. BMC Neurosci. 2009;10:22. doi: 10.1186/1471-2202-10-22.
65. Bockler A, et al. Giving a helping hand: effects of joint attention on mental rotation of body parts. Exp Brain Res. 2011;211:531–545. doi: 10.1007/s00221-011-2625-z.
66. Sebanz N, et al. Representing others’ actions: just like one’s own? Cognition. 2003;88:B11–B21. doi: 10.1016/s0010-0277(03)00043-x.
67. Hampton AN, et al. Neural correlates of mentalizing-related computations during strategic interactions in humans. Proc Natl Acad Sci U S A. 2008;105:6741–6746. doi: 10.1073/pnas.0711099105.
68. Cook R, et al. Automatic imitation in a strategic context: players of rock-paper-scissors imitate opponents’ gestures. Proc Biol Sci. 2011. doi: 10.1098/rspb.2011.1024.
69. King-Casas B, et al. Getting to know you: reputation and trust in a two-person economic exchange. Science. 2005;308:78–83. doi: 10.1126/science.1108062.
70. Montague PR, et al. Hyperscanning: simultaneous fMRI during linked social interactions. Neuroimage. 2002;16:1159–1164. doi: 10.1006/nimg.2002.1150.
71. Tomlin D, et al. Agent-specific responses in the cingulate cortex during economic exchanges. Science. 2006;312:1047–1050. doi: 10.1126/science.1125596.
72. Babiloni F, et al. Hypermethods for EEG hyperscanning. Conf Proc IEEE Eng Med Biol Soc. 2006;1:3666–3669. doi: 10.1109/IEMBS.2006.260754.
73. Bahrami B, et al. Optimally interacting minds. Science. 2010;329:1081–1085. doi: 10.1126/science.1185718.
74. Kerr NL, Tindale RS. Group performance and decision making. Annu Rev Psychol. 2004;55:623–655. doi: 10.1146/annurev.psych.55.090902.142009.
75. Krause J, et al. Swarm intelligence in animals and humans. Trends Ecol Evol. 2010;25:28–34. doi: 10.1016/j.tree.2009.06.016.
76. Lewkowicz DJ, Ghazanfar AA. The emergence of multisensory systems through perceptual narrowing. Trends Cogn Sci. 2009;13:470–478. doi: 10.1016/j.tics.2009.08.004.
