The Journal of Neuroscience
. 2007 Jan 10;27(2):308–314. doi: 10.1523/JNEUROSCI.4822-06.2007

Action Representation of Sound: Audiomotor Recognition Network While Listening to Newly Acquired Actions

Amir Lahav 1,2, Elliot Saltzman 2,3, Gottfried Schlaug 1
PMCID: PMC6672064  PMID: 17215391

Abstract

The discovery of audiovisual mirror neurons in monkeys gave rise to the hypothesis that premotor areas are inherently involved not only when observing actions but also when listening to action-related sound. However, the whole-brain functional formation underlying such “action–listening” is not fully understood. In addition, previous studies in humans have focused mostly on relatively simple and overexperienced everyday actions, such as hand clapping or door knocking. Here we used functional magnetic resonance imaging to ask whether the human action-recognition system responds to sounds found in a more complex sequence of newly acquired actions. To address this, we chose a piece of music as a model set of acoustically presentable actions and trained non-musicians to play it by ear. We then monitored brain activity in subjects while they listened to the newly acquired piece. Although subjects listened to the music without performing any movements, activation was found bilaterally in the frontoparietal motor-related network (including Broca's area, the premotor region, the intraparietal sulcus, and the inferior parietal region), consistent with neural circuits that have been associated with action observation and that may constitute the human mirror neuron system. Presentation of the practiced notes in a different order activated the network to a much lesser degree, whereas listening to an equally familiar but motorically unknown piece of music did not activate this network. These findings support the hypothesis of a “hearing–doing” system that is highly dependent on the individual's motor repertoire, is established rapidly, and includes Broca's area as its hub.

Keywords: Broca's area, mirror neuron system, auditory, fMRI, premotor, sensorimotor

Introduction

When we observe someone performing an action, our brain may produce neural activity similar to that seen when we perform it ourselves. This neural simulation happens via a multimodal mirror neuron system, originally found in the monkey ventral premotor cortex (Gallese et al., 1996; Rizzolatti et al., 1996) and recently extended to humans in a variety of action-observation tasks (Decety and Grezes, 1999; Nishitani and Hari, 2000; Buccino et al., 2001; Grezes et al., 2003; Kilner et al., 2003; Gangitano et al., 2004; Hamilton et al., 2004; Calvo-Merino et al., 2005; Haslinger et al., 2005; Iacoboni, 2005; Nelissen et al., 2005; Thornton and Knoblich, 2006). Mirror neurons, however, may not only be triggered by visual stimuli. A subgroup of premotor neurons in monkeys also responded to the sound of actions (e.g., peanut breaking), and it has been recently suggested that there might also be a cross-modal neural system that formally orchestrates these neurons in humans (Kohler et al., 2002; Keysers et al., 2003). However, the whole-brain functional formation and mechanism underlying such “action–listening” is not fully understood.

Action listening plays a part in many of our daily activities. Consider, for example, listening to door knocking or finger snapping. You would probably recognize the sound, while at the same time your brain may also simulate the action (Aziz-Zadeh et al., 2004). Previous studies, however, have mostly focused on actions that we all learn from infancy and that are typically overexperienced, for example, hand clapping (Pizzamiglio et al., 2005), tongue clicking (Hauk et al., 2006), and speech (Fadiga et al., 2002; Wilson et al., 2004; Buccino et al., 2005). One other concern, at least with regard to the sound of speech, is that speech is not representative of all sounds; it carries meaning and is limited to communicative mouth actions, which, taken together, may activate different types of neural circuits than nonspeech sounds (Pulvermuller, 2001; Zatorre et al., 2002; Thierry et al., 2003; Schon et al., 2005; Ozdemir et al., 2006). Thus, in the present study, we ask whether and how the mirror neuron system responds to actions and sounds that do not have verbal meaning and, most importantly, are well controlled and newly acquired.

To address this, we chose piano playing as a model task and a piece of music as a set of acoustically presentable novel actions. The use of music making as a sensory–motor framework for studying the acquisition of actions has been demonstrated in several previous studies (Stewart et al., 2003; Bangert et al., 2006b). Typically, when playing the piano, auditory feedback is naturally involved in each of the player's movements, leading to an intimate coupling between perception and action (Bangert and Altenmuller, 2003; Janata and Grafton, 2003). We therefore hypothesized that music one knows how to play (even if only recently learned) may be strongly associated with the corresponding elements of the individual's motor repertoire and might activate an audiomotor network in the human brain (Fig. 1).

Figure 1.

Action–listening illustration. A, Music performance can be viewed as a complex sequence of both actions and sounds, in which sounds are made by actions. B, The sound of music one knows how to play can be reflected, as if in a mirror, in the corresponding motor representations.

Materials and Methods

Experimental design

The present study had two stages. First, we trained musically naive subjects to play a novel piano piece (“trained-music”) and measured their learning progress over a period of 5 d. Next, on day 5, we performed functional magnetic resonance imaging (fMRI) scans on subjects while they passively listened to short passages taken from the newly acquired piece (this was considered the action–listening condition). Similar passages taken from two other original musical pieces that subjects had only listened to (to control for familiarity) but had never learned to play were used for the (two) control conditions (Fig. 2B).

Figure 2.

Experimental setup. A, Learning times of the trained-music are shown for individual subjects (n = 9) over five daily piano-training sessions. Learning time was influenced by the number of errors made during a session, until reaching error-free performance with the musical piece. A plateau level of performance is notable in sessions 4–5, obtaining the minimum possible learning time (12 min) set by the software. B, Examples of three fMRI experimental conditions. Sample short passages extracted from the trained-music that subjects learned how to play (blue) and two untrained control musical pieces are shown. These controls include an untrained-same-notes-music, in which the same notes were used in a different order to create a new melody (cyan) and an untrained-different-notes-music, composed of a completely different set of notes (red). C, A sample fMRI time series is shown between two imaging time points (TR, 18 s; TA, 1.75 s; red) within a single run. Images were acquired immediately after listening to short passages (5–8 s each; yellow) extracted from all three musical pieces. A behavioral control task (green) was used to control for listening attention; subjects heard three piano notes and had to press a button with their left hand if these notes had appeared as a subsequence in the preceding musical passage (yellow) they had heard. Intentionally, no images reflecting this task were acquired. We varied pause times between scans (gray) so that the TR remained fixed. D, Results of the behavioral control task are shown for all listening conditions. The graph shows the percentage of correct button presses from the group (n = 9) for each listening condition (24 trials per subject). No significant differences were found for mean correct responses across musical pieces (p = 0.489).

Subjects

Nine non-musicians (six females and three males; mean age, 22.4 ± 2.2 years old) participated in the study. All subjects were right-handed [assessed by a questionnaire (Oldfield, 1971)], had no previous musical training (including voice), and had no history of neurological, psychiatric, or auditory problems. This study was approved by the Institutional Review Boards of Beth Israel Deaconess Medical Center and Boston University, and all subjects gave written informed consent.

Experimental training

Drawing on previous MIDI-based interactive software tools (Bangert et al., 2001), we developed our own MIDI-based interactive software for learning to play by ear (Lahav et al., 2005) and trained subjects to play the piano part of a novel musical piece (trained-music) with no sight-reading required. This was done to eliminate the requirement for visuomotor translation of musical notation into key presses and to enhance auditory–motor learning. During piano training, subjects learned to play the piano part (along with a prerecorded accompaniment), using their right hand and a set of five keys in a fixed fingering position (F, G, A, Bb, C) (Fig. 1A, gray keys) (the same finger always hit the same key). To complete a piano session, subjects had to gradually work through a series of trials until reaching error-free performance with the musical piece, while a computer notified them of a note error (wrong key press) or timing error (deviation >1/16 of a beat) after each playing trial.
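The trial-scoring rule described above can be sketched as follows: a trial is error-free only if every played note matches the target pitch and falls within 1/16 of a beat of its scheduled onset. This is an illustrative reconstruction, not the authors' MIDI software; the function name and data layout are assumptions.

```python
# Hypothetical sketch of the per-trial error check described in the text:
# a note error is a wrong key press; a timing error is an onset deviation
# of more than 1/16 of a beat. Names and data layout are illustrative.

BEAT_FRACTION = 1.0 / 16.0  # maximum allowed timing deviation, in beats

def score_trial(target, played):
    """Return (note_errors, timing_errors) index lists for one playing trial.

    target / played: lists of (pitch_name, onset_in_beats) tuples,
    assumed to be aligned note-for-note.
    """
    note_errors, timing_errors = [], []
    for i, ((t_pitch, t_onset), (p_pitch, p_onset)) in enumerate(zip(target, played)):
        if p_pitch != t_pitch:
            note_errors.append(i)        # wrong key press
        elif abs(p_onset - t_onset) > BEAT_FRACTION:
            timing_errors.append(i)      # off by more than 1/16 beat
    return note_errors, timing_errors

# A session would end once a trial returns ([], []) -- error-free performance.
```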

Two behavioral tests to assess auditory–motor learning

Measuring learning time.

The learning time of the musical piece was a function of the number of errors made during a given session. We measured learning progress in subjects over 5 consecutive days (one session per day), specifically looking at the duration (minutes) it took for each subject to achieve error-free performance and mastery of the piece.

Pitch-recognition–production test.

Subjects heard 30 single piano notes (from the set of five notes used in the trained-music: F, G, A, Bb, C) and had to press the corresponding piano key with the matching right-hand finger for each note in turn. The notes were played in random order, out of musical context. To rule out possible learning effects, subjects did not receive knowledge of the results (auditory feedback) when pressing the piano keys. This test was done to assess pitch–key mapping ability and was performed before and after the 5 d piano-training period.

Auditory stimuli

Three musical pieces were involved in this study. Subjects learned how to play only one musical piece (trained-music), whereas in addition, they listened to (to control for familiarity) but did not physically train with two other control musical pieces: (1) a musical piece composed of a completely different set of notes (F#, G#, B, C#, D#) than the one used in the trained-music (i.e., “untrained-different-notes-music”); and (2) a musical piece in which the same notes (F, G, A, Bb, C) were used in a different order to compose a new melody (i.e., untrained-same-notes-music). The auditory exposure time for all three musical pieces was equivalent. All musical pieces were novel and composed specifically for this study based on principles of Western music; they were all of the same length (24 s, eight measures, 15 notes in total) and same tempo (80 beats per minute) and were played by piano accompanied by guitar, bass, and drums. Short samples of the three musical pieces are shown in Figure 2B. The first three bars of the trained-music are shown in Figure 1A; for a fully orchestrated score and detailed description of software setup, see Lahav et al. (2005).

Motion-tracking system

To ensure motionless listening, we trained subjects to listen passively to music in a simulated scanning position (supine, palms facing up, fixated wrist) and digitally monitored their finger movements. We used a specially designed motion-tracking system plus a passive-marker glove, implemented in the EyesWeb development environment (www.eyesweb.org). Subjects wore a cotton glove with red markers on the fingertips and listened to music in a simulated scanning position just before the fMRI procedure. A camera (Quickcam Pro 4000; Logitech, Fremont, CA), facing downward, detected changes in finger position as small as 1 mm, based on real-time computation of pixel color change, collecting 352 × 288 pixel frames at a rate of up to 30 Hz in the RGB (red/green/blue) color space.
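The pixel-color-change principle behind the tracker can be illustrated with a minimal sketch: flag movement when the fraction of pixels whose RGB values change between consecutive frames exceeds a threshold. The thresholds here are assumptions for illustration, not the EyesWeb implementation.

```python
# Minimal sketch of frame-differencing motion detection, assuming
# 352 x 288 RGB frames as in the text. Both thresholds (per-pixel
# intensity delta and changed-pixel fraction) are illustrative guesses.
import numpy as np

def moved(prev_frame, next_frame, pixel_delta=30, changed_fraction=0.001):
    """True if enough pixels changed color between two consecutive frames.

    prev_frame / next_frame: uint8 arrays of shape (288, 352, 3).
    """
    # Per-pixel color change: largest absolute difference across R, G, B.
    diff = np.abs(prev_frame.astype(int) - next_frame.astype(int)).max(axis=2)
    # Fraction of the image whose color changed noticeably.
    changed = (diff > pixel_delta).mean()
    return changed > changed_fraction
```

In practice a red-marker tracker would first segment the marker pixels by hue; the differencing step above only shows why small fingertip displacements register as color change at fixed pixel locations.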

fMRI acquisition and data analysis

A 3T GE whole-body system (GE Medical Systems, Milwaukee, WI) was used to acquire a set of high-resolution T1-weighted anatomical images (voxel size, 0.93 × 0.93 × 1.5 mm) and functional magnetic resonance images using a gradient echo-planar T2* sequence sensitive to the blood oxygenation level-dependent contrast (voxel size, 2 × 2 × 4 mm). To reduce scanner noise artifacts and interference with the music, we used a sparse temporal sampling technique [repetition time (TR), 18 s], acquiring 28 axial slices in a cluster [acquisition time (TA), 1.75 s]. MR images were acquired after listening to 5, 6, 7, and 8 s musical passages. A total of 32 fully orchestrated short passages extracted from three musical pieces were presented in a counterbalanced block design, with a total of 108 sets of axial images acquired during nine functional runs (see Fig. 2B,C for details). During each run, we acquired 12 sets of 28 axial slices (eight listening scans and four rest scans; total run time, 234 s). To reduce the chance of movement artifacts, subjects lay supine with their eyes closed and their palms facing up (an “uninviting” playing position) and followed instructions to stay as still as possible. fMRI data were analyzed using the SPM99 software package (Institute of Neurology, London, UK). Each set of axial images for each subject was realigned to the first image, coregistered with the corresponding T1-weighted data set, spatially normalized to the T1 template, and smoothed with an isotropic Gaussian kernel (8 mm full-width at half-maximum). Subject and condition effects were estimated using a general linear model. Global differences in scan intensity were removed by scaling each scan in proportion to its global intensity, and low-frequency drifts were removed using the default temporal high-pass filter.
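The sparse-sampling timing above implies simple arithmetic: with the TR fixed at 18 s and a 1.75 s acquisition cluster, the silent pause around each event is whatever time remains after the musical passage and the behavioral probe. Only the TR, TA, and passage lengths come from the text; the probe duration below is an assumed placeholder.

```python
# Sketch of the sparse temporal sampling schedule described above.
# TR and TA are from the text; probe_s is a hypothetical placeholder
# for the three-tone behavioral task between scans.

TR = 18.0   # repetition time, s (fixed)
TA = 1.75   # clustered acquisition time, s

def pause_before(passage_s, probe_s=3.0):
    """Silence needed so that passage + acquisition + probe fill one TR."""
    pause = TR - TA - passage_s - probe_s
    assert pause >= 0, "events exceed the 18 s repetition time"
    return pause

# Longer passages leave shorter pauses, keeping the TR constant:
# pause_before(5.0) -> 8.25, pause_before(8.0) -> 5.25
```

This is why the pause times in Figure 2C vary from trial to trial while the scan-to-scan interval stays fixed.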

A behavioral control task during the fMRI procedure

To ensure attentive listening to music, we included a behavioral control task during the fMRI procedure. After listening to each musical passage, subjects heard a three-tone sequence and had to press a button with their left hand if these notes had appeared as a subsequence in the preceding musical passage they had heard. Approximately 50% of the time, the three-tone sequence was part of the musical passage heard before. We intentionally arranged the acquisition of images so that no acquisition would reflect the neural activity induced by this control task (for details of the task design, see Fig. 2C).
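The probe decision described above reduces to a contiguous-subsequence test: did the three probe notes occur, in order and adjacent, within the passage just heard? A minimal sketch, assuming "subsequence" means a contiguous run and using illustrative note names:

```python
# Hypothetical sketch of the three-tone probe check: the subject should
# press the button only if the probe appears as a contiguous run of
# notes inside the preceding passage.

def probe_matches(passage, probe):
    """True if `probe` appears as a contiguous run inside `passage`."""
    n = len(probe)
    return any(passage[i:i + n] == probe for i in range(len(passage) - n + 1))

# probe_matches(["F", "G", "A", "Bb"], ["G", "A", "Bb"]) -> True
# probe_matches(["F", "G", "A", "Bb"], ["G", "Bb", "A"]) -> False
```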

Results

Action–sound training: learning the musical piece

In the first piano session, the time required to reach error-free performance with the musical piece was highly variable across subjects (mean, 29.11 min; SD, 5.53). The following sessions (2–5) were used to ensure mastery of the musical piece and to reduce performance variability between subjects during their fMRI scanning session (Fig. 2A). A plateau level of performance is notable in sessions 4–5, obtaining (and staying within) the minimum possible learning time (12 min) set by the software.

Pitch-recognition–production test

Results of the pitch-recognition–production (PRP) test indicate that, as a byproduct of learning to play by ear, subjects also developed (albeit not to perfection) a pitch–key mapping ability for the five pitches–keys used during piano training (i.e., the ability to recognize pitches and identify them in real time on the piano keyboard). Subjects significantly improved their PRP scores from 24% before the piano-training period (around the 20% chance level expected with five keys) to 77% after training was completed.

Behavioral control during fMRI

The three-tone recognition task revealed similar performance across all three musical pieces. The motivation for implementing such a task was to make sure subjects were indeed listening and attending to the music (and not just hearing it in the background). The proportion of errors during this task did not differ across fMRI listening conditions, indicating that the differences in neural activation across music conditions were not caused by possible variations in the level of attention or auditory engagement with a given musical piece (Fig. 2D) [repeated-measures ANOVA with conditions (trained-music, untrained-same-notes-music, untrained-different-notes-music) as within-subjects factors; F(2,16) = 1.536; p = 0.489].

Contrasting trained-music versus untrained-different-notes-music

All listening conditions (compared with rest) showed a similar activation pattern in primary and secondary auditory cortices bilaterally (Fig. 3A,B). However, when subjects listened to the trained-music (but not to the untrained-different-notes-music), activation was found in additional frontoparietal motor-related regions including, prominently, the posterior inferior frontal gyrus (IFG; Broca's region and Broca's homolog on the right; BA44, BA45) as well as the posterior middle premotor region and, to a lesser degree, the inferior parietal lobule (supramarginal gyrus and angular gyrus) bilaterally and cerebellar areas on the left (Fig. 3A,C) [trained-music > untrained-different-notes-music; p < 0.05, false discovery rate (FDR) corrected]. The complete lack of primary motor cortex (M1) activation in the hand/finger region during all conditions may be taken as evidence that subjects indeed did not physically move, as instructed (this was additionally verified by an observer in the scanning room).

Figure 3.

Action–listening activation. A, B, Extensive bilateral activation in the frontoparietal motor-related brain regions was observed when subjects listened to the trained-music they knew how to play (A), but not when they listened to the never-learned untrained-different-notes-music (B) (contrasted against rest baseline; p < 0.05, FDR corrected). C, Activation maps are shown in areas that were significantly more active during listening to the trained-music versus the untrained-different-notes-music. Surface projection of group mean activation (n = 9) are rendered onto an individual standardized brain (p < 0.05, FDR corrected). L, Left; R, right.

Contrasting trained-music versus untrained-same-notes-music

To further investigate the tuning of the action-recognition system, we compared brain activity in subjects during listening to the trained-music versus the untrained-same-notes-music (in which the notes of the trained-music were arranged in a different order). Interestingly, we found significant activation in the left posterior premotor cortex, as well as in the posterior part of the IFG containing Broca's area and in its right-hemispheric homolog (Fig. 4B) (p < 0.05, FDR corrected). Yet, despite those differences, several premotor and parietal regions were still active bilaterally (although to a much lesser degree) even when subjects listened to the untrained-same-notes-music (Fig. 4A). In addition, a region-of-interest analysis of the pars opercularis of the IFG indicates significant peak activations on the left only when subjects listened to the trained-music, whereas the right IFG remained fairly active across listening conditions (Fig. 4C) [repeated-measures ANOVA shows a significant condition effect for the left IFG (F(2,16) = 10.324; p = 0.001) and no effect for the right IFG (F(2,16) = 0.026; p = 0.973)].

Figure 4.

A, Areas activated during listening to the untrained-same-notes-music contrasted against rest (p < 0.05, FDR corrected). B, Contrasted image of group mean activation is presented in areas that were significantly more active during listening to trained-music compared with untrained-same-notes-music. This included the left premotor region as well as Broca's area and its right-hemispheric homolog (green arrows), shown also in the corresponding coronal view (middle) (p < 0.05, FDR corrected). C, Parameter estimates (β values) of the left (−50, 18, 16; magenta) and right (52, 18, 16; cyan) IFG across listening conditions. Results indicate significant peak activations in the left IFG only when subjects listened to the trained-music they knew how to play (p = 0.001), whereas the right IFG remained fairly active across listening conditions (p = 0.973).

Discussion

A unique aspect of the present study is the use of completely novel actions for studying auditory operations of the mirror neuron system. Previous EEG (Pizzamiglio et al., 2005; Hauk et al., 2006) and transcranial magnetic stimulation (Aziz-Zadeh et al., 2004) studies have mainly focused on the sound of overexperienced actions, laying the groundwork for this field. The advantage, however, of training subjects with novel actions is twofold. First, it allows one to follow the neural formation of auditory–motor learning while eliminating possible influences from previous learning experiences. Second, it allows working with actions and sounds that are exclusively associated with a specific body part (here, the right hand). This kind of control is almost impossible when investigating freestyle everyday actions that could potentially be bimanual (paper tearing) or mixed-handed (door knocking).

In addition, the use of music as an audible sequence of actions reveals important sequential aspects of the human action-recognition system. One should bear in mind that during the fMRI procedure, subjects did not listen to the musical piece in its entirety but to only short segments of it (5–8 s each) containing only part of the entire sequence of actions. The fact that listening to such short subsequences was still enough to activate an audiomotor recognition network suggests high levels of detection for action-related sounds (Fig. 3A,C). It is also striking that these audiomotor activation patterns are, in fact, within the core regions of the frontoparietal mirror neuron circuit previously found in humans in a variety of action-observation tasks (Grezes et al., 2003; Iacoboni et al., 2005; Lotze et al., 2006) including music observations, such as watching chord progression on the guitar (Buccino et al., 2004a) or finger-playing movements on the piano (Haslinger et al., 2005).

When subjects listened to the trained-music, a sound–action network became active but actions were not executed (Fig. 3A,C). However, when subjects listened to music they had never played before, they could not match it with existing action representations, and thus auditory activation was entirely dominant (Fig. 3B). These findings point to the heart of the action–listening mechanism and are in keeping with previous evidence for increased motor excitability (D'Ausilio et al., 2006) and premotor activity (Lotze et al., 2003; Bangert et al., 2006a) during listening to a rehearsed musical piece. Furthermore, our findings may be analogous to studies in the visuomotor domain, in which the mirror neuron system was involved only when the observed action was part of the observer's motor repertoire, such as in the case of dancers watching movements from their own dance style (but not from other styles) (Calvo-Merino et al., 2005) or in the case of humans watching biting actions (but not barking) (Buccino et al., 2004b).

It is particularly interesting that the posterior IFG, including Broca's area, was active only when subjects listened to the music they knew how to play. Even the presentation of music made of the same notes in an untrained order did not activate this area (Fig. 4B,C). These results reinforce the previous literature in two important ways. First, Broca's area is the human homolog of area F5 (ventral premotor cortex) in the monkey, where mirror neurons have been located (Rizzolatti and Arbib, 1998; Binkofski and Buccino, 2006). Our findings thus support the view that Broca's area is presumably a central region (“hub”) of the mirror neuron network (Iacoboni et al., 1999; Nishitani and Hari, 2000; Hamzei et al., 2003; Rizzolatti and Craighero, 2004; Nelissen et al., 2005), demonstrating here its multifunctional role in action listening. Second, mounting evidence suggests that in addition to its “classical” involvement in language, Broca's area also functions as a sensorimotor integrator (Binkofski and Buccino, 2004, 2006; Baumann et al., 2005), manual imitator (Krams et al., 1998; Heiser et al., 2003), supramodal sequence predictor (Maess et al., 2001; Kilner et al., 2004; Iacoboni et al., 2005), and internal simulator (Platel et al., 1997; Nishitani and Hari, 2000; Schubotz and von Cramon, 2004) of sequential actions. It is therefore possible that the activity in Broca's area during the action–listening condition reflects sequence-specific priming of action representations along with unconscious simulations and predictions as to the next action/sound to come. Such implicit motor predictions may partially overlap with functional operations of the mirror neuron system and thus could hardly have been made while listening to the never-learned musical pieces.

The significant activity in Broca's area during the action–listening condition strongly supports recent evidence for left-hemisphere dominance for action-related sounds (Pizzamiglio et al., 2005; Aziz-Zadeh et al., 2006). Region-of-interest analysis strongly confirms this laterality effect and also shows that the right IFG remained fairly active across all listening conditions (Fig. 4C). It is therefore likely that the observed activation in the right IFG may not fully reflect action representation per se but rather simply operations of music perception, consistent with the role of the right hemisphere in melodic/pitch processing (Zatorre et al., 1992). However, despite the left-hemispheric lateralization of the IFG, our data suggest that, at least with regard to the overall frontoparietal activation pattern, actions may be represented across the two hemispheres (Fig. 3A,C). The ipsilateral premotor activity, although seemingly illogical (because only the right hand was trained) is, in fact, in accordance with previous evidence for bilateral premotor representation of finger movements (Kim et al., 1993; Hanakawa et al., 2005), as well as with the general view that the mirror neuron system is bihemispheric in nature regardless of the laterality of the involved hand (Rizzolatti and Craighero, 2004; Molnar-Szakacs et al., 2005; Aziz-Zadeh et al., 2006). Support for such bilateral operation has mainly come from action-observation studies, whereas evidence from action listening is still quite limited. Having abstract representations of actions in both hemispheres may be essential for making actions more generalizable (Grafton et al., 2002; Vangheluwe et al., 2005) and potentially transferable from one limb to another (Rijntjes et al., 1999).

One may wonder what could be the reason for the premotor activity seen when subjects listened to the untrained-same-notes-music (Fig. 4A), especially because they had never played that piece. A possible explanation is that this premotor activity reflects the ability of subjects to link some of the notes they heard to the matching fingers and piano keys (77% matching accuracy in the PRP test; see Materials and Methods) (Fig. 5). Strikingly, this pitch–key linking mechanism seemed to be purely implicit because subjects were completely unaware (presumably because of their musical naivety) that the piece was composed of the same notes as the trained-music (as confirmed in post-fMRI interviews). As a result of this linking ability, the notes themselves appeared motorically familiar, which in turn was sufficient to activate a small action–sound circuit. This circuit supposedly demonstrates basic operations of audiomotor recognition at the level of one single action (finger press) and one sound (piano pitch). Nevertheless, this audiomotor recognition of individual notes, without the complete motor representation of the melodic sequence, was not enough to fully engage the hearing–doing mirror neuron system for action listening.

Figure 5.

Scores of the PRP test. The percentage of correct responses is shown for individual subjects, before the initial (red) and after the last (blue) piano-training session. Mean group scores (solid line) and SEs (dashed line) are presented. An improvement trend is shown, from 24% before training (around the level of chance) to 77% after training (mean relative improvement of 209%; p < 10^−4).

To what extent does action listening involve an implicit mental rehearsal component of the heard action? This may be somewhat difficult to answer based on the present study, because we did not include an explicit imagery condition. Yet the lack of activation in the “classic” (although still controversial) motor imagery network [including the contralateral (left) M1/S1, the supplementary motor area, and the ipsilateral cerebellum] (Fig. 3A) argues against the possibility that subjects consciously imagined themselves playing the music (Grafton et al., 1996; Porro et al., 1996; Langheim et al., 2002). Moreover, imagery was very unlikely, because our auditory control task during fMRI scanning sidetracked subjects from playing the music in their minds (as confirmed by post-fMRI interviews). Furthermore, evidence suggests that mental simulations and operations of the mirror neuron system might all be functionally related or possibly even a form of one another (Grezes and Decety, 2001; Patuzzo et al., 2003; Cisek and Kalaska, 2004). Additional studies are still needed to support this hypothesis.

Finally, what could be the functional advantage of having audiomotor recognition networks? It has been suggested that such audiomotor networks are essential for the acquisition of language, serving as a critical sensorimotor feedback loop during speech perception (Rizzolatti and Arbib, 1998; Theoret and Pascual-Leone, 2002). We suggest that, in addition, such networks may have evolved to support the survival of hearing organisms, allowing the understanding of actions even when they cannot be observed but can only be heard (for example, the sound of footsteps in the dark).

In conclusion, this study indicates that the human action-recognition system is highly sensitive to the individual's motor experience and has the tuning capabilities needed to discriminate between the sound of newly acquired actions and the sound of actions that are motorically unknown. Thus, acquiring actions that have an audible output quickly generates a functional neural link between the sound of those actions and the presumably corresponding motor representations. These findings serve as an important brain imaging addition to a growing body of research on the auditory properties of the mirror neuron system and may encourage additional investigations in the realm of action listening.

Footnotes

This work was supported in part by grants from the National Institutes of Health–National Institute of Neurological Disorders and Stroke (E.S. and G.S.) and the International Foundation for Music Research (G.S.) and by a Dudley Allen Sargent Research Grant (A.L.). We thank Adam Boulanger for computer music support, Marc Bangert for discussions at initial stages, Eugene Wong for help with graphic design, and Galit Lahav and Daniel Segre for insightful comments.

References

  1. Aziz-Zadeh L, Iacoboni M, Zaidel E, Wilson S, Mazziotta J. Left hemisphere motor facilitation in response to manual action sounds. Eur J Neurosci. 2004;19:2609–2612. doi: 10.1111/j.0953-816X.2004.03348.x. [DOI] [PubMed] [Google Scholar]
  2. Aziz-Zadeh L, Koski L, Zaidel E, Mazziotta J, Iacoboni M. Lateralization of the human mirror neuron system. J Neurosci. 2006;26:2964–2970. doi: 10.1523/JNEUROSCI.2921-05.2006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Bangert M, Altenmuller EO. Mapping perception to action in piano practice: a longitudinal DC-EEG study. BMC Neurosci. 2003;4:26. doi: 10.1186/1471-2202-4-26. [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Bangert M, Haeusler U, Altenmuller E. On practice: how the brain connects piano keys and piano sounds. Ann NY Acad Sci. 2001;930:425–428. doi: 10.1111/j.1749-6632.2001.tb05760.x. [DOI] [PubMed] [Google Scholar]
  5. Bangert M, Peschel T, Schlaug G, Rotte M, Drescher D, Hinrichs H, Heinze HJ, Altenmuller E. Shared networks for auditory and motor processing in professional pianists: evidence from fMRI conjunction. NeuroImage. 2006a;30:917–926. doi: 10.1016/j.neuroimage.2005.10.044. [DOI] [PubMed] [Google Scholar]
  6. Bangert M, Jurgens U, Hausler U, Altenmuller E. Classical conditioned responses to absent tones. BMC Neurosci. 2006b;7:60. doi: 10.1186/1471-2202-7-60. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Baumann S, Koeneke S, Meyer M, Lutz K, Jancke L. A network for sensory-motor integration: what happens in the auditory cortex during piano playing without acoustic feedback? Ann NY Acad Sci. 2005;1060:186–188. doi: 10.1196/annals.1360.038. [DOI] [PubMed] [Google Scholar]
  8. Binkofski F, Buccino G. Motor functions of the Broca's region. Brain Lang. 2004;89:362–369. doi: 10.1016/S0093-934X(03)00358-4. [DOI] [PubMed] [Google Scholar]
  9. Binkofski F, Buccino G. The role of ventral premotor cortex in action execution and action understanding. J Physiol (Paris) 2006;99:396–405. doi: 10.1016/j.jphysparis.2006.03.005. [DOI] [PubMed] [Google Scholar]
  10. Buccino G, Binkofski F, Fink GR, Fadiga L, Fogassi L, Gallese V, Seitz RJ, Zilles K, Rizzolatti G, Freund HJ. Action observation activates premotor and parietal areas in a somatotopic manner: an fMRI study. Eur J Neurosci. 2001;13:400–404. [PubMed] [Google Scholar]
  11. Buccino G, Vogt S, Ritzl A, Fink GR, Zilles K, Freund HJ, Rizzolatti G. Neural circuits underlying imitation learning of hand actions: an event-related fMRI study. Neuron. 2004a;42:323–334. doi: 10.1016/s0896-6273(04)00181-3. [DOI] [PubMed] [Google Scholar]
  12. Buccino G, Lui F, Canessa N, Patteri I, Lagravinese G, Benuzzi F, Porro CA, Rizzolatti G. Neural circuits involved in the recognition of actions performed by nonconspecifics: an fMRI study. J Cogn Neurosci. 2004b;16:114–126. doi: 10.1162/089892904322755601. [DOI] [PubMed] [Google Scholar]
  13. Buccino G, Riggio L, Melli G, Binkofski F, Gallese V, Rizzolatti G. Listening to action-related sentences modulates the activity of the motor system: a combined TMS and behavioral study. Brain Res Cogn Brain Res. 2005;24:355–363. doi: 10.1016/j.cogbrainres.2005.02.020. [DOI] [PubMed] [Google Scholar]
  14. Calvo-Merino B, Glaser DE, Grezes J, Passingham RE, Haggard P. Action observation and acquired motor skills: an fMRI study with expert dancers. Cereb Cortex. 2005;15:1243–1249. doi: 10.1093/cercor/bhi007. [DOI] [PubMed] [Google Scholar]
  15. Cisek P, Kalaska JF. Neural correlates of mental rehearsal in dorsal premotor cortex. Nature. 2004;431:993–996. doi: 10.1038/nature03005. [DOI] [PubMed] [Google Scholar]
  16. D'Ausilio A, Altenmuller E, Olivetti Belardinelli M, Lotze M. Cross-modal plasticity of the motor cortex while listening to a rehearsed musical piece. Eur J Neurosci. 2006;24:955–958. doi: 10.1111/j.1460-9568.2006.04960.x. [DOI] [PubMed] [Google Scholar]
  17. Decety J, Grezes J. Neural mechanisms subserving the perception of human actions. Trends Cogn Sci. 1999;3:172–178. doi: 10.1016/s1364-6613(99)01312-1. [DOI] [PubMed] [Google Scholar]
  18. Fadiga L, Craighero L, Buccino G, Rizzolatti G. Speech listening specifically modulates the excitability of tongue muscles: a TMS study. Eur J Neurosci. 2002;15:399–402. doi: 10.1046/j.0953-816x.2001.01874.x. [DOI] [PubMed] [Google Scholar]
  19. Gallese V, Fadiga L, Fogassi L, Rizzolatti G. Action recognition in the premotor cortex. Brain. 1996;119:593–609. doi: 10.1093/brain/119.2.593. [DOI] [PubMed] [Google Scholar]
  20. Gangitano M, Mottaghy FM, Pascual-Leone A. Modulation of premotor mirror neuron activity during observation of unpredictable grasping movements. Eur J Neurosci. 2004;20:2193–2202. doi: 10.1111/j.1460-9568.2004.03655.x. [DOI] [PubMed] [Google Scholar]
  21. Grafton ST, Arbib MA, Fadiga L, Rizzolatti G. Localization of grasp representations in humans by positron emission tomography. 2. Observation compared with imagination. Exp Brain Res. 1996;112:103–111. doi: 10.1007/BF00227183. [DOI] [PubMed] [Google Scholar]
  22. Grafton ST, Hazeltine E, Ivry RB. Motor sequence learning with the nondominant left hand. A PET functional imaging study. Exp Brain Res. 2002;146:369–378. doi: 10.1007/s00221-002-1181-y. [DOI] [PubMed] [Google Scholar]
  23. Grezes J, Decety J. Functional anatomy of execution, mental simulation, observation, and verb generation of actions: a meta-analysis. Hum Brain Mapp. 2001;12:1–19. doi: 10.1002/1097-0193(200101)12:1&lt;1::AID-HBM10&gt;3.0.CO;2-V. [DOI] [PMC free article] [PubMed] [Google Scholar]
  24. Grezes J, Armony JL, Rowe J, Passingham RE. Activations related to “mirror” and “canonical” neurones in the human brain: an fMRI study. NeuroImage. 2003;18:928–937. doi: 10.1016/s1053-8119(03)00042-9. [DOI] [PubMed] [Google Scholar]
  25. Hamilton A, Wolpert D, Frith U. Your own action influences how you perceive another person's action. Curr Biol. 2004;14:493–498. doi: 10.1016/j.cub.2004.03.007. [DOI] [PubMed] [Google Scholar]
  26. Hamzei F, Rijntjes M, Dettmers C, Glauche V, Weiller C, Buchel C. The human action recognition system and its relationship to Broca's area: an fMRI study. NeuroImage. 2003;19:637–644. doi: 10.1016/s1053-8119(03)00087-9. [DOI] [PubMed] [Google Scholar]
  27. Hanakawa T, Parikh S, Bruno MK, Hallett M. Finger and face representations in the ipsilateral precentral motor areas in humans. J Neurophysiol. 2005;93:2950–2958. doi: 10.1152/jn.00784.2004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Haslinger B, Erhard P, Altenmuller E, Schroeder U, Boecker H, Ceballos-Baumann AO. Transmodal sensorimotor networks during action observation in professional pianists. J Cogn Neurosci. 2005;17:282–293. doi: 10.1162/0898929053124893. [DOI] [PubMed] [Google Scholar]
  29. Hauk O, Shtyrov Y, Pulvermuller F. The sound of actions as reflected by mismatch negativity: rapid activation of cortical sensory-motor networks by sounds associated with finger and tongue movements. Eur J Neurosci. 2006;23:811–821. doi: 10.1111/j.1460-9568.2006.04586.x. [DOI] [PubMed] [Google Scholar]
  30. Heiser M, Iacoboni M, Maeda F, Marcus J, Mazziotta JC. The essential role of Broca's area in imitation. Eur J Neurosci. 2003;17:1123–1128. doi: 10.1046/j.1460-9568.2003.02530.x. [DOI] [PubMed] [Google Scholar]
  31. Iacoboni M. Neural mechanisms of imitation. Curr Opin Neurobiol. 2005;15:632–637. doi: 10.1016/j.conb.2005.10.010. [DOI] [PubMed] [Google Scholar]
  32. Iacoboni M, Woods RP, Brass M, Bekkering H, Mazziotta JC, Rizzolatti G. Cortical mechanisms of human imitation. Science. 1999;286:2526–2528. doi: 10.1126/science.286.5449.2526. [DOI] [PubMed] [Google Scholar]
  33. Iacoboni M, Molnar-Szakacs I, Gallese V, Buccino G, Mazziotta JC, Rizzolatti G. Grasping the intentions of others with one's own mirror neuron system. PLoS Biol. 2005;3:e79. doi: 10.1371/journal.pbio.0030079. [DOI] [PMC free article] [PubMed] [Google Scholar]
  34. Janata P, Grafton ST. Swinging in the brain: shared neural substrates for behaviors related to sequencing and music. Nat Neurosci. 2003;6:682–687. doi: 10.1038/nn1081. [DOI] [PubMed] [Google Scholar]
  35. Keysers C, Kohler E, Umilta MA, Nanetti L, Fogassi L, Gallese V. Audiovisual mirror neurons and action recognition. Exp Brain Res. 2003;153:628–636. doi: 10.1007/s00221-003-1603-5. [DOI] [PubMed] [Google Scholar]
  36. Kilner JM, Paulignan Y, Blakemore SJ. An interference effect of observed biological movement on action. Curr Biol. 2003;13:522–525. doi: 10.1016/s0960-9822(03)00165-9. [DOI] [PubMed] [Google Scholar]
  37. Kilner JM, Vargas C, Duval S, Blakemore SJ, Sirigu A. Motor activation prior to observation of a predicted movement. Nat Neurosci. 2004;7:1299–1301. doi: 10.1038/nn1355. [DOI] [PubMed] [Google Scholar]
  38. Kim SG, Ashe J, Hendrich K, Ellermann JM, Merkle H, Ugurbil K, Georgopoulos AP. Functional magnetic resonance imaging of motor cortex: hemispheric asymmetry and handedness. Science. 1993;261:615–617. doi: 10.1126/science.8342027. [DOI] [PubMed] [Google Scholar]
  39. Kohler E, Keysers C, Umilta MA, Fogassi L, Gallese V, Rizzolatti G. Hearing sounds, understanding actions: action representation in mirror neurons. Science. 2002;297:846–848. doi: 10.1126/science.1070311. [DOI] [PubMed] [Google Scholar]
  40. Krams M, Rushworth MF, Deiber MP, Frackowiak RS, Passingham RE. The preparation, execution and suppression of copied movements in the human brain. Exp Brain Res. 1998;120:386–398. doi: 10.1007/s002210050412. [DOI] [PubMed] [Google Scholar]
  41. Lahav A, Boulanger A, Schlaug G, Saltzman E. The power of listening: auditory-motor interactions in musical training. Ann NY Acad Sci. 2005;1060:189–194. doi: 10.1196/annals.1360.042. [DOI] [PubMed] [Google Scholar]
  42. Langheim FJ, Callicott JH, Mattay VS, Duyn JH, Weinberger DR. Cortical systems associated with covert music rehearsal. NeuroImage. 2002;16:901–908. doi: 10.1006/nimg.2002.1144. [DOI] [PubMed] [Google Scholar]
  43. Lotze M, Scheler G, Tan HR, Braun C, Birbaumer N. The musician's brain: functional imaging of amateurs and professionals during performance and imagery. NeuroImage. 2003;20:1817–1829. doi: 10.1016/j.neuroimage.2003.07.018. [DOI] [PubMed] [Google Scholar]
  44. Lotze M, Heymans U, Birbaumer N, Veit R, Erb M, Flor H, Halsband U. Differential cerebral activation during observation of expressive gestures and motor acts. Neuropsychologia. 2006;44:1787–1795. doi: 10.1016/j.neuropsychologia.2006.03.016. [DOI] [PubMed] [Google Scholar]
  45. Maess B, Koelsch S, Gunter TC, Friederici AD. Musical syntax is processed in Broca's area: an MEG study. Nat Neurosci. 2001;4:540–545. doi: 10.1038/87502. [DOI] [PubMed] [Google Scholar]
  46. Molnar-Szakacs I, Iacoboni M, Koski L, Mazziotta JC. Functional segregation within pars opercularis of the inferior frontal gyrus: evidence from fMRI studies of imitation and action observation. Cereb Cortex. 2005;15:986–994. doi: 10.1093/cercor/bhh199. [DOI] [PubMed] [Google Scholar]
  47. Nelissen K, Luppino G, Vanduffel W, Rizzolatti G, Orban GA. Observing others: multiple action representation in the frontal lobe. Science. 2005;310:332–336. doi: 10.1126/science.1115593. [DOI] [PubMed] [Google Scholar]
  48. Nishitani N, Hari R. Temporal dynamics of cortical representation for action. Proc Natl Acad Sci USA. 2000;97:913–918. doi: 10.1073/pnas.97.2.913. [DOI] [PMC free article] [PubMed] [Google Scholar]
  49. Oldfield RC. The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia. 1971;9:97–113. doi: 10.1016/0028-3932(71)90067-4. [DOI] [PubMed] [Google Scholar]
  50. Ozdemir E, Norton A, Schlaug G. Shared and distinct neural correlates of singing and speaking. NeuroImage. 2006;33:628–635. doi: 10.1016/j.neuroimage.2006.07.013. [DOI] [PubMed] [Google Scholar]
  51. Patuzzo S, Fiaschi A, Manganotti P. Modulation of motor cortex excitability in the left hemisphere during action observation: a single- and paired-pulse transcranial magnetic stimulation study of self- and non-self-action observation. Neuropsychologia. 2003;41:1272–1278. doi: 10.1016/s0028-3932(02)00293-2. [DOI] [PubMed] [Google Scholar]
  52. Pizzamiglio L, Aprile T, Spitoni G, Pitzalis S, Bates E, D'Amico S, Di Russo F. Separate neural systems for processing action- or non-action-related sounds. NeuroImage. 2005;24:852–861. doi: 10.1016/j.neuroimage.2004.09.025. [DOI] [PubMed] [Google Scholar]
  53. Platel H, Price C, Baron JC, Wise R, Lambert J, Frackowiak RS, Lechevalier B, Eustache F. The structural components of music perception. A functional anatomical study. Brain. 1997;120:229–243. doi: 10.1093/brain/120.2.229. [DOI] [PubMed] [Google Scholar]
  54. Porro CA, Francescato MP, Cettolo V, Diamond ME, Baraldi P, Zuiani C, Bazzocchi M, di Prampero PE. Primary motor and sensory cortex activation during motor performance and motor imagery: a functional magnetic resonance imaging study. J Neurosci. 1996;16:7688–7698. doi: 10.1523/JNEUROSCI.16-23-07688.1996. [DOI] [PMC free article] [PubMed] [Google Scholar]
  55. Pulvermuller F. Brain reflections of words and their meaning. Trends Cogn Sci. 2001;5:517–524. doi: 10.1016/s1364-6613(00)01803-9. [DOI] [PubMed] [Google Scholar]
  56. Rijntjes M, Dettmers C, Buchel C, Kiebel S, Frackowiak RS, Weiller C. A blueprint for movement: functional and anatomical representations in the human motor system. J Neurosci. 1999;19:8043–8048. doi: 10.1523/JNEUROSCI.19-18-08043.1999. [DOI] [PMC free article] [PubMed] [Google Scholar]
  57. Rizzolatti G, Arbib MA. Language within our grasp. Trends Neurosci. 1998;21:188–194. doi: 10.1016/s0166-2236(98)01260-0. [DOI] [PubMed] [Google Scholar]
  58. Rizzolatti G, Craighero L. The mirror-neuron system. Annu Rev Neurosci. 2004;27:169–192. doi: 10.1146/annurev.neuro.27.070203.144230. [DOI] [PubMed] [Google Scholar]
  59. Rizzolatti G, Fadiga L, Gallese V, Fogassi L. Premotor cortex and the recognition of motor actions. Brain Res Cogn Brain Res. 1996;3:131–141. doi: 10.1016/0926-6410(95)00038-0. [DOI] [PubMed] [Google Scholar]
  60. Schon D, Gordon RL, Besson M. Musical and linguistic processing in song perception. Ann NY Acad Sci. 2005;1060:71–81. doi: 10.1196/annals.1360.006. [DOI] [PubMed] [Google Scholar]
  61. Schubotz RI, von Cramon DY. Sequences of abstract nonbiological stimuli share ventral premotor cortex with action observation and imagery. J Neurosci. 2004;24:5467–5474. doi: 10.1523/JNEUROSCI.1169-04.2004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  62. Stewart L, Henson R, Kampe K, Walsh V, Turner R, Frith U. Brain changes after learning to read and play music. NeuroImage. 2003;20:71–83. doi: 10.1016/s1053-8119(03)00248-9. [DOI] [PubMed] [Google Scholar]
  63. Theoret H, Pascual-Leone A. Language acquisition: do as you hear. Curr Biol. 2002;12:R736–R737. doi: 10.1016/s0960-9822(02)01251-4. [DOI] [PubMed] [Google Scholar]
  64. Thierry G, Giraud AL, Price C. Hemispheric dissociation in access to the human semantic system. Neuron. 2003;38:499–506. doi: 10.1016/s0896-6273(03)00199-5. [DOI] [PubMed] [Google Scholar]
  65. Thornton IM, Knoblich G. Action perception: seeing the world through a moving body. Curr Biol. 2006;16:R27–R29. doi: 10.1016/j.cub.2005.12.006. [DOI] [PubMed] [Google Scholar]
  66. Vangheluwe S, Suy E, Wenderoth N, Swinnen SP. Learning and transfer of bimanual multifrequency patterns: effector-independent and effector-specific levels of movement representation. Exp Brain Res. 2005:1–12. doi: 10.1007/s00221-005-0238-0. [DOI] [PubMed] [Google Scholar]
  67. Wilson SM, Saygin AP, Sereno MI, Iacoboni M. Listening to speech activates motor areas involved in speech production. Nat Neurosci. 2004;7:701–702. doi: 10.1038/nn1263. [DOI] [PubMed] [Google Scholar]
  68. Zatorre RJ, Evans AC, Meyer E, Gjedde A. Lateralization of phonetic and pitch discrimination in speech processing. Science. 1992;256:846–849. doi: 10.1126/science.1589767. [DOI] [PubMed] [Google Scholar]
  69. Zatorre RJ, Belin P, Penhune VB. Structure and function of auditory cortex: music and speech. Trends Cogn Sci. 2002;6:37–46. doi: 10.1016/s1364-6613(00)01816-7. [DOI] [PubMed] [Google Scholar]