Abstract
Research has established that there is a cognitive link between perception and production of the same movement. However, there has been relatively little research into the relevance of this link for non-expert perceivers, such as music listeners who do not play instruments themselves. In two experiments we tested whether participants can quickly learn new associations between sounds and observed movement without performing those movements themselves. We measured motor evoked potentials (MEPs) in the first dorsal interosseous muscle of participants’ right hands while they heard test tones and single transcranial magnetic stimulation (TMS) pulses were delivered to trigger motor activity. In Experiment 1, participants in a ‘human’ condition (n=4) learnt to associate the test tone with finger movement of the experimenter, while participants in a ‘computer’ condition (n=4) learnt that the test tone was triggered by a computer. Participants in the human condition showed a larger increase in MEPs than those in the computer condition. In a second experiment, pairing between sounds and movement occurred without participants repeatedly observing the movement, and we found no such difference between the human (n=4) and computer (n=4) conditions. These results suggest that observers can quickly learn to associate sound with movement, so it should not be necessary to have played an instrument to experience some motor resonance when hearing that instrument.
Keywords: TMS, perception, action, timing, sound
Within psychology there has been a longstanding interest in the relationship between perception and performance of movement and the possibility that these share common cognitive roots (e.g. James, 1890). More recently, research with non-human primates demonstrated that ‘mirror neurons’ are active both during perception and performance of the same actions, providing supporting evidence for this theory (Gallese & Goldman, 1998; Rizzolatti, Fadiga, Gallese, & Fogassi, 1996). Related evidence from humans has shown that the perception of actions leads to some increase in activity in regions of the brain involved in making those movements oneself, which can be described as ‘motor resonance’ (see Rizzolatti, 2005). However, there is a relative paucity of evidence relating to auditory aspects of the perception-action link (i.e. when we hear the sounds of human movement rather than seeing movement), likely due to a bias towards research in the visual domain.
Auditory research has often focussed on well-established associations between sound and movement, showing for example that the perception of words that relate to limbs can lead to activity in regions of the brain involved in movement of those limbs (Galati et al., 2008; Hauk, Johnsrude, & Pulvermüller, 2004; Tettamanti et al., 2005) and that the sounds of relevant actions alone can evoke this motor resonance (Aziz-Zadeh, Iacoboni, Zaidel, Wilson, & Mazziotta, 2004; Gazzola, Aziz-Zadeh, & Keysers, 2006). Motor responses to sound are more pronounced if the sound has an established association with movement, as occurs with musical training (Münte, Altenmüller, & Jäncke, 2002), and are absent in people with apraxias specific to the actions they are hearing (Pazzaglia, Pizzamiglio, Pes, & Aglioti, 2008), suggesting that when we have the capacity to perform an action this becomes a part of perception (Maes, Leman, Palmer, & Wanderley, 2014).
There has been some contention about the acquisition of action-perception associations, however (Heyes, 2009). Hebbian learning, which suggests that any neurons that fire together can wire together, regardless of specific predictive value, has the potential to explain how movements could become associated with visual perception of those movements (Keysers & Perrett, 2004). However, empirical research using novel associations between produced and perceived movement suggests that contingency learning leads to better action-perception links (Catmur, Walsh, & Heyes, 2007; Cook, Press, Dickinson, & Heyes, 2010). With regard to music, it is clear that in any form of learning musicians will repeatedly associate their own movement with the perceived sound of their instruments, while this is not the case for non-musicians, and these associations are also likely to be subject to gradual learning processes (Novembre & Keller, 2014). As such, musicians have been shown to exhibit more motor resonance for their instruments than non-musicians (e.g. Bangert et al., 2006; Buccino et al., 2004; Haueisen & Knösche, 2001), but importantly motor resonance can be acquired by non-musicians through learning an instrument (Lahav, Saltzman, & Schlaug, 2007), and associations can take as little as 20 minutes to acquire (Bangert & Altenmüller, 2003; D’Ausilio, Altenmüller, Olivetti Belardinelli, & Lotze, 2006).
The relationship between action and perception has been used as a potential explanation for the empathy that is experienced when engaging with music (e.g. Molnar-Szakacs & Overy, 2006; Overy & Molnar-Szakacs, 2009). When we listen to music created by another person we might mirror their motions and to some extent therefore empathise with their experience, leading to the pleasurable and emotional experiences that people have. This is supported by evidence showing that people with higher trait empathy do have greater motor resonance in musical situations (Novembre, Ticini, Schütz-Bosbach, & Keller, 2012, 2014). However, action-perception research to date does not directly relate to the experience of non-musicians, who can generally enjoy music without necessarily having knowledge of how it is performed. An important gap in our knowledge concerns whether novices (here defined as people who do not have experience of playing the instrument they are listening to) are likely to experience motor resonance for musical sounds, given that they have not directly learnt associations with the movements that make those sounds.
Another underexplored aspect of the relationship between action and perception is the temporal specificity of motor resonance. If motor resonance acts as a part of the perception process then we would expect it to be tightly locked to the time at which stimuli are presented, yet motor regions of the brain appear to be active throughout perception of musical sound, in response to rhythm in general rather than locked to specific tones (Zatorre, Chen, & Penhune, 2007). Experiments have often presented stimuli for some considerable period of time, meaning temporal specificity was not investigated (e.g. Aziz-Zadeh et al., 2004; Ticini, Schütz-Bosbach, Weiss, Casile, & Waszak, 2012), but recent investigations using more musical sounds, involving predictable rhythmic beats, have demonstrated that motor resonance is more pronounced at the time of beats than between them (Cameron, Stewart, Pearce, Grube, & Muggleton, 2012; Fujioka, Trainor, Large, & Ross, 2012; Stupacher, Hove, Novembre, Schütz-Bosbach, & Keller, 2013). If people learn to anticipate the time of predictable sounds, we would expect motor resonance to occur selectively shortly before those sounds.
Over the current set of experiments we test two main hypotheses:
1. Participants can quickly learn associations between observed movement and sound, resulting in greater motor resonance when subsequently hearing those sounds.
2. Motor resonance for sound is temporally specific (i.e. occurs only at the time that the sound is perceived).
In Experiment 2 we additionally test whether people need to observe pairings between sound and movement, or whether believing that a sound is being created by movement is sufficient to lead to motor resonance.
General Methods
Equipment and stimuli
A Magstim Rapid2 with a figure-of-8 coil was used for transcranial magnetic stimulation. Pulses were triggered by a Dell PC running DMDX software version 4.0.4.4, which also played auditory tones to participants via an Edirol UA-25X soundcard and Philips SHS4700 ear clip headphones during the TMS/MEP testing phase. A Northern Digital Incorporated Polaris Spectra neuronavigation system was used to track head and coil movements so that the experimenter could hold the coil in place throughout the experiment, with Advanced Source Analysis software version 4.7.41 (visor) running on a separate Dell Optiplex GX745.
Muscle twitches were recorded using surface electromyography with adhesive Ag/AgCl ECG conductive electrodes and a dry earth strap, connected to an ADInstruments Dual BioAmp and ADInstruments PowerLab 16/30 recording system, via ADInstruments Chart software version 5.5.6 and Scope software version 3.9.2.
During the learning phase of the experiment, auditory tones were played to participants using a MacBook Pro running MAX/MSP v. 5.0.8 with an Edirol UA-25 soundcard and Philips SHS4700 ear clip headphones. The tones used were a MIDI woodblock sound and a cowbell sound, corresponding to MIDI notes 31 and 67 on MAX/MSP’s default MIDI channel 10. The experimenter tapped on a Roland HandSonic HPD-10 drum pad to trigger sounds during the learning phase of the experiment.
Procedure
Participants were invited to the lab a day before any TMS testing occurred. At this first meeting they were given an information sheet about the study and two safety-screening questionnaires, which they were asked to read, fill in and return before the full study. A 15-minute interview session then followed, during which the experimenter gave each participant full information about what is involved in TMS, including the associated risks and potential side effects. Participants were then invited to attend a second, 1 hour 15 minute session during which the experiment would occur. The procedure was approved by the University of Western Sydney Ethics Committee.
During the testing session the ‘hot-spot’ for triggering first finger movement was identified by varying coil scalp position. Motor thresholds were then defined as the minimum machine power required to evoke MEPs greater than 50µV from trough to peak, 50 % of the time (5 out of 10 pulses). The experimental procedure then began, which included three phases: a TMS/MEP testing phase, a learning phase, then a repeat of the TMS/MEP testing phase (see Figure 1).
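The threshold criterion described above can be expressed as a simple check. The sketch below is illustrative only (Python, with hypothetical function and variable names), not the software used in the study:

```python
def meets_threshold(mep_amplitudes_uv, criterion_uv=50, min_hits=5):
    """Check the motor-threshold criterion used here: at least 5 of 10
    TMS pulses must evoke an MEP greater than 50 uV trough to peak."""
    hits = sum(amp > criterion_uv for amp in mep_amplitudes_uv)
    return hits >= min_hits

# hypothetical trough-to-peak amplitudes (uV) from 10 pulses at one power level;
# 6 of them exceed 50 uV, so this power level meets the criterion
sample = [62, 40, 55, 71, 30, 53, 49, 80, 20, 58]
print(meets_threshold(sample))  # True
```

In practice the machine power would be lowered step by step until this check first fails, and the motor threshold taken as the lowest power for which it still passes.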
Figure 1.
Summary of procedure.
TMS/MEP testing phase
Participants heard a series of isochronous tones (test timbre) played over headphones, each 600 ms apart, while TMS pulses were delivered at 120 % of each individual participant’s motor threshold. After every seventh tone, either a TMS pulse occurred (two thirds of the time) or no TMS pulse occurred. Half of these TMS pulses occurred ‘with’ the tone (110 ms before the tone 10 times, 120 ms before the tone 10 times and 130 ms before the tone 10 times), and the other half occurred ‘between’ two tones (360 ms after the tone 10 times, 370 ms after the tone 10 times, and 380 ms after the tone 10 times) – although note that these latter times could also be characterised as anticipating the next tone by 240 ms, 230 ms and 220 ms respectively. TMS pulses preceded the sound because we expected participants to anticipate the predictable test tone; pilot testing (described below) confirmed that 120 ms was a sufficient anticipatory period, and we used three different periods in order to introduce some jitter into the sequence. The order of the pulse timings was determined by DMDX software. Within every set of 21 tones, each of the three TMS pulse conditions (with tone, between tones, or no pulse) occurred once, in a randomly determined order (see Figure 2). This part of the experiment lasted approximately 6.3 minutes. Participants were asked to keep their hand relaxed during this phase, but we did not monitor muscle activity continuously between TMS pulses.
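As a check on the arithmetic above (90 pulse/no-pulse slots of seven tones each gives 630 tones at 600 ms, i.e. 6.3 minutes), here is a minimal Python sketch of one possible schedule generator. The function name and seeding are assumptions for illustration; the actual ordering was produced by DMDX:

```python
import random

TONE_IOI_MS = 600  # inter-onset interval between successive tones

def build_testing_phase(seed=0):
    """Build the TMS pulse schedule for one testing phase.

    Every 7th tone is a potential pulse slot; each set of 21 tones contains
    one 'with tone' pulse, one 'between tones' pulse, and one no-pulse slot,
    in random order. Offsets are relative to that slot's tone onset (ms).
    """
    rng = random.Random(seed)
    with_offsets = [-110, -120, -130] * 10   # pulse shortly before the tone
    between_offsets = [360, 370, 380] * 10   # pulse between two tones
    rng.shuffle(with_offsets)
    rng.shuffle(between_offsets)
    schedule = []
    for w, b in zip(with_offsets, between_offsets):
        slots = [('with', w), ('between', b), ('none', None)]
        rng.shuffle(slots)
        schedule.extend(slots)
    return schedule  # 90 slots of 7 tones each

schedule = build_testing_phase()
n_tones = len(schedule) * 7
duration_min = n_tones * TONE_IOI_MS / 1000 / 60
print(n_tones, duration_min)  # 630 tones, 6.3 minutes
```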
Figure 2.
Example of 21 tones in the testing phase. Each diamond indicates a test tone, arrows indicate TMS pulses, and Xs indicate target tones (not known to the participant).
Learning phase
Next, participants heard isochronous tones played over headphones while watching the experimenter generate half of them by tapping on an electronic drum pad. Two different timbres occurred in a predetermined pseudorandomised order - one was the test timbre (as heard in the preceding phase) and the other was a control timbre. Six different trials occurred, each with 50 tones. In every trial the participant was instructed to either count the tones generated by the experimenter’s tapping (three rounds), or count the tones generated by the computer (three rounds). The instruction changed in each trial, and the instruction for the first trial was counterbalanced across participants. At the end of each trial, participants reported the number they had counted back to the experimenter, and were given feedback about their performance. The experimenter made 26 taps in the first trial, then 23, 24, 27, 25 and 25 in the following trials respectively. This procedure was designed to emulate a recent single pulse TMS study showing that abstract visual stimuli could be associated with hand movement (Fecteau, Tormos, Gangitano, Thèoret, & Pascual-Leone, 2010).
In the human condition, the test timbre corresponded to sounds associated with the tapping of the experimenter (i.e. the experimenter visibly tapped on an electronic drumpad to generate the sound), while in the computer condition, the test timbre was not associated with any visible signal, and was described as being generated by the computer. Apart from this there were no differences between the ‘human’ and ‘computer’ conditions. Throughout the learning phase of the experiment, participants should learn associations between the test timbre and first finger movement of the experimenter in the human condition, or associations between the test timbre and the computer in the computer condition. This part lasted approximately 5 minutes in total, and occurred between two instances of the TMS/MEP testing phase.
Participants experienced the TMS/MEP testing phase, followed by the learning phase, followed by another round of the TMS/MEP testing phase. At the end of this, participants were fully debriefed about the experiment. The only difference between Experiment 1 and Experiment 2 was whether participants observed the experimenter making finger movements during the learning phase of the experiment.
Design
Experiment 1 and Experiment 2 both used a 2 (association condition: human/computer, between-subjects) x 2 (pulse timing condition: with tone/ between tone, within-subjects) design. The dependent variable was change in MEP amplitude, as measured using EMG (electromyography) signals recorded from the first dorsal interosseous muscle of the right hand.
Analysis
MEPs were recorded by Scope at a sample rate of 2048 Hz, with a bandpass filter of 10 Hz – 1 kHz and an amplitude range of 10 mV. No 50 Hz noise was observed, so a notch filter at 50 Hz was not used. Recordings were taken from when the pulse was triggered, for a total period of 60 ms. Each recorded trace was initially visually inspected to determine whether it contained an MEP (i.e. whether the TMS pulse had triggered a motor response); 56 % of data were included on this basis in Experiment 1, and 55 % in Experiment 2 (see footnote). Following this exclusion, the minimum value measured during the 60 ms period was subtracted from the maximum value, to give the size of the MEP from trough to peak. Values were then log-transformed to approximate normality.
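The trough-to-peak measurement and log transform can be sketched as follows (Python with NumPy, for illustration only; the study used Scope recordings, and the trace here is a toy array):

```python
import numpy as np

def mep_amplitude(trace):
    """Trough-to-peak MEP size: maximum minus minimum over the 60 ms window."""
    return float(np.max(trace) - np.min(trace))

# a 60 ms window at 2048 Hz is ~123 samples
n_samples = round(0.060 * 2048)
trace = np.zeros(n_samples)
trace[40], trace[60] = 180.0, -60.0  # toy peak and trough, in uV
amp = mep_amplitude(trace)           # 240.0 uV trough to peak
log_amp = np.log(amp)                # log-transform to approximate normality
```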
Following this transformation, each of the values collected after the learning phase was subtracted from the average for that participant before the learning phase. This was done separately for MEPs collected at each time relative to sounds (i.e. TMS pulse with sound, compared with TMS pulse between sounds). Multilevel linear modelling was used to compare the different conditions (condition: human/computer, between-subjects; timing: pulse with tone/pulse between tone, within-subjects), with a random intercept for each individual. For all models, the statistical package R (version 2) was used with the package lme4 (Bates & Sarkar, 2008) to create models, and lmerTest to test the significance of models.
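The baseline-subtraction step can be sketched minimally as below (Python, with hypothetical values; the actual modelling was done in R with lme4):

```python
import numpy as np

def change_scores(pre_log_amps, post_log_amps):
    """Subtract a participant's mean pre-learning log MEP amplitude from each
    post-learning value, done separately per pulse-timing condition."""
    return np.asarray(post_log_amps) - np.mean(pre_log_amps)

# hypothetical log amplitudes for one participant, 'with tone' pulses only
pre = np.log([100.0, 120.0, 110.0])   # before the learning phase
post = np.log([150.0, 160.0, 140.0])  # after the learning phase
delta = change_scores(pre, post)      # positive values = MEP increase
# these per-trial change scores are the DV for the multilevel model, e.g.
# in lme4 syntax: lmer(delta ~ condition * timing + (1 | participant))
print(delta.mean() > 0)  # True
```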
Pilot Study
In a pilot study one participant (aged 28, F) was exposed to the learning phase of the experiment, and subsequently experienced a version of the testing phase in which TMS pulses were triggered at different timepoints in relation to the tones (90 ms before tones, 120 ms before tones or 370 ms after tones). MEPs were then averaged over the 30 repeats of each of these timepoints, and values compared to see which timepoints might best elicit large MEPs relating to the tones. TMS pulses occurring 120 ms before the tones were found to have the greatest response. Given likely individual differences in when anticipation of tones might occur, some jitter was introduced into the temporal sequence during subsequent experiments.
Experiment 1
Experiment 1 was designed to test the basic hypothesis that people can learn to associate sound with their own movement after repeated pairing between the sound and observed movement.
Participants
Eight undergraduate psychology students from the University of Western Sydney were tested: 4 in the human condition (2 male, age M = 19 years, SD = 1) and 4 in the computer condition (1 male, age M = 23 years, SD = 7). All participants reported being right-handed, and reported having normal hearing.
Results
Participants’ mean motor threshold was 66 % of the maximum stimulator output, SD = 9 (human condition: M = 61 %, SD = 9.0; computer condition: M = 72 %, SD = 5.6; these did not differ significantly, t(5) = 2.0, p = 0.1). Accuracy when asked to count the number of taps made by the experimenter or computer during the learning phase was assessed using the average (over six trials) absolute difference between the correct number and the participant’s answer. The group mean of this accuracy score was 1.58 (SD = 0.51). Multilevel linear modelling comparing change in MEP size across condition (human/computer) and time of pulse (either between or just before the tone) revealed a fixed effect of condition, b = 0.28, se = 0.061, t(4) = 4.6, p = .011, but no main effect of pulse timing (b = 0.01, se = 0.038, t(300) = 0.30, p = 0.76) and no significant interaction between the two (b = -0.07, se = 0.038, t(300) = 1.86, p = 0.06). The main effect of condition indicates a significantly larger increase in MEP size in the human condition following the learning phase (see Table 1 and Figure 3).
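The accuracy score can be illustrated with a short sketch (Python). The correct tap counts come from the Methods; the participant answers here are hypothetical, for illustration only:

```python
# taps actually made by the experimenter in the six trials (from the Methods)
correct = [26, 23, 24, 27, 25, 25]
# hypothetical participant answers, for illustration only
answers = [25, 24, 24, 29, 25, 24]

# accuracy score = mean absolute difference between answer and true count
accuracy = sum(abs(c - a) for c, a in zip(correct, answers)) / len(correct)
print(accuracy)  # 0.8333... (lower = more accurate counting)
```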
Table 1.
Accuracy scores in each condition
| | Human Condition, M (SD) | Computer Condition, M (SD) |
|---|---|---|
| Experiment 1 | 1.71 (0.70) | 1.46 (0.28) |
| Experiment 2 | 1.83 (1.11) | 1.83 (0.56) |
Figure 3.
Mean MEP trough to peak amplitudes for each participant before and after learning phase in Experiment 1 and Experiment 2.
Summary
In Experiment 1, results demonstrated the hypothesised increase in MEP size while listening to sounds associated with movement in the human condition. No significant differences were observed between the different TMS pulse timepoints (i.e. pulse occurring with the tone, or occurring between tones).
Experiment 2
Experiment 2 was designed to test whether participants could learn associations between sound and movement without watching repeated pairing of the two. This should assess whether knowing that a sound is caused by a certain movement is sufficient to make people associate sound with that movement (without repeated visual confirmation of that association). Here, participants were exposed to just one visual pairing between movement and sounds, but were similarly asked to count the number of sounds made by the experimenter or the computer in subsequent rounds of the learning phase. If their belief that sounds are human generated is sufficient to induce motor resonance then this learning phase should have similar effects to that in Experiment 1.
Participants
Eight undergraduate psychology students from the University of Western Sydney were tested: 4 in the human condition (1 male, age M = 24 years, SD = 11) and 4 in the computer condition (0 male, age M = 23 years, SD = 9). All participants reported being right-handed, and having normal hearing, and none of the participants had been tested during Experiment 1.
Procedure
The procedure was similar to Experiment 1, but during the learning phase participants were told to face away from the experimenter and close their eyes. In this way, they would not be able to visually associate sounds with the movement of the experimenter. Before the learning phase, the experimenter tapped the drumpad once, triggering the test tone, indicating that this would be the general mechanism by which the sound would be triggered, so participants would be aware of this association. They were then required to count the tones based solely on this timbre, and might therefore imagine the movement associated with it.
Results
Participants’ mean motor threshold was 62 % of the maximum stimulator output, SD = 5 (human condition: M = 64 %, SD = 2.2; computer condition: M = 59 %, SD = 6.2; these are not significantly different from one another, t(4) = 1.6, p = 0.19). Accuracy scores in the learning phase of the experiment are given in Table 1. Multilevel linear modelling demonstrated no main effects of condition (b = -0.087, se = 0.097, t(6) = 0.90, p = 0.40) or timepoint (b = -0.024, se = 0.034, t(258) = 0.70, p = 0.49), and no interaction between the two (b = 0.044, se = 0.034, t(258) = 1.28, p = 0.20, see Table 2 and Figure 3).
Table 2.
Summary statistics for change in log-transformed MEP size in each of the four conditions in both experiments.
| | Human: with tones, M (SD) | Human: between tones, M (SD) | Computer: with tones, M (SD) | Computer: between tones, M (SD) |
|---|---|---|---|---|
| Experiment 1 | 0.60 (0.77) | 0.72 (0.78) | 0.14 (0.50) | -0.02 (0.55) |
| Experiment 2 | 0.14 (0.42) | 0.10 (0.38) | 0.35 (0.67) | 0.50 (0.74) |
Summary
In Experiment 2, results did not demonstrate any pairing of sound with movement: the change in MEP size was similar in the condition in which participants were taught to associate sound with human movement and the condition in which they were taught to associate sound with a computer. The results suggest that visual pairing might be required to learn associations between sound and movement in this paradigm. We also found no support for the hypothesis that motor resonance is temporally locked to the time of sound, but this is unsurprising given that motor associations were not learnt.
General Discussion
The results of Experiment 1 suggest that there is an increase in motor resonance after learning to associate observed movement with sound. Finding that sounds with newly learnt associations with observed movement can lead to increased motor resonance is a significant and original finding, and can be compared to a recent study (Ticini et al., 2012) which had a similar result when participants learnt to associate their own hand movement with sound. The major difference in the current study is that the result occurs when participants learn to make associations with observed movement, as might occur when watching a musician play. Learning associations with one’s own motor system after observing that movement can be explained by the theory that the perceived movements of other people are processed using motor regions of the brain (Rizzolatti & Craighero, 2004). When participants see the experimenter move at the same time as hearing sounds generated by that movement, they process information about the movement using motor regions of the brain, and this processing becomes paired with the sounds occurring at the same time. This means that after watching other people create sounds (e.g. during musical performance) we can experience motor engagement when listening to those sounds.
With regards to musical sounds, our findings suggest that people with no experience of playing an instrument can develop motor resonance associated with the sounds of that instrument. While the current result suggests that some visual pairing between action and sound is required to lead to changes in motor resonance, a short period of such pairing appeared to have substantial effects. It is therefore possible that with limited experience of observing a performer on a musical instrument, people may develop some motor resonance for that instrument, and potentially experience some empathy and emotional investment in that sound (Overy & Molnar-Szakacs, 2009). Here we do not measure from multiple effectors so we cannot determine the level of specificity of this motor resonance, and further research would be required to confirm whether this effect is related to the particular movements that were observed.
Experiment 2 was designed to test whether associations could develop even in the absence of repeated visual pairing of movement with sound (an ‘imagined sound’ version of Experiment 1). Effectively this should demonstrate whether people’s belief that a sound is triggered by a human movement, rather than repeated observation of the movement and sound co-occurring, is sufficient to lead to motor resonance for that sound, and we did not find this to be the case. In this experiment, participants did not observe movements being paired with sound, but they had been informed that the specified sound was triggered by movements during the learning phase. This suggests that without having visual pairing of sound and movement people do not learn associations between the two. However, an alternative explanation for the null result is that the amount of pairing between sound and movement required for learning is different without visual observation. The current number of pairings was taken from a comparable visual association study (Fecteau et al., 2010). It is possible that when associations are imagined they require a greater number of pairings in order to be learnt, or even just that a greater number of initial demonstrations of the finger movement might be required in order for participants to start imagining the movement in time with sound. A further experiment could also involve asking participants to actively imagine finger movement whilst counting tones in the learning phase, as this might be sufficient to encourage associations to be learnt.
Taking the results of Experiment 1 and Experiment 2 together, we provide support for the hypothesis that associations between sound and movement must be observed for them to be learnt and to lead to motor resonance. Thus in Experiment 2, without repeated visible pairing between movements and sound, we did not find any increase in motor resonance for sounds among participants who believed those sounds were generated by human movement. Associative learning has been put forward as an explanation for all effects that could be attributed to mirror neurons in humans, and generally there is good experimental evidence to support this theory (Heyes, 2009; Petroni, Baguear, & Della-Maggiore, 2010), with which the current experiment concurs.
A limitation of the current experiment is that we did not take into account the musical experience of participants, which might lead to some individual differences in performance on the tasks. None of the participants reported having extensive musical training, but it is possible that experience of listening to music and attending live music events might affect the current results, as has been shown with observers of dance (Jola, Abedian-Amiri, Kuppuswamy, Pollick, & Grosbras, 2012). The current movement-sound associations were almost certainly novel for participants, though, which should minimise this effect. The very small number of participants should also be taken into account when interpreting the current results, although Figure 3 demonstrates that the changes were identified similarly in most participants; and although we have not used ANOVAs or compared the two experiments directly, due to low statistical power and the associated risk of Type I and Type II errors, there is a clear pattern of increased MEPs in the human condition of Experiment 1 only. In addition, one participant in Experiment 2 demonstrated MEP responses approximately 10 ms later than the other participants, suggesting some problem with the recording equipment in this case.
We did not find evidence for the temporal specificity for motor resonance in the current experiments. TMS pulses that were coordinated with sounds demonstrated the same increase in motor resonance as those pulses which occurred at a time unrelated to the sounds. Although the timings of TMS pulses in the current experiment were based on pilot testing, it is possible that they were not optimal for testing temporal specificity of motor resonance. The pulses occurring ‘between’ and ‘with’ sounds in the current studies were actually very close to one another (the smallest difference between these being just 90 ms), so it is feasible that this led to the null result regarding temporal specificity. Further investigation into the temporal specificity of motor resonance could use a variety of different TMS pulse timings (e.g. every 50 ms between two sounds) in order to determine whether there is some fine-grained temporal specificity that was not identified in the current study. It would also be possible to use isochronous stimuli that occurred with a larger interval (e.g. 1000 ms), allowing greater space between sounds for the TMS pulses.
We did, however, find a near-significant interaction between condition and timing of the TMS pulse in Experiment 1. This was suggestive of a smaller increase in MEP size in the human condition when the TMS pulse occurred with the tones compared with when it occurred between the tones (see Table 2). Given that we had no specific predictions about this kind of interaction, and the effect was not quite significant, it is hard to interpret. However, as the values indicate change in motor resonance, it is possible that the interaction arose primarily because motor resonance was already higher at the time of the tone before the learning phase, and therefore increased relatively less than when the pulse occurred between tones.
In the current set of experiments we demonstrate that it is possible to learn associations between sound and movement when observing movement, without making movement oneself. Experiment 2 suggests that these associations were not made in the same way when there was not repeated visual pairing of sound with movement. These findings have implications both for the way that we understand how associations between perception and action develop, and also for our understanding of how people perceive sound with agency, such as musical sound.
Footnotes
Further details of excluded data are given in supplementary material, including analyses with all data included.
References
- Aziz-Zadeh L, Iacoboni M, Zaidel E, Wilson S, Mazziotta J. Left hemisphere motor facilitation in response to manual action sounds. European Journal of Neuroscience. 2004;19(9):2609–12. doi: 10.1111/j.0953-816X.2004.03348.x.
- Bangert M, Altenmüller EO. Mapping perception to action in piano practice: a longitudinal DC-EEG study. BMC Neuroscience. 2003;4(1):26. doi: 10.1186/1471-2202-4-26.
- Bangert M, Peschel T, Schlaug G, Rotte M, Drescher D, Hinrichs H, Altenmüller E. Shared networks for auditory and motor processing in professional pianists: evidence from fMRI conjunction. Neuroimage. 2006;30(3):917–926. doi: 10.1016/j.neuroimage.2005.10.044.
- Bates D, Sarkar D. lme4: Linear mixed-effects models using S4 classes. 2008. Retrieved from http://CRAN.r-project.org/package=l
- Buccino G, Vogt S, Ritzl A, Fink GR, Zilles K, Freund HJ, Rizzolatti G. Neural circuits underlying imitation learning of hand actions: an event-related fMRI study. Neuron. 2004;42(2):323–34. doi: 10.1016/S0896-6273(04)00181-3.
- Cameron DJ, Stewart L, Pearce MT, Grube M, Muggleton NG. Modulation of motor excitability by metricality of tone sequences. Psychomusicology: Music, Mind, and Brain. 2012;22(2):122–8. doi: 10.1037/a0031229.
- Catmur C, Walsh V, Heyes C. Sensorimotor learning configures the human mirror system. Current Biology. 2007;17(17):1527–31. doi: 10.1016/j.cub.2007.08.006.
- Cook R, Press C, Dickinson A, Heyes C. Acquisition of automatic imitation is sensitive to sensorimotor contingency. Journal of Experimental Psychology: Human Perception and Performance. 2010;36(4):840–52. doi: 10.1037/a0019256.
- D’Ausilio A, Altenmüller EO, Olivetti Belardinelli M, Lotze M. Cross-modal plasticity of the motor cortex while listening to a rehearsed musical piece. European Journal of Neuroscience. 2006;24(3):955–8. doi: 10.1111/j.1460-9568.2006.04960.x.
- Fecteau S, Tormos JM, Gangitano M, Thèoret H, Pascual-Leone A. Modulation of cortical motor outputs by the symbolic meaning of visual stimuli. European Journal of Neuroscience. 2010;32(1):172–7. doi: 10.1111/j.1460-9568.2010.07285.x.
- Fujioka T, Trainor LJ, Large EW, Ross B. Internalized timing of isochronous sounds is represented in neuromagnetic beta oscillations. Journal of Neuroscience. 2012;32(5):1791–802. doi: 10.1523/JNEUROSCI.4107-11.2012.
- Galati G, Committeri G, Spitoni G, Aprile T, Di Russo F, Pitzalis S, Pizzamiglio L. A selective representation of the meaning of actions in the auditory mirror system. Neuroimage. 2008;40(3):1274–86. doi: 10.1016/j.neuroimage.2007.12.044.
- Gallese V, Goldman A. Mirror neurons and the simulation theory of mind-reading. Trends in Cognitive Sciences. 1998;2(12):493–501. doi: 10.1016/S1364-6613(98)01262-5.
- Gazzola V, Aziz-Zadeh L, Keysers C. Empathy and the somatotopic auditory mirror system in humans. Current Biology. 2006;16(18):1824–9. doi: 10.1016/j.cub.2006.07.072.
- Haueisen J, Knösche TR. Involuntary motor activity in pianists evoked by music perception. Journal of Cognitive Neuroscience. 2001;13(6):786–92. doi: 10.1162/08989290152541449.
- Hauk O, Johnsrude I, Pulvermüller F. Somatotopic representation of action words in human motor and premotor cortex. Neuron. 2004;41(2):301–7. doi: 10.1016/S0896-6273(03)00838-9.
- Heyes C. Where do mirror neurons come from? Neuroscience & Biobehavioral Reviews. 2009;34(4):575–83. doi: 10.1016/j.neubiorev.2009.11.007.
- James W. The Principles of Psychology. New York: Dover Publications; 1890.
- Jola C, Abedian-Amiri A, Kuppuswamy A, Pollick FE, Grosbras MH. Motor simulation without motor expertise: Enhanced corticospinal excitability in visually experienced dance spectators. PLoS ONE. 2012;7(3). doi: 10.1371/journal.pone.0033343.
- Keysers C, Perrett DI. Demystifying social cognition: a Hebbian perspective. Trends in Cognitive Sciences. 2004;8(11):501–7. doi: 10.1016/j.tics.2004.09.005.
- Lahav A, Saltzman E, Schlaug G. Action representation of sound: audiomotor recognition network while listening to newly acquired actions. Journal of Neuroscience. 2007;27(2):308–14. doi: 10.1523/JNEUROSCI.4822-06.2007.
- Maes P-J, Leman M, Palmer C, Wanderley MM. Action-based effects on music perception. Frontiers in Psychology. 2014;4. doi: 10.3389/fpsyg.2013.01008.
- Molnar-Szakacs I, Overy K. Music and mirror neurons: from motion to “e”motion. Social Cognitive and Affective Neuroscience. 2006;1(3):235–41. doi: 10.1093/scan/nsl029.
- Münte TF, Altenmüller EO, Jäncke L. The musician’s brain as a model of neuroplasticity. Nature Reviews Neuroscience. 2002;3(6):473–8. doi: 10.1038/nrn843.
- Novembre G, Keller PE. A conceptual review on action-perception coupling in the musicians’ brain: what is it good for? Frontiers in Human Neuroscience. 2014;8:1–11. doi: 10.3389/fnhum.2014.00603.
- Novembre G, Ticini LF, Schütz-Bosbach S, Keller P. Motor simulation and the coordination of self and other in real-time joint action. Social Cognitive and Affective Neuroscience. 2014;9(8):1062–8. doi: 10.1093/scan/nst086.
- Novembre G, Ticini LF, Schütz-Bosbach S, Keller PE. Distinguishing self and other in joint action. Evidence from a musical paradigm. Cerebral Cortex. 2012;22(12):2894–903. doi: 10.1093/cercor/bhr364.
- Overy K, Molnar-Szakacs I. Being together in time: musical experience and the mirror neuron system. Music Perception. 2009;26(5):489–504. doi: 10.1525/mp.2009.26.5.489.
- Pazzaglia M, Pizzamiglio L, Pes E, Aglioti SM. The sound of actions in apraxia. Current Biology. 2008;18(22):1766–72. doi: 10.1016/j.cub.2008.09.061.
- Petroni A, Baguear F, Della-Maggiore V. Motor resonance may originate from sensorimotor experience. Journal of Neurophysiology. 2010;104(4):1867–71. doi: 10.1152/jn.00386.2010.
- Rizzolatti G. The mirror neuron system and its function in humans. Anatomy and Embryology. 2005;210(5):419–21. doi: 10.1007/s00429-005-0039-z.
- Rizzolatti G, Craighero L. The mirror-neuron system. Annual Review of Neuroscience. 2004;27:169–92. doi: 10.1146/annurev.neuro.27.070203.144230.
- Rizzolatti G, Fadiga L, Gallese V, Fogassi L. Premotor cortex and the recognition of motor actions. Cognitive Brain Research. 1996;3(2):131–41. doi: 10.1016/0926-6410(95)00038-0.
- Stupacher J, Hove MJ, Novembre G, Schütz-Bosbach S, Keller PE. Musical groove modulates motor cortex excitability: A TMS investigation. Brain and Cognition. 2013;82(2):127–36. doi: 10.1016/j.bandc.2013.03.003.
- Tettamanti M, Buccino G, Saccuman MC, Gallese V, Danna M, Scifo P, Perani D. Listening to action-related sentences activates fronto-parietal motor circuits. Journal of Cognitive Neuroscience. 2005;17(2):273–81. doi: 10.1162/0898929053124965.
- Ticini LF, Schütz-Bosbach S, Weiss C, Casile A, Waszak F. When sounds become actions: higher-order representation of newly learnt action sounds in the human motor system. Journal of Cognitive Neuroscience. 2012;24(2):464–74. doi: 10.1162/jocn_a_00134.
- Zatorre RJ, Chen JL, Penhune VB. When the brain plays music: auditory-motor interactions in music perception and production. Nature Reviews Neuroscience. 2007;8(7):547–58. doi: 10.1038/nrn2152.