Frontiers in Human Neuroscience. 2015 Sep 16;9:491. doi: 10.3389/fnhum.2015.00491

Response: A commentary on: “Neural overlap in processing music and speech”

Barbara Tillmann 1,2,*, Emmanuel Bigand 3,4
PMCID: PMC4584969  PMID: 26441591

In contrast to a more classical approach investigating the modularity of music and language processing, recent research focuses on how and to what extent music and speech processing share neural correlates. This research has implications for the use of music in education and rehabilitation, and provides further insights into the origins and evolution of music. As reviewed by Peretz et al. (2015), neuroimaging studies have contributed strongly to this debate, suggesting both neural overlap and separability. In their commentary, Kunert and Slevc (2015) point out that behavioral and electrophysiological studies can also contribute to this investigation, and they provide an overview of research using a music-language interference paradigm.

In this paradigm, musical sequences and linguistic sentences are presented simultaneously. Either material (or both at the same time) can introduce a structural violation (or a more complex structure), and behavioral and electrophysiological measures are recorded to investigate whether the violation of the structure in one material (e.g., music) influences the processing of the structure in the other material (e.g., language). For example, participants read syntactic garden-path sentences, presented segment-by-segment and time-locked to the chords of a musical sequence (Slevc et al., 2009). These chords were either musically correct and expected (respecting musical syntactic-like structures) or incorrect and unexpected (i.e., an out-of-key chord). Results revealed interference of the musical material with the processing of linguistic syntax. Some studies have compared this interference effect with the effect of musical structures on semantic structure processing. The different result patterns have been interpreted as showing that the interference is syntax-specific, that it points to more general structural integration, or that it reflects shared attention and cognitive control.
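The logic of the interference paradigm can be made concrete with a small sketch. In a 2 × 2 design crossing linguistic difficulty (garden-path vs. control sentence) with musical expectancy (expected vs. out-of-key chord), interference corresponds to the interaction term: the garden-path reading-time cost should be larger when the concurrent chord is unexpected. The condition means below are purely hypothetical numbers for illustration, not data from any of the cited studies.

```python
# Hypothetical mean reading times (ms) per condition; for illustration only.
reading_times = {
    ("control", "in_key"): 350.0,
    ("control", "out_of_key"): 360.0,
    ("garden_path", "in_key"): 420.0,
    ("garden_path", "out_of_key"): 480.0,
}

def garden_path_cost(rt, music):
    """Reading-time cost of the garden-path sentence under a given chord type."""
    return rt[("garden_path", music)] - rt[("control", music)]

def interference(rt):
    """Interaction term: extra garden-path cost induced by the out-of-key chord.

    A positive value is the interference pattern taken as evidence for shared
    processing resources between musical and linguistic structure.
    """
    return garden_path_cost(rt, "out_of_key") - garden_path_cost(rt, "in_key")

print(garden_path_cost(reading_times, "in_key"))   # cost with expected chord
print(interference(reading_times))                 # interaction (interference)
```

With these toy numbers, the garden-path cost grows from 70 ms under the expected chord to 120 ms under the out-of-key chord, yielding a 50 ms interaction. The point raised in the text is precisely that such an interaction is ambiguous when the out-of-key event also violates sensory expectations.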

Comparing music and language processing, whether using neuroimaging, behavioral or electrophysiological methods, requires careful control and matching of the experimental material. First, when investigating the processing of cognitive structures and expectancy violations, care must be taken that the introduced structure violations (or manipulations) do not create additional violations, which might provide alternative explanations. Second, the manipulations in the material of the two domains need to be comparable in terms of their complexity (also when comparing syntactic and semantic processing).

The first point is particularly crucial for musical structure manipulations: the material must be constructed to exclude explanations based on low-level processing, which might provide a more parsimonious interpretation of the data than higher-level cognitive structure processing (e.g., Bigand et al., 2014; Collins et al., 2014). In Western tonal music, sensory and cognitive structures are indeed entwined, leading psychoacoustic and cognitive approaches to provide highly correlated accounts of musical structures (e.g., Bigand et al., 1996; Leman, 2000). Psychoacoustic approaches have challenged cognitive approaches that argued for musical syntax processing: a short-term sensory memory model, operating on echoic images of periodicity pitch, can account for the musical functions of tones in tonal contexts (Leman, 2000).

This long-standing debate in music cognition research concerns not only the investigation of musical structure processing, but also the investigation of interference between musical and linguistic (syntactic, semantic) processing. This research domain should thus also question the relevance of using out-of-key violations, as these violate not only tonal structures and tonal expectations based on listeners' knowledge, but also sensory expectations based on information stored in the sensory memory buffer. These sensory violations compromise the unambiguous interpretation of interactive data patterns in terms of shared neural resources for musical and linguistic structure processing.

Eight of the ten studies listed in Kunert and Slevc (2015) used musical structure violations that introduced out-of-key notes or chords. Consequently, the question arises as to what extent the observed interference and interactive patterns are due to the sensory violations of the out-of-key events rather than to musical syntax processing. Some of the authors were aware of potential alternative influences of other “types of musical unexpectancy,” which might attract attention, and used control conditions that introduced a timbre or loudness change (Fedorenko et al., 2009; Slevc et al., 2009; Fiveash and Pammer, 2014). However, it seems difficult to match timbre or loudness changes to an out-of-key event in terms of the degree of violation. The violation of sensory expectations might be stronger for out-of-key events, and/or the out-of-key event might combine sensory and cognitive violations, leading to a stronger overall violation.

This discussion leads to the second point, namely the comparability of the structural complexity and violations across music and language materials, as well as within the language material, such as the comparability of syntactic and semantic expectancy violations when investigating their interactions with musical expectancy violations. For example, semantic violations based on correct but low-cloze-probability words might be weaker than syntactic violations based on syntactic errors (gender violations) or on syntactically complex sentences, and might thus be less subject to interference from musical violations (Hoch et al., 2011; Perruchet and Poulin-Charronnat, 2013).

Where to go from here? Research investigating neural correlates as well as interference patterns between music and language processing should take into account debates and advances in the music cognition and psycholinguistic domains: the need to disentangle musical structure violations from sensory violations (e.g., Leman, 2000; Bigand et al., 2014), as well as the need to equalize the strength of structure manipulations across linguistic dimensions (e.g., syntax and semantics; Gibson and Fedorenko, 2013) and between musical and linguistic dimensions. Using other materials, such as arithmetic, might complement the investigation of interference with musical structures, as arithmetic processing allows the degree of structural complexity to be manipulated more directly (e.g., Hoch and Tillmann, 2012). Ensuring equal strengths of manipulations across dimensions requires additional testing, including baseline conditions (without the concurrent manipulation of the other dimension), as is done in studies using Garner's interference paradigm (Garner, 1974). Even though initially developed to investigate perceptual processes, Garner's paradigm has been used to study sensory and linguistic processes (e.g., Melara and Marks, 1990) and text and melody in song (see Lidji, 2007). The field should also further study the directionality of the interference between music and language processing (most studies have investigated the effect of music on language processing; see, however, Steinbeis and Koelsch, 2008).

Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Bigand E., Delbé C., Poulin-Charronnat B., Leman M., Tillmann B. (2014). Empirical evidence for musical syntax processing? Computer simulations reveal the contribution of auditory short-term memory. Front. Syst. Neurosci. 8:94. 10.3389/fnsys.2014.00094 [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Bigand E., Parncutt R., Lerdahl F. (1996). Perception of musical tension in short chord sequences: the influence of harmonic function, sensory dissonance, horizontal motion, and musical training. Percept. Psychophys. 58, 124–141. 10.3758/bf03205482 [DOI] [PubMed] [Google Scholar]
  3. Collins T., Tillmann B., Delbé C., Barrett F. S., Janata P. (2014). From the audio signal to sensory and cognitive representations in the perception of tonal music: modeling sensory and cognitive influences on tonal expectations. Psychol. Rev. 121, 33–65. 10.1037/a0034695 [DOI] [PubMed] [Google Scholar]
  4. Fedorenko E., Patel A., Casasanto D., Winawer J., Gibson E. (2009). Structural integration in language and music: evidence for a shared system. Mem. Cognit. 37, 1–9. 10.3758/MC.37.1.1 [DOI] [PubMed] [Google Scholar]
  5. Fiveash A., Pammer K. (2014). Music and language: do they draw on similar syntactic working memory resources? Psychol. Music 42, 190–209. 10.1177/0305735612463949 [DOI] [Google Scholar]
  6. Garner W. R. (1974). The Processing of Information and Structure. Potomac, MD: Erlbaum. [Google Scholar]
  7. Gibson E., Fedorenko E. (2013). The need for quantitative methods in syntax and semantics research. Lang. Cogn. Processes 28, 88–124. 10.1080/01690965.2010.515080 [DOI] [Google Scholar]
  8. Hoch L., Poulin-Charronnat B., Tillmann B. (2011). The tonal function of a task-irrelevant chord influences language processing: syntactic versus semantic structures. Front. Psychol. 2:112. 10.3389/fpsyg.2011.00112 [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Hoch L., Tillmann B. (2012). Shared structural and temporal integration resources for music and arithmetic processing. Acta Psychol. 140, 230–235. 10.1016/j.actpsy.2012.03.008 [DOI] [PubMed] [Google Scholar]
  10. Kunert R., Slevc L. R. (2015). A Commentary on: “Neural overlap in processing music and speech.” Front. Hum. Neurosci. 9:330. 10.3389/fnhum.2015.00330 [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Leman M. (2000). An auditory model of the role of short-term memory in probe-tone ratings. Music Percept. 17, 481–509. 10.2307/40285830 [DOI] [Google Scholar]
  12. Lidji P. (2007). Intégralité et séparabilité: revue et application aux interactions entre paroles et mélodies dans le chant [Integrality and separability: review and application to the interactions between lyrics and melodies in song]. L'année Psychol. 107, 659–694. 10.4074/S000350330700406X [DOI] [Google Scholar]
  13. Melara R. D., Marks L. E. (1990). Dimensional interactions in language processing: investigating directions and levels of crosstalk. J. Exp. Psychol. Learn. Mem. Cogn. 16, 539–554. 10.1037/0278-7393.16.4.539 [DOI] [PubMed] [Google Scholar]
  14. Peretz I., Vuvan D., Lagrois M. É., Armony J. L. (2015). Neural overlap in processing music and speech. Philos. Trans. R. Soc. Lond. B Biol. Sci. 370:20140090. 10.1098/rstb.2014.0090 [DOI] [PMC free article] [PubMed] [Google Scholar]
  15. Perruchet P., Poulin-Charronnat B. (2013). Challenging prior evidence for a shared syntactic processor for language and music. Psychon. Bull. Rev. 20, 310–317. 10.3758/s13423-012-0344-5 [DOI] [PubMed] [Google Scholar]
  16. Slevc L. R., Rosenberg J. C., Patel A. D. (2009). Making psycholinguistics musical: self-paced reading time evidence for shared processing of linguistic and musical syntax. Psychon. Bull. Rev. 16, 374–381. 10.3758/16.2.374 [DOI] [PMC free article] [PubMed] [Google Scholar]
  17. Steinbeis N., Koelsch S. (2008). Shared neural resources between music and language indicate semantic processing of musical tension-resolution patterns. Cereb. Cortex 18, 1169–1178. 10.1093/cercor/bhm149 [DOI] [PubMed] [Google Scholar]
