
Processing rhythm in speech and music: Shared mechanisms and implications for developmental speech and language disorders

Anna Fiveash 1,2, Nathalie Bedoin 1,2,3, Reyna L Gordon 4,5,6,7, Barbara Tillmann 1,2

Abstract

Objective.

Music and speech are complex signals containing regularities in how they unfold in time. Similarities between music and speech/language in terms of their auditory features, rhythmic structure, and hierarchical structure have led to a large body of literature suggesting connections between the two domains. However, the precise underlying mechanisms behind this connection remain to be elucidated.

Method.

In this theoretical review paper, we synthesize previous research and present a framework of potentially shared neural mechanisms for music and speech rhythm processing. We outline structural similarities of rhythmic signals in music and speech, synthesize prominent music and speech rhythm theories, discuss impaired timing in developmental speech and language disorders, and discuss music rhythm training as an additional, potentially effective therapeutic tool to enhance speech/language processing in these disorders.

Results.

We propose the processing rhythm in speech and music (PRISM) framework, which outlines three underlying mechanisms that appear to be shared across music and speech/language processing: precise auditory processing, synchronization/entrainment of neural oscillations to external stimuli, and sensorimotor coupling. The goal of this framework is to inform directions for future research that integrate cognitive and biological evidence for relationships between rhythm processing in music and speech.

Conclusion.

The current framework can be used as a basis to investigate potential links between observed timing deficits in developmental disorders, impairments in the proposed mechanisms, and pathology-specific deficits which can be targeted in treatment and training supporting speech therapy outcomes. On these grounds, we propose future research directions and discuss implications of our framework.

Keywords: music, speech, rhythm, language, developmental disorders


Music and language are both structured means of communication that exhibit connections across multiple components, including acoustic parameters, hierarchical syntactic structure, and rhythm. Research has investigated the neural mechanisms supporting various aspects of music perception and production, speech perception and production, and some processes that appear to be shared between both domains. In the current theoretical review, we synthesize a number of independently developed theories and different sources of evidence that contain recurring and common elements. Our aim is to create a parsimonious framework based on three common underlying neural mechanisms supporting music and speech rhythm processing: the processing rhythm in speech and music (PRISM) framework. This framework aims to provide a solid foundation for both theoretical/empirical and applied future research, with implications for developmental speech and language disorders.

First, we define rhythm in music and speech. Second, we focus on the three mechanisms suggested to be common to rhythm processing as it occurs for music and speech: precise auditory processing, synchronization/entrainment of neural oscillations to external rhythmic stimuli, and sensorimotor coupling. Third, we propose predictions and future directions derived from the PRISM framework. Within this section, we provide evidence for timing deficits across different developmental speech and language disorders and provide suggestions on how to apply the PRISM framework in both empirical and applied research. Finally, we provide a larger context and outlook for how to integrate different sources of evidence to better understand rhythm processing in music and speech. Although these suggested underlying mechanisms exist across different theories and within different domains, to our knowledge, they have not before been brought together in a framework to explain rhythmic processing in music and speech. The acoustic, sensory, and cognitive links between music and speech rhythm on the one hand, and developmental speech and language disorders and timing impairments on the other hand, suggest a promising research area that can be guided by the current evidence-based framework.

Rhythm in Music and Speech

Rhythm is a fundamental element of both music and language and is universally present across different cultures and languages (Brown & Jordania, 2011; Ding et al., 2017; Kotz et al., 2018; Savage et al., 2015). Rhythm refers to the temporal patterns created by the onsets and durations of acoustic events in an incoming sequence (London, 2012; McAuley, 2010). Fulfilling this definition, both music and speech are auditory signals that unfold in the temporal domain and contain periodic (and quasi-periodic) information structured by a number of similar acoustic cues, including duration (timing), frequency (pitch), amplitude/intensity (loudness), and timbre (instrument/voice quality) (Allen et al., 2017; Besson et al., 2011). These acoustic cues and the way they are structured in time form the basis of the bottom-up percept of auditory stimulus rhythm in both domains, which then has implications for higher-level processes of prediction and structure building.

Music is often perceived as having a clear, isochronous beat or pulse, defined as a salient point in time when an event is expected to occur (i.e., where listeners might naturally clap their hands, see Repp & Su, 2013). Although speech does not have such isochrony (see the unsuccessful history of the search for speech isochrony, Cummins, 2012; Knowles, 1974; Patel, 2008), speech rhythm emerges through a number of interacting lexical and prosodic factors. We will first discuss this difference in regularity and then the hierarchical nature of music and speech. This section thus focuses on acoustic aspects of music and speech and how they influence the sensory and cognitive processing of the auditory signals, which lay the foundation for musicality and speech/language skills (Honing, 2018). Note that we will primarily be focusing on Western concepts of music rhythm for the current discussion, as most music cognition research focuses on Western tonal structure, but see Brown and Jordania (2011); Savage et al. (2015); and Stevens (2012) for cross-cultural perspectives aiming to confirm similar underlying perceptual and cognitive processes.

One key distinction between music rhythm and speech rhythm is the regularity by which the acoustic events are patterned in time (see Figure 1). Music rhythm largely consists of regular, recurring patterns that allow for quick synchronization and strong predictions of upcoming events at multiple embedded time levels (Huron, 2008; Jones, 2016; Patel & Morgan, 2016). Importantly, this strong predictability facilitates synchronization both to the music and amongst individuals when listening and performing music. The strong activation in motor areas when just listening to music (Grahn & Brett, 2007), and the urge to dance when a rhythm is played (Levitin et al., 2018) suggest strong connections between music rhythm and movement, perhaps driven by the perception-production or auditory-motor loop (Lezama-Espinosa & Hernandez-Montiel, 2020; Zatorre et al., 2007), and the role of music in social bonding and group cohesion (Bowling et al., 2013; Kotz et al., 2018; Savage, Brown et al., 2015; Savage, Loui et al., 2020).

Figure 1.

Representations of (a) music (a simple melody) and (b) speech (a simple sentence) showing the acoustic waveforms, the melody or sentence represented within the waveform, and the hierarchical structure for each element. For (a), the duration differences of each note are outlined in the rhythm row, the perceived beat is marked with an x in the beat row, and the higher-level metric structure of the melody is marked with x’s in the following two rows. For (b), each syllable is marked on the syllable-level row, and the higher-level structure of stressed syllables is marked on the following rows.

In contrast to music, speech rhythm is less periodic and more variable, possibly because the referential nature of speech (with its lexical properties) does not allow for a strict rhythmic pulse. Speech rhythm is nevertheless predictable (e.g., listeners anticipate syllable stress patterns; Beier & Ferreira, 2018), and the rhythm that emerges from speech improves perception and segmentation of the speech signal by providing cues to word boundaries (Cutler, 1994; Cutler & Butterfield, 1992; Cutler & Norris, 1998; Echols et al., 1997; Spinelli et al., 2010), enhancing communication between individuals (Hawkins, 2014; Kotz et al., 2018), and facilitating turn-taking in conversations (Garrod & Pickering, 2015; Wilson & Wilson, 2005). The perception-production loop is also important for speech, with motor areas activated during speech perception (Wilson et al., 2004). These features and mechanisms should apply independently of language type. The different stress patterns and syllable types evident across languages (which may also influence syllable prominence patterns) led to the traditional separation of stress- and syllable-timed languages (see Ramus et al., 1999 for one of the initial metrics used to quantify stress- versus syllable-timing, and Patel, 2008 for a discussion). However, this distinction is less clear-cut than previously claimed, as suggested by the lack of supporting empirical data and by inconsistencies in the metrics used to achieve the classification (Arvaniti, 2009); the underlying rhythmic organization may ultimately be more complex. It has therefore been suggested that speech rhythm should be discussed in relation to patterns of prominence, grouping, and lexical stress, which can also be more readily related to music (Arvaniti, 2009; Beier & Ferreira, 2018).
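
To make the notion of such interval-based rhythm metrics concrete, the sketch below computes two of the measures introduced by Ramus et al. (1999), %V (the proportion of utterance duration that is vocalic) and ΔC (the standard deviation of consonantal interval durations), from hypothetical hand-segmented interval durations; it is an illustration only, not an analysis pipeline from the studies cited above.

```python
import numpy as np

def rhythm_metrics(vocalic_intervals, consonantal_intervals):
    """Return %V (proportion of utterance duration that is vocalic) and
    deltaC (standard deviation of consonantal interval durations)."""
    v = np.asarray(vocalic_intervals, dtype=float)
    c = np.asarray(consonantal_intervals, dtype=float)
    percent_v = 100.0 * v.sum() / (v.sum() + c.sum())
    delta_c = c.std()
    return percent_v, delta_c

# hypothetical hand-segmented interval durations (in seconds) for one utterance
vowels = [0.12, 0.08, 0.15, 0.10]
consonants = [0.09, 0.05, 0.11, 0.07, 0.06]
print(rhythm_metrics(vowels, consonants))
```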

Though music and speech rhythm diverge in relation to regularity (periodic, non-periodic) and individual elements (notes, chords, musical phrases, versus syllables, words, sentences), they both have similar acoustic features, create top-down cognitive predictions of upcoming elements, and are organized in hierarchical structures (i.e., contain meter, where events are organized temporally along multiple time scales, McAuley, 2010), see Figure 1. In addition, music and speech can both generate strong syntactic predictions, with additional lexical and semantic predictions for speech (see also semantics in music; Koelsch, 2011; Koelsch, 2009). Patterning of strong and weak events allows for perception at multiple levels within a larger hierarchical framework, and the creation of top-down expectations. For music, patterns of strong and weak beats obtained by changes in acoustic and/or temporal parameters of the events (Lerdahl & Jackendoff, 1983; London, 2012; Povel & Essens, 1985) create this hierarchical structure, also referred to as metric hierarchy (see also evidence for rhythmic hierarchy in non-Western music with more complex metrical patterns; Magill & Pressing, 1997; Stevens, 2012). For speech, interacting lexical, prosodic, and accentual elements (Beier & Ferreira, 2018; Patel, 2008; Wagner & Watson, 2010) create a rhythmic hierarchy that is reflected in patterns of prominence, grouping, and lexical stress (Arvaniti, 2009; Beier & Ferreira, 2018). In both domains, rhythmic stress patterns help to direct attention to more prominent events (music: Bharucha & Pryor, 1986; Jones et al., 1982; Palmer & Krumhansl, 1990; speech: Cutler & Foss, 1977; Gow & Gordon, 1993; Pitt & Samuel, 1990), engaging top-down temporal predictions. Although it is not entirely clear how top-down knowledge influences the perception of speech rhythm (with language background being one of the potential influences; Zhang & Francis, 2010), natural speech rhythm enhances comprehension, as altering speech rhythm through time-compression (Adank & Janse, 2009; Ghitza, 2012; Ghitza & Greenberg, 2009), or manipulating the stress structure (Bohn et al., 2013; Rothermich et al., 2012) lowers intelligibility and results in a cognitive processing cost, respectively. Music and speech therefore share similarities in terms of the acoustic signal itself, the sensory processing of the acoustic signal, and cognitive processing parallels in relation to prediction and hierarchical structure, which contribute to connections between musical and linguistic skills or behavior, and which are the focus of the present proposal.

Shared Neural Mechanisms for Rhythmic Processing

The commonalities between music and speech regarding acoustic elements, hierarchical organization, and the role of rhythm in perception and production suggest the involvement of shared neural mechanisms. Several theoretical frameworks (outlined below) have aimed to further understand and characterize the neural mechanisms supporting rhythmic processing in music, speech, or both domains together. However, most of these frameworks are limited in that they focus on only one or two elements or mechanisms supporting rhythmic processing, and often address only one domain or one perspective. Future research should therefore be directed by a more global understanding of the rhythm processing that underlies both music and speech, with implications for connections between the two domains. We propose here the PRISM framework (see Figure 2): a parsimonious framework of three central mechanisms that emerge separately across theories from different research fields and that appear critical for the processing of rhythm in music and speech. Our goal is to combine these mechanisms into an overarching theoretical framework that can inform and drive (1) fundamental research investigating the mechanisms underlying rhythm processing and (2) applied research investigating how music rhythm training can be mobilized in clinical-translational settings to support speech rhythm and language processing in typical and impaired populations. We propose that (1) precise, fine-grained auditory processing, (2) synchronization/entrainment of neural oscillations to external rhythmic stimuli, and (3) sensorimotor coupling are critical elements underlying speech and music rhythm processing (see Figure 2). The PRISM framework will be used as a basis to propose directions for future training targeting each of these mechanisms, with the goal of benefiting speech and language processing.

Figure 2.

The processing rhythm in speech and music (PRISM) framework proposes three common underlying mechanisms for music and speech processing observed across different theories: precise auditory processing; synchronization/entrainment of neural oscillations to external rhythmic stimuli; and sensorimotor coupling.

These three underlying mechanisms have emerged from a critical reading and synthesis of elements discussed in previously proposed approaches. Specifically, we have drawn on the sound envelope processing and synchronization and entrainment to pulse hypothesis (SEP; Fujii & Wan, 2014), the precise auditory timing hypothesis (PATH; Tierney & Kraus, 2014), and the temporal sampling framework for developmental dyslexia (TSF; Goswami, 2011), which take different yet complementary approaches to understanding shared elements of music and speech rhythm. The PRISM framework is also informed by the broader OPERA hypothesis, which suggests that Overlap, Precision, Emotion, Repetition, and Attention drive the influence of music training on speech processing (Patel, 2011, 2014). The three mechanisms proposed here as central are further informed by sensorimotor theories (e.g., action simulation for auditory prediction, ASAP, Patel & Iversen, 2014; active sensing, Morillon et al., 2015; Schroeder et al., 2010), dynamic attending theory (Jones, 1976, 2016, 2019; Large & Jones, 1999), and predictive coding (Friston, 2005, 2010). Drawing on this research, the following section outlines in more detail the three proposed mechanisms (precise auditory processing, synchronization/entrainment of neural oscillations to external rhythmic stimuli, and sensorimotor coupling) that appear integral to music and speech rhythm processing, as well as the theoretical and empirical evidence that supports them. Note that prediction and emotion are considered to be related to all three mechanisms. The PRISM framework is both novel and parsimonious in that it explicitly combines these three underlying mechanisms as directly applicable to the processing of music and speech rhythm. Bringing them together provides the theoretical groundwork for empirical research investigating links between impaired timing and speech and language disorders, and for applied research on music training interventions.

While the three mechanisms proposed in the PRISM framework are deeply intertwined, each plays a distinct role in music and speech rhythm processing. Precise auditory processing is crucial for discriminating small timing deviations and for accurate perception of acoustic events, laying the foundation for rhythm processing in both domains. The synchronization and entrainment of neural oscillations to external stimuli allows for the prediction of upcoming elements and the tracking of hierarchical structure at multiple levels. Sensorimotor coupling provides a tight connection between perception and production in the brain, as well as links to the motor system, both of which also benefit timing and prediction mechanisms. However, each mechanism can also be involved in the functioning of the others, as indicated by the bidirectional arrows in Figure 2. In the following section, we outline the contributions of these three mechanisms, signpost some important connections between them, and describe how they fit into the broader body of research on music and speech processing.

Precise Auditory Processing

The temporal precision of the auditory system is unparalleled among the senses and is fundamental to music and speech rhythm perception. Precise or fine-grained auditory processing refers to the ability to discriminate very small deviations or changes in timing (i.e., on the millisecond level), pitch, and timbre (Kraus & Chandrasekaran, 2010). This ability is critical for accurate perception of acoustic events, such as discriminating between /ba/ and /pa/ in speech, and for processing subtle timing deviations and synchronizing different instrumental parts in music perception and production (Patel, 2011). The auditory system also appears to be sensitive to timing deviations below the threshold of conscious change detection: evidence suggests that participants can adjust their synchronization behavior to isochronous sequences containing deviations as small as three milliseconds (Madison & Merker, 2004), likely based on tight connections between the auditory and motor areas of the brain (see Repp, 2000; Tierney & Kraus, 2014). The capacity to track temporal information over different temporal integration windows (including precise processing of information such as the formant transitions that distinguish, for instance, /pa/ from /ta/, i.e., discriminations of 20–40 ms) is suggested to be supported by neural oscillations (e.g., in the delta, theta, and gamma frequency ranges; Giraud & Poeppel, 2012; Poeppel, 2003). Precise auditory processing is therefore also strongly intertwined with sensorimotor coupling and with the synchronization/entrainment of neural oscillations to external rhythmic stimuli.

Precise auditory processing has been proposed to be a mechanism underlying potential transfer between music and speech rhythm processing capacities (Fujii & Wan, 2014; Kraus & Chandrasekaran, 2010; Tierney & Kraus, 2014). In line with the OPERA hypothesis (Patel, 2011, 2012), it has been suggested that music training can enhance speech processing, based on overlapping brain circuits that process the acoustic signal and on the more precise timing required to process music rhythm compared to speech rhythm (Patel, 2011). Precise auditory processing is also outlined in the SEP and PATH hypotheses: Fujii and Wan (2014) suggest that the processing of sound envelopes in music requires enhanced temporal precision, which has carry-over effects on the processing of the less regular speech envelope and on the neural encoding of speech sounds. In PATH, Tierney and Kraus (2014) suggest that (1) the millisecond-level precision required for entrainment to music can sharpen brain networks responsible for speech processing, and (2) phonological processing and auditory-motor entrainment rely on precise timing in the auditory system to generate accurate predictions. The role of auditory-motor entrainment in generating precise auditory predictions is also in line with the sensorimotor theories discussed below.

Supporting evidence has been provided by research showing that music training can actively enhance precise auditory processing, which may benefit speech processing across the lifespan (Kraus & Chandrasekaran, 2010). In addition to correlational studies (see Supplementary Table 1), longitudinal training studies have shown benefits of music rhythm training on precise temporal processing of the speech signal. For nine-month-old infants, 12 sessions of music training emphasizing rhythm (compared to a control group who engaged in non-musical play activities) enhanced the neural response (the mismatch negativity, MMN) to violations of temporal structure in both music and speech, suggesting that music rhythm training can improve speech rhythm processing (Zhao & Kuhl, 2016). Further, compared to a group who received painting training, 8-year-old children who received music training showed an increase in speech segmentation skills after one and two years (François et al., 2013). This music-training group also showed an enhanced MMN to syllable duration and vowel onset time deviants (but not frequency deviants) after one year of training (Chobert et al., 2014). The participants were pseudo-randomly assigned to ensure matched groups in terms of age, school level, sex, socio-economic status, and neuropsychological test scores, and did not differ on the measures of interest before the training, suggesting that the enhanced fine-grained speech processing can be attributed to the music training.

Precise auditory processing has also been suggested to be critical for encoding the speech envelope. In the TSF, Goswami (2011) suggests that impaired rise-time perception of syllables (i.e., occurring every ~200 ms, or every ~500 ms for accented syllables) can affect accurate encoding of the speech envelope, potentially resulting in deficits in phonological processing, segmentation, and phonological awareness, which in turn can impact reading skills in developmental dyslexia (Di Liberto et al., 2018; Goswami, 2011, 2018; Goswami et al., 2002, 2010). The TSF and related research suggest that the regularity of music rhythm could sharpen the precision of auditory processing and of entrained neural oscillations, which could enhance phonological skills by improving neural tracking of the speech envelope (Flaugnacco et al., 2015; Goswami, 2012). Compared to control groups (with sports training or no training), music rhythm training experimentally implemented over periods of 14 weeks to 4 months has been shown to enhance phonological processing in typically developing children, providing support for this hypothesis (Degé & Schwarzer, 2011 [5–6-year-olds]; Gromko, 2005 [kindergarten]; Patscheke et al., 2016 [4–6-year-olds]). Numerous positive correlations between rhythm production/perception skills and phonological awareness have also been reported for children (see Supplementary Table 1). Music and speech rhythm processing therefore builds on precise auditory timing, which is in turn linked to both the synchronization/entrainment of neural oscillations to external stimuli and sensorimotor coupling (e.g., Morillon & Baillet, 2017; Peelle & Davis, 2012; ten Oever & Sack, 2015).

Synchronization and Entrainment of Neural Oscillations to External Rhythmic Stimuli

Neural oscillations are regularly recurring inhibitory and excitatory patterns of electrical activity produced by neurons (Buzsáki, 2019; Buzsáki & Draguhn, 2004). They are ubiquitous throughout the brain (Buzsáki, 2006) and have been shown to play a central role in music and speech processing (Jones, 2019). Neural oscillations track auditory rhythms, are suggested to underlie the perception of music (Fujioka et al., 2012; Nozaradan et al., 2011, 2012, 2015) and speech (Giraud & Poeppel, 2012; Kösem et al., 2018; Kösem & Wassenhove, 2017), and appear to function similarly across the two domains (Harding et al., 2019). Neural oscillations have been linked to temporal attention (Jones, 2019), prediction (Arnal & Giraud, 2012), entrainment (Calderone et al., 2014), hierarchical processing (Jones, 2016; Poeppel & Assaneo, 2020), and communication between brain regions (i.e., auditory and motor cortices, Assaneo & Poeppel, 2018), all of which are integral to music and speech processing. They have also been linked to precise auditory processing (Goswami, 2011; Poeppel, 2003) and sensorimotor coupling (Morillon & Baillet, 2017; van Wijk et al., 2012; Yang et al., 2018). Neural oscillations have been observed at several different frequency rates (Buzsáki & Draguhn, 2004) and can be hierarchically coupled, supporting the processing and integration of information at various embedded frequencies (Jones, 2016). Neural oscillations are also suggested to be involved in the generation and signaling of predictions and prediction errors (Arnal & Giraud, 2012; Chao et al., 2018; see also Buzsáki, 2019). Neural oscillations therefore appear to be a mechanism underlying predictive processing, temporal attention, and the tracking of external rhythmic stimuli, and could underlie the efficacy of music-based rhythm training for speech processing. Here, we focus primarily on the role of neural oscillations in synchronization and entrainment to external rhythmic stimuli.

The crucial role of neural oscillations in temporal attention and predictive processing, as well as applications to music and speech processing, is outlined clearly in the theory of dynamic attending (DAT) proposed by Jones (1976, 2018). The central thesis of DAT is that endogenous neural oscillations entrain in phase to external rhythmic (or quasi-rhythmic) signals, which allows temporal attention to be directed towards predicted points in time and enhances predictive processing. Behavioral research has supported this theory with data on perception, learning, and memory. For example, perceptual judgments (and memory; Hickey et al., 2020) are facilitated for events occurring at expected points in time, in line with the hypothesis that neural oscillations entrain and direct attention to these moments for both auditory (Barnes & Jones, 2000; Jones et al., 2002, 2006; Large & Jones, 1999; McAuley & Kidd, 1998) and visual (Bolger et al., 2013; Escoffier et al., 2010) stimuli (see also Henry & Herrmann, 2014 for a review and link between behavioral and electrophysiological research).
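
A minimal sketch of the kind of phase entrainment assumed by such oscillator-based accounts is given below: an internal oscillator with a slightly mismatched period phase-corrects toward periodic stimulus onsets and settles into a stable phase relation rather than drifting. The update rule and parameter values are simplified illustrations and are not taken from the published DAT models.

```python
stim_ioi = 0.50      # stimulus inter-onset interval in seconds (hypothetical)
osc_period = 0.52    # oscillator's intrinsic period (slightly too slow)
coupling = 0.3       # strength of phase correction applied at each onset

rel_phase = 0.25     # initial relative phase, in cycles, in [-0.5, 0.5)
for onset in range(15):
    # drift accumulated over one stimulus interval because of the period mismatch
    rel_phase += stim_ioi / osc_period - 1.0
    # wrap back into [-0.5, 0.5)
    rel_phase = (rel_phase + 0.5) % 1.0 - 0.5
    # phase correction: nudge the oscillator toward the stimulus onset
    rel_phase -= coupling * rel_phase
    print(f"onset {onset:2d}: relative phase = {rel_phase:+.3f} cycles")
```

Despite the period mismatch, the relative phase converges to a small, stable lag rather than drifting through the cycle, which is the signature of entrainment.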

One benefit of music is that it is highly rhythmic and predictable, making it an ideal stimulus for entraining neural oscillations. Neural oscillations entrained by music rhythm have been shown to persist and influence subsequent language processing. For example, short rhythmic cues matched to the syllabic structure of a subsequent sentence enhance phoneme detection (Cason et al., 2015; Cason & Schön, 2012) and the neural response (Falk et al., 2017) to subsequent sentences compared to non-matching or temporally irregular cues. Though the effect of these shorter rhythmic cues might be explained by auditory short-term memory of the matching pattern, similar effects have been found with longer (~30-second) rhythmic primes whose effects persist over six subsequent, naturally pronounced sentences. Regular rhythmic primes facilitate grammatical judgments of orally presented sentences compared to irregular rhythmic primes for English (Chern et al., 2018), French (e.g., Canette et al., 2020; Fiveash, Bedoin et al., 2020; Przybylski et al., 2013), and Hungarian (Ladányi, Lukács et al., 2020) children. These findings suggest that music rhythm can entrain temporal attentional cycles, which can persist after the music has ended and influence subsequent language processing, or even simple detection of events (Hickok et al., 2015). This evidence suggests that the synchronization/entrainment of neural oscillations can be targeted as a mechanism to extend the benefits of the regular music signal to the less regular speech signal.

The regularity of music rhythm is also beneficial for enhancing prediction and minimizing prediction error, in line with predictive coding and predictive timing approaches (Arnal & Giraud, 2012; Friston, 2005, 2010). Predictive coding (i.e., predicting what will occur) and predictive timing (i.e., predicting when an event will occur) are based on the hypothesis that the brain constantly generates predictions about upcoming events based on incoming sensory evidence, with the goal of minimizing prediction error (see also Friston & Kiebel, 2009; Jones, 2019; Jones & Boltz, 1989). If the sensory evidence does not match the prediction, this prediction error is sent up the cortical hierarchy, and subsequent predictions are updated. Predictive coding/timing thus relies on both backward (i.e., top-down) processes that convey predictions and forward (i.e., bottom-up) processes that signal prediction errors based on incoming sensory information. Importantly, neural oscillations have been suggested to support predictive coding (Arnal & Giraud, 2012; Chao et al., 2018), and prediction appears to work similarly across different hierarchically organized domains (Siman-Tov et al., 2019). Links between predictive neural networks and networks involved in rhythmic entrainment (i.e., Levitin et al., 2018; Merchant et al., 2015), as well as fluctuations in attention and entrainment as outlined in the DAT (Jones, 1976, 2019), have also been proposed (Siman-Tov et al., 2019). The strongly rhythmic and predictable nature of music could therefore be used to train domain-general predictive networks associated with predictions and prediction errors and to enhance predictive precision in speech processing. Along these lines, research has shown that sung sentences result in stronger cerebro-acoustic phase coherence compared to spoken sentences in difficult listening conditions (Vanden Bosch der Nederlanden et al., 2020), suggesting an added benefit of musical attributes to the processing of the speech signal.
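
To make the error-driven updating described above concrete, the toy sketch below nudges a predicted inter-onset interval toward each observed interval in proportion to the prediction error; it is a deliberately minimal illustration, not the full predictive coding formalism, and all values are hypothetical.

```python
predicted_ioi = 0.60          # initial predicted inter-onset interval (s), hypothetical
learning_rate = 0.4           # how strongly each prediction error updates the prediction
observed = [0.50, 0.50, 0.50, 0.55, 0.50, 0.50]   # observed intervals (s)

for ioi in observed:
    error = ioi - predicted_ioi              # prediction error: observed minus predicted
    predicted_ioi += learning_rate * error   # update the prediction to reduce future error
    print(f"observed {ioi:.2f} s -> new prediction {predicted_ioi:.3f} s")
```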

The entrainment of neural oscillations is also implicated in representing the different hierarchical levels of music and speech rhythm. The DAT suggests that neural oscillations entrain at multiple hierarchical levels to external regularities, resulting in nested oscillations that track multiple levels of hierarchical structure simultaneously and provide benefits of metric binding (Jones, 1976, 2016). Indeed, different beat- and meter-related frequencies have been observed in the neural response of participants listening to music (Fiveash, Schön, et al., 2020; Nozaradan et al., 2012) as well as in response to an imagined meter that was not present in the acoustic stimulus (Nozaradan et al., 2011; see also Nozaradan, 2014, Nozaradan et al., 2012, 2015). For speech, phrasal, syllabic, and phonemic processing (i.e., covering time scales ranging from ~300–1000 ms, to 125–250 ms, to ~30–40 ms) are suggested to be represented by coupled oscillations in the delta (1–3 Hz), theta (4–8 Hz), and low gamma (25–35 Hz) frequency bands, respectively (Giraud & Poeppel, 2012, see also Ghitza, 2011 for slightly different timescales). Neural oscillations have been observed at these different levels not only with isochronous (Ding et al., 2016) but also with natural (Keitel et al., 2018) speech rhythms. Further, Ding et al. (2016) showed that higher-level (phrasal and sentence-level) neural oscillations were observed only when participants could comprehend the language they were listening to, suggesting strong effects of top-down processing. Similarly, in music, behavioral (Large et al., 2015) and electrophysiological (Tal et al., 2017) evidence shows that participants both perceive, and neurally represent, an underlying pulse (or beat) that is not physically present in the acoustic signal of the rhythm, suggesting top-down processing of hierarchical structure driven by neural oscillations.
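
As a schematic illustration of the frequency-tagging logic used in studies such as those cited above, the sketch below inspects the amplitude spectrum of a simulated signal for peaks at beat- and meter-related frequencies; the signal, frequencies, and normalization are hypothetical choices for demonstration, not the analysis pipeline of any specific study.

```python
import numpy as np

fs = 250.0                          # sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)        # 60 s of simulated data
beat_f, meter_f = 2.4, 1.2          # hypothetical beat and meter frequencies (Hz)

# synthetic "neural" signal: beat component + weaker meter component + noise
signal = (1.0 * np.sin(2 * np.pi * beat_f * t)
          + 0.5 * np.sin(2 * np.pi * meter_f * t)
          + np.random.randn(t.size))

spectrum = np.abs(np.fft.rfft(signal)) / t.size   # amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)

for f in (meter_f, beat_f):
    idx = np.argmin(np.abs(freqs - f))            # nearest frequency bin
    print(f"amplitude at {f:.1f} Hz: {spectrum[idx]:.3f}")
```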

Observing neural oscillations at hierarchical levels not physically present in the stimulus is particularly pertinent to the discussion of whether neural oscillations represent the entrainment of already-present endogenous oscillations to an external stimulus (entrainment in the narrow sense, Obleser & Kayser, 2019), or whether they only represent evoked neural responses to the acoustic (rhythmic) properties of the external stimulus (see Haegens, 2020; Haegens & Zion Golumbic, 2018; and Zoefel et al., 2018 for discussion). This distinction has implications for the active role of neural oscillations in the prediction of upcoming events via the entrainment of self-sustaining endogenous oscillations (see also links with predictive coding, Friston, 2018; Giraud & Arnal, 2018; Hovsepyan et al., 2018; Rao & Ballard, 1999). Despite an ongoing debate, accumulating evidence suggests that observed neural oscillations cannot be reduced to evoked responses, but also include the entrainment of endogenous oscillations with functional significance (e.g., Bree et al., 2021; Doelling et al., 2019). The recruitment of endogenous neural oscillations for rhythmic processing suggests that entrainment to an external stimulus is an active process involving temporal attention and prediction, rather than a passive response to an external stimulus. The regular rhythmic structure and temporal precision of music make it an ideal stimulus for enhancing neural entrainment and precise processing, with potential benefits for the speech signal.

Sensorimotor Coupling via Cross-Region Neural Connectivity

Sensorimotor (or auditory-motor) coupling refers to the connection between the auditory and motor cortices, and is a central underlying mechanism for the perception and production of music and speech rhythm. Research has shown that just listening to music/rhythmic patterns (Chen et al., 2008; Fujioka et al., 2012; Gordon et al., 2018; Grahn & Brett, 2007; Stephan et al., 2018) or speech (Glanz et al., 2018; Möttönen et al., 2013; Wilson et al., 2004) activates areas within the motor cortex (largely the supplementary motor area (SMA), pre-SMA, and premotor cortex), suggesting a tight coupling between perception and production in each domain. For music, this sensorimotor link is evident with the urge to move to music (Janata et al., 2012), and moving with a rhythm enhances the subsequent perception of that rhythm (Chemin et al., 2014; Manning & Schutz, 2013). Sensorimotor coupling appears crucial to the perception and production of speech, with the motor system implicated also in speech perception, and the auditory system implicated also in speech production (Guenther & Hickok, 2015; Hickok et al., 2011). Speech production inherently involves movement, and speech perception partly utilizes similar networks in the brain (Fujii & Wan, 2014; Kotz & Schwartze, 2010). There appears to be specific synchronization between auditory and motor cortices at the syllable rate (4.5 Hz), suggesting the significance of the motor cortex for speech processing (Assaneo & Poeppel, 2018). Further, the sensorimotor connection plays a central role in the development of speech in infants (Bruderer et al., 2015). Recent evidence has also shown that participants who were classified as high-synchronizers in a spontaneous synchronization of speech test differ from low-synchronizers in the white matter pathways that connect frontal and auditory regions, suggesting a connection between auditory-motor coupling and neural synchronization (Assaneo et al., 2019). Importantly, this connection was also linked to enhanced word learning for high-synchronizers compared to low-synchronizers, suggesting implications for language learning. Further, speech perception has been shown to be enhanced by rhythmic speech production, but only for high-synchronizers, suggesting the importance of individual differences in the connection between auditory and motor cortices (Assaneo et al., 2021).

The involvement of the motor system in sensorimotor coupling benefits the generation of precise sensory predictions for music and speech (Grahn & Rowe, 2013; Kotz & Schwartze, 2010; Large et al., 2015; Palva & Palva, 2018; Schubotz, 2007; Zatorre et al., 2007). Though the exact process by which this occurs is not yet fully known, one theory of musical beat perception suggests that motor regions (including the premotor cortex, SMA, pre-SMA, and putamen) receive input from the auditory cortex, use this input for motor planning (even in the absence of movement), and then send timing predictions based on that planning back to the auditory cortex (the action simulation for auditory prediction [ASAP] hypothesis; Patel & Iversen, 2014; see Cannon & Patel, 2021 for a neurophysiological update on the ASAP hypothesis; see also Large et al., 2015; Ross et al., 2016). Similarly, the active sensing framework posits that neural oscillations are generated by the motor system and influence predictive timing and coding (Morillon et al., 2015; Schroeder et al., 2010). Active sensing is applicable to both speech and music and suggests a role for neural oscillations in communication between regions and in the amplification of sensory input arriving at predicted times (Morillon et al., 2015; Morillon & Baillet, 2017; Schroeder et al., 2008; Schroeder & Lakatos, 2009). The strong involvement of auditory and cortical motor areas in both speech and music processing points to the contribution of sensorimotor coupling to the processing of temporal regularities.

Findings suggesting shared sensorimotor mechanisms for music and speech lead to the prediction that training focused on rhythm processing, and in particular on entrainment to rhythm, would strengthen the connection between auditory and motor cortices. This training could therefore be beneficial to both music and speech processing, particularly in relation to temporal attention and prediction. Entrainment satisfies all of the OPERA conditions (Patel, 2011, 2012) and has thus been suggested as an underlying plasticity mechanism behind training transfer from music to speech (PATH; Tierney & Kraus, 2014). The temporal regularity of music has also been suggested to enhance the sensorimotor coupling that is likewise involved in the perception and production of speech (SEP; Fujii & Wan, 2014). In support of music enhancing sensorimotor coupling, a recent study showed that just 24 weeks of piano training enhanced functional connectivity between auditory and sensorimotor regions compared to a control group without training (Li et al., 2018; see also Hyde et al., 2009). It therefore appears likely that neural oscillations in the motor cortex and auditory cortex align to enhance perception (Bowers et al., 2014; Fujioka et al., 2012; Morillon et al., 2015), and that this connection can be trained.

Predictions and Future Directions of the PRISM Framework

The Processing Rhythm in Speech and Music (PRISM) framework brings together evidence from multiple research fields, focusing on separate aspects of music and speech rhythm. While previous research has focused on individual elements separately or in subsets, the PRISM framework provides a global, parsimonious combination of the underlying mechanisms that support rhythm processing in music and speech. One of the aims of this framework is to direct future research investigating the connections between music and speech rhythm processing, and to inform the development of specific music rhythm interventions and training that could be particularly pertinent to the treatment of developmental speech and language disorders. We propose that a better understanding of the contributions and connections between precise auditory processing, synchronization/entrainment of neural oscillations to external rhythmic stimuli, and sensorimotor coupling will provide a valuable perspective for future rhythm studies at the intersection between speech and music. We further suggest that these three underlying mechanisms support the connections observed between music rhythm and speech rhythm skills and should be targeted directly in future research on developmental speech and language disorders. Based on the neuroplasticity of the brain (Patel, 2011), training of the three suggested mechanisms should enhance precision of and connections between cortical and subcortical temporal processing networks, including the basal ganglia, auditory and motor cortices, and fronto-temporal connections, which serve both speech and music processing (Kotz & Schwartze, 2010; Rajendran et al., 2018).

The predictions of the current framework are that (1) deficits in one or more of the proposed underlying mechanisms should be related to deficits in both speech/language and music processing, (2) different expressions of speech/language difficulties in different disorders should be related to specific impairments in one (or more) of the underlying mechanisms proposed, and (3) targeted training of these mechanisms should enhance related skills in both music and speech/language processing. The following sections will outline some available evidence fitting with these predictions and suggest pathways for future research.

Deficits in Shared Underlying Mechanisms of Music and Speech/Language Rhythm

Mounting evidence suggests that speech and language difficulties may be related to co-morbid impairments in timing (Falk et al., 2015; Falter & Noreika, 2014; Goswami, 2011; Peter & Stoel-Gammon, 2008), and that atypical rhythm processing may be a risk factor for developmental speech and language disorders (Atypical Rhythm Risk Hypothesis, ARRH; Ladányi, Persici et al., 2020). Although developmental speech and language disorders may be expressed differently (e.g., children often present with a heterogeneous constellation of perceptual and production deficits at the levels of phoneme awareness, phonological processing, articulation, fluency, supra-syllabic prosodic sensitivity, vocabulary, spoken grammar, and reading skills), the notable levels of comorbidity between disorders (Heaton et al., 2018; Kaplan et al., 2001; Nicolson & Fawcett, 2007; Puyjarinet et al., 2017; Zwicker et al., 2009) and the strong link between perception and production in the brain (Kotz & Schwartze, 2010) make it likely that impairments in underlying neural mechanisms are shared across different disorders. In the current section, we focus primarily on developmental dyslexia, developmental language disorder (DLD), and stuttering. These three disorders have speech or language as a primary deficit, are highly prevalent in the population, frequently persist into adulthood, and entail communication difficulties that exert a personal and professional toll. Notably, all three have also been associated with deficits in timing (Goswami, 2019; Ladányi, Persici et al., 2020).

Individuals with developmental dyslexia present with learning difficulties for reading and spelling (Goswami, 2011; Lyon et al., 2003), usually associated with phonological processing deficits in the perceptual domain, whereas those with DLD generally have impaired oral language acquisition (McArthur et al., 2000; Ramus et al., 2013). Developmental language disorder (previously termed specific language impairment, see Bishop et al., 2017 for specifications of the terminology) manifests primarily in delayed and disordered acquisition of morpho-syntax (grammar) and vocabulary, and may be characterized by solely expressive or expressive-receptive deficits. DLD has lifelong negative consequences for academic, economic, and social well-being (Conti-Ramsden et al., 2018; Hubert-Dibon et al., 2016; Law et al., 2009). Further, dyslexia and DLD very often co-occur (Bishop & Snowling, 2004; Snowling et al., 2019, 2020). In contrast, individuals who stutter have difficulty producing fluent speech, and may prolong syllables and produce speech with irregular temporal patterns (Perez & Stoeckle, 2016) but generally have intact acquisition of vocabulary, grammar, and reading to the extent that language structure can be dissociated from motor speech. Note that for a diagnosis of these disorders, speech or language deficits cannot be attributed to low IQ, neurological damage, an inadequate learning environment, or hearing impairment.

Impairments in precise auditory processing and entrainment of neural oscillations to external stimuli have been observed for individuals with dyslexia and DLD, who show deficits in the processing of syllable rise-time and stress perception, as well as deficits in the larger-scale temporal sampling of the speech envelope (dyslexia: Goswami et al., 2016; Huss et al., 2011; Leong et al., 2011; Leong & Goswami, 2014; Molinaro et al., 2016; Power et al., 2016; Thomson et al., 2006; DLD: Corriveau et al., 2007; Cumming, Wilson, & Goswami, 2015; Richards & Goswami, 2015). This research has motivated the development of the TSF for developmental dyslexia (and extended to DLD), which links observed perceptual deficits with impaired synchronization across multiple neural oscillation bands in the brain (Goswami, 2011; Goswami et al., 2016). Based on the TSF, Cumming, Wilson, & Goswami (2015) proposed the prosodic phrasing hypothesis, which also suggests that children with DLD are impaired in the perception of amplitude rise-times and duration cues, and further focuses on implications for the perception of larger scale prosodic structure and grammatical processing (see also Cumming, Wilson, Leong et al., 2015). Support for the impairment of neural oscillations in individuals with dyslexia comes from research finding atypical neural entrainment to slow modulations in auditory signals in dyslexic children (Cutini et al., 2016; Power et al., 2016) and adults (Hämäläinen et al., 2012; Soltész et al., 2013). Therefore, dyslexia and DLD may be associated with fundamental impairments in precise auditory processing and synchronization of neural oscillations to the speech signal (Goswami, 2011, 2018).

Related impairments have been observed for the processing of music and music-like stimuli for these populations, supporting the hypothesis of shared underlying mechanisms for music and speech rhythm. Individuals with dyslexia appear to show a general impairment in synchronization to an external beat (Colling et al., 2017; Overy et al., 2003; Thomson & Goswami, 2008), and perform poorly on measures of rhythm perception, rhythm production, and synchronization (e.g., Degé et al., 2015; Flaugnacco et al., 2014; Meyler & Breznitz, 2005; Thomson & Goswami, 2008; Wolff, 2002). Similarly, individuals with DLD show difficulties in both speech rhythm and music rhythm processing (Cumming, Wilson, Leong, et al., 2015; Sallat & Jentschke, 2015), are poorer at paced rhythmic tapping (Corriveau & Goswami, 2009) and have poorer singing ability (Clément et al., 2015) compared with controls. Children with DLD also perform worse than controls on a semantic judgment task when natural sentences are spoken fast or are time-compressed (Guiraud et al., 2018).

Timing and synchronization impairments have been observed not only for children with dyslexia and DLD, but also for individuals who stutter. Individuals who stutter exhibit impairments in synchronized tapping (Falk et al., 2015), show impaired rhythm perception with musical material (Wieland et al., 2015), and are impaired in unpaced tapping tasks, which rely on internal time keeping (Olander et al., 2010). These results suggest similar impairments across music and speech and open up the possibility that training in music rhythm could enhance both music and speech processing in these populations by co-opting shared neural circuitry. For stuttering, reduced sensorimotor coupling may affect the perception/production loop (S.-E. Chang et al., 2016b; Hickok et al., 2011), and impaired internal beat generation may be related to deficient production of internal timing cues from the basal ganglia (Alm, 2004; Toyomura et al., 2011). By better understanding how impairments in these underlying temporal processing mechanisms are involved across different developmental speech and language problems, it may be possible to directly target (and train) specific mechanisms for improvement.

Impaired timing mechanisms also often co-occur in developmental disorders that are frequently characterized by speech and language impairments (see Lense et al., in press for a review). Timing impairments have been observed, for example, in Autism Spectrum Disorder (Duffield et al., 2013; Green et al., 2009; Hardy & LaGasse, 2013; Isaksson et al., 2018; Morimoto et al., 2018; Mostofsky et al., 2009; Rinehart et al., 2001; Tryfon et al., 2017), Attention Deficit Hyperactivity Disorder (Hove et al., 2017; Noreika et al., 2013; Puyjarinet et al., 2017; Rubia et al., 1999; Shapiro & Huang-Pollock, 2019; Slater & Tate, 2018; Valera et al., 2010; Zelaznik et al., 2012), and Developmental Coordination Disorder (DCD; Chang et al., 2021; Rosenblum & Regev, 2013; Trainor et al., 2018). In the adult-focused brain pathology literature, co-morbid timing deficits have also been observed in association with basal ganglia dysfunction or damage, for example in Parkinson’s disease and in patients with basal ganglia lesions (Kotz & Gunter, 2015; Kotz, Gunter, & Wonneberger, 2005). Future research could search for similarities in timing impairments across different disorders that could be trained using common and/or specific music rhythm interventions (e.g., see the success of auditory cueing in individuals with Parkinson’s disease; Dalla Bella, 2018; Kotz & Gunter, 2015; and the key role of rhythm in melodic intonation therapy for patients with aphasia, Stahl et al., 2011).

Targeting Shared Underlying Mechanisms for Music and Speech Rhythm

Music Rhythm to Stimulate and Train Speech/Language Processing

Research has started to show that using music rhythm as a stimulation or training tool can be beneficial to language processing in dyslexia, DLD, and stuttering. Current music training studies tend to provide general rhythmic or music training, or combined pitch and rhythm training, so it is difficult to assess the direct effects of training specific underlying mechanisms. Rhythm training in general should enhance precise auditory processing, as well as train temporal attention, sequencing, and predictive timing skills, which may then also indirectly influence the processing of the less regular speech signal. This transfer could occur by sharpening and directing attention to relevant points in time, thereby enhancing various aspects of sentence processing such as phonological, syllable, and word processing, as well as prosody, syntax, and reading (see also the OPERA hypothesis; Patel, 2011). The majority of experimental studies implementing music rhythm training alongside a control condition have been conducted in individuals with dyslexia. These studies have shown that rhythm training (of various types) can enhance the perception of the speech signal (e.g., voice onset time; Frey et al., 2019), phonological awareness (Flaugnacco et al., 2015; Thomson et al., 2013), and reading skills (Bonacina et al., 2015; Flaugnacco et al., 2015) compared to painting or no-training control groups. Even though a full, systematic review of this literature is beyond the scope of the current paper, these dyslexia studies provide a promising proof of concept that rhythm training can impact speech and language task performance. Meanwhile, more research is needed to investigate the effects of rhythm training in other developmental speech and language disorders.

Music training research in general (across both typically developing and clinical populations) is in need of clear, hypothesis-driven experiments that aim to train the precise mechanisms predicted to be shared between music and speech/language processing. Although only a limited number of studies have investigated purely rhythmic training, current meta-analyses and systematic reviews of music training (not specific to rhythm) report beneficial, albeit weak to moderate, transfer effects on speech/language skills (Cooper, 2020; Gordon et al., 2015; Pesnot Lerousseau et al., 2020; Román-Caballero et al., 2021). These reviews and related discussions (see Schellenberg, 2019a) also underline the need for more systematic research, in particular the use of random assignment to groups and an active control group to investigate causal effects of music training on related abilities (see also the series of exchanges in Bidelman & Mankel, 2019; Mankel & Bidelman, 2018; and Schellenberg, 2019b for a discussion of musical aptitude versus training). Such controlled designs are particularly important for investigating the effects of training the three mechanisms proposed in the current framework. Note that correlations reported in the literature can also provide first insights into potential links between music rhythm and language processing. However, correlations between different music rhythm and speech/language skills should be interpreted with some caution, as they may be driven by other predisposition-related factors (Schellenberg, 2019b; Swaminathan et al., 2017; Swaminathan & Schellenberg, 2020). With this caveat in mind, numerous correlations have been found between various rhythmic abilities and phonological awareness/reading skills in both typically developing children (see Supplementary Table 1) and clinical populations (e.g., Flaugnacco et al., 2014; Forgeard et al., 2008; Goswami et al., 2013; Huss et al., 2011; Thomson & Goswami, 2008). Much less is known about possible benefits of rhythm-based treatment in DLD (see Wiens & Gordon, 2018), and it will be important to explore whether speech-rhythm or musical-rhythm-focused training could impact spoken grammar and vocabulary, key areas of difficulty for children with DLD.

An important source of converging evidence comes from research that has adopted short-term stimulation approaches. Benefits of rhythm regularity in priming stimuli and of rhythmic cueing have been observed across these three speech and language disorders. Presenting a regular rhythm before a set of sentences has been shown to enhance grammatical processing for children with DLD and dyslexia compared to both irregular primes (Ladányi, Lukács et al., 2020; Przybylski et al., 2013) and environmental sound scenes (Bedoin et al., 2016). These findings suggest a role for sustained neural oscillations stimulated by musical rhythm (i.e., in the prime) in improving temporal expectations for various aspects of the subsequently presented speech signal (e.g., morphosyntactic cues for enhanced grammatical processing), even in developmental disorders. Rhythmic cueing or auditory stimulation has also been suggested to be valuable for individuals who stutter, notably by providing an external structure for internal time keeping, enhancing temporal attention allocation and predictive processing, strengthening auditory-motor networks, and training the generation of an internal rhythm (Thaut, 2005). Supporting this suggestion, individuals who stutter appear to benefit from external auditory stimulation (Frankford et al., 2021; Toyomura et al., 2011), and asking individuals who stutter to sing enhances speech fluency, potentially by regulating the temporal structure of the words (Falk et al., 2016; Glover et al., 1996; Wan et al., 2010). Both short- and long-term music rhythm stimulation and training therefore appear able to enhance precise auditory processing, synchronization/entrainment of neural oscillations to external rhythmic stimuli, and sensorimotor coupling, and could be valuable therapeutic tools to be used alongside speech therapy for these pathologies, especially with targeted interventions.

Music rhythm training could be particularly valuable as a complement to traditional speech or neuropsychological therapy because it contains a number of additional components that could be beneficial across the two domains (including links to attention, emotion, and motor functions; e.g., Särkämö et al., 2008). Importantly, music training is an enjoyable, easily administered, motivating, and cost-effective intervention, and can be used in group sessions (see Tamplin et al., 2013 and Tierney et al., 2013 for examples of group applications) as well as in individual sessions. Group sessions have the additional advantage of joint motivation and joint action, social coherence, and synchronization, which may further enhance entrainment (see Cross & Morley, 2008; Kokotsaki & Hallam, 2007; Miendlarzewska & Trost, 2014). The use of music rhythm training in treatment has also been developed in the SEP hypothesis (Fujii & Wan, 2014), which focuses on applications to Parkinson's disease, stuttering, aphasia, and autism. Studies in both typically developing children (Degé & Schwarzer, 2011; Patscheke et al., 2016) and children with dyslexia (Thomson et al., 2013) suggest that music training may provide improvements in phonological awareness comparable to those obtained with direct training in phonology (Bhide et al., 2013; see also Bigand & Tillmann, 2021). Such results suggest that music (rhythm) training could be used to complement more direct approaches, allowing for more diverse training, potentially increased motivation, and enhanced progress (Schön & Tillmann, 2015).

Music (rhythm) training can therefore be a motivating and engaging way to train associated neural timing mechanisms (Thaut, 2005; Thaut & Hoemberg, 2014) in combination with traditional evidence-based speech and language therapeutic techniques. It can also be used to improve motor-related functioning, such as coordination in motor speech disorders and motor-focused treatment of apraxia of speech (Lee et al., 2019). Music rhythm training could additionally be effective in shaping infant rhythm processing, as newborns are sensitive to rhythm and beat in music-like material (Cirelli et al., 2016; Winkler et al., 2009), as well as to the linguistic rhythm of speech in different languages (Nazzi & Ramus, 2003; Ramus, 2002). Such results suggest that atypical rhythm processing might be an early risk indicator for the development of speech and language difficulties (Atypical Rhythm Risk Hypothesis; Ladányi, Persici et al., 2020) and that music rhythm training might be able to shape the underlying neural mechanisms involved in timing at a young age (Gerry et al., 2012).

Future Directions for Research and Training

We suggest that a fruitful research direction would be to target the three underlying mechanisms proposed in the current framework more directly, along two parallel axes: theoretical/empirical and applied. Theoretically and empirically, the link between these three underlying mechanisms and different speech/language disorders needs to be systematically investigated to clarify potential relations between the underlying mechanisms and specific speech/language impairments. Building on insights from perceptual and production behavioral tasks and on commonalities across different disorders, neuroscience methods such as electroencephalography, magnetoencephalography, and functional magnetic resonance imaging can be applied in a more targeted way to detect deficits in underlying neural mechanisms. These deficits may manifest as (1) impaired or reduced responses to fine-grained/precise auditory information (e.g., the work on rise-time perception and speech envelope encoding in individuals with dyslexia, as well as early evoked electrophysiological potentials; Chobert et al., 2012; Power et al., 2016; Van Hirtum et al., 2019), (2) reduced phase alignment and connectivity of neural oscillations to external stimuli (e.g., the work on phase locking, coherence, and entrainment in dyslexia: Hämäläinen et al., 2012; Soltész et al., 2013; and Fiveash, Schön et al., 2020 for more natural stimuli), or (3) reduced connectivity between auditory and motor regions (e.g., in stuttering: Chang et al., 2016; Hickok et al., 2011). Such already available methods and paradigms could be used to investigate the neural processing underlying timing deficits across different disorders involving a deficit in speech perception/production. Further insight could be gained by investigating whether musical training enhances the precision of the proposed mechanisms and whether such enhancement extends to speech/language processing (e.g., Assaneo et al., 2021; Doelling & Poeppel, 2015).
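To illustrate the kind of measure referred to in point (2), the sketch below computes a simple phase-locking value (PLV) between a stimulus amplitude envelope and a single EEG channel in a low-frequency band. This is a minimal, hypothetical example using NumPy and SciPy; the band limits, sampling rate, and variable names are illustrative assumptions and not the specific analysis pipelines used in the studies cited above. In a group comparison, a reduced PLV in a clinical sample relative to controls would be one possible operationalization of "reduced phase alignment" to external stimuli.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(stim_envelope, eeg, fs, band=(4.0, 8.0)):
    """Phase-locking value between a stimulus envelope and an EEG channel.

    Minimal illustration (assumed band and parameters): both signals are
    band-pass filtered, instantaneous phase is extracted with the Hilbert
    transform, and the PLV is the length of the mean resultant vector of
    the phase differences (1 = perfect alignment, ~0 = no alignment).
    """
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    phase_stim = np.angle(hilbert(filtfilt(b, a, stim_envelope)))
    phase_eeg = np.angle(hilbert(filtfilt(b, a, eeg)))
    return np.abs(np.mean(np.exp(1j * (phase_eeg - phase_stim))))

# Toy usage: "EEG" simulated as a noisy, phase-shifted copy of a 5 Hz envelope.
fs = 250                       # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)   # 10 s of signal
envelope = np.sin(2 * np.pi * 5 * t)
eeg = np.sin(2 * np.pi * 5 * t - 0.5) + 0.5 * np.random.randn(t.size)
print(f"PLV: {phase_locking_value(envelope, eeg, fs):.2f}")
```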

These research lines should aim to clarify links between impaired speech/language functioning in developmental disorders and the three underlying mechanisms proposed in the PRISM framework. A potential alternative hypothesis is that there are no observable links between the three proposed mechanisms and developmental speech and language disorders; such evidence would require revising the proposed mechanisms and the hypothesized links between music and speech rhythm. However, the evidence presented above (including first findings of timing deficits in developmental speech/language disorders) suggests that impaired timing, based on the mechanisms proposed, might be a crucial deficit in developmental speech and language disorders (Ladányi, Persici, et al., 2020).

Within the approach proposed here, it would be particularly interesting to investigate the patterns of impairment across the three mechanisms within different developmental disorders. Because of the links and interactions between the mechanisms, it is possible that all three mechanisms are impaired compared to a control group, or that only one or two mechanisms (or their combination) are impaired. One example of such a pattern could be impaired sensorimotor coupling in individuals who stutter, with intact precise auditory processing and neural entrainment to external stimuli. Another example could be the use of rhythm training or stimulation to improve temporal attention and hierarchical processing in children with DLD, with the aim of testing a potential mediating role of speech and music rhythm in the enhancement of syntactic skills. It should be noted that individual differences are expected in these investigations (related also to the large variance in language impairments exhibited within different developmental speech and language disorders), so large samples of participants for each disorder would be required to fully understand these links, as would comparisons with an appropriate control group of children with typical development. Strong methodological approaches should also be used, including appropriate tracking of treatment fidelity (i.e., tracking the implementation and administration of the rhythmic training; Wiens & Gordon, 2018). With precision medicine approaches made possible by well-powered, high-quality datasets, treatment plans combining traditional speech therapy and rhythm-based training could eventually be individualized, that is, tailored to the specific needs of the individual (Ginsburg & Phillips, 2018).

For applied research, we predict that a rhythm training program focusing explicitly on the direct training of precise auditory processing, the entrainment of neural oscillations to external stimuli, and the strengthening of sensorimotor coupling could have direct benefits for the speech/language skills that draw on these same underlying mechanisms. Such research can be developed based on the PRISM framework and would be directly informed by the work discussed above. The goal of such training programs would be to investigate, with appropriate control groups, the effect of training that targets these three mechanisms on different speech and language skills. Examples of tasks that could specifically target these underlying mechanisms include (1) discrimination of small timing differences and rise-time perception training (precise auditory processing), (2) hierarchical structure tracking at multiple levels (neural entrainment to external stimuli and structure-based predictions), and (3) rhythmic production with auditory feedback (sensorimotor coupling). Beat and meter perception and production would be particularly valuable for such training, as they can span all three mechanisms. Specific, clinically distinct speech or language impairments might be more or less sensitive to modulation via training of different mechanisms (e.g., a focus on neural entrainment for dyslexia and DLD, and on sensorimotor coupling for stuttering); this possibility should be explored in future research. However, considering the connections between the three mechanisms, targeting all mechanisms should still be a valuable approach, especially for preliminary research.
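As one concrete illustration of task type (1), the sketch below generates two otherwise identical tones that differ only in amplitude rise time, the kind of stimulus contrast used in rise-time discrimination paradigms (e.g., Goswami et al., 2002; Huss et al., 2011). It is a minimal, hypothetical NumPy example; the frequency, durations, and rise times are arbitrary illustrative values rather than those of any published training protocol.

```python
import numpy as np

def tone_with_rise_time(rise_ms, freq=440.0, dur_ms=300.0, fs=44100):
    """A sine tone whose amplitude envelope ramps up linearly over rise_ms.

    Illustrative only (assumed parameters): the two stimuli in a rise-time
    discrimination trial differ solely in how quickly the envelope reaches
    its plateau, leaving frequency, duration, and overall level unchanged.
    """
    n = int(fs * dur_ms / 1000)
    t = np.arange(n) / fs
    carrier = np.sin(2 * np.pi * freq * t)
    envelope = np.ones(n)
    n_rise = int(fs * rise_ms / 1000)
    envelope[:n_rise] = np.linspace(0.0, 1.0, n_rise)  # linear onset ramp
    n_fall = int(fs * 0.05)                            # short 50 ms offset ramp
    envelope[-n_fall:] = np.linspace(1.0, 0.0, n_fall)
    return carrier * envelope

standard = tone_with_rise_time(rise_ms=15)   # sharp onset
deviant = tone_with_rise_time(rise_ms=90)    # shallow onset
# In a discrimination trial, the listener judges which tone rises more slowly.
```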

The implementation of such potential training programs should be guided by research showing that rhythm and rhythmic skills are not a single entity but rather a constellation of sub-processes that may draw on different underlying processing mechanisms and neural correlates (Bonacina et al., 2019; Bouwer et al., 2020; Fiveash et al., in preparation; Kotz et al., 2018; Thaut et al., 2014; Tierney & Kraus, 2015). Current evidence reveals distinctions between beat-based and memory-based rhythmic tasks/expectations (Bonacina et al., 2019; Bouwer et al., 2020; Tierney & Kraus, 2015); between periodic motor pattern generation, beat extraction, entrainment, and meter perception (Kotz et al., 2018); between the neural signatures of rhythmic pattern, meter, and tempo processing (Thaut et al., 2014); and between rhythm and meter processing (Liégeois-Chauvel et al., 1998). Such distinctions should be further investigated in both typically developing individuals and those with developmental disorders, and should be considered when developing future training programs in line with the current framework. Appropriate tasks and training programs still need to be developed, but they can be guided by the PRISM framework and by further research investigating impaired underlying timing mechanisms across different developmental disorders, with the goal of building a strong evidence base for targeted music rhythm training.

Larger Context and Outlook

Finally, we propose to situate the research presented here and the PRISM framework within a larger context of putative cognitive and biological similarities between rhythm processing in music and speech. Integrating evidence across different methods and techniques will allow for a more complete understanding of rhythm processing in the typical and disordered brain. We suggest that five major evidence types should be considered for a more complete understanding of connections between music and speech rhythm (see Figure 3). Converging with (1) the neural and cognitive evidence outlined here in detail for our underlying-mechanisms approach, it is important to incorporate (2) individual differences and developmental research, which has reported strong associations between performance on tasks measuring sensitivity to musical rhythm and speech rhythm, as well as links between musical rhythm and language skills such as phonological awareness, grammar, and reading. Although the full extent of point 2 has not been presented here (see Supplementary Table 1 for an outline of selected correlational studies), a comprehensive understanding of rhythm processing in speech and language should include findings from individual differences research in populations with diverse demographic characteristics to increase the chances of generalizability (see Jones, 2010). Furthermore, the efficacy of music-based interventions could differ across individuals and with age (e.g., greater sensitivity to foreign rhythms in 12-month-old infants compared to adults; Hannon & Trehub, 2005).

Figure 3. Five different cognitive and biological evidence types that should be considered for a more comprehensive contextual understanding of music and speech rhythm.

As outlined above, evidence from (3) atypical or disordered speech and language development in children and (4) initial promising outcomes of rhythm priming and training on speech and language outcomes in these populations and in typically developing populations provides further evidence for links between music and speech rhythm. Potentially shared genetic influences (5) should also be examined in the future, given that musical rhythm skills are moderately heritable (meaning that a portion of the phenotypic variance can be attributed to genetic factors), as shown with genomic and twin methods (Niarchou et al., 2021; Ullén et al., 2014). Neural oscillatory mechanisms (measured at rest) are also known to be highly heritable (Smit et al., 2005), though the heritability of neural entrainment to rhythm in speech or music has not, to our knowledge, been studied. Similarly, while the heritability of speech rhythm traits (i.e., prosody-related tasks) has not been studied to our knowledge, correlated individual differences at the behavioral level often reflect shared underlying genetic architecture (Sodini et al., 2018), and other language-related traits are also moderately heritable (Deriziotis & Fisher, 2017). Moreover, potentially shared underlying biology and the increased prevalence of co-morbid rhythm problems in developmental speech and language disorders have led to the proposition that atypical rhythm traits partially share genes with speech and language development (see Ladányi, Persici et al., 2020 for an in-depth framework). Genetic evidence therefore appears to be an interesting avenue for future research that remains to be explored. The integration of these five sources of evidence will allow for a more complete understanding of the connections between music and speech rhythm and of how they can be exploited to develop effective tools for treatment and training, in light of neuroscience-informed, patient-centered precision medicine approaches.

Conclusion

The similarities between music and speech in relation to rhythm have spurred a large amount of research interest. Based on a synthesis of theoretical and empirical work, the present paper proposed the PRISM framework, consisting of three mechanisms underlying the processing of music and speech rhythm: precise auditory processing, synchronization/entrainment of neural oscillations to external rhythms, and sensorimotor coupling. Given the observed timing impairments across developmental speech and language disorders including dyslexia, DLD, and stuttering, we suggest that focusing on impairments to these neural mechanisms may accelerate our understanding of potentially shared timing deficits across different disorders and inform the development of training and treatment programs. Given the strong regularity of musical rhythm, the shared neural circuitry between music and speech rhythm processing, and the overlapping mechanisms involved in encoding, perception, prediction, and production of the speech signal, rhythmic training, in particular when exploiting metrical structures and other benefits of musical material, appears to be a promising avenue for future research to enhance speech and language processing in both unimpaired and impaired populations.

Supplementary Material

Supplemental Material

Key Points.

Question:

The current paper investigates whether shared mechanisms underlying rhythm processing in music and speech can be used to better understand speech and language processing in developmental disorders and to develop programs for treatment.

Findings:

We propose a new framework suggesting three common mechanisms underlying music and speech rhythm processing: precise auditory processing, synchronization/entrainment of neural oscillations to external rhythmic stimuli, and sensorimotor coupling.

Importance:

The identification of these underlying mechanisms allows for a more targeted approach to future research investigating music and speech rhythm processing in typically developing children/adults and those with developmental speech and language impairments.

Next Steps:

We outline a number of avenues for future research, including the need to incorporate multiple sources of evidence for the investigation of potential links between music and speech rhythm processing, and different approaches to apply the current framework to speech and language disorders.

Acknowledgements

We would like to thank our reviewers for constructive feedback throughout the publication process, and Anna Kasdan for advice on the figures. This research was supported by a grant from the Agence Nationale de la Recherche (ANR-16-CE28-0012-02) to BT and NB. The team Auditory Cognition and Psychoacoustics is part of the LabEx CeLyA (Centre Lyonnais d'Acoustique, ANR-10-LABX-60). Research reported in this publication is supported by the National Institutes of Health Common Fund under award DP2HD098859 through the Office of Strategic Coordination/Office of the NIH Director, and the National Institute on Deafness and Other Communication Disorders of the NIH under award R01DC016977. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.

Footnotes

1

The MMN is an evoked neural response that is classically elicited within an oddball paradigm (i.e., when a sequence of similar events is interspersed with occasional deviant events) or when an unexpected event occurs (Garrido et al., 2009).

2

The genetic implications and risk factors that could be informed by the PRISM framework are outlined in Ladányi, Persici et al. (2020).

3

See Table 1 in Ladányi, Persici, et al. (2020) for an overview of research investigating connections between rhythm and speech/language impairments.

Open Practices Statement

There are no data or materials to share for the current review paper.

References

  1. Adank P, & Janse E (2009). Perceptual learning of time-compressed and natural fast speech. The Journal of the Acoustical Society of America, 126(5), 2649–2659. 10.1121/1.3216914 [DOI] [PubMed] [Google Scholar]
  2. Allen EJ, Burton PC, Olman CA, & Oxenham AJ (2017). Representations of pitch and timbre variation in human auditory cortex. The Journal of Neuroscience, 37(5), 1284–1293. 10.1523/JNEUROSCI.2336-16.2016 [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Alm PA (2004). Stuttering and the basal ganglia circuits: A critical review of possible relations. Journal of Communication Disorders, 37(4), 325–369. 10.1016/j.jcomdis.2004.03.001 [DOI] [PubMed] [Google Scholar]
  4. Arnal LH, & Giraud A-L (2012). Cortical oscillations and sensory predictions. Trends in Cognitive Sciences, 16(7), 390–398. 10.1016/j.tics.2012.05.003 [DOI] [PubMed] [Google Scholar]
  5. Arvaniti A (2009). Rhythm, timing and the timing of rhythm. Phonetica, 66(1–2), 46–63. 10.1159/000208930 [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Assaneo MF, & Poeppel D (2018). The coupling between auditory and motor cortices is rate-restricted: Evidence for an intrinsic speech-motor rhythm. Science Advances, 4(2), eaao3842. 10.1126/sciadv.aao3842 [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Assaneo MF, Rimmele JM, Sanz Perl Y, & Poeppel D (2021). Speaking rhythmically can shape hearing. Nature Human Behaviour, 5(1), 71–82. 10.1038/s41562-020-00962-0 [DOI] [PubMed] [Google Scholar]
  8. Assaneo MF, Ripollés P, Orpella J, Lin WM, Diego-Balaguer R. de, & Poeppel D (2019). Spontaneous synchronization to speech reveals neural mechanisms facilitating language learning. Nature Neuroscience, 1. 10.1038/s41593-019-0353-z [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Barnes R, & Jones MR (2000). Expectancy, attention, and time. Cognitive Psychology, 41(3), 254–311. 10.1006/cogp.2000.0738 [DOI] [PubMed] [Google Scholar]
  10. Bedoin N, Brisseau L, Molinier P, Roch D, & Tillmann B (2016). Temporally regular musical primes facilitate subsequent syntax processing in children with specific language impairment. Frontiers in Neuroscience, 10(245). 10.3389/fnins.2016.00245 [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Beier EJ, & Ferreira F (2018). The temporal prediction of stress in speech and its relation to musical beat perception. Frontiers in Psychology, 9. 10.3389/fpsyg.2018.00431 [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Besson M, Chobert J, & Marie C (2011). Transfer of training between music and speech: Common processing, attention, and memory. Front Psychol, 2, 94. 10.3389/fpsyg.2011.00094 [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Bharucha JJ, & Pryor JH (1986). Disrupting the isochrony underlying rhythm: An asymmetry in discrimination. Perception & Psychophysics, 40(3), 137–141. 10.3758/BF03203008 [DOI] [PubMed] [Google Scholar]
  14. Bhide A, Power A, & Goswami U (2013). A rhythmic musical intervention for poor readers: A comparison of efficacy with a letter-based intervention. Mind Brain and Education, 7(2), 113–123. 10.1111/mbe.12016 [DOI] [Google Scholar]
  15. Bidelman GM, & Mankel K (2019). Reply to Schellenberg: Is there more to auditory plasticity than meets the ear? Proceedings of the National Academy of Sciences, 116(8), 2785–2786. 10.1073/pnas.1900068116 [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Bigand E, & Tillmann B (2021). Near and far transfer: Is music special? PsyArXiv. 10.31234/osf.io/gtnza [DOI] [PubMed] [Google Scholar]
  17. Bishop DVM, & Snowling MJ (2004). Developmental dyslexia and specific language impairment: Same or different? Psychological Bulletin, 130(6), 858–886. 10.1037/0033-2909.130.6.858 [DOI] [PubMed] [Google Scholar]
  18. Bishop DVM, Snowling MJ, Thompson PA, & Greenhalgh T (2017). Phase 2 of CATALISE: A multinational and multidisciplinary Delphi consensus study of problems with language development: Terminology. Journal of Child Psychology and Psychiatry, 58(10), 1068–1080. 10.1111/jcpp.12721 [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Bohn K, Knaus J, Wiese R, & Domahs U (2013). The influence of rhythmic (ir)regularities on speech processing: Evidence from an ERP study on German phrases. Neuropsychologia, 51(4), 760–771. 10.1016/j.neuropsychologia.2013.01.006 [DOI] [PubMed] [Google Scholar]
  20. Bolger D, Trost W, & Schön D (2013). Rhythm implicitly affects temporal orienting of attention across modalities. Acta Psychologica, 142(2), 238–244. 10.1016/j.actpsy.2012.11.012 [DOI] [PubMed] [Google Scholar]
  21. Bonacina S, Cancer A, Lanzi PL, Lorusso ML, & Antonietti A (2015). Improving reading skills in students with dyslexia: The efficacy of a sublexical training with rhythmic background. Frontiers in Psychology, 6. 10.3389/fpsyg.2015.01510 [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Bonacina S, Krizman J, White-Schwoch T, Nicol T, & Kraus N (2019). How rhythmic skills relate and develop in school-age children. Global Pediatric Health, 6. 10.1177/2333794X19852045 [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Bouwer FL, Honing H, & Slagter HA (2020). Beat-based and memory-based temporal expectations in rhythm: Similar perceptual effects, different underlying mechanisms. Journal of Cognitive Neuroscience, 1–24. 10.1162/jocn_a_01529 [DOI] [PubMed] [Google Scholar]
  24. Bowers AL, Saltuklaroglu T, Harkrider A, Wilson M, & Toner MA (2014). Dynamic modulation of shared sensory and motor cortical rhythms mediates speech and non-speech discrimination performance. Frontiers in Psychology, 5. 10.3389/fpsyg.2014.00366 [DOI] [PMC free article] [PubMed] [Google Scholar]
  25. Bowling DL, Herbst CT, & Fitch WT (2013). Social origins of rhythm? Synchrony and temporal regularity in human vocalization. PLoS ONE, 8(11), e80402. 10.1371/journal.pone.0080402 [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. van Bree, S., Sohoglu E, Davis MH, & Zoefel B (2021). Sustained neural rhythms reveal endogenous oscillations supporting speech perception. PLOS Biology, 19(2), e3001142. 10.1371/journal.pbio.3001142 [DOI] [PMC free article] [PubMed] [Google Scholar]
  27. Brown S, & Jordania J (2011). Universals in the world’s musics. Psychology of Music. 10.1177/0305735611425896 [DOI] [Google Scholar]
  28. Bruderer AG, Danielson DK, Kandhadai P, & Werker JF (2015). Sensorimotor influences on speech perception in infancy. Proceedings of the National Academy of Sciences, 112(44), 13531–13536. 10.1073/pnas.1508631112 [DOI] [PMC free article] [PubMed] [Google Scholar]
  29. Buzsáki G (2006). Rhythms of the Brain. In Rhythms of the Brain. Oxford University Press. https://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780195301069.001.0001/acprof-9780195301069 [Google Scholar]
  30. Buzsáki G (2019). The brain from the inside out. Oxford University Press. [Google Scholar]
  31. Buzsáki G, & Draguhn A (2004). Neuronal oscillations in cortical networks. Science, 304(5679), 1926–1929. 10.1126/science.1099745 [DOI] [PubMed] [Google Scholar]
  32. Calderone DJ, Lakatos P, Butler PD, & Castellanos FX (2014). Entrainment of neural oscillations as a modifiable substrate of attention. Trends in Cognitive Sciences, 18(6), 300–309. 10.1016/j.tics.2014.02.005 [DOI] [PMC free article] [PubMed] [Google Scholar]
  33. Canette L-H, Lalitte P, Bedoin N, Pineau M, Bigand E, & Tillmann B (2020). Rhythmic and textural musical sequences differently influence syntax and semantic processing in children. Journal of Experimental Child Psychology, 191, 104711. 10.1016/j.jecp.2019.104711 [DOI] [PubMed] [Google Scholar]
  34. Cannon JJ, & Patel AD (2021). How beat perception co-opts motor neurophysiology. Trends in Cognitive Sciences, 25(2), 137–150. 10.1016/j.tics.2020.11.002 [DOI] [PMC free article] [PubMed] [Google Scholar]
  35. Cason N, Astésano C, & Schön D (2015). Bridging music and speech rhythm: Rhythmic priming and audio–motor training affect speech perception. Acta Psychologica, 155(0), 43–50. 10.1016/j.actpsy.2014.12.002 [DOI] [PubMed] [Google Scholar]
  36. Cason N, & Schön D (2012). Rhythmic priming enhances the phonological processing of speech. Neuropsychologia, 50(11), 2652–2658. 10.1016/j.neuropsychologia.2012.07.018 [DOI] [PubMed] [Google Scholar]
  37. Chang A, Li Y-C, Chan JF, Dotov DG, Cairney J, & Trainor LJ (2021). Inferior auditory time perception in children with motor difficulties. Child Development, n/a(n/a). 10.1111/cdev.13537 [DOI] [PubMed] [Google Scholar]
  38. Chang S-E, Chow HM, Wieland EA, & McAuley JD (2016a). Relation between functional connectivity and rhythm discrimination in children who do and do not stutter. NeuroImage: Clinical, 12, 442–450. 10.1016/j.nicl.2016.08.021 [DOI] [PMC free article] [PubMed] [Google Scholar]
  39. Chang S-E, Chow HM, Wieland EA, & McAuley JD (2016b). Relation between functional connectivity and rhythm discrimination in children who do and do not stutter. NeuroImage: Clinical, 12(Supplement C), 442–450. 10.1016/j.nicl.2016.08.021 [DOI] [PMC free article] [PubMed] [Google Scholar]
  40. Chao ZC, Takaura K, Wang L, Fujii N, & Dehaene S (2018). Large-scale cortical networks for hierarchical prediction and prediction error in the primate brain. Neuron, 100(5), 1252–1266.e3. 10.1016/j.neuron.2018.10.004 [DOI] [PubMed] [Google Scholar]
  41. Chemin B, Mouraux A, & Nozaradan S (2014). Body movement selectively shapes the neural representation of musical rhythms. Psychological Science, 25(12), 2147–2159. 10.1177/0956797614551161 [DOI] [PubMed] [Google Scholar]
  42. Chen JL, Penhune VB, & Zatorre RJ (2008). Listening to musical rhythms recruits motor regions of the brain. Cerebral Cortex, 18(12), 2844–2854. 10.1093/cercor/bhn042 [DOI] [PubMed] [Google Scholar]
  43. Chern A, Tillmann B, Vaughan C, & Gordon RL (2018). New evidence of a rhythmic priming effect that enhances grammaticality judgments in children. Journal of Experimental Child Psychology, 173, 371–379. 10.1101/193961 [DOI] [PMC free article] [PubMed] [Google Scholar]
  44. Chobert J, François C, Habib M, & Besson M (2012). Deficit in the preattentive processing of syllabic duration and VOT in children with dyslexia. Neuropsychologia, 50(8), 2044–2055. 10.1016/j.neuropsychologia.2012.05.004 [DOI] [PubMed] [Google Scholar]
  45. Chobert J, François C, Velay J-L, & Besson M (2014). Twelve months of active musical training in 8- to 10-year-old children enhances the preattentive processing of syllabic duration and voice onset time. Cerebral Cortex, 24(4), 956–967. 10.1093/cercor/bhs377 [DOI] [PubMed] [Google Scholar]
  46. Cirelli LK, Spinelli C, Nozaradan S, & Trainor LJ (2016). Measuring neural entrainment to beat and meter in infants: Effects of music background. Frontiers in Neuroscience, 10. 10.3389/fnins.2016.00229 [DOI] [PMC free article] [PubMed] [Google Scholar]
  47. Clément S, Planchou C, Béland R, Motte J, & Samson S (2015). Singing abilities in children with Specific Language Impairment (SLI). Frontiers in Psychology, 6, 420. 10.3389/fpsyg.2015.00420 [DOI] [PMC free article] [PubMed] [Google Scholar]
  48. Colling LJ, Noble HL, & Goswami U (2017). Neural entrainment and sensorimotor synchronization to the beat in children with developmental dyslexia: An EEG study. Frontiers in Neuroscience, 11(JUL). Scopus. 10.3389/fnins.2017.00360 [DOI] [PMC free article] [PubMed] [Google Scholar]
  49. Conti-Ramsden G, Durkin K, Toseeb U, Botting N, & Pickles A (2018). Education and employment outcomes of young adults with a history of developmental language disorder. International Journal of Language & Communication Disorders, 53(2), 237–255. 10.1111/1460-6984.12338 [DOI] [PMC free article] [PubMed] [Google Scholar]
  50. Cooper PK (2020). It’s all in your head: A meta-analysis on the effects of music training on cognitive measures in schoolchildren. International Journal of Music Education, 38(3), 321–336. 10.1177/0255761419881495 [DOI] [Google Scholar]
  51. Corriveau KH, & Goswami U (2009). Rhythmic motor entrainment in children with speech and language impairments: Tapping to the beat. Cortex, 45(1), 119–130. 10.1016/j.cortex.2007.09.008 [DOI] [PubMed] [Google Scholar]
  52. Corriveau KH, Pasquini E, & Goswami U (2007). Basic auditory processing skills and specific language impairment: A new look at an old hypothesis. Journal of Speech, Language, and Hearing Research, 50(3), 647–666. 10.1044/1092-4388(2007/046) [DOI] [PubMed] [Google Scholar]
  53. Cross I, & Morley I (2008). The evolution of music: Theories, definitions and the nature of evidence. In Malloch S & Trevarthan C (Eds.), Communicative Musicality (pp. 61–82). Oxford University Press. [Google Scholar]
  54. Cumming R, Wilson A, & Goswami U (2015). Basic auditory processing and sensitivity to prosodic structure in children with specific language impairments: A new look at a perceptual hypothesis. Frontiers in Psychology, 6. 10.3389/fpsyg.2015.00972 [DOI] [PMC free article] [PubMed] [Google Scholar]
  55. Cumming R, Wilson A, Leong V, Colling LJ, & Goswami U (2015). Awareness of rhythm patterns in speech and music in children with specific language impairments. Frontiers in Human Neuroscience, 9. 10.3389/fnhum.2015.00672 [DOI] [PMC free article] [PubMed] [Google Scholar]
  56. Cummins F (2012). Looking for rhythm in speech. Empirical Musicology Review, 7(1–2), 28–35. 10.18061/1811/52976 [DOI] [Google Scholar]
  57. Cutini S, Szűcs D, Mead N, Huss M, & Goswami U (2016). Atypical right hemisphere response to slow temporal modulations in children with developmental dyslexia. NeuroImage, 143, 40–49. 10.1016/j.neuroimage.2016.08.012 [DOI] [PMC free article] [PubMed] [Google Scholar]
  58. Cutler A (1994). Segmentation problems, rhythmic solutions. Lingua, 92(1–4), 81–104. 10.1016/0024-3841(94)90338-7 [DOI] [Google Scholar]
  59. Cutler A, & Butterfield S (1992). Rhythmic cues to speech segmentation: Evidence from juncture misperception. Journal of Memory and Language, 31(2), 218–236. 10.1016/0749-596X(92)90012-M [DOI] [Google Scholar]
  60. Cutler A, & Foss DJ (1977). On the role of sentence stress in sentence processing. Language and Speech, 20(1), 1–10. 10.1177/002383097702000101 [DOI] [PubMed] [Google Scholar]
  61. Cutler A, & Norris D (1998). The role of strong syllables in segmentation for lexical access. Journal of Experimental Psychology: Human Perception and Performance, 14(1), 113–121. [Google Scholar]
  62. Dalla Bella S (2018). Music and movement: Towards a translational approach. Neurophysiologie Clinique, 48(6), 377–386. 10.1016/j.neucli.2018.10.067 [DOI] [PubMed] [Google Scholar]
  63. Degé F, Kubicek C, & Schwarzer G (2015). Associations between musical abilities and precursors of reading in preschool aged children. Frontiers in Psychology, 6. 10.3389/fpsyg.2015.01220 [DOI] [PMC free article] [PubMed] [Google Scholar]
  64. Degé F, & Schwarzer G (2011). The effect of a music program on phonological awareness in preschoolers. Frontiers in Psychology, 2. 10.3389/fpsyg.2011.00124 [DOI] [PMC free article] [PubMed] [Google Scholar]
  65. Deriziotis P, & Fisher SE (2017). Speech and language: Translating the genome. Trends in Genetics, 33(9), 642–656. 10.1016/j.tig.2017.07.002 [DOI] [PubMed] [Google Scholar]
  66. Di Liberto GM, Peter V, Kalashnikova M, Goswami U, Burnham D, & Lalor EC (2018). Atypical cortical entrainment to speech in the right hemisphere underpins phonemic deficits in dyslexia. NeuroImage. 10.1016/j.neuroimage.2018.03.072 [DOI] [PubMed] [Google Scholar]
  67. Ding N, Melloni L, Zhang H, Tian X, & Poeppel D (2016). Cortical tracking of hierarchical linguistic structures in connected speech. Nature Neuroscience, 19(1), 158–164. 10.1038/nn.4186 [DOI] [PMC free article] [PubMed] [Google Scholar]
  68. Ding N, Patel AD, Chen L, Butler H, Luo C, & Poeppel D (2017). Temporal modulations in speech and music. Neuroscience & Biobehavioral Reviews. 10.1016/j.neubiorev.2017.02.011 [DOI] [PubMed] [Google Scholar]
  69. Doelling KB, Assaneo MF, Bevilacqua D, Pesaran B, & Poeppel D (2019). An oscillator model better predicts cortical entrainment to music. Proceedings of the National Academy of Sciences, 201816414. 10.1073/pnas.1816414116 [DOI] [PMC free article] [PubMed] [Google Scholar]
  70. Doelling KB, & Poeppel D (2015). Cortical entrainment to music and its modulation by expertise. Proceedings of the National Academy of Sciences, 112(45), E6233–E6242. 10.1073/pnas.1508431112 [DOI] [PMC free article] [PubMed] [Google Scholar]
  71. Duffield TC, Trontel HG, Bigler ED, Froehlich A, Prigge MB, Travers B, Green RR, Cariello AN, Cooperrider J, Nielsen J, Alexander A, Anderson J, Fletcher PT, Lange N, Zielinski B, & Lainhart J (2013). Neuropsychological investigation of motor impairments in autism. Journal of Clinical and Experimental Neuropsychology, 35(8), 867–881. 10.1080/13803395.2013.827156 [DOI] [PMC free article] [PubMed] [Google Scholar]
  72. Echols CH, Crowhurst MJ, & Childers JB (1997). The perception of rhythmic units in speech by infants and adults. Journal of Memory and Language, 36(2), 202–225. 10.1006/jmla.1996.2483 [DOI] [Google Scholar]
  73. Escoffier N, Sheng DYJ, & Schirmer A (2010). Unattended musical beats enhance visual processing. Acta Psychologica, 135(1), 12–16. 10.1016/j.actpsy.2010.04.005 [DOI] [PubMed] [Google Scholar]
  74. Falk S, Lanzilotti C, & Schön D (2017). Tuning neural phase entrainment to speech. Journal of Cognitive Neuroscience, 29(8), 1378–1389. 10.1162/jocn_a_01136 [DOI] [PubMed] [Google Scholar]
  75. Falk S, Maslow E, Thum G, & Hoole P (2016). Temporal variability in sung productions of adolescents who stutter. Journal of Communication Disorders, 62, 101–114. 10.1016/j.jcomdis.2016.05.012 [DOI] [PubMed] [Google Scholar]
  76. Falk S, Müller T, & Dalla Bella S (2015). Non-verbal sensorimotor timing deficits in children and adolescents who stutter. Frontiers in Psychology, 6. 10.3389/fpsyg.2015.00847 [DOI] [PMC free article] [PubMed] [Google Scholar]
  77. Falter C, & Noreika V (2014). Time processing in developmental disorders: A comparative view. In Lloyd D & Arstila V (Eds.), Subjective Time: The Philosophy, Psychology, and Neuroscience of Temporality. MIT Press. [Google Scholar]
  78. Fiveash A, Bedoin N, Lalitte P, & Tillmann B (2020). Rhythmic priming of grammaticality judgments in children: Duration matters. Journal of Experimental Child Psychology, 197, 104885. 10.1016/j.jecp.2020.104885 [DOI] [PubMed] [Google Scholar]
  79. Fiveash A, Dalla Bella S, Gordon RL, Bigand E, & Tillmann B (in preparation). You got rhythm, or more: The multidimensionality of rhythmic abilities. [DOI] [PMC free article] [PubMed] [Google Scholar]
  80. Fiveash A, Schön D, Canette L-H, Morillon B, Bedoin N, & Tillmann B (2020). A stimulus-brain coupling analysis of regular and irregular rhythms in adults with dyslexia and controls. Brain and Cognition, 140, 105531. 10.1016/j.bandc.2020.105531 [DOI] [PubMed] [Google Scholar]
  81. Flaugnacco E, Lopez L, Terribili C, Montico M, Zoia S, & Schön D (2015). Music training increases phonological awareness and reading skills in developmental dyslexia: A randomized control trial. PLOS ONE, 10(9), e0138715. 10.1371/journal.pone.0138715 [DOI] [PMC free article] [PubMed] [Google Scholar]
  82. Flaugnacco E, Lopez L, Terribili C, Zoia S, Buda S, Tilli S, Monasta L, Montico M, Sila A, Ronfani L, & Schön D (2014). Rhythm perception and production predict reading abilities in developmental dyslexia. Frontiers in Human Neuroscience, 8(392). 10.3389/fnhum.2014.00392 [DOI] [PMC free article] [PubMed] [Google Scholar]
  83. Forgeard M, Schlaug G, Norton A, Rosam C, Iyengar U, & Winner E (2008). The relation between music and phonological processing in normal-reading children and children with dyslexia. Music Perception: An Interdisciplinary Journal, 25(4), 383–390. 10.1525/mp.2008.25.4.383 [DOI] [Google Scholar]
  84. François C, Chobert J, Besson M, & Schön D (2013). Music training for the development of speech segmentation. Cerebral Cortex, 23(9), 2038–2043. 10.1093/cercor/bhs180 [DOI] [PubMed] [Google Scholar]
  85. Frankford SA, Heller MES, Masapollo M, Cai S, Tourville JA, Nieto-Castañón Alfonso, & Guenther FH (2021). The Neural Circuitry Underlying the “Rhythm Effect” in Stuttering. Journal of Speech, Language, and Hearing Research, 64(6S), 2325–2346. 10.1044/2021_JSLHR-20-00328 [DOI] [PMC free article] [PubMed] [Google Scholar]
  86. Frey A, François C, Chobert J, Velay J-L, Habib M, & Besson M (2019). Music training positively influences the preattentive perception of voice onset time in children with dyslexia: A longitudinal study. Brain Sciences, 9(4). 10.3390/brainsci9040091 [DOI] [PMC free article] [PubMed] [Google Scholar]
  87. Friston K (2005). A theory of cortical responses. Philosophical Transactions of the Royal Society B: Biological Sciences, 360(1456), 815–836. 10.1098/rstb.2005.1622 [DOI] [PMC free article] [PubMed] [Google Scholar]
  88. Friston K (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2), 127–138. 10.1038/nrn2787 [DOI] [PubMed] [Google Scholar]
  89. Friston K (2018). Does predictive coding have a future? Nature Neuroscience, 21(8), 1019–1021. 10.1038/s41593-018-0200-7 [DOI] [PubMed] [Google Scholar]
  90. Friston K, & Kiebel S (2009). Predictive coding under the free-energy principle. Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1521), 1211–1221. 10.1098/rstb.2008.0300 [DOI] [PMC free article] [PubMed] [Google Scholar]
  91. Fujii S, & Wan CY (2014). The role of rhythm in speech and language rehabilitation: The SEP hypothesis. Frontiers in Human Neuroscience, 8. 10.3389/fnhum.2014.00777 [DOI] [PMC free article] [PubMed] [Google Scholar]
  92. Fujioka T, Trainor LJ, Large EW, & Ross B (2012). Internalized timing of isochronous sounds Is represented in neuromagnetic beta oscillations. Journal of Neuroscience, 32(5), 1791–1802. 10.1523/JNEUROSCI.4107-11.2012 [DOI] [PMC free article] [PubMed] [Google Scholar]
  93. Garrido MI, Kilner JM, Stephan KE, & Friston KJ (2009). The mismatch negativity: A review of underlying mechanisms. Clinical Neurophysiology, 120(3), 453–463. 10.1016/j.clinph.2008.11.029 [DOI] [PMC free article] [PubMed] [Google Scholar]
  94. Garrod S, & Pickering MJ (2015). The use of content and timing to predict turn transitions. Frontiers in Psychology, 6. 10.3389/fpsyg.2015.00751 [DOI] [PMC free article] [PubMed] [Google Scholar]
  95. Gerry D, Unrau A, & Trainor LJ (2012). Active music classes in infancy enhance musical, communicative and social development. Developmental Science, 15(3), 398–407. 10.1111/j.1467-7687.2012.01142.x [DOI] [PubMed] [Google Scholar]
  96. Ghitza O (2011). Linking speech perception and neurophysiology: Speech decoding guided by cascaded oscillators locked to the input rhythm. Frontiers in Psychology, 2, 130. 10.3389/fpsyg.2011.00130 [DOI] [PMC free article] [PubMed] [Google Scholar]
  97. Ghitza O (2012). On the role of theta-driven syllabic parsing in decoding speech: Intelligibility of speech with a manipulated modulation spectrum. Frontiers in Psychology, 3. 10.3389/fpsyg.2012.00238 [DOI] [PMC free article] [PubMed] [Google Scholar]
  98. Ghitza O, & Greenberg S (2009). On the possible role of brain rhythms in speech perception: Intelligibility of time-compressed speech with periodic and aperiodic insertions of silence. Phonetica, 66(1–2), 113–126. 10.1159/000208934 [DOI] [PubMed] [Google Scholar]
  99. Ginsburg GS, & Phillips KA (2018). Precision medicine: From science to value. Health Affairs, 37(5), 694–701. 10.1377/hlthaff.2017.1624 [DOI] [PMC free article] [PubMed] [Google Scholar]
  100. Giraud A, & Poeppel D (2012). Cortical oscillations and speech processing: Emerging computational principles and operations. Nature Neuroscience, 15(4), 511–517. mdc. 10.1038/nn.3063 [DOI] [PMC free article] [PubMed] [Google Scholar]
  101. Giraud A-L, & Arnal LH (2018). Hierarchical predictive information is channeled by asymmetric oscillatory activity. Neuron, 100(5), 1022–1024. 10.1016/j.neuron.2018.11.020 [DOI] [PubMed] [Google Scholar]
  102. Glanz O, Derix J, Kaur R, Schulze-Bonhage A, Auer P, Aertsen A, & Ball T (2018). Real-life speech production and perception have a shared premotor-cortical substrate. Scientific Reports, 8(1), 8898. 10.1038/s41598-018-26801-x [DOI] [PMC free article] [PubMed] [Google Scholar]
  103. Glover H, Kalinowski J, Rastatter M, & Stuart A (1996). Effect of instruction to sing on stuttering frequency at normal and fast rates. Perceptual and Motor Skills, 83(2), 511–522. [DOI] [PubMed] [Google Scholar]
  104. Gordon CL, Cobb PR, & Balasubramaniam R (2018). Recruitment of the motor system during music listening: An ALE meta-analysis of fMRI data. PLOS ONE, 13(11), e0207213. 10.1371/journal.pone.0207213 [DOI] [PMC free article] [PubMed] [Google Scholar]
  105. Gordon RL, Fehd HM, & McCandliss BD (2015). Does music training enhance literacy skills? A meta-analysis. Frontiers in Psychology, 6. 10.3389/fpsyg.2015.01777 [DOI] [PMC free article] [PubMed] [Google Scholar]
  106. Goswami U (2011). A temporal sampling framework for developmental dyslexia. Trends in Cognitive Sciences, 15(1), 3–10. 10.1016/j.tics.2010.10.001 [DOI] [PubMed] [Google Scholar]
  107. Goswami U (2012). Entraining the brain: Applications to language research and links to musical entrainment. 10.18061/1811/52980 [DOI]
  108. Goswami U (2018). A neural basis for phonological awareness? An oscillatory temporal-sampling perspective. Current Directions in Psychological Science, 27(1), 56–63. 10.1177/0963721417727520 [DOI] [Google Scholar]
  109. Goswami U (2019). A neural oscillations perspective on phonological development and phonological processing in developmental dyslexia. Language and Linguistics Compass, 13(5), e12328. 10.1111/lnc3.12328 [DOI] [Google Scholar]
  110. Goswami U, Cumming R, Chait M, Huss M, Mead N, Wilson AM, Barnes L, & Fosker T (2016). Perception of filtered speech by children with developmental dyslexia and children with specific language impairments. Frontiers in Psychology, 7. 10.3389/fpsyg.2016.00791 [DOI] [PMC free article] [PubMed] [Google Scholar]
  111. Goswami U, Gerson D, & Astruc L (2010). Amplitude envelope perception, phonology and prosodic sensitivity in children with developmental dyslexia. Reading and Writing, 23(8), 995–1019. 10.1007/s11145-009-9186-6 [DOI] [Google Scholar]
  112. Goswami U, Huss M, Mead N, Fosker T, & Verney JP (2013). Perception of patterns of musical beat distribution in phonological developmental dyslexia: Significant longitudinal relations with word reading and reading comprehension. Cortex, 49(5), 1363–1376. 10.1016/j.cortex.2012.05.005 [DOI] [PubMed] [Google Scholar]
  113. Goswami U, Thomson J, Richardson U, Stainthorp R, Hughes D, Rosen S, & Scott SK (2002). Amplitude envelope onsets and developmental dyslexia: A new hypothesis. Proceedings of the National Academy of Sciences, 99(16), 10911–10916. 10.1073/pnas.122368599 [DOI] [PMC free article] [PubMed] [Google Scholar]
  114. Gow DW, & Gordon PC (1993). Coming to terms with stress: Effects of stress location in sentence processing. Journal of Psycholinguistic Research, 22(6), 545–578. 10.1007/BF01072936 [DOI] [PubMed] [Google Scholar]
  115. Grahn JA, & Brett M (2007). Rhythm and beat perception in motor areas of the brain. Journal of Cognitive Neuroscience, 19(5), 893–906. 10.1162/jocn.2007.19.5.893 [DOI] [PubMed] [Google Scholar]
  116. Grahn JA, & Rowe JB (2013). Finding and feeling the musical beat: Striatal dissociations between detection and prediction of regularity. Cerebral Cortex, 23(4), 913–921. 10.1093/cercor/bhs083 [DOI] [PMC free article] [PubMed] [Google Scholar]
  117. Green D, Charman T, Pickles A, Chandler S, Loucas T, Simonoff E, & Baird G (2009). Impairment in movement skills of children with autistic spectrum disorders. Developmental Medicine & Child Neurology, 51(4), 311–316. 10.1111/j.1469-8749.2008.03242.x [DOI] [PubMed] [Google Scholar]
  118. Gromko JE (2005). The effect of music instruction on phonemic awareness in beginning readers. Journal of Research in Music Education, 53(3), 199–209. 10.1177/002242940505300302 [DOI] [Google Scholar]
  119. Guenther FH, & Hickok G (2015). Chapter 9—Role of the auditory system in speech production. In Aminoff MJ, Boller F, & Swaab DF (Eds.), Handbook of Clinical Neurology (Vol. 129, pp. 161–175). Elsevier. 10.1016/B978-0-444-62630-1.00009-3 [DOI] [PubMed] [Google Scholar]
  120. Guiraud H, Bedoin N, Krifi-Papoz S, Herbillon V, Caillot-Bascoul A, Gonzalez-Monge S, & Boulenger V (2018). Don’t speak too fast! Processing of fast rate speech in children with specific language impairment. PLOS ONE, 13(1), e0191808. 10.1371/journal.pone.0191808 [DOI] [PMC free article] [PubMed] [Google Scholar]
  121. Haegens S (2020). Entrainment revisited: A commentary on Meyer, Sun, and Martin (2020). Language, Cognition and Neuroscience, 0(0), 1–5. 10.1080/23273798.2020.1758335 [DOI] [PMC free article] [PubMed] [Google Scholar]
  122. Haegens S, & Zion Golumbic E (2018). Rhythmic facilitation of sensory processing: A critical review. Neuroscience & Biobehavioral Reviews, 86, 150–165. 10.1016/j.neubiorev.2017.12.002 [DOI] [PubMed] [Google Scholar]
  123. Hämäläinen JA, Rupp A, Soltész F, Szücs D, & Goswami U (2012). Reduced phase locking to slow amplitude modulation in adults with dyslexia: An MEG study. NeuroImage, 59(3), 2952–2961. 10.1016/j.neuroimage.2011.09.075 [DOI] [PubMed] [Google Scholar]
  124. Hannon EE, & Trehub SE (2005). Tuning in to musical rhythms: Infants learn more readily than adults. Proceedings of the National Academy of Sciences, 102(35), 12639–12643. 10.1073/pnas.0504254102 [DOI] [PMC free article] [PubMed] [Google Scholar]
  125. Harding EE, Sammler D, Henry MJ, Large EW, & Kotz SA (2019). Cortical tracking of rhythm in music and speech. NeuroImage, 185, 96–101. 10.1016/j.neuroimage.2018.10.037 [DOI] [PubMed] [Google Scholar]
  126. Hardy MW, & LaGasse AB (2013). Rhythm, movement, and autism: Using rhythmic rehabilitation research as a model for autism. Frontiers in Integrative Neuroscience, 7. 10.3389/fnint.2013.00019 [DOI] [PMC free article] [PubMed] [Google Scholar]
  127. Hawkins S (2014). Situational influences on rhythmicity in speech, music, and their interaction. Philosophical Transactions of the Royal Society B-Biological Sciences, 369(1658), UNSP 20130398. 10.1098/rstb.2013.0398 [DOI] [PMC free article] [PubMed] [Google Scholar]
  128. Heaton P, Tsang WF, Jakubowski K, Mullensiefen D, & Allen R (2018). Discriminating autism and language impairment and specific language impairment through acuity of musical imagery. Research in Developmental Disabilities, 80, 52–63. 10.1016/j.ridd.2018.06.001 [DOI] [PubMed] [Google Scholar]
  129. Henry MJ, & Herrmann B (2014). Low-frequency neural oscillations support dynamic attending in temporal context. Timing & Time Perception, 2(1), 62–86. 10.1163/22134468-00002011 [DOI] [Google Scholar]
  130. Hickey P, Merseal H, Patel AD, & Race E (2020). Memory in time: Neural tracking of low-frequency rhythm dynamically modulates memory formation. NeuroImage, 116693. 10.1016/j.neuroimage.2020.116693 [DOI] [PubMed] [Google Scholar]
  131. Hickok G, Farahbod H, & Saberi K (2015). The rhythm of perception: Entrainment to acoustic rhythms induces subsequent perceptual oscillation. Psychological Science, 26(7), 1006–1013. 10.1177/0956797615576533 [DOI] [PMC free article] [PubMed] [Google Scholar]
  132. Hickok G, Houde J, & Rong F (2011). Sensorimotor integration in speech processing: Computational basis and neural organization. Neuron, 69(3), 407–422. 10.1016/j.neuron.2011.01.019 [DOI] [PMC free article] [PubMed] [Google Scholar]
  133. Honing H (2018). On the biological basis of musicality. Annals of the New York Academy of Sciences, 1423(1), 51–56. 10.1111/nyas.13638 [DOI] [PubMed] [Google Scholar]
  134. Hove MJ, Gravel N, Spencer RMC, & Valera EM (2017). Finger tapping and preattentive sensorimotor timing in adults with ADHD. Experimental Brain Research, 235(12), 3663–3672. 10.1007/s00221-017-5089-y [DOI] [PMC free article] [PubMed] [Google Scholar]
  135. Hovsepyan S, Olasagasti I, & Giraud A-L (2018). Combining predictive coding with neural oscillations optimizes on-line speech processing. BioRxiv, 477588. 10.1101/477588 [DOI] [Google Scholar]
  136. Hubert-Dibon G, Bru M, Gras Le Guen C, Launay E, & Roy A (2016). Health-related quality of life for children and adolescents with specific language impairment: A cohort study by a learning disabilities reference center. PloS One, 11(11), e0166541. 10.1371/journal.pone.0166541 [DOI] [PMC free article] [PubMed] [Google Scholar]
  137. Huron D (2008). Sweet anticipation: Music and the psychology of expectation. MIT Press. [Google Scholar]
  138. Huss M, Verney JP, Fosker T, Mead N, & Goswami U (2011). Music, rhythm, rise time perception and developmental dyslexia: Perception of musical meter predicts reading and phonology. Cortex, 47(6), 674–689. 10.1016/j.cortex.2010.07.010 [DOI] [PubMed] [Google Scholar]
  139. Hyde KL, Lerch J, Norton A, Forgeard M, Winner E, Evans AC, & Schlaug G (2009). Musical training shapes structural brain development. The Journal of Neuroscience, 29(10), 3019–3025. 10.1523/JNEUROSCI.5118-08.2009 [DOI] [PMC free article] [PubMed] [Google Scholar]
  140. Isaksson S, Salomäki S, Tuominen J, Arstila V, Falter-Wagner CM, & Noreika V (2018). Is there a generalized timing impairment in Autism Spectrum Disorders across time scales and paradigms? Journal of Psychiatric Research, 99, 111–121. 10.1016/j.jpsychires.2018.01.017 [DOI] [PubMed] [Google Scholar]
  141. Janata P, Tomic ST, & Haberman JM (2012). Sensorimotor coupling in music and the psychology of the groove. Journal of Experimental Psychology: General, 141(1), 54–75. 10.1037/a0024208 [DOI] [PubMed] [Google Scholar]
  142. Jones D (2010). A WEIRD view of human nature skews psychologists’ studies. Science, 328(5986), 1627–1627. 10.1126/science.328.5986.1627 [DOI] [PubMed] [Google Scholar]
  143. Jones MR (1976). Time, our lost dimension: Toward a new theory of perception, attention, and memory. Psychological Review, 83(5), 323–355. [PubMed] [Google Scholar]
  144. Jones MR (2016). Musical time. In Hallam S, Cross I, & Thaut M, The Oxford Handbook of Music Psychology (2nd ed.). Oxford University Press. [Google Scholar]
  145. Jones MR (2018). Time will tell. Oxford University Press. [Google Scholar]
  146. Jones MR, & Boltz M (1989). Dynamic attending and responses to time. Psychological Review, 96(3), 459–491. [DOI] [PubMed] [Google Scholar]
  147. Jones MR, Boltz M, & Kidd G (1982). Controlled attending as a function of melodic and temporal context. Perception & Psychophysics, 32(3), 211–218. 10.3758/BF03206225 [DOI] [PubMed] [Google Scholar]
  148. Jones MR, Johnston HM, & Puente J (2006). Effects of auditory pattern structure on anticipatory and reactive attending. Cognitive Psychology, 53(1), 59–96. 10.1016/j.cogpsych.2006.01.003 [DOI] [PubMed] [Google Scholar]
  149. Jones MR, Moynihan H, MacKenzie N, & Puente J (2002). Temporal aspects of stimulus-driven attending in dynamic arrays. Psychological Science, 13(4), 313–319. 10.1111/1467-9280.00458 [DOI] [PubMed] [Google Scholar]