Author manuscript; available in PMC: 2023 May 1.
Published in final edited form as: Atten Percept Psychophys. 2022 Apr 18;84(4):1370–1392. doi: 10.3758/s13414-022-02487-2

You got rhythm, or more: The multidimensionality of rhythmic abilities

Anna Fiveash 1,2, Simone Dalla Bella 3,4,5,6, Emmanuel Bigand 7, Reyna L Gordon 8,9,10,11, Barbara Tillmann 1,2
PMCID: PMC9614186  NIHMSID: NIHMS1840137  PMID: 35437703

Abstract

Humans have a remarkable capacity for perceiving and producing rhythm. Rhythmic competence is often viewed as a single concept, with participants who perform more or less accurately on a single rhythm task. However, research is revealing numerous sub-processes and competencies involved in rhythm perception and production, which can be selectively impaired or enhanced. To investigate whether different patterns of performance emerge across tasks and individuals, we measured performance across a range of rhythm tasks from different test batteries. Distinct performance patterns could potentially reveal separable rhythmic competencies that may draw on distinct neural mechanisms. Participants completed nine rhythm perception and production tasks selected from the Battery for the Assessment of Auditory Sensorimotor and Timing Abilities (BAASTA), the Beat Alignment Test (BAT), the Beat-Based Advantage task (BBA), and two tasks from the Burgundy best Musical Aptitude Test (BbMAT). Principal component analyses revealed clear separation of task performance along three main dimensions: production, beat-based rhythm perception, and sequence memory-based rhythm perception. Hierarchical cluster analyses supported these results, revealing clusters of participants who performed selectively more or less accurately along different dimensions. The current results support the hypothesis of divergence of rhythmic skills. Based on these results, we provide guidelines towards a comprehensive testing of rhythm abilities, including at least three short tasks measuring: (1) rhythm production (e.g., tapping to metronome/music), (2) beat-based rhythm perception (e.g., BAT), and (3) sequence memory-based rhythm processing (e.g., BBA). Implications for underlying neural mechanisms, future research, and potential directions for rehabilitation and training programs are discussed.

Keywords: Music, Rhythm, Rhythmic skills, Rhythmic competencies, Rhythm battery

Introduction

The human proclivity for rhythm is widespread throughout the population and can be clearly seen on the dancefloor (Carlson et al., 2018), in infancy (Winkler et al., 2009), and across different cultures (Jacoby & McDermott, 2017; Polak et al., 2018). We refer to rhythm as the serially ordered pattern of time intervals in a stimulus sequence (i.e., time spans marked by event onsets). The beat can be considered as the most prominent periodicity within a musical piece, for example, where the listener is likely to want to clap their hands or move in time with the rhythm (McAuley, 2010). Meter refers to the temporal organization of beats, in which some beats are perceived as more salient than others, on multiple time scales (i.e., a perceived hierarchy of patterns of strong and weak beats) (Fitch, 2013). Extracting a beat/metrical structure from a rhythm engages several cognitive processes, including time/duration processing and more general cognitive processing, including working memory and attention. Rhythm perception is also strongly linked to movement, as just listening to rhythmic patterns activates (pre)motor areas in the brain (Grahn & Brett, 2007) and evokes an urge to move in time with the music (Vuust et al., 2014). The large majority of the population is able to move in time with an external rhythm (e.g., Repp, 2010; Sowiński & Dalla Bella, 2013) and find the beat without difficulty; however, the different patterns of rhythm impairment observed in the population can be valuable in revealing the multidimensionality of rhythmic abilities (Bégel et al., 2017; Phillips-Silver et al., 2011) and provide insight into potential differences in underlying neural mechanisms.

To assess rhythmic skills, previous research has used different types of tasks that potentially tap into separable underlying competencies. One distinction observed in the literature is between tasks that involve the perception of rhythm and tasks that involve the production of rhythm. Rhythm perception tasks refer to tasks where the listener makes a judgment on the rhythm (with no production element), and rhythm production tasks refer to tasks where the listener is asked to produce a rhythm (e.g., synchronization/paced tapping tasks, reproducing a previously heard rhythm). Rhythm production tasks often additionally require rhythm perception skills (e.g., tapping to a metronome or music), but not always (e.g., unpaced tapping). A second distinction in the literature is between tasks that involve memory for and discrimination of rhythmic sequences (during perception and/or production), and tasks that involve judgments on or synchronization to the timing, beat, or alignment of rhythms. We refer to tasks that involve a strong short-term memory component (i.e., rhythm pattern discrimination and rhythm reproduction tasks) as sequence memory-based rhythm tasks, and tasks that involve judgments about the timing, beat, or alignment of a rhythm, as well as tapping tasks, as beat-based rhythm tasks. Note that this distinction is not always clear-cut, as sequence memory-based rhythm tasks may also involve some beat-based processing and beat-based rhythm tasks may also involve some sequence memory-based processing. However, for current purposes, and to align with previous research in the field (e.g., Bonacina et al., 2019; Tierney & Kraus, 2015), we use this distinction. In the typically developing population, performance on rhythm perception and production tasks (Dalla Bella et al., 2017), and on sequence memory-based and beat-based rhythm tasks (Bonacina et al., 2019; Tierney & Kraus, 2015), is not routinely correlated, suggesting separable rhythmic competencies.

Within the larger umbrella term of rhythm ability, different patterns of performance have been observed in single-case studies, which indicate various dissociations between different rhythmic competencies. Isolated difficulties have been observed for rhythm perception (Bégel et al., 2017), rhythm synchronization (Sowiński & Dalla Bella, 2013), or both perception and synchronization (Palmer et al., 2014; Phillips-Silver et al., 2011). Synchronization with a musical beat can also be selectively impaired (i.e., beat/meter extraction), while synchronization to an isochronous stimulus and unpaced tapping remain unimpaired (Launay et al., 2014; Phillips-Silver et al., 2011). Rhythm deficits have also been observed to occur comorbidly with developmental disorders, including attention deficit hyperactivity disorder (Puyjarinet et al., 2017), dyslexia (Bégel et al., 2022; Colling et al., 2017; Overy et al., 2003), developmental language disorder (Cumming et al., 2015), as well as in patients with Parkinson’s disease (Puyjarinet et al., 2019). Such cases provide an interesting basis to suggest that different rhythm skills (a) are somewhat separable and (b) come into play during social and language development (Lense et al., 2021). They may also be linked to pathology in some cases (for review, see Fiveash et al., 2021; Ladányi et al., 2020). It is therefore important to understand the multidimensionality of rhythm skills in the general population, with the larger goal to better understand rhythm deficits and their underlying neural correlates in patient populations.

The distinctions and dissociations observed for performance on different types of rhythm tasks in the literature make a strong case for multiple rhythmic competencies (Tierney & Kraus, 2015). These competencies can be measured with different types of tasks and are likely to be supported by different underlying neural mechanisms (Bouwer et al., 2020; Leow & Grahn, 2014; Thaut et al., 2014). Based on such findings, theoretical work is beginning to re-categorize rhythmic ability as multi-faceted, with potentially distinct biological bases and evolutionary histories underlying different competencies (Bouwer et al., 2021; Greenfield et al., 2021; Kotz et al., 2018). With the aim of measuring separable rhythmic competencies within an overall picture, rather than investigating them independently (as in earlier research), we refer here primarily to two distinctions: performance measured by perception versus production tasks, and performance measured by sequence memory-based versus beat-based rhythm perception tasks.

Perception versus production

One common distinction that has emerged from previous research is between rhythm perception and production skills. Although these skills are often intertwined, and encompass various sub-processes (i.e., beat extraction, attention, working memory), evidence for dissociation suggests the need for further exploration of individual differences and underlying mechanisms. The Battery for the Assessment of Auditory Sensorimotor and Timing Abilities (BAASTA; Dalla Bella et al., 2017) and the Harvard Beat Assessment Test (H-BAT; Fujii & Schlaug, 2013) are rhythm perception and production test batteries that each use the same set of materials across tasks. These batteries have shown both correlation and lack of correlation between different measures of rhythm perception and production. For example, using BAASTA (Dalla Bella et al., 2017; Fig. 8; n = 20), correlations were found between the perceptual beat alignment test (determine whether a superimposed metronome was on or off the beat of a musical sequence) and a number of production measures: unpaced tapping, paced tapping to an isochronous sequence (but only marginally for paced tapping to music), and the synchronization-continuation task (continue tapping after the external metronome has stopped). The perceptual Anisochrony detection task (detect whether a sequence of tones was regular or not regular) was correlated with paced tapping to an isochronous sequence and paced tapping to music, but not with unpaced tapping or synchronization-continuation. In the H-BAT, Fujii and Schlaug (2013, n = 30) found only one significant correlation between equivalent perception and production measures, which was in the beat saliency test. In this test, participants were asked to detect a duple or triple meter, or to produce the same meter by tapping on a drum pad. For the beat interval test (discriminate or synchronize with increases/decreases in tempo) and the beat finding and interval test (find the underlying beat and produce or discriminate increases/decreases in tempo), there were no correlations between the perception and production measures. The differing patterns of correlation and non-correlation across these batteries suggest that rhythm processing skills are complex and may be composed of different underlying competencies. Note, though, that these correlations need to be treated with some caution, as they are based on relatively small samples.

Single-case studies have shown dissociations in participants with selective impairments in rhythm perception or production. On the one hand, Sowiński and Dalla Bella (2013) reported two participants (S1 and S5) who were markedly inaccurate when tapping to music and to amplitude-modulated noise, but performed at control level on the Anisochrony detection task and the Montreal Battery of Evaluation of Amusia (both the pitch and rhythm sub-tests; Peretz et al., 2003) and were unimpaired in unpaced tapping. Such a pattern suggests a specific synchronization impairment. To distinguish it from beat deafness, the authors labelled this new form of impairment a "pure sensorimotor coupling disorder." This distinction is in line with an earlier case study that reported a patient with brain damage following stroke who exhibited intact rhythm perception and reproduction skills, but impaired synchronization to both a metronome and marching band music (Fries & Swihart, 1990).

On the other hand, Bégel et al. (2017) showed the reverse pattern in two cases of "beat deaf" individuals (L.A. and L.C.) who showed impaired beat perception (detecting time shifts in a regular sequence; estimating if a metronome is aligned/not aligned with a musical beat) compared to controls, but were unimpaired in beat production, as measured by tapping to a metronome or to the beat of music (they also presented the case of L.V., who showed impairments in both perception and production tasks). It therefore appears that neural mechanisms serving perception and production can dissociate in participants with rhythm disorders, even though beat perception and production are commonly linked in the brain (Cannon & Patel, 2021; Chen et al., 2008). Different patterns of association and dissociation across different perception and production tasks have also been shown in the pitch domain, with evidence for both dissociations (e.g., Dalla Bella et al., 2007; Loui et al., 2008) and associations (Pfordresher & Demorest, 2021; Pfordresher & Nolan, 2019; Williamson et al., 2012).

Sequence memory-based rhythm versus beat-based rhythm

The other distinction observed in the literature is between sequence memory-based and beat-based rhythm processing tasks. Sequence memory-based tasks, which require the participant to remember or reproduce a rhythm, are suggested to draw on mechanisms that differ from those engaged by beat-based tasks, which require the participant to extract a beat or synchronize with an external rhythm. Such distinctions appear related to the different time scales and sequencing/memory demands necessary to complete the task. As outlined in Tierney and Kraus (2015), memory for rhythm largely operates over a longer, supra-second time scale, whereas synchronization to a regular pulse generally operates on a shorter, sub-second time scale (specifically in the case where participants tap at a low level of the metric hierarchy). A meta-analysis of fMRI studies showed that although sub- and supra-second processing do draw on similar areas in the brain, subcortical areas appear more strongly involved in sub-second tasks, and cortical areas appear more strongly involved in supra-second tasks (Nani et al., 2019). A specialization of the cerebellum for processing sub-second time scales and the basal ganglia for processing supra-second time scales has also been proposed (Schwartze et al., 2012). It therefore appears that sub-second and supra-second time scales may be processed differently in the brain, though it should be noted that this does not discount roles for cortical areas in sub-second timing or subcortical areas in supra-second timing (see Nani et al., 2019). Importantly, discrimination tasks requiring the comparison of two rhythmic stimuli, or the reproduction of a previously heard stimulus, additionally require the maintenance of these rhythms in short-term memory (STM), which could also account for the larger involvement of cortical areas for longer time scales (see also links with STM and rhythm reproduction in Grahn & Schuit, 2012, and Tierney & Kraus, 2015). However, it should be noted that shared processes are likely to still be involved in both types of tasks, including beat and meter processing, duration processing, attention, and working memory, though perhaps the relative contribution of these processes differs depending on the task, resulting in separable skills.

Behavioral research has shown a separation between performance on sequence memory-based rhythm tasks compared to beat-based rhythm tasks. To test the potential divergence of sequence memory-based and beat-based rhythm skills, both Tierney and Kraus (2015) and Bonacina et al. (2019) ran four separate tasks on 67 adults and 68 children (aged between 5 and 8 years), respectively. All tasks consisted of production measures, though as acknowledged by the authors, perception was also strongly involved. In Tierney and Kraus (2015), two of the tasks were suggested to implicate beat-tapping (i.e., beat-based) skills: (a) drumming to a metronome and (b) a tempo adaptation task. It was suggested that the other two tasks involved memory/sequencing (i.e., memory-based) skills: (a) drumming along to repeated rhythmic sequences and (b) reproducing previously heard rhythms. As predicted, the two beat-tapping tasks correlated with each other, and the two memory/sequencing tasks correlated with each other, but there were no correlations between the beat-tapping and memory/sequencing tasks, supporting the hypothesis that these tasks may recruit different underlying mechanisms.

Bonacina et al. (2019) asked participants to (a) drum in time with an isochronous beat, (b) drum to the beat of music, (c) remember and reproduce rhythms, and (d) clap in time with visual feedback. Consistent with the results from Tierney and Kraus (2015), drumming to an isochronous beat and remembering and reproducing rhythms did not correlate. However, clapping in time with visual feedback correlated with the other three measures, and drumming to music also correlated with the other three measures (including remembering and reproducing rhythms). These results therefore replicate a distinction between tapping to a metronome and reproducing a rhythm from memory, but suggest that there may be a connection between beat extraction in music and remembering and repeating rhythms. This connection may be related to beat and meter extraction, as the rhythms to be reproduced consisted of both strong and weak metrical sequences. It therefore appears that the connections between different rhythmic skills reflect more than a simple dichotomy between perception and production, but likely provide an insight into underlying neural mechanisms driving these separable skills (see also distinctions between beat-based and memory-based temporal expectations in Bouwer et al., 2020).

To test the hypothesis that shorter and longer rhythmic time scales relate to synchronization and sequencing skills (respectively) in the brain, Tierney et al. (2017) administered six different drumming production tasks and measured neural response consistency to a sound (i.e., the syllable "da") in 64 young adults (49 with the neural measures). The rationale behind this experiment was that there should be a link between tapping consistency and the neural response to sound. The drumming production tasks consisted of synchronization and memory/sequencing tasks, while both subcortical and cortical neural responses were measured. The subcortical measurement was the consistency of the fast frequency-following response (FFR) to the sound presented with an inter-onset-interval (IOI) of 251 ms. The cortical measurement was the consistency of the evoked response to the same sound presented with a 1,006 ms IOI. Four main findings emerged from this study: First, there were no correlations between synchronization (tapping to an isochronous metronome) and memory/sequencing tasks (repeating rhythmic sequences and drumming to repeated rhythmic sequences), in line with other studies (Bonacina et al., 2019; Tierney & Kraus, 2015). Second, a separate factor analysis across the six tasks revealed two main factors: a synchronization factor and a sequencing factor (as observed in Bonacina et al., 2019; Tierney & Kraus, 2015). Third, a measure of STM correlated with the sequencing factor, but not the synchronization factor, revealing the expected link between STM and memory/sequencing tasks. Fourth, the synchronization factor only correlated with subcortical FFR consistency (both reflecting shorter time scales), and the sequencing factor only correlated with cortical evoked consistency (both reflecting longer time scales). The authors therefore suggested that synchronization tasks rely on shorter time scales of neural processing, that memory/sequencing tasks rely on longer time scales, and that the two can be dissociated.

Although research is beginning to find distinctions between performance in perception and production tasks, as well as sequence memory-based and beat-based rhythm tasks, these distinctions have not yet been systematically investigated within one group of participants and across different types of tasks. Further, to our knowledge there has been no investigation of individual differences in relation to unique variance on different types of rhythm tasks, or whether participants can have selectively strong or weak performance depending on the rhythm task tested. The current study aims to investigate these questions, which have implications for how rhythm is processed by the brain, how rhythm skills are measured in the literature, and how different rhythm tasks may relate to other skills, such as speech/language processing (Fiveash et al., 2021; Ladányi et al., 2020).

The current study

The aim of the current study was to test, in the same participants, a set of rhythm tasks developed across different labs, in order to capture separable underlying rhythm competencies that exist above and beyond the methodological differences between tasks. From currently available rhythm tasks, we selected nine representative tasks to cover different aspects of perception, production, sequence memory-based rhythm perception, and beat-based rhythm perception, including tests of internal beat generation and of beat and meter extraction. Based on previous research, we hypothesized that (a) perception and production tasks and (b) sequence memory-based and beat-based rhythm perception tasks would map onto different latent factors within principal component analyses (PCAs). We further predicted that clusters of participants would emerge who presented with different performance patterns across the different tasks. Differences across individuals and across tasks would provide insight into the complexity and variation of rhythmic skills within the general population, as well as potential clues for separable underlying neural mechanisms. This investigation aimed to provide insights into the diversity of rhythmic competencies in the general population, which could offer perspectives for understanding potential deficits emerging in both typically developing and patient populations.

Method

Participants

Thirty-one native French-speaking adults, not selected for musical training, participated in the current experimental battery (mean age = 20 years, SD = 1.9; range = 18–26; 26 females). Participants were recruited from different universities in Lyon and through social media. Nineteen participants reported having previously played music and eight reported currently playing music. On average, participants had 3.61 years (SD = 4.24; range = 0–13, median = 2.00) of musical experience (including years of classes and years of individual playing). Seventeen participants reported attending dance classes in the past, and two currently attended dance classes. In total, participants had taken an average of 3.32 years of dance classes (SD = 4.45 years, range = 0–15 years). See Online Supplementary Material (Table 5) for all music and dance training information. Participants reported no history of dyslexia or neurological issues, and no issues with hearing or vision that precluded them from participating in the study. All participants provided written informed consent, as approved by the French ethics committee (Comité de Protection des Personnes Ile de France X, CPP). They were paid 12 euros an hour for their participation.

Sample size was determined based on a power analysis using G*Power (Faul et al., 2007). As the required sample size for principal component analysis (PCA) is highly variable and inconsistent across recommendations (Mundfrom et al., 2005), and PCA is considered a form of multivariate statistics (Chi, 2012), we ran a power analysis to ensure enough power to detect an effect within a repeated-measures MANOVA-based analysis. We set α = 0.05, power = 0.80, and specified a medium effect size (f = 0.25, as suggested by Cunningham & McCrum-Gardner, 2007). We further specified one group with eight measurements (i.e., our dependent variables) and a correlation of 0.5 between variables. The power analysis suggested that 23 participants were necessary to detect an effect, but because we were running a PCA, we tested additional participants (n = 31), aimed to reduce redundancy in the variables entered into the PCA, and limited the extracted components to three to avoid possible over-interpretation. All participants were tested before data were analyzed, to avoid optional stopping (Rouder, 2014; Simmons et al., 2011).

Tasks and apparatus

Nine rhythm tasks were selected to encompass various rhythmic competencies (see Table 1). Five tasks measured different aspects of perception: (1) the Anisochrony detection task (BAASTA; Dalla Bella et al., 2017); (2) the beat-based advantage task (BBA; Gordon et al., 2015; Niarchou et al., 2021); (3) the beat alignment test (BAT; Dalla Bella et al., 2017; original version, Iversen & Patel, 2008); (4) the Burgundy best musical aptitude test (http://leadserv.u-bourgogne.fr/~cimus/) for synchronization (BbMAT-Synch); and (5) the BbMAT metric regularity task (BbMAT-Metric). Four tasks (all from the BAASTA) measured different aspects of production using finger tapping: (1) unpaced tapping; (2) synchronization-continuation; (3) paced tapping to a metronome, and (4) paced tapping to music. See more details below in the descriptions of each task. Even though we did expect some degree of overlap across tasks, each task was chosen to assess a different aspect of rhythm-based processing, as outlined in Table 1. All production tasks were performed before the perception tasks, so the perception of errors would not have influenced tapping performance, even though paced tapping to music and BAT used the same music. The BAT and BbMAT both assess an aspect of beat alignment; however, the BAT is a well-established test that uses an external timbre to investigate beat alignment, whereas the BbMAT has not been widely tested up to now and is a more subtle measure of alignment sensitivity within a complex musical piece (with finer manipulations and no external stimulus). The combination of these tests provides a unique opportunity to extend the domain to other tasks and investigate potential overlap across tasks and competencies.

Table 1.

Summary of perception and production tasks, the dependent variables (DVs) measured, the duration of each task, and potential rhythm abilities measured by these tasks

Type | Task | DV | Duration | Rhythm ability
Perception | Anisochrony detection | Threshold estimation | ~ 3 min | Deviation detection from an isochronous beat
Perception | BBA | Sensitivity, c, accuracy | ~ 6 min | Rhythm discrimination, potential beat and meter extraction (simple and complex rhythms)
Perception | BAT | Sensitivity, c, accuracy | ~ 8 min | Beat alignment
Perception | BbMAT-Synch | Sensitivity, c, accuracy | ~ 6 min | Beat and meter extraction, alignment
Perception | BbMAT-Metric | Sensitivity, c, accuracy | ~ 5 min | Beat and meter extraction (impaired performers)
Production | Unpaced tapping | Mean ITI, motor variability | ~ 1 min | Preferred tapping rate; internal rhythm generation and consistency
Production | Synchronization-continuation | Mean ITI, motor variability | ~ 1.5 min | Internal rhythm generation and consistency
Production | Paced tapping metronome | Synchronization accuracy and consistency, motor variability | ~ 1.7 min | Synchronization
Production | Paced tapping music | Synchronization accuracy and consistency, motor variability | ~ 1.8 min | Beat extraction, metric extraction, synchronization

ITI inter-tap interval, BBA beat-based advantage task, BbMAT Burgundy best musical aptitude test, BAT beat alignment test, c response bias c

All production tasks and the Anisochrony detection task were run on a tablet version of BAASTA (as in Bégel et al., 2018; Dauvergne et al., 2018; Puyjarinet et al., 2017). This version of BAASTA affords high temporal precision (≤ 1 ms) by relying on the audio recording of the sound of the taps when they reach the touchscreen, thereby bypassing possible sources of delay and jitter typical of mobile devices (Dalla Bella & Andary, 2020). The BBA, BbMAT-Synch, and BbMAT-Metric tasks were run with Matlab (version 2018a) and PsychToolbox (version 3.0.14) through an Apple Mac Mini desktop computer. The BAT was run with Eprime (Schneider et al., 2002) on a Dell laptop.

Procedure

Participants completed all tasks in a fixed order, starting with the five tasks from BAASTA, in the order: unpaced tapping, paced tapping to a metronome, paced tapping to music, synchronization-continuation, and Anisochrony detection. Each production task was completed twice, and paced tapping to music was completed once with music 1 and once with music 2 (more details below). After BAASTA, participants completed the BBA, BbMAT-Synch, BbMAT-Metric, BAT, and the questionnaires. Participants wore headphones for all tasks and had a short break between each task. The tasks are outlined in more detail below, grouped into perception and then production tasks. The total testing time was approximately 1 h.

Perception tasks

Anisochrony detection

Participants heard a sequence of five tones (tone frequency = 1,047 Hz, duration = 150 ms) and indicated whether the sequence was regular or irregular. The IOI for regular sequences was constant (750 ms IOI). For the non-regular sequences, the fourth tone was up to 30% of the IOI earlier than expected. The difference in IOI was adapted based on participant responses to detect a change threshold using a 2 down/1 up staircase procedure. With this procedure, participants needed to detect an irregular trial twice in a row before the IOI difference was reduced in subsequent trials (see Dalla Bella et al., 2017, Exp. 2 for further explanation). Participants responded after each sequence by pressing a regular or irregular button on the BAASTA tablet. The task was performed twice, and the final threshold was the average of the two thresholds.
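
For illustration, the logic of such a 2 down/1 up adaptive procedure can be sketched in R (the language used for the analyses below); the starting shift, step size, trial count, and simulated listener are assumptions of the sketch, not the exact BAASTA parameters.

```r
# Illustrative 2 down/1 up staircase for anisochrony detection.
# Starting shift, step size, trial count, and the simulated listener are
# assumptions for this sketch, not the exact BAASTA parameters.
run_staircase <- function(true_threshold = 8, start_shift = 30,
                          step = 2, n_trials = 40) {
  shift <- start_shift        # current shift of the 4th tone, in % of the 750-ms IOI
  n_correct <- 0              # consecutive correct detections
  last_direction <- NA
  reversals <- numeric(0)

  for (t in seq_len(n_trials)) {
    # Simulated listener: detects the irregularity when the shift exceeds
    # a noisy internal threshold
    detected <- shift > rnorm(1, mean = true_threshold, sd = 2)

    if (detected) {
      n_correct <- n_correct + 1
      if (n_correct == 2) {                                  # two in a row -> harder
        if (identical(last_direction, "up")) reversals <- c(reversals, shift)
        shift <- max(shift - step, 0)
        n_correct <- 0
        last_direction <- "down"
      }
    } else {                                                 # one miss -> easier
      if (identical(last_direction, "down")) reversals <- c(reversals, shift)
      shift <- shift + step
      n_correct <- 0
      last_direction <- "up"
    }
  }
  mean(reversals)             # threshold estimate, as % of the IOI
}

set.seed(1)
run_staircase()
```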

Beat-based advantage task

The BBA consisted of 16 same-different trials derived from previous work (Grahn & Brett, 2007; Povel & Essens, 1985) and similar to Gordon et al. (2015). Sixteen items were chosen here to reflect a wide range of difficulty, from the 32-trial version of the same task conducted on 724 participants (Niarchou et al., 2021). Eight trials consisted of simple rhythms and the other eight consisted of complex rhythms (half same, half different). Simple rhythms contained a strong metrical structure and were considered easier to discriminate, whereas complex rhythms contained a weaker metrical structure with more syncopation and were considered more difficult to discriminate (based on principles of subjective accenting from Povel & Essens, 1985). For each trial, a visual schema was presented on the screen to make the structure of the trial format clear, i.e., rhythm 1, rhythm 2, rhythm 3. The first two rhythms were identical, and the third rhythm was either the same or different. Different trials consisted of the reversal of an adjacent interval. A 1,500 ms silence occurred between each rhythm. All three rhythms in each trial were presented in one of four pure tone frequencies (294, 587, 411, 470 Hz). Participants were asked to detect whether the third rhythm was the same or different. Participants responded at the end of the third rhythm by pressing one of two keys on the computer keyboard.
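
One plausible reading of "reversal of an adjacent interval" is that two adjacent inter-onset intervals swap positions between the standard and the comparison rhythm. A minimal sketch of this manipulation, under that assumption, is given below; the example interval pattern is hypothetical and not one of the actual BBA stimuli.

```r
# Minimal sketch: derive a "different" rhythm by reversing one adjacent pair
# of inter-onset intervals (IOIs). The example rhythm is hypothetical, not an
# actual BBA stimulus, and the swap interpretation is an assumption.
reverse_adjacent_interval <- function(iois, position) {
  stopifnot(position >= 1, position < length(iois))
  iois[c(position, position + 1)] <- iois[c(position + 1, position)]
  iois
}

standard <- c(250, 250, 500, 250, 750, 500)   # IOIs in ms (hypothetical)
deviant  <- reverse_adjacent_interval(standard, 3)

standard   # 250 250 500 250 750 500
deviant    # 250 250 250 500 750 500
```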

Beat alignment test

The beat alignment test used in the current experiment was an adaptation of the task created by Iversen and Patel (2008). The current implementation used four unique musical sequences (two different fragments from Bach’s Badinerie and Rossini’s William Tell Overture) with an inter-beat interval of 600 ms, taken from BAASTA. Approximately 3–4 s into each musical sequence, a high-pitched triangle timbre was introduced that was either in phase and period with the rhythm (aligned), out of phase (phase misaligned), or out of period (period misaligned). For phase misaligned trials, the triangle tones were presented at the same tempo as that of the excerpt, but shifted before or after the beat by 33% of the inter-beat-interval. In the period misaligned trials, the triangle timbre was presented at a consistent tempo that was 10% slower or faster than the beat. There were two blocks consisting of 24 trials each (four rhythms, presented twice in each condition), for a total of 48 trials. Participants were asked to detect whether the triangle timbre was aligned or not aligned with the beat and to respond as soon as they knew their answer. Participants pressed one of two buttons to indicate their response.
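
The three metronome conditions can be illustrated with a short sketch that generates triangle-onset times relative to a 600-ms beat grid; the number of onsets, the starting beat, and the particular direction of each manipulation (a late phase shift, a slower period) are illustrative assumptions rather than the exact stimulus parameters.

```r
# Sketch of the three BAT metronome conditions relative to a 600-ms beat grid.
# Onset count, start beat (~3.6 s into the excerpt), and the direction of each
# manipulation are illustrative assumptions.
make_bat_metronome <- function(condition, ibi = 600, n = 20, start_beat = 6) {
  beats <- (start_beat + 0:(n - 1)) * ibi
  switch(condition,
         aligned           = beats,
         phase_misaligned  = beats + 0.33 * ibi,                   # shifted off the beat
         period_misaligned = beats[1] + 0:(n - 1) * (ibi * 1.10))  # 10% slower here
}

head(make_bat_metronome("aligned"), 3)
head(make_bat_metronome("phase_misaligned"), 3)
head(make_bat_metronome("period_misaligned"), 3)
```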

BbMAT synchronization and metric regularity tests

Synchronization

In the synchronization test, participants were presented with sound files created for the BbMAT, featuring complex percussion and accompaniment, and were asked whether the parts were played well together (synchronized) or not played well together (not synchronized). Synchronized rhythms were designed with a polyrhythmic structure including four instrument streams in the style of Brazilian "batucada," created from MIDI virtual studio technology (VST) instrument timbres. The rhythms were arranged to create a stable beat percept in a 4/4 meter at 120 beats per minute (bpm), including some syncopations. To create the unsynchronized rhythms, all onsets of three of the four polyrhythmic streams were shifted by random values relative to the expected onsets. Depending on the polyrhythmic sequence, the random values ranged between −60 ms/+60 ms, −40 ms/+30 ms, −50 ms/+60 ms, or −50 ms/+50 ms. These ranges were chosen to render the asynchrony more easily perceptible.

There were eight synchronized and eight unsynchronized trials, as well as an example trial for each, where synchronized or not synchronized was explicitly indicated on the screen. In the current implementation, all rhythms were limited to 17 s in duration with a fade-out at the end, and were randomized for each participant, with the restrictions that (a) the synchronized and unsynchronized versions of the same rhythm were not played in succession, and (b) no more than four of the same type of stimulus were played sequentially. Once the sound file had finished, participants pressed one of two buttons on the keyboard to indicate whether the rhythm was synchronized or not synchronized.
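
A minimal sketch of the desynchronization manipulation described above is given below, assuming four onset streams on a 120-bpm grid; the stream names, onset patterns, and the single ±60-ms jitter range are hypothetical simplifications of the actual BbMAT stimuli.

```r
# Sketch: derive an unsynchronized trial from a synchronized polyrhythm by
# jittering the onsets of three of the four streams around their expected
# positions. Stream names, onset patterns, and the single ±60-ms range are
# hypothetical simplifications, not the actual BbMAT material.
desynchronize <- function(streams, max_shift = 60, keep = 1) {
  for (i in seq_along(streams)) {
    if (i == keep) next                                   # one stream stays intact
    streams[[i]] <- streams[[i]] +
      runif(length(streams[[i]]), -max_shift, max_shift)  # random shift per onset
  }
  streams
}

set.seed(2)
beat <- seq(0, 4000, by = 500)                            # 120 bpm -> 500-ms grid, in ms
streams <- list(surdo    = beat,
                caixa    = beat + 250,
                agogo    = beat + 125,
                tamborim = beat + 375)
desynced <- desynchronize(streams)
```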

Metric regularity

In the metric regularity test, participants were presented with regular and irregular musical sequences similar to those used in Canette et al. (2020) and Fiveash et al. (2020a, 2020b). Regular rhythms contained different percussion instruments and electronic sounds (i.e., bass drum, snare drum, tom-tom, and cymbal), created with MIDI VST instrument timbres. Regular rhythms were in a 4/4 meter at 120 bpm. Irregular rhythms were composed by taking the regular rhythms and rearranging the acoustic information in time so that there were no regularities in beat or meter. In the current implementation, all rhythms were shortened to 17 s with a fade-out at the end. Participants were asked to judge whether the rhythms were pulsed or not pulsed. Pulsed rhythms were defined as rhythms that made you want to tap your feet, to dance, or to move. Non-pulsed rhythms were described as somewhat disjointed, less regular, and less likely to make you want to move. After two example rhythms (where pulsed or not pulsed was indicated on the screen), six regular and six irregular rhythms were randomly presented to participants, with the same randomization restrictions as in the synchronization task. Because this was a relatively easy task, participants were able to respond as soon as they knew their answer (thereby stopping the rhythm) by pressing one of two buttons to indicate pulsed or not pulsed.

Production tasks

In all production tasks, participants tapped with the index finger of their dominant hand within a large green square on the tablet screen.

Unpaced tapping

Participants tapped at a natural rate for 60 s, as regularly as possible. This task was performed twice, separated by a short break.

Paced tapping to a metronome

Participants heard an isochronous sequence of 60 identical tones (600 ms IOI, tone frequency = 1,319 Hz) and tapped along with each tone. To ensure they were as accurate as possible, it was suggested that participants wait for the first four or five tones to be comfortable with the tapping speed before starting to tap. This task was performed twice, separated by a small break.

Paced tapping with music

Participants tapped to the beat (defined as a regular pulse in the music, where you might clap your hands or tap your foot) of two different pieces of music (the same as in the BAT above): The Badinerie (Bach), and the William Tell Overture (Rossini), referred to as music 1 and music 2, respectively. Each piece had a 600 ms inter-beat interval (IBI), with 64 beats each (~38 s). Participants were asked to keep their tapping regular, with the same interval between each tap, and to start tapping when they felt comfortable with the tapping speed. Music 1 was followed by music 2. Tapping results for the two pieces were averaged for a global music tapping measure.

Synchronization-continuation

Participants heard ten piano tones (600 ms IOI, tone frequency = 1,319 Hz, as in paced tapping to a metronome), which they tapped along with (synchronization phase). After the last tone, they continued tapping at the same pace (continuation phase). The continuation phase lasted for the equivalent of 30 IOIs from the synchronization phase. Participants stopped tapping when they heard a low-pitched tone. They were asked to tap as regularly as possible and to maintain the same interval between each tap. This task was performed twice, separated by a short break.

Questionnaires

Participants completed a series of questionnaires at the end of the experiment, in addition to a general questionnaire of musical background and training. To measure a participant’s sensitivity to music reward, we administered the Barcelona Music Reward Questionnaire (BMRQ; Mas-Herrero et al., 2013), consisting of 20 questions, four in each of the sub-scales: musical seeking, emotional evocation, mood regulation, sensory-motor, and social reward. The French translation from Saliba et al. (2016) was used and can be accessed in their Appendix S2. Normed scores were then calculated at http://brainvitge.org/z_oldsite/bmrq.php. To measure musicality/music engagement, three questions were selected from the Goldsmiths Musical Sophistication Index (Gold-MSI; Müllensiefen et al., 2014) based on their face validity for musicality/music engagement. These questions were: (1) I can sing or play music from memory (7-point scale; strongly disagree to strongly agree), (2) I have never been complimented for my talents as a musical performer (7-point scale; strongly disagree to strongly agree), and (3) At the peak of my interest, I practiced… hours per day on my primary instrument (7-point scale; 0–5 or more hours). The French translation for the Gold-MSI from Degrave and Dedonder (2019) was used and can be accessed in their supplementary material (and our OSM Table 6). Additionally, we added the question used in Niarchou et al. (2021): can you clap in time with a musical beat?, with a French translation of savez-vous taper en rythme sur la musique? We also included a final question from the adaptation of the Creative Achievement Questionnaire (Carson et al., 2005) used in Mosing et al. (2016): How engaged with music are you? Singing, playing, and even writing music counts here (7-point scale; I am not engaged in music – I am a professional musician). The French translations can be seen in OSM Table 6. Non-standardized French translations were verified with at least two native French speakers.

Data processing

Perception tasks

For the Anisochrony detection task, the mean threshold for detecting a change in the five-note sequences was expressed as a percentage of the IOI (as in Dalla Bella et al., 2017, Exp. 2). Smaller thresholds indicate better performance. For the other perceptual tasks (BBA, BAT, BbMAT-Synch, BbMAT-Metric), accuracy, sensitivity to the signal (d prime, d′), and response bias c were calculated (see Table 1). D prime is a signal detection theory measure that incorporates hits (i.e., when there was an error/difference and the participant detects the error/difference) and false alarms (i.e., when there was no error/difference, but the participant detects an error/difference) to determine participants’ discrimination sensitivity (Stanislaw & Todorov, 1999). To calculate d′, the z-score of the false alarm rate was subtracted from the z-score of the hit rate. Extreme hit and false alarm rates of 1 and 0 were corrected to 0.99 and 0.01, respectively. Higher d′ values reflect greater detection of the signal, and a value of 0 reflects that the signal cannot be distinguished from the noise. To calculate response bias c, the sum of the z-scores for hits and false alarms was multiplied by −0.50. Positive values of c suggest a bias to respond same/aligned/synchronized/pulsed, whereas negative values suggest a bias to respond different/not aligned/unsynchronized/not pulsed.
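
These computations can be summarized in a few lines of R; the hit and false-alarm counts in the example are hypothetical.

```r
# Sensitivity (d') and response bias (c) as described above, with extreme
# hit/false-alarm rates clipped to 0.99 and 0.01.
dprime_c <- function(hit_rate, fa_rate) {
  hit_rate <- pmin(pmax(hit_rate, 0.01), 0.99)
  fa_rate  <- pmin(pmax(fa_rate, 0.01), 0.99)
  data.frame(dprime = qnorm(hit_rate) - qnorm(fa_rate),
             c      = -0.5 * (qnorm(hit_rate) + qnorm(fa_rate)))
}

# Example: 7 hits out of 8 "different" trials and 2 false alarms out of
# 8 "same" trials (hypothetical counts)
dprime_c(hit_rate = 7/8, fa_rate = 2/8)
```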

Production tasks

All analyses were conducted as in Dalla Bella et al. (2017), with both linear and circular statistics. Measures obtained using circular statistics for auditory-motor synchronization tasks are particularly sensitive to individual differences, and capture a range of tapping situations (Fujii & Schlaug, 2013; Kirschner & Tomasello, 2009; Sowiński & Dalla Bella, 2013). For all tapping measures, the mean inter-tap interval (ITI) and motor variability (the mean coefficient of variation, CV; calculated by taking the SD of the ITIs divided by the mean ITI) were calculated (for the synchronization-continuation task only the continuation phase was analyzed). For circular statistics, synchronization accuracy (i.e., angle) and synchronization consistency (i.e., vector length R) were calculated (Dalla Bella et al., 2017; Dalla Bella & Sowiński, 2015; Sowiński & Dalla Bella, 2013). The inter-stimulus interval (or inter-beat interval) is presented on a polar scale (from 0 to 360°), where 0° represents the beat time. Taps are represented as angles (unitary vectors) in this circular space, depending on their occurrence relative to the beat time. The resultant vector R is calculated from this distribution of angles. The direction of the vector (or relative phase) indicates synchronization accuracy and refers to tapping time relative to the beat on average. A negative value indicates that taps on average anticipated the beat, while a positive value indicates that taps lagged after the beat. The length of this vector indicates synchronization consistency, which ranges from 0 to 1 (1 = perfect synchronization; 0 = random alignment of the taps to the beat). Note that synchronization accuracy was only calculated when performance was above chance, according to the Rayleigh test (Pewsey et al., 2013). Synchronization consistency (vector length R) was logit transformed before analysis (as in Cumming et al., 2015; Dalla Bella et al., 2017; Falk et al., 2015). For tasks repeated twice, averages were calculated. See Table 1 for an overview.
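
A minimal sketch of these circular measures is given below, using simulated taps rather than the study's data; the approximate Rayleigh p-value is a simplification of the test described in Pewsey et al. (2013).

```r
# Sketch of the circular synchronization measures described above: each tap is
# mapped to a phase angle relative to the beat; the mean resultant vector then
# gives consistency (its length R) and accuracy (its direction). Simulated data.
circular_sync <- function(tap_times, beat_times) {
  ibi <- mean(diff(beat_times))
  nearest <- pmax(findInterval(tap_times, beat_times), 1)
  phase <- 2 * pi * (tap_times - beat_times[nearest]) / ibi
  phase <- atan2(sin(phase), cos(phase))     # wrap to (-pi, pi]: negative = anticipation

  mean_vector <- mean(exp(1i * phase))
  R <- Mod(mean_vector)                      # synchronization consistency, 0-1
  n <- length(phase)
  c(accuracy_deg      = Arg(mean_vector) * 180 / pi,   # mean relative phase
    consistency_R     = R,
    consistency_logit = log(R / (1 - R)),              # logit transform used for analysis
    rayleigh_p_approx = exp(-n * R^2))                 # approximate Rayleigh test
}

# Simulated taps slightly anticipating a 600-ms beat
set.seed(4)
beats <- seq(0, by = 600, length.out = 20)
taps  <- beats + rnorm(20, mean = -25, sd = 30)
circular_sync(taps, beats)
```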

Statistical analyses

Principal components and hierarchical cluster analyses

PCAs were run in R using the package FactoMineR (Lê et al., 2008). Based on distinctions observed in the literature, we first ran a PCA with only the perception variables to investigate whether we could observe a difference between sequence memory-based rhythm perception tasks and beat-based rhythm perception tasks. We then added the production variables to these same perception variables in a second PCA to investigate whether we would observe distinct performance for perception compared to production tasks. One value for each task was used for the PCAs to reduce redundancy, and composite scores were used instead of sub-scores for the same reason. For perception, the threshold value from the Anisochrony detection task and the sensitivity d’ values from the BBA, BAT, and BbMAT-Synch were used, while BbMAT-Metric was excluded because of ceiling performance. For production, the motor variability (CV of ITI) values were used for all tasks (unpaced tapping, synchronization-continuation, paced tapping to metronome, paced tapping to music) because these values were available across all tasks and reflect tapping variability (e.g., Cameron & Grahn, 2014; Dalla Bella et al., 2017). All variables were z-score normalized as implemented within the PCA analysis of FactoMineR to compare measurements across different variables. Negatively scored values (i.e., motor variability and Anisochrony threshold) were multiplied by −1 so that higher scores on a variable always indicated better performance.

Missing values were imputed using regularized iterative PCA and replaced with an estimation of the missing data based on performance on other tasks within the same dimension (as implemented with FactoMineR using the command imputePCA). Across all perception tasks, only one data point for one participant (#18) was missing for the Anisochrony detection task. For production, one participant (#31) was missing all motor variability values because of a technical problem with the tapping recording file. For the perception + production PCA, this participant’s data were not included.
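
The PCA pipeline can be sketched as follows, using simulated data in place of the real measurements; the column names are assumptions of the sketch, and imputePCA() is taken here from missMDA, the companion package to FactoMineR.

```r
library(FactoMineR)
library(missMDA)

# Simulated stand-in for the real data: 31 participants x 8 task measures
# (column names are assumptions for the sketch, not the exact variable names)
set.seed(3)
rhythm <- as.data.frame(matrix(rnorm(31 * 8), nrow = 31))
names(rhythm) <- c("bba_d", "bat_d", "bbmat_synch_d", "anisochrony_thr",
                   "cv_unpaced", "cv_synch_cont", "cv_metronome", "cv_music")
rhythm$anisochrony_thr[18] <- NA            # one missing perception value

# Flip negatively scored measures so higher always means better
flip <- c("anisochrony_thr", "cv_unpaced", "cv_synch_cont",
          "cv_metronome", "cv_music")
rhythm[flip] <- -rhythm[flip]

# Impute missing values with regularized iterative PCA, then run the PCA on
# z-scored variables (scale.unit = TRUE), keeping three components
rhythm_complete <- imputePCA(rhythm, ncp = 3)$completeObs
res_pca <- PCA(rhythm_complete, scale.unit = TRUE, ncp = 3, graph = FALSE)

res_pca$eig        # variance explained per dimension
res_pca$var$cor    # correlations between task measures and dimensions
```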

Clustering was performed using Hierarchical Classification on Principal Components (HCPC) within FactoMineR and was based on the first three principal components extracted from each PCA. The hierarchical trees were cut based on suggested heights defined by the program. Clusters were confirmed within each task with ANOVAs when variance between groups was equal, or Welch’s F-tests if variance between groups was not equal. Clusters were then compared using independent-sample t-tests (if group variance was equal), or Welch’s two-sample t-tests (if group variance was not equal).
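
Continuing the sketch above, the clustering and confirmation steps could look as follows; the choice of task and of the clusters compared is purely illustrative.

```r
# Hierarchical clustering on the three retained principal components,
# followed by an illustrative per-task check of the resulting clusters.
res_hcpc <- HCPC(res_pca, nb.clust = -1, graph = FALSE)   # cut at the suggested height
cluster  <- res_hcpc$data.clust$clust                     # cluster label per participant

# Confirm clusters within one task (Welch's F-test when variances are unequal)
task <- rhythm_complete[, "bat_d"]
bartlett.test(task ~ cluster)                    # check homogeneity of variance
oneway.test(task ~ cluster, var.equal = FALSE)   # Welch's F-test across clusters

# Pairwise comparison of two clusters with Welch's two-sample t-test
t.test(task[cluster == "1"], task[cluster == "2"], var.equal = FALSE)
```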

Results

All descriptive statistics are reported in the OSM. Aggregate results are shown in Figs. 1 (perception) and 2 (production), with a more detailed breakdown by condition in OSM Figs. 1 and 2. Bivariate correlations between all measures included in the PCAs are reported in OSM Table 3.

Fig. 1.

Perception results. (A) Anisochrony detection threshold presented as a percentage of the inter-onset interval (IOI, 750 ms); (B) d′ sensitivity values for the beat-based advantage task (BBA), the beat-alignment test (BAT), the synchronization task in the Burgundy best Musical Aptitude Test (BbMAT-S), and the Metric task in the BbMAT (BbMAT-M); and (C) response bias c for the four perception tasks. Individual dots represent individual participant data, and the mean is represented by a black triangle. Boxplots represent the distribution of data as implemented in ggplot2 in R (R Core Team, 2018), with the black line representing the median

Fig. 2.

Production results. (A) Synchronization accuracy (i.e., angle) and rose plots for paced tapping measures represented with circular statistics. The zero point refers to when the beat occurred. Blue dots reflect individual participant responses. Negative values reflect taps before the beat, and positive values reflect taps after the beat. The rose diagram (in red) reflects the frequency of responses in each segment. Sixteen bins were specified, and the radius of each segment reflects the square root of the relative frequency in each bin. See Pewsey et al. (2013) for more details. (B) Synchronization consistency (vector length R) values where 0 = no consistency between taps and 1 = absolute consistency between taps. (C) Mean inter-tap interval (ITI) for the paced and unpaced tapping tasks. For tasks with an external rhythm, all ITIs were 600 ms. (D) Motor variability (coefficient of variation (CV) of the ITI) for paced and unpaced tapping tasks. Boxplots represent the spread of data as implemented in ggplot2 in R. The black line represents the median in each condition, and the black triangle represents the mean. The box represents the interquartile range (quartile 1–quartile 3), and individual dots represent participants who might be considered as outliers in relation to the interquartile range. Individual lines represent individual participants

Principal component analyses

Participant performance on all tasks entered in the PCA is reported in the OSM, as well as the clusters, musical and dance training, and questionnaire scores and responses.

Perception principal component analysis (PCA)

The perception PCA revealed clear dimensions of rhythmic perception skills, with the first three dimensions accounting for 84.35% of the total variance (see Table 2 for variables entered into the PCA, Fig. 3A for the PCA dimensions, and Table 3 for an outline of all dimensions and clusters). Dimension 1 accounted for 39.46% of the variance and correlated with performance on the BAT, r(29) = 0.78, p < .001, BbMAT-Synch, r(29) = 0.70, p < .001, and Anisochrony detection, r(29) = 0.64, p < .001. This dimension could be considered as beat-based or alignment perception. Dimension 2 accounted for 25.18% of the total variance and correlated with the BBA, r(29) = 0.92, p < .001, suggesting that it is related to sequence memory-based rhythm perception. Dimension 3 accounted for 19.72% of the total variance and correlated with Anisochrony detection, r(29) = 0.71, p < .001, and BbMAT-Synch, r(29) = −0.51, p < .001. The negative correlation with BbMAT-Synch was unexpected, but might be explained by a difference between the two tasks in relation to duration-based timing or complexity: Anisochrony detection is quite simple and based on durations between isochronous tones, whereas BbMAT-Synch involves complex beat extraction and hierarchical meter perception, with musical material that uses complex timbres and multiple instruments. These tasks could therefore reflect extremes along the same dimension.

Table 2.

Tasks and dependent variables included in each principal component analysis. Sensitivity refers to the d′ measurement. Note that the paced tapping to metronome, paced tapping to music, and synchronization-continuation also require perception abilities. Duration is approximate and includes instructions. For trials that were completed twice (Anisochrony detection and all tapping tasks), the duration should be doubled to include the repetition

PCA | Task | DV
Perception | BBA, BAT, BbMAT-S | Sensitivity d′
Perception | Anisochrony detection | Threshold of detection
Perception + Production | BBA, BAT, BbMAT-S | Sensitivity d′
Perception + Production | Anisochrony detection | Threshold of detection
Perception + Production | Unpaced tapping, synch-cont, paced tapping metronome, paced tapping music (average) | Motor variability

PCA principal components analysis, DV dependent variable, BBA beat-based advantage task, BAT beat alignment test, BbMAT-S Burgundy best musical aptitude test – synchronization, synch-cont synchronization-continuation, motor variability coefficient of variation of the inter-tap interval

Fig. 3.

Dimensions (1, 2, and 3) and clusters (1, 2, and 3) for (A) the perception PCA and (B) the perception + production PCA. (A) In the perception PCA, Cluster 1 reflects weak perceivers, Cluster 2 reflects strong sequence memory-based rhythm perceivers, and Cluster 3 reflects strong beat-based rhythm perceivers. (B) In the perception + production PCA, Cluster 1 reflects weak tappers, Cluster 2 reflects weak perceivers, and Cluster 3 reflects participants with strong rhythm (perception and production). Note that only Dimensions 1 and 2 are shown for the cluster graphs for clarity, but Dimension 3 for the perception + production PCA can be seen in OSM Fig. 3

Table 3.

The first three dimensions, clusters, and their interpretation for the principal component analyses

PCA | Dim 1 | Dim 2 | Dim 3 | Cluster 1 | Cluster 2 | Cluster 3
Perception (dimensions) | Beat-based perception | Sequence memory-based perception | Duration-based / complexity | Weak perceivers: 1, 2, 3, 4, 11, 16, 17, 18, 21, 24, 25, 27 | Strong sequence memory-based perceivers: 5, 7, 8, 9, 13, 14, 15, 20, 23, 29 | Strong beat-based perceivers: 6, 10, 12, 19, 22, 26, 28, 30, 31
Perception (tasks) | BAT, BbMAT-S, Anisoch | BBA | Anisoch, BbMAT-S* | BAT*, Anisoch*, BBA* | BBA | BAT, BbMAT-S, Anisoch
Perception/Production (dimensions) | Tapping precision and beat alignment | Beat-based perception | Sequence memory-based perception | Weak tappers: 10, 15, 24, 29 | Weak perceivers: 1, 2, 3, 4, 11, 16, 17, 18, 21, 25, 27 | Strong rhythm (perc/prod): 5, 6, 7, 8, 9, 12, 13, 14, 19, 20, 22, 23, 26, 28, 30
Perception/Production (tasks) | Unpaced, Synch-cont, Metro, Music avg, BAT | BAT, Anisoch, BbMAT-S | BBA | Unpaced*, Synch-cont*, Metro*, Music avg* | BAT*, Anisoch*, BBA* | Unpaced, Synch-cont, Metro, Music, BBA, BAT
* Indicates negative correlation. Participant numbers (1–31) presented to show overlap between clusters

Dim dimension, Anisoch Anisochrony detection, Synch-cont synchronization-continuation, Metro metronome, BBA beat-based advantage task, BAT beat alignment test, BbMAT-S Burgundy best musical aptitude test – synchronization, Perc perception, Prod production

For perception, the hierarchical cluster analysis revealed three clear and distinct clusters of participants (see Fig. 4 for each cluster’s performance on the tests and Fig. 3A for cluster maps). Note that performance on all tasks contributed significantly to the clusters (all ps < .008), but that only Dimensions 1 and 2 did so (both ps < .001). Cluster 1 (n = 12) contained participants who performed inaccurately along both the beat-based (Dimension 1, v = −3.60, p < .001) and the sequence memory-based (Dimension 2, v = −2.76, p = .006) rhythm dimensions. Cluster 1 performed inaccurately on the BBA (v = −3.62, p < .001), the BAT (v = −3.06, p = .002), and the Anisochrony detection (v = −2.14, p = .03) tasks, and can be considered “weak perceivers.” Cluster 2 (n = 10) contained participants who performed well on the sequence memory-based rhythm dimension (Dimension 2, v = 4.59, p < .001), with particularly high performance on the BBA (v = 4.29, p < .001), and can be considered “strong sequence memory-based rhythm perceivers.” Cluster 3 (n = 9) performed well along the beat-based dimension (Dimension 1, v = 4.20, p < .001), with high performance on the BAT (v = 3.37, p < .001), Anisochrony detection (v = 3.33, p < .001), and the BbMAT-Synch (v = 2.93, p = .003) tasks, and can be considered “strong beat-based rhythm perceivers.”

Fig. 4.

The three clusters that emerged from the perception principal component analyses (PCAs) across the four tasks. Note that Anisochrony detection threshold scores (presented as a percentage of the 750 ms inter-onset interval) were reversed in scoring for the analysis and in the figure, such that more negative scores reflect more inaccurate performance. Boxplots represent the distribution of data as implemented in ggplot2 in R, with the black line representing the median. Individual dots represent individual participant data, and the black triangle represents the mean

Clusters were confirmed with ANOVAs showing a significant main effect of cluster in each perceptual task: BBA, F(2, 28) = 31.31, p < .001, ηp2 = 0.69; BAT, F(2, 28) = 11.89, p < .001, ηp2 = 0.46; Anisochrony detection: F(2, 28) = 8.56, p = .001, ηp2 = 0.38; and BbMAT-Synch, F(2, 28) = 5.67, p = .009, ηp2 = 0.29. Independent-samples t-tests (adjusted p-values presented with p’ after Holm-Bonferroni correction) showed that strong beat-based rhythm perceivers (Cluster 3) performed significantly better than weak perceivers (Cluster 1) on the BAT, t(19) = 6.29, p’ < .001, d = 2.77, Anisochrony detection, t(19) = 4.21, p’ < .001, d = 1.86, BbMAT-Synch, t(19) = 2.59, p’ = .04, d = 1.14, and the BBA, t(19) = 2.55, p’ = .02, d = 1.12 tasks. Strong beat-based rhythm perceivers (Cluster 3) also performed significantly better than strong sequence memory-based rhythm perceivers (Cluster 2) on Anisochrony detection, t(17) = 3.65, p’ = .004, d = 1.68, BbMAT-Synch, t(17) = 3.09, p’ = .02, d = 1.42, and the BAT, t(17) = 2.64, p’ = .03, d = 1.21. However, the strong sequence memory-based rhythm perceivers (Cluster 2) were significantly better than both the weak perceivers (Cluster 1), t(20) = 9.63, p’ < .001, d = 4.12, and the strong beat-based rhythm perceivers (Cluster 3), t(19) = 4.17, p’ = .001, d = 1.92 on the BBA. There were no differences between weak perceivers (Cluster 1) and strong sequence memory-based perceivers (Cluster 2) on the BAT, the BbMAT-Synch, or Anisochrony detection, all corrected p-values > .09.

Note that of the six participants who did not score at ceiling on the BbMAT-M (not included in the PCA analyses), four were clustered as weak perceivers, and two were clustered as strong sequence memory-based perceivers, consistent with the cluster groupings.

Perception + production PCA

The perception + production PCA consisted of the same four perception tasks as the perception PCA (BBA, BAT, BbMAT-Synch, Anisochrony detection), to which we added the motor variability scores (CV of ITI) for four additional production tasks (unpaced tapping, synchronization-continuation, paced tapping to metronome, paced tapping to music; see Table 2, Fig. 3B, and Table 3). The first three dimensions accounted for 72.68% of the total variance (see Fig. 3B). Dimension 1 accounted for 41.64% of the variance and correlated with performance on paced tapping to music, r(29) = 0.91, p < .001, synchronization-continuation, r(29) = 0.87, p < .001, unpaced tapping, r(29) = 0.84, p < .001, paced tapping to a metronome, r(29) = 0.84, p < .001, and the BAT, r(29) = 0.44, p = .01. This dimension could be considered as tapping precision and beat alignment. Dimension 2 accounted for 18.43% of the total variance and correlated with performance on the BbMAT-Synch, r(29) = 0.71, p < .001, Anisochrony detection, r(29) = 0.63, p < .001, and the BAT, r(29) = 0.62, p < .001. This dimension therefore appears to reflect beat-based perception. Dimension 3 accounted for 12.60% of the total variance and correlated with performance on the BBA, r(29) = 0.88, p < .001, suggesting that it is related to sequence memory-based perception.
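Complementing the clustering sketch above, the percentages of variance and the dimension-task correlations reported in this paragraph can be obtained from a FactoMineR PCA as sketched below. The data frame all_scores and its columns are again assumed placeholders for the four perception scores and the four CV-of-ITI scores, and the sketch does not claim to reproduce the authors’ exact pipeline.

```r
# Illustrative sketch (hypothetical variable names): PCA on the combined
# perception + production scores, explained variance per dimension, and
# dimension-task correlations.
library(FactoMineR)

# 'all_scores': one row per participant; four perception scores plus the
# CV of ITI for the four tapping tasks.
res_pca <- PCA(all_scores, scale.unit = TRUE, ncp = 3, graph = FALSE)

res_pca$eig                   # eigenvalues and % of total variance per dimension
dimdesc(res_pca, axes = 1:3)  # correlations between each dimension and each task
```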

The hierarchical cluster analysis revealed three clusters of participants (see Fig. 3B for the clusters and Fig. 5 for performance on each task depending on cluster). Both production (unpaced tapping, synchronization-continuation, paced tapping to metronome, and paced tapping to music, all ps < .001) and perception (BBA and BAT, all ps < .001) tasks contributed significantly to the clusters; however, BbMAT-Synch and Anisochrony detection did not. All three dimensions contributed significantly to the clusters (all ps < .015). Cluster 1 (n = 4) contained participants who performed inaccurately along the tapping dimension (Dimension 1, v = −4.41, p < .001), and was associated with inaccurate performance on synchronization-continuation (v = −5.08, p < .001), paced tapping to metronome (v = −4.29, p < .001), paced tapping to music (v = −3.90, p < .001), and unpaced tapping (v = −3.01, p = .003). Cluster 1 can therefore be considered as the outlying “weak tappers.” Cluster 2 (n = 11) contained participants who performed inaccurately along both beat-based (Dimension 2, v = −3.82, p < .001) and sequence memory-based (Dimension 3, v = −2.77, p = .006) rhythm perception dimensions (see also Supplementary Figure 3). Cluster 2 was associated with inaccurate performance on the BBA (v = −3.69, p < .001), Anisochrony detection (v = −2.36, p = .02), and the BAT (v = −2.50, p = .01). This cluster can therefore be considered as “weak perceivers.”6 Cluster 3 (n = 15) contained participants who performed well across all dimensions and tasks. They performed accurately across the tapping dimension (Dimension 1, v = 3.46, p < .001), the sequence memory-based rhythm perception dimension (Dimension 3, v = 2.13, p = .03), and the beat-based rhythm perception dimension (Dimension 2, v = 2.47, p = .01). Cluster 3 was positively associated with performance on unpaced tapping (v = 2.76, p = .006), paced tapping to music (v = 2.52, p = .01), synchronization-continuation (v = 2.41, p = .02), paced tapping to metronome (v = 2.06, p = .04), the BBA (v = 3.82, p < .001), and the BAT (v = 3.33, p < .001), suggesting that they were strong at rhythm perception and production in general.

Fig. 5. Performance across all tasks for the three clusters observed in the perception + production principal component analysis. (A) Motor variability of all production tasks. (B) Performance on all perception tasks. Boxplots represent the distribution of data as implemented in ggplot2 in R, with the black line representing the median. Individual dots refer to individual participant data, and black triangles represent the mean

Clusters were confirmed with ANOVAs showing a significant main effect of cluster for synchronization-continuation, F(2, 27) = 118.48, p < .001, ηp² = 0.90; paced tapping to music, F(2, 6.76) = 6.43, p = .03; unpaced tapping, F(2, 6.57) = 5.29, p = .04; BBA, F(2, 27) = 16.72, p < .001, ηp² = 0.55; and BAT, F(2, 27) = 8.39, p = .001, ηp² = 0.38. Paced tapping to a metronome, F(2, 6.28) = 4.73, p = .056, and Anisochrony detection, F(2, 27) = 3.20, p = .056, showed a marginal difference between clusters, and there was no difference between clusters for BbMAT-Synch, F(2, 27) = 1.55, p = .23. Independent-samples t-tests (adjusted p-values presented with p’ after Holm-Bonferroni correction) showed significantly worse performance on the synchronization-continuation task for the weak tappers (Cluster 1) compared to both the weak perceivers (Cluster 2), t(13) = 11.10, p’ < .001, d = 6.48, and the strong rhythm cluster (Cluster 3), t(17) = 18.51, p’ < .001, d = 10.41. When controlling for multiple comparisons, there were no significant differences between clusters for unpaced tapping (all corrected values, p’ > .13), paced tapping to music (all corrected values, p’ > .12), or paced tapping to metronome (all corrected values, p’ = .23). However, better performance for the strong rhythm cluster (Cluster 3) on the BBA and BAT was confirmed, with this cluster performing better than the weak perceivers (Cluster 2) on the BBA, t(24) = 6.15, p’ < .001, d = 2.44, and the BAT, t(24) = 3.74, p’ = .003, d = 1.49. The strong rhythm cluster (Cluster 3) also performed significantly better than the weak tappers (Cluster 1) on the BAT, t(17) = 2.52, p’ = .04, d = 1.42. No other comparisons were significant after correction for multiple comparisons (all corrected values, p’ > .09). It should be noted that there was a strong overlap between this combined PCA and the perception PCA for participants identified as weak versus strong perceivers (see Table 3). The emerging clusters of strong and weak perceivers can be considered quite robust, given that they remained with the addition of the tapping data.

Note that of the six participants who did not score at ceiling on the BbMAT-M (not included in the PCA analyses), three were clustered as weak perceivers, one was clustered as a weak tapper, and two were clustered with the strong rhythm category, most likely driven by their high sequence memory-based perception, as revealed in the perception PCA.

Links with questionnaire data

Based on the clusters identified in the hierarchical cluster analysis for the perception and production PCA, we identified two groups of participants: weak performers (the weak tappers and weak perceivers, n = 15) and strong performers (the strong rhythm group, i.e., the strong tappers and strong perceivers, n = 15).7 It is particularly interesting to see whether these groupings align with subjective self-report measures. For example, the standardized scores from the BMRQ allowed us to observe whether participants were considered to have a strong (scores above 60) or weak (scores below 40) sensitivity to music (Mas-Herrero et al., 2013; Saliba et al., 2016). Three of our participants scored as having a weak sensitivity to music (two from the weak performer group and one from the strong performer group), and eight of our participants scored as having a strong sensitivity to music (seven from the strong performer group and one from the weak performer group).

A logistic regression8 was run to investigate whether the sub-scales of the BMRQ and participants’ music and dance training could predict whether they were classified as a weak or strong performer. None of the predictor variables was significant; however, the mood regulation sub-scale, χ²(1, N = 30) = 3.74, p = .053, and years of dance training, χ²(1, N = 30) = 3.35, p = .067, approached significance. For the mood regulation sub-scale, strong performers (M = 51.07, SD = 8.32) tended to score higher than weak performers (M = 41.73, SD = 14.58), suggesting that strong performers were more likely to use music to regulate their mood. For dance training, strong performers (M = 2.53 years, SD = 4.02, range = 0–13) tended to have had fewer years of dance training than weak performers (M = 4.33 years, SD = 4.86, range = 0–15). This marginal effect may be driven by some extreme values, as the two participants who had more than 10 years of dance training were both clustered as weak perceivers (see OSM Table 5).
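Footnote 8 notes that this analysis was run in R with the car package (Fox & Weisberg, 2011), using Type III Wald chi-squared tests. A minimal sketch of such a model is given below; the data frame quest, the outcome coding, and the predictor names (placeholders for the BMRQ sub-scales and the training variables) are assumptions for illustration rather than the original variable names.

```r
# Sketch of a binomial logistic regression with Type III Wald chi-squared tests,
# as described in Footnote 8 (variable names are hypothetical placeholders).
library(car)  # Fox & Weisberg (2011)

# 'quest': one row per participant; strong_performer coded 0 (weak) / 1 (strong),
# placeholder columns for the BMRQ sub-scale scores, and years of music and
# dance training.
model <- glm(strong_performer ~ mood_regulation + music_seeking + emotion_evocation +
               sensorimotor + social_reward + years_music + years_dance,
             data = quest, family = binomial)

Anova(model, type = "III", test.statistic = "Wald")  # chi-squared test per predictor
```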

We also asked participants whether they could clap in time with a musical beat9 (as in Niarchou et al., 2021). Of the 11 participants who said they were “not sure,” seven were in the weak performing group (two within the weak tapping cluster), and four were in the strong performing group. All other participants indicated that they could tap in time to a rhythm, including two who were identified as weak tappers in the perception and production hierarchical cluster analysis. These results indicate that self-report data may not be entirely reliable for identifying participants who perform inaccurately across different tasks.

Finally, because our distribution of self-reported musical training was approximately bimodal, we checked whether there was a difference in performance across the different tasks depending on whether participants reported that they had never engaged in musical training or practice (n = 12), or whether they had engaged in musical training or practice (perception tasks: n = 19, range = 1–13 years, M = 5.89, SD = 3.97, median = 6.0; production tasks: n = 18, range = 1–13 years, M = 5.89, SD = 4.09, median = 5.0). Wilcoxon rank-sum tests were run because of the unequal group sizes. For perception tasks, performance on the BAT was significantly better for the music training group (mean d′ = 3.15, SD = 0.87) compared to the non-music training group (mean d′ = 2.23, SD = 1.11), W = 53, p = .014, r = .45. There was no difference depending on music training for the BbMAT-Synch, W = 95, p = .45, the BBA, W = 102, p = .64, or the Anisochrony detection, W = 86, p = .44 tasks. For production tasks, paced tapping to music (average) was significantly less variable for the music training group (mean CV of ITI = 0.11, SD = 0.14) compared to the non-music training group (mean CV of ITI = 0.19, SD = 0.17), W = 160, p = .03, r = .40. Paced tapping to the metronome and synchronization-continuation showed a marginally significant difference between the groups, both in the direction of lower variability for participants with musical training compared to participants without musical training, both W = 154, p = .05, r = .36. There was no significant difference between groups for unpaced tapping, W = 130, p = .37. Note that Spearman correlations confirmed this pattern of results, with significant correlations between years of music training and the BAT, r(29) = .42, p = .02, and paced tapping to music, r(28) = −.44, p = .01, as well as significant correlations for paced tapping to metronome, r(28) = −.38, p = .04, and synchronization-continuation, r(28) = −.46, p = .01.
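For the group comparisons and correlations just described, a minimal sketch is given below. The data frame dat and its columns (bat_dprime, tap_music_cv, years_music) are hypothetical names, and the sketch illustrates the general procedure (Wilcoxon rank-sum tests and Spearman correlations) rather than the original analysis script.

```r
# Sketch of the music-training comparisons (hypothetical variable names).
# 'dat': one row per participant, with the BAT d' score (bat_dprime), the CV of
# ITI for paced tapping to music (tap_music_cv), and years of music training
# (years_music, 0 for no training).
dat$trained <- factor(dat$years_music > 0, labels = c("no_training", "training"))

# Wilcoxon rank-sum tests comparing trained and untrained groups
wilcox.test(bat_dprime ~ trained, data = dat)    # perception: BAT
wilcox.test(tap_music_cv ~ trained, data = dat)  # production: tapping to music

# Spearman rank correlations with years of music training
cor.test(dat$years_music, dat$bat_dprime, method = "spearman")
cor.test(dat$years_music, dat$tap_music_cv, method = "spearman")
```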

Discussion

The goal of the current study was to investigate whether different types of rhythmic tasks would diverge into distinct patterns of performance across tasks and within individuals. Nine tasks originating from different laboratories were selected to cover a variety of potentially distinct rhythmic processing skills. Principal component analyses (PCAs) showed a clear distinction between (1) perception compared to production tasks, and (2) beat-based compared to sequence memory-based rhythm perception tasks. Further, hierarchical cluster analyses revealed participants with selectively strong or weak performance on different types of tasks, suggesting distinct rhythmic competencies across individuals in the broader population from which our participants were sampled. These results are discussed in relation to distinct rhythmic competencies, implications for underlying neural mechanisms, and suggestions for future research.

Distinct rhythmic competencies measured by different rhythm tasks

Across both the perception and perception + production PCAs, three primary dimensions emerged, corresponding to: (1) tapping precision and beat alignment, (2) beat-based rhythm perception, and (3) sequence memory-based rhythm perception. These dimensions aligned with our hypotheses of a separation between perception and production tasks (Bégel et al., 2017; Dalla Bella et al., 2017; Sowiński & Dalla Bella, 2013), as well as between sequence memory-based and beat-based rhythm perception tasks (Bonacina et al., 2019; Tierney & Kraus, 2015). Our findings also support and extend those of Tierney and Kraus (2015) and Bonacina et al. (2019) by showing a distinction between sequence memory-based and beat-based processing in the perception domain, rather than only in the production domain as previously shown. The observed separation in performance across different tasks is particularly interesting, as it provides behavioral evidence for distinct rhythmic competencies. Such evidence could provide some insight into potential differences in underlying neural architecture, or into differences in task sensitivity that tap into separable aspects of rhythmic abilities.

It has proven difficult to isolate different rhythmic competencies or sub-components in the brain, as rhythm processing activates a wide range of neural areas, and overlapping cognitive processes are implicated across different tasks (Grahn & McAuley, 2009; Schubotz et al., 2000). Considering that rhythm production tasks that involve synchronization to or reproduction of an external auditory rhythm necessarily involve perception (Leow & Grahn, 2014), and that rhythm perception activates motor areas in the brain (Bengtsson et al., 2009; Chen et al., 2008; Fujioka et al., 2012; Grahn & Brett, 2007; Stephan et al., 2018), it is challenging to separate potentially distinct processes in typically developing individuals. Further, rhythm perception is suggested to be aided by predictions from the motor system, and a tight link between perception and production has been postulated (Cannon & Patel, 2021; Morillon & Baillet, 2017; Patel & Iversen, 2014). Indeed, perception and production appear to be tightly linked in the brain, with strong sensorimotor coupling involved in rhythm processing (Zatorre et al., 2007). Nevertheless, rhythm perception and production can dissociate in individuals with rhythm disorders: Accurate beat perception does not appear to be required for accurate synchronization (Bégel et al., 2017), and accurate synchronization does not appear to be required for accurate beat perception (Sowiński & Dalla Bella, 2013). This separation may be possible based on implicit processing of temporal information, allowing for intact production with impaired perception (Bégel et al., 2017). Although the underlying neural networks are difficult to distinguish, the current results suggest that different rhythm tasks may tap into different rhythmic competencies in the general population.

Distinct performance clusters support separable rhythmic competencies

The hierarchical cluster analysis revealed clusters of participants who presented with different profiles of rhythm performance that were remarkably consistent between the perception and the perception + production PCAs. When perception and production variables were combined, a broad distinction was observed between participants who generally performed accurately across all tasks, and those who were selectively inaccurate either in production tasks (across all tapping tasks) or in perception tasks (across both sequence memory-based and beat-based tasks). This distinction suggests that participants can be selectively impaired at production or perception of rhythm in general, supporting previous research (Bégel et al., 2017; Sowiński & Dalla Bella, 2013).

However, we found that when participants performed accurately in rhythm tasks, they generally performed accurately across both perception and production tasks. An fMRI study using a perceptual tempo judgment task showed that strong beat perceivers had greater activation in motor areas (supplementary motor area, the left premotor cortex, and the left insula) than did weak beat perceivers, who showed stronger activation in largely non-motor areas (left posterior superior and middle temporal gyri, but also the right premotor cortex) (Grahn & McAuley, 2009). The authors suggest that strong beat perceivers are more likely to use implicit beat perception when performing rhythm tasks, whereas weak beat perceivers may use more explicit strategies (i.e., interval duration judgments). The ability to use implicit beat processing mechanisms may therefore result in improved performance across both perception and production tasks. Our results provide behavioral support for this suggestion, as participants with high performance tended to perform well across both rhythm production and rhythm perception tasks. However, as also shown in single case studies, our results suggest that production or perception competencies can be selectively impaired, reflected by participant clusters that were selectively weak at tapping or perceiving. Such evidence suggests that patterns of dissociation may be more common than previously thought in the general population. These findings could help to explain why different patterns of correlations between various rhythm tasks are found across different studies (see examples in Tierney & Kraus, 2015), as rhythmic competencies may be both related and unrelated, depending on the participant and their specific constellation of rhythmic competencies and beat processing strategies.

When only perception tasks were included in the PCA, clusters of participants emerged who performed inaccurately on the perception tasks in general (as seen also in the combined PCA), selectively well for the sequence memory-based rhythm task (BBA), or selectively well for the beat-based rhythm tasks (BAT, BbMAT-Synch, Anisochrony detection). Combined with the perception and production results, it appears that weak perceivers perform inaccurately across all perceptual tasks (i.e., a general lack of perception skills that affects both short-term sequence memory- and beat-based rhythm perception), but that strong perceivers can show selective enhancements to either sequence memory- or beat-based skills. These selective enhancements could be based on different cognitive skills necessary to perform well in each type of task. Both beat-based and sequence memory-based tasks require some level of beat-based processing. However, the sequence memory task requires the additional contribution of short-term memory, sequencing, and supra-second judgments (Tierney & Kraus, 2015), which could compensate for deficits in beat-based skills or boost these skills. Therefore, participants could draw on stronger sequence learning and memory-based skills to perform selectively well in the BBA (see the contribution of short-term memory to rhythm reproduction in Grahn & Schuit, 2012), and draw on stronger beat-based skills to perform selectively well in the beat-based tasks. Our results suggest that the necessary skills to perform perceptual tasks somewhat overlap, but that selective abilities can enhance performance in sequence memory-based or beat-based tasks.

Further evidence for distinct performance patterns across individuals can be observed when the clusters that emerge from the PCAs are examined. Only one of the identified weak tappers was also identified as a weak perceiver in the perception PCA (note that this participant also had low music reward sensitivity on the BMRQ, and appeared to have large, general impairments in rhythm and/or music processing). Two other weak tappers were identified as strong sequence memory-based rhythm perceivers, suggesting that they were able to use other cognitive skills for rhythm perception, and that accurate tapping was not necessary for accurate perception. One weak tapper was identified as a strong beat-based rhythm perceiver. This pattern further suggests that tapping can be impaired while perception is spared, as has been observed in the general population (Sowiński & Dalla Bella, 2013) and in children with cerebellar lesions (Provasi et al., 2014). Sowiński and Dalla Bella (2013) suggest that a distinction between synchronization and beat-based perception within the general population could be related to a disruption in auditory-motor mapping. However, Tranchant and Vuvan (2015) pointed out that the perceptual tests used in Sowiński and Dalla Bella (2013) (the Anisochrony detection task and the MBEA rhythm test) could be performed without using beat-based processing (i.e., by comparing durations between intervals). The current study supports the conclusions of Sowiński and Dalla Bella (2013) by adding clearly beat-based perception tasks (the BAT, BbMAT-Synch) alongside the Anisochrony detection task, and by identifying a single participant who showed weak synchronization but intact, strong beat-based perceptual processing. Such results suggest a large variety of rhythmic competency patterns within the population, and that individual differences are important to consider in future research, in both healthy and pathological populations.

Production tasks

Although a distinction between the different tapping tasks was predicted, our PCAs did not reveal a separation between these tasks. For example, distinctions have previously been shown between tapping to music (requiring beat extraction) and tapping to a metronome. Phillips-Silver et al. (2011) reported the case study of a participant who was impaired in moving to the beat of music but had no problem moving to a metronome. The authors suggest that this impairment may be linked to poor perception, as the participant was also unable to determine whether a dancer was dancing on or off the beat. In addition, participants typically tap early to metronomes (i.e., the negative mean asynchrony) and on time to music (Repp, 2005), a pattern also observed in the current study, though this would not have been captured in the PCA, which used the motor variability measure. Distinctions have also been shown between paced and unpaced tapping tasks, as unpaced tapping and the continuation phase of the synchronization-continuation task are suggested to rely on an internal timekeeper, whereas paced tapping tasks rely more strongly on synchronization ability (see Repp & Su, 2013, for a review). In the production PCA provided in OSM Table 4, unpaced tapping and tapping to music emerged as two dimensions separate from the other tapping tasks; however, the primary dimension was correlated with all tasks, suggesting a common motor variability dimension. Combined, these findings suggest that tapping variability is consistent across different types of tapping tasks, though it should be noted that tapping to music revealed more variation between participants than tapping to a metronome, and so may be a more sensitive measure for revealing impairments or proficiency.

It is also possible that the observed similarity across tapping tasks is related to using motor variability (CV of ITI) as our common dependent variable, rather than a measure of synchronization consistency, auditory-motor coupling, or phase locking. In other words, motor variability may be stable within individuals regardless of whether the rhythm was externally or internally generated. We used the motor variability measure to have a consistent measure across paced and unpaced tapping tasks; however, it should be noted that motor variability (CV of ITI) and synchronization consistency (vector length R, logit transformed) were highly correlated,10 suggesting a close relation between the two measures. Future research could consider whether tapping variability or synchronization consistency with an external stimulus is more reflective of synchronization ability.
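The two dependent measures discussed here can be sketched as follows, assuming a vector of tap onset times (in ms) and a known stimulus inter-onset interval. The function names, the example data, and the use of the circular package (Pewsey et al., 2013, cited in the reference list) are illustrative assumptions rather than the BAASTA implementation.

```r
# Sketch of the two tapping measures discussed above (inputs are hypothetical;
# this is not the BAASTA analysis code).
library(circular)  # Pewsey, Neuhäuser, & Ruxton (2013)

# Motor variability: coefficient of variation of the inter-tap intervals (CV of ITI)
cv_iti <- function(taps) {
  iti <- diff(taps)        # inter-tap intervals in ms
  sd(iti) / mean(iti)
}

# Synchronization consistency: length R of the mean resultant vector of the taps'
# relative phases, logit-transformed; assumes metronome onsets at 0, ioi, 2*ioi, ...
sync_consistency <- function(taps, ioi) {
  phases <- circular((taps %% ioi) / ioi * 2 * pi)  # relative phase in radians
  R <- rho.circular(phases)                         # mean resultant vector length
  log(R / (1 - R))                                  # logit transform
}

# Example with made-up tap times (ms) and a 750-ms inter-onset interval
taps <- c(760, 1505, 2262, 3010, 3745, 4502, 5260)
cv_iti(taps)
sync_consistency(taps, ioi = 750)
```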

Suggestions for future research and clinical applications

The current results show that rhythm performance is more complex and multi-faceted than commonly thought, and that individuals perform differently across different types of rhythm tasks. We suggest that future research investigating rhythmic abilities should not be limited to a single task but should instead draw on different tasks capable of assessing the different dimensions/processes involved. Based on the current results, we suggest including a rhythm synchronization task (i.e., tapping to a metronome or music), a beat-based rhythm perception task (such as the BAT), and a sequence memory-based rhythm perception task (such as the BBA). These tasks can be run in a relatively short amount of time.11 Even when researchers are interested in only one component of rhythm (e.g., beat-based rhythm perception), it might still be useful to run these different tests to tease apart hypotheses or to control for mechanisms involved in timing tasks that are not the focus of the given project (e.g., controlling for rhythm sequence discrimination processes when the goal is to measure beat-based rhythm processing).

Many studies use only one rhythm test to assess general “rhythm” skills within the population of interest. Although this approach may capture the rhythm skills of participants who are proficient across multiple rhythmic competencies, it might miss nuances for participants who have selective impairments in one (or more) rhythmic competencies, or who draw on non-rhythmic skills (e.g., sequence memory) to perform the task. In numerous studies, rhythmic skills are assessed only by a same-different rhythm discrimination task. It should be noted that sequence memory-based tests are unlikely to capture rhythmic competencies related to beat-based perception (although if meter or beat is manipulated, as in the BBA, then the task may also be sensitive to beat-based perception) or rhythm production, and therefore capture only a small component of rhythmic ability. If it is only possible to run one rhythm task in a given study, or if tapping tasks are not available (though see Anglada-Tort et al., 2022, for online and accessible options), researchers might consider using the BAT (Dalla Bella et al., 2017; Iversen & Patel, 2008), as it contributed to both the tapping precision/beat alignment dimension and the beat-based perception dimension in the current study, and is correlated with multiple tapping measures in the BAASTA (Dalla Bella et al., 2017). The BAT therefore appears to be the most sensitive perceptual test for capturing variance related to both synchronization and tapping ability when it is not possible to run production tasks (see also the computerized adaptation in Harrison & Müllensiefen, 2018).

Future research could use the currently observed distinctions between tasks to investigate related neural correlates for participants and pathologies with behaviorally different patterns of competencies. Now that we have observed a distinction between performance on sequence memory-based rhythm perception and beat-based rhythm perception tasks, it would be interesting to compare performance on a sequence memory-based production task and a sequence memory-based perception task to investigate whether the performance on the two tasks is grouped more strongly along a production dimension or a sequence memory dimension. However, as the current study only included one measure for sequence memory-based rhythm, it will be important to investigate this skill with other types of tasks (e.g., short-term memory tasks with different metrical structures, rhythm reproduction, long-term memory for sequences, etc.) and in relation to short-term memory of other materials (or as assessed with classic digit span tasks) as well.

To further investigate potential distinctions between tasks and related underlying neural mechanisms, future research could consider testing an even more extensive battery of rhythm tasks on a larger and more representative sample of participants to extend and validate the current findings. With current technological advances in online testing and the collection of accurate tapping data via the internet (Anglada-Tort et al., 2022), it would be possible to conduct multiple rhythm tasks, access a larger sample of participants from different backgrounds (i.e., cultural, linguistic, and musical training), and also include measures of general (non-musical) cognition (e.g., non-verbal IQ, executive function, and working memory). Such studies can be guided by the current findings to ensure that the three components observed here are represented. It would be particularly valuable to investigate whether the distinctions observed in the current sample (largely young female university students) generalize to other populations. As a next step, rhythm tasks could be included that additionally measure absolute (i.e., duration-based) versus relative (i.e., interval-based) timing (Breska & Ivry, 2018; Grube et al., 2010), along with measures of pitch processing. The link with pitch processing is particularly interesting, as participants with amusia have been shown to perform poorly on rhythm tasks when pitch is alternated, but not when it is kept stable (Foxton et al., 2006).

It would also be interesting to systematically measure and recruit participants with a broad range of music and dance training. In the current dataset, we observed participants who were clustered as strong perceivers and producers with no music or dance training, and participants who were clustered as weak perceivers and producers who had music and/or dance training (similar to the lack of correlation found between strong/weak beat perceivers and musical training in Grahn & McAuley, 2009). Further, music training was only related to performance on the BAT and some measures of motor variability (paced tapping to metronome/music, synchronization-continuation). Such observations are in line with work suggesting a distinction between musical aptitude/competence and musical training (e.g., Swaminathan & Schellenberg, 2018), and suggest that music training may not always result in enhanced performance on music-related tasks. It is also possible that participants with music or dance training but with low musical aptitude may draw on different skillsets to practice music, including sequence memory-based rhythm perception. For example, one participant with 8 years of dance training was clustered as a weak tapper, but was also clustered as a strong beat-based perceiver. Two other weak tappers were also clustered as strong sequence memory-based rhythm perceivers. It therefore appears that different dimensions of rhythmic abilities may potentially allow for the compensation of specific areas of weakness/impairment. It is also possible that the transformation of perception into action constitutes an additional complexity for some participants, resulting in accurate perception but impaired production.

The study of separable rhythmic competencies is particularly important when measuring rhythmic skills of patients with potential rhythm impairments. Underlying timing deficits have been suggested to accompany a number of developmental disorders (Fiveash et al., 2021; Ladányi et al., 2020), including dyslexia (Goswami, 2011), developmental language disorder (Colling et al., 2017), developmental coordination disorder (Chang et al., 2021), stuttering (Falk et al., 2015), autism spectrum disorder (ASD; Isaksson et al., 2018), and attention deficit hyperactivity disorder (Puyjarinet et al., 2017). Testing the same participants (with different pathological backgrounds) on the three dimensions of timing tasks presented here would allow for a greater understanding of the related timing impairments, with the acknowledgement that there are likely to be individual differences within these populations as well. One option that could be explored as a screening measure to reduce initial testing time could be a rhythm reproduction task, as used in Tierney and Kraus (2015) and Bonacina et al. (2019). Rhythm reproduction tasks theoretically combine production, sequence memory-based rhythm perception, and perhaps beat-based rhythm perception, but appear to tap into a different underlying competency than direct tapping tasks (Tierney et al., 2017; Tierney & Kraus, 2015). A rhythm reproduction task might therefore be a quick and simple way to screen participants for potential rhythm disorders, followed by more comprehensive testing to disentangle reasons for impaired performance (e.g., high motor variability, weak sequence memory-based rhythm processing, weak beat-based rhythm processing). Once potential timing impairments within disorders and/or individuals are more completely understood, it would be possible to directly train these rhythmic competencies, which could also provide benefits to related non-musical deficits. This hypothesis is outlined in the processing rhythm in speech and music (PRISM) framework, which suggests that training precise auditory processing, entrainment of neural oscillations to external stimuli, and sensorimotor coupling could enhance speech processing across different developmental speech and language disorders (Fiveash et al., 2021).

Finally, all tasks employed in the current study were rhythm-based, as rhythm cognition and distinct rhythmic competencies were our primary focus. It is therefore possible that the distinction between perception and production reflects a more general distinction between motor skills and auditory perception (i.e., related to motor control, general cognitive processing, etc.) rather than a rhythm-specific effect. Although it was outside the scope of the current work, future research could keep this consideration in mind, especially when testing participants who might have motor impairments. Non-rhythmic perception and production tasks, such as a motor control task (e.g., the peg-moving task) or an auditory skill task (e.g., pitch discrimination), could be included to control for this possibility and used as covariates in the analyses. As it stands, our results relate specifically to the rhythmic time dimension and the distinct competencies observed within rhythm cognition. It would be interesting to investigate whether some of the observed distinctions (i.e., between perception and production) also apply to other, non-rhythmic materials and modalities, which could open this research to other domains as well.

Conclusion

The results from the current study and previous work suggest the importance of exploring diverse rhythmic competencies and individual differences when aiming to understand the complex domain of rhythm. Such explorations are also critical for a better understanding of specific neural mechanisms impaired within pathologies that show co-morbid rhythm impairments (Dalla Bella, 2020; Fujii & Wan, 2014; Ladányi et al., 2020), with perspectives for training and rehabilitation.

Supplementary Material

ESM1
ESM2

Acknowledgements

We would like to thank Lucien Deusch for testing participants and data entry, Laure-Helene Canette for data entry and organization, and Nathalie Bedoin for use of the Eprime script for the BAT presentation. This research was supported by grants from Agence Nationale de la Recherche (ANR-16-CE28-0012-02) to BT; the National Institute on Deafness and Other Communication Disorders of the National Institutes of Health (NIH) under award number R01DC016977 awarded to RLG, and by the NIH Common Fund under award DP2HD098859, through the Office of Strategic Coordination/Office of the NIH Director, awarded to RLG. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. The team Auditory Cognition and Psychoacoustics is part of the LabEx CeLyA (Centre Lyonnais d'Acoustique, ANR-10-LABX-60).

Footnotes

Supplementary Information The online version contains supplementary material available at https://doi.org/10.3758/s13414-022-02487-2.

1. Note that the beat can also be referred to as the tactus or pulse.

2. Considering that we have several tests that have not been previously combined, we chose a generic effect size suggested in the literature rather than basing the effect size on previous research, which often does not compare all tests together.

3. Note that for French speakers, “taper en rythme” refers to the action of clapping in time with a musical beat; it does not correspond to the literal translation “tap in rhythm.”

4. All results are ordered from highest to lowest test result.

5. We use the terms “strong” and “weak” to describe participants who perform more or less accurately across the various measures. These terms were chosen to be in line with previous research (e.g., Grahn & McAuley, 2009; Leow et al., 2014), and to not imply any clinical cut-offs.

6. Note that the weak perceivers did not perform particularly strongly or weakly on the tapping measures, as can be seen in Fig. 3B, with performance around the mid-point of the tapping precision dimension (Dimension 1).

7. The participant with no tapping data was not included in this analysis.

8. The logistic regression was run in R (R Core Team, 2018), and the car package (Fox & Weisberg, 2011) was used to test the significance of individual effects (using Type III Wald chi-squared tests).

9. The French translation was “savez-vous taper en rythme sur la musique,” as described earlier.

10. For metronome tapping, r(29) = −.86, p < .001; for music (average), r(29) = −.85, p < .001 (Spearman correlations).

11. For example, using the current implementations, it would take approximately 16 min to run a paced tapping task (~2 min), the BBA (~6 min), and the BAT (~8 min).

References

  1. Anglada-Tort M, Harrison PMC, & Jacoby N (2022). REPP: A robust cross-platform solution for online sensorimotor synchronization experiments. Behavior Research Methods. 10.3758/s13428-021-01722-2 [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Bégel V, Benoit C-E, Correa A, Cutanda D, Kotz SA, & Dalla Bella S (2017). “Lost in time” but still moving to the beat. Neuropsychologia, 94, 129–138. 10.1016/j.neuropsychologia.2016.11.022 [DOI] [PubMed] [Google Scholar]
  3. Bégel V, Verga L, Benoit C-E, Kotz SA, & Dalla Bella S (2018). Test-retest reliability of the Battery for the Assessment of Auditory Sensorimotor and Timing Abilities (BAASTA). Annals of Physical and Rehabilitation Medicine, 61(6), 395–400. 10.1016/j.rehab.2018.04.001 [DOI] [PubMed] [Google Scholar]
  4. Bégel V, Dalla Bella S, Devignes Q, Vandenbergue M, Lemaître M-P, & Dellacherie D (2022). Rhythm as an independent determinant of developmental dyslexia. Developmental Psychology, 58(2), 339–358. 10.1037/dev0001293 [DOI] [PubMed] [Google Scholar]
  5. Bengtsson SL, Ullén F, Henrik Ehrsson H, Hashimoto T, Kito T, Naito E, Forssberg H, & Sadato N (2009). Listening to rhythms activates motor and premotor cortices. Cortex, 45(1), 62–71. 10.1016/j.cortex.2008.07.002 [DOI] [PubMed] [Google Scholar]
  6. Bonacina S, Krizman J, White-Schwoch T, Nicol T, & Kraus N (2019). How rhythmic skills relate and develop in school-age children. Global. Pediatric Health, 6(1–7). 10.1177/2333794X19852045 [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Bouwer FL, Honing H, & Slagter HA (2020). Beat-based and memory-based temporal expectations in rhythm: Similar perceptual effects, different underlying mechanisms. Journal of Cognitive Neuroscience, 32(7), 1–24. 10.1162/jocn_a_01529 [DOI] [PubMed] [Google Scholar]
  8. Bouwer F, Nityananda V, Rouse AA, & Cate C ten. (2021). Rhythmic abilities in humans and non-human animals: A review and recommendations from a methodological perspective. PsyArXiv 10.31234/osf.io/pu9yh [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Breska A, & Ivry RB (2018). Double dissociation of single-interval and rhythmic temporal prediction in cerebellar degeneration and Parkinson’s disease. Proceedings of the National Academy of Sciences, 115(48), 12283–12288. 10.1073/pnas.1810596115 [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Cameron DJ, & Grahn JA (2014). Enhanced timing abilities in percussionists generalize to rhythms without a musical beat. Frontiers in Human Neuroscience, 8. 10.3389/fnhum.2014.01003 [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Canette L-H, Fiveash A, Krzonowski J, Corneyllie A, Lalitte P, Thompson D, Trainor L, Bedoin N, & Tillmann B (2020). Regular rhythmic primes boost P600 in grammatical error processing in dyslexic adults and matched controls. Neuropsychologia, 138, 107324. 10.1016/j.neuropsychologia.2019.107324 [DOI] [PubMed] [Google Scholar]
  12. Cannon JJ, & Patel AD (2021). How beat perception co-opts motor neurophysiology. Trends in Cognitive Sciences, 25(2), 137–150. 10.1016/j.tics.2020.11.002 [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Carlson E, Burger B, & Toiviainen P (2018). Dance like someone is watching: A social relations model study of music-induced movement. Music & Science, 1, 2059204318807846. 10.1177/2059204318807846 [DOI] [Google Scholar]
  14. Carson SH, Peterson JB, & Higgins DM (2005). Reliability, validity, and factor structure of the creative achievement questionnaire. Creativity Research Journal, 17(1), 37–50. 10.1207/s15326934crj1701_4 [DOI] [Google Scholar]
  15. Chang A, Li Y-C, Chan JF, Dotov DG, Cairney J, & Trainor LJ (2021). Inferior auditory time perception in children with motor difficulties. Child Development, n/a(n/a). 10.1111/cdev.13537 [DOI] [PubMed] [Google Scholar]
  16. Chen JL, Penhune VB, & Zatorre RJ (2008). Listening to musical rhythms recruits motor regions of the brain. Cerebral Cortex, 18(12), 2844–2854. 10.1093/cercor/bhn042 [DOI] [PubMed] [Google Scholar]
  17. Chi Y-Y (2012). Multivariate methods. WIREs. Computational Statistics, 4(1), 35–47. 10.1002/wics.185 [DOI] [Google Scholar]
  18. Colling LJ, Noble HL, & Goswami U (2017). Neural entrainment and sensorimotor synchronization to the beat in children with developmental dyslexia: An EEG study. Frontiers in Neuroscience, 11(JUL). 10.3389/fnins.2017.00360 [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Cumming R, Wilson A, Leong V, Colling LJ, & Goswami U (2015). Awareness of rhythm patterns in speech and music in children with specific language impairments. Frontiers in Human Neuroscience, 9. 10.3389/fnhum.2015.00672 [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Cunningham JB, & McCrum-Gardner DE (2007). Power, effect and sample size using GPower: Practical issues for researchers and members of research ethics committees. 5. [Google Scholar]
  21. Dalla Bella S (2020). Chapter 15—The use of rhythm in rehabilitation for patients with movement disorders. In Cuddy LL, Belleville S, & Moussard A (Eds.), Music and the Aging Brain (pp. 383–406). Academic Press. 10.1016/B978-0-12-817422-7.00015-8 [DOI] [Google Scholar]
  22. Dalla Bella S, & Andary S (2020). High-precision temporal measurement of vibro-acoustic events in synchronisation with a sound signal on a touch-screen device (Organisation Mondiale de la Propriété Intellectuelle Patent No. International Patent No WO 2020/128088 A1).
  23. Dalla Bella S, Farrugia N, Benoit C-E, Begel V, Verga L, Harding E, & Kotz SA (2017). Baasta: Battery for the assessment of auditory sensorimotor and timing abilities. Behavior Research Methods, 49(3), 1128–1145. 10.3758/s13428-016-0773-6 [DOI] [PubMed] [Google Scholar]
  24. Dalla Bella S, Giguère J-F, & Peretz I (2007). Singing proficiency in the general population. The Journal of the Acoustical Society of America, 121(2), 1182–1189. 10.1121/1.2427111 [DOI] [PubMed] [Google Scholar]
  25. Dalla Bella S, & Sowiński J (2015). Uncovering beat deafness: Detecting rhythm disorders with synchronized finger tapping and perceptual timing tasks. Journal of Visualized Experiments : JoVE, 97. 10.3791/51761 [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. Dauvergne C, Bégel V, Gény C, Puyjarinet F, Laffont I, & Dalla Bella S (2018). Home-based training of rhythmic skills with a serious game in Parkinson’s disease: Usability and acceptability. Annals of Physical and Rehabilitation Medicine, 61(6), 380–385. 10.1016/j.rehab.2018.08.002 [DOI] [PubMed] [Google Scholar]
  27. Degrave P, & Dedonder J (2019). A French translation of the Goldsmiths Musical Sophistication Index, an instrument to assess self-reported musical skills, abilities and behaviours. Journal of New Music Research, 48(2), 138–144. 10.1080/09298215.2018.1499779 [DOI] [Google Scholar]
  28. Falk S, Müller T, & Dalla Bella S (2015). Non-verbal sensorimotor timing deficits in children and adolescents who stutter. Frontiers in Psychology, 6. 10.3389/fpsyg.2015.00847 [DOI] [PMC free article] [PubMed] [Google Scholar]
  29. Faul F, Erdfelder E, Lang A, & Buchner A (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191. [DOI] [PubMed] [Google Scholar]
  30. Fitch WT (2013). Rhythmic cognition in humans and animals: Distinguishing meter and pulse perception. Frontiers in Systems Neuroscience, 7, 68. PMC. 10.3389/fnsys.2013.00068 [DOI] [PMC free article] [PubMed] [Google Scholar]
  31. Fiveash A, Bedoin N, Gordon RL, & Tillmann B (2021). Processing rhythm in speech and music: Shared mechanisms and implications for developmental speech and language disorders. Neuropsychology, 35(8). https://pubmed.ncbi.nlm.nih.gov/34435803/ [DOI] [PMC free article] [PubMed] [Google Scholar]
  32. Fiveash A, Bedoin N, Lalitte P, & Tillmann B (2020a). Rhythmic priming of grammaticality judgments in children: Duration matters. Journal of Experimental Child Psychology, 197, 104885. 10.1016/j.jecp.2020.104885 [DOI] [PubMed] [Google Scholar]
  33. Fiveash A, Schön D, Canette L-H, Morillon B, Bedoin N, & Tillmann B (2020b). A stimulus-brain coupling analysis of regular and irregular rhythms in adults with dyslexia and controls. Brain and Cognition, 140, 105531. 10.1016/j.bandc.2020.105531 [DOI] [PubMed] [Google Scholar]
  34. Fox J, & Weisberg S (2011). An {R} Companion to Applied Regression (Second). SAGE Publications. http://socserv.socsci.mcmaster.ca/jfox/Books/Companion [Google Scholar]
  35. Foxton JM, Nandy RK, & Griffiths TD (2006). Rhythm deficits in ‘tone deafness’. Brain and Cognition, 62(1), 24–29. 10.1016/j.bandc.2006.03.005 [DOI] [PubMed] [Google Scholar]
  36. Fries W, & Swihart AA (1990). Disturbance of rhythm sense following right hemisphere damage. Neuropsychologia, 28(12), 1317–1323. 10.1016/0028-3932(90)90047-R [DOI] [PubMed] [Google Scholar]
  37. Fujii S, & Schlaug G (2013). The harvard beat assessment test (H-BAT): A battery for assessing beat perception and production and their dissociation. Frontiers in Human Neuroscience, 7. 10.3389/fnhum.2013.00771 [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. Fujii S, & Wan CY (2014). The role of rhythm in speech and language rehabilitation: The SEP hypothesis. Frontiers in Human Neuroscience, 8. 10.3389/fhhum.2014.00777 [DOI] [PMC free article] [PubMed] [Google Scholar]
  39. Fujioka T, Trainor LJ, Large EW, & Ross B (2012). Internalized timing of isochronous sounds Is represented in neuromagnetic beta oscillations. Journal of Neuroscience, 32(5), 1791–1802. 10.1523/JNEUROSCI.4107-11.2012 [DOI] [PMC free article] [PubMed] [Google Scholar]
  40. Gordon RL, Shivers CM, Wieland EA, Kotz SA, Yoder PJ, & Devin McAuley J (2015). Musical rhythm discrimination explains individual differences in grammar skills in children. Developmental Science, 18(4), 635–644. 10.1111/desc.12230 [DOI] [PubMed] [Google Scholar]
  41. Goswami U (2011). A temporal sampling framework for developmental dyslexia. Trends in Cognitive Sciences, 15(1), 3–10. 10.1016/j.tics.2010.10.001 [DOI] [PubMed] [Google Scholar]
  42. Grahn JA, & Brett M (2007). Rhythm and beat perception in motor areas of the brain. Journal of Cognitive Neuroscience, 19(5), 893–906. 10.1162/jocn.2007.19.5.893 [DOI] [PubMed] [Google Scholar]
  43. Grahn JA, & McAuley JD (2009). Neural bases of individual differences in beat perception. Neuroimage, 47(4), 1894–1903. 10.1016/j.neuroimage.2009.04.039 [DOI] [PubMed] [Google Scholar]
  44. Grahn JA, & Schuit D (2012). Individual differences in rhythmic ability: Behavioral and neuroimaging investigations. Psychomusicology: Music, Mind, and Brain, 22(2), 105–121. 10.1037/a0031188 [DOI] [Google Scholar]
  45. Greenfield MD, Honing H, Kotz SA, & Ravignani A (2021). Synchrony and rhythm interaction: From the brain to behavioural ecology. Philosophical Transactions of the Royal Society B: Biological Sciences, 376(1835), 20200324. 10.1098/rstb.2020.0324 [DOI] [PMC free article] [PubMed] [Google Scholar]
  46. Grube M, Cooper FE, Chinnery PF, & Griffiths TD (2010). Dissociation of duration-based and beat-based auditory timing in cerebellar degeneration. Proceedings of the National Academy of Sciences, 107(25), 11597–11601. 10.1073/pnas.0910473107 [DOI] [PMC free article] [PubMed] [Google Scholar]
  47. Harrison PMC, & Müllensiefen D (2018). Development and validation of the computerised adaptive beat alignment test (CA-BAT). Scientific Reports, 8, 12395. 10.1038/s41598-018-30318-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  48. Isaksson S, Salomäki S, Tuominen J, Arstila V, Falter-Wagner CM, & Noreika V (2018). Is there a generalized timing impairment in Autism Spectrum Disorders across time scales and paradigms? Journal of Psychiatric Research, 99, 111–121. 10.1016/j.jpsychires.2018.01.017 [DOI] [PubMed] [Google Scholar]
  49. Iversen JR, & Patel AD (2008). The Beat Alignment Test (BAT): Surveying beat processing abilities in the general population. In Miyazaki K, Adachi M, Nakajima Y, & Tsuzaki M (Eds.), Proceedings of the 10th international Conference on Music Perception and Cognition (pp. 465–468). Causal Productions. [Google Scholar]
  50. Jacoby N, & McDermott JH (2017). Integer ratio priors on musical rhythm revealed cross-culturally by iterated reproduction. Current Biology, 27(3), 359–370. 10.1016/j.cub.2016.12.031 [DOI] [PubMed] [Google Scholar]
  51. Kirschner S, & Tomasello M (2009). Joint drumming: Social context facilitates synchronization in preschool children. Journal of Experimental Child Psychology, 102(3), 299–314. 10.1016/j.jecp.2008.07.005 [DOI] [PubMed] [Google Scholar]
  52. Kotz SA, Ravignani A, & Fitch WT (2018). The evolution of rhythm processing. Trends in Cognitive Sciences, 22(10), 896–910. 10.1016/j.tics.2018.08.002 [DOI] [PubMed] [Google Scholar]
  53. Ladányi E, Persici V, Fiveash A, Tillmann B, & Gordon RL (2020). Is atypical rhythm a risk factor for developmental speech and language disorders? WiREs Cognitive Science, 11(5). https://onlinelibrary-wiley-com.proxy.insermbiblio.inist.fr/doi/epdf/10.1002/wcs.1528 [DOI] [PMC free article] [PubMed] [Google Scholar]
  54. Launay J, Grube M, & Stewart L (2014). Dysrhythmia: A specific congenital rhythm perception deficit. Frontiers in Psychology, 5. 10.3389/fpsyg.2014.00018 [DOI] [PMC free article] [PubMed] [Google Scholar]
  55. Lê S, Josse J, & Husson F (2008). FactoMineR: An R Package for Multivariate Analysis. Journal of Statistical Software, 25(1), 1–18. 10.18637/jss.v025.i01 [DOI] [Google Scholar]
  56. Lense MD, Ladányi E, Rabinowitch T-C, Trainor L, & Gordon R (2021). Rhythm and timing as vulnerabilities in neurodevelopmental disorders. Philosophical Transactions of the Royal Society B: Biological Sciences, 376(1835), 20200327. 10.1098/rstb.2020.0327 [DOI] [PMC free article] [PubMed] [Google Scholar]
  57. Leow L-A, & Grahn JA (2014). Neural mechanisms of rhythm perception: Present findings and future directions. In Neurobiology of Interval Timing (pp. 325–338). Springer, New York, NY. 10.1007/978-1-4939-1782-2_17 [DOI] [PubMed] [Google Scholar]
  58. Leow L-A, Parrott T, & Grahn JA (2014). Individual differences in beat perception affect gait responses to low- and high-groove music. Frontiers in Human Neuroscience, 8, 811. [DOI] [PMC free article] [PubMed] [Google Scholar]
  59. Loui P, Guenther FH, Mathys C, & Schlaug G (2008). Action–perception mismatch in tone-deafness. Current Biology, 18(8), R331–R332. 10.1016/j.cub.2008.02.045 [DOI] [PMC free article] [PubMed] [Google Scholar]
  60. Mas-Herrero E, Marco-Pallares J, Lorenzo-Seva U, Zatorre RJ, & Rodriguez-Fornells A (2013). Individual differences in music reward experiences. Music Perception, 31(2), 118–138. 10.1525/mp.2013.31.2.118 [DOI] [Google Scholar]
  61. McAuley JD (2010). Tempo and rhythm. In Jones MR (Ed.), Music Perception (pp. 165–199). Springer Science+Business Media. [Google Scholar]
  62. Morillon B, & Baillet S (2017). Motor origin of temporal predictions in auditory attention. Proceedings of the National Academy of Sciences, 114(42), E8913–E8921. 10.1073/pnas.1705373114 [DOI] [PMC free article] [PubMed] [Google Scholar]
  63. Mosing MA, Verweij KJH, Abé C, de Manzano Ö, & Ullén F (2016). On the relationship between domain-specific creative achievement and sexual orientation in swedish twins. Archives of Sexual Behavior, 45(7), 1799–1806. 10.1007/s10508-016-0708-4 [DOI] [PubMed] [Google Scholar]
  64. Müllensiefen D, Gingras B, Musil J, & Stewart L (2014). The musicality of non-musicians: An index for assessing musical sophistication in the general population. PLOS ONE, 9(2), e89642. 10.1371/journal.pone.0089642 [DOI] [PMC free article] [PubMed] [Google Scholar]
  65. Mundfrom DJ, Shaw DG, & Ke TL (2005). Minimum sample size recommendations for conducting factor analyses. International Journal of Testing, 5(2), 159–168. 10.1207/s15327574ijt0502_4 [DOI] [Google Scholar]
  66. Nani A, Manuello J, Liloia D, Duca S, Costa T, & Cauda F (2019). The neural correlates of time: A meta-analysis of neuroimaging studies. Journal of Cognitive Neuroscience. 10.1162/jocn_a_01459 [DOI] [PubMed] [Google Scholar]
  67. Niarchou M, Gustavson DE, Sathirapongsasuti JF, Anglada-Tort M, Eising E, Bell E, McArthur E, Straub P, Team T, Me R, McAuley JD, Capra JA, Ullén F, Creanza N, Mosing MA, Hinds D, Davis LK, Jacoby N, & Gordon RL (2021). Genome-wide association study of musical beat synchronization demonstrates high polygenicity. BioRxiv, 836197. 10.1101/836197 [DOI] [PMC free article] [PubMed] [Google Scholar]
  68. Overy K, Nicolson RI, Fawcett AJ, & Clarke EF (2003). Dyslexia and music: Measuring musical timing skills. Dyslexia, 9(1), 18–36. 10.1002/dys.233 [DOI] [PubMed] [Google Scholar]
  69. Palmer C, Lidji P, & Peretz I (2014). Losing the beat: Deficits in temporal coordination. Philosophical Transactions of the Royal Society B: Biological Sciences, 369(1658). 10.1098/rstb.2013.0405 [DOI] [PMC free article] [PubMed] [Google Scholar]
  70. Patel AD, & Iversen JR (2014). The evolutionary neuroscience of musical beat perception: The Action Simulation for Auditory Prediction (ASAP) hypothesis. Frontiers in Systems Neuroscience, 8. 10.3389/fnsys.2014.00057 [DOI] [PMC free article] [PubMed] [Google Scholar]
  71. Peretz I, Champod AS, & Hyde K (2003). Varieties of musical disorders: The Montreal Battery of Evaluation of Amusia. Annals of the New York Academy of Sciences, 999(1), 58–75. 10.1196/annals.1284.006 [DOI] [PubMed] [Google Scholar]
  72. Pewsey A, Neuhäuser M, & Ruxton GD (2013). Circular Statistics in R. Oxford University Press. [Google Scholar]
  73. Pfordresher PQ, & Demorest SM (2021). The prevalence and correlates of accurate singing. Journal of Research in Music Education, 69(1), 5–23. 10.1177/0022429420951630 [DOI] [Google Scholar]
  74. Pfordresher PQ, & Nolan NP (2019). Testing convergence between singing and music perception accuracy using two standardized measures. Auditory Perception & Cognition, 2(1–2), 67–81. 10.1080/25742442.2019.1663716 [DOI] [Google Scholar]
  75. Phillips-Silver J, Toiviainen P, Gosselin N, Piché O, Nozaradan S, Palmer C, & Peretz I (2011). Born to dance but beat deaf: A new form of congenital amusia. Neuropsychologia, 49(5), 961–969. 10.1016/j.neuropsychologia.2011.02.002 [DOI] [PubMed] [Google Scholar]
  76. Polak R, Jacoby N, Fischinger T, Goldberg D, Holzapfel A, & London J (2018). Rhythmic prototypes across cultures: A comparative study of tapping synchronization. Music Perception, 36(1), 1–23. 10.1525/mp.2018.36.1.1 [DOI] [Google Scholar]
  77. Povel D-J, & Essens P (1985). Perception of temporal patterns. Music Perception: An Interdisciplinary Journal, 2(4), 411–440. 10.2307/40285311 [DOI] [Google Scholar]
  78. Provasi J, Doyère V, Zélanti PS, Kieffer V, Perdry H, El Massioui N, Brown BL, Dellatolas G, Grill J, & Droit-Volet S (2014). Disrupted sensorimotor synchronization, but intact rhythm discrimination, in children treated for a cerebellar medulloblastoma. Research in Developmental Disabilities, 35(9), 2053–2068. 10.1016/j.ridd.2014.04.024 [DOI] [PubMed] [Google Scholar]
79. Puyjarinet F, Bégel V, Gény C, Driss V, Cuartero M-C, Kotz SA, Pinto S, & Dalla Bella S (2019). Heightened orofacial, manual, and gait variability in Parkinson’s disease results from a general rhythmic impairment. npj Parkinson’s Disease, 5(1), 1–7. 10.1038/s41531-019-0092-6
80. Puyjarinet F, Bégel V, Lopez R, Dellacherie D, & Dalla Bella S (2017). Children and adults with Attention-Deficit/Hyperactivity Disorder cannot move to the beat. Scientific Reports, 7(1), 11550. 10.1038/s41598-017-11295-w
81. R Core Team. (2018). R: A language and environment for statistical computing. https://www.R-project.org/
82. Repp BH (2005). Sensorimotor synchronization: A review of the tapping literature. Psychonomic Bulletin & Review, 12(6), 969–992. 10.3758/BF03206433
83. Repp BH (2010). Sensorimotor synchronization and perception of timing: Effects of music training and task experience. Human Movement Science, 29(2), 200–213. 10.1016/j.humov.2009.08.002
84. Repp BH, & Su Y-H (2013). Sensorimotor synchronization: A review of recent research (2006–2012). Psychonomic Bulletin & Review, 20(3), 403–452. 10.3758/s13423-012-0371-2
85. Rouder JN (2014). Optional stopping: No problem for Bayesians. Psychonomic Bulletin & Review, 21(2), 301–308. 10.3758/s13423-014-0595-4
86. Saliba J, Lorenzo-Seva U, Marco-Pallares J, Tillmann B, Zeitouni A, & Lehmann A (2016). French validation of the Barcelona Music Reward Questionnaire. PeerJ, 4, e1760. 10.7717/peerj.1760
87. Schneider W, Eschman A, & Zuccolotto A (2002). E-Prime User’s Guide. Psychology Software Tools Inc.
88. Schubotz RI, Friederici AD, & Yves von Cramon D (2000). Time perception and motor timing: A common cortical and subcortical basis revealed by fMRI. NeuroImage, 11(1), 1–12. 10.1006/nimg.1999.0514
89. Schwartze M, Tavano A, Schröger E, & Kotz SA (2012). Temporal aspects of prediction in audition: Cortical and subcortical neural mechanisms. International Journal of Psychophysiology, 83(2), 200–207. 10.1016/j.ijpsycho.2011.11.003
90. Simmons JP, Nelson LD, & Simonsohn U (2011). False-positive psychology. Psychological Science, 22(11), 1359–1366. 10.1177/0956797611417632
91. Sowiński J, & Dalla Bella S (2013). Poor synchronization to the beat may result from deficient auditory-motor mapping. Neuropsychologia, 51(10), 1952–1963. 10.1016/j.neuropsychologia.2013.06.027
92. Stanislaw H, & Todorov N (1999). Calculation of signal detection theory measures. Behavior Research Methods, Instruments, & Computers, 31(1), 137–149.
93. Stephan MA, Lega C, & Penhune VB (2018). Auditory prediction cues motor preparation in the absence of movements. NeuroImage, 174, 288–296. 10.1016/j.neuroimage.2018.03.044
94. Swaminathan S, & Schellenberg EG (2018). Musical competence is predicted by music training, cognitive abilities, and personality. Scientific Reports, 8(1), 9223. 10.1038/s41598-018-27571-2
95. Thaut MH, Trimarchi PD, & Parsons LM (2014). Human brain basis of musical rhythm perception: Common and distinct neural substrates for meter, tempo, and pattern. Brain Sciences, 4(2), 428–452. 10.3390/brainsci4020428
96. Tierney A, & Kraus N (2015). Evidence for multiple rhythmic skills. PLOS ONE, 10(9), e0136645. 10.1371/journal.pone.0136645
97. Tierney A, White-Schwoch T, MacLean J, & Kraus N (2017). Individual differences in rhythm skills: Links with neural consistency and linguistic ability. Journal of Cognitive Neuroscience, 29(5), 855–868. 10.1162/jocn_a_01092
98. Tranchant P, & Vuvan DT (2015). Current conceptual challenges in the study of rhythm processing deficits. Frontiers in Neuroscience, 9. 10.3389/fnins.2015.00197
99. Vuust P, Gebauer LK, & Witek MAG (2014). Neural underpinnings of music: The polyrhythmic brain. In Merchant H & de Lafuente V (Eds.), Neurobiology of Interval Timing (pp. 339–356). Springer. 10.1007/978-1-4939-1782-2_18
100. Williamson VJ, Liu F, Peryer G, Grierson M, & Stewart L (2012). Perception and action de-coupling in congenital amusia: Sensitivity to task demands. Neuropsychologia, 50(1), 172–180. 10.1016/j.neuropsychologia.2011.11.015
101. Winkler I, Háden GP, Ladinig O, Sziller I, & Honing H (2009). Newborn infants detect the beat in music. Proceedings of the National Academy of Sciences, 106(7), 2468–2471. 10.1073/pnas.0809035106
102. Zatorre RJ, Chen JL, & Penhune VB (2007). When the brain plays music: Auditory–motor interactions in music perception and production. Nature Reviews Neuroscience, 8(7), 547–558. 10.1038/nrn2152

Associated Data

Supplementary Materials

ESM1
ESM2
