Author manuscript; available in PMC: 2011 Oct 1.
Published in final edited form as: Autism Res. 2010 Oct;3(5):214–225. doi: 10.1002/aur.147

Perception of emotion in musical performance in adolescents with autism spectrum disorders

Anjali Bhatara a, Eve-Marie Quintin b, Bianca Levy a, Ursula Bellugi c, Eric Fombonne d, Daniel J Levitin a
PMCID: PMC2963682  NIHMSID: NIHMS214981  PMID: 20717952

Scientific Abstract

Individuals with autism spectrum disorders (ASD) are impaired in understanding the emotional undertones of speech, many of which are communicated through prosody. Musical performance also employs a form of prosody to communicate emotion, and the goal of this study was to examine the ability of adolescents with ASD to understand musical emotion. We designed an experiment in which each musical stimulus served as its own control while we varied the emotional expressivity by manipulating timing and amplitude variation. We asked children and adolescents with ASD, matched controls, and individuals with Williams syndrome to rate how emotional these excerpts sounded. Results show that children and adolescents with ASD are impaired, relative to matched controls and individuals with Williams syndrome, in judging differences in emotionality among the expressivity levels. Implications for theories of emotion in autism are discussed in light of these findings.

Keywords: autism spectrum disorders, Asperger syndrome, Williams syndrome, music, emotion perception, auditory perception


As Kanner (1943) observed in the earliest research on autism, some of the most salient deficits in autism spectrum disorders (ASD) concern emotion perception; yet research into the nature of these deficits has yielded mixed results. Many studies show that individuals with ASD are impaired in perceiving social and emotional information in faces and voices (Adolphs, Sears, & Piven, 2001; Baron-Cohen, Spitz, & Cross, 1993; Downs & Smith, 2004; Gross, 2004; Hobson, Ouston, & Lee, 1988; Pierce, Glad, & Schreibman, 1997; Tantam, Monaghan, Nicholson, & Stirling, 1989; Weeks & Hobson, 1987), while other studies have shown no impairment (Castelli, 2005; Loveland et al., 1997; Ozonoff, Pennington, & Rogers, 1990). This discrepancy may be due to differences in task type or complexity or in the level of functioning of participants (Loveland, 2005). For example, Mazefsky and Oswald (2007) found that children with Asperger syndrome (AS) performed similarly to controls in recognizing facial and vocal emotion, whereas children with high-functioning autism (HFA) performed significantly worse. The main difference between the groups was that the AS group had significantly higher verbal and nonverbal IQs than the HFA group. Given that individuals with AS do show significant social-communicative deficits (Ghaziuddin, 2008; Saulnier & Klin, 2007), this suggests that emotion recognition impairment may be characteristic of autism, but that some laboratory tasks allow individuals with higher verbal abilities to use verbal strategies to compensate (Grossman, Klin, Carter, & Volkmar, 2000). The present study focuses on an area of emotion understanding in ASD that has not been thoroughly investigated and may be less dependent on verbal abilities than many previously studied laboratory tasks: the perception of emotion in musical performance.

Here, we consider “emotion” in terms of Russell’s (1980) circumplex model of affect (see Figure 1). Although numerous models have been developed since that time, the clarity and two-dimensionality of Russell’s model make it well suited for this paper. On the edge of the circle lie four bipolar pairs of “affect concepts,” for example, pleasure/misery. All of these can be communicated by music, to varying degrees of specificity. The center of the circle is neutral, representing a lack of emotion: a state in which perception is not pulled toward any one side of the circle. In the present study, we investigate perception of music performances that either (a) pull the listener’s perception of emotion toward one side of the circle (thus being emotionally expressive), (b) leave the listener’s perception of emotion in the middle of the circle (being less emotionally expressive), or (c) fall somewhere between these two extremes.

Figure 1. Russell’s (1980) circumplex model of affect, modified to include “lack of emotion” at the center.

Individuals with ASD are impaired in perception of emotion as conveyed by speech prosody (Paul, Augustyn, Klin, & Volkmar, 2005; Peppé, McCann, Gibbon, O’Hare, & Rutherford, 2007) and are impaired in identifying vocal affect (Boucher, Lewis, & Collis, 2000; Golan, Baron-Cohen, Hill, & Rutherford, 2007). They also show atypical ERP responses to changes in a word’s affective pitch (Korpilahti et al., 2007) and atypical cortical responses to general vocal sounds (Gervais et al., 2004) and vocal expressions of irony (Wang, Lee, Sigman, & Dapretto, 2007).

Expression of emotion in music relies on mechanisms similar to those used to convey emotion in nonverbal aspects of speech (Juslin & Laukka, 2003), implying that the perception of emotion in speech and music may rely on shared neural mechanisms, analogous to predictions by Patel and colleagues (Patel, 2003; Patel, Peretz, Tramo, & Labreque, 1998) and findings of common neural substrates for processing music and language syntax (Levitin & Menon, 2003, 2005). This suggests that, if the perception of emotion in music relies on the same mechanisms as the perception of emotion in nonverbal speech cues, individuals with ASD will also be impaired in perceiving or recognizing emotionality in music.

In Western classical music, the composer contributes greatly to the emotional content of a piece, but performers typically exercise great latitude in their interpretations, making them salient contributors to the emotional content as well. The composer (in most instances) indicates the notes’ pitches and durations (which in turn affect key and harmonic structure) as well as phrasing, pedaling, tempo, and dynamics, along with abstract indications of mood or style (e.g. “cantabile” meaning “in a singing style,” Kennedy, 1999). The performer is normally expected to follow these indications and to add expressive nuances to the music, over and above what is notated. These nuances consist of systematic variation of duration and amplitude (Gabrielsson, 1999; Repp, 1995) although timbre and pitch variation are also important for some instruments (for a review, see Palmer, 1997). In piano performance (which comprises the stimuli for the present study), pitch cannot be altered, and timbre cannot be varied separately from amplitude (Parncutt & Troup, 2002; Taylor, 1965, p. 175), so the present discussion will focus on duration and amplitude variation. These two types of variation contribute to pulling the perception of the piano performance from the center of Russell’s circumplex model discussed above toward one of the edges (e.g., Kamenetsky, Hill & Trehub, 1997). The specific emotion characterized by the piece (or the specific place on the edge of the model) is mainly determined by more complex aspects specific to the particular piece, which are beyond the scope of this discussion.

Research on auditory and music perception in ASD has shown intriguing results. Children and adults with ASD show greater pitch sensitivity and pitch categorization abilities than typical controls (Bonnel et al., 2003), enhanced pitch memory and labeling abilities (Heaton, 2003; Heaton, Hermelin, & Pring, 1998), preserved or superior sensitivity for detecting pitch direction (Heaton, 2005) and contour change (Mottron, Peretz, & Ménard, 2000), and superior chord disembedding ability (though in Asperger syndrome only; Altgassen, Kliegel, & Williams, 2005). These findings suggest that pitch perception is unimpaired in ASD.

However, as the discussion of musical performance techniques above shows, pitch perception is not the only factor important for music understanding; the timing and amplitude of each note also contribute to the overall emotional content. In tone-duration (timing) and amplitude discrimination tasks, Jones et al. (2009) reported no group difference between the ASD and control groups, but subsets of individuals in the ASD group showed exceptionally poor performance in each task. In addition, two studies showed impairments in children with ASD in extracting speech from background noise even when timing cues designed to aid extraction were present (Alcántara, Weisblatt, Moore, & Bolton, 2004; Groen et al., 2009). Brain responses to amplitude changes have also been shown to be abnormal in individuals with ASD (Bruneau, Bonnet-Brilhault, Gomot, Adrien, & Barthélémy, 2003; Bruneau, Roux, Adrien, & Barthélémy, 1999; Lincoln, Courchesne, Harms, & Allen, 1995).

Children with ASD may show relatively more interest in music than age-matched controls (Thaut, 1987). In addition, six of Kanner’s (1943) original sample of 11 autistic children showed a strong early interest in music. This raises several questions: What is the nature of their interest? To which aspects of music are individuals with ASD attracted? Do they enjoy the emotional aspects of music in the same way as neurotypical controls?

Previous research on ASD and the perception of emotion in music has employed relatively simple emotion recognition tasks, sometimes utilizing musical mode to convey the emotion. Two common musical modes, “major” and “minor,” can each express a variety of moods (Juslin & Laukka, 2003, 2004), but owing to cultural tradition they often express two of the most easily recognized musical moods: happiness (major) and sadness (minor; Hevner, 1935; Dalla Bella, Peretz, Rousseau & Gosselin, 2001; Juslin & Laukka, 2004). These pairings have been used in studies of emotion perception in music in typical children (Dalla Bella et al., 2001; Gregory, Worrall & Sarge, 1996; Kastner & Crowder, 1990). Although in practice the emotion conveyed by a piece is determined by many factors, including tempo, timbre, and rhythm, musical mode is a salient cue that differentiates happy from sad music and is thus an easy variable to manipulate in research on musical emotion. One study using this manipulation showed that children with ASD are unimpaired in matching musical modes (major or minor) to schematic happy and sad faces, respectively (Heaton, Hermelin, & Pring, 1999). Children with ASD are also able to match musically depicted emotional mental states such as tenderness with visual representations of these states (Heaton, Allen, Cummins, Williams, & Happé, 2008). It is important to note that these types of tasks tap only into emotion recognition abilities: the ability to choose a verbally defined state and match particular musical qualities to it. Perception of emotional expressivity is more subtle, relying on small variations in timing and amplitude, and thus may be more difficult to perceive and/or verbalize. It is currently unknown whether children with ASD perceive expressivity in musical performances in the same way as typical children or adults.

Many studies on ASD include two comparison groups: one group of typically developing (TD) children and one group of children with a cognitive impairment, to control for the difference in level of functioning between the ASD and TD groups (Jarrold & Brock, 2004). Thus, we recruited a comparison group consisting of individuals with Williams syndrome (WS). WS is a neurodevelopmental disorder caused by the hemizygous deletion of approximately 1.5 megabases on chromosome 7, typically including the gene for elastin (Korenberg et al., 2000; Mervis et al., 2000). Individuals with WS are generally more cognitively impaired than the high-functioning participants with ASD in this study (Mervis et al., 2000) but show relatively spared emotion perception (Rose et al., 2007; Skwerer, Schofield, Verbalis, Faja, & Tager-Flusberg, 2007).

Individuals with WS also show relatively preserved musical abilities (Don, Schellenberg, & Rourke, 1999; Levitin, 2005; Levitin & Bellugi, 1998), especially in tests of musical expressiveness (Hopyan, Dennis, Weksberg, & Cytrynbaum, 2001). In addition, they show a stronger liking for music and a greater range of emotions in response to it when compared with typical children (Don et al., 1999; Levitin et al., 2004), though this may be largely because they are less inhibited than typical children and therefore express their emotions more.

In the present experiment, we manipulated the expressivity of piano performance as mediated by variability in note duration (timing; including note length, note inter-onset intervals, and onset asynchronies) and amplitude to investigate the contributions of this variability to perception of emotional expressiveness in typical and atypical development. Participants then rated these manipulated performances for their emotionality. We propose two alternative hypotheses: H1 is that music is a domain in which emotion recognition and perception are unimpaired for individuals with ASD. H2 is that, as previous research has demonstrated, individuals with ASD are able to recognize or categorize emotion or associate certain compositional cues with emotions (e.g., a minor key is associated with sadness), but are not sensitive to more subtle, implicitly learned cues such as those normally employed by a performer, which were manipulated in our experimental task. An important question to consider is whether the emotionality judgments we are asking of the participants depend on level of cognitive functioning. The results of our study will shed light on that question as well: if the judgments are wholly dependent on cognitive functioning, the WS group would be the most impaired at the task, and the ASD group (in the present study, less cognitively impaired than the WS group) would fall somewhere between the WS group and the control group.

Material and Methods

Background and Screening Measures

All participants completed the Wechsler Abbreviated Scale of Intelligence (WASI). The ASD and TD participants also completed additional measures: two subtests from the Wechsler Intelligence Scale for Children-IV (WISC-IV; Digit Span and Letter-Number Sequencing) and a revised version of the Queens Questionnaire for Musical Background (Cuddy, Balkwill, Peretz, & Holden, 2005). Their parents completed the Salk And McGill Music Inventory (SAMMI, Levitin et al., 2004) which provided further information about the child’s musical history, as well as two questionnaires about social functioning to ascertain ASD diagnosis and verify that children in the control group did not show signs or symptoms of ASD: the Social Communication Questionnaire (SCQ, Rutter, Bailey, & Lord, 2003) and the Social Responsiveness Scale (SRS, Constantino et al., 2003).

Participants

There were three experimental groups in this study. Initially, 33 children and adolescents with autism spectrum disorders (ASD) were recruited through convenience sampling: 25 from a specialized autism clinic at the Montreal Children’s Hospital and eight from a school for children with physical and mental disabilities in Montreal. These participants were aged between 10 and 19 years and had all been diagnosed according to DSM-IV criteria by specialized medical teams with expertise in diagnosing autism and other ASD. Subgroup diagnosis (autistic disorder, Asperger syndrome, and PDD-NOS) was similarly determined according to DSM-IV criteria. In addition, 18 participants with Williams syndrome (WS) were recruited at a summer camp (“Williams Syndrome Camps,” 2010) where some, but not all, of the individuals were involved in music activities. These participants were diagnosed based on clinical features by their physicians and/or the fluorescence in situ hybridization (FISH) test indicating a deletion that included the elastin gene on chromosome band 7q11.2 (Korenberg et al., 2000). As controls, we recruited 52 typically developing (TD) children and adolescents between the ages of 8 and 18 by word of mouth and from four schools in Montreal.

Of the 33 participants with ASD, five were excluded from analysis because their verbal IQs or FSIQs were below 70, four because they did not understand the task, and one because both SRS and SCQ scores were in the normal (non-ASD) range. This yielded 23 retained participants with ASD (2 with autistic disorder, 12 with Asperger syndrome, and 9 with PDD-NOS). We then selected 23 TD participants from the 52 recruits to obtain group matching to the 23 participants with ASD. IQ and age data are reported in Table 1. The goal of the group matching was to obtain groups with equal numbers of participants, collectively matched such that gender distribution was equal and chronological age, verbal IQ (VIQ), performance IQ (PIQ), and full-scale IQ (FSIQ) were within one SD. We performed Wilcoxon two-sample tests (equivalent to Mann-Whitney U tests) to examine the intergroup differences. The two groups did not differ statistically on PIQ, Z = 1.6, p = .1, Digit Span or Letter-Number Sequencing scaled scores, ZDS = −.74, p = .46 and ZLN = .94, p = .35, years of musical experience, Z = −1.57, p = .11, or age, Z = −.94, p = .34. They did differ on VIQ and FSIQ, ZVIQ = 2.2, p = .03, ZFSIQ = 2.13, p = .03, with the TD group mean VIQ (M = 106, SD = 12) slightly higher than the ASD group mean VIQ (M = 97, SD = 17). There were also significant differences between groups on SRS and SCQ scores, ZSRS = −5.1, p < .001, and ZSCQ = −5.4, p < .001, confirming that the ASD group overall was impaired in social communication relative to the TD group.
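A minimal sketch of this kind of matching check in Python, using simulated stand-in scores (the Mann-Whitney U test is equivalent to the Wilcoxon two-sample test reported above):

```python
# Sketch of a group-matching check with the Mann-Whitney U test.
# The score arrays are hypothetical stand-ins for the measured PIQ data.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
asd_piq = rng.normal(98, 14, size=23)   # hypothetical ASD group PIQ scores
td_piq = rng.normal(104, 14, size=23)   # hypothetical TD group PIQ scores

u_stat, p_value = mannwhitneyu(asd_piq, td_piq, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")  # a large p is consistent with matched groups
```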

Table 1.

Descriptive statistics: participants with ASD (N = 23), participants with WS (N = 11), and typically developing (TD; N = 23) participants, compared using Wilcoxon rank tests

                              Age (yr:mo)    FSIQ      VIQ       PIQ
ASD (6 females, 17 males)
  Mean                        13:7           97        96        98
  S.D.                        1:11           17        19        14
  Range                       10:11–20:3     76–133    72–132    74–129
WS (8 females, 3 males)
  Mean                        22:3           65        75        59
  S.D.                        8:9            5         8         3
  Range                       13:3–43        59–73     66–89     56–67
TD (6 females, 17 males)
  Mean                        12:7           106       106       104
  S.D.                        2:1            12        13        14
  Range                       13:3–15:7      79–130    81–129    75–132
ASD vs. TD group only: Z      −.94           2.13*     2.20*     1.61
ASD, TD, and WS: χ²           17.7**         27.3**    22.6**    26.0**

* p < .05; ** p < .01

FSIQ: full-scale IQ; VIQ: verbal IQ; PIQ: performance IQ

N.B. ASD subgroups in this analysis: 2 autistic disorder, 12 Asperger syndrome, 9 PDD-NOS

Six of the 18 participants in the WS group were excluded from the analysis because their performance IQs or FSIQs were less than 55. One additional participant was excluded because of hearing loss. Thus, 11 WS participants were retained in the analyses (8 females and 3 males).

Stimuli

Stimuli were four versions of short (approximately 20-second) selections from four Chopin nocturnes (Op. 15 No. 1 and Op. 32 No. 1, both in a major key, and Op. 55 No. 1 and KKIVa, both in a minor key), previously used by Bhatara, Tirovolas, Duan, Levy and Levitin (under revision) in a study of musical expressivity in normal adults. To create the stimuli, we obtained performances of the nocturnes from a professional pianist (Tom Plaunt, Piano Performance Professor, Schulich School of Music, McGill University), recorded on a Yamaha Disklavier piano (Buena Park, California; Model MPX1Z 5959089, equipped with a DKC500RW MIDI control module). Using a MIDI editor (ProTools 7, Avid, Daly City, California), we created four levels of musical expressivity by parametrically removing some or all of the temporal and amplitude variation associated with expressivity, as described below.

Manipulating temporal expressivity

We first manipulated the expressivity in the performance due to variations in note timing (temporal expressivity) by creating three temporal alterations for a total of four versions of each performance: 1) a normally expressive version (the unaltered Disklavier recording obtained from the professional pianist, called the expressive version); 2) a version in which all temporal variation (and hence temporal expressivity) is removed (mechanical version); 3) an intermediate version with temporal variation interpolated between 0 and 100% expressive (50% expressive version); and 4) a version with random temporal variation (random version). Further details of the stimulus creation procedure are included in the Appendix.

Manipulating amplitude expressivity

We altered the piece’s expressivity due to variation in note amplitude in the same general fashion as the temporal expressivity. The mechanical version was created by assigning to each note the mean amplitude of the expressive version. The expressive version contains the full amplitude variation afforded by MIDI. For the intermediate version, we assigned 50% of the amplitude variation contained in the expressive version, again using linear interpolation. For the random version, the amplitudes of each note were randomly reassigned without regard to note type.
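As an illustration, the three amplitude manipulations can be sketched in a few lines of Python; the velocity values below are hypothetical stand-ins, not data from the recordings:

```python
# Sketch of the amplitude (MIDI velocity) manipulations described above.
# Velocities are hypothetical; the real stimuli were edited in ProTools.
import numpy as np

rng = np.random.default_rng(0)
expressive = np.array([64, 80, 72, 90, 55, 70])  # hypothetical velocities (0-127)

# Mechanical: every note receives the mean velocity of the expressive version.
mechanical = np.full_like(expressive, int(round(expressive.mean())))

# 50% version: linear interpolation halfway between expressive and mechanical.
half = np.rint((expressive + mechanical) / 2).astype(int)

# Random: velocities reassigned among notes without regard to note type.
random_version = rng.permutation(expressive)
```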

Pedaling

The use of the sustain pedal is important in expressive piano performance. We altered the pedaling in the same fashion as the timing and amplitude expressivity, assigning 100% and 50% of the pedaling values in their respective conditions. At first, we created the mechanical (0%) version with no pedaling at all, but we found that note durations were altered so as to noticeably distort the performance (this is because the pianist had used pedaling to increase some note durations and to provide legato transitions). Moreover, the subjective impression of the experimenters was that this version sounded qualitatively different from the others: lacking legato, it sounded too staccato (choppy), and this would have caused it to stand out rather than sounding like one point along a continuum. We thus assigned 25% of the pedaling value to the mechanical version. The pedaling profile for the random version was the same as that of the expressive version; we deemed the introduction of random pedaling to be outside the scope of our study, which focuses on amplitude and timing.

These three manipulated aspects (timing variation, amplitude variation, and pedaling) were combined to form four categories of expressiveness for each piece (expressive, 50%, mechanical, and random). Expressive versions of each nocturne had 100% of the amplitude, timing, and pedaling variation; 50% versions had 50% of the timing, amplitude, and pedaling variation; mechanical versions had 0% of the timing and amplitude variation and 25% of the pedaling variation; and random versions had random amplitude and timing with the original performance’s pedaling. This resulted in a total of 16 stimuli (each presented twice): 4 nocturnes × 4 levels of expressivity. Two of the nocturnes were in a minor key and two were in a major key; thus, eight of the 16 stimuli were in a minor key and eight were in a major key.[1] We recognize that many factors differentiate these two pairs of pieces in addition to their mode (major or minor), yet we felt it was important to introduce this salient quality as a factor in the experiment. Below, in the analysis section, when we refer to tonality we do so as a convenient shorthand, and do not intend to imply that we are generalizing to all major or minor pieces.
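For clarity, the four categories can be summarized schematically as the fraction of the original performance’s variation each retains (a restatement of the combinations above, not code used in the study):

```python
# Schematic restatement of the four expressiveness categories. Each value is
# the fraction of the original performance's variation retained; "random"
# means values were reassigned rather than scaled.
CONDITIONS = {
    "expressive": {"timing": 1.0,      "amplitude": 1.0,      "pedaling": 1.0},
    "50%":        {"timing": 0.5,      "amplitude": 0.5,      "pedaling": 0.5},
    "mechanical": {"timing": 0.0,      "amplitude": 0.0,      "pedaling": 0.25},
    "random":     {"timing": "random", "amplitude": "random", "pedaling": 1.0},
}
```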

Procedure

In order to increase statistical power, two blocks of trials were created, with each stimulus appearing in random order within each block (thus each participant heard each stimulus twice); the two blocks were separated by a 30 s silent rest period. Stimulus presentation was controlled by a Macintosh PowerBook G4 laptop (Cupertino, CA) using the program Psiexp (Smith, 1995). For the ASD and TD groups, the MIDI data were played back through the Disklavier piano (which makes it appear as though the piano is playing itself), and participants sat approximately four feet from the piano. Members of the WS group, who were tested away from our laboratory, were presented with recordings of the Disklavier output through Sony Dynamic Stereo Headphones, MDR-V250 (Sony Corporation, Buena Park, CA). Pilot testing in our laboratory showed no significant differences in judgments associated with the “live” versus recorded stimuli.

Participants were asked to rate how emotional each musical performance was. We emphasized to participants that it did not matter which emotion they perceived in the performance or how the performance made them feel; rather, they should rate how much emotion the performance conveyed. Even though we were examining the effect of different “expressivity” levels of piano performance, we did not want to ask the participants how expressive the performances sounded. We were instead interested in how they translated these different expressivity levels into emotion. After hearing each stimulus, participants saw the question “How emotional was the music you just heard?” displayed on the computer screen, and they rated the emotional level on a continuous slider, of which one end was labeled “not emotional” and the other end “very emotional.” (The responses were coded as ranging between 0 and 1.0). Participants were asked to use the whole range of the scale.

Results

General analyses

The grand mean of ratings was 0.56 with a standard error of 0.02, demonstrating that, overall, the participants’ responses were centered around the middle of the rating scale (scored between 0 and 1) and were consistent (coefficient of variation = 0.04). Individual participants’ means ranged from 0.20 to 0.81 (SD = 0.1). The ASD group’s mean was 0.59 (SE = 0.02), the TD group’s mean was 0.55 (SE = 0.02), and the WS group’s mean was 0.52 (SE = 0.05). A one-way repeated-measures ANOVA confirmed that these means did not differ significantly from one another, F(2,55) = 1.49, p = .23. Over all three groups, the correlations of ratings between the first and second blocks of stimuli by expressivity level were significant at p < .01 so we combined the blocks in subsequent analyses.

Analysis

We performed an initial two-way repeated-measures ANCOVA with tonality (major vs. minor) and expressivity level (expressive, 50%, mechanical, and random) as within-subject factors to examine verbal IQ as a covariate. The main effect of expressivity level was significant, F(3, 162) = 14.7, p < .001, and the covariance main effect of VIQ approached significance, F(1, 54) = 3.0, p = .09. We performed a second three-way repeated-measures ANCOVA to examine the interactions of these within-subject factors (expressivity level and tonality) as well as the main effect of diagnosis (ASD, TD, or WS). Expressivity level was again significant, F(3, 162) = 7.17, p < .001. Diagnosis was not significant, F(2, 54) = 2.2, p = .11. However, the interaction of diagnosis with expressivity level was significant, F(6, 162) = 3.23, p = .004 (see Figure 2). The main effect of tonality was not significant, F(1, 54) = 1.15, p = .29, nor did it interact with any other factors (all p’s > .1). The covariance main effect of verbal IQ was significant, F(1, 54) = 5.49, p = .02, and its interaction with expressivity level was significant, F(3, 162) = 4.39, p = .005.
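For readers who wish to approximate these analyses in open-source software, one option is a linear mixed model with a random intercept per participant; the sketch below is an approximation under assumed column names, not the repeated-measures ANCOVA actually used:

```python
# A linear mixed model approximating the repeated-measures ANCOVA above:
# fixed effects for expressivity level, diagnosis, tonality, and VIQ, plus a
# random intercept per participant. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ratings_long.csv")  # hypothetical long-format data file
model = smf.mixedlm(
    "rating ~ C(level) * C(diagnosis) + C(tonality) + viq",
    data=df,
    groups=df["subject"],
)
print(model.fit().summary())
```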

Figure 2. Emotionality ratings of the musical performances by expressivity level and diagnosis (ASD, TD, and WS).

To further explore the interactions among diagnosis and the other factors, we performed separate repeated-measures ANCOVAs for each group of participants, with expressivity level and tonality as factors and VIQ as a covariate. For the ASD group, there were no significant main effects of expressivity level or tonality, F(3, 66) = 1.25, p = .30, and F(1, 66) = 2.69, p = .10, respectively (see Figure 3a). VIQ was a significant covariate, F(1, 22) = 5.85, p = .02, but it did not interact with any other factor.

Figures 3a, b, and c. Emotionality ratings by tonality (major vs. minor) and expressivity level for (a) ASD, (b) TD, and (c) WS.

In contrast with the ASD group, the main effect of expressivity level was significant for both the TD and WS groups, F(3, 66) = 13.6, p < .001 and F(3, 30) = 8.35, p < .001, respectively. Thus, the participants in the ASD group did not differentiate among expressivity levels in their responses, while the participants in the TD and WS groups did. Expressivity level was the only significant factor for the TD group. As can be seen in Figures 2 and 3b, the TD group differentiated only between the original, expressive version and the other three; this was verified using Tukey’s HSD post hoc test, and there were no significant differences among the other levels. The WS group rated the random version as less emotional than the other three when ratings were collapsed across tonality. However, the WS group showed a significant interaction between expressivity level and tonality, F(3, 56) = 2.84, p < .05. When ratings for the two minor nocturnes were examined alone, the WS group showed a pattern more similar to that of the TD group (see Figure 3c). Although Tukey’s HSD post hoc test showed significant differences only between the expressive and random levels, a one-tailed t-test between expressive and 50% yielded a significant difference, t(10) = 2.53, p = .02. This, combined with a power calculation, suggests that with four additional participants the difference might have reached significance in the post hoc test as well. When the ratings for the two major nocturnes were examined alone for the WS group, only the mechanical and random versions differed significantly from each other.

For the WS group, there was no main effect of VIQ, F(1, 6) = 1.6, p = .25, but the interactions between VIQ and expressivity level, F(3, 52) = 4.6, p < .01, and between VIQ and tonality, F(1, 52) = 4.8, p = .03, were significant. The interaction between VIQ and expressivity level arose because individuals with lower VIQs rated the mechanical and 50% versions as more similar to the expressive version, while individuals with higher VIQs rated them as more similar to the random version; ratings for the expressive and random versions themselves did not vary as much across the VIQ range. The interaction between VIQ and tonality arose because individuals with lower VIQs tended to rate the major nocturnes as equal to or slightly more emotional than the minor nocturnes, while individuals with higher VIQs rated the major nocturnes as slightly less emotional than the minor nocturnes.

TD and ASD groups only

We did not have sufficient data on years of musical experience for the WS group to warrant its use in analyses. We therefore performed a repeated-measures ANCOVA on only the ASD and TD groups to investigate the effect of musical experience on expressivity ratings. The within-subjects factor was expressivity level, the between-subjects factor was diagnosis (ASD or TD), and the covariates were VIQ and years of musical experience. The effect of musical experience approached significance, F(1, 45) = 3.47, p = .07, because participants with more musical experience tended to rate all of the pieces as more emotional than did participants with less musical experience. The remaining effects were similar to those from the original omnibus ANCOVA; the main effects of expressivity level and VIQ were significant, F(3, 135) = 10.89, p < .001 and F(1, 45) = 10.37, p < .001, respectively, as was the interaction between expressivity level and diagnosis, F(6, 270) = 3.43, p = .02. There was no interaction between expressivity level and musical experience in the ANCOVA, showing that individuals at all levels of musical experience tended to rate the expressivity levels in the same way. Nonetheless, there may have been an effect of musical experience on the magnitude of response rather than on its direction. To examine this possibility, we calculated a difference score for each participant (individual mean rating of the expressive versions minus mean rating of the mechanical versions). This provided a measure of their ability to discriminate between these two levels of expressivity. The correlation between the difference score and years of musical experience was not significant for the two groups combined, r(46) = 0.23, p = .13, or for the ASD group alone, r(21) = −0.1, p = .65. However, we found a significant positive correlation for the TD group alone, r(23) = 0.42, p = .04 (discriminability increased with musical experience).
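A minimal sketch of this difference-score analysis, assuming a hypothetical long-format data frame with one row per trial:

```python
# Sketch of the difference-score analysis: per-participant mean rating for the
# expressive level minus the mean for the mechanical level, correlated with
# years of musical experience. Column names are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("ratings_long.csv")  # hypothetical long-format data file
means = df.pivot_table(index="subject", columns="level", values="rating")
diff_score = means["expressive"] - means["mechanical"]  # discrimination measure

experience = df.groupby("subject")["music_years"].first()
r, p = pearsonr(diff_score, experience.loc[diff_score.index])
print(f"r = {r:.2f}, p = {p:.2f}")
```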

Discussion

We report evidence that children and adolescents with ASD are impaired in judging the emotional expressivity of piano performances relative to TD participants group-matched on PIQ, auditory working memory, and years of musical experience; they are also impaired relative to unmatched participants with WS. At the very least, one can conclude that the participants with ASD are not responsive to the same expressive cues as are people with WS or typical development. The interaction between diagnosis and expressivity level showed that the main difference among the three groups was in their patterns of responses. The TD group rated all of the expressive performances of both tonalities as more emotional than the other three levels (50%, mechanical, and random), and the WS group showed a similar pattern for the minor nocturnes, whereas the ASD group failed to differentiate among the expressivity levels for either the major or the minor nocturnes. As indicated by the ANCOVAs above, VIQ is clearly an important factor in this judgment, but the intergroup differences remain even when VIQ is added as a covariate in the analysis. Thus, of the two hypotheses proposed above, we find evidence to support H2: Individuals with ASD show impairments in understanding these expressive emotional cues in music.

Experience is also an important factor; the present study showed that the ability to differentiate among different expressivity levels was positively correlated with years of musical experience. This result is consistent with previous work showing that music experience aids in perception of emotion in the fundamental frequency of voice samples (Nilsonne & Sundberg, 1985) as well as in speech prosody (Thompson, Schellenberg & Husain, 2004), a domain closely related to musical performance expressiveness. However, this correlation is only present in the TD group, suggesting that children with ASD may require greater amounts of standard musical training or alternative forms of musical training to show this particular enhancement in emotion perception.

Previous evidence (Heaton et al., 1999; Heaton et al., 2008) has shown that children and adolescents with ASD are unimpaired in identifying basic musical emotions. This raises the question of what differs between the task of recognizing or categorizing emotions and the task of rating the amount of emotional expressivity that is present. It could be that in the categorization studies the emotion was conveyed by compositional as well as performance cues, while in the present study the differential cues lay only in the performance, each stimulus serving as its own control. Among compositional cues, pitch is very important in conveying emotion, and pitch perception is a strength among individuals with ASD. When they no longer have pitch cues to differentiate among performances, perhaps the task becomes too difficult, or they lack access to an alternative strategy. As Samson, Mottron, Jemel, Belin and Ciocca (2006) propose, the high spectro-temporal complexity of the performance cues may impair the ability of individuals with ASD to perform this task.

A complementary explanation for our findings is that the understanding of these performance cues relies on neural mechanisms that overlap with those involved in understanding of affective speech prosody, and they may both be impaired for similar reasons. The connection between speech prosody and musical cues has been previously noted (Bernstein, 1976; Juslin & Laukka, 2003; Kivy, 1980). This would fit into the framework proposed by Klin, Jones, Schultz & Volkmar (2005) in their theory of Enactive Mind (EM). In the EM approach, many impairments in ASD may be due to a lack of the predisposition to respond to and seek out social stimuli. Over the course of development, this drastically affects the differential salience of objects and people to individuals with ASD, and changes the way they interact with the world. Thus, individuals with ASD find people and the subtle communication they employ to be less salient and more difficult to understand. If we assume that 1) there is overlap in the neural bases for the understanding of affective music and the understanding of affective speech prosody (as suggested by the work of Koelsch & Siebel, 2005; Magne, Schön & Besson, 2003; Patel, 2003; Patel et al., 1998), and 2) affective speech prosody over the course of development has never been as salient to individuals with ASD as it is to TD individuals (as is proposed by the EM theory), then the individuals with ASD will not be as sensitive to the emotional connotations of musical performances, thus leading to impairment in the present experiment.

An alternative explanation of this study’s results follows along these same lines. Kanner (1943) observed a deficit in emotion in children with ASD. He suggested that they “have come into the world with innate inability to form the usual, biologically provided affective contact with people” (p. 250). Although the present experiment does not rely on any social stimuli, the ability to perceive emotion in music performance may arise from experience with people, or from neural resources shared with the ability to understand emotion in others. Related to this is the mirror neuron hypothesis of emotion contagion, which suggests that, through mirror neuron activation, observation of a facial expression results in an automatic imitation of that expression, leading to an experience of that emotion (Williams, Whiten, Suddendorf & Perrett, 2001). There is some evidence that mirror neuron function is impaired in individuals with ASD (Dapretto et al., 2006), which would lead to a lack of an intuitive understanding of other people’s emotions. If much of an individual’s understanding of emotions originates from early experience with social affective contact, then a lack of this early experience because of a neural abnormality could also lead to a lack of understanding of emotion conveyed through abstract expressions such as speech, music, and art.

Impairment of another brain region may underlie a number of the deficits present in individuals with ASD. Emotion arises from more primitive brain regions (i.e., limbic structures) than do verbal abilities, executive function, and other capacities commonly impaired in ASD, which are associated with atypical development of cortical connections. Consistent with the EM and mirror neuron/early emotional contact theories, emotion perception may in fact underlie some of these cortically based impairments. A deficit in emotion recognition such as this may even underlie the lack of a predisposition to seek out and experience social stimuli. As postulated by Hobson (1991), a connection between emotion perception abilities and verbal abilities does not mean that low verbal abilities caused a deficit in emotion perception; it could just as easily be that low verbal abilities arose from a fundamental deficit in the ability to connect with other human beings.

The difference between diagnostic groups in verbal IQ may have contributed to some of the differences in expressivity ratings. However, the intergroup differences were still present after this factor was taken into account in an analysis of covariance. Among the separate group ANCOVAs, the WS group was the only one to show interactions between VIQ and other factors; VIQ interacted with expressivity level as well as with tonality. Notwithstanding these interactions, the results of the three-way ANCOVA, combined with the fact that the WS group, which has a much lower mean VIQ than the other groups, is relatively unimpaired in this task (at least for the minor nocturnes), suggest that group differences cannot be entirely attributed to differences in VIQ.

One possible limitation of this study was that our WS participants were recruited at a summer camp. The majority of the activities at the camp were typical of any summer camp, such as crafts, swimming, hiking, music and dancing, but the individuals at this particular camp may not have been representative of WS individuals in general, possibly being more interested and having greater background in music.

A remaining question is where the impairment itself lies: are the adolescents with ASD impaired at a perceptual or a cognitive level? Perhaps the ASD group can tell the difference among the different expressivity levels, but are unable to translate these perceptual differences into emotional differences. Or perhaps the deficit lies in basic perception of timing or amplitude variation in the performances. Further exploration of these individual factors will be needed to answer this question.

Future directions

To maximize consistency of results (and reduce sources of variability in the data), the stimuli used in this study were selections from a single composer, style, time period, and instrument, so we must be cautious about generalizing to other genres of music. Further research on other instruments and genres will be necessary before we can make any strong claims about general music perception in ASD.

Acknowledgments

The research reported herein was submitted in partial fulfillment of the requirements for the Ph.D. in Psychology at McGill University by the first author. A.B. is currently in the Division of Head & Neck Surgery at the David Geffen Medical School at UCLA. B.L. is currently at Boston College in the Department of Psychology. The research was funded by grants to D.J.L. from NAAR, SSHRC, NSERC, and CFI, and to U.B. by NIH. We would like to thank the participants and their families for their time. We are also grateful to Bennett Smith and Karle-Philip Zamor for technical assistance in programming the experiment and preparing the stimuli; to Kiley Hill, Anna Yam, and Bradley Vines for help with testing and recruiting participants; to Carla Himmelman and Anna Tirovolas for assistance with stimulus creation and piloting; and to Athena Vouloumanos for valuable comments.

Grants supporting this paper:

Grants to DJL:

Grant Sponsor: National Alliance for Autism Research (NAAR; now Autism Speaks), Research Grant #1066/DL/01-201-005-001-00-00

Grant Sponsor: Natural Sciences and Engineering Research Council of Canada (NSERC), Research Grant #228175-04

Grants to UB:

Grant Sponsor: National Institute of Child Health and Human Development (NICHD); Grant #: HD 33113 “Williams Syndrome: Linking Cognition Brain and Gene”

Grant Sponsor: National Institute of Neurological Disorders and Stroke (NINDS); Grant #: NS 22343 “Social Aspects of Communication”

Appendix

Description of stimulus creation

Temporal expressivity

We removed expressive temporal variation by creating a MIDI version that precisely followed the musical score and composer’s rhythmic markings, which we called mechanical. To accomplish this, we divided the length of the piece (in seconds) by the number of eighth notes to obtain the average duration of an eighth note. We then equalized all events in the piece to this value using ProTools. The note onset times were adjusted to be immediately after the end of the previous note, creating a “legato” feel appropriate for the piece.

We created an intermediate version using linear interpolation to obtain 50% of the temporal variance of the expressive version. We assigned each event a duration that was halfway between its duration in the original version and the mechanical version. The note onset times were altered in the same way, by creating inter-onset intervals (IOIs) that were halfway between the original and the mechanical version.

We added a random condition as an additional control, after considering the possibility that some participants might base their judgments on the overall variability of the performance. That is, the expressive version of the piece always contains greater variability in both timing and amplitude when compared with the altered versions. The random version of each piece was thus created by reassigning all of the note durations of the original performance randomly within note type groups; eighth notes’ durations were rearranged only among eighth notes, quarter notes among quarter notes, etc. The silent space between notes was randomized within groups of consecutive notes of the same type.
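A compact sketch of these three temporal manipulations, under the simplifying assumption that each note is represented by just a note type and a duration (values hypothetical; quarter notes treated as two eighth-note units):

```python
# Sketch of the temporal manipulations described in this appendix. Note values
# are hypothetical; quarter notes are assumed to span two eighth-note units.
import numpy as np

rng = np.random.default_rng(0)
note_types = np.array(["8th", "8th", "4th", "8th", "4th"])
durations = np.array([0.31, 0.27, 0.63, 0.35, 0.55])  # expressive, in seconds

# Mechanical: piece length divided by the number of eighth-note units gives an
# average eighth-note duration; each event is set to its notated multiple.
units = np.where(note_types == "8th", 1, 2)
grid = durations.sum() / units.sum()
mechanical = grid * units

# 50% version: each duration moved halfway toward its mechanical value.
half = (durations + mechanical) / 2

# Random: durations shuffled only within each note-type group.
random_version = durations.copy()
for t in np.unique(note_types):
    idx = np.flatnonzero(note_types == t)
    random_version[idx] = rng.permutation(durations[idx])
```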

Dynamic expressivity

We altered the piece’s dynamic expressivity in the same fashion as above. A mechanical version was created by assigning to each note the mean MIDI velocity (the portion of the MIDI signal that determines amplitude) of the expressive version. The expressive version contains virtually full amplitude variation (limited only by the 127 levels available in MIDI). For the intermediate version, we assigned 50% of the amplitude variation contained in the expressive version, again using linear interpolation. The random version of each piece was created by reassigning all of the MIDI velocities of the original performance randomly among notes, without regard for note type.

Pedaling

We altered the pedaling in the same fashion as the timing and dynamic expressivity, with one exception (the mechanical version), which is discussed below. We assigned 100% and 50% of the pedaling values in their respective conditions. Pedaling values refer to the height of the pedal: “0” signifies a pedal at its topmost, resting position, while “127” signifies a fully depressed pedal. The exception for the mechanical version came about because during the original performance the pianist used some pedal nearly all the time, and this served to create de facto note durations that were not captured by the MIDI file; in other words, the performer may have lifted his finger from a key while the note continued to sound due to pedaling. When we created the mechanical version with no pedaling at all, these note durations were altered in a way that noticeably distorted the performance. We thus assigned 25% of the pedaling value to the mechanical version. The pedaling profile for the random version was the same as that of the expressive version; introducing random pedaling was deemed to be outside the scope of our study, which focuses principally on amplitude and timing.

Footnotes

Recordings of the stimuli used in this paper can be found at: http://www.psych.mcgill.ca/labs/levitin/expressivity_asd.htm

[1] Due to an equipment malfunction, 21 participants (8 ASD and 13 TD) heard the stimuli at a reduced tempo of 80% of the original speed. A one-way repeated-measures ANOVA of tempo showed that there was no significant effect of tempo, F(1, 55) = .88, p = .35, so we combined the results for all analyses reported herein.

Literature cited

  1. Adolphs R, Sears L, Piven J. Abnormal processing of social information from faces in autism. Journal of Cognitive Neuroscience. 2001;13(2):232–240. doi: 10.1162/089892901564289.
  2. Altgassen M, Kliegel M, Williams TI. Pitch perception in children with autistic spectrum disorders. British Journal of Developmental Psychology. 2005;23(4):543–558. doi: 10.1348/026151005X26840.
  3. Baron-Cohen S, Spitz A, Cross P. Do children with autism recognise surprise? A research note. Cognition and Emotion. 1993;7(6):507–516. doi: 10.1080/02699939308409202.
  4. Bernstein L. The unanswered question: Six talks at Harvard (The Charles Eliot Norton Lectures). Cambridge, MA: Harvard University Press; 1976/1981.
  5. Bonnel A, Mottron L, Peretz I, Trudel M, Gallun E, Bonnel AM. Enhanced pitch sensitivity in individuals with autism: A signal detection analysis. Journal of Cognitive Neuroscience. 2003;15(2):226–235. doi: 10.1162/089892903321208169.
  6. Boucher J, Lewis V, Collis GM. Voice processing abilities in children with autism, children with specific language impairments, and young typically developing children. Journal of Child Psychology and Psychiatry. 2000;41(7):847–857. doi: 10.1111/1469-7610.00672.
  7. Bruneau N, Bonnet-Brilhault F, Gomot M, Adrien JL, Barthélémy C. Cortical auditory processing and communication in children with autism: Electrophysiological/behavioral relations. International Journal of Psychophysiology. 2003;51(1):17–25. doi: 10.1016/S0167-8760(03)00149-1.
  8. Bruneau N, Roux S, Adrien JL, Barthélémy C. Auditory associative cortex dysfunction in children with autism: Evidence from late auditory evoked potentials (N1 wave-T complex). Clinical Neurophysiology. 1999;110(11):1927–1934. doi: 10.1016/S1388-2457(99)00149-2.
  9. Castelli F. Understanding emotions from standardized facial expressions in autism and normal development. Autism. 2005;9(4):428–449. doi: 10.1177/1362361305056082.
  10. Constantino JN, Davis S, Todd R, Schindler M, Gross M, Brophy S, et al. Validation of a brief quantitative measure of autistic traits: Comparison of the Social Responsiveness Scale with the Autism Diagnostic Interview-Revised. Journal of Autism and Developmental Disorders. 2003;33:427–433. doi: 10.1023/A:1025014929212.
  11. Cuddy LL, Balkwill LL, Peretz I, Holden RR. Musical difficulties are rare: A study of “tone deafness” among university students. Annals of the New York Academy of Sciences. 2005;1060:311–324. doi: 10.1196/annals.1360.026.
  12. Dalla Bella S, Peretz I, Rousseau L, Gosselin N. A developmental study of the affective value of tempo and mode in music. Cognition. 2001;80(3):B1–B10. doi: 10.1016/s0010-0277(00)00136-0.
  13. Dapretto M, Davies MS, Pfeifer JH, Scott AA, Sigman M, Bookheimer SY, Iacoboni M. Understanding emotions in others: Mirror neuron dysfunction in children with autism spectrum disorders. Nature Neuroscience. 2006;9(1):28–30. doi: 10.1038/nn1611.
  14. Don AJ, Schellenberg GE, Rourke BP. Music and language skills of children with Williams Syndrome. Child Neuropsychology. 1999;5(3):154–170. doi: 10.1076/chin.5.3.154.7337.
  15. Downs A, Smith T. Emotional understanding, cooperation, and social behavior in high-functioning children with autism. Journal of Autism and Developmental Disorders. 2004;34(6):625–635. doi: 10.1007/s10803-004-5284-0.
  16. Fombonne E. Epidemiology of autistic disorder and other pervasive developmental disorders. Journal of Clinical Psychiatry. 2005;66(suppl 10):3–8.
  17. Gabrielsson A. The performance of music. In: Deutsch D, editor. The Psychology of Music. 2. San Diego: Academic Press; 1999. pp. 501–602.
  18. Gervais H, Belin P, Boddaert N, Leboyer M, Coez A, Sfaello I, et al. Abnormal cortical voice processing in autism. Nature Neuroscience. 2004;7(8):801–802. doi: 10.1038/nn1291.
  19. Ghaziuddin M. Defining the behavioral phenotype of Asperger syndrome. Journal of Autism and Developmental Disorders. 2008;38:138–142. doi: 10.1007/s10803-007-0371-7.
  20. Golan O, Baron-Cohen S, Hill JJ, Rutherford M. The ‘Reading the Mind in the Voice’ Test-Revised: A study of complex emotion recognition in adults with and without autism spectrum conditions. Journal of Autism and Developmental Disorders. 2007;37(6):1096–1106. doi: 10.1007/s10803-006-0252-5.
  21. Gregory AH, Worrall L, Sarge A. The development of emotional responses to music in young children. Motivation & Emotion. 1996;20(4):341–348. doi: 10.1007/BF02856522.
  22. Gross TF. The perception of four basic emotions in human and nonhuman faces by children with autism and other developmental disabilities. Journal of Abnormal Child Psychology. 2004;32(5):469–480. doi: 10.1023/B:JACP.0000037777.17698.01.
  23. Grossman JB, Klin A, Carter AS, Volkmar FR. Verbal bias in recognition of facial emotions in children with Asperger syndrome. Journal of Child Psychology and Psychiatry. 2000;41(3):369–379. doi: 10.1111/1469-7610.00621.
  24. Heaton P. Pitch memory, labelling and disembedding in autism. Journal of Child Psychology and Psychiatry. 2003;44(4):543–551. doi: 10.1111/1469-7610.00143.
  25. Heaton P. Interval and contour processing in autism. Journal of Autism and Developmental Disorders. 2005;35(6):787–793. doi: 10.1007/s10803-005-0024-7.
  26. Heaton P, Allen R, Williams K, Cummins O, Happé F. Do social and cognitive deficits curtail musical understanding? Evidence from autism and Down syndrome. British Journal of Developmental Psychology. 2008;26(2):171–182. doi: 10.1348/026151007X206776.
  27. Heaton P, Hermelin B, Pring L. Autism and pitch processing: A precursor for savant musical ability. Music Perception. 1998;15(3):291–305.
  28. Heaton P, Hermelin B, Pring L. Can children with autistic spectrum disorders perceive affect in music? An experimental investigation. Psychological Medicine. 1999;29:1405–1410. doi: 10.1017/S0033291799001221.
  29. Hevner K. The affective character of the major and minor modes in music. American Journal of Psychology. 1935;47:103–118.
  30. Hobson RP. Methodological issues for experiments on autistic individuals’ perception and understanding of emotion. Journal of Child Psychology and Psychiatry. 1991;32(7):1135–1158. doi: 10.1111/j.1469-7610.1991.tb00354.x.
  31. Hobson RP, Ouston J, Lee A. Emotion recognition in autism: Coordinating faces and voices. Psychological Medicine. 1988;18(4):911–923. doi: 10.1017/S0033291700009843.
  32. Hopyan T, Dennis M, Weksberg R, Cytrynbaum C. Music skills and the expressive interpretation of music in children with Williams-Beuren Syndrome: Pitch, rhythm, melodic imagery, phrasing, and musical affect. Child Neuropsychology. 2001;7(1):42–53. doi: 10.1076/chin.7.1.42.3147.
  33. Jarrold C, Brock J. To match or not to match? Methodological issues in autism-related research. Journal of Autism and Developmental Disorders. 2004;34(1):81–86. doi: 10.1023/B:JADD.0000018078.82542.ab.
  34. Jones CR, Happé F, Baird G, Simonoff E, Marsden AJ, Tregay J, Charman T. Auditory discrimination and auditory sensory behaviours in autism spectrum disorders. Neuropsychologia. 2009. doi: 10.1016/j.neuropsychologia.2009.06.015.
  35. Juslin PN, Laukka P. Communication of emotions in vocal expression and music performance: Different channels, same code? Psychological Bulletin. 2003;129(5):770–814. doi: 10.1037/0033-2909.129.5.770.
  36. Juslin PN, Laukka P. Expression, perception, and induction of musical emotions: A review and a questionnaire study of everyday listening. Journal of New Music Research. 2004;33(3):217–238. doi: 10.1080/0929821042000317813.
  37. Kamenetsky SB, Hill DS, Trehub SE. Effect of tempo and dynamics on the perception of emotion in music. Psychology of Music. 1997;25:149–160. doi: 10.1177/0305735697252005.
  38. Kanner L. Autistic disturbances of affective contact. Nervous Child. 1943;2:217–250.
  39. Kennedy M. The Oxford Dictionary of Music. 2. New York: Oxford University Press; 1999. Retrieved from http://www.oxfordmusiconline.com/subscriber/article/opr/t237/e1771?q=cantabile&source=omo_t237&search=quick&pos=1&_start=1#firsthit.
  40. Kivy P. The corded shell: Reflections on musical expression. Princeton, NJ: Princeton University Press; 1980.
  41. Klin A, Jones W, Schultz RT, Volkmar FR. The Enactive Mind--from actions to cognition: Lessons from autism. In: Volkmar FR, Paul R, Klin A, Cohen D, editors. Handbook of autism and pervasive developmental disorders: Vol. 1. Diagnosis, development, neurobiology, and behavior. 3. Hoboken, NJ: John Wiley & Sons Inc; 2005.
  42. Koelsch S, Siebel WA. Towards a neural basis of music perception. Trends in Cognitive Sciences. 2005;9(12):578–584. doi: 10.1016/j.tics.2005.10.001.
  43. Korenberg JR, Chen XN, Hirota H, Lai Z, Bellugi U, Burian D, …Matsuoka R. VI. Genome structure and cognitive map of Williams Syndrome. Journal of Cognitive Neuroscience. 2000;12(1, Supplement 1):89–107. doi: 10.1162/089892900562002.
  44. Korpilahti P, Jansson-Verkasalo E, Mattila ML, Kuusikko S, Suominen K, Rytky S, …Moilanen I. Processing of affective speech prosody is impaired in Asperger syndrome. Journal of Autism and Developmental Disorders. 2007;37(8):1539–1549. doi: 10.1007/s10803-006-0271-2.
  45. Levitin DJ. Musical behavior in a neurogenetic developmental disorder: Evidence from Williams syndrome. Annals of the New York Academy of Sciences. 2005;1060:325–334. doi: 10.1196/annals.1360.027.
  46. Levitin DJ, Bellugi U. Musical ability in individuals with Williams’ Syndrome. Music Perception. 1998;15(4):357–389.
  47. Levitin DJ, Cole K, Chiles M, Lai Z, Lincoln A, Bellugi U. Characterizing the musical phenotype in individuals with Williams syndrome. Child Neuropsychology. 2004;10(4):223–247. doi: 10.1080/09297040490909288.
  48. Levitin DJ, Menon V. Musical structure is processed in “language” areas of the brain: A possible role for Brodmann Area 47 in temporal coherence. NeuroImage. 2003;20(4):2142–2152. doi: 10.1016/j.neuroimage.2003.08.016.
  49. Levitin DJ, Menon V. The neural locus of temporal structure and expectancies in music: Evidence from functional neuroimaging at 3 Tesla. Music Perception. 2005;22(3):563–575. doi: 10.1525/mp.2005.22.3.563.
  50. Lincoln AJ, Courchesne E, Harms L, Allen M. Sensory modulation of auditory stimuli in children with autism and receptive developmental language disorder: Event-related brain potential evidence. Journal of Autism and Developmental Disorders. 1995;25(5):521–539. doi: 10.1007/BF02178298.
  51. Loveland KA. Social-emotional impairment and self-regulation in autism spectrum disorders. In: Nadel J, Muir D, editors. Emotional Development: Recent Research Advances. New York: Oxford University Press; 2005. pp. 365–382.
  52. Loveland KA, Tunali-Kotoski B, Chen Y, Ortegon J, Pearson DA, Brelsford KA, Gibbs MC. Emotion recognition in autism: Verbal and nonverbal information. Development and Psychopathology. 1997;9(3):579–593. doi: 10.1017/S0954579497001351.
  53. Magne C, Schön D, Besson M. Prosodic and melodic processing in adults and children: Behavioral and electrophysiologic approaches. Annals of the New York Academy of Sciences. 2003;999:461–476. doi: 10.1196/annals.1284.056.
  54. Mazefsky CA, Oswald DP. Emotion perception in Asperger’s syndrome and high-functioning autism: The importance of diagnostic criteria and cue intensity. Journal of Autism and Developmental Disorders. 2007;37(6):1086–1095. doi: 10.1007/s10803-006-0251-6.
  55. Mervis CB, Robinson BF, Bertrand J, Morris CA, Klein-Tasman BP, Armstrong SC. The Williams Syndrome cognitive profile. Brain & Cognition. 2000;44(3):604–628. doi: 10.1006/brcg.2000.1232.
  56. Mottron L, Peretz I, Ménard E. Local and global processing of music in high-functioning persons with autism: Beyond central coherence? Journal of Child Psychology and Psychiatry. 2000;41(8):1057–1065. doi: 10.1111/1469-7610.00693.
  35. Juslin PN, Laukka P. Communication of emotions in vocal expression and music performance: Different channels, same code? Psychological Bulletin. 2003;129(5):770–814. doi: 10.1037/0033-2909.129.5.770. [DOI] [PubMed] [Google Scholar]
  36. Juslin PN, Laukka P. Expression, perception, and induction of musical emotions: A review and a questionnaire study of everyday listening. Journal of New Music Research. 2004;33(3):217–238. doi: 10.1080/0929821042000317813. [DOI] [Google Scholar]
  37. Kamenetsky SB, Hill DS, Trehub SE. Effect of tempo and dynamics on the perception of emotion in music. Psychology of Music. 1997;25:149–160. doi: 10.1177/0305735697252005. [DOI] [Google Scholar]
  38. Kanner L. Autistic disturbances of affective contact. Nervous Child. 1943;2:217–250. [PubMed] [Google Scholar]
  39. Kennedy M. The Oxford Dictionary of Music. 2. New York: Oxford University Press; 1999. Retrieved from http://www.oxfordmusiconline.com/subscriber/article/opr/t237/e1771?q=cantabile&source=omo_t237&search=quick&pos=1&_start=1#firsthit. [Google Scholar]
  40. Kivy P. The corded shell: Reflections on musical expression. Princeton, NJ: Princeton University Press; 1980. [Google Scholar]
  41. Klin A, Jones W, Schultz RT, Volkmar FR. The Enactive Mind--from actions to cognition: Lessons from autism. In: Volkmar FR, Paul R, Klin A, Cohen D, editors. Handbook of autism and pervasive developmental disorders: Vol. 1. Diagnosis, development, neurobiology, and behavior. 3. Hoboken, NJ: John Wiley & Sons Inc; 2005. [Google Scholar]
42. Koelsch S, Siebel WA. Towards a neural basis of music perception. Trends in Cognitive Sciences. 2005;9(12):578–584. doi: 10.1016/j.tics.2005.10.001.
43. Korenberg JR, Chen XN, Hirota H, Lai Z, Bellugi U, Burian D, …Matsuoka R. VI. Genome structure and cognitive map of Williams Syndrome. Journal of Cognitive Neuroscience. 2000;12(1, Supplement 1):89–107. doi: 10.1162/089892900562002.
44. Korpilahti P, Jansson-Verkasalo E, Mattila ML, Kuusikko S, Suominen K, Rytky S, …Moilanen I. Processing of affective speech prosody is impaired in Asperger syndrome. Journal of Autism and Developmental Disorders. 2007;37(8):1539–1549. doi: 10.1007/s10803-006-0271-2.
45. Levitin DJ. Musical behavior in a neurogenetic developmental disorder: Evidence from Williams syndrome. Annals of the New York Academy of Sciences. 2005;1060:325–334. doi: 10.1196/annals.1360.027.
46. Levitin DJ, Bellugi U. Musical abilities in individuals with Williams syndrome. Music Perception. 1998;15(4):357–389.
47. Levitin DJ, Cole K, Chiles M, Lai Z, Lincoln A, Bellugi U. Characterizing the musical phenotype in individuals with Williams syndrome. Child Neuropsychology. 2004;10(4):223–247. doi: 10.1080/09297040490909288.
48. Levitin DJ, Menon V. Musical structure is processed in “language” areas of the brain: A possible role for Brodmann Area 47 in temporal coherence. NeuroImage. 2003;20(4):2142–2152. doi: 10.1016/j.neuroimage.2003.08.016.
49. Levitin DJ, Menon V. The neural locus of temporal structure and expectancies in music: Evidence from functional neuroimaging at 3 Tesla. Music Perception. 2005;22(3):563–575. doi: 10.1525/mp.2005.22.3.563.
50. Lincoln AJ, Courchesne E, Harms L, Allen M. Sensory modulation of auditory stimuli in children with autism and receptive developmental language disorder: Event-related brain potential evidence. Journal of Autism and Developmental Disorders. 1995;25(5):521–539. doi: 10.1007/BF02178298.
51. Loveland KA. Social-emotional impairment and self-regulation in autism spectrum disorders. In: Nadel J, Muir D, editors. Emotional Development: Recent Research Advances. New York: Oxford University Press; 2005. pp. 365–382.
52. Loveland KA, Tunali-Kotoski B, Chen Y, Ortegon J, Pearson DA, Brelsford KA, Gibbs MC. Emotion recognition in autism: Verbal and nonverbal information. Development and Psychopathology. 1997;9(3):579–593. doi: 10.1017/S0954579497001351.
53. Magne C, Schön D, Besson M. Prosodic and melodic processing in adults and children: Behavioral and electrophysiologic approaches. Annals of the New York Academy of Sciences. 2003;999:461–476. doi: 10.1196/annals.1284.056.
54. Mazefsky CA, Oswald DP. Emotion perception in Asperger’s syndrome and high-functioning autism: The importance of diagnostic criteria and cue intensity. Journal of Autism and Developmental Disorders. 2007;37(6):1086–1095. doi: 10.1007/s10803-006-0251-6.
55. Mervis CB, Robinson BF, Bertrand J, Morris CA, Klein-Tasman BP, Armstrong SC. The Williams Syndrome cognitive profile. Brain and Cognition. 2000;44(3):604–628. doi: 10.1006/brcg.2000.1232.
56. Mottron L, Peretz I, Ménard E. Local and global processing of music in high-functioning persons with autism: Beyond central coherence? Journal of Child Psychology and Psychiatry. 2000;41(8):1057–1065. doi: 10.1111/1469-7610.00693.
57. Nilsonne Å, Sundberg J. Differences in ability of musicians and nonmusicians to judge emotional state from the fundamental frequency of voice samples. Music Perception. 1985;2(4):507–516.
58. Ozonoff S, Pennington BF, Rogers SJ. Are there emotion perception deficits in young autistic children? Journal of Child Psychology and Psychiatry. 1990;31(3):343–361. doi: 10.1111/j.1469-7610.1990.tb01574.x.
59. Palmer C. Music performance. Annual Review of Psychology. 1997;48:115–138. doi: 10.1146/annurev.psych.48.1.115.
60. Parncutt R, Troup M. Piano. In: Parncutt R, McPherson GE, editors. The science and psychology of music performance: Creative strategies for teaching and learning. New York: Oxford University Press; 2002. pp. 285–302.
61. Patel AD. Language, music, syntax and the brain. Nature Neuroscience. 2003;6(7):674–681. doi: 10.1038/nn1082.
62. Patel AD, Peretz I, Tramo M, Labreque R. Processing prosodic and musical patterns: A neuropsychological investigation. Brain and Language. 1998;61:123–144. doi: 10.1006/brln.1997.1862.
63. Paul R, Augustyn A, Klin A, Volkmar FR. Perception and production of prosody by speakers with autism spectrum disorders. Journal of Autism and Developmental Disorders. 2005;35(2):205–220. doi: 10.1007/s10803-004-1999-1.
64. Peppé S, McCann J, Gibbon F, O’Hare A, Rutherford M. Receptive and expressive prosodic ability in children with high-functioning autism. Journal of Speech, Language and Hearing Research. 2007;50:1097–1115. doi: 10.1044/1092-4388(2007/071).
65. Pierce K, Glad KS, Schreibman L. Social perception in children with autism: An attentional deficit? Journal of Autism and Developmental Disorders. 1997;27(3):265–282. doi: 10.1023/A:1025898314332.
66. Repp BH. Quantitative effects of global tempo on expressive timing in music performance: Some perceptual evidence. Music Perception. 1995;13(1):39–57.
67. Rose F, Lincoln A, Lai Z, Ene M, Searcy Y, Bellugi U. Orientation and affective expression effects on face recognition in Williams Syndrome and autism. Journal of Autism and Developmental Disorders. 2007;37(3):513–522. doi: 10.1007/s10803-006-0200-4.
68. Russell JA. A circumplex model of affect. Journal of Personality and Social Psychology. 1980;39(6):1161–1178. doi: 10.1037/h0077714.
69. Rutter M, Bailey A, Lord C. SCQ: Social Communication Questionnaire. Los Angeles, CA: Western Psychological Services; 2003.
70. Samson F, Mottron L, Jemel B, Belin P, Ciocca V. Can spectro-temporal complexity explain the autistic pattern of performance on auditory tasks? Journal of Autism and Developmental Disorders. 2006;36(1):65–76. doi: 10.1007/s10803-005-0043-4.
71. Saulnier CA, Klin A. Brief report: Social and communication abilities and disabilities in higher functioning individuals with autism and Asperger syndrome. Journal of Autism and Developmental Disorders. 2007;37(4):788–793. doi: 10.1007/s10803-006-0288-6.
72. Skwerer DP, Schofield C, Verbalis A, Faja S, Tager-Flusberg H. Receptive prosody in adolescents and adults with Williams syndrome. Language and Cognitive Processes. 2007;22(2):247–271. doi: 10.1080/01690960600632671.
73. Smith B. Psiexp: An Environment for Psychoacoustic Experimentation Using the IRCAM Musical Workstation. Paper presented at the Society for Music Perception and Cognition Conference ‘95; Berkeley: University of California; 1995.
74. Tantam D, Monaghan L, Nicholson H, Stirling J. Autistic children’s ability to interpret faces: A research note. Journal of Child Psychology and Psychiatry. 1989;30(4):623–630. doi: 10.1111/j.1469-7610.1989.tb00274.x.
75. Taylor CA. The physics of musical sounds. Aylesbury, England: English Universities Press Ltd; 1965.
76. Thaut MH. Visual versus auditory (musical) stimulus preferences in autistic children: A pilot study. Journal of Autism and Developmental Disorders. 1987;17(3):425–432. doi: 10.1007/BF01487071.
77. Thompson WF, Schellenberg EG, Husain G. Decoding speech prosody: Do music lessons help? Emotion. 2004;4(1):46–64. doi: 10.1037/1528-3542.4.1.46.
78. Wang A, Lee SS, Sigman M, Dapretto M. Reading affect in the face and voice: Neural correlates of interpreting communicative intent in children and adolescents with autism spectrum disorders. Archives of General Psychiatry. 2007;64(6):698–708. doi: 10.1001/archpsyc.64.6.698.
79. Weeks SJ, Hobson RP. The salience of facial expression for autistic children. Journal of Child Psychology and Psychiatry. 1987;28(1):137–152. doi: 10.1111/j.1469-7610.1987.tb00658.x.
80. Williams JHG, Whiten A, Suddendorf T, Perrett DI. Imitation, mirror neurons and autism. Neuroscience and Biobehavioral Reviews. 2001;25:287–295. doi: 10.1016/s0149-7634(01)00014-8.
81. Williams Syndrome Camps. 2010. Retrieved March 8, 2010, from http://www.williams-syndrome.org/news/camps.html.
