Abstract
The Williams syndrome (WS) cognitive profile is characterized by relative strengths in face processing, an attentional bias towards social stimuli, and an increased affinity and emotional reactivity to music. An audio-visual integration study examined the effects of auditory emotion on visual (social/non-social) affect identification in individuals with WS and typically developing (TD) and developmentally delayed (DD) controls. The social bias in WS was hypothesized to manifest as an increased ability to process social relative to non-social affect, and as a reduced auditory influence in social contexts. The control groups were hypothesized to perform similarly across conditions. The results showed that while participants with WS exhibited performance indistinguishable from that of TD controls in identifying facial affect, DD controls performed significantly more poorly. The TD group outperformed the WS and DD groups in identifying non-social affect. The results suggest that emotionally evocative music facilitated the ability of participants with WS to process emotional facial expressions. These surprisingly strong face-processing skills in individuals with WS may have been due to the effects of combining social and music stimuli, and to a reduction in anxiety produced by the music in particular. Several directions for future research are suggested.
Keywords: Williams syndrome, Affect, Audio-visual Integration, Facial expression, Music
INTRODUCTION
Williams syndrome (WS) is a multifactorial genetic disorder resulting from a hemizygous deletion of 25–30 genes on chromosome 7q11.23 (Ewart, Morris, Atkinson, Jin, Sternes, Spallone, Stock, Leppert, & Keating, 1993; Korenberg, Chen, Hirota, Bellugi, Burian, Roe, & Matsuoka, 2000). It is associated with a unique combination of distinct facial characteristics, widespread clinical symptoms, and an asymmetrical, complex profile of cognitive and behavioral features (see Järvinen-Pasley, Bellugi, Reilly, Mills, Galaburda, Reiss, & Korenberg, 2008; Meyer-Lindenberg, Mervis, & Berman, 2006; Morris & Mervis, 2000, for reviews). The neuropsychological profile is characterized by a mean IQ estimate between 40 and 90 (Searcy, Lincoln, Rose, Klima, Bavar, & Korenberg, 2004), with a typically higher verbal IQ (VIQ) than performance IQ (PIQ) (Howlin, Davies, & Udwin, 1998; Udwin & Yule, 1990). In addition, the neurocognitive phenotype is characterized by a unique pattern of dissociations: while relative strengths are evident in socially relevant information processing (e.g., in face and language), significant impairments are apparent in non-verbal intellectual functioning (e.g., planning, problem solving, spatial and numerical cognition) (Bellugi, Wang, & Jernigan, 1994; Bellugi, Lichtenberger, Jones, Lai, & St. George, 2000). However, rather than being “intact”, evidence indicates that near-typical performance in some socially relevant tasks, such as face processing, is associated with atypical neural processing (e.g., Haas, Mills, Yam, Hoeft, Bellugi, & Reiss, 2009; Mills, Alvarez, St. George, Appelbaum, Bellugi, & Neville, 2000; Mobbs, Garrett, Menon, Rose, Bellugi, & Reiss, 2004), which may be related to significantly increased attention to faces (Riby & Hancock, 2008, 2009), as well as to a relative enlargement in some major brain structures involved in social information processing (Reiss, Eckert, Rose, Karchemskiy, Kesler, Chang, Reynolds, Kwon, & Galaburda, 2004). Emerging data suggest that at least some of the characteristic “excessive” social functions, specifically an increased tendency to approach unfamiliar people, can be linked to the genetic features of the WS full deletion (Dai, Bellugi, Chen, Pulst-Korenberg, Järvinen-Pasley, Tirosh-Wagner, Eis, Mills, Simon, Searcy, & Korenberg, 2009). It remains to be investigated, however, whether areas of deficit may be common to general intellectual impairment. Dai et al. (2009) report evidence from a rare individual with a deletion of a subset of the WS genes, who displays a subset of the WS features. These data suggest that GTF2I, the gene telomeric to GTF2IRD1, may contribute disproportionately to specific aspects of social behavior, such as indiscriminate approach to strangers, in WS. However, the pathways of the “dissociation” characterizing the WS social phenotype, that is, the increased sociability and emotionality on one hand, and the clear limitations in complex social cognition on the other, are currently poorly understood.
While great progress has been made in characterizing aspects of the social phenotype of WS, and in mapping out some of its major behavioral components, a somewhat asymmetrical profile has emerged, with major enigmas remaining with respect to the “hypersocial” phenotype. Perhaps the most robust behavioral characteristic is an increased drive for social interaction, including the initiation of social contacts with unknown people, and increased social engagement (e.g., eye contact, use of language, staring at the faces of others) - a feature readily observable even in infancy (Doyle, Bellugi, Korenberg, & Graham, 2004; Jones, Bellugi, Lai, Chiles, Reilly, Lincoln, & Adolphs, 2000). Other characteristics that appear unique to this syndrome include a relative strength in identifying (e.g., Rossen, Jones, Wang, & Klima, 1996) and remembering (Udwin & Yule, 1991) faces, an empathetic, friendly, and emotional personality (Tager-Flusberg & Sullivan, 2000; Klein-Tasman & Mervis, 2003), as well as socially engaging language in narratives (for a recent review, see Järvinen-Pasley et al., 2008; Gothelf, Searcy, Reilly, Lai, Lanre-Amos, Mills, Korenberg, Galaburda, Bellugi, & Reiss, 2008). Remarkably, the overly social behavior and language of individuals with WS, relative to typical individuals, extend across different cultures (Järvinen-Pasley et al., 2008; Zitzer-Comfort, Doyle, Masataka, Korenberg, & Bellugi, 2007). At the same time, the social profile of WS is poorly understood and appears paradoxical, in that, for example, the emotional and empathic personality is accompanied by significant deficits in social-perceptual abilities (Gagliardi, Frigerio, Burt, Cazzaniga, Perrett, & Borgatti, 2003; Plesa-Skwerer, Verbalis, Schofield, Faja, & Tager-Flusberg, 2005; Plesa-Skwerer, Faja, Schofield, Verbalis, & Tager-Flusberg, 2006; Porter, Coltheart, & Langdon, 2007). This pattern of strengths and deficits suggests that social functioning may have several dissociable dimensions, including affiliative drive and certain aspects of face and social-perceptual processing.
Within the WS phenotype, increased sociability is accompanied by an intriguing profile of auditory processing. Reports suggest that individuals with WS demonstrate a high affinity for music, including a high engagement in musical activities (Don, Schellenberg, & Rourke, 1999; Levitin, Cole, Chiles, Lai, Lincoln, & Bellugi, 2005a), which may be linked to increased activation of the amygdala, reduced planum temporale asymmetries, and augmented size of the superior temporal gyrus (STG) (Galaburda & Bellugi, 2000; Reiss et al., 2004; Levitin, Menon, Schmitt, Eliez, White, Glover, Kadis, Korenberg, Bellugi, & Reiss, 2003). However, this is not to say that individuals with WS demonstrate enhanced music processing abilities (e.g., Deruelle, Schön, Rondan, & Mancini, 2005). In addition, in as many as 95% of cases, WS is accompanied by hyperacusis, including certain sound aversions and attractions (Levitin, Cole, Lincoln, & Bellugi, 2005b; Gothelf, Farber, Raveh, Apter, & Attias, 2005).
Of specific interest to the current study is the notion that in individuals with WS, heightened emotionality has been reported to extend from their social interactions with others (e.g., Reilly, Losh, Bellugi, & Wulfeck, 2004; Tager-Flusberg & Sullivan, 2000) to the experience of music (Don et al., 1999; Levitin et al., 2005a). In one study, Levitin et al. (2005a) utilized a comprehensive parental questionnaire designed to characterize the musical phenotype in WS. Participants included 130 children and adults with WS (M = 18.6 years), as well as controls with autism, Down syndrome, and typical development (TD) (30 in each group), matched for chronological age (CA). Findings suggested that people with WS exhibited a higher degree of emotionality than Down syndrome and TD groups when listening to music. Individuals with WS were also reported to show greater and earlier interest in music than the comparison groups. Similarly, a study by Don and colleagues (1999) reported that, in addition to inducing feelings of happiness, music had a significantly greater propensity to induce sadness in individuals with WS than in the comparison groups (TD, autism, Down syndrome). These findings are interesting in light of the fact that a genetic link between musicality and sociability has been postulated (Huron, 2001). More specifically, according to this view, during the history of human evolution, music is assumed to have played a role in social communication and social bonding, and thus shared genes may be implicated in both social and musical behaviors. However, reports of increased emotionality in response to music are largely anecdotal in the WS literature. A question of significant interest, therefore, concerns the ways in which musical information may influence the processing of emotion in other modalities and domains in individuals with WS.
Social behavior is arguably tightly coupled to emotion, and the understanding of the emotions of others is critical for successful social interactions. Previous evidence from affect identification studies utilizing standardized face and voice stimuli has robustly established that individuals with WS are significantly impaired when compared to TD CA-matched controls, but perform at the level expected for their mental age (MA). For example, a study by Plesa-Skwerer et al. (2005) included dynamic face stimuli with happy, sad, angry, fearful, disgusted, surprised, and neutral expressions. The findings showed that TD participants were significantly better at labeling disgusted, neutral, and fearful faces than their counterparts with WS. Similarly, a study by Gagliardi et al. (2003) included animated face stimuli exhibiting neutral, angry, disgusted, afraid, happy, and sad expressions. The results showed that participants with WS exhibited noticeably lower levels of performance than CA-matched controls, particularly with disgusted, fearful, and sad face stimuli. Another study by Plesa-Skwerer et al. (2006) utilized The Diagnostic Analysis of Nonverbal Accuracy – DANVA2 test (Nowicki & Duke, 1994), which includes happy, sad, angry, and fearful expressions, across both voice and still face stimuli. The results showed that, across both visual and auditory domains, individuals with WS exhibited significantly poorer performance than CA-matched controls with all but the happy expressions. In all of the above-mentioned studies, the performance of participants with WS was indistinguishable from that of MA-matched controls. However, these studies fail to elucidate the potential interactions between emotion processing across different domains (e.g., visual and auditory, social and non-social), and reports of increased emotionality in WS.
Affective expressions are often multimodal, that is, simultaneous and often complementary information is provided by, for example, a face and a voice. Thus, the integration of information from visual and auditory sources is an important prerequisite for successful social interaction, particularly during face-to-face conversation. Recent studies with typical individuals utilizing multi-modal affective face/voice stimuli have shown that a congruence in emotion between the two facilitates the processing of emotion (Dolan, Morris, & de Gelder, 2001); that multimodal presentation results in faster and more accurate emotion processing than unimodal presentation (Collignon, Girard, Gosselin, Roy, Saint-Amour, Lassonde, & Lepore, 2008); that information obtained via one sense affects the information-processing of another sensory modality, even when individuals are instructed to attend to only one modality (de Gelder & Vroomen, 2000; Ethofer, Anders, Erb, Droll, Royen, Saur, Reiterer, Grodd, & Wildgruber, 2006); and that visually presented affect tends to be more salient than aurally presented emotion (Collignon et al., 2008). In the context of music, research has shown that musicians’ facial expressions have a significant impact on the experience of emotion in the musical sound (Thompson, Graham, & Russo, 2005; Thompson, Russo & Quinto, 2008; Vines, Krumhansl, Wanderley, & Levitin, 2006). These results suggest that the processes underlying the integration of facial and vocal information are automatic. Only one known study has examined audiovisual integration abilities in WS (Böhning, Campbell, & Karmiloff-Smith, 2002). In this study, which focused upon natural speech perception, individuals with WS were found to be impaired in visual but not auditory speech identification, with decreased effects of visual information upon auditory processing in the audiovisual speech condition. Nevertheless, individuals with WS demonstrated audiovisual integration of speech, albeit to a lesser degree than typical controls.
A central question that arises from the literature reviewed above concerns the role of a face, or a social context, for multimodal emotion processing in individuals with WS. Thus, the aim of the present experiment was to compare the multi-sensory processing of affect in individuals with WS and in both TD and DD controls, and to test the possibility that a “face capture” in WS (e.g., Riby & Hancock, 2008, 2009) may extend to audio-visual contexts. That is, the presence of a face stimulus may attract the attention of individuals with WS at the cost of attending to other stimuli. Given the strong attraction to music in individuals with WS, and their reportedly increased emotionality in response to such stimuli, novel music segments conveying happy, sad, and fearful emotion were used as auditory stimuli. The three emotions were selected because they represent relatively basic affective states, and there is a sizeable literature documenting the relevant abilities of individuals with WS within the visual domain (e.g., Plesa-Skwerer et al., 2005, 2006; Gagliardi et al., 2003; Porter et al., 2007). The auditory segments were paired with either standardized images of facial expressions in the social condition, or with standardized images of objects, scenes, and animals conveying the same affective states as the faces in the non-social condition, in both audio-visually congruent and incongruent conditions. The experimental tasks were first, to identify the affect conveyed by the visual image, and second, to rate its intensity. To directly compare the influences of auditory emotion upon visual affect processing across social and non-social domains, that is, to examine whether the face as a stimulus may have a special status for those with WS, participants were required to respond to the visually presented emotion while ignoring the auditory affect. Although previous evidence has indicated higher auditory than visual performance in audiovisual integration contexts for individuals with WS (Böhning et al., 2002), that study did not examine emotion processing. The current design, focusing on the visual processing, allowed for the direct examination of the potential presence of the “face capture”. The judgment of emotional intensity in the visual domain was included as a measure of experienced emotionality.
In light of the unusual social profile in WS, specifically with respect to the atypically intense interest in people and faces, we predicted that the effects of auditory emotion would be relatively weaker in social, as compared to non-social contexts, across both congruent and incongruent conditions. More specifically, we hypothesized that because of their atypical social profile, individuals with WS would exhibit a “face capture”, resulting in a reduced interference of auditory emotion with stimuli comprising human faces. Thus, this pattern would be manifested as higher visual emotion processing ability in social, as compared to non-social contexts, in individuals with WS. Crucially, in addition, we hypothesized that the reduced auditory interference within the social domain in WS would specifically manifest as relatively high levels of visual performance with the audio-visually incongruent social stimuli (i.e., similar to that with congruent stimuli), reflecting the fact that facial emotion processing would not be affected by a conflict in the emotional content between the visual and auditory stimuli. By contrast, we hypothesized that stronger effects of auditory emotion would be apparent in the non-social condition, manifested as lower visual processing performance overall and mirroring the pattern of performance for TD controls with an advantage for emotionally congruent relative to emotionally incongruent audiovisual stimuli. We hypothesized that both control groups would show similar levels of affect identification performance across the social and non-social stimuli, with higher levels of performance for the audio-visually congruent as compared to the incongruent stimuli across domains; we also expected that the TD group would outperform the DD group overall. Based upon previous studies, we hypothesized that the TD group would also outperform the WS group in facial expression processing, while the WS and DD groups would exhibit similar levels of performance (cf. e.g., Gagliardi et al., 2003; Plesa-Skwerer et al., 2005). It was further predicted that individuals with WS would experience greater emotional intensity in the social, as compared to the non-social contexts, reflecting their increased interest in human faces over non-social stimuli. By contrast, we predicted that both TD and DD controls would exhibit similar patterns of performance across the social and non-social conditions, with both control groups experiencing the intensity of emotion as similar in the two domains, reflecting equivalent levels of interest in both types of stimuli.
METHOD
Participants
Twenty-one individuals with WS (11 males) were recruited through a multicenter program based at the Salk Institute. For all participants, genetic diagnosis of WS was established using fluorescence in situ hybridization (FISH) probes for elastin (ELN), a gene invariably associated with the WS microdeletion (Ewart et al., 1993; Korenberg et al., 2000). In addition, all participants exhibited the medical and clinical features of the WS phenotype, including cognitive, behavioral, and physical features (Bellugi et al., 2000). Twenty-one TD controls (11 males) were matched to those with WS for CA and gender. The participants were screened for the level of education, and those with more than two years of college-level education were excluded from this study. Each participant was screened for current and past psychiatric and/or neurological problems, and only those deemed clinically asymptomatic were included in the study. A DD comparison group included 16 individuals (6 males) with learning and intellectual disability of unknown origin. Participants with DD were recruited from the San Diego area, and were extensively screened for the absence of severe motor, visual and auditory deficits, as well as traumatic brain injury, epilepsy and seizures, multiple sclerosis and autism spectrum disorders. Furthermore, no individuals with diagnoses of any one specific disorder (e.g., Down syndrome) were included in the study. Thus, the stringent selection criteria employed in this study were aimed at increasing the likelihood of having a control group with a cognitive profile characterized by developmental delay and intellectual impairment without etiology-specific (e.g., Down syndrome) or focal impairments to brain and cognitive functioning. All participants were of the same cultural background, i.e., American.
The participants’ cognitive functioning was assessed using the Wechsler Intelligence Scale. Participants under 16 years of age were administered the Wechsler Intelligence Scale for Children 3rd Edition (WISC-III; Wechsler, 1991), and those above 16 years of age were administered either the Wechsler Adult Intelligence Scale Third Edition (WAIS-III; Wechsler, 1997) or the Wechsler Abbreviated Scale of Intelligence (WASI; Wechsler, 1999). Participants were also administered the Benton Test of Facial Recognition (Benton, Hamsher, Varney, & Spreen, 1983), a perceptual face discrimination task, and a threshold audiometry test using a Welch Allyn AM232 manual audiometer. Auditory thresholds were assessed at 250, 500, 750, 1000, 1500, 2000, 3000, 4000, 6000, and 8000 Hz, monaurally. The hearing of all participants included in the study was within the normal range. In addition, all participants were native English speakers, and gave written informed consent before participation. Written informed assent was also obtained from participants’ parents, guardians, or conservators. All experimental procedures complied with the standards of the Institutional Review Board at the Salk Institute for Biological Studies.
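To make the audiometric screening concrete, the sketch below encodes the tested frequencies and a simple pass/fail check. The 25 dB HL cutoff is an assumption for illustration only; the paper states merely that hearing was “within the normal range” and does not specify the criterion used.

```python
# Hypothetical screening check for the audiometry step described above.
# The 25 dB HL cutoff is an assumed stand-in for "within the normal range".
FREQS_HZ = [250, 500, 750, 1000, 1500, 2000, 3000, 4000, 6000, 8000]

def passes_hearing_screen(thresholds_db_hl: dict[int, float],
                          cutoff_db_hl: float = 25.0) -> bool:
    """True if the monaural threshold at every tested frequency is normal."""
    return all(thresholds_db_hl[f] <= cutoff_db_hl for f in FREQS_HZ)

# Example: a participant with 15 dB HL thresholds at all frequencies passes.
print(passes_hearing_screen({f: 15.0 for f in FREQS_HZ}))  # True
```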
Table 1 shows the demographic characteristics of the three groups of participants. The participants in the three groups (WS, TD, DD) did not differ significantly in terms of CA (F (2, 55) = 1.17, p = .32), and the WS and DD groups were well matched on the PIQ (F (1, 35) = .003, p = .96). Also, the Benton scores of the WS and DD groups were not significantly different (F (1, 35) = 3.60, p = .07). The WS group scored significantly higher than the DD group on the VIQ (F (1, 35) = 18.49, p < .001). The performance of the WS and TD groups was not significantly different on the Benton test (F (1, 40) = 3.03, p = .09).
Table 1.
| Group | CA (SD; range) | VIQ (SD; range) | PIQ (SD; range) | Benton (SD; range) |
|---|---|---|---|---|
| WS (n=21) | 24.0 (7.9; 12–40) | 72 (8.5; 59–91) | 61 (9.3; 44–78) | 21 (2.9; 14–25) |
| TD (n=21) | 23.5 (6.5; 12–39) | 106 (9.5; 88–122) | 106 (9.4; 94–128) | 22 (2.8; 17–26) |
| DD (n=16) | 27.6 (11.4; 18–52) | 62 (5.3; 55–70) | 61 (8.1; 50–81) | 17 (5.4; 12–23) |
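As an illustration of the matching checks reported above, the following sketch runs a one-way ANOVA on chronological age across the three groups. The per-participant values are simulated from the Table 1 means and SDs, so the resulting F and p are illustrative and will not reproduce the paper's statistics.

```python
# Illustrative group-matching check: one-way ANOVA on chronological age.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
ca = {"WS": rng.normal(24.0, 7.9, 21),    # simulated from Table 1 summaries
      "TD": rng.normal(23.5, 6.5, 21),
      "DD": rng.normal(27.6, 11.4, 16)}

F, p = f_oneway(*ca.values())
df_error = sum(len(v) for v in ca.values()) - len(ca)  # 58 - 3 = 55
print(f"CA: F(2, {df_error}) = {F:.2f}, p = {p:.2f}")
```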
Stimuli
For the social condition, the visual stimuli comprised six standardized images of facial expression taken from the Mac Brain/NimStim Face Stimulus Set (available at: www.macbrain.org; Tottenham, Tanaka, Leon, McCarry, Nurse, Hare, Marcus, Westerlund, Casey, & Nelson, in press). There were two faces (one male and one female) for each of three emotions (happy, fearful, and sad). For the non-social condition, six different images from the International Affective Picture System (IAPS; Lang, Bradley, & Cuthbert, 2005) depicted affective scenes and objects; there were two pictures for each of three emotions (happy, fearful, and sad). None of the non-social images contained human faces. There were four non-affective stimuli for a visual control, including a neutral male and female face (NimStim), and two neutral IAPS images. The IAPS stimuli included the following pictures (image numbers; intended emotion, in parentheses): dolphins (1920; happy), shark (1930; fearful), aimed gun (6230; fearful), basket (7010; neutral), mug (7035; neutral), hot air balloons (8162; happy), cow (9140; sad), and dead cows (9181; sad). One hundred college students (half female) rated each of the images in the IAPS set for valence, arousal, and dominance; thus, norms are available for each image in the IAPS manual (Lang et al., 2005). As in Baumgartner, Esslen, & Jäncke’s study (2006), we conducted a pilot study to facilitate selecting the visual stimuli. Forty typical adults, who did not participate in the actual experiment reported here, identified the valence, and used a nine-point Likert-style scale to rate the intensity of large sets of NimStim and IAPS stimuli. The piloting phase included 40 images of facial expression (10 per emotion) and 40 non-social images. The images that most reliably conveyed the intended emotion and had the greatest intensity became the test stimuli (except in the case of neutral affect, for which the images associated with the lowest intensity were selected). The specific criteria were that the emotion was identified with >90% accuracy, and that for all except the neutral images, the mean of the intensity ratings was >8 (out of 9). By contrast, for the neutral images, the mean intensity ratings had to be <1. Overall, the valence and arousal ratings from our pilot study were similar to those that Lang and colleagues (2005) found in adults. Given that the IAPS stimuli have a relatively limited number of non-aversive non-social images that do not contain human faces, it was necessary to include images containing animals within the non-social category.
Each of the 16 visual images was paired with segments of unfamiliar emotionally evocative music specifically composed by Marsha Bauman of Stanford University for studies examining musical abilities in WS. The music segments were each five seconds in duration, and the visual image appeared for the last three seconds of that duration. The segments had been pre-tested in typical adults to confirm that they conveyed happy, fearful, or sad emotion with >95% accuracy. For the test stimuli, three musical segments were used, one for each of the three emotions: happy, sad, and fearful. Each image-sound stimulus was compiled as a QuickTime movie file. As all possible combinations of music type and image type were used, the experiment comprised a total of 48 stimulus items (16 images × 3 music types). In 12 pairs, the emotional content of the image matched that of the sound (e.g., happy music with a happy facial expression); in 24 pairs, the two were incongruent (e.g., happy music with a fearful facial expression); and in the remaining 12 pairs, a neutral image was paired with emotional music.
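The pairing combinatorics can be verified in a few lines of code. In the sketch below, image and music labels are stand-ins for the actual NimStim/IAPS files and composed segments.

```python
# Sketch of the stimulus-pairing combinatorics described above.
from collections import Counter
from itertools import product

EMOTIONS = ("happy", "fearful", "sad")

# 12 emotional images (2 per emotion in each of two domains) + 4 neutral.
images = [(domain, emo, i)
          for domain in ("social", "non-social")
          for emo in EMOTIONS + ("neutral",)
          for i in (1, 2)]
music = EMOTIONS  # one five-second segment per emotion

def pair_type(img_emo, mus_emo):
    if img_emo == "neutral":
        return "neutral"
    return "congruent" if img_emo == mus_emo else "incongruent"

stimuli = [(domain, img_emo, mus_emo, pair_type(img_emo, mus_emo))
           for (domain, img_emo, _), mus_emo in product(images, music)]

assert len(stimuli) == 48  # 16 images x 3 music types
print(Counter(kind for *_, kind in stimuli))
# Counter({'incongruent': 24, 'congruent': 12, 'neutral': 12})
```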
Procedure
The experiment was conducted in a quiet room. The stimuli were presented via circumaural headphones using the PsyScope software (Cohen, MacWhinney, Flatt, & Provost, 1993), and the order of the stimuli was randomized between participants. Training preceded the administration of the experimental task to ensure that the participants understood the task and were familiar with the materials. A two-tiered response was required for each stimulus item: (1) Affect identification: the first part of the experimental task was to identify the affective valence of the visual image; (2) Intensity rating: the second response involved rating the intensity of the chosen emotion using a five-point Likert scale. This two-tiered system enabled the examination of both gross categorical effects on emotion perception, as well as more subtle effects on the intensity of perceived emotion.
The participants were told that they would see some pictures appearing on the computer screen, which would either be of a person’s face, a thing, an animal, or a scene. The participants were also told that they would hear short pieces of music accompanying the images, and that their task would be first to decide what emotion the picture was showing, followed by rating how much or how little the face or the thing showed the chosen emotion. The experimenter then showed the first response screen, which listed the four possible emotions to ensure that the participant understood each of the emotion options (happy, sad, fearful and “no emotion” as a label for neutral). The experimenter then showed an example of the second response screen including the five-point Likert-type scale made into a colorful and visually attractive staircase shape, to ensure that the participant understood each of the quantifying labels. The lower end of the scale was assigned shades of blue, reflecting cold, and the higher end of the scale was assigned reddish hues, reflecting hot. In addition, each step had an assigned number from 1 to 5, as well as the aforementioned color, and a verbal label (1=not at all; 2=little; 3=somewhat; 4=quite a lot; 5=very much). An emotionally congruent audio-visual training stimulus of a sad female face, paired with sad music, was then played. Neither the visual nor the auditory training stimuli were included in the actual test stimuli. The participant was instructed to point to the affect label that s/he thought best matched the emotion shown in the picture. If the participant’s response was incorrect (i.e., not “sad”), the experimenter corrected this in an encouraging way, e.g., by saying, “What would you be feeling if you made a face like that? You might also think that the person was sad.” When correcting, the experimenter attempted to avoid teaching the participant a simple correspondence between facial expression and emotion. Although a small proportion of participants were corrected at this stage, they tended to perform accurately on the subsequent trial. The training trials were replayed until the participant gave correct responses spontaneously to both of the trials. The participant was then told to point to the “staircase step” that best matched their judgment of how happy/sad/scared/emotional the picture was. The participant was encouraged to use the whole staircase in their answers, not just the bottom and the top steps. No participants were observed to use only the first and/or the last step. An emotionally incongruent training stimulus of an IAPS image of ice cream (7330) conveying happiness paired with sad music was then played, and the participant was asked to respond as above. Thus, all training stimuli were separate from the actual test stimuli. Once the training phase was complete, the experimenter told the participant that s/he would hear more music and see more pictures of faces, animals, objects, and scenes from the computer. No feedback was given during the actual experiment. Each stimulus movie appeared on the screen sequentially for five seconds, followed by a two-tiered response procedure.
RESULTS
The Accuracy of Visual Emotion Identification Comparing Across the Social and Non-social Stimuli and the Three Emotions
Figure 1 displays the percentage of correct judgments for each emotion crossed with the type of visual stimulus (social and non-social) for participants with WS, TD, and DD. A judgment was deemed correct if it corresponded with the emotion conveyed in the visual stimulus alone. The scores were collapsed across the emotion content in the music. Because no neutral music was included in the audio-visual pairs, trials involving neutral visual stimuli were excluded from the main statistical analyses; the control stimuli comprising audio-visual pairs with neutral visual stimuli were analyzed separately.
The Accuracy of Emotion Identification Comparing Across the Social and Non-social Stimuli Paired with Congruent or Incongruent Music
Figure 2 shows the percentage of correct judgments for social and non-social stimuli crossed with congruent or incongruent music. The scores have been collapsed across the emotion categories.
These data were entered into a 2×3×2×3 mixed-design analysis of variance (ANOVA), with stimulus category (social/non-social), emotion (happy/fearful/sad), and congruity (congruent/incongruent) as within-participants factors, and group (WS/TD/DD) as a between-participants factor. This analysis revealed significant main effects of stimulus category (F (1, 55) = 73.55, p < .001), congruity (F (1, 55) = 5.15, p = .03), and group (F (2, 55) = 17.19, p < .001). The main effect of emotion failed to reach significance (F (2, 54) = 2.18, p = .12). The analysis also revealed a significant stimulus category by group interaction (F (2, 55) = 14.35, p < .001), and a stimulus category by emotion interaction (F (2, 54) = 5.77, p = .004).
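For readers who wish to reproduce this kind of analysis, the sketch below shows one way to set it up in Python. The paper does not name its statistics package, and the accuracy values here are simulated placeholders. Note that statsmodels' AnovaRM handles the three within-participants factors but does not support the between-participants group factor, which would require a linear mixed model or a dedicated mixed-ANOVA routine.

```python
# One possible setup for the within-participants portion of the design.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for group, n in (("WS", 21), ("TD", 21), ("DD", 16)):
    for pid in range(n):
        for cat in ("social", "non-social"):
            for emo in ("happy", "fearful", "sad"):
                for cong in ("congruent", "incongruent"):
                    rows.append(dict(pid=f"{group}_{pid}", group=group,
                                     category=cat, emotion=emo,
                                     congruity=cong,
                                     accuracy=rng.uniform(0.5, 1.0)))
df = pd.DataFrame(rows)  # one row per participant x design cell

res = AnovaRM(df, depvar="accuracy", subject="pid",
              within=["category", "emotion", "congruity"]).fit()
print(res)
```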
The significant effects were further explored using post hoc analyses with a Bonferroni correction. The main effect for stimulus category was due to a significant advantage in the social condition. The main effect for the factor congruity was associated with an advantage for processing congruous stimuli. The significant main effect of group resulted from all groups exhibiting significantly different performance (WS vs. TD p < .001; WS vs. DD p = .03; TD vs. DD p < .001), with the TD group showing the most accurate performance, and the DD group showing the least accurate performance, overall. Regarding the stimulus category by group interaction, both WS (p = .001) and DD (p = .001) groups showed significantly better performance in the social as compared to the non-social condition, while the performance of the TD group was not significantly different across the social and non-social conditions (p = .06). The WS (p = .01) and TD (p = .008) groups were better than the DD group at identifying the emotion depicted in social stimuli; overall performance for the WS and TD groups was indistinguishable (p = .99). However, for the non-social stimuli, the TD group performed significantly better compared to both the WS (p < .001) and DD (p < .001) groups, which did not differ (p = .20). Regarding the stimulus category by emotion interaction, performance was significantly higher with the social happy than with the non-social happy stimuli (p < .001) and with the social sad than the non-social sad stimuli (p < .001). Participants showed indistinguishable performance across the social and non-social fearful stimuli (p = .11). Performance was higher with the happy face than the fearful face stimuli (p = .001), and with the happy face than with the sad face stimuli (p = .03). Performance for fearful face vs. sad face stimuli was not significantly different (p = .11). Within the non-social condition, contrasting performance for happy and fearful (p = .14), and happy and sad (p = .35) stimuli failed to reveal significant differences, while participants identified non-social fearful stimuli significantly better than non-social sad stimuli (p = .006). For the remaining social and non-social emotion contrasts, performance was significantly higher in the social as compared to the non-social condition (all p < .004).
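The Bonferroni logic of these contrasts can be illustrated as follows. The group score vectors are simulated placeholders, and the paper's actual post hocs may have been computed on adjusted marginal means within the ANOVA framework rather than as independent pairwise t-tests.

```python
# Sketch of Bonferroni-corrected pairwise group contrasts on, e.g.,
# per-participant mean accuracy for social stimuli (placeholder data).
from itertools import combinations
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
scores = {"WS": rng.uniform(0.7, 1.0, 21),
          "TD": rng.uniform(0.8, 1.0, 21),
          "DD": rng.uniform(0.5, 0.9, 16)}

pairs = list(combinations(scores, 2))       # WS-TD, WS-DD, TD-DD
for a, b in pairs:
    t, p = ttest_ind(scores[a], scores[b])
    p_corr = min(p * len(pairs), 1.0)       # Bonferroni: scale p by n tests
    print(f"{a} vs {b}: t = {t:.2f}, corrected p = {p_corr:.3f}")
```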
Accuracy of Emotion Identification for Neutral Visual Stimuli
A 2×3 repeated measures ANOVA examined the effects of auditory emotion on the participants’ perception of neutral visual stimuli. For these stimuli, the correct answer was “no emotion” for the first-tier question in the task. Stimulus category was a within-participants factor with two levels (social/non-social), and group was a between-participants factor with three levels (WS/TD/DD). This analysis revealed significant main effects of stimulus category (F (1, 55) = 10.18, p = .002) and group (F (2, 55) = 3.76, p = .03), as well as a significant interaction effect of stimulus category and group (F (2, 55) = 7.89, p = .001). Post hoc analyses with a Bonferroni correction further explored these significant effects. The main effect of stimulus category was due to a higher accuracy in the non-social as compared to the social condition. The main effect of group was due to the TD group showing the highest level of performance, and the DD group performing at the lowest level. While the WS and TD groups showed indistinguishable performance overall (p = 1.00), the TD group outperformed those with DD (p = .03). The overall performance of the WS and DD groups was not significantly different (p = .16). Regarding the interaction between stimulus category and group, the TD group exhibited superior performance on social stimuli relative to both WS (p = .05) and DD (p = .001) groups. No significant between-group differences emerged for the non-social stimuli (all p > .14). Figure 3 plots the performance for the WS, TD, and DD groups with the neutral social and non-social stimuli.
An analysis of the error patterns exhibited by the groups across the social and non-social neutral stimuli was then carried out using chi-square tests. Table 2 displays the frequency of incorrect responses within each response category (fearful, happy, sad) for individuals with WS, TD (in parentheses), and DD (the final value in each cell). For the social condition, the analysis failed to reveal any systematic differences in response patterns for each stimulus pair (all p > .15). For the non-social condition, response patterns were random (all p > .15) for all except the neutral image paired with sad music (χ²(2, N = 24) = 7.32, p = .03). This was due to a greater proportion of happy and sad responses than fearful ones (a sketch of this test follows Table 2). In sum, participants in all groups were somewhat influenced by the musical emotion when processing neutral images across both face and non-face stimuli.
Table 2.
Frequency of incorrect response types, WS (TD) DD

| | Fearful | Happy | Sad |
|---|---|---|---|
| Social condition | | | |
| Neutral-fearful | 8 (2) 4 | 1 (0) 3 | 8 (3) 5 |
| Neutral-happy | 5 (0) 6 | 1 (1) 7 | 9 (4) 7 |
| Neutral-sad | 7 (1) 8 | 1 (1) 5 | 7 (4) 5 |
| Non-social condition | | | |
| Neutral-fearful | 0 (1) 1 | 1 (0) 3 | 0 (7) 4 |
| Neutral-happy | 1 (0) 1 | 4 (3) 6 | 0 (5) 4 |
| Neutral-sad | 0 (0) 0 | 2 (1) 6 | 2 (10) 3 |
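The goodness-of-fit test described above can be sketched as follows for the non-social neutral image paired with sad music. The paper does not specify exactly how counts were collapsed across groups, so the statistic computed from these pooled Table 2 counts need not match the reported value.

```python
# Sketch of the chi-square test on error-type frequencies from Table 2,
# pooling WS + TD + DD counts for the non-social neutral + sad-music pair.
from scipy.stats import chisquare

observed = [0, 9, 15]          # fearful, happy, sad errors, pooled
chi2, p = chisquare(observed)  # expected: errors uniform over the 3 labels
print(f"chi2({len(observed) - 1}, N = {sum(observed)}) = "
      f"{chi2:.2f}, p = {p:.4f}")
```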
Perceived Intensity of Emotion
Figure 4 displays the mean emotion intensity ratings (maximum = 5) for each emotion crossed with the type of visual stimulus (social and non-social) for participants with WS, TD, and DD. The scores were collapsed across the emotion content in the music.
Figure 5 shows the mean intensity ratings (maximum = 5) for social and non-social stimuli crossed with congruent or incongruent music. The scores have been collapsed across the emotion categories.
To compare between-group effects on the perception of affective intensity, a 2×3×2×3 mixed-design ANOVA was carried out, with stimulus category (social/non-social), emotion (happy/fearful/sad), and congruity (congruent/incongruent) as within-participants factors, and group (WS/TD/DD) as a between-participants factor. This analysis revealed significant main effects of emotion (F (2, 54) = 6.35, p = .02), congruity (F (1, 55) = 7.80, p = .007), and group (F (2, 55) = 4.78, p = .01). The main effect of stimulus category failed to reach significance (F (1, 55) = 2.37, p = .13). The analysis also revealed a significant emotion by group interaction (F (4, 110) = 3.11, p = .02), and a stimulus category by emotion interaction (F (2, 54) = 5.44, p = .006).
The significant effects were further explored using post hoc analyses with a Bonferroni correction. The main effect for emotion was due to higher intensity ratings for the fearful as compared to happy (p = .006), and for the sad as compared to happy (p = .04) stimuli, while intensity ratings were not significantly different for the fearful versus sad stimuli (p = .80). Regarding the emotion by group interaction, for the happy stimuli, the WS group rated the intensity of emotion as higher than both TD (p = .04) and DD (p = .005) groups, which were not significantly different from each other (p = 1.00). For the fear stimuli, no significant between-group differences emerged (all p > .06). For the sad stimuli, the intensity ratings of the WS and TD groups were indistinguishable (p = 1.00). However, both the participants with WS (p = .04) and TD (p = .05) gave higher intensity ratings for the sad stimuli than those with DD. The main effect for the factor congruity was associated with higher intensity ratings for the congruous stimuli. The significant main effect of group was due to the participants with WS making higher ratings of emotional intensity overall as compared to the DD group (p = .009). The overall intensity ratings of the WS and TD groups (p = .42), and the TD and DD groups (p = .29), were not statistically different. Regarding the stimulus category by emotion interaction, a post hoc analysis failed to reveal any significant differences in the intensity ratings for individual emotion categories both between and within the social and non-social conditions (all p = 1.00).
Finally, correlations were carried out between experimental scores and CA, VIQ, PIQ, and the Benton test scores for each group. Only for participants with WS was PIQ positively associated with the ability to identify neutral facial expressions (r (21) = .44, p = .05) (all other r < .42).
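The reported correlation can be sketched as below; both vectors are simulated placeholders (PIQ values drawn from the Table 1 summary for the WS group), so the output is illustrative only.

```python
# Sketch of the PIQ vs. neutral-face-accuracy correlation in the WS group.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
piq = rng.normal(61, 9.3, 21)  # simulated WS PIQ scores, n = 21
neutral_face_acc = np.clip(0.2 + 0.006 * piq
                           + rng.normal(0, 0.08, 21), 0, 1)

r, p = pearsonr(piq, neutral_face_acc)
print(f"r = {r:.2f}, p = {p:.3f}")
```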
DISCUSSION
The aim of the current study was to examine the effects of aurally presented emotion on visual processing of affect by contrasting individuals with WS with TD and DD control groups. We also obtained perceptual ratings of emotion intensity. The main hypothesis was that, due to the disproportionate attention towards face stimuli characterizing individuals with WS, these participants would exhibit an increased ability to identify visual affect in the social relative to non-social stimuli, reflecting a neglect of auditory information in social contexts. This effect was hypothesized to manifest as relatively high levels of visual performance with the audio-visually incongruent social stimuli (i.e., similar to that with congruent stimuli), reflecting the fact that face processing would not be hampered by the conflict in the emotional content between the visual and auditory stimuli. By contrast, the WS group was hypothesized to perform similarly to the controls within the non-social domain, i.e., exhibiting the typical advantage with the congruent over the incongruent stimuli, and performing at a level similar to the DD group. The control groups were predicted to show similar performance across the social and non-social conditions, with the TD group outperforming those with DD overall. Both groups were predicted to exhibit higher levels of performance with the congruent as compared to the incongruent stimuli across domains, reflecting their more even patterns of interest in, and development within, both domains. The main results showed that, overall, performance of individuals with WS was indistinguishable from that in the TD group for processing emotion in faces. The WS group exhibited marginally higher levels of performance with the emotionally incongruent as compared to the congruent audio-visual stimulus compounds within the social domain, supporting the hypothesis of diminished interference of auditory emotion in conditions in which a face is present. By contrast, the DD controls exhibited markedly lower levels of performance compared to the WS and the TD groups, and these effects were particularly apparent when a facial expression was paired with emotionally incongruent music. This suggests that congruent auditory information enhances the processing of social visual information for individuals with DD (and TD), while incongruent auditory information has a detrimental effect on performance (e.g., Dolan et al., 2001). As shown in Figure 2, such an effect was in evidence for all groups when the visual information was non-social. Although both the WS and the DD groups showed superior performance in the social as compared to the non-social condition, participants with WS showed a robust tendency towards higher levels of performance overall. Taken together, the results indicate that the degree of intellectual impairment cannot entirely account for the patterns of performance exhibited by the WS group. In accordance with our hypothesis, TD participants performed near ceiling-level with both social and non-social stimuli, confirming that the visual stimuli conveyed the intended emotions. Finally, individuals with WS demonstrated a tendency towards perceiving the intensity of visual emotion in both the social and non-social stimuli as higher than did either control group, and this effect was particularly apparent in relation to positively valenced face stimuli.
The finding that the processing of facial expressions in individuals with WS was indistinguishable from their age- and gender-matched TD counterparts may appear surprising in light of literature indicating that social-perceptual abilities in WS are linked to MA (e.g., Plesa-Skwerer et al. 2005, 2006; Gagliardi et al., 2003; Porter et al., 2007). Porter and colleagues (2007) compared individuals with WS to two groups associated with intellectual impairment, namely general DD and Down syndrome. While the WS group exhibited equivalent performance to those with DD, their performance was significantly better than that of participants with Down syndrome. The DD control group in the current study excluded any individual with a known or diagnosed neurodevelopmental condition. In the context of the current literature, the results of the present study may suggest that instrumental music enhances the processing of emotion in faces for individuals with WS, whether the emotion in sound is congruent or incongruent with that in the visual image. It is also noteworthy that the facial expressions included in the current study depicted the basic emotional states of happiness, fear, and sadness. Previous studies have found that, relative to TD controls, individuals with WS showed relatively proficient processing of happy (Plesa-Skwerer et al., 2005, 2006; Gagliardi et al., 2003) and sad (Plesa-Skwerer et al., 2005) expressions.
As the present experiment only differed in one respect from the above-mentioned studies related to the processing of facial expressions, namely, by the addition of instrumental music stimuli, it may be that hearing music enhances cognitive performance within the social domain for individuals with WS. This suggestion may appear inconsistent with our hypothesis that music would have a decreased interference effect in conditions in which a face is present, due to disproportionate attention directed toward such stimuli. However, the finding depicted in Figure 2 showing that the WS group exhibited marginally higher levels of emotion identification performance with the emotionally incongruent, as compared to the emotionally congruent social stimuli, while the opposite pattern characterized the control groups, suggests that emotion per se in the music stimuli had a lesser effect on the WS group relative to the controls. It may also be that, as WS is associated with high levels of anxiety (Dykens, 2003; Leyfer, Woodruff-Borden, Klein-Tasman, Fricke, & Mervis, 2006), the inclusion of music in the current study enabled WS participants to achieve their optimal levels of performance by reducing anxiety, while the specific effects of its emotional content were relatively less relevant to these participants. However, in the current study, the enhancing effects of music for individuals with WS failed to extend to the neutral face stimuli and the non-social domain. Therefore, our results suggest that the increased performance resulted from the presence of a facial expression, or more specifically, the combination of an emotional face and music. Research directed towards audiovisual integration, emotion, and music perception may hold significant therapeutic implications for individuals with WS and may facilitate their social processing.
The WS group’s strong performance with social stimuli relative to TD controls failed to generalize to the neutral faces. As opposed to indicating a general dysfunction related to processing facial expressions, this may suggest that individuals with WS have difficulties in perceiving neutral expressions in particular. Previous studies utilizing neutral face stimuli have produced mixed results (Plesa-Skwerer et al., 2005; Gagliardi et al., 2003). In the present study, both the WS and DD participants exhibited markedly poorer performance with the neutral as compared to the affective face stimuli. These participants appear to have had a higher propensity to perceive emotion where there was none being expressed. Moreover, in the current study, the ability to identify neutral facial expressions was associated with PIQ in WS, while no such association was in evidence for the other groups.
Although the WS and TD groups performed similarly in response to emotional facial expressions, the two groups likely utilized qualitatively different underlying brain processes. Neurobiological evidence points to atypical patterns of neural activity underlying face processing in WS, and to widespread structural and functional aberrations related to social cognition (e.g., Haas et al., 2009; Mills et al., 2000; Mobbs et al., 2004). For example, WS is associated with a disproportionately large volume of the amygdala (Reiss et al., 2004). In addition, two recent functional MRI (fMRI) studies indicate that WS individuals have reduced amygdala and orbitofrontal cortex (OFC) activation in response to negative face stimuli as compared to TD controls (Meyer-Lindenberg, Hariri, Munoz, Mervis, Mattay, Morris, & Berman, 2005). Additionally, combined event-related potentials (ERP) and fMRI evidence showed that neural responses to negative facial expressions were decreased in WS, while neural activity in response to positive facial expressions was increased (Haas et al., 2009). However, in the current study, no valence-specific patterns of emotion identification performance emerged for the WS group within the social domain. MRI evidence related to the neurobiological underpinnings of auditory function in WS shows that such individuals have a smaller left planum temporale (part of the auditory association cortex) relative to controls (Eckert, Galaburda, Karchemskiy, Liang, Thompson, Dutton, Lee, Bellugi, Korenberg, Mills, Rose, & Reiss, 2006). Conversely, larger volumes of the ventral-orbital prefrontal region have been associated with greater use of social-affective language in individuals with WS (Gothelf et al., 2008). In addition to cytoarchitectonic evidence for relative preservation of cell packing density and cell size in the primary auditory cortex in individuals with WS relative to TD controls, researchers have found an excessively large layer of neurons in an area receiving projections from the amygdala, suggesting that the auditory cortex may be more limbically connected in WS than in controls (Holinger, Bellugi, Mills, Korenberg, Reiss, Sherman, & Galaburda, 2005). This may underlie the heightened emotional reactivity to certain sounds, such as music, in individuals with WS. In a similar vein of research, a small-scale fMRI study compared the brain responses to music and noise stimuli for five adults with WS and five TD controls (Levitin et al., 2003). The results highlighted atypical neural activation to music in the participants with WS: while the TD controls had more activation in the STG and middle temporal gyri in response to music than to noise, the only region associated with greater activity during music versus noise for the WS group was the right amygdala. Due to a lack of neuroimaging research on multi-modal processing in WS, it is difficult to explain the results of the current study by consolidating neurobiological evidence from the visual and auditory domains. However, the processing of both facial expressions and music appears to implicate aberrant amygdala function in this population. Future studies should be directed towards investigating the neural correlates of both visual and auditory affect processing in individuals with WS, as well as their combined effects. fMRI would be an ideal technique for such studies.
The neurodevelopmental literature suggests that the increased social behavior found in WS may be at least partly driven by increased attention towards the face (Mervis, Morris, Klein-Tasman, Bertrand, Kwitny, Appelbaum, & Rice, 2003; Laing, Butterworth, Ansari, Gsödl, Longhi, Panagiotaki, Paterson, & Karmiloff-Smith, 2002). This characteristic of people with WS contrasts sharply with the tendency for social avoidance and gaze aversion characterizing individuals with autism (Dalton, Nacewicz, Johnstone, Schaefer, Gernsbacher, Goldsmith, Alexander, & Davidson, 2005; Klin, Jones, Schultz, & Volkmar, 2003). Indeed, recent evidence suggests that atypical attentional profiles make an important contribution to social development in WS and autism. In two recent studies, the consequences of opposing preferences toward attending to social versus non-social stimuli were investigated in individuals with WS and autism using eye tracking methodology (Riby & Hancock, 2008, 2009). In one study, when scanning still images depicting social scenarios, individuals with WS were found to fixate on people’s faces and particularly on the eyes for significantly longer than individuals with autism and TD (Riby & Hancock, 2008). In another study, in which stimuli comprised images of scenes with embedded faces, individuals with WS demonstrated a “face capture” behavior (Riby & Hancock, 2009); while participants with WS showed exaggerated fixation on the eye region of the face, and prolonged fixation on the face in general, individuals with autism exhibited the opposite pattern. Individuals with WS were as fast as typical controls at detecting the embedded face, but they showed a significantly reduced tendency to disengage from the face. Some have argued that social stimuli have increased salience for individuals with WS (Frigerio, Burt, Gagliardi, Cioffi, Martelli, Perrett, & Borgatti, 2006); others claim that the intense focus on the face, and possibly the difficulty in shifting attention (‘sticky fixation’), may contribute to the unusual social characteristics of individuals with WS (Mervis et al., 2003; Laing et al., 2002), as these behaviors may lead to a decrement in the ability to monitor and learn from the environment. This pattern of development may then manifest as superior information processing abilities in relation to social, as compared to non-social stimuli. However, both the WS and the DD groups showed superior affect labeling performance with the social as compared to the non-social stimuli. Moreover, both the WS and DD groups were more likely to attribute emotion to neutral faces as compared to non-social images than their TD counterparts; therefore the increased attention to the face, characteristic of individuals with WS, cannot fully explain the current pattern of results.
The finding that individuals with WS experienced the intensity of emotion of specifically happy facial stimuli as significantly higher than did their controls appeared not to have arisen from an increased emotionality resulting from the music stimuli (cf. Don et al., 1999; Levitin et al., 2005a), as they rated the intensity of emotion slightly higher in the incongruent relative to the congruent stimulus compounds. In light of the neurobiological data indicating significantly increased neural activity to positive social stimuli in individuals with WS (Haas et al., 2009), it may thus be that such enhanced neural processing was reflected in the current results by greater intensity of experience of positive affect. Alternatively, the tendency towards higher intensity ratings by the WS group may be linked to their “emotional” and “empathic” personality (e.g., Tager-Flusberg & Sullivan, 2000; Klein-Tasman & Mervis, 2003). Klein-Tasman and Mervis (2003) found that high social ratings and empathy distinguished individuals with WS from controls with other developmental disabilities on the basis of standardized personality and temperament questionnaires. Taken together with the current data, it may be that individuals with WS experience emotion as more intense in general relative to individuals without the syndrome, which may reflect their atypical neural architecture (cf. Haas et al., 2009).
One limitation of the current study is that affect identification abilities were not tested independently within the visual and auditory domains. However, the current study was exploratory in nature, and did reveal interesting performance patterns in individuals with WS specifically within the social domain. Moreover, there is a significant literature documenting social-perceptual abilities in individuals with WS, and these studies have utilized standardized facial stimuli comparable to those used in the current study. Furthermore, we used pilot studies to confirm that the auditory and visual stimuli conveyed the intended emotions when presented in a uni-modal format. It may be suggested that adding music to facial perception may have enhanced certain aspects of social-perceptual processing in individuals with WS, although the mechanisms underlying this effect remain unclear. It remains to be investigated whether this effect would extend to tasks pertaining to other social-cognitive domains than emotion processing. In addition, future studies should target the independent effects of visual and auditory emotion processing upon performance in multi-modal contexts in individuals with WS. We are only aware of one previous study that tested audio-visual integration abilities in individuals with WS (Böhning et al., 2002). However, as that study focused upon natural speech perception, it is unclear how findings from it relate to the current pattern of results, due to the different nature of the test stimuli between the experiments. Böhning et al. (2002) reported deficits in visual but not auditory identification in their participants with WS, with decreased effects of visual information upon auditory processing in the audiovisual speech condition. In sharp contrast, in the present study, individuals with WS exhibited higher than expected visual affect identification performance, which may be linked to the inclusion of instrumental music.
In conclusion, the present exploratory study extended the current literature by revealing surprising patterns of processing within the social domain in individuals with WS. Specifically, individuals with WS performed in a similar manner to the TD group with the stimuli comprising facial expressions, and in a similar manner to the DD group with the non-social stimuli. Taken together, the data presented in this paper suggest that the overall emotion processing profile found in individuals with WS was mediated by an exaggerated interest in faces as well as a heightened subjective emotional significance of face stimuli. This profile was accompanied by an increase in perceived emotional intensity in this group overall, suggesting that the social and emotional characteristics that are typically heightened in the WS personality may impact performance on tasks involving the cognitive processing of emotion. The specific role of the music stimuli in the enhanced visual performance within the social domain remains to be investigated. The results open several avenues for future research, which should focus on further elucidating the experience, processing, and neural correlates of emotion in individuals with WS within both the visual and auditory domains. Future studies should also explore how each modality contributes to combined audio-visual contexts. Given that social information in the natural world is multimodal, one question of significant clinical and theoretical importance concerns the ability of individuals with WS to integrate, for example, facial affect and body language, or facial affect and prosody. It is clear, however, that social functioning in WS is a highly complex phenomenon with intriguing patterns of dissociations, such as those pertaining to emotion processing across social and non-social domains, as highlighted by the current study.
Acknowledgments
This study was supported by grant P01 HD033113-12 from the National Institute of Child Health and Human Development (NICHD), and by support from the Michael Smith Foundation for Health Research to B.W.V.
References
- Baumgartner T, Esslen M, Jäncke L. From emotion perception to emotion experience: emotions evoked by pictures and classical music. International Journal of Psychophysiology. 2006;60:34–43. doi: 10.1016/j.ijpsycho.2005.04.007.
- Bellugi U, Wang PP, Jernigan TL. Williams syndrome: An unusual neuropsychological profile. In: Broman SH, Grafman J, editors. Atypical Cognitive Deficits in Developmental Disorders: Implications for Brain Function. Hillsdale, NJ: Erlbaum; 1994. pp. 23–56.
- Bellugi U, Lichtenberger L, Jones W, Lai Z, St George M. The neurocognitive profile of Williams syndrome: A complex pattern of strengths and weaknesses. Journal of Cognitive Neuroscience. 2000;12(Supplement 1):7–29. doi: 10.1162/089892900561959.
- Benton AL, Hamsher K deS, Varney NR, Spreen O. Contributions to Neuropsychological Assessment. New York: Oxford University Press; 1983.
- Böhning M, Campbell R, Karmiloff-Smith A. Audiovisual speech perception in Williams syndrome. Neuropsychologia. 2002;40:1396–1406. doi: 10.1016/s0028-3932(01)00208-1.
- Cohen JD, MacWhinney B, Flatt M, Provost J. PsyScope: A new graphic interactive environment for designing psychology experiments. Behavior Research Methods, Instruments, & Computers. 1993;25:257–271.
- Collignon O, Girard S, Gosselin F, Roy S, Saint-Amour D, Lassonde M, Lepore F. Audio-visual integration of emotion expression. Brain Research. 2008;1242:126–135. doi: 10.1016/j.brainres.2008.04.023.
- Dai L, Bellugi U, Chen X-N, Pulst-Korenberg AM, Järvinen-Pasley A, Tirosh-Wagner T, Eis PS, Mills D, Simon AF, Searcy Y, Korenberg JR. Is it Williams syndrome? GTF2I implicated in sociability and GTF2IRD1 in visual-spatial construction revealed by high resolution arrays. American Journal of Medical Genetics. 2009;149A:302–314. doi: 10.1002/ajmg.a.32652.
- Dalton KM, Nacewicz BM, Johnstone T, Schaefer HS, Gernsbacher MA, Goldsmith HH, Alexander AL, Davidson RJ. Gaze fixation and the neural circuitry of face processing in autism. Nature Neuroscience. 2005;8:519–526. doi: 10.1038/nn1421.
- De Gelder B, Vroomen J. The perception of emotions by ear and by eye. Cognition and Emotion. 2000;14:289–311.
- Deruelle C, Schön D, Rondan C, Mancini J. Global and local music perception in children with Williams syndrome. Neuroreport. 2005;16:631–634. doi: 10.1097/00001756-200504250-00023.
- Dolan RJ, Morris JS, de Gelder B. Crossmodal binding of fear in voice and face. Proceedings of the National Academy of Sciences of the United States of America. 2001;98:10006–10010. doi: 10.1073/pnas.171288598.
- Don A, Schellenberg EG, Rourke BP. Music and language skills of children with Williams syndrome. Child Neuropsychology. 1999;5:154–170.
- Doyle TF, Bellugi U, Korenberg JR, Graham J. “Everybody in the world is my friend”: Hypersociability in young children with Williams syndrome. American Journal of Medical Genetics. 2004;124A:263–273. doi: 10.1002/ajmg.a.20416.
- Dykens EM. Anxiety, fears, and phobias in persons with Williams syndrome. Developmental Neuropsychology. 2003;23:291–316. doi: 10.1080/87565641.2003.9651896.
- Eckert MA, Galaburda AM, Karchemskiy A, Liang A, Thompson P, Dutton RA, Lee AD, Bellugi U, Korenberg JR, Mills D, Rose FE, Reiss AL. Anomalous sylvian fissure morphology in Williams syndrome. NeuroImage. 2006;33:39–45. doi: 10.1016/j.neuroimage.2006.05.062.
- Ethofer T, Anders S, Erb M, Droll C, Royen L, Saur R, Reiterer S, Grodd W, Wildgruber D. Impact of voice on emotional judgment of faces: an event-related fMRI study. Human Brain Mapping. 2006;27:707–714. doi: 10.1002/hbm.20212.
- Ewart AK, Morris CA, Atkinson D, Jin W, Sternes K, Spallone P, Stock AD, Leppert M, Keating MT. Hemizygosity at the elastin locus in a developmental disorder, Williams syndrome. Nature Genetics. 1993;5:11–16. doi: 10.1038/ng0993-11.
- Frigerio E, Burt DM, Gagliardi C, Cioffi G, Martelli S, Perrett DI, Borgatti R. Is everybody always my friend? Perception of approachability in Williams syndrome. Neuropsychologia. 2006;44:254–259. doi: 10.1016/j.neuropsychologia.2005.05.008.
- Gagliardi C, Frigerio E, Burt DM, Cazzaniga I, Perrett DI, Borgatti R. Facial expression recognition in Williams syndrome. Neuropsychologia. 2003;41:733–738. doi: 10.1016/s0028-3932(02)00178-1.
- Galaburda AM, Bellugi U. V. Multi-level analysis of cortical neuroanatomy in Williams syndrome. Journal of Cognitive Neuroscience. 2000;12(Supplement 1):74–88. doi: 10.1162/089892900561995.
- Gothelf D, Farber N, Raveh E, Apter A, Attias J. Hyperacusis in Williams syndrome: characteristics and associated neuroaudiologic abnormalities. Neurology. 2006;66:390–395. doi: 10.1212/01.wnl.0000196643.35395.5f.
- Gothelf D, Searcy YM, Reilly J, Lai PT, Lanre-Amos T, Mills D, Korenberg JR, Galaburda A, Bellugi U, Reiss AL. Association between cerebral shape and social use of language in Williams syndrome. American Journal of Medical Genetics A. 2008;146A:2753–2761. doi: 10.1002/ajmg.a.32507.
- Haas BW, Mills D, Yam A, Hoeft F, Bellugi U, Reiss A. Genetic influences on sociability: Heightened amygdala reactivity and event-related responses to positive social stimuli in Williams syndrome. Journal of Neuroscience. 2009;29:1132–1139. doi: 10.1523/JNEUROSCI.5324-08.2009.
- Holinger DP, Bellugi U, Mills DM, Korenberg JR, Reiss AL, Sherman GF, Galaburda AM. Relative sparing of primary auditory cortex in Williams syndrome. Brain Research. 2005;1037:35–42. doi: 10.1016/j.brainres.2004.11.038.
- Howlin P, Davies M, Udwin O. Cognitive functioning in adults with Williams syndrome. Journal of Child Psychology and Psychiatry. 1998;39:183–189.
- Huron D. Is music an evolutionary adaptation? In: Biological Foundations of Music. Annals of the New York Academy of Sciences, Vol. 930. New York, NY; 2001. pp. 43–61.
- Järvinen-Pasley A, Bellugi U, Reilly J, Mills DL, Galaburda A, Reiss AL, Korenberg JR. Defining the social phenotype in Williams syndrome: A model for linking gene, brain, and cognition. Development and Psychopathology. 2008;20:1–35. doi: 10.1017/S0954579408000011.
- Jones W, Bellugi U, Lai Z, Chiles M, Reilly J, Lincoln A, Adolphs R. II. Hypersociability in Williams syndrome. Journal of Cognitive Neuroscience. 2000;12(Supplement 1):30–46. doi: 10.1162/089892900561968.
- Klein-Tasman BP, Mervis CB. Distinctive personality characteristics of 8-, 9-, and 10-year-olds with Williams syndrome. Developmental Neuropsychology. 2003;23:269–290. doi: 10.1080/87565641.2003.9651895.
- Klin A, Jones W, Schultz R, Volkmar F. The enactive mind, or from actions to cognition: lessons from autism. Philosophical Transactions of the Royal Society of London Series B: Biological Sciences. 2003;358:345–360. doi: 10.1098/rstb.2002.1202.
- Korenberg JR, Chen XN, Hirota H, Lai Z, Bellugi U, Burian D, Roe B, Matsuoka R. VI. Genome structure and cognitive map of Williams syndrome. Journal of Cognitive Neuroscience. 2000;12(Supplement 1):89–107. doi: 10.1162/089892900562002.
- Laing E, Butterworth G, Ansari D, Gsödl M, Longhi E, Panagiotaki G, Paterson S, Karmiloff-Smith A. Atypical development of language and social communication in toddlers with Williams syndrome. Developmental Science. 2002;5:233–246.
- Lang PJ, Bradley MM, Cuthbert BN. Emotion and motivation: measuring affective perception. Journal of Clinical Neurophysiology. 1998;15:397–408. doi: 10.1097/00004691-199809000-00004.
- Levitin DJ, Cole K, Chiles M, Lai Z, Lincoln A, Bellugi U. Characterizing the musical phenotype in individuals with Williams syndrome. Child Neuropsychology. 2005a;11:223–247. doi: 10.1080/09297040490909288.
- Levitin DJ, Cole K, Lincoln A, Bellugi U. Aversion, awareness, and attraction: investigating claims of hyperacusis in the Williams syndrome phenotype. Journal of Child Psychology and Psychiatry. 2005b;46:514–523. doi: 10.1111/j.1469-7610.2004.00376.x.
- Levitin DJ, Menon V, Schmitt JE, Eliez S, White CD, Glover GH, Kadis J, Korenberg JR, Bellugi U, Reiss AL. Neural correlates of auditory perception in Williams syndrome: an fMRI study. NeuroImage. 2003;18:74–82. doi: 10.1006/nimg.2002.1297.
- Leyfer OT, Woodruff-Borden J, Klein-Tasman BP, Fricke JS, Mervis CB. Prevalence of psychiatric disorders in 4 to 16-year-olds with Williams syndrome. American Journal of Medical Genetics B: Neuropsychiatric Genetics. 2006;141:615–622. doi: 10.1002/ajmg.b.30344.
- Mervis CB, Klein-Tasman BP. Williams syndrome: Cognition, personality, and adaptive behavior. Mental Retardation and Developmental Disabilities Research Reviews. 2000;6:148–158. doi: 10.1002/1098-2779(2000)6:2<148::AID-MRDD10>3.0.CO;2-T.
- Mervis CB, Morris CA, Klein-Tasman BP, Bertrand J, Kwitny S, Appelbaum G, Rice CE. Attentional characteristics of infants and toddlers with Williams syndrome during triadic interactions. Developmental Neuropsychology. 2003;23:243–268. doi: 10.1080/87565641.2003.9651894.
- Meyer-Lindenberg A, Hariri AR, Munoz KE, Mervis CB, Mattay VS, Morris CA, Berman KF. Neural correlates of genetically abnormal social cognition in Williams syndrome. Nature Neuroscience. 2005;8:991–993. doi: 10.1038/nn1494.
- Meyer-Lindenberg A, Mervis CB, Berman KF. Neural mechanisms in Williams syndrome: a unique window to genetic influences on cognition and behaviour. Nature Reviews Neuroscience. 2006;7:380–393. doi: 10.1038/nrn1906.
- Mills D, Alvarez T, St George M, Appelbaum L, Bellugi U, Neville H. Electrophysiological studies of face processing in Williams syndrome. Journal of Cognitive Neuroscience. 2000;12(Supplement 1):47–64. doi: 10.1162/089892900561977.
- Mobbs D, Garrett AS, Menon V, Rose FE, Bellugi U, Reiss AL. Anomalous brain activation during face and gaze processing in Williams syndrome. Neurology. 2004;62:2070–2076. doi: 10.1212/01.wnl.0000129536.95274.dc.
- Morris CA, Mervis CB. Williams syndrome and related disorders. Annual Review of Genomics and Human Genetics. 2000;1:461–484. doi: 10.1146/annurev.genom.1.1.461.
- Nowicki S Jr, Duke MP. Individual differences in the nonverbal communication of affect: The Diagnostic Analysis of Nonverbal Accuracy Scale. Journal of Nonverbal Behavior. 1994;18:9–35.
- Plesa-Skwerer D, Verbalis A, Schofield C, Faja S, Tager-Flusberg H. Social-perceptual abilities in adolescents and adults with Williams syndrome. Cognitive Neuropsychology. 2005;22:1–12. doi: 10.1080/02643290542000076.
- Plesa-Skwerer D, Faja S, Schofield C, Verbalis A, Tager-Flusberg H. Perceiving facial and vocal expressions of emotion in individuals with Williams syndrome. American Journal of Mental Retardation. 2006;111:15–26. doi: 10.1352/0895-8017(2006)111[15:PFAVEO]2.0.CO;2.
- Porter MA, Coltheart M, Langdon R. The neuropsychological basis of hypersociability in Williams and Down syndrome. Neuropsychologia. 2007;45(12):2839–2849. doi: 10.1016/j.neuropsychologia.2007.05.006.
- Reilly J, Losh M, Bellugi U, Wulfeck B. Frog, where are you? Narratives in children with specific language impairment, early focal brain injury, and Williams syndrome. Brain and Language. 2004;88(Special Issue):229–247. doi: 10.1016/S0093-934X(03)00101-9.
- Reiss AL, Eckert MA, Rose FE, Karchemskiy A, Kesler S, Chang M, Reynolds MF, Kwon H, Galaburda A. An experiment of nature: brain anatomy parallels cognition and behavior in Williams syndrome. Journal of Neuroscience. 2004;24:5009–5015. doi: 10.1523/JNEUROSCI.5272-03.2004.
- Riby DM, Hancock PJ. Viewing it differently: Social scene perception in Williams syndrome and autism. Neuropsychologia. 2008;46:2855–2860. doi: 10.1016/j.neuropsychologia.2008.05.003.
- Riby DM, Hancock PJ. Do faces capture the attention of individuals with Williams syndrome or autism? Evidence from tracking eye movements. Journal of Autism and Developmental Disorders. 2009;39:421–431. doi: 10.1007/s10803-008-0641-z.
- Rossen ML, Jones W, Wang PP, Klima ES. Face processing: Remarkable sparing in Williams syndrome. Genetic Counseling (Special Issue). 1996;6:138–140.
- Searcy YM, Lincoln AJ, Rose FE, Klima ES, Bavar N, Korenberg JR. The relationship between age and IQ in adults with Williams syndrome. American Journal of Mental Retardation. 2004;109:231–236. doi: 10.1352/0895-8017(2004)109<231:TRBAAI>2.0.CO;2.
- Tager-Flusberg H, Sullivan K. A componential view of theory of mind: Evidence from Williams syndrome. Cognition. 2000;76:59–89. doi: 10.1016/s0010-0277(00)00069-x.
- Thompson WF, Graham P, Russo FA. Seeing music performance: Visual influences on perception and experience. Semiotica. 2005;156:177–201.
- Thompson WF, Russo F, Quinto L. Audio-visual integration of emotional cues in song. Cognition & Emotion. 2008;22(8):1457–1470.
- Tottenham N, Tanaka J, Leon AC, McCarry T, Nurse M, Hare TA, Marcus DJ, Westerlund A, Casey BJ, Nelson CA. The NimStim set of facial expressions: judgments from untrained research participants. Psychiatry Research. (in press). doi: 10.1016/j.psychres.2008.05.006.
- Udwin O, Yule W. Expressive language of children with Williams syndrome. American Journal of Medical Genetics Supplement. 1990;6:108–114. doi: 10.1002/ajmg.1320370620.
- Udwin O, Yule W. A cognitive and behavioural phenotype in Williams syndrome. Journal of Clinical and Experimental Neuropsychology. 1991;13:232–244. doi: 10.1080/01688639108401040.
- Vines BW, Krumhansl CL, Wanderley MM, Levitin DJ. Cross-modal interactions in the perception of musical performance. Cognition. 2006;101:80–103. doi: 10.1016/j.cognition.2005.09.003.
- Wechsler D. Wechsler Intelligence Scale for Children. 3rd ed. San Antonio, TX: Psychological Corporation; 1991. (WISC-III)
- Wechsler D. Wechsler Adult Intelligence Scale. 3rd ed. San Antonio, TX: Psychological Corporation; 1997. (WAIS-III)
- Wechsler D. Wechsler Abbreviated Scale of Intelligence. San Antonio, TX: Psychological Corporation; 1999.
- Zitzer-Comfort C, Doyle TF, Masataka N, Korenberg JR, Bellugi U. Nature and nurture: Williams syndrome across cultures. Developmental Science. 2007;10:755–762. doi: 10.1111/j.1467-7687.2007.00626.x.