The Journal of Neuroscience. 2011 Feb 2;31(5):1820–1824. doi: 10.1523/JNEUROSCI.5759-09.2011

Superior Facial Expression, But Not Identity Recognition, in Mirror-Touch Synesthesia

Michael J Banissy 1, Lúcia Garrido 1,2, Flor Kusnir 3, Bradley Duchaine 4, Vincent Walsh 4, Jamie Ward 5
PMCID: PMC6623727  PMID: 21289192

Abstract

Simulation models of expression recognition contend that to understand another's facial expressions, individuals map the perceived expression onto the same sensorimotor representations that are active during the experience of the perceived emotion. To investigate this view, the present study examines facial expression and identity recognition abilities in a rare group of participants who show facilitated sensorimotor simulation (mirror-touch synesthetes). Mirror-touch synesthetes experience touch on their own body when observing touch to another person. These experiences have been linked to heightened sensorimotor simulation in the shared-touch network (brain regions active during the passive observation and experience of touch). Mirror-touch synesthetes outperformed nonsynesthetic participants on measures of facial expression recognition, but not on control measures of face memory or facial identity perception. These findings imply a role for sensorimotor simulation processes in the recognition of facial affect, but not facial identity.

Introduction

Simulation accounts of expression recognition contend that to understand the emotion associated with another's facial expression, the observer simulates the sensorimotor response associated with generating the perceived expression (Adolphs, 2002; Gallese et al., 2004; Goldman and Sripada, 2005; Keysers and Gazzola, 2006; Bastiaansen et al., 2009; Keysers and Gazzola, 2009). This view is supported by evidence that responses in expression-relevant facial muscles are increased during subliminal exposure to emotional expressions (Dimberg et al., 2000), that preventing the activation of expression-relevant muscles impairs expression recognition (Oberman et al., 2007), and that perceiving another's expressions and producing one's own recruits similar premotor and somatosensory regions (Hennenlotter et al., 2005; van der Gaag et al., 2007). Further, neuropsychological findings indicate that damage to right somatosensory cortices is associated with expression-recognition deficits (Adolphs et al., 2000), and transcranial magnetic stimulation findings have demonstrated the involvement of the right somatosensory cortex for facial expression recognition, but not face identity recognition, in healthy adults (Pitcher et al., 2008). These findings imply that purely visual face-processing mechanisms interact with sensorimotor representations to facilitate expression recognition. This may differ from facial identity recognition, as there is no clear indication of how one could simulate another's identity (Calder and Young, 2005).

A complementary approach is to consider whether facilitation of sensorimotor mechanisms promotes expression recognition. One example of facilitated sensorimotor simulation is the case of mirror-touch synesthesia, in which simply observing touch to others elicits tactile sensations on the synesthete's own body (Blakemore et al., 2005; Banissy et al., 2009b). Functional brain imaging indicates that this variant of synesthesia is linked to heightened neural activity in a network of brain regions also activated in nonsynesthetic subjects when observing touch to others. This mirror-touch system comprises brain areas active during both the observation and passive experience of touch (including primary and secondary somatosensory cortices and premotor cortex) (Keysers et al., 2004; Blakemore et al., 2005; Ebisch et al., 2008). It has been suggested that brain systems involved in mirroring the experiences of others may be crucial for social perception because they provide a plausible neural mechanism to facilitate sensorimotor simulation of another's perceived state (Gallese et al., 2004; Keysers and Gazzola, 2006). In this sense, mirror-touch synesthesia can be viewed as a case of heightened sensorimotor simulation, which may inform us about the role of sensorimotor simulation mechanisms in social cognition. For example, Banissy and Ward (2007) reported that mirror-touch synesthetes, but not other synesthetes, show heightened emotional empathy compared with control participants.

Here we examined whether mirror-touch synesthetes differed in another aspect of social perception; namely, facial expression recognition. We compared mirror-touch synesthetes and nonsynesthetic controls on facial expression recognition, identity recognition, and identity perception tasks. Based on the hypothesis that mirror-touch synesthetes have heightened sensorimotor simulation mechanisms, we predicted that synesthetes would show superior performance on expression recognition tasks but not on the facial identity control tasks that are less dependent on simulation.

Materials and Methods

Participants

Eight mirror-touch synesthetes (six female and two male; mean age ± SD = 45.6 ± 11.7 years) and 20 nonsynesthetic control participants (15 female and five male; mean age ± SD = 35.6 ± 13.6 years) took part in the study. All cases of mirror-touch synesthesia were confirmed using a previously developed visual–tactile congruity paradigm designed to provide evidence for the authenticity of the condition (Banissy and Ward, 2007) (supplemental Table 1, available at www.jneurosci.org as supplemental material).

Materials and procedure

Participants completed four tasks in a counterbalanced order. These tasks are detailed below.

Films facial expression recognition.

This task investigated participants' ability to recognize the emotional expressions of others. In each trial, participants were presented with an adjective describing an emotional state followed by three images (each image shown for 500 ms) of the same actor or actress displaying different facial expressions. Participants were asked to indicate which of the three images best portrayed the target emotional adjective.

To portray subtle and realistic facial expressions, expression stimuli were captured from films (Fig. 1a). Fifty-eight target images (preceded by three practice trials) from 15 films were used. All films were from a non-English speaking country to decrease the probability that participants had seen them or were familiar with the actors. Target and distractor stimuli were selected based on four pilot studies (Garrido et al., 2009). Each stimulus was shown once during the test, and trials were presented in a fixed order over two blocks (29 trials per block).

Figure 1.

Summary of tasks used. a, Films facial expression task. This task investigated participants' abilities to categorize the emotional expressions of others. Participants were presented with a target adjective describing an emotional state followed by three images shown consecutively for 500 ms each. Participants were asked which of the three images best portrayed the target emotion. In the actual task color stimuli were used. b, Cambridge face memory test. This task investigated participants' abilities to memorize facial identity. During the task, participants memorized six unfamiliar male faces. They were then tested on their ability to recognize the faces in a three-alternative-forced-choice paradigm. The Cambridge face memory test long form was used. This task is comprised of three sections from the original CFMT (shown in figure) and a fourth section involving 30 very difficult trials (for stimuli from the final section, see Russell et al., 2009). c, Cambridge face perception test. This task investigated participants' abilities to perceive faces while being less dependent on memory. Participants were shown a target face and six faces morphed between the target and a distractor face. Participants sorted the six faces by similarity to the target face. Faces were presented upright and inverted in a fixed pseudorandom order. d, Same–different expression-identity matching task. This task investigated participants' abilities to match another's facial identity or facial expressions. Participants were presented with a sample face followed by a fixation cross and then a target face. In the expression matching task, participants indicated whether the expression in the target face matched the expression in the sample face. In the identity matching task, participants indicated whether the identity of the target face and the sample face matched.

Cambridge face memory test.

To test face recognition, we compared performance of synesthetes and nonsynesthetes on the Cambridge face memory test long form (CFMT+) (Russell et al., 2009). The task is an adapted version of the Cambridge face memory test (CFMT) (Duchaine and Nakayama, 2006) (Fig. 1b) and was designed to distinguish normal from supernormal ability to recognize faces (Russell et al., 2009). During the task, participants were asked to learn to recognize six unfamiliar male faces from three different views and were then tested on their ability to recognize these faces in a three-alternative forced-choice task.

The test comprised four sections, each more difficult than the previous one. The first three sections were taken from the original CFMT (Duchaine and Nakayama, 2006), and the addition of the final section forms the longer CFMT+ (Russell et al., 2009). The test began by testing recognition with the same images that were used during training. This relatively easy introduction was followed by a section using novel images that show the target faces from previously unseen perspectives and under different lighting conditions, and a third section consisting of novel images with visual noise added. The final section contained 30 very difficult trials in which distractor images repeated much more frequently, targets and distractors contained more visual noise than the images in the third section, both cropped images (showing only internal features) and uncropped images (showing hair, ears, and necks, which had not been shown in the previous sections) were used, and images showing the targets and distractors making emotional expressions were included. The percentage of correct responses was measured for each section and overall.

Cambridge face perception test.

To investigate facial identity perception, we administered the Cambridge face perception test (Duchaine et al., 2007). This test assesses the ability to perceive differences between facial identities. Memory demands are minimal because faces are presented simultaneously. During the task, participants were shown a target face (from a 3/4 viewpoint) and six faces (from a frontal view) morphed between the target and another face in varying proportions so that they varied systematically in their similarity to the target face (Fig. 1c). Participants were asked to sort the six faces by similarity to the target face and were given 1 min to do so. The task involved eight upright and eight inverted trials that alternated in a fixed pseudorandom order. Performance was measured by an error score. This was calculated by summing the deviations from the correct position for each face, with one error reflecting each position that a face must be moved to be in the correct location. For example, if a face was one position from the correct location the error score was one. If it was three positions away from the correct location, this was an error score of three.
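The error-score metric described above can be sketched as follows. This is an illustrative implementation only; the function and variable names are our own and not part of the published test:

```python
def cfpt_error_score(arranged, correct):
    """Sum, over all faces, the number of positions each face must be
    moved from its arranged slot to reach its correct slot (the CFPT
    error metric described above; names are illustrative)."""
    # Map each face label to its index in the correct ordering
    correct_pos = {face: i for i, face in enumerate(correct)}
    # One error per position a face must move to reach its correct slot
    return sum(abs(i - correct_pos[face]) for i, face in enumerate(arranged))


# Hypothetical trial: faces labeled 1 (most similar to target) .. 6 (least)
correct = [1, 2, 3, 4, 5, 6]
print(cfpt_error_score([1, 2, 3, 4, 5, 6], correct))  # perfect sort -> 0
print(cfpt_error_score([2, 1, 3, 4, 6, 5], correct))  # two adjacent swaps -> 4
```

A face one position away from its correct location thus contributes one error, and a face three positions away contributes three, exactly as in the text above.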

Same–different expression and identity matching task.

This task investigated participants' abilities to match another's facial identity or facial expressions (Pitcher et al., 2008).

In the expression matching task, participants were presented with a sample face (250 ms), followed by a fixation cross (1000 ms), and then a target face (250 ms). Participants were asked to indicate whether the target facial expression matched or differed from the sample facial expression. On half of the trials, the target and sample face expressed the same emotion; on the other half, the sample–target pairs showed different emotions (Fig. 1d). A total of 72 trials (split between two blocks) were completed. Each image showed one of six female models making one of six basic facial expressions (anger, disgust, fear, happiness, sadness, or surprise). Each stimulus was a grayscale image taken from Ekman and Friesen's (1976) facial affect series. The hair and neck of each stimulus were removed using Adobe Photoshop. In the expression task, identity always changed between sample–target pairs, and each expression was presented an equal number of times.

In the identity matching task, the same stimuli and procedure were used. Participants were asked to indicate whether the sample and target face were the same or a different person. Half of the trials showed pairs with the same identity and half with a different identity. Expression always changed between the sample and target face, and the six models were presented an equal number of times.

Results

Participant age was used as a covariate in all analyses because of a slight trend for synesthetes to differ from controls in age (t(26) = 1.84, p = 0.078).

Films facial expression recognition

Accuracy and reaction time were compared separately using one-way between-subjects ANCOVAs. One control participant was excluded from the analysis due to difficulty understanding the meaning of the expression adjectives; this participant performed more than 3 SDs below the control group mean on both accuracy and reaction time.
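The 3-SD exclusion criterion can be sketched as follows. The data values, threshold default, and names here are hypothetical illustrations, not the study's actual data:

```python
import statistics


def is_outlier(score, group_scores, n_sd=3.0):
    """Flag a score more than n_sd standard deviations below the group
    mean (exclusion criterion as described above; names and the example
    data are illustrative)."""
    mean = statistics.mean(group_scores)
    sd = statistics.stdev(group_scores)  # sample standard deviation
    return score < mean - n_sd * sd


# Hypothetical control-group accuracies (%): mean 80, sd ~3.16,
# so the exclusion cutoff is roughly 70.5
controls = [78, 82, 75, 80, 85, 79, 81]
print(is_outlier(40, controls))  # -> True (well below 3 SDs under the mean)
print(is_outlier(76, controls))  # -> False
```

Note that the candidate score is compared against the distribution of the remaining group, so a single extreme value does not inflate the standard deviation used for its own cutoff.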

Synesthetes showed superior abilities at recognizing the emotional expressions of others (Fig. 2). Analysis of accuracy revealed that mirror-touch synesthetes outperformed control participants on expression recognition (F(1,24) = 16.38, p < 0.001) (Fig. 2a). This difference was not due to a speed–accuracy trade-off, as no significant effect of group (synesthete or control) was found for reaction time (F(1,24) = 0.962, p = 0.336) (supplemental Fig. 1, available at www.jneurosci.org as supplemental material). These findings suggest that mirror-touch synesthetes show superior facial expression recognition, which may be due to heightened sensorimotor simulation mechanisms.

Figure 2.

Performance of mirror-touch synesthetes and nonsynesthetes on the films facial expression task (a), the CFMT+ (b), the Cambridge face perception test (c, d), expression matching (e), and identity matching (f). Mirror-touch synesthetes were significantly more accurate than nonsynesthetes at categorizing the emotional facial expressions of others (a). This was not found to be the case in tests of face memory. The performances of synesthetes did not significantly differ from nonsynesthetes on the CFMT+ (b). Nor were any significant differences found between the performance of synesthetes and controls on a measure of facial identity perception (c, d; note that superior performance is reflected in a lower error score in the Cambridge face perception test). There was, however, a significant task × group interaction when participants made same–different expression or identity judgments. Although we were unable to establish definitively the locus of this interaction, the finding does demonstrate a different profile of strengths across groups, which is consistent with the findings from our other tasks. On the expression task (e), there was a trend for synesthetes to outperform controls, whereas on the identity task (f), there was a trend for controls to outperform synesthetes. Within-group comparisons between the tasks revealed that controls were significantly more accurate in the identity compared with the expression task. Synesthetes did not show this bias toward identity matching; expression and identity matching performances were comparable; *p < 0.001.

Cambridge face memory test

No significant differences were observed between synesthetes and controls on either the CFMT (F(1,25) = 0.023, p = 0.880) (supplemental Fig. 2, available at www.jneurosci.org as supplemental material) or the CFMT+ (F(1,25) = 0.095, p = 0.761) (Fig. 2b). Therefore, unlike facial expression recognition, synesthetes and controls did not differ in their ability to memorize facial identity.

Cambridge face perception test

Error scores on the eight upright and eight inverted trials were summed to give total upright and inverted error scores. A 2 (group) × 2 (trial type) ANCOVA revealed a significant effect of trial type (F(1,25) = 5.81, p < 0.05), reflecting an inversion effect: overall, participants made more errors on inverted trials (mean ± SEM = 70 ± 3) than on upright trials (mean ± SEM = 41.5 ± 3.21). Importantly, this effect did not interact with group (F(1,25) = 0.37, p = 0.549), and no main effect of group was found (F(1,25) = 0.253, p = 0.619) (Fig. 2c,d). Therefore, unlike expression recognition, synesthetes and controls did not significantly differ in their abilities to match facial identities.

Same–different expression and identity matching task

A 2 (group) × 2 (task) mixed ANCOVA was conducted. No main effect of task or group was found. There was, however, a significant interaction between task and group (F(1,25) = 4.507, p < 0.05). Controls were more accurate on the identity matching task relative to the emotion matching task (F(1,18) = 5.10, p < 0.05). Synesthetes did not show this pattern; analysis of within-subject effects revealed no significant difference between the two tasks for the synesthetic group (F(1,6) = 0.759, p = 0.417). There was also a nonsignificant trend for synesthetes to outperform controls on the expression matching task (Fig. 2e), but for controls to outperform synesthetes on the identity matching task (Fig. 2f).

Discussion

This study investigated expression and identity face processing in mirror-touch synesthetes and nonsynesthete control participants. We predicted that heightened sensorimotor simulation mechanisms would result in superior expression recognition, but would not affect the identity recognition abilities of mirror-touch synesthetes. Consistent with these predictions, mirror-touch synesthetes were superior when recognizing facial expressions but not facial identities: synesthetes significantly outperformed control participants in their ability to accurately categorize facial expressions, but did not differ from nonsynesthetes in their ability to memorize or perceive facial identity. We also found that controls performed better at matching another's identity than at matching their expressions, whereas synesthetes showed the reverse trend, resulting in a significant group × condition interaction. These findings are therefore consistent with simulation accounts of expression recognition, which suggest that individuals understand others' emotional expressions by simulating the sensorimotor response associated with generating the perceived facial expression (Adolphs, 2002; Gallese et al., 2004; Goldman and Sripada, 2005; Keysers and Gazzola, 2006, 2009).

A variety of sources indicate that recognizing another's identity and expressions relies upon multiple stages of representation, including purely visual, multimodal, expression-general, and expression-specific mechanisms (Adolphs et al., 2000; Anderson et al., 2000; Calder et al., 2001, 2004; Lawrence et al., 2002; Keane et al., 2002; Pitcher et al., 2008). Simulation accounts of expression recognition contend that one mechanism involved in expression, but not identity recognition, is an internal sensorimotor reenactment of the perceived expression (Adolphs, 2002; Gallese et al., 2004; Goldman and Sripada, 2005; Keysers and Gazzola, 2006). Functional brain imaging (Hennenlotter et al., 2005; van der Gaag et al., 2007) and neuropsychological (Adolphs et al., 2000) and transcranial magnetic stimulation studies (Pitcher et al., 2008) suggest a key role for somatosensory resources in expression recognition. Our findings that individuals who show increased levels of somatosensory simulation (mirror-touch synesthetes) demonstrate superior facial expression, but not facial identity perception, are consistent with this view. More specifically, they suggest that the level of vicarious activations in the somatosensory system contributes to the recognition of others' facial expressions.

One may note, however, that based solely on the interaction observed in our identity–expression matching task, our findings could be interpreted as reflecting reduced identity perception rather than superior expression perception in mirror-touch synesthesia. There are several reasons why this interpretation is unlikely. The identity–expression matching task demonstrated a significant group × task interaction, suggesting a different profile of strengths across groups, but it does not allow us to establish definitively the locus of that interaction. It would therefore be premature to conclude from this task alone that synesthetes have identity recognition problems; the result must be considered alongside the other tasks. The group differences on the Cambridge face perception test and the Cambridge face memory test were not significant, implying normal facial identity processing. Synesthetes did, however, show superior performance on the expression recognition measure (films task), suggesting superior expression recognition in mirror-touch synesthesia.

Although our findings are focused on facial affect processing, it is also of note that somatosensation may provide a more general component to our understanding of other people's actions and emotions (Keysers et al., 2010). For example, in both humans and monkeys, neural activity in the primary and secondary somatosensory cortices is evoked when performing and when passively observing hand actions (Raos et al., 2004; Evangeliou et al., 2009; Gazzola and Keysers, 2009; Keysers et al., 2010) and the level of activity in this system has been shown to correlate with self-reported empathy in nonsynesthetes (Gazzola et al., 2006). We have previously observed that mirror-touch synesthetes show heightened emotional reactive empathy compared with controls, but do not differ on other components of empathy (Banissy and Ward, 2007). The findings from the current investigation therefore imply that mirror-touch synesthesia may be linked to more general enhancements in emotion processing (e.g., emotional empathy, expression perception) and implicate the somatosensory system in this process.

It also remains to be established whether heightened emotion sensitivity displayed by mirror-touch synesthetes is a cause or consequence of this type of synesthetic experience. Although we assume that mirror-touch synesthetes form part of the synesthetic population, and are therefore a unique group of participants, the principles that bias what type of synesthesia will or will not be developed are a matter of debate (Grossenbacher and Lovelace, 2001; Hubbard and Ramachandran, 2005; Sagiv and Ward, 2006; Bargary and Mitchell, 2008; Cohen Kadosh and Walsh, 2008; Cohen Kadosh et al., 2009). Conceptually there are at least two possibilities: (1) mirror-touch synesthetes reflect the top end of a spectrum along which emotion sensitivity ranges (e.g., the normal architecture for multisensory interactions) and this biases them toward interpersonal synesthetic experience, or (2) mirror-touch synesthetes are a unique population whose extra sensory experiences predispose them to superior emotion sensitivity. Although this is relatively difficult to disentangle, it is of note that mirror-touch synesthesia is not the only variant of synesthesia linked with enhanced perceptual and cognitive processing. 
For example, synesthetes who experience color as evoked sensations show superior memory for colors (Yaro and Ward, 2007) and enhanced perceptual processing of color (Yaro and Ward, 2007; Banissy et al., 2009a) relative to nonsynesthetes; visual–sound synesthetes (synesthetes for whom seeing visual motion triggers auditory perception) demonstrate an advantage in perceiving visually presented rhythmic patterns compared with nonsynesthetes (Saenz and Koch, 2008); and synesthetes who experience a conscious mapping between time and space (e.g., they consciously perceive months and years in particular spatial layouts) show superior visuospatial abilities compared with nonsynesthetes, but do not show superior performance on tasks that do not make use of their synesthesia (Mann et al., 2009; Simner et al., 2009). Thus, a general feature of synesthesia may be facilitated perceptual and cognitive processing related to the modality of synesthetic experience.

In summary, this study demonstrates that mirror-touch synesthesia is associated with superior facial expression recognition abilities. The observed superiority in face processing was restricted to expression recognition: mirror-touch synesthetes showed enhanced expression recognition abilities, but did not differ from controls on identity processing measures. Given that mirror-touch synesthesia has been linked to heightened somatosensory simulation, these findings are consistent with simulation-based accounts of expression recognition and indicate that somatosensory resources are an important facet of our ability to recognize the emotions of others.

Footnotes

This work was supported by a Medical Research Council grant to V.W. and an Economic and Social Research Council (ESRC) Fellowship to M.J.B. M.J.B. is supported by a British Academy Postdoctoral Fellowship. J.W. is supported by an ESRC grant. We thank Abigail Birnbaum for help with data collection and Richard Russell for assistance with the research.

References

  1. Adolphs R. Neural systems for recognizing emotion. Curr Opin Neurobiol. 2002;12:169–177. doi: 10.1016/s0959-4388(02)00301-x.
  2. Adolphs R, Damasio H, Tranel D, Cooper G, Damasio AR. A role for somatosensory cortices in the visual recognition of emotion as revealed by three-dimensional lesion mapping. J Neurosci. 2000;20:2683–2690. doi: 10.1523/JNEUROSCI.20-07-02683.2000.
  3. Anderson AK, Spencer DD, Fulbright RK, Phelps EA. Contribution of the anteromedial temporal lobes to the evaluation of facial emotion. Neuropsychology. 2000;14:526–536. doi: 10.1037//0894-4105.14.4.526.
  4. Banissy MJ, Ward J. Mirror-touch synesthesia is linked with empathy. Nat Neurosci. 2007;10:815–816. doi: 10.1038/nn1926.
  5. Banissy MJ, Walsh V, Ward J. Enhanced sensory perception in synaesthesia. Exp Brain Res. 2009a;196:565–571. doi: 10.1007/s00221-009-1888-0.
  6. Banissy MJ, Cohen Kadosh R, Maus GW, Walsh V, Ward J. Prevalence, characteristics and a neurocognitive model of mirror-touch synaesthesia. Exp Brain Res. 2009b;198:261–272. doi: 10.1007/s00221-009-1810-9.
  7. Bargary G, Mitchell KJ. Synaesthesia and cortical connectivity. Trends Neurosci. 2008;31:335–342. doi: 10.1016/j.tins.2008.03.007.
  8. Bastiaansen JA, Thioux M, Keysers C. Evidence for mirror systems in emotions. Philos Trans R Soc Lond B Biol Sci. 2009;364:2391–2404. doi: 10.1098/rstb.2009.0058.
  9. Blakemore SJ, Bristow D, Bird G, Frith C, Ward J. Somatosensory activations during the observation of touch and a case of vision-touch synaesthesia. Brain. 2005;128:1571–1583. doi: 10.1093/brain/awh500.
  10. Calder AJ, Young AW. Understanding the recognition of facial identity and facial expression. Nat Rev Neurosci. 2005;6:641–651. doi: 10.1038/nrn1724.
  11. Calder AJ, Lawrence AD, Young AW. The neuropsychology of fear and loathing. Nat Rev Neurosci. 2001;2:352–363. doi: 10.1038/35072584.
  12. Calder AJ, Keane J, Lawrence AD, Manes F. Impaired recognition of anger following damage to the ventral striatum. Brain. 2004;127:1958–1969. doi: 10.1093/brain/awh214.
  13. Cohen Kadosh R, Walsh V. Synaesthesia and cortical connections: cause or correlation? Trends Neurosci. 2008;31:549–550. doi: 10.1016/j.tins.2008.08.004.
  14. Cohen Kadosh R, Henik A, Catena A, Walsh V, Fuentes LJ. Induced cross-modal synesthetic experience without abnormal neuronal connections. Psychol Sci. 2009;20:258–265. doi: 10.1111/j.1467-9280.2009.02286.x.
  15. Dimberg U, Thunberg M, Elmehed K. Unconscious facial reactions to emotional facial expressions. Psychol Sci. 2000;11:86–89. doi: 10.1111/1467-9280.00221.
  16. Duchaine B, Nakayama K. The Cambridge face memory test: results from neurologically intact individuals and an investigation of its validity using inverted face stimuli and prosopagnosic participants. Neuropsychologia. 2006;44:576–585. doi: 10.1016/j.neuropsychologia.2005.07.001.
  17. Duchaine B, Germine L, Nakayama K. Family resemblance: ten family members with prosopagnosia and within-class object agnosia. Cogn Neuropsychol. 2007;24:419–430. doi: 10.1080/02643290701380491.
  18. Ebisch SJ, Perrucci MG, Ferretti A, Del Gratta C, Romani GL, Gallese V. The sense of touch: embodied simulation in a visuo-tactile mirroring mechanism for observed animate or inanimate touch. J Cogn Neurosci. 2008;20:1611–1623. doi: 10.1162/jocn.2008.20111.
  19. Ekman P, Friesen WV. Pictures of facial affect. Palo Alto, CA: Consulting Psychologists Press; 1976.
  20. Evangeliou MN, Raos V, Galletti C, Savaki HE. Functional imaging of the parietal cortex during action execution and observation. Cereb Cortex. 2009;19:624–639. doi: 10.1093/cercor/bhn116.
  21. Gallese V, Keysers C, Rizzolatti G. A unifying view of the basis of social cognition. Trends Cogn Sci. 2004;8:396–403. doi: 10.1016/j.tics.2004.07.002.
  22. Garrido L, Furl N, Draganski B, Weiskopf N, Stevens J, Tan GC, Driver J, Dolan RJ, Duchaine B. VBM reveals reduced grey matter volume in the temporal cortex of developmental prosopagnosics. Brain. 2009;132:3443–3455. doi: 10.1093/brain/awp271.
  23. Gazzola V, Keysers C. The observation and execution of actions share motor and somatosensory voxels in all tested subjects: single-subject analyses of unsmoothed fMRI data. Cereb Cortex. 2009;19:1239–1255. doi: 10.1093/cercor/bhn181.
  24. Gazzola V, Aziz-Zadeh L, Keysers C. Empathy and the somatotopic auditory mirror system in humans. Curr Biol. 2006;16:1824–1829. doi: 10.1016/j.cub.2006.07.072.
  25. Goldman AI, Sripada CS. Simulationist models of face-based emotion recognition. Cognition. 2005;94:193–213. doi: 10.1016/j.cognition.2004.01.005.
  26. Grossenbacher PG, Lovelace CT. Mechanisms of synaesthesia: cognitive and physiological constraints. Trends Cogn Sci. 2001;5:36–41. doi: 10.1016/s1364-6613(00)01571-0.
  27. Hennenlotter A, Schroeder U, Erhard P, Castrop F, Haslinger B, Stoecker D, Lange KW, Ceballos-Baumann AO. A common neural basis for receptive and expressive communication of pleasant facial affect. Neuroimage. 2005;26:581–591. doi: 10.1016/j.neuroimage.2005.01.057.
  28. Hubbard EM, Ramachandran VS. Neurocognitive mechanisms of synaesthesia. Neuron. 2005;48:509–520. doi: 10.1016/j.neuron.2005.10.012.
  29. Keane J, Calder AJ, Hodges JR, Young AW. Face and emotion processing in frontal variant frontotemporal dementia. Neuropsychologia. 2002;40:655–665. doi: 10.1016/s0028-3932(01)00156-7.
  30. Keysers C, Gazzola V. Towards a unifying theory of social cognition. Prog Brain Res. 2006;156:379–401. doi: 10.1016/S0079-6123(06)56021-2.
  31. Keysers C, Gazzola V. Expanding the mirror: vicarious activity for actions, emotions, and sensations. Curr Opin Neurobiol. 2009;19:666–671. doi: 10.1016/j.conb.2009.10.006.
  32. Keysers C, Wicker B, Gazzola V, Anton JL, Fogassi L, Gallese V. A touching sight: SII/PV activation during the observation and experience of touch. Neuron. 2004;42:335–346. doi: 10.1016/s0896-6273(04)00156-4.
  33. Keysers C, Kaas JH, Gazzola V. Somatosensation in social perception. Nat Rev Neurosci. 2010;11:417–428. doi: 10.1038/nrn2833.
  34. Lawrence AD, Calder AJ, McGowan SW, Grasby PM. Selective disruption of the recognition of facial expressions of anger. Neuroreport. 2002;13:881–884. doi: 10.1097/00001756-200205070-00029.
  35. Mann H, Korzenko J, Carriere JS, Dixon MJ. Time-space synaesthesia—a cognitive advantage? Conscious Cogn. 2009;18:619–627. doi: 10.1016/j.concog.2009.06.005.
  36. Oberman LM, Winkielman P, Ramachandran VS. Face to face: blocking facial mimicry can selectively impair recognition of emotional expressions. Soc Neurosci. 2007;2:167–178. doi: 10.1080/17470910701391943.
  37. Pitcher D, Garrido L, Walsh V, Duchaine BC. Transcranial magnetic stimulation disrupts the perception and embodiment of facial expressions. J Neurosci. 2008;28:8929–8933. doi: 10.1523/JNEUROSCI.1450-08.2008.
  38. Raos V, Evangeliou MN, Savaki HE. Observation of action: grasping with the mind's hand. Neuroimage. 2004;23:193–201. doi: 10.1016/j.neuroimage.2004.04.024.
  39. Russell R, Duchaine B, Nakayama K. Super-recognizers: people with extraordinary face recognition ability. Psychon Bull Rev. 2009;16:252–257. doi: 10.3758/PBR.16.2.252.
  40. Saenz M, Koch C. The sound of change: visually-induced auditory synesthesia. Curr Biol. 2008;18:R650–R651. doi: 10.1016/j.cub.2008.06.014.
  41. Sagiv N, Ward J. Crossmodal interactions: lessons from synesthesia. Prog Brain Res. 2006;155:259–271. doi: 10.1016/S0079-6123(06)55015-0.
  42. Simner J, Mayo N, Spiller MJ. A foundation for savantism? Visuo-spatial synaesthetes present with cognitive benefits. Cortex. 2009;45:1246–1260. doi: 10.1016/j.cortex.2009.07.007.
  43. van der Gaag C, Minderaa RB, Keysers C. Facial expressions: what the mirror neuron system can and cannot tell us. Soc Neurosci. 2007;2:179–222. doi: 10.1080/17470910701376878.
  44. Yaro C, Ward J. Searching for Shereshevski: what is superior about the memory of synaesthetes? Q J Exp Psychol. 2007;60:681–695. doi: 10.1080/17470210600785208.
