Proceedings of the Royal Society B: Biological Sciences
2007 Jun 19;274(1622):2131–2137. doi:10.1098/rspb.2007.0473

Turning the other cheek: the viewpoint dependence of facial expression after-effects

Christopher P Benton 1,*, Peter J Etchells 1, Gillian Porter 1, Andrew P Clark 1, Ian S Penton-Voak 1, Stavri G Nikolov 2
PMCID: PMC2706192  PMID: 17580295

Abstract

How do we visually encode facial expressions? Is this done by viewpoint-dependent mechanisms representing facial expressions as two-dimensional templates or do we build more complex viewpoint independent three-dimensional representations? Recent facial adaptation techniques offer a powerful way to address these questions. Prolonged viewing of a stimulus (adaptation) changes the perception of subsequently viewed stimuli (an after-effect). Adaptation to a particular attribute is believed to target those neural mechanisms encoding that attribute. We gathered images of facial expressions taken simultaneously from five different viewpoints evenly spread from the three-quarter leftward to the three-quarter rightward facing view. We measured the strength of expression after-effects as a function of the difference between adaptation and test viewpoints. Our data show that, although there is a decrease in after-effect over test viewpoint, there remains a substantial after-effect when adapt and test are at differing three-quarter views. We take these results to indicate that neural systems encoding facial expressions contain a mixture of viewpoint-dependent and viewpoint-independent elements. This accords with evidence from single cell recording studies in macaque and is consonant with a view in which viewpoint-independent expression encoding arises from a combination of view-dependent expression-sensitive responses.

Keywords: facial expressions, adaptation, after-effects, viewpoint dependence, psychophysics

1. Introduction

In this paper, we investigate viewpoint dependence in our encoding of facial expression. Investigation of viewpoint dependence runs through the literature on human object perception as it addresses the nature of the underlying models. The basic issue is whether we encode objects as series of two-dimensional templates or whether we construct three-dimensional viewpoint invariant representations (Biederman 1987; Bülthoff & Edelman 1992). In relation to facial identity, these questions have traditionally been studied using paradigms in which subjects are trained on an initially unfamiliar identity at one viewpoint and then tested at another (Troje & Bülthoff 1996; Hill et al. 1997; Newell et al. 1999; Watson et al. 2005); an approach that would clearly be difficult to extend to facial expressions. Recent adaptation-based paradigms offer a novel and powerful method for addressing this issue.

Adaptation paradigms examine how prolonged viewing of a stimulus can modify the perception of subsequently viewed, related stimuli. For example, you might view a line tilted leftwards of vertical for 30 s or so. When a vertical test line is then briefly presented, it will appear tilted rightwards of vertical (Gibson & Radner 1937). The phenomenon appears ubiquitous within the visual system and is believed to represent recalibration of the neural systems encoding the adapted attribute (Attneave 1954; Barlow 1961; Simoncelli & Olshausen 2001; Clifford 2005). This specificity allows us to use adaptation to study the neural encoding of visual attributes (such as facial identity or facial expression) directly, and underlies the recent use of adaptation methodologies within functional imaging.

In the same way that we find adaptation to low-level visual attributes such as tilt, motion or colour, we also find high-level after-effects in face perception. This occurs for a variety of face attributes such as gender, race, identity, expression, facial distortion and gaze direction (Webster & MacLin 1999; Leopold et al. 2001; Watson & Clifford 2003; Webster et al. 2004; Rhodes et al. 2005; Jenkins et al. 2006; Fox & Barton 2007). For example, the expression after-effect can be demonstrated by taking an image of a person displaying a happy expression, an image of that same person with a sad expression and an image created by averaging the two. After adaptation to the happy expression, the averaged image, when briefly presented, will be judged as sad. Conversely, after adaptation to the sad expression, the averaged image will be judged as happy.

Adaptation has been used to examine viewpoint dependence in our encoding of identity (Benton et al. 2006; Jeffery et al. 2006; Jiang et al. 2006). The underlying logic of this approach is that, if neurons encode both viewpoint and identity, one will obtain differential responses as the angle between adaptation and test viewpoints changes. The basic pattern that emerges is that the strength of the after-effect decreases as the angle between adaptation and test viewpoints increases. However, even when there is a 90° angle between the two (in the case of left and right three-quarter views), a substantial after-effect is still obtained (Benton et al. 2006).

In comparison to facial identity, little is known about viewpoint dependence in our encoding of facial expressions. Indeed, there is no particular reason to expect it to follow a similar pattern to that found with identity. Facial expression and identity are largely held to be processed by different mechanisms (Bruce & Young 1986) and to be encoded in different brain regions. Permanent aspects of faces (such as identity and gender) are thought to be encoded by face-sensitive neurons in the fusiform gyrus while changeable aspects (such as facial expression, gaze direction and viewpoint) are thought to be encoded in the superior temporal sulcus (Perrett et al. 1987; Hasselmo et al. 1989a; Haxby et al. 2000; Rolls 2000).

This leads to rather different expectations for viewpoint dependence in adaptation to identity and adaptation to facial expression. The identity encoding system is conceived as not encoding viewpoint, or at least, not as its final outcome. However, viewpoint independence might reasonably be achieved through summing the outputs of viewpoint-dependent neurons (Rolls 2000). Such a model is supported by the mixture of viewpoint dependence and independence found in adaptation to facial identity (Benton et al. 2006). In contrast, facial expression is held to be encoded by a neural substrate that also encodes viewpoint. On this basis, we might reasonably expect expression adaptation to show strong viewpoint dependence. This expectation is supported by studies showing that gaze direction and viewpoint modulate our processing of facial expressions (Kappas et al. 1994; Lyons et al. 2000; Adams & Kleck 2003).

In the present study, we use a recently proven adaptation-based methodology (Benton et al. 2006) to examine viewpoint dependence in our encoding of facial expression. We wished our stimuli to be as naturalistic as possible; we therefore used pictures of actors producing facial expressions taken simultaneously from a variety of angles. In contrast to our initial expectation, we find that our encoding of facial expression displays substantial viewpoint invariance.

2. Methods and results

We created morphs between different facial expressions to produce sequences of images that changed gradually from one expression to another. The expressions at either end of the sequence are readily identified, but somewhere in the middle of the sequence, each subject will have a balance point, the estimated point along the morph sequence where a subject is equally likely to judge an image as either of the two original expressions. For example, with a happy to sad morph sequence, when subjects adapt to the happy image, the balance point shifts closer to the happy end of the sequence. Conversely, when subjects adapt to sad, the balance point shifts towards the sad end of the sequence.

In the following experiments, we measured after-effects by measuring adaptation-induced shifts in balance points. We used a classic adaptation/top-up paradigm in which subjects were presented with sequences of pairs of images. Each pair contained an adaptation image followed by a brief test image (to which subjects make responses). Within a sequence, the adaptation image in the initial pair was presented for 30 s. Each subsequent adaptation image served to top up the initial adaptation and was presented for 5 s.
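
To make the timing of a run concrete, the following Python sketch (an illustration only, not the software used in the study) lays out the sequence of display events in an adaptation run: a 30 s initial adaptation, a 500 ms gap, the first brief test image, and then a 5 s top-up before each subsequent test.

```python
def adaptation_run_schedule(n_tests, initial_adapt_s=30.0, topup_s=5.0, gap_s=0.5):
    """Sequence of display events for one adaptation run: an initial 30 s
    adaptation, a 500 ms gap, the first brief test image, then for every
    subsequent test a 5 s top-up adaptation followed by a brief test.
    Test duration is left as None; the paper describes it only as brief."""
    events = [("adapt", initial_adapt_s), ("gap", gap_s), ("test", None)]
    for _ in range(n_tests - 1):
        events += [("adapt_topup", topup_s), ("test", None)]
    return events

# e.g. the opening events of a 64-trial run
print(adaptation_run_schedule(64)[:5])
```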

(a) Materials

We recruited actors (11 females and 8 males) from the Drama Department at the University of Bristol and elicited facial expressions (produced upon demand) while filming simultaneously from a variety of angles using a high-definition (HD) multi-camera rig. The five Dalsa DS-25-02M30 colour cameras were placed at −45°, −22.5°, 0°, 22.5° and 45° equidistant from a stool on which the actors sat. The angles refer to rotations parallel to the ground plane (seen from above) with 0° referring to a front view (i.e. full face) of the actor. An angle of 45° means that the face is seen in a three-quarter rightward facing view. The cameras captured HD frames (1920×1080 pixels) at 25 Hz. We gathered 2 s of sequence for each actor for each of five facial expressions (anger, disgust, fear, happiness and sadness). Filming took place under controlled lighting conditions in a dedicated studio; we used a mixture of directional and non-directional lighting placed overhead and at knee level to light our actors.

From our database of actors and expressions, we generated the three expression morph sequences by morphing between images of full-blown expressions (Tiddeman et al. 2001). We used happy to sad for actor 18 (male) and happy to sad and anger to disgust for actor 5 (female). These are shown in figure 1 and were chosen on the basis of the quality of the expression (judged by the experimenters) and the ability to produce high-quality morphs between the neutral and the full-blown expressions. Each morph sequence contained 101 images (i.e. morph increments of 1%). Images were cropped and the edges of the images were blurred to display mean luminance (using Gaussian blur of standard deviation 10 pixels) so that no hard image edges would be present when the images were displayed on the mean luminance background. Examples from our morph sequences can be found in the electronic supplementary material.

Figure 1.

Example images showing the endpoint expressions for all sequences used. Leftmost column shows actor 18 happy; second column, actor 18 sad; third column, actor 5 happy; fourth column, actor 5 sad; fifth column, actor 5 angry; sixth column, actor 5 disgusted. Topmost images show the −45° viewpoint; each subsequent row displays an increment of 22.5°.

Our images were linearized and then displayed on an Iiyama Vision Master Pro 513 (MA203DT D) monitor set to a resolution of 1600×1200 pixels and a frame rate of 75 Hz. Display mean luminance was 43.2 cd m−2. The experiments were controlled by a PC; the images were rendered using the Cogent graphics Matlab extension developed by John Romaya at the LON at the Wellcome Department of Imaging Neuroscience. Images were presented in the centre of the screen with the remainder set to display mean luminance. Subjects viewed images from 1 m. At this distance, the interocular distance for the faces presented during the experiments was 1.33° of visual angle for actor 18 and 1.40° for actor 5 (figures given for full face images). The screen itself subtended 22.1°×16.7°.
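
As a side note on the display geometry, degrees of visual angle relate to on-screen size and viewing distance by θ = 2·arctan(s/2d). A minimal Python sketch of that conversion (the function names and the worked example are ours, for illustration only):

```python
import math

def visual_angle_deg(size_m, distance_m):
    """Visual angle subtended by an object of a given size at a given distance."""
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

def size_for_angle_m(angle_deg, distance_m):
    """Physical size needed to subtend a given visual angle."""
    return 2 * distance_m * math.tan(math.radians(angle_deg) / 2)

# e.g. the on-screen interocular distance giving 1.33 deg at a 1 m viewing distance
print(round(size_for_angle_m(1.33, 1.0) * 100, 2), "cm")  # ~2.32 cm
```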

(b) General procedures

We measured balance points by using an adaptive method of constants procedure (Watt & Andrews 1981) in which subjects viewed images from the morph sequences and were asked to classify these images as one of the two facial expressions from which the images were drawn. We used the responses from 64 such image presentations (or trials) to estimate each balance point by fitting a cumulative Gaussian to the resultant data (Wichmann & Hill 2001a). We refer to the group of 64 trials used to measure a balance point as a run.
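
As an illustration of the balance-point estimate, the sketch below fits a cumulative Gaussian to the proportion of trials judged as the second expression at each morph level; the fitted mean (the 50% point) is the balance point. It uses scipy for convenience and toy data; it is a simplified stand-in for the adaptive procedure and fitting described above, not the authors' code.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cum_gauss(x, mu, sigma):
    """Cumulative Gaussian psychometric function."""
    return norm.cdf(x, loc=mu, scale=sigma)

def fit_balance_point(morph_level, prop_second):
    """Fit a cumulative Gaussian to choice proportions; mu is the balance point."""
    p0 = [50.0, 10.0]  # initial guesses for the midpoint and spread
    (mu, sigma), _ = curve_fit(cum_gauss, morph_level, prop_second, p0=p0)
    return mu, sigma

# toy data: morph level (% towards 'sad') vs proportion of 'sad' responses
levels = np.array([20, 35, 45, 50, 55, 65, 80], dtype=float)
props = np.array([0.05, 0.20, 0.40, 0.55, 0.70, 0.90, 1.00])
mu, sigma = fit_balance_point(levels, props)
print(f"balance point ≈ {mu:.1f}% along the morph sequence")
```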

Subjects completed two types of run, those without adaptation and those with adaptation. We used the unadapted runs primarily for training. In those without adaptation, subjects viewed each image for 1000 ms and indicated their judgement of expression using the arrow keys on a keyboard. There was a minimum gap of 500 ms between stimuli. In adaptation runs, subjects viewed the adaptation image for 30 s before being presented (after a 500 ms gap) with the first test image. Thereafter, each test image was presented after viewing the adaptation image for 5 s (to top up the initial adaptation).

In all but our final experiment, we used the following procedure to control for low-level retinotopic adaptation (Fang & He 2005; Benton et al. 2006). Subjects were required to fixate a small spot presented in the centre of the screen. All adaptation and test images were moved in a circular trajectory around the fixation spot with an angular velocity of one revolution every 5 s. The radius of the circle was 0.5°. For each image, the starting point and initial direction of travel were random. For each morph sequence at each angle, we had to decide on a centre point, i.e. the part of the image that actually moves in a circle with the fixation spot at its centre. Vertically, these were chosen to lie halfway between the eyes and the mouth; horizontally, they differed for each angle. In the case of the 0° images, they fall on the faces' vertical midlines; in the 45° images, they lie directly under the centremost eye (Benton et al. 2006); and in the 22.5° images, they lie midway between these points.
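
The circular trajectory is straightforward to express: with a 0.5° radius and one revolution every 5 s, the offset of the image's centre point from fixation at time t is given by the sketch below (the function name, and the choice to work in degrees rather than converting to pixels, are ours for illustration).

```python
import math
import random

REV_PERIOD_S = 5.0  # one revolution every 5 s
RADIUS_DEG = 0.5    # radius of the circular trajectory in degrees of visual angle

def trajectory_offset_deg(t, start_phase, direction):
    """Offset (x, y), in degrees, of the image centre from fixation at time t.
    start_phase is a random starting angle; direction is +1 or -1."""
    angle = start_phase + direction * 2 * math.pi * t / REV_PERIOD_S
    return RADIUS_DEG * math.cos(angle), RADIUS_DEG * math.sin(angle)

# randomise the starting point and direction for each image, as in the procedure
start_phase = random.uniform(0, 2 * math.pi)
direction = random.choice([-1, 1])
print(trajectory_offset_deg(1.25, start_phase, direction))
```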

Subjects completed a number of runs within a session; a session refers to a number of balance points gathered in a single sitting. Within each session only one adapter angle was used. There was a minimum 12 h gap between sessions to minimize any carry-over of adaptation. Within sessions the runs were not interleaved; in other words, we gathered one balance point at a time. For each combination of adaptation angle, test angle and morph sequence that we tested, we gathered both unadapted balance points and balance points under adaptation to the two ends of the morph sequence. Within sessions containing unadapted and adapted runs, we always gathered the unadapted balance points first.

When we measure the strength of adaptation, we look at the shift in balance point between the two adaptation conditions. The two adaptation conditions refer to adaptation to the two ends of a morph sequence (i.e. the original expressions). For example, in experiment 1, we measured adaptation to actor 18 using a happy to sad morph sequence. Adaptation to happy makes subsequent images appear less happy so that the balance point shifts towards the happy end of the morph sequence. Under adaptation to sad, the balance points shift in the opposite direction. We take the difference between adapters (rather than between adapted and unadapted balance points) as our metric of strength of adaptation. We do this because (i) the difference between adapted balance points provides the better signal-to-noise ratio, (ii) adaptation runs occur after training and are therefore likely to be more stable, and (iii) adaptation runs differ only in their adapter whereas adaptation and non-adaptation runs differ in other respects (Benton et al. 2006).
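
Expressed as a formula, and assuming the morph level is coded as percentage towards the sad end, the after-effect metric is simply the difference between the two adapted balance points; a minimal sketch with illustrative numbers:

```python
def aftereffect_strength(bp_adapt_sad, bp_adapt_happy):
    """Strength of the expression after-effect: the shift in balance point
    between the two adaptation conditions.  With morph level expressed as
    % sad, adapting to sad shifts the balance point towards the sad end
    (larger %) and adapting to happy towards the happy end (smaller %),
    so the difference is positive when adaptation is effective."""
    return bp_adapt_sad - bp_adapt_happy

# illustrative balance points (% sad along the morph sequence)
print(aftereffect_strength(bp_adapt_sad=58.0, bp_adapt_happy=44.0))  # -> 14.0
```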

(c) Experiment 1

Three subjects (an experimenter S1 and two naives, S2 and S3) adapted to happy and sad expressions from actor 18. In this experiment, the adaptation viewpoint was always 45°. Test viewpoint was varied over all five available angles. We initially gathered four unadapted balance points for each angle for each participant. We then gathered the adapted balance points—again four for each subject for each combination of adapter type, adapter viewpoint and test viewpoint.

To assess statistical variability, we used parametric bootstrapping to generate 10 000 bootstrap estimates for each balance point (Wichmann & Hill 2001b). We then propagated the bootstrap populations through the relevant averaging and differencing calculations (Benton et al. 2006) to generate 95% confidence limits. These are calculated using the percentile method (Efron & Tibshirani 1993).
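
A parametric bootstrap of this kind can be sketched as follows, reusing cum_gauss and fit_balance_point from the earlier sketch: simulate binomial responses from the fitted psychometric function, refit to obtain a population of balance points, propagate differences between adapters, and take percentile confidence limits. This is illustrative rather than the authors' exact implementation (which follows Wichmann & Hill 2001b).

```python
import numpy as np

def bootstrap_balance_points(levels, mu, sigma, n_per_level, n_boot=10_000, rng=None):
    """Parametric bootstrap: simulate responses from the fitted cumulative
    Gaussian and refit to obtain a population of balance-point estimates.
    Relies on cum_gauss and fit_balance_point defined in the earlier sketch."""
    rng = rng or np.random.default_rng()
    p = cum_gauss(levels, mu, sigma)  # fitted response probabilities at each level
    boot_mu = np.empty(n_boot)
    for b in range(n_boot):
        sim_props = rng.binomial(n_per_level, p) / n_per_level
        boot_mu[b], _ = fit_balance_point(levels, sim_props)
    return boot_mu

def percentile_ci(samples, alpha=0.05):
    """Confidence limits by the percentile method (95% by default)."""
    return np.percentile(samples, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# after-effect strength: difference between the bootstrap populations for the two adapters
# boot_diff = boot_mu_sad_adapter - boot_mu_happy_adapter
# ci_low, ci_high = percentile_ci(boot_diff)
```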

Note that all hypothesis testing in the current study is achieved through the use of 95% confidence limits, which is standard practice in statistical bootstrapping. In terms of the data presented, if the x-axis (difference equals zero) line lies outside the confidence limits associated with a data point, then that point can be considered significantly different from zero with p<0.05 (two-tailed). When comparing two means, these are significantly different with p<0.05 when there is less than about 50% overlap of the error bars. When there is no overlap, they are significantly different with p<0.01 (Cumming & Finch 2005).

Results are depicted in figure 2a which essentially shows the strength of the after-effect (coded as the difference between balance points) as a function of test viewpoint. We carried out linear regressions on the bootstrap populations underlying these data to assess whether there was any significant reduction in adaptation magnitude. These data with 95% confidence limits are shown in table 1. It can be seen that in all cases the slope is significantly greater than zero indicating a decrease in the size of adaptation as the angular difference between adapt and test viewpoints increases. However, it should also be emphasized that in spite of this reduction, there is still a substantial and significant effect of adaptation when the difference between test and adaptation viewpoints is 90°.
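
The regression analysis can be sketched in the same spirit: for each bootstrap replicate, regress after-effect strength on test viewpoint and keep the fitted slope; the percentile interval of the resulting slope population gives confidence limits of the kind reported in table 1. Variable names here are illustrative.

```python
import numpy as np

def bootstrap_slopes(test_angles, boot_strengths):
    """Linear regression of after-effect strength on test viewpoint for each
    bootstrap replicate.  boot_strengths has shape (n_boot, n_angles)."""
    x = np.asarray(test_angles, dtype=float)
    return np.array([np.polyfit(x, y, 1)[0] for y in boot_strengths])

# e.g. test_angles = [-45, -22.5, 0, 22.5, 45] with the adapter at +45 deg
# slopes = bootstrap_slopes(test_angles, boot_strengths)
# lo, hi = np.percentile(slopes, [2.5, 97.5])
```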

Figure 2.

(a) Results from experiment 1 showing the strength of the after-effect (difference between adapted balance points) for all three subjects using actor 18's happy to sad sequence: S1, circles; S2, downward triangles; S3, squares. The upward pointing arrow on the abscissa indicates the adaptation viewpoint. (b) Results from experiment 2, actor 18 happy to sad. (c) Results from experiment 3, actor 5 happy to sad. (d) Results from experiment 4, actor 5 anger to disgust. (e) Results from experiment 5, actor 18 happy to sad, no fixation. Data show the strength of adaptation (difference between adapted balance points) over congruence of adapter and test viewpoint.

Table 1.

Slopes of linear regressions and associated lower and upper 95% confidence limits for each subject for the data shown in figure 2a.

subject  slope  lower 95% CL  upper 95% CL
S1       0.17   0.14          0.20
S2       0.19   0.16          0.21
S3       0.10   0.07          0.13

(d) Experiments 2, 3, 4 and 5

These experiments essentially confirm and extend the results of our first experiment. One potential problem with the experiment described above is that we held adaptation viewpoint constant while varying test viewpoint. This was done for entirely practical purposes as it meant that we could gather many test viewpoints in a session. However, although unlikely, it may be the case that the reduction in adaptation as a function of test viewpoint is based upon the test viewpoint itself rather than the angular difference between test and adaptation viewpoints.

In the following experiments, we used a 2×2 design and both tested and adapted at −45° and +45°. Subject S2 did not take part in these experiments—we used an additional two naives (S4 and S5). For each experiment, each subject completed four sessions. In each session, we first gathered the unadapted balance point followed by the two adapted balance points. We gathered only one balance point per combination of test viewpoint, adapter viewpoint and adapter type. To factor out test viewpoint, we collapse across this dimension and look at the change in after-effect strength between the congruent adapter (when test and adaptation viewpoints are the same) and incongruent adapter conditions.
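
A minimal sketch of this collapsing step, with hypothetical variable names and illustrative values, averages the two congruent cells and the two incongruent cells of the 2×2 design:

```python
import numpy as np

def congruent_incongruent_means(strength):
    """strength[adapt_angle][test_angle] -> after-effect strength.
    Collapse the 2x2 design (adapt and test at -45 and +45 deg) into
    congruent (same viewpoint) and incongruent (90 deg apart) means."""
    congruent = np.mean([strength[-45][-45], strength[45][45]])
    incongruent = np.mean([strength[-45][45], strength[45][-45]])
    return congruent, incongruent

strength = {-45: {-45: 12.0, 45: 7.5}, 45: {-45: 7.0, 45: 11.5}}  # illustrative values
print(congruent_incongruent_means(strength))  # -> (11.75, 7.25)
```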

Experiment 2 used the same morph sequence as experiment 1 (actor 18, happy to sad). Experiment 3 used the happy to sad sequence from actor 5, while experiment 4 employed actor 5's anger to disgust sequence. Finally, in experiment 5, we used a no-fixation condition. For this, the images were simply placed in the centre of the screen with no fixation spot and no stimulus motion. Subjects were instructed to view the images naturally and not to suppress eye movements. This experiment used actor 18's happy to sad morph sequence.

Results for these experiments are shown in figure 2b–e which shows after-effect strength over a 90° change in test viewpoint. Our first experiment showed viewpoint dependence with substantial after-effects at an angular difference of 90° between adapter and test viewpoints. The results from experiments 2 to 5 demonstrate that this result is not due to the position of the test viewpoint itself but is due to the angular difference between test and adaptation viewpoints. The experiments also demonstrate that the effect extends across different naive subjects and to different actors and different facial expressions. The final experiment shows that the findings generalize to natural viewing conditions.

A comparison of results across experiments 1 and 2 for the two common subjects (S1 and S3) shows similar levels of adaptation within the congruent and incongruent (90° difference) conditions. This indicates little effect of training such as that found with identity adaptation where increased familiarity with a particular identity leads to increased transfer across viewpoint (Jiang et al. 2007). In the current study, any familiarity effect would most probably not be evident owing to the comparatively extensive initial training undergone by our subjects.

3. Discussion

This study uses a novel multi-view face database to investigate viewpoint dependence in the encoding of facial expressions over changes in viewpoint commonly encountered in our visual world. We studied the effects of expression adaptation at one viewpoint on the perception of facial expressions at other viewpoints. We found high-level non-retinotopic adaptation that generalized across differences in adapter and test viewpoint. However, we also observed a substantial decrease in the strength of that adaptation as the angle between test and adaptation viewpoints increased. Over a change of 90°, between three-quarter leftward and three-quarter rightward facing views, there was approximately a 40% decrease in the strength of adaptation.

Recent single cell recordings in macaque superior temporal sulcus show viewpoint-dependent tunings that would predict upwards of a 70% decrease in cell response over a 90° viewpoint change (Földiák et al. 2003). Perrett et al. (1991), looking at view-dependent neurons in macaque superior temporal sulcus, found a number of bimodal neurons which showed responses to both left and right three-quarter views. However, out of a total of 110 neurons responsive to perspective views, only three showed this characteristic. Based on these data, it is unlikely that the relatively small size of the view-dependent decrease found in our study can be accounted for by wide tuning or bimodality of view-dependent neurons. Instead, our findings are commensurate with a view in which facial expression is encoded by a mixture of viewpoint-dependent and viewpoint-independent mechanisms.

On the surface, our results would seem to argue against a model of expression encoding in which a viewpoint-independent representation is the final outcome. However, adaptation should target those neural systems encoding faces, whether or not they encode viewpoint as well. The endpoint of expression analysis may well be a view-independent representation; however, if this is built from view-dependent responses which are themselves adapted, this could readily give rise to view-dependent behaviour. Our adaptation-based findings therefore show simply that expression encoding occurs at viewpoint-dependent and viewpoint-independent levels.

Nevertheless, the fact that we have a substantial viewpoint-independent component implies the existence of viewpoint-independent mechanisms in addition to viewpoint-dependent mechanisms. Given that the input to the visual system is necessarily a two-dimensional image smeared over our retinae, it is clear that three-dimensional representations (whether explicit or not) must be built from two-dimensional information. The question then is, if three-dimensional representations of expressions do exist, are they built from intermediate two-dimensional expression representations or are they built directly, using something akin to Biederman's recognition by components model (Biederman 1987)? Transfer of after-effect across viewpoint cannot in itself distinguish between these possibilities. However, the fact that we see viewpoint-dependent behaviour, in addition to viewpoint independence, supports a view in which a viewpoint-independent representation of facial expression is constructed from viewpoint-dependent mechanisms that encode facial expression.

This notion of a mixed mechanism is supported by the neurophysiological literature dealing with viewpoint-dependent responses to faces. Single cell recording studies in macaque have described face-sensitive neurons in the superior temporal sulcus (Perrett et al. 1982, 1985; Hasselmo et al. 1989b), an area believed to underlie the processing of changeable aspects of faces, such as expressions (Haxby et al. 2000). While many of these cells show view-dependent responses, a number show viewpoint-independent responses. Based on these findings, it has been proposed that viewpoint-independent face mechanisms may be created from the summation of responses of viewpoint-dependent neurons tuned to a variety of different viewpoints (Rolls 2000).

The encoding of changeable and fixed aspects of faces is thought to occur in different neural substrates; with the latter in the fusiform gyrus and the former in the superior temporal sulcus (Haxby et al. 2000). This anatomical distinction accords with a prevalent model of face processing in which identity and facial expression are processed through different routes (Bruce & Young 1986). Note that Fox & Barton (2007) showed a decrease in expression after-effects when adaptation and test identities differed. This finding cannot however be taken as evidence for dependence of expression on identity because the effect may not be based on perceived identity per se but may be based on some factor that clearly covaries with identity (such as facial structure).

Adaptation studies of viewpoint dependence in facial identity, when taken together, show a mixture of viewpoint dependence and viewpoint independence (Benton et al. 2006; Jeffery et al. 2006; Jiang et al. 2006) similar to that found with facial expressions in the current study. This mixture seems therefore to be a general property of the encoding of both the changeable and the fixed aspects of faces. The similarity in viewpoint-dependent response between expression and identity seems to occur in spite of the fact that facial identity and facial expression are rather different qualities. Most obviously, the number of facial identities with which we are faced is far larger than the number of discrete facial expressions. Based on these differences, one might reasonably expect to find what we do not—evidence of substantial processing differences between the two. The concordance between our expression and identity findings hints at a common substrate where identity and expression form part of a common distributed representation (Calder et al. 2001; Calder & Young 2005).

In conclusion, our results show viewpoint-dependent expression after-effects over changes in the angular difference between adaptation and test viewpoints. However, even over large changes in viewpoint, the effect of adaptation is still substantial. The viewpoint-dependent adaptation that we describe is non-retinotopic, can be observed across different identities and different facial expressions, and is preserved in natural viewing conditions. Our findings demonstrate that the human encoding of facial expression occurs through a mixture of viewpoint-dependent and viewpoint-independent mechanisms. This is similar to results obtained in single cell recording studies of macaque and may well indicate a hierarchical organization in which the responses of viewpoint-dependent expression-encoding neurons are summed to produce viewpoint-independent expression-sensitive responses.

Supplementary Material

Supplementary figure 1

Example images taken from the morph sequences used in our experiments. Leftmost column: happy to sad from actor 18. Middle column: happy to sad from actor 5. Rightmost column: anger to disgust from actor 5. The morph increment between rows is 25%

rspb20070473s15.gif (1.9MB, gif)
Supplementary figure 2

Example images showing the 50% morph point for all sequences used. Leftmost column: happy to sad from actor 18. Middle column: happy to sad from actor 5. Rightmost column: anger to disgust from actor 5. Images on the top row show viewing angles of −45°

rspb20070473s16.gif (1.9MB, gif)

References

1. Adams R.B, Kleck R.E. Perceived gaze direction and the processing of facial displays of emotion. Psychol. Sci. 2003;14:644–647. doi:10.1046/j.0956-7976.2003.psci_1479.x
2. Attneave F. Some informational aspects of visual perception. Psychol. Rev. 1954;61:183–193. doi:10.1037/h0054663
3. Barlow H.B. Possible principles underlying the transformations of sensory messages. In: Rosenblith W.A, editor. Sensory communication. MIT Press; Cambridge, MA: 1961. pp. 217–314.
4. Benton C.P, Jennings S.J, Chatting D.J. Viewpoint dependence in adaptation to facial identity. Vision Res. 2006;46:3313–3325. doi:10.1016/j.visres.2006.06.002
5. Biederman I. Recognition-by-components: a theory of human image understanding. Psychol. Rev. 1987;94:115–147. doi:10.1037/0033-295X.94.2.115
6. Bruce V, Young A.W. Understanding face recognition. Br. J. Psychol. 1986;77:305–327. doi:10.1111/j.2044-8295.1986.tb02199.x
7. Bülthoff H.H, Edelman S. Psychophysical support for a two-dimensional view interpolation theory of object recognition. Proc. Natl Acad. Sci. USA. 1992;89:60–64. doi:10.1073/pnas.89.1.60
8. Calder A.J, Young A.W. Understanding the recognition of facial identity and facial expression. Nat. Rev. Neurosci. 2005;6:641–651. doi:10.1038/nrn1724
9. Calder A.J, Burton A.M, Miller P, Young A.W, Akamatsu S. A principal components analysis of facial expressions. Vision Res. 2001;41:1179–1208. doi:10.1016/S0042-6989(01)00002-5
10. Clifford C.W.G. Functional ideas about adaptation applied to spatial and motion vision. In: Clifford C.W.G, Rhodes G, editors. Fitting the mind to the world: adaptation and after-effects in high-level vision. Oxford University Press; Oxford, UK: 2005. pp. 47–82.
11. Cumming G, Finch S. Inference by eye. Am. Psychol. 2005;60:170–180. doi:10.1037/0003-066X.60.2.170
12. Efron B, Tibshirani R.J. An introduction to the bootstrap. Monographs on statistics and applied probability. Chapman & Hall/CRC; Boca Raton, FL: 1993.
13. Fang F, He S. Viewer-centred object representation in the human visual system revealed by viewpoint aftereffects. Neuron. 2005;45:793–800. doi:10.1016/j.neuron.2005.01.037
14. Földiák P, Xiao D, Keysers C, Edwards R, Perrett D.I. Rapid serial visual presentation for the determination of neural selectivity in area STSa. Prog. Brain Res. 2003;144:107–116. doi:10.1016/S0079-6123(03)14407-X
15. Fox C.J, Barton J.J.S. What is adapted in face adaptation? The neural representations of expression in the human visual system. Brain Res. 2007;1127:80–89. doi:10.1016/j.brainres.2006.09.104
16. Gibson J.J, Radner M. Adaptation, after-effect and contrast in the perception of tilted lines. J. Exp. Psychol. 1937;20:453–467. doi:10.1037/h0059826
17. Hasselmo M.E, Rolls E.T, Baylis G.C. The role of expression and identity in the face-selective responses of neurons in the temporal visual cortex of the monkey. Behav. Brain Res. 1989a;32:203–218. doi:10.1016/S0166-4328(89)80054-3
18. Hasselmo M.E, Rolls E.T, Baylis G.C, Nalwa V. Object-centred encoding by face-selective neurones in the cortex in the superior temporal sulcus of the monkey. Exp. Brain Res. 1989b;75:417–429. doi:10.1007/BF00247948
19. Haxby J.V, Hoffman E.A, Gobbini M.I. The distributed human neural system for face perception. Trends Cogn. Sci. 2000;4:223–233. doi:10.1016/S1364-6613(00)01482-0
20. Hill H, Schyns P.G, Akamatsu S. Information and viewpoint dependence in face recognition. Cognition. 1997;62:201–222. doi:10.1016/S0010-0277(96)00785-8
21. Jeffery L, Rhodes G, Busey T. View-specific coding of face shape. Psychol. Sci. 2006;17:501–505. doi:10.1111/j.1467-9280.2006.01735.x
22. Jenkins R, Beaver J.D, Calder A.J. I thought you were looking at me. Psychol. Sci. 2006;17:506–513. doi:10.1111/j.1467-9280.2006.01736.x
23. Jiang F, Blanz V, O'Toole A.J. Probing the visual representation of faces with adaptation. Psychol. Sci. 2006;17:493–500. doi:10.1111/j.1467-9280.2006.01734.x
24. Jiang F, Blanz V, O'Toole A.J. The role of familiarity in three-dimensional view-transferability of face identity adaptation. Vision Res. 2007;47:525–531. doi:10.1016/j.visres.2006.10.012
25. Kappas A, Hess U, Barr C.L, Kleck R.E. Angle of regard: the effect of vertical viewing angle on the perception of facial expressions. J. Nonverbal Behav. 1994;18:263–280. doi:10.1007/BF02172289
26. Leopold D.A, O'Toole A.J, Vetter T, Blanz V. Prototype-referenced shape encoding revealed by high-level aftereffects. Nat. Neurosci. 2001;4:89–94. doi:10.1038/82947
27. Lyons M.J, Campbell R, Plante A, Coleman M, Kamachi M, Akamatsu S. The Noh mask effect: vertical viewpoint dependence of facial expression perception. Proc. R. Soc. B. 2000;267:2239–2245. doi:10.1098/rspb.2000.1274
28. Newell F.N, Chiroro P, Valentine T. Recognizing unfamiliar faces: the effects of distinctiveness and view. Q. J. Exp. Psychol. A. 1999;52:509–534. doi:10.1080/713755813
29. Perrett D.I, Rolls E.T, Caan W. Visual neurones responsive to faces in the monkey temporal cortex. Exp. Brain Res. 1982;47:329–342. doi:10.1007/BF00239352
30. Perrett D.I, Smith P.A.J, Potter D.D, Mistlin A.J, Head A.S, Milner A.D, Jeeves M.A. Visual cells in the temporal cortex sensitive to face view and gaze direction. Proc. R. Soc. B. 1985;223:293–317. doi:10.1098/rspb.1985.0003
31. Perrett D.I, Mistlin A.J, Chitty A.J. Visual neurones responsive to faces. Trends Neurosci. 1987;10:358–364. doi:10.1016/0166-2236(87)90071-3
32. Perrett D.I, Oram M.W, Harries M.H, Bevan R, Hietanen J.K, Benson P.J, Thomas S. Viewer-centred and object-centred coding of heads in the macaque temporal cortex. Exp. Brain Res. 1991;86:159–173. doi:10.1007/BF00231050
33. Rhodes G, Robbins R, Jaquet E, McKone E, Jeffery L, Clifford C.W.G. Adaptation and face perception: how aftereffects implicate norm-based coding of faces. In: Clifford C.W.G, Rhodes G, editors. Fitting the mind to the world: adaptation and after-effects in high-level vision. Oxford University Press; Oxford, UK: 2005. pp. 213–240.
34. Rolls E.T. Functions of the primate temporal lobe cortical visual areas in invariant visual object and face recognition. Neuron. 2000;27:205–218. doi:10.1016/S0896-6273(00)00030-1
35. Simoncelli E.P, Olshausen B.A. Natural image statistics and neural representation. Annu. Rev. Neurosci. 2001;24:1193–1216. doi:10.1146/annurev.neuro.24.1.1193
36. Tiddeman B.P, Burt D.M, Perrett D.I. Prototyping and transforming facial texture for perception research. IEEE Comput. Graph. Appl. 2001;21:42–50. doi:10.1109/38.946630
37. Troje N.F, Bülthoff H.H. Face recognition under varying poses: the role of texture and shape. Vision Res. 1996;36:1761–1771. doi:10.1016/0042-6989(95)00230-8
38. Watson T.L, Clifford C.W.G. Pulling faces: an investigation of the face distortion aftereffect. Perception. 2003;32:1109–1116. doi:10.1068/p5082
39. Watson T.L, Johnston A, Hill H.C.H, Troje N.F. Motion as a cue for viewpoint invariance. Vis. Cogn. 2005;12:1291–1308. doi:10.1080/13506280444000526
40. Watt R.J, Andrews D.P. APE: adaptive probit estimation of psychometric functions. Curr. Psychol. Rev. 1981;1:205–214.
41. Webster M.A, MacLin O.H. Figural aftereffects in the perception of faces. Psychon. Bull. Rev. 1999;6:647–653. doi:10.3758/bf03212974
42. Webster M.A, Kaping D, Mizokami Y, Duhamel P. Adaptation to natural face categories. Nature. 2004;428:557–561. doi:10.1038/nature02420
43. Wichmann F.A, Hill N.J. The psychometric function: I. Fitting, sampling, and goodness of fit. Percept. Psychophys. 2001a;63:1293–1313. doi:10.3758/bf03194544
44. Wichmann F.A, Hill N.J. The psychometric function: II. Bootstrap-based confidence intervals and sampling. Percept. Psychophys. 2001b;63:1314–1329. doi:10.3758/bf03194545

