Abstract
High-level adaptation effects reveal important features of the neural coding of objects and faces. View adaptation in particular is a particularly useful means of characterizing how depth rotation of the face is represented and, therefore, how view-invariant face recognition may be achieved. In the present study, we used view adaptation to determine the extent to which depth rotations of a face are represented in an image-based or object-based manner. Specifically, we dissociated object-based axes from image-based axes via a 90-degree planar rotation of the adapting face and observed that participants’ responses pre- and post-adaptation were most consistent with an image-based representation of depth rotations of the face. We discuss our data in the context of previous results describing the impact of planar rotation on related aspects of face perception.
Keywords: Face perception, Face adaptation, Invariant Recognition, View coding
1. Introduction
Visual adaptation is a powerful tool for characterizing the neural processing of a range of stimulus categories spanning low- to high-level vision (Clifford et al., 2007). High-level adaptation effects in particular provide a simple yet effective means of constraining the nature of the processes that support the perception and recognition of complex objects, such as faces (Webster & MacLin, 1999; Webster et al., 2004). A range of adaptation studies designed to probe multiple aspects of face processing have revealed key features of its neural substrate. For example, the study of adaptation to face identity has revealed evidence supporting norm-based encoding of identity (Leopold et al., 2001) that appears to be largely view-specific (Jeffery, Rhodes, & Busey, 2006) unless observers are highly familiar with the target identity (Jiang, Blanz, & O’Toole, 2007). Adaptation to other facial characteristics (Rhodes et al., 2003) and interactions between multiple aspects of facial appearance (Rhodes et al., 2004) have provided a rich foundation of insights into how faces are neurally represented and recognized.
Viewpoint aftereffects are a particularly important example of high-level adaptation to understand, since the extent to which human vision achieves view-invariant recognition of objects and faces likely depends critically on the implementation of view coding and its interaction with other neural codes for facial appearance. The basic phenomenon of viewpoint adaptation is similar to other high-level aftereffects: adaptation to a face that is rotated in depth in one direction (e.g., a face rotated 20 degrees to the right of a frontal pose) will bias subsequent categorizations of face view in the opposite direction (Fang & He, 2005). The effect is observed at multiple neural loci (Fang, Murray, & He, 2007), which indicates that viewpoint is encoded across a distributed network supporting face processing. Viewpoint adaptation transfers strongly across changes in face identity (Fang, Ijichi, & He, 2007) but only weakly to inverted faces (Fang, Murray, & He, 2007), suggesting distinct neuronal mechanisms for view coding as a function of planar rotation.
Our goal in the current study was to determine whether the coding of face viewpoint is implemented relative to an object-based frame of reference or an image-based frame of reference. That is, when a face is rotated in depth, is the direction of that rotation encoded in a manner that is invariant to the rotation of the face within the image, or is the direction instead encoded relative to the axes of the world/image as a whole? Previous studies of viewpoint adaptation (even those that have examined inverted faces) have confounded these possible mechanisms, making it difficult to determine the underlying encoding of viewpoint. Here, we dissociated object-based and image-based encoding of face viewpoint by inducing view adaptation with both an upright depth-rotated face and the same face rotated by 90 degrees in the plane. If distinct populations of neurons process upright and “sideways” faces (as has been suggested for fully inverted and upright images; Fang, Ijichi, & He, 2007; Watson & Clifford, 2006), we should expect little to no transfer of view adaptation between a 90-degree image and an upright image. Alternatively, if there is either shared coding of faces at these two orientations or a normalization process by which face viewpoint is encoded after taking planar rotation into account, then view adaptation should “follow” the rotation in the plane. Finally, it could also be the case that face viewpoint is encoded in an image-based, not object-based, fashion: regardless of the orientation of the face in the plane, depth rotation may be computed relative to image axes, and subsequent aftereffects may be observed in the opposite image direction.
2. Methods
2.1 Subjects
We recruited 16 adults to participate in our task. Eight of these participants (5 female) were randomly assigned to the “No-rotation” condition and eight (5 female) were assigned to the “90-degree rotation” condition. All participants reported normal or corrected-to-normal vision and were between the ages of 19 and 31.
2.2 Stimuli
We created images of a male face using Poser v8.0. Face adaptation effects in general (Anderson & Wilson, 2005) and view adaptation effects in particular have been shown to obtain for synthetic face stimuli as well as real faces (Daar & Wilson, 2012), validating the use of computer-generated stimuli (which offer fine-grained control of the rendered view). A single head model was rendered from a frontal viewpoint and also from rotations of 1, 3, 5, and 7 degrees away from this viewpoint in the four cardinal directions. We will refer to this set of images as the test set (Figure 1). In addition, a single image was rendered after a depth rotation of 20 degrees to the right (from the viewer’s point of view, with the nose pointing towards the right visual field). We chose this rotation angle to maximize the adaptation effect, since view aftereffects peak near this rotation angle and drop off at larger angles (Chen et al., 2010). We will refer to this image as the adapting image. All images were rendered in full color at 960×600 pixels.
Figure 1.
A schematic view of the test set of images used in both adaptation conditions. The test face was rotated away from a frontal view in the upward, downward, leftward, and rightward directions by 1, 3, 5, and 7 degrees (the 1- and 5-degree steps are not pictured, for ease of illustration).
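For concreteness, the following Matlab sketch (our illustration, not the stimulus-generation code itself) enumerates the test-set conditions described above; the variable and field names are hypothetical.

```matlab
% The test set: a frontal view plus 1-, 3-, 5-, and 7-degree rotations
% in each of the four cardinal directions (17 images in total; 20
% repeats of each yields the 340 baseline trials reported below).
directions = {'up', 'down', 'left', 'right'};
angles     = [1 3 5 7];
testSet    = struct('direction', {'frontal'}, 'angle', {0});
for d = 1:numel(directions)
    for a = angles
        testSet(end+1) = struct('direction', directions{d}, 'angle', a); %#ok<AGROW>
    end
end
fprintf('%d test images x 20 repeats = %d trials\n', ...
        numel(testSet), 20 * numel(testSet));
```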
2.3 Procedure
All participants completed a baseline phase of the experiment and a subsequent adaptation phase. In the baseline phase, we presented the test images to participants in random order; each image was repeated 20 times for a total of 340 trials. On each trial, a single test image was presented for ~250 ms in the center of the screen following an ISI of 400 ms. Participants classified each image according to the perceived rotation of the head (up, down, left, or right) using a 4-way directional pad (Retrolink Nintendo USB). Participants were given unlimited time to respond to each stimulus, and both responses and response latencies were recorded throughout the experiment. Test images were presented at a viewing distance of 60 cm and subtended approximately 1×1.5 degrees of visual angle. The position of the test image in the visual field was randomly jittered by a small amount (~0.25 degrees) in one of the four cardinal directions.
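The trial structure can be sketched as follows using the PsychToolbox routines cited below (a minimal illustration, not the original custom code; `win`, `tex`, `pixPerDeg`, and `baseRect` are assumed to exist, and keyboard polling via KbCheck stands in for the directional pad):

```matlab
% One baseline trial: 400 ms ISI, ~250 ms test image with ~0.25-degree
% positional jitter, then an untimed 4AFC direction judgment.
WaitSecs(0.400);                                 % inter-stimulus interval
jitterDirs = [0 -1; 0 1; -1 0; 1 0];             % up, down, left, right
j = 0.25 * pixPerDeg * jitterDirs(randi(4), :);  % jitter in pixels
rect = OffsetRect(baseRect, j(1), j(2));
Screen('DrawTexture', win, tex, [], rect);       % draw the test image
tOnset = Screen('Flip', win);                    % stimulus onset
Screen('Flip', win, tOnset + 0.250);             % blank after ~250 ms
keyIsDown = false;                               % wait for a response
while ~keyIsDown
    [keyIsDown, tResp, keyCode] = KbCheck;
end
response = find(keyCode, 1);                     % coded direction
latency  = tResp - tOnset;                       % response latency (s)
```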
In the adaptation phase, we were interested in determining how the planar rotation of the adapting image (a 20-degree rightward pose) affected subsequent judgments of the test set. Participants began the adaptation phase by viewing the adapting image for 30 seconds. Face adaptation results from processes that are position- and size-specific as well as processes that are not (Kovacs et al., 2008; Zhao & Chubb, 2001), so to minimize the contribution of low-level adaptation to any observed aftereffects, the adapting image was presented at twice the size of the test set (2 degrees by 3 degrees). Following this initial adaptation period, participants were asked to categorize the images in the test set with the same display parameters described above for the baseline phase. Additionally, before each test image was presented, participants viewed the adapting image for an additional 2 seconds of ‘topping-off’ adaptation. As in the baseline phase, responses (4AFC rotation categorization) and response latencies were recorded using a 4-way directional pad (Figure 2).
Figure 2.
An illustration of a single trial in the adaptation phase: participants viewed either the upright (0-degree) or rotated (90-degree) adapting image for a 2-second top-up period, then classified a single test image according to the apparent direction of rotation away from center (4AFC judgment). The presentation of the test image was limited to 250 ms, but participants had unlimited time to respond.
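Under the same assumptions as the baseline sketch above, the adaptation-phase flow might look like the following, where `adaptTex` and `adaptRect` (hypothetical names) hold the adapting image at twice the test size and `runTestTrial` is a hypothetical wrapper around the baseline-trial code:

```matlab
% Adaptation phase: 30 s of initial adaptation, then a 2 s 'topping-off'
% exposure to the adapting image before every test trial.
Screen('DrawTexture', win, adaptTex, [], adaptRect);
t0 = Screen('Flip', win);
Screen('Flip', win, t0 + 30);                    % 30 s initial adaptation
resp = zeros(1, nTrials);                        % preallocate responses
rt   = zeros(1, nTrials);                        % and latencies
for trial = 1:nTrials
    Screen('DrawTexture', win, adaptTex, [], adaptRect);
    t0 = Screen('Flip', win);
    Screen('Flip', win, t0 + 2);                 % 2 s top-up adaptation
    [resp(trial), rt(trial)] = runTestTrial(win, testTex(trial));
end
```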
Critically, one group of participants (N=8) adapted to the upright adapting image (a 0-degree planar rotation) and a second group adapted to the same image rotated in the plane by 90 degrees. If view coding of face images is highly invariant to planar rotations of the stimulus, both groups should exhibit the same adaptation effect – a rightward shift of the psychometric curve for rightward responses, and little or no shift of the psychometric curve for downward responses. Alternatively, if view coding is highly image-based, adaptation to the 90-degree image should induce a shift in the psychometric curve for downward responses, but little or no shift in the curve for rightward responses.
All stimulus presentation and response collection routines were carried out via custom functions written using the Matlab PsychToolbox extensions (Brainard, 1997; Pelli, 1997; Kleiner, Brainard, & Pelli, 2007).
3. Results
For each participant, we wished to determine how the point of subjective equality (PSE) shifted following prolonged viewing of the adapting image. We thus estimated two psychometric curves per subject: one describing how the proportion of “rightward” responses varied as a function of left-right rotation and another describing how the proportion of “downward” responses varied as a function of up-down rotation. We estimated these curves separately in the baseline and adaptation phases by first determining the proportion of positive classifications (“rightward” responses for left-right rotation, “downward” responses for up-down rotation) for each rotation angle of images in the test set. These proportions were calculated as the ratio of the number of responses made in the selected direction to the total number of responses made in all directions. That is, even though participants were free to make up/down responses to faces rotated along the left/right axis and vice-versa, we did not remove such responses from the data set at any point. The overall rate of wrong-axis responses was relatively low (~18% across all participants), and the majority of these responses were elicited by test images that depicted the smallest rotations. This is sensible, since uncertainty about rotation direction is likely quite high for these images.
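For the left-right axis, this calculation amounts to the following sketch (illustrative only; `angle` and `resp` are hypothetical per-trial vectors of signed rotation and 4AFC response code):

```matlab
% Proportion of "rightward" responses at each left-right rotation angle.
% Wrong-axis (up/down) responses remain in the denominator, as described.
levels   = [-7 -5 -3 -1 0 1 3 5 7];              % signed rotation (deg)
numPos   = zeros(size(levels));
outOfNum = zeros(size(levels));
for i = 1:numel(levels)
    trials      = (angle == levels(i));          % all trials at this level
    numPos(i)   = sum(resp(trials) == 'R');      % "rightward" responses
    outOfNum(i) = sum(trials);                   % responses in any direction
end
propRight = numPos ./ outOfNum;                  % raw psychometric data
```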
Next, we fit a logistic function (1) to each set of raw data points using the Palamedes toolbox for Matlab (Prins & Kingdom, 2009). All fits were carried out with the lapse rate (λ) and the guess rate (γ) fixed at 0.01 and with alpha (α) and beta (β) as free parameters.
(1)  $\psi(x;\ \alpha, \beta, \gamma, \lambda) = \gamma + (1 - \gamma - \lambda)\,\dfrac{1}{1 + e^{-\beta(x - \alpha)}}$
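A minimal sketch of the corresponding Palamedes call, with γ and λ fixed at 0.01 and α and β free as stated above; the search-grid values are illustrative assumptions, and `levels`, `numPos`, and `outOfNum` follow the earlier sketch:

```matlab
% Fit Equation (1): alpha and beta free, gamma and lambda fixed at 0.01.
PF = @PAL_Logistic;
searchGrid.alpha  = -7:0.1:7;                    % candidate PSEs (deg)
searchGrid.beta   = 10.^(-1:0.1:1);              % candidate slopes
searchGrid.gamma  = 0.01;                        % fixed guess rate
searchGrid.lambda = 0.01;                        % fixed lapse rate
paramsFree = [1 1 0 0];                          % [alpha beta gamma lambda]
[params, LL, exitflag] = PAL_PFML_Fit(levels, numPos, outOfNum, ...
                                      searchGrid, paramsFree, PF);
pse = params(1);                                 % alpha = fitted PSE
```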
3.1 Adaptation to upright (0-degree) face
The average responses to the test set of images pre- and post-adaptation to the upright (0-degree) adapting image are displayed in Figure 3. Along both the left-right and up-down axes, the data were well described by a logistic function, and we obtained robust fits for all but one participant, who exhibited a strong bias to respond “Down” to all test images and was excluded from our analysis.
Figure 3.
Average “right” (left panel) and “down” responses (right panel) for faces varying in rotation about the left-right and up-down axes respectively. The error bars represent +/− 1 s.e.m.
We analyzed the alpha values obtained from each fit using a 2×2 ANOVA with test phase (pre- vs. post-adaptation) and rotation axis (left-right vs. up-down) as within-subjects factors. This analysis revealed a main effect of test phase (F(1,6)=7.71, p=0.032, η2=0.56), but no other significant main effects or interactions. We continued by carrying out pre-planned comparisons of the pre- and post-adaptation alpha values for the left-right and up-down axes. This analysis revealed a significant effect of adaptation along the left-right axis (t(6)=−5.56, p<0.001, two-tailed paired-samples t-test) but no effect along the up-down axis (t(6)=−1.35, p=0.22). Thus, our data in this condition demonstrate (in agreement with previous results) straightforward adaptation to the view of a face image along the axis of depth rotation, with no significant effect on the perceived view along the orthogonal axis.
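For reference, one such pre-planned comparison can be sketched as below, assuming hypothetical per-subject vectors of fitted alphas; note that in a 2×2 within-subjects design each ANOVA effect is equivalent to a paired contrast of this form (F = t²):

```matlab
% Paired-samples t-test on pre- vs. post-adaptation PSEs (left-right axis).
% Requires the Statistics and Machine Learning Toolbox; alphaPreLR and
% alphaPostLR are hypothetical vectors with one fitted alpha per subject.
[~, p, ~, stats] = ttest(alphaPreLR, alphaPostLR);
fprintf('t(%d) = %.2f, p = %.3f\n', stats.df, stats.tstat, p);
```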
We also carried out a 2×2 ANOVA with the same within-subjects factors on the beta values obtained from our curve fits. This analysis yielded only a significant main effect of rotation axis (F(1,7)=23.9, p=0.002, η2=0.77), indicating that the curve describing “downward” responses along the up-down axis was significantly shallower than the left-right curve.
We continue by examining the data from the critical condition, adaptation to the same adapting image following a 90-degree planar rotation.
3.2 Adaptation to rotated (90-degree) face
The average responses to the test set of images pre- and post-adaptation to the rotated (90-degree) adapting image are displayed in Figure 4. As in our first group of participants, the data in all conditions were well-described by a logistic function. We obtained robust fits for all participants.
Figure 4.
Average “right” (left panel) and “down” responses (right panel) for faces varying in rotation about the left-right and up-down axes respectively. The error bars represent +/− 1 s.e.m. Adapting to the 20-degree face after a 90-degree planar rotation shifts the psychometric curve for “downward” responses, but only slightly shifts the psychometric curve for “rightward” responses in the opposite direction.
As above, we analyzed the alpha values obtained from each fit using a 2×2 ANOVA with test phase (pre- vs. post-adaptation) and rotation axis (left-right vs. up-down) as within-subjects factors. This analysis revealed a main effect of test phase (F(1,7)=16.36, p=0.005, η2=0.70), which was qualified by an interaction between test phase and rotation axis (F(1,7)=23.0, p=0.002, η2=0.76). We continued by carrying out pre-planned t-tests on the pre- and post-adaptation alpha values obtained from each axis, which revealed a significant adaptation effect along the up-down axis (t(7)=−5.20, p=0.001, two-tailed paired-samples t-test), but only a marginal adaptation effect along the left-right axis (t(7)=3.3, p=0.014, above the Bonferroni-corrected critical value of 0.05/4=0.0125). This marginal effect along the left-right axis reflects a shift of the psychometric function in the opposite direction from what we would expect if view coding were invariant to the planar rotation of the adapting image. It is difficult to conclude whether or not this effect reflects some real (and unpredicted) aspect of our stimulus or task. For example, it is possible that low-level properties of the adapting image (e.g., the sharp edge at the bottom of our model’s neck) weakly influenced task performance here. The present study does not allow us to draw any strong conclusions in this regard, so at present we can only speculate about the existence of some additional mechanism. However, we note that whatever processes may or may not be driving the effects observed along the left-right axis in this condition, their contribution appears to be substantially smaller than the observed adaptation effect along the up-down axis. These data thus suggest that observers in this condition adapted to depth rotation along an image axis and were not highly invariant to planar rotation of the stimuli. We summarize the alpha values from all conditions in Figure 5.
Figure 5.
The average alpha values obtained from fitting logistic functions to the psychometric curves presented in Figures 3 and 4. The left panel summarizes the data from participants who adapted to the upright adapting image and the right panel summarizes the data from participants who adapted to the same image after a 90-degree planar rotation. Error bars represent +/− 1 s.e.m.
We also ran a 2×2 ANOVA on the beta values obtained from our fits with the same within-subjects factors as reported above. This analysis revealed no significant main effects or interactions.
4. Discussion
Our results demonstrate that, in some cases, the coding of face pose by high-level areas may be carried out primarily with reference to the axes of rotation in the image, rather than the object. That is, the depth rotation present in the adapting image is not computed relative to the planar rotation of the head: if it were, we should not have observed differing results in our two groups of participants. Instead, we observed adaptation along the image axis, indicating the contribution of an image-based mechanism for encoding depth rotation; this stands in contrast to an account in which separate mechanisms for processing the 0-degree and 90-degree adapting images simply fail to interact with one another (and thus produce no adaptation at all). Our results are also in good agreement with previous studies that have demonstrated multichannel coding of vertical and horizontal rotations of the head (Lawson, Clifford, & Calder, 2011), insofar as adaptation to one rotation angle only significantly affected pose estimates along the relevant axis.
Our data are also broadly consistent with the existence of distinct mechanisms for processing upright and sideways faces. This possibility is further supported by several recent results that provide behavioral and neural evidence for distinct processing of faces rotated in the plane, despite early evidence for a linear decline in face recognition ability as planar rotation varies parametrically (Valentine & Bruce, 1988). Behaviorally, the composite-face effect exhibits a substantial non-linearity in the neighborhood of a 90-degree planar rotation of the face (Rossion & Boremanse, 2008). In terms of neural responses to face stimuli, the N170 also exhibits non-linear behavior in this range (Jacques & Rossion, 2007; Jemel et al., 2009), suggesting that at the structural level of face encoding (Bentin et al., 1996) upright and sideways orientations may be processed separately.
There are several avenues for further inquiry suggested by our data. Examining the nature of view adaptation following planar rotation at a range of neural sites (Pitcher, Walsh, & Duchaine, 2011) may reveal the extent to which image-based axes exert an influence on perceived view at different levels of encoding. Also, rotations of the head about different axes appear to be processed distinctly (Favelle, Palmisano, & Avery, 2011), suggesting that relating the pitch, roll, and yaw of the head to so-called “configural processes” may be one way to anticipate and explain the properties of the underlying neural code for view. Viewpoint perception is also not a unitary process, meaning that there are several questions we can ask about the extent to which the use of object and image axes applies to particular components of viewpoint processing. Specifically, the perception of face view has been shown to depend on two distinct mechanisms (Wilson et al., 2000) that differ in their use of external and internal features for determining view. To what extent does each process exhibit object-based vs. image-based adaptation? Dissociating external and internal features (Daar & Wilson, 2012) within the paradigm used here may reveal distinctions in how object and image axes are used to encode view as a function of the subsets of visual information available. Additionally, though view adaptation appears to transfer weakly across face inversion in previous studies and is better described here in terms of image axes rather than object axes, there are other forms of face adaptation that do exhibit object-based transfer. The face distortion effect, for example, appears to “follow” a 90-degree in-plane rotation of the face (Watson & Clifford, 2003). In that instance, transfer was observed between a −45-degree image and a 45-degree image (left and right of vertical), which may be an important distinction. Cardinal directions in image space might be privileged (as they are in primary visual cortex; Furmanski & Engel, 2000; Westheimer, 2003), such that the tuning functions supporting some aspects of appearance coding are broad across the vertical meridian but not elsewhere. Examining the generality of our result across multiple adaptation paradigms and multiple sets of image-based axes may reveal important organizing principles for encoding face viewpoint and other aspects of facial appearance as a function of orientation. Finally, though we chose to examine face viewpoint in the current study, we cannot speak to the face-specificity of our results. It is entirely possible that the same effects would be observed with other objects, and that our data reveal a general property of view coding that applies to a much larger class of objects. Determining whether or not our results generalize to other object categories would be straightforward, and could reveal the extent to which there exist distinct category-specific mechanisms for estimating common properties of 3D objects.
Research Highlights.
Face viewpoint adaptation is shown to affect performance along the image axis of rotation.
Viewpoint adaptation does not affect performance along object axes following planar rotation.
In general, adaptation to face view along one cardinal axis does not affect the orthogonal axis.
Acknowledgments
This publication was made possible by COBRE Grant P20 GM103505 from the National Institute of General Medical Sciences, a component of the National Institutes of Health (NIH). Its contents are the sole responsibility of the authors and do not necessarily represent the official views of NIGMS or NIH.
References
1. Anderson ND, Wilson HR. The nature of synthetic face adaptation. Vision Research. 2005;45:1815–1828. doi: 10.1016/j.visres.2005.01.012.
2. Bentin S, McCarthy G, Perez E, Puce A, Allison T. Electrophysiological studies of face perception in humans. Journal of Cognitive Neuroscience. 1996;8:551–565. doi: 10.1162/jocn.1996.8.6.551.
3. Brainard DH. The Psychophysics Toolbox. Spatial Vision. 1997;10:433–436.
4. Chen J, Yang H, Wang A, Fang F. Perceptual consequences of face viewpoint adaptation: Face viewpoint aftereffect, changes of differential sensitivity to face view, and their relationship. Journal of Vision. 2010;10(3):12, 1–11. doi: 10.1167/10.3.12.
5. Clifford CWG, Webster MA, Stanley GB, Stocker AA, Kohn A, Sharpee TO, Schwartz O. Visual adaptation: Neural, psychological and computational aspects. Vision Research. 2007;47:3125–3131. doi: 10.1016/j.visres.2007.08.023.
6. Daar M, Wilson HR. Viewpoint aftereffects: Adapting to full faces, head outlines, and features. Vision Research. 2012;53:54–59. doi: 10.1016/j.visres.2011.11.009.
7. Fang F, He S. Viewer-centered object representation in the human visual system revealed by viewpoint aftereffects. Neuron. 2005;45:793–800. doi: 10.1016/j.neuron.2005.01.037.
8. Fang F, Ijichi K, He S. Transfer of the face viewpoint aftereffect from adaptation to different and inverted faces. Journal of Vision. 2007;7(13):6, 1–9. doi: 10.1167/7.13.6.
9. Fang F, Murray SO, He S. Duration-dependent FMRI adaptation and distributed viewer-centered face representation in human visual cortex. Cerebral Cortex. 2007;17:1402–1411. doi: 10.1093/cercor/bhl053.
10. Favelle SK, Palmisano S, Avery G. Face viewpoint effects about three axes: The role of configural and featural processing. Perception. 2011;40(7):761–784. doi: 10.1068/p6878.
11. Furmanski CS, Engel SA. An oblique effect in human primary visual cortex. Nature Neuroscience. 2000;3:535–536. doi: 10.1038/75702.
12. Jacques C, Rossion B. Early electrophysiological responses to multiple face orientations correlate with individual discrimination performance in humans. Neuroimage. 2007;36:863–876. doi: 10.1016/j.neuroimage.2007.04.016.
13. Jeffery L, Rhodes G, Busey T. View-specific coding of face shape. Psychological Science. 2006;17:501–505. doi: 10.1111/j.1467-9280.2006.01735.x.
14. Jemel B, Coutya J, Langer C, Roy S. From upright to upside-down presentation: A spatio-temporal ERP study of the parametric effect of rotation on face and house processing. BMC Neuroscience. 2009;10:100. doi: 10.1186/1471-2202-10-100.
15. Jiang F, Blanz V, O’Toole AJ. The role of familiarity in view transferability of face identity adaptation. Vision Research. 2007;47:525–531. doi: 10.1016/j.visres.2006.10.012.
16. Kleiner M, Brainard D, Pelli D. What's new in Psychtoolbox-3? Perception. 2007;36 (ECVP Abstract Supplement).
17. Kovacs G, Cziraki C, Vidnyanszky Z, Schweinberger SR, Greenlee MW. Position-specific and position-invariant face aftereffects reflect the adaptation of different cortical areas. Neuroimage. 2008;43(1):156–164. doi: 10.1016/j.neuroimage.2008.06.042.
18. Lawson RP, Clifford CWG, Calder AJ. A real head turner: Horizontal and vertical head directions are multichannel coded. Journal of Vision. 2011;11(9):17, 1–17. doi: 10.1167/11.9.17.
19. Leopold DA, O’Toole AJ, Vetter T, Blanz V. Prototype-referenced shape encoding revealed by high-level aftereffects. Nature Neuroscience. 2001;4:89–94. doi: 10.1038/82947.
20. Pelli DG. The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision. 1997;10:437–442.
21. Pitcher D, Walsh V, Duchaine B. The role of the occipital face area in the cortical face perception network. Experimental Brain Research. 2011;209:481–493. doi: 10.1007/s00221-011-2579-1.
22. Prins N, Kingdom FAA. Palamedes: Matlab routines for analyzing psychophysical data. 2009. http://www.palamedestoolbox.org.
23. Rhodes G, Jeffery L, Watson TL, Clifford CWG, Nakayama K. Fitting the mind to the world: Face adaptation and attractiveness aftereffects. Psychological Science. 2003;14:558–566. doi: 10.1046/j.0956-7976.2003.psci_1465.x.
24. Rhodes G, Jeffery L, Watson TL, Jaquet E, Winkler C, Clifford CWG. Orientation-contingent face aftereffects and implications for face-coding mechanisms. Current Biology. 2004;14:2119–2123. doi: 10.1016/j.cub.2004.11.053.
25. Rossion B, Boremanse A. Nonlinear relationship between holistic processing of individual faces and picture-plane rotation: Evidence from the face composite illusion. Journal of Vision. 2008;8(4):3, 1–13. doi: 10.1167/8.4.3.
26. Valentine T, Bruce V. Mental rotation of faces. Memory & Cognition. 1988;16:556–566. doi: 10.3758/bf03197057.
27. Watson TL, Clifford CWG. Pulling faces: Investigating the face distortion aftereffect. Perception. 2003;32:1109–1116. doi: 10.1068/p5082.
28. Watson TL, Clifford CWG. Orientation dependence of the orientation-contingent face aftereffect. Vision Research. 2006;46:3422–3429. doi: 10.1016/j.visres.2006.03.026.
29. Webster MA, Kaping D, Mizokami Y, Duhamel P. Adaptation to natural facial categories. Nature. 2004;428:558–561. doi: 10.1038/nature02420.
30. Webster MA, MacLin OH. Figural aftereffects in the perception of faces. Psychonomic Bulletin & Review. 1999;6:647–653. doi: 10.3758/bf03212974.
31. Westheimer G. Meridional anisotropy in visual processing: Implications for the neural site of the oblique effect. Vision Research. 2003;43:2281–2289. doi: 10.1016/s0042-6989(03)00360-2.
32. Wilson HR, Wilkinson F, Lin LM, Castillo M. Perception of head orientation. Vision Research. 2000;40:459–472. doi: 10.1016/s0042-6989(99)00195-9.
33. Zhao L, Chubb CF. The size-tuning of the face-distortion aftereffect. Vision Research. 2001;41:2979–2994. doi: 10.1016/s0042-6989(01)00202-4.