Perception. 2023 May 17;52(7):514–523. doi: 10.1177/03010066231172087

Foveal to peripheral extrapolation of facial emotion

Feriel Zoghlami, Matteo Toscani
PMCID: PMC10291354  PMID: 37198897

Abstract

Peripheral vision is characterized by poor resolution. Recent evidence from brightness perception suggests that missing information is filled out with information at fixation. Here we show a novel filling-out mechanism: when participants are presented with a crowd of faces, the perceived emotion of faces in peripheral vision is biased towards the emotion of the face at fixation. This mechanism is particularly important in social situations where people often need to perceive the overall mood of a crowd. Some faces in the crowd are more likely to catch people's attention and be looked at directly, while others are only seen peripherally. Our findings suggest that the perceived emotion of these peripheral faces, and the overall perceived mood of the crowd, is biased by the emotions of the faces that people look at directly.

Keywords: eye movements, face perception, crowding/eccentricity, grouping


The architecture of the visual system changes with eccentricity (Strasburger et al., 2011): the density of cone photoreceptors decreases, and stimuli seen in the periphery are processed by a much smaller number of neurons than when they are seen in central vision. These differences between central and peripheral vision imply reduced visual acuity, contrast sensitivity, and colour sensitivity (Hansen et al., 2009; Rovamo et al., 1992; Weale, 1953) and cause distortions of basic visual features like spatial frequency (Davis, 1990), luminance (Greenstein & Hood, 1981), or chromatic saturation (McKeefry et al., 2007).

However, recent research shows that peripheral and foveal vision processing are linked. Foveal appearance extends towards the periphery (Otten et al., 2017; Stewart et al., 2020; Toscani et al., 2013a, 2013b, 2017). We showed (Toscani et al., 2017) that the luminance at the fovea biases the brightness of peripheral areas on 3-D objects (brightness filling-out).

Crucially, we found no influence of fixated luminance when observers foveated outside the object's boundaries. These results indicate that our visual system uses foveal information to estimate the brightness of areas in the periphery. It does so only when a certain degree of continuity can be safely assumed, such as when two points are within the same object boundary. Taking this complexity into account changes the way we think of peripheral vision. It is possible that the biases revealed by classical findings from peripheral vision with simple stimuli presented in isolation are corrected by the integration process and do not occur in natural vision.

In many everyday-life activities, we focus on salient regions (Land, 2009) and perceive most of the scene peripherally. For instance, in social situations, when we are presented with a crowd of people, some faces are more likely to attract fixations than others depending on their emotion (Bucher & Voss, 2019), leaving the emotions of other faces poorly resolved. However, the ability to perceive the general mood of the group is of high social relevance. The visual system can rapidly form a synthetic impression of the emotion of a crowd of faces, even when observers cannot report anything about the individual face features (Haberman & Whitney, 2009).

Here we present a novel filling-out effect: the emotion of peripheral faces in a crowd is biased towards the emotion of the face at fixation. This may contribute to the perceived overall emotion of the group. We tested the hypothesis that the emotion of the face at fixation causally influences the perceived emotion of faces presented peripherally. In Experiment 1, we used a gaze-contingent paradigm to manipulate the emotion at fixation while participants judged the emotion of a face presented in peripheral vision. Observers were asked to adjust the emotion of a probe face to match the emotion of the target face that they could see only peripherally. For comfort and naturalness, participants were free to look at the probe face, so they moved their eyes back and forth between the face at fixation in the group and the probe face they adjusted. However, the perceived emotion of a face is biased away from the emotion of a face presented shortly before at the same location, consistent with retinotopic adaptation (Leopold et al., 2001). Thus, the perceived emotion of the probe face could be biased by the emotion of the face at fixation, and participants could adjust the probe face to compensate for this bias. We controlled for this potential confound in Experiment 2. In a third experiment, observers judged the overall mood of a crowd of faces while eye movements were unconstrained. We found that fixation behaviour contributes to explaining individual differences in the perceived mood of the crowd.

Method

Participants

Twenty-five participants volunteered for the first experiment, 15 for the second, and 20 for the third, mostly Bournemouth University psychology students. All gave written informed consent in accordance with the Code of Ethics of the World Medical Association (Declaration of Helsinki). The local ethics committee approved the experiments.

Face Stimuli

Four individuals were chosen from the FACES database (Ebner et al., 2010). Each individual is represented in the database by a happy, neutral, and sad photograph. To generate 200 faces ranging from sad to neutral to happy, we first used a facial landmark detection tool from computer vision to identify 68 landmarks in each image. Then, we used these landmarks for 200 steps of morphing (https://github.com/Azmarie/Face-Morphing, accessed on October 24, 2022) between the sad image and the neutral image and between the neutral image and the happy image (Figure 1A). The most negative emotion was coded as −1, and the most positive as 1, with 0 representing neutral emotion.
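As an illustration, the mapping from an emotion value on the [−1, 1] scale to a morph frame can be sketched as below. This is a minimal sketch, assuming the 200 morph frames per identity are stored in order from the saddest to the happiest image; the function name and exact indexing are illustrative, not taken from the original stimulus-generation code.

```python
import numpy as np

def emotion_to_frame(emotion, n_steps=200):
    """Map an emotion value in [-1, 1] onto one of n_steps morph frames.

    Frames are assumed to be ordered from the sad endpoint (-1), through
    neutral (0), to the happy endpoint (+1).
    """
    emotion = float(np.clip(emotion, -1.0, 1.0))
    return int(round((emotion + 1.0) / 2.0 * (n_steps - 1)))

# Example: the happy target used in Experiment 1 (+0.33) maps to frame 132
print(emotion_to_frame(0.33))
```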

Figure 1.

Stimuli. (A) Emotional faces. Four individuals in a row, emotion from sad to happy, from left to right. Twenty example emotions are shown, linearly spaced from −1 to 1. (B) Experimental display for Experiments 1 & 3. The arrow indicates the face at fixation; the faces in the crowd are placed either 6 dva (degrees of visual angle) or 8 dva around it. The probe face is shown on the right. The red circle indicates the target face. (C) Display for Experiment 2. The arrow indicates the face at fixation. The target and the probe face are placed 8 dva away from fixation.

Eye-Tracking Recording

Gaze position was recorded with a desktop-mounted eye tracker (EyeLink 1000; SR Research Ltd., Osgoode, Ontario, Canada) at a sampling rate of 1000 Hz. The display was viewed binocularly, but only the right eye was tracked. We performed a standard calibration procedure at the beginning of each experiment (Toscani et al., 2013b).

Experimental Display

In Experiments 1 and 3, participants were shown 11 faces on the left side of the computer screen and one probe face on the right. The crowd consisted of one face in the centre, five faces presented on a circle around the centre at 6 dva eccentricity, and five faces on a circle at 8 dva eccentricity. The faces were evenly distributed around each of the two circles, and the two circular arrays were out of phase by a random angle (see Figure 1B). The identities of the faces were selected randomly from the four individuals. The probe face on the right was presented 10 dva away from the centre of the crowd. In Experiment 2, the faces of the crowd were arranged in a 3 × 4 matrix. With this arrangement, we ensured that the probe-fixation and target-fixation distances were the same. Participants looked at the face in the central row and rightmost column while the target and the probe face were shown peripherally, 8 dva away from fixation.
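The geometry of the Experiment 1 and 3 display can be sketched as follows. This is a minimal sketch under the assumptions in the comments (positions expressed in dva relative to the central face); the function and variable names are illustrative.

```python
import numpy as np

def crowd_positions(inner_ecc=6.0, outer_ecc=8.0, n_per_ring=5, rng=None):
    """Return 11 (x, y) positions in dva: one central face plus two rings
    of five faces, evenly spaced, with the rings out of phase by a random angle."""
    rng = np.random.default_rng() if rng is None else rng
    positions = [(0.0, 0.0)]                      # central, fixated face
    inner_phase = rng.uniform(0.0, 2.0 * np.pi)
    outer_phase = inner_phase + rng.uniform(0.0, 2.0 * np.pi)
    for ecc, phase in ((inner_ecc, inner_phase), (outer_ecc, outer_phase)):
        angles = phase + np.arange(n_per_ring) * 2.0 * np.pi / n_per_ring
        positions += [(ecc * np.cos(a), ecc * np.sin(a)) for a in angles]
    return positions  # convert to pixels with the monitor's dva-to-pixel factor

positions = crowd_positions()
```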

Procedure

In Experiment 1, we tested whether the emotion at fixation causally influences the perceived emotion of the peripheral target face, presented at different eccentricities. Participants used the keyboard to adjust the probe face's emotion to match the emotion of the target face indicated by a cue. Pressing the left arrow made the probe face sadder and the right arrow happier, on a linear scale. Participants could display the cue, a red ellipse (Figure 1B), by pressing the keyboard's “S” key. We employed a gaze-contingent paradigm to ensure that the target face was presented in peripheral vision. Observers were instructed to fixate the face in the centre of the crowd (forced looking) or to look at the probe face to adjust it. As soon as they shifted their gaze away from the central face, all faces but the probe face disappeared, and a fixation point was shown. The emotions of the central face and the target face were manipulated systematically, while the emotions of the other faces were randomly sampled from a normal distribution (mean = 0; variance = 0.1). The target face could be presented at a low or high eccentricity and express sadness, neutrality, or happiness (−.33, 0, .33, respectively; more extreme values could have hidden potential effects because of the limited range). The central face's expression could be sad or happy (−.5 or .5, respectively). Each combination was repeated five times, for a total of 60 trials (2 × 3 × 2 × 5).
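The factorial design of Experiment 1 can be sketched as below. The sketch assumes the nine non-target crowd faces were drawn independently on every trial from the normal distribution described above (mean 0, variance 0.1); all names are illustrative and not taken from the experiment code.

```python
import itertools
import random
import numpy as np

rng = np.random.default_rng()

FIXATION_EMOTIONS   = (-0.5, 0.5)           # sad vs happy face at fixation
PERIPHERAL_EMOTIONS = (-0.33, 0.0, 0.33)    # sad, neutral, happy target
ECCENTRICITIES      = (6.0, 8.0)            # dva
N_REPEATS           = 5

trials = []
for fix, target, ecc in itertools.product(
        FIXATION_EMOTIONS, PERIPHERAL_EMOTIONS, ECCENTRICITIES):
    for _ in range(N_REPEATS):
        # emotions of the remaining 9 crowd faces: N(mean=0, variance=0.1)
        flankers = rng.normal(loc=0.0, scale=np.sqrt(0.1), size=9)
        trials.append({"fixation": fix, "target": target,
                       "eccentricity": ecc, "flankers": flankers})

random.shuffle(trials)   # 2 x 3 x 2 x 5 = 60 trials in random order
```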

The procedure in Experiment 2 was the same as in Experiment 1, but we only tested sad or happy target faces (−.66 or .33, respectively) and one eccentricity (8 dva), for a total of 20 trials (2 × 2 × 5). The critical difference between Experiments 1 and 2 is that in Experiment 2, participants could see the probe face and the target face only peripherally. Conversely, in Experiment 1, observers could look directly at the probe face. The conditions of Experiment 2 allowed us to control for potential adaptation effects. Retinotopic adaptation (Leopold et al., 2001) to, for example, a happy face could make the probe face appear sadder when it is looked at directly. Participants would then adjust the probe face to be happier to match the perceived emotion of the target face, independent of a potential filling-out effect. Also, as the target and probe faces are presented at the same eccentricity, differences across the visual field cannot explain perceptual biases.

In Experiment 3, we investigated whether the crowd's perceived overall emotion was related to the emotion of the faces that participants looked at (free looking). Participants were required to adjust the probe face to match the overall mood of the crowd while their eye movements were recorded. The mean emotion of the crowd could assume five values [−0.66, −0.33, 0, 0.33, 0.66], and the variance could take three values [.1, .2, .4], for a total of 15 possible combinations. Each combination was presented five times, for a total of 75 trials (5 × 3 × 5).
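One way to construct an Experiment 3 crowd with a given nominal mean and variance is sketched below. Whether the emotions were sampled from a normal distribution (as here) or adjusted to match the nominal statistics exactly is an assumption, as is the clipping to the available morph range.

```python
import numpy as np

MEANS     = (-0.66, -0.33, 0.0, 0.33, 0.66)   # nominal crowd means
VARIANCES = (0.1, 0.2, 0.4)                   # nominal crowd variances

def make_crowd(mean, variance, n_faces=11, rng=None):
    """Sample emotions for one crowd from N(mean, variance), clipped to
    the available morph range [-1, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    emotions = rng.normal(loc=mean, scale=np.sqrt(variance), size=n_faces)
    return np.clip(emotions, -1.0, 1.0)

# 5 means x 3 variances x 5 repetitions = 75 trials
```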

Results

Experiment 1

Figure 2A and B shows the adjusted emotion of the probe face (Matched Emotion) as a function of the emotion of the target face (Peripheral Emotion) and the Emotion at Fixation (Sad vs Happy) for the target face presented at 6 dva or 8 dva, respectively.

Figure 2.

Results of Experiments 1 & 2. Matched Emotion on the y-axis, Peripheral Emotion on the x-axis. The icons exemplify how emotions vary from sad to happy; they do not necessarily correspond to the values on the x-axis. Circular data points indicate means (red for a sad face at fixation, blue for a happy face); error bars represent the standard error of the mean. (A & B) Data for the target face presented at 6 or 8 dva, respectively. (C) Data averaged across eccentricity. (D) Data for Experiment 2.

For both eccentricities, the emotion of a happy peripheral target face was matched as happier when participants looked at a happy face than when they looked at a sad face. This was not always the case when the emotion of the peripheral target face was neutral or sad. A three-way repeated-measures ANOVA with Emotion at Fixation, Peripheral Emotion, and Eccentricity as factors reveals a significant interaction between Emotion at Fixation and Peripheral Emotion (F(2,48) = 3.421, p = .041, ηp2 = 0.125). This means that the impact of Emotion at Fixation varies with the emotion of the target face seen in the periphery (Peripheral Emotion). This is evident in Figure 2C (average across eccentricity): the effect of Emotion at Fixation is larger for happy target faces.
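A repeated-measures ANOVA of this kind can be run, for example, with statsmodels. Below is a minimal, self-contained sketch on synthetic data standing in for the real per-trial matches; the column names and the generated values are illustrative, and this is not the authors' analysis script.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)

# Synthetic long-format data with the Experiment 1 design: one row per trial.
rows = []
for participant in range(25):
    for fixation in (-0.5, 0.5):
        for peripheral in (-0.33, 0.0, 0.33):
            for eccentricity in (6, 8):
                for _ in range(5):   # 5 repetitions per cell
                    rows.append(dict(participant=participant,
                                     fixation=fixation,
                                     peripheral=peripheral,
                                     eccentricity=eccentricity,
                                     matched=peripheral + rng.normal(0, 0.1)))
df = pd.DataFrame(rows)

aov = AnovaRM(df, depvar="matched", subject="participant",
              within=["fixation", "peripheral", "eccentricity"],
              aggregate_func="mean").fit()   # averages the 5 repetitions per cell
print(aov)   # table includes the Fixation x Peripheral interaction term
```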

Experiment 2

Experiment 2 (Figure 2D) replicates the results from Experiment 1. A 2 × 2 repeated measures ANOVA shows a significant interaction between Emotion at Fixation and Peripheral Emotion (F (1,14) = 4.841, p = .045, ηp2 = 0.257). Together, these results suggest that the emotion of peripheral faces is influenced by the emotion at fixation, at least when peripheral faces are happy. Crucially, this effect cannot be explained by retinotopic adaptation.

Experiment 3

We first investigated whether, while looking at a crowd of faces, participants preferred to focus on some faces depending on their emotions and then whether this preference could explain the perceived mood of the crowd.

For each participant and each trial, we computed the Mean Fixated Emotion (Figure 3A and B). For each fixation, we determined the emotion of the closest face and averaged these emotions across all fixations in the trial. We also computed the Mean emotion of the crowd and measured the perceived overall emotion of the crowd as the value of the adjusted probe face (Match).
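A sketch of this assignment step is given below, assuming fixation coordinates and face positions are expressed in the same units; the function name and array layout are illustrative.

```python
import numpy as np

def mean_fixated_emotion(fixations, face_positions, face_emotions):
    """Average, across fixations in a trial, the emotion of the face whose
    centre is closest to each fixation."""
    fixations = np.asarray(fixations, dtype=float)       # shape (n_fix, 2)
    centres   = np.asarray(face_positions, dtype=float)  # shape (n_faces, 2)
    emotions  = np.asarray(face_emotions, dtype=float)   # shape (n_faces,)
    # pairwise distances between every fixation and every face centre
    dists = np.linalg.norm(fixations[:, None, :] - centres[None, :, :], axis=-1)
    nearest = dists.argmin(axis=1)                       # closest face per fixation
    return emotions[nearest].mean()

# Example: two fixations landing nearest the central face of a 3-face layout
print(mean_fixated_emotion([(0.2, -0.1), (0.1, 0.3)],
                           [(0.0, 0.0), (6.0, 0.0), (-6.0, 0.0)],
                           [0.5, -0.3, 0.1]))
```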

Figure 3.

Results of Experiment 3. (A & B) Two example participants. Each data point represents one trial. The x-axis shows the mean emotion (Mean) of the faces presented in that trial. Left: matched emotion (Match) on the y-axis (data in red). Right: mean Fixated Emotion on the y-axis (data in blue). The dashed coloured line is the regression line; the continuous line is the unity line. (C) Regression slopes averaged across participants for the two relationships represented in A & B (colours are consistent). The dashed line is the unity line. The error bars represent the standard error of the mean. (D) Slope of the relationship between Mean emotion and mean Fixated Emotion (examples in A & B, right) as a function of the slope of the relationship between Mean emotion and matched emotion (examples in A & B, left). Each data point is one participant. The significant correlation holds even after removing the two outliers highlighted by red circles.

Figure 3A and B shows data from two example participants. The linear relationship between the mean emotion of the faces presented in each trial and the matched emotion shows that participants could match the overall emotion of the crowd. However, they tended to produce happier matches for happy crowds and sadder matches for sad crowds; that is, the slope of the regression line is steeper than the unity line. Similarly, when the mean emotion of the crowd was sad, they tended to fixate faces sadder than average, and when the mean was happy, they tended to fixate faces happier than average. Again, this is indicated by the slope of the regression line.

The regression slope averaged across participants is higher than 1 (Figure 3C) both for the relationship between Mean emotion and Match and for the relationship between Mean emotion and Fixated Emotion. T-tests show that the slope is significantly different from 1 (t(19) = 7.65, p < .001, d = 3.51) for the relationship between Fixated Emotion and Mean emotion but fails to reach significance (t(19) = 1.4, p = .179, d = 0.642) for the relationship between Match and Mean emotion (Figure 3D). A Bayesian t-test shows that the hypothesis that the slope is equal to 1 is almost twice as likely as the alternative hypothesis (BF01 = 1.85).

Crucially, this tendency to exaggerate matches with respect to the mean emotion varies between participants. For example, the participant in Figure 3B shows a matching regression line closer to the unity line, as well as a weaker tendency to fixate faces happier or sadder than average (blue regression line, Figure 3B), whereas the example participant in Figure 3A tends to focus on extremes. Participants who tended to fixate on extreme emotions more than others also tended to exaggerate the mean emotion when producing their matches. This is indicated by the significant correlation (r = .65, p = .002) between the two slopes across participants (Figure 3D). This correlation is not driven by the two outliers (marked in red in Figure 3D), because it holds after removing them (r = .484, p = .031).
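The slope and correlation analysis can be sketched with scipy as follows. The data generated here are synthetic stand-ins for the per-participant trial values, and the names are illustrative; a Bayesian one-sample t-test (for the BF01 reported above) would require an additional package such as pingouin.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def slope(x, y):
    """Least-squares slope of y regressed on x (computed per participant)."""
    return stats.linregress(x, y).slope

# Synthetic stand-in data: 20 participants x 75 trials each.
n_participants, n_trials = 20, 75
crowd_mean = rng.choice([-0.66, -0.33, 0.0, 0.33, 0.66], (n_participants, n_trials))
matched = 1.1 * crowd_mean + rng.normal(0, 0.1, crowd_mean.shape)  # matched emotion
fixated = 1.5 * crowd_mean + rng.normal(0, 0.1, crowd_mean.shape)  # mean fixated emotion

match_slopes = np.array([slope(crowd_mean[p], matched[p]) for p in range(n_participants)])
fix_slopes   = np.array([slope(crowd_mean[p], fixated[p]) for p in range(n_participants)])

print(stats.ttest_1samp(fix_slopes, 1.0))    # is the fixation slope steeper than 1?
print(stats.ttest_1samp(match_slopes, 1.0))  # same test for the matching slope

# Across participants: do extreme fixators also produce exaggerated matches?
r, p = stats.pearsonr(match_slopes, fix_slopes)
print(r, p)   # r**2 corresponds to the share of variance explained quoted below
```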

This experiment shows that fixation behaviour can explain around 42% of the inter-individual variability in the perceived overall emotion of a crowd of faces.

Discussion

We showed that when participants are presented with a crowd of faces, the perceived emotion of faces in peripheral vision is biased by the emotion of the face at fixation. This is similar to the brightness filling-out mechanism previously shown (Toscani et al., 2017). Analogous mechanisms are probably responsible for the uniformity illusion (Otten et al., 2017). When we fixate on the centre of textures, they tend to appear uniform (Figures 4A and B), despite systematic changes in the periphery, suggesting that local features are integrated across the visual field. In fact, it is possible to reproduce a similar illusion with a crowd of faces. When staring at the centre, all faces appear sad, although the peripheral faces are happy (Figure 4C).

Figure 4.

Uniformity illusion. (A & B) Two examples of the uniformity illusion, from Otten et al. (2017). (C) Our faces version of the illusion.

We believe we discovered a novel filling-out effect, which cannot be explained by known effects of the group on the appearance of individual faces. When an attractive face is presented in a group of average faces, this face tends to look more attractive; conversely, an unattractive face appears even less appealing when presented in the group (Lei et al., 2020). This result is interpreted as evidence for a contrast mechanism: when judging the attractiveness of a face in a group, we compare it with its surroundings, which we use as a reference. Such contrast mechanisms could explain known group effects, such as the cheerleader effect (van Osch et al., 2015; Walker & Vul, 2014) or the friend effect (Ying et al., 2019). To the best of our knowledge, such contrast effects have not been reported for perceived emotion; however, even if the results from attractiveness research generalized to emotion, they could not explain our findings, as contrast with the central face would bias the perceived emotion of the peripheral face away from it, not towards it.

Group effects on attractiveness like the ones described above can also be explained by ensemble coding, whereby faces in the group are perceived as a whole whose appearance biases the appearance of the individual faces. Ensemble coding enables people to extract the ensemble statistics of objects (Whitney & Yamanashi Leib, 2018), such as the average attractiveness of a group of faces. The perception of individual faces may then be biased towards the average (Walker & Vul, 2014), which tends to be more attractive than individual faces (Langlois & Roggman, 1990). It is possible that ensemble coding mediates the filling-out effect we discovered. Based on our previous research (Toscani et al., 2013a, 2013b, 2016), we speculate that when peripheral and central information is integrated, for instance to estimate the group's emotion, more weight is given to information at fixation. This interpretation is supported by several studies indicating that in ensemble perception more weight is given to the location at fixation (Florey et al., 2016; Ji et al., 2014) and to attended items (De Fockert & Marchant, 2008), also specifically for facial expression judgments (Ueda, 2022), as in our study. The alternative possibility is that all faces are weighted equally, and the only effect of the face at fixation is to shift the average towards its emotion. However, if this were the case, in our version of the uniformity illusion (Figure 4C) the average happy emotion would prevail over the sad emotion of the few central faces, predicting the opposite of what we observe.

Since groups of faces are coded as an ensemble (Haberman & Whitney, 2007), and based on our results on brightness (Toscani et al., 2017), we speculate that filling-out only works when participants look at a face within the group. Our experiments focused on the socially relevant situation of peripheral faces within a crowd. However, control experiments with peripheral faces presented in isolation or in which participants fixated on faces outside of the group could help understand the role of ensemble coding in the filling-out effect we found.

Retinotopic adaptation may bias the probe face's perceived emotion away from the face's emotion at fixation (Leopold et al., 2001). Thus, even with no filling-out mechanism, participants could adjust the probe face to compensate for the bias caused by retinotopic adaptation, producing matches biased towards the emotion of the face at fixation. In Experiment 2, the probe face was also presented peripherally and could not be subject to retinotopic adaptation to the emotion of the face at fixation in the group. Results ruled out the possibility that the effect we found is due to retinotopic adaptation; thus, we are confident we discovered a novel filling-out effect.

The emotion of the face at fixation mainly influenced happy peripheral faces. Faces are hard to recognize in the periphery unless they are happy (Goren & Wilson, 2006). This may explain why sad peripheral faces were matched with almost the same emotion as neutral ones: if their emotion is not recognizable, we cannot measure whether the filling-out effect biases it. In Experiment 2, we presented participants with a sadder peripheral face than in Experiment 1 (−.66 instead of −.33). However, results showed filling-out only for the happy peripheral face. Further research with a perceptually validated emotional scale may help elucidate this finding.

Filling-out may influence how we perceive the crowd's overall emotion, as we do not look at all faces with the same probability. We found that, when presented with a group of sad faces, participants tended to fixate on the sadder faces and, when presented with happy faces, on the happier ones. Our finding that fixations favour emotional faces is consistent with earlier reports (Bucher & Voss, 2019; Horstmann & Bauland, 2006; Joormann & Gotlib, 2007). This selective fixation strategy seemed to influence the perceived overall emotion of the crowd: groups of sad faces were reported to be sadder than their average, and crowds of happy faces happier. Crucially, participants who looked more at the extreme faces also showed the largest biases in perceived overall emotion. This suggests a link between attentional selection processes and how we perceive the crowd's mood.

Footnotes

Author Contribution(s): Feriel Zoghlami: Conceptualization; Data curation; Formal analysis; Investigation; Methodology; Project administration; Writing – original draft; Writing – review & editing.

Matteo Toscani: Conceptualization; Data curation; Formal analysis; Investigation; Methodology; Project administration; Software; Supervision; Writing – original draft; Writing – review & editing.

The authors declared no potential conflicts of interest regarding the research, authorship, and/or publication of this article.

Funding: The authors received no financial support for the research, authorship, and/or publication of this article.

ORCID iD: Matteo Toscani https://orcid.org/0000-0002-1884-5533

References

  1. Bucher A., Voss A. (2019). Judging the mood of the crowd: Attention is focused on happy faces. Emotion, 19, 1044.
  2. Davis E. T. (1990). Modeling shifts in perceived spatial frequency between the fovea and the periphery. JOSA A, 7, 286–296.
  3. De Fockert J. W., Marchant A. P. (2008). Attention modulates set representation by statistical properties. Perception & Psychophysics, 70, 789–794.
  4. Ebner N. C., Riediger M., Lindenberger U. (2010). FACES—A database of facial expressions in young, middle-aged, and older women and men: Development and validation. Behavior Research Methods, 42, 351–362.
  5. Florey J., Clifford C. W., Dakin S., Mareschal I. (2016). Spatial limitations in averaging social cues. Scientific Reports, 6, 1–12.
  6. Goren D., Wilson H. R. (2006). Quantifying facial expression recognition across viewing conditions. Vision Research, 46, 1253–1262.
  7. Greenstein V. C., Hood D. C. (1981). Variations in brightness at two retinal locations. Vision Research, 21, 885–891.
  8. Haberman J., Whitney D. (2007). Rapid extraction of mean emotion and gender from sets of faces. Current Biology, 17, R751–R753.
  9. Haberman J., Whitney D. (2009). Seeing the mean: Ensemble coding for sets of faces. Journal of Experimental Psychology: Human Perception and Performance, 35, 718.
  10. Hansen T., Pracejus L., Gegenfurtner K. R. (2009). Color perception in the intermediate periphery of the visual field. Journal of Vision, 9, 26.
  11. Horstmann G., Bauland A. (2006). Search asymmetries with real faces: Testing the anger-superiority effect. Emotion, 6, 193.
  12. Ji L., Chen W., Fu X. (2014). Different roles of foveal and extrafoveal vision in ensemble representation for facial expressions. 164–173.
  13. Joormann J., Gotlib I. H. (2007). Selective attention to emotional faces following recovery from depression. Journal of Abnormal Psychology, 116, 80.
  14. Land M. F. (2009). Vision, eye movements, and natural behavior. Visual Neuroscience, 26, 51–62.
  15. Langlois J. H., Roggman L. A. (1990). Attractive faces are only average. Psychological Science, 1, 115–121.
  16. Lei Y., He X., Zhao T., Tian Z. (2020). Contrast effect of facial attractiveness in groups. Frontiers in Psychology, 11, 2258.
  17. Leopold D. A., O’Toole A. J., Vetter T., Blanz V. (2001). Prototype-referenced shape encoding revealed by high-level aftereffects. Nature Neuroscience, 4, 89–94.
  18. McKeefry D. J., Murray I. J., Parry N. R. (2007). Perceived shifts in saturation and hue of chromatic stimuli in the near peripheral retina. JOSA A, 24, 3168–3179.
  19. Otten M., Pinto Y., Paffen C. L., Seth A. K., Kanai R. (2017). The uniformity illusion: Central stimuli can determine peripheral perception. Psychological Science, 28, 56–68.
  20. Rovamo J., Franssila R., Näsänen R. (1992). Contrast sensitivity as a function of spatial frequency, viewing distance and eccentricity with and without spatial noise. Vision Research, 32, 631–637.
  21. Stewart E. E., Valsecchi M., Schütz A. C. (2020). A review of interactions between peripheral and foveal vision. Journal of Vision, 20, 2.
  22. Strasburger H., Rentschler I., Jüttner M. (2011). Peripheral vision and pattern recognition: A review. Journal of Vision, 11, 13.
  23. Toscani M., Gegenfurtner K. R., Valsecchi M. (2017). Foveal to peripheral extrapolation of brightness within objects. Journal of Vision, 17, 14.
  24. Toscani M., Valsecchi M., Gegenfurtner K. R. (2013a). Optimal sampling of visual information for lightness judgments. Proceedings of the National Academy of Sciences, 110, 11163–11168.
  25. Toscani M., Valsecchi M., Gegenfurtner K. R. (2013b). Selection of visual information for lightness judgements by eye movements. Philosophical Transactions of the Royal Society B: Biological Sciences, 368, 20130056.
  26. Toscani M., Zdravković S., Gegenfurtner K. R. (2016). Lightness perception for surfaces moving through different illumination levels. Journal of Vision, 16, 21.
  27. Ueda Y. (2022). Understanding mood of the crowd with facial expressions: Majority judgment for evaluation of statistical summary perception. Attention, Perception, & Psychophysics, 84, 843–860.
  28. van Osch Y., Blanken I., Meijs M. H., van Wolferen J. (2015). A group’s physical attractiveness is greater than the average attractiveness of its members: The group attractiveness effect. Personality and Social Psychology Bulletin, 41, 559–574.
  29. Walker D., Vul E. (2014). Hierarchical encoding makes individuals in a group seem more attractive. Psychological Science, 25, 230–235.
  30. Weale R. (1953). Spectral sensitivity and wave-length discrimination of the peripheral retina. The Journal of Physiology, 119, 170.
  31. Whitney D., Yamanashi Leib A. (2018). Ensemble perception. Annual Review of Psychology, 69, 105–129.
  32. Ying H., Burns E., Lin X., Xu H. (2019). Ensemble statistics shape face adaptation and the cheerleader effect. Journal of Experimental Psychology: General, 148, 421.
