Scientific Reports. 2025 Dec 2;15:42983. doi: 10.1038/s41598-025-27013-w

The impact of simulated cataract on face learning

Abuk Akech, Benjamin Balas
PMCID: PMC12672802  PMID: 41331273

Abstract

Cataracts are a common visual impairment that directly affects visual acuity and contrast sensitivity, both of which constrain mid-level and high-level visual processes. Fortunately, cataracts are also relatively easy to treat, so many adults with impaired vision caused by cataract can have their vision restored successfully. This raises the question of how impaired vision due to cataract onset and the restoration of normal vision via surgery impact performance on recognition tasks. In the present study, we examined this question in the context of learning to recognize new faces: How well do observers establish useful representations of novel face identities in the context of cataracts, and how do changes in vision associated with cataract onset and treatment affect recognition of recently-learned faces? We used cataract simulation goggles to vary the appearance of face stimuli during the learning and test phases of a simple novel face learning task and examined how recognition for learned and unlearned faces was affected by visual quality in each learning phase.

Subject terms: Diseases, Health care, Medical research, Neuroscience

Introduction

Cataracts are a common visual impairment that affects nearly half of adults over the age of 80 [1]. Typically, individuals with cataracts in one or both eyes experience cloudy or blurry vision, decreased saturation of colors, and have particular difficulty viewing light sources due to diffusion of light through the cataract producing haloes, star-like formations, or haze. Fortunately, cataracts are usually easy to treat, resulting in substantial recovery of visual acuity, contrast sensitivity, and color sensitivity2. Cataract onset and treatment thus present an intriguing problem domain for examining the visual system’s sensitivity to changes in visual experience: Given that vision gradually becomes impaired over time as cataract density increases and then may rapidly improve following corrective surgery, how do these changes affect performance in high-level recognition tasks? Here, we examined how simulated cataract onset and offset affected face learning. Specifically, how do changes in visual quality between learning new faces and recognizing them later affect observers’ recognition accuracy?

Face recognition is in general resilient to blur3. While high spatial frequencies do make meaningful contributions to some face recognition tasks4, face identification and matching more critically depend on an intermediate to low spatial frequency band of approximately 8–16 cycles per face, with some variability in the specific boundaries of this band across studies [5–8]. Relatively low spatial frequencies also appear to support configural face processing9 - the encoding of spatial relationships between discrete facial features like the eyes, nose, and mouth - and holistic face processing10 - the encoding of the face as a large-scale template that is not further analyzed into parts - both of which are crucial contributors to robust face recognition11,12. As such, we might expect that cataracts, even relatively dense ones, may not lead to much impairment of face recognition abilities in adulthood. However, Elliot et al.13 found that simulated cataracts did lead to measurable impairments of face identification and facial expression recognition, which they attributed to low-level deficits in contrast sensitivity. Adult patients who received cataract surgery also exhibited improvements in face recognition abilities following treatment of each eye in sequence2, which indicates both that cataracts impaired face recognition abilities prior to removal and that surgical treatment generally restores these abilities to pre-cataract levels as assessed by comparison to control groups14. The impacts of cataracts on visual acuity and contrast sensitivity thus have a measurable effect on adult patients’ face recognition abilities, but these are generally alleviated by removal and replacement of the lens. The presence of cataracts in infancy and childhood may lead to long-lasting differences in face recognition outcomes between patients and control participants15,16 even as some aspects of pattern vision17 and face processing are comparable18.
An illustrative example of this phenomenon is the development of gaze-following subsequent to treatment of congenital cataracts in childhood19, which remains poor even though these patients have sufficient resolution to see the pupils of the eye adequately. Such specific differences in face processing notwithstanding, the face recognition outcomes for patients who acquire cataracts in adulthood and pursue treatment are overall very good and support everyday social interaction.

Our focus in the current study concerns a different question regarding face recognition abilities and the onset and treatment of visual cataracts, however: What is the potential for changes in visual quality, whether that change is negative or positive, to affect how face recognition tasks are performed when learning new identities and recognizing them in new contexts? In the face recognition literature, several relevant studies have examined transfer between spatial frequency bands for face recognition judgments. Typically these studies involve digital filtering of face images to create stimuli that contain only low spatial frequencies (LSF images), high spatial frequencies (HSF), or in some cases intermediate spatial frequencies achieved via bandpass filtering. In general, the results of these studies indicate that for face identity matching, overlap between the frequency bands used for familiarization and test supports better face recognition outcomes20. Faces also appear to benefit from spatial frequency overlap more than other object classes (including everyday objects like chairs, hand tools, and purses)21, and the specific spatial frequencies available during the familiarization and test phases of a face recognition task appear to be less important than the degree of overlap in spatial frequencies available during the two phases22. This latter result is to some extent consistent with flexible use of spatial frequencies as a function of task and availability in the stimulus23. In the context of cataract acquisition and treatment, these results suggest that the typical changes in visual acuity and contrast sensitivity associated with cataract onset and removal may have a specific impact on face learning due to the changes in available spatial frequencies pre- and post-treatment.
Specifically, faces learned before cataracts become dense will have been learned with a broad range of low to high spatial frequencies available in the stimulus, but subsequent to increased cataract density high to intermediate spatial frequencies will be unavailable. On the other hand, faces that were first learned in the presence of dense cataracts would be lacking these intermediate to high spatial frequencies, but upon treatment the broader spectrum of spatial frequencies would then be visible. While both of these circumstances involve spatial frequency overlap between learning a new face and recognizing it later due to the availability of low spatial frequencies in both cases, the changing availability of higher spatial frequencies still introduces a potentially important difference in the appearance of face images that could either positively or negatively impact recognition outcomes. Our goal was to examine the impact of these specific changes in visual quality on newly learned faces.

In the current study, we employed cataract simulation goggles to manipulate visual acuity during a face learning task. Specifically, we asked participants to learn to recognize the faces of two unfamiliar individuals during a training task, followed by a test phase in which we measured their ability to successfully recognize these individuals relative to new faces only introduced during this second phase of the experiment. Across participant groups, we manipulated whether simulated cataracts were applied during the training phase and during the test phase, making it possible for us to measure the impact of cataract on face learning both when visual acuity does change between training and test and also when it does not. Relative to prior work examining how spatial frequency overlap (or the lack thereof) affects face learning, our approach incorporates a number of design choices that distinguish our work from these previous studies. First, the use of cataract simulation goggles rather than digital filtering of the target stimuli introduces changes in participants’ overall visual acuity and contrast sensitivity rather than only limiting the spatial frequency information available in the images. While it is not clear how the visibility of high spatial frequency information elsewhere in the visual field may affect participants’ selective use of spatial frequency in the stimulus, our approach removes intermediate to high spatial frequency content entirely during the familiarization period for learning new faces. Second, prior studies examining spatial frequency overlap and its impact on face learning almost exclusively used face images that were cropped to remove features like the hairline, normalized for eye position, or in some other way closely controlled to remove naturalistic variability in face appearance. The advantage of using such images is that they offer important control of low-level and mid-level image properties, making them suitable for psychophysical studies.
A key disadvantage of using stimuli like these, however, is that they are potentially less representative of the kinds of face images observers must recognize in everyday settings. In particular, observers’ ability to successfully cope with within-person variability has been a key focus of face recognition research over the past fifteen years or so24,25, and the use of so-called ambient images that incorporate richer natural variability across different images of the same individuals has become common26,27. While the effects of natural image backgrounds on spatial frequencies supporting face recognition suggest little difference in face processing28, examinations of spatial frequency overlap on face matching have not only relied heavily on controlled face stimuli but also have typically used old/new or matching tasks that are image-based rather than including different images of the same individuals. Our use of a task that requires observers to learn how to both tell individuals apart, but also to “tell them together”26 is thus to our knowledge novel. Finally, we note that while digital filtering offers precise numerical manipulation of spatial frequency content it also profoundly impacts image contrast in a manner that has not always been appreciated in prior work. Perfetto, Wilder & Walther29 reported that a range of filtering parameters including the application of contrast normalization and specific filter shapes dramatically affected the evident importance of low vs. high spatial frequencies in visual recognition tasks. While simulation goggles do not provide direct control of spatial frequency or contrast, they have the advantage of approximating the real-world impacts of cataracts and thus have clear ecological validity.

We predicted that reduced visual acuity and contrast sensitivity during familiarization or test would tend to reduce face recognition accuracy during those phases of our task. Additionally, however, we also predicted that a change in visual acuity between training and test, regardless of whether the change was an improvement or an impairment, would have an independent negative impact on face recognition performance during the test phase due to the profound difference in the appearance of face images subject to removal or application of simulated cataracts. We also expected that any such effects of changing visual quality on newly-learned faces would not be evident on unlearned face images presented during the test phase.

Methods

Participants

We recruited a total of 48 participants (33 female and 15 male) from the NDSU Undergraduate Psychology Study Pool. This sample size was determined by carrying out a power analysis based on effect sizes we estimated from the data reported in Kornowski & Petersik22, which yielded a suggested sample size of 42 participants to achieve 80% power. All participants were between 19 and 23 years of age and reported normal or corrected-to-normal visual acuity. Participants who self-reported uncorrected visual acuity that was lower than normal were excluded from the study. All participants provided informed consent upon their arrival in the Balas Lab and received course credit for completing the experiment.
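The general shape of such a sample-size calculation can be sketched with the standard normal-approximation formula for a two-group comparison. This is only an illustration of the method, not the analysis we ran: the function name `n_per_group` and the effect size d = 0.8 below are placeholders, not the effect size we estimated from the prior data.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group N for a two-sided, two-sample comparison
    using the normal approximation: n = 2 * ((z_alpha + z_beta) / d)**2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for a two-sided test
    z_beta = z.inv_cdf(power)           # quantile giving the desired power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)
```

With the placeholder d = 0.8 this formula suggests 25 participants per group; the suggested total of 42 in our study instead reflects the specific effect size estimated from Kornowski & Petersik22.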

Stimuli

To examine how face learning was impacted by changing visual acuity, we chose to use a custom face recognition task that would require participants to learn the variable appearance of new individuals during a short training phase, followed by a test phase. In contrast to existing scales like the Cambridge Face Memory Test or the Benton Facial Recognition Test, our task includes naturalistic within-person appearance variability as opposed to explicitly varied viewpoint and lighting conditions and also provides a means of testing the generalizability of the face representations learned for each trained identity by permuting the items within our stimulus pairs. Our stimulus set consisted of 80 color images of 4 female celebrity faces (20 unique images per celebrity) displaying a range of facial expressions, poses, and other types of ambient variability. The celebrities we selected were Kate Ritchie, Tamara Oudyn, Sandrine Quetier, and Rachelle Lefevre, who were chosen because our student population was not familiar with these individuals and there were many images of each woman available online. We obtained all of our images using Google Image searches with the celebrity names as prompts and selected images that were at least 640 × 960 pixels, depicted each woman in a frontal view with both ears visible, and did not include sunglasses or other occluding elements that obscured the face.

The experimental session was divided into two phases, a training session and a testing session. To construct the stimuli for training session trials, we created a series of PowerPoint slides that each contained two images of either Kate Ritchie or Tamara Oudyn. Half of these slides contained one image of each celebrity (a total of 20 “Different ID” pairs), with the images offset to the left and right of center. The remaining slides contained two images of the same celebrity (10 slides with two images of Kate Ritchie and 10 slides with two images of Tamara Oudyn, for 20 “Same ID” pairs). The stimuli for the test session were constructed in the same way. Half of these slides depicted Kate Ritchie and Tamara Oudyn in “Same ID” and “Different ID” pairs, but with different specific pairings of images than those used for the training phase stimuli. We also introduced a new set of “Same ID” and “Different ID” image pairs depicting Sandrine Quetier and Rachelle Lefevre. We exported the slides we created for the training and test session stimuli at a size of 1920 × 1080 pixels, with the face images themselves resized to approximately 640 × 960 pixels within each slide. The training session contained a total of 40 unique slides and the test session contained a total of 80 unique slides.
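The pairing logic for one session can be sketched as follows. This is a minimal illustration: the function `build_pairs` and the filename lists are hypothetical, but the counts match the design above (20 images per identity, yielding 20 “Same ID” and 20 “Different ID” pairs, 40 slides in total).

```python
import random

def build_pairs(images_a, images_b, seed=0):
    """Construct Same-ID and Different-ID image pairs for one session.
    images_a / images_b: lists of image filenames for the two identities."""
    rng = random.Random(seed)
    pairs = []
    # Same-ID pairs: pair off shuffled images of each identity (10 per person)
    for imgs in (images_a, images_b):
        pool = imgs[:]
        rng.shuffle(pool)
        for i in range(0, len(pool), 2):
            pairs.append((pool[i], pool[i + 1], "Same ID"))
    # Different-ID pairs: one image of each identity, randomly matched
    left, right = images_a[:], images_b[:]
    rng.shuffle(left)
    rng.shuffle(right)
    pairs.extend((a, b, "Different ID") for a, b in zip(left, right))
    return pairs
```

Permuting the pairings with a different seed is what lets the test session reuse the same identities with new image combinations.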

Procedure

Upon arrival in the lab, participants were randomly assigned to one of four participant groups according to when they would be asked to wear cataract simulation goggles (Low Vision Simulators, 20/200 simulated acuity, Fig. 1): Group A (8 female, 4 male) wore cataract simulation goggles during both the training and the test phases of the task. Group B (7 female, 5 male) were asked to wear the cataract simulation goggles only during the training sessions and removed them for the testing session. Group C (9 female, 3 male) did not wear the simulation goggles during the training sessions but wore the simulation goggles during the testing session. Finally, Group D (9 female, 3 male) completed the testing and training sessions without wearing the goggles. All participants completed a short eye test using an HOTV chart both with and without wearing cataract simulation goggles to measure visual acuity per subject as a function of simulated cataract. This confirmed that all participants had 20/20 vision without the goggles and 20/100 acuity (41 participants) or worse (7 participants with 20/200 acuity) while wearing them. Using the LOCS III categorization scale, a visual acuity of 20/100 corresponds to approximately a grade of 5 for Nuclear Sclerotic and Cortical Spoking cataract or a grade of approximately 3 for Posterior cataracts. We note that our measured visual acuity of 20/100 for the majority of our participants is better than the 20/200 acuity indicated by the documentation accompanying the goggles. For our purposes, this discrepancy is not especially important so long as we are aware that the typical acuity for our observers is somewhat better than we anticipated. Also, the number of participants with 20/200 scores while wearing the goggles is relatively small (N = 7) and these individuals were distributed relatively evenly across experimental groups.

Fig. 1.

Experimental design and testing set-up. Participants in our task wore cataract simulation goggles (right) during a face learning task, with impaired vision imposed during the training and test phases according to random group assignment. The individual depicted in this figure was not a participant in the study, but gave informed consent to have her image published here.

Participants were told that during the main experimental session they would be asked to learn to distinguish between two individuals they were unfamiliar with. First, they would be asked to complete a training phase to help them learn to distinguish different images of these two people. Second, they would be asked to complete a test phase to measure their ability to distinguish these two people from one another relative to their discrimination abilities for a second set of novel faces. Both the training and the test phase of the experimental session were implemented using custom routines written for the MATLAB Psychophysics Toolbox v3.0 [30–32].

In both the training and the testing session, we used a same/different task to measure participants’ ability to distinguish between faces of our novel celebrities. During the training sessions, participants were presented with blocks of 40 trials: This included 20 “Same” trials and 20 “Different” trials, which were presented in a new, pseudo-randomized order per participant. This randomization procedure ensured that each participant saw a unique sequence of the training stimuli by shuffling the order of slides at the beginning of each session. This procedure involved no other constraints on the order of stimuli. Participants were given unlimited time to respond to each image pair with a “Same” or “Different” keypress assigned to the left and right Shift keys respectively. We provided feedback for incorrect answers via a clearly audible beep played through over-ear headphones. Participants completed training blocks until their performance exceeded 85% correct during a single block, with breaks of 1–2 min administered after every 4 blocks.
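The train-to-criterion block structure can be sketched as below. This is a minimal sketch, assuming the shuffle is reapplied at the start of each block; `respond` is a hypothetical stand-in for the participant's keypress on each trial, and the trial tuples are illustrative.

```python
import random

def run_training(respond, trials, criterion=0.85, max_blocks=50, seed=0):
    """Present shuffled blocks of trials until accuracy within a single
    block exceeds the criterion; return the number of blocks required."""
    rng = random.Random(seed)
    for block in range(1, max_blocks + 1):
        order = trials[:]
        rng.shuffle(order)  # new pseudo-random order for this block
        n_correct = sum(respond(pair) == label for *pair, label in order)
        if n_correct / len(order) > criterion:
            return block
    return None  # criterion never reached within max_blocks
```

The blocks-to-criterion count returned here is the dependent measure analyzed in the Results section below as an index of training difficulty.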

After the training phase, participants were given a short break (3–5 min), followed by instructions for how to complete the test phase of the experiment. During the testing sessions, participants were presented with a single block of 80 trials. This included 20 “Same” trials and 20 “Different” trials depicting the individuals from the training phase (40 Familiar trials), but with new image pairings such that successful performance required face discrimination rather than memorization of individual trials. The remaining 40 Unfamiliar trials included 20 “Same” trials and 20 “Different” trials depicting the two individuals who were not included in the training blocks. During the test phase we did not provide any feedback regarding accuracy. All testing procedures were otherwise identical to those described above for the training phase.

The procedures described above were reviewed and approved by the NDSU IRB. All recruitment, consent, and testing procedures were in accordance with the principles described in the Declaration of Helsinki. Informed consent was obtained from all participants.

Results

Statistical analyses

For each participant, we calculated the proportion of correct responses during each block of training trials and for the learned faces and the novel faces presented during the test block. We also counted the number of training blocks presented to each participant before reaching our 85% performance criterion as a means of estimating training task difficulty as a function of simulated cataract. We have not included any analyses of reaction times as we provided participants with as much time as they wished to respond and participants were highly variable in terms of how much time they wished to take to make decisions about individual trials. One participant was excluded from the analysis due to a technical error that led to a failure to save the data from the test phase of the experiment, leading us to recruit a replacement participant. We carried out all of our analyses in JASP33, applying repeated-measures ANOVA to the accuracy data described above and non-parametric tests to analyze the number of blocks required to reach our performance criterion. We provide more details of each analysis in the sections below.

Training phase: number of training blocks

First, to examine the effect of simulated cataract on performance in the training task we carried out a Mann-Whitney U test comparing the median number of blocks completed by participants with and without cataract simulation goggles on during the training phase. This analysis revealed a significant difference between these conditions (U = 559.6, p < 0.001) with a median value of 6 training blocks required before participants with goggles on reached our performance criterion, and a median value of 2 training blocks required for participants without the goggles (Table 1).

Table 1.

Descriptive statistics for the number of training blocks required before participants with and without cataract simulation goggles on reached our performance criterion for face recognition.

          Goggles On   Goggles Off
Median    6.0          2.0
IQR       5.25         1
Minimum   2            1
Maximum   14           4
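The shape of this comparison can be sketched in a few lines with a rank-based U statistic and a normal approximation for the p-value. The block counts below are illustrative values consistent with the medians in Table 1, not our actual data, and `mann_whitney` is a hypothetical helper (no tie correction is applied).

```python
from math import sqrt
from statistics import NormalDist

def mann_whitney(x, y):
    """U statistic for sample x (pairs with x_i > y_j, ties count 1/2)
    and a two-sided p-value from the normal approximation."""
    u = sum((xi > yj) + 0.5 * (xi == yj) for xi in x for yj in y)
    n1, n2 = len(x), len(y)
    mu = n1 * n2 / 2
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    p = 2 * (1 - NormalDist().cdf(abs((u - mu) / sigma)))
    return u, p

# Hypothetical blocks-to-criterion counts (NOT the real data):
goggles_on = [6, 7, 5, 6, 8, 6, 9, 5, 6, 7, 6, 8]
goggles_off = [2, 1, 2, 3, 2, 2, 1, 2, 3, 2, 2, 1]
u, p = mann_whitney(goggles_on, goggles_off)
```

With fully separated groups like these, U equals the product of the two sample sizes and p falls well below 0.001; our reported statistic came from JASP rather than this sketch.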

Test phase: learned faces compared to novel faces

Next, we examined how viewing conditions during the training and test phases of the experiment affected recognition accuracy with trained and untrained identities during the test phase. We analyzed these data using a 2 × 2 × 2 mixed-design ANOVA with face type (trained or untrained identities) as a within-subjects factor and viewing conditions during the training phase (cataract goggles present or absent) and the test phase (cataract goggles present or absent) as between-subjects factors. In Fig. 2, we plot the mean accuracy across participants as a function of wearing cataract simulation goggles during the training and test phases for both the learned and novel faces.

Fig. 2.

Test phase accuracy across participants in all cataract simulation conditions. The average proportion correct across participants as a function of wearing cataract simulation goggles during the test phase of the experiment for trained (left) and untrained identities (right). Error bars represent +/–1 standard error of the mean.

This analysis revealed significant main effects of face type, viewing conditions during training, and viewing conditions during test. These main effects were also qualified by significant interactions between face type and viewing conditions during test and also between viewing conditions during training and test. The full ANOVA tables for this analysis are displayed below in Tables 2 and 3, with the within-subjects effects reported in Table 2 and the between-subjects effects reported in Table 3. Note that all variables were included in this analysis, but we have separated the output into two tables for ease of reading.

Table 2.

The complete ANOVA table for within-subjects effects included in our analysis of face type and viewing conditions during training and test phases of the experiment.

                                    Sum Sq.     df   Mean Sq.    F        p         ω²
faceType                            0.258       1    0.258       35.248   < 0.001   0.26
faceType*TrainGoggles               3.76×10⁻⁴   1    3.76×10⁻⁴   0.051    0.822     0
faceType*TestGoggles                0.154       1    0.154       20.957   < 0.001   0.171
faceType*TrainGoggles*TestGoggles   8.76×10⁻⁴   1    8.76×10⁻⁴   0.12     0.73      0
Residuals                           0.32        44   0.007

 Bolded p-values reached our significance threshold of 0.05.

Table 3.

The complete ANOVA table for between-subjects effects included in our analysis of face type and viewing conditions during training and test phases of the experiment. 

                           Sum Sq.   df   Mean Sq.   F        p         ω²
TrainGoggles               0.095     1    0.095      11.312   0.002     0.103
TestGoggles                0.915     1    0.915      108.90   < 0.001   0.54
TrainGoggles*TestGoggles   0.05      1    0.05       6.00     0.018     0.053
Residuals                  0.37      44   0.008

Bolded p-values reached our significance threshold of 0.05.

The main effect of face type was the result of overall better accuracy for trained face identities than for untrained face identities (Mean difference = 0.104, s.e. = 0.017, Cohen’s d = 1.17, pholm < 0.001, within-subjects comparison). The main effects of viewing conditions during the training and test phases of the experiment were the result of poorer performance when wearing cataract simulation goggles than when not wearing them (Training phase: Mean difference = −0.063, s.e. = 0.019, Cohen’s d = −0.71, pholm = 0.002; Testing phase: Mean difference = −0.20, s.e. = 0.019, Cohen’s d = −2.20, pholm < 0.001).

To investigate the nature of the interaction between face type and viewing conditions during the test phase, we carried out post-hoc comparisons of trained vs. untrained face identity performance when participants either did or did not experience simulated cataract during the test phase. This revealed that when participants were wearing the goggles, performance was significantly better for trained vs. untrained face identities (Mean difference = 0.184, s.e. = 0.025, Cohen’s d = 2.07, pholm < 0.001), but that this difference was not significant when participants were not wearing the goggles (Mean difference = 0.024, s.e. = 0.025, Cohen’s d = 0.27, pholm = 0.342). To investigate the interaction between viewing conditions during training and test phases of the experiment, we carried out post-hoc tests between all combinations of these factors. Briefly, this analysis revealed significant differences for all of these comparisons (pholm < 0.001 in all cases, Mean differences between 0.11 and 0.26) with the sole exception of the difference between performance with and without simulation goggles during the training phase when goggles were worn during the test phase, which did not reach significance (Mean difference = 0.017, s.e. = 0.024, Cohen’s d = 0.19, pholm = 0.52).
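The Holm correction behind these pholm values works by sorting the raw p-values and applying a step-down multiplier. A minimal sketch follows; `holm_adjust` is a hypothetical helper and the example p-values in the note below are arbitrary, not those from our analysis.

```python
def holm_adjust(pvals):
    """Holm-Bonferroni adjusted p-values: multiply the k-th smallest raw
    p by (m - k), enforce monotonicity with a running max, cap at 1."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        running_max = max(running_max, (m - rank) * pvals[i])
        adjusted[i] = min(1.0, running_max)
    return adjusted
```

For example, `holm_adjust([0.01, 0.04, 0.03])` returns `[0.03, 0.06, 0.06]`: the middling raw p-value inherits a larger adjusted value from an earlier comparison in the step-down sequence.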

One critical post-hoc test we also considered that is important to highlight is the comparison between trained face identity performance in the two participant groups whose cataract status changed between the training and test phase. This comparison is particularly interesting to us because it reveals the extent to which improvement in visual quality confers a specific benefit on the recognition of newly learned identities, or if a change in visual quality regardless of the sign leads to similar performance. We find in this analysis that these two conditions do not differ significantly (pholm = 0.867), indicating that even though facial appearance improves for participants who take the simulation goggles off during the test phase, this does not lead to a measurable advantage over participants who put them on specifically for this phase.

Discussion

A number of the effects we observed are straightforward to interpret and consistent with our predictions. For example, the result that wearing simulation goggles during the training phase reduced accuracy during that part of the task is clearly the result of decreased visual acuity and contrast sensitivity negatively impacting unfamiliar face discrimination. The main effect of wearing simulation goggles during the test phase on test performance also likely reflects the same general reduction in face recognition performance when intermediate to high spatial frequencies are not available. Both of these simple results are in line with our hypotheses and consistent with prior reports that high spatial frequencies make an independent contribution to face recognition4; they may also be due to disruption of the 8–16 cycles/face frequency band that has been shown to be critically important for accurate face recognition34. Considered in isolation, these outcomes are consistent with a rather uninteresting account of the data: Perhaps the key determinant of performance in our task is whether intermediate to high spatial frequencies are present in the stimuli.

More interesting than these basic results, however, are the effects we reported regarding simulated low vision during training and test on recognition outcomes during the test phase. Our analysis of the test phase results comparing learned to unlearned faces has several features suggesting that low vision during training has effects on test performance beyond the straightforward impairment of recognition skills during the training phase. First, the main effect of wearing simulation goggles during the training phase on both learned and unlearned face performance is intriguing and suggests that the detrimental effects of learning new faces with low vision extend in a task-specific (not image- or identity-specific) way to unlearned faces presented during the test phase, an outcome that differs from our prediction that any effects of visual acuity differences between training and test would be confined to the learned stimuli. This result is perhaps most interesting to consider by examining the data from the groups that did not wear simulation goggles during the test phase: Despite the fact that all of these observers saw the untrained faces for the first time during an unencumbered test phase, performance was still lower for these faces in the group that wore simulation goggles during the training phase. This may be consistent with rapid adaptation or plasticity in response to the simulated low vision imposed during the training phase35 or a more explicit difference in how participants approached the task under poor viewing conditions. This outcome may be the result of an explicit strategy or differences in the internal representations being formed of each face’s appearance. In either case, this effect suggests to us that rapid improvements in visual quality during the test phase (simulating cataract treatment) do not completely mitigate the negative effects of learning to recognize faces under low vision conditions.
Further, these negative effects extend to stimuli not encountered during the initial period of low vision. This main effect was qualified by interactions between training conditions and test conditions, and the main effect of testing conditions was additionally qualified by an interaction with face type. In the former case, the basis of the interaction as revealed by post-hoc tests is that training conditions effectively do not matter when test viewing conditions are poor, but that they do matter when viewing conditions are good (better acuity during training improves performance when better acuity is available at test). In the latter case, the interaction is the result of a significant effect of face type that is only present when goggles were worn during the test phase: under poor viewing conditions, prior familiarization helps to improve performance for the trained faces, but this difference was not observed when goggles were absent during test. This differs from our initial predictions in that we anticipated more robustness to declining visual quality than is evident in our participants and potentially a smaller improvement in response to improving quality.

Our specific analysis of trained-face performance as a function of changing visual quality (comparing trained face recognition between the two participant groups whose cataract status changed between phases) is also useful to consider, as it provides a direct comparison of how improving vs. declining acuity affects accuracy for the faces that were seen in both formats across the training and test sessions. The absence of superior test performance in the group that was introduced to higher spatial frequency content at test indicates that these participants were unable to use this new information to achieve any advantage over participants who, at test, were unable to use features that had been available during training. Though we note that this is a null result and should thus be interpreted with some skepticism (it is possible that the effect is not truly null but simply smaller than we were able to detect), we take this as an indication that image-based recognition (matching test images to representations developed during training) matters more in this case than the direction of changing acuity between phases. This is consistent to some extent with Collin et al.'s36 results using a face matching task in which participants were able to adjust the cut-off frequencies of high-pass and low-pass sample stimuli to facilitate matching to comparison faces that were either filtered in the same way as the sample or unfiltered. Using both an adjustment paradigm and the method of constant stimuli, participants in their task exhibited a clear bias for comparing unfiltered faces to low-pass sample images with spatial frequency content within the critical 8–16 cycles/face band37.
While Collin et al.'s study did not incorporate a learning component, the observation that matching low-pass faces to unfiltered targets leads participants to seek out intermediate spatial frequencies is commensurate with a limited ability to use that information at test if it was not available during familiarization. The lack of a measurable decrement in performance when goggles were only imposed during the training phase is also consistent with the findings we discussed earlier regarding spatial frequency overlap: given that low spatial frequencies were always available to participants in all viewing conditions, the costs of mismatched appearance between training and test may have been minimized by the availability of some information that was preserved across both phases22. Again, however, we are reasoning from a null outcome and so invite the reader to consider this interpretation cautiously.
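To make the spatial frequency manipulations discussed above concrete, a low-pass filter specified in cycles/face (the unit used in this literature, where the face spans the image) can be sketched in the Fourier domain. This is an illustration of the filtering paradigm in the cited matching studies, not the goggle-based simulation used in the present experiment; the 8 cycles/face cutoff and the 128-pixel image size are arbitrary assumptions:

```python
import numpy as np

def lowpass_face(image, cutoff_cycles_per_face):
    """Keep only spatial frequencies at or below `cutoff_cycles_per_face`.

    Frequencies are measured in cycles per image width, which equals
    cycles/face when the face fills the image.
    """
    h, w = image.shape
    # Frequency coordinates in cycles per image along each axis
    fy = np.fft.fftfreq(h) * h
    fx = np.fft.fftfreq(w) * w
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    mask = radius <= cutoff_cycles_per_face
    # Zero out all frequency components above the cutoff
    spectrum = np.fft.fft2(image)
    return np.fft.ifft2(spectrum * mask).real

# Example: a noise "image" loses fine detail but keeps its mean luminance,
# because the DC component survives the filter.
rng = np.random.default_rng(0)
face = rng.random((128, 128))
blurred = lowpass_face(face, 8)  # retain only coarse structure (<= 8 cycles/face)
```

A band-pass version of the same idea (e.g. isolating the 8–16 cycles/face range discussed above) only requires changing the mask to `(radius >= lo) & (radius <= hi)`.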

Considered together, these results present an intriguing picture of how changes in viewing conditions associated with cataract onset and treatment may affect how observers learn to recognize new faces. In particular, our data indicate that even a brief period of low vision during which observers learn new face identities has consequences for performance in subsequent face recognition tasks, even when viewing conditions improve. This is similar to recent results reported by Gilad-Gutnick et al.38, who found that cataract treatment outcomes depend critically on pre-operative acuity: sufficiently low acuity in the pre-operative period for adults with congenital cataracts limits post-treatment performance, suggesting that a critically important spatial frequency range must be available prior to treatment for larger post-treatment improvements in recognition to be realizable. The broad impact of low vision during training on both learned and unlearned faces in our task may indicate either insufficient low spatial frequency information to support adequate holistic/configural face processing10 or compromised use of the facial feature information carried by higher spatial frequencies9. Understanding how viewing conditions during learning affect holistic processing vs. more general object recognition mechanisms would be an intriguing next step for future work using this paradigm. Another potentially meaningful difference between our study and prior work examining spatial frequency transfer for face recognition is that we used full-color images. While face recognition generally does not appear to depend critically on the availability of color information39 (but see40 for data suggesting color-blindness affects face recognition), Yip & Sinha41 reported that color was recruited more often for face identification when face images were very blurry.
This may mean that full-color and grayscale face images could yield meaningfully different outcomes in this task. Another potential limitation of our study is the use of the same images during the training and test phases for the familiarized identities (though new combinations of these were presented during the test phase). Measuring performance with previously seen and unseen images of the same identities, relative to novel, untrained faces, would further disentangle image-specific recognition processes from more general face recognition mechanisms. Still, our data highlight the sensitivity of representations formed for new identities to the visual information available during learning. Beyond a more thorough understanding of how interventions like cataract surgery ultimately affect high-level recognition, our results also have implications for how face recognition operates when observers must cope with sources of appearance variability beyond those encountered when learning to recognize a new person.
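The full-color vs. grayscale comparison suggested above would require only a luminance conversion of the existing stimuli. A minimal sketch using the standard ITU-R BT.601 luma weights follows; the specific weights and the image dimensions are illustrative assumptions, not details of the study's stimuli:

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an H x W x 3 RGB image (floats in [0, 1]) to a single
    luminance channel using ITU-R BT.601 weights."""
    weights = np.array([0.299, 0.587, 0.114])  # sum to 1, so range is preserved
    return rgb @ weights

# Example: a color image collapses to one channel with the same spatial size
rng = np.random.default_rng(1)
color_face = rng.random((64, 64, 3))
gray_face = to_grayscale(color_face)
```

Because the weights sum to one, a uniform white image stays white after conversion, and pixel values remain in [0, 1].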

Acknowledgements

Special thanks to Emily Westrick for logistical support during testing. This research was partly supported by NSF Grant BCS-2338600 awarded to BB.

Author contributions

AA and BB wrote the manuscript, contributed to reviewing and editing the text, and reviewed the final manuscript. BB prepared the figures and tables describing the data analysis. AA and BB both contributed to study conceptualization, data analysis, and participant testing. AA generated the experimental stimuli and BB wrote the code for experiment presentation.

Data availability

The stimuli, experimental code, original data files, and aggregate data files used to conduct the analyses described in the manuscript are available via the Open Science Framework at https://osf.io/ufw3n/.

Declarations

Competing interests

The authors declare no competing interests.

Ethics declaration

The research described in this manuscript was reviewed and approved by the NDSU IRB. All recruitment, consent, and testing procedures were in accordance with the principles described in the Declaration of Helsinki. Informed consent was obtained from all participants.

Footnotes

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1. National Eye Institute. Cataracts. https://www.nei.nih.gov/learn-about-eye-health/eye-conditions-and-diseases/cataracts (accessed February 14, 2025).
  • 2. Elliott, D. B., Patla, A. E., Furniss, M. & Adkin, A. Improvements in clinical and functional vision and quality of life after second eye cataract surgery. Optom. Vis. Sci. 77 (1), 13–24. 10.1097/00006324-200001000-00009 (2000). [DOI] [PubMed] [Google Scholar]
  • 3. Sinha, P., Balas, B., Ostrovsky, Y. & Russell, R. Face recognition by humans: 19 results all computer vision researchers should know about. Proc. IEEE, 1948–1962 (2006).
  • 4. Fiorentini, A., Maffei, L. & Sandini, G. The role of high spatial frequencies in face perception. Perception 12 (2), 195–201. 10.1068/p120195 (1983). [DOI] [PubMed] [Google Scholar]
  • 5. Costen, N. P., Parker, D. M. & Craw, I. Spatial content and spatial quantisation effects in face recognition. Perception 23 (2), 129–146. 10.1068/p230129 (1994). [DOI] [PubMed] [Google Scholar]
  • 6. Costen, N. P., Parker, D. M. & Craw, I. Effects of high-pass and low-pass spatial filtering on face identification. Percept. Psychophys. 58 (4), 602–612. 10.3758/bf03213093 (1996). [DOI] [PubMed] [Google Scholar]
  • 7. Näsänen, R. Spatial frequency bandwidth used in the recognition of facial images. Vision. Res. 39 (23), 3824–3833. 10.1016/s0042-6989(99)00096-6 (1999). [DOI] [PubMed] [Google Scholar]
  • 8. Gao, X. & Maurer, D. A comparison of spatial frequency tuning for the recognition of facial identity and facial expressions in adults and children. Vision. Res. 51 (5), 508–519. 10.1016/j.visres.2011.01.011 (2011). [DOI] [PubMed] [Google Scholar]
  • 9. Goffaux, V., Hault, B., Michel, C., Vuong, Q. C. & Rossion, B. The respective role of low and high spatial frequencies in supporting configural and featural processing of faces. Perception 34, 77–86. 10.1068/p5370 (2005). [DOI] [PubMed] [Google Scholar]
  • 10. Goffaux, V. & Rossion, B. Faces are spatial: holistic face perception is supported by low spatial frequencies. J. Exp. Psychol. 32, 1023–1039. 10.1037/0096-1523.32.4.1023 (2006). [DOI] [PubMed] [Google Scholar]
  • 11.Richler, J. J., Cheung, O. S. & Gauthier, I. Holistic processing predicts face recognition. Psychol. Sci.22 (4), 464–471. 10.1177/0956797611401753 (2011). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Piepers, D. W. & Robbins, R. A. A review and clarification of the terms holistic, configural, and relational in the face perception literature. Front. Psychol.3, 559. 10.3389/fpsyg.2012.00559 (2012). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Elliott, D. B., Bullimore, M. A., Patla, A. E. & Whitaker, D. Effect of a cataract simulation on clinical and real world vision. Br. J. Ophthalmol.80 (9), 799–804. 10.1136/bjo.80.9.799 (1996). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Ni, W. et al. Impact of cataract surgery on vision-related life performances: the usefulness of Real-Life vision test for cataract surgery outcomes evaluation. Eye (London England). 29 (12), 1545–1554. 10.1038/eye.2015.147 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Mondloch, C. J., Robbins, R. & Maurer, D. Discrimination of facial features by adults, 10-year-olds, and cataract-reversal patients. Perception39 (2), 184–194. 10.1068/p6153 (2010). [DOI] [PubMed] [Google Scholar]
  • 16.de Heering, A. & Maurer, D. Face memory deficits in patients deprived of early visual input by bilateral congenital cataracts. Dev. Psychobiol.56 (1), 96–108. 10.1002/dev.21094 (2014). [DOI] [PubMed] [Google Scholar]
  • 17. Kalia, A. et al. Development of pattern vision following early and extended blindness. Proc. Natl. Acad. Sci. 111 (5), 2035–2039. 10.1073/pnas.1311041111 (2014). [DOI] [PMC free article] [PubMed]
  • 18.Ostrovsky, Y., Andalman, A. & Sinha, P. Vision following extended congenital blindness. Psychol. Sci.17 (12), 1009–1014. 10.1111/j.1467-9280.2006.01827.x (2006). [DOI] [PubMed] [Google Scholar]
  • 19.Zohary, E. et al. Gaze following requires early visual experience. Proc. Natl. Acad. Sci. U.S.A.119 (20), e2117184119. 10.1073/pnas.2117184119 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20. Liu, C. H., Collin, C. A., Rainville, S. J. M. & Chaudhuri, A. The effects of spatial frequency overlap on face recognition. J. Exp. Psychol. Hum. Percept. Perform. 26 (3), 956–979. 10.1037/0096-1523.26.3.956 (2000). [DOI] [PubMed] [Google Scholar]
  • 21. Collin, C. A., Liu, C. H., Troje, N. F., McMullen, P. A. & Chaudhuri, A. Face recognition is affected by similarity in spatial frequency range to a greater degree than within-category object recognition. J. Exp. Psychol. Hum. Percept. Perform. 30 (5), 975–987. 10.1037/0096-1523.30.5.975 (2004). [DOI] [PubMed] [Google Scholar]
  • 22. Kornowski, J. A. & Petersik, J. T. Effects on face recognition of spatial-frequency information contained in inspection and test stimuli. J. Gen. Psychol. 130 (3), 229–244. 10.1080/00221300309601156 (2003). [DOI] [PubMed] [Google Scholar]
  • 23. Morrison, D. J. & Schyns, P. G. Usage of spatial scales for the categorization of faces, objects, and scenes. Psychon. Bull. Rev. 8, 434–469. 10.3758/BF03196180 (2001). [DOI] [PubMed] [Google Scholar]
  • 24.Jenkins, R., White, D., Van Montfort, X. & Mike Burton, A. Variability in photos of the same face. Cognition121 (3), 313–323. 10.1016/j.cognition.2011.08.001 (2011). [DOI] [PubMed] [Google Scholar]
  • 25.Burton, M. A. Why has research in face recognition progressed so slowly? The importance of variability. Q. J. Experimental Psychol.66 (8), 1467–1485 (2013). [DOI] [PubMed] [Google Scholar]
  • 26.Andrews, S., Jenkins, R., Cursiter, H. & Burton, A. M. Telling faces together: learning new faces through exposure to multiple instances. Q. J. Experimental Psychol.68 (10), 2041–2050. 10.1080/17470218.2014.1003949 (2015). [DOI] [PubMed] [Google Scholar]
  • 27.Bindemann, M. & Hole, G. J. Understanding face identification through within-person variability in appearance: introduction to a virtual special issue. Q. J. Experimental Psychol.73 (12), NP1–NP8. 10.1177/1747021820959068 (2020). (Original work published 2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Collin, C. A., Wang, L. & O’Byrne, B. Effects of image background on spatial-frequency thresholds for face recognition. Perception35 (11), 1459–1472. 10.1068/p55 (2006). [DOI] [PubMed] [Google Scholar]
  • 29. Perfetto, S., Wilder, J. & Walther, D. B. Effects of spatial frequency filtering choices on the perception of filtered images. Vision (Basel). 4 (2), 29. 10.3390/vision4020029 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Brainard, D. H. The psychophysics toolbox. Spat. Vis.10, 433–436 (1997). [PubMed] [Google Scholar]
  • 31.Pelli, D. G. The videotoolbox software for visual psychophysics: transforming numbers into movies. Spat. Vis.10, 437–442 (1997). [PubMed] [Google Scholar]
  • 32.Kleiner, M., Brainard, D. & Pelli, D. What’s new in Psychtoolbox-3? Perception . 36 ECVP Abstract Supplement. (2007).
  • 33.JASP Team. JASP (Version 0.19.3)[Computer software]. (2024).
  • 34. Ruiz-Soler, M. & Beltran, F. S. Face perception: an integrative review of the role of spatial frequencies. Psychol. Res. 70 (4), 273–292. 10.1007/s00426-005-0215-z (2006). [DOI] [PubMed] [Google Scholar]
  • 35.Jamal, Y. A. & Dilks, D. D. Rapid topographic reorganization in adult human primary visual cortex (V1) during noninvasive and reversible deprivation. Proc. Natl. Acad. Sci.117 (20), 11059–11067. 10.1073/pnas.1921860117 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Collin, C. A., Therrien, M., Martin, C. & Rainville, S. Spatial frequency thresholds for face recognition when comparison faces are filtered and unfiltered. Percept. Psych.68(6), 879–889. 10.3758/bf03193351 (2006). [DOI] [PubMed] [Google Scholar]
  • 37. Parker, D. M. & Costen, N. P. One extreme or the other, or perhaps the golden mean? Issues of spatial resolution in face processing. In Validation in Psychology: Research Perspectives (eds Ellis, H. & Macrae, N.) 151–162 (Transaction, 2001). [Google Scholar]
  • 38. Gilad-Gutnick, S. et al. Face-specific identification impairments following sight-providing treatment may be alleviated by an initial period of low visual acuity. Sci. Rep. 14 (1). 10.1038/s41598-024-67949-z (2024). [DOI] [PMC free article] [PubMed]
  • 39. Bruce, V. & Young, A. In the Eye of the Beholder: The Science of Face Perception (Oxford University Press, 1998).
  • 40.Brosseau, P., Nestor, A. & Behrmann, M. Colour blindness adversely impacts face recognition. Visual Cognition. 28 (4), 279–284. 10.1080/13506285.2020.1788682 (2020). [Google Scholar]
  • 41.Yip, A. W. & Sinha, P. Contribution of color to face recognition. Perception31 (8), 995–1003. 10.1068/p3376 (2002). [DOI] [PubMed] [Google Scholar]



Articles from Scientific Reports are provided here courtesy of Nature Publishing Group
