Abstract
Purpose
Difficulty identifying faces is a common complaint of people with central vision loss. Dakin and Watt (2009) reported that the horizontal components of face images are most informative for face identification in normal vision. In this study, we examined whether people with central vision loss similarly rely primarily on the horizontal components of face images for face identification.
Methods
Seven observers with central vision loss (mean age = 69 ± 9 years [SD]) and five age-matched observers with normal vision (mean age = 65 ± 6 years) participated in the study. We measured observers’ accuracy for reporting the identity of face images spatially filtered using an orientation filter with center orientation ranging from 0° (horizontal) to 150° in steps of 30°, with a bandwidth of 23°. Unfiltered face images were also tested.
Results
For all observers, accuracy for identifying filtered face images was highest around the horizontal orientation, dropped systematically as the filter orientation deviated from horizontal, and was lowest at the vertical orientation. Compared with control observers, observers with central vision loss showed (1) a larger difference in accuracy between identifying filtered (at peak performance) and unfiltered face images; (2) reduced accuracy at peak performance; and (3) a smaller difference in performance for identifying filtered images between the horizontal and the vertical filter orientations.
Conclusions
Spatial information around the horizontal orientation in face images is the most important for face identification, for people with normal vision and central vision loss alike. While horizontal information alone can support reasonably good face identification performance in people with normal vision, people with central vision loss seem to also rely on information along other orientations.
Keywords: low vision, spatial vision, psychophysics, face identification, central vision loss
People with central vision loss must rely on their peripheral vision for daily activities, including identifying faces, an important component of social interaction. In low vision clinics, difficulty identifying faces is a common complaint of patients with central vision loss.1–3 Understanding which visual information is most crucial for face identification is therefore vital to the visual rehabilitation of these patients.
Previous studies have shown that face identification performance is poorer in normal peripheral than in foveal vision. By adding external noise to face images, Makela et al. found that the poorer performance in the normal periphery could be attributed to observers’ less efficient utilization of peripheral information.4 However, peripheral performance for face identification could be equated with foveal performance by increasing both the size and contrast of the stimulus.5 These results indicate that people who have to rely on their periphery for functional vision, such as people with central vision loss, might benefit from contrast enhancement of images for the task of face identification. Indeed, Peli et al. reported that enhancing the contrast of face images based on the contrast sensitivity loss of individual observers significantly improved face identification performance in approximately half of their observers with central vision loss.6
Considering the intuition that not all information within a face image is equally informative, an image enhancement algorithm that focuses on the most crucial information within a face image for its identification could potentially be just as effective as, but more efficient than, an algorithm that enhances all the information in an image. What, then, constitutes the most crucial information for face identification?
In terms of spatial frequency content, the critical range of frequencies for face identification has been shown to depend on the image size7 and the specific task8–14, e.g., judging the identity or the expression of a face. In general, information between 4 and 16 cycles/face appears to be the most important for face identification.6,7,15–17 Recently, Dakin and Watt examined another dimension of spatial information for face identification, viz., orientation.18 They measured identification performance using face images that were spatially filtered to restrict information to bands of orientation. They found that observers’ performance was best when face images contained only horizontal information and declined gradually as the orientation of the retained information deviated from horizontal, with the worst performance at the vertical orientation. According to Dakin and Watt, the relative placements of the horizontal structures of a face (e.g. the pair of eyes) convey the identity information of the face.
In this study, we examined whether or not people with central vision loss similarly rely primarily on the horizontal components of face images for identifying faces. Our prediction based on the supposition of Dakin and Watt was that people with central vision loss should also rely primarily on the horizontal information in face images for face identification. If so, then selective image enhancement along the horizontal orientation might be an efficient and effective method to improve face identification performance.
METHODS
Observers
Seven observers with central vision loss and five age-matched observers with normal or corrected-to-normal vision participated in the study. Table 1 shows the age, gender, diagnosis, time since the onset of the disease, best-corrected distance visual acuity, and the location of the preferred retinal locus for fixation (fPRL) of the observers. All observers with central vision loss had a central scotoma as assessed using an Amsler grid or a Rodenstock scanning laser ophthalmoscope (SLO). The location of the fPRL was determined with the SLO, using a 1° or 2° cross (depending on the observer’s acuity) as the fixation target. The research followed the tenets of the Declaration of Helsinki and was approved by the Committee for Protection of Human Subjects at the University of California, Berkeley. Observers gave written informed consent prior to the commencement of data collection.
Table 1.
Summary of observers’ characteristics.
| Observer | Age (years) | Gender | Diagnosis (OD) | Diagnosis (OS) | Time since diagnosis (years) (OD) | Time since diagnosis (years) (OS) | Best-corrected distance VA (logMAR) (OD) | Best-corrected distance VA (logMAR) (OS) | fPRL (OD) | fPRL (OS) |
|---|---|---|---|---|---|---|---|---|---|---|
| CVL1 | 66 | M | /* | AMD | /* | 16 | /* | 1.10 | /* | 4.62° T, 9.41° A |
| CVL2 | 82 | F | AMD | AMD | 10 | 10 | 0.50 | 0.60 | 1.01° T, 4.44° B | 0.76° N, 1.47° B |
| CVL3 | 69 | M | /* | Lamellar hole | /* | 2 | /* | 0.40 | /* | NA |
| CVL4 | 59 | F | Stargardt | Stargardt | 15 | 15 | 0.78 | 0.80 | 1.52° T, 1.11° B | 2.73° T, 0.40° A |
| CVL5 | 57 | M | Stargardt | Stargardt | 40 | 40 | 1.10 | 1.10 | 11.2° T, 4.55° A | 19.3° T, 2.63° A |
| CVL6 | 73 | F | AMD | AMD | 7 | 7 | 0.98 | 0.48 | 3.23° N, 2.17° B | 2.88° T, 0.51° B |
| CVL7 | 75 | F | AMD | /* | 13 | /* | 1.10 | /* | NA | /* |
| NV1 | 58 | M | / | / | / | / | 0.14 | 0.16 | / | / |
| NV2 | 61 | M | / | / | / | / | −0.12 | −0.04 | / | / |
| NV3 | 73 | F | / | / | / | / | −0.02 | −0.02 | / | / |
| NV4 | 62 | F | / | / | / | / | −0.04 | −0.04 | / | / |
| NV5 | 69 | F | / | / | / | / | 0.00 | 0.02 | / | / |
CVL = central vision loss; NV = normal vision; AMD = age-related macular degeneration; OD = right eye; OS = left eye; VA = visual acuity; fPRL = preferred retinal locus for fixation. T = temporal; N = nasal; A = above the fovea; B = below the fovea; NA = not available because of time constraints.
An entry of /* indicates that the eye was covered and not tested in the main experiment.
Stimuli and Procedure
We used custom software written in MATLAB (version 7.7.0, MathWorks, Natick, MA) with the Psychophysics Toolbox19,20 to control the experiments on a Macintosh computer, and presented stimuli on a gamma-corrected SONY color graphic display (model: Multiscan E540; refresh rate: 75 Hz; resolution: 1,280 × 1,024; dimensions: 39.3 cm × 29.4 cm). Stimuli consisted of gray-scale face images of 294 well-known persons collected from the Internet that were judged as easily recognizable by observers in pilot testing (results not included in this paper; these observers did not participate in the main experiment). These well-known persons included, for example, politicians, athletes, and actors who became famous at different times over the past several decades. The face in each image was shown in a frontal or near-frontal view (at least a three-quarter profile, with both eyes visible). For each well-known person, two different images were collected: one was placed in Set A, to be used in the preliminary testing, and the other in Set B, to be used in the main experiment.
For each face image, we located two reference points: the center of the mouth and the midpoint between the two eyes. Each face image was rotated so that the line connecting the two reference points was exactly vertical, and scaled so that the two reference points were separated by 128 pixels. The final image was 330 × 440 pixels and was centered on the midpoint between the eyes.
During the preliminary testing, observers first previewed every face image in Set A and gave a familiarity rating (“not familiar”, “somewhat familiar” or “very familiar”) with no time constraint. For the preliminary testing and the main experiment, viewing distance was 40 cm for the age-matched control observers, but was adjusted for each observer with central vision loss (range: 10–40 cm) to give the best view of the stimuli, taking into consideration the required image magnification and the ergonomics of shorter viewing distances, without any additional low vision devices. Appropriate near corrections were given to all observers to compensate for the accommodative demand of the viewing distance. Only face images rated as “very familiar” were subsequently used in the main experiment. The number of face images rated as such ranged from 91 to 236 for the control observers and from 58 to 204 for observers with central vision loss.
For each of the “very familiar” faces, a different face image (from Set B) of the same person was used as the test face in the main experiment. Following Dakin and Watt18, we applied an orientation filter in the Fourier domain (passing all spatial frequencies but selectively passing orientations, using a wrapped-Gaussian profile whose orientation bandwidth was specified by its standard deviation) with a bandwidth of 23° to restrict the information contained in the stimuli; the center of the filter ranged from 0° (horizontal) to 150° in steps of 30°. Figure 1 shows examples of face images filtered with each of these center orientations. Accuracy for identifying faces filtered with each of these filters, as well as for the unfiltered condition, was measured. Across all conditions, image root-mean-square (RMS) contrast was equalized (0.096).
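The orientation filtering and RMS contrast equalization described above can be sketched as follows. This is an illustrative reconstruction in Python, not the study’s custom MATLAB code; the function names and the exact wrapping of the Gaussian are assumptions.

```python
import numpy as np

def orientation_filter(image, center_deg, bandwidth_deg=23.0):
    """Keep only image structure oriented near center_deg (0 = horizontal).

    Fourier-domain wrapped-Gaussian orientation filter: all spatial
    frequencies pass, but orientations are weighted by a Gaussian whose
    standard deviation is the bandwidth (23 deg in the study).
    """
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    # Orientation of each Fourier component. Image structure oriented at
    # theta has its spectral energy at theta + 90 deg, hence the +90 below.
    spec_angle = np.degrees(np.arctan2(fy, fx))
    target = center_deg + 90.0
    sigma = bandwidth_deg
    # Wrapped Gaussian with a 180 deg period (the spectrum is symmetric).
    weight = np.zeros_like(spec_angle)
    for k in range(-2, 3):
        weight += np.exp(-0.5 * ((spec_angle - target + 180.0 * k) / sigma) ** 2)
    weight[0, 0] = 1.0                    # preserve mean luminance (DC term)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * weight))

def set_rms_contrast(image, rms=0.096):
    """Equalize RMS contrast across conditions, as in the study."""
    mean = image.mean()
    contrast = (image - mean) / mean      # contrast image relative to the mean
    return mean * (1.0 + rms * contrast / contrast.std())
```

As a sanity check, a horizontally oriented grating should survive the 0° filter almost intact but be nearly abolished by the 90° filter.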
Figure 1.
An example of an unfiltered face image and the six filtered versions of the same image. Spatial filtering was accomplished using a wrapped-Gaussian orientation filter centered at six orientations (0°, 30°, 60°, 90°, 120° (−60°), 150° (−30°), see text for details). Image RMS contrast was normalized.
Stimuli were presented against a light gray background (61.4 cd/m2). The angular subtense of the width of the images for each observer is given in Table 2 (calculated based on the physical image size presented on the display and the viewing distance of each observer). Between 8 and 20 trials per orientation were tested for each observer. None of the observers saw the same face image more than once. Prior to testing, each observer was given a few trials to practice. Based on the performance on these practice trials, we adjusted the exposure duration of the images (Table 2) so that the identification accuracy for the unfiltered condition ranged between 0.70 and 0.90. However, the actual range obtained in the main experiment was between 0.63 and 1.
Table 2.
Stimulus duration, viewing distance, image size and the number of trials tested for each observer.
| Observer | Duration (s) | Viewing distance (cm) | Image size (width) | Trials per orientation |
|---|---|---|---|---|
| CVL1 | 3.00 | 10 | 49.7° | 20 |
| CVL2 | 2.00 | 35 | 15.1° | 13 |
| CVL3 | 2.00 | 32 | 16.5° | 9 |
| CVL4 | 2.50 | 40 | 13.2° | 9 |
| CVL5 | 5.00 | 16 | 32.3° | 8 |
| CVL6 | 0.50 | 25 | 21.0° | 14 |
| CVL7 | 4.50 | 20 | 26.1° | 8 |
| NV1—fovea | 0.25 | 40 | 13.2° | 20 |
| NV2—fovea | 0.50 | 40 | 13.2° | 13 |
| NV3—fovea | 0.30 | 40 | 13.2° | 17 |
| NV4—fovea | 0.20 | 40 | 13.2° | 20 |
| NV5—fovea | 0.25 | 40 | 13.2° | 20 |
| NV1—10° | 1.50 | 40 | 13.2° | 14 to 16 |
| NV4—10° | 2.00 | 40 | 13.2° | 13 to 15 |
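The angular image sizes in Table 2 follow from the physical width of the image on the display and each observer’s viewing distance. A minimal sketch of the conversion (the ~9.26 cm physical width is back-computed from the tabled values, not stated in the text):

```python
import math

def visual_angle_deg(width_cm, distance_cm):
    # Angular subtense of a stimulus of physical width `width_cm`
    # viewed from `distance_cm` (standard exact formula).
    return 2.0 * math.degrees(math.atan(width_cm / (2.0 * distance_cm)))

# Assumption: the physical image width is not reported; ~9.26 cm is
# inferred from the 13.2 deg entries at 40 cm viewing distance.
IMAGE_WIDTH_CM = 9.26
```

For example, `visual_angle_deg(IMAGE_WIDTH_CM, 40)` gives approximately 13.2°, matching the rows tested at 40 cm, and `visual_angle_deg(IMAGE_WIDTH_CM, 10)` gives approximately 49.7°, matching CVL1.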
In the main experiment, a white fixation dot centered on the display was presented before each trial, and observers were instructed to fixate the dot. For each trial, the filter orientation was chosen at random. Each face image was presented with the midpoint between the two eyes located at the position of the white dot, for a duration specific to each observer (Table 2). A white-noise post-mask was presented for 500 ms immediately after the stimulus image disappeared, to terminate the neural processing of the stimulus, followed by a response screen consisting of eight face images from Set A: the correct answer along with seven other faces randomly chosen from images that the observer had labeled as very familiar. The correct answer had the same identity as the source image of the stimulus, but the pictures were not identical, so that observers could not match the faces based on features specific to the pictures. All eight image choices were of the same gender as the stimulus face, and where possible, the racial categories of the stimulus and response faces were also matched. Observers responded by pointing to, or indicating the number assigned to, the image choice. Figure 2 shows a schematic diagram of the experimental paradigm.
Figure 2.
A schematic diagram of the experimental paradigm of the main experiment.
Control Experiment
To determine whether the performance of observers with central vision loss is comparable with that in the normal periphery, we tested two control observers (NV1 and NV4) at 10° eccentricity in the lower and right visual fields, using an experimental paradigm similar to that of the main experiment. Stimuli were either “very familiar” faces not used in the previous testing, or face images that had been used previously but were filtered along a different orientation, to ensure that there were enough images for testing. Each face image was presented with the midpoint between the two eyes located 10° from the white fixation dot in the lower or right visual field. Observers were asked to fixate the white fixation dot during testing. Stimulus duration, viewing distance, image size and the number of trials tested per orientation for each observer are listed in Table 2.
Measurement of Contrast Sensitivity Function
Dakin and Watt reported that the accuracy for identifying filtered face images was highest when information was retained along the horizontal orientation, and lowest when information was retained along the vertical orientation.18 We found similar results in the present study (see Results). Can the difference in performance for identifying faces with primarily horizontal vs. vertical spatial information be explained by a difference in contrast sensitivity to horizontal and vertical stimuli? To evaluate this possibility, we compared the contrast sensitivity functions measured using horizontal and vertical sinusoidal gratings for five observers (CVL2, CVL4, CVL5, NV1, NV4). Gratings were generated using a VSG 2/5 graphics board (Cambridge Research Systems, UK) and displayed on a SONY Trinitron color graphic display (model: GDM-FW900; refresh rate: 76 Hz; resolution: 1,600 × 1,024; dimensions: 47.5 cm × 30.4 cm) at a mean luminance of 50.6 cd/m2. We measured the contrast threshold for detecting a grating using a two-interval forced-choice paradigm in which the grating was presented in either the first or the second temporal interval (duration of each interval: 200 ms; duration between the two intervals: 500 ms; longer durations were used for observers with central vision loss). The non-target interval contained a uniform field at the mean luminance. Observers indicated which interval contained the grating. For each orientation, six to seven spatial frequencies were tested in random order using a two-down one-up staircase procedure that tracked 71% correct performance. The staircase terminated after 10 reversals, and the geometric mean of the last eight reversals was taken as the contrast threshold. We used bootstrapping with 10,000 resamples to estimate the 95% confidence intervals. The two control observers were tested at both the fovea and 10° in the lower visual field.
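The staircase and bootstrap procedures described above can be sketched as follows; this is an illustrative reconstruction in Python, not the authors’ code, and the psychometric function, step factor, and starting contrast are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def staircase_threshold(p_correct, start=0.5, step=2.0, n_reversals=10):
    # Simulated two-down one-up staircase (converges on ~71% correct).
    # `p_correct(c)` is a hypothetical psychometric function giving the
    # probability of a correct response at contrast c.
    contrast, run, direction = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if rng.random() < p_correct(contrast):
            run += 1
            if run == 2:                      # two in a row: make it harder
                run = 0
                if direction == +1:
                    reversals.append(contrast)
                direction = -1
                contrast /= step
        else:                                 # one error: make it easier
            run = 0
            if direction == -1:
                reversals.append(contrast)
            direction = +1
            contrast *= step
    # Threshold = geometric mean of the last eight reversals, as in the text.
    return float(np.exp(np.mean(np.log(reversals[-8:]))))

def bootstrap_ci(values, n_boot=10_000, alpha=0.05):
    # Percentile bootstrap of the mean with 10,000 resamples, as in the text.
    samples = rng.choice(values, size=(n_boot, len(values)), replace=True)
    means = samples.mean(axis=1)
    return float(np.quantile(means, alpha / 2)), float(np.quantile(means, 1 - alpha / 2))
```

With a step-function observer (always correct above some contrast, always wrong below), the staircase settles into a deterministic oscillation around that contrast.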
RESULTS
Proportion correct for face identification, plotted as a function of filter orientation for the control observers, is shown in Figure 3. Dashed lines represent performance for identifying unfiltered face images. Chance performance is 0.125. Consistent with the findings of Dakin and Watt18, accuracy for identifying filtered face images for our older adults with normal vision was highest around the 0° (horizontal) filter orientation, dropped monotonically as the filter orientation deviated from horizontal, and was lowest at the 90° (vertical) filter orientation. Averaged across observers, performance for the 0° filter orientation (0.75 ± 0.04 [SE]) was similar to that for the unfiltered condition (0.89 ± 0.03, p = .066), and was significantly higher than that for the 90° filter orientation (0.28 ± 0.03, p < .0005).
Figure 3.
Proportion correct of face identification as a function of the orientation of filter for the five control observers (older adults with normal vision). Individual as well as group-averaged data are shown. Dashed lines represent the performance for identifying unfiltered face images. The chance performance is 0.125. In each panel, the data point plotted at −90° is the same as that at 90°. Error bars represent 95% confidence intervals.
Performance for observers with central vision loss is summarized in Figure 4. Clearly, there were large individual differences among these observers, as is typical of psychophysical studies involving low vision observers, given the large individual differences in their visual conditions. Therefore, to facilitate comparison with the control observers, we included the control observers’ group mean accuracy ± 95% confidence limits as shaded regions in each panel of Figure 4. Data points of observers with central vision loss falling outside the control observers’ 95% confidence limits differ from the control observers’ performance at the 0.05 significance level. While every observer with central vision loss had several data points falling outside the shaded regions, for these observers as a group only the datum at the 0° filter orientation fell outside the normal range. Considering the large individual differences in the individual performance vs. filter orientation functions, this group difference at the 0° filter orientation represents a robust effect, as it withstood the averaging of individual data, which tends to smooth out random individual differences. In the following, we focus on the performance of observers with central vision loss as a group, to look for consistency across observers.
Figure 4.
Proportion correct of face identification as a function of the orientation of filter for the seven observers with central vision loss. Individual as well as group-averaged data are shown. Dashed lines represent the performance for identifying unfiltered face images. The chance performance is 0.125. In each panel, the data point plotted at −90° is the same as that at 90°. Shaded regions represent the group mean ± 95% confidence intervals of the control observers’ performance — darker regions for orientation-filtered images and lighter regions for unfiltered images. Error bars represent 95% confidence intervals.
In general, given longer exposure durations and larger image sizes, observers with central vision loss as a group could identify unfiltered face images at the same level of performance (mean = 0.87 ± 0.05) as the control observers (0.89 ± 0.03). For identifying filtered faces, the performance of observers with central vision loss also peaked around the horizontal filter orientation and declined as the filter orientation approached vertical, akin to what we observed in control observers. Across observers, the average difference in performance between the horizontal and the vertical filter orientations was 0.22 ± 0.07, smaller than the corresponding value for the control observers (0.47 ± 0.04). Also, unlike the control observers, for most observers with central vision loss even the best performance for identifying filtered face images was worse than that for the unfiltered condition (0° filter orientation: 0.56 ± 0.06, p = .004; –30° filter orientation: 0.62 ± 0.07, p = .043).
To ascertain that our findings in observers with central vision loss were not due to differences in the vertical positions of each observer’s performance vs. filter orientation function (which could shift up and down along the y-axis had we used a different stimulus exposure duration during testing), we also examined the normalized data: performance for the filtered conditions normalized to that for the unfiltered condition. A comparison of the group-averaged normalized data between observers with central vision loss and normal controls revealed essentially the same main findings: (1) the difference in performance between the two groups of observers occurred only at the 0° filter orientation and (2) the difference in performance between the 0° and the 90° filter orientations was smaller for observers with central vision loss. These results (plots not shown) confirm that the effects observed in observers with central vision loss were not artifacts of the specific stimulus exposure durations used in the study.
Control Experiment
Performance for identifying orientation-filtered faces at 10° lower and right visual fields for two control observers is summarized in Figure 5. Similar to the results of observers with central vision loss, the accuracy vs. filter orientation functions at both lower and right visual fields were flatter compared with those obtained at the normal fovea. The flatter functions imply that the differences in performance accuracy between the horizontal and the vertical conditions were smaller than at the normal fovea. Also, like observers with central vision loss, there was a larger difference between the best performance for identifying filtered face images and that for the unfiltered condition.
Figure 5.
Proportion correct of face identification as a function of the orientation of filter at the fovea and 10° lower and right visual fields for two of the control observers. Details of the figure are as in Figure 3.
Contrast Sensitivity Functions
Contrast sensitivity functions measured using horizontal and vertical sine-wave gratings are shown in Figure 6 for two control observers tested at the fovea and at 10° in the lower visual field, and for three observers with central vision loss. There was no systematic difference in contrast sensitivity between horizontal and vertical gratings for any of the observers. These results imply that the horizontal–vertical difference in performance for identifying orientation-filtered faces cannot be explained by a difference in contrast sensitivity to horizontal and vertical information.
Figure 6.
Contrast sensitivity as a function of spatial frequency (c/deg) for two control observers tested at the fovea and 10° lower visual fields, and three observers with central vision loss. Unfilled squares represent data obtained using horizontal sine-wave gratings while filled circles represent data obtained using vertical sine-wave gratings. Error bars represent 95% confidence intervals and are smaller than the size of symbols when not shown.
DISCUSSION
Difficulty identifying faces is one of the most frequent clinical complaints of patients with central vision loss.1–3 Indeed, many studies found that the ability to correctly report the identity of a face or its expression drops as acuity worsens for observers with age-related macular degeneration (AMD), the leading cause of central vision loss.1,21,22 The difficulty in identifying faces or facial expressions can in part be compensated for by magnification. For example, Bullimore et al. compared face identification performance in a group of 15 AMD observers with that of four age-matched older adults with normal vision.1 They found that AMD observers as a group could identify faces at a distance of 1.5 m with the same accuracy as their normal vision counterparts at 18 m.1 In another study, Tejeria et al. reported that for a group of 30 observers with AMD, the median accuracy for identifying faces improved from 26% to 68% with the aid of a 4× telescope.22 In the present study, we found that observers with central vision loss could identify unfiltered face images at a similar accuracy level to that of the control observers (older adults with normal vision), as long as the image size and presentation duration were scaled appropriately. Note that the width of a real-life face averages approximately 12 cm and subtends a visual angle of approximately 7° at 1 m, a distance that most people would consider comfortable when interacting with others. Our observers with central vision loss would need to come closer to a person before the size of the face was large enough for them to see. For example, to obtain a retinal face image size of 50°, the same as the retinal image size used by observer CVL1 in our study, he would have to come as close as 14 cm to the face of the person with whom he is interacting, a distance likely to be deemed socially unacceptable.
Even though observers with central vision loss could identify unfiltered face images just as well as older adults with normal vision, their performance was more affected by the filtering of face images into different orientation bands than that of control observers. Specifically, the averaged performance accuracy of observers with central vision loss dropped from 0.87 ± 0.05 for identifying unfiltered face images to 0.56 ± 0.06 for identifying face images filtered along the horizontal orientation, compared with a drop from 0.89 ± 0.03 to 0.75 ± 0.04 for the same condition in control observers. The larger drop in performance for identifying filtered face images for observers with central vision loss implies that although these observers rely most heavily on spatial information along the horizontal orientation for face identification, under normal circumstances (i.e. unfiltered faces) their reliance on information along other orientations is higher than that of control observers. This result is consistent with another observation: the difference in accuracy for identifying filtered face images between the horizontal and the vertical filter orientations is smaller for observers with central vision loss than for control observers. One interpretation of these findings is that in the presence of central vision loss, the information available to observers is scarce and less redundant than in the normal visual system. As such, each bit of information counts, even if it is not as informative as others. Our findings suggest that there might be more integration across orientation channels, or that the bandwidth for extracting useful information is wider, for observers with central vision loss than for control observers.
Currently we are studying how spatial information is combined across different bands of orientations, and identifying the minimum bandwidth for information utilization in patients with central vision loss to test if these suppositions are correct.
For observers with central vision loss and control observers alike, best performance for identifying filtered face images occurred when the images contained primarily horizontal information. Dakin and Watt attributed the better performance for horizontal information in face images to the fact that horizontal structures (e.g. the pair of eyes) within faces comprise clusters of locally co-aligned features.18 When analyzed by V1 neurons tuned to the horizontal orientation, the output of these clusters becomes strips of horizontally elongated features, somewhat similar to what Dakin and Watt termed the “bar code”. Dakin and Watt suggested that the relative placements of these bar codes convey the specific information about a face from which its identity can be inferred.18 Clearly, these “bar codes” are specific to a face stimulus; as such, it is not surprising that performance for identifying orientation-filtered face images was highest when face images contained primarily horizontal information, even for observers with central vision loss. The interesting finding here is that information along other orientations, which may not be important for people with normal vision, is also important for people with central vision loss, implying some fundamental differences in spatial information processing in the presence of central vision loss.
A recent study suggested that the face processing deficits observed in patients with central vision loss might be related to abnormal eye-movement scanning patterns.23 Can abnormal eye movement control account for our findings in observers with central vision loss? We think abnormal eye movement control is unlikely to contribute to the primary reliance on spatial information along or around the horizontal orientation. For performance to be best along the horizontal orientation, observers with central vision loss would have to exhibit a bias in eye movements along the horizontal direction. Such a horizontal bias has not been reported in the literature. Quite the contrary, studies that document the fixation patterns of patients with central vision loss show idiosyncratic fixation behavior, with the major axis describing the distribution of fixation positions oriented at various orientations.24–27 Several of the observers who participated in this study also participated in the study of Chung, in which the frequency distributions of the observers’ fixation positions were given in her Figure 4.27 It is clear from that figure that none of the observers showed a horizontal bias in eye movements. Thus, we believe that the better identification performance for horizontally filtered face images cannot be attributed to the abnormal eye movements or fixation instability of our observers with central vision loss. However, the haphazard nature of the fixational eye movements and the increased fixation instability of observers with central vision loss could have increased the “noise” of the measurements for all filter orientations, thus contributing to the individual differences observed in the performance vs. filter orientation functions, and possibly to the overall weaker orientation tuning.
This study was in part motivated by a quest to find an efficient and effective way to improve the ability of people with central vision loss to identify faces through image enhancement. Our finding that people with central vision loss rely on spatial information across various orientation bands (although the relative reliance differs across orientations) to identify faces suggests that selective contrast enhancement along the most important orientation (horizontal) alone may not be beneficial. A caveat in interpreting our findings is that we measured performance for reporting the identity of faces. In addition to identity, another important piece of information about a human face is its expression: happy, sad, angry, frightened, etc. The ability to correctly identify and interpret facial expressions is undoubtedly crucial to social interaction.22 Previous studies suggest that human observers judge the expression of a face based primarily on features around the mouth region.28 This same reliance on facial features for expression recognition has also been demonstrated for individuals with AMD.29 These findings imply that information carried by the horizontal and/or some oblique orientation channels might be important for facial expression recognition. We are currently studying how performance for recognizing facial expressions depends on information restricted to different orientation bands, to determine whether selective enhancement of face images along the crucial orientations might improve the ability of people with central vision loss to identify facial expressions.
ACKNOWLEDGMENTS
This research was supported by NIH grants R01-EY016093 and R01-EY012810. The authors thank Mouna Attarha for collecting and preparing the face images, Daniel Coates, Gordon Legge, Zhong-lin Lu, and Bosco Tjan for their helpful comments. The results were presented at the Association for Research in Vision and Ophthalmology Annual Meeting 2010, Ft. Lauderdale, Florida.
REFERENCES
- 1. Bullimore MA, Bailey IL, Wacker RT. Face recognition in age-related maculopathy. Invest Ophthalmol Vis Sci. 1991;32:2020–2029.
- 2. Szlyk JP, Fishman GA, Grover S, Revelins BI, Derlacki DJ. Difficulty in performing everyday activities in patients with juvenile macular dystrophies: comparison with patients with retinitis pigmentosa. Br J Ophthalmol. 1998;82:1372–1376. doi: 10.1136/bjo.82.12.1372.
- 3. Haymes SA, Johnston AW, Heyes AD. The development of the Melbourne low-vision ADL index: a measure of vision disability. Invest Ophthalmol Vis Sci. 2001;42:1215–1225.
- 4. Makela P, Nasanen R, Rovamo J, Melmoth D. Identification of facial images in peripheral vision. Vision Res. 2001;41:599–610. doi: 10.1016/s0042-6989(00)00259-5.
- 5. Melmoth DR, Kukkonen HT, Makela PK, Rovamo JM. The effect of contrast and size scaling on face perception in foveal and extrafoveal vision. Invest Ophthalmol Vis Sci. 2000;41:2811–2819.
- 6. Peli E, Goldstein RB, Young GM, Trempe CL, Buzney SM. Image enhancement for the visually impaired. Simulations and experimental results. Invest Ophthalmol Vis Sci. 1991;32:2337–2350.
- 7. Nasanen R. Spatial frequency bandwidth used in the recognition of facial images. Vision Res. 1999;39:3824–3833. doi: 10.1016/s0042-6989(99)00096-6.
- 8. Bachmann T. Identification of spatially quantised tachistoscopic images of faces: how many pixels does it take to carry identity? Eur J Cogn Psychol. 1991;3:87–103.
- 9. Costen NP, Parker DM, Craw I. Spatial content and spatial quantisation effects in face recognition. Perception. 1994;23:129–146. doi: 10.1068/p230129.
- 10. Goffaux V, Hault B, Michel C, Vuong QC, Rossion B. The respective role of low and high spatial frequencies in supporting configural and featural processing of faces. Perception. 2005;34:77–86. doi: 10.1068/p5370.
- 11. Goffaux V, Rossion B. Faces are "spatial"—holistic face perception is supported by low spatial frequencies. J Exp Psychol Hum Percept Perform. 2006;32:1023–1039. doi: 10.1037/0096-1523.32.4.1023.
- 12. Hayes T, Morrone MC, Burr DC. Recognition of positive and negative bandpass-filtered images. Perception. 1986;15:595–602. doi: 10.1068/p150595.
- 13. Keenan PA, Whitman RD, Pepe J. Hemispheric asymmetry in the processing of high and low spatial frequencies: a facial recognition task. Brain Cogn. 1989;11:229–237. doi: 10.1016/0278-2626(89)90019-5.
- 14. Schyns PG, Oliva A. Dr. Angry and Mr. Smile: when categorization flexibly modifies the perception of faces in rapid visual presentations. Cognition. 1999;69:243–265. doi: 10.1016/s0010-0277(98)00069-9.
- 15. Costen NP, Parker DM, Craw I. Effects of high-pass and low-pass spatial filtering on face identification. Percept Psychophys. 1996;58:602–612. doi: 10.3758/bf03213093.
- 16. Gold J, Bennett PJ, Sekuler AB. Identification of band-pass filtered letters and faces by human and ideal observers. Vision Res. 1999;39:3537–3560. doi: 10.1016/s0042-6989(99)00080-2.
- 17. Parker DM, Costen NP. One extreme or the other or perhaps the golden mean? Issues of spatial resolution in face processing. Curr Psychol Dev Learn Pers Soc. 1999;18:118–127.
- 18. Dakin SC, Watt RJ. Biological "bar codes" in human faces. J Vis. 2009;9(4):2, 1–10. doi: 10.1167/9.4.2.
- 19. Brainard DH. The Psychophysics Toolbox. Spat Vis. 1997;10:433–436.
- 20. Pelli DG. The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spat Vis. 1997;10:437–442.
- 21. Alexander MF, Maguire MG, Lietman TM, Snyder JR, Elman MJ, Fine SL. Assessment of visual function in patients with age-related macular degeneration and low visual acuity. Arch Ophthalmol. 1988;106:1543–1547. doi: 10.1001/archopht.1988.01060140711040.
- 22. Tejeria L, Harper RA, Artes PH, Dickinson CM. Face recognition in age related macular degeneration: perceived disability, measured disability, and performance with a bioptic device. Br J Ophthalmol. 2002;86:1019–1026. doi: 10.1136/bjo.86.9.1019.
- 23. Seiple WH, Schneck ME, Odom JV, Garcia PM, Rosen RB. Face processing by patients with age-related macular degeneration. Invest Ophthalmol Vis Sci. 2010;51: E-Abstract 3618.
- 24. White JM, Bedell HE. The oculomotor reference in humans with bilateral macular disease. Invest Ophthalmol Vis Sci. 1990;31:1149–1161.
- 25. Crossland MD, Sims M, Galbraith RF, Rubin GS. Evaluation of a new quantitative technique to assess the number and extent of preferred retinal loci in macular disease. Vision Res. 2004;44:1537–1546. doi: 10.1016/j.visres.2004.01.006.
- 26. Timberlake GT, Sharma MK, Grose SA, Gobert DV, Gauch JM, Maino JH. Retinal location of the preferred retinal locus relative to the fovea in scanning laser ophthalmoscope images. Optom Vis Sci. 2005;82:177–185. doi: 10.1097/01.opx.0000156311.49058.c8.
- 27. Chung STL. Improving reading speed for people with central vision loss through perceptual learning. Invest Ophthalmol Vis Sci. 2010;51. E-pub Nov. 11, 2010. doi: 10.1167/iovs.10-6034.
- 28. Gosselin F, Schyns PG. Bubbles: a technique to reveal the use of information in recognition tasks. Vision Res. 2001;41:2261–2271. doi: 10.1016/s0042-6989(01)00097-9.
- 29. Boucart M, Dinon JF, Despretz P, Desmettre T, Hladiuk K, Oliva A. Recognition of facial emotion in low vision: a flexible usage of facial features. Vis Neurosci. 2008;25:603–609. doi: 10.1017/S0952523808080656.