Author manuscript; available in PMC: 2015 Jun 1.
Published in final edited form as: Dev Psychol. 2015 Jun;51(6):744–757. doi: 10.1037/dev0000019

Eye tracking reveals a crucial role for facial motion in recognition of faces by infants

Naiqi G Xiao 1, Paul C Quinn 2, Shaoying Liu 3, Liezhong Ge 4, Olivier Pascalis 5, Kang Lee 6
PMCID: PMC4445465  NIHMSID: NIHMS677735  PMID: 26010387

Abstract

Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces and then their face recognition was tested with static face images. Eye tracking methodology was used to record eye movements during familiarization and test phases. The results showed a developmental change in eye movement patterns, but only for the moving faces. In addition, the more infants shifted their fixations across facial regions, the better was their face recognition, but only for the moving faces. The results suggest that facial movement influences the way faces are encoded from early in development.

Keywords: facial movements, face processing development, infant, eye movements


Faces are arguably one of the most important social stimuli in infants’ visual environment. Early experience with processing faces lays a foundation for the development of a host of important social and cognitive skills. The lack of such experience can produce an early deficit in processing faces that has debilitating long-term effects (Le Grand, Mondloch, Maurer, & Brent, 2003). Over the previous half century and particularly in the last 10 years, we have gained a great deal of knowledge about the emergence and development of face processing in infancy (Cassia, Kuefner, Picozzi, & Vescovo, 2009; Cassia, Turati, & Simion, 2004; Fantz, 1963; Haith, Bergman, & Moore, 1977; Kelly et al., 2005; Maurer & Salapatek, 1976; Nelson, 2001; Pascalis, de Haan, & Nelson, 2002; Pascalis et al., 2005; for a review see: Lee, Quinn, Pascalis, & Slater, 2013). Most of this knowledge has come from studies using static face images as experimental stimuli. However, in real life, the faces infants see are mostly not static, but dynamic: they nod, tilt, smile, talk, chew, and blink. To date, we largely do not know to what extent our knowledge of static face processing by infants can be generalized to moving faces. We know even less about the role of facial movements in recognition of faces by infants. Here, to bridge this significant gap in the literature, with the use of eye-tracking methodology, we systematically examined infants’ scanning of dynamically moving versus static faces and how the scanning of dynamic and static faces differentially predicts infant face recognition.

In the past decade, researchers have recognized the importance of facial movement in face processing by adults. Recent studies found that adults not only scan moving faces differently from static ones (Võ, Smith, Mital, & Henderson, 2012), but also are sensitive to facial movement. Adults use facial movement information for judging face gender, kinship, and expression (de la Rosa, Giese, Bülthoff, & Curio, 2013; Hill & Johnston, 2001; Horstmann & Ansorge, 2009; Knappmeyer, Thornton, & Bülthoff, 2003; Rubenstein, 2005; Wilcox & Clayton, 1968). More relevant to the present study, extensive research has uncovered that facial movements facilitate adult face recognition performance (Butcher, Lander, Fang, & Costen, 2011; Knight & Johnston, 1997; Lander & Bruce, 2003; 2004; Lander & Chuang, 2005; Pike, Kemp, Towell, & Phillips, 1997; Thornton & Kourtzi, 2002; Wallis & Bülthoff, 2001; Xiao, Quinn, Ge, & Lee, 2012, 2013). This facilitative effect reflects the optimization of face processing by face motion (Stoesz & Jakobson, 2013; Xiao et al., 2012, 2013). Overall, evidence from adults suggests that facial movement information plays an important role in adult face processing: it not only provides more and richer facial cues than static faces, but also optimizes the way faces are processed.

Recently, infant researchers have begun to shift their focus from static faces to dynamic ones, and reported facilitative effects. For example, Ichikawa, Kanazawa, and Yamaguchi (2011) reported that 7- to 8-month-olds preferred a face with biologically possible vertical movements of the internal features (e.g., eye blinking and mouth movement) over the same stimulus with biologically impossible horizontal movements of the internal features. In addition, Bulf and Turati (2010) showed that newborns recognized a face in a novel viewpoint after seeing the face changing perspective dynamically; they failed to do so when multiple static views of the face were shown. Consistent with the newborn finding, Otsuka, Hill, Kanazawa, Yamaguchi, and Spehar (2012) reported that rotating faces but not static ones led 3- and 4-month-olds to prefer upright over inverted Mooney faces. Also, Otsuka et al. (2009) observed superior face recognition in 3- to 4-month-olds when familiarized with a moving smiling face than when familiarized with a static smiling face. Spencer, O’Brien, Johnston, and Hill (2006) further found that 4- to 8-month-olds even recognized faces based on their idiosyncratic facial movement patterns alone.

However, in contrast to the beneficial effects of facial movement, studies have also shown that facial movement may impede face recognition performance by infants. For example, Bahrick, Gogate, and Ruiz (2002) reported that 5-month-olds were unable to recognize faces when the faces were presented simultaneously with a salient action (brushing teeth or hair). They also found that 5-month-olds could recognize faces when familiarized with static faces. These results suggest that motion signals might distract infant attention from processing facial information properly, thereby leading to no preference at test (Bahrick, Lickliter, & Castellanos, 2013; Bahrick & Newell, 2008). Additional studies have indicated that familiarization with dynamic talking faces resulted in neonates preferring a familiar face over a novel face, whereas a static face familiarization procedure led to a novelty preference (Coulon, Guellaï, & Streri, 2011; Guellaï, Coulon, & Streri, 2011). The familiarity preference induced by moving faces indicates that the infant representation of moving faces was relatively vague and only partially formed (Cohen, 2004; Hunter & Ames, 1988). To summarize, in the first year of life, infants already exhibit specific sensitivity to facial movement. However, it remains controversial whether facial motion facilitates or impedes infant face recognition.

The findings that face recognition by infants can be either facilitated or impeded by facial movements are in strong contrast to the relatively consistent facilitation effect observed in adult studies. The mixed results in the infant studies might be caused by differences in the type of face movement presented in the different studies, the investigated age, or individual differences in processing moving faces. With regard to differences in facial movement, some studies examined the role of rigid facial movements, such as head rotation (e.g., Bulf & Turati, 2010; Otsuka et al., 2012), whereas others focused on elastic facial movements associated with emotions such as smiling (e.g., Otsuka et al., 2009). Still other studies examined elastic facial movement associated with talking (e.g., Coulon et al., 2011; Guellaï et al., 2011), animation of abstract faces (Spencer et al., 2006), or body movements (e.g., brushing hair and brushing teeth, Bahrick, 2002). Given that different types of facial movements might result in different influences on face recognition, the interpretation of the set of studies taken together is problematic. In the present study, we endeavored to address this stimulus inconsistency in the prior literature by using chewing and blinking facial movements. Chewing and blinking facial movements do not contain expressive, verbal, or face viewpoint changes, thereby excluding potential confounds that might arise from engaging infants in emotional and language processing (de la Rosa et al., 2013; Lewkowicz & Hansen-Tift, 2012).

Participant age is another possible confounding factor in interpreting the effect of facial movements in face recognition performance. To our knowledge, most of the infant studies have examined the effect of facial movements before 6 months of age, and each study focused on only one specific age, either newborns (e.g., Bulf & Turati, 2010; Coulon et al., 2011; Guellaï et al., 2011), 3- to 4-month-olds (e.g., Otsuka et al., 2009, 2012), or 5- to 6-month-olds (e.g., Bahrick, 2002). Because of the differences in age groups tested, it is difficult to compare results across studies. Moreover, considering the previously mentioned stimulus inconsistencies across the studies, it is difficult to track the effect of facial movement during infant development. To address this issue, the present study included participants at 3, 6, and 9 months of age so as to reveal the effect of elastic facial movements on the development of face recognition from 3 to 9 months of age.

Individual differences in the processing of moving faces might also contribute to the mixed effects of facial movement. The ability to process moving faces is likely still to be under development during infancy. In one recent study that used static faces, the face recognition performance of infants was closely related to their shifts in fixation during habituation: a greater frequency of fixation shifts led to novelty preference and a lower frequency of fixation shifts led to familiarity preference (Gaither, Pauker, & Johnson, 2012). When one considers the implications of the Gaither et al. results for studies examining the effect of movement on face recognition by infants, it is possible that for some infants, moving face parts are distracting; these infants may display fixation that will stick to a particular moving part without moving their fixation to other parts within the face, resulting in poorer face recognition (e.g., a familiarity preference). In contrast, for other infants, moving face parts may enhance encoding of the entire face by stimulating fixation shifts, thereby leading to improved face recognition (e.g., a novelty preference). Thus, different infants may have different eye movement patterns when viewing dynamically moving faces and these different patterns may be closely linked to their subsequent recognition of faces. To date, no evidence exists to support this intriguing hypothesis, because prior studies that used moving faces examined only infant visual scanning during face encoding (e.g., Hunnius & Geuze, 2004; Lewkowicz & Hansen-Tift, 2012, Wheeler et al., 2011) or only tested their face recognition performance (e.g., Bulf & Turati, 2010; Otsuka et al., 2009, 2012). No studies have concurrently investigated infants’ eye movement patterns during moving face encoding and their relation to subsequent face recognition performance. The present study aimed to bridge this significant gap in the literature.

Recent eye-tracking studies have revealed an interesting pattern in infants’ processing of moving faces. Early static face studies have consistently observed an eye movement pattern with most fixations allocated in the eye region (Haith et al., 1977; Hunnius, de Wit, Vrins, & von Hofsten, 2011; Maurer & Salapatek, 1976; Oakes & Ellis, 2013). However, for moving faces, a different pattern has emerged. Infants have shown an increased fixation shift with age from the eye region to the mouth region (Hunnius & Geuze, 2004; Lewkowicz & Hansen-Tift, 2012, but see Wheeler et al., 2011). This eye movement pattern was reported after 3 months of age (Hunnius & Geuze, 2004; Lewkowicz & Hansen-Tift, 2012), but was not observed in a study with infants younger than 3 months (Haith et al., 1977). Despite such distinctive eye movement patterns in processing moving faces, it is unclear whether this pattern reflects a specific perceptual mechanism for the processing of moving faces that directly affects recognition performance. Given the possibility of both facilitative and inhibitory effects of facial movements, we reasoned that there might exist individual differences in eye movement patterns underlying these effects. The present study thus focused on investigating the distinctive eye movement patterns elicited by moving faces in infants at different ages and their effect on face recognition at the individual level.

We used an experimenter-controlled familiarization and visual preference procedure with infants at 3, 6, and 9 months of age. Using a within-subject design, we examined face scanning and face recognition performance by infants in both the dynamic and static conditions. In the dynamic condition, infants were familiarized with a moving face, which involved chewing and blinking facial movements without sound (Xiao et al., 2013). After familiarization, infants were tested with two static face images presented side by side: the familiarized face versus a new face. The static condition was the same as the dynamic condition, except that infants were familiarized with a static face image.

We hypothesized that if facial movement leads infants to process dynamic faces differently from static faces, we should observe different eye movement patterns for the moving faces as opposed to the static ones. Alternatively, infant eye movement patterns might be similar when encoding dynamic versus static faces. Given that prior studies observed age changes in infant eye movement patterns in response to moving faces (Hunnius & Geuze, 2004), we expected that the effects of facial movement on face scanning and recognition might differ among the age groups. Further and more importantly, just as previous studies have shown that infant visual scanning of objects is associated with their object perception performance (e.g., Bornstein, Mash, & Arterberry, 2011; Johnson, Slemmer, & Amso, 2004), differential eye movement patterns engendered by facial motion might lead to significant differences in face recognition by infants when compared to the static condition. Because evidence from infants of different ages is mixed regarding whether facial motion facilitates or impairs face recognition, we hypothesized that the effect of facial motion might depend on an individual infant’s face scanning pattern: Those infants who were distracted by local facial movements (e.g., chewing) might fixate more locally and hence have poorer face recognition performance by showing a familiarity preference (Cohen, 2004; Hunter & Ames, 1988); in contrast, those infants who were less distracted by local movements might fixate more broadly on the face and thus recognize faces better by showing a novelty preference.

Method

Participants

We recruited infant participants through advertisements posted on community news boards. Ninety-nine Chinese infants participated in the present study including 41 3-month-olds (female: 17, male: 24, M = 94.49 days, SD = 3.44 days), 32 6-month-olds (female: 16, male: 16, M = 185.97 days, SD = 3.73 days) and 26 9-month-olds (female: 12, male: 14, M = 279.69 days, SD = 8.68 days). Nineteen additional infants participated in the study, but their data were not included in the final analyses because of calibration failure (n3-mos = 6, n6-mos = 2) or fussiness (n3-mos = 1, n6-mos = 1, n9-mos = 9).

Materials

The experimental stimuli used in the current study were Chinese female faces, presented in frontal view with a neutral facial expression. All faces were used in both the dynamic and static conditions. In the dynamic condition, the familiarization stimuli were videos of faces depicting chewing and blinking facial movements. In the static condition, the familiarization stimuli were static face pictures with the mouth closed and eyes open.

The stimuli used in the test phase were pairs of static face images, displayed side by side on a computer screen. One of the static face images was the familiarized face, and the other was a novel face. The familiarized face showed the same person as the familiarization phase, with matching inner (i.e., eyes, nose, and mouth) and external facial features (i.e., hair and face contour). The novel face was created by replacing the inner facial features of the familiarized test face with the inner facial features of another face that was not presented in the familiarization phase. In addition, the brightness and skin tone of the two faces were adjusted to approximately the same level. Thus, in each test condition, the novel test face shared identical external facial features with the familiarized test face, which were in turn the same as the familiarization face's external features; the inner features of the novel test face differed from those of the familiarized test face (Figure 1). These image manipulations ensured that participants had to discriminate between the two faces on the basis of identity rather than low-level visual cues, such as shape, color, or brightness. The familiarized faces measured 13.16 cm by 18.22 cm (11.56° by 15.93° of visual angle), and each test face measured 11.47 cm by 15.18 cm (10.08° by 13.32° of visual angle). There were 4 face stimuli in the current experiment, two of which were used as familiarization faces (Faces A and B). The other two faces (Faces C and D) served as novel test faces and were paired with Faces A and B, respectively.

Figure 1.

Figure 1

The female face stimuli used in the current study. The face in the top row is used for familiarization. The left face in the bottom row is the familiarized test face. The right face in the bottom row is the novel test face. The authors received signed consent for the individuals’ likenesses to be published in this article.

Procedure

Before testing began, infants were placed in a car seat by their parents. An experimenter adjusted the car seat position so that infants could see the computer screen above without head or body movement. The experiment started immediately after the position adjustment.

Testing of each infant started with a calibration procedure to ensure the accuracy and precision of eye movement recording. The default Tobii infant eye tracking calibration procedure (Tobii Studio, 1.7.3) was used. During calibration, a cartoon character was presented on the screen with sound. When infants fixated on this cartoon figure for more than one second, an experimenter triggered its jump to the next position. Calibration was complete once infants had successfully fixated on 5 different screen locations (the 4 corners and the center).

In the dynamic condition, infants were first familiarized with a silent familiarization face video for 30 seconds. After the offset of this face video, two identical static cartoon images appeared side by side on the screen as placeholders. These two cartoon images were replaced by the familiarized face image and a novel face image once infants fixated at the center of the screen. The two static face images were presented for 10 seconds. Half of the participants saw the familiarized face on the left side of the screen, and the other half saw the familiarized face on the right side of the screen. During the entire experimental session, an experimenter stayed with the infant to maintain a stable positioning for the infant and to reorient the infant's attention back to the screen if he or she looked away for more than 1 second. The static condition was identical in every aspect to the dynamic condition, except that participants were familiarized with a static face picture instead of a moving face video for 30 seconds. The order of the dynamic and static conditions was counterbalanced across participants.

Two Chinese female model faces were used as stimuli in the present study. Half of the participants saw one model's face only in the dynamic familiarization condition and the other model's face only in the static familiarization condition, and vice versa for the other half of the participants. Stimuli were controlled and presented with Tobii Studio (1.7.3) software on a 17-inch monitor with a resolution of 1024 by 768 pixels, placed approximately 65 cm from the eyes of the participants. The eye tracker was a Tobii 1750 with a 50 Hz sample rate.
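As a sanity check, the visual angles reported in the Materials section follow directly from the stimulus sizes and the approximately 65 cm viewing distance via the standard visual-angle formula. The short Python sketch below is an illustration only, not part of any software used in the study:

```python
import math

def visual_angle(size_cm, distance_cm=65.0):
    """Full angle (in degrees) subtended by a stimulus of the given
    extent when viewed from the given distance:
    angle = 2 * atan(size / (2 * distance))."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

# Familiarization face width of 13.16 cm at 65 cm gives ~11.56 deg,
# and the test face dimensions of 11.47 cm and 15.18 cm give
# ~10.08 deg and ~13.32 deg, matching the reported values.
```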

Results

Eye fixation data were generated by applying a fixation filter to the raw eye movement data. In the present study, a dispersion-based definition was chosen according to the eye tracker sample rate: a fixation was a series of gaze samples remaining within a maximum dispersion of 30 pixels for at least 100 ms (Salvucci & Goldberg, 2000). The following analyses were based on fixation data filtered under this fixation definition. We performed preliminary analyses to examine any potential effect of participant gender. The results did not show any significant gender effect or interactions involving gender. Data from female and male participants were therefore collapsed in the following analyses.
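For illustration, the dispersion-based definition above (gaze samples staying within a 30-pixel dispersion for at least 100 ms, at a 50 Hz sample rate) can be sketched as a minimal dispersion-threshold (I-DT) filter. The sketch below is a simplified reconstruction, not the Tobii Studio implementation; the sample format (a list of (x, y) pixel coordinates) is a hypothetical simplification:

```python
def dispersion(points):
    """Dispersion of a gaze window: (max x - min x) + (max y - min y)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def idt_fixations(samples, max_disp=30, min_dur_ms=100, rate_hz=50):
    """Return (start_index, end_index) pairs of detected fixations.

    A fixation starts when a window of at least min_dur_ms worth of
    samples stays within max_disp pixels of dispersion; the window
    then grows until dispersion exceeds the threshold.
    """
    min_len = int(min_dur_ms / (1000 / rate_hz))  # samples per 100 ms window
    fixations = []
    i = 0
    while i + min_len <= len(samples):
        window = samples[i:i + min_len]
        if dispersion(window) <= max_disp:
            j = i + min_len
            # Grow the window while dispersion stays under threshold
            while j < len(samples) and dispersion(samples[i:j + 1]) <= max_disp:
                j += 1
            fixations.append((i, j - 1))
            i = j
        else:
            i += 1  # slide the window forward by one sample
    return fixations
```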

Different Eye Movement Patterns Engendered by Facial Movements

We first examined the hypothesis that moving faces lead to a different face scanning pattern from that on static faces. To determine the face scanning pattern, we focused on two aspects of the eye movement events of infants: the spatial distribution of fixation and the number of fixation shifts within the major facial features. To further interpret these eye movement differences, we analyzed the facial movement intensity and related facial configuration changes in the moving face stimuli.

Analyses of Fixation Distribution

Previous studies have shown that infants exhibit different fixation distributions on moving faces from those reported in studies with static faces (i.e., Hunnius & Geuze, 2004; Lewkowicz & Hansen-Tift, 2012). However, it is unclear whether this motion-related scanning pattern is engendered purely by facial motion itself, or driven by other factors related to facial motion, such as language or emotion processing. To determine whether facial movements affected where infants looked, we examined the difference in fixation duration on the major facial areas (i.e., eyes, mouth, and nose) for moving versus static faces across the 3 age groups.

Five face areas of interest (AOIs) were defined for the analysis of fixation duration: left eye, right eye, nose, mouth, and whole face. As shown in Figure 2, the left eye, right eye, mouth, and whole face AOIs were elliptical areas covering each facial feature. The nose AOI was defined by a six-point polygon area that covered the nose. The parameters for each AOI are presented in Table 1. We further defined the eye region as the combination of the left and right eye regions. All of these AOIs were slightly larger than the actual facial features in order to tolerate facial movements. The dynamic and static face stimuli had identical AOIs.
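Assigning a fixation to an elliptical AOI reduces to a point-in-ellipse test. The sketch below is purely illustrative: the AOI centers and radii are made-up placeholder values (Table 1 gives the real dimensions), and the six-point polygon nose AOI used in the study is omitted for brevity:

```python
def in_ellipse(x, y, cx, cy, rx, ry):
    """True if point (x, y) lies inside the ellipse centered at
    (cx, cy) with horizontal/vertical radii rx and ry."""
    return ((x - cx) / rx) ** 2 + ((y - cy) / ry) ** 2 <= 1.0

# Hypothetical AOIs keyed by name: (cx, cy, rx, ry) in screen pixels
aois = {
    "left_eye": (380, 300, 60, 45),
    "right_eye": (620, 300, 60, 45),
    "mouth": (500, 560, 90, 55),
}

def assign_aoi(x, y):
    """Return the name of the first AOI containing the fixation,
    or None if the fixation falls outside all AOIs."""
    for name, (cx, cy, rx, ry) in aois.items():
        if in_ellipse(x, y, cx, cy, rx, ry):
            return name
    return None
```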

Figure 2.

Figure 2

Illustration of the areas of interest (AOIs). The authors received signed consent for the individuals’ likenesses to be published in this article.

Table 1.

The parameters of each AOI for the face stimuli

                               Left eye  Right eye  Mouth   Nose    Whole face
Face 1  % of screen area       0.93%     0.93%      1.47%   1.30%   14.54%
        % of whole-face area   6.40%     6.40%      10.11%  8.94%   100.00%
        Width (cm)             4.25      4.25       6.26    4.21    13.60
        Height (cm)            3.03      3.03       3.73    3.67    18.20
Face 2  % of screen area       0.95%     0.95%      1.50%   1.32%   15.23%
        % of whole-face area   6.24%     6.24%      9.85%   8.67%   100.00%
        Width (cm)             4.35      4.35       6.43    4.31    13.92
        Height (cm)            3.10      3.10       3.82    3.75    18.63

The mean proportional fixation duration on each AOI across the three age groups and two face presentation formats is shown in Figure 3. Proportional fixation duration was calculated by dividing the fixation duration on each facial feature (i.e., eyes, nose, and mouth) by the whole-face fixation time during the 30 seconds of familiarization. Approximately 70% of the on-face fixations fell within the AOIs of the major facial features (i.e., eyes, nose, and mouth), which is consistent with previous infant face scanning eye movement data (e.g., Wheeler et al., 2011). The fixations outside of these AOIs were spread randomly over other face areas, and their distribution was not consistent across participants. Thus, we focused our analysis on the within-face areas covering the major facial features.
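The proportional measure defined above amounts to a simple normalization of feature fixation times by whole-face fixation time. A minimal Python sketch, with made-up illustrative durations rather than actual data from the study:

```python
def proportional_durations(feature_ms, whole_face_ms):
    """Map each facial feature's fixation time (ms) to its share of
    the total on-face fixation time during familiarization."""
    return {name: ms / whole_face_ms for name, ms in feature_ms.items()}

# Hypothetical example: 20 s of on-face looking, split across features
shares = proportional_durations(
    {"eyes": 4500, "nose": 7000, "mouth": 3500}, whole_face_ms=20000)
# shares["nose"] is 0.35, i.e., 35% of on-face fixation time
```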

Figure 3.

Figure 3

Mean proportional fixation time on each area of interest (AOI) for the static and dynamic conditions in the three age groups. Error bars represent unit standard error. The plots show that, with increased age, infants fixated less on the eye region and more on the mouth region when looking at moving faces. No such age related change was observed in the static condition.

We first conducted a MANOVA (using the Pillai test statistic) on the proportional fixation duration on the eye, nose, and mouth regions as dependent variables, with participant age as a between-subject independent variable and face type as a within-subject independent variable. The results showed significant effects of participant age (approx F[2, 93] = 3.34, p = .004) and face type (approx F[1, 84] = 7.06, p < .001). More importantly, the MANOVA also revealed a significant interaction between age and face type (approx F[2, 84] = 4.62, p < .001). This finding suggests that the pattern of fixation on the AOIs was significantly affected by both participant age and face type.

To further explore the effects of age and face presentation format, we followed up with a 3-way mixed ANOVA examining the effects of participant age (3, 6, and 9 months), face presentation type (dynamic and static), and AOI (eye, nose, and mouth) on proportional fixation duration. Face presentation type and AOI were repeated-measures factors, and participant age was a between-subjects factor. It should be noted that some participants successfully completed only one of the dynamic and static conditions (N = 12: ndynamic-3-mos = 0, nstatic-3-mos = 2, ndynamic-6-mos = 2, nstatic-6-mos = 1, ndynamic-9-mos = 6, nstatic-9-mos = 1). The ANOVA excluded these participants when measuring the effects of the repeated-measures factors, which accounts for the mismatches between the degrees of freedom and the number of participants. The results showed that participant looking time differed significantly across AOIs, F(2, 186) = 50.53, η2p = .35, p < .001. Post-hoc pair-wise comparisons revealed that participants looked at the nose area longer than at the eye and mouth areas (ps < .001). They also looked at the mouth region longer than at the eye region (p < .001). The finding that participants allocated the longest looking time to the nose region is consistent with the finding that Chinese infants looked most at the nose region of own-race faces (Liu et al., 2011). In addition, the analysis revealed a significant 3-way interaction, F(4, 168) = 4.33, η2p = .09, p = .002, suggesting that the effects of age and face presentation type on proportional fixation durations differed across the eye, nose, and mouth areas.

To better interpret the 3-way interaction, three ANOVAs were conducted: one each for the eye, nose, and mouth regions. For the eye area, infants on average spent significantly less time looking at the eyes in the dynamic condition than in the static condition (Mdynamic = 22.52%, Mstatic = 24.84%, F[1, 84] = 8.46, η2p = .09, p = .005). We did not observe a significant main effect of age (F[2, 93] = 0.91, η2p = .02, p = .408) or an interaction between age and condition (F[2, 84] = 0.51 η2p = .01, p = .605).

For the mouth region, we found that infants looked more at this area in the dynamic condition than in the static one (Mdynamic = 17.88%, Mstatic = 9.27%, F[1, 84] = 16.25, η2p = .16, p < .001). More importantly, the interaction between condition and age was significant, indicating that the fixation percentage difference between the static and dynamic conditions changed with age (F[2, 84] = 14.86, η2p = .26, p < .001). As shown in Figure 3, a series of paired-sample t tests for each age group showed that 3- and 6-month-olds looked at the mouth area for similar amounts of time in the static and dynamic conditions (Mdynamic-3-mos = 7.59%, Mstatic-3-mos = 6.98%, p = .761; Mdynamic-6-mos = 13.67%, Mstatic-6-mos = 9.70%, p = .196); however, 9-month-olds fixated the mouth area of the dynamic faces more (Mdynamic-9-mos = 32.38%, Mstatic-9-mos = 11.15%, p < .001). For the nose region, we did not find a significant effect of face presentation format (F[1, 84] = 0.12, η2p = .001, p = .731), age group (F[2, 93] = 2.56, η2p = .05, p = .083), or an interaction (F[2, 84] = 1.94, η2p = .26, p = .150).

To summarize, we observed a pattern of significant fixation differences between looking at moving and static faces, which also changed with age. Chinese infants placed most of their fixations on the nose area of Chinese faces. For moving faces, the infants fixated less on the eye region and more on the mouth region. Most importantly, infant fixation time on the mouth region increased with age only in the dynamic condition, whereas the mouth region looking time did not change with age in the static condition. The observed pattern of fixation duration differences might reflect the fact that facial movements were more salient in the mouth region (as will be evidenced in a subsequent portion of the Results section), therefore attracting more visual attention to this area.

Analyses of Fixation Shifts

We observed fixation duration differences for moving versus static faces, with decreases in fixation on the eye region and increases in fixation on the mouth region, specifically for the moving faces. One might attribute this pattern to salient mouth movement, which directly attracts and maintains continuous fixation on the salient moving area. Alternatively, facial movement might promote more fixation shifting across the whole face; multiple visits to different facial areas could produce the observed distribution of accumulated fixation on particular features of the moving faces as opposed to the static ones. We examined these two possibilities by directly investigating the number of fixation shifts between the major facial features (i.e., eyes, nose, and mouth), based on the same AOIs used in the fixation duration analysis. If the fixation distribution difference reflects a maintenance effect, whereby facial movements lead to continuous fixation on a particular face region, we should not observe more fixation shifts to that region in the dynamic condition relative to the static condition. Alternatively, if facial movement promotes more face scanning, we should find more frequent fixation shifts among facial features.

A fixation shift was defined as two consecutive fixations falling in different facial feature regions (i.e., different AOIs). To derive the number of fixation shifts, we first examined the fixation sequences for each participant. For example, if the first fixation was on the mouth region and the second one moved to the left eye region, we regarded this fixation move as a fixation shift. By contrast, if both the first and second fixations fell within the same AOI (e.g., the mouth region), we did not count it as a fixation shift. By analyzing the fixation sequences on the moving and static familiarization faces, we calculated the number of times that fixations moved from one AOI to another.
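As a minimal illustration (not the study's analysis code), the counting rule above reduces to counting transitions between different labels in a sequence, assuming each fixation has already been labeled with the AOI it falls in:

```python
def count_fixation_shifts(aoi_sequence):
    """Count fixation shifts: pairs of consecutive fixations that
    fall in different AOIs. Consecutive fixations within the same
    AOI do not count as a shift."""
    return sum(1 for a, b in zip(aoi_sequence, aoi_sequence[1:]) if a != b)

# Hypothetical sequence of AOI-labeled fixations:
# mouth -> left_eye, left_eye -> nose, nose -> mouth = 3 shifts
seq = ["mouth", "left_eye", "left_eye", "nose", "mouth"]
```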

An ANOVA was conducted to examine the effects of face presentation format (dynamic vs. static) and age on the number of fixation shifts. As shown in Figure 4, infants made more fixation shifts when looking at moving faces (M = 5.04) than at static faces (M = 3.53), F(1, 84) = 12.01, η2p = .13, p < .001. Additionally, the difference in the number of fixation shifts between the dynamic and static conditions varied across the 3 age groups, as indicated by a significant interaction, F(2, 84) = 4.04, η2p = .09, p = .021. Follow-up paired-sample t tests indicated that neither the 3- nor the 6-month-old group made more fixation shifts in the dynamic condition than in the static condition (3-month-olds: p = .204; 6-month-olds: p = .222). However, the 9-month-olds made significantly more fixation shifts when looking at the moving faces than at the static faces (p = .001). Taken together, the fixation shift results indicated that facial movements led to an age-related increase in the number of fixation shifts. These findings support the hypothesis that the mouth-focused fixation pattern associated with moving faces can be attributed to multiple visits across facial features, rather than to continuous fixation driven by salient facial movement.

Figure 4.

Figure 4

Mean number of fixation shifts between AOIs for the static and dynamic conditions in the three age groups. Error bars represent ±1 standard error.

Analyses of Facial Movement Patterns

To probe the reason for the fixation distribution and fixation shift pattern associated with moving faces, we further analyzed the patterns of facial movement intensity and of facial spatial structure changes in the moving faces. To obtain these measurements, we used the Computer Expression Recognition Toolbox (CERT; Bartlett, Littlewort, Frank, & Lee, 2014; Littlewort et al., 2011) to gauge the intensity of facial movement based on the Facial Action Coding System (FACS; Ekman & Friesen, 1978) and the distances between facial features. The CERT program automatically detects the positions of facial features (i.e., eyes, nose, and mouth) in the video, from which the distances between facial features can be measured for each video frame. These distance measurements are aggregated to form an abstract face structure, represented by the relative distances between the facial features. This face structure is then compared with a face norm, and the difference between the structure and the norm is used to infer the intensity of facial movement. Each intensity value thus represents the intensity of the examined facial movement relative to the norm: a positive value means the facial movement is more intense than the norm, whereas a negative value indicates that the face moves less than the norm. The CERT program provides intensity measures for 26 facial Action Units (AUs). These AUs allowed us to compare the intensities of movements between the eye and mouth regions.

As shown in Figure 5, we averaged the intensities of facial movements across all of the video frames according to their locations (i.e., the eye and mouth regions). For facial movements around the eye region, intensities were equal to or less than the norm. By contrast, for facial movements around the mouth region, most intensities were equal to or greater than the norm (except AU 12). We compared facial movement intensity around the eye and mouth regions with an independent-samples t test (Welch-corrected for unequal variances, hence the fractional degrees of freedom). The results showed that the mouth region (M = 0.48) moved with greater intensity than the eye region (M = −0.42), t(13.73) = −3.02, p = .009.
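The fractional degrees of freedom reported above are characteristic of a Welch (unequal-variances) t test, which can be sketched as follows. The per-AU intensity values below are invented for illustration; they are not the values plotted in Figure 5:

```python
import numpy as np
from scipy import stats

# Hypothetical per-AU mean intensity scores relative to the norm
# (illustrative only; see Figure 5 for the actual AU values).
eye_aus = np.array([-0.9, -0.6, -0.4, -0.3, -0.1, 0.0, -0.6])
mouth_aus = np.array([0.2, 0.5, 0.9, 0.6, -0.1, 0.4, 0.8, 0.5])

# Welch's t test does not assume equal variances across the two
# sets of AUs and yields fractional degrees of freedom, as in the
# reported t(13.73).
t, p = stats.ttest_ind(eye_aus, mouth_aus, equal_var=False)
print(t < 0, p < 0.05)  # negative t: eye AUs less intense than mouth AUs
```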

Figure 5.

Figure 5

The intensity of facial Action Units (AUs) in the eye and mouth regions. Larger values along the x-axis represent more intense facial movements. Error bars represent ±1 standard error of the mean intensity scores. Bars marked with an asterisk indicate motion intensity significantly stronger or weaker than the norm (p < .05, one-sample t test); bars without an asterisk indicate motion intensity that did not differ from the norm. The intensity results show that facial movements around the eye region (light grey bars) were less than or equal to the norm, whereas most facial movements around the mouth region (dark grey bars) were greater than or equal to the norm.

Relationship between Face Scanning Pattern and Face Recognition Performance

Given the distinctive eye movement pattern observed for moving faces, the second question the present study aimed to address was whether this pattern observed during familiarization contributed to infant face recognition performance.

Face Recognition Performance

Infant face recognition was examined via novelty preference, calculated by dividing the looking time to the novel face by the total looking time to both test faces and multiplying by 100 to yield a percentage score. Based on preliminary analyses, we excluded from subsequent analyses participants who showed extreme side biases (i.e., 100% or 0% novelty preference: Ichikawa et al., 2011; ndynamic-3-mos = 20, nstatic-3-mos = 12; ndynamic-6-mos = 3, nstatic-6-mos = 4; ndynamic-9-mos = 1, nstatic-9-mos = 2) and participants who did not look at the screen during the test phase (ndynamic-3-mos = 6, nstatic-3-mos = 3; ndynamic-6-mos = 6, nstatic-6-mos = 5; ndynamic-9-mos = 4, nstatic-9-mos = 8). The mean ages of the excluded participants were M3-mos = 94.00 days, M6-mos = 184.33 days, and M9-mos = 275.40 days. Following the exclusions, 15 3-month-olds, 21 6-month-olds, and 15 9-month-olds remained in the dynamic condition, and 24 3-month-olds, 22 6-month-olds, and 15 9-month-olds remained in the static condition.
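The novelty preference score and the side-bias exclusion rule can be expressed as short helpers (a sketch; the function names are ours, and the looking times in the example are the 6-month-old static-condition group means from Table 2):

```python
def novelty_preference(novel_s: float, familiar_s: float) -> float:
    """Looking time to the novel face as a percentage of total test looking time."""
    return 100.0 * novel_s / (novel_s + familiar_s)

def extreme_side_bias(preference: float) -> bool:
    """Flag the 100% or 0% novelty preference scores that were excluded."""
    return preference in (0.0, 100.0)

# 6-month-olds, static condition (Table 2): 3.62 s novel, 2.61 s familiarized.
score = novelty_preference(3.62, 2.61)
print(round(score, 1))           # 58.1
print(extreme_side_bias(score))  # False
```

Note that the group means reported in the text (e.g., 57.39%) average per-infant percentages, so they need not equal the percentage computed from the group mean looking times.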

A mixed ANOVA was conducted to examine the effects of participant age and face presentation format on novelty preference. The results showed a significant effect of age, F(2, 34) = 3.96, η2p = .14, p = .029. The 6-month-olds showed a significantly larger novelty preference than the 3-month-olds (t[80] = 2.67, p = .009), and the novelty preference of the 9-month-olds was marginally higher than that of the 3-month-olds (t[68] = 1.92, p = .059). We did not find a significant effect of presentation format (dynamic vs. static face familiarization), F(1, 34) = 0.24, η2p = .002, p = .625, or a significant interaction between face presentation format and age, F(2, 34) = 0.19, η2p = .003, p = .825. The results suggest that, at the group level, infant face recognition in the dynamic condition was not significantly different from that in the static condition.

The mean fixation time data for the familiarized and novel test faces in each condition and age group are listed in Table 2. Three-month-olds showed no evidence of recognizing the familiarized faces in either the static or the dynamic condition, as confirmed by comparing the novelty preference scores to chance level (50%) via one-sample t tests (Mstatic = 44.36%, t[23] = −0.96, p = .347; Mdynamic = 43.77%, t[14] = −0.97, p = .350). By contrast, 6-month-olds showed a significant novelty preference in the static condition (Mstatic = 57.39%, t[21] = 2.08, p = .049), but not in the dynamic condition (Mdynamic = 58.58%, t[20] = 1.73, p = .099). Similar to the 6-month-olds, 9-month-olds exhibited a significant novelty preference in the static condition (Mstatic = 60.07%, t[15] = 2.45, p = .028), but not in the dynamic condition (Mdynamic = 52.70%, t[14] = 0.11, p = .615). Because the novel and familiarized test faces differed in their internal features while sharing identical external features, the null preference in the 3-month-olds might indicate that their face discrimination relied mostly on external features. With increased age, infants may have come to rely on the integration of internal and external facial features for face recognition, an interpretation supported by the novelty preferences of the 6- and 9-month-old infants. This proposed developmental transition in utilizing internal and external facial features for face recognition would be consistent with previous static face recognition findings (e.g., Cashon & Cohen, 2004; Schwarzer, Zauner, & Jovanovic, 2007; Younger & Cohen, 1986).

Table 2.

The mean (standard error) fixation time on the familiarized and novel test faces.

Age group       Static: Familiarized   Static: Novel   Dynamic: Familiarized   Dynamic: Novel
3-month-olds    3.80 (0.44)            3.10 (0.47)     3.94 (0.45)             3.21 (0.52)
6-month-olds    2.61 (0.33)            3.62 (0.45)     2.83 (0.41)             4.04 (0.47)
9-month-olds    2.40 (0.35)            3.71 (0.42)     2.77 (0.39)             2.83 (0.45)

The findings indicate that infants could discriminate the familiarized face from a new face at 6 and 9 months of age, if the familiarized face was static. However, the same infants in these two age groups failed to exhibit such discrimination when they were familiarized with moving faces. The null group outcomes may reflect individual differences in recognition performance after familiarization with moving faces. Such differences have been observed in previous studies, which reported both novelty and familiarity preferences after familiarization with moving faces (e.g., Bahrick, 2002; Coulon et al., 2011; Otsuka et al., 2009). The direction of preference is believed to reflect the quality of the representation, with a novelty preference indicating a fully formed face representation and a familiarity preference indicating a weaker or partially formed representation (Cohen, 2004; Hunter & Ames, 1988). It is plausible that the group-level null preference observed in the dynamic condition comprised infants with a fully formed face representation who looked longer at the novel face and infants with a partially formed face representation who displayed a familiarity preference. Such individual differences in the quality of the face representation might be attributable to how infants processed moving faces in the familiarization phase, which can be observed in their eye movement patterns (Gaither et al., 2012), a point to which we turn next.

Face Scanning and its Relation to Face Recognition Performance

As described earlier in the Results section, we hypothesized two possible eye movement patterns elicited by facial movements: attraction of fixation to certain face regions versus activation of more fixation shifts across face regions. In addition, these two patterns might contribute to infant representation of moving faces and be revealed in their preferences. In particular, the more often infants shifted fixation during familiarization, the more they would prefer the novel face over the familiarized face.

We characterized each participant's scanning with a fixation shift ratio, an index introduced to capture individual scanning differences. To derive this ratio, we classified each pair of consecutive fixations into two categories based on whether the two fixations fell into the same AOI or different AOIs. If consecutive fixations were in different AOIs, the pair was counted as a fixation shift; if they fell in the same AOI, the pair was counted as a fixation stay. The fixation shift ratio was calculated by dividing the number of fixation shifts by the sum of fixation shifts and fixation stays. A value of 0% means that consecutive fixations always remained within the same AOI, whereas a value of 100% indicates that every consecutive fixation pair moved from one AOI to another.
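The ratio follows directly from the shift/stay classification of consecutive fixation pairs (again an illustrative sketch with hypothetical AOI labels, not the study's analysis code):

```python
from typing import List

def fixation_shift_ratio(aoi_sequence: List[str]) -> float:
    """Fixation shifts as a percentage of all consecutive fixation pairs.

    Each consecutive pair is either a shift (different AOIs) or a
    stay (same AOI); the ratio is 100 * shifts / (shifts + stays).
    """
    pairs = list(zip(aoi_sequence, aoi_sequence[1:]))
    if not pairs:  # fewer than two fixations: no pairs to classify
        return 0.0
    shifts = sum(1 for prev, curr in pairs if prev != curr)
    return 100.0 * shifts / len(pairs)

# Two shifts (mouth -> left eye, left eye -> nose) and one stay.
print(round(fixation_shift_ratio(["mouth", "left_eye", "left_eye", "nose"]), 1))  # 66.7
```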

We first examined whether infants exhibited similar fixation shift patterns when scanning the moving and static faces. Pearson correlations were computed between the fixation shift ratios for moving and static faces in each of the three age groups. A significant correlation was found in the 3-month-olds (r = .45, p = .007), but not in the 6- or 9-month-old group (r6-mos = −.05, p = .818; r9-mos = .40, p = .121). These results suggest that facial movements elicited fixation shift patterns different from those for static faces in the 6- and 9-month-olds, but not in the 3-month-olds.

To examine the relationship between the fixation shift ratio during the learning of moving faces and face recognition performance, we conducted a mixed multiple linear regression with participant age, face presentation format, fixation shift ratio during the familiarization phase, and their interactions as predictors. Face presentation format was a within-subject predictor, and novelty preference in the test phase was the dependent variable. We predicted that the relation between fixation shifts during familiarization and novelty preference would differ across the three age groups and face presentation formats. The results supported this prediction, showing a significant 3-way interaction among age, face presentation format, and fixation shift ratio (t = 2.42, p = .021).

To further explore the relation between the fixation shift ratio during face learning and recognition performance, we conducted linear regressions on novelty preference separately for the dynamic and static conditions, with age, fixation shift ratio, and their interaction as predictors. For the dynamic condition, the results showed a significant interaction between age and fixation shift ratio (t = 2.18, p = .035), indicating that the relation between the shift ratio and novelty preference differed among the three age groups. We examined this age-dependent relation further by conducting Pearson correlations between the fixation shift ratio and the novelty preference score in the moving face condition in each age group. The 6- and 9-month-olds showed significant positive correlations between the two measures (r6-mos = .44, p = .045; r9-mos = .61, p = .016), whereas the 3-month-olds did not (r = −.27, p = .330; Figure 6). This pattern of correlations suggests that when 6- and 9-month-olds looked at moving faces, the more they shifted fixation, the better they recognized the faces. In contrast, we did not find any significant relation in the same groups of participants when they were familiarized with static faces (r3-mos = −.08, p = .705; r6-mos = −.30, p = .166; r9-mos = .08, p = .760).
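The per-age-group correlation analysis amounts to a standard Pearson test on two per-infant vectors, as sketched below (the values are invented for illustration; the study's individual infant data are not reproduced here):

```python
import numpy as np
from scipy import stats

# Hypothetical per-infant values for one age group (illustrative only).
shift_ratio = np.array([20.0, 35.0, 40.0, 55.0, 60.0, 75.0])   # % during familiarization
novelty_pref = np.array([45.0, 50.0, 52.0, 58.0, 61.0, 70.0])  # % at test

# A positive r means infants who shifted fixation more showed a larger
# novelty preference, the pattern found for 6- and 9-month-olds.
r, p = stats.pearsonr(shift_ratio, novelty_pref)
print(r > 0)  # True for this toy sample
```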

Figure 6.

Figure 6

Correlation between fixation shift ratio (x-axis) and novelty preference ratio (y-axis) for the static and dynamic conditions in the three age groups.

Discussion

The current study investigated how facial movement affected infant face scanning patterns, and whether such face scanning patterns led to improved or impaired face recognition. Several major findings were obtained.

The first major finding is that by tracking infant eye movements during face encoding, we observed distinctive eye movement patterns and related developmental changes between the static and dynamic conditions: 1) neither 3- nor 6-month-olds showed differences when scanning moving versus static faces, 2) 9-month-olds spent more time looking at the mouth region and less at the eye region in the dynamic condition relative to the static condition, and 3) 9-month-olds shifted their fixation more frequently between inner facial features in the moving face condition relative to the static face condition.

As shown in our AOI proportional fixation time analysis (Figure 3), when looking at moving faces, with increased age, infants fixated more on the mouth region and less on the eye region. This finding is similar to that reported in recent studies on infant selective attention to talking faces, whether the voice was audible (Lewkowicz & Hansen-Tift, 2012) or silent (Xiao, Quinn, Wheeler, Pascalis, & Lee, 2014), or to the mother’s dynamically interactive face (Hunnius & Geuze, 2004). The same infants did not show such scanning patterns when presented with static faces. Scanning of the static faces tended to focus on the eyes, which is consistent with previous findings using static face pictures as stimuli (e.g., Maurer & Salapatek, 1976; Oakes & Ellis, 2013).

The present finding of increased scanning of the mouth relative to the eyes is likely due to the salience of movements in the mouth region, as evidenced by our analysis of the moving face videos: as shown in Figure 5, the mouth region moved more than the eye region. However, increased scanning of the mouth did not lead infants to over-fixate on the mouth region. Rather, we found a novel effect of the moving faces on fixation shifting between the inner facial features. Moving faces led to more fixation shifts between the major facial features than did static faces, and the size of this effect increased significantly with age (Figure 4). This fixation shift pattern supports a particular hypothesis regarding the role of facial movements in visual scanning of faces: instead of maintaining continuous fixation on salient moving parts, facial movement promotes more fixation shifting across the whole face region. Multiple visits to different facial areas likely contribute to the spatial distribution of accumulated fixation on the moving faces as opposed to the static ones.

Although additional studies are needed to elucidate more thoroughly the increase in fixation shifts elicited by facial motion, here we propose a possible interpretation from an exogenous attention perspective. Whenever a fixation is located on a certain facial feature, other face regions fall into the peripheral visual field (Van Belle, De Graef, Verfaillie, Rossion, & Lefevre, 2010). Because facial movements change the distances between facial features, inputs from the peripheral visual field change rapidly. These peripheral visual changes could act as exogenous cues that drive attention to the changing areas. For example, when the left eye is fixated, the mouth falls in the peripheral visual field. Elastic facial movement changes the distance between the currently fixated point and the peripheral visual cue (i.e., the distance between the left eye and mouth), and such changes may be responsible for the overt shifting of attention to the mouth area. The chewing/blinking movements led to more frequent distance changes along the paths connecting the mouth and other facial features than along the paths connecting non-mouth features. Thus, facial movements might have driven infants to increase fixation shifts between facial features. The increase in fixation shifts with age could reflect the development of sensitivity to subtle facial movements, which might be rooted in the rapid development of the posterior orienting network in the first year of life (Reynolds, Courage, & Richards, 2013). Taking the fixation distribution and shifting results together, it is evident that with increased age, infants become increasingly sensitive to facial movements (Xiao et al., 2014), which in turn increases their scanning of the whole face rather than the specific feature that is moving saliently.

Our second major finding is that of an age-related change in the relation between infant eye movement patterns during the encoding of the dynamic faces and face recognition performance. By looking at the eye movement patterns at an individual level, we observed that different infants displayed differential proportions of fixation shifts across facial features relative to consecutive fixation stays in the same facial feature region. We further found that whereas this proportion was not related to face recognition performance by 3-month-olds, for 6- and 9-month-olds, the more frequently they shifted fixations between different face regions during face encoding, the better they recognized the encoded faces.

In contrast to the moving face condition, we did not observe a relationship between face scanning and face recognition in the static condition in the same groups of participants. This might be because infants mainly fixated on the eye and nose regions of the static faces (Figure 3), which constricted fixation shifting within this small area. Given the small size of the area, fixation shifts that function to link face part information in the static condition would not be as expansive as those subtending larger face areas (e.g., mouth to eyes), which were more characteristic of scanning of moving faces. To further support this hypothesis, we observed that the fixation shift patterns by the same infants were not correlated between the moving and static conditions. This finding suggests that the eye movement pattern that contributes to face recognition is distinctive to moving faces. Overall, the relationship between scanning and recognition suggests that cross-regional scanning activated by facial movements may contribute to successful face recognition from 6 months of age.

The correlation between scanning of moving faces and recognition performance has an important implication for understanding the role of facial movement in the development of face recognition. The fixation shifting across facial features might reflect the emergence of configural/holistic face processing. Because facial movement changes the face configuration (i.e., inter-feature spatial relationships) from moment to moment, it offers a distribution of possible facial configurations, rather than a snapshot of facial configuration structure in a static face picture. By processing these possible facial configurations, one might more readily recognize a face, and this can probably be achieved by active scanning between facial regions. The inter-feature scanning could help infants not only sample information from various facial features, but perhaps more importantly, allow them to attend to and encode the spatial relations among the features, which is the configural/holistic facial information. Previous studies have shown that infants begin to process between-feature relational information in faces around 3 to 6 months of age (Bhatt, Bertin, Hayden, & Reed, 2005; Cashon & Cohen, 2003; Quinn & Tanaka, 2009; Schwarzer et al., 2007), which is consistent with our findings that infants increase their fixation shifting around 6 months of age. Future studies might consider directly examining the role of facial movement in the development of facial configural processing by using paradigms such as the “switch” visual habituation procedure (Cashon & Cohen, 2004; Schwarzer, et al., 2007; Younger & Cohen, 1986).

It should be noted that, in the present study, we familiarized participants with dynamic or static faces, and tested them with static faces. Future studies might consider using a full factorial design to further investigate the role of facial movement in the development of face processing. A full factorial design would include both moving and static faces in the familiarization and test phases, which would allow one to examine the effects of facial movement at both the face encoding and retrieval stages. As suggested in previous studies, infants might recognize faces based on their idiosyncratic facial movement patterns (Spencer et al., 2006), which can be readily tested with a full factorial experimental design (Knappmeyer et al., 2003; Lander & Davies, 2007; O’Toole, Roark, & Abdi, 2002).

Another noteworthy feature of the present study is that it focused on own-race dynamic face processing in a Chinese population of infants. It would be worthwhile for forthcoming investigations to include participants from different racial backgrounds and both own- and other-race moving faces as stimuli. Such a design would also allow for examination of possible cultural effects on how facial movement influences the development of face processing. Previous studies have shown that cultural factors exert substantial influence in face scanning patterns (Blais, Jack, Scheepers, Fiset, & Caldara, 2008; Fu, Hu, Wang, Quinn, & Lee, 2012; Kelly et al., 2011; Kelly, Miellet, & Caldara, 2010; Liu et al., 2011; Wheeler et al., 2011), and a recent study demonstrated that facial movements are not culturally universal (Jack, Garrod, Yu, Caldara, & Schyns, 2012). We might therefore expect cultural differences in how face movement affects the development of face processing.

A limitation of the present study is that it is unclear to what extent the effects of facial movements can be generalized to the processing of other visual objects. On the one hand, previous studies have shown that a rigid motion signal in non-face objects can influence infant scanning patterns, thereby affecting perception of those objects (e.g., Johnson et al., 2004). On the other hand, facial movements might play a distinct role in infant visual processing. As indicated in a recent study, although infants were sensitive to motion signals in general, they exhibited especially strong sensitivity to movements that depict human faces (Ichikawa et al., 2011). The specificity of the effects of facial movements might be due to 1) facial movement complexity, which combines rigid and elastic motion signals; 2) embedded social signals, which convey various kinds of facial information; and 3) infants’ rich visual experience with moving faces. Although it is beyond the scope of the present study, it is important to investigate the specific role of facial movements in visual development in infancy. To address this issue, one approach would be to examine those factors that may contribute to a special role for facial movements.

To further understand the role of facial movements in the development of face processing, future studies might consider increasing the number of moving face exemplars and the types of facial movements, such as presenting movements in the eye region. Increasing the variance in moving face stimuli would offer an opportunity to examine the generalizability of the current findings; for example, it could help us understand to what extent facial movements affect face scanning patterns. Moreover, the current study controlled the external features of the familiarized and novel test faces. Although this manipulation ensures that face recognition is based on face identity, it might inhibit face recognition by young infants, which has been found to rely primarily on external facial features (e.g., Gallay, Baudouin, Durand, Lemoine, & Lécuyer, 2006; Rennels & Cummings, 2013; Rose, Jankowski, & Feldman, 2008; Turati, Cassia, Simion, & Leo, 2006). Absence of external feature information may thus mask the possible contribution of facial movement information to face recognition performance in young infants (e.g., Otsuka et al., 2009). Thus, to further understand the contribution of facial movements to the development of face processing, forthcoming studies might consider a paradigm that is equally applicable across the whole of infancy.

To conclude, the present study observed a distinctive age-related eye movement pattern in processing moving faces as opposed to static faces. More importantly, we found that the specific eye movement pattern corresponding to moving faces was related to infant face recognition performance. The present study, along with some recent adult facial movement studies (e.g., Võ et al., 2012), suggests that how individuals process moving faces is not necessarily predictable from how static faces are processed. Instead, this work shows that even subtle facial movements, such as chewing and blinking, can dramatically affect infant face encoding, which consequently influences recognition performance. The implication is that facial movement plays a significant role in shaping the development of face processing. Focusing on moving face processing should therefore help us better understand the development of face processing in real-world contexts. In future studies, the dynamic aspect of face information is worth emphasizing, as it might provide crucial insight into the nature of face processing development that is difficult to obtain from studies using static face pictures as stimuli. For example, inquiries into such issues as the emergence and development of configural/holistic face processing might be better answered through the exploration of moving face processing.

Acknowledgments

This research is supported by grants from the Natural Science and Engineering Research Council of Canada, National Institutes of Health (R01 HD046526), and National Science Foundation of China (31070908, 31300860, 31371041, and 31470993).

Contributor Information

Naiqi G. Xiao, Email: naiqi.xiao@mail.utoronto.ca, University of Toronto, 45 Walmer Road, Toronto, ON, M5R 2X2, Canada

Paul C. Quinn, Email: pquinn@psych.udel.edu, University of Delaware, Newark, DE, 19716, United States

Shaoying Liu, Email: syliu@zstu.edu.cn, Zhejiang Sci-Tech University, 5 No. 2 Street, Hangzhou, 310018, China.

Liezhong Ge, Email: topglzh@163.com, Zhejiang Sci-Tech University, 5 No. 2 Street, Hangzhou, 310018, China.

Olivier Pascalis, Email: olivier.pascalis@upmf-grenoble.fr, LPNC–Université Grenoble Alpes, CNRS, Grenoble, 38400, France.

Kang Lee, Email: kang.lee@utoronto.ca, University of Toronto, 45 Walmer Road, Toronto, ON, M5R 2X2, Canada.

References

  1. Bahrick LE, Gogate LJ, Ruiz I. Attention and memory for faces and actions in infancy: The salience of actions over faces in dynamic events. Child Development. 2002;73:1629–1643. doi: 10.1111/1467-8624.00495. [DOI] [PubMed] [Google Scholar]
  2. Bahrick LE, Lickliter R, Castellanos I. The development of face perception in infancy: Intersensory interference and unimodal visual facilitation. Developmental Psychology. 2013;49:1919–1930. doi: 10.1037/a0031238. [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Bahrick LE, Newell LC. Infant discrimination of faces in naturalistic events: Actions are more salient than faces. Developmental Psychology. 2008;44:983–996. doi: 10.1037/0012-1649.44.4.983. [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Bartlett MS, Littlewort GC, Frank MG, Lee K. Automatic decoding of facial movements reveals deceptive pain expressions. Current Biology. 2014;24:738–743. doi: 10.1016/j.cub.2014.02.009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Bhatt RS, Bertin E, Hayden A, Reed A. Face processing in infancy: Developmental changes in the use of different kinds of relational information. Child Development. 2005;76:169–181. doi: 10.1111/j.1467-8624.2005.00837.x. [DOI] [PubMed] [Google Scholar]
  6. Blais C, Jack RE, Scheepers C, Fiset D, Caldara R. Culture shapes how we look at faces. PLoS ONE. 2008;3:e3022. doi: 10.1371/journal.pone.0003022. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Bornstein MH, Mash C, Arterberry ME. Perception of object–context relations: Eye-movement analyses in infants and adults. Developmental Psychology. 2011;47:364–375. doi: 10.1037/a0021059. [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Bulf H, Turati C. The role of rigid motion in newborns’ face recognition. Visual Cognition. 2010;18:504–512. doi: 10.1080/13506280903272037. [DOI] [Google Scholar]
  9. Butcher N, Lander K, Fang H, Costen N. The effect of motion at encoding and retrieval for same- and other-race face recognition. British Journal of Psychology. 2011;102:931–942. doi: 10.1111/j.2044-8295.2011.02060.x. [DOI] [PubMed] [Google Scholar]
  10. Cashon CH, Cohen LB. The construction, deconstruction, and reconstruction of infant face perception. In: Pascalis O, Slater A, editors. The development of face processing in infancy and early childhood. New York: Nova Science Publishers; 2003. pp. 55–68. [Google Scholar]
  11. Cashon CH, Cohen LB. Beyond U-shaped development in infants’ processing of faces: An information-processing account. Journal of Cognition and Development. 2004;5:59–80. doi: 10.1207/s15327647jcd0501_4. [DOI] [Google Scholar]
  12. Cassia VM, Kuefner D, Picozzi M, Vescovo E. Early experience predicts later plasticity for face processing: Evidence for the reactivation of dormant effects. Psychological Science. 2009;20:853–859. doi: 10.1111/j.1467-9280.2009.02376.x. [DOI] [PubMed] [Google Scholar]
  13. Cassia VM, Turati C, Simion F. Can a nonspecific bias toward top-heavy patterns explain newborns’ face preference? Psychological Science. 2004;15:379–383. doi: 10.1111/j.0956-7976.2004.00688.x. [DOI] [PubMed] [Google Scholar]
  14. Cohen LB. Uses and misuses of habituation and related preference paradigms. Infant and Child Development. 2004;13:349–352. doi: 10.1002/icd.355. [DOI] [Google Scholar]
  15. Coulon M, Guellaï B, Streri A. Recognition of unfamiliar talking faces at birth. International Journal of Behavioral Development. 2011;35:282–287. doi: 10.1177/0165025410396765. [DOI] [Google Scholar]
  16. de la Rosa S, Giese M, Bülthoff HH, Curio C. The contribution of different cues of facial movement to the emotional facial expression adaptation aftereffect. Journal of Vision. 2013;13(1):1–15. doi: 10.1167/13.1.23. [DOI] [PubMed] [Google Scholar]
  17. Ekman P, Friesen W. Facial action coding system: A technique for the measurement of facial movement. Consulting Psychologists Press; Palo Alto: 1978. [Google Scholar]
  18. Fantz RL. Pattern vision in newborn infants. Science. 1963;140:296–297. doi: 10.1126/science.140.3564.296.
  19. Fu G, Hu CS, Wang Q, Quinn PC, Lee K. Adults scan own- and other-race faces differently. PLoS ONE. 2012;7:e37688. doi: 10.1371/journal.pone.0037688.
  20. Gaither SE, Pauker K, Johnson SP. Biracial and monoracial infant own-race face perception: An eye tracking study. Developmental Science. 2012;15:775–782. doi: 10.1111/j.1467-7687.2012.01170.x.
  21. Gallay M, Baudouin JY, Durand K, Lemoine C, Lécuyer R. Qualitative differences in the exploration of upright and upside-down faces in four-month-old infants: An eye-movement study. Child Development. 2006;77:984–996. doi: 10.1111/j.1467-8624.2006.00914.x.
  22. Guellaï B, Coulon M, Streri A. The role of motion and speech in face recognition at birth. Visual Cognition. 2011;19:1212–1233. doi: 10.1080/13506285.2011.620578.
  23. Haith MM, Bergman T, Moore M. Eye contact and face scanning in early infancy. Science. 1977;198:853–855. doi: 10.1126/science.918670.
  24. Hill H, Johnston A. Categorizing sex and identity from the biological motion of faces. Current Biology. 2001;11:880–885. doi: 10.1016/S0960-9822(01)00243-3.
  25. Horstmann G, Ansorge U. Visual search for facial expressions of emotions: A comparison of dynamic and static faces. Emotion. 2009;9:29–38. doi: 10.1037/a0014147.
  26. Hunnius S, de Wit TCJ, Vrins S, von Hofsten C. Facing threat: Infants’ and adults’ visual scanning of faces with neutral, happy, sad, angry, and fearful emotional expressions. Cognition & Emotion. 2011;25:193–205. doi: 10.1080/15298861003771189.
  27. Hunnius S, Geuze RH. Developmental changes in visual scanning of dynamic faces and abstract stimuli in infants: A longitudinal study. Infancy. 2004;6:231–255. doi: 10.1207/s15327078in0602_5.
  28. Hunter MA, Ames EW. A multifactor model of infant preferences for novel and familiar stimuli. In: Rovee-Collier C, Lipsitt LP, editors. Advances in infancy research. Vol. 5. Norwood, NJ: Ablex; 1988. pp. 69–95.
  29. Ichikawa H, Kanazawa S, Yamaguchi MK. The movement of internal facial features elicits 7- to 8-month-old infants’ preference for face patterns. Infant and Child Development. 2011;20:464–474. doi: 10.1002/icd.724.
  30. Jack RE, Garrod OGB, Yu H, Caldara R, Schyns PG. Facial expressions of emotion are not culturally universal. Proceedings of the National Academy of Sciences of the United States of America. 2012;109:7241–7244. doi: 10.1073/pnas.1200155109.
  31. Johnson SP, Slemmer JA, Amso D. Where infants look determines how they see: Eye movements and object perception performance in 3-month-olds. Infancy. 2004;6:185–201. doi: 10.1207/s15327078in0602_3.
  32. Kelly DJ, Liu S, Rodger H, Miellet S, Ge L, Caldara R. Developing cultural differences in face processing. Developmental Science. 2011;14:1176–1184. doi: 10.1111/j.1467-7687.2011.01067.x.
  33. Kelly DJ, Miellet S, Caldara R. Culture shapes eye movements for visually homogeneous objects. Frontiers in Psychology. 2010;1:1–7. doi: 10.3389/fpsyg.2010.00006.
  34. Kelly DJ, Quinn PC, Slater AM, Lee K, Gibson A, Smith M, Pascalis O. Three-month-olds, but not newborns, prefer own-race faces. Developmental Science. 2005;8:F31–F36. doi: 10.1111/j.1467-7687.2005.0434a.x.
  35. Knappmeyer B, Thornton IM, Bülthoff HH. The use of facial motion and facial form during the processing of identity. Vision Research. 2003;43:1921–1936. doi: 10.1016/S0042-6989(03)00236-0.
  36. Knight B, Johnston A. The role of movement in face recognition. Visual Cognition. 1997;4:265–273. doi: 10.1080/713756764.
  37. Lander K, Bruce V. The role of motion in learning new faces. Visual Cognition. 2003;10:897–912. doi: 10.1080/13506280344000149.
  38. Lander K, Bruce V. Repetition priming from moving faces. Memory and Cognition. 2004;32:640–647. doi: 10.3758/BF03195855.
  39. Lander K, Chuang L. Why are moving faces easier to recognize? Visual Cognition. 2005;12:429–442. doi: 10.1080/13506280444000382.
  40. Lander K, Davies R. Exploring the role of characteristic motion when learning new faces. The Quarterly Journal of Experimental Psychology. 2007;60:519–526. doi: 10.1080/17470210601117559.
  41. Lee K, Quinn PC, Pascalis O, Slater A. Development of face-processing ability in childhood. In: Zelazo PD, editor. The Oxford handbook of developmental psychology: Vol. 1. Body and mind. Oxford University Press; 2013. pp. 338–370.
  42. Le Grand R, Mondloch CJ, Maurer D, Brent HP. Expert face processing requires visual input to the right hemisphere during infancy. Nature Neuroscience. 2003;6:1108–1112. doi: 10.1038/nn1121.
  43. Lewkowicz DJ, Hansen-Tift AM. Infants deploy selective attention to the mouth of a talking face when learning speech. Proceedings of the National Academy of Sciences of the United States of America. 2012;109:1431–1436. doi: 10.1073/pnas.1114783109.
  44. Liu S, Quinn PC, Wheeler A, Xiao NG, Ge L, Lee K. Similarity and difference in the processing of same- and other-race faces as revealed by eye tracking in 4- to 9-month-olds. Journal of Experimental Child Psychology. 2011;108:180–189. doi: 10.1016/j.jecp.2010.06.008.
  45. Littlewort G, Whitehill J, Wu T, Fasel I, Frank M, Movellan J, Bartlett M. The computer expression recognition toolbox (CERT). In: Proceedings of the 2011 IEEE International Conference on Automatic Face & Gesture Recognition and Workshops (FG 2011). IEEE; 2011. pp. 298–305.
  46. Maurer D, Salapatek P. Developmental changes in the scanning of faces by young infants. Child Development. 1976;47:523–527.
  47. Nelson CA. The development and neural bases of face recognition. Infant and Child Development. 2001;10:3–18. doi: 10.1002/icd.239.
  48. O’Toole AJ, Roark DA, Abdi H. Recognizing moving faces: A psychological and neural synthesis. Trends in Cognitive Sciences. 2002;6:261–266. doi: 10.1016/S1364-6613(02)01908-3.
  49. Oakes LM, Ellis AE. An eye-tracking investigation of developmental changes in infants’ exploration of upright and inverted human faces. Infancy. 2013;18:134–148. doi: 10.1111/j.1532-7078.2011.00107.x.
  50. Otsuka Y, Hill H, Kanazawa S, Yamaguchi MK, Spehar B. Perception of Mooney faces by young infants: The role of local feature visibility, contrast polarity, and motion. Journal of Experimental Child Psychology. 2012;111:164–179. doi: 10.1016/j.jecp.2010.10.014.
  51. Otsuka Y, Konishi Y, Kanazawa S, Yamaguchi MK, Abdi H, O’Toole AJ. Recognition of moving and static faces by young infants. Child Development. 2009;80:1259–1271. doi: 10.1111/j.1467-8624.2009.01330.x.
  52. Pascalis O, de Haan M, Nelson CA. Is face processing species-specific during the first year of life? Science. 2002;296:1321–1323. doi: 10.1126/science.1070223.
  53. Pascalis O, Scott LS, Kelly DJ, Shannon RW, Nicholson E, Coleman M, Nelson CA. Plasticity of face processing in infancy. Proceedings of the National Academy of Sciences of the United States of America. 2005;102:5297–5300. doi: 10.1073/pnas.0406627102.
  54. Pike GE, Kemp RI, Towell NA, Phillips KC. Recognizing moving faces: The relative contribution of motion and perspective view information. Visual Cognition. 1997;4:409–437. doi: 10.1080/713756769.
  55. Quinn PC, Tanaka JW. Infants’ processing of featural and configural information in the upper and lower halves of the face. Infancy. 2009;14:474–487. doi: 10.1080/15250000902994248.
  56. Rennels JL, Cummings AJ. Sex differences in facial scanning: Similarities and dissimilarities between infants and adults. International Journal of Behavioral Development. 2013;37:111–117. doi: 10.1177/0165025412472411.
  57. Reynolds GD, Courage ML, Richards JE. The development of attention. In: Reisberg D, editor. The Oxford handbook of cognitive psychology. New York: Oxford University Press; 2013. pp. 1000–1013.
  58. Rose SA, Jankowski JJ, Feldman JF. The inversion effect in infancy: The role of internal and external features. Infant Behavior and Development. 2008;31:470–480. doi: 10.1016/j.infbeh.2007.12.015.
  59. Rubenstein AJ. Variation in perceived attractiveness: Differences between dynamic and static faces. Psychological Science. 2005;16:759–762. doi: 10.1111/j.1467-9280.2005.01610.x.
  60. Salvucci DD, Goldberg JH. Identifying fixations and saccades in eye-tracking protocols. In: Proceedings of the 2000 Symposium on Eye Tracking Research & Applications (ETRA ’00). New York: ACM; 2000. pp. 71–78.
  61. Spencer J, O’Brien J, Johnston A, Hill H. Infants’ discrimination of faces by using biological motion cues. Perception. 2006;35:79–89. doi: 10.1068/p5379.
  62. Stoesz BM, Jakobson LS. A sex difference in interference between identity and expression judgments with static but not dynamic faces. Journal of Vision. 2013;13(5):1–14. doi: 10.1167/13.5.26.
  63. Schwarzer G, Zauner N, Jovanovic B. Evidence of a shift from featural to configural face processing in infancy. Developmental Science. 2007;10:452–463. doi: 10.1111/j.1467-7687.2007.00599.x.
  64. Thornton IM, Rensink RA, Shiffrar M. Active versus passive processing of biological motion. Perception. 2002;31:837–853. doi: 10.1068/p3072.
  65. Turati C, Cassia VM, Simion F, Leo I. Newborns’ face recognition: Role of inner and outer facial features. Child Development. 2006;77:297–311. doi: 10.1111/j.1467-8624.2006.00871.x.
  66. Van Belle G, De Graef P, Verfaillie K, Rossion B, Lefevre P. Face inversion impairs holistic perception: Evidence from gaze-contingent stimulation. Journal of Vision. 2010;10(5):1–13. doi: 10.1167/10.5.10.
  67. Võ MLH, Smith TJ, Mital PK, Henderson JM. Do the eyes really have it? Dynamic allocation of attention when viewing moving faces. Journal of Vision. 2012;12:1–14. doi: 10.1167/12.13.3.
  68. Wallis G, Bülthoff HH. Effects of temporal association on recognition memory. Proceedings of the National Academy of Sciences of the United States of America. 2001;98:4800–4804. doi: 10.1073/pnas.071028598.
  69. Wheeler A, Anzures G, Quinn PC, Pascalis O, Omrin DS, Lee K. Caucasian infants scan own- and other-race faces differently. PLoS ONE. 2011;6:e18621. doi: 10.1371/journal.pone.0018621.
  70. Wilcox BM, Clayton FL. Infant visual fixation on motion pictures of the human face. Journal of Experimental Child Psychology. 1968;6:22–32. doi: 10.1016/0022-0965(68)90068-4.
  71. Xiao NG, Quinn PC, Ge L, Lee K. Rigid facial motion influences featural, but not holistic, face processing. Vision Research. 2012;57:26–34. doi: 10.1016/j.visres.2012.01.015.
  72. Xiao NG, Quinn PC, Ge L, Lee K. Elastic facial movement influences part-based but not holistic processing. Journal of Experimental Psychology: Human Perception and Performance. 2013;39:1457–1467. doi: 10.1037/a0031631.
  73. Xiao NG, Quinn PC, Wheeler A, Pascalis O, Lee K. Natural, but not artificial, facial movements elicit the left visual field bias in infant face scanning. Neuropsychologia. 2014;62:175–183. doi: 10.1016/j.neuropsychologia.2014.07.017.
  74. Younger BA, Cohen LB. Developmental change in infants’ perception of correlations among attributes. Child Development. 1986;57:803–815.
