Abstract
Faces can be represented at a variety of subordinate levels (e.g., race) that can become “privileged” for visual recognition, which is reflected in patterns of bias (e.g., the own-race bias). The mechanisms encoding privileged status are likely varied, making it difficult to predict how neural systems represent subordinate-level biases in face processing. Here, we investigated the neural basis of subordinate-level representations of human faces in the ventral visual pathway by leveraging recent behavioral findings indicating the privileged nature of peer faces in identity recognition for adolescents and emerging adults (i.e., ages 18–25 years). We tested 166 emerging adults in a face recognition paradigm and a subset of 31 of these participants in two fMRI task paradigms. We showed that emerging adults exhibit a peer bias in face recognition behavior, indicating a privileged status for a subordinate-level category of faces that is not predicted by experience alone. This privileged status of peer faces is supported by multiple neural mechanisms within the ventral visual pathway, including enhanced magnitude and size of activation in FFA1, a critical part of the face-processing network, which fundamentally supports representations of subordinate-level categories of faces (here, defined by developmental stage). These findings demonstrate organizational principles that the human ventral visual pathway uses to privilege relevant social information in face representations, which is essential for navigating human social interactions. It will be important to determine whether similar mechanisms support representations of other subordinate-level categories like race and gender.
Keywords: face recognition, subcategory, peer bias, FFA, fusiform gyrus
Introduction
Researchers have proposed that visual object recognition occurs at multiple levels of abstraction that are organized hierarchically (Rosch, 1978). The basic level is “privileged” in recognition for several reasons: the category members share a generalized shape (e.g., all faces have a canonical shape) that is easily identifiable by perceivers; labels for basic-level categories are among the first words learned by infants (e.g., dog, cup); and adults categorize and verify labels for objects fastest at the basic level (Rosch, Mervis, Gray, Johnson, & Boyes-Braem, 1976). Interestingly, most of the work investigating the topographic organization of visual object recognition in the brain also indicates that the basic level is privileged at the neural level. Using a variety of behavioral paradigms during neuroimaging, researchers have consistently observed that faces, common objects, and places elicit different patches of activation throughout the ventral visual cortex, particularly in adult brains (Grill-Spector, Golarai, & Gabrieli, 2008; Grill-Spector, Knouf, & Kanwisher, 2004).
However, visual objects can be recognized at increasingly subordinate category levels (e.g., adult face, child face), which is especially important for objects of expertise, like faces (Tanaka, 2001; Tanaka & Taylor, 1991). With increasing visuoperceptual expertise, the subordinate-level category can become the most psychologically fundamental or “privileged” level (Tanaka & Taylor, 1991). Behavioral training studies with novel objects reveal that early in the process of learning to individuate novel objects, the basic level is privileged; however, with increased training, expertise in recognizing individual novel objects improves and recognition at the subordinate level becomes as fast and accurate as at the basic level (Gauthier, Williams, Tarr, & Tanaka, 1998). This shift reveals how the visual information that the perceiver attends to for the purpose of recognition changes as a function of experience individuating objects within a category. Despite the wealth of behavioral studies investigating this “subordinate-level shift” in visuoperceptual expertise (Meissner & Brigham, 2001; Pascalis et al., 2005; Pascalis & Bachevalier, 1998; Tanaka, Kiefer, & Bukach, 2004), there is little work assessing how subordinate-level representations are manifest in the brain. The central goal of this study was to evaluate how subordinate-level representations of human faces are organized in the ventral visual pathway.
Psychological Relevance of Subordinate Level Categories of Human Faces
In adults, the psychological relevance of subordinate-level categories of human faces, including race and gender, is reflected in patterns of bias in recognition abilities. For example, adults often exhibit superior abilities to recognize faces of individuals from within their own compared to within another race (i.e., the “own race effect”; Hugenberg, Young, Bernstein, & Sacco, 2010; Levin, 1996; Meissner & Brigham, 2001; Tanaka et al., 2004), and to a lesser extent, within their own compared to another gender (i.e., the “own-gender bias”; Lewin & Herlitz, 2002; Wright, Boyd, & Tredoux, 2003). There is also evidence that the general age of a face may organize subordinate-level representations of faces, although the patterns of bias in recognition abilities are quite complex. There are examples of an “own-age bias” in recognition abilities of adults (e.g., Wright & Stroud, 2002), but also superior processing of faces from other age groups, including maternity-ward nurses recognizing newborn faces (e.g., Cassia, Picozzi, Kuefner, & Casati, 2008) and young children recognizing adult faces (i.e., “caregiver bias”; Picci & Scherf, 2016). These biases reflect that faces can be represented at a variety of different subordinate levels, which can become “privileged” for visual recognition in perceivers.
The mechanisms influencing the privileged status of subordinate-level representations of faces are likely varied, reflecting visuoperceptual experience (Brigham & Malpass, 1985; Diamond & Carey, 1986; Tanaka, 2001), social motivation (Hugenberg et al., 2010), and/or the primacy of social developmental tasks (Picci & Scherf, 2016; Scherf & Scott, 2012); and may change as a function of development (Scherf & Scott, 2012). As a result, it has been difficult to make strong predictions about how neural systems represent these subordinate-level biases in face processing.
Neural Representation of Subordinate Level Categories of Human Faces
There is a large literature investigating the neural representation of faces at both the basic (face vs. object) and even individual (Marilyn vs. Maggie) levels. When defined at the basic level (e.g., faces vs. common objects), faces activate a broad network of regions that extend throughout the ventral temporal pathway including the occipital face area (OFA), posterior fusiform gyrus (pFG), the fusiform face area (FFA), and the anterior temporal pole (ATP), but also include the amygdala, ventromedial prefrontal cortex, posterior parietal cortex, and posterior superior temporal cortex in both hemispheres (Haxby, Hoffman, & Gobbini, 2002; Haxby et al., 2000). At the exemplar, or individual, level, the topography of activation does not vary as much as the patterns of activation within specific regions of the face-processing network. For example, using a multivariate analytic approach, researchers determined that regions in the bilateral fusiform gyri and anterior middle temporal gyri represent face identity (Nestor, Plaut, & Behrmann, 2011). Critically, the regions in the fusiform gyrus that were identified in this multivariate approach were not face-selective regions, meaning that they were not identified in a basic-level contrast (i.e., faces vs. objects). This finding may indicate that representations for faces at the basic and exemplar levels are differentiated, in part, by the locus of activation within the fusiform gyrus.
Compared to these basic- and individual-level studies of neural representation, there is a relative dearth of studies evaluating how the brain represents subordinate-level categories of faces. Most studies investigating the neural basis of subordinate-level face representations have focused on understanding how the brain represents own- versus other-race faces or the gender of faces. Many of these studies have reported differences in the magnitude and/or pattern of activation between the different subcategories of faces within regions of the face network (for review, see Kubota, Banaji, & Phelps, 2012). One study reported that the right and left FFA encode different-race faces via differential patterns of activation (Contreras, Banaji, & Mitchell, 2013). The same authors reported that the bilateral FFA also encodes gender via differential patterns of activation (Contreras et al., 2013). Other studies of face gender have reported a similar finding of differential patterns of activation throughout multiple regions in the face-processing network (e.g., Kaul, Rees, & Ishai, 2011).
There are far fewer studies investigating age as a subordinate level for organizing face representations in the brain. In one study, young adults (~23 years old) categorized unfamiliar peer-aged (~22 years old) and older-aged (~75 years old) faces by age and gender while being scanned (Wiese et al., 2012). Although participants were faster to categorize older faces than peer faces, there were no differences in the magnitude of activation (i.e., percentage signal change) related to the age of the face stimuli in any of the regions within the face-processing network that were investigated (bilateral FFA and OFA). In another study, children (ages 7–10 years) and adults (ages 18–40 years) performed a simple recognition memory task separately for adult and child faces (Golarai, Liberman, & Grill-Spector, 2017). There were no differences in recognition accuracy as a function of either face age or participant age group. However, the adults exhibited reliably different patterns of activation throughout ventrotemporal cortex during recognition of the adult and child faces, whereas the children did not. Altogether, the work investigating the neural basis of race, gender, and age as relevant dimensions of subordinate-level face representations provides inconsistent information regarding the mechanism for encoding faces at this level.
Current Study
To investigate the neural basis of subordinate-level representations of human faces in the ventral visual pathway, we leveraged recent behavioral findings indicating the privileged nature of peer faces in identity recognition for adolescents and emerging adults (Picci & Scherf, 2016). In this work, peer faces are defined not just by age, but also by social developmental tasks (e.g., finding a mate, creating a family, raising a family) that are related to, but not yoked to, age. Adolescence, roughly the second decade of life, and emerging adulthood (~18–25 years; Arnett, 2000) represent developmental periods of increasing autonomy from the familial unit, in which affiliative relationships become reorganized to focus on peers (Havighurst, 1972). Previously, we proposed that these social developmental tasks (e.g., autonomy from the familial unit, developing peer relationships) fundamentally shape the computational goals of the perceptual system, which are ultimately reflected in face-processing biases (Scherf & Scott, 2012; Scherf et al., 2012). We reported that a peer bias (i.e., superior recognition for peer compared to other faces) emerges in adolescence and becomes prominent in the face recognition behavior of emerging adults (Picci & Scherf, 2016). Specifically, emerging adults exhibit superior recognition of emerging adult faces compared to the other kinds of faces, whereas adolescents exhibit superior recognition of adolescent faces compared to the other groups of faces. Given the strength of these behavioral findings, we used a similar paradigm to investigate the neural basis of the peer bias in the face recognition behavior of emerging adults (ages 18–22 years), who are expected to exhibit a prominent behavioral peer bias in their face recognition abilities1.
Specifically, we evaluated the presence of a potential peer bias in face identity recognition behavior among 166 emerging adults in the context of recognizing child, early-puberty adolescent, late-puberty adolescent, emerging adult (i.e., peer), and parent-aged faces (i.e., faces of individuals approximately the age of the participants’ parents). To assess the neural basis of subordinate-level representations of all these face categories, and to determine whether any of them is privileged in the brains of emerging adults, we scanned a subset of these participants using fMRI in two paradigms. In the first paradigm, we employed a visual stimulation task to map the topography of neural responses to each of these subordinate-level face categories in the ventral visual pathway (i.e., child, early adolescent, late adolescent, emerging adult, and parent-aged faces). We hypothesized that the locus of activation, the size of activated regions, and/or the magnitude of activation within regions may differentiate the subcategories of faces, and particularly peer faces. In the second task, participants engaged in a 1-back recognition memory task for faces from each subordinate-level category. This task was used to specifically measure blood-oxygen-level-dependent (BOLD) activation during face identity recognition. Given the previously observed peer bias in face recognition behavior, we predicted that the magnitude of responses in regions of the face-processing network would differentiate peer faces from the other face ages.
Methods
Participants
Typically developing emerging adults (N = 166; MAge = 19.7 years, SD = 1.9; 57% female; see Table 1) were tested in the behavioral portion of the experiment. Participants were healthy with no history of neurological or psychiatric disorders in themselves or their first-degree relatives. They were screened for behavioral symptoms indicative of undiagnosed psychopathology. A total of 35 participants were excluded because of experimenter error or because their performance indicated below-chance accuracy or confusion executing the task (see Data Analyses). As a result, the final sample included 131 participants (MAge = 19.5, SD = 1.7; 60% female; see Table 1). A subset of 31 emerging adults (MAge = 19.3 years, SD = 1.3; 52% female; see Table 1) was scanned in the fMRI study. These individuals were selected into the fMRI study because of their interest in participating and their ability to pass the MRI safety screening procedures, without regard for their behavioral performance. All had no history of head injuries or concussions, had normal or corrected-to-normal vision, and were right-handed. The sample size for the behavioral study was based on our recent behavioral study, which indicated that at least 28 individuals were required to provide 80% power to detect a moderate effect size (i.e., 0.5) at a significance level of 0.05 (Picci & Scherf, 2016). The sample size for the neuroimaging study was based on previous studies in which experimental effects were observed in the face-processing network using a similar blocked design (Scherf, Elbich, Minshew, & Behrmann, 2015; Scherf et al., 2017).
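The reported sample-size estimate can be reproduced approximately with a noncentral-t power calculation. The sketch below is our reconstruction, not the authors' original computation; in particular, the one-sided paired t-test is an assumption on our part.

```python
from scipy import stats

def paired_t_sample_size(effect_size, alpha=0.05, power=0.80, two_sided=False):
    """Smallest n for which a paired t-test on a standardized effect
    (Cohen's d) reaches the target power, via the noncentral t."""
    for n in range(2, 1000):
        df = n - 1
        nc = effect_size * n ** 0.5                          # noncentrality
        crit = stats.t.ppf(1 - alpha / (2 if two_sided else 1), df)
        if 1 - stats.nct.cdf(crit, df, nc) >= power:
            return n
    raise ValueError("power not reached for n < 1000")

# With d = 0.5, a one-sided test lands in the high twenties,
# consistent with the reported minimum of 28 participants.
n_one_sided = paired_t_sample_size(0.5)
```

A two-sided test under the same settings requires roughly 34 participants, so the behavioral sample of 131 comfortably exceeds either criterion.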
Table 1.
Participant Demographics
| Sample | N | Age (SD) | % Female |
|---|---|---|---|
| Behavioral testing | 166 | 19.7 (1.9) | 57% |
| fMRI testing | 31 | 19.3 (1.3) | 52% |
Written informed consent was obtained using procedures approved by the Institutional Review Board of The Pennsylvania State University. Participants were recruited through the Psychology Department undergraduate subject pool and via fliers on campus.
Measures
Peer Bias Behavioral Task
To investigate the peer bias in face recognition memory in emerging adults, we adapted the old/new face-memory task that we developed previously (Picci & Scherf, 2016). We included all four of the original face-category stimuli (i.e., 6- to 8-year-old children, 11- to 14-year-old early-puberty adolescents, 11- to 14-year-old late-puberty adolescents, and sexually mature 18- to 20-year-old emerging adults) from our previous paradigm (Picci & Scherf, 2016). In addition, we added a fifth category of faces representing the approximate current age of the participants’ parents, that is, individuals between 40 and 50 years of age. We call this developmental group “parent-aged” faces because of their developmental relation to the emerging adult participants, not because we confirmed that these individuals had children of their own.
The stimuli consisted of 150 gray-scale photographs of faces with neutral and happy expressions (see Fig. 1a). There were 30 images from each of five face subcategories corresponding to each developmental group: 6- to 8-year-old children, 11- to 14-year-old early-puberty adolescents, 11- to 14-year-old late-puberty adolescents, sexually mature 18- to 20-year-old emerging adults, and sexually mature 40- to 50-year-old adults. We collected perceived pubertal status ratings of the faces from an independent group of emerging adult participants and verified that the ratings were significantly different across all categories (e.g., child < all others; early-puberty adolescent < late-puberty adolescent < emerging adult). The racial/ethnic composition of the faces reflected the racial/ethnic distribution of the town where we recruited participants. For each face category, there were equal numbers of male and female target and distractor identities. There were two separate images of each target identity, one presented at encoding and the other at test (see Fig. 1a).
Figure 1. Face Recognition Behavioral Task and Performance.
a). Example of the face identity recognition behavioral task from the emerging adult faces block. Each task block was divided into three sections: encoding, delay, and recognition. During the encoding phase, participants were presented with 10 target faces for 2,000 ms each and asked to remember them. In the delay period, all participants watched a movie trailer (~90 sec). During the recognition phase, participants were presented with 10 target and 10 distractor faces, all of whom exhibited a smiling expression, and were asked to determine whether each face was “old” or “new.” Behavioral performance as a function of face subcategory for b). all participants (N = 166) and c). individuals who also participated in the scanning experiment (N = 31). The data figures represent the distribution of d′ scores for each face condition across participants. The plots show the means (bars) ± 1 standard error (SE). Peer – emerging adult faces condition. “All Other 4 Faces” represents the average performance across the child, early-pubertal adolescent, late-pubertal adolescent, and parent-aged face conditions. “All Other 3 Faces” represents the average performance across the child, early-pubertal adolescent, and late-pubertal adolescent face conditions; we include these results to replicate our previous behavioral findings (Picci & Scherf, 2016).
Photographs were obtained from multiple face databases: NimStim (Tottenham et al., 2009), Karolinska (Lundqvist, Flykt, & Öhman, 1998), National Institute of Mental Health Child Emotional Faces Picture Set (NIMH-ChEFS; Egger et al., 2011), JimStim (Tanaka & Pierce, 2009), and the Radboud Face Database (RaFD; Langner et al., 2010). In addition, approximately 50% of the images of late-puberty adolescents and 50% of images of parents were taken in the Laboratory of Developmental Neuroscience at the Pennsylvania State University. Extreme blemishes or scars were masked to eliminate artificial cues to recognition. Hair and clothing were not cropped out of the images. All images were presented on a black background and were standardized for luminance and image size.
Face-recognition abilities were measured using a computerized game. After studying 10 target faces, participants identified whether each face in a set of 20 faces (10 targets, 10 distractors) was in the study group (“old”) or not (“new”). The task was presented in a blocked design, with each block containing face stimuli from one of the five face categories. The order of the blocks was counterbalanced across participants. Participants first completed a practice session, which consisted of an abbreviated version of each phase of the task. At the end of the practice, participants were instructed to remember the person and not the picture, to encourage them to create an invariant representation of the face identity. This task has good external validity with other tests of unfamiliar face identity recognition, including both the Male and Female Cambridge Face Memory Long Form Tests (see Arrington, Elbich, Dai, Duchaine, & Scherf, 2022).
Each task block was divided into three sections, encoding, delay, and recognition. During the encoding phase, participants were presented with the target faces and told that these were the faces of people who were going into a movie; each face had a neutral expression. Participants had 2,000 ms to encode each face. In the delay period, all participants watched a movie trailer (~90 seconds). During the test phase, participants were presented with the 10 target faces, except now each target face exhibited a smiling expression. These target faces were shown together with 10 distractor faces, which were also smiling. By presenting perceptually transformed images of the target faces during the test phase, we were able to assess participants’ invariant representation of face identity rather than image-specific memory. Each face was presented for 3,000 ms. Participants responded “yes” (“I recognize this face”) or “no” (“I do not recognize this face”) by pressing a key. Participants were instructed to perform as quickly and accurately as possible.
MRI Acquisition
Prior to scanning, all participants were placed in a mock MRI scanner for approximately 20 minutes and practiced lying still. This procedure is highly effective at acclimating participants to the scanner environment and minimizing motion artifact and anxiety (Scherf, Thomas, Doyle, & Behrmann, 2014). Participants were scanned using a Siemens 3T Trio MRI scanner with a 12-channel phase array head coil at the Social, Life, and Engineering Imaging Center (SLEIC) at The Pennsylvania State University. During the scanning session, the stimuli were displayed on a rear-projection screen located inside the MR scanner.
Functional EPI images were acquired in 34 3-mm-thick slices that were tilted approximately 30° to be perpendicular to the long axis of the hippocampus, which is effective for maximizing signal in the medial temporal lobes (Whalen et al., 2008). This scanning protocol allowed for complete coverage of the medial and lateral temporal lobes as well as the frontal and occipital lobes. For individuals with a larger head size, some of the superior parietal lobe was not scanned. The scan parameters were as follows: 3-mm isotropic voxels; TR = 2,000 ms; TE = 25 ms; FOV = 210 × 210 mm; matrix = 70 × 70; flip angle = 80°. High-resolution anatomical images were also collected using a 3D MPRAGE sequence with 176 1-mm³, T1-weighted, straight sagittal slices (TR = 1,700 ms; TE = 1.78 ms; flip angle = 9°; FOV = 256 mm).
Visual Stimulation Task.
A visual stimulation task was created to activate face-selective regions in individual participants. Tasks with dynamic stimulation are better at eliciting face-related activation throughout multiple nodes of the distributed face-processing network (Fox, Iaria, & Barton, 2009; Pitcher, Walsh, & Duchaine, 2011). Given that our goal was to investigate the possibility that subordinate-level category information is represented in multiple nodes of the face-processing network, we employed a dynamic task that would provide a better chance of identifying activation throughout this distributed network.
This task contained two runs and included blocks of silent, fluid concatenations of short movie clips downloaded from the internet. The short (3–5 sec) video clips in the stimulus blocks were taken from YouTube and edited together using iMovie. Movies of objects included moving mechanical toys and devices (e.g., dominos falling) and were used in our previous work (Elbich, Molenaar, & Scherf, 2019; Elbich & Scherf, 2017). The movie clips of faces were intensely affective (e.g., a person laughing or crying) to elicit activation throughout the network of core and extended face processing regions (Gobbini & Haxby, 2007). Videos from all 5 face categories were represented in this task, including children, early-puberty (EP) adolescents, late-puberty (LP) adolescents, emerging adults (EA), and parent-aged (PA) individuals. The face movies included an even number of unfamiliar male and female actors expressing positive (e.g., happy, joyful, or enthusiastic) or negative (e.g., angry, scared, or crying) emotion (see Fig. 2a). The movie clips were rated for emotional intensity in a separate group of emerging adult participants. The movie clips were balanced across subcategory for emotional intensity.
Figure 2. Neuroimaging Tasks.
a). Examples of Face Categories in the Visual Stimulation Task. Movie clips for each of five face conditions were included in the task including peer (i.e., emerging adult), child, early pubertal (EP) adolescent, late pubertal (LP) adolescent, and parent-aged (PA) adult. The emotional intensity of the movie clips was matched across conditions. b). Example block of fMRI Face Recognition Task. The 1-back recognition task was executed in separate blocks for each of the five face subcategories as defined in the Visual Stimulation Task (block orders were randomized). This is an example of the stimuli from the child face 1-back block.
In total, there were 18 16-second stimulus blocks in each run: three for each of the six conditions (i.e., six blocks per condition across the two runs). Each run lasted 7 minutes and 12 seconds and began and ended with a 12-second fixation block. The order of the stimulus blocks was randomized for each participant. Fixation blocks (6 seconds) were interleaved between task blocks. After the first fixation block in each run, there was a 12-second block of patterns. The order of the two runs was counterbalanced across participants.
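Under one plausible reconstruction of the run structure described above, the stated run length of 7 minutes 12 seconds can be verified arithmetically. The placement of one 6-second fixation after the pattern block (in addition to those between task blocks) is our assumption; it is what makes the durations sum correctly.

```python
# Reconstruct one run of the visual stimulation task (all units: seconds).
start_end_fixation = 2 * 12    # 12-s fixation at the beginning and the end
pattern_block = 12             # 12-s pattern block after the first fixation
stimulus_blocks = 18 * 16      # 3 blocks x 6 conditions, 16 s each
short_fixations = 18 * 6       # assumed: 6-s fixation after the pattern block
                               # and between each pair of the 18 task blocks
total = start_end_fixation + pattern_block + stimulus_blocks + short_fixations
# total == 432 s, i.e., 7 min 12 s as reported
```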
Participants were instructed to watch the movies and remain still. Therefore, there were no constraints on the kind of processing participants engaged in response to the stimuli; the task did not focus on identity recognition or any other specific process (e.g., emotion categorization, gender or age evaluation). The task was designed so that large regions of interest (ROIs) could be identified and characterized within individual participants.
fMRI Face Recognition Task.
This task was specifically designed to engage face identity recognition processes because the peer bias is manifest in recognition behavior. When creating this task, we balanced the need to evince a peer bias in activation against the need to maximize the signal-to-noise ratio, in a novel paradigm, in regions that are susceptible to sinus-related artifact (e.g., amygdala, anterior temporal lobe). As a result, we settled on a 1-back memory task for face identity in a blocked fMRI design using static images. Although the n-back was originally designed to engage working memory processes, the 1-back version has been used successfully to engage face identity recognition processes, in which participants must invoke a mental representation of face identity in the absence of a percept (e.g., Gauthier et al., 2000; Herrington, Riley, Grupe, & Schultz, 2015; Kanwisher, Tong, & Nakayama, 1998).
Participants saw images of common objects and faces from each of the 5 developmental categories (i.e., child, EP adolescent, LP adolescent, peer – emerging adult, and PA faces). The face images used in this experiment were selected from several databases, including Child Affective Facial Expression (LoBue & Thrasher, 2015), JimStim (Tanaka & Pierce, 2009), NIMH-ChEFS (Egger et al., 2011), the Center for Vital Longevity Face Database (Minear & Park, 2004), NimStim (Tottenham et al., 2009), and Karolinska (Lundqvist, Flykt, & Öhman, 1998). Common objects (inanimate objects) were downloaded from the Internet. The size (300 pixels/inch), color (gray scale), and luminance of the images were controlled using Photoshop.
The task was a single run block design with six 12-second blocks of each visual category, the order of which was randomized for each participant, interleaved with 6-second fixation blocks. Within each task block, 12 images were each presented for 800 ms followed by a 200 ms fixation. Participants completed a 1-back task while viewing the pictures and responded by button press when they saw a picture repeat (see Fig. 2b). Two images repeated in each stimulus block, the order of which was counterbalanced across blocks. Accuracy was collected as a measure of task performance. The duration of the task was 9 min 36 seconds.
Data Analyses
Peer Bias Behavioral Data
We converted the accuracy data to d′ scores, computing hit and false-alarm rates with the loglinear approach (Stanislaw & Todorov, 1999), in which 0.5 is added to the hit and false-alarm counts and 1 is added to the numbers of target and distractor trials so that even perfect performance yields finite rates.
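The loglinear computation described by Stanislaw and Todorov (1999) can be sketched as follows; the function name and default trial counts (10 targets, 10 distractors per block, per the task description) are ours, not the authors' original analysis code.

```python
from scipy.stats import norm

def dprime_loglinear(hits, false_alarms, n_targets=10, n_distractors=10):
    """d' with the loglinear correction (Stanislaw & Todorov, 1999):
    add 0.5 to each count and 1 to each trial total so that hit or
    false-alarm rates of 0 or 1 never produce infinite z-scores."""
    hit_rate = (hits + 0.5) / (n_targets + 1)
    fa_rate = (false_alarms + 0.5) / (n_distractors + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Even a participant with perfect performance gets a finite score.
d_perfect = dprime_loglinear(hits=10, false_alarms=0)
```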
Prior to analyses, a processing template was designed and coded for data cleaning at the individual-participant level. The raw data from each participant were entered into this template to identify trials with outlier response times (RTs). We determined the minimum RT threshold by averaging RTs on the five fastest accurate trials across all participants (50 ms). The maximum RT threshold was 2,950 ms (i.e., 50 ms under the maximum trial time). Therefore, any trials with RTs less than 50 ms or greater than 2,950 ms were removed from the analysis for each participant, and the corresponding hit or false-alarm rates were adjusted to accommodate the removal of each trial. Data from the entire task were removed for participants whose average accuracy across all conditions was below chance (i.e., < 50%; N = 12) or whose average d′ across conditions indicated confusion executing the task (i.e., d′ < −1; N = 19). Finally, to address outlier data points, we Winsorized the d′ values separately for each face condition (e.g., Picci & Scherf, 2016).
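Winsorizing replaces the most extreme values in each condition with the nearest retained value rather than discarding them. A minimal sketch of a per-condition version is below; the 5% limits are illustrative, as the text does not state the cutoffs used.

```python
import numpy as np
from scipy.stats.mstats import winsorize

def winsorize_by_condition(dprimes, limits=0.05):
    """Winsorize d' scores separately for each face condition.
    `dprimes` maps condition name -> sequence of participant scores;
    the most extreme values at each tail are replaced with the nearest
    retained value (the 5% limits here are illustrative)."""
    return {
        cond: np.asarray(winsorize(np.asarray(scores, dtype=float),
                                   limits=(limits, limits)))
        for cond, scores in dprimes.items()
    }
```

Because the limits are proportions, the number of replaced values scales with the sample size in each condition rather than being a fixed count.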
Planned comparisons were evaluated using paired-samples t-tests. Specifically, we predicted that recognition performance for peer faces (i.e., emerging adult faces) would exceed performance for all other faces (average of other face categories - see Picci & Scherf, 2016). However, because we added the PA faces in the paradigm, we computed two estimates of “other faces” for each participant. First, we computed the average performance in response to the child, and both groups of adolescent faces to directly compare these findings with our prior results (Picci & Scherf, 2016). Second, we computed the average performance in response to all other face categories, including PA faces.
fMRI Face Recognition Task Behavioral Data.
Because of the relatively limited number of potential hits (i.e., 2) compared to false alarms (i.e., 8) in each block, we did not convert the accuracy data from the scanner face recognition task into d′ scores. Instead, to compute accuracy for each face subcategory, we adopted a strategy informed by signal detection theory. Specifically, in addition to assessing accuracy on target-present trials, we also assessed accuracy on target-absent trials. Thus, if a participant pressed the response button on a non-target trial, it counted as a false alarm and, therefore, an incorrect trial in the analysis.
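Under this scheme, each trial in a block is scored as correct when the participant responds on a repeat trial or withholds a response on a non-repeat trial. A hypothetical scoring sketch (the trial structure follows the task description; the function and variable names are ours):

```python
def oneback_accuracy(is_repeat, responded):
    """Score one 1-back block: a response on a repeat trial is a hit,
    a response on a non-repeat trial is a false alarm (incorrect),
    and withholding on a non-repeat trial is a correct rejection."""
    assert len(is_repeat) == len(responded)
    correct = sum(1 for rep, resp in zip(is_repeat, responded) if rep == resp)
    return correct / len(is_repeat)
```

For example, in a 12-trial block with 2 repeats, hitting both repeats but also responding once to a non-repeat yields 11/12 correct rather than 2/2.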
Neuroimaging Data
Defining Individual Face-Related ROIs.
Imaging data were analyzed using Brain Voyager QX version 2.3 (Brain Innovation, Maastricht, The Netherlands). Preprocessing of functional data included 3D-motion correction and filtering out low frequencies (3 cycles). Only those participants who exhibited maximum motion of less than 2/3 of a voxel in all six directions (i.e., no motion greater than 2.0 mm in any of 6 motion vectors on any image in all runs of both tasks) were included in the fMRI analyses. No participant was excluded due to excessive motion.
For each participant and each task, the time-series images for each brain volume were analyzed separately for basic and subordinate-level category differences (e.g., objects, child faces, EP adolescent faces, LP adolescent faces, peer faces, and parent-aged faces) in a fixed-factor general linear model (GLM). The GLM was computed on the z-normalized raw signal in each voxel. Each stimulus category was defined as a separate predictor and modeled with a box-car function, which was convolved with a canonical hemodynamic response to accommodate the delay in BOLD response. The time-series images were then spatially normalized into Talairach space. The functional images were not spatially smoothed (Grill-Spector & Weiner, 2014).
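The predictor construction described above can be sketched as follows, assuming an SPM-style double-gamma HRF; BrainVoyager's canonical HRF may use different parameters.

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(tr, duration=32.0):
    """Double-gamma HRF sampled at the TR (SPM-style shape parameters;
    an assumption, since the exact HRF model is not specified)."""
    t = np.arange(0.0, duration, tr)
    h = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0  # peak minus undershoot
    return h / h.sum()

def boxcar_predictor(n_vols, block_onsets, block_len, tr):
    """Box-car predictor for one stimulus category, convolved with the HRF."""
    box = np.zeros(n_vols)
    for onset in block_onsets:
        box[onset:onset + block_len] = 1.0
    return np.convolve(box, canonical_hrf(tr))[:n_vols]
```

Each category's predictor would then enter the GLM as one column of the design matrix, with beta weights estimated on the z-normalized signal.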
Face-related activation was defined separately for each participant for each subordinate-level face category using a whole-brain contrast (e.g., child faces – objects; peer faces – objects) that was corrected for false positive activation at the whole-brain level using the False Discovery Rate (FDR) with q < 0.10 (Genovese, Lazar, & Nichols, 2002). This generated activation maps for each participant, including a child face map, an EP adolescent face map, an LP adolescent face map, a peer face map, and a parent-aged face map. From each activation map, we defined face-related functional regions of interest (ROIs) in each individual participant that were inspired by the Gobbini and Haxby (2007) model of face processing. These ROIs included core (e.g., OFA, FFA1, pSTS) and extended (e.g., amygdala, anterior temporal lobe) regions in each hemisphere. The cluster of contiguous voxels nearest the classically defined fusiform face area (FFA) in the middle portion of the gyrus was identified as the pFus-faces/FFA1 (Weiner & Grill-Spector, 2010). We defined the OFA as the set of contiguous voxels on the lateral surface of the occipital lobe closest to our previously defined adult group-level coordinates (x, y, z coordinates: 50, −66, −4; Scherf, Behrmann, Humphreys, & Luna, 2007). The posterior superior temporal sulcus (pSTS) was defined as the set of contiguous voxels within the horizontal posterior segment of the superior temporal sulcus that did not ascend into the posterior ascending segment of the STS. The most anterior boundary of the pSTS was where the ascending segment of the IPS intersected the lateral fissure. The anterior temporal lobe ROI was defined as the cluster of voxels nearest the coordinates reported previously in studies of individual face recognition (Mur, Ruff, Bodurka, Bandettini, & Kriegeskorte, 2010), which is at the most anterior tip of the collateral sulcus and fusiform gyrus, between the occipitotemporal sulcus and the parahippocampal gyrus.
The amygdala was defined as the entire cluster of face-selective voxels within the grey matter structure. Any active voxels that extended beyond the structure out to the surrounding white matter, horn of the lateral ventricle, or hippocampus were excluded.
Defining Individual EVC ROIs.
We defined early visual cortex (EVC) in each individual, in each hemisphere separately, to serve as a control region. Functional activation in EVC was defined using a whole-brain contrast (objects – fixation) that was corrected for false positive activation at the whole-brain level using the False Discovery Rate (FDR) with q < 0.05 (Genovese et al., 2002). Following Nishimura and colleagues (Nishimura, Scherf, Zachariou, Tarr, & Behrmann, 2015), EVC was identified separately in each hemisphere in the most ventral axial slice of the occipital pole in which the calcarine sulcus could be visualized. We selected activation posterior to the calcarine sulcus on the axial slice and included contiguous voxels of activation that extended a maximum of 10 mm above or below the axial slice in which the calcarine sulcus was identified. We used the transverse occipital sulcus as the lateral boundary for activation.
ROI-Based Analyses
Visual Stimulation Task.
Table 2 reports the total number of participants for whom each ROI was definable; the average centroid coordinates of each region are reported in Table 3. The ROIs were quantified in terms of the total number of significantly active voxels. A score of 0 was entered if a participant did not exhibit any significantly active voxels for a given ROI. Only ROIs in which a minimum of 50% of the participants had definable functional ROIs for all 5 categories were selected for additional analyses. This included bilateral FFA1, right OFA, and right posterior STS. There were insufficient data to include the bilateral amygdala, the anterior temporal poles, the left OFA, or the left pSTS in additional analyses.
Table 2.
Number of Definable Face-Selective ROIs for Each Subordinate Level Face Category from Visual Stimulation Task
| ROI | Child | EP | LP | EA | PA |
|---|---|---|---|---|---|
| rFFA1 | 27 | 26 | 28 | 27 | 26 |
| lFFA1 | 27 | 26 | 26 | 25 | 23 |
| rOFA | 16 | 17 | 18 | 22 | 16 |
| rpSTS | 21 | 24 | 20 | 23 | 22 |
| rAmygdala | 5 | 12 | 10 | 7 | 8 |
| lAmygdala | 7 | 8 | 11 | 8 | 10 |
| rATL | 12 | 18 | 13 | 12 | 10 |
| lATL | 13 | 14 | 12 | 12 | 10 |
| lOFA | 12 | 17 | 16 | 21 | 16 |
| lpSTS | 12 | 21 | 19 | 19 | 22 |
Note: Each functional ROI in this table was defined as the relevant face category versus objects (e.g., child faces vs. objects). EP represents the early pubertal adolescent face condition, LP the late pubertal adolescent face condition, EA the emerging adult face condition, and PA the parent-aged face condition; r = right, l = left. There were 31 participants in total in this study. FFA – fusiform face area, OFA – occipital face area, pSTS – posterior superior temporal sulcus, ATL – anterior temporal lobe. We focused on the core and extended face neural network and defined these bilateral ROIs. To ensure adequate statistical power, only those ROIs in which at least half of the participants (i.e., at least 16) had significant activation were included in the final statistical analyses (i.e., rFFA1, lFFA1, rOFA, and rpSTS).
The data that were analyzed for the effects of subordinate-level representation are shown in Table 3. These data were not normally distributed; therefore, we applied a square root transformation prior to the analyses (Elbich & Scherf, 2017). Planned contrasts were conducted to compare the size of the ROIs activated by peer faces versus those activated by the other face subcategories (i.e., the average of the Child, EP, LP, and PA face ROIs). We also report the 95% confidence intervals of the difference scores.
Table 3.
Functional ROIs for Each Face Category from the Visual Stimulation Task
| ROI / Condition | Size of ROI (SE) | N | X Mean (SE) | Y Mean (SE) | Z Mean (SE) |
|---|---|---|---|---|---|
| rFFA1 | | | | | |
| Child | 597.42 (211.54) | 27 | 39 (0.58) | −51 (1.35) | −17 (0.77) |
| EP | 516.19 (207.67) | 26 | 39 (0.77) | −50 (1.35) | −18 (0.77) |
| LP | 716.82 (227.38) | 28 | 38 (0.58) | −50 (1.54) | −17 (0.77) |
| EA | 828.35 (148.31) | 27 | 39 (0.77) | −51 (1.54) | −17 (0.77) |
| PA | 488.74 (153.94) | 26 | 38 (0.77) | −52 (1.35) | −18 (0.77) |
| lFFA1 | | | | | |
| Child | 519.42 (191.85) | 27 | −37 (0.60) | −50 (1.60) | −18 (0.80) |
| EP | 557.44 (221.41) | 26 | −39 (0.80) | −51 (1.80) | −18 (0.60) |
| LP | 600.32 (226.02) | 26 | −40 (0.60) | −48 (1.60) | −19 (0.60) |
| EA | 655.16 (163.07) | 25 | −39 (0.60) | −51 (1.80) | −18 (0.60) |
| PA | 409.75 (154.33) | 23 | −40 (0.60) | −50 (1.60) | −17 (0.60) |
| rOFA | | | | | |
| Child | 109.35 (63.39) | 16 | 29 (2.62) | −86 (2.14) | −4 (2.38) |
| EP | 368.90 (210.92) | 17 | 25 (2.86) | −83 (2.38) | −4 (2.86) |
| LP | 600.76 (278.59) | 18 | 27 (3.33) | −84 (2.38) | −7 (1.90) |
| EA | 597.87 (175.51) | 22 | 27 (2.62) | −87 (1.67) | −10 (1.67) |
| PA | 322.23 (159.74) | 16 | 24 (2.62) | −87 (1.67) | −9 (2.14) |
| rpSTS | | | | | |
| Child | 282.35 (93.25) | 21 | 50 (1.28) | −50 (1.92) | 8 (1.28) |
| EP | 583.48 (177.06) | 24 | 51 (1.07) | −48 (1.92) | 9 (1.28) |
| LP | 803.29 (211.04) | 20 | 53 (0.64) | −48 (1.71) | 8 (0.85) |
| EA | 739.77 (208.94) | 23 | 52 (1.28) | −49 (2.13) | 7 (1.07) |
| PA | 449.36 (158.22) | 22 | 51 (1.07) | −49 (1.92) | 8 (1.28) |
Note. Each face-selective ROI was defined using the contrast [face subcategory vs. objects] during the Visual Stimulation task and was corrected for familywise error using the False Discovery Rate procedure. The table reports the average size and coordinates of the individually defined functional ROIs for each face subcategory. The average ROI size is reported as the number of contiguously active voxels. The average coordinates are based on the centroid of the individually defined functional ROIs. EP - early pubertal adolescent face, LP - late pubertal adolescent face, EA - emerging adult face (peer), and PA - parent-aged face condition. FFA – fusiform face area, OFA – occipital face area, pSTS – posterior superior temporal sulcus. Untransformed values of ROI size are reported.
To compute the magnitude of subcategory selectivity within each region, separate ROI-based GLMs were conducted for each participant in each ROI. This generated beta weights for each subcategory for each participant from each map. Participants with no definable voxels in an ROI were excluded from the statistical analyses of neural magnitude, given that no beta weights could be extracted from the ROI-based GLM. The goal was to evaluate whether a functionally defined ROI that was selective for one subcategory of faces (e.g., peer faces) was equally selective for other subcategories of faces. Importantly, although the ROI was defined using the core subcategory (e.g., peer faces – objects), the important and independent information is the contrast in magnitude between the core and the other subcategories of faces. Because the other subcategories of faces were not used to define the ROI, this is an independent analysis. As with the behavioral data, we computed a measure of “other faces” by averaging the beta weights from the four other face subcategories to compare to the core/defining category of the ROI. Because this was a planned contrast, we used paired-samples t-tests to evaluate the difference in magnitude of activation in each ROI and report the 95% confidence intervals of the difference scores. This also reduced the number of statistical tests we administered per ROI, which reduced the familywise error rate. In each ROI, we executed each of these contrasts (e.g., peer – other; child – other) separately. As a result, we applied a Bonferroni correction for the number of contrasts within each ROI (α = 0.05/5 = 0.01) to these analyses.
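The planned contrast described above can be sketched in Python. The beta values in the test case are hypothetical, and the CI is the standard two-sided 95% interval on the paired differences.

```python
import numpy as np
from scipy import stats

def planned_contrast(target_betas, other_betas, n_contrasts=5, alpha=0.05):
    """Paired t-test of target vs. averaged 'other' betas, with a
    Bonferroni-adjusted threshold and a 95% CI of the difference."""
    target = np.asarray(target_betas, dtype=float)
    other = np.asarray(other_betas, dtype=float)
    t, p = stats.ttest_rel(target, other)
    diff = target - other
    se = diff.std(ddof=1) / np.sqrt(len(diff))
    crit = stats.t.ppf(0.975, len(diff) - 1)
    ci = (diff.mean() - crit * se, diff.mean() + crit * se)
    return t, p, ci, p < alpha / n_contrasts  # significant after correction?
```

The same routine would be applied once per contrast (e.g., peer – other; child – other) within each ROI.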
Face Recognition Task.
Within each participant, an ROI-based GLM was computed on the time-series data from the fMRI Face Recognition Task in the individually defined right and left FFA1 ROIs (for each face subcategory) obtained from the Visual Stimulation task. This generated a set of beta weights for each face subcategory for each participant (child, EP adolescent, LP adolescent, peer, and parent-aged faces). As in the analyses of the data from the Visual Stimulation task, we computed a measure of “other faces” by averaging the beta weights from the non-defining face subcategories. We submitted the beta weights from the defining and “other face” categories to separate paired-samples t-tests for each ROI to evaluate the relative selectivity of each ROI to the defining face category. These were planned contrasts that reduced the number of statistical tests we administered per ROI and, thus, the familywise error rate. Relations between neural magnitude and behavioral performance in the fMRI face recognition task for each face subcategory were explored using linear regression analyses, with neural magnitude (i.e., beta weight) as the independent variable and behavioral performance as the dependent variable in separate analyses. All results were Bonferroni corrected for the number of contrasts within each ROI.
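The exploratory brain-behavior analysis amounts to a simple linear regression per subcategory. The values below are hypothetical stand-ins for per-participant beta weights and recognition scores, not the study's data.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical per-participant data (illustrative, not the study's values):
beta_weights = np.array([0.3, 0.5, 0.2, 0.6, 0.4, 0.7])      # neural magnitude
recognition = np.array([0.80, 1.10, 0.70, 1.30, 0.90, 1.40])  # accuracy score

# Behavior regressed on neural magnitude, one model per face subcategory.
fit = linregress(beta_weights, recognition)
slope, r_squared, p_value = fit.slope, fit.rvalue ** 2, fit.pvalue
```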
Results
Behavioral Results
Peer Bias Task.
Fig. 1b illustrates performance in the Peer Bias face recognition task as a function of face subcategory. To evaluate the replicability of our previous findings, peer faces were initially contrasted with “other faces” as defined by the average performance (i.e., d-prime) in response to the child and adolescent faces (see Picci & Scherf, 2016). The planned contrast revealed a peer bias, t(130) = 3.20, p < 0.005 (1-tailed; 95% CI of the difference = [0.08, 0.35]), indicating enhanced recognition of peer (M = 0.85, SE = 0.06) compared to other (M = 0.71, SE = 0.05) faces. Second, when “other faces” included parent-aged faces as well, the analyses also revealed a peer bias, t(130) = 2.06, p < 0.05 (1-tailed; 95% CI of the difference = [0.08, 0.28]), with superior recognition of peer compared to other (M = 0.63, SE = 0.05) faces. There were no other subcategories of faces that evinced this privileged status for adults. In other words, adults performed similarly when comparing each of the other subcategories of faces to the other groups of faces. Together, these results indicate that the subcategory of peer faces is privileged in the face recognition behavior of emerging adults, even when including the age-related category of faces that are likely to be the most over-represented in their visual input (i.e., parent-aged faces).
In addition, we analyzed the data from 19 of the 31 individuals who also participated in the fMRI study. We have only partial behavioral data for these individuals who participated in our neuroimaging task due to a computer error during data collection. Fig. 1c shows the scanning participants’ behavioral performance in the Peer Bias Face Recognition task. For these individuals, the planned contrast also revealed a peer bias in their recognition behavior, t(18) = 1.89, p < 0.05 (1-tailed; 95% CI of the difference = [−0.04, 0.75]), indicating enhanced recognition of peer (M = 1.14, SE = 0.18) compared to other faces (M = 0.78, SE = 0.15); when “other faces” included parent-aged faces as well, the analyses also revealed a peer bias, t(18) = 1.87, p < 0.05 (1-tailed; 95% CI of the difference = [−0.05, 0.85]), with superior recognition of peer compared to other faces (M = 0.74, SE = 0.15).
fMRI Face Recognition Task.
In contrast to the face recognition task outside the scanner, emerging adult participants did not exhibit a peer bias in face recognition behavior during the 1-back fMRI face recognition task. They exhibited equal recognition of peer faces (M = 0.98, SE = 0.01) and other-aged faces (M = 0.97, SE = 0.01), t(29) = 1.88, p = 0.07 (1-tailed; 95% CI of the difference = [−0.001, 0.02]), which is not surprising given that performance for all conditions was at ceiling.
Neuroimaging Results
Visual Stimulation Task.
Tables 2 and 3 report the number of participants for whom each ROI was functionally definable and the average coordinates of the centroid of each ROI across participants. To maximize power in our analyses, we only quantified data from ROIs in which at least 50% of the participants (i.e., 16 out of 31) exhibited definable activation. Therefore, we excluded the bilateral ATL, the bilateral amygdala, the left OFA, and the left pSTS from subsequent analyses.
Number and Location of Definable Face-Related ROIs.
We used the non-parametric Friedman test to evaluate whether there were differences in the numbers of participants who exhibited definable activation for each ROI as a function of each face subcategory (e.g., FFA1 child faces vs adolescent vs peer vs parent-aged faces). There were no significant differences in the numbers of definable ROIs across face subcategories in any of the ROIs (all p > 0.05). In other words, within each face-selective ROI (i.e., rFFA1, lFFA1, rOFA, rpSTS), if a participant evinced face-selective voxels for any one face subcategory (e.g., child faces), they likely had face-selective voxels for each of the other face subcategories as well (i.e., adolescent, peer, parent).
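This comparison can be sketched with scipy's Friedman test; the voxel counts below are randomly generated placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
base = rng.integers(50, 500, size=16)  # 16 hypothetical participants

# Definable-voxel counts per face subcategory (jitter around a common base):
roi_counts = {cond: base + rng.integers(-20, 20, size=16)
              for cond in ["child", "EP", "LP", "peer", "PA"]}

# Non-parametric repeated-measures test across the five subcategories.
stat, p = friedmanchisquare(*roi_counts.values())
```

A non-significant p here would mirror the reported pattern: no differences in the number of definable ROIs across face subcategories.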
Following Scherf et al. (2010), we also tested whether there was a difference in the location of the centroid of the face-subcategory ROIs within each part of the network (i.e., FFA, OFA, pSTS). We found that, within each region, the subordinate-level face-selective ROIs were highly overlapping.
Size of Definable ROIs.
Within each of the individually defined face-selective ROIs (e.g., rFFA1), we investigated whether the separate ROIs for each face subcategory differed in size (i.e., number of significantly active contiguous voxels). Means and standard errors of ROI size for each definable ROI are shown in Table 3. We observed a difference in the size of the functionally defined face regions in both left and right FFA1 (see Fig. 3). In lFFA1, peer faces elicited a larger swath of activation than did the other face categories, t(30) = 2.19, p = 0.06 (1-tailed; Mpeer = 16.36, SE = 2.83; Mother = 12.09, SE = 2.13; 95% CI of the difference = [0.03, 0.74]). Similarly, in rFFA1, peer faces elicited a larger swath of activation than did the other face categories, t(30) = 2.92, p < 0.001 (1-tailed; Mpeer = 21.07, SE = 2.81; Mother = 13.96, SE = 2.32; 95% CI of the difference = [0.14, 0.89]). Results are Bonferroni corrected for the number of ROIs (p = 0.05/4 = 0.0125). Figure 4 illustrates the subcategory ROIs (each in a separate color) in the bilateral FFA1 in a representative brain. Peer faces also elicited a larger swath of activation than did the other face subcategories in both the right OFA and the right pSTS; however, these effects did not survive correction.
Figure 3. The Neural Size of Peer Face FFA1 is Larger than that of Other Subcategory Face Regions.
Within right and left FFA1, separate ROIs were individually defined for each participant for each face subcategory (e.g., peer faces – objects, child faces vs objects). The plot represents the mean number of contiguously activated voxels in the region ± one standard error (SE). Only the Peer Face ROIs were significantly larger in the right (a) and left (b) FFA1. * represents p < 0.05, ** represents p < 0.01, *** represents p < .005 (Bonferroni corrected).
Figure 4. Representative Brain Activation to Face Subcategory within Bilateral FFA1.
A transverse view of brain activation within bilateral FFA from one participant (subject ID: sub_011) is shown. Within the bilateral FFA1, separate ROIs were defined for each face subcategory including peer (i.e., emerging adult), child, early pubertal (EP) adolescent, late pubertal (LP) adolescent, and parent-aged (PA) adult faces. L – left, R – right.
Magnitude of Activation within Definable ROIs.
In these analyses, the comparison of activation in response to the different subordinate-level categories of faces was conducted in each ROI defined by a single face subcategory (e.g., peer faces vs. objects). Importantly, because each ROI is defined relative to objects, the estimation of the neural response to the “other face” categories is of primary interest; this estimate is independent of the voxel selection process. We discuss the significant results here; the full set of results is reported in Table 4.
Table 4.
Summary of Neural Magnitude Results from Visual Stimulation Task (fMRI).
| Face Contrast | ROIs | Target Faces M (SE) | Other Faces M (SE) | df | t value | 95% CIs |
|---|---|---|---|---|---|---|
| Peer vs. others | rFFA1 | 0.34 (0.04) | 0.17 (0.03) | 25 | 3.70 ** | [.03, .25] |
| | lFFA1 | 0.37 (0.04) | 0.21 (0.03) | 23 | 2.90 ** | [.05, .27] |
| | rOFA | 0.82 (0.20) | 0.77 (0.18) | 15 | 1.06 | [−.05, .15] |
| | rpSTS | 0.92 (0.29) | 0.74 (0.11) | 22 | 3.00 * | [.05, .29] |
| Child vs. others | rFFA1 | 0.33 (0.04) | 0.22 (0.03) | 27 | 2.35 | [.02, .19] |
| | lFFA1 | 0.32 (0.04) | 0.18 (0.02) | 26 | 3.89 *** | [.08, .25] |
| | rOFA | 0.91 (0.15) | 0.95 (0.13) | 14 | −.59 | [−.18, .10] |
| | rpSTS | 0.74 (0.12) | 0.83 (0.14) | 20 | −1.66 | [−.21, .02] |
| EP vs. others | rFFA1 | 0.24 (0.03) | 0.19 (0.02) | 25 | −.59 | [−.12, .06] |
| | lFFA1 | 0.31 (0.02) | 0.20 (0.03) | 24 | 2.59 * | [.01, .18] |
| | rOFA | 0.84 (0.12) | 0.70 (0.16) | 15 | 2.03 | [−.01, .29] |
| | rpSTS | 0.83 (0.10) | 0.73 (0.13) | 22 | 1.82 | [−.01, .21] |
| LP vs. others | rFFA1 | 0.33 (0.04) | 0.16 (0.03) | 26 | 4.20 *** | [.07, .25] |
| | lFFA1 | 0.36 (0.08) | 0.17 (0.03) | 25 | 4.62 *** | [.11, .30] |
| | rOFA | 1.06 (0.20) | 0.93 (0.17) | 16 | 1.97 | [−.01, .28] |
| | rpSTS | 0.95 (0.18) | 0.69 (0.12) | 19 | 1.81 | [−.03, .21] |
| PA vs. others | rFFA1 | 0.30 (0.04) | 0.20 (0.03) | 26 | 2.58 * | [.07, .25] |
| | lFFA1 | 0.31 (0.05) | 0.20 (0.03) | 23 | 2.97 * | [.02, .13] |
| | rOFA | 0.86 (0.18) | 0.77 (0.17) | 16 | 1.65 | [−.02, .18] |
| | rpSTS | 0.63 (0.03) | 0.56 (0.02) | 21 | .95 | [−.05, .14] |
Note: EP - early pubertal adolescent face condition, LP - late pubertal adolescent face condition, peer - emerging adult face condition, PA - parent-aged face condition. FFA – fusiform area, OFA – occipital face area, pSTS – posterior superior temporal sulcus. CI represents 95% confidence interval.
* = p < 0.05
** = p < 0.01
*** = p < 0.005 (Bonferroni corrected).
Other faces elicited significantly weaker activation than did peer faces in peer-face defined rFFA1, lFFA1, and rpSTS. Other faces also elicited significantly weaker activation than child faces in child-face defined lFFA1. We also found that other faces elicited a significantly weaker activation than early pubertal adolescent faces in early pubertal adolescent-face defined lFFA1. We also observed that other faces elicited significantly weaker activation than late pubertal adolescent faces in late pubertal adolescent-face defined rFFA1 and lFFA1. Finally, other faces also elicited significantly weaker activation than parent-aged faces in the parent-aged-face defined rFFA1 and lFFA1 (see Fig. 5).
Figure 5. Face Subcategory Selectivity of ROIs within Bilateral FFA1.
Within right and left FFA1, separate ROIs were individually defined for each participant for each face subcategory (e.g., peer faces – objects, child faces – objects). To determine the selectivity of these regions, the response to the “other” non-target faces was evaluated, which is independent from how the regions were functionally defined. Similar response magnitudes would indicate basic-level encoding of faces. In both the right (a) and left (b) FFA1, each of the subcategorically defined face ROIs exhibited a stronger response magnitude to the target than to the non-target faces, indicating a selective response to the target category. Magnitude on the y-axis represents the beta weight of the BOLD response for each face condition. The plot represents the mean magnitude ± one standard error (SE). * represents p < 0.05, ** represents p < 0.01, *** represents p < .005 (Bonferroni corrected).
We did not observe significant differences in the magnitude of activation between other faces and target faces in the functionally defined rpSTS and rOFA. Further, there were no significant differences in the magnitude of activation between subordinate level categories of faces in the control regions (i.e., bilateral EVC). These findings suggest that there is a high degree of functional specificity in FFA1 for representing subordinate level categories of human faces, particularly in terms of developmental stage.
fMRI Face Recognition Task
Magnitude of Activation.
We focused these analyses on the bilateral FFA1 regions, where we consistently observed representations of each face subcategory in the analyses of the Visual Stimulation task. We discuss the significant results here; the full set of results is reported in Fig. 6 and Table 5.
Figure 6. Evaluation of Neural Response in Bilateral FFA1 During Face Recognition fMRI Task.
ROI-based GLMs were run on the time-series data from the fMRI Face Recognition task within the individually identified regions from the Visual Stimulation task, and the beta weights were extracted. This allowed us to determine the neural response of the subcategorically defined regions specifically during recognition. In both the right (a) and left (b) FFA1, the response to peer faces was lower than the response to the other kinds of faces. In all other face-subcategory-defined FFA1 ROIs, the target faces elicited higher activation than did the non-target faces. The plot represents the means and standard errors (SE). Magnitude on the y-axis represents the beta weight for each face condition. * represents p < 0.05, ** represents p < 0.01, *** represents p < .005 (Bonferroni corrected).
Table 5.
Summary of Neural Magnitude Results from fMRI Face Recognition Task.
| Face Contrast | ROIs | Target Faces M (SE) | Other Faces M (SE) | df | t value | 95% CIs |
|---|---|---|---|---|---|---|
| Peer vs. others | rFFA1 | 0.66 (0.09) | 0.76 (0.09) | 25 | −3.42 *** | [−.15, −.03] |
| | lFFA1 | 0.63 (0.09) | 0.74 (0.09) | 23 | −4.00 *** | [−.17, −.04] |
| Child vs. others | rFFA1 | 0.94 (0.11) | 0.86 (0.11) | 24 | 2.30 * | [0, .11] |
| | lFFA1 | 0.88 (0.14) | 0.86 (0.13) | 25 | .79 | [−.04, .08] |
| EP vs. others | rFFA1 | 0.72 (0.13) | 0.74 (0.11) | 26 | −1.71 | [−1.12, 0] |
| | lFFA1 | 0.45 (0.09) | 0.53 (0.09) | 25 | −2.18 | [−.84, −.09] |
| LP vs. others | rFFA1 | 0.86 (0.11) | 0.79 (0.11) | 26 | 1.97 | [−.01, .21] |
| | lFFA1 | 0.67 (0.09) | 0.56 (0.09) | 25 | 2.59 * | [.01, .13] |
| PA vs. others | rFFA1 | 0.88 (0.12) | 0.80 (0.11) | 26 | 2.77 * | [.02, .13] |
| | lFFA1 | 0.74 (0.09) | 0.68 (0.09) | 23 | 2.94 ** | [.02, .10] |
Note. EP - early pubertal adolescent face condition, LP - late pubertal adolescent face condition, peer - emerging adult face condition, PA - parent-aged face condition. FFA – fusiform area, OFA – occipital face area, pSTS – posterior superior temporal sulcus. CI represents 95% confidence interval.
* = p < 0.05
** = p < 0.01
*** = p < 0.005 (Bonferroni corrected).
In the peer-face-defined bilateral FFA1, the response to other faces was stronger than the response to peer faces. In all other subordinate-level face-defined ROIs, the response to other faces was weaker than the response to the target category, including the child-, late pubertal adolescent-, and parent-aged-face-defined ROIs.
Behavioral Correlation.
Although there were no differences in recognition behavior for subcategories of faces in the scanner task, we conducted exploratory analyses to evaluate potential associations between neural activation and face recognition performance for each subordinate-level category of faces. For each subcategory of face, we computed a separate regression of the difference score in neural magnitude (e.g., peer vs. other faces) on the difference score in face recognition behavior (e.g., peer vs. other faces) for the FFA ROIs. There were no significant relations between neural magnitude and recognition behaviors in any ROI.
Discussion
The psychological relevance of subordinate-level categories of human faces, like race, gender, and developmental stage, is well established behaviorally and is important for understanding the way we process social categories. Yet, the mechanisms by which the brain represents this subordinate-level information about faces remain largely unknown. The patterns of bias in face recognition behavior reflect differential sensitivity to this subordinate-level information. For example, as children become adolescents, the relatively enhanced ability to recognize adult faces (i.e., Caregiver Bias) undergoes a developmental change so that peer faces of a similar pubertal status become the privileged subcategory for recognition (i.e., Peer Bias; Picci & Scherf, 2016). The Peer Bias becomes even more strongly entrenched in the face recognition behavior of emerging adults. Our goal in this study was to evaluate the neural mechanisms that represent this kind of subordinate-level information about faces, specifically the primacy of peer faces for emerging adults, individuals 18–25 years of age who are in a distinct developmental stage in which peers are highly relevant to their social developmental tasks (Arnett, 2000, 2007, 2014).
First, we tested a large cohort of participants in a behavioral paradigm to replicate and extend previous findings of a Peer Bias in the face recognition behavior of emerging adults. Then, in a subset of these same participants, we used fMRI to evaluate multiple potential mechanisms by which subordinate-level information underlying the Peer Bias may be represented within the face processing neural network. We mapped out the topography of the regions selectively responsive to the 5 subcategories of faces (child, early puberty adolescent, late puberty adolescent, emerging adult, parent faces) in each individual participant using a visual stimulation task. This allowed us to evaluate whether the locus, size, or magnitude of activation differentiated each of the subcategories of faces in the face processing network. Next, we independently interrogated the response properties of these regions as participants engaged in a 1-back face recognition task with all five subcategories of faces.
Emerging Adults are Biased to Recognize Peer Faces
To replicate and extend our prior work investigating the influence of developmental stage as a relevant type of subordinate-level information that biases the face recognition behavior of emerging adults, we employed an old/new recognition memory paradigm. In this task, participants encode and recognize faces from each of five developmental stages. Three of these subcategories of faces (child, early pubertal adolescent, late pubertal adolescent) represent developmental stages that emerging adults have already lived through themselves. In our prior work using this task, we reported a Peer Bias in emerging adults, namely that recognition behavior was superior in the emerging adult condition compared to all other conditions (Picci & Scherf, 2016). This prior finding underscores the importance of peer faces during this developmental period, supporting the notion that the visuoperceptual system is fine-tuned to subserve social developmental tasks (e.g., bonding with peers for relationships; Scherf et al., 2012). Although the emerging adult face category represents emerging adults’ current developmental category, and thus their “peers,” emerging adult faces may also be overrepresented in their accumulated visual experience given the role that adult caregivers and teachers play in children’s and adolescents’ lives. Also, emerging adult faces are sexually mature, and thus sexually dimorphic, making them potentially more distinctive than the child and adolescent faces. As a result, we extended this prior work by including another group of sexually mature adult faces to control for these alternative explanations. We included parent-aged faces (those between the ages of 40–50 years) because they likely represent the faces that are most overrepresented in emerging adults’ visual experience and are also sexually dimorphic.
Indeed, we observed that emerging adults exhibit superior recognition for peer faces compared to all other faces, even when parent-aged faces were included in the task. First, this finding replicates our previous finding of a Peer Bias in emerging adults (Picci & Scherf, 2016). Importantly, it also extends these earlier findings by ruling out an alternative mechanistic explanation for the development of this bias (i.e., accumulated experience), suggesting that developmental stage is psychologically relevant information that organizes and influences subordinate-level face recognition behavior, which ultimately manifests in the differential sensitivity to peer faces in emerging adults.
It is important to consider potential alternative explanations for this pattern of findings. For example, there may be something inherently different about the emerging adult faces that makes them more memorable or distinguishable in this paradigm than the other faces. If this were the case, we would predict that a bias for the emerging adult faces would materialize for all groups of participants who perform this task. In fact, this is not the pattern of results that we observe. In a prior experiment using these same stimuli (apart from the parent-aged stimuli), adolescent participants were not biased to remember the emerging adult faces. They were biased to remember adolescent faces that matched their own pubertal status. In fact, they evinced a relative decrement in performance on the emerging adult faces (Picci & Scherf, 2016). Therefore, given this pattern of results across studies, it is unlikely that the findings are related to unique characteristics of the emerging adult face stimuli.
Importantly, although this behavioral finding is consistent with the notion of an “Own-Age” Bias (Anastasi & Rhodes, 2005), we suggest that, interpreted in the context of the broader literature, it is developmental stage, which is correlated with but not limited to age, that is the defining feature of this privileged subordinate-level category in face recognition behavior. For instance, our prior findings show that adolescents matched on age exhibit a bias for recognizing other adolescent faces with whom they share a similar pubertal status (early vs. late), indicating that pubertal status is relevant for defining “peer” faces for adolescents (Picci & Scherf, 2016). Moreover, peer biases do not influence the face recognition abilities of children; instead, children exhibit caregiver biases, indicating the primary role of caregivers in their social developmental tasks (for review, see Scherf & Scott, 2012). Also, our finding that emerging adults exhibit a peer bias in their face recognition behavior even when parent, adolescent, and child faces are considered together suggests that cumulative visual experience (as represented by parent faces) and lived experience (as represented by child and adolescent faces) are not the only features organizing subcategories of faces. The social developmental tasks of the individual, which influence motivational components of face processing (Scherf et al., 2012), are critical as well.
Neural Mechanisms Underlying the Peer Bias in Face Recognition
To investigate the neural mechanisms that subserve the Peer Bias in emerging adults’ face recognition behavior, we employed two fMRI tasks. First, we determined whether subordinate-level face categories are represented as separate patches of neural real estate within the core and/or extended regions of the broader face-processing network. To do so, we employed a visual stimulation task that included blocks of dynamic videos of faces from each of the five subcategories, all matched on emotional valence, as well as dynamic videos of common objects. For each subordinate-level category of face, we individually defined patches of face-selective activation in the bilateral core regions (FFA, OFA, and pSTS) and in the amygdala and anterior temporal lobe of the extended regions using the contrast [face subcategory – objects]. From this set of regions, we were able to quantify separate functional ROIs for each of the face subcategories in at least 50% of the participants in bilateral FFA1, right OFA, and right pSTS. Importantly, if a participant had definable face-selective activation for one of the face categories (e.g., child) within a region (e.g., right FFA), then they were very likely to have face-selective activation for the other four subordinate-level categories as well. Analyses of the centroid locations of these subcategory ROIs within each region (i.e., FFA1, OFA, pSTS) revealed that they were, on average, largely overlapping. Therefore, the presence of these subcategory ROIs could mean either that the regions are insensitive to the subordinate-level categories of faces (i.e., respond to faces as a basic-level category: anything with a first-order configuration of face features) or that they are exquisitely sensitive to these subordinate-level categories (i.e., overlapping distributed representations of the developmental stage of human faces).
To sort out these possibilities, it was essential to quantify the size and magnitude of activation for each of the subordinate-level face categories within each region. If the regions are insensitive to subordinate-level category (i.e., encode faces at the basic level), then the variation in size and magnitude of response across categories should be very small. If, on the other hand, the regions are differentially sensitive to one subordinate-level category compared to others (e.g., peer faces), then the size and/or magnitude of responses should reflect this sensitivity.
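The logic of the size comparison can be sketched with simulated numbers. Everything below (the voxel counts, the ~20% peer offset, and the sample size of 31) is illustrative only, not the study’s data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-participant ROI sizes (voxel counts) for five
# subcategory-defined FFA1 ROIs; values are simulated, not real data.
categories = ["child", "early_adol", "late_adol", "peer", "parent"]
roi_size = {c: rng.normal(120, 15, size=31) for c in categories}
roi_size["peer"] += 24  # build in a ~20% larger peer ROI, as reported

# Contrast the peer ROI size against the mean of the other four
# subcategories, computed within participant (a paired comparison).
other = np.mean([roi_size[c] for c in categories if c != "peer"], axis=0)
diff = roi_size["peer"] - other
print(f"mean size difference: {diff.mean():.1f} voxels")
```

Under the basic-level (insensitive) account, `diff` would hover near zero; under the subordinate-level account tested here, it should be reliably positive for the peer ROI, which is the pattern the paper reports in bilateral FFA1.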
In bilateral FFA1, the regions of activation for peer faces were larger (~20%), on average, than the ROIs defined for any of the other faces. The pattern was similar in right OFA and right pSTS, although the differences there were not significant. Notably, we did not spatially smooth the data, and we identified each functional brain region in each participant based on that participant’s own activation patterns and anatomy. When functional data are not smoothed, the size of the area provides critical information about the extent of the distributed representation (Golarai, Liberman, Yoon, & Grill-Spector, 2010; Scherf et al., 2007). These differences in the size of the peer-defined bilateral FFA1 relative to the other face-defined FFA1 regions are consistent with our previous developmental (e.g., Scherf et al., 2007) and individual differences (Elbich & Scherf, 2017) findings. Specifically, age-related improvements and adult individual differences in face recognition abilities are related to larger face-selective ROIs within the broader face-processing regions, particularly bilateral FFA1. We have argued previously that enhanced performance may reflect the ability to access more informative, or cleaner, signal about faces by integrating local circuits that carry information about distributed representations, which presents as a larger single functional ROI at the resolution of fMRI (Elbich & Scherf, 2017). Given the privileged status of peer faces in the face recognition behavior of emerging adults, these larger functional ROIs for peer faces in bilateral FFA1 might reflect one mechanism by which the brain privileges these representations as well.
We also evaluated whether the magnitude of activation distinguished the encoding and representation of subordinate-level categories of faces. Importantly, only target faces (e.g., peer faces) were used to functionally define each face-selective ROI. Therefore, assessing the magnitude of response within these ROIs to the non-target (i.e., other) faces provided an independent assessment of sensitivity to subordinate- versus basic-level encoding of faces. A differential response to the non-target faces would indicate sensitivity to the subordinate-level categories, whereas a comparable magnitude of response would suggest more basic-level encoding across all the categories. We found that within the peer-defined bilateral FFA1 and right pSTS, non-target faces elicited significantly weaker activation, indicating sensitivity to the subordinate-level category of peer faces. Furthermore, within each of the other subordinate-level face-selective ROIs of the bilateral FFA1 (child, early puberty adolescent, late puberty adolescent, parent), the non-target faces also elicited weaker activation. Importantly, there were no differences in the profile of activation to any of the subordinate-level face categories in early visual cortex (i.e., no sensitivity to subcategory), which helps rule out the concern that the responses in FFA1 were driven by differences in the stimuli at the level of the visual input. In sum, this converging set of findings suggests that bilateral FFA1 encodes each of these subordinate-level categories of faces via the magnitude of response.
To further interrogate the patterns of activation to target and non-target faces within bilateral FFA1 specifically during face recognition, we used the same regions derived from the Visual Stimulation task to conduct separate ROI-based general linear models on the timeseries data from the fMRI Face Recognition task. This approach allowed us to extract beta weights for each subordinate-level face category during the fMRI Face Recognition task; the voxel selection and the estimation of the difference between target and non-target faces are thus completely independent. In contrast to the Visual Stimulation task, during face recognition, when peer faces were the target face, they elicited weaker activation than the non-target other faces in bilateral FFA1. This pattern was specific to peer faces: when any of the other subordinate-level categories of faces was the target (for defining the ROI), it elicited stronger activation than the non-target faces in bilateral FFA1. Unfortunately, exploratory analyses of brain activation–behavior correspondences did not provide evidence to help interpret these findings. Perhaps the results reflect the relative ease with which emerging adults recognize peer compared to other groups of faces; during a particularly easy recognition task, emerging adults may not need to recruit much activation in FFA1 when recognizing peer faces. This interpretation is consistent with empirical reports of decreased activation in the ventral visual pathway, including FFA, to faces of personal expertise such as familiar faces (Beaton et al., 2009), own-age faces (Golarai, Liberman, & Grill-Spector, 2017), and own-race faces (Herzmann, Minor, & Adkins, 2017; Herzmann, Willenbockel, Tanaka, & Curran, 2011; Natu, Raboy, & O’Toole, 2011).
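The target versus non-target contrast on the extracted beta weights amounts to a within-participant paired comparison. The sketch below illustrates that comparison on simulated values; the beta magnitudes, effect size, and sample size are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative beta weights (arbitrary units) within one target-defined
# FFA1 ROI, one value per participant per condition; simulated, not real.
betas_target = rng.normal(1.2, 0.3, size=31)     # target-category faces
betas_nontarget = rng.normal(0.9, 0.3, size=31)  # mean of the other four categories

# Paired t statistic: does the target category differ reliably from the
# non-target categories within the same ROI and participants?
d = betas_target - betas_nontarget
t = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
print(f"paired t({len(d) - 1}) = {t:.2f}")
```

Because the ROI voxels were selected from the independent Visual Stimulation task, this contrast on the Face Recognition betas is not circular; the sign of `d` is what distinguished peer-defined ROIs (weaker target response) from the other subcategory ROIs (stronger target response) in the study.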
Note that this finding contrasts with emerging adults’ recruitment of larger regions and greater activation while passively viewing the dynamic Visual Stimulation task. That task is unconstrained and likely engages a multitude of processes that are influenced by motivational factors and social developmental tasks (e.g., affect recognition, trait evaluation, identity recognition).
Limitations and Future Directions
In this work, we designed the study specifically to maximize power for univariate analyses of multiple neural mechanisms (size, magnitude of activation). Our approach and strict analytic criteria resulted in a narrow sampling of regions within this broadly distributed face-processing network, limited to bilateral FFA1, right OFA, and right pSTS. Going forward, it will be important to employ techniques that enable rigorous sampling of more regions in the network, including m-fus/FFA2 (Weiner & Grill-Spector, 2015), to evaluate representational capacity for sub-categorical information about faces. Second, although each face-selective ROI (e.g., FFA1) was independently defined in our analyses, these sub-categorical ROIs are largely overlapping, suggesting that the pattern of activation may help distinguish the representations of face subcategories. In subsequent studies it will be helpful to use multivariate analyses to investigate potential differences in the way subordinate-level categories of faces are represented (in a more distributed vs. more sparse fashion) within each of these functionally defined ROIs, particularly bilateral FFA1. Third, we used a 1-back recognition task to study the neural basis of these subordinate-level category face representations during recognition; however, participants were at ceiling in their behavioral performance. It will be essential for future studies to engage face recognition behavior during scanning under more rigorous conditions (e.g., a 2-back task) that elicit the privileged status of peer faces in recognition behavior. This will provide a more naturalistic state under which to interrogate the neural systems and potentially evaluate brain–behavior associations.
Finally, as with many fMRI studies, it will be important to replicate these findings with a larger sample, with other developmental groups (e.g., adolescents), and with newer scan sequences that optimize signal-to-noise ratios in the parts of the face-processing system that are vulnerable to artifacts during scanning, such as the amygdala and the anterior temporal lobe. Importantly, we did employ multiple strategies for improving statistical power with smaller sample sizes in neuroimaging studies (see Poldrack et al., 2017).
Conclusions
To summarize, we reported three key findings in the current study: (1) emerging adults exhibit a Peer Bias in face recognition behavior, which indicates a privileged status for a subordinate-level category of faces that is not predicted on the basis of experience alone; (2) this privileged status of peer faces is supported by multiple neural responses within the ventral visual pathway (i.e., FFA1), including neural magnitude and neural size; and (3) the FFA1, a critical region for face processing, fundamentally underlies the representations of subordinate-level categories of faces in terms of developmental stage. These findings demonstrate organizational principles that the human ventral visual pathway uses to organize social information at the subordinate level, which is essential for navigating human social interactions.
Supplementary Material
Acknowledgement
We would like to thank staff from the Social, Life, and Engineering Sciences Center (SLEIC) for help in acquiring the neuroimaging data.
Funding Information
The research was supported by a grant (R01 MH112573-01) from the National Institute of Mental Health (NIMH) to K.S.S., the Department of Psychology, and the Social Science Research Institute at Penn State University.
Footnotes
Note that this hypothesis is not based on age, unlike hypotheses invoking the “Own-Age” Bias; it is focused on the relative influence of social developmental tasks. For example, if an emerging adult becomes a parent, their social developmental tasks shift to caregiving, and we predict that their biases in face processing will shift as well to reflect this developmental focus. The participants in this study were all recruited from a residential college campus and were unlikely to be in caregiving positions, which also explains why the age range of the participants was so narrow.
Citation Diversity Statement
Junqiang Dai identifies as male.
K. Suzanne Scherf identifies as female.
Data Availability Statement
Researchers may access the study data and materials via email to the corresponding author.
References:
- Anastasi JS, & Rhodes MG (2005). An own-age bias in face recognition for children and older adults. Psychonomic Bulletin & Review, 12(6), 1043–1047. https://doi.org/10.3758/bf03206441
- Arnett JJ (2000). Emerging adulthood: A theory of development from the late teens through the twenties. American Psychologist, 55(5), 469. https://doi.org/10.1037/0003-066x.55.5.469
- Arnett JJ (2007). Emerging adulthood: What is it, and what is it good for? Child Development Perspectives, 1(2), 68–73. https://doi.org/10.1111/j.1750-8606.2007.00016.x
- Arnett JJ (2014). Presidential address: The emergence of emerging adulthood. Emerging Adulthood, 2(3), 155–162. https://doi.org/10.1177/2167696814541096
- Arrington M, Elbich D, Dai J, Duchaine B, & Scherf KS (2022). Introducing the female Cambridge face memory test – long form (F-CFMT+). Behavior Research Methods, 1–14. https://doi.org/10.3758/s13428-022-01805-8
- Beaton EA, Schmidt LA, Schulkin J, Antony MM, Swinson RP, & Hall GB (2009). Different fusiform activity to stranger and personally familiar faces in shy and social adults. Social Neuroscience, 4(4), 308–316. https://doi.org/10.1080/17470910902801021
- Brigham JC, & Malpass RS (1985). The role of experience and contact in the recognition of faces of own- and other-race persons. Journal of Social Issues, 41(3), 139–155. https://doi.org/10.1111/j.1540-4560.1985.tb01133.x
- Cassia VM, Picozzi M, Kuefner D, & Casati M (2008). Why mix-ups don’t happen in the nursery: Evidence for an experience-based interpretation of the other-age effect. Quarterly Journal of Experimental Psychology, 62(6), 1099–1107. https://doi.org/10.1080/17470210802617654
- Contreras J, Banaji MR, & Mitchell JP (2013). Multivoxel patterns in fusiform face area differentiate faces by sex and race. PLoS ONE, 8(7). https://doi.org/10.1371/journal.pone.0069684
- Diamond R, & Carey S (1986). Why faces are and are not special: An effect of expertise. Journal of Experimental Psychology: General, 115(2), 107. https://doi.org/10.1037/0096-3445.115.2.107
- Egger HL, Pine DS, Nelson E, Leibenluft E, Ernst M, Towbin KE, & Angold A (2011). The NIMH Child Emotional Faces Picture Set (NIMH-ChEFS): A new set of children’s facial emotion stimuli. International Journal of Methods in Psychiatric Research, 20(3), 145–156. https://doi.org/10.1002/mpr.343
- Elbich DB, Molenaar PCM, & Scherf KS (2019). Evaluating the organizational structure and specificity of network topology within the face processing system. Human Brain Mapping. https://doi.org/10.1002/hbm.24546
- Elbich DB, & Scherf KS (2017). Beyond the FFA: Brain-behavior correspondences in face recognition abilities. NeuroImage, 147, 409–422. https://doi.org/10.1016/j.neuroimage.2016.12.042
- Fox CJ, Iaria G, & Barton JJS (2009). Defining the face processing network: Optimization of the functional localizer in fMRI. Human Brain Mapping, 30(5), 1637–1651. https://doi.org/10.1002/hbm.20630
- Gauthier I, Williams P, Tarr MJ, & Tanaka J (1998). Training ‘greeble’ experts: A framework for studying expert object recognition processes. Vision Research, 38(15–16), 2401–2428. https://doi.org/10.1016/s0042-6989(97)00442-2
- Gauthier I, Tarr MJ, Moylan J, Skudlarski P, Gore JC, & Anderson AW (2000). The fusiform face area is part of a network that processes faces at the individual level. Journal of Cognitive Neuroscience, 12(3), 495–504. https://doi.org/10.1162/089892900562165
- Genovese CR, Lazar NA, & Nichols T (2002). Thresholding of statistical maps in functional neuroimaging using the false discovery rate. NeuroImage, 15(4), 870–878. https://doi.org/10.1006/nimg.2001.1037
- Gobbini MI, & Haxby JV (2007). Neural systems for recognition of familiar faces. Neuropsychologia, 45(1), 32–41. https://doi.org/10.1016/j.neuropsychologia.2006.04.015
- Golarai G, Liberman A, & Grill-Spector K (2017). Experience shapes the development of neural substrates of face processing in human ventral temporal cortex. Cerebral Cortex, 27(2), 1229–1244. https://doi.org/10.1093/cercor/bhv314
- Golarai G, Liberman A, Yoon JMD, & Grill-Spector K (2010). Differential development of the ventral visual cortex extends through adolescence. Frontiers in Human Neuroscience, 3, 80. https://doi.org/10.3389/neuro.09.080.2009
- Grill-Spector K, Golarai G, & Gabrieli J (2008). Developmental neuroimaging of the human ventral visual cortex. Trends in Cognitive Sciences, 12(4), 152–162. https://doi.org/10.1016/j.tics.2008.01.009
- Grill-Spector K, Knouf N, & Kanwisher N (2004). The fusiform face area subserves face perception, not generic within-category identification. Nature Neuroscience, 7(5), 555–562. https://doi.org/10.1038/nn1224
- Grill-Spector K, & Weiner KS (2014). The functional architecture of the ventral temporal cortex and its role in categorization. Nature Reviews Neuroscience, 15(8), 536–548. https://doi.org/10.1038/nrn3747
- Havighurst RJ (1972). Developmental tasks and education (3rd ed.). New York: McKay.
- Haxby JV, Hoffman EA, & Gobbini MI (2002). Human neural systems for face recognition and social communication. Biological Psychiatry, 51(1), 59–67. https://doi.org/10.1016/s0006-3223(01)01330-0
- Haxby JV, Hoffman EA, & Gobbini MI (2000). The distributed human neural system for face perception. Trends in Cognitive Sciences, 4(6), 223–233. https://doi.org/10.1016/s1364-6613(00)01482-0
- Herrington JD, Riley ME, Grupe DW, & Schultz RT (2015). Successful face recognition is associated with increased prefrontal cortex activation in autism spectrum disorder. Journal of Autism and Developmental Disorders, 45(4), 902–910. https://doi.org/10.1007/s10803-014-2233-4
- Herzmann G, Minor G, & Adkins M (2017). Neural correlates of memory encoding and recognition for own-race and other-race faces in an associative-memory task. Brain Research, 1655, 194–203. https://doi.org/10.1016/j.brainres.2016.10.028
- Herzmann G, Willenbockel V, Tanaka JW, & Curran T (2011). The neural correlates of memory encoding and recognition for own-race and other-race faces. Neuropsychologia, 49(11), 3103–3115. https://doi.org/10.1016/j.neuropsychologia.2011.07.019
- Hugenberg K, Young SG, Bernstein MJ, & Sacco DF (2010). The categorization-individuation model: An integrative account of the other-race recognition deficit. Psychological Review, 117(4), 1168. https://doi.org/10.1037/a0020463
- Kanwisher N, Tong F, & Nakayama K (1998). The effect of face inversion on the human fusiform face area. Cognition, 68(1), B1–B11. https://doi.org/10.1016/s0010-0277(98)00035-3
- Kaul C, Rees G, & Ishai A (2011). The gender of face stimuli is represented in multiple regions in the human brain. Frontiers in Human Neuroscience, 4, 238. https://doi.org/10.3389/fnhum.2010.00238
- Kubota JT, Banaji MR, & Phelps EA (2012). The neuroscience of race. Nature Neuroscience, 15(7), 940–948. https://doi.org/10.1038/nn.3136
- Levin DT (1996). Classifying faces by race: The structure of face categories. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22(6), 1364–1382. https://doi.org/10.1037/0278-7393.22.6.1364
- Lewin C, & Herlitz A (2002). Sex differences in face recognition—Women’s faces make the difference. Brain and Cognition, 50(1), 121–128. https://doi.org/10.1016/s0278-2626(02)00016-7
- LoBue V, & Thrasher C (2015). The Child Affective Facial Expression (CAFE) set: Validity and reliability from untrained adults. Frontiers in Psychology, 5, 1532. https://doi.org/10.3389/fpsyg.2014.01532
- Lundqvist D, Flykt A, & Öhman A (1998). Karolinska directed emotional faces. Cognition and Emotion. https://doi.org/10.1037/t27732-000
- Meissner CA, & Brigham JC (2001). Thirty years of investigating the own-race bias in memory for faces: A meta-analytic review. Psychology, Public Policy, and Law, 7(1), 3. https://doi.org/10.1037/1076-8971.7.1.3
- Minear M, & Park DC (2004). A lifespan database of adult facial stimuli. Behavior Research Methods, Instruments, & Computers, 36(4), 630–633.
- Mur M, Ruff DA, Bodurka J, Bandettini PA, & Kriegeskorte N (2010). Face-identity change activation outside the face system: “Release from adaptation” may not always indicate neuronal selectivity. Cerebral Cortex, 20(9), 2027–2042. https://doi.org/10.1093/cercor/bhp272
- Natu V, Raboy D, & O’Toole AJ (2011). Neural correlates of own- and other-race face perception: Spatial and temporal response differences. NeuroImage, 54(3), 2547–2555. https://doi.org/10.1016/j.neuroimage.2010.10.006
- Nestor A, Plaut DC, & Behrmann M (2011). Unraveling the distributed neural code of facial identity through spatiotemporal pattern analysis. Proceedings of the National Academy of Sciences, 108(24), 9998–10003. https://doi.org/10.1073/pnas.1102433108
- Nishimura M, Scherf KS, Zachariou V, Tarr MJ, & Behrmann M (2015). Size precedes view: Developmental emergence of invariant object representations in lateral occipital complex. Journal of Cognitive Neuroscience, 27(3), 474–491. https://doi.org/10.1162/jocn_a_00720
- Pascalis O, Scott LS, Kelly DJ, Shannon RW, Nicholson E, Coleman M, & Nelson CA (2005). Plasticity of face processing in infancy. Proceedings of the National Academy of Sciences of the United States of America, 102(14), 5297–5300. https://doi.org/10.1073/pnas.0406627102
- Pascalis O, & Bachevalier J (1998). Face recognition in primates: A cross-species study. Behavioural Processes, 43(1), 87–96. https://doi.org/10.1016/s0376-6357(97)00090-9
- Picci G, & Scherf KS (2016). From caregivers to peers. Psychological Science, 27(11), 1461–1473. https://doi.org/10.1177/0956797616663142
- Pitcher D, Walsh V, & Duchaine B (2011). The role of the occipital face area in the cortical face perception network. Experimental Brain Research, 209(4), 481–493. https://doi.org/10.1007/s00221-011-2579-1
- Poldrack RA, Baker CI, Durnez J, Gorgolewski KJ, Matthews PM, Munafò MR, … Yarkoni T (2017). Scanning the horizon: Towards transparent and reproducible neuroimaging research. Nature Reviews Neuroscience, 18(2), 115–126. https://doi.org/10.1038/nrn.2016.167
- Rosch E, Mervis CB, Gray WD, Johnson DM, & Boyes-Braem P (1976). Basic objects in natural categories. Cognitive Psychology, 8(3), 382–439. https://doi.org/10.1016/0010-0285(76)90013-x
- Rosch E (1978). Principles of categorization. In Rosch E & Lloyd BB (Eds.), Cognition and categorization (pp. 27–48). Hillsdale, NJ: Erlbaum.
- Scherf KS, Behrmann M, & Dahl RE (2012). Facing changes and changing faces in adolescence: A new model for investigating adolescent-specific interactions between pubertal, brain and behavioral development. Developmental Cognitive Neuroscience, 2(2), 199–219. https://doi.org/10.1016/j.dcn.2011.07.016
- Scherf KS, Behrmann M, Humphreys K, & Luna B (2007). Visual category-selectivity for faces, places and objects emerges along different developmental trajectories. Developmental Science, 10(4), F15–F30. https://doi.org/10.1111/j.1467-7687.2007.00595.x
- Scherf KS, Elbich D, Minshew N, & Behrmann M (2015). Individual differences in symptom severity and behavior predict neural activation during face processing in adolescents with autism. NeuroImage: Clinical, 7, 53–67. https://doi.org/10.1016/j.nicl.2014.11.003
- Scherf KS, & Scott LS (2012). Connecting developmental trajectories: Biases in face processing from infancy to adulthood. Developmental Psychobiology, 54(6), 643–663. https://doi.org/10.1002/dev.21013
- Scherf KS, Thomas C, Doyle J, & Behrmann M (2014). Emerging structure–function relations in the developing face processing system. Cerebral Cortex, 24(11), 2964–2980. https://doi.org/10.1093/cercor/bht152
- Stanislaw H, & Todorov N (1999). Calculation of signal detection theory measures. Behavior Research Methods, Instruments, & Computers, 31(1), 137–149. https://doi.org/10.3758/bf03207704
- Tanaka JW (2001). The entry point of face recognition: Evidence for face expertise. Journal of Experimental Psychology: General, 130(3), 534–543. https://doi.org/10.1037/0096-3445.130.3.534
- Tanaka JW, Kiefer M, & Bukach CM (2004). A holistic account of the own-race effect in face recognition: Evidence from a cross-cultural study. Cognition, 93(1), B1–B9. https://doi.org/10.1016/j.cognition.2003.09.011
- Tanaka JW, & Pierce LJ (2009). The neural plasticity of other-race face recognition. Cognitive, Affective, & Behavioral Neuroscience, 9(1), 122–131. https://doi.org/10.3758/cabn.9.1.122
- Tanaka JW, & Taylor M (1991). Object categories and expertise: Is the basic level in the eye of the beholder? Cognitive Psychology, 23.
- Tottenham N, Tanaka JW, Leon AC, McCarry T, Nurse M, Hare TA, … Nelson C (2009). The NimStim set of facial expressions: Judgments from untrained research participants. Psychiatry Research, 168(3), 242–249. https://doi.org/10.1016/j.psychres.2008.05.006
- Weiner KS, & Grill-Spector K (2015). The evolution of face processing networks. Trends in Cognitive Sciences, 19(5), 240–241. https://doi.org/10.1016/j.tics.2015.03.010
- Whalen PJ, Johnstone T, Somerville LH, Nitschke JB, Polis S, Alexander AL, … Kalin NH (2008). A functional magnetic resonance imaging predictor of treatment response to venlafaxine in generalized anxiety disorder. Biological Psychiatry, 63(9), 858–863. https://doi.org/10.1016/j.biopsych.2007.08.019
- Wright DB, Boyd CE, & Tredoux CG (2003). Inter-racial contact and the own-race bias for face recognition in South Africa and England. Applied Cognitive Psychology, 17(3), 365–373. https://doi.org/10.1002/acp.898
- Wright DB, & Stroud JN (2002). Age differences in lineup identification accuracy: People are better with their own age. Law and Human Behavior, 26(6), 641–654. https://doi.org/10.1023/a:1020981501383