NeuroImage: Clinical. 2018 Nov 28;21:101613. doi: 10.1016/j.nicl.2018.101613

Reduced neural sensitivity to rapid individual face discrimination in autism spectrum disorder

Sofie Vettori a,c, Milena Dzhelyova b,c, Stephanie Van der Donck a,c, Corentin Jacques a,b, Jean Steyaert a,c, Bruno Rossion b,d,e, Bart Boets a,c
PMCID: PMC6411619  PMID: 30522972

Abstract

Background

Individuals with autism spectrum disorder (ASD) are characterized by impairments in social communication and interaction. Although difficulties in processing social signals from the face have been observed and emphasized in ASD for many years, findings across both behavioral and neural studies are highly inconsistent.

Methods

We recorded scalp electroencephalography (EEG) in 23 8-to-12-year-old boys with ASD and 23 matched typically developing boys using a fast periodic visual stimulation (FPVS) paradigm, providing objective (i.e., frequency-tagged), fast (i.e., a few minutes) and highly sensitive measures of rapid face categorization, without requiring any explicit face processing task. We tested the sensitivity both to rapidly (i.e., at a glance) categorize faces among other objects and to individuate unfamiliar faces.

Outcomes

While general neural synchronization to the visual stimulation and neural responses indexing generic face categorization were indistinguishable between children with ASD and typically developing controls, neural responses indexing individual face discrimination over the occipito-temporal cortex were substantially reduced in the individuals with ASD. This difference vanished when faces were presented upside-down, owing to the lack of a significant face inversion effect in ASD.

Interpretation

These data provide original evidence for a selective high-level impairment in individual face discrimination in ASD in an implicit task. The objective and rapid assessment of this function opens new perspectives for ASD diagnosis in clinical settings.

Keywords: Autism, EEG, Face processing

Highlights

  • We assess implicit face processing in ASD via Fast Periodic Visual Stimulation EEG.

  • Rapid categorization of a face as a face is not impaired in children with ASD.

  • Individual face discrimination is selectively impaired in ASD.

  • Children with ASD show no face inversion effect.

  • FPVS-EEG opens new perspectives for clinical settings.

1. Introduction

The human face is a highly familiar, complex, multidimensional visual pattern, conveying a wide variety of information about an individual (identity, sex, age, mood, etc.). It constitutes arguably the most salient class of visual images for understanding perceptual categorization, a fundamental brain function. Faces can be differentiated from other objects with astounding accuracy and speed (Crouzet et al. 2010; Crouzet and Thorpe 2011; Hershler et al. 2010; Hershler and Hochstein 2005) but a more fine-grained distinction is necessary in order to differentiate among individual faces. Although there is a clear advantage at individuating familiar over unfamiliar individuals from their faces (Young and Burton 2018), neurotypical human adults are also experts at individual discrimination of unfamiliar faces (Rossion 2018). Indeed, hundreds of behavioral experiments show that, without any task training, typical human adults are highly accurate at unfamiliar face matching tasks, even in difficult tasks requiring high levels of generalization, and with similar-looking distractors (e.g., Megreya & Burton, 2006; Rossion and Michel 2018). Unfamiliar individual face discrimination is also largely affected in cases of prosopagnosia following brain damage (e.g., Sergent and Signoret, 1992b), and by simple manipulations preserving low-level visual cues such as contrast reversal (Galper 1970; Russell et al. 2006) or picture-plane inversion (Rossion 2008 for review; Yin 1969).

Given that in the human species successful social interactions require efficient decoding of information from the face, it is not surprising that deficits in face processing have been put forward as a hallmark of the social difficulties in Autism Spectrum Disorder (ASD) (American Psychiatric Association 2013; Tang et al. 2015; Weigelt et al. 2012). Individuals with ASD are characterized by impairments in social communication and interaction, combined with a pattern of restricted and repetitive behavior and interests (American Psychiatric Association 2013). Many studies have tested individuals with ASD on explicit behavioral face processing tasks (e.g. see Tang et al. 2015; Weigelt et al. 2012). For instance, already in the late 1970s and early 1980s, it was observed that young children with ASD were less proficient than controls at identifying familiar peers when relying on the eye region, and that the inversion-related decrease in facial identity processing performance was smaller than in healthy controls (Hobson et al. 1988; Langdell 1978). These impairments have generally been hypothesized to arise from a lack of interest in social stimuli such as faces early in life (Chawarska et al. 2013; Pierce et al. 2016), atypical perceptual processing strategies that favor detail processing at the cost of global holistic processing (Behrmann et al. 2006b; Behrmann et al., 2006a), and/or dysfunction of the extensive neural circuitry subtending face processing (Campatelli et al. 2013; Nomi and Uddin 2015).

However, findings from the numerous behavioral studies of face processing in ASD are generally mixed and inconsistent, with some studies reporting poorer face processing abilities in ASD (Rose et al. 2007; Rosset et al. 2008; Tantam et al. 1989; Van Der Geest et al. 2002), and others reporting performance similar to that of neurotypical individuals (Barton et al. 2007; Falck-Ytter 2008; Guillon et al. 2014; Hedley et al. 2015; Jemel et al. 2006; Reed et al. 2007; Scherf et al. 2008; Teunisse and de Gelder 2003). Comparisons across studies are difficult due to the use of different populations (e.g., in terms of age, sex and intelligence) and the vast heterogeneity in ASD inclusion criteria, but also because of large differences in task requirements. For instance, children with ASD may be able to perform individual discrimination tasks with simultaneously presented faces, but be impaired when faces are shown consecutively (Weigelt et al. 2012). A more recent review concluded that there are both quantitative and qualitative differences in face recognition between individuals with ASD and typically developing control participants (Tang et al. 2015). Quantitatively, the majority of reviewed studies reported reduced individual face recognition accuracy among individuals with ASD but no systematic difference in response time. Qualitatively, many studies provided evidence for the use of different face recognition strategies in individuals with ASD, as indicated by markers of atypical individual face recognition such as a reduced inversion effect (Hedley et al. 2015; Rose et al. 2007; Tavares et al. 2016; Teunisse and de Gelder 2003).

To better understand the nature of face processing impairments and to overcome the difficulty of interpreting explicit behavioral findings (which may have many sources beyond specific face processing), researchers have turned their attention for almost two decades towards implicit face processing measures such as eye-tracking (Chita-Tegmark 2016; Guillon et al. 2014), scalp electroencephalography (EEG), and functional magnetic resonance imaging (fMRI) (Campatelli et al. 2013; Nomi and Uddin 2015; Schultz 2005). Due to its relatively low cost and ease of application, EEG has been the methodology of choice for many studies in this field. While EEG studies have examined different event-related potentials (ERPs) in response to face stimuli (e.g. Benning et al. 2016; Dawson et al. 2002; Gunji et al. 2013; McCleery et al. 2009; Monteiro et al. 2017; O'Connor et al. 2007; Webb et al. 2010), the vast majority of studies focused on the N170, a negative event-related potential (ERP) peaking at about 170 ms over occipito-temporal sites following the sudden onset of a face stimulus (Bentin et al. 1996). This component is particularly interesting since it differs reliably between faces and other stimuli in neurotypical individuals (Rossion and Jacques 2011 for review) and reflects the interpretation of a stimulus as a face, beyond physical characteristics of the visual input (Caharel et al. 2013; Churches et al. 2014; Rossion 2014a). In particular, the N170 is typically right lateralized, larger in amplitude to faces as compared to non-face objects (Bentin et al. 1996; Rossion et al. 2000), and is specifically increased in amplitude and latency by picture-plane inversion of the stimulus (Rossion et al. 2000).

Unfortunately, thus far, electrophysiological studies of children or adults with ASD have failed to provide consistent evidence of abnormal N170 amplitude, latency or scalp topography in response to face stimuli (e.g. Dawson et al. 2005; Kang et al. 2018; Naumann et al. 2018; Tavares et al. 2016; Webb et al. 2010). Although a recent meta-analysis pointed to a small but significant delay in N170 latency in ASD compared to neurotypicals (Kang et al. 2018), this effect may reflect the generally slower processing of meaningful, even non-social, visual stimuli, and is quite unspecific, being found in a wide variety of psychiatric and neurological disorders regardless of diagnosis (Feuerriegel et al. 2015). Moreover, the N170 delay in response to faces may already be present in earlier visual components such as the P1, reflecting basic sensory processes (Neuhaus et al. 2016). More generally, the absolute parameters of the N170 evoked by a face stimulus (i.e., its latency, amplitude or pattern of lateralization) cannot directly index processes subtending social communication, such as the categorization of faces as faces, or the categorization of faces in terms of identity, emotional expression or gaze direction. Thus, with respect to the underlying processes, an abnormal N170 parameter is not very informative, as it does not disambiguate the functional specificity in terms of generic face categorization, individuation or other face processes (Vettori et al. 2018). While a number of studies have shown that the N170 amplitude is sensitive (i.e., reduced) to the repetition of the same individual face (as compared to different faces) (e.g. Caharel et al. 2009; Heisz et al. 2006; Jacques et al. 2007), providing an electrophysiological index of individual face discrimination, this effect depends greatly on stimulation parameters, is not very large in typical individuals, and is therefore not significant in every study (e.g., Amihai et al. 2011). Moreover, the N170 amplitude reduction to repeated individual faces is difficult to identify and quantify in individual participants, and requires a relatively long recording duration to accumulate a sufficiently high number of trials.

What would be desirable at this stage to move the field forward is an implicit, yet sensitive and directly quantifiable, electrophysiological measure of these specific socio-communicative face perception processes. In the present study, we apply EEG frequency-tagging, or fast periodic visual stimulation (FPVS), to meet these requirements. The FPVS-EEG technique is based on the fairly old observation (in fact preceding standard ERP measures) that a visual stimulus presented at a fixed rate, e.g., a light flickering on/off 17 times per second (17 Hz), generates an electrical brain response exactly at the stimulation frequency (i.e., 17 Hz in this instance), which can be recorded over the visual cortex (Adrian and Matthews 1934). The data can be transformed into the frequency domain through Fourier analysis (Regan 1966), providing highly sensitive (i.e., high signal-to-noise ratio, SNR; Regan 1989) and objective (i.e., at a pre-determined frequency) quantifiable markers of an automatic visual process without an explicit task, making the approach well suited as a clinical diagnostic tool across ages and populations (Norcia et al. 2015; Regan, 1981, Regan, 1989; Rossion 2014a).
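As a simple illustration of this principle, a response locked to a known stimulation frequency emerges as a narrow peak in the Fourier amplitude spectrum of the recorded signal, even when embedded in broadband noise. The following minimal Python sketch simulates this (illustrative only, not the analysis code used in this study; all parameter values are arbitrary):

```python
import numpy as np

fs = 512.0                                         # sampling rate (Hz)
duration = 40.0                                    # seconds of stimulation
t = np.arange(0, duration, 1 / fs)

f_stim = 17.0                                      # flicker frequency (Hz), as in the example above
signal = 0.2 * np.sin(2 * np.pi * f_stim * t)      # small periodic brain response
eeg = signal + np.random.randn(t.size)             # embedded in broadband noise

amp = np.abs(np.fft.rfft(eeg)) * 2 / t.size        # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)            # bin width = 1/duration = 0.025 Hz

k = np.argmin(np.abs(freqs - f_stim))              # bin of the stimulation frequency
print(f"amplitude at {f_stim} Hz: {amp[k]:.3f}, median noise level: {np.median(amp):.3f}")
```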

While this approach has long been confined to the study of low-level processes (i.e., ophthalmology and low-level vision, Norcia et al. 2015 for review) as well as their modulation by spatial and selective attention (Morgan et al. 1996; Müller et al. 2006), it has recently been extended to measure visual discrimination of more complex images, faces in particular (e.g. Rossion et al., 2012).

Besides the above-mentioned methodological advantages of the approach, the specific use of an oddball-like FPVS paradigm with complex images can provide direct measures of automatic and rapid face categorization processes with high validity and specificity. Particularly relevant for the present study are the generic face categorization paradigm (Rossion et al. 2015), yielding robust generic face categorization responses not accounted for by low-level stimulus characteristics, and the individual face discrimination paradigm, yielding robust individual face discrimination responses and a large face inversion effect in neurotypical adults (Liu-Shuang et al. 2014; Liu-Shuang et al. 2016; Xu et al. 2017).

Capitalizing on this approach, here we recorded EEG in 23 boys with ASD (8 to 12 years old) and 23 matched typically developing (TD) boys. Each child participated in FPVS-EEG experiments assessing generic face categorization (i.e., faces vs. objects) and discrimination of unfamiliar individual faces. In both experiments, children viewed images presented one by one at a rate of 6 images/s (i.e., a 6 Hz base rate, allowing only one fixation per face) in sequences of 40 s, while performing an orthogonal task of detecting changes in the color of the fixation cross. In the generic face categorization experiment (from Rossion et al. 2015), sequences consisted of natural images of various objects, with natural face images appearing as every fifth stimulus (i.e., at a rate of 6 Hz/5 = 1.2 Hz; Fig. 1A and movie 1 in SI). In the individual face discrimination experiment (from Liu-Shuang et al. 2014), sequences consisted of a single face identity varying in size, with faces of different identities appearing as every fifth face (i.e., at 1.2 Hz; Fig. 1B and movie 2 in SI). Sequences of inverted faces provide an electrophysiological measure of the face inversion effect and allow isolating specific markers of individual face discrimination (Liu-Shuang et al. 2014).

Fig. 1.

Fig. 1

Fast periodic visual stimulation (FPVS) paradigms used in 2 separate experiments to test generic face categorization and individual face discrimination.

Based on previous research with these paradigms, we expected robust general visual responses, as indexed by the 6 Hz base rate response, centered over medial occipital areas in both groups. Furthermore, we expected robust generic face categorization (i.e., face-selective) and face-individuation responses over occipito-temporal electrode sites in the TD group. We also expected to observe a large inversion effect in the TD group, indicated by a decreased amplitude of the face-individuation responses for inverted compared with upright faces (Liu-Shuang et al. 2014). In line with the studies indicating that individual face processing may be impaired in ASD, we expected that the amplitude of the face individuation response to upright faces would be reduced in individuals with ASD and that they would show a smaller face inversion effect. Pertaining to generic face categorization, our hypotheses were less specific due to a lack of studies with similar designs. On the one hand, eye-tracking studies suggest that social stimuli are processed as less salient, particularly in young children with ASD (e.g. Pierce et al. 2016). On the other hand, classical EEG studies using the N170 provide a mixed pattern of basic face processing abilities in ASD, with evidence for adequate as well as abnormal processing (Kang et al. 2018). As face individuation can be dissociated from generic face categorization (as in prosopagnosia, for instance; see e.g. Rossion 2014b; Rossion et al. 2011; see Liu-Shuang et al. 2016 for a dissociation between the two paradigms used here), it is plausible that the level of impairment in ASD is determined by the subtlety of the underlying socio-communicative processes that are required.

2. Material and methods

2.1. Participants

We tested 46 8-to-12-year-old boys, comprising 23 typically developing (TD) boys (mean age = 10.5 years, SD = 1.2) and 23 boys with ASD (mean age = 10.6 ± 1.24; Table 1). All participants were right-handed, had normal or corrected-to-normal vision, and had no intellectual disability. Participants with ASD were recruited through the Autism Expertise Center of the university hospital Leuven, Belgium. TD participants were recruited through elementary schools and sports clubs.

Table 1.

Participant characteristics.

ASD (mean ± SD) TD (mean ± SD) t(df) p
Verbal IQ 103 ± 15 109 ± 12 t(44) = −1.37 0.18
Performance IQ 102 ± 16 106 ± 9 t(44) = −1.03 0.31
Total IQ 103 ± 12 107 ± 9 t(44) = −1.53 0.13
Age 10.4 ± 1.2 10.5 ± 1.2 t(44) = −0.30 0.77
Social Responsiveness Scale (T-score) 85 ± 12 41 ± 4 t(29.14) = −16.15 <0.0001

Participant exclusion criteria were the presence or suspicion of a psychiatric, neurological, learning or developmental disorder (other than ASD or comorbid ADHD in ASD participants) in the participant or in a first- or second-degree relative. Inclusion criteria for the ASD group were a formal diagnosis of ASD made by a multidisciplinary team in a standardized way according to DSM-IV-TR or DSM-5 criteria (American Psychiatric Association 2013) and a total T-score above 60 on the Social Responsiveness Scale (SRS, parent version; Constantino and Gruber 2012). Six participants with ASD took medication to reduce symptoms related to ASD and/or ADHD (Rilatine, Concerta, Aripiprazol). The TD sample comprised healthy volunteers, matched for gender, age, and verbal and performance IQ. Parents of the TD children also completed the SRS questionnaire to exclude the presence of substantial ASD symptoms. Descriptive statistics for both groups are displayed in Table 1, showing that the groups did not differ in age or IQ. As expected, the groups differed highly significantly on SRS scores.

2.2. General procedure

The Medical Ethical Committee of the university hospital approved the study, and the participants as well as their parents provided informed consent according to the Declaration of Helsinki. All participants received a monetary reward and a small present of their choice. The session started with an assessment of intellectual abilities, followed by the two FPVS-EEG experiments and two behavioral face processing tasks. The FPVS-EEG and behavioral experiments were administered in a counter-balanced order.

2.3. IQ measures

An abbreviated version of the Dutch Wechsler Intelligence Scale for Children, Third Edition (WISC-III-NL; (Kort et al. 2005; Wechsler 1991)) was administered. Performance IQ was estimated by the subtests Block Design and Picture Completion, verbal IQ by the subtests Vocabulary and Similarities (Sattler 2001).

2.4. Behavioral measures

Two computerized behavioral face recognition tasks were administered: the Benton Facial Recognition Test (BFRT) (Benton et al. 1983) and a shortened version of the Cambridge Face Memory Test (CFMT) (Duchaine and Nakayama 2006).

The BFRT is a widely used test for face perception abilities in adults which has also been used in children (De Heering et al. 2012). We used a digitized version in which grayscale photographs were presented on a computer screen (BFRT-c (Rossion and Michel 2018)). The BFRT-c requires matching facial identities despite changes in lighting, viewpoint and size. Hence, participants cannot rely on a low-level pixel-matching strategy. Target, probe and distractor face pictures are shown simultaneously on the screen so that memory load is minimal.

In the CFMT, participants also have to match faces across changes in viewpoint and illumination, but a memory component is involved as well. To minimize the testing burden for the children, we only administered the first stage of the test. Participants are first presented with three study images of the same face (frontal, left and right viewpoints), each for 3 s. Then, a display with three faces is presented, comprising one of the study images together with two distractor faces, and participants have to select the target identity.

2.5. FPVS EEG experiment

Two FPVS-EEG experiments were administered in a randomized order.

2.5.1. Experiment 1: generic face categorization

2.5.1.1. Stimuli

The same stimuli as in Rossion et al. (2015) were used: 200 images of various non-face objects (animals, plants, man-made objects) and 50 images of faces, all within their original background. All images were centered but differed in size, viewpoint, lighting conditions and background. The entire set of stimuli is available online at http://face-categorization-lab.webnode.com/resources/natural-face-stimuli/. All stimuli were gray-scaled, resized to 200 × 200 pixels, and equalized for mean pixel luminance and root-mean-square contrast across the whole image. Face and object images were presented in random order. At a distance of 80 cm and a resolution of 800 × 600 pixels, the stimuli subtended approximately 3.9 × 3.9 degrees of visual angle.

2.5.1.2. FPVS procedure

The procedure was similar to that of Rossion et al. (2015), except for a shorter duration of the stimulation sequences. During EEG recording, participants were seated at a distance of 80 cm from a computer monitor (24-in. LED-backlit LCD). They viewed sequences of images appearing at the center of the monitor. During the sequences, stimuli were presented through sinusoidal contrast modulation at a rate of 6 Hz using in-house software (Rossion et al. 2015). A sequence lasted 44 s, comprising 40 s of stimulation at full contrast flanked by 2 s of fade-in and 2 s of fade-out, during which contrast gradually increased or decreased, respectively. Fade-in and fade-out were used to avoid eye blinks and abrupt eye movements due to the sudden appearance or disappearance of flickering stimuli. In total, there were four sequences and the total duration of the experiment was approximately 5 min.
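For illustration, the contrast profile described above can be sketched as follows (a hedged approximation of the stimulation principle, not the in-house software; the assumed 60 Hz refresh rate and the variable names are ours):

```python
import numpy as np

refresh = 60.0                        # assumed monitor refresh rate (Hz)
base_rate = 6.0                       # stimulation rate (Hz)
fade, full = 2.0, 40.0                # fade-in/out and full-contrast durations (s)
t = np.arange(0, fade + full + fade, 1 / refresh)

# Contrast oscillates sinusoidally between 0 and 1, six times per second.
contrast = 0.5 * (1 - np.cos(2 * np.pi * base_rate * t))

# Linear fade-in over the first 2 s and fade-out over the last 2 s.
envelope = np.clip(np.minimum(t / fade, (t[-1] - t) / fade), 0.0, 1.0)
contrast *= envelope
```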

In each sequence, natural images of objects were presented at 6 Hz, with images of faces presented periodically as every fifth image (i.e. at 1.2 Hz = 6/5 Hz, Fig. 1, see Supplemental Movie 1). All images were drawn randomly from their respective categories, cycling through all available images before any image repeat.

Participants were instructed to fixate a black cross positioned in the center of the stimuli while continuously monitoring the flickering stimuli. They were asked to press a key whenever they detected a brief (500 ms) change in the color of the fixation cross (which occurred randomly 10 times per sequence). This task was orthogonal to the manipulation of interest and served to ensure a constant level of attention throughout the entire experiment.

2.5.2. Experiment 2: individual face discrimination

2.5.2.1. Stimuli

The same stimuli as in Liu-Shuang et al. (2014) were used: 25 female and 25 male faces with a neutral expression, shown on a neutral gray background, without facial hair, and cropped to remove any external features. Final images had a height of 250 pixels and a width of 186 ± 11 pixels. Shown at a distance of 80 cm, the stimuli subtended a visual angle of approximately 5 × 4 degrees. Inverted versions were created by flipping all face images upside-down. Mean luminance of the faces was equalized online during stimulation.

2.5.2.2. Procedure

As in experiment 1, participants viewed sequences of face images presented through sinusoidal contrast modulation at a rate of 6 Hz (Fig. 1, see Supplemental Video 2). In each sequence, a face of a given identity (e.g., identity A) was randomly selected and repeatedly presented. At every 5th presentation, a face of a different identity (e.g., identity B, C, D, …) was shown. Hence, changes in facial identity occurred periodically at a rate of 1.2 Hz (6/5 Hz), and a sequence unfolded as follows: AAAABAAAAC…. The experiment consisted of two conditions in which faces were presented either upright or inverted (Fig. 1). Each condition was presented in four sequences (two with male faces, two with female faces). Each sequence started with a blank screen (2–5 s), followed by 2 s of fade-in, 40 s of full-contrast stimulation and 2 s of fade-out.

The order of conditions was randomized. At each presentation cycle, the size of the face varied randomly between 80% and 120% (in 20% steps) of the original size to avoid simple image-based repetition effects and the confounding of identity changes with changes in low-level features. As in experiment 1, participants were seated at a distance of 80 cm from the computer screen and were instructed to fixate a cross presented on the faces either between the eyes (4 sequences) or on the mouth (4 sequences). This manipulation was implemented to investigate potential group differences when fixating the eye vs. the mouth region. However, analyzing the results separately by fixation position indicated no significant effect of position and no position-by-group interactions for the EEG responses (see Supplemental Fig. 2). We therefore collapsed the data across both fixation positions for the main analyses.
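For illustration, a sequence with this oddball structure and size jitter could be generated as sketched below (illustrative Python only, not the stimulation software; identity labels and function names are ours):

```python
import random

def make_sequence(identities, n_oddballs=48):
    """Return the identity order and size scaling for one 1.2 Hz oddball sequence."""
    base = random.choice(identities)                       # repeated base identity ("A")
    others = [i for i in identities if i != base]
    order, sizes = [], []
    for cycle in range(1, n_oddballs * 5 + 1):
        # every 5th presentation is a different, randomly drawn identity
        order.append(random.choice(others) if cycle % 5 == 0 else base)
        sizes.append(random.choice([0.8, 1.0, 1.2]))       # 80-120% in 20% steps
    return order, sizes

order, sizes = make_sequence(list("ABCDEFGH"))
print("".join(order[:15]))    # e.g. AAAACAAAAFAAAAB
```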

2.6. EEG acquisition

EEG was recorded using a BioSemi Active-Two amplifier system with 64 Ag/AgCl electrodes. During recording, the system uses two additional electrodes for reference and ground (CMS, common mode sense, and DRL, driven right leg). Horizontal and vertical eye movements were recorded using four electrodes placed at the outer canthi of the eyes and above and below the right orbit. The EEG was sampled at 512 Hz.

2.7. EEG analysis

2.7.1. Preprocessing

All EEG processing steps were carried out using Letswave 6 (http://nocions.webnode.com/letswave) and Matlab 2017 (The Mathworks). EEG data were segmented into 47-s epochs (2 s before and 5 s after each sequence), bandpass filtered (0.1 to 100 Hz) using a fourth-order Butterworth filter, and downsampled to 256 Hz. At the group level, there was no difference in eye-blink rate (t(44) = 0.375, p = 0.71). For one participant of the control group who blinked excessively (0.66 blinks/s, more than 2 SD above the mean of 0.43 blinks/s computed across all participants from both groups), blinks were corrected by means of independent component analysis (ICA) using the runica algorithm (Bell and Sejnowski 1995; Makeig et al. 1996) as implemented in EEGLAB. For this participant, the first component, which accounted for most of the variance and represented vertical eye movements, was removed. No participant from the ASD group blinked more than this threshold. Note that FPVS yields responses with a high SNR at specific frequency bins, whereas blink artefacts are broadband and thus do not generally interfere with the responses at the predefined frequencies (Regan 1989). Hence, blink correction (or removal of trials with many blinks) is not performed systematically in such studies (e.g. Rossion and Boremanse 2011). Next, noisy electrodes were linearly interpolated from the 3 spatially nearest electrodes (no more than 5% of the electrodes, i.e. 3 electrodes, were interpolated). All data segments were re-referenced to a common average reference.

2.7.2. Frequency-domain analysis

Preprocessed data segments were further cropped to contain an integer number of 1.2 Hz cycles, beginning after fade-in until 39.1992 s (48 cycles, 10,035 time bins in total). The resulting segments were averaged for each experiment and condition separately (generic face categorization, individual face discrimination: upright, inverted), transformed into the frequency domain using a fast Fourier transform (FFT), and the amplitude spectrum was computed with a high spectral resolution of 0.025 Hz (1/40 s).
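A minimal sketch of this step, with assumed variable names and array shapes (not the Letswave pipeline used by the authors), is:

```python
import numpy as np

fs = 256.0                                     # sampling rate after downsampling (Hz)

def amplitude_spectrum(segments, n_keep):
    """segments: (n_sequences, n_channels, n_times) EEG starting after fade-in;
    n_keep: number of samples spanning an integer number of 1.2 Hz cycles."""
    avg = segments[..., :n_keep].mean(axis=0)        # average sequences in the time domain
    amp = np.abs(np.fft.rfft(avg, axis=-1)) * 2 / n_keep
    freqs = np.fft.rfftfreq(n_keep, 1 / fs)          # ~0.025 Hz frequency resolution
    return freqs, amp
```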

In these experiments, the recorded EEG contains signal at frequencies that are integer multiples (harmonics) of the frequency at which images are presented (base stimulation frequency: 6 Hz) and at the frequency at which a dimension of interest is manipulated in the sequence (1.2 Hz; face appearance in experiment 1 and face identity change in experiment 2). Since the EEG response at harmonics of these frequencies reflects both the overall noise level and the signal unique to the stimulus presentation, we used 2 measures to describe the response in relation to the noise level: Signal-to-noise ratio (SNR) and baseline-corrected amplitudes (Dzhelyova et al. 2017; Liu-Shuang et al. 2014). SNR was computed at each frequency bin as the amplitude value at a given bin divided by the average amplitude of the 20 surrounding frequency bins (12 bins on each side, i.e., 24 bins, but excluding the 2 bins directly adjacent and the 2 bins with the most extreme values). Baseline-corrected amplitude was computed in the same way but subtracting the average amplitude of the 20 surrounding bins. For group visualization (Fig. 2), we computed across-subjects averages of the SNR and baseline-corrected amplitudes for each condition and electrode separately.
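The noise estimate and the two derived measures can be sketched as follows (helper names are ours; we read "the 2 bins with the most extreme values" as the minimum and maximum of the remaining neighboring bins):

```python
import numpy as np

def noise_bins(amp, k):
    """20-bin noise estimate around frequency bin k: 12 bins on each side,
    minus the 2 immediately adjacent bins and the 2 most extreme values."""
    nb = np.r_[amp[k - 12:k - 1], amp[k + 2:k + 13]]     # 22 neighboring bins
    return np.sort(nb)[1:-1]                             # drop minimum and maximum -> 20 bins

def snr(amp, k):
    return amp[k] / noise_bins(amp, k).mean()

def baseline_corrected(amp, k):
    return amp[k] - noise_bins(amp, k).mean()
```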

Fig. 2.

Fig. 2

Spectral representation and scalp distribution of EEG signal during FPVS.
  • A.
    Similar generic face categorization response in ASD and TD. SNR spectrum over the averaged electrodes of left and right occipito-temporal (OT) ROI (indicated with open circles on the topographical maps). ASD (green) and TD boys (blue) show similar face-selective responses, reflected by equal amplitudes at the face presentation frequency (1.2 Hz) and harmonics (2.4 Hz, 3.6 Hz, …). The response is quantified by summing the baseline-corrected amplitudes over all significant harmonics and is visualized in scalp topographies and bar graphs. Scalp topographies show that the distribution of the face-selective response is also qualitatively similar in both groups. Bar graphs (mean ± SEM) show that the amplitudes of responses in LOT and ROT are similar for both groups.
  • B.
    Reduced individual face discrimination response to upright faces in ASD. SNR spectra, scalp topographies and bar graphs of left and right OT are shown for the conditions with upright and inverted faces. *: p < 0.05; **: p < 0.01.

For amplitude quantification, we first determined the range of harmonics of the 1.2 Hz and 6 Hz stimulation frequencies to consider for further analyses, based on group-level data. We determined the harmonics at which the amplitude was significantly above noise using a z-score approach for each experiment separately (Dzhelyova et al. 2017; Jacques et al. 2016; Liu-Shuang et al. 2014; Liu-Shuang et al. 2016; Rossion et al. 2015): (1) FFT amplitude spectra were averaged across subjects, (2) then averaged across all electrodes and across electrodes in the relevant ROIs for each condition and experiment, and (3) the resulting spectra were transformed into z-scores, computed as the difference between the amplitude at each frequency bin and the mean amplitude of the corresponding 20 surrounding bins, divided by the SD of the amplitudes in these 20 surrounding bins. For each experiment separately, we quantified the response by summing the baseline-corrected amplitudes of all consecutive significant harmonics (i.e., Z > 1.64 or p < 0.05, one-tailed; see Retter and Rossion 2016).
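A sketch of this z-score criterion and of the summation over consecutive significant harmonics, under the same assumptions as above (our helper names; the cap on the number of harmonics is arbitrary):

```python
import numpy as np

def zscore_bin(amp, k):
    nb = np.sort(np.r_[amp[k - 12:k - 1], amp[k + 2:k + 13]])[1:-1]   # 20-bin noise estimate
    return (amp[k] - nb.mean()) / nb.std(ddof=1)

def summed_response(amp, freqs, f0=1.2, exclude=(6.0, 12.0), z_crit=1.64, max_harm=20):
    """Sum baseline-corrected amplitudes over consecutive significant harmonics of f0,
    skipping harmonics that coincide with the base stimulation frequency."""
    total = 0.0
    for h in range(1, max_harm + 1):
        f = h * f0
        if any(np.isclose(f, e) for e in exclude):         # skip 6 Hz and 12 Hz
            continue
        k = int(np.argmin(np.abs(freqs - f)))
        if zscore_bin(amp, k) <= z_crit:                   # stop at first non-significant harmonic
            break
        nb = np.sort(np.r_[amp[k - 12:k - 1], amp[k + 2:k + 13]])[1:-1]
        total += amp[k] - nb.mean()                        # baseline-corrected amplitude
    return total
```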

Based on this criterion, for experiment 1 we quantified generic face categorization responses by summing 12 harmonics: harmonics 1 (1.2 Hz) to 14 (16.8 Hz), excluding the harmonics corresponding to the base stimulation frequency (6 and 12 Hz). For experiment 2, individual face discrimination responses were quantified as the sum of 6 harmonics, i.e., 1 (1.2 Hz) to 7 (8.4 Hz), excluding 6 Hz. For both experiments, the general visual response was quantified as the sum of the response at the base rate (6 Hz) and its next two harmonics (12 Hz and 18 Hz). Analyses performed at the individual level indicated that, despite the short recording time, every participant in both groups showed a significant (p < 0.01) face categorization response. Likewise, for the identity discrimination experiment, every individual showed clear peaks at the individual face discrimination frequencies, and individual-subject analyses indicated that the individual face discrimination response to upright faces was significant for 41 out of 46 participants (22 TD, 19 ASD). In experiment 1, the responses were overall higher and distributed over more harmonics than in experiment 2, in line with previous studies (e.g. Liu-Shuang et al. 2016, where both paradigms were also used and compared between normal observers and a prosopagnosic patient).

Based on inspection of the topographical maps of both groups (Fig. 2, Supplemental Figs. 1 and 3), and in line with previous studies using these paradigms (e.g. Dzhelyova and Rossion, 2014a, Dzhelyova and Rossion, 2014b; Liu-Shuang et al., 2014, Liu-Shuang et al., 2016; Rossion et al. 2015), EEG amplitude was quantified within regions of interest (ROIs) in which the signal at multiple nearby electrodes was averaged. The analysis of the general visual response at the base rate frequency (6 Hz and its harmonics) focused on three ROIs: medial occipital (MO: Oz, Iz, O1, O2), left occipito-temporal (LOT: P7, P9, PO7) and right occipito-temporal (ROT: P8, P10, PO8). The analysis of the generic face categorization (Exp. 1) and face individuation (Exp. 2) responses at 1.2 Hz and harmonics focused on two ROIs: LOT (P7, P9, PO7) and ROT (P8, P10, PO8). The electrodes in these ROIs showed the largest responses in each of the groups, supporting the same spatial grouping (see Supplemental Fig. 3).

The baseline-corrected amplitudes in each ROI were statistically analyzed at the group level using repeated-measures mixed-model ANOVAs. The general visual (base rate, 6 Hz) response and the generic face categorization and individual face discrimination (1.2 Hz) responses were examined separately, using ROI (ROT, LOT, MO) and ROI (ROT, LOT) as within-subjects factors, respectively. For the individual face discrimination experiment, Orientation (upright vs. inverted faces) was an additional within-subjects factor. In both experiments, Group (ASD vs. TD) was a between-subjects factor for the comparison between typically developing children and children with ASD. The assumption of sphericity was checked using Mauchly's test (α = 0.05) and the assumption of normality of the dependent variable was checked using a Shapiro-Wilk test (α = 0.05). If sphericity was not met, degrees of freedom were adjusted with a Greenhouse-Geisser correction. Assumptions of normality were met for all dependent variables. The assumption of homogeneity of variances was checked using Levene's test (α = 0.05). For significant effects, post-hoc pairwise comparisons were conducted using a Bonferroni correction for multiple comparisons.

In addition, we determined the significance of generic face categorization/individual face discrimination responses within the ROIs for each individual participant as follows (e.g., Dzhelyova et al. 2017): (1) the raw FFT amplitude spectrum was averaged across electrodes per ROI, and (2) cut into segments centered on the harmonics of the 1.2 Hz frequency bin surrounded by 20 neighboring bins on each side; (3) the amplitude values across 12 segments (experiment 1) and 6 segments (experiment 2) of FFT spectra were summed; (4) the summed FFT spectrum was transformed into a z-score using the 20 surrounding bins (see above). Response within a given ROI/participant was considered significant if the z-score at the 1.2 Hz frequency bin exceeded 2.33 (i.e., p < 0.01 one-tailed: signal>noise).
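This single-subject procedure can be sketched as follows (assumed input shapes and helper names; the 20-bin noise rule is reused from above):

```python
import numpy as np

def subject_response_significance(amp_roi, freqs, harmonics, z_crit=2.33, halfwidth=20):
    """amp_roi: raw FFT amplitude spectrum averaged over the ROI electrodes;
    harmonics: frequencies to pool, e.g. [1.2, 2.4, ...] excluding 6 Hz."""
    segments = []
    for f in harmonics:
        k = int(np.argmin(np.abs(freqs - f)))
        segments.append(amp_roi[k - halfwidth:k + halfwidth + 1])   # 41-bin segment
    summed = np.sum(segments, axis=0)                               # sum across harmonics
    c = halfwidth                                                   # center bin of the summed segment
    nb = np.sort(np.r_[summed[c - 12:c - 1], summed[c + 2:c + 13]])[1:-1]   # 20 noise bins
    z = (summed[c] - nb.mean()) / nb.std(ddof=1)
    return z, z > z_crit
```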

Finally, we applied classification models to classify individuals as belonging to the ASD or TD group. As input variables we used the most promising outcome measures, namely the amplitudes of the individual face discrimination responses. We considered three types of classification models: linear discriminant analysis (LDA), logistic regression (LR) and support vector machines (SVM), all from the scikit-learn library (Pedregosa et al. 2011). LDA is a classifier with a linear decision boundary generated by fitting class-conditional probability distributions to the data: for each class, a multivariate Gaussian distribution is fitted to the subject-specific vectors of significant harmonics, and a subject is classified by considering the log of the ratio of the class-specific probabilities. In LR, the log of this probability ratio is fitted directly by a linear model. A linear SVM is a linear classifier with the additional constraint of making the margin between the two categories as wide as possible.
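A sketch of this classification step with scikit-learn is given below; the feature matrix is a random placeholder standing in for the per-child harmonic amplitudes, and the exact preprocessing of the features is not shown:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(46, 10))            # placeholder: 46 children x 10 harmonic amplitudes
y = np.array([1] * 23 + [0] * 23)        # 1 = ASD, 0 = TD

lda = LinearDiscriminantAnalysis().fit(X, y)
lr = LogisticRegression(max_iter=1000).fit(X, y)
svm = SVC(kernel="linear").fit(X, y)     # linear SVM: maximum-margin linear boundary

projection = lda.transform(X).ravel()    # 1-D LDA projection, as visualized in Fig. 3
print(lda.predict(X)[:5], lr.predict(X)[:5], svm.predict(X)[:5])
```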

3. Results

3.1. No difference in general visual base rate responses in TD and ASD

Fig. 2 displays the results for the generic face categorization and individual face discrimination experiments. In both experiments we observed robust brain responses at harmonics of the 6 Hz base frequency, reflecting the general response to all stimuli presented in the sequences (see Fig. 2A and B and Supplemental Fig. 1A and 1B). This response was focused on medial occipital regions, and its magnitude and scalp distribution were extremely similar in both groups, yielding no significant group differences or interactions with group (all p > 0.25).

3.2. No difference in generic face categorization responses in TD and ASD

Fig. 2A displays the results for the generic face categorization experiment, showing that the magnitude and scalp distribution of the face-selective response were virtually identical in the ASD and TD groups. Responses were observed over bilateral occipito-temporal (OT) regions, with maximal amplitude at electrodes PO7 (left) and PO8 (right). Repeated-measures mixed-model ANOVAs performed on averaged response amplitudes revealed no significant group difference between ASD and TD (F1,44 = 0.002, p = 0.96, ηp2 = 0), a significant effect of Region of Interest (ROI) (F1,44 = 8.6, p < 0.01, ηp2 = 0.16), indicating that face-selective responses were larger over the right than the left OT region, and no Group by ROI interaction (F1,44 = 1.8, p = 0.19, ηp2 = 0.04), indicating that this pattern held in both groups.

3.3. Selectively reduced individual face discrimination responses in children with ASD

Fig. 2B displays the results for individual face discrimination, both for upright and inverted faces. Individual face discrimination responses were centered on bilateral OT, with a right hemisphere dominance. A repeated-measures mixed-model ANOVA with factors Group, Face Orientation and ROI revealed a main effect of Group (F1,44 = 8.45, p < 0.01, ηp2 = 0.16), Orientation (F1,44 = 18.3, p < 0.001, ηp2 = 0.29) and ROI (F1,44 = 14.5, p < 0.001, ηp2 = 0.25). Crucially, the significant Group by Orientation interaction (F1,44 = 7.67, p < 0.01, ηp2 = 0.15) indicated that only upright faces triggered a higher response in the TD versus ASD group (pbonferroni < 0.01; Fig. 2B), whereas the response to inverted faces did not differ between groups (pbonferroni = 0.55). Likewise, only the TD group displayed a significant face inversion effect with larger responses for upright compared to inverted faces (pbonferroni < 0.001; ASD group: pbonferroni = 0.29). There were no other significant two- or three-way interactions.

An additional ANOVA including all electrodes confirmed that no electrode showed a significantly larger response in the ASD group than in the TD group. This ANOVA yielded a significant Group by Electrode interaction (F(63, 2772) = 2.70, p < 0.0001), which we interpreted via post-hoc tests. Applying a strict Bonferroni correction for the number of electrodes tested (an overly severe correction, since the activities recorded at the different electrodes are not independent) yields a statistical threshold of 0.05/64 = 0.00078. Even at this highly conservative threshold, six contiguous posterior electrodes showed a higher response in the TD group than in the ASD group: P7, P9, PO7, P10, PO8 and O1 (all ps < 0.0001).

As the identification of a sensitive marker of impaired socio-communicative processing extends beyond statistical group differences (Kapur et al. 2012; Loth et al. 2016; McPartland 2017), we also examined how well group membership (TD vs. ASD) can be predicted from the neural responses to brief identity changes in upright faces. To this end, we used LDA, LR and SVM to classify individuals as belonging to the ASD or TD group. The ten-dimensional input vectors for the models consist of the amplitudes at the first five harmonics of the 1.2 Hz identity-change frequency for the left and right occipito-temporal ROIs. Harmonics are expected to be highly correlated, which is accounted for in the models. Fig. 3 demonstrates the linear separability of the ASD and TD groups based on the LDA model. To assess generalizability, we carried out a leave-one-out cross-validation of the models, showing that 78.3% (LDA), 82.6% (LR) and 87.0% (SVM) of the individuals with ASD were identified correctly (recall), and that, overall, a correct diagnostic identification of ASD vs. TD (accuracy) was obtained for 73.9% (LDA), 76.1% (LR) and 78.3% (SVM) of the participants. Crucially, we addressed the small-sample problem and the possibility of over-fitting by performing permutation tests to statistically assess the robustness of the models (Noirhomme et al. 2014). With 10,000 permutations and two feature-selection possibilities (including the amplitude at 8.4 Hz or not), the probability of obtaining these accuracies by chance was p = 0.0049 (LDA), p = 0.0026 (LR) and p = 0.0042 (SVM).
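The leave-one-out cross-validation and permutation test can be sketched with scikit-learn utilities as follows (placeholder data; the study used 10,000 permutations and additional feature-selection variants not shown here):

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score, permutation_test_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(46, 10))            # placeholder features (harmonic amplitudes per child)
y = np.array([1] * 23 + [0] * 23)        # 1 = ASD, 0 = TD

loo = LeaveOneOut()
clf = SVC(kernel="linear")
accuracy = cross_val_score(clf, X, y, cv=loo).mean()       # leave-one-out accuracy

# Permutation test: re-run the cross-validation with shuffled labels to estimate
# the chance distribution of the accuracy and an associated p-value.
score, perm_scores, p_value = permutation_test_score(
    clf, X, y, cv=loo, n_permutations=1000, random_state=0)
print(f"LOO accuracy = {accuracy:.3f}, permutation p = {p_value:.4f}")
```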

Fig. 3.

Fig. 3

Violin plot of the ten-dimensional data of the relevant harmonics of the individual face discrimination response, projected along the LDA projection vector. The LDA was fitted to the full dataset and illustrates the separability of the groups. The horizontal line represents the decision boundary of the LDA classifier.

We correlated the amplitude of the EEG responses to upright individual face discrimination with the scores on the SRS. While the correlation was significant across groups (r = 0.45, p < 0.01), we did not find any significant correlations within the groups.

3.4. No group difference in control task performance and behavioral facial identity recognition

Both groups performed equally well on the behavioral fixation cross change detection task, suggesting a similar level of attention throughout the experiments. Both groups showed accuracies between 91% and 96% in the two experiments, with mean response times between 0.48 and 0.51 s. Statistical analyses showed no differences between the ASD and TD groups, neither for the generic face categorization experiment (accuracy: t(25.79) = −0.53, p = 0.6; response times: t(36) = 0.93, p = 0.36), nor for the face identification experiment (accuracy: t(24.45) = −0.97, p = 0.34; response times: t(35) = 1.23, p = 0.23).

All children also completed two behavioral face recognition tasks involving face matching and a memory component, but the group differences did not reach significance (all p > 0.09; see Table 2).

Table 2.

Behavioral data on explicit facial identity recognition.

ASD (mean ± SD) TD (mean ± SD) Statistic p
Cambridge face memory test accuracy
 (% correct) 0.72 ± 0.26 0.83 ± 0.17 W = 303 0.255
 RT (s) 3.84 ± 1.15 4.48 ± 1.50 t(43) = 1.71 0.094
Benton facial recognition test accuracy
 (% correct) 0.71 ± 0.07 0.72 ± 0.08 t(41) = 0.60 0.552
 Benton RT (s) 11.72 ± 3.22 13.53 ± 5.12 W = 270.5 0.343

Note. Assumptions of normal distribution of the dependent variable and homogeneity of variances were checked using a Shapiro-Wilk test and a Levene's test (both with α = 0.05) for each dependent variable separately. If the assumptions were met, behavioral data were analyzed using a t-test for independent samples (with α = 0.05). If the assumption of normal distribution of the dependent variable was violated a Mann-Whitney U test (with α = 0.05) was used. Neither the Benton face recognition test nor the Cambridge face memory test showed significant group differences in terms of accuracy and response times (RT).

4. Discussion

We applied FPVS-EEG to assess implicit neural face processing in children with ASD as compared to matched TD controls. Our findings reveal a dissociation between generic face categorization on the one hand and individual face discrimination on the other hand, with the ASD group being selectively impaired in the latter, more fine-grained, perceptual ability.

The base rate response in both experiments, reflecting general synchronization to the visual stimulation, was of equal amplitude in ASD and TD participants, indicating that the brains of these children synchronize similarly to the general presentation rate of visual stimuli. This response reflects a mixture of low- and high-level processes and, as in previous studies with adults, was distributed mainly over medial occipital sites, possibly due to a major contribution of early visual cortical regions (Liu-Shuang et al. 2014). The lack of a group difference in the amplitude of this base rate response is in line with the absence of a group difference in performance on the orthogonal behavioral fixation cross change detection task, suggesting that children in both groups devoted a similar level of attention and motivation to all tasks.

The generic face categorization response was distributed over occipito-temporal sites and slightly right lateralized, reflecting the wide distribution of face-selective neural responses across occipital and temporal cortices (Rossion et al. 2015). This response was not as clearly right lateralized as typically seen in adults (Retter and Rossion 2016; Rossion et al. 2015) and infants (de Heering and Rossion 2015), but is in line with observations in younger children (preschoolers in Lochy et al. 2017). The absence of an amplitude difference in this generic face categorization response between the groups indicates that the brains of school-aged children with ASD are just as sensitive as those of TD children in implicitly detecting socially relevant information (i.e., faces) among a stream of non-social images. These results fit with evidence from other implicit social paradigms, such as social preference eye-tracking studies, showing that social orienting is not qualitatively impaired in school-aged children with ASD (Guillon et al. 2014).

In both groups, the individual face discrimination response (signaling the neural sensitivity to differences in facial identity) was distributed over occipito-temporal cortices and clearly right lateralized, in agreement with the well-established right hemispheric dominance of face perception in humans (e.g. Bentin et al. 1996; Jonas et al. 2016; Meadows 1974; Sergent & Signoret, 1992). With facial images presented at 6 Hz (~167 ms per face), the idiosyncratic characteristics of novel faces need to be grasped at a single glance. For upright faces, this rapid and automatic individual face discrimination is typically facilitated by the perception of the face as a whole, i.e., as a single representation not decomposed into features ("holistic face perception"; Rossion 2013 for review; Sergent 1984; Tanaka and Farah 1993; Young et al. 1987). When faces are presented upside-down, however, holistic face perception is impaired, even though the low-level properties of the images are preserved. The ASD group showed reduced facial identity discrimination responses only for upright faces, not for inverted faces. Accordingly, in boys with ASD, much like in brain-damaged patients with prosopagnosia (Busigny and Rossion 2010), rapid discrimination of upright faces is not superior to that of inverted faces, suggesting that individuals with ASD may employ atypical processing strategies when individuating faces (Evers et al. 2018; Tang et al. 2015).

Note that this group difference cannot be explained by different levels of attention or motivation, or by general differences in neural synchronization to visual stimulation. Likewise, the selective deficit in upright facial identity discrimination cannot be explained by an inability to reliably detect discrimination responses in the ASD group, since the two groups did not differ in the generic categorization of faces and in the discrimination of inverted faces. These findings highlight the importance of studying face processing by a broader series of tasks, as the exclusive use of the generic face categorization paradigm would have led to the wrong conclusion that boys with ASD present intact face processing.

This study illustrates the strength of the FPVS-EEG approach compared, for instance, to a standard EEG approach with slow non-periodic stimuli leading to components (ERPs) analyzed in the time domain (Regan 1989; Retter and Rossion 2016; Rossion 2014a). With FPVS-EEG, the process of interest is identified objectively because the frequency is known in advance by the experimenter. Moreover, the response can be quantified directly in the frequency domain without having to define time windows based on participants' specific responses. The technique is also highly sensitive because a large number of discriminations can be presented in a very short amount of time. Most importantly, with a high frequency resolution, the response of interest falls into a tiny frequency bin (and its harmonics) containing very little noise, since the noise is distributed over numerous frequency bins (Regan 1989). This high sensitivity allows significant responses to be obtained in virtually every single participant after a few minutes of testing (Liu-Shuang et al. 2014; Xu et al. 2017). The response is also highly specific, dissociating general visual activity (at the base rate) from responses reflecting generic face categorization (Rossion et al. 2015), individual face discrimination (Liu-Shuang et al. 2014), or facial expression discrimination (Dzhelyova et al. 2017), without the need to subtract a control condition from the condition of interest. Finally, the paradigms measure these functions under severe time constraints, i.e., here a single glance at a face, and without an explicit task, mimicking the speed and automaticity of these processes in real life without adding unreasonable pressure for behavioral responses in the population tested.

Our results indicate that the FPVS-EEG approach can rapidly pinpoint face processing impairments in ASD that may be invisible in explicit behavioral recognition tasks. Here, crucially, the impairments in ASD were confined to more subtle socio-communicative cues, such as the (holistic) neural processing of facial identity. One might question why these specific effects are not reflected in the behavioral face recognition tasks in our sample. This may be due to the explicit nature of these behavioral tasks, allowing compensatory strategies and the influence of factors beyond face processing, such as motivation and attention (Weigelt et al. 2012), and is also illustrated by the weak correlation between the two behavioral face processing measures. Previous behavioral research has shown that explicit tasks often do not discriminate between TD and ASD groups (Weigelt et al. 2012). Hence, implicit tasks might better reflect how faces are processed in daily life. Against this background, a recent study investigated the association between the FPVS-EEG face individuation response and performance on the CFMT, and concluded that the two measures share some common variance but are not strongly related: the FPVS response captures the perceptual processes involved in facial identity discrimination, while the CFMT is an explicit and cognitively complex task requiring memory and decision-making processes that go beyond the mere perceptual differentiation of face identity (Xu et al. 2017). In addition, due to time constraints, the children in our study only completed the first part of the CFMT, which is typically the easiest part, characterized by the highest performance (Bowles et al., 2009), and thus possibly less sensitive to group differences.

While the correlation between the EEG face identity discrimination responses and SRS scores was significant across the groups, we did not find any significant correlations within the groups. We believe this is due to two factors. First, there is limited variation in SRS scores within each group, partly because our inclusion criteria imposed a cut-off per group: boys in the ASD group had a total SRS T-score above 60, while TD boys all scored below 60. Moreover, the SRS measures the severity of ASD symptoms across a variety of domains based on evaluations by the parents. Hence, while it gives a clear idea of the perceived symptoms in daily life, this measure does not purely reflect actual behavior and performance, and is also determined by several other parent-related factors (e.g., whether there are other children in the family with an ASD diagnosis) (De la Marche et al. 2015). Second, although the EEG individual face discrimination response reflects a highly selective automatic process, variations in the amplitude of this response across individuals also reflect general factors such as skull thickness and cortical folding (see the discussion in Xu et al. 2017). While these factors should be neutralized when comparing relatively large groups of participants (or when comparing different paradigms within the same participants), they add variance to amplitude differences within a group of individuals, reducing the significance of correlation measures.

The use of a well-selected, well-matched and homogeneous participant sample in terms of age, gender, IQ and diagnostic status is certainly an asset, and allowed us to observe clear differences in the neural individual face discrimination response. In comparison, a recent study with a similar FPVS-EEG approach (testing only individual face discrimination) failed to find such differences in adults with ASD (Dwyer et al. 2018). Yet, in that study, participants were male and female adults spanning a broad age range. Moreover, the patient group comprised self-selected individuals who reported having a diagnosis of ASD but had not undergone a formal professional multidisciplinary assessment. In contrast, in our study, all patients in the ASD group had a formal and recently confirmed diagnosis of ASD, as assessed by the multidisciplinary team of the University Hospitals.

Against this background, one may question whether the findings will generalize to the broader autism population; further studies will be required to address this issue. Importantly, however, the advantages of the FPVS approach offer a unique opportunity to obtain data in low-functioning individuals with ASD, as well as in young children and infants (de Heering and Rossion 2015; Lochy et al. 2017). Furthermore, in the longer term, the discrimination responses obtained with FPVS-EEG have the potential to serve as a biomarker, possibly for the early detection of ASD. Indeed, when the individual data were taken into account, the classification analyses missed only a few participants with ASD, showing great potential for individual classification. Evidently, for this purpose, the sensitivity and specificity of the approach should be further improved, possibly by incorporating data from additional FPVS-EEG paradigms that also show discriminative value.

5. Conclusions

While showing typical generic face categorization responses, individuals with ASD were impaired at the rapid individuation of faces, a crucial aspect of everyday social interactions. Given the strength of the effects obtained, the implicit nature of the measure and the straightforward application and analysis, the presented FPVS-EEG approach opens an avenue for studying populations that are less amenable to explicit verbal instructions and hence less accessible for research, such as infants and people with low-functioning ASD.

Declaration of interest

The authors declare no competing interests.

Acknowledgements

This work was supported by grants from the Research Foundation Flanders (FWO; G0C7816N), an Excellence of Science (EOS) grant (G0E8718N; HUMVISCAT), and the Marguerite-Marie Delacroix Foundation.

Footnotes

Appendix A: Supplementary data to this article can be found online at https://doi.org/10.1016/j.nicl.2018.101613.

Contributor Information

Sofie Vettori, Email: sofie.vettori@kuleuven.be.

Bart Boets, Email: bart.boets@kuleuven.be.

Appendix A. Supplementary data

Supplementary material (mmc1.docx).

References

1. Adrian E.D., Matthews B.H.C. The interpretation of potential waves in the cortex. J. Physiol. 1934;81(4):440–471. doi: 10.1113/jphysiol.1934.sp003147.
2. American Psychiatric Association. American Psychiatric Pub; 2013. Diagnostic and statistical manual of mental disorders (DSM-5).
3. Amihai I., Deouell L.Y., Bentin S. Neural adaptation is related to face repetition irrespective of identity: a reappraisal of the N170 effect. Exp. Brain Res. 2011;209(2):193–204. doi: 10.1007/s00221-011-2546-x.
4. Barton J.J.S., Hefter R.L., Cherkasova M.V., Manoach D.S. Investigations of face expertise in the social developmental disorders. Neurology. 2007;69(9):860–870. doi: 10.1212/01.wnl.0000267842.85646.f2.
5. Behrmann M., Avidan G., Leonard G.L., Kimchi R., Luna B., Humphreys K., Minshew N. Configural processing in autism and its relationship to face processing. Neuropsychologia. 2006;44(1):110–129. doi: 10.1016/j.neuropsychologia.2005.04.002.
6. Behrmann M., Thomas C., Humphreys K. Seeing it differently: visual processing in autism. Trends Cogn. Sci. 2006;10(6):258–264. doi: 10.1016/j.tics.2006.05.001.
7. Bell A.J., Sejnowski T.J. An information-maximization approach to blind separation and blind deconvolution. Neural Comput. 1995;7(6):1129–1159. doi: 10.1162/neco.1995.7.6.1129.
8. Benning S.D., Kovac M., Campbell A., Miller S., Hanna E.K., Damiano C.R., … Dichter G.S. Late positive potential ERP responses to social and nonsocial stimuli in youth with autism spectrum disorder. J. Autism Dev. Disord. 2016;46(9):3068–3077. doi: 10.1007/s10803-016-2845-y.
9. Bentin S., Allison T., Puce A., Perez E., McCarthy G. Electrophysiological studies of face perception in humans. J. Cogn. Neurosci. 1996;8(6):551–565. doi: 10.1162/jocn.1996.8.6.551.
10. Benton A.L., Sivan A.B., Hamsher K.D.S., Varney N.R., Spreen O. Contribution to Neuropsychological Assessment. Oxford University Press; New York: 1983. Facial recognition: stimulus and multiple choice pictures.
11. Bowles D.C., McKone E., Dawel A., Duchaine B., Palermo R., Schmalzl L., … Yovel G. Diagnosing prosopagnosia: effects of ageing, sex, and participant–stimulus ethnic match on the Cambridge Face Memory Test and Cambridge Face Perception Test. Cognitive Neuropsychology. 2009;26(5):423–455. doi: 10.1080/02643290903343149.
12. Busigny T., Rossion B. Acquired prosopagnosia abolishes the face inversion effect. Cortex. 2010;46(8):965–981. doi: 10.1016/j.cortex.2009.07.004.
13. Caharel S., D'Arripe O., Ramon M., Jacques C., Rossion B. Early adaptation to repeated unfamiliar faces across viewpoint changes in the right hemisphere: evidence from the N170 ERP component. Neuropsychologia. 2009;47(3):639–643. doi: 10.1016/j.neuropsychologia.2008.11.016.
14. Caharel S., Leleu A., Bernard C., Viggiano M.-P., Lalonde R., Rebaï M. Early holistic face-like processing of Arcimboldo paintings in the right occipito-temporal cortex: evidence from the N170 ERP component. Int. J. Psychophysiol. 2013;90(2):157–164. doi: 10.1016/j.ijpsycho.2013.06.024.
15. Campatelli G., Federico R.R., Apicella F., Sicca F., Muratori F. Face processing in children with ASD: literature review. Res. Autism Spectr. Disord. 2013;7(3):444–454.
16. Chawarska K., Macari S., Shic F. Decreased spontaneous attention to social scenes in 6-month-old infants later diagnosed with autism spectrum disorders. Biol. Psychiatry. 2013;74(3):195–203. doi: 10.1016/j.biopsych.2012.11.022.
17. Chita-Tegmark M. Social attention in ASD: a review and meta-analysis of eye-tracking studies. Res. Dev. Disabil. 2016;48:79–93. doi: 10.1016/j.ridd.2015.10.011.
18. Churches O., Nicholls M., Thiessen M., Kohler M., Keage H. Emoticons in mind: an event-related potential study. Soc. Neurosci. 2014;9(2):196–202. doi: 10.1080/17470919.2013.873737.
19. Constantino J.N., Gruber C.P. Western Psychological Services; Torrance, CA: 2012. Social Responsiveness Scale (SRS).
20. Crouzet S.M., Thorpe S.J. Low-level cues and ultra-fast face detection. Front. Psychol. 2011;2(342). doi: 10.3389/fpsyg.2011.00342.
21. Crouzet S.M., Kirchner H., Thorpe S.J. Fast saccades toward faces: face detection in just 100 ms. J. Vis. 2010;10(4):16.1–17. doi: 10.1167/10.4.16.
22. Dawson G., Carver L., Meltzoff A.N., Panagiotides H., McPartland J., Webb S.J. Neural correlates of face and object recognition in young children with autism spectrum disorder, developmental delay, and typical development. Child Dev. 2002;73(3):700–717. doi: 10.1111/1467-8624.00433.
23. Dawson G., Webb S.J., McPartland J. Understanding the nature of face processing impairment in autism: insights from behavioral and electrophysiological studies. Dev. Neuropsychol. 2005;27(3):403–424. doi: 10.1207/s15326942dn2703_6.
24. De Heering A., Rossion B., Maurer D. Developmental changes in face recognition during childhood: evidence from upright and inverted faces. Cogn. Dev. 2012;27(1):17–27.
25. De la Marche W., Noens I., Kuppens S., Spilt J.L., Boets B., Steyaert J. Measuring quantitative autism traits in families: informant effect or intergenerational transmission? Eur. Child Adolesc. Psychiatry. 2015;24(4):385–395. doi: 10.1007/s00787-014-0586-z.
26. Duchaine B., Nakayama K. The Cambridge Face Memory Test: results for neurologically intact individuals and an investigation of its validity using inverted face stimuli and prosopagnosic participants. Neuropsychologia. 2006;44(4):576–585. doi: 10.1016/j.neuropsychologia.2005.07.001.
27. Dwyer P., Xu B., Tanaka J.W. Investigating the perception of face identity in adults on the autism spectrum using behavioural and electrophysiological measures. Vis. Res. 2018. doi: 10.1016/j.visres.2018.02.013.
28. Dzhelyova M., Rossion B. Supra-additive contribution of shape and surface information to individual face discrimination as revealed by fast periodic visual stimulation. J. Vis. 2014;14(14):15. doi: 10.1167/14.14.15.
29. Dzhelyova M., Rossion B. The effect of parametric stimulus size variation on individual face discrimination indexed by fast periodic visual stimulation. BMC Neurosci. 2014;15:87. doi: 10.1186/1471-2202-15-87.
30. Dzhelyova M., Jacques C., Rossion B. At a single glance: fast periodic visual stimulation uncovers the spatio-temporal dynamics of brief facial expression changes in the human brain. Cereb. Cortex. 2017;27(8):4106–4123. doi: 10.1093/cercor/bhw223.
31. Evers K., Van Belle G., Steyaert J., Noens I., Wagemans J. Gaze-contingent display changes as new window on analytical and holistic face perception in children with autism spectrum disorder. Child Dev. 2018;89(2):430–445. doi: 10.1111/cdev.12776.
32. Falck-Ytter T. Face inversion effects in autism: a combined looking time and pupillometric study. Autism Res. 2008;1(5):297–306. doi: 10.1002/aur.45.
33. Feuerriegel D., Churches O., Hofmann J., Keage H.A.D. The N170 and face perception in psychiatric and neurological disorders: a systematic review. Clin. Neurophysiol. 2015;126(6):1141–1158. doi: 10.1016/j.clinph.2014.09.015.
34. Galper R.E. Recognition of faces in photographic negative. Psychon. Sci. 1970;19(4):207–208.
35. Guillon Q., Hadjikhani N., Baduel S., Rogé B. Visual social attention in autism spectrum disorder: insights from eye tracking studies. Neurosci. Biobehav. Rev. 2014;42:279–297. doi: 10.1016/j.neubiorev.2014.03.013.
36. Gunji A., Goto T., Kita Y., Sakuma R., Kokubo N., Koike T., … Inagaki M. Facial identity recognition in children with autism spectrum disorders revealed by P300 analysis: a preliminary study. Brain Dev. 2013;35(4):293–298. doi: 10.1016/j.braindev.2012.12.008.
37. Hedley D., Brewer N., Young R. The effect of inversion on face recognition in adults with autism spectrum disorder. J. Autism Dev. Disord. 2015;45(5):1368–1379. doi: 10.1007/s10803-014-2297-1.
38. de Heering A., Rossion B. Rapid categorization of natural face images in the infant right hemisphere. eLife. 2015;4. doi: 10.7554/eLife.06564.
39. Heisz J.J., Watter S., Shedden J.M. Progressive N170 habituation to unattended repeated faces. Vis. Res. 2006;46(1–2):47–56. doi: 10.1016/j.visres.2005.09.028.
40. Hershler O., Hochstein S. At first sight: a high-level pop out effect for faces. Vis. Res. 2005;45(13):1707–1724. doi: 10.1016/j.visres.2004.12.021.
41. Hershler O., Golan T., Bentin S., Hochstein S. The wide window of face detection. J. Vis. 2010;10(10):21. doi: 10.1167/10.10.21.
42. Hobson R.P., Ouston J., Lee A. What's in a face? The case of autism. Br. J. Psychol. 1988;79(4):441–453. doi: 10.1111/j.2044-8295.1988.tb02745.x.
43. Jacques C., D'Arripe O., Rossion B. The time course of the inversion effect during individual face discrimination. J. Vis. 2007;7(8):3. doi: 10.1167/7.8.3.
44. Jacques C., Retter T.L., Rossion B. A single glance at natural face images generate larger and qualitatively different category-selective spatio-temporal signatures than other ecologically-relevant categories in the human brain. NeuroImage. 2016;137:21–33. doi: 10.1016/j.neuroimage.2016.04.045.
45. Jemel B., Mottron L., Dawson M. Impaired face processing in autism: fact or artifact? J. Autism Dev. Disord. 2006;36(1):91–106. doi: 10.1007/s10803-005-0050-5.
46. Jonas J., Jacques C., Liu-Shuang J., Brissart H., Colnat-Coulbois S., Maillard L., Rossion B. A face-selective ventral occipito-temporal map of the human brain with intracerebral potentials. Proc. Natl. Acad. Sci. U. S. A. 2016;113(28):E4088–E4097. doi: 10.1073/pnas.1522033113.
47. Kang E., Keifer C.M., Levy E.J., Foss-Feig J.H., McPartland J.C., Lerner M.D. Atypicality of the N170 event-related potential in autism spectrum disorder: a meta-analysis. Biol. Psychiatry Cogn. Neurosci. Neuroimaging. 2018;3(8):657–666. doi: 10.1016/j.bpsc.2017.11.003.
48. Kapur S., Phillips A.G., Insel T.R. Why has it taken so long for biological psychiatry to develop clinical tests and what to do about it? Mol. Psychiatry. 2012;17(12):1174–1179. doi: 10.1038/mp.2012.105.
49. Kort W., Schittekatte M., Dekker P.H., Verhaeghe P., Compaan E.L., Bosmans M., Vermeir G. Psychologen HTPNIv; Amsterdam: 2005. WISC-III NL Wechsler Intelligence Scale for Children. Derde Editie NL. Handleiding en Verantwoording.
50. Langdell T. Recognition of faces: an approach to the study of autism. J. Child Psychol. Psychiatr. Allied Disciplines. 1978;19(3):255–268. doi: 10.1111/j.1469-7610.1978.tb00468.x.
51. Liu-Shuang J., Norcia A.M., Rossion B. An objective index of individual face discrimination in the right occipito-temporal cortex by means of fast periodic oddball stimulation. Neuropsychologia. 2014;52:57–72. doi: 10.1016/j.neuropsychologia.2013.10.022.
52. Liu-Shuang J., Torfs K., Rossion B. An objective electrophysiological marker of face individualisation impairment in acquired prosopagnosia with fast periodic visual stimulation. Neuropsychologia. 2016;83:100–113. doi: 10.1016/j.neuropsychologia.2015.08.023.
53. Lochy A., de Heering A., Rossion B. The non-linear development of the right hemispheric specialization for human face perception. Neuropsychologia. 2017. doi: 10.1016/j.neuropsychologia.2017.06.029.
54. Loth E., Spooren W., Ham L.M., Isaac M.B., Auriche-Benichou C., Banaschewski T., … Murphy D.G.M. Identification and validation of biomarkers for autism spectrum disorders. Nat. Rev. Drug Discov. 2016;15(1):70. doi: 10.1038/nrd.2015.7.
55. Makeig S., Bell A.J., Jung T.-P., Sejnowski T.J. Advances in Neural Information Processing Systems. 1996. Independent component analysis of electroencephalographic data; pp. 145–151. Retrieved from http://papers.nips.cc/paper/1091-independent-component-analysis-of-electroencephalographic-data.pdf.
56. McCleery J.P., Akshoomoff N., Dobkins K.R., Carver L.J. Atypical face versus object processing and hemispheric asymmetries in 10-month-old infants at risk for autism. Biol. Psychiatry. 2009;66(10):950–957. doi: 10.1016/j.biopsych.2009.07.031.
57. McPartland J.C. Developing clinically practicable biomarkers for autism spectrum disorder. J. Autism Dev. Disord. 2017;47(9):2935–2937. doi: 10.1007/s10803-017-3237-7.
58. Meadows J.C. The anatomical basis of prosopagnosia. J. Neurol. Neurosurg. Psychiatry. 1974;37(5):489–501. doi: 10.1136/jnnp.37.5.489.
59. Megreya A.M., Burton A.M. Recognising faces seen alone or with others: when two heads are worse than one. Applied Cognitive Psychology. 2006;20(7):957–972.
60. Monteiro R., Simões M., Andrade J., Castelo Branco M. Processing of facial expressions in autism: a systematic review of EEG/ERP evidence. Rev. J. Autism Dev. Disorders. 2017;4(4):255–276.
61. Morgan S.T., Hansen J.C., Hillyard S.A. Selective attention to stimulus location modulates the steady-state visual evoked potential. Proc. Natl. Acad. Sci. U. S. A. 1996;93(10):4770–4774. doi: 10.1073/pnas.93.10.4770.
62. Müller M.M., Andersen S., Trujillo N.J., Valdés-Sosa P., Malinowski P., Hillyard S.A. Feature-selective attention enhances color signals in early visual areas of the human brain. Proc. Natl. Acad. Sci. U. S. A. 2006;103(38):14250–14254. doi: 10.1073/pnas.0606668103.
63. Naumann S., Senftleben U., Santhosh M., McPartland J., Webb S.J. Neurophysiological correlates of holistic face processing in adolescents with and without autism spectrum disorder. J. Neurodev. Disord. 2018;10(1):27. doi: 10.1186/s11689-018-9244-y.
64. Neuhaus E., Kresse A., Faja S., Bernier R.A., Webb S.J. Face processing among twins with and without autism: social correlates and twin concordance. Soc. Cogn. Affect. Neurosci. 2016;11(1):44–54. doi: 10.1093/scan/nsv085.
65. Noirhomme Q., Lesenfants D., Gomez F., Soddu A., Schrouff J., Garraux G., … Laureys S. Biased binomial assessment of cross-validated estimation of classification accuracies illustrated in diagnosis predictions. NeuroImage: Clinical. 2014;4:687–694. doi: 10.1016/j.nicl.2014.04.004.
66. Nomi J.S., Uddin L.Q. Face processing in autism spectrum disorders: from brain regions to brain networks. Neuropsychologia. 2015;71:201–216. doi: 10.1016/j.neuropsychologia.2015.03.029.
67. Norcia A.M., Appelbaum L.G., Ales J.M., Cottereau B.R., Rossion B. The steady-state visual evoked potential in vision research: a review. J. Vis. 2015;15(6):4. doi: 10.1167/15.6.4.
68. O'Connor K., Hamm J.P., Kirk I.J. Neurophysiological responses to face, facial regions and objects in adults with Asperger's syndrome: an ERP investigation. Int. J. Psychophysiol. 2007;63(3):283–293. doi: 10.1016/j.ijpsycho.2006.12.001.
69. Pedregosa F., Varoquaux G., Gramfort A., Michel V., Thirion B., Grisel O., … Duchesnay É. Scikit-learn: machine learning in Python. J. Mach. Learn. Res. 2011;12(Oct):2825–2830.
70. Pierce K., Marinero S., Hazin R., McKenna B., Barnes C.C., Malige A. Eye-tracking reveals abnormal visual preference for geometric images as an early biomarker of an ASD subtype associated with increased symptom severity. Biol. Psychiatry. 2016;79(8):657–666. doi: 10.1016/j.biopsych.2015.03.032.
71. Reed C.L., Beall P.M., Stone V.E., Kopelioff L., Pulham D.J., Hepburn S.L. Brief report: perception of body posture—what individuals with autism spectrum disorder might be missing. J. Autism Dev. Disord. 2007;37(8):1576–1584. doi: 10.1007/s10803-006-0220-0.
72. Regan D. Some characteristics of average steady-state and transient responses evoked by modulated light. Electroencephalogr. Clin. Neurophysiol. 1966;20(3):238–248. doi: 10.1016/0013-4694(66)90088-5.
73. Regan D. Evoked potential studies of visual perception. Can. J. Psychol. 1981;35(2):77–112. doi: 10.1037/h0081156.
74. Regan D. Elsevier; Amsterdam, The Netherlands: 1989. Human brain electrophysiology: evoked potentials and evoked magnetic fields in science and medicine.
75. Retter T.L., Rossion B. Uncovering the neural magnitude and spatio-temporal dynamics of natural image categorization in a fast visual stream. Neuropsychologia. 2016;91:9–28. doi: 10.1016/j.neuropsychologia.2016.07.028.
76. Rose F.E., Lincoln A.J., Lai Z., Ene M., Searcy Y.M., Bellugi U. Orientation and affective expression effects on face recognition in Williams syndrome and autism. J. Autism Dev. Disord. 2007;37(3):513–522. doi: 10.1007/s10803-006-0200-4.
77. Rosset D.B., Rondan C., Fonseca D.D., Santos A., Assouline B., Deruelle C. Typical emotion processing for cartoon but not for real faces in children with autistic spectrum disorders. J. Autism Dev. Disord. 2008;38(5):919–925. doi: 10.1007/s10803-007-0465-2.
78. Rossion B. Picture-plane inversion leads to qualitative changes of face perception. Acta Psychol. 2008;128(2):274–289. doi: 10.1016/j.actpsy.2008.02.003.
79. Rossion B. The composite face illusion: a whole window into our understanding of holistic face perception. Vis. Cogn. 2013;21(2):139–253.
80. Rossion B. Understanding face perception by means of human electrophysiology. Trends Cogn. Sci. 2014;18(6):310–318. doi: 10.1016/j.tics.2014.02.013.
81. Rossion B. Understanding face perception by means of prosopagnosia and neuroimaging. Front. Biosci. (Elite Edition). 2014;6:258–307. doi: 10.2741/E706.
82. Rossion B. Humans are visual experts at unfamiliar face recognition. Trends Cogn. Sci. 2018;22(6):471–472. doi: 10.1016/j.tics.2018.03.002.
83. Rossion B., Boremanse A. Robust sensitivity to facial identity in the right human occipito-temporal cortex as revealed by steady-state visual-evoked potentials. J. Vis. 2011;11(2):16. doi: 10.1167/11.2.16.
84. Rossion B., Jacques C. The Oxford Handbook of Event-Related Potential Components. 2011. The N170: understanding the time course of face perception in the human brain.
85. Rossion B., Michel C. Normative accuracy and response time data for the computerized Benton Facial Recognition Test (BFRT-c). Behav. Res. Methods. 2018. doi: 10.3758/s13428-018-1023-x.
86. Rossion B., Gauthier I., Tarr M.J., Despland P., Bruyer R., Linotte S., Crommelinck M. The N170 occipito-temporal component is delayed and enhanced to inverted faces but not to inverted objects: an electrophysiological account of face-specific processes in the human brain. Neuroreport. 2000;11(1):69–74. doi: 10.1097/00001756-200001170-00014.
87. Rossion B., Dricot L., Goebel R., Busigny T. Holistic face categorization in higher order visual areas of the normal and prosopagnosic brain: toward a non-hierarchical view of face perception. Front. Hum. Neurosci. 2011;4(225). doi: 10.3389/fnhum.2010.00225.
88. Rossion B., Prieto E.A., Boremanse A., Kuefner D., Van Belle G. A steady-state visual evoked potential approach to individual face perception: effect of inversion, contrast-reversal and temporal dynamics. NeuroImage. 2012;63(3):1585–1600. doi: 10.1016/j.neuroimage.2012.08.033.
89. Rossion B., Torfs K., Jacques C., Liu-Shuang J. Fast periodic presentation of natural images reveals a robust face-selective electrophysiological response in the human brain. J. Vis. 2015;15(1):18. doi: 10.1167/15.1.18.
90. Russell R., Sinha P., Biederman I., Nederhouser M. Is pigmentation important for face recognition? Evidence from contrast negation. Perception. 2006;35(6):749–759. doi: 10.1068/p5490.
91. Sattler J.M. JM Sattler; 2001. Assessment of Children: Cognitive Applications.
92. Scherf K.S., Behrmann M., Minshew N., Luna B. Atypical development of face and greeble recognition in autism. J. Child Psychol. Psychiatry. 2008;49(8):838–847. doi: 10.1111/j.1469-7610.2008.01903.x.
93. Schultz R.T. Developmental deficits in social perception in autism: the role of the amygdala and fusiform face area. Int. J. Dev. Neurosci. 2005;23(2–3):125–141. doi: 10.1016/j.ijdevneu.2004.12.012.
94. Sergent J. Configural processing of faces in the left and the right cerebral hemispheres. J. Exp. Psychol. Hum. Percept. Perform. 1984;10(4):554–572. doi: 10.1037//0096-1523.10.4.554.
95. Sergent J., Signoret J.L. Varieties of functional deficits in prosopagnosia. Cereb. Cortex. 1992;2(5):375–388. doi: 10.1093/cercor/2.5.375.
96. Sergent J., Signoret J.L. Functional and anatomical decomposition of face processing: evidence from prosopagnosia and PET study of normal subjects. Philos. Trans. R. Soc. Lond. Ser. B Biol. Sci. 1992;335(1273):55–61, discussion 61–62. doi: 10.1098/rstb.1992.0007.
97. Tanaka J.W., Farah M.J. Parts and wholes in face recognition. Q. J. Exp. Psychol. A. 1993;46(2):225–245. doi: 10.1080/14640749308401045.
98. Tang J., Falkmer M., Horlin C., Tan T., Vaz S., Falkmer T. Face recognition and visual search strategies in autism spectrum disorders: amending and extending a recent review by Weigelt et al. PLoS One. 2015;10(8). doi: 10.1371/journal.pone.0134439.
99. Tantam D., Monaghan L., Nicholson H., Stirling J. Autistic children's ability to interpret faces: a research note. J. Child Psychol. Psychiatry. 1989;30(4):623–630. doi: 10.1111/j.1469-7610.1989.tb00274.x.
100. Tavares P.P., Mouga S.S., Oliveira G.G., Castelo-Branco M. Preserved face inversion effects in adults with autism spectrum disorder: an event-related potential study. Neuroreport. 2016;27(8):587–592. doi: 10.1097/WNR.0000000000000576.
101. Teunisse J.-P., de Gelder B. Face processing in adolescents with autistic disorder: the inversion and composite effects. Brain Cogn. 2003;52(3):285–294. doi: 10.1016/s0278-2626(03)00042-3.
102. Van Der Geest J.N., Kemner C., Verbaten M.N., Engeland H.V. Gaze behavior of children with pervasive developmental disorder toward human faces: a fixation time study. J. Child Psychol. Psychiatry. 2002;43(5):669–678. doi: 10.1111/1469-7610.00055.
103. Vettori S., Jacques C., Boets B., Rossion B. Can the N170 be used as an electrophysiological biomarker indexing face processing difficulties in autism spectrum disorder? Biol. Psychiatry Cogn. Neurosci. Neuroimaging. 2018. doi: 10.1016/j.bpsc.2018.07.015.
104. Webb S.J., Jones E.J.H., Merkle K., Murias M., Greenson J., Richards T., … Dawson G. Response to familiar faces, newly familiar faces, and novel faces as assessed by ERPs is intact in adults with autism spectrum disorders. Int. J. Psychophysiol. 2010;77(2):106–117. doi: 10.1016/j.ijpsycho.2010.04.011.
105. Wechsler D. 3rd ed. The Psychological Corporation; San Antonio, TX: 1991. The Wechsler Intelligence Scale for Children.
106. Weigelt S., Koldewyn K., Kanwisher N. Face identity recognition in autism spectrum disorders: a review of behavioral studies. Neurosci. Biobehav. Rev. 2012;36(3):1060–1084. doi: 10.1016/j.neubiorev.2011.12.008.
107. Xu B., Liu-Shuang J., Rossion B., Tanaka J. Individual differences in face identity processing with fast periodic visual stimulation. J. Cogn. Neurosci. 2017;29(8):1368–1377. doi: 10.1162/jocn_a_01126.
108. Yin R.K. Looking at upside-down faces. J. Exp. Psychol. 1969;81(1):141–145.
109. Young A.W., Burton A.M. What we see in unfamiliar faces: a response to Rossion. Trends Cogn. Sci. 2018;22(6):472–473. doi: 10.1016/j.tics.2018.03.008.
110. Young A.W., Hellawell D., Hay D.C. Configurational information in face perception. Perception. 1987;16(6):747–759. doi: 10.1068/p160747.
