Abstract
Research consistently shows that autistic adults do not attend to faces as much as non-autistic adults. However, this conclusion is largely based on studies using pre-recorded videos or photographs as stimuli. In studies using real social scenarios, the evidence is less clear. To explore the extent to which differences in findings relate to differences in the methodologies used across studies, we directly compared the social attention of 32 autistic and 33 non-autistic adults while they watched exactly the same video. Half of the participants in each group were told simply to watch the video (Video condition); the other half were led to believe they were watching a live webcam feed (‘Live’ condition). The results yielded no significant group differences in the ‘Live’ condition. However, significant group differences were found in the Video condition, in which non-autistic participants, but not autistic participants, showed a marked social bias towards faces. The findings highlight the importance of studying social attention with a combination of methods. Specifically, we argue that studies using pre-recorded footage and studies using real people tap into separate components of social attention: one innate and automatic, the other modulated by social norms.
Lay Abstract
Early research shows that autistic adults do not attend to faces as much as non-autistic adults. However, some recent studies in which autistic people are placed in scenarios with real people reveal that they attend to faces as much as non-autistic people do. This study compares attention to faces in two situations. In one, autistic and non-autistic adults watched a pre-recorded video. In the other, they watched what they believed was a live webcam feed of two people in a room in the same building, when in fact they were watching exactly the same pre-recorded video. We report the results of 32 autistic adults and 33 non-autistic adults. Autistic adults did not differ in any way from non-autistic adults when they believed they were watching people interacting in real time. However, when participants knew they were watching a video, non-autistic participants showed higher levels of attention to faces than autistic participants. We conclude that attention to social stimuli results from a combination of two processes: one innate, which seems to be different in autism, and one influenced by social norms, which works in the same way in autistic adults without learning disabilities as in non-autistic adults. The results suggest that social attention is not as different in autism as first thought. Specifically, the study helps to dispel long-standing deficit models of social attention in autism, as it points to subtle differences in the use of social norms rather than impairments.
Keywords: autism, ecological validity, eye-tracking, social attention
Introduction
Autism is characterised by idiosyncratic social interaction and communication patterns (American Psychiatric Association (APA), 2013), which are unique to the condition (Lewis & Kim, 2009). The most favoured current explanation for this unique profile is that, as first suggested by Kanner (1943) and later by Hobson (1998, 2004), the drive to engage with others is reduced in autism, a proposal that has been recently re-described as the reduced social motivation theory (Chevallier et al., 2012).
Much of the evidence supporting the reduced social motivation model comes from the eye tracking literature. In non-autistic populations, 1 research has demonstrated a consistent attentional bias towards social stimuli; when viewing visual scenes, non-autistic adults preferentially look at people’s faces (e.g. Birmingham et al., 2009; Fletcher-Watson et al., 2008; Yarbus, 1967) and, in particular, to the eye region (see Birmingham & Kingstone, 2009; Gliga & Csibra, 2007 for reviews). This attraction towards the eyes of others is present in the first few weeks of life (Haith et al., 1977; Zeifman et al., 1996) and has a strong evolutionary and organic basis (Emery, 2000; Pelphrey & Morris, 2006). This innate bias is thought to undergo maturational processes alongside brain specialisation for social stimuli from 6 months (Jones et al., 2015).
In contrast, by and large, research finds a reduced attentional bias towards social stimuli in autistic children and adults (Chita-Tegmark, 2016; Papagiannopoulou et al., 2014 for reviews including meta-analyses). However, studies with infants likely to develop autism suggest that the bias towards both faces and eyes appears to be intact very early in development with differences starting to emerge between 4 and 7 months (Elsabbagh et al., 2013; Jones & Klin, 2013); a finding that suggests that maturational processes in social brain specialisation may be compromised in autism.
The attentional priority given to social stimuli is also manifest in the strong drive in non-autistic populations towards following the gaze of others (e.g. see Gregory & Hodgson, 2012 or Frischen et al., 2007 for reviews), a drive that develops from an early age (Gregory et al., 2016; Hood et al., 1998; Moore & Corkum, 1998). In comparison, several studies show that autistic children, adolescents and adults are less likely to spontaneously follow the gaze of others (e.g. Bedford et al., 2012; Riby et al., 2013; Vlamings et al., 2005).
The influence of social partners on eye gaze behaviour
The research findings described so far come from studies that test participants using videos, photographs, or schematic pictures. That is, studies that test participants in the absence of social partners. Although there is no question that it is possible to extract information from visual images of people with whom we are not interacting, the theoretical perspective and methods underpinning these studies have been increasingly questioned. The strongest challenge comes from the second-person approach to social understanding (De Jaegher & Di Paolo, 2007; Hobson, 2004; Reddy, 2008; Trevarthen, 1979; Zahavi, 2001), which argues that the kind of understanding we gain when there is potential for, or actual, engagement with others is qualitatively different from that we gain as mere observers (for a review see Moore & Barresi, 2017).
An important source of support for the second-person approach comes from studies investigating eye gaze behaviour where participants are placed in actual social interactions. These studies consistently show that – unlike in studies where social partners are not present – non-autistic adults are actually rather disinclined to look at real people (e.g. Foulsham et al., 2011; Konovalova et al., 2021; Laidlaw et al., 2011; also see Risko et al., 2012 for a review) or to follow their gaze direction (Gallup et al., 2012). That is, social attention, at least in adulthood, is markedly different when engaged in social interactions, compared to merely observing others. Yet what these patterns of social attention may look like is not clear, as both eye gaze behaviour towards people and gaze following are heavily influenced by contextual factors. For instance, the likelihood of looking at others’ faces or following their gaze depends on whether the other person is averting their eyes or making eye contact (e.g. Freeth et al., 2013; Pönkänen et al., 2011). The particular stage of the conversation also affects the likelihood of looking at others’ faces. Studies show that non-autistic speakers look at a social partner to end their turn in conversation, but avoid looking at the social partner when starting their turn (Ho et al., 2015). Similarly, non-autistic adults are more likely to look at a social partner’s face when answering a question than when asking one (Freeth et al., 2013). Gaze following is also influenced by the spatial relation between partners: non-autistic adults are more likely to follow gaze when walking in the same direction as a partner than when the partner is walking towards them (Gallup et al., 2012). That is, in naturalistic settings, the bias non-autistic adults show towards others is modulated by contextual factors that are yet to be fully understood.
The challenge posed by the second-person approach is particularly relevant to autism. If what we are trying to understand are the precise interaction and communication difficulties that autistic people face in everyday life, then we need to use research methods that allow the exploration of these difficulties in naturalistic social encounters. There are few studies exploring gaze behaviour in naturalistic social settings in the autistic population. These studies tend to provide evidence for reduced social attention in childhood. When listening to a story told by an adult, undergoing cognitive testing or meeting an adult for the first time, autistic children look less at the experimenter’s face and eyes (Falck-Ytter, 2015; Hanley et al., 2014; Noris et al., 2012). However, some studies fail to find differences in the amount of time looking at the face of social partners while having a conversation (e.g. Nadig et al., 2010).
In contrast, the evidence from studies with autistic adults provides a complex picture of similarities and differences that, as with non-autistic adults, seem to relate to contextual factors. When in conversation, group differences emerge when talking about what people feel, but not when talking about what people do, or when the experimenter gazes directly at participants but not when their gaze is averted (Freeth & Bugembe, 2019; Hutchins & Brien, 2016). In terms of gaze following, autistic adults are as likely as non-autistic adults to follow gaze and can effectively make line-of-sight judgements (Freeth et al., 2020), although they are slower in initiating eye movements (Birmingham et al., 2017; Magrelli et al., 2013). Taking these findings together, it seems clear that social attention in autism is – as is the case in non-autistic adults – context dependent.
What is it about real people that influences social attention?
Existing studies with both autistic and non-autistic samples reveal discrepancies in looking behaviour towards social stimuli depending on whether or not participants are engaged with real partners and, in the presence of social partners, on specific contextual factors. It is difficult to identify exactly what it is about being in the presence of partners that gives rise to these differences. Only a handful of studies, with non-autistic and/or autistic samples, have compared social attention to a pre-recorded video and to a person in a face-to-face situation or via a monitor (e.g. Cañigueral et al., 2021; Grossman et al., 2019; Laidlaw et al., 2011). However, differences in the visual features of stimuli across conditions make these comparisons problematic for addressing the question of whether being in the presence of a real person alone influences looking behaviour.
To directly address this question, Gregory et al. (2015) presented non-autistic participants with the same video under three conditions. In one condition, participants were told they would watch a pre-recorded video depicting a social interaction between two people (Video condition). In another condition (‘Live’ condition), the same video was presented but this time participants were led to believe they were watching a live webcam, that is people in real time, in a waiting room in the same building. In the last condition (Engaged condition), participants were additionally told that they would later complete a group task with the people in the waiting room. Results showed marked differences between the video condition and the ‘Live’ and Engaged conditions. Specifically, in the Video condition, participants looked significantly more at the faces of actors and followed their gaze significantly more than in the ‘Live’ and Engaged conditions. Crucially, there were no differences between ‘Live’ and Engaged conditions, suggesting that it is the mere belief that they are watching people in real time, and not the probability to engage with them, that triggers a change in viewing behaviour.
In this study, we use the same methodology employed by Gregory et al. (2015) to directly compare autistic and non-autistic participants’ looking behaviour when they viewed a recording and when they viewed the same recording but believed they were watching people in real time. Given that the original study found no differences between the ‘Live’ and Engaged conditions, we opted for a simplified design including only the Video and ‘Live’ conditions. Due to the mixed findings in previous studies regarding the eye gaze behaviour of autistic participants in the presence of partners, we could not make a specific prediction as to whether their looking behaviour would differ from that of the non-autistic sample in the ‘Live’ condition. However, in line with previous research with autistic (e.g. Grossman et al., 2019) and non-autistic samples (Gregory et al., 2015), we expected a significant interaction between Group and Area of Interest in the Video condition. Specifically, we predicted that non-autistic participants, but not autistic participants, would show increased attention to faces in the Video condition. In terms of gaze following, we predicted that non-autistic participants would be more likely to overtly follow gaze shifts in the Video than in the ‘Live’ condition, but, for the reasons stated above, we could not make predictions regarding autistic participants’ sensitivity to the experimental manipulation.
Method
Design
The study adopted a 2 × 2 × 3 mixed design with Group (Autistic, AUT vs Non-autistic, NA) and Condition (Video vs ‘Live’) as between-participants independent variables and Area of Interest (AoI; Faces vs Body vs Background) as the within-participants independent variable. Two sets of dependent variables were used. The first explored eye movements by measuring the mean proportion of dwell time and proportion of fixations. 2 The second explored participants’ responses to the four head shifts performed by the actors in the video. These included saccades made in the direction of the shift (an overt gaze shift towards the target of the actors’ head shift) and mean proportion of dwell time on the target area following the actors’ head shift.
Participants
Seventy-five participants were recruited for the study. There were no a priori exclusion criteria regarding intelligence quotient (IQ); however, all participants recruited obtained Verbal and Performance IQ scores above 75 (see Table 1). Two AUT participants were excluded due to poor eye-tracker calibration, and eight participants (AUT = 2, NA = 6) were excluded from the ‘Live’ condition because a post-experimental check revealed that they had not believed they were viewing live footage from a webcam. The final sample consisted of 32 adult autistic participants, of whom 16 (all males) took part in the ‘Live’ condition at the University of Portsmouth and 16 (Males = 11; Females = 5) took part in the Video condition at the University of Sheffield. All autistic participants were recruited through the authors’ existing databases and had a formal diagnosis of autism by a qualified practitioner. Current levels of severity were assessed using the Autism Diagnostic Observation Schedule (ADOS; Lord et al., 2000). ADOS scores ranged from 3 to 22 (Mean = 10.01, SD = 3.96). Verbal IQ (VIQ) and Performance IQ (PIQ) were measured with the Wechsler Abbreviated Scale of Intelligence (WASI; Wechsler, 1999). Some IQ scores were missing from the AUT sample: VIQ scores for three participants in the Video condition, and PIQ scores for seven participants in the Video condition and one in the ‘Live’ condition. Table 1 summarises participants’ characteristics.
Table 1.
Participant characteristics across conditions.
| Group | Condition | n | Statistic | Age | VIQ | PIQ |
|---|---|---|---|---|---|---|
| AUT | ‘Live’ | 16 | Mean | 28.75 | 102.12 | 112.20 |
|  |  |  | SD | 11.57 | 17.11 | 11.67 |
|  |  |  | Range | 18–57 | 75–134 | 89–129 |
| AUT | ‘Video’ | 16 | Mean | 37.31 | 114.61 | 117.89 |
|  |  |  | SD | 13.85 | 10.52 | 12.51 |
|  |  |  | Range | 20–67 | 92–131 | 99–132 |
| NA | ‘Live’ | 16 | Mean | 32.18 | 104.48 | 115.76 |
|  |  |  | SD | 11.12 | 10.63 | 11.38 |
|  |  |  | Range | 18–51 | 87–122 | 88–133 |
| NA | ‘Video’ | 17 | Mean | 36.50 | 105.75 | 112.25 |
|  |  |  | SD | 15.45 | 8.82 | 13.38 |
|  |  |  | Range | 21–67 | 90–119 | 86–138 |
VIQ: verbal IQ; PIQ: performance IQ; AUT: autistic; NA: non-autistic; SD: standard deviation.
The comparison sample consisted of 33 non-autistic (NA) participants matched, at group level, in age and IQ to the AUT sample. These participants were recruited through existing participant databases and word of mouth. Of these, 17 (all males) took part in the ‘Live’ condition at the University of Portsmouth and 16 (Males = 5; Females = 11) took part in the Video condition at Bournemouth University. Non-autistic participants at both sites were students, members of staff and members of the public. Participants’ ages ranged from 18 to 67 (M = 34.28, SD = 13.36). There were no significant differences in age (t(1,63) = .376, p = 0.71) or IQ (VIQ: t(1,63) = .785, p = 0.44; PIQ: t(1,63) = .083, p = 0.93) across the two groups. However, AUT participants’ VIQ, but not PIQ, scores were significantly lower in the ‘Live’ condition than in the Video condition (VIQ: t(27) = 2.98, p = 0.03; PIQ: t(27) = 0.98, p = 0.33).
All participants had normal or corrected-to-normal vision. Participants received a small monetary compensation for their participation or course credit.
Apparatus
Participants at all sites were tested using an identical eye tracker: the Eyelink 1000 with desktop mount (SR Research, Canada). Participants sat 57 cm from the eye tracker with their heads stabilised by the use of a chin rest. At each site, the eye tracker was connected to a host PC, which was in turn connected to the display computer which had a 19” CRT monitor at the University of Portsmouth and a 22” monitor at Bournemouth University and the University of Sheffield. Pupil and corneal reflection position were recorded monocularly at a rate of 1000 Hz. Saccades were parsed online by the Eyelink 1000 using a velocity threshold of 30°/s and an acceleration threshold of 8000°/s2.
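The EyeLink's online event parser is proprietary, but the velocity and acceleration thresholds quoted above can be illustrated with a minimal offline sketch. This is an approximation for exposition only, not the EyeLink algorithm; the function name, the toy gaze trace, and the use of `numpy.gradient` for differentiation are our own choices.

```python
import numpy as np

def detect_saccade_samples(angles_deg, rate_hz=1000,
                           vel_thresh=30.0, acc_thresh=8000.0):
    """Flag samples exceeding EyeLink-style thresholds
    (velocity 30 deg/s, acceleration 8000 deg/s^2).

    `angles_deg` is a 1-D array of gaze positions in degrees of
    visual angle, sampled at `rate_hz` (1000 Hz in this study).
    Illustrative only: the real parser works on 2-D gaze data and
    applies additional smoothing.
    """
    dt = 1.0 / rate_hz
    vel = np.gradient(angles_deg, dt)   # instantaneous velocity, deg/s
    acc = np.gradient(vel, dt)          # instantaneous acceleration, deg/s^2
    return (np.abs(vel) > vel_thresh) | (np.abs(acc) > acc_thresh)

# Toy trace: fixation at 0 deg, a rapid 5-deg shift over 20 ms,
# then fixation at 5 deg. The shift is flagged; the fixations are not.
trace = np.concatenate([np.zeros(50), np.linspace(0, 5, 20), np.full(50, 5.0)])
flags = detect_saccade_samples(trace)
```

At 1000 Hz, a 5° shift completed in 20 ms implies a peak velocity of roughly 250°/s, comfortably above the 30°/s criterion, which is why such a movement is parsed as a saccade rather than drift.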
Materials
The same video as in Gregory et al. (2015) was used in all conditions. The first frame of the video showed a woman sitting in a waiting room and using a mobile phone (see Figure 1). After approximately 10 seconds, a second woman entered the room and sat next to her. Both women remained seated throughout, either reading a magazine or using their mobile phones. The video had the audio track removed. During the 2-minute video, the actors performed six head and gaze shifts (three to the right and three to the left), each beginning before an obvious target became visible. Two head shifts were excluded from all analyses: Shift 3 was not followed by a single participant, and Shift 6 was qualitatively quite different from the others as it involved a social interaction between the actors – one actor turned to the other to ask the time. Figure 1 illustrates the experimental set-up and the four shifts included in the analyses.
Figure 1.
Screen shots of the video used by Gregory et al. (2015) and this study showing the four gaze shifts performed by the actors.
A post-experimental questionnaire was used after participants viewed the video to check whether the participants in the ‘Live’ condition believed that they had viewed a live webcam feed. Specifically, they were asked to rate the extent of their belief on a 1–7 Likert-type scale (1 = total disbelief to 7 = total belief). Participants were also asked to provide the reasons why they had, or had not, believed the footage was live. The eight participants who indicated total disbelief (score = 1) were removed from the analyses. 3 Participants in the Video condition were asked whether they believed they had viewed a pre-recorded video.
Procedure
In the ‘Live’ condition, autistic and non-autistic participants at the University of Portsmouth were led to believe that they would view live webcam footage from a waiting room in the same building, when in fact they viewed the same pre-recorded video as in the Video condition. To aid this deception, while participants completed the consent form the experimenter brought up on the screen a pre-recorded video showing the first woman talking to someone else off-screen. At this point, the experimenter told the participant that it seemed only one person had arrived at the waiting room and ‘switched off’ the webcam to bring back up a black screen. A second experimenter then entered the laboratory to explain that only one woman had arrived, but that the second woman had rung and would arrive shortly, so they could get started with the calibration. The eye tracker was calibrated using a 9-point calibration procedure. Once calibration was completed, participants were told to watch the ‘webcam’ feed. After the video elapsed, the screen turned black and, to add to the deception, a message was displayed in the top left-hand corner stating ‘connection to webcam host 192.162.3.1 lost. Connect to webcam host? Y/N’. Participants then completed the post-experiment questionnaire regarding the extent of their belief that they had watched a live webcam. In the Video condition – tested across the University of Sheffield (AUT participants) and Bournemouth University (NA participants) – participants first completed an informed consent form, then underwent the eye-tracker calibration procedure, and were asked simply to watch a short video. The contact time participants had with the researcher was equivalent in both conditions.
In both conditions, eye movements were recorded for each frame of the 30-frame-per-second video, and the video was presented as a 720 × 400 pixel central window on a black background. All participants were fully debriefed as to the aims of the study and, where appropriate, the rationale for the deception. IQ tests and the ADOS, when data were not already available, were administered after the eye-tracking session took place.
Community involvement statement
The data collection for this study took place in 2015. At that time, it was still not common practice to involve members of the community in the development of the research studies of this nature. Thus, no members of the community were involved in the preparation of this manuscript although the results have been discussed with one autistic academic. We acknowledge this is a limitation of the study.
Results
Data processing 4
The data were processed in the same way as by Gregory et al. (2015). Static interest areas were drawn – using Data Viewer (SR Research) – around the important areas of the scene: 20 pixels around the actors’ heads and bodies. In addition, rectangular areas were drawn around the bookshelf, the door, and the area where a magazine briefly appeared when moved by an off-screen actor. Finally, one large rectangular area was drawn around the video window itself, encompassing all elements of the scene. Significance testing was conducted using IBM SPSS (Version 24.0; IBM Corp., 2016) and Bayesian analyses with JASP (Version 0.14.1; JASP Team, 2020). Bayes factors were used to assess the strength of evidence for the alternative hypothesis (BFincl) and for the null hypothesis (BFexcl). A BFincl above 3 indicates ‘substantial’ evidence for the alternative hypothesis, and a BFincl above 1 suggests stronger evidence for the alternative than for the null hypothesis (Wetzels & Wagenmakers, 2012); conversely, a BFexcl above 3 indicates ‘substantial’ evidence for the null hypothesis, and a BFexcl above 1 suggests stronger evidence for the null than for the alternative hypothesis.
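The interpretation convention above (a factor of 3 for ‘substantial’ evidence, 1 as the tipping point between hypotheses, and BFexcl as the reciprocal of BFincl) can be sketched as a small helper. The labels follow the convention cited in the text (Wetzels & Wagenmakers, 2012); the function itself is our illustrative addition, not part of the analysis pipeline.

```python
def evidence_label(bf_incl):
    """Classify a Bayes factor for the alternative hypothesis (BF_incl).

    Because BF_excl = 1 / BF_incl, the same cut-offs apply to the null
    hypothesis in mirror image. Cut-offs follow the convention in
    Wetzels and Wagenmakers (2012); the labels are our paraphrase.
    """
    if bf_incl > 3:
        return "substantial evidence for the alternative"
    if bf_incl > 1:
        return "weak evidence for the alternative"
    if bf_incl > 1 / 3:
        return "weak evidence for the null"
    return "substantial evidence for the null"
```

For example, the three-way interaction for dwell time reported below (BFincl = 5.42, BFexcl = 0.18) would be labelled substantial evidence for the alternative hypothesis.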
Overall viewing behaviour
Initial analyses explored the influence of VIQ, PIQ, and Full IQ scores on all dependent variables by means of a series of analyses of covariance (ANCOVAs). None of the scores covaried significantly with Group, Condition or AoI for either dependent variable; therefore, they were removed from further analyses. All subsequent analyses were conducted using a mixed analysis of variance (ANOVA) with Condition (Video vs ‘Live’) and Group (AUT vs NA) as between-participants factors and AoI (Face vs Body vs Background areas) as the within-participants factor. A Greenhouse-Geisser adjustment was used when Mauchly’s test of sphericity was significant (p < 0.05).
Figure 2 displays the results for the two eye movement measures. The ANOVAs for mean proportion of dwell time and proportion of fixations both revealed a significant three-way interaction (F(1.751,106.8) = 4.77, p = .014, partial η2 = 0.073, BFincl = 5.42; BFexcl = 0.18; F(2,122) = 3.88, p = .025, partial η2 = 0.06, BFincl = 2.02; BFexcl = 0.49, respectively). This interaction was explored further with two 2 (Group) × 3 (AoI) ANOVAs for each condition (Video and ‘Live’) for each variable.
Figure 2.
Descriptive statistics of eye-movements across groups, conditions, and areas of interest (bars represent standard errors).
‘Live’ condition
The follow-up ANOVAs revealed that neither the main effect of Group (Dwell: F(1,31) = 0.675, p = .417, partial η2 = 0.21; Prop fixation: F(2,122) = 3.121, p = .087, partial η2 = 0.09; Dwell: BFincl = 0.18; BFexcl = 5.54; Prop fixation: BFincl = 0.19; BFexcl = 5.28) nor the interaction between Group and AoI (Dwell: F(2,62) = .069, p = .934, partial η2 = 0.01; Prop fixation: F(2,122) = 0.322, p = .726, partial η2 = 0.01; Dwell: BFincl = 0.11; BFexcl = 8.64; Prop fixation: BFincl = 0.10; BFexcl = 9.75) were significant for either mean proportion of dwell time or proportion of fixations.
Video condition
By contrast, the AoI × Group interaction was significant (Dwell: F(2,62) = 8.46, p = .001, partial η2 = 0.220; Prop fixations: F(2,60) = 6.95, p = .002, partial η2 = 0.188; Dwell: BFincl = 648.7; BFexcl = 0.002; Prop fixation: BFincl = 66.12; BFexcl = 0.02). In both cases, the interactions had large effect sizes (η2 ⩾ 0.14; Cohen, 1988), and Bayesian analyses confirmed that the evidence for the alternative hypothesis was very strong for both measures. These interactions were explored with planned multiple pairwise comparisons – applying a Bonferroni adjustment (p < .017) – independently for each group.
These comparisons showed that the AoI × Group interactions came from significant differences across AoIs in the NA group on both measures (Dwell: F(2,30) = 9.938, p = .001, partial η2 = 0.399; Prop fixations: F(2,32) = 7.439, p = .002, partial η2 = 0.332; Dwell: BFincl = 1763.2; BFexcl < 0.001; Prop fixation: BFincl = 217.42; BFexcl = 0.005), but not in the AUT group (Dwell: F(2,30) = 1.195, p = .317, partial η2 = 0.074; Prop fixations: F(2,30) = 1.283, p = .292, partial η2 = 0.079; Dwell: BFincl = 0.55; BFexcl = 1.85; Prop fixation: BFincl = 0.61; BFexcl = 1.64). Again, Bayesian analyses confirmed that the evidence for the alternative hypothesis was very strong for both measures in the NA group; for the AUT group, the evidence was not strong for either the alternative or the null hypothesis. NA participants’ mean proportion of dwell time was significantly longer on Faces than on the Background (t(15) = 6.06, p = .001; BF10 = 1107.45; BF01 < .001) and Bodies (t(15) = 3.07, p = .008; BF10 = 0.69; BF01 = 1.45). Face areas also attracted a significantly higher proportion of fixations in the NA group than the Background (t(15) = 4.48, p = .001; BF10 = 76.96; BF01 = 0.013), although no significant difference was found between Faces and Bodies (t(15) = 1.74, p = .102; BF10 = 0.88; BF01 = 1.14), and only a marginal difference was found between Background and Bodies (t(15) = 2.10, p = .053; BF10 = 1.44; BF01 = 0.69).
Gaze following behaviour
Overt gaze following
Overt gaze following was measured as saccades originating within the Face AoI during the critical period of interest, in the direction of the eventual target, with amplitudes greater than 2° of visual angle, between the start of the actor’s gaze shift and the last frame before the gazed-at target became visible. This ensured that the saccades occurred only in response to the actor’s gaze cue and not to the appearance of an object in the periphery. The gaze-following rate was calculated as the proportion of the four shifts that resulted in a saccade meeting the above criteria (see Figure 3). PIQ covaried with gaze-following rate (p = .036); hence, it was included in the analyses. This ANCOVA revealed a significant main effect of Condition (F(1,51) = 4.939, p = .031, partial η2 = 0.088; BFincl = 1.14; BFexcl = 0.88): participants followed gaze shifts significantly more in the Video than in the ‘Live’ condition. However, Bayesian analyses did not provide strong evidence for either the null or the alternative hypothesis in terms of overall differences across conditions. Neither the main effect of Group (BFincl = 0.24; BFexcl = 4.14) nor the interaction between Group and Condition was significant (p > .05; BFincl = 0.27; BFexcl = 3.75), a finding also supported by the Bayesian analyses.
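As a worked illustration of the gaze-following rate: each participant contributes four analysable head shifts, and the rate is simply the proportion of those shifts answered by a qualifying saccade. A minimal sketch, assuming the qualifying checks (Face AoI origin, >2° amplitude, cue-to-target window) have been applied upstream; the function name and the example flags are hypothetical.

```python
def gaze_following_rate(shift_followed):
    """Proportion of the four analysable head shifts that elicited a
    qualifying saccade.

    `shift_followed` holds one boolean per shift, True if that shift
    was answered by a saccade meeting the criteria in the text; the
    criteria themselves are assumed checked upstream.
    """
    if len(shift_followed) != 4:
        raise ValueError("expected one flag per analysable shift")
    return sum(shift_followed) / 4

# Hypothetical participant who overtly followed shifts 1 and 4 only:
rate = gaze_following_rate([True, False, False, True])  # -> 0.5
```

A participant who followed no shifts scores 0 and one who followed all four scores 1, so the measure is bounded and directly comparable across groups and conditions.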
Figure 3.
Proportion of head shifts followed and proportion of dwell time to gazed-at targets before the shift and during the shift, across groups and conditions (bars represent standard errors).
Covert gaze following
Covert attention to gazed-at areas was calculated by comparing the mean proportion of dwell time towards gazed-at targets during the shift period and before the shift occurred (see Gregory et al., 2015 for details). An ANOVA conducted with Condition (Video vs ‘Live’) and Group (NA vs AUT) as between-participants factors and Time (i.e. Before shift vs During shift) as the repeated-measures factor revealed that the only effect that yielded significance was the main effect of Time. Mean proportion of dwell time towards gazed-at areas was significantly higher during the shift period than before the gaze shift (F(1,61) = 17.086, p < .001, partial η2 = 0.219; BFincl = 309.2; BFexcl = 0.003), regardless of condition (BFincl = 0.58; BFexcl = 1.73; see Figure 3).
Discussion
Studies using pre-recorded videos or photographs consistently show that attention towards social stimuli is reduced in autistic samples relative to non-autistic samples. However, this is not always the case when studies use real social scenarios instead. To date, it has been difficult to determine whether the presence of a real person alone is responsible for the discrepancy in findings, as methodological differences across studies prevent meaningful comparisons. To overcome this limitation, this study investigated looking behaviour in a sample of autistic and non-autistic participants by directly comparing social attention towards ‘reel’ people and towards what participants believed to be people in real time. Specifically, half of the participants watched a video (Video condition) and the other half watched exactly the same video but were led to believe they were watching a live webcam feed (‘Live’ condition).
As predicted, in the Video condition, autistic participants showed a significantly reduced social bias relative to the non-autistic participants. Non-autistic participants dwelled significantly longer, and fixated more often, on faces and bodies than on the background when the scene was known to be pre-recorded. This was not the case for autistic participants: attention allocation in this group was not significantly different across the Areas of Interest. However, when viewing what participants believed to be a social scenario unfolding in real time (i.e. the ‘Live’ condition), there were no group differences in viewing behaviour in either mean proportion of dwell time or proportion of fixations. It could be argued that the difference between the two conditions used in this study is subtle; however, the effect is robust, given the large effect sizes of the interactions between Area of Interest and Group (both measures η2 ⩾ 0.14) and Bayes factors suggestive of very strong evidence for the alternative hypothesis (both measures > 66). In terms of gaze following, no group differences were found in overt or covert gaze following in either condition. Participants in both groups were significantly more likely to overtly follow gaze in the Video condition relative to the ‘Live’ condition, but both groups followed gaze covertly, as evidenced by biased attention towards the gazed-at targets in both conditions.
It is difficult at this stage to pinpoint what it is about watching people in real time that drives such a marked difference between conditions in non-autistic people. The most likely explanation, already put forward by Gregory et al. (2015), is that when watching a video, one assumes one is watching actors, not real people, and hence the usual social norms need not be applied. Instead, when watching people in real time, the assumption is that one is watching real people, and hence social norms such as Goffman’s (1963) principle of ‘civil’ inattention are automatically activated.
The combined findings from the autistic and non-autistic participants have important implications for future research on social attention. The first, and probably most important, relates to the controversy regarding the methodologies used to study social attention. There is no doubt that we need to better understand what factors modulate social attention when taking part in social situations, especially in the autistic population; ultimately, what we are trying to explain is the social difficulties experienced in everyday life. However, we would like to argue that studying social attention within social interactions and studying it with pre-recorded videos/pictures are both informative, especially when combined, as they seem to assess different aspects of social attention. For illustration, by using both methodologies in this study we reveal, first, that non-autistic participants display a strong bias towards social stimuli when freely observing people and that this bias is inhibited when observing what they believe to be people in real time and, second, that this is not the case for autistic people. When non-autistic people are free to observe, that is, when the stimulus is known to be a recording, their viewing behaviour is likely driven by the early social bias present from birth (e.g. Morton & Johnson, 1991). In contrast, when they are placed in a genuine social scenario, and the people become ‘real’, their viewing behaviour is likely modulated by social norms and social engagement rules. This is, in our view, an important distinction that explains the apparently contradictory findings in the previous literature and allows for a more refined understanding of social attention processes both in autism and in the non-autistic population.
In relation to autism, these findings add support to the notion of a reduced innate social bias in autistic people (Chevallier et al., 2012; Hobson, 1993, 2004; Kanner, 1943) that remains across the lifespan. By and large, when watching videos, the evidence points to a reduced bias towards faces in both children and adults relative to non-autistic samples (for a review see Guillon et al., 2014), regardless of the number of distractors present (Freeth et al., 2011; Harrison & Slane, 2020). However, this reduced innate social bias does not translate into differences in social attention when viewing real social scenarios, or people in real time, as investigated in this study and in the two other studies that directly compare both approaches in autism (Cañigueral et al., 2021; Grossman et al., 2019). It could be argued that this is because autistic people use social norms in the same way as non-autistic people do, at least in adulthood. This suggestion would be supported by the fact that autistic participants inhibited gaze following as much as non-autistic participants did in the ‘Live’ condition. However, the everyday difficulties they experience when interacting with non-autistic people, and the emerging evidence from studies using naturalistic settings (e.g. Freeth & Bugembe, 2019), tell us that the differences are more likely subtle and need to be explored further.
Future research will also need to map the developmental pattern in autism of the ability to use head turns as a signal of potentially relevant events. Studies with autistic children indicate that they are less likely to follow gaze and do not seem to assign the same social value to gazed-at objects as comparison samples do (Riby et al., 2013; Swanson & Siller, 2013), even in the presence of a social partner (Leekam et al., 2000). As shown in this and other studies (i.e. Freeth et al., 2010; Swettenham et al., 2003), by adulthood autistic people are as likely as non-autistic people to follow gaze, possibly as a result of a learning process, although there are indications that autistic people may not process the gazed-at objects in the same way (e.g. Fletcher-Watson et al., 2009; Freeth et al., 2010).
Finally, a more crucial aspect yet to be explored is the role that the diagnostic status of social partners may play in modulating social attention during social interactions. Recent evidence shows that, although unconventional, social interaction patterns between autistic participants are as effective in creating rapport and transmitting information as those in mixed (i.e. autistic and non-autistic) dyads (Crompton et al., 2020; Heasman & Gillespie, 2019). Hence, it is important to investigate how social attention may differ when interacting with an autistic or a non-autistic partner, and whether this difference has an actual bearing on the quality of the interaction.
Further studies will also have to investigate the extent to which the findings from this study generalise to autistic people with learning disabilities (LDs). Although the study did not predetermine exclusion criteria based on LD, the fact that none of the participants in the autistic sample had LD suggests that there was a selection bias. In a recent review, Russell et al. (2019) estimated that 94% of autistic participants in research studies published in the top four autism journals did not have LD. This is problematic both for our understanding of autism and for the development of interventions. Like Russell et al. (2019), we would like to encourage researchers to develop recruitment strategies that ensure an accurate representation of the autistic population. Three further methodological limitations are worth highlighting. First, data were collected in three different locations, which means any differences between conditions may relate to testing differences at the different sites rather than to the experimental manipulation. The fact that the results replicate those of Gregory et al. (2015), who found the same difference in non-autistic participants tested at the same location, lessens this concern to some extent. Second, the control question used to test the experimental manipulation was misunderstood by some participants (i.e. some thought the social scenario was live, but staged). Third, the study used a small sample size. Although Bayesian analyses ameliorate the problems associated with small sample sizes to some extent, they do not do so fully (McNeish, 2016). Hence, a replication with a larger sample may be needed, one that also explores whether watching a real or a staged interaction in itself is one of the factors that affect eye behaviour.
In terms of typical social attention, this study demonstrates that the mere belief that we are observing people in real time is enough to trigger a qualitative difference in gaze behaviour, rather than the potential for interaction being necessary for this difference to emerge, as has been suggested elsewhere (Gregory & Antolin, 2019; Laidlaw et al., 2011). What seems clear, from this and previous studies, is that investigating social attention requires more nuanced theoretical models that explain the inter-relation between biologically determined social biases and social norms, both in autism and in the typical population. As argued earlier, Goffman’s (1963) principle of ‘civil’ inattention may be playing a role in the findings from this study. However, ‘civil’ inattention is just one of many factors that contribute to the modulation of social attention in naturalistic settings. As discussed in the introduction, other factors such as gaze direction, stage in the conversation or spatial relations also contribute to social attention patterns. Hence, it is crucial to shift our efforts towards understanding the role of social norms in social attention through naturalistic studies that systematically investigate the influence of contextual factors and social norms. Such an understanding would in turn contribute to a more refined understanding of what may be different, or not, in autism. While the evidence supports the notion that the innate bias towards attending to social stimuli may be reduced in autism, we have little information about autistic people’s use of social norms to modulate social attention in social interactions.
Acknowledgments
The authors would like to thank all the participants who gave their time to take part in this study. They would also like to thank Hartmut Blank, two anonymous reviewers and the editor for their comments on a previous version of this manuscript.
Footnotes

We use the term non-autistic as opposed to neurotypical because participants in most studies are not screened for other neurodivergent conditions such as dyslexia or ADHD.

The number of fixations was also measured. However, the statistical analyses for this variable practically mirrored those for the proportion of fixations. To avoid unnecessary redundancy, these results are not reported.

The main reason for such a relatively low cut-off point was that the reasons participants gave for not believing the feed was live suggested that some had misinterpreted the question as asking not whether it was live per se but whether it was a natural, spontaneous interaction rather than a staged one. It was therefore decided, to maximise the sample size, to remove only participants with the lowest possible score. To explore whether removing more participants influenced the findings, we conducted the same analyses reported here including only those participants who scored 4 and above. The findings did not change: there was still a significant three-way interaction and a two-way interaction, with non-autistic participants, but not autistic participants, showing increased dwell time and proportion of fixations to heads in the Video but not in the ‘Live’ condition.

Although we fully support Open Science practices, at the time of data collection requesting consent to make data publicly available was not standard practice, which regrettably prevents us from sharing the data openly.

Funding: The author(s) received no financial support for the research, authorship, and/or publication of this article.
ORCID iDs: Beatriz López https://orcid.org/0000-0001-5621-6044; Megan Freeth https://orcid.org/0000-0003-0534-9095
References
- American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). 10.1176/appi.books.9780890425596
- Bedford R., Elsabbagh M., Gliga T., Pickles A., Senju A., Charman T., Johnson M. H. (2012). Precursors to social and communication difficulties in infants at-risk for autism: Gaze following and attentional engagement. Journal of Autism and Developmental Disorders, 42(10), 2208–2218. 10.1007/s10803-012-1450-y
- Birmingham E., Bischof W. F., Kingstone A. (2009). Saliency does not account for fixations to eyes within social scenes. Vision Research, 49(24), 2992–3000. 10.1016/j.visres.2009.09.014
- Birmingham E., Johnston K. H. S., Iarocci G. (2017). Spontaneous gaze selection and following during naturalistic social interactions in school-aged children and adolescents with autism spectrum disorder. Canadian Journal of Experimental Psychology, 71(3), 243. 10.1037/cep0000131
- Birmingham E., Kingstone A. (2009). Human social attention. Annals of the New York Academy of Sciences, 1156(1), 118–140. 10.1111/j.1749-6632.2009.04468.x
- Cañigueral R., Ward J. A., Hamilton A. F. D. C. (2021). Effects of being watched on eye gaze and facial displays of typical and autistic individuals during conversation. Autism, 25(1), 210–226. 10.1177/1362361320951691
- Chevallier C., Kohls G., Troiani V., Brodkin E. S., Schultz R. T. (2012). The social motivation theory of autism. Trends in Cognitive Sciences, 16(4), 231–239. 10.1016/j.tics.2012.02.007
- Chita-Tegmark M. (2016). Social attention in ASD: A review and meta-analysis of eye-tracking studies. Research in Developmental Disabilities, 48, 79–93. 10.1016/j.ridd.2015.10.011
- Cohen J. (1988). Statistical power analysis for the behavioural sciences (2nd ed.). Lawrence Erlbaum. 10.4324/9780203771587
- Crompton C. J., Ropar D., Evans-Williams C. V., Flynn E. G., Fletcher-Watson S. (2020). Autistic peer-to-peer information transfer is highly effective. Autism, 24(7), 1704–1712. 10.1177/1362361320919286
- De Jaegher H., Di Paolo E. (2007). Participatory sense-making. Phenomenology and the Cognitive Sciences, 6(4), 485–507. 10.1007/s11097-007-9076-9
- Elsabbagh M., Gliga T., Pickles A., Hudry K., Charman T., Johnson M. H., & BASIS Team. (2013). The development of face orienting mechanisms in infants at-risk for autism. Behavioural Brain Research, 251, 147–154. 10.1016/j.bbr.2012.07.030
- Emery N. J. (2000). The eyes have it: The neuroethology, function and evolution of social gaze. Neuroscience & Biobehavioral Reviews, 24(6), 581–604. 10.1016/S0149-7634(00)00025-7
- Falck-Ytter T. (2015). Gaze performance during face-to-face communication: A live eye tracking study of typical children and children with autism. Research in Autism Spectrum Disorders, 17, 78–85. 10.1016/j.rasd.2015.06.007
- Fletcher-Watson S., Findlay J. M., Leekam S. R., Benson V. (2008). Rapid detection of person information in a naturalistic scene. Perception, 37(4), 571–583. 10.1068/p5705
- Fletcher-Watson S., Leekam S. R., Benson V., Frank M. C., Findlay J. M. (2009). Eye-movements reveal attention to social information in autism spectrum disorder. Neuropsychologia, 47(1), 248–257. 10.1016/j.neuropsychologia.2008.07.016
- Foulsham T., Walker E., Kingstone A. (2011). The where, what and when of gaze allocation in the lab and the natural environment. Vision Research, 51(17), 1920–1931. 10.1016/j.visres.2011.07.002
- Freeth M., Bugembe P. (2019). Social partner gaze direction and conversational phase; factors affecting social attention during face-to-face conversations in autistic adults? Autism, 23(2), 503–513. 10.1177/1362361318756786
- Freeth M., Chapman P., Ropar D., Mitchell P. (2010). Do gaze cues in complex scenes capture and direct the attention of high functioning adolescents with ASD? Evidence from eye-tracking. Journal of Autism and Developmental Disorders, 40(5), 534–547. 10.1007/s10803-009-0893-2
- Freeth M., Foulsham T., Chapman P. (2011). The influence of visual saliency on fixation patterns in individuals with Autism Spectrum Disorders. Neuropsychologia, 49(1), 156–160. 10.1016/j.neuropsychologia.2010.11.012
- Freeth M., Foulsham T., Kingstone A. (2013). What affects social attention? Social presence, eye contact and autistic traits. PLOS ONE, 8(1), Article e53286. 10.1371/journal.pone.0053286
- Freeth M., Morgan E., Bugembe P., Brown A. (2020). How accurate are autistic adults and those high in autistic traits at making face-to-face line-of-sight judgements? Autism, 24(6), 1482–1493. 10.1177/1362361320909176
- Frischen A., Bayliss A. P., Tipper S. P. (2007). Gaze cueing of attention: Visual attention, social cognition, and individual differences. Psychological Bulletin, 133(4), 694–724. 10.1037/0033-2909.133.4.694
- Gallup A. C., Chong A., Couzin I. D. (2012). The directional flow of visual information transfer between pedestrians. Biology Letters, 8(4), 520–522. 10.1098/rsbl.2012.0160
- Gliga T., Csibra G. (2007). Seeing the face through the eyes: A developmental perspective on face expertise. Progress in Brain Research, 164, 323–339. 10.1016/S0079-6123(07)64018-7
- Goffman E. (1963). Behavior in public places. Free Press.
- Gregory N. J., Antolin J. V. (2019). Does social presence or the potential for interaction reduce social gaze in online social scenarios? Introducing the ‘live lab’ paradigm. Quarterly Journal of Experimental Psychology, 72(4), 779–791. 10.1177/1747021818772812
- Gregory N. J., Hermens F., Facey R., Hodgson T. L. (2016). The developmental trajectory of attentional orienting to socio-biological cues. Experimental Brain Research, 234(6), 1351–1362. 10.1007/s00221-016-4627-3
- Gregory N. J., Hodgson T. L. (2012). Giving subjects the eye and showing them the finger: Socio-biological cues and saccade generation in the anti-saccade task. Perception, 41(2), 131–147. 10.1068/p7085
- Gregory N. J., Lόpez B., Graham G., Marshman P., Bate S., Kargas N. (2015). Reduced gaze following and attention to heads when viewing a ‘live’ social scene. PLOS ONE, 10(4), Article e0121792. 10.1371/journal.pone.0121792
- Grossman R. B., Zane E., Mertens J., Mitchell T. (2019). Facetime vs. Screentime: Gaze patterns to live and video social stimuli in adolescents with ASD. Scientific Reports, 9(1), Article 12643. 10.1038/s41598-019-49039-7
- Guillon Q., Hadjikhani N., Baduel S., Rogé B. (2014). Visual social attention in autism spectrum disorder: Insights from eye tracking studies. Neuroscience & Biobehavioral Reviews, 42, 279–297. 10.1016/j.neubiorev.2014.03.013
- Haith M. M., Bergman T., Moore M. J. (1977). Eye contact and face scanning in early infancy. Science, 198(4319), 853–855. 10.1126/science.918670
- Hanley M., Riby D. M., McCormack T., Carty C., Coyle L., Crozier N., . . . McPhillips M. (2014). Attention during social interaction in children with autism: Comparison to specific language impairment, typical development, and links to social cognition. Research in Autism Spectrum Disorders, 8(7), 908–924. 10.1016/j.rasd.2014.03.020
- Harrison A. J., Slane M. M. (2020). Examining how types of object distractors distinctly compete for facial attention in autism spectrum disorder using eye tracking. Journal of Autism and Developmental Disorders, 50(3), 924–934. 10.1007/s10803-019-04315-3
- Heasman B., Gillespie A. (2019). Neurodivergent intersubjectivity: Distinctive features of how autistic people create shared understanding. Autism, 23(4), 910–921. 10.1177/1362361318785172
- Ho S., Foulsham T., Kingstone A. (2015). Speaking and listening with the eyes: Gaze signaling during dyadic interactions. PLOS ONE, 10(8), Article e0136905. 10.1371/journal.pone.0136905
- Hobson P. (1998). The intersubjective foundations of thought. In Bråten S. (Ed.), Intersubjective communication and emotion in early ontogeny (pp. 283–296). Cambridge University Press.
- Hobson P. (2004). The cradle of thought: Exploring the origins of thinking. Pan Macmillan.
- Hood B. M., Willen J. D., Driver J. (1998). Adult’s eyes trigger shifts of visual attention in human infants. Psychological Science, 9(2), 131–134. 10.1111/1467-9280.00024
- Hutchins T. L., Brien A. (2016). Conversational topic moderates social attention in autism spectrum disorder: Talking about emotions is like driving in a snowstorm. Research in Autism Spectrum Disorders, 26, 99–110. 10.1016/j.rasd.2016.03.006
- IBM Corp. (2016). IBM SPSS Statistics for Windows (Version 24.0).
- JASP Team. (2020). JASP (Version 0.14.1).
- Jones E. J., Venema K., Lowy R., Earl R. K., Webb S. J. (2015). Developmental changes in infant brain activity during naturalistic social experiences. Developmental Psychobiology, 57(7), 842–853. 10.1002/dev.21336
- Jones W., Klin A. (2013). Attention to eyes is present but in decline in 2–6-month-old infants later diagnosed with autism. Nature, 504(7480), 427–431. 10.1038/nature12715
- Kanner L. (1943). Autistic disturbances of affective contact. Nervous Child, 2(3), 217–250.
- Konovalova I., Antolin J. V., Bolderston H., Gregory N. J. (2021). Adults with higher social anxiety show avoidant gaze behaviour in a real-world social setting: A mobile eye tracking study. PLOS ONE, 16(10), Article e0259007. 10.1371/journal.pone.0259007
- Laidlaw K. E., Foulsham T., Kuhn G., Kingstone A. (2011). Potential social interactions are important to social attention. Proceedings of the National Academy of Sciences, 108(14), 5548–5553. 10.1073/pnas.1017022108
- Leekam S. R., López B., Moore C. (2000). Attention and joint attention in preschool children with autism. Developmental Psychology, 36(2), 261–273. 10.1037/0012-1649.36.2.261
- Lewis M., Kim S. J. (2009). The pathophysiology of restricted repetitive behavior. Journal of Neurodevelopmental Disorders, 1(2), 114–132. 10.1007/s11689-009-9019-6
- Lord C., Risi S., Lambrecht L., Cook E. H., Leventhal B. L., DiLavore P. C., Pickles A., Rutter M. (2000). The autism diagnostic observation schedule–generic: A standard measure of social and communication deficits associated with the spectrum of autism. Journal of Autism and Developmental Disorders, 30, 205–223. 10.1023/A:1005592401947
- Magrelli S., Jermann P., Basilio N., Ansermet F., Hentsch F., Nadel J., Billard A. (2013). Social orienting of children with autism to facial expressions and speech: A study with a wearable eye-tracker in naturalistic settings. Frontiers in Psychology, 4, Article 840. 10.3389/fpsyg.2013.00840
- McNeish D. (2016). On using Bayesian methods to address small sample problems. Structural Equation Modeling: A Multidisciplinary Journal, 23(5), 750–773. 10.1080/10705511.2016.1186549
- Moore C., Barresi J. (2017). The role of second-person information in the development of social understanding. Frontiers in Psychology, 8, Article 1667. 10.3389/fpsyg.2017.01667
- Moore C., Corkum V. (1998). Infant gaze following based on eye direction. British Journal of Developmental Psychology, 16(4), 495–503. 10.1111/j.2044-835X.1998.tb00767.x
- Morton J., Johnson M. H. (1991). CONSPEC and CONLERN: A two-process theory of infant face recognition. Psychological Review, 98(2), 164–181. 10.1037/0033-295X.98.2.164
- Nadig A., Lee I., Singh L., Bosshart K., Ozonoff S. (2010). How does the topic of conversation affect verbal exchange and eye gaze? A comparison between typical development and high-functioning autism. Neuropsychologia, 48(9), 2730–2739. 10.1016/j.neuropsychologia.2010.05.020
- Noris B., Nadel J., Barker M., Hadjikhani N., Billard A. (2012). Investigating gaze of children with ASD in naturalistic settings. PLOS ONE, 7(9), Article e44144. 10.1371/journal.pone.0044144
- Papagiannopoulou E. A., Chitty K. M., Hermens D. F., Hickie I. B., Lagopoulos J. (2014). A systematic review and meta-analysis of eye-tracking studies in children with autism spectrum disorders. Social Neuroscience, 9(6), 610–632. 10.1080/17470919.2014.934966
- Pelphrey K. A., Morris J. P. (2006). Brain mechanisms for interpreting the actions of others from biological-motion cues. Current Directions in Psychological Science, 15(3), 136–140. 10.1111/j.0963-7214.2006.00423.x
- Pönkänen L. M., Alhoniemi A., Leppänen J. M., Hietanen J. K. (2011). Does it make a difference if I have an eye contact with you or with your picture? An ERP study. Social Cognitive and Affective Neuroscience, 6, 486–494. 10.1093/scan/nsq068
- Reddy V. (2008). How infants know minds. Harvard University Press. 10.4159/9780674033887
- Riby D. M., Hancock P. J., Jones N., Hanley M. (2013). Spontaneous and cued gaze-following in autism and Williams syndrome. Journal of Neurodevelopmental Disorders, 5(1), Article 13. 10.1186/1866-1955-5-13
- Risko E. F., Laidlaw K. E., Freeth M., Foulsham T., Kingstone A. (2012). Social attention with real versus reel stimuli: Toward an empirical approach to concerns about ecological validity. Frontiers in Human Neuroscience, 6, Article 143. 10.3389/fnhum.2012.00143
- Russell G., Mandy W., Elliott D., White R., Pittwood T., Ford T. (2019). Selection bias on intellectual ability in autism research: A cross-sectional review and meta-analysis. Molecular Autism, 10(1), Article 9. 10.1186/s13229-019-0260-x
- Swanson M. R., Siller M. (2013). Patterns of gaze behavior during an eye-tracking measure of joint attention in typically developing children and children with autism spectrum disorder. Research in Autism Spectrum Disorders, 7(9), 1087–1096. 10.1016/j.rasd.2013.05.007
- Swettenham J., Condie S., Campbell R., Milne E., Coleman M. (2003). Does the perception of moving eyes trigger reflexive visual orienting in autism? Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences, 358(1430), 325–334. 10.1098/rstb.2002.1203
- Trevarthen C. (1979). Communication and cooperation in early infancy: A description of primary intersubjectivity. Before Speech, 1, 530–571.
- Vlamings P. H., Stauder J. E., van Son I. A. (2005). Atypical visual orienting to gaze- and arrow-cues in adults with high functioning autism. Journal of Autism and Developmental Disorders, 35(3), 267–277. 10.1007/s10803-005-3289-y
- Wechsler D. (1999). Wechsler Abbreviated Scale of Intelligence (WASI) [Database record]. APA Psychology Tests. 10.1037/t15170-000
- Wetzels R., Wagenmakers E. J. (2012). A default Bayesian hypothesis test for correlations and partial correlations. Psychonomic Bulletin & Review, 19, 1057–1064. 10.3758/s13423-012-0295-x
- Yarbus A. L. (1967). Eye movements during perception of complex objects. In Yarbus A. L. (Ed.), Eye movements and vision (pp. 171–211). Springer. 10.1007/978-1-4899-5379-7_8
- Zahavi D. (2001). Beyond empathy: Phenomenological approaches to intersubjectivity. Journal of Consciousness Studies, 8(5–6), 151–167.
- Zeifman D., Delaney S., Blass E. M. (1996). Sweet taste, looking, and calm in 2- and 4-week-old infants: The eyes have it. Developmental Psychology, 32(6), 1090–1099. 10.1037/0012-1649.32.6.1090



