Abstract
In social interactions, we rely on non-verbal cues like gaze direction to understand the behaviour of others. How we react to these cues is determined by the degree to which we believe that they originate from an entity with a mind capable of having internal states and showing intentional behaviour, a process called mind perception. While prior work has established a set of neural regions linked to mind perception, research has just begun to examine how mind perception affects social-cognitive mechanisms like gaze processing on a neuronal level. In the current experiment, participants performed a social attention task (i.e. attentional orienting to gaze cues) with either a human or a robot agent (i.e. manipulation of mind perception) while transcranial direct current stimulation (tDCS) was applied to prefrontal and temporo-parietal brain areas. The results show that temporo-parietal stimulation did not modulate mechanisms of social attention, neither in response to the human nor in response to the robot agent, whereas prefrontal stimulation enhanced attentional orienting in response to human gaze cues and attenuated attentional orienting in response to robot gaze cues. The findings suggest that mind perception modulates low-level mechanisms of social cognition via prefrontal structures, and that a certain degree of mind perception is essential in order for prefrontal stimulation to affect mechanisms of social attention.
This article is part of the theme issue ‘From social brains to social robots: applying neurocognitive insights to human–robot interaction’.
Keywords: human–robot interaction, mind perception, brain stimulation, social attention
1. Introduction
Robots that could assist people in various domains are becoming an increasingly common part of everyday life [1]. Although evidence of positive outcomes of human–robot interaction exists [2–4], designing social robots that elicit natural human responses can be challenging owing to people's negative perceptions of having social robots as part of their everyday life [5,6]. To remedy this and design robots that are able to elicit social responses, we must understand how the human brain processes social information in interactions with others, and whether non-human agents are able to activate these networks to a similar extent as human interaction partners (and if so, under which conditions). Following this approach, we can identify physical and behavioural agent features that reliably activate brain areas involved in social-cognitive processes, and investigate whether robots that elicit activation in these networks lead to more acceptance and trust, as well as improved performance, in human–robot interaction [7].
Meaningful social interactions require the ability to infer internal states of others, such as intentions (i.e. mentalizing) and emotions (i.e. empathizing) [8], and to use this information to predict future behaviour [9]. For that purpose, the human brain is equipped with neural networks specialized in processing information relevant to social interactions (i.e. social brain; [9–11]) which involve posterior areas like the temporo-parietal junction (TPJ), the superior temporal sulcus (STS) and the fusiform gyrus (FG), as well as anterior areas like the anterior cingulate cortex (ACC) and the ventromedial and dorsolateral prefrontal cortex (vmPFC, dlPFC) [9,12–17]. TPJ is involved in inferring higher-order action goals [18–26] and mental and spatial perspective taking [23,25,27], while STS and FG are involved in processing biological motion and face identity, respectively [9,16]. PFC is involved in making inferences about enduring dispositions such as preferences or beliefs [15,24,28,29], and activation in medial PFC is positively correlated with the amount of background knowledge we have about others [19,30]. ACC is activated in social interactions requiring mentalizing in real-time [26,31], and is more strongly activated when interacting with human versus machine agents (i.e. computers; [32]).
Critically for human–robot interaction, most current social robot platforms underactivate the social brain network [32–34], which negatively impacts social (e.g. joint attention), emotional (e.g. empathy) and cognitive (e.g. trust) processes that are essential for humans to socially interact with robot agents. Fortunately for social roboticists, social brain activation depends on the degree to which an interaction partner is perceived as ‘having a mind’ (i.e. mind perception [35]): as having the general capability of making changes in the environment (i.e. agency; [36]) and of experiencing internal states, such as emotions and intentions (i.e. experience). As such, it can presumably be triggered by design. Whereas mind is easily perceived in other human agents [37], the degree to which non-human entities like robots trigger mind perception depends on whether their physical and behavioural characteristics are perceived as sufficiently human-like [38–41]. Mind perception is in fact a largely effortless process that activates social brain networks in a bottom-up or reflexive fashion [42–45], triggered by human-like facial features and relations (i.e. spatial arrangement of eye–nose–mouth configurations) [43,44,46–48], as well as biological motion and/or predictable behaviour [49].
Owing to its reflexive nature, mind perception allows observers to differentiate intentional from non-intentional agents within a few hundred milliseconds [44,45], and even passively viewing stimuli that trigger mind perception is sufficient to induce activation in a wide range of social brain networks [50], which varies parametrically as a function of the agent's physical human-likeness (i.e. increases as the agent's face or body becomes more human-like in appearance) [51,52]. Brain areas related to variations in mind perception involve anterior social brain areas, such as the left ACC [13,26,53], as well as posterior areas, such as the left TPJ [54]. Left ACC is activated when others are treated as intentional agents, and responds more strongly during social decision-making tasks that involve intentional versus non-intentional agents [32,53]; activation within left TPJ is associated with attributing human-likeness and intentionality to non-human agents [18,55,56], and grey matter volume in left TPJ corresponds to individual differences in anthropomorphizing non-human entities, in particular animals [54]. The degree to which agents trigger mind perception not only modulates activation in social brain areas, but also determines how we feel about the agents [33,57–59], behave towards them [32,34,53,60,61] and interact with them [49,62–64], and as such has the potential to impact acceptance, trust and performance in human–robot interactions. Mind perception affects higher-order social-cognitive processes like prosociality, morality and economic decision-making [49,61], and also modulates low-level social-cognitive processes, such as face perception [65] and social attention [66–69].
The effect of mind perception on social cognition is so profound that the mere belief that observed behaviour may reflect the actions of an agent ‘with a mind’ makes people interpret pre-programmed behaviours as intentional and motivates them to attune their actions accordingly [34,68–70].
Although the neural link between mind perception and social brain activation [28,51], as well as the behavioural link between mind perception and social-cognitive processes [7], are relatively well understood, research has just begun to examine how mind perception modulates social cognition on a neural level (i.e. activation of which brain areas modulates social cognition as a function of mind perception) [34]. To address this important issue, we use transcranial direct current stimulation (tDCS) to investigate the link between activation of social brain areas implicated in mind perception and low-level social-cognitive processes, such as social attention (i.e. the degree to which changes in gaze direction are followed) [71]. tDCS is a non-invasive electrical stimulation technique that can be applied without disrupting the participant during task execution, and has proven effective in modulating a wide range of cognitive processes in previous studies (e.g. memory, attention, decision-making, perception; [72–75]). Social attention, or the tendency to follow changes in others' gaze direction, was chosen for the present study as it is a basic, yet essential, social-cognitive mechanism that allows for initiating and coordinating communication [76–78] and for establishing joint attention between two interaction partners and an object of interest in the environment [79], and is an important precursor for developing a theory of mind [79].
Social attention can be examined using a gaze-cueing paradigm, where a face-like stimulus is presented centrally on a screen that first looks straight and then changes gaze direction to the left or right side of the screen. This so-called gaze cue is followed by the presentation of a target stimulus (e.g. F or T) that appears at either the cued location (i.e. valid trial) or an uncued location (i.e. invalid trial) and triggers shifts of the observer's attention to the gazed-at location. As a result, reaction times on valid trials are usually shorter than reaction times on invalid trials, with the difference in reaction times between invalid and valid conditions constituting the gaze-cueing effect [71]. Although it is widely accepted that social attention has a strong bottom-up component (i.e. attention is shifted reflexively to the cued location; [79]), there is accumulating evidence that it can be top-down controlled by higher-order social-cognitive processes when context information is available that increases the social relevance of observed gaze signals (e.g. gazer is similar or known to the observer; [68,80–89]). With particular relevance to the current study, manipulating the degree to which a gazer is perceived as an intentional being ‘with a mind’ has been shown to modulate social attention, such that gaze-cueing effects are larger in response to human (i.e. intentional) versus machine (i.e. pre-programmed) gazers [68,69,90,91]. The notion that this top-down modulation is specifically related to mind perception is supported by experiments showing that although individuals on the autism spectrum reflexively attend to gaze signals, they do not show the reported enhancement of gaze-cueing effects for human agents but an enhancement in attentional orienting in response to robot gaze cues (potentially owing to difficulties with inferring internal states underlying human gaze behaviour but an increased interest in machine behaviour) [91].
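The gaze-cueing effect described above is simply a difference of mean reaction times between invalid and valid trials. A minimal sketch (the variable names and reaction-time values are illustrative, not data from this study):

```python
import statistics

def gaze_cueing_effect(valid_rts, invalid_rts):
    """Gaze-cueing effect (ms): mean RT on invalid trials minus mean RT on
    valid trials. Positive values indicate orienting to the gazed-at location."""
    return statistics.mean(invalid_rts) - statistics.mean(valid_rts)

# Hypothetical reaction times (ms) for one observer
valid_rts = [412, 398, 405, 420]     # target appeared at the gazed-at location
invalid_rts = [430, 418, 425, 441]   # target appeared at the uncued location
effect = gaze_cueing_effect(valid_rts, invalid_rts)
```

Averaging over many trials per condition absorbs trial-to-trial noise, which is why gaze-cueing studies typically run on the order of a hundred trials per block.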
While modulatory effects of mind perception on social attention have been examined behaviourally [61,63], how mind perception modulates lower-level social-cognitive mechanisms like gaze cueing at the neuronal level still remains an open question. In particular, it is unclear whether anterior or posterior parts of the social brain network are more strongly involved in modulating social cognition via mind perception. On the one hand, Özdem et al. showed that believing that an agent's eye movements are human-controlled (i.e. intentional) as opposed to pre-programmed (i.e. non-intentional) modulated activation in bilateral TPJ (but not prefrontal networks), and enhanced attentional orienting to gaze cues [34]. On the other hand, Wiese et al. showed that although variations in social attention in response to human versus robot gazers correlated with activation in anterior (i.e. vmPFC) and posterior social brain areas (i.e. TPJ), only activation in bilateral vmPFC correlated with both variations in mind perception and social attention, suggesting that vmPFC might be involved in the top-down control of social attention via mind perception [51]. Taken together, these findings show that although there is convincing evidence that social attention can be modulated via mind perception, the exact source of this modulatory effect still needs to be identified.
To address inconsistencies of previous studies and to examine whether anterior and/or posterior areas of the social brain network implicated in mind perception are causally involved in modulating social attention, in the current study we compare gaze-cueing effects induced by agents ‘with a mind’ (i.e. human) to agents ‘without a mind’ (i.e. robot), with and without tDCS stimulation to left prefrontal and left temporo-parietal areas. This manipulation was chosen for two reasons: first, previous studies showed that sophisticated minds are attributed to agents with human-like appearance but not to agents with a robot-like appearance (i.e. mind ratings increase as a function of physical human-likeness [36,92–94]), which allows us to experimentally manipulate mind perception via physical human-likeness. Second, because mind perception increases activation in social brain networks [52,95–97] and has been shown to modulate social-cognitive processes like gaze cueing on a behavioural level [69,98], using tDCS in the context of a social attention paradigm seems suitable to investigate the outlined research goal. Left prefrontal and left temporo-parietal areas were chosen as sites for tDCS stimulation for the following reasons: first, both areas have been implicated in mind perception in previous studies [28,32,34,51,53], and activation in both areas has been shown to correlate with variations in social attention (i.e. gaze cueing) [34,51]. Second, both areas are located distant enough from each other to minimize the risk of accidental stimulation of the respective other site during stimulation (i.e. prefrontal versus temporo-parietal). 
Third, previous studies suggest that active stimulation is superior to sham stimulation as a control, as it controls for side effects of stimulation that are not directly associated with brain functionality (e.g. stimulation sensations), and allows for drawing specific inferences regarding the origin of the modulation in terms of location, as permitted by the low spatial resolution of tDCS targeting [99,100].
2. Methods and materials
(a). Participants
Eighty-two undergraduate students from George Mason University (57 females; Mage = 19.84 years, s.d. = 2.35, range = 18–29 years) participated in the current study for course credit. All participants were right-handed, had normal or corrected-to-normal vision, had no known neurological or psychological deficits, were not taking any medications known to affect the central nervous system at the time of the experiment, and had no history of migraines, seizures or head injuries. All participants provided written consent to participation and were debriefed at the end of the study. Collection and handling of participant data were in accordance with Institutional Review Board guidelines (approval obtained prior to data collection). Data of participants whose accuracy rate in the gaze-cueing task at baseline was below 85% (six participants), who did not follow the instructions properly (two participants), or who did not complete the study (two participants) were excluded from data analysis (13% of participants in total). The remaining 72 participants were quasi-randomly assigned to one of two experimental conditions: active stimulation of the left prefrontal cortex (PFA; n = 36, 23 females) or active stimulation of the left temporo-parietal cortex (TPA; n = 36, 27 females). To determine the sample size needed for the study, an a priori power analysis was conducted in G*Power for a mixed-effects repeated-measures design (i.e. one between-participants factor and one within-participants factor). Since mixed-effects repeated-measures designs in G*Power can only handle two-factorial designs, the sample size was determined for an ANOVA with a two-level between-participants factor and a four-level within-participants factor (i.e. the two within-participants variables were combined). The analysis was based on an alpha of α = 0.05, a power of 1 − β = 0.95, and the assumption of a small-to-medium effect size (Cohen's d = 0.17). The analysis showed that an approximate sample size of 76 participants would be sufficient for both experimental groups.
Since the current version of G*Power can test two-way but not three-way factorial designs, another power analysis was conducted in R for the general linear model, which is a more generic test. The analysis was performed for a 2 × 2 × 2 factorial model containing three main effects and their interactions (i.e. three two-way interactions and one three-way interaction), using the same alpha (α = 0.05), power (1 − β = 0.95) and effect size (Cohen's d = 0.17) as the previous analysis. The analysis for the general linear model revealed that the study needed a total of 83 participants.
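A general-linear-model power analysis of this kind can be approximated with the non-central F distribution. The sketch below (assuming SciPy, and Cohen's f² as the effect-size metric, in the style of R's pwr.f2.test) illustrates the approach only; the exact sample sizes it returns depend on how the effect size and model terms are specified, so it need not reproduce the numbers reported above:

```python
from scipy.stats import f as f_dist, ncf

def glm_sample_size(f2, num_df, alpha=0.05, power=0.95):
    """Smallest total N at which an F test with `num_df` numerator degrees of
    freedom reaches the target power, for effect size Cohen's f^2."""
    n = num_df + 2
    while True:
        den_df = n - num_df - 1          # denominator degrees of freedom
        ncp = f2 * n                     # non-centrality parameter
        f_crit = f_dist.ppf(1 - alpha, num_df, den_df)
        achieved = 1 - ncf.cdf(f_crit, num_df, den_df, ncp)
        if achieved >= power:
            return n
        n += 1

# e.g. a 2 x 2 x 2 model tests seven effects (three main effects, three
# two-way interactions, one three-way interaction), each on 1 numerator df
n_needed = glm_sample_size(f2=0.03, num_df=1, alpha=0.05, power=0.95)
```

Each candidate N is evaluated by comparing the critical F value against the non-central F distribution whose non-centrality parameter grows with N, which is the same logic G*Power applies internally.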
(b). Apparatus
Stimuli were presented on a Dell 1703FP monitor with the refresh rate set at 85 Hz. Reaction time measures were based on standard keyboard responses. Participants were seated approximately 57 cm from the monitor, and the experimenter ensured that participants were centred with respect to the monitor. The experiment was controlled by Experiment Builder (SR Research Ltd, Ontario, Canada).
(c). tDCS stimulation
An ActivaDose II (ActivaTek) system was used to administer the 2 mA stimulation. Although there is no universally ideal stimulation current for a given montage, we selected 2 mA because it is a commonly used level that is within human safety limits [101]. Electrode placement followed the 10–5 electroencephalogram (EEG) system [102], and tDCS was delivered via two 5 × 5 cm saline-soaked sponges in rubber housings (resulting in a sponge contact area of 3 × 3 cm). For PFA stimulation, the anode was placed on the scalp over F9 and the cathode over Fz. For TPA stimulation, the anode was placed on the scalp over P5 and the cathode was placed extracephalically on the right [103,104]. This montage was chosen to minimize erroneous stimulation of non-targeted cortical areas based on brain modelling (see §2d ‘Brain modelling’ for more details).
(d). Brain modelling
Stimulation parameters were modelled using finite-element models (FEMs) within the HD Explore brain modelling software (Soterix Medical, NY, USA). Brain models were based on conductivities from Datta et al. [105]. The brain models for PFA and TPA stimulation were created for 5 × 5 cm sponges (as required by the Soterix software). However, because the sponge contact area was reduced to 3 × 3 cm by the sponge housings, we used a 160-channel EEG cap to identify the electrode positions not covered by the 3 × 3 cm sponges, and corrected the initial brain models by reducing the number of stimulated electrodes in the modelling software accordingly, providing the best approximation of the stimulation intensity at each area.
PFA stimulation reached multiple prefrontal structures, including the left medial prefrontal cortex (lmPFC), left dorsolateral prefrontal cortex (ldlPFC), left dorsomedial prefrontal cortex (ldmPFC), left ventromedial prefrontal cortex (lvmPFC) and left anterior cingulate cortex (lACC); the Soterix model showed an average peak stimulation intensity of 0.34 V m−1 per 1 mA. For TPA stimulation, structures that received stimulation included the left temporo-parietal junction (lTPJ), left superior temporal sulcus (lSTS), left precuneus (lPRC) and left fusiform face area (lFFA); the Soterix model showed an average peak stimulation intensity of 0.33 V m−1 per 1 mA (see figure 1 for brain models and the electronic supplementary material for exact stimulation intensities of each structure).
Figure 1.
Brain models illustrate the field intensities of stimulation in V m−1 per mA. Darker red areas illustrate higher stimulation intensities compared with blue areas. The circled regions in the ‘left lateral’ pane show the PFA region and the TPA region, respectively. See the electronic supplementary material for exact stimulation intensities of each structure.
Although multiple models were evaluated, the electrode montages for PFA and TPA stimulation reported in §2c ‘tDCS stimulation’ demonstrated the greatest peak stimulation intensity over the targeted brain regions, with the least amount of stimulation of non-target brain regions. Specifically, FEM modelling illustrated that using an extracephalic cathodal electrode reduces erroneous stimulation of non-targeted cortical areas in the TPA stimulation condition. It is important to note that although recent evidence suggests that an extracephalic electrode may reduce the magnitude of the stimulation effect when compared with scalp placements [106], this is not the case in the current experiment, as FEM models suggest that similar stimulation intensities were achieved for PFA and TPA stimulation (0.34 V m−1 versus 0.33 V m−1 per 1 mA).
(e). Stimuli
Gaze cueing requires participants to detect, locate or identify targets that are looked at or looked away from by a gazer [71], which in the current study was either a human (mind perception high) or a robot (mind perception low). In the human condition, the gazer was a digitized photograph of a female face from the Karolinska Directed Emotional Faces database, while in the robot condition the gazer was a photograph of the humanoid robot EDDIE (developed at the Technical University of Munich, Germany). The gazing stimuli were 6.4° wide and 10.0° high, depicted on a white background and presented in full frontal orientation with eyes positioned on the central horizontal axis of the screen (figure 2). For left- and rightward gaze, irises and pupils of the human and the robot gazer were shifted with Photoshop and deviated 0.4° from direct gaze. The target stimulus was a black capital letter (F or T; measuring 0.8° in width and 1.3° in height), which participants had to discriminate by pressing assigned keys on a regular keyboard. The target letters appeared on the horizontal axis of the screen and were located 6.0° left or right from the centre of the screen.
Figure 2.
Human and robot stimuli. The human agent is represented by a female face taken from the Karolinska Directed Emotional Faces (KDEF) database (F07; written informed consent from the Karolinska Institute was received to use the photograph for experimental investigations and illustrations). The robot agent is the robot EDDIE (developed at the Technical University of Munich, Germany).
(f). Procedure
At the beginning of the experiment, participants gave informed consent and completed a Snellen chart test of visual acuity. They then answered a set of questionnaires assessing their perception of robots (Godspeed measure, GSM; [107]), their attitudes towards robots (Negative Attitude Towards Robots Scale, NARS; [108]) and their autistic traits (Autism Quotient; [109]). Upon completion of the questionnaires, participants were instructed on how to perform the gaze-cueing task, and were informed that the experiment consisted of two parts: a first part in which they would perform the gaze-cueing task without stimulation, and a second part in which they would perform the gaze-cueing task under stimulation, with a break in between for setting up the tDCS equipment.
After completing the baseline block, which took 20 min, the researcher set up the tDCS equipment, which took about 10 min. As soon as the current reached its maximum value of 2.0 mA, participants completed a sensation questionnaire to monitor their comfort levels. Following the first sensation questionnaire, they were given an unrelated videogame experience questionnaire, which was administered to ensure that the timing of the stimulation was similar to previous studies that successfully modulated cognitive processes using tDCS [103,110–113]. This was an important step, as there is no unified standard for the use of tDCS to modulate cognitive tasks [114]. After completing the videogame questionnaire, participants completed a second sensation questionnaire. Next, participants started the stimulation gaze-cueing block, which also took 20 min. After the stimulation gaze-cueing block, a third sensation questionnaire was administered, the electrodes were removed, the GSM and NARS questionnaires were administered again, and participants were debriefed. The timing of the experiment can be viewed in figure 3.
Figure 3.
Timing of the experimental procedure. The experiment started with participants completing the baseline gaze-cueing task. Next the researcher set up the tDCS machine and participants completed a questionnaire about their sensations. The participants then completed a decoy survey, which asked about their video-game experience. Participants then completed a second sensation questionnaire followed by a gaze-cueing task under stimulation. After the stimulation gaze cueing task was completed, a final sensation questionnaire was administered and the tDCS stimulation was stopped (set-up image was adapted from Dayan et al. [115]).
The sequence of events on a given trial of gaze cueing is illustrated in figure 4. The beginning of each trial was signalled by a fixation cross at the centre of the screen. Between 700 and 1000 ms later, one of the agents (i.e. human or robot) appeared on the screen looking straight (and with the fixation cross remaining in its position). After a random time interval of 700–1000 ms, the gazer shifted its gaze either left- or rightwards, which constituted the gaze cue. After a stimulus onset asynchrony (SOA; i.e. time interval between gaze cue and target onset) of 400–600 ms, a target letter (F or T) appeared on the left or the right side of the screen, and participants were asked to respond as fast and accurately as possible to the identity of the target [71].1 The gaze cue and target letter remained on the screen until a response was given or after a timeout of 1200 ms was reached, whichever came first. The next trial started after an inter-trial interval (ITI) of 680 ms.
Figure 4.
Sequence of events on a trial of gaze cueing: participants first fixated on a fixation cross for 700–1000 ms and were then presented with the gazing agent (human versus robot) looking straight for 700–1000 ms, followed by a change in gaze direction (to either the left or right side of the screen). After an SOA of 400–600 ms, the target letter (F or T) appeared either where the face was looking or opposite to where the face was looking. The target remained on the screen until a response was given or a timeout of 1200 ms was reached.
Participants were instructed to fixate the fixation cross at the beginning of a trial and not to make any eye movements during the trial. They were also instructed that after the fixation cross the image of a social agent would appear in the centre of the screen, which would first look at them (mutual gaze) and then, after some time, make an eye movement to the left or right side of the screen (averted gaze). Participants were further advised that the change in gaze direction would be followed by the appearance of a target letter (F or T), either at the gazed-at location or opposite the gazed-at location. Participants were asked to indicate as quickly and accurately as possible whether ‘F’ or ‘T’ was shown on the screen by pressing the respective response key: for one half of the participants, ‘F’ was assigned to the ‘D’ key and ‘T’ to the ‘K’ key on a regular keyboard, while for the other half the stimulus–response mapping was reversed. The original key labels were covered with a sticker to prevent interference effects with the actual letters on the keyboard. All instructions were given in written form.
Gaze direction (left, right), target location (left, right), target identity (F, T) and agent type (human, robot) were selected pseudo-randomly; every combination appeared with equal frequency throughout the experiment. Gaze direction was manipulated orthogonally to target location: in half of the trials the target was validly cued, and in the other half it was invalidly cued. Each experimental session was composed of 220 trials: a block of 20 practice trials followed by two gaze-cueing blocks of 100 trials each (i.e. 220 trials for the baseline session and 220 trials for the stimulation session). Participants first completed one session of gaze cueing without stimulation, and were then assigned to either the group that received active stimulation to left PFA or the group that received active stimulation to left TPA. Stimulation was applied for 30 min, during which participants completed the second gaze-cueing session.
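The fully crossed, pseudo-random trial structure described above can be sketched as follows. This is an illustrative reconstruction, not the original Experiment Builder code, and the repeat count is a free parameter:

```python
import itertools
import random

def build_trial_list(n_repeats, seed=0):
    """Cross gaze direction, target location, target identity and agent type,
    repeat each of the 16 combinations equally often, then shuffle the order.
    A trial is 'valid' when gaze direction matches the target location."""
    combos = itertools.product(('left', 'right'),      # gaze direction
                               ('left', 'right'),      # target location
                               ('F', 'T'),             # target identity
                               ('human', 'robot'))     # agent type
    trials = [dict(gaze=g, target_loc=loc, target_id=letter, agent=a,
                   valid=(g == loc))
              for g, loc, letter, a in combos] * n_repeats
    random.Random(seed).shuffle(trials)
    return trials

trials = build_trial_list(n_repeats=2)   # 32 trials, half of them validly cued
```

Because gaze direction and target location are crossed orthogonally, validity falls out of the design rather than being sampled, which guarantees the 50/50 split of valid and invalid trials described above.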
3. Results
(a). Questionnaires
We used the Godspeed measure (GSM; [107]) and the Negative Attitude Towards Robots Scale (NARS; [108]) to determine whether participants in the two stimulation conditions differed in their perception of and attitudes towards robots. Both questionnaires were administered at the beginning of the experiment (to capture potential a priori individual differences), as well as after the completion of the experiment (to assess whether attitudes changed). The short version of the Autism Quotient (AQ-short; [109]) was administered once, at the beginning of the experiment, to measure participants' autistic traits. A 10-point sensation questionnaire was also administered three times to monitor participants' comfort levels during the experiment [103,104]. Participants were told that reporting a ‘7’ would indicate unbearable sensations. The questionnaire was administered as soon as stimulation started, before the start of the stimulation block, and after completing the stimulation block. Participants in the PFA and TPA stimulation conditions did not differ in terms of their perception of robots (GSM; F1,69 = 0.09, p = 0.75, η2 < 0.001), attitudes towards robots (NARS; F1,69 = 0.2, p = 0.65, η2 < 0.01) or autistic traits (AQ score; t69 = −1.66, p = 0.09). Comparison of the GSM and NARS pre–post ratings did not reveal differences between the two stimulation conditions (GSM: F1,69 = 1.19, p = 0.27, η2 < 0.01; NARS: F1,69 = 0.04, p = 0.84, η2 < 0.001); see the electronic supplementary material, tables S1 and S2 for the full report of the inference statistics. Two participants indicated sensation levels above ‘7’, indicating that they were uncomfortable, and did not continue the experiment (see §2a ‘Participants’).
(b). Behavioural data
To determine whether active PFA and/or TPA stimulation had a modulatory effect on gaze cueing, we conducted a 2 × 2 × 2 mixed ANOVA on gaze-cueing effects (i.e. mean reaction times for invalid minus valid trials) with Agent type (human versus robot) and Session (baseline versus stimulation) as within-participants factors, and Brain site (left PFA versus TPA) as a between-participants factor. The descriptive statistics of this analysis are depicted in figure 5.
Figure 5.
Gaze-cueing effects (in ms) as a function of Brain site (left PFA, left TPA), Session (baseline, stimulation) and Agent type (human, robot). There was a significant change in gaze cueing for active PFA stimulation, with no differences in gaze cueing between human and robot at baseline, but significantly larger gaze-cueing effects for the human versus the robot agent under stimulation. Active TPA stimulation did not have significant effects on gaze cueing (*p < 0.05).
Before examining the results of the statistical models, we tested the normality of the residuals using the Kolmogorov–Smirnov test (rather than the more commonly used Shapiro–Wilk test, which is not recommended for larger sample sizes; [116,117]). The Kolmogorov–Smirnov test was non-significant (D = 0.07, p = 0.42), showing that the residuals of our data did not differ significantly from a normal distribution (i.e. the normality-of-residuals assumption of parametric tests was not violated).
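A residual check of this kind can be sketched with SciPy; the simulated residuals below stand in for the real model residuals. Note that a plain Kolmogorov–Smirnov test with the mean and s.d. estimated from the same sample is conservative; the Lilliefors correction is the stricter alternative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
residuals = rng.normal(loc=0.0, scale=5.0, size=144)  # simulated residuals (ms)

# Standardize the residuals, then compare them against a standard normal.
z = (residuals - residuals.mean()) / residuals.std(ddof=1)
D, p = stats.kstest(z, 'norm')
normality_ok = p > 0.05   # non-significant -> no evidence against normality
```

A non-significant result licenses the parametric ANOVA reported above; a significant one would instead motivate a rank-based or permutation analysis.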
The ANOVA revealed no main effect of Agent type (F1,70 = 0.2, p = 0.64, η2 ≤ 0.001), indicating that across stimulation sites and sessions, gaze-cueing effects did not differ significantly between the human and the robot agent (human: 8.22 ms versus robot: 7.20 ms). The main effect of Session (F1,70 < 0.001, p = 0.98, η2 < 0.001) was also not significant, showing that across stimulation sites and agent types, gaze-cueing effects did not differ between baseline and stimulation (baseline: 7.69 ms versus stimulation: 7.74 ms). The main effect of Brain site (F1,70 < 0.001, p = 0.98, η2 < 0.001) was likewise not significant, showing that across agent types and sessions, no differences in gaze-cueing effects were found between the two stimulation sites (PFA: 7.69 ms versus TPA: 7.74 ms). All two-way interactions were also not significant (Agent type × Session: F1,70 = 0.61, p = 0.43, η2 < 0.001; Agent type × Brain site: F1,70 = 2.24, p = 0.13, η2 < 0.01; Session × Brain site: F1,70 = 0.17, p = 0.67, η2 < 0.001). Most importantly, however, the three-way interaction of Agent type × Session × Brain site was significant (F1,70 = 4.50, p = 0.03, η2 = 0.12), indicating that tDCS stimulation affected gaze-cueing effects differently for the human versus the robot condition under left PFA but not left TPA stimulation.
To examine the significant three-way interaction further, two 2 × 2 post hoc ANOVAs with Agent type (human versus robot) and Session (baseline versus stimulation) were conducted, one for left PFA stimulation and one for left TPA stimulation. The ANOVA for the PFA stimulation condition showed no main effects of Agent type (F1,35 = 2.21, p = 0.14, η2 = 0.01) or Session (F1,35 = 0.08, p = 0.77, η2 < 0.001), but a significant Agent type × Session interaction (F1,35 = 5.51, p = 0.02, η2 = 0.02). Post hoc paired t-tests, corrected using the false discovery rate (FDR) procedure, revealed no significant difference in gaze-cueing effects between the human and robot agents at baseline (human: 7.47 ms versus robot: 8.66 ms; p = 0.75), but significantly larger cueing effects for the human versus the robot gazer under stimulation (human: 12.25 ms versus robot: 2.34 ms; p = 0.02). By contrast, the ANOVA for the TPA stimulation condition revealed neither main effects of Agent type (F1,35 = 0.47, p = 0.49, η2 < 0.01) nor Session (F1,35 = 0.09, p = 0.76, η2 < 0.01), nor a significant Agent type × Session interaction (F1,35 = 0.72, p = 0.39, η2 < 0.01). This suggests that while PFA stimulation modulated gaze cueing, with significantly larger gaze-cueing effects for the human versus the robot agent under stimulation, TPA stimulation had no such modulatory effect (i.e. no differences in gaze cueing at baseline or under stimulation).
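The Benjamini–Hochberg FDR correction can be sketched with statsmodels; the uncorrected p-values below are hypothetical, not the study values:

```python
from statsmodels.stats.multitest import multipletests

# Hypothetical uncorrected p-values from two post hoc paired t-tests
raw_p = [0.75, 0.01]

# Benjamini-Hochberg FDR correction at alpha = 0.05
reject, p_fdr, _, _ = multipletests(raw_p, alpha=0.05, method='fdr_bh')
# Only the smaller p-value survives correction (adjusted to 0.02)
```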
A separate 2 × 2 × 2 mixed ANOVA was conducted on participants' response accuracy. The ANOVA revealed no significant main effects or interactions. See the ‘Behavioural Results’ section of the electronic supplementary material for the inference statistics. All data and stimuli are publicly available at https://osf.io/s8ewg/.
4. Discussion
Previous studies have shown that activation in left prefrontal [11,32,51,53,118,119] and temporo-parietal [26,28,53,54] areas is related to mind perception and the modulation of low-level social-cognitive processes like gaze cueing [7,34,51]. The goal of the current experiment was to examine the causal involvement of left prefrontal and left temporo-parietal areas in the top-down modulation of social attention via mind perception. To address this issue, we asked participants to perform a gaze-cueing task with a human and a robot agent (i.e. manipulation of mind perception via physical human-likeness) while applying tDCS to left prefrontal and left temporo-parietal areas. The findings show that stimulating left prefrontal areas had a modulatory effect on social attention, such that gaze cues of intentional agents (i.e. human) were followed significantly more strongly than gaze cues of machine agents (i.e. robot) under stimulation of prefrontal areas. Left temporo-parietal stimulation, in contrast, did not significantly modulate gaze-cueing effects for either of the two agents. These results are in line with previous studies showing that experimental manipulations of mind perception enhance the degree to which gaze signals are followed [34,68,69]. In particular, it was shown that interpreting gaze signals as intentional or human-controlled augments sensory processing of stimuli presented at gazed-at locations [68], and increases the social relevance of observed gaze signals [67,69,91,120]. The results are also in line with studies showing a correlational relationship between activation in left prefrontal areas related to mind perception and modulation of social attention when performing an orthogonal gaze-cueing task outside the fMRI scanner [51], as well as tDCS studies showing that temporo-parietal areas are causally involved in social-cognitive processes like imitation and perspective taking but might not be causally involved in mind perception [121–123].
The current experiment adds to previous findings by localizing the source of top-down modulation of social attention via mind perception to left prefrontal areas, including the ACC and vmPFC. These areas are associated with mentalizing [15,32] and are involved in impression formation in social interaction [124,125]. In particular, activation in mPFC is linked to retrieving stereotypical knowledge about other people [126–128] and to retrieving script-based social knowledge [129,130]. Medial prefrontal areas are also more strongly activated when mentalizing about the internal states of similar than dissimilar others [125,131–133], as well as when viewing social scenes that contain human versus non-human agents [50]. The current experiment adds to the literature by indicating that prefrontal areas might not only be involved in the modulation of higher-order social-cognitive processes like decision-making [32,53], but might also exert modulatory effects on low-level social-cognitive processes like social attention.
Note that the current experiment did not replicate the previously reported difference in gaze cueing between human and robot gazers at baseline [69]. There are several possible reasons for this. First, in previous experiments, the gazers were introduced as ‘human’ versus ‘robot’ via instruction, which provided participants with explicit labels as to how to treat them in terms of mind perception (i.e. ‘human/has a mind’ and ‘robot/has no mind’). By contrast, in the current experiment, the gazers were introduced more neutrally as ‘agents’ and their mind status had to be inferred from physical appearance, which makes the mind perception manipulation more implicit. Although this certainly increases the external validity of the experimental manipulation, it is possible that participants did not pay enough attention to the gazers' mind status, which could wash out effects between the two agents at baseline. Second, because mind perception was manipulated via physical appearance in the current experiment, it is also possible that individual differences in anthropomorphism [54,134] attenuated differences in gaze cueing between the human and robot gazers at baseline. However, because baseline effects are comparable for both stimulation conditions, and because the current paper is mainly interested in the modulation of social attention via mind perception, non-significant differences in gaze cueing at baseline should not have affected the reported findings. Nevertheless, to validate the robustness of the reported findings, future experiments should determine to what degree they might be influenced by individual differences in anthropomorphism.
The question remains why stimulation of left temporo-parietal networks did not significantly modulate low-level mechanisms of social cognition despite previous reports showing a correlational relationship between mechanisms of social attention and bilateral TPJ activation [34]. One explanation is that processes related to mind perception and social attention may not be sufficiently interconnected at the level of the left TPJ to exert a top-down modulatory effect on attentional orienting to gaze cues. This interpretation would be in line with previous reports showing that social functions within the TPJ are lateralized [18], and that an overlap between attentional orienting and mentalizing is found within the right but not the left TPJ [12,135,136]. By contrast, left TPJ lesions have been shown to cause selective deficits in false belief reasoning [134,137], which requires mentalizing but not orienting of social attention. In support of this notion, it has been shown that early posterior ERP components like the N170 are sensitive to the intentionality of an agent without being responsive to the congruency or social outcome of its gaze cues [120], whereas later anterior ERP components like the P350 are sensitive to both an agent's intentionality and the congruency of gaze cues [90] (i.e. a significant difference in P350 amplitudes for invalid versus valid trials in human versus computer-controlled conditions), suggesting that the integration of mind-perception-related processes and social attention might be instantiated in prefrontal (but not temporo-parietal) areas.
Another possible explanation is that prefrontal and temporo-parietal brain regions might process information about an agent's mind on different levels, with TPJ activation being related to inferring particular internal states from observed behaviours [12,20,28,118] (e.g. observing an agent smile leads to the inference that the agent is currently in a state of happiness), and mPFC and vmPFC activation being related to reasoning based on stereotypical assumptions regarding general traits associated with intentional agents [138] (e.g. an agent that looks like a child might like toys). It is possible that studies that manipulate mind perception via instruction of particular beliefs (i.e. ‘eye movements are intentional’) engage the posterior mind perception network [34], while studies that manipulate mind perception via physical appearance (i.e. the agent looks human) more strongly activate stereotypical assumptions about human behaviour, thereby engaging the anterior mind perception network involving the mPFC and vmPFC [126–128]. This interpretation is particularly plausible given that attentional orienting to gaze cues is a fast-acting process [71], which requires information from a readily available source in order to be top-down controlled. Since stereotypical information about an agent is more readily available than the outcomes of mentalizing processes about particular internal states, a stronger involvement of prefrontal areas in the top-down modulation of fast processes like social attention seems tenable.
A third explanation is that the observed modulation of gaze cueing is not specific to mind perception, but due to other (related) functions associated with the left prefrontal cortex. One aspect of the experimental design that could have affected social attention in addition to the gazer's physical appearance is the unpredictability of gaze cues in the current experiment (i.e. targets appeared with equal frequency at validly and invalidly cued locations). Previous studies have shown that stimuli whose behaviour is hard to predict are more likely to be anthropomorphized, and that evaluating unpredictable stimuli is associated with increased activation in medial prefrontal areas, specifically the vmPFC and ACC [49]. In consequence, it is possible that participants' sensitivity to the predictability of gaze cues influenced the degree to which the gazer was anthropomorphized and prefrontal brain areas were activated during gaze cueing. For the current experiment, this means that stimulating prefrontal areas may have specifically enhanced the social relevance of human gaze cues, leading to longer processing times on invalid trials [90] and larger gaze-cueing effects (i.e. the difference in reaction times between invalid and valid trials), while temporo-parietal stimulation may not have affected the perceived social relevance of human gaze cues.2
Alternatively, the vmPFC has been shown to track feelings of eeriness towards non-human agents in a parametric fashion [138] and has been labelled the potential neural correlate of the uncanny valley [139] (i.e. non-human agents with human-like appearance induce feelings of eeriness if they are not perfectly human). If so, stimulation of prefrontal areas could have enhanced feelings of eeriness towards the robotic agent, leading to a disengagement from robot gaze cues together with an increased engagement in attending to human gaze cues (i.e. the eeriness of robot cues made human gaze cues more ‘desirable’). Effects related to non-social prefrontal functions such as working memory, executive functioning or abstract reasoning are less likely to have influenced gaze cueing, because one would expect comparable effects of stimulation for human and non-human agents. Whether prefrontal stimulation modulated gaze cueing directly via mind perception or via processes affected by mind perception, such as perception of uncertainty [90] or emotional reactions to uncanny agents [138], cannot ultimately be answered based on the current data and requires follow-up studies. Nor can it be clearly determined, owing to the lack of spatial specificity of tDCS, which prefrontal brain area(s) ultimately caused the observed top-down modulation of social attention (i.e. areas directly implicated in mind perception such as the left ACC, or areas that are indirectly involved in mind perception such as the left mPFC, vmPFC or dlPFC).
5. Conclusion
Previous studies have shown that the degree to which we attend to social cues depends on the degree to which we perceive mind in the entity sending the cues. The neural correlates of mind perception have been localized to prefrontal and temporo-parietal structures in previous studies, but the causal involvement of these areas in the modulation of low-level social-cognitive processes like gaze cueing had not yet been determined. The current study shows that stimulation of prefrontal areas increases the degree to which human gaze is followed compared with the degree to which robot gaze is followed, while stimulation of temporo-parietal regions does not seem to have a measurable modulatory effect on gaze cueing. Since the effect of prefrontal stimulation is only observable for human gazers, it is tenable that prefrontal stimulation does not simply lead someone to perceive ‘more’ mind in others, but rather seems to enhance the social relevance of signals coming from agents ‘with a mind’. In other words, prefrontal stimulation does not seem to make participants perceive more human-likeness in non-human agents, which makes it unlikely that the observed effect is related to anthropomorphism. Instead, prefrontal stimulation seems to help discriminate agents ‘with a mind’ from agents ‘without a mind’, as evidenced by an increased difference in gaze cueing between the human and the robot gazer under stimulation, indicating that stimulation of prefrontal areas enhances the importance of social signals coming from human agents. Taken together, this study shows a causal link between prefrontal stimulation and mechanisms of social attention, and a dissociation between the anterior and posterior parts of the social brain network in terms of top-down modulation of social-cognitive processes. Whether the effect is specific to mind perception or related to processes indirectly affected by mind perception needs to be determined in future studies.
Footnotes
The SOAs were jittered to prevent preparedness effects on the participant's side. Note that this is in line with previous studies (e.g. [69]) and should not affect attentional orienting, as no qualitative differences in attentional orienting have been reported in this time frame (i.e. both reflexive and voluntary shifts of attention occur in the 400–600 ms SOA range, and no inhibition of return effects have been reported) [79].
Note that the conducted a priori power analysis was based on a small-to-medium effect size, which leaves open the possibility that very small to small effects might not have been detected with the current sample size.
Data accessibility
Details about the data are available in the electronic supplementary material and are accessible at https://osf.io/s8ewg/.
Competing interests
We declare we have no competing interests.
Funding
We received no funding for this study.
References
- 1.Tapus A, Matarić M. 2006. Towards socially assistive robots. J. Robot Soc. Jpn 14, 576–578. ( 10.7210/jrsj.24.576) [DOI] [Google Scholar]
- 2.Scassellati B, Admoni H, Matarić M. 2012. Robots for use in autism research. Annu. Rev. Biomed. Eng. 14, 275–294. ( 10.1146/annurev-bioeng-071811-150036) [DOI] [PubMed] [Google Scholar]
- 3.Basteris A, Nijenhuis SM, Stienen AH, Buurke JH, Prange GB, Amirabdollahian F. 2014. Training modalities in robot-mediated upper limb rehabilitation in stroke: a framework for classification based on a systematic review. J. NeuroEng. Rehabil. 11, 111 ( 10.1186/1743-0003-11-111) [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4.Mubin O, Stevens CJ, Shahid S, Mahmud AA, Dong J-J. 2013. A review of the applicability of robots in education. Technol. Educ. Learn. 1, 13 ( 10.2316/Journal.209.2013.1.209-0015) [DOI] [Google Scholar]
- 5.Bartneck C, Reichenbach J. 2005. Subtle emotional expressions of synthetic characters. Int. J. Hum. Comput. Stud. 62, 179–192. ( 10.1016/j.ijhcs.2004.11.006) [DOI] [Google Scholar]
- 6.Scopelliti M, Giuliani MV, Fornara F. 2005. Robots in a domestic setting: a psychological approach. Univers. Access Inf. Soc. 4, 146–155. ( 10.1007/s10209-005-0118-1) [DOI] [Google Scholar]
- 7.Wiese E, Metta G, Wykowska A. 2017. Robots as intentional agents: using neuroscientific methods to make robots appear more social. Front. Psychol. 8, 1663 ( 10.3389/fpsyg.2017.01663) [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8.Baron-Cohen S. 1997. Mindblindness: an essay on autism and theory of mind. New York, NY: MIT Press. [Google Scholar]
- 9.Frith CD, Frith U. 2006. How we predict what other people are going to do. Brain Res. 1079, 36–46. ( 10.1016/j.brainres.2005.12.126) [DOI] [PubMed] [Google Scholar]
- 10.Adolphs R. 2009. The social brain: neural basis of social knowledge. Annu. Rev. Psychol. 60, 693–716. ( 10.1146/annurev.psych.60.110707.163514) [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Van Overwalle F, Baetens K. 2009. Understanding others' actions and goals by mirror and mentalizing systems: a meta-analysis. Neuroimage 48, 564–584. ( 10.1016/j.neuroimage.2009.06.009) [DOI] [PubMed] [Google Scholar]
- 12.Bzdok D, Langner R, Schilbach L, Engemann DA, Laird AR, Fox PT, Eickhoff S. 2013. Segregation of the human medial prefrontal cortex in social cognition. Front. Hum. Neurosci. 7, 232 ( 10.3389/fnhum.2013.00232) [DOI] [PMC free article] [PubMed] [Google Scholar]
- 13.Spunt RP, Adolphs R. 2014. Validating the why/how contrast for functional MRI studies of theory of mind. Neuroimage 99, 301–311. ( 10.1016/j.neuroimage.2014.05.023) [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Spunt RP, Lieberman MD. 2012. Dissociating modality-specific and supramodal neural systems for action understanding. J. Neurosci. 32, 3575–3583. ( 10.1523/JNEUROSCI.5715-11.2012) [DOI] [PMC free article] [PubMed] [Google Scholar]
- 15.Amodio DM, Frith CD. 2006. Meeting of minds: the medial frontal cortex and social cognition. Nat. Rev. Neurosci. 7, 268–277. ( 10.1038/nrn1884) [DOI] [PubMed] [Google Scholar]
- 16.Saxe R, Carey S, Kanwisher N. 2004. Understanding other minds: linking developmental psychology and functional neuroimaging. Annu. Rev. Psychol. 55, 87–124. ( 10.1146/annurev.psych.55.090902.142044) [DOI] [PubMed] [Google Scholar]
- 17.Gallagher HL, Frith CD. 2003. Functional imaging of ‘theory of mind’. Trends Cogn. Sci. 7, 77–83. ( 10.1016/S1364-6613(02)00025-6) [DOI] [PubMed] [Google Scholar]
- 18.Perner J, Aichhorn M, Kronbichler M, Staffen W, Ladurner G. 2006. Thinking of mental and other representations: the roles of left and right temporo-parietal junction. Social Neurosci. 1, 245–258. ( 10.1080/17470910600989896) [DOI] [PubMed] [Google Scholar]
- 19.Grèzes J, Berthoz S, Passingham RE. 2006. Amygdala activation when one is the target of deceit: did he lie to you or to someone else? Neuroimage 30, 601–608. ( 10.1016/j.neuroimage.2005.09.038) [DOI] [PubMed] [Google Scholar]
- 20.Saxe R, Powell LJ. 2006. It's the thought that counts: specific brain regions for one component of theory of mind. Psychol. Sci. 17, 692–699. ( 10.1111/j.1467-9280.2006.01768.x) [DOI] [PubMed] [Google Scholar]
- 21.Ohnishi T, et al. 2004. The neural network for the mirror system and mentalizing in normally developed children: an fMRI study. Neuroreport 15, 1483–1487. ( 10.1097/01.wnr.0000127464.17770.1f) [DOI] [PubMed] [Google Scholar]
- 22.Grezes J. 2004. Brain mechanisms for inferring deceit in the actions of others. J. Neurosci. 24, 5500–5505. ( 10.1523/JNEUROSCI.0219-04.2004) [DOI] [PMC free article] [PubMed] [Google Scholar]
- 23.Farrer C, Franck N, Georgieff N, Frith CD, Decety J, Jeannerod M. 2003. Modulating the experience of agency: a positron emission tomography study. Neuroimage 18, 324–333. ( 10.1016/S1053-8119(02)00041-1) [DOI] [PubMed] [Google Scholar]
- 24.Saxe R, Kanwisher N. 2003. People thinking about thinking people: the role of the temporo-parietal junction in ‘theory of mind’. Neuroimage 19, 1835–1842. ( 10.1016/S1053-8119(03)00230-1) [DOI] [PubMed] [Google Scholar]
- 25.Chaminade T, Decety J. 2002. Leader or follower? Involvement of the inferior parietal lobule in agency. Neuroreport 13, 1975–1978. ( 10.1097/00001756-200210280-00029) [DOI] [PubMed] [Google Scholar]
- 26.Gallagher HL, Happé F, Brunswick N, Fletcher PC, Frith U, Frith CD. 2000. Reading the mind in cartoons and stories: an fMRI study of ‘theory of mind’ in verbal and nonverbal tasks. Neuropsychologia 38, 11–21. ( 10.1016/S0028-3932(99)00053-6) [DOI] [PubMed] [Google Scholar]
- 27.Ruby P, Decety J. 2001. Effect of subjective perspective taking during simulation of action: a PET investigation of agency. Nat. Neurosci. 4, 546–550. ( 10.1038/87510) [DOI] [PubMed] [Google Scholar]
- 28.Van Overwalle F. 2009. Social cognition and the brain: a meta-analysis. Hum. Brain Mapp. 30, 829–858. ( 10.1002/hbm.20547) [DOI] [PMC free article] [PubMed] [Google Scholar]
- 29.Saxe R. 2006. Uniquely human social cognition. Curr. Opin. Neurobiol. 16, 235–239. ( 10.1016/j.conb.2006.03.001) [DOI] [PubMed] [Google Scholar]
- 30.Saxe R, Wexler A. 2005. Making sense of another mind: the role of the right temporo-parietal junction. Neuropsychologia 43, 1391–1399. ( 10.1016/j.neuropsychologia.2005.02.013) [DOI] [PubMed] [Google Scholar]
- 31.McCabe K, Houser D, Ryan L, Smith V, Trouard T. 2001. A functional imaging study of cooperation in two-person reciprocal exchange. Proc. Natl Acad. Sci. USA 98, 11 832–11 835. ( 10.1073/pnas.211415698) [DOI] [PMC free article] [PubMed] [Google Scholar]
- 32.Gallagher HL, Jack AI, Roepstorff A, Frith CD. 2002. Imaging the intentional stance in a competitive game. Neuroimage 16, 814–821. ( 10.1006/nimg.2002.1117) [DOI] [PubMed] [Google Scholar]
- 33.Harris LT, Fiske ST. 2011. Perceiving humanity or not: a social neuroscience approach to dehumanized perception. In Social neuroscience: toward understanding the underpinnings of the social mind. (eds A Todorov, ST Fiske, DA Prentice), pp. 123–134. Oxford, UK: Oxford University Press. [Google Scholar]
- 34.Özdem C, Wiese E, Wykowska A, Müller H, Brass M, Van Overwalle F. 2016. Believing androids – fMRI activation in the right temporo-parietal junction is modulated by ascribing intentions to non-human agents. Social Neurosci. 12, 582–593. ( 10.1080/17470919.2016.1207702) [DOI] [PubMed] [Google Scholar]
- 35.Spunt RP, Meyer ML, Lieberman MD. 2015. The default mode of human brain function primes the intentional stance. J. Cogn. Neurosci. 27, 1116–1124. ( 10.1162/jocn_a_00785) [DOI] [PubMed] [Google Scholar]
- 36.Gray HM, Gray K, Wegner DM. 2007. Dimensions of mind perception. Science 315, 619 ( 10.1126/science.1134475) [DOI] [PubMed] [Google Scholar]
- 37.Epley N, Waytz A, Cacioppo JT. 2007. On seeing human: a three-factor theory of anthropomorphism. Psychol. Rev. 114, 864–886. ( 10.1037/0033-295X.114.4.864) [DOI] [PubMed] [Google Scholar]
- 38.Kiesler S, Powers A, Fussell SR, Torrey C. 2008. Anthropomorphic interactions with a robot and robot-like agent. Social Cogn. 26, 169–181. ( 10.1521/soco.2008.26.2.169) [DOI] [Google Scholar]
- 39.DiSalvo C, Gemperle F. 2003. From seduction to fulfillment: the use of anthropomorphic form in design. In DPPI '03 Proc. 2003 Int. Conf. Designing Pleasurable Products and Interfaces, Pittsburgh, PA, 2–26 June 2003, pp. 67–72. New York, NY: ACM.
- 40.Castelli F, Happé F, Frith U, Frith C. 2000. Movement and mind: a functional imaging study of perception and interpretation of complex intentional movement patterns. Neuroimage 12, 314–325. ( 10.1006/nimg.2000.0612) [DOI] [PubMed] [Google Scholar]
- 41.Heider F, Simmel M. 1944. An experimental study of apparent behavior. Am. J. Psychol. 57, 243–259. ( 10.2307/1416950) [DOI] [Google Scholar]
- 42.Gao T, McCarthy G, Scholl BJ. 2010. The wolfpack effect: perception of animacy irresistibly influences interactive behavior. Psychol. Sci. 21, 1845–1853. ( 10.1177/0956797610388814) [DOI] [PubMed] [Google Scholar]
- 43.Schein C, Gray K. 2015. The unifying moral dyad: liberals and conservatives share the same harm-based moral template. Pers. Social Psychol. Bull. 41, 1147–1163. ( 10.1177/0146167215591501) [DOI] [PubMed] [Google Scholar]
- 44.Looser CE, Wheatley T. 2010. The tipping point of animacy: how, when, and where we perceive life in a face. Psychol. Sci. 21, 1854–1862. ( 10.1177/0956797610388044) [DOI] [PubMed] [Google Scholar]
- 45.Wheatley T, Weinberg A, Looser C, Moran T, Hajcak G. 2011. Mind perception: real but not artificial faces sustain neural activity beyond the N170/VPP. PLoS ONE 6, e17960 ( 10.1371/journal.pone.0017960) [DOI] [PMC free article] [PubMed] [Google Scholar]
- 46.Balas B, Tonsager C. 2014. Face animacy is not all in the eyes: evidence from contrast chimeras. Perception 43, 355–367. ( 10.1068/p7696) [DOI] [PubMed] [Google Scholar]
- 47.Deska JC, Lloyd EP, Hugenberg K. 2016. Advancing our understanding of the interface between perception and intergroup relations. Psychol. Inq. 27, 286–289. ( 10.1080/1047840X.2016.1215208) [DOI] [Google Scholar]
- 48.Maurer D, Grand RL, Mondloch CJ. 2002. The many faces of configural processing. Trends Cogn. Sci. 6, 255–260. ( 10.1016/S1364-6613(02)01903-4) [DOI] [PubMed] [Google Scholar]
- 49.Waytz A, Gray K, Epley N, Wegner DM. 2010. Causes and consequences of mind perception. Trends Cogn. Sci. 14, 383–388. ( 10.1016/j.tics.2010.05.006) [DOI] [PubMed] [Google Scholar]
- 50.Wagner DD, Kelley WM, Heatherton TF. 2011. Individual differences in the spontaneous recruitment of brain regions supporting mental state understanding when viewing natural social scenes. Cereb. Cortex 21, 2788–2796. ( 10.1093/cercor/bhr074) [DOI] [PMC free article] [PubMed] [Google Scholar]
- 51.Wiese E, Buzzell GA, Abubshait A, Beatty PJ. 2018. Seeing minds in others: mind perception modulates low-level social-cognitive performance and relates to ventromedial prefrontal structures. Cogn. Affect. Behav. Neurosci. 18, 837–856. ( 10.3758/s13415-018-0608-2) [DOI] [PubMed] [Google Scholar]
- 52.Krach S, Hegel F, Wrede B, Sagerer G, Binkofski F, Kircher T. 2008. Can machines think? Interaction and perspective taking with robots investigated via fMRI. PLoS ONE 3, e2597 ( 10.1371/journal.pone.0002597) [DOI] [PMC free article] [PubMed] [Google Scholar]
- 53.Sanfey AG, Rilling JK, Aronson JA, Nystrom LE, Cohen JD. 2003. The neural basis of economic decision-making in the Ultimatum Game. Science 300, 1755–1758. ( 10.1126/science.1082976) [DOI] [PubMed] [Google Scholar]
- 54.Cullen H, Kanai R, Bahrami B, Rees G. 2013. Individual differences in anthropomorphic attributions and human brain structure. Social Cogn. Affect. Neurosci. 9, 1276–1280. ( 10.1093/scan/nst109) [DOI] [PMC free article] [PubMed] [Google Scholar]
- 55.Zink CF, Kempf L, Hakimi S, Rainey CA, Stein JL, Meyer-Lindenberg A. 2011. Vasopressin modulates social recognition-related activity in the left temporoparietal junction in humans. Transl. Psychiatry 1, e3 ( 10.1038/tp.2011.2) [DOI] [PMC free article] [PubMed] [Google Scholar]
- 56.Chaminade T, Hodgins J, Kawato M. 2007. Anthropomorphism influences perception of computer-animated characters' actions. Social Cogn. Affect. Neurosci. 2, 206–216. ( 10.1093/scan/nsm017) [DOI] [PMC free article] [PubMed] [Google Scholar]
- 57.Gutsell JN, Inzlicht M. 2012. Intergroup differences in the sharing of emotive states: neural evidence of an empathy gap. Social Cogn. Affect. Neurosci. 7, 596–603. ( 10.1093/scan/nsr035) [DOI] [PMC free article] [PubMed] [Google Scholar]
- 58.Saygin AP, Chaminade T, Ishiguro H, Driver J, Frith C. 2012. The thing that should not be: predictive coding and the uncanny valley in perceiving human and humanoid robot actions. Social Cogn. Affect. Neurosci. 7, 413–422. ( 10.1093/scan/nsr025) [DOI] [PMC free article] [PubMed] [Google Scholar]
- 59.Cehajic S, Brown R, Gonzalez R. 2009. What do I care? Perceived ingroup responsibility and dehumanization as predictors of empathy felt for the victim group. Group Process Intergroup Relat. 12, 715–729. ( 10.1177/1368430209347727) [DOI] [Google Scholar]
- 60.Haley KJ, Fessler DMT. 2005. Nobody's watching? Subtle cues affect generosity in an anonymous economic game. Evol. Hum. Behav. 26, 245–256. (doi:10.1016/j.evolhumbehav.2005.01.002)
- 61.Bering JM, Johnson D. 2005. ‘O Lord… you perceive my thoughts from afar’: recursiveness and the evolution of supernatural agency. J. Cogn. Cult. 5, 118–142. (doi:10.1163/1568537054068679)
- 62.Hertz N, Wiese E. 2017. Social facilitation with non-human agents: possible or not? Proc. Hum. Factors Ergon. Soc. Annu. Meeting 61, 222–225. (doi:10.1177/1541931213601539)
- 63.Riether N, Hegel F, Wrede B, Horstmann G. 2012. Social facilitation with social robots? In HRI '12. Proc. 7th Ann. ACM/IEEE Int. Conf. Human-Robot Interaction, 5–8 March 2012, Boston, MA, pp. 41–46. New York, NY: ACM Press. See http://dl.acm.org/citation.cfm?doid=2157689.2157697 (accessed 15 August 2018).
- 64.Short E, Hart J, Vu M, Scassellati B. 2010. No fair!! An interaction with a cheating robot. In 2010 5th ACM/IEEE Int. Conf. Human-Robot Interaction (HRI), Nara, Japan, 2 March 2010 (eds P Hinds, H Ishiguro, T Kanda, P Kahn), pp. 219–226. Piscataway, NJ: IEEE Press.
- 65.Takahashi K, Watanabe K. 2013. Gaze cueing by pareidolia faces. i-Perception 4, 490–492. (doi:10.1068/i0617sas)
- 66.Abubshait A, Wiese E. 2017. You look human, but act like a machine: agent appearance and behavior modulate different aspects of human–robot interaction. Front. Psychol. 8, 1393. (doi:10.3389/fpsyg.2017.01393)
- 67.Caruana N, de Lissa P, McArthur G. 2016. Beliefs about human agency influence the neural processing of gaze during joint attention. Social Neurosci. 12, 194–206. (doi:10.1080/17470919.2016.1160953)
- 68.Wykowska A, Wiese E, Prosser A, Müller HJ. 2014. Beliefs about the minds of others influence how we process sensory information. PLoS ONE 9, e94339. (doi:10.1371/journal.pone.0094339)
- 69.Wiese E, Wykowska A, Zwickel J, Müller HJ. 2012. I see what you mean: how attentional selection is shaped by ascribing intentions to others. PLoS ONE 7, e45391. (doi:10.1371/journal.pone.0045391)
- 70.Epley N, Waytz A, Akalis S, Cacioppo JT. 2008. When we need a human: motivational determinants of anthropomorphism. Social Cogn. 26, 143–155. (doi:10.1521/soco.2008.26.2.143)
- 71.Friesen CK, Kingstone A. 1998. The eyes have it! Reflexive orienting is triggered by nonpredictive gaze. Psychon. Bull. Rev. 5, 490–495. (doi:10.3758/BF03208827)
- 72.Coffman BA, Clark VP, Parasuraman R. 2014. Battery powered thought: enhancement of attention, learning, and memory in healthy adults using transcranial direct current stimulation. Neuroimage 85, 895–908. (doi:10.1016/j.neuroimage.2013.07.083)
- 73.Jacobson L, Koslowsky M, Lavidor M. 2012. tDCS polarity effects in motor and cognitive domains: a meta-analytical review. Exp. Brain Res. 216, 1–10. (doi:10.1007/s00221-011-2891-9)
- 74.Antal A, Nitsche MA, Paulus W. 2001. External modulation of visual perception in humans. Neuroreport 12, 3553–3555. (doi:10.1097/00001756-200111160-00036)
- 75.Cohen Kadosh R, Soskic S, Iuculano T, Kanai R, Walsh V. 2010. Modulating neuronal activity produces specific and long-lasting changes in numerical competence. Curr. Biol. 20, 2016–2020. (doi:10.1016/j.cub.2010.10.007)
- 76.Nummenmaa L, Calder AJ. 2009. Neural mechanisms of social attention. Trends Cogn. Sci. 13, 135–143. (doi:10.1016/j.tics.2008.12.006)
- 77.Adams RB, Kleck RE. 2005. Effects of direct and averted gaze on the perception of facially communicated emotion. Emotion 5, 3–11. (doi:10.1037/1528-3542.5.1.3)
- 78.Blakemore SJ, Winston J, Frith U. 2004. Social cognitive neuroscience: where are we heading? Trends Cogn. Sci. 8, 216–222. (doi:10.1016/j.tics.2004.03.012)
- 79.Frischen A, Bayliss AP, Tipper SP. 2007. Gaze cueing of attention: visual attention, social cognition, and individual differences. Psychol. Bull. 133, 694–724. (doi:10.1037/0033-2909.133.4.694)
- 80.Dalmaso M, Edwards SG, Bayliss AP. 2016. Re-encountering individuals who previously engaged in joint gaze modulates subsequent gaze cueing. J. Exp. Psychol. Learn. Mem. Cogn. 42, 271–284. (doi:10.1037/xlm0000159)
- 81.Cazzato V, Liuzza MT, Caprara GV, Macaluso E, Aglioti SM. 2015. The attracting power of the gaze of politicians is modulated by the personality and ideological attitude of their voters: a functional magnetic resonance imaging study. Eur. J. Neurosci. 42, 2534–2545. (doi:10.1111/ejn.13038)
- 82.Porciello G, Holmes BS, Liuzza MT, Crostella F, Aglioti SM, Bufalari I. 2014. Interpersonal multisensory stimulation reduces the overwhelming distracting power of self-gaze: psychophysical evidence for ‘engazement’. Sci. Rep. 4, 1–7. (doi:10.1038/srep06669)
- 83.Ciardo F, Marino BFM, Actis-Grosso R, Rossetti A, Ricciardelli P. 2014. Face age modulates gaze following in young adults. Sci. Rep. 4, 4746. (doi:10.1038/srep04746)
- 84.Hungr CJ, Hunt AR. 2012. Physical self-similarity enhances the gaze-cueing effect. Q. J. Exp. Psychol. 65, 1250–1259. (doi:10.1080/17470218.2012.690769)
- 85.Kawai N. 2011. Attentional shift by eye gaze requires joint attention: eye gaze cues are unique to shift attention: social attention by the gaze cues. Jpn Psychol. Res. 53, 292–301. (doi:10.1111/j.1468-5884.2011.00470.x)
- 86.Bonifacci P, Ricciardelli P, Lugli L, Pellicano A. 2008. Emotional attention: effects of emotion and gaze direction on overt orienting of visual attention. Cogn. Process. 9, 127–135. (doi:10.1007/s10339-007-0198-3)
- 87.Fox MD, Snyder AZ, Vincent JL, Raichle ME. 2007. Intrinsic fluctuations within cortical systems account for intertrial variability in human behavior. Neuron 56, 171–184. (doi:10.1016/j.neuron.2007.08.023)
- 88.Tipples J. 2006. Fear and fearfulness potentiate automatic orienting to eye gaze. Cogn. Emot. 20, 309–320. (doi:10.1080/02699930500405550)
- 89.Ristic J, Kingstone A. 2005. Taking control of reflexive social attention. Cognition 94, B55–B65. (doi:10.1016/j.cognition.2004.04.005)
- 90.Caruana N, McArthur G, Woolgar A, Brock J. 2017. Simulating social interactions for the experimental investigation of joint attention. Neurosci. Biobehav. Rev. 74, 115–125. (doi:10.1016/j.neubiorev.2016.12.022)
- 91.Wiese E, Wykowska A, Müller HJ. 2014. What we observe is biased by what other people tell us: beliefs about the reliability of gaze behavior modulate attentional orienting to gaze cues. PLoS ONE 9, e94529. (doi:10.1371/journal.pone.0094529)
- 92.Rosenthal-von der Pütten AM, Krämer NC. 2014. How design characteristics of robots determine evaluation and uncanny valley related responses. Comput. Hum. Behav. 36, 422–439. (doi:10.1016/j.chb.2014.03.066)
- 93.Bartneck C. 2013. Robots in the theatre and the media. In Proc. 8th Int. Conf. Design and Semantics of Form and Movement (DeSForM2013), Wuxi, PR China, 22–25 September 2013 (eds L-L Chen, JP Djajadiningrat, LMG Feijs, S Fraser, J Hu, SHM Kyffin, D Steffen), pp. 64–70. Eindhoven, The Netherlands: Technische Universiteit Eindhoven.
- 94.Jack AI, Robbins P. 2012. The phenomenal stance revisited. Rev. Philos. Psychol. 3, 383–403. (doi:10.1007/s13164-012-0104-5)
- 95.Takahashi H, et al. 2014. Different impressions of other agents obtained through social interaction uniquely modulate dorsal and ventral pathway activities in the social human brain. Cortex 58, 289–300. (doi:10.1016/j.cortex.2014.03.011)
- 96.Gobbini MI, Gentili C, Ricciardi E, Bellucci C, Salvini P, Laschi C, Guazzelli M, Pietrini P. 2011. Distinct neural systems involved in agency and animacy detection. J. Cogn. Neurosci. 23, 1911–1920. (doi:10.1162/jocn.2010.21574)
- 97.Carter EJ, Hodgins JK, Rakison DH. 2011. Exploring the neural correlates of goal-directed action and intention understanding. Neuroimage 54, 1634–1642. (doi:10.1016/j.neuroimage.2010.08.077)
- 98.Gobel MS, Tufft MRA, Richardson DC. 2017. Social beliefs and visual attention: how the social relevance of a cue influences spatial orienting. Cogn. Sci. 42, 161–185. (doi:10.1111/cogs.12529)
- 99.Polanía R, Nitsche MA, Ruff CC. 2018. Studying and modifying brain function with non-invasive brain stimulation. Nat. Neurosci. 21, 174–187. (doi:10.1038/s41593-017-0054-4)
- 100.Parkin BL, Ekhtiari H, Walsh VF. 2015. Non-invasive human brain stimulation in cognitive neuroscience: a primer. Neuron 87, 932–945. (doi:10.1016/j.neuron.2015.07.032)
- 101.Bikson M, Datta A, Elwassif M. 2009. Establishing safety limits for transcranial direct current stimulation. Clin. Neurophysiol. 120, 1033–1034. (doi:10.1016/j.clinph.2009.03.018)
- 102.Oostenveld R, Praamstra P. 2001. The five percent electrode system for high-resolution EEG and ERP measurements. Clin. Neurophysiol. 112, 713–719. (doi:10.1016/S1388-2457(00)00527-7)
- 103.Blumberg EJ, Foroughi CK, Scheldrup MR, Peterson MS, Boehm-Davis DA, Parasuraman R. 2014. Reducing the disruptive effects of interruptions with noninvasive brain stimulation. Hum. Factors 57, 1051–1062. (doi:10.1177/0018720814565189)
- 104.Falcone B, Coffman BA, Clark VP, Parasuraman R. 2012. Transcranial direct current stimulation augments perceptual sensitivity and 24-hour retention in a complex threat detection task. PLoS ONE 7, e34993. (doi:10.1371/journal.pone.0034993)
- 105.Datta A, Truong D, Minhas P, Parra LC, Bikson M. 2012. Inter-individual variation during transcranial direct current stimulation and normalization of dose using MRI-derived computational models. Front. Psychiatry 3, 91. (doi:10.3389/fpsyt.2012.00091)
- 106.Moliadze V, Antal A, Paulus W. 2010. Electrode-distance dependent after-effects of transcranial direct and random noise stimulation with extracephalic reference electrodes. Clin. Neurophysiol. 121, 2165–2171. (doi:10.1016/j.clinph.2010.04.033)
- 107.Bartneck C, Croft E, Kulic D. 2008. Measuring the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. In Proc. Workshop on Metrics for Human-Robot Interaction, at 3rd ACM/IEEE Int. Conf. Human-Robot Interaction (HRI08), Amsterdam, 12 March 2008 (eds CR Burghardt, A Steinfeld), Tech. Rep. no. 471, pp. 37–40. Hatfield, UK: University of Hertfordshire.
- 108.Nomura T, Kanda T, Suzuki T. 2006. Experimental investigation into influence of negative attitudes toward robots on human–robot interaction. AI Soc. 20, 138–150. (doi:10.1007/s00146-005-0012-7)
- 109.Baron-Cohen S, Wheelwright S, Skinner R, Martin J, Clubley E. 2001. The autism-spectrum quotient (AQ): evidence from Asperger syndrome/high-functioning autism, males and females, scientists and mathematicians. J. Autism Dev. Disord. 31, 5–17. (doi:10.1023/A:1005653411471)
- 110.Javadi AH, Cheng P, Walsh V. 2012. Short duration transcranial direct current stimulation (tDCS) modulates verbal memory. Brain Stimulat. 5, 468–474. (doi:10.1016/j.brs.2011.08.003)
- 111.Berryhill ME. 2014. Hits and misses: leveraging tDCS to advance cognitive research. Front. Psychol. 5, 800. (doi:10.3389/fpsyg.2014.00800)
- 112.Boggio PS, Rocha RR, da Silva MT, Fregni F. 2008. Differential modulatory effects of transcranial direct current stimulation on a facial expression go-no-go task in males and females. Neurosci. Lett. 447, 101–105. (doi:10.1016/j.neulet.2008.10.009)
- 113.Ferrucci R, et al. 2008. Transcranial direct current stimulation improves recognition memory in Alzheimer disease. Neurology 71, 493–498. (doi:10.1212/01.wnl.0000317060.43722.a3)
- 114.Nitsche MA, et al. 2008. Transcranial direct current stimulation: state of the art 2008. Brain Stimulat. 1, 206–223. (doi:10.1016/j.brs.2008.06.004)
- 115.Hogeveen J, Obhi SS, Banissy MJ, Santiesteban I, Press C, Catmur C, Bird G. 2015. Task-dependent and distinct roles of the temporoparietal junction and inferior frontal cortex in the control of imitation. Social Cogn. Affect. Neurosci. 10, 1003–1009. (doi:10.1093/scan/nsu148)
- 116.Das KR, Imon AH. 2016. A brief review of tests for normality. Am. J. Theor. Appl. Stat. 5, 5. (doi:10.11648/j.ajtas.20160501.12)
- 117.Park YG. 2013. Comments on statistical issues in January 2013. Korean J. Fam. Med. 34, 64. (doi:10.4082/kjfm.2013.34.1.64)
- 118.Schurz M, Radua J, Aichhorn M, Richlan F, Perner J. 2014. Fractionating theory of mind: a meta-analysis of functional brain imaging studies. Neurosci. Biobehav. Rev. 42, 9–34. (doi:10.1016/j.neubiorev.2014.01.009)
- 119.Chaminade T, Rosset D, Da Fonseca D, Nazarian B, Lutcher E, Cheng G, Deruelle C. 2012. How do we think machines think? An fMRI study of alleged competition with an artificial intelligence. Front. Hum. Neurosci. 6, 103. (doi:10.3389/fnhum.2012.00103)
- 120.Caruana N, de Lissa P, McArthur G. 2015. The neural time course of evaluating self-initiated joint attention bids. Brain Cogn. 98, 43–52. (doi:10.1016/j.bandc.2015.06.001)
- 121.Santiesteban I, Banissy MJ, Catmur C, Bird G. 2012. Enhancing social ability by stimulating right temporoparietal junction. Curr. Biol. 22, 2274–2277. (doi:10.1016/j.cub.2012.10.018)
- 122.Santiesteban I, Banissy MJ, Catmur C, Bird G. 2015. Functional lateralization of temporoparietal junction – imitation inhibition, visual perspective-taking and theory of mind. Eur. J. Neurosci. 42, 2527–2533. (doi:10.1111/ejn.13036)
- 123.Hogeveen J, Obhi SS, Banissy MJ, Santiesteban I, Press C, Catmur C, Bird G. 2015. Task-dependent and distinct roles of the temporoparietal junction and inferior frontal cortex in the control of imitation. Social Cogn. Affect. Neurosci. 10, 1003–1009. (doi:10.1093/scan/nsu148)
- 124.Szczepanski SM, Knight RT. 2014. Insights into human behavior from lesions to the prefrontal cortex. Neuron 83, 1002–1018. (doi:10.1016/j.neuron.2014.08.011)
- 125.Mitchell JP, Banaji MR, Macrae CN. 2005. General and specific contributions of the medial prefrontal cortex to knowledge about mental states. Neuroimage 28, 757–762. (doi:10.1016/j.neuroimage.2005.03.011)
- 126.Fairhall SL, Anzellotti S, Ubaldi S, Caramazza A. 2014. Person- and place-selective neural substrates for entity-specific semantic access. Cereb. Cortex 24, 1687–1696. (doi:10.1093/cercor/bht039)
- 127.Contreras JM, Banaji MR, Mitchell JP. 2012. Dissociable neural correlates of stereotypes and other forms of semantic knowledge. Social Cogn. Affect. Neurosci. 7, 764–770. (doi:10.1093/scan/nsr053)
- 128.Simmons WK, Reddish M, Bellgowan PSF, Martin A. 2010. The selectivity and functional connectivity of the anterior temporal lobes. Cereb. Cortex 20, 813–825. (doi:10.1093/cercor/bhp149)
- 129.Ghosh VE, Moscovitch M, Melo Colella B, Gilboa A. 2014. Schema representation in patients with ventromedial PFC lesions. J. Neurosci. 34, 12 057–12 070. (doi:10.1523/JNEUROSCI.0740-14.2014)
- 130.Van Kesteren MTR, Ruiter DJ, Fernández G, Henson RN. 2012. How schema and novelty augment memory formation. Trends Neurosci. 35, 211–219. (doi:10.1016/j.tins.2012.02.001)
- 131.Jenkins AC, Macrae CN, Mitchell JP. 2008. Repetition suppression of ventromedial prefrontal activity during judgments of self and others. Proc. Natl Acad. Sci. USA 105, 4507–4512. (doi:10.1073/pnas.0708785105)
- 132.Mitchell JP. 2004. Encoding-specific effects of social cognition on the neural correlates of subsequent memory. J. Neurosci. 24, 4912–4917. (doi:10.1523/JNEUROSCI.0481-04.2004)
- 133.Mitchell JP, Macrae CN, Banaji MR. 2006. Dissociable medial prefrontal contributions to judgments of similar and dissimilar others. Neuron 50, 655–663. (doi:10.1016/j.neuron.2006.03.040)
- 134.Hackel LM, Looser CE, Van Bavel JJ. 2014. Group membership alters the threshold for mind perception: the role of social identity, collective identification, and intergroup threat. J. Exp. Social Psychol. 52, 15–23. (doi:10.1016/j.jesp.2013.12.001)
- 135.Kubit B, Jack AI. 2013. Rethinking the role of the rTPJ in attention and social cognition in light of the opposing domains hypothesis: findings from an ALE-based meta-analysis and resting-state functional connectivity. Front. Hum. Neurosci. 7, 323. (doi:10.3389/fnhum.2013.00323)
- 136.Krall SC, Rottschy C, Oberwelland E, Bzdok D, Fox PT, Eickhoff SB, Fink GR, Konrad K. 2015. The role of the right temporoparietal junction in attention and social interaction as revealed by ALE meta-analysis. Brain Struct. Funct. 220, 587–604. (doi:10.1007/s00429-014-0803-z)
- 137.Apperly IA, Samson D, Chiavarino C, Humphreys GW. 2004. Frontal and temporo-parietal lobe contributions to theory of mind: neuropsychological evidence from a false-belief task with reduced language and executive demands. J. Cogn. Neurosci. 16, 1773–1784. (doi:10.1162/0898929042947928)
- 138.Wang Y, Quadflieg S. 2015. In our own image? Emotional and neural processing differences when observing human–human vs human–robot interactions. Social Cogn. Affect. Neurosci. 10, 1515–1524. (doi:10.1093/scan/nsv043)
- 139.Mori M. 2012. The uncanny valley: the original essay by Masahiro Mori (transl. KF MacDorman, N Kageki). IEEE Spectrum. See http://spectrum.ieee.org/automaton/robotics/humanoids/the-uncanny-valley.
Data Availability Statement
Details about the data are available in the electronic supplementary material and are accessible at https://osf.io/s8ewg/.