Abstract
Pairs of participants mutually communicated (or not) biographical information to each other. By combining simultaneous eye-tracking, face-tracking and functional near-infrared spectroscopy, we examined how this mutual sharing of information modulates social signalling and brain activity. When biographical information was disclosed, participants directed more eye gaze to the face of the partner and presented more facial displays. We also found that spontaneous production and observation of facial displays were associated with activity in the left supramarginal gyrus (SMG) and right dorsolateral prefrontal cortex/inferior frontal gyrus (dlPFC/IFG), respectively. Moreover, mutual information-sharing increased activity in bilateral temporo-parietal junction (TPJ) and left dlPFC, as well as cross-brain synchrony between right TPJ and left dlPFC. This suggests that a complex long-range mechanism is recruited during information-sharing. These multimodal findings support the second-person neuroscience hypothesis, which postulates that communicative interactions activate additional neurocognitive mechanisms to those engaged in non-interactive situations. They further advance our understanding of which neurocognitive mechanisms underlie communicative interactions.
Keywords: Second-person neuroscience, Communication, Eye gaze, Facial displays, Cross-brain synchrony, fNIRS
1. Introduction
Understanding the neuroscience of face-to-face human social interactions remains a challenge, despite frequent calls for an increase in second-person neuroscience (Redcay and Schilbach, 2019) and for using an interactionist approach (Di Paolo and De Jaegher, 2012) to study concepts like the we-mode (Gallotti and Frith, 2013). Key to these theories is the idea that communication or mutual engagement between people (i.e. two or more partners jointly sharing information with one another) involves additional neural networks and social dynamics compared to performing the same task alone (Schilbach et al., 2013), and that analysis of two brains together can reveal more than studying one brain at a time. However, defining the distinct neurocognitive components of communicative social interactions remains challenging.
Here, we test the hypothesis that sharing (versus not sharing) biographical information between two people modulates social signalling behaviours and brain activity (individual and interpersonal) during live interactions, underpinning the we-mode effect. We aim to test what is special about information exchange, and to parse out the specific features that make this different to a matched situation without information exchange. First, we outline our novel approach to this question and then we describe our specific hypotheses.
1.1. Levels of social interaction
Neuroscientific investigations of dynamic social interaction are challenging to design because it is not easy to add or subtract elements of an interaction in order to isolate brain systems underlying one particular process. Previous studies have investigated how brain activity patterns change when participants are attending to a target alone versus experiencing joint attention with another person (Pfeiffer et al., 2013), as well as how brain activity changes when performing a task alone versus performing the same task when being watched (Izuma et al., 2010; Muller-Pinzler et al., 2016). Another study examined differences between speaking to a microphone and speaking to communicate with another person (Warnell et al., 2017). Activity in social brain networks, including the medial prefrontal cortex (mPFC), has been reported in all these cases. However, in many of these studies, the contrast is between being alone and being with another person, so general person perception effects and social facilitation effects could contribute to the results.
Here, we aimed to take a step beyond existing studies to examine a more subtle contrast: the communication or sharing of information. We developed a face-to-face information-sharing task where, on every trial, participants hear a statement about some biographical information (e.g. 'I try not to cover up my mistakes') and must press a button to indicate if this is true or not (Fig. 1A). The critical manipulation is that, in some blocks, participants' answers are shared with another participant, while in other blocks their answers remain private. When answers are shared, participants can learn new information about their partner while at the same time they expose biographical information about themselves (Fig. 1A, Feedback phase). In all cases, participants are sitting face-to-face, so they are able to see each other and engage in natural nonverbal behaviours such as eye gaze and facial displays. Thus, we can precisely test how the ability to communicate changes both nonverbal behaviours and brain activity patterns. This provides a subtle test of the second-person neuroscience hypothesis and allows us to define which (if any) additional brain systems are engaged when a minimal form of communication is enabled.
Fig. 1. Experimental paradigm.
A) Timeline for one trial. Each trial comprises a Question phase, Answer phase and Feedback phase, with the same timings in both Shared and Private blocks. In the Question phase, a voice cue reads a statement (two examples given). In the Answer phase, participants must press a key to indicate if that statement is True or False about themselves. In Shared blocks, the Feedback phase provides information on whether the two participants gave the same answer or different answers; in Private blocks, the Feedback phase only tells participants that their answers were received by the computer. B) Design of the whole task. Participants complete blocks of 5 Shared trials alternating with blocks of 5 Private trials, for a total of 8 blocks. Before each block, a voice cue tells participants if the next block is Shared or Private.
1.2. Using fNIRS for the study of social interactions
The study of the neural correlates of social interaction is constrained by restrictive neuroimaging environments (e.g. functional magnetic resonance imaging; fMRI), where participants lie alone inside the scanner. This limitation can be addressed with functional near-infrared spectroscopy (fNIRS), a non-invasive neuroimaging technique that enables the recording of the hemodynamic response to neural activity using near-infrared (NIR) light. This technique uses NIR light sources and detectors placed on the scalp, and measures changes in the reflected NIR light intensity that are mainly due to changes in the spectral absorbance of oxyhemoglobin (OxyHb) and deoxyhemoglobin (deOxyHb). fNIRS can therefore quantify concentration changes of OxyHb and deOxyHb. Similar to fMRI, this hemodynamic signal is taken as a proxy for brain activity (Boas et al., 2014; Cui et al., 2011; Ferrari and Quaresima, 2012; Pinti et al., 2018; Scholkmann et al., 2014).
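The conversion from wavelength-specific absorbance changes to OxyHb/deOxyHb concentration changes is typically done with the modified Beer-Lambert law (as used in Section 2.8). A minimal sketch with three wavelengths and two chromophores follows; the extinction-coefficient values below are illustrative placeholders, not tabulated physiological constants, and the function name is ours.

```python
import numpy as np

# Illustrative extinction-coefficient matrix: rows are three wavelengths
# (e.g. 780, 805, 830 nm), columns are [OxyHb, deOxyHb]. Placeholder values
# for this sketch only, NOT tabulated constants.
E = np.array([[0.7, 1.1],
              [0.9, 0.9],
              [1.1, 0.7]])

def mbll_concentrations(delta_od, path_length=1.0):
    """Estimate [delta OxyHb, delta deOxyHb] from optical-density changes
    at three wavelengths, solving delta_od = (E * path_length) @ delta_c
    in the least-squares sense (three equations, two unknowns)."""
    A = E * path_length
    delta_c, *_ = np.linalg.lstsq(A, np.asarray(delta_od, dtype=float), rcond=None)
    return delta_c
```

With three wavelengths the system is overdetermined, so least squares gives a unique best-fit pair of concentration changes.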
It is important to note that fNIRS has lower spatial resolution than fMRI, and that it measures brain activity only in the outer layers of the cortex. Nonetheless, due to its high portability and tolerance to motion, fNIRS allows researchers to record brain activity in ecologically valid settings (Pinti et al., 2018). For instance, it has been used in two-person studies where individuals are interacting face-to-face (Cui et al., 2012; Hirsch et al., 2018; 2017; Jiang et al., 2012; Piva et al., 2017), in studies with infants (Lloyd-Fox et al., 2010), and for bedside imaging (Obrig, 2014). Here, we used fNIRS to simultaneously measure brain activity of two participants while they engage in face-to-face interactions and share information with each other.
1.3. Social neuroscience is embodied in behaviour
The present study is unique in that we capture face and gaze behaviour simultaneously with brain imaging. That is, we study social interaction as an embodied behaviour implemented in the face and eyes. This allows us to test whether any differences in brain activity between information-sharing and non-sharing contexts (Fig. 1A) are driven by changes in social behaviour. We can contrast two possible hypotheses. One possibility is that, when the task structure prevents participants from exchanging information, they make more use of non-verbal signals such as gaze and facial displays in order to compensate. A second possibility is that, when participants exchange information, they make extra use of the nonverbal cues to contextualise or modify the information they have shared in the task.
Evidence from previous studies is rather equivocal on these possibilities. People direct less gaze to a live person compared to a video of the same person (Cañigueral and Hamilton, 2019a; Laidlaw et al., 2011), suggesting that gaze patterns in real interactions are modulated to signal compliance to social norms (Foulsham et al., 2011; Gobel et al., 2015; Goffman, 1963). This also implies that gaze behaviour might be reduced when communication is enabled. However, older studies described three functions of eye gaze during conversation (Kendon, 1967): regulatory (gaze modulates turn-taking between speaker and listener), monitoring (gaze tracks attentional states and facial displays of the partner), and expressive (gaze regulates the level of arousal in the interaction). In our task, the need to monitor the partner is increased when participants share information, which predicts that they may direct more gaze to the face of the partner to check for social approval from others (Efran, 1968; Efran and Broughton, 1966; Kleinke, 1986).
Similar to eye gaze, it has been suggested that we make facial displays not only to convey emotions, but also as a means of communication (Crivelli and Fridlund, 2018). For instance, Fridlund (1991) showed that the amount of smiling when watching a video was higher when participants were (or imagined they were) with a friend than when they were alone (see also Chovil, 1991). Similarly, participants show increased mimicry of smiles from faces that can reciprocate compared to faces that cannot (Hietanen et al., 2018). Thus, facial displays may serve to influence, or signal, a target audience (Crivelli and Fridlund, 2018). To our knowledge there are no previous studies that directly look at the relationship between facial displays and information-sharing in social interactions. Building on the studies presented above, we hypothesised that communicative interactions might lead to more exchanges of facial displays between the interacting partners to signal what they think of each other.
1.4. Neurocognitive mechanisms for information-sharing
We can also consider which neurocognitive mechanisms might be engaged when participants perform an information-sharing task. In the condition with shared answers (Fig. 1A), participants can learn new information about their partner, but may also feel judged by their partner: each of these processes may engage additional brain systems. For instance, learning about another person will engage mentalising brain areas, such as the mPFC and right temporo-parietal junction (TPJ) (Frith and Frith, 2006; Saxe and Kanwisher, 2003; Saxe and Wexler, 2005). Moreover, feeling judged may activate processes of self-impression management which emerge from the desire to promote positive judgements in the presence of others (Cage, 2015; Emler, 1990; Resnick et al., 2006; Silver and Shaw, 2018; Tennie et al., 2010). Neuroimaging studies show that brain systems linked to mentalising (e.g. mPFC and TPJ) and social reward processing (e.g. ventral striatum) are engaged during self-impression management (Bhatt et al., 2010; Izuma, 2012, 2009; Izuma et al., 2010). In addition, brain systems for strategic decision-making and self-control processes (e.g. dorsolateral prefrontal cortex; dlPFC) might be needed to guide strategic behavioural changes in front of others (Izuma, 2012). Based on these previous findings, we predict that social brain networks (including TPJ) and strategic control networks (including dlPFC) will be more engaged when participants exchange biographical information compared to when they perform the same task without sharing any information.
By using fNIRS, the present study simultaneously records brain activity of two participants while they share information in a face-to-face interaction. Thus, we can measure brain activity at the individual level, but also correlated brain activity between two interacting partners (i.e. cross-brain synchrony). Previous studies using cross-brain coherence analysis have found that brain activity of two interacting partners becomes more synchronised during mutual gaze (Hirsch et al., 2017; Noah et al., 2020), dialogue (Hirsch et al., 2018; Jiang et al., 2012) or cooperative contexts (Cui et al., 2012; Piva et al., 2017), amongst others. In line with this, it has been suggested that cross-brain synchrony reflects the optimal processing of social signals exchanged between partners (Hasson and Frith, 2016).
A recent study in interacting mice goes one step further (Kingsbury et al., 2019). Activity in the prefrontal cortex of interacting pairs of mice was recorded with calcium imaging, and the behaviour of the animals was coded in detail from video. Using an analysis based on the general linear model (GLM), Kingsbury and colleagues built up sequential models of activity patterns in each animal's brain, in terms of the behaviour of that animal, the behaviour of its partner, and the brain activity of its partner. They showed that models including the partner's brain activity provide the best fit to the data, and further suggested that cross-brain synchrony may be related to the context of the ongoing interaction (e.g. reciprocal anticipation and reaction to the partner's behaviours) rather than to moment-to-moment social behaviours. Taking this approach to human-to-human interactions, it could be that cross-brain synchrony is also related to the content of verbal communications and nonverbal social signals, which are hard to model reliably in the GLM. Building on these hypotheses, we performed a post-hoc cross-brain GLM analysis to test whether sharing information with a partner (versus not sharing) increases cross-brain synchrony between social brain regions (e.g. TPJ) and strategic control regions (e.g. dlPFC). This allows us to test whether the method used by Kingsbury and colleagues can also be applied to humans. Since our model also accounts for task- and behaviour-related effects, greater cross-brain synchrony when sharing information could indicate that interpersonal synchrony of brain signals is related to processing the context and content of shared information throughout the interaction and is critical to understanding the social brain.
1.5. The present study
Our aims in the present study were to investigate how social signals (eye gaze and facial displays) are modulated by sharing of biographical information, and to examine which brain systems are recruited by this shared experience. Pairs of participants sat across a table from each other and performed the Shared Feedback Task (Fig. 1A). In this task, participants privately indicated their personal preferences but, before each block, they were informed of whether their choices would be disclosed (Shared condition) or not (Private condition) to the partner (Fig. 1B). Disclosure of biographical information created a shared environment where participants would learn about the partners’ choices but could also be judged by the partner. Multimodal measurements with eye-tracking, face-tracking and fNIRS during the task (Fig. 2A), allowed us to study how social signals (eye gaze and facial displays) and brain activity (at the individual and interpersonal level) are modulated during communicative interactions.
Fig. 2.
A) Schematic of the testing room showing the equipment used to test a dyad: fNIRS (red), eye-tracking (orange), and high-definition scene cameras for face-tracking (green). B) Sample signals contributing to data analysis for participant A. Behavioural signals comprise gaze of A towards/away from B’s face (eye-tracker 1), production of facial motion from A (recorded with camera 2), and observation of facial motion from B (recorded with camera 1). Neural signals comprise brain activity recorded from A (fNIRS 1), and brain activity recorded from B (fNIRS 2, for cross-brain analysis); a sample fNIRS signal is shown from one channel (zoomed in on both axes), 58 channels/participant were recorded in the study. C) Sample signals contributing to data analysis for participant B: gaze of B towards/away from A’s face (eye-tracker 2), production of facial motion from B (recorded with camera 1), observation of facial motion from A (recorded with camera 2), neural signal from B (recorded with fNIRS 2), and neural signal from A (recorded with fNIRS 1). D) Layout of fNIRS channels on each brain: average locations of channel centroids (blue dots) are represented on the right and left hemisphere of a single rendered brain (see Table S1 for full list of channels coordinates and anatomical regions). (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
Our hypotheses were the following. First, we expected that participants would increase gaze directed to the face of the partner and produce more facial displays in the Shared condition compared to the Private condition, particularly during the Feedback phase. We also performed an exploratory analysis to investigate brain activity associated with spontaneous production (participants moving their own face) and observation (participants seeing their partner move their face) of facial displays during face-to-face interactions. Second, we predicted that in the Shared condition there would be increased brain activity in regions related to self-impression management and learning about others. Focusing on brain systems accessible to our fNIRS device (i.e. lateral cortical regions in both hemispheres), we predicted engagement of TPJ (linked to mentalising) and dlPFC (linked to strategic decision-making) when participants shared information. Finally, we performed a post-hoc analysis to test the hypothesis that in the Shared condition there would be increased cross-brain synchrony between TPJ and dlPFC.
2. Materials and methods
2.1. Participants
Thirty healthy adult participants (15 dyads) participated in the study: 22 females, 8 males; mean age: 28.2 ± 7.33 years, age range from 18 to 45 years; 28 right-handed, 2 left-handed (Oldfield, 1971). All thirty participants were included in the facial motion analysis and neural data analysis. However, nine participants were excluded from the eye gaze analysis due to poor signal quality in the eye-tracking data (note that this analysis was run at the individual level, so data from "good" participants were used even if their partners had poor signal quality). Participants included in the study previously demonstrated reliable fNIRS signal responses over the primary motor cortex during a screening process involving a finger-thumb tapping task (Witt et al., 2008). Participants were assigned to pairs in order of recruitment: they were all strangers prior to the study, and no participant was included in more than one dyad. Eight pairs were mixed gender, and seven pairs were female-female. All participants provided written informed consent and were compensated for their participation in the study. The study took place at the Brain Function Laboratory (Yale University), was granted ethical approval by the Yale University Human Investigation Committee (HIC #1501015178) and the University College London Research Ethics Committee, and was performed in accordance with the Declaration of Helsinki and APA ethical standards. Data and relevant code from this study are available upon direct request by contacting the corresponding author.
2.2. Experimental paradigm
To manipulate the opportunity for communicative interactions we designed the Shared Feedback Task. This task is inspired by Izuma et al. (2010), where participants disclosed their tendencies relative to social norms. We created a set of 40 statements, each one describing a particular personal preference or behaviour. Half of these statements described daily situations (e.g. ‘I sometimes drink coffee in the morning’) and half were taken from pre-existing questionnaires measuring concerns about self-impression (e.g. ‘I try not to cover up my mistakes’) (Crowne and Marlowe, 1960; Paulhus, 1984, 1991); for the analyses we pooled all statements together, since there was not enough power to test the effect of a 3-way interaction between type of statement, condition and phase. See Supplementary Materials (“S1. List of statements”) for a full list of statements used in the study.
For each trial, participants first heard a recording of a statement lasting between 3 and 5 s (Question phase) (see Fig. 1A). This was followed by a tone and a 3 s period during which participants indicated whether the statement was true or false about themselves by pressing a key on the desktop keyboard (Answer phase). Then, the choices of both participants were either disclosed or not to the dyad (Feedback phase). In the Shared condition, choices were disclosed and participants heard a recording saying 'Same answers' or 'Different answers'. In either case, participants learnt about their partner's choice and could evaluate their own choice relative to their partner's. In the Private condition, choices were not disclosed and participants heard a recording saying 'Answers received', so there was no opportunity to exchange information. If either choice was missing, participants heard a recording saying 'Answer missing'. After hearing the recording, there was a 5 s silent period for processing information from the Feedback. Note that participants were instructed not to talk to each other during the task. After the Feedback phase, participants heard the instruction 'Rest' and looked at a fixation cross on the left side of the table for 10 s. Then the next trial started. The total duration of each trial was between 21 and 23 s.
Participants completed 8 blocks of 5 trials each, and each fNIRS run was composed of 2 blocks. Half of the blocks were Shared and half were Private. Before each block started, participants heard a recording saying ‘Your answers will be shared’ or ‘Your answers will not be shared’ to indicate if that block was Shared or Private (see Fig. 1B). The statements were randomly assigned to the blocks for each participant, and the order of the blocks was randomised across participants. The total duration of the task was around 25 min.
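The block structure above (8 alternating blocks of 5 trials, statements randomly assigned, block order randomised across participants) can be sketched as follows. The function name and interface are ours, and the scheme of alternating conditions with a randomised starting condition is our reading of the description, not the original task software.

```python
import random

def build_session(statements, n_blocks=8, trials_per_block=5, seed=None):
    """Shuffle statements into blocks of 5 and alternate Shared/Private
    blocks, randomising which condition comes first. Returns a list of
    (condition, [statements]) tuples, one per block."""
    assert len(statements) == n_blocks * trials_per_block
    rng = random.Random(seed)
    pool = list(statements)
    rng.shuffle(pool)  # random statement-to-block assignment per participant
    first = rng.choice(["Shared", "Private"])
    second = "Private" if first == "Shared" else "Shared"
    order = [first if i % 2 == 0 else second for i in range(n_blocks)]
    return [(cond, pool[i * trials_per_block:(i + 1) * trials_per_block])
            for i, cond in enumerate(order)]
```

Each call with a different seed yields a different statement assignment and block order, while always producing four Shared and four Private blocks.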
2.3. Experimental set-up
Participants sat across a table, approximately 140 cm from each other, in an experimental room with dim fluorescent lighting. Noise around the experimental room was minimised to avoid distracting participants during the study. The room was equipped with an fNIRS, eye-tracking and high-definition scene camera system arranged to record data from the faces of two participants (see Fig. 2A). Each participant had a keyboard on the table to indicate their answers. An occluder was positioned between participants to prevent them from seeing the keyboard of their partner. On the left side of each participant, a black fixation cross served as a resting position between trials and blocks. This set-up is similar to those used in previous publications (e.g. Hirsch et al., 2018, 2017), and combines simultaneous recordings of eye-tracking, face-tracking and fNIRS (see Fig. 2B-C).
2.4. Eye-tracking and facial motion signal acquisition
The two-person eye-tracking system included a high-definition scene camera placed above each participant's head to record the face of the partner, and a table-based eye-tracker (Tobii Pro X3-120) attached to each side of the occluder to record eye movements of the participant. The system then merged the input from each eye-tracker with its scene camera to map the gaze of each participant onto the scene. Participants sat approximately 70 cm from the eye-tracker, and a 3-point calibration routine (right eye, left eye, and tip of chin of the partner) was employed before starting the task. The eye-tracker recorded positions of both eyes with an accuracy of 0.4° of visual angle at a rate of 120 Hz. This signal was synchronised with stimulus presentations and fNIRS acquisition of the neural signal via a TTL trigger mechanism.
To track facial motion (i.e. facial displays), the high-definition scene camera information was further processed with OpenFace (Baltrusaitis et al., 2016). The OpenFace algorithm uses the Facial Action Coding System (FACS; Ekman and Friesen, 1976) to taxonomise movements of human facial muscles and deconstruct facial displays into specific action units. OpenFace can recognise a subset of 18 facial action units (including facial muscles in areas near the eyes, nose, cheeks, mouth and chin), and gives information about the presence or absence of activity in each of these facial action units for each frame of the video.
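The per-frame presence output described above can be reduced to a single facial-motion time course by counting active action units. The sketch below assumes OpenFace-style CSV output, where presence columns are named like `AU01_c` and hold 0/1 flags; the parsing code itself is ours, not part of OpenFace.

```python
import csv
from io import StringIO

def active_au_counts(csv_text):
    """Count active facial action units per video frame from an
    OpenFace-style CSV export. Only the '_c' (presence) naming
    convention is relied on; all other columns are ignored."""
    reader = csv.DictReader(StringIO(csv_text))
    # OpenFace pads column names with spaces, hence the strip()
    au_cols = [c for c in reader.fieldnames if c.strip().endswith("_c")]
    return [sum(int(float(row[c])) for c in au_cols) for row in reader]
```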
2.5. Gaze and facial motion analysis
For the eye gaze analysis, three time windows and one area of interest were defined. The 3 time windows corresponded to the Question phase, Answer phase and Feedback phase. The face of the partner was manually defined frame-by-frame using the Tobii Pro-Lab eye-tracking software. To measure eye gaze, we computed the mean fixation duration at the face of the partner for each time phase. For the facial motion analysis, the same three time windows were defined (Question phase, Answer phase and Feedback phase). To measure facial motion, we combined all 18 facial action units to compute the mean number of active facial action units for each time phase. For each measure, a 2-way repeated-measures ANOVA with Condition (Shared and Private) and Phase (Question, Answer and Feedback) as within-subject factors was performed, with post-hoc pairwise comparisons and Bonferroni adjustment for multiple comparisons.
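The aggregation step above (from per-trial measures to one Condition x Phase cell mean per participant, the shape a repeated-measures ANOVA expects) can be sketched as follows. This is illustrative scaffolding with names of our choosing, not the authors' analysis code.

```python
from collections import defaultdict

def phase_means(trials):
    """Average a per-trial measure (e.g. fixation duration at the partner's
    face, or number of active action units) into one mean per
    participant x condition x phase cell. `trials` is an iterable of
    (participant, condition, phase, value) tuples."""
    sums = defaultdict(lambda: [0.0, 0])
    for pid, cond, phase, value in trials:
        cell = sums[(pid, cond, phase)]
        cell[0] += value
        cell[1] += 1
    out = defaultdict(dict)
    for (pid, cond, phase), (total, n) in sums.items():
        out[pid][(cond, phase)] = total / n
    return dict(out)
```

The resulting 2 x 3 table per participant (Condition: Shared/Private; Phase: Question/Answer/Feedback) would then be passed to a standard repeated-measures ANOVA routine.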
2.6. Neural signal acquisition
Hemodynamic signals were acquired using a 40-optode-pair continuous-wave fNIRS system with 116 channels (Shimadzu LABNIRS, Kyoto, Japan), configured for hyperscanning of two participants. Each participant in a dyad had the same distribution of 58 channels over both hemispheres (see Fig. 2D). Participants were fitted with a cap with optode holders, where channel separations were adjusted for individual differences in head size (2.5 cm separation for small heads, 54.5 cm circumference; 2.75 cm for medium heads, 56.5 cm; 3.0 cm for large heads, 60 cm). This ensured that the cap fitted the participants' head and the signals recorded were of good quality, and also that across participants the same channels (source-detector pairs) overlaid the same cortical areas. A lighted fibre-optic probe (Daiso, Hiroshima, Japan) was used to remove hair from each optode holder area before placing the optode inside the holder, to maximise the transmission of light through the scalp. Three wavelengths of light (780, 805 and 830 nm) were delivered by each source and their reflectance was measured by each detector. Before starting the signal recording, light intensity for each channel (source-detector pair) was measured and the detector gains were adjusted appropriately to assure each detector was able to detect sufficient reflected light output from each paired source.
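The cap-size rule above can be expressed as a simple mapping. Note the text gives one representative circumference per cap, so the cut-offs between sizes in this sketch are our assumption; the function name is also ours.

```python
def optode_separation_cm(head_circumference_cm):
    """Choose source-detector separation from head circumference, following
    the three cap sizes described in the text. Cut-off values between
    sizes are assumptions for illustration."""
    if head_circumference_cm <= 54.5:
        return 2.5   # small cap
    if head_circumference_cm <= 56.5:
        return 2.75  # medium cap
    return 3.0       # large cap
```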
2.7. Optode localisation
Once the signal acquisition was finished, the optodes were removed but the cap was left on the head of the participant to map the optode locations on the scalp. Anatomical locations of optodes were determined for each participant in relation to the standard 10-20 system based on head landmarks (inion, nasion, top centre (Cz), and left and right tragi) using a Patriot 3D Digitizer (Polhemus, Rochester, VT) and linear transform techniques (Eggebrecht et al., 2012; Ferradal et al., 2014; Okamoto and Dan, 2005). Montreal Neurological Institute (MNI) coordinates for the channels were obtained using NIRS-SPM (Ye et al., 2009) with MATLAB (Mathworks, Natick, MA), and corresponding anatomical locations of each channel were determined using the Talairach Atlas (see Fig. 2D and Table S1 for median channel centroids).
2.8. Signal processing
Using the modified Beer-Lambert equation with a path length of 1.00, absorption levels at each of the three wavelengths were converted to concentration changes for oxyhemoglobin (OxyHb), deoxyhemoglobin (deOxyHb), and the sum of deoxyhemoglobin and oxyhemoglobin. Temporal resolution for signal acquisition was 27 Hz. Baseline drift was removed using wavelet detrending (NIRS-SPM), and hemodynamic modelling of the data served as a low-pass filter. For each participant, channels with strong noise were automatically identified and removed from the analyses if the root mean square of the raw data was more than 10 times the average signal. Approximately 14% of the channels were excluded using this criterion. Global components originating from systemic activity (e.g. blood pressure, respiration and blood flow) were removed from the fNIRS signal using a principal component analysis (PCA) spatial filter (Zhang et al., 2017, 2016) prior to hemodynamic modelling of the data. This method detects and removes components in the signal that are present throughout the brain (related to systemic effects), to isolate localised signals originating from neural activity related to the task (note, however, that it cannot remove systemic components with a heterogeneous spatial distribution). See Pinti et al. (2019) for guidelines on fNIRS signal pre-processing using a similar pipeline.
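Two of the steps above, the RMS-based channel exclusion and the removal of globally shared components, can be sketched as follows. These are simplified stand-ins: the reference level for "the average signal" is read here as the mean RMS across channels, and a plain SVD replaces the full spatial filter of Zhang et al. (2016, 2017).

```python
import numpy as np

def exclude_noisy_channels(raw, factor=10.0):
    """Flag channels whose raw RMS exceeds `factor` times the mean RMS
    across channels. `raw` is (time, channels); returns a boolean mask."""
    rms = np.sqrt(np.mean(raw ** 2, axis=0))
    return rms > factor * rms.mean()

def remove_global_component(signals, n_components=1):
    """Subtract the leading spatial principal component(s) shared across
    all channels, a simplified stand-in for the PCA spatial filter cited
    in the text. `signals` is (time, channels)."""
    centred = signals - signals.mean(axis=0)
    # SVD of the time x channel matrix; the leading components capture
    # activity common to all channels (systemic effects)
    u, s, vt = np.linalg.svd(centred, full_matrices=False)
    global_part = (u[:, :n_components] * s[:n_components]) @ vt[:n_components]
    return centred - global_part
```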
2.9. Signal selection
In the present study we analysed both OxyHb and deOxyHb signals (Tachtsidis and Scholkmann, 2016). Since OxyHb has stronger signal magnitude than deOxyHb, the former is frequently used in fNIRS investigations. However, OxyHb signals are more contaminated by systemic artifacts (e.g. related to blood pressure, heart rate, breathing rate) than deOxyHb signals (Tachtsidis and Scholkmann, 2016; Zhang et al., 2016). Importantly, previous research has shown that deOxyHb signals are more highly correlated with the blood oxygen level dependent (BOLD) signal acquired during fMRI (Sato et al., 2013), and that they have greater spatial specificity (Dravida et al., 2017). This choice is consistent with other fNIRS studies investigating eye-to-eye contact (Hirsch et al., 2017; Noah et al., 2020), talking and listening (Hirsch et al., 2018; Zhang et al., 2017), human-to-human competition (Piva et al., 2017), and dyadic drum playing (Rojiani et al., 2018). For these reasons, in the present study our findings and conclusions are based on the deOxyHb signal. Results using the filtered OxyHb signal are included in the Supplementary Materials.
2.10. Data analysis: voxel-wise contrast effects
Three different general linear models (GLM, SPM8) were built for each participant, to fit the deOxyHb signal. Beta values were obtained for each channel and reshaped into a 3-D volume image with 2 × 2 × 2 mm voxels that tiled the brain regions covered by the channels. The first model was a ‘task GLM’ which included only the task factors that would be used in a standard fMRI study, in this case the 6 categorical regressors corresponding to all combinations of Condition and Phase levels: Shared-Question, Shared-Answer, Shared-Feedback, Private-Question, Private-Answer, Private-Feedback. This GLM generated contrast comparisons between Shared and Private conditions for each Phase (Question, Answer and Feedback).
The second model was a 'task+face GLM' which included all 6 previous categorical regressors and 2 additional parametric regressors that accounted for production of facial displays (participants moving their own face) and observation of facial displays (participants seeing their partner move their face), respectively. To generate the Production regressor, we added a column to the design matrix for each participant to model the amount of facial motion of that participant (convolved with the HRF) over the whole trial time-course. To generate the Observation regressor, we added a column to the design matrix for each participant to model the amount of facial motion in their interaction partner (convolved with the HRF) over the whole trial time-course. We also considered that the observation of facial displays might modulate brain activity only if the participant is actually directing gaze to their partner's face. Thus, we ran a 'task+face+gaze GLM' where, in each Observation regressor, we replaced the data with zeros for all time points where the participant was not watching the face of their partner. In this analysis only, 9 participants were excluded due to poor signal quality in the eye-tracking data. Contrasts were generated between Shared and Private conditions for each Phase (Question, Answer and Feedback), and for each of the face parametric regressors (Production and Observation) against zero.
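The construction of an HRF-convolved, optionally gaze-gated parametric regressor can be sketched as below. This is a sketch of the approach, not SPM's implementation: the HRF parameters are standard SPM-like defaults, and zeroing the motion series before convolution is our reading of the gating step.

```python
import numpy as np
from math import gamma

def double_gamma_hrf(dt, duration=30.0):
    """Canonical double-gamma HRF sampled every `dt` seconds
    (peak ~6 s, undershoot ~16 s, undershoot ratio 1/6)."""
    t = np.arange(0.0, duration, dt)
    def gpdf(t, shape, scale=1.0):
        return t ** (shape - 1) * np.exp(-t / scale) / (gamma(shape) * scale ** shape)
    return gpdf(t, 6.0) - gpdf(t, 16.0) / 6.0

def parametric_regressor(motion, dt, gaze_on_face=None):
    """Build an HRF-convolved facial-motion regressor. If a boolean
    `gaze_on_face` series is given, samples where the participant was not
    looking at the partner's face are zeroed first, as in the
    'task+face+gaze' model. The result is mean-centred, as is usual
    for parametric modulators."""
    motion = np.asarray(motion, dtype=float)
    if gaze_on_face is not None:
        motion = motion * np.asarray(gaze_on_face, dtype=float)
    reg = np.convolve(motion, double_gamma_hrf(dt))[: len(motion)]
    return reg - reg.mean()
```

One such column would be appended to the design matrix per participant for Production, and another (gaze-gated or not) for Observation.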
To build the third model and identify cross-brain synchrony we followed the method by Kingsbury and colleagues (Kingsbury et al., 2019). This model was a ‘task+face+brain GLM’ which included all 8 previous regressors (6 categorical regressors for task-related effects, and 2 parametric regressors for face-related effects) and 4 additional parametric regressors that accounted for brain activity in the partner’s right TPJ and left dlPFC, split into separate regressors for Shared and Private blocks. These two brain regions of interest were chosen based on our hypotheses and findings that social (right TPJ) and strategic control networks (left dlPFC) were more engaged when participants shared information (see Results section “3.2. Brain activity related to information-sharing”). To generate these four regressors (Shared-TPJ, Shared-dlPFC, Private-TPJ and Private-dlPFC) we first identified ROIs (regions of interest) for right TPJ and left dlPFC in the partner of each participant; if there was more than one channel in a region, we computed the mean activity across channels. We then used the ROI activity to generate the four regressors added to the design matrix of each participant, allowing us to model that person’s brain activity in terms of their partner’s right TPJ and left dlPFC over Shared and Private blocks. This GLM generated contrast comparisons between Shared and Private conditions for each Phase (Question, Answer and Feedback) and Partner Region (right TPJ and left dlPFC), and for each of the face parametric regressors (Production and Observation) against zero.
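The partner-brain regressors can be sketched in the same spirit. In this hypothetical Python fragment, `partner_channels` (channels × time points, already preprocessed) and the boolean `shared_mask` are assumed inputs; variable names are our own.

```python
import numpy as np

def partner_roi_regressors(partner_channels, shared_mask):
    """Average activity across a partner's ROI channels, then split into
    separate Shared- and Private-block regressors (zeros outside each block),
    following the cross-brain GLM logic of Kingsbury et al. (2019)."""
    roi = np.mean(partner_channels, axis=0)    # mean over channels in the ROI
    shared = np.where(shared_mask, roi, 0.0)   # nonzero only in Shared blocks
    private = np.where(shared_mask, 0.0, roi)  # nonzero only in Private blocks
    return shared, private
```

Running this once for the partner's right TPJ and once for the left dlPFC yields the four regressors (Shared-TPJ, Shared-dlPFC, Private-TPJ, Private-dlPFC) added to each participant's design matrix.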
For each contrast comparison, one-tailed t-tests were computed using SPM8. The FDR correction method (q < 0.05) was used to correct for multiple comparisons. All results are presented on a normalised brain using images rendered on a standardized MNI template, using a p < 0.05 threshold. Anatomical locations of peak voxel activity were identified using the NIRS-SPM atlas (Ye et al., 2009).
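For illustration, the one-tailed group tests with FDR correction could be sketched as follows. This is a simplified voxel-wise version of what SPM8 computes (Benjamini-Hochberg procedure); function and variable names are our own.

```python
import numpy as np
from scipy import stats

def one_tailed_fdr(betas, q=0.05):
    """One-sample, one-tailed t-tests across participants (rows = subjects,
    columns = voxels), with Benjamini-Hochberg FDR correction at level q."""
    t, p_two = stats.ttest_1samp(betas, 0.0, axis=0)
    p = np.where(t > 0, p_two / 2.0, 1.0 - p_two / 2.0)  # one-tailed (beta > 0)
    order = np.argsort(p)
    ranked = p[order]
    thresh = q * np.arange(1, len(p) + 1) / len(p)       # BH step-up thresholds
    passed = ranked <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    sig = np.zeros(len(p), dtype=bool)
    sig[order[:k]] = True                                # reject the k smallest p
    return t, p, sig
```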
2.11. Effects related to behavioural choices
During the task we also recorded the choices of participants. Particularly in the Shared condition, where choices are disclosed to the dyad, eye gaze and facial motion might be modulated by whether partners agree or disagree in their choices: it could be that effects of communication on eye gaze and facial motion are stronger if partners disagree than if they agree. To test this, we ran two additional analyses (for eye gaze and for facial motion) and found that there were no effects of agreement on these measures (see Supplementary Materials “S2. Effects of agreement on eye gaze and facial motion” for details of these analyses). Note that, since participants made choices freely, the mean number of trials for agree and disagree categories was not balanced: there were around 3 times more trials where participants agreed than disagreed, for both Shared and Private conditions. Thus, we did not test effects of agreement on brain activity due to lack of sufficient statistical power.
3. Results
3.1. Eye gaze and facial motion
To test effects of communication on eye gaze, we measured the mean fixation duration on the face of the partner for each Condition and Phase (see Table 1A for descriptives: mean and SD). There was no main effect of Condition (F(1,20) = 1.15, p > 0.05, ηp2 = 0.054) or Phase (F(2,40) = 1.54, p > 0.05, ηp2 = 0.072), but there was an interaction effect between Condition and Phase, F(2,40) = 6.77, p < 0.01, ηp2 = 0.253. Post-hoc pairwise comparisons showed that the mean fixation duration on the face of the partner was longer in the Shared condition than in the Private condition during the Feedback phase, t(20) = 3.10, p < 0.01, dz = 0.676 (see Fig. 3A-B). In other words, participants looked more at their partner’s face in the Shared condition during the Feedback phase.
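The post-hoc comparisons reported here (paired t-test plus Cohen's dz, i.e. the mean difference divided by the SD of the differences) can be illustrated with a short sketch. The data below are hypothetical, not the study's actual values.

```python
import numpy as np
from scipy import stats

def paired_comparison(shared, private):
    """Paired t-test between two within-subject conditions, with Cohen's dz
    effect size (mean of the differences / SD of the differences)."""
    shared, private = np.asarray(shared), np.asarray(private)
    diff = shared - private
    t, p = stats.ttest_rel(shared, private)        # two-sided paired t-test
    dz = diff.mean() / diff.std(ddof=1)            # within-subject effect size
    return t, p, dz
```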
Table 1. Descriptives for eye gaze and facial motion.
| A) Duration of gaze fixations to face of partner (in ms) | | | |
|---|---|---|---|
| Condition | Question | Answer | Feedback** |
| Shared | M = 389.05, SD = 205.89 | M = 349.57, SD = 195.20 | M = 477.38, SD = 247.20 |
| Private | M = 367.76, SD = 180.32 | M = 395.81, SD = 189.97 | M = 367.52, SD = 181.91 |

| B) Number of active facial action units | | | |
|---|---|---|---|
| Condition | Question*** | Answer*** | Feedback*** |
| Shared | M = 2.65, SD = 0.898 | M = 1.84, SD = 0.600 | M = 2.99, SD = 1.05 |
| Private | M = 2.27, SD = 0.796 | M = 1.52, SD = 0.485 | M = 2.42, SD = 0.862 |
Asterisks signify difference between Shared and Private conditions at p < 0.05 (*), p < 0.01 (**) and p < 0.001 (***).
Fig. 3.
A) Sample video frame highlighting the selected region of interest (face of partner). B) Violin plot for the duration of fixations to face of partner for each Condition and Phase: mean (filled circle), SE (error bars), and frequency of values (width of distribution). C) Sample frame of the OpenFace output video. D) Violin plot for the number of active facial action units (AUs) for each Condition and Phase: mean (filled circle), SE (error bars), and frequency of values (width of distribution). Asterisks signify difference at p < 0.05 (*), p < 0.01 (**) and p < 0.001 (***).
To test effects of communication on facial motion, we measured the mean number of active facial action units for each Condition and Phase (see Table 1B for descriptives: mean and SD). There was a main effect of Condition, F(1,29) = 23.6, p < 0.001, ηp2 = 0.449, showing that there were more active facial action units in the Shared compared to the Private condition. There was also a main effect of Phase, F(2,58) = 132.3, p < 0.001, ηp2 = 0.820, and post-hoc pairwise comparisons showed that the number of active facial action units was higher in the Feedback phase than in the Question phase (t(29) = 4.82, p < 0.001, dz = 0.881) and Answer phase (t(29) = 13.2, p < 0.001, dz = 2.41). We also found an interaction effect between Condition and Phase, F(2,58) = 7.71, p < 0.01, ηp2 = 0.210. Post-hoc pairwise comparisons replicated the pattern of results found for the main effects: there were more active facial action units in the Shared than in the Private condition for all Phases (Question: t(29) = 4.81, p < 0.001, dz = 0.860; Answer: t(29) = 4.05, p < 0.001, dz = 0.740; Feedback: t(29) = 4.70, p < 0.001, dz = 0.860), and there were more active facial action units in the Feedback phase compared to the Question and Answer phases for both Conditions (Shared Feedback-Question: t(29) = 4.26, p < 0.001, dz = 0.780; Shared Feedback-Answer: t(29) = 12.03, p < 0.001, dz = 2.20; Private Feedback-Question: t(29) = 3.33, p < 0.01, dz = 0.610; Private Feedback-Answer: t(29) = 12.06, p < 0.001, dz = 2.20) (see Fig. 3C-D). Specifically, participants moved more facial muscles in the Shared condition across all phases, and during the Feedback phase compared to all other phases.
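Counting active facial action units from OpenFace-style intensity output can be sketched as follows. The activation threshold of 1.0 is an illustrative assumption, not necessarily the value used in the study.

```python
import numpy as np

def count_active_aus(au_intensity, threshold=1.0):
    """Mean number of simultaneously active facial action units over a phase.
    au_intensity: array (time points x AUs) of OpenFace-style AU intensities."""
    active = au_intensity > threshold     # an AU counts as "on" above threshold
    return active.sum(axis=1).mean()      # mean count of active AUs per frame
```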
3.2. Brain activity related to information-sharing
To test effects of communication on brain activity, we used the output of the ‘task GLM’ to contrast between Shared and Private conditions for each Phase (Question, Answer and Feedback). Only significant FDR-corrected clusters for deOxyHb signal, Shared > Private, are reported in the main text, Table 2 and Fig. 4. Full statistics for all activated clusters are given in Table S4 and Figure S2; results for the same analysis using OxyHb signal are given in Table S5 and Figure S3.
Table 2. Voxel-wise GLM contrast comparisons for task-related effects (deOxyHb signal).
| Contrast (contrast threshold) | Phase | Peak voxel MNI coordinates1 (X Y Z) | t | p | df2 | Anatomical region | BA3 | Probability inclusion | Voxels (n in cluster) |
|---|---|---|---|---|---|---|---|---|---|
| Shared > Private (p < 0.05) | Question | −48 24 40 | 2.78 | 0.005 | 29 | Dorsolateral Prefrontal Cortex | 9 | 0.494 | 592 |
| | | | | | | Frontal Eye Fields | 8 | 0.349 | |
| | | | | | | Dorsolateral Prefrontal Cortex | 46 | 0.152 | |
| | Answer | −48 −72 26 | 3.22 | 0.002 | 29 | Angular Gyrus, part of TPJ | 39 | 0.776 | 845 |
| | | | | | | V3 | 19 | 0.223 | |
| | Feedback | 52 −76 22 | 3.18 | 0.002 | 29 | Angular Gyrus, part of TPJ | 39 | 0.566 | 189 |
| | | | | | | V3 | 19 | 0.433 | |
Coordinates are based on the MNI system and (-) indicates left hemisphere.
df = degrees of freedom.
BA = Brodmann Area.
Fig. 4.
Contrast effects for ‘task GLM’, which included the following regressors: SQ = Shared-Question; SA = Shared-Answer; SF = Shared-Feedback; PQ = Private-Question; PA = Private-Answer; PF = Private-Feedback. A) Contrast effects for Shared > Private in the Question phase (deOxyHb signal; red colour indicates p < 0.05; areas of contrasts in black circles indicate FDR-corrected clusters at q = 0.05). Beta values for FDR-corrected clusters in Question phase (measured at the peak voxel) are shown for each trial Phase. B) Contrast effects for Shared > Private in the Answer phase (deOxyHb signal; red colour indicates p < 0.05; areas of contrasts in black circles indicate FDR-corrected clusters at q = 0.05). Beta values for FDR-corrected clusters in Answer phase (measured at the peak voxel) are shown for each trial Phase. C) Contrast effects for Shared > Private in the Feedback phase (deOxyHb signal; red colour indicates p < 0.05; areas of contrasts in black circles indicate FDR-corrected clusters at q = 0.05). Beta values for FDR-corrected clusters in Feedback phase (measured at the peak voxel) are shown for each trial Phase. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
For the Question phase, results showed that there was greater brain activity in the Shared compared to the Private condition in a cluster with peak voxel located at (−48, 24, 40) (p = 0.005), which included the left dlPFC (BA9, 49% probability inclusion; BA46, 15% probability inclusion), and left frontal eye fields, FEF (BA8, 35% probability inclusion) (Fig. 4A). For the Answer phase, there was greater activity in the Shared compared to the Private condition in a cluster with peak voxel located at (−48, −72, 26) (p = 0.002), which included the left angular gyrus, AG, (BA39, 78% probability inclusion) and left visual area 3, V3 (BA19, 22% probability inclusion) (Fig. 4B). For the Feedback phase, there was greater activity in the Shared compared to the Private condition in a cluster with peak voxel located at (52, −76, 22) (p = 0.002), which included the right AG (BA39, 57% probability inclusion) and right V3 (BA19, 43% probability inclusion) (Fig. 4C).
3.3. Brain activity related to production and observation of facial motion
To test effects of production and observation of facial motion on brain activity, we used the output of the ‘task+face GLM’ to contrast between Shared and Private conditions for each Phase (Question, Answer and Feedback), as well as to contrast each of the facial motion regressors (Production and Observation) against baseline (the same applies to the ‘task+face+gaze GLM’). Only significant FDR-corrected clusters for deOxyHb signal related to face Production and Observation are reported in the main text, Table 3 and Fig. 5. Full statistics for all activated clusters are given in Table S6 and Figure S4 (as expected, the task-related contrasts yielded results similar to those obtained in the ‘task GLM’); results for the same analysis using OxyHb signal are given in Table S7 and Figure S5.
Table 3. Voxel-wise GLM contrast comparisons for face-related effects (deOxyHb signal).
| Contrast (contrast threshold) | Process | Peak voxel MNI coordinates1 (X Y Z) | t | p | df2 | Anatomical region | BA3 | Probability inclusion | Voxels (n in cluster) |
|---|---|---|---|---|---|---|---|---|---|
| Face > Baseline (p < 0.05) | Production | −64 −42 42 | 2.63 | 0.007 | 29 | Supramarginal Gyrus | 40 | 0.949 | 12 |
| | Observation | 40 30 30 | 2.32 | 0.014 | 29 | Dorsolateral Prefrontal Cortex | 9 | 0.578 | 44 |
| | | | | | | Dorsolateral Prefrontal Cortex | 46 | 0.422 | |
| | Observation controlling for gaze | 58 10 14 | 2.19 | 0.020 | 20 | Pars Opercularis, part of IFG | 44 | 0.308 | 22 |
| | | | | | | Pars Triangularis, part of IFG | 45 | 0.273 | |
| | | | | | | Pre- and Suppl. Motor Cortex | 6 | 0.210 | |
| | | | | | | Superior Temporal Gyrus | 22 | 0.138 | |
Coordinates are based on the MNI system and (-) indicates left hemisphere.
df = degrees of freedom.
BA = Brodmann Area.
Fig. 5.
Contrast effects for ‘task+face GLM’, which included the following regressors: SQ = Shared-Question; SA = Shared-Answer; SF = Shared-Feedback; PQ = Private-Question; PA = Private-Answer; PF = Private-Feedback; fProd = Production of facial motion; fObs = Observation of facial motion. A) Contrast effects for Production > Baseline (deOxyHb signal; red colour indicates p < 0.05; areas of contrasts in black circles indicate FDR-corrected clusters at q = 0.05). Beta values for FDR-corrected clusters in Production (measured at the peak voxel) are shown for each Process. B) Contrast effects for Observation > Baseline (deOxyHb signal; red colour indicates p < 0.05; areas of contrasts in black circles indicate FDR-corrected clusters at q = 0.05). Beta values for FDR-corrected clusters in Observation (measured at the peak voxel) are shown for each Process. C) Contrast effects for Observation controlling for gaze > Baseline (deOxyHb signal; red colour indicates p < 0.05; areas of contrasts in black circles indicate FDR-corrected clusters at q = 0.05). Beta values for FDR-corrected clusters in Observation controlling for gaze (measured at the peak voxel) are shown for each Process. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
We expected that brain regions involved in face processing would be differentially activated during production and observation of facial motion. We found that Production of facial motion was associated with greater activity in a cluster with peak voxel located at (−64, −42, 42) (p = 0.007), which included the left supramarginal gyrus, SMG (BA40, 95% probability inclusion) (Fig. 5A). In contrast, Observation of facial motion was associated with greater activity in a cluster with peak voxel located at (40, 30, 30) (p = 0.014), which included the right dlPFC (BA9, 58% probability inclusion; BA46, 42% probability inclusion) (Fig. 5B). Similarly, Observation of facial motion controlling for gaze was associated with greater activity in a cluster with peak voxel located at (58, 10, 14) (p = 0.020), which included the right inferior frontal gyrus, IFG (BA44, 31% probability inclusion; BA45, 27% probability inclusion) (Fig. 5C).
3.4. Cross-brain synchrony related to information-sharing
To test effects of communication on cross-brain synchrony, we used the output of the ‘task+face+brain GLM’ to compare the Shared and Private conditions for each Phase (Question, Answer and Feedback) and Partner Region (right TPJ and left dlPFC), and to contrast each of the facial motion regressors (Production and Observation). Only significant FDR-corrected clusters for deOxyHb signal related to cross-brain synchrony, Shared > Private, are reported in the main text, Table 4 and Fig. 6. Full statistics for all activated clusters are given in Table S8 and Figure S6 (as expected, the task- and face-related contrasts yielded results similar to those obtained in the ‘task GLM’ and ‘task+face GLM’); results for the same analysis using OxyHb signal are given in Table S9 and Figure S7.
Table 4. Voxel-wise GLM contrast comparisons for cross-brain synchrony effects (deOxyHb signal).
| Contrast (contrast threshold) | Partner Region | Peak voxel MNI coordinates1 (X Y Z) | t | p | df2 | Anatomical region | BA3 | Probability inclusion | Voxels (n in cluster) |
|---|---|---|---|---|---|---|---|---|---|
| Shared > Private (p < 0.05) | Right TPJ | −46 28 8 | 2.86 | 0.003 | 29 | Pars Triangularis, part of IFG | 45 | 0.521 | 530 |
| | | | | | | Dorsolateral Prefrontal Cortex | 46 | 0.244 | |
| | | | | | | Inferior Prefrontal Gyrus | 47 | 0.234 | |
Coordinates are based on the MNI system and (-) indicates left hemisphere.
df = degrees of freedom.
BA = Brodmann Area.
Fig. 6.
Contrast effects for ‘task+face+brain GLM’, which included the following regressors: SQ = Shared-Question; SA = Shared-Answer; SF = Shared-Feedback; PQ = Private-Question; PA = Private-Answer; PF = Private-Feedback; fProd = Production of facial motion; fObs = Observation of facial motion; St = Shared-TPJ; Sd = Shared-dlPFC; Pt = Private-TPJ; Pd = Private-dlPFC. A) Contrast effects for Shared > Private predicted by right TPJ activity in the partner (deOxyHb signal; red colour indicates p < 0.05; areas of contrasts in black circles indicate FDR-corrected clusters at q = 0.05). Beta values for FDR-corrected clusters predicted by right TPJ activity in the partner (measured at the peak voxel) are shown for each Partner Region. B) Contrast effects for Shared > Private predicted by left dlPFC activity in the partner (deOxyHb signal; red colour indicates p < 0.05). C) Representation of cross-brain synchrony between right TPJ and left dlPFC. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
Results showed that brain activity in the right TPJ of the partner predicted brain activity in the left dlPFC of each participant, with a stronger effect in the Shared condition than in the Private condition. This effect was found in a cluster with peak voxel located at (−46, 28, 8) (p = 0.004), which included the left pars triangularis (BA45, 52% probability inclusion), left dlPFC (BA46, 24% probability inclusion), and left inferior prefrontal gyrus (BA47, 23% probability inclusion) (Fig. 6A). However, there was no evidence of greater cross-brain synchrony in the Shared condition between brain activity in the left dlPFC of the partner and brain activity of each participant (Fig. 6B). A representation of cross-brain synchrony between right TPJ and left dlPFC is shown in Fig. 6C.
4. Discussion
Here, we investigated the modulation of social signals and brain activity during sharing of biographical information in a face-to-face interaction. Our findings show that participants gazed more at each other’s face and produced more facial displays when communication was enabled (Fig. 3). In an exploratory analysis we also found specific patterns of brain activity during spontaneous production (left supramarginal gyrus; SMG) and observation (right dlPFC/IFG) of facial displays (Fig. 5). Moreover, during communicative interactions there was greater brain activity in the left dorsolateral prefrontal cortex (dlPFC) and bilateral temporo-parietal junction (TPJ) (Fig. 4). A post-hoc analysis further showed increased cross-brain synchrony between right TPJ and left dlPFC when participants mutually shared information (Fig. 6). A summary diagram of the brain systems engaged during spontaneous face processing, face-to-face communication and cross-brain synchrony is shown in Fig. 7. We discuss the implications of these findings below.
Fig. 7.
Summary diagram with brain systems identified in our study. Spontaneous face production engaged left SMG, while spontaneous face observation engaged right dlPFC/IFG. Being in a face-to-face communicative context (compared to being in a face-to-face non-communicative context) recruited left dlPFC and bilateral TPJ. Finally, being in a face-to-face communicative context (compared to being in a face-to-face non-communicative context) increased cross-brain synchrony between right TPJ and left dlPFC.
4.1. Testing the second-person neuroscience hypothesis
The second-person neuroscience hypothesis suggests that engaging in social interactions involves additional brain networks and social dynamics compared to not being engaged in an interaction (Schilbach et al., 2013). It also suggests that analysing two brains together can tell us more about the mechanisms underlying social interactions than analysing one brain alone (Redcay and Schilbach, 2019). Taking a step beyond existing studies, here we developed a well-controlled face-to-face information-sharing task where a minimal form of communication is either enabled or not, even though participants are always seated face-to-face and able to see each other. This provides a subtle test of the second-person neuroscience hypothesis: how does the ability to communicate (i.e. share information) modulate nonverbal behaviour and brain activity patterns?
Our findings provide evidence in favour of the second-person neuroscience hypothesis by showing that mutual sharing of information involves brain activity patterns additional to those engaged when not sharing information. We further show that brain activity of each participant is related to three factors: the task (being asked a question, giving an answer and receiving feedback), the nonverbal behaviour of the participant and the partner (production and observation of facial motion), and the brain activity of the partner (in particular in the right TPJ). Although none of these factors can fully explain brain activity patterns of participants, together they point to a system that is critical for information-sharing during social interactions, which involves social brain regions (e.g. TPJ) and strategic control brain regions (e.g. dlPFC). To explore the implications of these findings in more detail, we first consider how eye gaze and facial displays change when sharing information, and then we turn to how individual brain activity and cross-brain synchrony are modulated in this situation.
4.2. Eye gaze and facial displays are used as social signals
Tracking eye gaze and facial motion is critical in this task to determine which social behaviours change in face-to-face interactions, which in turn might cause changes in brain activity patterns. Our results showed that participants gazed more at the face of the partner and made more facial displays in the Shared compared to the Private condition, particularly during the Feedback phase (Fig. 3). It is during the Feedback phase of the Shared condition that participants learn new information about their partner and might also be judged for their own responses. This implies that participants use gaze and facial displays to modulate or contextualise their responses. These findings are in line with previous studies suggesting that, in live interactions, people increase eye gaze to monitor the attentional states and facial displays of the partner when their self-impression is under public scrutiny (Cañigueral and Hamilton, 2019a; Efran, 1968; Efran and Broughton, 1966; Kendon, 1967; Kleinke, 1986). The facial motion results are in line with previous studies showing that participants use facial displays as a means of communication, to signal to or influence an audience (Chovil, 1991; Crivelli and Fridlund, 2018; Fridlund, 1991; Hietanen et al., 2018). In particular, we suggest that participants made more facial displays to communicate judgements regarding the shared information, that is, whether they liked or disliked their partner’s choices.
Altogether, our findings suggest that gaze patterns and facial displays are closely intertwined: when participants gaze more at each other’s face, they also produce more facial displays. The coordinated exchange and integration of these social signals, characteristic of face-to-face interactions, may allow participants to efficiently perceive and send information to each other (Cañigueral and Hamilton, 2019b; Schilbach et al., 2013). In the context of our task, social signals were likely used to send information about how participants evaluated and learnt about each other’s answers. This suggests that eye gaze and facial displays were used to modulate the verbal communication enforced by the task, and indicates that spontaneous nonverbal behaviours complement verbal communication.
4.3. Brain systems for spontaneous face processing
Our face-to-face set-up allowed us to track spontaneous patterns of gaze and facial displays during the task, so we performed an exploratory analysis to test how spontaneous production and observation of facial motion relates to brain activity measured by fNIRS.
Results showed that production of facial displays (i.e. participants moving their own face) recruited the left SMG (Fig. 5A, Fig. 7). This region is engaged during motor planning for hand actions (Tunik et al., 2008) and during production of speech (Wildgruber et al., 1996) and smiles (Wild et al., 2003), and is also associated with receptive functions during communication (Hirsch et al., 2018). We also found that observation of facial displays (i.e. participants seeing their partner move their face) recruited the right dlPFC/IFG (Fig. 5B-C, Fig. 7). Previous studies have shown that the right dlPFC and right IFG are recruited when inferring emotions from faces (A. Nakamura et al., 2014; K. Nakamura et al., 1999; Ran et al., 2016; Sabatinelli et al., 2011; Uono et al., 2017). We also observed activation of motor processing areas (e.g. premotor and supplementary motor area) during production of facial displays, as well as activation of face processing areas (e.g. superior, middle and inferior temporal gyrus) during observation of facial displays: although these activations match brain areas traditionally related to motor and perceptual processing of actions and faces, they did not survive our stringent statistical criteria.
Overall, our results suggest that spontaneous production and observation of facial displays in a dynamic task engage a network of brain regions beyond those traditionally linked to face perception and motor control. Future studies will be needed to understand how this wider network works together to enable real-world face-to-face social interaction.
4.4. Individual and interpersonal brain systems for information-sharing
Using fNIRS, we measured brain activity associated with mutual sharing of information. Our strongest hypothesis was that in the Shared condition (compared to the Private condition) there would be more activation in the TPJ, and our findings confirm this assumption. In particular, we found that the right TPJ (during Feedback phase) and left TPJ (during Answer phase) were more activated in the Shared condition (Fig. 4B-C, Fig. 7). Previous studies have related activity in the right and left TPJ with mentalising (Saxe and Kanwisher, 2003; Saxe and Wexler, 2005; Seghier, 2013), that is, the ability to infer other people’s beliefs and intentions. Thus, the activation of the TPJ during the Shared condition could be explained by increased mentalising related to self-impression management (Izuma, 2012; Izuma et al., 2010) or learning new information about the partner: any of these processes could be more in demand in the context of a communicative interaction where information is shared. Interestingly, it has also been suggested that the TPJ receives input from the mirror neuron system, and that it is involved in the shared representation of self and other (Van Overwalle and Baetens, 2009): this could play a role in the context of our task, where participants try to empathise with their partner when making choices in the Shared condition.
We also found that the dlPFC (particularly in the left hemisphere) was more activated in the Shared compared to the Private condition (during Question phase) (Fig. 4A, Fig. 7). Previous studies link the dlPFC to strategic social decision-making. For instance, disruption of the dlPFC with transcranial magnetic stimulation decreases cooperative responses in economic games (Soutschek et al., 2015; Speitel et al., 2019). The dlPFC is also linked to mentalising, self-other distinction and regulation of biased behaviours (Amodio, 2014; Costa et al., 2008; Kalbe et al., 2010). In the context of our task, it is likely that the left dlPFC contributes to the selection of an appropriate answer in the Shared condition, either by making choices that present a favourable impression or by integrating new information that participants learn about the partner.
Since we simultaneously recorded brain activity of both interacting partners, we were also able to investigate cross-brain synchrony while sharing (versus not sharing) biographical information. We implemented a novel cross-brain GLM approach and found that signal in the right TPJ of the partner predicted (in the statistical sense, not the causal sense) the signal in the left dlPFC of the participant more strongly in the Shared condition than in the Private condition (Fig. 6A, Fig. 7). Importantly, our model also accounts for task- and face-related effects, which suggests that cross-brain synchrony is not driven purely by the task structure or by facial visual or motor inputs. It must instead be driven by other features of the situation that are not captured in our task and behaviour models. For example, cross-brain synchrony could be related to processing the content of task statements and social signals, which is not captured by the task and facial motion regressors in our model. Similar cross-brain effects in relation to semantic content have been seen in fMRI studies (Nguyen et al., 2019). Critically, here we show that these effects are not driven by the auditory input alone, but are specific to the Shared condition, where the content of verbal and nonverbal communications needs to be processed in relation to the ongoing shared interaction and thus carries higher social and reciprocal value than in the Private condition. This may require greater mentalising and strategic decisions over each other’s choices, which increases cross-brain synchrony between right TPJ and left dlPFC. This parallels findings in mice, where Kingsbury and colleagues suggest that greater cross-brain synchrony is related to the reciprocal anticipation of, and reaction to, each other’s choices during a shared interaction (Kingsbury et al., 2019).
In line with the second-person neuroscience hypothesis, these findings show that there are additional individual and interpersonal brain systems that are more activated when communication is enabled, and further indicate that these systems include brain regions related to both mentalising and strategic decision-making.
4.5. Limitations and future directions
The present findings open up promising avenues for future research on how social information is processed during communication. However, there are also limitations that could be addressed in future studies. First, fNIRS measures activity from the cortical surface, and our optode coverage did not include frontal and occipital cortices. Thus, our hypotheses and findings were constrained to the lateral sections of the cortical surface. Due to its higher portability than fMRI and higher tolerance to motion than EEG, fNIRS provides a unique opportunity to simultaneously measure brain activity of two participants interacting face-to-face (Pinti et al., 2018). However, studies using fNIRS on different cortical regions, as well as fMRI studies that can measure brain activity below the cortex (e.g. ventral striatum for reward processing) are needed to complement the present findings. Our results also show that the pattern of findings is generally not consistent between deOxyHb and OxyHb signals. A reason for this could be that these two signals are differently influenced by systemic artifacts, and that OxyHb signals are generally more compromised and require more preprocessing. Ongoing and future investigations will be critical to help us further understand the functional significance of the differences found between these two signals.
Second, we find greater brain activity in the TPJ during the Shared condition, but two different mechanisms linked to mentalising could explain this result: self-impression management and learning about others. Using a paradigm where only the choices of one participant are disclosed (i.e. one participant’s self-impression is at risk, while the other one learns new information) or a paradigm where we test how well participants recall choices of their partner could help to clarify the cognitive mechanisms underlying these activations. Moreover, including direct measures of mentalising and emotional reactivity could further elucidate which socio-affective mechanisms are triggered by our manipulation. Our paradigm was also constrained to have fixed trial timings, because introducing long gaps between the phases of a trial made the interaction very slow and socially awkward. Since the timing of the hemodynamic responses (5 s) is longer than the duration of the Question phase (3 to 5 s), we do not draw strong conclusions about the differences between different phases of the trials. Future studies could use other timing combinations to more clearly separate strategic decision-making (Question phase) from mentalising linked to social learning or evaluation (Feedback phase). Moreover, in our paradigm communication between partners was mediated by a computer: participants pressed a key to choose their answer, and then the computer announced whether their answers were the same or different. This setup gives precise control of trial timing and prevents speech-related artefacts, but is less natural than spontaneous dialogue. Future designs that allow participants to speak to each other will be helpful to test how our findings apply to more natural situations.
Finally, we introduce two novel analyses that further our understanding of the neural mechanisms of information-sharing during face-to-face interactions. First, we model the brain activity of one partner as a function of spontaneous facial motion of the self and of the partner, and find two brain regions (left SMG and right dlPFC/IFG) that are recruited when producing and observing facial displays, respectively. To our knowledge, this is the first study to track spontaneous facial displays in relation to brain activity patterns in humans. Although here we tested how the overall amount of facial displays modulates brain activity, different patterns of brain activity are likely engaged depending on the facial expression being displayed and on the communicative context (e.g. what is being said); further research is needed to follow up this exploratory analysis. Second, we model the brain activity of one partner as a function of the behaviour and brain activity of the other partner (Kingsbury et al., 2019), and find that information-sharing increases cross-brain synchrony between specific brain areas (right TPJ and left dlPFC). Future studies that carefully investigate how cross-brain synchrony is modulated by the context and content of reciprocal social signals (verbal and nonverbal) will be key to understanding the functional significance of this mechanism.
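The logic of these two analyses can be illustrated with a minimal numerical sketch. This is not the authors' actual pipeline (which involved fNIRS preprocessing and statistical mapping); all variable names and the synthetic data are hypothetical, and ordinary least squares stands in for the full GLM:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 600  # synthetic samples for one recording

# Hypothetical facial-motion time courses for self and partner
face_self = rng.random(n)
face_partner = rng.random(n)

# Synthetic fNIRS signal for one channel of participant A,
# constructed to depend on both regressors plus noise
brain_A = 0.8 * face_self + 0.3 * face_partner + rng.normal(0, 0.5, n)

# (1) Behaviour-to-brain model: regress participant A's signal on
# self and partner facial motion (plus an intercept)
X = np.column_stack([np.ones(n), face_self, face_partner])
betas, *_ = np.linalg.lstsq(X, brain_A, rcond=None)
# betas[1] and betas[2] estimate the contributions of producing
# vs. observing facial displays to this channel's activity

# (2) Cross-brain synchrony: correlate a channel of participant A
# with a channel of participant B after removing the behavioural
# regressors from both signals
brain_B = 0.5 * brain_A + rng.normal(0, 0.5, n)  # synthetic coupled signal
resid_A = brain_A - X @ betas
betas_B, *_ = np.linalg.lstsq(X, brain_B, rcond=None)
resid_B = brain_B - X @ betas_B
synchrony = np.corrcoef(resid_A, resid_B)[0, 1]
```

Because `brain_B` is built to covary with `brain_A` beyond what the shared regressors explain, the residual correlation `synchrony` comes out positive, mirroring how cross-brain coupling is detected over and above behaviourally driven activity.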
4.6. Conclusion
The present study investigated how communicative interactions modulate social signalling and brain activity when sharing biographical information. We show that a shared situation in which participants engage with each other leads to both more monitoring of the partner’s face and more production of facial displays. We also found that spontaneous production and observation of facial displays recruited the left SMG and right dlPFC/IFG, respectively. Moreover, sharing of information recruited bilateral TPJ and the left dlPFC, and increased cross-brain synchrony between right TPJ and left dlPFC. Overall, our results are consistent with the second-person neuroscience hypothesis that additional individual and interpersonal brain networks are engaged when people exchange information. Importantly, we show that the type of face-to-face interaction matters: a context in which reciprocal communication is possible (even with the very minimal exchange of information available in the present study) recruited additional brain systems beyond those engaged in equivalent trials without any explicit communication. As communication contexts become richer, even more complex and dynamic patterns of brain activity may emerge. We suggest that analysing data from two brains and two behaving individuals together will allow us to understand how neural mechanisms for communicating about the self, learning about others and producing social behaviour are coordinated in real time. Understanding how all of these systems work together will be an important challenge for future studies.
Supplementary Material
Supplementary material associated with this article can be found in the online version at doi:10.1016/j.neuroimage.2020.117572.
Acknowledgements
R.C. acknowledges financial support from the Yale-UCL Collaborative Fellowship, the Bogue Fellowship and the Alan & Nesta Ferguson Trust. This work was partially funded by the Yale-UCL Collaborative Fellowship (R.C.), the Leverhulme Trust under the grant code RPG-2016-251 (PI A.H.), and the National Institute of Mental Health of the National Institutes of Health under the award number R01MH107513-01 and R01MH119430-01 (PI J.H.). I.T. acknowledges financial support from the Wellcome Trust (104580/Z/14/Z). The funding bodies had no involvement in the execution of this study and writing of the report. The authors are grateful for the significant contributions of Swethasri Dravida, Courtney DiCocco and Jen Cuzzocreo during data collection.
Footnotes
Declaration of Competing Interest
The authors declare no conflicts of interest.
Author contributions
R.C. designed the study, led the data collection, data analyses and manuscript preparation; X.Z. contributed to data analyses; J.A.N. contributed to data collection and analyses; I.T. assisted with technical issues around data collection and analysis; A.H. and J.H. supervised all aspects of the study and manuscript preparation.
Data code and availability statement
The data and the code used for this project will be made available upon request.
References
- Amodio DM. The neuroscience of prejudice and stereotyping. Nat Rev Neurosci. 2014;15(10):670–682. doi: 10.1038/nrn3800.
- Baltrusaitis T, Robinson P, Morency LP. OpenFace: an open source facial behavior analysis toolkit. IEEE Winter Conference on Applications of Computer Vision (WACV). 2016:1–10. doi: 10.1109/WACV.2016.7477553.
- Bhatt MA, Lohrenz T, Camerer CF, Montague PR. Neural signatures of strategic types in a two-person bargaining game. Proc Natl Acad Sci. 2010;107(46):19720–19725. doi: 10.1073/pnas.1009625107.
- Boas DA, Elwell CE, Ferrari M, Taga G. Twenty years of functional near-infrared spectroscopy: introduction for the special issue. Neuroimage. 2014;85:1–5. doi: 10.1016/j.neuroimage.2013.11.033.
- Cage EA. Mechanisms of social influence: reputation management in typical and autistic individuals. 2015.
- Cañigueral R, de Hamilton AFC. Being watched: effects of an audience on eye gaze and prosocial behaviour. Acta Psychol (Amst). 2019a;195:50–63. doi: 10.1016/j.actpsy.2019.02.002.
- Cañigueral R, de Hamilton AFC. The role of eye gaze during natural social interactions in typical and autistic people. Front Psychol. 2019b;10(560):1–18. doi: 10.3389/fpsyg.2019.00560.
- Chovil N. Social determinants of facial displays. J Nonverbal Behav. 1991;15(3):141–167.
- Costa A, Torriero S, Oliveri M, Caltagirone C. Prefrontal and temporo-parietal involvement in taking others' perspective: TMS evidence. Behav Neurol. 2008;19(1–2):71–74. doi: 10.1155/2008/694632.
- Crivelli C, Fridlund AJ. Facial displays are tools for social influence. Trends Cogn Sci. 2018;22(5):388–399. doi: 10.1016/j.tics.2018.02.006.
- Crowne DP, Marlowe D. A new scale of social desirability independent of psychopathology. J Consult Psychol. 1960;24(4):349–354. doi: 10.1037/h0047358.
- Cui X, Bray S, Bryant DM, Glover GH, Reiss AL. A quantitative comparison of NIRS and fMRI across multiple cognitive tasks. Neuroimage. 2011;54(4):2808–2821. doi: 10.1016/j.neuroimage.2010.10.069.
- Cui X, Bryant DM, Reiss AL. NIRS-based hyperscanning reveals increased interpersonal coherence in superior frontal cortex during cooperation. Neuroimage. 2012;59(3):2430–2437. doi: 10.1016/j.neuroimage.2011.09.003.
- Di Paolo E, De Jaegher H. The interactive brain hypothesis. Front Hum Neurosci. 2012;6(163):1–16. doi: 10.3389/fnhum.2012.00163.
- Dravida S, Noah JA, Zhang X, Hirsch J. Comparison of oxyhemoglobin and deoxyhemoglobin signal reliability with and without global mean removal for digit manipulation motor tasks. Neurophotonics. 2017;5(1):011006. doi: 10.1117/1.NPh.5.1.011006.
- Efran JS. Looking for approval: effects on visual behavior of approbation from persons differing in importance. J Pers Soc Psychol. 1968;10(1):21–25. doi: 10.1037/h0026383.
- Efran JS, Broughton A. Effect of expectancies for social approval on visual behavior. J Pers Soc Psychol. 1966;4(1):103–107. doi: 10.1037/h0023511.
- Eggebrecht AT, White BR, Ferradal SL, Chen C, Zhan Y, Snyder AZ, et al. Culver JP. A quantitative spatial comparison of high-density diffuse optical tomography and fMRI cortical mapping. Neuroimage. 2012;61(4):1120–1128. doi: 10.1016/j.neuroimage.2012.01.124.
- Ekman P, Friesen WV. Measuring facial movement. Environ Psychol Nonverbal Behav. 1976;1(1):56–75.
- Emler N. A social psychology of reputation. Eur Rev Soc Psychol. 1990;1(1):171–193.
- Ferradal SL, Eggebrecht AT, Hassanpour M, Snyder AZ, Culver JP. Atlas-based head modeling and spatial normalization for high-density diffuse optical tomography: in vivo validation against fMRI. Neuroimage. 2014;85:117–126. doi: 10.1016/j.neuroimage.2013.03.069.
- Ferrari M, Quaresima V. A brief review on the history of human functional near-infrared spectroscopy (fNIRS) development and fields of application. Neuroimage. 2012;63(2):921–935. doi: 10.1016/j.neuroimage.2012.03.049.
- Foulsham T, Walker E, Kingstone A. The where, what and when of gaze allocation in the lab and the natural environment. Vision Res. 2011;51(17):1920–1931. doi: 10.1016/j.visres.2011.07.002.
- Fridlund AJ. Sociality of solitary smiling: potentiation by an implicit audience. J Pers Soc Psychol. 1991;60(2):229–240.
- Frith CD, Frith U. The neural basis of mentalizing. Neuron. 2006;50(4):531–534. doi: 10.1016/j.neuron.2006.05.001.
- Gallotti M, Frith CD. Social cognition in the we-mode. Trends Cogn Sci. 2013;17(4):160–165. doi: 10.1016/j.tics.2013.02.002.
- Gobel MS, Kim HS, Richardson DC. The dual function of social gaze. Cognition. 2015;136:359–364. doi: 10.1016/j.cognition.2014.11.040.
- Goffman E. Behavior in Public Places. Simon and Schuster; 1963.
- Hasson U, Frith CD. Mirroring and beyond: coupled dynamics as a generalized framework for modelling social interactions. Philos Trans R Soc B. 2016;371:20150366. doi: 10.1098/rstb.2015.0366.
- Hietanen JK, Kylliäinen A, Peltola MJ. Facial mimicry of another’s affiliative smile is sensitive to the belief of being watched. Preprint. 2018:1–30. doi: 10.31234/osf.io/qna36.
- Hirsch J, Noah JA, Zhang X, Dravida S, Ono Y. A cross-brain neural mechanism for human-to-human verbal communication. Soc Cogn Affect Neurosci. 2018:907–920. doi: 10.1093/scan/nsy070.
- Hirsch J, Zhang X, Noah JA, Ono Y. Frontal temporal and parietal systems synchronize within and across brains during live eye-to-eye contact. Neuroimage. 2017;157:314–330. doi: 10.1016/j.neuroimage.2017.06.018.
- Izuma K. The social neuroscience of reputation. Neurosci Res. 2012;72(4):283–288. doi: 10.1016/j.neures.2012.01.003.
- Izuma K, Saito DN, Sadato N. Processing of the incentive for social approval in the ventral striatum during charitable donation. J Cogn Neurosci. 2009;22(4):621–631. doi: 10.1162/jocn.2009.21228.
- Izuma K, Saito DN, Sadato N. The roles of the medial prefrontal cortex and striatum in reputation processing. Soc Neurosci. 2010;5(2):133–147. doi: 10.1080/17470910903202559.
- Jiang J, Dai B, Peng D, Zhu C, Liu L, Lu C. Neural synchronization during face-to-face communication. J Neurosci. 2012;32(45):16064–16069. doi: 10.1523/jneurosci.2926-12.2012.
- Kalbe E, Schlegel M, Sack AT, Nowak DA, Dafotakis M, Bangard C, Kessler J. Dissociating cognitive from affective theory of mind: a TMS study. Cortex. 2010;46(6):769–780. doi: 10.1016/j.cortex.2009.07.010.
- Kendon A. Some functions of gaze-direction in social interaction. Acta Psychol (Amst). 1967;26:22–63. doi: 10.1016/0001-6918(67)90005-4.
- Kingsbury L, Huang S, Wang J, Gu K, Golshani P, Wu YE, Hong W. Correlated neural activity and encoding of behavior across brains of socially interacting animals. Cell. 2019;178(2):429–446.e16. doi: 10.1016/j.cell.2019.05.022.
- Kleinke CL. Gaze and eye contact: a research review. Psychol Bull. 1986;100(1):78–100.
- Laidlaw K, Foulsham T, Kuhn G, Kingstone A. Potential social interactions are important to social attention. Proc Natl Acad Sci. 2011;108(14):5548–5553. doi: 10.1073/pnas.1017022108.
- Lloyd-Fox S, Blasi A, Elwell CE. Illuminating the developing brain: the past, present and future of functional near infrared spectroscopy. Neurosci Biobehav Rev. 2010;34(3):269–284. doi: 10.1016/j.neubiorev.2009.07.008.
- Müller-Pinzler L, Gazzola V, Keysers C, Sommer J, Jansen A, Frassle S, Krach S. Neural pathways of embarrassment and their modulation by social anxiety. Neuroimage. 2015;119:252–261. doi: 10.1016/j.neuroimage.2015.06.036.
- Nakamura A, Maess B, Knösche TR, Friederici AD. Different hemispheric roles in recognition of happy expressions. PLoS ONE. 2014;9(2):e88628. doi: 10.1371/journal.pone.0088628.
- Nakamura K, Kawashima R, Ito K, Sugiura M, Kato T, Nakamura A, et al. Kojima S. Activation of the right inferior frontal cortex during assessment of facial emotion. J Neurophysiol. 1999;82(3):1610–1614. doi: 10.1152/jn.1999.82.3.1610.
- Nguyen M, Vanderwal T, Hasson U. Shared understanding of narratives is correlated with shared neural responses. Neuroimage. 2019;184:161–170. doi: 10.1016/j.neuroimage.2018.09.010.
- Noah JA, Zhang X, Dravida S, Ono Y, Naples A, McPartland JC, Hirsch J. Real-time eye-to-eye contact is associated with cross-brain neural coupling in angular gyrus. Front Hum Neurosci. 2020;14:1–10. doi: 10.3389/fnhum.2020.00019.
- Obrig H. NIRS in clinical neurology - a 'promising' tool? Neuroimage. 2014;85:535–546. doi: 10.1016/j.neuroimage.2013.03.045.
- Okamoto M, Dan I. Automated cortical projection of head-surface locations for transcranial functional brain mapping. Neuroimage. 2005;26(1):18–28. doi: 10.1016/j.neuroimage.2005.01.018.
- Oldfield RC. The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia. 1971;9:97–113. doi: 10.1016/0028-3932(71)90067-4.
- Paulhus DL. Two-component models of socially desirable responding. J Pers Soc Psychol. 1984;46(3):598–609.
- Paulhus DL. Measurement and control of response bias: balanced inventory of desirable responding (BIDR). In: Robinson JP, Shaver PR, Wrightsman LS, editors. Measures of Personality and Social Psychological Attitudes. San Diego, CA: Academic Press; 1991. pp. 17–59.
- Pfeiffer UJ, Vogeley K, Schilbach L. From gaze cueing to dual eye-tracking: novel approaches to investigate the neural correlates of gaze in social interaction. Neurosci Biobehav Rev. 2013;37(10):2516–2528. doi: 10.1016/j.neubiorev.2013.07.017.
- Pinti P, Aichelburg C, Gilbert SJ, de Hamilton AFC, Hirsch J, Burgess PW, Tachtsidis I. A review on the use of wearable functional near-infrared spectroscopy in naturalistic environments. Jpn Psychol Res. 2018a;60(4):347–373. doi: 10.1111/jpr.12206.
- Pinti P, Scholkmann F, de Hamilton AFC, Burgess PW, Tachtsidis I. Current status and issues regarding pre-processing of fNIRS neuroimaging data: an investigation of diverse signal filtering methods within a general linear model framework. Front Hum Neurosci. 2019;12:1–21. doi: 10.3389/fnhum.2018.00505.
- Pinti P, Tachtsidis I, de Hamilton AFC, Hirsch J, Aichelburg C, Gilbert SJ, Burgess PW. The present and future use of functional near-infrared spectroscopy (fNIRS) for cognitive neuroscience. Ann N Y Acad Sci. 2018b:1–25. doi: 10.1111/nyas.13948.
- Piva M, Zhang X, Noah JA, Chang SWC, Hirsch J. Distributed neural activity patterns during human-to-human competition. Front Hum Neurosci. 2017;11:1–14. doi: 10.3389/fnhum.2017.00571.
- Ran G, Chen X, Zhang Q, Ma Y, Zhang X. Attention modulates neural responses to unpredictable emotional faces in dorsolateral prefrontal cortex. Front Hum Neurosci. 2016;10(332). doi: 10.3389/fnhum.2016.00332.
- Redcay E, Schilbach L. Using second-person neuroscience to elucidate the mechanisms of social interaction. Nat Rev Neurosci. 2019. doi: 10.1038/s41583-019-0179-4.
- Resnick P, Zeckhauser R, Swanson J, Lockwood K. The value of reputation on eBay: a controlled experiment. Exp Econ. 2006;9(2):79–101.
- Rojiani R, Zhang X, Noah JA, Hirsch J. Communication of emotion via drumming: dual-brain imaging with functional near-infrared spectroscopy. Soc Cogn Affect Neurosci. 2018;13(10):1047–1057. doi: 10.1093/scan/nsy076.
- Sabatinelli D, Fortune EE, Li Q, Siddiqui A, Krafft C, Oliver WT, Jeffries J. Emotional perception: meta-analyses of face and natural scene processing. Neuroimage. 2011;54(3):2524–2533. doi: 10.1016/j.neuroimage.2010.10.011.
- Sato H, Yahata N, Funane T, Takizawa R, Katura T, Atsumori H, et al. Kasai K. A NIRS-fMRI investigation of prefrontal cortex activity during a working memory task. Neuroimage. 2013;83:158–173. doi: 10.1016/j.neuroimage.2013.06.043.
- Saxe R, Kanwisher N. People thinking about thinking people: the role of the temporo-parietal junction in "theory of mind". Neuroimage. 2003;19:1835–1842. doi: 10.1016/S1053-8119(03)00230-1.
- Saxe R, Wexler A. Making sense of another mind: the role of the right temporo-parietal junction. Neuropsychologia. 2005;43(10):1391–1399. doi: 10.1016/j.neuropsychologia.2005.02.013.
- Schilbach L, Timmermans B, Reddy V, Costall A, Bente G, Schlicht T, Vogeley K. Toward a second-person neuroscience. Behav Brain Sci. 2013;36(4):393–414. doi: 10.1017/S0140525X12000660.
- Scholkmann F, Kleiser S, Metz AJ, Zimmermann R, Mata Pavia J, Wolf U, Wolf M. A review on continuous wave functional near-infrared spectroscopy and imaging instrumentation and methodology. Neuroimage. 2014;85(1):6–27. doi: 10.1016/j.neuroimage.2013.05.004.
- Seghier ML. The angular gyrus: multiple functions and multiple subdivisions. Neuroscientist. 2013;19(1):43–61. doi: 10.1177/1073858412440596.
- Silver IM, Shaw A. Pint-sized public relations: the development of reputation management. Trends Cogn Sci. 2018;22(4):277–279. doi: 10.1016/j.tics.2018.01.006.
- Soutschek A, Sauter M, Schubert T. The importance of the lateral prefrontal cortex for strategic decision making in the prisoner’s dilemma. Cogn Affect Behav Neurosci. 2015;15(4):854–860. doi: 10.3758/s13415-015-0372-5.
- Speitel C, Traut-Mattausch E, Jonas E. Functions of the right DLPFC and right TPJ in proposers and responders in the ultimatum game. Soc Cogn Affect Neurosci. 2019;14(3):263–270. doi: 10.1093/scan/nsz005.
- Tachtsidis I, Scholkmann F. False positives and false negatives in functional near-infrared spectroscopy: issues, challenges, and the way forward. Neurophotonics. 2016;3(3):1–6. doi: 10.1117/1.NPh.3.3.031405.
- Tennie C, Frith U, Frith CD. Reputation management in the age of the world-wide web. Trends Cogn Sci. 2010;14(11):482–488. doi: 10.1016/j.tics.2010.07.003.
- Tunik E, Lo O-Y, Adamovich SV. Transcranial magnetic stimulation to the frontal operculum and supramarginal gyrus disrupts planning of outcome-based hand-object interactions. J Neurosci. 2008;28(53):14422–14427. doi: 10.1523/jneurosci.4734-08.2008.
- Uono S, Sato W, Kochiyama T, Sawada R, Kubota Y, Yoshimura S, Toichi M. Neural substrates of the ability to recognize facial expressions: a voxel-based morphometry study. Soc Cogn Affect Neurosci. 2017;12(3):487–495. doi: 10.1093/scan/nsw142.
- Van Overwalle F, Baetens K. Understanding others' actions and goals by mirror and mentalizing systems: a meta-analysis. Neuroimage. 2009;48(3):564–584. doi: 10.1016/j.neuroimage.2009.06.009.
- Warnell KR, Sadikova E, Redcay E. Let’s chat: developmental neural bases of social motivation during real-time peer interaction. Dev Sci. 2017:1–14. doi: 10.1111/desc.12581.
- Wild B, Erb M, Eyb M, Bartels M, Grodd W. Why are smiles contagious? An fMRI study of the interaction between perception of facial affect and facial movements. Psychiatry Res Neuroimaging. 2003;123(1):17–36. doi: 10.1016/S0925-4927(03)00006-4.
- Wildgruber D, Ackermann H, Klose U, Kardatzki B, Grodd W. Functional lateralization of speech production at primary motor cortex: a fMRI study. Neuroreport. 1996;7:2791–2795. doi: 10.1097/00001756-199611040-00077.
- Witt ST, Meyerand ME, Laird AR. Functional neuroimaging correlates of finger tapping task variations: an ALE meta-analysis. Neuroimage. 2008;42(1):343–356. doi: 10.1016/j.neuroimage.2008.04.025.
- Ye JC, Tak S, Jang KE, Jung J, Jang J. NIRS-SPM: statistical parametric mapping for near-infrared spectroscopy. Neuroimage. 2009;44(2):428–447. doi: 10.1016/j.neuroimage.2008.08.036.
- Zhang X, Noah JA, Dravida S, Hirsch J. Signal processing of functional NIRS data acquired during overt speaking. Neurophotonics. 2017;4(4):041409. doi: 10.1117/1.NPh.4.4.041409.
- Zhang X, Noah JA, Hirsch J. Separation of the global and local components in functional near-infrared spectroscopy signals using principal component spatial filtering. Neurophotonics. 2016;3(1):015004. doi: 10.1117/1.NPh.3.1.015004.