Abstract
How do bilingual interlocutors inhibit interference from the non-target language to achieve brain-to-brain information exchange? In the current study, two electroencephalogram (EEG) devices were employed to record pairs of participants’ performance in a joint language switching task designed to simulate a bilingual speaker–listener interaction. Twenty-eight (14 pairs) unbalanced Chinese–English bilinguals (L1 Chinese) were instructed to name pictures in the appropriate language according to a cue. Phase-amplitude coupling analysis was employed to reveal the large-scale brain network responsible for joint language control between interlocutors. We found that (1) speakers and listeners coordinately suppressed cross-language interference through cross-frequency coupling, as shown by increased delta/theta and delta/alpha phase-amplitude coupling when switching to L2 compared with switching to L1; and (2) speakers and listeners were both able to simultaneously inhibit cross-person item-level interference, as demonstrated by stronger cross-frequency coupling in the cross-person condition than in the within-person condition. These results indicate that current bilingual models (e.g., the inhibitory control model) should incorporate mechanisms for synchronously inhibiting interference from both language and person (i.e., cross-language and cross-person item-level interference) through joint language control in dynamic cross-language communication.
Keywords: Cross-language communication, Joint language control, Cross-frequency coupling, Bilingualism, Language switching
Introduction
More than half of the world’s population is bilingual (Grosjean 2010). A challenge that bilinguals encounter on a daily basis is controlling interference from the language not in use (Costa and Santesteban 2004; Green 1998). It has been demonstrated that bilinguals resort to language control to accurately select the language in use across different tasks (e.g., Liu et al. 2016; De Bruin et al. 2014). Although most studies have examined the language control mechanism during independent cross-language production or comprehension, an important aspect missing in previous work is the role of language control in the interactive mode of cross-language communication.
In modeling bilingual production, the inhibitory control (IC) model proposes that language control serves to suppress/inhibit cross-language interference when switching from one language to the other (Green 1998). Such a control mechanism may operate at the language task schema competition phase (e.g., L1 vs. L2), the lexical selection phase (e.g., “苹果” vs. “apple”), or both. Specifically, unbalanced bilinguals are more proficient in their dominant/first language (L1) than in their second language (L2). L1 is therefore easier to activate but harder to suppress, causing more cross-language interference when switching to L2. To produce L2 successfully, the more dominant L1 needs to be suppressed (i.e., inhibitory control). However, overcoming the residual inhibition applied to L1 is more costly than the reverse; thus, switching back to L1 is usually more costly than switching to L2 (e.g., Meuter and Allport 1999). The IC model, however, only accounts for cross-language interference in independent bilingual production, without addressing interference from the cross-person item-level factor demonstrated in the recent literature (e.g., Gambi and Hartsuiker 2016; Kootstra et al. 2010). For example, in Gambi and Hartsuiker (2016), one of a pair of unbalanced bilinguals (the Switch participant) was asked to name pictures freely in either Dutch (L1) or English (L2), while the partner (the Non-switch participant) always named in L1. The non-switch participant named pictures more slowly in L1 if the partner had previously named in L2, thus exhibiting switch costs in L1 production after listening to the partner’s L2 words. This switch cost suggested that the language used by the speaker could be primed in the bilingual listener, and that the activation of the non-target language (L2) could slow down the selection of the target language (L1) in production.
Gambi and Hartsuiker (2016) termed this cross-person effect cross-person item-level interference. Beyond lexical processing, syntactic processes are also slowed by a partner’s influence. For instance, Kootstra et al. (2010) asked a confederate to manipulate word order and L1/L2 switch positions when generating sentences. Participants showed clear alignment with the confederate in both their word order choices and their code-switching patterns.
To address interference from the interlocutor during cross-language communication, the Interactive Alignment Model provides insight into this cross-person process (Pickering and Garrod 2004; Garrod and Pickering 2009). According to this model, linguistic representations at different levels (e.g., phonological, lexical and semantic) are activated during a speaker’s production, and this activation primes the linguistic system of the listener. During comprehension, the listener’s linguistic activations are unconsciously aligned with the speaker’s, resulting in output of similar structures during the listener’s subsequent production in dialogue. Thus, it is plausible that the language (e.g., L1) used by the speaker would prime the same language in the listener but interfere with the activation of the target language (e.g., L2) in production. This interference arises at the cross-person item level, where the listener tries to align with the speaker. Importantly, this effect has been supported by hyperscanning EEG studies (Pérez et al. 2017, 2019). For example, Pérez et al. (2017) asked pairs of participants to listen to each other’s semi-structured oral narratives without interpersonal visual contact, alternating between the roles of speaker and listener while exchanging verbal information. The phase-locking value (PLV) was employed to calculate the coupling between the speaker and listener. They found that synchronization in the alpha and beta bands seemed to emerge directly from the joint activity between speaker and listener, without being mediated by the physical properties of speech. Pérez et al. (2019) nested a foreign language into the dialogue experiment: participants discussed topics in different languages (native vs. foreign) according to language cues.
Results showed that coupling strength between brain signals increased in both the native-language and foreign-language contexts in the alpha frequency band. They interpreted this coupling as reflecting either the similarity between production and comprehension at the sensory-motor level, or joint attention supporting prediction processes between interlocutors. Thus, the study demonstrated the coupling of production and comprehension during language switching.
Cross-frequency coupling (CFC) in language processing
Coupling between production and comprehension during language switching can be quite complex, and we used cross-frequency coupling (CFC) to further reveal the information transfer from large-scale brain networks to the fast, local cortical processing required for effective coordination between the speaker and listener completing a task together (Axmacher et al. 2010; Canolty and Knight 2010; Friese et al. 2013; Hyafil et al. 2015). For instance, CFC analysis has been employed in the working memory literature to show that theta–gamma coupling increased with repetitive working memory training in the T-maze task (Zhang et al. 2020). Using the same method, Yu et al. (2020) found significant decreases/increases of local/global efficiency in the delta–beta, delta–alpha, theta–alpha and delta–theta bands during epileptic seizures. Importantly, one of our recent language production studies revealed that switching to L2 was more strongly modulated by F4–F3 alpha–beta phase-amplitude coupling than switching to L1 at the language task schema phase (i.e., the cue phase of the picture naming task). This finding suggests that language control in single-person production requires coordination between the left and right frontal hemispheres (Tong et al. 2020). To our knowledge, the present study is the first investigation of the neural mechanism of bilingual language control in a simulated interpersonal context using CFC. Indeed, the cross-person factor in a joint picture naming task can be better investigated and revealed via the neural coupling between production and comprehension.
Bilingual cross-language communication is dynamic by nature: bilinguals not only need to monitor language switching as listeners but also perform language switching as speakers. During communication, neural coupling between the speaker’s and listener’s cortices can represent the same information jointly to complete information transmission, producing coupled oscillations (Hasson et al. 2012; Kawasaki et al. 2013). The reason that bilinguals are able to exchange information in the bilingual context might be the oscillatory coupling between the speaker’s and the listener’s brains in a spatiotemporal network (Bevilacqua et al. 2014; Anders et al. 2011). Cross-frequency coupling (CFC) provides a framework for the hierarchical nesting of different neural rhythms (see Canolty and Knight 2010; Hyafil et al. 2015): the amplitude of a high-frequency oscillation correlates with the phase of a low-frequency oscillation (Onslow et al. 2014). Low-frequency oscillations may modulate neural connections across long-distance, large-scale brain regions through the activation of larger neuronal populations, whereas high-frequency oscillations are mostly associated with local activity (Canolty and Knight 2010; Hyafil et al. 2015; Tort et al. 2010). Therefore, we employed CFC analysis in the current study to reveal the joint language control mechanism across large-scale brain networks1 between interlocutors during cross-language communication.
Further, previous studies proposing a fronto-parietal network responsible for language control have primarily focused on pairs of electrodes located over frontal and parietal areas (e.g., Abutalebi and Green 2007, 2016; Green and Abutalebi 2013; Cooper et al. 2015), including our own recent work on language control in the joint language switching task (Liu et al. 2018; Xie et al. 2018). In this task, two bilinguals are cued to take turns naming pictures in either L1 or L2, simulating production and comprehension between interlocutors during cross-language communication (for details see the Procedure section). Our findings suggest that joint language switching involves suppressing both cross-language interference and cross-person item-level interference. However, our traditional time-locked event-related potential (ERP) analysis was not able to show the phase-locked oscillations in simultaneous production and comprehension, yet it is crucial to understand how neural oscillations implement language control during joint language switching. Neural oscillations are rhythmic or repetitive patterns of neural activity in the central nervous system. Neural tissue can generate oscillatory activity in many ways, driven either by mechanisms within individual neurons or by interactions between neurons. It is worth noting that neural oscillations vary with brain functions in the neural network (for review, see Herrmann et al. 2016; Von Stein et al. 2000). More specifically, some language studies have reported the cognitive significance of different frequencies in EEG recordings. Brunetti et al. (2013) used a categorization task to investigate the neuronal oscillatory mechanism in the detection of relevant lexical information and found increased phase synchronization of the theta and delta bands during the interval of semantic analysis, indicating that these two bands may jointly relate to semantic processing.
Similarly, Strauß et al. (2014) conducted a spoken-word recognition task and found that enhanced alpha oscillations could control lexical integration and that theta oscillations were related to stimulus-specific lexico-semantic processes. Moreover, several studies reported increased beta oscillations following target words when processing syntactically and semantically acceptable sentences compared with sentences containing syntactic violations or semantic incongruities (Bastiaansen et al. 2010; Pérez et al. 2012; Kielar et al. 2014; Luo et al. 2010; Wang et al. 2012).
So far, few studies have investigated delta, theta, alpha and beta oscillations during language switching, where their cognitive significance may reflect more domain-general cognitive processes (Herrmann et al. 2016; Von Stein et al. 2000). According to the literature, delta oscillations are important for large-scale cortical integration (Bruns and Eckhorn 2004), as well as for attentional and syntactic processes (Roehm et al. 2004; Ocklenburg et al. 2011). Theta oscillations appear important for a variety of cognitive functions (for review, see Harmony 2013); regarding inhibitory control, increases in theta oscillations have been associated with inhibition of the irrelevant task (van Steenbergen et al. 2012; Zavala et al. 2013). Alpha oscillations have typically been associated with attention (Klimesch 2012; Palva and Palva 2007), working memory (Wilsch et al. 2015), and inhibition of task-irrelevant cortical regions (Jensen and Mazaheri 2010). Beta oscillations have been related to the planning and execution of movements (Kim and Chung 2007; Pfurtscheller et al. 1998).
Taken together, oscillatory significance depends on the cognitive task, and one task may require oscillations at different frequency bands to work simultaneously. To understand neural oscillations during joint language switching, CFC, a novel approach to studying language processing, can demonstrate how oscillations at the same or different frequencies coordinate with one another. Specifically, to investigate the cognitive mechanisms of inter-brain oscillatory communication, we simultaneously recorded EEG from pairs of participants who took turns as speakers and listeners in a joint language switching task (see Procedure section). Comparisons of CFC between the two brains, obtained for the delta (1–3 Hz), theta (4–7 Hz), alpha (8–12 Hz) and beta (13–30 Hz) bands, evaluated differences between conditions (i.e., switch vs. repeat; switch to L1 vs. switch to L2; within-person switch vs. cross-person switch). These frequency bands cover the range of frequencies responsible for language processing (Bastiaansen et al. 2010; Brunetti et al. 2013; Kielar et al. 2014; Pérez et al. 2012; Wang et al. 2012) and domain-general cognitive processes (Brunetti et al. 2013; Strauß et al. 2014; van Steenbergen et al. 2012; Wilsch et al. 2015; Zavala et al. 2013). We aimed to examine whether increased/decreased language control in the joint language switching task was accompanied by increased/decreased phase-amplitude coupling in same- or different-frequency oscillations when inhibiting cross-language and cross-person interference. If coupling were observed in same- or different-frequency oscillations while participants suppressed interference from both language and person, we could conclude that language control during interactive cross-language communication requires the coordination of multiple oscillations.
Methods
Participants
Thirty-four unbalanced Chinese–English bilinguals (L1 Chinese) were recruited from Liaoning Normal University, China. The experimental procedures were carefully explained to participants before they signed the consent forms. All procedures in the present study were authorized by the Ethics Committee of Research Center of Brain and Cognitive Neuroscience of Liaoning Normal University (No. 31700991). The current data partially overlap with our previous study (10.1016/j.jneuroling.2017.09.002). Of the seventeen pairs successfully recorded, three pairs were removed due to excessive EEG artefacts, leaving 28 bilinguals (14 pairs) as the final sample for analysis.
Participants rated their Chinese (L1) and English (L2) proficiency on a six-point scale, ranging from 1 (very unskilled) to 6 (very skilled) for L1, and from 1 (L2 skills much worse than L1 skills) to 6 (L2 skills as good as L1 skills) for L2. Paired-sample t tests showed that the difference between L1 and L2 self-ratings was statistically significant for all skills: listening, t(27) = 8.49, p < .001; speaking, t(27) = 10.48, p < .001; reading, t(27) = 10.30, p < .001; and writing, t(27) = 6.41, p < .001. In addition, the LexTALE (an English word/non-word decision test with a maximum score of 100; Lemhöfer and Broersma 2012) and the Oxford Placement Test (25 multiple-choice questions and a cloze test, with a maximum score of 50; Allan 2004) were administered to measure language abilities more objectively. The average L2 proficiency scores are reported in Table 1.
Table 1.
Participants’ background and language profile
| Characteristics | M | SD |
|---|---|---|
| Age (years) | 20.75 | 2.10 |
| Age of acquisition of English | 9.21 | 2.54 |
| Self-rated Chinese proficiency | | |
| Listening | 5.50 | 0.69 |
| Speaking | 5.07 | 0.72 |
| Reading | 4.36 | 1.10 |
| Writing | 4.75 | 1.11 |
| Self-rated English proficiency | | |
| Listening | 3.50 | 1.07 |
| Speaking | 3.18 | 0.86 |
| Reading | 2.57 | 0.88 |
| Writing | 2.89 | 1.37 |
| Proficiency test | | |
| LexTALE | 54.33 | 8.15 |
| Oxford Placement Test | 36.61 | 5.07 |
Materials
The stimuli consisted of forty-eight line drawings (15 cm × 15 cm) selected from the Snodgrass and Vanderwart (1980) picture database and standardized for Chinese participants by Zhang and Yang (2003). Word familiarity ratings in both L1 and L2 were collected from forty unbalanced Chinese–English bilinguals who did not participate in the experiment; they rated their familiarity with the L1 and L2 names of the selected drawings on a 5-point scale (1 = “very unfamiliar”, 5 = “very familiar”). Neither average subjective familiarity (L1: 4.79 ± 0.12, L2: 4.81 ± 0.10, t(47) = −1.48, p > .05) nor word frequency (L1: 77.53 ± 114.24, L2: 104.23 ± 128.39, t(47) = 1.54, p > .05) differed significantly between L1 and L2 (Cai and Brysbaert 2010 for Chinese word frequency; Brysbaert and New 2009 for English word frequency).
Procedure and design
Participants were randomly matched in pairs and labelled as A or B. After completing the language background questionnaires, they performed the joint language switching task in a sound-attenuated room. They sat side by side in front of the same computer screen without overt non-verbal communication (e.g., embodied and visual communication) during the entire experiment. Prior to testing, participants familiarized themselves with the L1 and L2 names of the selected pictures. They were also instructed that two types of information would cue their responses: shape and color. The shape of the cue (triangle or circle) indicated who was the speaker, while its color (red or blue) indicated the target language (L1 or L2). For example, a triangle (red or blue) represented participant A, while a circle (red or blue) represented participant B; red indicated Chinese (L1) as the target language, while blue indicated English (L2). While the speaker was naming, the other participant remained quiet. To ensure synchronization between the two sets of electrodes, two identical ANT amplifiers with separate grounds were optically coupled to the computer and recorded through the same software interface. Two microphones, one per participant and connected through two PST SR-BOXes, were used to record response times.
Each trial comprised three processing phases: cue processing, lexical processing and output processing. During cue processing, participants were cued about their role (speaker or listener) and the language in which to name the upcoming picture; for example, a blue triangle indicated that participant A should name the picture in L2. During lexical processing, the speaker retrieved the target lemma upon presentation of the picture; for example, the lemma of “apple” was retrieved at this stage. During output processing, the speaker overtly named the picture in the cued language; for example, participant B would say “apple” in English if cued with a blue circle. The other participant, the listener, kept silent until instructed by the appropriate cue to name. Before their turn to name, the listener’s target language schema could be primed or activated by the speaker’s utterances, triggering “cross-person item-level interference” (Liu et al. 2018).
Figure 1 shows the structure of a given trial. At the center of the computer screen, a cue appeared for 250 ms to indicate which participant should name in which target language. After a 500 ms blank, a picture was presented on the screen for 2000 ms, awaiting a naming response. After the response was recorded, a blank screen appeared for 1000 ms, and then a new trial started with the next naming cue.
Fig. 1.
Joint language switching procedure
Our study used a 2 × 2 × 2 within-participant design with three factors: language (L1, L2), language sequence (repeat, switch) and person sequence (within-person, cross-person), yielding 8 trial types. Each of the 8 blocks contained the same 96 trials covering all 8 conditions (12 trials per condition), presented in random order. Participants completed 12 practice trials before testing, and block order was counterbalanced across dyads. In repeat trials, the response language of at least two consecutive trials was the same (L1L1 or L2L2), whereas in switch trials, the response language of the current trial differed from that of the previous trial (L1L2 or L2L1). A within-person trial was named by the same participant as the preceding trial, whereas a cross-person trial was named by a different participant from the preceding trial. L1/L2 repeat and L1/L2 switch trials were equally distributed across the within-person and cross-person conditions. In this design, first, cross-frequency coupling comparisons between L1 switch and L2 switch trials, in either the within-person or the cross-person condition, could show whether the cognitive resources recruited for suppressing cross-language interference affect brain-to-brain coupling. Second, comparisons between the within-person and cross-person conditions could clarify whether language control for suppressing cross-person item-level interference also impacts brain-to-brain coupling.
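The repeat/switch and within-/cross-person coding described above can be sketched as follows (a minimal illustration; the function and the trial representation are ours, not the study’s actual stimulus-list code):

```python
# Hypothetical sketch of the 2 x 2 x 2 trial coding in the joint
# language switching task. Each trial is a (speaker, language) tuple.

def classify_trial(prev, curr):
    """Classify the current trial relative to the preceding one.

    Returns (language, language_sequence, person_sequence).
    """
    language = curr[1]                                            # L1 or L2
    lang_seq = "repeat" if prev[1] == curr[1] else "switch"       # language sequence
    person_seq = "within-person" if prev[0] == curr[0] else "cross-person"
    return (language, lang_seq, person_seq)

# Example: participant B switches to L2 after participant A named in L1.
print(classify_trial(("A", "L1"), ("B", "L2")))
# -> ('L2', 'switch', 'cross-person')
```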
Hyperscanning recording and preprocessing analysis
Brain signals were recorded using two 64-channel caps with Ag/AgCl impedance-optimized active electrodes (ANT Neuro), with standard electrode sites based on the international 10–20 system. The signal was recorded at a 1 kHz sampling rate, referenced online to the CPZ electrode and re-referenced offline to the average of the left and right mastoids, i.e., (M1 + M2)/2. Impedances were kept below 5 kΩ. Due to excessive artefacts across subjects, twenty electrodes located around the periphery of the head were removed from analysis. Further analysis was conducted on 40 electrodes (F5, F3, F1, FZ, F2, F4, F6, FC5, FC3, FC1, FCZ, FC2, FC4, FC6, C5, C3, C1, CZ, C2, C4, C6, CP5, CP3, CP1, CPZ, CP2, CP4, CP6, P5, P3, P1, PZ, P2, P4, P6, PO5, PO3, POZ, PO4, PO6), and the sampling rate was reduced to 500 Hz. Electroencephalographic activity was band-pass filtered online between 0.1 and 100 Hz and low-pass filtered offline at 30 Hz. Independent component analysis (ICA) was conducted on the EEG data to reduce or eliminate ocular movement artefacts: the ICA components most strongly correlated with the vertical and horizontal EOG were rejected, and the corrected data were reconstructed from the remaining components (Makeig et al. 2000).
Continuous recordings were segmented into cue-locked −100 to 600 ms epochs and stimulus-locked −150 to 650 ms epochs (Verhoef et al. 2009, 2010), baseline-corrected to the 100 ms pre-cue and 150 ms pre-stimulus windows, respectively. Epochs with signals exceeding ±80 µV were automatically discarded. All further analysis was performed using Matlab software (MathWorks Inc.).
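As a rough illustration of the cue-locked epoching, baseline correction and ±80 µV artefact rejection just described (a sketch only; the study’s actual pipeline ran in Matlab, and function and variable names here are ours):

```python
import numpy as np

FS = 500  # Hz, assumed sampling rate after downsampling

def extract_epochs(eeg, cue_samples, tmin=-0.100, tmax=0.600):
    """Cut cue-locked epochs, baseline-correct to the pre-cue window,
    and discard epochs exceeding +/-80 microvolts.

    eeg: (n_channels, n_samples) array in microvolts.
    cue_samples: sample indices of cue onsets.
    """
    pre, post = int(-tmin * FS), int(tmax * FS)
    kept = []
    for s in cue_samples:
        epoch = eeg[:, s - pre:s + post]
        # Subtract the mean of the pre-cue baseline from every channel.
        epoch = epoch - epoch[:, :pre].mean(axis=1, keepdims=True)
        if np.abs(epoch).max() <= 80:  # automatic artefact rejection
            kept.append(epoch)
    return (np.stack(kept) if kept
            else np.empty((0, eeg.shape[0], pre + post)))
```

With `tmin=-0.100` and `tmax=0.600` at 500 Hz, each surviving epoch is 350 samples long; the stimulus-locked epochs would use −0.150 to 0.650 s instead.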
Cross-brain cross-frequency coupling analysis
Previous studies have analyzed CFC over the full time window after stimulus onset (Friese et al. 2013; Bevilacqua et al. 2018); we therefore calculated CFC over the full 600 ms window after the cue as well as the 650 ms window after the stimulus (i.e., the picture). In addition, we only analysed phase-amplitude coupling (PAC), as this type of CFC is considered the most suitable measure of cognitive-neural processes (see Canolty and Knight 2010; Hyafil et al. 2015).
To estimate how amplitude was modulated by phase, we calculated the modulation index (MI) following the standard measures described in Canolty et al. (2006); the MI reflects the coupling between the frequencies of interest. We calculated how the speaker’s phase modulated the listener’s amplitude and vice versa, over the frequency range 1–30 Hz. Following standard practice, we divided this range into four frequency bands: delta (1–3 Hz), theta (4–7 Hz), alpha (8–13 Hz) and beta (14–30 Hz) (Poikonen et al. 2018; Burgess 2013; Yun et al. 2012). The classification of high and low frequency is relative: delta and theta are lower relative to alpha and beta. We analyzed inter-brain coupling between low-frequency phase and high-frequency amplitude (e.g., delta phase with theta amplitude), as well as same-band coupling between the speaker and listener (e.g., the speaker’s delta phase with the listener’s delta amplitude) and between the listener and speaker (e.g., the listener’s delta phase with the speaker’s delta amplitude).
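The band definitions and the phase→amplitude pairings just described (low-frequency phase paired with equal-or-higher-frequency amplitude) can be enumerated as follows (a minimal sketch; variable names are ours):

```python
# Frequency bands used in the PAC analysis (Hz), in ascending order.
BANDS = {"delta": (1, 3), "theta": (4, 7), "alpha": (8, 13), "beta": (14, 30)}

# Pair each band's phase with the amplitude of the same or any higher band.
names = list(BANDS)
pairings = [(phase, amp) for i, phase in enumerate(names) for amp in names[i:]]
print(len(pairings))  # -> 10 candidate phase/amplitude band pairings
```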
In what follows, A(t)hf denotes the amplitude of the high-frequency signal (f1) and φ(t)lf denotes the phase of the low-frequency signal (f2). The MI was calculated as follows.
Take the phase of the listener coupling with the amplitude of the speaker as an example:
First, the ‘eegfilt’ function (an FIR filter) in EEGLAB (Delorme and Makeig 2004) was used to band-pass filter the data, yielding time series of the different frequency signals: the low-frequency signal of the listener and the high-frequency signal of the speaker. To reduce artefacts introduced by the filter, one third of the data points of the low- and high-frequency cycles were removed.
Second, the Hilbert transform (Oppenheim 1999; Marple 1999) was used to obtain the instantaneous amplitude of the speaker’s signal and the instantaneous phase of the listener’s signal. For a band-pass filtered raw signal x(t), the analytic signal is

z(t) = x(t) + i·H[x(t)],

where x(t) is the real part and H[x(t)], the Hilbert transform of x(t), is the imaginary part. The instantaneous amplitude and phase of the signal are then

A(t) = |z(t)|,  φ(t) = arg z(t).
Third, we constructed a composite complex-valued signal by combining the amplitude time series of one frequency with the phase time series of the other:

z(t) = A(t)hf · e^(iφ(t)lf),

where z(t) is the new composite signal, A(t)hf is the amplitude of the high-frequency signal, and φ(t)lf is the phase of the low-frequency signal.
Fourth, to measure the degree of asymmetry of the probability density function of the composite signal, the mean of z[n] over the N time points was computed, providing a useful metric of coupling between the two time series. The raw modulation index is the absolute value of this mean vector:

MIraw = |(1/N) Σ(n=1..N) z[n]|,

where the vertical bars ‘| |’ denote the absolute value. Assuming that φ(t)lf is uniformly distributed, any departure of the z[n] distribution from radial symmetry indicates a dependence of A(t)hf on φ(t)lf. Therefore, a non-zero MIraw indicates CFC.
Fifth, we normalized the MI. A significance value can be attached to MIraw using a surrogate data approach: by introducing an arbitrary time lag between φ(t)lf and A(t)hf, we compute a surrogate complex variable zs[n], whose mean over n = 1, ..., N gives a surrogate value Ms. This procedure is repeated to produce s = 1, ..., S surrogate values, from which we compute the mean µ and standard deviation σ to obtain the normalized modulation index: MI = (MIraw − µ)/σ.
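The five steps above can be sketched end-to-end as follows. This is a minimal illustration, not the study’s pipeline: SciPy’s filters stand in for EEGLAB’s `eegfilt` (our substitution), the edge-trimming step is omitted for brevity, and all names are ours.

```python
import numpy as np
from scipy.signal import butter, hilbert, sosfiltfilt

FS = 500  # assumed sampling rate in Hz

def bandpass(x, lo, hi, fs=FS, order=4):
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def modulation_index(phase_sig, amp_sig, lo_band, hi_band,
                     n_surrogates=100, seed=0):
    """Low-frequency phase of one brain (e.g. the listener) coupling
    with high-frequency amplitude of the other (e.g. the speaker)."""
    phi = np.angle(hilbert(bandpass(phase_sig, *lo_band)))  # phi(t)lf
    amp = np.abs(hilbert(bandpass(amp_sig, *hi_band)))      # A(t)hf
    mi_raw = np.abs((amp * np.exp(1j * phi)).mean())        # |mean of z[n]|
    # Surrogates: arbitrary time lags break the phase-amplitude relation.
    rng = np.random.default_rng(seed)
    surr = np.array([np.abs((np.roll(amp, rng.integers(1, amp.size)) *
                             np.exp(1j * phi)).mean())
                     for _ in range(n_surrogates)])
    return (mi_raw - surr.mean()) / surr.std()              # normalized MI

# Synthetic check: theta amplitude locked to delta phase.
rng = np.random.default_rng(1)
t = np.arange(0, 120, 1 / FS)
low = bandpass(rng.standard_normal(t.size), 1, 3)    # delta-band "listener" signal
env = 1 + np.cos(np.angle(hilbert(low)))             # amplitude tied to delta phase
coupled = env * np.sin(2 * np.pi * 6 * t)            # theta-band "speaker" signal
print(modulation_index(low, coupled, (1, 3), (4, 7)))  # large positive MI
```

On such synthetic data, the normalized MI for the coupled pair is large, while an amplitude signal modulated by an independent delta source yields an MI near zero.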
We then compared MI values across conditions. The normalized MI was submitted to a three-factor repeated-measures ANOVA, language (L1/L2) × language sequence (repeat/switch) × person sequence (cross-person/within-person), for each electrode pair and each coupling frequency band.
Finally, crossing the speaker’s 40 electrodes with the listener’s 40 electrodes yielded 1600 electrode pairs; hence 1600 three-factor repeated-measures ANOVAs were performed for each coupling frequency band. To avoid false-positive results, an FDR correction (q = 0.05; Benjamini and Hochberg 1995; Benjamini and Yekutieli 2001) was applied to the ANOVA p values across all electrode pairs in each condition and each coupling frequency band.
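The Benjamini–Hochberg FDR step applied to the 1600 p values per band can be sketched as follows (an illustrative implementation; in practice a statistics package would be used):

```python
import numpy as np

def fdr_bh(pvals, q=0.05):
    """Return a boolean mask marking p values significant at FDR level q
    (Benjamini-Hochberg step-up procedure)."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = p.size
    # Find the largest rank k with p_(k) <= (k/m) * q; all p values up to
    # and including that rank are declared significant.
    below = p[order] <= (np.arange(1, m + 1) / m) * q
    mask = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        mask[order[:k + 1]] = True
    return mask

pvals = np.array([0.001, 0.008, 0.039, 0.041, 0.21, 0.6])
print(fdr_bh(pvals, q=0.05))
# -> [ True  True False False False False]
```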
Results
Behavioral data treatment
As accuracy was quite high overall (> 95%), only naming latencies were analyzed. We excluded the first two trials of each block, trials with naming latencies beyond M ± 3SD, error trials, and trials contaminated by artefacts (13.62% in total). Note that naming latencies were obtained only for the speaker in each trial, as listeners were not instructed to give any behavioral response. We conducted a three-way repeated-measures ANOVA with three within-participant factors: language (L1, L2), language sequence (repeat, switch) and person sequence (within-person, cross-person). Only significant results are reported here.
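The M ± 3SD latency trimming can be sketched as follows (illustrative only; whether the study trimmed per condition, per participant, or overall is not specified here, and the example data are fabricated for demonstration):

```python
import numpy as np

def trim_latencies(rts):
    """Keep naming latencies within mean +/- 3 standard deviations."""
    rts = np.asarray(rts, dtype=float)
    m, sd = rts.mean(), rts.std()
    return rts[np.abs(rts - m) <= 3 * sd]

# 15 typical latencies plus one extreme 5000 ms trial.
rts = np.append(np.tile([980.0, 1000.0, 1020.0], 5), 5000.0)
print(trim_latencies(rts).size)  # -> 15 (the 5000 ms trial is removed)
```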
Behavioral results
Figure 2 shows the naming latencies across conditions. Repeat trials were named faster (1001 ± 20 ms) than switch trials (1017 ± 22 ms), yielding a main effect of language sequence, F(1, 27) = 8.53, p = .01, ηp2 = 0.24.
Fig. 2.
Mean naming latencies of joint language switching. Error bars show standard errors of naming latencies across subjects calculated separately for each level
Cross-frequency coupling results
Data treatment was the same as in the behavioral analysis. We conducted a three-way repeated-measures ANOVA with three within-participant factors: language (L1, L2), language sequence (repeat, switch) and person sequence (within-person, cross-person). When an interaction was significant (p < .05), simple-effects analyses followed. Because significance tests were performed on 1600 electrode pairs (i.e., 40 × 40), we report only the pairs showing significant results. Based on previous studies employing PAC analysis (Jensen and Colgin 2007; Canolty and Knight 2010; Tort et al. 2008), we focused on how low-frequency phase modulated high-frequency amplitude, as well as phase-amplitude modulation within the same frequency band. We therefore report these couplings from the PAC analysis, but not other types (e.g., the modulation of low-frequency amplitude by high-frequency phase).
(1) Cue-locked same/different frequency coupling
To better illustrate the results, we present the main effects and interactions in the same or different frequency bands in Table 2. Our analyses are based on two types of coupling: the phase of the production (speaker) signal modulating the amplitude of the comprehension (listener) signal, and the phase of the comprehension signal modulating the amplitude of the production signal. In addition, we present the focal electrode pairs at both the cue-locked and stimulus-locked stages of the CFC analysis in Fig. 3.
Table 2.
Main effects and interactions for cue-locked cross-frequency coupling
| Types | Main/interaction effect | Frequency band | MI results across conditions | Electrodes | F | p | ηp2 |
|---|---|---|---|---|---|---|---|
| Production coupling comprehension | Person sequence | Theta/theta | Cross-person > Within-person | P2–F2 | 11.56 | < 0.001 | 0.30 |
| | Person sequence | Alpha/alpha | Cross-person > Within-person | P6–P6 | 12.69 | < 0.001 | 0.32 |
| | Language sequence | Beta/beta | Switch > Repeat | P6–P4 | 10.17 | < 0.001 | 0.27 |
| | Language sequence | Delta/theta | Switch > Repeat | P2–P4 | 10.30 | < 0.001 | 0.28 |
| | Language sequence | Delta/theta | Switch > Repeat | P2–P2 | 18.10 | < 0.001 | 0.40 |
| | Language sequence | Delta/theta | Switch > Repeat | P6–P4 | 8.94 | < 0.01 | 0.25 |
| | Language sequence | Delta/alpha | Switch > Repeat | F6–P4 | 8.22 | < 0.01 | 0.23 |
| | Language × person sequence | Alpha/beta | L1/2 cross-person > L1/2 within-person | F4–P6 | 11.70 | < 0.001 | 0.30 |
| Comprehension coupling production | Language × language sequence | Beta/beta | L2 switch > L1 switch | F4–P6 | 9.46 | 0.01 | 0.26 |
| | Language × person sequence | Delta/delta | L2 cross-person > L2 within-person | P4–F4 | 9.29 | 0.01 | 0.26 |
| | Language × person sequence | Delta/delta | L2 cross-person > L2 within-person | P4–F2 | 9.89 | < 0.001 | 0.27 |
| | Language × person sequence | Theta/beta | L2 cross-person > L2 within-person | P2–F6 | 8.08 | 0.01 | 0.23 |
Fig. 3.
Cross-frequency coupling electrode pairs. a shows cue-locked electrode pairs of P4–F4 (red line), P4–F2 (yellow line), P2–F6 (purple line), and F4–P6 (green line). b represents stimulus-locked electrode pairs of F6–P6 (red line), and P4–P2 (blue line). The red head represents speakers; the blue head represents listeners. (Color figure online)
Production coupling comprehension There was a main effect of person sequence at the electrode pairs P2–F2 and P6–P6, with stronger modulation in the cross-person condition than in the within-person condition within the same frequency band for both theta and alpha phase-amplitude coupling. These results indicate that cross-person trials generated stronger coupling than within-person trials, suggesting that more interference was overcome in cross-person trials in general. There was also a main effect of language sequence covering the electrode pairs P6–P4, P2–P4, P2–P2, and F6–P4. Among these, P6–P4 generated stronger coupling in switch trials than in repeat trials at both the within-frequency (i.e., beta/beta) and cross-frequency (i.e., delta/theta) bands, while the remaining pairs generated stronger CFC at cross-frequency bands only (i.e., delta/theta phase-amplitude for P2–P4 and P2–P2, and delta/alpha phase-amplitude for F6–P4). These results show that switch trials produced stronger coupling than repeat trials across several electrode pairs, consistent with the notion that language switching involves inhibiting interference. Furthermore, the language × person sequence interaction reached significance at the electrode pair F4–P6. As we are primarily interested in the role of person sequence in bilingual control, we followed up this interaction with simple-effects analyses, which showed stronger modulation in the L2 cross-person condition than in the L2 within-person condition (F(1, 27) = 5.69, p = .02, ηp2 = 0.17), as well as stronger modulation in the L1 cross-person condition than in the L1 within-person condition (F(1, 27) = 4.97, p = .03, ηp2 = 0.15) (see Fig. 4a–c). These results suggest that participants encountered more interference in the cross-person condition than in the within-person condition in both L1 and L2.
Fig. 4.
Cross-frequency coupling at the cue-locked stage. a–c show stronger modulation of the beta amplitude by the alpha phase in L2 cross-person trials than in L2 within-person trials. d–f show stronger modulation of the delta amplitude by the same-band (delta) phase in the L2 cross-person condition than in the L2 within-person condition. The red-line boxes in a, b, d and e mark the focal frequency bands. (Color figure online)
Comprehension coupling production There was no main effect, but we observed two-way interactions of language × language sequence at the electrode pair F4–P6 and of language × person sequence at the electrode pairs P4–F4, P4–F2, and P2–F6. At F4–P6, the beta phase modulated the beta amplitude, generating stronger coupling in L2 switch trials than in L1 switch trials (F(1, 27) = 4.58, p = .04, ηp2 = 0.15). These results suggest that switching to L2 encountered more interference than switching to L1, consistent with the literature (Green 1998; Costa and Santesteban 2004; Costa et al. 2006; De Bruin et al. 2014; Declerck et al. 2015; Liu et al. 2016, 2018). Meanwhile, stronger modulation was observed within the delta band in the L2 cross-person condition than in the L2 within-person condition at both P4–F4 (F(1, 27) = 4.50, p = .04, ηp2 = 0.17) and P4–F2 (F(1, 27) = 4.75, p = .04, ηp2 = 0.15) (see Fig. 4d–f). Likewise, stronger theta/beta phase-amplitude modulation was observed at P2–F6 in the L2 cross-person condition than in the L2 within-person condition (F(1, 27) = 6.81, p = .02, ηp2 = 0.20). These results were partially consistent with those from production coupling comprehension, indicating that participants encountered more interference in the cross-person condition than in the within-person condition for L2 but not L1.
(2) Stimulus-locked same/different frequency coupling.
Similarly, Table 3 provides an overview of the main effects and interactions of CFC in the same or different frequency bands during the stimulus-locked stage.
Table 3.
Main effects and interactions for stimulus-locked cross-frequency coupling
| Types | Main/interaction effect | Frequency band | MI results across conditions | Electrodes | F | p | ηp2 |
|---|---|---|---|---|---|---|---|
| Production coupling comprehension | Language × person sequence | Alpha/alpha | L1 cross-person > L1 within-person | P6–P2 | 6.54 | 0.02 | 0.20 |
| | Language × person sequence | Alpha/alpha | L1 cross-person > L1 within-person | P6–P6 | 7.82 | 0.01 | 0.23 |
| | Language × person sequence | Alpha/beta | L1 cross-person > L1 within-person | P6–P6 | 8.99 | 0.01 | 0.25 |
| | Language × language sequence × person sequence | Delta/theta | L2 switch within-person > L1 switch within-person | P4–P2 | 10.43 | 0.003 | 0.28 |
| Comprehension coupling production | Language × language sequence × person sequence | Delta/alpha | L2 switch within-person > L1 switch within-person | F6–P6 | 8.89 | 0.006 | 0.25 |
Production coupling comprehension. We observed only a two-way interaction of language × person sequence at the electrode pairs P6–P2 and P6–P6, as well as a three-way interaction of language × language sequence × person sequence at the electrode pair P4–P2. The alpha phase modulated the alpha amplitude, generating stronger coupling in the L1 cross-person condition than in the L1 within-person condition at both P6–P2 (F(1, 27) = 4.78, p = .04, ηp2 = 0.15) and P6–P6 (F(1, 27) = 4.48, p = .04, ηp2 = 0.19). In addition, stronger alpha/beta phase-amplitude modulation was observed at P6–P6 in the L1 cross-person condition than in the L1 within-person condition (F(1, 27) = 4.58, p = .04, ηp2 = 0.15). These results suggest that more cognitive effort was required in the L1 cross-person condition than in the L1 within-person condition.
Importantly, the language × language sequence × person sequence interaction was significant at the electrode pair P4–P2. Further analysis showed that the delta phase modulated the theta-band amplitude, with stronger modulation in L2 switch trials than in L1 switch trials in the within-person condition only: F(1, 27) = 8.61, p = .007, ηp2 = 0.24 (see Fig. 5a–c). Again, this result is consistent with the literature showing that more interference is encountered when switching to L2 than to L1.
Fig. 5.
Cross-frequency coupling at the stimulus-locked stage. a–c show that the delta phase modulated the theta-band amplitude in the within-person condition, with stronger modulation in L2 switch trials than in L1 switch trials. d–f show that the delta phase modulated the alpha-band amplitude in the within-person condition, again with stronger modulation in L2 switch trials than in L1 switch trials. The red-line boxes in a, b, d and e mark the focal frequency bands. (Color figure online)
Comprehension coupling production. We observed only a three-way interaction at the electrode pair F6–P6 at cross-frequency bands (i.e., delta/alpha phase-amplitude). Further analysis indicated that the delta phase modulated the alpha-band amplitude, with stronger modulation in L2 switch trials than in L1 switch trials in the within-person condition: F(1, 27) = 4.73, p = .04, ηp2 = 0.15 (see Fig. 5d–f). This result was consistent with that from production coupling comprehension, indicating that cross-language interference was larger when switching to L2 than to L1.
To summarize the main results, we found both role-switching (i.e., cross-person item-level interference) and language-switching effects (i.e., cross-language interference) at the cue-locked stage: cross-person trials generated stronger CFC than within-person trials, and switch trials produced stronger CFC than repeat trials. Notably, these effects did not depend on which language was used, suggesting that both listeners and speakers encountered interference from both sources (i.e., cross-person and cross-language) in the task. During the stimulus-locked stage, however, we found stronger CFC in L2 switch trials than in L1 switch trials only in the within-person condition, not in the cross-person condition. This suggests that, once participants knew at the cue-locked stage who would be the next speaker, they were better prepared to overcome cross-language interference (i.e., stronger inhibition of L1 when switching to L2) in the within-person condition. This cross-language inhibition was absent when participants switched roles in the task.
Discussion
Our study takes a novel approach to understanding how joint language control is achieved through brain-to-brain communication in a joint picture-naming task by using CFC. CFC is a sensitive measure of neuronal oscillations that can serve as a mechanism to transfer information from large-scale brain networks operating at behavioral timescales to the fast, local cortical processing required for cognitive processes. Here we were interested in measuring synchronized cross-language communication between speakers and listeners with respect to inhibiting interference from different sources. Based on the literature, we identified two sources of interference participants would encounter in this task: cross-language and cross-person. We found that (1) speakers and listeners coordinately suppressed cross-language interference through cross-frequency coupling, as shown in the increased delta/theta and delta/alpha phase-amplitude coupling when switching to L2 compared with switching to L1; and (2) speakers and listeners simultaneously inhibited cross-person item-level interference, as shown by stronger cross-frequency coupling in the cross-person condition than in the within-person condition. Importantly, both inhibitions were primarily completed during the cue-locked stage.
These main findings show that the modulation of the speaker's phase on the listener's amplitude was quite similar to the modulation of the listener's phase on the speaker's amplitude. These results were very likely driven by the task, in which both speakers and listeners needed to attend to the same cues instructing their behavior. In a trial involving language switching relative to the prior trial, the cue would trigger top-down language control in the speaker during picture naming, as well as bottom-up attentional control in the listener when monitoring the speaker. Similarly, in a trial involving role switching relative to the prior trial, the cue would trigger bottom-up attentional control in both speakers and listeners. In both contexts, our CFC results showed that speakers and listeners jointly and simultaneously implemented language control mechanisms to complete the task. In the following sections, we discuss the details of joint language control in both contexts.
Joint language control in suppressing cross-language interference
Language control plays an important role in suppressing cross-language interference during independent bilingual production and comprehension (e.g., Green 1998). Our findings suggest that this language control mechanism was implemented simultaneously in production and comprehension in two independent brains during cross-language communication. One of our main results is that switching to the weaker language, L2, evoked more delta/theta and delta/alpha phase-amplitude coupling than switching to the dominant language, L1. Moreover, this stronger CFC in L2 switch trials appeared both in the modulation of the speaker's phase on the listener's amplitude and in the modulation of the listener's phase on the speaker's amplitude. Previous studies have demonstrated that delta/theta-band oscillations are associated with semantic processes (Brunetti et al. 2013). In our joint picture-naming task, L1 lexical representations were more likely than L2 representations to be activated for unbalanced bilinguals at the stimulus-locked stage. If the stronger CFC had been due to lexico-semantic processes, we should have observed stronger CFC in L1 switch trials rather than L2 switch trials. Thus, the stronger CFC observed in L2 switch trials suggests that additional cognitive processes were involved during inter-brain communication. These processes reflect a synchronous effort by both speakers and listeners to control cross-language interference and complete the task.
One important issue is identifying the locus of language control during language switching, and our findings provide insights into this. Previous studies showed that language control may occur at different linguistic levels, such as the conceptual level (Declerck et al. 2015; La Heij 2005), the lemma level (Declerck et al. 2015; Declerck and Philipp 2015a; Tarlowski et al. 2013), the phonological form level (Goldrick et al. 2014), and the orthographic form level (Orfanidou and Sumner 2005). As our task did not involve any orthography, we can rule out inhibition driven by orthography. Similarly, we can rule out the possibility that inhibiting cross-language interference occurred at the conceptual level. Theoretical models assume stronger connections from concepts to L1 lemmas than to L2 lemmas, the lemma being a functional language processing level where lexical properties are specified prior to output (e.g., the RHM in Kroll and Stewart 1994). If the stronger CFC in L2 switch trials (compared to L1 switch trials) had been driven by the conceptual level, owing to the weaker connections from concepts to L2 lemmas, we should have observed similar effects in L1/L2 repeat trials. However, such effects were absent in repeat trials.
Thus, our findings suggest that the stronger CFC effects in L2 switch trials can be interpreted as the consequence of suppressing/inhibiting interference from the L1 lemma or phonological levels, or from the interaction between these two levels. This is in line with the literature. For instance, Goldrick et al. (2014) suggested that the difference between L1 and L2 in lemma activation during language control could spill over into their respective phonemes, leading to differences in phonology; language control can thus occur at the lemma level yet also influence the phonological level. Conversely, Declerck and Philipp (2015b) argued that phonology can influence language control at the lemma level through phonological feedback. These proposals suggest that speakers may exert top-down language control at the lemma and phonological levels, and that listeners may trigger bottom-up control through phonological feedback loops after hearing the words produced by speakers. By this logic, language control can occur in both speakers and listeners at the same time, which is why we observed stronger CFC in L2 switch trials (compared to L1 switch trials) both when the production phase coupled with the comprehension amplitude and vice versa.
Joint language control in suppressing cross-person item-level interference
We hypothesized that dynamic cross-language communication is also achieved through inhibiting cross-person item-level interference, in addition to the cross-language interference regularly reported in single-bilingual production and comprehension. This hypothesis was supported by the current study: speakers and listeners suppressed cross-person item-level interference simultaneously, and only at the cue-locked stage. This inhibition was indicated by stronger modulation in the cross-person condition than in the within-person condition across multiple cross-frequency couplings. What drives this cross-person item-level language control?
We consider this effect primarily driven by joint attention, which may modulate speech tracking (Zion Golumbic et al. 2013). Joint attention is a ubiquitous element of theories of successful interpersonal communication (Schirmer et al. 2016). In a joint naming task, participants need to reshape their attentional strategies during role switching to achieve successful communication. It is plausible that the cross-person condition required more attentional resources than the within-person condition. Moreover, our results indicate that this joint attention did not depend on language context, as stronger CFC was observed in the cross-person condition than in the within-person condition in both L1 and L2. These couplings across a range of oscillations have been associated with the attentional control implicated in role switching (Harmony 2013; Roehm et al. 2004; Ocklenburg et al. 2011; van Steenbergen et al. 2012; Zavala et al. 2013).
As discussed earlier, this cross-person item-level interference has been interpreted as a type of priming effect during dialogue in previous studies (e.g., the Interactive Alignment Model in Pickering and Garrod 2004; Garrod and Pickering 2009). That is, various levels of linguistic representation (e.g., semantic, syntactic, lemma, phonological) are simultaneously activated during both production and comprehension, and the speaker's linguistic representation system primes the listener's system to resonate with it unconsciously. The increased consistency between the two communicators' linguistic representation systems constitutes the alignment effect. For example, using a picture-sentence completion task, Kootstra et al. (2010) found that participants were inclined to adopt the word order and switch location used by confederates in language switching during a dialogue. Thus, it is plausible that listening participants could benefit from trials in which the preceding speaker produced the same language (i.e., repeat trials). However, we failed to observe a CFC difference between switch and repeat trials in the cross-person condition. This could be because interference was resolved primarily at the cue-locked stage in our task, where participants were more sensitive to role and language; that is, their attentional resources were primarily engaged at the cue-locked stage in order to complete the task. As a result, this potential priming effect had diminished by the stimulus-locked stage. Future studies could focus on identifying the source of this type of cross-person interference in bilingual communication.
Taken together, our results showed that dynamic cross-language communication involves cross-person item-level interference that is inhibited by the joint effort of both speakers and listeners. This joint language control mechanism cannot be explained by inhibition operating in separate, independent bilingual production or comprehension, as the IC model (Green 1998) assumes. Therefore, the IC model should incorporate mechanisms for synchronously inhibiting interference arising from both language and person (i.e., cross-language and cross-person item-level interference) through joint language control during simultaneous production and comprehension in language switching.
Conclusions
The current study adopted a joint language switching paradigm to reveal the large-scale brain networks engaged when communicators jointly suppress both cross-language and cross-person item-level interference during dynamic cross-language communication. Our findings demonstrate crucial language control mechanisms involved in an interactive bilingual communication mode, and these mechanisms should be taken into account in bilingual models: bilingual communicators have to synchronously inhibit interference (both cross-language and cross-person item-level) via language control during cross-language communication. This study has not only provided a new approach to studying language control in language switching, using CFC to simulate a bilingual dialogue, but also laid empirical and theoretical foundations for studying dynamic cross-language communication.
Acknowledgements
This research was supported by grants from the National Natural Science Foundation of China Youth Fund (31700991), the China Postdoctoral Science Foundation (2017M621158), and the Open Fund of the Beijing Key Lab of Applied Experimental Psychology.
Footnotes
The large-scale network refers to the cross-frequency coupling between different electrode points (Canolty and Knight 2010).
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- Abutalebi J, Green D. Bilingual language production: the neurocognition of language representation and control. J Neuroling. 2007;20(3):242–275. [Google Scholar]
- Abutalebi J, Green DW. Neuroimaging of language control in bilinguals: neural adaptation and reserve. Biling Lang Cogn. 2016;19(4):689–698. [Google Scholar]
- Allan D. Oxford placement test 1: test pack. Oxford: Oxford University Press; 2004. [Google Scholar]
- Anders S, Heinzle J, Weiskopf N, Ethofer T, Haynes JD. Flow of affective information between communicating brains. NeuroImage. 2011;54(1):439–446. doi: 10.1016/j.neuroimage.2010.07.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Axmacher N, Henseler MM, Jensen O, Weinreich I, Elger CE, Fell J (2010) Cross-frequency coupling supports multi-item working memory in the human hippocampus. In: Proceedings of the National Academy of Sciences, 200911531 [DOI] [PMC free article] [PubMed]
- Bastiaansen M, Magyari L, Hagoort P. Syntactic unification operations are reflected in oscillatory dynamics during on-line sentence comprehension. J Cogn Neurosci. 2010;22(7):1333–1347. doi: 10.1162/jocn.2009.21283. [DOI] [PubMed] [Google Scholar]
- Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc Ser B (Methodol) 1995;57:289–300. [Google Scholar]
- Benjamini Y, Yekutieli D. The control of the false discovery rate in multiple testing under dependency. Ann Stat. 2001;29:1165–1188. [Google Scholar]
- Bevilacqua D, Davidesco I, Wan L, Chaloner K, Rowland J, Ding M, Poeppel D, Dikker S. Brain-to-brain synchrony and learning outcomes vary by student–teacher Dynamics: evidence from a real-world classroom electroencephalography study. J Cogn Neurosci. 2018;31:401–411. doi: 10.1162/jocn_a_01274. [DOI] [PubMed] [Google Scholar]
- Brunetti E, Maldonado PE, Aboitiz F. Phase synchronization of delta and theta oscillations increase during the detection of relevant lexical information. Front Psychol. 2013;4(4):308. doi: 10.3389/fpsyg.2013.00308. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bruns A, Eckhorn R. Task-related coupling from high-to low-frequency signals among visual cortical areas in human subdural recordings. Int J Psychophysiol. 2004;51(2):97–116. doi: 10.1016/j.ijpsycho.2003.07.001. [DOI] [PubMed] [Google Scholar]
- Brysbaert M, New B. Moving beyond Kučera and Francis: a critical evaluation of current word frequency norms and the introduction of a new and improved word frequency measure for American English. Behav Res Methods. 2009;41(4):977–990. doi: 10.3758/BRM.41.4.977. [DOI] [PubMed] [Google Scholar]
- Burgess AP. On the interpretation of synchronization in EEG hyperscanning studies: a cautionary note. Frontiers in Human Neuroscience. 2013;7:881. doi: 10.3389/fnhum.2013.00881. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Cai Q, Brysbaert M. SUBTLEX-CH: Chinese word and character frequencies based on film subtitles. PloS One. 2010;5(6):e10729. doi: 10.1371/journal.pone.0010729. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Canolty RT, Edwards E, Dalal SS, Soltani M, Nagarajan SS, Kirsch HE, Knight RT. High gamma power is phase-locked to theta oscillations in human neocortex. Science. 2006;313(5793):1626–1628. doi: 10.1126/science.1128115. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Canolty RT, Knight RT. The functional role of cross-frequency coupling. Trends Cogn Sci. 2010;14(11):506–515. doi: 10.1016/j.tics.2010.09.001. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Cooper PS, Wong AS, Fulham WR, Thienel R, Mansfield E, Michie PT, Karayanidis F. Theta frontoparietal connectivity associated with proactive and reactive cognitive control processes. NeuroImage. 2015;108:354–363. doi: 10.1016/j.neuroimage.2014.12.028. [DOI] [PubMed] [Google Scholar]
- Costa A, Santesteban M. Lexical access in bilingual speech production: evidence from language switching in highly proficient bilinguals and L2 learners. J Mem Lang. 2004;50(4):491–511. [Google Scholar]
- Costa A, Santesteban M, Ivanova I. How do highly proficient bilinguals control their lexicalization process? Inhibitory and language-specific selection mechanisms are both functional. J Exp Psychol Learn Mem Cogn. 2006;32(5):1057–1074. doi: 10.1037/0278-7393.32.5.1057. [DOI] [PubMed] [Google Scholar]
- De Bruin A, Roelofs A, Dijkstra T, FitzPatrick I. Domain-general inhibition areas of the brain are involved in language switching: FMRI evidence from trilingual speakers. NeuroImage. 2014;90:348–359. doi: 10.1016/j.neuroimage.2013.12.049. [DOI] [PubMed] [Google Scholar]
- Declerck M, Koch I, Philipp AM. The minimum requirements of language control: evidence from sequential predictability effects in language switching. J Exp Psychol Learn Mem Cogn. 2015;41(2):377. doi: 10.1037/xlm0000021. [DOI] [PubMed] [Google Scholar]
- Declerck M, Philipp AM. A sentence to remember: instructed language switching in sentences. Cognition. 2015;137:166–173. doi: 10.1016/j.cognition.2015.01.006. [DOI] [PubMed] [Google Scholar]
- Declerck M, Philipp AM. The unusual suspect: influence of phonological overlap on language control. Biling Lang Cogn. 2015;18(4):726–736. [Google Scholar]
- Delorme A, Makeig S. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J Neurosci Methods. 2004;134(1):9–21. doi: 10.1016/j.jneumeth.2003.10.009. [DOI] [PubMed] [Google Scholar]
- Dikker S, Silbert LJ, Hasson U, Zevin JD. On the same wavelength: predictable language enhances speaker–listener brain-to-brain synchrony in posterior superior temporal gyrus. J Neurosci. 2014;34(18):6267–6272. doi: 10.1523/JNEUROSCI.3796-13.2014. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Friese U, Köster M, Hassler U, Martens U, Trujillo-Barreto N, Gruber T. Successful memory encoding is associated with increased cross-frequency coupling between frontal theta and posterior gamma oscillations in human scalp-recorded EEG. NeuroImage. 2013;66:642–647. doi: 10.1016/j.neuroimage.2012.11.002. [DOI] [PubMed] [Google Scholar]
- Gambi C, Hartsuiker RJ. If you stay, it might be easier: switch costs from comprehension to production in a joint switching task. J Exp Psychol Learn Mem Cogn. 2016;42(4):608–626. doi: 10.1037/xlm0000190. [DOI] [PubMed] [Google Scholar]
- Garrod S, Pickering MJ. Joint action, interactive alignment, and dialog. Top Cogn Sci. 2009;1(2):292–304. doi: 10.1111/j.1756-8765.2009.01020.x. [DOI] [PubMed] [Google Scholar]
- Goldrick M, Runnqvist E, Costa A. Language switching makes pronunciation less nativelike. Psychol Sci. 2014;25:1031–1036. doi: 10.1177/0956797613520014. [DOI] [PubMed] [Google Scholar]
- Green DW. Mental control of the bilingual lexico-semantic system. Biling Lang Cogn. 1998;1(2):67–81. [Google Scholar]
- Green DW, Abutalebi J. Language control in bilinguals: the adaptive control hypothesis. J Cogn Psychol. 2013;25(5):515–530. doi: 10.1080/20445911.2013.796377. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Grosjean F. Bilingual. Cambridge: Harvard University Press; 2010. [Google Scholar]
- Harmony T. The functional significance of delta oscillations in cognitive processing. Front Integr Neurosci. 2013;7:83. doi: 10.3389/fnint.2013.00083. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hasson U, Ghazanfar AA, Galantucci B, Garrod S, Keysers C. Brain-to-brain coupling: a mechanism for creating and sharing a social world. Trends Cogn Sci. 2012;16(2):114–121. doi: 10.1016/j.tics.2011.12.007. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Herrmann CS, Strüber D, Helfrich RF, Engel AK. EEG oscillations: from correlation to causality. Int J Psychophysiol. 2016;103:12–21. doi: 10.1016/j.ijpsycho.2015.02.003. [DOI] [PubMed] [Google Scholar]
- Hyafil A, Giraud AL, Fontolan L, Gutkin B. Neural cross-frequency coupling: connecting architectures, mechanisms, and functions. Trends Neurosci. 2015;38(11):725–740. doi: 10.1016/j.tins.2015.09.001. [DOI] [PubMed] [Google Scholar]
- Jensen O, Colgin LL. Cross-frequency coupling between neuronal oscillations. Trends Cogn Sci. 2007;11(7):267–269. doi: 10.1016/j.tics.2007.05.003. [DOI] [PubMed] [Google Scholar]
- Jensen O, Mazaheri A. Shaping functional architecture by oscillatory alpha activity: gating by inhibition. Front Hum Neurosci. 2010;4:186. doi: 10.3389/fnhum.2010.00186. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kawasaki M, Yamada Y, Ushiku Y, Miyauchi E, Yamaguchi Y. Inter-brain synchronization during coordination of speech rhythm in human-to-human social interaction. Sci Rep. 2013;3:1692. doi: 10.1038/srep01692. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kim JS, Chung CK. Robust source analysis of oscillatory motor cortex activity with inherently variable phase delay. NeuroImage. 2007;37(2):518–529. doi: 10.1016/j.neuroimage.2007.04.068. [DOI] [PubMed] [Google Scholar]
- Kielar A, Meltzer JA, Moreno S, Alain C, Bialystok E. Oscillatory responses to semantic and syntactic violations. J Cogn Neurosci. 2014;26(12):2840–2862. doi: 10.1162/jocn_a_00670. [DOI] [PubMed] [Google Scholar]
- Klimesch W. Alpha-band oscillations, attention, and controlled access to stored information. Trends Cogn Sci. 2012;16(12):606–617. doi: 10.1016/j.tics.2012.10.007.
- Kootstra GJ, van Hell JG, Dijkstra T. Syntactic alignment and shared word order in code-switched sentence production: evidence from bilingual monologue and dialogue. J Mem Lang. 2010;63(2):210–231.
- Kroll JF, Stewart E. Category interference in translation and picture naming: evidence for asymmetric connections between bilingual memory representations. J Mem Lang. 1994;33:149–174.
- La Heij W. Selection processes in monolingual and bilingual lexical access. In: Kroll JF, de Groot AMB, editors. Handbook of bilingualism: Psycholinguistic approaches. Oxford: Oxford University Press; 2005. pp. 289–307.
- Lemhöfer K, Broersma M. Introducing LexTALE: a quick and valid lexical test for advanced learners of English. Behav Res Methods. 2012;44(2):325–343. doi: 10.3758/s13428-011-0146-0.
- Liu H, Liang L, Dunlap S, Fan N, Chen B. The effect of domain-general inhibition-related training on language switching: an ERP study. Cognition. 2016;146:264–276. doi: 10.1016/j.cognition.2015.10.004.
- Liu H, Xie N, Zhang M, Gao X, Dunlap S, Chen B. The electrophysiological mechanism of joint language switching: evidence from simultaneous production and comprehension. J Neurolinguistics. 2018;45:45–59.
- Luo Y, Zhang Y, Feng X, Zhou X. Electroencephalogram oscillations differentiate semantic and prosodic processes during sentence reading. Neuroscience. 2010;169(2):654–664. doi: 10.1016/j.neuroscience.2010.05.032.
- Makeig S, Bell T, Lee TW, Jung TP, Enghoff S. EEGLAB: ICA Toolbox for Psychophysiological Research. 2000. http://www.sccn.ucsd.edu/eeglab/
- Marple L. Computing the discrete-time "analytic" signal via FFT. IEEE Trans Signal Process. 1999;47(9):2600–2603.
- Meuter RF, Allport A. Bilingual language switching in naming: asymmetrical costs of language selection. J Mem Lang. 1999;40(1):25–40.
- Ocklenburg S, Güntürkün O, Beste C. Lateralized neural mechanisms underlying the modulation of response inhibition processes. Neuroimage. 2011;55(4):1771–1778. doi: 10.1016/j.neuroimage.2011.01.035.
- Onslow AC, Jones MW, Bogacz R. A canonical circuit for generating phase-amplitude coupling. PLoS One. 2014;9(8):e102591. doi: 10.1371/journal.pone.0102591.
- Oppenheim AV. Discrete-time signal processing. New Delhi: Pearson Education India; 1999.
- Orfanidou E, Sumner P. Language switching and the effects of orthographic specificity and response repetition. Mem Cogn. 2005;33:355–369. doi: 10.3758/bf03195323.
- Palva S, Palva JM. New vistas for α-frequency band oscillations. Trends Neurosci. 2007;30(4):150–158. doi: 10.1016/j.tins.2007.02.001.
- Pérez A, Molinaro N, Mancini S, Barraza P, Carreiras M. Oscillatory dynamics related to the unagreement pattern in Spanish. Neuropsychologia. 2012;50(11):2584–2597. doi: 10.1016/j.neuropsychologia.2012.07.009.
- Pérez A, Carreiras M, Duñabeitia JA. Brain-to-brain entrainment: EEG interbrain synchronization while speaking and listening. Sci Rep. 2017;7(1):1–12. doi: 10.1038/s41598-017-04464-4.
- Pérez A, Dumas G, Karadag M, Duñabeitia JA. Differential brain-to-brain entrainment while speaking and listening in native and foreign languages. Cortex. 2019;111:303–315. doi: 10.1016/j.cortex.2018.11.026.
- Pfurtscheller G, Zalaudek K, Neuper C. Event-related beta synchronization after wrist, finger and thumb movement. Electroencephalogr Clin Neurophysiol/Electromyogr Motor Control. 1998;109(2):154–160. doi: 10.1016/s0924-980x(97)00070-2.
- Pickering MJ, Garrod S. Toward a mechanistic psychology of dialogue. Behav Brain Sci. 2004;27(2):169–190. doi: 10.1017/s0140525x04000056.
- Poikonen H, Toiviainen P, Tervaniemi M. Naturalistic music and dance: cortical phase synchrony in musicians and dancers. PLoS One. 2018;13(4):e0196065. doi: 10.1371/journal.pone.0196065.
- Roehm D, Schlesewsky M, Bornkessel I, Frisch S, Haider H. Fractionating language comprehension via frequency characteristics of the human EEG. Neuroreport. 2004;15(3):409–412. doi: 10.1097/00001756-200403010-00005.
- Schirmer A, Meck WH, Penney TB. The socio-temporal brain: connecting people in time. Trends Cogn Sci. 2016;20(10):760–772. doi: 10.1016/j.tics.2016.08.002.
- Snodgrass JG, Vanderwart M. A standardized set of 260 pictures: norms for name agreement, image agreement, familiarity, and visual complexity. J Exp Psychol Hum Learn Mem. 1980;6(2):174. doi: 10.1037//0278-7393.6.2.174.
- Strauß A, Kotz SA, Scharinger M, Obleser J. Alpha and theta brain oscillations index dissociable processes in spoken word recognition. NeuroImage. 2014;97(2):387. doi: 10.1016/j.neuroimage.2014.04.005.
- Tarlowski A, Wodniecka Z, Marzecová A. Language switching in the production of phrases. J Psycholing Res. 2013;42(2):103–118. doi: 10.1007/s10936-012-9203-9.
- Tong J, Kong C, Wang X, Liu H, Li B, He Y. Transcranial direct current stimulation influences bilingual language control mechanism: evidence from cross-frequency coupling. Cogn Neurodyn. 2020;14:203–214. doi: 10.1007/s11571-019-09561-w.
- Tort AB, Kramer MA, Thorn C, Gibson DJ, Kubota Y, Graybiel AM, Kopell NJ. Dynamic cross-frequency couplings of local field potential oscillations in rat striatum and hippocampus during performance of a T-maze task. Proc Natl Acad Sci. 2008;105(51):20517–20522. doi: 10.1073/pnas.0810524105.
- Tort AB, Komorowski R, Eichenbaum H, Kopell N. Measuring phase-amplitude coupling between neuronal oscillations of different frequencies. J Neurophysiol. 2010;104(2):1195–1210. doi: 10.1152/jn.00106.2010.
- van Steenbergen H, Band GP, Hommel B. Reward valence modulates conflict-driven attentional adaptation: electrophysiological evidence. Biol Psychol. 2012;90(3):234–241. doi: 10.1016/j.biopsycho.2012.03.018.
- Verhoef KMW, Roelofs A, Chwilla DJ. Role of inhibition in language switching: evidence from event-related brain potentials in overt picture naming. Cognition. 2009;110:84–99. doi: 10.1016/j.cognition.2008.10.013.
- Verhoef KMW, Roelofs A, Chwilla DJ. Electrophysiological evidence for endogenous control of attention in switching between languages in overt picture naming. J Cogn Neurosci. 2010;22(8):1832–1843. doi: 10.1162/jocn.2009.21291.
- Von Stein A, Chiang C, König P. Top-down processing mediated by interareal synchronization. Proc Natl Acad Sci. 2000;97(26):14748–14753. doi: 10.1073/pnas.97.26.14748.
- Wang L, Jensen O, Van den Brink D, Weder N, Schoffelen JM, Magyari L, Hagoort P, Bastiaansen M. Beta oscillations relate to the N400m during language comprehension. Hum Brain Mapp. 2012;33(12):2898–2912. doi: 10.1002/hbm.21410.
- Wilsch A, Henry MJ, Herrmann B, Maess B, Obleser J. Alpha oscillatory dynamics index temporal expectation benefits in working memory. Cerebr Cortex. 2015;25(7):1938–1946. doi: 10.1093/cercor/bhu004.
- Xie N, Li B, Zhang M, Liu H. Role of top-down language control in bilingual production and comprehension: evidence from induced oscillations. Int J Biling. 2018;4:136700691878107.
- Yu H, Zhu L, Cai L, Wang J, Liu C, Shi N, Liu J. Variation of functional brain connectivity in epileptic seizures: an EEG analysis with cross-frequency phase synchronization. Cogn Neurodyn. 2020;14(1):35–49. doi: 10.1007/s11571-019-09551-y.
- Yun K, Watanabe K, Shimojo S. Interpersonal body and neural synchronization as a marker of implicit social interaction. Sci Rep. 2012;2:959. doi: 10.1038/srep00959.
- Zavala B, Brittain JS, Jenkinson N, Ashkan K, Foltynie T, Limousin P, Brown P. Subthalamic nucleus local field potential activity during the Eriksen flanker task reveals a novel role for theta phase during conflict monitoring. J Neurosci. 2013;33(37):14758–14766. doi: 10.1523/JNEUROSCI.1036-13.2013.
- Zhang QF, Yang YF. The determiners of picture naming latency. Acta Psychol Sin. 2003;35(4):447–454.
- Zhang W, Guo L, Liu D, Xu G. The dynamic properties of a brain network during working memory based on the algorithm of cross-frequency coupling. Cogn Neurodyn. 2020;14(2):215–228. doi: 10.1007/s11571-019-09562-9.
- Zion Golumbic EM, Ding N, Bickel S, Lakatos P, Schevon CA, McKhann GM, Goodman RE, Emerson R, Mehta A, Simon JZ, Poeppel D, Schroeder CE. Mechanisms underlying selective neuronal tracking of attended speech at a "cocktail party". Neuron. 2013;77(5):980–991. doi: 10.1016/j.neuron.2012.12.037.