Author manuscript; available in PMC 2021 Mar 1.
Published in final edited form as: J Exp Psychol Learn Mem Cogn. 2019 Jun 27;46(3):443–454. doi: 10.1037/xlm0000734

Turning languages on and off: Switching into and out of code-blends reveals the nature of bilingual language control

Karen Emmorey 1,*, Chuchu Li 2, Jennifer Petrich 1, Tamar H Gollan 2
PMCID: PMC6933100  NIHMSID: NIHMS1039358  PMID: 31246060

Abstract

When spoken language (unimodal) bilinguals switch between languages, they must simultaneously inhibit one language and activate the other language. Because American Sign Language (ASL)-English (bimodal) bilinguals can switch into and out of code-blends (simultaneous production of a sign and a word), we can tease apart the cost of inhibition (turning a language off) and activation (turning a language on). Results from a cued picture-naming task with 43 bimodal bilinguals revealed a significant cost to turn off a language (switching out of a code-blend), but no cost to turn on a language (switching into a code-blend). Switching from single to dual lexical retrieval (adding a language) was also not costly. These patterns held for both languages regardless of default language, i.e., whether switching between speaking and code-blending (English default) or between signing and code-blending (ASL default). Overall, the results support models of bilingual language control that assume a primary role for inhibitory control and indicate that disengaging from producing a language is more difficult than engaging a new language.

Keywords: bimodal bilinguals, code-switching, code-blending, language control, inhibition


Bimodal bilinguals have acquired a signed and a spoken language, and they offer a unique window into the architecture of the language processing system because distinct sensory-motor systems are required to comprehend and produce their two languages. Previous investigations of natural language mixing by hearing bimodal bilingual adults (Bishop, 2010; Emmorey, Borinstein, Thompson & Gollan, 2008) and children (Lillo-Martin, de Quadros, Chen Pichler & Fieldsteel, 2014; Petitto et al., 2001) have shown that bimodal bilinguals prefer to code-blend (simultaneously produce a word and a sign) rather than code-switch between signing and speaking. Emmorey et al. (2008) argued that the overwhelming preference for code-blending over code-switching indicates that the locus of lexical selection for bilingual language production is relatively late – a single lexical representation need not be selected for production at the conceptual or lemma level (otherwise code-blending would not occur). By implication, this result also suggests that dual lexical retrieval is less costly than language inhibition, which is assumed to be required for code-switching (e.g., Green, 1998; Meuter & Allport, 1999).

Emmorey, Petrich, and Gollan (2012) further investigated whether dual lexical retrieval was cost free by asking American Sign Language (ASL)-English bilinguals to name pictures in ASL alone, English alone, or with an ASL-English code-blend in separate blocks. Code-blending did not slow lexical retrieval for ASL: response times for ASL signs produced alone did not differ from those for ASL signs produced in a code-blend. Recently, the lack of a processing cost for dual lexical retrieval has been replicated with unbalanced bimodal bilinguals who were native German speakers and novice signers of Deutsche Gebärdensprache (DGS) (Kaufman & Philipp, 2017). Emmorey et al. (2012) also found that code-blending facilitated retrieval of low frequency ASL signs (specifically, error rates were smaller for low frequency signs in a code-blend than when ASL was produced alone). Facilitation in code-blend production could occur via translation priming (e.g., Costa & Caramazza, 1999; Gollan & Acenas, 2004; Gollan, Montoya, Fennema-Notestine, & Morris, 2005), such that retrieving the English word rendered the ASL sign more accessible.

Although production of a single code-blend does not appear to incur a processing cost, bimodal bilinguals must nonetheless switch into and out of a code-blend. Consider the code-blend example in (1) (from Bishop, 2010; p. 225). By convention, signs are written in capitals as English glosses (the nearest translation equivalent; hyphens indicate a multiword gloss).

(1) I’m pretty sure they were hearing.
SURE HEARING

In this example, English can be considered the default (Gollan & Goldrick, 2018) or matrix language providing the base of the utterance (i.e., contributing most of the lexical items and the syntactic structure; Myers-Scotton & Jake, 2009). The bilingual first switches from speaking English alone into an ASL-English code-blend and then switches out of the code-blend to speaking English alone again. When unimodal bilinguals switch between languages, they must simultaneously turn one language off (i.e., inhibit that language) and turn on the other language, releasing that language from inhibition (Green, 1998; Philipp & Koch, 2009). The existence of code-blends allows us to examine these processes separately. Specifically, switching from speaking into an ASL-English code-blend involves just turning ASL on – English is produced in the code-blend and therefore is not inhibited. Switching out of an ASL-English code-blend into English alone involves only turning ASL off. Of course, it is unlikely that there is a simple on/off switch for activating a target language and inhibiting a non-target language (Kroll & Bice, 2016; Poarch, 2016); the terms ‘turning on’ and ‘turning off’ provide a metaphorical shorthand for describing the cognitive processes underlying bimodal bilingual language mixing – teasing apart aspects of language control that are necessarily conflated if only unimodal bilinguals are tested.

Widely cited models of bilingual language control propose that switching between languages requires two separate processes: engagement (activation) of the target language and disengagement (inhibition) of the non-target language (e.g., Green & Abutalebi, 2013; Rodriguez-Fornells, De Diego Balaguer, & Münte, 2006; for review see Declerck & Philipp, 2015). Because these two processes occur simultaneously during language switching for unimodal bilinguals, it has been difficult to tease them apart. Switch-costs could arise from activating the target language (and releasing it from inhibition), from applying inhibition to the (recently active) non-target language, or from both processes. Importantly, some have argued that inhibition is not needed in proficient bilinguals (e.g., Costa & Santesteban, 2004) or even in bilinguals with one clearly dominant language; that is, activation alone is sufficient to explain patterns of switch costs observed in studies of cued unimodal bilingual language switching (e.g., Verhoef, Roelofs, & Chwilla, 2009).

Critically, bimodal bilingual language switching makes it possible to isolate these two processes. If inhibition is a key underlying control process that enables switching, then turning a language off should be costly. If activation is a key underlying control process, then turning a language on should be costly. To this end, in the present study participants named two sequentially presented pictures (switch and stay trials) in one language alone (English or ASL) or with an ASL-English code-blend (see Figure 1 below). As in previous studies, response times (RTs) for English were measured from voice onset, and RTs for ASL were measured from a manual key-release (e.g., Emmorey et al., 2012; Giezen & Emmorey, 2016; Kaufman & Philipp, 2017; Kaufmann, Mittelberg, Koch, & Philipp, 2017). We investigated whether switch costs were observed when ASL-English bilinguals switched into a code-blend (selecting or turning on a target language) and/or when switching out of a code-blend (inhibiting or turning off a non-target language).

Figure 1.

Illustration of conditions in which (A) English is the default language and (B) ASL is the default language. Lower orange text indicates that the dependent measure is ASL RT, and upper blue text indicates that English RT is the dependent measure. Cost is defined as the difference between switch and stay trials.

In addition, as illustrated in Figure 1, we can also measure the possible switch-cost for adding a language when switching into a code-blend. That is, we can measure the cost of adding ASL to English when switching from speaking to code-blending, as well as the cost of adding English to ASL when switching from signing to code-blending. Based on the blocked picture-naming results of Emmorey et al. (2012), we do not expect adding English to ASL to incur an overall cost, because naming latencies did not differ when ASL was produced alone compared to when English was added to ASL in a code-blend. As noted, this finding indicates that simultaneously retrieving two lexical items is not costly – at least not for the non-dominant language. However, it is not clear whether there is a switch-cost associated with dual lexical retrieval. That is, it could be costly to switch from naming in a single language to naming in two languages (i.e., switching from single to dual lexical retrieval).

Emmorey et al. (2012) found that code-blending delays the onset of speech such that RTs (measured by voice onset) are considerably longer (~450 ms) for English produced in a code-blend than for English produced alone. The voice onset delay occurs because bimodal bilinguals prefer to synchronize the onsets of words and signs, i.e., delaying speech until the hand reaches the target location of the sign. In fact, the RT difference between English produced alone and in a code-blend equals the transition time for moving the hand from the response key to sign onset (Emmorey et al., 2012). Thus, the long RTs for English produced in a code-blend are not due to dual-lexical retrieval, but are a result of synchronization across output channels (similar synchronization is observed for co-speech gestures; McNeill, 1992). In the present study, however, we can compare switch vs. stay trials for English produced within a code-blend to examine whether there is a cost for switching from single to dual lexical retrieval for English, i.e., whether there is a switch-cost for adding ASL to English (see Figure 1A). Nonetheless, such a comparison must be interpreted with caution because English RTs in a code-blend are inflated by ASL sign transition times.

As illustrated in Figure 1, bilinguals either switched between speaking and code-blending (English default condition) or they switched between signing and code-blending (ASL default condition). Although examples like (1) above in which bilinguals switch between speaking and code-blending are common, examples in which bilinguals switch between signing and code-blending appear to be rare. For example, no studies (that we are aware of) report code-blend examples in which the sign language is the default language of the utterance, and a single spoken word is produced, such as the invented example in (2) given by Emmorey et al. (2008; p. 51). In general, if ASL is the default language, then code-blending tends to be continuous, as in (3) from Bishop (2010; p. 226).

(2) * think
* THEN LATER CAT THINK WHAT-DO.
(3) That hearing way
THAT HEARING WAY.

Emmorey et al. (2008) suggested that bimodal bilinguals do not produce single words while signing as in (2) because the spoken language is almost always the dominant language in adulthood for hearing bimodal bilinguals (see Emmorey, Giezen, & Gollan, 2016, for discussion). Thus, while signing (i.e., ASL is the default language providing the syntactic structure), English must be inhibited. Continuous code-blending as in (3) can occur because speech has been completely released from inhibition. Single-sign code-blends as in (1) can occur because the non-dominant sign language is not strongly inhibited. Thus, we predict that overall code-blend latencies for both languages will be longer when ASL is the default language because we hypothesize that English must be inhibited in this condition and that ASL is only weakly inhibited in the English default condition. Specifically, in the ASL default condition, inhibition of English should result in longer ASL RTs in a code-blend (compared to ASL RTs in the English default condition) because retrieval of the ASL sign in a code-blend will benefit less when English word retrieval is inhibited. That is, the extent to which the English translation can facilitate retrieval of the ASL sign within a code-blend (Emmorey et al., 2012) may be reduced when English must be inhibited on preceding trials (ASL default condition) compared to when it is not inhibited (English default condition). In addition, code-blend naming is not initiated until both lexical items have been retrieved (see Emmorey et al., 2012), and inhibition of English may delay retrieval of English words in the ASL default condition. Longer RTs for ASL in a code-blend in the ASL default condition (compared to the English default condition) would provide a new type of evidence for asymmetrical language inhibition that does not stem from magnitude differences in switch costs for the dominant versus the non-dominant language.

Recently, Kaufman and Philipp (2017) examined switching between code-blending and speaking or signing with German native speakers who were new learners of DGS. Kaufman and Philipp reported significant switch-costs for both German and DGS when switching out of a code-blend, which indicates a cost to turn off the other language. However, it is possible that these switch-costs arose in part because the bimodal bilinguals were novice DGS signers since switch-costs may be larger in unbalanced bilinguals (e.g., Costa & Santesteban, 2004; Costa, Santesteban, & Ivanova, 2006). In the present study, we included highly proficient ASL-English bilinguals in order to examine whether proficiency impacts the cost to inhibit or turn off a language. With respect to switching into a code-blend, Kaufman and Philipp (2017) did not distinguish between turning on a language and adding a language (see Figure 1), collapsing these trial types. In addition, Kaufman and Philipp did not examine the effect of default language, i.e., which language preceded and followed a code-blend, collapsing across these trial types as well. Examining the role of the default language is particularly important because switching back into the default language can be virtually cost-free (Gollan & Goldrick, 2018), and selection of a default language is likely ubiquitous in spontaneous communication between bilinguals, whether unimodal or bimodal (Myers-Scotton, 1997). Thus, although Kaufman and Philipp found that the cost of switching out of a code-blend (turning off a language) was greater than switching into a code-blend (turning on and adding a language), it is unclear which processes led to this pattern, whether this pattern is affected by a default language, and whether the findings are limited to novice bimodal bilinguals.

In sum, we examined the performance of highly proficient bimodal bilinguals when they switched into and out of a code-blend in order to tease apart whether language switch-costs arise from selecting (activating) a target language for production, disengaging (inhibiting) a language, or both. In addition, we examined whether there are costs associated with switching from single lexical retrieval to dual lexical retrieval (i.e., whether there is a cost to add a language) – a switching phenomenon that is unique to bimodal bilinguals. Finally, we examined whether the default language alters the pattern of switch costs for bimodal language switching. Our goal was to tease apart the different processes involved in bimodal bilingual language switching, i.e., selecting a language, inhibiting a language, and retrieving two lexical items, and in so doing provide unique evidence regarding the cognitive control mechanisms that underlie language switching in all bilinguals.

Method

Participants.

Forty-three highly proficient ASL-English bilinguals participated in the study (25 females; mean age = 32 years, SD = 8 years). Twenty were early bilinguals who were exposed to ASL during infancy from their deaf signing families. Twenty-three were late bilinguals (mean age of ASL acquisition = 16; range = 6–32 years). Table 1 provides participant characteristics obtained from a language history and background questionnaire. All participants used ASL on a daily basis, and 29 worked as professional interpreters. All participants reported normal hearing, normal or corrected vision, and no history of neurological disorders. Informed consent was obtained from all participants in accordance with the Institutional Review Board at San Diego State University.

Table 1.

Means for participant characteristics. Standard deviations in parentheses.

                                          Age (years)   Age of ASL exposure   ASL self-rating^a   English self-rating^a
Early ASL-English bilinguals (N = 20)     29 (5.3)      birth                 6.5 (0.7)           6.5 (0.8)
Late ASL-English bilinguals (N = 23)      32 (9.8)      16 (6.2)              5.7 (0.7)           7.0 (0)

^a Based on a scale of 1–7 (1 = not fluent; 7 = very fluent)

Materials.

Participants named 200 line drawings of objects taken from the UCSD Center for Research on Language International Picture Naming Project (Bates et al., 2003; Székely et al., 2003). For English, the pictures all had good name agreement based on Bates et al. (2003): mean percentage of target response = 97.4% (SD = 6.5%). For ASL, the pictures were judged by two native deaf signers to be named with lexical signs (English translation equivalents), rather than by fingerspelling, compound signs, or phrasal descriptions, and these signs were also considered unlikely to exhibit a high degree of regional variation. The mean ln-transformed CELEX frequency of the English target words was 3.1 (SD = 1.5). The mean frequency rating for the target ASL signs was 3.7 (SD = 1.2) based on a scale of 1 (very infrequent) to 7 (very frequent) (from ASL-LEX; Caselli, Sevcikova Sehyr, Cohen-Goldberg, & Emmorey, 2017).

Procedure.

Pictures were presented using PsyScope Build 46 (Cohen, MacWhinney, Flatt, & Provost, 1993) on a Macintosh PowerBook G4 computer with a 15-inch screen. English naming times were recorded using a microphone connected to a PsyScope response box. ASL naming times were recorded using a pressure release key (triggered by lifting the hand) that was also connected to the PsyScope response box (see Emmorey et al. (2012) for a discussion of the use of the key-release technique to measure naming times in a sign language). Within a code-blend, separate naming times were recorded simultaneously for English and ASL using the microphone and key release mechanisms. Participants initiated each trial by pressing the space bar. Each trial began with a 1000-ms presentation of a central fixation point (+) that was immediately replaced by simultaneous presentation of the picture and a naming cue. The picture and cue disappeared when either the voice-key or the release-key triggered. All experimental sessions were videotaped for later analysis, and participants were asked to respond as quickly and as accurately as possible.

Each picture had a cue to the left indicating whether the participant should say the name of the picture (cue = the printed word “say”), sign the name (cue = an image of SIGN in ASL), or say and sign the name simultaneously (cue = “say” and SIGN aligned vertically); this cue was not present prior to stimulus onset. As illustrated in Figure 1, participants alternated between naming two pictures in either English alone or in ASL alone and naming pictures with an ASL-English code-blend. The default language (English or ASL) was blocked and counterbalanced across participants. Participants were informed before each block whether the default language was ASL or English. Each block contained 20 pictures to be named in one language alone (either English or ASL) and 20 pictures to be named with a code-blend. Six practice trials preceded each block (not seen in the experimental blocks). Participants completed each block in counterbalanced order with two other switching conditions (reported elsewhere to simplify exposition of the present study).1 No participant saw the same picture twice.

Results

English responses in which the participant produced a vocal hesitation (e.g., “um”) or in which the voice key was not initially triggered were excluded from the reaction time (RT) analysis, but were included in the baseline of error analyses. ASL responses in which the participant paused or produced a manual hesitation gesture (e.g., UM in ASL) after lifting their hand from the response key were also excluded from the RT analysis, but were included in the baseline of error analyses. For code-blending, if either the ASL or the English response was preceded by a hesitation (“um,” UM, or a manual pause) or the voice key was not initially triggered, neither the ASL nor the English response was entered into the RT analysis. All incorrect responses were also excluded from the RT analyses. As a result, 14.67% of the RT data were removed for the early bilinguals, and 16.31% of the RT data were removed for the late bilinguals. RTs that were 2.5 standard deviations above or below the mean for each participant for each language were also excluded from the RT analyses. This procedure excluded another 1.96% of the data for the early bilinguals and 1.38% for the late bilinguals. In total, 16.63% of the RT data were removed for the early bilinguals, and 17.69% were removed for the late bilinguals.
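The exclusion and trimming steps above can be summarized in a few lines of R. The sketch below is illustrative only, assuming a trial-level data frame rt_data with hypothetical columns subject, language, rt, error, and hesitation; it is not the authors' analysis code.

# Minimal sketch of the exclusion and 2.5-SD trimming procedure (assumed column names).
library(dplyr)

trimmed <- rt_data %>%
  # drop incorrect responses and trials with vocal/manual hesitations or voice-key failures
  filter(error == 0, hesitation == 0) %>%
  # trim RTs more than 2.5 SD from each participant's mean, separately for each language
  group_by(subject, language) %>%
  filter(abs(rt - mean(rt)) <= 2.5 * sd(rt)) %>%
  ungroup()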

We first investigated whether there was a processing cost to turn on a language or to add a language by examining RTs for stay and switch trials when ASL and English were produced simultaneously in a code-blend; this analysis examined the cost of switching into a code-blend. We next investigated whether there was a cost to turn off a language by examining the cost to ASL or to English produced alone when switching out of a code-blend.

For all analyses, stay trials are defined as trials that are preceded by the same production type (e.g., a code-blend preceded by a code-blend), while switch trials are preceded by a different production type (e.g., a code-blend preceded by English or ASL alone). The cost to add a language is measured as the difference between switch and stay trials for the default language within a code-blend and represents a measure of the cost to switch from single to dual language production (i.e., for this analysis, we subtract switch from stay trials for the default language within a code-blend when the switch trial is preceded by a trial in which only the default language was produced; English in Figure 1A and ASL in Figure 1B). The cost to turn on a language is measured by the difference between switch and stay trials for the non-default language within a code-blend (i.e., ASL in Figure 1A and English in Figure 1B; that is, we compare trials in the non-default language in a code-blend when the switch trial is preceded by a response in the default language only). The cost to turn off a language is measured as the difference between switch and stay trials for each language alone following a code-blend for either English (Figure 1A) or ASL (Figure 1B).
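To make these definitions concrete, the following R sketch codes each trial as a stay or switch trial from the production type of the preceding trial and computes each cost as the switch minus stay RT difference (positive values indicating a cost). The data frame and column names (trial, production, language) continue the hypothetical example above and are not the authors' code.

library(dplyr)
library(tidyr)

costs <- trimmed %>%
  group_by(subject) %>%
  arrange(trial, .by_group = TRUE) %>%
  # a trial is a switch trial if its production type differs from the preceding trial
  mutate(trial_type = if_else(production == lag(production), "stay", "switch")) %>%
  filter(!is.na(trial_type)) %>%
  # mean RT per participant for each language, production type, and trial type
  group_by(subject, language, production, trial_type) %>%
  summarise(mean_rt = mean(rt), .groups = "drop") %>%
  pivot_wider(names_from = trial_type, values_from = mean_rt) %>%
  mutate(cost = switch - stay)   # switch cost in ms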

Due to the possible confounding of manual vs. vocal articulation, we did not directly compare RTs for ASL and English. All analyses were carried out in R, an open source programming environment for statistical computing (R Core Team, 2013) with the lme4 package (Bates, Maechler, Bolker, & Walker, 2013) for linear mixed-effects modeling (LMM) and generalized linear mixed-effects modeling (GLMM), both with the “bobyqa” optimizer.

Turning on a language and adding a language: Switching into a code-blend (ASL and English produced simultaneously)

These analyses examined naming latencies and error rates for each language produced within a code-blend after switching from either signing (ASL is the default language) or speaking (English is the default language). The RT results are illustrated in Figure 2, and mean error rates for each default condition are given in Table 2. Contrast-coded fixed effects included default language (ASL vs. English), trial type (switch vs. stay), group (early vs. late bilinguals), and the interaction of these factors. Subject and item/picture were entered as two random intercepts with related random slopes (i.e., default language, trial type, and their interaction for item intercept; default language, trial type, group, and their interactions for subject intercept). If the full model failed to converge, we simplified the model first by removing random correlations and then the random slopes that accounted for the least variance. In the current case, the converged model did not include the correlation between random effects. The same fixed effects and random intercepts were included in the logistic regression for error rate analyses (all the random slopes were removed due to the failure to converge, and the significance of each fixed effect was assessed via likelihood ratio tests; Barr, Levy, Scheepers, & Tily, 2013).
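A sketch of this model specification with lme4 is given below. The data frame and variable names (cb_rt, cb_err, default_lang, trial_type, group) are illustrative, and the predictors are assumed to be contrast-coded numeric variables; the double-bar syntax drops the random-effect correlations, mirroring the simplification described above. This is not the authors' actual code.

library(lme4)

# RT model: random intercepts for subjects and items with uncorrelated random slopes
m_rt <- lmer(
  rt ~ default_lang * trial_type * group +
    (1 + default_lang * trial_type * group || subject) +
    (1 + default_lang * trial_type || item),
  data = cb_rt,
  control = lmerControl(optimizer = "bobyqa"))

# Error model: logistic GLMM with random intercepts only (slopes removed for convergence)
m_err <- glmer(
  error ~ default_lang * trial_type * group + (1 | subject) + (1 | item),
  data = cb_err, family = binomial,
  control = glmerControl(optimizer = "bobyqa"))

# A fixed effect can be assessed with a likelihood-ratio test against a reduced model,
# e.g., a test of the trial-type term (with centered contrast codes):
anova(update(m_err, . ~ . - trial_type), m_err)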

Figure 2.

Mean RT (ms) for each language when switching into a code-blend either from signing (ASL is the default language) or from speaking (English is the default language). No significant switch costs were observed, indicating there was no cost to turn on a language or to add a language (see Figure 1). Adding a language compares two default language responses within a code-blend, with one following another code-blend response (stay) and the other following a default language alone response (switch). Turning on a language compares two non-default language responses within a code-blend, with one following another code-blend response (stay) and the other following a default-language alone response (switch).

Table 2.

Error rates (percent) for each language produced in a code-blend for each default language condition and each bilingual group. Standard error is given in parentheses

                       ASL in a Code-Blend              English in a Code-Blend
                       Stay trials    Switch trials     Stay trials    Switch trials
Early bilinguals
  ASL default          11.5 (2.3)     3.5 (1.3)         9.0 (2.0)      5.5 (1.6)
  English default      7.5 (1.9)      11.5 (2.2)        3.0 (1.2)      4.0 (1.4)
Late bilinguals
  ASL default          7.4 (1.7)      6.5 (1.6)         4.3 (1.3)      4.3 (1.7)
  English default      6.5 (1.6)      7.4 (1.7)         3.5 (1.2)      3.9 (1.3)

ASL in a code-blend.

As can be seen in Figure 2 (left panel), ASL RTs in a code-blend were overall slower when ASL was the default language than when English was the default language (mean 1357 ms vs. 1246 ms, β = −115.38; SE β = 44.33; χ2 (1) = 6.40, p = .010). None of the other effects were significant (ps ≥ .31). Thus, we observed no cost for either bilingual group to turn ASL on when English was the default language (see Figure 1A), and no cost to add English when ASL was the default language (see Figure 1B). Given that these are null results, we additionally calculated the Bayes Factor for switch costs in each condition, which favored the null hypothesis (i.e., no switch costs). For turning ASL on (English = default language), the Bayes Factor value was 1.72, and for adding English (ASL = default language) the Bayes Factor was 6.01.2
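Bayes factors of this kind could be computed along the following lines with the BayesFactor package, applied to per-participant condition means. This is a sketch under assumed data and column names (a trial-level data frame cb_trials with trial_type and default_lang columns); the paper does not specify the exact implementation or prior used.

library(dplyr)
library(tidyr)
library(BayesFactor)

# Per-participant mean RTs for ASL produced in a code-blend, English default condition
bf_data <- cb_trials %>%
  filter(language == "ASL", production == "code-blend", default_lang == "English") %>%
  group_by(subject, trial_type) %>%
  summarise(mean_rt = mean(rt), .groups = "drop") %>%
  pivot_wider(names_from = trial_type, values_from = mean_rt)

# Paired-samples Bayes factor for the switch cost; ttestBF returns BF10 (alternative over
# null), so its reciprocal expresses the evidence in favor of the null (no switch cost).
bf <- ttestBF(x = bf_data$switch, y = bf_data$stay, paired = TRUE)
1 / extractBF(bf)$bf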

As even the simplest model for error rates with all the fixed effects (i.e., group, default language, trial type, and their interactions) failed to converge, we analyzed errors for each group separately. For both groups, ASL error rates were unaffected by default language or trial type (ps ≥ .21), except that early bilinguals showed an interaction between default language and trial type (β = −1.79; SE β = .86; χ2 (1) = 4.13, p = .042). When the default language was English, error rates were not significantly different between switch and stay trials (mean 11.5% vs. 7.5%, β = − .52; SE β = .48; χ2 (1) = 1.16, p = .281); when the default language was ASL, errors were marginally less frequent on switch than stay trials (mean 3.5% vs. 11.5%; β = 1.25; SE β = .66; χ2 (1) = 3.51, p = .061).

English in a code-blend.

As found for ASL, English RTs were slower when ASL was the default language than when English was the default language (mean 1942 ms vs. 1714 ms, β = −245.38; SE β = 8.09; χ2 (1) = 8.07, p = .0045) (Figure 2; right panel). None of the other effects showed significant results (ps ≥ .11). Thus, there was also no cost to English to add ASL in a code-blend (Figure 1A), or to turn on English when ASL was the default language (Figure 1B). Again, since these are null results, we calculated the Bayes Factor for switch costs in each condition, which favored the null hypothesis (i.e., no switch costs) as found for the ASL analysis. For turning English on (ASL = default language), the Bayes Factor value was 3.48, and for adding ASL (English = default language) the Bayes Factor was 3.00.

As is apparent in Figure 2, RTs for English in a code-blend were much longer than for ASL in a code-blend (and compared to English produced alone – see Figure 3 below). As noted in the introduction, this pattern arises because bimodal bilinguals coordinate the onsets of words and signs within a code-blend, which delays the onset of speech. That is, bilinguals do not time their production of the English word with the manual key-release but with the physical onset of the ASL sign (i.e., when the hand reaches the target location on the body).

Figure 3.

Mean RT (ms) for each language when switching out of a code-blend. Asterisks indicate a significant difference between stay and switch trials in each condition. There is a cost to turn off a language after a code-blend for both groups. Turning off a non-default language compares switch and stay trials in the default language produced alone, where switch trials follow a trial in which the non-default language was produced in a code-blend.

English error rates were not modulated by default language or trial type for either group (all ps ≥ .48). Again, we conducted the error analyses for each group separately as even the simplest model with all the fixed effects (i.e., default language, trial type, group, and their interactions) failed to converge.

Turning off a language: Switching out of a code-blend (ASL and English produced alone)

These analyses examined naming latencies and error rates for each language produced alone when switching out of a code-blend. RT results are illustrated in Figure 3, and the mean error rates are given in Table 3. Data analysis procedures were similar to those in the section above. For each language, contrast-coded fixed effects included group (early vs. late bilinguals), trial type (switch vs. stay), and the interaction of these two factors. For clarity, default language is shown in Figure 3, but the default language was always consistent with the target language in these analyses, so it was not included in the models. Subject and item/picture were entered as two random intercepts with related random slopes (i.e., trial type for subject intercept; trial type, group, and their interaction for item intercept), and the full model converged. The same fixed effects and random intercepts were included in the logistic regression for error rate analyses (all the random slopes were removed due to the failure to converge). The significance of each fixed effect was assessed via likelihood ratio tests (Barr, Levy, Scheepers, & Tily, 2013).
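A sketch of the corresponding lme4 model for one language produced alone (ASL shown) is given below, with the random-slope structure described above; the data frame and variable names are again illustrative rather than the authors' code.

library(lme4)

m_asl_alone <- lmer(
  rt ~ trial_type * group +
    (1 + trial_type | subject) +
    (1 + trial_type * group | item),
  data = asl_alone,
  control = lmerControl(optimizer = "bobyqa"))

# Likelihood-ratio test for the switch cost (anova refits the models with ML for the comparison)
anova(update(m_asl_alone, . ~ . - trial_type), m_asl_alone)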

Table 3.

Mean error rates (percent) for each language produced alone for each default language condition and each bilingual group. Standard error is given in parentheses

                      ASL Alone (ASL = Default)          English Alone (English = Default)
                      Stay trials    Switch trials       Stay trials    Switch trials
Early bilinguals      6.5 (1.7)      6.5 (1.7)           4.5 (1.5)      3.5 (1.3)
Late bilinguals       9.2 (1.9)      8.3 (1.8)           3.5 (1.2)      5.2 (1.5)

ASL alone.

In RTs, there was a main effect of trial type, with slower RTs for switch than stay trials (mean 1378 vs. 1224 ms; β = 162.40; SE β = 43.04; χ2 (1) = 13.22, p < .001), while neither group nor the interaction between group and trial type showed significant results (ps ≥ .53). ASL error rates also were unaffected by group or trial type (for both the main effects and the interaction between these two factors, ps ≥ .23).

English alone.

Similar to ASL alone, in RTs, there was a main effect of trial type, with slower RTs for switch than stay trials (mean 1236 vs. 1062 ms; β = 175.44; SE β = 37.20; χ2 (1) = 19.59, p < .001), and neither group nor the interaction between group and trial type showed significant results (ps ≥ .44). English error rates were also unaffected by group or trial type (for both the main effects and the interaction, ps ≥ .26).3

Discussion

Because their two languages use the same articulators for production (the vocal tract), unimodal bilinguals must simultaneously disengage (inhibit) one language and activate (release from inhibition) their other language when they switch between languages. The unique ability of bimodal bilinguals to switch into and out of a code-blend (the simultaneous production of a word and a sign) allows us to disentangle the possible switch-costs associated with these two component processes. Our results unequivocally demonstrate that inhibiting a language (switching out of a code-blend) incurs a cost, whereas releasing a language from inhibition (switching into a code-blend) is cost-free. These behavioral results are nicely consistent with a recent magnetoencephalography (MEG) study by Blanco-Elorrieta, Emmorey, and Pylkkänen (2018). This study investigated whether cognitive control regions were engaged when ASL-English bilinguals switched into or out of a code-blend, but response time data were not collected due to technical difficulties. The MEG results revealed that turning off a language (switching out of a code-blend) resulted in increased neural activity in anterior cingulate cortex (ACC) and dorsolateral prefrontal cortex (DLPFC), while turning on a language (switching into a code-blend) did not elicit increased activation in cognitive control regions. Our behavioral data complement these MEG results and also replicate the behavioral cost for switching out of a code-blend observed by Kaufman and Philipp (2017) for novice bimodal bilinguals. Together, these findings demonstrate the cognitive cost of inhibiting a language, regardless of language modality, bilingual proficiency, or block-wide naming characteristics (i.e., whether a single language predominates during language mixing).

We found no difference in the pattern of results between early and late bimodal bilinguals. Previous research with unimodal bilinguals has indicated that the pattern of switch costs varies with proficiency (e.g., Costa & Santesteban, 2004; Costa et al., 2006; Philipp, Gade, & Koch, 2007). However, the late bilinguals in our study were highly proficient in ASL (most worked as professional interpreters). They also had a relatively early mean age of ASL exposure for bimodal bilinguals (16 years; SD = 6.2); several participants were exposed to ASL from a deaf relative or friend growing up. Unfortunately, we were not able to acquire objective ASL proficiency measures at the time this study was conducted, but we suspect that the early and late bilinguals had similar ASL proficiency levels. It is unclear whether the cost to turn off a language (particularly the dominant spoken language) might be larger for less proficient signers. The novice bimodal bilinguals studied by Kaufman and Philipp (2017) repeatedly named the same 10 pictures across the experiment, and thus the size of the switch-cost for novice bimodal bilinguals cannot easily be compared with that for the proficient bimodal bilinguals studied here. Nonetheless, our results are consistent with previous studies of unimodal bilinguals that report language switching costs for highly proficient bilinguals, including when the switch-costs are not asymmetric (e.g., Costa & Santesteban, 2004).

In their review of the language switching literature for unimodal bilinguals, Declerck and Philipp (2015) observed that both language activation and inhibition processes have been argued to account for switch costs. For example, to account for asymmetric switch costs (larger for L1 than L2), an activation account entails that producing L2 requires greater activation than L1, and thus on switch trials L2 is a strong competitor for L1 (see Philipp et al., 2007). An inhibition account entails that when producing L2, the L1 must be more strongly inhibited, and thus the persisting stronger inhibition must be overcome on L1 switch trials (Meuter & Allport, 1999; Green, 1998; Philipp & Koch, 2009). For unimodal bilinguals, activation and inhibition accounts are difficult to disentangle. However, assuming the same control mechanisms for unimodal and bimodal bilinguals, the pattern of results that we found for bimodal bilinguals indicates that the nearly ubiquitous switching costs observed for unimodal bilinguals are more likely to arise from the difficulty of applying inhibition to the non-target language than from activating the target language. Further, there are good reasons to believe that unimodal and bimodal bilinguals engage the same cognitive mechanisms when controlling their two languages. First, the same neural cognitive control regions (i.e., the ACC and DLPFC) are engaged during both unimodal and bimodal language switching (e.g., Green & Abutalebi, 2013; Blanco-Elorrieta et al., 2018). Second, Giezen, Blumenfeld, Shook, Marian, & Emmorey (2015) found a correlation between cognitive control ability and the extent of cross-linguistic activation, as reported for unimodal bilinguals (e.g., Blumenfeld & Marian, 2013). Specifically, smaller Stroop effects (indexing more efficient inhibition) were associated with reduced cross-language activation (indexed by reduced looks to a cross-linguistic competitor in the visual world paradigm). Thus, bimodal and unimodal bilinguals appear to rely on similar cognitive control mechanisms, indicating that our results should generalize across these populations.

Nonetheless, there are some reasons to be cautious in concluding that turning on a language is cost-free for unimodal bilinguals. Although language control in bimodal and unimodal bilinguals may involve a common neurocognitive system, how this system operates could be influenced by the nature of the competition between languages and the particulars of their use in interactive contexts (Green & Abutalebi, 2013). As noted, there is no articulatory competition between languages for bimodal bilinguals, and thus both languages can be readied for use in a bilingual context that includes code-blending. This situation could minimize engagement (reactivation) costs for a previous non-target language for bimodal bilinguals compared to unimodal bilinguals. It is also possible that unimodal bilinguals engage additional neural structures (e.g., the caudate; Crinion et al., 2006; Abutalebi & Green, 2007) during language switching which could play a role in language reactivation that is not observed for bimodal bilinguals. Further research is needed before we can conclude with confidence that there are no costs associated with language activation for unimodal bilinguals.

In addition, we disentangled the possible costs of turning on versus adding a language when switching into a code-blend (see the labeled lines in Figure 2). Neither process incurred a significant switch-cost for either ASL or English. The finding that there is no cost to turn on ASL or English is consistent with a large literature indicating that for bilinguals both languages are always active (‘on’) and cannot be completely inhibited (e.g., Kroll, Bobb, Misra, & Guo, 2008). This result also indicates that switch-costs do not arise from actively engaging a new language. The finding that there was no cost for adding a language indicates that switching from single to dual lexical retrieval is also cost-free. However, there is some evidence that this may only be true for the non-dominant language (ASL). Specifically, the cost of adding ASL to English is measured by English RTs within a code-blend (Figure 2; right panel), but as noted earlier these RTs are inflated because bimodal bilinguals delay speech to synchronize word and sign onsets. Thus, the lack of a switch-cost in this condition for English must be treated with caution. In addition, Blanco-Elorrieta et al. (2018) compared neural activity when bimodal bilinguals named pictures in blocks with ASL alone, English alone, or with ASL-English code-blends. Consistent with our previous behavioral results (Emmorey et al., 2012), neural activity for retrieving ASL alone was greater in the left temporal lobe than when producing code-blends, suggesting that retrieval of ASL (the non-dominant language) may have benefitted from the simultaneous retrieval of English in a code-blend. In contrast, neural activity for English (the dominant language) was reduced when English was produced alone compared to when both ASL and English had to be retrieved in a code-blend. This pattern of results suggests that there may be a neural cost for dual lexical retrieval (adding a language) for the dominant language, but not for the non-dominant language.4

As we predicted, code-blend response times for both ASL and English were longer when ASL was the default language compared to when English was the default language (Figure 2). As noted in the introduction, frequent switching between signing alone and code-blending is unattested, whereas switching between speaking alone, code-blending, and speaking alone again is relatively common (Emmorey et al., 2008). Emmorey et al. (2008) argued that this pattern is due to the need to inhibit the dominant language, English, when signing. In contrast, ASL is only weakly inhibited when speaking – as attested by the frequent production of ASL signs and grammatical facial expressions when speaking with English monolinguals (Pyers & Emmorey, 2008; Casey & Emmorey, 2009; Weisberg, Casey, Sevcikova Sehyr, & Emmorey, in press). This asymmetry is also consistent with the recent study by Dias, Villameriel, Giezen, Costello, and Carreiras (2017) who reported asymmetric switch costs for bimodal bilinguals when switching between Spanish (the dominant language) and LSE (Lengua de Signos Española; the non-dominant language), with larger switch-costs for switching from LSE to Spanish than from Spanish to LSE.

The effect of default language on code-blend response times also provides a unique way to examine effects of global language control (e.g., Christoffels, Firk, & Schiller, 2007). Specifically, the non-default language may be globally inhibited across the naming block, in contrast to the default language which is active on every trial. Parallel to what has been found for unimodal bilinguals, effects of such global inhibition were greater when the dominant language (English) had to be inhibited. Further, even when ASL was active on every trial (ASL = default), RTs for ASL in a code-blend were nonetheless longer than when ASL was suppressed across the naming block (i.e., English = default). We interpret this pattern as reflecting the effect of global inhibition of English on dual lexical retrieval in the code-blend. That is, slow retrieval of English within a code-blend causes a response delay, assuming that code-blend naming does not occur until both lexical items have been retrieved (Emmorey et al., 2012). Further, when English is active on every trial, ASL retrieval within a code-blend can benefit from translation priming, in contrast to when English must be globally suppressed. Importantly, language switching studies typically include equal numbers of responses in both languages (and 33% or 50% switch rates), thereby making it impossible for bilinguals to specify a default language. However, language switching generally does not occur in a fixed ratio, and the quantity of each language present in a discourse or experimental paradigm can impact switch-costs (e.g., Olson, 2016). Further investigation of this variable could reveal greater parallels between unimodal and bimodal bilinguals. In particular, the effect of the default language on code-blending may represent an instance of inhibitory control at the whole-language level (Branzi, Martin, Abutalebi, & Costa, 2014; Branzi, Della Rosa, Canini, Costa, & Abutalebi, 2016; Misra, Guo, Bobb, & Kroll, 2012; Van Assche, Duyck, & Gollan, 2013).

Finally, the present results support the hypothesis that bimodal bilinguals need to engage control mechanisms when mixing their languages (Blanco-Elorrieta et al., 2018) and are consistent with previous findings showing a relationship between cognitive control ability and degree of language co-activation in bimodal bilinguals (Giezen et al., 2015). However, bimodal bilinguals, in contrast to unimodal bilinguals, have not shown an advantage over monolinguals on cognitive control tasks (Emmorey, Luk, Pyers, & Bialystok, 2008; Giezen et al., 2015), and neuroanatomical changes observed for unimodal bilinguals have not been observed for bimodal bilinguals (Olulade, Jamal, Koo, Perfetti, LaSasso, & Eden, 2015), although Li et al. (2017) reported smaller age-related grey matter reductions for both types of bilinguals compared to monolinguals. How do we reconcile the apparent lack of a bilingual advantage with evidence that bimodal bilinguals utilize cognitive control mechanisms during language mixing?

One possible explanation is that the degree of language control is simply less for bimodal than unimodal bilinguals, and thus less (or no) behavioral advantage over monolinguals is observed in cognitive control tasks. For example, in single-language (monolingual) contexts, bimodal bilinguals can produce ASL signs as a form of co-speech gesture when speaking English without disrupting communication (Casey & Emmorey, 2009; Weisberg et al., in press), and they can produce English mouthings when signing ASL with deaf signers (cf. Bank, Crasborn, & Van Hout, 2016). Unimodal bilinguals do not have such flexibility in single-language contexts. In addition, our results indicate that inhibitory control is only needed to switch out of a code-blend, whereas inhibitory control is necessary when unimodal bilinguals switch both into and out of each language during code-switching. As suggested by the adaptive control hypothesis (Green & Abutalebi, 2013), the demands of different interactional contexts may affect the extent to which cognitive control mechanisms are engaged.

Another possibility is that the unimodal bilingual advantage is linked more to differences in attention or conflict monitoring, rather than to inhibitory control (e.g., Bialystok, 2017; Teubner-Rhodes, Bolger, & Novick, 2017). For example, fewer attentional resources may be needed to discriminate between languages during development for bimodal bilinguals because one language is auditory-vocal and the other is visual-manual. The perceptual and motor cues to language membership are very strong and clear. Similarly, only one language is written (no widely-accepted writing system exists for sign languages), and thus bimodal bilinguals do not experience orthographic competition between languages and are only literate in one of their languages (unlike most unimodal bilinguals that have been studied). Further research is needed to determine the possible roles of attention and conflict monitoring when managing languages that are instantiated in distinct modalities.

In conclusion, bimodal bilinguals permit more fine-tuned investigation of the nature of language control processes – teasing apart its different component parts. We found clear evidence that inhibiting a language was costly, while activating a language was not. These results provide behavioral data that complement MEG data which indicated that turning a language off engages cognitive control regions within the brain, but turning a language on does not (Blanco-Elorrieta et al., 2018). These findings support models of bilingual language control that assume a fundamental role for inhibitory control (e.g., Declerck, Koch, & Philipp, 2015; Grainger & Dijkstra, 1992; Green, 1998), run counter to models that do not rely on inhibition (e.g., Costa, Miozzo, & Caramazza, 1999; Finkbeiner, Almeida, Janssen, & Caramazza, 2006; Roelofs, 1998; Verhoef et al., 2009), and further suggest differences in the extent to which applying versus releasing inhibition is effortful. In addition, we found that switching from single to dual lexical retrieval (adding a language) is not effortful – at least not for the non-dominant language. This result is somewhat counter-intuitive since a dual task is expected to be more difficult than a single task, particularly if the two tasks are similar to each other. The distinct linguistic articulators for sign and speech may reduce (or eliminate) this type of switch-cost (see also Kaufmann et al., 2017). Finally, our manipulation of the default language (i.e., which language remained active (or not) throughout a block) indicated asymmetric global inhibition effects that primarily affect the dominant language, paralleling effects found for unimodal bilinguals. Although code-blending – the ability to simultaneously produce two languages – is unique to bimodal bilinguals, they appear to rely on the same language control mechanisms as unimodal bilinguals (including the same neural regions; Blanco-Elorrieta et al., 2018). The study of this unique language mixing phenomenon has allowed us to discover that the cost associated with language-switching lies in disengaging from (inhibiting) the non-target language, rather than engaging (selecting) the target language. However, further work will be needed to determine whether engaging a target language is indeed cost-free for unimodal bilinguals, given the distinct control demands for bimodal compared to unimodal bilinguals.

Acknowledgements

This work was supported by the National Institutes of Health (R01 HD047736; DC011492). We would like to thank the following people for help with this research: Shannon Casey, Rachel Colvin Schupp, Ashley Engle, Jackilyn Escosio, Cindy Farnady, Marcel Giezen, Heather Larrabee Angeles, and Danielle Thompson. We would also like to thank the ASL-English bilinguals who participated in the study.

Footnotes

1

The other conditions were “pure switch” (switching between English and ASL) and a condition with no default language (English, ASL, and code-blend were all included in the same block). Four different lists were created and rotated across participants to create a counterbalanced order.

2

A Bayes Factor of less than 3 is not considered robust, but there is controversy about assigning labels to these values (Rouder et al., 2009).

3

We also considered whether the results might differ across the different counterbalanced lists; recall that the lists also included “pure switch” and “no default language” switching conditions. After adding list as an additional variable in the four models (i.e., ASL in a code-blend, English in a code-blend, ASL alone, English alone), all of our critical results still hold: turning off a language remained costly for both English and ASL (ps < .01), while turning on and adding a language did not show significant differences for either language (ps > .45). The only change was the emergence of a default language by list interaction for both ASL and English in a code-blend (ps < .001). The default language difference was no longer significant (for ASL in a code-blend, mean 1269 vs. 1261 ms when the default language was ASL vs. English; for English in a code-blend, mean 1734 vs. 1760 ms when the default language was ASL vs. English; ps > .83) if participants completed the other two switching tasks first. Note that for both languages RTs were overall slower when ASL was the default language than when English was the default language before list was taken into consideration. A possibility is that the nondominant ASL became more activated as a result of practice (i.e., ASL benefits more from practice than English), but caution is needed when explaining this interaction given the small number of participants in each list (≤ 13).

4

It is worth noting that there might be some environments in which language dominance might switch to ASL – at least temporarily, e.g., when hearing bimodal bilinguals interact exclusively with deaf signers. It is possible that the nature of bilingual language co-activation and control might temporarily adapt to such an ASL immersion context (cf. the adaptive control hypothesis of Green and Abutalebi, 2013; see also Kroll, Dussias, Bice, & Perrotti, 2015).

References

  1. Abutalebi J, & Green D (2007). Bilingual language production: The neurocognition of language representation and control. Journal of neurolinguistics, 20(3), 242–275. [Google Scholar]
  2. Bank R, Crasborn O, & Van Hout R (2018). Bimodal code-mixing: Dutch spoken language elements in NGT discourse. Bilingualism: Language and Cognition, 21(1), 104–120. [Google Scholar]
  3. Barr DJ, Levy R, Scheepers C, & Tily HJ (2013). Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of memory and language, 68(3), 255–278.) [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Bates E, D’Amico S, Jacobsen T, Szekely A, Andonova E, Devescovi A, et al. (2003). Timed picture naming in seven languages. Psychonomic Bulletin & Review, 10, 344–380. [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Bates D, Maechler M, Bolker B, & Walker S (2013). lme4: linear mixedeffects models using Eigen and S4. R package version 10–4. http://cran.r-project.org/web/packages/lme4/index. [Google Scholar]
6. Bishop M (2010). Happen can’t hear: An analysis of code-blends in hearing, native signers of American Sign Language. Sign Language Studies, 11, 205–240.
7. Blanco-Elorrieta E, Emmorey K, & Pylkkänen L (2018). Language switching decomposed through MEG and evidence from bimodal bilinguals. Proceedings of the National Academy of Sciences.
8. Blumenfeld HK, & Marian V (2013). Parallel language activation and cognitive control during spoken word recognition in bilinguals. Journal of Cognitive Psychology, 25(5), 547–567.
9. Bobb SC, & Wodniecka Z (2013). Language switching in picture naming: What asymmetric switch costs (do not) tell us about inhibition in bilingual speech planning. Journal of Cognitive Psychology, 25(5), 568–585.
10. Branzi FM, Martin CD, Abutalebi J, & Costa A (2014). The after-effects of bilingual language production. Neuropsychologia, 52, 102–116.
11. Branzi FM, Della Rosa PA, Canini M, Costa A, & Abutalebi J (2016). Language control in bilinguals: Monitoring and response selection. Cerebral Cortex, 26(6), 2367–2380.
12. Caselli N, Sevcikova Sehyr Z, Cohen-Goldberg A, & Emmorey K (2017). ASL-LEX: A lexical database of American Sign Language. Behavior Research Methods, 49, 784–801. doi: 10.3758/s13428-016-0742-0
13. Casey S, & Emmorey K (2009). Co-speech gesture in bimodal bilinguals. Language and Cognitive Processes, 24, 290–312.
14. Christoffels IK, Firk C, & Schiller NO (2007). Bilingual language control: An event-related brain potential study. Brain Research, 1147, 192–208.
15. Cohen JD, MacWhinney B, Flatt M, & Provost J (1993). PsyScope: A new graphic interactive environment for designing psychology experiments. Behavior Research Methods, Instruments, & Computers, 25, 257–271.
16. Costa A, & Caramazza A (1999). Is lexical selection in bilingual speech production language-specific? Further evidence from Spanish-English and English-Spanish bilinguals. Bilingualism: Language and Cognition, 2, 231–244.
17. Costa A, Miozzo M, & Caramazza A (1999). Lexical selection in bilinguals: Do words in the bilingual’s two lexicons compete for selection? Journal of Memory and Language, 41, 491–511.
18. Costa A, & Santesteban M (2004). Lexical access in bilingual speech production: Evidence from language switching in highly proficient bilinguals and L2 learners. Journal of Memory and Language, 50(4), 491–511.
19. Costa A, Santesteban M, & Ivanova I (2006). How do highly proficient bilinguals control their lexicalization process? Inhibitory and language-specific selection mechanisms are both functional. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32(5), 1057.
20. Crinion J, Turner R, Grogan A, Hanakawa T, Noppeney U, Devlin JT, … & Usui K (2006). Language control in the bilingual brain. Science, 312(5779), 1537–1540.
21. Declerck M, & Philipp AM (2015). A review of control processes and their locus in language switching. Psychonomic Bulletin & Review, 22(6), 1630–1645.
22. Declerck M, Koch I, & Philipp AM (2015). The minimum requirements of language control: Evidence from sequential predictability effects in language switching. Journal of Experimental Psychology: Learning, Memory, and Cognition, 41(2), 377.
23. Dias PA, Villameriel S, Giezen M, Costello B, & Carreiras M (2017). Language-switching across modalities: Evidence from bimodal bilinguals. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(11), 1828–1834.
24. Emmorey K, Borinstein HB, Thompson R, & Gollan TH (2008). Bimodal bilingualism. Bilingualism: Language and Cognition, 11, 43–61.
25. Emmorey K, Luk G, Pyers J, & Bialystok E (2008). The source of enhanced executive control in bilinguals: Evidence from bimodal bilinguals. Psychological Science, 19(12), 1201–1206.
26. Emmorey K, Petrich JAF, & Gollan TH (2012). Bilingual processing of ASL–English code-blends: The consequences of accessing two lexical representations simultaneously. Journal of Memory and Language, 67, 199–210.
27. Emmorey K, Giezen MR, & Gollan TH (2016). Psycholinguistic, cognitive, and neural implications of bimodal bilingualism. Bilingualism: Language and Cognition, 19(2), 223–242. doi: 10.1017/S1366728915000085
28. Finkbeiner M, Almeida J, Janssen N, & Caramazza A (2006). Lexical selection in bilingual speech production does not involve language suppression. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32(5), 1075.
29. Giezen MR, & Emmorey K (2016). Language co-activation and lexical selection in bimodal bilinguals: Evidence from picture–word interference. Bilingualism: Language and Cognition, 19(2), 264–276.
30. Giezen MR, Blumenfeld H, Shook A, Marian V, & Emmorey K (2015). Parallel language activation and inhibitory control in bimodal bilinguals. Cognition, 141, 9–25.
31. Gollan TH, & Acenas LAR (2004). What is a TOT? Cognate and translation effects on tip-of-the-tongue states in Spanish-English and Tagalog-English bilinguals. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30, 246–269.
32. Gollan TH, Montoya RI, Fennema-Notestine C, & Morris SK (2005). Bilingualism affects picture naming but not picture classification. Memory & Cognition, 33, 1220–1234.
33. Gollan TH, & Ferreira VS (2009). Should I stay or should I switch? A cost-benefit analysis of voluntary language switching in young and aging bilinguals. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35, 640–665.
34. Gollan TH, & Goldrick M (2018). A switch is not a switch: Syntactically-driven bilingual language control. Journal of Experimental Psychology: Learning, Memory, and Cognition.
35. Gollan TH, Schotter ER, Gomez J, Murillo M, & Rayner K (2014). Multiple levels of bilingual language control: Evidence from language intrusions in reading aloud. Psychological Science, 25(2), 585–595.
36. Grainger J, & Dijkstra T (1992). On the representation and use of language information in bilinguals. In Harris RJ (Ed.), Cognitive processing in bilinguals. Amsterdam, The Netherlands: North Holland.
37. Green DW (1998). Mental control of the bilingual lexico-semantic system. Bilingualism: Language and Cognition, 1, 67–81.
38. Green DW, & Abutalebi J (2013). Language control in bilinguals: The adaptive control hypothesis. Journal of Cognitive Psychology, 25(5), 515–530.
39. Ivanova I, Murillo M, Montoya RI, & Gollan TH (2016). Does bilingual language control decline in older age? Linguistic Approaches to Bilingualism, 6(1), 86–118.
40. Kaufmann E, & Philipp AM (2017). Language-switch costs and dual-response costs in bimodal bilingual language production. Bilingualism: Language and Cognition, 20(2), 418–434.
41. Kaufmann E, Mittelberg I, Koch I, & Philipp AM (2017). Modality effects in language switching: Evidence for a bimodal advantage. Bilingualism: Language and Cognition, 1–8.
42. Kleinman D, & Gollan TH (2016). Speaking two languages for the price of one: Bypassing language control mechanisms via accessibility-driven switches. Psychological Science, 27(5), 700–714.
43. Kroll JF, Bobb SC, Misra M, & Guo T (2008). Language selection in bilingual speech: Evidence for inhibitory processes. Acta Psychologica, 128, 416–430.
44. Kroll JF, Dussias PE, Bice K, & Perrotti L (2015). Bilingualism, mind, and brain. Annual Review of Linguistics, 1(1), 377–394.
45. Kroll JF, & Bice K (2016). Bimodal bilingualism reveals mechanisms of cross-language interaction. Bilingualism: Language and Cognition, 19(2), 250–252.
46. Li L, Abutalebi J, Emmorey K, Gong G, Yan X, Feng X, … & Ding G (2017). How bilingualism protects the brain from aging: Insights from bimodal bilinguals. Human Brain Mapping, 38(8), 4109–4124.
47. Lillo-Martin D, de Quadros RM, Chen Pichler D, & Fieldsteel Z (2014). Language choice in bimodal bilingual development. Frontiers in Psychology, 5, 1163.
48. McNeill D (1992). Hand and mind: What gestures reveal about thought. Chicago, IL: University of Chicago Press.
49. Meuter RF, & Allport A (1999). Bilingual language switching in naming: Asymmetrical costs of language selection. Journal of Memory and Language, 40, 25–40.
50. Misra M, Guo T, Bobb SC, & Kroll JF (2012). When bilinguals choose a single word to speak: Electrophysiological evidence for inhibition of the native language. Journal of Memory and Language, 67(1), 224–237.
51. Myers-Scotton C (1997). Dueling languages: Grammatical structure in codeswitching (2nd ed.). Oxford: Clarendon Press.
52. Myers-Scotton C, & Jake JL (1995). Matching lemmas in a bilingual competence and production model: Evidence from intrasentential code-switching. Linguistics, 33, 981–1024.
53. Olson DJ (2016). The gradient effect of context on language switching and lexical access in bilingual production. Applied Psycholinguistics, 37, 725–756. doi: 10.1017/S0142716415000223
54. Olulade OA, Jamal NI, Koo DS, Perfetti CA, LaSasso C, & Eden GF (2015). Neuroanatomical evidence in support of the bilingual advantage theory. Cerebral Cortex, 26(7), 3196–3204.
55. Petitto LA, Katerelos M, Levy BG, Gauna K, Tétreault K, & Ferraro V (2001). Bilingual signed and spoken language acquisition from birth: Implications for the mechanisms underlying early bilingual language acquisition. Journal of Child Language, 28, 453–496.
56. Philipp AM, Gade M, & Koch I (2007). Inhibitory processes in language switching: Evidence from switching language-defined response sets. European Journal of Cognitive Psychology, 19(3), 395–416.
57. Philipp AM, & Koch I (2009). Inhibition in language switching: What is inhibited when switching between languages in naming tasks? Journal of Experimental Psychology: Learning, Memory, and Cognition, 35(5), 1187–1195.
58. Poarch GJ (2016). What bimodal and unimodal bilinguals can tell us about bilingual language processing. Bilingualism: Language and Cognition, 19(2), 256–258.
59. Pyers JE, & Emmorey K (2008). The face of bimodal bilingualism: Grammatical markers in American Sign Language are produced when bilinguals speak to English monolinguals. Psychological Science, 19, 531–535.
60. R Core Team (2013). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL http://www.R-project.org/
61. Rodriguez-Fornells A, De Diego Balaguer R, & Münte TF (2006). Executive control in bilingual language processing. Language Learning, 56, 133–190.
62. Roelofs A (1998). Lemma selection without inhibition of languages in bilingual speakers. Bilingualism: Language and Cognition, 1, 94–95.
63. Rouder JN, Speckman PL, Sun D, Morey RD, & Iverson G (2009). Bayesian t tests for accepting and rejecting the null hypothesis. Psychonomic Bulletin & Review, 16(2), 225–237.
64. Schaeffner S, Fibla L, & Philipp AM (2017). Bimodal language switching: New insights from signing and typing. Journal of Memory and Language, 94, 1–14.
65. Székely A, D’Amico S, Devescovi A, Federmeier K, Herron D, Iyer G, et al. (2003). Timed picture naming extended norms and validation against previous studies. Behavior Research Methods, Instruments, & Computers, 35, 621–633.
66. Teubner-Rhodes S, Bolger DJ, & Novick JM (2017). Conflict monitoring and detection in the bilingual brain. Bilingualism: Language and Cognition, 1–25.
67. Van Assche E, Duyck W, & Gollan TH (2013). Whole-language and item-specific control in bilingual language production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39(6), 1781–1792.
68. Verhoef K, Roelofs A, & Chwilla DJ (2009). Role of inhibition in language switching: Evidence from event-related brain potentials in overt picture naming. Cognition, 110(1), 84–99.
69. Weisberg J, Casey S, Sevcikova Sehyr Z, & Emmorey K (in press). Second language acquisition of American Sign Language influences co-speech gesture production. Bilingualism: Language and Cognition.
