Trends in Hearing. 2024 Jan 29;28:23312165231215916. doi: 10.1177/23312165231215916

The Right-Ear Advantage in Static and Dynamic Cocktail-Party Situations

Moritz Wächtler 1,2, Pascale Sandmann 3, Hartmut Meister 1,2
PMCID: PMC10826403  PMID: 38284359

Abstract

When presenting two competing speech stimuli, one to each ear, a right-ear advantage (REA) can often be observed, reflected in better speech recognition compared to the left ear. Considering the left-hemispheric dominance for language, the REA has been explained by superior contralateral pathways (structural models) and language-induced shifts of attention to the right (attentional models). There is some evidence that the REA becomes more pronounced as cognitive load increases. Hence, it is interesting to investigate the REA in static (constant target talker) and dynamic (target changing pseudo-randomly) cocktail-party situations, as the latter is associated with a higher cognitive load than the former. Furthermore, previous research suggests an increasing REA when listening becomes more perceptually challenging. The present study examined the REA by using virtual acoustics to simulate static and dynamic cocktail-party situations, with three spatially separated talkers uttering concurrent matrix sentences. Sentences were presented at low sound pressure levels or processed with a noise vocoder to increase perceptual load. Sixteen young normal-hearing adults participated in the study. The REA was assessed by means of word recognition scores and a detailed error analysis. Word recognition revealed a greater REA for the dynamic than for the static situations, compatible with the view that an increase in cognitive load results in a heightened REA. Also, the REA depended on the type of perceptual load, as indicated by a higher REA associated with vocoded compared to low-level stimuli. The results of the error analysis support both structural and attentional models of the REA.

Keywords: auditory asymmetry, speech perception, multi-talker listening, error analysis, cognitive load

Introduction

A fundamental characteristic of most species is that they have two ears. This is particularly important for spatial orientation as well as speech understanding in “cocktail-party” situations, since binaural hearing allows sound sources to be separated and lays the foundation for focusing attention on the speaker of interest (cf. Bronkhorst, 2015). It is often assumed that hearing is symmetrical and that perfect symmetry is a prerequisite for processing binaural signals. However, when presenting two competing speech stimuli, one to each ear, a right-ear advantage (REA) can frequently be observed, which is reflected in better speech recognition compared to the left ear. The phenomenon of the REA has been investigated not only in behavioral but also in brain imaging studies (for a review, see, e.g., Hugdahl & Westerhausen, 2016).

Seminal Experiment of the REA and the Structural Model

The REA was originally discovered and described by Kimura (1961a, 1961b) based on a paradigm first introduced by Broadbent (1954). She presented dichotic sequences of digits to the participants and asked them to repeat as many items as possible from both ears; participants typically recalled more digits that had been presented to the right ear than to the left. Kimura (1967) suggested explanations for these observations which are often referred to as the structural model of the REA: each ear's input has a stronger representation in the contralateral than in the ipsilateral hemisphere of the brain. This results in a better performance of the right ear for speech stimuli compared to the left ear, since in most people the left hemisphere is specialized for language functions. Support for this model has been provided by brain imaging studies that show that speech sounds in dichotic listening conditions induce bilateral activation in the superior temporal lobe, with a significantly stronger activation in the left than in the right hemisphere (Hugdahl et al., 1999; van den Noort et al., 2008).

However, this speech-specific explanation has been challenged. More recent models suggest that the left hemisphere is specialized for processing rapidly fluctuating acoustic signals, whereas the right hemisphere is more proficient at processing slower temporal changes (Poeppel, 2003). The presence of rapid temporal changes such as formant transitions in speech signals could therefore explain the left-hemispheric advantage for speech recognition. Nevertheless, it should be noted that this is a simplified acoustic description of speech, rendering a correspondingly simple domain-specific dichotomy unlikely (McGettigan & Scott, 2012).

Even though the existence of the REA has been shown in a multitude of studies, its exact magnitude varies considerably depending on the choice of various stimulus characteristics. These include, but are not limited to, the sound pressure level (SPL) of the speech signal (Fumero et al., 2022), background noise level (Sequeira et al., 2008), stimulus duration (Godfrey, 1974), and phonological features such as voice onset time (Sandmann et al., 2007).

REA and Cognitive Factors

In addition to the model proposed by Kimura (1967), other explanations for the REA have been suggested that take attentional aspects into account. Though not explicitly stated, Kimura's model assumes that when listeners await a stimulus presentation, they distribute their attention equally to both ears. This assumption was challenged by Kinsbourne (1970), whose model instead explains the REA on the basis that speech stimuli lead to a cerebral activation, which among other things encompasses a "verbal set," that is, a "state of expectancy for verbal input." This activation is more pronounced for the left than for the right hemisphere, creating an attentional bias directed toward the right side of space. This leads to a general advantage for detecting stimuli from the right, whether visual or auditory, linguistic or non-linguistic. For example, Kinsbourne (1970) showed that participants have an advantage in detecting non-linguistic visual stimuli presented to their right compared to their left field of view when they are simultaneously subjected to a verbal task. The verbal task included listening to speech stimuli and memorizing them during the presentation of the visual stimulus. Presenting the visual stimuli alone, without a simultaneous verbal task, revealed no significant difference in detection performance between the left and right field of view, which supports Kinsbourne's model, that is, an attentional bias toward the right side caused by verbal activity.

In Kimura's original experiments, participants were asked to repeat back stimuli regardless of which ear they had been presented to (free recall). In an attempt to control the possible lateral biases of attention assumed by Kinsbourne (1970), researchers used modified instructions by asking participants to only attend to one designated ear (Asbjornsen & Hugdahl, 1995; Bryden et al., 1983; Hugdahl & Andersson, 1986). In those so-called “forced left” and “forced right” conditions, participants generally repeat back more words from the attended than from the unattended ear, meaning attention instructions can reverse the REA and thus create a left-ear advantage (LEA). However, the LEA in the “forced left” condition is usually less pronounced compared to the REA in the “forced right” condition. This indicates that controlled attention—albeit a strong contributor—cannot fully account for the REA observed during free recall (as pointed out by Hiscock & Kinsbourne, 2011). On the cortical level, the forced attention on one ear results in the activation of a cortico-striatal network, with a bilateral frontal activation in the “forced left” condition, and a unilateral right-hemisphere frontal activation in the “forced right” condition (Kompus et al., 2012). This supports the idea that a cognitive conflict arises in the “forced left” condition, since listeners have to attend to the perceptually less salient stimulus (Kompus et al., 2012).

Some studies provide evidence for the existence of a cognitive factor besides attention that modifies the REA (e.g., Bryden, 1967; Penner et al., 2009). In tasks resembling Broadbent's paradigm, the REA became larger when the number of dichotic stimulus pairs the participants had to recall was increased (Kimura, 1961a). This suggests that the extent of auditory laterality is associated with the load on working memory. Furthermore, Penner et al. (2009) demonstrated that the REA is especially pronounced for later positions in a sequence of dichotic stimulus pairs, possibly revealing processing in favor of the right ear when the number of stimuli kept in working memory is currently high. In conclusion, the study by Penner et al. (2009) indicates that enhancing the load on a particular component of cognition (memory, in this case) may result in a heightened REA. However, it remains unclear if this relationship extends to other components of cognition such as attention.

Two-Component Models

As Hiscock and Kinsbourne (2011) point out, the observation that the REA can be modified by both characteristics of the stimulus and cognitive factors motivated the development of the so-called two-component models (e.g., Andersson et al., 2008). The two basic components these models distinguish are, firstly, a bottom-up component that refers to the hemispheric asymmetry described by Kimura's structural model and whose influence on the REA depends on various stimulus characteristics; secondly, a top-down component involving cognitive factors such as attention, which can, among other things, be modified by experimental instructions (Hugdahl & Westerhausen, 2016).

Similarly, further support for the idea of two distinct components influencing the REA comes from the so-called "ventriloquism effect," whereby the believed location of a sound source (e.g., a talker) can be changed by a visual stimulus (review in Chen & Vroomen, 2013). Morais (1974) showed that changing the believed location of a target talker in this way, while leaving its actual location unchanged, can also influence the extent of the REA. This provides additional evidence that the REA cannot be fully explained by stimulus characteristics and structural asymmetries alone but that the focus of attention plays a role, too.

REA in “Cocktail-Party Situations”

The studies covered so far mostly used relatively simple stimuli such as digits or consonant-vowel syllables instead of whole sentences (e.g., Bryden, 1967; Kimura, 1961a, 1961b; Penner et al., 2009). In addition, stimuli were often played back through headphones such that each of the competing talkers was presented to one ear exclusively. This differs from listening situations in a realistic sound field, in which the sound of each talker arrives at both ears and, depending on the angle of incidence, does so with an interaural time and intensity difference. This raises the question of whether the REA is restricted to basic speech signals or can also be observed in more ecologically valid listening situations with competing talkers, with possible consequences for everyday communication.

Meister et al. (2020) implemented cocktail-party situations using a sound field including one target and two masker talkers at different positions uttering matrix sentences. The study found better speech recognition performance when the target talker was located to the right of the participants than when it was located to the left. Even though these results have to be interpreted with caution, since performance was close to the ceiling, they provide some indication that an REA may also exist in listening situations that are closer to everyday communication than classic dichotic listening tests in terms of speech material and binaural cues.

Wächtler et al. (2020) conducted additional analyses using the data from Meister et al. (2020) and compared the REA between static and dynamic cocktail-party situations. In static cocktail-party situations, the target talker remains the same and is known to the listener in advance, whereas in dynamic situations the target talker switches in an unpredictable manner, that is, uncued and in pseudo-random patterns. In the latter situation, listeners are required to divide their attention between the talkers due to the stimulus uncertainty and to refocus their attention when the target changes. Therefore, dynamic situations are associated with a higher cognitive load than static situations. Wächtler et al. (2020) observed a more pronounced REA in dynamic relative to static situations. Moreover, in dynamic situations with permanent switches (and therefore a high cognitive demand), the REA tended to be higher compared to when switches were rare. This seems to support the hypothesis that the REA gets larger as cognitive load increases. However, the aforementioned ceiling effects in recognition performance in the static condition complicate the interpretation as well as the statistical analysis of the results and make it hard to draw firm conclusions. It is also conceivable that the increase of the REA in the dynamic relative to the static conditions was only partly due to the increased cognitive load. Potential asymmetries in attention-shifting ability, with switching from left to right being easier than vice versa (Hiscock & Chipuer, 1993), may also have contributed to the REA in the dynamic conditions.

In a recent study by Fumero et al. (2022), participants had to repeat two sentences that were simultaneously uttered by a female and a male talker located to their left and right side, respectively. In this divided-attention task, recognition performance was similar for both talkers when speech was presented at normal conversational levels (65 dB SPL). However, reducing acoustic cues by presenting the speech at low SPLs (35 dB SPL) or by processing it with a noise vocoder revealed an advantage for the talker on the right. Although this gives some evidence that an REA emerges when the perceptual difficulty is increased, this result has to be interpreted with caution, as the talkers on the left and right differed in terms of voice characteristics and onset times. Indeed, when the presentation of the female and male talker was side-reversed, no advantage for either the right or the left ear could be observed. This indicates an interaction between the presentation side and talker intelligibility, complicating the interpretation of the result concerning factors affecting the REA.

A second experiment by Fumero et al. (2022) showed no significant REA when listeners only had to selectively attend to one of the talkers. This finding held regardless of the placement (left/right) of the talkers and might be due to the lower cognitive load compared to the divided listening task in Fumero et al.'s first experiment. However, the comparability between the first and second experiments was limited, as the two tasks differed in terms of stimuli (the latter did not use noise vocoding or low SPLs) and methodology (adaptive procedure vs. constant target-to-masker ratio). In summary, the study by Fumero et al. (2022) gives some evidence regarding bottom-up modulators (stimulus modifications) of the REA and the influence of cognitive load as a top-down factor in simulated cocktail-party situations. Drawing firm conclusions, however, seems difficult due to the confounding factor of the talker voice and the limited comparability between tasks with varying degrees of cognitive load.

Taken together, previous studies using either an increased number of dichotic stimulus pairs (Penner et al., 2009) or competing sentences to simulate cocktail-party situations (Fumero et al., 2022) suggest that an increased cognitive load caused by high memory or attentional demands leads to a larger REA. The present study extends previous studies by investigating the influence of attentional demands on the REA in a cocktail-party scenario. Limitations of previous studies were overcome by avoiding ceiling effects as well as by improving comparability between conditions with low and high cognitive load. Similar to the cocktail-party setup from Meister et al. (2020), the present study used a speech recognition test with three competing talkers and static as well as dynamic situations. We hypothesized that dynamic situations reveal a larger REA than static situations due to the increased cognitive load. Similar to the study by Fumero et al. (2022), speech stimuli were processed using a noise vocoder or were presented at low SPLs in order to study the effect of perceptual load on the REA. Moreover, these stimulus modifications were used to avoid ceiling effects in speech recognition performance.

In order to uncover mechanisms underlying the REA, we performed an additional error analysis. Errors during speech recognition in multi-talker situations can be divided into two categories, namely misunderstanding or completely omitting utterances (random errors) and confusing utterances from target and masker (confusion errors) (cf. Lin & Carlile, 2015; Wächtler et al., 2022). Due to the dominance of the left hemisphere for speech recognition and the consequential advantage for right-ear input (cf. Kimura's model), we expected that the REA is mainly due to fewer random errors for target talker presentations from the right compared to the left side. Alternatively, it seemed conceivable that because of an attentional bias to the right side of space (cf. Kinsbourne, 1970), participants were less prone to intrusions from other directions when the target was presented from the right, resulting in fewer confusion errors compared to presentations from the left.

Methods

Cocktail-Party Setup and Stimuli

Experiments were conducted in a sound-isolated and sound-treated booth using virtual acoustics. Virtual acoustics was advantageous over free-field presentation because it allowed for a more controlled stimulus presentation. Stimuli were converted from digital to analog using an RME Hammerfall DSP Multiface II and were presented through Sennheiser HD-25 headphones. The frequency response of the headphones was measured with a Brüel & Kjær type 4152 artificial ear between 125 Hz and 12.5 kHz and was then equalized by inverse filtering in the frequency domain in order to achieve a flat overall frequency response.
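
This equalization step can be sketched as follows. This is a minimal illustration rather than the authors' exact procedure: the boost limit used as regularization and the linear-phase FIR design are our own assumptions, and the measured response is assumed to be available as magnitude values at known frequencies.

```python
import numpy as np

def inverse_eq_filter(freqs, magnitude_db, fs=44100, n_taps=1024, max_boost_db=12.0):
    """Linear-phase FIR filter inverting a measured headphone magnitude
    response (sketch; parameter values are hypothetical)."""
    # Interpolate the measured response onto the FFT bin frequencies.
    bins = np.fft.rfftfreq(n_taps, d=1.0 / fs)
    resp_db = np.interp(bins, freqs, magnitude_db)
    # Invert the response; limit the boost so that measurement noise and
    # deep notches are not amplified excessively (assumed safeguard).
    inv_db = np.minimum(-resp_db, max_boost_db)
    inv_mag = 10.0 ** (inv_db / 20.0)
    # Zero-phase spectrum -> impulse response; rotate and window to obtain
    # a causal linear-phase filter.
    ir = np.fft.irfft(inv_mag, n=n_taps)
    ir = np.roll(ir, n_taps // 2) * np.hanning(n_taps)
    return ir

# Usage: eq = inverse_eq_filter(measured_freqs, measured_db)
#        equalized = np.convolve(stimulus, eq)
```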

In each trial, three sentences were presented simultaneously and were uttered by three talkers with different voice characteristics: a high- and a low-pitched female talker with fundamental frequencies of 202 Hz and 143 Hz, respectively, as well as a male talker (122 Hz). The three talkers were located at −60° (left), 0° (center), and 60° (right) azimuth angles relative to the participant (Figure 1(a); for detailed information also see Meister et al., 2020). Despite the fact that we were interested in laterality effects, we included three talkers instead of two in order to achieve a higher stimulus uncertainty and thus a higher cognitive load in the dynamic situation.

Figure 1. (a) Illustration of the spatial setup that was simulated by means of virtual acoustics. In this example trial, the target sentence is presented from the left side, as indicated by the keyword "Stefan." (b) Diagram showing examples of the static condition and of the switching in the dynamic condition. Note that in the static condition, the target talker (indicated by a rectangular box) remains at the same position, whereas it changes positions after every trial in the dynamic condition. In both conditions, the target talker always has the same voice (female low, i.e., the female talker with the lower fundamental frequency).

This spatial configuration was simulated by convolving head-related impulse responses (HRIRs) with the speech waveforms. HRIRs, which were recorded using a KEMAR artificial head with blocked ear canals, were taken from the OlHeaD-HRTF database (Denk et al., 2018).
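
In code, this binaural rendering amounts to a convolution and sum per ear. The sketch below assumes the HRIRs have already been loaded from the database into arrays with one column per ear; the data layout and function name are hypothetical.

```python
import numpy as np
from scipy.signal import fftconvolve

def spatialize(talkers, hrirs):
    """Render a binaural mixture from mono talker signals (sketch).

    talkers: dict mapping azimuth in degrees (-60, 0, 60) to a mono waveform.
    hrirs:   dict mapping azimuth to an (n_taps, 2) array holding the left
             and right head-related impulse responses for that direction.
    """
    left = right = 0.0
    for azimuth, signal in talkers.items():
        h = hrirs[azimuth]
        # Convolution with the HRIR pair imprints the interaural time and
        # intensity differences associated with this direction.
        left = left + fftconvolve(signal, h[:, 0])
        right = right + fftconvolve(signal, h[:, 1])
    return np.stack([left, right], axis=1)  # two-channel headphone signal
```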

Sentences were taken from the German version of the Oldenburg sentence test (Wagener et al., 1999), a matrix test consisting of five-word sentences such as "Stefan kauft acht nasse Autos" (Stefan buys eight wet cars) or "Doris malt fünf grüne Tassen" (Doris draws five green cups). Each sentence had a duration of 2.5 s. The target talker was indicated by the sentence beginning with the name "Stefan." Participants were instructed to verbally repeat back the target sentence and to ignore the competing sentences, which differed from the target in each word position. Guessing words was allowed.

In the static conditions, the target talker location remained constant throughout a test list (Figure 1(b), upper part) and was verbally announced prior to the presentation of a test list. The static condition consisted of three test lists of 10 trials each (one test list for each of the target positions).

The dynamic condition comprised switches, that is, unpredictable changes in the target talker location, after every trial (Figure 1(b), lower part). Participants were not provided with a priori information regarding the location of the target talker. This creates stimulus uncertainty, requiring listeners to monitor the three talkers, identify the target, and then refocus their attention on the respective talker location. The test list for the dynamic condition included 30 trials (10 trials for each of the three target positions).

In both the static and the dynamic conditions, the target sentences were always uttered by the female talker with the lower fundamental frequency. This means that switches in the dynamic condition were achieved by changing the positions of the target talker and the two maskers (Figure 1(b), lower part), rather than letting different talkers at fixed positions utter the target sentences. The rationale behind keeping the target voice constant was to avoid a confounding effect of talker intelligibility, which could have complicated the interpretation of the effect of target location on speech recognition.

We aimed to create listening situations with a high perceptual difficulty, in order to increase the extent of the expected auditory asymmetries (Fumero et al., 2022). Similar to the study by Fumero et al. (2022), this was done either by modifying stimuli with a noise vocoder (Gaudrain & Başkent, 2015) or leaving them unprocessed but presenting them at low SPLs.

The vocoder analyzed the speech signal with a filter bank spanning the frequency range from 150 Hz to 7000 Hz. The temporal envelope for each filter bank channel was estimated by using a half-wave rectifier and a subsequent low-pass filter with a cut-off frequency of 160 Hz. Narrow-band noise signals, one for each channel of the filter bank, were amplitude-modulated with these envelopes, and the resulting signals were summed across channels in order to obtain the final vocoded signal.
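
A minimal sketch of such a vocoder is given below, following the description above. The logarithmic spacing of the band edges, the filter orders, and the zero-phase filtering are illustrative assumptions; the implementation of Gaudrain and Başkent (2015) may differ in these details.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocoder(x, fs, n_bands, f_lo=150.0, f_hi=7000.0, env_cutoff=160.0):
    """Noise-vocode signal x (float array) with n_bands channels (sketch)."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)  # assumed log spacing
    env_lp = butter(4, env_cutoff, btype="low", fs=fs, output="sos")
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, x)
        # Envelope: half-wave rectification, then a 160-Hz low-pass.
        env = np.clip(sosfiltfilt(env_lp, np.maximum(band, 0.0)), 0.0, None)
        # Carrier: noise restricted to the same analysis band.
        noise = sosfiltfilt(band_sos, np.random.randn(len(x)))
        out += env * noise  # amplitude-modulate and accumulate
    # Restore the overall level of the input (assumed normalization step).
    return out * np.sqrt(np.mean(x**2) / np.mean(out**2))
```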

Another reason for increasing the perceptual difficulty was the observation from previous studies that young normal-hearing listeners can achieve close-to-ceiling performance in cocktail-party setups such as the one used in the present study (e.g., Meister et al., 2020), complicating the interpretation and statistical analysis of the results. Therefore, in order to obtain speech recognition scores that are affected by neither ceiling nor floor effects, the number of frequency bands for the vocoder and the SPL attenuation were adjusted individually for each listener, aiming at a speech recognition score of roughly 75% to 80% for the configuration that was expected to yield the highest performance (static condition, target on the right). In this way, a safe distance to the performance ceiling was kept, while at the same time floor effects were prevented for the most adverse condition (dynamic condition, target on the left).

Edinburgh Handedness Inventory

It is assumed that left-hemispheric dominance can be found in virtually all right-handed individuals, whereas the proportion is lower for left-handedness (e.g., Knecht et al., 2000). The Edinburgh Inventory (Oldfield, 1971) is a questionnaire developed to yield a quantitative measure of hand laterality. Participants are asked if they use their left, right, or either hand for 10 commonplace activities such as holding scissors or a spoon. In addition, participants can also indicate that their preference for one hand is particularly strong (i.e., exclusive). The responses are then used to calculate an individual laterality quotient ranging from −100 (fully left-handed) to +100 (fully right-handed).
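
A sketch of this scoring is shown below, assuming the common Oldfield convention in which an "either" answer adds one tally per hand and an exclusive preference counts double; the answer encoding itself is hypothetical.

```python
def laterality_quotient(answers):
    """Edinburgh laterality quotient from 10 item answers (sketch).

    answers: list with entries "left", "right", "either", or "left!"/"right!"
    for an exclusive (particularly strong) preference.
    """
    tallies = {"left": 0, "right": 0}
    for a in answers:
        if a == "either":        # one tick in each column
            tallies["left"] += 1
            tallies["right"] += 1
        else:                    # two ticks for an exclusive preference
            hand = a.rstrip("!")
            tallies[hand] += 2 if a.endswith("!") else 1
    r, l = tallies["right"], tallies["left"]
    # +100: fully right-handed; -100: fully left-handed.
    return 100.0 * (r - l) / (r + l)

# Example: a strong right-hander
# laterality_quotient(["right!"] * 7 + ["right"] * 2 + ["either"])  # -> ~88.9
```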

Procedures

We took care not to give participants any cues, until the end of the experiment, that the study investigated laterality effects. The rationale for this was to avoid a bias of the results due to potential preconceptions of the participants regarding their own auditory asymmetry, be it from everyday experience or from prior knowledge about the REA.

After participants had signed the informed consent forms, pure-tone thresholds were measured. Subsequently, the speech recognition experiments using the described cocktail-party paradigm were conducted. They started with a familiarization phase in which participants were presented with the speech materials from the Oldenburg sentence test as well as with the different stimulus modifications (vocoding, decreased SPL). In addition, participants could also practice the static and dynamic cocktail-party task during two to three practice conditions of 22 trials each, until it was clear that they understood the task and their speech recognition performance had stabilized. During the familiarization phase, the experimenter individually adjusted two independent parameters, namely the number of vocoder bands and the stimulus attenuation, so that participants achieved approximately 75% to 80% word recognition for target sentences presented from the right side in the static condition. The parameter values found in this way were then kept constant throughout the experiment.

The actual test comprised the static and dynamic cocktail situations, each presented with both vocoding and SPL attenuation applied to the stimuli (but not at the same time), resulting in four different conditions. The order of conditions was counterbalanced using a Latin square design.
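
For illustration, a simple cyclic 4 × 4 Latin square over the four conditions could be generated as below; the exact square used in the study is not reported, and a digram-balanced variant would be an equally plausible choice.

```python
def latin_square(n):
    """Cyclic n x n Latin square; row i gives the condition order for the
    i-th group of participants (sketch; the study's actual square is unknown)."""
    return [[(i + j) % n for j in range(n)] for i in range(n)]

conditions = ["static/low-level", "static/vocoding",
              "dynamic/low-level", "dynamic/vocoding"]
for row in latin_square(4):
    print(" -> ".join(conditions[k] for k in row))
```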

Following a short break of about 10 to 15 min, a retest of these four conditions was conducted, this time with the participants wearing the headphones side-reversed, that is, with the left and right sides swapped. This was done to counterbalance potential asymmetries of the audio hardware that might not have been fully compensated by the calibration. Moreover, despite the optimized speech material (cf. Kollmeier et al., 2015), the intelligibility of individual sentences might differ slightly, which was also compensated for by this measure. The reversal of the headphones was done out of sight of the participants. Again, the rationale here was to avoid giving participants any cues about the purpose of the investigations. For the same reason, participants filled in the Edinburgh handedness questionnaire only after completing the entire listening test.

The total duration of the experiment was 2.5 to 3 h including breaks. Participants were reimbursed for their time at €10/h. The study was approved by the ethics committee of the medical faculty at the University of Cologne.

Participants

An a priori calculation of the necessary sample size was performed using G*Power 3.1.9.2 (Faul et al., 2009) with a significance level of 5% and a power threshold of 80%, based on the data of the static and dynamic situations observed for normal-hearing listeners in the Meister et al. (2020) study, as reported by Wächtler et al. (2020). An effect size of 0.79 was calculated, resulting in a minimum sample size of 12 participants and yielding an actual power of 0.82. To accommodate the fact that the REA based on these data might have been somewhat inflated due to ceiling effects in the static condition, this number was raised to the next value defined by the 4 × 4 Latin square design, namely 16.
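
Since the exact G*Power settings (test family, number of tails) are not reported here, the following sketch assumes a one-tailed paired-samples t-test with dz = 0.79; under this assumption, the smallest sample size reaching 80% power is indeed 12, with a power of roughly 0.81, close to the reported 0.82.

```python
from scipy.stats import nct, t

def paired_t_power(dz, n, alpha=0.05):
    """Power of a one-tailed paired-samples t-test (assumed test family)."""
    df = n - 1
    t_crit = t.ppf(1.0 - alpha, df)  # one-tailed critical value
    ncp = dz * n**0.5                # noncentrality parameter
    return 1.0 - nct.cdf(t_crit, df, ncp)

n = 2
while paired_t_power(0.79, n) < 0.80:  # smallest n reaching 80% power
    n += 1
print(n, round(paired_t_power(0.79, n), 2))  # -> 12 0.81
```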

Those 16 listeners (8 male, 8 female) were between 18 and 35 years of age (mean 25.6 years) and were all German native speakers. All participants had pure-tone thresholds ≤20 dB HL at octave frequencies ranging from 250 to 8000 Hz and reported no history of hearing problems.

Their hand laterality quotients as determined by the Edinburgh Inventory ranged from −57.9 to 100. According to the categorization by Isaacs et al. (2006), four participants were ambidextrous, six participants had a moderate right-hand preference, and in the remaining six, this preference was strong. Individual handedness indicates the likelihood of having a language-dominant left hemisphere, which according to Kimura's structural model is a prerequisite for the emergence of an REA. For example, Knecht et al. (2000) determined the frequency of the opposite case, namely right-hemisphere language dominance. When viewing handedness as a continuum, the rate of right-hemispheric language dominance is lowest (4%) for people who have a strong preference for the right hand and increases gradually up to 27% for strong left-handers. In other words, typical left-hemispheric language dominance amounts to 96% in strong right-handers and is still found in 73% of strong left-handers. These values have been confirmed in a subsequent assessment considering a different study sample (i.e., 97% vs. 74%, see Flöel et al., 2005). Based on these previous results, one can conclude that the likelihood of having a left-hemispheric language dominance is high even in participants with a strong negative laterality quotient. Thus, we decided to include the ambidextrous individuals in the data analysis.

Data Analysis

Speech Recognition Performance and Error Types

We calculated performance values as the percentage of correctly recognized target words relative to the total number of presented target words. The first word of each target sentence was not taken into account, as it only served as a keyword to indicate the target talker. Also, note that every condition was presented twice per participant (test/retest) and the results of these two presentations were combined. This means for each condition 240 target words were considered: 30 trials × 4 target words/trial × 2 test/retest.

The number of target words was further broken down by target talker position, resulting in 80 target words per condition and position. The data for target talker presentations from the center (0°) position are not considered in the following, as they do not convey information regarding the REA.

For the error analysis, we distinguished between random and confusion errors. A random error was registered when a participant omitted a word or reported a word that had been presented by neither the target nor the masker talkers. Reported words that had been uttered by the masker talkers were counted as confusion errors. Error rates are reported as the number of errors relative to the number of target words.
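
A sketch of this per-trial scoring is given below. The data layout is hypothetical, and responses are assumed to be aligned by word position; since every position in the matrix test has its own word category, this matches the definitions above.

```python
def score_trial(target_words, masker_words, response_words):
    """Classify each scored word as correct, confusion, or random error.

    target_words:   the four scored target words (keyword "Stefan" excluded).
    masker_words:   per position, the set of words uttered by the two maskers.
    response_words: the participant's response per position (None = omitted).
    """
    counts = {"correct": 0, "confusion": 0, "random": 0}
    for pos, (target, response) in enumerate(zip(target_words, response_words)):
        if response == target:
            counts["correct"] += 1
        elif response in masker_words[pos]:
            counts["confusion"] += 1  # word came from a masker talker
        else:
            counts["random"] += 1     # omission or word nobody presented
    return counts

# Error rates: counts["random"] / n_target_words, etc., pooled over trials.
```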

Statistical Analysis

Two separate linear mixed models were used for the statistical analyses of the speech recognition scores and error rates. Models were fitted in R version 4.2.1 (R Core Team, 2022) by means of version 1.1-31 of the R package lme4 (Bates et al., 2015). Participants were included in the models as a random factor; the exact random-effects structure was decided upon using a version of the Akaike information criterion corrected for small sample sizes (Hurvich & Tsai, 1989) as implemented in the R package MuMIn (version 1.47.5; Barton, 2023). The resulting model formulas are shown in the Supplemental materials. The car library (version 3.1-1, Fox & Weisberg, 2019) was used to obtain χ2-statistics and p-values from the models by means of Wald tests. Post-hoc paired comparisons were carried out with version 1.8.3 of the emmeans package (Lenth, 2022), and the resulting p-values were Bonferroni-corrected. The significance level was 0.05.

Results

Figure 2 depicts the speech recognition performance for target talker presentations from the left and right side. It can be seen that presentations from the right side in the static conditions yield a performance of approximately 75%, regardless of the challenge. In general, performance is better for static than for dynamic conditions and higher for the right compared to the left side—the latter in accordance with the concept of an REA that amounts to 0.8 percentage points (pp) (low-level, static), 3.7 pp (vocoding, static), 5.6 pp (low-level, dynamic), and 15.7 pp (vocoding, dynamic). Hence, the REA appears to be more pronounced for the dynamic than for the static conditions and for vocoding compared to presentations at low levels. Indeed, this is confirmed by the linear mixed model (LMM), which reveals significant influences of condition (χ2(1) = 56.3, p < 0.001) and side (χ2(1) = 7.00, p = 0.008) as well as significant interactions of condition × side (χ2(1) = 6.49, p = 0.011) and challenge × side (χ2(1) = 3.92, p = 0.048). Post-hoc tests show that only the dynamic condition in combination with vocoded stimuli yields a significant REA (t(61.6) = −4.19, p < 0.001); all other p-values were 0.14 or larger (two-tailed, Bonferroni-corrected). For a detailed listing of all paired comparisons conducted and the corresponding results, please refer to the Supplemental materials.

Figure 2. Percentages of correctly recognized words as a function of the side of the target talker. The left and right panels show the results for the different degradations of the speech signal, namely lowering the sound pressure level ("low level") and noise vocoding ("vocoding"). Means ± 1 standard error of the mean are shown (n.s.: not significant; ***p < 0.001).

Figure 3 plots the results of a more detailed analysis, distinguishing between random and confusion error rates. In general, error rates for the right side seem to be lower compared to the left, especially in the dynamic conditions. An exception to this is the random error rates during presentations with low SPLs (lower left panel), which show no clear tendency regarding the side. Random errors are more frequent than confusion errors, regardless of challenge and condition. Confusion errors show some dependence on the challenge in that they tend to be slightly higher when vocoding is used as opposed to presentations at low levels. By contrast, there is no such trend for random errors.

Figure 3. Error rates for target talker presentations from the left and right side. Means ± 1 standard error of the mean are shown (n.s.: not significant; **p < 0.01, ***p < 0.001).

The LMM reveals significant effects of side (χ2(1) = 7.00, p = 0.008), condition (χ2(1) = 64.85, p < 0.001), and error type (χ2(1) = 25.78, p < 0.001), but not of challenge (χ2(1) = 1.73, p = 0.189).

The interactions condition × side (χ2(1) = 7.48, p = 0.006), challenge × side (χ2(1) = 4.51, p = 0.034), and challenge × error type (χ2(1) = 4.84, p = 0.028) are significant, while the side × error type (χ2(1) = 0.60, p = 0.438) and challenge × side × error type (χ2(1) = 1.06, p = 0.30) interactions are not. Besides the significant effects already found in the analysis of speech recognition performance, this confirms the observation that confusions are generally less frequent than random errors. In addition, it confirms an increase in the confusion error rate for vocoding relative to the low-level challenge, which cannot be seen for random errors. Post-hoc paired comparisons (two-tailed, Bonferroni-corrected p-values) revealed that significant differences in error rates between the left and right sides can only be found in the dynamic but not in the static conditions (see indication of significant differences in Figure 3). More precisely, this is the case for confusion errors during low-level (t(170) = 2.64, p = 0.009) and vocoder (t(170) = 3.10, p = 0.002) presentations, as well as for random errors during vocoder presentations (t(170) = 3.53, p < 0.001). All other p-values were ≥0.41. Again, please refer to the Supplemental materials for a detailed listing of all paired comparisons conducted.

Discussion

This study addressed the REA in a simulated cocktail-party situation with a particular focus on the potential effects of perceptual and cognitive load. To this end, stimuli were presented using virtual acoustics, and perception was challenged by reducing the presentation level of the competing sentences or by subjecting the sentences to a noise vocoder. Cognitive load was varied by considering either a predefined and constant target talker (static condition) or a permanently switching one (dynamic condition). We hypothesized that listening in the dynamic condition would increase the REA compared to listening to an a priori known target talker. Similar to the findings by Fumero et al. (2022), we also assumed that a higher perceptual load increases the magnitude of the REA. In addition, we expected that analyzing different error types could give more details on the mechanisms of the REA in cocktail-party situations.

We included young normal-hearing listeners in the study. Although four of these individuals were ambidextrous, they were included in the data analysis for the following reasons. First, handedness does not unequivocally determine hemispheric language dominance. For example, following the approximation described by Knecht et al. (2000), the probability of left-hemispheric language dominance in our ambidextrous participants is between 78% and 89%. Second, visual inspection of the speech recognition patterns of these individuals showed that they were not substantially different from the remaining sample in terms of their REA magnitudes (see Supplemental materials, Figure S1). Third, a correlation analysis did not show a significant relationship between handedness score (i.e., individual laterality quotient) and the magnitude of the REA (also see Supplemental materials, Figure S1). Importantly, it should be mentioned that even if the ambidextrous participants had revealed a smaller REA than the right-handed ones (which, however, was not observed), this would have been unlikely to interfere with the investigation of our hypotheses, since our study mainly focuses on the REA differences between static and dynamic conditions rather than on absolute REA values.

REA in Cocktail-Party Situations

Our study investigated the REA in cocktail-party situations, which are different from the classic dichotic listening paradigm (Broadbent, 1954) used by Kimura (1961a, 1961b) in that listeners are exposed to a (simulated) sound field with multiple sound sources at different locations. This results in soundwaves from each talker arriving at both ears, as opposed to Broadbent's paradigm in which each talker is presented only through one headphone speaker and thus only to one ear, which makes our setup more realistic in terms of binaural cues. In other words, in Broadbent's paradigm, listeners are presented with two perfectly binaurally separated speech signals, while during sound-field listening this binaural separation is reduced.

There is some evidence that the REA can also be observed in sound fields (Fumero et al., 2022; Hublet et al., 1976; Meister et al., 2020; Wächtler et al., 2020). Still, it is possible that the reduced binaural separation caused the REA to become smaller than in the classic dichotic listening paradigm, as the language advantage of the left hemisphere can be expected to show its greatest effect on the REA when signals from the left and right are perfectly separated. Thus, we compared the REA values obtained in our study (0.8 to 15.7 pp) with those found in studies using the classic dichotic listening paradigm, which for instance reported 4 pp (Kimura, 1967) and 6 to 12 pp (Penner et al., 2009). The result from this comparison, however, does not suggest an effect of binaural separation on the extent of the REA. Nevertheless, comparability between our results and those of the previous studies is limited due to differences regarding attentional demands (mix of selective/divided/switching attention vs. divided attention), stimuli (sentences vs. single letters/digits), and stimulus modifications (vocoding/low-level speech vs. clean speech at normal levels).

Besides considerations of the sound field, the speech material differs from the stimuli used in most of the previous studies, which—according to the two-component model—may have influenced the REA by bottom-up processing. In this study, we used whole sentences that more closely resemble realistic communication in general and cocktail-party situations in particular, rather than phonemes or single words. Another aspect adding to the realism of the listening situation was the fact that the competing talkers had different voice characteristics, offering listeners an additional cue for separating the talkers from one another. This may have helped the listeners to focus on the target talker as it possibly reduced the load of segregating the competing speech streams.

Static and Dynamic Conditions: Effect of Cognitive Load

We observed a greater REA in dynamic than in static situations, as reflected by the significant interaction between the presentation side and the condition. Compared to static situations, dynamic situations require the listener to monitor multiple talkers simultaneously and to quickly redirect attention once the target is identified, which is why they are associated with an increased cognitive load. More specifically, these challenges of the dynamic situations may lead to an increased demand on executive functions (e.g., Lin & Carlile, 2015), which encompass concepts such as cognitive flexibility and task-switching ability. The increased cognitive load is reflected, among other things, in a worse speech recognition performance for dynamic situations, which could be observed in the present as well as in other studies (Brungart & Simpson, 2007; Lin & Carlile, 2015; Meister et al., 2020). The observation that a higher cognitive load leads to a more pronounced auditory asymmetry generally agrees with the findings of Fumero et al. (2022) and Penner et al. (2009), although the latter tapped into another cognitive domain than the present study, namely working memory. One might ask why auditory asymmetry favoring the right side is increased under cognitive load in the first place. Findings from research on visual perception provide clues that point toward a cognitive-load-dependent attentional bias. In situations with high cognitive load, rightward shifts of attention are a well-known phenomenon in the visual domain (e.g., Pérez et al., 2009). Neuroimaging data have revealed that the dorsal attention network—a brain network responsible for visuospatial attention—shows more left- than right-hemispheric activation in conditions with high cognitive load, thereby biasing attention toward the right side of space (Paladini et al., 2020). Whether these findings from visual research can be transferred to the auditory domain remains unclear, but they at least indicate which type of mechanism could be at play in auditory asymmetry. Nevertheless, studies combining the dichotic listening paradigm with functional magnetic resonance imaging have provided additional insights into the neural underpinnings of REA modulations as induced by bottom-up (stimulus-driven) and top-down (instruction-driven) factors (Westerhausen et al., 2010). For instance, Falkenberg et al. (2011) identified two distinct brain networks recruited under high and low demands on executive functions (there referred to as cognitive control), namely a fronto-parietal network as well as a network encompassing the superior temporal and the post-central gyri. A direct transfer from these earlier results to our current findings is difficult due to methodological discrepancies between the studies. Nonetheless, we speculate that the enhanced REA in our dynamic compared to static situations is related to the recruitment of distinct cortical networks. This needs to be verified by future brain imaging studies investigating auditory asymmetry in static and dynamic cocktail-party situations.

It is conceivable that the higher magnitude of the REA in dynamic relative to static conditions was only partly caused by the increased cognitive load. One may speculate that the different instructions on how to focus attention and the corresponding tasks, in combination with a right-side attentional bias independent of cognitive load, might have played a role, too. This attentional bias might have been caused by the verbal nature of the stimuli and the consequential stronger activation of the left hemisphere compared to the right one (cf. Kinsbourne, 1970). In the static condition, participants were instructed to focus their attention on a designated talker position, which might have partly overridden the described attentional bias. By contrast, in the dynamic condition, the listeners were free to direct their attention until they identified the target and focused on the talker of interest. This could mean that the larger REA in dynamic relative to static situations is only partly due to higher cognitive load and can to some extent be explained by the different procedures used in the static and dynamic situations, which allow the attentional bias to stand out to different extents.

Low-Level and Vocoding: Effects of Perceptual Load

Following Fumero et al. (2022), we used presentations at low SPLs and vocoding as a means to increase perceptual difficulty. As a consequence, even in the condition with the highest performance (presentations from the right side in the static situation), word recognition only averaged around 75%, meaning that ceiling effects that were observed in previous studies (see Meister et al., 2020) were avoided in the present study.

We observed a larger REA for vocoding than for presentations at low SPLs. A closer inspection of the data shown in Figure 2 reveals that static as well as dynamic recognition performance for the right side is roughly the same for both challenges and only varies for the left side. This allows for a comparison of these two different signal degradations with respect to the REA while controlling for the confounder of performance. That the performance for the left side is differently affected by the signal degradations could possibly be explained by Kimura's structural model: the strong representation of the right-side signal in the speech-dominant left hemisphere might allow compensating for the imposed signal degradations, whereas such an advantage does not exist for stimuli presented from the left, making the different effects of the signal degradations apparent only for this presentation side. Even though not explicitly discussed and not tested for statistical significance, the experiments by Fumero et al. (2022) also showed a trend for vocoded stimuli to create a larger REA than low-level stimuli. In general, the REA found in the Fumero et al. study was larger than in the present investigation. In line with the notion that perceptual load affects the REA, a possible explanation could be that their signal-degradation measures yielded clearly lower performance (around 30% to 50%) than our methods did. Our choice was more conservative in order to prevent any floor effects for the demanding dynamic condition.

Another possible reason for the different REA elicited by the two challenges could be the way they affect the binaural separation of the speech signals from the left and right. While the vocoder mainly reduces the frequency resolution, lowering the SPL of the signal causes the relatively quiet high-frequency components of the speech signal to become inaudible to the listeners. Due to the small acoustic head-shadow effect in the remaining low-frequency regions, the speech signals from the left and right talkers arrive at the opposite ear with only slightly reduced intensity, resulting in a lower binaural separation compared to signals with richer high-frequency content. According to Kimura's structural model, the decreased binaural separation might have reduced the right-side talker's benefit to be primarily processed in the speech-dominant left hemisphere, thus resulting in a smaller REA.

Finally, while we classified perceptual and cognitive load as completely separate bottom-up and top-down factors in the framework of the two-component models of auditory asymmetry, they might be directly connected. Cognitive models of speech perception such as the "ease of language understanding" model (Rönnberg et al., 2013) suggest that speech signal degradations might activate top-down compensatory processes, which can cause an additional cognitive load. This could mean that the two stimulus degradations used in our study to create perceptual load elicited different extents of REA because they caused different amounts of cognitive load, thus somewhat blurring the dividing line between bottom-up and top-down factors.

Error Analysis

We hypothesized that the REA can be attributed to a decreased rate of random errors for the right compared to the left side due to the left-hemispheric dominance for speech signals (cf. structural model, Kimura, 1967). Alternatively, it appeared conceivable that an attentional bias to the right (cf. Kinsbourne, 1970) reduces the rate of confusion errors when the target is presented from this side of space.

Indeed, the error analysis revealed that both the random and the confusion errors were lower for the right than for the left side, indicating that both structural asymmetries, as well as attentional biases, could play a role in the emergence of the REA. However, this explanation does not apply to the condition with low SPLs, where random errors did not show an advantage for either side. This latter observation is rather compatible with the interpretation that the reduced REA is caused by the decreased binaural separation of the left and right talker at low SPLs and the consequential reduction of the effect of structural asymmetries.

Confusion errors, which generally occurred less frequently than random errors, showed a similar pattern for the two challenges, that is, the low-level and vocoding conditions. This may be in line with an attentional advantage of the right side independent of the type of signal degradation (cf. Kinsbourne, 1970). The higher confusion rate for vocoding might be explained by the spectro-temporal degradation of important voice cues, such as the fundamental frequency and formants, which are much less affected by presenting the stimuli at low levels.

Limitations and Outlook

The study aimed to investigate the REA in static and dynamic cocktail-party situations. While the listening situation used in the current study can be considered to reflect a realistic multi-talker situation, it still differs from everyday scenarios in some regards. For instance, the matrix sentences used offer lower predictability than most utterances in real life, since natural language typically contains pronounced context effects. It is conceivable that high predictability facilitates speech recognition and therefore lowers the cognitive load, possibly resulting in a smaller REA.

The magnitude of the REA clearly depended on the different conditions considered in our study. Our primary aim was to assess the REA in dynamic compared to static situations. Indeed, our analysis revealed a significant condition × side interaction and showed a significant REA in the dynamic condition with vocoded stimuli. However, our sample size was not necessarily large enough to detect a significant REA per se, for example in the static conditions.

Furthermore, the magnitude of the REA might be prone to age effects. Only young adults with normal hearing participated in the present study. Thus, it remains unclear if our findings can be generalized to a larger population including listeners with hearing loss and other possibly age-related impairments such as cognitive decline. Behavioral and electrophysiological studies on dichotic speech perception found that the REA increases with age (for a review, see Martin & Jerger, 2005). This raises the question of what effect aging and hearing loss have in more cognitively demanding listening situations. For example, using a three-talker cocktail-party setup like in the present study, Wächtler et al. (2022) examined increases in speech recognition errors in dynamic relative to static situations. Their results showed that some of the high cognitive demands of the dynamic situation (in particular stimulus uncertainty) caused higher increases in error rates for older hearing-impaired listeners than for younger normal-hearing listeners. Whether the decreased ability to cope with these challenging situations and the consequential increased cognitive load lead to a larger REA remains to be investigated.

It also remains unclear if the REA is consistent over time or if adaptation to a crowded acoustic scene reduces laterality effects. Plotting its time course (not shown here) reveals a trend of the REA being particularly pronounced at the beginning and toward the end of the experimental session, while being weaker in the middle. One could explain these findings in terms of cognitive load: not being used to a task (beginning of the experiment) or being exhausted (toward the end of the experiment) may increase the REA due to higher cognitive load. However, since participants underwent the conditions in different orders, an insightful analysis of the REA's time course is not possible with the Latin square design employed in our study.

Summary and Conclusions

The aim of the present study was to investigate the effects of perceptual and cognitive load on the REA in a setting with competing talkers. It showed that

  • An REA can be observed in cocktail-party situations with talkers uttering full sentences, that is, in situations that are more realistic than the classic dichotic listening paradigm in terms of the sound field, segregation cues, and stimulus material.

  • The magnitude of the REA is affected by both bottom-up and top-down factors. The bottom-up factor was constituted by the two different signal degradations: processing speech stimuli with a noise vocoder caused a larger REA than presenting them at low SPLs. The influence of the top-down factor became evident in that the REA was only significant when cognitive load was high, that is, in the dynamic and not in the static situation.

  • Both random and confusion errors contribute to the REA. This suggests that both structural differences, as well as an attentional bias to the right side of space, might explain auditory asymmetry.

Supplemental Material

Supplemental material, sj-pdf-1-tia-10.1177_23312165231215916, for The Right-Ear Advantage in Static and Dynamic Cocktail-Party Situations by Moritz Wächtler, Pascale Sandmann and Hartmut Meister in Trends in Hearing.

Acknowledgements

Parts of this work were presented at the annual conference of the German Society of Audiology (DGA) 2023, 1–3 March 2023, Cologne, Germany.

Footnotes

The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Deutsche Forschungsgemeinschaft (grant no. ME 2751/3-2).

Data availability: Research data for this article are available from the corresponding author upon reasonable request.

Supplemental Material: Supplemental material for this article is available online.

References

  1. Andersson M., Llera J. E., Rimol L. M., Hugdahl K. (2008). Using dichotic listening to study bottom-up and top-down processing in children and adults. Child Neuropsychology, 14(5), 470–479. 10.1080/09297040701756925
  2. Asbjornsen A. E., Hugdahl K. (1995). Attentional effects in dichotic listening. Brain and Language, 49(3), 189–201. 10.1006/brln.1995.1029
  3. Barton K. (2023). MuMIn: Multi-model inference. https://CRAN.R-project.org/package=MuMIn
  4. Bates D., Mächler M., Bolker B., Walker S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1–48. 10.18637/jss.v067.i01
  5. Broadbent D. E. (1954). The role of auditory localization in attention and memory span. Journal of Experimental Psychology, 47(3), 191–196. 10.1037/h0054182
  6. Bronkhorst A. W. (2015). The cocktail-party problem revisited: Early processing and selection of multi-talker speech. Attention, Perception, & Psychophysics, 77(5), 1465–1487. 10.3758/s13414-015-0882-9
  7. Brungart D. S., Simpson B. D. (2007). Cocktail party listening in a dynamic multitalker environment. Perception & Psychophysics, 69(1), 79–91. 10.3758/BF03194455
  8. Bryden M. P. (1967). An evaluation of some models of laterality effects in dichotic listening. Acta Oto-Laryngologica, 63(2–3), 595–604. 10.3109/00016486709128792
  9. Bryden M. P., Munhall K., Allard F. (1983). Attentional biases and the right-ear effect in dichotic listening. Brain and Language, 18(2), 236–248. 10.1016/0093-934X(83)90018-4
  10. Chen L., Vroomen J. (2013). Intersensory binding across space and time: A tutorial review. Attention, Perception, & Psychophysics, 75(5), 790–811. 10.3758/s13414-013-0475-4
  11. Denk F., Ernst S. M. A., Ewert S. D., Kollmeier B. (2018). Adapting hearing devices to the individual ear acoustics: Database and target response correction functions for various device styles. Trends in Hearing, 22. 10.1177/2331216518779313
  12. Falkenberg L. E., Specht K., Westerhausen R. (2011). Attention and cognitive control networks assessed in a dichotic listening fMRI study. Brain and Cognition, 76(2), 276–285. 10.1016/j.bandc.2011.02.006
  13. Faul F., Erdfelder E., Buchner A., Lang A.-G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41(4), 1149–1160. 10.3758/BRM.41.4.1149
  14. Flöel A., Buyx A., Breitenstein C., Lohmann H., Knecht S. (2005). Hemispheric lateralization of spatial attention in right- and left-hemispheric language dominance. Behavioural Brain Research, 158(2), 269–275. 10.1016/j.bbr.2004.09.016
  15. Fox J., Weisberg S. (2019). An R companion to applied regression (3rd ed.). Sage. https://socialsciences.mcmaster.ca/jfox/Books/Companion/
  16. Fumero M. J., Marrufo-Pérez M. I., Eustaquio-Martín A., Lopez-Poveda E. A. (2022). Divided listening in the free field becomes asymmetric when acoustic cues are limited. Hearing Research, 416, 108444. 10.1016/j.heares.2022.108444
  17. Gaudrain E., Başkent D. (2015). Factors limiting vocal-tract length discrimination in cochlear implant simulations. The Journal of the Acoustical Society of America, 137(3), 1298–1308. 10.1121/1.4908235
  18. Godfrey J. J. (1974). Perceptual difficulty and the right ear advantage for vowels. Brain and Language, 1(4), 323–335. 10.1016/0093-934X(74)90010-8
  19. Hiscock M., Chipuer H. (1993). Children’s ability to shift attention from one ear to the other: Divergent results for dichotic and monaural stimuli. Neuropsychologia, 31(12), 1339–1350. 10.1016/0028-3932(93)90102-6
  20. Hiscock M., Kinsbourne M. (2011). Attention and the right-ear advantage: What is the connection? Brain and Cognition, 76(2), 263–275. 10.1016/j.bandc.2011.03.016
  21. Hublet C., Morais J., Bertelson P. (1976). Spatial constraints on focused attention: Beyond the right-side advantage. Perception, 5(1), 3–8. 10.1068/p050003
  22. Hugdahl K., Andersson L. (1986). The “forced-attention paradigm” in dichotic listening to CV-syllables: A comparison between adults and children. Cortex, 22(3), 417–432. 10.1016/S0010-9452(86)80005-3
  23. Hugdahl K., Brønnick K., Kyllingsbæk S., Law I., Gade A., Paulson O. B. (1999). Brain activation during dichotic presentations of consonant-vowel and musical instrument stimuli: A 15O-PET study. Neuropsychologia, 37(4), 431–440. 10.1016/S0028-3932(98)00101-8
  24. Hugdahl K., Westerhausen R. (2016). Speech processing asymmetry revealed by dichotic listening and functional brain imaging. Neuropsychologia, 93, 466–481. 10.1016/j.neuropsychologia.2015.12.011
  25. Hurvich C. M., Tsai C.-L. (1989). Regression and time series model selection in small samples. Biometrika, 76(2), 297–307. 10.1093/biomet/76.2.297
  26. Isaacs K. L., Barr W. B., Nelson P. K., Devinsky O. (2006). Degree of handedness and cerebral dominance. Neurology, 66(12), 1855–1858. 10.1212/01.wnl.0000219623.28769.74
  27. Kimura D. (1961a). Cerebral dominance and the perception of verbal stimuli. Canadian Journal of Psychology/Revue Canadienne de Psychologie, 15(3), 166–171. 10.1037/h0083219
  28. Kimura D. (1961b). Some effects of temporal-lobe damage on auditory perception. Canadian Journal of Psychology/Revue Canadienne de Psychologie, 15(3), 156–165. 10.1037/h0083218
  29. Kimura D. (1967). Functional asymmetry of the brain in dichotic listening. Cortex, 3(2), 163–178. 10.1016/S0010-9452(67)80010-8
  30. Kinsbourne M. (1970). The cerebral basis of lateral asymmetries in attention. Acta Psychologica, 33, 193–201. 10.1016/0001-6918(70)90132-0
  31. Knecht S., Dräger B., Deppe M., Bobe L., Lohmann H., Flöel A., Ringelstein E.-B., Henningsen H. (2000). Handedness and hemispheric language dominance in healthy humans. Brain, 123(12), 2512–2518. 10.1093/brain/123.12.2512
  32. Kollmeier B., Warzybok A., Hochmuth S., Zokoll M. A., Uslar V., Brand T., Wagener K. C. (2015). The multilingual matrix test: Principles, applications, and comparison across languages: A review. International Journal of Audiology, 54(Suppl. 2), 3–16. 10.3109/14992027.2015.1020971
  33. Kompus K., Specht K., Ersland L., Juvodden H. T., van Wageningen H., Hugdahl K., Westerhausen R. (2012). A forced-attention dichotic listening fMRI study on 113 subjects. Brain and Language, 121(3), 240–247. 10.1016/j.bandl.2012.03.004
  34. Lenth R. V. (2022). emmeans: Estimated marginal means, aka least-squares means. https://CRAN.R-project.org/package=emmeans
  35. Lin G., Carlile S. (2015). Costs of switching auditory spatial attention in following conversational turn-taking. Frontiers in Neuroscience, 9, 1–11. 10.3389/fnins.2015.00124
  36. Martin J. S., Jerger J. F. (2005). Some effects of aging on central auditory processing. The Journal of Rehabilitation Research and Development, 42(4, Suppl. 2), 25. 10.1682/JRRD.2004.12.0164
  37. McGettigan C., Scott S. K. (2012). Cortical asymmetries in speech perception: What’s wrong, what’s right, and what’s left? Trends in Cognitive Sciences, 16(5), 269–276. 10.1016/j.tics.2012.04.006
  38. Meister H., Wenzel F., Gehlen A. K., Kessler J., Walger M. (2020). Static and dynamic cocktail party listening in younger and older adults. Hearing Research, 395, 108020. 10.1016/j.heares.2020.108020
  39. Morais J. (1974). The effects of ventriloquism on the right-side advantage for verbal material. Cognition, 3(2), 127–139. 10.1016/0010-0277(74)90016-X
  40. Oldfield R. C. (1971). The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia, 9(1), 97–113. 10.1016/0028-3932(71)90067-4
  41. Paladini R. E., Wieland F. A. M., Naert L., Bonato M., Mosimann U. P., Nef T., Müri R. M., Nyffeler T., Cazzoli D. (2020). The impact of cognitive load on the spatial deployment of visual attention: Testing the role of interhemispheric balance with biparietal transcranial direct current stimulation. Frontiers in Neuroscience, 13, 1391. 10.3389/fnins.2019.01391
  42. Penner I.-K., Schläfli K., Opwis K., Hugdahl K. (2009). The role of working memory in dichotic-listening studies of auditory laterality. Journal of Clinical and Experimental Neuropsychology, 31(8), 959–966. 10.1080/13803390902766895
  43. Pérez A., Peers P. V., Valdés-Sosa M., Galán L., García L., Martínez-Montes E. (2009). Hemispheric modulations of alpha-band power reflect the rightward shift in attention induced by enhanced attentional load. Neuropsychologia, 47(1), 41–49. 10.1016/j.neuropsychologia.2008.08.017
  44. Poeppel D. (2003). The analysis of speech in different temporal integration windows: Cerebral lateralization as ‘asymmetric sampling in time’. Speech Communication, 41(1), 245–255. 10.1016/S0167-6393(02)00107-3
  45. R Core Team (2022). R: A language and environment for statistical computing. R Foundation for Statistical Computing. https://www.R-project.org/
  46. Rönnberg J., Lunner T., Zekveld A., Sörqvist P., Danielsson H., Lyxell B., Dahlström O., Signoret C., Stenfelt S., Pichora-Fuller M. K., Rudner M. (2013). The ease of language understanding (ELU) model: Theoretical, empirical, and clinical advances. Frontiers in Systems Neuroscience, 7, 31. 10.3389/fnsys.2013.00031
  47. Sandmann P., Eichele T., Specht K., Jäncke L., Rimol L. M., Nordby H., Hugdahl K. (2007). Hemispheric asymmetries in the processing of temporal acoustic cues in consonant-vowel syllables. Restorative Neurology and Neuroscience, 25(3–4), 227–240.
  48. Sequeira S. D. S., Specht K., Hämäläinen H., Hugdahl K. (2008). The effects of different intensity levels of background noise on dichotic listening to consonant-vowel syllables. Scandinavian Journal of Psychology, 49(4), 305–310. 10.1111/j.1467-9450.2008.00664.x
  49. van den Noort M., Specht K., Rimol L. M., Ersland L., Hugdahl K. (2008). A new verbal reports fMRI dichotic listening paradigm for studies of hemispheric asymmetry. NeuroImage, 40(2), 902–911. 10.1016/j.neuroimage.2007.11.051
  50. Wächtler M., Kessler J., Walger M., Meister H. (2022). Revealing perceptional and cognitive mechanisms in static and dynamic cocktail party listening by means of error analyses. Trends in Hearing, 26. 10.1177/23312165221111676
  51. Wächtler M., Wenzel F., Kessler J., Walger M., Meister H. (2020). What are some of the challenges in dynamic cocktail party listening? Speech in Noise Workshop 2020, 9–10 January 2020, Toulouse, France. https://zenodo.org/record/8101983
  52. Wagener K., Brand T., Kollmeier B. (1999). Entwicklung und Evaluation eines Satztests für die deutsche Sprache I: Design des Oldenburger Satztests [Development and evaluation of a German sentence test - Part I: Design of the Oldenburg sentence test]. Zeitschrift für Audiologie (Audiological Acoustics), 38(1), 4–15.
  53. Westerhausen R., Moosmann M., Alho K., Belsby S.-O., Hämäläinen H., Medvedev S., Specht K., Hugdahl K. (2010). Identification of attention and cognitive control networks in a parametric auditory fMRI study. Neuropsychologia, 48(7), 2075–2081. 10.1016/j.neuropsychologia.2010.03.028
