The Journal of the Acoustical Society of America
2018 Apr 17;143(4):2119–2127. doi: 10.1121/1.5030919

Level dominance effect and selective attention in a dichotic sample discrimination task

Alison Y. Tan and Bruce G. Berg
PMCID: PMC6909989  PMID: 29716301

Abstract

Differences in individual listening patterns are reported for a dichotic sample discrimination task. Seven tones were drawn from normal distributions with means of 1000 or 1100 Hz on each trial. Even-numbered tones (2, 4, and 6) and odd-numbered tones (1, 3, 5, and 7) were drawn, respectively, from distributions with 50-Hz and 200-Hz standard deviations. Task difficulty was manipulated by presenting odd and even tones at different intensities. In easy conditions, the highly informative and less informative tones were presented at 70 dB and 50 dB, respectively. In difficult conditions, the highly informative and less informative tones were presented at 50 dB and 70 dB, respectively. Participants judged whether the sample was drawn from the high- or low-mean distribution. Decision weights, efficiency, and sensitivity showed a range of abilities to attend to the highly informative tones, with d′ ranging from 0.7 to 2.4. Most listeners showed a left-ear advantage, while no listeners showed a right-ear advantage. Some listeners, but not all, overcame the loudness dominance effect, retaining the ability to selectively attend to the quiet tones in difficult conditions. These findings show that, for some listeners, an attentional strategy in dichotic listening can overcome the loudness dominance effect.

I. INTRODUCTION

An individual with normal hearing should, ideally, be able to attend to either ear equally well. Surprisingly, however, there are instances where individuals are unable to do so. This phenomenon has been termed an "ear advantage," most notably the right ear advantage (REA) for verbal stimuli (Kimura, 1961). While ear advantage has been well-documented in the literature (Efron and Yund, 1974; Bryden, 1970; Bryden et al., 1983; Brancucci and Martini, 1999; Hiscock et al., 1999), there is limited research on how attentional strategy interacts with laterality, in particular on ear advantages for nonverbal stimuli.

For verbal stimuli, Broadbent (1954) first introduced a dichotic listening task in order to study attention and attention switching in the auditory domain. By presenting different information to the two ears over headphones, or to different locations with loudspeakers, Broadbent found that spatial separation helped listeners attend to two or more sources. Kimura (1961) used the dichotic method to show that responses reflected better performance in the right ear than the left for verbal stimuli. There is a general consensus that the REA is due to left-hemispheric language lateralization. Work since Kimura's original finding indicates that attentional effects in dichotic listening influence ear advantage in ways that laterality alone cannot account for (Bryden et al., 1983; Asbjørnsen and Hugdahl, 1995; Moncrieff, 2011). This work commonly uses performance accuracy, or percent correct, to establish which ear listeners attend to more; performance accuracy, however, does not offer insights into listening behavior. These studies report major individual differences when attention is focused by instruction, yet the extent to which volitional control can shift attentional strategy and ear advantage remains unclear.

A phenomenon that is inconsistent with volitional control is level dominance, an effect in which attention is drawn to the loudest component in an acoustic display, even when this strategy leads to suboptimal performance. Berg (1990) estimated decision weights in a diotic sample discrimination task where listeners are asked to determine whether a sample (i.e., a sequence of tones) was drawn from a low-frequency distribution or from a high-frequency distribution. The sensitivity indices for even- and odd-numbered tones were d′ = 2 and d′ = 1, respectively, so the even tones were always more "informative." Berg found that listeners placed greater emphasis on more intense tones, even if they came from less reliable samples. This robust effect has been studied in various contexts, including long temporal gaps between tone bursts (Turner and Berg, 2007), tones sampled with different overall levels and large level perturbations (Lutfi and Jesteadt, 2006), and identification of sound sources that form basic auditory objects (Lutfi et al., 2008). Additional cases, such as loudness judgments of wide-band noise sequences (Oberfeld and Plank, 2011) and detection of the order of statistical changes in a sound stream (Richards et al., 2013), also yielded the same strategy in which listeners attended to the loudest stimuli in the display.

This study develops a dichotic sample discrimination task in the interest of initiating a new body of data on selective auditory attention. The findings broaden the empirical view of both level dominance and ear advantage. Sample discrimination tasks are founded on signal detection theory; a useful feature is that the value, or informativeness, of individual observations can be quantified and precisely controlled. This allows the development of a useful psychophysical measure of efficiency in attending to a targeted ear. In the current task, seven tones are sampled from normal distributions with a common mean of either 1000 Hz or 1100 Hz. The informativeness of the tones is controlled by setting the values of the standard deviations. In reference to the order of presentation, even-numbered tones are always the most informative (d′ = 2) and always presented to the target ear. Odd-numbered tones are less informative (d′ = 0.5) and always presented to the non-target ear. After hearing a sequence of tones alternating between the ears, listeners report whether the tones were sampled from the distributions with high or low means.

The level dominance effect is used to manipulate task difficulty. An interesting challenge for listeners is to present quiet, informative tones to the target ear and loud, less informative tones to the non-target ear. Compared to a condition in which the informative tones are loud and the less informative tones are quiet, a level dominance effect should be clearly evident from decreases in performance measures. The present study's findings reveal a range of individual differences that contrasts sharply with the ubiquity of the level dominance effect observed previously (Berg, 1990; Lutfi et al., 2008). A modest left-ear advantage is another unexpected finding.

Using a sample discrimination task to investigate selective auditory attention in dichotic listening introduces an arsenal of quantitative techniques that have been developed within the context of the signal detection paradigm (Berg, 1989, 1990; Lutfi, 1989, 1990a,b, 1992; Richards et al., 2013). Overall performance for each listener, quantified by the sensitivity index d′, can be normatively compared to ideal performance, d′opt. A set of estimated decision weights that quantify the relative influence of each observation on decisions can be compared to a derived set of optimal weights. Consideration can be given to the pattern of weights across the sequential observations, potentially revealing a primacy effect, for instance. The overall weighting efficiency, quantified by the parameter ηwgt, provides a performance measure that is theoretically unaffected by internal noise. This measure is also well-suited for quantifying an individual's ability to alternate selective attention between the two ears in a manner that favors the more informative components of the stimulus. Its usefulness will become apparent in light of the unexpected range of individual differences encountered.

II. METHODS

Eight subjects from the University of California, Irvine, participated in the dichotic sample discrimination task, including one of the authors. Subjects ranged in age from 18 to 28 yr and were screened for normal hearing. Listeners were either volunteers or paid at an hourly rate for their participation. All displayed less than 20 dB hearing loss (HL) for pure tones ranging from 0.5 to 8 kHz.

Stimuli were generated with matlab R2008a running on a PC with Windows 7. The waveforms were played through a two-channel D/A converter (0202 USB 2.0 Audio Interface; E-MU Systems) at a 44.1 kHz sampling rate. These were passed through a fixed attenuator for calibration. A TDT System II headphone buffer split the signals to both channels of the headphones. The sounds were delivered through Sennheiser HD414SL headphones. The subject was seated in a single-walled sound-attenuating chamber (IAC). Feedback was presented on a computer monitor, and responses were collected with a standard computer keyboard.

On each trial, subjects were presented a sequence of seven tones with the same mean frequency. Each tone was 60 ms in duration, with 20 ms onset and offset linear ramps. The inter-tone interval (ITI) was 0 ms: the offset of a tone in one ear was coincident with the onset of the next tone in the other ear. Listeners were instructed about which ear to attend to before each block of trials; the target ear always corresponded to the ear that received the more informative tones. On any given trial, tones were sampled from either a low distribution (μL = 1000 Hz) or a high distribution (μH = 1100 Hz) with equal probability. The frequencies of the odd tones (first, third, fifth, and seventh) were sampled from a normal distribution with a standard deviation of 200 Hz (i.e., d′ = 0.5). Even tones (second, fourth, and sixth) were sampled from a distribution with a standard deviation of 50 Hz (i.e., d′ = 2). After presentation of the tone sequence, listeners indicated whether the tones were sampled from the high or low distribution. They received immediate feedback on their accuracy after each trial.
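
As an illustration, the stimulus sampling described above can be sketched in a few lines. This is a hypothetical reconstruction (the original stimuli were generated in matlab), and the constant and function names are ours.

```python
import numpy as np

MEANS = {"low": 1000.0, "high": 1100.0}  # mean frequency (Hz) of each distribution
SIGMA_ODD = 200.0    # SD for odd-numbered tones  -> d' = 100/200 = 0.5
SIGMA_EVEN = 50.0    # SD for even-numbered tones -> d' = 100/50  = 2

def sample_trial(rng):
    """Draw the seven tone frequencies for one trial.

    The generating distribution (low or high mean) is chosen with equal
    probability. Odd-numbered tones (positions 1, 3, 5, 7) come from the
    broad, less informative distribution; even-numbered tones (positions
    2, 4, 6) come from the narrow, more informative one.
    """
    dist = rng.choice(["low", "high"])
    mu = MEANS[dist]
    sigmas = [SIGMA_ODD if pos % 2 == 1 else SIGMA_EVEN for pos in range(1, 8)]
    freqs = rng.normal(mu, sigmas)  # one frequency per temporal position
    return dist, freqs

dist, freqs = sample_trial(np.random.default_rng(0))
```

The per-tone sensitivity follows directly as d′ = (μH − μL)/σ, which is how the 50-Hz and 200-Hz standard deviations yield d′ = 2 and d′ = 0.5.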

Task difficulty was manipulated by presenting the odd and even tones at different intensities. A 20 dB difference in level between the odd and even tones was maintained across all conditions. In the easy conditions, the more informative and less informative tones were presented dichotically at 70 dB and 50 dB, respectively. There were two easy conditions: one in which the right ear received the louder, more informative tones, Right Loud (R-L), and another in which the left ear received the louder, more informative tones, Left Loud (L-L). The harder conditions presented the more informative and less informative tones at 50 dB and 70 dB, respectively, so that the even tones were quieter and sent to the target ear: Right Quiet (R-Q) and Left Quiet (L-Q).

Decision weights, which quantify the relative contribution of each tone to a listener's decisions, were estimated using a technique based on signal detection theory (Berg, 1989). This method assumes that observations are combined into a decision statistic formed from the weighted average of the seven observations, (1/n)∑i ai(xi + εi), where n is the number of tones, xi, i = 1,…, n, is the frequency of the ith tone, ai is the associated weight, and εi represents internal noise, assumed to be normally distributed with a mean of zero and a variance of σ²int. Maximum likelihood estimation is used to find the set of weights, normalized to sum to one, that best predicts a listener's trial-by-trial decisions. Two sets of weights, one for each type of trial (i.e., high or low distribution), are estimated for each condition. Data are collected until the root-mean-square (rms) difference between the two sets is less than or equal to 0.06, a stopping criterion that balances accuracy against the costs of data collection.
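
The weight-estimation step can be illustrated with a simplified stand-in for the maximum likelihood procedure: a regression of trial-by-trial responses on the tone frequencies, fit to the trials of a single generating distribution (one set of weights per trial type, as in the text). This is a sketch, not Berg's (1989) actual algorithm; the logistic link substitutes for the probit model implied by Gaussian internal noise, and all names are ours.

```python
import numpy as np

def estimate_weights(X, resp, iters=2000, lr=2.0):
    """Estimate relative decision weights from trial-by-trial decisions.

    X    : (n_trials, n_tones) tone frequencies for trials of ONE
           generating distribution
    resp : (n_trials,) array of 0/1 responses ("high" = 1)

    Fits P(resp = 1) = sigmoid(Z @ c) by gradient ascent on the mean
    log-likelihood, where Z is the standardized frequency matrix, then
    maps the coefficients back to the raw frequency scale and
    normalizes the weights to sum to one.
    """
    mu, sd = X.mean(axis=0), X.std(axis=0)
    Z = (X - mu) / sd                  # centering absorbs the decision criterion
    c = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-np.clip(Z @ c, -30.0, 30.0)))
        c += lr * Z.T @ (resp - p) / len(resp)  # mean log-likelihood gradient
    a = c / sd                         # coefficients for the raw frequencies
    return a / a.sum()                 # normalized to sum to one, as in the text
```

With a few thousand simulated trials from an observer whose true weights favor the even-numbered tones, this recovers the same ordering of weights across temporal positions.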

The number of 100-trial blocks required to reach the rms stopping criterion varied greatly across listeners, ranging from 11 to 60 across all experiments. Training effects do not appear to be a source of this variability. While collecting data, potential training effects are minimized by iteratively omitting one block at a time from the analysis, starting with the first block. Weights are re-estimated and the rms is recalculated for each iteration. If early and late listening strategies differ, then omitting the initial blocks should decrease the variability of the estimates, as reflected by the rms, because the reduced data set becomes more homogeneous. In cases where this occurs, blocks are omitted until the rms begins to increase. If the rms exceeds 0.06 for the reduced data set, additional data are collected. About half of the reported results are based on reduced data sets. For the remaining cases, the rms increases as soon as the initial blocks are discarded, implying a more consistent listening strategy.

III. RESULTS

A. Decision weights and efficiency measures

Figure 1 shows weight estimates as a function of temporal position for conditions L-L and R-L. Optimal weights are shown by the thick black line. These plots show that listeners are capable of attending to the loud informative tones. Most listeners lose efficiency by giving too much weight to the first two tones, particularly in the R-L condition.

FIG. 1. Decision weights for all individuals in R-L and L-L. Optimal weights are plotted as a thick black line with peaks at the even tones.

Weight estimates for the R-Q and L-Q conditions are shown in Fig. 2. Listeners are categorized into those who are able to attend to the quiet, most informative tones, designated R-Qable and L-Qable [Figs. 2(a) and 2(c)], and those who are unable to attend to the quiet tones [Figs. 2(b) and 2(d)]. Stated differently, the latter group shows a strong level dominance effect, whereas the former does not. To our knowledge, this is the first reported case in which a loudness dominance effect was not observed for all listeners. There also seem to be two different manifestations of loudness dominance. A straightforward explanation for the pattern of weights shown in Fig. 2(b) is that attention is given to the loud tones instead of the quiet tones. An explanation of the results for condition L-Q, shown in Fig. 2(d), is more tenuous: instead of listening to the loud tones, listeners appear to adopt a different, though still inappropriate, strategy of giving too much weight to the first tone of the sequence. By definition, the result is considered a manifestation of loudness dominance because performance is superior in L-L compared to L-Q.

FIG. 2. Weights for conditions Right Quiet and Left Quiet. Listeners are grouped by those who do not show a level dominance effect [(a), (c)] and those that do [(b), (d)]. Ideal weights are shown in black with peaks at the even tones.

Table I lists the means of three performance measures for each condition. For the difficult conditions, listeners are divided into groups that do not show a loudness dominance effect (R-Qable and L-Qable) and those that do (R-Q and L-Q), as shown in Fig. 2. Data for the first measure, d′, are used for the statistical analysis described below. Estimates of d′ are obtained by averaging the d′ calculated for each 100-trial block. A single instance of an incomplete response matrix for a block of trials (i.e., no false alarms) is corrected by adding a single response to the empty cell.
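
For concreteness, the block-wise d′ computation and the empty-cell correction can be sketched as follows. This is our reading of the correction (add one response to the empty cell), not code from the study, and the function name is ours.

```python
from statistics import NormalDist

def dprime_block(hits, misses, false_alarms, correct_rejections):
    """d' for one block from the four cells of the response matrix.

    If any cell is empty (e.g., no false alarms), a single response is
    added to that cell before the hit and false-alarm rates are computed,
    keeping the z-transform finite.
    """
    cells = [hits, misses, false_alarms, correct_rejections]
    cells = [c + 1 if c == 0 else c for c in cells]
    h, m, fa, cr = cells
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    return z(h / (h + m)) - z(fa / (fa + cr))

d = dprime_block(45, 5, 10, 40)  # hit rate 0.90, false-alarm rate 0.20 -> d' ~ 2.12
```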

TABLE I. Mean estimates of d′, ηwgt, and ηnoise for each condition.

             d′      ηwgt     ηnoise
R-Qable     2.26    0.363    0.316
R-Q         1.01    0.118    0.418
L-Qable     2.29    0.430    0.297
L-Q         1.64    0.211    0.379
R-L         2.08    0.592    0.297
L-L         2.22    0.698    0.238

The second measure, ηwgt, quantifies the loss in efficiency that results from using non-optimal weights (Berg, 1990). It is calculated from the terms on the right-hand side of the equation

ηwgt = (d′wgt)²/(d′ideal)² = ∑i âi²σi² / ∑i ai²σi², (1)

where d′ideal is the ideal performance, âi are the ideal weights for an optimal observer, and σi is the standard deviation of the stimulus distribution at the ith temporal position. d′wgt represents the performance of a hypothetical observer who uses the listener's observed weights, ai, but has no internal noise. As expected, the highest estimates of ηwgt are obtained for the easiest conditions, R-L and L-L. Although listeners grouped as R-Qable and L-Qable do not show a strong loudness dominance effect, their ηwgt is approximately half that of R-L and L-L. Weighting efficiency is approximately halved again in comparing the R-Qable and L-Qable groups to the R-Q and L-Q groups.

The third measure, ηnoise, estimates the loss in efficiency attributable to internal noise (Berg, 1990). It is obtained by comparing the performance of a hypothetical listener who uses the same weights as the listener, but has no internal noise, to the listener's actual performance, according to the equation

ηnoise = (d′obs/d′wgt)², (2)

where d′obs is the listener's observed d′. Table I shows that ηnoise has less range and appears more stable than ηwgt, which suggests that internal noise is relatively constant across conditions.
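
Equations (1) and (2) can be computed directly. The sketch below assumes the seven-position stimulus of Experiment 1 and takes the optimal weights as proportional to the inverse variance at each position; the names are ours.

```python
import numpy as np

# Standard deviation (Hz) at each temporal position: odd tones 200, even tones 50
SIGMAS = np.array([200.0, 50.0, 200.0, 50.0, 200.0, 50.0, 200.0])

def ideal_weights(sigmas):
    """Optimal weights for independent observations are proportional to
    the inverse variance at each position, normalized to sum to one."""
    w = 1.0 / sigmas**2
    return w / w.sum()

def eta_wgt(a, sigmas):
    """Eq. (1): weighting efficiency for observed weights a (summing to one)."""
    ahat = ideal_weights(sigmas)
    return np.sum(ahat**2 * sigmas**2) / np.sum(np.asarray(a)**2 * sigmas**2)

def eta_noise(d_obs, d_wgt):
    """Eq. (2): efficiency loss attributable to internal noise."""
    return (d_obs / d_wgt) ** 2

# The ideal weighting profile yields eta_wgt = 1; a flat profile loses efficiency.
flat = np.full(7, 1.0 / 7.0)
```

For example, a listener who weights all seven positions equally would have ηwgt well below one, because the high-variance odd positions are over-weighted relative to the ideal.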

B. Bayesian analysis

The standard statistical analysis of the general linear model (GLM) has a number of well-known limitations, stemming from a basic inability to represent uncertainty about inferences or to quantify evidence for and against hypotheses on the basis of data (e.g., Morey et al., 2016; Wagenmakers, 2007). To address these deficiencies, we conducted a Bayesian analysis using the JASP program (Love et al., 2015).

JASP provides a number of Bayesian measures for evaluating the evidence for hypotheses in terms of data, including Bayes factors, which are the Bayesian gold standard (Kass and Raftery, 1995). Since our interest is in a small number of hypotheses—the presence or absence of two main effects and their interaction—we focused on the posterior model probabilities, P(M|data). These probabilities summarize the information provided by all of the Bayes factors between all pairs of models, under the (reasonable, in our case) assumption that the models considered exhaust the theoretically interesting possibilities.

Specifically, we considered five models—null, ear, level, ear and level, and interaction—corresponding to the meaningful combinations of the presence or absence of the main effects and their interaction. The null model corresponds to the possibility that neither level (quiet or loud) nor ear (left or right) has an effect. The ear model corresponds to a main effect of ear. The level model corresponds to a main effect of level. The ear and level model corresponds to both main effects applying independently. The interaction model corresponds to an effect of ear that depends on the level manipulation.

Our analysis assumes each of these models is a priori equally likely, so the posterior model probabilities reflect the evidence the data provide for and against each. These posterior probabilities automatically take into account both the goodness-of-fit and the complexity of each model, and they lie on the meaningful scale of probabilities, calibrated by betting. The posterior model probabilities for each subject analyzed individually, along with other output from the JASP program, are shown in Table II. Posterior probabilities less than one-millionth are denoted by "–"; Bayes factors below 0.001 indicate very little evidence for a model relative to the null, while those greater than 1000 indicate overwhelming evidence.
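
Under equal prior model probabilities, the posterior model probabilities follow directly from the Bayes factors of each model against the null: P(Mk|data) = BFk0 / ∑j BFj0. A minimal sketch, using the BF10 values reported for listener VL in Table II, reproduces that row's posterior probabilities; the function name is ours.

```python
def posterior_model_probs(bf10):
    """Posterior model probabilities under equal priors, computed from the
    Bayes factor of each model against the null (the null itself has BF = 1)."""
    total = sum(bf10.values())
    return {model: bf / total for model, bf in bf10.items()}

# BF10 values from listener VL's row of Table II
probs = posterior_model_probs({
    "null": 1.00,
    "ear": 2.72,
    "level": 0.353,
    "ear and level": 1.01,
    "interaction": 4.81,
})
# -> interaction ~ 0.486 and ear ~ 0.275, matching VL's P(M|data) row
```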

TABLE II. Bayesian ANOVA results for individuals.

                  Null     Ear      Level    Ear and Level   Interaction
JS  P(M|data)     –        –        <0.001   <0.001          0.999
    BF10          1.00     0.717    >1000    >1000           >1000
AL  P(M|data)     –        –        <0.001   <0.001          1.00
    BF10          1.00     1.00     >1000    >1000           >1000
AT  P(M|data)     –        –        0.256    0.073           0.67
    BF10          1.00     0.285    >1000    >1000           >1000
VL  P(M|data)     0.101    0.275    0.036    0.102           0.486
    BF10          1.00     2.72     0.353    1.01            4.81
EG  P(M|data)     –        –        0.160    0.626           0.213
    BF10          1.00     1.11     >1000    >1000           >1000
JF  P(M|data)     0.608    0.188    0.144    0.045           0.016
    BF10          1.00     0.309    0.236    0.073           0.027
JZ  P(M|data)     –        –        0.493    0.386           0.121
    BF10          1.00     0.578    >1000    >1000           >1000
PN  P(M|data)     0.190    0.391    0.091    0.195           0.132
    BF10          1.00     2.06     0.479    1.02            0.696

JS, AL, and AT had a posterior probability for the interaction model of 67% or greater, as shown in Table II. Of the five available models, the data make the interaction model 67% certain to be the best model for AT and 99% certain for JS and AL. With reference to Fig. 3 for the direction of the effect, moving from the left to the right ear improves performance in the easy conditions (L-L, R-L) while markedly worsening performance in the difficult quiet conditions (L-Q, R-Q). All three listeners thus share a pattern in which the level manipulation amplifies the difference between ears: relative to the left ear, the quiet condition becomes harder and the loud condition becomes easier in the right ear.

FIG. 3. Individual d′ values for Quiet and Loud conditions for both ears, with error bars representing one standard error. Listeners PN and JF show no main effect of level or ear.

On this basis, with reference to Fig. 3, we conclude that the difference between easy and difficult conditions is greater for the right ear (i.e., R-Q compared to R-L) than for the left ear (i.e., L-Q compared to L-L).

VL had a posterior probability for the interaction model of about 49%, but also a 28% posterior probability for the ear model as shown in Table II. Thus, there is uncertainty as to which of these two models provides a better account of VL's performance, and it makes sense to consider how their performance would be interpreted in each case. Under the interaction model, VL shows a decrease in performance in the easy condition moving from the left to right ear, while performance in the difficult condition improved slightly as shown in Fig. 3. This is unexpected given that the loud condition is easier for most subjects. Under the ear model, VL shows better performance in her left ear than the right ear as shown in Fig. 3.

EG, as shown in Table II, had posterior model probabilities of 63% for the ear and level model and 21% for the interaction model. Once again, this result expresses the uncertainty that is a natural feature of Bayesian analysis: the data do not unequivocally support one model over all others, but are consistent with two of them. The ear and level model is most likely, with lesser evidence for the interaction model. Under the ear and level model, performance is better in the left ear at both levels and better in the loud than in the quiet conditions in both ears.

JF had a posterior model probability of 61% for the null model. This provides strong, but not completely conclusive, evidence for the absence of any effect of level or ear. In other words, the most likely account is that JF's performance is stable across the quiet and loud conditions in both ears, indicating that she can selectively attend to either ear equally well (see Table II).

JZ had posterior probabilities of 49% for the level model and 38% for the ear and level model. Under the level model, the loud condition results in better performance for both ears. The ear and level model adds that performance is slightly better in the left ear for both the quiet and loud conditions (see Table II).

PN had the largest posterior probability of 39% for the ear model, but also had 19% posterior probabilities for the null model and the ear and level model (Table II). Thus, the data are quite ambiguous as to the best account of performance. Under the ear model, the left ear has a better performance than the right ear for both loud and quiet conditions. This can be seen in Fig. 3.

These individual analyses provide greater quantitative detail for qualitative binaural models. While clear individual listening patterns exist, there is overwhelming evidence that, contrary to what one might expect, listening ability between the two ears is not equal for all listeners. The analyses show that half of the listeners in this group are best described by the interaction model, under which performance is better in the left than the right ear in the difficult conditions; importantly, the relative advantage of the left ear grows with task difficulty.

Two additional listeners, EG and PN, showed better performance in the left ear than the right under their best-supported models, while JZ was most affected by the level manipulation. Only one listener, JF, was best described by the null model, indicating that she was able to selectively attend to both ears equally well, while the others showed a marked preference for the left.

IV. FOLLOW-UP: LIMITS TO THE LEVEL DOMINANCE EFFECT

The results are atypical in that some listeners show a level dominance effect and some do not. In a diotic sample discrimination task, Turner and Berg (2007) found a loudness dominance effect for all listeners using a seven-tone sample with an ITI of 200 ms. For two listeners, additional testing revealed that ITIs of 500 ms and 700 ms, respectively, were required to overcome the loudness dominance effect. The increased trial duration resulted in a strong recency effect that countered any gain in overall performance. Even given the usual cautions about across-study comparisons, the differences between diotic and dichotic presentation are evidently large. Our inability to predict this clear distinction suggests a gap in the empirical record. To learn more, we attempted to find conditions that mitigate the inefficient attentional strategy of the listeners affected by loudness dominance. Listeners were selected for the follow-up study on the basis of their non-optimal weighting strategies in the L-Q and R-Q conditions. In addition to increasing the ITI, the number of tones was reduced in some cases in an attempt to circumvent primacy or recency effects.

A. Results

Decision weights for AT and AL are shown in Fig. 4 for condition R-Q with an ITI of 300 ms. For comparison, weights obtained with an ITI of 0 ms are replotted from Fig. 2. The absence of the loudness dominance effect when the ITI is increased to 300 ms is evident from the pattern of weights. The performance measures for both listeners also reflect an improvement in weighting strategy as the ITI increases from 0 to 300 ms. Estimates of d′ increase by a factor of two (from 0.95 to 1.90 for AT; from 1.07 to 1.93 for AL). The gain in weighting efficiency is even greater, with ηwgt increasing from 0.119 to 0.772 for AT and from 0.119 to 0.491 for AL. However, the overall pattern of weights shows a noticeable recency effect at 300 ms, which may cause some loss of weighting efficiency. Some decay of initial information and over-weighting of later information is expected, given that the trial duration is 2.2 s.

FIG. 4. Weights for condition R-Q at ITIs of 0 ms and 300 ms for subjects AT and AL. At 0 ms, both listeners show non-optimal weights with peaks at the odd-numbered tones. At 300 ms, both d′ and ηwgt reflect the improved weights.

Increasing the ITI to 300 ms did not benefit EG. The pattern of estimated weights, shown in Fig. 5, reveals a pronounced recency effect, with the last loud tone given a disproportionate amount of weight for both R-Q and L-Q. Similar estimates of d′ are obtained for L-Q at 0 and 300 ms, 1.38 and 1.42, respectively. For R-Q, d′ decreases from 1.16 to 0.735 when the ITI is increased to 300 ms. Estimates of ηwgt for L-Q at 0 and 300 ms are 0.141 and 0.093, respectively; for R-Q, they are 0.137 and 0.046.

FIG. 5. Weights for conditions L-Q and R-Q at 300 ms for subject EG.

One cannot infer from the data whether the extreme recency effect shown by EG is due to a strategy of over-attending to the last, loud tone or whether it reflects the decay of the information provided by the initial tones over a lengthy trial (i.e., 2.2 s). In an ad hoc experiment, the cognitive load was lowered by reducing the number of sampled tones to three. The first and last tones were sampled from distributions with σ = 200 Hz and presented to the non-target ear at 70 dB sound pressure level (SPL). The informative second tone was sampled from a distribution with σ = 50 Hz and presented to the target ear at 50 dB SPL. The pattern of weight estimates in Fig. 6 suggests no loudness dominance or recency effect with a 300 ms ITI when the informative tone is presented to the left ear (i.e., L-Q). For R-Q with a 300 ms ITI, a recency effect is evident that is most likely attributable to loudness dominance, because the low cognitive load makes an explanation based on short-term memory decay less likely. For L-Q, ηwgt is 0.711 with d′ = 2.38, compared to R-Q with ηwgt = 0.296 and d′ = 1.14. Performance in the L-Q condition remains superior even when the ITI is increased to 500 ms for R-Q, which yields ηwgt = 0.545 and d′ = 1.36. This difference between ears might be considered another manifestation of a left-ear advantage.

FIG. 6. Decision weights for conditions L-Q and R-Q at 300 ms with three tones for subject EG.

V. DISCUSSION

Decision weights provide a rich description of listening behavior that can reveal individual listener strategies and provide a unique observation of selective auditory attention. While previous studies that report a left ear advantage (LEA) typically average over subjects' reaction times (Brancucci and Martini, 1999; D'Anselmo et al., 2016) and performance accuracy (Boucher and Bryden, 1997; Moncrieff, 2011; Morton and Siegel, 1991), the findings presented here show an unexpectedly diverse range of performance across listeners and evidence that some listeners show a LEA for nonverbal stimuli.

Idiosyncratic, non-optimal patterns of selective attention, such as primacy effects [see Figs. 2(b) and 2(d)] and recency effects (see Figs. 4 and 5), are common findings in sample discrimination tasks. Similar findings are evident when listeners judge level changes in a sequence of noise (Oberfeld et al., 2012; Pedersen and Ellermeier, 2007) or the impact of specific temporal segments on judgments of annoyance (Dittrich and Oberfeld, 2009). Individual differences, however, appear to be amplified by dichotic presentation. For conditions R-Q and L-Q in Experiment 1, the groups of most sensitive listeners exhibit more than twice the weighting efficiency of the least sensitive, with mean estimates of ηwgt of 0.363 versus 0.118 for the R-Q condition and 0.430 versus 0.211 for the L-Q condition. The differences appear attributable to central processing speed and memory capacity, because most of the poor performers can reach the weighting efficiency of the best listeners if the ITI is increased to 300 ms or more and the number of tones is reduced to three.

There also seem to be two different manifestations of loudness dominance. A straightforward explanation for the pattern of weights shown in Fig. 2(b) is that attention is given to the loud tones instead of the quiet tones. An explanation of the results for condition L-Q, shown in Fig. 2(d), is more tenuous. There seems to be no bias toward attending to the loud tones more than the quiet tones in L-Q. Instead, the pattern of weights for all three listeners reflects a primacy effect that may be related to short-term echoic memory or to a steady decline in attentional resources as the 420 ms trial proceeds. There are no obvious effects of intensity, either as a cue or as a distractor. The current hypothesis is that the inherent difficulty of attending to the quiet tones imposes a breakdown in the process of distributing attention. Instead of listening to the loud tones, however, listeners adopt a different, though still inappropriate, strategy of giving too much weight to the first tone of the sequence. Here, the result is considered a manifestation of loudness dominance because performance is superior in L-L compared to L-Q.

Listeners can also be categorized into those who display a near-optimal pattern of selective attention and those who display sharp deviations from optimal. The dichotomy is unusual given that in previous sample discrimination studies using diotic (i.e., the same stimulus presented simultaneously to both ears) presentations, experimental manipulations have a more uniform effect across listeners. For instance, in diotic listening, when ISIs are less than 200 ms, a loudness dominance effect is found for all listeners (Berg, 1990; Lutfi and Jesteadt, 2006; Turner and Berg, 2007). Generalizations about the subgroup of listeners that display loudness dominance effects in Experiment 1 are more difficult to come by. Case studies appear necessary to understand the idiosyncratic patterns of decision weights, making it difficult to arrive at generalizations about the effects of presentation rate and iconic memory on the dynamics of attention allocation. Nonetheless, we have learned that listeners categorized as showing loudness dominance display more nearly optimal attentional strategies when the information load is reduced.

A feature-based explanation may underlie the lack of a loudness dominance effect in dichotic listening. In diotic experiments, loud and quiet tones differ only on the dimension of loudness. Hypothetically, attention is automatically directed towards the loudest auditory object. In dichotic conditions, loud and quiet tones also differ with respect to the dimension of lateralization and differences along two dimensions may provide a usable cue to reduce the confusability of rapidly presented tones. The absence of a loudness dominance effect implies that lateralization has the higher priority for attentional resources.

The most unexpected findings are the marked differences in individual listening patterns, with some listeners showing evidence for an LEA. Results from Bayesian analyses corroborate these findings, i.e., for these listeners, estimates of d′ and ηwgt are greater when the most informative tones are presented to the left ear. No listener displayed an REA. This LEA for tonal stimuli contrasts with the REA first reported by Kimura (1961), where the right ear outperformed the left ear for verbal stimuli. Although the REA has since received further support for verbal stimuli (Ahonniska et al., 1993; Berman et al., 2003; Studdert-Kennedy and Shankweiler, 1970), ear advantages have rarely been examined with the level of detail and quantitative assessment provided by a dichotic sample discrimination task. The individual listening patterns found here are interesting because they clearly show that not all listeners exhibit a loudness dominance effect, and while an LEA is present for some individuals, only a scant few perform optimally.

ACKNOWLEDGMENTS

We would like to thank Dr. Michael Lee for his feedback on the Bayesian approach. We would also like to thank Dr. Robert Lutfi, Dr. Justin Mark, Dr. Allison Shim, and two anonymous reviewers for their comments on earlier versions of the manuscript. This work was supported in part by NIDCD Grant No. 2 R01 DC001262-24.

References

1. Ahonniska, J., Cantell, M., Tolvanen, A., and Lyytinen, H. (1993). "Speech perception and brain laterality: The effect of ear advantage on auditory event-related potentials," Brain Lang. 45, 127–146. 10.1006/brln.1993.1039
2. Asbjørnsen, A. E., and Hugdahl, K. (1995). "Attentional effects in dichotic listening," Brain Lang. 49, 189–201. 10.1006/brln.1995.1029
3. Berg, B. G. (1989). "Analysis of weights in multiple observation tasks," J. Acoust. Soc. Am. 86(5), 1743–1746. 10.1121/1.398605
4. Berg, B. G. (1990). "Observer efficiency and weights in a multiple observation task," J. Acoust. Soc. Am. 88(1), 149–158. 10.1121/1.399962
5. Berman, S. M., Mandelkern, M. A., Phan, H., and Zaidel, E. (2003). "Complementary hemispheric specialization for word and accent detection," NeuroImage 19, 319–331. 10.1016/S1053-8119(03)00120-4
6. Boucher, R., and Bryden, M. (1997). "Laterality effects in the processing of melody and timbre," Neuropsychologia 35(11), 1467–1473. 10.1016/S0028-3932(97)00066-3
7. Brancucci, A., and Martini, P. S. (1999). "Laterality in the perception of temporal cues of musical timbre," Neuropsychologia 37, 1445–1451. 10.1016/S0028-3932(99)00065-2
8. Broadbent, D. (1954). "The role of auditory localization in attention and memory span," J. Exp. Psychol. 47(3), 191–196. 10.1037/h0054182
9. Bryden, M. (1970). "Laterality effects in dichotic listening: Relations with handedness and reading ability in children," Neuropsychologia 8, 443–450. 10.1016/0028-3932(70)90040-0
10. Bryden, M., Munhall, K., and Allard, F. (1983). "Attentional biases and the right-ear effect in dichotic listening," Brain Lang. 18, 236–248. 10.1016/0093-934X(83)90018-4
11. D'Anselmo, A., Marzoli, D., and Brancucci, A. (2016). "The influence of memory and attention on the ear advantage in dichotic listening," Hear. Res. 342, 144–149. 10.1016/j.heares.2016.10.012
12. Dittrich, K., and Oberfeld, D. (2009). "A comparison of the temporal weighting of annoyance and loudness," J. Acoust. Soc. Am. 126(6), 3168–3178. 10.1121/1.3238233
13. Efron, R., and Yund, E. W. (1974). "Dichotic competition of simultaneous tone bursts of different frequency—I. Dissociation of pitch from lateralization and loudness," Neuropsychologia 12, 249–256. 10.1016/0028-3932(74)90010-4
14. Hiscock, M., Inch, R., and Kinsbourne, M. (1999). "Allocation of attention in dichotic listening: Differential effects on the detection and localization of signals," Neuropsychology 13(3), 404–414. 10.1037/0894-4105.13.3.404
15. Kass, R. E., and Raftery, A. E. (1995). "Bayes factors," J. Am. Stat. Assoc. 90, 773–795. 10.1080/01621459.1995.10476572
16. Kimura, D. (1961). "Cerebral dominance and the perception of verbal stimuli," Can. J. Psychol. 15, 166–171. 10.1037/h0083219
17. Love, J., Selker, R., Marsman, M., Jamil, T., Dropmann, D., Verhagen, A. J., Ly, A., Gronau, Q. F., Smira, M., Epskamp, S., Matzke, D., Wild, A., Rouder, J. N., Morey, R. D., and Wagenmakers, E.-J. (2015). JASP (Version 0.7) (University of Amsterdam, Amsterdam, the Netherlands).
18. Lutfi, R. A. (1989). "Informational processing of complex sound. I: Intensity discrimination," J. Acoust. Soc. Am. 86(3), 934–944. 10.1121/1.398728
19. Lutfi, R. A. (1990a). "How much masking is informational masking?," J. Acoust. Soc. Am. 88(6), 2607–2610. 10.1121/1.399980
20. Lutfi, R. A. (1990b). "Informational processing of complex sounds. II. Cross-dimensional analysis," J. Acoust. Soc. Am. 87(5), 2141–2148. 10.1121/1.399182
21. Lutfi, R. A. (1992). "Informational processing of complex sound. III: Interference," J. Acoust. Soc. Am. 91(6), 3391–3401. 10.1121/1.402829
22. Lutfi, R. A., and Jesteadt, W. (2006). "Molecular analysis of the effect of relative tone level on multitone pattern discrimination," J. Acoust. Soc. Am. 120(6), 3853–3860. 10.1121/1.2361184
23. Lutfi, R. A., Liu, C.-J., and Stoelinga, C. (2008). "Level dominance in sound source identification," J. Acoust. Soc. Am. 124(6), 3784–3792. 10.1121/1.2998767
24. Moncrieff, D. W. (2011). "Dichotic listening in children: Age related changes in direction and magnitude of ear advantage," Brain Cogn. 76, 316–322. 10.1016/j.bandc.2011.03.013
25. Morey, R. D., Hoekstra, R., Rouder, J. N., Lee, M. D., and Wagenmakers, E.-J. (2016). "The fallacy of placing confidence in confidence intervals," Psychon. Bull. Rev. 23(1), 103–123. 10.3758/s13423-015-0947-8
26. Morton, L., and Siegel, L. (1991). "Left ear dichotic listening performance on consonant-vowel combinations and digits in subtypes of reading-disabled children," Brain Lang. 40, 162–180. 10.1016/0093-934X(91)90123-I
27. Oberfeld, D., Heeren, W., Rennies, J., and Verhey, J. (2012). "Spectro-temporal weighting of loudness," PLoS One 7(11), e50184. 10.1371/journal.pone.0050184
28. Oberfeld, D., and Plank, T. (2011). "The temporal weighting of loudness: Effects of the level profile," Atten. Percept. Psychophys. 73, 189–208. 10.3758/s13414-010-0011-8
29. Pedersen, B., and Ellermeier, W. (2007). "Temporal weights in the level discrimination of time-varying sounds," J. Acoust. Soc. Am. 123(2), 963–972. 10.1121/1.2822883
30. Richards, V. M., Shen, Y., and Chubb, C. (2013). "Level dominance for the detection of changes in level distribution in sound streams," J. Acoust. Soc. Am. 134(2), EL237–EL243. 10.1121/1.4813591
31. Studdert-Kennedy, M., and Shankweiler, D. (1970). "Hemispheric specialization for speech perception," J. Acoust. Soc. Am. 48(2), 579–594. 10.1121/1.1912174
32. Turner, M., and Berg, B. (2007). "Temporal limits of level dominance in a sample discrimination task," J. Acoust. Soc. Am. 121(4), 1848–1851. 10.1121/1.2710345
33. Wagenmakers, E.-J. (2007). "A practical solution to the pervasive problems of p values," Psychon. Bull. Rev. 14, 779–804. 10.3758/BF03194105