Author manuscript; available in PMC: 2022 May 1.
Published in final edited form as: Eur J Neurosci. 2021 Mar 22;53(9):3142–3159. doi: 10.1111/ejn.15167

Stimulus value gates multisensory integration

Naomi L Bean 1, Barry E Stein 1, Benjamin A Rowland 1
PMCID: PMC8177070  NIHMSID: NIHMS1681190  PMID: 33667027

Abstract

The brain enhances its perceptual and behavioral decisions by integrating information from its multiple senses in what are believed to be optimal ways. This phenomenon of “multisensory integration” appears to be pre-conscious, effortless, and highly efficient. The present experiments examined whether experience could modify this seemingly automatic process. Cats were trained in a localization task in which congruent pairs of auditory-visual stimuli are normally integrated to enhance detection and orientation/approach performance. Consistent with the results of previous studies, animals more reliably detected and approached cross-modal pairs than their modality-specific component stimuli, regardless of whether the pairings were novel or familiar. However, when provided evidence that one of the modality-specific component stimuli had no value (it was not rewarded) animals ceased integrating it with other cues, and it lost its previous ability to enhance approach behaviors. Cross-modal pairings involving that stimulus failed to elicit enhanced responses even when the paired stimuli were congruent and mutually informative. However, the stimulus regained its ability to enhance responses when it was associated with reward. This suggests that experience can selectively block access of stimuli (i.e., filter inputs) to the multisensory computation. Because this filtering process results in the loss of useful information, its operation and behavioral consequences are not optimal. Nevertheless, the process can be of substantial value in natural environments, rich in dynamic stimuli, by using experience to minimize the impact of stimuli unlikely to be of biological significance, and reducing the complexity of the problem of matching signals across the senses.

Keywords: behavior, cat, enhancement, orientation, plasticity

Graphical Abstract


Brains automatically integrate spatiotemporally congruent cross-modal cues so that they enhance one another and facilitate perception and behavior (bird + song, insect + song). However, even such congruent combinations of cross-modal cues do not elicit enhanced outcomes when one of them (beeps) has been devalued (top right, bottom left).

1 |. INTRODUCTION

The brain integrates the signals it receives from its different senses, thereby improving its perception of environmental events (Stein & Meredith, 1993). This process is highly effective in a variety of circumstances, and many suggest it operates in an optimal way in order to make best use of all available sensory information (Alais & Burr, 2004; Battaglia et al., 2003; Ernst & Banks, 2002; Shams et al., 2005). Supporting this idea are the many studies demonstrating that events providing spatiotemporally aligned auditory and visual stimuli are detected more rapidly and are localized far more accurately and reliably than those in which only one of those stimuli is available (Bolognini et al., 2007; Burnett et al., 2007; Corneil et al., 2002; Francesca et al., 2002; Frens et al., 1995; Gingras et al., 2009; Goldring et al., 1996; Lovelace et al., 2003; Stein et al., 1988, 1989; Wang et al., 2008). It does not seem to matter whether the cues are novel or familiar (Bolognini et al., 2007; Frens et al., 1995; Lovelace et al., 2003; Stein et al., 1989), or if subjects are trained to respond to only one of them, despite the response conflict this produces (Jiang et al., 2002, 2007; Rowland et al., 2014; Stein et al., 1989). These observations suggest that these integrative mechanisms are pre-conscious and automatic, and well suited to facilitate rapid perceptual decisions.

It is commonly held that cross-modal cues that are mutually informative (i.e., “congruent”) in a given context are integrated to enhance multisensory responses (Gau & Noppeney, 2016; Kayser & Shams, 2015; Meredith & Stein, 1986; Stein & Meredith, 1993). For example, speech processing is enhanced when the sound of a word is congruent with the mouth movement creating it (Grant & Seitz, 2000; Ma et al., 2009; Ross et al., 2007; Sanchez-Garcia et al., 2011; Sommers et al., 2005; Sumby & Pollack, 1954; Tye-Murray et al. 2011), and detection and localization judgments are enhanced by auditory and visual stimuli that are coincident in space and time (Bolognini et al., 2007; Corneil & Munoz, 1996; Corneil et al., 2002; Francesca et al., 2002; Frens et al., 1995; Giard & Peronnet, 1999; Gingras et al., 2009; Stein et al., 1988; Stein et al., 1989; Wang et al., 2008). The latter is intuitive because signals derived from the same event are generally aligned in space and time. Multisensory enhancements in this task have been linked to the integration of signals by neurons in the midbrain superior colliculus (SC; Burnett et al. 2014; Stein et al., 1989; Wilkinson et al., 1996). The overlapping topographic sensory maps within the SC give it the ability to route congruent cross-modal signals onto common target neurons, enhancing their sensory responses and therefore the physiological salience of the initiating event (Stein & Meredith, 1993; Stein et al., 2014).

Here, we evaluated how experience in the adult impacts this rapid, low-level process of multisensory integration. Two factors were tested: experience with auditory-visual features that were either always covariant (high coupling probability) or never covariant (zero coupling probability), and the association or dissociation of each component stimulus with rewarded behavior. The results showed that congruent pairs of cross-modal stimuli elicit enhanced responses regardless of whether the component stimuli, or their combination, are familiar or novel. This was expected based on the observations alluded to above.

However, stimuli that were explicitly disassociated from reward did not elicit enhanced multisensory responses when combined with another even though these stimuli were perceptible, dynamic, unpredictable, spatiotemporally congruent (i.e., mutually informative), and their integration would have increased the frequency of correct responses and rewards (Figure 1). This finding suggests that perceived value is used to filter signals prior to the multisensory computation, denying de-valued stimuli access to this mechanism through which the impact of external events is normally enhanced. This filtering can improve efficiency in normal circumstances by reducing the number of decisions about which signals are derived from common events, but at a cost of not using all available information.

FIGURE 1.

Conceptual representation of the experiments designed to examine how experience with cross-modal cues affects multisensory performance. (a) One cross-modal pair (A1V1hi) was consistently presented together during training and was of high reward value (depicted here as a bird singing). During testing, this stimulus pair elicited a rapid, coordinated orientation response much more reliably than either component stimulus. (b) The other combination was never presented together during training. It involved a visual cue of modest reward value that elicited orientation when presented alone (V2low - depicted as an insect) and an auditory cue with no associated reward value that elicited no response when presented alone (A2no - depicted as a ringing bell). Their coupling during testing produced no response enhancement (response probability was the same as to the visual cue alone)

2 |. METHODS

All procedures were conducted in accordance with the Guide for the Care and Use of Laboratory Animals (National Institutes of Health Publication) and an approved Institutional Animal Care and Use Committee protocol at Wake Forest University School of Medicine, an Association for Assessment and Accreditation of Laboratory Animal Care-accredited institution. Three adult mongrel cats (2 male, 1 female), 1–2 years of age and weighing 3–4 kg, were obtained from a USDA-licensed commercial animal breeding facility (Liberty Research, Inc.). All three animals were used in all experiments described below. Animals were housed in a facility administered by the local Animal Resources Program, which included ample living space and opportunities for behavioral enrichment. They were motivated by food rewards (175 mg food pellets, Science Diet) and maintained at no less than 80% of their baseline body weight.

2.1 |. Apparatus

Animals were trained in a 90 cm diameter semicircular apparatus with complexes of LEDs and speakers mounted on the perimeter wall at 15° intervals from −90° (left) to +90° (right) of a central fixation location (0°). Each complex consisted of two speakers (Panasonic model 4D02C0) and three light-emitting diodes (LEDs; Lumex Opto/Components; model 67-1102-ND). Only the two outermost LEDs at each location were used in this experiment. Speakers were horizontally displaced by 4 cm and were located 4 cm above the LEDs, which were each horizontally displaced by 2 cm. This apparatus was previously used to train and test animals in detection/localization tasks (see Gingras et al., 2009; also see Figure 2).

FIGURE 2.

Apparatus and an exemplar training profile. Top: The orientation and localization task was performed in a 90-cm diameter perimetry apparatus. Stimulus locations spanned the central 180° of space in 15° intervals (only the central 120° was tested here). Each stimulus location contained a complex of two speakers displaced 4 cm from each other, positioned 4 cm above a complex of three LEDs at 2 cm separations (the 0° location complex was not used in this experiment). From Gingras et al. (2009). Bottom: An exemplar animal’s percent correct approach performance to the stimulus location, approaches to a different location, and NoGo responses, averaged for each training condition (vertical bars show SEM across locations). Approach responses to the A1V1hi and V2low stimuli were both rewarded, and all stimuli were presented at their maximum intensity. Note that A1V1hi and V2low ultimately elicited similar performance levels despite being associated with different reward magnitudes, and that the animal ultimately opted for a NoGo response to the unrewarded A2no stimulus

2.2 |. Stimuli

Two visual (V1, V2) and three auditory (A1, A2, A3) stimuli with different features were used in these experiments (Table 1). Both of the visual and two of the auditory stimuli were “apparent motion” cues created by activating, within a location, either the two outer LEDs or the two speakers in rapid succession (each activation 50 ms in duration, with a 10 ms gap between). The apparent motion moved in either a peripheral (designated with subscript “1”) or central (subscript “2”) direction. Visual stimuli were LED flashes presented during training at intensities of 0.98–3.88 cd/m2; during testing, their intensities were reduced to 0.21–1.78 cd/m2. The two apparent motion auditory stimuli were either low-pass filtered (A1, 821–3,000 Hz) or high-pass filtered (A2, 4,650–12,365 Hz) broadband noise bursts (A1 67.0–67.5 dB; A2 68.9–73 dB against a background of 64 dB). A third, intermediate-frequency tone (A3, 3,875 Hz; 64.1–67.8 dB) was delivered from only one of the speakers (single stimulus, 50 ms duration) and was therefore a “stationary” cue. Auditory stimulus intensities were not reduced during testing. Combinations of visual and auditory stimuli presented synchronously and at the same location (“congruent”) are denoted by combining the symbols. Thus, A1V1 indicates a low-pass noise burst and a visual stimulus both “moving” in a peripheral direction at the same location.

TABLE 1.

Stimulus nomenclature and conditions in each experiment (1, 2, 3) and experimental phase (“train” = training, “test” = testing)

Stimulus nomenclature
V1  Visual stimulus: central to peripheral
V2  Visual stimulus: peripheral to central
A1  Auditory stimulus: low-pass noise burst (821–3,000 Hz), central to peripheral
A2  Auditory stimulus: high-pass noise burst (4,650–12,365 Hz), peripheral to central
A3  Auditory stimulus: 3,875 Hz tone, stationary
“-hi”  Accurate approach response rewarded with 4 food pellets
“-low”  Accurate approach response rewarded with 2 food pellets
“-no”  Condition never rewarded regardless of behavior
Stimulus conditions in each experiment
Exp. 1, train  A1V1hi, V2low, A2no, catch
Exp. 1, test  V1hi, V2low, A1no, A2no, A1V1hi, A2V1hi, A1V2low, A2V2low, catch
Exp. 2, train  (none; testing immediately followed Exp. 1)
Exp. 2, test  V1hi, V2low, A1no, A2no, A3no, A1V1hi, A2V2low, A3V1hi, A3V2low, catch
Exp. 3, train  A2V2hi, V1low, A1no, catch
Exp. 3, test  V2hi, V1low, A1no, A2no, A2V2hi, A1V2hi, A1V1low, A2V1low, catch

2.3 |. Reward association

Stimulus “value” was manipulated by associating responses to visual stimuli with reward at one of two levels (“hi” = 4 food pellets, “low” = 2 food pellets) and by never providing reward for any behavioral response to auditory stimuli when they were presented alone (“no”; Table 1). Visual-auditory cue combinations were rewarded based on the value of the visual component. Thus, even though they were not individually rewarded, auditory stimuli could be associated with reward when presented in combination with a rewarded visual stimulus. Stimulus conditions associated with reward were operationally defined as “valuable” and stimuli explicitly dissociated from reward as “not valuable”.

Stimulus conditions are identified in the following descriptions by two factors: the stimulus features (V1, V2, A1, A2, A3) and the reward contingency (“-hi”, “-low”, “-no”). Thus, “V2low” refers to a visual stimulus “moving” centrally that was paired with a 2 food pellet reward for an accurate orientation and approach response.
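For concreteness, these contingencies can be summarized in a small lookup, as in the sketch below. The pellet counts (“hi” = 4, “low” = 2, “no” = 0) and the rule that cross-modal combinations inherit the reward of their visual component come from the text; the dictionary layout and function name are our own illustrative assumptions, not part of the study’s software.

```python
# Illustrative encoding of the Experiment 1 reward contingencies (Table 1).
REWARD_PELLETS = {"hi": 4, "low": 2, "no": 0}

EXP1_CONDITIONS = {
    "A1V1hi": {"visual": "V1", "auditory": "A1", "reward": "hi"},
    "V2low":  {"visual": "V2", "auditory": None, "reward": "low"},
    "A2no":   {"visual": None, "auditory": "A2", "reward": "no"},
    "catch":  {"visual": None, "auditory": None, "reward": "no"},
}

def pellets_for(condition: str, correct_approach: bool) -> int:
    """Pellets delivered on a trial: reward follows only a correct approach."""
    contingency = EXP1_CONDITIONS[condition]["reward"]
    return REWARD_PELLETS[contingency] if correct_approach else 0

print(pellets_for("A1V1hi", correct_approach=True))  # 4
print(pellets_for("A2no", correct_approach=True))    # 0 (never rewarded)
```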

2.4 |. Manipulating experience

Animals were trained with cross-modal stimuli arranged in different configurations. Some visual and auditory stimuli were always presented together and never individually. These stimulus configurations were designated as having a coupling probability of 100%. Other stimuli were presented individually, but never together. These were designated as having a 0% coupling probability (Table 1).

2.5 |. Training and testing

Training procedures were the same as in previous studies (Burnett et al., 2014; Gingras et al., 2009; Jiang et al., 2002; Rowland et al., 2007b; Stein et al., 1989). Each animal was trained to stand at the center of the apparatus and fixate at 0° while being gently restrained. Stimuli and catch trials were triggered by the experimenter depressing a foot pedal and controlled by custom software. The sound of the foot pedal (an audible click) signaled the onset of each trial. The experimenter avoided looking at the stimulus array to avoid biasing the animals’ behavior, and fixation was monitored using a mirror. If breaks in fixation occurred, the trial was aborted and replayed later in the trial set. The animal was rewarded for rapidly approaching the location of any visual or auditory-visual stimulus. Stimulus location was randomized across trials between −60° and +60°. On auditory-alone trials (random locations between −60° and +60°), the animal was released at the sound of the foot pedal, but no reward was given for any response. Animals quickly began choosing not to respond on these trials (“No Go”). No reward was given on catch (no stimulus) trials. These trials were included in all experiments during training and testing to ensure that overt approach responses reflected stimulus detection. Each testing session was preceded by “refresher” exposure to all training stimuli and a single catch trial. Animals were then tested to satiety, resulting in an average of 271 trials per day per animal.

2.6 |. Training and testing stimuli for each experiment

The nomenclature used to identify stimuli and the stimulus conditions for training and testing in each experiment are identified in Table 1. Reward associations were the same during testing and the preceding training blocks.

2.7 |. First experiment

The goal was to simultaneously test the effect of two experiential variables on multisensory integration: (a) the coupling probability of particular auditory and visual stimuli, and (b) the association of these components with reward. This was accomplished by training animals with a set of stimuli in which one pair of stimuli was always presented together (the components never appeared individually) and was highly rewarded (A1V1hi), while the other stimuli were always presented individually (never together) and were associated with lower reward or no reward (V2low, A2no). This training set was designed to create the potential for maximum effects by combining two variables of likely effectiveness. If either high coupling probability or high reward level (or their combination) increases multisensory enhancement (quantification described in Data Analysis below), the A1V1 combination would elicit greater multisensory enhancement than the A2V2 combination during testing.

Animals were first trained until their responses to each of the rewarded conditions (A1V1hi, V2low) reached a criterion of 80% correct. After animals learned the task, the V1 and V2 stimulus intensities were lowered to a level at which correct responses were elicited on <50% of V2low trials. This was done immediately prior to running the test trials in order to ensure that any multisensory enhancement effects during testing would be apparent (i.e., unisensory performance was not at ceiling levels). V1 was not calibrated separately to avoid exposing the animal to the stimulus independently of A1, and ultimately, this calibration proved unnecessary because V1 and V2 used the same LEDs. Testing involved all stimuli presented individually and in all auditory-visual combinations. Catch trials were also included. The impacts of coupling probability and reward association were assessed by contrasting the levels of multisensory enhancement observed across the four multisensory conditions (A1V1hi, A2V1hi, A1V2low, A2V2low).

Given that coupling probability and stimulus value were mixed during training in this experiment, its results (pairing with A1 enhanced visual orientation, pairing with A2 did not) could be interpreted as supporting different, competing models, depending on the answers to the following questions: did an auditory stimulus have to be explicitly associated with visual stimuli in order to be integrated to enhance responses, or were cross-modal stimuli integrated “by default” unless explicitly dissociated from reward?

2.8 |. Second experiment

The goal here was to determine the answers to these questions. We introduced a novel auditory tone (A3) whose frequency was between the frequencies contained in A1 and A2 (Table 1). Thus, stimulus conditions A3no, A3V1hi, and A3V2low were included in the testing set. There was no additional training: testing immediately followed Exp. 1 and included all of the Exp. 1 stimuli except two cross-modal combinations that had yielded similar results (A2V1hi and A1V2low; Table 1).

2.9 |. Third experiment

The goal of the third experiment was to determine how multisensory integration would operate in dynamic circumstances in which perceived stimulus value rapidly changes. This was accomplished by reversing the coupling probabilities and switching the visual reward associations to which the animals had adapted. Animals were given an average of 14 training sessions using a training set modeled on that of Exp. 1 with different coupling probabilities and reward associations (i.e., A2V2hi, V1low, A1no, and catch trials). They were then tested with all stimuli (with the new reward associations) presented individually, all auditory-visual combinations, and catch trials. There were several possible results. The pattern of multisensory enhancement developed in Exp. 1 might prove to be insensitive to the new training set (and be tightly linked to stimulus features). Alternatively, it might be highly dynamic and adopt a pattern entirely consistent with the new reward associations. It could also partially adapt, possibly reflecting the total training history.

2.10 |. Data analysis

Responses in catch trials were scored as Go or NoGo. Responses in stimulus-containing trials were scored as an approach to the stimulus location, an approach to a different (non-stimulus) location, or a NoGo. The proportion of each response type was calculated within bins of five trials for each stimulus condition/location/animal within each experiment. Unless otherwise stated, summary statistics for percent of different response types (mean ± SEM) were tabulated for each stimulus condition with standard error calculated across locations and animals.

For visual-alone and visual-auditory trials, approaches to the stimulus location were always rewarded and therefore “correct”. Signal detection metrics were calculated to facilitate comparisons between multisensory and unisensory conditions (Green & Swets, 1966). For each animal, location, and visual-containing stimulus condition, the response bias (β) was calculated using the proportion of “Go” errors on catch trials and approaches to non-stimulus locations. Discriminability (d’) was calculated based on the proportion of correct responses while accounting for this bias. Because no response behavior was ever rewarded on auditory-alone trials, there was no “correct” response to an auditory stimulus. Consequently, behavioral patterns on auditory-alone trials were recorded but, because they did not directly reflect the animals’ perception of the stimuli, they were not subjected to sensitivity-based analyses.
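The exact hit/false-alarm definitions and corrections behind d’ and β are described only in outline above; the following sketch shows the standard equal-variance Gaussian formulas on which such metrics are based, with illustrative rates rather than the study’s data.

```python
# A minimal sketch of equal-variance Gaussian signal-detection metrics.
from scipy.stats import norm

def sdt_metrics(hit_rate: float, false_alarm_rate: float, eps: float = 1e-3):
    """Return (d_prime, beta), clipping rates away from 0 and 1."""
    h = min(max(hit_rate, eps), 1 - eps)
    fa = min(max(false_alarm_rate, eps), 1 - eps)
    z_h, z_fa = norm.ppf(h), norm.ppf(fa)
    d_prime = z_h - z_fa
    beta = norm.pdf(z_h) / norm.pdf(z_fa)  # likelihood-ratio measure of bias
    return d_prime, beta

# e.g., 45% correct approaches vs. ~7% "Go" responses on catch trials (illustrative)
print(sdt_metrics(0.45, 0.07))
```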

Multisensory enhancement of the visual modality (MEv) was a summary variable calculated as the proportionate difference between multisensory (VA) response statistics and the response statistics measured in the corresponding visual condition (V):

MEv = ((VA − V) / V) × 100. (1)

Equation (1) was applied to calculate proportionate changes in the percent of correct responses (“performance”) and the signal detection parameters d’ and β.
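Equation (1) is straightforward to apply; a minimal sketch using the percent-correct values later reported for the A1V1hi and V1hi conditions (Table 2):

```python
# Direct implementation of Equation (1): proportionate multisensory enhancement
# of a visual response statistic (percent correct, d', or beta).
def multisensory_enhancement(multisensory_value: float, visual_value: float) -> float:
    """MEv = ((VA - V) / V) * 100."""
    return (multisensory_value - visual_value) / visual_value * 100.0

# e.g., 87% correct on A1V1hi trials vs. 45% on V1hi-alone trials (Table 2)
print(round(multisensory_enhancement(87.0, 45.0)))  # ~93
```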

Differences in approach probability, multisensory enhancement (ME), d’, and β were analyzed with mixed effects models using logistic (approach probability) or linear (all others) regression. In each case, stimulus condition (visual or multisensory) was a fixed effect, with random intercepts for test location and animal ID, and a random slope for stimulus condition given animal ID. Significance of the fixed effect was determined with likelihood ratio tests. Means and standard errors are reported for the fixed-effect coefficients; standard errors are reported for the random-effect coefficients. Additionally, linear regression (likelihood ratio test, cat and location controlled using sequential regression) was used to evaluate whether there were significant changes over time for each stimulus condition within an experiment (data binned into groups of 5 trials) and whether there was an inverse relationship between multisensory and matching visual conditions (VA-V; see Gingras et al., 2009). Alpha was 0.05 for all significance tests.
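A mixed-model comparison of this kind can be sketched as follows. This is not the authors’ analysis code: for brevity it includes only a random intercept and a random slope for condition grouped by animal (the study additionally modeled a random intercept for test location and used logistic regression for approach probability), and the data frame and column names are fabricated for illustration.

```python
# Sketch of a linear mixed-effects comparison between a visual and a multisensory
# condition using statsmodels, with fabricated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for animal in ["c1", "c2", "c3"]:
    for location in [-45, -15, 15, 45]:
        for condition, mu in [("V", 2.5), ("VA", 3.8)]:
            rows.append({"animal": animal, "location": location,
                         "condition": condition,
                         "d_prime": mu + rng.normal(0, 0.2)})
df = pd.DataFrame(rows)

model = smf.mixedlm("d_prime ~ condition", df,
                    groups=df["animal"], re_formula="~condition")
print(model.fit().summary())
```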

3 |. RESULTS

3.1 |. Summary

Although both familiar and novel auditory stimuli typically strongly enhance multisensory performance on detection and localization tasks (Bolognini et al., 2007; Frens et al., 1995; Jiang et al., 2002, 2007; Lovelace et al., 2003; Rowland et al., 2014; Stein et al., 1989), and did so in these tasks, an auditory stimulus explicitly dissociated from reward failed to do so. It appeared to be de-valued and became ineffective in this context, even though its combination with a visual stimulus formed a congruent, mutually informative cross-modal ensemble. However, its value and ability to enhance responses through multisensory integration were rapidly established when it was subsequently linked to reward, and persisted for some time thereafter, even when no longer linked with reward.

3.2 |. Experiment 1 training

This experiment examined the impact of two factors on multisensory integration: (a) the probability of two cross-modal stimuli being paired, and (b) their history of being rewarded. Animals were first trained to orient toward and approach a cross-modal stimulus pair that was associated with a high reward (A1V1hi), and to approach a unisensory visual stimulus that was associated with a low reward (V2low). They were also exposed to an auditory stimulus that was never rewarded regardless of their behavioral response (A2no), and to catch trials in which no stimulus was presented and no response was rewarded.

Animals rapidly learned the response contingencies associated with each stimulus condition and responded appropriately. They reliably oriented toward the A1V1hi and V2low stimuli, remained at the start position (“NoGo”) on catch trials, and ultimately chose to remain at the start position on the never-rewarded A2no trials. All behaviors stabilized within 80–150 training trials (Figure 2). Testing began after an average of 78 training days (36–99).

3.3 |. Experiment 1 testing

During testing, the V1 and V2 intensities were lowered, so that each animal’s performance fell to <50% at each location for each visual stimulus. This was necessary to lower animals’ visual performance from ceiling levels in order to assess the magnitude of any possible multisensory enhancement effect when these stimuli were paired with others. The visual stimuli elicited similar approach performance, d’, and β despite the differences in their reward value (see Table 2, Figure 3). This result likely reflected the high motivation of the animals and the fact that both reward levels were sufficient to offset the time and energy costs associated with an orientation response.

TABLE 2.

Results of experiment 1

Stimulus  % orientation to location  d’  β  MEv
V1hi 45%, ±1.3 2.55 ± 0.08 2.67 ± 0.09 N/A
V2low 43%, ±1.3 2.52 ± 0.08 2.69 ± 0.09 N/A
A1V1hi 87%, ±0.9 3.85 ± 0.07 2.70 ± 0.08 93%
A1V2low 86%, ±1.2 3.81 ± 0.06 2.69 ± 0.06 100%
A2V1hi 45%, ±1.2 2.46 ± 0.07 2.58 ± 0.08 0.28%
A2V2low 48%, ±1.4 2.55 ± 0.07 2.63 ± 0.08 11%
A1no 58%, ±1.4
A2no 9%, ±0.8
Catch 5–9%
Fixed effects Δ; p-values, df (% correct; d’; β) [ΔMEv]
A1V1hi versus V1hi 2.16 ± 0.24; .0004443, 3,901; 0.0006776, 46; 0.2402, 46
A1V2low versus V2low 2.21 ± 0.38; .001949, 3,940; 0.001839, 46; 0.3434, 46
A2V1hi versus V1hi −0.02 ± 0.24; .9181, 3,951; 0.816, 46; 0.3686, 46
A2V2low versus V2low 0.15 ± 0.20; .3231, 3,938; 0.5064, 46; 0.5588, 46
A1V1hi versus A1V2low: −0.02 ± 0.17; .80, 3,952; 0.6871, 46; 0.7103, 46 [0.6558, 46]
A2V1hi versus A2V2low: 0.09 ± 0.14; .3603, 3,999; 0.4068, 46; 0.3687, 46 [0.2701,46]
A1V1hi versus A2V2low: −2.10 ± 0.18; .0002011, 3,950; 0.0004341, 46; 0.3034, 46 [0.0159, 46]
A2V1hi versus A1V2low: 2.15 ± 0.21; .0003128, 4,001; 0.003207, 46; 0.3166, 46 [0.01116, 46]
Random effects  Intercept location; intercept cat; condition | cat
A1V1hi versus V1hi (Percent Correct) 0.21; 0.52; 0.38
d 0.07; 0.09; 0.16
A1V2low versus V2low(Percent Correct) 0.25; 0.47; 0.64
d 0.15; 0.12; 0.18
A2V1hi versus V1hi(Percent Correct) 0.15; 0.51; 0.39
d 0; 0.25; 0.0003
A2V2low versus V2low(Percent Correct) 0.24; 0.47; 0.32
d 0; 0.24; 0.01
A1V1hi versus A1V2low(Percent Correct) 0.28; 0.53; 0.24
d 0.18; 0.08; 0.04
A2V1hi versus A2V2low(Percent Correct) 0.18; 0.67; 0.21
d 0.02; 0.26; 0.01
A1V1hi versus A2V2low(Percent Correct) 0.27; 0.52; 0.28
d 0.14; 0.11; 0.14
A2V1hi versus A1V2low(Percent Correct) 0.18; 0.67; 0.34
d 0.18; 0; 0.03
Regression Slope, intercept, R2, p-value
(A1V1hi-V1hi) versus V1hi −0.81, 78.37, 0.77, <.0001
(A1V2low-V2low) versus V2low −0.84, 79.54, 0.82, <.0001
(A2V1hi-V1hi) versus V1hi −0.41, 17.53, 0.19, .03
(A2V2low-V2low) versus V2low −0.31, 17.24, 0.12, .10

FIGURE 3.

Reward association determined multisensory enhancement; coupling probability had no effect. Top: Training and testing conditions. Icons are used to illustrate different training and testing conditions. Visual stimuli are depicted as pairs of LEDs with an arrow indicating different directions (left = V1, right = V2) of apparent motion. Auditory stimuli are illustrated with a speaker from which either low-pass sound (from the bottom, A1) or high-pass sound (from the top, A2) is emanating. Reward is illustrated by a food bowl with either 2 (low) or 4 (high) food pellets or an X indicating that no response was ever rewarded in that condition. Animals were first trained to approach the A1V1hi stimulus and the V2low stimulus. The A2no stimulus was presented during training but was not associated with reward. Testing included all four modality-specific stimuli presented individually as well as in every possible cross-modal combination. Bottom: Results of Testing. Shown are the percent of approach responses on unisensory (V1hi, V2low) and multisensory (A1V1hi, A2V1hi, A1V2low, A2V2low) trials. Note that the V1hi and V2low stimuli elicited comparable correct performance levels despite being associated with different reward magnitudes. A1no (paired with reward in training) elicited reliable orientation responses even though it was not individually rewarded during testing. A2no (dissociated from reward in both training and testing) did not. The summary metric MEv (right ordinate) quantifies the proportionate enhancement in the ability to correctly approach each visual target when it is combined with each auditory stimulus. Multisensory conditions involving A1no showed significant multisensory enhancement (leftmost two plots), while conditions involving A2no did not (rightmost two plots). Connected markers indicate individual animal performance for each cue. Error bars on markers indicate SEM across locations. *indicates significance at p < .05, **indicates significance at p < .01

3.3.1 |. Coupling probability had no effect on multisensory integration

During training, one of the visual stimuli was always paired with one of the auditory stimuli (A1V1hi), while the remaining stimuli (V2low, A2no) were never paired with each other or with the other stimuli. However, coupling probability appeared to have no effect on the magnitude of multisensory enhancement. A1V1hi (the stimuli moved in the same direction, always appeared together during training, and were associated with high reward) elicited the same multisensory enhancement as the A1V2low combination (the stimuli moved in opposite directions, never appeared together during training, and were associated with low reward during testing). There was no significant difference between these multisensory conditions in percent approach, d’, or β (Table 2, Figure 3, leftmost plots). Neither cross-modal combination produced an appreciable increase in β above the corresponding visual-alone conditions, but both enhanced d’, indicating that multisensory integration enhanced the information animals used to make decisions and did not simply increase the likelihood of a “Go” response (Battaglia et al., 2003; Knill & Pouget, 2004; Rowland et al., 2007a; Stein & Stanford, 2008).

The magnitude of the reward associated with each visual stimulus did not affect the multisensory product, even though the reward levels (and the visual stimulus features) were discriminable by the animal (Figure 4). This result likely reflects the high motivation of the animals in the task and the fact that their multisensory performance approached ceiling levels in both cases.

FIGURE 4.

FIGURE 4

Discriminating the visual stimuli and rewards. To determine if they were discriminable, V1hi and V2low were simultaneously presented at homotopic loci (−60°/+60°, −45°/+45°, −30°/+30°, −15°/+15°). The animal could respond to either stimulus and receive a reward at the assigned level (see also Dakos et al., 2019). A significant (p = .011) preference was observed for the stimulus associated with the higher reward. Data were normalized by total number of approach responses. As expected, the animal chose the V1hi stimulus more frequently, with a proportionate preference reflecting the proportionate difference in their reward values (2:1)

3.3.2 |. Reward association had a significant effect on multisensory integration

In contrast to the lack of impact of coupling probability and reward level (hi vs. low) on multisensory enhancement, the effect of dissociating a stimulus from reward proved to be highly significant. Neither of the two auditory-visual pairings that involved the auditory stimulus (A2) that was dissociated from reward elicited significant multisensory response enhancement. Approach performance, d’, and β measures were all very similar in these two multisensory conditions, and in both cases, neither multisensory approach performance nor d’ was significantly elevated above the visual-alone performance (Table 2, Figure 3, rightmost plots).

3.3.3 |. Changes in responses

During testing, animals initially oriented toward the auditory stimulus in the A1no condition (A1 had been associated with reward during training in the A1V1hi configuration), but with a declining incidence (slope = −1.17, R2 = 0.79, p < .00001; see Figure S1). This decline was not matched by a proportional decrease in A1V1hi and A1V2low approaches, although these also (marginally) decreased (A1V1hi: slope = −0.57, R2 = 0.32, p = .02; A1V2low: slope = −0.84, R2 = 0.64, p = .0004). This is consistent with a decrease in the valuation of A1 concomitant with the animal slowly learning from testing with A1no that the individual stimulus was not associated with reward. Animals rarely responded to the auditory stimulus in the never-rewarded A2no condition throughout the experiment (Table 2).

3.4 |. Experiment 2 testing

The results from Experiment 1 prompted the following question: would all spatiotemporally coincident auditory-visual stimuli be integrated by “default” as long as they were not explicitly dissociated from reward, or did some cross-modal association need to be learned first before an auditory stimulus could be integrated with another modality (Beierholm et al., 2009)? To examine this issue, a novel (i.e., untrained) stationary tonal auditory stimulus (A3) was introduced. Its frequency fell between the passbands used to create A1 and A2, and it was easily distinguishable from both (Table 1). This stimulus had never before been associated with reward nor explicitly paired with either visual stimulus. It was presented alone (A3no) and in combination with each of the visual stimuli. Several test conditions were also repeated from Exp. 1 to ensure that behavioral patterns were stable. These re-test results were consistent with testing in Exp. 1 (Table 3, Figure 5, leftmost plots): A1V1hi continued to generate enhanced responses, while A2V2low did not. Approach responses to A1no continued to decline sharply (slope = −2.07, R2 = 0.47, p = .005) while responses to A1V1hi only marginally declined (slope = −0.88, R2 = 0.33, p = .08).

TABLE 3.

Results of experiment 2

Stimulus  % orientation to location  d’  β  MEv
V1hi 50%, ±1.75 2.50 ± 0.09 2.51 ± 0.07 N/A
V2low 49%, ±1.58 2.50 ± 0.08 2.52 ± 0.08 N/A
A1V1hi 82%, ±1.42 3.61 ± 0.05 2.59 ± 0.07 63%
A2V2low 51%, ±1.58 2.51 ± 0.06 2.48 ± 0.07 4%
A3V1hi 74%, ±1.45 3.25 ± 0.10 2.59 ± 0.10 48%
A3V2low 75%, ±1.27 3.27 ± 0.10 2.58 ± 0.08 52%
A1no 58%, ±1.4
A2no 9%, ±0.8
A3no 12%, ±1.27
Catch 2–15%
Fixed effects Δ; p-values, df (% correct; d’; β) [ΔMEv]
A1V1hi versus V1hi 1.744 ± 0.31; .00212, 2,547; 0.002408, 46; 0.3316, 46
A2V2low versus V2low 2.21 ± 0.38; .6221, 2,554; 0.9889, 46; 0.412, 46
A3V1hi versus V1hi 0.09 ± 0.26; .01103, 2,535; 0.00474, 46; 0.5512, 46
A3V2low versus V2low 1.22 ± 0.43; .01805, 2,528; 0.1498, 46; 0.6066, 46
A1V1hi versus A2V2low: −1.69 ± 0.24; .0009973, 2,554; 0.001798, 46; 0.1739, 46 [0.01519, 46]
A1V1hi versus A3V1hi: −0.61 ± 0.18; .00886, 2,530; 0.009623, 46; 0.3401, 46 [0.1295, 46]
A2V2low versus A3V2low: 1.13 ± 0.22; .002665, 2,540; 0.002666, 46; 0.6629, 46 [0.0104, 46]
A3V1hi versus A3V2low: 0.06 ± 0.23; .7079, 2,516; 0.8202, 46; 0.5937, 46 [0.8202, 46]
Random effects  Intercept location; intercept cat; condition | cat
A1V1hi versus V1hi (Percent Correct) 0.19; 0.37; 0.51
d 0.09; 0.25; 0.26
A2V2low versus V2low(Percent Correct) 0.21; 0.34; 0.42
d 0.10; 0.16; 0.15
A3V1hi versus V1hi(Percent Correct) 0.21; 0.38; 0.57
d 0.20; 0.34; 0.20
A3V2low versus V2low(Percent Correct) 0.18; 0.34; 0.73
d 0.18; 0.32; 0.34
A1V1hi versus A2V2low(Percent Correct) 0.21; 0.47; 0.38
d 0.05; 0.25; 0.22
A1V1hi versus A3V1hi(Percent Correct) 0.14; 0.47; 0.25
d 0.15; 0.26; 0.10
A2V2low versus A3V2low (Percent Correct) 0.19; 0.42; 0.35
d 0.15; 0.19; 0.13
A3V1hi versus A3V2low(Percent Correct) 0.18; 0.62; 0.36
d 0.22; 0.35; 0.12
Regression Slope, intercept, R2, p-value
(A1V1hi-V1hi) versus V1hi −0.81, 78.37, 0.77, <.0001
(A1V2low-V2low) versus V2low −0.84, 79.54, 0.82, <.0001
(A2V1hi-V1hi) versus V1hi −0.41, 17.53, 0.19, .03
(A2V2low-V2low) versus V2low −0.31, 17.24, 0.12, .10

FIGURE 5.

When combined with a familiar visual stimulus, a novel auditory stimulus elicited robust multisensory enhancement. Top: Training and Testing Conditions. Animals began testing on the A3no stimulus and cross-modal configurations directly after completing Experiment 1. The A1V1hi and A2V2low conditions were included in the tests. Bottom: Results of Testing. The two leftmost plots illustrate conditions replicating the results of Experiment 1 (see Figure 3). The two rightmost plots illustrate the results of tests pairing the novel auditory stimulus with each visual stimulus. Both combinations elicited multisensory enhancement at near-ceiling levels. Conventions are the same as Figure 3

3.4.1 |. Novel stimuli were integrated to enhance responses

The novel cross-modal combinations (A3V1hi, A3V2low) elicited levels of approach performance, d’, and β (Table 3, Figure 5, rightmost plots) similar to those of the familiar combinations. In each case, approach performance and d’ were significantly elevated above those elicited by the visual stimuli alone, and as before, β in the enhancing multisensory conditions was not significantly different from that in the visual-alone conditions. However, the novel, unrewarded A3 stimulus did not elicit approach responses when presented alone, a response pattern similar to that elicited by the non-rewarded A2 stimulus. Yet, when presented in combination with either visual stimulus, A3 enhanced responses, similar to the effect of the rewarded A1 stimulus (Table 3). Thus, the animals’ responses to A3 were not fully consonant with those to either the A1 or the A2 stimulus, suggesting that animals were not simply generalizing responses from the familiar auditory stimuli.

3.5 |. Experiment 3 training

The final experiment evaluated how multisensory integration might adapt to changes in stimulus value, which is expected to be dynamic within and across various environmental contexts. For this purpose, animals were retrained with a stimulus set that reversed the features and reward associations from Experiment 1. In the new training set, V2 and A2 were always presented together and associated with high reward (A2V2hi), V1 was presented alone and associated with low reward (V1low), and A1 was presented alone and not rewarded (A1no; Table 1). During this re-training (43–57 trials of each type per day, over 12–16 days), all stimuli were presented at high intensity, and trials containing visual stimuli reliably elicited accurate orientation/approach responses (Figure 6). Animals now opted not to respond to the A1no stimulus, yielding the same low response levels previously elicited by the A2no stimulus in Experiments 1 and 2.

FIGURE 6.

The effect of reversing stimulus identity and reward association. Data from an exemplar animal are shown during training in Experiment 3. Animals immediately performed at ceiling levels. Conventions are the same as Figure 2

3.6 |. Experiment 3 testing

Testing included all stimuli presented alone and in all possible combinations (results summarized in Table 4 and Figure 7). Visual intensities were decreased to the same levels as in Experiments 1 and 2 prior to testing. Approach performance, d’, and β measured on the visual-alone trials were similar to that observed in previous experiments. Neither auditory stimulus reliably elicited orientation responses when presented individually, even though the A2no stimulus had been recently rewarded in the A2V2hi combination during training. Presumably, the animals retained the knowledge that it was not individually rewarded.

TABLE 4.

Results of experiment 3

Stimulus  % orientation to location  d’  β  MEv
V1low 45, ±1.16 2.49 ± 0.09 2.62 ± 0.09 N/A
V2hi 47, ±1.20 2.52 ± 0.07 2.60 ± 0.07 N/A
A1V1low 62, ±1.36 2.88 ± 0.06 2.59 ± 0.08 38%
A1V2hi 60, ±1.43 2.79 ± 0.08 2.50 ± 0.06 29%
A2V1low 78, ±1.44 3.44 ± 0.06 2.67 ± 0.06 74%
A2V2hi 77, ±1.3 3.48 ± 0.06 2.72 ± 0.06 63%
A1no 16%, ±0.88
A2no 18%, ±1.08
Catch
Fixed effects Δ; p-values, df (% correct; d’; β) [ΔMEv]
A1V1low versus V1low 0.67 ± 0.26; .02308, 3,779; 0.009426, 46; 0.3649, 46
A1V2hi versus V2hi 0.58 ± 0.26; .03676, 3,765; 0.1192, 46; 0.5254, 46
A2V1low versus V1low 1.49 ± 0.41; .008489, 3,752; 0.003596, 46; 0.3689, 46
A2V2hi versus V2hi 1.38 ± 0.45; .01441, 3,746; 0.006396, 46; 0.1758, 46
A1V1low versus A1V2hi: −0.01 ± 0.11; .887, 3,771; 0.4285, 46; 0.919, 46 [0.5098, 46]
A2V1low versus A2V2hi: −0.01 ± 0.12; .9013, 3,725; 0.6672, 46; 0.5455, 46 [0.7802, 46]
A1V1low versus A2V2hi: 0.79 ± 0.22; .008423, 3,753; 0.007481, 46; 0.04929, 46 [0.1302, 46]
A2V1low versus A1V2hi: −0.82 ± 0.19; .004348, 3,744; 0.08603, 46; 0.1828, 46 [0.02173, 46]
Random effects  Intercept location; intercept cat; condition | cat
A1V1low versus V1low (Percent Correct) 0.22; 0.39; 0.43
d 0.08; 0.00; 0.02
A1V2hi versus V2hi (Percent Correct) 0.27; 0.44; 0.44
d 0.12; 0.63; 0.58
A2V1low versus V1low (Percent Correct) 0.30; 0.41; 0.69
d 0.13; 0.31; 0.24
A2V2hi versus V2hi (Percent Correct) 0.26; 0.44; 0.77
d 0.05; 0.31; 0.32
A1V1low versus A1V2hi (Percent Correct) 0.24; 0.15; 0.16
d 0.10; 0.14; 0.49
A2V1low versus A2V2hi (Percent Correct) 0.31; 0.45; 0.15
d 0.08; 0.32; 0.01
A1V1low versus A2V2hi (Percent Correct) 0.19; 0.15; 0.36
d 0.04; 0.13; 0.18
A2V1low versus A1V2hi (Percent Correct) 0.31; 0.44; 0.30
d 0.16; 0.64; 0.31
Regression Slope, intercept, R2, p-value
(A1V1hi-V1hi) versus V1hi −0.81, 78.37, 0.77, <.0001
(A1V2low-V2low) versus V2low −0.84, 79.54, 0.82, <.0001
(A2V1hi-V1hi) versus V1hi −0.41, 17.53, 0.19, .03
(A2V2low-V2low) versus V2low −0.31, 17.24, 0.12, .10

FIGURE 7.

Reversing reward contingencies reversed the pattern of multisensory enhancement. Top: Training and Testing Conditions. Animals were retrained with the coupling probabilities and reward outcomes reversed from Exp. 1. Bottom: Results of Testing. The two leftmost plots illustrate the results of testing with combinations involving the A2 stimulus, which now yielded significantly enhanced responses (e.g., in the A2V2hi combination). The two rightmost plots illustrate the results of testing with combinations involving the A1 stimulus, which had not been trained in combination with a rewarded visual stimulus since Experiment 1 and which now elicited significant but more modest levels of enhancement. Altogether, the pattern of multisensory enhancement is (partially) reversed from what was seen in Exp. 1 (Figure 3), reflecting the reversal of the stimulus associations in training. Conventions are the same as Figure 3

3.6.1 |. Reversing the reward contingencies reversed the multisensory enhancement pattern

Multisensory responses to cross-modal combinations involving A2 (previously not enhanced) were now robustly enhanced (Figure 7, leftmost plots). The magnitude of enhancement, as well as the accuracy, d’, and β levels observed, was very similar to that elicited by combinations involving the A1 stimulus in Experiment 1 (Table 4). As before, coupling probability and the reward level of the visual stimulus did not influence enhancement. Also as before, β in the multisensory conditions was not significantly different from that in the visual-alone conditions, but d’ was enhanced with the exception of one condition (A1V2hi).

Performance was still significantly enhanced in combinations involving the A1 stimulus, likely reflecting its previous association with reward in the preceding experiments (Figure 7, rightmost plots). The enhancement was now less robust, with lower performance levels and smaller increases in d’ (Table 4, Figure 7, rightmost plots). In short, the enhancement was less than that obtained with A2, and less than that obtained with pairings involving A1 in Exp. 1. These data strongly suggest that it is easier to associate a previously unrewarded stimulus with reward than to extinguish an already-established reward association.

4 |. DISCUSSION

Congruent visual and auditory stimuli typically elicit more reliable and more accurate behavioral responses to external events (Bolognini et al., 2007; Burnett et al., 2007; Corneil et al., 2002; Francesca et al., 2002; Frens et al., 1995; Gingras et al., 2009; Lovelace et al., 2003; Stein et al., 1989, 1998; Wallace et al., 1996; Wang et al., 2008). As shown here, this powerful multisensory enhancement effect is evident in detection and localization responses regardless of whether the cross-modal stimuli presented are familiar or novel. This multisensory enhancement effect has been observed even in situations in which the individual modality-specific stimuli were associated with conflicting response demands (e.g., “Go” vs. “NoGo”; Jiang et al., 2002; Rowland et al., 2014; Stein et al., 1989). According to conventional logic, these enhancements reflect the fact that congruent stimuli are mutually informative about the timing and location of the same event regardless of whether they are novel or familiar, and regardless of the specific responses they are associated with (Meredith et al., 1987; Meredith & Stein, 1986; Stein & Meredith, 1993; Van Wanrooij et al., 2010). In contrast, stimuli that are disparate in space or time are inferred to derive from different events, and thus compete for a response (Meredith & Stein, 1996; Yu et al., 2018).

The brain must often determine which of its many sensory inputs “match” one another and should be integrated, and which do not match and should be segregated. This dynamic process is thought to require a highly sophisticated computational strategy (Aller & Noppeney, 2019; Cuppini et al., 2017; Kayser & Shams, 2015; Kording et al., 2007; Parise & Ernst, 2016; Shams & Beierholm, 2010). The present results suggest that this matching problem may be simpler than it appears because many signals, including those that are learned to be “unimportant” (e.g., because they are dissociated from reward), are filtered prior to the multisensory computation. For example, if matching 12 visual signals with 12 auditory signals requires 12 × 12 = 144 decisions, and if the number of signals in each domain is quartered by filtering (the filter requires evaluation of each signal, 12 + 12 = 24 decisions), then only 3 × 3 = 9 matching decisions must be made (a 77% reduction in total decision number). However, while this strategy may enhance efficiency, it requires an additional component that is not present in standard models of multisensory integration.
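The bookkeeping in this example can be made explicit; the sketch below simply restates the paragraph’s numbers, with the keep fraction of 1/4 as the illustrative filter.

```python
# Filtering each modality before cross-modal matching trades 144 pairwise
# decisions for 24 filter evaluations plus 9 pairwise decisions (~77% reduction).
def matching_decisions(n_visual: int, n_auditory: int, keep_fraction: float = 0.25):
    unfiltered = n_visual * n_auditory
    filter_cost = n_visual + n_auditory
    kept = round(n_visual * keep_fraction) * round(n_auditory * keep_fraction)
    filtered_total = filter_cost + kept
    reduction = 1 - filtered_total / unfiltered
    return unfiltered, filtered_total, reduction

print(matching_decisions(12, 12))  # (144, 33, ~0.77)
```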

In oft-used Bayesian models of this phenomenon, the products of multisensory integration are reflected in a (posterior) distribution of event location. This posterior distribution is determined by the combination of distributions representing the perceptibility of the individual sensory signals (likelihood) and prior knowledge that common sources generate concordant signals (prior; e.g., Jäkel & Ernst, 2003; Parise & Ernst, 2016; Pouget et al., 2002; Rowland et al., 2007b). Behavioral decisions are made by applying cost functions that reflect higher-order constraints, such as response demands and reward associations, to the posterior distribution (Daunizeau, den Ouden, Pessiglione, Kiebel, Friston, et al., 2010; Daunizeau, den Ouden, Pessiglione, Kiebel, Stephan, et al., 2010; Kording, 2007). The influence of these constraints can be seen in the auditory-alone conditions of the present data, where animals elect not to make time- and energy-consuming orientation responses to auditory stimuli that are perceptible but not individually associated with reward. This architecture also provides a logical explanation for why response demands and stimulus novelty can influence behavior but not the multisensory computation itself. However, it cannot explain how a congruent, perceptible, dynamic, unpredictable, and mutually informative auditory stimulus would fail to enhance performance when presented with the visual stimulus.
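The class of models referenced here can be illustrated with the textbook case of two independent Gaussian likelihoods and a flat prior over location; this is a generic sketch, not a model fit to the present data, and all numerical values are arbitrary.

```python
# Generic Bayesian cue combination with Gaussian likelihoods and a flat prior.
def combine_gaussian_cues(mu_v, sigma_v, mu_a, sigma_a):
    """Posterior mean and s.d. of event location given visual and auditory cues."""
    w_v, w_a = 1.0 / sigma_v**2, 1.0 / sigma_a**2  # reliabilities (inverse variances)
    mu_post = (w_v * mu_v + w_a * mu_a) / (w_v + w_a)
    sigma_post = (w_v + w_a) ** -0.5
    return mu_post, sigma_post

# A reliable visual estimate (s.d. 2 deg) combined with a noisier auditory one (s.d. 8 deg)
print(combine_gaussian_cues(mu_v=10.0, sigma_v=2.0, mu_a=14.0, sigma_a=8.0))
```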

Yet, that was the empirical result. It was as if this clearly detectable and informative stimulus faded into the background when it was dissociated from reward. Accounting for this effect requires the insertion, before the multisensory computation, of (value-based) filtering that is sensitive to the stimulus features, either as a separate stage or through recurrent connections in a more elaborate architecture. This filter changes a standard architecture in nontrivial ways. It requires that higher-order constraints that are specific to the stimulus features (not “prior” in the conventional sense) be applied early in the computation, before the posterior. It affects, but is not the sole determinant of, the responses animals select for a given stimulus condition, which is also dictated by an evaluation of predicted reward. This can be most clearly seen in the dissociation between responses to individually presented auditory stimuli (which eventually become NoGo) and the different multisensory enhancements observed when they are combined with the visual stimuli. This filter is not all-or-none (see Figure S1), but whether this filtering is based on an analog scaling factor or reflects stochastic gating remains to be determined.
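One way to picture the proposed filter is as a scalar gate applied to the devalued modality before the combination step. The sketch below is purely conceptual: the additive combination rule and all values are illustrative assumptions, and, as noted above, whether the biological gate is analog or stochastic is unresolved.

```python
# Conceptual sketch of value-based gating before cue combination: a scalar weight
# (0 = fully filtered, 1 = full access) scales the auditory drive before it is
# combined with the visual drive.
def gated_combination(visual_drive: float, auditory_drive: float, value_gate: float) -> float:
    """Combined drive after gating the auditory input by its learned value."""
    return visual_drive + value_gate * auditory_drive  # additive rule, for illustration only

v_drive, a_drive = 1.0, 0.8
print(gated_combination(v_drive, a_drive, value_gate=1.0))  # rewarded cue: enhanced drive
print(gated_combination(v_drive, a_drive, value_gate=0.0))  # devalued cue: visual-only drive
```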

There is neurophysiological evidence for the operation of such a filter at early stages of sensory processing. Neurons in primary visual cortex are sensitive to reward associations that lead to enhanced representations (Henschke et al., 2020), higher BOLD responses (Serences, 2008), and responses that predict the timing of reward (Shuler & Bear, 2006). Responses in visual area V4 are modulated by increasing reward magnitudes across target locations (Baruni et al., 2015). In the auditory system, training modulates receptive field properties in primary auditory cortex for both appetitive and aversive cues (David et al., 2012), modulates responses to target and reference tones (Fritz et al., 2005), and produces responses reflecting reward prediction (Weis et al., 2013).

In this experiment, both high- and low-valued visual stimuli elicited similar levels of multisensory enhancement. Access to the multisensory computation was flexible, and the speed with which a stimulus gained or lost that access was asymmetric with respect to reward association. Consistent with the literature on extinction, learning that a stimulus is paired with reward rapidly provided it access, but extinguishing that association by de-coupling it from reward had a slower time course (Delamater, 1996; Lovibond et al., 2015; Skinner, 1933; Williams, 1938). It is probable that enhancement involving the A1 stimulus could have been completely extinguished if the experiment had continued.

Perhaps it is not surprising that experience can affect multisensory enhancement, as acquiring the general capacity is experience-dependent. The neonatal SC is incapable of integrating its visual and auditory inputs (Stein et al., 2014), as are adults that have been reared without covariant auditory-visual experience. Animals raised in darkness, or in omnidirectional masking sound, or with independently presented visual and auditory cues, are unable to generate enhanced responses to auditory-visual stimuli (Wallace et al., 2004; Xu et al., 2012, 2014; Yu et al., 2013). There is clear specificity to this development. Animals develop the ability to integrate cross-modal stimuli only at locations at which they have been experienced (Yu et al., 2010) and only develop the ability to integrate the specific modalities that have been experienced as covariant (e.g., auditory-somatosensory stimuli are integrated by dark-reared animals; Xu et al., 2015). This experience appears to influence the SC via a corticotectal projection originating in an area of association cortex, the anterior ectosylvian sulcus (Jiang et al., 2001; Rowland et al., 2014; Wallace & Stein, 1994). This component is crucial for the expression of multisensory integration in detection/localization behaviors (Jiang et al., 2002; Wilkinson et al., 1996). Similar to the studies discussed above, auditory training also induces changes specific to the frequency of conditioned cues in the auditory fields of the ectosylvian sulcus. These changes are flexible, evolving with training and diminishing back to baseline with extinction (Diamond & Weinberger, 1986). It is, therefore, possible that this same descending projection was responsible for assigning value to a given cue in the current experiments.

However, an important distinction must be made here. In the present study, multisensory performance was insensitive to different rates of exposure to different feature combinations. Thus, while experience with covariant cross-modal stimuli is essential in development, it appears to establish a general capability to integrate information across those senses. Once this capability has formed, stimuli with different features (those passing the value-based filter described above) appear to be integrated to the same degree, and extensive experience with every specific feature combination is not essential. Extensive additional experience with covariant stimulus features provides no further facilitation. But while this is true in this particular functional domain (with the features manipulated here), it may not be universally true in all multisensory domains.

The features important for multisensory integration vary by task domain (Stein et al., 2020). Here, spatial and temporal relationships are crucially important to the task, but other features are not. In other domains, other feature relationships are more important, and a different set of constraints might be identified for whether multisensory integration in the adult is sensitive to feature covariance (e.g., see Murray et al., 2010; Odegaard & Shams, 2016). In these experiments, assigning value to a stimulus and, thus, its access to the multisensory computation was based on associating stimuli with food reward. There may be many other circumstances in which different experiences are used to assign value to a stimulus (e.g., predictability), but these remain to be determined. The value could vary by context. However, we expect the heuristic, “rule of thumb” strategy of value-based filtering prior to multisensory integration to extend to other circumstances and likely to be helpful in understanding the wide variation in multisensory performance observed across subjects in a variety of psychophysical tests. Variations in performance in these tasks can be due, in part, to task instructions (Hairston et al., 2003; Warren, 1979; Welch, 1972) or attentional factors (Colonius & Diederich, 2004; Donohue et al., 2015; Talsma et al., 2010). But here we add another consideration, the assignment of value, which may intersect with these other factors. It seems quite possible that different value assignments may also contribute to the substantial differences in the multisensory products observed between neurotypic subjects and those diagnosed with psychiatric disorders such as ASD (Brandwein et al., 2015; Irwin & Brancazio, 2014; Marco et al., 2011; Speer et al., 2007; Stevenson et al., 2014, Tantam et al., 1989), schizophrenia (de Gelder et al., 2003, 2005; de Jong et al., 2009; Williams et al., 2010), and dyslexia (Hairston et al., 2005; see review in Wallace & Stevenson, 2014).

Supplementary Material

Figure S1

ACKNOWLEDGEMENTS

This work was supported by National Institutes of Health (Grant EY026916, EY031532, NS073553) and the Tab Williams Foundation. We thank Nancy London for technical assistance in the preparation of this manuscript.

Funding information

Tab Williams Foundation; National Institutes of Health, Grant/Award Number: EY026916, EY031532 and NS073553

Footnotes

PEER REVIEW

The peer review history for this article is available at https://publons.com/publon/10.1111/ejn.15167.

DATA AVAILABILITY STATEMENT

Primary data are available from the authors on request.

SUPPORTING INFORMATION

Additional supporting information may be found online in the Supporting Information section.

CONFLICT OF INTEREST

The authors declare no conflict of interest.

REFERENCES

1. Alais D, & Burr D. (2004). The ventriloquist effect results from near-optimal bimodal integration. Current Biology, 14, 257–262.
2. Aller M, & Noppeney U. (2019). To integrate or not to integrate: Temporal dynamics of hierarchical Bayesian causal inference. PLOS Biology, 17, e3000210.
3. Baruni JK, Lau B, & Salzman CD (2015). Reward expectation differentially modulates attentional behavior and activity in visual area V4. Nature Neuroscience, 18, 1656–1663.
4. Battaglia PW, Jacobs RA, & Aslin RN (2003). Bayesian integration of visual and auditory signals for spatial localization. Journal of the Optical Society of America, 20, 1391–1397.
5. Beierholm UR, Quartz SR, & Shams L. (2009). Bayesian priors are encoded independently from likelihoods in human multisensory perception. Journal of Vision, 9(5), 23. 10.1167/9.5.23
6. Bolognini N, Leo F, Passamonti C, Stein BE, & Ladavas E. (2007). Multisensory-mediated auditory localization. Perception, 36, 1477–1485.
7. Brandwein AB, Foxe JJ, Butler JS, Frey HP, Bates JC, Shulman LH, & Molholm S. (2015). Neurophysiological indices of atypical auditory processing and multisensory integration are associated with symptom severity in autism. Journal of Autism and Developmental Disorders, 45, 230–244.
8. Burnett LR, Stein BE, Chaponis D, & Wallace MT (2014). Superior colliculus lesions preferentially disrupt multisensory orientation. Neuroscience, 124, 535–547.
9. Burnett LR, Stein BE, Perrault TJ, & Wallace MT (2007). Excitotoxic lesions of the superior colliculus preferentially impact multisensory neurons and multisensory integration. Experimental Brain Research, 179, 325–338.
10. Colonius H, & Diederich A. (2004). Multisensory interaction in saccadic reaction time: A time-window-of-integration model. Journal of Cognitive Neuroscience, 16, 1000–1009.
11. Corneil BD, & Munoz DP (1996). The influence of auditory and visual distractors on human orienting gaze shifts. The Journal of Neuroscience, 16, 8193–8207.
12. Corneil BD, Van Wanrooij M, Munoz DP, & Van Opstal AJ (2002). Auditory-visual interactions subserving goal-directed saccades in a complex scene. Journal of Neurophysiology, 88, 438–454.
13. Cuppini C, Shams L, Magosso E, & Ursino M. (2017). A biologically inspired neurocomputational model for audiovisual integration and causal inference. European Journal of Neuroscience, 46, 2481–2498.
14. Dakos AS, Walker EM, Jiang H, Stein BE, & Rowland BA (2019). Interhemispheric visual competition after multisensory reversal of hemianopia. The European Journal of Neuroscience, 50, 3702–3712.
15. Daunizeau J, den Ouden HEM, Pessiglione M, Kiebel SJ, Friston KJ, & Stephan KE (2010). Observing the observer (II): Deciding when to decide. PLoS One, 5, e15555. 10.1371/journal.pone.0015555
16. Daunizeau J, den Ouden HEM, Pessiglione M, Kiebel S, Stephan KE, & Friston KJ (2010). Observing the observer (I): Meta-bayesian models of learning and decision-making. PLoS One, 5, e15554. 10.1371/journal.pone.0015554
17. David SV, Fritz JB, & Shamma SA (2012). Task reward structure shapes rapid receptive field plasticity in auditory cortex. Proceedings of the National Academy of Sciences of the United States of America, 109, 2144–2149.
18. de Gelder B, Vroomen J, Annen L, Masthof E, & Hodiamont P. (2003). Audio-visual integration in schizophrenia. Schizophrenia Research, 59, 211–218.
19. de Gelder B, Vroomen J, de Jong SJ, Masthoff ED, Trompenaars FJ, & Hodiamont P. (2005). Multisensory integration of emotional faces and voices in schizophrenics. Schizophrenia Research, 72, 195–203.
20. de Jong JJ, Hodiamont PP, Van den Stock J, & de Gelder B. (2009). Audiovisual emotion recognition in schizophrenia: Reduced integration of facial and vocal affect. Schizophrenia Research, 107, 286–293.
21. Delamater AR (1996). Effects of several extinction treatments upon the integrity of Pavlovian stimulus-outcomes associations. Animal Learning and Behavior, 24, 437–449.
22. Diamond DM, & Weinberger NM (1986). Classical conditioning rapidly induces specific changes in frequency receptive fields of single neurons in secondary and ventral ectosylvian auditory cortical fields. Brain Research, 372, 357–360.
23. Donohue SE, Green JJ, & Woldorff MG (2015). The effects of attention on the temporal integration of multisensory stimuli. Frontiers in Integrative Neuroscience, 9, 32.
24. Ernst MO, & Banks MS (2002). Humans integrate visual and haptic information in a statistically optimal fashion. Nature, 415, 429–433.
25. Francesca F, Bolognini N, & Ladavas E. (2002). Enhancement of visual perception by crossmodal visuo-auditory interaction. Experimental Brain Research, 147, 332–343.
26. Frens MA, Van Opstal AJ, & Van Der Willigen RF (1995). Spatial and temporal factors determine auditory-visual interactions in human saccadic eye movements. Perception & Psychophysics, 57, 802–816.
27. Fritz JB, Elhilali M, & Shamma S. (2005). Differential dynamic plasticity of A1 receptive fields during multiple spectral tasks. The Journal of Neuroscience, 25, 7623–7635.
28. Gau R, & Noppeney U. (2016). How prior expectations shape multisensory perception. NeuroImage, 124, 876–886.
29. Giard MH, & Peronnet F. (1999). Auditory-visual integration during multimodal object recognition in humans: A behavioral and electrophysiological study. Journal of Cognitive Neuroscience, 11, 473–490.
30. Gingras G, Rowland BA, & Stein BE (2009). The differing impact of multisensory and unisensory integration on behavior. The Journal of Neuroscience, 29, 4897–4902.
31. Goldring JE, Dorris MC, Corneil BD, Ballantyne PA, & Munoz DP (1996). Combined eye-head gaze shifts to visual and auditory targets in humans. Experimental Brain Research, 111, 68–78.
32. Grant KW, & Seitz PF (2000). The use of visible speech cues for improving auditory detection of spoken sentences. The Journal of the Acoustical Society of America, 108, 1197–1208.
33. Green DM, & Swets JA (1966). Signal detection theory and psychophysics. Wiley.
34. Hairston WD, Burdette JH, Flowers DL, Wood FB, & Wallace MT (2005). Altered temporal profile of visual-auditory multisensory interactions in dyslexia. Experimental Brain Research, 166, 474–480.
35. Hairston WD, Wallace MT, Vaughan JW, & Stein BE (2003). Visual localization ability influences cross-modal bias. Journal of Cognitive Neuroscience, 15, 20–29.
36. Henschke JU, Dylda E, Katsanevaki D, Dupuy N, Currie SP, Amvrosiadis T, Pakan JMP, & Rochefort NL (2020). Reward association enhances stimulus-specific representations in primary visual cortex. Current Biology, 30, 1866–1880.
37. Irwin JR, & Brancazio L. (2014). Seeing to hear? Patterns of gaze to speaking faces in children with autism spectrum disorders. Frontiers in Psychology, 5, 1–10.
38. Jäkel F, & Ernst MO (2003). Learning to combine arbitrary signals from vision and touch. In Eurohaptics 2003 Conference Proceedings (pp. 276–290). Trinity College Dublin and Media Lab Europe, Trinity College.
39. Jiang W, Jiang H, Rowland BA, & Stein BE (2007). Multisensory orientation behavior is disrupted by neonatal cortical ablation. Journal of Neurophysiology, 97, 557–562.
40. Jiang W, Jiang H, & Stein BE (2002). Two corticotectal areas facilitate multisensory orientation behavior. Journal of Cognitive Neuroscience, 14, 1240–1255.
41. Jiang W, Wallace MT, Jiang H, Vaughan JW, & Stein BE (2001). Two cortical areas mediate multisensory integration in superior colliculus. The Journal of Neurophysiology, 85, 506–522.
42. Kayser C, & Shams L. (2015). Multisensory causal inference in the brain. PLOS Biology, 13, e1002075.
43. Knill DC, & Pouget A. (2004). The Bayesian brain: The role of uncertainty in neural coding and computation. Trends in Neurosciences, 27, 712–719.
44. Kording KP (2007). Decision theory: What ‘should’ the nervous system do? Science, 318, 606–610.
45. Kording KP, Beierholm U, Ma WJ, Quartz S, Tenenbaum JB, & Shams L. (2007). Causal inference in multisensory perception. PLoS One, 2, e943.
46. Lovelace CT, Stein BE, & Wallace MT (2003). An irrelevant light enhances auditory detection in humans: A psychophysical analysis of multisensory integration in stimulus detection. Cognitive Brain Research, 17, 447–453.
47. Lovibond PF, Satkunarajah M, & Colagiuri B. (2015). Extinction can reduce the impact of reward cues on reward-seeking behavior. Behavior Therapy, 46, 432–438.
48. Ma WJ, Zhou X, Ross LA, Foxe JJ, & Parra LC (2009). Lip-reading aids word recognition most in moderate noise: A Bayesian explanation using high-dimensional feature space. PLoS One, 4, e4638.
49. Marco EJ, Hinkley LBN, Hill SS, & Nagarajan SS (2011). Sensory processing in autism: A review of neurophysiologic findings. Pediatric Research, 69, 48R–54R. 10.1203/PDR.0b013e3182130c54
50. Meredith MA, Nemitz JW, & Stein BE (1987). Determinants of multisensory integration in superior colliculus neurons. I. Temporal factors. The Journal of Neuroscience, 7, 3215–3229.
51. Meredith MA, & Stein BE (1986). Spatial factors determine the activity of multisensory neurons in the cat superior colliculus. Brain Research, 365, 350–354.
52. Meredith MA, & Stein BE (1996). Spatial determinants of multisensory integration in cat superior colliculus neurons. Journal of Neurophysiology, 75, 1843–1857.
53. Murray MM, Molholm S, Michel CM, Heslenfeld DJ, Ritter W, Javitt DC, Schroeder CE, & Foxe JJ (2010). Grabbing your ear: Rapid auditory-somatosensory multisensory interactions in low-level sensory cortices are not constrained by stimulus alignment. Cerebral Cortex, 15, 963–974.
54. Odegaard B, & Shams L. (2016). The brain’s tendency to bind audiovisual signals is stable but not general. Psychological Science, 27, 583–591.
55. Parise CV, & Ernst MO (2016). Correlation detection as a general mechanism for multisensory integration. Nature Communications, 7, 11543.
56. Pouget A, Deneve S, & Duhamel J. (2002). A computational perspective on the neural basis of multisensory spatial representations. Nature Reviews Neuroscience, 3, 741–747.
57. Ross LA, Saint-Amour D, Leavitt VM, Javitt DC, & Foxe JJ (2007). Do you see what I’m saying? Exploring visual enhancement of speech comprehension in noisy environments. Cerebral Cortex, 17, 1147–1153.
58. Rowland BA, Jiang W, & Stein BE (2014). Brief cortical deactivation early in life has long-lasting effects on multisensory behavior. The Journal of Neuroscience, 34, 7198–7202.
59. Rowland BA, Stanford TR, & Stein BE (2007a). A model of the neural mechanisms underlying multisensory integration in the superior colliculus. Perception, 36, 1431–1443. 10.1068/p5842
60. Rowland BA, Stanford TR, & Stein BE (2007b). A Bayesian model unifies multisensory spatial localization with the physiological properties of the superior colliculus. Experimental Brain Research, 180, 153–161. 10.1007/s00221-006-0847-2
61. Sanchez-Garcia C, Alsius A, Enns JT, & Soto-Faraco S. (2011). Cross-modal prediction in speech perception. PLoS One, 6, e25198.
62. Serences JT (2008). Value-based modulations in human visual cortex. Neuron, 60, 1169–1181.
63. Shams L, & Beierholm UR (2010). Causal inference in perception. Trends in Cognitive Sciences, 14, 425–432.
64. Shams L, Ma WJ, & Beierholm U. (2005). Sound-induced flash illusion as an optimal percept. NeuroReport, 16, 1923–1927.
65. Shuler MG, & Bear MF (2006). Reward timing in the primary visual cortex. Science, 311, 1606–1609.
66. Skinner BF (1933). On the rate of extinction of a conditioned reflex. Journal of General Psychology, 9, 114–129.
67. Sommers MS, Tye-Murray N, & Spehar B. (2005). Auditory-visual speech perception and auditory-visual enhancement in normal-hearing younger and older adults. Ear and Hearing, 26, 263–275.
68. Speer LL, Cook AE, McMahon WM, & Clark E. (2007). Face processing in children with autism: Effects of stimulus contents and type. Autism, 11, 265–277.
69. Stein BE, Huneycutt WS, & Meredith MA (1988). Neurons and behavior: The same rules of multisensory integration apply. Brain Research, 448, 355–358.
70. Stein BE, & Meredith MA (1993). The merging of the senses. MIT Press.
71. Stein BE, Meredith MA, Huneycutt WS, & McDade L. (1989). Behavioral indices of multisensory integration: Orientation to visual cues is affected by auditory stimuli. The Journal of Cognitive Neuroscience, 1, 12–24.
72. Stein BE, & Stanford TR (2008). Multisensory integration: Current issues from the perspective of the single neuron. Nature Reviews Neuroscience, 9, 255–267.
73. Stein BE, Stanford TR, & Rowland BA (2014). Development of multisensory integration from the perspective of the individual neuron. Nature Reviews Neuroscience, 15, 520–535.
74. Stein BE, Stanford TR, & Rowland BA (2020). Multisensory integration and the Society for Neuroscience: Then and now. The Journal of Neuroscience, 40, 3–11. 10.1523/JNEUROSCI.0737-19.2019
75. Stevenson RA, Siemann JK, Schneider BC, Eberly HE, Woynaroski TG, Camarata SM, & Wallace MT (2014). Multisensory temporal integration in autism spectrum disorders. The Journal of Neuroscience, 34, 691–697.
76. Sumby WH, & Pollack I. (1954). Visual contribution to speech intelligibility in noise. The Journal of the Acoustical Society of America, 26, 212–215.
77. Talsma D, Senkowski D, Soto-Faraco S, & Woldorff MG (2010). The multifaceted interplay between attention and multisensory integration. Trends in Cognitive Sciences, 14, 400–410.
78. Tantam D, Monaghan L, Nicholson H, & Stirling J. (1989). Autistic children’s ability to interpret faces: A research note. Journal of Child Psychology and Psychiatry, 30, 623–630.
79. Tye-Murray N, Spehar B, Meyerson J, Sommers MS, & Hale S. (2011). Cross-modal enhancement of speech detection in young and older adults: Does signal content matter? Ear and Hearing, 32, 650–655.
80. Van Wanrooij MM, Bremen P, & Van Opstal AJ (2010). Acquired prior knowledge modulates audiovisual integration. European Journal of Neuroscience, 31, 1763–1771.
81. Wallace MT, Perrault TJ, Hairston WD, & Stein BE (2004). Visual experience is necessary for the development of multisensory integration. The Journal of Neuroscience, 24, 9580–9584.
82. Wallace MT, & Stein BE (1994). Cross-modal synthesis depends on input from cortex. Journal of Neurophysiology, 71, 429–432.
83. Wallace MT, & Stevenson RA (2014). The construct of the multisensory temporal binding window and its dysregulation in developmental disabilities. Neuropsychologia, 64, 105–123.
84. Wallace MT, Wilkinson LK, & Stein BE (1996). Representation and integration of multiple sensory inputs in primate superior colliculus. Journal of Neurophysiology, 76, 1246–1256.
85. Wang Y, Celebrini S, Trotter Y, & Barone P. (2008). Visuo-auditory interactions in the primary visual cortex of the behaving monkey: Electrophysiological evidence. BMC Neuroscience, 9, 79. 10.1186/1471-2202-9-79
86. Warren DH (1979). Spatial localization under conflict conditions: Is there a single explanation? Perception, 8, 323–337.
87. Weis T, Brechmann A, Puschmann S, & Thiel CM (2013). Feedback that confirms reward expectation triggers auditory cortex activity. Journal of Neurophysiology, 110, 1860–1866.
88. Welch RB (1972). The effect of experienced limb identity upon adaptation to simulated displacement of the visual field. Perception & Psychophysics, 12, 453–456.
89. Wilkinson LK, Meredith MA, & Stein BE (1996). The role of anterior ectosylvian cortex in cross-modality orientation and approach behavior. Experimental Brain Research, 112, 1–10.
90. Williams LE, Light GA, Braff DL, & Ramachandran VS (2010). Reduced multisensory integration in patients with schizophrenia on a target detection task. Neuropsychologia, 48, 3128–3136.
91. Williams SB (1938). Resistance to extinction as a function of the number of reinforcements. Journal of Experimental Psychology, 23, 506–522.
92. Xu J, Yu L, Rowland BA, Stanford TR, & Stein BE (2012). Incorporating cross-modal statistics in the development and maintenance of multisensory integration. The Journal of Neuroscience, 32, 2287–2298.
93. Xu J, Yu L, Rowland BA, & Stein BE (2014). Noise-rearing disrupts the maturation of multisensory integration. European Journal of Neuroscience, 39, 602–613.
94. Xu J, Yu L, Stanford TR, Rowland BA, & Stein BE (2015). What does a neuron learn from multisensory experience? The Journal of Neurophysiology, 113, 883–889.
95. Yu L, Cuppini C, Xu J, Rowland BA, & Stein BE (2018). Cross-modal competition: The default computation for multisensory processing. The Journal of Neuroscience, 39, 1374–1385.
96. Yu L, Rowland BA, & Stein BE (2010). Initiating the development of multisensory integration by manipulating sensory experience. The Journal of Neuroscience, 30, 4904–4913.
97. Yu L, Xu J, Rowland BA, & Stein BE (2013). Development of cortical influences on superior colliculus multisensory neurons: Effects of dark-rearing. European Journal of Neuroscience, 37, 1594–1601.
