Abstract
Concordant visual–auditory stimuli enhance the responses of individual superior colliculus (SC) neurons. This neuronal capacity for “multisensory integration” is not innate: it is acquired only after substantial cross-modal (e.g. auditory–visual) experience. Masking transient auditory cues by raising animals in omnidirectional sound (“noise-rearing”) precludes this experience and prevents the SC from constructing a normal multisensory (auditory–visual) transform: SC responses to combinations of concordant visual–auditory stimuli are depressed rather than enhanced. The present experiments examined the behavioral consequences of this rearing condition in a simple detection/localization task. In the first experiment, the auditory component of the concordant cross-modal pair was novel, and only the visual stimulus was a target. In the second experiment, both component stimuli were targets. Noise-reared animals failed to show multisensory performance benefits in either experiment. These results reveal a close parallel between behavior and single-neuron physiology in the multisensory deficits induced when noise disrupts early visual–auditory experience.
Keywords: cross-modal, development, multisensory integration, noise-rearing, vision
Introduction
The ability of an animal to integrate information across its multiple senses can provide a significant adaptive advantage in a variety of environmental circumstances. For example, this process of multisensory integration makes it possible to more rapidly and accurately detect, localize, identify, and respond to an external event (Stein et al. 1988, 1989; Frens et al. 1995; Goldring et al. 1996; Corneil et al. 2002; Frassinetti et al. 2002; Battaglia et al. 2003; Alais and Burr 2004; Burnett et al. 2004; Ghazanfar 2005; Bolognini et al. 2007; Burnett et al. 2007; Rowland and Stein 2007a; Gingras et al. 2009; Chen and Spence 2011; Rowland and Stein 2014; Bean et al. 2021; Smyre et al. 2021). Underlying these perceptual and behavioral enhancements are the neural processes that transform convergent, congruent signals derived from common events into responses amplified above the largest component unisensory response (Stein and Meredith 1993). While many areas of the brain process and integrate cross-modal information (Stein and Meredith 1993; Molholm et al. 2002; Beauchamp et al. 2004; Barraclough et al. 2005; Avillac et al. 2007; Gu et al. 2008; Veldhuizen et al. 2010; Mizoguchi et al. 2016; Maier and Elliott 2020; Zheng et al. 2021), this process has been most closely examined in multisensory neurons in the superior colliculus (SC) and the overt attentive, orientation, and localization behaviors that they subserve (Stein and Meredith 1993; Wallace and Stein 1996; Frens and Van Opstal 1998; Bell et al. 2001, 2006; Rowland and Stein 2007b; Stein and Stanford 2008; Stein et al. 2014; Wang et al. 2020).
Using this model, it was found that multisensory integrative processes are not in place at birth in the SC (Wallace and Stein 1997, 2000, 2001) or in related association cortex (Carriere et al. 2007) and require experience with cross-modal stimuli to develop. This experience is normally acquired in early life and adapts the animal to the environment in which it will function (Stein et al. 2014). However, when visual–auditory experience is compromised by raising an animal in darkness or with constant omnidirectional masking sound, its SC neurons do not acquire the ability to integrate their deprived and nondeprived inputs. Their responses to concordant visual–auditory combinations do not show the enhancement that is characteristic of neurons in the neurotypic adult (Wallace et al. 2004; Yu et al. 2010, 2013; Xu et al. 2012, 2014, 2017; Wang et al. 2020). Similar physiological results were obtained when animals were reared in special environments in which visual and auditory stimuli appeared independently (Xu et al. 2012). We hypothesize that these abnormal processing dynamics manifest at the level of the single SC neuron are also reflected in the detection and localization behaviors these neurons mediate. Recent evidence from dark-reared cats (Smyre et al. 2021) and from humans with congenital visual dysfunction (Putzar et al. 2007; Guerreiro et al. 2015) or late-resolved auditory dysfunction (Gilley et al. 2010) is consistent with this hypothesis. These defects in multisensory integration persist during deprivation despite the fact that precluding visual–auditory experience does not prevent neurons from becoming responsive to visual and auditory stimuli individually or in combination (Wallace et al. 2004; Yu et al. 2010, 2013; Xu et al. 2014).
The present study tested the impact of rearing animals in omnidirectional sound (“noise-reared,” see Chang and Merzenich 2003; Efrati and Gutfreund 2011; Xu et al. 2014) on the ability to use visual and auditory cues to facilitate behavioral responses to an external event. Unlike dark-rearing, noise-rearing provides constant stimulation within one modality (auditory) while effectively masking covariant visual–auditory patterned experience. The omnidirectional noise in the housing of these animals was kept below 85 dB to prevent hearing damage and stress (Turner et al. 2005).
These noise-reared animals possess large populations of visual, auditory, and visual–auditory neurons, all of which can be activated by visual–auditory stimuli. In principle, such a broad activation pattern could generate multisensory enhancements (MEs) in behavioral performance, even if individual multisensory SC neurons do not respond with amplified responses. We first evaluated whether noise-reared animals performing a visual localization task would benefit when an auditory nontarget stimulus was presented in combination with the visual target (as do normal animals; see Stein et al. 1989). In the second experiment, we trained animals to orient to both visual and auditory targets to render both modalities explicitly salient and tested whether this training paradigm would reveal a different multisensory computation (see Bean et al. 2021).
Materials and methods
All procedures were conducted in accordance with the Guide for the Care and Use of Laboratory Animals (National Institutes of Health Publication) and an approved Institutional Animal Care and Use Committee protocol at Wake Forest University School of Medicine, an Association for Assessment and Accreditation of Laboratory Animal Care-accredited institution. Five mongrel cats (4 males, 1 female) were used in these studies. They were motivated by food rewards and were maintained at or above 80% of their baseline body weight.
Housing
Two adult male cats reared in standard housing conditions served as a control cohort. The experimental cohort was composed of 3 adult noise-reared cats (2 males, 1 female) that were housed from prenatal day 9 in cages (1.88 × 1.52 × 1.88 m) in the center of a room (3.94 × 4.57 m) that had 6 speakers (Fountek RM-6 K) mounted at ceiling level on all four walls. The speakers were connected in parallel to an amplifier (Pioneer Audio Multi-Channel Receiver SX-316) receiving input from a broadband noise generator (20–20,000 Hz, Coulbourn Instruments S81-02) that produced a constant ~83 dB (range: 83.2–84.1 dB) masking sound in the cages. The sound was continuous (24 h/day, 7 days/week) and sufficient to suppress the perception of most patterned auditory stimuli (see also Chang and Merzenich 2003; Efrati and Gutfreund 2011; Xu et al. 2014).
Apparatus
Training and testing of detection/localization behaviors were conducted using a 90-cm diameter semicircular apparatus. Complexes of LEDs and speakers were mounted on the perimeter wall at 15° intervals from −90° (left) to +90° (right) of a central fixation location (0°) (Gingras et al. 2009; Dakos, Jiang, et al. 2019; Dakos, Walker, et al. 2019; Bean et al. 2021; Smyre et al. 2021). Each complex consisted of 2 speakers (Panasonic model 4D02C0, separated by 4 cm) and 3 light-emitting diodes (LEDs; Lumex Opto/Components model 67-1102-ND, separated by 2 cm and located 4 cm below the speakers). Only the left-most speaker and LED were used at each location for this study.
Training
Animals were first trained to fixate on the 0° location (which was never a target during testing) while being gently restrained by the experimenter at the starting position. In the first experiment, animals were trained to approach the visual target presented at the maximal test intensity (50-ms LED flashes, 2.0–3.7 cd/m²) at each location. Animals were considered ready for testing when they reached at least 80% correct approaches to each target location. In the second experiment, they were trained to also approach the full-intensity auditory targets (50-ms broadband noise bursts, ~61 dB SPL at the start position). As with the visual training, animals were considered ready for testing when they reached at least 80% correct approach responses to each target location. The experimenter initiated a trial by depressing a foot pedal that produced an audible click. A trial consisted of either a single target stimulus at a random location between ±60° (15° intervals, excluding 0°) or no stimulus (catch trial). Approach responses to a target were rewarded with a small food pellet (175 mg, Science Diet). Catch trials were never rewarded, and the animals rapidly learned to remain at the start position during these trials (“No-Go” response). The training criterion was 80% correct performance at each tested location and during catch trials. In order to test noise-reared animals in a setting similar to their housing/rearing environment, a “white” noise generator (Noise Gen App on Apple iPhone SE) emitted constant broadband background noise during visual stimulus training (~83 dB), during determination of stimulus intensities for testing, and during testing (~56 dB), as described below. The background noise was presented from portable speakers (Harman Kardon model HK206) mounted on top of the perimetry device shown in Fig. 1. Noise-reared animals were transferred to and from the testing room in a standard shrouded carrier containing a “white” noise-emitting device (Noise Gen App on Apple iPhone 8, 83.1–84.1 dB). Normally reared animals had been transferred from a previous, unrelated behavioral study and received the same training/testing as the experimental cohort.
Fig. 1.
Apparatus and training performance. A) The detection and localization task was performed in a perimetry apparatus with LEDs and speakers at locations spanning the central 180° of space in 15° intervals (only the central 120° was tested here; the 0° location was used for fixation only). Each stimulus location contained a complex of 2 speakers and 3 LEDs at 2-cm separations. Large speakers mounted above the device delivered background noise. (Figure adapted from Gingras et al. 2009). B) Animals of both cohorts quickly learned to orient to and approach visual (prior to Experiment 1) and auditory (prior to Experiment 2) stimuli. Each animal’s performance is plotted individually (cats 1–5). Both normally reared and noise-reared animals learned the visual (top) and auditory (bottom) tasks rapidly, and there were no significant intergroup differences.
Testing
Experiment 1
After visual approach training, the background noise level was lowered to ~56 dB to avoid masking the auditory test stimuli. Visual intensities were then adjusted at each location until an animal’s visual performance fell within the reduced target range (30–45% correct approaches) over a minimum of 20 trials. This ensured similar visual performance for each animal/location. Animals were then tested with 4 randomly interleaved conditions: a single visual stimulus of reduced intensity, a single broadband auditory stimulus at the maximum stimulus intensity, the visual and auditory (cross-modal) stimuli in spatiotemporal concordance, and catch trials. Stimulus durations were 50 ms. There were 6 presentations/location and 18 catch trials each day of testing. Approaches to the locations of visual stimuli, the untrained auditory stimuli, and their cross-modal combinations were rewarded. All animals were tested for 9 days, resulting in 54 testing trials/location/stimulus/animal.
Experiment 2
After training the animals to orient to auditory targets in ~56-dB background noise, auditory stimulus intensities were lowered in the same manner as described above for the visual stimuli. This ensured similar auditory performance for each animal/location. Animals were then re-tested with the lower-intensity auditory and visual (see above) stimuli, in ~56-dB background noise, using the same methods as in Experiment 1. One experimental animal was tested for 8 days; all other animals were tested for 9 days, resulting in a minimum of 48 testing trials/location/stimulus/animal.
Data analysis
On stimulus-containing trials, responses were scored as correct when the animal approached the target location. Responses were scored as incorrect if the animal approached a location other than the target, or if it remained at the start location (No-Go). For catch trials, a No-Go response was scored as correct if the animal remained at the start location, or as incorrect if it approached any of the LED/speaker complexes. In all cases, the location approached (correct or incorrect) was recorded.
The percentage of each response type was calculated for each stimulus type, animal, and location (shown in Figures and Supplementary Tables). These statistics were used in the analyses below to calculate multiple response measures and multisensory effects. Data were also pooled across animals and locations separately for each cohort to report cohort-level summary statistics (as mean ± standard error). In addition, a longitudinal analysis examined whether there were changes in performance for each cohort within and between experimental blocks, using linear regression fits to performance statistics averaged across animals and locations within a cohort on each test day.
Methods adapted from signal detection theory were used to calculate the metrics of response bias (β) and discriminability (d′) for each stimulus/animal/location. β was calculated for each location (L) by dividing the number of incorrect approaches to that location on catch trials (FA_L) and target-containing trials (I_L) by the sum of the number of catch trials (C) and target-containing trials (T_L), and subtracting this quotient from 1. This metric identified the propensity of animals to “Go” to each location on a trial even when it did not contain a stimulus.
$$\beta_L = 1 - \frac{FA_L + I_L}{C + T_L} \tag{1}$$
d′ was calculated by evaluating the proportion of correct approaches given β: mathematically, by subtracting the z-transform of 1 minus the proportion of correct localization responses (H) from the z-transform of β. This metric represented the animal’s discriminability of the stimulus, accounting for response bias.
$$d'_L = z(\beta_L) - z(1 - H_L) \tag{2}$$
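A minimal sketch of these two computations in Python (the trial counts below are hypothetical, chosen to match the testing schedule of Experiment 1; z() is implemented as the inverse normal cumulative distribution function, clipped to avoid infinite values at proportions of 0 or 1):

```python
import numpy as np
from scipy.stats import norm

def z(p, eps=1e-3):
    """z-transform: inverse normal CDF, clipped away from 0 and 1."""
    return norm.ppf(np.clip(p, eps, 1 - eps))

def beta_location(fa_L, i_L, c, t_L):
    """Eq. (1): response bias at location L.
    fa_L: approaches to L on catch trials; i_L: incorrect approaches to L on
    target-containing trials; c: catch trials; t_L: target-containing trials."""
    return 1 - (fa_L + i_L) / (c + t_L)

def d_prime(h_L, beta_L):
    """Eq. (2): discriminability at L from the correct-localization
    proportion (h_L), accounting for response bias (beta_L)."""
    return z(beta_L) - z(1 - h_L)

# hypothetical counts for one animal/location (9 test days in Experiment 1:
# 162 catch trials, 54 target trials)
b = beta_location(fa_L=2, i_L=3, c=162, t_L=54)
print(b, d_prime(h_L=0.63, beta_L=b))
```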
ME or “multisensory benefit” was evaluated by comparing the proportionate difference between the response metrics (correct approach, d′, β, detection, and localization) in multisensory conditions (VA) and values predicted by unisensory referent models in which the target signals were not integrated. In the first experiment (visual approach task), the unisensory referent model was based on the responses to visual stimuli presented alone (V).
$$ME_V = \frac{VA - V}{V} \times 100\% \tag{3}$$
Multisensory performance significantly above visual-alone performance indicated that the auditory signal was enhancing “visual” performance. In the second experiment (visual and auditory approach task), a “statistical facilitation” (SF) model was used, in which the more effective of the 2 modalities (visual or auditory) was predicted to dictate the response on a trial-by-trial basis (Smyre et al. 2021).
$$ME_{SF} = \frac{VA - SF}{SF} \times 100\% \tag{4}$$
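A sketch of this enhancement computation for both referents; the example values are the group means reported below in the Results:

```python
def me(multisensory, referent):
    """Eqs. (3) and (4): proportionate multisensory enhancement (%) relative
    to a non-integrating referent (V in Experiment 1, SF in Experiment 2)."""
    return 100 * (multisensory - referent) / referent

print(me(63, 38))  # Experiment 1, normally reared: ~66% (ME_V)
print(me(47, 59))  # Experiment 2, noise-reared: ~-20% (ME_SF)
```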
This metric is conceptually similar to the “race model” used in studies of reaction time and classic measures of ME based on mean response metrics (Miller 1982). SF model predictions were generated through a bootstrap procedure in which the responses on visual-alone and auditory-alone trials were randomly sampled and the “better” (i.e. more accurate) response was selected as the SF prediction. This was repeated a number of times equal to the number of visual–auditory test trials, and the predicted response metrics were calculated. This procedure was repeated 10,000 times to generate sampling distributions for each response metric predicted by SF (Smyre et al. 2021).
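A minimal sketch of this bootstrap for the % correct metric (a simplification that reduces outcomes to correct/incorrect; the actual procedure tracked all response types and metrics):

```python
import numpy as np

rng = np.random.default_rng(0)

def sf_distribution(v_trials, a_trials, n_va, n_boot=10_000):
    """Bootstrap sampling distribution of the SF-predicted % correct.
    v_trials, a_trials: boolean outcome arrays from visual-alone and
    auditory-alone trials (True = correct approach)."""
    preds = np.empty(n_boot)
    for i in range(n_boot):
        v = rng.choice(v_trials, size=n_va, replace=True)
        a = rng.choice(a_trials, size=n_va, replace=True)
        preds[i] = np.mean(v | a)  # the "better" response dictates each trial
    return preds

# hypothetical unisensory outcomes near the Experiment 2 group means
v = rng.random(54) < 0.36
a = rng.random(54) < 0.37
sf = sf_distribution(v, a, n_va=54)
print(sf.mean())  # ~0.59 when the unisensory outcomes are independent
```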
To further examine the response differences between normally-reared and noise-reared cohorts, patterns of correct and incorrect responses were analyzed with a 2-stage detection and localization model (Smyre et al. 2021). The 2-stage model assumes that the animal detects the stimulus before approaching it, and always makes a Go response when the stimulus is detected (consistent with the sensorimotor organization of the SC; e.g. see Jay and Sparks 1987; Stein and Meredith 1993). Two metrics were calculated: the probability that a stimulus will elicit an approach response (P[Go|Stimulus], “detection stage”), and the probability that an approach (“Go”) response will be to the correct location (P[Correct|Go], “localization stage”). The product of these probabilities equals the overall probability of a correct response.
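A sketch of this decomposition (the counts here are hypothetical):

```python
def two_stage(n_go, n_correct, n_trials):
    """Decompose overall accuracy into detection and localization stages."""
    p_detect = n_go / n_trials      # P(Go | Stimulus)
    p_localize = n_correct / n_go   # P(Correct | Go)
    return p_detect, p_localize

p_d, p_l = two_stage(n_go=38, n_correct=34, n_trials=54)
# the product recovers the overall probability of a correct response
assert abs(p_d * p_l - 34 / 54) < 1e-12
print(p_d, p_l)  # ~0.70 detection, ~0.89 localization
```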
In addition to calculating the proportionate difference between the mean of a multisensory metric and the mean of the unisensory or SF referent (Eqs (3) and (4)), enhancements for each metric (% correct approach, d′, β, detection, and localization) were quantified as a standardized (“z”) score: the difference between the metric calculated on the multisensory trials and its unisensory (or SF) referent sampling distribution, in units of that distribution’s standard deviation.
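Continuing the bootstrap sketch above, the standardized score can be read directly off the referent sampling distribution:

```python
def enhancement_z(observed, referent_dist):
    """Standardized enhancement of an observed multisensory metric relative
    to its referent sampling distribution (e.g. from sf_distribution)."""
    return (observed - referent_dist.mean()) / referent_dist.std()

print(enhancement_z(0.80, sf))  # e.g. multisensory % correct vs. SF
```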
Comparisons between cohorts and experiments were performed for each response metric using a linear mixed-effects model (fixed effects of rearing condition and experimental block; random effects of animal and location) with Holm alpha correction. Maximum likelihood estimation (MLE) was used to quantify regression coefficients for the fixed effects (Δ between conditions). Significance was determined with likelihood ratio tests (adjusted df method: Kenward-Roger). Additionally, linear regression (F-test) was used to evaluate whether there were significant changes over time in each stimulus condition within each experimental block. Alpha for each analysis was 0.05.
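A simplified sketch of how such a model might be specified (synthetic data stand in for the per-animal/location metrics; statsmodels fits with animal as the grouping factor and location as a variance component, whereas the Kenward-Roger adjusted tests reported here are typically obtained in R via lme4/pbkrtest rather than in Python):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# synthetic long-format stand-in: 2 normally reared and 3 noise-reared
# animals, 2 experimental blocks, 8 locations (+/-15-60 degrees)
rng = np.random.default_rng(1)
rows = [(cohort, animal, block, loc)
        for cohort, animals in [("normal", ["c1", "c2"]),
                                ("noise", ["c3", "c4", "c5"])]
        for animal in animals
        for block in ["exp1", "exp2"]
        for loc in [-60, -45, -30, -15, 15, 30, 45, 60]]
df = pd.DataFrame(rows, columns=["cohort", "animal", "block", "location"])
df["metric"] = rng.normal(0.5, 0.1, len(df))

# fixed effects of rearing condition and block; random effects of animal
# (grouping factor) and location (variance component)
model = smf.mixedlm("metric ~ cohort * block", df, groups="animal",
                    vc_formula={"location": "0 + C(location)"})
fit = model.fit(reml=False)  # maximum likelihood estimation
print(fit.summary())
```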
Results
Summary
In contrast to normally-reared animals, noise-reared animals were unable to use visual and auditory information synergistically to enhance detection/localization performance. This defect was stable and persisted regardless of whether animals were trained to localize only the visual stimulus or to localize both visual and auditory stimuli.
Experiment 1
All animals learned to localize full-intensity visual stimuli at each tested location (>80% correct) within 2 weeks of training. The number of training trials required to reach criterion was not significantly different between the cohorts (P = 0.36, Mann–Whitney U).
Lowering the visual intensities decreased correct approaches of normally-reared and noise-reared animals to ~35% on visual-alone trials. Performance was marginally, albeit significantly, lower for the noise-reared cohort (normally-reared V = 38 ± 1.2% vs. noise-reared V = 33 ± 1.0%, P = 0.034). Both normally-reared and noise-reared animals rarely approached the untrained auditory stimulus when it was presented alone (normally-reared A = 25 ± 1.5% vs. noise-reared A = 19 ± 1.1%, P = 0.33), despite it being clearly audible (see Experiment 2). They remained at the start point on a plurality of those trials (normally-reared = 43 ± 2.0%, noise-reared = 45 ± 2.0%). Animals in both cohorts remained at the start point on the majority of catch trials (normally-reared = 91%, noise-reared = 84%).
Animals in the normally-reared cohort exhibited significant ME in correct approaches when the auditory stimulus was presented in spatiotemporal concordance with the visual target (from V = 38 ± 1.2% to VA = 63 ± 1.4%; MEV = 66%, P < 0.001, Fig. 2A). This reflected a significant enhancement of sensory discriminability (d′: from V = 2.0 ± 0.12 to VA = 2.7 ± 0.09; MEV = 36%, P < 0.001) rather than a change in response bias, which was not consistently changed from the visual-alone condition (β: from V = 2.3 ± 0.11 to VA = 2.4 ± 0.10; MEV = 4%, P = 0.05). In contrast, the addition of the spatiotemporally concordant auditory stimulus did not enhance the performance of noise-reared animals (from V = 33 ± 1.0% to VA = 35 ± 1.2%; MEV = 6%, P = 0.08, Fig. 2B). There was no enhancement in stimulus discriminability (d′: from V = 1.7 ± 0.12 to VA = 1.5 ± 0.10; MEV = −9%, P < 0.001). Noise-reared animals also showed a statistically significant decrease in response bias in the visual–auditory vs. visual condition, indicating an increase in the propensity for “Go” responses (β: from V = 2.1 ± 0.09 to VA = 1.9 ± 0.08; MEV = −10%, P < 0.001). For individual animal data, see Supplementary Table 1.
Fig. 2.
Auditory stimuli failed to enhance visual localization performance in noise-reared animals. A and B) Bars show that coupling a novel auditory stimulus with the visual target stimulus (V) to create a cross-modal target (VA) significantly enhanced group multisensory performance (MEV) in normally reared animals, but not in their noise-reared counterparts. Open circles represent individual animal data, with lines connecting their unisensory and multisensory performance. Insets show the multisensory effect on d′. C) Z scores in boxplots for each location and each animal (gray dots) show multisensory, relative to visual, localization performance. The multisensory performance of normally reared animals was always significantly enhanced. In contrast, the multisensory performance of noise-reared animals (gray shading) was often no better than their visual performance. D) Central (C) and peripheral (P) errors are expressed as degrees of deviation from the target (0) in response to modality-specific (thin lines; visual in blue, auditory in red) and cross-modal (thick line, purple) stimuli. Shading illustrates enhanced performance. ***P < 0.001, ns = not significant.
The 2-stage analysis reinforced these findings. Normally-reared animals showed significant ME in both the stimulus detection and localization stages: detection probability increased when the auditory and visual stimuli were combined (from V = 48 ± 1.86% to VA = 70 ± 1.40%; P < 0.001), and the probability of a correct Go response also increased (from V = 79 ± 2.44% to VA = 90 ± 1.48%; P < 0.001). Although noise-reared animals also showed an increase in visual detection probability when the visual and auditory stimuli were combined (from V = 47 ± 1.86% to VA = 70 ± 1.4%; P < 0.001), this enhancement was offset by a decrease in the probability of a Go response being correct (from V = 66 ± 1.74% to VA = 53 ± 2.4%, P < 0.001). Thus, noise-reared animals appeared to “Go” more often in multisensory conditions, but with increased localization errors that reduced their effective multisensory performance to no better than in the visual-alone condition (Fig. 2D, Table 1).
Table 1.
Performance, d′, and β for Experiment 1.
| | | Performance | d′ | β |
|---|---|---|---|---|
| Auditory | Normally reared | 25 ± 1.5% | 1.2 ± 0.11 | 1.9 ± 0.11 |
| | Noise-reared | 19 ± 1.1% | 0.91 ± 0.12 | 1.8 ± 0.08 |
| | Comparison | P = 0.3298 | P = 0.2412 | P = 0.419 |
| Visual | Normally reared | 38 ± 1.2% | 2.0 ± 0.12 | 2.3 ± 0.11 |
| | Noise-reared | 33 ± 1.0% | 1.7 ± 0.12 | 2.1 ± 0.09 |
| | Comparison | P = 0.0339 | P < 0.001 | P = 0.473 |
| Multisensory | Normally reared | 63 ± 1.4% | 2.7 ± 0.09 | 2.4 ± 0.10 |
| | Noise-reared | 35 ± 1.2% | 1.5 ± 0.10 | 1.9 ± 0.08 |
| | Comparison | P = 0.008 | P < 0.001 | P = 0.003 |
| MEV | Normally reared | 66% | 36% | 4% |
| | Noise-reared | 6% | −9% | −10% |
| | Comparison | P < 0.001 | P = 0.002 | P = 0.045 |
| Z score (P-value) | Normally reared | 15.3 (<0.001) | 11.2 (<0.001) | 1.72 (0.0487) |
| | Noise-reared | 1.43 (0.0702) | −3.02 (<0.001) | −7.6 (<0.001) |
| | Comparison | P < 0.001 | P = 0.002 | P = 0.023 |
Experiment 2
The purpose of this experiment was to investigate the possibility that noise-reared animals had compromised multisensory performance because they were unfamiliar with auditory localization tasks or because the auditory modality had diminished salience for them. Both cohorts of animals were re-trained to also detect/localize the auditory stimulus. All animals rapidly learned to approach full-intensity auditory stimuli (~61 dB), and there were no differences between the cohorts in the number of auditory-alone training trials required to reach criterion (Mann–Whitney U, P = 0.0763, see Fig. 1B, bottom). After training, these intensities were reduced at each location for each animal. A linear mixed-effects model of the final reduced intensities, with fixed effects of rearing condition and percent response and random effects of location and cat, revealed no significant cohort difference in the intensity reduction needed to achieve the same reduced response levels (Δ group = 15.6 ± 7.37, location intercept = 4.37, cat intercept = 4.14; P = 0.058). Thus, the omnidirectional noise during rearing appeared to have no direct deleterious effect on the ability of noise-reared animals to use auditory information to make detection/localization decisions.
For this experiment, an SF model was used to assess ME. In this model, it is assumed that animals are using the auditory and visual stimuli independently but will always use the more effective of the 2 during cross-modal trials. Performance above SF [measured as ME over SF (MESF)] indicates that animals are able to integrate the stimuli and use them synergistically.
Normally reared animals again showed robust ME in approach behavior (from SF = 59 ± 4.6% to VA = 80 ± 1.1%; MESF = 34%; P < 0.001, Fig. 3A). This reflected a significant enhancement in discriminability (d′: from SF = 2.4 ± 0.06 to VA = 3.1 ± 0.07; MESF = 32%, P < 0.001). Also observed was an increase in response bias in the multisensory condition (β: from SF = 2.2 ± 0.03 to VA = 2.3 ± 0.05; MESF = 7%, P < 0.001), indicating a marginal decrease in the propensity for a “Go” response. Noise-reared animals again showed no ME in their performance. In fact, their approach performance on multisensory trials was significantly lower than predicted by SF (from SF = 59 ± 3.9% to VA = 47 ± 1.3%; MESF = −20%, P < 0.001, Fig. 3B). This reflected a depression in discriminability on multisensory trials (d′: from SF = 2.3 ± 0.05 to VA = 2.0 ± 0.08; MESF = −13%, P < 0.001), without a significant change in response bias (β: from SF = 2.1 ± 0.02 to VA = 2.1 ± 0.05; MESF = 0%, P = 0.56). Additional data and cohort comparisons are provided in Table 2; for individual animal performance, see Supplementary Table 2.
Fig. 3.
Noise-reared animals failed to show ME when both visual and auditory stimuli were targets. Conventions are the same as in Fig. 2, albeit here the referent is SF. A) The multisensory performance of normally reared animals significantly exceeded SF. B) In contrast, the multisensory performance of noise-reared animals failed to reach SF predictions. C) Z scores show the contrasting performance of the groups: enhancement in normally reared animals and depression in noise-reared animals (gray shading). D) Performance to modality-specific stimuli (lower thin lines; visual in blue, auditory in red) was similar between groups. Cross-modal performance in the normally-reared group (thick line, purple) was above SF (green line); shading highlights this enhancement (purple). Cross-modal performance of noise-reared animals was below SF; shading highlights this deficit (green). ***P < 0.001.
Table 2.
Performance, d′, and β for Experiment 2.
| | | Performance | d′ | β |
|---|---|---|---|---|
| Auditory | Normally reared | 37 ± 1.2% | 1.8 ± 0.07 | 2.1 ± 0.08 |
| | Noise-reared | 36 ± 1.0% | 1.7 ± 0.03 | 2.1 ± 0.05 |
| | Comparison | P = 0.830 | P = 0.537 | P = 0.217 |
| Visual | Normally reared | 35 ± 1.2% | 1.9 ± 0.05 | 2.3 ± 0.03 |
| | Noise-reared | 36 ± 1.1% | 1.9 ± 0.09 | 2.3 ± 0.07 |
| | Comparison | P = 0.7901 | P = 0.264 | P = 0.178 |
| SF | Normally reared | 59 ± 4.6% | 2.4 ± 0.06 | 2.2 ± 0.03 |
| | Noise-reared | 59 ± 3.9% | 2.3 ± 0.05 | 2.1 ± 0.02 |
| | Comparison | P = 0.85 | P = 0.703 | P = 0.215 |
| Multisensory | Normally reared | 80 ± 1.1% | 3.1 ± 0.07 | 2.3 ± 0.05 |
| | Noise-reared | 47 ± 1.3% | 2.0 ± 0.08 | 2.1 ± 0.05 |
| | Comparison | P < 0.001 | P < 0.001 | P = 0.016 |
| MESF | Normally reared | 34% | 32% | 7% |
| | Noise-reared | −20% | −13% | 0% |
| | Comparison | P < 0.001 | P < 0.001 | P = 0.006 |
| Z score (P-value) | Normally reared | 12.14 (<0.001) | 12.46 (<0.001) | 5.63 (<0.001) |
| | Noise-reared | −8.62 (<0.001) | −6.02 (<0.001) | −0.17 (0.56) |
| | Comparison | P < 0.001 | P < 0.001 | P = 0.006 |
The 2-stage analysis reinforced these findings. Normally reared animals showed MEs in both the detection (from SF = 73 ± 3.07% to VA = 86 ± 1.26%, P < 0.001) and localization (from SF = 81 ± 2.90% to VA = 92 ± 0.69%, P < 0.001) stages. In contrast, noise-reared animals showed multisensory depression in both detection (from SF = 75 ± 3.51% to VA = 63 ± 1.28%, P < 0.001) and localization (from SF = 78 ± 5.52% to VA = 74 ± 3.33%, P < 0.001). Thus, noise-reared animals not only failed to integrate mutually informative visual–auditory signals but performed worse than the SF model predicted. Table 2 contains additional data and comparisons.
Stability of performance during experimentation
The multisensory performance of both cohorts was relatively stable within each experimental testing series (Fig. 4, Table 3). However, there was an abrupt shift in this measure after animals were trained with the auditory stimulus as a target: correct responses increased in the normally reared cohort from 63 ± 1.37% to 80 ± 1.10% and in the noise-reared cohort from 35 ± 1.3% to 47 ± 1.3% (Fig. 4). However, because both the visual and auditory cues could now signal the presence and location of the target, SF became the essential referent with which to assess multisensory performance. Using this referent, ME remained significant in normally reared animals (albeit the raw enhancement dropped from 66% to 34%) and remained absent in noise-reared animals, whose performance shifted from a nonsignificant 6% enhancement to a significant 20% depression relative to the referent.
Fig. 4.
Group performance was stable over the testing period. Shown are visual (blue), auditory (red), and multisensory (purple) localization performance and SF predictions (green, Experiment 2). There was relative within-experiment performance stability over testing sessions (albeit noise-reared animals showed a gradual increase in response to the visual stimuli in Experiment 2). However, following explicit auditory training between experiments both cohorts showed an increase in correct multisensory approach responses.
Table 3.
Stability of stimulus approach over experiments.
| | | Normally reared | | | Noise-reared | | |
|---|---|---|---|---|---|---|---|
| | | R² | Slope | P-value | R² | Slope | P-value |
| Experiment 1 | Auditory | 0.165 | 0.5208 | 0.2779 | 0.5020 | 0.9028 | 0.326 |
| | Visual | 0.0427 | 0.3299 | 0.5937 | 0.0261 | 0.1736 | 0.6778 |
| | Multisensory | 0.1185 | −0.3125 | 0.3643 | 0.2638 | 0.4745 | 0.1573 |
| Experiment 2 | Auditory | 0.0199 | 0.1736 | 0.7170 | 0.2406 | 0.5324 | 0.18 |
| | Visual | −8.8e−16 | −2.8e−15 | 1 | 0.6997 | 1.169 | 0.0049 |
| | Multisensory | 0.1130 | −0.6424 | 0.3765 | 0.2580 | 0.5903 | 0.1627 |
Discussion
The present study revealed that noise-reared animals, unlike their normal counterparts, were unable to use combined visual–auditory stimuli to improve their performance in a simple detection and localization task. The animals could easily detect and localize the individual component stimuli; thus, the defect was specific to their integration.
This behavioral defect occurred despite the stimulus combination activating a host of unisensory (visual and auditory) and multisensory (visual–auditory and visual–auditory–somatosensory) SC neurons (Xu et al. 2014). Simply increasing the pool of responsive neurons via cross-modal stimulation did not compensate for the inability to integrate those sensory inputs, a finding consistent with results from dark-reared animals (e.g. see Smyre et al. 2021). Additional support for this premise comes from studies in which destroying the relevant neurons (Burnett et al. 2004, 2007) or disrupting their integrative capability (Wilkinson et al. 1996) eliminated the benefit of having access to concordant cross-modal cues. This physiological–behavioral relationship has now been detailed in several network models (Cuppini et al. 2012, 2018; Yu et al. 2019).
When experience with cross-modal cues is disrupted, SC neurons not only fail to develop their normal multisensory integration capabilities (Wallace et al. 2004; Wang et al. 2020; Xu et al. 2012, 2014, 2017; Yu et al. 2010, 2013; see also Stein et al. 2014) but also retain their neonatal default computation. In this computation, concordant visual–auditory cues suppress each other’s ability to activate their common target neurons (Yu et al. 2019), much like discordant cues within a modality do (Alvarado et al. 2007; Gingras et al. 2009). That visual–auditory behavioral performance in the present experiments fell short of SF (a model representing the best use of unisensory information in the absence of multisensory integration; e.g. see Otto et al. 2013; Nava et al. 2014; Wang et al. 2020; Smyre et al. 2021) is consistent with these physiological observations: competition between the sensory modalities may represent the mechanistic basis for the defects in multisensory integration capabilities.
Noise-rearing did not significantly impact the ability of animals to localize the auditory stimuli used here: noise-reared and normally reared animals learned to detect/localize them within similar time frames and required similar intensity reductions to achieve the same performance targets. Their multisensory defects persisted before and after this training. Thus, anomalies in auditory development associated with noise rearing—e.g. A1 tonotopies (Chang and Merzenich 2003) and SC spatial sensitivities (Xu et al. 2010)—do not explain the results here. Auditory training has previously been observed to improve localization performance in humans with normal hearing, as well as in unilateral impairment conditions in humans (Irving and Moore 2011) and ferrets (Kacelnik et al. 2006). It is possible that this training is reflected in the quantitative differences in the multisensory impairment observed in the 2 different experiments.
Although we use the term “defect” here to describe the atypical multisensory capability of noise-reared animals, it is not actually defective for their rearing environment. Rather, a segregation of information across the visual and auditory modalities is a logical inference when the totality of sensory experience fails to indicate that these inputs are mutually informative. Until the brain learns that many singular events generate concordant visual and auditory stimuli, inputs from these stimuli “compete” for access to the same sensorimotor circuits that control overt responses. The extent to which this is a general principle of multisensory development, such that similar defects will occur in other multisensory behaviors supported by other neural circuits, remains to be determined.
Interestingly, the defect can be ameliorated (in single neurons) in several weeks by explicit, repeated exposure to cross-modal stimuli (Xu et al. 2014). This would presumably also produce parallel behavioral enhancements in multisensory detection/localization tasks, but this has yet to be determined in this model. Such explicit multisensory “training” is, of course, not required during normal development. The young brain can extract the needed information from coincident cross-modal environmental stimuli despite extensive variations in their spatiotemporal concordance, changes in background conditions, and a host of intervening modality-specific events. Training may not be strictly required in the adult brain either; however, much more extensive experience may be necessary to promote its development (Rowland and Stein 2014). These observations may explain why congenitally deaf patients (lacking early auditory-nonauditory experience) show more robust multisensory integration capabilities when cochlear implants are received early in life, or when there is the opportunity for extensive cross-modal experience (i.e. years) later in life (see Stevenson et al. 2017 for review). Similar findings have been observed for the multisensory capabilities of patient groups with congenital cataracts (precluding early visual-nonvisual experience) that have been removed by corrective surgery (Putzar et al. 2007; de Heering et al. 2016; Chen et al. 2017; Pant et al. 2021; Senna et al. 2021; see also Smyre et al. 2021). The ability to control the specifics of multisensory experience, so that its developmental impact can be assessed directly, gives the animal model particular utility in these circumstances.
Supplementary Material
Acknowledgments
We thank Nancy London for assistance with this work.
Contributor Information
Naomi L Bean, Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd., Winston Salem, NC 27157, United States.
Scott A Smyre, Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd., Winston Salem, NC 27157, United States.
Barry E Stein, Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd., Winston Salem, NC 27157, United States.
Benjamin A Rowland, Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd., Winston Salem, NC 27157, United States.
Funding
This work was supported by the National Institutes of Health (T32 NS073553 and R01 EY031532) and the Tab Williams Family Foundation.
Conflict of interest statement. The authors declare no conflict of interest.
References
- Alais D, Burr D. The ventriloquist effect results from near-optimal bimodal integration. Curr Biol. 2004:14:257–262.
- Alvarado JC, Stanford TR, Vaughan JW, Stein BE. Cortex mediates multisensory but not unisensory integration in superior colliculus. J Neurosci. 2007:27:12775–12786.
- Avillac M, Ben Hamed S, Duhamel J-R. Multisensory integration in the ventral intraparietal area of the macaque monkey. J Neurosci. 2007:27:1922–1932.
- Barraclough NE, Xiao D, Baker CI, Oram MW, Perrett DI. Integration of visual and auditory information by superior temporal sulcus neurons responsive to the sight of actions. J Cogn Neurosci. 2005:17:377–391.
- Battaglia PW, Jacobs RA, Aslin RN. Bayesian integration of visual and auditory signals for spatial localization. J Opt Soc Am A. 2003:20:1391–1397.
- Bean NL, Stein BE, Rowland BA. Stimulus value gates multisensory integration. Eur J Neurosci. 2021:53:3142–3159.
- Beauchamp MS, Lee KE, Argall BD, Martin A. Integration of auditory and visual information about objects in superior temporal sulcus. Neuron. 2004:41:809–823.
- Bell AH, Corneil BD, Meredith MA, Munoz DP. The influence of stimulus properties on multisensory processing in the awake primate superior colliculus. Can J Exp Psychol. 2001:55:123–132.
- Bell AH, Meredith MA, Van Opstal AJ, Munoz DP. Stimulus intensity modifies saccadic reaction time and visual response latency in the superior colliculus. Exp Brain Res. 2006:174:53–59.
- Bolognini N, Leo F, Passamonti C, Stein BE, Làdavas E. Multisensory-mediated auditory localization. Perception. 2007:36:1477–1485.
- Burnett LR, Stein BE, Chaponis D, Wallace MT. Superior colliculus lesions preferentially disrupt multisensory orientation. Neuroscience. 2004:124:535–547.
- Burnett LR, Stein BE, Perrault TJ Jr, Wallace MT. Excitotoxic lesions of the superior colliculus preferentially impact multisensory neurons and multisensory integration. Exp Brain Res. 2007:179:325–338.
- Carriere BN, Royal DW, Perrault TJ, Morrison SP, Vaughan JW, Stein BE, Wallace MT. Visual deprivation alters the development of cortical multisensory integration. J Neurophysiol. 2007:95:2858–2867.
- Chang EF, Merzenich MM. Environmental noise retards auditory cortical development. Science. 2003:300:498–502.
- Chen Y-C, Spence C. The crossmodal facilitation of visual object representations by sound: evidence from the backward masking paradigm. J Exp Psychol Hum Percept Perform. 2011:37:1784–1802.
- Chen Y-C, Lewis TL, Shore DI, Maurer D. Early binocular input is critical for development of audiovisual but not visuotactile simultaneity perception. Curr Biol. 2017:27:583–589.
- Corneil BD, Van Wanrooij M, Munoz DP, Van Opstal AJ. Auditory-visual interactions subserving goal-directed saccades in a complex scene. J Neurophysiol. 2002:88:438–454.
- Cuppini C, Magosso E, Rowland BA, Stein BE, Ursino M. Hebbian mechanisms help explain development of multisensory integration in the superior colliculus: a neural network model. Biol Cybern. 2012:106:691–713.
- Cuppini C, Stein BE, Rowland BA. Development of the mechanisms governing midbrain multisensory integration. J Neurosci. 2018:38:3453–3465.
- Dakos AS, Jiang H, Stein BE, Rowland BA. Using the principles of multisensory integration to reverse hemianopia. Cereb Cortex. 2019:30(4):2030–2041.
- Dakos AS, Walker EM, Jiang H, Stein BE, Rowland BA. Interhemispheric visual competition after multisensory reversal of hemianopia. Eur J Neurosci. 2019:50(11):3702–3712.
- de Heering A, Dormal G, Pelland M, Lewis T, Maurer D, Collignon O. A brief period of postnatal visual deprivation alters the balance between auditory and visual attention. Curr Biol. 2016:26:3101–3105.
- Efrati A, Gutfreund Y. Early life exposure to noise alters the representation of auditory localization cues in the auditory space map of the barn owl. J Neurophysiol. 2011:105:2522–2535.
- Frassinetti F, Bolognini N, Làdavas E. Enhancement of visual perception by crossmodal visuo-auditory interaction. Exp Brain Res. 2002:147:332–343.
- Frens MA, Van Opstal AJ. Visual-auditory interactions modulate saccade-related activity in monkey superior colliculus. Brain Res Bull. 1998:46:211–224.
- Frens MA, Van Opstal AJ, Van Der Willigen RF. Spatial and temporal factors determine auditory-visual interactions in human saccadic eye movements. Percept Psychophys. 1995:57:802–816.
- Ghazanfar AA. Multisensory integration of dynamic faces and voices in rhesus monkey auditory cortex. J Neurosci. 2005:25:5004–5012.
- Gilley PM, Sharma A, Mitchell TV, Dorman MF. The influence of a sensitive period for auditory-visual integration in children with cochlear implants. Restor Neurol Neurosci. 2010:28:207–218.
- Gingras G, Rowland BA, Stein BE. The differing impact of multisensory and unisensory integration on behavior. J Neurosci. 2009:29:4897–4902.
- Goldring JE, Dorris MC, Corneil BD, Ballantyne PA, Munoz DP. Combined eye-head gaze shifts to visual and auditory targets in humans. Exp Brain Res. 1996:111:68–78.
- Gu Y, Angelaki DE, DeAngelis GC. Neural correlates of multisensory cue integration in macaque MSTd. Nat Neurosci. 2008:11:1201–1210.
- Guerreiro MJS, Putzar L, Röder B. The effect of early visual deprivation on the neural bases of multisensory processing. Brain. 2015:138:1499–1504.
- Irving S, Moore DR. Training sound localization in normal hearing listeners with and without a unilateral ear plug. Hear Res. 2011:280:100–108.
- Jay MF, Sparks DL. Sensorimotor integration in the primate superior colliculus. I. Motor convergence. J Neurophysiol. 1987:57:22–34.
- Kacelnik O, Nodal FR, Parsons CH, King AJ. Training-induced plasticity of auditory localization in adult mammals. PLoS Biol. 2006:4:e71.
- Maier JX, Elliott VE. Adaptive weighting of taste and odor cues during flavor choice. J Neurophysiol. 2020:124:1942–1947.
- Miller J. Divided attention: evidence for coactivation with redundant signals. Cogn Psychol. 1982:14:247–279.
- Mizoguchi N, Kobayashi M, Muramoto K. Integration of olfactory and gustatory chemosignals in the insular cortex. J Oral Biosci. 2016:58:81–84.
- Molholm S, Ritter W, Murray MM, Javitt DC, Schroeder CE, Foxe JJ. Multisensory auditory–visual interactions during early sensory processing in humans: a high-density electrical mapping study. Cogn Brain Res. 2002:14:115–128.
- Nava E, Bottari D, Villwock A, Fengler I, Büchner A, Lenarz T, Röder B. Audio-tactile integration in congenitally and late deaf cochlear implant users. PLoS One. 2014:9:e99606.
- Otto TU, Dassy B, Mamassian P. Principles of multisensory behavior. J Neurosci. 2013:33:7463–7474.
- Pant R, Guerreiro MJS, Ley P, Bottari D, Shareef I, Kekunnaya R, Röder B. The size-weight illusion is unimpaired in individuals with a history of congenital visual deprivation. Sci Rep. 2021:11:6693.
- Putzar L, Goerendt I, Lange K, Rösler F, Röder B. Early visual deprivation impairs multisensory interactions in humans. Nat Neurosci. 2007:10:1243–1245.
- Rowland BA, Stein BE. A Bayesian model unifies multisensory spatial localization with the physiological properties of the superior colliculus. Exp Brain Res. 2007a:180:153–161.
- Rowland BA, Stein BE. Multisensory integration shortens physiological response latencies. J Neurosci. 2007b:27:5879–5884.
- Rowland BA, Stein BE. Brief cortical deactivation early in life has long-lasting effects on multisensory behavior. J Neurosci. 2014:34:7198–7202.
- Senna I, Andres E, McKyton A, Ben-Zion I, Zohary E, Ernst MO. Development of multisensory integration following prolonged early-onset visual deprivation. Curr Biol. 2021:31:4879–4885.e6.
- Smyre SA, Wang Z, Stein BE, Rowland BA. Multisensory enhancement of overt behavior requires multisensory experience. Eur J Neurosci. 2021:54:4514–4527.
- Stein BE, Meredith MA. The merging of the senses. Cognitive neuroscience series. Cambridge (MA): MIT Press; 1993.
- Stein BE, Stanford TR. Multisensory integration: current issues from the perspective of the single neuron. Nat Rev Neurosci. 2008:9:255–266.
- Stein BE, Huneycutt WS, Meredith MA. Neurons and behavior: the same rules of multisensory integration apply. Brain Res. 1988:448:355–358.
- Stein BE, Meredith MA, Huneycutt WS, McDade L. Behavioral indices of multisensory integration: orientation to visual cues is affected by auditory stimuli. J Cogn Neurosci. 1989:1:12–24.
- Stein BE, Stanford TR, Rowland BA. Development of multisensory integration from the perspective of the individual neuron. Nat Rev Neurosci. 2014:15:520–535.
- Stevenson RA, Sheffield SW, Butera IM, Gifford RH, Wallace MT. Multisensory integration in cochlear implant recipients. Ear Hear. 2017:38:521–538.
- Turner JG, Parrish JL, Hughes LF, Toth LA, Caspary DM. Hearing in laboratory animals: strain differences and nonauditory effects of noise. Comp Med. 2005:55:12–23.
- Veldhuizen MG, Shepard TG, Wang M-F, Marks LE. Coactivation of gustatory and olfactory signals in flavor perception. Chem Senses. 2010:35:121–133.
- Wallace MT, Stein BE. Sensory organization of the superior colliculus in cat and monkey. Prog Brain Res. 1996:112:301–311.
- Wallace MT, Stein BE. Development of multisensory neurons and multisensory integration in cat superior colliculus. J Neurosci. 1997:17:2429–2444.
- Wallace MT, Stein BE. Onset of cross-modal synthesis in the neonatal superior colliculus is gated by the development of cortical influences. J Neurophysiol. 2000:83:3578–3582.
- Wallace MT, Stein BE. Sensory and multisensory responses in the newborn monkey superior colliculus. J Neurosci. 2001:21:8886–8894.
- Wallace MT, Perrault TJ Jr, Hairston WD, Stein BE. Visual experience is necessary for the development of multisensory integration. J Neurosci. 2004:24:9580–9584.
- Wang Z, Yu L, Xu J, Stein BE, Rowland BA. Experience creates the multisensory transform in the superior colliculus. Front Integr Neurosci. 2020:14:18.
- Wilkinson LK, Meredith MA, Stein BE. The role of anterior ectosylvian cortex in cross-modality orientation and approach behavior. Exp Brain Res. 1996:112:1–10.
- Xu J, Yu L, Cai R, Zhang J, Sun X. Early continuous white noise exposure alters auditory spatial sensitivity and expression of GAD65 and GABAA receptor subunits in rat auditory cortex. Cereb Cortex. 2010:20:804–812.
- Xu J, Yu L, Rowland BA, Stanford TR, Stein BE. Incorporating cross-modal statistics in the development and maintenance of multisensory integration. J Neurosci. 2012:32:2287–2298.
- Xu J, Yu L, Rowland BA, Stanford TR, Stein BE. Noise-rearing disrupts the maturation of multisensory integration. Eur J Neurosci. 2014:39:602–613.
- Xu J, Yu L, Rowland BA, Stein BE. The normal environment delays the development of multisensory integration. Sci Rep. 2017:7:4772.
- Yu L, Rowland BA, Stein BE. Initiating the development of multisensory integration by manipulating sensory experience. J Neurosci. 2010:30:4904–4913.
- Yu L, Xu J, Rowland BA, Stein BE. Development of cortical influences on superior colliculus multisensory neurons: effects of dark-rearing. Eur J Neurosci. 2013:37(10):1594–1601.
- Yu L, Cuppini C, Xu J, Rowland BA, Stein BE. Cross-modal competition: the default computation for multisensory processing. J Neurosci. 2019:39(8):1374–1385.
- Zheng M, Xu J, Keniston L, Wu J, Chang S, Yu L. Choice-dependent cross-modal interaction in the medial prefrontal cortex of rats. Mol Brain. 2021:14:13.