The Journal of Neuroscience. 2009 Apr 15;29(15):4897–4902. doi: 10.1523/JNEUROSCI.4120-08.2009

The Differing Impact of Multisensory and Unisensory Integration on Behavior

Guy Gingras 1, Benjamin A Rowland 1, Barry E Stein 1
PMCID: PMC2678542  NIHMSID: NIHMS109792  PMID: 19369558

Abstract

Pooling and synthesizing signals across different senses often enhances responses to the event from which they are derived. Here, we examine whether multisensory response enhancements are attributable to a redundant target effect (two stimuli rather than one) or if there is some special quality inherent in the combination of cues from different senses. To test these possibilities, the performance of animals in localizing and detecting spatiotemporally concordant visual and auditory stimuli was examined when these stimuli were presented individually (visual or auditory) or in cross-modal (visual–auditory) and within-modal (visual–visual, auditory–auditory) combinations. Performance enhancements proved to be far greater for combinations of cross-modal than within-modal stimuli and support the idea that the behavioral products derived from multisensory integration are not attributable to simple target redundancy. One likely explanation is that whereas cross-modal signals offer statistically independent samples of the environment, within-modal signals can exhibit substantial covariance, and consequently multisensory integration can yield more substantial error reduction than unisensory integration.

Introduction

The brain's ability to integrate information derived from multiple senses is remarkable given that each sense transduces a different form of environmental energy. Nevertheless, the products of this synthesis are readily apparent in the multisensory responses of superior colliculus (SC) neurons, a midbrain structure involved in the control of orientation behavior and often used as a model to explore multisensory integration (Stein and Meredith, 1993). Cross-modal stimuli that are spatially and temporally coincident evoke responses from these neurons that are more robust than those evoked by the individual component stimuli (Meredith and Stein, 1983; Wallace et al., 1996, 1998; Jiang et al., 2001; Perrault et al., 2005; Stanford et al., 2005; Rowland et al., 2007a,b). Behaviorally, these spatiotemporally coincident cross-modal stimuli yield enhancements in the detection and localization of external events (Stein et al., 1988, 1989; Wilkinson et al., 1996; Jiang et al., 2002; Burnett et al., 2004). Similar multisensory enhancements have been observed in a number of brain regions, behaviors, and species using a variety of experimental techniques (Calvert et al., 2004; Stanford and Stein, 2007; Driver and Noesselt, 2008; Stein and Stanford, 2008).

Often implicit in the appreciation of multisensory integration is the belief that its underlying computation is different from that engaged during unisensory integration. The reasoning is straightforward: because different senses are not contaminated by common noise sources (they generate independent estimates), their synthesis should yield response products exceeding those produced by integrating information from within the same sense (Ernst and Banks, 2002; Rowland et al., 2007b). However, an alternate assumption is that combinations of within- and cross-modal stimuli would yield equivalent products because both represent a simple redundant target effect (RTE) regardless of input statistics (Miller, 1982; Gondan et al., 2005; Lippert et al., 2007; Leo et al., 2008; Sinnett et al., 2008).

It is only recently that this issue has been explored systematically. Alvarado et al. (2007) compared the products of multisensory and unisensory integration in cat SC neurons by presenting them with pairs of cross-modal and within-modal stimuli. They reported that these processes yielded different responses with distinct underlying neural computations. Weakly effective cross-modal stimuli produced responses that were statistically greater than either of the responses to the component stimuli and often engaged an underlying superadditive computation. More effective cross-modal stimuli resulted in proportionately less enhancement and in additive computations. In contrast, within-modal stimuli rarely produced enhanced responses and generally engaged subadditive computations. In rare circumstances, very weakly effective stimuli did yield additive interactions (albeit, rarely reaching the criterion for enhancement) but rapidly transitioned to subadditivity as stimulus effectiveness increased.

These neural data suggest that multisensory and unisensory integration would also yield very different behavioral products (see also Ernst and Banks, 2002). The present experiments were designed to evaluate this possibility in a detection and localization paradigm (Stein et al., 1989). The results obtained here also revealed substantial differences between the impact of multisensory and unisensory integration and suggest that a simple RTE model is insufficient to explain the data.

Materials and Methods

All procedures were conducted in accordance with the Guide for the Care and Use of Laboratory Animals (National Institutes of Health publication 86-23) and an approved Institutional Animal Care and Use Committee protocol at Wake Forest University School of Medicine, an Association for Assessment and Accreditation of Laboratory Animal Care-accredited institution. The apparatus, training, and testing procedures were similar to those described previously (Stein et al., 1988, 1989; Wilkinson et al., 1996; Jiang et al., 2002; Burnett et al., 2004).

Apparatus.

Four adult male cats (4–6 kg) were trained in a spatial localization task within a 90-cm-diameter perimetry apparatus (Fig. 1). The apparatus contained stimulus complexes of light-emitting diodes (LEDs) and speakers placed at 15° intervals to the left (from −90°) and right (to +90°) of a central fixation point (0°). Each stimulus complex consisted of two horizontally displaced (4 cm) LEDs (Lumex Opto/Components; model 67-1102-ND) located 4 cm beneath two horizontally displaced speakers (Panasonic; model 4D02C0). This displacement ensured that all stimulus pairings were likely to fall within the receptive fields of many of the same multisensory SC neurons (Stein and Meredith, 1993). As a general convention, we reference the leftmost stimuli within each individual stimulus complex with a "1" and the rightmost stimuli with a "2" (e.g., V1 is the leftmost visual stimulus). The perimetry apparatus was housed in a sound-attenuating chamber (Industrial Acoustics Company) with a constant 22 dB background noise. Stimuli were controlled with custom software, triggered by the experimenter, and unless otherwise stated, consisted of brief (40 ms) LED illuminations or bursts of broadband noise from the speakers. During the testing phase (see below), stimulus intensities were reduced to levels undetectable to the experimenter. The experimenter was informed of the target location (i.e., the stimulus complex) and of the stimulus combination on any given trial. After the categorical response, location was judged: the animal was either within 5° (4 cm) of the target or outside of this range. This proved to be an extremely easy judgment to make, as animals very rarely went between stimulus complexes (each separated by 15°) because all stimulus complexes were always visible to them. When making errors, they almost always either went to a wrong location or failed to respond ("no-go"). Rare circumstances in which the animal began to approach the target, but then returned to the starting point, were scored as no-go errors.

Figure 1.

The apparatus and task. The orientation/approach task was performed in a 90-cm- diameter perimetry apparatus containing a complex of LEDs and speakers at 15° increments from 90° to the left and right of a central fixation point (0°). Each complex consisted of three LEDs and two speakers, each separated by 4 cm. In the present experiments, only the two outermost LEDs at any location were used. Trials consisted of randomly interleaved modality-specific (visual or auditory), cross-modal (visual–auditory), and within-modal (visual–visual or auditory–auditory) stimuli at each of the seven perimetric locations between ±45° (−45°, −30°, −15°, 0°, +15°, +30°, +45°), as well as catch trials.

Training.

Responses were shaped using a small food reward (175 mg kibble; Hill's Science Diet). Each animal was first trained to stand at the center of the arena (the start position) and fixate directly ahead at 0°, with the experimenter providing gentle head restraint. The animal was then required to orient toward and approach (within 4 cm) a briefly illuminated (40 ms) but highly visible LED (3 foot-candles) at that perimetric location. Its nose had to be displaced no more than 5° to the left or right of this location within 3 s of stimulus onset to receive a reward. After mastering this task, the number of possible target LED locations was expanded to include increasingly more eccentric locations (these were randomly interleaved between ±45° and always involved the left LED in each stimulus complex). Given that 15° separated the perimetric locations, the animal had to choose among seven locations: three locations on the left (−45°, −30°, −15°), the center location (0°), and three locations on the right (+15°, +30°, +45°). Once the animal mastered the visual localization task, it was trained on the auditory localization task using the same general procedure (the auditory stimulus was a 40 ms, 60 dB sound-pressure level A-weighted broadband noise). At this time, “catch” trials (no stimulus) were introduced, during which the animal was required to remain at the start position looking directly ahead to receive a reward. Training was completed when animals would accurately approach the stimulus on at least 85 of 100 trials at all seven locations and have <15% erroneous responses to catch trials.

Testing.

To equilibrate the effectiveness of the stimuli among the different perimetric locations, the intensity of the modality-specific stimulus (visual or auditory) was reduced at each location to a level eliciting ∼25% correct responses. A single stimulus condition was presented at one location on each trial: an individual modality-specific stimulus (visual or auditory), a pair of within-modal (visual–visual, auditory–auditory) stimuli, or a pair of cross-modal (visual–auditory) stimuli. In addition, catch trials (no stimulus presented) were included. All stimulus pairs were presented in spatial and temporal coincidence, unless otherwise noted (see below). All animals were tested with ∼150 trials per day (i.e., until satiety), 5 d per week. To ensure a reasonable number of trials per stimulus condition each day, the cross-modal (visual–auditory) versus within-modal (visual–visual or auditory–auditory) comparisons were divided into two experiments separated by ∼9 months. The first experiment compared performance in response to within-modal visual (V1V2) and cross-modal (V1A1) stimuli, and the second compared performance in response to within-modal auditory (A1A2) and cross-modal (V1A1) stimuli. Because animals were run to satiety each day and trials were selected at random, the number of trials per location per day varied; over all days of testing, some locations therefore exceeded the minimum number of trials.

Pairs of visual stimuli consisted of two LEDs separated by 5°. The paired auditory stimuli were also separated by 5°, but to ensure that the two sounds were distinguishable in the second experiment, the broadband noise (see above) was delivered first to the left speaker (30 ms duration) and 15 ms later to the right speaker (35 ms duration); thus, the two sounds overlapped for 15 ms. For this reason, the stimuli used in Experiment 2 lasted 50 ms rather than the 40 ms used in Experiment 1. The two sounds were distinguishable to human listeners and had differing impacts on the animals' behavioral responses (see Results).
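As a brief geometric check (an illustration, assuming the stimuli lie on the perimeter, ∼45 cm from the animal's starting position), a 4 cm separation corresponds to roughly 4/45 ≈ 0.089 rad ≈ 5.1° of visual angle, consistent with the 5° separation stated here.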

Within each experiment, all five stimulus conditions were randomly interleaved: (1) a single visual stimulus (V1); (2) a single auditory stimulus (A1 or A2); (3) a spatiotemporally coincident visual–auditory stimulus pair (V1A1); (4) a spatiotemporally coincident within-modal stimulus pair (visual, V1V2 or auditory, A1A2); or (5) a catch trial. All stimulus locations were randomly selected. Orientation and approach to each target stimulus (see above) and maintaining fixation during a catch trial were rewarded. Because V1 and V2 were identical, only one of them (V1) was used to obtain an index of unisensory visual performance and the intensity level of V2 was matched to V1 during the visual within-modal stimulus combinations (V1V2).

Control experiment.

The physical limitations of the apparatus displaced visual–visual (V1V2) and auditory–auditory (A1A2) stimuli by 4 cm in the horizontal dimension, whereas visual–auditory stimuli (V1A1) were at the same azimuthal position, one above the other. To examine whether the directionality of displacement had a significant effect on the results, additional tests were conducted in which the visual and auditory stimuli were diagonally displaced (V1A2). Three animals were tested for 6–7 d for a total of 1133–1244 trials per animal. We found that diagonally displaced cross-modal stimulus pairs produced response enhancements (see below) equivalent to those found for vertically displaced stimulus pairs.

Data analysis.

The outcome of each trial (correct or incorrect orientation/approach, or No-Go) was scored by the researcher and later grouped according to location and stimulus type. Comparisons were made between groups using standard statistical techniques (quantification of mean accuracy/error, χ2 tests for significant differences between response categories). The critical comparisons included the following.

(1) Localization accuracy (percentage correct responses) in response to individual stimuli, cross-modal stimulus pairs, and within-modal stimulus pairs. These analyses included tests for significant differences in localization performance among the cross-modal, within-modal, and single-stimulus conditions. Response enhancements associated with visual–auditory, visual–visual, and auditory–auditory stimulus pairings were evaluated by comparing the incidence of correct responses generated by these stimuli with the maximum incidence of correct responses generated by a single stimulus at each location (i.e., performance generated by the best single stimulus, visual or auditory). Differences in localization performance were also expressed as percentages (a computational sketch of these measures is given after this list).

(2) The incidence of each type of trial outcome for each stimulus condition and percentage differences between cross-modal, within-modal, and single-stimulus conditions.

(3) The relationship between the percentage response enhancements associated with cross-modal and within-modal stimulus pairs and the best single-stimulus performance at each location. These variables are typically inversely related; that is, lower accuracy in localizing single stimuli is typically correlated with greater enhancements when other stimuli are added (the principle of inverse effectiveness; Meredith and Stein, 1986; Stein and Meredith, 1993; Stanford et al., 2005). These relationships were fit separately with regression lines that were then compared with an F test.
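For concreteness, the following is a minimal computational sketch (in Python, using SciPy) of two of the measures described above: the chi-square comparison of response-category distributions and the percentage enhancement relative to the best single stimulus. It is not the authors' analysis code, and the trial counts shown are hypothetical.

# Minimal sketch of the analyses described above; not the authors' code.
# The outcome counts below are hypothetical and for illustration only.
from scipy.stats import chi2_contingency

def percent_enhancement(p_pair, p_best_single):
    # Percentage enhancement of a stimulus-pair condition over the best single stimulus.
    return 100.0 * (p_pair - p_best_single) / p_best_single

# Hypothetical outcome counts at one location: [correct, wrong location, no-go]
cross_modal   = [64, 14, 22]   # e.g., V1A1 trials
single_visual = [25, 20, 55]   # e.g., V1 trials

# Chi-square test for a difference in the distribution of response categories
chi2, p, dof, expected = chi2_contingency([cross_modal, single_visual])
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")

# Enhancement of the pair relative to the best single stimulus (here, the visual one)
p_pair = cross_modal[0] / sum(cross_modal)
p_single = single_visual[0] / sum(single_visual)
print(f"enhancement = {percent_enhancement(p_pair, p_single):.0f}%")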

Results

All animals learned to localize and approach the targets at all perimetric positions. Differences in the speed with which different animals (n = 4) reached criterion performance were noted, but these were minor and in keeping with interanimal variations noted in previous studies (Stein et al., 1988, 1989; Wilkinson et al., 1996; Jiang et al., 2002, Burnett et al., 2004). Performance patterns for each stimulus condition during testing were highly consistent across all animals, and thus the data were pooled for the general analysis.

Experiment 1: multisensory (visual–auditory) versus unisensory (visual–visual) integration

The results for each stimulus condition were highly consistent across stimulus locations because intensities were intentionally adjusted to equilibrate the localization accuracy of the individual visual and auditory stimuli. The data were collapsed across all animals and locations to obtain group means, weighted by their differing trial numbers (see Materials and Methods). The mean localization accuracies for single visual stimuli (V1 or V2) and single auditory stimuli (A1) across stimulus locations were 25 and 27%, respectively. The addition of a second, spatially and temporally concordant stimulus enhanced overall response accuracy, but the enhancement in response to the cross-modal (V1A1) stimulus pair was significantly greater than that to the within-modal (V1V2) stimulus pair at each location (Fig. 2A). Pooling data across locations, the mean multisensory (V1A1) enhancement in localization performance was 137% (range across locations, 94–168%), whereas the mean unisensory (V1V2) enhancement was only 49% (range, 31–79%).

Figure 2.

Multisensory integration was distinct from unisensory visual–visual integration. A, At every spatial location, multisensory integration produced substantial performance enhancements (94–168%; mean, 137%), whereas unisensory visual integration produced comparatively modest enhancements (31–79%; mean, 49%). Asterisks indicate comparisons that were significantly different (χ2 test; p < 0.05). B, The pie charts to the left show performance in response to the modality-specific auditory (A1) and visual (V1 and V2 are identical; see Materials and Methods) stimuli. The figures within the bordered region show the performance to the cross-modal (V1A1) and within-modal (V1V2) stimulus combinations. No-Go errors (NG; gray) and Wrong Localization errors (W; white) were significantly decreased as a result of multisensory integration, but only No-Go errors were significantly reduced as a result of unisensory integration. C, The differential effect of multisensory and unisensory integration was reasonably constant, regardless of the effectiveness of the best component stimulus and both showed an inverse relationship, wherein benefits were greatest when the effectiveness of the component stimuli was lowest. V, Visual; A, auditory; C, correct.

It is also important to note that cross-modal stimulus pairs evoked significantly fewer No-Go errors (56% less; p < 0.05) and localization errors (29% less; p < 0.05) (Fig. 2B). In contrast, within-modal stimuli significantly reduced only No-Go errors (29% less; p < 0.05) but not the incidence of incorrect localizations (9% more; nonsignificant). Thus, whereas unisensory integration appeared to make responses more likely, it did not make them more accurate, as did multisensory integration. Indeed, the data show that the positive impact of multisensory integration on performance averaged more than 2.8 times that of unisensory integration.

Enhancements in performance in response to both cross-modal and within-modal stimuli were inversely proportional to the best single-stimulus accuracy at each location; that is, both were consistent with the principle of inverse effectiveness (Fig. 2C). However, whereas the regression fits of these trends for the two stimulus-pair types had similar slopes, their intercepts were very different, and the lines were consistently displaced by ∼100 percentage points. Cross-modal stimulus pairs (V1A1) were consistently more effective than within-modal pairs (V1V2).

Experiment 2: multisensory (visual–auditory) versus unisensory (auditory–auditory) integration

The results were similar to those obtained in Experiment 1. The mean localization accuracy for the single visual stimulus (V1) across locations was 25%, and those for the two auditory stimuli (A1 and A2) were 27 and 26%, respectively. Both cross-modal (V1A1) and within-modal (A1A2) stimulus combinations yielded enhanced responses, but again, the cross-modal combination yielded more than 2.9 times the performance enhancement of the within-modal combination: pooling data across locations, the mean cross-modal enhancement was 141% (range across locations, 106–177%), whereas the mean within-modal enhancement was 49% (range, 33–69%) (Fig. 3A).

Figure 3.

Multisensory integration was distinct from unisensory auditory–auditory integration. Conventions are the same as for Figure 2. A, Multisensory integration provided significantly greater average performance (141% more) benefits than did unisensory auditory integration (49% more) at every perimetric location tested. B, Similarly, multisensory integration also significantly reduced both No-Go errors (NG; 60% less; gray) and Wrong Localization errors (W; 25% less; white), whereas unisensory auditory integration reduced only No-Go errors (28% less; gray). C, The differential effect of multisensory (V1A1) and unisensory (A1A2) integration was reasonably constant regardless of the effectiveness of the best component stimulus and both showed an inverse relationship; that is, benefits were greatest when the effectiveness of the component stimuli was lowest. V, Visual; A, auditory; C, correct.

Also, cross-modal stimuli significantly reduced both No-Go (60% less; p < 0.05) and incorrect location (25% less; p < 0.05) errors, whereas within-modal stimuli significantly decreased only No-Go errors (28% less; p < 0.05). Within-modal auditory stimuli had virtually no effect on the incidence of incorrect localizations (2% less; nonsignificant); thus, they made responses more likely but, unlike the cross-modal stimuli, they did not make initiated responses more accurate (Fig. 3B).

Finally, both multisensory (cross-modal, V1A1) and unisensory (within-modal, A1A2) performance enhancements were consistent with inverse effectiveness (Fig. 3C). However, as in Experiment 1, the regression fits to the inverse effectiveness trends had similar slopes but very different intercepts and were consistently displaced by ∼100 percentage points.

The data from the two experiments show that multiple coincident stimuli evoked more accurate detection and localization responses than did a single stimulus (i.e., two is better than one). However, the magnitude of this performance enhancement was strongly tied to whether the stimuli were derived from the same or different sensory modalities. Coincident cross-modal stimuli evoked strong enhancements in localization performance, not only making localization responses more likely, but also more accurate. In contrast, coincident within-modal stimuli evoked only weak enhancements and, while making localization responses more likely, did not make them any more accurate. The magnitudes of these enhancements in localization performance were inversely proportional to the localization accuracy of the individual component stimuli. However, in the range studied, multisensory integration enhanced behavioral performance on average 2.85 times more than did unisensory integration. The data are highly consistent across animals and sensory modalities; that is, the impact of visual–visual and auditory–auditory stimuli was similar when the effectiveness of their component stimuli was similar (Fig. 4). Furthermore, the magnitude of the enhancements to the visual–auditory stimulus pairs obtained in Experiment 2 was not significantly different from that obtained in Experiment 1, despite the 9 month interval between them (Fig. 4).
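As a rough check on this figure (a back-of-the-envelope calculation using the pooled mean enhancements reported above, ignoring differences in trial counts across locations and conditions): [(137% + 141%)/2] / 49% ≈ 139%/49% ≈ 2.8, in line with the 2.85 average reported here.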

Figure 4.

Multisensory integration was distinct from unisensory integration. The results of Experiment 1 (visual–visual vs visual–auditory integration) and Experiment 2 (auditory–auditory vs visual–auditory integration) were highly consistent. The magnitude of multisensory enhancement was approximately the same in the two experiments, even though they were conducted at different times. Similarly, unisensory visual and unisensory auditory integration yielded approximately the same performance products.

Discussion

Enhancement in behavioral performance consequent to integrating information across sensory modalities is sometimes interpreted as attributable to an RTE; that is, the improvements occur because these stimuli are multiple and redundant (i.e., equally informative individually). Indeed, speeded reaction times are often seen in response to combinations of within-modal (Marzi et al., 1996; Murray et al., 2001; Savazzi and Marzi, 2002, 2004; Schröter et al., 2007) and of cross-modal (Hughes et al., 1994; Frens et al., 1995; Goldring et al., 1996; Giard and Peronnet, 1999; Forster et al., 2002; Amlôt et al., 2003; Diederich and Colonius, 2004; Sakata et al., 2004; Teder-Sälejärvi et al., 2005; Senkowski et al., 2006; Hecht et al., 2008a,b) stimuli. If this interpretation is correct, then redundant targets from the same sensory modality (e.g., two visual or two auditory stimuli) should have equivalent effects. However, the data in the present experiments indicate that they do not. Multiple stimuli from the same sensory modality only marginally enhanced localization compared with cross-modal stimulus combinations. Although they made localization responses somewhat more likely, they did not make them any more accurate.

These behavioral observations closely parallel recent physiological findings from single neurons in the SC. Alvarado et al. (2007) found that SC neurons responded to pairs of adjacent visual stimuli with only a modest increase (not statistically significant) in their mean number of evoked impulses, numbers that were far below those predicted by summing responses to the individual component stimuli. In contrast, SC neurons showed highly significant response enhancements to cross-modal (visual–auditory) stimulus pairs, response enhancements that were often greater than, or equal to, the sum of responses to the component stimuli. These physiological and behavioral observations are consistent with suggestions that multisensory and unisensory integration engage different underlying computations.

As noted by Ernst and Banks (2002), these differences can be understood in the context of a probabilistic (Bayesian) model of integration. In a Bayesian model of spatial localization (Rowland et al., 2007a), each stimulus is assumed to generate a sensory report of its location that may be accurate or randomly inaccurate, depending on the fidelity of the sensory system. These reports are filtered by the previous expectation that, if two or more stimuli co-occur, they belong to the same event and thus should have the same location. The action of the filter is to fuse the sensory reports, generating an estimate of location that is a compromise between them. Sensory reports that are inaccurate in the same direction (e.g., both biased to the left or both to the right) do not generate more accurate estimates when fused because they do not contradict the previous expectation. However, sensory reports that are inaccurate in opposite directions (e.g., one biased left and the other biased right) are fused to generate estimates that are more consistent with the previous expectation that the stimulus locations should be the same. These circumstances produce estimated stimulus locations between the two incorrect sensory reports, which are closer to the actual stimulus location than the flanking sensory reports.

Cross-modal stimulus combinations represent multiple stimuli that are conveyed by different forms of energy and transduced by independent sensory systems. Consequently, the sensory reports evoked by two cross-modal stimuli are independent of one another and equally likely to be inaccurate in the same or in different directions. Instances in which the sensory reports are inaccurate in different directions yield more accurate estimates (by the logic above) and thus enhance localization responses. In contrast, multiple within-modal stimuli travel through the same medium, are transduced by the same sensory system, and can be influenced by the same noise sources. Consequently, the sensory reports they generate are likely to covary substantially; that is, they are more likely to be inaccurate in the same direction than in different directions. The lower incidence of sensory reports that are inaccurate in opposite directions reduces the incidence of cases in which estimates are improved through fusion. The intuitive nature of this result is apparent when considering the most extreme case: two sensory reports that covary 100% of the time are obviously no better than one alone. Thus, although not pursued quantitatively here, the Bayesian model can help explain how different products result when information is integrated across or within a sensory modality, as a consequence of the relative statistics of the stimuli.
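To make the covariance argument concrete, the following is a minimal numerical sketch (in Python). It is not the Bayesian model of Rowland et al. (2007a); it simply assumes that each stimulus produces a Gaussian sensory report of location with the same standard deviation, fuses two reports by averaging them, and compares the error of the fused estimate when the report errors are independent (as assumed for cross-modal pairs) versus correlated (as assumed for within-modal pairs). The numerical values (sigma, correlation) are arbitrary illustrations.

# Illustrative simulation: fusing two noisy location reports reduces error more
# when their noise is independent (cross-modal case) than when it is correlated
# (within-modal case). Not the authors' model; all parameter values are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
true_location = 0.0        # degrees (arbitrary)
sigma = 10.0               # SD of each individual sensory report (assumed)
n_trials = 100_000

def fused_rms_error(rho):
    # RMS localization error of the average of two Gaussian reports
    # whose errors have correlation rho.
    cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
    reports = rng.multivariate_normal([true_location, true_location], cov, n_trials)
    fused = reports.mean(axis=1)   # simple fusion: average the two reports
    return np.sqrt(np.mean((fused - true_location) ** 2))

print("single report:          ", sigma)                      # 10.0
print("fused, independent:     ", fused_rms_error(0.0))       # ~ sigma/sqrt(2) ≈ 7.1
print("fused, correlated (0.8):", fused_rms_error(0.8))       # ~ sigma*sqrt(0.9) ≈ 9.5

In this sketch, independent reports cut the RMS error by ∼30%, whereas strongly correlated reports yield little benefit; in the limiting case of perfect correlation, two reports are no better than one, matching the intuition stated above.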

Whether the Bayesian model or some other model is used to explain the current results (see also Rowland et al., 2007b), the data suggest that there is something inherently different about multisensory and unisensory integration that is evident in a simple detection/localization task. Whether these differences would be equally evident in tasks requiring the use of higher-order cognitive processes remains to be determined.

Footnotes

This research was supported by National Institutes of Health Grant EY016716. We thank Nancy London for technical assistance.

References

  1. Alvarado JC, Vaughan JW, Stanford TR, Stein BE. Multisensory versus unisensory integration: contrasting modes in the superior colliculus. J Neurophysiol. 2007;97:3193–3205. doi: 10.1152/jn.00018.2007. [DOI] [PubMed] [Google Scholar]
  2. Amlôt R, Walker R, Driver J, Spence C. Multimodal visual-somatosensory integration in saccade generation. Neuropsychologia. 2003;41:1–15. doi: 10.1016/s0028-3932(02)00139-2. [DOI] [PubMed] [Google Scholar]
  3. Burnett LR, Stein BE, Chaponis D, Wallace MT. Superior colliculus lesions preferentially disrupt multisensory orientation. Neuroscience. 2004;124:535–547. doi: 10.1016/j.neuroscience.2003.12.026. [DOI] [PubMed] [Google Scholar]
  4. Calvert GA, Spence C, Stein BE. The handbook of multisensory processes. Cambridge, MA: MIT; 2004. [Google Scholar]
  5. Diederich A, Colonius H. Bimodal and trimodal multisensory enhancement: effects of stimulus onset and intensity on reaction time. Percept Psychophys. 2004;66:1388–1404. doi: 10.3758/bf03195006. [DOI] [PubMed] [Google Scholar]
  6. Driver J, Noesselt T. Multisensory interplay reveals crossmodal influences on ‘sensory-specific’ brain regions, neural responses, and judgments. Neuron. 2008;57:11–23. doi: 10.1016/j.neuron.2007.12.013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Ernst MO, Banks MS. Humans integrate visual and haptic information in a statistically optimal fashion. Nature. 2002;415:429–433. doi: 10.1038/415429a. [DOI] [PubMed] [Google Scholar]
  8. Forster B, Cavina-Pratesi C, Aglioti SM, Berlucchi G. Redundant target effect and intersensory facilitation from visual–tactile interactions in simple reaction time. Exp Brain Res. 2002;143:480–487. doi: 10.1007/s00221-002-1017-9. [DOI] [PubMed] [Google Scholar]
  9. Frens MA, Van Opstal AJ, Van der Willigen RF. Spatial and temporal factors determine auditory–visual interactions in human saccadic eye movements. Percept Psychophys. 1995;57:802–816. doi: 10.3758/bf03206796. [DOI] [PubMed] [Google Scholar]
  10. Giard MH, Peronnet F. Auditory-visual integration during multimodal object recognition in humans: a behavioral and electrophysiological study. J Cogn Neurosci. 1999;11:473–490. doi: 10.1162/089892999563544. [DOI] [PubMed] [Google Scholar]
  11. Goldring JE, Dorris MC, Corneil BD, Ballantyne PA, Munoz DP. Combined eye-head gaze shifts to visual and auditory targets in humans. Exp Brain Res. 1996;111:68–78. doi: 10.1007/BF00229557. [DOI] [PubMed] [Google Scholar]
  12. Gondan M, Niederhaus B, Rösler F, Röder B. Multisensory processing in the redundant-target effect: a behavioral and event-related potential study. Percept Psychophys. 2005;67:713–726. doi: 10.3758/bf03193527. [DOI] [PubMed] [Google Scholar]
  13. Hecht D, Reiner M, Karni A. Enhancement of response times to bi- and tri-modal sensory stimuli during active movements. Exp Brain Res. 2008a;185:655–665. doi: 10.1007/s00221-007-1191-x. [DOI] [PubMed] [Google Scholar]
  14. Hecht D, Reiner M, Karni A. Multisensory enhancement: gains in choice and in simple response times. Exp Brain Res. 2008b;189:133–143. doi: 10.1007/s00221-008-1410-0. [DOI] [PubMed] [Google Scholar]
  15. Hughes HC, Reuter-Lorenz PA, Nozawa G, Fendrich R. Visual–auditory interactions in sensorimotor processing: saccades versus manual responses. J Exp Psychol Hum Percept Perform. 1994;20:131–153. doi: 10.1037//0096-1523.20.1.131. [DOI] [PubMed] [Google Scholar]
  16. Jiang W, Wallace MT, Jiang H, Vaughan JW, Stein BE. Two cortical areas mediate multisensory integration in superior colliculus neurons. J Neurophysiol. 2001;85:506–522. doi: 10.1152/jn.2001.85.2.506. [DOI] [PubMed] [Google Scholar]
  17. Jiang W, Jiang H, Stein BE. Two corticotectal areas facilitate multisensory orientation behavior. J Cogn Neurosci. 2002;14:1240–1255. doi: 10.1162/089892902760807230. [DOI] [PubMed] [Google Scholar]
  18. Leo F, Bertini C, di Pellegrino G, Làdavas E. Multisensory integration for orienting responses in humans requires the activation of the superior colliculus. Exp Brain Res. 2008;186:67–77. doi: 10.1007/s00221-007-1204-9. [DOI] [PubMed] [Google Scholar]
  19. Lippert M, Logothetis NK, Kayser C. Improvement of visual contrast detection by a simultaneous sound. Brain Res. 2007;1173:102–109. doi: 10.1016/j.brainres.2007.07.050. [DOI] [PubMed] [Google Scholar]
  20. Marzi CA, Smania N, Martini MC, Gambina G, Tomelleri G, Palamara A, Alessandrini F, Prior M. Implicit redundant-targets effect in visual extinction. Neuropsychologia. 1996;34:9–22. doi: 10.1016/0028-3932(95)00059-3. [DOI] [PubMed] [Google Scholar]
  21. Meredith MA, Stein BE. Interactions among converging sensory inputs in the superior colliculus. Science. 1983;221:389–391. doi: 10.1126/science.6867718. [DOI] [PubMed] [Google Scholar]
  22. Meredith MA, Stein BE. Visual, auditory, and somatosensory convergence on cells in superior colliculus results in multisensory integration. J Neurophysiol. 1986;56:640–662. doi: 10.1152/jn.1986.56.3.640. [DOI] [PubMed] [Google Scholar]
  23. Miller J. Divided attention: evidence for coactivation with redundant signals. Cogn Psychol. 1982;14:247–279. doi: 10.1016/0010-0285(82)90010-x. [DOI] [PubMed] [Google Scholar]
  24. Murray MM, Foxe JJ, Higgins BA, Javitt DC, Schroeder CE. Visuo-spatial neural response interactions in early cortical processing during a simple reaction time task: a high-density electrical mapping study. Neuropsychologia. 2001;39:828–844. doi: 10.1016/s0028-3932(01)00004-5. [DOI] [PubMed] [Google Scholar]
  25. Perrault TJ, Jr, Vaughan JW, Stein BE, Wallace MT. Superior colliculus neurons use distinct operational modes in the integration of multisensory stimuli. J Neurophysiol. 2005;93:2575–2586. doi: 10.1152/jn.00926.2004. [DOI] [PubMed] [Google Scholar]
  26. Rowland B, Stanford T, Stein B. A Bayesian model unifies multisensory spatial localization with the physiological properties of the superior colliculus. Exp Brain Res. 2007a;180:153–161. doi: 10.1007/s00221-006-0847-2. [DOI] [PubMed] [Google Scholar]
  27. Rowland BA, Stanford TR, Stein BE. A model of the neural mechanisms underlying multisensory integration in the superior colliculus. Perception. 2007b;36:1431–1443. doi: 10.1068/p5842. [DOI] [PubMed] [Google Scholar]
  28. Sakata S, Yamamori T, Sakurai Y. Behavioral studies of auditory-visual spatial recognition and integration in rats. Exp Brain Res. 2004;159:409–417. doi: 10.1007/s00221-004-1962-6. [DOI] [PubMed] [Google Scholar]
  29. Savazzi S, Marzi CA. Speeding up the reaction time with invisible stimuli. Curr Biol. 2002;12:403–407. doi: 10.1016/s0960-9822(02)00688-7. [DOI] [PubMed] [Google Scholar]
  30. Savazzi S, Marzi CA. The superior colliculus subserves interhemispheric neural summation in both normals and patients with a total section or agenesis of the corpus callosum. Neuropsychologia. 2004;42:1608–1618. doi: 10.1016/j.neuropsychologia.2004.04.011. [DOI] [PubMed] [Google Scholar]
  31. Schröter H, Ulrich R, Miller J. Effects of redundant auditory stimuli on reaction time. Psychon Bull Rev. 2007;14:39–44. doi: 10.3758/bf03194025. [DOI] [PubMed] [Google Scholar]
  32. Senkowski D, Molholm S, Gomez-Ramirez M, Foxe JJ. Oscillatory beta activity predicts response speed during a multisensory audiovisual reaction time task: a high-density electrical mapping study. Cereb Cortex. 2006;16:1556–1565. doi: 10.1093/cercor/bhj091. [DOI] [PubMed] [Google Scholar]
  33. Sinnett S, Soto-Faraco S, Spence C. The co-occurrence of multisensory competition and facilitation. Acta Psychol (Amst) 2008;128:153–161. doi: 10.1016/j.actpsy.2007.12.002. [DOI] [PubMed] [Google Scholar]
  34. Stanford TR, Stein BE. Superadditivity in multisensory integration: putting the computation in context. Neuroreport. 2007;18:787–792. doi: 10.1097/WNR.0b013e3280c1e315. [DOI] [PubMed] [Google Scholar]
  35. Stanford TR, Quessy S, Stein BE. Evaluating the operations underlying multisensory integration in the cat superior colliculus. J Neurosci. 2005;25:6499–6508. doi: 10.1523/JNEUROSCI.5095-04.2005. [DOI] [PMC free article] [PubMed] [Google Scholar]
  36. Stein BE, Meredith MA. The merging of the senses. Cambridge, MA: MIT; 1993. [Google Scholar]
  37. Stein BE, Stanford TR. Multisensory integration: current issues from the perspective of the single neuron. Nat Rev Neurosci. 2008;9:255–266. doi: 10.1038/nrn2331. [DOI] [PubMed] [Google Scholar]
  38. Stein BE, Huneycutt WS, Meredith MA. Neurons and behavior: the same rules of multisensory integration apply. Brain Res. 1988;448:355–358. doi: 10.1016/0006-8993(88)91276-0. [DOI] [PubMed] [Google Scholar]
  39. Stein BE, Meredith MA, Huneycutt WS, McDade L. Behavioral indices of multisensory integration: orientation to visual cues is affected by auditory stimuli. J Cogn Neurosci. 1989;1:12–24. doi: 10.1162/jocn.1989.1.1.12. [DOI] [PubMed] [Google Scholar]
  40. Teder-Sälejärvi WA, Di Russo F, McDonald JJ, Hillyard SA. Effects of spatial congruity on audio-visual multimodal integration. J Cogn Neurosci. 2005;17:1396–1409. doi: 10.1162/0898929054985383. [DOI] [PubMed] [Google Scholar]
  41. Wallace MT, Wilkinson LK, Stein BE. Representation and integration of multiple sensory inputs in primate superior colliculus. J Neurophysiol. 1996;76:1246–1266. doi: 10.1152/jn.1996.76.2.1246. [DOI] [PubMed] [Google Scholar]
  42. Wallace MT, Meredith MA, Stein BE. Multisensory integration in the superior colliculus of the alert cat. J Neurophysiol. 1998;80:1006–1010. doi: 10.1152/jn.1998.80.2.1006. [DOI] [PubMed] [Google Scholar]
  43. Wilkinson LK, Meredith MA, Stein BE. The role of anterior ectosylvian cortex in cross-modality orientation and approach behavior. Exp Brain Res. 1996;112:1–10. doi: 10.1007/BF00227172. [DOI] [PubMed] [Google Scholar]
