Abstract
Clear evidence has demonstrated a supramodal organization of sensory cortices, with multisensory processing occurring even at early stages of information encoding. Within this context, the early recruitment of sensory areas is necessary for the development of fine domain‐specific (i.e., spatial or temporal) skills regardless of the sensory modality involved, with auditory areas playing a crucial role in temporal processing and visual areas in spatial processing. Given the domain specificity and the multisensory nature of sensory areas, in this study we hypothesized that the preferential domains of representation (i.e., space and time) of visual and auditory cortices are also evident in the early processing of multisensory information. Thus, we measured the event‐related potential (ERP) responses of 16 participants while they performed multisensory spatial and temporal bisection tasks. Audiovisual stimuli occurred at three different spatial positions and time lags, and participants had to evaluate whether the second stimulus was spatially (spatial bisection task) or temporally (temporal bisection task) farther from the first or the third audiovisual stimulus. As predicted, the second audiovisual stimulus of both tasks elicited an early ERP response (time window 50–90 ms) in visual and auditory regions. However, this early ERP component was stronger in occipital areas during the spatial bisection task and in temporal regions during the temporal bisection task. Overall, these results confirm the domain specificity of visual and auditory cortices and reveal that this specificity also selectively modulates cortical activity in response to multisensory stimuli.
Keywords: ERP, multisensory processing, sensory cortices, space perception, time perception
Early recruitment of sensory areas is necessary for the development of fine domain‐specific (i.e., spatial or temporal) skills. In this study, we revealed that the domain specificity of visual and auditory cortices also selectively modulates cortical activity in response to multisensory stimuli.

1. INTRODUCTION
Humans constantly combine information from different senses, which provide complementary representations of the surrounding environment. In this melting pot of sensory information, different senses are more accurate in processing specific environmental properties. For example, vision allows a complete representation of the surrounding space by receiving detailed spatial information directly from the retina (Alais & Burr, 2004). At the same time, hearing is the most accurate sense in representing temporal information (Barakat et al., 2015; Burr et al., 2009; Guttman et al., 2005).
The strong association of the visual and auditory modalities with a specific domain of representation (i.e., space and time) suggests that the recruitment of the visual and auditory cortices might be necessary for building high‐resolution spatial and temporal representations, respectively. Indeed, vision is crucial for aligning the neural representations of space of other sensory modalities (King, 2009, 2014), and the visual cortex is not solely involved in processing visual input (Romei et al., 2009; Vetter et al., 2014). In this regard, a past study revealed that occipital areas supported the neural processing underlying complex spatial representations of sighted individuals in the acoustic modality (Campus et al., 2017). Similarly, auditory areas were shown not to be involved exclusively in acoustic processing (Rosenblum et al., 2017), but also to support the visual representation of a complex temporal metric (Amadeo et al., 2020a). Studies on sensory deprivation offer further evidence in this direction. People with visual impairment were found to be impaired in some auditory spatial tasks, since the lack of vision prevented the full development of their auditory spatial maps (Gori et al., 2014; Vercillo et al., 2016; Voss et al., 2015; Zwiers et al., 2001; reviewed in Gori et al., 2020). This observation was complemented by a reduced occipital response during acoustic space perception in early blind individuals (Campus et al., 2019; Gori et al., 2020; Tonelli et al., 2020), with the cumulative number of years spent without vision gradually impacting this occipital activation pattern in response to sounds (Amadeo et al., 2019a, 2020b). However, blind individuals do not show deficits in all spatial skills. For example, people with visual impairment are able to localize sounds in space (Battal et al., 2020), to generate mental images of tactile spatial layouts (Cattaneo et al., 2008), and to perform spatial orientation tasks (Fortin et al., 2006). A parallel between vision and audition can be drawn for deaf individuals, who were shown to be impaired in some temporal tasks, such as a visual temporal bisection task and a tactile temporal discrimination task (Amadeo, Campus, et al., 2019; Amadeo et al., 2022; Bolognini et al., 2012; Gori et al., 2017), but not in others. For instance, deaf people can accurately estimate the duration of visual stimuli (Poizner & Tallal, 1987) and perform a visual temporal order judgment task (Nava et al., 2008). These findings about the effects of sensory deprivation on the spatial and temporal abilities of blind and deaf individuals may seem contradictory. Nonetheless, what could make the difference in these apparently conflicting results is the kind of task used to explore spatial and temporal skills. For instance, two tasks in which visually impaired and deaf people were found to be particularly affected are the spatial and temporal bisection tasks. In a bisection task, three stimuli are presented in sequence, the second stimulus is randomly delivered at one of two spatial positions and one of two temporal lags, and participants evaluate whether this second stimulus is spatially (spatial bisection) or temporally (temporal bisection) farther from the first or the third stimulus. These tasks involve the construction of a metric, as they explicitly require participants to compare external stimuli with each other in space and time.
It has been suggested that, in the bisection tasks, the lack of vision may affect the ability to compare the different inputs in space, and the lack of audition the ability to compare them in time.
To sum up, past studies suggested that the recruitment of the visual and auditory areas selectively underlies the development of some skills involving the spatial and temporal domains, and that the lack of this neural activation in case of sensory deprivation may affect the shaping of fine spatiotemporal representations. Conversely, once developed, this mechanism is activated independently of the sensory modality involved, suggesting a domain‐specific supramodal organization of the brain in which the domains of representation (i.e., space or time), rather than the sensory modalities, primarily shape human perception (Amedi et al., 2017; Cecchetti et al., 2016; Heimler & Amedi, 2020; Heimler et al., 2015; Ricciardi et al., 2014, 2020; Rosenblum et al., 2017).
Studies on the neural mechanisms underlying multisensory perception support the view that the sensory modality is no longer the primary organizing principle of the sensory brain's architecture. Traditionally, multisensory functions have been considered the domain of association cortices such as the superior temporal sulcus (Beauchamp, 2005), the intraparietal area (Andersen et al., 1997), and the frontal cortex (Fuster et al., 2000). Nowadays, a body of research has shown that occipital and temporal areas can also support the encoding of multiple sensory modalities (Bueti & Macaluso, 2010; Fort, Delpuech, Pernier, & Giard, 2002; Giard & Peronnet, 1999; Molholm et al., 2002; van Wassenhove & Grzeczkowski, 2015), with anatomical substrates noted to sustain multisensory processing at low levels of cortical processing (Cappe & Barone, 2005; Falchier et al., 2002; Rockland & Ojima, 2003). Consequently, multisensory influences appear to take place at all levels of cortical processing, suggesting that the neocortex is essentially multisensory (Ghazanfar & Schroeder, 2006). Finally, research revealed that the encoding of multiple sensory information extends over a wide range of latencies. For instance, multisensory processes were shown to occur even within the first 100 ms poststimulus onset (early‐latency multisensory interactions [eMSI]; reviewed in De Meo et al., 2015) and to directly shape perception and behavior even at these early stages of multisensory encoding (Cappe et al., 2010; Fort, Delpuech, Pernier, & Giard, 2002; Fort, Delpuech, Pernier, Giard, & Thomas, 2002; Gondan & Röder, 2006; Raij et al., 2010; Teder‐Sälejärvi et al., 2002).
Despite the increasing knowledge of the mechanisms underlying multisensory perception, it remains unclear whether and how the multisensory nature of sensory areas is modulated by the domain specificity of the visual and auditory cortices; in other words, whether the preferential domains (i.e., space and time) revealed in sensory areas also influence multisensory processing at the cortical level. Given that visual and auditory regions play an important role in scaffolding spatial and temporal processing, respectively (Amadeo et al., 2020a; Campus et al., 2017, 2019), and that these cortical areas are multisensory in nature too (Bueti & Macaluso, 2010; Fort, Delpuech, Pernier, & Giard, 2002; Giard & Peronnet, 1999; Molholm et al., 2002; van Wassenhove & Grzeczkowski, 2015), we hypothesized that the domains of representation would modulate the cortical activation to multisensory stimuli. More specifically, we expected to find a preferential activation of visual areas for multisensory spatial processing and of auditory areas for multisensory temporal processing, and that this specialized mechanism would occur at early stages of multisensory processing.
2. MATERIALS AND METHODS
2.1. Participants
A group of 16 adults participated in the study (9 females, mean age ± SD: 24 ± 2.95 years old). Based on a meta‐analysis of previous studies testing the neural correlates of spatial and temporal abilities of healthy adults (Amadeo et al., 2020a; Campus et al., 2017), we expected a large effect size. A priori power analysis revealed that a minimum sample size of 15 participants was needed to statistically detect such an effect size (two‐tailed t‐test, power 0.80, alpha .05). All participants reported no history of neurological, cognitive, and/or sensory deficits. The study was approved by the ethics committee of the local health service (Comitato etico, ASL 3 Genova) and conducted in line with the Declaration of Helsinki. All participants gave written informed consent prior to testing.
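For concreteness, the power computation above can be replicated with standard tools. The sketch below uses statsmodels and assumes a conventional "large" effect size of Cohen's d = 0.8, since the exact value used for the calculation is not reported here.

```python
# Minimal sketch of the a priori power analysis described above.
# The effect size d = 0.8 is an assumed conventional "large" value.
from statsmodels.stats.power import TTestPower

analysis = TTestPower()
n_required = analysis.solve_power(
    effect_size=0.8,          # assumed large effect size (Cohen's d)
    alpha=0.05,               # two-tailed significance level
    power=0.80,               # desired statistical power
    alternative="two-sided",
)
print(f"Minimum sample size: {n_required:.1f}")  # ~14.3, i.e., 15 participants
```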
2.2. Setup, stimuli, and procedure
The setup consisted of a horizontal array of 23 speakers spatially aligned with 23 light emitting diodes (LEDs; Figure 1a). Participants sat at a distance of 180 cm from the center of the array, which spanned ±25° of visual angle (0° represented the central speaker/LED, with negative values on the left and positive values on the right).
FIGURE 1.

Experimental setup. (a) A horizontal array of 23 free‐field speakers and 23 light emitting diodes (LEDs). (b) Detail of one speaker spatially aligned with one LED. In each trial, a single sound was simultaneously reproduced with a single red flash. Participants reported the auditory and visual stimulations as originating from exactly the same source
All participants performed a spatial bisection task and a temporal bisection task. In both tasks, a trial consisted of three audiovisual (AV) stimuli (namely S1–S3) played at three different spatial positions and time lags. An AV stimulus consisted of a single sound (60 dB SPL at ear level, 500 Hz) spatially aligned with a single red flash (2.3° diameter, 20 cd/m² luminance), presented for 75 ms. The spatial and temporal proximity of the auditory and visual stimulations allowed participants to perceive them as originating from exactly the same source (Figure 1b). S1 and S3 were always played at −25° and +25°, respectively, and were separated by a fixed time interval of 1.5 s. From trial to trial, S2 could be presented randomly from either −2.3° or +2.3° in space, and at either −250 ms or +250 ms in time (with 0 ms representing the middle of the 1.5 s temporal sequence). We chose these spatial positions and time lags on the basis of participants' psychophysical performance in previous studies (for more details see Gori et al., 2012, 2014; Vercillo et al., 2016).
Four conditions were possible according to this experimental design (Figure 2): (a) S1–S2 distance/interval narrow in space and short in time (i.e., S2 at −2.3° and −250 ms; Figure 2a); (b) S1–S2 distance/interval narrow in space and long in time (i.e., S2 at −2.3° and +250 ms; Figure 2b); (c) S1–S2 distance/interval wide in space and long in time (i.e., S2 at +2.3° and +250 ms; Figure 2c); and (d) S1–S2 distance/interval wide in space and short in time (i.e., S2 at +2.3° and −250 ms; Figure 2d). In conditions (a) and (c), the spatial and temporal components of the AV stimuli were coherent; in conditions (b) and (d), they were conflicting.
FIGURE 2.

Four experimental conditions according to S2 spatial and temporal features. (a) S2 from −2.3° at −250 ms, (b) S2 from −2.3° at +250 ms, (c) S2 from +2.3° at +250 ms, and (d) S2 from +2.3° at −250 ms. S1 and S3 were always delivered at −25° and +25°, respectively
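As an illustration of the 2 × 2 design above, the following hypothetical sketch enumerates the four S2 conditions and flags whether a drawn trial is spatiotemporally coherent or conflicting; all names are ours, not part of the original experimental code.

```python
# Hypothetical sketch of the 2 x 2 trial design: S2 varies in space (±2.3°)
# and time (±250 ms), while S1 and S3 are fixed at -25°/+25° and at
# -750/+750 ms relative to the midpoint of the 1.5 s sequence.
import itertools
import random

S2_POSITIONS_DEG = (-2.3, +2.3)   # narrow vs. wide S1-S2 spatial distance
S2_LAGS_MS = (-250, +250)         # short vs. long S1-S2 temporal interval

CONDITIONS = list(itertools.product(S2_POSITIONS_DEG, S2_LAGS_MS))

def make_trial():
    """Draw one trial: S1 and S3 fixed, S2 drawn from the four conditions."""
    s2_pos, s2_lag = random.choice(CONDITIONS)
    # Coherent when space and time agree on whether S2 is "closer" to S1
    coherent = (s2_pos < 0) == (s2_lag < 0)
    return {
        "S1": {"pos_deg": -25.0, "t_ms": -750},
        "S2": {"pos_deg": s2_pos, "t_ms": s2_lag, "coherent": coherent},
        "S3": {"pos_deg": +25.0, "t_ms": +750},
    }

print(make_trial())
```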
The AV stimuli and conditions were identical in both tasks, which differed only in the experimental question. More specifically, in the spatial bisection task participants evaluated whether S2 was spatially farther from S1 or S3, whereas in the temporal bisection task they evaluated whether S2 was temporally farther from S1 or S3. For each task, answers were provided after the presentation of S3, with subjects pressing the button corresponding to S1 or S3. The two tasks were counterbalanced across subjects in two separate blocks, and participants could take a break between them. Each block consisted of 240 experimental trials and 15 catch trials (in which S2 was delivered at 0° and at 0 ms, to check for participants' stereotypical responses). Participants were asked to maintain a stable head position, which was continuously monitored by the experimenter together with the electrooculogram (EOG) signal.
2.3. Electroencephalography (EEG) data collection and preprocessing
We recorded EEG from 64 active scalp electrodes using the Biosemi ActiveTwo EEG System. Electrode offsets were kept below 30 mV. A first‐order analog antialiasing filter with a half‐power cutoff at 3.6 kHz was applied. Data were acquired at 2048 Hz and then downsampled to 512 Hz, with a bandwidth of DC to 134 Hz. The EEG recording was referenced to a common mode sense active electrode and a driven right leg passive electrode. To monitor horizontal eye movements, two additional electrodes were positioned at the left and right outer canthi for the EOG recordings.
The EEG was filtered between 0.1 and 100 Hz. We removed transient stereotypical (e.g., eye blinks) and non‐stereotypical (e.g., movement or muscle bursts) high‐amplitude artifacts by applying the artifact subspace reconstruction (ASR) method (Mullen et al., 2015) implemented as an EEGLAB plug‐in (Delorme & Makeig, 2004). Sliding 500 ms windows of EEG data were decomposed via principal component analysis and compared with data from a clean baseline EEG recording. Within each sliding window, the ASR algorithm identifies principal subspaces that significantly deviate from the baseline and reconstructs them using a mixing matrix computed from the baseline EEG recording. In this study, a threshold of 3 SD was used to identify corrupted subspaces. Moreover, channels were removed if their correlation with the other channels was <0.85, or if their line noise was more than 4 SD above that of the total channel population. Time windows were removed whenever the fraction of contaminated channels exceeded 0.25.
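The following simplified sketch conveys the principle behind the ASR step described above; it is not the published algorithm, which, among other refinements, reconstructs flagged subspaces from the calibration data rather than discarding them.

```python
# Simplified, illustrative sketch of the ASR principle: each sliding window is
# decomposed with PCA, components whose amplitude deviates by more than
# `cutoff` SDs from a clean calibration recording are zeroed out, and the
# window is reconstructed from the remaining subspace. The real ASR algorithm
# (Mullen et al., 2015) is considerably more sophisticated.
import numpy as np

def asr_like_clean(window, baseline, cutoff=3.0):
    """window, baseline: (n_channels, n_samples) arrays of EEG data."""
    # PCA basis estimated from the window's channel covariance
    cov = np.cov(window)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Project baseline and window onto the same components
    base_proj = eigvecs.T @ baseline
    win_proj = eigvecs.T @ window
    base_sd = base_proj.std(axis=1, keepdims=True)
    win_sd = win_proj.std(axis=1, keepdims=True)
    # Flag components whose amplitude exceeds baseline by `cutoff` SDs
    bad = (win_sd > cutoff * base_sd).ravel()
    win_proj[bad, :] = 0.0        # crude removal; real ASR reconstructs instead
    return eigvecs @ win_proj     # back to channel space
```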
We further cleaned the EEG data using independent component analysis (ICA) with two EEGLAB toolboxes, SASICA (Chaumon et al., 2015) and IC_MARC (Frølich et al., 2015), keeping all parameters at their default values. For component rejection, we followed the criteria reported in the corresponding validation papers and based rejection on abnormal topographies and/or spectra. Data were referenced to the average of the left and right mastoids (TP7 and TP8 electrodes).
2.4. Behavioral‐level and sensor‐level analysis
Behavioral performance was computed as the percentage of correct responses for each task.
Regarding the neurophysiological data, we compared the neural response to S2 with that to S1, separately for the spatial and temporal bisection tasks. Previous studies involving unisensory stimuli (visual stimuli: Amadeo et al., 2020a; auditory stimuli: Amadeo, Campus, et al., 2019; Campus et al., 2017, 2019) already showed that S2 represents the starting point for the development of spatial and temporal metrics, correlated with an early contralateral activation of occipital and temporal areas, respectively. By contrast, S1 was taken as a control, since it was fixed in space and time, and S3 was not considered in the analysis, since it potentially involves more complex processing related to the metric definition. We expected to find a similar pattern of early activation with multisensory stimuli.
To obtain the event‐related potentials (ERPs), we took the 200 ms window preceding S1 onset as baseline and averaged EEG data time‐locked to S1 or S2 onset, separately for the two tasks. For each participant, a minimum of 100 trials per block was required for each ERP. After artifact rejection, the total number of trials for each ERP was 1707 (~107 per participant).
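A minimal sketch of this epoching and baseline-correction scheme, with hypothetical array and function names, could look as follows.

```python
# Sketch of ERP computation as described above. `eeg` is assumed to be
# (n_channels, n_samples) continuous data at 512 Hz; `onsets` holds the
# S1- or S2-onset samples per trial; `s1_onsets` holds the S1 onsets,
# since the baseline is always the 200 ms preceding S1.
import numpy as np

FS = 512            # sampling rate after downsampling (Hz)
BASELINE_S = 0.200  # 200 ms pre-S1 baseline
EPOCH_S = 0.500     # illustrative poststimulus epoch length (assumed)

def compute_erp(eeg, onsets, s1_onsets):
    n_base = int(BASELINE_S * FS)
    n_epoch = int(EPOCH_S * FS)
    epochs = []
    for onset, s1 in zip(onsets, s1_onsets):
        # Baseline: mean voltage in the 200 ms before S1 onset, per channel
        baseline = eeg[:, s1 - n_base:s1].mean(axis=1, keepdims=True)
        epochs.append(eeg[:, onset:onset + n_epoch] - baseline)
    return np.mean(epochs, axis=0)  # trial average -> ERP (channels x time)
```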
As in previous studies (Amadeo, Campus, & Gori, 2019a, 2020a; Campus et al., 2017, 2019), the analysis focused on electrodes related to visual (O1 and O2, in occipital areas) and auditory (T7 and T8, in temporal areas) processing. In accordance with these studies, the time window between 50 and 90 ms after stimulus onset was defined as a crucial interval for the earliest stages of multisensory integration. Thus, for both tasks we computed the mean ERP amplitude by averaging the voltage in this time window. We then collapsed ERP waveforms across conditions and hemispheres of recording to obtain ERPs recorded on the contralateral and ipsilateral hemisphere with respect to stimulus position in space (e.g., occipital contralateral response: ERP amplitude to a stimulus at −2.3° recorded from the O2 electrode; occipital ipsilateral response: ERP amplitude to a stimulus at −2.3° recorded from the O1 electrode). Lateralized ERP responses were then calculated as the difference between the contralateral and ipsilateral ERP recordings. We performed statistical comparisons by running an analysis of variance (ANOVA) on the lateralized mean ERP responses, with factors Area (Occipital, Temporal), Task (Spatial bisection, Temporal bisection), and AV Stimulus (S1, S2). Paired two‐tailed t‐tests were performed as post hoc comparisons, with the alpha level set at .05 after Bonferroni correction.
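The contralateral-minus-ipsilateral computation can be summarized as in the sketch below; the data structures and names are assumptions for illustration.

```python
# Sketch of the lateralized-response computation. `erp` is assumed to map a
# channel name to its (n_samples,) S2-locked ERP waveform; `t` is the epoch
# time axis in seconds.
import numpy as np

def mean_amplitude(erp_wave, t, t_min=0.050, t_max=0.090):
    """Mean voltage in the 50-90 ms window."""
    mask = (t >= t_min) & (t <= t_max)
    return erp_wave[mask].mean()

def lateralized_response(erp, t, s2_side, left_ch="O1", right_ch="O2"):
    """Contralateral minus ipsilateral mean amplitude for one stimulus side."""
    contra, ipsi = (right_ch, left_ch) if s2_side == "left" else (left_ch, right_ch)
    return mean_amplitude(erp[contra], t) - mean_amplitude(erp[ipsi], t)
```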
2.5. Source‐level analysis
In order to estimate the cortical generators of the ERP components, we performed a distributed source analysis using the Brainstorm software (Tadel et al., 2011), similarly to procedures used in previous studies (Amadeo et al., 2020a; Campus et al., 2017, 2019; Gori et al., 2020). Data were re‐referenced to the common average. We used the standard 1 mm resolution ICBM152 template of the Montreal Neurological Institute (nonlinear average of 152 subjects, processed with FreeSurfer 5.3; Fonov et al., 2009), performed forward modeling using a three‐layer (head, outer and inner skull) symmetric boundary element model (BEM) generated with OpenMEEG, and estimated source intensities using the sLORETA approach (Gramfort et al., 2011). Since individual magnetic resonance imaging (MRI) scans were not available, to avoid misleading overinterpretation, dipole orientations were not fixed to the cortex surface but were free to assume any (unconstrained) orientation. Brainstorm's default parameter settings were used for both source reconstruction and BEM creation.
We averaged source activation for each subject and condition within the selected 50–90 ms time window after S2. Subsequently, the norm of the vectorial sum of the three orientations at each vertex was estimated. Finally, pairwise comparisons were investigated with paired t‐tests, and results were corrected for multiple comparisons with the false discovery rate (FDR) method, using p = .00001 as a threshold. We verified the specificity of the occipital and temporal activation after S2 during the spatial and temporal bisection tasks by comparing the neural response after S2 between the two tasks, considering the S2 positions in space (±2.3°) separately.
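The per-vertex summary of unconstrained source orientations reduces to the following operation, sketched here under an assumed array layout.

```python
# Sketch of the source-level summary: with unconstrained dipole orientations,
# each cortical vertex has three orthogonal source time series; after
# averaging within 50-90 ms, activity is summarized as the norm of the
# vectorial sum of the three orientations. `sources` is an assumed
# (n_vertices, 3, n_samples) array from the inverse solution.
import numpy as np

def vertex_activation(sources, t, t_min=0.050, t_max=0.090):
    mask = (t >= t_min) & (t <= t_max)
    mean_xyz = sources[:, :, mask].mean(axis=2)  # (n_vertices, 3)
    return np.linalg.norm(mean_xyz, axis=1)      # one scalar per vertex
```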
3. RESULTS
A group of 16 participants performed a spatial and a temporal bisection task in which three AV stimuli were presented in sequence, and the second of these stimuli was randomly delivered at one of two spatial positions and one of two temporal lags. Participants evaluated whether the second stimulus was spatially or temporally farther from the first or the third stimulus. During both tasks, EEG was recorded and behavioral data were collected.
3.1. Behavioral performance
Behavioral performance was calculated as the percentage of correct responses. Participants performed equally well in the two tasks (t[15] = 1.80, p = .091, Cohen's d = 0.45, 95% CI [−0.08, 0.98]). This observation allowed us to exclude any effect of task difficulty on the cortical responses associated with the two bisection tasks.
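For clarity, the paired comparison and its effect size can be computed as in the following sketch; the accuracy arrays are hypothetical inputs.

```python
# Minimal sketch of the behavioral comparison: a paired two-tailed t-test on
# per-participant accuracy, with Cohen's d for paired samples computed from
# the difference scores. `acc_spatial` and `acc_temporal` are assumed arrays
# of percent-correct values, one entry per participant.
import numpy as np
from scipy import stats

def compare_tasks(acc_spatial, acc_temporal):
    t_stat, p_val = stats.ttest_rel(acc_spatial, acc_temporal)
    diff = np.asarray(acc_spatial) - np.asarray(acc_temporal)
    cohens_d = diff.mean() / diff.std(ddof=1)  # paired-samples effect size
    return t_stat, p_val, cohens_d
```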
3.2. Sensor‐level analysis
In Figure 3a, the scalp topographies of the mean ERP in the 50–90 ms time window after S1 show a positivity involving the temporal and occipital areas contralateral to the AV stimulus position in space (always −25°). The activation pattern appears similar between the temporal and spatial bisection tasks and likely reflects multisensory cortical processing in the C1 time window (50–90 ms), a finding already reported in the previous literature (Cappe et al., 2010; Fort, Delpuech, Pernier, & Giard, 2002; Fort, Delpuech, Pernier, Giard, et al., 2002; Giard & Peronnet, 1999; Molholm et al., 2002; Murray, Lewkowicz, et al., 2016; Murray, Thelen, et al., 2016; reviewed in De Meo et al., 2015). In parallel, the scalp maps depicting the same time window after S2 onset (Figure 3b) show a more prominent positivity than after S1, in occipital areas for the spatial bisection task and in temporal areas for the temporal bisection task, always lateralized with respect to the AV stimulus position in space.
FIGURE 3.

Scalp maps of the mean event‐related potential amplitude in the 50–90 ms time window after S1 (a) and S2 (b), for the spatial (top) and temporal (bottom) bisection task. On top, a schematic representation of each condition: S1 (a) was always delivered at −25° and −750 ms. S2 (b) was randomly delivered at either −2.3° (b, left panel) or +2.3° (b, right panel) in space, and at either −250 or +250 ms in time
We tested these observations by running an ANOVA on the lateralized mean ERP responses, with factors Area (Occipital, Temporal), Task (Spatial bisection, Temporal bisection), and AV Stimulus (S1, S2). The omnibus ANOVA revealed a significant three‐way interaction (F[1,15] = 123.1, p < .001, ηp² = 0.89, 95% CI [0.75, 0.94]). Thus, similarly to previous studies (Amadeo et al., 2020a; Campus et al., 2017, 2019; Gori et al., 2020), we further investigated this result by focusing on occipital and temporal areas separately, splitting the analysis into two distinct hypothesis‐driven follow‐up ANOVAs. The Task (Spatial, Temporal) × AV Stimulus (S1, S2) follow‐up ANOVA on temporal regions revealed a contralateral temporal activity that was higher during the temporal bisection task than during the spatial bisection task, independently of the stimulus (F[1,15] = 26.76, p < .001, ηp² = 0.64, 95% CI [0.28, 0.80]). However, a significant interaction between Task and AV Stimulus (F[1,15] = 51.63, p < .001, ηp² = 0.77, 95% CI [0.51, 0.88]) indicated that the gain modulation in the temporal bisection task differed between S1 and S2. Post hoc two‐tailed t‐tests (Figure 4) revealed that the stronger activation of temporal regions during the temporal bisection task was specific to the second AV stimulus (t[15] = −7.34, p < .001, Cohen's d = −1.83, 95% CI [−2.67, −0.99]), whereas for the first AV stimulus the two tasks shared a similar temporal activation (t[15] = −0.63, p = 1.00, Cohen's d = −0.15, 95% CI [−0.67, 0.35]). These results indicated that the amplification in contralateral temporal areas in the 50–90 ms time window was specific to the temporal domain and to the multisensory stimulus involved in the development of a temporal metric (S2). The follow‐up ANOVA on the occipital components showed a significant main effect of Task, with the lateralized ERP response larger for the spatial bisection task than for the temporal bisection task (F[1,15] = 51.73, p < .001, ηp² = 0.78, 95% CI [0.51, 0.88]) in the selected time window. However, the significant interaction between Task and AV Stimulus (Figure 4; F[1,15] = 44.17, p < .001, ηp² = 0.75, 95% CI [0.45, 0.86]) revealed that this amplification was specific to S2 (t[15] = 9.07, p < .001, Cohen's d = 2.27, 95% CI [1.30, 3.23]) and not to S1 (t[15] = −0.91, p = .373, Cohen's d = −0.22, 95% CI [−0.74, 0.28]), suggesting that the cortical modulation of the occipital regions is the starting point for the development of a metric in the spatial domain.
FIGURE 4.

Lateralized mean event‐related potential (ERP) amplitude (i.e., difference between the contralateral and ipsilateral ERP responses) in the selected time window (50–90 ms) after S1 and S2 of the two bisection tasks in occipital (left panel) and temporal (right panel) areas. Error bars indicate SEM
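As a hedged illustration of the omnibus model above, a repeated-measures ANOVA with the three within-subject factors can be fitted with statsmodels; the synthetic data below merely stand in for the actual lateralized amplitudes.

```python
# Illustrative repeated-measures ANOVA with factors Area x Task x Stimulus.
# The amplitudes here are synthetic placeholders, not the study's data.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = [
    {"subject": s, "area": a, "task": t, "stimulus": st, "amp": rng.normal()}
    for s in range(16)                       # 16 participants
    for a in ("Occipital", "Temporal")
    for t in ("Spatial", "Temporal")
    for st in ("S1", "S2")
]
df = pd.DataFrame(rows)                      # one value per subject and cell

result = AnovaRM(df, depvar="amp", subject="subject",
                 within=["area", "task", "stimulus"]).fit()
print(result)  # F and p for main effects and interactions, incl. the 3-way
```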
ERP waveforms recorded over the occipital scalp contralateral and ipsilateral to S2 revealed a prominent positivity contralateral to the S2 position within the 50–90 ms time window (Figure 5). This neural response was stronger during the spatial bisection task than during the temporal bisection task. We also observed a non‐lateralized modulation of a later neural response (P140) specific to the spatial bisection task, in agreement with previous studies (Amadeo et al., 2020a; Campus et al., 2017, 2019; Gori et al., 2020). Finally, a modulation occurring in a late poststimulus time window (250–450 ms), more pronounced for the spatial task, was detected, likely involving the auditory‐evoked contralateral occipital activation (Feng et al., 2014; McDonald et al., 2013). Over the temporal scalp (Figure 5), an early ERP component contralateral to the S2 position in space was stronger during the temporal bisection task. This activation resembled the N1 component usually elicited by auditory stimuli and also recalled the multisensory responses observed at very short latencies (Calvert & Thesen, 2004; Giard & Peronnet, 1999).
FIGURE 5.

ERPs (mean ± SEM) elicited by S2 during the spatial bisection and the temporal bisection tasks in occipital (left panel) and temporal (right panel) electrodes. Both contralateral and ipsilateral ERP responses with respect to the S2 position in space are reported. The gray‐shaded area delimits the selected time window (50–90 ms)
3.3. Source‐level analysis
To provide further evidence that the early activation of the temporal and occipital areas actually involved the auditory and visual cortices, respectively, we performed a source‐level analysis (Figure 6). Considering the neural response to S2, the source analysis showed that both bisection tasks elicited a cortical response contralateral to the stimulus spatial position in occipital and temporal regions. However, when comparing the two bisection tasks at the source level, we observed that the early activation of occipital regions was stronger during the spatial bisection task, while the neural response of temporal areas was more widely evoked by the temporal bisection task. Even though the exact generators of this neural activity were difficult to define even with source analysis, the early latency of the response (50–90 ms), together with a neural activation covering a wide region of the temporal and occipital lobes, suggested that the two tasks were probably evoking a neural response involving the auditory and visual cortices. Paired two‐tailed t‐tests confirmed the significant differences between the two tasks in the recruitment of the auditory and visual cortices.
FIGURE 6.

(a) Average source activity after S2 in the 50–90 ms time window: Left and right panels of each line show the conditions in which S2 was delivered from the left (i.e., −2.3°) or the right (i.e., +2.3°), respectively. (b) Results of the pairwise two‐tailed t‐tests performed on average source activity in the 50–90 ms time window: Only t values corresponding to p < .0001 after FDR correction are displayed. Reddish and bluish colors indicate stronger activations in spatial and temporal bisections, respectively. Color intensity indicates the significance of the difference (i.e., magnitude of t). A stronger neural response with the spatial bisection occurs in the occipital areas, while in the temporal sites the activation more strongly supports the temporal bisection
4. DISCUSSION
Our environment determines which sense is the most reliable for processing specific information (Welch & Warren, 1980), selecting vision as the most appropriate sense for spatial judgments and audition for temporal processing. In this scenario, visual cortices play a pivotal role in spatial representations and auditory regions in temporal representations, independently of the inputs' sensory modality (Amadeo et al., 2020a; Campus et al., 2017). Indeed, some cortical regions process sensory inputs in a modality‐independent manner, since they are mainly driven by specific computations rather than by specific sensory information (Amedi et al., 2017; Cecchetti et al., 2016; Heimler & Amedi, 2020; Heimler et al., 2015; Ricciardi et al., 2014, 2020; Rosenblum et al., 2017). The idea that the sensory cortices are innately specialized is further challenged by multisensory operations occurring at all levels of cortical processing (Ghazanfar & Schroeder, 2006).
In this study, we recorded behavioral data and ERPs in 16 participants performing audiovisual temporal and spatial bisection tasks, to test the hypothesis that the domain‐specific organization of visual and auditory brain areas also subsists at the multisensory level. Participants evaluated whether, in a sequence of three audiovisual stimuli, the second stimulus (S2) was spatially (spatial bisection task) or temporally (temporal bisection task) farther from the first or the third audiovisual stimulus. Our results showed an S2‐selective early activation (50–90 ms) of temporal regions that was stronger when encoding the audiovisual stimuli in the temporal bisection task than in the spatial bisection task. This early response recalled some aspects of the N1 component usually elicited by auditory stimuli (Näätänen & Picton, 1987) and originated in a wide temporal region that presumably involved the auditory cortex. This area generally works together with regions such as the superior temporal sulcus to coordinate many multisensory processes (Kayser et al., 2009; Kayser & Logothetis, 2009). Complementarily, we found an occipital response resembling the visual‐evoked C1 that was likewise S2‐selective but larger for the spatial bisection task than for the temporal bisection task. Our findings complement past studies using unisensory stimuli that support a crucial role of the visual and auditory cortices in spatial and temporal representation, respectively (visual stimuli: Amadeo et al., 2020a; auditory stimuli: Amadeo, Campus, et al., 2019; Campus et al., 2017, 2019). In addition to this evidence, this study showed that the domain specificity of sensory areas acts also within a multisensory framework. In many past studies, visual and auditory cortices have been shown to support the encoding of multiple sensory modalities, an observation suggesting that the high‐level associative cortices do not hold the absolute primacy of multisensory processes (Bueti & Macaluso, 2010; Fort, Delpuech, Pernier, & Giard, 2002; Giard & Peronnet, 1999; Molholm et al., 2002; van Wassenhove & Grzeczkowski, 2015). Our results showed that, when processing multisensory information, the sensory areas also take into account the features of the stimuli to be processed and, in particular, the domain of representation (i.e., space and time) to which the stimuli belong. In particular, occipital areas were preferentially recruited to encode multisensory stimuli spatially arranged in a complex metric configuration, supporting the idea that the visual circuit is crucially enrolled whenever dealing with spatial representations across multiple sensory modalities. Likewise, we confirmed the crucial role of the auditory cortices in temporal processing by showing the preferential recruitment of these areas in the temporal representation of multisensory stimuli. Since this study lacked unimodal conditions (auditory‐only and visual‐only) to compare with the multisensory stimulation, we cannot infer with confidence that the domain‐specific neural response we observed was intrinsically multisensory. Indeed, we cannot exclude that participants were taking into account only the most relevant sense for each specific task: the visual stimuli for the spatial bisection task, and the auditory stimuli for the temporal bisection task.
Nonetheless, from a qualitative comparison between the results of this study and those of past works using the same methodology but with unimodal conditions (visual stimuli: Amadeo et al., 2020a; auditory stimuli: Amadeo, Campus, et al., 2019; Campus et al., 2017, 2019), we observed a neural gain in response to multisensory stimuli, in line with the typical processing of multisensory inputs (Fort, Delpuech, Pernier, & Giard, 2002; Fort, Delpuech, Pernier, Giard, & Thomas, 2002; Molholm et al., 2002; reviewed in Ricciardi et al., 2014). Specifically, the multisensory response we observed was larger than the unisensory responses previously described. Interestingly, this multisensory gain was detectable in both occipital and temporal areas and was independent of the domain of representation involved (spatial or temporal). However, since we could not quantitatively compare unimodal and multimodal conditions, future investigations in this direction are needed, with participants tested in visual‐only, audio‐only, and audiovisual spatial and temporal tasks. Finally, we showed that behavioral performance was similar between the two tasks, confirming that the neural modulation of sensory areas essentially reflected the task request (rather than other experimental aspects such as task difficulty). Overall, the results of this study fit into the framework delineated by Murray, Lewkowicz, et al. (2016), who proposed that multisensory processing does not always involve a single, fixed schema of neural activation but encompasses different cortical circuits. In particular, the authors proposed a neural circuit that is recruited among high‐order association cortices, such as the prefrontal and parietal cortex, and a second neural circuit that occurs directly between low‐level cortices. Multisensory processes can involve both kinds of schema in a dynamic combination, in relation to the nature of the multisensory stimuli to be processed. The task‐specific recruitment of visual and auditory cortices described in our study fits into this dynamic and context‐adaptive scenario of multisensory processing occurring between sensory cortices.
The early time latency (50–90 ms) we selected in this study supports a task specificity occurring at a low level of sensory processing. Indeed, this time window can be considered an eMSI, that is, an early functional stage of multisensory processing (within the first 100 ms poststimulus onset) allowing the brain to select and encode important external inputs, which can also facilitate later stimulus encoding. However, it is worth noting that activation of visual and auditory regions has been registered at latencies even earlier than 50 ms in both macaques (Lamme & Roelfsema, 2000; Maunsell & Gibson, 1992) and humans (Brang et al., 2015, 2022), using techniques different from ours. In the past literature, eMSI was generally elicited by simple tasks such as discrimination or detection tasks (Cappe et al., 2012; Fort, Delpuech, Pernier, & Giard, 2002; Fort, Delpuech, Pernier, Giard, et al., 2002; Giard & Peronnet, 1999; Murray, Thelen, et al., 2016; Raij et al., 2010; Talsma et al., 2007; Teder‐Sälejärvi et al., 2002), but less was known about the occurrence of this mechanism with more complex task demands, such as the bisection tasks we proposed in this study. The spatial and temporal bisections explore the human ability to build a metric representation of the environment by estimating and comparing different inputs in space and time. In addition, by using the same audiovisual stimuli in the two tasks and changing only the experimental question between them, this paradigm allowed us to detect the early neural effects of the interaction of identical sensory information with different behavioral goals (here, the spatial and temporal content of the task). The fact that the task specificity of multisensory processing appeared within an early time latency can be regarded as a controversial point, since early multisensory integration is typically considered an automatic process, that is, a hallmark of bottom‐up mechanisms (De Meo et al., 2015). Nevertheless, past studies revealed that top‐down factors, such as attention, also influence multisensory integration at very early stages of stimulus processing (Talsma & Woldorff, 2005; Talsma et al., 2010), and that high‐level cognitive processes can directly involve the recruitment of auditory and visual areas (reviews on the visual areas: Ricciardi et al., 2020; Roelfsema & de Lange, 2016; review on the auditory areas: Zatorre, 2007). Thus, in light of these findings, we are not surprised to observe a domain‐specific early activation of auditory and visual areas for a task such as the bisection. Indeed, in line with the cross‐sensory calibration theory (Gori, 2015), in this kind of task the visual and auditory systems calibrate the other senses for spatial and temporal representations, respectively, supported by the recruitment of the sensory cortices (Amadeo et al., 2020a; Campus et al., 2017). In this task, adult‐like behavior is achieved only late in development (Amadeo et al., 2019b; Gori et al., 2012) and, when calibration is not possible (e.g., in blindness or deafness), the spatiotemporal skills involved in the bisection are impaired, together with the related activation of sensory areas. Here, we speculate that with other tasks that do not require such calibration, the domain‐specific modulation of the sensory cortices would be reduced (as would the deficits in some spatiotemporal skills in case of sensory impairment).
For example, a spatial localization task, for which visual calibration does not seem to be required (the ability to localize sounds in space develops even in the absence of visual experience; Gori et al., 2021; Rohlf et al., 2020), may involve an alternative schema of multisensory processing at the neural level or activate later cortical processes. However, further investigation in this direction is needed.
The findings of this study should be considered in light of some limitations. First, the lack of unimodal conditions (auditory‐only and visual‐only) prevents a direct comparison between unimodal and multisensory processing, as does the lack of a computational description of the data, for instance within a Bayesian framework. However, the multisensory gain we qualitatively observed in the occipital and temporal activation for the spatial and temporal bisection, respectively, suggests that this response was likely related to multisensory processing. Second, the lack of a correlation between the subjects' neural responses and behavioral performance (i.e., the percentage of correct responses and/or the spatial and temporal parameters of S2) does not allow us to state that the observed neural modulation truly responded to the spatiotemporal characteristics of the stimuli. Third, the low spatial resolution of the EEG technique, together with the lack of individual MRI scans for the source analysis, limits access to the exact cortical locations generating the occipital and temporal activations. However, the similarities between the early occipital and temporal positivity we observed and the canonical visual‐evoked and auditory‐evoked components of sensory cortices lead us to assume that the neural response was generated at the level of the visual and auditory cortices. Finally, this study has a limited sample size, although in line with the sample sizes of past studies using the same methodology (Amadeo, Campus, et al., 2019; Amadeo et al., 2020a; Campus et al., 2017, 2019).
To conclude, this study provides evidence of early responses of auditory and visual cortices for temporal and spatial multisensory tasks, respectively. This work demonstrates that the preferential domains of representation (i.e., space and time) of the sensory areas persist at the multisensory level, with a task‐dependent involvement of auditory and visual regions in the processing of bimodal stimuli. Moreover, if we consider a continuous interaction between multisensory processes and supramodal mechanisms (Ricciardi & Pietrini, 2011), our results may also complement the task‐specific supramodal organization of the brain revealed by past studies using unisensory stimulation (visual stimuli: Amadeo et al., 2020a; auditory stimuli: Amadeo, Campus, et al., 2019; Campus et al., 2017, 2019). Overall, these findings have important implications for understanding the multifaceted, dynamic, and context‐adaptive multisensory mechanisms at the neural level.
CONFLICT OF INTEREST
The authors declare that there is no conflict of interest that could be perceived as prejudicing the impartiality of the research reported.
ACKNOWLEDGMENTS
The authors would like to thank the subjects for their willing participation in this study. The research was partially supported by the MYSpace project (PI Monica Gori), which has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 948349).
Gori, M., Bertonati, G., Campus, C., & Amadeo, M. B. (2023). Multisensory representations of space and time in sensory cortices. Human Brain Mapping, 44(2), 656–667. 10.1002/hbm.26090
Monica Gori and Giorgia Bertonati contributed equally to this study.
Funding information H2020 European Research Council, Grant/Award Number: 948349; European Union's Horizon 2020; MYSpace project
DATA AVAILABILITY STATEMENT
The dataset presented in this study can be found in the online Zenodo repository at the following link: https://doi.org/10.5281/zenodo.7108692.
REFERENCES
- Alais, D., & Burr, D. (2004). The ventriloquist effect results from near-optimal bimodal integration. Current Biology, 14(3), 257–262. 10.1016/j.cub.2004.01.029
- Amadeo, M. B., Campus, C., & Gori, M. (2019a). Impact of years of blindness on neural circuits underlying auditory spatial representation. NeuroImage, 191, 140–149. 10.1016/j.neuroimage.2019.01.073
- Amadeo, M. B., Campus, C., & Gori, M. (2019b). Time attracts auditory space representation during development. Behavioural Brain Research, 376, 112185. 10.1016/j.bbr.2019.112185
- Amadeo, M. B., Campus, C., & Gori, M. (2020a). Visual representations of time elicit early responses in human temporal cortex. NeuroImage, 217, 116912. 10.1016/j.neuroimage.2020.116912
- Amadeo, M. B., Campus, C., & Gori, M. (2020b). Years of blindness lead to "visualize" space through time. Frontiers in Neuroscience, 14, 1–14. 10.3389/fnins.2020.00812
- Amadeo, M. B., Campus, C., Pavani, F., & Gori, M. (2019). Spatial cues influence time estimations in deaf individuals. iScience, 19, 369–377. 10.1016/j.isci.2019.07.042
- Amadeo, M. B., Tonelli, A., Campus, C., & Gori, M. (2022). Reduced flash lag illusion in early deaf individuals. Brain Research, 1776, 147744. 10.1016/j.brainres.2021.147744
- Amedi, A., Hofstetter, S., Maidenbaum, S., & Heimler, B. (2017). Task selectivity as a comprehensive principle for brain organization. Trends in Cognitive Sciences, 21(5), 307–310. 10.1016/j.tics.2017.03.007
- Andersen, R. A., Snyder, L. H., Bradley, D. C., & Xing, J. (1997). Multimodal representation of space in the posterior parietal cortex and its use in planning movements. Annual Review of Neuroscience, 20, 303–330. 10.1146/annurev.neuro.20.1.303
- Barakat, B., Seitz, A. R., & Shams, L. (2015). Visual rhythm perception improves through auditory but not visual training. Current Biology, 25(2), R60–R61. 10.1016/j.cub.2014.12.011
- Battal, C., Occelli, V., Bertonati, G., Falagiarda, F., & Collignon, O. (2020). General enhancement of spatial hearing in congenitally blind people. Psychological Science, 31(9), 1129–1139. 10.1177/0956797620935584
- Beauchamp, M. S. (2005). See me, hear me, touch me: Multisensory integration in lateral occipital-temporal cortex. Current Opinion in Neurobiology, 15(2), 145–153. 10.1016/j.conb.2005.03.011
- Bolognini, N., Cecchetto, C., Geraci, C., Maravita, A., Pascual-Leone, A., & Papagno, C. (2012). Hearing shapes our perception of time: Temporal discrimination of tactile stimuli in deaf people. Journal of Cognitive Neuroscience, 24(2), 276–286. 10.1162/jocn_a_00135
- Brang, D., Plass, J., Sherman, A., Stacey, W. C., Wasade, V. S., Grabowecky, M., Ahn, E. S., Towle, V. L., Tao, J. X., Wu, S., Issa, N. P., & Suzuki, S. (2022). Visual cortex responds to sound onset and offset during passive listening. Journal of Neurophysiology, 127(6), 1547–1563. 10.1152/jn.00164.2021
- Brang, D., Towle, V. L., Suzuki, S., Hillyard, S. A., Di Tusa, S., Dai, Z., Tao, J., Wu, S., & Grabowecky, M. (2015). Peripheral sounds rapidly activate visual cortex: Evidence from electrocorticography. Journal of Neurophysiology, 114(5), 3023–3028. 10.1152/jn.00728.2015
- Bueti, D., & Macaluso, E. (2010). Auditory temporal expectations modulate activity in visual cortex. NeuroImage, 51(3), 1168–1183. 10.1016/j.neuroimage.2010.03.023
- Burr, D., Banks, M. S., & Morrone, M. C. (2009). Auditory dominance over vision in the perception of interval duration. Experimental Brain Research, 198(1), 49–57. 10.1007/s00221-009-1933-z
- Calvert, G. A., & Thesen, T. (2004). Multisensory integration: Methodological approaches and emerging principles in the human brain. Journal of Physiology Paris, 98(1–3), 191–205. 10.1016/j.jphysparis.2004.03.018
- Campus, C., Sandini, G., Amadeo, M. B., & Gori, M. (2019). Stronger responses in the visual cortex of sighted compared to blind individuals during auditory space representation. Scientific Reports, 9(1), 1–12. 10.1038/s41598-018-37821-y
- Campus, C., Sandini, G., Concetta Morrone, M., & Gori, M. (2017). Spatial localization of sound elicits early responses from occipital visual cortex in humans. Scientific Reports, 7(1), 1–12. 10.1038/s41598-017-09142-z
- Cappe, C., & Barone, P. (2005). Heteromodal connections supporting multisensory integration at low levels of cortical processing in the monkey. European Journal of Neuroscience, 22(11), 2886–2902. 10.1111/j.1460-9568.2005.04462.x
- Cappe, C., Thelen, A., Romei, V., Thut, G., & Murray, M. M. (2012). Looming signals reveal synergistic principles of multisensory integration. Journal of Neuroscience, 32(4), 1171–1182. 10.1523/JNEUROSCI.5517-11.2012
- Cappe, C., Thut, G., Romei, V., & Murray, M. M. (2010). Auditory-visual multisensory interactions in humans: Timing, topography, directionality, and sources. Journal of Neuroscience, 30(38), 12572–12580. 10.1523/JNEUROSCI.1099-10.2010
- Cattaneo, Z., Vecchi, T., Cornoldi, C., Mammarella, I., Bonino, D., Ricciardi, E., & Pietrini, P. (2008). Imagery and spatial processes in blindness and visual impairment. Neuroscience and Biobehavioral Reviews, 32(8), 1346–1360. 10.1016/j.neubiorev.2008.05.002
- Cecchetti, L., Kupers, R., Ptito, M., Pietrini, P., & Ricciardi, E. (2016). Are supramodality and cross-modal plasticity the Yin and Yang of brain development? From blindness to rehabilitation. Frontiers in Systems Neuroscience, 10, 1–8. 10.3389/fnsys.2016.00089
- Chaumon, M., Bishop, D. V. M., & Busch, N. A. (2015). A practical guide to the selection of independent components of the electroencephalogram for artifact correction. Journal of Neuroscience Methods, 250, 47–63. 10.1016/j.jneumeth.2015.02.025
- De Meo, R., Murray, M. M., Clarke, S., & Matusz, P. J. (2015). Top-down control and early multisensory processes: Chicken vs. egg. Frontiers in Integrative Neuroscience, 9, 1–6. 10.3389/fnint.2015.00017
- Delorme, A., & Makeig, S. (2004). EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. Journal of Neuroscience Methods, 134(1), 9–21. 10.1016/j.jneumeth.2003.10.009
- Falchier, A., Clavagnier, S., Barone, P., & Kennedy, H. (2002). Anatomical evidence of multimodal integration in primate striate cortex. Journal of Neuroscience, 22(13), 5749–5759. 10.1523/jneurosci.22-13-05749.2002
- Feng, W., Störmer, V. S., Martinez, A., McDonald, J. J., & Hillyard, S. A. (2014). Sounds activate visual cortex and improve visual discrimination. Journal of Neuroscience, 34(29), 9817–9824. 10.1523/JNEUROSCI.4869-13.2014
- Fonov, V. S., Evans, A. C., McKinstry, R. C., Almli, C. R., & Collins, D. L. (2009). Unbiased nonlinear average age-appropriate brain templates from birth to adulthood. NeuroImage, 47, S102. 10.1016/S1053-8119(09)70884-5
- Fort, A., Delpuech, C., Pernier, J., Giard, M., & Thomas, C. A. (2002). Dynamics of cortico-subcortical cross-modal operations involved in audio-visual object detection in humans. Cerebral Cortex, 12, 1031–1039. 10.1093/cercor/12.10.1031
- Fort, A., Delpuech, C., Pernier, J., & Giard, M. H. (2002). Early auditory-visual interactions in human cortex during nonredundant target identification. Cognitive Brain Research, 14(1), 20–30. 10.1016/S0926-6410(02)00058-7
- Fortin, M., Voss, P., Rainville, C., Lassonde, M., & Lepore, F. (2006). Impact of vision on the development of topographical orientation abilities. Neuroreport, 17(4), 443–446. 10.1097/01.wnr.0000203626.47824.86
- Frølich, L., Andersen, T. S., & Mørup, M. (2015). Classification of independent components of EEG into multiple artifact classes. Psychophysiology, 52(1), 32–45. 10.1111/psyp.12290
- Fuster, J. M., Bodner, M., & Kroger, J. K. (2000). Cross-modal and cross-temporal association in neurons of frontal cortex. Nature, 405(6784), 347–351. 10.1038/35012613
- Ghazanfar, A. A., & Schroeder, C. E. (2006). Is neocortex essentially multisensory? Trends in Cognitive Sciences, 10(6), 278–285. 10.1016/j.tics.2006.04.008
- Giard, M. H., & Peronnet, F. (1999). Auditory-visual integration during multimodal object recognition in humans: A behavioral and electrophysiological study. Journal of Cognitive Neuroscience, 11(5), 473–490. 10.1162/089892999563544
- Gondan, M., & Röder, B. (2006). A new method for detecting interactions between the senses in event-related potentials. Brain Research, 1073–1074(1), 389–397. 10.1016/j.brainres.2005.12.050
- Gori, M. (2015). Multisensory integration and calibration in children and adults with and without sensory and motor disabilities. Multisensory Research, 28, 71–99. 10.1163/22134808-00002478
- Gori, M., Amadeo, M. B., & Campus, C. (2020). Temporal cues trick the visual and auditory cortices mimicking spatial cues in blind individuals. Human Brain Mapping, 41(8), 2077–2091. 10.1002/hbm.24931
- Gori, M., Campus, C., Signorini, S., Rivara, E., & Bremner, A. J. (2021). Multisensory spatial perception in visually impaired infants. Current Biology, 31(22), 5093–5101.e5. 10.1016/j.cub.2021.09.011
- Gori, M., Chilosi, A., Forli, F., & Burr, D. (2017). Audio-visual temporal perception in children with restored hearing. Neuropsychologia, 99, 350–359. 10.1016/j.neuropsychologia.2017.03.025
- Gori, M., Sandini, G., & Burr, D. (2012). Development of visuo-auditory integration in space and time. Frontiers in Integrative Neuroscience, 6, 1–8. 10.3389/fnint.2012.00077
- Gori, M., Sandini, G., Martinoli, C., & Burr, D. C. (2014). Impairment of auditory spatial localization in congenitally blind human subjects. Brain, 137(1), 288–293. 10.1093/brain/awt311
- Gramfort, A., Strohmeier, D., Haueisen, J., Hamalainen, M., & Kowalski, M. (2011). Functional brain imaging with M/EEG using structured sparsity in time-frequency dictionaries. In Székely G. & Hahn H. K. (Eds.), Information processing in medical imaging (pp. 600–611). Springer.
- Guttman, S. E., Gilroy, L. A., & Blake, R. (2005). Hearing what the eyes see: Auditory encoding of visual temporal sequences. Psychological Science, 16(3), 228–235. 10.1111/j.0956-7976.2005.00808.x
- Heimler, B., & Amedi, A. (2020). Are critical periods reversible in the adult brain? Insights on cortical specializations based on sensory deprivation studies. Neuroscience and Biobehavioral Reviews, 116, 494–507. 10.1016/j.neubiorev.2020.06.034
- Heimler, B., Striem-Amit, E., & Amedi, A. (2015). Origins of task-specific sensory-independent organization in the visual and auditory brain: Neuroscience evidence, open questions and clinical implications. Current Opinion in Neurobiology, 35, 169–177. 10.1016/j.conb.2015.09.001
- Kayser, C., & Logothetis, N. (2009). Directed interactions between auditory and superior temporal cortices and their role in sensory integration. Frontiers in Integrative Neuroscience, 3, 7. 10.3389/neuro.07.007.2009
- Kayser, C., Petkov, C. I., & Logothetis, N. K. (2009). Multisensory interactions in primate auditory cortex: fMRI and electrophysiology. Hearing Research, 258(1), 80–88. 10.1016/j.heares.2009.02.011
- King, A. J. (2009). Visual influences on auditory spatial learning. Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1515), 331–339. 10.1098/rstb.2008.0230
- King, A. J. (2014). What happens to your hearing if you are born blind? Brain, 137(1), 6–8. 10.1093/brain/awt346
- Lamme, V. A. F., & Roelfsema, P. R. (2000). The distinct modes of vision offered by feedforward and recurrent processing. Trends in Neurosciences, 23(11), 571–579. 10.1016/S0166-2236(00)01657-X
- Maunsell, J. H. R., & Gibson, J. R. (1992). Visual response latencies in striate cortex of the macaque monkey. Journal of Neurophysiology, 68(4), 1332–1344. 10.1152/jn.1992.68.4.1332
- McDonald, J. J., Störmer, V. S., Martinez, A., Feng, W., & Hillyard, S. A. (2013). Salient sounds activate human visual cortex automatically. Journal of Neuroscience, 33(21), 9194–9201. 10.1523/JNEUROSCI.5902-12.2013
- Molholm, S., Ritter, W., Murray, M. M., Javitt, D. C., Schroeder, C. E., & Foxe, J. J. (2002). Multisensory auditory-visual interactions during early sensory processing in humans: A high-density electrical mapping study. Cognitive Brain Research, 14(1), 115–128. 10.1016/S0926-6410(02)00066-6
- Mullen, T. R., Kothe, C. A. E., Chi, Y. M., Ojeda, A., Kerth, T., Makeig, S., Jung, T.-P., & Cauwenberghs, G. (2015). Real-time neuroimaging and cognitive monitoring using wearable dry EEG. IEEE Transactions on Bio-Medical Engineering, 62(11), 2553–2567. 10.1109/TBME.2015.2481482
- Murray, M. M., Lewkowicz, D. J., Amedi, A., & Wallace, M. T. (2016). Multisensory processes: A balancing act across the lifespan. Trends in Neurosciences, 39(8), 567–579. 10.1016/j.tins.2016.05.003
- Murray, M. M., Thelen, A., Thut, G., Romei, V., Martuzzi, R., & Matusz, P. J. (2016). The multisensory function of the human primary visual cortex. Neuropsychologia, 83, 161–169. 10.1016/j.neuropsychologia.2015.08.011
- Näätänen, R., & Picton, T. (1987). The N1 wave of the human electric and magnetic response to sound: A review and an analysis of the component structure. Psychophysiology, 24(4), 375–425. 10.1111/j.1469-8986.1987.tb00311.x
- Nava, E., Bottari, D., Zampini, M., & Pavani, F. (2008). Visual temporal order judgment in profoundly deaf individuals. Experimental Brain Research, 190, 179–188. 10.1007/s00221-008-1459-9
- Poizner, H., & Tallal, P. (1987). Temporal processing in deaf signers. Brain and Language, 30(1), 52–62. 10.1016/0093-934X(87)90027-7
- Raij, T., Ahveninen, J., Lin, F. H., Witzel, T., Jääskeläinen, I. P., Letham, B., Israeli, E., Sahyoun, C., Vasios, C., Stufflebeam, S., Hämäläinen, M., & Belliveau, J. W. (2010). Onset timing of cross-sensory activations and multisensory interactions in auditory and visual sensory cortices. European Journal of Neuroscience, 31(10), 1772–1782. 10.1111/j.1460-9568.2010.07213.x
- Ricciardi, E., Bonino, D., Pellegrini, S., & Pietrini, P. (2014). Mind the blind brain to understand the sighted one! Is there a supramodal cortical functional architecture? Neuroscience and Biobehavioral Reviews, 41, 64–77. 10.1016/j.neubiorev.2013.10.006
- Ricciardi, E. , Papale, P. , Cecchetti, L. , & Pietrini, P. (2020). Does (lack of) sight matter for V1? New light from the study of the blind brain. Neuroscience and Biobehavioral Reviews, 118, 1–2. 10.1016/j.neubiorev.2020.07.014 [DOI] [PubMed] [Google Scholar]
- Ricciardi, E. , & Pietrini, P. (2011). New light from the dark: What blindness can teach us about brain function. Current Opinion in Neurology, 24(4), 357–363. 10.1097/WCO.0b013e328348bdbf [DOI] [PubMed] [Google Scholar]
- Rockland, K. S. , & Ojima, H. (2003). Multisensory convergence in calcarine visual areas in macaque monkey. International Journal of Psychophysiology, 50(1–2), 19–26. 10.1016/S0167-8760(03)00121-1 [DOI] [PubMed] [Google Scholar]
- Roelfsema, P. R. , & de Lange, F. P. (2016). Early visual cortex as a multiscale cognitive blackboard. Annual Review of Vision Science, 2, 131–151. 10.1146/annurev-vision-111815-114443 [DOI] [PubMed] [Google Scholar]
- Rohlf, S. , Li, L. , Bruns, P. , & Röder, B. (2020). Multisensory integration develops prior to Crossmodal recalibration. Current Biology, 30(9), 1726–1732.e7. 10.1016/j.cub.2020.02.048 [DOI] [PubMed] [Google Scholar]
- Romei, V. , Murray, M. M. , Cappe, C. , & Thut, G. (2009). Preperceptual and stimulus‐selective enhancement of low‐level human visual cortex excitability by sounds. Current Biology, 19(21), 1799–1805. 10.1016/j.cub.2009.09.027 [DOI] [PubMed] [Google Scholar]
- Rosenblum, L. D. , Dias, J. W. , & Dorsi, J. (2017). The supramodal brain: Implications for auditory perception. Journal of Cognitive Psychology, 29(1), 65–87. 10.1080/20445911.2016.1181691 [DOI] [Google Scholar]
- Tadel, F. , Baillet, S. , Mosher, J. C. , Pantazis, D. , & Leahy, R. M. (2011). Brainstorm: A user‐friendly application for MEG/EEG analysis. Computational Intelligence and Neuroscience, 2011, 1–13. 10.1155/2011/879716 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Talsma, D. , Doty, T. J. , & Woldorff, M. G. (2007). Selective attention and audiovisual integration: Is attending to both modalities a prerequisite for early integration? Cerebral Cortex, 17(3), 679–690. 10.1093/cercor/bhk016 [DOI] [PubMed] [Google Scholar]
- Talsma, D. , Senkowski, D. , Soto‐Faraco, S. , & Woldorff, M. G. (2010). The multifaceted interplay between attention and multisensory integration. Trends in Cognitive Sciences, 14(9), 400–410. 10.1016/j.tics.2010.06.008 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Talsma, D. , & Woldorff, M. G. (2005). Selective attention and multisensory integration: Multiple phases of effects on the evoked brain activity. Journal of Cognitive Neuroscience, 17(7), 1098–1114. 10.1162/0898929054475172, 1098, 1114. [DOI] [PubMed] [Google Scholar]
- Teder‐Sälejärvi, W. , McDonald, J. J. , Di Russo, F. , & Hillyard, S. A. (2002). An analysis of audio‐visual crossmodal integration by means of event‐related potential (ERP) recordings. Cognitive Brain Research, 14, 106–114. 10.1016/S0926-6410(02)00065-4 [DOI] [PubMed] [Google Scholar]
- Tonelli, A. , Campus, C. , & Gori, M. (2020). Early visual cortex response for sound in expert blind echolocators, but not in early blind non‐echolocators. Neuropsychologia, 147, 107617. 10.1016/j.neuropsychologia.2020.107617 [DOI] [PubMed] [Google Scholar]
- van Wassenhove, V. , & Grzeczkowski, L. (2015). Visual‐induced expectations modulate auditory cortical responses. Frontiers in Neuroscience, 9, 11. 10.3389/fnins.2015.00011 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Vercillo, T. , Burr, D. , & Gori, M. (2016). Early visual deprivation severely compromises the auditory sense of space in congenitally blind children. Developmental Psychology, 52(6), 847–853. 10.1037/dev0000103 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Vetter, P. , Smith, F. W. , & Muckli, L. (2014). Decoding sound and imagery content in early visual cortex. Current Biology, 24(11), 1256–1262. 10.1016/j.cub.2014.04.020 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Voss, P. , Tabry, V. , & Zatorre, R. J. (2015). Trade‐off in the sound localization abilities of early blind individuals between the horizontal and vertical planes. Journal of Neuroscience, 35(15), 6051–6056. 10.1523/JNEUROSCI.4544-14.2015 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Welch, R. B. , & Warren, D. H. (1980). Immediate perceptual response to intersensory discrepancy. Psychological Bulletin, 88(3), 638–667. [PubMed] [Google Scholar]
- Zatorre, R. J. (2007). There's more to auditory cortex than meets the ear. Hearing Research, 229(1–2), 24–30. 10.1016/j.heares.2007.01.018 [DOI] [PubMed] [Google Scholar]
- Zwiers, M. P. , Van Opstal, A. J. , & Cruysberg, J. R. (2001). A spatial hearing deficit in early‐blind humans. The Journal of Neuroscience, 21(9), RC142–RC145. 10.1523/JNEUROSCI.21-09-j0002.2001 [DOI] [PMC free article] [PubMed] [Google Scholar]
Data Availability Statement
The dataset presented in this study is available in the online Zenodo repository at the following link: https://doi.org/10.5281/zenodo.7108692.
