Abstract
Individuals who have been deaf since early life may show enhanced performance at some visual tasks, including discrimination of directional motion. The neural substrates of such behavioral enhancements remain difficult to identify in humans, although neural plasticity has been shown for early deaf people in the auditory and association cortices, including the primary auditory cortex (PAC) and superior temporal sulcus (STS) region, respectively. Here, we investigated whether neural responses in the auditory and association cortices of early deaf individuals are reorganized to be sensitive to directional visual motion. To capture direction-selective responses, we recorded functional magnetic resonance imaging (fMRI) responses frequency-tagged to the 0.1-Hz presentation of central directional (100% coherent random dot) motion, persisting for 2 s and contrasted with 8 s of non-directional (0% coherent) motion. We found direction-selective responses in the STS region in both deaf and hearing participants, but the extent of activation in the right STS region was 5.5 times larger for deaf participants. Minimal but significant direction-selective responses were also found in the PAC of deaf participants, both at the group level and in five out of six individual deaf participants. In response to stimuli presented separately in the right and left visual fields, the relative activation across the right and left hemispheres was similar in both the PAC and STS region of deaf participants. Notably, the enhanced right-hemisphere activation could support the right visual field advantage reported previously in behavioral studies. Taken together, these results show that the reorganized auditory cortices of early deaf individuals are sensitive to directional motion. Speculatively, these results suggest that auditory and association regions can be remapped to support enhanced visual performance.
Introduction
The absence of sensory input from one modality early in life has been linked to enhancement of the remaining senses. Accordingly, congenitally deaf individuals have been shown to perform better than hearing individuals at some visual tasks (e.g., Parasnis & Samar, 1985; Neville & Lawson, 1987; Loke & Song, 1991; Dye et al., 2009; Bottari et al., 2010; Shiell, Champoux & Zatorre, 2014). For instance, enhanced detection and discrimination of directional visual motion has been reported in early deaf people (Hauthal et al., 2013; Shiell, Champoux & Zatorre, 2014; in the right visual field only: Neville & Lawson, 1987; Bosworth & Dobkins, 1999; Bosworth et al., 2013). From an ecological perspective, the daily importance of visual motion may be increased for deaf individuals, especially for monitoring the peripheral visual field, e.g., when using sign language (Codina et al., 2017). However, for other potentially useful visual tasks, no differences, or even decreases, in performance have been reported across deaf and hearing people (for reviews on this controversy, see Parasnis, 1983; Bavelier, Dye & Hauser, 2006; Mitchell & Maslin, 2007; Pavani & Bottari, 2012). The prevailing hypothesis for these differences is neural plasticity, i.e., the recruitment of brain areas that normally process the deprived sense, or the reorganization of areas that process the remaining senses or engage in multisensory integration. Neural plasticity is thought to support compensatory behavioral abilities, but only when the underlying functional organization of the incoming sense is compatible with those areas (e.g., Pascual-Leone & Hamilton, 2001; Bola et al., 2017). However, the capacity of neural plasticity in early deaf individuals to support behavioral advantages in visual tasks, including those involving motion, has not been clearly demonstrated.
Extensive neural plasticity has been reported in deaf individuals’ responses to visual motion. Most strikingly, several human neuroimaging studies have reported activation in the primary auditory cortex (PAC) of deaf participants in response to moving or flickering visual stimuli, most often presented in or towards the visual periphery (peripheral moving dot pattern: Finney et al., 2001; flickering patch of a full-field luminance grating: Finney et al., 2003; peripheral moving dot pattern: Fine et al., 2005; flickering point-lights in the right visual field: Scott et al., 2014). Beyond the auditory cortex, in the multisensory superior temporal sulcus (STS) region (a term used here to include the superior temporal sulcus and adjacent cortex of the superior and middle temporal gyri and angular gyrus: Allison, Puce & McCarthy, 2000), a trend towards higher activation, and a significantly more pronounced attentional enhancement, has been shown in deaf people in response to visual dot motion (Bavelier et al., 2001). In the study by Scott and colleagues (2014), a larger area of activation around the STS was reported in deaf participants, including the posterior superior and middle temporal gyri. Changes in responsiveness to peripherally presented visual motion or flickering stimuli have also been reported in human visual area hMT+: increased (left-hemisphere) activation and/or extent of activation has been reported in deaf people (Bavelier et al., 2000; Bavelier et al., 2001; Scott et al., 2014; but see also Fine et al., 2005). To a lesser degree, other areas implicated in cross-modal neuroplasticity for motion or flicker in the early deaf include the posterior parietal cortex, anterior cingulate, and frontal/supplementary eye fields (Bavelier et al., 2001; Scott et al., 2014).
Again, however, the relationship between such neural plasticity in early deaf people and behavioral advantages in visual motion detection or discrimination has not been well documented. Evidence from animal studies suggests a causal link: reversible deactivation of the auditory cortex in congenitally deaf cats abolished their behavioral advantages at visual localization and movement detection (Lomber, Meredith & Kral, 2010; see also Meredith et al., 2011). Yet for humans, only non-invasive, correlative evidence has been provided. Structurally, for example, correlations have been found in deaf individuals between the relative amount of auditory cortex (planum temporale) or visual cortex (V1) devoted to processing peripheral motion and behavioral performance in motion detection tasks (auditory: Shiell, Champoux & Zatorre, 2016; visual: Levine et al., 2015). Suggestive evidence has also come from demonstrations that reorganized brain regions in early deaf individuals respond selectively in visual tasks for which there is behavioral enhancement. For example, four cardinal locations of visual stimuli could be decoded with neuroimaging from the auditory cortex of deaf individuals, suggesting that representations in the auditory cortex align with those in the visual cortex (Almeida et al., 2015). Here, we aim to add to these findings by asking whether deaf individuals’ enhanced speed and/or accuracy at discriminating the direction of visual motion could be supported by direction-selective responses in brain areas evidencing neural plasticity.
Directional visual motion is a particularly salient visual stimulus and is known to selectively activate a subset of the areas in the neurotypical human brain that respond to visual motion more generally. Strong direction-selective responses have been found in human visual area hMT+/V5 (e.g., Tootell et al., 1995; Morrone et al., 2000; Huk et al., 2001). Other implicated areas include V3/V3A and, to a lesser extent, the rest of V1-V4 (Huk et al., 2001; Tootell et al., 1995; Ales & Norcia, 2009, finding large effects also in V1 with EEG source imaging). The representation of directional motion within these cortical areas was first revealed by single-cell recordings in monkeys, reporting columnar direction-tuning (Hubel & Wiesel, 1961; Dubner & Zeki, 1971; Albright, 1984; Felleman & Van Essen, 1987). Unfortunately, due to its fine spatial scale, such direction-tuning cannot be studied non-invasively in humans, and direction-specific representations in humans have thus remained elusive (see Kamitani & Tong, 2006 for a potential exception, but also Beckett et al., 2012; for axis-of-motion mapping at 7 Tesla, see Zimmermann et al., 2011). Despite this, direction-selective areas may be identified in the human brain with functional magnetic resonance imaging (fMRI) using stimulus presentation techniques such as contrasting directional (i.e., coherent) motion with directionless (i.e., non-coherent) motion or dynamic noise (Beauchamp, Cox & DeYoe, 1997; Braddick et al., 1997; Morrone et al., 2000; in electro/magnetoencephalography: Tyler & Kaitz, 1977; Lam et al., 2000; Nakamura et al., 2003; Ales & Norcia, 2009; Palomares et al., 2012).
Here, we used a sensitive approach to investigate the spatial extent and activation of direction-selective brain regions in early deaf and hearing people, focusing on the auditory and association cortices (the PAC and STS region, respectively) in comparison with visual area hMT+. Specifically, we used fMRI together with a frequency-tagging approach (e.g., Bandettini et al., 1993; Puce et al., 1995; Engel et al., 1997; Morrone et al., 2000; Ernst et al., 2013; Koenig-Robert et al., 2015; Gao et al., 2017) to identify periodic changes from non-coherent to directional random-dot motion (Morrone et al., 2000; see also Atkinson et al., 2008; Ales & Norcia, 2009; Palomares et al., 2012). We were thus able to acquire signals with a high signal-to-noise ratio that did not depend on a hemodynamic response function model. By using a contrast of directionless to directional motion, we were also able to capture direction-selective responses (note that these responses are not direction-specific) locked precisely to the frequency of coherence onset. To follow up on a behavioral advantage for direction discrimination typically reported in the right visual field for deaf individuals (Neville & Lawson, 1987; Bosworth & Dobkins, 1999; Bosworth et al., 2013), we presented visual stimuli in the left and right visual fields as well as centrally. When activation was found, we further explored potential qualitative differences across hearing and deaf individuals in terms of spatial extent, right vs. left visual field response, and hemispheric lateralization. Together, these comparisons allowed us to assess the potential neural bases of the enhanced visual motion processing reported in previous studies of early deaf people.
Experimental Methods
Participants
Two groups of participants, early deaf and hearing controls, were tested in the experiment, which was approved by the Institutional Review Board of the University of Nevada, Reno, and conducted in accordance with the Code of Ethics of the World Medical Association (Declaration of Helsinki). Each group consisted of six adults, recruited from northern Nevada and California. All deaf participants had severe to profound sensorineural hearing loss from an early age; none could understand auditory speech, but all were proficient in sign language (see Table 1 for deaf participants’ details). The mean age of deaf participants was 36 years (SD: 8.2; range: 26 to 49); the mean age of hearing participants was 33 years (SD: 8.5; range: 26 to 48). Four of the hearing and one of the deaf participants were male; one hearing participant was left-handed. All participants were unaware of the experimental design, except for one hearing participant, who was author TR. All participants reported visual acuity in the normal or corrected-to-normal range.
Table 1.
Demographic information for the early deaf participants.
| Participant | Age (years) | Sex | Handedness | Deafness acquisition | Cause of deafness | Hearing loss, left; right (dB) | Signing acquisition |
|---|---|---|---|---|---|---|---|
| D1 | 41 | F | R | 12 months | unknown | 95; 95 | 12 years |
| D2 | 31 | M | R | 15 months | fever | total; 85 | 15 months |
| D3 | 49 | F | R | Birth | maternal gestational measles | 100; 90 | 11 years |
| D4 | 26 | F | R | Birth | genetic (Cx26) | 85; 85 | < 1 year |
| D5 | 34 | F | R | Birth | hereditary | 80; 70 | < 1 year |
| D6 | 32 | F | R | Birth | unknown | 98; 96 | 1 year |
(f)MRI acquisition
(f)MRI scanning was performed with a 3 T Philips Ingenia scanner using a 32-channel digital SENSE head coil (Philips Medical Systems, Best, Netherlands) at the Renown Regional Medical Center. Volumetric anatomical images were acquired at a resolution of 1 mm3 using a T1-weighted magnetization-prepared rapid gradient echo (MPRAGE) sequence. Functional blood-oxygen-level-dependent (BOLD) images were acquired in a continuous design at a resolution of 2.75 × 2.75 × 3 mm voxels, with no gap. A repetition time (TR) of 2 s was used to acquire 30 transverse slices in ascending order, with an echo time (TE) of 17 ms, a flip angle of 76°, and a 220 × 220 mm2 field of view.
Visual motion stimuli
Visual motion was displayed with random dot kinematograms (RDKs; Anstis, 1970; Julesz, 1971; Braddick, 1974), based on the incremental displacement of individual dots within a circular field across monitor refresh frames. Frames of white dots against a black background were generated with a custom script running over MATLAB, refreshing at a rate of about 60 Hz, with a 500 ms dot lifespan to discourage participants from tracking the movement of individual dots. Given some inconsistency in presentation timing due to online drawing of dot positions, the motion display was adapted for precise periodic stimulus presentation by exporting the generated dot motion frames and then displaying them at a precisely controlled periodic rate of 60 Hz using custom software running over Java. Viewed on the testing monitor, the stimulus field diameter subtended 8.5° of visual angle, with individual dots subtending 1.35’ in diameter, moving at a speed of 3.4°/s, at a density of 12.5 dots/°.
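For illustration, the frame-generation logic can be sketched as follows (a minimal reconstruction, not the original script: the interpretation of the dot density as 12.5 dots per square degree, the redraw rule for expired dots, and all variable names are our assumptions):

```matlab
% Minimal RDK frame-generation sketch (reconstruction, not the original script).
refreshHz   = 60;                                 % display refresh rate (Hz)
fieldDeg    = 8.5;                                % stimulus field diameter (deg)
speedDegSec = 3.4;                                % dot speed (deg/s)
lifeFrames  = round(0.5 * refreshHz);             % 500 ms dot lifespan, in frames
nDots       = round(12.5 * pi * (fieldDeg/2)^2);  % assumes 12.5 dots/deg^2 (~709 dots)
stepDeg     = speedDegSec / refreshHz;            % displacement per frame (deg)
dirRad      = 0;                                  % 0 rad = rightward motion

% Uniform random starting positions within the circular field, with random ages
ang = 2*pi*rand(nDots,1);  rad = (fieldDeg/2)*sqrt(rand(nDots,1));
x = rad.*cos(ang);  y = rad.*sin(ang);
age = randi(lifeFrames, nDots, 1);

frames = cell(1, 1800);                           % 30 s of frames at 60 Hz
for f = 1:1800
    x = x + stepDeg*cos(dirRad);                  % 100% coherent displacement
    y = y + stepDeg*sin(dirRad);
    age = age + 1;
    redraw = age > lifeFrames | hypot(x, y) > fieldDeg/2;
    n = nnz(redraw);                              % expired or out-of-field dots
    a2 = 2*pi*rand(n,1);  r2 = (fieldDeg/2)*sqrt(rand(n,1));
    x(redraw) = r2.*cos(a2);  y(redraw) = r2.*sin(a2);
    age(redraw) = 1;
    frames{f} = [x, y];                           % dot coordinates for this frame
end
% Rotating the [x, y] coordinates in 90-deg increments yields the downward,
% leftward, and upward sequences; 0% coherent frames instead use an
% independent random direction for each dot.
```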
We created directional stimulus sequences in four directions (up, right, down, and left) as well as non-directional, non-coherent dot motion. To create the directional sequences, a 0.5-s piece of 30 sequential stimulus frames creating the appearance of 100% coherent rightward visual motion was extracted from a generated 30-s sequence of 1,800 frames. The rightward stimulus frames were rotated in increments of 90° to create sequences of downward, leftward, and upward motion, respectively, with minimal variation across directions. In the functional scans, these sequences each repeated four times in immediate succession, leading to a block of 2 s of directional motion. To create sequences of non-directional motion, 0.5-s pieces of 100% non-coherent motion stimulus frames were similarly extracted from 30-s sequences; to fill the longer duration of non-directional relative to directional motion in the testing sequences with consistent stimulus update intervals, this procedure was repeated three additional times. Note that these 0.5-s sequence pieces also served as “incoherent jumps” to prevent a specific confound of full dot replacement at the onset and offset times of directional and non-directional motion (see the following section; Wattam-Bell, 1991; Braddick et al., 2005). In the functional scans, these four non-directional sequence sets were each repeated four times in immediate succession, defining a block of 8 s of non-directional motion. Participants viewed the stimulation monitor with a mirror attached to the MR head coil.
Periodic visual stimulation procedure
Functional scans consisted of periodic alternation between directional and non-directional motion over a duration of 5.1 min. Scans began with 2 s of a white fixation cross on a black background, followed by a 2-s fade-in period, in which stimulus luminance contrast gradually increased to 100%. Stimuli were then shown over a duration of 300 s in a fixed pattern of 2 s of directional motion followed by 8 s of non-directional motion. Periods of directional motion thus onset every 10 s, leading to a direction-selective frequency-tagged rate of 1/10 s, i.e., 0.1 Hz (Figure 1A). Within each scan, the direction of motion also consistently alternated at each presentation cycle, e.g., from up to down motion, leading to a direction-specific frequency-tagged rate of 0.05 Hz. Finally, the scans ended with 2 s of stimulus fade-out and 2 s of the white fixation cross. Four participants from each of the deaf and hearing groups saw contrasts of up/down and left/right motion (trial lists 1 and 2), and the remaining two participants of each group saw contrasts of up/left and right/down motion (trial lists 3 and 4). Since no clusters of significant responses to direction-specific motion at 0.05 Hz were found for any participants in any trial list, data were combined across trial lists within each group in order to examine the direction-selective response at 0.1 Hz.
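The tagging logic can be checked with a short simulation (our illustration, not part of the original pipeline): a 2-s directional / 8-s non-directional cycle, sampled at the 2-s TR used here, produces a spectral peak at exactly 0.1 Hz.

```matlab
% Illustrative check (not from the original pipeline) of the tagged design.
TR = 2;  nVol = 150;                          % 150 volumes = 300 s of stimulation
design = repmat([1 0 0 0 0], 1, 30);          % 1 = directional volume, per 10-s cycle
freqs  = (0:nVol-1) / (nVol*TR);              % frequency axis; resolution 1/300 Hz
amp    = abs(fft(design - mean(design)));     % DC-corrected amplitude spectrum
[~, k] = max(amp(2:floor(nVol/2)+1));         % largest non-DC, one-sided peak
fprintf('Design peak at %.3f Hz\n', freqs(k+1));   % prints 0.100 Hz
```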
Figure 1.

(A) Stimulation sequences consisted of 2 s of directional (100% coherent dot) visual motion followed immediately by 8 s of non-directional (0% coherent dot) motion. The onset of directional motion thus occurred periodically every 10 s, predicting a direction-selective response in the frequency domain at 0.1 Hz (i.e., 1/10 s). The arrows drawn on the figure are purely for illustrating the direction of dot motion. (B) Top: An example of the BOLD response recorded by fMRI from a single voxel in visual area hMT+ from a hearing participant, averaged over four runs of visual motion presented in the central visual field (CVF) and DC-corrected. Its location is illustrated on the sagittal slice of this participant’s anatomy in Talairach space. Bottom: A fast Fourier transform (FFT) is applied to each voxel to transform the data into the temporal frequency domain. This example voxel is sensitive to directional motion, as evidenced by the high-amplitude BOLD signal at the 0.1-Hz response peak.
Visual field conditions
Scans designed to localize brain regions responding to visual motion contained stimuli presented in a central visual field (CVF) condition. In two additional scan conditions designed to measure the amplitude of brain activation, stimuli were presented in either the right or left peripheral visual field (RVF; LVF). In the CVF condition, stimuli were presented in the center of the stimulation monitor together with a superimposed central fixation cross. From a viewing distance of 134 cm, the monitor supported a field of view of 29 × 17°. Thus, when presented in the right or left peripheral visual field conditions, the stimulus was translated laterally to the edge of the monitor and the fixation cross shifted laterally 4° from center in the opposite direction, so that the distance between the proximal edge of the stimulus and fixation cross subtended 10° (e.g., Jiang et al., 2015). Four scan repetitions of each condition were presented sequentially in order to discourage participants from moving their heads as stimulus location changed. Each participant was presented with every condition, leading to twelve scans for a total testing time of about 1 hour. In odd trial lists, scans began with stimuli presented in the CVF, while in even trial lists stimuli were first presented in one of the peripheral visual fields. The order of conditions was identical across participant groups.
Behavioral task
Participants were instructed to fixate on the centrally presented white fixation cross. The cross changed shape to a circle for a duration of 200 ms at random intervals (a minimum of 800 ms in between changes) 30 times within each scan, i.e., once about every 10 s. Participants were asked to use a response box to report the direction of motion at the time of the fixation shape change. This task was designed to facilitate participants’ fixation, as well as to encourage attention to the direction of motion of the stimulus.
(f)MRI data analysis
Preprocessing
Anatomical and functional data were analyzed with BrainVoyager v20.0 and the BVQXTools toolbox (Brain Innovation B.V., Maastricht, The Netherlands) together with MATLAB R2013b (MathWorks, Natick, MA). Functional scan data were imported into BrainVoyager and preprocessed with corrections for slice scan time and 3D motion (aligned to the first functional scan for inter-session alignment). They were temporally filtered with linear detrending; no spatial smoothing was applied. Anatomical scans, similarly imported into BrainVoyager, were subjected to an isotropic voxel transformation and aligned according to the standard anterior and posterior commissure points. For display across participants, data were transformed into a conventional Talairach space (Talairach & Tournoux, 1988). Functional scans were co-registered to each participant’s corresponding anatomical images. Initial alignment was fine-tuned through an affine transformation and minimally corrected with visual inspection. Spatial normalization of the functional data was applied through a volume time course (VTC) transformation.
Frequency-domain processing
The VTC files of each functional scan were imported into MATLAB for frequency-domain analyses. They were cropped to 150 volumes of 2 s each, containing exactly 30 presentation cycles of 0.1-Hz directional motion and excluding the first and last two volumes, which corresponded to the fixation cross and fade-in/out presentation. The BOLD data of each participant from the four scans per condition were averaged in the time domain in order to reduce noise from non-phase-locked activation, i.e., from activation not driven by the periodic stimulus presentation. A DC correction was applied to remove the mean signal offset, and the data were transformed into a normalized amplitude spectrum through a fast Fourier transform (Figure 1B). The resulting BOLD amplitude spectrum covered a range of 0 to 0.25 Hz with a frequency resolution of 0.0033 Hz. For each frequency bin, x, a baseline range was defined as the 20 surrounding frequency bins, encompassing a range of about 0.07 Hz centered on x. To assess the significance of the 0.1-Hz response at each voxel during CVF scans, z-scores were generated by subtracting the mean baseline value from x and dividing the result by the standard deviation of the baseline. To display BOLD response amplitudes in predetermined regions (see section below) during RVF and LVF scans, baseline-subtracted amplitude values were similarly generated by subtracting the mean baseline value from x (e.g., Retter & Rossion, 2016). The resulting files were reimported into BrainVoyager for display.
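This per-voxel analysis can be condensed as in the following sketch (our reconstruction: the stand-in data and the split of the 20 baseline bins into 10 on each side of the tagged bin, which is excluded, are assumptions):

```matlab
% Sketch of the per-voxel frequency-domain analysis (reconstruction).
TR = 2;  nVol = 150;
bold  = randn(1000, nVol);                    % stand-in for [nVoxels x 150] VTC data
Y     = bold - mean(bold, 2) * ones(1, nVol); % DC correction per voxel
A     = abs(fft(Y, [], 2)) / nVol;            % normalized amplitude spectrum
freqs = (0:nVol-1) / (nVol*TR);               % resolution 1/300 s = 0.0033 Hz
[~, tag] = min(abs(freqs - 0.1));             % the 0.1-Hz (direction-selective) bin

% Baseline: 20 surrounding bins (assumed 10 per side, excluding the tagged
% bin), spanning about 0.07 Hz.
base = [tag-10:tag-1, tag+1:tag+10];
mu = mean(A(:, base), 2);
sd = std(A(:, base), 0, 2);
z  = (A(:, tag) - mu) ./ sd;                  % z-score per voxel at 0.1 Hz
baselineSubtracted = A(:, tag) - mu;          % amplitudes for RVF/LVF displays
```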
Regions-of-interest
Given previously reported findings of neural plasticity in the primary and association auditory cortices in deaf individuals (e.g., Finney et al., 2001; Finney et al., 2003; Fine et al., 2005; Karns et al., 2012; Scott et al., 2014), and direction-selective responses in visual area hMT+ (e.g., Tootell et al., 1995; Morrone et al., 2000; Huk et al., 2001), we a priori focused our analyses on the primary auditory cortex (PAC), the superior temporal sulcus (STS) region (potentially including part of the superior temporal gyrus/planum temporale, middle temporal gyrus, and angular gyrus; Allison, Puce & McCarthy, 2000; see also Scott et al., 2014 for activation in deaf participants), and hMT+.
To define the STS region and hMT+, we used a functional cluster-based criterion based on direction-selective responses at 0.1 Hz to motion presented in the CVF (clusters > 150 voxels). Significance thresholding was applied at the individual participant level (range: Z > 2.6 to Z > 5.7), in order to approximately equalize the number of significant voxels across commonly active regions, including hMT+ (6 deaf and 6 hearing, in at least one hemisphere), the STS region (6 deaf and 6 hearing), early visual areas (6 deaf and 6 hearing), and the lateral occipital complex (5 deaf and 6 hearing). In relevant cases, the threshold was increased for two regions, applied bilaterally, in order to spatially separate them. The mean total voxel number across participants after thresholding was 15,138 voxels and did not differ significantly across groups (deaf: M = 13,636, SE = 1,422; hearing: M = 16,641, SE = 1,583; t = 1.41, p = .19, d = 0.73, two-tailed).
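The cluster criterion can be sketched as follows (our reconstruction; the use of bwconncomp, which requires the Image Processing Toolbox, and 26-voxel connectivity are assumptions, and zmap is a stand-in for a participant’s 3-D z-map):

```matlab
% Sketch of the cluster-size criterion (reconstruction; requires the
% Image Processing Toolbox).
zmap = randn(64, 64, 30);                 % stand-in for a participant's z-map
zThresh = 3.1;                            % participant-specific (Z > 2.6 to 5.7)
mask = zmap > zThresh;                    % suprathreshold voxels
cc = bwconncomp(mask, 26);                % 26-connected 3-D clusters (assumed)
sizes = cellfun(@numel, cc.PixelIdxList); % voxel count per cluster
roi = false(size(mask));                  % keep only clusters > 150 voxels
for c = find(sizes > 150)
    roi(cc.PixelIdxList{c}) = true;
end
nSignificantVoxels = nnz(roi);
```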
In a separate analysis, we defined the PAC, a region that cannot be functionally defined in deaf participants, using the Jülich probabilistic atlas in the SPM Anatomy Toolbox (Eickhoff et al., 2005; 2006). Following a procedure described in Eickhoff et al. (2006), we included the volume assignment of all PAC subregions (Morosan et al., 2001) in the summary map of all areas (the maximum probability map, MPM). This procedure ensured no overlap between any two cytoarchitectonically defined areas. The PAC ROI was then transformed to Talairach space and applied to each participant’s brain volume. It was further separated into the left and right PAC for each participant.
Statistical tests
For the functionally defined ROIs, i.e., the STS region and hMT+, we investigated whether there were significant differences in the spatial extent and amplitude of activation between the deaf and hearing participant groups. The spatial extent and amplitude of the STS region and hMT+ were compared across groups with non-parametric Mann-Whitney tests, given the relatively small sample size. To compare differences in spatial extent, the number of significant voxels was used. To compare amplitude differences in these ROIs for stimuli presented in the left and right visual fields, baseline-subtracted amplitude values at 0.1 Hz for each participant were averaged across voxels within their individually defined ROIs for the LVF and RVF responses separately. When a cluster-based ROI could not be defined in one hemisphere (STS: 2 deaf and 1 hearing in the left hemisphere; hMT+: 1 deaf participant in the left hemisphere), no corresponding amplitude values were included in the analysis.
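These group comparisons can be illustrated as follows (a sketch with stand-in data, not the measured ROI values; ranksum, MATLAB’s Mann-Whitney rank-sum test, requires the Statistics Toolbox, and the U statistic is recovered from the rank-sum statistic):

```matlab
% Sketch of the across-group comparison (stand-in data; requires the
% Statistics Toolbox). Values are illustrative, not the measured ROI sizes.
deafVox    = randi(5000, 6, 1);               % stand-in voxel counts, deaf group
hearingVox = randi(5000, 6, 1);               % stand-in counts, hearing group
[p, ~, stats] = ranksum(deafVox, hearingVox); % Mann-Whitney (rank-sum) test
U = stats.ranksum - numel(deafVox)*(numel(deafVox)+1)/2;  % U from rank sum
fprintf('U = %g, p = %.3f\n', U, p);
```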
For the probabilistically defined ROI, i.e., the PAC, we investigated whether there were significant responses in the deaf and/or hearing participants. To determine response significance in the PAC ROI, an amplitude spectrum was computed from the BOLD responses to motion presented in the CVF, averaged over all bilateral PAC voxels across participants in each group. Z-scores were then calculated on this averaged spectrum with a threshold of p < .001 (Z > 3.10) for significance in this sensitive group-level analysis. Given some debate about whether PAC responses occur only as a result of group-level averaging (e.g., as shown in Finney et al., 2001; but see Scott et al., 2014), significance was also assessed similarly at the individual participant level, with the typical threshold of p < .05 (Z > 1.64). To compare the number of significantly direction-selective voxels across the PAC and the STS region, the PAC ROI was thresholded at the individual participant levels defined previously for demarcating the STS region (i.e., encompassing a range of Z > 2.6 to Z > 5.7).
Results
1. The superior temporal sulcus region and visual middle temporal complex
The centrally presented visual motion trials were used to localize direction-selective responses in deaf and hearing individuals. These responses were frequency-tagged at 0.1 Hz, i.e., the rate at which directional (100% dot coherence) motion onset, persisting for 2 s, immediately following 8 s of directionless (0% dot coherence) motion.
1.1. Direction-selective responses are more extensive in the right STS region for deaf participants
The area of the STS region was 5.5 times larger in deaf than hearing individuals in the right hemisphere (deaf: M = 3,591 mm3, SE = 596.2; hearing: M = 653 mm3, SE = 22.6), with no pronounced differences in the left hemisphere (deaf: M = 294 mm3, SE = 164.8; hearing: M = 315 mm3, SE = 124.8; Figure 2A). Statistically, this led to a significant difference in the extent of STS region activation across participant groups in the right STS region only: U = 1, p = .004; left STS: U = 36, p = .70.
Figure 2.

The size of the STS region and hMT+ ROIs in deaf and hearing individuals. In the center, a sagittal slice (right hemisphere; Talairach x = 46) provides an example of the location of these regions defined in a single hearing participant; the STS region is drawn in green, being dorsal and slightly more anterior relative to hMT+, drawn in blue/purple. (A) The extent of activation in the STS region in deaf and hearing individuals (average [Avg.] across groups plotted on the far right, with error bars representing ± 1 SE), in each of the left and right hemispheres. (B) The extent of activation in hMT+, plotted as in A.
The STS region in the right hemisphere was centered at Talairach x = 54, y = −42, z = 9 for deaf participants, and x = 55, y = −39, z = 16 for hearing participants (for individual regions, see Figure 3). The location of the STS region was particularly reliable in the right hemisphere for deaf participants: the range of its center Talairach x coordinates (see Figure 3) was x = 52 to 58 (SE = 0.84) for deaf participants, compared to x = 48 to 66 (SE = 3.07) for hearing participants (in the left hemisphere, the range was x = 45 to 61 (SE = 3.45) for deaf and x = 50 to 65 (SE = 2.96) for hearing participants).
Figure 3.

The STS region ROIs in the anatomy of deaf and hearing individuals, in Talairach space. Data are thresholded with individually defined z-score values (see the scale in the top right corner). These sagittal slices are centered around the functionally defined ROI for each hemisphere; in three cases where a functional ROI could not be defined in one hemisphere, the x coordinate mirrors that of the other hemisphere (and is written in italics).
The area of hMT+ did not appear to differ greatly across participant groups, although hMT+ appeared larger in the right hemisphere (deaf: M = 1,847 mm3, SE = 183.3; hearing: M = 1,304 mm3, SE = 321.2) than in the left (deaf: M = 1,033 mm3, SE = 365.9; hearing: M = 1,335 mm3, SE = 291.1) for deaf participants only (Figure 2B). Statistically, however, there was no significant difference in the extent of hMT+ activation across deaf and hearing participants in either the right (U = 11, p = .31) or the left hemisphere (U = 14, p = .59). In sum, the only significant difference found between deaf and hearing participants in terms of area of activation was the greater extent of the right STS region for deaf participants.
1.2. Responses in the left vs. right visual fields: a right visual field STS region bias for deaf participants
The amplitudes of responses within the STS region and hMT+ ROIs identified previously were used to quantify the response to visual motion presented in separate trials in the left visual field (LVF) and right visual field (RVF). We expected that the larger size of the right STS region, found only in deaf individuals, might be accompanied by enhanced activity in response to visual stimuli presented in the LVF. Instead, the results showed the opposite: STS region responses of deaf individuals were of larger amplitude for stimuli presented in the RVF than in the LVF (Figure 4A). Indeed, for stimuli presented in the RVF, there was a significantly higher response for deaf than hearing participants in the right STS, U = 0, p = .002, which only neared significance in the left STS, U = 2, p = .063. In contrast, for stimuli presented in the LVF, there were no significant differences across participant groups: right STS: U = 9, p = .18; left STS: U = 9, p = .91.
Figure 4.

Amplitude values (baseline-subtracted) of the frequency-domain analysis of periodic BOLD signal changes to directional motion at 0.1 Hz. Data are reported for deaf and hearing participants in response to visual stimuli presented in the LVF and RVF, in the left (LH) and right (RH) hemispheres. Results are shown in the ROIs defined previously for the STS region (A) and hMT+ (B). Group data are plotted as bar graphs (error bars plotting ± 1 SE), and individual data are plotted as superimposed dots (deaf, filled; hearing, unfilled); each participant is plotted in a consistent color across plots.
In hMT+, the pattern of amplitude responses to stimuli in the left and right visual fields appeared highly similar across the left and right hemispheres for deaf and hearing participants (Figure 4B). This pattern was described by a contralateral visual field-to-hemisphere advantage, particularly for the RVF/left hemisphere (see Figure 4B). Across participant groups, however, there were no significant differences for stimuli presented in either the RVF (right hMT+: U = 12, p = .39; left hMT+: U = 14, p = .93) or the LVF (right hMT+: U = 15, p = .70; left hMT+: U = 15, p = 1.0). Overall, there were thus no differences between deaf and hearing participants in hMT+ responses to directional motion in the left or right visual fields, but deaf participants had more activation than hearing participants in the right STS region for stimuli presented in the RVF.
2. The primary auditory cortex
The PAC was defined with a probabilistic atlas for both deaf and hearing participants. To determine response significance in this region, an amplitude spectrum was derived from the averaged BOLD responses to motion presented in the CVF of all PAC voxels across participants for each group.
2.1. Significant but minimal direction-selective responses in the PAC for deaf participants
At the individual participant level, direction-selectivity at 0.1 Hz was evident in five out of six deaf participants in the bilateral PAC (Z’s ranging from 1.79 to 3.19, p’s < .05). At the group level, a significant direction-selective response emerged at 0.1 Hz for the deaf participants across the bilateral auditory cortex (Z = 5.80, p < .0001; right PAC: Z = 7.78; left PAC: Z = 3.40; Figure 5A).
Figure 5.

Responses to directional visual motion activated the PAC of deaf individuals. (A) At the group level, areas of activation in deaf participants’ temporal lobes encompassed the probabilistic area of the auditory cortex (shown in light blue on the standard Colin 27 brain; data at p < .001). Moreover, (B) significant responses to direction-selective motion at 0.1 Hz were found in the PAC (shaded in light blue) at the individual level, although the area of activation was small relative to the STS region: z-scores of three deaf individuals are shown here at the same threshold levels used to define their individual STS ROIs (D2: z > 4.57; D3: z > 5.7; D4: z > 2.6). (C) The pattern of activation in the left and right PAC of deaf participants in response to visual motion in the LVF and RVF was similar to that of their STS region (compare with Figure 4A). Again, group data are plotted as bar graphs (error bars plotting ± 1 SE), and individual data are plotted as superimposed dots; colors are consistent across plots and the labeling in B. L = left; R = right.
A direction-selective response was not found in five out of six hearing participants (all Z’s < 1.34, p’s > .05); however, a significant response was found in the one hearing participant who was not naïve to the experimental design (Z = 4.36, p < .0001). When including all six participants of the hearing group, the direction-selective response in the bilateral PAC reached significance at a threshold of p < .05 (bilaterally: Z = 2.32, p = .010; right PAC: Z = 1.88; left PAC: Z = 2.27); when removing the non-naïve participant, the PAC response was not significant in the hearing group (bilaterally: Z = 1.39, p = .082).
Despite significant responses in the PAC of deaf participants, the extent of direction-selective responses was minimal, with significant voxels subtending only 14.1% of the bilateral PAC area (at Z > 3.10, p < .001) for deaf participants at the group level (i.e., from grand-averaged amplitude spectra; see Methods). When the lowest and highest Z-score thresholds applied for individual participants (Z > 2.6 to Z > 5.7) were applied to the group-level data, the percent of significant bilateral PAC area ranged from 21.8% to 0.67%, respectively. To put this in perspective, this area was more than 28 times smaller than the extent of activation in the bilateral STS region of deaf participants when defined at the same significance thresholds for individual participants (see Figure 5B; mean PAC area: 1.11%, SE = 0.43). In sum, significant PAC responses were present but minimal in the deaf participant group.
2.2. Responses in the left vs. right visual fields: the PAC hemispheric activation mirrors the STS
The BOLD amplitudes of direction-selective responses to visual motion presented in the right or left visual field were investigated, as in section 1.2 (Figure 5C). The resulting pattern of activation across hemispheres in the PAC was reminiscent of that of the STS region (compare Figure 5C with Figure 4A). Note that the large amplitude differences across the STS region and PAC are not directly comparable, due to the different methods used to define these regions (i.e., the STS region was defined functionally to include only significant voxels, while the PAC was defined as all voxels within a predefined region).
Discussion
We used an fMRI frequency-tagging approach to identify direction-selective brain regions in early deaf and hearing people, investigating the spatial extent of their activation (in response to stimuli presented in the central visual field) and the amplitude of their activation (in response to stimuli presented separately in the left and right visual fields). We focused our analysis on the PAC and the associative STS region, in comparison with visual area hMT+. We predicted that direction-selective responses would be found in the PAC and STS region, in line with the enhanced behavioral abilities reported for early deaf individuals in discriminating and/or detecting directional visual motion (Neville & Lawson, 1987; Bosworth & Dobkins, 1999; Bosworth et al., 2013; Hauthal et al., 2013; Shiell, Champoux & Zatorre, 2014). Note that we were able to identify direction-selective responses emerging from a contrast of directional vs. non-directional visual motion in our frequency-tagging paradigm. Direction-selective motion responses are more selective than motion-selective responses, but less selective than direction-specific (e.g., leftward-selective) responses. In contrast, previous studies investigating motion-related responses in the early deaf have reported motion-selective, rather than direction-selective, responses (e.g., Finney et al., 2001; Fine et al., 2005).
Direction-selective responses are found in the STS region for both hearing and deaf individuals
To our knowledge, this is the first study showing direction-selectivity for translational visual motion in the human STS region (see Figures 2A and 3), here encompassing the posterior to middle STS, superior temporal gyrus, and middle temporal gyrus (for direction-selectivity with rotational head and ellipsoid motion, see Carlin et al., 2012). The STS region is known to respond to visual (biological) motion in neurotypical humans and non-human animal models (for a review, see Allison, Puce & McCarthy, 2000; see also, e.g., Grossman & Blake, 2001; Noguchi et al., 2005). Moreover, direction-selective tuning of single neurons to visual motion has been reported in the STS region of monkeys (e.g., Zeki, 1978; Bruce, Desimone & Gross, 1981; Oram, Perrett & Hietanen, 1993; Nelissen, Vanduffel & Orban, 2006). The absence of direction-selective STS responses in past human neuroimaging or source localization studies may have several explanations: for example, those studies focused on more traditionally retinotopically defined areas, and there may be differences in activation between the directional/non-directional motion contrast used here and the motion adaptation paradigms favored previously. Note that in previous studies, direction-selective responses were reported only in visual areas V1 through hMT+/V5 and the lateral occipital complex (Huk et al., 2001; Tootell et al., 1995; Ales & Norcia, 2009; Hong, Tong & Seiffert, 2012). Additionally, the frequency-tagging paradigm applied here may have provided methodological advantages, enabling a powerful contrast of directional and non-directional motion and an analysis with a high signal-to-noise ratio that did not rely on a hemodynamic response function model (e.g., Bandettini et al., 1993; Puce et al., 1995; Engel et al., 1997; Morrone et al., 2000; Ernst et al., 2013; Koenig-Robert et al., 2015; Gao et al., 2017).
The direction-selective STS region could be functionally defined in all individual deaf and hearing participants in the right hemisphere, and in five deaf and four hearing participants in the left hemisphere (see Figure 3). It was two to twelve times larger in the right than the left hemisphere, for the hearing and deaf participants, respectively (see Figure 2A). At the group level, in the right hemisphere this region was centered at Talairach coordinates of x = 55, y = −39, z = 16 for hearing participants, and x = 54, y = −42, z = 9 for deaf participants. The localization of the STS region here is similar to that reported in previous studies (e.g., for deaf participants’ responses to visual motion: x = 56, y = −40, z = 8 in Table 5 of Bavelier et al., 2001; for neurotypical participants in response to visual, tactile, and auditory stimuli: x = 52, y = −44, z = 15 in Beauchamp et al., 2008).
This STS region also showed a right hemisphere advantage in terms of response amplitude to stimuli shown in the left and right visual fields, particularly for deaf participants. In contrast, there was no left hemisphere advantage apparent for stimuli shown in the right visual field for either participant group (see Figure 4A). These results are in line with larger responses to visual motion in the right hemisphere generally (e.g., Kubova et al., 1990; Corballis, 2003; Finney et al., 2001; see also Weeks et al., 2000 for an example of right hemisphere dominance to auditory motion in congenitally blind participants), as well as with interhemispheric transfer of visual motion information (Brandt et al., 2000; see also Motter et al., 1987) and previous reports of no contralateral organization in the STS region (e.g., Grossman & Blake, 2001; see also Saygin & Sereno, 2008).
The primary auditory cortex shows direction-selective visual motion responses in early deaf individuals
We discovered significant direction-selective responses to visual motion in a probabilistically defined PAC region in early deaf people (see Figure 5). The extent of this activation was highly dependent on the significance threshold used; at p < .001, it appeared to cover 14.1% of the bilateral PAC for the early deaf group. In comparison with the extent of activation in the STS at the same significance threshold, this area was more than 28 times smaller. Nevertheless, when averaging across all voxels in the bilateral PAC, a significant response emerged for five out of six deaf participants (p < .05).
Responses to visual stimuli were first reported in the PAC of early deaf people in response to peripheral moving dots at the group level (Finney et al., 2001). PAC activation was replicated in the early deaf in response to moving or flickering stimuli, most often in or near the visual periphery (Finney et al., 2003; Fine et al., 2005). Importantly, these results were likely not an effect of group averaging or imprecise PAC definition: a more recent study identified PAC activation, defined anatomically at the individual participant level using the transverse temporal gyrus (also known as Heschl’s gyrus), with flickering point-lights in the right visual field (Scott et al., 2014). In that study, the amount of activation in the PAC was reported only as a comparison of peripherally vs. perifoveally presented flicker, preventing a direct comparison with the extent or amount of activation reported here. Still, our finding that PAC activation is, at least to some extent, direction-selective adds to our knowledge of neural plasticity in this region for early deaf people.
The pattern of PAC activation in response to stimuli presented in the right and left visual fields is highly reminiscent of that of the STS region (see Figures 5C and 4A). One possibility is that the PAC projects information to the STS region, a sensory association area (e.g., Benevento et al., 1977; Seltzer & Pandya, 1978; Seltzer et al., 1996; Hackett et al., 2007; Smiley et al., 2007; see also Beauchamp et al., 2008). Additionally, the STS also projects information back to the superior temporal gyrus (e.g., Barnes & Pandya, 1992, using retrograde tracing in the rhesus monkey), suggesting reciprocal connections and more complex interactions between these regions. Note that the correspondence between the PAC and STS region found here cannot be explained by overlap between these areas: there was no overlap in five deaf participants (0.8% for the remaining participant) and no overlap at the group level in the right hemisphere.
The right STS region is recruited extensively for processing direction-selective visual motion in early deaf individuals
The most striking difference between deaf and hearing individuals in response to directional motion was found in the right STS region, which was 5.5 times larger for deaf than hearing participants (for a 12 times greater extent in the right posterior STS in deaf than hearing participants in response to attended visual motion, see Table 2 of Bavelier et al., 2001). In contrast, no differences in direction-selective responses were found across groups in the left STS region or visual area hMT+ here.
The STS is a likely region for cross-modal reorganization, as it covers an expansive region of the temporal lobe and expresses great functional diversity, containing sub-regions sensitive to auditory, visual, tactile, and multisensory stimuli (e.g., Benevento et al., 1977; Seltzer & Pandya, 1978; Calvert et al., 2000; Beauchamp et al., 2004; Dahl et al., 2009; Beauchamp et al., 2008). The posterior STS receives inputs from both the visual and auditory cortices, while the middle STS normally receives auditory inputs only, at least in the rhesus monkey (Seltzer & Pandya, 1994); in humans, auditory-visual responses have been reported to be largest in the middle STS (Venezia et al., 2017). Congruently, the auditory association cortices have also been implicated in studies of cross-modal plasticity through learned associations in neurotypical individuals (e.g., Meyer et al., 2007; see also Bulkin & Groh, 2006; Ghazanfar & Schroeder, 2006).
In congenitally deaf people, greater connectivity of the middle STS across hemispheres, as well as with the ipsilateral posterior STS, hints at a reorganization of this region in line with cross-modal plasticity (Li, 2013). Specific examples of cross-modal plasticity in the STS region have been reported with regard to how early deaf people process sign language. Early deaf participants have been shown to have increased activation in the middle STS in response to sign language (e.g., Neville et al., 1998; Sadato et al., 2004). Additionally, increased posterior STS activation was shown in deaf signers, but not hearing signers, when performing a velocity task (see Figure 6 of Bavelier et al., 2001). Our report of expansive recruitment of the STS region in early deaf people in response to visual motion thus further confirms a general pattern of neural plasticity in this region.
In hearing individuals, some authors claim that responses to auditory motion are separate from those to auditory localization and rely on the superior temporal gyrus (e.g., Baumgart et al., 1999; Ducommun et al., 2002; Ducommun et al., 2004). However, others claim that the auditory cortex may be selective for spatial locations rather than motion (e.g., Smith et al., 2004). Our findings suggest that, at least in response to visual motion, auditory areas in deaf participants, and association areas in both deaf and hearing participants, are selectively responsive to directional visual motion.
Direction-selective responses to stimuli in the right and left visual fields do not show a contra-lateral bias in the deaf primary auditory and association cortices
Interestingly, despite a right hemisphere advantage for early deaf participants in the STS region, also hinted at in the PAC, there was more activation overall for stimuli presented in the right visual field. Behaviorally, a right visual field advantage for visual motion perception has often been reported for early deaf participants (e.g., direction of motion: Neville & Lawson, 1987; direction of motion: Bosworth & Dobkins, 1999; motion velocity: Brozinsky & Bavelier, 2004; direction of motion: Bosworth, Petrich & Dobkins, 2013; see also Samar & Parasnis, 2007; but see Hauthal et al., 2013 for a left visual field advantage for movement localization in late signers).
A right hemisphere advantage has been reported in previous neuroimaging studies investigating responses to visual motion or flickering stimuli in the early deaf: in the auditory cortices, the right hemisphere was dominantly (Finney et al., 2003) or exclusively activated (Finney, Fine & Dobkins, 2001; Fine et al., 2005). The right hemisphere advantage reported here is also in line with the finding that only the right auditory cortex (planum temporale) showed a correlation between increased cortical thickness and enhanced visual motion detection thresholds in the study of early deaf individuals by Shiell and colleagues (2016). The right auditory cortex was also shown, with event-related potential source localization, to be dominant in hearing-restored deaf individuals when viewing visual stimuli (Sandmann et al., 2012). While some studies have reported left hemisphere advantages in the early deaf, such effects could either not be localized (e.g., Neville & Lawson, 1987) or were relatively small, sometimes non-significant, effects reported in hMT+ (e.g., Bavelier et al., 2001; Fine et al., 2005). In our study of direction-selective responses, hMT+ did not show significant differences across participant groups, only an apparent right lateralization for early deaf participants (see Figures 2B and 4B).
Here, we can reconcile the behavioral right visual field and neural right hemisphere advantages in response to directional motion: the remapped auditory cortices do not show a strong contralateral bias like other direction-selective areas, e.g., hMT+; instead, the right hemisphere dominates regardless of visual field. As addressed in the first section of the Discussion, the STS region possesses large spatial receptive fields and, particularly in the right hemisphere, little sensitivity to retinotopic organization (e.g., Saygin & Sereno, 2008; see also Grossman & Blake, 2001; Almeida et al., 2015). The recruited association cortex seems to be the best candidate for the behavioral right visual field advantages, since this region showed the most extensive differences between early deaf and hearing participants here. Additionally, in a previous study reporting effects in hMT+, the posterior STS was shown to be 9.3 times larger in size (in comparison, hMT+ was only 1.08 times larger), and 7 times greater in percent signal change (hMT+: 1.05 times greater), in deaf than hearing participants in response to attended velocity of visual motion (see Table 4 of Bavelier et al., 2001).
A right hemisphere advantage paired with a right visual field advantage goes against the assumption that neural activation to visual stimuli is necessarily contralateral, an assumption frequently made in the literature on neural plasticity in the early deaf (e.g., Bavelier et al., 2001; tentatively in Bosworth & Dobkins, 1999; Bosworth & Dobkins, 2002; Brozinsky & Bavelier, 2004; Bosworth, Petrich & Dobkins, 2013; Hauthal et al., 2013). In a study reporting a left hemisphere advantage with attention-related modulation of event-related potentials to peripheral visual targets in the early deaf, Neville and Lawson (1987) hypothesized that the left hemisphere was remapped for sign language processing, and therefore could have different sensitivities to stimuli such as visual motion or stimulus localization. However, it is not clear that the left hemisphere is specialized for sign language processing in the early deaf: deaf and early-signing hearing participants have been shown to have bilateral (STS) activation to sign language, and early deaf participants to have more right STS activation to written language (e.g., Neville et al., 1998; Sadato et al., 2004). Our finding of a right hemisphere advantage compatible with a right visual field advantage offers an alternative explanation, and unites the majority of neural and behavioral findings regarding motion perception in the early deaf.
Speculatively, our findings suggest that behavioral advantages in the early deaf, particularly for motion discrimination in the right visual field, may be supported by increased STS region activation in the right hemisphere (again, see Figure 4A). While limited by a small sample size, five of our six deaf participants also participated in a behavioral study in our lab, in which thresholds for the percentage of dot coherence required to discriminate the direction of visual motion in the left and right visual fields were acquired. We found a suggestive, although not significant, correlation between this measure of behavioral direction discrimination ability (i.e., lower dot motion coherence thresholds) and the extent of activation in the right STS region, R2 = 0.30, p = .10. In comparison, the extent of bilateral hMT+ activation showed no correlation, R2 = 0.01, p = .78. This tentative result would need to be confirmed with larger sample sizes in future studies. At the least, we are able to introduce the hypothesis that the auditory and association cortices of early deaf individuals are sensitive to directional visual motion, and that this neural reorganization may support the behavioral advantage reported previously for visual motion direction discrimination.
Limitations and future directions
One limitation of this study was that the sample consisted of only six participants per group: while the results were reliable across individual participants, they would be strengthened by replication in future studies, potentially with larger sample sizes. Another limitation was that an eye tracker was not used during the fMRI experiment to ensure fixation. However, it is unlikely that eye movements could explain the results. The functionally defined regions were localized with centrally presented stimuli, and the differences between deaf and hearing participants for peripherally presented stimuli were highly specific (e.g., the enhanced STS region recruitment was restricted to the right hemisphere and right visual field). Additionally, cues for participants to report the direction of motion were given at random intervals throughout the experiment, such that they were neither periodic nor associated with directional motion presentation times (see Methods). A third limitation was that the STS region was defined broadly in each participant; future studies could use anatomical landmarks or more specific functional localizers in hearing participants to demarcate more precise sub-regions. Finally, in this study, directional visual motion coincided with coherent visual motion. While it may be argued that coherent motion inherently possesses directionality, future studies may address the influence of coherency on directional motion responses (see Braddick et al., 2008).
Acknowledgements
This research was supported by grants from the National Institutes of Health (NIH; grants EY023268 to FJ and P20 GM103650 to MW). The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. Talia Retter is supported by the Belgian National Fund for Scientific Research (FNRS; grant FC7159). The authors are thankful to Andrea Conte and Bruno Rossion for access to the stimulation program XPMan, revision 111; to Xiaoqing Gao for his help with the frequency-domain analysis; and to O. Scott Gwinn for the use of his behavioral data on discrimination thresholds for the deaf participants and for help with stimulus generation.
References
- Ales JM & Norcia AM (2009). Assessing direction-specific adaptation using the steady-state visual evoked potential: Results from EEG source imaging. Journal of Vision, 9)7):8, 1–13. [DOI] [PubMed] [Google Scholar]
- Albright TD (1984). Direction and orientation selectivity of neurons in visual area MT of the macaque. Journal of Neurophysiology, 52(6), 1106–1130. [DOI] [PubMed] [Google Scholar]
- Allison T, Puce P & McCarthy G (2000). Social perception from visual cues : role of the STS region. Trends in Cognitive Sciences, 4(7), 267–278. [DOI] [PubMed] [Google Scholar]
- Almeida J, He D, Chen Q, Mahon BZ, Zhang F et al. (2015). Decoding Visual Location From Neural Patterns in the Auditory Cortex of the Congenitally Deaf, Psychological Science, 26(11), 1771–1782. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Anstis SM (1970). Phi movement as a subtraction process. Vision Research, 10, 1411–1430. [DOI] [PubMed] [Google Scholar]
- Atkinson J, Birtles D, Anker S, Braddick O, Rutherford M et al. (2008). High-density VEP measures of global form and motion processing in infants born very preterm. Journal of Vision, 8(6), 422–422. [Google Scholar]
- Bandettini PA, Jesmanowicz A, Wong EC & Hyde JS (1993). Processing strategies for time-course data sets in functional MRI of the human brain. Magnetic Resonance in Medicine, 30(2), 161–173. [DOI] [PubMed] [Google Scholar]
- Barnes CL & Pandya DN (1992). Efferent cortical connections of multimodal cortex of the superior temporal sulcus in the rhesus monkey. The Journal of Comparative Neurology, 318(2), 222–244. [DOI] [PubMed] [Google Scholar]
- Baumgart F, Gaschler-Markefski B, Woldorff MG, Heinze H-J & Scheich H (1999). A movement-sensitive area in auditory cortex. Nature, 400, 724–726. [DOI] [PubMed] [Google Scholar]
- Bavelier D, Tomann A, Hutton C, Mitchell T, Corina D et al. (2000). Visual Attention to the Periphery Is Enhanced in Congenitally Deaf Individuals. The Journal of Neuroscience, 20(17):RC93, 1–6. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bavelier D, Brozinsky C, Tomann A, Mitchell T, Neville H & Liu G (2001). Impact of Early Deafness and Early Exposure to Sign Language on the Cerebral Organization for Motion Processing. The Journal of Neuroscience, 21(22), 8931–8942. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bavelier D, Dye MWG & Hauser PC (2006). Do deaf individuals see better? Trends in Cognitive Sciences, 10(11), 512–518. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Beauchamp MS, Cox RW & DeYoe EA (1997). Graded effects of spatial and featural attention on human area MT and associated motion processing areas. Journal of Neurophysiology, 78(1), 516–520. [DOI] [PubMed] [Google Scholar]
- Beauchamp MS, Argall BD, Bodurka J, Duyn JH & Martin A (2004). Unraveling multisensory integration: patchy organization within human STS multisensory cortex. Nature Neuroscience, 7, 1190–1192. [DOI] [PubMed] [Google Scholar]
- Beauchamp MS, Yasar NE, Frye RE & Ro T (2008). Touch, Sound and Vision in Human Superior Temporal Sulcus. NeuroImage, 41(3), 1011–1020. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Beckett A, Peirce JW, Sanchez-Panchuelo RM, Francis S & Schluppeck D (2012). Contribution of large scale biases in decoding of direction-of-motion from high-resolution fMRI data in human early visual cortex. NeuroImage, 63(3), 1623–1632. [DOI] [PubMed] [Google Scholar]
- Benevento LA, Fallon J, Davis BJ & Rezak M (1977). Auditory-visual interaction in single cells in the cortex of the superior temporal sulcus and the orbital frontal cortex of the macaque monkey. Experimental Neurology, 57(3), 849–872. [DOI] [PubMed] [Google Scholar]
- Bola L, Zimmermann M, Mostowski P, Jednorog K, Marchewka A, Rutkowski P & Szwed M (2017). Task-specific reorganization of the auditory cortex in deaf humans. Proceedings of the National Academy of Science, 114(4), E600–E609. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bosworth RG & Dobkins KR (1999). Left-hemisphere dominance for motion processing in deaf signers. Psychological Science, 10(3), 256–262. [Google Scholar]
- Bosworth RG & Dobkins KR (2002). Visual field asymmetries for motion processing in deaf and hearing signers. Brain and Cognition, 49(1), 170–181. [DOI] [PubMed] [Google Scholar]
- Bosworth RG, Petrich JA & Dobkins KR (2013). Effects of attention and laterality on motion and orientation discrimination in deaf signers. Brain and Cognition, 82(1), 117–126. [DOI] [PubMed] [Google Scholar]
- Bottari D, Nava E, Ley P & Pavani F (2010). Enhanced reactivity to visual stimuli in deaf individuals. Restorative Neurology and Neuroscience, 28(2), 167–179. [DOI] [PubMed] [Google Scholar]
- Braddick O (1974). A short-range process in apparent motion. Vision Research, 14(7), 519–527. [DOI] [PubMed] [Google Scholar]
- Braddick OJ, Hartley T, Atkinson J, Wattam-Bell J & Turner R (1997). FMRI study of differential brain activation by coherent motion and dynamic noise. Investigative Ophthalmology & Visual Science, 38, 4297–4297. [Google Scholar]
- Braddick O, Birtles D, Wattam-Bell J & Atkinson J (2005). Motion- and orientation-specific cortical responses in infancy. Vision Research, 45, 3169–3179. [DOI] [PubMed] [Google Scholar]
- Braddick O, Wattam-Bell J, Birtles D, Loesch J, Loesch L et al. (2008). Brain activity evoked by motion direction changes and by global motion coherence shows different spatial distributions. Journal of Vision, 8(6), 674–674. [Google Scholar]
- Brandt T, Stephan T, Bense S, Yousry TA, Dieterich M (2000). Hemifield visual motion stimulation: an example of interhemispheric crosstalk. Neuroreport, 11(12), 2803–2809. [DOI] [PubMed] [Google Scholar]
- Brozinsky CJ & Bavelier D (2004). Motion velocity thresholds in deaf signers: changes in lateralization but not in overall sensitivity. Cognitive Brain Research, 21(1), 1–10. [DOI] [PubMed] [Google Scholar]
- Bruce C, Desimone R & Gross CG (1981). Visual properties of neurons in a polysensory area in superior temporal sulcus of the macaque. Journal of Neurophysiology, 46(2), 369–384. [DOI] [PubMed] [Google Scholar]
- Bulkin DA & Groh JM (2006). Seeing sounds: visual and auditory interactions in the brain. Current Opinion in Neurobiology, 16(4), 415–419. [DOI] [PubMed] [Google Scholar]
- Calvert GA, Campbell R & Brammer MJ (2000). Evidence from functional magnetic resonance imaging of crossmodal binding in the human heteromodal cortex. Current Biology, 10(11), 649–657. [DOI] [PubMed] [Google Scholar]
- Carlin JD, Rowe JB, Kriegeskorte N, Thompson R & Calder AJ (2012). Direction-sensitive codes for observed head turns in human superior temporal sulcus. Cerebral Cortex, 22(4), 735–744. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Codina CJ, Pascalis O, Baseler HA, Levine AT, & Buckley D (2017). Peripheral Visual Reaction Time Is Faster in Deaf Adults and British Sign Language Interpreters than in Hearing Adults. Frontiers in Psychology, 8, 50. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Corballis PM (2003). Visuospatial processing and the right-hemisphere interpreter. Brain and Cognition, 53, 171–176. [DOI] [PubMed] [Google Scholar]
- Dahl CD, Logothetis NK & Kayser C (2009). Spatial organization of multisensory responses in temporal association cortex. The Journal of Neuroscience, 29(38), 11924–11932. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Dubner R & Zeki SM (1971). Response properties and receptive fields of cells in an anatomically defined region of the superior temporal sulcus in the monkey. Brain Research, 35(2), 528–532. [DOI] [PubMed] [Google Scholar]
- Ducommun CY, Murray MM, Thut G, Bellmann A, Viaud-Delmon I & Michel CM (2002). Segregated processing of auditory motion and auditory location: an ERP mapping study. NeuroImage, 16, 76–88. [DOI] [PubMed] [Google Scholar]
- Ducommun CY, Michel CM, Clarke S, Adriani M, Seeck M, Landis T & Blanke O (2004). Cortical motion deafness. Neuron, 43(6), 765–777. [DOI] [PubMed] [Google Scholar]
- Dye MWG, Hauser PC & Bavelier D (2009). Is visual selective attention in deaf individuals enhanced or deficient? The case of the useful field of view. PLoS ONE, 4(5): e5640. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Eickhoff SB, Stephan KE, Mohlberg H, Grefkes C, Fink GR, Amunts K & Zilles K (2005). A new SPM toolbox for combining probabilistic cytoarchitectonic maps and functional imaging data. Neuroimage, 25(4), 1325–1335. [DOI] [PubMed] [Google Scholar]
- Eickhoff SB, Heim S, Zilles K & Amunts K (2006). Testing anatomically specified hypotheses in functional imaging using cytoarchitectonic maps. NeuroImage, 32(2), 570–582. [DOI] [PubMed] [Google Scholar]
- Engel S, Zhang X & Wandell B (1997). Colour tuning in human visual cortex measured with functional magnetic resonance imaging. Nature, 388(6637), 68–71. [DOI] [PubMed] [Google Scholar]
- Ernst ZR, Boynton GM & Jazayeri M (2013). The spread of attention across features of a surface. Journal of Neurophysiology, 110, 2426–2439. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Felleman DJ & Van Essen DC (1987). Receptive field properties of neurons in area V3 of macaque monkey extrastriate cortex. Journal of Neurophysiology, 57(4), 889–920. [DOI] [PubMed] [Google Scholar]
- Fine I, Finney EM, Boynton GM & Dobkins KR (2005). Comparing the effects of auditory deprivation and sign language within the auditory and visual cortex. Journal of Cognitive Neuroscience, 17(10), 1621–1637. [DOI] [PubMed] [Google Scholar]
- Finney EM, Fine I & Dobkins KR (2001). Visual stimuli activate auditory cortex in the deaf. Nature Neuroscience, 4, 1171–1173. [DOI] [PubMed] [Google Scholar]
- Finney EM, Clementz BA, Hickok G & Dobkins KR (2003). Visual stimuli activate auditory cortex in deaf subjects: evidence from MEG. NeuroReport, 14(11), 1425–1427. [DOI] [PubMed] [Google Scholar]
- Gao X, Gentile F & Rossion B (2017). Fast periodic stimulation (FPS): A highly effective approach in fMRI brain mapping. bioRxiv, 135087. [DOI] [PubMed] [Google Scholar]
- Ghazanfar AA & Schroeder CE (2006). Is neocortex essentially multisensory? Trends in Cognitive Sciences, 10(6), 278–285. [DOI] [PubMed] [Google Scholar]
- Grossman ED & Blake R (2001). Brain activity evoked by inverted and imagined biological motion. Vision Research, 41(10-11), 1475–1482. [DOI] [PubMed] [Google Scholar]
- Hackett TA, De La Mothe LA, Ulbert I, Karmos G, Smiley J & Schroeder CE (2007). Multisensory convergence in auditory cortex, II. Thalamocortical connections of the caudal superior temporal plane. Journal of Comparative Neurology, 502, 924–952. [DOI] [PubMed] [Google Scholar]
- Hauthal N, Sandmann P, Debener S, & Thorne JD (2013). Visual movement perception in deaf and hearing individuals. Advances in Cognitive Psychology, 9(2), 53–61. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hong SW, Tong F & Seiffert AE (2013). Direction-Selective Patterns of Activity in Human Visual Cortex Suggest Common Neural Substrates for Different Types of Motion. Neuropsychologia, 50(4), 514–521. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Huk AC, Ress D & Heeger DJ (2001). Neuronal basis of the motion aftereffect reconsidered. Neuron, 32(1), 161–172. [DOI] [PubMed] [Google Scholar]
- Julesz B (1971). Foundations of cyclopean perception. Chicago: University of Chicago Press. [Google Scholar]
- Kamitani Y & Tong F (2006). Decoding seen and attended motion directions from activity in the human visual cortex. Current Biology, 16(11), 1096–1102. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Koenig-Robert R, VanRullen R & Tsuchiya N (2015). Semantic Wavelet-Induced Frequency-Tagging (SWIFT) Periodically Activates Category Selective Areas While Steadily Activating Early Visual Areas. PLoS ONE, 10(12), e0144858. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kubova Z, Kuba M, Hubacek J & Vit F (1990). Properties of visual evoked potentials to onset of movement on a television screen. Documenta Ophthalmologica, 75(1), 67–72. [DOI] [PubMed] [Google Scholar]
- Lam K, Kaneoke Y, Gunji A, Yamasaki H, Matsumoto E, Naito T & Kakigi R (2000). Magnetic response of human extrastriate cortex in the detection of coherent and incoherent motion. Neuroscience, 97(1), 1–10. [DOI] [PubMed] [Google Scholar]
- Levine A, Codina C, Buckley D, de Sousa G & Baseler H (2015). Differences in primary visual cortex predict performance in local motion detection in deaf and hearing adults. Journal of Vision, 15(12), 486–486. [Google Scholar]
- Li SC (2013). Neuromodulation and developmental contextual influences on neural and cognitive plasticity across the lifespan. Neuroscience and Biobehavioral Reviews, 37(9 Pt B), 2201–2208. [DOI] [PubMed] [Google Scholar]
- Lomber SG, Meredith MA & Kral A (2010). Cross-modal plasticity in specific auditory cortices underlies visual compensations in the deaf. Nature Neuroscience, 13(11), 1421–1427. [DOI] [PubMed] [Google Scholar]
- Lore WH & Song S (1991). Central and peripheral visual processing in hearing and nonhearing individuals. Bulletin of the Psychonomic Society, 29(5), 437–440. [Google Scholar]
- Meredith MA, Kryklywy J, McMillan AJ, Malhotra S, Lum-Tai R & Lomber SG (2011). Crossmodal reorganization in the early deaf switches sensory, but not behavioral roles of auditory cortex. Proceedings of the National Academy of Science, 108(21), 8856–8861. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Meyer M, Baumann S, Marchina S & Jancke L (2007). Hemodynamic responses in human multisensory and auditory association cortex to purely visual stimulation. BMC Neuroscience, 8:12, 1–15. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Mitchell TV & Maslin MT (2007). How vision matters for individuals with hearing loss. International Journal of Audiology, 46(9), 500–511. [DOI] [PubMed] [Google Scholar]
- Morosan P, Rademacher J, Schleicher A, Amunts K, Schormann T & Zilles K (2001). Human primary auditory cortex: cytoarchitectonic subdivisions and mapping into a spatial reference system. NeuroImage, 13(4), 684–701. [DOI] [PubMed] [Google Scholar]
- Morrone MC, Tosetti M, Montanaro D, Fiorentini A, Cioni G & Burr DC (2000). A cortical area that responds specifically to optic flow, revealed by fMRI. Nature Neuroscience, 3, 1322–1328. [DOI] [PubMed] [Google Scholar]
- Motter BC, Steinmetz MA, Duffy CJ & Mountcastle VB (1987). Functional properties of parietal visual neurons: mechanisms of directionality along a single axis. Journal of Neuroscience, 7(1), 154–176. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Nakamura H, Kashii S, Nagamine T, Hashimoto T, Honda Y & Shibasaki H (2003). Human V5 demonstrated by magnetoencephalography using random dot kinematograms of different coherence levels. Neuroscience Research, 46(4), 423–433. [DOI] [PubMed] [Google Scholar]
- Nelissen K, Vanduffel W & Orban GA (2006). Charting the Lower Superior Temporal Region, a New Motion-Sensitive Region in Monkey Superior Temporal Sulcus. Journal of Neuroscience, 26(22), 5929–5947. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Neville HJ & Lawson D (1987). Attention to central and peripheral visual space in a movement detection task: an event-related potential and behavioral study. II. Congenitally deaf adults. Brain Research, 405, 268–283. [DOI] [PubMed] [Google Scholar]
- Neville HJ, Bavelier D, Corina D, Rauschecker J, Karni A, Lalwani A et al. (1998). Cerebral organization for language in deaf and hearing subjects: biological constraints and effects of experience. Proceedings of the National Academy of Science, 95(3), 922–929. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Noguchi Y, Kaneoke Y, Kakigi R, Tanabe HC & Sadato N (2005). Role of the Superior Temporal Region in Human Visual Motion Perception. Cerebral Cortex, 15(10), 1592–1601. [DOI] [PubMed] [Google Scholar]
- Oram MW, Perrett DI & Hietanen JK (1993). Directional tuning of motion-sensitive cells in the anterior superior temporal polysensory area of the macaque. Experimental Brain Research, 97(2), 274–294. [DOI] [PubMed] [Google Scholar]
- Palomares M, Ales JM, Wade AR, Cottereau BR & Norcia AM (2012). Distinct effects of attention on the neural responses to form and motion processing: A SSVEP source-imaging study. Journal of Vision, 12(10):15, 1–14. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Parasnis I (1983). Effects of parental deafness and early exposure to manual communication on the cognitive skills, English language skill, and field independence of young deaf adults. Journal of Speech and Hearing Research, 26(4), 588–594. [DOI] [PubMed] [Google Scholar]
- Parasnis I & Samar VJ (1985). Parafoveal attention in congenitally deaf and hearing young adults. Brain and Cognition, 4(3), 313–327. [DOI] [PubMed] [Google Scholar]
- Pascual-Leone A & Hamilton R (2001). The metamodal organization of the brain. Progress in Brain Research, 134, 427–445. [DOI] [PubMed] [Google Scholar]
- Pavani F & Bottari D (2012). Visual abilities in individuals with profound deafness: a critical review. In: Murray MM, Wallace MT, editors. The Neural Bases of Multisensory Processes. Boca Raton (FL): CRC Press/Taylor & Francis; Chapter 22. Available from: https://www.ncbi.nlm.nih.gov/books/NBK92865/ [PubMed] [Google Scholar]
- Puce A, Allison T, Gore JC & McCarthy G (1995). Face-sensitive regions in human extrastriate cortex studied by functional MRI. Journal of Neurophysiology, 74(3), 1192–1199. [DOI] [PubMed] [Google Scholar]
- Retter TL & Rossion B (2016). Uncovering the neural magnitude and spatio-temporal dynamics of natural image categorization in a fast visual stream. Neuropsychologia, 91, 9–28. [DOI] [PubMed] [Google Scholar]
- Sadato N, Okada T, Honda M, Matsuki K, Yoshida M, Kashikura K et al. (2004). Cross-modal integration and plastic changes revealed by lip movement, random-dot motion and sign languages in the hearing and deaf. Cerebral Cortex, 15(8), 1113–1122. [DOI] [PubMed] [Google Scholar]
- Samar VJ & Parasnis I (2007). Non-verbal IQ is correlated with visual field advantages for short duration coherent motion detection in deaf signers with varied ASL exposure and etiologies of deafness. Brain and Cognition, 65(3), 260–290. [DOI] [PubMed] [Google Scholar]
- Sandmann P, Dillier N, Eichele T, Meyer M, Kegel A, Pascual-Marqui RD, et al. (2012). Visual activation of auditory cortex reflects maladaptive plasticity in cochlear implant users. Brain, 135(2), 555–568. [DOI] [PubMed] [Google Scholar]
- Saygin AP & Sereno MI (2008). Retinotopy and attention in human occipital, temporal, parietal, and frontal cortex. Cerebral Cortex, 18(9), 2158–2168. [DOI] [PubMed] [Google Scholar]
- Scott GD, Karns CM, Dow MW, Stevens C & Neville HJ (2014). Enhanced peripheral visual processing in congenitally deaf humans is supported by multiple brain regions, including primary auditory cortex. Frontiers in Human Neuroscience, 8:177, 1–9. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Seltzer B & Pandya DN (1978). Afferent cortical connections and architectonics of the superior temporal sulcus and surrounding cortex in the rhesus monkey. Brain Research, 149(1), 1–24. [DOI] [PubMed] [Google Scholar]
- Seltzer B & Pandya DN (1994). Parietal, temporal, and occipital projections to cortex of the superior temporal sulcus in the rhesus monkey: a retrograde tracer study. Journal of Comparative Neurology, 343(3), 445–463. [DOI] [PubMed] [Google Scholar]
- Seltzer B, Cola MG, Gutierrez C, Massee M, Weldon C & Cusick CG (1996). Overlapping and nonoverlapping cortical projections to cortex of the superior temporal sulcus in the rhesus monkey: double anterograde tracer studies. Journal of Comparative Neurology, 370, 173–190. [DOI] [PubMed] [Google Scholar]
- Shiell MM, Champoux F & Zatorre RJ (2014). Enhancement of visual motion detection thresholds in early deaf people. PLoS ONE, 9(2): e90498. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Shiell MM, Champoux F & Zatorre RJ (2016). The right hemisphere planum temporale supports enhanced visual motion detection ability in deaf people: evidence from cortical thickness. Neural Plasticity, 2016:7217630, 1–9. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Smiley JF, Hackett TA, Ulbert I, Karmos G, Lakatos P, Javitt DC & Schroeder CE (2007). Multisensory convergence in auditory cortex, I. Cortical connections of the caudal superior temporal plane in macaque monkeys. Journal of Comparative Neurology, 502, 894–923. [DOI] [PubMed] [Google Scholar]
- Smith KR, Okada K, Saberi K & Hickok G (2004). Human cortical auditory motion areas are not motion selective. NeuroReport, 15(9), 1523–1526. [DOI] [PubMed] [Google Scholar]
- Srinivasan R, Russell DP, Edelman GM & Tononi G (1999). Increased synchronization of neuromagnetic responses during conscious perception. Journal of Neuroscience, 19, 5435–5448. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Talairach J & Tournoux P (1988). Co-planar stereotaxic atlas of the human brain. New York: Thieme. [Google Scholar]
- Tootell RBH, Reppas JB, Dale AM, Look RB, Sereno MI, Malach R et al. (1995). Visual motion aftereffect in human cortical area MT revealed by functional magnetic resonance imaging. Nature, 375, 139–141. [DOI] [PubMed] [Google Scholar]
- Tyler CW & Kaitz M (1977). Movement adaptation in the visual evoked response. Experimental Brain Research, 27(2), 203–209. [DOI] [PubMed] [Google Scholar]
- Venezia JH, Vaden KI Jr., Rong F, Maddox D, Saberi K & Hickok G (2017). Auditory, visual, and audiovisual speech processing streams in superior temporal sulcus. Frontiers in Human Neuroscience, 11:174, 1–17. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Wattam-Bell J (1991). Development of motion-specific cortical responses in infancy. Vision Research, 31(2), 287–297. [DOI] [PubMed] [Google Scholar]
- Weeks R, Horwitz B, Aziz-Sultan A, Tian B, Wessinger CM, Cohen LG et al. (2000). A positron emission tomographic study of auditory localization in the congenitally blind. Journal of Neuroscience, 20(7), 2664–2672. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Zeki SM (1978). Functional specialisation in the visual cortex of the rhesus monkey. Nature, 274, 423–428. [DOI] [PubMed] [Google Scholar]
- Zimmermann J, Goebel R, De Martino F, van de Moortele PF, Feinberg DA, Chaimow D, et al. (2011). Mapping the Organization of Axis of Motion Selective Features in Human Area MT Using High-Field fMRI. PLoS ONE, 6(12):e28716, 1–10. [DOI] [PMC free article] [PubMed] [Google Scholar]
