Human Brain Mapping. 2004 Jan 15;21(3):178–190. doi: 10.1002/hbm.10156

Different areas of human non‐primary auditory cortex are activated by sounds with spatial and nonspatial properties

Heledd C Hart 1, Alan R Palmer 1, Deborah A Hall 1
PMCID: PMC6872110  PMID: 14755837

Abstract

In humans, neuroimaging studies have identified the planum temporale to be particularly responsive to both spatial and nonspatial attributes of sound. However, a functional segregation of the planum temporale along these acoustic dimensions has not been firmly established. We evaluated this scheme in a factorial design using modulated sounds that generated a percept of motion (spatial) or frequency modulation (nonspatial). In addition, these sounds were presented in the context of a motion detection and a frequency‐modulation detection task to investigate the cortical effects of directing attention to different perceptual attributes of the sound. Motion produced stronger activation in the medial part of the planum temporale and frequency‐modulation produced stronger activation in the lateral part of the planum temporale, as well as an additional non‐primary area lateral to Heschl's gyrus. These separate subregions are consistent with the notion of divergent processing streams for spatial and nonspatial auditory information. Activation in the superior parietal cortex, putatively involved in the spatial pathway, was dependent on the task of motion detection and not simply on the presence of acoustic cues for motion. This finding suggests that the listening task is an important determinant of how the processing stream is engaged. Hum. Brain Mapping 21:178–190, 2004. © 2004 Wiley‐Liss, Inc.

Keywords: fMRI; frequency‐modulation; motion; cortex; planum temporale; parietal cortex

INTRODUCTION

Anatomical and Functional Segregation in Human Planum Temporale

The human auditory cortex on the superior temporal gyrus has an elongated primary auditory area on Heschl's gyrus (HG), surrounded by multiple non‐primary auditory fields [Galaburda and Sanides, 1980; Rivier and Clarke, 1997; Wallace et al., 2002]. The planum temporale (PT), defined by the morphology of the superior temporal gyrus, is one region encompassing non‐primary auditory fields posterior to HG. Differences in laminar structure across PT indicate that it can be divided anatomically into at least three subregions. For example, Rivier and Clarke [1997] identified three fields (superior temporal area, lateral area, and posterior area) located immediately behind Heschl's sulcus. Neuroimaging data reveal that PT is involved in the analysis of many types of spectrotemporally complex sound; its differential responsiveness to different acoustic cues suggests that functional subdivisions are present within PT.

Neuroimaging studies in humans have revealed that the PT is involved in the analysis of sounds that have a temporal and spectral structure. Bilateral regions of the auditory cortex posterior to HG, often including the lateral portion of PT, are activated by harmonic complex tones [Hall et al., 2002], band‐pass noise [Wessinger et al., 2001], amplitude modulation [Giraud et al., 2000], frequency modulation (FM) [Binder et al., 2000; Hall et al., 2002; Thivard et al., 2000], and sound sequences [Griffiths et al., 1999; Penhune et al., 1998]. Sounds with dynamic spatial properties also engage part of the PT. Auditory motion has been shown to elicit activation in PT for sounds that move in azimuth [Baumgart et al., 1999; Warren et al., 2002], in elevation [Pavani et al., 2002], and in distance [Seifritz et al., 2002]. In some studies, only the right PT is significantly activated by auditory motion [Baumgart et al., 1999; Seifritz et al., 2002], and the location of the motion‐specific activation varies across studies between the medial and lateral parts of PT.

Separate Auditory Processing Routes

Anatomical tract tracing in macaque monkeys has revealed two routes of cortico‐cortical connectivity originating in non‐primary auditory fields and terminating in distinct polysensory regions of the prefrontal cortex [Romanski et al., 1999] (illustrated in Fig. 1). Anterior auditory fields are reciprocally connected with the frontal pole (10), the rostral principal sulcus (anterior 46), and ventral prefrontal regions (12, 45). Posterior auditory fields are connected with the caudal principal sulcus (posterior 46) and the frontal eye fields (8a), via the parietal cortex. From a synthesis of evidence from neurophysiology, imaging, and lesion studies, Rauschecker [1998] has posited that the anterior route is involved in processing object‐related sound features and the posterior route in processing space‐related sound features. Species‐specific vocalisations engage anterior parts of the superior temporal gyrus in both human [Hickok and Poeppel, 2000; Scott et al., 2000] and non‐human primates [Rauschecker and Tian, 2000]. An anterior processing stream may subserve analysis of the semantic content of sensory information since, in humans, the anterior part of the left superior temporal sulcus has been shown to respond only if the stimulus is intelligible [Scott et al., 2000]. While there is general agreement about the functional role of the anterior route, the role of the posterior route and the brain areas involved remain uncertain. Alternative views are that it is involved in tracking changes in the frequency spectra over time [Belin and Zatorre, 2000] or in auditory‐motor integration [Hickok and Poeppel, 2000]. The posterior route is connected to the prefrontal cortex via the posterior parietal cortex, which is also involved in the localization of visual signals [Andersen et al., 1997; Bushara, 1999].
Studies that have imaged the parietal cortex have shown that it is activated during passive listening to sounds at fixed locations or moving in space. Activation occurs particularly on the right, but its precise location is inconsistent across studies. For example, studies have implicated the inferior parietal lobe [Weeks et al., 1999], the superior parietal lobe [Griffiths and Green, 1999; Pavani et al., 2002], both lobes [Griffiths et al., 1998b, 2000], the intraparietal sulcus, which forms the border between the inferior and superior parietal lobes [Bremmer et al., 2001; Lewis et al., 2000], or no parietal region at all [Seifritz et al., 2002]. It is unclear whether these differences are due to methodological factors (the acoustic cues used to generate the motion stimulus or the imaging technique), cognitive factors (the listening task), or the expertise of the listeners.

Figure 1.

Dorsolateral view of the macaque left hemisphere after removal of the overlying frontal and parietal cortex, exposing the ventral bank of the lateral sulcus and insula. The primary region is shown in white and non‐primary regions are shown by the hatched lines. Connection routes are illustrated by the arrows, terminating in different areas of the prefrontal cortex (areas labelled according to Brodmann's classification). The anterior route is illustrated in dark grey, and the posterior route is illustrated in light grey.

The neural systems supporting perception may vary as a function of the task, especially in regions that have a multi‐modal function. This appears to be true for auditory recognition and localisation. For example, when subjects directed their attention either to the pitch or to the location of the same sound, pitch discrimination generated greater activation in HG and PT of the auditory cortex and in the right inferior frontal gyrus, while location discrimination generated greater activation bilaterally in the inferior parietal cortex and posterior middle temporal gyrus, and in the superior parietal cortex and superior frontal gyrus on the right side [Alain et al., 2001]. With respect to the localisation of static sounds, activation in the right inferior parietal cortex was greater when the task required explicit spatial localization and sound‐guided movement than when it required a same/different discrimination [Zatorre et al., 2002]. Thus, the parietal cortex may play an important role in tasks that require judgements about auditory space.

The present study used fMRI to measure brain responses to stimuli in which modulation was used to produce spatial (moving) and nonspatial (frequency‐modulation, FM) percepts, crossed in a factorial design. These sounds were presented in the context of both a motion detection task and an FM detection task in order to investigate the effect of the stimulus and of the listening task on the pattern of activation. Our analysis identifies which regions of the PT are engaged in processing spatial and nonspatial percepts, and the extent to which directing attention to the two different attributes of the sound determines the pattern of brain activation.

SUBJECTS AND METHODS

Subjects

Twelve right‐handed subjects (5 women, 7 men; mean age 22 years) with no history of neurological or hearing impairment participated in the study. All subjects gave informed written consent and the study was approved by the University of Nottingham Medical School Ethical Committee. Prior to the imaging session, the hearing sensitivity of subjects was measured using pure‐tone audiometry, and hearing thresholds of all subjects fell within the normal range (< 20 dB HL) at frequencies between 250 and 8,000 Hz inclusive.

Stimuli and Listening Tasks

A carrier sound of a 300‐Hz sinusoid was chosen because it is distant from the frequency of peak energy of the scanner noise (1.92 kHz, plus higher harmonics). Crossing modulation (unmodulated and FM) with motion (stationary and moving) generated four stimulus conditions in a 2 × 2 factorial design. Each stimulus was presented continuously for 6.8 s. In the FM conditions, the 300‐Hz tone was sinusoidally modulated in frequency at a depth of 50 Hz and a rate of 5 Hz. In the motion conditions, stimuli were sinusoidally modulated in amplitude (AM) at a depth of 100% and a rate of 0.0735 Hz, which results in half an AM cycle in 6.8 s. The envelope of the AM was 90 degrees out of phase at the two ears to generate the percept of a single motion “sweep” from one ear to the other in the azimuthal plane, as a result of dynamically varying interaural level differences. The direction of motion was controlled, with 50% of sweeps moving from left to right and 50% from right to left. The conditions were counter‐balanced so that, for example, the FM condition included equal numbers of sounds with and without the AM motion component. This ensured that the results of the two main stimulus effects would not be confounded by the differences in the rate of modulation that defined them. Stimuli were presented at 90 dB SPL, as measured by mounting the MR‐compatible headphones [Palmer et al., 1998] on KEMAR [Burkhard and Sachs, 1975] equipped with a Brüel and Kjær microphone (Type 4134) connected to a Brüel and Kjær measuring amplifier (Type 2636).
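The stimulus parameters above fully specify the waveforms, so their construction can be sketched numerically. The following is a minimal NumPy sketch, not the authors' stimulus-generation code; the audio sampling rate `FS` is an assumption (it is not stated in the paper), and the envelope phases are chosen only to illustrate the 90-degree interaural offset.

```python
import numpy as np

# Parameter values from the Methods; FS is an assumed playback sampling rate.
FS = 44100            # Hz (assumption, not stated in the paper)
DUR = 6.8             # s, epoch duration
CARRIER_HZ = 300.0    # carrier tone frequency
FM_RATE, FM_DEPTH = 5.0, 50.0   # FM rate (Hz) and depth (Hz)
AM_RATE = 0.0735      # Hz; gives half an AM cycle in one 6.8-s epoch

t = np.arange(int(FS * DUR)) / FS

def fm_tone(t):
    """300-Hz tone sinusoidally modulated in frequency (depth 50 Hz, rate 5 Hz)."""
    inst_freq = CARRIER_HZ + FM_DEPTH * np.sin(2 * np.pi * FM_RATE * t)
    phase = 2 * np.pi * np.cumsum(inst_freq) / FS   # integrate frequency to get phase
    return np.sin(phase)

def moving_tone(t):
    """Binaural pair whose 100%-depth AM envelopes are 90 degrees out of phase,
    producing a dynamic interaural level difference (one motion 'sweep')."""
    carrier = np.sin(2 * np.pi * CARRIER_HZ * t)
    env_left = 0.5 * (1 - np.cos(2 * np.pi * AM_RATE * t))             # rises over the epoch
    env_right = 0.5 * (1 - np.cos(2 * np.pi * AM_RATE * t + np.pi/2))  # 90 deg ahead
    return env_left * carrier, env_right * carrier
```

Note that 0.0735 Hz × 6.8 s ≈ 0.5, i.e. exactly half an AM cycle per epoch, as the Methods state.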

Two experimental runs for each subject were conducted in a single fMRI session. The same set of stimuli was presented in each run and in total there were 48 repeats of each condition. In one experimental run, subjects carried out a modulation detection task and in the other a motion detection task. For the modulation detection, subjects judged whether each epoch was unmodulated (“constant”) or modulated (“fluctuating”). For the motion detection, subjects judged whether each epoch was stationary or moving. In both tasks, subjects were instructed to ignore the irrelevant stimulus features. Each experimental run included a baseline condition in which there was no auditory stimulus and no task.

fMRI Scanning

Scanning was performed on a 3 Tesla MR system with head gradient coils and a head radio‐frequency coil (Nova Medical Inc, Wakefield, MA). An MBEST echo‐planar imaging (EPI) sequence (TE = 35 msec) was used to acquire sets of 36 contiguous coronal images. Voxel resolution was 4 mm isotropic (matrix size, 128 × 64). A set of images was acquired every 9 s using a clustered‐volume‐acquisition sequence. These images were acquired during the 2,200‐ms silent interval between stimulus epochs to avoid masking of the stimuli by the scanner noise produced during image acquisition. A T1‐weighted EPI image (4 mm isotropic resolution) was acquired for each subject for better visualisation of the individual cortical morphology, but with the same tissue distortions as the functional images. The scanning session lasted about 45 min in total.

Data Analysis

SPM99 software (online at http://www.fil.ion.ucl.ac.uk/spm) was used for image analysis. Head movement over the time series was corrected by a computational algorithm that minimises the sum of squared differences between the mean image and each image in the time series [Friston et al., 1995, 1996a]. Head motion was generally less than 2 mm in each plane and less than 2 degrees of rotation about each axis. The time series was then coregistered with the T1‐weighted EPI using the mutual information algorithm [Maes et al., 1997] and transformed into the standard brain space defined by the Montreal Neurological Institute (MNI) [Evans et al., 1993]. Non‐linear spatial transformation was achieved by warping the brain volume of the T1‐weighted EPI to match that of a T1‐weighted template, and then applying these warps to the functional data. The functional data were then spatially smoothed using an 8‐mm isotropic Gaussian kernel to enhance the signal‐to‐noise ratio and to validate multi‐subject analyses. Low‐frequency artefacts were removed by high‐pass filtering the time series at 0.3 cycles/min. Analyses were performed to investigate the effect of the spatial and nonspatial properties of the sound and the effect of the listening task. A random‐effects analysis was applied to account for both within‐ and between‐subjects variance and to identify only those brain regions that are activated in the majority of subjects. Random‐effects analyses are necessary for valid population inference about typical experimental effects because, in fMRI datasets, the inter‐scan variability within an imaging session is small in comparison to the variability of responses from subject to subject [Holmes and Friston, 1998]. Correcting the statistics for assessment of voxels across the entire brain would have been too stringent, since we were interested in addressing questions about the role of two specific brain regions, PT and parietal cortex, in our listening tasks.
Therefore, we applied a stringent voxel‐level threshold of P < 0.001 in the absence of a whole‐brain correction procedure [Friston et al., 1996b]. Regions of the auditory cortex have quite clearly defined borders. For example, HG and PT have been defined by their co‐ordinates in the standard MNI brain space: a probability map for the borders of HG is given by Penhune et al. [1996], and the same type of map for PT is given by Westbury et al. [1999]. Therefore, we can assign probability values to the spatial locations of significantly activated voxels in the auditory cortex.
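The high-pass filtering step described above (0.3 cycles/min, with one volume every 9 s) is conventionally implemented in SPM by regressing out a discrete-cosine basis set below the cutoff frequency. The sketch below illustrates that idea with plain NumPy; it is written in the spirit of SPM99's filter, not taken from its code, and the regressor-count formula is a common convention rather than a detail stated in the paper.

```python
import numpy as np

TR = 9.0                  # s; one clustered volume acquisition every 9 s
CUTOFF_HZ = 0.3 / 60.0    # high-pass cutoff: 0.3 cycles/min expressed in Hz

def highpass_dct(ts, tr=TR, cutoff_hz=CUTOFF_HZ):
    """Remove low-frequency drift by regressing out a discrete-cosine basis
    (a sketch of SPM-style high-pass filtering, not SPM's implementation)."""
    ts = np.asarray(ts, dtype=float)
    n = len(ts)
    k = int(np.floor(2 * n * tr * cutoff_hz))   # number of cosines below the cutoff
    i = np.arange(n)
    cols = [np.ones(n)] + [np.cos(np.pi * (2 * i + 1) * j / (2 * n))
                           for j in range(1, k + 1)]
    basis = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(basis, ts, rcond=None)
    return ts - basis @ beta                     # residual = high-passed series
```

Because the basis includes a constant column and all cosines slower than the cutoff, slow scanner drift is projected out while stimulus-locked signal at higher frequencies is retained.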

RESULTS

Behavioural Data

Subjects performed both detection tasks with a high degree of accuracy (mean modulation score = 98%, SD = 3.6%; mean motion score = 93%, SD = 7.5%). Subjects perceived the motion detection task to be somewhat more difficult than modulation detection, but nine of the twelve subjects performed both tasks equally well. The remaining three subjects made more errors on motion detection, with the largest individual discrepancy being 99% vs. 82% (modulation vs. motion detection; chance level = 50%).

General Pattern of Functional Activation

General networks of sound‐evoked activation were determined by contrasting each sound condition against the baseline, for data collapsed across tasks (P < 0.001). The peak locations and number of activated voxels within each brain region are presented in Table I. There was some commonality across conditions involving superior temporal gyrus, inferior colliculi, precentral gyrus, and superior parietal cortex (Fig. 2). A perhaps surprising result was that bilateral auditory activation in the superior temporal gyrus was present for all except the stationary, unmodulated tones. Individual analysis for this tone condition revealed that six of the subjects did show superior temporal gyral activation, but six did not. Previous studies have revealed that when tones are presented as a continuous signal with only one onset and offset, they may evoke only a small change in the fMRI response, which, in some subjects, does not exceed the statistical threshold for detection [Hart et al., 2003]. The results in inferior colliculus confirm that the auditory system was engaged during the stationary, unmodulated tone condition because bilateral activation was observed. The inferior colliculus is more sensitive to overall sound energy than onsets and offsets [Harms and Melcher, 2002]. Inferior collicular activation was determined by placing a sphere of 5‐mm radius at the anatomically‐defined centre of the left (x −4, y −34, z −12 mm) and right (x 6, y −35, z −12 mm) inferior colliculi [for a similar procedure, see Griffiths et al., 2001]. Activation fell within both left and right spheres.
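The inferior colliculus ROI analysis described above (a 5-mm sphere centred on anatomically defined coordinates) can be sketched as follows. This is a hypothetical helper mirroring the cited procedure [Griffiths et al., 2001], not the authors' code; the voxel grid is assumed to be 4-mm isotropic with its origin at 0 mm in MNI space.

```python
import numpy as np

GRID_MM = 4.0   # voxel size of the functional images (4-mm isotropic)

def sphere_voxels(center_mm, radius_mm=5.0, grid_mm=GRID_MM):
    """Return MNI coordinates (mm) of voxel centres within radius_mm of center_mm.
    Hypothetical helper; assumes the voxel grid is aligned to multiples of grid_mm."""
    c = np.asarray(center_mm, dtype=float)
    lo = np.floor((c - radius_mm) / grid_mm) * grid_mm   # bounding cube, snapped to grid
    axes = [np.arange(l, h + grid_mm, grid_mm)
            for l, h in zip(lo, c + radius_mm)]
    xs, ys, zs = np.meshgrid(*axes, indexing="ij")
    pts = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)
    keep = ((pts - c) ** 2).sum(axis=1) <= radius_mm ** 2
    return pts[keep]

# Sphere centres from the text: left and right inferior colliculi
left_ic = sphere_voxels((-4, -34, -12))
right_ic = sphere_voxels((6, -35, -12))
```

Activated voxels whose coordinates fall within either sphere would then be counted as inferior collicular activation.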

Table I.

Effect of the different sound conditions on the pattern of brain activation

Brain region  Side  x, y, z (mm)  t  No. of voxels

Stationary unmodulated tone, baseline (Fig. 2A)

 Inferior colliculi L&R 0 −36 −8 5.22 6
 Precentral gyrus mid 0 −20 68 4.54 1
 Superior parietal cortex L −12 −64 60 4.79 2
 Occipital cortex R 4 −72 −8 5.03 1

Stationary FM tone, baseline (Fig. 2B)

 Inferior colliculi L&R 0 −36 −8 5.02 9
 Superior temporal gyrus L −56 −12 0 6.67 28
 Superior temporal gyrus R 60 −8 0 7.15 40
 Precentral gyrus L −28 −16 68 4.21 2
 Superior parietal cortex L −20 −68 56 4.84 2
 Occipital cortex R 20 −60 −16 5.26 16

Moving unmodulated tone, baseline (Fig. 2C)

 Superior temporal gyrus L −64 −36 8 5.22 4
 Superior temporal gyrus R 60 −36 12 5.71 9+3
 Precentral gyrus L −28 −12 68 4.69 8
 Precentral gyrus R 36 −24 60 4.88 11+6
 Superior parietal cortex L −32 −64 52 4.54 4
 Inferior frontal gyrus L −56 12 20 5.84 3

Moving FM tone, baseline (Fig. 2D)

 Inferior colliculi L&R 0 −40 −8 6.16 8
 Superior temporal gyrus L −56 −16 4 7.49 104
 Superior temporal gyrus R 60 −12 4 8.60 96
 Precentral gyrus L −52 0 36 4.63 3
 Precentral gyrus mid 0 0 68 4.02 6
 Inferior frontal gyrus R 52 8 36 4.04 1

*Coordinates and t values are reported for the peak voxels of activation within each region. The number of voxels exceeding the P < 0.001 voxel‐level threshold for activation is also reported.

Figure 2.

Group activation for the four stimulus conditions contrasted against the silent baseline, for data collapsed across task (random‐effects, P < 0.001 uncorrected). Both surface and medial activations are shown overlaid onto the smoothed template brain. Precentral gyrus activation for the stationary unmodulated and moving FM tones are located at the midline and so are displayed on both the left and right views of the brain.

All sound conditions generated activation in the precentral gyrus, which may play a role in the motor execution of the button response. Activation was observed in the left superior parietal cortex and in the right occipital cortex for three of the sound conditions, but was absent for the moving, FM tones. The moving, unmodulated tones generated additional activation in the left inferior frontal gyrus.

Spatial and Nonspatial Sound Analysis Within the Auditory Cortex

The topographical distribution of the effects of FM and motion within the auditory cortex was evaluated in three different ways, for data collapsed across tasks. First, a thresholded statistical map of activation was computed for the FM relative to the unmodulated sounds and for the moving relative to the stationary sounds (P < 0.001). For example, the effect of FM was specified by ([FM stationary + FM moving] − [unmodulated stationary + unmodulated moving]). Activation outside the auditory cortex was obtained only for the motion contrast and was located bilaterally in the precentral gyrus. Activation by both FM and motion was widespread along the superior temporal gyrus. Activation was overlaid onto the mean T1‐weighted brain and viewed in slices parallel to the supratemporal plane so that Heschl's sulcus, which defines the approximate posterior boundary of HG and the anterior boundary of PT, could be seen (Fig. 3). The FM and motion activation maps overlapped by approximately 52%. Table II reports the peaks of activation within the common areas and their probability of location in HG and PT according to the probability maps of Penhune et al. [1996] and Westbury et al. [1999]. The most striking difference between FM and motion was the apparent shift of the two activation patterns relative to one another. Activation by FM extended anteriorly from HG onto the planum polare and posteriorly at the medial tip of HG, replicating the effects reported by Hall et al. [2002]. Activation by motion extended posteriorly along PT, particularly in the right hemisphere.
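The main-effect contrasts described above, and the diagonal contrast used later to compare FM and motion directly, can be written as weight vectors over the four conditions of the 2 × 2 design. A minimal sketch follows; the condition ordering is our choice for illustration, not something specified in the paper.

```python
import numpy as np

# Condition order (our choice): columns of a design matrix for the 2 x 2 design
CONDS = ["unmod_stationary", "unmod_moving", "fm_stationary", "fm_moving"]

c_fm = np.array([-1, -1, +1, +1])           # (FM stat + FM move) - (unmod stat + unmod move)
c_motion = np.array([-1, +1, -1, +1])       # (moving) - (stationary), collapsed over FM
c_interaction = np.array([+1, -1, -1, +1])  # FM x motion interaction
c_diagonal = np.array([0, -1, +1, 0])       # stationary FM vs. unmodulated moving

def contrast_estimate(betas, c):
    """Effect size at one voxel: weighted sum of the four condition betas."""
    return float(np.dot(c, betas))
```

Because the two main-effect vectors are orthogonal and each sums to zero, neither contrast is confounded by the overall sound-versus-silence response; the diagonal contrast avoids the non-independence of the two main effects by comparing only one condition from each level.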

Figure 3.

Group auditory activation for the effects of FM and motion (random‐effects, P < 0.001 uncorrected). Areas in red were more activated by FM than by unmodulated tones, while areas in green were more activated by moving than by stationary tones. Yellow indicates areas of overlap. The activation is overlaid onto oblique axial views of a mean image for the 12 subjects. The sagittal view (bottom right) illustrates the slice orientation selected for display and slices are oriented obliquely at 34°, along the supratemporal plane, and displayed in a descending order at 2.5‐mm intervals.

Table II.

Peaks of activation within the auditory cortex for the FM‐unmodulated and the moving‐stationary contrasts and their associated T values*

Auditory region  Side  x, y, z (mm)  T  Probability
Effect of FM
 HG L −40 −28 8 10.66 .70
 HG R 48 −16 4 14.47 .86
 PT L −60 −32 8 6.03 .27
 PT R 56 −28 8 5.86 .30
Effect of motion
 HG L −36 −28 8 6.30 .58
 HG R 52 −16 8 10.28 .50
 PT L −56 −20 12 4.71 .33
 PT R 56 −20 12 10.73 .70
* All peaks fall within HG and PT and the probability of localization is reported in the far right‐hand column. Probability values for HG are given by Penhune et al. [1996] and for PT by Westbury et al. [1999].

While Figure 3 usefully summarizes the number and location of voxels exceeding the voxel‐level P < 0.001 threshold, it does not represent the actual magnitude of the FM and motion response at each voxel. The spatial distribution of the response magnitudes was, therefore, explored by plotting the response magnitudes for all voxels within a selected region encompassing the superior surface of HG and PT. This distribution was displayed without statistical thresholding in order to appreciate the pattern of the response magnitude across the entire region. Hence, we plotted standardised response magnitudes for the above statistical contrasts. The results of this procedure are shown in Figure 4. The standardised measure corresponds to the general linear model predictions of the magnitude of the condition‐specific activation. It was assumed that the standard errors differ little across voxels because the data are spatially smoothed. From a plane of section along the superior temporal plane, we selected all voxels from ±32 to ±67 mm in the x dimension and −45 to +12 mm in the y dimension. For the effect of FM, there was a bilateral ridge of activation with a summit on the lateral part of the superior temporal gyrus, slightly posterior to, but incorporating, lateral HG. On the left, the ridge widened to include a peak just anterolateral to the axis of HG. For the effect of motion, the bilateral ridges lay mostly behind the axis of HG and the summit was more medial than for FM.
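Selecting the voxel window stated above (±32 to ±67 mm in x, −45 to +12 mm in y, covering both hemispheres) amounts to a boolean mask over the coordinate grid of the in-plane map. A small sketch, assuming a 4-mm grid whose extent we choose arbitrarily for illustration:

```python
import numpy as np

GRID_MM = 4.0
xs = np.arange(-80.0, 80.0 + GRID_MM, GRID_MM)   # assumed in-plane x coordinates (mm)
ys = np.arange(-60.0, 32.0 + GRID_MM, GRID_MM)   # assumed in-plane y coordinates (mm)
X, Y = np.meshgrid(xs, ys, indexing="ij")

# Window from the text: |x| between 32 and 67 mm, y between -45 and +12 mm
window = (np.abs(X) >= 32) & (np.abs(X) <= 67) & (Y >= -45) & (Y <= 12)
```

The unthresholded contrast magnitudes at the voxels where `window` is True would then be plotted as in Figure 4.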

Figure 4.

Plots of the response magnitude for the effect of FM and moving tones across primary and non‐primary auditory cortex. The view of the mean image displayed (top) is a single axial slice along the supratemporal plane, oriented obliquely at 34° (as in Fig. 3). The magnitude plot areas are displayed in the same orientation and are calculated from the random‐effects group analyses. The long axis of HG is plotted as a red line, where the end points are defined by the extremities of the 50–75% probability contour for HG [Penhune et al., 1996]. The FM tone plots represent the magnitude difference between FM and unmodulated conditions and reveal bilateral peaks located in lateral PT plus an anterolateral peak in left auditory cortex. The motion plots represent the magnitude difference between moving and stationary conditions and reveal bilateral peaks posterior to HG that extend into medial PT.

To test whether the relative differences in the peaks of activation were statistically reliable, we directly contrasted the effects of FM and motion. The effects of FM and motion are not independent in a factorial design, so we contrasted the pair of conditions on the diagonal of the 2 × 2 matrix: the stationary FM sound and the moving unmodulated sound (P < 0.001). As shown in Figure 5, greater activation by FM occurred bilaterally at the lateral tip of HG (clusters of 24 and 25 voxels with peaks at x 56, y −12, z 0 mm and x −52, y −12, z 0 mm, respectively). Although these clusters were located in front of Heschl's sulcus (and, therefore, not within PT), the activation coincides with the anterolateral region that we have previously reported to be activated by FM [Hall et al., 2002], and that probably represents a non‐primary auditory field [see Wallace et al., 2002]. The converse contrast, for greater activation by motion, revealed two voxels with a peak at x 56, y −32, z 24 mm that did not extend to the lateral convexity. This peak has a 0.32 probability of being located in PT [Westbury et al., 1999] and may coincide with the motion‐sensitive area “T3” identified by Baumgart et al. [1999], posterior to Heschl's sulcus.

Figure 5.

Random‐effects group analyses (P < 0.001 uncorrected) directly contrasting (A) stationary FM tones against unmodulated moving tones and (B) unmodulated moving tones against stationary FM tones. Activation is shown in black and is displayed on a mean image, oriented along the supratemporal plane, relative to the outline of HG (shown in white).

Thus, the present data indicate a functional segregation between FM‐ and motion‐responsive areas in non‐primary auditory cortex. In both hemispheres, lateral parts of non‐primary auditory cortex were engaged in processing FM and the anterolateral region (at the lateral tip of HG) responded more strongly to FM than to moving sounds. In contrast, the medial part of right PT was engaged in processing motion cues.

Detecting FM or Motion

To address whether the listening task determined the pattern of brain activation in the left superior parietal cortex, the data were analysed separately for the modulation and motion detection tasks. Figure 6 shows the subtraction of the baseline from all four sound conditions, analysed separately for the two detection tasks (P < 0.001). The left superior parietal cortex was significantly activated only during the motion detection task. Activation consisted of two voxels (x −36, y −60, z 52 mm and x −40, y −60, z 48 mm). Motion detection also involved occipital cortex. Both modulation and motion detection engaged the superior temporal gyrus and precentral gyrus bilaterally. Activation within the superior temporal gyrus encompassed HG and lateral PT for both modulation and motion detection.

Figure 6.

Group auditory activation for the effects of listening task overlaid onto a smoothed template brain (random effects, P < 0.001 uncorrected). Relative to the baseline condition, both tasks engage bilateral auditory cortex on the superior temporal gyrus. In addition, modulation detection (A) involves precentral gyrus, while motion detection (B) involves precentral, parietal, and occipital areas.

To determine whether the parietal activation could be ascribed to the auditory analysis of motion cues or to the cognitive operations of motion detection, three additional statistical contrasts are critical: (1) stationary sounds during motion detection, (2) moving sounds during motion detection, and (3) moving sounds during modulation detection (P < 0.001, relative to the baseline). Given that each analysis is based on a small subset of the data (48/240 data points per subject), for a more reliable interpretation, we sought convergence with the activation loci reported in Table I and Figures 2 and 6.

1. Listening to stationary sounds in the context of motion detection revealed three voxels of activation in the left superior parietal lobe at x −24, y −68, z 56 mm; x −20, y −68, z 56 mm; and x −12, y −64, z 60 mm. All three voxels were also present for the two contrasts of stationary tones relative to the baseline, collapsed across task (Table I). Activation in the left superior parietal lobe was also present for the contrast of sound vs. baseline for the motion detection task (Fig. 6B), but at these three voxel locations it was significant only at the P < 0.005 level.

2. Listening to moving sounds in the context of motion detection revealed two voxels in the left superior parietal lobe at x −36, y −60, z 52 mm and x −36, y −64, z 52 mm. This activation was adjacent to that for the contrast of the moving unmodulated tone relative to the baseline (Fig. 2C) and overlapped with the contrast of sounds vs. baseline for the motion detection task (Fig. 6B).

3. For the data acquired during the modulation detection task, the contrast between the moving sounds and the baseline condition generated two voxels of activation in the left superior parietal lobe close to the midline at x −12, y −64, z 60 mm. Given that this activation was not present in Figures 2C,D and 6A, we are less certain of its origin.

We conclude that the left superior parietal activation was determined more by the task than by the stimulus conditions.

DISCUSSION

FM and Motion Engage Different Parts of the PT

The present study revealed widespread bilateral effects of FM and motion across the auditory cortex. Relative to unmodulated sounds, FM generated strong activation in lateral non‐primary auditory cortical regions, including the lateral tip of HG and lateral PT. The location of this differential activation is consistent with previous studies [Binder et al., 2000; Hall et al., 2002; Griffiths et al., 1998a; Thivard et al., 2000] and supports the view that these non‐primary auditory areas play a role in the analysis of temporally varying, nonspatial sounds. Relative to stationary sounds, moving sounds generated the strongest activation in medial portions of the right PT. A general account of the function of PT is proposed whereby PT analyses incoming spectrotemporal patterns and compares them with previously experienced sound patterns [Griffiths and Warren, 2002]. In the context of moving sounds, PT might compute a movement trajectory by continuously segregating the spectrotemporal pattern that corresponds to the original sound object from movement‐transformed versions of itself. The present results support the view that the right medial PT plays a dominant role in this computational process [see also Baumgart et al., 1999; Seifritz et al., 2002].

The functional segregation that we have demonstrated in the present study may reflect differences between subdivisions of PT that are sensitive to either object‐related or space‐related acoustic information. It is tempting to infer homologies with non‐human primates using the functional scheme that has been proposed in the macaque [Rauschecker and Tian, 2000]. Electrode recordings in rostral and caudal lateral belt areas indicated that the anterior field responded more vigorously to a range of vocalisations than to sounds localised in space, while the posterior field displayed the converse properties. One could speculate that lateral PT is a homologue of the anterior lateral belt (“what”) and that medial PT is a homologue of the posterior lateral belt (“where”). However, peaks of activation associated with nonspatial auditory processing have also been identified in medial PT. For example, right PT has been implicated in the discrimination of discrete changes in sound level [Belin et al., 1998]. A discrete site in medial PT has also been implicated in the analysis of low‐frequency FM sweeps [Schönwiesner et al., 2002]. Therefore, the mode of activation within each subdivision of PT may not be uniquely defined according to a spatial/non‐spatial distinction.

How Is the Pattern of Brain Activation Influenced by the Listening Task?

The effects in non‐auditory cortical areas are somewhat less compelling than those in PT, but generally support the view that parietal cortex activation is determined more by the nature of the listening task than by the acoustic stimulus. Activation in the left superior parietal cortex was observed in a number of statistical contrasts, principally those in which the task required motion detection. Previous studies have implicated the superior parietal cortex in the analysis of sound location and motion [e.g., Griffiths et al., 2000; Weeks et al., 2000], although right hemisphere dominance is generally reported. In the present study, when data were collapsed across stimulus condition, activation in the left superior parietal lobe was present during the motion detection task, but not during the modulation detection task. For the subset of trials that were presented in the context of the motion detection task, regions of the left superior parietal cortex were engaged in processing both moving and stationary sounds. This result suggests that the left superior parietal activation depends more on the subject attending to the presence or absence of motion in the stimuli than on the actual presence of acoustic cues for motion. Regions of superior parietal cortex may therefore play a role in attending to spatial sound properties, rather than in the processing of spatial sound cues per se. A task‐based explanation of parietal activation by spatial sound cues has also been posited by Zatorre et al. [2002], who suggest that the right inferior parietal cortex is involved in tasks requiring the use of sound location information to guide motor responses.
Although this account is consistent with the visual‐domain model in which the posterior parietal lobe mediates the control of goal‐directed action [Hickok and Poeppel, 2000; Milner and Goodale, 1995], it does not provide a straightforward explanation of the present data: the task required simple detection of the presence of motion rather than explicit sensorimotor integration, and the activation was located in the left parietal lobe. A plausible alternative account is that the left parietal activation reflects modulation of neural activity by the subject's expectation of, and orienting towards, the moving sound source as a result of the specific task instructions. Expectations about the spatial properties of an upcoming stimulus have been shown to modulate activation around the intraparietal sulcus in the left and right hemispheres [Corbetta et al., 2000; Shulman et al., 1999].

Some activation was observed in the inferior frontal gyrus for the unmodulated moving tone contrasted with silence (see Fig. 2C). These activation foci do not coincide with the postulated target sites of the anterior or posterior routes in the prefrontal cortex. It is unlikely that this activation is specific to motion or motion detection, as this and surrounding areas of frontal cortex have been shown to be activated by many different cognitively demanding tasks. Maeder et al. [2001] also reported some activation of a similar area of inferior frontal cortex, but in their study this activation was selective for auditory recognition rather than localisation.

Activation was also observed in extrastriate occipital cortex. Although visual cortex is not generally recognised as a component of the temporo‐parieto‐frontal network engaged during auditory detection, it has been reported to be activated by sound recognition [Maeder et al., 2001]. The reason for this visual cortex activation is unclear. We speculate that it may reflect the use of visual imagery by the subjects to judge whether the sound was moving or stationary, modulated or unmodulated [see Klein et al., 2000].

CONCLUSIONS

We have demonstrated that the processing of spatial and nonspatial sound features is segregated in different areas of the non‐primary auditory cortex. The presence of superior parietal activation during attention to sound motion is consistent with the existence of a posterior pathway for spatial processing. No activation was observed in the postulated prefrontal target sites of the anterior and posterior connection routes. The selectivity of the superior parietal activation in the present study for motion detection trials suggests that higher centres involved in sound processing may be engaged by attention to sound features or location, rather than by the processing of stimuli with different spectro‐temporal or spatial properties per se.

REFERENCES

  1. Alain C, Arnott SR, Hevenor S, Graham S, Grady CL (2001): “What” and “where” in the human auditory system. Proc Natl Acad Sci USA 98: 12301–12306.
  2. Andersen RA, Snyder LH, Bradley DC, Xing J (1997): Multimodal representation of space in the posterior parietal cortex and its use in planning movements. Ann Rev Neurosci 20: 303–330.
  3. Baumgart F, Gaschler‐Markefski B, Woldorff MG, Heinze HJ, Scheich H (1999): A movement‐sensitive area in auditory cortex. Nature 400: 724–726.
  4. Belin P, Zatorre RJ (2000): 'What', 'where' and 'how' in auditory cortex. Nature Neurosci 3: 965.
  5. Belin P, McAdams S, Smith B, Savel S, Thivard L, Samson S (1998): The functional anatomy of sound intensity discrimination. J Neurosci 18: 6388–6394.
  6. Binder J, Frost JA, Hammeke TA, Bellgowan PSF, Springer JA, Kaufman JN, Possing ET (2000): Human temporal lobe activation by speech and nonspeech sounds. Cereb Cortex 10: 512–528.
  7. Bremmer F, Schlack A, Shah NJ, Zafiris O, Kubischik M, Hoffmann K‐P, Zilles K, Fink GR (2001): Polymodal motion processing in posterior parietal and premotor cortex: A human fMRI study strongly implies equivalencies between humans and monkeys. Neuron 29: 287–296.
  8. Burkhard MD, Sachs RM (1975): Anthropometric manikin for acoustic research. J Acoust Soc Am 58: 214–222.
  9. Bushara K (1999): Modality‐specific frontal and parietal areas for auditory and visual spatial localization in humans. Nature Neurosci 2: 759–766.
  10. Corbetta M, Kincade JM, Ollinger JM, McAvoy MP, Shulman GL (2000): Voluntary orienting is dissociated from target detection in human posterior parietal cortex. Nature Neurosci 3: 292–297.
  11. Evans AC, Collins DL, Mills SR, Brown ED, Kelly RL, Peters TM (1993): 3D statistical neuroanatomical models from 305 MRI volumes. Proc IEEE Nuclear Sci Symp Med Imag Conf, p 1813–1817.
  12. Friston KJ, Frith CD, Turner R, Frackowiak RSJ (1995): Characterizing evoked hemodynamics with fMRI. Neuroimage 2: 157–165.
  13. Friston KJ, Williams S, Howard R, Frackowiak RSJ, Turner R (1996a): Movement‐related effects in fMRI time‐series. Magnet Reson Med 35: 346–355.
  14. Friston KJ, Holmes A, Poline J‐B, Price CJ, Frith CD (1996b): Detecting activations in PET and fMRI: Levels of inference and power. Neuroimage 4: 223–235.
  15. Galaburda A, Sanides F (1980): Cytoarchitectonic organization of the human auditory cortex. J Comp Neurol 190: 597–610.
  16. Giraud AL, Lorenzi C, Ashburner J, Wable J, Johnsrude I, Frackowiak R, Kleinschmidt A (2000): Representation of the temporal envelope of sounds in the human brain. J Neurophysiol 84: 1588–1598.
  17. Griffiths TD, Green GGR (1999): Cortical activation during perception of a rotating wide‐field acoustic stimulus. Neuroimage 10: 84–90.
  18. Griffiths TD, Warren JD (2002): The planum temporale as a computational hub. Trends Neurosci 25: 348–353.
  19. Griffiths TD, Buchel C, Frackowiak RSJ, Patterson RD (1998a): Analysis of temporal structure in sound by the human brain. Nature Neurosci 1: 422–427.
  20. Griffiths TD, Rees G, Rees A, Green GR, Witton C, Rowe D, Buchel C, Turner R, Frackowiak RSJ (1998b): Right parietal cortex is involved in the perception of sound movement in humans. Nature Neurosci 1: 74–79.
  21. Griffiths TD, Johnsrude I, Dean JL, Green GG (1999): A common neural substrate for the analysis of pitch and duration pattern in segmented sound? NeuroReport 10: 3825–3830.
  22. Griffiths TD, Green GGR, Rees A, Rees G (2000): Human brain areas involved in the analysis of auditory movement. Hum Brain Mapp 9: 72–80.
  23. Griffiths TD, Uppenkamp S, Johnsrude I, Josephs O, Patterson RD (2001): Encoding of the temporal regularity of sound in the human brainstem. Nature Neurosci 4: 633–637.
  24. Hall DA, Johnsrude IS, Haggard MP, Palmer AR, Akeroyd MA, Summerfield AQ (2002): Spectral and temporal processing in human auditory cortex. Cereb Cortex 12: 140–149.
  25. Hart HC, Palmer AR, Hall DA (2003): Regions outside primary auditory cortex in humans are more activated by modulated than by unmodulated stimuli. Cereb Cortex (in press).
  26. Harms MP, Melcher JR (2002): Sound repetition rate in the human auditory pathway: representations in the waveshape and amplitude of fMRI activation. J Neurophysiol 88: 1433–1450.
  27. Hickok G, Poeppel D (2000): Towards a functional neuroanatomy of speech perception. Trends Cognit Sci 4: 131–138.
  28. Holmes AP, Friston KJ (1998): Generalisability, random‐effects and population inference. Neuroimage 7: S754.
  29. Klein I, Paradis AL, Poline JB, Kosslyn SM, Le Bihan D (2000): Transient activity in the human calcarine cortex during visual‐mental imagery: an event‐related fMRI study. J Cognit Neurosci 12: 15–23.
  30. Lewis JW, Beauchamp MS, DeYoe EA (2000): A comparison of visual and auditory motion processing in human cerebral cortex. Cereb Cortex 10: 873–888.
  31. Maeder PP, Meuli RA, Adriani M, Bellmann A, Fornari E, Thiran J‐P, Pittet A, Clarke S (2001): Distinct pathways involved in sound recognition and localization: A human fMRI study. Neuroimage 14: 802–816.
  32. Maes F, Collignon A, Vandermeulen D, Marchal G, Suetens P (1997): Multimodality image registration by maximization of mutual information. IEEE Trans Med Imag 16: 187–198.
  33. Milner AD, Goodale MA (1995): The visual brain in action. New York: Oxford University Press, p 25–66.
  34. Palmer AR, Bullock DC, Chambers JD (1998): A high‐output, high‐quality sound system for use in auditory fMRI. Neuroimage 7: S359.
  35. Pavani F, Macaluso E, Warren JD, Driver J, Griffiths TD (2002): A common cortical substrate activated by horizontal and vertical sound movement in the human brain. Curr Biol 12: 1584–1590.
  36. Penhune VB, Zatorre RJ, MacDonald JD, Evans AC (1996): Interhemispheric anatomical differences in human primary auditory cortex: Probabilistic mapping and volume measurement from magnetic resonance scans. Cereb Cortex 6: 661–672.
  37. Penhune VB, Zatorre RJ, Evans AC (1998): Cerebellar contributions to motor timing: a PET study of auditory and visual rhythm reproduction. J Cognit Neurosci 10: 752–765.
  38. Rauschecker JP (1998): Parallel processing in the auditory cortex of primates. Audiol Neuro‐otol 3: 86–103.
  39. Rauschecker JP, Tian B (2000): Mechanisms and streams for processing of “what” and “where” in auditory cortex. Proc Natl Acad Sci USA 97: 11800–11806.
  40. Rivier F, Clarke S (1997): Cytochrome oxidase, acetylcholinesterase and NADPH‐diaphorase staining in human supratemporal and insular cortex: Evidence for multiple auditory areas. Neuroimage 6: 288–304.
  41. Romanski LM, Tian B, Fritz J, Mishkin M, Goldman‐Rakic P, Rauschecker J (1999): Dual streams of auditory afferents target multiple domains in the primate prefrontal cortex. Nature Neurosci 2: 1131–1136.
  42. Schönwiesner M, von Cramon DY, Rübsamen R (2002): Is it tonotopy after all? Neuroimage 17: 1144–1161.
  43. Scott SK, Blank CC, Rosen S, Wise RJS (2000): Identification of a pathway for intelligible speech in the left temporal lobe. Brain 123: 2400–2406.
  44. Seifritz E, Neuhoff JG, Bilecen D, Scheffler K, Mustovic H, Schächinger H, Elefante R, Di Salle F (2002): Neural processing of auditory looming in the human brain. Curr Biol 12: 2147–2151.
  45. Shulman GL, Ollinger JM, Akbudak E, Conturo TE, Snyder AZ, Petersen SE, Corbetta M (1999): Areas involved in encoding and applying directional expectations to moving objects. J Neurosci 19: 9480–9496.
  46. Thivard L, Belin P, Zilbovicius M, Poline JB, Samson Y (2000): A cortical region sensitive to auditory spectral motion. Neuroreport 11: 2969–2972.
  47. Wallace MN, Johnston PW, Palmer AR (2002): Histochemical identification of cortical areas in the auditory region of the human brain. Exp Brain Res 143: 499–508.
  48. Warren JD, Zielinski BA, Green GGR, Rauschecker JP, Griffiths TD (2002): Perception of sound‐source motion by the human brain. Neuron 34: 139–148.
  49. Weeks R, Aziz‐Sultan A, Bushara KO, Tian B, Wessinger CM, Dang N, Rauschecker JP, Hallett M (1999): A PET study of human auditory spatial processing. Neurosci Lett 262: 155–158.
  50. Wessinger CM, VanMeter J, Tian B, Van Lare J, Pekar J, Rauschecker JP (2001): Hierarchical organization of the human auditory cortex revealed by functional magnetic resonance imaging. J Cognit Neurosci 13: 1–7.
  51. Westbury CF, Zatorre RJ, Evans AC (1999): Quantifying variability in the planum temporale: A probability map. Cereb Cortex 9: 392–405.
  52. Zatorre RJ, Bouffard M, Ahad P, Belin P (2002): Where is ‘where’ in the human auditory cortex? Nature Neurosci 5: 905–909.
