Abstract
The patterns of cortico-cortical and cortico-thalamic connections of auditory cortical areas in the rhesus monkey have led to the hypothesis that acoustic information is processed in series and in parallel in the primate auditory cortex. Recent physiological experiments in the behaving monkey indicate that the response properties of neurons in different cortical areas are both functionally distinct from each other, which is indicative of parallel processing, and functionally similar to each other, which is indicative of serial processing. Thus, auditory cortical processing may be similar to the serial and parallel “what” and “where” processing by the primate visual cortex. If “where” information is serially processed in the primate auditory cortex, neurons in cortical areas along this pathway should have progressively better spatial tuning properties. This prediction is supported by recent experiments that have shown that neurons in the caudomedial field have better spatial tuning properties than neurons in the primary auditory cortex. Neurons in the caudomedial field are also better than primary auditory cortex neurons at predicting the sound localization ability across different stimulus frequencies and bandwidths in both azimuth and elevation. These data support the hypothesis that the primate auditory cortex processes acoustic information in a serial and parallel manner and suggest that this may be a general cortical mechanism for sensory perception.
One of the fundamental tasks of the auditory system is to determine the spatial location of acoustic stimuli. In contrast to the visual and somatosensory systems, the auditory periphery cannot encode stimulus location, but can only encode the presence of particular stimulus frequencies in the input. The central nervous system therefore must compute the spatial location of a stimulus by integrating the responses of many individual sensory receptors.
There are three main cues that can be used to compute the spatial location of an acoustic stimulus: interaural intensity differences, interaural time or phase differences, and the shape of the stimulus spectrum at the tympanic membrane (1). The binaural cues are critical for localization in azimuth but are much less effective for localization in elevation, because the ears of most mammals are located symmetrically on the head. However, reflections of the acoustic signal by the torso, head, pinna, and ear canal create spectral peaks and notches that vary with stimulus elevation (2, 3). Although the physical cues that could provide the necessary information to localize sounds are well defined, how the nervous system uses these cues to calculate the spatial location of acoustic stimuli is far from resolved. Several stations along the ascending auditory pathway in mammals integrate the spatial cues necessary for the localization of sounds, including the superior olivary complex (4), the inferior colliculus (5–7), and the thalamus (8–10). The spatial tuning of the majority of auditory cortical neurons is very broad, commonly over 90° for a half-maximal response (11–16). In contrast, primates can detect changes in sound location as small as a few degrees or less (17–22). This finding may appear to indicate that the auditory cortex is not necessary for this perception, yet auditory cortical lesions produce clear deficits in sound localization performance in cats (23), ferrets (24), New World monkeys (25), Old World monkeys (26), and humans (27). Thus, a key question is how these broadly tuned auditory cortical neurons process acoustic information to ultimately yield the perception of acoustic space.
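To make the binaural timing cue concrete, the following sketch computes the interaural time difference (ITD) predicted by the classic Woodworth spherical-head approximation. The head radius and speed of sound are illustrative (human-scale) defaults, not parameters taken from the studies discussed here.

```python
import numpy as np

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Interaural time difference (s) for a rigid spherical head.

    Classic Woodworth far-field approximation: ITD = (a / c) * (theta + sin(theta)),
    reasonable for azimuths within +/- 90 degrees of the midline.
    """
    theta = np.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + np.sin(theta))

# A 10-degree azimuth shift yields an ITD of only ~89 microseconds with the
# head radius assumed here, illustrating how fine the binaural timing cue is
# relative to behavioral thresholds of a few degrees.
print(woodworth_itd(10.0) * 1e6)
```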
The auditory cortex of the primate can be anatomically subdivided into several “core,” “belt,” and “parabelt” cortical areas based on cytoarchitecture, cortico-cortical connections, and thalamo-cortical connections (see refs. 28 and 29). It has been speculated that these multiple auditory cortical areas process acoustic information in both a serial and a parallel manner (28), similar to visual cortical processing of “what” and “where” information (30, 31). Although the available anatomical data are consistent with this hypothesis, there are relatively few electrophysiological studies in the monkey to either support or refute this idea. Merzenich and Brugge (32) were the first to describe the physiological properties of the macaque primary auditory cortex (AI) and the rostral field (R) in the core area, and the caudomedial (CM) and lateral (L) fields of the belt area. Based on multiple-unit responses in the anesthetized animal, they found that AI and R neurons had sharper frequency tuning than those in CM. Subsequent studies (33, 34) support these initial observations. More recent studies indicate that neurons in L of the belt area respond better to spectrally complex stimuli, including vocalizations (35), which suggests that L processes “what” information. In contrast, the caudal and medial fields have been proposed to process “where” information. Neurons in CM have broad frequency tuning, and the responses of CM neurons to tone stimuli depend on an intact AI (36). These limited physiological data are consistent with serial processing of acoustic information from the core to the belt auditory cortical areas, and this relatively new hypothesis is currently being rigorously tested in several laboratories.
Neuronal Activity as a Function of Stimulus Frequency and Intensity
Previous electrophysiological studies in the primate auditory cortex have largely been done in anesthetized animals. However, the activity of neurons in the primate auditory cortex can either increase or decrease depending on whether the monkey is attending to the stimulus, is not attending to the stimulus, or is anesthetized (11, 37, 38). To define the frequency and intensity responses of primate cortical neurons in the attended state, single-neuron responses were recorded in monkeys while they performed a sound localization task (39). In this experiment, tone stimuli at 31 different frequencies (2- to 5-octave range) and 16 different intensities (90-dB range) were presented from a speaker located directly opposite the contralateral ear. Fig. 1 shows representative frequency response areas (FRAs) measured across three different auditory cortical areas in a representative monkey. The normalized firing rate for each stimulus is indicated by the color, with red regions corresponding to the stimuli that elicited the greatest activity and blue regions corresponding to stimuli that elicited activity significantly greater than the spontaneous rate but less than 25% of the peak activity. The frequency range tested was adjusted for each neuron, as the frequency tuning could be quite different between neurons in different cortical areas (Fig. 1 Upper Right). These experiments demonstrated that AI neurons in the behaving monkey had relatively sharp frequency tuning (e.g., Fig. 1 B, C, E, and F). In contrast, neurons in CM generally had broader frequency tuning (Fig. 1 D, G, and H), even for neurons with similar characteristic frequencies (CFs), defined as the frequency that elicited a response at the lowest intensity (Fig. 1 C vs. G). There was also a shift in CF when crossing the border between different cortical areas, for example from L to AI (Fig. 1 A vs. B) or between AI and CM (Fig. 1 C vs. D and Fig. 1 F vs. G).
Figure 1.
Frequency response areas of single auditory cortical neurons. Responses were recorded to 50-ms tone stimuli (3-ms rise/fall) presented at 16 different intensity levels [10- to 90-dB sound pressure level (SPL)] at 31 different frequencies spanning 2–5 octaves from a free-field speaker located directly opposite to the contralateral ear. The color corresponds to the percent of the maximum response recorded in that neuron. White areas correspond to areas where the activity was not significantly greater than the spontaneous rate. Each FRA shows the response of a single neuron from the cortical location shown in I. The frequency range was customized for each neuron and therefore will vary between panels. The 25% contour (50% for the neuron shown in A) is reproduced on the same frequency axis to allow comparisons of the frequency bandwidth across neurons (Upper Right), and the CF is given above each FRA. (I) Dorsal view of the recording locations for each neuron. The heavy line shows the physiological boundaries of AI. Thin lines show the region investigated in the study. Circled letters correspond to the different panels shown in the figure. Note the differences in frequency tuning between neurons in AI and other cortical fields. Adapted from ref. 39.
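Given the definition of CF above, the following is a minimal sketch of how a CF might be extracted from a measured FRA. The matrix layout and the simple response criterion are illustrative assumptions; the actual study (ref. 39) used a statistical comparison against the spontaneous rate.

```python
import numpy as np

def characteristic_frequency(rates, freqs_hz, spont_rate, criterion=2.0):
    """Estimate the CF from a frequency response area (FRA).

    rates: (n_intensities, n_freqs) array of mean firing rates, with rows
    sorted from the lowest to the highest stimulus intensity.
    The CF is taken as the frequency driving an above-criterion response at
    the lowest intensity that evokes any such response (the tip of the FRA).
    """
    rates = np.asarray(rates, dtype=float)
    freqs_hz = np.asarray(freqs_hz, dtype=float)
    responsive = rates > criterion * spont_rate   # placeholder response criterion
    for row in responsive:                        # scan upward from the lowest level
        if row.any():
            tip = np.flatnonzero(row)
            return float(freqs_hz[tip].mean())    # center of the tip if several bins
    return None                                   # no driven response anywhere
```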
Fig. 2 shows representative FRAs recorded from neurons in a second monkey. Neurons in R showed tuning functions similar to those of AI neurons (compare Fig. 2 A–G to H–K, M, and O). Again, the AI-CM border was easily identified in this monkey by the change in the frequency tuning and the CF (Fig. 2 K–P).
Figure 2.
FRAs recorded in the three different auditory cortical areas in a second monkey. (A–G) FRAs from single neurons recorded in R. (H–J) FRAs recorded at the rostral border of AI. (K, M, and O) Neurons recorded at the medial border of AI. (L, N, and P) Neurons recorded in CM near the AI-CM border. The characteristic frequency is shown within each FRA. Other conventions are as in Fig. 1. Adapted from ref. 39.
These observations are consistent with those described in the anesthetized monkey and indicate that different auditory cortical areas have distinct functional properties even for simple tone stimuli. Statistical analysis confirmed that CM neurons had the broadest frequency tuning of all fields examined, and neurons in R had the narrowest frequency tuning (39). The ability to integrate information across a broad frequency range would likely improve spatial processing, as binaural and spectral cues across different frequencies could be used, and broadband stimuli are more easily localized than narrowband stimuli (see below). The broad frequency tuning of neurons in CM would make them ideally suited to integrate information across frequencies, consistent with the hypothesis that AI and CM form part of a serial “where” processing stream of auditory information (28).
Neuronal Activity as a Function of Stimulus Location
The hypothesis that AI and CM neurons process auditory spatial information in series predicts that the spatial response properties of these neurons should improve between AI and CM. To address this issue, the responses of neurons in these areas were measured while the monkey performed a sound localization task, and the neuronal activity was compared with the monkey's sound localization performance (16). To determine sound localization thresholds, the monkey depressed a lever to initiate a trial, and several stimuli were presented from directly in front of the monkey. At some random time the stimulus changed location in either azimuth or elevation. When the monkey detected this change, it released the lever and received a reward. The sound localization threshold was defined as the distance between locations necessary for the monkey to detect a difference on half of the trials. These thresholds are shown for two monkeys in Fig. 3. The filled bars show the thresholds measured in azimuth and the open bars show thresholds measured in elevation for tone stimuli of different frequencies (Left) or noise stimuli with different spectral content (Right). Across these different stimuli, the thresholds for localization in azimuth were lower than those for localization in elevation. This difference was greatest for tone stimuli, where in most cases the elevation thresholds could not be measured because 30° was the maximum change in location tested. For noise stimuli, there was a progressive improvement in elevation thresholds as the stimulus contained higher frequency components: the worst thresholds were noted for stimuli containing 750–1,500 Hz, thresholds improved for the 3,000–6,000 Hz and 5,000–10,000 Hz bands, and the lowest thresholds were noted when the stimulus was a broadband noise containing all of those frequencies. There was no comparably obvious trend as a function of tone stimulus frequency.
Figure 3.
Sound localization thresholds across stimulus frequencies and bandwidths. Thresholds are shown for localization in azimuth (solid bars) and elevation (open bars). Thresholds could not be defined if they were greater than 30° (broken lines). Noise stimuli consisted of 1-octave band-passed noise (L: 750–1,500 Hz; M: 3,000–6,000 Hz; H: 5,000–10,000 Hz) and broadband noise (NS). Adapted from ref. 16.
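The threshold definition used in these experiments (the displacement detected on half of the trials) reduces to a simple computation over the psychometric data. The sketch below uses linear interpolation and assumes that detection probability rises monotonically with displacement; real psychometric data are usually fit with a sigmoid, but the 50% crossing point is the same idea.

```python
import numpy as np

def localization_threshold(displacements_deg, p_detect, criterion=0.5):
    """Displacement at which detection probability reaches the criterion.

    displacements_deg: tested changes in location, sorted ascending.
    p_detect: proportion of trials on which the change was detected, assumed
    to increase monotonically with displacement.
    Returns None when performance never reaches the criterion, corresponding
    to the unmeasurable (>30 degree) thresholds in Fig. 3.
    """
    d = np.asarray(displacements_deg, dtype=float)
    p = np.asarray(p_detect, dtype=float)
    if p.max() < criterion:
        return None
    return float(np.interp(criterion, p, d))   # linear interpolation to the 50% point
```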
The activity of single neurons also was recorded in these monkeys while they performed a similar task. Each neuron was tested with two stimuli on randomly interleaved trials: one was a tone near the characteristic frequency of the neuron and the other was a noise stimulus whose passband included the CF. Both of these stimuli usually elicited a robust response from the neuron under study. A typical example from an AI neuron is shown in Fig. 4. To the left are poststimulus time histograms (PSTHs) taken over 10 trials in which either a tone (Fig. 4A) or band-passed noise (Fig. 4B) was presented from one of 17 different locations in front of the monkey. Stimuli were positioned straight ahead and at 15° and 30° eccentricity along the horizontal, vertical, and both oblique axes. Fig. 4 shows the PSTHs at their relative locations in this region of frontal space. This neuron had a more robust response when the stimuli were presented to the right of the midline (in contralateral space) than when the stimuli were presented to the left of the midline. However, there was little difference in activity as a function of the elevation of the stimulus. This can be most readily appreciated by comparing the middle row of PSTHs (azimuth tuning) to the middle column of PSTHs (elevation tuning). The three-dimensional reconstructions of these responses are shown to the right of each plot in Fig. 4. These plots were normalized to the peak activity of that neuron measured across all locations for both stimuli, with the response shown on the z axis as a function of the stimulus azimuth and elevation. The response contour for noise stimuli had a greater slope than that for the tone stimuli, indicating that this neuron was more sensitive to the location of noise stimuli than to the location of tones.
Figure 4.
Spatial response profiles of an AI neuron. PSTHs are shown in their relative position from the monkey's perspective (rightward PSTHs correspond to stimuli presented to the right of midline). Numbers above the most eccentric PSTHs correspond to the location in degrees (azimuth, elevation). Each PSTH shows the responses over 10 trials. Tone (A) and noise (B) stimuli were presented on randomly interleaved trials. In the color-coded three-dimensional plots the response was normalized by the maximum response recorded for that neuron to any of the 17 locations using either the tone or noise stimulus. The magnitude of the response at each azimuth and elevation is shown by the height of the contour. Heavy lines show regions with the same activity (iso-response contours).
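For reference, a PSTH of the kind shown in Figs. 4 and 5 can be computed from per-trial spike times as in the sketch below; the bin width and analysis window are arbitrary illustrative choices, not the values used in the original experiments.

```python
import numpy as np

def psth(spike_times_ms, bin_ms=5.0, window_ms=(0.0, 200.0)):
    """Poststimulus time histogram in spikes/s.

    spike_times_ms: one array of spike times (ms re: stimulus onset) per trial.
    Returns the left bin edges and the trial-averaged firing rate per bin.
    """
    edges = np.arange(window_ms[0], window_ms[1] + bin_ms, bin_ms)
    counts = np.zeros(len(edges) - 1)
    for trial in spike_times_ms:
        counts += np.histogram(trial, bins=edges)[0]
    # Normalize by the trial count and the bin width to obtain spikes/s.
    rate = counts / len(spike_times_ms) / (bin_ms / 1000.0)
    return edges[:-1], rate
```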
An example from a CM neuron is shown in Fig. 5. In this case, the neuron responded better to noise than to tones. Further, the response to noise was more strongly modulated by the stimulus location, as illustrated by the greater slope of the surface contour. Finally, there was a difference in the spatial preferences of this neuron depending on the stimulus. When tone stimuli were presented (Fig. 5A), there was essentially no modulation of the response as a function of the stimulus elevation, shown by the iso-intensity contours (heavy black lines), which run roughly parallel to the elevation axis. This can best be seen for stimuli at the midline (0° azimuth), where the neuronal response varied very little over 60° differences in elevation when tones were presented. In contrast, the response along the midline to noise stimuli (Fig. 5B) was greatest for upward elevations and smallest at the lowest stimulus elevation.
Figure 5.
Three-dimensional reconstructions of spatial responses from a representative CM neuron. Conventions are as in Fig. 4. This neuron had a weaker response to tone stimuli and a shallower response gradient as a function of stimulus location (A). The response to noise stimuli (B) showed greater modulation as a function of stimulus location, and the slope of this response contour was not aligned with either the elevation or the azimuth axis, indicating that this neuron carried information about both the azimuth and the elevation of the stimulus.
The results from both monkeys indicated that although most neurons responded to all stimulus locations, i.e., they were very broadly tuned, the main features of the neuronal responses were consistent with the behavioral ability to localize sounds. First, localization in elevation was very poor for tone stimuli, and few neurons (<10%) were encountered that had changes in their response as a function of the elevation of tone stimuli. In contrast, localization in elevation of noise stimuli containing high-frequency components was much better than for tone stimuli, and more neurons were encountered that were sensitive to the elevation of these noise stimuli (≈40%). Second, there was a greater rate of change in the response as a function of stimulus azimuth for noise stimuli compared with tone stimuli. Finally, the highest percentage of neurons were sensitive to the location of broadband noise (≈55% in azimuth and ≈30% in elevation for AI neurons; ≈80% in azimuth and ≈30% in elevation for CM neurons), which also yielded the lowest behavioral thresholds of all stimuli tested. These general observations suggest that the firing rates of single neurons could contain sufficient information for the monkey to localize these different types of stimuli.
Correlations Between Neural Activity and Sound Localization
These qualitative impressions were verified by directly comparing the neuronal and behavioral data. Fig. 6 shows the firing rate as a function of stimulus azimuth for a single AI neuron (A) and a single CM neuron (B). The task that was used to define thresholds (Fig. 3) required the monkey to detect a change in the location of the stimulus from directly ahead. If the monkey had access to the information provided by only one neuron, then a significant difference in activity relative to when the stimulus was presented directly in front of the monkey would be a reliable signal that the stimulus had changed location. The predicted threshold for each neuron was therefore defined as the distance that the stimulus would have to move for the activity to change by one standard deviation from the response when the stimulus was straight ahead (dashed lines of Fig. 6). This predicted threshold would be large if the spatial tuning of the neuron was relatively poor (slopes of the line near zero) and small if the response of the neuron was strongly modulated by stimulus position. The prediction was compared with the behavior by taking the ratio of the predicted threshold divided by the measured threshold. This ratio was less than one if the neuron predicted a smaller threshold than was observed, one if the neuronal prediction and the behavior were the same, and greater than one if the behavioral threshold was smaller than the neuronal prediction. If the neuronal responses reflect the sound localization ability, stimuli that the monkey had difficulty localizing should elicit poor spatial resolution in most neurons (which would predict high thresholds, for a ratio near 1.0), whereas stimuli that the monkey could easily localize should elicit sharp spatial resolution in most neurons (which would predict low thresholds, again for a ratio near 1.0). The distribution of this ratio for 353 AI neurons and 118 CM neurons is shown in the middle panels of Fig. 6. For both AI and CM, although most neurons predicted thresholds greater than those observed behaviorally, many neurons did predict thresholds consistent with the behavior. Further, CM neurons were better able to predict the behavior than AI neurons (P < 0.05), as indicated by more neurons having ratios close to 1.0 (compare the middle and right panels of Fig. 6 A and B).
Figure 6.
Predictions of behavioral performance by single neurons. The mean and standard deviation of the response of a single neuron as a function of the stimulus azimuth (0° elevation) are shown for an AI neuron (A) and a CM neuron (B). (Left) ▪ notes the response from the speaker located directly in front of the monkey. The ability of the neuron to predict the behavior was calculated as the distance in azimuth that corresponded to one standard deviation from the mean response at 0° (dashed lines). This prediction was tested against the behavior by dividing the predicted threshold by the measured threshold. (Center) The frequency distribution of this ratio when predicting thresholds in azimuth for tone stimuli measured across 353 AI neurons (A) and 118 CM neurons (B). Neurons that had a prediction greater than four times the measured threshold are shown in the right most bin. Ratios of 1.0 correspond to perfect predictions. (Right) The ratios when predicting thresholds in azimuth for noise stimuli. Adapted from ref. 16.
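The single-neuron threshold prediction described above reduces to a simple computation over each neuron's spatial tuning curve. The sketch below implements that criterion, with one simplification: it returns the smallest tested displacement that crosses the one-standard-deviation criterion rather than interpolating along the tuning curve as the dashed lines in Fig. 6 do.

```python
import numpy as np

def predicted_threshold(azimuths_deg, mean_rate, sd_rate):
    """Predict a localization threshold from a single neuron's azimuth tuning.

    The predicted threshold is the smallest tested displacement at which the
    mean response deviates from the straight-ahead (0 degree) response by at
    least one standard deviation of that straight-ahead response.
    """
    az = np.asarray(azimuths_deg, dtype=float)
    mu = np.asarray(mean_rate, dtype=float)
    sd = np.asarray(sd_rate, dtype=float)
    i0 = int(np.argmin(np.abs(az)))          # index of the straight-ahead speaker
    crossed = np.abs(mu - mu[i0]) >= sd[i0]
    crossed[i0] = False                      # ignore the reference location itself
    if not crossed.any():
        return None                          # tuning too flat to yield a prediction
    return float(np.min(np.abs(az[crossed])))

# ratio = predicted / measured: 1.0 is a perfect prediction, and values above
# 1.0 mean the behavior outperformed the neuron.
```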
The ability of some neurons to predict the behavior indicates that neurons in these areas could provide valuable information to the monkey about the spatial location of the stimulus. However, there was a wide variation in threshold ratios, meaning that many cells performed better or worse than the monkey. Because all neurons responded to these stimuli, they were presumably conveying some information to the monkey. One possibility is that pooling the responses of all neurons would enhance the ability to predict the behavior. Alternatively, pooling the responses of all neurons could degrade the ability of the population to predict the behavior, because of the neurons that showed poor spatial sensitivity.
The results of an analysis of pooled neurons are shown in Fig. 7, where the mean and standard deviation across all comparisons (tone and noise stimuli for azimuth and elevation, 21 comparisons total) are shown for two populations of pooled neurons in each cortical area. Open bars show the results when all neurons tested in each cortical area were pooled: the neuronal predictions of the behavior for both AI and CM neurons were significantly worse than the measured behavior. A second level of analysis pooled the responses based on their spatial tuning. Significant spatial tuning was defined as a statistically significant correlation of the response as a function of stimulus location in at least one direction (azimuth or elevation) for at least one tested stimulus (tone or noise). The closed bars of Fig. 7 show that there was an improvement in the ability to predict the behavior when only these spatially sensitive neurons were pooled. For AI neurons, the improvement still resulted in predictions that were significantly worse than the measured behavior. For CM neurons, however, the predictions based on the pooled spatially sensitive neurons were not different from the behavioral thresholds measured in the monkey. This result indicates that relatively small populations of neurons in CM contained sufficient information for the monkey to perform the task.
Figure 7.
Mean and standard deviation for the predicted/measured ratio pooled across either all neurons measured in that cortical area (open bars) or restricted to only the neurons in that cortical area that had significant correlation between the neuronal activity and the spatial location for at least one stimulus (closed bars). Each bar represents the mean of the azimuth and elevation predictions for tone and noise stimuli (21 ratios total). Dashed line is through 1.0 (perfect prediction). Only the pooled spatially sensitive CM neurons had a ratio that was not significantly different from the behavior. Adapted from ref. 16.
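As a sketch of the selection rule used for the closed bars in Fig. 7, the function below flags a neuron as spatially sensitive. The use of a Pearson correlation and the dictionary layout of the response data are illustrative assumptions; ref. 16 specifies only a statistically significant correlation between activity and location.

```python
from scipy import stats

def spatially_sensitive(responses, alpha=0.05):
    """Flag a neuron as spatially sensitive.

    responses: dict mapping a (stimulus, axis) pair, e.g. ('noise', 'azimuth'),
    to a (locations_deg, firing_rates) tuple of equal-length sequences.
    A significant rate-location correlation for at least one stimulus along
    at least one axis qualifies the neuron.
    """
    for locations, rates in responses.values():
        r, p = stats.pearsonr(locations, rates)   # linear correlation (assumption)
        if p < alpha:
            return True
    return False
```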
These results are consistent with serial processing of spatial information from AI to CM in the primate auditory cortex. Neurons in CM showed better spatial tuning than AI neurons, and the ability to predict the behavior by all of the measured CM neurons was not significantly different from the predictions by only the spatially sensitive AI neurons. This is what would be expected if CM neurons were selectively activated by the spatially tuned AI neurons, ultimately leading to an enhanced representation of acoustic space during this serial processing. In support of this idea is the finding that CM neurons receive inputs from broad regions of AI that span much of the frequency representation (33, 36).
These results raise several obvious questions. The first is which other cortical areas also process spatial information. The experiments to date have concentrated on AI and CM, but it remains to be seen how neurons in other cortical areas also participate in this perception. It is likely that other cortical areas also will have spatially tuned neurons, as the “parallel” nature of information processing is almost certainly not strictly maintained. It is more likely that neurons across cortical areas process both “what” and “where” information to differing degrees to aid in ultimately “binding” these features to give rise to the percept of a real-world object.
A second question is how this information is used by the monkey. Although both AI and CM are likely to be necessary for sound location perception, it is unlikely that either alone is sufficient for this percept in the primate. The most likely scenario is that these neurons form one link in the serial processing of spatial information that is further processed in other auditory cortical fields, as well as in parietal (e.g., ref. 40) and/or frontal cortical areas (41). The inputs from CM are likely candidates to contribute to the creation of multimodal spatial perception in the parietal lobe (40).
In summary, the available physiological evidence is supportive of the hypothesis that spatial location is processed in series between AI and CM. It remains to be seen how the outputs of CM are further processed, and how this processing results in the perception of acoustic space. Similarly, other features of the acoustic stimulus may be preferentially processed in other cortical areas, for example in the L fields in the belt and parabelt areas. Finally, the role of the cortical areas in the core region, particularly AI and R, is still unclear. It may be that both areas process all types of information in parallel, or there may be a subdivision of feature processing at this initial cortical level. Nonetheless, these experiments on the cortical mechanisms of sound localization indicate that broadly tuned neurons can in fact provide information necessary to perform perceptual discriminations at a much finer resolution than the bandwidths of the neuronal tuning functions would suggest. This type of information processing may be a general mechanism by which the activity of neurons in the cerebral cortex leads to perception across sensory modalities (42–44).
Acknowledgments
I thank E. A. Disbrow, L. A. Krubitzer, J. A. Langston, M. L. Phan, T. K. Su, and T. M. Woods for helpful comments on earlier versions of this manuscript, and D. C. Guard, M. L. Phan, and T. K. Su for their participation in the described experiments. Funding was provided by National Institutes of Health Grant DC-02371, the Klingenstein Foundation, and the Sloan Foundation.
Abbreviations
- AI, primary auditory cortex
- CM, caudomedial
- FRA, frequency response area
- CF, characteristic frequency
- R, rostral field
- L, lateral field
- PSTH, poststimulus time histogram
Footnotes
This paper was presented at the National Academy of Sciences colloquium “Auditory Neuroscience: Development, Transduction, and Integration,” held May 19–21, 2000, at the Arnold and Mabel Beckman Center in Irvine, CA.
References
- 1. Middlebrooks J C, Green D M. Annu Rev Psychol. 1991;42:135–159. doi: 10.1146/annurev.ps.42.020191.001031.
- 2. Wightman F L, Kistler D J. J Acoust Soc Am. 1989;85:868–878. doi: 10.1121/1.397558.
- 3. Pralong D, Carlile S. J Acoust Soc Am. 1994;95:3435–3444. doi: 10.1121/1.409964.
- 4. Joris P X, Yin T C. J Neurophysiol. 1995;73:1043–1062. doi: 10.1152/jn.1995.73.3.1043.
- 5. Yin T C, Chan J C. J Neurophysiol. 1990;64:465–488. doi: 10.1152/jn.1990.64.2.465.
- 6. Kuwada S, Batra R, Yin T C, Oliver D L, Haberly L B, Stanford T R. J Neurosci. 1997;17:7565–7581. doi: 10.1523/JNEUROSCI.17-19-07565.1997.
- 7. Litovsky R Y, Yin T C. J Neurophysiol. 1998;80:1285–1301. doi: 10.1152/jn.1998.80.3.1285.
- 8. Clarey J C, Barone P, Imig T J. J Neurophysiol. 1994;72:2383–2405. doi: 10.1152/jn.1994.72.5.2383.
- 9. Barone P, Clarey J C, Irons W A, Imig T J. J Neurophysiol. 1996;75:1206–1220. doi: 10.1152/jn.1996.75.3.1206.
- 10. Imig T J, Poirier P, Irons W A, Samson F K. J Neurophysiol. 1997;78:2754–2771. doi: 10.1152/jn.1997.78.5.2754.
- 11. Benson D A, Hienz R D, Goldstein M H, Jr. Brain Res. 1981;219:249–267. doi: 10.1016/0006-8993(81)90290-0.
- 12. Middlebrooks J C, Pettigrew J D. J Neurosci. 1981;1:107–120. doi: 10.1523/JNEUROSCI.01-01-00107.1981.
- 13. Imig T J, Irons W A, Samson F R. J Neurophysiol. 1990;63:1448–1466. doi: 10.1152/jn.1990.63.6.1448.
- 14. Rajan R, Aitkin L M, Irvine D R F, McKay J. J Neurophysiol. 1990;64:872–887. doi: 10.1152/jn.1990.64.3.872.
- 15. Middlebrooks J C, Xu L, Eddins A C, Green D M. J Neurophysiol. 1998;80:863–881. doi: 10.1152/jn.1998.80.2.863.
- 16. Recanzone G H, Guard D C, Phan M L, Su T K. J Neurophysiol. 2000;83:2723–2739. doi: 10.1152/jn.2000.83.5.2723.
- 17. Alshuler M W, Comalli P E. J Aud Res. 1975;15:262–265.
- 18. Brown C H, Beecher M D, Moody D B, Stebbins W C. Science. 1978;201:753–754. doi: 10.1126/science.97785.
- 19. Brown C H, Beecher M D, Moody D B, Stebbins W C. J Acoust Soc Am. 1980;68:127–132. doi: 10.1121/1.384638.
- 20. Brown C H, Schessler T, Moody D, Stebbins W. J Acoust Soc Am. 1982;72:1804–1811. doi: 10.1121/1.388653.
- 21. Perrott D R, Saberi K. J Acoust Soc Am. 1990;87:1728–1731. doi: 10.1121/1.399421.
- 22. Recanzone G H, Makhamra S D D R, Guard D C. J Acoust Soc Am. 1998;103:1085–1097. doi: 10.1121/1.421222.
- 23. Jenkins W M, Merzenich M M. J Neurophysiol. 1984;52:819–847. doi: 10.1152/jn.1984.52.5.819.
- 24. Kavanagh G L, Kelly J B. J Neurophysiol. 1987;57:1746–1766. doi: 10.1152/jn.1987.57.6.1746.
- 25. Thompson G C, Cortez A M. Behav Brain Res. 1983;8:211–216. doi: 10.1016/0166-4328(83)90055-4.
- 26. Heffner H E, Heffner R S. J Neurophysiol. 1990;64:915–931. doi: 10.1152/jn.1990.64.3.915.
- 27. Sanchez-Longo L P, Forster F M. Neurology. 1958;8:119–125. doi: 10.1212/wnl.8.2.119.
- 28. Rauschecker J P. Audiol Neuro-Otol. 1998;3:86–103. doi: 10.1159/000013784.
- 29. Kaas J H, Hackett T A, Tramo M J. Curr Opin Neurobiol. 1999;9:164–170. doi: 10.1016/s0959-4388(99)80022-1.
- 30. Ungerleider L G, Mishkin M. In: Ingle D J, Goodale M A, Mansfield J W, editors. Analysis of Visual Behavior. Cambridge, MA: MIT Press; 1982. pp. 549–556.
- 31. Ungerleider L G, Haxby J V. Curr Opin Neurobiol. 1994;4:157–165. doi: 10.1016/0959-4388(94)90066-3.
- 32. Merzenich M M, Brugge J F. Brain Res. 1973;50:275–296. doi: 10.1016/0006-8993(73)90731-2.
- 33. Morel A, Garraghty P E, Kaas J H. J Comp Neurol. 1993;335:437–459. doi: 10.1002/cne.903350312.
- 34. Kosaki H, Hashikawa T, He J, Jones E G. J Comp Neurol. 1997;386:304–316.
- 35. Rauschecker J P, Tian B, Hauser M. Science. 1995;268:111–114. doi: 10.1126/science.7701330.
- 36. Rauschecker J P, Tian B, Pons T, Mishkin M. J Comp Neurol. 1997;382:89–103.
- 37. Miller J M, Sutton D, Pfingst B, Ryan A, Beaton R, Gourevitch G. Science. 1972;177:449–451. doi: 10.1126/science.177.4047.449.
- 38. Miller J M, Dobie R A, Pfingst B E, Hienz R D. Am J Otolaryngol. 1980;1:119–130. doi: 10.1016/s0196-0709(80)80004-4.
- 39. Recanzone G H, Guard D C, Phan M L. J Neurophysiol. 2000;83:2315–2331. doi: 10.1152/jn.2000.83.4.2315.
- 40. Mazzoni P, Bracewell R M, Barash S, Andersen R A. J Neurophysiol. 1996;75:1233–1241. doi: 10.1152/jn.1996.75.3.1233.
- 41. Romanski L M, Tian B, Fritz J, Mishkin M, Goldman-Rakic P S, Rauschecker J P. Nat Neurosci. 1999;2:1131–1136. doi: 10.1038/16056.
- 42. Bradley A, Skottun B C, Ohzawa I, Sclar G, Freeman R D. J Neurophysiol. 1987;57:755–772. doi: 10.1152/jn.1987.57.3.755.
- 43. Vogels R, Orban G A. J Neurosci. 1990;10:3543–3558. doi: 10.1523/JNEUROSCI.10-11-03543.1990.
- 44. Prince S J, Pointon A D, Cumming B G, Parker A J. J Neurosci. 2000;20:3387–3400. doi: 10.1523/JNEUROSCI.20-09-03387.2000.