Nature Communications. 2019 Jun 5;10:2465. doi: 10.1038/s41467-019-10365-z

Speaker-normalized sound representations in the human auditory cortex

Matthias J Sjerps 1,2, Neal P Fox 3, Keith Johnson 4, Edward F Chang 3,5
PMCID: PMC6549175  PMID: 31165733

Abstract

The acoustic dimensions that distinguish speech sounds (like the vowel differences in “boot” and “boat”) also differentiate speakers’ voices. Therefore, listeners must normalize across speakers without losing linguistic information. Past behavioral work suggests an important role for auditory contrast enhancement in normalization: preceding context affects listeners’ perception of subsequent speech sounds. Here, using intracranial electrocorticography in humans, we investigate whether and how such context effects arise in auditory cortex. Participants identified speech sounds that were preceded by phrases from two different speakers whose voices differed along the same acoustic dimension as target words (the lowest resonance of the vocal tract). In every participant, target vowels evoke a speaker-dependent neural response that is consistent with the listener’s perception, and which follows from a contrast enhancement model. Auditory cortex processing thus displays a critical feature of normalization, allowing listeners to extract meaningful content from the voices of diverse speakers.

Subject terms: Cortex, Cognitive neuroscience


Our perception of a speech sound tends to remain stable despite variation in people’s vocal characteristics. Here, by measuring neural activity as people listened to speech from different voices, the authors provide evidence for speaker normalization processes in the human auditory cortex.

Introduction

A fundamental computational challenge faced by perceptual systems is the lack of a one-to-one mapping between highly variable sensory signals and the discrete, behaviorally relevant features they reflect1,2. A profound example of this problem exists in human speech perception, where the main cues to speech sound identity also vary depending on speaker identity3–5.

For example, to distinguish one speaker’s /u/ (“boot”) and /o/ (“boat”), listeners rely primarily on the vowel’s first formant frequency (F1; the first vocal tract resonance, reflected as a dominant peak in the frequency spectrum) because it is lower for /u/ than for /o/6. However, people with long vocal tracts (typically tall male speakers) have overall lower resonance frequencies than those of speakers with shorter vocal tracts. Consequently, a tall person’s production of the word “boat” and a short person’s “boot” could be acoustically identical. Behavioral research has shown that preceding context allows listeners to tune in to the acoustic properties of a particular voice and normalize subsequent speech input7–11. One classic example of this effect is that a single acoustic token, ambiguous between /u/ and /o/, will be labeled as /o/ after a context sentence spoken by a tall-sounding person (low F1), but as /u/ after a context sentence spoken by a shorter-sounding person (high F1)9. Therefore, understanding speech involves a process that builds up a representation of the characteristics of a speaker’s voice and adjusts perception of new speech input to accommodate those characteristics.

There is considerable evidence that perceptually relevant sound representations arise within human parabelt (nonprimary) auditory cortex (AC). First, neural activity in the superior temporal gyrus (STG) is sensitive to acoustic-phonetic features, like F1, that are critical for recognizing and discriminating phonemes12–18. For example, vowels with low F1 frequencies (e.g., /u/, /i/) can be distinguished from vowels with relatively higher F1 frequencies (e.g., /o/, /æ/) based on local activity within human STG19. Second, the STG’s encoding of speech is not a strictly linear (veridical) encoding of the acoustics; rather, it reflects several properties of abstraction, including categorical perception, relative encoding of pitch, and attentional enhancement13,20,21. However, to date, it remains unknown whether speech representations in human AC also exhibit the type of context-dependence that could underlie speaker normalization.

Behavioral research in humans has previously suggested that normalization effects could partly arise from general auditory contrast enhancement mechanisms10,11,22–26, which are known to affect neurophysiological responses throughout the auditory hierarchy in animals27–29. Contrast enhancement models posit that adaptation to the frequency content of immediately preceding contexts—or their long-term average spectrum (see, e.g.,11,30)—affects the responses to novel stimuli depending on the amount of overlap in their frequency content with that of the context. Moreover, behavioral evidence suggests that this contrast enhancement (and, hence, normalization) should arise—at least in part—centrally. For instance, contralateral (dichotic) presentation of the context (either speech or nonspeech) and target drives similar contrast enhancement effects in speech categorization as does ipsilateral presentation of context and target11,25,31,32.
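To make the contrast enhancement idea concrete, the following toy sketch (in Python; not the authors' model, and all channel parameters, bandwidths, and adaptation strengths are illustrative assumptions) shows how adaptation to a context's F1 region can push the effective F1 of a fixed, ambiguous target in the opposite, contrastive direction:

```python
import numpy as np

# Toy sketch of auditory contrast enhancement (illustrative only; not the authors'
# model). Frequency channels cover the F1 region. A channel's response to the
# target is attenuated in proportion to how strongly the same channel was driven
# by the preceding context (adaptation). All parameter values are assumptions.

freqs = np.linspace(300.0, 700.0, 81)   # hypothetical F1-range channels (Hz)

def excitation(peak_hz, bandwidth_hz=60.0):
    """Gaussian excitation pattern around a spectral peak (arbitrary units)."""
    return np.exp(-0.5 * ((freqs - peak_hz) / bandwidth_hz) ** 2)

def adapted_response(target_f1, context_f1, adaptation=0.6):
    """Channel responses to a target after adaptation to the context's mean F1."""
    gain = 1.0 - adaptation * excitation(context_f1)  # context-driven channels respond less
    return excitation(target_f1) * gain

def effective_f1(response):
    """Centroid of the adapted excitation pattern: a crude 'perceived F1'."""
    return float(np.sum(freqs * response) / np.sum(response))

ambiguous = 450.0   # Hz, between a low-F1 /u/ and a high-F1 /o/
print(effective_f1(adapted_response(ambiguous, context_f1=380.0)))  # low-F1 context: pushed up (/o/-like)
print(effective_f1(adapted_response(ambiguous, context_f1=520.0)))  # high-F1 context: pushed down (/u/-like)
```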

Taken together, past work suggests a role for contrast enhancement in speech sound normalization in AC, but this prediction has not been directly demonstrated in neurophysiological studies in humans. Models of contrast enhancement make specific predictions about the responses of feature-tuned neuronal populations in AC30. Not only should the cortical representation of the same speech target depend on context, but, more specifically, context-dependent representations should differ in a particular (contrastive) way. That is, after a low-F1 context, the encoding of an ambiguous vowel target’s F1 should more closely resemble the encoding of high-F1 targets, while after a high-F1 context it should more closely resemble low-F1 targets. So far, contrast enhancement has been observed within frequency-tuned neurons in tonotopic primary AC in nonhuman mammals27,28, but related patterns have not yet been observed in human AC, let alone in the context of human speech perception.

To investigate the influence of speaker context on speech sound encoding in AC, we recorded neural activity from human participants implanted with subdural high-density electrode arrays that covered peri-sylvian language cortex while they listened to and identified target vowels presented in the context of sentences spoken by two different voices13,33. We found direct evidence of speaker-normalized neural representations of vowel sounds in parabelt AC, including STG. Critically, the observed normalization effects reflected the contrastive relation between the F1 range in the context sentences and F1 of the target vowels, providing direct evidence for context-dependent contrast enhancement in human speech perception. More generally, the results demonstrate the critical role of human auditory-speech cortex in compensating for variability by integrating incoming sounds with their surrounding acoustic contexts.

Results

Speech sound perception is dependent on context

We recorded neural activity directly from the cortical surface of five Spanish-speaking neurosurgical patients while they voluntarily participated in a speech sound identification task. They listened to Spanish sentences that ended in a (pseudoword) target, which they categorized as either “sufu” or “sofo” on each trial with a button press (Fig. 1a, b). The sentence-final targets comprised a digitally synthesized six-step continuum morphing from an unambiguous sufu to an unambiguous sofo, with four intermediate tokens (s?f?, i.e., spanning a perceptually ambiguous range). On each trial, a pseudo-randomly selected target was preceded by a context sentence (A veces se halla…; “At times she feels rather…”). Two versions of this sentence were synthesized, differing only in their mean F1 frequencies (Fig. 1a, c; Supplementary Fig. S1), yielding two contexts that listeners perceived as two different speakers: one with a long vocal tract (low F1; Speaker A) and one with a short vocal tract (high F1; Speaker B). Critically, F1 frequency is also the primary acoustic dimension that distinguishes between the vowels /u/ and /o/ in natural speech (in both Spanish and English) (Fig. 1a and Supplementary Fig. S1)6. Similar materials have previously been shown to induce a reliable shift in the perception of an /u/–/o/ continuum (a normalization effect) in healthy Spanish, English, and Dutch listeners7.

Fig. 1.

Fig. 1

Listeners perceive speech sounds relative to their acoustic context. a Target sounds were synthesized to create a six-step continuum ranging from sufu (step 1; low first formant [F1]) to sofo (step 6; high F1). Context sentences were synthesized to sound like two different speakers: a speaker with a long vocal tract (low-F1 range; Speaker A) and a speaker with a short vocal tract (high-F1 range; Speaker B). Context sentences contained only the vowels /e/ and /a/, but not the target vowels /u/ and /o/. b Context sentences preceded the target on each trial (separated by 0.5 s of silence), after which participants responded with a button press to indicate whether they heard “sufu” or “sofo”. c All targets were presented after both speaker contexts. d Listeners more often gave “sofo” responses to target sounds if the preceding context was spoken by Speaker A (low F1) than Speaker B (high F1). Error bars indicate s.e.m.

As expected, participants’ perception of the target continuum was affected by the F1 range of the preceding sentence context (βContext F1 = −1.70, t = −3.51, p < 0.001; Fig. 1d; see Supplementary Materials for details of the mixed effects logistic regression that was used). Specifically, participants were more likely to identify tokens as sofo (the vowel category corresponding to higher F1 values) after a low-F1 voice (Speaker A) compared to the same target presented after a high-F1 voice (Speaker B). Hence, listeners’ perceptual boundary between the /u/ and /o/ vowel categories shifted to more closely reflect the F1 range of the context speaker. Past work has interpreted this classical finding in light of the contrastive perceptual effects that are ubiquitous among sensory systems34: the F1 of a speech target will sound relatively higher (i.e., sound more like an /o/) after a low-F1 context sentence than after a high-F1 context. Behaviorally, this is reflected as a shift of the category boundary to lower-F1 values.

Human AC exhibits speaker-dependent speech representations

Two of the most influential hypotheses explaining the phenomenon of speaker normalization posit that: (1) contrast-enhancing processes, operating at early auditory processing levels, change the representation of the input signal in a normalizing way11,23,30,31; or (2) normalization does not alter the perceptual representation of speech, but instead arises as a consequence of a speaker-specific mapping of the auditory representation onto abstract linguistic units (i.e., listeners have learned to map an F1 of 400 Hz to /u/-words for speakers, or vocal tracts, that sound short, but to /o/-words for speakers that sound tall)35,36. Hence, while the contrast enhancement hypothesis predicts normalized representations in AC, the latter theory predicts that early auditory representations of speech cues remain unnormalized (i.e., independent of speaker context).

Past neurobiological work has demonstrated that neural populations in parabelt AC are sensitive to acoustic-phonetic cues that distinguish classes of speech sounds, including vowels, and not to specific phonemes per se19. Hence, the primary goal of the current study was to examine whether the neural representation of vowels in parabelt AC shifts in a contrast enhancing way relative to the acoustic characteristics of the preceding speaker, or, alternatively, whether such contrast enhancement (or normalization) is not reflected in parabelt AC processing. We first tested whether individual cortical sites that reliably differentiate between vowels (i.e., discriminate /u/ from /o/ in their neural response) exhibit normalization effects.

To this end, we examined stimulus-locked neural activity in the high-gamma band (70–150 Hz) at each temporal lobe electrode (n = 406 across patients; this number is used for all Bonferroni corrections below) during each trial. High-gamma activity is a spatially and temporally resolved neural signal that has been shown to reliably encode phonetic properties of speech sounds19,37,38, and is correlated with local neuronal spiking39–41. We used general linear regression models to identify local neural populations involved in the representation of context and/or target acoustics. Specifically, we examined the extent to which high-gamma activity at each electrode encoded stimulus properties during presentation of the context sentences (context window) or during presentation of the target (target window; see Supplementary Materials). The fully specified encoding models included numerical predictors for the target vowel F1 (steps 1–6) and context F1 (high vs. low), as well as their interaction. In the following, we focused on task-related electrodes, defined as the subset of temporal lobe electrodes for which a significant portion of the variance was explained by the full model, either during the target window or during the context window (p < 0.05; uncorrected, n = 98; see Supplementary Fig. S2).

Among the task-related electrodes, a subset displayed selectivity for target vowel F1 (Fig. 2a: electrodes displaying a main effect for target F1). Consistent with previous reports of AC tuning for vowels19, we observed that different subsets of electrodes displayed a preference for either sufu or sofo targets (color-coded in Fig. 2a). Fig. 2b and Fig. 2c (middle panel) display the response profile for one example electrode that had a sofo preference (e1; βTarget F1 = 2.80, t = 9.34, p = 1.10 × 10−18). Importantly, in addition to overall tuning to the target sound F1, the activation level of this electrode was modulated by the F1 range of the preceding context (Fig. 2b and bottom panel of Fig. 2c; βContext F1 = −2.23, t = −4.51, p = 8.3 × 10−6). This demonstrates that the responsiveness of a neural population that is sensitive to bottom-up acoustic cues is also affected by the distribution of that cue in a preceding context. The direction of this influence is the same as the behavioral normalization effect, such that a low-F1 speaker context was associated with stronger responses for sofo (high-F1) targets.

Fig. 2.

Fig. 2

The neural response to bottom-up acoustic input is modulated by preceding context. a Target vowel preferences and locations (plotted on a standardized brain) for electrodes from all patients (three with right hemisphere [RH] and two with left hemisphere [LH] grid implants). Only those temporal lobe electrodes where the full omnibus model was significant during the context and/or the target window (F-test; p < 0.05) are displayed. Strong target F1 selectivity is relatively uncommon: electrodes with a black-and-white outline are significant at Bonferroni corrected p < 0.05 (n = 9, out of 406 temporal lobe electrodes); a single black outline indicates significance at a more liberal threshold (p < 0.05, uncorrected; n = 28). Activity from the indicated electrode (e1) is shown in b and c. b Example of normalization in a single electrode (e1; z-scored high-gamma [high-γ] response averaged across the target window [target window marked in c]). c Activity from e1 across time, separating the endpoint targets (middle panel) or the contexts (bottom panel). The electrode responds more strongly to /o/ stimuli than /u/ stimuli, but also responds more strongly overall after Speaker A (low-F1). This effect is analogous to the behavioral normalization (Fig. 1d). Black bars at the bottom of the panels indicate significant time-clusters (cluster-based permutation test of significance). d Among all electrodes with significant target sound selectivity (n = 37 [9 + 28]), a relation exists between the by-electrode context effect and target preference. Both are expressed as a signed t-value, demonstrating that the size and direction of the target preferences predicts the size and direction of the context effects. e An LDA classifier was trained on the distributed neural responses elicited by the sufu and sofo endpoint stimuli using all task-related electrodes. This model was then used to predict classes for neural responses to (held-out) endpoint tokens and for the ambiguous steps in each context condition. Proportions of neurally based “sofo” predicted trials (thick lines) display a relative shift between the two context conditions (data from one example patient). Regression curves were fitted to these data for each participant separately to estimate 50% category boundaries per condition for panel f (thin lines). f The neural classification functions display a shift in category boundaries between context conditions for all patients individually. In b and c, error bars indicate ±1 s.e.m.

To quantify this normalization effect across all electrodes that display selectivity to target acoustics, we used linear mixed effects regression to estimate the relation between electrodes’ target preference (defined as the glm-based signed t-statistic of the target F1 factor during the target window) and their context effect (defined as the glm-based signed t-statistic of the context F1 factor during the target window) (see Supplementary Materials for further detail on this analysis). We found that the magnitude and direction of an electrode’s context effect was predicted by the magnitude and direction of its target preference (Fig. 2d). Crucially, this strong relationship had a negative slope, such that electrodes that had high-F1 target preferences (sofo > sufu) had stronger responses to targets after low-F1 context sentences (low-F1 context > high-F1 context; βTarg-F1-t = −0.32, t = −5.00, p = 1.53 × 10−5). Importantly, this demonstrates that the relationship between context response and target response reflects an encoding of the contrast between the formant properties of each, consistent with the normalization pattern observed in the behavioral responses (Fig. 1d) and with the predictions of a contrast enhancement model of speech normalization.
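A minimal sketch of this electrode-level regression, assuming a hypothetical table with one row per F1-tuned electrode and columns target_t, context_t, and patient (the original analyses were implemented in Matlab; this sketch uses Python and statsmodels):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Sketch of the electrode-level regression (hypothetical file and column names):
# one row per F1-tuned electrode, with the signed t-statistic of its target-F1
# preference ('target_t'), the signed t-statistic of its context-F1 effect during
# the target window ('context_t'), and the patient it was recorded from ('patient').
electrodes = pd.read_csv("electrode_tstats.csv")

# Linear mixed-effects regression with electrodes grouped within patients:
# a negative slope for target_t indicates contrastive (normalizing) context effects.
fit = smf.mixedlm("context_t ~ target_t", data=electrodes, groups="patient").fit()
print(fit.summary())
```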

Normalization of vowel representations in all participants

Figure 2d demonstrates that local populations in AC that display tuning for specific target vowel F1 ranges exhibit normalization. However, only a few electrodes (n = 9, out of 406 temporal lobe electrodes) displayed very strong tuning (significance at Bonferroni-corrected p < 0.05), while the majority of F1-tuned electrodes displayed only moderate tuning and moderate context effects. Moreover, not all participants had electrodes that displayed strong target F1 tuning (see Table S2). The relative sparseness of strong tuning is not surprising given that the target vowel synthesis involved only small F1 frequency differences (~30 Hz per step), with the endpoints being separated by only 150 Hz (which is, however, a prototypical F1 distance between /u/ and /o/7). However, past work has demonstrated that even small acoustic differences among speech sounds are robustly encoded by distributed patterns of neural activity across AC13,14. In order to determine whether distributed neural representations of vowels reliably display normalization across all participants, we trained a multivariate pattern classifier model (linear discriminant analysis, LDA) on the spatiotemporal neural response patterns of each participant. Models were trained to discriminate between the endpoint stimuli (i.e., trained on the neural responses to steps 1 vs. 6, irrespective of context) using all task-related electrodes for that participant. These models were then used to predict labels for held-out neural responses to both the endpoints and the ambiguous steps in each context condition. For all participants, classification of held-out endpoint trials was significantly better than chance (Supplementary Fig. S3b). To assess the influence of target F1 and context F1 on the classifier output, a logistic generalized linear mixed model was then fit to the proportion of predicted sofo responses across all participants.
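A schematic version of this per-participant classification pipeline (in Python with scikit-learn rather than the authors' Matlab code; the file names, array names, and shapes are hypothetical placeholders) might look as follows:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Per-participant sketch (hypothetical file names and shapes). X holds one row per
# trial and one column per electrode-by-timepoint feature from the target window.
X = np.load("neural_features.npy")    # (n_trials, n_electrodes * n_timepoints)
step = np.load("target_step.npy")     # continuum step 1..6 of each trial
context = np.load("context_f1.npy")   # -1 (low-F1 speaker) or +1 (high-F1 speaker)

is_endpoint = np.isin(step, [1, 6])

# Endpoint trials: leave-one-out cross-validated predictions (guards against overfitting).
lda = LinearDiscriminantAnalysis()
endpoint_pred = cross_val_predict(lda, X[is_endpoint], step[is_endpoint], cv=LeaveOneOut())
print("endpoint accuracy:", np.mean(endpoint_pred == step[is_endpoint]))

# Ambiguous trials: train on all endpoint trials (pooled over contexts), then predict.
lda.fit(X[is_endpoint], step[is_endpoint])
ambiguous_pred = lda.predict(X[~is_endpoint])

# Proportion of 'sofo'-like (step-6) predictions per step and context condition;
# these proportions feed the logistic mixed-effects model described in the text.
for c in (-1, 1):
    for s in np.unique(step[~is_endpoint]):
        mask = (step[~is_endpoint] == s) & (context[~is_endpoint] == c)
        print(c, s, np.mean(ambiguous_pred[mask] == 6))
```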

Figure 2e displays the proportion of sofo labels predicted for all stimuli by the LDA classifier based on the neural data of one example participant (thick lines). Importantly, a shift is observed in the point of crossing of the category boundary. Regression functions fitted to these data (thin lines) were used to estimate the size and direction of the context-driven neural boundary (50% crossover point) shift per participant. For each participant, the neural vowel boundaries, like the behavioral vowel boundaries, were found to be context-dependent (Fig. 2f; see Supplementary Materials and Supplementary Fig. S3 for further detail).
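The boundary-shift estimate can be illustrated with a small sketch: fit a logistic function to the proportion of "sofo" classifications per continuum step in each context condition, and compare the 50% crossover points. The proportions below are made-up illustrative values, not the reported data:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(step, intercept, slope):
    """Probability of a 'sofo' response as a function of continuum step."""
    return 1.0 / (1.0 + np.exp(-(intercept + slope * step)))

def boundary(steps, p_sofo):
    """50% crossover point of a fitted logistic categorization function."""
    (intercept, slope), _ = curve_fit(logistic, steps, p_sofo, p0=[0.0, 1.0])
    return -intercept / slope   # step at which p(sofo) = 0.5

steps = np.arange(1, 7)
# Made-up per-condition proportions of 'sofo' classifications (illustrative only):
p_after_low_f1 = np.array([0.10, 0.25, 0.55, 0.80, 0.92, 0.97])    # Speaker A context
p_after_high_f1 = np.array([0.05, 0.12, 0.30, 0.60, 0.85, 0.95])   # Speaker B context

shift = boundary(steps, p_after_high_f1) - boundary(steps, p_after_low_f1)
print(shift)   # > 0: the boundary sits at a higher step after the high-F1 context
```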

A combined regression analysis demonstrated that, across participants, population neural activity in the temporal lobe was modulated both by the acoustic properties of the target vowel (βTarget F1 = 0.50, t = 13.20, p < 0.001) and by the preceding context (βContext F1 = −0.34, t = −5.58, p < 0.001). Moreover, this effect was not observed for task-related electrodes outside of the temporal lobe during the target window (see Supplementary Fig. S4; nontemporal electrodes were mostly located on sensorimotor cortex and the inferior frontal gyrus).

Importantly, and consistent with participants’ perception, the neural classification functions demonstrate that the influence of the context sentences consistently affected target vowel representations in a contrastive (normalizing) direction: the neural response to an ambiguous target vowel with a given F1 is more like that of /o/ (high-F1) after a low-F1 context (Speaker A) than after a high-F1 context (Speaker B; see Supplementary Figs. S3 and S4b for more detail).

Normalization by acoustic-phonetic contrast enhancement

It has been suggested that a major organizing principle of speech encoding by human parabelt AC is its encoding of acoustic-phonetic features, which are more cross-linguistically generalizable and more physically grounded than phonemes (or other possible higher-level linguistic representations) per se12,19,42–44. However, AC processing is diverse and may contain regions that are, in fact, selective for (more abstract) phonemes. For example, AC has also been found to display properties that are typically associated with abstract sound categories, such as categorical perception13. Hence, we next assessed whether the normalization effects observed here involved neural populations that display sensitivity to acoustic-phonetic features (i.e., relating to more general F1 characteristics). Because the context sentence (“A veces se halla…”) did not contain the target vowels /u/ or /o/, while its F1 values did cover the same general frequency range, we examined neural responses during the context window to understand individual electrode preferences.

To this end we again used the glm-based t-statistics of all temporal lobe electrodes that displayed tuning for the endpoint vowels (n = 37; as per Fig. 2d). Among these electrodes, however, we examined the relationship between their preferences for context F1 during the context window and for target F1 during the target window. Figure 3a displays context and target preferences on the cortex of a single example patient. Among the electrodes that displayed target F1 selectivity, some also displayed selectivity for the context F1 during the context window (such dual preferences are indicated with a black-and-white outline). Figure 3b displays the activation profile of one example electrode (e2). Importantly, e2 responded more strongly to low-F1 targets during the target window (sufu preference: βTarget F1 = −0.52, t = −10.25, p = 3.9 × 10−21), but also to low-F1 contexts during the context window (Speaker A preference: βContext F1 = −0.54, t = −8.05, p = 2.3 × 10−14). This demonstrates that this neural population responded more strongly to low-F1 acoustic stimuli in general and is not exclusively selective for a discrete phoneme category. Importantly, e2 also displayed normalization, as its activity was affected by context F1 during the target window (p = 2.7 × 10−4), and the direction of that context effect was consistent with contrastive normalization (cf. Fig. 2d).

Fig. 3.

Fig. 3

Sensitivity to contrast in acoustic-phonetic features. a Electrode preferences for both context F1 (during the context window) and target F1 (during the target window) from a single example patient. Some populations display both target F1 selectivity and context F1 selectivity (marked with a black-and-white outline), indicating a general preference for higher or lower F1 frequency ranges. Others are only tuned for target F1 or context F1 (marked with a single black outline in their respective panels). Significance assessed at p < 0.05, uncorrected. b Mean (±1 s.e.m.) high-gamma activity at an example electrode (e2) from the example patient in panel a (conditions split as described in Fig. 2c). Activity is displayed for a time window encompassing the full trial duration (both precursor sentence and target word). Black bars represent significant time-clusters (p < 0.05; cluster-based permutation). c A relation exists between the by-electrode context preference and target preference: electrodes that display a preference for either high or low target F1 typically also display a preference for the same F1 range during the context.

Extending this finding to the population of electrodes, we found a significant positive relation across all target-tuned temporal lobe electrodes between an electrode’s target preference and its context preference (βCont-F1−t = 0.76, t = 3.59, p = 9.83 × 10−4; Fig. 3c; see Supplementary Materials for more detail on this analysis). Hence, a neural population’s tuning for higher or lower F1 ranges tended to be general, not vowel-specific. Moreover, when restricting the test of normalization (assessed as the relationship between target preferences and the context effect, as per Fig. 2d) to those electrodes that displayed significant tuning for both target F1 and context F1, robust normalization was again found (Supplementary Fig. S5). These findings confirm that normalization affects acoustic-phonetic (i.e., pre-phonemic) representations of speech sounds in parabelt AC.

Discussion

A critical challenge that listeners must overcome in order to understand speech is the fact that different speakers produce the same speech sounds differently, a phenomenon that is known as the lack-of-invariance problem in speech perception1,3. This issue is partly due to the fact that different speakers’ voices span different formant ranges. We investigated the neural underpinnings of how listeners use speaker-specific information in context to normalize phonetic processing. First, we observed behavioral normalization effects, replicating previous findings7–9,24. More importantly, we observed normalized representations of vowels in parabelt AC. These normalized representations were present broadly across parabelt AC and in every participant individually. Moreover, we found that these effects follow the predictions of a general auditory contrast enhancement model of normalization30, affecting speech sound representations at a level that precedes the mapping onto phonemes or higher-level linguistic units. These findings suggest that contrast enhancement plays an important role in normalization.

Recent research has demonstrated that AC responds to acoustic cues that are critical for recognizing and discriminating both phonemes12,13,15–18,45 and different speakers46–50. However, since cues that are critical for speaker and phoneme identification are conflated in the acoustic signal, such findings could be consistent with either context-dependent or context-independent cortical representations of acoustic properties. Our experiments demonstrate that rapid and broadly distributed normalization through contrast enhancement is a basic principle of how human AC encodes speech.

Qualitatively similar contrast enhancing operations have been widely documented in animal neurophysiological research, where they have been demonstrated to involve neural mechanisms such as adaptive gain control27–29,51 or stimulus-specific adaptation28,29. An intuitive mechanism for the implementation of contrast enhancement that follows from that work involves sensory adaptation. This could be based on neuronal fatigue: when a neuron, or neuronal population, responds strongly to a masker stimulus, its response to a subsequent probe is often attenuated when the frequency of the probe falls within the neuron’s excitatory receptive field52,53. But in addition to such local forms of adaptation, adaptation has also been thought to arise through (inhibitory) interactions between separate populations of neurons (which may have partly non-overlapping receptive fields)27,51. In the present study, spectral peaks in the two context sentences and those in the endpoint target vowels were partly overlapping (see Supplementary Fig. S1). These forms of adaptation may, hence, play a role in the type of normalization observed here. Indeed, we observed a number of populations for which a strong preference for one of the context sentences during the context window was associated with a decreased response during the target window (i.e., the normalization effect; Fig. 3b).

The prediction that contrast enhancement may play an important role in human speech sound normalization was previously made based on behavioral studies of contrastive context effects in speech perception10,11,23,30,34,54. A relevant observation from that literature is that, under specific conditions, nonspeech context sounds (e.g., broadband noise and musical tones) have also been observed to affect the perception of speech sounds11,22,23,31,55. This interpretation has been challenged, however, by proposals that speech- and nonspeech-based context effects could be based on qualitatively different processes56,57. We did not test the influence of nonspeech contexts here, and such an investigation would provide an important next step in the study of context effects in speech perception. However, our findings most closely align with a model that assumes that normalization effects may not be speech-specific and that normalization can, at least in part, be explained by more general auditory adaptation effects23,34,58.

An interesting additional question concerns the main locus of emergence of normalization. Broadly speaking, normalization could be inherited from primary AC or subcortical regions (from which we were unable to record; see ref. 59 for a more detailed discussion of these potential influences); it may largely emerge within parabelt AC itself; or it could be driven by top-down influences from regions outside of the AC. In our study, context and target sounds were separated in time by a 500 ms silent interval. It has been suggested that, over such relatively long latencies, adaptation effects become especially dominant at cortical levels of processing but are reduced at more peripheral levels of processing55,60,61. Furthermore, behavioral experiments have demonstrated robust normalization effects with contralateral presentation of context and target sounds11,31,32 (i.e., a procedure that reduces precortical interaural interactions). Both observations thus suggest that normalization can also arise when the contribution of context-target interactions in the auditory periphery may be limited. With respect to the potential role of top-down modulations from regions outside of the AC, inferior frontal and sensorimotor cortex have been suggested to be involved in the resolution of perceptual ambiguities in speech perception62,63 and could, hence, have been expected to play a role in normalization, too. Here, we observed considerable activation in these regions, but, intriguingly, they did not display normalization during the processing of the target sounds (see Supplementary Fig. S4). While tentative, these combined findings highlight human AC as the most likely locus for the emergence of the context effects in speech processing observed here.

In the current experiment, we recorded neural activity from cortical sites in both the left and right hemispheres. It has previously been demonstrated that the right hemisphere is more strongly involved in the processing of voice information64,65. Here, normalization was observed in every participant, irrespective of which hemisphere was the source of a given participant’s recordings (Fig. 2f). Importantly, however, recordings from any given participant only included measurements from a single hemisphere, so no strong conclusions regarding lateralization should be drawn based on this dataset.

Despite normalization of vowel representations, responses were not completely invariant to speaker differences during the context sentences (see, for example, the behavior of example electrode e2 in Fig. 3b, which displays a preference for the low-F1 sentence throughout most of the context window: i.e., it is not fully normalized). And indeed, our (and previous7,11,36) findings show that, even for target sound processing, surrounding context rarely (if ever) results in complete normalization. That is, while the context sentence F1 differed by roughly 400 Hz between the two speakers, the normalization effect only induced a shift of ~50 Hz in the position of the category boundaries (in behavior and in neural categorization). Moreover, our target F1 values were ideally situated between the two context F1 ranges, which raises the question of whether equally large effects would have been observed for other target F1 ranges. The role of contrast enhancement in normalization should thus be seen as a mechanism that biases processing in a context-dependent direction, but not one that fully normalizes processing. Furthermore, context-based normalization is not the only means by which listeners tune in to specific speakers: listeners categorize sound continua differently when they are merely told they are listening to a man or a woman, demonstrating the existence of normalization mechanisms that do not rely on acoustic contexts (and hence acoustic contrast) at all66. In addition, formant frequencies are perceived in relation to other formants and pitch values in the current signal, because those features are correlated within speakers (e.g., people with long vocal tracts typically have lower pitch and lower formant frequencies overall). These intrinsic normalization mechanisms have been shown to affect AC processing of vowels as well67–71. Tuning in to speakers in everyday listening must thus result from a combination of multiple mechanisms, involving at least these three distinct types of normalization8.

To conclude, the results presented here reveal that the processing of vowels in AC is rapidly influenced by speaker-related acoustic properties in the preceding context. These findings add to a recent literature that ascribes a range of complex acoustic integration processes to the broader AC, suggesting that it participates in high-level encoding of speech sounds and auditory objects13,19,72–74. Recently, it has been demonstrated that populations in parabelt AC encode speaker-invariant contours of intonation that speakers use to focus on one or the other part of a sentence20. The current findings build on these results and demonstrate the emergence of speaker-normalized representations of acoustic-phonetic features, the most fundamental building blocks of spoken language. This context-dependence allows AC to partly resolve the between-speaker variance present in speech signals. These features of AC processing underscore its critical role in our ability to understand speech in the complex and variable situations that we are exposed to every day.

Methods

Patients

A total of five human participants (2 male; all right-handed; mean age: 30.6 years), all native Spanish-speaking (the US hospital at which participants were recruited has a considerable Spanish-speaking patient population), were chronically implanted with high-density (256 electrodes; 4 mm pitch) multi-electrode cortical surface arrays as part of their clinical evaluation for epilepsy surgery. Arrays were implanted subdurally on the peri-Sylvian region of the lateral left (n = 2) or right (n = 3) hemispheres. Placement was determined by clinical indications only. All participants gave their written informed consent before the surgery, and had self-reported normal hearing. The study protocol was approved by the UC San Francisco Committee on Human Research. Electrode positions for reconstruction figures were extracted from tomography scans and co-registered with the patient’s MRI.

Stimulus synthesis

Details of the synthesis procedure for these stimuli have been reported previously7. All synthesis was implemented in Praat software75. In brief, using source-filter separation, the formant tracks of multiple recordings of clear “sufu” and “sofo” were estimated. These estimates were used to calculate a single average time-varying formant track for both words, representing an average of the formant properties over a number of instances of both [o] and [u]. Only the height of the first formant of this filter model was then increased or decreased across the whole vowel to create the new formant models for the continuum from [u] to [o], covering the distance between the endpoints in six steps. These formant tracks were combined with a model of the glottal-pulse source to synthesize the speech sound continuum. Synthesis parameters thus dictated that all steps had identical pitch contours, amplitude contours, and contours for the formants higher than F1 (note that F1 and F2 values in Fig. 1a and Supplementary Fig. S1 reflect measurements of the resulting sounds, not synthesis parameters). The two context conditions were created through source-filter separation of a single spoken utterance of the sentence “A veces se halla” (“at times she feels rather…”). The first formant of the filter model was then increased or decreased by 100 Hz and recombined with the source model, following procedures similar to those used for the targets.

Procedures

Participants were asked to categorize the last word of each stimulus as either sufu or sofo. Listeners responded using the two buttons on a button box. The two options, sufu and sofo, were always displayed on the computer screen. Each of the six steps of the continuum was presented in both the low- and high-F1 sentence conditions. Context conditions were presented in separate mini-blocks of 24 trials (6 steps × 4 repetitions). Participants completed as many blocks as they felt comfortable with (see Table S1 for trial counts).

Data acquisition and preprocessing

Cortical local field potentials were recorded and amplified with a multichannel amplifier optically connected to a digital signal acquisition system (Tucker-Davis Technologies) sampling at 3052 Hz. The stimuli were presented monaurally from loudspeakers at a comfortable level. The ambient audio (recorded with a microphone aimed at the participant) along with a direct audio signal of stimulus presentation were simultaneously recorded with the ECoG signals to allow for precise alignment and later inspection of the experimental situation. Line noise (60 Hz and harmonics at 120 and 180 Hz) was removed from the ECoG signals with notch filters. Each time series was visually inspected for excessive noise, and trials and/or channels with excessive noise or epileptiform activity were removed from further analysis. The remaining time series were common-average referenced across rows of the 16 × 16 electrode grid. The time-varying analytic amplitude was extracted from eight bandpass filters (Gaussian, with logarithmically increasing center frequencies between 70 and 150 Hz, and semilogarithmically increasing bandwidths) with the Hilbert transform. High-gamma power was calculated by averaging the analytic amplitude across these eight bands. The signal was subsequently downsampled to 100 Hz and z-scored on a trial-by-trial basis, based on the mean and standard deviation of a baseline period (from −50 to 0 ms before the onset of the context sentence). In the main text, high-γ refers to this measure.
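As an illustration, the high-gamma extraction could be sketched as follows (in Python; the original pipeline used Gaussian filter banks and was implemented in Matlab, so the Butterworth sub-band filters and the exact parameters here are simplifying assumptions):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert, decimate

def high_gamma_envelope(ecog, fs=3052.0, fs_out=100.0, n_bands=8):
    """Approximate high-gamma (70-150 Hz) analytic amplitude for one channel.

    Sketch only: the original pipeline used Gaussian filters with logarithmically
    spaced center frequencies and semilogarithmic bandwidths; each sub-band is
    approximated here with a Butterworth band-pass filter.
    """
    edges = np.logspace(np.log10(70.0), np.log10(150.0), n_bands + 1)
    envelopes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass", output="sos")
        envelopes.append(np.abs(hilbert(sosfiltfilt(sos, ecog))))  # analytic amplitude
    hg = np.mean(envelopes, axis=0)                                # average of the eight bands
    return decimate(hg, int(round(fs / fs_out)), ftype="fir")      # downsample to ~100 Hz

def zscore_to_baseline(hg_trial, fs_out=100.0, baseline_s=0.05):
    """z-score one trial's trace to its pre-context baseline (assumes the epoch
    starts 50 ms before context-sentence onset)."""
    n_base = int(baseline_s * fs_out)
    base = hg_trial[:n_base]
    return (hg_trial - base.mean()) / base.std()
```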

Single-electrode encoding analysis

We used ordinary least-squares linear regression to predict neural activity (high-γ) from our stimulus conditions (target F1 steps, coded as −2.5, −1.5, −0.5, 0.5, 1.5, 2.5; context F1, coded as −1 and 1; and their interaction). These factors were used as numerical predictors of neural activity that was averaged across the target window (from 70 to 570 ms after target vowel onset) or across the context window (from 250 to 1450 ms after context sentence onset—a later onset was chosen to reduce the influence of large and non-selective onset responses present in some electrodes). For each model, R-squared (R2) provides a measure of the proportion of variance in neural activity that is explained by the complete model. The p value associated with the omnibus F-statistic provides a measure of significance. We set the significance threshold at alpha = 0.05 and corrected for multiple comparisons using the Bonferroni method, taking individual electrodes as independent samples. Supplementary Figure S2a, b demonstrates that most of the variance during the context window was explained by the factor context F1. During the target window, however, both target F1 and context F1 explained a considerable portion of the variance. The interaction term was included to accommodate a situation where the context effect is more strongly expressed on one side of the target continuum than the other (see, e.g., Fig. 2b, where the context effect is larger toward sofo), but is not further interpreted here.
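A minimal sketch of this per-electrode model, assuming a hypothetical trial table with columns hg_target, target_f1, and context_f1 (the original analysis was implemented in Matlab; this sketch uses Python and statsmodels):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Sketch for one electrode (hypothetical file and column names): one row per trial,
# with high-gamma averaged over the target window ('hg_target'), the target F1 step
# coded -2.5..2.5 ('target_f1'), and the context F1 coded -1/1 ('context_f1').
trials = pd.read_csv("electrode_trials.csv")

fit = smf.ols("hg_target ~ target_f1 * context_f1", data=trials).fit()

r_squared = fit.rsquared                 # variance explained by the full model
omnibus_p = fit.f_pvalue                 # p value of the omnibus F-statistic
target_t = fit.tvalues["target_f1"]      # signed target-F1 preference
context_t = fit.tvalues["context_f1"]    # signed context effect during the target window

# Bonferroni correction over the 406 temporal-lobe electrodes analyzed in the paper:
print(r_squared, omnibus_p < 0.05 / 406, target_t, context_t)
```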

For Fig. 2d and Fig. 3c, linear mixed regression analyses were used to assess the relation between signed t-statistics of target F1 preferences and context effects (Fig. 2d) or context preferences (Fig. 3c). Regression estimates were computed over all significant (9 [corrected] + 28 [uncorrected] = 37) electrodes. Linear mixed effects regression accommodates the hierarchical nature of these observations (electrodes within patients).

Cluster-based permutation analyses

For single example electrodes, a cluster-based permutation approach was used to assess the significance of differences between two event-related high-gamma time courses (Fig. 2c and Fig. 3b; following the method described in ref. 76). For each permutation, labels of individual trials were randomly assigned to the data (high-gamma time courses), and a t test was performed for each timepoint. Next, for each timepoint, a criterion value was established across all 1000 permutations (the 95th percentile of the [absolute] t values for that timepoint). Then, for each permutation, it was established where the t values exceeded the criterion and for how many consecutive samples they remained above it. A set of consecutive timepoints above criterion is defined as a cluster. For each cluster, the t values were summed, and this value was assigned to that entire cluster. For each permutation, only the largest cluster (i.e., the highest summed cluster value) was stored as a single value. This resulted in a distribution of maximally 1000 cluster values (some permutations may not result in any significant cluster and have a summed t value of 0). Then, using the same procedure, the size of all potential clusters was established for the real data (correct assignment of labels), and each observed cluster was considered significant if its size was larger than 95% of the permutation-based cluster values. p < 0.001 indicates that the observed cluster was larger than all permutation-based clusters.
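The procedure can be sketched as follows (an illustrative Python implementation, not the authors' code; it follows the steps described above, with a per-timepoint 95th-percentile criterion and a max-cluster-mass null distribution, and all function and variable names are our own):

```python
import numpy as np
from scipy.stats import ttest_ind

def cluster_permutation_test(cond_a, cond_b, n_perm=1000, seed=0):
    """Sketch of the cluster-based permutation test on two sets of high-gamma
    time courses (each an array of trials x timepoints). Returns the summed-|t|
    mass of each observed cluster together with its permutation-based p value."""
    rng = np.random.default_rng(seed)
    data = np.vstack([cond_a, cond_b])
    labels = np.r_[np.zeros(len(cond_a), bool), np.ones(len(cond_b), bool)]

    def pointwise_t(lbl):
        return ttest_ind(data[~lbl], data[lbl], axis=0).statistic

    # Per-timepoint criterion: 95th percentile of |t| across label permutations.
    perm_t = np.array([pointwise_t(rng.permutation(labels)) for _ in range(n_perm)])
    criterion = np.percentile(np.abs(perm_t), 95, axis=0)

    def cluster_masses(tvals):
        """Summed |t| of each run of consecutive supra-criterion timepoints."""
        masses, current = [], 0.0
        for above, tv in zip(np.abs(tvals) > criterion, np.abs(tvals)):
            if above:
                current += tv
            elif current:
                masses.append(current)
                current = 0.0
        if current:
            masses.append(current)
        return masses

    # Null distribution: the largest cluster mass in each permutation (0 if none).
    null = np.array([max(cluster_masses(p), default=0.0) for p in perm_t])
    observed = cluster_masses(pointwise_t(labels))
    return [(mass, float(np.mean(null >= mass))) for mass in observed]
```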

Stimulus classification

Linear discriminant analysis (LDA) models were trained to predict the stimulus from the neural population responses evoked by the stimuli. For each participant, a single model was trained on all endpoint data, which was then used to predict labels for the ambiguous items. To predict the stimulus class of the endpoint stimuli themselves (steps 1 and 6), a leave-one-out cross-validation procedure was used to prevent overfitting. Model features (predictors) consisted of the selected timepoint × electrode combinations per participant.

For the analyses (Fig. 2; Supplementary Figs. S3 and S4), training data consisted of high-γ data from a 500 ms time window starting 70 ms after target vowel onset (the target vowel was the first point of acoustic divergence between targets).

In the analyses, all task-related electrodes for a given participant (and region of interest, see Supplementary Fig. S4) were selected. Trial numbers per participant are listed in Table S1. The analysis displayed in Fig. 2 and Supplementary Figs. S3 and S4 hence relied on a large number of predictors (electrodes × timepoints). While a large number of predictors could result in overfitting, these parameters led to the highest proportion of correct classification for the endpoints: 76% correct (see Supplementary Fig. S1b), although note that this number may be inflated because of electrode pre-selection. This approach was chosen, however, because high endpoint classification performance is important for establishing the presence of normalization: a shift in a response function can only be detected if the steepness of that function is nonzero. Importantly, specifically selecting electrodes that distinguish the endpoints does not affect the extent of observed normalization, because the normalization effect is orthogonal to that of target F1 (i.e., normalization itself was not a selection criterion). Furthermore, in all analyses, classification scores were only obtained from held-out data, preventing the fitting of idiosyncratic models. In addition, averaging across time (hence decreasing the number of predictors) led to qualitatively similar (and significant) effects for the important comparisons reported in this paper. Classification analyses resulted in a predicted class for each trial. These data were used as input for a generalized logistic linear mixed effects model.

Linear mixed effects regression of classification data

For the analyses that assessed the effects of target stimulus F1 and context F1 on proportion of “sofo” responses (both behavioral and neural-classifier-based) we employed generalized linear mixed effects models (glmer; with the dependent variable “family” set to “binomial”). The models had Target F1 (contrast coded, with the levels −2.5; −1.5; −0.5; 0.5; 1.5; 2.5) and Context F1 (levels −1; 1) entered as fixed effects, and uncorrelated by-patient slopes and intercepts for these factors as random effects.
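In equation form, this corresponds approximately to the following logistic mixed model (trial i, participant p, with uncorrelated by-participant random effects u; a sketch of the model structure rather than the exact fitted specification):

```latex
\operatorname{logit} P(\text{``sofo''}_{ip}) =
  (\beta_{0} + u_{0p})
  + (\beta_{\mathrm{Target\,F1}} + u_{1p})\,\mathrm{TargetF1}_{i}
  + (\beta_{\mathrm{Context\,F1}} + u_{2p})\,\mathrm{ContextF1}_{i},
\qquad u_{kp} \sim \mathcal{N}(0,\,\sigma_{k}^{2})
```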

For the analysis of the behavioral data (βIntercept = 0.52, t = 0.79, p = 0.42), we observed more sofo responses towards the sofo end of the stimulus continuum (βTarget F1 = 1.86, t = 4.10, p < 0.001). Moreover, we observed an effect of context as items along the continuum were more often perceived as sofo (the vowel category corresponding to higher-F1 values) after a low-F1 voice (Speaker A) than after a high-F1 voice (Speaker B), (βContext F1 = −1.70, t = −3.51, p < 0.001).

For the analyses of neural representations, the dependent variable consisted of the classes predicted by the LDA stimulus classification described above. For the overall analysis including temporal lobe electrodes, the model (βIntercept = −0.16, t = −1.06, p = 0.29) revealed significant classification of the continuum (βTarget F1 = 0.50, t = 13.20, p < 0.001), suggesting reliable neural differences between the endpoints. Furthermore, an effect was also found for the factor Context on the proportion of “sofo” classifications (βContext F1 = −0.34, t = −5.58, p < 0.001), reflecting the normalization effect of most interest. For the analysis focusing on the dorsal and frontal electrodes (βIntercept = −0.16, t = 0.91, p = 0.37), a significant effect of Step was observed, that is, significant classification of the continuum (βTarget F1 = 0.23, t = 7.01, p < 0.001), but there was no significant influence of context (βContext F1 = −0.02, t = −0.39, p = 0.69); see Supplementary Fig. S4c for further detail.

Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Supplementary information

Peer Review File (261.7KB, pdf)
Reporting Summary (162.1KB, pdf)

Acknowledgements

We are grateful to Matthew K. Leonard for commenting on an earlier version of this manuscript and to all members of the Chang Lab for helpful comments throughout this work. This work was supported by European Commission grant FP7-623072 (M.J.S.); and NIH grants R01-DC012379 (E.F.C.) and F32-DC015966 (N.P.F.). E.F.C. is a New York Stem Cell Foundation-Robertson Investigator. This research was also supported by The William K. Bowes Foundation, the Howard Hughes Medical Institute, The New York Stem Cell Foundation and The Shurl and Kay Curci Foundation.

Author contributions

M.J.S. and K.J. conceived the study. M.J.S. designed the experiments, generated the stimuli, and analyzed the data. M.J.S., N.P.F. and E.F.C. collected the data. M.J.S. and N.P.F. interpreted the data and wrote the paper. M.J.S., N.P.F., K.J. and E.F.C. edited the paper.

Data availability

The data that support the findings of this study are publicly available through the Open Science Framework at https://osf.io/t87d2/.

Code availability

These results were generated using code written in Matlab. Code is publicly available through the Open Science Framework at https://osf.io/t87d2/.

Competing interests

The authors declare no competing interests.

Footnotes

Journal peer review information: Nature Communications thanks Jonas Obleser and the other anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Information accompanies this paper at 10.1038/s41467-019-10365-z.

References

  • 1.Liberman AM, Cooper FS, Shankweiler DP, Studdert-Kennedy M. Perception of the speech code. Psychol. Rev. 1967;74:431–461. doi: 10.1037/h0020279. [DOI] [PubMed] [Google Scholar]
  • 2.Diehl RL, Lotto AJ, Holt LL. Speech perception. Annu Rev. Psychol. 2004;55:149–179. doi: 10.1146/annurev.psych.55.090902.142028. [DOI] [PubMed] [Google Scholar]
  • 3.Peterson GE, Barney HL. Control methods used in a study of the vowels. J. Acoust. Soc. Am. 1952;24:175–184. doi: 10.1121/1.1906875. [DOI] [Google Scholar]
  • 4.Newman RS, Clouse SA, Burnham JL. The perceptual consequences of within-talker variability in fricative production. J. Acoust. Soc. Am. 2001;109:1181–1196. doi: 10.1121/1.1348009. [DOI] [PubMed] [Google Scholar]
  • 5.Chodroff E, Wilson C. Structure in talker-specific phonetic realization: covariation of stop consonant VOT in American English. J. Phon. 2017;61:30–47. doi: 10.1016/j.wocn.2017.01.001. [DOI] [Google Scholar]
  • 6.Ladefoged P. & Johnson K. A Course in Phonetics. (Cengage Learning, Stamford, 2014).
  • 7.Sjerps MJ, Smiljanić R. Compensation for vocal tract characteristics across native and non-native languages. J. Phon. 2013;41:145–155. doi: 10.1016/j.wocn.2013.01.005. [DOI] [Google Scholar]
  • 8.Nearey TM. Static, dynamic, and relational properties in vowel perception. J. Acoust. Soc. Am. 1989;85:2088–2113. doi: 10.1121/1.397861. [DOI] [PubMed] [Google Scholar]
  • 9.Ladefoged P, Broadbent DE. Information conveyed by vowels. J. Acoust. Soc. Am. 1957;29:98–104. doi: 10.1121/1.1908694. [DOI] [PubMed] [Google Scholar]
  • 10.Laing EJC, Liu R, Lotto AJ, Holt LL. Tuned with a tune: talker normalization via general auditory processes. Front Psychol. 2012;3:1–9. doi: 10.3389/fpsyg.2012.00203. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Watkins AJ. Central, auditory mechanisms of perceptual compensation for spectral‐envelope distortion. J. Acoust. Soc. Am. 1991;90:2942–2955. doi: 10.1121/1.401769. [DOI] [PubMed] [Google Scholar]
  • 12.Creutzfeldt O, Ojemann GA, Lettich E. Neuronal activity in the human lateral temporal lobe: I. Responses to speech. Exp. Brain Res. 1989;77:451–475. doi: 10.1007/BF00249600. [DOI] [PubMed] [Google Scholar]
  • 13.Chang EF, et al. Categorical speech representation in human superior temporal gyrus. Nat. Neurosci. 2010;13:1428–1432. doi: 10.1038/nn.2641. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Formisano E, De Martino F, Bonte M, Goebel R. Science. 2008. “Who” Is Saying “What”? Brain-based decoding of human voice and speech; pp. 970–973. [DOI] [PubMed] [Google Scholar]
  • 15.Hickok G, Poeppel D. The cortical organization of speech processing. Nat. Rev. Neurosci. 2007;8:393–402. doi: 10.1038/nrn2113. [DOI] [PubMed] [Google Scholar]
  • 16.Boatman D, Lesser RP, Gordon B. Auditory speech processing in the left temporal lobe: an electrical interference study. Brain Lang. 1995;51:269–290. doi: 10.1006/brln.1995.1061. [DOI] [PubMed] [Google Scholar]
  • 17.Scott SK, Johnsrude IS. The neuroanatomical and functional organization of speech perception. Trends Neurosci. 2003;26:100–107. doi: 10.1016/S0166-2236(02)00037-1. [DOI] [PubMed] [Google Scholar]
  • 18.Steinschneider M, et al. Intracranial study of speech-elicited activity on the human posterolateral superior temporal gyrus. Cereb. Cortex. 2011;21:2332–2347. doi: 10.1093/cercor/bhr014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Mesgarani N, Cheung C, Johnson K, Chang EF. Phonetic feature encoding in human superior temporal gyrus. Science. 2014;343:1006–1010. doi: 10.1126/science.1245994. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Tang C, Hamilton LS, Chang EF. Intonational speech prosody encoding in the human auditory cortex. Science. 2017;357:797–801. doi: 10.1126/science.aam8577. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Lakatos P, et al. The spectrotemporal filter mechanism of auditory selective attention. Neuron. 2013;77:750–761. doi: 10.1016/j.neuron.2012.11.034. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Holt LL. Speech categorization in context: joint effects of nonspeech and speech precursors. J. Acoust. Soc. Am. 2006;119:4016–4026. doi: 10.1121/1.2195119. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Stilp CE, Alexander JM, Kiefte M, Kluender KR. Auditory color constancy: calibration to reliable spectral properties across nonspeech context and targets. Atten. Percept. Psychophys. 2010;72:470–480. doi: 10.3758/APP.72.2.470. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Sjerps M. J., Zhang C. & Peng G. Lexical tone is perceived relative to locally surrounding context, vowel quality to preceding context. J. Exp. Psychol. Hum. Percept. Perform. 44, 914–924 (2018). [DOI] [PubMed]
  • 25.Holt LL, Lotto AJ. Behavioral examinations of the level of auditory processing of speech context effects. Hear Res. 2002;167:156–169. doi: 10.1016/S0378-5955(02)00383-0. [DOI] [PubMed] [Google Scholar]
  • 26.Lotto AJ, Kluender KR. General contrast effects in speech perception: effect of preceding liquid on stop consonant identification. Percept. Psychophys. 1998;60:602–619. doi: 10.3758/BF03206049. [DOI] [PubMed] [Google Scholar]
  • 27.Rabinowitz NC, Willmore BDB, Schnupp JWH, King AJ. Contrast gain control in auditory cortex. Neuron. 2011;70:1178–1191. doi: 10.1016/j.neuron.2011.04.030. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Ulanovsky N, Las L, Farkas D, Nelken I. Multiple time scales of adaptation in auditory cortex neurons. J. Neurosci. 2004;24:10440–10453. doi: 10.1523/JNEUROSCI.1905-04.2004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Pérez-González D, Malmierca MS. Adaptation in the auditory system: an overview. Front Integr. Neurosci. 2014;8:1–10. doi: 10.3389/fnint.2014.00019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Holt LL. The mean matters: effects of statistically defined nonspeech spectral distributions on speech categorization. J. Acoust. Soc. Am. 2006;120:2801–2817. doi: 10.1121/1.2354071. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Sjerps MJ, Mitterer H, McQueen JM. Hemispheric differences in the effects of context on vowel perception. Brain Lang. 2012;120:401–405. doi: 10.1016/j.bandl.2011.12.012. [DOI] [PubMed] [Google Scholar]
  • 32.Lotto AJ, Sullivan SC, Holt LL. Central locus for nonspeech context effects on phonetic identification (L). J. Acoust. Soc. Am. 2003;113:53–56. doi: 10.1121/1.1527959. [DOI] [PubMed] [Google Scholar]
  • 33.Pasley BN, et al. Reconstructing speech from human auditory cortex. PLoS Biol. 2012;10:e1001251. doi: 10.1371/journal.pbio.1001251. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Kluender KR, Coady JA, Kiefte M. Sensitivity to change in perception of speech. Speech Commun. 2003;41:59–69. doi: 10.1016/S0167-6393(02)00093-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35.Goldinger SD. Echoes of echoes? An episodic theory of lexical access. Psychol. Rev. 1998;105:251–279. doi: 10.1037/0033-295X.105.2.251. [DOI] [PubMed] [Google Scholar]
  • 36.Johnson K. In The Handbook of Speech Perception (eds Pisoni, D. B. & Remez, R.) 363–389 (Blackwell Publishers, Oxford, 2005).
  • 37.Leonard MK, Chang EF. Dynamic speech representations in the human temporal lobe. Trends Cogn. Sci. 2014;18:472–479. doi: 10.1016/j.tics.2014.05.001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38.Nourski KV, et al. Sound identification in human auditory cortex: differential contribution of local field potentials and high gamma power as revealed by direct intracranial recordings. Brain Lang. 2015;148:37–50. doi: 10.1016/j.bandl.2015.03.003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39.Steinschneider M, Fishman YI, Arezzo JC. Spectrotemporal analysis of evoked and induced electroencephalographic responses in primary auditory cortex (A1) of the awake monkey. Cereb. Cortex. 2008;18:610–625. doi: 10.1093/cercor/bhm094. [DOI] [PubMed] [Google Scholar]
  • 40.Ray S, Maunsell JH. Different origins of gamma rhythm and high-gamma activity in macaque visual cortex. PLoS Biol. 2011;9:e1000610. [DOI] [PMC free article] [PubMed]
  • 41.Crone N, et al. Induced electrocorticographic gamma activity during auditory perception. Clin. Neurophysiol. 2001;112:565–582. doi: 10.1016/S1388-2457(00)00545-9. [DOI] [PubMed] [Google Scholar]
  • 42.Chan AM, et al. Speech-specific tuning of neurons in human superior temporal gyrus. Cereb. Cortex. 2014;24:2679–2693. doi: 10.1093/cercor/bht127. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43.Stevens KN. Toward a model for lexical access based on acoustic landmarks and distinctive features. J. Acoust. Soc. Am. 2002;111:1872–1891. doi: 10.1121/1.1458026. [DOI] [PubMed] [Google Scholar]
  • 44.Chomsky N, Halle M. The Sound Pattern of English. New York: Harper and Row; 1968. [Google Scholar]
  • 45.Hickok G, Poeppel D. Neural basis of speech perception. In: The Human Auditory System: Fundamental Organization and Clinical Disorders (Handb. Clin. Neurol.). 2015;129:149–160. doi: 10.1016/B978-0-444-62630-1.00008-1. [DOI] [PubMed] [Google Scholar]
  • 46.Andics A, McQueen JM, Petersson KM. Mean-based neural coding of voices. Neuroimage. 2013;79:351–360. doi: 10.1016/j.neuroimage.2013.05.002. [DOI] [PubMed] [Google Scholar]
  • 47.Andics A, et al. Neural mechanisms for voice recognition. Neuroimage. 2010;52:1528–1540. doi: 10.1016/j.neuroimage.2010.05.048. [DOI] [PubMed] [Google Scholar]
  • 48.Belin P, Zatorre RJ, Lafaille P, Ahad P, Pike B. Voice-selective areas in human auditory cortex. Nature. 2000;403:309–312. doi: 10.1038/35002078. [DOI] [PubMed] [Google Scholar]
  • 49.von Kriegstein K, Kleinschmidt A, Sterzer P, Giraud AL. Interaction of face and voice areas during speaker recognition. J. Cogn. Neurosci. 2005;17:367–376. doi: 10.1162/0898929053279577. [DOI] [PubMed] [Google Scholar]
  • 50.von Kriegstein K, Giraud AL. Distinct functional substrates along the right superior temporal sulcus for the processing of voices. Neuroimage. 2004;22:948–955. doi: 10.1016/j.neuroimage.2004.02.020. [DOI] [PubMed] [Google Scholar]
  • 51.Brosch M, Schreiner CE. Time course of forward masking tuning curves in cat primary auditory cortex. J. Neurophysiol. 1997;77:923–943. doi: 10.1152/jn.1997.77.2.923. [DOI] [PubMed] [Google Scholar]
  • 52.Harris DM, Dallos P. Forward masking of auditory nerve fiber responses. J. Neurophysiol. 1979;42:1083–1107. doi: 10.1152/jn.1979.42.4.1083. [DOI] [PubMed] [Google Scholar]
  • 53.Smith RL. Short-term adaptation in single auditory nerve fibers: some poststimulatory effects. J. Neurophysiol. 1977;40:1098–1111. doi: 10.1152/jn.1977.40.5.1098. [DOI] [PubMed] [Google Scholar]
  • 54.Sjerps MJ, McQueen JM, Mitterer H. Evidence for precategorical extrinsic vowel normalization. Atten. Percept. Psychophys. 2013;75:576–587. [DOI] [PubMed]
  • 55.Holt LL. Temporally nonadjacent nonlinguistic sounds affect speech categorization. Psychol. Sci. 2005;16:305–312. doi: 10.1111/j.0956-7976.2005.01532.x. [DOI] [PubMed] [Google Scholar]
  • 56.Viswanathan N, Magnuson JS, Fowler CA. Compensation for coarticulation: disentangling auditory and gestural theories of perception of coarticulatory effects in speech. J. Exp. Psychol. Hum. Percept. Perform. 2010;36:1005–1015. doi: 10.1037/a0018391. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 57.Viswanathan N, Magnuson JS, Fowler CA. Similar response patterns do not imply identical origins: an energetic masking account of nonspeech effects in compensation for coarticulation. J. Exp. Psychol. Hum. Percept. Perform. 2013;39:1181–1192. doi: 10.1037/a0030735. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 58.Stilp CE, Anderson PW, Winn MB. Predicting contrast effects following reliable spectral properties in speech perception. J. Acoust. Soc. Am. 2015;137:3466–3476. doi: 10.1121/1.4921600. [DOI] [PubMed] [Google Scholar]
  • 59.Stilp CE, Assgari AA. Perceptual sensitivity to spectral properties of earlier sounds during speech categorization. Atten. Percept. Psychophys. 2018;80:1300–1310. doi: 10.3758/s13414-018-1488-9. [DOI] [PubMed] [Google Scholar]
  • 60.Phillips EAK, Schreiner CE, Hasenstaub AR. Cortical interneurons differentially regulate the effects of acoustic context. Cell Rep. 2017;20:771–778. doi: 10.1016/j.celrep.2017.07.001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 61.Fitzpatrick DC, Kuwada S, Kim DO, Parham K, Batra R. Responses of neurons to click-pairs as simulated echoes: auditory nerve to auditory cortex. J. Acoust. Soc. Am. 1999;106:3460–3472. doi: 10.1121/1.428199. [DOI] [PubMed] [Google Scholar]
  • 62.Pulvermüller F, et al. Motor cortex maps articulatory features of speech sounds. Proc. Natl Acad. Sci. 2006;103:7865–7870. doi: 10.1073/pnas.0509989103. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 63.Wilson SM, Iacoboni M. Neural responses to non-native phonemes varying in producibility: evidence for the sensorimotor nature of speech perception. Neuroimage. 2006;33:316–325. doi: 10.1016/j.neuroimage.2006.05.032. [DOI] [PubMed] [Google Scholar]
  • 64.Myers EB, Theodore RM. Voice-sensitive brain networks encode talker-specific phonetic detail. Brain Lang. 2017;165:33–44. doi: 10.1016/j.bandl.2016.11.001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 65.Belin P, Zatorre RJ. Adaptation to speaker's voice in right anterior temporal lobe. Neuroreport. 2003;14:2105–2109. doi: 10.1097/00001756-200311140-00019. [DOI] [PubMed] [Google Scholar]
  • 66.Johnson K, Strand EA, D’Imperio M. Auditory-visual integration of talker gender in vowel perception. J. Phon. 1999;27:359–384. doi: 10.1006/jpho.1999.0100. [DOI] [Google Scholar]
  • 67.Edmonds BA, et al. Evidence for early specialized processing of speech formant information in anterior and posterior human auditory cortex. Eur. J. Neurosci. 2010;32:684–692. doi: 10.1111/j.1460-9568.2010.07315.x. [DOI] [PubMed] [Google Scholar]
  • 68.Andermann M, Patterson RD, Vogt C, Winterstetter L, Rupp A. Neuromagnetic correlates of voice pitch, vowel type, and speaker size in auditory cortex. Neuroimage. 2017;158:79–89. doi: 10.1016/j.neuroimage.2017.06.065. [DOI] [PubMed] [Google Scholar]
  • 69.Monahan PJ, Idsardi WJ. Auditory sensitivity to formant ratios: toward an account of vowel normalisation. Lang. Cogn. Process. 2010;25:808–839. doi: 10.1080/01690965.2010.490047. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 70.Kreitewolf J, Gaudrain E, von Kriegstein K. A neural mechanism for recognizing speech spoken by different speakers. Neuroimage. 2014;91:375–385. doi: 10.1016/j.neuroimage.2014.01.005. [DOI] [PubMed] [Google Scholar]
  • 71.von Kriegstein K, Smith DRR, Patterson RD, Kiebel SJ, Griffiths TD. How the human brain recognizes speech in the context of changing speakers. J. Neurosci. 2010;30:629–638. doi: 10.1523/JNEUROSCI.2742-09.2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 72.Engineer CT, et al. Cortical activity patterns predict speech discrimination ability. Nat. Neurosci. 2008;11:603–608. doi: 10.1038/nn.2109. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 73.Leonard MK, Baud MO, Sjerps MJ, Chang EF. Perceptual restoration of masked speech in human cortex. Nat. Commun. 2016;7:13619. doi: 10.1038/ncomms13619. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 74.Bizley JK, Walker KMM, Nodal FR, King AJ, Schnupp JWH. Auditory cortex represents both pitch judgments and the corresponding acoustic cues. Curr. Biol. 2013;23:620–625. doi: 10.1016/j.cub.2013.03.003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 75.Boersma P, Weenink D. Praat: Doing Phonetics by Computer (Version 5.1). 2009.
  • 76.Maris E, Oostenveld R. Nonparametric statistical testing of EEG- and MEG-data. J. Neurosci. Methods. 2007;164:177–190. doi: 10.1016/j.jneumeth.2007.03.024. [DOI] [PubMed] [Google Scholar]

Associated Data

This section collects any data citations, data availability statements, or supplementary materials included in this article.

Supplementary Materials

Peer Review File (261.7KB, pdf)
Reporting Summary (162.1KB, pdf)

Data Availability Statement

The data that support the findings of this study are publicly available through the Open Science Framework at https://osf.io/t87d2/.

These results were generated using code written in MATLAB. The code is publicly available through the Open Science Framework at https://osf.io/t87d2/.
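
As an illustration (a minimal sketch, not part of the released analysis code), the project's file listing can be retrieved programmatically through the public OSF v2 API; the snippet below assumes the default "osfstorage" provider and should be checked against the actual layout of the project page.

    % Minimal sketch (assumption): list the files in the public OSF project
    % https://osf.io/t87d2/ through the OSF v2 API ("osfstorage" provider).
    apiUrl = 'https://api.osf.io/v2/nodes/t87d2/files/osfstorage/';
    opts   = weboptions('ContentType', 'json');      % force JSON decoding of the response
    resp   = webread(apiUrl, opts);

    items = resp.data;
    if ~iscell(items), items = num2cell(items); end  % normalize struct array vs. cell array
    for k = 1:numel(items)
        a = items{k}.attributes;
        fprintf('%-6s  %s\n', a.kind, a.name);       % "file" or "folder", then its name
    end

Individual files can then be fetched with websave using the download links that the same API returns for each file entry.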

