Hum Brain Mapp. 2015 Nov 19;37(2):704–716. doi: 10.1002/hbm.23060

Predictions interact with missing sensory evidence in semantic processing areas

Mathias Scharinger 1,2, Alexandra Bendixen 3, Björn Herrmann 2,4, Molly J Henry 2,4, Toralf Mildner 5, Jonas Obleser 2,6
PMCID: PMC6867522  PMID: 26583355

Abstract

Human brain function draws on predictive mechanisms that exploit higher‐level context during lower‐level perception. These mechanisms are particularly relevant for situations in which sensory information is compromised or incomplete, as for example in natural speech where speech segments may be omitted due to sluggish articulation. Here, we investigate which brain areas support the processing of incomplete words that were predictable from semantic context, compared with incomplete words that were unpredictable. During functional magnetic resonance imaging (fMRI), participants heard sentences that orthogonally varied in predictability (semantically predictable vs. unpredictable) and completeness (complete vs. incomplete, i.e. missing their final consonant cluster). The effects of predictability and completeness interacted in heteromodal semantic processing areas, including left angular gyrus and left precuneus, where activity did not differ between complete and incomplete words when they were predictable. The same regions showed stronger activity for incomplete than for complete words when they were unpredictable. The interaction pattern suggests that for highly predictable words, the speech signal does not need to be complete for neural processing in semantic processing areas. Hum Brain Mapp 37:704–716, 2016. © 2015 Wiley Periodicals, Inc.

Keywords: fMRI, incomplete speech, predictive mechanisms, semantic context, sentence processing, angular gyrus

INTRODUCTION

Anticipating upcoming sensory events is a remarkable ability of the human brain [Bar, 2009] that becomes particularly apparent when the omission of a highly predictable stimulus elicits the same response as the actual presentation of that stimulus [Bendixen et al., 2009]. The underlying predictive neural mechanisms should lend themselves particularly well to speech and language processing, where predictions about content can be made on the basis of prior information or semantic context [Federmeier, 2007; Golestani et al., 2009; Kutas and Hillyard, 1984; Mayo et al., 1997; Sohoglu et al., 2012].

For example, based on a particular predictive context, a word may become comprehensible despite suboptimal acoustics. Degraded auditory word presentations are more intelligible when preceded by a visually presented matching word [Sohoglu et al., 2012]. Furthermore, increased speech intelligibility is accompanied by increased activity in inferior and middle frontal gyrus, areas that are associated with the processing of abstract speech properties beyond sensory details [Davis and Johnsrude, 2003; Hickok and Poeppel, 2007; Peelle et al., 2010]. Notably, activity in inferior and middle frontal gyrus in this study preceded activity in superior temporal cortices [Sohoglu et al., 2012], that is, in regions that are sensitive to the acoustic details of speech [Desai et al., 2008; Guenther et al., 2004; Husain et al., 2006], suggesting that contextual information from higher‐level areas improves lower‐level sensory processing by top‐down mechanisms.

Further brain imaging studies have provided evidence that contextual information is particularly helpful for comprehending degraded speech if this information allows for meaning‐based (i.e. semantic) predictions [Golestani et al., 2013; Obleser and Kotz, 2010; Obleser et al., 2007]. These studies showed that heteromodal semantic processing areas—most consistently angular gyrus in parietal cortex [Binder et al., 2009, 2000]—were active when words with reduced intelligibility occurred in a context that predicted their meaning. For instance, in a retroactive semantic priming task with words embedded in noise, activity in angular gyrus was greater when the target word was predictable from a semantically related prime word than when it was not predictable [i.e. occurring in the context of a semantically unrelated word; Golestani et al., 2013]. Thus, angular gyrus appears to be responsive to predictive semantic information during the processing of speech with reduced intelligibility. In the current study, we went a step further and examined the role of angular gyrus in the presence of predictive information when acoustic information critical to the identification of ambiguous words was entirely omitted.

Incomplete speech frequently occurs in natural conversations and commonly results from sluggish articulation; it can be characterized by word‐final speech sounds being omitted altogether [Guy, 1980; Zimmerer et al., 2011]. The observation that incomplete speech hardly ever reduces comprehension [Janse et al., 2007] suggests that the underlying neural mechanisms can readily accommodate an incomplete speech signal (i.e. one lacking sensory evidence), making incomplete speech an ideal test case for any model of predictive neural processes. Along these lines, electroencephalographic (EEG) responses to incomplete sentence‐final nouns were stronger when the missing consonant cluster was predictable from semantic context than when it was unpredictable [Bendixen et al., 2014]. This difference localized to bilateral superior temporal gyrus and left angular gyrus. However, due to the limited localization accuracy of electrophysiological measures, that study could not directly speak to the role of semantic processing areas during the perception of incomplete speech input in predictive contexts.

Here we investigated whether semantic processing areas (including angular gyrus) are also involved in comprehending speech when sensory acoustic evidence is missing entirely. We designed an fMRI study using the same sentence materials as in Bendixen et al. [2014], and orthogonally manipulated the completeness of sentence‐final words and their predictability from the preceding semantic context in a 2 × 2 design. We expected that semantic processing areas would show interactions between the effects of word predictability and word completeness, with the specific prediction that angular gyrus should best reflect an effect of semantic predictability on incomplete sensory input. On the basis of previous electrophysiological data, we additionally expected that bilateral temporal areas would yield stronger responses to incomplete than to complete words [cf. Raij et al., 1997], and that this pattern should further be enhanced if words are predictable from semantic context [Bendixen et al., 2014].

MATERIALS AND METHODS

Participants

Twenty‐two healthy volunteers (nine females, mean age = 26 yr, SD = 2.8 yr, range = 22–32 yr) with no self‐reported hearing problems or neurological disorders participated in the experiment. All participants were native speakers of German, right‐handed [laterality quotients > 75, Oldfield, 1971] and gave their written informed consent according to the Declaration of Helsinki. Participants received monetary compensation for their participation. All procedures had ethical approval from the Ethics Committee of the University of Leipzig.

Design and Material

Experimental design

Sentence stimuli were orthogonally manipulated according to a 2 × 2 design: Final words in 240 sentences were either predictable or unpredictable based on the preceding semantic context (predictability) and were either complete or incomplete based on the presence or absence of their final consonant clusters, respectively (completeness).

Sentence construction

Complete sentence‐final words were either the German noun Lachs (salmon) or Latz (bib), occurring equally often. Sentences with incomplete final words ended in the fragment La–, which was acoustically controlled so that it conveyed no information about potentially following consonants. This was achieved by a morphing procedure in MATLAB, first described in Scharinger et al. [2012]: the final consonant clusters [ts] and [ks] were removed from the two best tokens, the remaining initial [la] portions were trimmed to the same duration of 202 msec, and the two [la] portions were then averaged on a point‐by‐point basis.
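
The trimming‐and‐averaging step can be sketched as follows. This is a minimal illustration in Python/NumPy (the original procedure was implemented in MATLAB); the 202 ms fragment duration is taken from the text, while the sampling rate and function name are assumptions for illustration only.

```python
import numpy as np

def morph_la_fragments(token_a, token_b, sr=44100, dur_s=0.202):
    """Trim two word-onset fragments (final consonant clusters already
    removed) to the same duration and average them point-by-point,
    yielding a single La- fragment that favors neither continuation."""
    n = int(round(sr * dur_s))                      # samples in 202 ms
    a = np.asarray(token_a, dtype=float)[:n]        # trimmed [la] of token A
    b = np.asarray(token_b, dtype=float)[:n]        # trimmed [la] of token B
    return (a + b) / 2.0                            # point-by-point average
```

Averaging the waveforms in this way removes any residual coarticulatory cue that would bias listeners toward [ts] or [ks].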

One hundred twenty sentences afforded prediction of the sentence‐final word (Lachs or Latz; predictable, e.g. Der Tierforscher untersucht in Alaska den wilden Lachs “The animal researcher in Alaska examines the wild salmon”), and 120 sentences did not contain enough semantic context to predict the sentence‐final word (Lachs or Latz; unpredictable, e.g. Ich dachte in diesem Moment überhaupt nicht an den Lachs “In this moment I did not at all think about the salmon”; see Fig. 1A). Predictability differences were verified in two pretests with listeners not involved in the main experiment. In the first pretest, we ensured that the final word was indeed expected in the predictive sentence contexts, as measured by a semantic fitting rating (from 1: does not fit at all, to 5: fits very well). Sentences with fitting ratings ≤2 were considered nonpredictive, sentences with fitting ratings ≥4 were considered predictive, and sentences with ratings between these values were not used. In the second pretest, we asked participants to provide the most likely continuations of sentences ending in the fragment “LA” (the incomplete word form). Here, we retained LACHS‐constraining sentences only if at least 85% of all continuations resulted in LACHS (same threshold for LATZ), while for unconstraining sentences we allowed a maximum of 25% LACHS or LATZ continuations [for details, see Bendixen et al., 2014]. Physical sentence duration was approximately matched between predictability conditions (predictable: 3.11 ± 0.42 sec; unpredictable: 2.83 ± 0.31 sec), but sentences with predictable final words were on average one word longer than sentences with unpredictable final words (predictable: 9.33 ± 1.71 words [mean ± standard deviation]; unpredictable: 8.40 ± 1.34 words).

Figure 1.

A. Example sentences with predictable (green) and unpredictable (blue) final words. Incomplete words were La– fragments and were identical to their full‐word counterparts up to 202 ms after word onset. B. Averaged d' and c values and reaction times for each of the predictability × completeness combinations. Error bars reflect the standard error of the mean. Significant effects are illustrated by asterisks. [Color figure can be viewed in the online issue, which is available at http://wileyonlinelibrary.com.]

Probe word selection

In order to ensure attention to each sentence, participants performed a visual task on all sentence trials [for similar designs, see Davis et al., 2011; Love et al., 2006; Rodd et al., 2005]. This task required indicating whether a visually presented probe word did or did not match the sentence meaning. For instance, for the sentence Der Tierforscher untersucht in Alaska den wilden Lachs “The animal researcher in Alaska examines the wild salmon,” the related probe word was Wildnis “wilderness” while the unrelated probe word was Stuhl “chair.” Probe words for the task performed during the fMRI experiment were chosen on the basis of an online pretest. Probe words were nouns that did not occur in any of the auditory sentence stimuli. Each probe noun was paired with a single sentence; half of the probe nouns matched the meaning of the corresponding sentence and half did not, and no probe noun was repeated. In the online pretest, the 240 sentence‐noun pairs were visually presented, and participants rated how well the noun matched the corresponding sentence context on a scale from 1 (no match) to 5 (very good match). A total of 48 participants (16 males, mean age = 32 yr, SD = 11.7 yr) took part in this pretest. On the basis of the rating results, matching nouns were retained if they had a median rating higher than 3.5, and nonmatching nouns if they had a median rating lower than 2.5. This necessitated the retesting of 45 sentence‐noun pairings, for which we used a different (matching or nonmatching) noun. The retesting was done by an additional five participants (three males, mean age = 36 yr, SD = 5.3 yr) and resulted in clear ratings for matching nouns (≥4) and nonmatching nouns (≤2), yielding a final selection of nouns that either matched or did not match the sentence contexts.

Task and Procedure

The task involved indicating whether a visually presented probe word fit within the semantic context provided by the preceding auditory sentence. Responses could be “yes” (the probe word matches the sentence) or “no” (the probe word does not match the sentence).

Each trial (fixed duration: 11.2 s) started with the presentation of a visual fixation cross, simultaneous with the onset of the auditory sentence stimulus, which was presented during the silent period of an interleaved steady‐state sequence [ISSS, Mueller et al., 2011; Schwarzbauer et al., 2006]. After 6.4 s (±0.25 s) the fixation cross changed to the probe word, which stayed on the screen until the participant pressed one of two buttons on an MR‐compatible device held in the right hand. Button‐response assignment was counterbalanced across participants. After the response, the probe word disappeared until the onset of the next trial. Stimulus presentation was controlled with Presentation software (Neurobehavioral Systems, Albany, CA), running on an IBM‐compatible computer. Visual stimuli were projected through an LCD projector onto a mirror screen attached to the head coil, and auditory stimuli were presented via MRI‐compatible headphones (Commander XG, Resonance Technology, Inc.).

In each of three 16‐min runs, 80 sentence trials and 20 silent control (i.e., null) trials were presented. All predictability × completeness combinations occurred equally often within each run. The order of sentences in each run was pseudo‐randomized; constraints on the randomization ensured that identical sentence types (e.g. sentences with complete and predictable Lachs) could not immediately follow each other. The entire experiment, including preparation and participant debriefing, lasted about 1 h.

Behavioral Data Analysis

Behavioral data were analyzed in terms of reaction time, perceptual sensitivity (d'), and response bias (c) [Macmillan and Creelman, 2005]. Hit rates were quantified as “yes” responses to probe words that matched the context, and false‐alarm rates as “yes” responses to probe words that did not match the context. Reaction times, d', and c served as dependent variables in separate 2 × 2 ANOVAs with the factors predictability (final word predictable/unpredictable) and completeness (final word complete/incomplete). Only significant effects and/or interactions are reported. Effect sizes are given as partial eta squared (ηp²).
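
For concreteness, sensitivity and criterion follow the standard signal‐detection formulas d' = z(H) − z(F) and c = −(z(H) + z(F))/2, with z the inverse standard‐normal CDF [Macmillan and Creelman, 2005]. A minimal Python sketch (the function name is ours; the original analysis was not necessarily implemented this way):

```python
from statistics import NormalDist

def dprime_and_c(hit_rate, fa_rate):
    """Perceptual sensitivity d' = z(H) - z(F) and response bias
    c = -(z(H) + z(F)) / 2. Rates must lie strictly between 0 and 1
    (extreme rates are typically corrected before this step)."""
    z = NormalDist().inv_cdf
    zh, zf = z(hit_rate), z(fa_rate)
    return zh - zf, -0.5 * (zh + zf)
```

Positive c values, as observed here (mean c = 1.16), indicate a conservative bias toward “no” responses.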

fMRI Recordings and Preprocessing

Functional MRI data were acquired on a 3 T MedSpec 30/100 scanner (Bruker, Ettlingen, Germany) using a birdcage head coil. Echo‐planar images were recorded using an ISSS sequence [Mueller et al., 2011; Schwarzbauer et al., 2006] with the following parameters: TR = 1.6 s, TE = 30.36 ms, Ernst angle = 73°, matrix = 64 × 64 pixels, FOV = 19.2 × 19.2 cm, resulting in an in‐plane resolution of 3 × 3 mm. One echo‐planar imaging (EPI) volume consisted of 24 slices with a thickness of 4 mm and an interslice gap of 1 mm. The auditory stimulation was presented while the magnetization was kept in a silent steady state for a duration equal to three TRs, and the behavioral task was carried out during the subsequent acquisition of four EPI volumes. Note that EPI volumes were collected on average 5 s after sentence offset, such that they best captured the hemodynamic response to the sentence‐final word. After the last experimental run, a multiecho anatomical scan (field map) employing the same slice geometry was recorded. The field map was used for geometric distortion correction of the EPI images based on the corresponding voxel displacement map [Hutton et al., 2002; Jezzard and Balaban, 1995].

Existing high‐resolution T1‐weighted images (voxel dimensions 1 × 1 × 1 mm) were taken from the database of the Max Planck Institute for Human Cognitive and Brain Sciences (Leipzig, Germany) for co‐registration and normalization purposes. These images had been recorded on average 21 months before the experiment (SD = 16 months) on a 3 T MAGNETOM TIM Trio scanner (Siemens, Erlangen, Germany) using a three‐dimensional MP‐RAGE sequence [Mugler and Brookeman, 1990].

Data were preprocessed and analyzed using SPM8 (Wellcome Trust Centre for Neuroimaging, London, UK) and custom MATLAB scripts. Preprocessing comprised movement‐artifact removal, unwarping, co‐registration to structural images, normalization to Montreal Neurological Institute (MNI) space, and smoothing [using an 8 mm full‐width‐at‐half‐maximum Gaussian kernel, cf. Ashburner and Friston, 2004; Ashburner and Good, 2003]. First‐level analyses used a finite impulse response (FIR) function for the four EPI volumes (modeled separately; window length = 1.6 s, order = 1), a common choice for interleaved acquisition designs [Henson, 2001; Herrmann et al., 2014; Mueller et al., 2011; Peelle, 2014; Schwarzbauer et al., 2006]. A high‐pass filter with a cut‐off of 128 s was applied to remove slow drifts in the data. Since within‐subject temporal autocorrelations were irrelevant for our analysis, we did not account for serial autocorrelations [cf. Davis et al., 2011].

fMRI Analyses for Effects of Predictability and Completeness

First‐level analyses consisted of estimation of a general linear model [Friston, 2004] for each participant. The design matrix included regressors for all EPI volumes in each of the predictability × completeness conditions (unpredictable complete, unpredictable incomplete, predictable complete, predictable incomplete), and control trials (silence). Experimental runs were included as regressors of no interest. Additional regressors of no interest accounted for the realignment‐induced spatial deformations of the EPI volumes and for variance in sentence duration between sentences with predictable and unpredictable final words (number of words per sentence, physical sentence duration in milliseconds). Although the temporal separation of the probe task from the sentence stimuli and the transient nature of the acoustic stimulus material make time‐on‐task effects [Yarkoni et al., 2009] rather unlikely, we nevertheless included trial‐by‐trial reaction times from the probe task as an additional regressor of no interest. Effects of overall brain activation were determined by contrasting all four predictability × completeness conditions against control trials (silence and no task). Furthermore, main effects of predictability and completeness as well as their interaction were modeled at the first level.

On the second level, first‐level contrast coefficients were tested against zero using one‐sample t‐tests. T‐values were transformed to z‐scores, and activations were corrected for multiple comparisons (P < 0.05) using a Monte‐Carlo‐simulated cluster‐extent threshold. The simulation used a 16 mm full‐width‐at‐half‐maximum Gaussian kernel [Slotnick et al., 2003], based on smoothness estimates from the final test‐statistic image (∼16 mm). Following this simulation, voxels with z‐scores equal to or greater than 3.20 (P < 0.001) belonging to clusters of at least 18 voxels at the native voxel resolution were deemed statistically significant.
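
The cluster‐extent criterion amounts to retaining supra‐threshold voxels only if they belong to a connected cluster of at least 18 voxels. A schematic Python/SciPy version, shown here purely to make the rule explicit (the actual correction was applied to SPM statistic images; the array stands in for a z‐map):

```python
import numpy as np
from scipy import ndimage

def apply_cluster_extent(zmap, z_thresh=3.20, min_voxels=18):
    """Zero out supra-threshold voxels that do not belong to a
    connected cluster of at least `min_voxels` voxels."""
    supra = zmap >= z_thresh
    labels, n_clusters = ndimage.label(supra)        # connected components
    sizes = ndimage.sum(supra, labels, index=range(1, n_clusters + 1))
    keep_ids = [i + 1 for i, s in enumerate(sizes) if s >= min_voxels]
    return np.where(np.isin(labels, keep_ids), zmap, 0.0)
```

Together, the voxelwise height threshold (z ≥ 3.20) and the Monte‐Carlo‐derived extent threshold (k ≥ 18) control the familywise error rate at P < 0.05.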

In order to decompose significant interaction effects, first‐level beta values from the baseline contrast were extracted and averaged within regions of interest (ROIs, defined as spheres with 5 mm radii around centers determined from the whole‐brain analyses). Betas for these ROIs were derived from general linear model estimations including the same regressors of no interest as described above for the whole‐brain analysis, and were transformed into percent signal change using the SPM toolbox MarsBaR [Brett et al., 2002]. For the sole purpose of resolving statistical interactions observed at the whole‐brain level, percent signal change values were compared across levels of predictability and completeness using paired t‐tests.
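
The ROI extraction can be illustrated as follows. This is a sketch only: it assumes a 5 mm radius sphere on the 3 mm in‐plane grid and uses the simplest definition of percent signal change (100 × beta / baseline mean), whereas the reported values were computed with MarsBaR.

```python
import numpy as np

def sphere_mask(shape, center_vox, radius_mm, voxel_mm=3.0):
    """Boolean mask of voxels within radius_mm of a center voxel."""
    grids = np.ogrid[tuple(slice(0, s) for s in shape)]
    dist2_vox = sum((g - c) ** 2 for g, c in zip(grids, center_vox))
    return dist2_vox * voxel_mm ** 2 <= radius_mm ** 2

def roi_percent_signal_change(beta_map, baseline_map, center_vox,
                              radius_mm=5.0, voxel_mm=3.0):
    """Average percent signal change (100 * beta / baseline) within a
    spherical ROI centered on a whole-brain peak coordinate."""
    m = sphere_mask(beta_map.shape, center_vox, radius_mm, voxel_mm)
    return float(np.mean(100.0 * beta_map[m] / baseline_map[m]))
```

With a 5 mm radius on a 3 mm grid, the sphere comprises the center voxel and its six face neighbors.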

Determination of anatomical locations was based on the Automated Anatomical Labeling Atlas (AAL) [Tzourio‐Mazoyer et al., 2002].

RESULTS

Behavioral Performance

Participants' performance on the word‐matching task was very good (accuracy = 89% correct; mean d' = 1.80 ± 0.026) and significantly above chance (test of d' against zero: t(21) = 45.85, P < 0.001). The ANOVA on reaction times revealed a main effect of predictability (F(1,21) = 30.11, P < 0.001, ηp² = 0.32), with faster reaction times for sentences with predictable words than for sentences with unpredictable words. There was also a main effect of completeness (F(1,21) = 6.24, P < 0.05, ηp² = 0.09), indicating faster reaction times for sentences with incomplete than for sentences with complete words (Fig. 1B, right). The ANOVA on d' paralleled the reaction‐time pattern and showed a main effect of predictability (F(1,21) = 10.45, P < 0.001, ηp² = 0.14), with better performance for sentences with predictable words than for sentences with unpredictable words. Further, there was a main effect of completeness (F(1,21) = 20.80, P < 0.001, ηp² = 0.25): performance was better when sentence‐final words were incomplete than when they were complete (Fig. 1B, left). With respect to the criterion c, participants showed an overall response bias toward indicating that probe words did not match the sentence context (mean c = 1.16 ± 0.016, test against zero: t(21) = 40.60, P < 0.001). The ANOVA on c showed a main effect of completeness (F(1,21) = 17.44, P < 0.001, ηp² = 0.22), with a stronger bias for complete than for incomplete words (Fig. 1B, middle). No two‐way interaction reached significance for any dependent behavioral variable (all P > 0.12).

Overall Brain Activation

Testing all four conditions against silent trials revealed activations in areas corresponding to a left‐dominant cortical language network [Friederici, 2012; Herrmann et al., 2012; Hickok and Poeppel, 2007] (Fig. 2). Significant clusters occurred in left inferior frontal gyrus (comprising Brodmann areas [BA] 45 and 47 and extending into premotor cortex), left middle temporal gyrus (BA 21 and 22), left inferior temporal gyrus (comprising the caudal part of the fusiform area), and left anterior cingulate and superior frontal gyrus (premotor cortex [BA 6] as well as more anterior regions [BA 8]). In the right hemisphere, activation was seen in anterior parts of the superior temporal gyrus/sulcus (BA 22). Furthermore, significant clusters occurred in right cerebellum (culmen), left putamen, and left pulvinar. An overview of significant clusters is provided in Table 1 and Figure 2.

Figure 2.

Overall brain activation in the four predictability × completeness conditions (against silent trials), illustrated with representative sagittal (left‐right), coronal (caudal‐rostral) and axial (ventral‐dorsal) slices. [Color figure can be viewed in the online issue, which is available at http://wileyonlinelibrary.com.]

Table 1.

Significant BOLD activations for overall brain activity (all predictability × completeness conditions against silent trials; thresholded at P < 0.001, cluster extent threshold corrected with k > 18)

Region Coordinates (mm) z‐Value Voxels mm3
l. IFG −51 20 22 5.75 794 2,382
l. aMTG −57 −28 −2 5.69 923 2,769
l. ACC −3 8 55 5.53 113 339
r. pSTG/STS 36 −40 4 5.48 776 2,328
r. Cerebellum 30 −58 −29 4.92 355 1,065
l. ParaHip −18 −16 −17 4.77 108 324
l. Pulvinar −9 −34 1 4.34 36 108
l. ITG −51 −58 −29 4.2 41 123
l. Putamen −24 −25 13 3.82 57 171
l. SFG −9 50 43 3.56 40 120

Coordinates are given in MNI space.

IFG = inferior frontal gyrus, aMTG = anterior middle temporal gyrus, ACC = anterior cingulate cortex, pSTG/STS = posterior superior temporal gyrus/sulcus, ParaHip = parahippocampal gyrus, ITG = inferior temporal gyrus, SFG = superior frontal gyrus.

Predictability Effects in Ventral Temporal and Parietal Cortex

Predictable sentence‐final words showed more BOLD activity than unpredictable sentence‐final words in left ventral temporal cortex (peak coordinate in inferior temporal gyrus, extending into fusiform (BA 37) and parahippocampal gyrus). In contrast, unpredictable sentence‐final words revealed more BOLD activity than predictable sentence‐final words in left anterior middle temporal gyrus (BA 21), extending into the temporal pole (see Table 2 and Fig. 3).

Table 2.

Significant BOLD activations in the predictability and completeness contrasts, and in the predictability × completeness interaction

Contrast Region Coordinates (mm) z Value Voxels mm3
Predictable > Unpredictable l. ITG −33 −34 −20 4.27 203 609
Predictable > Unpredictable l. Precuneus −24 −79 46 3.64 55 165
Unpredictable > Predictable l. aMTG −51 2 −23 3.76 71 213
Incomplete > Complete r. pSTG/STS 60 −4 −2 4.51 267 807
Incomplete > Complete l. pSTG/STS −57 −16 −2 4.06 259 777
Predictability × Completeness interaction l. IPL −42 −49 49 3.50 45 135
Predictability × Completeness interaction l. AG (second peak) −42 −49 34 3.22
Predictability × Completeness interaction r. MFG 30 11 46 3.56 54 162
Predictability × Completeness interaction l. Precuneus −30 −73 31 3.58 82 246

Coordinates are given in MNI space.

pSTG/STS = posterior superior temporal gyrus/sulcus, ITG = inferior temporal gyrus, aMTG = anterior middle temporal gyrus, IPL = inferior parietal lobule, AG = angular gyrus, MFG = middle frontal gyrus.

Figure 3.

Significant BOLD activations in the contrasts predictable > unpredictable (green) and unpredictable > predictable (blue). ITG = inferior temporal gyrus, aMTG = anterior middle temporal gyrus. [Color figure can be viewed in the online issue, which is available at http://wileyonlinelibrary.com.]

Completeness Effects in Bilateral Auditory Areas

Incomplete sentence‐final words showed stronger BOLD activity than complete sentence‐final words in bilateral temporal auditory areas (posterior parts of the superior temporal gyrus/sulcus; STG/STS, BA 21/22). The right cluster additionally involved parts of the planum temporale [46–65% probability of planum temporale localization according to Westbury et al., 1999; Table 2; Fig. 4]. Complete sentence‐final words did not show significantly stronger BOLD activity than incomplete sentence‐final words anywhere at the whole‐brain level.

Figure 4.

Significant BOLD activations in the contrast incomplete > complete (red). Percent signal changes (bottom) were estimated from ROIs with center coordinates obtained from the significant pSTG/STS clusters of the whole‐brain analysis. Error bars reflect the standard error of the mean. pSTG/STS = posterior superior temporal gyrus/sulcus. [Color figure can be viewed in the online issue, which is available at http://wileyonlinelibrary.com.]

Interaction of Predictability and Completeness Effects in Frontoparietal Areas

An interaction of the effects of predictability and completeness was seen in significant clusters in parietal and middle frontal cortices. The parietal cluster had its main peak in the inferior parietal lobule and its second peak in the angular gyrus, thus also comprising dorsal parts of angular gyrus. A second cluster, with peak coordinates in the precuneus (BA 7), extended into ventral parts of angular gyrus [Seghier, 2012]. A third cluster was located in the posterior part of the right middle frontal gyrus (between BA 6 and BA 8; Table 2; Fig. 5).

Figure 5.

Significant BOLD activations for the predictability × completeness interaction (magenta). Interaction patterns from significant whole‐brain analysis clusters are illustrated below. Error bars reflect standard error of the mean. MFG = middle frontal gyrus. [Color figure can be viewed in the online issue, which is available at http://wileyonlinelibrary.com.]

The interaction from the whole‐brain analysis is based on testing every voxel separately. In order to examine the form of the interaction, and to test whether the three regions should be treated separately or alike, we additionally subjected percent signal changes to a repeated‐measures analysis of variance (ANOVA) with the factors region of interest (left angular gyrus/inferior parietal lobule, left precuneus, right middle frontal gyrus), predictability (predictable, unpredictable), and completeness (complete, incomplete). This ANOVA confirmed the whole‐brain‐level interaction of the effects of predictability and completeness (F(1,21) = 12.53, ηp² = 0.05, P < 0.001). Importantly, this two‐way interaction did not depend on the region of interest, as indicated by the nonsignificant three‐way predictability × completeness × region of interest interaction (F(2,21) = 0.28, ηp² = 0.003, P = 0.76). We also estimated the corresponding Bayes Factors for all possible ANOVA models with the three factors predictability, completeness, and region of interest and their interactions [in comparison with a grand‐mean‐only model, cf. Rouder et al., 2012]. The winning model (BF01 = 9,458) comprised the factors region of interest, predictability, and completeness, plus the predictability × completeness interaction. Such a large Bayes Factor provides decisive evidence [Jeffreys, 1961] for the model with the two‐way interaction, and thus for the assumption that the three brain regions do not differ in terms of the predictability × completeness interaction.

Paired t‐tests on the marginal means of the completeness × predictability combinations, averaged across all three regions under consideration, yielded the following pattern: activity did not differ between complete and incomplete words when they were predictable (t(21) = 1.51, P = 0.13), whereas activity did differ when they were unpredictable (t(21) = 3.49, P < 0.001), with incomplete words eliciting a stronger BOLD signal than complete words (see Fig. 5).

DISCUSSION

In the current fMRI study, we investigated neural responses during listening to incomplete speech in predictive semantic contexts. The orthogonal manipulation of word completeness and word predictability resulted in the following activation patterns: heteromodal semantic processing areas, including left angular gyrus/inferior parietal lobule, showed a significant interaction between a sentence's predictability and the completeness of its final word. In left angular gyrus, precuneus, and right middle frontal gyrus, neural activation was similar for complete and incomplete final words when they were predictable, while activation was stronger for incomplete than for complete words when they were unpredictable.

Semantic Processing Areas Accommodate Missing Sensory Evidence in Predictive Contexts

The interaction between the effects of predictability and completeness observed in left angular gyrus/inferior parietal lobule, left precuneus, and right middle frontal gyrus suggests that these areas do not distinguish between incomplete and complete words when the words are predictable. That is, for these regions, the lack of sensory evidence in predictive contexts appears to be irrelevant. By contrast, stronger activation for incomplete unpredictable words indicates that these areas are sensitive to missing input when the semantic context does not allow for strong predictions about the sentence‐final word.

Angular gyrus and precuneus are implicated in semantic processing of speech and language. Angular gyrus has been identified as the area most consistently showing sensitivity to semantic aspects of language processing [Binder et al., 2009; Pallier et al., 2011; Seghier et al., 2010]. Previous studies suggested that angular gyrus specifically supports the beneficial effects of semantic context for the comprehension of speech in adverse listening conditions, for example, when speech is spectrally degraded [Clos et al., 2014; Golestani et al., 2013; Obleser et al., 2007]. In these conditions, semantic context provides a “predictability gain” for spectrally degraded acoustic input, an effect accompanied by increased angular gyrus activity [Clos et al., 2014]. Moreover, a recent repetitive transcranial magnetic stimulation (rTMS) experiment demonstrated that angular gyrus is causally involved in supporting this predictability gain [Hartwigsen et al., 2015]. When rTMS was applied to a control site (superior parietal lobe), the authors observed a predictability gain for sentence‐final, spectrally degraded words: the proportion of correctly repeated sentence‐final words was higher when these words were predictable than when they were not. Importantly, however, the predictability gain was eliminated when rTMS was applied to left angular gyrus. These findings imply that angular gyrus is directly involved in enabling the behavioral benefits of predictive semantic context.

The precuneus in the posterior‐medial portion of the parietal lobe has been suggested to support auditory processing of meaningful words [as opposed to pseudowords, Raettig and Kotz, 2008] and sentences [Tulving et al., 1994]. The precuneus has further been observed to be active during word‐into‐context integration [Graves et al., 2010; Kotz et al., 2002; Sass et al., 2009].

The functions of angular gyrus and precuneus can also be considered contributions to the default mode network [Buckner et al., 2008; Fransson and Marrelec, 2008]. The default mode network has been ascribed a critical function in memory retrieval [Sestieri et al., 2011]. In particular, angular gyrus, as part of the ventral parietal cortex within the default mode network, seems to act as a buffer for multimodal episodic information [Humphreys and Lambon Ralph, 2014] and to support short‐term memory for sentences [Dronkers et al., 2004]. Considered against the backdrop of findings implicating angular gyrus in bottom‐up attention [Cabeza et al., 2012] as well as in episodic retrieval and language processing [Hutchinson et al., 2009], our findings support an integrative role of angular gyrus, in which the retrieval of meaning (top‐down semantic processing) interacts with sensory evidence (bottom‐up speech sound processing).

The proposed function of angular gyrus in integrating top‐down predictions with bottom‐up sensory evidence is supported by its functional connectivity to both auditory cortex (through the inferior longitudinal fasciculus) and inferior and middle frontal gyrus [through the third branch of the superior longitudinal fasciculus, cf. Seghier, 2012]. The current activation pattern might thus reflect the following situation: when top‐down predictions originate from, for example, inferior frontal gyrus [Sohoglu et al., 2012], angular gyrus does not differentiate between complete and incomplete bottom‐up sensory evidence. In the absence of top‐down predictive information, however, angular gyrus is more active when it receives incomplete sensory input. In this situation, increased activity may indicate intensified perceptual analysis (“listening harder”).

In a similar vein, middle frontal gyrus has previously been found to be engaged when speech with reduced intelligibility benefits from prior (matching) information [Sohoglu et al., 2012]. While those authors interpreted this as evidence for a role in top‐down processing, we are reluctant to ascribe to the right middle frontal gyrus an exclusively top‐down supporting role in our study. This is because we assume that top‐down processes should apply most prominently during sentence processing [and before the sentence‐final words, as evidenced by the time‐course results in Sohoglu et al., 2012]. However, given that we optimized our experimental design for analyzing sentence‐final effects, and given the poor temporal resolution of fMRI, our study cannot provide direct evidence that right middle frontal gyrus exclusively reflects top‐down processes.

Predictability Effects in Ventral Temporal and Parietal Areas

We observed a predictability effect in left precuneus and in inferior temporal gyrus (including fusiform and parahippocampal gyrus). In these areas, predictable words elicited more activation than unpredictable words. Fusiform and parahippocampal gyrus are also implicated in semantic aspects of language comprehension [Binder et al., 2009; Kotz et al., 2002; Rodd et al., 2012; Sass et al., 2009]. We therefore suggest that the higher activation for predictable words reflects the benefit these words draw from semantic context.

This interpretation is in line with findings of higher activations in semantic processing areas for words in predictive (semantic) contexts [e.g. Binder et al., 2009; Clos et al., 2014; Golestani et al., 2013; Kotz et al., 2002; Obleser et al., 2007; Sass et al., 2009]. Studies leading to this insight most commonly employed a priming paradigm in which primes instantiate a meaning‐related context for a subsequent target. For instance, directly primed words (e.g. frame preceded by picture) elicit stronger activation than unprimed words in left precuneus [Sass et al., 2009].

Note, however, that priming effects in fMRI studies are commonly defined by reduced activity in the primed as compared with the unprimed (neutral) condition [e.g. Copland et al., 2003]; that is, the unprimed condition shows more activity than the primed condition. We also found brain regions that specifically showed more activation for unpredictable than for predictable sentence‐final words. This was seen in parts of the middle temporal gyrus, a highly interconnected area that is also associated with semantic processing [Turken and Dronkers, 2011; Wible et al., 2006]. We propose that activity in middle temporal gyrus scaled with the effort of integrating a word into the preceding semantic context.

When interpreting the functional relevance of these areas with regard to our study, we presuppose a link between priming and prediction, assuming that a prime can set up a predictive context for a subsequent target [i.e. pre‐activate a target, cf. Neely et al., 1989]. However, we argue that priming is less specific than prediction: in our experimental design, both candidate words were primed (pre‐activated) throughout, but only one of them was specifically predictable at any given time (i.e. either Lachs or Latz). Our design thus allows us to examine pure predictability effects unconfounded by priming. It also avoids the frequent confound of fulfillment versus violation of expectation present in both priming and prediction studies: sentence‐final words in our study never violated expectations. For these reasons, differences between our study and previous fMRI priming studies are not surprising.

Processing of Incomplete Speech Signals in Bilateral Temporal Auditory Areas

Incomplete words elicited stronger BOLD responses than complete words in left and right STG/STS and in right planum temporale. Bilateral STG/STS and planum temporale are key areas for processing speech and non‐speech sounds [Desai et al., 2008; Griffiths and Warren, 2002; Guenther et al., 2004; Hickok and Poeppel, 2007; Husain et al., 2006]. A common finding of previous studies is that STG/STS subserves the categorization of spectro‐temporal information, with a particular role for the planum temporale in segregating spectro‐temporal patterns and matching them to learned acoustic representations [Griffiths and Warren, 2002]. Incomplete words thus mismatch the learned spectro‐temporal acoustic patterns (i.e. the auditory word representations), which might have led to error responses.

The elicitation of error responses by missing auditory input has also been investigated in electrophysiological studies with a focus on predictive auditory processing [Raij et al., 1997]. In this respect, the results of our study provide an important extension of a previous EEG study that used the same stimulus material in a similar experimental design [Bendixen et al., 2014]. In that study, predictable incomplete words (but not unpredictable incomplete words) elicited an omission mismatch negativity (MMN) relative to their complete counterparts. The MMN generators localized to bilateral temporal cortices and left angular gyrus. This suggested that bilateral temporal cortices as well as angular gyrus are particularly sensitive to deviations from predictable speech input and, therefore, to semantic and form‐based aspects of the speech signal. This contrasts with the pattern of results of the current fMRI study, in which we found no differences between incomplete predictable and unpredictable words in bilateral STG/STS and no differences between predictable complete and incomplete words in left angular gyrus.

It is important to note, though, that the two studies differed in several aspects. Most importantly, omissions occurred on 33% of the trials in the previous EEG study but on 50% of the trials in the current fMRI study. It is possible that an early omission response for incomplete predictable words is no longer elicited when omissions or disruptions occur too frequently [Besson et al., 1997; cf. discussion in Bendixen et al., 2014]. Alternatively, the difference between the interaction pattern of prediction and completeness effects in the current fMRI study and that in the previous EEG study may imply that the interaction observed here does not primarily reflect early (∼150 ms) effects, as in the EEG study. Instead, it likely reflects information integrated over several seconds, as captured by the BOLD signal.

Future research is needed to better localize time‐sensitive effects of incomplete speech signals in predictive contexts, and to tease apart effects of semantic predictability from effects of predictability generated by stimulus probabilities. Nevertheless, we speculate that the independence of the completeness effect from semantic predictability in STG/STS speaks to the differentiation of sound‐related and meaning‐related processes during speech perception.

CONCLUSIONS

In this study, we showed that heteromodal semantic processing areas provide an important neural stage for the benefits listeners draw from semantic context when processing incomplete speech signals. This goes critically beyond previous results on how semantic processing areas (in particular, angular gyrus) support the comprehension of acoustically degraded speech. We have shown here that a set of semantic processing areas supports comprehension by exploiting the predictive sentence context when whole sections of sensory acoustic evidence are absent. The interaction of meaning‐based and sound‐based processing in angular gyrus and precuneus further speaks to their more general role in integrating information from different processing levels, a function particularly relevant for speech and language.

Supporting information

Supporting Information Figure S1

ACKNOWLEDGMENTS

The authors thank Yunko Toraiwa for support in experiment preparation and Antje Strauß for substantial help in stimulus selection and manuscript discussion.

REFERENCES

  1. Ashburner J, Friston KJ (2004): Computational neuroanatomy. In: Frackowiak RS, Friston KJ, Frith CD, Dolan RJ, Price C, Zeki S, editors. Human Brain Function. Amsterdam: Academic Press. pp 655–672.
  2. Ashburner J, Good CD (2003): Spatial registration of images. In: Tofts P, editor. Qualitative MRI of the Brain: Measuring Changes Caused by Disease. Chichester, UK: Wiley. pp 503–531.
  3. Bar M (2009): The proactive brain: Memory for predictions. Philos Trans R Soc Lond Ser B Biol Sci 364:1235–1243.
  4. Bendixen A, Scharinger M, Strauss A, Obleser J (2014): Prediction in the service of speech comprehension: Modulated early brain responses to omitted speech segments. Cortex 53:9–26.
  5. Bendixen A, Schröger E, Winkler I (2009): I heard that coming: Event‐related potential evidence for stimulus‐driven prediction in the auditory system. J Neurosci 29:8447–8451.
  6. Besson M, Faita F, Czternasty C, Kutas M (1997): What's in a pause: Event‐related potential analysis of temporal disruptions in written and spoken sentences. Biol Psychol 46:3–23.
  7. Binder JR, Desai RH, Graves WW, Conant LL (2009): Where is the semantic system? A critical review and meta‐analysis of 120 functional neuroimaging studies. Cereb Cortex 19:2767–2796.
  8. Binder JR, Frost JA, Hammeke TA, Bellgowan PS, Springer JA, Kaufman JN, Possing ET (2000): Human temporal lobe activation by speech and nonspeech sounds. Cereb Cortex 10:512–528.
  9. Brett M, Anton J‐L, Valabregue R, Poline JB (2002): Region of interest analysis using an SPM toolbox. In: 8th International Conference on Functional Mapping of the Human Brain, June 2–6, Sendai, Japan.
  10. Buckner RL, Andrews‐Hanna JR, Schacter DL (2008): The brain's default network: Anatomy, function, and relevance to disease. Ann NY Acad Sci 1124:1–38.
  11. Cabeza R, Ciaramelli E, Moscovitch M (2012): Cognitive contributions of the ventral parietal cortex: An integrative theoretical account. Trends Cognit Sci 16:338–352.
  12. Clos M, Langner R, Meyer M, Oechslin MS, Zilles K, Eickhoff SB (2014): Effects of prior information on decoding degraded speech: An fMRI study. Hum Brain Mapp 35:61–74.
  13. Copland DA, de Zubicaray GI, McMahon K, Wilson SJ, Eastburn M, Chenery HJ (2003): Brain activity during automatic semantic priming revealed by event‐related functional magnetic resonance imaging. Neuroimage 20:302–310.
  14. Davis MH, Ford MA, Kherif F, Johnsrude IS (2011): Does semantic context benefit speech understanding through “top‐down” processes? Evidence from time‐resolved sparse fMRI. J Cognit Neurosci 23:3914–3932.
  15. Davis MH, Johnsrude IS (2003): Hierarchical processing in spoken language comprehension. J Neurosci 23:3423–3431.
  16. Desai R, Liebenthal E, Waldron E, Binder JR (2008): Left posterior temporal regions are sensitive to auditory categorization. J Cognit Neurosci 20:1174–1188.
  17. Dronkers NF, Wilkins DP, Van Valin RD, Redfern BB, Jaeger JJ (2004): Lesion analysis of the brain areas involved in language comprehension. Cognition 92:145–177.
  18. Federmeier KD (2007): Thinking ahead: The role and roots of prediction in language comprehension. Psychophysiology 44:491–505.
  19. Fransson P, Marrelec G (2008): The precuneus/posterior cingulate cortex plays a pivotal role in the default mode network: Evidence from a partial correlation network analysis. Neuroimage 42:1178–1184.
  20. Friederici AD (2012): The cortical language circuit: From auditory perception to sentence comprehension. Trends Cognit Sci 16:262–268.
  21. Friston KJ (2004): Experimental design and statistical parametric mapping. In: Frackowiak RS, Friston KJ, Frith CD, Dolan RJ, Price C, Zeki S, editors. Human Brain Function. Amsterdam: Academic Press. pp 599–632.
  22. Golestani N, Hervais‐Adelman A, Obleser J, Scott SK (2013): Semantic versus perceptual interactions in neural processing of speech‐in‐noise. Neuroimage 79:52–61.
  23. Golestani N, Rosen S, Scott SK (2009): Native‐language benefit for understanding speech‐in‐noise: The contribution of semantics. Bilingualism Lang Cognit 12:385–392.
  24. Graves WW, Binder JR, Desai RH, Conant LL, Seidenberg MS (2010): Neural correlates of implicit and explicit combinatorial semantic processing. Neuroimage 53:638–646.
  25. Griffiths TD, Warren JD (2002): The planum temporale as a computational hub. Trends Neurosci 25:348–353.
  26. Guenther FH, Nieto‐Castanon A, Ghosh SS, Tourville JA (2004): Representation of sound categories in auditory cortical maps. J Speech Lang Hear Res 47:46–57.
  27. Guy GR (1980): Variation in the group and the individual: The case of final stop deletion. In: Labov W, editor. Locating Language in Time and Space. New York: Academic Press. pp 1–36.
  28. Hartwigsen G, Golombek T, Obleser J (2015): Disrupted left angular gyrus function reduces the predictability gain in degraded speech comprehension. Cortex 68:100–110.
  29. Henson RNA (2001): Analysis of fMRI time series. In: Frackowiak R, Friston KJ, Frith CD, Dolan RJ, Price C, Zeki S, Ashburner J, Penny WD, editors. Human Brain Function. London: Academic Press. pp 793–822.
  30. Herrmann B, Henry MJ, Scharinger M, Obleser J (2014): Supplementary motor area activations predict individual differences in temporal‐change sensitivity and its illusory distortions. Neuroimage 101:370–379.
  31. Herrmann B, Obleser J, Kalberlah C, Haynes J‐D, Friederici AD (2012): Dissociable neural imprints of perception and grammar in auditory functional imaging. Hum Brain Mapp 33:584–595.
  32. Hickok G, Poeppel D (2007): The cortical organization of speech processing. Nat Rev Neurosci 8:393–402.
  33. Humphreys GF, Lambon Ralph MA (2014): Fusion and fission of cognitive functions in the human parietal cortex. Cereb Cortex 25:3547–3560.
  34. Husain FT, Fromm SJ, Pursley RH, Hosey LA, Braun AR, Horwitz B (2006): Neural bases of categorization of simple speech and nonspeech sounds. Hum Brain Mapp 27:636–651.
  35. Hutchinson JB, Uncapher MR, Wagner AD (2009): Posterior parietal cortex and episodic retrieval: Convergent and divergent effects of attention and memory. Learn Memory 16:343–356.
  36. Hutton C, Bork A, Josephs O, Deichmann R, Ashburner J, Turner R (2002): Image distortion correction in fMRI: A quantitative evaluation. Neuroimage 16:217–240.
  37. Janse E, Nooteboom SG, Quené H (2007): Coping with gradient forms of /t/‐deletion and lexical ambiguity in spoken word recognition. Lang Cognit Process 22:161–200.
  38. Jeffreys H (1961): Theory of Probability. Oxford, UK: Oxford University Press.
  39. Jezzard P, Balaban RS (1995): Correction for geometric distortion in echo planar images from B0 field variations. Magn Reson Med 34:65–73.
  40. Kotz SA, Cappa SF, von Cramon DY, Friederici AD (2002): Modulation of the lexical‐semantic network by auditory semantic priming: An event‐related functional MRI study. Neuroimage 17:1761–1772.
  41. Kutas M, Hillyard SA (1984): Brain potentials during reading reflect word expectancy and semantic association. Nature 307:161–163.
  42. Love T, Haist F, Nicol J, Swinney D (2006): A functional neuroimaging investigation of the roles of structural complexity and task‐demand during auditory sentence processing. Cortex 42:577–590.
  43. Macmillan NA, Creelman CD (2005): Detection Theory: A User's Guide. Mahwah, NJ: Erlbaum.
  44. Mayo LH, Florentine M, Buus S (1997): Age of second‐language acquisition and perception of speech in noise. J Speech Lang Hear Res 40:686–693.
  45. Mueller K, Mildner T, Fritz T, Lepsien J, Schwarzbauer C, Schroeter ML, Möller HE (2011): Investigating brain response to music: A comparison of different fMRI acquisition schemes. Neuroimage 54:337–343.
  46. Mugler JP, Brookeman JR (1990): Three‐dimensional magnetization‐prepared rapid gradient‐echo imaging (3D MP RAGE). Magn Reson Med 15:152–157.
  47. Neely JH, Keefe DE, Ross KL (1989): Semantic priming in the lexical decision task: Roles of prospective prime‐generated expectancies and retrospective semantic matching. J Exp Psychol 15:1003–1019.
  48. Obleser J, Kotz SA (2010): Expectancy constraints in degraded speech modulate the language comprehension network. Cereb Cortex 20:633–640.
  49. Obleser J, Wise RJS, Dresner MA, Scott SK (2007): Functional integration across brain regions improves speech perception under adverse listening conditions. J Neurosci 27:2283–2289.
  50. Oldfield RC (1971): The assessment and analysis of handedness: The Edinburgh Inventory. Neuropsychologia 9:97–113.
  51. Pallier C, Devauchelle A‐D, Dehaene S (2011): Cortical representation of the constituent structure of sentences. Proc Natl Acad Sci USA 108:2522–2527.
  52. Peelle JE (2014): Methodological challenges and solutions in auditory functional magnetic resonance imaging. Front Neurosci 8:253.
  53. Peelle JE, Johnsrude IS, Davis MH (2010): Hierarchical processing for speech in human auditory cortex and beyond. Front Hum Neurosci 4:51.
  54. Raettig T, Kotz SA (2008): Auditory processing of different types of pseudo‐words: An event‐related fMRI study. Neuroimage 39:1420–1428.
  55. Raij T, McEvoy L, Mäkelä JP, Hari R (1997): Human auditory cortex is activated by omissions of auditory stimuli. Brain Res 745:134–143.
  56. Rodd JM, Johnsrude IS, Davis MH (2012): Dissociating frontotemporal contributions to semantic ambiguity resolution in spoken sentences. Cereb Cortex 22:1761–1773.
  57. Rodd JM, Davis MH, Johnsrude IS (2005): The neural mechanisms of speech comprehension: fMRI studies of semantic ambiguity. Cereb Cortex 15:1261–1269.
  58. Rouder JN, Morey RD, Speckman PL, Province JM (2012): Default Bayes factors for ANOVA designs. J Math Psychol 56:356–374.
  59. Sass K, Krach S, Sachs O, Kircher T (2009): Lion ‐ tiger ‐ stripes: Neural correlates of indirect semantic priming across processing modalities. Neuroimage 45:224–236.
  60. Scharinger M, Bendixen A, Trujillo‐Barreto NJ, Obleser J (2012): A sparse neural code for some speech sounds but not for others. PLoS One 7:e40953. doi:10.1371/journal.pone.0040953.
  61. Schwarzbauer C, Davis MH, Rodd JM, Johnsrude I (2006): Interleaved silent steady state (ISSS) imaging: A new sparse imaging method applied to auditory fMRI. Neuroimage 29:774–782.
  62. Seghier ML (2012): The angular gyrus: Multiple functions and multiple subdivisions. Neuroscientist 19:43–61.
  63. Seghier ML, Fagan E, Price CJ (2010): Functional subdivisions in the left angular gyrus where the semantic system meets and diverges from the default network. J Neurosci 30:16809–16817.
  64. Sestieri C, Corbetta M, Romani GL, Shulman GL (2011): Episodic memory retrieval, parietal cortex, and the default mode network: Functional and topographic analyses. J Neurosci 31:4407–4420.
  65. Slotnick SD, Moo LR, Segal JB, Hart J Jr (2003): Distinct prefrontal cortex activity associated with item memory and source memory for visual shapes. Brain Res 17:75–82.
  66. Sohoglu E, Peelle JE, Carlyon RP, Davis MH (2012): Predictive top‐down integration of prior knowledge during speech perception. J Neurosci 32:8443–8453.
  67. Tulving E, Kapur S, Markowitsch HJ, Craik FI, Habib R, Houle S (1994): Neuroanatomical correlates of retrieval in episodic memory: Auditory sentence recognition. Proc Natl Acad Sci USA 91:2012–2015.
  68. Turken U, Dronkers NF (2011): The neural architecture of the language comprehension network: Converging evidence from lesion and connectivity analyses. Front Syst Neurosci 5:1. doi:10.3389/fnsys.2011.00001.
  69. Tzourio‐Mazoyer N, Landeau B, Papathanassiou D, Crivello F, Etard O, Delcroix N, Mazoyer B, Joliot M (2002): Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single‐subject brain. Neuroimage 15:273–289.
  70. Westbury CF, Zatorre RJ, Evans AC (1999): Quantifying variability in the planum temporale: A probability map. Cereb Cortex 9:392–405.
  71. Wible CG, Han SD, Spencer MH, Kubicki M, Niznikiewicz MH, Jolesz FA, McCarley RW, Nestor P (2006): Connectivity among semantic associates: An fMRI study of semantic priming. Brain Lang 97:294–305.
  72. Yarkoni T, Barch DM, Gray JR, Conturo TE, Braver TS (2009): BOLD correlates of trial‐by‐trial reaction time variability in gray and white matter: A multi‐study fMRI analysis. PLoS One 4:e4257. doi:10.1371/journal.pone.0004257.
  73. Zimmerer F, Scharinger M, Reetz H (2011): When BEAT becomes HOUSE: Factors of word‐final /t/‐deletion in German. Speech Commun 53:941–954.

Articles from Human Brain Mapping are provided here courtesy of Wiley
