Published in final edited form as: Brain Lang. 2012 Apr 10;121(3):273–288. doi: 10.1016/j.bandl.2012.03.005

The cortical organization of lexical knowledge: A dual lexicon model of spoken language processing

David W. Gow, Jr.

Abstract

Current accounts of spoken language assume the existence of a lexicon where wordforms are stored and interact during spoken language perception, understanding and production. Despite the theoretical importance of the wordform lexicon, the exact localization and function of the lexicon in the broader context of language use are not well understood. This review draws on evidence from aphasia, functional imaging, neuroanatomy, laboratory phonology and behavioral results to argue for the existence of parallel lexica that facilitate different processes in the dorsal and ventral speech pathways. The dorsal lexicon, localized in the inferior parietal region including the supramarginal gyrus, serves as an interface between phonetic and articulatory representations. The ventral lexicon, localized in the posterior superior temporal sulcus and middle temporal gyrus, serves as an interface between phonetic and semantic representations. In addition to their interface roles, the two lexica contribute to the robustness of speech processing.

Keywords: lexicon, language, spoken word recognition, lexical access, speech perception, speech production, neuroimaging, aphasia, dual stream model, localization

1. Introduction

This paper presents a new model of how lexical knowledge is represented and utilized and where it is stored in the human brain. Building on the dual pathway model of speech processing proposed by Hickok and Poeppel (2000; 2004; 2007), its central claim is that representations of the forms of spoken words are stored in two parallel lexica. One lexicon, localized in the posterior temporal lobe and forming part of the ventral speech stream, mediates the mapping from sound to meaning. A second lexicon, localized in the inferior parietal lobe and forming part of the dorsal speech stream, mediates the mapping between sound and articulation.

Lexical knowledge is an essential component of virtually every aspect of language processing. Language learners leverage the words they know to infer the meanings of new words based on the assumption of mutual exclusivity (Merriman & Bowman, 1989). Listeners use stored lexical knowledge to inform phonetic categorization (Ganong, 1980) and to guide processes including lexical segmentation (Gow & Gordon, 1995), perceptual learning (Norris et al., 2003) and the acquisition of novel wordforms (Gaskell & Dumay, 2003). Lexically indexed syntactic information also guides the assembly and parsing of syntactic structures (Bresnan, 2001; Lewis et al., 2006). By some estimates, a typical literate adult English speaker may command a vocabulary of 50,000 to 100,000 words (Miller, 1991) in order to achieve these goals. Given this background, it is important to understand where and how words are represented in the brain.

Studies of this question date to the first scientific papers on the neural basis of language. In 1874 Carl Wernicke described a link between damage to the left posterior superior temporal gyrus (pSTG) and impaired auditory speech comprehension. He hypothesized that the root of the impairment was damage to a putative permanent store of word knowledge that he termed the Wortschatz or “treasury of words”. In his model, this treasury consisted of sensory representations of words that interfaced with both a frontal articulatory center and a widely distributed set of conceptual representations in motor, association and sensory cortices. Wernicke was careful to distinguish between permanent “memory images” of the sounds of words and the effects of “sensory stimulation”, a notion akin to activation associated with sensory processing or short-term buffers (Wernicke, 1874/1969). The broad dual pathway organization of Wernicke’s model has been supported by modern research (Hickok & Poeppel, 2000; 2004; 2007; Scott & Wise, 2004; Scott, 2005), but his interpretation of the left STG as the location of a permanent store of auditory representations of words is open to debate.

The strongest support for the classical interpretation of the pSTG as a permanent store of lexical representations comes from BOLD imaging studies that show that activation of the left pSTG and adjacent superior temporal sulcus (STS) is sensitive to lexical properties including word frequency and neighborhood size (Okada & Hickok, 2006; Graves et al., 2007). Neighborhood size is a measure of the number of words that closely resemble the phonological form of a given word. This result is balanced in part by evidence that a number of regions outside of the pSTG/STS are also sensitive to these factors (c.f. Prabhakaran et al., 2006; Goldrick & Rapp, 2006; Graves et al., 2007) and directly modulate pSTG/STS activation during speech perception (Gow et al., 2008; Gow & Segawa, 2009). This raises the possibility that sensitivity to lexical properties is referred from other areas, and that the STG/STS acts as a sensory buffer where multiple information types converge to refine and perhaps normalize transient representations of wordform.

This view of the STG/STS is consistent with both neuropsychological and neuroimaging evidence. In the 1970s and 1980s aphasiologists noted that damage to the left STG does not lead to impaired word comprehension (Basso et al., 1977; Blumstein et al., 1977a; 1977b; Miceli et al., 1980; Damasio & Damasio, 1980). A review of BOLD imaging studies by Hickok and Poeppel (2007) showed consistent bilateral activity in the mostly posterior STG in speech versus resting-state contrasts, and in the adjacent STS when participants listened to speech as compared to tones or less speech-like complex auditory stimuli. They interpreted this pattern as evidence that the bilateral superior temporal cortex is involved in high-level spectrotemporal auditory analyses, including the acoustic-phonetic processing of speech. This spectrotemporal analysis could in turn be informed by top-down influences on the STG from permanent wordform representations stored in other parts of the brain, producing evolving transient representations of phonological form that are consistent with higher level linguistic constraints and representations. This hypothesis is discussed in section 7.

At the same time that aphasiologists and neurolinguists were recharacterizing the function of the STG, psycholinguists were developing a more nuanced understanding of lexical processing. A distinction emerged between spoken word recognition, the mapping of sound onto stored phonological representations of words, and lexical access, the activation of representations of word meaning and syntactic properties. This distinction was reinforced by studies of patients who showed a double dissociation between the ability to recognize words and the ability to understand them. Some patients had preserved lexical decision but impaired word comprehension (Franklin et al., 1994; 1996; Hall & Riddoch, 1997), while others showed relatively preserved word comprehension with deficient lexical decision or phonological processing (Blumstein et al., 1977b; Caplan & Utman, 1994). At a higher level, some patients showed more circumscribed deficits in word comprehension coupled with specific deficits in the naming of items in certain categories including colors and body parts (Damasio, McKee & Damasio, 1979; Dennis, 1976). This fractionation of lexical knowledge was accompanied by a widening list of brain structures associated with lexical processing. Disturbances in various aspects of spoken word recognition, comprehension and production were associated with damage to regions in the temporal, parietal and frontal lobes (c.f. Damasio & Damasio, 1980; Gainotti et al., 1986; Coltheart, 2004; Patterson et al., 2007).

The advent of functional neuroimaging techniques introduced invaluable new data that underscore the conceptual challenges of localizing wordform representations. Three types of studies have dominated this work: (1) word-pseudoword contrasts, (2) repetition suppression/enhancement designs, and (3) designs employing parametric manipulation of lexical properties. Many studies have contrasted activation associated with listening to words versus pseudowords (Binder et al., 2000; Newman & Twieg, 2001; Kotz et al., 2002; Majerus et al., 2002; 2005; Bellgowan et al., 2003; Rissman et al., 2003; Vigneau et al., 2005; Xiao et al., 2005; Prabhakaran et al., 2006; Orfanidou et al., 2006; Valdois et al., 2006; Raettig & Kotz, 2008; Sabri et al., 2008; Gagnepain et al., 2008; Davis et al., 2009). These studies differ by task and in the specific wordform properties of their word and pseudoword stimuli. Nevertheless, reviews and meta-analyses have identified several systematic trends in these data (Raettig & Kotz, 2008; Davis & Gaskell, 2009). A meta-analysis of 11 studies by Davis and Gaskell (2009) found 68 peak voxels that showed more activation for words than pseudowords at a corrected level of significance. These included left hemisphere voxels in the anterior and posterior middle and superior temporal gyri, the inferior temporal and fusiform gyri, the inferior and superior parietal lobules, the supramarginal gyrus, and the inferior and middle frontal gyri. In the right hemisphere, words produced more activation than nonwords in the middle and superior temporal gyri, supramarginal gyrus, and precentral gyrus. The same study also showed significantly more activation for pseudowords than words in 29 regions, including voxels in the left mid-posterior and mid-anterior superior temporal gyrus, the left posterior middle temporal gyrus, portions of the left inferior frontal gyrus, and the right superior and middle temporal gyri.

While these studies would appear to bear on the localization of the lexicon, it is important to note that the lexicon is rarely invoked in this work. This subtraction is generally associated with the broader identification of brain regions supporting “lexico-semantic processing” (c.f. Raettig & Kotz, 2008) or “word recognition” (c.f. Davis & Gaskell, 2009). There are several reasons to suspect that a narrower reading of these subtractions that directly and uniquely ties them to wordform localization is unviable. Recognizable words trigger a cascade of representations and processes related to their semantic and syntactic properties that pseudowords either do not trigger, or trigger to a different extent1. As a result, many of the regions that are activated in word-pseudoword subtractions may be associated with the representation of information that is associated with wordforms, and not just wordforms themselves.

Behavioral and neuroimaging results provide converging evidence that suggests another limitation of the word-pseudoword subtraction as a tool for localizing wordform representations. One can imagine a system in which words activate stored representations of form but nonwords do not. Given such a system, a word-pseudoword subtraction could be used to localize the lexicon. However, evidence from behavioral and neuroimaging studies suggests that pseudowords are represented using the same resources that are used to represent words. A number of behavioral results in tasks including lexical decision, naming, and repetition show that the processing of nonwords is influenced by the degree to which they resemble real words (c.f. Gathercole et al., 1991; Gathercole & Martin, 1996; Vitevitch & Luce, 1998; 1999; Frisch et al., 2000; Luce & Large, 2001; Saito et al., 2003). The overlap in operations is masked by word-pseudoword subtractions, but is apparent in BOLD results that employ resting state subtractions. Binder et al. (2000) and Xiao et al. (2005) showed almost identical patterns of activation in word-resting state and pseudoword-resting state subtractions. The only differences they reported were a tendency toward more bilateral activation for words in the ventral precentral sulcus and pars opercularis in the Binder et al. study and less activation in the parahippocampal region in the Xiao et al. study. Moreover, several studies have shown that pseudoword BOLD activation is influenced by the degree to which pseudowords resemble known words, with word-like pseudowords producing activation patterns that were more similar to those produced by familiar words than those produced by less-wordlike tokens (Majerus et al., 2005; Raettig & Kotz, 2008). Evidence for a shared neural substrate for the representation of words and pseudowords has implications for the nature of wordform representations (discussed in section 2). Moreover, it suggests that differential activation produced by listening to words and pseudowords relates to form properties of pseudowords that are not generally controlled for in this research.

Repetition suppression and enhancement designs offer a more targeted tool for localizing wordform representations. In word recognition tasks, repeated presentation of the same items leads to a reduction in response latency and an increase in accuracy. This type of repetition priming is mirrored at a physiological level by repetition suppression and enhancement, in which repetition of a stimulus leads to changes in localized BOLD responses (see review by Henson, 2003). Several studies using passive listening to meaningful words have demonstrated repetition suppression effects in the left mid-anterior STS (Cohen et al., 2004; Dehaene-Lambertz et al., 2006). This finding was replicated by Buchsbaum and D’Esposito (2009), who used an explicit “new/old” recognition judgment. They also found repetition enhancement, or reactivation, at the boundary of bilateral pSTG, anterior insula and inferior parietal cortex including the SMG.

The fact that words were used in these studies does not necessarily indicate that repetition effects reflect lexical activation. Activation changes could reflect representation or processing at any level (e.g. auditory, acoustic-phonetic, phonemic, lexical). In order to directly tie these effects to lexical representation it is necessary to control for the contribution of non-lexical repetition. Orfanidou et al. (2006) addressed this issue by using different speakers for first and second presentations of words to minimize the influence of auditory representation, and by contrasting repetition effects associated with phonotactically matched word and pseudoword stimuli to target specifically lexical properties. They found no evidence of interaction between lexicality and repetition in any voxel in whole brain comparisons. This result is again consistent with the notion that word and pseudoword representation share a common neural substrate. Analyses collapsing across lexicality showed significant repetition suppression in the supplementary motor area (SMA) and bilateral inferior frontal and posterior inferior temporal regions, as well as repetition enhancement in bilateral parietal, orbitofrontal and dorsal frontal regions, the right posterior inferior temporal gyrus, and a region including the right precuneus and adjacent parietal lobe. The lack of anterior STS suppression in these results may reflect the diminished role of auditory effects due to the speaker manipulation. However, the lack of orthogonal manipulation of phoneme, syllable or diphone repetition makes it unclear whether these effects are directly attributable to lexical representation.

The other primary BOLD imaging strategy for localizing lexical representation involves contrasts that rely on parametric manipulation of specifically lexical properties including word frequency, phonological neighborhood size and lexical competitor environment. This strategy (which is discussed again in section 3) is less widely used than word-pseudoword contrasts or repetition suppression/enhancement techniques, but has been explored by several groups. In an auditory lexical decision task, Prabhakaran et al. (2006) found differential activation based on word frequency in left pMTG extending into STG and left aMTG. In contrast, Graves et al. (2007) found frequency sensitivity in left hemisphere SMG, pSTG, and posterior occipitotemporal cortex and bilateral inferior frontal gyrus in a picture naming task. These results differ, but do show some overlapping STG activation and adjacent activations in the left posterior temporal lobe associated with word frequency. Differences in frequency sensitivity in the two studies in other areas may be related to differences in the task demands imposed by lexical decision versus overt naming.

Manipulations of neighborhood size have also produced different patterns of activation in different studies. Okada and Hickok (2006) found sensitivity to neighborhood size limited to bilateral pSTS in a passive listening task, while Prabhakaran et al. (2006) found neighborhood effects in the left SMG, caudate and parahippocampal region in their auditory lexical decision task. In this case, the differences may be related to the differing attentional demands of passive listening versus lexical decision. In a study employing a selective attention manipulation during bimodal language processing, Sabri et al. (2008) found that while superior temporal regions were activated in all speech conditions, differential activation associated with lexical manipulations (word-pseudoword subtraction) was only found when subjects attended to speech. This suggests that tasks such as passive listening that require only shallow processing may fail to produce robust activation outside of superior temporal cortex.

To summarize, the complex and often contradictory results seen in the BOLD imaging literature do not provide a simple resolution to the localization problem, but they do delineate a number of issues that any satisfying resolution must address. Claims about the localization of the lexicon must be framed in relation to a general understanding of the nature of lexical representation that specifically addresses the relationship between the representation of words, pseudowords and sublexical representations, and the causes of task effects.

Recent behavioral results and advances in the characterization of neural processing streams associated with spoken language processing suggest that some task effects may be attributable to a fundamental distinction between semantic and articulatory phonological processes. In one line of experimentation, researchers have found that listeners show different patterns of behavioral effects when presented with the same set of spoken word stimuli in similar tasks that tap phonological versus semantic aspects of word knowledge. Gaskell and Marslen-Wilson (2002) showed that gated primes (e.g. captain presented as /kæpt/ or /kæptɪ/) produce significant phonological priming for complete words (CAPTAIN), but no priming and no effect of degree of overlap for strong semantic associates (e.g. COMMANDER). Norris et al. (2006) found several similar differences between phonological and semantic cross-modal priming. They found both associative (date – TIME) and identity (date – DATE) priming when spoken primes were presented in isolation, but only identity priming when they were presented in sentences. In instances in which a short wordform is embedded in a longer wordform (e.g. date in sedate), no associative priming was found for embedded words (sedate – TIME), but negative form priming (sedate – DATE) was found in sentential contexts. Together, these results demonstrate the dissociability of semantic and phonological modes of lexical processing in the perception of spoken words.

Gaskell and Marslen-Wilson (1997) explored the idea that semantic and phonological aspects of spoken word processing may be independent of each other in their distributed cohort model. Unlike earlier models (c.f. McClelland & Elman, 1986) that assumed that lexical access is the result of an ordered mapping from acoustic-phonetic representation to phonological and then semantic representation, their model employed direct simultaneous parallel mapping processes between low-level sensory representations and distributed semantic and phonological representations.2 In their work, the decision to represent lexical semantics and phonology as separate outputs was motivated in part by computational considerations. Parallel architecture offers potentially faster access to semantic representations. This general organization also allows for the development of intermediate representations that are optimally suited for the mapping between a common input representation and different output representations.
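
The computational gist of this parallel architecture can be sketched in a few lines of code. The sketch below is purely illustrative rather than a reimplementation of the distributed cohort model: the layer sizes, random weights, and function names are arbitrary placeholders. It simply shows one acoustic-phonetic input vector being mapped simultaneously onto separate distributed phonological and semantic outputs, each through its own intermediate layer.

```python
import numpy as np

rng = np.random.default_rng(0)

N_INPUT, N_HIDDEN, N_PHON, N_SEM = 50, 30, 40, 60

# Separate weight matrices for the two routes: each mapping is free to
# develop intermediate features suited to its own output representation.
W_in_phon = rng.normal(size=(N_INPUT, N_HIDDEN))
W_phon = rng.normal(size=(N_HIDDEN, N_PHON))
W_in_sem = rng.normal(size=(N_INPUT, N_HIDDEN))
W_sem = rng.normal(size=(N_HIDDEN, N_SEM))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def process(acoustic_phonetic):
    """Map one acoustic-phonetic input onto both distributed outputs at once."""
    phonological = sigmoid(sigmoid(acoustic_phonetic @ W_in_phon) @ W_phon)
    semantic = sigmoid(sigmoid(acoustic_phonetic @ W_in_sem) @ W_sem)
    # Parallel routes: no ordered phonology-then-semantics step.
    return phonological, semantic

phon_out, sem_out = process(rng.normal(size=N_INPUT))
print(phon_out.shape, sem_out.shape)  # (40,) (60,)
```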

The parallel mapping between low-level phonetic representations of speech and semantic versus phonological representation proposed by Gaskell and Marslen-Wilson is similar to the form of modern dual-pathway models of spoken language processing that draw on the pathology, functional imaging and psychological literatures and postulate separate routes from auditory processing to semantics and speech production (Hickok & Poeppel, 2000; 2004; 2007; Wise, 2003; Scott & Wise, 2004; Scott, 2005; Warren et al., 2005; Rauschecker & Scott, 2009). In these models auditory input representations are initially processed in primary auditory cortex, with higher-level auditory and acoustic-phonetic processing taking place in adjacent superior temporal structures. As in Gaskell and Marslen-Wilson’s model, subsequent mappings are carried out in simultaneous parallel processing streams. In the neural models these include a dorsal pathway that provides a mapping between sound and articulation, and a ventral pathway that maps from sound to meaning.

In the model developed by Scott and colleagues (Scott & Wise, 2004; Scott, 2005; Rauschecker & Scott, 2009), the left ventral pathway links primary auditory cortex to the lateral STG and then the anterior STS (aSTS). No ventral lexicon is proposed in these models. In the Hickok and Poeppel model (2000; 2004; 2007), the mapping between sound and meaning is mediated by a lexical interface located in the posterior middle temporal gyrus (pMTG) and adjacent cortices. This interface is the most explicit description of a lexicon in any of the dual stream models.

Parallels between the distributed model’s phonological output and the articulatory dorsal processing stream in dual stream models are less clear. One critical question is whether articulatory and phonological representations are the same thing. While phonological representation is historically rooted in articulatory description (Chomsky & Halle, 1968), current theories of featural representation include both explicitly articulatory (c.f. Browman & Goldstein, 1992) and purely abstract systems (c.f. Hale & Reiss, 2008). The lexical representations used in Gaskell and Marslen-Wilson’s model do not make a clear commitment to articulatory or non-articulatory representation.

In summary, despite widespread evidence that words play a central role in language processing, over a century of research has produced no clear consensus on where or how words are represented in the brain. This may be attributed to a number of factors, including the methodological challenges inherent in discriminating between lexical activation, processes that follow on lexical activation, and the application of lexical processes to pseudoword stimuli. During the same period, evidence from dissociations in unimpaired and aphasic behavioral processing measures has pointed towards a potential dissociation between semantic and phonological or articulatory aspects of lexical processing that roughly parallels distinctions made in recent dual stream models of spoken language processing in the human brain. In the sections that follow I will develop a framework for understanding the organization and function of lexical representations and review evidence from a variety of disciplines that suggests the existence of parallel lexica in the ventral and dorsal language processing streams.

2. The Computational Significance of Words

The lexicon has been hard to localize in part because of a lack of agreement about its function. Researchers have adopted the term “lexicon” to describe the specific role that lexical knowledge plays in a variety of aspects of processing. As a result, the term has different meanings to different research communities. Syntacticians describe it as a store of grammatical knowledge (Bresnan, 2001; Jackendoff, 2002), morphologists see it as an interface between sound and meaning (Ullman et al., 2005), and computational linguists see it as a kind of database where representations of a string corresponding to a word are linked to a list of properties including the word’s meaning, spelling, pronunciation, and grammatical function (Pustejovsky, 1996). These approaches seem to be at odds with each other, but they share one essential property. All of them begin with the idea that the word is a kind of interface that links representations of word form or sound with other types of knowledge. This view is reflected in Hickok and Poeppel (2007), who refer to a “lexical interface” rather than using the term “lexicon”.

The notion of a lexical interface draws attention to several essential computational properties of the lexicon. The most important is that words are a means of accessing different types of knowledge, and should not be viewed as ends in themselves. We activate entries in the lexicon only as a way to access or process specific types of information that we may need to complete a given task: for example, parsing a sentence, assembling the motor commands to pronounce a word, or enlisting top-down information to interpret a perceptually ambiguous word.

To the extent that computational efficiency defines representation, the specific form of any interface representation should be constrained by the input-output mappings it mediates. In the model proposed here, lexical representations play a computational role similar to that of hidden nodes in a connectionist model. Hidden nodes are sensitive to the features of the input representation that are most relevant to its mapping onto the output representation. Computational efficiency may require different sets of features for different mappings. Consider the mappings involved in understanding versus recognizing and accurately pronouncing the word ran. The words ran and run both relate to the same general idea of moving quickly by using your legs, and so in the context of mapping onto meaning might be considered different variants of the same word. In contrast, the sound-to-articulation mapping that mediates the accurate pronunciation or recognition of run as opposed to ran needs to mark the distinction between the words. This distinction, which linguists term the lexeme/lemma distinction, suggests that different tasks may require different representations of wordforms.
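
The lexeme/lemma point can be made concrete with a toy data structure (the entries below are hypothetical and for illustration only): a mapping used to reach meaning can collapse ran and run onto a single lemma, while a mapping used for accurate recognition or pronunciation must keep the two wordforms distinct.

```python
# Semantic route: both inflected forms point to one lemma.
lemma_of = {
    "run": "RUN",
    "ran": "RUN",  # same core meaning, different tense
}

# Form route: each wordform keeps its own pronunciation; the vowel
# contrast that distinguishes run from ran must be preserved.
pronunciation_of = {
    "run": ["r", "ʌ", "n"],
    "ran": ["r", "æ", "n"],
}

assert lemma_of["run"] == lemma_of["ran"]                   # merged for meaning
assert pronunciation_of["run"] != pronunciation_of["ran"]   # distinct for form
```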

In addition to their role as interfaces between sound and higher-level representation, words also appear to play a role in refining lower level acoustic-phonetic representation and processing. Lexical influences on speech perception have been widely demonstrated in a variety of behavioral phenomena including the phoneme restoration effect (Warren, 1970), the Ganong effect (Ganong, 1980), and low-level phoneme context effects produced by “restored” phonemes (Elman & McClelland, 1988). An effective connectivity analysis by Gow et al. (2008) found that the Ganong effect is the result of a feedback dynamic between the pSTG and SMG in a phoneme categorization task. In a similar study, Gow and Segawa (2009) demonstrated that a lexical bias in the way that listeners compensate for lawful phonological variation is mediated by feedback between the pMTG and pSTG in a phrase-picture matching task. These processing dynamics address the inherent variability of the speech signal by reconciling acoustic-phonetic input representations with abstract canonical representations of word form. In addition to playing a role in speech perception, this dynamic may also play a role in offline perceptual learning, where lexical constraints have been shown to influence adaptation to anomalous pronunciations of speech sounds (Norris et al., 2003).

In summary, lexical representations serve two primary functions. The first is as an interface between low-level representations of sound and higher-level representations of different aspects of linguistic or world knowledge. The second is as a mechanism for normalizing acoustically variable input representations to resolve phonetic ambiguity in both online and offline processing.

3. Distributed Versus Local Representation

This section will examine the question of how lexical representation might be instantiated and identified in behavioral or neural data. In many models of lexical access words are assumed to have local representation (c.f. Morton 1969; McClelland & Elman, 1986; Marslen-Wilson, 1987; Norris, 1994) in which each word is represented by a single discrete node or entry. This type of representation is transparently and unequivocally lexical. In contrast, many connectionist models of spoken and visual word recognition (c.f. Gaskell & Marslen-Wilson, 1997; Seidenberg & McClelland, 1989; Plaut et al., 1996) rely on distributed representations in which a single word may be represented by a pattern of activation over many nodes. Both computational and biological evidence supports the general claim that distributed representation is a fundamental property of cortical processing for all but the most primitive perceptual or cognitive categories (Hinton et al., 1986; Plaut & McClelland, 2010). In their connectionist model of reading, Seidenberg and McClelland (1989) explicitly argued that their distributed representations of words were not lexical in that representational units did not map directly onto words, and the representations that they did use could be used to represent non-words. Distributed representation attributes wordlikeness effects in nonword processing (c.f. Gathercole et al., 1991; Gathercole & Martin, 1996; Vitevitch & Luce, 1998; 1999; Frisch et al., 2000; Luce & Large, 2001; Saito et al., 2003) to overlapping activation dynamics that follow on the partial activation of distributed lexical representations that resemble nonword probes. Coltheart (2004) responded to these claims by arguing that without lexical representation there is no basis for the ability to perform lexical decisions. One might argue that people perform lexical decisions by determining whether or not a word or nonword representation maps onto a semantic representation. However, Coltheart provided evidence that some patients (though not all, see Patterson et al., 2007) show the preserved ability to perform lexical decisions despite significant deficits in semantic knowledge.

At first pass, the crux of this debate appears to be that distributed models such as Seidenberg and McClelland’s provide a clear account of nonword processing effects but a less clear account of our ability to recognize words, while localist models such as that proposed by Coltheart (2004) have the opposite problem: they explain lexical decision well, but offer no clear explanation of wordlikeness effects. Physiological plausibility and computational efficiency considerations favor distributed lexical representation, but both positions may be otherwise salvageable. Localist representation may account for nonword wordlikeness effects if it relies on continuous rather than all-or-nothing activation. Thus, in a continuous activation model such as TRACE (McClelland & Elman, 1986), a nonword input such as /blæg/ may produce partial activation of lexical nodes representing similar words (e.g. black, bag, blog, plaid), which will compete for activation and influence the timecourse of processing. At the same time, distributed representations can be described as being lexical if the activation of the representation corresponding to an entire word has properties above and beyond those of its constituents. In connectionist models that learn, such as Gaskell and Marslen-Wilson’s (1997) distributed model of spoken language processing, lexical properties follow from the target output representations used in training. Distributed representations of words have specifically lexical properties because they are trained as words. Plaut et al. (1996) provide a formal analysis of how properties of the global mapping between input and output representations over training produce sensitivity to global properties of these representations, including word frequency.
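
A minimal sketch of this continuous-activation idea, assuming a toy lexicon and a crude positional overlap measure (this is not an implementation of TRACE's interactive dynamics), shows how a nonword probe can partially activate several lexical nodes at once:

```python
# Toy lexicon of phoneme strings; entries are illustrative placeholders.
LEXICON = {
    "black": "blæk",
    "bag":   "bæg",
    "flag":  "flæg",
    "blank": "blæŋk",
}

def activation(probe, word):
    """Graded rather than all-or-nothing match: proportion of phonemes
    shared position-by-position (a crude stand-in for richer dynamics)."""
    matches = sum(a == b for a, b in zip(probe, word))
    return matches / max(len(probe), len(word))

probe = "blæg"  # a nonword input
for word, phones in sorted(LEXICON.items(),
                           key=lambda kv: -activation(probe, kv[1])):
    print(f"{word:6s} activation = {activation(probe, phones):.2f}")
# No node reaches full activation, but partially activated neighbors
# compete and shape the timecourse of processing.
```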

The idea that distributed representations of words have properties that are not reducible to the properties of patterns of sublexical representation is important because it makes it possible to distinguish between lexically mediated mappings and mappings based on segmental or syllabic representation (c.f. Hickok & Poeppel, 2004; 2007; Hickok et al., 2011). In the remainder of this section I will discuss several properties of words that make it possible to distinguish between lexical and sublexical representation based on behavioral and neural data, and then present evidence that purely segmental or syllabic representations cannot account for the human ability to recognize or produce spoken words.

For a representation to be meaningfully lexical, it must have properties that are not completely reducible or attributable to simple patterns of sublexical representation. Several properties satisfy this requirement. One is word frequency. Higher frequency words show a number of processing advantages over their lower frequency counterparts across experimental tasks. Frequent words produce faster lexical decision, shorter naming latencies, earlier N400 responses, and more accurate recognition under adverse listening conditions (c.f. Schilling et al., 1998; Pisoni, 1996; Rugg, 1990). In contrast, words that are composed of high frequency phoneme sequences show processing disadvantages on several of the same tasks. In a series of studies, Vitevitch and Luce (1998; 1999; 2005) controlled for word frequency and found that words composed of high phonotactic probability segmental sequences produce longer naming latencies and poorer lexical discrimination than words composed of lower frequency combinations. This suggests that lexical frequency effects cannot be attributed to sublexical factors.

Another uniquely lexical property is phonological neighborhood density. A word’s phonological neighborhood consists of the set of words that can be formed by changing a single phoneme. For example, cad is a neighbor of cat. This measure is lexical in that it is defined in reference to an entire wordform and not simply to its parts. Neighborhood size is correlated with phonotactic frequency (a segmental measure), but word lists can be constructed in which neighborhood density and phonotactic frequency vary orthogonally. Luce and Large (2001) took advantage of this dissociation to show independent effects of phonotactic frequency and neighborhood density in a speeded same-different judgment task.
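
Both measures are straightforward to compute. The sketch below uses a toy lexicon of phoneme strings (real studies use large pronunciation dictionaries and frequency-weighted counts): the neighborhood function is defined over whole wordforms, while the biphone measure is defined over sublexical sequences, which is why stimulus lists can vary the two orthogonally.

```python
LEXICON = ["kæt", "kæd", "bæt", "mæt", "kʌt", "kæp", "ræn"]

def neighbors(word, lexicon):
    """Phonological neighborhood: words formed by changing a single phoneme.
    Defined over entire wordforms, hence a lexical-level measure."""
    return [other for other in lexicon
            if other != word and len(other) == len(word)
            and sum(a != b for a, b in zip(word, other)) == 1]

def biphone_probability(word, lexicon):
    """Crude phonotactic probability: mean relative frequency of the word's
    adjacent phoneme pairs across the lexicon (a sublexical measure)."""
    all_biphones = [w[i:i + 2] for w in lexicon for i in range(len(w) - 1)]
    pairs = [word[i:i + 2] for i in range(len(word) - 1)]
    return sum(all_biphones.count(p) for p in pairs) / (len(pairs) * len(all_biphones))

print(neighbors("kæt", LEXICON))                      # ['kæd', 'bæt', 'mæt', 'kʌt', 'kæp']
print(round(biphone_probability("kæt", LEXICON), 3))  # 0.214
```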

The claim that lexical representation is distributed but discriminable from segmental representation may seem counterintuitive, but the two properties are in fact compatible. Segmental representation is widely assumed in models of spoken word recognition despite a large experimental literature that has produced decidedly mixed evidence in favor of the primacy of segmental or syllabic representation. In their review of this literature, Goldinger and Azuma (2003) note that factors including stimulus properties, task demands and even social influences may bias listeners towards attending to different units of representation. Obligatory prelexical segmental representation may be generally assumed, but it is not well supported by the experimental literature.

Several independent lines of evidence suggest that sublexical units such as the segment or syllable are more appropriately viewed as post-lexical units that are only identified by non-obligatory segmentation of words that have been recognized as wholes. Human listeners can comprehend meaningful “compressed” speech at rates of over 400 words per minute (Foulke & Sticht, 1969). This corresponds to an average rate of about 30 msec per phoneme (at 400 words per minute each word occupies roughly 150 msec, or about 30 msec per phoneme for a five-phoneme word). However, when listeners are asked to identify the order of phonemes in recycling sequences consisting of either four vowels or four CV syllables, they are unable to report the order of tokens when phonemes are less than 100 msec in duration (Cullinan et al., 1977). When vowel duration is between 30 and 100 msec in such sequences, listeners show the ability to discriminate between pairs, but not the ability to identify the order of segments (Warren et al., 1990). Warren (1992) notes that if words were perceived as sequences of segments, the ability to identify segmental order would be a prerequisite for word recognition. He interprets the gulf between the temporal limits of word recognition and vowel or syllable recognition as evidence for holistic representation of wordforms and argues that the identity of phonemes is inferred after word recognition rather than being directly perceived. This hypothesis is strengthened by evidence that illiterate subjects and monolingual speakers of languages such as Chinese that employ non-alphabetic orthographies are unable to perform word games that require either adding or deleting individual segments when repeating spoken words (Morais et al., 1979; Read et al., 1986). It is also consistent with Foss and Swinney’s (1973) finding that monitoring latencies for two-syllable words are shorter than those found when subjects monitor for initial segments in the same stimulus materials. A review of functional imaging results in phonological processing studies by Burton et al. (2000) supports this claim. They found that tasks that require explicit segmentation produce increased activation in the LIFG that is not found in similar tasks that do not require segmentation.

The holistic versus segmental representation question can also be viewed through the lens of dependencies between non-adjacent units in speech production or perception. If representation were at an entirely segmental or syllabic level, one would not expect to find long-range dependencies. However, production studies (see review by Grosvald, 2010) have found evidence of anticipatory coarticulation between segments within a single word separated by as many as six segments. Perception studies similarly show that listeners are sensitive to this coarticulation, and may use it to facilitate the perception of subsequent speech sounds (Martin & Bunnell, 1982; Beddor et al., 2002). Similar long-range dependencies can be found in phonological constraints on word formation. For example, in vowel harmony, the features of one vowel influence the features of a nonadjacent vowel within the same word (see review by Clements, 1980). Similarly, in some languages constraints on the patterning of moraic tone contours are defined across syllables within a word (c.f. Cheng & Kisseberth, 1979). These phenomena both support the hypothesis that the word is a meaningful phonological unit, and point to the inadequacy of linear, purely segmental or syllabic phonological representation. As a result, phonologists have developed several classes of phonological theory, including tiered phonology and autosegmental theory, that allow for nonlinear mappings between phonological features and higher level phonological units within the word (c.f. Clements & Keyser, 1981; Goldsmith, 1990). These theories provide potential support for the claim that phonological representation may address segmental or syllabic properties while representing words at a holistic, distinctly lexical level of representation. Segmental or syllabic information may (or may not) be represented by some aspects of a distributed lexical representation, but representations used to access semantic knowledge or mediate the relationship between articulation and sound clearly must encode global wordform properties to account for word frequency, neighborhood density, and coarticulatory effects.

4. Overview of the Dual Lexicon Model

The dual lexicon model works within the broader context of dual pathway models of spoken language processing. The anatomical organization of the left hemisphere components of this bilateral model is shown in Figure 1. In the ventral pathway, a lexicon located in pMTG and adjacent pITS mediates the mapping between words and meaning. This area is not a store of semantic knowledge, but instead houses morphologically organized representations of word forms. These representations link the acoustic-phonetic representations localized in bilateral pSTG to representations of semantic content in a broad, bilaterally distributed network and to syntactic processes in a similarly broad and distributed bilateral network (see contrasting reviews by Caplan, 2007; Grodzinsky & Friederici, 2006; Hickok & Rogalsky, in press). The ventral lexicon therefore plays a role in the semantic interpretation of spoken words, sentence processing, and the production of spoken words to communicate meaning (as in a picture naming task). This characterization of the ventral lexicon is consistent with claims made about the role of pITS and pMTG in current dual pathway models of spoken language processing (Hickok & Poeppel, 2000; 2004; 2007).

Figure 1. Dorsal and ventral lexicons shown within the context of left hemisphere components of a dual stream processing model.


Dorsal stream components are shown in red, and ventral components are shown in blue in this two lexicon model. The bilateral posterior superior temporal gyrus (pSTG), shown in green, is the primary site of acoustic-phonetic analyses of unmodified natural speech. More anterior portions of the STG may be associated with mnestic grouping processes. The dorsal lexicon, found in the supramarginal gyrus and parietal operculum (yellow), mediates the mapping between acoustic-phonetic structure and the left dominant articulatory network including premotor cortex, posterior IFG and anterior insula described by Hickok and Poeppel (2000; 2004; 2007), with connectivity supplied by divisions II and III of the superior longitudinal fasciculus (SLF II and SLF III). The angular gyrus is hypothesized to play a role in the identification of sublexical units. The ventral lexicon (yellow), localized in the posterior middle temporal gyrus (pMTG) and adjacent tissue, mediates the mapping between acoustic-phonetic representations in pSTG and conceptual representations associated with a semantic hub in the temporal pole that integrates aspects of semantic representation associated with a widely distributed conceptual network. The pMTG and temporal poles are connected by the inferior longitudinal fasciculus (ILF). Direct connections between pMTG and the IFG are supplied by the extreme capsule (EmC).

A parallel lexicon exists in the dorsal speech pathway, localized to the left SMG (the inferior portion of Brodmann’s area 40, delineated by the intraparietal sulcus, the primary intermediate sulcus of Jensen, the postcentral sulcus, and the sylvian fissure) and adjacent parietal operculum. Consistent with the dorsal pathway’s role in the mapping between sound and articulation, the dorsal lexicon houses articulatorily organized word form representations. Where previous accounts imply that dorsal processing operates at a sublexical level, such as the phoneme or syllable, I will argue that the SMG representation is explicitly lexical.

Representations in the dorsal and ventral pathways play a number of complementary roles in processing. Both facilitate acoustic-phonetic conversion in auditory speech perception by providing task-specific top-down influences on pSTG activation (Gow et al., 2008; Gow & Segawa, 2009). These influences depend on the task that the listener is performing, but they also converge, allowing both semantic and phonological context to affect perception and thus contributing both to the robustness of spoken language interpretation following brain insult and to listeners’ ability to interpret ambiguous or degraded speech sounds.

5. The Ventral Lexicon

Hickok and Poeppel (2004; 2007) identify a region comprising pMTG and adjacent pITS that projects directly to a widely distributed semantic network and acts as a lexical interface between sound and meaning in the ventral pathway. This is clearly a lexicon within the current framework. In contrast, Scott and Wise’s dual stream model (2004) focuses on prelexical processes and does not identify a comparable structure. In their model, the “what” pathway links low level auditory processing in bilateral primary auditory cortex to higher level auditory processing in lateral anterior and posterior bilateral STG, and then to the anterior STS, which is sensitive to speech intelligibility in the left hemisphere and to dynamic pitch variation in the right. They suggest that this collective stream plays a role in the sound to meaning mapping, but stop short of describing any component as playing the role of a lexicon.

Converging evidence from a number of lines of research suggests that the pMTG and adjacent tissue in the pSTS function as a lexicon in the ventral speech stream. Damage to this region is associated with transcortical sensory aphasia (TSA), a condition marked by semantic paraphasias (e.g. calling a table a chair) and impaired auditory word comprehension coupled with preserved spoken word repetition and fluent speech (Wernicke, 1874; Goldstein, 1948; Coslett et al., 1987) (see Table 1 for an overview of neuropsychological syndromes discussed in this review). Boatman et al. (2000) demonstrated, using direct cortical stimulation by electrode pairs along the middle to posterior left MTG, that these symptoms could be transiently induced and narrowly localized across subjects who otherwise showed no symptoms of TSA. These deficits do not appear to be caused by the breakdown of perceptual processing. Damage to the pMTG is not associated with impaired acoustic-phonetic discrimination or identification. This observation is supported by work showing that cortical stimulation of this region does not influence performance on these tasks (Boatman et al., 2000), and by clinical findings demonstrating that impairment in these tasks is reliably associated with damage to the left frontal and inferior parietal lobe (Blumstein et al., 1977a; Miceli et al., 1980; Caplan et al., 1995).

Table 1.

A summary of language pathologies discussed in this paper.

| | Semantic Dementia | Transcortical Sensory Aphasia | Reproduction Conduction Aphasia | Repetition Conduction Aphasia |
| --- | --- | --- | --- | --- |
| Primary lesion localization | temporal pole | posterior MTG | left SMG and parietal operculum | superior temporal |
| Defining symptom | loss of semantic knowledge | semantic paraphasia (TABLE → chair) | phonological paraphasia (PEN → pan) | impaired word recall and recognition |
| Auditory word comprehension | impaired | impaired | preserved | preserved |
| Nonverbal comprehension | impaired | preserved | preserved | preserved |
| Speech production | preserved | preserved | impaired | preserved |
| Phoneme discrimination | preserved | preserved | impaired | preserved |
| Spoken word repetition | preserved | preserved | impaired | impaired |
| Hypothesized affected function | conceptual hub | ventral lexicon | dorsal lexicon | phonological short-term store |
| Alternate term | primary progressive aphasia | word meaning deafness | word form deafness | associative aphasia |

The existence of semantic paraphasias in TSA implicates a breakdown in lexico-semantic processing. This breakdown might be interpreted as either a loss of conceptual knowledge, or a loss of lexical mechanisms for accessing such knowledge. The difference between these two potential deficits is illuminated by the contrast between transcortical sensory aphasia and semantic dementia (SD). Semantic dementia is a neurodegenerative condition characterized by the bilateral degeneration of the anterior temporal lobes and the loss of both verbal and nonverbal semantic knowledge with preserved episodic knowledge (Warrington, 1975; Snowdon et al., 1989; Mummery et al., 2000). Several dissociations suggest that the primary deficit is related to word retrieval in TSA, and to semantic representation in SD. Jefferies and Lambon Ralph (2006) found that SD patients showed better word-picture verification performance for more general terms (e.g. “animal”) than for more specific ones (e.g. “Labrador”). TSA patients showed no such semantic specificity effect in tests using the same materials. The same study found that the two patient groups made different types of errors in picture naming. SD patients were more likely to apply a superordinate label to a picture (e.g. a kangaroo named as “animal”), while TSA patients were more likely to make an associative error (a squirrel named as “nuts”). The use of more general terms or broader categories is consistent with access to a reduced semantic representation. In contrast, associative naming errors and semantic paraphasias are consistent with preserved semantic representation, but a deficit in the mapping between words and meaning. This distinction is further supported by evidence that patients with TSA benefit significantly from first-phoneme cueing that supports word retrieval in picture naming tasks, while SD patients show relatively little benefit from cueing (Graham et al., 1995; Patterson et al., 2004; Jefferies & Lambon Ralph, 2006).

This dissociation between anterior temporal and pMTG function suggests that the mapping between words and meaning occurs in two distinct steps. In their review of SD and related BOLD imaging work, Patterson et al. (2007) argue that the anterior temporal lobe acts as a “semantic hub” linking nodes in a widely distributed neural network that represents disparate aspects of semantic knowledge related to sensorimotor or perceptual representation. Many of the features that define the meaning of a word are assumed to be sensory (e.g. the color or taste of an apple) or motoric (e.g. motoric representations of the acts of picking, cutting or eating an apple). Evidence from pathology and neuroimaging suggests that these features have distributed representations that map onto the regions of the brain that are involved in the processing of these sensory features or motor acts (see review by Martin, 2007). Patterson and colleagues base their semantic hub argument on the observation that, despite the distributed nature of this network, focal damage to the anterior temporal lobe such as that seen in SD leads to a loss of conceptual knowledge, and on BOLD studies showing that the same region is activated in tasks including category judgment (c.f. Bright et al., 2004; Rogers et al., 2006). Dissociations between semantic knowledge and the lexically-mediated mapping between sound and meaning support a two-step process in which lexical representations in the pMTG mediate the mapping between acoustic-phonetic representations in the pSTG and amodal conceptual centers in the anterior temporal lobe that project to a distributed network of localized modality-specific semantic features. This organization appears to be supported by a network of white matter connections, including connectivity between the pSTG and pMTG provided by local connections and possibly the posterior segment of the arcuate fasciculus (Catani et al., 2005), and between the left pMTG and the anterior temporal pole provided by the inferior longitudinal fasciculus or ILF (Mandonnet et al., 2007).

Evidence from BOLD imaging extends the case for MTG’s role in lexical processing by demonstrating that the region is sensitive to stimulus properties that correlate uniquely with a lexical level of representation. In auditory lexical decision tasks, the MTG is one of several areas in which BOLD activation is modulated by lexicality (Prabhakaran et al., 2006). High frequency words produce stronger BOLD activation than low frequency words in both anterior and posterior portions of the left MTG (Prabhakaran et al., 2006). These results support the notion that the MTG plays a role in lexical representation.

The more specific claim that the MTG is involved in semantic aspects of lexical processing is supported by a series of results showing differential BOLD activation of the MTG in semantic priming and semantic interference paradigms, in which the semantic relationships between serially presented items influence lexical decision or naming (Rissman, Eliassen & Blumstein, 2003; Sass, et al., 2009; de Zubicaray et al., 2001). In general, targets following semantically related primes produce less MTG activation than unrelated targets do in tasks that emphasize automatic processing. This is often attributed to the fact that related words activate overlapping sets of semantic features that in turn prime semantically related lexical representations.

5.1 The Ventral Lexicon’s Role in Unification

Language processing draws on a large, distributed perisylvian network. To fully understand the function of the ventral lexicon it is important to consider how it interacts with other parts of the network during processing. The description offered so far primarily captures the bidirectional mapping between acoustic phonetic representations localized in bilateral pSTG, lemma representations in the posterior temporal lobe, and representations of semantic features that appear to be localized over a very broadly distributed network (Damasio & Damasio, 1994), with an important amodal convergence of information in anterior inferior temporal lobe (Tyler et al., 2004; Visser et al., 2009). This description captures the essential network that is involved in accessing the meaning of individual words heard without context. However, words are primarily experienced in sentences or discourses in which their semantic and syntactic significance is understood in reference to their context. Thus the word pitcher is interpreted differently in the sentences The pitcher sized up the batter and The pitcher is full of ice water, and the word run has different meanings and grammatical functions in the sentences Her first runs were fast and She runs before work.

Recent neuroimaging work suggests that unification, the process of relating individual words to a broader syntactic and semantic context, is achieved by a network that includes the left posterior middle temporal gyrus, the inferior frontal gyri, and subcortical and medial structures including the striatum and anterior cingulate. These structures show differential activation for single-word versus meaningful multi-word stimuli (Snijders et al., 2009), and demonstrate a strong pattern of functional interaction in psychophysiological interaction analyses of BOLD imaging data (Snijders et al., 2010). Within the framework of this network, the posterior middle temporal lobe participates in unification in three identifiable ways.

The first is in its role in the morphological system. Morphemes, which include words, word stems and affixes, are phonologically defined units of meaning. For example, smiled consists of two morphemes: the stem smile and the inflectional suffix –ed, which indicates the past tense and in this case makes it clear that smile is being used as a verb rather than a noun. Morphology is a context sensitive process. For example, in English, verb inflections mark number, so one would say he runs, but they run. An inflection, like tense, may be marked through a regular process (the addition of the suffix –ed in smiled), or through an irregular process (alternate forms such as went for go, or ran for run).
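
The regular/irregular contrast can be summarized in a toy procedure in the spirit of the dual-mechanism accounts discussed below (e.g. Pinker, 1999); this is a descriptive sketch, not a claim about neural implementation: irregular past-tense forms are retrieved as stored wordforms, while regular forms are composed by rule.

```python
# Illustrative irregular table; a realistic system would store many more forms.
IRREGULAR_PAST = {"go": "went", "run": "ran", "take": "took"}

def past_tense(verb):
    """Lookup first (stored wordform); otherwise apply the regular rule."""
    if verb in IRREGULAR_PAST:
        return IRREGULAR_PAST[verb]   # irregular: retrieved whole
    if verb.endswith("e"):
        return verb + "d"             # smile -> smiled
    return verb + "ed"                # jump -> jumped

assert past_tense("smile") == "smiled"
assert past_tense("run") == "ran"
assert past_tense("jump") == "jumped"
```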

There is some argument over whether regular forms and irregular forms are processed by the same or different mechanisms (Pinker, 1999; Ullman et al., 2005; Joanisse & Seidenberg, 2005; McClelland & Rumelhart, 1986; McClelland & Patterson, 2002). Regardless of how this debate is resolved, the MTG appears to be activated by both regular and irregular morphological processes. In past tense generation and comprehension tasks both regular and irregular forms have been shown to produce a BOLD response in bilateral pMTG (Joanisse & Seidenberg, 2005), with most studies showing greater activation in this region by irregular forms than by regular ones (Tyler et al., 2005; Yokoyama et al., 2006). In related work, the processing of spoken regular morphological markers for gender in Italian (Miceli et al., 2002) and of the German plural (Beretta et al., 2003) has also been shown to produce increased pMTG activation. Despite evidence that bilateral pMTG is involved in both forms of morphology, numerous studies have shown a dissociation between impairments in regular and irregular morphology in aphasia (Marslen-Wilson & Tyler, 1997; Tyler et al., 2002; Tyler et al., 2005; Ullman et al., 2005). This work tends to show that frontal lobe damage, particularly damage to the LIFG, is associated with deficits in the processing of regular morphology with preserved processing of irregular forms. Patients with posterior damage, primarily including the left posterior temporal lobe, may show the opposite pattern, providing a double dissociation (Ullman et al., 2005). For regular inflection, processing seems to require interactions between the pMTG and LIFG. These areas appear to be connected by the arcuate fasciculus (Petrides & Pandya, 1988; Catani et al., 2005), and/or the extreme capsule (Makris & Pandya, 2008), and show functional connectivity in morphological processing (Stamatakis et al., 2005).

The same frontotemporal network is also involved in at least two forms of context-dependent interpretation of ambiguous words. In pioneering work in the late 1970s, psycholinguists discovered that listeners who hear ambiguous words such as bug may momentarily access multiple interpretations (e.g. insect, electronic surveillance device), and then use context to select the relevant interpretation (Swinney, 1979). Systematic studies of dictionary meanings show that at least 80% of common words have more than one meaning, and some words may have more than 40 (Parks et al., 1998). BOLD imaging studies have found increased activation in the left pMTG and adjacent pITG, and in left-dominant bilateral IFG, associated with polysemous words heard in clearly disambiguating contexts (Rodd et al., 2005; Davis et al., 2007; Zempleni et al., 2007). The IFG activation is consistent with results of related semantic selection studies that reliably show increased left prefrontal activation associated with increased selection demands (Thompson-Schill et al., 1997; Wagner et al., 2001); the ITG and MTG activation in studies of ambiguity is consistent with the activation of more lexical items in that area. Similar activation patterns are found in tasks in which sentential context constrains the interpretation of words with ambiguous syntactic category. For example, the word bowl is clearly a noun in the sentence He might drop the bowl, but clearly a verb in the sentence He might bowl a perfect game. Several studies have found that activation of the LIFG and pMTG is modulated by this type of ambiguity (Gennari et al., 2007; Snijders et al., 2009). A recent study using psychophysiological interactions to examine patterns of effective connectivity showed increased coupling between the LIFG and the superior aspect of the left pMTG for ambiguous versus unambiguous sentences of this type (Snijders et al., 2010). This pattern was primarily left lateralized, but extended to the RIFG and right pMTG as well as the striatum.

6. The Dorsal Lexicon

A broad convergence of evidence suggests that the supramarginal gyrus (SMG) serves as a dorsal stream lexicon, playing a role in speech production and perception as well as articulatory working memory rehearsal. The notion that speech production and perception share a common lexicon is a matter of some debate, with prominent psycholinguistic models arguing for separate input and output lexica (Dell et al., 1997; Levelt et al., 1999), and models motivated by neuropsychological and functional imaging evidence tending to favor a common resource (Allport, 1984; McKay, 1987; Coleman, 1998; Buchsbaum et al., 2001; 2003; Hickok & Poeppel, 2000; 2004; 2007). Jacquemot et al. (2007) found a dissociation between production and perceptual performance in a patient with conduction aphasia, and argued that their results favor interacting but distinct phonological codes for perception and production, linked by dissociable feedforward and feedback connections. This idea is developed more fully by Hickok et al. (2011), who review behavioral evidence from normal and impaired listeners and neuroimaging results demonstrating a modulatory role of production resources in speech perception, and a predictive role of perceptual modeling in the control of speech production, within a state feedback control framework.

None of the current dual stream models of spoken word recognition posits a dorsal stream lexicon. Hickok and Poeppel (2000; 2007) identify left-lateralized Spt, a region inside the Sylvian fissure that includes the planum temporale and parietal operculum, as an interface between sensory and motor representations, in a manner that parallels the role they ascribe to the MTG as an interface between sensory and semantic representations. Hickok and Poeppel (2000) argue that this region does not function as a phonological store or lexicon. In Hickok et al.'s (2011) state feedback control model of speech production, the Spt acts as a sensorimotor conversion system operating at the level of segment-sized feature bundles and coarser syllabic units. Syllabic coding could of course be applied to entire monosyllabic words, but the conversion unit in this model is not specifically or intrinsically lexical. It is worth noting that the two studies that most directly support the claim that the Spt is the locus of sensorimotor conversion both showed concurrent activation in the inferior parietal lobe. Buchsbaum et al. (2001) found activation in the parietal operculum associated with the perception and production of nonsense words, and Hickok et al. (2003) found activation in a region that appears to include portions of the SMG associated with the rehearsal and production of nonsense sentences and musical sequences (see Footnote 3). The region that was most strongly activated by linguistic stimuli was a subset of the larger region activated by musical stimuli. The mapping of syllable-sized units from sound to articulation could have several potential roles in speech production. Articulatory coding of syllables could provide a direct non-lexical route to production, which would be useful for word learning. It could also inform lexical representation in the SMG.

While syllabic encoding may facilitate speech production, recent evidence showing an influence of lexical properties on speech production suggests that lexical representations typically mediate the mapping between sound and articulation. A series of studies has demonstrated that word pronunciation is influenced by phonological neighborhood properties (Scarborough, 2003; 2004; Wright, 2004; Munson & Solomon, 2004; Munson, 2007). Unfortunately, phonotactic probability, a sublexical distributional property, may be confounded with phonological neighborhood size in these studies, making it unclear whether these results reflect lexical or sublexical effects. More compelling evidence comes from studies showing that pronunciation is influenced by word frequency (Pluymaekers et al., 2005) and lexical predictability (Bell et al., 2003). Baese-Berk and Goldrick (2009) recently demonstrated that voice onset time (VOT), a correlate of the [voicing] feature, is typically longer in a self-paced single word reading task for word-initial voiceless stop consonants in words that have a voiced competitor than in words that do not. Thus, VOT is longer for the /k/ in “coat” (which has the competitor “goat”) than for the /k/ in “cope” (which has no competitor, since “gope” is not a word). These interactions occur in words with the same onset diphones, suggesting that the mapping from sound to articulation is mediated by some form of lexical representation, not sublexical factors. BOLD imaging results by Peramunage et al. (2011) exploring this interaction (described in section 6.2) suggest that the locus of this effect is the left SMG. These results do not preclude the existence of a nonlexical, syllabic pathway from acoustic-phonetic representation to articulation involving Spt rather than SMG, but they do support the claim that there is a lexically mediated articulatory dorsal pathway involving the SMG.
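
The lexical property at stake is easy to state computationally. The sketch below, which assumes a toy lexicon with invented phonemic transcriptions (neither is taken from the studies cited), tests whether swapping a word's initial voiceless stop for its voiced counterpart yields another word:

    # Does a word beginning with a voiceless stop have a voiced minimal-pair
    # competitor? The lexicon and transcriptions are illustrative only.
    VOICED_OF = {"p": "b", "t": "d", "k": "g"}   # voiceless stop -> voiced counterpart
    LEXICON = {"coat": "kot", "goat": "got", "cope": "kop"}

    def has_voiced_competitor(word: str) -> bool:
        phones = LEXICON[word]
        if phones[0] not in VOICED_OF:
            return False
        competitor = VOICED_OF[phones[0]] + phones[1:]
        return competitor in LEXICON.values()

    assert has_voiced_competitor("coat")       # "goat" exists
    assert not has_voiced_competitor("cope")   # "gope" is not a word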

6.1 Evidence from Disruption of Lexical Processes

Damage to the left supramarginal gyrus and adjacent parietal operculum is associated with several language processing deficits that appear to reflect impaired abstract phonological representation or processing. These include deficits in phoneme discrimination and categorization (Caplan et al., 1995; Gow & Caplan, 1996; Blumstein et al., 1994) and impaired phonological working memory (Paulesu et al., 1993). The critical question is whether these impairments reflect damage to a hypothesized dorsal lexicon.

Several lines of evidence suggest that aphasics with damage to the inferior parietal lobe, including the SMG, show processing deficits that are correlated with lexical frequency. Goldrick and Rapp (2007) describe the patient CSS, who suffered a left parietal infarct. In naming tasks, CSS made more errors on low-frequency words than on high-frequency words.

Damage to the SMG may also produce a form of conduction aphasia that is characterized in part by sensitivity to lexical frequency in word repetition (see Table 1). Based on a review of previously reported cases of conduction aphasia, Shallice and Warrington (1977) proposed a distinction between repetition and reproduction conduction aphasia. In the former variant, patients tend to show unimpaired spontaneous speech and preserved single word production, but marked deficits in word recall and recognition (Shallice & Warrington, 1970; Vallar & Baddeley, 1984). In contrast, reproduction conduction aphasia is characterized by frequent phonological paraphasias in spontaneous speech, and impaired single word production in oral reading and picture naming with preserved recognition and recall (Yamadori & Ikamura, 1975). This two-way distinction was confirmed by Axer et al. (2001), who also found an anatomical dissociation between the variants. In a quantitative lesion overlap analysis of CT lesion data from 15 conduction aphasics, they found evidence that the repetition form is associated with superior temporal damage (see also Buchsbaum et al., in press), and the reproduction form with damage to the left supramarginal gyrus and adjacent parietal operculum.

I will argue that the deficit in some reproduction conduction aphasics is lexical, and provides a double dissociation with the loss of lexical representations in the ventral lexicon associated with transcortical sensory aphasia. The primary features of this dissociation are summarized in Table 1. Two properties of reproduction conduction aphasia suggest that it reflects damage to the hypothesized dorsal lexicon. The first is sensitivity to the lexical properties of stimulus items in repetition tasks, including lexical frequency and word length (Shallice et al., 2000; Knobel et al., 2008; Romani et al., 2011). These features could, however, result from damage to sublexical phonological representations in an output “buffer” that plans articulation. A purely sublexical account might predict symmetrical deficits in word and pronounceable nonword repetition, since both are composed of the same sublexical constituents. The second property speaks against this: some patients with reproduction conduction aphasia show significantly greater deficits in nonword repetition than in word repetition (Caplan et al., 1986; Caramazza et al., 1986; Dubois et al., 1964/1973; Shallice et al., 2000; Strub & Gardner, 1974; Notoya et al., 1982), or better repetition of nonwords judged to be highly wordlike than of nonwords judged to be less wordlike (although still phonotactically legal; Saito et al., 2003). This pattern is consistent with the hypothesis that nonword repetition draws on the same resources as word repetition, but in a less efficient manner that engages a broader network of overlapping lexical representations (Kohn & Smith, 1994; Shallice et al., 2000).

A number of models of speech production (cf. Fromkin, 1973; Shattuck-Hufnagel, 1979; Dell et al., 1997) posit the existence of a phonological output buffer that serves as an interface between phonological and articulatory representations during syllabification and speech planning. Consistent with these models, several prominent explanations of reproduction conduction aphasia describe the functional deficit as an impairment of this output buffer (Caramazza et al., 1986; Shallice et al., 2000; Saito et al., 2003). Caramazza et al. (1986) described one reproduction conduction aphasic who showed strong length and serial position effects in error patterns for repetition and spelling of nonwords, consistent with buffer-constrained performance. Shallice et al. (2000) described a subject with a lesion that included both supra- and infra-sylvian cortices who showed similar evidence of impairment in the repetition of both nonwords and words, and suggested that the increased vulnerability of nonwords in these tasks is attributable to increased demands on a common resource.

However, other patients’ performances are not easily attributed to a disruption of the phonological output buffer, even a buffer that receives support from the lexicon. For example, Romani et al. (2011) examined a set of six conduction aphasics who showed severe deficits in word repetition. In their data there was no consistent or robust pattern of increased errors for longer words, as would follow from reduced buffer capacity, and no evidence of the phoneme-by-phoneme serial position effects that are the hallmark of an output buffer deficit. These data suggest that reproduction conduction aphasia is not the result of damage to a buffer in these cases. Goldrick and Rapp’s (2007) finding of sensitivity to phonological neighborhood density, together with the sensitivity to lexical frequency seen in some reproduction conduction aphasics, suggests that damage to the inferior parietal lobe can produce a deficit in the long-term representation of lexical wordforms used in planning speech production.

6.2 Functional Imaging Evidence

Evidence from BOLD imaging studies is consistent with evidence from aphasia (cf. Caplan et al., 1995; Gow & Caplan, 1996) implicating the SMG in phoneme discrimination and identification. Several studies have shown an association between SMG activation and dishabituation effects when subjects listen to a series of synthetic syllables and then hear a syllable that begins with a different consonant (Dehaene-Lambertz et al., 2005; Zevin & McCandliss, 2005). Desai et al. (2008) found a correlation between SMG BOLD activation and the strength of categorical perception in a phoneme discrimination task using synthetic speech continua. Similarly, Raizada and Poldrack (2007) found a correlation between neural amplification in the SMG and discrimination scores in a monitoring task involving synthetic nonsense syllable pairs. A meta-analysis by Turkeltaub and Coslett (2010) of eight fMRI studies of phoneme categorization, comprising 123 subjects, found significant clusters of activation associated with categorization in the SMG and angular gyrus (AG). These results clearly show that the SMG plays a role in phonological processing. As discussed above, though non-lexical mechanisms are commonly assumed to underlie these effects, partial matches between nonsense syllables and stored abstract phonological representations of words (e.g. ba and balloon) may be their source. It is also possible that SMG activation is the result of mappings between speech input and phonological representations of both words and sublexical units such as segments.

Lexical representation in the SMG is more directly indicated by studies that examine the effects of wordform similarity on BOLD activation. Recent functional imaging studies provide converging evidence that SMG activation is modulated by the presence or absence of words that resemble a spoken word. In one study, subjects performed a lexical decision task using spoken words from dense versus sparse lexical neighborhoods (Prabhakaran et al., 2006). Prabhakaran et al. found that words from dense neighborhoods produce increased left SMG activation. In a related study, members of the same group (Righi et al., 2010) combined fMRI with eyetracking to examine the influence of the presence or absence of a phonological competitor in a search task. Subjects heard a word such as “beaker” and were asked to select a picture that matched that word from an array of images. In some trials, one of the visual distractors was an image of an object whose name shared an onset with the target (e.g. a beetle). When these items were present in the visual array they produced phonological competition, as evidenced by slower overall responses and increased looks to the competitor early in the trial. BOLD imaging revealed that these competition effects were accompanied by increased activation in the bilateral SMG as well as portions of the LIFG, left cingulate and left insula.

Similar competition effects are found in speech production when subjects are asked to read aloud words that either do or do not have a close competitor in the lexicon differing only in the voicing of the initial segment. For example, cape has the neighbor gape, but cake has no comparable neighbor (gake is not a word). When a competitor is present in the lexicon (although not present on screen), subjects once again show increased BOLD activation in the left SMG, as well as the LIFG and left precentral gyrus (Peramunage et al., 2011).

In our own work (Gow et al., 2008), we have used Granger causation analysis to examine the influence of lexical representation on speech categorization. A large body of behavioral work (cf. Ganong, 1980; Pitt & Samuel, 1993) has shown that the perception of lexically ambiguous speech sounds is influenced by their lexical context. For example, a phoneme that is ambiguous between /s/ and /∫/ is more likely to be interpreted as /s/ in *andal, and as /∫/ in *ampoo. We found that this bias corresponds to a pattern of SMG influence on pSTG activation (associated with acoustic-phonetic processing) that begins in the time period when lexical effects are first seen in electrophysiological data.
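
The logic of Granger causation is that one signal “Granger-causes” another if its past improves prediction of the other's future beyond what the other's own past already provides. The sketch below illustrates that logic on synthetic data using the statsmodels package; it is not the source-localized MEG/EEG pipeline of Gow et al. (2008), and the variable names smg and pstg are only mnemonic labels of my own choosing.

    # Granger causality on synthetic time series (illustration only).
    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(0)
    n = 500
    smg = rng.standard_normal(n)          # stand-in "SMG" series
    pstg = np.zeros(n)                    # stand-in "pSTG" series
    for t in range(2, n):
        # pstg depends on its own past plus lagged smg, so smg should
        # Granger-cause pstg (but not vice versa).
        pstg[t] = 0.5 * pstg[t - 1] + 0.4 * smg[t - 2] + 0.1 * rng.standard_normal()

    # Tests whether lags of the second column (smg) improve prediction of the
    # first column (pstg); prints F-tests for lags 1-3.
    grangercausalitytests(np.column_stack([pstg, smg]), maxlag=3)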

As noted, the fact that competition and top-down effects are formally defined by stimulus properties derived from words strongly suggests, but does not conclusively demonstrate, that differences in BOLD activation reflect lexical activation. The existence of phonotactic constraints on how sublexical units (typically phonemes) can be combined to form words leads to a strong correlation between measures of lexical similarity and sublexical constituent frequency. For example, neighborhood density measures, which are generally thought to reflect lexical properties, are highly correlated with sublexical diphone frequency measures (Luce & Pisoni, 1998; Prabhakaran et al., 2006; see the sketch following this paragraph). Moreover, increases in BOLD response observed when a stimulus item has a lexical competitor might reflect the increased phonetic analysis needed to discriminate between perceptually similar wordforms. Conversely, behavioral and simulation results suggest that putatively sublexical effects – such as the influence of phonotactic constraints on the perception of nonwords – may be explained by top-down gang effects driven by partially activated lexical items (McClelland & Elman, 1986). To untangle these factors, the most useful data come from studies that show different activation patterns for tasks that focus on lexical versus segmental processing. Several studies have found that tasks requiring explicit segmental categorization do not modulate the SMG BOLD response, but do modulate the activation of other regions including the middle temporal gyrus (MTG), LIFG and AG (Burton et al., 2000; Blumstein et al., 2005; Gow et al., 2008). Blumstein et al. (2005) propose that in these tasks the AG and LIFG are involved in non-obligatory segmentation processes. Seghier et al. (2010) note that the AG is reliably activated in tasks that involve semantic processing, and suggest that activation of the dorsal AG in non-semantic perceptual tasks reflects failed attempts to find semantic content in inherently non-semantic materials. These proposals may not be mutually exclusive. One possibility is that the AG plays a role in inhibiting lexical processing of items that do not lead to semantic processing, and perhaps in facilitating sublexical processing involving Spt. In tasks that require sublexical or segmental processing, the same AG mechanism could be invoked strategically by a network involving the LIFG and MTG.
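
The density/frequency confound mentioned at the start of this paragraph is easy to see in a computational sketch. Assuming a toy lexicon of invented phoneme strings, and simplifying neighborhoods to single-segment substitutions (rather than the full substitution/addition/deletion rule of Luce & Pisoni, 1998), words with many neighbors are built, almost by definition, from high-frequency diphones:

    # Neighborhood density (lexical) vs. mean diphone frequency (sublexical),
    # computed over a toy lexicon of invented phoneme strings.
    from collections import Counter

    LEXICON = ["kat", "bat", "rat", "kap", "map", "kot", "got"]

    def neighbors(word):
        """Words differing from `word` by exactly one segment substitution."""
        return [w for w in LEXICON
                if len(w) == len(word) and w != word
                and sum(a != b for a, b in zip(w, word)) == 1]

    diphones = Counter(w[i:i + 2] for w in LEXICON for i in range(len(w) - 1))

    def mean_diphone_freq(word):
        return sum(diphones[word[i:i + 2]] for i in range(len(word) - 1)) / (len(word) - 1)

    for w in LEXICON:
        print(w, len(neighbors(w)), round(mean_diphone_freq(w), 2))

Running this, a dense-neighborhood item like "kat" has both the most neighbors and the highest mean diphone frequency, which is exactly why the two measures are hard to deconfound in natural lexica.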

6.3 Word Learning Effects

Studies of changes in brain anatomy and activation related to word learning provide converging evidence that the SMG plays a role in lexical representation in the dorsal speech pathway. In one of the first studies in this area, researchers taught subjects a set of unfamiliar archaic words (Cornelissen et al., 2004). MEG was used to examine brain responses before and after learning these words. Sixty percent of the subjects showed increased activation, associated with an equivalent current dipole modeled in the inferior parietal lobe, during a time period associated with lexical processing (400 msec after word onset). In a similar study using fMRI, another group found increased BOLD response in the same area after subjects learned a set of pseudowords that were used to name novel objects. A closely related line of work has examined correlations between vocabulary size and gray matter density. The earliest work in this area found an association between gray matter density in the left pSMG and bilingualism (Mechelli et al., 2004; Green et al., 2007). More recent work showed an association between gray matter density in the left pSMG and vocabulary size in monolinguals (Lee et al., 2007; Richardson et al., 2010). Taken together, these data suggest that gray matter density in this region tracks the size of the lexicon, whether additional words are acquired through a second language or through a larger vocabulary in a single language.

7. The Function of the STG

The dual lexicon model is an attempt to consolidate new data with our evolving understanding of the role of lexical representation in language processing. This section briefly discusses the role of the posterior superior temporal cortex, the original wortshatz, in the context of the dual lexicon framework.

Wernicke’s model focuses on the role of the left posterior superior temporal cortex. More recent work supports the importance of this region in spoken language processing, but suggests that pSTG involvement in speech processing is bilateral and that more anterior superior temporal cortex also contributes. The STG is adjacent to auditory cortex, and appears to be involved in secondary analysis of auditory input. Howard et al. (2000) found that electrical stimulation of primary auditory cortex produces evoked potentials in a band of cortex that includes both anterior and posterior STG. In a review of BOLD imaging studies, Hickok and Poeppel (2007) found that comparisons between passive listening to speech and the resting state produce a reliable pattern of bilateral, but left-dominant, STG activation. This activation appears to reflect different aspects of auditory processing in different regions. Poeppel’s (2003) asymmetric sampling in time hypothesis characterizes the hemispheric difference as one of temporal integration, with the left hemisphere integrating information on a temporal scale suited for phonemic processing, and the right integrating over a syllabic scale. This hypothesis, though controversial, is supported by evidence of the right hemisphere’s inability to categorize phonemes in aphasia (Wolmertz et al., 2004) and by differences in the power and sensitivity of cortical oscillations observed during speech perception at time scales that roughly correspond to phonemic (gamma) and syllabic (theta) durations (Giraud et al., 2007).
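
The timescale claim can be made concrete: phonemes unfold over tens of milliseconds (roughly the gamma band) while syllables unfold over hundreds (roughly the theta band). The sketch below shows how band power at the two scales might be compared, using a synthetic signal and band limits (4–8 Hz theta, 25–80 Hz gamma) chosen for illustration; it is not the analysis pipeline of Giraud et al. (2007).

    # Compare theta- vs. gamma-band power in a synthetic signal (illustration).
    import numpy as np
    from scipy.signal import welch

    fs = 1000  # sampling rate in Hz
    t = np.arange(0, 10, 1 / fs)
    # Synthetic signal: a 5 Hz (theta) and a 40 Hz (gamma) component plus noise.
    sig = (np.sin(2 * np.pi * 5 * t)
           + 0.5 * np.sin(2 * np.pi * 40 * t)
           + 0.2 * np.random.randn(t.size))

    def band_power(x, lo, hi):
        """Total Welch spectral power between lo and hi Hz."""
        f, p = welch(x, fs=fs, nperseg=2048)
        return p[(f >= lo) & (f <= hi)].sum()

    print("theta (4-8 Hz) power:", band_power(sig, 4, 8))
    print("gamma (25-80 Hz) power:", band_power(sig, 25, 80))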

Several recent studies support the existence of functional anterior-posterior differentiation along the STG. Chang et al. (2010) used high-density intracranial cortical surface arrays to investigate sensitivity to specific acoustic-phonetic feature cues during passive listening to synthetic consonant place continua. They found reliable patterns of mapping between the timing and topography of evoked responses in the left pSTG and formant onset and transition cues to consonant place. This suggests that the pSTG may play a role in the detection of phonetic feature cues, in line with Hickok and Poeppel’s (2007) characterization of bilateral dorsal STG as the site of auditory spectrotemporal analysis, including acoustic-phonetic processing. The function of anterior STG/STS is more elusive. Anterior superior temporal activation is seen in contrasts between clearly produced speech and tokens of unintelligible reversed or noise-vocoded speech (Crinion et al., 2003; Scott et al., 2000; Davis & Johnsrude, 2003; Rodd et al., 2005). However, the same regions are also activated in comparisons between familiar environmental noises and scrambled tokens of the same sounds (Thierry et al., 2003), and between melodic and less melodic tone sequences (Griffiths et al., 1998). This overlap has been interpreted as evidence that the region is involved in some general aspect of auditory processing. Evidence that activation of this area is also modulated by manipulations of syntactic complexity and prosody in spoken sentences (Humphries et al., 2005) and by the grouping of auditorily presented lists of letter names (Kalm et al., 2011) suggests that this role may be related to the temporal integration of recognizable auditory units in memory (Davis & Gaskell, 2009).

The remainder of the discussion of the STG will focus on the special role played by left-dominant, bilateral pSTG as a link between the two lexica. Situated between the SMG and MTG, the STG is well placed for this role. As the likely locus of acoustic-phonetic analysis, the pSTG provides a common source of input to both lexica in tasks involving auditory speech perception. I hypothesize that the STG plays an important role in coordinating the semantic and motor/phonological interfaces, ensuring that both lexica converge on representations of the same item. Convergence is a problem the dual lexicon framework must face. Standard psycholinguistic models of word production and perception (Levelt et al., 1999; McClelland & Elman, 1986; Norris, McQueen & Cutler, 2000; Marslen-Wilson, 1987) posit only one store of wordform information, and so activation of mismatched items is not an issue.

Several behavioral studies have shown that while form and semantic priming patterns have dissociable properties (Norris, McQueen & Butterfield, 2006), semantic manipulations influence form priming and form manipulations can influence semantic priming (Gaskell & Marslen-Wilson, 2002; Misiurski et al., 2005). In our own studies using Granger causation analysis, we have found evidence of direct pMTG influence on pSTG activation during a task that involved accessing the meaning of words (Gow & Segawa, 2009), and of SMG influence on pSTG activation during tasks that require explicit phonemic categorization of segments in familiar words (Gow et al., 2008). Together, these results suggest that dorsal and ventral lexicon activity is coordinated by a pattern of bidirectional interaction between the pSTG and each of the lexica. Within such an organization, the pSTG would serve as a common integrator of top-down lexical information.

This organization would offer several computational advantages. In addition to providing a mechanism for coordinating activation in the two lexica, it increases the robustness of lexical processing in two ways. First, and most immediately, it builds additional redundancy into the system. If access to the kind of wordform representation that is preferred in a particular task fails or is blocked, the other store may provide an indirect pathway to further lexical processing. Second, this web of interconnection provides a flexible means for extracting stable acoustic-phonetic representations given varying levels of semantic or phonological/articulatory constraint. Within this framework, deficiencies in one type of input or context may be addressed by increased influence of the other lexicon on acoustic-phonetic representation in the STG.

This flexibility may be further enhanced by a broader pattern of interaction suggested by evidence of between-stream anatomical and effective connectivity. In addition to participating in the ventral stream, the pMTG is linked to dorsal structures including premotor cortex (via the arcuate fasciculus) and the LIFG (via the extreme capsule) (Makris & Pandya, 2008). Similarly, the SMG is linked to the LIFG via SLF III, and to ventral structures via the middle longitudinal fasciculus and arcuate fasciculus (Makris et al., 1999; Makris et al., 2009). Measures of effective connectivity confirm that these areas may, under certain conditions, interact directly during processing (Gow et al., 2008; Gow & Segawa, 2009).

8. Summary

The dual lexicon model provides a framework for integrating a broad and diverse set of empirical results and computational considerations. It unites observations from aphasia, behavioral psycholinguistic paradigms, laboratory and theoretical phonology, BOLD activation in normal subjects, electrophysiology, functional, anatomical and effective connectivity, and histology. The model extends current dual stream models of language processing. It also provides a framework for understanding the role of lexical knowledge in a wide range of language processes including speech production and perception, morphology, sentence comprehension and language learning. Furthermore, the dual lexicon framework, including its provisions for interactions between the lexica and other language areas, suggests an additional source of the robustness of language processing.

This model is intended to provide a framework for understanding current and future results in the study of the brain basis of language processing. Future work within this framework is needed to specify the computational mechanisms that integrate lexical knowledge into cognitive functions, ranging from clearly linguistic processes to memory and to less obviously related functions such as perceptual categorization, decision making and problem solving that might make use of lexically indexed information. A number of more immediate issues also remain to be explored, including the nature and implications of hemispheric differences in lexical representation, the functional significance of lexical sensitivity in other brain regions including the LIFG and premotor cortex, the relationship between the spoken language lexica and the visual wordform lexica involved in reading, and the nature of the mechanisms that support the establishment of new lexical representations.

Highlights.

  • Spoken language processing relies on parallel lexica in the dorsal and ventral speech streams

  • The pMTG mediates the mapping between sound and meaning in the ventral stream lexicon

  • The SMG mediates the mapping between sound and articulation in the dorsal stream lexicon

  • Both lexica may play a role in speech perception and production

  • Uniquely lexical properties influence behavioral/neural measures of both streams

Acknowledgements

I would like to thank David Caplan, Catherine Stoodley, and Joshua Levy for their feedback during the preparation of this manuscript, and Matt Davis and Greg Hickok for their thoughtful reviews of an earlier version of this manuscript. This work was supported by the National Institute on Deafness and Other Communication Disorders (R01 DC003108).

Footnotes

1

Connine et al. (1993) demonstrated that under some conditions clearly pronounced pseudowords derived from familiar words can produce semantic priming of associates of the words they are derived from.

2

Several researchers (cf. Seidenberg & McClelland, 1989; Coltheart, 2004) have argued that distributed representation is inconsistent with the concepts of permanent lexical representation or a lexicon. This question is taken up in section 3.

3

Hickok (2010) notes that in some instances spatial normalization of fMRI data may lead to a mislocalization in which activation that appears in the SMG in individual data appears in the Spt in normalized group data. While it is unknown how systematic or common this localization error is, it raises the possibility that some results that implicate the Spt in processing may actually reflect SMG activation.


I have no conflicts of interest to declare.

Bibliography

  1. Allport DA. Speech production and comprehension: one lexicon or two? In: Prinz W, Sanders AF, editors. Cognition and motor processes. Springer-Verlag; Berlin: 1984. pp. 209–228. [Google Scholar]
  2. Axer H, Keyeserlink AG, Berks G, von Keyservink DG. Supra- and infrasylvian conduction aphasia. Brain and Language. 2001;76(3):317–331. doi: 10.1006/brln.2000.2425. [DOI] [PubMed] [Google Scholar]
  3. Baciu M, Ans B, Carbonnel S, Valdois S, Juphard A, Pachot-Clouard M, Segebarth C. Length effect during word and pseudo-word reading. An event-related fMRI study. Neuroscience Research Communications. 2002;30(3):155–165. [Google Scholar]
  4. Baese-Berk M, Goldrick M. Mechanisms of interaction in speech production. Language and Cognitive Processes. 2009;24(4):527–554. doi: 10.1080/01690960802299378. [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Basso A, Casati G, Vignolo LA. Phonemic identification defects in aphasia. Cortex. 1977;13(1):84–95. doi: 10.1016/s0010-9452(77)80057-9. [DOI] [PubMed] [Google Scholar]
  6. Beddor PS, Harnsberger JD, Lindemann S. Language-specific patterns of vowel-to-vowel coarticulation: Acoustic structures and their perceptual correlates. Journal of Phonetics. 2002;20(4):591–627. [Google Scholar]
  7. Bell A, Jurafsky D, Fosler-Lussier E, Girand C, Gregory M, Gildea D. Effects of disfluencies, predictability, and utterance position on word form variation in English conversational speech. Journal of the Acoustical Society of America. 2003;113(2):1001–1024. doi: 10.1121/1.1534836. [DOI] [PubMed] [Google Scholar]
  8. Bellgowan PSF, Saad ZS, Bandettini PA. Understanding neural system dynamics through task modulation and measurement of functional MRI amplitude, latency, and width. Proceedings of the National Academy of Sciences of the United States of America. 2003;100(3):1415–1419. doi: 10.1073/pnas.0337747100. [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Berreta A, Campbell C, Carr TH, Huang J, Schmitt LM, Christianson K, Cao Y. An ER-fMRI investigation of morphological inflection in German reveals that the brain makes a distinction between regular and irregular forms. Brain and Language. 2003;85(1):67–92. doi: 10.1016/s0093-934x(02)00560-6. [DOI] [PubMed] [Google Scholar]
  10. Binder J, Frost J, Hammeke T, Bellgowan P, Springer J, Kaufman J, Possing E. Human temporal lobe activation by speech and nonspeech sounds. Cerebral Cortex. 2000;10(5):512–528. doi: 10.1093/cercor/10.5.512. [DOI] [PubMed] [Google Scholar]
  11. Blumstein SE, Baker SE, Goodglass H. Phonological factors in auditory comprehension in aphasia. Neuropsychologia. 1977;15(1):371–383. doi: 10.1016/0028-3932(77)90111-7. [DOI] [PubMed] [Google Scholar]
  12. Blumstein SE, Cooper WE, Zurif EB. The perception and production of voice onset time in aphasia. Neuropsychologia. 1977;15(3):371–383. doi: 10.1016/0028-3932(77)90089-6. [DOI] [PubMed] [Google Scholar]
  13. Blumstein SE, Myers EB, Rissman J. The perception of voice onset time: An fMRI investigation of phonetic category structure. Journal of Cognitive Neuroscience. 2005;17(9):1353–1366. doi: 10.1162/0898929054985473. [DOI] [PubMed] [Google Scholar]
  14. Boatman D, Gordon B, Hart J, Selnes O, Miglioretti D, Lenz F. Transcortical sensory aphasia: Revisited and revised. Brain. 2000;123(pt. 8):1634–1642. doi: 10.1093/brain/123.8.1634. [DOI] [PubMed] [Google Scholar]
  15. Bresnan J. Lexical-functional syntax. Blackwell; Cambridge: 2001. [Google Scholar]
  16. Bright P, Moss H, Tyler LK. Unitary versus multiple semantics: PET studies of word and picture meaning. Brain and Language. 2004;89(3):417–432. doi: 10.1016/j.bandl.2004.01.010. [DOI] [PubMed] [Google Scholar]
  17. Browman CP, Goldstein L. Articulatory phonology: An overview. Phonetica. 1992;49(3-4):155–180. doi: 10.1159/000261913. [DOI] [PubMed] [Google Scholar]
  18. Buschbaum BR, Baldo J, Okada K, Berman KF, Dronkers N, D’Esposito M, Hickok G. Conduction aphasia, sensory motor integration, and phonological short-term memory – An aggregate analysis of lesion and fMRI data. Brain and Language. doi: 10.1016/j.bandl.2010.12.001. (in press) PMCID: PMC3090694. [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Buchsbaum B, Hickok G, Humphries C. Role of left posterior superior temporal cortex in auditory sentence comprehension: An fMRI study. NeuroReport. 2001;12(8):1749–1752. doi: 10.1097/00001756-200106130-00046. [DOI] [PubMed] [Google Scholar]
  20. Burton MW, Small SL, Blumstein SE. The role of segmentation in phonological processing: An fMRI investigation. Journal of Cognitive Neuroscience. 2000;12(4):679–690. doi: 10.1162/089892900562309. [DOI] [PubMed] [Google Scholar]
  21. Caplan D. Functional neuroimaging studies of syntactic processing in sentence comprehension: A selective critical review. Language and Linguistics Compass. 2007;1(1-2):32–47. [Google Scholar]
  22. Caplan D, Gow D, Makris N. Analysis of lesions by MRI in stroke patients with acoustic-phonetic processing deficits. Neurology. 1995;45(2):293–298. doi: 10.1212/wnl.45.2.293. [DOI] [PubMed] [Google Scholar]
  23. Caplan D, Utman JA. Selective acoustic phonetic impairment and lexical access in an aphasic patient. Journal of the Acoustical Society of America. 1994;95(1):512–517. doi: 10.1121/1.408345. [DOI] [PubMed] [Google Scholar]
  24. Caplan D, Vanier M, Baker C. A case study of reproduction conduction aphasia. I. Word production. Cognitive Neuropsychology. 1986;3(1):99–128. [Google Scholar]
  25. Caramazza A, Miceli G, Villa G. The role of the (output) phonological buffer in reading, writing and repetition. Cognitive Neuropsychology. 1986;3(6):37–76. [Google Scholar]
  26. Catani M, Jones DK, Ffytche DH. Perisylvian language networks of the human brain. Annals of Neurology. 2005;57(1):8–16. doi: 10.1002/ana.20319. [DOI] [PubMed] [Google Scholar]
  27. Chang EF, Rieger JW, Johnson K, Berger MS, Barbaro NM, Knight RT. Categorical speech representation in human superior temporal gyrus. Nature Neuroscience. 2010;13(11):1428–1432. doi: 10.1038/nn.2641. [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Chen C-C, Kisseberth C. Ikorovere Makua tonology (part 1) Studies in Linguistic Sciences. 1979;9(1):3–63. [Google Scholar]
  29. Chomsky N, Halle M. The sound pattern of English. MIT Press; Cambridge: 1968. [Google Scholar]
  30. Clements GN. Vowel harmony in nonlinear generative phonology. Indiana University Linguistics Club; Bloomington: 1980. [Google Scholar]
  31. Clements GN, Keyser SJ. A three-tiered theory of the syllable. The Center for Cognitive Science. MIT; 1981. Occasional Paper No.19. [Google Scholar]
  32. Cohen L, Jobert A, Le Bihan D, Dehaene S. Unimodal and multimodal regions for word processing in the left temporal cortex. NeuroImage. 2004;23(4):1256–1270. doi: 10.1016/j.neuroimage.2004.07.052. [DOI] [PubMed] [Google Scholar]
  33. Coleman J. Cognitive reality and the phonological lexicon: A review. Journal of Neurolinguistics. 1998;11(3):295–320. [Google Scholar]
  34. Connine C, Blasko DG, Titone D. Do the beginnings of words have special status in auditory word recognition? Journal of Memory and Language. 1993;32(2):193–210. [Google Scholar]
  35. Coltheart M. Are there lexicons? Quarterly Journal of Experimental Psychology. 2004;57(7):1153–1171. doi: 10.1080/02724980443000007. [DOI] [PubMed] [Google Scholar]
  36. Cornelissen K, Laine M, Renvall K, Saarinen T, Martin N, Salmelin R. Learning new names for new objects: Cortical effects as measured by magnetoencephalography. Brain and Language. 2004;89(3):617–622. doi: 10.1016/j.bandl.2003.12.007. [DOI] [PubMed] [Google Scholar]
  37. Coslett HB, Roeltgen DP, Gonzalez Rothl L, Heilman KM. Transcortical sensory aphasia: Evidence for subtypes. Brain and Language. 1987;32(2):362–378. doi: 10.1016/0093-934x(87)90133-7. [DOI] [PubMed] [Google Scholar]
  38. Crinion JT, Lambon-Ralph MA, Warburton EA, Ho ward D, Wise RJS. Temporal lobe regions engaged during normal speech comprehension. Brain. 2003;126(5):1193–1201. doi: 10.1093/brain/awg104. [DOI] [PubMed] [Google Scholar]
  39. Cullinan WL, Erdos E, Schaefer R, Tekieli ME. Perception of temporal order of vowels and consonant-vowel syllables. Journal of Speech and Hearing Research. 1977;20:742–751. doi: 10.1044/jshr.2004.742. [DOI] [PubMed] [Google Scholar]
  40. Damasio H, Damasio AR. The anatomical basis of conduction aphasia. Brain. 1980;103(2):337–350. doi: 10.1093/brain/103.2.337. [DOI] [PubMed] [Google Scholar]
  41. Damasio AR, Damasio H. Cortical systems for retrieval of concrete knowledge: The convergence zone framework. In: Koch C, Davis JL, editors. Large-scale neuronal theories of the brain. MIT Press; Cambridge: 1994. pp. 61–74. [Google Scholar]
  42. Damasio AR, McKee J, Damasio H. Determinants of performance in color anomia. Brain and Language. 1979;7(10):74–85. doi: 10.1016/0093-934x(79)90007-5. [DOI] [PubMed] [Google Scholar]
  43. Davis MH, Coleman MR, Absalom AR, Rodd JM, Johnsrude IS, Matta BF, Owen A,M, Menon DK. Dissociating speech perception and comprehension at reduced levels of awareness. Proceedings of the National Academy of Sciences of the United States of America. 2007;104(41):16032–16037. doi: 10.1073/pnas.0701309104. [DOI] [PMC free article] [PubMed] [Google Scholar]
  44. Davis MH, DiBetta AM, MacDonald MJ, Gaskell MG. Learning and consolidation of novel spoken words. Journal of Cognitive Neuroscience. 2009;21(4):803–820. doi: 10.1162/jocn.2009.21059. [DOI] [PMC free article] [PubMed] [Google Scholar]
  45. Davis MH, Gaskell MG. A complementary systems account of word learning: Neural and behavioral evidence. Philosophical Transactions of the Royal Society B. 2009;364(1536):3773–800. doi: 10.1098/rstb.2009.0111. [DOI] [PMC free article] [PubMed] [Google Scholar]
  46. Davis MH, Johnsrude I. Hierarchical processing in spoken language comprehension. Journal of Neuroscience. 2003;23(8):3423–3431. doi: 10.1523/JNEUROSCI.23-08-03423.2003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  47. Dehaene-Lambertz G, Dehaene S, Anton JL, Campagne A, Ciuciu P, Dehaene GP, Denghien I, Jobert A, Lebihan D, Sigman M, et al. Functional segregation of cortical language areas by sentence repetition. Human Brain Mapping. 2006;27(5):360–371. doi: 10.1002/hbm.20250. [DOI] [PMC free article] [PubMed] [Google Scholar]
  48. Dehaene-Lambertz G, Pallier C, Serniclaes W, Sprenger-Charolles L, Jobert A, Dehaene S. Neural correlates of switching from auditory to speech perception. NeuroImage. 2005;24(1):21–33. doi: 10.1016/j.neuroimage.2004.09.039. [DOI] [PubMed] [Google Scholar]
  49. Graham K, Patterson K, Hodges JR. Progressive pure anomia: Insufficient activation of phonology by meaning. Neurocase. 1995;1(1):25–38. [Google Scholar]
  50. Dell GS, Schwartz MF, Martin N, Saffran EM, Gagnon DA. Lexical access in aphasic and nonaphasic speakers. Psychological Review. 1997;104(4):801–838. doi: 10.1037/0033-295x.104.4.801. [DOI] [PubMed] [Google Scholar]
  51. Dennis M. Dissociated naming and locating of body parts after left anterior temporal lobe resection: An experimental case study. Brain and Language. 1976;3(2):147–163. doi: 10.1016/0093-934x(76)90013-4. [DOI] [PubMed] [Google Scholar]
  52. Desai R, Liebenthal E, Waldron E, Binder JR. Left posterior temporal regions are sensitive to auditory categorization. Journal of Cognitive Neuroscience. 2008;20(7):1174–1188. doi: 10.1162/jocn.2008.20081. [DOI] [PMC free article] [PubMed] [Google Scholar]
  53. Dubois J, Hecaen H, Anelergues R. Maufras de chatelier. A., Marcie, P. Etude neurolinguistique de l’aphasie de conduction. Neuropsychologia. 1964/1973;2:9–44. [Google Scholar]; Goodglass H, Blumstein SE, editors. Psycholinguistics and aphasia. Johns Hopkins University Press; Baltimore: pp. 284–300. Translated in. [Google Scholar]
  54. Elman JL, McClelland JL. Cognitive penetration of the mechanisms of perception: Compensation for coarticulation of lexically restored phonemes. Journal of Memory and Language. 1988;27(2):143–165. [Google Scholar]
  55. Foss DJ, Swinney DA. On the psychological reality of the phoneme: Perception, identification, and consciousness. Journal of Verbal Memory and Verbal Behavior. 1973;12(2):246–257. [Google Scholar]
  56. Foulk Å, Sticht TG. Review of the research on the intelligibility and comprehension of accelerated speech. Psychological Bulletin. 1969;72:50–62. doi: 10.1037/h0027575. [DOI] [PubMed] [Google Scholar]
  57. Foygel D, Dell GS. Models of impaired lexical access in speech production. Journal of Memory and Language. 2000;43(1):182–216. [Google Scholar]
  58. Franklin S, Howard D, Patterson K. Abstract word meaning deafness. Cognitive Neuropsychology. 1994;11(1):1–34. [Google Scholar]
  59. Franklin S, Turner J, Lambon Ralph MA, Morris J, Bailey P. A distinctive case of word meaning deafness? Cognitive Neuropsychology. 1996;13(8):1139–1162. [Google Scholar]
  60. Frisch SA, Large N, Pisoni D. Wordlikeness: Effects of segmental probability and length on the processing of nonwords. Journal of Memory and Language. 2000;42(4):481–496. doi: 10.1006/jmla.1999.2692. [DOI] [PMC free article] [PubMed] [Google Scholar]
  61. Fromkin VA. Speech errors as linguistic evidence. Mouton; The Hague: 1973. [Google Scholar]
  62. Gainotti G, Silveri MC, Villa G, Micelli G. Anomia with and without lexical comprehension disorders. Brain and Language. 1986;29(1):18–33. doi: 10.1016/0093-934x(86)90031-3. [DOI] [PubMed] [Google Scholar]
  63. Ganong WF. Phonetic categorization in auditory word recognition. Journal of Experimental Psychology. 1980;6(1):110–125. doi: 10.1037//0096-1523.6.1.110. [DOI] [PubMed] [Google Scholar]
  64. Gathercole SE, Martin AJ. Interactive processes in phonological memory. In: Gathercole SE, editor. Models of short-term memory. Psychology Press; Hove: 1996. pp. 73–100. [Google Scholar]
  65. Gathercole S, Willis C, Emisle H, Baddeley AD. The influences of number of syllables and wordlikeness on children’s repetition of nonwords. Applied Psycholinguistics. 1991;12(3):349–367. [Google Scholar]
  66. Gaskell MG, Dumay N. Lexical competition and the acquisition of novel words. Cognition. 2003;89(2):105–132. doi: 10.1016/s0010-0277(03)00070-2. [DOI] [PubMed] [Google Scholar]
  67. Gaskell MG, Marslen-Wilson WD. Representation and competition in the perception of spoken words. Cognitive Psychology. 2002;45(2):220–266. doi: 10.1016/s0010-0285(02)00003-8. [DOI] [PubMed] [Google Scholar]
  68. Gennari SP, MacDonald MC, Postle BR, Seidenberg MS. Context-dependent interpretation of words: Evidence for interactive neural processes. NeuroImage. 2007;35(3):1278–1286. doi: 10.1016/j.neuroimage.2007.01.015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  69. Giraud A-L, Kleinschmidt A, Poeppel D, Lund TE, Frackowiak RSJ, Laufs H. Endogenonous cortical rhythms determine cerebral specialization for speech perception and production. Neuron. 2007;56(6):1127–11134. doi: 10.1016/j.neuron.2007.09.038. [DOI] [PubMed] [Google Scholar]
  70. Goldrick M, Rapp B. Lexical and post-lexical phonological representations in spoken production. Cognition. 2007;102(2):219–260. doi: 10.1016/j.cognition.2005.12.010. [DOI] [PubMed] [Google Scholar]
  71. Goldsmith J. Autosegmental and metrical phonology. Blackwell; Basil: 1990. [Google Scholar]
  72. Goldstein K. Language and language disturbances. Grune & Stratton; New York: 1948. [Google Scholar]
  73. Goldinger SD, Azuma T. Puzzle-solving science: The quixotic quest for units in speech perception. Journal of Phonetics. 2003;31(3-4):305–320. doi: 10.1016/S0095-4470(03)00030-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
  74. Gorno-Tempini ML, Dronkers NF, Rankin KP, Ogar JM, La Phengrasamy BA, Rosen HJ, Johnson JK, Weiner MW, Miller BL. Cognition and anatomy in three variants of primary progressive aphasia. Annals of Neurology. 2004;55(3):335–346. doi: 10.1002/ana.10825. [DOI] [PMC free article] [PubMed] [Google Scholar]
  75. Gow DW, Caplan DC. An examination of impair ed acoustic-phonetic processing in aphasia. Brain and Language. 1996;52(2):386–407. doi: 10.1006/brln.1996.0019. [DOI] [PubMed] [Google Scholar]
  76. Gow DW, Gordon PC. Lexical and prelexical influences on word segmentation: Evidence from priming. Journal of Experimental Psychology: Human Perception and Performance. 1995;21(2):344–359. doi: 10.1037//0096-1523.21.2.344. [DOI] [PubMed] [Google Scholar]
  77. Gow DW, Segawa JA, Alfhors S, Lin F-H. Lexical influences on speech perception: A Granger causality analysis of MEG and EEG source estimates. NeuroImage. 2008;43(3):614–623. doi: 10.1016/j.neuroimage.2008.07.027. [DOI] [PMC free article] [PubMed] [Google Scholar]
  78. Gow DW, Segawa JA. Articulatory mediation of speech perception: A causal analysis of multi-modal imaging data. Cognition. 2009;110(2):222–236. doi: 10.1016/j.cognition.2008.11.011. [DOI] [PubMed] [Google Scholar]
  79. Graves WW, Grabowski TJ, Mehta S, Gordon JK. A neural signature of phonological access: Distinguishing the effects of word frequency from familiarity and length in overt picture naming. Journal of Cognitive Neuroscience. 2007;19(4):617–631. doi: 10.1162/jocn.2007.19.4.617. [DOI] [PubMed] [Google Scholar]
  80. Green DW, Crinion J, Price CJ. Exploring cross-linguistic vocabulary effects on brain structure using voxel-based morphometry. Bilingualism. 2007;10(2):189–199. doi: 10.1017/s1366728907002933. [DOI] [PMC free article] [PubMed] [Google Scholar]
  81. Griffiths TD, Buchel C, Frackowiak RS, Patterson RD. Analysis of temporal structure in sound by the human brain. Nature Neuroscience. 1998;1(5):422–427. doi: 10.1038/1637. [DOI] [PubMed] [Google Scholar]
  82. Grosvald M. Long-distance coarticulation in spoken and signed language: An overview. Language and Linguistics Compass. 2010;4/6:348–362. [Google Scholar]
  83. Grodzinsky Y, Friederici AD. Neuroimaging of syntax and syntactic processing. Current Opinion ion Neurobiology. 2006;16(2):240–246. doi: 10.1016/j.conb.2006.03.007. [DOI] [PubMed] [Google Scholar]
  84. Hagoort P, Indefrey P, Brown C, Herzog H, Steinmetz H, Seitz RJ. The neural circuitry involved in the reading of German words and pseudowords: A PET study. Journal of Cognitive Neuroscience. 1999;11(4):383–398. doi: 10.1162/089892999563490. [DOI] [PubMed] [Google Scholar]
  85. Hale M, Ross C. The phonological enterprise. Oxford University Press; Oxford: 2008. [Google Scholar]
  86. Hall DA, Riddoch MJ. Word meaning deafness: Spelling words that are not understood. Cognitive Neuropsychology. 1997;14(8):1131–1164. [Google Scholar]
  87. Henson RN. Neuroimaging studies of priming. Progress in Neurobiology. 2003;70(1):53–81. doi: 10.1016/s0301-0082(03)00086-8. [DOI] [PubMed] [Google Scholar]
  88. Hickok G. [Retrieved March 11, 2010];No mirror neurons: Who’s stuff do I have to read this week. 2010 from http://www.talkingbrains.org. [Google Scholar]
  89. Hickok G, Buchsbaum B, Humphries C, Muftuler T. Auditory-motor interaction revealed by fMRI: Speech, music, and working memory in area Spt. Journal of Cognitive Neuroscience. 2003;15(5):673–682. doi: 10.1162/089892903322307393. [DOI] [PubMed] [Google Scholar]
  90. Hickok G, Poeppel D. Towards a functional neuroanatomy of speech perception. Trends in Cognitive Science. 2000;4(1):131–138. doi: 10.1016/s1364-6613(00)01463-7. [DOI] [PubMed] [Google Scholar]
  91. Hickok G, Poeppel D. Dorsal and ventral streams: A framework for understanding aspects of the functional anatomy of language. Cognition. 2004;92(1-2):67–99. doi: 10.1016/j.cognition.2003.10.011. [DOI] [PubMed] [Google Scholar]
  92. Hickok G, Poeppel D. The cortical organization of speech processing. Nature Reviews Neuroscience. 2007;8(5):393–402. doi: 10.1038/nrn2113. [DOI] [PubMed] [Google Scholar]
  93. Hickok G, Rogalsky C. What does Broca’s area activation to sentences reflect? 2011. pp. 2629–2631. [DOI] [PubMed]
  94. Hinton GE, McClelland JL, Rumelhart DE. Distributed representations. In: McClelland JL, Rumelhart DE, editors. Parallel distributed processing: Explorations in the microstructure of cognition. MIT Press, Bradford Books; Cambridge, MA: 1986. pp. 77–109. [Google Scholar]
  95. Howard MA, Volkov IO, Mirsky R, Garell PC, Noh MD, Granner M, Damasio H, Steinschneider M, Reale RA, Hind JE, Brugge JF. Auditory cortex on the human posterior superior temporal gyrus. Journal of Comparative Neurology. 2000;416(1):79–92. doi: 10.1002/(sici)1096-9861(20000103)416:1<79::aid-cne6>3.0.co;2-2. [DOI] [PubMed] [Google Scholar]
  96. Humphries C, Love T, Swinney D, Hickok G. Response of anterior temporal cortex to syntactic and prosodic manipulations during sentence processing. Human Brain Mapping. 2005;26(2):128–138. doi: 10.1002/hbm.20148. [DOI] [PMC free article] [PubMed] [Google Scholar]
  97. Jackendoff RS. Foundations of language: Brain, meaning grammar, and evolution. Oxford University Press; 2002. [DOI] [PubMed] [Google Scholar]
  98. Jeffries E, Patterson K, Lambon Ralph MA. Deficits of knowledge versus executive control in semantic cognition: Insights from cued naming. Neuropsychologia. 2004;46(2):649–658. doi: 10.1016/j.neuropsychologia.2007.09.007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  99. Joanisse MF, Seidenberg MA. Imaging the past: Neural activation in frontal and temporal regions during regular and irregular past tense processing. Cognitive. 2005;5(3):282–296. doi: 10.3758/cabn.5.3.282. [DOI] [PubMed] [Google Scholar]
  100. Kalm K, Davis MH, Norris D. Neural mechanisms underlying the grouping effect in short-term memory. Human Brain Mapping. doi: 10.1002/hbm.21308. (in press) [DOI] [PMC free article] [PubMed] [Google Scholar]
  101. Kenstowicz M. Phonology in generative grammar. MIT Press, Bradford Books; Cambridge, MA: 1994. [Google Scholar]
  102. Kohn SE, Smith KL. Distinctions between two phonological output deficits. Applied Psycholinguistics. 1994;15(1):75–95. [Google Scholar]
  103. Kotz SA, Cappa CF, von Cramon DY, Friederici AD. Modulation of the lexical semantic network by auditory semantic-priming: An event-related functional MRI study. NeuroImage. 2002;17(4):1761–1772. doi: 10.1006/nimg.2002.1316. [DOI] [PubMed] [Google Scholar]
  104. Knobel M, Finkbeiner M, Caramazza A. The many places of frequency: Evidence for a novel locus of the frequency effect in word production. Cognitive Neuropsychology. 2008;25(2):256–286. doi: 10.1080/02643290701502425. [DOI] [PubMed] [Google Scholar]
  105. Lee HL, Devlin JT, Shakeshaft C, Stewart LH, Brennan A, Glensman J, Pitcher K, Crinion J, Mechelli A, Frackowiak RSJ, Green DW, Price CJ. Anatomical traces of vocabulary acquisition in the adolescent brain. Journal of Neuroscience. 2007;27(5):1184–1189. doi: 10.1523/JNEUROSCI.4442-06.2007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  106. Levelt WJM, Roelofs A, Meyer AS. A theory of lexical access in speech production. Behavioral and Brain Sciences. 1999;22(1):1–75. doi: 10.1017/s0140525x99001776. [DOI] [PubMed] [Google Scholar]
  107. Lewis RL, Vasishth S, van Dyke JA. Computational principles of working memory in sentence comprehension. Trends in Cognitive Sciences. 2006;10(10):447–454. doi: 10.1016/j.tics.2006.08.007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  108. Luce PA, Large NR. Phonotactics, density, and entropy in spoken word recognition. Language and Cognitive Processes. 2001;16(5/6):565–581. [Google Scholar]
  109. Luce PA, Pisoni DB. Recognizing spoken words: The neighborhood activation model. Ear and Hearing. 1998;19(1):1–36. doi: 10.1097/00003446-199802000-00001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  110. Majerus S, Can der Linden M, Collette F, Laureys S, Poncelet M, Degueldre C, Delfiore G, Luxon A, Salmon E. Modulation of brain activity during phonological familiarization. Brain and Language. 2005;92(3):320–331. doi: 10.1016/j.bandl.2004.07.003. [DOI] [PubMed] [Google Scholar]
  111. Makris N, Meyer JW, Bates JF, Yeterian EH, Kennedy DN, Caviness VS. MRI-Based topographic parcellation of human cerebral white matter and nuclei II. Rationale and applications with systematics of cerebral connectivity. NeuroImage. 1999;9(1):18–45. doi: 10.1006/nimg.1998.0384. [DOI] [PubMed] [Google Scholar]
  112. Makris N, Pandya DN. The extreme capsule in humans and rethinking of the language circuitry. Brain Structure and Function. 2008;213(3):343–358. doi: 10.1007/s00429-008-0199-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
113. Makris N, Papadimitriou GM, Kaiser JR, Sorg S, Kennedy DN, Pandya DN. Delineation of the middle longitudinal fascicle in humans: A quantitative, in vivo, DT-MRI study. Cerebral Cortex. 2009;19(4):777–785. doi: 10.1093/cercor/bhn124.
114. Mandonnet E, Nouet A, Gatignol P, Capelle L, Duffau H. Does the left inferior longitudinal fasciculus play a role in language? A brain stimulation study. Brain. 2007;130(3):623–629. doi: 10.1093/brain/awl361.
115. Marslen-Wilson WD. Functional parallelism in spoken word-recognition. Cognition. 1987;25(1-2):71–102. doi: 10.1016/0010-0277(87)90005-9.
116. Marslen-Wilson WD, Tyler LK. Dissociating types of mental computation. Nature. 1997;387(6633):592–594. doi: 10.1038/42456.
117. Martin A. The representation of object concepts in the brain. Annual Review of Psychology. 2007;58(1):25–45. doi: 10.1146/annurev.psych.57.102904.190143.
118. Martin JG, Bunnell HT. Perception of anticipatory coarticulation effects in vowel-stop consonant-stop sequences. Journal of Experimental Psychology: Human Perception and Performance. 1982;8(3):473–488. doi: 10.1037//0096-1523.8.3.473.
119. McClelland JL, Elman JL. Interactive processes in speech recognition: The TRACE model. In: McClelland JL, Rumelhart DE, editors. Parallel distributed processing: Explorations in the microstructure of cognition. MIT Press, Bradford Books; Cambridge, MA: 1986. pp. 58–121.
120. McClelland JL, Patterson K. Rules or connections in past-tense inflections: What does the evidence rule out? Trends in Cognitive Sciences. 2002;6(11):465–472. doi: 10.1016/s1364-6613(02)01993-9.
121. McClelland JL, Rumelhart DE. On learning the past tenses of English verbs. In: McClelland JL, Rumelhart DE, editors. Parallel distributed processing: Explorations in the microstructure of cognition. MIT Press, Bradford Books; Cambridge, MA: 1986. pp. 216–271.
122. Mechelli A, Crinion JT, Noppeney U, O’Doherty J, Ashburner J, Frackowiak RSJ, Price CJ. Neurolinguistics: Structural plasticity in the bilingual brain. Nature. 2004;431(7010):757. doi: 10.1038/431757a.
123. Merriman WE, Bowman LL. The mutual exclusivity bias in children’s word learning. Monographs of the Society for Research in Child Development. 1989;54(3-4):1–132.
124. Miceli G, Gainotti G, Caltagirone C, Masullo C. Some aspects of phonological impairment in aphasia. Brain and Language. 1980;11(1):159–169. doi: 10.1016/0093-934x(80)90117-0.
125. Miceli G, Turriziani P, Caltagirone C, Capasso R, Tomaiuolo F, Caramazza A. The neural correlates of grammatical gender: An fMRI investigation. Journal of Cognitive Neuroscience. 2002;14(4):618–628. doi: 10.1162/08989290260045855.
126. Miller GA. The science of words. Scientific American Library; New York: 1991.
127. Misiurski C, Blumstein SE, Rissman J, Berman D. The role of lexical competition and acoustic-phonetic structure in lexical processing: Evidence from normal subjects and aphasic patients. Brain and Language. 2005;93(1):64–78. doi: 10.1016/j.bandl.2004.08.001.
128. Morais J, Cary L, Alegria J, Bertelson P. Does awareness of speech as a sequence of phones arise spontaneously? Cognition. 1979;7(4):323–331.
129. Morton J. Interaction of information in word recognition. Psychological Review. 1969;76(1):165–178.
130. Mummery CJ, Patterson K, Price CJ, Ashburner J, Frackowiak RSJ, Hodges JR. A voxel-based morphometry study of semantic dementia: Relationship between temporal lobe atrophy and semantic memory. Annals of Neurology. 2000;47(1):36–45.
131. Munson B. Lexical access, representation, and vowel production. In: Cole J, Hualde JI, editors. Laboratory phonology. Vol. 9. Mouton de Gruyter; Berlin: 2007. pp. 201–227.
132. Munson B, Solomon NP. The influence of phonological neighborhood density on vowel articulation. Journal of Speech, Language, and Hearing Research. 2004;47(5):1048–1058. doi: 10.1044/1092-4388(2004/078).
133. Myers EB, Blumstein SE. The neural bases of the lexical effect: An fMRI investigation. Cerebral Cortex. 2008;18(2):278–288. doi: 10.1093/cercor/bhm053.
134. Newman SD, Twieg D. Differences in auditory processing of words and pseudowords: An fMRI study. Human Brain Mapping. 2001;14(1):39–47. doi: 10.1002/hbm.1040.
135. Norris D. Shortlist: A connectionist model of continuous speech recognition. Cognition. 1994;52(3):189–234.
136. Norris D, Cutler A, McQueen JM, Butterfield S. Phonological and conceptual activation in speech comprehension. Cognitive Psychology. 2006;53(1):146–193. doi: 10.1016/j.cogpsych.2006.03.001.
137. Norris D, McQueen JM, Cutler A. Merging information in speech recognition: Feedback is never necessary. Behavioral and Brain Sciences. 2000;23(3):299–370. doi: 10.1017/s0140525x00003241.
138. Norris D, McQueen JM, Cutler A. Perceptual learning in speech. Cognitive Psychology. 2003;47(2):204–238. doi: 10.1016/s0010-0285(03)00006-9.
139. Notoya M, Suzuki S, Kurachi M. Repetition performance in conduction aphasia. Brain and Nerve. 1982;34(5):499–508.
140. Orfanidou E, Marslen-Wilson WD, Davis MH. Neural response suppression predicts repetition priming of spoken words and pseudowords. Journal of Cognitive Neuroscience. 2006;18(8):1237–1252. doi: 10.1162/jocn.2006.18.8.1237.
141. Parks R, Ray J, Bland S. Wordsmyth English dictionary-Thesaurus. University of Chicago; 1998. http://www.wordsmyth.net/
142. Patterson K, Nestor PJ, Rogers TT. Where do you know what you know? The representation of semantic knowledge in the human brain. Nature Reviews Neuroscience. 2007;8(12):976–987. doi: 10.1038/nrn2277.
143. Paulesu E, Frith CD, Frackowiak RSJ. The neural correlates of the verbal component of working memory. Nature. 1993;362(6418):342–345. doi: 10.1038/362342a0.
144. Peramunage D, Blumstein SE, Myers EB, Goldrick M, Baese-Berk M. Phonological neighborhood effects in spoken word production: An fMRI study. Journal of Cognitive Neuroscience. 2011;23(3):593–603. doi: 10.1162/jocn.2010.21489.
145. Petrides M, Pandya DN. Association fiber pathways to the frontal cortex from the superior temporal region in the rhesus monkey. Journal of Comparative Neurology. 1988;273(1):52–66. doi: 10.1002/cne.902730106.
146. Pitt MA, Samuel AG. An empirical and meta-analytic evaluation of the phoneme identification task. Journal of Experimental Psychology: Human Perception and Performance. 1993;19(4):699–725. doi: 10.1037//0096-1523.19.4.699.
147. Pinker S. Words and rules: The ingredients of language. Basic Books; New York: 1999.
148. Plaut DC, McClelland JL. Locating object knowledge in the brain: Comment on Bowers’s (2009) attempt to revive the grandmother cell hypothesis. Psychological Review. 2010;117(1):284–288. doi: 10.1037/a0017101.
149. Plaut DC, McClelland JL, Seidenberg MS, Patterson K. Understanding normal and impaired word reading: Computational principles in quasi-regular domains. Psychological Review. 1996;103(1):56–115. doi: 10.1037/0033-295x.103.1.56.
150. Pluymaekers M, Ernestus M, Baayen R. Lexical frequency and acoustic reduction in spoken Dutch. Journal of the Acoustical Society of America. 2005;118(4):2561–2569. doi: 10.1121/1.2011150.
151. Pisoni DB. Word identification in noise. Language and Cognitive Processes. 1996;11(6):681–687. doi: 10.1080/016909696387097.
152. Poeppel D. The analysis of speech in different temporal integration windows: cerebral lateralization as ‘asymmetric sampling in time’. Speech Communication. 2003;41(1):245–255.
153. Prabhakaran R, Blumstein SE, Myers EB, Hutchinson E, Britton B. An event-related fMRI investigation of phonological-lexical competition. Neuropsychologia. 2006;44(12):2209–2221. doi: 10.1016/j.neuropsychologia.2006.05.025.
154. Price CJ, Wise RJ, Frackowiak RS. Demonstrating the implicit processing of visually presented words and pseudowords. Cerebral Cortex. 1996;6(1):62–70. doi: 10.1093/cercor/6.1.62.
155. Pustejovsky J. The generative lexicon. MIT Press; Cambridge, MA: 1996.
156. Raettig T, Kotz SA. Auditory processing of different types of pseudowords: An event-related fMRI study. NeuroImage. 2008;39(3):1420–1428. doi: 10.1016/j.neuroimage.2007.09.030.
157. Raizada RD, Poldrack RA. Selective amplification of stimulus differences during categorical perception of speech. Neuron. 2007;56(4):726–740. doi: 10.1016/j.neuron.2007.11.001.
158. Rauschecker JP, Scott SK. Maps and streams in the auditory cortex: Nonhuman primates illuminate human speech processing. Nature Neuroscience. 2009;12(6):718–724. doi: 10.1038/nn.2331.
159. Read CA, Zhang Y, Nie H, Ding B. The ability to manipulate speech sounds depends on knowing alphabetic writing. Cognition. 1986;24(1):31–44. doi: 10.1016/0010-0277(86)90003-x.
160. Richardson FM, Thomas MSC, Filippi R, Harth H, Price CJ. Contrasting effects of vocabulary knowledge on temporal and parietal brain structure across lifespan. Journal of Cognitive Neuroscience. 2010;22(5):943–954. doi: 10.1162/jocn.2009.21238.
161. Righi G, Blumstein SE, Mertus J, Worden MS. Neural systems underlying lexical competition: An eyetracking and fMRI study. Journal of Cognitive Neuroscience. 2010;22(2):213–224. doi: 10.1162/jocn.2009.21200.
162. Rissman J, Eliassen JC, Blumstein SE. An event-related fMRI investigation of implicit semantic priming. Journal of Cognitive Neuroscience. 2003;15(8):1160–1175. doi: 10.1162/089892903322598120.
163. Rodd JM, Davis MH, Johnsrude IS. The neural mechanisms of speech comprehension: fMRI studies of semantic ambiguity. Cerebral Cortex. 2005;15(8):1261–1269. doi: 10.1093/cercor/bhi009.
164. Rogers TT, Hocking J, Noppeney U, Mechelli A, Gorno-Tempini ML, Patterson K, Price CJ. Anterior temporal cortex and semantic memory: Reconciling findings from neuropsychology and functional imaging. Cognitive, Affective and Behavioral Neuroscience. 2006;6(3):201–213. doi: 10.3758/cabn.6.3.201.
165. Romani C, Galluzzi C, Olson A. Phonological-lexical activation: A lexical component or an output buffer? Evidence from aphasic errors. Cortex. 2011;47(2):217–235. doi: 10.1016/j.cortex.2009.11.004.
166. Romanski LM, Tian B, Fritz J, Mishkin M, Goldman-Rakic PS, Rauschecker JP. Dual streams of auditory afferents target multiple domains in the primate prefrontal cortex. Nature Neuroscience. 1999;2(12):1131–1136. doi: 10.1038/16056.
167. Roy AC, Craighero L, Fabbri-Destro M, Fadiga L. Phonological and lexical motor facilitation during speech listening: A transcranial magnetic stimulation study. Journal of Physiology Paris. 2008;102(1-3):101–105. doi: 10.1016/j.jphysparis.2008.03.006.
168. Ruff I, Blumstein SE, Myers EB, Hutchinson E. Recruitment of anterior and posterior structures in lexical-semantic processing: An fMRI study comparing implicit and explicit tasks. Brain and Language. 2008;105(1):41–49. doi: 10.1016/j.bandl.2008.01.003.
169. Rugg MD. Event-related brain potentials dissociate repetition effects of high- and low-frequency words. Memory and Cognition. 1990;18(4):367–379. doi: 10.3758/bf03197126.
170. Rumelhart DE. Some problems with the notion that words have literal meanings. In: Ortony A, editor. Metaphor and thought. Cambridge University Press; Cambridge: 1979. pp. 71–82.
171. Sabri M, Binder JR, Desai R, Medler DA, Leitl MD, Liebenthal E. Attentional and linguistic interactions in speech perception. NeuroImage. 2008;39(3):1444–1456. doi: 10.1016/j.neuroimage.2007.09.052.
172. Saito A, Yoshimura T, Itakura T, Lambon Ralph MA. Demonstrating a wordlikeness effect on nonword repetition performance in a conduction aphasia patient. Brain and Language. 2003;85(2):222–230. doi: 10.1016/s0093-934x(02)00589-8.
173. Sass K, Krach S, Sachs O, Kircher T. Lion-tiger-stripes: Neural correlates of indirect semantic priming across processing modalities. NeuroImage. 2009;45(1):224–236. doi: 10.1016/j.neuroimage.2008.10.014.
174. Saur D, Kreher BW, Schnell S, Kümmerer D, Kellmeyer P, Vry M-S, Umarova R, Musso M, Glauche V, Abel S, Huber W, Rijntjes M, Hennig J, Weiller C. Ventral and dorsal pathways for language. Proceedings of the National Academy of Sciences of the United States of America. 2008;105(46):18035–18040. doi: 10.1073/pnas.0805234105.
175. Schilling HEH, Rayner K, Chumbley JI. Comparing naming, lexical decision, and eye fixation times: Word frequency effects and individual differences. Memory & Cognition. 1998;26:1270–1281. doi: 10.3758/bf03201199.
176. Scott SK. Auditory processing – speech, space and auditory objects. Current Opinion in Neurobiology. 2005;15(2):197–201. doi: 10.1016/j.conb.2005.03.009.
177. Scott SK, Blank CC, Rosen S, Wise RJ. Identification of a pathway for intelligible speech in the left temporal lobe. Brain. 2000;123(12):2400–2406. doi: 10.1093/brain/123.12.2400.
178. Scott SK, Wise RJS. The functional neuroanatomy of prelexical processing in speech perception. Cognition. 2004;92(1-2):13–45. doi: 10.1016/j.cognition.2002.12.002.
179. Seidenberg MS, McClelland JL. A distributed, developmental model of word recognition and naming. Psychological Review. 1989;96(4):523–568. doi: 10.1037/0033-295x.96.4.523.
180. Shallice T, Rumiati RI, Zadini A. The selective impairment of the phonological output buffer. Cognitive Neuropsychology. 2000;17(6):517–546. doi: 10.1080/02643290050110638.
181. Shallice T, Warrington EK. Independent functioning of verbal memory stores: A neuropsychological study. Quarterly Journal of Experimental Psychology. 1970;22(2):261–273. doi: 10.1080/00335557043000203.
182. Shallice T, Warrington EK. Auditory-verbal short-term memory impairment and conduction aphasia. Brain and Language. 1977;4(4):479–491. doi: 10.1016/0093-934x(77)90040-2.
183. Shattuck-Hufnagel S. Speech errors as evidence for a serial-order mechanism in sentence production. In: Cooper WE, Walker ECT, editors. Sentence processing: Psycholinguistic studies presented to Merrill Garrett. Lawrence Erlbaum; Hillsdale, NJ: 1979. pp. 295–342.
184. Snijders TM, Petersson KM, Hagoort P. Effective connectivity of cortical and subcortical regions during unification of sentence structure. NeuroImage. 2010;52(4):1633–1644. doi: 10.1016/j.neuroimage.2010.05.035.
185. Snijders TM, Vosse T, Kempen G, van Berkum JA, Petersson KM, Hagoort P. Retrieval and unification of syntactic structure in sentence comprehension: An fMRI study using word-category ambiguity. Cerebral Cortex. 2009;19(7):1493–1503. doi: 10.1093/cercor/bhn187.
186. Stamatakis EA, Marslen-Wilson WD, Tyler LK, Fletcher PC. Cingulate control of fronto-temporal integration reflects linguistic demands: A three way interaction in functional connectivity. NeuroImage. 2005;28(1):115–121. doi: 10.1016/j.neuroimage.2005.06.012.
187. Strub RL, Gardner H. The repetition deficit in conduction aphasia: Mnestic or linguistic? Brain and Language. 1974;1(3):241–255.
188. Swinney DA. Lexical access during sentence comprehension: (Re)consideration of context effects. Journal of Verbal Learning and Verbal Behavior. 1979;18(6):645–659.
189. Thierry G, Giraud AL, Price C. Hemispheric dissociation in access to the human semantic system. Neuron. 2003;38(3):499–506. doi: 10.1016/s0896-6273(03)00199-5.
190. Thompson-Schill SL, D’Esposito M, Aguirre GK, Farah MJ. Role of left inferior prefrontal cortex in retrieval of semantic knowledge: A reevaluation. Proceedings of the National Academy of Sciences of the United States of America. 1997;94(26):14792–14797. doi: 10.1073/pnas.94.26.14792.
191. Turkeltaub PE, Coslett HB. Localization of sublexical speech components. Brain and Language. 2010;114(1):1–15. doi: 10.1016/j.bandl.2010.03.008.
192. Tyler LK, Randall B, Marslen-Wilson WD. Phonology and neuropsychology of the English past tense. Neuropsychologia. 2002;40(8):1154–1166. doi: 10.1016/s0028-3932(01)00232-9.
193. Tyler LK, Stamatakis EA, Bright P, Acres K, Abdallah S, Rodd JM, Moss HE. Processing objects at different levels of specificity. Journal of Cognitive Neuroscience. 2004;16(3):351–362. doi: 10.1162/089892904322926692.
194. Tyler LK, Stamatakis EA, Post B, Randall B, Marslen-Wilson W. Temporal and frontal systems in speech comprehension: An fMRI study of past tense processing. Neuropsychologia. 2005;43(13):1963–1974. doi: 10.1016/j.neuropsychologia.2005.03.008.
195. Ullman MT, Pancheva R, Love T, Yee E, Swinney D, Hickok G. Neural correlates of lexicon and grammar: Evidence from the production, reading and judgment of inflection in aphasia. Brain and Language. 2005;93(2):185–238. doi: 10.1016/j.bandl.2004.10.001.
196. Ungerleider LG, Mishkin M. Two cortical visual systems. In: Ingle DJ, Goodale MA, Mansfield RJW, editors. Analysis of visual behavior. MIT Press; Cambridge, MA: 1982. pp. 549–586.
197. Vallar G, Baddeley A. Phonological short-term store, phonological processing, and sentence comprehension: A neuropsychological case study. Cognitive Neuropsychology. 1984;1(1):121–141.
198. Vigneau M, Jobard G, Mazoyer B, Tzourio-Mazoyer N. Word and non-word reading: What role for the visual word form area? NeuroImage. 2005;27(3):694–705. doi: 10.1016/j.neuroimage.2005.04.038.
199. Visser M, Jefferies E, Lambon Ralph MA. Semantic processing in the anterior temporal lobes: A meta-analysis of the functional neuroimaging literature. Journal of Cognitive Neuroscience. 2009;22(6):1083–1094. doi: 10.1162/jocn.2009.21309.
200. Vitevitch MS, Luce PA. When words compete: Levels of processing in spoken word recognition. Psychological Science. 1998;9(4):325–329.
201. Vitevitch MS, Luce PA. Probabilistic phonotactics and neighborhood activation in spoken word recognition. Journal of Memory and Language. 1999;40(3):374–408.
202. Vitevitch MS, Luce PA. Increases in phonotactic probability facilitate spoken nonword repetition. Journal of Memory and Language. 2005;52(2):193–204.
203. Wagner AD, Paré-Blagoev EJ, Clark J, Poldrack RA. Recovering meaning: Left prefrontal cortex guides controlled semantic retrieval. Neuron. 2001;31(2):329–338. doi: 10.1016/s0896-6273(01)00359-2.
204. Warren RM. Auditory perception: A new analysis and synthesis. 2nd ed. Cambridge University Press; Cambridge: 1992.
205. Warren RM, Bashford JA, Gardner DA. Tweaking the lexicon: Organization of vowel sequences into words. Perception and Psychophysics. 1990;47(5):423–432. doi: 10.3758/bf03208175.
206. Warren JE, Wise RJ, Warren JD. Sounds do-able: Auditory-motor transformations and the posterior temporal plane. Trends in Neurosciences. 2005;28(12):636–643. doi: 10.1016/j.tins.2005.09.010.
207. Wernicke C. The symptom complex of aphasia: A psychological study on an anatomical basis. In: Cohen RS, Wartofsky MW, editors. Studies in the philosophy of science. D. Reidel; Dordrecht: 1874/1969. pp. 34–97.
208. Wise RJS. Language systems in normal and aphasic human subjects: Functional imaging studies and inferences from animal studies. British Medical Bulletin. 2003;65(1):95–119. doi: 10.1093/bmb/65.1.95.
209. Wise RJS, Scott SK, Blank SC, Mummery CJ, Murphy K, Warburton EA. Separate neural sub-systems within “Wernicke’s area”. Brain. 2001;124(1):83–95. doi: 10.1093/brain/124.1.83.
210. Wolmetz M, Rapp B, Poeppel D. Investigating the phonemic categorization capacity of the right hemisphere: A case study. Brain and Language. 2007;103(1):8–249.
211. Wright RA. Factors of lexical competition in vowel articulation. In: Local JJ, Ogden R, Temple R, editors. Papers in laboratory phonology VI. Cambridge University Press; Cambridge: 2004. pp. 75–87.
212. Xiao Z, Zhang JX, Wang X, Wu R, Hu X, Weng X, Tan LH. Differential activity in left inferior frontal gyrus for pseudowords and real words: An event-related fMRI study on auditory lexical decision. Human Brain Mapping. 2005;25(2):212–221. doi: 10.1002/hbm.20105.
213. Yamadori A, Ikamura G. Central or conduction aphasia in a Japanese patient. Cortex. 1975;11(1):73–82. doi: 10.1016/s0010-9452(75)80022-0.
214. Yokoyama S, Miyamoto T, Riera J, Kim J, Akitsuki Y, Iwata K, Yoshimoto K, Horie K, Sato S, Kawashima R. Cortical mechanisms involved in the processing of verbs: An fMRI study. Journal of Cognitive Neuroscience. 2006;18(8):1304–1313. doi: 10.1162/jocn.2006.18.8.1304.
215. Zempleni MZ, Renken R, Hoeks JCJ, Hoogduin JM, Stowe LA. Semantic ambiguity processing in sentence comprehension: Evidence from event-related fMRI. NeuroImage. 2007;34(3):1270–1279. doi: 10.1016/j.neuroimage.2006.09.048.
216. Zevin JD, McCandliss BD. Dishabituation of the BOLD response to speech sounds. Behavioral and Brain Functions. 2005;1(1):4. doi: 10.1186/1744-9081-1-4.
217. de Zubicaray GI, Wilson SJ, McMahon KL, Muthiah S. The semantic interference effect in the picture-word paradigm: An event-related fMRI study employing overt responses. Human Brain Mapping. 2001;14(4):218–227. doi: 10.1002/hbm.1054.
