Author manuscript; available in PMC: 2024 Jan 1.
Published in final edited form as: Cognition. 2022 Nov 10;230:105322. doi: 10.1016/j.cognition.2022.105322

Neural evidence suggests phonological acceptability judgments reflect similarity, not constraint evaluation

Enes Avcu 1, Olivia Newman 1, Seppo P Ahlfors 2,3, David W Gow Jr 1,2,4,5
PMCID: PMC9712273  NIHMSID: NIHMS1848918  PMID: 36370613

Abstract

Acceptability judgments are a primary source of evidence in formal linguistic research. Within the generative linguistic tradition, these judgments are attributed to evaluation of novel forms based on implicit knowledge of rules or constraints governing well-formedness. In the domain of phonological acceptability judgments, other factors including ease of articulation and similarity to known forms have been hypothesized to influence evaluation. We used data-driven neural techniques to identify the relative contributions of these factors. Granger causality analysis of magnetic resonance imaging (MRI)-constrained magnetoencephalography (MEG) and electroencephalography (EEG) data revealed patterns of interaction between brain regions that support explicit judgments of the phonological acceptability of spoken nonwords. Comparisons of data obtained with nonwords that varied in terms of onset consonant cluster attestation and acceptability revealed different cortical regions and effective connectivity patterns associated with phonological acceptability judgments. Attested forms produced stronger influences of brain regions implicated in lexical representation and sensorimotor simulation on acoustic-phonetic regions, whereas unattested forms produced stronger influence of phonological control mechanisms on acoustic-phonetic processing. Unacceptable forms produced widespread patterns of interaction consistent with attempted search or repair. Together, these results suggest that speakers’ phonological acceptability judgments reflect lexical and sensorimotor factors.

Keywords: Acceptability judgments, Phonology/Phonotactics, Effective connectivity, MEG/EEG, Rules, Lexical effects

1. Introduction

Judgments about the acceptability of novel words or sentences are central to the development of linguistic theory. From the outset, theorists have wrestled with the challenge of interpreting the degree to which these judgments reflect knowledge of the grammar (competence) versus domain-general cognitive processes (performance) (Chomsky, 1965). While the competence-performance distinction remains controversial (Hymes, 1992; Newmeyer, 2003), understanding the processes that support these judgments is critical for interpreting the observation that some constructions are more acceptable or interpretable than others. In the realm of syntactic theory, empirical research has demonstrated that some constructions, including center embeddings, are frequently judged unacceptable despite being considered grammatical (Chomsky & Miller, 1963), while some grammaticality illusions may be judged moderately acceptable despite being both ungrammatical and uninterpretable (Phillips et al., 2011). Indeed, systematic research (Featherston, 2007; Bader & Häussler, 2010; Gibson, Piantadosi & Fedorenko, 2011; Sprouse & Almeida, 2017) has demonstrated that sentence grammaticality judgments are influenced by a variety of factors, including cognitive, social, and biological differences among speakers, the response options, and the way test materials are created and presented (see Schütze, 2016, for a detailed review). Such work has enhanced research methods in theoretical linguistics, introduced new phenomena, and energized new areas of research and new perspectives on the interface between parsing and grammaticality.

In this paper, we examine the types of processes, representations, and dynamics that specifically influence the judgments of phonological acceptability used by theoretical phonologists. Phonological and syntactic acceptability judgments share some common characteristics simply by virtue of being metalinguistic judgment tasks (Schütze, 2016). Furthermore, both types of judgments may be predictable based on the relative frequency of different construction types across corpora or lexica (Lau et al., 2017). Despite these similarities, there are many areas in which the two types of judgments are fundamentally different. First, syntactic acceptability judgments are formally taught in schools, but that is not the case for phonological acceptability judgments; thus, phonological judgments are purely implicit. Second, sentence evaluation can be influenced by several factors, including semantic and pragmatic plausibility and working memory capacity, that do not appear to influence phonological evaluation (Schwering & MacDonald, 2020). Third, lesion-deficit and functional imaging studies suggest that sentential grammaticality and phonological acceptability judgments rely on different brain regions (Bookheimer, 2002). Fourth, novel words and novel sentences present different processing challenges: whereas novel sentences invite interpretation, novel words invite the listener to either access familiar words or acquire new ones. Fifth, perceptual and articulatory factors that play a critical role in speech processing and grounded theories of phonology do not appear to have the same significance in sentence processing or syntactic theory (Archangeli & Pulleyblank, 1994). Finally, although both phonological and sentential acceptability judgments are correlated with frequency metrics, differences in the combinatoric complexity of words versus sentences are significant enough to require different learning mechanisms for phonological versus syntactic constraints (Heinz & Idsardi, 2011, 2013).

Chomsky and Halle (1965) made acceptability judgments a cornerstone of generative phonology when they noted that acceptability judgments extend beyond the patterns of attestation or occurrence in a given language. They argued that the intuition that blik would be an acceptable English word but bnik would not might be attributable to memorization or analogy with overlapping forms (e.g., blink), whereas the intuition that bnik is more acceptable than the equally unattested form lbik requires a different kind of explanation. These observations have been followed by a broad literature affirming that acceptability judgments are gradient, apply systematically to unattested forms, and are generally reliable across speakers of a language (e.g., Albright, 2009; Berent et al., 2007; Coetzee, 2008, 2009; Goldrick, 2011; Greenberg & Jenkins, 1964; Hayes, 2000; Kawahara & Kao, 2012; Pertz & Bever, 1975; Pierrehumbert, 2002; Scholes, 1966; Shademan, 2007; Zuraw, 2000). The generative phonological theories (Halle, 1962, 1964; Chomsky & Halle, 1965, 1968; Kenstowicz, 1994; Goldsmith & Laks, 2010) that have followed these observations are premised on the notion that productive generalization of structural constraints is the central phenomenon to be explained by any linguistic theory.

Three main competing accounts have been proposed to explain the mechanisms and theoretical significance of generalization beyond language users’ training sets. Rule/constraint-based accounts (Chomsky and Halle, 1968; Smolensky and Prince, 1993) argue that acceptability judgments reflect implicit knowledge of rules or constraints governing the formation of acceptable forms (although formal phonological theory comes in many varieties, there is broad agreement that phonological knowledge is represented in some abstract form that is not articulatory, perceptual, or lexical). In contrast, associative accounts (Bybee, 2008; Evans & Levinson, 2009) attribute generalization to interactive associative mapping processes involving the lexicon. Articulatory accounts (Liberman et al., 1967; MacNeilage, 2008; Pulvermüller & Fadiga, 2010) assert that generalization is constrained by articulatory/motor considerations. The purpose of this paper is to better understand phonological generativity by determining the degree to which these accounts contribute, either individually or in combination, to phonological acceptability judgments. Specifically, we use task-related effective connectivity analyses of brain activity to identify the degree to which phonological intuitions are shaped by rules or constraints, lexical analogy, or perceived articulatory naturalness.

1.1. Rule/constraint-based, Associative, and Articulatory Accounts of Phonological Acceptability

1.1.1. Rule/constraint-based account

Rule- or constraint-based accounts of linguistic generativity suggest that all language processing references abstract rules or constraints in some form, and that explicit acceptability judgments offer a relatively direct window on their application. Generative linguistic frameworks, including classical transformational phonology (SPE) (Chomsky & Halle, 1968) and Optimality Theory (Smolensky & Prince, 1993), provide comprehensive accounts of structural regularity in patterns of attestation and in native listeners’ phonological acceptability judgments, both within and across natural languages, based on the application of abstract structural rules or constraints. Evidence from artificial grammar learning experiments showing that listeners can learn constraints on phonological patterning through exposure to novel stimuli and apply them in explicit acceptability judgments suggests that rule-based mechanisms are psychologically plausible (see Moreton and Pater, 2012a,b for a review). Critically, generative theories assume that constraints are abstracted from the lexicon, which in turn may be shaped to some degree by efficiency pressures that favor increased lexical similarity (Mahowald et al., 2018) and simplified articulation (Kawasaki & Ohala, 1980). This association between structural regularities and judgments poses potential challenges for distinguishing between the factors that shape online phonological acceptability judgments and the factors that shape generative constraints on representation.

Generative theories of cophonology suggest that rules governing the phonological realization of morphologically complex words are limited to sub-vocabularies within a language (Inkelas, 2014). Similarly, grounded theories of phonology suggest that phonological rules and the phonetic implementation of phonetic cues within a language are constrained by the need to simplify articulation and enhance perceptual contrast (Wilson, 2006; Hayes and White, 2013). Evidence from simulation studies (Reali and Griffiths, 2009) showing that even subtle online processing biases may produce strong cumulative pressure toward phonological normalization through iterated learning suggests a link between online lexical influences (at least for morphologically complex words), articulatory influences, and diachronic changes in phonological structure. In this way, online articulatory demands may shape rule systems, making it difficult to isolate the influence of articulatory ease from the influence of rules, because articulatory ease also shapes the rules a language uses. Therefore, to establish that articulatory or lexical factors independently shape acceptability judgments, it is necessary to show that these factors are operative in the absence of rule-driven mechanisms.

Disentangling these accounts based on neural evidence is further complicated by a lack of consensus on where rule- or constraint-driven processing occurs in the brain. Neural data, mostly from functional MRI (fMRI) blood oxygen level dependent (BOLD) imaging studies, have implicated the left inferior frontal gyrus (LIFG) in the learning and use of abstract rules related to perceptual categorization, motor sequence learning, and language-like (grammatical) processing involving structured sequence processing (Strange et al., 2001; Opitz & Friederici, 2003; Musso et al., 2003; Lieberman et al., 2004; Fitch & Friederici, 2012; Uddén & Bahlman, 2012). It is not clear what role the LIFG plays in these phenomena. The LIFG consists of three cytoarchitectonically and functionally distinct structures: pars opercularis (BA44), pars triangularis (BA45), and pars orbitalis (BA47) (Hagoort, 2005; Lemaire et al., 2013; Bernal et al., 2015; Ardila, Bernal and Rosselli, 2017). Pars opercularis has motor and phonetic functions (Amunts et al., 2004; Heim et al., 2005, 2009); pars triangularis has been implicated in a language-specific working memory system and in retrieval/attention operations (Hagoort, 2005; Thompson-Schill, Bedny & Goldberg, 2005; Grodzinsky & Amunts, 2006; Matchin, 2018); and pars orbitalis (BA47) is similarly implicated in semantic retrieval and control (Conner et al., 2019; Becker et al., 2020; Jackson, 2020). These findings raise the possibility that LIFG involvement in artificial grammar learning reflects the retrieval and maintenance of linguistic information rather than implicit knowledge of phonological rules or constraints.

1.1.2. Associative account

The associative account (Bybee, 2008; Evans & Levinson, 2009), inspired by connectionist simulation results (McClelland & Elman, 1986), attributes generalization to interactive associative mapping processes involving the lexicon, with no reference to learned or biologically conditioned rules or constraints. It has been suggested that phonotactic intuitions and repair reflect an active role of lexical knowledge in language processing (Greenberg & Jenkins, 1964; Ohala & Ohala, 1986; Frisch et al., 2000; Bailey & Hahn, 2001; Gow & Nied, 2014; Gow et al., 2021). This approach suggests that lexical influences contribute to the perceived well-formedness of novel forms. According to this account, a comparison is made between a given nonword and the existing words in the lexicon, and the acceptability judgment is shaped by how close the nonword is to an existing word, or sometimes to a “gang” of existing words with overlapping phonological structure (McClelland, 1991). Using data from the area of morphological productivity and alternations, studies employing various experimental methodologies have shown that decisions to accept nonwords are shaped by the number of similar existing words in the lexicon (Albright & Hayes, 2003; Anshen & Aronoff, 1988; Bauer, 2001; Berko, 1958; Bybee 1985, 1995, 2001; Bybee & Pardo 1981; Eddington, 1996; Ernestus & Baayen, 2003; Pierrehumbert, 2002; Plag 1999; Zuraw, 2000). Measures of phonological patterning across the lexicon, including neighborhood density (the number of familiar words created by adding, deleting, or substituting a single sound in a given word) and biphone probabilities (the relative frequency of co-occurrence of segments within words), have been shown to directly influence phonological acceptability judgments (Greenberg & Jenkins, 1964; Luce & Pisoni, 1998; Vitevitch & Luce, 1998, 2004).
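To make these lexical statistics concrete, the sketch below computes neighborhood density and biphone probabilities over a small invented toy lexicon, using letters as a stand-in for phonemic transcription (a simplification; the measures cited above are computed over phoneme strings and are often frequency-weighted):

```python
from collections import Counter

def neighbors(word, lexicon):
    """Lexicon members reachable from `word` by one substitution, deletion, or addition."""
    alphabet = {ch for w in lexicon for ch in w}
    edits = set()
    for i in range(len(word)):
        edits.add(word[:i] + word[i + 1:])                          # deletion
        edits.update(word[:i] + c + word[i + 1:] for c in alphabet)  # substitution
    for i in range(len(word) + 1):
        edits.update(word[:i] + c + word[i:] for c in alphabet)      # addition
    edits.discard(word)          # a word is not its own neighbor
    return edits & set(lexicon)

def biphone_probabilities(lexicon):
    """Relative frequency of each two-segment sequence across the lexicon."""
    counts = Counter(w[i:i + 2] for w in lexicon for i in range(len(w) - 1))
    total = sum(counts.values())
    return {bp: n / total for bp, n in counts.items()}

lexicon = {"cat", "bat", "cot", "cats", "at", "slip", "sled"}
print(len(neighbors("cat", lexicon)))   # neighborhood density of "cat" -> 4
```

Here the density of "cat" counts "bat" and "cot" (substitution), "at" (deletion), and "cats" (addition).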
The interpretation of potential lexical influences on phonotactic behavior is complicated by evidence for phonotactic frequency effects, in which items composed of common phonological sequences enjoy processing advantages over items with less common elements (Pitt & Samuel, 1995). Studies using same-different, typicality, and word-likeness judgments show that these effects are separable from global wordlikeness effects such as lexical neighborhood size (Luce & Large, 2001; Bailey & Hahn, 2001; Treiman et al., 2000; Shademan, 2007). The early connectionist TRACE model (McClelland & Elman, 1986) provides an explicit demonstration of how the lexicon might influence phonotactic processing through interactive processing. When the model is presented with a nonword whose onset cluster is ambiguous between an acceptable form (/sl-/) and an unacceptable form (/sr-/), top-down lexical influences from words that share the acceptable form (e.g., sled, slip) boost the acceptable interpretation of the form. This produces patterns of phonotactic repair that broadly parallel human behavioral results (Massaro & Cohen, 1983; Pitt, 1998) without any appeal to abstracted rules or constraints. At the extreme end of this lexical account, Bybee (2001) denied the existence of grammar altogether and attributed acceptability judgments entirely to accessibility determined by usage statistics.

Lexical knowledge, particularly word representation, has been associated with the left supramarginal gyrus (SMG) and adjacent inferior parietal regions, as well as the bilateral posterior middle temporal gyrus (pMTG) (Hickok & Poeppel, 2007). The dual lexicon model (Gow, 2012) argues that the SMG acts as a lexical interface between acoustic-phonetic and articulatory representations, and that the bilateral pMTG provides an interface between acoustic-phonetic and semantic/syntactic representations. Several functional imaging studies support this framework by demonstrating that activation in both regions is modulated by whole-word properties, including word frequency and the phonological similarity of a word to other words (Biran & Friedmann, 2005; Prabhakaran et al., 2006; Graves et al., 2007; Righi et al., 2009). This model is also supported by findings showing that damage to these two regions leads to deficits in lexico-semantic and lexico-phonological processing (Coslett et al., 1987; Axer et al., 2001). The dual stream model’s (Hickok and Poeppel, 2007) lexical interface also includes the inferior temporal sulcus (ITS), which plays a role in linking phonological and semantic information, in addition to more anterior temporal regions corresponding to the combinatorial network of language.

1.1.3. Articulatory account

The articulatory account asserts that generalization is constrained by articulatory/motor considerations. According to this account, judged acceptability is related to articulatory ease, determined by appeal to sensorimotor simulation (Liberman et al., 1967; Lakoff & Johnson, 1999; Schwartz et al., 2002; Galantucci et al., 2006; MacNeilage, 2008; Pulvermüller & Fadiga, 2010). Grounded models of phonology generally recognize articulatory effort as a factor that affects phonological systems (Archangeli & Pulleyblank, 1994; Gafos, 1999; Pierrehumbert, 2002). The question is whether articulatory factors influence online acceptability judgments directly, or indirectly by shaping abstract rules or constraints. Burani, Vallar, and Bottini (1991) found evidence of online articulatory suppression effects in speeded judgments of stress assignment and initial sound similarity, but it is unclear whether these effects reflect articulatory mediation, rule mediation, or some combination of both.

Articulatory/motor representation is associated with the primary sensorimotor cortex. Paulesu et al. (1993) found that the ventral part of the primary sensorimotor cortex is activated during rhyme judgments with orthographic prompts in the absence of overt speech. Pulvermüller et al. (2006), in an fMRI experiment, reported that listening to labial sounds (e.g., /b/) activates lip motor sites in the brain, whereas listening to coronal sounds (e.g., /t/) activates tongue-related motor areas (see also Fadiga et al., 2002). The transcranial magnetic stimulation (TMS) literature provides converging evidence: stimulation of the tongue motor region modulates perception of coronal phonemes (e.g., /t/), whereas stimulation of the lip area modulates perception of labial phonemes (e.g., /b/) (D’Ausilio et al., 2009, 2012; Möttönen & Watkins, 2009). Berent et al. (2015) and Zhao and Berent (2018) further investigated the articulatory motor region’s causal role in speech perception by asking whether sensitivity to phonological patterning (specifically to the syllable hierarchy, e.g., /bl/ is preferred over /bn/, which is in turn preferred over /lb/) requires motor simulation or is constrained by universal rules. Participants in these experiments performed tasks including syllable counting with or without articulatory suppression and identity discrimination with printed materials or background noise, while their lip motor area underwent TMS. Participants remained sensitive to the phonological constraints on the patterning of sonority when their motor areas were disrupted by TMS, regardless of suppression. These findings suggest that articulatory influences alone do not account for sensitivity to phonological patterning; for a detailed review, see Berent et al. (2015) and Zhao and Berent (2018).

1.2. Neural correlates of phonological well-formedness

In this section, we lay out the neural network behind phonological acceptability judgments and argue that various brain regions with very different functions respond to the well-formedness of phonological strings. In particular, we draw inferences from two lines of work: one that has directly investigated phonological well-formedness with acceptability judgments, and another that has investigated it indirectly, using tasks in which phonological or phonotactic regularities influence speech perception.

While a large empirical literature has examined the neural substrates of phonological processing (see reviews by Poeppel, 1996; Burton, 2001; Buchsbaum et al., 2011), few studies have directly investigated how perceived acceptability or well-formedness affects brain activity. The work that has been done implicates frontal and temporoparietal regions. Rossi et al. (2011) investigated the neuronal correlates of phonotactic processing in a functional near-infrared spectroscopy (fNIRS) study with a passive listening task. They found that spoken nonwords with phonotactic patterns that were illegal in German yielded a greater hemodynamic response over a left-hemispheric network including fronto-temporal regions than did nonwords with legal patterns. Vaden et al. (2011) found that fMRI BOLD activation in a region including the medial pars triangularis, lateral inferior frontal sulcus, and anterior insula was correlated with phonotactic frequency during the perception of acoustically degraded words. Similarly, Berent et al. (2014) found a positive correlation between ill-formedness and activation in the bilateral posterior pars triangularis (BA45), a subpart of the LIFG, in a syllable counting task in which perceptual repair of illegal consonant clusters influenced perceived syllabification.

While the studies by Rossi et al. (2011), Vaden et al. (2011) and Berent et al. (2014) implicate LIFG, and specifically pars triangularis in processing related to manipulations of phonotactic legality or acceptability, it is unclear what functional role LIFG is playing. All of them employed stimuli that varied in intelligibility, articulatory familiarity and wordlikeness in addition to acceptability, and none of them required explicit judgments of phonological acceptability. Pars triangularis activation is independently associated with attention and retrieval processes within a language-specific working memory system (Matchin, 2018) as well as subvocal rehearsal in working memory (Burton et al., 2001; Elmer, 2016; Kazui et al., 2000; Menon et al., 2000; Ranganath et al., 2003; Rickard et al., 2000). This suggests that the above results may reflect downstream, task-induced processing rather than immediate phonotactic analysis.

Evidence from tasks that directly rely on acceptability judgments implicates temporal and parietal regions, but not inferior frontal areas, in phonological well-formedness effects. Ghaleh et al. (2018), using an acceptability judgment task, investigated the brain regions crucial to phonotactic knowledge in a large group of participants (N = 44) with chronic left hemisphere stroke, and found no evidence of LIFG involvement. Lesion-symptom mapping analyses found that reduced sensitivity to phonological structure was most strongly associated with damage to the left pMTG and angular gyrus (AG). They hypothesized that the AG plays a role in comparing the input to the most frequent phonotactic patterns found in lexical wordform representations stored in the pMTG. This finding aligns with other results relating phonotactic phenomena to the pMTG and SMG, both regions previously implicated in lexical processing (Hickok & Poeppel, 2007; Gow, 2012).

In a crosslinguistic study conducted with French and Japanese speakers, Jacquemot et al. (2003) suggested that the participation of the left posterior superior temporal gyrus (pSTG) and SMG in phonotactic processing reflects interaction between acoustic-phonetic and semantic representations. Similarly, Gow and Nied (2014) found an association between increased influence of the SMG and pMTG on the pSTG and perceptual repair of phonotactically unacceptable nonword onset consonant clusters. Gow and Olson (2015) found that this dynamic is stronger during lexical decision for words and nonwords composed of high frequency phonotactic sequences. Obrig et al. (2016) examined the interaction between electrophysiological measures and lesions in a study of auditory word repetition using phonologically acceptable, unacceptable, and reversed nonwords. They found that while the contrast between reversed and forward speech activated the SMG and AG, the contrast between legal and illegal phonotactics implicated anterior and middle portions of the middle temporal and superior temporal gyri. They concluded that speech comprehension is influenced by phonological structure at distinct phonologically and lexically driven steps. Collectively, the results of this literature suggest that multiple brain regions, each associated with very different functions, show sensitivity to the well-formedness of phonological strings. The goal of the present study is to characterize the relative contributions of regions across this distributed network to the performance of the phonological acceptability judgments that serve as a primary driving force in the development of phonological theory.

1.3. Present research and predictions

In the present study, we examined the dynamic neural processes that support phonological acceptability judgments of nonwords. We used high spatiotemporal resolution brain imaging techniques and data-driven effective connectivity analyses to determine how patterns of phonotactic attestation and phonological acceptability influence patterns of brain activity and information flow during an auditory nonword phonological acceptability judgment task. These data-driven effective connectivity analyses enable us to study language as the product of a dynamic, distributed network of specialized processors rather than as a collection of local functions. In particular, our aim is to investigate the direction of processing interactions between brain regions and identify function-specific effects of phonological judgments. Phonotactic acceptability is often correlated with factors including wordlikeness and articulatory challenge, which may affect activation in brain regions associated with articulatory or lexical processing. The application of effective connectivity analysis allows us to examine how these factors affect active processes such as rehearsal, verification, or repair associated with phonological evaluation.
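The effective connectivity measure used here is Granger causality. As a rough numerical illustration of its logic only (not the MRI-constrained MEG/EEG pipeline used in the study; the signal names, lag order, and coupling parameters below are invented), the following sketch asks whether past values of one simulated signal improve prediction of another beyond the target's own history:

```python
import numpy as np

def granger_f(source, target, order=2):
    """F-like statistic: do lags of `source` improve prediction of `target`
    beyond `target`'s own lags? Larger values = stronger Granger influence."""
    n = len(target)
    Y = target[order:]
    # Lagged regressors: target's own history, then both histories.
    own = np.column_stack([target[order - k:n - k] for k in range(1, order + 1)])
    both = np.column_stack([own] +
                           [source[order - k:n - k, None] for k in range(1, order + 1)])
    def rss(X):
        X = np.column_stack([np.ones(len(Y)), X])   # add intercept
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ beta
        return resid @ resid
    rss_restricted, rss_full = rss(own), rss(both)
    df_num = order
    df_den = len(Y) - both.shape[1] - 1
    return ((rss_restricted - rss_full) / df_num) / (rss_full / df_den)

rng = np.random.default_rng(0)
x = rng.standard_normal(600)
y = np.zeros(600)
for t in range(2, 600):                       # y is driven by lagged x
    y[t] = 0.8 * x[t - 1] + 0.2 * y[t - 1] + 0.1 * rng.standard_normal()
print(granger_f(x, y) > granger_f(y, x))      # expect True: x -> y, not y -> x
```

Because y is built from lagged x, the x-to-y statistic is large while the reverse direction hovers near chance; applied to source-localized time series, the same asymmetry is what licenses talk of one region "influencing" another.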

We created auditory CCVC nonwords divided into three conditions across the continuum of phonotactic attestation and phonological acceptability, based on mean acceptability ratings derived from a pilot study. For simplicity, and to allow direct comparison with previous effective connectivity studies of phonological constraints on speech perception and word recognition, our primary analyses focused on causal influences on the left posterior superior temporal gyrus (pSTG). Evidence from electrocorticography (Mesgarani et al., 2014) and converging data from pathology and functional imaging (Yi et al., 2019) suggest that the left pSTG is primarily involved in acoustic-phonetic representation and processing (Poeppel, Idsardi & van Wassenhove, 2008). Although we do not hypothesize that phonological acceptability judgments are made in this area, as a hub region linking the dorsal and ventral spoken language processing streams (Hickok & Poeppel, 2007), the pSTG provides a unique vantage point for observing the entire spoken language network. Previous results have shown that activation of the pSTG is sensitive to phonotactic acceptability (Jacquemot et al., 2003), but effective connectivity analyses suggest that this sensitivity is referred from other regions as a function of phonotactic lawfulness and frequency (Gow & Nied, 2014; Gow et al., 2021; Gow & Olson, 2015). Critically, although the pSTG is influenced by regions independently associated with lexical, articulatory, and rule-driven processing (Gow & Segawa, 2009; Gow & Olson, 2015), it is not uniquely associated with any of them, and so it provides an account-neutral reference point for observing dynamics associated with phonological judgment.

Each of these accounts makes different predictions about how patterns of brain activity and effective connectivity should change based on the acceptability or attestation of nonword phonotactic patterns. Under a rule-based account, all phonological structures are evaluated relative to a common set of rules after mini-lexicons containing lexical entries for valid syllable types are consulted; our test of the rule-based account targets this resynthesis process, in which rule evaluation is engaged. The rule account therefore predicts that acceptability judgments rely on dynamic processes that draw on a common store of phonological constraints to evaluate well-formedness. It predicts that nonwords judged less acceptable should evoke stronger influences from rule areas, which act either to repair less acceptable forms or to confirm their ill-formedness. As noted above, while there is no consensus about the existence or localization of a rule store, the best candidate region is the LIFG, specifically the pars triangularis (BA 45) (Vaden et al., 2011; Berent et al., 2014). The associative account predicts that attested and acceptable nonwords should evoke stronger influence on the pSTG by putative word representation areas, including the SMG, pMTG, and the left anterior and posterior portions of the inferior temporal lobe (Jacquemot et al., 2003; Hickok and Poeppel, 2007; Gow, 2012; Gow and Nied, 2014; Obrig et al., 2016; Ghaleh et al., 2018). The articulatory account predicts that brain regions involved in articulatory representation, including the ventral pre- and post-central gyri, should influence the pSTG as a function of articulatory naturalness (Paulesu et al., 1993; Gow & Segawa, 2009), which might reflect a combination of attestation and acceptability, with attested and acceptable nonwords evoking weaker influences than unattested and unacceptable nonwords.

2. Methods

2.1. Participants

Fourteen right-handed adults participated in this study (6 males, mean age 28 years, SD = 4.3, range = 22 to 36). None of the participants reported a history of hearing loss, speech/language or motor impairments, and all were native speakers of Standard American English and self-identified as monolingual. Informed consent was obtained in compliance with the Human Subjects Review Board and all study procedures were compliant with the principles for ethical research established by the Declaration of Helsinki. Participants were paid for their participation.

2.2. Stimuli

The stimuli consisted of 180 auditory CCVC nonwords recorded by an adult male speaker of Standard American English. These stimuli were a subset of 300 initial nonwords tested in a pilot study with 17 native speakers of Standard American English. In the pilot study, participants were given 100 nonwords with attested onset consonant clusters (e.g., smal or flike) and 200 nonwords with onset consonant clusters that do not appear in familiar non-loan English words (e.g., sras or zhnad), and were asked to rate the acceptability of these nonwords on a scale of 1 to 7. We then selected the sixty nonwords with attested onset clusters that received the highest acceptability ratings (M = 4.64, SD = 0.47) and assigned them to the Attested/Acceptable (AA) condition. Next, we selected the sixty nonwords with unattested onset clusters that received the highest acceptability ratings (M = 4.09, SD = 0.64) and assigned them to the Unattested/Acceptable (UA) condition. Finally, we selected the sixty nonwords with unattested onset clusters that received the lowest acceptability ratings (M = 2.09, SD = 0.28) and assigned them to the Unattested/Unacceptable (UU) condition. Thus, the 180 nonwords used in the current experiment were divided into three sets of sixty items each based on the attestation of onset consonant clusters and mean acceptability ratings derived from the pilot study.
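The condition assignment described above amounts to a sort-and-slice over the pilot ratings. The following sketch illustrates the logic; the item names, ratings, and the small n are invented for illustration (the study used n = 60 per condition):

```python
def assign_conditions(rated_items, n):
    """rated_items: list of (nonword, attested: bool, mean_rating) tuples.
    Returns (AA, UA, UU): lists of nonwords, n items per condition."""
    attested = sorted((it for it in rated_items if it[1]),
                      key=lambda it: it[2], reverse=True)
    unattested = sorted((it for it in rated_items if not it[1]),
                        key=lambda it: it[2], reverse=True)
    aa = [w for w, _, _ in attested[:n]]      # highest-rated attested
    ua = [w for w, _, _ in unattested[:n]]    # highest-rated unattested
    uu = [w for w, _, _ in unattested[-n:]]   # lowest-rated unattested
    return aa, ua, uu

# Toy pilot data: (nonword, attested onset?, mean 1-7 rating)
pilot = [("smal", True, 5.1), ("flike", True, 4.8), ("plon", True, 3.0),
         ("bnik", False, 4.2), ("sras", False, 3.9), ("zhnad", False, 2.1),
         ("lbik", False, 1.4)]
print(assign_conditions(pilot, 2))
```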

Care was taken to avoid items that closely resembled real words; however, one item in the UA condition, bwal, was inadvertently included that bore a strong resemblance to two English words (ball and brawl). All stimuli were recorded as 16-bit sound at a sampling rate of 44.1 kHz in a quiet room. The recordings were normalized for intensity and equated for duration at 500 ms using PRAAT (Boersma & Weenink, 2018). All items were checked by listening and by visual inspection of spectrograms to ensure that they were pronounced as intended. Because the stimuli were recorded naturally, the duration of onset consonant clusters was not controlled across conditions.

2.3. Procedure

We used Matlab PsychToolbox (Kleiner et al., 2007) to present the auditory stimuli and record behavioral responses. Participants performed an untimed two-alternative forced choice (2AFC) acceptability judgment task while MEG and EEG data were simultaneously collected. The participants were asked to “press one of two buttons with your left index or middle finger to indicate whether you think the word could make a new word in the English language or not”. The stimuli were presented in randomized order in two blocks of 90 trials, with a brief rest period between the blocks. Trials began with a 400 ms fixation period during which a small cross was shown at the center of the screen. This was followed by presentation of the 500 ms CCVC auditory stimulus over pneumatic earphones. After hearing the stimulus, participants responded with a left-hand button press. A 500 ms intertrial interval followed the button press. To minimize potential MEG/EEG artifacts, participants were instructed to maintain fixation on the screen in front of them and to blink only after responding. The total duration of the experiment was about 30 minutes.

2.4. Neural Data Acquisition and Processing

2.4.1. MEG and EEG Acquisition

Simultaneous MEG and EEG data were collected using a whole head Neuromag Vectorview system (Megin, Helsinki, Finland) in a magnetically shielded room (Imedco, Hägendorf, Switzerland). Data were recorded from 306 MEG channels (204 planar gradiometers and 102 magnetometers), 70 EEG channels with nose reference, and two electro-oculogram (EOG) channels to identify blinks and eye-movement artifacts. The data were filtered between 0.1 and 300 Hz and sampled at 1000 Hz. A FastTrack 3D digitizer (Polhemus, Colchester, VT) was used before testing to determine the positions of anatomical landmarks (preauricular points and nasion), the EEG electrodes, four head-position indicator (HPI) coils, and over 100 additional surface points on the scalp for co-registration with the structural MRI data. The position of the head with respect to the MEG sensor array was measured using the HPI coils at the beginning of each block of trials.

2.4.2. Structural MRI

Anatomical T1-weighted MRI data for each participant were collected with a 1.5 T or 3 T Siemens scanner using an MPRAGE sequence. Freesurfer software was used to reconstruct the cortical surface and to identify skull and scalp surfaces for each participant (Dale et al., 1999). Individual participants’ data were aligned into a common average surface using a spherical morphing technique (Fischl et al., 1999).

2.4.3. Source Reconstruction and ROI Identification

We used the MNE software (Gramfort et al., 2014) to create MRI-constrained cortical minimum-norm source estimates for the task-related MEG and EEG data. For the forward model, a three-compartment boundary element model was constructed for each subject using the skull and scalp surfaces segmented from the MRI. The source space was defined by placing current dipoles at approximately 10000 vertices of each reconstructed cortical hemisphere; the orientation of the dipoles was not constrained. All source estimates were calculated at the individual subject level and then transformed onto the common average cortical surface.

To define regions of interest (ROIs) that satisfy the statistical and inferential requirements of Granger causality analysis, we used an algorithm that relies on the similarity and strength of MNE time series activations at each source space vertex over the cortical surface for the 100–500 ms period after stimulus onset (Gow & Caplan, 2012; Gow & Nied, 2014). The activation map obtained by averaging the source estimates over all participants and conditions was used to identify the ROIs. First, vertices with mean activation above the 95th percentile during the 100–500 ms time window were identified as seeds for potential ROIs. Vertices located within 5 mm of local maxima were excluded, and pairwise comparisons were then performed between all potential seeds to identify redundant time-series information. This was done to satisfy the Granger analysis assumption that each signal carries unique predictive information. Redundancy was quantified as the Euclidean distance between vertices’ normalized activation functions. If an ROI’s activation function was within 0.9 standard deviations of that of another ROI with a stronger signal, the ROI was omitted. The spatial extent of individual ROIs was determined using the same measure of similarity in activation functions among contiguous vertices: when the distance was within 0.5 standard deviations of an ROI’s seed, the vertex was included in the ROI. We used Freesurfer’s automatic parcellation utility to label the ROIs based on their sulcal and gyral locations. The ROIs determined from the group average data were transformed onto the cortical surfaces of individual participants. Finally, to account for individual differences in brain structure and functional differentiation, we identified representative individual vertices within each ROI for each participant to provide representative time courses for the Granger analyses.
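The redundancy-pruning step can be sketched as follows. This is an illustrative reconstruction, not the authors’ implementation; in particular, referencing the 0.9-standard-deviation criterion to the distribution of pairwise inter-seed distances is an assumption made here for concreteness:

```python
import numpy as np

def prune_redundant_seeds(activations, strengths, threshold_sd=0.9):
    """Illustrative sketch (not the authors' code) of redundancy pruning:
    a seed whose normalized activation time course lies too close, in
    Euclidean distance, to that of a stronger seed is dropped.

    activations: (n_seeds, n_times) array of activation functions.
    strengths:   (n_seeds,) array of signal strengths.
    The threshold is expressed in SD units of the pairwise distances
    (an assumption about how the criterion is normalized).
    """
    norm = activations / np.linalg.norm(activations, axis=1, keepdims=True)
    n = len(norm)
    dists = np.array([[np.linalg.norm(norm[i] - norm[j])
                       for j in range(n)] for i in range(n)])
    off_diag = dists[~np.eye(n, dtype=bool)]
    cutoff = threshold_sd * off_diag.std()
    keep = np.ones(n, dtype=bool)
    order = np.argsort(strengths)[::-1]  # strongest seeds first
    for rank, i in enumerate(order):
        for j in order[:rank]:           # compare against stronger kept seeds
            if keep[j] and dists[i, j] < cutoff:
                keep[i] = False          # redundant with a stronger seed
                break
    return keep
```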

2.4.4. Kalman Filter based Granger Causality Analysis

Effective connectivity between ROIs was determined using Granger causality analysis based on a Kalman filter approach (Milde et al., 2010; Gow & Caplan, 2012). Kalman filtering has been used in Granger analyses of BOLD, EEG, MEG, and multimodal data (Valdes-Sosa et al., 2009; Havlicek et al., 2010, 2011; Milde et al., 2010; Gow & Nied, 2014) because it addresses the signal stationarity assumption of Granger causality analysis, is robust to noise, and allows estimation of the coefficients of time-varying multivariate autoregressive (MVAR) prediction models, making it possible to measure the strength of Granger causation between all ROIs at each time point.

Averaged time series data from each subject’s ROIs were submitted to Kalman filter-based Granger analysis. For each ROI, first, a full MVAR model predicting the activity in that ROI from the past values of activity in all ROIs was generated. Then, restricted counterpart models were created in which one of the other, potentially causal ROIs was excluded. Granger causality is an inference based on the ratio of error terms for the full versus restricted prediction models. For two ROIs, if the prediction error for ROI1 at time step t is reduced by including ROI2 at time step t − 1 in the model, over and above the prediction of ROI1 from its own past values, then ROI2 is said to have a causal influence on ROI1: ROI2 Granger-causes, or influences, changes in ROI1 activation. The strength of this causality was measured at each time point by the Granger Causality Index (GCI), defined as the logarithm of the ratio of the prediction errors for the two models (Milde et al., 2010). In the models, the five samples (at a 1000 Hz sampling rate) preceding each time point were used. This model order was identified heuristically because the Akaike and Bayesian information criteria failed to determine a single optimal model order. A 100-ms initial time period was added to allow the Kalman filter to converge; thus, Granger causality was computed over the 0–500 ms time window. The significance of the GCI at each time point was determined using a bootstrapping method (Milde et al., 2010).
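The error-ratio logic of the GCI can be illustrated with a deliberately simplified, stationary two-signal version. The study itself fits time-varying multivariate models with a Kalman filter across all ROIs, so this ordinary-least-squares sketch is only a conceptual analogue of the full-versus-restricted comparison:

```python
import numpy as np

def granger_causality_index(x, y, order=5):
    """Simplified, stationary sketch of the GCI for the influence of x
    on y: the log of the ratio of the residual variance of a restricted
    AR model (past of y only) to that of a full model (past of y and x).
    The paper's analysis uses time-varying Kalman-filter MVAR models;
    this OLS version only illustrates the error-ratio inference.
    """
    T = len(y)
    Y = y[order:]
    # Lagged design matrices: column k holds the signal delayed by k samples.
    lags_y = np.column_stack([y[order - k:T - k] for k in range(1, order + 1)])
    lags_x = np.column_stack([x[order - k:T - k] for k in range(1, order + 1)])

    def resid_var(X):
        X = np.column_stack([np.ones(len(Y)), X])
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        r = Y - X @ beta
        return np.mean(r ** 2)

    err_restricted = resid_var(lags_y)                     # y's past only
    err_full = resid_var(np.hstack([lags_y, lags_x]))      # plus x's past
    return np.log(err_restricted / err_full)
```

A signal that is genuinely driven by the past of another yields a large positive index, while an unrelated signal yields an index near zero.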

2.5. Statistical Analyses

For the analysis of the acceptability judgements, we used the lme4 (Bates et al., 2012) and lmerTest (Kuznetsova, Brockhoff, & Christensen, 2017) packages in R (R Core Team, 2022) to perform a logistic mixed-effects analysis of the relationship between acceptance rates and nonword type (3 levels: AA, UA, UU). We first ran the full model with the UA condition as the reference level and then reran the model with the AA condition as the reference level in order to report the AA vs. UU comparison statistics. Nonword type was treated as a fixed effect. We used random intercepts and slopes for nonword type by participants and random intercepts by items; we did not include random slopes for nonword type by items because nonword type is a between-item factor. We report the model estimate of the change in acceptance rate (in log odds) from the reference category for each fixed effect (b), the standard error of the estimate (SE), the Wald z test statistic (z), and the associated p values.

The influence of one ROI on another was quantified by counting the number of time points for which the uncorrected GCI significance value was p < 0.05 within the 100–500 ms post-stimulus time window of interest. Previous neurophysiological studies using similar consonant onset cluster well-formedness manipulations have shown sensitivity to phonotactic violations in this time window (Wagner et al., 2012; Rossi et al., 2013; Gow & Nied, 2014). Our analyses first focused on the baseline pattern of causation, with all three conditions combined, to identify the overall network responsible for acceptability judgments at the word level. We then contrasted trials along the continuum of phonotactic attestation and phonological acceptability, comparing Attested/Acceptable (AA) vs. Unattested/Acceptable (UA), Unattested/Acceptable (UA) vs. Unattested/Unacceptable (UU), and Attested/Acceptable (AA) vs. Unattested/Unacceptable (UU) nonwords. Differences between conditions were evaluated using a binomial test (Tavazoie et al., 1999) comparing the number of significant time points in each condition within the time window of interest. Effects were reported as significant at α = 0.05 after correction for multiple comparisons using the false discovery rate (Benjamini & Hochberg, 1995).
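A generic exact binomial test of this kind can be sketched as follows. Framing the two condition counts as k successes out of k1 + k2 trials with p = 0.5 is an assumption made here for illustration; the precise formulation used in the paper follows Tavazoie et al. (1999):

```python
from math import comb

def binomial_two_sided_p(k, n, p=0.5):
    """Two-sided exact binomial p-value: total probability of outcomes
    whose likelihood under Binomial(n, p) is no greater than that of
    the observed count k."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    return min(1.0, sum(q for q in pmf if q <= pmf[k] * (1 + 1e-12)))

# Hypothetical counts of significant GCI time points for one ROI pair
# in two conditions (values made up for illustration):
k_cond1, k_cond2 = 120, 60
p_value = binomial_two_sided_p(k_cond1, k_cond1 + k_cond2)
```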

3. Results

3.1. Acceptance Rates

The nonwords received a mean acceptance rate of 0.73 (SD=0.44) in the AA condition, 0.72 (SD= 0.45) in the UA nonword condition, and 0.16 (SD=0.36) in the UU nonword condition. Results of the logistic mixed-effect regression model showed that while the acceptance rates in the UA condition were not significantly different than those in the AA condition (b = 0.079, SE = 0.24, z =0.33, p =.741), they were significantly lower in the UU condition (b = 3.40, SE = 0.35, z =9.70, p <.0001). Acceptance rates were also significantly lower in the UU condition than in the AA condition (b = 3.48, SE = 0.40, z =8.80, p <.0001) (Fig 1).

Figure 1.

Figure 1.

Average proportion of acceptance (“Yes”) responses in the three conditions across the continuum of phonotactic attestation and phonological acceptability. Error bars indicate standard error of the mean, stars indicate the significance of pairwise comparisons (p<.0001).

3.2. Regions of Interest

The process of identifying clusters of cortical source locations associated with activation peaks that share similar temporal activation patterns resulted in 38 ROIs associated with overall task-related activation (Figure 2, Table 1). As expected, these included three superior temporal gyrus ROIs (L-STG1, R-STG1, R-STG2) in regions with neural sensitivity to acoustic-phonetic structure (Mesgarani et al., 2014). Yamamoto et al. (2019) propose that the right STG region plays a special role in linking auditory feedback with internal representations of speech sounds. Importantly, ROIs were identified that were consistent with associative (lexical) and articulatory mediation of acceptability judgements. Five regions, L-SMG1, L-MTG1, L-ITG1,2, and R-ITG1, spanned areas associated with lexical representation (Hickok and Poeppel, 2007; Gow, 2012). The left anterior ITG (L-ITG1) is implicated in lexico-semantic processing (Ischebeck et al., 2004; Patterson, Nestor and Rogers, 2007). Hickok and Poeppel (2007) specifically identify the posterior MTG and adjacent posterior ITS as lexical interface regions. Both L-ITG2 and R-ITG1 spanned regions encompassing the posterior ITG, and L-ITG1 spanned the anterior ITG. Moreover, MEG source reconstructions commonly show a bias towards superficial sources that may make some sulcal sources appear more gyral. Consistent with the models of both Hickok and Poeppel (2007) and Gow (2012), these posterior ROIs may mediate the mapping between wordform structure and meaning. Both models attribute semantic representation to more anterior parts of the middle and inferior temporal gyri. Ghaleh et al. (2019) found that lesions of the left aMTG were associated with decreased sensitivity to the phonotactic regularity of speech. Located between the posterior MTG (lexical interface) and anterior MTG (combinatorial network), the L-MTG1 ROI, together with the L-ITG1 ROI, may play a transitional role between those functions in the ventral speech processing stream.

Figure 2.

Figure 2.

Regions of interests (ROIs) visualized over an inflated averaged cortical surface. Lateral (top) and medial (bottom) views of the left and right hemisphere are shown. For further description of the ROIs, see Table 1.

Table 1.

Regions of interests (ROIs) used in Granger causation analyses, as identified from averaged task-related activation. MNI coordinates indicate the source locations showing the highest average activation across participants within each region.

Label Location MNI Coordinates
Left Hemisphere X Y Z
Temporal L-STG1 Superior Temporal Gyrus −64 −31 9
L-MTG1 Middle Temporal Gyrus −63 −16 −15
L-ITG1 Inferior Temporal Gyrus −48 −7 −39
L-ITG2 Inferior Temporal Gyrus −56 −57 −10
Parietal L-SMG1 Supramarginal Gyrus −61 −47 23
L-SPC1 Superior Parietal Cortex −13 −87 35
L-SPC2 Superior Parietal Cortex −13 56 63
L-postCG1 Postcentral Gyrus −61 −14 24
L-postCG2 Postcentral Gyrus −49 −20 57
Frontal L-ParsOrb1 Pars Orbitalis −44 35 −13
L-ParsTri1 Pars Triangularis −52 25 11
L-SFG1 Superior Frontal Gyrus −7 61 9
L-SFG2 Superior Frontal Gyrus −12 −2 67
L-SFG3 Superior Frontal Gyrus −8 21 60
Occipital L-LOC1 Lateral Occipital Complex −35 −88 −16
L-LOC2 Lateral Occipital Complex −11 −103 5
L-LOC3 Lateral Occipital Complex −45 −82 5
L-LOC4 Lateral Occipital Complex −10 −97 −13
Right Hemisphere
Temporal R-STG1 Superior Temporal Gyrus 65 −33 5
R-STG2 Superior Temporal Gyrus 56 2 −5
R-MTG1 Middle Temporal Gyrus 64 −19 −12
R-ITG1 Inferior Temporal Gyrus 54 −54 −16
Parietal R-AG1 Angular Gyrus 37 −68 43
R-SPC1 Superior Parietal Cortex 20 −89 25
R-SPC2 Superior Parietal Cortex 12 −71 52
R-SPC3 Superior Parietal Cortex 29 −53 63
R-postCG1 Postcentral Gyrus 8 −37 77
R-postCG2 Postcentral Gyrus 60 −11 14
R-postCG3 Postcentral Gyrus 58 −9 36
Frontal R-ParsTri1 Pars Triangularis 50 33 5
R-cMFG1 Caudal Middle Frontal Gyrus 36 5 58
R-SFG1 Superior Frontal Gyrus 17 57 24
R-SFG2 Superior Frontal Gyrus 8 39 50
R-SFG3 Superior Frontal Gyrus 17 22 58
R-SFG4 Superior Frontal Gyrus 9 −1 67
Occipital R-LOC1 Lateral Occipital Complex 18 −100 5
R-LOC2 Lateral Occipital Complex 33 −88 −13
R-LOC3 Lateral Occipital Complex 48 −76 8

Three regions in the ventral postcentral gyrus, L-postCG1, R-postCG2 and R-postCG3, aligned with portions of the sensory homunculus that are associated with the control of speech articulators (Pardo et al., 1997) and implicated in the perception of spoken language (Tremblay and Small, 2011a,b; LaCroix et al., 2015; Schomers and Pulvermüller, 2016). Two ROIs were identified within the LIFG, L-ParsTri1 and L-ParsOrb1, along with one right hemisphere homolog, R-ParsTri1. As previously noted, the functional interpretation of these regions is controversial given some evidence that they are sensitive to phonological well-formedness (Vaden et al., 2011; Berent et al., 2014) even though damage to these regions is not associated with changes in phonological acceptability judgements (Ghaleh et al., 2018). Independent work links the pars orbitalis to lexical semantic control processes including search (de Zubicaray and McMahon, 2009; Price, 2010; Noonan et al., 2013; Conner et al., 2019; Becker et al., 2020; Jackson, 2020) and the left pars triangularis to working memory maintenance and retrieval (Hagoort, 2005; Thompson-Schill, Bedny & Goldberg, 2005; Grodzinsky and Amunts, 2006; Matchin, 2018). The right pars triangularis is less well understood, although evidence that inhibition of this region in people with aphasia reduces phonological, but not semantic, errors in naming (Harvey et al., 2019) suggests a possible role in inefficient phonologically guided strategic lexical search. Interestingly, a third component of the LIFG, the pars opercularis, which is implicated in control processes related to phonetic processing and motor control (Amunts et al., 2004; Heim et al., 2005), was not identified by our method as a separate ROI.

Other ROIs included lateral occipital regions that may reflect the role of visual fixation in cueing auditory attention, a sensorimotor region (R-postCG1) likely to play a role in initiating button presses (Overduin & Servos, 2004), superior frontal and parietal regions implicated in control processes and response suppression (Dong et al., 2000; Hu et al., 2016; Koenigs et al., 2009; Seghier, 2013; Wild et al., 2012; Shomstein & Yantis, 2006; Aron et al., 2003; Kim, 2010; Vilberg and Rugg, 2008), and the right caudal middle frontal gyrus (R-cMFG1), a region implicated in attentional control (Japee et al., 2015). While acknowledging the challenges of functionally interpreting any brain region, we reference these glosses of function in the following results for ease of readability.

3.3. Overall Patterns of Causation

Figure 3 shows the causal influences of other ROIs on L-STG1 for data averaged over the three conditions (AA, UA, and UU). Influence was quantified as the number of time points in the interval between 100–500 ms post stimulus onset with significant GCI values. (The influences of all ROIs on L-STG1, both for the condition-independent Granger analysis and for the three condition-specific analyses, are shown in Table S1 in Supplementary Materials.) The strongest observed influences on L-STG1 were by R-LOC3 and R-SFG4. Prominent influences also came from ROIs in regions associated with lexical representation (L-ITG1,2, L-SMG1, and L-MTG1), from ROIs in areas implicated in sensorimotor articulatory representations (R-postCG2,3), and from LIFG ROIs (L-ParsOrb1 [BA 47] and L-ParsTri1 [BA 45]).

Figure 3.

Figure 3.

Causal influences on L-STG1 (yellow) by the other ROIs for data combined over the three stimulus conditions. The diameter of the green bubbles indicates the number of time points with significant GCI values in the 100–500 ms window. A time point was included in the count if the p-value determined by bootstrapping analysis for the GCI reached α = 0.05 (uncorrected).

3.4. Effects of Attestation (AA vs. UA)

Comparisons between experimental conditions were made using binomial tests to compare the number of time points between 100–500 ms post stimulus onset in which GCI values had p < 0.05 in each condition. The comparison between influences on L-STG1 in the AA versus UA conditions is shown in Figure 4. Significantly stronger influences in the AA than the UA condition were found for 7 of the 38 ROIs. The largest effects were found for L-ITG2 and L-MTG1 in the left hemisphere (associated with lexical representation) and R-postCG3 in the right hemisphere (associated with articulatory representation) (all p < 0.001, FDR corrected). The other regions showing significantly larger influences on L-STG1 for attested forms (AA) were L-SPC2, R-SPC1, and R-SFG4 (p < 0.01) (all involved in attention and control processes), and R-STG1 (p < 0.01) (linking auditory feedback with internal representations of speech sounds).

Figure 4.

Figure 4.

Differential influences of the other ROIs on L-STG1 (shown in yellow) for the AA-UA contrast. The bubbles indicate ROIs that showed a significantly (p<0.05, with FDR correction) larger number of time points with significant GCI values in the 100–500 ms window in the AA (blue) or UA (purple) condition. The diameter of a bubble corresponds to the difference in the number of time points between conditions.

Three ROIs showed larger influence on L-STG1 in the UA than the AA condition: R-ParsTri1, R-cMFG1, and R-postCG1 (p<0.01). The influence by R-postCG1 was likely related to the button press. The R-cMFG1 ROI in the right middle frontal gyrus is implicated in shifting attention from exogenous to endogenous control and R-ParsTri1 involvement is consistent with inefficient phonological search and control processes evoked by lexically unattested phonotactic patterns.

We also examined potential indirect influences on L-STG1 by identifying differential influences of other ROIs on L-MTG1 (Fig. S1 in supplementary materials), following evidence that left SMG [the dorsal wordform area] influences on pSTG are sometimes mediated by the MTG. The results showed significantly stronger influences of L-SMG1 (p < 0.001) on L-MTG1 in the AA than in the UA condition. Furthermore, since the behavioral results did not show significant differences between AA and UA nonwords, we conducted a further analysis dividing the trials by response (i.e., nonwords that were accepted versus rejected, regardless of their pre-assigned attestation or acceptability). Figure S2 in Supplementary Materials depicts ROIs that showed significantly different influences on the L-STG1 ROI between accepted and rejected nonwords. Notably, ROIs associated with lexical, articulatory, and phonological/semantic search did not show differences between accepted and rejected nonwords.

Overall, these results suggest that listeners reference representations of familiar words (via L-MTG1) and familiar articulatory patterns (via R-postCG3) when they judge those words to be acceptable and engage in effortful phonological search (via R-ParsTri1), possibly for phonetically similar items, when familiar representations are not immediately available.

3.5. Effects of Acceptability (UA vs. UU)

Contrasts between the UA and UU conditions revealed 18 ROIs that showed significant differences in the strength of influences on L-STG1 as a function of acceptability (Figure 5). Three ROIs showed stronger influence in the UA condition: R-ParsTri1 (p < 0.001) (phonological search and control processes), L-ParsOrb1 (p < 0.005) (semantic control), and L-LOC4 (p < 0.001) (visual cueing of auditory attention).

Figure 5.

Figure 5.

Differential influences of the other ROIs on L-STG1 (yellow) for the UA-UU contrast. The bubbles indicate ROIs that showed a significantly (p<0.05, with FDR correction) larger number of time points with significant GCI values in the 100–500 ms window in the UA (purple) or UU (orange) condition.

Stronger influences in the UU than the UA condition were found for 15 ROIs: L-ITG2, L-SFG2,3, L-SPC2, L-postCG2, R-SFG1,3, R-SPC3, R-cMFG1, R-postCG3 (all p<0.001) and L-postCG1, R-SFG2, L-LOC1, R-AG1, R-LOC1 (p<0.005). Collectively, these regions are associated with lexical representation (L-ITG2), articulation (L-postCG1,2 and R-postCG3), allocation of memory and attention (bilateral SFG and SPC areas, R-AG1, and R-cMFG1), and visual cueing of attention (bilateral LOC areas).

These results suggest that the judgments were influenced by attention driven semantic control processes (L-ParsOrb1) when the nonword was acceptable, and by a large network of attention and control mechanism together with lexical and articulatory means when the nonword was unacceptable.

3.6. Effects of Naturalness (AA vs. UU)

Figure 6 shows ROIs that showed significantly different influences on L-STG1 between the AA and UU conditions. For 4 ROIs the influence was stronger in the AA condition. The largest effects were found for L-MTG1 (associated with lexico-semantic representation) and R-SPC1 (cognitive control and response suppression). The other ROIs showing significantly larger influence on L-STG1 in the AA condition were L-ParsOrb1 [BA 47, part of LIFG] (associated with semantic retrieval and control) and L-LOC4 (all p < 0.001, FDR corrected).

Figure 6.

Figure 6.

Differential influences of the other ROIs on L-STG1 (yellow) for the AA-UU contrast. The bubbles indicate ROIs that showed a significantly (p<0.05, with FDR correction) larger number of time points with significant GCI values in the 100–500 ms window in the AA (blue) or UU (orange) condition.

Stronger influences on L-STG1 in the UU condition were found for 15 ROIs: L-ITG1, L-SFG1,2,3, L-postCG1,2, and L-LOC1 in the left hemisphere (all p < 0.001, FDR corrected), and R-cMFG1, R-SFG1,2, R-SPC2, R-postCG1,2, R-AG1, and R-LOC1 in the right hemisphere (all p < 0.001, FDR corrected). The increased influences of L-ITG1, L-postCG1,2, and R-postCG2 are consistent with increased recruitment of lexical and articulatory influences on L-STG1 for the most unnatural forms (UU). The increased influence of frontal and parietal control regions suggests that the most unnatural forms produce the strongest task demands.

We further examined two ROIs for which the observed results were unexpected given the hypotheses put forward above. Neither L-SMG1 nor L-ParsTri1 influences on L-STG1 differed significantly between conditions; instead, both ROIs showed significant overall influence on L-STG1 without condition differences (Table S1 in Supplementary Materials).

To summarize, acceptance of natural (attested and acceptable) phonotactic patterns relied primarily on lexical mechanisms, with a minimal role for control processes. When a nonword was unnatural, however, a bilateral attention and control network, together with left hemisphere articulatory regions, influenced the judgment process.

4. Discussion

The present study aimed to (i) characterize the dynamic cortical processes and mechanisms that support phonological acceptability judgments and (ii) determine the degree to which these judgments are driven by rule/constraint, associative, and articulatory mechanisms. Effective connectivity analyses revealed a diverse pattern of differences in the strength of influences on the left pSTG among the three conditions. Our results suggest that phonological acceptability judgments are the product of a search process over lexical, phonological, and articulatory wordform representations whose course depends on the attestation and acceptability of the sound combinations. Several of the ROIs identified in the comparisons were in regions that the literature has associated with lexical word representation (L-SMG1, L-MTG1, L-ITG1,2, and R-ITG1), sequence processing or controlled lexical and phonological search and access (L-ParsOrb1, L-ParsTri1, and R-ParsTri1), and sensorimotor representations (bilateral postCGs).

The involvement of lexical regions is notable, given that two of the three conditions consisted of items with unattested onset consonant clusters, none of the stimuli were words, and the participants knew they were not going to hear any words. However, none of these regions is exclusively grammatical, lexical, or articulatory; each likely supports a range of other processes as well. Indeed, this diversity is inconsistent with a single-mechanism explanation. Our results suggest that any rule, lexical, or articulatory restriction would need to be represented across a widely distributed network, and that the act of making phonological judgments involves multiple stages of lexical and articulatory evaluation, orchestrated in some cases by active cognitive control processes.

4.1. Attestation

Nonwords with unattested, and thus theoretically unacceptable, onset consonant clusters received acceptance rates as high as those of nonwords with attested and putatively well-formed onsets. This result may be due in part to the characteristics of our stimuli. All of the acceptable nonwords begin either with fricatives (the attested items) or with a mix of fricatives and stops (the unattested items), and there is evidence that fricatives are processed differently from other sounds (Galle et al., 2019). Another likely explanation of this behavioral finding is that the judgments were influenced by perceptual repair of unattested patterns (Massaro & Cohen, 1983; Pitt, 1998; Davidson, 2007; Gow & Nied, 2014; Gow et al., 2021). The current neural results provide some support for this interpretation. The left SMG, the dorsal lexicon area, which mediates the mapping between sound and articulation (Gow, 2012), showed consistent influence on pSTG across the comparison conditions (see Table S1 in supplementary material). Independent evidence (Gow & Nied, 2014; Gow et al., 2021) associates this dynamic with lexically mediated perceptual repair of unattested fricative onset clusters (e.g., categorizing an unattested /sr-/ onset cluster as an attested /shr-/ onset).

The consistent lexical influences of the SMG across the attested and unattested acceptable conditions (AA versus UA) were overlaid with evidence of increased top-down lexical influence from L-ITG2 and L-MTG1 for attested (AA) items. Unlike the SMG, these regions are associated with semantic aspects of lexical representation, which would be expected to be more strongly activated by nonwords with onset clusters that unambiguously appear in known words with semantic representations. For example, whereas the unattested onset cluster in the UA nonword shlame may only weakly activate words with perceptually similar onset clusters (e.g., slain, shame), an AA nonword with an attested onset cluster, such as flane, more closely resembles words with full semantic representations, including flame and flake, and so would be expected to produce stronger semantic activation.

Like lexical familiarity, articulatory familiarity appears to influence the evaluation of nonwords with attested onset clusters. The dorsal postcentral gyrus region R-postCG3 also produced stronger influences on L-STG1 for attested versus unattested acceptable nonwords. This ROI aligns with a portion of sensorimotor cortex implicated in oral movement (Pardo et al., 1997) and phonological processing (Schomers and Pulvermüller, 2016). BOLD imaging results comparing the articulation of vowels and syllables taken from bilingual’s first versus second languages provide converging evidence that this region is sensitive to the relative frequency of articulatory patterns (Treutler and Sörös, 2021).

Further support for the role of articulatory effects on speech perception in our experiment comes from the stronger influence of R-STG1 on L-STG1 in the AA versus UA condition. Yamamoto et al. (2019) linked right STG activation to covert speech and suggested that right STG influences on left STG reflect the influence of covert speech on the perception of heard speech. The stronger influence of L-SPC2, R-SPC1, and R-SFG4, implicated in attention and control processes, on L-STG1 for the AA compared with UA items likely reflects the need to devote additional effort to the search of lexical and articulatory wordform representations for AA nonwords.

It is notable that LIFG regions did not show stronger influences on L-STG1 for nonwords with attested, lawful patterns than for nonwords with putatively ill-formed clusters that violate English sonority constraints. Such an influence would be predicted by the rule/constraint account under the hypothesis that LIFG is the locus of rule/constraint knowledge or processing. Furthermore, the results are inconsistent with the hypothesis that ill-formed, unattested clusters were judged to be acceptable due to rule-mediated perceptual repair. If positive evidence had been found for all three influences during phonological judgments, one could argue that lexical and articulatory factors influence online acoustic-phonetic processing, but that their effects on the evaluation of phonological judgments are mediated by rules or constraints shaped over time by these factors, as suggested by cophonological and grounded theories of phonology. In the absence of independent LIFG effects, it appears that the lexical and articulatory advantages of attestation are not mediated by abstract rules or constraints.

4.2. Acceptability and Naturalness

What makes some forms unrepairable and therefore unacceptable? Contrasts showed that unacceptable (UU) forms engaged a broader network of frontal and parietal control regions than did either of the acceptable conditions (AA and UA). For example, in the AA vs. UU comparison, the combination of stronger L-ITG1 influence with weaker L-MTG1 influence may reflect enhanced lexical search that does not lead to the semantic access associated with more anterior L-MTG1 activation. This may suggest that listeners' default judgment is to accept novel forms, and that strongly unnatural phonotactic structures trigger attention-driven re-evaluation. This should not be surprising, as speech input, with very few exceptions, is predictably well-formed.

In contrast, acceptable forms elicited stronger influence of L-ParsOrb1 on L-STG1 in all comparisons involving unacceptable forms (UA versus UU and AA versus UU). A wide body of research implicates activation of left pars orbitalis (BA 47) in semantic retrieval and control processes (de Zubicaray & McMahon, 2009; Price, 2010; Noonan et al., 2013; Conner et al., 2019; Becker et al., 2020; Jackson, 2020). Semantic control is the ability to selectively access and manipulate meaningful information based on contextual demands (Jackson, 2020). In a meta-analysis, Jackson (2020) found that semantic control depends on a left-hemisphere-specific network comprising IFG, posterior MTG and ITG, and dorsomedial prefrontal cortex (dmPFC). The left ParsOrb influence on L-STG1 observed in our study can therefore be understood as a control process through which lexical forms were selected. Additionally, the stronger influence of L-MTG1 on L-STG1 for the AA vs. UA and AA vs. UU nonwords suggests that this control process was a semantic one that accessed the wordform representations of natural (AA) nonwords.

However, this control process operates differently for UA nonwords, for which the L-MTG1 or L-ITG2 influence, and therefore the lexical representation, was absent. R-ParsTri1 (implicated in phonological access during word retrieval) showed stronger influence on L-STG1 in both the UA vs. AA and UA vs. UU comparisons (Figs. 4 and 5). This suggests that the R-ParsTri effect is specific to UA nonwords and related to the coordination of a search with no obvious lexical support, since the onset clusters of these nonwords do not occur in the lexicon. The influences of L-ParsOrb1 and R-ParsTri1 on L-STG1 for UA nonwords can be taken as evidence of an attempt to selectively access the unattested sound combinations. In other words, because the phoneme sequences of UA nonwords are unattested, there is nothing to access; an active search for the nearest familiar word, or even articulatory pattern, is therefore initiated. This would prompt potential acoustic-phonetic reanalysis (hence the influence on L-STG1). When such reanalysis fails to draw lexical support from SMG/pMTG, the form is rejected. We suggest that most of the UA nonwords received this lexical support and were accepted. Together, this suggests that the left ParsOrb is associated with the acceptability of a novel form rather than with its attestation.

It is paradoxical that the same lexical and articulatory influences on L-STG1 that follow from attestation in AA played an outsized role in the processing of unacceptable forms in UU. This was the case for both L-ITG2 and R-postCG3, which showed stronger influences in AA than UA, but stronger effects for the unacceptable UU items than the acceptable UA items. We hypothesize that these dynamics reflect different processes in AA and UU nonwords. For UU nonwords, we suggest that stored articulatory representations (R-postCG3) drive attempts to restructure acoustic-phonetic representations in L-STG1 to bring them into alignment with stored representations. We suggest that UU nonwords are simply too phonetically dissimilar to stored forms to support lexical resonance, even after articulatory restructuring.

Additionally, the results of Miyamoto et al. (2006) provide context for interpreting the influence of the R-postCG3 ROI on L-STG1 for both AA and UU nonwords. Miyamoto and colleagues investigated the cortical representation of the oral area in postCG by identifying the somatotopic representations of the lips, teeth, and tongue using fMRI. They found that the oral area is hierarchically organized across the rostral portion of the primary somatosensory cortex, with the representation of the tongue located inferior to that of the lip. In a post hoc analysis, we compared the MNI coordinates of our R-postCG3 area with those reported by Miyamoto et al. (2006) and found that our R-postCG3 coincides with their lip area. The role of this postCG area for AA and UU nonwords could be related to the similarity of onset consonant clusters (articulated with the tongue against or close to the superior alveolar ridge) across the two conditions (see Appendix for the list of nonwords used).

These results from the AA vs. UU and UA vs. UU comparisons suggest that processing of acceptable (natural) nonwords involved a phonological search mechanism that selectively accessed the sound combinations (via L-ParsOrb1 and R-ParsTri1) for UA nonwords and the wordform representations (via L-ParsOrb1 and L-MTG1) for AA nonwords, whereas processing of UU nonwords initiated an effortful search process without lexical support.

5. Conclusion

This study examined the factors that support phonological acceptability judgments across the continuum of phonological acceptability and phonotactic attestation. Our review of the empirical literature suggests that the available evidence for a LIFG locus of phonological rule processing is potentially attributable to task-specific working memory and attention demands. Moreover, our results are inconsistent with claims that LIFG regions play a central role in phonological judgments. Instead, our results suggest that phonological acceptability judgments are mediated by representations of familiar wordforms and articulatory patterns. We hypothesize that acceptability judgments reflect the degree to which test stimuli resemble specific stored forms. Influences on acoustic-phonetic regions reflect resonance when input representations are sufficiently similar to stored forms to draw support from them, either with restructuring (perceptual repair) for unattested forms judged acceptable, or without restructuring for forms that do not require it. Novel forms are rejected only if effortful processing fails to restructure their input representations sufficiently to bring them into alignment with stored lexical or articulatory representations.

In conclusion, phonological judgments do not provide a direct window on abstract constraints, but rather reflect processes related to word likeness and articulatory effort. Phonology remains a marvel of structurally constrained cognitive generativity. This work suggests that phonological acceptability judgments provide insight into structural properties of lexical and articulatory representations of individual forms that reflect language processing demands and shape our lexica.

Supplementary Material


Acknowledgements

We would like to thank Adriana Schoenhaut and Nao Suzuki for assisting in scanning, Tom Sgouros for programming support, and Adam Albright and David Sorensen for thoughtful comments and feedback on the work. We also thank the participants in our studies who volunteered their time and effort.

Funding

This work was supported by National Institute on Deafness and Other Communication Disorders (NIDCD) grant R01DC015455 (P.I.: Gow).

Appendix

STIMULI

Novel Words
Acceptable Attested
flane flose friss shrime slep smike
flass flul frode shrom sliss smiss
flav fluss frome shrop sliv smob
flep fral frose shrote slobe smobe
flid frane frote slalm slome smop
flike frass frud slame slote smul
flime freem frul slass slud smuv
fliss frem fruss slav smad snab
flob frid shrab sleem smal snote
flobe frike shran slem smid spame

Unattested Acceptable
bwaim bwobe mlote shlame srime vrid
bwain bwop mlud shlid srop vrike
bwal bwul mlul shlime srul vrime
bweem bwus mluss shlob sruv vriss
bwep bwuv pwain shlome vrad vrob
bwid mlame pweam shlop vrame vrom
bwime mlime pwid shlote vrane vrome
bwis mliss pwote srane vras vrose
bwiv mlode pwul sras vreem vrote
bwob mlome shlab srike vrem vruss

Unattested Unacceptable
fmal sfob zhnem zhnote zhvas zhvode
fmame sfode zhnid zhnud zhveb zhvom
fmeem sfome zhnike zhnul zhveem zhvome
fmem sfuss zhnime zhnuss zhvem zhvop
fmid zhnad zhniss zhpuss zhvid zhvose
fmiv zhnal zhniv zhvab zhvike zhvote
sfab zhnane zhnob zhvad zhvime zhvud
sfal zhnas zhnom zhval zhviss zhvul
sfeb zhneb zhnop zhvame zhviv zhvuss
sfiv zhneem zhnose zhvane zhvob zhvuv

Footnotes

Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

Conflicts of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Bibliography

  1. Albright A (2009). Feature-based generalization as a source of gradient acceptability. Phonology, 26(1): 9–41. doi: 10.1017/S0952675709001705 [DOI] [Google Scholar]
  2. Albright A, & Hayes B (2003). Rules vs. analogy in English past tenses: A computational/experimental study. Cognition, 90(2), 119–161. doi: 10.1016/s0010-0277(03)00146-x [DOI] [PubMed] [Google Scholar]
  3. Amunts K, Weiss PH, Mohlberg H, Pieperhoff P, Eickhoff S, Gurd JM, … & Zilles K (2004). Analysis of neural mechanisms underlying verbal fluency in cytoarchitectonically defined stereotaxic space—the roles of Brodmann areas 44 and 45. Neuroimage, 22(1), 42–56. doi: 10.1016/j.neuroimage.2003.12.031 [DOI] [PubMed] [Google Scholar]
  4. Anshen F, & Aronoff M (1988). Producing morphologically complex words. Linguistics, 26(4), 641–655. doi: 10.1515/ling.1988.26.4.641 [DOI] [Google Scholar]
  5. Archangeli DB, & Pulleyblank DG (1994). Grounded phonology (No. 25). MIT Press. [Google Scholar]
  6. Ardila A, Bernal B, & Rosselli M (2017). Should Broca’s area include Brodmann area 47?. Psicothema, 29(1), 73–77. [DOI] [PubMed] [Google Scholar]
  7. Aron AR, Fletcher PC, Bullmore ET, Sahakian BJ, & Robbins TW (2003). Stop-signal inhibition disrupted by damage to right inferior frontal gyrus in humans. Nature Neuroscience, 6(2), 115–116. doi: 10.1038/nn1003 [DOI] [PubMed] [Google Scholar]
  8. Axer H, von Keyserlingk AG, Berks G, von Keyserlingk DG (2001). Supra- and infrasylvian conduction aphasia. Brain Lang 76: 317–331 [DOI] [PubMed] [Google Scholar]
  9. Bader M, & Häussler J (2010). Toward a model of grammaticality judgments. Journal of Linguistics, 46(2), 273–330. doi: 10.1017/S0022226709990260 [DOI] [Google Scholar]
  10. Bailey TM, & Hahn U (2001). Determinants of wordlikeness: phonotactics or lexical neighborhoods? Journal of Memory and Language, 44(4), 568–591. doi: 10.1006.jmla.2000.2756. [Google Scholar]
  11. Bates D, Maechler M, & Bolker B (2012). Lme4: Linear mixed-effects models using S4 classes. R package version 0.999999-0. [Google Scholar]
  12. Bauer L (2001). Morphological Productivity. Cambridge University Press. doi: 10.1017/CBO9780511486210 [DOI] [Google Scholar]
  13. Becker M, Sommer T, & Kühn S (2020). Inferior frontal gyrus involvement during search and solution in verbal creative problem solving: A parametric fMRI study. Neuroimage, 206, 116294. doi: 10.1016/j.neuroimage.2019.116294 [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Benjamini Y, & Hochberg Y (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, B (Methodological), 57(1), 289–300. doi: 10.1111/j.2517-6161.1995.tb02031.x [DOI] [Google Scholar]
  15. Berent I, Steriade D, Lennertz T, & Vaknin V (2007). What we know about what we have never heard: Evidence from perceptual illusions. Cognition, 104(3), 591–630. doi: 10.1016/j.cognition.2006.05.015 [DOI] [PubMed] [Google Scholar]
  16. Berent I, Pan H, Zhao X, Epstein J, Bennett ML, Deshpande V, Seethamraju RT, & Stern E (2014). Language Universals Engage Broca’s Area. PloS One, 9(4), e95155. doi: 10.1371/journal.pone.0095155 [DOI] [PMC free article] [PubMed] [Google Scholar]
  17. Berent I, Brem AK, Zhao X, Seligson E, Pan H, Epstein J, … & Pascual-Leone A (2015). Role of the motor system in language knowledge. Proceedings of the National Academy of Sciences, 112(7), 1983–1988. doi: 10.1073/pnas.1416851112 [DOI] [PMC free article] [PubMed] [Google Scholar]
  18. Berko J (1958). The child’s acquisition of English morphology. Word, 14(2–3), 150–177. doi: 10.1080/00437956.1958.11659661 [DOI] [Google Scholar]
  19. Bernal B, Ardila A, & Rosselli M (2015). Broca’s area network in language function: A pooling-data connectivity study. Frontiers in Psychology, 6, 687. [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Biran M, Friedmann N (2005). From phonological paraphasias to the structure of the phonological output lexicon. Lang Cogn Proc 20. [Google Scholar]
  21. Boersma P, & Weenink D (2018): Praat: doing phonetics by computer [Computer program]. Version 6.0.37, retrieved 14 March 2018 from http://www.praat.org/.
  22. Bookheimer S (2002). Functional MRI of language: new approaches to understanding the cortical organization of semantic processing. Annu Rev Neurosci 25, 151–188, doi: 10.1146/annurev.neuro.25.112701.142946. [DOI] [PubMed] [Google Scholar]
  23. Burani C, Vallar G, & Bottini G (1991). Articulatory coding and phonological judgements on written words and pictures: The role of the phonological output buffer. European Journal of Cognitive Psychology, 3(4), 379–398. doi: 10.1080/09541449108406235 [DOI] [Google Scholar]
  24. Burton MW (2001). The role of inferior frontal cortex in phonological processing. Cognitive Science, 25(5): 695–709. doi: 10.1207/s15516709cog2505_4 [DOI] [Google Scholar]
  25. Buchsbaum BR, Baldo J, Okada K, Berman KF, Dronkers N, D’Esposito M, & Hickok G (2011). Conduction aphasia, sensory-motor integration, and phonological short-term memory – an aggregate analysis of lesion and fMRI data. Brain and Language, 119(3): 119–128. doi: 10.1016/j.bandl.2010.12.001 [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. Bybee J (1985). Morphology: A study of the relation between meaning and form. Amsterdam: John Benjamins Publishing Company. [Google Scholar]
  27. Bybee J (1995). Regular morphology and the lexicon. Language and Cognitive Processes 10(5), 425–255. doi: 10.1080/01690969508407111 [DOI] [Google Scholar]
  28. Bybee J (2001). Phonology and language use. Cambridge: Cambridge University Press. [Google Scholar]
  29. Bybee J (2008). Formal universals as emergent phenomena: The origins of structure preservation. In Linguistic Universals and Language Change (pp. 108–121). Oxford University Press. [Google Scholar]
  30. Bybee JL, & Pardo E (1981). On lexical and morphological conditioning of alternations: a nonce-probe experiment with Spanish verbs. Linguistics, 19(9–10), 937–968. doi: 10.1515/ling.1981.19.9-10.937 [DOI] [Google Scholar]
  31. Chomsky N, and Miller G (1963). “Introduction to the formal analysis of natural languages,” in Handbook of Mathematical Psychology: I. Vol. 2, eds Luce R, Bush RR, and Galanter EE (New York, NY: John Wiley; ), 269–321. [Google Scholar]
  32. Chomsky N (1965). Aspects of the Theory of Syntax. Cambridge. Multilingual Matters: MIT Press. [Google Scholar]
  33. Chomsky N, & Halle M (1965). Some controversial questions in phonological theory. Journal of Linguistics, 1(2), 97–138. doi: 10.1017/S0022226700001134 [DOI] [Google Scholar]
  34. Chomsky N, & Halle M (1968). The sound pattern of English. Cambridge: MIT Press. [Google Scholar]
  35. Coetzee AW (2008). Grammaticality and ungrammaticality in phonology. Language, 84 (2), 218–257. doi: 10.1353/lan.0.0000 [DOI] [Google Scholar]
  36. Coetzee AW (2009). Grammar is both categorical and gradient. In Parker S (Ed.), Phonological Argumentation: Essays on Evidence and Motivation, (pp. 9–42). London: Equinox. [Google Scholar]
  37. Conner CR, Kadipasaoglu CM, Shouval HZ, Hickok G, & Tandon N (2019). Network dynamics of Broca’s area during word selection. Plos One, 14(12), e0225756. [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. Coslett HB, Roeltgen DP, Gonzalez RL, Heilman KM (1987). Transcortical sensory aphasia: evidence for subtypes. Brain Lang 32: 362–378. [DOI] [PubMed] [Google Scholar]
  39. D’Ausilio A, Pulvermüller F, Salmas P, Bufalari I, Begliomini C, & Fadiga L (2009). The motor somatotopy of speech perception. Current Biology, 19(5), 381–385. doi: 10.1016/j.cub.2009.01.017 [DOI] [PubMed] [Google Scholar]
  40. D’Ausilio A, Bufalari I, Salmas P, & Fadiga L (2012). The role of the motor system in discriminating normal and degraded speech sounds. Cortex, 48(7), 882–887. doi: 10.1016/j.cortex.2011.05.017 [DOI] [PubMed] [Google Scholar]
  41. Dale AM, Fischl B, & Sereno MI (1999). Cortical surface-based analysis: I. Segmentation and surface reconstruction. Neuroimage, 9(2), 179–194. 10.1006/nimg.1998.0395 [DOI] [PubMed] [Google Scholar]
  42. Davidson L (2007). The relationship between the perception of non-native phonotactics and loanword adaptation. Phonology, 24(2), 261–286. doi: 10.1017/S0952675707001200. [DOI] [Google Scholar]
  43. de Zubicaray GI, & McMahon KL (2009). Auditory context effects in picture naming investigated with event-related fMRI. Cognitive, Affective, & Behavioral Neuroscience, 9(3), 260–269. doi: 10.3758/CABN.9.3.260 [DOI] [PubMed] [Google Scholar]
  44. Dong Y, Fukuyama H, Honda M, Okada T, Hanakawa T, Nakamura K, … & Shibasaki H (2000). Essential role of the right superior parietal cortex in Japanese kana mirror reading: An fMRI study. Brain, 123(4), 790–799. [DOI] [PubMed] [Google Scholar]
  45. Eddington D (1996). Diphthongization in Spanish derivational morphology: An empirical investigation. Hispanic Linguistics, 8(1), 1–35. [Google Scholar]
  46. Elmer S (2016). Broca pars triangularis constitutes a “hub” of the language-control network during simultaneous language translation. Frontiers in Human Neuroscience, 10, 491. doi: 10.3389/fnhum.2016.00491 [DOI] [PMC free article] [PubMed] [Google Scholar]
  47. Ernestus M, & Baayen RH (2003). Predicting the unpredictable: Interpreting neutralized segments in Dutch. Language, 79(1), 5–38. [Google Scholar]
  48. Evans N, & Levinson SC (2009). The myth of language universals: Language diversity and its importance for cognitive science. Behavioral and Brain Sciences, 32(5), 429–448. doi: 10.1017/S0140525X0999094X [DOI] [PubMed] [Google Scholar]
  49. Fadiga L, Craighero L, Buccino G, & Rizzolatti G (2002). Speech listening specifically modulates the excitability of tongue muscles: a TMS study. European Journal of Neuroscience, 15(2), 399–402. doi: 10.1046/j.0953-816x.2001.01874.x. [DOI] [PubMed] [Google Scholar]
  50. Featherston S (2007). Data in generative grammar: The stick and the carrot. Theoretical Linguistics, 33(3), 269–318. doi: 10.1515/TL.2007.020 [DOI] [Google Scholar]
  51. Fischl B, Sereno MI, Tootell RBH, & Dale AM (1999). High resolution intersubject averaging and a coordinate system for the cortical surface. Human Brain Mapping, 8(4), 272–284. doi: [DOI] [PMC free article] [PubMed] [Google Scholar]
  52. Fitch WT, & Friederici AD (2012). Artificial grammar learning meets formal language theory: an overview. Philos Trans R Soc Lond B Biol Sci 367: 1933–1955. [DOI] [PMC free article] [PubMed] [Google Scholar]
  53. Frisch SA, Large NR, & Pisoni D 2000. Perception of wordlikeness: effects of segment probability and length on the processing of nonwords. Journal of Memory and Language, 42(4), 481–496. doi: 10.1006/jmla.1999.2692 [DOI] [PMC free article] [PubMed] [Google Scholar]
  54. Gafos AI (1999). The articulatory basis of locality in phonology. NY: Taylor & Francis. [Google Scholar]
  55. Galantucci B, Fowler CA, & Turvey MT (2006) The motor theory of speech perception reviewed. Psychon Bulletin and Review 13(3),361–377. doi: 10.3758/BF03193857. [DOI] [PMC free article] [PubMed] [Google Scholar]
  56. Galle ME, Klein-Packard J, Schreiber K, & McMurray B (2019). What are you waiting for? Real-time integration of cues for fricatives suggests encapsulated auditory memory. Cognitive science, 43(1), e12700. doi: 10.1111/cogs.12700. [DOI] [PMC free article] [PubMed] [Google Scholar]
  57. Ghaleh M, Skipper-Kallal LM, Xing S, Lacey E, DeWitt I, DeMarco A, & Turkeltaub P (2018). Phonotactic processing deficit following left-hemisphere stroke. Cortex, 99, 346–357. doi: 10.1016/j.cortex.2017.12.010 [DOI] [PMC free article] [PubMed] [Google Scholar]
  58. Ghaleh M, Lacey EH, DeWitt I, & Turkeltaub PE (2019). Left middle temporal gyrus lesion predicts impairment of phonotactic processing. Conference Abstract: Academy of Aphasia 55th Annual Meeting. doi: 10.3389/conf.fnhum.2017.223.00005 [DOI] [Google Scholar]
  59. Gibson E, Piantadosi S, & Fedorenko K (2011). Using Mechanical Turk to obtain and analyze English acceptability judgments. Language and Linguistics Compass, 5(8), 509–524. doi: 10.1111/j.1749-818X.2011.00295.x [DOI] [Google Scholar]
  60. Goldrick M (2011). Using Psychological Realism to Advance Phonological Theory. In Goldsmith J, Riggle J & Yu ACL (Eds.), The Handbook of Phonological Theory: Second Edition (pp. 631–660). Wiley Blackwell. doi: 10.1002/9781444343069.ch19 [DOI] [Google Scholar]
  61. Goldsmith J, & Laks B (2010). Generative phonology: its origins, its principles, and its successors. The Cambridge History of Linguistics. [Google Scholar]
  62. Gow DW (2012). The cortical organization of lexical knowledge: A dual lexicon model of spoken language processing. Brain and Language, 121(3), 273–288. doi: 10.1016/j.bandl.2012.03.005 [DOI] [PMC free article] [PubMed] [Google Scholar]
  63. Gow DW, & Segawa JA (2009). Articulatory mediation of speech perception: a causal analysis of multi-modal imaging data. Cognition, 110(2), 222–236. doi: 10.1016/j.cognition.2008.11.011 [DOI] [PubMed] [Google Scholar]
  64. Gow DW, & Caplan DN (2012). New levels of language processing complexity and organization revealed by granger causation. Frontiers in Psychology, 3, 506. doi: 10.3389/fpsyg.2012.00506 [DOI] [PMC free article] [PubMed] [Google Scholar]
  65. Gow DW, & Nied AC (2014). Rules from words: Phonotactic biases in speech perception. PloS One, 9(1), 1–12. doi: 10.1371/journal.pone.0086212. [DOI] [PMC free article] [PubMed] [Google Scholar]
  66. Gow DW, & Olson BB (2015). Lexical mediation of phonotactic frequency effects on spoken word recognition: A Granger causality analysis of MRI-constrained MEG/EEG data. Journal of Memory and Language, 82, 41–55. doi: 10.1016/j.jml.2015.03.004 [DOI] [PMC free article] [PubMed] [Google Scholar]
  67. Gow DW, Schoenhaut A, Avcu E, & Ahlfors SP (2021). Behavioral and neurodynamic effects of word learning on phonotactic repair. Frontiers in psychology, 12, 494. doi: 10.3389/fpsyg.2021.590155. [DOI] [PMC free article] [PubMed] [Google Scholar]
  68. Gramfort A, Luessi M, Larson E, Engemann DA, Strohmeier D, Brodbeck C, Parkkonen P, & Hamalainen MS (2014). MNE software for processing MEG and EEG data. Neuroimage, 86, 446–460. doi: 10.1016/j.neuroimage.2013.10.027 [DOI] [PMC free article] [PubMed] [Google Scholar]
  69. Graves WW, Grabowski TJ, Mehta S, Gordon JK (2007). A neural signature of phonological access: distinguishing the effects of word frequency from familiarity and length in overt picture naming. J Cogn Neurosci 19: 617–631. [DOI] [PubMed] [Google Scholar]
  70. Greenberg JH, & Jenkins JJ (1964). Studies in the psychological correlates of the sound system of American English. Word, 20(2), 157–177. doi: 10.1080/00437956.1964.11659816 [DOI] [Google Scholar]
  71. Grodzinsky Y, & Amunts K (Eds.). (2006). Broca’s region. Oxford University Press. [Google Scholar]
  72. Hagoort P (2005). On Broca, brain, and binding: a new framework. Trends in Cognitive Sciences, 9(9), 416–423. doi: 10.1016/j.tics.2005.07.004 [DOI] [PubMed] [Google Scholar]
  73. Halle M (1962). Phonology in generative grammar. Word, 18(1–3), 54–72. doi: 10.1080/00437956.1962.11659765 [DOI] [Google Scholar]
  74. Halle M (1964). On the bases of phonology. In Fodor JA & Katz JJ (Eds.), The Structure of Language (pp. 604–612). Englewood Cliffs: Prentice Hall. [Google Scholar]
  75. Harvey DY, Mass JA, Shah-Basak PP, Wurzman R, Faseyitan O, Sacchetti DL, … & Hamilton RH (2019). Continuous theta burst stimulation over right pars triangularis facilitates naming abilities in chronic post-stroke aphasia by enhancing phonological access. Brain and Language, 192, 25–34. doi: 10.1016/j.bandl.2019.02.005 [DOI] [PMC free article] [PubMed] [Google Scholar]
  76. Havlicek M, Jan J, Brazdil M, & Calhoun VD (2010). Dynamic Granger causality based on Kalman filter for evaluation of functional network connectivity in fMRI data. Neuroimage, 53(1), 65–77. doi: 10.1016/j.neuroimage.2010.05.063 [DOI] [PMC free article] [PubMed] [Google Scholar]
  77. Havlicek M, Friston KJ, Jan J, Brazdil M, & Calhoun VD (2011). Dynamic modeling of neuronal responses in fMRI using cubature Kalman filtering. Neuroimage, 56(4), 2109–2128. doi: 10.1016/j.neuroimage.2011.03.005 [DOI] [PMC free article] [PubMed] [Google Scholar]
  78. Hayes BP (2000). Gradient well-formedness in Optimality Theory. In Dekkers J, van der Leeuw F & van de Weijer J (Eds.), Optimality Theory: Phonology, Syntax, and Acquisition (88–120). Oxford 48niversity Press, Oxford. [Google Scholar]
  79. Hayes B and White J 2013. Phonological naturalness and phonotactic learning. Linguistic Inquiry, 44(1). doi: 10.1162/LING_a_00119. [DOI] [Google Scholar]
  80. Heim S, Alter K, Ischebeck AK, Amunts K, Eickhoff SB, Mohlberg H, Zilles K, von Cramon DY, & Friederici AD (2005). The role of the left Brodmann’s areas 44 and 45 in reading words and pseudowords. Brain research. Cognitive brain research, 25(3), 982–993. doi: 10.1016/j.cogbrainres.2005.09.022 [DOI] [PubMed] [Google Scholar]
  81. Heim S, Eickhoff SB, Ischebeck AK, Friederici AD, Stephan KE, & Amunts K (2009). Effective connectivity of the left BA 44, BA 45, and inferior temporal gyrus during lexical and phonological decisions identified with DCM. Human Brain Mapping, 30(2), 392–402. doi: 10.1002/hbm.20512 [DOI] [PMC free article] [PubMed] [Google Scholar]
  82. Heinz J, & Idsardi W (2011). Sentence and word complexity. Science, 333(6040), 295–297. doi: 10.1126/science.1210358. [DOI] [PubMed] [Google Scholar]
  83. Heinz J, & Idsardi W (2013). What complexity differences reveal about domains in language. Topics in cognitive science, 5(1), 111–131. doi: 10.1111/tops.12000. [DOI] [PubMed] [Google Scholar]
  84. Hickok G, & Poeppel D (2007). The cortical organization of speech processing. Nature Reviews Neuroscience, 8(5), 393–402. doi: 10.1038/nrn2113. [DOI] [PubMed] [Google Scholar]
  85. Hu S, Ide JS, Zhang S, & Li CR (2016). The right superior frontal gyrus and individual variation in proactive control of impulsive response. Journal of Neuroscience, 36(50), 12688–12696. doi: 10.1523/JNEUROSCI.1175-16.2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  86. Hymes D (1992). The concept of communicative competence revisited. Thirty years of linguistic evolution, 31–57. [Google Scholar]
  87. Inkelas S (2014).The interplay of morphology and phonology. Oxford, UK: Oxford University Press. [Google Scholar]
  88. Ischebeck A, Indefrey P, Usui N, Nose I, Hellwig F, & Taira M (2004). Reading in a regular orthography: an FMRI study investigating the role of visual familiarity. Journal of Cognitive Neuroscience, 16(5), 727–741. [DOI] [PubMed] [Google Scholar]
  89. Jackson RL (2020). The neural correlates of semantic control revisited. NeuroImage, 224, 117444. doi: 10.1016/j.neuroimage.2020.117444 [DOI] [PMC free article] [PubMed] [Google Scholar]
  90. Jacquemot C, Pallier C, LeBihan D, Dehaene S, & Dupoux E (2003). Phonological grammar shapes the auditory cortex: A functional magnetic resonance imaging study. Journal of Neuroscience, 23(29), 9541–9546. doi: 10.1523/JNEUROSCI.23-29-09541.2003 [DOI] [PMC free article] [PubMed] [Google Scholar]
  91. Japee S, Holiday K, Satyshur MD, Mukai I, & Ungerleider LG (2015). A role of right middle frontal gyrus in reorienting of attention: a case study. Frontiers in Systems Neuroscience, 9, 23. doi: 10.3389/fnsys.2015.00023. [DOI] [PMC free article] [PubMed] [Google Scholar]
  92. Kawahara S, & Kao S (2012). The productivity of a root-initial accenting suffix,[-zu]: judgement studies. Natural Language & Linguistic Theory, 30(3), 837–857. doi: 10.1007/s11049-011-9132-6. [DOI] [Google Scholar]
  93. Kawasaki H, & Ohala JJ (1980). Acoustic basis for universal constraints on sound sequences. The Journal of the Acoustical Society of America, 68(S1), S33–S33. [Google Scholar]
  94. Kazui H, Kitagi H, & Mori E (2000). Cortical activation during retrieval of arithmetical facts and actual calculation: A functional magnetic resonance imaging study. Psychiatry and Clinical Neuroscience, 54(4),479–485. doi: 10.1046/j.1440-1819.2000.00739.x [DOI] [PubMed] [Google Scholar]
  95. Kenstowicz MJ (1994). Phonology in generative grammar (Vol. 7). Cambridge, MA: Blackwell. [Google Scholar]
  96. Kim H (2010). Dissociating the roles of the default-mode, dorsal, and ventral networks in episodic memory retrieval. Neuroimage, 50(4), 1648–1657. doi: 10.1016/j.neuroimage.2010.01.051 [DOI] [PubMed] [Google Scholar]
  97. Kleiner M, Brainard D, Pelli D, Ingling A, Murray R, & Broussard C (2007). What’s new in psychtoolbox-3. Perception, 36(14), 1–16. [Google Scholar]
  98. Koenigs M, Barbey AK, Postle BR, & Grafman J (2009). Superior parietal cortex is critical for manipulation of information in working memory. Journal of Neuroscience, 29(47), 14980–14986. doi: 10.1523/JNEUROSCI.3706-09.2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  99. Kuznetsova A, Brockhoff PB, & Christensen RH (2017). lmerTest package: tests in linear mixed effects models. Journal of statistical software, 82(13), 1–26. doi: 10.18637/jss.v082.i13. [DOI] [Google Scholar]
  100. LaCroix AN, Diaz AF, & Rogalsky C (2015). The relationship between the neural computations for speech and music perception is context-dependent: An activation likelihood estimate study. Frontiers in Psychology, 6, 1138. doi: 10.3389/fpsyg.2015.01138 [DOI] [PMC free article] [PubMed] [Google Scholar]
  101. Lakoff G, & Johnson M (1999). Philosophy in the flesh: The embodied mind and its challenge to western thought. New York, NY: Basic Books. [Google Scholar]
  102. Lau JH, Clark A, & Lappin S (2017). Grammaticality, acceptability, and probability: A probabilistic view of linguistic knowledge. Cognitive Science, 41, 1202–1241. doi: 10.1111/cogs.12414 [DOI] [PubMed] [Google Scholar]
  103. Lemaire JJ, Golby A, Wells WM III, Pujol S, Tie Y, Rigolo L, & Kikinis R (2013). Extended Broca’s area in the functional connectome of language in adults: Combined cortical and subcortical single-subject analysis using fMRI and DTI tractography. Brain Topography, 26(3), 428–441. [DOI] [PMC free article] [PubMed] [Google Scholar]
  104. Liberman AM, Cooper FS, Shankweiler DP, & Studdert-Kennedy M (1967). Perception of the speech code. Psychological Review, 74(6), 431–461. doi: 10.1037/h0020279 [DOI] [PubMed] [Google Scholar]
  105. Lieberman MD, Chang GY, Chiao J, Bookheimer SY, & Knowlton BJ (2004). An event-related fMRI study of artificial grammar learning in a balanced chunk strength design. Journal of Cognitive Neuroscience, 16, 427–438. [DOI] [PubMed] [Google Scholar]
  106. Luce PA, & Pisoni DB (1998). Recognizing spoken words: The Neighborhood Activation Model. Ear and Hearing, 19(1), 1–36. doi: 10.1097/00003446-199802000-00001 [DOI] [PMC free article] [PubMed] [Google Scholar]
  107. Luce PA, & Large NR (2001). Phonotactics, density, and entropy in spoken word recognition. Language and Cognitive Processes, 16(5–6), 565–581. doi: 10.1080/01690960143000137 [DOI] [Google Scholar]
  108. MacNeilage PF (2008). The Origin of Speech. Oxford, UK: Oxford University Press, p. xi. [Google Scholar]
  109. Mahowald K, Dautriche I, Gibson E, & Piantadosi ST (2018). Word forms are structured for efficient use. Cognitive Science, 42(8), 3116–3134. [DOI] [PubMed] [Google Scholar]
  110. Massaro DW, & Cohen MM (1983). Phonological context in speech perception. Perception & Psychophysics, 34(4), 338–348. doi: 10.3758/bf03203046 [DOI] [PubMed] [Google Scholar]
  111. Matchin WG (2018). A neuronal retuning hypothesis of sentence-specificity in Broca’s area. Psychonomic Bulletin & Review, 25(5), 1682–1694. doi: 10.3758/s13423-017-1377-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
  112. McClelland JL, & Elman JL (1986). The TRACE model of speech perception. Cognitive Psychology, 18(1), 1–86. doi: 10.1016/0010-0285(86)90015-0 [DOI] [PubMed] [Google Scholar]
  113. McClelland JL (1991). Stochastic interactive processes and the effect of context on perception. Cognitive Psychology, 23(1), 1–44. doi: 10.1016/0010-0285(91)90002-6 [DOI] [PubMed] [Google Scholar]
  114. Menon V, Rivera SM, White CD, Glover GH, & Reiss AL (2000). Dissociating prefrontal and parietal cortex activation during arithmetic processing. NeuroImage, 12(4), 357–365. doi: 10.1006/nimg.2000.0613 [DOI] [PubMed] [Google Scholar]
  115. Mesgarani N, Cheung C, Johnson K, & Chang EF (2014). Phonetic feature encoding in human superior temporal gyrus. Science, 343(6174), 1006–1010. doi: 10.1126/science.1245994 [DOI] [PMC free article] [PubMed] [Google Scholar]
  116. Milde T, Leistritz L, Astolfi L, Miltner WH, Weiss T, Babiloni F, & Witte H (2010). A new Kalman filter approach for the estimation of high-dimensional time-variant multivariate AR models and its application in analysis of laser-evoked brain potentials. NeuroImage, 50(3), 960–969. doi: 10.1016/j.neuroimage.2009.12.110 [DOI] [PubMed] [Google Scholar]
  117. Miyamoto JJ, Honda M, Saito DN, Okada T, Ono T, Ohyama K, & Sadato N (2006). The representation of the human oral area in the somatosensory cortex: a functional MRI study. Cerebral Cortex, 16(5), 669–675. doi: 10.1093/cercor/bhj012 [DOI] [PubMed] [Google Scholar]
  118. Moreton E, & Pater J (2012a). Structure and substance in artificial-phonology learning. Part I: Structure. Language and Linguistics Compass, 6, 686–701. doi: 10.1002/lnc3.363 [DOI] [Google Scholar]
  119. Moreton E, & Pater J (2012b). Structure and substance in artificial-phonology learning. Part II: Substance. Language and Linguistics Compass, 6, 702–718. doi: 10.1002/lnc3.366 [DOI] [Google Scholar]
  120. Möttönen R, & Watkins KE (2009). Motor representations of articulators contribute to categorical perception of speech sounds. Journal of Neuroscience, 29(31), 9819–9825. doi: 10.1523/JNEUROSCI.6018-08.2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  121. Musso M, Moro A, Glauche V, Rijntjes M, Reichenbach J, Büchel C, & Weiller C (2003). Broca’s area and the language instinct. Nature Neuroscience, 6(7), 774–781. doi: 10.1038/nn1077. [DOI] [PubMed] [Google Scholar]
  122. Newmeyer FJ (2003). Grammar is grammar and usage is usage. Language, 79(4), 682–707. [Google Scholar]
  123. Noonan KA, Jefferies E, Visser M, & Lambon Ralph M (2013). Going beyond inferior prefrontal involvement in semantic control: evidence for the additional contribution of dorsal angular gyrus and posterior middle temporal cortex. Journal of Cognitive Neuroscience, 25(11), 1824–1850. doi: 10.1162/jocn_a_00442 [DOI] [PubMed] [Google Scholar]
  124. Obrig H, Mentzel J, & Rossi S (2016). Universal and language-specific sublexical cues in speech perception: a novel electroencephalography-lesion approach. Brain, 139(6), 1800–1816. doi: 10.1093/brain/aww077 [DOI] [PubMed] [Google Scholar]
  125. Ohala J, & Ohala M (1986). Testing hypotheses regarding the psychological manifestation of morpheme structure constraints. In Ohala JJ & Jaeger JJ (Eds.), Experimental phonology (pp. 239–252). Orlando, FL: Academic Press. [Google Scholar]
  126. Opitz B, & Friederici AD (2003). Interactions of the hippocampal system and the prefrontal cortex in learning language-like rules. NeuroImage, 19, 1730–1737. [DOI] [PubMed] [Google Scholar]
  127. Overduin SA, & Servos P (2004). Distributed digit somatotopy in primary somatosensory cortex. NeuroImage, 23(2), 462–472. doi: 10.1016/j.neuroimage.2004.06.024 [DOI] [PubMed] [Google Scholar]
  128. Pardo JV, Wood TD, Costello PA, Pardo PJ, & Lee JT (1997). PET study of the localization and laterality of lingual somatosensory processing in humans. Neuroscience Letters, 234(1), 23–26. doi: 10.1016/s0304-3940(97)00650-2 [DOI] [PubMed] [Google Scholar]
  129. Patterson K, Nestor PJ, & Rogers TT (2007). Where do you know what you know? The representation of semantic knowledge in the human brain. Nature Reviews Neuroscience, 8(12), 976–987. [DOI] [PubMed] [Google Scholar]
  130. Paulesu E, Frith CD, & Frackowiak RS (1993). The neural correlates of the verbal component of working memory. Nature, 362(6418), 342–345. doi: 10.1038/362342a0 [DOI] [PubMed] [Google Scholar]
  131. Pertz DL, & Bever TG (1975). Sensitivity to phonological universals in children and adolescents. Language, 149–162. [Google Scholar]
  132. Phillips C, Wagers MW, & Lau EF (2011). Grammatical illusions and selective fallibility in real-time language comprehension. Experiments at the Interfaces, 37, 147–180. doi: 10.1163/9781780523750_006 [DOI] [Google Scholar]
  133. Pierrehumbert J (2002). An unnatural process. Laboratory Phonology, 8. [Google Scholar]
  134. Pitt MA (1998). Phonological processes and the perception of phonotactically illegal consonant clusters. Perception and Psychophysics, 60(6), 941–951. doi: 10.3758/BF03211930 [DOI] [PubMed] [Google Scholar]
  135. Pitt MA, & Samuel AG (1995). Lexical and sublexical feedback in auditory word recognition. Cognitive Psychology, 29(2), 149–188. doi: 10.1006/cogp.1995.1014 [DOI] [PubMed] [Google Scholar]
  136. Plag I (1999). Morphological Productivity: Structural Constraints in English Derivation. Berlin: Mouton de Gruyter. [Google Scholar]
  137. Poeppel D (1996). A critical review of PET studies of phonological processing. Brain and Language, 55(3), 317–351. doi: 10.1006/brln.1996.0108 [DOI] [PubMed] [Google Scholar]
  138. Poeppel D, Idsardi WJ, & van Wassenhove V (2008). Speech perception at the interface of neurobiology and linguistics. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 363(1493), 1071–1086. doi: 10.1098/rstb.2007.2160. [DOI] [PMC free article] [PubMed] [Google Scholar]
  139. Prabhakaran R, Blumstein SE, Myers EB, Hutchison E, & Britton B (2006). An event-related fMRI investigation of phonological–lexical competition. Neuropsychologia, 44, 2209–2221. [DOI] [PubMed] [Google Scholar]
  140. Price CJ (2010). The anatomy of language: a review of 100 fMRI studies published in 2009. Annals of the New York Academy of Sciences, 1191(1), 62–88. doi: 10.1111/j.1749-6632.2010.05444.x [DOI] [PubMed] [Google Scholar]
  141. Pulvermüller F, Huss M, Kherif F, del Prado Martin FM, Hauk O, & Shtyrov Y (2006). Motor cortex maps articulatory features of speech sounds. Proceedings of the National Academy of Sciences, 103(20), 7865–7870. doi: 10.1073/pnas.0509989103 [DOI] [PMC free article] [PubMed] [Google Scholar]
  142. Pulvermüller F, & Fadiga L (2010). Active perception: Sensorimotor circuits as a cortical basis for language. Nature Reviews Neuroscience, 11(5), 351–360. doi: 10.1038/nrn281 [DOI] [PubMed] [Google Scholar]
  143. R Core Team (2022). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL https://www.R-project.org/. [Google Scholar]
  144. Ranganath C, Johnson MK, & D’Esposito M (2003). Prefrontal activity associated with working memory and episodic long-term memory. Neuropsychologia, 41(3), 378–389. doi: 10.1016/s0028-3932(02)00169-0 [DOI] [PubMed] [Google Scholar]
  145. Reali F, & Griffiths TL (2009). The evolution of frequency distributions: relating regularization to inductive biases through iterated learning. Cognition, 111(3), 317–328. doi: 10.1016/j.cognition.2009.02.012. [DOI] [PubMed] [Google Scholar]
  146. Righi G, Blumstein SE, Mertus J, & Worden MS (2009). Neural systems underlying lexical competition: An eye tracking and fMRI study. Journal of Cognitive Neuroscience, 22, 213–224. [DOI] [PMC free article] [PubMed] [Google Scholar]
  147. Rickard TC, Romero SG, Basso G, Wharton C, Flitman S, & Grafman J (2000). The calculating brain: an fMRI study. Neuropsychologia, 38(3), 325–335. doi: 10.1016/s0028-3932(99)00068-8. [DOI] [PubMed] [Google Scholar]
  148. Rossi S, Jurgenson IB, Hanulikova A, Telkemeyer S, Wartenburger I, & Obrig H (2011). Implicit processing of phonotactic cues: evidence from electrophysiological and vascular responses. Journal of Cognitive Neuroscience, 23(7), 1752–1764. doi: 10.1162/jocn.2010.21547 [DOI] [PubMed] [Google Scholar]
  149. Rossi S, Hartmüller T, Vignotto M, & Obrig H (2013). Electrophysiological evidence for modulation of lexical processing after repetitive exposure to foreign phonotactic rules. Brain and Language, 127(3), 404–414. doi: 10.1016/j.bandl.2013.02.009 [DOI] [PubMed] [Google Scholar]
  150. Scholes RJ (1966). Phonotactic Grammaticality. Janua Linguarum. The Hague: Mouton. [Google Scholar]
  151. Schomers MR, & Pulvermüller F (2016). Is the sensorimotor cortex relevant for speech perception and understanding? An integrative review. Frontiers in Human Neuroscience, 10, 435. doi: 10.3389/fnhum.2016.00435. [DOI] [PMC free article] [PubMed] [Google Scholar]
  152. Schütze C (2016). The empirical base of linguistics: Grammaticality judgments and linguistic methodology. Language Science Press. [Google Scholar]
  153. Schwartz JL, Abry C, Boë LJ, & Cathiard M (2002). Phonology in a theory of perception-for-action control. In Durand J & Laks B (Eds.), Phonetics, phonology, and cognition (pp. 254–280). New York, NY: Oxford University Press. [Google Scholar]
  154. Schwering SC, & MacDonald MC (2020). Verbal working memory as emergent from language comprehension and production. Frontiers in human neuroscience, 14, 68. doi: 10.3389/fnhum.2020.00068. [DOI] [PMC free article] [PubMed] [Google Scholar]
  155. Seghier ML (2013). The angular gyrus: multiple functions and multiple subdivisions. The Neuroscientist, 19(1), 43–61. doi: 10.1177/1073858412440596 [DOI] [PMC free article] [PubMed] [Google Scholar]
  156. Shademan S (2007). Grammar and analogy in phonotactic well-formedness judgments. Los Angeles, CA: University of California dissertation. [Google Scholar]
  157. Shomstein S, & Yantis S (2006). Parietal cortex mediates voluntary control of spatial and nonspatial auditory attention. Journal of Neuroscience, 26(2), 435–439. doi: 10.1523/JNEUROSCI.4408-05.2006 [DOI] [PMC free article] [PubMed] [Google Scholar]
  158. Smolensky P, & Prince A (1993). Optimality Theory: Constraint interaction in generative grammar. Optimality Theory in Phonology, 3. [Google Scholar]
  159. Sprouse J, & Almeida D (2017). Design sensitivity and statistical power in acceptability judgment experiments. Glossa: A Journal of General Linguistics, 2(1), 14. doi: 10.5334/gjgl.236. [DOI] [Google Scholar]
  160. Strange BA, Henson RN, Friston KJ, & Dolan RJ (2001). Anterior prefrontal cortex mediates rule learning in humans. Cerebral Cortex, 11, 1040–1046. [DOI] [PubMed] [Google Scholar]
  161. Tavazoie S, Hughes JD, Campbell MJ, Cho RJ, & Church GM (1999). Systematic determination of genetic network architecture. Nature Genetics, 22(3), 281–285. doi: 10.1038/10343. [DOI] [PubMed] [Google Scholar]
  162. Thompson-Schill SL, Bedny M, & Goldberg RF (2005). The frontal lobes and the regulation of mental activity. Current Opinion in Neurobiology, 15(2), 219–224. doi: 10.1016/j.conb.2005.03.006 [DOI] [PubMed] [Google Scholar]
  163. Tremblay P, & Small SL (2011a). On the context-dependent nature of the contribution of the ventral premotor cortex to speech perception. NeuroImage, 57(4), 1561–1571. doi: 10.1016/j.neuroimage.2011.05.067 [DOI] [PMC free article] [PubMed] [Google Scholar]
  164. Tremblay P, & Small SL (2011b). From language comprehension to action understanding and back again. Cerebral Cortex, 21(5), 1166–1177. doi: 10.1093/cercor/bhq189 [DOI] [PMC free article] [PubMed] [Google Scholar]
  165. Treiman R, Kessler B, Knewasser S, Tincoff R, & Bowman M (2000). English speakers’ sensitivity to phonotactic patterns. In Broe MB & Pierrehumbert JB (Eds.), Papers in Laboratory Phonology V: Acquisition and the Lexicon (pp. 269–282). Cambridge, UK: Cambridge University Press. [Google Scholar]
  166. Treutler M, Sörös P (2021). Functional MRI of native and non-native speech sound production in sequential German-English bilinguals. Frontiers in Human Neuroscience, 15:683277. doi: 10.3389/fnhum.2021.683277. [DOI] [PMC free article] [PubMed] [Google Scholar]
  167. Uddén J, & Bahlmann J (2012). A rostro-caudal gradient of structured sequence processing in the left inferior frontal gyrus. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 367(1598), 2023–2032. doi: 10.1098/rstb.2012.0009 [DOI] [PMC free article] [PubMed] [Google Scholar]
  168. Vaden KI Jr, Kuchinsky SE, Keren NI, Harris KC, Ahlstrom JB, Dubno JR, & Eckert MA (2011). Inferior frontal sensitivity to common speech sounds is amplified by increasing word intelligibility. Neuropsychologia, 49(13), 3563–3572. doi: 10.1016/j.neuropsychologia.2011.09.008 [DOI] [PMC free article] [PubMed] [Google Scholar]
  169. Valdes-Sosa PA, Sanchez-Bornot JM, Sotero RC, Iturria-Medina Y, Aleman-Gomez Y, Bosch-Bayard J, Carbonell F, & Ozaki T (2009). Model driven EEG/fMRI fusion of brain oscillations. Human Brain Mapping, 30(9), 2701–2721. doi: 10.1002/hbm.20704 [DOI] [PMC free article] [PubMed] [Google Scholar]
  170. Vilberg KL, & Rugg MD (2008). Memory retrieval and the parietal cortex: a review of evidence from a dual-process perspective. Neuropsychologia, 46(7), 1787–1799. doi: 10.1016/j.neuropsychologia.2008.01.004 [DOI] [PMC free article] [PubMed] [Google Scholar]
  171. Vitevitch MS, & Luce PA (1998). When words compete: Levels of processing in perception of spoken words. Psychological Science, 9(4), 325–329. doi: 10.1111/1467-9280.00064 [DOI] [Google Scholar]
  172. Vitevitch MS, & Luce PA (2004). A web-based interface to calculate phonotactic probability for words and nonwords in English. Behavior Research Methods, Instruments, & Computers, 36(3), 481–487. doi: 10.3758/bf03195594 [DOI] [PMC free article] [PubMed] [Google Scholar]
  173. Wagner M, Shafer VL, Martin B, & Steinschneider M (2012). The phonotactic influence on the perception of a consonant cluster /pt/ by native English and native Polish listeners: a behavioral and event related potential (ERP) study. Brain and Language, 123(1), 30–41. doi: 10.1016/j.bandl.2012.06.002 [DOI] [PMC free article] [PubMed] [Google Scholar]
  174. Wild CJ, Yusuf A, Wilson DE, Peelle JE, Davis MH, & Johnsrude IS (2012). Effortful listening: the processing of degraded speech depends critically on attention. Journal of Neuroscience, 32(40), 14010–14021. doi: 10.1523/JNEUROSCI.1528-12.2012 [DOI] [PMC free article] [PubMed] [Google Scholar]
  175. Wilson C (2006). Learning phonology with substantive bias: an experimental and computational study of velar palatalization. Cognitive Science 30, 945–982. [DOI] [PubMed] [Google Scholar]
  176. Yamamoto AK, Jones OP, Hope TM, Prejawa S, Oberhuber M, Ludersdorfer P, Yousry TA, Green DW, & Price CJ (2019). A special role for the right posterior superior temporal sulcus during speech production. NeuroImage, 203, 116184. doi: 10.1016/j.neuroimage.2019.116184 [DOI] [PMC free article] [PubMed] [Google Scholar]
  177. Yi HG, Leonard MK, & Chang EF (2019). The Encoding of Speech Sounds in the Superior Temporal Gyrus. Neuron, 102(6), 1096–1110. doi: 10.1016/j.neuron.2019.04.023. [DOI] [PMC free article] [PubMed] [Google Scholar]
  178. Zhao X, & Berent I (2018). The Basis of the Syllable Hierarchy: Articulatory Pressures or Universal Phonological Constraints? Journal of Psycholinguistic Research, 47(1), 29–64. doi: 10.1007/s10936-017-9510-2 [DOI] [PubMed] [Google Scholar]
  179. Zuraw K (2000). Patterned Exceptions in Phonology. Los Angeles, CA: University of California dissertation. [Google Scholar]
