Abstract
Purpose:
The purpose of this study was to investigate the influence of phonetic complexity as measured by the Word Complexity Measure (WCM) on the speed of single-word production in adults who do (AWS, n = 15) and do not stutter (AWNS, n = 15).
Method:
Participants named pictures representing words of high versus low phonetic complexity that were balanced for lexical properties. Speech reaction time was recorded from the initial presentation of the picture to the participant's verbal response for each word type. Accuracy and fluency were manually coded for each production.
Results:
AWS named pictures significantly more slowly than AWNS, but no significant differences in response latency were observed when producing words of high versus low phonetic complexity as measured by the WCM.
Conclusion:
Findings corroborate past reports of overall slowed picture-naming latencies in AWS compared to AWNS. Findings conflict with data suggesting that the phonetic complexity of words uniquely compromises the speed of production in AWS. The potential interaction between lexical and phonetic factors on single-word production within each group is discussed.
Keywords: Stuttering, Phonetic complexity, Fluency, Accuracy, Reaction time
1. Introduction
The EXPLAN model of stuttering (EX: execution, PLAN: planning; Howell, 2011, pp. 2267–2273) argues that moments of stuttered speech occur due to insufficient time to plan speech prior to production. Phonetic complexity, as described by the EXPLAN model, refers to the complexity of movement required to execute articulatory sequences. According to the EXPLAN model, complex motor sequences require additional time to formulate, and stuttered speech occurs when verbal execution is initiated prior to the completion of the more complex speech plan (e.g., Howell, 2004, 2011). To date, the influence of increased phonetic complexity on stuttered speech has been debated (Bernstein Ratner, 2005; cf. Howell & Dworzynski, 2005). Research has demonstrated a relationship for older children and adults who stutter (e.g., Al-Timimi, Khamaiseh, & Howell, 2013; Dworzynski & Howell, 2004; Howell & Au-Yeung, 2007; Howell et al., 2006) but not for younger children who stutter (e.g., Coalson & Byrd, 2016; Coalson et al., 2012; Dworzynski & Howell, 2004; Howell & Au-Yeung, 1995; Logan & Conture, 1997; Throneburg et al., 1994). Bernstein Ratner (2005) has suggested that whether phonetic complexity contributes significantly to stuttered speech, in the manner predicted by the EXPLAN model, is difficult to determine due to a number of confounding methodological and theoretical concerns.
One fundamental concern noted by Bernstein Ratner (2005) is the lack of experimental data that links phonetic complexity to the ease of speech planning and production in typically fluent adults, and whether these effects are independent from the well-documented influence of co-occurring lexical and linguistic properties (e.g., word frequency: Newman & German, 2005; neighborhood density: Luce & Pisoni, 1998; Vitevitch, 2002; neighborhood frequency: Vitevitch & Sommers, 2003). Further, if a distinct relationship between phonetic complexity and speech planning does exist in adults who do not stutter (AWNS), no research is available that indicates that planning complex sequences is delayed to a greater extent in adults who stutter (AWS). Thus, the primary motivation for the present study was to provide these data for AWNS and AWS using an experimental paradigm in which the phonetic complexity of stimuli is manipulated, and the lexical factors of the stimuli are carefully controlled.
1.1. Phonetic complexity and fluency in persons who stutter: spontaneous speech data
A notable challenge to previous studies that have examined phonetic complexity and stuttered speech is the inconsistency of measurement tools used to determine what is and is not considered a complex phonetic sequence (see Table 1 for comparative rubric). Until recently, Jakielski’s (1998) Index of Phonetic Complexity (IPC) was the primary metric used by (and available to) researchers. The IPC is an eight-factor index derived from pre-linguistic speech output of infants (n = 5; 7–36 months) which provides an aggregated complexity score for individual words based on specific phonetic properties such as consonant type, vowel type, number of syllables, consonant variegation, consonant clusters, and consonant variegation within clusters. There are unpublished data to support the IPC scoring method as a reasonable metric for verbal output of children between 1 and 3 years of age (e.g., Jakielski, 2002; Jakielski, Maytasse, & Doyle, 2006). Despite the correspondence between phonetic complexity and speech development at younger ages in typically developing children, Dworzynski and Howell (2004) found that increased IPC values do not predict stuttered speech in children under 6 years of age. Instead, IPC values are only associated with moments of stuttered speech for speakers 6 years of age and older (e.g., Howell & Au-Yeung, 2007; Howell et al., 2006; LaSalle & Wolk, 2011; Wolk & LaSalle, 2015). That is, based on data from spontaneous speech production, the effects of phonetic complexity on stuttering appear to emerge only after speech production systems have matured. These findings conflict with the perspective that younger children with developing speech production systems would be more vulnerable to increased articulatory demand. The observed influence of phonetic complexity in AWS also conflicts with previous data that indicate motor-speech plans of fluent and stuttered words in AWS are intact prior to production (Sussman, Byrd, & Guitar, 2011). Additionally, although increased phonetic complexity may decrease articulatory coordination in AWS, this instability does not necessarily result in overt disfluencies (Smith, Sadagopan, Walsh, & Weber-Fox, 2010). The unexpected influence of phonetic complexity (as measured by the IPC) on stuttered speech observed in AWS, but not in children who stutter, raises the question of whether additional methodological factors may have contributed to previous reports.
Table 1.
Word-level factors considered within studies of phonetic complexity in children and adults who stutter.
| Phonetic Categories | Throneburg et al.† | Logan & Conture†† | IPC | WCM |
|---|---|---|---|---|
| Dorsals | N/A | N/A | 1 | 1 |
| Fricatives | 1 | N/A | 1 | 1 |
| Voiced fricatives | N/A | N/A | N/A | 1 |
| Affricates | N/A | N/A | 1 | 1 |
| Voiced affricates | N/A | N/A | N/A | 1 |
| Liquids/Syllabic liquids | N/A | N/A | 1 | 1 |
| Rhotics | N/A | N/A | 1 | 1 |
| Place variegation of single consonants | N/A | N/A | 1 | N/A |
| Word-final consonants | N/A | 1 | 1 | 1 |
| Word-initial consonants | N/A | 1 | N/A | N/A |
| > 2 syllables | 1 | N/A | 1 | 1 |
| > 1 syllable | N/A | 1 | N/A | N/A |
| Consonant clusters (intra-syllabic, onset or coda) | 1 | N/A | 1 | 1 |
| Consonant clusters (inter-syllabic) | N/A | N/A | 1 | N/A |
| Consonant clusters (intra-syllabic, onset) | N/A | 1 | N/A | N/A |
| Consonant clusters (intra-syllabic, coda) | N/A | 1 | N/A | N/A |
| Place variegations within clusters | N/A | N/A | 1 | N/A |
| Non-initial stress | N/A | N/A | N/A | 1 |
| Number of studies applied to children who stutter < 6 years of age | 2 | 1 | 1 | 2 |
| Number of studies applied to older children and adults who stutter | 1 | 0 | 7 | 1 |
Note. IPC – Index of Phonetic Complexity (Jakielski, 1998); WCM – Word Complexity Measure (Stoel-Gammon, 2010).
† Categories from Throneburg et al. (1994).
†† Categories from Logan and Conture (1997).
Bernstein Ratner (2005) argues that findings from previous studies are compromised by one or more confounding factors. First, the appropriateness of the IPC as an adequate measure of phonetic complexity in AWS has been challenged given that many of the phonetic constructs are based on pre-linguistic verbal output. Second, lexical factors such as word frequency and phonological neighborhood properties were not accounted for in previous investigations of phonetic complexity measured by the IPC. Third, previous studies extracted individual words from connected speech and did not consider utterance position, utterance length, or the syntactic complexity of utterances – factors known to influence the fluency of production in individuals who stutter – during analyses. Fourth and finally, the presumption that phonetic complexity impedes speech planning in individuals who stutter may be premature, as no experimental data are available to support that phonetic complexity independently contributes to the speed or accuracy of speech production in typically fluent speakers, with the exception of slower initiation of multisyllabic words versus monosyllabic words (see van Lieshout, Hulstijn, & Peters, 1996).
Two recent studies by Coalson and colleagues (Coalson & Byrd, 2016; Coalson et al., 2012) addressed three of these four concerns when examining phonetic complexity of stuttered and fluent tokens produced by 14 children who stutter (age range: 2 years, 7 months–5 years, 9 months) during parent-child interactions. First, Coalson and colleagues employed a more recent measure – Stoel-Gammon’s (2010) Word Complexity Measure (WCM) – to assess the relationship between phonetic complexity and stuttering within spontaneous speech samples provided by young children. The authors considered the WCM a more valid metric relative to the IPC, as it (1) awards points for phonetic properties that are acquired at later ages (e.g., voiced fricatives and affricates); (2) does not award points for certain phonetic properties included in the IPC that are typically mastered before meaningful speech production (e.g., inter-syllabic clusters, place variegation of consonants or consonant clusters); (3) is supported by published studies in typically developing children (Macrae, 2013); and (4) has demonstrated clinical utility in older children (e.g., 3–5 years of age: Anderson & Cohen, 2012). Second, logistic regression analyses were conducted to account for multiple lexical properties known or suspected to influence speech production, such as word frequency, phonotactic probability, neighborhood density and frequency, and grammatical classification. Third, utterance-based factors such as length, syntactic complexity, and utterance-position (initial versus non-initial) were included as predictor variables. Results indicated that when lexical and utterance-based factors are considered, the likelihood that a word would be stuttered was not influenced by its phonetic complexity, nor by the phonetic complexity of the upcoming word. Instead, only utterance-based factors significantly influenced the likelihood of stuttered speech.
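For readers less familiar with aggregate complexity scoring, the sketch below shows a simplified, WCM-style scorer over ARPAbet-like transcriptions. It follows the categories summarized in Table 1, but the official scoring rules in Stoel-Gammon (2010) differ in detail (e.g., true syllabification of clusters), so the function, its phoneme sets, and its output are illustrative only and are not the scoring procedure used in the studies discussed here.

```python
# Simplified, WCM-style scorer over ARPAbet transcriptions with stress
# digits (CMUdict style). Categories follow Table 1; the official WCM
# rules (Stoel-Gammon, 2010) handle syllabification differently, so
# treat this as an illustration rather than the measure used in the study.
VOWELS = {"AA", "AE", "AH", "AO", "AW", "AY", "EH", "ER", "EY",
          "IH", "IY", "OW", "OY", "UH", "UW"}
DORSALS = {"K", "G", "NG"}
FRICATIVES = {"F", "V", "TH", "DH", "S", "Z", "SH", "ZH", "HH"}
AFFRICATES = {"CH", "JH"}
VOICED_FRIC_AFF = {"V", "DH", "Z", "ZH", "JH"}
LIQUIDS = {"L", "R"}


def wcm_style_score(phones):
    base = [p.rstrip("012") for p in phones]
    n_syllables = sum(1 for p in base if p in VOWELS)
    score = 0
    # Word patterns
    if n_syllables > 2:                      # more than two syllables
        score += 1
    stresses = [p[-1] for p in phones if p[-1].isdigit()]
    if stresses and stresses[0] != "1":      # non-initial primary stress
        score += 1
    # Syllable structures
    if base and base[-1] not in VOWELS:      # word-final consonant
        score += 1
    run = 0
    for p in base + ["#"]:                   # crude cluster detection;
        if p != "#" and p not in VOWELS:     # syllable boundaries ignored
            run += 1
        else:
            if run >= 2:
                score += 1
            run = 0
    # Sound classes (one point per qualifying segment)
    for p in base:
        if p in DORSALS:
            score += 1
        if p in LIQUIDS or p == "ER":        # liquids and rhotic vowels
            score += 1
        if p in FRICATIVES or p in AFFRICATES:
            score += 1
        if p in VOICED_FRIC_AFF:             # extra point if voiced
            score += 1
    return score


# e.g., "umbrella" /AH0 M B R EH1 L AH0/: three syllables, non-initial
# stress, one cluster, two liquids -> 5
print(wcm_style_score(["AH0", "M", "B", "R", "EH1", "L", "AH0"]))
```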
Findings from Coalson et al. (2012) and Coalson and Byrd (2016) provide a reasonable account for the lack of significant differences in children who stutter in previous studies of phonetic complexity, as children who stutter within this age range are particularly susceptible to the length and syntactic complexity of utterances (e.g., Logan & Conture, 1995; Yaruss, 1999; Zackheim & Conture, 2003). This susceptibility may exist irrespective of phonological properties (e.g., Logan & Conture, 1997). However, the influence of utterance-based factors on stuttered speech at older ages is less straightforward (e.g., Tsiamtsiouris & Cairns, 2009, 2013; however, see Logan, 2001, 2003; Silverman & Bernstein Ratner, 1997) and does not account for the previously reported influence of phonetic complexity that appears to be unique to AWS. Coalson and colleagues acknowledge that their findings do not entirely negate the potential influence of phonetic complexity on stuttered speech but highlight relevant factors to consider within this age range. In addition, there are data suggesting that lexical factors may contribute to deviations in fluency during connected speech in older children who stutter even after selected linguistic and utterance-based factors have been controlled during analysis.
For example, Anderson (2007) found that, after controlling for word length and grammatical class, stuttered words extracted from spontaneous speech samples of children (n = 15; age range: 3–5 years) were lower in word frequency and neighborhood frequency. Wolk and LaSalle (2015) considered a number of lexical factors during analysis of phonetic complexity and stuttered versus fluent content words by adolescents who stutter (n = 8, age range: 7–19 years), including word frequency, neighborhood density, and neighborhood frequency. After fluent and stuttered words from spontaneous speech samples were matched for utterance length, syntactic complexity, and utterance-position, stuttered words were more phonetically complex but also phonologically sparser than fluent tokens. These data demonstrate that the influence of phonetic complexity on speech production by individuals who stutter cannot be established without accounting for potential lexical factors. In agreement with the fourth concern raised by Bernstein Ratner (2005), Coalson et al. (2012) suggest that the influence of phonetic complexity on individuals who stutter, if any, may be more evident during experimental tasks wherein lexical factors are accounted for and utterance-based factors are removed.
1.2. Phonetic complexity, fluency, speed, and accuracy in persons who stutter: experimental data
Lexical variables such as word frequency (i.e., how frequently an individual word occurs in the native language: Dell, 1990; Jescheniak & Levelt, 1994), neighborhood density (i.e., the number of words that differ from the target word by the addition, substitution, or deletion of one phoneme: Luce & Pisoni, 1998), and neighborhood frequency (i.e., the average word frequency of neighboring words: Luce & Pisoni, 1998) are known contributors to the speed and accuracy of response for both AWNS (e.g., Newman & German, 2005; Vitevitch & Sommers, 2003; Vitevitch, 2002) and AWS (e.g., Newman & Bernstein Ratner, 2007). Thus, experimental examination of the unique influence of phonetic complexity on the speed of single-word production between and within AWS and AWNS requires careful control of co-occurring word-level factors that also influence speech production. To date, no direct experimental examination of the influence of phonetic complexity on single-word production in AWNS or AWS has been conducted while accounting for relevant lexical factors. However, a small number of experimental studies that have included information about phonetic properties of stimuli suggest that phonetic complexity may compromise the ease of production in AWS, even during the production of single words.
In terms of speed of production, Huinck, van Lieshout, Peters, and Hulstijn (2004) measured the speed, accuracy, and fluency of AWS (n = 10) and AWNS (n = 12) when producing 24 nonword stimuli which included consonant clusters that differed in position (inter-syllabic versus word-final intra-syllabic) and phonetic complexity (homorganic versus heterorganic place of articulation). Although speed of production was comparable between groups for three of the four cluster types, AWS were significantly slower than AWNS to initiate production of nonwords with medial homorganic clusters that flanked the syllable boundary. In addition, group differences in speed were maintained upon removal of consonant clusters with relatively higher co-occurrence within the native language (p. 18). Thus, it is possible that certain phonetic factors may serve as predictors of production speed in AWS that are independent of lexical factors. Although the nonword stimuli used by Huinck et al. removed the potential influence of lexical factors, as noted by Vitevitch and Luce (2005), reaction time latencies during repetition of nonwords are not impervious to phonological properties.
Based on a review of the available literature, it is clear that isolation of phonetic from lexical properties may enhance our understanding of whether speech fluency in individuals who stutter may be uniquely influenced by lexical factors (word frequency: Anderson, 2007; Dayalu et al., 2002; Newman & Bernstein Ratner, 2007; neighborhood density: LaSalle & Wolk, 2011; Wolk & LaSalle, 2015; cf. Arnold, Conture, & Ohde, 2005; Newman & Bernstein Ratner, 2007; neighborhood frequency: Anderson, 2007; cf. Newman & Bernstein Ratner, 2007; Bernstein Ratner, Newman, & Strekas, 2009) and/or phonetic factors (Dworzynski & Howell, 2004; Howell & Au-Yeung, 2007; Howell et al., 2006; LaSalle & Wolk, 2011; Wolk & LaSalle, 2015; cf. Coalson & Byrd, 2016; Coalson et al., 2012). Moreover, the impact of phonetic complexity is central to the predictions of the EXPLAN model, and its examination may provide greater clarity about which word-based factors may contribute to the proposed delay in speech planning and production in AWS. To date, the isolated impact of phonetic complexity from co-occurring lexical and linguistic factors has not been explored in AWNS. Therefore, the unique impact of phonetic complexity on single-word production abilities in AWS cannot be adequately determined.
1.3. Purpose of the present study
The purpose of the present study is to examine and compare the effects of phonetic complexity, if any, on the speed of single-word production in AWS and AWNS. To control for utterance-based factors, we employed a single-word picture naming paradigm. To account for lexical factors, we constructed word lists of high and low phonetic complexity that were balanced relative to word frequency, neighborhood density, and neighborhood frequency properties.
2. Method
2.1. Participants
Participants included 30 adults (15 AWS, 15 AWNS) who were matched for age (+/− 3 years, range 18–46 years) and gender (3 females and 12 males per group). The current study was approved by the first author’s university Institutional Review Board. All participants were paid volunteers and provided informed consent prior to participation. Participants were required to meet the following criteria: (a) native monolingual English speakers (n = 12 in the AWS group, n = 14 in the AWNS group) or speakers of English as their primary language with native competency (n = 3 in the AWS group, n = 1 in the AWNS group); (b) between the ages of 17 and 50 years; (c) no speech or language disorders (apart from stuttering in the AWS group); (d) no present or past history of psychiatric, social, or emotional disturbances; and (e) performance within normal limits on four standardized tests prior to experimental testing.
2.1.1. Standardized measures
The Peabody Picture Vocabulary Test: Third Edition or Fourth Edition (PPVT-III; Dunn & Dunn, 1997, 2007) and the Expressive Vocabulary Test-Revised (EVT; Williams, 1997, 2007) were used to assess receptive and expressive vocabulary. Both editions were used because, when the study began, the authors did not have access to the most recent edition of each test (i.e., the PPVT-IV and EVT-2). The Nonword Repetition and Rapid Digit Naming subtests of the Comprehensive Test of Phonological Processing (CTOPP; Wagner, Torgesen, & Rashotte, 1999) were administered to assess working memory capacity and phonological information retrieval. Tables 2 and 3 provide a summary of participants’ characteristics and performance on standardized measures.
Table 2.
Participant characteristics for adults who stutter.
| PN | CA | Stg% | Stg Scale | SSI-3 | PPVT | EVT | CTOPP(RD) | CTOPP (NR) | Tx |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 20;5 | 0 | 2 | Very Mild | 114 | 120 | 11 | 8 | Y |
| 2 | 38;1 | 0 | 3 | Mild | 101 | 100 | 7 | 8 | Y |
| 3 | 21;2 | 1.25 | 3 | Mild | 130 | 118 | 7 | 5 | Y |
| 4 | 44;6 | 0 | 2 | Very Mild | 106 | 116 | * | * | Y |
| 5 | 23;7 | 17.5 | 7 | Sev | 140 | 129 | 6 | 9 | Y |
| 6 | 40;4 | 5 | 4 | Mild/Mod | 107 | 106 | * | * | Y |
| 7 | 23;11 | 16.3 | 7 | Sev | 106 | 97 | 9 | 5 | Y |
| 8 | 28;7 | 11.3 | 6 | Mod/Sev | 115 | 97 | 10 | 6 | Y |
| 9 | 24;1 | 2.5 | 3 | Mild | 122 | 116 | 16 | 7 | N |
| 10 | 18;1 | 10 | 6 | Mod/Sev | 120 | 134 | 9 | 10 | Y |
| 11 | 27;1 | 6.3 | 4 | Mild/Mod | 124 | 125 | 9 | 7 | Y |
| 12 | 21;3 | 8.8 | 5 | Mod | 107 | 111 | 12 | 9 | N |
| 13 | 33;3 | 5 | 4 | Mild/Mod | 108 | 105 | 13 | 10 | N |
| 14 | 39;0 | 0 | 2 | Very Mild | 112 | 118 | 14 | 10 | Y |
| 15 | 21;3 | 37.5 | 9 | Very Severe | 98 | 94 | 6 | 6 | N |
| M | 28;3 | 8 | 5 | Moderate | 114 | 112 | 10 | 8 | Y = 11 |
| SD | 8;2 | 9 | 2 | | 11 | 12 | 3 | 2 | N = 4 |
Note: * = Scores unavailable; PN = Participant Number; CA = Chronological Age; Stg = Stuttering; Mod = Moderate; Sev = Severe; SSI-3 = Stuttering Severity Instrument-3; PPVT = Peabody Picture Vocabulary Test; EVT = Expressive Vocabulary Test; CTOPP = Comprehensive Test of Phonological Processing; RD = Rapid Digit Naming; NR = Nonword Repetition; Tx = Treatment; Y = Yes; N = No.
Table 3.
Participant characteristics for adults who do not stutter.
| PN | CA | PPVT | EVT | CTOPP (RD) | CTOPP (NR) |
|---|---|---|---|---|---|
| 1 | 27;6 | 106 | 97 | 13 | 10 |
| 2 | 20;8 | 117 | 114 | 17 | 8 |
| 3 | 21;9 | 104 | 105 | 9 | 6 |
| 4 | 22;9 | 124 | 114 | 13 | 9 |
| 5 | 38;8 | 107 | 101 | 12 | 7 |
| 6 | 37;5 | 105 | 115 | 7 | 8 |
| 7 | 20;5 | 117 | 118 | 9 | 8 |
| 8 | 46;11 | 110 | 120 | 12 | 6 |
| 9 | 27;9 | 101 | 116 | 10 | 10 |
| 10 | 26;7 | 106 | 114 | 11 | 8 |
| 11 | 21;1 | 117 | 114 | 12 | 15 |
| 12 | 30;1 | 112 | 122 | 10 | 11 |
| 13 | 19;8 | 97 | 96 | 8 | 6 |
| 14 | 39;11 | 99 | 91 | 10 | 8 |
| 15 | 27;2 | 128 | 113 | 12 | 10 |
| M | 28;6 | 110 | 110 | 11 | 9 |
| SD | 8;2 | 9 | 9 | 2 | 2 |
Note: PN = Participant Number; CA = Chronological Age; PPVT = Peabody Picture Vocabulary Test; EVT = Expressive Vocabulary Test; CTOPP = Comprehensive Test of Phonological Processing; RD = Rapid Digit Naming; NR = Nonword Repetition.
2.1.2. Talker classification and stuttering severity
To be considered AWS, participants were also required to meet the following criteria: (1) confirmed diagnosis of stuttering from a certified speech-language pathologist specializing in stuttering, or through evaluation of conversational and narrative discourse types yielding behaviors indicative of stuttering (as described by Yairi & Ambrose, 2005); (2) self-identification as an AWS with reported onset prior to age 7; (3) no history of neurological injury and/or any form of cerebral insult that could potentially impact speech production or reaction time; and (4) no current medication use that could impact speech production or reaction time. Stuttering severity was measured using a 9-point scale (i.e., 1 = no stuttering, 5 = moderate stuttering, 9 = extremely severe stuttering; O’Brian, Packman, Onslow, & O’Brian, 2004).
Mean severity rating for the 15 AWS was 5 (SD = 2, range 2–9), with six participants receiving a rating of 2–3, four participants receiving a rating of 4–5, and five participants receiving a rating of 6–9 (see Table 2). The fourth author and a graduate student trained in disfluency count analysis independently rated the unstructured conversational samples for 33% of the total AWS (n = 5 of 15). Inter-judge ratings were exact for 80% (n = 4) of the participants, and within one scale point for the remaining 20% (n = 1). Intra-rater reliability was assessed by the fourth author, who assigned the same scale point on her first and second ratings for all 5 participants (100% agreement). Stuttering severity was also assessed using the Stuttering Severity Instrument-3 (SSI-3; Riley, 1994), which yielded the same severity classifications as the 9-point scale, with the exception of three participants who were rated as ‘mild’ on the scale but received a classification of ‘very mild’ on the SSI-3.
2.1.3. Treatment history
Eleven of the 15 AWS participants were receiving speech therapy and/or had received speech therapy in the past. The authors chose not to exclude adults on the basis of treatment history for two key reasons. First, there was no reason to suspect that exposure to fluency therapy would differentially affect performance on the tasks employed in the study. Second, it is not uncommon for AWS to report participation in fluency therapy, particularly during the school years. Thus, inclusion of adults who had participated in therapy adds to the ecological validity of this study (see Logan, Byrd, Mazzocchi, & Gillam, 2011, for a similar argument regarding inclusion of participants who stutter who had a history of treatment).
2.2. Stimuli selection
Sixteen content words were selected based on phonetic and lexical properties and divided into two separate lists: (a) words with high phonetic complexity (HPC, WCM values 4–5) and (b) words with low phonetic complexity (LPC, WCM values 0–2). Half of the 16 words (four within each list) were high in word frequency (2.26–3.44), neighborhood density (10–28), and neighborhood frequency (1.91–2.50), while the remaining eight were considered to have low word frequency (1.00–2.18), neighborhood density (0.00–1.00), and neighborhood frequency (0.00–1.00). Word frequency, neighborhood density, and neighborhood frequency values were obtained from the Hoosier Mental Lexicon (HML) adult database (see Luce, Pisoni, & Goldinger, 1990; Luce & Pisoni, 1998). Per the HML, word frequency values were defined as the log frequency of occurrence per one million words in a large database corpus (Kucera & Francis, 1967). Neighborhood density values were defined as the raw number of words that differed by one phoneme from the target word. Neighborhood frequency was defined as the mean word frequency of the neighboring words. Table 4 provides the phonetic and lexical values related to each target word within each word list. After stimuli were selected, corresponding black-on-white pictures that represented each target word were selected from the International Picture Naming Project (Szekely et al., 2004).
Table 4.
Experimental stimuli with phonetic complexity, word frequency, neighborhood density, and neighborhood frequency values.
| Target Word | Complexity | Lexical Properties | PC | WF | ND | NF |
|---|---|---|---|---|---|---|
| Corn | High | High | 4.00 | 2.53 | 14.00 | 2.05 |
| Shirt | High | High | 4.00 | 2.43 | 14.00 | 1.91 |
| Sink | High | High | 5.00 | 2.36 | 12.00 | 1.96 |
| Swing | High | High | 4.00 | 2.38 | 10.00 | 1.76 |
| Dinosaur | High | Low | 4.00 | 1.00 | 1.00 | 1.00 |
| Dresser | High | Low | 4.00 | 1.00 | 0.00 | 0.00 |
| Umbrella | High | Low | 5.00 | 1.90 | 0.00 | 0.00 |
| Zebra | High | Low | 4.00 | 1.00 | 1.00 | 1.00 |
| Moon | Low | High | 1.00 | 2.78 | 19.00 | 2.23 |
| Pen | Low | High | 1.00 | 2.26 | 25.00 | 1.96 |
| Rain | Low | High | 2.00 | 2.90 | 28.00 | 2.02 |
| Sun | Low | High | 2.00 | 3.44 | 24.00 | 2.50 |
| Banana | Low | Low | 2.00 | 1.60 | 0.00 | 0.00 |
| Onion | Low | Low | 1.00 | 2.18 | 0.00 | 0.00 |
| Panda | Low | Low | 0.00 | 1.00 | 1.00 | 1.00 |
| Penguin | Low | Low | 2.00 | 1.00 | 0.00 | 0.00 |
Note: PC = phonetic complexity (Word Complexity Measure, Stoel-Gammon, 2010); WF = word frequency; ND = neighborhood density; NF = neighborhood frequency. Lexical values obtained from Hoosier Mental Lexicon database (Luce et al., 1990).
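For illustration, the balancing described above can be checked directly from the values in Table 4; the data frame and column names below are ours, not the authors'.

```python
import pandas as pd

# Values transcribed from Table 4 (wcm = WCM score); column names are ours.
stimuli = pd.DataFrame(
    [("corn", "high", "high", 4, 2.53, 14, 2.05),
     ("shirt", "high", "high", 4, 2.43, 14, 1.91),
     ("sink", "high", "high", 5, 2.36, 12, 1.96),
     ("swing", "high", "high", 4, 2.38, 10, 1.76),
     ("dinosaur", "high", "low", 4, 1.00, 1, 1.00),
     ("dresser", "high", "low", 4, 1.00, 0, 0.00),
     ("umbrella", "high", "low", 5, 1.90, 0, 0.00),
     ("zebra", "high", "low", 4, 1.00, 1, 1.00),
     ("moon", "low", "high", 1, 2.78, 19, 2.23),
     ("pen", "low", "high", 1, 2.26, 25, 1.96),
     ("rain", "low", "high", 2, 2.90, 28, 2.02),
     ("sun", "low", "high", 2, 3.44, 24, 2.50),
     ("banana", "low", "low", 2, 1.60, 0, 0.00),
     ("onion", "low", "low", 1, 2.18, 0, 0.00),
     ("panda", "low", "low", 0, 1.00, 1, 1.00),
     ("penguin", "low", "low", 2, 1.00, 0, 0.00)],
    columns=["word", "pc_list", "lv_list", "wcm", "wf", "nd", "nf"],
)

# Lexical properties should be comparable across the HPC and LPC lists,
# while the WCM scores should clearly separate the two lists.
print(stimuli.groupby("pc_list")[["wcm", "wf", "nd", "nf"]].mean())
```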
2.3. Procedure
2.3.1. Training and experimental tasks
Participants were seated comfortably in front of a Dell 12-in. monitor located in a sound-treated room with a microphone placed approximately two to five inches from the participant’s face. Each participant was read a scripted protocol and was instructed to say the name of the picture that appeared on the screen as quickly as possible, then press a designated button to advance to the next picture. Participants completed four practice trials prior to completion of the experimental task. Pictures used during training included two HPC and two LPC pictures that were balanced for lexical properties. After training trials were completed, each participant was presented all 16 pictures in randomized order twice (i.e., 32 total trials) to increase the likelihood of usable trials. Presentation order for each participant was randomized by the E-Prime® program for both the first and second round of presentation.
Fluency and accuracy of responses were coded by the third and fourth authors, who were present in the room with each participant. Participant responses were also recorded via a Sanyo VPC-HD1010 digital video camera and an Olympus WS-321M digital voice recorder. The digital voice recorder was placed approximately 24 in. from the speaker on a flat tabletop surface and was used to document participants’ vocal responses to pre-experimental tasks.
Speech reaction times (SRTs) were recorded via E-Prime® software. Voicing onset was detected by the microphone, which interfaced with the software and triggered the voice key via a serial response box (Psychological Software Tools). The experimental task took approximately 15 min per participant.
2.4. Coding, reliability, and outliers
2.4.1. Response coding
Participant responses were coded similarly to other studies (e.g., Anderson, 2008; Byrd et al., 2007) and included the following error types (see Table 5 for a complete summary of error analysis):
Intelligibility Error: the participant’s response was unintelligible to the researchers, or the participant did not produce any response corresponding to the picture presented (e.g., “I don’t know”)
Outlier: the participant’s SRT was more than 2 SD above or below the mean SRT for that word within the talker group
Program Error: the recorded SRT was not a valid time (e.g., recorded as 0 or −5) despite no prior error code for the response, or the participant’s response was not detected by the program and the participant had to repeat the target
Naming Error: the participant responded with a word that was not the target word (e.g., produced “bear” for the target word “panda”)
Atypical Fluency Error: the participant produced a stuttering-like disfluency (SLD; Yairi & Ambrose, 2005) during production of the target word (e.g., “p-p-pen”)
Typical Fluency Error: the participant produced a non-stuttering-like disfluency, such as an interjection, prior to production of the target word (non-SLD; e.g., “um, dinosaur”)
Table 5.
Type and number of errors produced by adults who do and do not stutter on words with high and low phonetic and lexical values.
| Error Type | HPC Low LV: AWNS | HPC Low LV: AWS | HPC High LV: AWNS | HPC High LV: AWS | LPC Low LV: AWNS | LPC Low LV: AWS | LPC High LV: AWNS | LPC High LV: AWS | N |
|---|---|---|---|---|---|---|---|---|---|
| Initial Corpus | 120 | 120 | 120 | 120 | 120 | 120 | 120 | 120 | 960 |
| Intelligibility | 0 | 0 | 0 | 1 | 0 | 2 | 0 | 3 | 6 |
| Outlier | 6 | 4 | 5 | 3 | 7 | 4 | 2 | 5 | 36 |
| Program Error | 8 | 26 | 2 | 10 | 1 | 14 | 2 | 10 | 73 |
| Usable | 106 | 90 | 113 | 106 | 112 | 100 | 116 | 102 | 845 |
| Naming Error | 14 | 8 | 2 | 4 | 0 | 7 | 3 | 4 | 42 |
| SLD | 0 | 7 | 0 | 2 | 0 | 5 | 0 | 2 | 16 |
| Non-SLD | 2 | 2 | 0 | 4 | 0 | 2 | 0 | 1 | 11 |
Note: AWNS = adults who do not stutter; AWS = adults who stutter; HPC = high phonetic complexity; LPC = low phonetic complexity; LV = lexical values; SLD = stuttering-like disfluency (Yairi & Ambrose, 2005).
2.4.2. Coding reliability and analysis
Participant responses were confirmed by the third author by comparison of online scoring and offline review of verbal responses. Any coding discrepancies observed during review of the digital recordings related to fluency and accuracy of response were verified and corrected prior to analyses. Token analysis is provided in detail in Table 5 based on the initial 960 responses (i.e., 16 words × 2 trials × 30 participants; HPC: n = 480; LPC: n = 480). Intelligibility errors (HPC: n = 1; LPC: n = 5), outliers (HPC: n = 18; LPC: n = 18), and program errors (HPC: n = 46; LPC: n = 27) were considered unusable and excluded from the initial data corpus. The increased number of program errors in AWS relative to AWNS was attributable to premature voice key triggers due to non-speech behaviors (e.g., inaudible lip smacks, tongue clicks). In sum, a total of 845 usable tokens were included in the final data corpus (HPC: n = 415; LPC: n = 430).
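The exclusion steps described above can be expressed as a short filtering pass over a response log, sketched below. The file name, column names, and error codes are hypothetical, and the exact order of exclusions in the study may have differed.

```python
import pandas as pd

# Hypothetical response log: one row per trial with columns
# participant, group (AWS/AWNS), word, srt_ms, and error_code.
tokens = pd.read_csv("responses.csv")

# Drop intelligibility and program errors first.
usable = tokens[~tokens["error_code"].isin(["intelligibility", "program"])].copy()

# Outlier rule from the response coding above: SRT more than 2 SD from
# the mean SRT of that word within each talker group.
stats = usable.groupby(["group", "word"])["srt_ms"].agg(["mean", "std"])
usable = usable.join(stats, on=["group", "word"])
is_outlier = (usable["srt_ms"] - usable["mean"]).abs() > 2 * usable["std"]
usable = usable[~is_outlier].drop(columns=["mean", "std"])

print(len(usable))  # the study retained 845 usable tokens at this stage
```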
3. Results
To review, the purpose of the present study was to explore the influence of phonetic complexity as measured by the WCM on the speed of naming pictures from two lists of target words. The two lists differed in that one set included only words with high WCM scores and the other set included words with low WCM scores. Each set contained equal numbers of words with high and low word frequency, neighborhood density, and neighborhood frequency.
During analysis of SRT, only fluent and accurate responses were included from the 845 usable responses. Forty-two naming errors (HPC: n = 28; LPC: n = 14), 16 atypical fluency errors (HPC: n = 9; LPC: n = 7), and 11 typical fluency errors (HPC: n = 8; LPC: n = 3) were excluded. The final data corpus for SRT analysis comprised 776 tokens (HPC: n = 370; LPC: n = 406). Shapiro-Wilk tests indicated that SRT responses did not deviate from a normal distribution when grouped by talker group (AWNS, W(60) = 0.983, p = 0.582; AWS, W(60) = 0.979, p = 0.388), phonetic complexity (low complexity, W(60) = 0.980, p = 0.433; high complexity, W(60) = 0.975, p = 0.246), or lexical values (low lexical value, W(60) = 0.972, p = 0.192; high lexical value, W(60) = 0.979, p = 0.393).
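A normality check of this kind can be run with scipy's implementation of the Shapiro-Wilk test; the sketch below substitutes simulated placeholder values for the study's SRT data.

```python
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(0)
# Placeholder SRT values (ms); in the study each test was run on the
# 60 observations within a grouping (e.g., talker group).
srt_samples = {
    "AWNS": rng.normal(904, 120, 60),
    "AWS": rng.normal(1055, 130, 60),
}
for label, values in srt_samples.items():
    w, p = shapiro(values)
    print(f"{label}: W = {w:.3f}, p = {p:.3f}")
```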
A three-way repeated measures ANOVA was conducted with participants as the repeated factor, talker group (i.e., AWS and AWNS) as the between-subjects factor, and phonetic complexity (i.e., low PC and high PC) and lexical values (i.e., low LV, high LV) as the within-subjects factors. Results revealed a significant main effect for talker group, F(1, 30) = 10.86, p = 0.003, with AWS exhibiting overall slower response times (M = 1055.15 ms, SE = 32.37 ms) than AWNS (M = 904.33 ms, SE = 32.37 ms). No significant main effect was detected for phonetic complexity, F(1, 90) = 1.171, p = 0.281. The two-way interaction between talker group and phonetic complexity was also non-significant, F(1, 90) = 1.73, p = 0.192.
In terms of lexical properties, no significant main effect was detected for lexical value, F(1, 90) = 0.30, p = 0.551. However, a significant two-way interaction was detected between phonetic complexity and lexical value, F(1, 90) = 6.15, p = 0.015. For stimuli with high LV, mean SRTs for phonetically complex words were slower (M = 1014.45 ms, SE = 26.18 ms) than for phonetically simple words (M = 955.01 ms, SE = 24.22 ms, p = 0.013). No significant difference in mean SRT was detected for words with low LV based on phonetic properties (high PC: M = 963.11 ms, SE = 29.70 ms; low PC: M = 986.39 ms, SE = 27.84 ms, p = 0.326). For complex stimuli, mean SRTs for high LV words were significantly slower than for low LV words (p = 0.032). No significant difference was observed for simple words based on lexical properties (p = 0.187).
A significant two-way interaction was also detected between lexical value and talker group, F(1, 90) = 7.71, p = 0.003. Mean SRTs for AWS were significantly slower than those for AWNS for words with high LV (p = 0.046) and low LV (p < 0.001). AWNS exhibited slower reaction times when naming words with high LV (M = 934.55 ms, SE = 34.45 ms) than with low LV (M = 874.12 ms, SE = 34.45 ms, p = 0.012). In contrast, AWS named words with high and low lexical properties with similar speed (low LV: M = 1075.39 ms, SE = 34.45 ms; high LV: M = 1034.91 ms, SE = 34.45 ms; p = 0.090). No three-way interaction between talker group, phonetic complexity, and lexical values was observed, F(1, 90) = 0.01, p = 0.965 (see Fig. 1).
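A one-between, two-within repeated measures ANOVA is awkward to fit in a single call with common Python statistics packages, so the sketch below approximates the design with a linear mixed-effects model (random intercept per participant). The data file and column names are hypothetical, and the F statistics reported above come from the authors' ANOVA, not from this model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format token file: one row per fluent, accurate
# response, with columns participant, group (AWS/AWNS),
# complexity (high/low PC), lexical (high/low LV), and srt_ms.
df = pd.read_csv("srt_tokens.csv")

# Random intercept per participant approximates the repeated factor;
# the fixed effects mirror the 2 x 2 x 2 design described above.
model = smf.mixedlm(
    "srt_ms ~ C(group) * C(complexity) * C(lexical)",
    data=df,
    groups=df["participant"],
)
print(model.fit().summary())
```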
Fig. 1.
Mean speech reaction time for adults who do not stutter (AWNS) and adults who stutter (AWS) when naming pictures of low phonetic complexity (low PC) versus high phonetic complexity (high PC) and low lexical values (low LV) versus high lexical values (high LV) as defined by word frequency, neighborhood density, and neighborhood frequency. Error bars represent the standard error of the mean.
4. Discussion
The purpose of this study was to evaluate whether the phonetic complexity of a word influences the speed of single-word production in AWS relative to AWNS. Results indicated significantly slower SRT for AWS than AWNS, but no significant differences in SRT when naming words of high versus low phonetic complexity between or within talker groups. Additionally, no significant three-way interaction between talker group, phonetic complexity, and lexical properties was detected. Production errors and disfluent responses were also minimal across groups.
4.1. Speed of production
Increased complexity of phonetic sequences did not significantly alter the speed of production for AWS or AWNS when lexical properties of stimuli were balanced across sets. These findings are inconsistent with data from Huinck et al. (2004), which evidenced significantly longer SRTs for nonwords with homorganic medial clusters, but not heterorganic medial clusters or intra-syllabic clusters of either type. Differences between measures used to examine phonetic complexity may partially account for divergent findings. Consonant clusters of target words in the present study, when they occurred, were either intra-syllabic, or heterorganic and inter-syllabic (i.e., 8 of the 9 clusters across conditions). In this regard, the non-significant SRT differences in the present study may align with those reported by Huinck and colleagues. Nonetheless, differences in the phonetic structure of stimuli, as well as the differing lexical status of stimuli across studies, warrant caution during cross-study comparisons.
Non-significant differences may also be attributable to the number of factors used to evaluate phonetic complexity. The breadth of factors included in the WCM may potentially mask key phonetic features in individuals who stutter. As noted by Howell et al. (2006, p. 714) with respect to the IPC, certain phonetic movements may be more relevant to AWS than others, such as consonant place, word length, and, similar to Huinck et al. (2004), consonant clusters. Howell et al. further suggested that modifications of the IPC may be necessary for AWS in future studies to capture the effects of non-homogenous consonant clusters at the syllable boundary. Coalson and Byrd (2015) found that speed and accuracy of AWS when silently planning medial clusters of bisyllabic nonwords, specifically in the context of non-initial stress, were distinct from AWNS. In light of these findings, as well as Huinck et al.’s results, similar modifications may apply for the WCM. Consonant clusters serve as one of eight indexed factors in the WCM, and value is assigned only to intra-syllabic clusters with no distinction of gestural configuration. Thus, the unique influence of an inter-syllabic cluster would not be scored, while a factor with less empirical support (e.g., liquids, rhotics) would be credited a point. Perhaps only certain aspects of phonetic complexity, such as clusters at inter-syllabic positions (unique to the IPC) or non-initial stress (unique to the WCM), hold greater potential to disrupt speech planning and production in AWS relative to AWNS. It is also possible, given the present findings, that the influence of any phonetic factors may become more evident when assessed with simultaneous consideration of lexical properties.
Mediation, as defined by Baron and Kenny (1986), suggests that an observed relationship between a predictor and a criterion occurs only if additional factors are present. The two-way interaction observed in the present study indicated that, when mediated by high lexical and phonological properties, significantly slower SRTs emerged for phonetically complex words relative to phonetically simple words. Complexity-based SRT differences were not observed when naming words with low lexical properties. In terms of speech planning, these findings suggest that greater phonetic complexity may delay single-word production only if the lexical and phonological properties of the word are accessed with relative ease. The influence of phonetic difficulty may be minimal, however, if greater lexical processing is required prior to execution due to lower lexical or phonological frequency. That said, this interaction does not confirm the predictions of the EXPLAN model (Howell, 2011), which posits that the phonetic complexity of a word, irrespective of lexical and phonological characteristics, is sufficient to delay speech planning in AWNS, and to a greater extent in AWS. The non-significant two-way interaction between talker group and phonetic complexity further supports this point.
Finally, the non-significant three-way interaction between talker group, phonetic properties, and lexical values suggests that the overall interaction between phonetic and lexical factors does not differ between AWS and AWNS. Visual examination of SRTs in Fig. 1, however, warrants further discussion of two critical points. First, the significantly slower overall response latency of AWS relative to AWNS (M = 150.33 ms) may have limited the extent to which any statistically meaningful overlap between groups could be observed. Second, the direction and magnitude of the interaction differs within each talker group. For AWNS, naming speed for complex words, but not simple words, was mediated by lexical properties, with slower SRTs for complex, high lexical frequency targets. For AWS, naming speed for simple words, but not complex words, was mediated by lexical properties, with the fastest SRTs for phonetically simple, low frequency targets. These disparate response patterns within each group, although non-significant within the three-way interaction, provide some evidence that production of simple and complex words may differ in AWS and AWNS when a third mediating factor is also considered, but that the lexical properties that mediate complexity may differ between groups. This interpretation remains speculative, and further investigation of the relationship between lexical and phonetic factors and the speed of single-word production is warranted. Such investigations may provide greater specificity to the unexpected findings of previous studies, which provide complexity-based accounts of stuttering without considering the potential influence of lexical properties.
4.2. Accuracy and fluency of production
Due to the limited number of naming errors and atypical fluency errors produced by AWNS and AWS (see Table 5), statistical analysis of accuracy and fluency could not be conducted. A likely explanation for the reduced number of naming errors relative to Newman and Bernstein Ratner (2007) may be their larger sample size (N = 25 AWS, 25 AWNS) and number of tokens (N = 22–24 per analysis) relative to the present study. However, the mean accuracy rate of each group (AWNS: 95.8%, 19 errors out of 447 usable responses; AWS: 93.8%, 23 errors out of 373 usable responses) was similar to those reported by Newman and Bernstein Ratner for AWNS (97.6%) and AWS (94.3%) from a larger number of tokens (n = 5350). Another possible explanation for this discrepancy is the discordant error criteria used between studies, specifically the manner in which errors were tallied. Unlike the present study, Newman and Bernstein Ratner included alternate word choices (e.g., puppy for dog) and elaborations (e.g., birthday cake for cake) as accurate responses during error analysis. These methodological differences warrant caution when comparing error data between studies.
Minimal naming errors when producing simple and complex words may simply serve as additional support for the limited influence of phonetic complexity on single-word production accuracy for either group. To the extent that the descriptive data in Table 5 can be compared to the statistical analysis conducted by Newman and Bernstein Ratner (2007), AWS and AWNS in the present study exhibited a similar number of naming errors, a pattern partially inconsistent with their findings. The presence of low lexical properties appeared to moderate the potential relationship between complexity and error rate for AWNS and AWS. That is, increased errors on high complexity words for AWNS occurred more often on words with low than high lexical values, but more often on low complexity words for AWS. In contrast, naming errors for words with high lexical values were similar irrespective of phonetic composition. However, given the limited number of errors across the four word types (range 0–14), the potential influence of phonetic complexity upon naming errors warrants further examination.
In terms of atypical disfluencies, data from Table 5 suggest that when select lexical variables were balanced, phonetic complexity served as a minimal contributor to stuttering during single-word productions. The similar number of stuttered responses for words of high and low phonetic complexity, as measured by the WCM, is consistent with previous single-word production studies (e.g., Huinck et al., 2004). However, present findings are inconsistent with studies that have analyzed the phonetic complexity of content words produced within conversational speech samples (Dworzynski & Howell, 2004; Howell & Au-Yeung, 2007; Howell et al., 2006; LaSalle & Wolk, 2011), with the exception of one (Coalson et al., 2012). The observed differences between studies using connected versus single-word samples may be attributed to the contribution of utterance-based variables not present in the current study or to the failure to account for lexical factors during the analysis of words selected from conversational output. It is also possible that the nature of the experimental task itself contributed to moments of stuttering by AWS. Participants were instructed to respond as quickly and accurately as possible, and could not avoid production of the target word. Moments of stuttering may reflect anticipatory struggle by AWS rather than (or in addition to) increased linguistic or phonetic demand. That being said, any interpretation of the disfluency data from the present study is compromised by the restricted number of SLDs produced (N = 16). Nevertheless, the relatively limited number of SLDs observed is not uncommon during single-word production tasks and is comparable to previous studies wherein disfluency data have been discussed (e.g., Huinck et al., 2004: N = 11).
The present study provides preliminary evidence that phonetic complexity alone does not disrupt efficient speech planning in AWS or AWNS. Additionally, the minimal number of disfluencies observed during responses suggests that phonetic complexity does not perturb fluency of production at the single-word level in AWS. It remains possible that words of increased phonetic complexity may compromise speech planning and result in disfluency when embedded near the onset of a target utterance (e.g., “I like corn muffins” versus “I like banana muffins”). Coalson and Byrd (2016) found no relationship between the fluency of utterance-initial words and the phonetic complexity of the following word within the spontaneous speech of children who stutter. However, the additive demand of planning longer, more complex syntactic phrases that also include a complex word (e.g., as the third word) may be sufficient to compromise the initial portion of the intended speech plan. Thus, future studies may consider expanding the present study to elicit speech production that systematically manipulates the complexity of the second, third, or fourth word within a target utterance while assessing speed, accuracy, and fluency – particularly near the utterance-initial position – in AWS and AWNS.
Finally, it should be noted that the limited influence of phonetic complexity – the critical factor proposed to disrupt speech planning within the EXPLAN model (Howell, 2011) – upon single-word productions in AWS and AWNS does not address, and should not be considered to reflect, the potential influence of phonological complexity. The WCM was designed to index the complexity of motor movement during single-word production, rather than the phonological characteristics that may influence the retrieval and preparation of phonological information, or deployment of the intended speech plan to the speech-motor system. Similar to Newman and Bernstein Ratner (2007), we selected two critical values related to phonological processing – neighborhood density and neighborhood frequency – in addition to word frequency to control for confounding lexical properties. However, there are other phonological factors that may have contributed to the ease of phonological processing and, for this reason, we cannot completely rule out the possible contribution of phonological composition upon SRT or accuracy data. For example, stimuli in the present study were not controlled for phonotactic probability, or the frequency with which certain sounds or sound sequences occur within the language, a phonological property known to facilitate single-word production in adults (e.g., Vitevitch, Luce, Pisoni, & Auer, 1999). Unlike neighborhood density and neighborhood frequency (see Newman & Bernstein Ratner, 2007), minimal published data are available to determine whether phonotactic probability influences speech reaction time latencies in AWS (however, see Anderson & Byrd, 2008, for a review of phonotactic probability in children who stutter). Sasisekaran and Weisberg (2014) explored the nonword repetition accuracy of 10 adults who stutter when presented with auditory nonword stimuli that varied by complexity and phonotactic constraint. Adults who stutter demonstrated decreased accuracy repeating complex nonwords compared to adults who do not stutter. Additionally, adults who stutter exhibited significant practice effects as measured by reduced movement variability for 3-syllable stimuli, but motoric variability persisted for the longer 4-syllable words. These preliminary data suggest that adults who stutter present with deficits in phonemic encoding as well as deficits in speech motor planning and execution.
Nevertheless, post-hoc evaluation of stimuli in the present study revealed considerable overlap across each of the four word subgroups for segmental probability (i.e., high PC-low LV = 0.036 to 0.070; high PC-high LV = 0.029 to 0.056; low PC-low LV = 0.028 to 0.071; low PC-high LV = 0.059 to 0.087) and biphone probability (i.e., high PC-low LV = 0.001 to 0.007; high PC-high LV = 0.002 to 0.004; low PC-low LV = 0.002 to 0.008; low PC-high LV = 0.002 to 0.011). These findings suggest that the influence of high or low phonological probability, if present, did not disproportionately affect one specific subset of words. They also suggest that direct manipulation of this additional factor, which was relatively invariant across word subtypes, may have diminished the magnitude of SRT differences between groups.
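For reference, segment and biphone probabilities of this kind are commonly computed as position-specific, log-frequency-weighted proportions over a reference lexicon (after Vitevitch & Luce, 2004). The sketch below follows that general recipe; it is not the calculator used to derive the values reported above, and the toy lexicon is purely illustrative.

```python
from collections import defaultdict
from math import log10


def phonotactic_probability(word, lexicon):
    """Position-specific segment and biphone probabilities for `word`.

    `word` is a tuple of phoneme symbols; `lexicon` maps phoneme tuples to
    raw frequency counts. Log-frequency weighting follows the general
    approach of Vitevitch & Luce (2004); this is a sketch, not the exact
    calculator used for the values reported above.
    """
    seg_num, seg_den = defaultdict(float), defaultdict(float)
    bi_num, bi_den = defaultdict(float), defaultdict(float)
    for phones, freq in lexicon.items():
        w = log10(freq + 1)
        for i, seg in enumerate(phones):
            seg_num[(i, seg)] += w
            seg_den[i] += w
        for i in range(len(phones) - 1):
            bi_num[(i, phones[i], phones[i + 1])] += w
            bi_den[i] += w
    seg_prob = sum(
        seg_num[(i, s)] / seg_den[i] for i, s in enumerate(word) if seg_den[i]
    )
    bi_prob = sum(
        bi_num[(i, word[i], word[i + 1])] / bi_den[i]
        for i in range(len(word) - 1) if bi_den[i]
    )
    return seg_prob, bi_prob


# Toy lexicon (phoneme tuple -> frequency count); a real analysis would
# use a large frequency-weighted dictionary such as the HML.
toy_lexicon = {("p", "eh", "n"): 200, ("m", "uw", "n"): 150, ("s", "ah", "n"): 300}
print(phonotactic_probability(("p", "eh", "n"), toy_lexicon))
```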
Given the unique susceptibility to phonological complexity AWS exhibit during tasks that require single-word production (e.g., Byrd, Vallely, Anderson, & Sussman, 2012; Sasisekaran & Weisberg, 2014) and during non-vocal tasks that require no phonetic processing (e.g., Byrd, McGill, & Usler, 2015; Brocklehurst & Corley, 2011; Coalson & Byrd, 2015; Sasisekaran et al., 2006; Sasisekaran, de Nil, Smyth, & Johnson, 2006; Weber-Fox, Spencer, Spruill, & Smith, 2008), future studies should continue to explore the precise relationship between phonological and phonetic factors in addition to those controlled in the present study.
Finally, two word subsets in the present study with lower LV consisted of words with lower phonological neighborhood density. Previous research has been inconsistent with regard to how neighborhood density may impact SRT in adults. Luce and Pisoni (1998, Experiment 3) found that when word frequency and neighborhood frequency were controlled, adults named single words with low neighborhood density significantly faster than high density words. In contrast, Vitevitch (2002, Experiments 3 & 4) found that when controlling for lexical frequency and neighborhood frequency, high density words were named significantly faster than low density words. These inconsistent findings were also noted by Newman and Bernstein Ratner (2007), who ultimately found that high neighborhood density, when isolated from word frequency and neighborhood frequency, did not significantly decrease mean SRT for AWNS or AWS relative to low density words. Our results generally aligned with those of Newman and Bernstein Ratner, as neither AWNS nor AWS significantly differed in SRT when subtypes of high and low lexical value (e.g., high and low density) were compared irrespective of phonetic value. Nonetheless, based on Vitevitch (2002), it remains possible that words of higher neighborhood density in the present study may have required greater time to access, and thereby minimized the facilitating effects of higher lexical frequency and neighborhood frequency (irrespective of phonetic properties). Future studies may benefit from direct isolation of lexical, phonological, and phonetic properties to examine the unique influence of each, rather than controlling for these properties to isolate the effects of a single factor – phonetic complexity – in AWNS and AWS.
4.3. Implications for EXPLAN model
The EXPLAN model predicts that moments of stuttering occur due to incomplete or delayed completion of the speech plan prior to motoric execution, and considers planning and execution stages to be relatively independent from each other (Howell, 2011). Direct application of these data taken from single-word productions in the present study to the predictions of the EXPLAN model is difficult, as stuttered speech is proposed to rely on the phonetic complexity of the upcoming word, as well as the word currently in production, within connected speech (cf. Coalson & Byrd, 2016). However, isolated words do require planning prior to production, and as observed in the present study and previous studies, stuttered speech does occasionally occur on single-word productions. Application of the EXPLAN model to the present findings from single-word production can therefore be made in a modified fashion.
If increased phonetic complexity is considered to be sufficient to impede speech planning and precipitate stuttered speech, as suggested by Howell (2011), the present findings do not support the predictions of the EXPLAN model and are consistent with the issues raised by Bernstein Ratner (2005). Alternatively, if phonetic complexity is considered a factor more relevant to motoric execution than to speech planning, the preliminary findings of the present study make the predictions of the EXPLAN model more tenable. The latter interpretation is supported by data from Smith et al. (2010), who reported greater motoric instability in AWS than AWNS when producing complex versus simple nonwords matched for length. Nonword stimuli that decreased motoric stability in AWS in their study were calculated to have increased phonetic complexity as measured by the WCM (i.e., WCM = 4 per nonword), similar to the complex stimuli in the present study, while production stimuli of lower complexity (WCM < 2 per nonword) did not yield significant motor coordination differences between groups. If AWS require more time to retrieve lexical information or access phonological properties, and the speaker is particularly vulnerable to speech breakdown when producing motor movements that are more complex and unstable, this combined instability of processing and production would seemingly present the ideal environment for stuttered speech (Sussman et al., 2011). Results from the present study provide preliminary support that the consequences of this atypical processing may, on occasion, be observable in AWS. For example, ease of production in AWS was facilitated in the present study by the presence of less complex phonetic sequences and high frequency lexical properties (see Fig. 1). That is, only under the most favorable linguistic and motoric conditions did the speed of production in AWS resemble that of AWNS. These data suggest that both phonetic and lexical factors may uniquely contribute to the ease of production in AWS in a manner distinct from AWNS.
Finally, present data are preliminary in nature and caution should be taken when interpreting these data as the precise nature of relationships between phonetic and isolated lexical factors requires further research. Lexical factors considered in our study were grouped in a manner consistent with single-word production patterns in AWNS. In AWNS, SRTs are faster for words of high lexical properties, while accuracy of production is decreased by low lexical properties. Due to the potential differences in lexical and phonological processing in AWS, the individual lexical properties may exert differential impact (e.g., Anderson, 2007) or minimal impact (e.g., Arnold et al., 2005; Coalson et al., 2012) on single-word production. Thus, grouped lexical properties of similar values may mask the full effect of specific lexical properties in AWS, relative to phonetic properties. Future research targeting these and other relevant factors (e.g., word familiarity, Newman & German, 2005; phonotactic probability: Anderson & Byrd, 2008) in isolation and at the single-word level, relative to phonetic complexity, would provide greater specificity to the planning factors that may or may not result in disfluencies within the EXPLAN model.
5. Conclusion
Present findings indicate that phonetic complexity, as measured by the WCM, does not uniquely influence the speed of single-word production in AWS. These preliminary data further support the possibility that when lexical values are not considered during analysis, as in previous studies examining conversational data, observed effects of phonetic complexity in AWS may be an epiphenomenon of an interaction between lexical and phonetic factors. Future studies should investigate whether, and to what degree, lexical and phonetic factors jointly contribute to single-word production in AWS.
Acknowledgments
This research was funded in part by grant F32 DC006755–01 (Phonological Encoding of Children who Stutter) from the National Institutes of Health (NIH) and by the Michael and Tami Lang Stuttering Institute. We would like to thank Elizabeth Hampton, M.S., CCC-SLP, who assisted us with the recruitment and testing of participants, as well as the graduate students who assisted with testing, transcription, and data coding. We would also like to thank Dr. Michael Mahometa for his assistance with the statistical analyses. Most of all, we would like to thank the adults who do and do not stutter who were willing to give their time to participate in this study and help us to further our knowledge of the underlying nature of stuttering.
References
- Al-Timimi F, Khamaiseh Z, & Howell P (2013). Phonetic complexity and stuttering in Arabic. Clinical Linguistics & Phonetics, 27, 874–887.
- Anderson JD, & Byrd CT (2008). Phonotactic probability effects in children who stutter. Journal of Speech, Language, and Hearing Research, 51, 851–866.
- Anderson C, & Cohen W (2012). Measuring word complexity in speech screening: Single-word sampling to identify phonological delay/disorder in preschool children. International Journal of Language & Communication Disorders, 47, 534–541.
- Anderson JD (2007). Phonological neighborhood and word frequency effects in the stuttered disfluencies of children who stutter. Journal of Speech, Language, and Hearing Research, 50, 229–247.
- Anderson JD (2008). Age of acquisition and repetition priming effects on picture naming of children who do and do not stutter. Journal of Fluency Disorders, 33, 135–155.
- Arnold HS, Conture EG, & Ohde RN (2005). Phonological neighborhood density in the picture naming of young children who stutter: Preliminary study. Journal of Fluency Disorders, 30, 125–148.
- Baron RM, & Kenny DA (1986). The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51, 1173–1182.
- Bernstein Ratner N, Newman R, & Strekas A (2009). Effects of word frequency and phonological neighborhood characteristics on confrontation naming in children who stutter and normally fluent peers. Journal of Fluency Disorders, 34, 225–241.
- Bernstein Ratner N (2005). Is phonetic complexity a useful construct in understanding stuttering? Journal of Fluency Disorders, 30, 337–341.
- Brocklehurst PH, & Corley M (2011). Investigating the inner speech of people who stutter: Evidence for (and against) the Covert Repair Hypothesis. Journal of Communication Disorders, 44, 246–260.
- Byrd CT, Conture EG, & Ohde RN (2007). Phonological priming in young children who stutter: Holistic versus incremental processing. American Journal of Speech-Language Pathology, 16, 43–53.
- Byrd CT, Vallely M, Anderson J, & Sussman H (2012). Nonword repetition and phoneme elision in adults who do and do not stutter. Journal of Fluency Disorders, 37, 188–201.
- Byrd CT, McGill M, & Usler E (2015). Nonword repetition and phoneme elision in adults who stutter: Vocal and nonvocal performance differences. Journal of Fluency Disorders, 44, 17–31.
- Coalson GA, & Byrd CT (2015). Metrical encoding in adults who do and do not stutter. Journal of Speech, Language, and Hearing Research, 58, 601–621.
- Coalson GA, & Byrd CT (2016). Phonetic complexity of words immediately following utterance-initial productions in children who stutter. Journal of Fluency Disorders, 47, 56–69.
- Coalson GA, Byrd CT, & Davis BL (2012). The influence of phonetic complexity on stuttered speech. Clinical Linguistics & Phonetics, 26, 646–659.
- Dayalu VN, Kalinowski J, Stuart A, Holbert D, & Rastatter MP (2002). Stuttering on content and function words in adults who stutter: A concept revisited. Journal of Speech, Language, and Hearing Research, 45, 871–878.
- Dell GS (1990). Effects of frequency and vocabulary type on phonological speech errors. Language and Cognitive Processes, 5, 313–349.
- Dunn LM, & Dunn LM (1997). Peabody picture vocabulary test – III (PPVT-III) (3rd ed.). San Antonio, TX: Pearson Education, Inc.
- Dunn LM, & Dunn DM (2007). PPVT-IV: Peabody picture vocabulary test (PPVT-IV) (4th ed.). San Antonio, TX: Pearson Education, Inc.
- Dworzynski K, & Howell P (2004). Predicting stuttering from phonetic complexity in German. Journal of Fluency Disorders, 29, 149–173.
- Howell P, & Au-Yeung J (1995). The association between stuttering, Brown’s factors, and phonological categories in child stutterers ranging in age between 2 and 12 years. Journal of Fluency Disorders, 20, 331–344.
- Howell P, & Au-Yeung J (2007). Phonetic complexity and stuttering in Spanish. Clinical Linguistics & Phonetics, 21, 111–127.
- Howell P, & Dworzynski K (2005). Planning and execution processes in speech control by fluent speakers and speakers who stutter. Journal of Fluency Disorders, 30, 343–354.
- Howell P, Au-Yeung J, Yaruss SJ, & Eldridge K (2006). Phonetic difficulty and stuttering in English. Clinical Linguistics & Phonetics, 20, 703–716.
- Howell P (2004). Assessment of some contemporary theories of stuttering that apply to spontaneous speech. Contemporary Issues in Communication Science and Disorders, 31, 122–139.
- Howell P (2011). Recovery from stuttering. New York, NY: Taylor and Francis Group.
- Huinck WJ, van Lieshout PHHM, Peters HFM, & Hulstijn W (2004). Gestural overlap in consonant clusters: Effects on the fluent speech of stuttering and non-stuttering subjects. Journal of Fluency Disorders, 29, 3–25.
- Jakielski KJ, Maytasse R, & Doyle E (2006, November). Acquisition of phonetic complexity in children 12–36 months of age. Poster session presented at the annual convention of the American Speech-Language-Hearing Association, Miami, FL.
- Jakielski KJ (1998). Motor organization in the acquisition of consonant clusters (Doctoral dissertation). Austin, TX: University of Texas.
- Jakielski KJ (2002, November). A new method for measuring articulatory complexity. Paper presented at the annual convention of the American Speech-Language-Hearing Association, Atlanta, GA.
- Jescheniak JD, & Levelt WJ (1994). Word frequency effects in speech production: Retrieval of syntactic information and phonological form. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20, 824–843.
- Kucera H, & Francis WN (1967). Computational analysis of present-day American English. Providence, RI: Brown University Press.
- LaSalle LR, & Wolk L (2011). Stuttering, cluttering, and phonological complexity: Case studies. Journal of Fluency Disorders, 36, 285–289.
- Logan KJ, & Conture EG (1995). Length, grammatical complexity, and rate differences in stuttered and fluent utterances of children who stutter. Journal of Fluency Disorders, 20, 35–61.
- Logan KJ, & Conture EG (1997). Selected temporal, grammatical, and phonological characteristics of conversational utterances produced by children who stutter. Journal of Speech, Language, and Hearing Research, 40, 107–120.
- Logan KJ, Byrd CT, Mazzocchi EM, & Gillam RB (2011). Speaking rate characteristics of elementary-school-aged children who do and do not stutter. Journal of Communication Disorders, 44, 130–147.
- Logan KJ (2001). The effect of syntactic complexity upon the speech fluency of adolescents and adults who stutter. Journal of Fluency Disorders, 26, 85–106.
- Logan KJ (2003). The effect of syntactic structure upon speech initiation times of stuttering and nonstuttering speakers. Journal of Fluency Disorders, 28, 17–35.
- Luce PA, & Pisoni DB (1998). Recognizing spoken words: The neighborhood activation model. Ear and Hearing, 19, 1–36.
- Luce PA, Pisoni DB, & Goldinger SD (1990). Similarity neighborhoods of spoken words. In Altmann GTM (Ed.), Cognitive models of speech processing: Psycholinguistic and computational perspectives (pp. 122–147). Cambridge, MA: The MIT Press.
- Macrae T (2013). Lexical and child-related factors in word variability and accuracy in infants. Clinical Linguistics & Phonetics, 27, 497–507.
- Newman RS, & Bernstein Ratner N (2007). The role of selected lexical factors on confrontation naming accuracy, speed, and fluency in adults who do and do not stutter. Journal of Speech, Language, and Hearing Research, 50, 196–213.
- Newman RS, & German DJ (2005). Life span effects of lexical factors on oral naming. Language and Speech, 48, 123–156.
- O’Brian S, Packman A, Onslow M, & O’Brian N (2004). Measurement of stuttering in adults: Comparison of stuttering-rate and severity-scaling methods. Journal of Speech, Language, and Hearing Research, 47, 1081–1087.
- Riley G (1994). Stuttering severity instrument for children and adults – 3 (SSI-3) (3rd ed.). Austin, TX: Pro-Ed.
- Sasisekaran J, & Weisberg S (2014). Practice and retention of nonwords in adults who stutter. Journal of Fluency Disorders, 41, 55–71.
- Sasisekaran J, de Nil LF, Smyth R, & Johnson C (2006). Phonological encoding in the silent speech of persons who stutter. Journal of Fluency Disorders, 31, 1–21.
- Sasisekaran J (2013). Nonword repetition and nonword reading abilities in adults who do and do not stutter. Journal of Fluency Disorders, 38, 275–289.
- Silverman SW, & Bernstein Ratner N (1997). Syntactic complexity, fluency, and accuracy of sentence imitation in adolescents. Journal of Speech, Language, and Hearing Research, 40, 95–106.
- Smith A, Sadagopan N, Walsh B, & Weber-Fox C (2010). Increasing phonological complexity reveals heightened instability in inter-articulatory coordination in adults who stutter. Journal of Fluency Disorders, 35, 1–18.
- Stoel-Gammon C (2010). The Word Complexity Measure: Description and application to developmental phonology and disorders. Clinical Linguistics & Phonetics, 24, 271–282.
- Sussman H, Byrd C, & Guitar B (2011). The integrity of anticipatory coarticulation in fluent and non-fluent utterances of adults who stutter. Clinical Linguistics & Phonetics, 25, 169–186.
- Szekely A, Jacobsen T, D’Amico S, Devescovi A, Andonova E, Herron D, et al. (2004). A new on-line resource for psycholinguistic studies. Journal of Memory and Language, 51, 247–250.
- Throneburg RN, Yairi E, & Paden EP (1994). Relation between phonologic difficulty and the occurrence of disfluencies in the early stage of stuttering. Journal of Speech and Hearing Research, 37, 504–509.
- Tsiamtsiouris J, & Cairns HS (2009). Effects of syntactic complexity and sentence-structure priming on speech initiation time in adults who stutter. Journal of Speech, Language, and Hearing Research, 52, 1623–1639.
- Tsiamtsiouris J, & Cairns HS (2013). Effects of sentence-structure complexity on speech initiation time and disfluency. Journal of Fluency Disorders, 38, 30–44.
- van Lieshout PHHM, Hulstijn W, & Peters HFM (1996). Speech production in people who stutter: Testing the motor plan assembly hypothesis. Journal of Speech and Hearing Research, 39, 76–92.
- Vitevitch MS, & Sommers MS (2003). The facilitative influence of phonological similarity and neighborhood frequency in speech production in younger and older adults. Memory & Cognition, 31, 491–504.
- Vitevitch MS, Luce PA, Pisoni DB, & Auer ET (1999). Phonotactics, neighborhood activation, and lexical access for spoken words. Brain and Language, 68, 306–311.
- Vitevitch MS (2002). The influence of phonological similarity neighborhoods on speech production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 28, 735–747.
- Vitevitch MS, & Luce PA (2005). Increases in phonotactic probability facilitate spoken nonword repetition. Journal of Memory and Language, 52, 193–204.
- Wagner RK, Torgesen JK, & Rashotte CA (1999). Comprehensive test of phonological processing (CTOPP). Austin, TX: Pro-Ed.
- Weber-Fox C, Spencer RM, Spruill JE, & Smith A (2008). Phonological processing in adults who stutter: Electrophysiological and behavioral evidence. Journal of Speech, Language, and Hearing Research, 47, 1244–1258.
- Williams KT (1997). Expressive vocabulary test (EVT). Circle Pines, MN: American Guidance Services.
- Williams KT (2007). Expressive vocabulary test – 2 (EVT-2) (2nd ed.). Minneapolis, MN: Pearson Assessments.
- Wolk L, & LaSalle L (2015). Phonological complexity in school-aged children who stutter and exhibit a language disorder. Journal of Fluency Disorders, 43, 40–53.
- Yairi E, & Ambrose NG (2005). Early childhood stuttering: For clinicians, by clinicians. Austin, TX: Pro-Ed.
- Yaruss JS (1999). Utterance length, syntactic complexity, and childhood stuttering. Journal of Speech, Language, and Hearing Research, 42, 329–344.
- Zackheim CT, & Conture EG (2003). Childhood stuttering and speech disfluencies in relation to children’s mean length of utterance: A preliminary study. Journal of Fluency Disorders, 28, 115–142.