Journal of Speech, Language, and Hearing Research (JSLHR). 2021 Mar 11;64(6 Suppl):2223–2233. doi: 10.1044/2020_JSLHR-20-00269

Neuroimaging of the Syllable Repetition Task in Children With Residual Speech Sound Disorder

Caroline Spencer, Jennifer Vannest, Edwin Maas, Jonathan L. Preston, Erin Redle, Thomas Maloney, Suzanne Boyce
PMCID: PMC8740709; PMID: 33705667

Abstract

Purpose

This study investigated phonological and speech motor neural networks in children with residual speech sound disorder (RSSD) during an overt Syllable Repetition Task (SRT).

Method

Sixteen children with RSSD affecting /ɹ/ (6F [female]; ages 8;0–12;6 [years;months]) and 16 children with typically developing speech (TD; 8F; ages 8;5–13;7) completed a functional magnetic resonance imaging experiment. Children performed the SRT (“SRT-Early Sounds”) with the phonemes /b, d, m, n, ɑ/ and an adapted version (“SRT-Late Sounds”) with the phonemes /ɹ, s, l, tʃ, ɑ/. We compared the functional activation and transcribed production accuracy of the RSSD and TD groups during both conditions. Expected errors were not scored as inaccurate.

Results

No between-group or within-group differences in repetition accuracy were found on the SRT-Early Sounds or SRT-Late Sounds tasks at any syllable sequence length. On a first-level analysis of the tasks, the TD group showed expected patterns of activation for both the SRT-Early Sounds and SRT-Late Sounds, including activation in the left primary motor cortex, left premotor cortex, bilateral anterior cingulate, bilateral primary auditory cortex, bilateral superior temporal gyrus, and bilateral insula. The RSSD group showed similar activation when correcting for multiple comparisons. In further exploratory analyses, we observed the following subthreshold patterns: (a) On the SRT-Early Sounds, greater activation was found in the left premotor cortex for the RSSD group, while greater activation was found in the left cerebellum for the TD group; (b) on the SRT-Late Sounds, a small area of greater activation was found in the right cerebellum for the RSSD group. No within-group functional differences were observed (SRT-Early Sounds vs. SRT-Late Sounds) for either group.

Conclusions

Task performance was similar between the groups, and functional activation likewise did not differ. Observed functional differences in previous studies may reflect differences in task performance, rather than fundamental differences in neural mechanisms for syllable repetition.


Speech sound disorders (SSDs) comprise a significant fraction (24%–41%; Black et al., 2015) of communication disorders in children. Residual SSD (also variously termed persistent or resistant in the literature) is a subset of SSD and occurs when the disorder persists beyond the age of typical acquisition. Traditionally, age 8–9 years has been considered the upper limit of speech sound acquisition (Shriberg et al., 2010; Wren et al., 2013), though recent research (McLeod & Crowe, 2018) indicates that most children acquire the speech sounds of their native language as young as 5 or 6 years of age. Residual speech sound disorders (RSSDs) typically affect production of articulatorily complex sounds such as American English “r” (/ɹ/), such that listeners perceive either a distortion or a substitution of the /ɹ/. RSSDs are often difficult to resolve with traditional intervention approaches, and children with RSSD may experience negative academic and social impacts as a result of their persistent speech errors (Hitchcock et al., 2015; Leitao & Fletcher, 2004; Preston & Edwards, 2007). Although children with RSSD frequently present with otherwise typical communication abilities, this population has been shown to demonstrate subtle differences in speech processing. Investigating the neural underpinnings of speech processing in RSSD may assist in understanding the etiology and underlying mechanisms of these differences, which in turn may assist in developing more effective treatment approaches.

Neural Function for Speech

In typical speakers, speech generation involves a network of inferior frontal and superior temporal brain regions (Bohland & Guenther, 2006; Guenther, 2006; Holland et al., 2001; Shuster & Lemieux, 2005). In their dual stream model, Hickok and Poeppel (2000, 2007) proposed that speech processing begins in bilateral superior temporal regions associated with phonological processing, then diverges into a ventral stream (e.g., bilateral middle and inferior temporal regions) and a dorsal stream (e.g., left temporo-parietal junction, left posterior inferior frontal gyrus, left premotor cortex, left insula). The ventral stream is associated with speech comprehension, while the dorsal stream is associated with sensorimotor integration and phonological processing (Poldrack et al., 2001). In healthy adults, covert syllable repetition has been shown to activate dorsal areas, including bilateral sensorimotor areas for the articulators, as well as the left frontal operculum and bilateral putamen (Liégeois et al., 2016). The left posterior superior temporal gyrus, an area within the dorsal stream, has been associated with phonological processing for both speech production and perception (see Buchsbaum et al., 2001, for a review).

Functional Neural Networks in SSD

In children with SSD, however, there is some evidence that neural networks for speech processing are altered (Preston et al., 2012; Tkach et al., 2011). Importantly, this evidence comes from studies of both covert tasks (involving perception and/or mental rehearsal but no speech production) and overt tasks (involving actual speech production). For instance, using a covert (silent) speech recognition task, Preston et al. (2012) demonstrated that children with RSSD show hyperactivation in dorsal brain regions (e.g., precentral gyrus, superior temporal gyrus, and insula) and hypoactivation in ventral brain regions (e.g., globus pallidus, fusiform gyrus, middle temporal gyrus), suggesting less efficient and more effortful speech processing. On an overt nonword repetition task, Tkach et al. (2011) found that adolescents with a history of SSD had reduced activation in right inferior frontal and middle temporal gyri but increased activation in left speech network areas, including superior frontal gyrus, middle temporal gyrus, inferior frontal gyrus, angular gyrus, and supramarginal gyrus, providing further evidence that altered neural mechanisms for speech production may be associated with SSD. Covert and overt speech tasks are known to elicit different functional patterns (Shuster & Lemieux, 2005), but the results of both studies suggest that functional differences in phonological processing and speech motor planning areas may underlie SSDs. Although overt tasks are methodologically more complicated, because the motion of speaking can degrade data quality, they more closely reflect the ecology of speech production disorders. Accordingly, the current study follows successful models of overt speech tasks (Pigdon et al., 2020; Tkach et al., 2011) as an ecologically valid way to study the neural mechanisms involved in disordered speech production.

Neural Structure in SSD

Imaging studies have also shown structural brain differences in regions associated with speech perception, motor control, and word reading in children with RSSD (Preston et al., 2014). Specifically, Preston et al. reported that children with RSSD showed increased gray matter volume in bilateral superior temporal gyrus, increased white matter volume in corpus callosum, and decreased white matter volume in the right lateral occipital gyrus, as compared to children with typically developing speech (TD). Additional differences in gray matter development have been found in children with SSD, as Luders et al. (2017) reported altered morphology of the corpus callosum, and Kurth et al. (2018) reported rightward gray matter asymmetry. A systematic review conducted by Liégeois and Morgan (2012) concluded that long-term and severe pediatric speech disorders are largely associated with structural differences of bilateral brain regions associated with speech production, providing further evidence that SSD involves alterations to neural networks involved in speech processing. Differences in left corticobulbar tract development have been associated with SSDs (Morgan et al., 2018).

Phonological and Speech Motor Processing

Sensorimotor research has concluded that speech production integrally involves both auditory and motor systems (see Hickok et al., 2011, for a review). Children with RSSD have been shown to exhibit both (a) receptive deficits in phonological processing (Preston & Edwards, 2007) and (b) differences on measures of speech motor control (Flipsen, 2003; Preston & Edwards, 2009). Some children with SSDs have shown differences in perception of speech sounds (Bird et al., 1995; Rvachew et al., 2003) as well as in phonological working memory (Lewis et al., 2011). Furthermore, children with RSSD show differences in motor execution, including less differentiation of the parts of the tongue during speech and greater variability in tongue movement trajectories (Dugan et al., 2019; Gibbon, 1999; Preston et al., 2019; Spencer et al., 2019). These children have also demonstrated differences in overall fine motor processing related to speech motor execution, engaging the cerebellum more than their typical peers on a fine motor task (Redle et al., 2015).

Nonword Repetition

Phonological processing and motor control capabilities have been jointly examined using nonword repetition (NWR) tasks (Archibald et al., 2013; Farquharson et al., 2018; Preston & Edwards, 2007; Reuterskiöld & Grigos, 2015). NWR is sensitive to impairments in a range of speech and language disorders (Deevy et al., 2010; Dollaghan & Campbell, 1998; Sasisekaran et al., 2010; Spencer & Weber-Fox, 2014), including SSD (Larrivee & Catts, 1999; Preston & Edwards, 2007). NWR requires adequate perception of speech sounds, successful retention of the necessary phonological information in working memory, and proper planning and execution of sounds.

As discussed previously, NWR has been combined with functional magnetic resonance imaging (fMRI) in a limited number of studies of SSD as a broader group, but none of these are specific to RSSD. Furthermore, these studies have yielded inconsistent results. In the Tkach et al. (2011) study, adolescents with a history of SSD, of which RSSD is a subgroup, showed hypoactivation in phonological processing regions (including bilateral superior temporal gyri and left middle temporal gyrus) and hyperactivation in motor control regions (including left inferior frontal cortex, left premotor and primary motor cortices, basal ganglia, and/or cerebellum) during fMRI nonsense word production. In contrast, Pigdon et al. (2020) reported that children with developmental speech disorder (also a subgroup of SSD) did not show significant functional neural differences from typical controls when engaging in a nonword repetition task. Thus, the roles of phonological processing regions and motor control regions during speech production in children with SSDs are not yet clear.

Syllable Repetition Task

In this study, we adapted the Syllable Repetition Task (SRT; Shriberg et al., 2009) for use during fMRI in order to investigate speech motor planning abilities of children with and without residual SSD. The SRT consists of two-, three-, and four-syllable nonsense words built from a limited set of phonemes. We selected the SRT because it was designed specifically to avoid common speech sound errors while still eliciting temporary errors driven by task demands, thereby minimizing confounding effects of speech production errors and working memory difficulties and focusing on phonological and speech motor planning processes. Using the SRT during fMRI enabled us to measure the neural activity associated with these speech processes during production and to identify patterns in that activity. In our experiment, we used two conditions of the adapted SRT: an “SRT-Early Sounds” with the syllables /bɑ/, /dɑ/, /mɑ/, and /nɑ/, and an “SRT-Late Sounds” with the syllables /ɹɑ/, /sɑ/, /lɑ/, and /tʃɑ/. Including both conditions allowed us to (a) measure neural function during speech unconfounded by execution errors, by intentionally avoiding the misarticulations these children commonly produce (SRT-Early Sounds), and (b) measure brain function during misarticulation, by intentionally eliciting errors with speech sounds that require more articulatory control (SRT-Late Sounds; Stoel-Gammon, 2009). This design let us compare the speech processes that may differ in children with RSSD even without accompanying misarticulations against the processes that may differ during misarticulation. We expected that both groups would be capable of completing both tasks (albeit with possibly discrepant phonemic accuracy), which would allow us to examine the underlying neural processes.

We hypothesized that children with RSSD would show primary deficits in both phonological processing and speech motor control, with poorer repetition accuracy on both the SRT-Early Sounds and SRT-Late Sounds. We likewise anticipated group differences in functional activation. Given the previously reported weaknesses in phonological processing in children with RSSD, we expected reduced activation in phonological brain areas, including superior temporal gyrus, relative to children with TD, along with compensatory increased activation in speech motor areas, including premotor cortex and cerebellum. Finally, we predicted that both groups would show greater activation in speech motor areas on the SRT-Late Sounds, but that the differences would be greater in magnitude for the RSSD group.

Method

Participants

Data collection procedures were approved by the University of Cincinnati Institutional Review Board. Sixteen children with RSSD (6F [female]; ages 8;0–12;6 [years;months]) and 16 children with TD (8F; ages 8;5–13;7) completed an fMRI experiment, which included both the SRT-Early Sounds and SRT-Late Sounds tasks. All participants were recruited through the University of Cincinnati Speech and Language Clinic, Cincinnati Children's Hospital Medical Center, and nearby communities. All participants passed a hearing and vision screening and demonstrated expressive and receptive language skills within normal limits, as assessed by the Formulated Sentences and Recalling Sentences subtests of the Clinical Evaluation of Language Fundamentals–Fifth Edition (CELF-5; Wiig et al., 2013) and the Peabody Picture Vocabulary Test–Fourth Edition (PPVT-4; Dunn & Dunn, 2007). All RSSD participants exhibited /ɹ/ articulation errors, as demonstrated by a standard score of < 85 on the Goldman-Fristoe Test of Articulation–Second Edition (GFTA-2; Goldman & Fristoe, 2000) and/or ≤ 25% correct on a probe list of words containing /ɹ/. Nine participants with RSSD (ages 8;0–11;6) achieved GFTA-2 standard scores ≥ 85 but demonstrated < 25% accuracy on the word probes. Eleven RSSD participants demonstrated other speech sound errors in addition to rhotic errors; five demonstrated errors only on rhotics. All RSSD participants demonstrated errors on both consonantal and vocalic rhotics. All demonstrated distortions; two also demonstrated substitutions on consonantal rhotics. No RSSD participant had a current or past diagnosis of a motor speech disorder. All TD participants demonstrated articulation abilities within normal limits, as assessed by the GFTA-2 and > 95% correct on a probe list of /ɹ/ words. None of the TD participants had a history of any speech or language disorder. While not an inclusion criterion, all RSSD and TD participants demonstrated phonological processing skills within normal limits, as assessed by the Elision, Blending Words, and Phoneme Isolation subtests of the Comprehensive Test of Phonological Processing–Second Edition (CTOPP-2; Wagner et al., 2013). Participant demographics and standardized test results are summarized in Table 1.

Table 1.

Participant demographics and speech/language standardized test scores.

Variable | RSSD Mean (SD) | TD Mean (SD)
Gender | 6F; 10M | 8F; 8M
Age | 10.27 (1.29) | 11.12 (1.42)
PPVT-4 | 119 (15.35) | 113.89 (16.86)
CELF-5
 Recalling Sentences | 10.38 (3.56) | 11.15 (3.86)
 Formulated Sentences | 13.19 (2.83) | 14.37 (3.45)
GFTA-2 | 81.19 (8.50) | 103.05 (2.03)
CTOPP-2
 Elision | 10.88 (2.60) | 11.00 (2.41)
 Blending Words | 9.38 (3.56) | 9.95 (3.05)
 Phoneme Isolation | 9.69 (2.41) | 9.32 (2.01)

Note. RSSD = residual speech sound disorder; TD = typically developing speech; F = female; M = male; PPVT-4 = Peabody Picture Vocabulary Test–Fourth Edition; CELF-5 = Clinical Evaluation of Language Fundamentals–Fifth Edition; GFTA-2 = Goldman-Fristoe Test of Articulation–Second Edition; CTOPP-2 = Comprehensive Test of Phonological Processing–Second Edition.

MRI Methods

All MRI scans were conducted on a 3T Philips MRI scanner in the Imaging Research Center at Cincinnati Children's Hospital. For the purposes of this study, the following structural and functional scans were included: (a) a T1-weighted structural volume with 1-mm isotropic resolution acquired with a magnetization-prepared rapid gradient-echo (MP RAGE) protocol (Mugler & Brookeman, 1990) and (b) three fMRI time series (two SRT-Early Sounds and one SRT-Late Sounds) using a sparse acquisition approach that allowed a 6-s period for the target presentation and the child's response without gradient/scanner noise, followed by a 6-s acquisition period free of speech-related head motion (Schmithorst & Holland, 2004). Using the Hemodynamics Unrelated to Sounds from Hardware (HUSH; Schmithorst & Holland, 2004) fMRI method, three whole-brain T2*-weighted volumes were acquired per trial using an echo planar imaging (EPI) sequence with 3-mm isotropic resolution at a repetition time (TR) of 2,000 ms and an echo time (TE) of 35 ms. This scheme efficiently captures the peak hemodynamic response elicited by the auditory stimulus and the spoken response after the response has been completed, and is shown in Figure 1.

Figure 1.

Diagram of Hemodynamics Unrelated to Sounds from Hardware (HUSH; Schmithorst & Holland, 2004) paradigm used for SRT-Early Sounds and SRT-Late Sounds tasks. Each trial consisted of 6 s of gradient silence (no scanner noise) for stimulus presentation and the child's response, followed by 6 s of acquisition (three whole-brain volumes, TR = 2,000 ms each). SRT = Syllable Repetition Task.
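To make the sparse-sampling timing concrete, below is a minimal Python sketch of one run's timeline under the parameters described above (6 s of gradient silence, 6 s of acquisition, TR = 2,000 ms); the 36-trial count comes from the task description later in this section, and all names are illustrative, not part of any scanner software.

```python
# Minimal sketch of the sparse (HUSH) trial timing: 6 s of silence for
# stimulus + response, then 6 s of acquisition (3 volumes at TR = 2 s).
import numpy as np

N_TRIALS = 36   # 18 "repeat" + 18 "listen" trials per run (per the task design)
SILENT_S = 6.0  # gradient-silent period: target presentation + child's response
ACQ_S = 6.0     # acquisition period: three whole-brain volumes
TR_S = 2.0      # repetition time per volume

trial_onsets = np.arange(N_TRIALS) * (SILENT_S + ACQ_S)  # stimulus onsets (s)
acq_onsets = trial_onsets + SILENT_S                     # scanner-on onsets (s)
# Each acquisition yields three volumes at TR = 2 s:
volume_onsets = (acq_onsets[:, None] + TR_S * np.arange(3)).ravel()

print(f"Run length: {trial_onsets[-1] + SILENT_S + ACQ_S:.0f} s, "
      f"{volume_onsets.size} volumes")
```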

SRT

The participant heard a target nonword and was asked either to repeat the word immediately or to only listen. The listen-only condition served as a contrast for the speaking condition, isolating the neural activity associated with overt production from the speech recognition processes associated with hearing the nonword. The SRT-Early Sounds and SRT-Late Sounds followed a blocked design with targets presented in random order within each block. Each task consisted of 18 target nonwords, divided into three sets; six targets were first presented in a “repeat” block, and then those same six targets were presented in a “listen-only” block, not necessarily in the same order. The four remaining blocks were presented in the same manner (“repeat,” “listen,” “repeat,” and “listen”). To indicate the desired behavior for each target, the “repeat” trials were accompanied by an image of a mouth on the screen during the 6-s period of target presentation plus response, while the “listen” trials were accompanied by an image of an ear. A crosshair was visible on the screen during each acquisition period. All participants were asked to complete two runs of the SRT-Early Sounds followed by one run of the SRT-Late Sounds.
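As an illustration of this block structure, the following sketch builds one run's presentation schedule; the target labels and the shuffling routine are placeholders, not the study's actual stimuli or presentation software.

```python
# Illustrative sketch of the blocked SRT presentation order: 18 targets split
# into three sets of six; each set runs as a "repeat" block followed by a
# "listen" block, with targets shuffled within each block.
import random

targets = [f"nonword_{i:02d}" for i in range(18)]   # placeholder target labels
sets = [targets[i:i + 6] for i in range(0, 18, 6)]  # three sets of six

schedule = []
for block_set in sets:
    for condition in ("repeat", "listen"):          # repeat block, then listen
        order = random.sample(block_set, k=len(block_set))  # shuffle within block
        schedule.extend((condition, t) for t in order)

for condition, target in schedule[:6]:              # first ("repeat") block
    print(condition, target)
```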

SRT Scoring

Participants' SRT responses were recorded during the session using Audacity software (Version 2.4.2.0) and later transcribed by a trained student worker, the first author, and/or the fifth author. Scoring procedures followed the guidelines established by Shriberg et al. (2009). Each target consonant was transcribed and scored as correct or incorrect. For both the SRT-Early Sounds and SRT-Late Sounds tasks, phoneme substitutions and omissions were scored as incorrect. Insertions (e.g., [bamda] for /bada/) were also scored as incorrect. On the SRT-Late Sounds, participants with RSSD were not penalized for distortion or substitution errors that were evident in the child's conversational speech; for example, [wala] for /ɹala/ was counted as fully accurate for children who substituted [w] for /ɹ/. Interrater agreement was calculated for 10% of the target syllable sequences: Two raters reached an average of 91% agreement on consonant transcription. Due to the placement of the microphone on the head coil and the sound quality of the recordings, differentiating phonemes that differed by only one contrastive feature was sometimes difficult. The raters addressed this by listening to the recording multiple times to determine the phoneme they perceived.
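The expected-error scoring rule can be sketched as follows; the transcription format, the expected-error set, and the handling of omissions/insertions here are simplified stand-ins for the Shriberg et al. (2009) guidelines, not the authors' actual scoring procedure.

```python
# Hedged sketch of per-target consonant scoring with the expected-error
# exemption. The expected-error pairs are hypothetical examples.
EXPECTED_ERRORS = {("ɹ", "w")}   # e.g., a child who habitually substitutes [w] for /ɹ/

def score_consonants(target, produced, expected_errors=EXPECTED_ERRORS):
    """Return (correct, total) consonant counts for one target nonword.

    `target` and `produced` are aligned lists of consonant symbols.
    Substitutions matching the child's habitual errors are not penalized.
    Omissions/insertions break the alignment and are scored all-incorrect
    here, a simplification of the published guidelines.
    """
    if len(produced) != len(target):
        return 0, len(target)                # omission or insertion
    correct = sum(t == p or (t, p) in expected_errors
                  for t, p in zip(target, produced))
    return correct, len(target)

c, n = score_consonants(["ɹ", "l"], ["w", "l"])   # [wala] for /ɹala/
print(f"PCC = {100 * c / n:.0f}%")                # 100%: expected error exempt
```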

Analysis

Speech Production Accuracy

Because the SRT-Early Sounds and SRT-Late Sounds tasks created a repeated-measures scenario and the scores followed a non-normal distribution, the nonparametric Friedman's test was used to compare transcribed speech production accuracy between tasks.
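As a minimal example of this test, the sketch below applies SciPy's implementation of the Friedman test to per-participant accuracy across the three syllable lengths; the values shown are fabricated placeholders, not study data.

```python
# Nonparametric repeated-measures comparison of accuracy scores using the
# Friedman test (three related conditions -> df = 2).
from scipy.stats import friedmanchisquare

# One accuracy score per participant per syllable length (placeholder values):
acc_2syl = [100, 100, 95, 100, 90, 100]
acc_3syl = [95, 100, 90, 95, 90, 100]
acc_4syl = [90, 95, 90, 90, 85, 95]

stat, p = friedmanchisquare(acc_2syl, acc_3syl, acc_4syl)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.3f}")
```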

First-Level fMRI Processing

First-level analysis was implemented in the FMRIB Software Library (FSL; Jenkinson et al., 2012). To account for intensity differences in the fMRI volumes due to T2* relaxation effects that occur each time the scanner gradients switch on, the three volumes acquired during each 6-s HUSH interval were split apart and recombined with the volumes occupying the same position in the other intervals, yielding three time series. For each of these three recombined data sets, the following preprocessing steps were performed: motion correction using FSL's MCFLIRT (Jenkinson et al., 2002), global intensity normalization, high-pass temporal filtering, and spatial smoothing (Gaussian kernel with a 3-mm standard deviation) using fslmaths. The data were normalized to the MNI152 standard brain template (Mazziotta et al., 2001). Normalization was achieved by first aligning each participant's functional data to their brain-extracted T1 image using an affine transform, and then aligning the T1 to the MNI152 template using a 12-parameter registration model. The two transformation matrices were combined and used to transform the functional data to MNI152 space. All normalizations were done using FSL's flirt (Jenkinson et al., 2002; Jenkinson & Smith, 2001). We then analyzed the neural response to the task by contrasting neural activity during the repetition trials with neural activity during the listen-only trials (speaking > listening), using a general linear model that included the six motion parameters as regressors of no interest. Statistical images from each of the three recombined HUSH volumes were then averaged using a fixed effects analysis. Final images are shown at a corrected threshold of p < .05.
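The volume-regrouping step can be sketched as follows, assuming a 4D NIfTI file per run; the file names are illustrative, and only two of the preprocessing tools named above (mcflirt and fslmaths) are shown, with minimal arguments.

```python
# Hedged sketch of HUSH volume regrouping: the three volumes from each 6-s
# acquisition are split by their position (1st, 2nd, 3rd) and recombined into
# three time series, so T2* saturation differences stay within a series.
import subprocess
import nibabel as nib

img = nib.load("srt_run1.nii.gz")            # 4D HUSH acquisition (illustrative name)
data = img.get_fdata()
for pos in range(3):                         # position within each triplet
    sub = data[..., pos::3]                  # every third volume
    nib.save(nib.Nifti1Image(sub, img.affine, img.header),
             f"srt_run1_pos{pos}.nii.gz")

# Per-series preprocessing with real FSL command-line tools:
for pos in range(3):
    base = f"srt_run1_pos{pos}"
    subprocess.run(["mcflirt", "-in", f"{base}.nii.gz", "-out", f"{base}_mc"],
                   check=True)               # motion correction
    subprocess.run(["fslmaths", f"{base}_mc", "-s", "3", f"{base}_mc_sm"],
                   check=True)               # Gaussian smoothing, 3-mm sigma
```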

Second-Level fMRI Processing

For second-level group analyses, we compared functional neural activity over the whole brain between the RSSD and TD groups on the SRT-Early Sounds and SRT-Late Sounds (RSSD > TD and TD > RSSD) using a mixed-effects analysis in FSL's FEAT (Woolrich et al., 2004) with permutation testing via randomise (5,000 permutations; Winkler et al., 2014). Multiple comparisons correction was performed with threshold-free cluster enhancement (Smith & Nichols, 2009) in order to show differences between groups without explicitly setting a cluster-forming threshold. While correcting for multiple comparisons reduces the risk of a Type I error, it also increases the risk of a Type II error, as it may mask smaller effects (Lieberman & Cunningham, 2009). As such, we also analyzed the group contrasts without correcting for multiple comparisons, but at a stricter voxelwise threshold (p < .001 and at least 10 contiguous voxels), to explore any subthreshold differences (Lindquist & Mejia, 2015). We recognize the limitations of including results that do not pass multiple comparisons correction, but given the novelty of this clinical population, we felt it was important to highlight the patterns of group differences observed in the results.
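For concreteness, this is roughly how such a permutation test is invoked with FSL's randomise, using threshold-free cluster enhancement (-T) and 5,000 permutations (-n 5000); the input, design, and contrast file names are illustrative assumptions, not the study's files.

```python
# Minimal sketch of a permutation-based group comparison with FSL's randomise.
import subprocess

subprocess.run([
    "randomise",
    "-i", "all_subjects_cope.nii.gz",  # stacked first-level contrast images
    "-o", "srt_group",                 # output basename
    "-d", "design.mat",                # group-membership design matrix
    "-t", "design.con",                # RSSD > TD and TD > RSSD contrasts
    "-T",                              # threshold-free cluster enhancement
    "-n", "5000",                      # number of permutations
], check=True)
```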

SRT-Early Sounds. Sixteen RSSD and 16 TD participants completed two runs of the SRT-Early Sounds (2 runs × [18 repeat tokens + 18 listen tokens per run] = 72 tokens per subject). Between-group analyses of transcribed consonant accuracy and fMRI activation for the SRT-Early Sounds include both runs.

SRT-Late Sounds. Thirteen RSSD and 16 TD participants completed one run of the SRT-Late Sounds (18 repeat tokens + 18 listen tokens = 36 tokens per subject). Three participants in the RSSD group did not complete the SRT-Late Sounds because they requested to end the scan. There was no systematic effect of age on the inability to complete the SRT-Late Sounds (the three children were ages 8;0, 9;10, and 11;3). The SRT-Late Sounds was always the last task of the scanning protocol.

SRT-Early Sounds versus SRT-Late Sounds. To investigate the effect of phonological and articulatory difficulty, we also compared neural activation between the SRT-Early Sounds and SRT-Late Sounds (within-group analyses). These analyses include only the first run of the SRT-Early Sounds, so as to eliminate any possible learning effect from the second run of the SRT-Early Sounds. Additionally, we included only those participants who completed both the SRT-Early Sounds and SRT-Late Sounds tasks (13 RSSD and 16 TD).

Results

SRT-Early Sounds

On the SRT-Early Sounds, no differences in percent of consonants correct (PCC) were observed between RSSD and TD groups at two-, three-, or four-syllable lengths (Friedman χ2 = 3, df = 2, p = .22), as shown in Figure 2. Both groups performed near ceiling.

Figure 2.

SRT-Early Sounds transcribed speech accuracy. Means and distributions of percent of consonants correct scores do not differ significantly at any syllable length. SRT = Syllable Repetition Task; RSSD = residual speech sound disorder; TD = typically developing speech.

The TD group showed expected patterns of activation for speech production on the first-level analysis of the SRT-Early Sounds task (contrast = speaking > listening). Areas of activation for the TD group included left primary motor cortex, left premotor cortex, bilateral basal ganglia, bilateral primary auditory cortex, and bilateral superior temporal gyrus. The TD group also showed significant activation in the bilateral visual association cortices, which was not anticipated. Areas of significant activation for the RSSD group included left primary motor cortex, left premotor cortex, bilateral anterior cingulate, left primary sensory cortex, bilateral primary auditory cortex, bilateral superior temporal gyrus, and bilateral insula (see Figure 3, top left panel). Locations (MNI152 coordinates) of significant clusters are provided in Table 2. All areas of activation reached statistical significance at p < .05, corrected for multiple comparisons. Clusters of activation smaller than 10 voxels are not reported.

Figure 3.

SRT-Early Sounds task activation and group contrasts. First-level contrasts (top two panels) are shown at a threshold of p < .05, corrected for multiple comparisons. Second-level contrasts (bottom two panels) are shown at a threshold of p < .001, uncorrected for multiple comparisons. The RSSD group shows greater activation in bilateral inferior frontal gyri, bilateral premotor cortex, left posterior superior temporal gyrus, left visual association cortex, and right insula. The TD group shows greater activation in the left cerebellum. SRT = Syllable Repetition Task; RSSD = residual speech sound disorder; TD = typically developing speech.

Table 2.

Clusters of activation during tasks (corrected for multiple comparisons).

Region | COG (X, Y, Z) | Size (voxels)

SRT-Early Sounds RSSD
Left inferior frontal cortex, premotor cortex, primary motor cortex, superior temporal gyrus, extending to insula, striatum, and thalamus | −46, −4, 14 | 9,339
Right premotor cortex, primary motor cortex, superior and middle temporal gyrus, extending to insula and striatum | 48, −10, 4 | 3,466
Left supplementary motor area, extending into right supplementary motor area | −4, 10, 52 | 411

SRT-Early Sounds TD
Bilateral superior temporal gyrus, inferior frontal cortex, primary motor cortex, primary visual cortex, cerebellar vermis | −2, −28, 0 | 29,651

SRT-Late Sounds RSSD
Right premotor cortex, primary motor cortex, superior temporal gyrus | 54, −12, 14 | 2,379
Bilateral cerebellar vermis | 4, −62, −28 | 990
Left superior temporal gyrus | −42, −28, 4 | 340
Left thalamus | −2, −20, 16 | 112
Right thalamus | 8, −20, 4 | 79

SRT-Late Sounds TD
Bilateral inferior frontal cortex, primary motor cortex, superior temporal gyrus, thalamus, primary visual cortex, cerebellar vermis | −4, −6, 2 | 16,406
Left supramarginal gyrus | −38, −34, 16 | 69

Note. COG = center of gravity of the cluster, MNI152 coordinates. SRT = Syllable Repetition Task; RSSD = residual speech sound disorder; TD = typically developing speech.

Second-level group comparisons on the SRT-Early Sounds revealed no significant differences when correcting for multiple comparisons. To explore possible subthreshold effects that may not have been detected when utilizing multiple comparisons corrections, we also explored the second-level group comparisons at p < .001 (uncorrected and at least 10 contiguous voxels; see Methods section above). At this threshold, the RSSD group exhibited increased activation compared to the TD group in the left premotor cortex, left visual association cortex, and right insula (see Figure 3; bottom left panel). Cluster locations (MNI152 coordinates) are listed in Table 3. In contrast, the TD group exhibited increased activation compared to the RSSD group in the left cerebellum (p < .001 and at least 10 contiguous voxels). Figure 3, bottom right panel, and Table 3 detail this cluster location.

Table 3.

Clusters of activation for group contrasts at p < .001 (uncorrected for multiple comparisons).

Region | COG (X, Y, Z) | Size (voxels)

SRT-Early Sounds RSSD > TD
Left visual association cortex | −14, −96, 24 | 40
Right insula | 44, 0, 14 | 23
Left premotor cortex | −42, 6, 28 | 10

SRT-Early Sounds TD > RSSD
Left cerebellum | −52, −64, −38 | 12

SRT-Late Sounds RSSD > TD
Right cerebellum | 26, −54, −34 | 10

Note. COG = center of gravity of the cluster, MNI152 coordinates. SRT = Syllable Repetition Task; RSSD = residual speech sound disorder; TD = typically developing speech.

SRT-Late Sounds

We expected that later developing speech sounds would place greater phonological and speech motor demands on all speakers and that group differences would be more apparent on this task. On the SRT-Late Sounds, no differences in PCC were observed between RSSD and TD groups at two-, three-, or four-syllable nonword lengths (Friedman χ2 = 4, df = 2, p = .14), as shown in Figure 4.

Figure 4.

SRT-Late Sounds transcribed speech accuracy. Means and distributions of percent of consonants correct scores do not differ significantly between RSSD and TD groups at any syllable length. SRT = Syllable Repetition Task; RSSD = residual speech sound disorder; TD = typically developing speech.

On first-level analyses, the TD group showed activation in bilateral primary auditory cortex, bilateral premotor cortex, and bilateral superior temporal gyrus during syllable repetition (see Figure 5, top right panel). The RSSD group, in contrast, showed activation in right supplementary motor cortex, right primary auditory cortex, bilateral insula, bilateral thalamus, and bilateral cerebellar dentate nucleus (see Figure 5, top left panel). Locations (MNI152 coordinates) of significant clusters are provided in Table 2. All areas of activation reached statistical significance at p < .05, corrected for multiple comparisons.

Figure 5.

SRT-Late Sounds task activation and group contrasts. First-level contrasts (top two panels) are shown at a threshold of p < .05, corrected for multiple comparisons. Second-level contrasts (bottom two panels) are shown at a threshold of p < .001, uncorrected for multiple comparisons. RSSD group shows greater activation in bilateral primary auditory cortices and right cerebellum. TD group shows no areas of greater activation. SRT = Syllable Repetition Task; RSSD = residual speech sound disorder; TD = typically developing speech.

We had predicted that the RSSD group would show reduced activation in phonological regions, such as bilateral superior temporal gyrus and middle temporal gyrus, and possibly left inferior frontal gyrus, but increased activation in speech motor planning regions, such as left inferior frontal gyrus and premotor cortex. Second-level group comparisons revealed no significant differences when correcting for multiple comparisons. In a further analysis at p < .001 (uncorrected and at least 10 contiguous voxels), the RSSD group demonstrated subtly greater activation in the right cerebellum. This region is shown in Figure 5 (bottom left panel), and its location (MNI152 coordinates) is provided in Table 3. No areas showed greater activation for the TD group.

SRT-Early Sounds Versus SRT-Late Sounds

No significant differences in PCC were observed between the SRT-Early Sounds and SRT-Late Sounds for either the RSSD (Friedman χ2 = 1, df = 2, p = .61) or the TD group (Friedman χ2 = 3, df = 2, p = .22), as shown in Figure 6. Both the RSSD and TD groups performed better numerically on four-syllable words on the SRT-Late Sounds, but this difference was not significant.

Figure 6.

RSSD and TD transcribed speech repetition accuracy. Means and distributions of percent of consonants correct scores were similar at two and three syllables for SRT-Early Sounds and SRT-Late Sounds within RSSD and TD groups. Mean percent of consonants correct scores were higher at four syllables for SRT-Late Sounds within RSSD and TD groups. SRT = Syllable Repetition Task; RSSD = residual speech sound disorder; TD = typically developing speech.

We had anticipated that the SRT-Late Sounds task would place greater demands on the speech production neural networks than the SRT-Early Sounds task, due to the differing phonological and articulatory complexity of the two tasks. Second-level within-group comparisons detected no significant differences in activation between the SRT-Early Sounds and SRT-Late Sounds for the TD group when correcting for multiple comparisons, and no differences at p < .001 (uncorrected and at least 10 contiguous voxels). Likewise, the second-level within-group comparison for the RSSD group showed no significant differences in neural activation between the SRT-Early Sounds and SRT-Late Sounds when correcting for multiple comparisons, and no differences at p < .001 (uncorrected), indicating that these two conditions did not differentially tax the speech production networks.

Discussion

In the current study, we sought to investigate the underlying neural mechanisms supporting speech production in children with RSSD. In terms of task performance, we observed comparable transcribed syllable repetition accuracy between the RSSD and TD groups on both the SRT-Early Sounds and SRT-Late Sounds. This finding was not anticipated and differs from a recent study of nonword repetition in children with SSD (Pigdon et al., 2020). Differing results may be due to varying difficulty of the nonword repetition tasks used in the two studies and/or to the fact that, in the current study, we did not count /ɹ/ as incorrect for the RSSD group on the SRT-Late Sounds task. Pigdon et al. used the Children's Test of Nonword Repetition (CNRep; Gathercole et al., 1994) whereas the current study used the SRT. The CNRep contains nonwords with various vowel and consonant sounds, whereas the SRT holds the vowels consistent across all syllables and contains a limited set of consonant sounds, reducing the articulatory and phonological load. The SRT was used here so as to minimize any confounding effect of articulation difficulties, with the idea that speech production processes in children with RSSD differ in more than just the misarticulation of specific sounds. Our results suggest that our use of the SRT did avoid possible confounds of task performance. However, perhaps the task was not challenging enough to the speech system to also elicit group differences.

Our results revealed no significant group differences in functional activation with corrections for multiple comparisons on either the SRT-Early Sounds or SRT-Late Sounds. We had originally hypothesized, based on the limited previous neuroimaging studies of speech production, that children with RSSD would demonstrate less activation than TD children in regions associated with phonological processing and, to compensate, more activation in regions associated with speech motor preparation. Our null result is instead consistent with a previous study (Pigdon et al., 2020), which also did not detect significant functional differences between children with SSD and typical controls. However, our results differ from other reports of significant neural activation differences between children with speech errors and typical controls during speech processing (Preston et al., 2012; Tkach et al., 2011). Again, task performance effects may be relevant to explaining the differing results (Brown et al., 2005; Schlaggar et al., 2002). In the studies that found neural activation differences, task performance differences were also found; neural activation differences may therefore simply reflect performance differences on a particular task. The SRT tasks used in the current study are not as articulatorily complex as some other nonword repetition tasks and may not have been challenging enough to elicit functional differences.

We also analyzed functional activation comparisons without multiple comparisons corrections (but at a stricter voxelwise threshold) in order to explore the possibility of slight neural differences and reduce the risk of a Type II error. These weaker effects require further investigation, but, given that studies of the neural mechanisms of SSD are so few, we discuss them briefly with the intention of informing future work in this area. In this secondary analysis, children with RSSD showed subtly greater activation than children with TD in areas associated with speech motor preparation (insula, premotor cortex) on the SRT-Early Sounds. In contrast, children with TD showed greater activation in the left cerebellum, which is associated with motor control, particularly for syllable sequencing and verbal encoding (Ackermann, 2008). On the SRT-Late Sounds, our RSSD cohort showed an area of subtly greater activation in the right cerebellum. An increase in cerebellar activation was previously observed in children with persistent speech disorders during a fine motor task (Redle et al., 2015), suggesting that these children may recruit cerebellar circuits less optimally for tasks requiring syllable sequencing, precision, and fine motor execution.

Neither group showed a condition effect of the SRT-Early Sounds compared to the SRT-Late Sounds, either corrected or uncorrected for multiple comparisons. It may be that the tasks were not sufficiently different in their difficulty to elicit within-group differences. The lack of a difference may also reflect a practice effect: The SRT-Late Sounds was administered after two runs of the SRT-Early Sounds for all participants, so they may have been habituated to the task.

Limitations and Future Directions

Although differences in neural activation patterns emerged at a looser threshold, these differences did not reach significance when corrected for multiple comparisons. Also, SSDs are heterogeneous and may comprise numerous subgroups that differentially involve perceptual, phonological, and motor skills, with corresponding differences in activation patterns and behavioral performance. In this study, we did not have the sample size to subdivide RSSD participants into subgroups. Increasing the number of subjects, using a more challenging nonword repetition task, and increasing the difference in difficulty between the SRT-Early Sounds and SRT-Late Sounds tasks may reveal stronger differences between groups and tasks. The overt speech characteristics of the RSSD group imply that some differences in neural processing must be occurring, but those differences were not clearly detectable with the current sample and paradigm. This may be investigated in future research.

Conclusions

Children with RSSD in our cohort do not show frank impairments in phonological and speech motor planning processes for repeating syllable sequences. However, these children may have subtle functional brain differences from their typical peers during speech production. Further research is needed to confirm these results in a larger sample.

Acknowledgments

Research reported in this publication was supported by National Institute on Deafness and Other Communication Disorders Awards R01DC013668 (D. W., PI; S. B., mPI) and 1F31DC017654 (C. S., PI). We would like to thank Sarah Dugan for her assistance collecting data. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Funding Statement

Research reported in this publication was supported by National Institute on Deafness and Other Communication Disorders Awards R01DC013668 (D. W., PI; S. B., mPI) and 1F31DC017654 (C. S., PI).

References

  1. Ackermann, H. (2008). Cerebellar contributions to speech production and speech perception: Psycholinguistic and neurobiological perspectives. Trends in Neurosciences, 31(6), 265–272. https://doi.org/10.1016/j.tins.2008.02.011 [DOI] [PubMed] [Google Scholar]
  2. Archibald, L. M. D. , Joanisse, M. F. , & Munson, B. (2013). Motor control and nonword repetition in specific working memory impairment and SLI. Topics in Language Disorders, 33(3), 255–267. https://doi.org/10.1097/TLD.0b013e31829cf5e7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Bird, J. , Bishop, D. V. M. , & Freeman, N. H. (1995). Phonological awareness and literacy development in children with expressive phonological impairments. Journal of Speech and Hearing Research, 38(2), 446–462. https://doi.org/10.1044/jshr.3802.446 [DOI] [PubMed] [Google Scholar]
  4. Black, L. I. , Vahratian, A. , & Hoffman, H. J. (2015). Communication disorders and use of intervention services among children aged 3–17 years: United States, 2012. NCHS Data Brief, (205), 1–8. [PubMed] [Google Scholar]
  5. Bohland, J. W. , & Guenther, F. H. (2006). An fMRI investigation of syllable sequence production. NeuroImage, 32(2), 821–841. https://doi.org/10.1016/j.neuroimage.2006.04.173 [DOI] [PubMed] [Google Scholar]
  6. Brown, T. T. , Lugar, H. M. , Coalson, R. S. , Miezin, F. M. , Petersen, S. E. , & Schlaggar, B. L. (2005). Developmental changes in human cerebral functional organization for word generation. Cerebral Cortex, 15(3), 275–290. https://doi.org/10.1093/cercor/bhh129 [DOI] [PubMed] [Google Scholar]
  7. Buchsbaum, B. R. , Hickok, G. , & Humphries, C. (2001). Role of left posterior superior temporal gyrus in phonological processing for speech perception and production. Cognitive Science, 25(5), 663–678. https://doi.org/10.1207/s15516709cog2505_2 [Google Scholar]
  8. Deevy, P. , Weil, L. W. , Leonard, L. B. , & Goffman, L. (2010). Extending use of the NRT to preschool-age children with and without specific language impairment. Language, Speech, and Hearing Services in Schools, 41(3), 277–288. https://doi.org/10.1044/0161-1461(2009/08-0096) [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Dollaghan, C. , & Campbell, T. F. (1998). Nonword repetition and child language impairment. Journal of Speech, Language, and Hearing Research, 41(5), 1136–1146. https://doi.org/10.1044/jslhr.4105.1136 [DOI] [PubMed] [Google Scholar]
  10. Dugan, S. , Li, S. R. , Masterson, J. , Woeste, H. , Mahalingam, N. , Spencer, C. , Mast, T. D. , Riley, M. A. , & Boyce, S. E. (2019). Tongue part movement trajectories for /r/ using ultrasound. SIG 19 Perspectives on Speech Science, 4(6), 1644–1652. https://doi.org/10.1044/2019_PERS-19-00064 [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Dunn, L. M. , & Dunn, D. M. (2007). Peabody Picture Vocabulary Test–Fourth Edition (PPVT-4). Pearson. https://doi.org/10.1037/t15144-000 [Google Scholar]
  12. Farquharson, K. , Hogan, T. P. , & Bernthal, J. E. (2018). Working memory in school-age children with and without a persistent speech sound disorder. International Journal of Speech-Language Pathology, 20(4), 422–433. https://doi.org/10.1080/17549507.2017.1293159 [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Flipsen, P. (2003). Articulation rate and speech-sound normalization failure. Journal of Speech, Language, and Hearing Research, 46(3), 724–737. https://doi.org/10.1044/1092-4388(2003/058) [DOI] [PubMed] [Google Scholar]
  14. Gathercole, S. E. , Willis, C. S. , Baddeley, A. D. , & Emslie, H. (1994). The children's test of nonword repetition: A test of phonological working memory. Memory, 2(2), 103–127. https://doi.org/10.1080/09658219408258940 [DOI] [PubMed] [Google Scholar]
  15. Gibbon, F. E. (1999). Undifferentiated lingual gestures in children with articulation/phonological disorders. Journal of Speech, Language, and Hearing Research, 42(2), 382–397. https://doi.org/10.1044/jslhr.4202.382 [DOI] [PubMed] [Google Scholar]
  16. Goldman, R. , & Fristoe, M. (2000). Goldman-Fristoe Test of Articulation–Second Edition (GFTA-2). AGS. https://doi.org/10.1037/t15098-000 [Google Scholar]
  17. Guenther, F. H. (2006). Cortical interactions underlying the production of speech sounds. Journal of Communication Disorders, 39(5), 350–365. https://doi.org/10.1016/j.jcomdis.2006.06.013 [DOI] [PubMed] [Google Scholar]
  18. Hickok, G. , Houde, J. , & Rong, F. (2011). Sensorimotor integration in speech processing: computational basis and neural organization. Neuron, 69(3), 407–422. https://doi.org/10.1016/j.neuron.2011.01.019 [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Hickok, G. , & Poeppel, D. (2000). Towards a functional neuroanatomy of speech perception. Trends in Cognitive Sciences, 4(4), 131–138. https://doi.org/10.1016/S1364-6613(00)01463-7 [DOI] [PubMed] [Google Scholar]
  20. Hickok, G. , & Poeppel, D. (2007). The cortical organization of speech processing. Nature Reviews Neuroscience, 8(5), 393–402. https://doi.org/10.1038/nrn2113 [DOI] [PubMed] [Google Scholar]
  21. Hitchcock, E. , Harel, D. , & Byun, T. (2015). Social, emotional, and academic impact of residual speech errors in school-aged children: A survey study. Seminars in Speech and Language, 36(04), 283–294. https://doi.org/10.1055/s-0035-1562911 [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Holland, S. K. , Plante, E. , Byars, A. W. , Strawsburg, R. H. , Schmithorst, V. J. , & Ball, W. S., Jr. (2001). Normal fMRI brain activation patterns in children performing a verb generation task. NeuroImage, 14(4), 837–843. https://doi.org/10.1006/nimg.2001.0875 [DOI] [PubMed] [Google Scholar]
  23. Jenkinson, M. , Bannister, P. , Brady, M. , & Smith, S. (2002). Improved optimization for the robust and accurate linear registration and motion correction of brain images. NeuroImage, 17(2), 825–841. https://doi.org/10.1006/nimg.2002.1132 [DOI] [PubMed] [Google Scholar]
  24. Jenkinson, M. , Beckmann, C. F. , Behrens, T. E. J. , Woolrich, M. W. , & Smith, S. M. (2012). FSL. NeuroImage, 62(2), 782–790. https://doi.org/10.1016/j.neuroimage.2011.09.015 [DOI] [PubMed] [Google Scholar]
  25. Jenkinson, M. , & Smith, S. (2001). A global optimisation method for robust affine registration of brain images. Medical Image Analysis, 5(2), 143–156. https://doi.org/10.1016/S1361-8415(01)00036-6 [DOI] [PubMed] [Google Scholar]
  26. Kurth, F. , Luders, E. , Pigdon, L. , Conti-Ramsden, G. , Reilly, S. , & Morgan, A. T. (2018). Altered gray matter volumes in language‐associated regions in children with developmental language disorder and speech sound disorder. Developmental Psychobiology, 60(7), 814–824. https://doi.org/10.1002/dev.21762 [DOI] [PubMed] [Google Scholar]
  27. Larrivee, L. S. , & Catts, H. W. (1999). Early reading achievement in children with expressive phonological disorders. American Journal of Speech-Language Pathology, 8(2), 118–128. https://doi.org/10.1044/1058-0360.0802.118 [Google Scholar]
  28. Leitao, S. , & Fletcher, J. (2004). Literacy outcomes for students with speech impairment: Long-term follow-up. International Journal of Language & Communication Disorders, 39(2), 245–256. https://doi.org/10.1044/1058-0360.0802.118 [DOI] [PubMed] [Google Scholar]
  29. Lewis, B. A. , Avrich, M. A. A. , Freebairn, M. L. A. , Taylor, H. G. , Iyengar, S. K. , & Stein, C. M. (2011). Subtyping children with speech sound disorders by endophenotypes. Topics in Language Disorders, 31(2), 112–127. https://doi.org/10.1097/TLD.0b013e318217b5dd [DOI] [PMC free article] [PubMed] [Google Scholar]
  30. Lieberman, M. D. , & Cunningham, W. A. (2009). Type I and Type II error concerns in fMRI research: Re-balancing the scale. Social Cognitive and Affective Neuroscience, 4(4), 423–428. https://doi.org/10.1093/scan/nsp052 [DOI] [PMC free article] [PubMed] [Google Scholar]
  31. Liégeois, F. J. , & Morgan, A. T. (2012). Neural bases of childhood speech disorders: Lateralization and plasticity for speech functions during development. Neuroscience & Biobehavioral Reviews, 36(1), 439–458. https://doi.org/10.1016/j.neubiorev.2011.07.011 [DOI] [PubMed] [Google Scholar]
  32. Liégeois, F. J. , Butler, J. , Morgan, A. T. , Clayden, J. D. , & Clark, C. A. (2016). Anatomy and lateralization of the human corticobulbar tracts: an fMRI-guided tractography study. Brain Structure and Function, 221(6), 3337–3345. https://doi.org/10.1007/s00429-015-1104-x [DOI] [PubMed] [Google Scholar]
  33. Lindquist, M. A. , & Mejia, A. (2015). Zen and the art of multiple comparisons. Psychosomatic Medicine, 77(2), 114–125. https://doi.org/10.1097/PSY.0000000000000148 [DOI] [PMC free article] [PubMed] [Google Scholar]
  34. Luders, E. , Kurth, F. , Pigdon, L. , Conti-Ramsden, G. , Reilly, S. , & Morgan, A. T. (2017). Atypical callosal morphology in children with speech sound disorder. Neuroscience, 367, 211–218. https://doi.org/10.1016/j.neuroscience.2017.10.039 [DOI] [PMC free article] [PubMed] [Google Scholar]
  35. Mazziotta, J. , Toga, A. , Evans, A. , Fox, P. , Lancaster, J. , Zilles, K. , Woods, R. , Paus, T. , Simpson, G. , Pike, B. , Holmes, C. , Collins, L. , Thompson, P. , MacDonald, D. , Iacoboni, M. , Schormann, T. , Amunts, K. , Palomero-Gallagher, N. , Geyer, S. , … Mazoyer, B. (2001). A probabilistic atlas and reference system for the human brain: International Consortium for Brain Mapping (ICBM). Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences, 356(1412), 1293–1322. https://doi.org/10.1098/rstb.2001.0915 [DOI] [PMC free article] [PubMed] [Google Scholar]
  36. McLeod, S. , & Crowe, K. (2018). Children's consonant acquisition in 27 languages: A cross-linguistic review. American Journal of Speech-Language Pathology, 27(4), 1546–1571. https://doi.org/10.1044/2018_AJSLP-17-0100 [DOI] [PubMed] [Google Scholar]
  37. Morgan, A. T. , Su, M. , Reilly, S. , Conti-Ramsden, G. , Connelly, A. , & Liégeois, F. J. (2018). A brain marker for developmental speech disorders. The Journal of Pediatrics, 198, 234–239.e1. https://doi.org/10.1016/j.jpeds.2018.02.043 [DOI] [PubMed] [Google Scholar]
  38. Mugler, J. P., III , & Brookeman, J. R. (1990). Three-dimensional magnetization-prepared rapid gradient-echo imaging (3D MP RAGE). Magnetic Resonance in Medicine, 15(1), 152–157. https://onlinelibrary.wiley.com/doi/10.1002/mrm.1910150117 [DOI] [PubMed] [Google Scholar]
  39. Pigdon, L. , Willmott, C. , Reilly, S. , Conti-Ramsden, G. , Liegeois, F. , Connelly, A. , & Morgan, A. T. (2020). The neural basis of nonword repetition in children with developmental speech or language disorder: An fMRI study. Neuropsychologia, 138, 107312. https://doi.org/10.1016/j.neuropsychologia.2019.107312 [DOI] [PubMed] [Google Scholar]
  40. Poldrack, R. A. , Temple, E. , Protopapas, A. , Nagarajan, S. , Tallal, P. , Merzenich, M. , & Gabrieli, J. D. (2001). Relations between the neural bases of dynamic auditory processing and phonological processing: Evidence from fMRI. Journal of Cognitive Neuroscience, 13(5), 687–697. https://doi.org/10.1162/089892901750363235 [DOI] [PubMed] [Google Scholar]
  41. Preston, J. L. , & Edwards, M. L. (2007). Phonological processing skills of adolescents with residual speech sound errors. Language, Speech, and Hearing Services in Schools, 38(4), 297–308. https://doi.org/10.1044/0161-1461(2007/032) [DOI] [PubMed] [Google Scholar]
  42. Preston, J. L. , & Edwards, M. L. (2009). Speed and accuracy of rapid speech output by adolescents with residual speech sound errors including rhotics. Clinical Linguistics & Phonetics, 23(4), 301–318. https://doi.org/10.1080/02699200802680833 [DOI] [PubMed] [Google Scholar]
  43. Preston, J. L. , Felsenfeld, S. , Frost, S. J. , & Mencl, W. E. (2012). Functional brain activation differences in school-age children with speech sound errors: Speech and print processing. Journal of Speech, Language, and Hearing Research, 55(4), 1068–1082. https://doi.org/10.1044/1092-4388(2011/11-0056) [DOI] [PMC free article] [PubMed] [Google Scholar]
  44. Preston, J. L. , McCabe, P. , Tiede, M. , & Whalen, D. H. (2019). Tongue shapes for rhotics in school-age children with and without residual speech errors. Clinical Linguistics & Phonetics, 33(4), 334–348. https://doi.org/10.1080/02699206.2018.1517190 [DOI] [PMC free article] [PubMed] [Google Scholar]
  45. Preston, J. L. , Molfese, P. J. , Mencl, W. E. , Frost, S. J. , Hoeft, F. , Fulbright, R. K. , Landi, N. , Grigorenko, E. L. , Seki, A. , Felsenfeld, S. , & Pugh, K. R. (2014). Structural brain differences in school-age children with residual speech sound errors. Brain and Language, 128(1), 25–33. https://doi.org/10.1016/j.bandl.2013.11.001 [DOI] [PMC free article] [PubMed] [Google Scholar]
  46. Redle, E. , Vannest, J. , Maloney, T. , Tsevat, R. K. , Eikenberry, S. , Lewis, B. , Shriberg, L. D. , Tkach, J. , & Holland, S. K. (2015). Functional MRI evidence for fine motor praxis dysfunction in children with persistent speech disorders. Brain Research, 1597, 47–56. https://doi.org/10.1016/j.brainres.2014.11.047 [DOI] [PMC free article] [PubMed] [Google Scholar]
  47. Reuterskiöld, C. , & Grigos, M. I. (2015). Nonword repetition and speech motor control in children. BioMed Research International, 2015, 1–11. https://doi.org/10.1155/2015/683279 [DOI] [PMC free article] [PubMed] [Google Scholar]
  48. Rvachew, S. , Ohberg, A. , Grawburg, M. , & Heyding, J. (2003). Phonological awareness and phonemic perception in 4-year-old children with delayed expressive phonology skills. American Journal of Speech-Language Pathology, 12, 463–471. https://doi.org/10.1044/1058-0360(2003/092) [DOI] [PubMed] [Google Scholar]
  49. Sasisekaran, J. , Smith, A. , Sadagopan, N. , & Weber-Fox, C. (2010). Nonword repetition in children and adults: Effects on movement coordination. Developmental Science, 13(3), 521–532. https://doi.org/10.1111/j.1467-7687.2009.00911.x [DOI] [PMC free article] [PubMed] [Google Scholar]
  50. Schlaggar, B. L. , Brown, T. T. , Lugar, H. M. , Visscher, K. M. , Miezin, F. M. , & Petersen, S. E. (2002). Functional neuroanatomical differences between adults and school-age children in the processing of single words. Science, 296(5572), 1476–1479. https://doi.org/10.1126/science.1069464 [DOI] [PubMed] [Google Scholar]
  51. Schmithorst, V. J. , & Holland, S. K. (2004). Event-related fMRI technique for auditory processing with hemodynamics unrelated to acoustic gradient noise. Magnetic Resonance in Medicine, 51(2), 399–402. https://doi.org/10.1002/mrm.10706 [DOI] [PubMed] [Google Scholar]
  52. Shriberg, L. D. , Fourakis, M. , Hall, S. D. , Karlsson, H. B. , Lohmeier, H. L. , McSweeny, J. L. , Potter, N. L. , Scheer-Cohen, A. R. , Strand, E. A. , Tilkens, C. M. , & Wilson, D. L. (2010). Extensions to the speech disorders classification system (SDCS). Clinical Linguistics & Phonetics, 24(10), 795–824. https://doi.org/10.3109/02699206.2010.503006 [DOI] [PMC free article] [PubMed] [Google Scholar]
  53. Shriberg, L. D. , Lohmeier, H. L. , Campbell, T. F. , Dollaghan, C. A. , Green, J. R. , & Moore, C. A. (2009). A nonword repetition task for speakers with misarticulations: The Syllable Repetition Task (SRT). Journal of Speech, Language, and Hearing Research, 52(5), 1189–1212. https://doi.org/10.1044/1092-4388(2009/08-0047) [DOI] [PMC free article] [PubMed] [Google Scholar]
  54. Shuster, L. I. , & Lemieux, S. K. (2005). An fMRI investigation of covertly and overtly produced mono-and multisyllabic words. Brain and Language, 93(1), 20–31. https://doi.org/10.1016/j.bandl.2004.07.007 [DOI] [PubMed] [Google Scholar]
  55. Smith, S. M. , & Nichols, T. E. (2009). Threshold-free cluster enhancement: addressing problems of smoothing, threshold dependence and localisation in cluster inference. NeuroImage, 44(1), 83–98. https://doi.org/10.1016/j.neuroimage.2008.03.061 [DOI] [PubMed] [Google Scholar]
  56. Spencer, C. , Dugan, S. H. , Masterson, J. , Li, S. , Woeste, H. , Annand, C. , Mahalingam, N. , Eary, K. , Riley, M. A. , Boyce, S. , & Mast, T. D. (2019, June). Spatiotemporal Index of children with residual /r/ error. Poster presented at the Boston Speech Motor Control Symposium, Boston, MA, United States. [Google Scholar]
  57. Spencer, C. , & Weber-Fox, C. (2014). Preschool speech articulation and nonword repetition abilities may help predict eventual recovery or persistence of stuttering. Journal of Fluency Disorders, 41, 32–46. https://doi.org/10.1016/j.jfludis.2014.06.001 [DOI] [PMC free article] [PubMed] [Google Scholar]
  58. Stoel-Gammon, C. (2009). The word complexity measure: Description and application to developmental phonology and disorders. Clinical Linguistics & Phonetics, 24(4–5), 271–282. https://doi.org/10.3109/02699200903581059 [DOI] [PubMed] [Google Scholar]
  59. Tkach, J. A. , Chen, X. , Freebairn, L. A. , Schmithorst, V. J. , Holland, S. K. , & Lewis, B. A. (2011). Neural correlates of phonological processing in speech sound disorder: A functional magnetic resonance imaging study. Brain and Language, 119(1), 42–49. https://doi.org/10.1016/j.bandl.2011.02.002 [DOI] [PMC free article] [PubMed] [Google Scholar]
  60. Wagner, R. K. , Torgesen, J. K. , Rashotte, C. A. , & Pearson, N. A. (2013). CTOPP-2: Comprehensive test of phonological processing (CTOPP-2). Pro-Ed. [Google Scholar]
  61. Wiig, E. H. , Semel, E. , & Secord, W. A. (2013). Clinical Evaluation of Language Fundamentals–Fifth Edition (CELF-5). Pearson Clinical. [Google Scholar]
  62. Winkler, A. M. , Ridgway, G. R. , Webster, M. A. , Smith, S. M. , & Nichols, T. E. (2014). Permutation inference for the general linear model. NeuroImage, 92(100), 381–397. https://doi.org/10.1016/j.neuroimage.2014.01.060 [DOI] [PMC free article] [PubMed] [Google Scholar]
  63. Woolrich, M. W. , Behrens, T. E. J. , Beckmann, C. F. , Jenkinson, M. , & Smith, S. M. (2004). Multilevel linear modelling for FMRI group analysis using Bayesian inference. NeuroImage, 21(4), 1732–1747. https://doi.org/10.1016/j.neuroimage.2003.12.023 [DOI] [PubMed] [Google Scholar]
  64. Wren, Y. , McLeod, S. , White, P. , Miller, L. L. , & Roulstone, S. (2013). Speech characteristics of 8-year-old children: Findings from a prospective population study. Journal of Communication Disorders, 46(1), 53–69. https://doi.org/10.1016/j.jcomdis.2012.08.008 [DOI] [PubMed] [Google Scholar]
