Author manuscript; available in PMC: 2022 Nov 2.
Published in final edited form as: J Commun Disord. 2021 Nov 2;94:106163. doi: 10.1016/j.jcomdis.2021.106163

Impairment of Speech Auditory Feedback Error Detection and Motor Correction in Post-Stroke Aphasia

Stacey Sangtian 1, Yuan Wang 2, Julius Fridriksson 3,4, Roozbeh Behroozmand 1,*
PMCID: PMC8627481  NIHMSID: NIHMS1756264  PMID: 34768093

Abstract

Introduction:

The present study investigated how damage to left-hemisphere brain networks affects the ability to detect and correct speech auditory feedback errors in post-stroke aphasia.

Methods:

Thirty-four individuals with left-hemisphere stroke and 25 neurologically intact, age-matched control participants performed two randomized experimental tasks in which their online speech auditory feedback was altered using externally induced pitch-shift stimuli: 1) vocalization of a steady speech vowel sound /a/, and 2) listening to the playback of the same self-produced vowel vocalizations. Randomized control trials, in which no pitch-shift stimuli were delivered, were interleaved with the vocalization and listening trials. Following each trial, participants pressed a button to indicate whether they had detected a pitch-shift error in their speech auditory feedback during the vocalization and listening tasks.

Results:

Our data analysis revealed that speech auditory feedback error detection accuracy rate was significantly lower in the stroke compared with control participants, irrespective of the experimental task (i.e. vocalization vs. listening) and trial condition (i.e. pitch-shifted vs. no-pitch-shift). We found that this effect was associated with the reduced magnitude of speech compensation in the early phase of responses at 150–200 ms following the onset of pitch-shift stimuli in stroke participants. In addition, motor speech compensation deficit in the stroke group was correlated with lower scores on speech repetition tasks as an index of language impairment resulting from aphasia.

Conclusions:

These findings provide evidence that left-hemisphere stroke is associated with impaired speech auditory feedback error processing, and such deficits account for specific aspects of language impairment in aphasia.

Keywords: Speech Motor Control, Sensorimotor Integration, Auditory Feedback, Aphasia, Stroke

1. Introduction

The contemporary models of speech have proposed that integration between feedforward motor commands and sensory (e.g., auditory and somatosensory) feedback is critical for the production and online monitoring of speech output (Golfinopoulos et al., 2010; Guenther, 2006; Hickok, 2012; Houde & Nagarajan, 2011; Houde & Chang, 2015; Tourville & Guenther, 2011). The principles of such integrative mechanisms for speech are derived from recent motor control theories centered on the notion of an internal model for translating the efference copies of motor commands into forward predictions about the sensory consequences of intended movement (Wolpert et al., 1995). During speech production, any difference between the internally predicted and actual sensory feedback results in an error signal that is translated into corrective motor commands in the auditory-motor system for speech control (Guenther, 2006; Houde & Nagarajan, 2011; Tourville & Guenther, 2011). According to the dual-stream model of speech (Hickok & Poeppel, 2007, 2016), this function is mediated via the predominantly left-lateralized dorsal stream networks that involve sensorimotor regions within the frontal, parietal, and temporal cortices implicated in speech auditory-motor integration. In light of these studies, it is reasonable to argue that damage to the left-hemisphere brain networks can disrupt the auditory-motor integration system for speech sensorimotor control, though our understanding of the neurobiological mechanisms underlying such effects remains relatively poor.

The altered auditory feedback (AAF) paradigm has been widely used to probe the integrity of the speech sensorimotor system. Previous AAF studies on neurologically intact speakers have shown that pitch-shift alterations (i.e. errors) in the online auditory feedback of speech vowel sound vocalizations elicit a compensatory motor response that changes the fundamental frequency (F0) of speech output in the opposite direction of the delivered stimuli, providing evidence for the role of auditory feedback in speech error detection and motor control (Burnett et al., 1998; Larson, 1998). In addition, more recent studies have shown that neural responses to normal speech auditory feedback are suppressed during vocal production compared with listening, indicating that activation of efference copies results in motor-induced cancellation of sensory inputs that match their internal representation provided by the forward prediction signals (Behroozmand & Larson, 2011; Heinks-Maldonado et al., 2005; Houde et al., 2002). This effect has been shown to result in motor-induced enhancement of neural activity in response to altered auditory feedback during vocal production compared with listening, suggesting that efference copies play a key role in increasing neural sensitivity for feedback error detection and correction during speech (Behroozmand et al., 2009; Chang et al., 2013; Greenlee et al., 2013; Korzyukov et al., 2012).

Previous research from our lab using the AAF paradigm has shown that the ability to generate speech compensation responses to auditory feedback errors is impaired in left-hemisphere stroke survivors (Phillip Johnson et al., 2020; Behroozmand et al., 2018). This effect was characterized by the diminished magnitude and slowed compensation responses to pitch-shift alterations in the auditory feedback in the stroke group compared with neurologically intact control speakers. In addition, lesion-symptom-mapping analysis revealed that impaired speech compensation response in stroke survivors was associated with damage to a distributed sensorimotor network within the left-hemisphere frontal, temporal, and parietal cortical areas (Behroozmand et al., 2018). These findings validated the application of the AAF experimental paradigm for probing the integrity of sensorimotor integration mechanisms in neurologically intact speakers and characterizing their deficits in neurological patients with stroke.

A shortcoming of previous AAF studies is that their paradigms measured only motor compensatory responses during vocalization tasks; no behavioral responses indexing pitch-shift error detection were collected during vocalization or listening conditions. This limitation has posed a major challenge in interpreting findings, especially in individuals with neurological conditions such as left-hemisphere stroke: it is almost impossible to determine to what extent deficits in compensatory responses during vocalization are accounted for by impairment of sensory error detection vs. sensorimotor control mechanisms. Given that it is critically important to tease apart the relative contributions of these mechanisms, the present study used a sensory (i.e. listening) and a sensorimotor (i.e. vocalization) task to measure error detection and motor correction responses to pitch-shift stimuli in left-hemisphere stroke and neurologically intact control participants within the context of an AAF paradigm. This novel experimental design allowed us to measure how participants in each group detect the presence (or absence) of pitch-shift error signals in their auditory feedback during vocalization and listening tasks, and how they generate compensatory responses to correct for feedback alterations during vocal production.

Our experimental paradigm motivated multiple hypotheses to test the relationship between error detection and motor control mechanisms, as well as the effect of left-hemisphere stroke on these behavioral processes. First, we hypothesized that damage to left-hemisphere brain networks would impair online detection and correction of speech auditory feedback errors, as indexed by reduced accuracy rates for speech error detection and diminished compensatory responses to pitch-shift alterations in the auditory feedback. This hypothesis was motivated by the notion of predominantly left-lateralized dorsal stream mechanisms that support sensorimotor integration for speech production, online monitoring, and motor control (Hickok & Poeppel, 2007, 2016), as well as earlier evidence on the impairment of speech compensation responses in left-hemisphere stroke survivors compared with controls (Phillip Johnson et al., 2020; Behroozmand et al., 2018). Second, since feedback error detection plays a critical role in driving compensatory motor speech behavior (Guenther, 2006; Houde & Nagarajan, 2011), we hypothesized that deficits in sensory error detection mechanisms in the left-hemisphere stroke group would at least partially account for their impaired speech compensation responses to auditory feedback pitch-shift alterations. Third, given the evidence on the role of efference copies and internal forward prediction mechanisms in enhancing auditory feedback error processing (Behroozmand et al., 2009; Chang et al., 2013; Greenlee et al., 2013; Korzyukov et al., 2012), we hypothesized that accuracy rates for pitch-shift error detection would generally be higher during the vocalization than the listening task in both groups. We also explored whether there are differences between the mechanisms of detecting the presence (i.e. error detection) vs. the absence (i.e. error rejection) of pitch-shift stimuli in the auditory feedback.
Lastly, due to evidence from a previous study from our lab (Behroozmand et al., 2018), we hypothesized that speech sensorimotor impairment in left-hemisphere stroke survivors would account for specific aspects of their co-existing language impairment conditions associated with aphasia.

2. Methods

2.1. Participants

A total of 34 participants with chronic left-hemisphere stroke (13 female; age range 38.4–80.0 years; mean age 61.2 years; mean time post-stroke: 5.45 years) and 25 neurologically intact control speakers (17 female; age range 47.5–86.7 years; mean age 62.6 years) participated in this study (see Tables 1 and 2 for more details). All left-hemisphere stroke participants were recruited from the Aphasia Lab and the Center for the Study of Aphasia Recovery (C-STAR) at the University of South Carolina. Inclusion criteria for the stroke group were as follows: 1) history of left-hemisphere stroke at least 6 months prior to testing in this study; 2) completion of Western Aphasia Battery-Revised (WAB-R; Kertesz, 2007) testing to assess co-existing language impairment due to aphasia, and high-resolution MRI scanning for lesion demarcation; 3) native English speaker; 4) aged 30 years or above; 5) able to provide verbal and/or written informed consent. The WAB-R Aphasia Quotient (AQ) evaluates the major clinical aspects of language function on fluency, auditory verbal comprehension, speech repetition, and object naming subscales; the mean AQ was 63.04 (SD = 23.53) for the left-hemisphere stroke group in this study. Based on the WAB-R classification system, the distribution of aphasia sub-types was as follows: Anomic = 7; Broca’s = 16; Conduction = 6; Wernicke’s = 1; Global = 1; and 3 above the cut-off (self-reported residual aphasia not detectable by the WAB-R at time of testing). Participants in the control group had no history of self-reported speech, language, hearing, or neurological disorders, and were recruited from the greater Columbia, SC area. Exclusionary criteria for both groups included self-reported history of dementia, traumatic brain injury, psychiatric disorder, or alcohol abuse.
Thirty-two of 34 participants with left-hemisphere stroke and 24 of 25 controls passed a binaural hearing screening and had thresholds of 40 dB or less at 250, 500, and 1000 Hz. These hearing screening frequencies were selected based on the natural human speech fundamental frequency (F0; perceived as pitch) range for speech vowel sound production in this study. The remaining participants did not have hearing screenings on file but were able to detect pitch-shift stimuli as evidenced during the training session, and therefore, were included in this study. Informed consent was obtained from all participants and the research was approved by the University of South Carolina Institutional Review Board. All participants were monetarily compensated for their participation time.

Table 1:

Demographic data, lesion volume, and scores from the WAB-R clinical assessment battery in persons with aphasia.

ID Age Sex Education (yrs.) Lesion volume (×1000 voxels) Fluency* Spontaneous Speech* Auditory Verbal Comprehension* Repetition* Naming* Aphasia Quotient* Aphasia Type AOS** Dysarthria
A1 61.5 M 14 96.7 9 17 10 9.2 8.9 90.2 Anomic No Yes
A2 64.5 M 18 148.8 6 9 7.1 0.9 0.2 34.4 Conduction No No
A3 80.0 M 16 84.6 5 13 7.55 8.2 7.3 72.1 Anomic NR NR
A4 72.8 M 16 234.6 3 10 7.05 4.4 6.2 55.3 Broca’s No No
A5 44.7 F 16 59.0 4 10 7.8 4 5.8 55.2 Broca’s Yes No
A6 42.6 F 12 53.4 9 19 9.55 10 9.8 96.7 Non-aphasic NR NR
A7 59.6 M 12 7.9 9 18 9.35 9.2 9 91.1 Anomic NR NR
A8 69.3 M 16 210.9 2 5 6.95 4.2 2 36.3 Broca’s No Yes
A9 69.7 M 12 114.4 5 14 8.55 5.6 8.2 72.7 Conduction Yes Yes
A10 61.8 M 12 220.1 7 11 4.15 7.4 3.5 52.1 Broca’s Yes No
A11 71.9 F 12 113.3 3 4 8.2 0.7 1.4 28.6 Broca’s Yes No
A12 38.4 F 18 185.1 4 11 7.15 6.7 7.9 65.5 Broca’s Yes No
A13 60.1 M 18 147.8 9 13 7.9 6.8 8.4 72.2 Broca’s No No
A14 72.5 M 16 5.0 6 15 9.8 8.8 8.5 84.2 Anomic No Yes
A15 44.6 M 16 52.3 4 12 9.85 5.1 6 65.9 Broca’s Yes Yes
A16 51.2 M 12 177.1 2 8 6.7 6.6 5.2 53 Broca’s Yes No
A17 49.8 M 14 225.2 3 6 3.45 4.4 1.8 31.4 Global Yes No
A18 60.4 M 12 114.4 1 3 8.3 1.6 2.8 31.4 Broca’s Yes Yes
A19 64.4 F 18 142.5 4 11 8.9 5.6 6.5 64 Broca’s Yes No
A20 76.2 F 14 225.9 1 3 7.5 0.9 1.3 25.4 Broca’s Yes Yes
A21 72.4 M 16 160.6 0 0 8 0.3 0.6 17.8 Broca’s Yes Yes
A22 39.1 F 12 139.0 9 19 8.6 8.9 9.6 92.2 Anomic Yes No
A23 61.7 F 16 119.0 9 17 9 6.7 9 83.4 Conduction Yes No
A24 59.6 M 12 116.7 4 11 7.55 6.2 8.3 33.05 Broca’s Yes No
A25 70.4 F 18 170.0 10 20 10 9.8 9.5 98.6 Non-aphasic NR NR
A26 53.0 M 12 63.5 6 13 9 6.2 8.7 73.8 Conduction No No
A27 58.0 M 16 168.2 6 14 6.6 4.9 8.4 67.8 Wernicke’s Yes No
A28 78.7 M 18 118.3 5 13 8.45 3.7 5.1 60.5 Conduction No No
A29 65.8 F 16 67.8 9 17 10 9.4 8.8 90.4 Anomic No No
A30 63.2 M 16 120.9 4 13 9.5 7.7 8.5 77.4 Broca’s Yes Yes
A31 53.5 F 16 13.5 7 11 7.55 3.9 3.1 51.1 Conduction No Yes
A32 77.6 F 12 50.5 9 19 10 9.5 9.3 95.6 Non-aphasic NR NR
A33 66.4 F 16 43.9 8 17 7.75 8.5 6.7 79.9 Anomic No No
A34 44.8 M 12 379.7 2 9 6.55 2 4.7 44.5 Broca’s No No
Mean 61.2 14.8 128.0 5.4 11.9 8.1 5.8 6.2 63.1
*

Scores from the Western Aphasia Battery-Revised (WAB-R)

**

Based on scores from the Apraxia of Speech (AOS) Rating Scale (ASRS-V1); NR = Not Reported

Table 2:

Demographic data in control subjects

Subject Age Sex Education (yrs.)
C1 67.9 M 14
C2 58.1 M 10
C3 56.5 M 23
C4 50.7 F 12
C5 71.4 M 18
C6 67.9 F 13
C7 86.7 F 17
C8 59.4 F 22
C9 67.7 F 14
C10 60.8 F 20
C11 65.4 M 18
C12 56.5 F 18
C13 63.4 F 18
C14 60.2 F 16
C15 59.3 F 17
C16 75.3 M 23
C17 68.6 F 23
C18 50.6 F 16
C19 71.2 F 13
C20 53.0 F 16
C21 64.8 F 18
C22 47.5 F 12
C23 60.4 F 15
C24 53.0 M 16
C25 69.3 M Not reported
Mean 62.6 16.8

2.2. Experimental Design

Participants were seated in a sound-attenuated booth where speech and electroencephalography (EEG) signals, along with button responses, were recorded during steady vocalization of the speech vowel sound /a/ and during listening to the playback of self-vocalizations under the AAF experimental paradigm. Participants wore insert earphones and a head-mounted microphone so that they heard their own vocalizations in real time. Fig. 1 shows the experimental setup in this study. EEG data were recorded simultaneously from all participants during the experimental tasks but were not analyzed in this study. The experiment was conducted in two consecutive 45-minute blocks, in each of which participants performed interleaved vocalization and listening tasks. For the vocalization trials, participants were visually cued with a picture of a human face to start vocalizing the vowel sound /a/ for approximately 2–3 seconds at their conversational pitch and loudness. Each vocalization was followed by a listening trial in which participants were visually cued with a picture of a human ear to listen to the playback of one of their own pre-recorded vocalizations from a randomly selected previous trial. During both vocalization and listening trials, a brief (200 ms) pitch-shift stimulus was delivered to alter the auditory feedback during vocalization, or the playback during listening, in a randomized upward or downward direction with ±100 cents magnitude (100 cents = 1 semitone). The onset of pitch-shift stimuli was randomized between 750–1250 ms relative to speech vowel onset. Following each vocalization or listening trial, participants were presented with the question “Heard a Change?” on the screen and were instructed to press a green (Yes) or red (No) button to indicate whether they detected a pitch-shift stimulus (i.e. error) in the auditory feedback of their vowel sound vocalization or its playback, irrespective of its direction.
Control trials with no pitch-shift stimuli were randomly interleaved between pitch-shifted trials during both vocalization and listening tasks to measure the participants’ ability for error rejection, i.e., responding “No” to the presented question. Data were collected for an average of 60 upward pitch-shift, 60 downward pitch-shift, and 60 no-pitch-shift trials per task (vocalization and listening), for an average of 180 trials per task and 360 trials in total per participant. For all participants, the gain of the auditory feedback signal (i.e. the earphones’ output) was adjusted to be 10 dB higher than their vocalization level (i.e. the microphone’s input) to partially mask bone- and air-conducted feedback, and the level of the auditory feedback was equalized across vocalization and listening conditions.
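The trial schedule described above (an average of 60 upward, 60 downward, and 60 no-shift trials per task, randomly ordered, with stimulus onsets drawn from 750–1250 ms after vowel onset) can be sketched as follows. This is an illustrative Python sketch only; the actual experiment was controlled by a custom Max/MSP program, and all names here are hypothetical.

```python
import random

def build_trial_list(n_per_condition=60, seed=0):
    """Build a randomized trial list for one task (vocalization or listening).

    Each trial carries a pitch-shift magnitude (+100, -100, or 0 cents)
    and a stimulus onset drawn uniformly from 750-1250 ms after vowel onset.
    """
    rng = random.Random(seed)
    shifts = ([+100] * n_per_condition +   # upward pitch shifts
              [-100] * n_per_condition +   # downward pitch shifts
              [0] * n_per_condition)       # no-pitch-shift control trials
    rng.shuffle(shifts)                    # interleave conditions randomly
    return [{"shift_cents": s, "onset_ms": rng.uniform(750, 1250)}
            for s in shifts]

trials = build_trial_list()  # 180 trials for one task; two tasks -> ~360 total
```

Shuffling a flat list of condition labels guarantees the exact 60/60/60 balance per task while keeping the trial order unpredictable for the participant.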

Figure 1.


Experimental setup: participants were cued to vocalize a speech vowel sound or listen to the playback of their self-vocalizations while a pitch-shift stimulus (i.e. error) altered the online auditory feedback or playback signal. In some randomized control trials, no pitch-shift was delivered to vowel sound vocalization or its playback. Following each trial, participants were prompted to indicate whether or not they heard a pitch-shift change by pressing a green (Yes) or red (No) button (error detection response).

Training was conducted prior to the experiment to ensure participants understood the task instructions. During training, the researcher provided verbal, written, and/or pictorial instructions and demonstrated performance of the tasks. Participants then demonstrated their understanding of the task and ability for speech error detection and rejection as evidenced by responding “Yes” to pitch-shifted and responding “No” to no-pitch-shift trials, respectively. Participants’ performance was monitored during the experiment and breaks were provided if needed.

2.3. Data Acquisition

Participants’ speech signal was picked up using a head-mounted AKG condenser microphone (model C520), amplified by a MOTU UltraLite-mk3 interface, and recorded at 44.1 kHz on a laboratory computer. A custom-designed program in Max/MSP (Cycling ’74, v.5.0) controlled an Eventide Eclipse Harmonizer that pitch-shifted the vowel sound vocalizations online and fed them back to the ears through Etymotic earphones (model ER1-14A). The Max/MSP program controlled all aspects of the visual cues and pitch-shift stimuli (e.g., direction, magnitude, onset time, etc.) and also generated TTL pulses to accurately mark the onset of each event during vocalization, listening, and button-press tasks and to synchronize them with the simultaneously recorded EEG signals.

2.4. Data Analysis

2.4.1. Speech Error Detection/Rejection Accuracy Rates

Correct error detection accuracy rates for pitch-shifted trials (regardless of stimulus direction), correct error rejection accuracy rates for control trials (no-pitch-shift), and A scores were computed for each left-hemisphere stroke and control participant. Since our goal was to model variability in speech error processing in a wide range of stroke participants with mild, moderate, and severe impairment, data were not excluded based on the frequency of Yes/No responses to pitch-shift stimuli across all trials. Correct error detection accuracy rates (i.e., responding “Yes” to pitch-shifted trials) were calculated in percentage for vocalization and listening tasks separately using the following formula:

$$\text{Accuracy Rate}_{\text{Error Detection}} = \frac{T_{\text{Correct Error Detection}}}{T_{\text{Error}}} \times 100$$

Here, $T_{\text{Correct Error Detection}}$ is the number of correct error detections during pitch-shifted trials and $T_{\text{Error}}$ is the total number of pitch-shifted (i.e., error) trials. Correct error rejection accuracy rates (i.e., responding “No” to no-pitch-shift trials) were calculated in percentage for vocalization and listening tasks separately using the following formula:

$$\text{Accuracy Rate}_{\text{Error Rejection}} = \frac{T_{\text{Correct Error Rejection}}}{T_{\text{No Error}}} \times 100$$

Here, $T_{\text{Correct Error Rejection}}$ is the number of correct error rejections during control/no-pitch-shift trials and $T_{\text{No Error}}$ is the total number of no-pitch-shift (i.e., no error) trials. In addition, A scores were computed as a measure of sensitivity for speech error (i.e. pitch shift) detection for each participant according to the following formula (Zhang & Mueller, 2005):

$$A = \begin{cases} \dfrac{3}{4} + \dfrac{H - F}{4} - F(1 - H) & \text{if } F \leq 0.5 \leq H \\[6pt] \dfrac{3}{4} + \dfrac{H - F}{4} - \dfrac{F}{4H} & \text{if } F \leq H < 0.5 \\[6pt] \dfrac{3}{4} + \dfrac{H - F}{4} - \dfrac{1 - H}{4(1 - F)} & \text{if } 0.5 < F \leq H \end{cases}$$

According to this formula, F is the false alarm rate and H is the hit rate; A scores closer to 1 indicate stronger sensitivity to speech errors, whereas scores closer to 0.5 indicate chance performance. For computing A scores, hit or false alarm rates of 0 were corrected to 1/(2N), and rates of 1 were corrected to 1 − 1/(2N), where N is the total number of trials. The A score has the advantage of providing a non-parametric alternative to d′ for measuring sensitivity in detection and categorization tasks where data normality is violated by hit or false alarm rates equal or close to 0 or 1. In this study, the choice of the A score was motivated by the fact that measuring response sensitivity to pitch-shift stimuli is critical for determining the effects of error detection mechanisms on speech compensation responses to auditory feedback alterations.
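As a concrete illustration, the A statistic and the 0/1 rate corrections described above can be computed as follows. This is a minimal sketch (function and argument names are ours); below-chance performance (H < F) is not handled, since the piecewise formula is defined for H ≥ F.

```python
def a_score(hits, false_alarms, n_signal, n_noise):
    """Zhang & Mueller (2005) non-parametric sensitivity statistic A.

    hits / n_signal gives the hit rate H; false_alarms / n_noise gives
    the false alarm rate F. Rates of exactly 0 are corrected to 1/(2N)
    and rates of exactly 1 to 1 - 1/(2N), as described in the text.
    Assumes H >= F (at- or above-chance performance).
    """
    def corrected_rate(k, n):
        rate = k / n
        if rate == 0.0:
            return 1.0 / (2 * n)
        if rate == 1.0:
            return 1.0 - 1.0 / (2 * n)
        return rate

    H = corrected_rate(hits, n_signal)
    F = corrected_rate(false_alarms, n_noise)
    base = 0.75 + (H - F) / 4.0
    if F <= 0.5 <= H:
        return base - F * (1.0 - H)
    if F <= H < 0.5:
        return base - F / (4.0 * H)
    # remaining case: 0.5 < F <= H
    return base - (1.0 - H) / (4.0 * (1.0 - F))
```

For example, a participant with 54/60 hits and 6/60 false alarms (H = 0.9, F = 0.1) falls in the first branch and obtains A = 0.75 + 0.2 − 0.01 = 0.94, while H = F = 0.5 yields the chance value A = 0.5.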

2.4.2. Speech Compensation Responses

Participants’ data were analyzed to extract the behavioral measure of speech compensation responses relative to the onset of upward and downward pitch-shift stimuli during vocalization trials. First, the pitch frequency of the recorded speech signals was extracted in Praat (Boersma & Weenink, 1996) using an autocorrelation method and then exported to custom-made MATLAB code for further processing. The extracted pitch frequencies were segmented into epochs ranging from −100 ms before to 500 ms after the onset of pitch-shift stimuli. Pitch frequencies were then converted from the Hertz to the cents scale to calculate speech compensation magnitude in response to the pitch-shift stimulus using the following formula:

$$\text{Speech Compensation Magnitude (cents)} = 1200 \times \log_2\left(\frac{F}{F_{\text{Baseline}}}\right)$$

Here, F is the post-stimulus pitch frequency and $F_{\text{Baseline}}$ is the baseline pitch frequency from −100 to 0 ms pre-stimulus. Artefactual responses to pitch shifts in the auditory feedback due to large-magnitude voluntary pitch modulations were rejected by removing trials in which speech responses exceeded ±200 cents in magnitude. The extracted pitch contours were then averaged for each individual participant across all trials for upward and downward pitch shifts, separately. The individual pitch contours were averaged across all participants to obtain the grand-average profile of the speech compensation responses for the left-hemisphere stroke and control groups.
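The Hertz-to-cents conversion and the ±200-cent artifact rejection step can be sketched in a few lines of Python. The helper names are illustrative only; the authors' pipeline used Praat and custom MATLAB code.

```python
import math

def cents(f_hz, f_baseline_hz):
    """Convert a pitch value to cents relative to the pre-stimulus baseline
    (1200 cents per octave, so 100 cents = 1 semitone)."""
    return 1200.0 * math.log2(f_hz / f_baseline_hz)

def average_compensation(trial_contours, limit_cents=200.0):
    """Drop trials whose pitch contour (already in cents re: baseline)
    exceeds +/-200 cents anywhere, i.e. large voluntary modulations,
    then average sample-wise across the remaining trials."""
    kept = [c for c in trial_contours
            if max(abs(x) for x in c) <= limit_cents]
    n = len(kept)
    return [sum(samples) / n for samples in zip(*kept)]
```

For example, a one-semitone upward deviation from a 220 Hz baseline (220 Hz to about 233.08 Hz) maps to +100 cents, and a full octave (220 Hz to 440 Hz) maps to +1200 cents.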

2.4.3. Structural MRI acquisition and preprocessing

Structural MRI data were acquired in left-hemisphere stroke participants for demarcation of left-hemisphere lesions using a 3T Siemens Trio scanner fitted with a 12-channel head-coil. Participants were scanned with two MRI sequences: 1) T1-weighted MP-RAGE images with voxel size = 1 mm3, FOV = 256 × 256 mm, 192 sagittal slices, 9° flip angle, TR = 2250 ms, TI = 925 ms, TE = 4.15 ms, GRAPPA = 2, and 80 reference lines; and 2) T2-weighted images with voxel size = 1 mm3, FOV = 256 × 256 mm, 160 sagittal slices, variable flip angle, TR = 3200 ms, TE = 352 ms, and no slice acceleration. The same slice center and angulation were used as in the T1 sequence. Images were converted to NIfTI format using dcm2niix (Li et al., 2016). Stroke lesions were demarcated on T2 images in native space using the MRIcron toolbox (https://www.nitrc.org/projects/mricron) to estimate the overall lesion volume for individual participants, which was included as a covariate factor in our correlation analyses (see details below). The overlaid maps of lesion distribution across all stroke participants are shown in Fig. 2. The largest gray-matter lesion overlap among the stroke group was in the left precentral gyrus, post-central gyrus, rolandic operculum, supra-marginal gyrus, inferior parietal gyrus, superior temporal gyrus, Heschl’s gyrus, and insula, where 82% (28 out of 34) of participants had damage.

Figure 2.


Lesion overlap maps in PWAs (n=34). The maps show lesion distribution on coronal (top) slices in MNI space for the sample, with warmer colors representing more lesion overlap across participants with left-hemisphere stroke (dark red areas represent lesion overlap across at least N=20 individuals).

2.4.4. Statistical Analyses

The general linear model (GLM) approach was used in SPSS v.28 (IBM Inc.) for analysis of variance (ANOVA) to examine the effects of group (stroke vs. control) as a between-subject factor, and condition (pitch-shift vs. no-pitch-shift) and/or task (vocalization vs. listening) as within-subject factors on the measures of speech error detection/rejection accuracy rates and A scores. Data normality and sphericity assumptions were examined using the Shapiro-Wilk and Mauchly tests, respectively, and diagnostic plots were used to confirm that the fits did not deviate from model assumptions. For data violating the normality assumption, a rank-based inverse normal transformation was applied (Templeton, 2011), and for data violating the sphericity assumption, p-values were reported using the Greenhouse-Geisser correction. Statistical significance was determined at α = .05 with Bonferroni correction to control for multiple comparisons. It is noteworthy that, due to the above procedures, data units in some conditions may fall outside of their original range (e.g., transformed accuracy rates may exceed 100%).
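For readers unfamiliar with the rank-based inverse normal transformation (Templeton, 2011) mentioned above, a minimal sketch is given below: each value is replaced by the normal quantile of its fractional rank, z_i = Φ⁻¹((r_i − 0.5)/n). The function name is ours, and tied ranks are not handled here (a full implementation would average tied ranks).

```python
from statistics import NormalDist

def rank_based_inverse_normal(values):
    """Rank-based inverse normal transformation for non-normal data.

    Replaces each value with the standard-normal quantile of its
    fractional rank (r - 0.5) / n, yielding approximately normal scores
    suitable for ANOVA. Ties are not handled in this sketch.
    """
    n = len(values)
    # rank 1 for the smallest value, rank n for the largest
    order = sorted(range(n), key=lambda i: values[i])
    ranks = [0] * n
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    nd = NormalDist()  # standard normal, mean 0 and sd 1
    return [nd.inv_cdf((r - 0.5) / n) for r in ranks]
```

Because the transformed scores are normal quantiles rather than percentages, values can fall outside the original measurement range, which is why the text notes that transformed accuracy rates need not stay within 0–100%.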

A GLM approach was also used to conduct ANOVA analyses on the effects of between-subject (group: stroke vs. control) and within-subject (stimulus direction: upward vs. downward) factors on the magnitude of behavioral speech compensation responses. In both groups, speech compensation magnitudes were analyzed using separate ANOVA models for responses averaged within eight 50-ms time bins in a time window from 100–500 ms after the onset of pitch-shift stimuli. This time window was selected to capture the temporal dynamics of response profiles during the rise time, peak, and rebound periods of speech compensation following the procedures implemented in an earlier study from our lab (Behroozmand et al., 2018), and significant results were reported using Bonferroni’s correction at p < .00625 (α = .05 / 8).

3. Results

3.1. Hearing Threshold Frequencies

Given the engagement of the auditory system, Welch’s t-test was used to examine whether there were differences in hearing thresholds at 250, 500, and 1000 Hz between the left-hemisphere stroke and control groups. Results indicated no significant differences between the two groups at any frequency (250 Hz: t(52.6) = 0.49, p = 0.63; 500 Hz: t(52.1) = −0.60, p = 0.55; 1000 Hz: t(50.7) = −0.18, p = 0.86).

3.2. Speech Error Detection/Rejection Accuracy Rates

A 2×2×2 ANOVA was conducted to compare the effects of group, task, and condition on speech error detection/rejection accuracy rates. Results revealed a significant main effect of group (F(1, 57) = 16.02, p < .001) with overall lower accuracy rates for correct error detection/rejection in stroke participants compared with controls regardless of condition (Fig. 3) and task (Fig. 4). In addition, we found a significant main effect of condition (F(1, 57) = 5.33, p = .025), indicating higher accuracy rates for correct error rejection during no-pitch-shift trials compared with correct error detection during pitch-shift trials in both groups (Fig. 3). However, as shown in Fig. 4, no main effect of task (F(1, 57) = 1.32, p = .256) or interactions were indicated (all p > .51).

Figure 3.


Box plots showing the transformed-normalized measures of speech error detection accuracy rates for pitch-shift trials and rejection for no-pitch-shift trials in left-hemisphere stroke (blue) and control (red) groups.

Figure 4.


Box plots showing the transformed-normalized measures of speech error detection and rejection accuracy rates for vocalization and listening tasks in left-hemisphere stroke (blue) and control (red) groups.

3.3. A Scores

A 2×2 ANOVA was conducted to compare the effects of group and task on A scores. Results revealed a significant main effect of group (F(1, 57) = 7.85, p = .007), with stroke participants demonstrating lower A scores (i.e., performing closer to chance) than controls during both vocalization and listening tasks (Fig. 5). However, no main effect of task (F(1, 57) = 0.126, p = .724) or task × group interaction was indicated (F(1, 57) = 1.67, p = .202).

Figure 5.


Box plots showing the transformed-regressed A scores for vocalization and listening tasks in left-hemisphere stroke (blue) and control (red) groups.

3.4. Speech Motor Correction Responses

The overlaid profiles of speech compensation in the stroke and control groups showed that responses to AAF started at approximately 100 ms, transitioned toward the peak during the rising phase at 100–200 ms, reached the peak at 200–350 ms, and were followed by a rebound period at latencies >350 ms after the onset of pitch-shift stimuli (Fig. 6a). Results of our analysis revealed a significant main effect of group (F(1,57) = 54.15, p < .001), indicating a reduction in the magnitude of speech compensation during the rising phase of responses in an early time window at 150–200 ms for the left-hemisphere stroke vs. control group. However, for later time windows after 200 ms, there was no significant difference in speech compensation magnitude between the two groups (all p > 0.24). In addition, no main effect of stimulus direction, or group × stimulus direction interaction, was found (all p > 0.17). Since the main effect of group was observed at 150–200 ms, we examined the relationship between speech error correction (i.e. compensation) magnitudes in this time window and the measures of error detection accuracy rates across both groups. Our analysis revealed a relationship between deficits in speech error detection and motor correction mechanisms in the stroke group, as indexed by a significant moderate correlation (r = .41, p = .015) between error detection accuracy rates during listening and the magnitudes of compensatory responses to AAF during vocalization within the 150–200 ms time window (Fig. 6b).
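The correlation reported above is a standard Pearson product-moment r between per-participant error detection accuracy and mean compensation magnitude in the 150–200 ms window. For reference, it can be computed as follows; this is a generic sketch, not the authors' SPSS analysis.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # mean-centered cross-product and sums of squares
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)
```

Values near +1 indicate that participants with higher detection accuracy also produced larger early compensation magnitudes, which is the direction of the effect reported in the text.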

Figure 6.

a) The overlaid profiles of grand-average speech error correction (i.e. compensation) responses to altered auditory feedback (AAF) combined for upward (+100 cents) and downward (−100 cents) pitch-shift stimuli in left-hemisphere stroke and control groups (highlighted areas show standard error of the mean: SEM). b) Correlation between speech error detection accuracy rate during listening and the magnitude of error correction responses to AAF during vocalization within the 150–200 ms time window in the left-hemisphere stroke group.

3.5. Aphasia Severity and Speech Error Processing

Findings from an earlier study in our lab indicated a systematic relationship between deficits in speech error processing in left-hemisphere stroke participants and their co-existing language impairments associated with aphasia, as indexed by the repetition scores on the WAB-R assessment battery (Behroozmand et al., 2018). We used this as an a priori hypothesis to test for the same effect in the present study, and the results revealed a consistent effect, as indicated by a significant moderate correlation (r = .35, p = .041) between speech error detection accuracy rates during vocalization and the WAB-R scores on the repetition task (Fig. 7). In addition, we took an exploratory approach to test the same relationship between speech error processing and other WAB-R subscale measures; the results did not reveal any significant correlation for fluency, auditory verbal comprehension, or naming scores (all p > .24).

Figure 7.

Correlation between speech error detection accuracy rate during vocalization and WAB-R speech repetition sub-scores as a measure of language impairment in the left-hemisphere stroke group.

4. Discussion

The present study used a novel AAF paradigm to investigate the effect of brain damage on the mechanisms of speech error detection and motor control in left-hemisphere stroke survivors compared with neurologically intact controls. Participants in both groups were tested under conditions in which pitch-shift alterations (i.e., errors) were delivered to their online auditory feedback during vocalization of speech vowel sounds and listening to the playback of self-vocalizations. Our data revealed a group effect whereby the left-hemisphere stroke participants showed lower accuracy rates for speech error detection and rejection, as well as diminished sensitivity to discriminate pitch-shift alterations, as indexed by the A score, compared with controls during both vocalization and listening tasks. These findings support our hypothesis that damage to brain networks due to left-hemisphere stroke impairs the auditory mechanisms of speech error detection. Although statistical discrimination of affected regions was constrained by the small sample size in our study, we propose that this effect is accounted for by damage to cortical auditory networks that play a critical role in speech error processing (Chang et al., 2013; Greenlee et al., 2013; Behroozmand et al., 2015 & 2016). This notion is supported by the finding that ~82% (28 out of 34) of participants in our stroke group had lesions in areas within the Heschl's and superior temporal gyri in the left hemisphere. We also found that damage to these regions co-occurred with frequent lesions within the left precentral, post-central, supra-marginal, and inferior parietal gyri, as well as the rolandic operculum and insula, which share a similar vascular perfusion bed across stroke survivors.
In the context of the dorsal stream network (Hickok, 2012; Hickok & Poeppel, 2007), these areas play a key role in the underlying mechanisms of speech error detection and motor control, and therefore, future studies are warranted to identify the lesion correlates of speech deficits in a larger group of participants with left-hemisphere stroke.

Evidence from previous studies has indicated that auditory neural responses to pitch-shift stimuli are enhanced during vowel vocalization vs. listening, suggesting that the activation of the speech motor system may result in increased sensitivity of sensory mechanisms for online detection and, subsequently, correction of feedback errors (Behroozmand et al., 2009; Chang et al., 2013; Greenlee et al., 2013; Korzyukov et al., 2012). This effect has been argued to be accounted for by the influence of efference copies and their corresponding internal predictions that are continuously matched against the incoming feedback signal. In the present study, we probed the behavioral correlates of this effect by measuring error detection or rejection accuracy rates in response to the presence or absence of pitch-shift stimuli, respectively. Our analysis revealed that although speech error detection and rejection ability was diminished in stroke survivors vs. controls, the accuracy rate of responses was not affected by the participants' performance of the vocalization vs. listening task in either group. The inconsistent nature of behavioral findings in the present study relative to previous neural investigations and our study hypothesis motivates a few possible interpretations. First, it is noteworthy that the experimental task in this study differed from previous ones in that it instructed participants to register a response (i.e., Yes or No) to a question that followed vocalization and listening trials (i.e., did you hear a change?). The additional demand of this task may engage higher levels of cognitive processes (e.g., attention) for conscious error detection, which in turn may compensate for lower neural sensitivity during listening and equalize performance across both tasks.
While there is evidence in neurologically intact adults that a corrective motor response can still be produced without conscious detection of pitch-shift stimuli in auditory feedback (Franken et al., 2018; Hafke, 2008), the automatic nature of such behavior may vary depending on the allocation of cognitive and other neural resources. Second, the modulation of neural responses in previous studies was concurrently probed while speakers detected and controlled for online feedback errors during vocalization trials, whereas in the present study, behavioral responses to error detection were probed after vocalization trials were completed. This suggests that the difference in the state of motor-driven mechanisms (e.g., efference copies) at the time of response probing may account for the inconsistent findings related to the lack of task-dependent behavioral effects in this study vs. the modulation of neural activities in previous investigations. To our knowledge, this study is the first to incorporate a conscious error detection/categorization probe into the AAF paradigm and, therefore, further investigations are warranted to examine the effect of cognitive and sensorimotor mechanisms on speech error detection and motor correction. Lastly, the investigation of neural modulation in previous studies has primarily involved early response components that reflect subconscious (i.e., automatic) processes and, therefore, attempts to establish a link between those data and the behavioral results on a conscious error detection task in this study are not strongly justified. These reasons emphasize the importance of follow-up studies for bridging the gap in understanding the link between neural and behavioral markers of speech error processing and their impairments in neurological conditions.

Another important aspect of our analysis was related to an observed main effect of pitch-shift condition whereby participants in both groups exhibited lower accuracy rates for error detection during pitch-shifted trials compared with those for error rejection in response to no-pitch-shift trials in vocalization and listening tasks. One possible explanation for this finding is that the processing of pitch-shifts is associated with increased cognitive load, which subsequently calls for the allocation of higher levels of neural resources for detecting stimuli during error compared with no-error trials. This notion is consistent with findings of previous studies demonstrating that the ability to ignore irrelevant information declines during tasks involving pitch discrimination paradigms (Clinard et al., 2010; Espinoza-Varas & Jang, 2006).

Results of the speech compensation analysis supported our hypotheses and were consistent with data from a previous study from our lab (Behroozmand et al., 2018), showing that speech error correction ability is diminished in left-hemisphere stroke survivors, as indexed by the decreased magnitude of their early-phase compensatory responses during the rising period at 150–200 ms latencies following the onset of pitch-shift stimuli compared with controls. However, in contrast with the study by Behroozmand et al. (2018), data in the present study revealed no such effect for responses during the peak and rebound periods of compensation at latencies >200 ms, indicating that the stroke group controlled the later phases of their speech responses to feedback alterations with magnitudes comparable to those in neurologically intact control speakers. As discussed earlier, differences in experimental paradigms may explain these inconsistent findings: we argue that the implementation of a conscious error detection task in the present study improves compensatory behavior on long-latency time scales, where responses are controlled by voluntary motor mechanisms rather than the subconscious (or automatic) brain processes probed in our previous study (Behroozmand et al., 2018).

In addition, we found that lower speech error detection accuracy during the listening task was correlated with the decreased magnitude of early-phase speech compensation responses in the stroke group. Although it is notable that such a correlation was observed for listening but not vocalization, this finding is indicative of a possible link between impaired error detection in the auditory system and speech motor control deficits in speakers with left-hemisphere stroke. According to contemporary models (Hickok et al., 2011; Houde & Nagarajan, 2011), speech compensation is the consequence of a cascade of neural processes that translate sensory error signals (detected by comparing the predicted and actual feedback) into corrective motor commands that adjust speech output in response to external alterations. In the context of these models, our findings suggest that sensory error detection deficits partially account for impaired speech motor control ability in the stroke group; however, deficits in sensory-to-motor transformation as well as motor production mechanisms can also contribute to such impairments following damage to left-hemisphere brain networks.

Evidence from previous studies has suggested that co-existing conditions associated with aphasia in left-hemisphere stroke survivors disrupt the processing of linguistic cues during speech repetition (Rogalsky et al., 2015) and fluency tasks (Basilakos et al., 2014; Fridriksson et al., 2013). In this study, we conducted analyses to explore the relationship between language impairment and speech error processing deficits in participants with left-hemisphere stroke. Our results showed that deficits in speech error detection during vocal production were correlated with impaired language as indexed by the WAB-R repetition scores. Based on this finding, we propose that speech and language systems may use homologous, if not shared, mechanisms for performing tasks that involve sensorimotor integration. While aphasia is a known disorder of language, the brain regions damaged in left-hemisphere stroke overlap with many of the regions implicated in speech production and sensorimotor control (Behroozmand et al., 2018; Fridriksson et al., 2010; Rogalsky et al., 2015). This in turn suggests that studying the neural mechanisms of speech sensorimotor control may provide a model to understand how language networks are organized, how they may break down due to neurological conditions, and how they can be treated.

Although our findings provide insights into a possible link between the functional mechanisms of speech processing during non-linguistic (i.e., the pitch-shift AAF) and linguistic tasks (e.g., speech repetition), further research is warranted to explore the nature of such relationships and their deficits in individuals with left-hemisphere stroke. Additionally, these findings should not be over-interpreted given the limitations of our heterogeneous sample, in which we did not account for individual lesion characteristics. While the sample in our study consisted of 34 stroke survivors with left-hemisphere brain damage, participants in this group had varying aphasia subtypes as identified by the WAB-R clinical assessment. Individuals with both aphasia and comorbid apraxia of speech were also included if they were able to produce the volitional vocalizations required for the experiment. Despite this wide variability, the sample will be expanded in future studies to enable further investigation of differences in aphasia subtype, lesion location, lesion size, age, and cognitive factors associated with performance on speech error processing and motor control tasks.

There are other limitations to consider in this study. First, the perceptual error detection task was subjective in nature. Neurophysiological data were simultaneously collected to objectively evaluate error detection and will be analyzed in future studies. Second, ensuring complete understanding of the task, particularly for stroke participants with limited comprehension ability, presented challenges. All participants were required to independently demonstrate their ability to detect the presence or absence of the pitch shift during a training phase prior to recording. Nevertheless, during data collection, some participants occasionally responded Yes or No consistently across all trials. However, because the no-pitch-shift trials allowed us to control for such responses, and because all participants had demonstrated their ability to discriminate the presence or absence of pitch shifts during training, these participants were not excluded from the study. Another limitation is that fatigue during testing may have affected attention. The experiment consisted of two 45-minute blocks, with breaks as needed, to obtain approximately 180 trials for each experimental condition. For some participants, fewer than 180 trials were obtained because fatigue required discontinuation of the session. Analysis of possible differences in performance between blocks warrants further investigation. Additionally, because measures of pitch discrimination threshold were not obtained in this study, it is not possible to determine whether variability in frequency difference limens contributed to performance variability on the experimental tasks. Further investigation of the effects of pitch discrimination thresholds would be of interest in future studies, as would an exploration of differences in perceptual performance based on the direction of pitch-shift stimuli.
Finally, future studies will examine the neurophysiological data collected from this sample to elucidate new aspects related to these findings.

Highlights:

  • Left-hemisphere damage impairs speech sensorimotor mechanisms in post-stroke aphasia

  • Left-hemisphere stroke leads to deficits in detecting speech auditory feedback errors

  • Left-hemisphere stroke leads to deficits in correcting for speech auditory feedback errors

  • Speech error detection deficit is associated with impaired error correction ability in left-hemisphere stroke

  • Impairment of speech error processing is associated with language deficits in post-stroke aphasia

Acknowledgement

This research was supported by funding from NIH/NIDCD Grants K01-DC015831 and R01-DC018523 (PI: Behroozmand) and R21-DC014170 and P50-DC014664 (PI: Fridriksson).


CRediT authorship contribution statement

Stacey Sangtian: Investigation, Formal analysis, Writing – Original Draft, Writing – Review & Editing. Yuan Wang: Formal analysis, Writing – Review & Editing. Julius Fridriksson: Conceptualization, Resources, Writing – Review & Editing, Funding acquisition. Roozbeh Behroozmand: Conceptualization, Methodology, Software, Formal analysis, Writing – Original Draft, Writing – Review & Editing, Funding acquisition

Conflict of Interest

The authors declare no conflict of interest.

References

  1. Basilakos A, Fillmore PT, Rorden C, Guo D, Bonilha L, & Fridriksson J (2014). Regional white matter damage predicts speech fluency in chronic post-stroke aphasia. Frontiers in Human Neuroscience, 8, 845.
  2. Behroozmand R, Karvelis L, Liu H, & Larson CR (2009). Vocalization-induced enhancement of the auditory cortex responsiveness during voice F0 feedback perturbation. Clinical Neurophysiology, 120(7), 1303–1312. 10.1016/j.clinph.2009.04.022
  3. Behroozmand R, & Larson CR (2011). Error-dependent modulation of speech-induced auditory suppression for pitch-shifted voice feedback. BMC Neuroscience. 10.1186/1471-2202-12-54
  4. Behroozmand R, Shebek R, Hansen D, Oya H, Robin DA, Howard MA, & Greenlee JDW (2015). Sensory-motor networks involved in speech production and motor control: an fMRI study. NeuroImage, 109, 418–428.
  5. Behroozmand R, Oya H, Nourski KV, Kawasaki H, Larson CR, Brugge JF, Howard MA, & Greenlee JDW (2016). Neural correlates of vocal production and motor control in human Heschl's gyrus. Journal of Neuroscience, 36(7), 2302–2315.
  6. Behroozmand R, Phillip L, Johari K, Bonilha L, Rorden C, Hickok G, & Fridriksson J (2018). Sensorimotor impairment of speech auditory feedback processing in aphasia. NeuroImage, 165, 102–111. 10.1016/j.neuroimage.2017.10.014
  7. Boersma P, & Weenik D (1996). PRAAT: a system for doing phonetics by computer. Report of the Institute of Phonetic Sciences, University of Amsterdam. Available at: http://www.fon.hum.uva.nl/praat/.
  8. Burnett TA, Freedland MB, Larson CR, & Hain TC (1998). Voice F0 responses to manipulations in pitch feedback. The Journal of the Acoustical Society of America, 103(6), 3153–3161. 10.1121/1.423073
  9. Chang EF, Niziolek CA, Knight RT, Nagarajan SS, & Houde JF (2013). Human cortical sensorimotor network underlying feedback control of vocal pitch. Proceedings of the National Academy of Sciences, 110(7), 2653–2658.
  10. Clinard CG, Tremblay KL, & Krishnan AR (2010). Aging alters the perception and physiological representation of frequency: evidence from human frequency-following response recordings. Hearing Research, 264(1–2), 48–55. 10.1016/j.heares.2009.11.010
  11. Espinoza-Varas B, & Jang H (2006). Aging impairs the ability to ignore irrelevant information in frequency discrimination tasks. Experimental Aging Research, 32(2), 209–226. 10.1080/03610730600554008
  12. Franken MK, Eisner F, Acheson DJ, McQueen JM, Hagoort P, & Schoffelen JM (2018). Self-monitoring in the cerebral cortex: Neural responses to small pitch shifts in auditory feedback during speech production. NeuroImage, 179, 326–336. 10.1016/j.neuroimage.2018.06.061
  13. Fridriksson J, Kjartansson O, Morgan PS, Hjaltason H, Magnusdottir S, Bonilha L, & Rorden C (2010). Impaired speech repetition and left parietal lobe damage. Journal of Neuroscience. 10.1523/JNEUROSCI.1120-10.2010
  14. Fridriksson J, Guo D, Fillmore P, Holland A, & Rorden C (2013). Damage to the anterior arcuate fasciculus predicts non-fluent speech production in aphasia. Brain, 136(11), 3451–3460.
  15. Golfinopoulos E, Tourville JA, & Guenther FH (2010). The integration of large-scale neural network modeling and functional brain imaging in speech motor control. NeuroImage, 52(3), 862–874. 10.1016/j.neuroimage.2009.10.023
  16. Greenlee JD, Behroozmand R, Larson CR, Jackson AW, Chen F, Hansen DR, Oya H, Kawasaki H, & Howard MA (2013). Sensory-motor interactions for vocal pitch monitoring in non-primary human auditory cortex. PLoS ONE, 8(4).
  17. Guenther FH (2006). Cortical interactions underlying the production of speech sounds. Journal of Communication Disorders, 39(5), 350–365. 10.1016/j.jcomdis.2006.06.013
  18. Hafke HZ (2008). Nonconscious control of fundamental voice frequency. The Journal of the Acoustical Society of America, 123(1), 273–278. 10.1121/1.2817357
  19. Heinks-Maldonado TH, & Houde JF (2005). Compensatory responses to brief perturbations of speech amplitude. Acoustic Research Letters Online. 10.1121/1.1931747
  20. Hickok G, & Poeppel D (2007). The cortical organization of speech processing. Nature Reviews Neuroscience, 8(5), 393–402. 10.1038/nrn2113
  21. Hickok G, Houde JF, & Rong F (2011). Sensorimotor integration in speech processing: computational basis and neural organization. Neuron, 69(3), 407–422.
  22. Hickok G (2012). Computational neuroanatomy of speech production. Nature Reviews Neuroscience. 10.1038/nrn3158
  23. Hickok G, & Poeppel D (2016). Neural basis of speech perception. Neurobiology of Language, 299–310. 10.1016/B978-0-12-407794-2.00025-0
  24. Houde JF, Nagarajan SS, Sekihara K, & Merzenich MM (2002). Modulation of the auditory cortex during speech: an MEG study. Journal of Cognitive Neuroscience, 14(8), 1125–1138. 10.1162/089892902760807140
  25. Houde JF, & Nagarajan SS (2011). Speech production as state feedback control. Frontiers in Human Neuroscience. 10.3389/fnhum.2011.00082
  26. Houde JF, & Chang EF (2015). The cortical computations underlying feedback control in vocal production. Current Opinion in Neurobiology, 33, 174–181.
  27. Kertesz A (2007). The Western Aphasia Battery – Revised. Grune & Stratton, New York.
  28. Korzyukov O, Karvelis L, Behroozmand R, & Larson CR (2012). ERP correlates of auditory processing during automatic correction of unexpected perturbations in voice auditory feedback. International Journal of Psychophysiology, 83(1), 71–78. 10.1016/j.ijpsycho.2011.10.006
  29. Larson CR (1998). Cross-modality influences in speech motor control: the use of pitch shifting for the study of F0 control. Journal of Communication Disorders, 31(6), 489–502.
  30. Li X, Morgan PS, Ashburner J, Smith J, & Rorden C (2016). The first step for neuroimaging data analysis: DICOM to NIfTI conversion. Journal of Neuroscience Methods, 264, 47–56.
  31. Phillip Johnson L, Sangtian S, Johari K, Behroozmand R, & Fridriksson J (2020). Slowed compensation responses to altered auditory feedback in post-stroke aphasia: Implications for speech sensorimotor integration. Journal of Communication Disorders. 10.1016/j.jcomdis.2020.106034
  32. Rogalsky C, Poppa T, Chen KH, Anderson SW, Damasio H, Love T, & Hickok G (2015). Speech repetition as a window on the neurobiology of auditory-motor integration for speech: A voxel-based lesion symptom mapping study. Neuropsychologia. 10.1016/j.neuropsychologia.2015.03.012
  33. Tourville JA, & Guenther FH (2011). The DIVA model: A neural theory of speech acquisition and production. Language and Cognitive Processes. 10.1080/01690960903498424
  34. Wolpert DM, Ghahramani Z, & Jordan MI (1995). An internal model for sensorimotor integration. Science, 269(5232), 1880–1882.
  35. Zhang J, & Mueller ST (2005). A note on ROC analysis and non-parametric estimate of sensitivity. Psychometrika, 70(1), 203–212. 10.1007/s11336-003-1119-8
