Author manuscript; available in PMC: 2023 Jul 1.
Published in final edited form as: Arch Phys Med Rehabil. 2022 Mar 15;103(7 Suppl):S205–S214. doi: 10.1016/j.apmr.2022.03.002

Complexity and feedback during script training in aphasia: A feasibility study

Leora R Cherney 1,2,3, Sarel Van Vuuren 4
PMCID: PMC9256784  NIHMSID: NIHMS1790954  PMID: 35304120

Abstract

Objective:

To explore the impact of complexity and feedback on script training outcomes in aphasia.

Design:

Randomized balanced single-blind 2×2 factorial design.

Setting:

Freestanding urban rehabilitation hospital.

Participants:

Adults with fluent and nonfluent aphasia (at least six months post-onset).

Interventions:

Experimental treatment was AphasiaScripts®, a computer-based script training program. Scripts were 10 turns long and developed at different complexity levels to allow for comparison of high versus low complexity. The program was modified to contrast high versus low feedback conditions during sentence practice. Participants were instructed to practice three 30-minute sessions per day, six days a week, for three weeks.

Main measures:

Gains achieved from baseline in accuracy and rate of production of trained and untrained script sentences at post-treatment and at 3, 6, and 12 weeks after the end of treatment.

Results:

Sixteen participants completed the intervention. On the trained script, gains were statistically significant for both accuracy and words per minute, at post-treatment and 3-, 6- and 12-week maintenance. Gains on the untrained script were smaller than on the trained script; they were statistically significant only for accuracy at post-treatment and 3-week maintenance. Complexity had an influence on accuracy at post-treatment (F(1) = 4.8391, p = 0.0501) and at maintenance (F(1) = 5.3391, p = 0.04125). Practicing scripts with high complexity increased accuracy by 11.33% at post-treatment and by 9.90% at maintenance compared to scripts with low complexity. Participants with nonfluent aphasia made greater gains than those with fluent aphasia. There was no significant effect of feedback.

Conclusion:

This study reinforces script training as a treatment option for aphasia. Results highlight the use of more complex scripts to better promote acquisition and maintenance of script production skills. There is a need for further investigation of these variables with larger samples and with other types of aphasia treatments.

Keywords: Aphasia, Treatment, Script-Training, Complexity, Feedback


Aphasia is an acquired communication disorder that affects, to varying degrees, understanding and expression of spoken language as well as reading and writing, with considerable negative consequences on quality of life.1 It is a chronic condition requiring extensive rehabilitation. Although many different evidence-based treatment protocols are available,2,3 clinicians often manipulate treatment variables such as dose, task difficulty, and cuing in order to optimize acquisition, maintenance, and generalization of skills and promote neuroplasticity.4 However, only a few of these variables have been systematically studied to determine their effects on outcomes.

This study focuses on stimulus complexity and feedback. Studies have shown that training complex linguistic material results not only in improved production and comprehension of trained structures, but more importantly in greater generalization to untrained sentences as compared with training less complex forms.5–8 Of clinical relevance is the finding that fewer treatment sessions were required for acquisition and generalization of target structures when complex linguistic material was trained compared to simple linguistic material.5 This complexity effect has been described as “counterintuitive and unconventional”,8 since traditional treatment approaches typically begin with simpler structures and hierarchically increase the difficulty of the stimuli thereafter. The positive effects of complexity have also been demonstrated in the lexical semantic domain in individuals with fluent aphasia9–11 and in the phonological domain in children with phonological deficits.12 Complexity has yet to be studied with lengthier material beyond the word and sentence level and has not been addressed in the context of script training.

Feedback can be implicit or explicit, concurrent or delayed, and it may provide information about knowledge of performance or knowledge of results.4 Although persons with aphasia are often accurate judges of their performance,13 clinicians still typically provide some form of feedback to their patients. Explicit feedback (i.e., the person is provided with a correct response) has been identified as a key factor in treating word retrieval deficits in aphasia, regardless of whether errorless or errorful training methods are used.14 Yet studies have not systematically compared the presence or absence of different types of feedback, and to our knowledge, no studies have evaluated feedback beyond the single word level.

To assess the impact of complexity and feedback on aphasia treatment outcomes, we have selected script training, an aphasia treatment that was developed to improve everyday conversation.15–20 A key ingredient of script training is intensive repetitive practice of phrases and sentences within a script to promote overlearning or automaticity of the linguistic content and associated motor patterns.19 The goal of script training is that the person with aphasia automatically retrieves and fluently produces appropriate pieces of the practiced script during everyday communication activities in the real world.

Importantly, a computer-based version of script training is available that allows us to systematically manipulate the variables being studied in order to compare treatment conditions. Providing treatment via a computer removes clinician-related factors that could potentially influence treatment outcomes (e.g., clinician expertise, personality), and ensures that treatment fidelity is high. The treatment software, AphasiaScripts®, has experimental support for its use16,17,21–25 and was previously used successfully in a study that manipulated the degree of cuing and compared outcomes across cuing conditions.26

This feasibility study addressed the following questions:

  1. Does computer-based script training result in acquisition, maintenance and generalization of script production as measured by increased accuracy and rate of production of trained and untrained scripts, immediately post-treatment and at 3, 6 and 12 weeks?

  2. Does grammatical and semantic complexity of the script affect its acquisition immediately post-training and/or its maintenance at 3, 6 and 12 weeks?

  3. Does feedback in the form of external delayed self-assessment of performance affect its acquisition immediately post-training and/or its maintenance at 3, 6 and 12 weeks?

METHODS

Experimental Design

A randomized single-blind 2×2 factorial design was used to evaluate the computer-based script training program and the differential effects of complexity and feedback on short term acquisition and longer-term maintenance of script production (ClinicalTrials.gov identifier: NCT01597037). The study was approved by the Institutional Review Board of Northwestern University.

Participants

The study took place at an urban rehabilitation hospital in Chicago. Participants were recruited through local aphasia community groups and via referrals from outpatient clinicians. Pretreatment, post-treatment, and maintenance assessments were conducted at the facility. Participants completed the actual script training at home using loaned laptops that were loaded with the treatment program. Inclusion criteria were: chronic aphasia (at least six months post-onset) from a single left-hemisphere stroke; a score between 40 and 80 on the Aphasia Quotient (AQ) of the Western Aphasia Battery-Revised (WAB-R)27; age 21 or older; at least an 8th grade education; fluent in English prior to the stroke; sufficient auditory and visual acuity to interact with the treatment program on a laptop; and not receiving other speech/language treatment for at least one month prior to or during the study. The presence of neurological conditions other than stroke that could affect communication, significant psychological problems, or active substance abuse were exclusion criteria. A sample size of 16 was deemed feasible given budgetary and time limitations. In addition, based on a prior study that used a similar treatment protocol,26 a two-sided paired t-test with n=16 pairs at the 0.0125 significance level (to account for up to 4 repeated measures) yielded power estimates of 0.99.

Participants were randomized in a 1:1 ratio to one of four conditions: high complexity/high feedback; high complexity/low feedback; low complexity/high feedback; low complexity/low feedback. Stratified permuted block randomization with the stratifying variable of pre-treatment aphasia severity using a cut-off of 60 on the WAB-R Aphasia Quotient (AQ) ensured balance of severity across conditions. Two randomization lists, one for participants with fluent aphasia and the other for nonfluent aphasia, were generated a priori by the statistician. Following confirmation of eligibility criteria by the research speech-language pathologist, treatment allocation was assigned sequentially from these lists by the PI (LRC) who was not involved in any recruitment activities. Another research speech-language pathologist who administered and scored the assessment protocol was blinded to the participants’ assigned group.

Intervention

Script training involved three 30-minute sessions per day, six days a week for three weeks. In AphasiaScripts®, an anthropomorphically accurate “virtual therapist” interactively guides the participant through the steps of the training (Figure 1). AphasiaScripts® has been previously described in detail.16,17,21,22,24,26 In short, participants first listen to a script that is spoken by the virtual therapist while it is shown on the computer screen. Each word is highlighted as it is spoken. Then participants systematically practice each sentence of the script repeatedly, first in unison (chorally) with the virtual therapist and then independently. Finally, participants practice the entire conversation, reading it aloud together with the therapist while the words are highlighted. To ensure compliance, the computer program captures log-on and log-off times, every key stroke made by the participant during the daily treatment sessions, and audio recordings of the practice sessions.

Figure 1.

Sentence and script practice with the Virtual Therapist. The virtual therapist “speaks” the words with mouth movements similar to that of a real person and the word is highlighted as it is spoken.

High versus low script complexity

Two different scripts were developed for this study: ordering pizza in a restaurant (with the person with aphasia speaking to a server) and planning to buy groceries (with the person speaking to a close acquaintance). Scripts were randomized for each participant, with one serving as the trained and the other as the untrained script. Each script comprised 10 turns, with a turn defined as a therapist utterance followed by the participant's utterance.

Five different complexity levels were developed for each script to allow for the comparison of high versus low complexity (see Kaye and Cherney28 for details). We manipulated the number of words, syllables, and sentences across each level of the script and used Flesch-Kincaid Grade Level scores to determine grammatical complexity.29 Script levels approximated grade levels 1–5. We also manipulated semantic complexity by modifying the frequency of selected words. For example, going from least to most complex, the word “idea” (most frequent) was substituted with thought, advice, suggestion, and recommendation (least frequent). Table 1 shows the counts of the different items that were manipulated to create the five script levels of complexity.

Table 1.

Script counts at five levels of complexity

Ordering a Pizza Level 1 Level 2 Level 3 Level 4 Level 5

# Sentences 10 11 13 15 17
# Words 54 80 112 141 174
# Syllables 67 100 145 190 244
# Morphemes 63 96 140 181 229
Mean Length of Turn 5.4 8.0 11.2 14.1 17.4
Morphemes/word 1.17 1.20 1.25 1.28 1.32
Mean freq. shared nouns 133,126 52,565 24,245 10,807 2,837
Average words/sentence 5.40 7.27 8.60 8.60 9.40
Average syllables/word 1.20 1.25 1.30 1.30 1.30
Grade Level 1.16 2.00 3.05 4.06 5.02


Grocery Shopping Level 1 Level 2 Level 3 Level 4 Level 5

# Sentences 10 11 13 15 17
# Words 55 78 105 142 172
# Syllables 68 98 138 192 244
# Morphemes 65 95 130 179 226
Mean Length of Turn 5.5 7.8 10.5 14.2 17.2
Morphemes/word 1.18 1.22 1.24 1.26 1.31
Mean freq. shared nouns 97,936 35,817 23,920 8,958 3,632
Average words/sentence 5.50 7.09 8.10 9.5 10.1
Average syllables/word 1.20 1.26 1.31 1.40 1.40
Grade Level 1.14 2.00 3.07 4.06 5.03
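The Grade Level rows in Table 1 follow the standard Flesch-Kincaid Grade Level formula, which can be reproduced from the raw word, sentence, and syllable counts. A minimal sketch (the counts below are taken from the Level 1 "Ordering a Pizza" row):

```python
def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level computed from raw counts."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Level 1 "Ordering a Pizza" counts from Table 1
grade = flesch_kincaid_grade(words=54, sentences=10, syllables=67)
print(round(grade, 2))  # → 1.16, matching the Grade Level reported in Table 1
```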

Aphasia severity on the WAB-R AQ determined the script complexity level for participants. A previous study had identified the “typical” level of script complexity at which participants should train.26,28 For this study, subjects assigned to high complexity were trained at one Flesch-Kincaid level above “typical”, and those in the low complexity group were trained at one level below “typical” as shown in Table 2. Note that the same script level could be considered high or low complexity depending on the severity of the participant’s aphasia.

Table 2.

Assignment of script levels

Script level based on complexity
WAB-R AQ “Typical” Low High

≥40 to <50 2 1 3
≥50 to <60 3 2 4
≥60 to <80 4 3 5

WAB-R AQ: Western Aphasia Battery Revised
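The Table 2 assignment rule can be sketched as a small lookup; the function name and the `condition` argument are illustrative, not from the study software:

```python
def assign_script_level(wab_aq: float, condition: str) -> int:
    """Map WAB-R AQ severity to a script complexity level per Table 2.

    condition: 'typical', 'low' (one level below typical), or
    'high' (one level above typical).
    """
    if 40 <= wab_aq < 50:
        typical = 2
    elif 50 <= wab_aq < 60:
        typical = 3
    elif 60 <= wab_aq < 80:
        typical = 4
    else:
        raise ValueError("AQ outside the study's 40-80 inclusion range")
    offset = {"typical": 0, "low": -1, "high": 1}[condition]
    return typical + offset

print(assign_script_level(45.2, "low"))   # → 1
print(assign_script_level(74.5, "high"))  # → 5
```

Note that, as the text observes, the same level can count as high or low complexity depending on severity: Level 3 is "high" for an AQ of 45 but "low" for an AQ of 75.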

High versus low feedback

The treatment program was modified to contrast high versus low feedback conditions during sentence practice. In the high feedback condition, participants listened to the recording of their last production of a sentence before moving on to practice the next sentence of the script. They were required to rate this production of their recorded sentence on a 3-point scale by choosing Needs work, Okay, or Very good. The virtual therapist then provided a correct auditory model of the sentence. To ensure that task time was similar across feedback conditions, participants in the low feedback condition rated the length of the written sentence by choosing Short (less than 3 words), Medium, or Long (more than 8 words). They were then presented with the written model of the sentence before moving on to practice the next sentence of the script.

Outcome measures

We measured accuracy and rate (words per minute; WPM) at which participants independently read aloud the trained and untrained scripts. Performance was probed three times at baseline, twice weekly during training, twice within 5 days after the end of treatment, and once at 3 weeks, 6 weeks, and 12 weeks after treatment. Probe sessions were conducted via computer. Each turn of the script appeared on the screen. The virtual therapist read her line in the turn without cues such as highlighting, and then the participant independently read his or her line aloud. The participant pressed the space bar at the end of each turn to bring the next turn up on the screen. Audio recordings of the probes were captured by the computer software, as was the duration of each production.

Probes were scored for accuracy using the Naming and Oral Reading for Language in Aphasia 6-Point Scale (NORLA-6).30 Each script-related word was scored on this 6-point scale, which ranges from 0 (no response) and 1 (unintelligible or unrelated response) to 4 (accurate but delayed or self-corrected response) and 5 (accurate and immediate response). Percent accuracy was the total score for each sentence divided by the maximum score that could be achieved (5 points per word × the number of words in the sentence). Rate was calculated as the number of script-related words (defined as having scores of 3–5) produced per minute (WPM). The accuracy and rate scores for any probe session were averaged over all 10 sentences of the probed script to yield the final score for that probe session.
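The per-sentence scoring just described can be sketched as follows; the word scores and duration below are hypothetical examples, not study data:

```python
def sentence_accuracy(word_scores: list) -> float:
    """Percent accuracy: total NORLA-6 score / (5 points x number of words)."""
    return 100.0 * sum(word_scores) / (5 * len(word_scores))

def sentence_wpm(word_scores: list, duration_sec: float) -> float:
    """Rate: script-related words (NORLA-6 score 3-5) produced per minute."""
    related = sum(1 for s in word_scores if s >= 3)
    return related * 60.0 / duration_sec

scores = [5, 5, 4, 3, 1]           # hypothetical NORLA-6 scores, one per word
print(sentence_accuracy(scores))    # → 72.0 (% accuracy)
print(sentence_wpm(scores, 6.0))    # → 40.0 (4 related words in 6 seconds)
```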

Data Analyses

To examine whether script training improved accuracy and rate of production, we analyzed the gain achieved over baseline in each of the trained and untrained scripts for all 16 participants combined. To establish the baseline, we measured performance on both scripts three times prior to the start of treatment for a total of six probe measurements. At baseline, both the trained and untrained scripts were novel to the participant. Since both scripts were developed to be linguistically similar and neither were trained, performances at baseline were similar. Therefore, we computed a single baseline for each participant by averaging all six scores.

To investigate the overall impact of training (Question 1), we used paired two-tailed t-tests to assess gains over baseline on the trained script, comparing the mean score of the six baseline probes with the mean score of the two post-treatment probes (acquisition) and each of the scores taken at 3, 6, and 12 weeks post-treatment (maintenance). We used a Bonferroni correction to adjust the significance threshold. Effect sizes for gains over baseline were calculated using Cohen’s dz for dependent samples.31 Additionally, to explore the possibility of generalization and/or concomitant learning, we used paired two-tailed t-tests to separately assess gains over baseline on the untrained script.
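A minimal sketch of this paired analysis with SciPy, using illustrative numbers rather than study data:

```python
import numpy as np
from scipy import stats

# Hypothetical % accuracy scores for 8 participants (not study data)
baseline = np.array([20.0, 35.0, 10.0, 42.0, 28.0, 15.0, 33.0, 25.0])
post = np.array([48.0, 60.0, 30.0, 70.0, 55.0, 38.0, 58.0, 47.0])

t_stat, p_value = stats.ttest_rel(post, baseline)  # paired two-tailed t-test

diff = post - baseline
dz = diff.mean() / diff.std(ddof=1)                # Cohen's dz for dependent samples

m = 4                                              # repeated measures: p, m3, m6, m12
alpha = 0.05 / m                                   # Bonferroni-corrected threshold
print(f"t = {t_stat:.2f}, p = {p_value:.6f}, dz = {dz:.2f}, "
      f"significant: {p_value < alpha}")
```

For paired data, t and dz are linked by t = dz·√n, which is a useful sanity check on hand-computed effect sizes.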

To further investigate the impact of complexity and feedback (Questions 2 & 3), we modelled differential responses by fitting linear models to % accuracy gain at the post-treatment time and separately at a follow-up time that combined the 3-, 6- and 12-week follow-up probes. We used nested model selection to find parsimonious models that best fit the data (using ANOVA to compare a fitted model to a nested null model without the factor) and reported evidence for or against inclusion of a factor using the F-test, with degrees of freedom equal to the difference in the number of parameters between models. If a factor was not significant or influential, we did not add it to the model or dropped it from subsequent analyses. Experimental factors considered included complexity and feedback, while confounding factors considered included fluency, time post onset, and WAB-R AQ.
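The nested-model comparison reduces to residual sums of squares from ordinary least squares; a self-contained sketch with NumPy (a statistics package such as statsmodels or R would normally be used) on synthetic data:

```python
import numpy as np

def nested_f_test(y, X_null, X_full):
    """F-test comparing a full OLS model against a nested null model.

    F = ((RSS0 - RSS1) / (p1 - p0)) / (RSS1 / (n - p1)),
    where RSS0/RSS1 are residual sums of squares of the null/full model.
    """
    def rss(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return float(resid @ resid)

    n = len(y)
    p0, p1 = X_null.shape[1], X_full.shape[1]
    df1, df2 = p1 - p0, n - p1
    rss0, rss1 = rss(X_null), rss(X_full)
    F = ((rss0 - rss1) / df1) / (rss1 / df2)
    return F, (df1, df2)

# Synthetic data: 14 participants, half assigned to high complexity.
rng = np.random.default_rng(0)
complexity = np.array([1, 0] * 7)                     # 1 = high, 0 = low
gain = 15 + 11.3 * complexity + rng.normal(0, 5, 14)  # hypothetical % accuracy gain

intercept = np.ones((14, 1))
X_full = np.column_stack([intercept, complexity])
F, (df1, df2) = nested_f_test(gain, intercept, X_full)
print(f"F({df1},{df2}) = {F:.2f}")  # a large F favors keeping complexity in the model
```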

RESULTS

Participants

Sixteen individuals (10 male; 6 female) with chronic aphasia due to a left-hemisphere stroke received three weeks of script training between June 2014 and December 2015. Table 3 shows their demographic data, aphasia fluency and severity, and the treatment condition to which they were randomized. There were no significant differences in age, education, time post onset or severity (AQ) between those receiving high complexity and low complexity scripts, or between those receiving high and low feedback. There was no difference in baseline severity between the fluent and nonfluent participants who were evenly divided across conditions. Participants practiced at least 80% of the target 27 hours of the intervention. The mean amount of practice was 27.85 (2.58) hours with no significant difference in practice hours between those in the high versus low complexity conditions or those in the high versus low feedback conditions.

Table 3.

Participant characteristics

Participants with non-fluent aphasia

Participant Age (years) Gender Handedness TPO (months) Education (years) WAB-R AQ Randomization Script Level
Complexity Feedback

1 47.09 M L 8.92 12 41.7 High High 3
2 76.38 M R 42.48 16 49.4 Low Low 1
3 67.33 M R 151.54 18 80.8 Low High 3
4 52.59 M R 29.64 16 74.5 Low Low 3
5 39.59 F R 54.89 17 64.6 High High 5
6 61.11 M R 33.74 19 55.5 Low High 2
7 39.19 F R 15.67 14 78.7 High Low 5
8 72.47 F L 100.32 18 38.0 High Low 3

Participants with fluent aphasia

Participant Age (years) Gender Handedness TPO (months) Education (years) WAB-R AQ Randomization Script Level
Complexity Feedback

9 58.57 M L 55.74 16 50.3 High Low 4
10 52.42 M L 26.56 11 66.4 High Low 5
11 56.36 M R 13.28 12 64.5 Low Low 3
12 78.94 F R 6.00 12 73.3 Low High 3
13 61.76 M R 14.43 15 45.2 Low High 1
14 70.55 M R 9.25 12 67.9 High High 5
15 73.62 F R 21.18 18 65.0 Low Low 3
16 76.60 F R 24.79 12 70.8 High High 5

Non-fluent participants’ (n=8) mean AQ and SD were 60.4 and 16.8

Fluent participants’ (n=8) mean AQ and SD were 62.9 and 9.9

Note. TPO: Time Post Onset; WAB-R AQ: Western Aphasia Battery-Revised Aphasia Quotient

Figure 2 shows the flow of participants through the study. All participants completed the post-treatment and 3-week maintenance assessments. One participant dropped out prior to the 6-week maintenance assessment and an additional two participants dropped out prior to the 12-week maintenance assessment. For Question 1, missing data at 6 or 12 weeks are reflected as a smaller sample size. For Questions 2 and 3, the maintenance data point was the average of the non-missing responses at 3, 6, and 12 weeks post-treatment.

Figure 2.

Flow of participants through the study.

Question 1. Does computer-based script training result in acquisition, maintenance and generalization as measured by increased accuracy and rate of trained and untrained scripts?

Figure 3 shows the mean responses on trained and untrained scripts, using the accuracy and rate measures at baseline, during treatment, and post-treatment. Individual participants’ responses on the trained scripts are shown in the supplementary materials.

Figure 3.

Sample mean responses (N=16) for trained and untrained scripts, using % Accuracy (Acc) and words per minute (WPM) measures. The time points represent the baseline week (b), treatment weeks 1 to 3 (t1, t2, t3), post-treatment week (p), and 3-, 6-, and 12-week maintenance (m3, m6, m12) follow-ups after treatment.

Table 4 lists the corresponding gain scores (mean response over baseline) for the 16 participants. On the trained script, gains were statistically significant (after correcting for repeated measures; see caption) for both accuracy and WPM at post-treatment and at 3-, 6- and 12-week maintenance. On the untrained script, gains were statistically significant for accuracy at post-treatment and 3-week maintenance, though smaller than on the trained script. Cohen’s dz effect sizes for the trained script were large and higher for acquisition than maintenance in both accuracy and rate: for accuracy, dz was 1.87 at post-treatment and 1.38 at 6-week maintenance.

Table 4.

Gain scores (sample mean responses over the baseline) for trained and untrained scripts, using % Accuracy (Acc) and Words per Minute (WPM) measures at each time point (p, m3, m6, and m12).

Measure Script Probe m SD SE dz t df p-value Sig. (m=4) n

% Acc Trained p 25.56 13.65 3.41 1.87 7.4910 15 0.0000019 *** 16
m3 23.28 13.06 3.27 1.78 7.1303 15 0.0000034 *** 16
m6 21.47 15.56 4.02 1.38 5.3432 14 0.0001037 *** 15
m12 19.85 14.73 4.09 1.35 4.8590 12 0.0003922 ** 13

% Acc Untrained p 9.22 11.79 2.95 0.78 3.1300 15 0.0068826 * 16
m3 8.37 8.87 2.22 0.94 3.7754 15 0.0018333 ** 16
m6 6.41 12.72 3.28 0.50 1.9520 14 0.0712282 15
m12 8.07 11.79 3.27 0.68 2.4675 12 0.0296276 13

WPM Trained p 30.19 21.36 5.34 1.41 5.6543 15 0.0000458 *** 16
m3 25.86 17.92 4.48 1.44 5.7729 15 0.0000368 *** 16
m6 23.77 22.97 5.93 1.04 4.0086 14 0.0012941 ** 15
m12 20.83 20.11 5.58 1.04 3.7336 12 0.0028552 * 13

WPM Untrained p 5.93 11.64 2.91 0.51 2.0385 15 0.0595334 16
m3 4.14 8.12 2.03 0.51 2.0386 15 0.0595190 16
m6 3.18 11.03 2.85 0.29 1.1173 14 0.2826863 15
m12 4.62 9.41 2.61 0.49 1.7722 12 0.1017333 13

m is the mean response over baseline, SD the sample standard deviation, SE the standard error, and n the number of participants over which the response was computed. dz is Cohen’s effect size for the paired difference between groups, in this case between the measure at the indicated time point and the baseline. (dz accounts for between-group correlations.) The t-statistic and p-values in the table are from paired two-sided t-tests, while the indicators * 0.05, ** 0.01, and *** 0.001 show the significance level after using a Bonferroni correction for m=4 repeated measures (i.e., accept as significant if p-value < α/m).

Questions 2 and 3. Does grammatical and semantic complexity of a script, or feedback in the form of external delayed self-assessment, affect its acquisition or maintenance?

To assess whether there is a differential response to the manipulated treatment conditions, it is reasonable to include only participants with a demonstrated treatment response. We carefully examined participants’ individual responses (supplementary materials) and excluded two participants (8 and 13) who did not respond to the treatment similarly to other participants. The resulting data remain balanced because Participant 13 was assigned to the low complexity/high feedback condition and Participant 8 to the high complexity/low feedback condition.

Differential effects of complexity and feedback on Accuracy

To quantify the differential effects of complexity and feedback (and other latent factors such as fluency) we fitted linear models to % accuracy gain on the trained script at post-treatment (p) and maintenance (m), using for p the average of the two post-treatment probe responses and m the average of the responses at 3-, 6-, and 12-weeks post-treatment. Figure 4 shows the sample mean gain for trained and untrained scripts and high and low levels of complexity and feedback. Gains on the trained script differ between the high and low complexity condition but differential gains are similar elsewhere.

Figure 4.

Sample mean % Accuracy gain for the high and low levels of the complexity and feedback factors. Results are shown for trained and untrained scripts, at time points (t1, t2, t3, p, m), where time point m is the time average of the responses at m3, m6, and m12 time points. N=14.

At post-treatment (p), the best model fit was an intercept model with fluency and complexity as main effects (F(2,11) = 3.77, p = 0.05662). In the model, the influences of complexity (F(1) = 4.8391, p = 0.0501) and fluency (F(1) = 3.7674, p = 0.07831) on accuracy were statistical trends approaching, but not reaching, the 0.05 significance level. From the model, practicing on scripts with high complexity increased accuracy by 11.3 ± 5.2% compared to scripts with low complexity. Accuracy increased by 10.0 ± 5.2% for non-fluent compared to fluent participants. There was no significant effect of adding feedback (F(1) = 0.1054, p = 0.7521) as a main effect to the best-fit model or to models with other combinations of factors.

At maintenance (m), the best model fit was an intercept model with fluency and complexity as main effects (F(2,11) = 4.599, p = 0.03536). In the model, complexity had a significant effect on accuracy (F(1) = 5.3391, p = 0.04125). Practicing on scripts with high complexity increased accuracy by 9.9 ± 4.3% compared to scripts with low complexity. Fluency also had a significant effect on accuracy (F(1) = 5.1727, p = 0.04397), with an increase of 9.7 ± 4.3% for non-fluent compared to fluent participants. There was no significant effect of adding feedback (F(1) = 0.0373, p = 0.8507) as a main effect to the best-fit model or to models with other combinations of factors.

There were no significant interactions between fluency and complexity, fluency and feedback, or complexity and feedback. Similar analyses on the untrained script yielded no significant or influential effects. Further exploration of the effects of fluency and complexity is provided in Supplement Figure S3.

Differential effects of complexity and feedback on rate of script production

Figure 5 shows sample mean WPM gain over baseline for trained and untrained scripts and for high and low levels of the complexity and feedback factors during treatment, at post-treatment, and at a maintenance time point that is the average of the responses at the 3-, 6- and 12-week time points. High complexity and high feedback may show an advantage over low complexity and low feedback for the trained scripts. However, we did not model the differential WPM gains because the data were not normally distributed and included a participant whose performance was more than two standard deviations above the mean of the other participants.

Figure 5.

Sample mean WPM gain for the high and low levels of the complexity and feedback factors. Results are shown for trained and untrained scripts, at time points (t1, t2, t3, p, m), where time point m is the time average of the responses at m3, m6, and m12 time points. N=14.

DISCUSSION

Sixteen individuals with chronic aphasia received intensive computer-based script training under conditions of high or low complexity, and high or low feedback. There were significant gains in script acquisition on trained scripts as measured by accuracy and rate, with maintenance of skills at 3-, 6- and 12-weeks post-treatment. Untrained scripts had a lower but still significant gain in accuracy from baseline to immediately post-treatment and baseline to 3 weeks, but not to 6 and 12 weeks and not in rate. These findings are consistent with prior studies that have shown positive treatment effects following script training.

Practicing scripts with high complexity led to greater increases in % accuracy gain as compared to low complexity at post-treatment and maintenance. There was no significant effect of feedback at post-treatment or maintenance. Participants with nonfluent aphasia showed significantly larger gains at maintenance than those with fluent aphasia.

These results provide preliminary evidence demonstrating better script acquisition, maintenance and possible generalization when scripts are more complex, thereby supporting the Complexity Account of Treatment Efficacy.7,8 For this study, we modified some grammatical and semantic parameters of the script to increase stimulus complexity. Future research should investigate other ways to increase complexity such as increasing the rate at which scripts are practiced chorally with the virtual therapist or removing the written script entirely and having the participant use auditory stimuli only.

The finding of no differential response to the feedback condition was unexpected since feedback has long been considered an important part of the therapeutic process. More research with a larger sample to confirm these results including the absence of the effect of feedback is warranted. Additionally, there are several different ways in which feedback can be provided and selecting a different type of feedback may have resulted in a different outcome. Feedback in this study may have improved self-monitoring skills and ability to detect production errors. However, this feedback may not have been sufficient. For example, a study addressing naming errors in aphasia found that learning was best achieved when participants had the opportunity not only to detect errors but also to repair them.32 Future research is needed to examine other forms of feedback and their effects on treatment outcomes.

The finding that participants with nonfluent aphasia may respond more favorably than participants with fluent aphasia, especially when the scripts are more complex, warrants further investigation. One explanation for the response difference may be gleaned from studies examining speech entrainment (SE) in individuals with aphasia.33–36 SE is the real-time mimicking of an audiovisual speech model.33 While the mechanisms underlying SE are not well understood, researchers have shown that individuals with Broca’s aphasia benefit the most from SE as compared to other types of aphasia.34 We have not previously highlighted “speech entrainment” as a component of AphasiaScripts®. Nevertheless, it does indeed provide participants with intensive practice using audiovisual speech models of words and sentences within a script. Participants can watch the mouth movements of the virtual therapist at the same time that they hear the spoken sentences and attempt to produce them either chorally or independently.

Limitations

This study represents a first step toward examining the effects of manipulating specific treatment variables, namely complexity and feedback, to promote better outcomes. However, the sample was small, Aims 2 and 3 were insufficiently powered, and the pre-post study design lacked a between-subject control group. Additionally, participants were heterogeneous in aphasia type and severity: examination of individual participant data shows that baseline probe accuracy ranged from close to 0% to more than 75%, and there was a similarly wide range in baseline rate (words per minute). Complexity and feedback may affect participants differently depending on their baseline performance, but our sample was not large enough to analyze subgroup responses. Furthermore, some participants approached ceiling for percent accuracy, which may have affected our findings. Future studies might consider measures of script performance that are not subject to ceiling or floor effects.

Conclusions

This study reinforces script training as a treatment option for aphasia. Results show that training improves the accuracy and rate of production of trained scripts and that gains may be maintained for up to 12 weeks. The study extends previous work in aphasia by demonstrating the feasibility of directly comparing high- and low-complexity conditions and high- and low-feedback conditions in a script-training task. The results highlight the value of using more complex scripts to better promote acquisition and maintenance of script production skills. Although these results apply only to script training, complexity and feedback should also be investigated in other types of aphasia treatment.

Supplementary Material

1

Financial Support:

This study was supported by the National Institute on Deafness and Other Communication Disorders, Award Numbers 1R01 DC011754 and 1R01 DC016979 to LRC and SVV. We express our sincere thanks to Rosalind Kaye, Rosalind Hurwitz, and Rachel Hitch who assisted with data collection. We are grateful to Kwang-Youn Kim who provided the randomization lists and Nattawut Ngampatipatpong who provided technical support. Finally, we thank the participants with aphasia for their time and effort.

Abbreviations:

WAB-R, Western Aphasia Battery-Revised; AQ, Aphasia Quotient; WPM, words per minute; SE, speech entrainment.

Footnotes

Disclosure Statement: Dr. Cherney reports that AphasiaScripts® is commercially available from the Shirley Ryan AbilityLab. Dr. Cherney receives salary from the Shirley Ryan AbilityLab but has no financial interest in the software sales.

ClinicalTrials.gov identifier: NCT01597037

REFERENCES

1. Simmons-Mackie N Aphasia in North America: Frequency, Demographics, Impact of Aphasia, Communication Access, Services and Service Gaps. (AphasiaAccess, Moorestown, NJ, 2018).
2. Brady MC, Kelly H, Godwin J, Enderby P & Campbell P Speech and language therapy for aphasia following stroke. Cochrane Database of Systematic Reviews, CD000425 (2016).
3. Cherney L & Small S Aphasia, apraxia of speech, and dysarthria. in Stroke Recovery and Rehabilitation (eds. Stein J, Harvey R, Winstein C, Zorowitz R & Wittenberg GF) 181–206 (Demos Medical Publishers, New York, NY, 2015).
4. Maier M, Ballester BR & Verschure P Principles of neurorehabilitation after stroke based on motor learning and brain plasticity mechanisms. Frontiers in Systems Neuroscience 13, 74 (2019).
5. Thompson CK, Ballard KJ & Shapiro LP The role of syntactic complexity in training wh-movement structures in agrammatic aphasia: optimal order for promoting generalization. J Int Neuropsychol Soc 4, 661–674 (1998).
6. Ballard KJ & Thompson CK Treatment and generalization of complex sentence production in agrammatism. Journal of Speech, Language, and Hearing Research 42, 690–707 (1999).
7. Thompson CK, Shapiro LP, Kiran S & Sobecks J The role of syntactic complexity in treatment of sentence deficits in agrammatic aphasia: the complexity account of treatment efficacy (CATE). Journal of Speech, Language, and Hearing Research 46, 591–607 (2003).
8. Thompson CK & Shapiro LP Complexity in treatment of syntactic deficits. American Journal of Speech-Language Pathology 16, 30–42 (2007).
9. Kiran S Complexity in the treatment of naming deficits. American Journal of Speech-Language Pathology 16, 18–29 (2007).
10. Kiran S & Johnson L Semantic complexity in treatment of naming deficits in aphasia: evidence from well-defined categories. American Journal of Speech-Language Pathology 17, 389–400 (2008).
11. Kiran S Typicality of inanimate category exemplars in aphasia treatment: further evidence for semantic complexity. Journal of Speech, Language, and Hearing Research 51, 1550–1568 (2008).
12. Gierut JA Phonological complexity and language learnability. American Journal of Speech-Language Pathology 16, 6–17 (2007).
13. Murray LL, Holland AL & Beeson PM Accuracy monitoring and task demand evaluation in aphasia. Aphasiology 11, 401–414 (1997).
14. McKissock S & Ward J Do errors matter? Errorless and errorful learning in anomic picture naming. Neuropsychol Rehabil 17, 355–373 (2007).
15. Bilda K Video-based conversational script training for aphasia: a therapy study. Aphasiology 25, 191–201 (2011).
16. Cherney LR, Halper AS, Holland AL & Cole R Computerized script training for aphasia: preliminary results. American Journal of Speech-Language Pathology 17, 19–34 (2008).
17. Lee JB, Kaye RC & Cherney LR Conversational script performance in adults with non-fluent aphasia: treatment intensity and aphasia severity. Aphasiology 23, 885–897 (2009).
18. Goldberg S, Haley KL & Jacks A Script training and generalization for people with aphasia. American Journal of Speech-Language Pathology 21, 222–238 (2012).
19. Youmans G, Holland A, Munoz M & Bourgeois M Script training and automaticity in two individuals with aphasia. Aphasiology 19, 435–450 (2005).
20. Hubbard HI, Nelson LA & Richardson JD Can script training improve narrative and conversation in aphasia across etiology? Seminars in Speech and Language 41, 99–124 (2020).
21. Manheim LM, Halper AS & Cherney L Patient-reported changes in communication after computer-based script training for aphasia. Archives of Physical Medicine and Rehabilitation 90, 623–627 (2009).
22. Cherney LR, Halper AS & Kaye RC Computer-based script training for aphasia: emerging themes from post-treatment interviews. J Commun Disord 44, 493–501 (2011).
23. Cherney LR Aphasia treatment: intensity, dose parameters, and script training. International Journal of Speech-Language Pathology 14, 424–431 (2012).
24. Cherney LR, Kaye RC, Lee JB & van Vuuren S Impact of personal relevance on acquisition and generalization of script training for aphasia: a preliminary analysis. American Journal of Speech-Language Pathology 24, S913–S922 (2015).
25. Cherney LR, Braun EJ, Lee JB, Kocherginsky M & Van Vuuren S Optimising recovery in aphasia: learning following exposure to a single dose of computer-based script training. International Journal of Speech-Language Pathology 21, 448–458 (2019).
26. Cherney LR, Kaye RC & van Vuuren S Acquisition and maintenance of scripts in aphasia: a comparison of two cuing conditions. American Journal of Speech-Language Pathology 23, S343–S360 (2014).
27. Kertesz A Western Aphasia Battery - Revised. (PsychCorp, San Antonio, TX, 2007).
28. Kaye RC & Cherney LR Script templates: a practical approach to script training in aphasia. Topics in Language Disorders 36, 136–153 (2016).
29. Kincaid JP, Fishburne RP, Rogers RL & Chissom BS Derivation of new readability formulas (Automated Readability Index, Fog Count, and Flesch Reading Ease Formula) for Navy enlisted personnel. Research Branch Report 8–75 (Chief of Naval Technical Training, Naval Air Station Memphis, 1975).
30. Pitts LL, Hurwitz R, Lee JB, Carpenter J & Cherney LR Validity, reliability and sensitivity of the NORLA-6: Naming and Oral Reading for Language in Aphasia 6-point scale. International Journal of Speech-Language Pathology 20, 274–283 (2018).
31. Cohen J Statistical Power Analysis for the Behavioral Sciences. (Lawrence Erlbaum Associates, Hillsdale, NJ, 1988).
32. Schwartz MF, Middleton EL, Brecher A, Gagliardi M & Garvey K Does naming accuracy improve through self-monitoring of errors? Neuropsychologia 84, 272–281 (2016).
33. Fridriksson J, et al. Speech entrainment enables patients with Broca's aphasia to produce fluent speech. Brain 135, 3815–3829 (2012).
34. Fridriksson J, Basilakos A, Hickok G, Bonilha L & Rorden C Speech entrainment compensates for Broca's area damage. Cortex 69, 68–75 (2015).
35. Bonilha L, et al. Neural structures supporting spontaneous and assisted (entrained) speech fluency. Brain 142, 3951–3962 (2019).
36. Feenaughty L, Basilakos A, Bonilha L & Fridriksson J Speech timing changes accompany speech entrainment in aphasia. J Commun Disord 90, 106090 (2021).
