Author manuscript; available in PMC: 2016 Mar 1.
Published in final edited form as: Augment Altern Commun. 2015 Jan 26;31(1):1–14. doi: 10.3109/07434618.2014.995779

Using Dynamic Assessment to Evaluate the Expressive Syntax of Children who use Augmentative and Alternative Communication

Marika R King 1, Cathy Binger 2, Jennifer Kent-Walsh 3
PMCID: PMC4634893  NIHMSID: NIHMS729394  PMID: 25621928

Abstract

The developmental readiness of four 5-year-old children to produce basic sentences using graphic symbols on an augmentative and alternative communication (AAC) device during a dynamic assessment (DA) task was examined. Additionally, the ability of the DA task to predict performance on a subsequent experimental task was evaluated. A graduated prompting framework was used during DA. Measures included amount of support required to produce the targets, modifiability (change in participant performance) within a DA session, and predictive validity of DA. Participants accurately produced target structures with varying amounts of support. Modifiability within DA sessions was evident for some participants, and partial support was provided for the measures of predictive validity. These initial results indicate that DA may be a viable way to measure young children’s developmental readiness to learn how to sequence simple, rule-based messages via aided AAC.

Keywords: Augmentative and alternative communication (AAC), Dynamic assessment (DA), Graduated prompting, Grammar, Syntax

Using Dynamic Assessment to Evaluate the Expressive Syntax of 5-year-old Children who use Augmentative and Alternative Communication

It is widely acknowledged that the generative language performance of children who rely on augmentative and alternative communication (AAC) is often limited, as evidenced by difficulties with expressive syntax. One- and two-word messages predominate in both spontaneous and elicited communication (e.g., Binger & Light, 2008; Soto, 1999; Sutton & Morford, 1998). In addition, many children demonstrate difficulties using correct word order when communicating via a graphic symbol-based device (Binger & Light, 2008; Sutton, Soto, & Blockberger, 2002; Sutton, Trudeau, Morford, Rios, & Poirier, 2010). Studies also have found that children who require AAC often demonstrate a wide receptive-expressive language gap; that is, their standardized scores for expressive versus receptive language often are dramatically different (Binger, Kent-Walsh, Ewing, & Taylor, 2010; Kent-Walsh, Binger, & Hasham, 2010), with expressive language abilities likely masked by profound speech disorders. This issue must be addressed in a coordinated manner. Certainly, AAC systems that are more intuitive and can better support rapid language development are needed (Light & McNaughton, 2013). Additionally, once children are provided with viable communication modes and appropriate instructional supports, significant and rapid improvements in expressive language may be possible. With regard to the latter approach, dynamic assessment (DA) offers promise for obtaining a more accurate depiction of a child's current skill level, identifying viable communication modes, and gaining insight into how a child can be appropriately supported in developing expressive language skills.

Dynamic Assessment

When children are assessed using static assessments, the examiner records responses without trying to change, modify, or improve the examinee's performance (Tzuriel, 2000), and contextual support is minimized so that habitual performance can be isolated and examined (Olswang & Bain, 1996). Static tests can be either norm-referenced or criterion-referenced. Norm-referenced tests can provide valuable information about a child's performance relative to a given age group, and criterion-referenced tests can be used to assess performance on a particular skill (e.g., comprehension of early developing locatives) and track progress over time. However, static tests are not designed to evaluate the learning process or identify possible barriers to learning (Tzuriel, 2000). In addition, static assessment does not provide useful information on skills for which an individual may have had limited learning opportunities, and scores may reflect a child's life experiences (e.g., limited means of expression, poor instructional support, or limited exposure to vocabulary commonly used by those in the dominant culture) more than the child's developmental readiness to attain a particular skill. The limitations of static assessment therefore are magnified when it is used to assess the communication skills of children with severe disabilities, who frequently experience limited learning opportunities, particularly with respect to the development of communication skills (Barker, Akaba, Brady, & Thiemann-Bourque, 2013; Chung, Carter, & Sisco, 2012; Raghavendra, Virgo, Olsson, Connell, & Lane, 2012).

In contrast, with DA, dramatic changes may be observed over a short period of time. DA, which can be defined as "… an assessment of thinking, perception, learning, and problem solving by an active teaching process aimed at modifying cognitive functioning" (Tzuriel, 2000, p. 386), provides an alternative measure of language ability in individuals with severe disabilities (e.g., Olswang, Feuerstein, Pinder, & Dowden, 2013). The concept behind DA is rooted in Vygotsky's (1978) socio-cultural theory of learning, in which the difference between a child's observed performance and level of potential development is determined (i.e., the zone of proximal development, or ZPD). This higher level of potential development is achieved through supported problem solving under adult guidance, and children can perform above initial levels when provided with these supports (e.g., Feuerstein, Rand, & Hoffman, 1979; Rogoff, 1990). For example, a child learning to use aided AAC may be able to use graphic symbols in a broader range of contexts or produce longer utterances when provided with scaffolding supports. DA incorporates active teaching within the assessment procedures in order to observe the child's process of learning. Through implementation of planned and deliberate teaching during the assessment process, the effects of this support can be isolated and examined (e.g., Tzuriel, 2000). DA also is designed to measure the examiner's effort required to bring about the change in behavior (Peña, 2000) and to measure a child's performance or modifiability (i.e., change in performance) in a prompt-rich environment (Bain & Olswang, 1995).

There are two basic approaches to DA: mediated learning experiences and graduated prompting. With the former approach, DA is individualized and modified depending on the child’s performance on the task (Rogoff, 1990). In contrast, a predetermined cueing hierarchy is used with graduated prompting. Although less widely used than mediated learning experiences, graduated prompting has potential as an effective tool in measuring the amount of support required for a child to learn specific behaviors, given a structured teaching experience.

In the field of speech-language pathology, DA has been used effectively to differentiate children with and without language impairment – and more specifically, to distinguish between persons with a language disorder versus those who, when provided with appropriate supports, rapidly acquire and demonstrate the targeted skills (e.g., Peña, Iglesias, & Lidz, 2001). For example, children who come from non-mainstream cultural backgrounds may score poorly on formal language tests that have been normed and standardized with children from mainstream cultural backgrounds and subsequently appear to have a language disorder. DA may be used to differentiate between those with linguistic competence versus those with true language disorders (Peña et al., 2001). Research is required to determine if DA may be of similar use to children using aided AAC, that is, a child using an aided AAC device may appear to lack linguistic competence, but if the child can produce linguistic targets during DA tasks, this may yield new insights into the child’s underlying competence.

DA also can provide insight into a child's areas of linguistic strength and weakness (Patterson, Rodríguez, & Dale, 2013). In an area closely related to the current project, several studies have indicated that the graduated prompting approach to DA may help predict future communication performance in young children, particularly with regard to the production of early syntactic structures. In two similar studies, DA was used to measure the language performance of toddlers with specific expressive language impairment (Bain & Olswang, 1995; Olswang & Bain, 1996). These authors found that the amount of support required to elicit two-word semantic-syntactic messages during a graduated prompting session was predictive of language performance during a subsequent intervention phase.

Examination of change within brief trials may provide an additional detailed measure of immediate change in response to prompting, or modifiability. In a study of 32 typically developing 4-year-olds with culturally and linguistically diverse backgrounds, graduated prompting procedures were used to assess performance across three language tasks (Patterson et al., 2013). Results indicated that for children who required a prompt to provide a correct response on the first item, language performance was significantly higher on the last two items compared with the first two items, providing preliminary evidence for the merits of using this type of microanalysis to examine the within-task modifiability of children’s language performance.

Rationale for Current Study

Accurately assessing language skills and determining appropriate goals for a child with complex communication needs can be difficult. The results obtained with static, norm-referenced tests fail to acknowledge the limited educational experiences of children with severe disabilities (Barker et al., 2013; Mirenda, 2013) and the challenges of using many currently available AAC systems (Light & McNaughton, 2014). Despite the potential benefits of DA (Barker, Bridges, & Saunders, 2014; Snell, 2002), research addressing its efficacy with populations who use AAC is limited, with no known studies examining the use of DA with children who communicate using graphic symbols. To efficiently and effectively support the early syntax of children who rely on aided AAC, it is important to gain an understanding of how much support is required to build generative language skills. The aims of the current study included the following:

  1. To evaluate the degree of support 5-year-old children with significant speech disorders require to create accurate semantic-syntactic messages when using single meaning graphic symbols on a speech-generating device (SGD). Previous research indicates that children have difficulties using syntax that adheres to spoken word order when communicating via graphic symbols (e.g., Smith, 1996; Sutton & Morford, 1998; Sutton et al., 2010; Trudeau et al., 2007). However, recent studies indicate that with instruction, children may experience rapid changes in performance (Binger, Kent-Walsh, Berens, Del Campo, & Rivera, 2008; Binger et al., 2010).

  2. To determine if there is evidence of modifiability within brief graduated prompting tasks. Microexamination of children’s performance within tasks provides additional information regarding the rate of immediate change (Patterson et al., 2013) and has implications for identifying the child’s ZPD.

  3. To determine if performance during DA is predictive of performance during a subsequent experimental task. This question addresses the validity of DA as an assessment tool to identify 5-year-old children’s developmental readiness to create simple sentences using graphic symbols and to determine the impact of adult support on the children’s performance. Determining children’s ability to sequence graphic symbols to create rule-based messages is important for selection of intervention targets and for gauging language expectations.

Method

General Procedures

This research was part of a larger study in the southwestern USA in which the primary aim was to investigate the effect of an intervention designed to facilitate rule-based, multi-word messages produced by preschool children via graphic symbol-based SGDs. The current study was a retrospective analysis of four children from the larger study. Children in the current investigation, as well as the remaining children in the larger study, participated in DA prior to completing an experimental task in which no additional prompting was provided. Children in the current study were slightly older and completed different targets from the remaining participants. Participants were originally recruited through contacts at a university and in local schools. Initial static assessments to determine eligibility included a battery of normed, standardized tests to assess speech, language, and cognitive abilities.

Participants

The current study included the first four children enrolled in the larger study. Participants were aged 5;0 to 5;11 (years; months) and met the following criteria: (a) receptive language within normal limits, as defined by standard scores no more than 1.5 SD below the mean (i.e., > 78 standard score) on the total score of the Test of Auditory Comprehension of Language-3 (Carrow-Woolfolk, 1999); (b) presence of a severe speech impairment as defined by less than 50% intelligible speech in the no context condition of the Index of Augmented Speech Comprehensibility in Children (Dowden, 1997); and (c) an expressive vocabulary of at least 25 words on the MacArthur Communicative Development Inventories (Fenson et al., 1993) via any communication mode (speech, sign, graphic symbol). See Table 1 for a list of participant characteristics.

Table 1.

Participant Characteristics Including Chronological Age, Sex, Disability, Expressive Vocabulary, and Intelligibility

Chronological age (y;m): Amy 5;10 | Ben 5;0 | Carmen 5;1 | Darryl 5;9
Sex: Amy Female | Ben Male | Carmen Female | Darryl Male
Disability: Amy – suspected ataxia and apraxia of speech; suspected CP | Ben – severe speech disorder; history of TBI; microdeletion of 7q11.22 (a) | Carmen – severe speech disorder | Darryl – severe speech disorder
CDI (expressive vocabulary): Amy 657 | Ben 115 | Carmen 514 | Darryl >86 (b)
I-ASCC (no context/context): Amy 13%/52% | Ben 0%/3% | Carmen 16%/55% | Darryl 35%/68%
Prior AAC use: Amy – experience with a Minspeak®-based device; abandoned over a year before the study began | Ben – one year of prior experience with a Minspeak®-based device; used only at school; single-word vocabulary targeted | Carmen – no prior aided or unaided AAC | Darryl – no prior aided or unaided AAC

Note. CDI = MacArthur Communicative Development Inventories; CP = cerebral palsy; I-ASCC = Index of Augmented Speech Comprehensibility in Children; TBI = traumatic brain injury

(a) This deletion has been associated with autism, but current data are inconclusive regarding speech disorders. Ben does not demonstrate symptoms of autism. No known long-term effects of TBI were identified by his mother.

(b) The CDI was not completed for Darryl. Therefore, the number of different words used in a 20-min language sample taken at the beginning of the study is reported here, which likely is a gross underestimate of his expressive vocabulary.

In addition, the participants (a) spoke English as their first language; (b) demonstrated comprehension of the targeted semantic-syntactic relations with at least 90% accuracy, using procedures adapted from Miller and Paul (1995); (c) had received no prior intervention targeting semantic-syntactic relations via aided AAC; (d) had vision and hearing functional for the study activities; (e) had no diagnosis of autism; and (f) had motor skills adequate for direct selection on an SGD. Children with autism were excluded in an effort to control variables as much as possible, because these children often experience both qualitative and quantitative differences in language learning. Additional measures were collected with the following tools for descriptive purposes (see Table 2): Mullen Scales of Early Learning (Mullen, 1995), Peabody Picture Vocabulary Test, 4th ed. (Dunn & Dunn, 2006), Leiter International Performance Scale – Revised (Roid & Miller, 1997), and Vineland Adaptive Behavior Scales (Sparrow, Cicchetti, & Balla, 2005). Of the four participants included in the current study, only Amy and Ben (pseudonyms) had prior AAC experience.

Table 2.

Static Test Results for All Participants

                  Amy          Ben          Carmen       Darryl
                  %ile   AE    %ile   AE    %ile   AE    %ile   AE
PPVT-4             10   4;10    47   4;10    21   4;1     73   6;5
TACL-3 Vocab       16   4;6     63   5;0     16   4;0     84   6;9
TACL-3 GM          37   5;3     84   6;0     25   4;3     37   5;6
TACL-3 EPS         50   5;9     63   5;3     37   4;6     84   7;0
TACL-3 Total       27   5;3     77   5;5     19   4;5     77   7;7
Vineland-II Comm   50   --      19   --      13   --      37   --
MSEL VR            62   5;9     38   4;9     62   5;6     46   5;6
MSEL FM             7   4;5     14   4;3     24   4;9     42   5;5
MSEL RL             4   4;7     14   4;3      1   3;8     38   5;5
MSEL EL             1   3;7      1   1;5      1   3;3      1   2;5
Leiter-R full IQ   34   --      53   --      66   --      87   --

Note. Dashes indicate that the score was not available. %ile = percentile; AE = age equivalent; PPVT-4 = Peabody Picture Vocabulary Test—4th Edition; TACL-3 = Test of Auditory Comprehension of Language—3rd Edition; Vocab = vocabulary; GM = grammatical morphemes; EPS = elaborated phrases and sentences; Comm = communication; MSEL = Mullen Scales of Early Learning; VR = visual reception; FM = fine motor; RL = receptive language; EL = expressive language; Leiter-R = Leiter International Performance Scale – Revised

Setting and Experimenters

The DA and experimental sessions were administered by the first author (a speech-language pathology graduate student at the time), the second author, and an additional speech-language pathology graduate student. The two students received training from the second author, a seasoned researcher and clinician, prior to administering the DA and experimental tasks. All sessions were conducted in a private research room in a university setting. The experimenter and child were seated either on the floor or at a table for all tasks. Sessions were conducted approximately twice per week and lasted approximately 60 min. All DA and experimental sessions were video-recorded.

Targets and Instrumentation

For both DA and the experimental task, all participants were provided with the same AAC device: an iPad™ containing the Proloquo2Go™ application. All vocabulary was programmed into this application. Synthesized speech from Acapela Group™, the voice output software included with the application, was used for voice output. The same semantic-syntactic targets were used for DA and the experimental task and were administered in the same order for every child. These targets included: agent-action, attribute-entity, possessor-entity, action-object, agent-action-object, and attribute-agent-action.

Target vocabulary used during DA for each semantic-syntactic structure was selected from the CDI (Fenson et al., 1993) and included the following: (a) The symbols COW, LION, MONKEY, PENGUIN, and PIG served as the characters for all constructions (agents, objects, and possessors, plus the entities for the attribute-entity messages); (b) attributes were represented by the symbols for HAPPY, SAD, CLEAN, DIRTY, WET, DRY, RED, BLUE, BIG, and LITTLE; (c) possessions were represented by the symbols AIRPLANE, BANANAS, BED, CAR, CUP, GRAPES, HOT DOG, MOTORCYCLE, PLATE, and SPOON, and (d) actions were the symbols BITES, CLIMBS, CRIES, FALLS, HIDES, HITS, JUMPS, RIDES, SLEEPS, and THROWS. All 10 actions were included for agent-action and attribute-agent-action messages; only the transitive verbs were included for action-object and agent-action-object messages. These verbs were BITES, HITS, and THROWS from the original actions, with the additions of CHASES and KISSES.

Graphic symbols representing target vocabulary consisted of color photographs and line drawings from the Proloquo2Go app. One symbol (FALLS) was downloaded from the Internet to present a more salient image than was available on the app. Separate communication pages were used for DA and experimental sessions, and each page contained all the vocabulary required for each semantic-syntactic target. Symbols were organized using a Fitzgerald key (McDonald & Schultz, 1973), a vocabulary organization system in which graphic symbols from different semantic and syntactic classes are organized in color-coded groups from left to right with the intention of facilitating sentence construction (Beukelman & Mirenda, 2013). Although recent research questions the functionality of color coding (e.g., Wilkinson & Coombs, 2010), it is not known whether color coding detracts from or facilitates the production of multi-symbol rule-based messages, and clinical experience indicates that communication partners benefit from such color coding when modeling messages on the SGD. Therefore, color coding was included.

Training and Familiarization

Prior to beginning DA and the experimental task, participants were required to identify the symbols on the communication pages used during the session. For example, the examiner asked the child, Show me monkey, and the child was expected to point to the corresponding graphic symbol. A paired instructional paradigm (Schlosser & Lloyd, 1997) was used to teach any symbols in error; that is, the instructor showed the graphic symbol to the child while simultaneously providing the spoken label and also providing a demonstration. For example, to teach the symbol JUMPS, the instructor manipulated a puppet to jump and said, See, he jumps, while simultaneously selecting JUMPS on the SGD. Participants were required to identify symbols for each set of vocabulary with at least 90% accuracy before beginning DA or the experimental task.

DA Session Procedures

Materials

Materials included all of the characters and objects listed above. Puppets were used to depict larger versions of the animals, and plastic figures were used for the smaller ones. Additional materials were used as needed (e.g., water sprayer to make the animals wet). To demonstrate actions, simple props were used, such as a box for the animals to hide behind. The child was provided access to the DA communication board on the iPad during the DA sessions.

Procedures

DA session procedures were adapted from Olswang and Bain (1996). Ten items were administered for each target (i.e., 10 different agent-action items for the agent-action target, 10 for possessor-entity, etc.). Whenever possible, all items for a particular target were completed within the same session, although this was not possible for all targets with all children. Total time spent completing DA for all six semantic-syntactic targets ranged from approximately 75 min to 160 min across children. Sessions were approximately 60 min in length, and DA was completed across four to six sessions; most of the remaining session time was spent teaching and reviewing symbols. For each trial, the examiner used the toy animals and objects to demonstrate the target structure. For example, for the agent-action target MONKEY JUMPS, the examiner manipulated the monkey puppet to jump up and down. A cueing hierarchy from least to most support, adapted from Olswang and Bain, was used to prompt the correct production of the target (see Table 3). The child's production at each cueing level was recorded for each of the 10 trials for each target.

Table 3.

Cueing Hierarchy for the Attribute-Entity Target HAPPY PENGUIN (contrast = SAD COW), Including Cueing Level, Type of Prompt, Set-Up, and Example Prompt

Level A (4 points): Elicitation question/prompt. Set-up: Arrange happy penguin and sad penguin, as well as the contrast puppets happy cow and sad cow, in front of the child; point to the happy penguin. Prompt: "Who is this?"

Level B (3 points): Spoken and aided model of alternate target plus sentence completion. Set-up: Point to sad cow, then point to happy penguin (without saying "happy penguin" aloud). Prompt: "Look, this is sad cow SAD COW and this is _[HAPPY PENGUIN]_."

Level C (2 points): Spoken model plus elicitation cue. Set-up: Point to happy penguin. Prompt: "See, this is happy penguin. Who is this?"

Level D (1 point): Direct model plus elicitation statement. Set-up: Point to happy penguin. Prompt: "Tell me, happy penguin HAPPY PENGUIN."

Experimental Task Procedures

Materials

During the experimental task, the participants were provided with a communication page similar to the one used during DA; the only difference was that the animals from the DA board were exchanged for the following Mickey Mouse Clubhouse characters: Mickey, Minnie, Donald, Goofy, and Pluto. Line drawings for these characters were taken from Google Images. During each session, an iPad containing the experimental task communication board plus an additional iPad containing videos depicting the target relations were used.

Probes

A master pool of 50 probes was developed for each semantic-syntactic target, with one exception: For the action-object target, a pool of 25 probes was used due to the smaller number of action words used (5 instead of 10). Every possible combination of vocabulary items was included in the master pool for each target, with the exception of attribute-agent-action (for which 250 combinations were possible). For example, for attribute-entity, possible combinations included wet Mickey, wet Minnie, etc., followed by dry Mickey, dry Minnie, etc. Once each master list was created, 10 different probe sets were created for each target (except for action-object, which contained 5 sets). Each probe set was balanced to ensure that each vocabulary item appeared the same number of times within a given probe set. For example, each of the five Mickey Mouse characters appeared twice and each of the 10 attributes appeared once on each attribute-entity list. Within these restrictions, vocabulary was randomly selected and randomly ordered for each probe list using a random number generator. In addition to the 10 targets, each probe list also contained two randomly selected foils to ensure the child was discriminating between the targeted structure and other depictions, and not simply following the same pattern of message construction regardless of the target. All foils consisted of single-word naming of the characters (e.g., MICKEY).
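The balancing and randomization procedure described above can be sketched in code. This is an illustrative reconstruction, not the study's actual software; the function name, the use of Python's random module, and the tuple representation of probes and foils are all assumptions.

```python
import random

# Vocabulary from the experimental task (characters and attributes as
# described in the text); the data structures here are illustrative.
CHARACTERS = ["Mickey", "Minnie", "Donald", "Goofy", "Pluto"]
ATTRIBUTES = ["happy", "sad", "clean", "dirty", "wet",
              "dry", "red", "blue", "big", "little"]

def make_attribute_entity_probe_set(seed=None):
    """Build one balanced 10-item attribute-entity probe set:
    each attribute appears once, each character appears twice,
    and pairings and presentation order are randomized.
    Two single-word foils (character names only) are mixed in
    so the child must discriminate the targeted structure."""
    rng = random.Random(seed)
    attributes = ATTRIBUTES[:]      # each of the 10 attributes once
    characters = CHARACTERS * 2     # each of the 5 characters twice
    rng.shuffle(attributes)
    rng.shuffle(characters)
    targets = list(zip(attributes, characters))   # 10 two-symbol probes
    foils = [(rng.choice(CHARACTERS),) for _ in range(2)]
    items = targets + foils
    rng.shuffle(items)              # randomize final presentation order
    return items
```

Under these assumptions, every generated set satisfies the balance constraints stated in the text regardless of the random seed.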

Brief video clips of each item on each master list were created using the video recording function on an iPad. Each item was depicted in as salient a manner as possible. A white backdrop was used for all videos, and contrasts were provided as needed. For example, to depict dry Mickey, two Mickey figurines were presented in the video. One Mickey was squirted with a water bottle and the other was not, and then the dry Mickey was brought closer to the camera. The foil videos involved presentation of two characters, with the target character moved forward toward the camera. Example videos for each target can be found online at http://aac-ucf.unm.edu/research/projects/expressive-language.html. Completed videos were saved and placed within files and folders on the iPads (e.g., the attribute-entity folder contained 10 different sets of video probes).

To administer the probes, the examiner showed the child each video clip within a particular set and provided a brief prompt following each video clip. For example, for an agent-action structure such as MICKEY JUMPS, the elicitation prompt used was, What’s happening? The examiner then paused to allow the child time to produce the target structure using the SGD. If necessary, the examiner also provided a spoken and gestural cue to use the device (e.g., Tell me, while pointing to the communication board on the iPad). Participants were permitted to replay the videos as needed. The child’s production for each trial was recorded in real time by the examiner, and the examiner did not provide feedback on the correctness of the responses.

Data Collection and Reduction

DA scoring

Productions at each cueing level were assigned a score of 0–4. For each trial, the child received the point value corresponding with the cueing level at which correct production of the target was achieved. Correct productions at Level A cueing (i.e., the least amount of support) were scored as 4, Level B as 3, Level C as 2, and Level D as 1. If the child failed to achieve a correct production (i.e., incorrect or absent production), it was scored as 0. Additionally, cueing levels were assigned a descriptive label ranging from minimal to maximal support. Level A and B cueing were considered to be minimal support (no models of the target provided), Level C moderate support (spoken model provided), and Level D maximal support (spoken + aided model provided).
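The 0–4 scoring scheme, and the per-target mean described under Aim 1 below, amount to a simple mapping and average. A minimal sketch, with function names and the trial representation invented for illustration:

```python
# Point values for a correct production at each cueing level,
# as defined in the text (Level A = least support).
CUE_POINTS = {"A": 4, "B": 3, "C": 2, "D": 1}

def score_trial(cue_level):
    """Return the score for one trial: the point value of the cueing
    level at which a correct production was achieved, or 0 if no
    correct production occurred (represented here as None)."""
    if cue_level is None:
        return 0
    return CUE_POINTS[cue_level]

def mean_support_score(trial_levels):
    """Average the trial scores for one target, yielding the final
    mean level-of-support score on the 0-4 scale."""
    scores = [score_trial(level) for level in trial_levels]
    return sum(scores) / len(scores)
```

For example, a child correct at Level A on five trials, Level B on three, and incorrect on two would receive a mean score of (5×4 + 3×3 + 2×0) / 10 = 2.9.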

Experimental task scoring

For the experimental tasks, the percentage of correct productions for each set of 10 trials was calculated for each target. Errors on foils also were noted.

Measures of amount of support (Aim 1)

For DA, the mean level of support required for accurate productions was calculated for each target and each participant using the scoring system described previously, that is, participants’ scores on the 10 trials within a session were averaged, resulting in a final mean score ranging from 0–4. Additionally, the amount of support that each participant required at each cueing level was calculated for each target (i.e., for each participant, the percent of correct productions achieved at Level A, B, C, and D were calculated).

Measures of modifiability (Aim 2)

For each target during DA, modifiability was calculated by comparing the child’s combined performance on the first two items to their combined performance on the last two items within a task (e.g., a list of 10 trials of a semantic-syntactic target). Scoring procedures outlined above were used to determine performance at each level of cueing. For example, within a DA session for a given target, a participant’s combined score of 2 on the first two trials was compared with a combined score of 7 on the last two trials (max = 8). The Wilcoxon signed-rank test (Wilcoxon, 1945) was used to determine the significance of the change; it should be noted that caution must be used when interpreting these results, as the data are ordinal. Assumptions cannot be made regarding the equality of the differences between each of the steps.
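The Wilcoxon signed-rank statistic for such paired first-two versus last-two scores can be computed as below. This is a pure-Python sketch for illustration (the study does not report its statistical software), and the example scores in the test are hypothetical, not the participants' data.

```python
def wilcoxon_signed_rank(first, last):
    """Compute the Wilcoxon signed-rank statistic for paired scores:
    rank the absolute nonzero differences (ties receive average
    ranks), sum the ranks of positive and negative differences, and
    return (min(W+, W-), W+, W-)."""
    diffs = [b - a for a, b in zip(first, last) if b - a != 0]
    abs_sorted = sorted(abs(d) for d in diffs)

    def rank(value):
        # Average rank across tied absolute differences.
        idxs = [i + 1 for i, v in enumerate(abs_sorted) if v == value]
        return sum(idxs) / len(idxs)

    w_plus = sum(rank(abs(d)) for d in diffs if d > 0)
    w_minus = sum(rank(abs(d)) for d in diffs if d < 0)
    return min(w_plus, w_minus), w_plus, w_minus
```

When every child improves from the first two to the last two trials, all differences are positive, so W- = 0 and the test statistic takes its minimum possible value.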

Measures of predictive validity (Aim 3)

Predictive validity was addressed by comparing the children’s performance during DA with their performance on the experimental task. Correlations were calculated for each participant’s average performance on the six targets in DA with the percent of correct productions achieved for the six targets during the experimental task (Cohen, 1988).
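The per-participant correlation between the six DA mean-support scores and the six experimental-task accuracy scores is a standard Pearson correlation, sketched below in pure Python. The values in the test are invented for illustration; they are not the study's data.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length
    score lists, e.g., DA mean-support scores and experimental-task
    percent-correct scores across the six targets."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```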

Fidelity and Reliability Measures

Fidelity measures

To assess procedural fidelity, a trained research assistant compared the clinicians’ behaviors against pre-established fidelity standards by viewing video recordings of the sessions. This assistant had a bachelor’s degree and was taking upper-level undergraduate coursework in speech and hearing sciences. He was blinded to the purposes of the study and was not involved in administering either phase of the study. For each child, fidelity measures were calculated for two of the six targets for DA and each corresponding experimental session (i.e., 33% of the data for each child). Session data were presented to the assistant in randomized order, with no indication of which sessions occurred in which order (e.g., DA before the experimental task or vice versa). To ensure that administration procedures were followed consistently and correctly, the clinicians’ behaviors were judged on adherence to the DA and experimental task administration protocols. For DA, mean procedural reliability across participants was 88% (range: 79% - 100%); for the experimental task, it was 98% (range: 92% - 100%). Overall, the administration protocols for DA and the experimental task were adhered to consistently, indicating that the tasks were administered as intended. For the one session falling below 80%, further analysis revealed that the majority of errors involved the experimenter providing a spoken label of target vocabulary during steps in which no spoken labels were indicated. For example, the Level A prompt for the agent-action-object target PIG CHASES MONKEY required the clinician to demonstrate the target using the puppets and then provide the prompt, What’s happening? An error in this case meant that the clinician accidentally used the spoken word Monkey at some point during this step. These errors were unlikely to affect the participants’ outcomes, given how little information was provided. The few errors observed on the experimental task were similar (i.e., providing a spoken label for one of the characters in the probes).

Data reliability

Operational definitions for coding correct and incorrect productions of semantic-syntactic targets and foils were constructed for the larger research study. To establish interrater reliability, the same blinded research assistant who completed the fidelity measures used these operational definitions to re-analyze a randomly selected 33% of sessions from both DA and the experimental task; the same sessions used to calculate procedural fidelity were used for data reliability. Interrater agreement, calculated using Cohen’s kappa, was 1.0 for both DA and the experimental task, indicating perfect agreement in the scoring of the data.
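Interrater agreement of this kind is straightforward to compute. As a hedged illustration (the trial codes below are invented for demonstration, not the study’s data), Cohen’s kappa can be implemented with only the standard library:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters' categorical codes (e.g., correct/incorrect)."""
    assert len(rater1) == len(rater2) and rater1
    n = len(rater1)
    # Observed agreement: proportion of items coded identically by both raters.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement, from each rater's marginal category frequencies.
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[cat] * c2[cat] for cat in c1) / (n * n)
    if p_e == 1.0:  # both raters used a single, identical category throughout
        return 1.0
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes for ten trials; identical coding yields kappa = 1.0,
# the value reported above for both DA and the experimental task.
primary = ["correct", "correct", "incorrect", "correct", "incorrect"] * 2
second = list(primary)
print(cohens_kappa(primary, second))  # → 1.0
```

With perfect agreement the observed-agreement term equals 1, so kappa equals 1 regardless of the chance-agreement correction; any disagreement pulls kappa below raw percent agreement, which is why kappa is preferred over simple agreement for reliability reporting.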

Results

Time Required to Complete DA

Mean completion times were similar across children for the first five targets (range: 10–16 min per target) but longer for the last target, attribute-agent-action (29 min). Amy, Carmen, and Darryl averaged similar completion times per target (M = 13, 15, and 12 min, respectively), whereas Ben took longer (M = 23 min per target).

Amount of Support (Aim 1)

Performance at each cueing level during DA

As shown in Figure 1, participants required varying levels of support across the six semantic-syntactic structures, although the majority of productions were correct at Level A or B cueing – that is, participants frequently achieved correct productions with an elicitation question/prompt (i.e., Level A) or at the sentence completion level (i.e., Level B). Every child attained at least one correct production at Level A or B for every target, except for Carmen with the attribute-agent-action target. However, across all targets, all participants required cueing from each level at some point during DA. Ben performed more poorly on DA than the other participants, producing more errors. Carmen also required notably higher levels of cueing for two of the targets (attribute-entity and attribute-agent-action).

Figure 1.

Figure 1

Participant’s Performance at Each Cueing Level during DA

Across the six semantic-syntactic structures, the possessor-entity target was the least challenging: the mean percentage of correct productions across participants at Level A cueing was highest for the possessor-entity target and lowest for the attribute-agent-action target (see Table 4). Notably, more productions were correct at Level A cueing for the three-term agent-action-object target than for the two-term agent-action and action-object targets, although order effects may have impacted this result (targets were completed in the same order for each child, and agent-action-object was targeted after the two-term targets). Variability in performance at Level A cueing was evident across structures, highlighting their differing degrees of difficulty.

Table 4.

Mean Level of Support for Correct Productions at each Cueing Level across Participants and Semantic-Syntactic Targets

                          Ag-Act   Att-Ent   Poss-Ent   Act-Obj   Ag-Act-Obj   Att-Ag-Act
Level A                    55%      30%       77.5%      62.5%     70%          20%
Level B                    7.5%     35%       10%        15%       10%          7.5%
Level C                    7.5%     22.5%     2.5%       20%       5%           27.5%
Level D                    5%       10%       7.5%       2.5%      10%          17.5%
Incorrect at all levels    25%      2.5%      2.5%       0%        5%           27.5%

Note. Ag-Act = agent-action; Att-Ent = attribute-entity; Poss-Ent = possessor-entity; Act-Obj = action-object; Ag-Act-Obj = agent-action-object; Att-Ag-Act = attribute-agent-action

Mean level of support during DA

As depicted in Figure 2, the average level of support required for accurate target productions varied across semantic-syntactic structures and across participants. However, the majority of mean scores across participants were greater than 3.0, with nearly all scores greater than 2.5, indicating that participants correctly produced many targets at Levels A and B. These results suggest that, overall, participants produced the semantic-syntactic structures with minimal to moderate cueing. Notably, every child produced every target correctly at least once at Levels A through D.

Figure 2.

Figure 2

Mean Level of Support Required for Accurate Productions across Targets during DA
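The exact scoring rubric behind these means is not spelled out in this section, but the 0–4 range of the reported means (and the two-trial ceiling of 8 mentioned under Modifiability) implies a mapping along the following lines. Both the mapping and the trial data below are assumptions for illustration, not the study’s published rubric:

```python
# Assumed cueing-level scoring (a reconstruction, not stated verbatim in the
# paper): correct at Level A earns 4, B earns 3, C earns 2, D earns 1, and a
# target produced incorrectly at all levels earns 0.
LEVEL_SCORE = {"A": 4, "B": 3, "C": 2, "D": 1, "incorrect": 0}

def mean_support(trial_outcomes):
    """Mean support score for one target across trials.

    trial_outcomes: for each trial, the first cueing level at which the
    child produced the target correctly ('A'-'D'), or 'incorrect'.
    Higher means indicate that less support was needed.
    """
    return sum(LEVEL_SCORE[t] for t in trial_outcomes) / len(trial_outcomes)

# A target produced mostly with Level A cueing scores near the 4-point ceiling:
print(mean_support(["A", "A", "B", "A"]))  # → 3.75
```

Under this mapping, a mean above 3.0 corresponds to mostly Level A/B productions, matching the interpretation given in the text.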

Amy’s mean scores across all targets ranged from 2.7 to 3.9. With the exception of attribute-agent-action, her scores fell between 3 and 4, indicating that she generally produced the structures with minimal support. Ben demonstrated somewhat lower averages and greater variability in performance than the other participants, with mean scores ranging from 0.4 to 2.9. Carmen’s mean scores ranged from 1 to 3.9; her poorest performance was with the attribute-agent-action target (as was the case for all four participants), and her highest performance was with the possessor-entity target. This variability highlights the differences in the amount of support that Carmen required across the targets. Finally, Darryl’s mean scores ranged from 2.5 to 4, with three scores falling between 3.9 and 4 (indicating minimal support required).

Across participants, some common patterns were evident. As mentioned above, the possessor-entity target – the third target for all participants – was the least challenging structure, with three of the four participants obtaining a mean score of 3.9; the exception was Ben, with a mean score of 2.4. The attribute-agent-action structure was the most challenging, with a mean score across participants of 1.95 (range: 1.0 to 2.7). Even though this was the last target presented to all the children, all participants demonstrated their lowest or second lowest performance on this target.

Modifiability (Aim 2)

Each participant’s performance on the first two trials of a DA session for a given target was compared with performance on the last two trials of that same session. Amy and Carmen demonstrated significant modifiability (Wilcoxon signed-rank test, p < .05); that is, they required considerably less cueing for accurate productions on the last two trials of a target than on the first two, indicating that they learned to produce the semantic-syntactic structures within a brief learning experience. Ben did not demonstrate statistically significant modifiability within the DA sessions, but his scores trended in this direction: they increased for four targets, remained the same on one, and decreased by a single point on the last. Results were not calculable for Darryl due to a ceiling effect: on four of six targets, his performance on the first two trials equaled his performance on the last two, with three of these cases at the maximum score (8).
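For paired samples this small (six targets per child), the Wilcoxon signed-rank test can be computed exactly by enumerating all sign assignments rather than relying on a normal approximation. The sketch below uses invented first-two/last-two trial scores, not the participants’ data:

```python
from itertools import product

def wilcoxon_signed_rank(x, y):
    """Exact two-sided Wilcoxon signed-rank test for small paired samples.

    Zero differences are dropped; tied |differences| receive mean ranks.
    Returns (T, p): T is the smaller signed-rank sum, and p comes from
    enumerating all 2**n equally likely sign assignments under the null.
    """
    d = [a - b for a, b in zip(x, y) if a != b]
    n = len(d)
    # Rank the absolute differences, averaging ranks over ties.
    order = sorted(range(n), key=lambda i: abs(d[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    t_plus = sum(r for v, r in zip(d, ranks) if v > 0)
    T = min(t_plus, sum(ranks) - t_plus)
    # Exact p: fraction of sign assignments at least as extreme as T.
    count = sum(1 for signs in product((1, -1), repeat=n)
                if min(sum(r for s, r in zip(signs, ranks) if s > 0),
                       sum(r for s, r in zip(signs, ranks) if s < 0)) <= T)
    return T, count / 2 ** n

# Invented summed support scores for the first two vs. last two trials of
# six targets; uniform gains give the smallest possible exact p for n = 6.
first_two = [4, 5, 3, 6, 4, 2]
last_two = [7, 8, 6, 8, 7, 5]
T, p = wilcoxon_signed_rank(first_two, last_two)
print(T, p)  # → 0.0 0.03125
```

With n = 6 and all differences in the same direction, the exact two-sided p is 2/64 = .03125, which shows why p < .05 is attainable even from a single brief session.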

Predictive Validity of DA (Aim 3)

The predictive validity of DA was assessed by comparing the average level of support required for accurate productions during DA with the percent of correct productions on the subsequent experimental task. Three of the four participants demonstrated a moderate correlation (i.e., Ben, Carmen, and Darryl, with r2 values of 0.25, 0.30, and 0.44, respectively). Pooled data across participants yielded an r2 of .20, also indicating a moderate correlation. Amy demonstrated a trivial correlation between her mean level of support during DA and her performance on the experimental task (r2 = 0.0). Further analysis indicated that although Amy demonstrated relatively high levels of performance during DA, she performed more poorly on certain targets during the experimental task (M = 39%; range: 10%-80%). For example, for possessor-entity, her average score during DA was 3.9, but her percent of correct productions on the experimental task was only 10%. However, her performance in subsequent identical experimental tasks (which are not the focus of this paper) improved substantially and rapidly without additional intervention (e.g., 100% accuracy in the fourth possessor-entity experimental session; as each session lasted approximately 30 min, this indicates rapid acquisition of the target during intervention).
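The r2 values reported here are squared Pearson correlations between mean DA support scores and experimental-task accuracy. A minimal computation, using invented per-target values rather than the study’s raw data, looks like this:

```python
from math import sqrt

def r_squared(x, y):
    """Squared Pearson correlation (r2) between two paired score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return (cov / (sx * sy)) ** 2

# Invented values for six targets: mean DA support score (0-4 scale,
# higher = less support needed) vs. percent correct on the experimental task.
da_support = [3.9, 3.5, 2.5, 3.0, 3.8, 1.0]
exp_percent_correct = [80, 70, 40, 50, 75, 20]
print(round(r_squared(da_support, exp_percent_correct), 2))
```

A guard for zero variance (sx or sy equal to 0) is omitted for brevity; constant scores on either measure would leave r undefined.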

Discussion

Results of the current study provide preliminary support for DA as an effective tool for assessing the developmental readiness of 5-year-old children to create simple, rule-based messages using an AAC iPad app. Although all participants required support initially for at least one of the target skills, the provision of DA supports resulted in fairly rapid demonstration of most of the targeted skills for most of the children. In general, the participants required minimal-to-moderate support, although the degree of support required varied across semantic-syntactic targets and across participants. Additionally, results suggest that examining a child’s modifiability, or ability to improve performance within a brief, supportive experience, may be helpful in determining the amount of adult support required to achieve a task (i.e., the child’s ZPD). Furthermore, findings indicate that performance during DA may be predictive of future performance on a similar task. As discussed below, these results have important implications related to supporting the development of early syntax in children who use aided AAC.

Variability in Support Required during DA (Aim 1)

Differences were observed in the amount of support participants required to correctly produce the target structures during DA; that is, sometimes participants provided correct responses with minimal cues, and at other times they needed more assistance. Although differences were evident within and across both individuals and semantic-syntactic structures, patterns of similarity emerged as well. Three participants (Amy, Carmen, and Darryl) demonstrated high levels of performance with minimal cueing on at least three targets (i.e., the majority of productions correct at Level A or B). In general, Ben required more support than the other participants and demonstrated more variability both across targets and within sessions, likely due to his frequent refusals to complete DA tasks.

Although Carmen generally required little support, she needed significantly more cueing for the attribute-entity and attribute-agent-action targets. Her errors on these structures during DA included omissions, substitutions, and inversions, with attribute-entity inversions appearing in approximately one-third of her productions for these two targets. On the experimental task, inversions accounted for 100% and 30% of errors for attribute-entity and attribute-agent-action, respectively. The cloze sentence This is _____ [HAPPY PENGUIN] was used as the Level A elicitation prompt for the attribute-entity structure during DA and also on the experimental task, but Carmen may not have understood or attended to these cloze sentence prompts, thus contributing to word order errors.

To see whether the inverted structure (e.g., penguin [is] happy instead of happy penguin) might be perceived as more natural for some targets, an additional analysis was conducted in which 10 undergraduate students majoring in speech and hearing sciences were asked to rate whether attribute-entity or entity-attribute sounded more natural. Each student was provided with a list of 10 randomly ordered pairs of sentences containing attribute-entity and entity-attribute structures using the same vocabulary as the targets in the experimental condition (e.g., Minnie is happy vs. Happy Minnie). Students rated each sentence structure and its counterpart on a scale according to how natural the sentence sounded. Sentences were visually placed on either end of a scale, with ratings of most natural appearing at the ends, somewhat natural closer to the center, and same in the middle. The students were provided with an example photograph of the target and were asked to imagine receiving the prompt Tell me about this while completing their ratings. Entity-attribute (e.g., Minnie is happy) was viewed as more natural by 9 of the 10 students, who favored the entity-attribute construction 92% of the time. For Carmen, then, using the entity-attribute structure instead of attribute-entity may not have represented a true inversion; she may well have been accurately mapping her mental representations (Minnie is happy) without the benefit of having access to the word is to place in the middle of her sentences (thus, MINNIE HAPPY).

Variability across semantic-syntactic targets was noted as well, although findings must be interpreted cautiously, as order effects may have been present. The possessor-entity target – the third target presented for each participant – was the least challenging (77.5% of productions were correct at Level A; see Table 4). This is one of the earliest relations to develop (Leonard, 1976) and is the only target consisting of two nouns. The visual depiction of common nouns may be particularly salient on communication boards compared with the inherently more abstract depictions of the attributes and actions. Young children who use aided AAC tend to select nouns more frequently than other parts of speech (Sutton et al., 2010; Buenviaje & Binger, 2013), and preschoolers are more accurate selecting concrete symbols versus abstract symbols when using AAC (Light et al., 2004). A combination of these factors may contribute to this finding. In contrast, attribute-agent-action was the most challenging (mean of 20% correct at Level A), despite the fact that this was the final target presented for every child and therefore all participants were highly familiar with the task. Again, order effects may have impacted the data – for example, the children may have been bored with the task by the time they completed the final target. An alternate explanation is that attribute-agent-action is a more complex structure: it was one of only two three-term targets and combines two different semantic-syntactic relations: attribute-agent (HAPPY PENGUIN) + agent-action (PENGUIN FALLS). Interestingly, participants had much less difficulty with the other three-term relation, agent-action-object (70% accuracy at Level A). 
Structural complexity alone may not explain these differences, however: Leonard (1976) observed that a clear and consistent order of emergence for these relations is not readily apparent and that the order of acquisition does not necessarily depend on utterance length or the type of semantic-syntactic relation.

In summary, correct productions across participants were generally achieved with minimal support—that is, between Levels A and B for all targets except attribute-agent-action—indicating that at least some 5-year-old children can correctly sequence graphic symbols to create simple sentences with minimal-to-moderate clinical support. Given that 5-year-olds with typical receptive language have long since mastered these two- and three-word phrases in spoken language, the results may not seem surprising. However, numerous investigations of graphic symbol production tasks similar to those in the current study have indicated that preschoolers have difficulty with these types of productions – in particular with agent-action-object constructions (e.g., Sutton et al., 2010). Further research investigating a range of early multi-word productions, and the particular strengths and challenges that may be present when producing various rule-based messages via graphic symbols, is warranted.

Modifiability within DA (Aim 2)

Results regarding participants’ modifiability within a DA session were not consistent but indicated a positive trend. Two of the four participants (Amy and Carmen) demonstrated significant improvement within the DA sessions, and data for a third (Darryl) could not be tested due to ceiling effects. Across all participants, performance on the last two trials of DA was greater than or equal to performance on the first two trials in all but one instance. Although Ben did not demonstrate statistically significant modifiability within DA, his data indicate that on all but the attribute-agent-action target, the final two data points either increased or stayed the same. Ben’s challenging behaviors likely contributed to his variable performance, and the DA results may not reflect his true ability. Overall, these outcomes indicate that, for some children, rapid learning of sequencing semantic-syntactic messages can occur within a supportive learning environment.

Predictive Validity of DA (Aim 3)

When participants’ mean level of support during DA was compared with their performance on the experimental task, three out of four participants (Ben, Carmen, and Darryl) demonstrated a moderate correlation; that is, the less support required in DA, the better the performance on the experimental task, and vice versa. Thus, for these three children, performance during DA was predictive of performance on a subsequent, similar task. Amy was the exception, demonstrating a trivial correlation; during DA, she generally produced the target with minimal cueing, but on the experimental task—where she essentially received Level A cueing—her performance was lower. However, Amy’s performance on the experimental task does not provide a complete picture of her abilities. Amy (as well as the other participants) was enrolled in a larger study in which the children participated in additional sessions using the same procedures as the experimental task. Amy’s performance in the sessions following the first experimental task revealed a pattern of rapid improvement, that is, in the first session (i.e., the experimental task), she achieved 37% accuracy on average across targets (range: 0% - 80%), but in the subsequent three sessions, her average scores across targets were 72% (range: 10% - 100%), 72% (range: 30% - 100%), and 97% (range: 80% - 100%), respectively. Only Level A cueing was used in these sessions. These findings suggest that it might take more than one session for a child’s abilities to accurately be reflected in a task in which he or she is no longer receiving cueing supports.

Overall, the predictive validity results are promising. One purpose of DA is to measure the potential for immediate change; the underlying assumption is that if a child responds to adult cues and prompts by forming a new behavior, then appropriate support will result in relatively rapid learning (Bain & Olswang, 1995). In a review of 15 DA studies that used Pearson’s correlation coefficients to measure the predictive validity of DA, Caffrey, Fuchs, and Fuchs (2008) found that in general, DA was predictive of subsequent performance, particularly when applied to students with disabilities rather than at-risk or typically achieving students. In other words, children with disabilities were the most likely to show rapid learning gains when provided with appropriate supports – which may be evidence that these children frequently are not provided with appropriate supports (Mirenda, 2014). The results for Ben, Carmen, and Darryl contribute to the growing body of evidence supporting the predictive validity of DA.

Clinical and Theoretical Implications

These results provide evidence that 5-year-old children with complex communication needs who have receptive language within normal limits are capable of learning to sequence two- and three-word phrases using aided AAC—and can do so very quickly—thus adding to an emerging body of evidence that supports this idea (e.g., Binger et al., 2008; Binger et al., 2010; Poupart, Trudeau, & Sutton, 2013). Learning to correctly sequence simple sentences using aided AAC is important both for language development and for increasing communicative competence in children who use AAC. A crucial point to remember is that children using spoken language produce rule-based two- and three-word utterances within the second year of life, and by age 5, typically developing children are producing grammatically complete, complex utterances. Maximizing expressive language as early in development as possible is therefore essential if children with severe speech disorders are to reach their linguistic potential.

With regard to teaching techniques, the use of graduated prompts may be an effective approach in determining preschool-aged children’s developmental readiness to learn to sequence graphic symbols and may have a degree of predictive validity; that is, DA results may predict future language performance on the productions of specific, aided semantic-syntactic structures. Additional research is required to further substantiate this approach, and the inclusion of negative cases would be particularly useful (i.e., poor performance in DA predicting poor performance on subsequent tasks and intervention). Ultimately, it is hoped that DA will be useful in clinical decision making, including predicting readiness for intervention on particular structures and appropriate target selection.

Overall, the use of DA in assessing the abilities of children who use AAC to sequence graphic symbols is promising. Although the focus of the current study was on sequencing graphic symbols, the use of DA with children who use AAC may have broader clinical utility, informing decisions such as symbol selection and symbol layout; for example, DA tasks could be designed to evaluate the level of support required for a child to accurately identify abstract graphic symbols representing various actions or descriptors or examine a child’s performance when locating symbols placed within various organizational schemas. Further, DA may help inform intervention decisions and assist with setting appropriate expectations for children who require AAC. Interventions that regularly assess student progress, including the use of DA, may help in the setting of more ambitious (and more appropriate) goals.

Limitations and Directions for Future Research

The development of effective and reliable language assessments for children who use AAC is still in its early stages. Clearly, more robust research is needed to investigate the effectiveness of DA in determining preschool children’s abilities to use aided AAC to sequence sentences. Replication of the current study with a larger group of participants with similar profiles would strengthen the results. Further, replications with children who have different cognitive/receptive language profiles, as well as children with autism and other disabilities, would further increase understanding of the potential for generalized use of DA. Given the success of the children in the current study, inclusion of younger children is warranted as well. Additional work with semantic-syntactic targets is warranted; vocabulary growth and syntactic development are strongly correlated in early language development (e.g., Bates, Dale, & Thal, 1995), and developing effective interventions to support this critical stage is foundational for ongoing semantic and syntactic growth. Studying additional targets also would be useful; DA is highly adaptable and could be used to assess areas such as grammatical morpheme use, symbol selection, and device layout.

Several methodological modifications may be considered in future work. First, a static pre-test administered prior to DA would yield valuable information, particularly for highly successful participants such as those in the current study; that is, a static pre-test would have provided a baseline against which to compare subsequent performance during DA (Bain & Olswang, 1995). Second, comparing participants’ performance during DA to an intervention phase (instead of a single-event experimental task) may provide a more complete picture of a participant’s readiness to succeed with a particular target and would strengthen the predictive validity of DA. In their study of 30 preschoolers with specific expressive language impairments, Olswang and Bain (1996) compared children’s ability to produce two-term semantic-syntactic messages during a DA session with their performance during a subsequent intervention phase and found DA performance to be highly predictive of the children’s performance during treatment. In the current study, Amy’s performance during DA was not predictive of her performance on the experimental task, yet analysis of subsequent sessions revealed that she achieved mastery of the targets relatively quickly. Therefore, the predictive validity of DA may be improved when a larger sample of post-DA performance is collected.

Finally, the graduated prompting hierarchy itself merits further examination as a valid and reliable assessment tool. A larger study could provide a greater body of data to analyze the effectiveness of each level of cueing within the graduated prompting framework, helping to determine which prompts best predict future performance. Furthermore, refining the prompts would help to create a more nuanced assessment tool; that is, some of the current prompts may yield relatively few correct responses, but alternative prompts not included in the current study might prove to be beneficial. For example, some of the current prompts may be less discerning than other prompts (e.g., expectant delays or gestures toward the device).

Summary

In summary, findings present emerging evidence that DA may be a useful tool for evaluating 5-year-old children’s ability to sequence simple, rule-based messages via graphic symbols, and that for some children, rapid change can be measured within a brief graduated prompting session. In addition, the results offer partial support for the predictive validity of DA with respect to a subsequent experimental task. Because children who use AAC are a heterogeneous population facing a wide range of challenges related to language acquisition, valid and reliable assessment tools such as DA are critical to improving their language outcomes (Barker et al., 2014).

Acknowledgments

This project was completed as a master’s thesis project for the first author. This research was supported by NIH Grant 1R03DC011610. The authors thank thesis committee members Philip Dale, Barbara Rodríguez, and Janet Patterson for their contributions; UNM AAC lab students Eliza Webb, Lindsay Mansfield, Elijia Buenviaje, Jamie Ragsdale, Merissa Ekman, Maja Whitaker, Victoria Ortega, Emily Heisler, and Nathan Bickley for their assistance; and the children and families who participated in this study.

Footnotes

1

The Apple iPad is a line of tablet computers designed and marketed by Apple Inc. with headquarters located in Silicon Valley, CA. More information about the Apple iPad can be found at www.apple.com/ipad

2

Proloquo2go is a product from AssistiveWare and is an augmentative and alternative communication software application developed for iPad, iPhone, and iPod touch. More information can be found at http://www.assistiveware.com/product/proloquo2go.

3

Acapela Group is a company that develops text-to-speech software and services. Acapela Group was founded in 2003 through a merger of three European companies specializing in vocal technology: Babel Technologies (Belgium), Infovox (Sweden), and Elan Speech (France). More information can be found at http://www.acapela-group.com/.

4

Mickey Mouse Clubhouse is an American animated television series that premiered on Disney Channel in 2006. It is the only Mickey Mouse program to be aimed at preschoolers. More information can be found at http://disneyjunior.com/mickey-mouse-clubhouse.

5

Google Images is a search engine owned by Google and introduced in 2001 that allows users to search the web for image content. See images.google.com.

6

Minspeak™ is a graphic symbol-based communication approach in which each symbol represents multiple concepts. These symbols are combined to access words on an SGD. More information can be found at http://www.minspeak.com.

Contributor Information

Marika R. King, Department of Speech and Hearing Sciences, University of New Mexico

Cathy Binger, Department of Speech and Hearing Sciences, University of New Mexico.

Jennifer Kent-Walsh, Department of Communication Sciences & Disorders, University of Central Florida.

References

  1. Bain BA, Olswang LB. Examining readiness for learning two-word utterances by children with specific expressive language impairment: Dynamic assessment validation. American Journal of Speech-Language Pathology. 1995;4:81–91. [Google Scholar]
  2. Bates E, Dale PS, Thal D. Individual differences and their implications for theories of language development. In: Fletcher P, MacWhinney B, editors. Handbook of Child Language. Oxford: Basil Blackwell; 1995. [Google Scholar]
  3. Barker RM, Akaba S, Brady NC, Thiemann-Bourque K. Support for AAC use in preschool, and growth in language skills, for young children with developmental disabilities. Augmentative and Alternative Communication. 2013;29:334–346. doi: 10.3109/07434618.2013.848933. [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Barker RM, Bridges MS, Saunders KJ. Validity of a non-speech dynamic assessment of phonemic awareness via the alphabetic principle. Augmentative and Alternative Communication. 2014;30:71–82. doi: 10.3109/07434618.2014.880190. [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Beukelman D, Mirenda P. Augmentative and alternative communication: Management of severe communication disorders in children and adults. 4th ed. Baltimore, MD: Paul H. Brookes; 2013. [Google Scholar]
  6. Binger C, Kent-Walsh J, Berens J, Del Campo S, Rivera D. Teaching Latino parents to support the multisymbol message production of their children who use AAC. Augmentative and Alternative Communication. 2008;24:323–338. doi: 10.1080/07434610802130978. [DOI] [PubMed] [Google Scholar]
  7. Binger C, Kent-Walsh J, Ewing C, Taylor S. Teaching educational assistants to facilitate the multisymbol message productions of their children who require augmentative and alternative communication. American Journal of Speech Language Pathology. 2010;19:108–120. doi: 10.1044/1058-0360(2009/09-0015). [DOI] [PubMed] [Google Scholar]
  8. Binger C, Light J. The morphology and syntax of individuals who use AAC: Research review and implications for effective practice. Augmentative and Alternative Communication. 2008;24:123–138. doi: 10.1080/07434610701830587. [DOI] [PubMed] [Google Scholar]
  9. Buenviaje E, Binger C. Error patterns of five-year-old children using AAC within simple rule-based messages. Chicago, IL: Poster session presented at the American Speech-Language and Hearing Association Convention; 2013. [Google Scholar]
  10. Caffrey E, Fuchs D, Fuchs LS. The predictive validity of dynamic assessment: A review. The Journal of Special Education. 2008;41:254–270. [Google Scholar]
  11. Carrow-Woolfolk E. Test for auditory comprehension of language. 3rd ed. Austin, TX: Pro-Ed; 1999. [Google Scholar]
  12. Chung YC, Carter EW, Sisco LG. Social interactions of students with disabilities who use augmentative and alternative communication in inclusive classrooms. American Journal on Intellectual and Developmental Disabilities. 2012;117:349–367. doi: 10.1352/1944-7558-117.5.349. [DOI] [PubMed] [Google Scholar]
  13. Cohen J. Statistical power analysis for the behavioral sciences. 2nd ed. New Jersey: Lawrence Erlbaum Associates; 1988. [Google Scholar]
  14. Dowden P. Augmentative and alternative decision making for children with severely unintelligible speech. Augmentative and Alternative Communication. 1997;13:48–58. [Google Scholar]
  15. Dunn LM, Dunn DM. Peabody Picture Vocabulary Test. 4th ed. Bloomington, MN: AGS Publication; 2006. [Google Scholar]
  16. Fenson L, Dale P, Reznick S, Thal D, Bates E, Hartung J, Reilly J. MacArthur Communicative Development Inventories. San Diego, CA: Singular Publishing Group, Inc; 1993. [Google Scholar]
  17. Feuerstein R, Rand Y, Hoffman MB. The dynamic assessment of retarded performers. Baltimore, MD: University Park Press; 1979. [Google Scholar]
  18. Kent-Walsh J, Binger C, Hasham Z. Effects of parent instruction on the symbolic communication of children using augmentative and alternative communication during storybook reading. American Journal of Speech-Language Pathology. 2010;19:97–107. doi: 10.1044/1058-0360(2010/09-0014). [DOI] [PubMed] [Google Scholar]
  19. Leonard LB. Meaning in child language: Issues in the study of early semantic development. New York, NY: Grune and Stratton Inc; 1976. [Google Scholar]
  20. Light J, Drager KJ, Mellott S, Millar D, Parrish C, Welliver M. Performance of typically developing four- and five-year-old children with AAC systems using different language organization techniques. Augmentative and Alternative Communication. 2004;20:63–88. [Google Scholar]
  21. Light J, McNaughton D. Putting people first: Re-thinking the role of technology in augmentative and alternative communication intervention. Augmentative and Alternative Communication. 2013;29:299–309. doi: 10.3109/07434618.2013.848935. [DOI] [PubMed] [Google Scholar]
  22. McDonald E, Schultz A. Communication boards for cerebral palsied children. Journal of Speech and Hearing Disorders. 1973;38:73–88. doi: 10.1044/jshd.3801.73. [DOI] [PubMed] [Google Scholar]
  23. Miller JF, Paul R. The clinical assessment of language comprehension. Baltimore, MD: Paul H. Brooks; 1995. [Google Scholar]
  24. Mirenda P. Revisiting the mosaic of supports required for including people with severe intellectual or developmental disabilities in their communities. Augmentative and Alternative Communication. 2014;30:19–27. doi: 10.3109/07434618.2013.875590. [DOI] [PubMed] [Google Scholar]
  25. Mullen EM. Mullen Scales of Early Learning. Circle Pines, MN: American Guidance Service, Inc; 1995. [Google Scholar]
  26. Olswang LB, Bain BA. Assessment information for predicting upcoming change in language production. Journal of Speech & Hearing Research. 1996;39:414–423. doi: 10.1044/jshr.3902.414. [DOI] [PubMed] [Google Scholar]
  27. Olswang L, Feuerstein J, Pinder GL, Dowden P. Validating dynamic assessment of triadic gaze for young children with severe disabilities. American Journal of Speech-Language Pathology. 2013;22:449–462. doi: 10.1044/1058-0360(2012/12-0013). [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Patterson J, Rodríguez B, Dale P. Response to dynamic language tasks among typically developing Latino preschool children with bilingual experience. American Journal of Speech-Language Pathology. 2013;22:103–112. doi: 10.1044/1058-0360(2012/11-0129). [DOI] [PubMed] [Google Scholar]
  29. Peña ED. Measurement of modifiability in children from culturally and linguistically diverse backgrounds. Communication Disorders Quarterly. 2000;21:87–97. [Google Scholar]
  30. Peña E, Iglesias A, Lidz C. Reducing test bias through dynamic assessment of children’s word learning ability. American Journal of Speech-Language Pathology. 2001;10:138–154. [Google Scholar]
  31. Poupart A, Trudeau N, Sutton A. Construction of graphic symbol sequences by preschool-aged children: Learning, training, and maintenance. Applied Psycholinguistics. 2013;34:91–109. [Google Scholar]
  32. Raghavendra P, Virgo R, Olsson C, Connell T, Lane AE. Activity participation of children with complex communication needs, physical disabilities and typically-developing peers. Developmental Neurorehabilitation. 2011;14:145–155. doi: 10.3109/17518423.2011.568994. [DOI] [PubMed] [Google Scholar]
  33. Rogoff B. Apprenticeship in thinking: Cognitive development in social context. New York, NY: Oxford University Press; 1990. [Google Scholar]
  34. Roid GH, Miller LJ. Leiter International Performance Scale - Revised. Wood Dale, IL: Stoelting Co; 1997. [Google Scholar]
  35. Schlosser R, Lloyd L. Effects of paired associate learning versus symbol explanations on blissymbol comprehension and production. Augmentative and Alternative Communication. 1997;13:226–238. [Google Scholar]
  36. Smith M. The medium or the message: A study of speaking children using communication boards. In: von Tetzchner S, Jenson MH, editors. Augmentative and Alternative Communication: European Perspectives. San Diego, CA: Singular Publishing Group; 1996. pp. 119–136. [Google Scholar]
  37. Snell M. Using dynamic assessment with learners who communicate nonsymbolically. Augmentative and Alternative Communication. 2002;18:163–176. [Google Scholar]
  38. Soto G. Understanding the impact of graphic sign use on the message structure. In: Loncke FT, Clibbens J, Arvidson HH, Lloyd LL, editors. Augmentative and Alternative Communication: New directions in research and practice. London: Whurr Publishers; 1999. pp. 40–48. [Google Scholar]
  39. Sparrow SS, Cicchetti DV, Balla DA. Vineland Adaptive Behavior Scales. 2nd ed. San Antonio, TX: Pearson; 2005. [Google Scholar]
  40. Sutton AE, Morford JP. Constituent order in picture pointing sequences produced by speaking children using AAC. Applied Psycholinguistics. 1998;19:525–536. [Google Scholar]
  41. Sutton A, Soto G, Blockberger S. Grammatical issues in graphic symbol communication. Augmentative and Alternative Communication. 2002;18:192–204. [Google Scholar]
  42. Sutton A, Trudeau N, Morford J, Rios M, Poirier MA. Preschool-aged children have difficulty constructing and interpreting simple utterances composed of graphic symbols. Journal of Child Language. 2010;37:1–26. doi: 10.1017/S0305000909009477. [DOI] [PubMed] [Google Scholar]
  43. Trudeau N, Sutton A, Dagenais E, De Broeck S, Morford J. Construction of graphic symbol utterances by children, teenagers, and adults: The effect of structure and task demands. Journal of Speech, Language, and Hearing Research. 2007;50:1314–1329. doi: 10.1044/1092-4388(2007/092). [DOI] [PubMed] [Google Scholar]
  44. Tzuriel D. Dynamic assessment of young children: Educational and intervention perspectives. Educational Psychology Review. 2000;12:385–435. [Google Scholar]
  45. Vygotsky LS. Mind in society. Cambridge, MA: Harvard University Press; 1978. [Google Scholar]
  46. Wilcoxon F. Individual comparisons by ranking methods. Biometrics Bulletin. 1945;1:80–83. [Google Scholar]
  47. Wilkinson KM, Coombs B. Preliminary exploration of the effect of background color on the speed and accuracy of search for an aided symbol target by typically developing preschoolers. Early Childhood Services. 2010;4:171–183. [PMC free article] [PubMed] [Google Scholar]