HHS Author Manuscripts
Author manuscript; available in PMC: 2022 Jun 22.
Published in final edited form as: Res Pract Persons Severe Disabl. 2020;45(2). doi: 10.1177/1540796920911152

Effects of an AAC App with Transition to Literacy Features on Single-Word Reading of Individuals with Complex Communication Needs

Jessica Caron, Janice Light, David McNaughton
PMCID: PMC9214510  NIHMSID: NIHMS1705978  PMID: 35747523

Abstract

The purpose of this study was to investigate the effects of Transition to Literacy (T2L) software features (i.e., dynamic text and speech output upon selection of a graphic symbol) within a grid display in an augmentative and alternative communication (AAC) app on the sight-word reading skills of five individuals with severe disabilities and complex communication needs. The study implemented a single-case multiple-probe research design across one set of three participants. The same design was utilized with an additional set of two participants. During intervention, the T2L feature was activated for targeted sight words during a book reading activity. The dependent variable was the number of the 10 target words correctly identified. With only limited exposure to the T2L feature, all five participants demonstrated increased accuracy in identification of the 10 targeted sight words. Four of the five participants generalized this learning to use of a text-only display for the 10 targeted sight words. This study provides preliminary evidence that redesigning AAC apps to include the provision of dynamic text combined with speech output can positively impact the sight-word reading of participants. This adaptation in AAC system design could be used to support improved outcomes in both language and literacy.

Keywords: augmentative and alternative communication, apps, literacy, severe disability


Literacy skills are fundamental to generative communication, positive educational outcomes, informed decision-making, and active societal participation. For individuals with severe disabilities and complex communication needs who use augmentative and alternative communication (AAC), literacy assumes an increased importance. With literacy, these individuals are able to communicate original thoughts without depending on support personnel to anticipate and program graphic symbols within their AAC systems (Light & McNaughton, 2013). Literacy additionally empowers individuals with complex communication needs with access to a broader range of AAC supports and interaction environments (e.g., direct messaging, texting, emailing) that are imperative for active participation in a print-dependent culture (Caron & Light, 2017).

Becoming a skilled reader requires the acquisition of knowledge and skills across a number of domains, with integration of both low-level processes (i.e., phonemic awareness, letter-sound associations, blending skills, word recognition) and high-level processes (e.g., oral language, fluency, and comprehension; Allor & Chard, 2011). Although the ability to read words by sight does not, by itself, make an individual a skilled reader, single-word recognition is important to literacy development at both the lower and higher processing levels. Long term, the ability to recognize sight words automatically supports the use of cognitive resources on other higher order reading skills, such as comprehension, once single words are placed within connected text (Fletcher et al., 2007).

Sight Word Reading

Sight-word reading is the process of reading words, automatically at a glance, without analysis of the individual letters and sound correspondences in a given word (Ehri, 2005). Sight-word reading is an efficient way to read words and skilled readers rely on their abundant sight-word knowledge to read fluently (Miles et al., 2017). In sight-word learning, readers develop orthographic knowledge, whereby students are taught words as logographs or whole-words (Spector, 2011). Instruction often includes words selected from lists of frequently encountered non-decodable (e.g., was, the) and decodable words (e.g., big, run; Dolch List, Edmark List), taught commonly through memorization drills (Miles et al., 2017). Through multiple exposures to a word, learners build connections between the word’s unique visual pattern, associated label, and meaning and they are able to recognize a word as a whole word (Ehri, 2005). Therefore, sight words are not limited to high-frequency or irregularly spelled words but include all words a reader can read from memory (including high interest and personally relevant words like names of family members, restaurant names, and food labels).

Instruction in sight-word reading can be a good starting point for individuals with complex communication needs as it: (a) provides a foundation upon which more abstract reading skills can be built (e.g., alphabetic principles), (b) enables students to perform functional tasks (e.g., reading environmental signs, items on a menu, recipes; Spector, 2011), and (c) gives individuals a sense of accomplishment related to literacy (Light & McNaughton, 2013). A growing body of research supports the assertion that individuals with severe disabilities and complex communication needs can acquire vital literacy skills, including sight words. Mandak and colleagues (2019) reviewed nine single-case studies involving 24 individuals, with varying diagnoses, who used AAC. Overall, the evidence indicated that instruction that included models, corrective feedback, and time-delay had positive effects on sight-word reading across ages and diagnostic categories. These results add support to earlier reviews indicating that individuals with complex communication needs can acquire sight words with instruction (e.g., Browder & Xin, 1998; Spector, 2011).

As research continues to investigate factors that contribute to best practices for sight-word instruction for individuals who use AAC (e.g., task modifications, time-delay, corrective feedback), additional factors should be considered in order to maximize literacy outcomes for individuals with complex communication needs. For example, the use of well-designed AAC supports has the potential to benefit not just language and communication outcomes, but literacy outcomes as well. Further investigation of how graphic symbols and text are being used within AAC systems is warranted in order to enhance literacy gains.

AAC Supports and Literacy Learning

AAC systems, both high and low tech, commonly use graphic symbols (i.e., photographs or line drawings) to represent concepts. In most cases, the symbols are paired with a small static text label located above or below the symbol. In traditional AAC technologies, the text label (or a smaller version of the paired text and graphic symbol) may appear within the message bar when a graphic symbol from a grid display is selected. Yet, as noted by Fossett and Mirenda (2006), Erickson et al. (2010), and, more recently, Caron and colleagues (2018), static pairing of text and symbol may not support sight-word learning. Although the message bar can, hypothetically, unpair the text from the graphic symbol, the appearance of the text in the message bar is subtle and displaced from the graphic symbol selected (Caron et al., 2018; Etzel & LeBlanc, 1979; Light et al., 2014; Schreibman, 1975). Therefore, this solution does not appear to support the learning of sight words for some individuals who use AAC.

Additionally, picture-supported text is often used to engage individuals with complex communication needs in classroom literacy participation (e.g., software programs such as, News-2-You, PixWriter, and Writing with Symbols). Picture-supported text involves pairing picture symbols with text or replacing text with picture symbols for each word in a sentence. Although these AAC supports (both systems and picture-supported text software) are meant to improve language and literacy skills, the static pairing of the text with graphic symbols may have minimal impact on the development of literacy skills (Didden et al., 2000; Erickson et al., 2010; Fossett & Mirenda, 2006).

One explanation for this lack of sight-word learning is that the static pairing of text and graphic symbols interferes with sight-word acquisition; when text is paired with graphic symbols in a static manner, the graphic symbols are recognized with little effort and therefore the attention is directed toward the graphic symbol and away from the process of learning to recognize the text (Didden et al., 2000; Fossett & Mirenda, 2006; Saunders & Solman, 1984; Sheehy, 2002; Solman et al., 1992). The static pairing of text and graphic symbols is a common AAC system design feature and may pose an unintentional challenge to acquisition of sight words and, overall, to more positive literacy outcomes for individuals who require the use of AAC to communicate.

Technology Re-design to Support Literacy

Although literacy instruction (including sight-word instruction) is imperative, improved features within AAC technologies could also be used to complement instruction and improve literacy skills. Light and colleagues (2014) proposed redesigning AAC supports to promote literacy learning by individuals with complex communication needs. The proposed redesign incorporates AAC features to support the transition to literacy (T2L). The T2L feature draws upon theory and research to support language and literacy learning through the use of AAC and includes the following (Light et al., 2014): (a) presentation of dynamic animated text upon selection of the graphic symbol, using motion to draw visual attention to the text (cf. Jagaroo & Wilkinson, 2008), (b) origination of the text from the graphic symbol to support the association of the symbol and the text (Etzel & LeBlanc, 1979; Schreibman, 1975), (c) replacement of the graphic symbol by the text to make the word salient and mitigate the difficulties that may arise from static pairing of graphic symbols and text (Fossett & Mirenda, 2006), (d) pairing of the speech output with the appearance of the written word on the screen, and (e) targeting of sight words for the symbols within the learner’s AAC system, thus ensuring that literacy learning is driven by the individual’s interests and needs (Light & McNaughton, 2013).

Recent research has demonstrated positive findings for the use of the T2L feature in grid-based AAC displays to support sight-word learning for individuals with autism spectrum disorder and complex communication needs. Caron and colleagues (2018) introduced the grid-based T2L AAC app to five school-aged students with autism spectrum disorder (ages 6–14) in structured one-on-one sessions targeting 12 words. At baseline, the students performed at low levels of accuracy in reading the words (measured by matching the words to the correct picture symbol from a field of four). Intervention included access to a 15-button AAC grid-based display with graphic symbols and the T2L feature during a highly structured, teacher-directed matching activity. After 5–8 intervention sessions, all five students made progress toward acquiring the target words, with gain scores ranging from +45% to +69% accuracy. During the generalization phase, using a text-only grid display, all participants were able to use the sight words they had learned to describe and make comments on images that were presented to them (e.g., using the character’s name and the words “larger” or “lower” to describe a scene from a video game). Although this study provided promising outcomes, extension of these findings is important in order to better understand how the T2L feature can support improved literacy outcomes in different intervention contexts (e.g., book reading, conversation) for individuals with varied characteristics of complex communication needs (e.g., greater or lesser literacy skills, different diagnoses).

The purpose of the current study was to investigate the effects of the T2L feature (i.e., dynamically displaying text along with speech output) within a graphics-based grid display AAC app, in order to support the transition from graphic symbols to text for individuals with intellectual and developmental disabilities and complex communication needs. Specifically, the research questions were: (1) What is the effect of the AAC app with the T2L software feature on the acquisition of sight-word reading of 10 words during a book reading activity by individuals with intellectual and developmental disabilities, complex communication needs, and limited literacy skills? (2) Are the effects maintained six weeks after exposure to the AAC app with the T2L feature is terminated? (3) Do the participants generalize the sight-word reading skills to a text-only AAC grid display? It was hypothesized that the dynamic presentation of the written word paired with speech output would support the acquisition, generalization, and maintenance of sight-word reading by individuals with intellectual and developmental disabilities and complex communication needs.

Method

Research Design

A single case, multiple probe across participants design was used to determine the effectiveness of the AAC app with the T2L software feature on sight-word reading. Multiple-probe designs are well suited for AAC intervention research, offering experimental control with small numbers of participants as well as detailed analysis of individual results. The independent variable was the use of the AAC app with T2L feature during a book-reading activity. The dependent variable was percent accuracy reading 10 sight words.

The design included one analysis with three participants and a second analysis with two participants. Each analysis employed the following phases: baseline, intervention, generalization (concurrent with baseline and intervention phases), and maintenance. Once stability of the dependent variable was achieved at baseline with the first participant, intervention began with this participant while the second and third participants remained in baseline. When an intervention effect was established with the first participant (i.e., an increase of at least 20% accuracy from the baseline average over two consecutive sessions), intervention was initiated with the second participant. The third participant remained in baseline until an intervention effect was established with the second participant. These procedures were repeated with the other two participants to determine if their results replicated those of the first three.
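The staggered phase-advance rule described above is, at its core, a simple decision procedure. As an illustrative sketch only (the function name and data layout are hypothetical and not part of the study's materials), the criterion of two consecutive sessions at least 20 percentage points above the baseline mean could be expressed as:

```python
def intervention_effect(baseline_scores, intervention_scores, gain=20.0):
    """Return True once two consecutive intervention sessions each exceed
    the baseline mean by at least `gain` percentage points -- the rule
    used to decide when the next participant leaves baseline.
    All scores are probe accuracies in percent (0-100)."""
    baseline_mean = sum(baseline_scores) / len(baseline_scores)
    consecutive = 0
    for score in intervention_scores:
        # Count consecutive sessions meeting the gain threshold.
        consecutive = consecutive + 1 if score >= baseline_mean + gain else 0
        if consecutive >= 2:
            return True
    return False
```

For example, baseline probes of 10%, 10%, and 20% (mean 13.3%) followed by intervention probes of 40% and 50% would satisfy the criterion, whereas 40% followed by 20% would not.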

Participants

Ethics approval was obtained from the Human Research Protection Program prior to commencement of the study. Participants were recruited from schools in central Pennsylvania by requesting nominations from teachers and speech-language pathologists who worked with individuals with intellectual and developmental disabilities and complex communication needs. Participant selection for the study included the following criteria: (a) presented with an intellectual and developmental disability as determined by school reports (individualized education programs or standardized IQ testing), (b) were between 5 and 21 years old, (c) presented with speech that did not meet all of their daily communication needs, as judged by their teachers and speech-language pathologists, (d) were able to follow one-step directions, per teacher report, (e) were considered symbolic communicators who could use at least 10 spoken words, signs, or graphic symbols (e.g., Picture Communication Symbols) expressively, (f) lived in homes in which English was the first language, (g) demonstrated unimpaired or corrected vision and hearing within normal limits, per school reports or parental/teacher report, and (h) were unable to decode novel words or read at the sentence level, per teacher report and observation by the researcher of a literacy instructional session provided by the individual’s teachers.

Five individuals with intellectual and developmental disabilities and complex communication needs, ranging in age from 7–5 (years, months) to 20–7, participated in the study. Pseudonyms have been used throughout to protect confidentiality. Four of the five participants had a diagnosis of Down syndrome. All participants used speech to communicate, yet intelligibility was significantly impaired, judged to be less than 75% intelligible at the single-word level by unfamiliar partners. Additionally, familiar partners (e.g., teachers, speech therapists, family members) reported difficulty consistently understanding speech with single words and simple sentences. Four of the five participants had access to tablets with AAC applications. Rob and Jon had access to the AAC application GoTalk Now (Attainment Company) and Bran and Jamie had access to the AAC application TouchChat HD (Saltillo Company). For three of these four participants (Rob, Jon, and Bran), these AAC applications were not used outside of structured speech-therapy sessions. Jamie used his tablet with TouchChat throughout the day to make requests (e.g., snack choices, play choices). All individuals were receiving speech and language services at the time of the study. Specifically, Arya and Jamie received weekly therapy; and Rob, Jon, and Bran received monthly consults (see Table 1 for demographic information).

Table 1.

Participant Demographics

Participant | Age | Gender | Disability | Literacy Skills | PPVT Score | Educational Placement

Rob | 20–7 | Male | Down syndrome | Letter-sound correspondences: 26; Sight words: ~100 | Standard score: 40; Percentile rank: <1%; Age equivalent: 3.8 | High school; Special education class
Jon | 19–5 | Male | Down syndrome | Letter-sound correspondences: 26; Sight words: ~100 | Standard score: 40; Percentile rank: <1%; Age equivalent: 3.6 | High school; Special education class
Bran | 13–1 | Male | Down syndrome | Letter-sound correspondences: 26; Sight words: ~50 | Standard score: 40; Percentile rank: <1%; Age equivalent: 3.11 | Middle school; Special education class
Arya | 7–5 | Female | IDD; severe expressive and receptive language disorder | Letter-sound correspondences: 26; Sight words: ~200 | Standard score: 69; Percentile rank: 2%; Age equivalent: 4.8 | Elementary school; Special education class
Jamie | 7–8 | Male | Down syndrome | Letter-sound correspondences: 6; Sight words: ~25 | Standard score: 40; Percentile rank: <1%; Age equivalent: <1.09 | Elementary school; Special education class
1 Age is presented in years followed by months. For example, 20–7 indicates 20 years and 7 months.

Using the definition proposed by Alper (2003), the term “students with severe disabilities” is used to describe individuals with moderate, severe, and profound intellectual disabilities who share similar learning characteristics, including learning slowly and having difficulty putting together component parts of information, maintaining information, and generalizing information. All participants fit this description, with four of the five scoring below the first percentile on standardized language measures and Arya scoring in the second percentile. In addition, all participants were educated in their school district’s substantially separate special education classrooms for students with severe disabilities. They received literacy instruction within their classrooms, specifically in group Reading Mastery lessons (Arya, Bran, and Jamie) and lessons using the Accessible Literacy Learning (ALL) curriculum (Jon and Rob). By high school, Jon and Rob had acquired 26 letter-sound correspondences and 50 to 100 sight words. Bran demonstrated a similar literacy acquisition profile by middle school. Arya had the most literacy strengths when the study began, demonstrating knowledge of 26 letter-sound correspondences and approximately 200 sight words in elementary school. Jamie had the most limited literacy knowledge of the five participants at the start of the study, demonstrating knowledge of six letter-sound correspondences and approximately 25 sight words. Despite these emergent literacy skills, no participant was able to decode three-letter consonant-vowel-consonant words or read at the sentence level during observations of literacy instruction provided by their teachers. Students continued to participate in their literacy groups throughout the study, yet the sight words selected for the study were targeted only within the intervention.

Materials

Target and Foil Words

All stimuli used across the phases of the study were based on a total of 12 sight words: 10 targeted for learning and two used by the researcher to model the operational components of the tasks. In order to identify personally relevant and motivating words, teachers and family members were asked questions about the participants, including (but not limited to) general likes and dislikes, places they visit frequently, common leisure activities, and objects and items they request or talk about with frequency. After this discussion, a corpus of 30 words was selected per participant, and images were printed for all 30 words. The words and their photographs were discussed with the participants individually, and the participants selected from the list the concepts they would like to learn to read. In addition to participant feedback, the following criteria were used to select each participant’s 10 target words. The words: (a) were no more than 8 letters in length, (b) were imageable, (c) shared an initial letter with a minimum of one other word in the 12-word set (e.g., pool and pizza for participant Jamie, or girl and gym for participant Rob), (d) were not read accurately by the participants at baseline, and (e) had the potential to be used during communicative interactions and leisure experiences. In addition, a corpus of foil words was created for each individual. The foil words supported the probe task, in which the researcher presented the target word and three additional words in text form. The foil words were also used in each book and were selected based on concepts of interest or vocabulary known to the participant, as well as initial-letter similarity to the target words. The list of 10 target sight words, two model words, and 15 foil words for each participant is available in Appendix A.

Books

Using Microsoft PowerPoint, three simple books were created for each participant. The books included the target sight words and foils. Each page included an image of an event (e.g., girls celebrating after scoring a goal) with a simple description below the image. The description included a simple sentence in the form of text and SymbolStix. For example, “I like to get a [SymbolStix for medal]” and “making a [SymbolStix for goal]” were used for the target sight word medal and the foil goal. Each page contained a minimum of one target sight word and one foil word, with a maximum of three target sight words per page.

AAC Technology and App

The T2L software feature was introduced using a 12-button AAC grid display on the NOVA Chat 12. The NOVA Chat 12 hardware included a 12.2-inch LCD Samsung Galaxy tablet. The 12-button display was programmed with 12 graphic symbols (i.e., symbols for the 10 target words and the two words used as models by the researcher). See Figure 1. The NOVA Chat 12 with a grid-based display was used during all phases of the study; during the intervention phase, however, the T2L feature was turned on, exposing the participants to the sight words through the T2L feature. The generalization phase used the AAC app with a text-only grid display, that is, a static grid display with 12 written words but no symbols.

Figure 1:

Example of the dynamic text feature within the graphics-based AAC grid display within the NOVA Chat 12.

Upon selection of the graphic symbol with static text label (image on left), the text alone zooms out from the graphic symbol (image in the middle); the text then fills the screen for 3 seconds and the word is spoken (image on the right) before fading back into the graphic symbol.

Upon selection of a graphic symbol from the AAC grid display, the T2L feature (Light et al., 2014) was activated. The T2L feature unfolds in a sequential manner: first, the text is presented dynamically, emerging from the graphic symbol; then the grid display is replaced by the word, which appears for 3 s paired with speech output; finally, the text shrinks back into the graphic symbol and disappears. See Figure 1 for an example with the sight word swim. Additionally, refer to https://tinyurl.com/rerc-on-aac-T2L for a video demonstration of the T2L feature.

Procedure

Participants engaged in one to three 30-min sessions per week. The first author served as the primary interventionist. Due to scheduling issues, a graduate student trained in the procedures provided the final six intervention sessions for one participant (Jamie). Prior to baseline, all participants were trained to ensure that they were able to identify the graphic symbols for the 10 target sight words. Accuracy was assessed by having the participant select the correct graphic symbol from a field of four in response to the spoken instruction, “Point to ___.”

If training on a graphic symbol was needed, the researcher completed the following procedures for each concept. First, the researcher placed four graphic symbols in front of the participant, stated “show me,” and verbally labeled the target concept (e.g., swim). Then the researcher implemented systematic instructional procedures: the researcher gave the command (e.g., “show me [verbal label of target concept]”) and waited 3 s for a response. If no response or an incorrect response occurred, the researcher immediately touched the correct symbol and stated, “this is [verbal label of target concept].” If the participant did not point with the researcher after the researcher identified the correct response, the researcher then stated, “point with me, point to [verbal label of target concept].” Corrective feedback (i.e., showing the correct response) was provided as required when no responses or incorrect responses occurred; corrective feedback was not used in any other phase of the study. Three trials per concept were completed in this sequence with feedback, and the graphic icons were rearranged on each trial. Participants also received verbal praise throughout (e.g., “yes, that is right,” “good job”). Once all participants consistently identified the symbols with 100% accuracy over two consecutive sessions, baseline began.

Baseline

During each baseline session (i.e., a probe), the participants were assessed on their accuracy in reading the 10 target words. The researcher provided no feedback during the probe. Materials for probes included laminated text cards, one for each word (including foil words) and 2 in. x 2 in. SymbolStix icons (to match those used within the AAC app) for each of the 12 sight words.

Probes.

During the probes, to demonstrate the task, the researcher provided two models using the two words not targeted for intervention. After the models, the probe (score out of 10) began. The researcher placed four words, in text form, above a graphic symbol of the target sight word and instructed the participant to read the words and to give/point to the word that went with the picture. Per probe trial, the four words consisted of: (a) the target word, (b) a foil that started with the same initial letter of the target word, (c) a foil that started with the same two initial letters of the target word, and (d) a word seen within the text that did not have the same initial letter. One of the foils was drawn from the target sight word list of 10. Additionally, if the target word contained an initial letter that was a capital letter, one of the foils was required to contain a matching capital letter. Foil words were used within each story and confined to the set of 15 words, per participant. For example, for the word soda, the following four words might be presented in text: soda, soap, Shrek, and waffle. Soda is the target word being probed, soap is a foil that begins with the same two initial letters, Shrek is a target word from the participant’s list of 10 words, and waffle is a concept used in the books.
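The composition of a probe trial described above can be summarized in a short sketch. This is an illustration only (the function `build_probe_trial` and its greedy selection strategy are hypothetical simplifications; the study additionally matched capital letters when the target began with one and drew one foil from the participant's 10-word target list):

```python
def build_probe_trial(target, word_pool):
    """Assemble the four text choices for one probe trial: the target,
    a foil sharing the first letter, a foil sharing the first two
    letters, and a word with a different initial letter. Assumes
    `word_pool` contains a suitable candidate for each slot."""
    # Foil sharing the first two letters of the target.
    same_two = next(w for w in word_pool
                    if w != target and w[:2].lower() == target[:2].lower())
    # Foil sharing only the first letter.
    same_one = next(w for w in word_pool
                    if w not in (target, same_two)
                    and w[0].lower() == target[0].lower())
    # Word from the books with a different initial letter.
    different = next(w for w in word_pool
                     if w[0].lower() != target[0].lower())
    return [target, same_one, same_two, different]
```

Using the paper's own example, `build_probe_trial("soda", ["soap", "Shrek", "waffle"])` selects soda (target), Shrek (same initial letter), soap (same two initial letters), and waffle (different initial letter); in the study, the order of presentation would also be varied across trials.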

After the probe was complete, the participant and researcher read two stories using the custom books and the grid-based AAC symbol display. During baseline, the individual had access to the 12-button display, with 12 graphic symbols paired with static text labels above the graphic symbols (i.e., the 10 sight words targeted for instruction and the two models used by the researcher). During baseline, the T2L feature was turned off (i.e., no dynamic text appeared on the screen), yet the device spoke the text label for the graphic icon that was selected (i.e., the target word) and the text label was inserted into the message bar after selection of the graphic icon. The researcher read the story text aloud, pausing for a maximum of 3 s at each SymbolStix icon. After 3 s, if the participant did not select a word from the grid display, the researcher would prompt, “Find the word that matches this picture,” while pointing to the SymbolStix icon in the book. If the participant did not select the graphic symbol after 3 s, the researcher assisted the participant in activating the correct graphic icon, either by activating the symbol herself or by providing more prompting (e.g., pointing to the symbol and saying “touch this one”). Each participant completed a minimum of five baseline sessions prior to the start of intervention in order to establish a stable baseline; baseline sessions continued until no increasing trend was evident and the slope was stable or decreasing across two consecutive data points.

Baseline Generalization.

The probes for the baseline generalization phase differed from the probes in baseline, intervention, and maintenance. The baseline generalization probes did not use text cards, but instead used the AAC app with a text-only grid display. That is, the generalization of the participants’ sight-word knowledge was examined by changing the context of responding from a field of four text cards to a static, orthographic grid display with 12 written words (including the two model words and the 10 target words, but no graphic symbols or foil words). The locations of the written words within the grid were re-arranged so that the words were not in the same location as their graphic symbol referents during intervention, ensuring that the participants had to read the words to use the display and could not rely on memorizing the location of the graphic symbol within the grid.

During baseline generalization sessions, the researcher read the text from the book aloud, pausing for a maximum of 3 s at each SymbolStix icon. After 3 s, if the participant did not select a word from the grid display, the researcher would prompt, “Find the word that matches this picture,” while pointing to the SymbolStix icon in the book. If the participant did not make a selection from the 12-button text-based grid display after 3 s, the researcher moved on to the next sentence in the book. Therefore, the participant received no feedback during the task from the researcher or the device (the volume was turned off for generalization probes).

Intervention

Intervention sessions followed the same procedures as baseline and lasted approximately 30 min. Each intervention session started with a probe to assess accuracy of reading the 10 target words. After the probe, the researcher and participant read two stories but, unlike baseline, they used the AAC app with the T2L feature turned on.

Probes.

The procedures for the probes during intervention were identical to those used during baseline. The intervention phase concluded for each participant when he or she met criterion, as defined for this study, by attaining at least 80% accuracy in identifying the 10 target words in two consecutive sessions. For Jamie, intervention instead ended when the school year concluded before criterion was reached.

Exposure to the T2L Feature Through the AAC App.

Once the probe at the start of the intervention session was complete, the researcher introduced the AAC app with T2L features during a book activity (refer to Figure 1 for an example of the T2L feature). Every custom book started with the participant’s two model words, and together the researcher and participant matched the SymbolStix to the icons within the grid display. After the models, the researcher read the text and paused at the SymbolStix. If needed, after 3 sec., the researcher prompted, “Find this” and pointed to the SymbolStix within the book. The participant’s selection of the symbol resulted in activation of the dynamic text and speech output. After the text appeared dynamically (i.e., the word in a black text box slowly emerged from the graphic symbol), it stayed on the screen for 3s while the speech output said the word and then the word disappeared back into the graphic symbol. The researcher required the participant to select the same symbol from the grid display again, stating, “one more time.” This allowed two sequential exposures to the dynamic text and speech output for the same symbol. If the participant did not select the graphic symbol after 3 sec., the researcher assisted the participant in activating the T2L feature of the correct graphic icon, either by touching the graphic symbol on the display herself or providing more prompting (e.g., pointing to the symbol and saying “touch this one”). The researcher did not provide additional instruction for the sight words; nor did she draw the participant’s attention to the dynamic text in any way.

The 10 symbols/target words were presented in a different order within each book. Two books were read within one session. Thus, there were a total of four exposures (3s each) to each word, in each intervention session, for a total of 12s of exposure to the dynamic text and associated speech output per word, per intervention session. Accordingly, the exposure time was calculated as the amount of time with the dynamic text on the app screen. In order to maintain experimental control of the amount of exposure to the dynamic text, the participants only had access to the AAC app, incorporating the T2L features, during the intervention sessions.
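The exposure arithmetic just described (two selections per word per book, two books per session, 3 s of dynamic text per selection) can be expressed as a simple check. This sketch is purely illustrative; the constant names are ours, not from the study materials:

```python
# Illustrative check of the exposure arithmetic described in the procedures:
# each symbol is selected twice per book ("one more time"), two books are
# read per session, and each selection shows the dynamic text for 3 seconds.

SELECTIONS_PER_BOOK = 2
BOOKS_PER_SESSION = 2
SECONDS_PER_EXPOSURE = 3

def exposures_per_session():
    """Number of dynamic-text exposures per word in one intervention session."""
    return SELECTIONS_PER_BOOK * BOOKS_PER_SESSION

def exposure_seconds_per_session():
    """Total seconds of dynamic-text exposure per word in one session."""
    return exposures_per_session() * SECONDS_PER_EXPOSURE

print(exposures_per_session())         # 4 exposures per word
print(exposure_seconds_per_session())  # 12 seconds per word
```

Multiplying `exposure_seconds_per_session()` by the number of sessions gives the cumulative exposure figures reported in the Results (e.g., 17 sessions yields 204 s per word).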

Intervention Generalization.

Intervention generalization sessions were identical to baseline generalization sessions. Intervention generalization probes occurred after criterion was met, with the goal of assessing the transition to a text-only display.

Maintenance

Maintenance probes were conducted four and six weeks after the last intervention session. The probes measuring maintenance followed the same procedures as the baseline and intervention probes, but no book readings occurred. One of the participants (Jamie) did not meet criterion before the end of the school year and, therefore, did not complete any maintenance probes.

Procedural Fidelity

Procedural fidelity was assessed for the probe and intervention procedures across all phases. A graduate student in Communication Sciences and Disorders was trained in the use of two checklists, one for probe sessions and one for intervention sessions. The first author and the graduate student first scored one video per phase together using the checklists, discussing scoring as they watched. They then independently scored an additional video from each phase, compared their checklist scoring, and discussed any discrepancies. Once they agreed on more than 90% of the completed steps across three videos, the graduate student began coding independently, reviewing a random sample of 20% of the probe and intervention sessions for each participant in the baseline and intervention phases, and a minimum of two sessions per participant in the maintenance and generalization phases. For both the probes and the intervention sessions, the number of procedural steps implemented correctly was divided by the total number of procedural steps and multiplied by 100 to yield a percentage of procedural fidelity. Mean procedural fidelity for the probe sessions in each phase was as follows: baseline, 100%; intervention, 98% (range: 97% to 100%); generalization, 96% (range: 93% to 100%); and maintenance, 100%. Mean procedural fidelity for the intervention procedures (i.e., book reading and T2L) was: baseline, 97% (range: 94% to 100%); intervention, 99% (range: 97% to 100%); generalization, 96% (range: 94% to 100%); and maintenance, 100%.
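The fidelity percentage described above is a straightforward ratio of steps implemented correctly to total checklist steps. A minimal sketch (the function name and example values are ours, for illustration only):

```python
def procedural_fidelity(steps_correct, steps_total):
    """Percentage of procedural checklist steps implemented correctly."""
    if steps_total <= 0:
        raise ValueError("steps_total must be positive")
    return (steps_correct / steps_total) * 100

# e.g., 29 of 30 checklist steps implemented correctly
print(round(procedural_fidelity(29, 30), 1))  # 96.7
print(procedural_fidelity(30, 30))            # 100.0
```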

Data Collection and Analysis

Measures

The dependent variable was the number of the 10 target words correctly identified, from a field of four words, upon presentation of a SymbolStix icon of the target word. Data for the dependent variable were gathered during the probes. Each target word was probed once, in random order, within each session. If, within 5s, the participant pointed to the target word from a field of four words (the correct word and three foils from the target list), the response was scored as correct. If the participant pointed to an incorrect word or did not point to any word within 5s, the response was counted as incorrect. Correct trials out of the 10 total target words were tallied to give a score for each probe session. Data were also collected on the number of written words selected correctly from the text-only display during the generalization probes, to assess generalization of the sight-word reading skills to a text-only AAC display.

Data Analysis

Guided by single-case design standards, visual analysis was used to interpret the findings of the current study. Data on the number of target words read accurately were graphed separately for each individual across the four experimental conditions (i.e., baseline, intervention, generalization, and maintenance), and the level, trend (slope), and variability of the data in the intervention condition were compared to those at baseline to determine the effectiveness and efficiency of the app with the T2L feature. Additionally, Tau-U effect sizes were calculated (Parker, Vannest, Davis, & Sauber, 2011). A Tau-U score ranges from 0–1 and can be interpreted using the following criteria: .20 or lower is a small effect; between .20 and .60 is a moderate effect; between .60 and .80 is a large effect; and between .80 and 1 is a very large effect (Parker et al., 2011).
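The interpretation criteria above amount to a simple threshold lookup. The sketch below only maps an already-computed Tau-U score onto the Parker et al. (2011) descriptive categories; it does not compute Tau-U itself, and the function name is ours:

```python
def interpret_tau_u(tau_u):
    """Map a Tau-U effect size (0 to 1) to the descriptive categories
    of Parker et al. (2011): small, moderate, large, very large."""
    if not 0 <= tau_u <= 1:
        raise ValueError("expected a Tau-U score in [0, 1]")
    if tau_u <= 0.20:
        return "small"
    if tau_u <= 0.60:
        return "moderate"
    if tau_u <= 0.80:
        return "large"
    return "very large"

# The five participants' Tau-U values (see Table 2) all exceed .80
for score in (0.82, 1.00, 0.95, 0.98):
    print(score, interpret_tau_u(score))  # each prints "very large"
```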

Reliability of Data

All sessions were videotaped. Probe data were recorded live by the interventionist. To assess reliability of the scoring of the probe data, a second coder (a graduate student in speech-language pathology) was trained in the coding procedures and followed the same steps used for assessing procedural fidelity. The graduate student reviewed the same sample of probes, per phase, per participant, as was used for procedural fidelity. Mean interobserver agreement for the probes in each phase was: baseline, 100%; intervention, 99% (range: 95% to 100%); generalization, 100%; and maintenance, 100%.

Results

The results are presented per participant, according to the main research questions: (a) the effect of the AAC grid-based app with the T2L feature on the acquisition of sight words during book reading, (b) the maintenance of the intervention effect, and (c) the generalization of the acquired sight words to a text-only AAC grid display (see Figures 2 and 3). The percentage of correct responses in the baseline, intervention, generalization, and maintenance phases is displayed in Table 2.

Figure 2:

Number of words correctly identified, out of 10, in the probes at baseline, during intervention, and during maintenance and generalization.

Figure 3:

Number of words correctly identified, out of 10, in the probes at baseline, during intervention, and during maintenance and generalization with the second set of participants (i.e., direct replication data).

Table 2.

Summary of participants’ gains

Participant   Baseline   Intervention¹   Gain    Tau-U   Pre-Generalization   Post-Generalization   Gain
Rob           30%        77%             +47%    0.82    0%                   86%                   +86%
Jon           21%        87%             +66%    1.00    27%                  73%                   +46%
Bran          19%        83%             +64%    0.95    20%                  100%                  +80%
Arya          28%        83%             +55%    0.98    20%                  88%                   +68%
Jamie         14%        50%             +36%    0.98    13%                  *                     *

¹ Intervention average calculated from the final three data points of percent accuracy out of 10.
* Jamie did not reach intervention criterion by the end of the school year; therefore, generalization data are not available.

The participants all improved their accuracy in reading the 10 sight words as a result of exposure to the AAC app with the T2L feature, as demonstrated by a positive trend after stable baselines, marked increases in the slope of the data, and/or increases in the level of accuracy of sight-word reading after intervention began. At baseline, prior to exposure to the AAC app with T2L features, the participants’ average accuracy was 30% or less when reading the 10 target words (chance level of 25%). Rob performed with a mean of 30% accuracy at baseline, Jon 21%, Bran 19%, Arya 28%, and Jamie 14%. For each participant, the average of the final three data points was calculated to determine gains from baseline. Rob improved to an average of 77% accuracy for a gain of +47%, Jon to 87% for a gain of +66%, Bran to 83% for a gain of +64%, Arya to 83% for a gain of +55%, and Jamie to 50% for a gain of +36%. Jamie did not reach criterion by the end of the school year, when the study concluded.

Tau-U was calculated for all participants, with each demonstrating a very large effect (greater than .80). Furthermore, four of the five participants learned to recognize words after only minimal exposure to the dynamic text: Rob, Jon, Bran, and Arya met criterion with a range of 40 to 68 exposures per word. More specifically, Rob received a total of 17 intervention sessions and 68 exposures to the dynamic text for each written word. Per exposure, the dynamic text appeared on the display for 3s, totaling 204s per word. Jon required 14 intervention sessions and 56 dynamic text exposures (168s); Bran, 10 intervention sessions and 40 exposures (120s); and Arya, 10 intervention sessions and 40 exposures (120s), per word. Jamie did not reach criterion prior to the study ending, yet demonstrated positive gains from baseline after 23 intervention sessions and 92 exposures per word (276s).

Four of the individuals participated in maintenance measures (Jamie did not, as he did not reach criterion prior to the school year ending). Maintenance measures were conducted at 4 and 6 weeks after the last intervention session. Rob was the only participant to drop below the criterion level, with maintenance measures averaging 55% correct. Jon averaged 85% correct, Bran averaged 100% correct, and Arya averaged 90% correct. Importantly for interpreting maintenance, the participants did not have access to the AAC app with T2L features during the six-week maintenance phase once intervention was terminated.

The participants all demonstrated gains in their accurate use of the 12-word, text-only grid display, demonstrating progress in the transition from a graphics-based grid display to a text-only display. Generalization probes were conducted at least twice at baseline and at least twice post intervention for all participants who met criterion. At baseline, Rob performed with a mean of 0%; Jon, 27%; Bran, 20%; and Arya, 20%. After intervention, Rob performed with a mean of 86% accuracy for a gain of +86%; Jon, 73%, for a gain of +46%; Bran, 100%, for a gain of +80%; and Arya, 88%, for a gain of +68% (see Table 2 for a summary). Although large gains were observed in these generalization data, causal inference is not possible because the replication requirements of the design were not met; the results are therefore best interpreted as pre-post data.

Discussion

A number of factors potentially contributed to the effectiveness and efficiency of this intervention, including those related to the AAC app and those related to the participants’ prior literacy knowledge.

Dynamic Presentation of Text

The results of this study suggest that changing how graphic symbols and text are used within AAC systems may support sight-word learning. The design of the T2L features drew on research in literacy instruction, visual cognitive processing, and instructional design (Light et al., 2014). Specifically, the T2L features included the use of: (a) dynamic text to draw the learner’s visual attention to the written word, (b) active linking of the written word to its spoken referent (via the speech output), and (c) targeting of motivating and meaningful vocabulary known to the learner (Light et al., 2014).

The direct active pairing (both between the text and graphic symbol and between text and speech output) can support learning of the association between a written word and its referent (picture symbol and/or spoken word; Caron et al., 2018; Fossett & Mirenda, 2006; Light et al., 2014). This study contributed new findings by isolating the dynamic text feature. During baseline, the AAC app displayed text labels paired with graphic symbols (above each symbol). In addition, upon selection of a graphic symbol, the text label was inserted into the message bar. These system design features are common settings for individuals who use grid-based graphic symbol displays. Yet there was no evidence of rising baselines, and minimal overlapping data were observed once participants were provided intervention. These findings suggest that sight-word learning was not occurring when text was statically paired with the graphic icons. The outcome provides additional evidence that the static pairing of text and graphic symbols does not contribute to sight word learning (Didden et al., 2000; Fossett & Mirenda, 2006; Saunders & Solman, 1984; Sidman & Cresson, 1973; Solman et al., 1992). The animation, and the redesign from static to dynamic pairing, potentially attracted the learner’s visual attention to the written word, thus engaging orthographic processing of the text (Light et al., 2014).

Previous Literacy Knowledge

The brief exposure to the dynamic text (approximately 12s of exposure per word, per session) was the only intervention. To isolate the effects of the T2L feature, no additional literacy instruction was provided on the target words. The participants in this study all had some previous experience with literacy instruction. However, some had more experience than others. The participants with more knowledge of letter-sound correspondences (specifically, Rob, Jon, Arya, and Bran) were potentially able to capitalize on partial connections, including visual, context, and phonetic cues to identify written words (Ehri, 2005). In contrast, while Jamie (the participant with the least letter-sound correspondence and sight-word knowledge) did demonstrate learning, he made much slower progress, requiring more exposures to the words. Similar findings were observed with five participants with autism spectrum disorder who received the T2L feature during a highly structured task to learn 12 sight words (Caron et al., 2018).

Implications for Practice

More research is needed before strong clinical recommendations can be made, yet the current study adds to the growing evidence (Light et al., 2019) that the T2L feature can support the acquisition of sight words for individuals with intellectual and developmental disabilities and complex communication needs. Teachers and service providers can consider including this feature as one component of a more comprehensive literacy intervention targeting sight-word reading along with other skills foundational to reading (e.g., letter-sound correspondence, decoding). The AAC system and T2L feature are not intended to replace current literacy interventions; as implemented in this study, the participants all maintained participation in school-based literacy interventions for other components of literacy instruction (e.g., decoding). When beginning to implement the T2L feature, teachers and service providers may need to consider a number of factors, including the number of words and exposures to the dynamic text, the type of words, and the context in which the dynamic text occurs.

Number of Words and Exposures

Although more research is needed to understand the optimal number of words activated with the T2L feature, four of the five participants with some basic literacy knowledge (e.g., letter-sound knowledge and approximately 100 sight words) demonstrated acquisition of 10 sight words when a small corpus of words was activated. Similarly, Caron and colleagues (2018) found that five individuals with autism spectrum disorder acquired a small corpus of sight words when only the target words were activated with the T2L feature. Additionally, both studies provided four exposures (3s each) per word in a 30 min. session. Four of the five participants in this study learned to recognize the targeted sight words within a range of 40–68 exposures. Increasing the number of exposures within a single intervention session may decrease the total number of intervention sessions required. Clinicians might consider providing approximately 50 exposures and then assessing whether the graphic symbol should be replaced with orthography within the grid-based AAC display, or clinicians could conduct intermittent probes to assess acquisition for individuals.

Type of Words

Individuals learning to read acquire sight words more rapidly when the words are real (vs. nonsense, like “bak” or “cug”), more familiar, more meaningful, and more frequent (e.g., cake vs. sake; Roberts et al., 2011). As part of the study, the individuals helped to select the words they wanted to learn. The words were imageable and familiar to the participants. They were high-interest words that were used frequently for communication. Targeting motivating and meaningful vocabulary known to the learner within the AAC display may have supported meaning processing and provided contextual support for learning the 10 words (Light et al., 2014). Therefore, clinicians should consider the imageability, frequency, and personal relevance of a word prior to activating the T2L feature for its graphic symbol.

Context for Dynamic Text

Individuals with complex and extensive support needs often have difficulty transferring knowledge learned in one context to another task (Copeland et al., 2016). To potentially mitigate this issue, the AAC system itself could be used as a context for learning sight words, with graphic symbols being replaced with orthographic representations upon acquisition. The generalization results of this study, as well as findings from Caron and colleagues (2018) with five individuals with autism spectrum disorder and complex communication needs, suggest that learning within the device may have contributed to the positive generalization to text-only grid displays. In this study and in previous studies (e.g., Caron et al., 2018; Mandak et al., 2019), the T2L features were used in more structured activities, like book reading or matching tasks. Future research is needed to understand the implications of employing the T2L features during conversation. Clinicians can begin by incorporating these features within more structured tasks, like the ones previously mentioned, and by closely monitoring the conversations of individuals who use AAC, as well as those of their communication partners, when the T2L feature is on.

Limitations and Future Research

This study provides important data on the effects of an AAC app with T2L features on the sight-word reading of individuals with intellectual and developmental disabilities and complex communication needs. However, the study has a number of limitations that should be considered when interpreting the results. The current study included a small number of participants (n = 5). Future research is needed to build a more complete picture of the impact of the T2L feature for individuals with complex communication needs across varying diagnoses, ages, literacy knowledge, and intervention contexts.

The study targeted a limited number of words (10), and the absence of procedures for making use of the targeted words during authentic interactions may have been a limitation of the current study. All 10 words were targeted at once. Although this is close to real-world conditions of using a grid-based AAC app, future research is needed to understand how many words should be targeted at once (e.g., which words should have the T2L feature on) and if and when the T2L feature should be turned on/off. Additionally, future research is required to explore how to best manage the transition from symbol-based AAC displays to traditional orthography. For example, one option is for teachers or service providers to consistently probe for acquisition of words that are activated within the device. Another option is that, after a certain number of exposures, the AAC system could prompt support personnel to consider replacing the graphic symbol with an orthographic representation (i.e., text). More research is required to investigate the number of exposures typically required before a written word is acquired and can therefore replace the graphic symbol in the AAC display.

The app was used in a controlled setting and only for research, as opposed to being infused throughout daily communication. The generalization measures were additionally conducted within the controlled setting of book reading, rather than through use of the text-only grid display during conversation. Although this was an important first step toward investigating how graphic symbols and text can be used to more optimally support literacy outcomes, future research is required in more real-world settings. Such research could include the use of the T2L feature during conversation as participants use their AAC systems throughout the day, as well as investigation of the generalization of words taught with the T2L feature to real-world settings (e.g., recognition of the words “men” and “women” on restroom doors at a restaurant).

Research indicates that practitioners’ views are key to the uptake of evidence-based research (Proctor et al., 2011). Future research should include feedback from the participants who have used the T2L feature, as well as from the teachers and service providers who work with these individuals. Through discussion with these key stakeholders, more information can be gathered to better understand their perceptions of the appropriateness of goals, feasibility of procedures, and effectiveness of outcomes (often referred to as social validity) in studies that employ the T2L feature.

Conclusion

One in five students with significant intellectual disabilities leaves high school without even minimal literacy skills (Allor et al., 2010). Currently, AAC systems are not well designed to support the development of literacy; instead, these systems potentially pose an unintentional challenge to the acquisition of sight words and more positive literacy outcomes. Consideration of how graphic symbols and text are optimally used within AAC systems can potentially contribute to both language and literacy development. Specifically, this study provides preliminary evidence that redesigning graphic-based AAC software to include the T2L feature (dynamic text, as opposed to static text, with paired speech output, appearing from the graphic symbol and momentarily replacing the graphic-based grid display) supported acquisition of 8–10 sight words by the participants in the study. Infusing opportunities for the development of literacy skills within AAC supports has the potential to lead to better outcomes for individuals with complex communication needs, ultimately supporting purposeful participation in an increasingly text-based society.

Supplementary Material

Appendix A supplemental

Acknowledgments

The contents of this paper were developed under a grant from the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR grant # 90RE5017). NIDILRR is a Center within the Administration for Community Living (ACL), Department of Health and Human Services (HHS). The contents of this paper do not necessarily represent the policy of NIDILRR, ACL, HHS, and you should not assume endorsement by the Federal Government.

The authors would like to offer their gratitude and thanks to the participants who contributed their time. The authors would like to thank Saltillo for the realization of the T2L software features for evaluation and for the loan of the AAC equipment. The authors would like to acknowledge Jessica Gormley, Lauramarie Pope, and Mandy Slowey for their contributions to the project.

Contributor Information

Jessica Caron, Department of Communication Sciences and Disorders, The Pennsylvania State University.

Janice Light, Department of Communication Sciences and Disorders, The Pennsylvania State University.

David McNaughton, Department of Educational Psychology, Counselling, and Special Education, The Pennsylvania State University.

References

1. Alper SK (2003). An ecological approach to identifying curriculum content for inclusive settings. In Ryndak DL & Alper SK (Eds.), Curriculum and instruction for students with significant disabilities in inclusive settings (pp. 73–85). Allyn & Bacon.
2. Allor JH, Champlin TM, Gifford DB, & Mathes PG (2010). Methods for increasing the intensity of reading instruction for students with intellectual disabilities. Education and Training in Autism and Developmental Disabilities, 45(4), 500–511. https://www.jstor.org/stable/23879756
3. Allor JH, & Chard DJ (2011). A comprehensive approach to improving reading fluency for students with disabilities. Focus on Exceptional Children, 43(5), 1–12.
4. Browder DM, & Xin YP (1998). A meta-analysis and review of sight word research and its implications for teaching functional reading to individuals with moderate and severe disabilities. Journal of Special Education, 32(3), 130–153. 10.1177/002246699803200301
5. Caron J. (2016). Engagement in social media environments for individuals who use augmentative and alternative communication. NeuroRehabilitation, 39(4), 499–506. 10.3233/NRE-161381
6. Caron JG, & Light J. (2017). Social media experiences of adolescents and young adults with cerebral palsy who use augmentative and alternative communication. International Journal of Speech-Language Pathology, 19(1), 30–42. 10.3109/17549507.2016.1143970
7. Caron J, Light J, Holyfield C, & McNaughton D. (2018). Effects of dynamic text in an AAC app on sight word reading for individuals with autism spectrum disorder. Augmentative and Alternative Communication, 34(2), 143–154. 10.1080/07434618.2018.1457715
8. Copeland SR, McCord JA, & Kruger A. (2016). A review of literacy interventions for adults with extensive needs for supports. Journal of Adolescent & Adult Literacy, 60(2), 173–184. 10.1002/jaal.548
9. Didden R, Prinsen H, & Sigafoos J. (2000). The blocking effect of pictorial prompts on sight-word reading. Journal of Applied Behavior Analysis, 33(3), 317–320. 10.1901/jaba.2000.33-317
10. Ehri LC (2005). Development of sight word reading: Phases and findings. In Snowling MJ & Hulme C. (Eds.), The science of reading: A handbook (pp. 135–154). Blackwell Publishing.
11. Erickson KA, Hatch P, & Clendon S. (2010). Literacy, assistive technology, and students with significant disabilities. Focus on Exceptional Children, 42(5), 1–16. 10.17161/foec.v42i5.6904
12. Etzel BC, & LeBlanc JM (1979). The simplest treatment alternative: The law of parsimony applied to choosing appropriate instructional control and errorless-learning procedures for the difficult-to-teach child. Journal of Autism and Developmental Disorders, 9(9), 361–382. 10.1007/BF01531445
13. Fletcher JM, Lyon GR, Fuchs LS, & Barnes MA (2007). Learning disabilities: From identification to intervention. Guilford Press.
14. Fossett B, & Mirenda P. (2006). Sight word reading in children with developmental disabilities: A comparison of paired associate and picture-to-text matching instruction. Research in Developmental Disabilities, 27(4), 411–429. 10.1016/j.ridd.2005.05.006
15. Jagaroo V, & Wilkinson K. (2008). Further considerations of visual cognitive neuroscience in aided AAC: The potential role of motion perception systems in maximizing design display. Augmentative and Alternative Communication, 24(1), 29–42. 10.1080/07434610701390673
16. Light J, & McNaughton D. (2013). Literacy intervention for individuals with complex communication needs. In Beukelman DR & Mirenda P. (Eds.), Augmentative and alternative communication: Supporting children and adults with complex communication needs (pp. 309–351). Brookes Publishing Co.
17. Light J, McNaughton D, & Caron J. (2019). New and emerging AAC technology supports for children with complex communication needs and their communication partners: State of the science and future research directions. Augmentative and Alternative Communication, 35(1), 26–41. 10.1080/07434618.2018.1557251
18. Light J, McNaughton D, Jakobs T, & Hershberger D. (2014). Investigating AAC technologies to support the transition from graphic symbols to literacy. Rehabilitation Engineering Research Center on Augmentative and Alternative Communication. https://tinyurl.com/rerc-on-aac-T2L
19. Mandak K, Light J, & McNaughton D. (2019). Digital books with dynamic text and speech output: Effects on sight word reading of preschoolers with autism spectrum disorder. Journal of Autism and Developmental Disorders, 49(3), 1193–1204. 10.1007/s10803-018-3817-1
20. Miles KP, Rubin GB, & Gonzalez-Frey S. (2017). Rethinking sight words. The Reading Teacher, 71(6), 715–726. 10.1002/trtr.1658
21. National Reading Panel. (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction (NIH Pub. No. 004745). US Department of Health and Human Services. https://www.nichd.nih.gov/publications/product/247
22. Parker RI, Vannest KJ, Davis JL, & Sauber SB (2011). Combining nonoverlap and trend for single-case research: Tau-U. Behavior Therapy, 42, 284–299. 10.1016/j.beth.2010.08.006
23. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, Griffey R, & Hensley M. (2011). Outcomes for implementation research: Conceptual distinctions, measurement challenges, and research agenda. Administration and Policy in Mental Health and Mental Health Services Research, 38(2), 65–76. 10.1007/s10488-010-0319-7
24. Roberts TA, Christo C, & Shefelbine JA (2011). Word recognition. In Kamil ML, Pearson PD, Moje EB, & Afflerbach PP (Eds.), Handbook of reading research, volume IV (pp. 229–258). Routledge.
25. Saunders RJ, & Solman RT (1984). The effect of pictures on the acquisition of a small vocabulary of similar sight-words. British Journal of Educational Psychology, 54(3), 265–275. 10.1111/j.2044-8279.1984.tb02590.x
26. Schreibman L. (1975). Effects of within-stimulus and extra-stimulus prompting on discrimination learning in autistic children. Journal of Applied Behavior Analysis, 8(1), 91–112. 10.1901/jaba.1975.8-91
27. Sheehy K. (2002). The effective use of symbols in teaching word recognition to children with severe learning difficulties: A comparison of word alone, integrated picture cueing and the handle technique. International Journal of Disability, Development and Education, 49(1), 47–59. 10.1080/10349120120115325
28. Sidman M, & Cresson O. (1973). Reading and crossmodal transfer of stimulus equivalences in severe retardation. American Journal of Mental Deficiency, 77(5), 515–523. 10.1007/BF03395900
29. Solman RT, Singh NN, & Kehoe EJ (1992). Pictures block the learning of sight words. Educational Psychology, 12(2), 143–154. 10.1080/0144341920120205
30. Spector JE (2011). Sight word instruction for students with autism: An evaluation of the evidence base. Journal of Autism and Developmental Disorders, 41(10), 1411–1422. 10.1007/s10803-010
