Author manuscript; available in PMC: 2023 May 22.
Published in final edited form as: J Dev Phys Disabil. 2019;31(6):727–740. doi: 10.1007/s10882-019-09673-5

Effect of Video Embedded with Hotspots with Dynamic Text on Single-Word Reading by Children with Multiple Disabilities

Christine Holyfield 1, Jessica Caron 2, Janice Light 3, David McNaughton 4
PMCID: PMC10202467  NIHMSID: NIHMS1895194  PMID: 37220498

Abstract

Background:

The purpose of the study was to evaluate the effects of an intervention using an AAC app programmed with video visual scene displays (VSDs) embedded with hotspots featuring the Transition to Literacy (T2L) dynamic text feature on single-word reading.

Method:

Three school-aged children with multiple disabilities participated in a multiple baseline across participants design. Four names of characters in favorite movies and shows served as target words for each participant.

Results:

All three children demonstrated an increase in accurate identification of target words from baseline to intervention with Tau-U effect sizes for the participants of 0.69, 0.76, and 0.84, all of which were statistically significant (p<0.05).

Conclusions:

Clinicians can consider including the intervention evaluated in the current study as one component of literacy intervention for school-aged children with multiple disabilities. Future research should further evaluate video VSDs and the T2L feature for use with individuals with multiple disabilities.

Keywords: Augmentative and alternative communication, Multiple disabilities, Single-word reading, Mobile technology, Video visual scene displays


Individuals with multiple disabilities and limited speech and language face many challenges to achieving a high quality of life (Maes, Lambrechts, Hostyn, & Petry, 2007). Their communication is at risk of going unrecognized or being inaccurately interpreted by others (Grove, Bunning, Porter, & Olsson, 1999). Their opportunities for social interaction and leisure can be restricted in frequency and variety (Zijlstra & Vlaskamp, 2005). Their engagement in activities can be low (Axelsson, Granlund, & Wilder, 2013). For school-aged children with multiple disabilities, opportunities for participation in the curriculum and for learning academics such as literacy skills can also be restricted (Light & McNaughton, 2013).

Despite these challenges, research suggests augmentative and alternative communication (AAC) intervention is an effective approach for supporting school-aged children with multiple disabilities by increasing their expressive communication (e.g., Holyfield, Caron, Drager, & Light, 2018) and participation in social and educational contexts such as shared storybook reading (e.g., Browder, Mims, Spooner, Ahlgrim-Delzell, & Lee, 2008).

AAC technologies continue to develop, and many of these technological developments may benefit children with multiple disabilities. Yet two of the most promising recent developments in AAC technology have yet to be evaluated for use in direct intervention for school-aged children with multiple disabilities. Those two developments, both created under the Rehabilitation Engineering Research Center on Augmentative and Alternative Communication (RERC on AAC), are video visual scene displays (VSDs) (Light, McNaughton, & Jakobs, 2014) and the Transition to Literacy (T2L) feature (Light, McNaughton, Jakobs, & Hershberger, 2014).

Video VSDs are short videos from life events spliced with stills containing hotspots for communication. These stills are VSDs; that is, photos of life events used to depict AAC vocabulary within the context in which it naturally occurs. Video VSDs expand on VSD technology by adding the dynamic, meaningful, and motivating nature of video. A growing body of AAC intervention research indicates the value of video VSDs for communication, including sharing about past events (Caron, Holyfield, Light, & McNaughton, 2018) and interacting within vocational and community activities (O’Neill, Light, & McNaughton, 2017). While video VSDs have yet to be evaluated as a feature for use in direct intervention for children with multiple disabilities, one study has evaluated their utility as a resource for supporting peers in interpreting communication from school-aged children with multiple disabilities. Holyfield, Light, Drager, McNaughton, and Gormley (2018) documented prelinguistic communicative behaviors from three children with multiple disabilities in video VSDs, with hotspots indicating the meaning behind the behaviors. All peers who participated in the AAC communication partner training using the video VSDs increased the accuracy with which they interpreted behavior from all three children with multiple disabilities.

The T2L feature pairs dynamic text with voice output of AAC vocabulary. Upon selection of a word on the AAC device, the orthographic representation of the word emerges in correspondence with its auditory representation (i.e., the voice output of the word), remains static for a number of seconds, and then minimizes. The feature was designed to promote single-word recognition, which requires an association between a word’s meaning, its orthographic representation, and its auditory representation (Browder & Xin, 1998; Light et al., 2014). The dynamic depiction of the word was designed to capture the AAC user’s visual attention (Jagaroo & Wilkinson, 2008). Just as with the video VSD feature, the T2L feature has yet to be evaluated for use in direct AAC intervention with school-aged children with multiple disabilities. However, it has been evaluated and found overall to be effective for other populations, including school-aged children with autism spectrum disorder (Caron, Light, Holyfield, & McNaughton, 2018), preschoolers with language delays (Boyle, McCoy, McNaughton, & Light, 2017), and adults with intellectual and developmental disabilities (Holyfield, Light, McNaughton, Caron, Drager, & Pope, in review).
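
To make the T2L presentation sequence concrete, the sketch below models it in Python. It is a minimal illustration of the behavior described above, not the feature’s actual implementation (which is not published here); the function and parameter names, and the default hold duration, are assumptions.

```python
import time

def present_t2l_word(word, speak, show_text, hide_text, hold_seconds=3.0):
    """Sketch of the T2L sequence: on selection of a hotspot, the written
    word appears in synchrony with its spoken form, stays on screen for a
    number of seconds, then minimizes. `speak`, `show_text`, and
    `hide_text` stand in for the app's speech and display calls; the
    3-second hold is an assumed value ("a number of seconds" in the text).
    """
    show_text(word)           # orthographic representation emerges...
    speak(word)               # ...in correspondence with the voice output
    time.sleep(hold_seconds)  # remains static for a number of seconds
    hide_text(word)           # then minimizes

# Example with console stand-ins for the display and speech calls:
present_t2l_word(
    "Woody",
    speak=lambda w: print(f"(speech output) {w}"),
    show_text=lambda w: print(f"(text appears) {w}"),
    hide_text=lambda w: print("(text minimizes)"),
)
```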

The current study is the first to evaluate either feature for use in direct AAC intervention for students with multiple disabilities, and the first to evaluate the two features in combination. The combination is a promising one for children with multiple disabilities: the video VSDs are designed to be meaningful and motivating, and the T2L feature is designed to promote single-word reading, an important early literacy skill (Light & McNaughton, 2013). Both features are designed to be engaging, addressing a limitation for some children with multiple disabilities (Axelsson et al., 2013). The study evaluated these promising features in tandem by addressing the following question: What is the effect of an intervention providing exposure to target words through the T2L feature embedded within video VSDs of favorite movies/TV shows on the single-word reading of school-aged children with multiple disabilities who have limited speech and limited literacy skills? It was hypothesized that the intervention would increase single-word reading, given its strong theoretical underpinnings. In addition to the primary research question, the current study also explored the following question: Do any observed effects maintain once interaction with the technology has ended? It was hypothesized that any observed effects would maintain, given the numerous exposures to target words throughout interactions with the technology.

Method

Research Design

The current study used a single-subject, multiple baseline across participants design (Baer, Wolf, & Risley, 1968) with one leg of three participants. The design allowed for individual-level analysis of results and for experimental control with a small group of participants (Kratochwill et al., 2010). There were three phases of the study: baseline, intervention, and maintenance. The independent variable, present in the intervention phase, was participant exposure to an AAC app with a video VSD layout programmed with clips from each participant’s favorite movie or show, in which hotspots featuring the T2L presentation of favorite character names (i.e., the target words in the study) were regularly embedded. Exposure to the app occurred during interaction with the investigator (the first author) around the video clips. The dependent variable, measured consistently across all three phases, was participants’ accuracy in identifying four target words, each tested twice.

Participants

Approval for research ethics was obtained before any participants were recruited. Upon receiving approval, participants were recruited through information given to school speech-language pathologists and families. Children meeting the following criteria were eligible for participation in the current study: (a) were school-aged; (b) had an educational diagnosis of multiple disabilities, per parent and/or professional report; (c) did not have functional speech; (d) were early symbolic communicators, indicated by consistent use of one to 20 symbols expressively, per parent and/or professional report and observation; (e) attended to video, per observation; (f) recognized no more than 5 single words, per parent and/or professional report and screening; and (g) demonstrated functional hearing and vision, per professional report and observation.

Three school-aged children with multiple disabilities – Cassie, Keke, and Nick (all pseudonyms) – met criteria to participate. More detailed descriptions of the participants can be found in Table 1. The participants had a mean age of 8.7 years (range: 5–12). While their communication included a small number of symbols, the participants communicated largely through idiosyncratic means. One of the participants, Cassie, had previously acquired recognition of the words “yes” and “no” according to parent report, but had not used that skill for several years and did not show consistent recognition of the words during screening. The other two participants, Keke and Nick, had yet to acquire recognition of any single words at the start of the study.

Table 1.

Characteristics of the Three Participants and a List of Each Participant’s Target Words

| Participant^a | Age^b | Grade^b | Expressive modalities^c | Educational diagnosis^b | Primary medical diagnosis^b | Most engaging contexts^c | Sight word inventory^d | Target sight words^e |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Cassie | 12 | 7th | Yes/no hand signals; gaze to choices; facial expressions | MD | CP | Interactions with mother; watching TV | 2 | Odo, Quark, Dax, Kira |
| Keke | 5 | K | Gaze to printed photo choices; vocalizations; single switch selection | MD | CP | Peer interactions; watching movies | 0 | Woody, Buzz, Hamm, Rex |
| Nick | 9 | 2nd | Vocalizations; facial expressions; single switch selection | MD | CP | Watching TV/movies; viewing color photos | 0 | Batman, R’as, Joker, Robin |

Note. K = kindergarten. MD = multiple disabilities. CP = cerebral palsy.

^a Pseudonyms. ^b Based on parent and/or professional report. ^c Based on parent and/or professional report and observation. ^d Estimate based on parent and/or professional report and screening. ^e Chosen from favorite movie/TV show per parent and/or professional report.

Materials

Probe materials.

Four target words selected for each participant were tested in each probe. These words were selected by determining a favorite movie or TV show of each participant, per parent or professional report, and extracting four character names from that movie or show that were of high interest to the participant. For use in the probe, each word was printed in 72 point Arial font, cut out individually, and laminated. Target words for each participant are outlined in Table 1.

For Keke and Nick, the printed target words were the only materials used in the probes. For Cassie, an Eye-Com board1 was also used as she demonstrated difficulty using her hands to select between word choices. For Cassie, target words were backed with Velcro so they could be quickly affixed and removed from the Eye-Com board for her selection.

Intervention AAC app with video VSD and T2L.

EasyVSD2 was the AAC app used in intervention. The app was housed on a Pixel C tablet3 with a 10.2-inch capacitive touch screen. The app featured both a video VSD layout option and the option to integrate the T2L feature within that layout. See https://rerc-aac.psu.edu/development/d2-developing-aac-technology-to-support-interactive-video-visual-scene-displays/ for a demonstration of the video VSD layout feature. See https://rerc-aac.psu.edu/research/r2-investigating-aac-technologies-to-support-the-transition-from-graphic-symbols-to-literacy/ for a demonstration of the T2L feature.

Videos programmed into intervention app.

The AAC app used in intervention in the current study, described above, was programmed with video clips from the favorite show or movie identified for each participant before the start of the study. The clips were found online and captured into the app by playing them on a computer while the app video-recorded what was playing on the screen. One 5 to 7 min clip was recorded for each target character. Each clip was then programmed with hotspots, occurring within 10 to 50 s of each other, at times in the clip when the character was present on the screen and an event calling for conversation had occurred (e.g., the character did something heroic or silly). Each clip contained a total of 10 hotspots. Each hotspot was programmed with the character’s name, and the T2L feature was enabled.
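
The programming scheme described above maps naturally onto a simple data structure: one clip per character, ten hotspots per clip, spaced 10 to 50 s apart, each carrying the character’s name with T2L enabled. The sketch below is illustrative only; the class and field names are assumptions, not EasyVSD’s actual file format.

```python
from dataclasses import dataclass, field

@dataclass
class Hotspot:
    time_s: float             # when the video pauses for this hotspot
    label: str                # the character's name (the target word)
    t2l_enabled: bool = True  # pair dynamic text with the voice output

@dataclass
class VideoVSDClip:
    clip_file: str            # one 5-7 min clip per target character
    hotspots: list[Hotspot] = field(default_factory=list)

    def add_hotspot(self, time_s: float, label: str) -> None:
        self.hotspots.append(Hotspot(time_s, label))

# Hypothetical example for one clip: 10 hotspots, each 10-50 s after
# the previous one, all naming the same target character.
clip = VideoVSDClip("favorite_movie_clip.mp4")
for t in (25, 60, 95, 140, 180, 215, 260, 300, 340, 380):
    clip.add_hotspot(t, "Woody")
assert len(clip.hotspots) == 10
```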

Procedures

Over the course of a 10-week period, participants engaged in two sessions a week, except when sessions were cancelled due to participant illness. The first author administered all probes and served as the interventionist for all sessions. Sessions lasted approximately 20 min. Cassie’s sessions occurred inside her home after school. Keke’s and Nick’s sessions occurred at school in their classroom.

Baseline.

During the baseline phase, participants’ performance identifying their four target words was probed. Each word was tested twice for a total of 8 trials. For each trial, the following procedures occurred: (a) the participant was presented with two printed words from the pool of four target words; (b) the participant was directed, “Look at each word. Show me [target word].”; and (c) the participant was given no corrective or affirmative feedback based on their choice, but did receive occasional praise for participation (e.g., “Thanks for working so hard!”). Words were trialed in random order, with foils selected at random and word placement randomized.
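
A sketch of how one such eight-trial probe could be assembled is shown below: each of the four target words appears twice, trial order is randomized, and for each trial a foil is drawn from the remaining target words with placement randomized. Any randomization detail beyond what is reported above (e.g., a two-position left/right layout) is an assumption.

```python
import random

def build_probe(target_words):
    """Build one 8-trial probe: each of the four target words tested twice,
    in random order, with a randomly selected foil and random placement.
    Returns (target, left_word, right_word) tuples."""
    trials = []
    for word in target_words * 2:  # each word tested twice
        foil = random.choice([w for w in target_words if w != word])
        left, right = random.sample([word, foil], 2)  # random placement
        trials.append((word, left, right))
    random.shuffle(trials)  # random trial order
    return trials

probe = build_probe(["Woody", "Buzz", "Hamm", "Rex"])
assert len(probe) == 8
```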

App exposure intervention.

In the intervention phase, probes were completed at the beginning of each session with procedures identical to those in baseline. Following each probe, sessions then included intervention interactions using the AAC app. Each intervention interaction centered on a character video clip. At the start of the interaction, the investigator positioned herself beside the participant and positioned the tablet housing the app at the participant’s midline. Then, the investigator played the video. The video paused 10 times, once for each of the 10 programmed hotspots. For each hotspot, the investigator: (a) waited 5 s for a hotspot selection or attempted hotspot selection from the participant (the physical limitations of participants made hotspot selection occasionally unsuccessful); (b) if the participant’s selection was absent or unsuccessful, selected the hotspot or supported the participant in completing the attempted selection (e.g., if the participant used his or her hand to touch a portion of the screen outside the hotspot and rested his or her hand on that spot, the investigator would move and return the tablet to provide the participant with another opportunity to touch the screen within the hotspot); (c) expanded on the hotspot selection; (d) waited 5 s for a hotspot selection or attempted hotspot selection; (e) if the participant’s selection was absent or unsuccessful, completed the hotspot selection or supported the participant in completing the attempted selection; (f) expanded on the hotspot selection; (g) waited 5 s for a hotspot selection or attempted hotspot selection; (h) if the participant’s selection was absent or unsuccessful, completed the hotspot selection; (i) expanded on the hotspot selection; and (j) played the video. No instruction was provided.

The first target word, selected at random, was the focus of intervention interactions until the participant correctly selected the word across both trials in a probe. Then, the next target word was introduced. This continued until each of the four target words had been the focus of intervention.

Maintenance.

Procedures for the maintenance phase were identical to those of the baseline phase. All maintenance probes occurred between two and four weeks after the participant’s final intervention session.

Phase shifts.

The baseline phase began within the same week for each participant. Participants shifted from the baseline phase after: (a) they had completed at least five baseline probes, the last two of which represented a stable or downward slope; and (b) if applicable, the participant before them in the leg demonstrated an intervention effect, defined as a performance higher than that participant’s highest baseline probe performance. Participants met criteria for ending the intervention phase after: (a) they completed at least five intervention probes, and (b) they completed a probe with 100% accuracy or two consecutive probes with 87.5% accuracy. Decisions for phase shifts were influenced by the high variability present in the day-to-day performance of some individuals who experience multiple disabilities.
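
These phase-shift rules can be expressed as simple checks on the running list of probe scores. The sketch below encodes them under the assumption that a “stable or downward slope” over the last two probes means no increase; it omits the cross-participant condition (b) for leaving baseline.

```python
def ready_to_leave_baseline(baseline_scores):
    """At least five baseline probes, with the last two showing a stable
    or downward slope (assumed here to mean no increase)."""
    return len(baseline_scores) >= 5 and baseline_scores[-1] <= baseline_scores[-2]

def met_intervention_criterion(intervention_pcts):
    """At least five intervention probes, plus one probe at 100% accuracy
    or two consecutive probes at 87.5% (7 of 8 trials correct)."""
    if len(intervention_pcts) < 5:
        return False
    return intervention_pcts[-1] == 100.0 or intervention_pcts[-2:] == [87.5, 87.5]
```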

Procedural fidelity.

A graduate student in speech-language pathology was trained to complete procedural fidelity for the probe and intervention procedures. The graduate student analyzed procedural fidelity for a randomly selected 24% of probes and intervention interactions across each participant. The student was trained by the first author and participated in a calibration exercise using a randomly selected probe and intervention video. For both the probes and the intervention interactions, the student gave the investigator credit or no credit for each step in the sequence outlined in the procedures. Mean procedural fidelity for the probes was 99% (range: 97%–100%). Mean procedural fidelity for the intervention sequence was 98% (range: 97%–99%).

Data Measure and Analysis

Measure.

The dependent variable measured in the current study was the number of trials on which the participant correctly identified the target word from a field of two, with each of the four target words tested twice for a total possible score of 8. For Keke and Nick, a response was coded as correct if the first word touched by the participant was the target word, and incorrect if the first word touched was the foil word or if no word was touched. For Cassie, a response was coded as correct if she looked at the investigator, looked to the target word, and then looked back to the investigator; that is, if she used triadic gaze to choose the correct word. A response from Cassie was coded as incorrect if she did not complete all three steps of the triadic gaze response (e.g., if she did not look at the investigator before or after selecting a word), if she did not select a word, or if she selected the foil word.
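
Scoring under this measure reduces to counting correct trials and converting to a percentage, as sketched below (the function name is illustrative). Note that the 87.5% phase-shift criterion corresponds to 7 of the 8 trials correct.

```python
def score_probe(trial_correct):
    """Score one probe of eight trials (four target words, each tested
    twice). Returns the raw count correct and percent accuracy,
    e.g., 7 of 8 correct -> (7, 87.5)."""
    assert len(trial_correct) == 8
    n_correct = sum(bool(t) for t in trial_correct)
    return n_correct, 100.0 * n_correct / 8

print(score_probe([True, True, True, False, True, True, True, True]))  # (7, 87.5)
```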

Data collection.

The investigator collected data live during all probes by marking trial responses as correct or incorrect on a data collection form in real time. Sessions were also videotaped. This allowed the reliability of the data collected live to be tested.

Data analysis.

Data were graphed according to single-subject standards and analyzed visually using those same standards (Kratochwill et al., 2010). Visual analysis considered changes in level, trend, variability, and slope upon phase changes. The visual analysis also evaluated the immediacy of any observed effects in the intervention phase. In addition to visual analysis, Tau-U was calculated to evaluate the size of any observed effects and whether any effects were statistically significant (Parker, Vannest, Davis, & Sauber, 2011).
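
For readers unfamiliar with Tau-U, the sketch below shows the core computation following Parker et al. (2011): all baseline-intervention pairs are compared (nonoverlap), and pairwise baseline trend is subtracted from the numerator. The authors’ exact software and the significance test (based on the sampling distribution of S) are not described in this paper, so this is a minimal illustration rather than a reproduction of their analysis.

```python
from itertools import combinations, product

def _sign(x):
    return (x > 0) - (x < 0)

def tau_u(baseline, intervention):
    """Tau-U effect size: nonoverlap across all baseline-intervention
    pairs, corrected for baseline trend (Parker, Vannest, Davis, &
    Sauber, 2011). Significance testing is omitted from this sketch."""
    n_a, n_b = len(baseline), len(intervention)
    # Nonoverlap: +1 for each improving pair, -1 for each deteriorating pair
    s_ab = sum(_sign(b - a) for a, b in product(baseline, intervention))
    # Baseline trend: the same pairwise comparison within the baseline phase
    s_aa = sum(_sign(baseline[j] - baseline[i])
               for i, j in combinations(range(n_a), 2))
    return (s_ab - s_aa) / (n_a * n_b)

# Hypothetical probe scores (out of 8), not the study's data:
print(round(tau_u([3, 4, 2, 3, 3], [4, 5, 6, 7, 8, 8]), 2))
```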

Reliability.

The same graduate student who completed procedural fidelity was also trained to determine the reliability of the data. Subsequent to training on a randomly selected video with the investigator to calibrate, the student independently coded the data for a randomly selected 24% of session videos. Just as the investigator did live during sessions, the student coded the response to each trial as correct or incorrect. After coding these sessions, point-by-point reliability was calculated by comparing each trial’s coding to the investigator’s coding of that trial. The total number of trials in agreement was divided by the total number of trials and multiplied by 100 to yield a percentage. Reliability of the data was calculated to be 91% (range: 75%–100%). The 75% reliability resulted from occasional visual obstructions in videos that rendered it difficult to determine Cassie’s eye gaze movements.
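
The point-by-point calculation described above is simple arithmetic; a minimal sketch (the function name and example codes are illustrative, not the study’s data):

```python
def point_by_point_agreement(live_codes, video_codes):
    """Percent agreement: trials coded identically by both coders,
    divided by total trials, multiplied by 100."""
    assert len(live_codes) == len(video_codes)
    agreements = sum(a == b for a, b in zip(live_codes, video_codes))
    return 100.0 * agreements / len(live_codes)

# Hypothetical example: agreement on 6 of 8 trials -> 75.0
print(point_by_point_agreement([1, 1, 0, 1, 0, 1, 1, 1],
                               [1, 1, 0, 1, 1, 0, 1, 1]))
```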

Results

Figure 1 graphs each participant’s performance on the dependent variable. Performance is organized per participant and across each phase: baseline, intervention, and maintenance. Visual analysis of the graphs indicates an increase in level and trend from baseline to intervention for all three participants. This increase in level persists into the maintenance phase for the two participants for whom maintenance data were collected, Cassie and Keke. The change from baseline to intervention is most immediate for Nick and least immediate for Cassie, although multiple intervention sessions occur for all three participants before a marked change is observed. Variability is evident across both the baseline and intervention phases for all three participants.

Figure 1. Participants’ identification of four target words, each trialed twice, across time

In the baseline phase, Cassie, Keke, and Nick identified target words with a mean accuracy of 35.4%, 45%, and 35%, respectively. In the intervention phase, Cassie, Keke, and Nick identified target words with a mean accuracy of 57.1%, 75%, and 65%, respectively. Two and three weeks after intervention, Cassie continued to demonstrate 100% accuracy in identifying target words. One and a half weeks after intervention, Keke demonstrated 100% accuracy in identifying target words. Unlike Cassie and Keke, Nick did not reach criterion for completing intervention, likely in part due to an illness that kept him out of school. Therefore, no data on the maintenance of his performance were gathered.

According to Tau-U calculations of the gathered data, performance in the intervention phase was significantly higher than performance in the baseline phase for all three participants. The effects were medium to large in size, with Tau-U values of 0.69 for Cassie (p = 0.038), 0.76 for Keke (p = 0.047), and 0.84 for Nick (p = 0.028).

Discussion

All three children with multiple disabilities demonstrated significant increases in single-word reading when using video VSD technology embedded with hotspots containing the T2L feature. While the scope of the current study is limited, this finding is notable because, at the start of the study, two of the participants did not recognize any words and one participant recognized only two words, and with low consistency. In addition, the study also provided evidence that increases in single-word reading can maintain over time, at least in the short term.

This finding is consistent with other research that has evaluated the T2L feature when embedded within static VSDs (Boyle et al., 2017; Holyfield et al., in review) and grid displays (Caron, Light et al., 2018) and found it to be effective. However, the effects in the current study were more limited in scope, as this study targeted just four words per participant compared to the 10 words per participant targeted by both Boyle and colleagues and Holyfield and colleagues and the 12 words per participant targeted by Caron and colleagues. The finding is also consistent with interventions in which video VSDs were an effective intervention tool (Caron, Holyfield et al., 2018; Holyfield et al., 2018; O’Neill et al., 2017).

The intervention was effective for the school-aged children with multiple disabilities likely for a number of reasons. First, it centered on content that is highly motivating – favorite movies/shows. This may have supported the intervention in capturing and holding engagement from the children with multiple disabilities, which may be low in many contexts (Axelsson et al., 2013). The intervention allowed a favorite leisure activity for the children to be infused with opportunities for interaction and learning. The motivating nature of the intervention was likely critical given the importance of motivation in literacy learning (Light & McNaughton, 2013).

Second, using the video VSD technology as a modality to convey the content allowed the intervention to maintain its full meaning and interest (Light, McNaughton, & Jakobs, 2014). Also, the opportunities for communication embedded within the video VSD technology allowed the children to actively participate in the intervention which may have supported literacy learning (Light & McNaughton, 2013).

Third, the T2L feature paired the auditory and orthographic representations of the target word, a foundational association to single-word reading (Browder & Xin, 1998; Light, McNaughton, Jakobs, & Hershberger, 2014). The T2L feature also harnessed the power of motion to capture visual attention from users (Jagaroo & Wilkinson, 2008). The visually engaging nature of the T2L feature may have been particularly important to the participants in the current study as children with multiple disabilities may demonstrate low engagement (Axelsson et al., 2013).

Clinical Implications

Literacy learning opportunities for many children who have limited speech, including those with multiple disabilities, are limited (Light & McNaughton, 2013). There are many barriers to providing these opportunities, which may require adaptations and the use of assistive technology. One factor that can be a major barrier to using assistive technology with children with multiple disabilities is time (Copley & Ziviani, 2004). However, students may already have leisure time (e.g., watching favorite videos) built into their daily schedule. Using that existing time and context while adding the T2L feature to video VSD technology may be a way to increase opportunities for literacy learning without facing major time barriers.

More robust research is needed before strong clinical recommendations can be drawn. However, the current study provided initial evidence that using video VSD technology to embed communication opportunities into video clips of favorite movies and TV shows with the T2L feature enabled can support single-word reading in school-aged children with multiple disabilities. Therefore, clinicians can consider including this approach as one component of a more comprehensive literacy intervention targeting single-word reading along with other early literacy skills (e.g., letter-sound correspondence, decoding).

TV and movie clips were the content used in the current study as they are highly engaging and motivating. However, every person has different interests and is motivated by different contexts. Therefore, in addition to considering favorite TV shows and movies, clinicians could consider other contexts that could be captured on video that may be interesting and motivating. For instance, children may be interested in video of themselves and peers engaged in a favorite play activity.

Limitations and Future Research Directions

The limitations to the interpretation of results in the current study are numerous. First, only three school-aged children with multiple disabilities participated. Future research could increase external validity by including a larger number of participants. Second, only a small number of words were targeted for recognition. Future research could test the upper limits of the intervention’s capacity for increasing single-word reading by evaluating its effects when targeting a larger corpus of words. Third, only limited data relative to the maintenance of observed effects were gathered. Fourth, no data were gathered on the extent to which participants generalized their recognition of the words beyond the single context in which their performance was evaluated. Generalization and maintenance are critical factors when considering the efficacy of AAC research (Schlosser & Lee, 2000). Therefore, future research should explore: (a) the extent to which individuals who acquire words via the T2L feature continue to recognize those words long after they have last been exposed to the words via the feature, and (b) the extent to which recognition of acquired words generalizes across their appearance in real-world settings (e.g., in books, on the computer). Fifth, due to the limited scope of the current study, only a small aspect of literacy intervention was evaluated. Future research could evaluate the intervention from the current study as one component of a complete literacy intervention targeting single-word reading alongside other early literacy skills. Finally, shows and movies were the only video content used in the current study. While these were chosen for their motivational value, other video content could be embedded with hotspots with the T2L feature. Future research could explore the efficacy of this intervention using a wider variety of video content, including video representing content from the academic curriculum (e.g., video of different types of weather events when that is the focus in science class).

Conclusion

Video is a highly meaningful and interesting content modality; it is also a popular leisure context that can be infused with opportunities for communication through the creation of video visual scene displays (Light, McNaughton, & Jakobs, 2014). Not only can motivating videos serve as a context for communication but, with the addition of the T2L feature, this study offers initial evidence that they can also be a context for literacy learning. Through interactions using the technology alone, in the absence of any instruction, three school-aged children with multiple disabilities who require AAC, have limited communication, and have limited literacy skills increased their recognition of single words. More research is needed to further evaluate this approach to increasing single-word reading and the scope and longevity of its efficacy. However, the current study provides promise that children with multiple disabilities might benefit from opportunities to engage in leisure, social interaction, and literacy learning using video VSD and T2L AAC technology.

Acknowledgments

This work was supported by the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR) under grant number #90RE5017 to the Rehabilitation Engineering Research Center on Augmentative and Alternative Communication (RERC on AAC). The video VSD app, the AAC technology utilized in the current study, was developed by InvoTek, Inc. under the RERC on AAC.

Footnotes

1. The Eye-Com board is a low-tech AAC product available from Augmentative Communication Consultants Inc. http://www.acciinc.com/

2. EasyVSD was developed by InvoTek, Inc. (https://www.invotek.org/) under the RERC on AAC.

3. The Pixel C is a tablet developed by Google. https://store.google.com/

Contributor Information

Christine Holyfield, University of Arkansas.

Jessica Caron, Pennsylvania State University.

Janice Light, Pennsylvania State University.

David McNaughton, Pennsylvania State University.

References

1. Axelsson AK, Granlund M, & Wilder J (2013). Engagement in family activities: A quantitative, comparative study of children with profound intellectual and multiple disabilities and children with typical development. Child: Care, Health and Development, 39, 523–534. DOI: 10.1111/cch.12044
2. Baer DM, Wolf MM, & Risley TR (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 1, 91–97. DOI: 10.1901/jaba.1968.1-91
3. Boyle S, McCoy A, McNaughton D, & Light J (2017). Using digital texts in interactive reading activities for children with language delays and disorders: A review of the research literature and pilot study. Seminars in Speech and Language, 38, 263–275. DOI: 10.1055/s-0037-1604274
4. Browder DM, Mims PJ, Spooner F, Ahlgrim-Delzell L, & Lee A (2008). Teaching elementary students with multiple disabilities to participate in shared stories. Research and Practice for Persons with Severe Disabilities, 33, 3–12. DOI: 10.2511/rpsd.33.1-2.3
5. Browder DM, & Xin YP (1998). A meta-analysis and review of sight word research and its implications for teaching functional reading to individuals with moderate and severe disabilities. The Journal of Special Education, 32, 130–153.
6. Caron J, Holyfield C, Light J, & McNaughton D (2018). “What have you been doing?”: Supporting displaced talk through augmentative and alternative communication video visual scene display technology. Perspectives of the ASHA Special Interest Groups, 3, 123–135. DOI: 10.1044/persp3.SIG12.123
7. Caron J, Light J, Holyfield C, & McNaughton D (2018). Effects of dynamic text in an AAC app on sight word reading for individuals with autism spectrum disorder. Augmentative and Alternative Communication, 34, 143–154. DOI: 10.1080/07434618.2018.1457715
8. Grove N, Bunning K, Porter J, & Olsson C (1999). See what I mean: Interpreting the meaning of communication by people with severe and profound intellectual disabilities. Journal of Applied Research in Intellectual Disabilities, 12, 190–203. DOI: 10.1111/j.1468-3148.1999.tb00076.x
9. Holyfield C, Caron JG, Drager K, & Light J (2018). Effect of mobile technology featuring visual scene displays and just-in-time programming on communication turns by preadolescent and adolescent beginning communicators. International Journal of Speech-Language Pathology. Advance online publication. DOI: 10.1080/17549507.2018.1441440
10. Holyfield C, Light J, Drager K, McNaughton D, & Gormley J (2018). Effect of an AAC peer training on interpreting idiosyncratic behaviors from students with multiple disabilities. Augmentative and Alternative Communication. Advance online publication. DOI: 10.1080/07434618.2018.1508306
11. Holyfield C, Light J, McNaughton D, Caron J, Drager K, & Pope L (in review). Effect of AAC technology with dynamic text on the single-word reading of adults with intellectual and developmental disabilities with limited speech.
12. Jagaroo V, & Wilkinson K (2008). Further considerations of visual cognitive neuroscience in aided AAC: The potential role of motion perception systems in maximizing design display. Augmentative and Alternative Communication, 24, 29–42. DOI: 10.1080/07434610701390673
13. Kratochwill T, Hitchcock J, Horner R, Levin J, Odom S, Rindskopf D, & Shadish W (2010). Single-case designs technical documentation. What Works Clearinghouse.
14. Light J, & McNaughton D (2013). Literacy intervention for individuals with complex communication needs. In Beukelman D & Mirenda P (Eds.), Augmentative and alternative communication: Supporting children and adults with complex communication needs (pp. 309–351). Baltimore, MD: Brookes.
15. Light J, McNaughton D, & Jakobs T (2014). Developing AAC technology to support interactive video visual scene displays. RERC on AAC: Rehabilitation Engineering Research Center on Augmentative and Alternative Communication. Retrieved from https://rerc-aac.psu.edu/development/d2-developing-aac-technology-to-support-interactive-video-visual-scene-displays/
16. Light J, McNaughton D, Jakobs T, & Hershberger D (2014). Investigating AAC technologies to support the transition from graphic symbols to literacy. RERC on AAC: Rehabilitation Engineering Research Center on Augmentative and Alternative Communication. Retrieved from https://rerc-aac.psu.edu/research/r2-investigating-aac-technologies-to-support-the-transition-from-graphic-symbols-to-literacy/
17. Maes B, Lambrechts G, Hostyn I, & Petry K (2007). Quality-enhancing interventions for people with profound intellectual and multiple disabilities: A review of the empirical research literature. Journal of Intellectual and Developmental Disability, 32, 163–178. DOI: 10.1080/13668250701549427
18. O’Neill T, Light J, & McNaughton D (2017). Videos with integrated AAC visual scene displays to enhance participation in community and vocational activities: Pilot case study with an adolescent with autism spectrum disorder. Perspectives of the ASHA Special Interest Groups, 2, 55–69.
19. Parker RI, Vannest KJ, Davis JL, & Sauber SB (2011). Combining nonoverlap and trend for single-case research: Tau-U. Behavior Therapy, 42, 284–299. DOI: 10.1016/j.beth.2010.08.006
20. Zijlstra HP, & Vlaskamp C (2005). Leisure provision for persons with profound intellectual and multiple disabilities: Quality time or killing time? Journal of Intellectual Disability Research, 49, 434–448. DOI: 10.1111/j.1365-2788.2005.00689.
