Abstract
Students with intellectual and other developmental disabilities often require substantial support to acquire the skills needed to secure work experience and paid employment. Prior findings suggest that video prompting is likely to be an effective and feasible strategy for establishing such skills. To evaluate this possibility in a special education transition program, we examined the effectiveness of a video prompting procedure in teaching 8 young adults with developmental disabilities to perform job‐related tasks (doing laundry, checking in to work, vacuuming, stripping a bed). The intervention was effective with all participants. The skills were maintained for up to 3 months, and the participants performed the tasks accurately in a new setting with different materials. Participants reported that they were satisfied with the intervention and found it easy to use.
Keywords: developmental disabilities, special education, video prompting, vocational skills, young adults
Federal law, as enacted in the Individuals with Disabilities Education Improvement Act (IDEA, 2004) and the No Child Left Behind Act (NCLB, 2001), requires that students in special education receive systematic instruction designed to teach the skills necessary for future employment. The influential philosophical principle of normalization dictates that people with intellectual and other developmental disabilities (IDD) should benefit from being employed (Culham & Nind, 2003), and research findings indicate that they do benefit. For example, Almalky (2020) recently reviewed 27 studies examining the outcomes of employing people with IDD and concluded that employment increases their self‐respect, self‐assurance, and autonomy. Similarly, Taylor et al. (2022) reviewed 17 studies and found strong evidence of the economic benefits of employment, moderate evidence of psychological health benefits, and limited evidence of physical health benefits. Research also indicates that employment facilitates community involvement and social integration (Kiernan et al., 2011).
Despite the benefits of employment, young adults with IDD are often unemployed, underemployed, or underpaid (U.S. Bureau of Labor Statistics, 2021). One factor that contributes to this distressing reality is a failure to provide members of this population with instruction that establishes prerequisite skills for employment (Lindstrom et al., 2013). However, the knowledge needed to provide such instruction is not lacking because there is a substantial body of research demonstrating effective strategies for establishing such skills (e.g., Boles et al., 2016, 2019; Cannella‐Malone & Schaefer, 2017). Boles et al. (2016) analyzed the quality of 39 single‐case studies and 83 individual experiments focused on teaching employment skills to individuals with IDD. The researchers evaluated four different types of interventions for teaching employment skills: video modeling, prompting, visuals, and audio cuing/coaching. Further, they used criteria established by Horner et al. (2005) and Kratochwill et al. (2013) to determine whether these interventions were evidence‐based. Their results suggested that video modeling was the only evidence‐based intervention. Subsequently, Boles et al. (2019) calculated effect sizes (Tau‐U) for the four kinds of interventions examined by Boles et al. (2016). Video modeling was the most common intervention by a substantial margin, and it yielded a moderate effect size (Tau‐U of .83).
In video modeling, the learner views a video of the target behavior from start to finish and has an opportunity to engage in the skill after watching the entire video. Video prompting is a version of video modeling in which each step of a task is portrayed in a separate, short clip that learners view prior to performing each respective step. Video prompting has helped people with IDD acquire socially significant behaviors (for reviews see Banda et al., 2011; Cooper et al., 2020, pp. 534‐536; Park et al., 2019), such as many daily living skills (e.g., Cannella‐Malone et al., 2011; Gardner & Wolfe, 2013; Kellems et al., 2018; Kim & Kang, 2020; Thomas et al., 2020). Additionally, two recent studies show that video prompting is an efficient strategy for helping young people with IDD acquire work‐related skills (Cullen et al., 2017; Heider et al., 2019).
Cullen et al. (2017) used video prompting to teach vocational skills to three young men with IDD in an integrated employment setting and in the local community. Participants were taught to operate an iPad and moved sequentially through video clips as they emitted the responses modeled in those clips. If an error occurred, participants rewatched the video and the researcher used least‐to‐most prompting as needed to complete the desired response. All participants learned to perform three vocational tasks, and performance generalized to new materials, settings, or people without error correction (but with access to videos). Short‐term maintenance was also demonstrated for one participant. Heider et al. (2019) also used self‐directed video prompting to teach two young adults with moderate to intensive IDD to perform three work tasks. When an error occurred, the researchers provided specific vocal‐verbal feedback about the error and prompted the participant to rewatch the video of the step they missed. If a correct response did not ensue following error correction, the researcher modeled the step for the participant. After the intervention ended, maintenance data were collected in the absence of error correction every 2 weeks for 6 weeks; videos were available, but participants were not directed to use them. In addition, a maintenance probe was conducted in which videos were unavailable. Each participant's performance substantially declined during the maintenance probes without video access. Generalization was not assessed in this study. Generalization and maintenance of acquired vocational skills are very important but often ignored (Cannella‐Malone & Schaefer, 2017; Park et al., 2019). For example, in a review of diverse methods used to teach vocational skills to people with IDD, Cannella‐Malone and Schaefer (2017) reported that only 20% (n = 15) of the studies they reviewed assessed generalization and only 35% (n = 26) assessed maintenance.
The positive results reported by Cullen et al. (2017) and Heider et al. (2019) suggest that video prompting may help people with IDD acquire work‐related skills. To extend this line of research and to maximize the practical benefits of video prompting procedures, the present study examined a variation of previously published procedures that required relatively minimal staff–learner interaction. That is, our error correction procedure (i.e., delivering a prompt to rewatch the video following an error) did not require physical guidance or staff modeling and, therefore, was relatively unobtrusive. This was important to stakeholders of this study because the COVID‐19 pandemic had closed the employment sites outside of the school where our participants had typically been taught by job coaches (and, in some cases, eventually employed). COVID‐19 created a need for an in‐school instructional program that could be used with several students without major alteration or substantial staff effort, while also adhering to Centers for Disease Control and Prevention (n.d.) recommendations for reducing the spread of infection (e.g., social distancing). In collaboration with school staff, we decided that a video prompting procedure could form the basis of such a program.
Method
Participants and Setting
Participants were eight young adults receiving special education services in a small Midwestern city. None of the participants had prior work experience, but each of them expressed interest in employment and, prior to COVID‐19, was being matched with potential employers by school staff. School staff reported that all participants were vocal, had expressive and receptive language skills, and had experience operating iPads. Teachers also indicated that the students could visually attend to videos for an extended time, had imitative repertoires, and did not engage in challenging behavior that interfered with learning.
All participants attended a school program that served students 18 through 26 years of age and targeted independent living skills, lived with their parents, and were not eligible for free or reduced lunch at school. John attended the school program 4 days a week, whereas the other participants attended 5 days a week. Table 1 summarizes demographic information.
Table 1.
Participants' Demographics
| Name | Age in years | Gender | Race/Ethnicity | Primary Diagnosis |
|---|---|---|---|---|
| Stacy | 24 | Female | Black | Moderate cognitive impairment |
| Kevin | 21 | Male | White | Moderate cognitive impairment |
| Jimmy | 19 | Male | Hispanic | Mild cognitive impairment |
| Mark | 20 | Male | White | Mild cognitive impairment |
| Sam | 19 | Male | White | Autism spectrum disorder |
| Tim | 20 | Male | White | Autism spectrum disorder |
| Bailey | 18 | Female | Black | Mild cognitive impairment |
| John | 20 | Male | White | Mild cognitive impairment |
Baseline, intervention, and maintenance sessions occurred outside the participants' regular classroom in the school's multipurpose area. Dividers were used to create four separate work zones, which contained the items needed to complete job‐site specific tasks. These zones ranged from 1.5 x 2 m to 3.5 x 3.5 m in size. Generalization sessions occurred in a classroom that was novel to the students. During these sessions, the work sites were set up around the room. Dividers were not used to separate the work zones.
Work Tasks and Materials
Work tasks were chosen based on the recommendation of school staff and included hanging and sorting laundry, checking in for work, stripping a bed, and vacuuming. Although school staff reported that each task was of similar difficulty for students, the tasks varied in the number of steps required to complete the job (see Table 2).
Table 2.
Task Analyses for Laundry, Work Check‐in, Stripping Bed, and Vacuuming
| Laundry | Work Check‐in | Stripping Bed | Vacuum |
|---|---|---|---|
| 1. Grab one shirt out of the laundry basket | 1. Remove the pen from wall hook | 1. Remove the pillowcase from one pillow | 1. Walk to the cleaning supply area and select the vacuum |
| 2. Grab one hanger from the clothing rack | 2. Remove the clipboard from wall hook | 2. Put the pillowcase in the hamper | 2. Pull, push, or carry the vacuum to the area to be cleaned |
| 3. Put the hanger through the shirt's neck hole and position shoulder seams | 3. Write first and last name in the first available “name” column box | 3. Stack the uncovered pillow on the chair | 3. Unwind the electrical cord completely |
| 4. Compare the shirt's name tag to the names on the clothing rack | 4. Check digital clock for time | 4. Remove the pillowcase from the other pillow | 4. Plug the vacuum into the closest available wall outlet |
| 5. Hang the shirt on the clothing rack so that the names match | 5. Record time as shown on the clock on the sign‐in sheet, in the corresponding “time in” column | 5. Put the pillowcase in the hamper | 5. Adjust the height of the vacuum cleaner relevant to the carpet height |
| 6‐40. Repeat steps 1‐5 | 6. Hang the clipboard on wall hook | 6. Stack the uncovered pillow on the chair | 6. Grab extra cord in one hand |
| | 7. Hang the pen on wall hook | 7. Unbutton the duvet cover | 7. Turn the vacuum cleaner on by pressing the foot pedal |
| | 8. Dispense one pump of hand sanitizer into the palm of hand | 8. Separate the duvet from the comforter | 8. Release the vacuum cleaner by pressing on the “handle release” pedal with foot |
| | 9. Rub hand sanitizer over the entire surface of the hands until the sanitizer is dissolved/dry | 9. Put the duvet cover in the hamper | 9. Move the vacuum in a back‐and‐forth motion, moving from left to right across the area |
| | 10. Locate apron with the participant's name on it | 10. Place the comforter on the chair with the pillows (off the floor) | 10. Vacuum until the floor is clean (talcum powder or crumbs removed) |
| | 11. Remove the assigned apron from the hook | 11. Remove the flat sheet from the bed | 11. Push the vacuum handle back into an upright position (until pedal catches) |
| | 12. Place the neck strap of the apron over the head, with the bib in the front | 12. Place the flat sheet in the hamper | 12. Turn off the vacuum |
| | 13. Cross straps behind the back and pull them forward, one strap in each hand | 13. Remove the fitted sheet from the bed | 13. Unplug the cord from wall outlet |
| | 14. Tie the two straps in a knot/bow | 14. Place the fitted sheet in the hamper | 14. Wind the cord back onto the vacuum and secure the end clip |
| | 15. Pull a single glove from the box | | 15. Return the vacuum to the cleaning supply area |
| | 16. Put and keep the glove on hand | | |
| | 17. Pull a single glove from the box | | |
| | 18. Put and keep the glove on hand | | |
The bed stripping task required a queen size inflatable mattress, queen size bed frame, chair, hamper, fitted sheet, loose sheet, comforter, and duvet cover, as well as two cubical dividers and two pillows with pillowcases. The checking into work task required two room dividers, a hook with a clipboard and sign‐in sheets, a pen, multiple aprons with participants' name tags attached, hand sanitizer, and disposable gloves. The laundry task required a clothing rack, hamper, hangers, 16 polo shirts (eight of the same color, eight of various colors), name tags attached to the inside of the shirts, and a table. For the vacuuming task, a vacuum was provided, and a 2.6 m x 1.3 m portion of the floor was taped off and dusted with talcum powder to simulate a dirty floor. Pre‐made data sheets were used to record participant performance. The study was conducted during the COVID‐19 pandemic. As was recommended by the Centers for Disease Control and Prevention (n.d.), the interventionist and the participants wore masks during all sessions and, whenever possible, stayed at least 2 m apart to reduce the risk of infections.
For all activities except vacuuming, the materials used during treatment extension sessions, which were intended to assess generalization, differed from those used during teaching. For the bed stripping task, a different queen size mattress and bed frame were used. The sheets, pillowcases, comforter, duvet cover, and hamper also differed from the teaching materials. For the checking into work task, different pens, aprons, hand sanitizers, and hook placements were used. For the laundry task, different name tags were used. The clothing also differed from that used in teaching and included coats, short‐ and long‐sleeve shirts, and button‐up shirts. For the vacuuming task, the same materials were used in both teaching and generalization sessions, but the area covered with talcum powder was not marked by tape in generalization sessions.
Two 10.2‐in iPads were used to present videos. Prior to recording, the researchers worked with school staff to conduct task analyses of each work task (i.e., laundry, work check‐in, bed stripping, vacuuming). The model used in all videos was a 27‐year‐old, White woman who was a Board Certified Behavior Analyst and who was familiar with the purpose and methods of the study. Before recording began, she learned to perform the steps of each task until all steps were completed correctly and in the specified order on three consecutive occasions, with performance measured by the first author as described for participants. Performances that were used as videos were also rated as perfectly accurate by the first author, who scored them using the procedure used to evaluate participants' performance (see Dependent Variable and Data Collection).
Videos were recorded on a GoPro 8 and transferred to a laptop for editing. The model performed all steps of a task in succession without interruption, at a speed that felt comfortable and natural. The full videos of laundry, work check‐in, stripping bed, and vacuuming were 4 min 50 s, 12 min 12 s, 2 min 53 s, and 2 min 34 s in duration, respectively. Videos were shot from the perspective of a spectator viewing the model, with the camera positioned to capture all steps in the task.
Smaller video clips were produced from the full‐length videos, and each clip depicted a single step in a given task. The Movavi Video Editor 15 software was used to extract video clips and apply voice‐overs to them. The voice‐overs described each step in the task (e.g., for the first and second steps in the laundry task, the voiceovers were, “Take one shirt out of the laundry basket,” and “Grab one hanger from the clothing rack,” respectively). Copies of all videos are available from the first author, upon request. Once edited, the videos were uploaded to the iPads and stored in separate video albums by task. Within each album, the videos were sequenced in the order they were arranged in the task analysis, with each video depicting a single step in the task. For ease of viewing the video models while completing the target activities, the iPads were secured to an adjustable tripod stand (16.5‐50 in. in height) using a tablet clamp holder.
Experimental Design
Two variants of the multiple probe design (Horner & Baer, 1978) were used. A concurrent multiple probe design across tasks was used with Stacy, Kevin, Jimmy, and Mark; a concurrent multiple probe design across participants was used with Sam, Tim, Bailey, and John. With both variants, a minimum of three data points showing stability or a decreasing trend was required to move from baseline to intervention. Although Horner and Baer (1978) recommended that a series of baseline sessions be conducted just before the introduction of the independent variable to each teaching step, we did not follow this convention because doing so would have increased staff effort and our baseline probes generally showed limited variability.
The intervention was conducted for a task until the mastery criterion of 100% correct responding across three consecutive sessions was met, with or without the video models (i.e., in intervention and maintenance conditions, respectively). Each participant (except John, due to limited availability) received instruction on the three tasks that were most relevant to their anticipated employment.
Baseline and intervention sessions were conducted one to three times per day, 1 to 5 days per week, for individual participants. On days when multiple sessions were conducted, a minimum of 15 min between sessions was required. Sessions were arranged at various times during the school day based on participant availability and the sequencing of other school activities. Maintenance and follow‐up sessions were arranged after the intervention ended and were scheduled based on participant availability. Each session lasted between 5 and 20 min, depending on participant responding.
Procedure
Baseline
Video models were not available during baseline. At the beginning of baseline sessions, the interventionist asked the participant to complete a specific task (e.g., “Strip the bed”). The interventionist, a doctoral candidate in behavior analysis, was the same individual who served as the video model. If the participant failed to begin the next step within 5 s of the general instruction, or began performing an incorrect response, the interventionist instructed the participant to stop and close their eyes or turn around. The interventionist then completed that step out of the participant's view. Once the step was completed, the participant was directed to “keep going.” If the participant requested help, the interventionist responded with, “Just try your best.”
Preintervention
Prior to implementing the intervention, and before the last baseline probe, the interventionist conducted a session to teach the participant to operate the iPad (i.e., select the first video in an album), watch and imitate a video model, and swipe left to progress to the next video for two tasks unrelated to the instructional tasks (i.e., selecting, stacking, and either hole punching or stapling paper). During preintervention sessions, the interventionist described and modeled appropriate responses (e.g., selecting the appropriate video, watching the video, imitating the model, moving to the next video), encouraged the participant to practice these responses, and provided vocal prompts and feedback as appropriate. A participant remained in preintervention sessions until they had completed all steps required to operate the iPad correctly and independently on two consecutive occasions.
Intervention
When the intervention was in effect, the interventionist's instructions to the participant were the same as in baseline except that more specification was provided regarding the use of the iPad (e.g., “Hang and sort the laundry using the iPad”). The participant then selected the first video in the album, which was already displayed on the iPad, and viewed the video model. If the participant looked away from the video model for 2 s, they were reminded by the interventionist to “watch the video.” If they began engaging in the response before the video ended, the interventionist told them to “watch the whole video first.” Once the video ended, the participant imitated the modeled step. If they imitated the model and completed the response correctly, they swiped left on the iPad screen to progress to the next video. They continued this process until the task was completed and no video models remained. Only general praise was provided at the end of the session.
If the participant failed to imitate the video model within 5 s or made an incorrect response, the interventionist instructed the participant to re‐watch the video. The participant then viewed the video for a second time, attempted to imitate the model, and completed the step. If the participant completed the step correctly, they swiped left on the video and began viewing the next model. If they completed the step incorrectly for a second time or waited longer than 5 s to initiate it, the participant was instructed to turn around or close their eyes while the interventionist completed the step for them. Once she completed the step, the interventionist directed the participant to continue with the task (i.e., said “watch the next video”). Data were collected on responses that occurred after error correction, but only the initial responses are included in the data presented in this study.
For the laundry task, Kevin did not meet the mastery criterion after several sessions of exposure to the intervention; it appeared that he was sorting shirts based on color rather than label. For the 17th session, the materials were modified so that all shirts were the same color, which was intended to aid Kevin in attending to the relevant stimuli (i.e., the tags in the collars of the shirts). The original materials were reintroduced during maintenance sessions.
Maintenance, Treatment Extension, and Follow‐Up
Participants had no opportunity to perform the targeted tasks after the intervention ended, except during maintenance, treatment extension, and follow‐up sessions. Video models were absent during maintenance and follow‐up sessions. Maintenance sessions were conducted in a similar manner to baseline sessions; however, if the participant completed a step incorrectly, the interventionist did not complete that step or provide an additional prompt. Instead, the participant continued through the task independently and data on the accuracy of each step were recorded. Whenever possible, maintenance data were collected for three consecutive sessions immediately following the intervention phase. To assess postintervention generalization of the tasks, a single‐session treatment extension probe was arranged. It was identical to maintenance sessions, except that the materials differed from those used in maintenance sessions (see above) and the session was conducted in a novel location. Follow‐up sessions were arranged like maintenance sessions. In most cases they occurred 1 month and 3 months after the intervention ended, although other follow‐up intervals were sometimes arranged due to participants' schedules and availability.
Dependent Variable and Data Collection
The primary dependent measure was the accuracy of participants' performance on each step of the task analyses (see Table 2). During baseline and intervention sessions, a response was considered correct if it occurred within 5 s of the relevant prompt, occurred in the correct order, and was topographically appropriate. The 5‐s latency limit was based on the values used by Cullen et al. (2017; 5 s) and by Heider et al. (2019; 4 s). During maintenance, treatment extension, and follow‐up sessions, a correct response was defined in the same way, except that the response latency requirement was omitted because videos and prompts were not present.
For all sessions, most steps had to be completed within 1 min, except tying the apron (within 2 min), unbuttoning the duvet cover (within 2 min), and vacuuming the talcum powder (within 3 min). Participants were allotted extended time for tying and unbuttoning because some participants' motor challenges made these steps more difficult. The extended time allowed for vacuuming was based on the time the interventionist required to complete the step, plus 30 s. For all sessions, performance accuracy for a task was calculated by dividing the number of steps the participant performed correctly by the total number of steps in the task and multiplying by 100. Data were also collected on the number of sessions required to meet the mastery criterion.
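For readers who wish to automate scoring, the accuracy calculation described above can be sketched in a few lines of Python. This is an illustrative sketch only, not part of the study's materials, and the example step data are invented:

```python
def percent_correct(step_scores):
    """Accuracy for one task: correct steps / total steps x 100."""
    if not step_scores:
        raise ValueError("task analysis must contain at least one step")
    return 100 * sum(step_scores) / len(step_scores)

# Hypothetical 14-step bed-stripping session scored True (correct) or
# False (incorrect); 13 of 14 steps were performed correctly.
scores = [True] * 13 + [False]
print(round(percent_correct(scores), 1))  # 92.9
```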
Interobserver Agreement
In some sessions, a second observer who did not interact with the primary observer also collected data. Interobserver agreement (IOA) was calculated for each participant by dividing the total number of agreements by the total number of observations and multiplying by 100. To count as an agreement, both observers had to score a given step of an activity in the same way (i.e., as correct or as incorrect). IOA data were collected across all tasks and conditions of the study. For Stacy, Mark, Jimmy, Kevin, Sam, Tim, Bailey, and John, IOA was calculated for 33%, 30%, 37%, 32%, 23%, 18%, 22%, and 35% of sessions, respectively. Mean IOA equaled 98% for Stacy (range, 83%–100%), 98% for Mark (range, 87%–100%), 99% for Jimmy (range, 93%–100%), 99% for Kevin (range, 89%–100%), 99% for Sam (range, 94%–100%), 98% for Tim (range, 90%–100%), 97% for Bailey (range, 90%–100%), and 99% for John (range, 93%–100%).
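The session-level IOA metric described here (total agreements divided by total observations, multiplied by 100) can be sketched as follows. The function and the observer records are hypothetical illustrations, not materials from the study:

```python
def percent_agreement(obs1, obs2):
    """Step-by-step agreement between two observers' records of a session."""
    if len(obs1) != len(obs2):
        raise ValueError("both records must cover the same steps")
    # An agreement is a step both observers scored identically (1 = correct,
    # 0 = incorrect); IOA is agreements / total steps x 100.
    agreements = sum(a == b for a, b in zip(obs1, obs2))
    return 100 * agreements / len(obs1)

# Two observers score the same 10-step session; they disagree on one step.
primary = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1]
secondary = [1, 1, 0, 1, 1, 1, 1, 1, 1, 1]
print(percent_agreement(primary, secondary))  # 90.0
```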
Procedural Fidelity
A task analysis of the interventionist's required activities for each phase of the study was conducted, and a checklist corresponding to those steps was constructed (see Supporting Information). An observer who was previously trained in data collection viewed videos of sessions and used that checklist to determine whether each step on the treatment integrity task analysis was performed correctly. Procedural fidelity was calculated for 11% of sessions (39 of 368), and scores ranged from 90% to 100% across sessions (M = 99%).
Participant Satisfaction Survey
At the end of the study, participants were asked to respond to a written anonymous survey. It asked participants to respond “yes,” “no,” or “undecided” to the following questions: “Did you learn new job‐related skills?” “Would you like to learn more skills using the iPad?” “Is watching the videos on the iPad a good way to learn new skills?” “Are the step‐by‐step videos helpful?” “Will these skills help you in future job sites or job‐based learning sites?” Each participant's response to each question was recorded and the responses of participants were summed across categories (e.g., five yes, two no, one undecided).
Results
Data in Figures 1, 2, 3, 4 show that video prompting was an effective intervention for each of the four participants for whom a multiple probe design across tasks was used to evaluate treatment effects (i.e., Kevin, Stacy, Jimmy, and Mark). Accuracy of performance increased immediately when the intervention was implemented, and accuracy persisted while the intervention was in effect. This pattern of results was consistent across participants and tasks.
Figure 1.

Kevin's Performance Data across Tasks and Sessions Note. The break in the x‐axis indicates a 1‐month gap between maintenance and follow‐up sessions.
Figure 2.

Stacy's Performance Data across Tasks and Sessions Note. The break in the x‐axis indicates a 1‐month gap between maintenance and follow‐up sessions.
Figure 3.

Jimmy's Performance Data across Tasks and Sessions Note. The break in the x‐axis indicates a 1‐month gap between maintenance and follow‐up sessions.
Figure 4.

Mark's Performance Data across Tasks and Sessions Note. The break in the x‐axis indicates a 1‐month gap between maintenance and follow‐up sessions.
Kevin's performance is shown in Figure 1. During baseline, Kevin consistently scored at or below 60% correct on the laundry and work check‐in tasks. During the intervention phases, scores ranged from 89% to 100%. Materials were modified (i.e., same‐colored shirts were substituted for shirts of varied colors) for the laundry task at session 17, after which Kevin met the mastery criterion in seven sessions. Kevin met the mastery criterion for the work check‐in task in 15 sessions. Kevin's performance on both tasks remained above 95% during the maintenance sessions. Kevin also scored 100% on both tasks during the 1‐month follow‐up, treatment extension, and 3‐month follow‐up sessions.
Stacy participated in the laundry, work check‐in, and stripping bed tasks, as shown in Figure 2. Baseline scores ranged from 0% to 50%. When the intervention was implemented, the tasks were mastered in six sessions (laundry), seven sessions (work check‐in), and four sessions (stripping the bed), with scores ranging from 83% to 100%. Stacy scored 100% in each of the three maintenance sessions for each task. Scores for the 2‐week follow‐up, 2‐month follow‐up, treatment extension, and 4‐month follow‐up sessions for the laundry task were 95%, 95%, 95%, and 98%, respectively. In the 1‐month follow‐up, treatment extension, and 3‐month follow‐up sessions for the work check‐in task, Stacy scored 100%, 89%, and 100%, respectively. Finally, Stacy scored 100% during the 1‐month follow‐up, treatment extension, and 3‐month follow‐up sessions for the bed stripping task.
Figure 3 depicts outcomes for Jimmy. He scored between 0% and 60% during baseline for the laundry, work check‐in, and bed stripping tasks. During the intervention phase, scores ranged from 89% to 100%, with performance meeting the mastery criterion in nine sessions (laundry), 13 sessions (work check‐in), and four sessions (bed stripping). Jimmy scored 100% during all three maintenance sessions across each task. Jimmy scored 100% at the 2‐week follow‐up, 6‐week follow‐up, treatment extension, and 3‐month follow‐up sessions for the laundry task. Jimmy scored 94%, 100%, and 100% at the 1‐month follow‐up, treatment extension, and 3‐month follow‐up session (respectively) for the work check‐in task. Scores of 100% were also obtained at the 1‐month follow‐up, treatment extension, and 3‐month follow‐up sessions for the bed stripping task.
Mark's data are depicted in Figure 4. Mark's baseline scores ranged from 27% to 60% for each task. During the intervention phase, Mark mastered the laundry task in three sessions, bed stripping task in three sessions, and vacuuming task in seven sessions. Mark consistently implemented each task with 100% accuracy across three maintenance sessions. Mark also scored 100% during the 1‐month follow‐up, 6‐week follow‐up, 2‐month follow‐up, treatment extension, and 4‐month follow‐up sessions for the laundry task; 2‐week follow‐up, 6‐week follow‐up, treatment extension, and 4‐month follow‐up sessions for the bed stripping task; and 1‐month follow‐up, treatment extension, and 3‐month follow‐up sessions for the vacuuming task.
As shown in Figures 5, 6, and 7, video prompting was also an effective intervention for each of the four participants for whom a multiple probe across participants design was used to evaluate treatment effects (i.e., Tim, Sam, Bailey, and John). All participants showed evidence of learning the laundry task, as shown in Figure 5. Baseline performance ranged from 28% to 68% across participants. During the intervention phase, mastery was met in eight sessions (Tim), five sessions (Sam), and six sessions (John). Bailey never met the mastery criterion because she decided to move to remote learning for the remainder of the school year, but her scores during the intervention phase ranged from 93% to 100%. Tim maintained scores of 100% during the maintenance, 2‐month follow‐up, treatment extension, and 3‐month follow‐up sessions. Sam maintained scores of 100% across the maintenance, 1‐month follow‐up, treatment extension, and 3‐month follow‐up sessions. John maintained scores of 100% across the three maintenance sessions, the treatment extension session, and the 1‐month follow‐up session. No maintenance, treatment extension, or follow‐up sessions were conducted with Bailey.
Figure 5.

Participants' Performance Data for Laundry Task across Sessions. Note. The break in the x‐axis indicates a 2‐month gap between maintenance and follow‐up sessions for Tim and a 1‐month gap between sessions for John and Sam.
Figure 6.

Participants' Performance Data for Work Check‐in Task across Sessions. Note. The break in the x‐axis indicates a 2‐month gap between maintenance and follow‐up sessions for Tim and a 1‐month gap between sessions for Sam.
Figure 7.

Participants' Performance Data for Stripping Bed Task across Sessions. Note. The break in the x‐axis indicates a 1‐month gap between maintenance and follow‐up sessions for Sam and a 2‐month gap between sessions for Tim.
Figure 6 depicts outcomes for the work check‐in task for Bailey, Sam, and Tim. Baseline data ranged from 17% to 67% across sessions and participants. During the intervention phase, the mastery criterion was met in six sessions (Bailey), 11 sessions (Tim), and three sessions (Sam). Bailey maintained over 94% accuracy across five maintenance sessions; follow‐up and treatment extension data were not collected for Bailey. Tim scored 100% across three consecutive maintenance sessions, as well as 100%, 94%, and 100% during his 2‐month follow‐up, treatment extension, and 3‐month follow‐up sessions (respectively). Sam scored 100% during his maintenance, 1‐month follow‐up, treatment extension, and 3‐month follow‐up sessions.
Data for the third task, stripping the bed, are shown for Sam, Bailey, and Tim in Figure 7. During baseline sessions, accurate task completion ranged from 0% to 36%. During the intervention phase, the mastery criterion was met in three sessions (Sam), three sessions (Bailey), and four sessions (Tim). Sam scored 100% during his maintenance, 1‐month follow‐up, treatment extension, and 3‐month follow‐up sessions. Bailey scored 100% during the maintenance sessions, and she did not participate in follow‐up or treatment extension sessions on this task. Tim scored 100%, 100%, 79%, and 100% during his maintenance, 2‐month follow‐up, treatment extension, and 3‐month follow‐up sessions (respectively).
All participants completed the acceptability questionnaire. All indicated, by selecting "yes," that they would like to learn more skills using the iPads, that watching videos on the iPad was a good way to learn new skills, that the step‐by‐step videos were helpful, and that the skills they learned would help them at future job sites or job‐based learning sites. Six participants indicated that they learned new job skills during the study; two indicated that they did not.
Discussion
The present findings indicate that, for each of the eight participants and all tasks targeted, video prompting was an effective and efficient intervention for teaching job‐related skills. Our results are congruent with the results of two recent studies demonstrating the value of video prompting for teaching young adults with IDD to perform vocational tasks (Cullen et al., 2017; Heider et al., 2019). They also are consistent with findings demonstrating the value of video prompting for other applications involving people with IDD (see Banda et al., 2011; Cooper et al., 2020, pp. 534‐536) and of video modeling more generally (e.g., Boles et al., 2016, 2019).
The procedure we examined was intended to be practical and unobtrusive in that the error correction procedure required minimal staff attention and did not involve physical prompting. It closely resembled the procedures used by Heider et al. (2019) to teach participants to perform vocational tasks once they had been instructed to use iPhones with a procedure that arranged least‐to‐most prompting, as well as those used by Goodson et al. (2007) to teach daily living skills to individuals with IDD. All three studies demonstrated rapid and substantial treatment effects, suggesting that video prompting that does not include physical prompting for error correction is a robust procedure for teaching vocational and other skills to people with IDD.
One of our goals in conducting the present study was to add to the findings of Cullen et al. (2017) and Heider et al. (2019) by providing further data regarding the maintenance of acquired vocational skills, a topic that has not been adequately studied (Cannella‐Malone & Schaefer, 2017; Park et al., 2019). Strong maintenance of acquired skills was obtained in our study even though participants did not have continued access to the video prompts. Thus, our findings differ from those obtained by Heider et al. (2019). In their first three maintenance sessions, conducted at 2‐week intervals postintervention, videos were available, but participants were not instructed to use them. All participants exhibited high levels of accuracy under these conditions. Conversely, in a single maintenance session when videos were not available, accuracy declined markedly. Our maintenance and follow‐up data, which were collected up to 4 months postintervention with videos absent and no opportunities to practice skills provided, demonstrated high levels of accuracy for all participants and tasks.
It is not clear why participants in our study, but not participants in the study by Heider et al. (2019), performed well in the absence of videos. Our mastery criterion of 100% accuracy across three consecutive sessions was more robust than theirs (i.e., a clear and increasing trend in accuracy across at least three trials). There is substantial evidence that more stringent mastery criteria are associated with better maintenance of acquired skills (e.g., Pitts & Hoerger, 2021; Richling et al., 2019). In comparing treatment data from the two studies, however, participants in both achieved a high level of accuracy during the final sessions of intervention. Thus, there is no compelling evidence that different degrees of task mastery were responsible for the difference in maintenance results.
Both the tasks and participants differed in our study and the study by Heider et al. (2019), and either of these variables could have affected performance during maintenance sessions where videos were absent. Their target tasks comprised 28, 17, and 40 steps, whereas ours comprised five (repeated eight times), 18, 14, and 15 steps. If one construes a task with multiple steps as a response chain, in which completing one step produces a discriminative stimulus that evokes the next response in the chain, it is reasonable to expect that newly established stimulus control would be weaker with longer chains (i.e., more steps; Cooper et al., 2020, p. 576) and, hence, accuracy would be lower. Although the numbers of steps required to complete the tasks in our study were slightly fewer than in the study by Heider et al., it is unclear if this variable is responsible for the difference in maintenance of acquired skills.
In any case, individuals need not perform vocational tasks acquired through video prompting accurately in the absence of videos in order to benefit from the intervention. Portable devices that can display videos, including cellular phones such as those used in the present study, are ubiquitous (Pew Research Center, 2021), and some are inexpensive. Such phones can easily be taken to job sites and used to present videos without calling undue attention to the worker using them or disrupting the performance of others, a point in favor of using them to present videos.
Another strength of video prompting interventions is that, once the video clips are recorded, the intervention can be used to teach a skill to many different individuals. However, producing the videos requires considerable effort, and the presence of an interventionist is typically required when the video modeling procedure is in effect (e.g., Cullen et al., 2017; Heider et al., 2019). This was the case in the present study, although we attempted to minimize the effort required of the interventionist to the extent possible. Video prompting procedures that do not require the presence of an interventionist merit investigation. Comparing other intervention strategies to video prompting with respect to cost effectiveness would also be worthwhile. Other strategies for teaching vocational skills, such as providing learners with written instructions, recorded oral descriptions, or videos alone (i.e., no interventionist), would require little staff effort and could be arranged for multiple learners simultaneously. To our knowledge, there is no published procedure for determining the most efficient vocational training procedure for a given learner. Because our study was designed to evaluate video prompting, not to compare training procedures, we did not evaluate whether our participants would have responded positively to procedures other than video modeling. Given that our participants responded to the written consumer satisfaction questionnaire, written instructions alone might have sufficed to teach our participants some, or all, of the targeted tasks. Having written instructions available during baseline, a design feature in the study by Heider et al. (2019), would have allowed for the evaluation of this possibility and should be considered in future research.
The COVID‐19 pandemic has created substantial challenges for behavior analysts (Colombo et al., 2020; Jimenez‐Gomez et al., 2021; LeBlanc et al., 2020). Although the exigencies of COVID‐19 prompted our study, they also weakened it. By closing work sites, the pandemic made it impossible to test whether vocational tasks acquired at school generalized to actual work sites. By reducing school staff and imposing new work requirements on the staff who remained, it dictated that a researcher, not a school staff member, implement the intervention. Compromised procedural fidelity typically reduces the effectiveness of interventions (e.g., Jones & St. Peter, 2022). Therefore, if video prompting is to be a useful procedure for teaching vocational tasks in transitional programs for students in special education, school staff must be able to use the procedure with high fidelity and relative ease. Determining whether they can do so is a worthy goal for future research, as is documenting the value of video prompting for teaching skills exhibited in actual workplaces.
Another obvious and significant target for future research is determining the range of students who respond favorably to interventions like the one we arranged. Although not formally assessed, our participants had substantial receptive and expressive language skills and experience operating electronic devices. Moreover, school staff recommended them as individuals likely to benefit from the video prompting intervention. Thus, there is no good reason to assume that video prompting would be a good intervention for helping all, or even most, young adults with IDD acquire job skills. We know of no evidence‐based strategy for matching learners and vocational interventions; developing one is an important goal for future research.
Participant satisfaction data, although limited, suggested that our intervention was generally acceptable to the young adults who experienced it, which is important. Moreover, our findings indicated that video prompting was effective with little augmentation. The procedure was relatively simple and, save for the modification made in teaching the laundry task to Kevin, did not require individualization; thus, it appears useful to consider as a tier 1 intervention for special education vocational programs. Although video prompting is far from a panacea, a substantial body of research, including the present study, indicates that it is useful for helping people with IDD acquire skills that require a sequence of steps (e.g., Cannella‐Malone et al., 2011; Gardner & Wolfe, 2013; Kellems et al., 2018; Kim & Kang, 2020).
Generalization probes in the present study, although not extensive, indicated that acquired tasks could be performed accurately in different settings and with different stimuli, which is comparable to the findings of Cullen et al. (2017). Collectively, these findings suggest that video prompting can produce performance that generalizes across settings and materials, although further research concerning the circumstances under which generalization occurs, and the variables that affect it, is needed. As noted, it is especially important to examine whether skills acquired outside of worksites, as in the present study, are exhibited when participants are at work.
Supporting information
Appendix S1 Supporting Information.
Conflict of Interest: The authors declare no conflict of interest.
No external funding supported the research.
Footnotes
Associate Editor, Tara Fahmie
REFERENCES
- Almalky, H. A. (2020). Employment outcomes for individuals with intellectual and developmental disabilities: A literature review. Children and Youth Services Review, 109, 1–10. 10.1016/j.childyouth.2019.104656 [DOI] [Google Scholar]
- Banda, D. R. , Dogoe, M. S. , & Matuszny, R. M. (2011). Review of video prompting studies with persons with developmental disabilities. Education and Training in Autism and Developmental Disabilities, 46(4), 514–527. https://www.jstor.org/stable/24232363 [Google Scholar]
- Boles, M. , Ganz, J. , Hagan‐Burke, S. , Gregori, E. V. , Neely, L. , Mason, R. A. , Zhang, D. , & Wilson, V. (2016). Quality review of single‐case studies concerning employment skill interventions for individuals with developmental disabilities. Cadernos de Educação (UFPel), 13, 15–51. [Google Scholar]
- Boles, M. , Ganz, J. , Hagan‐Burke, S. , Hong. E. , Neely, L. , Davis, J. L. , & Zhang, D. (2019). Effective interventions in teaching employment skills to individuals with developmental disabilities: A single‐case meta‐analysis. Review Journal of Autism and Developmental Disorders, 6(2), 200–215. 10.1007/s40489-019-00163-0 [DOI] [Google Scholar]
- Cannella‐Malone, H. , Fleming, C. , Chung, Y. , Wheeler, G. , Basbagill, A. , & Singh, A. (2011). Teaching daily living skills to seven individuals with severe intellectual disabilities: A comparison of video prompting to video modeling. Journal of Positive Behavior Interventions, 13(3), 144–153. 10.1016/j.rasd.2013.07.013 [DOI] [Google Scholar]
- Cannella‐Malone, H. & Schaefer, J. (2017). A review of research on teaching people with significant disabilities vocational skills. Career Development and Transition for Exceptional Individuals, 40(1), 67–78. 10.1177/2165143417752901 [DOI] [Google Scholar]
- Centers for Disease Control and Prevention . (n.d.). How to protect yourself and others. Retrieved May 17, 2022, from https://www.cdc.gov/coronavirus/2019-ncov/prevent-getting-sick/prevention.html
- Colombo, R. A. , Wallace, M. , & Taylor, R. (2020). An essential service decisions model for applied behavior analytic providers during crisis. Behavior Analysis in Practice, 13(2), 306–311. 10.31234/osf.io/te8ha [DOI] [PMC free article] [PubMed] [Google Scholar]
- Cooper, J. , Heron, T. , & Heward, W. (Eds.) (2020). Applied behavior analysis (3rd ed.). Pearson. [Google Scholar]
- Culham, A. , & Nind, M. (2003). Deconstructing normalization: Clearing the way for inclusion. Journal of Intellectual and Developmental Disability, 28, 65–78. 10.1080/1366825031000086902 [DOI] [Google Scholar]
- Cullen, J. M. , Alber‐Morgan, S. R. , Simmons‐Reed, E. A. , & Izzo, M. V. (2017). Effects of self‐directed video prompting using iPads on the vocational task completion of young adults with intellectual and developmental disabilities. Journal of Vocational Rehabilitation, 46, 361–375. 10.3233/JVR-170873 [DOI] [Google Scholar]
- Gardner, S. , & Wolfe, P. (2013). Use of video modeling and video prompting interventions for teaching daily living skills to individuals with autism spectrum disorders: A review. Research and Practice for Persons with Severe Disabilities, 38(2), 73–87. 10.2511/027494813807714555 [DOI] [Google Scholar]
- Goodson, J. , Sigafoos, J. , O'Reilly, M. , Cannella, H. , & Lancioni, G. E. (2007). Evaluation of a video‐based error correction procedure for teaching a domestic skill to individuals with developmental disabilities. Research in Developmental Disabilities, 28(5), 458–467. 10.1016/j.ridd.2006.06.002 [DOI] [PubMed] [Google Scholar]
- Heider, A. , Cannella‐Malone, H. , & Andzik, N. (2019). Effects of self‐directed video prompting on vocational task acquisition. Career Development and Transition for Exceptional Individuals, 42(2), 87–98. 10.1177/2165143417752901 [DOI] [Google Scholar]
- Horner, R. D. , & Baer, D. M. (1978). Multiple‐probe technique: A variation on the multiple‐baseline design. Journal of Applied Behavior Analysis, 11(1), 189–196. 10.1901/jaba.1978.11-189 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Horner, R. H. , Carr, E. G. , Halle, J. H. , McGee, G. , Odom, S. , & Wolery, M. (2005). The use of single‐subject research to identify evidence‐based practice in special education. Exceptional Children, 71(2), 165–179. 10.1177/001440290507100203 [DOI] [Google Scholar]
- Individuals with Disabilities Education Act , 20 U.S.C. § 1400 (2004). https://sites.ed.gov/idea/statute-chapter-33/subchapter-i/1400
- Jimenez‐Gomez, C. , Sawhney, G. , & Albert, K. M. (2021). Impact of COVID‐19 on the applied behavior analysis workforce: Comparison across remote and nonremote workers. Behavior Analysis in Practice, 14(4), 873–882. 10.1007/s40617-021-00625-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Jones, S. H. , & St. Peter, C. C. (2022). Nominally acceptable integrity failures negatively affect interventions involving intermittent reinforcement. Journal of Applied Behavior Analysis, 55(4), 1109–1123. 10.1002/jaba.944 [DOI] [PubMed] [Google Scholar]
- Kellems, R. , Rickard, T. , Okray, D. , Sauer‐Sagiv, L. , & Washburn, B. (2018). iPad® video prompting to teach young adults with disabilities independent living skills: A maintenance study. Career Development and Transition for Exceptional Individuals, 41(3), 175–184. 10.1177/216514341771907 [DOI] [Google Scholar]
- Kiernan, W. E. , Hoff, D. , Freeze, S. , & Mank, D. M. (2011). Employment first: A beginning not an end. Intellectual and Developmental Disabilities, 49, 300–304. 10.1352/1934-9556-49.4.300 [DOI] [PubMed] [Google Scholar]
- Kim, S. , & Kang, V. (2020). iPad® video prompting to teach cooking tasks to Korean American adolescents with autism spectrum disorder. Career Development and Transition for Exceptional Individuals, 43(3), 131–145. 10.1177/2165143420908286 [DOI] [Google Scholar]
- Kratochwill, T. R. , Hitchcock, J. H. , Horner, R. H. , Levin, J. R. , Odom, S. K. , Rindskopf, D. M. , & Shadish, W. R. (2013). Single‐case intervention research design standards. Remedial and Special Education, 34(1), 26–38. 10.1177/0741932512452794 [DOI] [Google Scholar]
- LeBlanc, L. A. , Lazo‐Pearson, J. F. , Pollard, J. S. , & Unumb, L. A. (2020). The role of compassion and ethics in decision‐making regarding access to applied behavior analysis services during the COVID‐19 crisis: A response to Cox, Plavnik, and Brodhead. Behavior Analysis in Practice, 13(3), 604–608. 10.1007/s40617-020-00446-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lindstrom, L. , Kahn, L. , & Lindsay, H. (2013). Navigating the early career years: Barriers and strategies for young adults with disabilities. Journal of Vocational Rehabilitation, 39(1), 1–12. 10.3233/JVR-130637 [DOI] [Google Scholar]
- No Child Left Behind Act of 2001 , P.L. 107‐110, 20 U.S.C. § 6319 (2002). https://www.govinfo.gov/content/pkg/PLAW-107publ110/pdf/PLAW-107publ110.pdf
- Park, J. , Bouck, E. , & Duenas, A. (2019). The effects of video modeling and video prompting interventions on individuals with intellectual disability: A systematic literature review. Journal of Special Education Technology, 34(1), 3–16. 10.1177/0162643418780464 [DOI] [Google Scholar]
- Pew Research Center . (2021). Mobile fact sheet . Downloaded from https://www.pewresearch.org/internet/fact-sheet/mobile/
- Pitts, L. , & Hoerger, M. L. (2021). Mastery criteria and the maintenance of skills in children with developmental disabilities. Behavioral Interventions, 36(2), 522–531. 10.1002/bin.1778 [DOI] [Google Scholar]
- Richling, S. M. , Williams, W. L. , & Carr, J. E. (2019). The effects of different mastery criteria on the skill maintenance of children with developmental disabilities. Journal of Applied Behavior Analysis, 52(3), 701–717. 10.1002/jaba.580 [DOI] [PubMed] [Google Scholar]
- Taylor, J. , Avellone, L. , Brooke, V. , Wehman, P. , Inge, K. , Schall, C. , & Iwanega, K. (2022). The impact of competitive integrated employment on economic, psychological, and physical health outcomes for individuals with intellectual and developmental disabilities. Journal of Applied Research in Intellectual Disabilities, 35(2), 448–459. 10.1111/jar.12974 [DOI] [PubMed] [Google Scholar]
- Thomas, E. , DeBar, R. , Vladescu, J. , & Buffington Townsend, D. (2020). A comparison of video modeling and video prompting by adolescents with ASD. Behavior Analysis in Practice, 13(1), 40–52. 10.1007/s40617-019-00402-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
- U.S. Bureau of Labor Statistics (2021, February 24). Persons with a disability: labor force characteristics summary . https://www.bls.gov/news.release/disabl.nr0.htm