Author manuscript; available in PMC: 2022 Jan 1.
Published in final edited form as: J Organ Behav Manage. 2020 Jun 22;41(1):2–15. doi: 10.1080/01608061.2020.1776807

Using Adaptive Computer-based Instruction to Teach Staff to Implement a Social Skills Intervention

Caitlin Mailey 1, Jessica Day-Watkins 1, Ashley A Pallathra 2, David A Eckerman 3, Edward S Brodkin 4, James E Connell 1
PMCID: PMC8259409  NIHMSID: NIHMS1599737  PMID: 34239214

Abstract

This study evaluated the effectiveness of an adaptive, computer-based staff training software program called Train-to-Code (TTC) to teach the administration of a social skills intervention. The software program actively trained participants to identify whether video models illustrated each step of the procedure effectively or ineffectively. Multiple exemplars of each step of the social skills task analysis were represented. Most-to-least prompting as well as feedback and error correction were embedded into the software program and prompts were faded through seven levels as the participant reached criterion accuracy. A multiple-probe across participants design was used to evaluate the effectiveness of this program by comparing pre- and post-training in vivo probes conducted with a confederate learner. All participant scores increased from pre-training to post-training, indicating that Train-to-Code was effective at teaching administration of the social skills intervention. These results have implications for training staff in applied community settings. Due to Train-to-Code’s ability to be internet-based and to measure actual viewing performance, it has the potential for “distance training” deliveries.

Keywords: staff training, Train-to-Code, social skills intervention, behavior skills training


Behavioral skills training (BST) is a widely-used, evidence-based staff training procedure that includes instructions, modeling, practice, and feedback (Ward-Horner & Sturmey, 2012). BST is commonly used as a package to teach a variety of behavior analytic procedures such as most-to-least prompting (Giannakakos, Vladescu, Kisamore, & Reeve, 2016), social skills interventions (Day-Watkins, Pallathra, Connell, & Brodkin, 2018), and the Picture Exchange Communication System (PECS) (Rosales, Stone, & Rehfeldt, 2009). However, BST has several limitations. First, BST is time-intensive: its multiple components may require repetition over several days before staff acquire and implement an intervention with fidelity (Parsons, Rollyson, & Reid, 2012; Rosales et al., 2009). Second, BST requires the presence of a skilled trainer who participates in the modeling and feedback components of the intervention. This presents a further challenge when disseminating BST as a training model in a large behavioral health organization that supports a large geographical area. In such an organization, with many employees or with staff providing behavioral supports in both the home and the community, the resource consumption of BST is compounded, and BST may be a less attractive training option despite its effectiveness (Parsons, Rollyson, & Reid, 2013).

To address these limitations, previous research has demonstrated effective modifications to the BST process that limit trainer presence by supplementing video models with voiceover instruction (VMVO) (Giannakakos et al., 2016), training via telehealth (Higgins, Luczynski, Carroll, Fisher, & Mudford, 2017), and using a computer-based training procedure (Rosales, Eckerman, & Martocchio, 2018). Telehealth is one alternative that allows healthcare providers to communicate, train, and provide services remotely using web-based technologies (e.g., videoconferencing) (Higgins et al., 2017). In another alternative, VMVO, a video is presented to the trainee that describes the behaviors being modeled in the video. These modifications reduce the need for an in-person trainer to provide instructions and modeling; however, a trainer is still required to provide ongoing feedback (Day-Watkins et al., 2018; Giannakakos et al., 2016).

Train-to-Code (TTC) is a computer-based, internet-connected software program that uses written and vocal voiceover instruction, video modeling, and instructive feedback to teach complex behavioral repertoires without direct staff supervision. When used to train staff to carry out procedures with integrity, TTC shows video clips depicting both correct and incorrect implementation of each step of the task analysis of a designated intervention (e.g., discrete trial training). While watching the clips, the trainee labels correct and incorrect steps until she or he meets a predetermined mastery criterion. TTC is predicated on strong Say-Do correspondence: if a trainee can select the correct implementation within TTC, that selection-based responding (i.e., coding) will generalize to demonstrating the skill in a role play (i.e., "Do"; topography-based responding) (Michael, 1985). In TTC, the learner observes a behavior and selects whether it was completed correctly, thereby demonstrating the "say" portion of Say-Do correspondence. Say-Do correspondence has been demonstrated previously in safety behaviors (Alvero & Austin, 2004) and in conducting functional analyses (Field, Frieder, McGee, Peterson, & Duinkerken, 2015). However, only one study to date has evaluated Say-Do correspondence in TTC, specifically to teach implementation of PECS Phase 3A (Rosales et al., 2018). Results showed improvement in performance fidelity relative to pre-training following completion of training. The present study extends the application of Say-Do correspondence to implementation of a social skills intervention.

Given the importance of implementing interventions with high procedural fidelity (Allen & Warzak, 2000) and the need for effective and efficient staff training programs in community settings where staff and financial resources may be limited, this study examined the effects of TTC on a social skills training package developed for use in community settings as part of a larger NIH-funded study. In doing so, it extended the findings of the TTC intervention demonstrated in Rosales et al. (2018).

Method

Participants

Three groups of participants completed the in vivo pre- and post-training probes: 1) coaches, who were trained to carry out the intervention and for whom the dependent variable was measured; 2) actors, who were staged to engage in quiet conversation during role plays; and 3) the confederate learner, who emitted scripted responses that allowed the coach to demonstrate acquisition of the social skills intervention. The coaches were two graduate students and one undergraduate student working in a psychology research lab at a private, urban, Mid-Atlantic university. They had little to no prior experience with or knowledge of social skills interventions for individuals with autism. The coaches participated in research related to social behaviors relevant to ASD and other neurobiological disorders. Because of these experiences, the students were familiar with some behaviors associated with ASD but had not been trained to implement interventions for individuals with ASD using procedures grounded in applied behavior analysis. All coaches were female, typically developing, and between 20 and 24 years old. Two additional graduate students from an Applied Behavior Analysis graduate program served as actors during role plays. The second author served as the confederate learner.

Setting and Materials

Sessions were conducted in a university meeting room containing a conference table and chairs. Materials included a laptop computer with the customized TTC software program, a task analysis of the social skills instructional program for the role-play portion, pen-and-paper data collection worksheets, a roll of raffle tickets used as tokens in the reinforcement system, a video camera to record sessions for data collection, and an electronic tablet that displayed the video model during role plays.

TTC program with video vignettes.

The TTC program was a customized software program that presented video clips of staff implementing a social skills intervention correctly and incorrectly. One hundred and forty-four videos were customized for the present study by the first and second author using a hand-held camera. Clips in these videos represented each step of the task analysis with multiple exemplars of performance of each step.

Dependent measures

The dependent measure was the percentage of steps of the social skills task analysis implemented correctly within a pre- or post-training role-play session (Appendix A). The seven steps of the task analysis included the following: 1) preparation of materials, 2) setting the actors in place, 3) delivering the discriminative stimulus, 4) providing positive reinforcement of the correct response or 5) delivering error correction, 6) re-presenting the trial following an error correction, and 7) scoring the data sheet. Each step in the task analysis was recorded as either correct or incorrect. The first author collected the data during each probe session and data were summarized as percentage of steps implemented correctly during the five trials that comprised each session. For a step of the task analysis to be scored correct for the entire probe session, the coach needed to emit a correct response during every opportunity (i.e., five out of five per step).
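The all-or-nothing scoring rule above can be sketched in code. This is an illustrative sketch only, assuming a simple data layout (one dict of step outcomes per trial); the function name and structure are not from the study's materials.

```python
# Hypothetical sketch of the paper's scoring rule: a step counts as correct
# for a probe session only if it was performed correctly on all five trials,
# and the session score is the percentage of the seven steps meeting that bar.
# Names and data layout are illustrative, not from the study's materials.

def score_probe_session(trials):
    """trials: list of 5 dicts mapping step number (1-7) to True/False."""
    steps = range(1, 8)
    correct_steps = [
        step for step in steps
        if all(trial[step] for trial in trials)  # five out of five required
    ]
    return 100 * len(correct_steps) / len(steps)

# Example: step 3 fails on one trial, so only 6 of 7 steps count as correct.
trials = [{s: True for s in range(1, 8)} for _ in range(5)]
trials[2][3] = False
print(round(score_probe_session(trials), 1))  # 85.7
```

Note that under this rule a single error on one trial removes that step from the session's numerator, which makes the measure conservative.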

Design and General Procedure

A non-concurrent multiple-probe design was used to assess whether participants (i.e., coaches) implemented the social skills instruction program during a role play before (i.e., pre-training probes) and after (i.e., post-training probes) the TTC training program was completed. The targeted social skill was an appropriate approach and greeting response as an adult joined a conversing pair of individuals. Pre-training probes were followed by a computer-based training program (i.e., TTC). Upon mastery in the TTC program, coaches returned to post-training probes, which were identical to pre-training probes with one exception (described below).

Social Skills Intervention.

Coaches were taught to use a social skills instruction program to teach a confederate to approach and greet a pair of conversing adults. The correct social response included both a motor and a vocal response component. The motor response was defined as walking towards and stopping within two feet of a pair of actors. The vocal response was defined as making eye contact with one or both of the actors and saying, “Hey, what’s up?” The coaches’ response was contingent upon a correct or incorrect response emitted by the confederate learner. A complete description of the social skills instructional procedure can be found in Appendix A.

Pre-training probes

At the beginning of pre-training probes, participants were provided with a written description of the social skills intervention and materials to deliver the intervention (e.g., a tablet device for viewing video models, tickets for use in a token economy, and datasheets). The experimenter said, "Please take 10 minutes to review these materials and let me know when you are finished reading. All of the materials you will need to run the session are provided. I cannot answer any questions at this time, but try your best based on the information I have given you." Once the coach was finished reading, the experimenter said, "Let's practice teaching the social skills intervention. The materials you need to conduct the intervention will be on the table. The other individuals in this room are the actors and this is the confederate you will be teaching." No feedback was given during pre-training probes. A trial was terminated when the trainee verbally indicated that they were finished (e.g., "I'm done") or when no further responses were demonstrated by the participant for 30 seconds after their last response. A probe session was complete once trainees completed five trials.

A coach had five opportunities (trials) in a session to complete the entire task analysis sequence for delivering the social skills intervention. For each trial the actors and confederate learner emitted one of five scripted responses. One script presented a correct response by the confederate and four scripts presented errors to be made by the confederate learner or actors. Therefore, each coach had five opportunities to demonstrate each step of the social skills instruction procedure according to the task analysis in every probe session. For example, two actors stood two feet apart from one another and engaged in small talk (e.g., weekend plans). The confederate approached and began speaking about a highly specialized interest (e.g., vacuum cleaners). This set the occasion for the coach to respond to the confederate's greeting according to the task analysis (Appendix A). For a step of the task analysis to be scored correct for the entire probe session, the coach needed to emit a correct response during every opportunity (i.e., five out of five per step).

Train-to-Code (TTC)

Following pre-training probes, each coach completed the TTC training program until they achieved a mastery criterion of 90% correct coding during twenty consecutive coding opportunities with no prompting or feedback being given for each step in the task analysis. A coding opportunity was the presentation of an individual video exemplar (i.e., one of the 144 video clips). Operational definitions of component parts of each step in the task (Appendix A) were listed beside the computer that the coaches were using, and abbreviations of these codes were displayed on the computer screen adjacent to the video. Additionally, each coach was provided a paper copy of the operational definitions.

The video clips were organized into six modules. Each module addressed a different step of the task analysis: preparation of materials (14 video clips), preparation of the conversational group and delivery of the discriminative stimulus (24 video clips), reinforcing correct learner behavior (32 video clips), correcting an initial error by the learner (32 video clips), correcting a second error by the learner (32 video clips), and entering data for the trial on the data sheet (10 video clips). The coach scored or "coded" what was depicted in the video clip as a correct or an incorrect implementation of that step and was given immediate, computerized feedback on their coding accuracy. Contingent on an incorrect coding, the program implemented a most-to-least prompting sequence. Prompting and feedback were faded across seven levels until a mastery criterion was met for independent (unprompted) coding for each step in the task analysis (the type of prompting provided at each level is described in a prior publication; see Rosales et al., 2018). To advance to the next training module, a participant needed an accuracy of 90% on 20 consecutive coding opportunities in the previous module (see Appendix B).
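The module-advancement rule above (90% accuracy over 20 consecutive unprompted coding opportunities) can be sketched as follows. This is a minimal illustration under assumed names and data format, not the actual TTC implementation.

```python
# Minimal sketch of the mastery/advancement rule described in the text:
# the trainee advances when accuracy reaches 90% over any window of 20
# consecutive unprompted coding opportunities. Function name, data format,
# and sliding-window interpretation are assumptions for illustration.

def met_mastery(outcomes, window=20, criterion=0.9):
    """outcomes: list of booleans, one per unprompted coding opportunity."""
    for i in range(len(outcomes) - window + 1):
        recent = outcomes[i:i + window]
        if sum(recent) / window >= criterion:
            return True
    return False

# 18 correct of 20 opportunities is exactly 90%, so criterion is met;
# 17 of 20 (85%) is not.
assert met_mastery([True] * 18 + [False] * 2)
assert not met_mastery([True] * 17 + [False] * 3)
```

With fewer than 20 opportunities recorded, the function returns False, since no full window exists to evaluate.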

Post-training probes

Post-training probes were conducted after participants met the mastery criterion for the final module in TTC training. Post-training probes were conducted as described above for pre-training probes, with one exception: feedback was given during post-training probes if trainees had not met the mastery criterion after two consecutive sessions. This feedback included reviewing the steps of the task analysis they had completed correctly and incorrectly. For those steps that were incorrect, the first author specified what a correct response for that step would be.

Social Validity

After post-training probes were completed, participants were asked to fill out a social validity questionnaire. Participants indicated their responses on a 5-point Likert-style rating scale, with 1 being Strongly Disagree and 5 being Strongly Agree. Social validity was summarized as a mean per question across participants (Table 1).

Table 1.

Social Validity Results

Survey Question Mean Rating

1. I liked being trained by TTC. 2
2. The material in this training was presented in a clear and understandable manner. 3.7
3. I will use the skills I learned in TTC for my job. 2.3
4. I feel competent in implementing a video modeling social skills procedure after completing the TTC program. 4.7
6. I would like to use TTC or other computer-based trainings to learn other procedures in the future. 2.7
7. It is important to demonstrate that you can implement an intervention during training before you try to do it on the job. 4.7

Note. Table displays mean responses to the Social Validity Questionnaire. The survey scale was 1–5, with 1 being Strongly Disagree and 5 being Strongly Agree.

Procedural Fidelity

Adherence to the study protocol was scored for 100% of sessions from a checklist of tasks completed during each session. These procedural fidelity checklists were broken down into Pre-Training, Post-Training, and Train-to-Code (i.e. intervention) tasks. The mean procedural integrity score was 100% during pre-training probes, 100% during implementation of TTC and 98.2% during post-training probes.

Results

All participant scores increased during post-training probes. Scores were summarized as the percentage of correct responses per five-trial probe (see Fig. 1). During pre-training probes, Avery completed a mean of 49% correct responses (range, 33%–67%). Her data showed an increasing trend across her fifth and sixth pre-training sessions but decreased again in her last pre-training probe session. Avery met the mastery criterion immediately after completing TTC; her mean score increased to 95.6% (range, 89%–100%) during post-training probes. During pre-training probes, Georgia completed a mean of 11% correct responses (range, 0%–22%). Following completion of TTC, Georgia's scores increased to a mean of 81.3% (range, 55%–100%). However, she did not achieve criterion performance in her first two post-training probe sessions and therefore received feedback before proceeding. During pre-training probes, Natalie completed a mean of 28% correct responses (range, 11%–42%). Following completion of TTC, Natalie demonstrated a mean of 89% correct responses (range, 78%–100%).

Figure 1.

Figure 1.

Percentage of steps correctly completed per role-play probe.

Meeting certification for all steps in the TTC training required an average of 5.1 hours (Avery=3.5, Georgia=8.5, and Natalie=3.5). Overall coding accuracy across all training (i.e., all 7 levels of prompting) was 84% correct (Avery=89%, Georgia=73%, and Natalie=88%). The decreased accuracy for Georgia was largely attributable to longer latency to pause the video to enter codes, thus failing to code events in the time allowed by the program.

The responses on the social validity questionnaire indicated that participants found Train-to-Code helpful for learning how to implement a video modeling social skills procedure. However, all participants indicated that they “disagreed” or “strongly disagreed” when asked if they liked being trained by TTC or would like to be trained by a similar computer-based training program in the future.

Discussion

The present study demonstrates that Train-to-Code alone successfully established a social skills training repertoire for two of three participants. The third participant achieved criterion following TTC training plus brief individual feedback. These results replicate and extend the findings of Rosales et al. (2018): two of the three participants successfully implemented the social skills intervention following completion of TTC with no additional feedback.

A second purpose of the study was to identify whether a computer-based instructional program could be substituted as a fully independent alternative for the intensive resources of an expert trainer required in a traditional BST program (i.e., a trainer to model, monitor role play, and deliver feedback). To do so, the effectiveness of TTC for teaching a social skills intervention first had to be evaluated. Thus, the present study was designed to assess acquisition of the same social skills intervention as that targeted by Day-Watkins et al. (2018), using similar probe data and dependent measures (i.e., pre- and post-training role plays, acquisition according to the same task analysis), so that the outcomes of BST might be initially compared to those of TTC. If TTC produced the same outcomes as BST (i.e., acquisition of correct implementation of the social skills intervention), TTC would be substitutable for BST for this social skills intervention. The results of the present study demonstrated that two participants implemented the social skills intervention with fidelity following TTC alone. Therefore, the present study provides initial evidence that this TTC training might be substitutable for traditional BST with some participants, while it fell short for one. This lays the foundation for future researchers to directly compare acquisition under BST and TTC.

These outcomes have several implications. Previously, TTC demonstrated effectiveness for learners with no prior experience in acquiring Picture Exchange Communication System (PECS) training skills for one phase of training (Rosales et al., 2018). This study extends the utility of TTC to a multicomponent social skills intervention. Specifically, the participants learned to implement seven steps in a task analysis that included monitoring delivery of discriminative stimuli from actors; delivery of the video model, manual prompts, or verbal prompts contingent upon the nature of the error; data collection; and delivery of a reinforcer contingent upon a correct response.

This specific TTC training program required a significant time investment by the participants and was not well liked by them: the mean duration of training across participants was just over five hours, and one participant required feedback following TTC training. One possible reason this participant required feedback is that the TTC error correction was not effective in producing generalization across response forms (i.e., from selection-based responding in TTC to topography-based responding in the role play). TTC is predicated on strong Say-Do correspondence: if a trainee can select the correct implementation within TTC, that selection-based responding (i.e., coding) should generalize to demonstrating the skill in a role play (i.e., "Do"; topography-based responding) (Michael, 1985). Georgia's data therefore highlight that, for some subset of learners who use TTC, a follow-up test for generalization may be warranted. If TTC were used in a remote, distance-training setting, a learner could submit a sample video of in vivo application of the intervention to be scored remotely by a trainer.

One possible limitation that may have affected transfer from TTC to the naturally occurring discriminative stimuli of in vivo role plays is the design of the training material. Anecdotally, a participant reported that the content of one video did not correspond with the operational definitions available for coding. In that case, the operational definitions may not have been operating as discriminative stimuli which, when presented simultaneously with a correct video representation of a step of the task analysis, should have evoked a "correct" coding by the participant. To examine this hypothesis, the authors reviewed interobserver agreement between two master coders on the correspondence of operational definitions (i.e., coding) to each video prior to implementation. Specifically, two coders watched each video and selected the "master code," the coding that would be programmed as "correct" by the program. This agreement was 81.5%. Lower agreement on which code each video evoked may explain why this participant reported that operational definitions and videos did not match. Put another way, if master coders could not agree on which code to assign to a video, novice trainees would likely experience the same disagreement and struggle to make "correct" selections according to the program. Future studies should address this by requiring a higher reliability criterion across more than two master raters prior to implementation. A final limitation is that the participants did not rate the training highly, which suggests that the present training should be revised. Anecdotally, participants reported that the computer training program was long and redundant. This feedback suggests that the training parameters should be adjusted (e.g., the mastery criterion lowered, or fewer prompting levels required before probing mastery) so that the overall duration of the program is decreased.

A training program of this type would be beneficial to a large organization with many employees, such as a behavioral health agency. Once the program is developed, it can be disseminated to many interventionists across various geographical locations. Additionally, with interventionists spread out geographically, center-based training time with a trainer who can provide feedback may be limited. Therefore, staff training programs that reduce the need for a live trainer may be a preferable option for agencies that support large geographical areas. Computer-based training packages might also be used to train members of families who live in remote areas where it’s impossible or inconvenient for professional interventionists to participate in on-site or center-based trainings. Overall, the findings of this study demonstrate advantages to the use of TTC-based video coding as a training mechanism when in vivo training is not possible or optimal. TTC may also be cost-effective for organizations. Although the creation of training videos will require initial costs of development, once the program is developed it can subsequently be used multiple times with no additional cost to the organization.

Funding

This work was supported by NIMH grant R34MH104407, Services to Enhance Social Functioning in Adults with autism spectrum disorder. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Appendix A.

Social Skills Intervention: Task Analysis Datasheet

Step Score Total
1 2 3 4 5
 1. Prepare Materials
 2. Set actors
 3. Deliver SD (let’s get started!)
 4. Deliver ticket and behavior specific praise OR see step 5
 5. Error correction
 6. Following error correction, re-present trial
 7. Score data sheet
Percentage of steps implemented correctly: _____/_____ =_______%

Appendix B.

Criteria, number of video clips, and number of coding opportunities for each training module. For each prompting level, the first value is the "up" criterion and the second the "down" criterion between prompting levels; the final three columns give the number of video clips in the module, the number of steps (coding opportunities) per clip, and the number of repetitions of the video set available to the module.

Module  Level 1   Level 2   Levels 3–6  Level 7   Clips  Steps/clip  Repetitions
1       90%/N/A   87%/73%   80%/60%     90%/80%   28     1           4
2       90%/N/A   87%/73%   87%/73%     90%/80%   48     2           3
3       90%/N/A   87%/73%   87%/73%     90%/80%   64     2           3
4       90%/N/A   90%/70%   90%/70%     90%/80%   64     3           3
5       90%/N/A   90%/70%   90%/70%     90%/80%   64     3           3
6       90%/N/A   87%/73%   80%/60%     90%/80%   26     1           4

Appendix C. Operational Definitions of Codes in each Module

Module 1 Codes:

  1. All materials are present (data sheets and reinforcers on clipboard, and correct VM is on iPad)

  2. Data sheet not present but reinforcers are on clipboard, and correct VM is on iPad.

  3. Reinforcers not present but data sheet is on clipboard, and correct VM is on iPad.

  4. Data sheet or reinforcers are not present, but correct VM is on iPad.

  5. Data sheet, reinforcers, or clipboard are not present, but correct VM is on iPad.

  6. Data sheets and reinforcers are on clipboard, but incorrect VM is on iPad

  7. Data sheets and reinforcers are on clipboard, but no iPad is present.

  8. Data sheets and reinforcers are not present and incorrect VM on iPad OR iPad is missing

  9. Either datasheet OR reinforcers are missing and incorrect VM is on iPad.

  10. Not time for response entry

Module 2 Codes:

  1. Coach places 2 actors facing one another, approximately 2–3 feet apart.

  2. Actors present, but coach didn’t specifically place them.

  3. Coach only places one actor.

  4. Coach places actors incorrectly (too close, too far, not face to face).

  5. Actors deliver correct SDs

  6. Actors do nothing

  7. Actors do wrong verbal SD, correct motor behavior

  8. Actors do wrong motor SD, correct verbal behavior.

  9. Actors do wrong motor and wrong verbal behavior (e.g., actor walks over to other actor and whispers in ear).

  10. Not time for response entry

Module 3 Codes:

  1. Coach responds with token and behavior-specific praise or begins error correction after approximately 5 sec.

  2. Coach responds before allowing 5 sec after the SD for the Learner to begin their approach.

  3. Coach waits longer than 5 sec following the SD to respond

  4. Coach delivers reinforcement correctly (token and behavior-specific praise).

  5. Coach delivers behavior-specific praise but does not deliver token reinforcer

  6. Coach delivers token reinforcer but does not give behavior-specific praise.

  7. Coach gives both reinforcers (token and praise) to one or both ACTORS.

  8. Coach does not deliver either part of the reinforcer.

  9. (not used)

  10. Not time for response entry.

Module 4 Codes:

  1. Coach correctly stops trial at 5 sec or when initial error occurs

  2. (not used)

  3. Coach waits longer than 5 sec to stop trial after an initial error.

  4. Coach correctly plays VM

  5. Coach does not play VM

  6. Coach gives a manual/verbal prompt instead of VM

  7. Coach shows VM to Actors not Learner

  8. Coach correctly presents SDs and says “Let’s try again” following the correction.

  9. Coach does not correctly present SDs and says “Let’s try again” following the correction.

  10. Not time for response entry

Module 5 Codes

  1. Coach correctly stops trial at 5 sec or when the 2nd error occurs.

  2. (not used)

  3. (not used)

  4. Coach gives correct manual/verbal prompt

  5. Coach incorrectly gives VM or does not deliver manual/verbal prompt.

  6. (not used)

  7. (not used)

  8. Coach correctly presents SDs and says “Let’s try again” following the correction.

  9. Coach does not correctly present SDs and says “Let’s try again” following the correction.

  10. Not time for response entry

Module 6 Codes

  1. All scores entered correctly

  2. Response and # manual/verbal prompts entered correctly (including absence), but # VM is not correct.

  3. Response and # VM entered correctly (including absence), but # manual/verbal prompts is not correct.

  4. # VM and # manual/verbal prompts are entered correctly (including their absence), but Response is not correct.

  5. #VM entered correctly while # manual/verbal prompts and Response are not correct.

  6. # manual/verbal prompts entered correctly while # VM and Response are not correct.

  7. No scores are entered correctly.

  8. (not used)

  9. (not used)

  10. Not time for response entry

References

  1. Allen KD, & Warzak WJ (2000). The problem of parental nonadherence in clinical behavior analysis: Effective treatment is not enough. Journal of Applied Behavior Analysis, 33, 373–391.
  2. Alvero AM, & Austin J. (2004). The effects of conducting behavioral observations on the behavior of the observer. Journal of Applied Behavior Analysis, 37, 457–468. doi: 10.1901/jaba.2004.37-457
  3. Day-Watkins J, Pallathra AA, Connell JE, & Brodkin ES (2018). Behavior skills training with voice-over video modeling. Journal of Organizational Behavior Management, 38, 258–273.
  4. Field SP, Frieder JE, McGee HM, Peterson SM, & Duinkerken A. (2015). Assessing observer effects on the fidelity of implementation of functional analysis procedures. Journal of Organizational Behavior Management, 35, 259–295.
  5. Giannakakos AR, Vladescu JC, Kisamore AN, & Reeve SA (2016). Using video modeling with voiceover instruction plus feedback to train staff to implement direct teaching procedures. Behavior Analysis in Practice, 9, 126–134. doi: 10.1007/s40617-015-0097-5
  6. Higgins WJ, Luczynski KC, Carroll RA, Fisher WW, & Mudford OC (2017). Evaluation of a telehealth training package to remotely train staff to conduct a preference assessment. Journal of Applied Behavior Analysis, 50, 238–251.
  7. Michael J. (1985). Two kinds of verbal behavior plus a possible third. The Analysis of Verbal Behavior, 3, 1–4.
  8. Parsons MB, Rollyson JH, & Reid DH (2013). Teaching practitioners to conduct behavioral skills training: A pyramidal approach for training multiple human service staff. Behavior Analysis in Practice, 6(2), 4–16. doi: 10.1007/BF03391798
  9. Parsons MB, Rollyson JH, & Reid DH (2012). Evidence-based staff training: A guide for practitioners. Behavior Analysis in Practice, 5(2), 2–11. doi: 10.1007/BF03391819
  10. Rosales R, Eckerman DA, & Martocchio N. (2018). An evaluation of train-to-code to teach implementation of PECS. Journal of Organizational Behavior Management, 38, 144–171. doi: 10.1080/01608061.2018.1454873
  11. Rosales R, Stone K, & Rehfeldt RA (2009). The effects of behavioral skills training on implementation of the picture exchange communication system. Journal of Applied Behavior Analysis, 42(3), 541–549. doi: 10.1901/jaba.2009.42-541
  12. Ward-Horner J, & Sturmey P. (2012). Component analysis of behavior skills training in functional analysis. Behavioral Interventions, 27, 75–92. doi: 10.1002/bin.1339
