Behavior Analysis in Practice
2018 Sep 18;12(2):387–395. doi: 10.1007/s40617-018-00282-w

A Brief Evaluation of a Pictorially Enhanced Self-Instruction Packet on Participant Fidelity across Multiple ABA Procedures

Thouraya Al-Nasser, W. Larry Williams, Brian Feeney
PMCID: PMC6745586  PMID: 31976243

Abstract

Enhanced self-instructions have been previously shown to lead to high levels of training protocol fidelity by lower-level staff applying applied behavior analysis (ABA) protocols. An A-B replication series design across participants was used to gather preliminary evidence on the breadth of benefit of this approach to staff training, considered across common training tasks. Participants (N = 14) with no previous background in ABA learned how to conduct either two preference assessments (paired stimulus and multiple stimulus without replacement) or two acquisition discrete trial programs (match to sample and motor imitation) under two different self-instruction conditions. Procedures were trained using textual information only (i.e., standard packet) or textual information enhanced with visual cues (i.e., enhanced packet). Eight of the participants received a standard packet followed by an enhanced packet; six received them in reverse order. Each sequence was replicated within participants across the two tasks. No follow-up feedback or training was provided during either condition so as not to contaminate assessment of the effects of these self-instructions on procedural fidelity. Results showed that participants achieved near-mastery levels of performance under the enhanced packet condition. Seven of the eight participants who received the standard packet first improved in fidelity after receiving the enhanced packet. Although there was some evidence that gains maintained for some participants when new tasks were trained with the standard packet, reintroduction of the enhanced packet led to high fidelity in all cases. It appears that previous experimental findings showing the benefit of enhanced self-instructional training on the procedural fidelity of lower-level training staff apply across a wide range of common ABA tasks.

Keywords: Staff training, Enhanced packet, Discrete trial training, Accuracy, Autism, Developmental delay


Staff training is widely understood to be an important activity for any applied behavior analysis (ABA) agency, but it is often difficult to find the resources needed for sufficient staff training (LeBlanc, Gravina, & Carr, 2009). ABA practitioners regularly deal with staff who have limited training in the specific skills needed to implement behavioral programs. Despite the need, staff training methods that are time-consuming, costly, or labor intensive or that require extensive oversight are unlikely to be adopted by behavior analysts. There is a clear need for staff training methods that are practical, effective, and efficient for both the trainer and trainee and that can be widely applied across different work settings, cultures, and agency or training goals (Higbee et al., 2016).

ABA researchers have examined a wide range of methods used to train staff to implement discrete trial training (DTT) or other procedures. These methods have typically included some combination of behavioral skills training (BST), self-instructional (SI) manuals, live face-to-face training, or video training with feedback or role-playing (Dib & Sturmey, 2007; Larson & Hewitt, 2005; Macurik, O’Kane, Malanga, & Reid, 2008).

BST has been shown to be one of the most effective training packages for increasing the procedural fidelity of staff (e.g., Nigro-Bruzzi & Sturmey, 2010; Sarokoff & Sturmey, 2004, 2008). Implementing a BST package can, however, be resource intensive because it requires the extended presence of a behavior analyst or expert trainer to train, model, and give feedback to staff prior to independent use of protocols with a client. BST also calls for monitoring staff performance via data collection and periodic assessments of staff members’ skills. This can be impractical for agencies or practitioners dealing with budgetary constraints, high staff turnover, or limited time.

SI manuals are another widely used method that has been shown to be effective and efficient for teaching ABA procedures to staff and caregivers (e.g., Graff & Karsten, 2012; Salem et al., 2009; Thiessen, Fazzio, Arnal, Martin, & Keilback, 2009). SI manuals typically consist of written or computerized material that trainees can review at their own pace. Trainees typically must demonstrate mastery of one section before moving to others. Mastery assessments and the training manuals themselves are usually supplemented with quizzes, additional testing opportunities, videos, role-playing, and feedback (e.g., Nosik & Williams, 2011; Pollard, Higbee, Akers, & Brodhead, 2014; Thomson et al., 2012; Vladescu, Carroll, Paden, & Kodak, 2012). Although SI can be less resource intensive than BST, the use of complex or technical language unfamiliar to new trainees (Roscoe, Fisher, Glover, & Volkert, 2006; Thomson et al., 2012), the addition of supplemental assessments and trainings, and the lengthy manuals themselves (Salem et al., 2009) all add to the training burden.

Recent training innovations have attempted to solve these problems with SI. Graff and Karsten (2012) experimentally evaluated the effects of an SI package without performance feedback on staff members’ implementation, scoring, and analysis of preference assessments (both paired stimulus [PS] and multiple stimulus without replacement [MSWO] assessments). Using a multiple-baseline design across participants, Graff and Karsten provided participants with standard SI packets based on the method sections of published studies (DeLeon & Iwata, 1996; Fisher et al., 1992) during baselines of varying lengths. In the next phase, teachers received an enhanced SI packet that contained step-by-step nontechnical instructions, pictures, and matching data sheets. Training continued until mastery criteria were met, and task performance feedback was provided. None of the teachers met mastery for PS or MSWO assessment using the standard packet alone, but enhanced SI packets led to near-perfect accuracy across the skills (e.g., a mean of 98% for PS and a mean of 99% for MSWO). If this exciting finding can be replicated across a wider range of tasks and the procedure made even simpler, it would be of considerable practical importance to the area of staff training and ABA. This was the goal of the present study.

An A-B replication series was used to provide evidence on whether these earlier results extend to a broader set of training tasks, using enhanced packets without mastery criteria or performance feedback. Following training with a standard SI packet, an enhanced SI packet was implemented with the same skill (a second set of participants received training in the reverse order). In either case, training on a second skill then commenced using the same order of training, which allowed examination of the degree to which performance generalized across different tasks and of the impact of introducing the enhanced SI packet as the initial step in training.

Method

Participants and Setting

Fourteen undergraduate university students (10 women and 4 men), all 18 years of age or older, participated in this study. At the time of the study, 60% of the participants were majoring in psychology; the remaining participants were distributed among neuroscience, English literature, nursing, and biochemistry. Ages ranged from 18 to 33 years. Participants with previous training or experience in conducting DTT or in working with individuals diagnosed with an intellectual disability or autism were excluded from the study. Trainings and assessments were conducted in a 5 m × 3 m room containing two chairs and a small table placed in the middle of the room. A smaller table and a box used for holding condition materials were placed next to the larger table.

Design

An A-B replication series design across participants was used to evaluate standard and enhanced SI packets (similar to Graff & Karsten, 2012) for establishing participants’ fidelity with running ABA protocols. Two specific DTT protocols were used, match to sample (MTS) and motor imitation (MI), as well as two specific forms of preference assessment (PA), paired stimulus (PS) and multiple stimulus without replacement (MSWO). The order and type of ABA protocol presentation were counterbalanced across participants to control for potential learning sequence effects and to allow for uncontaminated assessment of the fidelity established during baseline (see Table 1). The first eight participants received the standard SI packet followed by an enhanced packet across their protocol sequence. The last six participants experienced the reversed training sequence.

Table 1.

Sequence of conditions for all participants

Participant | First skill | Training sequence | Second skill | Training sequence
S1 | PS | Standard (A) → Enhanced (B) | MSWO | Standard (A) → Enhanced (B)
S2 | MSWO | Standard (A) → Enhanced (B) | PS | Standard (A) → Enhanced (B)
S3 | PS | Standard (A) → Enhanced (B) | MSWO | Standard (A) → Enhanced (B)
S4 | MSWO | Standard (A) → Enhanced (B) | PS | Standard (A) → Enhanced (B)
S5 | MTS | Standard (A) → Enhanced (B) | MI | Standard (A) → Enhanced (B)
S6 | MI | Standard (A) → Enhanced (B) | MTS | Standard (A) → Enhanced (B)
S7 | MTS | Standard (A) → Enhanced (B) | MI | Standard (A) → Enhanced (B)
S8 | MI | Standard (A) → Enhanced (B) | MTS | Standard (A) → Enhanced (B)
S9 | MTS | Enhanced (B) → Standard (A) | MI | Enhanced (B) → Standard (A)
S10 | MI | Enhanced (B) → Standard (A) | MTS | Enhanced (B) → Standard (A)
S11 | MTS | Enhanced (B) → Standard (A) | MI | Enhanced (B) → Standard (A)
S12 | MI | Enhanced (B) → Standard (A) | MTS | Enhanced (B) → Standard (A)
S13 | MTS | Enhanced (B) → Standard (A) | MI | Enhanced (B) → Standard (A)
S14 | MI | Enhanced (B) → Standard (A) | MTS | Enhanced (B) → Standard (A)

The training phases are referred to as the A-B design: in training phase A, standard written instructions were used; in training phase B, the enhanced instructions were used.

It should be noted that a clinical replication design of this kind is used to assess the consistency of effects seen across participants and task types, not to establish experimental control. This seems appropriate given that experimental control was already shown in the Graff and Karsten (2012) study, and the use of a design such as a multiple baseline could have complicated comparisons across task types, requiring a focus not only on the SI training type but also on the length of its use.

Materials

Standard SI packets used technical behavior-analytic terminology taken from the original studies (PA procedures: DeLeon & Iwata, 1996; Fisher et al., 1992) or developed by this paper’s first author (DTT procedures). A glossary of terms based on Cooper, Heron, and Heward (2007, pp. 689–708) and a blank data sheet (without completed examples) were included, along with definitions of discriminative stimulus (SD), reinforcement, and related terms. The standard SI packet did not include pictures or examples of how to run the protocols.

Enhanced SI packets used minimal technical jargon and were based on Graff and Karsten’s (2012) protocol for PA procedures, with a similar protocol developed by (and available from) this paper’s first author for DTT procedures. Detailed examples, explanations of ABA strategies (e.g., advantages of immediate over delayed reinforcement), and pictures were included depicting steps for each procedure (e.g., least-to-most prompting for use with DTT). Data sheets included step-by-step instructions and pictures on how to use them. Enhanced PA SI packets from Graff and Karsten (2012) included examples and pictures of how to run the assessments, material, session preparation, data recording, and analysis.

Materials used for training sessions included a pencil, video camera, edibles (e.g., pretzels, Cracker Jack, candy corn, cookies, corn chips, marshmallows, raisins, chocolate candy), and tangibles (e.g., a teddy bear, a ball, cards, books, crayons, and blocks).

Procedure

Upon arrival at the training setting, participants reviewed the details of the study, including their training sequence and procedure type, and completed consent forms. Participants completed all assigned training and procedures within a single session, which lasted an average of approximately 1 h.

Participants were handed the SI packet linked to their condition assignment and were informed that they could not ask questions once they began. Participants were permitted to have the SI packet on hand throughout their demonstration of skills. A box with materials (e.g., reinforcers) to conduct the procedure was placed next to the participant. Participants were given up to 30 min to read the SI packet and prepare to conduct the procedure with a confederate acting as a client. The confederate, role-playing as a child with an intellectual disability, followed a script outlining how to respond to the participants’ teaching instructions. This approach allowed for equal distribution of response types across participants, providing equal opportunities for participants to demonstrate correct prompting hierarchies or error correction procedures outlined in the SI packets (e.g., least-to-most prompting steps during DTT tasks or response patterns during PA tasks, such as pick one item, pick more than one item, or pick none). Confederate scripts were specific to each type of ABA procedure. Scripts for DTT tasks consisted of 20 fixed trials, whereas scripts for PA tasks consisted of 56 trials. Participants were given a 15-min break and a small snack between conditions to avoid fatigue. All sessions were video recorded for data collection and coding purposes. At the end of the study, participants responded to a social validity questionnaire regarding their learning experience.

Dependent Measures and Data Collection

The dependent variable in this study was the percentage of training steps completed correctly for each procedure (referred to in this paper as fidelity). This was calculated by dividing the total number of steps completed correctly by the total number of steps possible for each procedural role-play multiplied by 100. Error patterns could include skipping a step altogether, adding a feature to a step that was not defined (e.g., delivering a second verbal prompt without pointing to the stimuli as in a gestural prompt), or delivering a prompt out of sequence (e.g., jumping to a physical prompt and skipping the gestural prompt on a least-to-most prompt hierarchy). Participants’ responses were scored based on three groups of training skills used in DTT and PA trials: preparation, one-on-one training, and consequence procedures. See Table 2 for specific behaviors.
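The fidelity calculation described above can be sketched as a few lines of Python. This is purely an illustration of the arithmetic (correct steps divided by possible steps, times 100); the study scored sessions by hand from video, and the example counts are hypothetical.

```python
def fidelity(correct_steps: int, total_steps: int) -> float:
    """Percentage of training steps completed correctly (procedural fidelity)."""
    if total_steps <= 0:
        raise ValueError("total_steps must be positive")
    return correct_steps / total_steps * 100

# Hypothetical example: 17 of 20 scripted DTT trial steps performed correctly
print(fidelity(17, 20))  # 85.0
```

Skipped steps, added undefined features, and out-of-sequence prompts would each simply count as incorrect steps in the numerator.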

Table 2.

Scoring of Specific Behaviors of Participants

Category | DTT tasks | PA tasks (DeLeon & Iwata, 1996; Fisher et al., 1992)
Pretrial preparation | Selecting potential reinforcers; preparing proper data sheets; getting the confederate’s attention prior to delivering the SD | Preparing proper data sheets and stimulus presentation; placing a consistent stimulus array on the tabletop; getting the confederate’s attention prior to delivering the SD
Trial implementation | Placing the correct stimuli on the table; delivering the correct SD; delivering a reinforcer if the confederate independently emitted the correct response, or delivering the correct sequence of the least-to-most prompt hierarchy if the confederate did not respond correctly | Placing the correct stimuli on the table; delivering the correct SD; initiating the correct consequence sequence (e.g., if the confederate made a single selection of a stimulus, multiple selections of stimuli, or no response)
Consequences | Removing stimuli from the tabletop; delivering the corrective prompting hierarchy; praising; recording data accurately | Delivering the correct consequence by allowing the confederate time to consume the selected stimulus for a correct response, or by removing stimuli from the tabletop, blocking, or re-presenting the same trial; recording data accurately

Interobserver Agreement (IOA)

IOA was calculated on 33% of randomly selected training sessions by rating the performance of the participants on targeted skills and the adherence of the confederate with procedural scripts. Data were calculated by dividing the number of agreements by the number of agreements plus disagreements and multiplying by 100. IOA was 99.7% for ratings of participants’ skills and 95% on confederate script adherence.
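The point-by-point agreement formula used here (agreements divided by agreements plus disagreements, times 100) can be sketched in Python. The observer records below are hypothetical, shown only to illustrate the computation.

```python
def interobserver_agreement(obs_a: list, obs_b: list) -> float:
    """Point-by-point IOA: agreements / (agreements + disagreements) x 100."""
    if not obs_a or len(obs_a) != len(obs_b):
        raise ValueError("observer records must be non-empty and equal length")
    agreements = sum(a == b for a, b in zip(obs_a, obs_b))
    return agreements / len(obs_a) * 100

# Hypothetical example: two observers scoring 10 steps (1 = correct, 0 = error)
rater1 = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
rater2 = [1, 1, 0, 1, 0, 1, 0, 1, 1, 1]
print(interobserver_agreement(rater1, rater2))  # 90.0
```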

Results

Analysis of Standard-to-Enhanced Sequences for Various Tasks in Eight Participants

The purpose of the standard-to-enhanced A-B sequence was to assess the initial level of fidelity using the standard SI packet across a range of procedures and then to determine whether there was a change in fidelity for participants who were not performing at an optimal level when the enhanced SI packet was used. This same replication series was then repeated with new tasks, allowing for the assessment of maintenance and generalization of task skills with new procedures following the reintroduction of the standard packets and then for the reexamination of levels of fidelity reached using the enhanced SI packets, particularly for any participants not showing optimal performance with standard training.

The overall mean accuracy using the standard packet was 42% with a range of 0% to 73% for the first ABA procedure encountered by the eight participants. Using the enhanced packet, the participants’ mean accuracy was 84% with a range of 68% to 95% for the first procedure. Overall, seven of the eight participants improved their skill fidelity from standard to enhanced SI regardless of the initial training task. Levels of fidelity reached were generally quite good, especially given the lack of feedback (see Fig. 1; see also Fig. 2 for the average percentage correct for each prompting hierarchy). The one participant who did not improve with enhanced training, S6, was already second best in fidelity for MI using the standard packet. This participant’s procedural fidelity fell to zero in the MTS task that followed, however; performance improved when enhanced SI training was introduced for that task.

Fig. 1.

Fig. 1

Average procedural fidelity scores (total percentage correct) for all training and task conditions for all participants

Fig. 2.

Fig. 2

Average procedural fidelity scores (percentage correct) for each prompting hierarchy for all participants

A similar pattern in overall means was evident when participants moved to the second procedure. When using the standard packet, five of the participants showed higher treatment fidelity for the second procedure compared to their first procedure, suggesting a possible but weak learning effect carrying over from training. When using the standard packet for the second trained procedure, total mean accuracy was 59% with a range of 0% to 93%; when using the enhanced packet, the total mean accuracy was 85% with a range of 59% to 100%. Overall, seven of the eight participants improved their skill fidelity when moving from standard to enhanced SI within the second task. The two participants (S6 and S8) who showed the largest deterioration of skills when presented with the second task both reverted to high levels of performance following training using the enhanced packet.

Analysis of the Enhanced-to-Standard Condition Sequence for Six Participants

One weakness of the previous analysis is that enhanced SI packets could have led to better performance merely as a result of additional practice following the standard SI packet. Six additional participants were therefore trained in the reverse order to determine whether starting training with enhanced SI packets leads to high levels of fidelity.

The total mean fidelity score across all six participants who received an enhanced SI packet for their first task (see Fig. 1) was 94% with a range of 76% to 99%; when the standard SI packet was introduced on the same task, average fidelity was 96% with a range of 80% to 100%. The same pattern was seen in the second training task, with a mean fidelity for the enhanced packet of 97% with a range of 91% to 100%, and for the subsequent standard packet, a mean of 99% with a range of 96% to 100% (see Fig. 1). Thus, all participants showed high levels of training task fidelity in association with use of the enhanced SI packet, which continued unabated when standard methods were used. This controls for the possibility that practice alone accounted for the earlier results and also suggests that the deterioration seen with some of the first eight participants when the second task was introduced was likely due to the withdrawal of SI enhancements rather than fatigue or some other factor.

Secondary Analyses: Social Validity

At the end of their participation, participants were asked to complete a social validity survey about their experience with various features of the training. Participants endorsed responses to those questions on a 5-point Likert scale with 1 being strongly disagree, 2 disagree, 3 no opinion, 4 agree, and 5 strongly agree (see Table 3). Overall, participants noted that the introduction of pictures and diagrams regarding what to do, what not to do, and how to deliver proper responses and prompting hierarchies was the most vital component for accurate DTT administration. In response to the question “Which part of this training did you prefer?” 93% of the participants endorsed the addition of pictures and diagrams in the enhanced packet, 86% endorsed the examples used in the enhanced packet, 29% endorsed the language used in the enhanced packet, and 14% endorsed the language used in the standard packet (see Table 4).

Table 3.

Social validity summary (percentage of participants selecting each response)

Question Strongly disagree Disagree No opinion Agree Strongly agree
(1) (2) (3) (4) (5)
1. I had some questions on some sections of the training packet. 7% 36% 50%
2. I find it more suitable for further research to answer questions by participants on the training packet prior to the beginning of the study. 36% 29% 14%
3. It was difficult to run DTT or PA after reading the standard written instructions. 14% 50% 14% 14%
4. After a few practice opportunities, it became easier to run DTT/PA. 36% 57%
5. The enhanced instructions were easier to follow than the standard written instructions. 7% 14% 71%
6. The pictures in the enhanced instructions made it easier to run DTT/PA compared with the standard written instructions. 7% 14% 71%
7. Having less technical terms and jargon in the enhanced instructions made them easier to understand and implement than the standard written instructions. 14% 50% 29%
8. The mistake correction prompting hierarchy was the most difficult part to implement. 7% 29% 29% 29%
9. I prefer to see an actual demonstration of DTT or PA with the prompting hierarchy procedure before going over the SI packet. 7% 21% 29% 7% 29%
10. Recording data was easier after going over the enhanced instructions than after going over the standard written instructions. 7% 21% 29% 36%
11. After finishing this training, I feel confident to work with an individual diagnosed with a developmental disability. 14% 7% 29% 36% 7%

Table 4.

Training components preferred by participants

Percentage Component selected
14% Language used in the standard instructions
29% Language used in the enhanced instructions
93% Pictures and diagrams in the enhanced instructions
86% Examples used in the enhanced instructions
0% None of the above

Secondary Analyses: Subskill Performance across All Participants

The purpose of the secondary analysis of subskill performance across the experiment was to examine preliminary evidence to see if the response pattern noted previously was consistent for all subskills; it was not. Although the results show that use of the enhanced SI packet worked well overall, there were some aspects of performance under the procedural behavior targets that did not follow the pattern. This analysis was necessarily speculative, but it is added here to suggest possible areas for further improvement in SI enhancement.

One area noted was performance of gestural prompts. The introduction of the enhanced SI packet did not seem to improve gestural prompt performance regardless of when enhancements were introduced (see Fig. 2). Overall, gestural prompts were not adequate. It is not clear whether this was due to challenges in capturing adequate gestural behaviors within the enhanced SI conditions or whether further examples and pictures are needed to break this prompt into smaller steps.

Participants showed higher levels of accuracy when running the MI protocol compared to the MTS protocol for the DTT program. This may be due to the number of steps involved in running each protocol. The MI protocol involved fewer steps, and there were no additional teaching stimuli involved in running it, whereas in the MTS protocol, the number of steps and the levels of the prompting hierarchy are more demanding and complex. For example, a partial physical prompt for the MI program consisted of 21 potential steps the participant could engage in to prompt the confederate to respond, whereas for MTS, there were 32 potential steps when delivering the partial physical prompt.

Similarly, participants scored higher levels of accuracy on the PS assessment as compared to the MSWO. This may be due to the complexity of the MSWO assessment, as participants had to manipulate up to eight stimuli for the MSWO, whereas in PS they had only two. Participants in this training group also shared some common responses and errors when running the PAs. For example, when running the MSWO, participants would remove at least one stimulus if the confederate did not pick any of the stimuli when asked to “choose one.” Another common error was in recording data: some participants did not circle the number selected but rather recorded the number without writing the stimulus selected. Some participants would also record what they thought the respondent would have chosen in the presence of a “no response” rather than the actual nonresponse on the part of the confederate learner. These early errors may be the result of approaching the PA sessions as helping or teaching choice rather than testing it and could be the subject of future research.

A common error participants made in the MTS protocol was not removing all the stimuli from the tabletop when the confederate did not match the sample card with the correct similar-choice stimulus on the table. Another common error was delivering an incorrect SD; for example, when saying “match,” participants added “please” or “can you please” before the word (e.g., “Please match,” or “Can you please match?”). Although these are minor errors, they can be significant when teaching individuals with limited language skills who are not yet under the stimulus control of typical vocal requests. Participants showed more frequent errors when gestural and partial physical prompts were introduced. Participants would frequently skip the gestural prompt and jump to a full physical prompt, guiding the confederate, hand over hand, to match the sample card with the correct one on the table. When using a partial physical prompt, participants would employ strong hand contact while guiding the confederate to match a given card to the one on the table, similar to a full physical prompt. Teaching this detail in prompting hierarchies is well known to help avoid prompt dependency, which results when these errors are not corrected early in training. The fourth common error some participants shared in running the MTS protocol was not securing the attention of the confederate; for example, they would deliver the SD “match” without getting the confederate’s attention in the form of eye contact or appropriate looking at the materials.

For the MI protocol, participants had higher accuracy rates in delivering the SD “do this” compared to the MTS protocol. Some participants skipped the gestural prompt and went straight to either a partial physical or a full physical prompt when prompting the confederate to engage in the designated behavior after modeling it and presenting the SD “do this.” Partial physical prompts were also often implemented hand over hand, similar to full physical prompts.

Overall, enhanced SI conditions achieved higher procedural fidelity when compared to the standard SI conditions, but this effect was not uniform. Future research will be needed to fit enhanced SI training to specific task demands.

Discussion

Overall, these results replicate and extend the findings of Graff and Karsten (2012) and suggest that enhanced SI delivers a broader effect on trainee behavior. Enhanced SI training appeared to be effective and superior to standard SI training. The improved fidelity associated with enhanced SI training was seen with several kinds of training tasks even when feedback was not provided. When enhanced SI packets were used first, performance was maintained in the same task under standard training conditions. When they were used second, performance improved for almost all participants and for all who had shown poor fidelity using the standard training approach.

Relatively few studies have investigated the efficacy of SI packages for teaching ABA skills to naive staff without performance feedback in the form of quizzes or explicit training. Other methods of staff training may require the presence of a qualified trainer to model targeted skills and to give immediate feedback, or may require staff to watch a video as a supplement to the training manuals, which in many settings might not be feasible. Results suggest that enhanced SI packets may be an economical option for ABA practitioners in real-life settings. Notably, the present study demonstrated large gains in treatment fidelity in the absence of the corrective feedback and extended training sessions typically seen with other training procedures (e.g., BST).

Low levels of gestural prompting fidelity were observed even under the enhanced SI condition. Thus, it should not be assumed that enhanced SI packets lead to high performance in all areas; rather, they reliably produce high performance only in those areas specifically trained. Also, the skills required for teaching any person change as learning occurs. For example, the use of physical prompting and gestures becomes unnecessary as skills are acquired, so the skill set needed for effective instruction becomes much smaller. Indeed, it can be argued that much of the staff training literature indicating effective teaching methods may report inflated results due to the gradual reduction in the training skills needed as learners begin to require only reinforcement of correct responses.

As with all replicated A-B designs, experimental control is limited, but in this study’s case, improvements were rapid and stable and comport with previous studies. These results support the value of the A-B design as a research extension tool for validating established findings in real-life practitioner settings. The A-B design avoids some of the rigidity of other designs that may limit experimental options within an applied setting. For example, the extended baselines required for a multiple-baseline design may be impractical or ethically undesirable in some situations. The brevity of an A-B design, although limiting experimental control, may have its place within the larger research picture.

Further exploration of SI packet–based instruction may be needed before wider adoption by ABA practitioners. The length of time and cost needed to assimilate skills using enhanced training packets could be compared to full BST training. Future research might examine the variables controlling errors under the various SI packet conditions and how to avoid them. Examining the impact of specific elements of enhanced SI training packets, such as the contribution of picture examples, on procedural integrity outcomes could refine packet elements. A look into the use of enhanced packets as a guide for parents working with their own children diagnosed with autism or intellectual disabilities might also serve to increase their applied utility. In this study, some of the participants graduated a few weeks after participating, which limited follow-up assessment. Future research in this area could examine the maintenance of taught skills over time. It would be helpful if future research identified potential error patterns that result from the SI packets provided. It would be interesting to see whether one type of error pattern (omission or commission) was more prevalent for a specific SI packet. Statistical tests could also be used to supplement the results; group measures of central tendency and error variance could be superimposed on the graphs. These refinements to our knowledge of SI packets could carry the field to the point where this method of ABA training becomes a standard of practice for teaching other skills beyond common ABA procedures.

Enhanced SI packets appear to offer a cost-effective, socially valid, and fast self-training package that does not require a trainer to provide one-to-one feedback. This type of training could substantially improve the quality and speed of services, at least for the initial training of new staff. Training with an enhanced SI packet could be followed by direct one-to-one training, rather than relying on one-to-one training from the start, cutting training time significantly. Further work on enhanced training holds promise for substantially more efficient approaches to training staff in program-delivery methods.

Acknowledgements

The authors would like to thank Dr. Steven C. Hayes for his insightful analysis and contribution to the preparation of this manuscript.

Funding

Thouraya Al-Nasser has not received any research grants from any company or organization to sponsor this study. W. Larry Williams has not received any research grants from any company or organization to sponsor this study. Brian Feeney has not received any research grants from any company or organization to sponsor this study.

Compliance with Ethical Standards

Conflict of Interest

Thouraya Al-Nasser declares that she has no conflict of interest. W. Larry Williams declares that he has no conflict of interest. Brian Feeney declares that he has no conflict of interest.
