Behavior Analysis in Practice. 2018 Sep 4;12(1):209–215. doi: 10.1007/s40617-018-00278-6

Targeting Staff Treatment Integrity of the PEAK Relational Training System Using Behavioral Skills Training

Adam D. Hahs, James Jarynowski
PMCID: PMC6411556  PMID: 30918787

Abstract

PEAK is a language curriculum dedicated to expanding language via the science of behavior analysis. The present study evaluated the extent to which a behavioral skills training (BST) program impacted treatment integrity for six direct-care staff implementing the Promoting the Emergence of Advanced Knowledge Relational Training System (PEAK) with six individuals with autism. We used a 2-h, workshop-style BST package (instruction, modeling, rehearsal, and feedback) targeting PEAK treatment integrity. The results indicate that BST improved overall procedural integrity for all staff, and all learners with autism improved their total percentage of correct, independent responding on the targeted programs. Further, all (6/6) staff maintained integrity to PEAK well above baseline levels, and all individuals with autism maintained high levels of performance on the targeted programs. The importance of appropriate training and treatment integrity in the implementation of PEAK is discussed.

Keywords: BST, PEAK, Treatment integrity, Relational frame theory


The integrity with which paraprofessionals and other frontline staff implement basic behavior-analytic techniques (e.g., discrete trial training) is imperative to the overall success of the programs and, more importantly, to the learner's success. Fixsen, Blase, Duda, Naoom, and Van Dyke (2010) specified practices (i.e., training and coaching) as a core component of the overall integrity with which staff implement interventions. They also noted that high treatment integrity often leads to more positive child outcomes. To date, there is only one published study in which researchers investigated the use of what is known in the staff training literature as behavioral skills training (BST) for the Promoting the Emergence of Advanced Knowledge Relational Training System (Belisle, Rowsey, & Dixon, 2016). While curricula dedicated to the widespread facilitation of language are societally important, the impact of such curricula relies on the extent to which staff can correctly implement the programs and make data-based decisions concerning learner performance.

PEAK (Dixon et al., 2014a; Dixon, Whiting, Rowsey, & Belisle, 2014b; Dixon, 2015) is a protocol that has garnered attention in the behavior-analytic community as a viable, evidence-based curriculum for the emergence of language and cognition. The PEAK Direct Training module (PEAK-DT) consists of 184 discrete trial training programs largely couched in the perspective of Skinner's (1957) Verbal Behavior account of human language. The core components of PEAK-DT include evaluation and training of foundational learning skills, perceptual learning skills, verbal comprehension skills, verbal reasoning, memory, and mathematical skills (Rowsey, Belisle, & Dixon, 2015). Empirical investigations of the utility of PEAK have amassed over the past 5 years, ranging from McKeel, Dixon, Daar, Rowsey, and Szekely's (2015) randomized controlled trial of PEAK-DT to single-case research design analyses (e.g., Dixon et al., 2014a, b). It is often the case that PEAK and similarly focused language assessments are conducted by well-trained behavior analysts or graduate students. However, PEAK was developed with the intent of being user-friendly, such that behavior analysts, frontline staff, and parents or guardians could implement the system. As mentioned, there has been only one empirical exploration of the implementation of the PEAK system by untrained staff (i.e., Belisle et al., 2016). Thus, additional empirical investigations into the most efficacious staff training approaches for PEAK are warranted.

The predominant training approach within behavior analysis is behavioral skills training (BST). Used across myriad settings, people, and behaviors, BST has a robust body of empirical support indicative of its efficacy (e.g., Hogan, Knez, & Kahng, 2015; Morrier, Hess, & Heflin, 2011; Rosales, Stone, & Rehfeldt, 2009; Sarokoff & Sturmey, 2004). However, as Corrigan, Steiner, McCracken, Blaser, and Barr (2001) noted, two barriers are common: (a) individual service providers may lack the necessary knowledge and skills to use what has been practiced, and (b) organizational dynamics often undermine the treatment team's ability to implement and maintain innovative approaches. Further, they found that the following strategies are useful for overcoming these barriers and can effectively teach a variety of skills: packaging evidence-based practices so that specific interventions are more accessible and user-friendly; educating providers about relevant knowledge and skills; and addressing the organizational dynamics of the team to facilitate the implementation of innovations.

Of particular interest for the present study were the first and second strategies noted above (i.e., making the intervention user-friendly and educating providers), in that we sought to increase the integrity with which six school paraprofessionals accurately implemented the PEAK-DT module via BST. Belisle et al. (2016) noted several possible limitations, including a lack of maintenance probes over time and the fact that interobserver agreement (IOA) was not conducted on learner performance. Thus, the present study sought to address those concerns by conducting maintenance probes at more temporally distal points and calculating IOA on learner responding. We also assessed the extent to which, given paraprofessionals' improvements in treatment integrity after BST, student performance on targeted PEAK programs increased. Finally, we aimed to assess whether the skills staff learned during BST would maintain at temporally distal points and, in turn, impact learner performance.

Method

Participants

Participants were six school-based paraprofessionals at a Southwestern school. Each participant had no more than 1 year of behavior-analytic experience, and none had any prior experience with PEAK. Additionally, six students enrolled at the school served as the learners with whom the paraprofessionals worked throughout the study in a dyadic arrangement.

Ann, a 22-year-old paraprofessional, was paired with Borris, a 12-year-old boy. Karl, a 21-year-old male paraprofessional, was paired with Dee, an 11-year-old boy. Gretch, a 31-year-old female paraprofessional, was paired with Mick, a 10-year-old boy. Randy, a 27-year-old male, was paired with Reese, a 12-year-old female. Cathy, a 22-year-old paraprofessional, was paired with Nolan, a 13-year-old boy. Finally, Gary, a 24-year-old male, was paired with Jules, a 12-year-old boy. All students had a diagnosis of autism spectrum disorder, and five of the six had 2 prior years of behavior-analytic instruction; Nolan had had 3 prior years of ABA instruction. Jules was the only student who engaged in challenging behavior (i.e., physical aggression and property destruction).

Setting and Materials

The study took place in a school for individuals with autism. Baseline data were collected in the students' classrooms, while training took place in the staff training room. Materials included each student's PEAK binder and the stimuli appropriate to each program for conducting discrete trial training. The PEAK binders included the student's PEAK assessment results, data sheets, and a detailed description of each of the three programs to be used (i.e., 3C - Object Permanence, 5G - Vocal Mands, and 8I - Intraverbal: Personal Information). Throughout the study, we used PEAK data sheets that recorded the trial number, the stimuli used, and the level of prompt required to occasion the appropriate response.

Response Measurement and Interobserver Agreement (IOA)

School paraprofessionals' treatment integrity and student performance during PEAK trials served as the dependent variables. Like Belisle et al. (2016), we assessed paraprofessional performance via the PEAK Implementation Checklist (PEAK-IC; see Appendix). All items on the PEAK-IC were taken from the PEAK-DT module (Dixon, 2014). Training blocks consisted of five trials, and the first author scored the number of steps performed correctly and incorrectly for each block. Treatment integrity was calculated as the number of correctly performed steps divided by the total number of applicable steps, multiplied by 100. Because some steps were not possible in all trials, percent correct was used to account for the unequal number of steps per trial. If, for example, a student responded correctly to the first presentation of the discriminative stimulus (SD), re-presenting the SD and providing a prompt would not be appropriate, and those steps were coded as "n/a". Likewise, if a student initially responded incorrectly, "provides reinforcement if correct" was not applicable and was also scored "n/a". Any step scored "n/a" was excluded from the treatment integrity calculation. All trials were videotaped so that we could obtain IOA, which was calculated as the number of agreements divided by the number of agreements plus disagreements, multiplied by 100. Two independent observers assessed IOA for paraprofessional treatment integrity on 30% of total trials, and the resulting IOA was 98%. IOA for student performance was calculated on 30% of all videotaped trials, and agreement was 95%.
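The two percentage calculations above can be sketched as follows. This is a minimal illustration only; the function names and score labels are hypothetical and not part of the PEAK-IC itself.

```python
def treatment_integrity(step_scores):
    """Percent of applicable PEAK-IC steps performed correctly in a trial block.

    step_scores: a list of "correct", "incorrect", or "n/a" entries.
    Steps scored "n/a" are excluded from the calculation, as described above.
    """
    applicable = [s for s in step_scores if s != "n/a"]
    correct = sum(1 for s in applicable if s == "correct")
    return 100.0 * correct / len(applicable)


def interobserver_agreement(observer_a, observer_b):
    """Point-by-point IOA: agreements / (agreements + disagreements) x 100."""
    agreements = sum(1 for a, b in zip(observer_a, observer_b) if a == b)
    return 100.0 * agreements / len(observer_a)
```

For example, a block scored correct, correct, n/a, incorrect, correct has four applicable steps, three of them correct, yielding 75% treatment integrity.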

Procedures

PEAK Assessment

We conducted the PEAK-Direct Training assessment with each of the six students prior to their involvement in the baseline phase. Students' total skills on the 184-item assessment prior to BST ranged from 10% to 28% (i.e., 18-52 PEAK programs). Based on these results, we identified three programs for which all six students showed deficits and chose 3C - Object Permanence, 5G - Vocal Mands, and 8I - Intraverbal: Personal Information for inclusion in the study.

Baseline

All staff attended a meeting at which we discussed the general purpose of PEAK and how discrete trials were to be run within it. Prior to baseline data collection, staff had the opportunity to review the PEAK-DT programs identified during the PEAK assessment for use with their students. Staff were allowed to ask general questions before they worked in one-on-one dyads with their assigned students. If no further questions were posed, staff began by implementing the first program for 10 trials (i.e., two trial blocks), followed by the second program for 10 trials, and finally the third program for 10 trials. This approach continued throughout the study. So as not to influence staff implementation, no feedback or modeling was provided by the first author, who served as the trainer throughout the entirety of the study. Each trial block lasted between 3 and 10 min, and a minimum of two trial blocks per program were conducted each session.

Behavioral Skills Training (BST)

BST was provided in a workshop format. As in baseline, staff were given basic information regarding PEAK-DT implementation and encouraged to ask questions during the instruction portion of BST. Once staff had exhausted their questions, the first author modeled a series of appropriately implemented discrete trials per the PEAK-IC. Specifically, the first author conducted one trial in which a volunteer was asked to respond correctly and a second trial in which the volunteer was asked to make an error, such that staff could observe the entire PEAK-IC. This process was repeated across three randomly selected programs from the PEAK-DT module. We then instructed staff to conduct discrete trial training on their own for five trials per program, which served as the rehearsal component of BST. Following the rehearsal portion, we provided each staff member with feedback on the correctly and incorrectly implemented steps of the PEAK-IC within the PEAK-DT program. Each trial block lasted between 15 and 30 min, and one to two trial blocks per program were conducted during this portion of the study; the inclusion of the modeling and feedback components made these blocks longer than those in baseline. Following BST, we collected data on each staff member's treatment integrity and each student's performance on the PEAK programs over the course of 6 weeks. The mastery criterion was set at an average of 80% fidelity or higher across three consecutive sessions.

Maintenance

Maintenance probes were conducted at 2, 4, and 8 weeks post-BST to determine the extent to which staff treatment fidelity maintained in the absence of training on previously implemented programs. Over the course of those 8 weeks, staff continued to implement PEAK with their respective students but received no additional training. The conditions were otherwise the same as in the baseline phase. As with baseline, one to three trial blocks were conducted for each program. The maximum duration of a trial block decreased from 20 min during baseline to 15 min during the training phase; implications of this finding are discussed below.

Results and Discussion

A nonconcurrent multiple-probe design was used to assess the effectiveness of the workshop on staff PEAK implementation and student responding, as seen in Fig. 1. For all staff, we observed relatively stable responding during baseline, followed by an increase in trend and level after BST, and paraprofessional performance maintained at the 8-week maintenance probes. Ann's mean PEAK fidelity was 47% during baseline and increased to 80% after BST. Karl's mean fidelity was 54% during baseline and increased to 96% after BST. Gretch's mean fidelity was 45% during baseline and increased to 89% after BST. Randy's mean fidelity was 36% during baseline and increased to 86% after BST. Cathy's mean fidelity was 88% during baseline and increased to 96% after BST; her high baseline fidelity may be attributable to having implemented DTT in the past. Though Cathy met the fidelity mastery criterion at baseline, we chose to include her in the BST phase to assess whether further improvement in fidelity would result in commensurate improvement in student performance. Gary's mean fidelity was 41% during baseline, increased to a mean of 88% after BST, and maintained for all three programs during the maintenance probes. During the maintenance phase, staff implementation accuracy remained at 80% or greater for all but one session (see Fig. 1).

Fig. 1.


Nonconcurrent multiple-probe design across subjects. Closed data points represent the percent of steps completed correctly by the staff. Open data points represent the percent of trials completed independently by the student. Each symbol represents a different program

Student performance (see Fig. 1) also increased markedly following the BST intervention for staff treatment integrity. Borris's mean independent performance was 14% during baseline, increased to 85% after BST, and remained high (83%) during maintenance probes. Dee's mean performance was 67.5% at baseline, increased to 94% after BST, and maintained at 87% during maintenance probes. Mick's performance was 53% during baseline, increased to 91% after BST, and was 93% during maintenance. Reese's mean independent performance was 32% during baseline, increased to 81% after BST, and remained high (87%) during maintenance probes. Interestingly, Nolan's baseline performance was higher than we anticipated (i.e., 83%), increased to 97% after BST, and remained at a mean of 97% during maintenance. One possible explanation for Nolan's relatively high baseline performance is that the staff member with whom he was paired also performed well (i.e., at or above 90% accuracy) during all but one session of the study, highlighting the impact that appropriate instruction can have on student performance. Finally, Jules's mean independent performance was 40% during baseline, increased only to 55% after BST, and rose to a moderate level of 63% during maintenance.

Overall, the current results corroborate the Belisle et al. (2016) findings in that behavioral skills training increased the accuracy with which paraprofessionals implemented discrete trial training components and, relatedly, increased student performance on targeted programs. Moreover, the effects of workshop-format BST on paraprofessional implementation maintained well above baseline levels at the 8-week maintenance probes. Of particular interest given these results is that all paraprofessionals had relatively little experience implementing DTT and no prior experience with the PEAK Relational Training System, suggesting that paraprofessionals and other frontline staff may be able to acquire the skills necessary for effective implementation, as evidenced by the correlated student performance scores.

There were a few limitations worthy of note. First, we did not conduct a comparative analysis between the current intervention (i.e., BST in a workshop format) and any other staff-training intervention. Therefore, we cannot speak to the degree to which our intervention was more effective than another potentially viable approach. However, in a relatively short amount of time (2 h), staff performed, on average, 51.8% higher than during baseline.

One limitation noted in Belisle et al. (2016) was that the authors did not prolong the maintenance phase, impeding their ability to convey maintenance effects beyond the times at which they were assessed. The present study assessed maintenance up to 2 months post-BST, and the results indicate that all staff maintained treatment integrity, as measured via the PEAK-IC, well above baseline performance levels. This finding, albeit with a limited number of participants, is promising in that it suggests that staff, once trained via a relatively brief intervention, can perform at high levels post-training.

Another limitation mentioned in Belisle et al. (2016) was the absence of IOA specific to learner performance. We sought to address this limitation by collecting IOA on learner responding (via review of video recordings) in the current study. Doing so allowed a more molecular analysis of staff and learner responding with respect to consistently "missed" stimuli per program; that is, we were better able to identify which stimuli from each targeted program proved challenging for the learners. With additional time, we would have modified the stimuli and/or the prompting approach so as to occasion increased contact with reinforcement and minimize cumulative errors.

Future research should compare in vivo BST (as in Belisle et al., 2016) with the workshop-based BST approach used here, including an analysis of total time spent training. If the resulting staff and student performances are commensurate, practitioners would have both efficacy data and more flexibility in how they choose to train novice paraprofessionals or other service providers in PEAK implementation. Another avenue for future research is the generalization of trained skills to other PEAK programs, as the present study investigated performance across only three programs, and practice effects may have accounted for increases in staff and learner performances. However, given the repeated performance-level changes across participant-learner dyads from baseline to intervention (and the absence of such changes during baseline), the likelihood of practice effects is minimal. The PEAK Relational Training System remains in its relative infancy, yet it is gaining greater attention within educational contexts. Accordingly, future research dedicated to the analysis of PEAK implementation (and integrity thereof) via group-design comparisons is warranted. Such work will only increase the extent to which staff and other service providers can maintain adherence to PEAK and, as a byproduct, impact student performance.

Appendix

[Image: PEAK Implementation Checklist (PEAK-IC)]

Compliance with Ethical Standards

Conflict of Interest

No conflict of interest exists for either of the authors involved with this study.

References

  1. Belisle, J., Rowsey, K. E., & Dixon, M. R. (2016). The use of in situ behavioral skills training to improve staff implementation of the PEAK relational training system. Journal of Organizational Behavior Management, 36(1), 71–79. doi: 10.1080/01608061.2016.1152210
  2. Corrigan, P. W., Steiner, L., McCracken, S. G., Blaser, B., & Barr, M. (2001). Strategies for disseminating evidence-based practices to staff who treat people with serious mental illness. Psychiatric Services, 52(12), 1598–1606. doi: 10.1176/appi.ps.52.12.1598
  3. Dixon, M. R. (2015). The PEAK relational training system: Generalization module. Carbondale: Shawnee Scientific Press.
  4. Dixon, M. R. (2014). The PEAK relational training system: Direct training module. Carbondale: Shawnee Scientific Press.
  5. Dixon, M. R., Carman, J., Tyler, P. A., Whiting, S. W., Enoch, R., & Daar, J. H. (2014). PEAK relational training system for children with autism and developmental disabilities: Correlations with Peabody Picture Vocabulary Test and assessment reliability. Journal of Developmental and Physical Disabilities, 26, 603–614. doi: 10.1007/s10882-014-9384-2
  6. Dixon, M. R., Whiting, S. W., Rowsey, K., & Belisle, J. (2014). Assessing the relationship between intelligence and the PEAK relational training system. Research in Autism Spectrum Disorders, 8, 1208–1213. doi: 10.1016/j.rasd.2014.05.005
  7. Fixsen, D. L., Blase, K. A., Duda, M., Naoom, S., & Van Dyke, M. (2010). Implementation of evidence-based treatments for children and adolescents: Research findings and their implications for the future. In J. Weisz & A. Kazdin (Eds.), Implementation and dissemination: Extending treatments to new populations and settings (2nd ed., pp. 435–450). New York: Guilford Press.
  8. Hester, P. P., Kaiser, A. P., Alpert, C. L., & Whiteman, B. (1996). The generalized effects of training teachers to teach parents to implement milieu teaching. Journal of Early Intervention, 20, 30–51. doi: 10.1177/105381519602000105
  9. Hogan, A., Knez, N., & Kahng, S. (2015). Evaluating the use of behavioral skills training to improve school staffs' implementation of behavior intervention plans. Journal of Behavioral Education, 24, 242–254. doi: 10.1007/s10864-014-9213-9
  10. McKeel, A. N., Dixon, M. R., Daar, J. H., Rowsey, K. E., & Szekely, S. (2015). Evaluating the efficacy of the PEAK relational training system using a randomized controlled trial of children with autism. Journal of Behavioral Education, 24, 230–241. doi: 10.1007/s10864-015-9219-y
  11. Morrier, M. J., Hess, K. L., & Heflin, L. J. (2011). Teacher training for implementation of teaching strategies for students with autism spectrum disorders. Teacher Education and Special Education, 34(2), 119–132. doi: 10.1177/0888406410376660
  12. Rosales, R., Stone, K., & Rehfeldt, R. A. (2009). The effects of behavioral skills training on implementation of the Picture Exchange Communication System. Journal of Applied Behavior Analysis, 42(3), 541–549. doi: 10.1901/jaba.2009.42-541
  13. Rowsey, K. E., Belisle, J., & Dixon, M. R. (2015). Principal component analysis of the PEAK relational training system. Journal of Developmental and Physical Disabilities, 27, 15–23. doi: 10.1007/s10882-014-9398-9
  14. Sarokoff, R. A., & Sturmey, P. (2004). The effects of behavioral skills training on staff implementation of discrete-trial teaching. Journal of Applied Behavior Analysis, 37(4), 535–538. doi: 10.1901/jaba.2004.37-535
