Journal of Applied Behavior Analysis. 2012 Summer;45(2):345–359. doi: 10.1901/jaba.2012.45-345

INCREASING ACCURATE PREFERENCE ASSESSMENT IMPLEMENTATION THROUGH PYRAMIDAL TRAINING

Sacha T. Pence, Claire C. St. Peter, Allison S. Tetreault
Editor: Rachel Thompson
PMCID: PMC3405929  PMID: 22844141

Abstract

Preference assessments directly evaluate items that may serve as reinforcers, and their implementation is an important skill for individuals who work with children. This study examined the effectiveness of pyramidal training on teachers' implementation of preference assessments. During Experiment 1, 3 special education teachers taught 6 trainees to conduct paired-choice, multiple-stimulus without replacement, and free-operant preference assessments. All trainees acquired skills necessary to implement preference assessments with 90% or greater accuracy during the training sessions and demonstrated generalization of skills to their classrooms or clinic. During Experiment 2, 5 teachers who served as trainees in Experiment 1 trained 18 preschool teachers. All preschool teachers met the mastery criterion following training. Training teachers to implement preference assessments may increase teachers' acceptance and use of behavior-analytic procedures in school settings.

Keywords: generalization, preference assessments, pyramidal training, role plays, teacher training


The identification of potent reinforcers increases the likelihood of successful treatment development (Ringdahl, Vollmer, Marcus, & Sloane, 1997). Stimuli that function as reinforcers are different for each individual and may change over time (Ciccone, Graff, & Ahearn, 2007; Hanley, Iwata, & Roscoe, 2006). Therefore, those responsible for treatment development should possess the skills necessary to identify potential reinforcers. One commonly reported method of identifying potential reinforcers is to interview the individual or the individual's caregivers regarding preferences. Parental interviews may be useful for identifying an initial pool of stimuli to include in a preference assessment, but determining the student's preference hierarchy requires direct assessment of the stimuli (Fisher, Piazza, Bowman, & Amari, 1996). Assessment of stimuli identified by the parent through a direct preference assessment is superior to parental rankings of those same stimuli (Cote, Thompson, Hanley, & McKerchar, 2007).

The most commonly used direct preference assessments include the paired-choice, multiple-stimulus without replacement (MSWO), and free-operant procedures. The validity of these assessment methods has been demonstrated across a wide range of individuals, including the elderly (Feliciano, Steers, Elite-Marcandonatou, McLane, & Areán, 2009), adolescents (Paramore & Higbee, 2005), preschool students (Cote et al., 2007), and individuals with developmental disabilities (DeLeon & Iwata, 1996; Fisher et al., 1992). These preference assessments vary in the duration of time required to complete the assessment, the probability of resulting in a hierarchy, and the restriction of items during assessment. Because of procedural variations, each assessment may be recommended under different circumstances.

The paired-choice assessment involves the individual selecting one of two items presented during each trial (Fisher et al., 1992). Each item is paired with every other item included in the assessment, and the left–right presentation of an item is counterbalanced across trials. The dependent measure is the percentage of times that an item is selected when presented. The paired-choice assessment may help to identify potential side biases and requires only that the individual be able to scan and select from an array of two.
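
To make the trial structure concrete, the following Python sketch (ours, not the study's materials; the function names and the alternating counterbalancing rule are illustrative assumptions) generates a paired-choice trial sequence and computes each item's selection percentage. With five items, for example, this yields the 10 unique pairs that would fill one 10-trial session of the kind described in the Method.

```python
# A sketch (ours, not the authors' materials) of the paired-choice structure:
# every item is paired once with every other item, left-right placement is
# roughly counterbalanced by alternating positions, and each item is scored
# as the percentage of its presentations on which it was selected.
import itertools
import random

def paired_choice_trials(items):
    """Return a shuffled list of (left, right) trials covering all pairs."""
    pairs = list(itertools.combinations(items, 2))
    trials = []
    for i, (a, b) in enumerate(pairs):
        # Alternate left-right placement across pairings (an illustrative
        # counterbalancing rule; the study's exact rule is not specified here).
        trials.append((a, b) if i % 2 == 0 else (b, a))
    random.shuffle(trials)
    return trials

def selection_percentages(results):
    """results: list of ((left, right), selected_item) for completed trials."""
    presented, selected = {}, {}
    for (left, right), choice in results:
        for item in (left, right):
            presented[item] = presented.get(item, 0) + 1
        selected[choice] = selected.get(choice, 0) + 1
    return {item: 100 * selected.get(item, 0) / presented[item]
            for item in presented}
```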

The MSWO assessment involves the individual selecting an item out of an array of approximately five to seven items (DeLeon & Iwata, 1996). After the individual is allowed to consume the selected item, that item is removed, the remaining items are rotated, and the individual is provided an opportunity to select another item. The dependent measure is the percentage of times that an item is selected out of the number of trials that it is presented. The MSWO assessment requires less time than the paired choice, but the individual must be able to scan and select from a larger number of items.
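
The removal-and-rotation loop might be sketched as follows (a hypothetical illustration; the get_selection helper stands in for observing which item the student takes):

```python
# A minimal sketch of the MSWO loop described above. The get_selection helper
# is hypothetical: it represents observing the student's choice from the
# current array (returning None if no selection occurs within the prompt window).
def run_mswo(items, get_selection):
    """items: the starting array (e.g., five to seven stimuli).
    Returns the items in the order they were selected."""
    array = list(items)
    order = []
    while array:
        choice = get_selection(array)
        if choice is None:
            break  # no selection; in practice a prompt would be delivered
        order.append(choice)
        array.remove(choice)             # the selected item is not replaced
        array = array[1:] + array[:1]    # rotate the remaining items' positions
    return order
```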

The free-operant assessment involves unrestricted access to multiple items simultaneously (Roane, Vollmer, Ringdahl, & Marcus, 1998). This assessment can be completed in a short time (i.e., 5 or 10 min), with the dependent measure being the time allocated to the manipulation of each item. The free-operant assessment does not require the restriction of items included in the assessment. However, this assessment is less likely to result in a hierarchical outcome of most-to-least preferred items than the paired choice or MSWO.
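
Scoring a free-operant session might look like the following sketch, assuming duration records of item manipulation are available (the record format is our assumption):

```python
# A sketch of scoring a free-operant session: total seconds of manipulation
# per item are expressed as a percentage of the 5-min (300-s) session and
# ranked from most to least time allocated.
def free_operant_allocation(engagement, session_s=300):
    """engagement: list of (item, seconds_manipulated) records."""
    totals = {}
    for item, secs in engagement:
        totals[item] = totals.get(item, 0) + secs
    # Rank by time allocated; near-equal or near-zero durations are one reason
    # this format is less likely to yield a clean most-to-least hierarchy.
    return sorted(((item, 100 * s / session_s) for item, s in totals.items()),
                  key=lambda pair: pair[1], reverse=True)
```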

Training on procedures to conduct preference assessments would benefit teachers and other professionals who develop and implement behavioral interventions. To increase the number of professionals who are skilled at implementation of preference assessments, training should be maximally efficient. Two studies by Roscoe and colleagues were designed to identify the most effective components of packages for training preference assessment skills. Training packages that consisted of feedback, role playing, and practice resulted in greater acquisition of preference assessment skills than written instructions alone (Roscoe & Fisher, 2008; Roscoe, Fisher, Glover, & Volkert, 2006). In addition, Roscoe et al. (2006) demonstrated that feedback was superior to contingent money in acquisition of paired-choice and MSWO preference assessments with four individuals who had bachelor's degrees but little to no experience in conducting preference assessments.

Although the skills required to conduct various preference assessments are similar (e.g., delivering praise, spacing items evenly apart, providing prompts when the student fails to select an item, and allowing access to selected items), generalization of skills across different assessments may not occur without explicit programming. Roscoe and Fisher (2008) taught eight behavioral technicians with bachelor's degrees to conduct paired-choice and MSWO assessments. Training that consisted of feedback and role plays was implemented with one assessment. Training resulted in rapid, significant gains on the targeted preference assessment; however, improved performance was not observed on the untrained assessment, suggesting that direct training should be provided for each assessment.

Although generalization does not seem to occur across types of preference assessments without specific programming, implementation of preference assessments generalizes across settings and time. Lerman, Tetreault, Hovanetz, Strobel, and Garro (2008) trained teachers to conduct paired-choice, MSWO, and single-stimulus assessments using instruction, modeling, role playing, and feedback. Lerman et al. conducted follow-up observations with the teachers and a novel student in their classrooms 2 to 3 months after training. Eight of 9 teachers accurately performed at least one preference assessment in their classrooms. However, teachers were allowed to select which preference assessment to conduct, and none of the teachers demonstrated all preference assessments during generalization probes.

Given that feedback, modeling, and role playing are effective techniques for teaching a variety of preference assessments across a range of professionals, the next question is how best to disseminate these procedures. Pyramidal training, also referred to as a train-the-trainer model, involves training a small number of individuals who in turn train additional individuals. Pyramidal training has improved parents' teaching skills (Neef, 1995), as well as program implementation by both direct-care staff (Page, Iwata, & Reid, 1982; Shore, Iwata, Vollmer, Lerman, & Zarcone, 1995) and family members (Kuhn, Lerman, & Vorndran, 2003). Neef (1995), for example, compared pyramidal training to standard training on parents' acquisition of teaching skills, including arranging stimuli, delivering instructions, providing prompts and consequences, structuring teaching, and recording data. During pyramidal training, a professional provided instruction to five parents. These five parents then provided training to another parent (second tier), who subsequently provided training to another parent (third tier). In standard parent training, the professional instructed all parents. Parents in both groups acquired the targeted skills, and performance during posttraining and maintenance probes was comparable.

To date, pyramidal training has not been extended to preference assessment skills. The use of a pyramidal model could be a cost-effective means of disseminating behavior-analytic technologies to school professionals. For example, school districts could hire a professional to train a small subset of teachers to conduct behavior-analytic procedures. This subset then could train other teachers. If pyramidal training is effective for training several tiers of teachers, this technique could have significant implications for the spread of behavior analysis into school settings.

We evaluated the use of pyramidal training on teachers' implementation of three preference assessments: paired choice, MSWO, and free operant. Professionals trained three first-tier special educators to conduct these preference assessments. These teachers trained six second-tier special educators, five of whom subsequently trained 18 third-tier preschool teachers.

EXPERIMENT 1

Method

Participants and settings

Three female teachers served as first-tier participants. These participants will be referred to as 1A, 1B, and 1C throughout Experiments 1 and 2. They ranged in age from 23 to 32 years and had been teaching for 1 to 10 years. These participants served as trainers during Experiment 1. Five female teachers and one female clinician who worked with children in the public school system served as second-tier participants. These participants will be referred to as 2A, 2B, 2C, 2D, 2E, and 2F throughout Experiments 1 and 2. These participants ranged in age from 29 to 54 years and had been teaching or providing services to children with special needs (including behavior disorders, learning disabilities, and developmental disabilities) for 3 to 25 years. These participants had received didactic instruction on conducting preference assessments before the start of the experiment. These six participants served as the trainees in Experiment 1.

All participants were enrolled in a course sequence designed to prepare teachers to become Board Certified Behavior Analysts (BCBAs). At the time of the experiment, participants had completed two graduate-level courses in behavior analysis and had begun to complete practicum hours.

Baseline and training sessions were conducted during a regularly scheduled class period for a course on behavioral assessment and intervention, which met in the library at a local elementary school. Generalization sessions were conducted in each teacher's classroom or clinic.

Data collection and interobserver agreement

Each preference assessment was outlined into general session steps and steps to be performed during each trial (MSWO and paired choice) or performed during each 30-s interval (free operant). The steps included in each assessment are listed in Table 1 and are similar to those described by Lerman et al. (2008). All preference assessments included three general session steps (data sheet present, area clear of extraneous items, and student praised during assessment) that were recorded as correct or incorrect at the end of the session based on the trainee performance during the whole session.

Table 1.

Steps for Completing Paired-Choice, MSWO, and Free-Operant Preference Assessments

[Table 1 appears as an image in the original article and is not reproduced here.]

The procedures to conduct the MSWO were delineated into 11 steps for each trial. Seven trials were included in each session. The procedures to conduct the paired choice consisted of 10 steps for each trial, with 10 trials constituting a session. The procedures to conduct the free operant were delineated into eight steps for each 30-s interval. Each session lasted 5 min (divided into ten 30-s intervals).

The trainers collected data using paper-and-pencil measures. Following each trial (paired choice and MSWO) or interval (free operant), 1A, 1B, and 1C (trainers) recorded whether each step was implemented correctly or incorrectly, or if there was no opportunity for the response. Overall performance was determined by dividing the number of skills performed correctly in a session by the total number of skill opportunities for that assessment and converting the fraction to a percentage.
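
This calculation might be implemented as in the following sketch (code names are ours):

```python
# A sketch of the session score: steps coded "no opportunity" are excluded,
# and correct steps are divided by the total scored opportunities.
def session_accuracy(step_records):
    """step_records: codes of 'correct', 'incorrect', or 'no_opportunity'."""
    scored = [s for s in step_records if s != 'no_opportunity']
    if not scored:
        return 0.0
    return 100 * scored.count('correct') / len(scored)
```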

Trained graduate and undergraduate students simultaneously and independently collected data on trainee and trainer behavior during 52% of sessions. Interobserver agreement was calculated by evaluating each skill component on a trial-by-trial (paired choice and MSWO) or interval-by-interval (free operant) basis for the session to determine agreement on the occurrence of a correct or incorrect response, or no opportunity for a response. The number of agreements was divided by agreements plus disagreements and converted to a percentage. Mean interobserver agreement was 97% (range, 89% to 100%) across all participants.
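
The agreement index might be computed as in this sketch, assuming equal-length step-level records from the two observers:

```python
# A sketch of the agreement index: the two observers' step-level codes are
# compared exactly, and agreements are divided by agreements plus disagreements.
def interobserver_agreement(codes_1, codes_2):
    """codes_1, codes_2: equal-length lists of step-level codes
    ('correct', 'incorrect', or 'no_opportunity')."""
    agreements = sum(a == b for a, b in zip(codes_1, codes_2))
    return 100 * agreements / len(codes_1)
```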

Treatment fidelity data were obtained on the trainers' use of feedback after training sessions. Data were recorded on the trainers' provision of constructive feedback, positive feedback, or omitted feedback on each step of the preference assessment. Constructive feedback included statements outlining an error made on a step and a review of the correct response. Positive feedback included statements of praise for a step completed correctly. Omitted feedback included instances in which the trainer did not provide constructive or positive feedback. On each step of the preference assessment, a binary measure of feedback (accurate or inaccurate) was obtained. For steps with errors, accurate feedback was scored if the trainer provided constructive feedback. For steps with no errors, accurate feedback was scored if the trainer provided positive feedback. Inaccurate feedback was scored if the trainer did not provide constructive feedback when an error was made (i.e., inaccurate feedback was scored if only positive feedback was given or if feedback was omitted). Inaccurate feedback was scored on steps without errors if constructive feedback was given or feedback was omitted. The number of steps with accurate feedback (constructive or positive) was divided by the total number of steps and then converted to a percentage. Trainers correctly provided feedback during a mean of 95% (range, 91% to 98%) of opportunities across all preference assessments.
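
The binary measure just described might be computed as follows (the record format is illustrative):

```python
# A sketch of the binary feedback-accuracy measure: constructive feedback is
# accurate on steps with errors, positive feedback is accurate on error-free
# steps, and any other combination (including omitted feedback) is inaccurate.
def feedback_accuracy(steps):
    """steps: list of (had_error, feedback), where feedback is one of
    'constructive', 'positive', or 'omitted'."""
    accurate = sum(1 for had_error, fb in steps
                   if (had_error and fb == 'constructive')
                   or (not had_error and fb == 'positive'))
    return 100 * accurate / len(steps)
```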

Procedure

As part of the BCBA course sequence, all participants were enrolled in a practicum and received individual weekly on-site supervision. Participants 1A, 1B, and 1C had been trained by a BCBA who consulted in their classrooms during the previous school year, independent of their participation in the course sequence or this experiment. These three participants were asked to perform the paired-choice, MSWO, and free-operant preference assessments during preexperimental practicum supervision and demonstrated mastery (90% accuracy across one session) prior to the study. They were responsible for training one type of preference assessment during Experiment 1. Each type of assessment was assigned randomly to one of the trainers. The trainers were provided with copies of the data sheets, preference assessment protocols, a training protocol, and instructions on how to provide positive and constructive feedback before beginning to train the second-tier participants. The preference assessment protocol included a step-by-step breakdown of each preference assessment so that each trainer would review (discuss and model) the same steps consistently. The training protocol included an outline of the instructions to the trainees, the number and types of errors to make during each session, and feedback guidelines. The experimenters reviewed the training protocols and data-collection methods with the trainers and answered any questions prior to training.

All participants were assigned readings that described the three assessment methods prior to the class. During the class period in which training was conducted, the trainee (2A through 2F) sat at a table with a trainer (1A, 1B, or 1C). Data sheets for each preference assessment were available on the table, and the trainees were provided with a bag containing nine different toys. Edible items were available on a table in the center of the room. During baseline, the trainee selected a preference assessment type without replacement from an envelope. The trainer stated “You selected [name of preference assessment]. Show me what you remember about this assessment.” If the trainee stated that she did not remember or did not attempt to conduct the assessment, the session was terminated and data were recorded as 0% of steps completed correctly. If the trainee attempted to conduct the assessment, the trainer acted as a student (described below). If the trainee conducted only part of the assessment (e.g., she completed four trials of the paired choice and then stopped), the data collector recorded the remainder of the steps in the session as incorrect and calculated percentage correct out of the total number of steps scheduled for that session (e.g., the paired-choice assessment would have 100 steps). These steps were repeated with each of the preference assessments. Trainees were not given any feedback during baseline. A new session began each time the trainee selected a preference assessment type from the envelope. A minimum of three baseline sessions (at least one session with each type of preference assessment) was conducted before the trainee moved to preference assessment training.
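
The scoring rule for partial or unattempted sessions might be expressed as in this sketch:

```python
# A sketch of the baseline scoring rule for abandoned or partial sessions:
# unattempted steps count as incorrect because the denominator is the full
# number of steps scheduled for that session (e.g., 100 for paired choice).
def baseline_accuracy(attempted_records, scheduled_steps):
    """attempted_records: codes for the steps actually attempted."""
    return 100 * attempted_records.count('correct') / scheduled_steps
```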

At the onset of training, the trainer (1A, 1B, or 1C) stated the name of the preference assessment and then reviewed how to conduct the preference assessment, including how to collect data on student responding. The trainer modeled the preference assessment by explaining how to respond to the student as she provided a visual example of each step of the preference assessment. For example, the trainer explained (and then modeled) that if a child selected two items simultaneously during the paired-choice assessment, the therapist should block the selection and re-present the trial. The trainer answered any questions from the trainee.

After the review, the trainer initiated a training session. At the onset of the session, the trainer stated “Here are some materials [data collection sheet and toys or edible items]. I am going to be the student. Please run the [name of preference assessment].” While acting as students, trainers refrained from problem behavior (e.g., aggression, inappropriate vocalizations), played with the toys, and relinquished the toys when prompted by the trainee. In addition, during the course of each session, the trainer included two or three trials in which she selected two items simultaneously, did not select an item within 15 s during the MSWO and paired-choice assessments, and stopped playing with the items for at least 15 s during the free operant. Data collected by the trainers were reviewed by experimenters after completion of baseline and training sessions to verify that trainees experienced at least two trials during which the trainer selected two items simultaneously, did not select an item during the MSWO and paired choice, and stopped playing with items during the free operant.

After each training session, trainers referred to the data sheet for that session and provided feedback and modeled correct responses to errors observed during the session. Training on a particular assessment continued until the trainee completed at least 90% of the steps correctly during at least one session. Trainees moved from one trainer to the next after mastery was achieved on the current preference assessment and the next trainer was available (i.e., the trainer was finished with the previous trainee). Training ended after all three assessments were completed with 90% or greater integrity.

Trainees completed generalization sessions in their classrooms (2A, 2B, 2D, 2E, and 2F) or clinic (2C) using activities, tangible items, or edible items. Generalization sessions occurred 1 to 11 weeks after training across several regularly scheduled individual appointments with the practicum supervisor. Trainees were required to demonstrate all three preference assessment methods. Trainees independently selected which preference assessment to demonstrate first. The experimenter asked the trainee to select a different assessment to demonstrate on a subsequent appointment. This process continued until the trainees completed all three types of preference assessments at least once with students in their classrooms or clinic. If performance decreased below 80% during any session, the experimenter provided constructive feedback on any steps in which errors occurred. Feedback included an explanation of the error and an experimenter model of the correct response. The experimenter asked the trainee to demonstrate the same preference assessment again with her student, during both the same appointment (immediately following feedback) and a different appointment with another student.

Results and Discussion

The results for 2A, 2B, and 2C are depicted in Figure 1. Participants 2A and 2B performed the preference assessments with low accuracy during baseline. Participant 2C demonstrated moderate to high levels of accuracy across all three preference assessments during baseline, but did not meet the 90% mastery criterion on any assessment. Implementation of training resulted in rapid mastery of all three assessments across the three trainees. These participants subsequently implemented all three forms of preference assessment with children in their classrooms or clinic with 90% or greater accuracy.

Figure 1. Percentage of steps completed accurately during paired-choice, MSWO, and free-operant preference assessments across baseline, training, and generalization for 2A, 2B, and 2C.

Figure 2 shows the results for 2D, 2E, and 2F. Participant 2D demonstrated low to moderate accuracy during baseline. During training, she quickly acquired the paired-choice and free-operant assessments (top panel). Her performance on the MSWO increased to 80% following the first training session. However, the trainer made an error and discontinued training after only one session, so this participant did not meet mastery criteria on the MSWO during training. Nevertheless, she demonstrated high levels of accuracy across all three preference assessments during generalization sessions in her classroom.

Figure 2. Percentage of steps completed accurately during paired-choice, MSWO, and free-operant preference assessments across baseline, training, and generalization for 2D, 2E, and 2F. Participants 2E and 2F received additional feedback after Sessions 11 and 13, respectively, because accuracy fell at or below 80%.

Participants 2E and 2F also performed the assessments with low to moderate accuracy during baseline, but performance increased to mastery criteria after training and was maintained at high levels of accuracy during generalization for the free-operant and paired-choice assessments. During the first session of generalization of the MSWO, performance was less than 80% for both participants. They were given constructive feedback and modeling of the correct procedures by the experimenter after the first generalization session. After feedback, accuracy increased and was maintained at high levels.

All six second-tier trainees (2A through 2F) rapidly acquired the MSWO, paired-choice, and free-operant preference assessments when trained by their peers (1A, 1B, and 1C). Completing all baseline and training sessions took approximately 60 to 90 min. All trainees demonstrated generalization of all three preference assessments in their classrooms across at least two different students. For most teachers, the skills taught during brief initial training were maintained. When additional training was necessary (for 2E and 2F on MSWO assessments), feedback from the experimenter was sufficient to improve classroom performance.

Experiment 1 demonstrated the utility of having previously trained teachers (i.e., 1A, 1B, and 1C) train six additional second-tier individuals (2A, 2B, 2C, 2D, 2E, and 2F) to conduct preference assessments. However, these second-tier trainees were enrolled in a behavior-analytic training program, and trainers had been trained by an experienced behavior analyst. The purpose of Experiment 2 was to evaluate the utility of pyramidal training to teach preference assessments to individuals who were unfamiliar with behavior analysis and to evaluate the effectiveness of the pyramidal model with a third tier of trainees.

EXPERIMENT 2

Method

Participants and setting

The eight teachers from Experiment 1 participated. The first-tier participants (1A, 1B, and 1C) who previously served as trainers now collected procedural fidelity data and provided feedback to the second-tier trainers (2A, 2B, 2D, 2E, and 2F). Eighteen female preschool teachers (3A through 3R), who taught inclusive classrooms of children who were typically developing and children who had special needs (including children with behavior disorders, learning disabilities, and developmental disabilities), participated as third-tier trainees during a 3-hr in-service training session. Some third-tier trainees had participated in a previous in-service session that included didactic instruction about preference assessments, including MSWO, paired-choice, and free-operant assessments. Demographic data on 3A through 3R were not collected. Training was held in a school administration building equipped with several chairs and tables. Each trainee was seated at a separate table with a trainer (2A, 2B, 2D, 2E, or 2F). Trainees who were not currently in training on preference assessments completed an activity on collection of descriptive data with the experimenters until their trainer was available.

Data collection and interobserver agreement

Data were collected on the MSWO, paired-choice, and free-operant preference assessments as described in Experiment 1. Data were collected by the trainers (2A, 2B, 2D, 2E, and 2F); by 1A, 1B, and 1C; and by trained graduate and undergraduate students.

A second observer simultaneously and independently collected data during 79% of sessions. Interobserver agreement was calculated as outlined in Experiment 1. Mean agreement was 94% (range, 70% to 100%) across all trainees.

Treatment fidelity data were obtained on the trainers' use of feedback after training sessions. Data were recorded and summarized as outlined in Experiment 1. Trainers accurately provided feedback during training on 83% (range, 53% to 99%) of steps across all preference assessments. Due to the low level of feedback (53% of opportunities) provided by one trainer (2A), the percentage of opportunities on which she delivered constructive feedback was reviewed separately. Despite overall low levels of feedback, this trainer provided constructive feedback on 100% of opportunities in which a trainee error occurred. The low overall percentage was primarily due to the omission of praise for correct steps.

Procedures

Prior to the onset of the training, 3A through 3R were reminded of the previous in-service session during which preference assessments were reviewed verbally but were not practiced. The trainees were informed that the purpose of training was to practice the preference assessments and to teach data-collection techniques.

Each trainee was assigned to a trainer (2A, 2B, 2D, 2E, or 2F), who conducted baseline sessions and then instructed the trainee in all three preference assessments. During training, the trainee drew a number from an envelope that specified the number of baseline sessions to be conducted. Baseline and training sessions otherwise were conducted as outlined in Experiment 1.

Results and Discussion

Fifteen trainees demonstrated 0% accuracy across the three preference assessments during baseline (Figures 3 and 4). Of the remaining three trainees, 3E correctly performed the MSWO with moderate accuracy and the paired choice and free operant at 0% accuracy (Figure 3), and accuracy increased for 3Q and 3R during their baselines (Figure 4). Participants 3Q and 3R initially stated that they did not recall anything about the assessments, and did not attempt the first baseline sessions. However, when the second opportunity was provided during baseline, they attempted the preference assessments, thereby increasing baseline accuracy. This increase in accuracy suggests that some participants may have had some skills already in their repertoires; overall baseline performance may have been higher for these participants if they had been required to attempt the assessments. Nonetheless, performance of the preference assessment skills was below mastery for both 3Q and 3R.

Figure 3. Percentage of steps completed accurately during paired-choice, MSWO, and free-operant preference assessments across baseline and training sessions for 3A, 3B, 3C, 3D, and 3E (left) and 3F, 3G, 3H, 3I, and 3J (right).

Figure 4. Percentage of steps completed accurately during paired-choice, MSWO, and free-operant preference assessments across baseline and training sessions for 3K, 3L, 3M, and 3N (left) and 3O, 3P, 3Q, and 3R (right).

Training resulted in immediate increases in performance for all trainees, and all trainees met mastery criterion for all three preference assessments. Fifteen trainees (3A, 3B, 3C, 3D, 3E, 3F, and 3I in Figure 3 and all trainees in Figure 4) mastered all preference assessments in one or two sessions. Participants 3G, 3H, and 3J required additional sessions to meet mastery criteria for one preference assessment. Participant 3H required three sessions and 3J required four sessions to meet mastery criteria for the MSWO. Participant 3G required three sessions to meet mastery criteria for the paired choice.

The results of Experiment 2 suggest that pyramidal training resulted in mastery of the skills necessary to conduct preference assessments by third-tier teachers. The trainers at the second tier (2A, 2B, 2D, 2E, and 2F) instructed third-tier teachers with little prior training in behavior-analytic procedures, even though the trainers themselves had achieved accurate performance only a few months before. In addition, training was efficient; on average, one trainer–trainee pair worked together for approximately 90 min (range, 60 to 120 min) to complete all baseline and training sessions.

GENERAL DISCUSSION

We used pyramidal training to improve the accuracy with which teachers implemented three types of preference assessments. Three teachers (first tier) were trained initially by experienced behavior analysts. These teachers trained six teachers (second tier), who later trained 18 teachers (third tier). These participants acquired a subset of the skills necessary to identify preferences that could be incorporated into programming. Further research should evaluate procedures to train teachers to graph the data obtained from preference assessments, interpret the graphical displays, and use the data in programming.

The allocation of financial resources and professionals' time is an important consideration when a consultant is hired by a school district. Instead of behavior analysts directly training all individuals, behavior analysts could focus their time on ensuring that the initial tier of individuals is trained well and has the skills necessary to teach others. These individuals then begin to build the capacity of the system because they are permanent, well-trained individuals who can spread the technology to others. The use of pyramidal training may speed the dissemination of behavior-analytic technology without sacrificing the quality of training. Our results demonstrate that pyramidal training was an efficient and effective way for previously untrained teachers at the third tier to master the MSWO, paired-choice, and free-operant preference assessments.

The average time to complete all baseline and training sessions in Experiments 1 and 2 was approximately 90 min, with some participants finishing in 60 min and one participant taking 120 min to complete all sessions. These data suggest that pyramidal training may be as efficient as training conducted by experienced behavior analysts, although comparisons across studies are difficult because of multiple procedural differences. Lavie and Sturmey (2002) required approximately 80 min to train paired-choice preference assessments. Roscoe and Fisher (2008) had 15- to 20-min training sessions with each type of preference assessment (MSWO and paired choice). Future studies could compare directly the time and cost savings associated with pyramidal training and expert-led training.

Training packages that consist of modeling, role playing, and feedback are effective for skill acquisition. In this study, trainers provided vocal instructions that delineated the procedures for each assessment and modeled correct responses. Trainees were required to practice the preference assessment until mastery criteria were achieved. This package resulted in rapid increases in procedural implementation (both experiments) and generalization (Experiment 1). Although feedback is important for the acquisition of preference assessment skills (Roscoe et al., 2006), it is unclear whether explicit instructions with modeling and role playing are necessary for rapid acquisition. The modeling and role-playing components of this package may have promoted rapid acquisition of skills; most teachers required only one session on each preference assessment to meet the mastery criteria. All trainees in Experiment 1, and 16 of the 18 trainees in Experiment 2, acquired the skills for at least one preference assessment after only one session of practice.

The efficacy of pyramidal training may be increased when the targeted skill can be broken into discrete and concrete steps, as is the case with preference assessments. The development of a task analysis for each method is fairly easy, and similar skills are required to implement different preference assessments. Pyramidal training may be less useful for other skills, such as shaping, that cannot be defined easily by discrete steps. In addition, pyramidal training necessarily will be limited by the skill sets of the trainers. For example, during Experiment 2, the trainer randomly drew the number of baseline sessions to conduct with each trainee. Trainers did not graph trainees' accuracy between sessions or use visual inspection to determine when training should be introduced. Because trends were not evaluated prior to moving to training, two participants (3Q and 3R) had increasing trends during their baselines, limiting the ability to draw conclusions about acquisition for those participants.

During Experiment 1, treatment integrity levels were above 90% across all trainers, indicating that first-tier trainers (1A, 1B, and 1C) consistently provided praise for steps trainees implemented correctly and constructive feedback on errors made during sessions. Treatment integrity decreased during Experiment 2. Second-tier trainers (2A, 2B, 2D, 2E, and 2F) provided feedback on a mean of 83% of opportunities. Overall, the majority of second-tier trainers implemented feedback at high levels during training sessions, indicating that integrity persisted with successive trainers. However, one trainer provided feedback on only 53% of opportunities. Despite this reduction in feedback, trainees still readily acquired the preference assessment skills. It appears that perfect feedback integrity is not necessary for acquisition of preference assessment skills, although future research should examine the relation between integrity level and acquisition.

During generalization in Experiment 1, teachers selected the order in which they conducted preference assessments. Five of the six teachers chose to conduct the paired-choice preference assessment first, suggesting that they may prefer the paired-choice preference assessment over the MSWO and free-operant assessments. Future studies should systematically assess which preference assessments teachers choose to conduct after they have been trained on multiple methods. This type of social validity assessment could inform practitioners' recommendations to teachers about those assessments and thereby improve the extent to which behavior-analytic technologies are adopted by educators.

In this study, adults acted as students during the training sessions. There are several benefits to using adults. During training, the focus is on the acquisition of trainee skills. Use of adults can eliminate the possibility of having to manage problem behavior (e.g., noncompliance), thus allowing training to focus on the targeted skills. In the present study, adults refrained from engaging in problem behavior, relinquished items when prompted, and provided opportunities for trainees to practice correct consequences when the simulated student selected multiple items simultaneously, failed to select an item (MSWO and paired choice), and stopped playing with items (free operant). Scripts can be given to adults to arrange for opportunities to practice all relevant skills and for equal opportunities across baseline and training. Future research should evaluate the use of adults as simulated students during training for other behavior-analytic procedures and the effects on acquisition and generalization of skills. The six trainees (2A through 2F) in Experiment 1 demonstrated the skills that were acquired during simulated conditions with students in their classrooms or clinic. However, baseline data were not collected under classroom conditions; therefore, conclusions about generalization are limited.

The use of a pyramidal training paradigm may be helpful in disseminating behavior-analytic procedures. It may be possible to increase the impact of behavior analysis by teaching community members (e.g., teachers, parents, direct-care workers) to implement behavior-analytic technologies. These community members could assist in training other individuals, thereby distributing behavior-analytic procedures more rapidly. It is possible that involving the community in dissemination may improve the social validity and adoptability of behavior-analytic procedures, bridging the current research-to-practice gap.

REFERENCES

Ciccone, F. J., Graff, R. B., & Ahearn, W. H. (2007). Long-term stability of edible preferences in individuals with developmental disabilities. Behavioral Interventions, 22, 223–228.
Cote, C. A., Thompson, R. H., Hanley, G. P., & McKerchar, P. M. (2007). Teacher report and direct assessment of preferences for identifying reinforcers for young children. Journal of Applied Behavior Analysis, 40, 157–166. doi: 10.1901/jaba.2007.177-05
DeLeon, I. G., & Iwata, B. A. (1996). Evaluation of a multiple-stimulus presentation format for assessing reinforcer preferences. Journal of Applied Behavior Analysis, 29, 519–533. doi: 10.1901/jaba.1996.29-519
Feliciano, L., Steers, M. E., Elite-Marcandonatou, A., McLane, M., & Areán, P. A. (2009). Applications of preference assessment procedures in depression and agitation management in elders with dementia. Clinical Gerontologist, 32, 239–259. doi: 10.1080/07317110902895226
Fisher, W. W., Piazza, C. C., Bowman, L. G., & Amari, A. (1996). Integrating caregiver report with a systematic choice assessment to enhance reinforcer identification. American Journal on Mental Retardation, 101, 15–25.
Fisher, W., Piazza, C. C., Bowman, L. G., Hagopian, L. P., Owens, J. C., & Slevin, I. (1992). A comparison of two approaches for identifying reinforcers for persons with severe and profound disabilities. Journal of Applied Behavior Analysis, 25, 491–498. doi: 10.1901/jaba.1992.25-491
Hanley, G. P., Iwata, B. A., & Roscoe, E. M. (2006). Some determinants of changes in preference over time. Journal of Applied Behavior Analysis, 39, 189–202. doi: 10.1901/jaba.2006.163-04
Kuhn, S. A. C., Lerman, D. C., & Vorndran, C. M. (2003). Pyramidal training for families of children with problem behavior. Journal of Applied Behavior Analysis, 36, 77–88. doi: 10.1901/jaba.2003.36-77
Lavie, T., & Sturmey, P. (2002). Training staff to conduct a paired-stimulus preference assessment. Journal of Applied Behavior Analysis, 35, 209–211. doi: 10.1901/jaba.2002.35-209
Lerman, D. C., Tetreault, A., Hovanetz, A., Strobel, M., & Garro, J. (2008). Further evaluation of a brief, intensive teacher-training model. Journal of Applied Behavior Analysis, 41, 243–248. doi: 10.1901/jaba.2008.41-243
Neef, N. A. (1995). Pyramidal parent training by peers. Journal of Applied Behavior Analysis, 28, 333–337. doi: 10.1901/jaba.1995.28-333
Page, T. J., Iwata, B. A., & Reid, D. H. (1982). Pyramidal training: A large-scale application with institutional staff. Journal of Applied Behavior Analysis, 15, 335–351. doi: 10.1901/jaba.1982.15-335
Paramore, N. W., & Higbee, T. S. (2005). An evaluation of a brief multiple-stimulus preference assessment with adolescents with emotional-behavioral disorders in an educational setting. Journal of Applied Behavior Analysis, 38, 399–403. doi: 10.1901/jaba.2005.76-04
Ringdahl, J. E., Vollmer, T. R., Marcus, B. A., & Sloane, H. S. (1997). An analogue evaluation of environmental enrichment: The role of stimulus preference. Journal of Applied Behavior Analysis, 30, 203–216.
Roane, H. S., Vollmer, T. R., Ringdahl, J. E., & Marcus, B. A. (1998). Evaluation of a brief stimulus preference assessment. Journal of Applied Behavior Analysis, 31, 605–620. doi: 10.1901/jaba.1998.31-605
Roscoe, E. M., & Fisher, W. W. (2008). Evaluation of an efficient method for training staff to implement stimulus preference assessments. Journal of Applied Behavior Analysis, 41, 249–254. doi: 10.1901/jaba.2008.41-249
Roscoe, E. M., Fisher, W. W., Glover, A. C., & Volkert, V. M. (2006). Evaluating the relative effects of feedback and contingent money for staff training of stimulus preference assessments. Journal of Applied Behavior Analysis, 39, 63–77. doi: 10.1901/jaba.2006.7-05
Shore, B. A., Iwata, B. A., Vollmer, T. R., Lerman, D. C., & Zarcone, J. R. (1995). Pyramidal staff training in the extension of treatment for severe behavior disorders. Journal of Applied Behavior Analysis, 28, 323–332. doi: 10.1901/jaba.1995.28-323
