Abstract
This study compared the effectiveness of a multiple-stimulus-without-replacement (MSWO) preference assessment and teacher preference ranking in identifying reinforcers for use in a general education setting with typically developing elementary-school children. The mean number of digits correctly answered was greater in the MSWO-selected and teacher-selected reward conditions than in the no-reward condition for 2 of the 4 participants, but there were no differences between the MSWO-selected and teacher-selected reward conditions for any participant.
Keywords: general education, preference assessment, reinforcers, teachers
Contingent rewards intended to function as reinforcers are commonly used in education to improve conduct and academic work (Fantuzzo, Rohrbeck, Hightower, & Work, 1991). In our practice, we have noted that classroom teachers often use arbitrary and trial-and-error methods to select items for use as rewards. These unsystematic methods may not result in accurate identification of stimuli that will function as reinforcers (Fisher, Piazza, Bowman, & Amari, 1996). Researchers in classrooms have used more traditional methods (e.g., surveys and interviews) to identify preferred stimuli. These types of assessments are likely appealing to teachers because they require relatively little time to administer. However, studies that have examined their agreement with reinforcer assessments have yielded unpromising results (e.g., Hagopian, Long, & Rush, 2004).
By contrast, the results of research that has examined systematic preference assessments such as the multiple-stimulus-without-replacement assessment (MSWO; DeLeon & Iwata, 1996) have suggested that stimuli identified as highly preferred reliably function as reinforcers. For example, Higbee, Carr, and Harrison (2000) found that the stimulus ranked as most highly preferred based on an MSWO assessment acted as a reinforcer for 6 of 9 participants.
Given that teachers may be less likely to conduct systematic preference assessments such as the MSWO, it is important to study the agreement between the results of assessments such as the MSWO and teacher-identified stimuli and the extent to which stimuli identified by these two methods function as reinforcers. Therefore, the purpose of the current investigation was threefold. With typically developing children as participants, we (a) examined the agreement between teacher-selected and MSWO preference assessments, (b) tested the reinforcing efficacy of teacher-selected rewards, and (c) compared the reinforcing efficacy of teacher-selected rewards with those selected via an MSWO preference assessment.
Method
Participants and Setting
Fourteen children identified by their teacher as performing poorly in mathematics were screened for performance deficits (defined as substantial improvement in performance on an academic task in the presence of a salient reward), using the method described in Duhon et al. (2004). Four first- or second-grade children (3 girls, 1 boy) qualified for participation based on performance deficits in mathematics. These children were not receiving special services at school. Intervention and assessment procedures were conducted by trained graduate students in a quiet room within the school building.
Dependent Variable and Materials
The dependent measure was the number of digits correctly answered in 2 min during each grade-level math probe. Math probes were curriculum-based measures consisting of grade-level subtraction problems. Multiple forms of math probes containing similar problems were constructed and administered in a random order. Two scorers independently scored the number of digits correctly answered for 44% of the math probes. Interobserver agreement was calculated by dividing the number of agreements by the sum of agreements and disagreements and converting this ratio to a percentage; agreement was 92%.
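To make the agreement calculation concrete, the following minimal sketch implements the formula described above. It assumes a digit-by-digit comparison of the two scorers' records; the function and example data are illustrative and are not the authors' scoring materials.

```python
# Sketch of the interobserver agreement (IOA) calculation described above:
# agreements divided by agreements plus disagreements, converted to a percentage.
# Digit-by-digit comparison of the two scorers' records is an assumption.

def interobserver_agreement(scorer_a, scorer_b):
    """Return percentage agreement between two scorers' digit-level records.

    scorer_a and scorer_b are equal-length sequences of booleans indicating
    whether each digit position was scored as correct.
    """
    if len(scorer_a) != len(scorer_b):
        raise ValueError("Both scorers must rate the same set of digits")
    agreements = sum(a == b for a, b in zip(scorer_a, scorer_b))
    disagreements = len(scorer_a) - agreements
    return 100.0 * agreements / (agreements + disagreements)

# Example: 11 of 12 digit judgments match -> about 91.7% agreement.
print(round(interobserver_agreement(
    [True] * 10 + [False, True],
    [True] * 10 + [False, False]), 1))
```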
Experimental Design
An alternating treatments design was used to evaluate the effects of three conditions (described below) on the number of digits correctly answered in 2 min. The sequence of conditions was counterbalanced. No more than three reinforcer assessments were carried out per day for each participant. Each reinforcer assessment consisted of a 2-min math probe administered under one of the contingencies described below (i.e., no reward, MSWO-selected reward, or teacher-selected reward).
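The article does not report the exact counterbalancing scheme, so the rotation below is only one hypothetical way to schedule the three conditions so that each appears in each ordinal position equally often across days, with no more than three assessments per day.

```python
# Hypothetical counterbalancing sketch: rotate the starting condition each day
# (a simple Latin-square-style rotation). This satisfies the constraints stated
# in the article but is not the authors' documented procedure.

CONDITIONS = ["no reward", "MSWO-selected reward", "teacher-selected reward"]

def daily_orders(n_days):
    """Return one condition order per day, rotating the starting condition."""
    orders = []
    for day in range(n_days):
        shift = day % len(CONDITIONS)
        orders.append(CONDITIONS[shift:] + CONDITIONS[:shift])
    return orders

for day, order in enumerate(daily_orders(3), start=1):
    print(f"Day {day}: {order}")
```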
Procedure
Teacher Ranking Survey
A survey was constructed for this study asking teachers to rank 20 stimuli. Stimuli were selected for the survey based on nomination by a teacher who did not participate in the study. All selected items were either tangible or edible. Examples of items included colorful pencils, erasers, stickers, small toy dinosaurs, chocolate candies, candy bars, cheese crackers, and animal crackers. The 20 stimuli were listed on the survey with a blank beside each, and teachers were asked to rank each stimulus from 1 (the child's most preferred stimulus) to 20 (the child's least preferred stimulus).
MSWO Assessment
The preference assessment was conducted using a brief MSWO procedure that consisted of three sessions (Carr, Nicolson, & Higbee, 2000). The scoring method was that reported by Ciccone, Graff, and Ahearn (2005). Before the assessment began, all items listed on the survey were laid out in a random array on the table, approximately 5 cm apart in a semicircle. The child was seated in front of the items at the table, and the therapist read the instructions. The child then chose an item from the array and received it; the child was told to place the item in a sandwich bag labeled with his or her name that he or she could bring back to the classroom. The chosen item was not replaced in the array. The item at the far left of the array was then moved to the position of the item at the far right, and the remaining items were readjusted so that all items were once again an equal distance apart. The child was then prompted to choose again. The experimenter recorded the order in which the items were chosen. A session ended when the child had selected all items or stated that he or she did not like any of the remaining items. In addition, if the child did not select an item within 60 s, the session was to be terminated, but this never occurred. Two observers simultaneously but independently scored participant responses during 25% of the MSWO assessments, and mean agreement was 99.7%.
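The selection-order data from the three MSWO sessions can be summarized as preference ranks. The sketch below averages each item's selection position across sessions and orders items from most to least preferred; this is a generic summary offered for illustration and is not claimed to reproduce the exact scoring rule of Ciccone et al. (2005).

```python
# Illustrative summary of MSWO selection-order data: each session records the
# order in which items were chosen (position 1 = chosen first). Averaging
# positions across sessions and sorting yields an overall preference ranking.
# Items the child declined in a session simply contribute no position for it.

def mswo_ranking(sessions):
    """sessions: list of lists, each giving item names in order of selection."""
    positions = {}
    for session in sessions:
        for pos, item in enumerate(session, start=1):
            positions.setdefault(item, []).append(pos)
    mean_position = {item: sum(p) / len(p) for item, p in positions.items()}
    # Lower mean selection position = more preferred.
    return sorted(mean_position, key=mean_position.get)

# Hypothetical three-session data with three items (the study used 20).
sessions = [
    ["stickers", "toy dinosaur", "eraser"],
    ["toy dinosaur", "stickers", "eraser"],
    ["stickers", "eraser", "toy dinosaur"],
]
print(mswo_ranking(sessions))  # ['stickers', 'toy dinosaur', 'eraser']
```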
Baseline Fluency Assessment
Each participant was given three grade-level probes of subtraction math problems. The participant's median score of digits correctly answered in 2 min was used as the criterion for reward during the experiment (17 digits for Kailey, 14 digits for Heidi, 15 digits for Emma, and 7 digits for Kaleb).
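The criterion determination and the reward rule used in the reward conditions (described below) reduce to the following check. The baseline scores in the example are hypothetical; the article reports only the resulting medians (e.g., 17 digits for Kailey).

```python
# Sketch of the criterion and reward rule: the criterion is the median of three
# baseline probes, and in the reward conditions the child earns the item only
# if a probe score beats (exceeds) that criterion. Baseline scores below are
# hypothetical illustrations, not reported data.
from statistics import median

def reward_earned(baseline_scores, probe_score):
    """Return True if the probe score beats the baseline median criterion."""
    criterion = median(baseline_scores)
    return probe_score > criterion

print(reward_earned([15, 17, 20], 19))  # True: 19 beats the median of 17
print(reward_earned([15, 17, 20], 16))  # False: 16 does not beat 17
```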
No Reward
The child was told his or her median baseline score and then was given a 2-min math probe consisting of grade-level math problems. After completing the probe, the child was told whether or not he or she had beaten the baseline median score, but no other contingency was provided.
MSWO-Selected Rewards
This condition was identical to the no-reward condition except that the child received the item identified as most preferred by the MSWO if his or her baseline score was beaten. The child was shown the reward, and the contingency was stated prior to the math probe. The reward was given to the child at the completion of the math probe if performance exceeded the criterion.
Teacher-Selected Rewards
This condition was identical to the MSWO-selected rewards condition except that the reward was the item that received the highest teacher ranking.
Results and Discussion
The item ranked highest by the teacher was never ranked better than 4th (range, 4 to 17) by the child during the brief MSWO preference assessment. The item ranked highest by the child during the MSWO was never ranked better than 5th (range, 5 to 17) by the teacher.
Kailey's and Heidi's data are presented in Figure 1. The mean numbers of digits correct for Kailey and Heidi, respectively, were 28.6 and 22.8 for MSWO-selected rewards, 24.0 and 23.3 for teacher-selected rewards, and 17.5 and 18.3 for no reward. The data paths for the three conditions show considerable overlap, which renders the results inconclusive.
Figure 1. Number of digits correct in 2 min across reward conditions for all participants.
Emma's data also are presented in Figure 1. Her number of digits correct showed increasingly greater differentiation between the reward conditions (teacher-selected, M = 23.3; MSWO-selected, M = 29.9), which were roughly equivalent to one another, and the no-reward condition (M = 15.9). Kaleb's data (Figure 1) show differentiation between the no-reward (M = 0.3) and reward (teacher-selected, M = 14.3; MSWO-selected, M = 15.3) conditions, but there was no differentiation between the two reward conditions.
Given the frequency of reinforcement-based interventions in schools and the literature suggesting that environmentally based interventions are more likely to be successful with children (DuPaul & Eckert, 1997; Weisz, Weiss, Han, Granger, & Morton, 1995), there is a clear need for effective and efficient means to identify reinforcers for typically developing children. This experiment compared the reinforcing effectiveness of stimuli selected by an MSWO preference assessment and teacher ranking to one another and to a no-reward condition. For 2 of the 4 participants, differentiated responding for the reward conditions was either evident from the beginning of the experiment or emerged. Clear, differentiated responding did not emerge between the teacher-selected and MSWO-selected conditions for any participant. This is similar to what was found by Smith, Iwata, and Shore (1995) for students with developmental disabilities.
Interpretations of these results should be tempered by consideration of the study's limitations. First, it is possible that reactivity to the experimenter and novel context may have contributed to the lack of differentiation between some conditions. Future studies might consider having someone the child is familiar with conduct the reinforcer assessment. Second, the children were all told their score to beat (and what they scored) in the no-reward condition. It is possible that the 2 children who continued to solve problems during the no-reward condition were reinforced by beating their score. This question could be addressed in future research. Third, Roscoe, Iwata, and Kahng (1999) and Francisco, Borrero, and Sy (2008) found little difference in responding when higher and lower preference items were compared in a single-operant arrangement. It is possible that a concurrent-operants arrangement would be more sensitive to differences between the reinforcing efficacy of teacher-selected and MSWO-selected rewards.
In summary, for these 4 participants, there were no clear differences in the reinforcing effectiveness of MSWO-selected and teacher-selected preferred stimuli for digits correctly answered in 2 min. Two of the 4 children did show differentiated responding between these two reward conditions and the no-reward condition. Additional research is needed to compare the relative efficacy of teacher reward selection and systematic preference assessment methods with typically developing children.
Acknowledgments
This research was completed as a portion of the doctoral dissertation for the first author.
References
- Carr J.E, Nicolson A.C, Higbee T.S. Evaluation of a brief multiple-stimulus preference assessment in a naturalistic context. Journal of Applied Behavior Analysis. 2000;33:353–357. doi: 10.1901/jaba.2000.33-353.
- Ciccone F.J, Graff R.B, Ahearn W.H. An alternate scoring method for the multiple stimulus without replacement preference assessment. Behavioral Interventions. 2005;20:121–127.
- DeLeon I.G, Iwata B.A. Evaluation of a multiple-stimulus presentation format for assessing reinforcer preferences. Journal of Applied Behavior Analysis. 1996;29:519–533. doi: 10.1901/jaba.1996.29-519.
- Duhon G.J, Noell G.H, Witt J.C, Freeland J.T, Dufrene B.A, Gilbertson D.N. Identifying academic skill and performance deficits: The experimental analysis of brief assessments of academic skills. School Psychology Review. 2004;33:429–443.
- DuPaul G.J, Eckert T.L. The effects of school-based interventions for attention deficit hyperactivity disorder: A meta-analysis. School Psychology Review. 1997;26:5–27.
- Fantuzzo J.W, Rohrbeck C.A, Hightower A.D, Work W.C. Teachers' use and children's preferences of rewards in elementary school. Psychology in the Schools. 1991;28:175–181.
- Fisher W.W, Piazza C.C, Bowman L.G, Amari A. Integrating caregiver report with a systematic choice assessment to enhance reinforcer identification. American Journal on Mental Retardation. 1996;101:15–25.
- Francisco M.T, Borrero J.C, Sy J.R. Evaluation of absolute and relative reinforcer value using progressive-ratio schedules. Journal of Applied Behavior Analysis. 2008;41:189–202. doi: 10.1901/jaba.2008.41-189.
- Hagopian L.P, Long E.S, Rush K.S. Preference assessment procedures for individuals with developmental disabilities. Behavior Modification. 2004;28:668–677. doi: 10.1177/0145445503259836.
- Higbee T.S, Carr J.E, Harrison C.D. Further evaluation of the multiple stimulus preference assessment. Research in Developmental Disabilities. 2000;21:61–73. doi: 10.1016/s0891-4222(99)00030-x.
- Roscoe E.M, Iwata B.A, Kahng S. Relative versus absolute reinforcement effects: Implications for preference assessments. Journal of Applied Behavior Analysis. 1999;32:479–493. doi: 10.1901/jaba.1999.32-479.
- Smith R.G, Iwata B.A, Shore B.A. Effects of subject- versus experimenter-selected reinforcers on the behavior of individuals with profound developmental disabilities. Journal of Applied Behavior Analysis. 1995;28:61–71. doi: 10.1901/jaba.1995.28-61.
- Weisz J.R, Weiss B, Han S.S, Granger D.A, Morton T. Effects of psychotherapy with children and adolescents revisited: A meta-analysis of treatment outcome studies. Psychological Bulletin. 1995;117:450–468. doi: 10.1037/0033-2909.117.3.450.

