Abstract
Variable-ratio (VR) schedules of reinforcement can produce steady response rates and make behavior more resistant to extinction, but they can be difficult to implement with fidelity. Using a concurrent multiple baseline design across participants, we examined how closely classroom assistant (CA) delivery of VR schedules adhered to programmed mean and variability requirements and evaluated the effects of programmed schedules of reinforcement on the implementation of VR schedules. Results suggest that the use of programmed schedules of reinforcement led CAs to increase the variability of reinforcer delivery and to remain closer to the intended mean. Implications for practice: programmed schedules of reinforcement increase the variability of CA-implemented VR schedules; they assist CAs in remaining close to the intended mean of the VR schedule; they are cost-effective and easy to provide in the classroom; and the descriptive statistic range can be used to characterize the variability of reinforcement schedules.
Keywords: Variable ratio schedules, Classroom assistant training, Programmed schedules of reinforcement, Treatment integrity
In applied behavior analytic (ABA) classroom settings, continuous (i.e., fixed-ratio [FR] 1) schedules of reinforcement are used to promote skill acquisition; however, once skills are acquired, the programmed reinforcement schedule is thinned to an intermittent schedule (e.g., a variable-ratio [VR] schedule) to encourage the maintenance of behavior over time (Briggs et al., 2023; LeBlanc et al., 2002). Compared to FR schedules, VR schedules produce behavior that is more durable and more resistant to extinction (Cooper et al., 2020; Ferster & Skinner, 1957). VR schedules are particularly well-suited for maintaining skills learned in classroom settings because they promote high and steady rates of responding by the student and produce shorter postreinforcement pauses than fixed schedules of reinforcement (Orlando & Bijou, 1960).
Van Houten and Nau (1980) compared the effectiveness of FR and VR schedules of reinforcement on visual attentiveness and disruptive behavior in five deaf students. Reinforcement was delivered in the form of check marks contingent upon attentiveness and absence of disruptive behavior, and ratios were equated across schedule types (e.g., FR 5 vs. VR 5). Researchers implemented group time sampling procedures and collected data using a 10-s whole interval method. Both FR and VR schedules produced changes in dependent measures. However, classroom means showed that VR schedules increased attentiveness to nearly 100% of intervals and decreased disruptive behavior to near-zero levels for all participants. It is important to note that VR schedules also produced higher rates of math problem completion, although no contingencies were in place for that response.
Schlinger et al. (1990) compared different VR schedules and found that postreinforcement pauses increased as the variable-ratio size and response requirement increased. In addition, although previous research has determined that organisms generally prefer variable schedules to fixed schedules (Argueta et al., 2019; Field et al., 1996), that preference may depend on the range of values used to create the schedules. Mullane et al. (2017) evaluated children’s preference for fixed versus mixed-ratio schedules of reinforcement for math completion. Mixed-ratio (MR) schedules are a type of bivalued VR schedule of reinforcement that typically includes a small and a large ratio requirement. For example, in an MR (1,9) schedule, reinforcement might be delivered after 1, 9, 9, 1, 9, 9, 1, and 1 responses across successive components. Mullane et al. compared MR (1,9), (1,11), and (5,7) schedules to an FR-5 schedule. Results showed that children preferred MR schedules even when those schedules had a relatively larger mean ratio requirement. However, preference shifted when the MR schedule did not contain the small (i.e., FR 1) ratio component (i.e., MR [5,7]). Therefore, a range of values that includes small response requirements is needed for a VR schedule to maintain its relative advantages, including steady response rates and shorter postreinforcement pauses.
Previous research has not directly identified the amount of variability that should be used when implementing VR schedules, and the literature has mixed findings about the range of numbers used in their creation. For example, Bancroft and Bourret (2008) describe a VR-5 schedule constructed using response requirements that range from one to nine responses with equal probability, and Baker (1979) describes response requirement values within a VR-5 schedule ranging anywhere from the 1st to the 15th response.
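The Bancroft and Bourret (2008) approach described above can be sketched programmatically. The Python sketch below is an illustration only (neither study used this code, and the function name is hypothetical): it samples ratio requirements with equal probability from a symmetric range around the target mean, so the long-run obtained mean approaches the programmed mean.

```python
import random

def make_vr_schedule(mean, spread, n_components, seed=None):
    """Sample ratio requirements uniformly from [mean - spread, mean + spread].

    With mean=5 and spread=4, this mirrors the Bancroft and Bourret (2008)
    construction: values 1-9, each with equal probability.
    """
    rng = random.Random(seed)
    low, high = mean - spread, mean + spread
    if low < 1:
        raise ValueError("smallest ratio requirement must be at least 1")
    return [rng.randint(low, high) for _ in range(n_components)]

# A long schedule's obtained mean converges on the programmed VR-5 mean.
schedule = make_vr_schedule(mean=5, spread=4, n_components=10000, seed=1)
print(sum(schedule) / len(schedule))
```

Because each component is drawn with equal probability, the expected value of a component equals the programmed mean, while individual components range across the full spread of values.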
Treatment integrity, also known as procedural fidelity, refers to how an individual implements a protocol as prescribed. Previous research has shown that treatment integrity errors are likely to occur (e.g., Carroll et al., 2013) and may have detrimental effects on behavior intervention and skill acquisition programs (e.g., Carroll et al., 2013; Jones & St. Peter, 2022; St. Peter Pipkin et al., 2010). A number of studies have conducted descriptive assessments of treatment integrity errors during academic instruction and found that correct tangible delivery is the most common error made by teachers (Carroll et al., 2013; Kodak et al., 2018). Teachers were likely to engage in both errors of commission (providing reinforcement following errors or no response) and omission (not reinforcing correct responses). Although both types of errors have been shown to impede learning, errors of commission may be more detrimental to skill acquisition (Bergmann et al., 2021).
Intermittent schedules, such as VR schedules, are commonly recommended following skill mastery to promote maintenance (e.g., Leaf & McEachin, 1998; Maurice et al., 1996). These schedules are most effective (i.e., produce steady responding with less pausing) when they include small response requirements (Schlinger et al., 2008). However, teachers may have more difficulty implementing VR schedules than continuous or FR schedules. For teachers and classroom assistants to accurately implement the VR schedules of reinforcement specified in students’ academic programs, both the mean number of responses required to contact reinforcement and the variability around that mean must be implemented with fidelity. Furthermore, implementers must continuously track the predetermined arrangement of response requirements associated with reinforcement (Baker, 1979) and include ratios both shorter and longer than the mean while preserving the overall mean. Inaccurate VR schedule implementation may produce abrupt increases in ratio requirements that raise response effort beyond what the schedule supports, which can lead to extinction of responding and extinction-induced emotional side effects. Moreover, schedules that include a narrow range of values or omit smaller ratios may more closely approximate FR schedules, which are less preferred by participants (Mullane et al., 2017) and more prone to postreinforcement pauses than VR schedules (Schlinger et al., 1990). Therefore, when using VR schedules, the amount of variability relative to the identified mean should be examined during implementation, especially because the amount of reinforcement delivered to a student could decrease the value of the reinforcer and act as an abolishing operation for continued student responding (Bancroft & Bourret, 2008).
Although the efficacy of VR schedules has been examined in a wide variety of procedures, research demonstrating challenges related to implementing VR schedules and evaluations of intervention strategies for improving the use of these schedules is limited. Accuracy of the mean response requirement and variability around that mean could be targeted by selecting and preplanning VR schedules before implementation to provide reinforcement with maximum effectiveness and avoid fidelity issues, such as overuse of large ratio requirements (Baker, 1979; Cooper et al., 2020). Therefore, the purpose of the current investigation was to (1) examine the range of response requirements to reinforcement to determine how closely they adhere to the programmed mean and variability requirements during classroom assistant implementation of VR schedules of reinforcement and (2) evaluate the effects of a programmed schedule of reinforcement on the accurate implementation of two preselected VR schedules.
Method
Participants, Setting, and Materials
Six classroom assistants (CAs) at a center-based ABA school program for students aged 3–21 with autism spectrum disorder (ASD) participated in this study (see Table 1). The school program served approximately 70 children with a wide variety of needs and support levels. The six CAs worked in two different classrooms. Three CAs in one classroom (Randall, Melissa, & Tara) implemented a VR-5 schedule of reinforcement, and three different CAs in another classroom (Natalie, Sally, & Lauren) implemented a VR-10 schedule of reinforcement.
Table 1.
Classroom Assistant Demographic Information
| Participant | Gender Identity | Age | Race | Ethnicity |
|---|---|---|---|---|
| Lauren | F | 20 | White | non-Hispanic |
| Melissa | F | 27 | White | non-Hispanic |
| Natalie | F | 25 | White | non-Hispanic |
| Randall | M | 29 | Asian & white | non-Hispanic |
| Sally | F | 24 | Black | Hispanic |
| Tara | F | 21 | Black & white | non-Hispanic |
Classroom assistants had been working at the center between 1 and 3 years, had previously received didactic training on ABA teaching and behavior management strategies (including implementation of VR schedules during the onboarding process), and were currently implementing the VR schedules of reinforcement as part of students’ recommended programming. All sessions were conducted within the classroom while CAs were implementing discrete trial teaching sessions with students who had VR schedules incorporated into their programming. Each CA worked with, on average, six students on a rotating basis. CAs continued to collect data on student performance during work demands as usual. Each session contained five opportunities for reinforcement. Therefore, total session time varied between 5 and 20 min depending on the specified VR schedule, task, and reinforcer.
During the intervention, CAs were provided with printed schedules of reinforcement (Appendix A & B), which cost 10¢ each to print (including paper and ink), and the principal investigator instructed them on how to use the printed reinforcement schedule before each session (see Appendix C). The instructions required approximately 1 min to provide to each CA, and any questions were addressed at that time. The authors of this study manually created printed schedules of reinforcement based on information from the literature to include relatively more short ratios than long ratios (DeLeon et al., 2013), with the VR-5 schedule including three values under 5 and two values greater than 5 and the VR-10 schedule including three values less than 10 and two values greater than 10. Four VR-5 and four VR-10 schedules were created and randomized using a manually created list for use during the intervention.
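The construction rule for the printed schedules can be expressed as a small validity check. In the Python sketch below, the component values are hypothetical illustrations (the article does not list the exact values printed on its schedules); the function simply verifies the stated constraints: an exact target mean across the five components, with three values below the mean and two above it.

```python
def check_printed_schedule(values, target_mean, n_short, n_long):
    """Validate a hand-built printed VR schedule against the construction
    rule described in the text: exact target mean, with relatively more
    short ratios than long ratios (cf. DeLeon et al., 2013)."""
    shorts = [v for v in values if v < target_mean]
    longs = [v for v in values if v > target_mean]
    return (
        sum(values) / len(values) == target_mean
        and len(shorts) == n_short
        and len(longs) == n_long
    )

# Hypothetical component values for one printed schedule of each type.
vr5_candidate = [2, 3, 4, 7, 9]      # three values under 5, two over; mean = 5
vr10_candidate = [4, 7, 9, 13, 17]   # three values under 10, two over; mean = 10
print(check_printed_schedule(vr5_candidate, 5, 3, 2))    # True
print(check_printed_schedule(vr10_candidate, 10, 3, 2))  # True
```

A check like this makes it easy to reject candidate schedules that drift from the target mean or collapse toward an FR arrangement (e.g., five identical values would fail the short/long counts).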
Response Measurement and Interobserver Agreement
We collected frequency data on all verbally delivered demands per work period (see Appendix D). This included verbal demands to attend to the CA (e.g., “look at me”), sit appropriately (e.g., “push your chair in”), and engage in the task (e.g., “put with same.”). The frequency count scored the initial verbal discriminative stimulus (SD) rather than the number of responses in a task (tasks included sorting cards, reading words, and solving simple math problems). For example, if students were handed five cards and told to “match,” this would count as one demand. If students were told to “match” each time they were handed a single card, each SD would count as a demand. Two dependent variables were included to measure the two components of VR schedules: (1) the mean number of responses before reinforcement was delivered and (2) the range of response requirements before reinforcement was delivered. The mean response requirement was calculated for baseline and intervention by determining the total number of correct responses to initial SDs emitted before each reinforcer was delivered divided by the total reinforcer deliveries.
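The mean response requirement calculation described above reduces to a one-line computation. The sketch below is illustrative (the function name and session values are hypothetical): the obtained mean is the total number of correct responses to initial SDs divided by the total number of reinforcer deliveries.

```python
def mean_response_requirement(responses_per_reinforcer):
    """Obtained VR mean for a session: total correct responses to initial
    SDs divided by the total number of reinforcer deliveries."""
    return sum(responses_per_reinforcer) / len(responses_per_reinforcer)

# Hypothetical session of five work periods on a programmed VR-5 schedule:
# reinforcers delivered after 3, 7, 4, 6, and 5 correct responses.
print(mean_response_requirement([3, 7, 4, 6, 5]))  # 5.0
```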
In the current study, we examined the relative spread of data away from a measure of central tendency, in this case the arithmetic mean (e.g., the 5 in VR 5 or the 10 in VR 10). Range is the simplest measure of variability, describing the difference between the maximum and minimum values of a dataset (Cox & Vladescu, 2023). For reference, the range of a VR-5 schedule created to the specifications of Bancroft and Bourret (2008; values 1–9 with equal frequency) is eight. In contrast, an FR-5 schedule implemented five times with no integrity errors produces a range of zero, whereas the same schedule implemented five times with a single integrity error (e.g., one FR 4 or FR 6 component) produces a range of 1; these values served as lower-bound reference points for the current study. Therefore, the range was chosen to analyze the variability of VR schedule implementation.
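The reference values above can be verified directly. This Python sketch (illustrative only; the ratio sequences are examples, not the study's data) computes the range statistic for the three cases described in the text.

```python
def schedule_range(ratios):
    """Range = maximum minus minimum of the implemented ratio requirements."""
    return max(ratios) - min(ratios)

# Bancroft & Bourret-style VR 5 spanning values 1-9 -> range of 8.
print(schedule_range([1, 3, 5, 7, 9]))   # 8
# Error-free FR 5 implemented five times -> range of 0.
print(schedule_range([5, 5, 5, 5, 5]))   # 0
# FR 5 with a single integrity error (one FR 6 component) -> range of 1.
print(schedule_range([5, 5, 6, 5, 5]))   # 1
```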
The principal investigator videotaped all sessions for data collection and interobserver agreement measures. Two research assistants independently scored sessions, and interobserver agreement (IOA) was calculated on a trial-by-trial basis for the number of verbal demands delivered prior to reinforcement. IOA data were collected for 34.5% of sessions across both baseline and treatment conditions and averaged 96.8% (range: 90%–100%).
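Trial-by-trial IOA of this kind can be sketched as follows. The code is an illustration under stated assumptions (the function name and observer counts are hypothetical): each work period counts as an agreement only when both observers record the same number of demands before reinforcement.

```python
def trial_by_trial_ioa(observer_a, observer_b):
    """Percentage of work periods on which two independent observers
    recorded the same demand count prior to reinforcement."""
    if len(observer_a) != len(observer_b):
        raise ValueError("both observers must score the same work periods")
    agreements = sum(a == b for a, b in zip(observer_a, observer_b))
    return 100 * agreements / len(observer_a)

# Hypothetical counts for one five-work-period session:
# observers agree on four of five periods -> 80% agreement.
print(trial_by_trial_ioa([4, 6, 5, 3, 7], [4, 6, 5, 3, 8]))  # 80.0
```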
Procedure
For each of the VR 5 and VR 10 CAs, a concurrent multiple-baseline design across participants was implemented. In this design, experimental control is demonstrated when CAs remaining in baseline have stable levels of responding, whereas systematic changes in responding occur when and only when CAs experience the programmed schedules of reinforcement intervention (Baer et al., 1968).
Baseline
During baseline, CAs were provided a brief explanation of VR schedules (e.g., “this VR-5 schedule will be calculated using five work sessions and divided by five to determine how closely you remain to the VR-5 schedule”) and were told to deliver reinforcement when students correctly responded to demands on a specified VR schedule. Three CAs in one classroom used a VR-5 schedule, and three CAs in the other classroom used a VR-10 schedule. The VR schedule (VR 5 or VR 10) for each CA stayed the same throughout baseline and intervention.
CAs were instructed to count all verbal demands, regardless of whether the demand was related to the instructional task (e.g., “match”), attending (e.g., “look at me”), or compliance (e.g., “hands down”). Error corrections and re-presentations were not counted as new demands, given that they were part of the same trial.
One session consisted of five work periods. At the end of each work period, the student was provided with a reinforcer. For example, if a CA was implementing a VR-5 schedule, it was expected that the number of correct responses over the course of the five work periods would average five. Each session had a varying number of demands based on the VR schedule.
Intervention
During the intervention, CAs were given a printed reinforcement schedule to use during work sessions (see Appendices A & B) and were provided a brief explanation of VR schedules (see Appendix C). CAs were told that the programmed reinforcement schedule specified the number of correct responses to initial SDs needed before reinforcement was delivered and that they should follow the schedule as closely as possible. All other directions to CAs remained the same as in baseline, and the CAs implemented the printed schedules while working with the students in the classroom.
Results
We compared the means and ranges of the number of correct responses preceding reinforcement in each session across the baseline phase (no printed schedule) and the intervention phase (printed VR-5 or VR-10 programmed schedules of reinforcement). When CAs implemented VR schedules during baseline, they deviated modestly from the target VR schedule (e.g., implementing a VR 7 instead of a VR 5). The range was relatively narrow (e.g., reinforcing every 4–6 responses; range = 2) and tended to omit short response requirements (e.g., 1–3 or 1–6 responses between reinforcement deliveries for VR-5 and VR-10 schedules, respectively). However, when CAs used printed VR-5 and VR-10 programmed schedules of reinforcement, the mean number of correct responses per session was closer to the intended target (i.e., VR 5 or VR 10), the range increased substantially (e.g., reinforcing every 1–9 responses; range = 8), and relatively smaller response requirements were included.
Figure 1 displays the number of correct responses to reinforcement (closed circles) per session for CAs who implemented a VR-5 schedule of reinforcement. CAs sometimes delivered reinforcement after the same number of responses more than once in a given session. In those sessions, fewer than five closed circles are displayed. The data path displays the mean for each session. All CAs in the VR-5 condition deviated modestly from the intended mean during baseline, with Randall and Melissa implementing relatively larger reinforcer requirements (M = 6.7 and 5.8, respectively) and Tara implementing smaller reinforcer requirements (M = 4.6). The mean number of correct responses to reinforcement shifted toward the programmed requirement during intervention for all three CAs (M = 5.1, 5.0, and 4.9 for Randall, Melissa, and Tara, respectively).
Fig. 1.

Number of Correct Responses to Reinforcement per Session before and after Implementation of VR-5 Programmed Schedules of Reinforcement. The solid data path represents the arithmetic mean of each session. Dashed lines refer to the target schedule
Compared to the mean number of responses to reinforcement, the effect of using programmed schedules on the range of the CA-implemented VR schedules was more pronounced. During baseline, the per-session range of responses to reinforcement varied from 3 to 6 for Randall, 3 to 5 for Melissa, and 1 to 6 for Tara. Following the introduction of the programmed schedules of reinforcement, per-session ranges increased to 6–13 for Randall, 7–12 for Melissa, and 5–10 for Tara.
Figure 2 displays the number of correct responses to reinforcement by session for CAs who implemented a VR-10 schedule of reinforcement. CAs sometimes delivered reinforcement after the same number of responses more than once in a given session. In those sessions, fewer than five closed circles are displayed. The data path displays the mean for each session. All CAs in the VR-10 condition deviated from the intended mean during baseline. Natalie, Sally, and Lauren implemented relatively larger reinforcer requirements in baseline (M = 11.3, 10.4, and 11.5, respectively). The mean number of correct responses to reinforcement shifted toward the programmed requirement during intervention for all CAs (M = 10.6, 9.9, and 10.1, for Natalie, Sally, and Lauren, respectively).
Fig. 2.

Number of Correct Responses to Reinforcement per Session before and after Implementation of VR-10 Programmed Schedules of Reinforcement. The solid data path represents the arithmetic mean of each session. Dashed lines refer to the target schedule
Compared to the mean number of responses to reinforcement, the effect of using programmed schedules on the range of the CA-implemented VR schedules was more pronounced. During baseline, the per-session range of responses to reinforcement varied from 4 to 5 for Natalie, 4 to 10 for Sally, and 3 to 6 for Lauren. Following the introduction of the programmed schedules of reinforcement, per-session ranges increased to 9–16 for Natalie, 12–18 for Sally, and 11–15 for Lauren.
Implications of Findings for Practice
The current study demonstrated that, during baseline, CAs were likely to deviate from the intended VR schedule both in the mean response requirement to initial SDs and in the amount of variability around that mean. This finding is consistent with previous research showing that reinforcer-delivery errors (i.e., errors of commission) are among the most common procedural fidelity errors made by teachers (e.g., Bergmann et al., 2023; Carroll et al., 2013; Kodak et al., 2018). Delivering reinforcement on a leaner-than-intended average schedule could produce ratio strain because reinforcement falls below the level necessary to sustain responding (Baker, 1979; Cooper et al., 2020), which can lead to extinction of responding and emotional side effects following abrupt increases in ratio requirements. In addition, without programmed schedules, all CAs implemented schedules with a narrower range than intended (Cox & Vladescu, 2023). In fact, baseline schedules sometimes more closely approximated FR schedules implemented with poor integrity than the intended VR schedules of reinforcement. This is a concern because FR schedules are more prone to extinction effects, are more susceptible to postreinforcement pauses, and are less preferred by participants than VR schedules (Argueta et al., 2019; Field et al., 1996; Mullane et al., 2017; Schlinger et al., 1990). Notably, when using programmed schedules, not only did the overall range of correct responses to reinforcement increase, but CAs also included smaller ratios while staying close to the intended mean.
Previous research has shown that the type of measurement method affects procedural fidelity levels (Bergmann et al., 2023). That is, evaluating procedural fidelity across all components together (i.e., global measures of fidelity) may mask component errors and inflate fidelity levels. In the current experiment, assessing the overall means of reinforcer delivery would have masked integrity errors related to CAs adhering to the appropriate range of reinforcement values (Cook et al., 2015). When implemented correctly, VR schedules of reinforcement support behavior maintenance, prevent reinforcement satiation, promote resistance to extinction, and result in high, steady rates of student responding (Cooper et al., 2020; Ferster & Skinner, 1957). The current study demonstrates that these benefits are more likely to be realized with the use of programmed schedules of reinforcement. Programmed schedules assisted CAs in remaining closer to the desired mean while appropriately varying reinforcement within each VR reinforcement opportunity. Although CAs increased variability for both VR-5 and VR-10 conditions using programmed schedules, the effects were particularly salient for the VR-10 schedules, indicating that leaner schedules may be more challenging to implement without programmed schedules or other supports. As an additional benefit, printed schedules may be a cost-effective and relatively easy-to-implement option for the classroom, which is particularly important for the public school setting. In a setting where resources and opportunities for training are often lacking, the simplicity of these schedules means they require minimal training, therefore making them a more feasible option (Kucharczyk et al., 2015).
There were several limitations of the current study that may suggest areas for future research. First, maintenance of CA performance following treatment and the social acceptability of the intervention were not evaluated. Researchers did not remove the printed schedules because, given the simplicity and low cost of the treatment, they benefited the implementation of the VR schedules for students. However, future research could evaluate whether accurate VR schedule implementation can be maintained in the absence of printed schedules and whether CAs appreciated or felt constrained by the printed schedules. In addition, data were not collected on student performance; therefore, the effects of variability in schedule implementation on compliance, skill acquisition, and problem behavior are unknown. Future research may address this by replicating the current study while also collecting data on student compliance, skill acquisition, and problem behavior to determine whether the accuracy of VR schedule implementation has an impact on these variables.
Finally, using range or some other measure of schedule variability (e.g., standard deviation) presents an intriguing possibility for research, in that the variability of reinforcement schedules can be characterized by a continuum of values rather than dichotomous FR or VR categories. A continuum offers finer levels of granularity than binary categories and would allow for a more nuanced and accurate representation of schedule variability. Therefore, future research may parametrically evaluate the effects of different levels of schedule variability on skill maintenance as well as the extent to which decreases in procedural integrity of reinforcement schedule adherence affect student learning outcomes.
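The continuum described above can be made concrete with descriptive statistics. The Python sketch below (illustrative only; the ratio sequences are hypothetical) scores three schedules on both range and population standard deviation: an error-free FR sits at zero on either metric, and progressively more variable schedules score higher, so variability becomes a graded quantity rather than a binary FR/VR label.

```python
import statistics

# Hypothetical implemented schedules, ordered from no variability to wide
# variability around the same mean of 5.
fr5 = [5, 5, 5, 5, 5]           # error-free FR 5
vr5_narrow = [4, 5, 5, 5, 6]    # VR 5 with a narrow spread
vr5_wide = [1, 3, 5, 7, 9]      # VR 5 with a wide spread (Bancroft & Bourret style)

for label, sched in [("FR 5", fr5), ("narrow VR 5", vr5_narrow), ("wide VR 5", vr5_wide)]:
    rng = max(sched) - min(sched)
    sd = round(statistics.pstdev(sched), 2)
    print(f"{label}: range = {rng}, SD = {sd}")
```

Both metrics order the three schedules identically here; standard deviation additionally weights how often extreme values occur, which range ignores, so the two measures could disagree for schedules that touch extreme values only rarely.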
Appendix A
Training Materials
Printed programmed schedules of reinforcement provided to CAs for implementation of VR-5 schedule of reinforcement.
Appendix B
Printed programmed schedules of reinforcement provided to CAs for implementation of VR-10 schedules of reinforcement.
Appendix C
Training script provided to CAs prior to implementation of the programmed schedule of reinforcement.
“You are working on a variable ratio 5 schedule. This VR-5 schedule will be calculated using 5 work sessions. The demands will be added from these 5 sessions and divided by 5 to determine how closely you remained to the VR-5 schedule. Error corrections do not count as a new demand. Redirection, however, such as ‘push your chair in’ and ‘put your hands in your lap,’ does count as a demand. You have been provided a VR schedule to follow. Each number represents the number of demands in each work session prior to the student receiving reinforcement. Please follow the schedule as closely as possible.”
Appendix D
Data sheet used to collect frequency data during work sessions.
Funding
No funding was received to assist with the preparation of this article.
Data Availability
All data generated or analyzed during this study are included in this published article.
Declarations
Conflicts of Interest
We have no conflict of interest to disclose. All authors certify that they have no affiliations with or involvement in any organization or entity that has a financial or nonfinancial interest in the subject matter or materials discussed in this article.
Footnotes
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- Argueta, T., Leon, Y., & Brewer, A. (2019). Exchange schedules in token economies: A preliminary investigation of second-order schedule effects. Behavioral Interventions, 34(2), 280–292. 10.1002/bin.1661
- Baer, D. M., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 1(1), 91–97. 10.1901/jaba.1968.1-91
- Baker, A. M. (1979). Methods of delivering variable schedules of reinforcement. Journal of Special Education Technology, 2(3), 52–58. 10.1177/016264347900200307
- Bancroft, S. L., & Bourret, J. C. (2008). Generating variable and random schedules of reinforcement using Microsoft Excel macros. Journal of Applied Behavior Analysis, 41(2), 227–235. 10.1901/jaba.2008.41-227
- Bergmann, S., Kodak, T., & Harman, M. J. (2021). When do errors in reinforcer delivery affect learning? A parametric analysis of treatment integrity. Journal of the Experimental Analysis of Behavior, 115(2), 561–577. 10.1002/jeab.670
- Bergmann, S., Niland, H., Gavidia, V. L., Strum, M. D., & Harman, M. J. (2023). Comparing multiple methods to measure procedural fidelity of discrete-trial instruction. Education & Treatment of Children, 46, 201–220. 10.1007/s43494-023-00094-w
- Briggs, A. M., Mitteer, D. R., Bergmann, S., & Greer, B. D. (2023). Reinforcer thinning: General approaches and considerations for maintaining skills and mitigating relapse. In J. L. Matson (Ed.), Handbook of applied behavior analysis (pp. ). Autism and Child Psychopathology Series. Springer. 10.1007/978-3-031-19964-6_6
- Carroll, R. A., Kodak, T., & Fisher, W. W. (2013). An evaluation of programmed treatment-integrity errors during discrete-trial instruction. Journal of Applied Behavior Analysis, 46(2), 379–394. 10.1002/jaba.49
- Cook, J. E., Subramaniam, S., Brunson, L. Y., Larson, N. A., Poe, S. G., & St. Peter, C. C. (2015). Global measures of treatment integrity may mask important errors in discrete-trial training. Behavior Analysis in Practice, 8, 37–47. 10.1007/s40617-014-0039-7
- Cooper, J. O., Heron, T. E., & Heward, W. L. (2020). Applied behavior analysis (3rd ed.). Pearson.
- Cox, D. J., & Vladescu, J. C. (2023). Statistics for applied behavior analysis practitioners and researchers. Academic Press. 10.1016/b978-0-323-99885-7.00016-7
- DeLeon, I. G., Bullock, C. E., & Catania, A. C. (2013). Arranging reinforcement contingencies in applied settings: Fundamentals and implications of recent basic and applied research. In G. J. Madden (Ed.), APA handbook of applied behavior analysis: Vol. 2. Translating principles into practice (pp. 47–75). American Psychological Association. 10.1037/13938-003
- Ferster, C. B., & Skinner, B. F. (1957). Schedules of reinforcement. Appleton-Century-Crofts.
- Field, D. P., Tonneau, F., Ahearn, W., & Hineline, P. N. (1996). Preference between variable ratio and fixed ratio schedules: Local and extended relations. Journal of the Experimental Analysis of Behavior, 66(3), 283–295. 10.1901/jeab.1996.66-283
- Jones, S. H., & St. Peter, C. C. (2022). Nominally acceptable integrity failures negatively affect interventions involving intermittent reinforcement. Journal of Applied Behavior Analysis, 55(4), 1109–1123. 10.1002/jaba.944
- Kodak, T., Cariveau, T., LeBlanc, B., Mahon, J., & Carroll, R. A. (2018). Selection and implementation of skill acquisition programs by special education teachers and staff for students with autism spectrum disorder. Behavior Modification, 42(1), 58–83. 10.1177/0145445517692081
- Kucharczyk, S., Reutebuch, C. K., Carter, E. W., Hedges, S., El Zein, F., Fan, H., & Gustafson, J. R. (2015). Addressing the needs of adolescents with autism spectrum disorder: Considerations and complexities for high school interventions. Exceptional Children, 81(3), 329–349. 10.1177/0014402914563703
- Leaf, R., & McEachin, J. J. (1998). A work in progress: Behavior management strategies and a curriculum for intensive behavioral treatment of autism. Different Roads to Learning.
- LeBlanc, L. A., Hagopian, L. P., Maglieri, K. A., & Poling, A. (2002). Decreasing the intensity of reinforcement-based interventions for reducing behavior: Conceptual issues and a proposed model for clinical practice. The Behavior Analyst Today, 3(3), 289–300. 10.1037/h0099991
- Maurice, C. E., Green, G. E., & Luce, S. C. (1996). Behavioral intervention for young children with autism: A manual for parents and professionals. Pro-Ed.
- Mullane, M. P., Martens, B. K., Baxter, E. L., & Steeg, D. V. (2017). Children’s preference for mixed- versus fixed-ratio schedules of reinforcement: A translational study of risky choice. Journal of the Experimental Analysis of Behavior, 107(1), 161–175. 10.1002/jeab.234
- Orlando, R., & Bijou, S. W. (1960). Single and multiple schedules of reinforcement in developmentally retarded children. Journal of the Experimental Analysis of Behavior, 3(4), 339–347. 10.1901/jeab.1960.3-339
- Schlinger, H., Blakely, E., & Kaczor, T. (1990). Pausing under variable-ratio schedules: Interaction of reinforcer magnitude, variable-ratio size, and lowest ratio. Journal of the Experimental Analysis of Behavior, 53(1), 133–139. 10.1901/jeab.1990.53-133
- Schlinger, H. D., Derenne, A., & Baron, A. (2008). What 50 years of research tell us about pausing under ratio schedules of reinforcement. The Behavior Analyst, 31, 39–60. 10.1007/BF03392160
- St. Peter Pipkin, C., Vollmer, T. R., & Sloman, K. N. (2010). Effects of treatment integrity failures during differential reinforcement of alternative behavior: A translational model. Journal of Applied Behavior Analysis, 43(1), 47–70. 10.1901/jaba.2010.43-47
- Van Houten, R., & Nau, P. A. (1980). A comparison of the effects of fixed and variable ratio schedules of reinforcement on the behavior of deaf children. Journal of Applied Behavior Analysis, 13(1), 13–21. 10.1901/jaba.1980.13-13
