Abstract
Preference assessments are used to make data-based decisions about which stimuli to use as reinforcers, but they can be challenging to conduct frequently enough to avoid problems related to momentary shifts in preference and reinforcer efficacy. It remains unclear whether, why, and how clinicians change reinforcers on a momentary basis. Therefore, this study aimed to determine common reasons for, and methods of, changing reinforcers in practice. Most respondents indicated that they often change reinforcers during a session, do so when the client mands for or attends to different stimuli or refuses the current stimulus, and identify the new reinforcer based on recent client behaviors (e.g., mands) or by providing an informal choice between stimuli. Responses did not vary meaningfully based on respondent credentials, client characteristics, or service goals. Implications for clinical practice, as well as future research on methods of momentary preference assessment and reinforcer identification, are discussed.
Supplementary Information
The online version contains supplementary material available at 10.1007/s40617-023-00847-4.
Keywords: Clinical practice, In-the-moment reinforcer analysis, Preference assessment, Reinforcer identification, Survey
The identification and use of efficacious reinforcers is a critical part of applied behavior analysis. Behavior analysts are ethically obligated to individualize their procedures (Behavior Analyst Certification Board [BACB], 2020, Guideline 2.14), and individualizing reinforcers based on a client's preferences is a fundamental way of doing so. Moreover, utilizing a client's most preferred stimuli may maintain more responding (e.g., DeLeon et al., 2009; Morris & Vollmer, 2020a) and facilitate faster rates of skill acquisition (Morris & Vollmer, 2020b; Toussaint et al., 2016) relative to less preferred stimuli. The term stimulus preference assessment (SPA) refers to a wide range of methods for making objective, data-based decisions about which stimuli should be used as reinforcers (see Hagopian et al., 2004; Morris et al., 2023b; Tullis et al., 2011, for reviews). SPAs can be especially useful for identifying preferred stimuli with clients who cannot otherwise communicate their preferences. Since the introduction of the SPA (Pace et al., 1985), researchers have developed several SPA methods (e.g., DeLeon & Iwata, 1996; Fisher et al., 1992; Roane et al., 1998) that are effective at identifying reinforcers and discriminating differences in relative reinforcer efficacy (e.g., DeLeon et al., 2009; Roscoe et al., 1999).
Despite the strengths of established SPA methods, several factors may inhibit their utility in clinical practice. One is the stability of preference across repeated assessments. Some studies suggest preference is relatively stable and SPA results across time may be strongly, positively correlated (e.g., Hanley et al., 2006; Kelley et al., 2016); however, others suggest preference is likely to vary and SPA results across time are only weakly correlated (e.g., Carr et al., 2000; Zhou et al., 2001). Moreover, even when results are strongly, positively correlated, the highest preferred stimuli (i.e., the best stimuli to use as reinforcers) may vary (e.g., Conine et al., 2021). Recent research suggests that results may become less consistent as the amount of time between assessments increases (MacNaul et al., 2021). Another factor limiting the clinical utility of SPAs may be the time they require to conduct. Graff and Karsten (2012) surveyed clinicians providing services to individuals with developmental disabilities and found that, although the majority of clinicians providing behavioral services had conducted SPAs, they did so less often than once a month. Notably, clinicians indicated that time was the primary barrier to more frequent use of SPAs. Thus, these two factors contribute in tandem to a broader limitation of SPA technology: the fact that preference often changes over time suggests that SPAs should be conducted more frequently, yet time requirements prevent many clinicians from doing so. In response to these limitations, researchers have evaluated methods of improving the efficiency of SPAs as well as alternative methods of identifying reinforcers.
Researchers have aimed to improve the efficiency of SPAs, particularly multiple stimulus without replacement preference assessments (MSWOs; DeLeon & Iwata, 1996), by evaluating different numbers and groupings of assessment sessions. Research in this area suggests that the most efficient assessments, consisting of only one or two sessions, may produce results that are strongly correlated with those produced by three or more sessions (Carr et al., 2000; Conine et al., 2021; Morris et al., 2023a; Raetz et al., 2013; Richman et al., 2016). However, shorter assessments are unlikely to identify the same highest-preferred stimuli as longer assessments (Conine et al., 2021; Graff & Ciccone, 2002; Morris et al., 2023a).
Another way researchers have attempted to resolve the limitations of SPAs is by evaluating alternative methods of identifying reinforcers. One such alternative is an indirect assessment, in which caregivers or clinicians who are familiar with the individual are asked to report the individual's preferences. In some cases, stimuli indicated to be the highest preferred by the indirect assessment have been found to function as reinforcers (Cote et al., 2007; Fisher et al., 1996; Resetar & Noell, 2008). However, relative to SPAs, indirect assessments are less likely to identify the highest-preferred stimuli that function as the most efficacious reinforcers (Cote et al., 2007; Morris & Vollmer, 2022). In general, the results of indirect assessments correspond only weakly with the results of SPAs (Cote et al., 2007; Fisher et al., 1996; Green et al., 1991; Resetar & Noell, 2008; see Verschuur et al., 2011, for an exception). Thus, more efficient SPA variations consisting of fewer sessions, as well as alternatives such as indirect assessments, may not yield the same results as full SPAs and therefore may not be sufficient replacements in all circumstances.
In light of these limitations, it may be beneficial to develop procedures for changing reinforcers between opportunities to conduct formal SPAs. For example, when a stimulus or activity that has previously functioned as a reinforcer appears to no longer do so, methods of quickly identifying a new stimulus or activity that is likely to function as a reinforcer would be helpful. Technically speaking, stimuli that have not been delivered contingent on a response and demonstrated to increase the future probability of that response cannot be called reinforcers. The process described above may be most accurately characterized as shifting from one consequence previously hypothesized to effectively support the target behavior to another consequence hypothesized to be more effective in doing so. However, for concision and readability, and to align with how this process may be described in practice, we refer to it as "changing reinforcers" throughout the remainder of this article. The potential benefit of such procedures has been recognized by clinicians and researchers alike. Many clinicians report utilizing less formal or "miniature" SPAs that might consist of only a single choice trial (Graff & Karsten, 2012), and researchers have demonstrated that providing a choice between stimuli indicated as preferred by an SPA may support higher rates of responding, facilitate faster skill acquisition, and be preferred by the learner over the predetermined delivery of the same stimuli (Hanratty & Hanley, 2021; Toussaint et al., 2016). However, it remains unclear when and why such choices should be offered or how to determine which stimuli should be included as options in the array. To this end, researchers have begun to evaluate procedures for determining when and how to identify new reinforcers on a momentary basis. Leaf and colleagues (Alcalay et al., 2019; Leaf et al., 2012, 2015, 2018) have developed a method termed in-the-moment reinforcer analysis (IMRA). During IMRA, clinicians evaluate and change reinforcers continuously based on client behaviors that may indicate a lack of reinforcer efficacy (e.g., negative affect, no item engagement, no skill improvement, requests for other items). When clients display such behaviors, clinicians attempt to identify a more efficacious reinforcer based on factors such as the client's previous engagement with that stimulus, the similarity of a stimulus to other known reinforcers, or stimulus novelty. Recent studies suggest that stimuli identified through IMRA support a similar amount of responding (Leaf et al., 2015) and facilitate similar rates of skill acquisition (Alcalay et al., 2019; Leaf et al., 2018) as those identified through SPAs.
Beyond the factors considered in IMRA, other areas of research suggest additional criteria that may be useful in the momentary evaluation of whether a stimulus is no longer functioning as a reinforcer. For example, studies utilizing progressive ratio schedules (e.g., Kronfli et al., 2020; Morris & Vollmer, 2020a) have included periods without responding to instructional materials, requests to stop or take a break, and the occurrence of problem behavior as indicators that a stimulus is no longer reinforcing. In addition, studies utilizing abolishing operations to facilitate behavior change (e.g., Lang et al., 2009, 2010; Neely et al., 2015; O'Reilly et al., 2009; Rispoli et al., 2011) have used criteria such as not attending to the item, orienting toward other items, resisting prompts to continue playing with an item, refusing or pushing an item away, walking away from the item, or attempting to leave the area. Many of these studies have also incorporated caregiver, clinician, or teacher report to identify more idiosyncratic behavioral indicators of a lack of reinforcer efficacy (Lang et al., 2010; O'Reilly et al., 2009; Rispoli et al., 2011). Aside from criteria for determining when to change reinforcers, methods of selecting a new, more efficacious reinforcer beyond those currently included in IMRA may also be useful. For example, selecting a new reinforcer based on recent SPAs, providing informal choices, and directly asking the client are supported by research on choice and reinforcer variability (e.g., Hanratty & Hanley, 2021) but remain unevaluated in the context of momentary reinforcer identification.
In summary, there is a discrepancy between how frequently most practitioners are able to conduct formal SPAs (Graff & Karsten, 2012) and how frequently SPAs need to be conducted to avoid problems related to changes in preference or motivation (MacNaul et al., 2021). This discrepancy is critical to consider, as it may prevent practitioners from consistently employing efficacious reinforcers. Practitioners (Graff & Karsten, 2012) and researchers (Hanratty & Hanley, 2021; Toussaint et al., 2016) suggest that providing frequent, informal choices may offer a solution, but these methods have not been evaluated in the context of momentary reinforcer evaluation. Thus, it remains unclear when such choices should be provided and what stimuli should be included in the array. Researchers have developed methods of momentary reinforcer evaluation to determine when and how to change reinforcers (e.g., Leaf et al., 2018), but these methods fail to incorporate many criteria that may be useful in determining when and how to change reinforcers on a momentary basis (e.g., problem behavior, attending to other stimuli). Moreover, as exemplified by the results of Graff and Karsten (2012), practitioners may use a variety of approaches to momentary reinforcer evaluation and identification that have not yet been captured in the research literature. Additional research is needed to yield a more comprehensive list of criteria for when and how behavior analysts should change reinforcers. Such research is a prerequisite for further research evaluating the utility and feasibility of implementing different criteria in clinical practice and training staff to utilize such practices.
Thus, we conducted a survey of clinicians providing behavior-analytic services with the aim of ascertaining whether, why, and how they change reinforcers on a momentary basis. One secondary aim was to replicate Graff and Karsten’s (2012) survey questions about which methods of reinforcer identification are used in practice and how frequently they are implemented. Another secondary aim was to evaluate how factors such as credential held, the age of clients served, the diagnosis of clients served, and the primary goal(s) of services may influence these practices. The results of this survey will serve as a first step in developing a comprehensive, technological account of momentary reinforcer analysis that may inform clinical practice and future research.
Method
Respondents
We recruited survey respondents in two ways. First, an invitation to participate was sent to all certificants, in all locations, at all levels of certification using the BACB's Mass Email Service. Second, due to a low receipt rate (described in Results) for the emails sent via the Mass Email Service, we subsequently added a snowball sampling approach by sending an invitation to participate to the following: (1) the Teaching Behavior Analysis listserv; (2) the American Psychological Association Division 25 listserv; (3) the Association for Behavior Analysis International's Verified Course Sequence Coordinator listserv; and (4) professional contacts of the authors. The invitation sent to this second set of sources also asked recipients to forward the invitation to any contacts who would be eligible to participate.
The only criterion for inclusion was that respondents reported they currently held a BACB credential as a Registered Behavior Technician (RBT), Board Certified Assistant Behavior Analyst (BCaBA), Board Certified Behavior Analyst (BCBA), or Board Certified Behavior Analyst-Doctoral (BCBA-D). We did this to ensure a more cohesive sample than previous survey research that recruited a variety of clinicians providing services to individuals with developmental disabilities (e.g., Graff & Karsten, 2012). After prospective respondents provided informed consent, the first survey question asked which of the four BACB credentials they currently held, if any. Respondents who indicated they held none of these credentials were thanked for their participation and did not participate in any other portions of the survey.
Survey Instrumentation
The survey was created, hosted, and administered using the Qualtrics survey platform. The full text and logic of the survey are provided online as supplementary material. Once respondents began the survey, they were allotted 1 week to finish answering all questions. That is, after beginning the survey, respondents could exit and return to complete their responses at a later time as long as it was within 1 week of their start time. Responses that were not completed within 1 week of the start time were considered incomplete. The survey contained three main sections. The first section collected demographic information about respondents, including certification level (RBT, BCaBA, BCBA, BCBA-D, or none of the above), the number of years providing ABA services (rounded to the nearest year), the age and diagnoses of the clinical populations with whom they work, the main goal(s) of the ABA services they provide, and the country in which they currently provide services (and state, if United States was selected). The questions about clinical population and main goal(s) of services were multiple choice, with the option to select multiple responses, along with a free-text "Other" option.
The second portion of the survey contained two questions and was designed to replicate and extend a portion of Graff and Karsten (2012). The first question asked: “Which of the following methods describe how you identify items that you will use as reinforcers with a given client (check all that apply)?” The list of response options included (in the following order): asking parents/caregivers what your client likes, asking the client what they like, a formal interview or survey with the parents/caregivers, a formal interview or survey with the client, observing the client to see what items they interact with, paired stimulus preference assessment (PSPA; Fisher et al., 1992), multiple-stimulus-without-replacement preference assessment (MSWO; DeLeon & Iwata, 1996), free operant preference assessment (FOPA; Roane et al., 1998), single stimulus preference assessment (SSPA; Pace et al., 1985), and other (please describe). The second question in this section presented respondents with a matrix based on their responses to the first question. For each method that respondents selected on the first question, they were asked to specify how often they utilized that method. Response options included: at least daily, weekly, monthly, and less often than monthly.
The third section of the survey included questions about whether, why, and how respondents changed reinforcers. Before beginning the third section of the survey, we described a specific clinical context with the aim of reducing variance in responding due to respondents answering the questions in reference to different clinical contexts. In particular, respondents were asked whether they were familiar with providing intensive (i.e., 20 hr per week) services for children with ASD involving at least some amount of discrete trial teaching (see supplementary material for the full description). Only respondents who indicated that they had experience in this context participated in the third section of the survey. Those who indicated they did not were thanked for their participation and the survey ended.
The third section contained three multiple-choice questions and two open-ended questions. First, respondents were asked, "After you've identified some items to use as reinforcer(s) with a given client, do you ever change to new or different reinforcers while you are conducting a teaching session (e.g., during a block of 10 or 20 trials)?" Respondents who selected "no" were thanked for their participation and the survey ended. Respondents who selected "yes" were presented with the remaining survey questions. In the second question of this section, respondents were asked to list, in a free-text format, any reasons why they might change the reinforcer(s) they are using during a teaching session. Third, respondents were presented with the list of response options shown in Table 1 and asked, "Please check which of the following is a reason why you might change the reinforcer(s) you are using during a teaching session." Fourth, respondents were asked to list, in a free-text format, any methods or criteria (i.e., how) they would use to choose the next reinforcer if they change reinforcers during a teaching session. Finally, respondents were presented with the list of response options shown in Table 1 and asked, "If you decide to change reinforcer(s) during a teaching session, please check which of the following describes how you choose the reinforcer that you will use next."
Table 1.
Questions and Corresponding Response Options for the Third Section of the Survey
| Questions and Responses |
| --- |
| **After you've identified some items to use as reinforcer(s) with a given client, do you ever change to new or different reinforcers while you are conducting a teaching session (e.g., during a block of 10 or 20 trials)?** |
| Yes, I sometimes change the reinforcer(s) I am using during a teaching session. |
| No, I don't do this. |
| **Please list any reasons why you might change the reinforcer(s) you are using during a teaching session (e.g., things that have happened during the teaching session, client behaviors, etc.).** |
| [Free response] |
| **Please check which of the following is a reason why you might change the reinforcer(s) you are using during a teaching session (check all that apply).** |
| Client mands/asks for a different item (within or out of sight). |
| Client reaches for another item. |
| Client refuses the reinforcer(s) or does not engage with the reinforcer(s) when offered. |
| Client is not attending to instructor or instructional materials. |
| Client is not responding independently (i.e., not responding or requires prompts). |
| Client is engaging in problem behavior. |
| Client is engaging in other behaviors that are incompatible with teaching (e.g., stereotypy, walking away). |
| The passage of time (e.g., every 10 trials, every 10 minutes). |
| Reinforcer runs out or stops working (e.g., out of food, out of batteries, technical difficulties). |
| I don't change reinforcers during a teaching session. |
| **If you decide to change reinforcer(s) during a teaching session, how do you choose the reinforcer that you will use next (please list any and all strategies you would use)?** |
| [Free response] |
| **If you decide to change reinforcer(s) during a teaching session, please check which of the following describes how you choose the reinforcer that you will use next.** |
| Based on what the client has asked for during this teaching session. |
| Based on what the client is currently attending to or attempting to interact with. |
| Based on results from a previous preference or reinforcer assessment. |
| Gather some currently available items and provide a choice. |
| Gather some items your client has liked in the past and provide a choice. |
| Ask the client a question like: "What do you want?" |
| Try to find an item similar to items the client usually likes. |
| Try to find an item the client hasn't interacted with recently. |
| Try to find a totally novel item. |
| I don't change reinforcers during a teaching session. |
Data Analysis
For all questions that were asked in a multiple-choice or check-all-that-apply format (all questions from the first two sections of the survey and some questions from the third section; see Table 1), responses were analyzed by comparing the percentage of respondents who selected each response option and calculating 95% confidence intervals (CIs) in GraphPad Prism 8.1. We also analyzed whether the percentage of respondents selecting each response option varied based on respondents' answers to the demographic questions (e.g., credential, age and diagnosis of clients served, primary service goals).
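To make the percentage-plus-CI analysis concrete, the sketch below reproduces it in Python. This is a minimal illustration, not the study's actual analysis code: we assume a Wilson score interval, whereas GraphPad Prism offers several CI methods for proportions and the specific method used here is not stated.

```python
import math

def proportion_with_ci(k: int, n: int, z: float = 1.96) -> tuple[float, float, float]:
    """Return (proportion, lower, upper) for k endorsements out of n respondents.

    Uses the Wilson score interval for a binomial proportion (z = 1.96 for 95%).
    """
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    halfwidth = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return p, center - halfwidth, center + halfwidth

# Example: 184 of 204 respondents reported changing reinforcers mid-session.
p, lo, hi = proportion_with_ci(184, 204)
print(f"{p:.0%} (95% CI [{lo:.0%}, {hi:.0%}])")  # -> 90% (95% CI [85%, 94%])
```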
We asked two open-ended questions in the third section of the survey (i.e., the second and fourth questions in Table 1). Analyzing these free-text responses required a qualitative coding process (Miles et al., 2019). To begin qualitative coding, we first uploaded all respondents' free-text answers to the two open-ended questions into Dedoose coding software (Dedoose Version 9.0.54). Dedoose automatically created separate entries for: (1) each participant's answer to the second question in Table 1 (hereafter, the "why" question) and (2) each participant's answer to the fourth question in Table 1 (hereafter, the "how" question).
Once these entries were created in Dedoose, for each entry we: (1) created any number of discrete excerpts and (2) simultaneously applied one or more codes to each excerpt. We created an excerpt for any grammatically complete unit of text (either a short phrase or a sentence) that conveyed a complete idea or ideas relevant to the question. For example, one respondent answered the "why" question by saying "Sometimes the client becomes satiated or loses interest in the reinforcer. Sometimes a stronger reinforcer is needed or sometimes only verbal praise is an appropriate reinforcer." In this case, four excerpts were created: (1) "sometimes the client becomes satiated"; (2) "loses interest in the reinforcer"; (3) "sometimes a stronger reinforcer is needed"; and (4) "sometimes only verbal praise is an appropriate reinforcer." Excerpts were not created for portions of responses that were unrelated to the question. For example, if participants described how they would select a new reinforcer when answering the "why" question, or vice versa, no excerpt was created for that portion of their free-text response (i.e., it was not included in the analysis of responses to the "why" or "how" questions).
All of the qualitative codes we applied to the excerpts are shown alongside their operational definitions in Table 2. We used two types of codes: (1) a priori codes and (2) emergent codes. The initial set of a priori codes was generated via hypothesis coding (Miles et al., 2019, p. 70), which entailed creating one a priori code for each response option to the third question (check-all-that-apply for "why") and the fifth question (check-all-that-apply for "how"). See Tables 1 and 2 for the correspondence between the check-all-that-apply response options and the a priori codes for "why" and "how," respectively. We also used emergent coding to capture responses that were not predicted by the a priori codes but were nonetheless communicated by respondents (Stuckey, 2015). To create these codes, the second, third, and fifth authors each independently reviewed and coded approximately 10–20 responses and then met to discuss their results. During these meetings, the authors focused their discussion on any areas of initial disagreement, as well as any excerpts for which they were unable to apply an existing a priori code. The authors proposed emergent codes to capture any themes that appeared frequently among those excerpts and discussed until they reached consensus regarding the creation of any new emergent codes. This process continued over three such meetings; in the last of these meetings, the authors reached consensus that no additional emergent codes, beyond those they had already created, were needed to code the responses they had encountered on this third preliminary review.
Table 2.
Operational Definitions for the Qualitative Analysis
| Question | Code Type | Code Name | Definition |
| --- | --- | --- | --- |
| Why | A priori | Mands for other | Client mands, requests, or asks for another item or activity. |
| | | Reaches/Looks to other | Specific references to reaching for, pointing to, orienting towards, leaning toward, or looking at an item/activity (or similar behaviors). |
| | | Refuses current | Pushes item away, drops item, leaves item, does not engage with or stops engaging with the reinforcer, or does not attend to the reinforcer. |
| | | Not attending | Client not attending to, stops attending to, or loses interest in the task, instructional materials, or instructor. |
| | | Not responding | Client requires prompts or assistance, is not making correct or independent responses, or stops responding. |
| | | Problem behavior | Aggression, self-injury, property destruction, or any general, nonspecific reference to "problem behavior" or "challenging behavior." |
| | | Incompatible behavior | Includes stereotypy, standing up, walking away, or getting out of their seat. |
| | | Passage of time/trials | References to using the same reinforcer for a while, or switching because the same one has been used for too long or too many trials (e.g., every hour, every 10 trials). |
| | | Reinforcer breaks/unavailable | Includes a consumable running out or not being available, or a toy or electronic device no longer working due to batteries dying or similar. |
| | Emergent | Broad–Indicating response | Responses that say "indicating response," or that the client "indicates" an item, "indicates interest in," or "shows interest in" something, without naming any more specific observable behavior. |
| | | Broad–Loses interest | Responses that say the client "loses interest in," "is no longer interested in," "stops being interested in," or "indicates disinterest in" the current reinforcer, or similar, without naming any more specific observable behavior. |
| | | Broad–Reinforcer not working | Responses that broadly say the reinforcer is no longer a reinforcer, the reinforcer is not effective, or similar, without naming any more specific observable behavior. |
| | | Broad–Satiation | Responses that say "satiation," "satiated," or similar terms without naming any more specific observable behavior. |
| | | Broad–Motivation or Preference | Responses that say "motivation," "preference," MO, EO, or similar terms without naming any more specific observable behavior. |
| How | A priori | Client mand | Any statement about choosing the reinforcer based on a preceding or recent client request, regardless of mand modality (e.g., vocal, picture, sign). |
| | | Client attention/interaction | Any reference to choosing a reinforcer based on what the client is reaching for, pointing to, orienting towards, leaning toward, or looking at (or similar behaviors). |
| | | Previous assessment | Any reference to using the results of previous preference or reinforcer assessments. |
| | | Provide choice | Any statement about creating the opportunity for the client to make a discrete choice among more than one stimulus. This code was not applied for references to conducting a preference assessment or to the name of a specific preference assessment method. |
| | | Ask client | Asking the client what they want. |
| | | Find similar or recently preferred | Statements about choosing items that are similar to items used in the past or in the same category as other known preferred items. |
| | | Find novel | Statements about using a "new" item, an item the client has not seen in a long time, something different from their typical reinforcers, or an item or category of items that they have been deprived of for some time. |
| | Emergent | Broad–Indicating responses | Responses about following a client's lead or indication to determine what they want, without naming any more specific observable behavior. |
| | | Observe client with stimuli | Responses about exposing a client to multiple stimuli and seeing what they play with, go for, touch, engage with, etc. |
| | | Do preference assessment | Includes any statement about conducting a preference assessment. This code was only applied if the words "preference assessment" or the name of a specific preference assessment method (e.g., MSWO, PSPA) was used. |
| | | Ask others | Responses that reference asking another person, such as a therapist, teacher, or parent, what the client likes or wants. |
At least one code was applied to each excerpt. It was also possible to apply more than one code to a given excerpt, because participants often communicated more than one idea within a single grammatically complete unit. For example, for the “why” question one participant said, “If the client appears bored or satiated on the previous reinforcer, I may use something different in the room.” In this case, “If the client appears bored or satiated” was excerpted as a single unit, and codes for both “broad–satiation” and “broad–loses interest” were applied (see Table 2).
Once consensus was reached regarding emergent codes, the coding definitions were finalized (Table 2), and the full excerpting and coding process began. All remaining excerpts were created and coded independently by the second and third authors. A total of 706 excerpts were created and coded throughout the qualitative process. Interrater reliability (IRR) was evaluated by randomly selecting 173 excerpts (24.5%) to be coded independently by both raters. Of the 173 excerpts selected for IRR, 86 were taken from the "why" question and 87 were taken from the "how" question. Proportional agreement was calculated on an excerpt-by-excerpt basis by dividing the number of codes that were applied by both raters (i.e., agreements) by the number of codes applied by at least one rater (i.e., the sum of agreements and disagreements) for that excerpt. The resulting proportions were averaged across all excerpts and then multiplied by 100 to yield an IRR score. Following the completion of initial data collection, IRR was 93% overall (91% for the "why" question and 95% for the "how" question). Complete disagreements (i.e., none of the same codes applied) were obtained for only 3 of the 173 excerpts (2%) for which IRR was evaluated.
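Because each excerpt can carry multiple codes, this excerpt-by-excerpt agreement measure is equivalent to averaging the Jaccard overlap of the two raters' code sets. The minimal sketch below illustrates the calculation in Python; the excerpts and codes shown are invented for illustration and are not actual survey data.

```python
def irr_score(rater_a: list[set[str]], rater_b: list[set[str]]) -> float:
    """Mean proportional agreement across excerpts, expressed as a percentage.

    For each excerpt: (codes applied by both raters) / (codes applied by at
    least one rater), i.e., agreements / (agreements + disagreements).
    """
    proportions = [
        len(a & b) / len(a | b) for a, b in zip(rater_a, rater_b, strict=True)
    ]
    return 100 * sum(proportions) / len(proportions)

# Hypothetical codings of two excerpts by two independent raters.
rater_a = [{"broad-satiation", "broad-loses interest"}, {"mands for other"}]
rater_b = [{"broad-satiation", "broad-loses interest"}, {"refuses current"}]

# Excerpt 1 agrees fully (1.0); excerpt 2 is a complete disagreement (0.0).
print(f"IRR = {irr_score(rater_a, rater_b):.0f}%")  # -> IRR = 50%
```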
Results
Respondent Demographics and Response Rate
Two hundred fifteen respondents completed at least the entire first and second sections of the survey pertaining to demographics and general methods for reinforcer identification. Their responses to the demographic questions are shown in the second column of Table 3. Of the total 215 respondents, 70 responded using the link sent via the BACB Mass Email Service, and 145 responded using the link sent via listservs, professional contacts, and snowball sampling. Mailings were delivered via the BACB Mass Email Service on two occasions: in September 2021 and February 2022. Considering both mailings, 82% of emails (111,443 total) were never successfully delivered due to a high bounce rate (i.e., emails being blocked by security protocols and spam filters, due to technological limitations of the mass email service protocol). Of the 18% of emails that reached their intended recipients, 15% (3,659 total) were opened; thus, the 70 respondents from this source represent a 2% response rate among those who opened the email. The 145 additional participants completed the survey after the subsequent snowball sampling was conducted; however, these invitations were not tracked, and it is therefore not possible to calculate a response rate for this second sampling method. The relatively small sample size, the low response rate through the Mass Email Service, and the unknown response rate through snowball sampling may limit the representativeness of our sample and corresponding conclusions; however, even a limited description of current practices is better than no description at all, and these limitations do not prevent our results from informing research and practice related to reinforcer identification (see Discussion section).
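As a check on the funnel arithmetic above, the short sketch below reconstructs each step. The per-step figures come from the text; the back-calculated totals are approximations on our part.

```python
# Reconstructing the mass-email response funnel reported above.
bounced = 111_443              # 82% of all mailings were never delivered
total_sent = bounced / 0.82    # implied total mailings, ~135,906
delivered = total_sent * 0.18  # emails that reached recipients, ~24,463
opened = 3_659                 # emails opened by recipients
respondents = 70               # respondents via the Mass Email Service link

print(f"opened/delivered: {opened / delivered:.0%}")  # -> ~15%, as reported
print(f"response rate among openers: {respondents / opened:.1%}")  # -> ~1.9%, reported as 2%
```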
Table 3.
A Summary of Responses to the Demographic Questions across Respondents Who Completed Different Sections of the Survey
| Questions and Response Categories | Number (Percentage) of Respondents Completing Sections 1 and 2 | Number (Percentage) of Respondents Completing Sections 1, 2, and Yes–No | Number (Percentage) of Respondents Completing Sections 1, 2, and 3 |
| --- | --- | --- | --- |
| **Credential** | | | |
| RBT | 47 (22%) | 46 (23%) | 42 (24%) |
| BCaBA | 9 (4%) | 9 (4%) | 6 (3%) |
| BCBA | 105 (49%) | 98 (48%) | 84 (47%) |
| BCBA-D | 54 (25%) | 51 (25%) | 45 (25%) |
| **Years of Experience** | | | |
| 0–5 | 79 (37%) | 72 (33%) | 65 (37%) |
| 6–10 | 54 (25%) | 52 (25%) | 44 (25%) |
| 11–20 | 56 (26%) | 54 (26%) | 46 (26%) |
| 21+ | 26 (12%) | 25 (12%) | 22 (12%) |
| **Nationality** | | | |
| United States of America | 181 (84%) | 172 (84%) | 153 (86%) |
| Italy | 20 (9%) | 19 (9%) | 12 (7%) |
| Canada | 8 (4%) | 8 (4%) | 8 (5%) |
| United Kingdom | 2 (1%) | 2 (1%) | 2 (1%) |
| Jamaica | 1 (0.5%) | 1 (0.5%) | 0 (0%) |
| United Arab Emirates | 1 (0.5%) | 1 (0.5%) | 1 (0.5%) |
| **Age of Population Served** | | | |
| Children (0–9) | 198 (92%) | 187 (92%) | 166 (94%) |
| Adolescents (10–19) | 154 (72%) | 147 (72%) | 129 (73%) |
| Adults (19+) | 73 (34%) | 70 (34%) | 59 (33%) |
| **Diagnosis of Population Served** | | | |
| Autism spectrum disorder | 212 (99%) | 201 (99%) | 176 (99%) |
| Other developmental disability | 163 (76%) | 155 (76%) | 134 (76%) |
| No known diagnosis | 40 (19%) | 38 (19%) | 32 (18%) |
| **Primary Target Behaviors** | | | |
| Problematic or disruptive behaviors | 199 (93%) | 189 (93%) | 162 (92%) |
| Early learning/pre-academic skills | 160 (74%) | 152 (75%) | 137 (77%) |
| Social skills | 155 (72%) | 145 (71%) | 130 (73%) |
| Vocational or life skills | 101 (47%) | 98 (48%) | 87 (49%) |
| Feeding or mealtime behaviors | 73 (34%) | 72 (35%) | 64 (36%) |
| **Total** | 215 (100%) | 204 (100%) | 177 (100%) |
Reinforcer Identification Methods
Figure 1 depicts responses to the questions in the second section of the survey, which was completed by 215 respondents. For Fig. 1 and all remaining figures, each bar indicates the percentage of respondents who selected that response option and error bars indicate the 95% confidence interval (CI). The percentage of respondents indicating each response option is also displayed numerically in each figure, and the total number of respondents included in each analysis is provided as a figure note. The left panel of Fig. 1 depicts the percentage of respondents who indicated they use each method of reinforcer identification. A majority of respondents indicated that they asked parents, asked clients, observed the client, conducted PSPAs, conducted MSWOs, or conducted FOPAs. The methods reported by the highest percentages of respondents were asking parents, asking the client, or observing the client, and the methods reported by the fewest respondents were conducting formal interviews with the client or conducting SSPAs. We also analyzed the percentage of respondents who indicated each method of reinforcer identification based on the credential held, age of clients served, diagnosis of clients served, or primary goal(s) of services provided by the respondents (see supplemental material, Figs. 1–4); however, no clear differences were evident based on any of these factors (i.e., differences were small and CIs overlapped).
Fig. 1.
Percentage of Respondents who Utilize each Method of Reinforcer Identification and How Frequently They Do So. Note. 215 respondents are included in this analysis. The exact percentage of respondents indicating each answer is depicted graphically (both panels) and numerically (left panel only)
The right panel of Fig. 1 depicts the percentage of respondents who indicated they utilize a given method at least daily, weekly, monthly, or less often than monthly, as a percentage of the total number of respondents who indicated they utilized that method. There were clear differences in how frequently most methods were utilized. For example, a majority of respondents indicated that they ask the client (67%) or observe the client (80%) on a daily basis, whereas a majority of respondents indicated that they conduct formal interviews with the parent (73%) or client (61%) less often than once a month. Differences in how frequently other methods (e.g., different types of SPAs) were utilized were less clear, as indicated by the smaller differences in the percentage of respondents who indicated each option and the overlap in the corresponding CIs. However, a general trend among the structured SPA methods (PSPA, MSWO, FOPA, SSPA) was that more respondents reported using these methods monthly or less often, and fewer reported using them daily (an exception to this general pattern was noted for the FOPA; Fig. 1).
Whether, Why, and How to Change Reinforcers
Two hundred four respondents (of the total 215) completed all sections of the survey through the question asking whether they changed reinforcers during teaching sessions. Their responses to the demographic questions are shown in the third column of Table 3. The number of respondents decreased from 215 to 204 because 11 respondents who completed the first two sections of the survey either indicated that they did not have experience in the relevant context (n = 8) or exited the survey prior to completing the question about whether they change reinforcers during teaching sessions (n = 3). Figure 2 depicts the responses to the question asking whether respondents changed reinforcers during teaching sessions. Ninety percent (184/204) of respondents indicated that they sometimes change reinforcers during teaching sessions, whereas 10% (20/204) indicated that they do not. We also analyzed the percentage of respondents who indicated they do (i.e., yes) and do not (i.e., no) change reinforcers based on the credential held, age of clients served, diagnosis of clients served, or primary goal(s) of services provided by the respondents (see supplemental material, Figs. 5–8); however, no clear differences were evident based on most of these factors (i.e., differences were small and CIs overlapped). One exception was that fewer respondents holding an RBT credential (74%, 34/46) indicated they changed reinforcers during teaching sessions than those holding BCBA (93%, 91/98) or BCBA-D (96%, 49/51) credentials.
Fig. 2.
Percentage of Respondents Who Change Reinforcers during Teaching Sessions. Note. 204 respondents are included in this analysis. The exact percentage of respondents indicating each response is depicted graphically and numerically along with corresponding 95% CIs (graphically only)
One hundred seventy-seven respondents (of the total 215) completed the third section of the survey (questions about why and how they change reinforcers) as well as the first and second sections. Their responses to the demographic questions are shown in the fourth column of Table 3. The number of respondents decreased from 204 to 177 because 20 respondents indicated that they did not change reinforcers during teaching sessions and, thus, were not asked questions about why and how they do so, and because seven respondents exited the survey prior to completing the questions about why and how they change reinforcers. Figure 3 depicts responses to the open-ended question about why respondents change reinforcers during a teaching session. The left panel of Fig. 3 depicts broad or general reasons for changing reinforcers that were described by respondents (i.e., the "why" emergent codes in Table 2). None of these general reasons was described by a majority of respondents; the general reasons most likely to be described were satiation and a change in motivation. Other general reasons, such as the occurrence of an indicating response, a lack of interest, or the reinforcer becoming ineffective, were described by only 15% (26/177) of respondents or fewer. Many respondents also included more specific, behavioral descriptions of reasons they change reinforcers, and these are depicted in the right panel of Fig. 3. No single specific reason for changing reinforcers was described by a majority of respondents. The specific behavioral indicators most likely to be described were a lack of independent responding, refusal of the current reinforcer, or the occurrence of a mand for another reinforcer. Additional behavioral indicators, such as attending to another stimulus, not attending to instructional materials, engaging in problematic or incompatible behavior, the passage of time, or unavailability of the reinforcer, were described by only 15% of respondents or fewer. Any reasons for changing reinforcers that were described by fewer than 5% (10/177) of respondents were collapsed into the "other" category and included descriptions such as: because the presence or removal of the reinforcer became disruptive, to differentially reinforce a response, to expose the client to different or new reinforcers, or due to the occurrence of incompatible behaviors. In general, there was a large amount of variance in responses to the open-ended question, with many general responses that lacked specific behavioral indicators.
Fig. 3.
Percentage of Respondents Who Described Different Reasons for Changing Reinforcers (Open-Ended Question). Note. 177 respondents are included in this analysis. The exact percentage of respondents indicating each response is depicted graphically and numerically along with corresponding 95% CIs (graphically only)
Figure 4 depicts the responses to the close-ended “check all that apply” question asking why respondents may change reinforcers during a teaching session. A majority of respondents indicated that they may change reinforcers because the client mands for another stimulus, reaches for another stimulus, refuses the current stimulus, is not attending to the instructor or instructional materials, or because the current stimulus being used as a reinforcer becomes unavailable or breaks. The reasons endorsed by the highest percentage of respondents included the client manding or reaching for another stimulus or refusing the current stimulus, whereas the reasons endorsed by the fewest respondents included the client engaging in challenging or problematic behavior, the client engaging in responses that are incompatible with instruction, or the passage of time. Responses to this question did not substantially vary based on responses to the demographic questions (see supplemental material, Figs. 9–12).
Fig. 4.
Percentage of Respondents Who Endorsed Different Reasons for Changing Reinforcers (Check-All-That-Apply Question). Note. 177 respondents are included in this analysis. The exact percentage of respondents indicating each response is depicted graphically and numerically along with corresponding 95% CIs (graphically only)
Figure 5 depicts responses to the open-ended question about how respondents change reinforcers during a teaching session. No single method of choosing a new reinforcer was described by a majority of respondents, but the methods described by the highest percentages of respondents were conducting a preference assessment, providing a choice, and asking the client. Additional methods, some related to selecting a reinforcer based on previous behavior of the client (e.g., based on mands or previous assessments) and others related to momentary evaluation (e.g., observing the client with stimuli), were also described, but by no more than 22% (39/177) of respondents. Any methods of changing reinforcers that were described by fewer than 5% (10/177) of respondents were collapsed into the "other" category and included descriptions such as: asking others (e.g., clinicians, parents), finding a novel or rarely used stimulus, or doing a reinforcer assessment. As with the open-ended question about reasons for changing reinforcers, there was a large amount of variance in the methods respondents described. However, unlike responses to the "why" question, which included several "broad" answers that did not reference specific observable characteristics of behavior, only one of our emergent codes (i.e., indicating response) was labeled as a "broad" method of selecting the new reinforcer.
Fig. 5.
Percentages of Respondents Who Described Different Methods of Changing Reinforcers (Open-Ended Question). Note. 177 respondents are included in this analysis. The exact percentage of respondents indicating each response is depicted graphically and numerically along with corresponding 95% CIs (graphically only)
Figure 6 depicts the responses to the close-ended “check all that apply” question asking how respondents may change reinforcers during a teaching session. A majority of respondents indicated that they may try to choose a more efficacious reinforcer based on what the client has manded for, based on what the client is attending to or interacting with, based on the results of previous assessments, or by providing a choice between currently available or previously preferred stimuli. The factors used for momentary reinforcer identification endorsed by the highest percentage of respondents included choosing a new reinforcer based on what the client has manded for, based on what the client is attending to or interacting with, or by providing a choice among stimuli. In contrast, the factors endorsed by the fewest respondents included choosing a new reinforcer by utilizing similar, previously preferred, or novel stimuli. Responses to this question did not substantially vary based on responses to the demographic questions (see supplemental material, Figs. 13–16).
Fig. 6.
Percentages of Respondents Who Endorsed Different Methods of Changing Reinforcers (Check-All-That-Apply Question). Note. 177 respondents are included in this analysis. The exact percentage of respondents indicating each response is depicted graphically and numerically along with corresponding 95% CIs (graphically only)
Discussion
Although SPAs have a long history of empirical support and utility in research (Hagopian et al., 2004; Morris et al., 2023b; Tullis et al., 2011), there appear to be barriers to conducting SPAs frequently enough in practice to avoid problems related to naturally occurring changes in preference and reinforcer efficacy (Graff & Karsten, 2012; MacNaul et al., 2021). However, it has been a decade since the last survey of clinicians' practices related to reinforcer identification, and many methods of improving the feasibility or efficiency of SPAs or supplementing them with other methods (e.g., brief SPAs, Conine et al., 2021; IMRA, Leaf et al., 2018; frequent informal choices, Toussaint et al., 2016) have been evaluated during that time. Thus, one aim of this study was to provide an updated description of which methods of reinforcer identification are employed and how frequently. Our results replicate a core finding of Graff and Karsten (2012): many respondents reported not using formal SPA methods frequently, or at all, and respondents generally indicated that they use informal methods much more frequently (i.e., daily) than formal methods (i.e., once a month or less; see Fig. 1). Much like Graff and Karsten's (2012) survey, the data shown in Fig. 1 leave unclear exactly when or why practitioners may employ informal preference assessment methods and exactly how they would do so (e.g., which stimuli will be included, how the less formal methods of assessment are conducted). The remaining portions of our survey were designed to fill these gaps.
Thus, the primary aim of the current study was to evaluate whether, why, and how clinicians change reinforcers on a momentary basis. When asked if they changed reinforcers in a specific clinical context (i.e., discrete-trial instruction sessions for children with ASD), a substantial majority of respondents (90%, 184/204) reported that they do. This finding suggests many clinicians may employ some form of momentary reinforcer evaluation to remain sensitive to fluctuations in preference and reinforcer value. Beyond asking whether clinicians change reinforcers on a momentary basis, we also asked why (i.e., in response to what factors) and how (i.e., based on which client behaviors, data, or prior experiences) clinicians change reinforcers. When asked why they would change reinforcers in an open-ended question, some respondents (26%, 46/177) described only general reasons (e.g., satiation, change in motivation), others described only specific, behavioral reasons (e.g., a lack of independent responding for, or refusal of, the current reinforcer; 27%, 48/177), and many described a mix of general and specific reasons (46%, 81/177). The specific reasons most likely to be described were changing reinforcers due to a lack of independent responding, mands for a different stimulus, or refusal of the current stimulus. It is important to note that no single reason was described by a majority of respondents, suggesting a general lack of consensus regarding what constitutes a good reason to change reinforcers in the absence of discrete response options. In contrast, when presented with a list of potential reasons for changing reinforcers, every response option except "the passage of time" was selected by at least half of respondents (Fig. 4). Moreover, 90% or more of respondents endorsed a client manding for, reaching for, or attending to another reinforcer, or refusing the current reinforcer, as a reason they would change reinforcers. Thus, there was some correspondence between the open-ended and check-all-that-apply questions; mands for a different reinforcer and refusal of the current reinforcer were the most prominent reasons for changing reinforcers across question types. However, discrepancies between question types were more apparent for several other response options (e.g., attending, problematic or incompatible behavior).
When asked an open-ended question about how they would select the new reinforcer, many respondents described conducting a preference assessment, providing a choice, or asking the client. Given that many practitioners in Graff and Karsten (2012) reported that preference assessments are too time-consuming to conduct frequently (e.g., in the middle of a teaching session), we suspect that most respondents providing these descriptions were referencing the kind of "mini" preference assessment in which practitioners provide an informal choice between two or more reinforcers (e.g., Graff & Karsten, 2012; Toussaint et al., 2016) or an abbreviated approximation of a validated preference assessment method (e.g., a single trial of an MSWO, Conine et al., 2021; a brief free operant observation, Roane et al., 1998). Although these were the most frequently described methods of reinforcer identification in the open-ended question, it is important to note that no method was described by a majority of respondents. In contrast, when presented with a check-all-that-apply list of methods for selecting a new reinforcer during teaching, all options except seeking out a previously preferred or novel stimulus were selected by a majority of respondents. Almost 90% (158/177) of respondents endorsed choosing a new reinforcer based on client mands for, attention to, or interaction with other stimuli, which corresponds with the three most commonly endorsed reasons for changing a reinforcer in the first place, as described above. Responses to the open-ended question of how to identify a new reinforcer seemed to favor providing a choice, whereas responses to the corresponding check-all-that-apply question slightly favored selecting the new reinforcer based on recent client behavior.
Another aim of this study was to evaluate how factors such as respondent credentials (see Figs. 1, 5, 9, and 13 of supplemental material), typical client age (see Figs. 2, 6, 10, and 14 of supplemental material) and diagnosis (see Figs. 3, 7, 11, and 15 of supplemental material), and primary goal(s) of services (see Figs. 4, 8, 12, and 16 of supplemental material) may have influenced responses to the yes–no and check-all-that-apply questions. One finding that emerged from these secondary analyses was that RBTs were less likely than BCBAs or BCBA-Ds to indicate that they changed reinforcers on a momentary basis during teaching (see Fig. 5 of supplemental material). One explanation could be that RBTs are simply following clinical programming that specifies a consistent reinforcer regardless of client behavior; however, because the efficacy of such programming would be severely inhibited by momentary shifts in reinforcer efficacy, this explanation seems unlikely. Differences across credentials may instead be attributable to issues in supervision or training, in which those designing skill acquisition programs expect or instruct staff to utilize momentary reinforcer evaluation, but front-line service providers do not put this into practice due to training deficits or feasibility constraints. Otherwise, respondents' credentials, typical client age and diagnosis, and primary goal(s) of services did not seem to influence responses to questions about whether, why, and how they change reinforcers. This is a positive finding because it suggests these factors do not bias whether, why, and how respondents reported changing reinforcers and that these practices are equitable across clients with diverse characteristics. Based on our sample, it seems that behavior-analytic practitioners may be equally sensitive to changes in their clients' motivation or preferences regardless of client age, diagnosis, or current goals.
The results of this survey provide a description of current clinical practices related to momentary reinforcer evaluation (i.e., determining when and how to change reinforcers) for a sample of practicing behavior analysts. However, some limitations of this survey should be noted. An important limitation is that the results are only a description based on practitioners' self-report of what they do in practice. The actual occurrence and efficacy of the different methods respondents described (open-ended) or endorsed (close-ended) for determining when and how to change reinforcers remain unclear. Future research should conduct observations of clinical practices related to identifying and changing reinforcers in order to provide a point of comparison for the report-based description of the current study. In addition, the size of our sample relative to the total number of practicing behavior analysts is rather small. However, this sample size is not out of step with recent surveys about behavior-analytic practices in a number of other domains (e.g., Colombo et al., 2021; Normand & Donohue, 2022; Roscoe et al., 2015; Sellers et al., 2019). The low response rate to surveys sent through the BACB's Mass Email Service and the unclear response rate for surveys sent through snowball sampling are also limitations of this survey (although cf. Hendra & Hill, 2019). Due to these limitations, the extent to which the answers provided by our respondents are representative of current practices among all practicing behavior analysts remains unclear. Nevertheless, the current study provides a novel, in-depth description of the practices of a meaningful subset of behavior analysts. Despite their endorsement by 90% of respondents in our sample (Fig. 2), methods of momentary reinforcer evaluation remain underdescribed and underdeveloped within the research literature. The fact that such a large percentage of our sample indicated that they use these strategies suggests that much additional research on momentary reinforcer evaluation is needed. To that end, our respondents' answers to questions about why and how they change reinforcers on a momentary basis (Figs. 3, 4, 5, and 6) lay a meaningful groundwork for future empirical and experimental research, which may in turn update and inform clinical practices.
The results of the current study are also limited in that they allow no evaluation of how clinicians individualize methods of determining when and how to change reinforcers across clients. Research from the satiation and motivating operations literature (e.g., O’Reilly et al., 2009) and the IMRA literature (e.g., Alcalay et al., 2019) has observed variance across individuals in which behaviors most consistently and accurately indicate shifts in reinforcer efficacy. Future research should therefore include questions about individual characteristics or differences that may inform why and how clinicians change reinforcers in practice, and should aim to develop assessment methods for individualizing procedures for changing reinforcers. Such assessments could help ensure that procedures of momentary reinforcer evaluation are inclusive of clients with different reinforcers, leisure skills, and communicative repertoires.
The results of this survey also have important implications for future research and practice: they indicate a clear need for the continued development and validation of methods of momentary reinforcer evaluation (e.g., IMRA; Leaf et al., 2015) and of methods for training behavior analysts to implement such procedures. In particular, evaluation of, and training on, more comprehensive methods of changing reinforcers are needed. Respondents described or endorsed several behavioral indicators for why and how to change reinforcers that have been evaluated in previous research (e.g., the client mands for another reinforcer or refuses the current reinforcer). However, respondents also described several indicators that may be useful and feasible to implement in practice (e.g., a lack of independent responding, problematic or incompatible behavior, providing an informal choice) but have not been evaluated in research on momentary reinforcer evaluation. Evaluating these additional indicators is an important direction for future research in this area. Such research should systematically evaluate the reasons for and methods of changing reinforcers described or endorsed by clinicians in the current study, with a focus on isolating specific behavioral indicators to determine which are necessary and sufficient, rather than lumping many potential indicators and procedures together and evaluating them as a single method.
The results of our survey also suggest that evaluation and training are needed regarding the identification and implementation of specific, operational behavioral indicators rather than general, conceptual ones. Many respondents provided broad, general responses to the open-ended questions (e.g., “if the item is not functioning as a reinforcer”; see left panel of Fig. 3) that did not include observable behaviors and, thus, could impede staff training and be difficult to implement with fidelity. Similar issues are evident in studies evaluating methods of momentary reinforcer evaluation, which either do not specify the factors experimenters used to determine why or how to change reinforcers (Leaf et al., 2015) or specify broad, conceptual factors that may not facilitate replication or staff training (Alcalay et al., 2019; Leaf et al., 2018). For example, Alcalay et al. (2019) suggest changing reinforcers based on the child’s expressions, body language, and manner of manipulating the items, and Leaf et al. (2018) suggest changing reinforcers if the child appears “bored.” The presence of these broad or general indicators, even when used in conjunction with more specific behavioral indicators (e.g., mands for a different item; Alcalay et al., 2019), may impede systematic and replicable research, prevent consistent implementation of these procedures, and hinder methods of training others to implement them.
Thus, future research may usefully incorporate more precise operational definitions for determining when and how to change reinforcers (e.g., no independent correct responses for five trials; the client does not look at, or smile or laugh while engaging with, the item) and evaluate methods of training clinicians to develop and implement these procedures with their clients. Moreover, it may be useful to compare the methods of identifying and changing reinforcers employed by clinicians who have and have not received such training, as well as the relative efficacy and efficiency of those methods in facilitating skill acquisition. Future research will have to strike a balance between affording interventionists the flexibility to change reinforcers in response to client behavior and providing sufficient procedural detail to allow for replication and analysis of which factors are most useful. Both components will be necessary for the development of a robust, systematic technology of momentary reinforcer evaluation and identification.
Supplementary Information
The electronic supplementary material is available in the online version of this article.
Data Availability
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
Declarations
Ethics Approval
This study was reviewed and approved by the Georgia State University Institutional Review Board and performed in accordance with the ethical standards as laid down in the 1964 Declaration of Helsinki and its later amendments.
Consent to Participate
Freely given, informed consent to participate in the study was obtained from all participants.
Consent for Publication
Freely given, informed consent to use data obtained during the study for publication purposes was obtained from all participants.
Conflicting/Competing Interests
The authors have no conflicting or competing interests to declare that are relevant to the content of this article.
Bulleted Summary
• We surveyed clinicians to determine whether, why, and how they change reinforcers on a momentary basis (i.e., during a teaching session)
• Many clinicians report changing reinforcers, doing so when clients mand for other reinforcers or refuse the current reinforcer, and determining the next reinforcer based on recent client behavior or by providing informal choices
• Results describe current practices pertaining to reinforcer identification, an essential part of providing behavior-analytic services
• Results may inform research on the continued development and evaluation of methods of momentary reinforcer evaluation
• Results highlight limitations in current clinical practices and related research that may lead to improved methodology and training procedures
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- Alcalay, A., Ferguson, J., Cihon, J., Torres, N., Leaf, J. B., Leaf, R., McEachin, J., Schulze, K., & Rudrud, E. H. (2019). Comparing multiple stimulus preference assessments without replacement to in-the-moment reinforcer analysis on rate of responding. Education & Training in Autism & Developmental Disabilities, 54, 69–82.
- Behavior Analyst Certification Board. (2020). Ethics code for behavior analysts. https://bacb.com/ethics-code/
- Carr, J. E., Nicolson, A. C., & Higbee, T. S. (2000). Evaluation of a brief multiple-stimulus preference assessment in a naturalistic context. Journal of Applied Behavior Analysis, 33(3), 353–357. 10.1901/jaba.2000.33-353
- Colombo, R. A., Taylor, R. S., & Hammond, J. L. (2021). State of current training for severe problem behavior: A survey. Behavior Analysis in Practice, 14(1), 11–19. 10.1007/s40617-020-00424-z
- Conine, D. E., Morris, S. L., Kronfli, F. R., Slanzi, C. M., Petronelli, A. K., Kalick, L., & Vollmer, T. R. (2021). Comparing the results of one-session, two-session, and three-session MSWO preference assessments. Journal of Applied Behavior Analysis, 54(2), 700–712. 10.1002/jaba.808
- Cote, C. A., Thompson, R. H., Hanley, G. P., & McKerchar, P. M. (2007). Teacher report and direct assessment of preferences for identifying reinforcers for young children. Journal of Applied Behavior Analysis, 40(1), 157–166. 10.1901/jaba.2007.177-05
- DeLeon, I. G., & Iwata, B. A. (1996). Evaluation of a multiple-stimulus presentation format for assessing reinforcer preferences. Journal of Applied Behavior Analysis, 29(4), 519–533. 10.1901/jaba.1996.29-519
- DeLeon, I. G., Frank, M. A., Gregory, M. K., & Allman, M. J. (2009). On the correspondence between preference assessment outcomes and progressive-ratio schedule assessments of stimulus value. Journal of Applied Behavior Analysis, 42(3), 729–733. 10.1901/jaba.2009.42-729
- Fisher, W., Piazza, C. C., Bowman, L. G., Hagopian, L. P., Owens, J. C., & Slevin, I. (1992). A comparison of two approaches for identifying reinforcers for persons with severe and profound disabilities. Journal of Applied Behavior Analysis, 25(2), 491–498. 10.1901/jaba.1992.25-491
- Fisher, W. W., Piazza, C. C., Bowman, L. G., & Amari, A. (1996). Integrating caregiver report with systematic choice assessment to enhance reinforcer identification. American Journal of Mental Retardation, 101(1), 15–25.
- Graff, R. B., & Ciccone, F. J. (2002). A post hoc analysis of multiple-stimulus preference assessment results. Behavioral Interventions, 17(2), 85–92. 10.1002/bin.107
- Graff, R. B., & Karsten, A. M. (2012). Assessing preferences of individuals with developmental disabilities: A survey of current practices. Behavior Analysis in Practice, 5(2), 37–48. 10.1007/bf03391822
- Green, C. W., Reid, D. H., Canipe, V. S., & Gardner, S. M. (1991). A comprehensive evaluation of reinforcer identification processes for persons with profound multiple handicaps. Journal of Applied Behavior Analysis, 24(3), 537–552. 10.1901/jaba.1991.24-537
- Hagopian, L. P., Long, E. S., & Rush, K. S. (2004). Preference assessment procedures for individuals with developmental disabilities. Behavior Modification, 28(5), 668–677. 10.1177/0145445503259836
- Hanley, G. P., Iwata, B. A., & Roscoe, E. M. (2006). Some determinants of changes in preference over time. Journal of Applied Behavior Analysis, 39(2), 189–202. 10.1901/jaba.2006.163-04
- Hanratty, L. A., & Hanley, G. P. (2021). A preference analysis of reinforcer variation and choice. Journal of Applied Behavior Analysis, 54(3), 1062–1074. 10.1002/jaba.835
- Hendra, R., & Hill, A. (2019). Rethinking response rates: New evidence of little relationship between survey response rates and nonresponse bias. Evaluation Review, 43(5), 307–330. 10.1177/0193841X18807719
- Kelley, M. E., Shillingsburg, M. A., & Bowen, C. N. (2016). Stability of daily preference across multiple individuals. Journal of Applied Behavior Analysis, 49(2), 394–398. 10.1002/jaba.288
- Kronfli, F. R., Vollmer, T. R., Fernand, J. K., & Bolívar, H. A. (2020). Evaluating preference for and reinforcing efficacy of fruits and vegetables compared with salty and sweet foods. Journal of Applied Behavior Analysis, 53(1), 385–401. 10.1002/jaba.594
- Lang, R., O’Reilly, M., Sigafoos, J., Lancioni, G. E., Machalicek, W., Rispoli, M., & White, P. (2009). Enhancing the effectiveness of a play intervention by abolishing the reinforcing value of stereotypy: A pilot study. Journal of Applied Behavior Analysis, 42(4), 889–894. 10.1901/jaba.2009.42-889
- Lang, R., O’Reilly, M., Sigafoos, J., Machalicek, W., Rispoli, M., Lancioni, G., Aguilar, J., & Fragale, C. (2010). The effects of an abolishing operation intervention component on play skills, challenging behavior, and stereotypy. Behavior Modification, 34, 267–289. 10.1177/0145445510370713
- Leaf, J. B., Oppenheim-Leaf, M. L., Leaf, R., Courtemanche, A. B., Taubman, M., McEachin, J., Sheldon, J. B., & Sherman, J. A. (2012). Observational effects on the preferences of children with autism. Journal of Applied Behavior Analysis, 45(3), 473–483. 10.1901/jaba.2012.45-473
- Leaf, J. B., Leaf, R., Alcalay, A., Leaf, J. A., Ravid, D., Dale, S., Kassardjian, A., Tsuji, K., Taubman, M., McEachin, J., & Oppenheim-Leaf, M. (2015). Utility of formal preference assessments for individuals diagnosed with autism spectrum disorder. Education & Training in Autism & Developmental Disabilities, 50(2), 199–212.
- Leaf, J. B., Leaf, R., Leaf, J. A., Alcalay, A., Ravid, D., Dale, S., Kassardjian, A., Tsuji, K., Taubman, M., McEachin, J., & Oppenheim-Leaf, M. L. (2018). Comparing paired-stimulus preference assessments with in-the-moment reinforcer analysis on skill acquisition: A preliminary investigation. Focus on Autism & Other Developmental Disabilities, 33(1), 14–24. 10.1177/1088357616645329
- MacNaul, H., Cividini-Motta, C., Wilson, S., & Di Paola, H. (2021). A systematic review of research on stability of preference assessment outcomes across repeated administrations. Behavioral Interventions, 36(4), 962–983. 10.1002/bin.1797
- Miles, M. B., Huberman, A. M., & Saldaña, J. (2019). Qualitative data analysis: A methods sourcebook (4th ed.). London, UK: SAGE.
- Morris, S. L., & Vollmer, T. R. (2020a). A comparison of methods for assessing preference for social interactions. Journal of Applied Behavior Analysis, 53(2), 918–937. 10.1002/jaba.692
- Morris, S. L., & Vollmer, T. R. (2020b). Evaluating the stability, validity, and utility of hierarchies produced by the social interaction preference assessment. Journal of Applied Behavior Analysis, 53(1), 522–535. 10.1002/jaba.610
- Morris, S. L., & Vollmer, T. R. (2022). Comparing clinician-reported hierarchies of relative reinforcer efficacy to reinforcer assessment hierarchies. Behavior Analysis: Research & Practice, 22(4), 354–367. 10.1037/bar0000257
- Morris, S. L., Allen, A. E., & Gallagher, M. L. (2023a). An evaluation of the number of sessions in MSWO preference assessments for social interactions. Behavior Analysis: Research & Practice, 23(2), 102–116. 10.1037/bar0000264
- Morris, S. L., Gallagher, M. L., & Allen, A. E. (2023b). A review of methods of assessing preference for social interaction. Journal of Applied Behavior Analysis, 56(2), 416–427. 10.1002/jaba.981
- Neely, L., Rispoli, M., Gerow, S., & Ninci, J. (2015). Effects of antecedent exercise on academic engagement and stereotypy during instruction. Behavior Modification, 39(1), 98–116. 10.1177/0145445514552891
- Normand, M. P., & Donohue, H. E. (2022). Behavior analytic jargon does not seem to influence treatment acceptability ratings. Journal of Applied Behavior Analysis, 55(4), 1294–1305. 10.1002/jaba.953
- O’Reilly, M., Lang, R., Davis, T., Rispoli, M., Machalicek, W., Sigafoos, J., Lancioni, G., Didden, R., & Carr, J. (2009). A systematic examination of different parameters of presession exposure to tangible stimuli that maintain problem behavior. Journal of Applied Behavior Analysis, 42(4), 773–783. 10.1901/jaba.2009.42-773
- Pace, G. M., Ivancic, M. T., Edwards, G. L., Iwata, B. A., & Page, T. J. (1985). Assessment of stimulus preference and reinforcer value with profoundly retarded individuals. Journal of Applied Behavior Analysis, 18(3), 249–255. 10.1901/jaba.1985.18-249
- Raetz, P. B., LeBlanc, L. A., Baker, J. C., & Hilton, L. C. (2013). Utility of the multiple-stimulus without replacement procedure and stability of preferences of older adults with dementia. Journal of Applied Behavior Analysis, 46(4), 765–780. 10.1002/jaba.88
- Resetar, J. L., & Noell, G. H. (2008). Evaluating preference assessments for use in the general education population. Journal of Applied Behavior Analysis, 41(3), 447–451. 10.1901/jaba.2008.41-447
- Richman, D. M., Barnard-Brak, L., Abby, L., & Grubb, L. (2016). Multiple-stimulus without replacement preference assessment: Reducing the number of sessions to identify preferred stimuli. Journal of Developmental & Physical Disabilities, 28(3), 469–477. 10.1007/s10882-016-9485-1
- Rispoli, M., O’Reilly, M., Lang, R., Machalicek, W., Davis, T., Lancioni, G., & Sigafoos, J. (2011). Effects of motivating operations on problem and academic behavior in classrooms. Journal of Applied Behavior Analysis, 44(1), 187–192. 10.1901/jaba.2011.44-187
- Roane, H. S., Vollmer, T. R., Ringdahl, J. E., & Marcus, B. A. (1998). Evaluation of a brief stimulus preference assessment. Journal of Applied Behavior Analysis, 31(4), 605–620. 10.1901/jaba.1998.31-605
- Roscoe, E. M., Iwata, B. A., & Kahng, S. (1999). Relative versus absolute reinforcement effects: Implications for preference assessments. Journal of Applied Behavior Analysis, 32(4), 479–493. 10.1901/jaba.1999.32-479
- Roscoe, E. M., Phillips, K. M., Kelly, M. A., Farber, R., & Dube, W. V. (2015). A statewide survey assessing practitioners’ use and perceived utility of functional assessment. Journal of Applied Behavior Analysis, 48(4), 830–844. 10.1002/jaba.259
- Sellers, T. P., Valentino, A. L., Landon, T. J., & Aiello, S. (2019). Board certified behavior analysts’ supervisory practices of trainees: Survey results and recommendations. Behavior Analysis in Practice, 12(3), 536–546. 10.1007/s40617-019-00367-0
- Stuckey, H. L. (2015). The second step in data analysis: Coding qualitative research data. Journal of Social Health & Diabetes, 3(1), 7–10. 10.4103/2321-0656.140875
- Toussaint, K. A., Kodak, T., & Vladescu, J. C. (2016). An evaluation of choice on instructional efficacy and individual preferences among children with autism. Journal of Applied Behavior Analysis, 49(1), 170–175. 10.1002/jaba.263
- Tullis, C. A., Cannella-Malone, H. I., Basbigill, A. R., Yeager, A., Fleming, C. V., Payne, D., & Wu, P.-F. (2011). Review of the choice and preference assessment literature for individuals with severe to profound disabilities. Education & Training in Autism & Developmental Disabilities, 46(4), 576–595. http://www.jstor.org/stable/24232368
- Verschuur, R., Didden, R., van der Meer, L., Achmadi, D., Kagohara, D., Green, V. A., Lang, R., & Lancioni, G. E. (2011). Investigating the validity of a structured interview protocol for assessing the preferences of children with autism spectrum disorders. Developmental Neurorehabilitation, 14(6), 366–371. 10.3109/17518423.2011.606509