Abstract
Although applied behavior analysis (ABA) practice guidelines exist (Behavior Analyst Certification Board® [BACB®], 2020; Council of Autism Service Providers [CASP], 2020), research has shown that barriers to their implementation can be present in everyday clinical practice across a variety of areas (e.g., Blackman et al., 2023; DiGennaro Reed et al., 2015; Oliver et al., 2015; Roscoe et al., 2015; Sellers et al., 2019). To date, no published studies have assessed the status of procedural-integrity training, practices, and barriers experienced by Board Certified Behavior Analysts® (BCBAs®) providing ABA services. Therefore, the purpose of the current study was to determine the extent to which BCBAs engaged in the procedural-integrity monitoring process and what barriers they encounter in clinical practice. To gather this information, we surveyed BCBA practitioners. The findings reveal that BCBAs often observe providers implementing clinical services and provide feedback; however, they reported that a lack of training, time, and established systems, along with competing contingencies, were barriers to engaging in data-related procedural-integrity responsibilities (data collection, tracking, and analysis). Based on these findings, implications for BCBA training and support, along with potential solutions and future research directions, are discussed.
Supplementary Information
The online version contains supplementary material available at 10.1007/s40617-024-00974-6.
Keywords: Data collection, Procedural integrity, Quality, Supervision, Training, Treatment integrity
Procedural integrity—the extent to which an intervention is implemented as planned (e.g., Gresham, 1989; Sanetti & Kratochwill, 2009)—is a critical component that affects client outcomes (e.g., DiGennaro et al., 2007), quality of services, and optimal supervision practices. In behavior analysis, the term is often used interchangeably with treatment integrity, treatment fidelity, procedural fidelity, and intervention integrity. To encompass the implementation of various behavior-analytic procedures, we refer to the degree of intervention implementation as procedural integrity. The procedural-integrity process in clinical settings may entail five common components: (1) observation; (2) data collection; (3) data tracking; (4) data analysis; and (5) feedback. Although there are other steps that clinicians may engage in, such as creating materials for data collection, for the purposes of this article we discuss the aforementioned five components (Table 1). Observation occurs when a clinical supervisor, often a Board Certified Behavior Analyst® (BCBA®), watches a provider implement an intervention to determine whether it is implemented as designed. Data collection involves documenting provider implementation of the intervention. This commonly occurs during observations with the client (e.g., Barnett et al., 2014; Collier-Meek et al., 2018), but can also occur during a review of permanent products, audio and video recordings, or role-play scenarios. Data collection also includes subcomponents related to measurement, such as identifying what to measure and how to measure it. Data tracking includes summarizing and storing data over the course of several observations to review procedural-integrity trends over time (e.g., via a spreadsheet). Data analysis involves a review of the tracked data to identify trends in procedural integrity at the individual, team, and organizational levels. Ongoing data analysis is critical for clinical supervisors because it provides objective information on how to support providers in delivering quality services in accordance with the treatment plans in place. Finally, the supervisor delivers feedback on provider implementation. Feedback typically includes reviewing intervention areas that are performed well or need improvement and may take many forms (e.g., vocal, written, graphical) and occur at different times (e.g., in the moment, during a scheduled meeting, or via later correspondence).
Table 1.
Common Components of the Procedural-Integrity Monitoring Process
| Term | Definition |
|---|---|
| Observation | Watching providers implement interventions with the purpose of determining if the interventions are being implemented as planned. |
| Data collection | Documenting data on whether interventions are implemented as planned by providers. This includes identifying what to measure and how to measure it. |
| Data tracking | Summarizing and storing data collected over the course of several observations (e.g., graphing, list of scores) for later review to observe trends and determine if interventions are implemented as planned by providers over time. |
| Data analysis | Reviewing data on whether interventions are implemented as planned by providers and determining a course of action (e.g., deciding if training is necessary to support the correct implementation of the planned intervention or if modifications to the client's plan are necessary to support progress). |
| Feedback | Reviewing areas of the intervention that are performed well or need improvement with providers. |
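Because the data-tracking and data-analysis components lend themselves to simple tooling (e.g., a spreadsheet, as noted above), the sketch below shows one minimal way a supervisor might store per-observation integrity scores and summarize the trend. The dates, provider label, and scores are hypothetical illustrations, not data from the study.

```python
from statistics import mean

# Hypothetical per-observation integrity scores (percentage of intervention
# steps implemented correctly) for one provider, stored for later review.
integrity_log = [
    {"date": "2023-03-01", "provider": "RBT-01", "score": 62.5},
    {"date": "2023-03-08", "provider": "RBT-01", "score": 75.0},
    {"date": "2023-03-15", "provider": "RBT-01", "score": 87.5},
]

scores = [entry["score"] for entry in integrity_log]
print(f"Mean integrity: {mean(scores):.1f}%")                   # 75.0%
print(f"Change, first to last: {scores[-1] - scores[0]:+.1f}")  # +25.0
```

In practice, the same summary could be produced for each provider, team, or organization to support the trend analysis described above.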
In its entirety, the procedural-integrity monitoring process helps to inform ongoing clinical and supervisory needs. This process is important because a growing body of literature has investigated the parameters of procedural integrity’s impact on intervention outcomes, and recent reviews of such research support the importance of monitoring procedural integrity to ensure the quality of interventions and bolster their effectiveness and efficiency (e.g., Brand et al., 2019; Colón & Wallander, 2023; Fryling et al., 2012). Furthermore, in relation to supervision practices, research suggests that weak support from supervisors and poor training quality are among the key reasons for an employee's intention to exit an organization (Kazemi et al., 2015). Because the procedural-integrity process is directly related to supervisor support and training, it is imperative that procedural integrity be at the forefront of a clinician's agenda to ensure quality service delivery and help mitigate costly turnover (Blackman et al., 2024).
Due to the importance of the procedural-integrity process in applied behavior analytic (ABA) service provision, the Council of Autism Service Providers (CASP) and the Behavior Analyst Certification Board® (BACB®) have published documents describing expectations for practice. In particular, the CASP practice guidelines (CASP, 2020) detail responsibilities for certified behavior analysts that include procedural-integrity checks on an “ongoing basis,” with more frequent monitoring for new providers, providers assigned to novel clients, and clients who engage in severe behavior or require complex interventions. In addition, the Ethics Code for Behavior Analysts (BACB, 2020) includes sections with relevant content (i.e., 2.17 Collecting and Using Data; 2.18 Continual Evaluation of the Behavior-Change Intervention; 2.19 Addressing Conditions Interfering with Service Delivery; and 4.08 Performance Monitoring and Feedback), including obligations to: (1) graph, summarize, and use data to make decisions about all phases of service delivery, including continuing, modifying, or terminating services; (2) monitor and evaluate interventions (which are not likely to be evaluated adequately without first ensuring that the intervention is being implemented as intended before modifications are made); (3) assess and take action in situations where data indicate that desired outcomes are not being achieved or that environmental conditions interfere with or prevent service delivery (of which provider error could be a factor), while documenting all actions taken and their outcomes; and (4) monitor the performance of supervisees and trainees. Finally, the current BACB Task List (BACB, 2017) details relevant skills and knowledge for entry-level BCBA practitioners, including two sections pertinent to procedural integrity (i.e., Selecting and Implementing Interventions, and Personnel Supervision and Management). In these sections, applicable skills include monitoring client progress and treatment integrity; making data-based decisions about the effectiveness of interventions and the need for treatment; understanding the reasons for behavior-analytic supervision and the potential risks of ineffective supervision (e.g., poor client outcomes, poor supervisee performance); establishing clear performance expectations for the supervisor and supervisee; selecting supervision goals based on an assessment of the supervisee’s skills using performance monitoring; and evaluating the effects of supervision. Furthermore, the upcoming BACB Test Content Outline (BACB, 2022), which is set to replace the Task List in 2025, includes the same expectations regarding procedural integrity but has been updated to specify that, in addition to evaluating the effectiveness of an intervention, data-based decisions must also be used to evaluate procedural-integrity and supervision practices.
It is unfortunate that expectations and guidelines dictated by these governing bodies, even for some of the most fundamental ABA concepts, are often not reflected in clinical practice. For example, Oliver et al. (2015) and Roscoe et al. (2015) surveyed BCBA clinicians to determine the extent to which they conducted functional analyses to inform client treatment plans. Results indicated that only 34.6% (Roscoe et al., 2015) and 36% (Oliver et al., 2015) of respondents used functional analyses on a regular basis. These results are astounding given that functional analysis methodology appears on the BACB Test Content Outline, and clinicians are taught in graduate school that conducting a functional analysis is the most valid method to inform the treatment of interfering behaviors (Oliver et al., 2015; Roscoe et al., 2015). In addition, Roscoe et al. found that 55.6% of respondents noted a lack of methodological training as a barrier to the proper implementation of functional analyses. Although it is encouraging that clinicians are practicing within their scope of competence (i.e., refraining from using procedures they do not feel adequately trained to perform), these data illustrate that many BCBAs may not have been provided the opportunity to obtain sufficient training due to a lack of time and resources (Oliver et al., 2015). These data indicate that even when a concept such as functional analysis methodology is a focus of graduate programs and certification requirements, barriers such as gaps in training may still prevent the consistent or accurate implementation of a procedure in ongoing clinical practice.
Although procedural-integrity practices in applied settings have not been investigated to date, it is important to note that barriers to implementation are not unique to clinical practice and have therefore been a topic of conversation in the behavior analysis and psychology research literature for several years (e.g., Falakfarsa et al., 2021; Han et al., 2023; Peterson et al., 1982). For instance, to better understand why procedural-integrity data were lacking in the psychology research literature and to investigate what barriers may be present for researchers, Perepletchikova et al. (2009) surveyed researchers in clinical psychology and Hagermoser Sanetti and DiGennaro Reed (2012) surveyed researchers in school psychology. Perepletchikova et al. found that a lack of theory and guidelines on procedural-integrity procedures, as well as time, cost, and labor constraints, were regarded as strong barriers. A general lack of training about procedural integrity and the lack of editorial requirements for reporting integrity procedures were also perceived as barriers to its implementation. However, the researchers surveyed also indicated an awareness of the importance of procedural integrity as it relates to the experimental validity of a study and did not regard a lack of appreciation for procedural integrity as a barrier. Likewise, Hagermoser Sanetti and DiGennaro Reed found that (1) a lack of theory and specific guidelines on procedural-integrity procedures; (2) a lack of general knowledge about procedural integrity; (3) time, cost, and labor demands; and (4) a lack of editorial requirements were perceived as barriers to implementing procedural-integrity procedures by school psychology researchers. They also found that a lack of appreciation for procedural integrity was the least likely barrier reported.
In a recent study, St. Peter et al. (2023) evaluated reasons for the infrequent reporting of procedural-integrity data in behavior-analytic research by conducting focus groups with scholars in the field of ABA. In contrast to the Perepletchikova et al. (2009) and Hagermoser Sanetti and DiGennaro Reed (2012) studies, St. Peter et al. found that some scholars considered procedural-integrity data a necessity for interpreting and replicating findings, whereas others devalued the need for such data in particular contexts (i.e., due to confidence in the implementer’s skills, the ease of certain procedures, and study results). Other reasons for infrequent procedural-integrity reporting included the historical underpinnings of the field (highly controlled laboratory settings with automated procedures), weak reporting-requirement contingencies, lack of training (e.g., graduate coursework), and lack of knowledge (e.g., data collection and data analysis procedures). Furthermore, St. Peter et al. hypothesized an interaction between the identified barriers reported in their study. They explained that early automated measurement in controlled settings meant that knowledge of procedural-integrity measurement was underdeveloped, which may have contributed to graduate school faculty being underprepared to teach students about the procedural-integrity process. Reporting scholars, who also served as faculty members, indicated that there was a lack of available resources to support effectively teaching their students about procedural integrity. For instance, scholars reported a lack of clear guidelines for measurement (i.e., how to measure, when to measure, and the minimum procedural-integrity coefficients necessary). The authors asserted that this training barrier was of particular concern given that a lack of adequate graduate training on the topic of procedural integrity may also affect how likely a BCBA is to engage in the procedural-integrity process throughout the course of clinical practice. In addition, this lack of knowledge may have led to weak reporting contingencies, which may perpetuate scholars' confidence in their research repertoires, thus preserving the lack of procedural-integrity data reported and further contributing to the gap in our knowledge of procedural-integrity errors.
As it applies to clinical practice, it is unclear what type of training BCBAs receive, how prevalent procedural-integrity practices are in clinical practice, how the process is conducted, and what barriers may exist to regularly engaging in the procedural-integrity process. Therefore, the current study's purpose was to obtain preliminary information by surveying BCBAs and offer recommendations for further analysis to address potential barriers.
Method
Participants
IRB approval was obtained prior to administering the survey. Participants were individuals certified by the BACB as BCBAs or BCBA-Ds who responded to an invitation to complete an anonymous online survey. Participants were recruited from two primary sources: dissemination via social media and via two organizational email listservs. For social media recruitment, the survey information was shared via each author’s social media accounts on LinkedIn as well as via a Massachusetts professional behavior analyst organization’s social media outlets (LinkedIn, Facebook, and Instagram), for a total of 14 social media posts. Of the organizational listservs, one was for a multistate ABA service provider for children diagnosed with autism and the other was for an ABA provider serving children diagnosed with autism throughout the state of California. The authors also reached out to other state professional organizations as well as colleagues at universities and other ABA service organizations but did not receive confirmation that the survey could be disseminated to their networks on the authors’ behalf. The first response was received on February 20, 2023, and the last on October 12, 2023; after approximately 2 weeks passed with no further survey responses or replies regarding additional dissemination outlets, the survey was closed at the end of October 2023, following roughly 8 months of availability. The number of individuals who received an invitation to complete the survey is unknown. In addition, because the survey was anonymous and due to limitations in Microsoft Forms, respondents could not be restricted to a single survey submission. Respondents were asked initial eligibility questions, including whether they consented to participation, as well as demographic questions to determine whether they met the inclusionary criteria. One hundred eighty-four respondents started the survey. Of the 184 initial respondents, 47 were deemed ineligible: 4 did not consent to participate, 11 did not identify as a BCBA, and 32 did not have a current client caseload. Thus, 137 eligible respondents consented to and completed the survey.
Survey Instrumentation
Survey questions were generated by three doctoral-level behavior analysts with an average of 17 years of experience in the field. The survey was administered online using Microsoft Forms. Response options included multiple-choice, multiselect, Likert, and open-ended formats. A subset of items included “branching,” which allowed specific responses to generate an additional question (e.g., when “other” was selected as part of a multiple-choice question, an additional open-ended question was presented to gain more information), to skip questions that were not applicable, and to end the survey if the respondent did not meet the eligibility criteria or did not consent to participate in the study.
The survey included an introductory page providing the estimated completion time of 15 min, definitions of the components of the procedural-integrity monitoring process (see Table 1), the purpose of the study, and information related to informed consent. Subsequent sections included questions pertaining to demographics, training practices, supervisor behavior related to integrity practices, aspects of the process that were most difficult to implement, and beliefs about and experiences with monitoring procedural-integrity data.
Section 1 (demographics) included questions about: (1) gender; (2) race; (3) highest degree obtained; (4) certification held; (5) number of years certified; (6) primary place of employment (e.g., public school, clinic); (7) number of clients supervised; and (8) primary job classification (e.g., case supervisor, clinical director). Section 2 (training) asked whether formal training was provided during graduate school, on the job, and via continuing education. If training was provided in any of these areas, questions pertaining to training content and training methods were asked.
Section 3 (procedural-integrity practices) was organized according to common components associated with procedural-integrity monitoring: observation, data collection, data tracking, data analysis, and feedback. For each procedural-integrity component, definitions were provided to the respondent (see Table 1), and questions addressed (1) whether the respondent currently engaged in the component; (2) how frequently they engaged; (3) how they engaged (e.g., use of paper and pencil, Excel); and (4) what barriers they encountered to engaging in that component (e.g., no barriers, no time, competing contingencies). Definitions for the types of procedural-integrity data collection (accuracy, consistency, errors of omission and commission) were also provided to respondents (see Table 2). The barrier questions were included to assess variables that might be contributing to a performance deficit in each of the component areas. The content of these questions was influenced by a functional assessment conceptualization as provided by the Performance Diagnostic Checklist-Human Services (PDC-HS; Carr et al., 2013). The PDC-HS is designed to identify environmental variables responsible for a performance concern and develop applicable interventions. Thus, the barrier questions loosely corresponded to the sections of the PDC-HS (i.e., training; task clarification and prompting; resources, materials, and processes; performance consequences, effort, and competition). Within this section, survey branching was used to skip questions that were not applicable for a specific component. For example, if a respondent indicated they did not engage in procedural-integrity observations, the question about frequency of observations was skipped and the next applicable question was presented (e.g., barriers to conducting procedural-integrity observations), as sketched below.
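As a rough illustration of this per-component branching (the actual survey was built in Microsoft Forms, and the question wording here is paraphrased, not the survey's exact text), a respondent who denied engaging in a component skips the frequency and method questions and proceeds directly to the barrier question, which was posed to all respondents:

```python
# Sketch of the per-component branching described above; question text is
# paraphrased and illustrative only.
def component_questions(engages_in_component: bool) -> list[str]:
    questions = []
    if engages_in_component:
        questions.append("How frequently do you engage in this component?")
        questions.append("How do you engage in it (e.g., paper and pencil, Excel)?")
    # Barrier questions were presented to all respondents (n = 137 in Table 5).
    questions.append("What barriers do you experience? (select all)")
    return questions

print(component_questions(False))
# ['What barriers do you experience? (select all)']
```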
Table 2.
Types of Procedural-Integrity Data Collection
| Term | Definition |
|---|---|
| Consistency | Evaluating if the procedure is implemented on the schedule prescribed |
| Accuracy | Evaluating if the procedure, inclusive of all components, is implemented correctly |
| Errors of omission | When prescribed components of the intervention are left out |
| Errors of commission | When components are added that are not part of the prescribed intervention |
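To illustrate how these four data types could be operationalized, the sketch below scores one hypothetical session against a prescribed step list. The step names, session counts, and scoring rule are illustrative assumptions, not procedures reported by respondents.

```python
# Hypothetical prescribed vs. observed intervention components for one session.
prescribed = {"present_instruction", "wait_5s", "prompt_if_needed", "deliver_reinforcer"}
observed   = {"present_instruction", "deliver_reinforcer", "reprimand"}

omissions   = prescribed - observed   # prescribed components left out
commissions = observed - prescribed   # added components not in the plan
accuracy    = 100 * len(prescribed & observed) / len(prescribed)

# Consistency: was the procedure run on the prescribed schedule?
sessions_scheduled, sessions_run = 10, 8
consistency = 100 * sessions_run / sessions_scheduled

print(sorted(omissions))                  # ['prompt_if_needed', 'wait_5s']
print(sorted(commissions))                # ['reprimand']
print(f"Accuracy: {accuracy:.0f}%")       # Accuracy: 50%
print(f"Consistency: {consistency:.0f}%") # Consistency: 80%
```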
Section 4 comprised one question that asked survey respondents to identify the component of the procedural-integrity process they considered most difficult to implement. Finally, Section 5 asked participants to rate the extent to which they agreed with statements about procedural integrity, including perceptions of its importance (e.g., the relationship between procedural-integrity scores and client progress) and the extent to which procedural integrity may vary.
Response Measurement and Data Analysis
Data for each question are shown as the percentage of respondents who endorsed a response option. Multiple-choice and Likert-scale questions required the respondent to select a single option, resulting in total response endorsements of 100% for each question. Likert questions consisted of a 5-point scale ranging from strongly agree (score = 1) to strongly disagree (score = 5). Multiselect questions allowed for the endorsement of one or more options, resulting in total response endorsements that could exceed 100%.
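For example, under these scoring rules a multiselect item's endorsement percentage is computed per option against the full respondent pool, so the percentages can sum past 100%. The toy data below are illustrative only, not survey responses.

```python
from collections import Counter

# Toy multiselect data: each respondent may endorse several options.
responses = [
    {"no time", "other tasks take precedence"},
    {"no time"},
    {"no system", "no time"},
]
n = len(responses)
counts = Counter(option for chosen in responses for option in chosen)
for option, k in sorted(counts.items()):
    print(f"{option}: {k}/{n} = {100 * k / n:.1f}%")
# Percentages sum to 166.7% because respondents could endorse multiple options.
```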
Results
Participant Demographics
One hundred thirty-seven individuals consented to participate and met the inclusionary criteria for the survey. Table 3 provides full details of the reported participant demographics. The majority of the sample identified as female (n = 115, 83.9%), identified as white (n = 109, 79.6%), held a master’s degree (n = 131, 95.6%), and held the BCBA credential (n = 134, 97.8%) rather than the BCBA-D credential (n = 3, 2.2%). The length of certification ranged from less than 1 year to more than 10 years, with most respondents certified for 5 years or less (n = 96, 70.1%). Respondents indicated working in various settings, ranging from public schools to center-based services; most indicated working in homes (n = 102, 74.5%) or centers (n = 95, 69.3%). When asked about their primary job classification, most respondents indicated they were a case supervisor (n = 126, 92.0%). Respondents reported supervising caseloads ranging from 1 to more than 21 clients, most often 6 to 15 clients (n = 74, 54.0%).
Table 3.
Demographics
| | n = 137 | % |
|---|---|---|
| Gender | ||
| Female | 115 | 83.9 |
| Male | 18 | 13.1 |
| Nonbinary/gender variant/gender fluid | 1 | .7 |
| Prefer not to answer | 3 | 2.2 |
| Race (select all) | ||
| Black or African American | 4 | 2.9 |
| Asian | 9 | 6.6 |
| White | 109 | 79.6 |
| Other | 6 | 4.4 |
| Hispanic/Latinx/Latine | 12 | 8.8 |
| Prefer not to answer | 7 | 5.1 |
| Highest degree obtained | ||
| Masters | 131 | 95.6 |
| Doctorate | 6 | 4.4 |
| Certification held | ||
| BCBA | 134 | 97.8 |
| BCBA-D | 3 | 2.2 |
| Number of years certified | ||
| Less than 1 year | 10 | 7.3 |
| 1–3 years | 58 | 42.3 |
| 4–5 years | 28 | 20.4 |
| 6–9 years | 30 | 21.9 |
| More than 10 years | 11 | 8.0 |
| Primary place of employment (select all) | ||
| Public school | 36 | 26.3 |
| Private school | 15 | 10.9 |
| Hospital/medical center | 1 | .7 |
| Community | 44 | 32.1 |
| Residential | 1 | .7 |
| Home | 102 | 74.5 |
| Center | 95 | 69.3 |
| Other | 5 | 3.6 |
| How many clients do you directly supervise | ||
| 1–5 | 22 | 16.1 |
| 6–10 | 38 | 27.7 |
| 11–15 | 36 | 26.3 |
| 16–20 | 22 | 16.1 |
| 21 + | 19 | 13.9 |
| Primary job classification | ||
| Behavior Analyst/Case Supervisor | 126 | 92.0 |
| Clinical Director | 4 | 2.9 |
| Other | 7 | 5.1 |
Training
Table 4 provides information on the procedural-integrity training experienced by respondents. Most respondents reported receiving formal training on procedural integrity during graduate school (n = 117, 85.4%). Of those who received formal training (n = 117), their training consisted of a review of the definition of procedural integrity (n = 110, 94.0%), a review of the ethics code (n = 111, 94.9%), when to conduct integrity checks (n = 99, 84.6%), how to conduct integrity checks (n = 92, 78.6%), and how to collect integrity data (n = 106, 90.6%). Data tracking (n = 85, 72.6%), data analysis (n = 89, 76.1%), and feedback (n = 68, 58.1%) were reported to be covered less frequently. Training was predominantly provided via lecture (n = 108, 92.3%) and less often via behavioral skills training (BST; n = 48, 41.0%).
Table 4.
Training
| | n | % |
|---|---|---|
| Did you receive formal training in graduate school? | n = 137 | |
| Yes | 117 | 85.4 |
| No | 20 | 14.6 |
| What did your training entail? (select all) | n = 117 | |
| Definition | 110 | 94.0 |
| Review of ethics code | 111 | 94.9 |
| How to discuss integrity | 47 | 40.2 |
| When to conduct integrity | 99 | 84.6 |
| How to conduct integrity | 92 | 78.6 |
| How to collect data | 106 | 90.6 |
| How to track data | 85 | 72.6 |
| How to analyze data | 89 | 76.1 |
| How to provide feedback | 68 | 58.1 |
| What training methods were used? (select all) | n = 117 | |
| Lecture | 108 | 92.3 |
| BST | 48 | 41.0 |
| Written protocols | 27 | 23.1 |
| Other | 3 | 2.6 |
| Did you receive on-the-job training? | n = 137 | |
| Yes | 77 | 56.2 |
| No | 60 | 43.8 |
| What did your training entail? (select all) | n = 77 | |
| Definition | 53 | 68.8 |
| Review of ethics code | 33 | 42.9 |
| How to discuss integrity | 50 | 64.9 |
| When to conduct integrity checks | 52 | 67.5 |
| How to conduct integrity checks | 59 | 76.6 |
| How to collect data | 62 | 80.5 |
| How to track data | 45 | 58.4 |
| How to analyze data | 45 | 58.4 |
| How to provide feedback | 54 | 70.1 |
| What training methods were used? (select all) | n = 77 | |
| Lecture | 44 | 57.1 |
| BST with colleague | 43 | 55.8 |
| BST with parent | 25 | 32.5 |
| Written protocols | 55 | 71.4 |
| Other | 3 | 3.9 |
| Have you accessed continuing education in this area? | n = 137 | |
| Yes | 51 | 37.2 |
| No | 86 | 62.8 |
Just over half of the respondents endorsed receiving on-the-job training (n = 77, 56.2%). Of those who received on-the-job training (n = 77), training most often entailed how to collect data (n = 62, 80.5%); how (n = 59, 76.6%) and when (n = 52, 67.5%) to conduct procedural-integrity checks; and how to provide feedback (n = 54, 70.1%). Data tracking (n = 45, 58.4%) and data analysis (n = 45, 58.4%) were covered less frequently. Common on-the-job training methods included written protocols (n = 55, 71.4%), lecture (n = 44, 57.1%), and the use of BST with a colleague (n = 43, 55.8%). Most respondents reported that they had not received continuing education pertaining to procedural-integrity practices (n = 86, 62.8% of 137).
Procedural-Integrity Practices and Perceived Barriers
Table 5 provides details about reported procedural-integrity monitoring practices and perceived barriers to engaging in procedural-integrity monitoring. Most respondents reported conducting observations (n = 129, 94.2%) and providing feedback (n = 123, 89.8%). Of those respondents, observations were reported to occur most often on a weekly (n = 57, 44.2% of 129) or monthly (n = 44, 34.1% of 129) basis, a pattern similar to the reported frequency of feedback delivery (weekly: n = 49, 39.8% of 123; monthly: n = 41, 33.3% of 123).
Table 5.
Procedural-Integrity Components and Perceived Barriers
| | n | % |
|---|---|---|
| Do you conduct observations | n = 137 | |
| Yes | 129 | 94.2 |
| No | 8 | 5.8 |
| Frequency of observations per client | n = 129 | |
| Daily | 9 | 7.0 |
| Weekly | 57 | 44.2 |
| Bi-weekly | 6 | 4.7 |
| Monthly | 44 | 34.1 |
| Twice yearly | 11 | 8.5 |
| Annually | 2 | 1.6 |
| Barriers (select all) | n = 137 | |
| Not experiencing any barriers | 48 | 35.0 |
| Not adequately trained | 9 | 6.6 |
| Do not have time | 40 | 29.2 |
| Do not have a schedule for observations | 21 | 15.3 |
| My supervisor does not require it | 6 | 4.4 |
| The funders do not require me to | 12 | 8.8 |
| My organization does not require me to | 8 | 5.8 |
| I do not see the value | 1 | .7 |
| Providers resist close supervision | 12 | 8.8 |
| Other tasks take precedence | 46 | 33.6 |
| Other | 20 | 14.6 |
| Do you collect data | n = 137 | |
| Yes | 80 | 58.4 |
| No | 57 | 41.6 |
| Frequency of data collection per client | n = 80 | |
| Daily | 2 | 2.5 |
| Weekly | 23 | 28.8 |
| Bi-weekly | 2 | 2.5 |
| Monthly | 39 | 48.8 |
| Twice yearly | 13 | 16.3 |
| Annually | 1 | 1.3 |
| How do you collect data on integrity? | n = 80 | |
| Paper and pencil | 40 | 50.0 |
| Computer data sheet | 33 | 41.3 |
| Software program | 3 | 3.8 |
| Other | 4 | 5.0 |
| What data do you collect on integrity? (select all) | n = 80 | |
| Consistency of implementation | 63 | 78.8 |
| Accuracy of implementation | 75 | 93.8 |
| Errors of omission | 35 | 43.8 |
| Errors of commission | 29 | 36.3 |
| Barriers (select all) | n = 137 | |
| Not experiencing any barriers | 31 | 22.6 |
| Not adequately trained | 14 | 10.2 |
| Do not have time | 48 | 35.0 |
| Do not have a system for collecting data | 48 | 35.0 |
| My supervisor does not require it | 15 | 10.9 |
| The funders do not require me to | 15 | 10.9 |
| My organization does not require me to | 17 | 12.4 |
| I do not see the value | 1 | .7 |
| Other tasks take precedence | 49 | 35.8 |
| Other | 16 | 11.7 |
| How do you track integrity data? | n = 137 | |
| I do not currently | 77 | 56.2 |
| Excel/other graphing software | 26 | 19.0 |
| List of scores | 15 | 10.9 |
| Software program | 10 | 7.3 |
| Other | 9 | 6.6 |
| Barriers (select all) | n = 137 | |
| Not experiencing any barriers | 26 | 19.0 |
| Not adequately trained | 29 | 21.2 |
| Do not have time | 49 | 35.8 |
| Do not have a system for tracking | 60 | 43.8 |
| My supervisor does not require it | 18 | 13.1 |
| The funders do not require me to | 18 | 13.1 |
| My organization does not require me to | 18 | 13.1 |
| I do not see the value | 3 | 2.2 |
| Other tasks take precedence | 36 | 26.3 |
| Other | 3 | 2.2 |
| How do you analyze integrity data? | n = 137 | |
| I do not currently | 60 | 43.8 |
| Review graphed data | 33 | 24.1 |
| I have a threshold for when to intervene | 22 | 16.1 |
| Review data in software | 15 | 10.9 |
| Other | 7 | 5.1 |
| Barriers (select all) | n = 137 | |
| Not experiencing any barriers | 44 | 32.1 |
| Not adequately trained | 18 | 13.1 |
| Do not have time | 31 | 22.6 |
| Do not have a system for analyzing | 52 | 38.0 |
| My supervisor does not require it | 11 | 8.0 |
| The funders do not require me to | 13 | 9.5 |
| My organization does not require me to | 14 | 10.2 |
| I do not see the value | 1 | .7 |
| Other tasks take precedence | 28 | 20.4 |
| Other | 5 | 3.6 |
| Unclear what constitutes an appropriate level | 12 | 8.8 |
| Do you currently provide feedback? | n = 137 | |
| Yes | 123 | 89.8 |
| No | 14 | 10.2 |
| Frequency of feedback per client | n = 123 | |
| Daily | 16 | 13.0 |
| Weekly | 49 | 39.8 |
| Bi-weekly | 6 | 4.9 |
| Monthly | 41 | 33.3 |
| Twice yearly | 8 | 6.5 |
| Annually | 1 | .8 |
| How do you provide feedback? (select all) | n = 123 | |
| Vocally in the moment | 119 | 96.7 |
| Schedule a meeting | 53 | 43.1 |
| Send an email | 68 | 55.3 |
| Provide graphical feedback | 25 | 20.3 |
| Provide a written note | 21 | 17.1 |
| Other | 8 | 6.5 |
| Barriers (select all) | n = 137 | |
| Not experiencing any barriers | 83 | 60.6 |
| Not adequately trained | 7 | 5.1 |
| Do not have time | 17 | 12.4 |
| Do not have a system for providing feedback | 16 | 11.7 |
| My supervisor does not require it | 3 | 2.2 |
| The funders do not require me to | 4 | 2.9 |
| My organization does not require me to | 6 | 4.4 |
| Providers resist close supervision | 11 | 8.0 |
| Other tasks take precedence | 18 | 13.1 |
| Other | 5 | 3.6 |
All respondents (n = 137) were asked questions pertaining to barriers. When reporting barriers to conducting observations, some respondents indicated that they did not experience any barriers (n = 48, 35.0%). The most common barriers reported were a lack of time (n = 40, 29.2%) and other tasks taking precedence (n = 46, 33.6%). Most respondents did not report experiencing barriers to providing feedback (n = 83, 60.6%). In addition, a lack of expectation from a supervisor, funder, or organization was rarely endorsed as a barrier to conducting observations (n = 26 endorsements, 19.0%) or delivering feedback (n = 13, 9.5%).
Just over half of respondents reported collecting procedural-integrity data (n = 80, 58.4%). Of those respondents (n = 80), data were collected on a monthly (n = 39, 48.8%) or weekly (n = 23, 28.8%) basis, most often using paper and pencil (n = 40, 50.0%) or a computerized data sheet (n = 33, 41.3%). The types of data collected included consistency of implementation (n = 63, 78.8%), evaluating whether the procedure is implemented on the schedule prescribed; accuracy of implementation (n = 75, 93.8%), evaluating whether the procedure, inclusive of all components, is implemented correctly; errors of omission (n = 35, 43.8%), when prescribed components of the intervention are left out; and errors of commission (n = 29, 36.3%), when components are added that are not part of the prescribed intervention. Across all respondents, the barriers reported to collecting data included not having time (n = 48, 35.0%), lack of a system for data collection (n = 48, 35.0%), and other tasks taking precedence (n = 49, 35.8%). A lack of expectation from a supervisor, funder, or organization to collect procedural-integrity data was also indicated by some respondents (n = 47 endorsements, 34.3%).
More than half of the respondents indicated that they do not track procedural-integrity data (n = 77, 56.2%). For those who did track these data, Excel or other graphing software was used most frequently (n = 26, 19.0% of 137). Across all respondents, the barriers to tracking integrity data included not having a system for tracking (n = 60, 43.8%), a lack of time (n = 49, 35.8%), and other tasks taking precedence (n = 36, 26.3%). Similar to data collection, a lack of expectation from a supervisor, funder, or organization to track procedural-integrity data was indicated by some respondents (n = 54 endorsements, 39.4%).
Nearly half of the respondents reported not analyzing the procedural-integrity data they collected (n = 60, 43.8%). Those who did report analyzing data most often indicated they reviewed graphed data (n = 33, 24.1% of 137). About two thirds of all respondents indicated experiencing barriers to procedural-integrity data analysis (n = 93, 67.9%), including not having a system for analysis (n = 52, 38.0%), a lack of time (n = 31, 22.6%), and other tasks taking precedence (n = 28, 20.4%). A lack of expectation from a supervisor, funder, or organization to analyze procedural-integrity data was endorsed in a little over one fourth of responses (n = 38 endorsements, 27.7%).
When presented with open-ended follow-up questions regarding what “other tasks” take precedence over procedural integrity or what “other” barriers respondents experienced, a total of 100 substantive responses were recorded (note that one person may account for more than one response because the survey field was open-ended). The responses related to “other tasks” and “other” barriers are summarized across the following 14 categories: (1) supervisee training (e.g., BST; n = 44); (2) program and behavior plan modification (n = 30); (3) addressing caregiver concerns and caregiver education (n = 17); (4) addressing interfering client behavior (n = 15); (5) administrative tasks (n = 12); (6) rapport building with staff, caregivers, and/or clients (n = 11); (7) writing treatment plans (n = 8); (8) meetings (n = 7); (9) turnover (n = 6); (10) forgetfulness (n = 4); (11) collecting interobserver agreement data (n = 4); (12) being short-staffed/providing direct instruction (n = 3); (13) travel time between clients (n = 3); and (14) carrying a high caseload (n = 1). Open-ended responses corresponding to these categories can be found in the Supplemental Material available for this article.
Aspect of the Process Most Difficult to Implement
All components of the procedural-integrity process were endorsed as difficult by at least one respondent, with data collection (n = 59, 43.4%) and data tracking (n = 45, 33.1%) endorsed most frequently. Table 6 provides more details regarding what aspects of procedural integrity were considered most difficult.
Table 6.
Aspect of Process Most Difficult to Implement
| | n | % |
|---|---|---|
| Which aspect is most difficult to implement? | n = 136 | |
| Observation | 11 | 8.1 |
| Data collection | 59 | 43.4 |
| Data tracking | 45 | 33.1 |
| Data analysis | 6 | 4.4 |
| Feedback | 15 | 11.0 |
Perceptions of Procedural Integrity Value
The results of the Likert questions pertaining to respondent agreement with perceptions of procedural integrity are reflected in Table 7. Higher scores reflected a higher level of disagreement (e.g., Strongly Agree = 1; Neutral = 3; Strongly Disagree = 5). All item medians fell within the neutral to disagreement range. The items with the highest level of disagreement were “procedural integrity has limited impact on client progress” (n = 137, Mdn = 4.0), “procedural integrity rarely degrades to where it would impact client progress” (n = 137, Mdn = 4.0), and “monitoring procedural integrity is only necessary when client progress is stalled” (n = 137, Mdn = 4.0). The item with the highest level of agreement was “once trained, procedural integrity typically stays high” (n = 137, Mdn = 3.0).
Table 7.
Agreement with Statements about Procedural Integrity
| Item | SA | A | N | D | SD |
|---|---|---|---|---|---|
| Procedural integrity has limited impact on client progress | 7 (5.1) | 9 (6.6) | 9 (6.6) | 49 (35.8) | 63 (46.0) |
| Procedural integrity rarely degrades to where it would impact client progress | 2 (1.5) | 6 (4.4) | 17 (12.4) | 64 (46.7) | 48 (35.0) |
| Once trained, procedural integrity typically stays high | 5 (3.6) | 34 (24.8) | 40 (29.2) | 49 (35.8) | 9 (6.6) |
| Monitoring procedural integrity is only necessary when client progress is stalled | 0 (0.0) | 2 (1.5) | 14 (10.2) | 90 (65.7) | 31 (22.6) |
| Procedures are rarely defined enough to permit easy assessment of procedural integrity | 1 (0.7) | 13 (9.5) | 31 (22.6) | 70 (51.1) | 22 (16.1) |
| I receive reminders to monitor procedural integrity | 2 (1.5) | 17 (12.4) | 19 (13.9) | 42 (30.7) | 57 (41.6) |
| My supervisor monitors my engagement in the procedural-integrity process | 4 (2.9) | 9 (6.6) | 22 (16.1) | 51 (37.2) | 51 (37.2) |
| I receive feedback on my procedural-integrity monitoring | 6 (4.4) | 7 (5.1) | 18 (13.1) | 52 (38.0) | 54 (39.4) |
| What constitutes an acceptable level of procedural integrity is clear | 16 (11.7) | 26 (19.0) | 50 (36.5) | 31 (22.6) | 14 (10.2) |
The initial number in each cell represents the number of respondents; numbers in parentheses represent the percentage of respondents. SA = Strongly Agree; A = Agree; N = Neutral; D = Disagree; SD = Strongly Disagree
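As a check on how these medians follow from the scoring rule (Strongly Agree = 1 through Strongly Disagree = 5), the sketch below recomputes the median for the first Table 7 item from its reported counts:

```python
from statistics import median

# Counts for "procedural integrity has limited impact on client progress"
# from Table 7, keyed by Likert score (SA = 1 ... SD = 5).
counts = {1: 7, 2: 9, 3: 9, 4: 49, 5: 63}
scores = [score for score, n in counts.items() for _ in range(n)]
print(median(scores))  # 4, i.e., "disagree" -- matching the reported Mdn = 4.0
```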
Discussion
This study assessed the status of procedural-integrity training, clinical practices, and perceived values and barriers experienced by BCBAs providing ABA services. Respondent demographics largely mirrored the BACB’s most recent demographic data from January 2023 (BACB, n.d.), with respondents primarily identifying as white (79.6% current study, 69.2% BACB) and female (83.9% current study, 86.7% BACB). In addition, the majority of those surveyed were certified for 5 years or fewer (70.1%), capturing an experience level that makes up a good portion of our field per the data reported by the BACB (n.d.) in October 2023 (49.5% certified for 5 years or fewer). The current study's purpose was to offer preliminary information via survey methodology to fuel further discussion and investigation. It is important to note that there are limitations inherent to the use of self-report measures, such as surveys, to assess clinician behavior and attitudes. Challenges associated with self-report include, but are not limited to, social desirability bias and recall errors (Paulhus & Vazire, 2007). For instance, when clinicians were asked to indicate how often they engage in specific aspects of the procedural-integrity monitoring process, their responses could be affected by difficulty recalling how frequently they engage in these behaviors, recency and primacy effects, or the motivation to present oneself in a particular manner. Nonetheless, these findings can be used to offer strategies to mitigate identified procedural-integrity training and practice gaps, generate performance-management solutions for the identified barriers, and fuel further investigation to maximize the overall quality of client services and supervision.
In relation to demographics, although our respondent sample is highly representative of recent BCBA demographic data shared by the BACB, there are some limitations. First, we received a small sample of eligible respondents (n = 137). In addition, although respondents were asked about the number of years certified, and we attempted to distribute the survey to clinicians in various geographic locations across the United States, we did not include demographic survey questions related to age or geographic location. Therefore, the age and geographic distribution of the respondents are unknown.
Regarding procedural-integrity training, not all respondents reported receiving training in graduate school, and about half reported receiving on-the-job training. When graduate school training was provided, it was delivered primarily via lecture, with less than half of responses indicating training via BST. Those who received on-the-job training most often reported receiving written protocols, some lectures, and little use of BST. These results were similar to those of Blackman et al. (2023) and DiGennaro Reed and Henley (2015), who surveyed training and performance-management practices in ABA organizations and found that organizations primarily relied on instructions and modeling to train their employees. This highlights an opportunity for graduate programs and organizations to incorporate empirically validated competency-based training methods, such as BST, into their training practices (e.g., Parsons et al., 2012). In addition, the last few components of the procedural-integrity monitoring process (i.e., data tracking, data analysis, and feedback) were reported to be covered less frequently than other training content. The identification of training as a barrier to implementing recommended practices is also in line with previous studies (e.g., Blackman et al., 2023; DiGennaro Reed & Henley, 2015; Hagermoser Sanetti & DiGennaro Reed, 2012; Oliver et al., 2015; Roscoe et al., 2015; St. Peter et al., 2023) and echoes the concern expressed by St. Peter et al. (2023) regarding graduate school training and its potential effects on data measurement and analysis in clinical practice. Our findings, in addition to previous survey results (e.g., Blackman et al., 2023; DiGennaro Reed & Henley, 2015; Oliver et al., 2015; Roscoe et al., 2015; Sellers et al., 2019), emphasize the importance of organizational leadership facilitating initial and ongoing training on important topics such as procedural integrity to ensure competence in and support for clinician performance. In particular, an emphasis should be placed on training the procedural-integrity process as a vehicle for clinical and supervisory data-based decision making, so that BCBAs can take the actionable steps needed to review and document trends that may affect client and provider performance. Hence, these results highlight the necessity of the recent clarification, via the updated BACB Test Content Outline (6th ed.), regarding the importance of data-based decision making for procedural-integrity and supervisory practices.
Another important area of clinical training is fieldwork supervision. Per the BCBA Handbook (BACB, 2022), the purpose of fieldwork supervision is to “improve and maintain the behavior-analytic, professional, and ethical repertoires of the trainee and facilitate the delivery of high-quality services to the trainee’s clients.” This includes sharing performance expectations; conducting BST; observing provider performance; delivering feedback; modeling technical, professional, and ethical behavior; guiding the development of problem-solving and clinical decision making; reviewing written materials (e.g., behavior programs, data sheets, reports); and evaluating the effects of the provider’s service delivery (p. 2). Because engagement in the procedural-integrity process is a responsibility that permeates clinical practice and supervision, it would be beneficial for the procedural-integrity process to be covered as an expectation throughout the course of fieldwork supervision. A limitation of the current study is the lack of questions regarding fieldwork supervision practices, making the extent to which the procedural-integrity process is trained via the fieldwork experience unknown.
Results related to clinical practices indicated that most respondents conduct observations (94.2%) and provide feedback (89.8%) based on their observations. However, in terms of data-based decision making, just over half of respondents (58.4%) reported collecting procedural-integrity data, more than half reported that they do not track performance over time (56.2%), and nearly half reported that they do not analyze these data (43.8%). Although it is encouraging to see that feedback is being provided reliably, given how important it is for facilitating provider behavior change (e.g., Sigurdsson et al., 2018), it is unclear what behaviors clinicians engage in as they observe their providers or what they base feedback on, especially as it relates to long-term performance feedback in the absence of data collection, tracking, and analysis. Completing these data-related components would help ensure that the feedback provided is as objective and accurate as possible. This is especially relevant to long-term feedback provided via performance evaluations that occur every few months. Because research has demonstrated that the accuracy of data may decrease as the latency to recording these data increases (e.g., Jasper & Taber-Doughty, 2015), basing performance evaluations on a lack of data, or on recall of a provider’s performance after a significant amount of time has passed, does not seem conducive to accurate and objective performance-evaluation feedback. In contrast, if a clinician is regularly collecting, tracking, and analyzing data on procedural integrity, they could presumably use these data to generate thorough and supportive performance evaluations that foster their providers’ professional growth. Furthermore, collecting and tracking procedural-integrity data would presumably make feedback delivery easier for clinicians and assist them in adhering to their ethical responsibility of data-based decision making.
The most prevalent forms of feedback reported in this study were vocal verbal feedback (in the moment or via a meeting) and written feedback (an email or note). Graphical feedback was reported in only 20.3% of responses. Research has shown that sharing data via graphical feedback in combination with verbal (vocal or written) feedback produces the greatest intervention effects in organizational settings (e.g., Sleiman et al., 2020) and may produce greater effects than verbal feedback alone (e.g., Hagermoser Sanetti et al., 2007). Incorporating the data collection component not only allows for data-based decision making for both interventions and supervision but also assists in the documentation of such practices, which is a requirement per the Ethics Code for Behavior Analysts (BACB, 2020). Documentation is paramount because it provides a permanent product for clinical decision making, can facilitate future decisions, simplifies case and supervision transfers, and provides a clear outline of actions taken to avoid ethical ramifications should issues arise within supervisory relationships.
When data collection was reported, the most common type focused on evaluating whether the procedure, inclusive of all components, was implemented accurately (93.8%), followed by evaluating whether the procedure was implemented consistently on the schedule prescribed (78.8%). Those collecting data less often reported explicitly collecting data on errors of omission and errors of commission (43.8% and 36.3%, respectively). Although both omission and commission errors can prove problematic to an intervention, emerging research has shown that they can have different impacts across interventions and should therefore ideally both be measured. For instance, Colón and Wallander (2023) reviewed procedural-integrity analyses and found some patterns in the literature pertaining to errors of omission and commission. Procedural-integrity studies that analyzed the accuracy of a procedure involving solely commission errors all required a high level of accuracy (70%–100%) to be effective (e.g., DiGennaro Reed et al., 2011; Jenkins et al., 2015; Pence & St. Peter, 2015). In contrast, studies that solely investigated omission errors generally required a lower level of accuracy (33%–66%) to be effective (e.g., Groskreutz et al., 2011; Grow et al., 2009; Holcombe et al., 1994; Northup et al., 1997; Saini et al., 2015; Worsdell et al., 2000, 2005). Although additional studies are necessary to provide more nuanced practical recommendations, this research indicates that clinicians should be aware that errors of commission may be more detrimental than errors of omission in some cases.
Because our survey was not designed to fully inquire about how procedural-integrity data were measured or calculated, the efficiency and utility of the reported data collection are unknown. Furthermore, although our survey did not inquire about data collection measurement systems, it is important to note that certain measurements and calculations may yield different outcomes for the interpretation of procedural-integrity data. An emerging literature offers clinicians preliminary guidance in this area by evaluating different ways to measure and calculate procedural-integrity data (e.g., Bergmann et al., 2023; Cook et al., 2015). However, more research is warranted to determine prevalent measurement design practices and to offer guidance regarding the accuracy, efficiency, and effectiveness of data collected across various interventions and contexts, so that practitioners can be given fuller recommendations to aid in critical decisions related to procedural-integrity measurement.
In relation to reported training practices, the low frequency of data tracking and data analysis aligns with the component areas that were reported to be covered less frequently in graduate school and on-the-job training. This further indicates that this training-to-practice gap should be investigated and addressed to mitigate any impact it may have on clinical practice. The procedural-integrity literature is steadily progressing, and those charged with teaching these key concepts are encouraged to seek and share training content (e.g., articles and book chapters) related to clinical practice applications of procedural integrity (e.g., Colón & Wallander, 2023; DiGennaro Reed & Reed, 2014; Vollmer et al., 2008). Likewise, St. Peter et al. (2023) offered article recommendations that provide helpful instructions and discussions related to the measurement and calculation of procedural-integrity data (i.e., Gresham et al., 2000; Hagermoser Sanetti & Fallon, 2011; Kodak et al., 2022; Vollmer et al., 2008), some of which also include example data sheets (i.e., Han et al., 2023; Kodak et al., 2022; Vollmer et al., 2008) that may prove useful when selecting measurement types and collecting data in applied settings. Furthermore, a recent article by Morris et al. (2024) provides recommendations and examples for clinicians to task analyze intervention procedures into measurable units, assign measures to each intervention component, and analyze procedural-integrity data to support supervisee performance.
Moreover, because respondents indicated that a lack of time, a lack of systems, and competing contingencies were barriers to data tracking and data analysis, some clinicians are unlikely to experience the full benefit of all components of the procedural-integrity monitoring process, which presumably includes long-term benefits via improved client outcomes and supervisee performance. Some clinicians indicated that supervisee training and rapport building were “other tasks” that took precedence over full implementation of the procedural-integrity monitoring process. These open-ended responses also highlighted some misconceptions regarding the utility of procedural integrity with new hires (see Supplemental Materials, Table 1); however, the procedural-integrity process, when executed in its entirety, would presumably aid in training (e.g., by providing a metric for competency) and rapport building (e.g., via supervisory support and objective feedback) (see Colón & Wallander, 2023, for a practical explanation and guide for how BST and procedural-integrity monitoring are complementary processes that simultaneously foster clinical and supervisory quality from the inception of preservice training throughout service delivery). Likewise, some clinicians indicated that modifications to programs and plans were “other tasks” that took precedence over full implementation of the procedural-integrity monitoring process. This appears counterproductive because engaging in the procedural-integrity process is an important first step before making program modifications; it ensures that unnecessary changes are not made to what may have been an effective program had it been implemented with high procedural integrity (e.g., Vollmer et al., 2008). For instance, if a client is not making progress and the procedural-integrity monitoring process reveals low procedural integrity, the BCBA should provide remedial training for the provider. However, if the process reveals high procedural integrity, then program modifications may be warranted. These findings further indicate that clinicians are likely not experiencing the potential benefits of fully integrating procedural-integrity practices, which can foster effective and efficient decision making and overall intervention and supervision quality. It is also possible that gaps in training are preventing clinicians from developing meaningful data collection, tracking, and analysis systems that would establish and maintain contingencies supporting continual engagement in the procedural-integrity process. Further investigations regarding these hypotheses are warranted.
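A minimal sketch of the integrity-first decision rule described in this paragraph appears below. The 80% threshold is an illustrative assumption, not a value recommended by the study; indeed, respondents themselves reported that acceptable integrity levels are unclear.

```python
# Sketch of the integrity-first decision rule described above; the threshold
# is a hypothetical placeholder, not an empirically derived criterion.
INTEGRITY_THRESHOLD = 80.0  # assumed acceptable level, in percent

def next_step(client_progressing: bool, mean_integrity: float) -> str:
    if client_progressing:
        return "continue the current plan with ongoing monitoring"
    if mean_integrity < INTEGRITY_THRESHOLD:
        return "provide remedial training before modifying the program"
    return "integrity is high; consider modifying the program"

print(next_step(client_progressing=False, mean_integrity=55.0))
# provide remedial training before modifying the program
print(next_step(client_progressing=False, mean_integrity=95.0))
# integrity is high; consider modifying the program
```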
Despite many respondents endorsing data collection as the most difficult component of the procedural-integrity process and only 58% of respondents indicating that they currently engage in procedural-integrity data collection, respondents indicated that they do see the value of collecting procedural-integrity data. Based on our results, however, perceived barriers are likely interfering with the alignment between this stated value and reported practices. Furthermore, respondents indicated the highest level of agreement with the statement “once trained, procedural integrity typically stays high.” This suggests that myths may surround the factors that impact procedural integrity. For instance, in addition to training, several factors can impact procedural integrity and contribute to procedural drift, such as the varied responsibilities of providers, environmental distractions, and competing contingencies (e.g., data collection, conducting lessons, facilitating peer interactions, attending to other clients, avoidance of target behavior or collateral behavior; Allen & Warzak, 2000; Berdeaux et al., 2022; Miller et al., 2010; Sloman et al., 2005; Stocco & Thompson, 2015). Although training and initial procedural-integrity monitoring are integral to the process, ongoing monitoring is critical because new circumstances may arise that could lead to drift and require problem solving between the supervisor and provider to return performance to desired levels.
Common perceived barriers to the procedural-integrity process included respondents not having time to engage in these practices, other work responsibilities taking precedence (competing contingencies), and the lack of an organizational system to support the process. In the open responses, many respondents reported engaging in time-intensive activities (e.g., sending written feedback via email, scheduling meetings to discuss performance), which may further illustrate the perceived barriers of a lack of time and a lack of organizational systems. In addition, a lack of expectations from supervisors, funders, or organizations was more likely for the components reported to occur less frequently (i.e., data collection, data tracking, and data analysis) than for those reported to occur most often (i.e., observations and feedback). These responses indicate that, in addition to training methods, expectations (or a lack thereof) may influence which components of the process are conducted regularly in practice. These findings are in line with previous research that identified a lack of resources (e.g., time) and a lack of expectations as barriers to researchers’ procedural-integrity practices (Hagermoser Sanetti & DiGennaro Reed, 2012; Perepletchikova et al., 2009; St. Peter et al., 2023). In addition, Sellers et al. (2019) reported that a lack of time was a barrier to meeting other BACB expectations related to high-quality supervision practices (i.e., thorough preparation for supervision meetings and creating systems for tracking mastery of content knowledge and skills).
Although practice expectations exist for the field of ABA, the day-to-day expectations placed on BCBA supervisors are the most immediate contingencies affecting their behavior. It is therefore reasonable to presume that organizational expectations exert considerable influence on which behaviors supervisors engage in regularly. If systems to foster quality are not present, other tasks will take precedence and time will not be allocated to the procedural-integrity process. Organizations must put professional expectations in place for supervisors to engage in the procedural-integrity process, and organizational leadership should work toward diminishing barriers that arise in clinical practice. BCBAs, in turn, should assist in generating solutions, advocate for the resources necessary to fulfill their ethical obligations, and be involved in the design, implementation, and modification of the procedural-integrity process.
Future research should entail experimental studies targeted at overcoming the reported barriers and examining the extent to which doing so assists BCBAs in consistently engaging in the procedural-integrity process. A lack of systems and resources for procedural integrity may be impeding optimal provider performance; therefore, organizational support to set up and/or modify procedural-integrity systems may prove useful in decreasing response effort for BCBAs. Respondents indicated that the components they found most difficult to implement were data collection and data tracking, and the components they were most likely to omit were data collection, tracking, and analysis. A practical solution, then, may be to provide a system that assists BCBAs with data collection and automatically graphs data for later analysis (a sketch of such a system follows this paragraph). One could then analyze whether such a system affects procedural-integrity data collection, data tracking, and data analysis practices and whether it saves BCBAs time and effort. Furthermore, organizational analyses of turnover and cost–benefit analyses can aid in determining the return on investment of engagement in the procedural-integrity process. It is important to note that a potential limitation of an automated data analysis system is that it does not automatically account for individualization across procedures or clients. Although individualization is important, some standardization of procedural-integrity tools is critical for organization-level analysis and intervention. Standardization may also assist in training supervisors to use the tools in practice. To overcome this potential limitation, supervisors can be given the option to document individualized notes in the automated system. Although it requires further investigation, the benefits of adopting a standardized system with the option of individualization and the ability to add notes likely outweigh those of each clinician creating tools individually. To support the introduction of such systems, it is important to provide training and task clarification, set expectations, and foster accountability via ongoing follow-up and performance management feedback. Setting expectations may entail goal setting, which consists of defining a specified, preset level of performance to be attained (Daniels & Bailey, 2014), in addition to providing feedback or other performance-related consequences to foster goal attainment (Wilder et al., 2009).
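The following is one hypothetical shape such a system could take: a standardized per-session integrity entry with an optional free-text note field (to preserve individualization), plus automatic graphing of a provider’s trend for later analysis. All names, fields, and the plotted 80% criterion line are illustrative assumptions, not a description of any existing product.

```python
# Hypothetical sketch of the proposed system: standardized integrity entries
# with an optional individualized note, and automatic trend graphing.
from dataclasses import dataclass
import matplotlib.pyplot as plt

@dataclass
class IntegrityEntry:
    session: int
    provider: str
    integrity_pct: float
    note: str = ""  # optional individualized note per observation

def plot_provider_trend(entries: list[IntegrityEntry], provider: str) -> None:
    """Automatically graph one provider's integrity scores across sessions."""
    data = sorted((e for e in entries if e.provider == provider),
                  key=lambda e: e.session)
    sessions = [e.session for e in data]
    scores = [e.integrity_pct for e in data]
    plt.plot(sessions, scores, marker="o")
    plt.axhline(80, linestyle="--", label="illustrative 80% criterion")
    plt.xlabel("Observation session")
    plt.ylabel("Procedural integrity (%)")
    plt.title(f"Integrity trend: {provider}")
    plt.ylim(0, 100)
    plt.legend()
    plt.show()

log = [
    IntegrityEntry(1, "Provider A", 70.0, "first observation after training"),
    IntegrityEntry(2, "Provider A", 85.0),
    IntegrityEntry(3, "Provider A", 90.0, "prompt-delay errors resolved"),
]
plot_provider_trend(log, "Provider A")
```

Storing entries in a standardized structure like this is what would make organization-level aggregation possible (e.g., averaging across providers or teams) while the note field preserves case-by-case context.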
Likewise, given that it may be beneficial to view behavior-focused performance management interventions and results-focused organizational interventions (i.e., behavioral systems analysis [BSA]) as complementary endeavors (Wilder & Cymbal, 2023), another consideration for improving procedural-integrity practices is the use of a BSA. The BSA process involves analyzing variables to aid in planning and managing performance at the organization, process, and performer levels (Diener et al., 2009). To provide value-adding operations to an organization, BSA helps build alignment and clear direction among activities within the organization, facilitates agreement on system disconnects, helps set organizational priorities, and determines where resources should be allocated to resolve disconnects (Diener et al., 2009). To assist with practical application, Diener et al. (2009) offer a framework for conducting a BSA, including an example of the behavioral systems analysis questionnaire (BSAQ) and an example of the BSAQ applied to an organization. For a practitioner tutorial on process mapping, see Luke et al. (2024). Future research should measure whether such organizational changes affect procedural-integrity process performance, efficiency, and social validity for BCBAs and their supervisees (e.g., job satisfaction, professional development).
If clients are to achieve optimal benefit, as evidenced by treatment gains and consistent service delivery, oversight of the accurate and consistent implementation of interventions must be ensured. Moreover, procedural integrity should be paramount for clinicians and organizations alike because, as Vollmer et al. (2008) underscored, monitoring procedural integrity should not be taken lightly given that these practices bear on pivotal decisions in a client’s life (e.g., placement decisions, decisions to include more restrictive procedures). It is therefore in everyone’s best interest to pursue a system that supports quality metrics such as procedural integrity, which promotes organizational health and professional growth and, ultimately, client success.
Supplementary Information
Below is the link to the electronic supplementary material.
Funding
No funding was received in relation to this study.
Data Availability
Data are available from the corresponding author upon reasonable request.
Declarations
Conflicts of Interest
We have no known conflicts of interest.
Ethical Approval
The study was approved by a university institutional review board.
Informed Consent
All participants provided informed consent.
Footnotes
Gratitude is extended to Adriana (Adie) Anderson, Maia Jackson, Katherine Johnson, and Patrick Kwedor for their time reviewing the survey logic prior to its administration.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- Allen, K. D., & Warzak, W. J. (2000). The problem of parental nonadherence in clinical behavior analysis: Effective treatment is not enough. Journal of Applied Behavior Analysis, 33, 373–391. 10.1901/jaba.2000.33-373
- Barnett, D., Hawkins, R., McCoy, D., Wahl, E., Shier, A., Denune, H., & Kimener, L. (2014). Methods used to document procedural fidelity in school-based intervention research. Journal of Behavioral Education, 23(1), 89–107. 10.1007/s10864-013-9188-y
- Behavior Analyst Certification Board. (2017). BCBA task list (5th ed.). https://www.bacb.com/wp-content/uploads/2020/08/BCBA-task-list-5th-ed-230130a.pdf
- Behavior Analyst Certification Board. (2020). Ethics code for behavior analysts. https://bacb.com/wp-content/ethics-code-for-behavior-analysts/
- Behavior Analyst Certification Board. (2022). BCBA test content outline (6th ed.). https://www.bacb.com/wp-content/uploads/2022/01/BCBA-6th-Edition-Test-Content-Outline-230206-a.pdf
- Behavior Analyst Certification Board. (n.d.). BACB certificant data. https://www.bacb.com/BACB-certificant-data
- Berdeaux, K. L., Lerman, D. C., & Williams, S. D. (2022). Effects of environmental distractions on teachers’ procedural integrity with three function-based treatments. Journal of Applied Behavior Analysis, 55(3), 832–850. 10.1002/jaba.918
- Bergmann, S., Niland, H., Gavidia, V. L., Strum, M. D., & Harman, M. J. (2023). Comparing multiple methods to measure procedural fidelity of discrete-trial instruction. Education & Treatment of Children, 46, 201–220. 10.1007/s43494-023-00094-w
- Blackman, A. L., DiGennaro Reed, F. D., Erath, T. G., & Henley, A. J. (2023). A survey of staff training and performance management practices: An update. Behavior Analysis in Practice, 16(3), 731–744.
- Blackman, A. L., Glick, T., Glick, T., Jung, D., & Conde, K. A. (2024). Exploring the relationship between evaluation frequency and monthly fidelity score on provider retention: A longitudinal study. Journal of Organizational Behavior Management, 1–12. 10.1080/01608061.2024.2386132
- Brand, D., Henley, A. J., DiGennaro Reed, F. D., Gray, E., & Crabbs, B. (2019). A review of published studies involving parametric manipulations of treatment integrity. Journal of Behavioral Education, 28, 1–26. 10.1007/s10864-018-09311-8
- Carr, J. E., Wilder, D. A., Majdalany, L., Mathisen, D., & Strain, L. A. (2013). An assessment-based solution to a human-service employee performance problem: An evaluation of the performance diagnostic checklist-human services. Behavior Analysis in Practice, 6(1), 16–32. 10.1007/BF03391789
- Collier-Meek, M. A., Fallon, L. M., & Gould, K. (2018). How are treatment integrity data assessed? Reviewing the performance feedback literature. School Psychology Quarterly, 33(4), 517–526. 10.1037/spq0000239
- Colón, C. L., & Wallander, R. (2023). Treatment integrity. In J. L. Matson (Ed.), Handbook of applied behavior analysis (pp. 439–463). Autism & Child Psychopathology Series. Springer. 10.1007/978-3-031-19964-6_24
- Cook, J. E., Subramaniam, S., Brunson, L. Y., Larson, N. A., Poe, S. G., & St. Peter, C. C. (2015). Global measures of treatment integrity may mask important errors in discrete trial training. Behavior Analysis in Practice, 8(1), 37–47. 10.1007/s40617-014-0039-7
- Council of Autism Service Providers. (2020). Applied behavior analysis treatment of autism spectrum disorder: Practice guidelines for healthcare funders and managers (2nd ed.). https://assets-002.noviams.com/novi-file-uploads/casp/pdfs-and-documents/ASD_Guidelines/ABA-ASD-Practice-Guidelines.pdf
- Daniels, A., & Bailey, J. (2014). Performance management: Changing behavior that drives organizational effectiveness (5th ed.). Performance Management Publications.
- Diener, L. H., McGee, H. M., & Miguel, C. (2009). An integrated approach for conducting a behavioral systems analysis. Journal of Organizational Behavior Management, 29(2), 108–135. 10.1080/01608060902874534
- DiGennaro Reed, F. D., Reed, D. D., Baez, C. N., & Maguire, H. (2011). A parametric analysis of errors of commission during discrete-trial training. Journal of Applied Behavior Analysis, 44, 611–615. 10.1901/jaba.2011.44-611
- DiGennaro Reed, F. D., & Reed, D. D. (2014). Evaluating and improving intervention integrity. In J. Luiselli (Ed.), Children and youth with autism spectrum disorder (ASD): Recent advances and innovations in assessment, education, and intervention (pp. 145–162). Oxford University Press.
- DiGennaro Reed, F. D., & Henley, A. J. (2015). A survey of staff training and performance management practices: The good, the bad, and the ugly. Behavior Analysis in Practice, 8(1), 16–26. 10.1007/s40617-015-0044-5
- DiGennaro, F. D., Martens, B. K., & Kleinmann, A. E. (2007). A comparison of performance feedback procedures on teachers’ treatment implementation integrity and students’ inappropriate behavior in special education classrooms. Journal of Applied Behavior Analysis, 40, 447–461. 10.1901/jaba.2007.40-447
- Falakfarsa, G., Brand, D., Jones, L., Godinez, E. S., Richardson, D. C., Hanson, R. J., Velazquez, S. D., & Wills, C. (2021). Treatment integrity reporting in Behavior Analysis in Practice 2008–2019. Behavior Analysis in Practice, 15(2), 443–453. 10.1007/s40617-021-00573-9
- Fryling, M. J., Wallace, M. D., & Yassine, J. N. (2012). Impact of treatment integrity on intervention effectiveness. Journal of Applied Behavior Analysis, 45, 449–453. 10.1901/jaba.2012.45-449
- Gresham, F. M. (1989). Assessment of treatment integrity in school consultation and prereferral intervention. School Psychology Review, 18(1), 37–50. 10.1080/02796015.1989.12085399
- Gresham, F. M., MacMillan, D. L., Beebe-Frankenberger, M. E., & Bocian, K. M. (2000). Treatment integrity in learning disabilities intervention research: Do we really know how treatments are implemented? Learning Disabilities Research & Practice, 15(4), 198–205. 10.1207/SLDRP1504_4
- Groskreutz, N. C., Groskreutz, M. P., & Higbee, T. S. (2011). Effects of varied levels of treatment integrity on appropriate toy manipulation in children with autism. Research in Autism Spectrum Disorders, 5(4), 1358–1369. 10.1016/j.rasd.2011.01.018
- Grow, L. L., Carr, J. E., Gunby, K. V., Charania, S. M., Gonsalves, L., Ktaech, I. A., & Kisamore, A. N. (2009). Deviations from prescribed prompting procedures: Implications for treatment integrity. Journal of Behavioral Education, 18(2), 142–156. 10.1007/s10864-009-9085-6
- Hagermoser Sanetti, L. M., & DiGennaro Reed, F. D. (2012). Barriers to implementing treatment integrity procedures in school psychology research: Survey of treatment outcome researchers. Assessment for Effective Intervention, 37(4), 195–202. 10.1177/153450841143246
- Hagermoser Sanetti, L. M., & Fallon, L. M. (2011). Treatment integrity assessment: How estimates of adherence, quality, and exposure influence interpretation of implementation. Journal of Educational & Psychological Consultation, 21(3), 209–232. 10.1080/10474412.2011.595163
- Hagermoser Sanetti, L. M., Luiselli, J. K., & Handler, M. W. (2007). Effects of verbal and graphic performance feedback on behavior support plan implementation in a public elementary school. Behavior Modification, 31(4), 454–465. 10.1177/0145445506297583
- Han, J. B., Bergmann, S., Brand, D., Wallace, M. D., St. Peter, C. C., Feng, J., & Long, B. P. (2023). Trends in reporting procedural integrity: A comparison. Behavior Analysis in Practice, 16(2), 388–398. 10.1007/s40617-022-00741-5
- Holcombe, A., Wolery, M., & Snyder, E. (1994). Effects of two levels of procedural fidelity with constant time delay on children’s learning. Journal of Behavioral Education, 4(1), 49–73. 10.1007/BF01560509
- Jasper, A. D., & Taber-Doughty, T. (2015). Special educators and data recording: What’s delayed recording got to do with it? Focus on Autism & Other Developmental Disabilities, 30(3), 143–153. 10.1177/1088357614547809
- Jenkins, S. R., Hirst, J. M., & DiGennaro Reed, F. D. (2015). The effects of discrete-trial training commission errors on learner outcomes: An extension. Journal of Behavioral Education, 24, 196–209.
- Kazemi, E., Shapiro, M., & Kavner, A. (2015). Predictors of intention to turnover in behavior technicians working with individuals with autism spectrum disorder. Research in Autism Spectrum Disorders, 17, 106–115. 10.1016/j.rasd.2015.06.012
- Kodak, T., Bergmann, S., & Waite, M. (2022). Strengthening the procedural fidelity research-to-practice loop in animal behavior. Journal of the Experimental Analysis of Behavior, 118(2), 215–236. 10.1002/jeab.780
- Luke, M. M., Dams, P., & Lichtenberger, S. N. (2024). Improving human-service organizations through process mapping: A tutorial for practitioners. Behavior Analysis in Practice, 17, 359–370. 10.1007/s40617-024-00906-4
- Miller, J. R., Lerman, D. C., & Fritz, J. N. (2010). An experimental analysis of negative reinforcement contingencies for adult-delivered reprimands. Journal of Applied Behavior Analysis, 43(4), 769–773. 10.1901/jaba.2010.43-769
- Morris, C., Jones, S. H., & Oliveria, J. H. (2024). A practitioner’s guide to measuring procedural fidelity. Behavior Analysis in Practice, 17, 643–655. 10.1007/s40617-024-00910-8
- Northup, J., Fisher, W., Kahng, S. W., Harrell, R., & Kurtz, P. (1997). An assessment of the necessary strength of behavioral treatments for severe behavior problems. Journal of Developmental & Physical Disabilities, 9(1), 1–16. 10.1023/A:1024984526008
- Oliver, A. C., Pratt, L. A., & Normand, M. P. (2015). A survey of functional behavior assessment methods used by behavior analysts in practice. Journal of Applied Behavior Analysis, 48(4), 817–829. 10.1002/jaba.256
- Parsons, M. B., Rollyson, J. H., & Reid, D. H. (2012). Evidence-based staff training: A guide for practitioners. Behavior Analysis in Practice, 5(2), 2–11. 10.1007/BF03391819
- Paulhus, D. L., & Vazire, S. (2007). The self-report method. In R. W. Robins, R. C. Fraley, & R. F. Krueger (Eds.), Handbook of research methods in personality psychology (pp. 224–239). Guilford Press.
- Pence, S. T., & St. Peter, C. C. (2015). Evaluation of treatment integrity errors on mand acquisition. Journal of Applied Behavior Analysis, 48(3), 575–589. 10.1002/jaba.238
- Perepletchikova, F., Hilt, L. M., Chereji, E., & Kazdin, A. E. (2009). Barriers to implementing treatment integrity procedures: Survey of treatment outcome researchers. Journal of Consulting & Clinical Psychology, 77(2), 212. 10.1037/a0015232
- Peterson, L., Homer, A. L., & Wonderlich, S. A. (1982). The integrity of independent variables in behavior analysis. Journal of Applied Behavior Analysis, 15(4), 477–492. 10.1901/jaba.1982.15-477
- Roscoe, E. M., Phillips, K. M., Kelly, M. A., Farber, R., & Dube, W. V. (2015). A statewide survey assessing practitioners’ use and perceived utility of functional assessment. Journal of Applied Behavior Analysis, 48(4), 830–844. 10.1002/jaba.259
- Saini, V., Gregory, M. K., Uran, K. J., & Fantetti, M. A. (2015). Parametric analysis of response interruption and redirection as treatment for stereotypy. Journal of Applied Behavior Analysis, 48(1), 96–106. 10.1002/jaba.186
- Sanetti, L. M. H., & Kratchowill, T. R. (2009). Toward developing a science of treatment integrity: Introduction to the special series. School Psychology Review, 38(4), 445–459.
- Sellers, T. P., Valentino, A. L., Landon, T. J., & Aiello, S. (2019). Board certified behavior analysts’ supervisory practices of trainees: Survey results and recommendations. Behavior Analysis in Practice, 12, 536–546. 10.1007/s40617-019-00367-0
- Sigurdsson, S. O., Ring, B. M., & Warman, A. (2018). Performer-level interventions: Consequences. In B. Wine & J. K. Pritchard (Eds.), Organizational behavior management: The essentials (pp. 217–244). Hedgehog Publishers.
- Sleiman, A. A., Sigurjonsdottir, S., Elnes, A., Gage, N. A., & Gravina, N. E. (2020). A quantitative review of performance feedback in organizational settings (1998–2018). Journal of Organizational Behavior Management, 40(3–4), 303–332. 10.1080/01608061.2020.1823300
- Sloman, K. N., Vollmer, T. R., Cotnoir, N. M., Borrero, C. S. W., Borrero, J. C., Samaha, A. L., & St. Peter, C. C. (2005). Descriptive analyses of caregiver reprimands. Journal of Applied Behavior Analysis, 38(3), 373–383. 10.1901/jaba.2005.118-04
- St. Peter, C. C., Brand, D., Jones, S. H., Wolgemuth, J. R., & Lipien, L. (2023). On a persisting curious double standard in behavior analysis: Behavioral scholars’ perspectives on procedural fidelity. Journal of Applied Behavior Analysis, 56(2), 336–351. 10.1002/jaba.974
- Stocco, C. S., & Thompson, R. H. (2015). Contingency analysis of caregiver behavior: Implications for parent training and future directions. Journal of Applied Behavior Analysis, 48(2), 417–435. 10.1002/jaba.206
- Vollmer, T. R., Sloman, K. N., & St. Peter Pipkin, C. (2008). Practical implications of data reliability and treatment integrity monitoring. Behavior Analysis in Practice, 1, 4–11. 10.1007/BF03391722
- Wilder, D. A., Austin, J., & Casella, S. (2009). Applying behavior analysis in organizations: Organizational behavior management. Psychological Services, 6(3), 202. 10.1037/a0015393
- Wilder, D., & Cymbal, D. (2023). Pinpointing, measurement, procedural integrity, and maintenance in organizational behavior management. Journal of Organizational Behavior Management, 43(3), 221–245. 10.1080/01608061.2022.2108537
- Worsdell, A. S., Iwata, B. A., Hanley, G. P., Thompson, R. H., & Kahng, S. (2000). Effects of continuous and intermittent reinforcement for problem behavior during functional communication training. Journal of Applied Behavior Analysis, 33(2), 167–179. 10.1901/jaba.2000.33-167
- Worsdell, A. S., Iwata, B. A., Dozier, C. L., Johnson, A. D., Neidert, P. L., & Thomason, J. L. (2005). Analysis of response repetition as an error-correction strategy during sight-word reading. Journal of Applied Behavior Analysis, 38(4), 511–527. 10.1901/jaba.2005.115-04
