Abstract
We conducted a scoping review on the consecutive controlled case series (CCCS) methodology (Hagopian, 2020). The CCCS is an approach to studying functional relations across a series of consecutive cases that share common features. We identified and reviewed 76 studies that used CCCS methodology. Most of these (a) were retrospective CCCS studies that incorporated most of the CCCS elements that were identified by Hagopian (2020), (b) involved child participants with autism spectrum disorder or an intellectual disability, and (c) evaluated the assessment and treatment of challenging behavior within specialized clinical settings. The sample sizes ranged from 3 to 269 participants, with a median of 20 participants. We discuss current trends, gaps in the literature, and implications for statements of the generality of behavioral procedures.
Keywords: consecutive controlled case series, effectiveness, external validity, generality, large-scale analyses
Most behavior-analytic research is conducted using single-case experimental designs (SCEDs; Wolfe et al., 2019), which allow systematic identification of functional relations between independent and dependent variables (Kazdin, 2021). The generality of the procedures and findings of SCEDs is typically established through a collection of direct and systematic replications rather than in any individual study (Walker & Carr, 2021). However, generality may be compromised by biases present in the selection, description, or publication of single cases. One approach that builds upon SCED methodology while offsetting such biases is the consecutive controlled case series (CCCS; Hagopian, 2020). The CCCS is “a type of study in which a SCED is employed for each case in a series of consecutively encountered cases that undergo a common procedure or share a common characteristic” (p. 599).
Hagopian (2020) introduced five elements that distinguish CCCS methodology. Using an SCED with each case (Element 1) distinguishes consecutive controlled case series from consecutive case series. That is, consecutive case series apply a defined treatment across cases descriptively, often using AB designs or indirect measures (e.g., Mevers et al., 2018; Scheithauer et al., 2016; Thillainathan et al., 2024). Conversely, CCCS methodology is characterized by the systematic use of an SCED for each case, ensuring a higher degree of experimental control. Element 2 involves the inclusion of all consecutive cases that share a common characteristic or procedural experience. Whereas Elements 1 and 2 are definitional to CCCS methodology, Element 3 (i.e., selection criteria for procedures and participants are described), Element 4 (i.e., findings are analyzed across participants while preserving individual outcomes), and Element 5 (i.e., multiple cases are included and well characterized) strengthen its rigor and allow for more precise conclusions regarding generality.
The CCCS methodology can be applied prospectively by enrolling participants who meet certain criteria or retrospectively by compiling existing data sets and information according to inclusion and exclusion criteria. The CCCS methodology includes all consecutively encountered cases that meet the researchers’ criteria, irrespective of the outcome of the analysis (i.e., whether a given intervention was effective or whether a certain functional relation was found). This mitigates case-selection bias and file-drawer effects in which null findings are excluded (e.g., Sham & Smith, 2014; Tincani & Travers, 2019). For example, Greer et al. (2016) reviewed and analyzed 25 consecutive applications of functional communication training in their severe behavior clinic to better understand the efficacy of schedule-thinning procedures. By including data from all participants who experienced a similar set of communication training and schedule-thinning procedures, the authors could describe the use and outcomes of supplemental procedures when initial efficacy was low for seven of their 25 (28%) applications. For example, in these seven applications, the researchers described the use of tangibles, attention, timeout, blocking, or some combination of these variables to increase the efficacy of their procedures. The researchers noted that their CCCS “essentially eliminated the possibility that case selection biases affected the results (e.g., the possibility that schedule thinning was less likely to be implemented with more difficult cases)” (Greer et al., 2016, p. 119).
Including all cases, regardless of a demonstration of efficacy, allows one to identify the proportion of cases in which an intervention was efficacious (e.g., Greer et al., 2016) or the extent to which a behavioral phenomenon (e.g., treatment relapse) occurs in routine practice (e.g., Muething et al., 2020). Additionally, CCCS methodology facilitates the identification of functional relations across various cases exposed to the same procedure or that share some common characteristic (e.g., all cases engage in challenging behavior of the same functional class; Rooker et al., 2013). That is, CCCS methodology permits one to identify functional relations with generality across cases in the same study or across multiple CCCSs (e.g., Shahan & Greer, 2021). The CCCS is also useful in identifying the underlying variables that predict treatment success and, ultimately, the boundary conditions of a certain procedure (e.g., Hagopian et al., 2015; Weber, Fahmie, et al., 2024).
Researchers have applied CCCS methodologies in behavior-analytic research for many years. For example, Hagopian (2020) highlighted a study by Derby et al. (1992) on brief functional assessments as an early example of what could be characterized as a CCCS. Hagopian also noted that, until recently, the designs, data-analytic methods, and terminology used in describing potential CCCS studies varied greatly. Thus, it was difficult to readily identify and classify studies prior to the formal emergence of the CCCS terminology in 2013 (Hagopian et al., 2013; Kurtz et al., 2013; Rooker et al., 2013). Furthermore, CCCS methodology has gained popularity in contemporary research (e.g., Falligant, Chin, et al., 2022; Frank-Crawford et al., 2023; Haney et al., 2022; Laureano et al., 2023). However, to our knowledge, a literature review of CCCS studies has yet to be conducted. Accordingly, we conducted a scoping review to (a) identify trends in existing CCCS studies, (b) examine the extent to which researchers designed studies in accordance with guidelines and elements proposed by Hagopian (2020), and (c) identify remaining gaps in the research base.
METHOD
Search strategy and study identification
We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) extension for scoping reviews (PRISMA-ScR; Tricco et al., 2018). Munn et al. (2018) provided guidance on choosing between systematic and scoping reviews based on distinct purposes. Both provide a synthesis of the literature, but systematic reviews focus on precise questions and gather evidence to inform practice; scoping reviews explore broader topics, map existing literature, and identify research gaps. Given our areas of interest and purpose, a scoping review was most appropriate.
Figure 1 illustrates the PRISMA-ScR flowchart outlining the procedures used in our review. We applied lenient inclusion criteria to ensure that we captured any study that may have applied CCCS methodology. The inclusion criteria were (a) the study was published in a peer-reviewed journal, (b) the study was published in English, (c) the study was published in or after 2013, (d) the study was behavior analytic, and (e) the study aligned with the definition of a CCCS as outlined by Hagopian (2020), which included the use of an SCED and the inclusion of consecutively encountered cases. However, because the defining features of a CCCS may not always be clear during abstract screening, any article about which the researchers were unsure was moved forward to full-text screening. We excluded articles published prior to 2013 because studies published before then that may have relied on similar methodologies varied widely in their terminology, designs, and data analyses (see Hagopian, 2020, p. 599).
FIGURE 1.

PRISMA flowchart. CCCS = consecutive controlled case series. JABA = Journal of Applied Behavior Analysis.
Our search was conducted in November 2024 using multiple search strategies. In Step 1, we searched PsycINFO, PubMed, and SCOPUS with the search string (“consecutive case” or “consecutive case series” or “large scale analysis” or “epidemiological study” or “consecutive encounter*” or “consecutive application” or “consecutive controlled case series”). We restricted the search to the following journals to focus the results on behavior-analytic research: Behavior Analysis in Practice, Behavior and Social Issues, Behavior Modification, Behavioral Disorders, Behavioral Interventions, Child & Family Behavior Therapy, Developmental Neurorehabilitation, Education and Treatment of Children, International Journal of Developmental Disabilities, Journal of Applied Behavior Analysis, Journal of Autism and Developmental Disorders, Journal of Behavioral Education, Journal of Intellectual Disability Research, Journal of Organizational Behavior Management, Journal of Positive Behavior Interventions, Journal of the Experimental Analysis of Behavior, Perspectives on Behavior Science, Research in Developmental Disabilities, and The Analysis of Verbal Behavior. This yielded 295 results, 77 of which were duplicates. We retained 218 studies for abstract screening. A second independent rater duplicated this search procedure, and interrater agreement was 93.65%. For any disagreements that occurred during abstract screening, the article in question was moved forward to full-text review.
In Step 2, we conducted a targeted search using the search feature on the website for the Journal of Applied Behavior Analysis with the input parameter, “consecutive controlled case series,” which yielded 933 results. We also conducted a descendant search to identify all articles that cited Hagopian (2020). This yielded 112 results. All articles identified in Step 2 were retained for abstract screening, aside from 121 duplicates. As a result, 1,142 articles (218 from Step 1 and 924 from Step 2) were retained for abstract screening. After the abstract screening, 845 articles were removed for various reasons including (a) not meeting Element 1 (containing an SCED) or including fewer than three participants (n = 746), (b) not being behavior analytic (n = 51), or (c) being nonexperimental (n = 48). Studies lacking an experimental design, for example, included those whose sole purpose was to examine preference assessments or stimulus avoidance assessments.
Following the abstract review, 297 articles were moved to full-text review. We excluded 221 articles for various reasons including (a) not meeting Elements 1 or 2 (n = 178), (b) being nonexperimental (n = 29), (c) being published prior to 2013¹ (n = 8), and (d) not being behavior analytic (n = 6). In total, 76 articles were included in our analysis.
Article coding and data extraction
We developed a coding system to extract data on variables pertinent to the study methods and results, including (a) the type of CCCS (i.e., retrospective, prospective, randomized CCCS, or combination studies), (b) application type (e.g., general efficacy, epidemiology and phenomenology, or comparative), (c) presence of Elements 1–5 as described by Hagopian (2020), (d) participant characteristics (e.g., gender, race, diagnoses), (e) response topographies, (f) effectiveness measures (i.e., intervention agents, social validity, generalization, and maintenance), and (g) assessment and treatment outcomes, where relevant. Interrater agreement for screening and coding criteria was evaluated for 27.63% (n = 21) of the studies reviewed; studies for review were selected using a random number generator (i.e., random.org). An agreement was scored if the first and fourth authors had an exact agreement for each descriptive variable. Across the 21 studies, the mean interrater agreement was 91.92% (range: 83.93%–100%). The first and fourth authors reviewed all included studies until 100% agreement was reached.
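As an illustration, the per-study agreement computation described above (exact agreement on each descriptive variable, averaged across the double-coded studies) could be sketched as follows. This is a hypothetical sketch, not the authors' code; the variable names and codes are invented for demonstration.

```python
# Hypothetical sketch of the interrater-agreement computation: exact
# agreement on each coded variable within a study, averaged across the
# randomly selected double-coded studies.

def study_agreement(rater1: dict, rater2: dict) -> float:
    """Percentage of descriptive variables on which two raters exactly agree."""
    matches = sum(rater1[var] == rater2[var] for var in rater1)
    return 100 * matches / len(rater1)

def mean_agreement(coded_pairs) -> float:
    """Mean per-study agreement across a sample of double-coded studies."""
    scores = [study_agreement(r1, r2) for r1, r2 in coded_pairs]
    return sum(scores) / len(scores)

# Two hypothetical double-coded studies (variable -> code).
coded_pairs = [
    ({"cccs_type": "retrospective", "setting": "outpatient"},
     {"cccs_type": "retrospective", "setting": "outpatient"}),  # 100% agreement
    ({"cccs_type": "prospective", "setting": "inpatient"},
     {"cccs_type": "prospective", "setting": "home"}),          # 50% agreement
]
print(mean_agreement(coded_pairs))  # 75.0
```

Under this scheme, the reported mean of 91.92% would be the average of 21 such per-study percentages.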
CCCS general study characteristics
We coded specific variables pertinent to CCCS methodology, including whether studies were retrospective, prospective, randomized, or a combination of retrospective and prospective formats. We scored a study as a retrospective CCCS if data were gathered throughout the course of service delivery but compiled and reported later. We scored a study as a prospective CCCS if participants were enrolled, based on predefined criteria, to undergo a specific procedure or because they shared a characteristic. We scored a study as a randomized CCCS, a variant of a prospective CCCS, if SCEDs were combined with a group design such that participants were prospectively enrolled to experience different clinical procedures to compare their efficacy. We scored a study as a combination CCCS if it included retrospective and prospective data. We also coded the type of CCCS application: general efficacy if the study evaluated outcomes and their broad applicability, epidemiology and phenomenology if the purpose of the study was to report how common a particular clinical problem was, or comparative if the study compared two procedures or treatments. We also coded the number of participants in each study and whether data were collected across a single setting or multiple settings.
Finally, we coded several indicators of effectiveness, including the type of intervention agents as well as the presence of generalization, maintenance, and social validity measures. We categorized intervention agents based on their specific roles within different settings and their involvement in implementing protocols. Clinical staff encompassed therapists and direct-care personnel. Students referred to undergraduate or graduate trainees. Caregivers were defined as the primary caretakers of participants. Supervisors included Board Certified Behavior Analysts or doctoral-level professionals. School staff encompassed paraprofessionals, teaching assistants, and teachers. Finally, nursing staff included registered nurses. We coded generalization, maintenance, and social validity measures when the authors described any subset of their measures as such.
CCCS elements and participant characteristics
We also coded the extent to which researchers designed studies in accordance with the five elements outlined by Hagopian (2020). With respect to Element 1, Hagopian emphasized that “it is important to document that the experimental analysis performed with each case is as methodologically rigorous as an analysis reported in a study with an n-of-1” (p. 603). Thus, we coded three variables pertinent to Element 1, including that the independent variable was “clearly defined and controlled” (p. 603), that the dependent variable was “operationally defined and data [were] collected with good reliability” (p. 603), and that an experimental design was used in each case.2 We assigned a “yes” for the independent variable if the study included a procedure section that described the variables controlled during the study; we assigned a “yes” for the dependent variable if the variables were operationally defined and if interobserver or interrater agreement was assessed. We assigned a “yes” for experimental design (e.g., multielement, reversal, multiple baseline) if an SCED was employed for each case. We scored studies as meeting Element 1 if they satisfied all three of these variables. Last, because “SCEDs are a type of response-dependent adaptive experimental design in which demonstration of experimental control is, in part, tied to the participant’s response” (p. 603), we also coded how the researchers illustrated the design or demonstrated experimental control by using graphs, tables, structured or other validated criteria, or a combination of these methods. Data sets from six studies (i.e., Briggs et al., 2018; Falligant, Hagopian, et al., 2022; Hagopian et al., 2018; Laureano & Falligant, 2023; Laureano et al., 2024; Shahan & Greer, 2021) were previously published in studies that were already included in our review, so those six studies were excluded from our coding of Element 1 to prevent double counting.
We scored studies as meeting Element 2 if the authors indicated that all participants or cases were consecutively enrolled or encountered. Studies may have failed to meet Element 2 if authors reported selecting participants based on convenience or if reporting criteria for participant enrollment were unclear. We scored studies as meeting Element 3 if the authors specified inclusion and exclusion criteria for participants and procedures, when applicable. Studies may have failed to meet Element 3 if they omitted these criteria altogether. We scored studies as meeting Element 4 if they included illustrative examples of individual outcomes and overall outcomes that summarized the proportion of cases in graphical displays. Studies may have failed to meet Element 4 if they only presented individual outcomes or solely aggregated outcomes.
In line with the description of Element 5 by Hagopian (2020), recommendations from the Single-Case Reporting Guideline in Behavioral Interventions (Tate et al., 2016), and recommendations made by Jones et al. (2020), we analyzed 14 specific setting and participant characteristics. We scored studies as indicating the setting if they designated whether data were collected in the home, a school, an inpatient hospital, an outpatient clinic, or another setting. We coded home when the study was conducted in the child’s home or another primary residence, including residential facilities. We coded school if sessions were conducted in a public or private school or a vocational program. We coded inpatient hospital if sessions were conducted in a temporary hospital residence for the assessment and treatment of severe challenging behavior. We coded outpatient if sessions were conducted in an outpatient clinic, university-based clinic, intensive day program, or early intervention clinic. We coded unspecified for studies that did not report a specific setting.
Participant characteristics included demographic information, including setting, age, gender, race, ethnicity, and psychiatric or medical diagnoses. We also coded whether the authors noted the presence or absence of autism spectrum disorder and intellectual or developmental disability. We coded whether studies reported on additional participant characteristics, including communication skills, education, socioeconomic status, language, and test scores. We coded whether studies reported on participants’ communication skills, including details about their expressive language repertoires or communication modality. Additionally, we documented the presence of education information, specifically noting if researchers provided data on participant grade level. Socioeconomic status was coded based on the inclusion of income-related data for participants or their families. We also coded information on the participants’ primary language spoken at home and test scores from diagnostic or developmental assessments. We defined and coded a study as meeting Element 5 (i.e., “well characterized”) if it reported on at least eight (i.e., over half) of these 14 setting and participant variables (see Table 3).
TABLE 3.
Number of consecutive controlled case series (CCCS) studies reporting on variables that characterized the sample well.
| Element 5 features (n = 76) | Number of studies (%) |
|---|---|
| Setting location | 75 (98.68%) |
| Outpatient clinic | 53 (69.74%) |
| Inpatient setting | 22 (28.95%) |
| School | 9 (11.84%) |
| Home | 6 (7.89%) |
| Clinic, unspecified | 1 (1.32%) |
| Unspecified | 1 (1.32%) |
| Autism spectrum disorder | 64 (84.21%) |
| Age | 63 (82.89%) |
| Gender | 60 (78.95%) |
| Intellectual or developmental disability | 50 (65.79%) |
| Medical or genetic diagnoses | 38 (50%) |
| Psychiatric diagnoses | 37 (48.68%) |
| Safety measures | 26 (34.21%) |
| Communication skills | 22 (28.95%) |
| Race and ethnicity | 20 (26.32%) |
| Language | 3 (3.95%) |
| Socioeconomic status | 2 (2.63%) |
| Education | 1 (1.32%) |
| Test scores | 1 (1.32%) |
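The Element 5 scoring rule described above reduces to a simple threshold count over the 14 setting and participant variables. The sketch below is illustrative only; the shorthand names approximate, but are not identical to, the rows of Table 3.

```python
# Sketch of the Element 5 decision rule: a study is scored as "well
# characterized" if it reports at least 8 of the 14 setting and
# participant variables. Names are shorthand approximations.

ELEMENT_5_VARIABLES = frozenset({
    "setting", "age", "gender", "race_ethnicity", "language",
    "autism_spectrum_disorder", "intellectual_developmental_disability",
    "medical_genetic_diagnoses", "psychiatric_diagnoses", "safety_measures",
    "communication_skills", "education", "socioeconomic_status", "test_scores",
})

def meets_element_5(reported: set, threshold: int = 8) -> bool:
    """True if at least `threshold` of the 14 variables are reported."""
    return len(reported & ELEMENT_5_VARIABLES) >= threshold

# A hypothetical study reporting exactly eight variables meets Element 5.
reported = {"setting", "age", "gender", "autism_spectrum_disorder",
            "intellectual_developmental_disability", "psychiatric_diagnoses",
            "medical_genetic_diagnoses", "safety_measures"}
print(meets_element_5(reported))  # True
```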
Additional data extraction
Due to a high proportion of studies focusing on challenging behavior, we coded a few additional variables related to this topic including individual response topographies, several features of stimulus assessments and functional assessments, and treatment outcomes.
For all studies, we also recorded bibliometric information, including the study title, author names, and year of publication. Identifying who contributes to a research base is an important aspect of generality because findings gain generality to the extent that they are replicated across researchers and research groups (Branch & Pennypacker, 2013; Walker & Carr, 2021). Thus, we were interested in which researchers and institutions contributed to the CCCS literature.
There is also a general notion that high-quality articles will be cited often (Praus, 2019). Because the CCCS methodology may lead to studies with large samples, thorough efficacy or effectiveness analyses, and evaluations of generality, these studies may be more likely to be cited than non-CCCS studies. The bulk of CCCS studies included in this review were published in the Journal of Applied Behavior Analysis. Thus, in December 2024, the third author used Clarivate’s Web of Science (WoS) to extract citation data for all articles published in the Journal of Applied Behavior Analysis from 2013 (the year in which the first CCCS was coded) until the last full year of publication volumes (2023). Only years in which a CCCS was published in the Journal of Applied Behavior Analysis were included in the analysis (i.e., 2014 and 2019 were omitted). The WoS “Times Cited, All Databases” measure was used to determine the count of citations for each article. Due to the large number of non-CCCS studies, the third author computed the median citation count for non-CCCS studies per year along with the interquartile range. These measures were compared with the raw citation counts of the 41 CCCS studies published in the Journal of Applied Behavior Analysis in the same time frame.
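As a minimal sketch of the per-year comparison just described, the median and interquartile range of non-CCCS citation counts can be computed with the standard library and compared against a CCCS article's raw count. The citation counts below are invented for illustration, not data from this review.

```python
# Hypothetical sketch of the per-year citation comparison: compute the
# median and interquartile range (IQR) of non-CCCS citation counts,
# then check where a CCCS article's raw count falls.

import statistics

def median_iqr(counts):
    """Return (median, first quartile, third quartile) of citation counts."""
    q1, _, q3 = statistics.quantiles(counts, n=4, method="inclusive")
    return statistics.median(counts), q1, q3

non_cccs_2016 = [2, 5, 7, 9, 12, 15, 30]  # hypothetical non-CCCS counts
median, q1, q3 = median_iqr(non_cccs_2016)

cccs_count = 28  # hypothetical raw count for one CCCS article
print(median, q1, q3)   # 9 6.0 13.5
print(cccs_count > q3)  # True: above the non-CCCS third quartile
```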
RESULTS
Figure 1 illustrates the PRISMA-ScR flowchart. We identified 76 studies published between 2013 and 2024 for inclusion in this review. Seventy-four of the 76 studies reported sample sizes, which ranged from 3 to 269 participants, with a median of 20 and a mode of 4. The top panel of Figure 2 displays the cumulative number of CCCS publications beginning with Hagopian et al. (2013) and ending with Weber, Fahmie, et al. (2024). Three studies (i.e., Hagopian et al., 2013; Kurtz et al., 2013; Rooker et al., 2013) were published in 2013, contributing to the initial emergence of CCCS terminology, which was subsequently adopted by many researchers and refined by Hagopian (2020). Following the foundational discussion piece published by Hagopian, publication rates of CCCSs continued to increase. Specifically, 75% (n = 57) of all CCCS studies were published after Hagopian (denoted by the data label in Figure 2).
FIGURE 2.

Cumulative number of consecutive controlled case series (CCCS) publications. The top panel depicts the total number of all CCCS studies. The bottom panel depicts the studies by type (i.e., retrospective, prospective, and combination). Please note the differing y-axes across panels.
Table 1 displays the general study characteristics of the 76 analyzed CCCS studies. Most studies (56.58%; n = 43) incorporated a retrospective analysis involving a well-defined case review, with the remaining publications enrolling consecutive participants prospectively (39.47%; n = 30) or including a combination of retrospective and prospective CCCSs (3.95%; n = 3; see Frank-Crawford et al., 2023, for an example). The bottom panel of Figure 2 depicts the cumulative record of prospective and retrospective CCCS studies between 2013 and 2024. Retrospective CCCS studies have been more prevalent than prospective CCCS studies since the design was first described by Hagopian et al. (2013). Additionally, combination CCCS studies, which included both retrospective and prospective data, emerged in 2019. We did not identify any studies using a randomized CCCS.
TABLE 1.
Study characteristics across consecutive controlled case series (CCCS) research.
| Study characteristic (n = 76) | Number of studies (%) |
|---|---|
| Type of consecutive controlled case series (CCCS) | |
| Retrospective CCCS | 43 (56.58%) |
| Prospective CCCS | 30 (39.47%) |
| Combination CCCS | 3 (3.95%) |
| Randomized CCCS | - |
| Application type | |
| General efficacy of clinical procedures | 50 (65.79%) |
| Epidemiology and phenomenology of clinical problem | 26 (34.21%) |
| Comparing efficacy of two clinical procedures in randomized controlled trial | - |
| Setting type | |
| Single | 60 (78.95%) |
| Multiple | 16 (21.05%) |
| Effectiveness measures | |
| Social validity | 10 (13.16%) |
| Maintenance | 7 (9.21%) |
| Generalization | 6 (7.89%) |
| Intervention agents | |
| Unspecified | 40 (52.63%) |
| Clinical staff | 25 (32.89%) |
| Caregivers | 10 (13.16%) |
| Students | 9 (11.84%) |
| Supervisors | 6 (7.89%) |
| School staff | 1 (1.32%) |
| Nursing staff | 1 (1.32%) |
Across all included studies, 65.79% (n = 50) applied CCCS methodology to evaluate the general efficacy of a clinical procedure and 34.21% (n = 26) of studies applied CCCS methodology to study the epidemiology or phenomenology of a clinical problem. The majority of CCCS studies were conducted in single settings (78.95%; n = 60) as opposed to multiple settings (i.e., home and school or home and clinic; 21.05%; n = 16). Effectiveness measures were reported relatively infrequently in our review, with social validity documented in 13.16% (n = 10), maintenance in 9.21% (n = 7), and generalization in 7.89% (n = 6) of studies. Among those studies that reported on intervention agents, clinical staff (32.89%; n = 25) were most common, followed by caregivers (13.16%; n = 10), students (11.84%; n = 9), supervisors (7.89%; n = 6), school staff (1.32%; n = 1), and nurses (1.32%, n = 1).
Figure 3 depicts the percentage of studies that included each of the five elements described by Hagopian (2020). Table 2 includes the variables pertinent to Element 1. Among the 70 studies that included original data, 100% (n = 70) reported on the independent variables, whereas 95.71% (n = 67) provided relevant data on the dependent variables. The three studies that did not meet our criteria for dependent variables (Falligant, McNulty, Hausman, et al., 2020; Falligant, McNulty, Kranak, et al., 2020; Laureano & Falligant, 2023) were those that did not include a measure of interobserver agreement; notably, each of these applied structured criteria, such as the dual-criteria and conservative dual-criteria methods, to supplement visual analysis of functional analyses and treatment baselines. All studies included in our review employed an SCED, as this was an inclusion criterion for our scoping review. Studies were scored as meeting Element 1 if they met all three features, which was the case in 95.71% (n = 67) of studies. Additionally, we coded how authors illustrated the design or experimental control. Experimental control was most frequently demonstrated through graphical representation (42.86%; n = 30); followed by structured or other validated criteria (30%; n = 21); a combination of graphs, tables, or structured or other validated criteria (22.86%; n = 16); and tables alone (4.29%; n = 3).
FIGURE 3.

Percentage of studies reporting each element, as defined by Hagopian (2020).
aOur criterion for meeting Element 1 was reporting on all three features (i.e., independent variables, dependent variables, and single-case experimental design) listed in Table 2.
bOur criterion for meeting Element 5 was including at least 8 of the 14 variables listed in Table 3.
TABLE 2.
Number of consecutive controlled case series (CCCS) studies reporting on features of Element 1.
| Element 1 features (n = 70) | Number of studies (%) |
|---|---|
| Independent variables | 70 (100%) |
| Dependent variables | 67 (95.71%) |
| Single-case experimental design | 70 (100%) |
| Demonstration of experimental control | - |
| Graphs | 30 (42.86%) |
| Structured or validated set of criteria | 21 (30%) |
| Combination of graphs, tables, structured criteria | 16 (22.86%) |
| Tables | 3 (4.29%) |
Element 2 was met in 100% of the studies reviewed, as this was another inclusion criterion for our scoping review. The presence of Elements 3 through 5 varied, ranging between 26.32% (n = 20) and 86.84% (n = 66). Of the 76 studies included in our review, 65.79% (n = 50) met the criteria for Element 4. All studies in our review reported aggregated data across participants; however, some failed to meet the criteria for Element 4 (i.e., findings analyzed across participants while preserving individual outcomes) due to the lack of illustrative examples of individual outcomes (e.g., Rooker et al., 2018; Weber, Brown, et al., 2024). For example, Weber, Brown, et al. (2024) reported findings from their retrospective CCCS of 269 individuals using tables, but only tentative conclusions regarding individual functional analysis outcomes can be made. Interpretations of individual outcomes can be elucidated with illustrative examples.
In alignment with Element 5, the SCED literature emphasizes the importance of reporting setting and participant characteristics. Table 3 outlines setting and participant information, including age, gender, race and ethnicity, various diagnoses, communication skills, safety measures, education, socioeconomic status, spoken language, and test scores. Specific to settings, most CCCS studies were conducted in highly specialized environments, such as outpatient clinics (69.74%; n = 53) and inpatient settings (28.95%; n = 22). Most studies reported on diagnoses, age, and gender, whereas other characteristics (spoken language, socioeconomic status, education, and test scores) were rarely reported. We coded studies as meeting Element 5 if they included more than 50% (eight or more) of the 14 characteristics outlined in Table 3, and 26.32% (n = 20) met the criteria for Element 5. Of the 76 studies in our review, 21.05% (n = 16) included all five elements.
Table 4 displays demographic data for the participants in our sample. Of the 1,735 individuals for whom gender was reported, 73.26% (n = 1,271) were described as male, 26.69% (n = 463) as female, and 0.06% (n = 1) as transgender female. Of the 462 individuals for whom race was reported, 60.17% (n = 278) were described as White, 23.59% (n = 109) were described as Black, 9.74% (n = 45) were described as another race, and 6.49% (n = 30) were described as Asian. Of the 61 individuals for whom ethnicity data were provided, 42.62% (n = 26) were described as Hispanic, Latino, or Latinx. Of the 1,986 individuals for whom a diagnosis was reported, 66.92% (n = 1,329) were reported to have autism spectrum disorder. Furthermore, data on intellectual or developmental disability were reported for 88.97% (n = 1,767) of participants. The level of intellectual or developmental disability was undesignated for 25.53% (n = 507), mild to moderate for 17.82% (n = 354), and severe to profound for 17.67% (n = 351). Fewer participants were reported to present with another intellectual or developmental disability (2.52%; n = 50), and 25.43% (n = 505) of participants were reported to have no intellectual or developmental disability.
TABLE 4.
Participant demographic data.
| Participant demographic data (n = 2,519) | Number of participants (%) |
|---|---|
| Gender (n = 1,735a) | |
| Male | 1,271 (73.26%) |
| Female | 463 (26.69%) |
| Transgender female | 1 (0.06%) |
| Not reported | 784 |
| Race (n = 462a) | |
| White | 278 (60.17%) |
| Black | 109 (23.59%) |
| Other | 45 (9.74%) |
| Asian | 30 (6.49%) |
| Not reported | 2,057 |
| Ethnicity (n = 61a) | |
| Not Hispanic, Latino, or Latinx | 35 (57.38%) |
| Hispanic, Latino, or Latinx | 26 (42.62%) |
| Not reported | 2,482 |
| Diagnosis reported (n = 1,986a) | |
| Autism spectrum disorder | 1,329 (66.92%) |
| Intellectual or developmental disability (IDD) | 1,767 (88.97%) |
| Undesignated IDD | 507 (25.53%) |
| No IDD | 505 (25.43%) |
| Mild to moderate | 354 (17.82%) |
| Severe to profound | 351 (17.67%) |
| Other IDD | 50 (2.52%) |
a Depicts the total number of participants for whom each demographic variable was reported.
When looking more closely at study topics, 88.16% (n = 67) of the reviewed studies involved the assessment and treatment of challenging behavior.3 Studies within this topic area evaluated functional analysis methodologies and treatment approaches for severe challenging behavior (e.g., aggression, self-injurious behavior) and inappropriate mealtime behavior, as well as the prevalence of treatment relapse and variables related to its occurrence. For example, Kurtz et al. (2013) compared the outcomes of staff- and caregiver-conducted functional analyses, whereas Haney et al. (2023) found that differential negative reinforcement via meal termination effectively increased self-feeding and drinking. The remaining 11.84% (n = 9) of CCCS studies examined skill-acquisition programming, caregiver and teacher training, and data interpretation or analysis. For example, Shillingsburg et al. (2022) conducted staff training in a nonpublic school using an applied verbal behavior model, Kanazawa et al. (2024) compared the efficacy of two procedures on infants’ motor skills, and Guerrero et al. (2022) applied structured criteria to functional analyses of inappropriate mealtime behavior. Because challenging behavior emerged as the predominant focus of CCCS studies in our sample (67 of 76 studies), we analyzed additional variables related to this category.
Table 5 summarizes the response topographies, assessment formats, and treatment outcomes reported in CCCS studies focused on challenging behavior. Self-injurious behavior (68.66%; n = 46) was the most prevalent topography reported, followed by aggression (64.18%; n = 43) and disruptive behavior (62.69%; n = 42). These top three response topographies are consistent with those reported in the most recent functional analysis review by Melanson and Fahmie (2023). Other common topographies were negative or loud vocalizations (41.79%; n = 28), elopement (28.36%; n = 19), and inappropriate sexual behavior (19.40%; n = 13). Stimulus assessments were reported in 32.84% (n = 22) of challenging behavior studies (n = 67), including preference assessments (54.55%; n = 12), competing stimulus assessments (31.82%; n = 7), skills assessments (4.55%; n = 1), and demand assessments (4.55%; n = 1). Of the 67 CCCS studies focused on challenging behavior, 91.04% (n = 61) reported conducting an experimental functional analysis (i.e., Iwata et al., 1994). Indirect and descriptive assessments were less prevalent, appearing in 50.75% (n = 34) and 35.82% (n = 24) of studies, respectively. Furthermore, treatment outcome data were reported in 58.21% (n = 39) of studies, with a combination of behavior reduction and skill acquisition as the most prevalent treatment goal (46.15%; n = 18), followed by treatment challenges (30.77%; n = 12), behavior reduction only (17.95%; n = 7), and skill acquisition only (2.56%; n = 1).
TABLE 5.
Response topographies and assessment and treatment formats used in challenging behavior consecutive controlled case series (CCCS) studies.
| Challenging behavior variables (n = 67) | Number of studies (%) |
|---|---|
| Response topographies (n = 67a) | |
| Self-injury | 46 (68.66%) |
| Aggression | 43 (64.18%) |
| Disruptive behavior or property destruction | 42 (62.69%) |
| Negative or loud vocalizations | 28 (41.79%) |
| Elopement | 19 (28.36%) |
| Inappropriate sexual behavior | 13 (19.40%) |
| Pica | 12 (17.91%) |
| Dropping and flopping | 9 (13.43%) |
| Disrobing | 9 (13.43%) |
| Inappropriate mealtime behavior | 9 (13.43%) |
| Verbal aggression | 8 (11.94%) |
| Spitting | 7 (10.45%) |
| Stereotypy | 6 (8.96%) |
| Self-restraint | 5 (7.46%) |
| Emesis or regurgitation | 5 (7.46%) |
| Ritualistic behavior | 2 (2.99%) |
| Rectal digging and fecal smearing | 2 (2.99%) |
| Food stealing | 1 (1.49%) |
| Stimulus assessment format (n = 22a) | |
| Preference assessment | 12 (54.55%) |
| Competing stimulus assessment | 7 (31.82%) |
| Skills assessment | 1 (4.55%) |
| Demand assessment | 1 (4.55%) |
| Other | 1 (4.55%) |
| Functional behavioral assessment (n = 67a) | |
| Experimental | 61 (91.04%) |
| Indirect | 34 (50.75%) |
| Descriptive | 24 (35.82%) |
| Experimental format (n = 61) | |
| Isolated contingency analysis | 39 (63.93%) |
| Synthesized contingency analysis | 11 (18.03%) |
| Unspecified | 7 (11.48%) |
| Combination | 4 (6.56%) |
| Treatment outcomes (n = 39) | |
| Behavior reduction and skill acquisition | 18 (46.15%) |
| Treatment challenge (relapse, resurgence, renewal) | 12 (30.77%) |
| Behavior reduction only | 7 (17.95%) |
| Skill acquisition only | 1 (2.56%) |
a Total percentages may exceed 100% due to the possibility of multiple selections.
Regarding the bibliometric analyses, 173 different authors contributed to this research base. Ten individuals authored six or more CCCS studies. The authors who contributed most frequently to the CCCS literature were4 (1) John Falligant (total n = 13, first-author n = 6); (2) Louis Hagopian (total n = 13, first-author n = 5); (3) Joshua Jessel (total n = 12, first-author n = 4); (4) Griffin Rooker (total n = 10, first-author n = 2); (5) Brian Greer (total n = 9, first-author n = 3); (6) Nathan Call (total n = 7, first-author n = 2); (7) Wayne Fisher (total n = 7, first-author n = 0); (8) Gregory Hanley (total n = 10, first-author n = 2); (9) Michelle Frank-Crawford (total n = 6, first-author n = 2); and (10) Michael Kranak (total n = 6, first-author n = 2). The remaining list is available from the corresponding author upon request. Regarding first authors’ affiliations, there were four institutions that appeared as a first author’s affiliation on at least five articles: (1–2) Kennedy Krieger Institute and the Johns Hopkins University School of Medicine (n = 24), (3) Marcus Autism Center and Emory University School of Medicine (n = 10), and (4) University of Nebraska Medical Center’s Munroe-Meyer Institute (n = 8).
Finally, we were interested in whether CCCS studies tended to be cited more often than their non-CCCS counterparts. Figure 4 displays the citation analysis of the 41 CCCS studies published in the Journal of Applied Behavior Analysis from 2013 to 2023 relative to the non-CCCS studies published in the journal during those same years. More recent publication years have fewer citations in general across both CCCS and non-CCCS studies, which is unsurprising given that lengthy peer-review and publication processes can delay time to citation (Nane, 2015). However, 78.05% (n = 32) of CCCS studies had citation counts above their respective year’s non-CCCS medians, and 60.98% (n = 25) had citation counts that exceeded the interquartile ranges of their non-CCCS counterparts in the same year (see the error bars in Figure 4). Overall, these preliminary data for CCCS and non-CCCS studies published within the Journal of Applied Behavior Analysis suggest that CCCS studies may be cited more frequently than non-CCCS studies.
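The per-year benchmark comparison described above can be sketched in a few lines of Python. The function name and input structure are illustrative assumptions (citation counts are presumed to have been exported from Web of Science already); this is not the authors' analysis code.

```python
import statistics

def flag_above_benchmarks(cccs_citations, noncccs_citations_by_year):
    """For each CCCS study (year, citation count), compare its count with
    same-year benchmarks computed from non-CCCS citation counts: the
    median and the upper quartile (top of the interquartile range)."""
    flags = []
    for year, count in cccs_citations:
        counts = sorted(noncccs_citations_by_year[year])
        median = statistics.median(counts)
        # Third quartile: the value the error bars in Figure 4 extend to.
        upper_quartile = statistics.quantiles(counts, n=4)[2]
        flags.append({
            "year": year,
            "above_median": count > median,
            "above_iqr": count > upper_quartile,
        })
    return flags
```

Summing the `above_median` and `above_iqr` flags across studies yields the proportions reported in the text (e.g., 32 of 41 studies above the same-year median).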
FIGURE 4.

Citations across consecutive controlled case series (CCCS) and non-CCCS studies. This figure displays the citation count of each of the CCCS studies published in the Journal of Applied Behavior Analysis (black data points) relative to the median citation count of non-CCCS studies in the journal from 2013 to the most recent complete year of publications (i.e., 2023). The error bars depict the interquartile range. Note that years without CCCS studies (i.e., 2014, 2019) are not depicted. The data were gathered via the Clarivate Web of Science (Clarivate, 2022).
DISCUSSION
Our field has long recognized the need to demonstrate the generality of behavioral phenomena and outcomes. With this in mind, we conducted a scoping review of CCCS studies, the methodology of which may lend credence to the efficacy, effectiveness, and replicability of behavior-analytic procedures. Overall, the publication of CCCS studies increased substantially following Hagopian (2020), which indicates the scholarly influence the article has had in a short amount of time. Moreover, most CCCS studies were retrospective analyses related to the assessment and treatment of challenging behavior within clinical contexts. Examples included evaluations of the efficacy of procedures such as functional communication training (e.g., Greer et al., 2016; Rooker et al., 2013) and noncontingent reinforcement (e.g., Phillips et al., 2017), characterizations of relapse phenomena (e.g., Falligant, Chin, et al., 2022; Muething et al., 2020), and identification of variables mediating treatment success (e.g., Hagopian et al., 2015; Weber, Fahmie, et al., 2024).
Researchers in behavior analysis have faced criticism regarding the generalization of findings from single-case studies to broader contexts and populations. However, reasonable inferences can be made to broader populations when participants of SCED studies share characteristics with the broader population and when a line of direct and systematic replications characterize the conditions under which the findings generalize (Walker & Carr, 2021). The CCCS plays a unique role in extending the generality of procedures demonstrated to be effective through single-case research by ensuring clear selection, classification, and reporting of individual outcomes. Thus, the overall increase in CCCS studies shown in this review is encouraging.
Our findings highlight several opportunities for extending the application of CCCS methodology. First, during our review, we identified several consecutive case series studies. Although these were excluded from our analysis because they did not include demonstrations of experimental control, such studies may still contribute valuable insights about the generality of behavior change. For example, Pugliese et al. (2021) conducted a consecutive case series in a private school to evaluate staff use of prescribed personal protective equipment using the Performance Diagnostic Checklist-Safety. Pugliese et al. found that staff injuries decreased and use of prescribed personal protective equipment increased across each classroom. Although the researchers conducted the study in accordance with a multiple-baseline design across classrooms, the dynamic classroom environment led to staffing and scheduling changes during observations, which compromised experimental control. Studies like this contribute descriptive accounts of behavior change and therefore should be considered when more rigorous CCCS studies are not attainable.
Second, not every CCCS included all five elements of the methodology described by Hagopian (2020). Most studies in our review demonstrated experimental control (Element 1) using graphs or a combination of graphs, tables, and structured criteria. However, when researchers relied on visual inspection of graphical data alone to derive conclusions about experimental control at the individual level, the omission of all individual graphs from readers’ review (Element 4; the least commonly met element) represents an important barrier. Although including graphs for each participant in large-scale reviews may be impractical, researchers may consider using structured criteria or other validated methods to substantiate conclusions in a replicable manner and thereby avoid decrements in the rigor of future CCCSs. To further support transparency, researchers may retain individual data outside of the manuscript as openly accessible data (e.g., in Supporting Information or an online repository such as PsyArXiv) to circumvent journal restrictions on page length (see Tincani et al., 2024, for related discussion). Alternatively, authors may opt for more sophisticated graphing software (e.g., GraphPad Prism) to create graphs that concisely depict individual outcomes, such as beeswarm (see Greer et al., 2016, Figure 3, bottom panel) or spaghetti (see Mitteer et al., 2022, Figures 3 and 4) plots.
Relatedly, Element 5 (i.e., multiple cases are included and well characterized) was not clearly defined by Hagopian (2020), and our review suggested that the characterization of participants was variable across publications. For example, only 26.32% of studies reported on participant race, despite recent calls to include this important demographic variable (e.g., Jones et al., 2020). Furthermore, fewer studies reported socioeconomic status (2.63%) and spoken language (3.95%), which potentially limits our understanding of participant-related ecological variables that may affect treatment outcomes (e.g., the capacity for caregivers to implement treatment as prescribed reliably). Such details are important in understanding the generality of outcomes and in mitigating cultural barriers relevant to behavioral programming. Thus, we encourage authors of CCCS studies and other behavior-analytic studies to include as many participant characteristics as possible when reporting individual outcomes to facilitate comparisons and conclusions about differential efficacy (Singh et al., 2024). To facilitate this recommendation, we designed a checklist of elements and subelements that may be valuable to include as a supplement in future CCCS studies. Table 6 contains the checklist.
TABLE 6.
Consecutive controlled case series reporting guideline checklist (CCCS-RGC).
| Element 1: A single-case experimental design (SCED) is employed with each case | |||
| SCED(s) employed: | Multielement, Reversal, Multiple Baseline, Alternating Treatments, etc. | ||
| Independent Variable(s): | List all | ||
| Dependent Variable(s): | List all | ||
| Data Interpretation: | Graphs, tables, structured or other validated criteria, combination | ||
| Treatment failures reported? | Yes | N/A | No |
| Element 2: All consecutively encountered cases that underwent the procedure of interest or share a common characteristic are included when reporting outcomes | |||
| Number of consecutively encountered cases: | # | ||
| Number of responders: | # | ||
| Number of nonresponders: | # | ||
| Number incomplete or drop out: | # | ||
| Element 3: Criteria for selecting procedures and participants are described | |||
| Criteria for selecting procedures: | List criteria | ||
| Criteria for selecting participants: | List criteria | ||
| Assignment of participants to conditions: | List criteria | ||
| Type of CCCS? | Prospective | Retrospective | Combination |
| Randomization? | Yes | No | |
| Group design? | Yes | No | |
| Element 4: Findings are examined within and across participants in a manner that preserves the analysis of individual outcomes | |||
| Results portrayed at individual level? | Yes | No | |
| Results analyzed across participants? | Yes | No | |
| Element 5: Multiple cases are included and well characterized | |||
| Setting variables reported: | Home, inpatient hospital, outpatient clinic, residential, school, etc. | ||
| Implementer variables reported: | Training, credentials, relation to research team, relation to patient, etc. | ||
| Participant characteristics reported: | Age, gender, race, ethnicity, diagnosis, relevant skills and deficits, SES, educational level, functioning level, culture, etc. | ||
| Safety measures reported: | Environmental modifications, ethical protections, staffing protocols, protective equipment, restraint, termination criteria, etc. | ||
Note: CCCS = consecutive controlled case series; SCED = single-case experimental design; SES = socioeconomic status. The elements and subelements are based on the guidelines published by Hagopian (2020). The patient demographic variables are based on guidelines published by Jones et al. (2020) and Tate et al. (2016). The examples of safety measures are based on guidelines published by Frank-Crawford et al. (2024). The gray-shaded cells indicate compromised status of the study as a CCCS.
However, we do recognize there may be barriers to including seemingly relevant participant information. For example, institutional review boards may prohibit specific participant information (e.g., socioeconomic status) from being collected. Furthermore, when conducting retrospective CCCS studies, data of interest (e.g., spoken language) may not have been captured during routine clinical care and it may be difficult to retroactively obtain those data. When such barriers are encountered, we encourage researchers to report the barriers and their efforts to alleviate them.
Third, most CCCS studies were conducted by research teams that practice in highly specialized settings (e.g., intensive outpatient or inpatient settings). Consequently, many outcomes in CCCS studies were obtained from individuals receiving care in highly specialized settings from highly trained clinical staff. Less-specialized settings might not have the research infrastructure that more-specialized settings do, which can make conducting research difficult; others have described ways in which less-specialized settings can increase their research capacity (for related discourse, see Valentino, 2022; Valentino & Juanico, 2020). However, we emphasize that good clinical practice and research should not be seen as dichotomous. That is, it is possible (and should be encouraged) to embed research activities into the course of clinical care (e.g., Wacker, 2018). Additionally, future research should explore the practicality of interventions delivered by caregivers and, when possible, assess measures of social validity, maintenance, and generalization (Ghaemmaghami et al., 2021). These measures have been infrequently included in CCCS studies to date but are critical to our understanding of the relevance of outcomes outside of specialized settings, although we again recognize the difficulties involved with collecting these types of data (e.g., difficulty contacting families after discharge).
Fourth, we identified that nearly all the published CCCS studies focused on one of four areas: (a) challenging behavior, (b) skill acquisition, (c) caregiver or teacher training, or (d) data analysis and interpretation. These are areas in which behavior analysts frequently practice and have made substantial contributions (Heward et al., 2022). However, behavior analysts also practice and conduct research in other domains (e.g., special education, health and fitness, and drug addiction; see Heward et al., 2022). Therefore, we encourage researchers working outside the four areas listed above to consider CCCS methodology in their work.
In the areas that have amassed numerous CCCS studies (e.g., challenging behavior research), it seems possible to begin conducting in-depth comparisons between the outcomes of CCCS studies and reviews of published literature. For example, Weber, Brown, et al. (2024) conducted a large-scale CCCS on functional analysis outcomes to document the prevalence of differentiation, modifications to procedures, and outcomes. The researchers conducted comparisons of their results to past reviews of functional analysis research (e.g., Melanson & Fahmie, 2023) and past CCCS studies (e.g., Hagopian et al., 2013). In one comparison, the authors noted that the percentage of differentiated outcomes was higher in the published literature (Melanson & Fahmie, 2023) than in their initial functional analyses and the initial functional analyses of Hagopian et al. (2013). Furthermore, modification to the functional analysis was less likely to contribute to differentiation in their CCCS when no challenging behavior occurred in the initial analysis. Thus, the authors speculated that the comparatively high prevalence of differentiation present in published functional analyses may reflect the practice of omitting data sets in which no challenging behavior occurs. As CCCS studies continue accumulating, comparisons like these will be helpful in revealing potential sources of publication bias.
Fifth, few prospective CCCSs have been published. Thus, a straightforward recommendation is for researchers to consider using prospective CCCS methods in their analyses of various procedures. Exposing consecutively enrolled individuals to a common independent variable can be useful to organizations with a standard set of procedures and protocols by allowing them to conduct program evaluations (see Hagopian, 2020). Although the recommendation to use prospective CCCSs is straightforward on paper, it is less so in execution. For example, researchers or clinicians wishing to conduct a prospective CCCS must take time a priori to prepare and finalize study and data-collection procedures, which may include preregistration of the protocol. The research teams would also have to include additional components, such as quantitative metrics (e.g., the dual-criteria method, Falligant, Kranak, & Hagopian, 2022; the structured-criteria method, Hagopian et al., 1997; Roane et al., 2013), that they otherwise may not have incorporated (Hagopian, 2020). Furthermore, it is likely to take a fair amount of time (i.e., months to years) to implement the procedures with clients and participants and to complete their enrollment.5 From this perspective, the terminal goal and reinforcer of having a completed CCCS data set and, hopefully, a publication are substantially delayed. It is also possible that prospective CCCSs are currently underway; given the time it takes to complete prospective work, coupled with the peer-review process, it is understandable that many prospective CCCSs have yet to be published since Hagopian (2020).
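As an illustration of one such quantitative metric, the core of the dual-criteria method cited above can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the cited authors' code, and it omits the final step of comparing the count against a binomial significance criterion: project the baseline mean line and the baseline least-squares trend line into the treatment phase, then count the treatment-phase points that fall beyond both lines in the therapeutic direction.

```python
import numpy as np

def dual_criteria(baseline, treatment, expected="increase"):
    """Sketch of the dual-criteria counting step: return how many
    treatment-phase points exceed BOTH the baseline mean line and the
    projected baseline trend line, along with the number of points."""
    baseline = np.asarray(baseline, dtype=float)
    treatment = np.asarray(treatment, dtype=float)
    x_bl = np.arange(len(baseline))
    # Flat mean line and least-squares trend line fit to baseline.
    mean_level = baseline.mean()
    slope, intercept = np.polyfit(x_bl, baseline, 1)
    # Project the trend line across the treatment sessions.
    x_tx = np.arange(len(baseline), len(baseline) + len(treatment))
    trend_line = slope * x_tx + intercept
    if expected == "increase":
        beyond = (treatment > mean_level) & (treatment > trend_line)
    else:  # expected decrease (e.g., behavior reduction)
        beyond = (treatment < mean_level) & (treatment < trend_line)
    return int(beyond.sum()), len(treatment)
```

In practice, the returned count is evaluated against a binomial criterion to judge whether a systematic change occurred, and the conservative variant additionally shifts both lines by a fraction of the baseline standard deviation before counting.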
Hagopian (2020) also described the randomized prospective CCCS with two subvariants: the randomized-crossover and the randomized-parallel CCCS (see Figure 1 and Table 2 in Hagopian, 2020). Although no randomized CCCS studies were identified in our review, this methodology seems particularly well suited for researchers who wish to conduct large-scale, grant-funded studies (e.g., Fisher, 2023–2028; Greer, 2023–2028; Hagopian, 2013–2024). Given recent grant-funded research, it seems likely that randomized prospective CCCSs will be published in the near future. We encourage researchers to explore the use of randomized CCCS designs that combine SCEDs with group designs; such designs may provide more robust experimental control when comparing clinical procedures and mitigate biases in treatment selection (Hagopian, 2020). Finally, a valuable next contribution related to CCCSs would be a user-friendly tutorial on how to conduct CCCSs, especially the more complex variations (e.g., prospective with randomization).
Our results should be interpreted with caution due to several limitations. First, as is true of any review, relevant articles may have been published while this paper was under review. Second, we truncated our search to exclude articles published prior to 2013; it is possible that studies that could have been classified as CCCSs were published prior to the formal emergence of CCCS terminology. Third, our study is limited by the potential for disproportionate representation of studies from the Journal of Applied Behavior Analysis. Following our initial search of electronic databases, we suspected that not all relevant articles had been captured. To address this, we conducted a separate search of the Journal of Applied Behavior Analysis. Similar challenges were reported by Frank-Crawford et al. (2024) and Becraft et al. (2024), who also supplemented their electronic database searches with journal-specific searches and identified additional relevant articles. The reason for this discrepancy remains uncertain; however, it highlights the importance of employing multiple search strategies when planning and conducting literature reviews.
The CCCS methodology described by Hagopian (2020) provides researchers with a roadmap for conducting programmatic lines of research that can identify the bounds of generality for a variety of interventions and problems of societal importance. We applaud the authors who have contributed to the CCCS research base and encourage future researchers to follow the lead of Hagopian (2020) by bringing additional attention and precision to CCCS methodology.
Supplementary Material
SUPPORTING INFORMATION
Additional supporting information can be found online in the Supporting Information section at the end of this article.
ACKNOWLEDGMENTS
The authors thank Chloe Jones for her assistance with this project.
The contents of this manuscript are solely the responsibility of the authors and do not necessarily represent the official views of the NICHD.
Daniel R. Mitteer is now at the Marcus Autism Center and Emory University School of Medicine.
Funding information
Eunice Kennedy Shriver National Institute of Child Health and Human Development, Grant/Award Numbers: 1R21HD113881–01, 1R21HD13794–01A1
Footnotes
1. We removed articles published prior to 2013 after the search to ensure consistent search terminology and input parameters across both Steps 1 and 2. Applying a time-frame restriction while specifically searching the Journal of Applied Behavior Analysis altered the accuracy of the results. That is, the time-frame restriction prevented the return of 228 articles when only eight should have been excluded.
2. If the study included multiple phases (e.g., assessment and treatment) with different SCEDs, we treated all phases as a single unit in this analysis. That is, we only assigned “yes” if SCEDs across all phases met our definitions.
3. This includes CCCS studies in which the purpose was to apply or compare data-analytic techniques based on data obtained from individuals exhibiting challenging behavior (e.g., Falligant, McNulty, Hausman, et al., 2020).
4. When authors had the same number of total CCCS publications, ties were broken based on the number of first-author CCCS publications. When first-author CCCS publications were the same, ties were broken based on alphabetical order of the authors’ last names.
5. A full description of how to conduct a CCCS is beyond the scope of this article; readers are directed to Hagopian (2020).
CONFLICT OF INTEREST STATEMENT
The authors have no conflicts of interest to declare.
ETHICS APPROVAL
Human subjects were not involved in the preparation of this manuscript. Thus, ethics approval was not needed.
DATA AVAILABILITY STATEMENT
All data and findings described herein were obtained from previously published articles. The full codebook with outcomes is available in Supporting Information A through G. Certain data included herein are derived from Clarivate Web of Science (Clarivate, 2022).
REFERENCES
*References included in review and analysis.
- Becraft JL, Hardesty SL, Goldman KJ, Shawler LA, Edelstein ML, & Orchowitz P (2024). Caregiver involvement in applied behavior-analytic research: A scoping review and discussion. Journal of Applied Behavior Analysis, 57(1), 55–70. 10.1002/jaba.1035
- *Bottini S, Stremel JM, Scheithauer M, & Morton HE (2022). Extended alone and ignore assessments: A novel examination of factors that influence determination of an automatic function. Behavioral Interventions, 37(4), 941–956. 10.1002/bin.1877
- Branch MN, & Pennypacker HS (2013). Generality and generalization of research findings. APA handbook of behavior analysis, Vol. 1: Methods and principles (pp. 151–175). 10.1037/13937-007
- *Briggs AM, Fisher WW, Greer BD, & Kimball RT (2018). Prevalence of resurgence of destructive behavior when thinning reinforcement schedules during functional communication training. Journal of Applied Behavior Analysis, 51(3), 620–633. 10.1002/jaba.472
- *Call NA, Miller SJ, Mintz JC, Mevers JL, Scheithauer MC, Eshelman JE, & Beavers GA (2016). Use of a latency-based demand assessment to identify potential demands for functional analyses. Journal of Applied Behavior Analysis, 49(4), 900–914. 10.1002/jaba.341
- *Call NA, Simmons CA, Mevers JEL, & Alvarez JP (2015). Clinical outcomes of behavioral treatments for pica in children with developmental disabilities. Journal of Autism and Developmental Disorders, 45(7), 2105–2114. 10.1007/s10803-015-2375-z
- *Canniello F, Iovino L, Benincasa R, Gallucci M, Vita S, Hanley GP, & Jessel J (2023). Predicting and managing risk during functional analysis of problem behavior. Child & Family Behavior Therapy, 45(4), 264–282. 10.1080/07317107.2023.2188137
- *Cox AD, & Virues-Ortega J (2022). Long-term functional stability of problem behavior exposed to psychotropic medications. Journal of Applied Behavior Analysis, 55(1), 214–229. 10.1002/jaba.873
- *Cox AD, Virues-Ortega J, Julio F, & Martin TL (2017). Establishing motion control in children with autism and intellectual disability: Applications for anatomical and functional MRI. Journal of Applied Behavior Analysis, 50(1), 8–26. 10.1002/jaba.351
- Derby KM, Wacker DP, Sasso G, Steege M, Northup J, Cigrand K, & Asmus J (1992). Brief functional assessment techniques to evaluate aberrant behavior in an outpatient setting: A summary of 79 cases. Journal of Applied Behavior Analysis, 25(3), 713–721. 10.1901/jaba.1992.25-713
- *Edelstein ML, Sullivan A, & Becraft JL (2022). Feasibility and acceptability of a compressed caregiver training program to treat child behavior problems. Behavior Modification, 47(3), 752–776. 10.1177/01454455221137329
- *Engler CW, Ibañez VF, Peterson KM, & Andersen AS (2023). Further examination of behavior during extinction-based treatment of pediatric food refusal. Behavioral Interventions, 38(4), 1–28. 10.1002/bin.1974
- *Falligant JM, Chin MD, & Kurtz PF (2022). Renewal and resurgence of severe problem behavior in an intensive outpatient setting: Prevalence, magnitude, and implications for practice. Behavioral Interventions, 37(3), 909–924. 10.1002/bin.1878
- *Falligant JM, Hagopian LP, Kranak MP, & Kurtz PF (2022). Quantifying increases in problem behavior following downshifts in reinforcement: A retrospective analysis and replication. Journal of the Experimental Analysis of Behavior, 118(1), 148–155. 10.1002/jeab.769
- Falligant JM, Kranak MP, & Hagopian LP (2022). Further analysis of advanced quantitative methods and supplemental interpretive aids with single-case experimental designs. Perspectives on Behavior Science, 45(1), 77–99. 10.1007/s40614-021-00313-y
- *Falligant JM, Kranak MP, McNulty MK, Schmidt JD, Hausman NL, & Rooker GW (2021). Prevalence of renewal of problem behavior: Replication and extension to an inpatient setting. Journal of Applied Behavior Analysis, 54(1), 367–373. 10.1002/jaba.740
- *Falligant JM, Kranak MP, Piersma DE, Benson R, Schmidt JD, & Frank-Crawford MA (2024). Further evidence of renewal in automatically maintained behavior. Journal of Applied Behavior Analysis, 57(2), 490–501. 10.1002/jaba.1055
- Falligant JM, McNulty MK, Hausman NL, & Rooker GW (2020). Using dual-criteria methods to supplement visual inspection: Replication and extension. Journal of Applied Behavior Analysis, 53(3), 1789–1798. 10.1002/jaba.665
- *Falligant JM, McNulty MK, Kranak MP, Hausman NL, & Rooker GW (2020). Evaluating sources of baseline data using dual-criteria and conservative dual-criteria methods: A quantitative analysis. Journal of Applied Behavior Analysis, 53(4), 2330–2338. 10.1002/jaba.710
- *Fernandez N, Frank-Crawford MA, Hanlin C, Benson R, Falligant JM, & DeLeon IG (2024). Examining patterns suggestive of acquisition during functional analyses: A consecutive controlled series of 116 cases. Journal of Applied Behavior Analysis, 57(2), 426–443. 10.1002/jaba.1068
- *Fiani T, & Jessel J (2022). Practical functional assessment and behavioral treatment of challenging behavior for clinically based outpatient services: A consecutive case series evaluation. Education and Treatment of Children, 45(2), 211–230. 10.1007/s43494-022-00071-9
- Fisher WW (Principal Investigator). (2023–2028). Basic and applied research on extinction bursts when treating problem behavior (Project No. 1R01HD109266–01A1) [Grant]. Eunice Kennedy Shriver National Institute of Child Health & Human Development. https://reporter.nih.gov/search/Tb5HUpYIkUS8auJpDUAAWQ/project-details/10657028
- *Frank-Crawford MA, Hagopian LP, Schmidt JD, Kaur J, Hanlin C, & Piersma DE (2023). A replication and extension of the augmented competing stimulus assessment. Journal of Applied Behavior Analysis, 56(4), 869–883. 10.1002/jaba.1009
- *Frank-Crawford MA, Hallgren MM, McKenzie A, Gregory MK, Wright ME, & Wachtel LE (2021). Mask compliance training for individuals with intellectual and developmental disabilities. Behavior Analysis in Practice, 14(4), 883–892. 10.1007/s40617-021-00583-7
- Frank-Crawford MA, Piersma DE, Fernandez N, Tate SA, & Bustamante EA (2024). Protective procedures in functional analysis of self-injurious behavior: An updated scoping review. Journal of Applied Behavior Analysis, 57(4), 840–858. 10.1002/jaba.2906 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ghaemmaghami M, Hanley GP, & Jessel J (2021). Functional communication training: From efficacy to effectiveness. Journal of Applied Behavior Analysis, 54(1), 122–143. 10.1002/jaba.762 [DOI] [PubMed] [Google Scholar]
- Greer BD (Principal Investigator). (2023–2028). Motivational refinements for facilitating reinforcement schedule thinning (Project No. 1R01HD108617–01A1) [Grant]. Eunice Kennedy Shriver National Institute of Child Health & Human Development. https://reporter.nih.gov/search/CVXQsKeQ-kC6luMCNak5NA/project-details/10729562 [Google Scholar]
- *Greer BD, Fisher WW, Saini V, Owen TM, & Jones JK (2016). Functional communication training during reinforcement schedule thinning: An analysis of 25 applications. Journal of Applied Behavior Analysis, 49(1), 105–121. 10.1002/jaba.265 [DOI] [PMC free article] [PubMed] [Google Scholar]
- *Greer BD, Mitteer DR, Briggs AM, Fisher WW, & Sodawasser AJ (2020). Comparisons of standardized and interview-informed synthesized reinforcement contingencies relative to functional analysis. Journal of Applied Behavior Analysis, 53(1), 82–101. 10.1002/jaba.601 [DOI] [PMC free article] [PubMed] [Google Scholar]
- *Greer BD, Shahan TA, Fisher WW, Mitteer DR, & Fuhrman AM (2023). Further evaluation of treatment duration on the resurgence of destructive behavior. Journal of Applied Behavior Analysis, 56(1), 166–180. 10.1002/jaba.956 [DOI] [PMC free article] [PubMed] [Google Scholar]
- *Guerrero LA, Engler CW, Hansen BA, & Piazza CC (2022). On the validity of interpreting functional analyses of inappropriate mealtime behavior using structured criteria. Journal of Applied Behavior Analysis, 55(4), 1280–1293. 10.1002/jaba.945 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hagopian LP (Principal Investigator). (2013–2024). A clinical trial for treatment-resistant subtypes of self-injury (Project No. 5R01HD076653–10) [Grant]. Eunice Kennedy Shriver National Institute of Child Health & Human Development. https://reporter.nih.gov/search/lGa0WdZ5W0mKZ3VWyvA-2Q/project-details/10806121
- Hagopian LP (2020). The consecutive controlled case series: Design, data-analytics, and reporting methods supporting the study of generality. Journal of Applied Behavior Analysis, 53(2), 596–619. 10.1002/jaba.691
- *Hagopian LP, Falligant JM, Frank-Crawford MA, Yenokyan G, Piersma DE, & Kaur J (2023). Simplified methods for identifying subtypes of automatically maintained self-injury. Journal of Applied Behavior Analysis, 56(3), 575–592. 10.1002/jaba.1005
- Hagopian LP, Fisher WW, Thompson RH, Owen-DeSchryver J, Iwata BA, & Wacker DP (1997). Toward the development of structured criteria for interpretation of functional analysis data. Journal of Applied Behavior Analysis, 30(2), 313–326. 10.1901/jaba.1997.30-313
- *Hagopian LP, Frank-Crawford MA, Javed N, Fisher AB, Dillon CM, Zarcone JR, & Rooker GW (2020). Initial outcomes of an augmented competing stimulus assessment. Journal of Applied Behavior Analysis, 53(4), 2172–2185. 10.1002/jaba.725
- *Hagopian LP, Rooker GW, Jessel J, & DeLeon IG (2013). Initial functional analysis outcomes and modifications in pursuit of differentiation: A summary of 176 inpatient cases. Journal of Applied Behavior Analysis, 46(1), 88–100. 10.1002/jaba.25
- *Hagopian LP, Rooker GW, & Yenokyan G (2018). Identifying predictive behavioral markers: A demonstration using automatically reinforced self-injurious behavior. Journal of Applied Behavior Analysis, 51(3), 443–465. 10.1002/jaba.477
- *Hagopian LP, Rooker GW, & Zarcone JR (2015). Delineating subtypes of self-injurious behavior maintained by automatic reinforcement. Journal of Applied Behavior Analysis, 48(3), 523–543. 10.1002/jaba.236
- *Haney SD, Greer BD, Mitteer DR, & Randall KR (2022). Relapse during the treatment of pediatric feeding disorders. Journal of Applied Behavior Analysis, 55(3), 704–726. 10.1002/jaba.913
- *Haney SD, Ibañez VF, Kirkwood CA, & Piazza CC (2023). An evaluation of negative reinforcement to increase self-feeding and self-drinking for children with feeding disorders. Journal of Applied Behavior Analysis, 56(4), 757–776. 10.1002/jaba.1013
- *Henry JE, Kelley ME, LaRue RH, Kettering TL, Gadaire DM, & Sloman KN (2021). Integration of experimental functional analysis procedural advancements: Progressing from brief to extended experimental analyses. Journal of Applied Behavior Analysis, 54(3), 1045–1061. 10.1002/jaba.841
- Heward WL, Critchfield TS, Reed DD, Detrich R, & Kimball JW (2022). ABA from A to Z: Behavior science applied to 350 domains of socially significant behavior. Perspectives on Behavior Science, 45(2), 327–359. 10.1007/s40614-022-00336-z
- *Ibañez VF, Peters KP, & Vollmer TR (2021). A comparison of re-presentation and modified chin prompt to treat different topographies of liquid expulsion. Journal of Applied Behavior Analysis, 54(4), 1586–1607. 10.1002/jaba.872
- Iwata BA, Pace GM, Dorsey MF, Zarcone JR, Vollmer TR, Smith RG, Rodgers TA, Lerman DC, Shore BA, Mazaleski JL, Goh H, Cowdery GE, Kalsher MJ, McCosh KC, & Willis KD (1994). The functions of self-injurious behavior: An experimental-epidemiological analysis. Journal of Applied Behavior Analysis, 27(2), 215–240. 10.1901/jaba.1994.27-215
- *Izquierdo SM, Jessel J, Fiani T, & Jones EA (2024). Functional analysis of contextually inappropriate social behavior in children with Down syndrome. Behavior Modification, 48(3), 285–311. 10.1177/01454455231222912
- *Jessel J, Hanley GP, & Ghaemmaghami M (2016). Interview-informed synthesized contingency analyses: Thirty replications and reanalysis. Journal of Applied Behavior Analysis, 49(3), 576–595. 10.1002/jaba.316
- *Jessel J, Ingvarsson ET, Metras R, Kirk H, & Whipple R (2018). Achieving socially significant reductions in problem behavior following the interview-informed synthesized contingency analysis: A summary of 25 outpatient applications. Journal of Applied Behavior Analysis, 51(1), 130–157. 10.1002/jaba.436
- *Jessel J, Metras R, Hanley GP, Jessel C, & Ingvarsson ET (2020). Evaluating the boundaries of analytic efficiency and control: A consecutive controlled case series of 26 functional analyses. Journal of Applied Behavior Analysis, 53(1), 25–43. 10.1002/jaba.544
- *Jessel J, Metras R, Hanley GP, Jessel C, & Ingvarsson ET (2020). Does analysis brevity result in loss of control? A consecutive case series of 26 single-session interview-informed synthesized contingency analyses. Behavioral Interventions, 35(1), 145–155. 10.1002/bin.1695
- Johnston JM, Pennypacker HS, & Green G (2020). Strategies and tactics of behavioral research and practice (4th ed.). Routledge.
- Jones SH, St. Peter CC, & Ruckle MM (2020). Reporting of demographic variables in the Journal of Applied Behavior Analysis. Journal of Applied Behavior Analysis, 53(3), 1304–1315. 10.1002/jaba.722
- *Kanazawa R, Jessel J, Park M, Fienup D, & Dowdy A (2024). A comparison of parental attention and preferred items during tummy time: A consecutive controlled case series evaluation. Journal of Applied Behavior Analysis, 57(2), 341–357. 10.1002/jaba.1061
- Kazdin AE (2021). Single-case research designs: Methods for clinical and applied settings (3rd ed.). Oxford University Press.
- *Kranak MP, & Falligant JM (2021). Further investigation of resurgence following schedule thinning: Extension to an inpatient setting. Behavioral Interventions, 36(4), 1003–1012. 10.1002/bin.1831
- *Kranak MP, Falligant JM, & Hausman NL (2021). Application of automated nonparametric statistical analysis in clinical contexts. Journal of Applied Behavior Analysis, 54(2), 824–833. 10.1002/jaba.789
- *Kurtz PF, Chin MD, Robinson AN, O’Connor JT, & Hagopian LP (2015). Functional analysis and treatment of problem behavior exhibited by children with fragile X syndrome. Research in Developmental Disabilities, 43(1), 150–166. 10.1016/j.ridd.2015.06.010
- *Kurtz PF, Fodstad JC, Huete JM, & Hagopian LP (2013). Caregiver- and staff-conducted functional analysis outcomes: A summary of 52 cases. Journal of Applied Behavior Analysis, 46(4), 738–749. 10.1002/jaba.87
- *Lambert JM, Copeland BA, Paranczak JL, Macdonald MJ, Torelli JN, & Houchins-Juarez NJ (2022). Description and evaluation of a function-informed and mechanisms-based framework for treating challenging behavior. Journal of Applied Behavior Analysis, 55(4), 1193–1219. 10.1002/jaba.940
- *Laureano B, & Falligant JM (2023). Transition states in single-case experimental designs: A retrospective consecutive-controlled case series investigation. Behavior Modification, 47(1), 113–127. 10.1177/01454455221099648
- *Laureano B, Fernandez N, & Hagopian LP (2023). Efficacy of competing stimulus assessments: A summary of 35 consecutively encountered cases. Journal of Applied Behavior Analysis, 56(2), 428–441. 10.1002/jaba.979
- *Laureano B, Ringdahl J, & Falligant JM (2024). Examination of clinical variables affecting resurgence: A reanalysis of 46 applications. Journal of Applied Behavior Analysis, 57(3), 742–750. 10.1002/jaba.1091
- *McCabe LH, & Greer BD (2023). Evaluations of heart rate during functional analyses of destructive behavior. Journal of Applied Behavior Analysis, 56(4), 777–786. 10.1002/jaba.1019
- *McMahon MX, Hathaway KL, Hodges AK, Sharp WG, & Volkert VM (2023). A retrospective consecutive controlled case series of underspoon: A modified-bolus placement to address behavior that interferes with swallowing. Behavior Modification, 47(4), 870–904. 10.1177/01454455221129996
- Melanson IJ, & Fahmie TA (2023). Functional analysis of problem behavior: A 40-year review. Journal of Applied Behavior Analysis, 56(2), 262–281. 10.1002/jaba.983
- Mevers JL, Muething C, Call NA, Scheithauer M, & Hewett S (2018). A consecutive case series analysis of a behavioral intervention for enuresis in children with developmental disabilities. Developmental Neurorehabilitation, 21(5), 336–344. 10.1080/17518423.2018.1462269
- *Mitteer DR, Greer BD, Randall KR, & Haney SD (2022). On the scope and characteristics of relapse when treating severe destructive behavior. Journal of Applied Behavior Analysis, 55(3), 688–703. 10.1002/jaba.912
- *Muething C, Call N, Pavlov A, Ringdahl J, Gillespie S, Clark S, & Mevers JL (2020). Prevalence of renewal of problem behavior during context changes. Journal of Applied Behavior Analysis, 53(3), 1485–1493. 10.1002/jaba.672
- *Muething C, Call N, Ritchey CM, Pavlov A, Bernstein AM, & Podlesnik CA (2022). Prevalence of relapse of automatically maintained behavior resulting from context changes. Journal of Applied Behavior Analysis, 55(1), 138–153. 10.1002/jaba.887
- *Muething C, Pavlov A, Call N, Ringdahl J, & Gillespie S (2021). Prevalence of resurgence during thinning of multiple schedules of reinforcement following functional communication training. Journal of Applied Behavior Analysis, 54(2), 813–823. 10.1002/jaba.791
- *Muething C, Ritchey CM, Call NA, Hardee AM, Mauzy CR IV, Argueta T, McMahon MXH, & Podlesnik CA (2024). A retrospective analysis of the relation between resurgence and renewal of behavior targeted for reduction. Journal of Applied Behavior Analysis, 57(2), 455–462. 10.1002/jaba.1069
- *Muething CS, Call NA, Lomas Mevers J, Zangrillo AN, Clark SB, & Reavis AR (2017). Correspondence between the results of functional analyses and brief functional analyses. Developmental Neurorehabilitation, 20(8), 549–559. 10.1080/17518423.2017.1338776
- Munn Z, Peters MD, Stern C, Tufanaru C, McArthur A, & Aromataris E (2018). Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Medical Research Methodology, 18(1), 1–7. 10.1186/s12874-018-0611-x
- Nane T (2015). Time to first citation estimation in the presence of additional information. Proceedings of the 15th International Society of Scientometrics and Informetrics Conference, 249–260. https://www.issi-society.org/proceedings/issi_2015/0249.pdf
- *Owen TM, Fisher WW, Akers JS, Sullivan WE, Falcomata TS, Greer BD, Roane HS, & Zangrillo AN (2020). Treating destructive behavior reinforced by increased caregiver compliance with the participant’s mands. Journal of Applied Behavior Analysis, 53(3), 1494–1513. 10.1002/jaba.674
- *Pardo-Cebrian R, Virues-Ortega J, Calero-Elvira A, & Guerrero-Escagedo MC (2022). Toward an experimental analysis of verbal shaping in psychotherapy. Psychotherapy Research, 32(4), 497–510. 10.1080/10503307.2021.1955418
- *Phillips CL, Iannaccone JA, Rooker GW, & Hagopian LP (2017). Noncontingent reinforcement for the treatment of severe problem behavior: An analysis of 27 consecutive applications. Journal of Applied Behavior Analysis, 50(2), 357–376. 10.1002/jaba.376
- Praus P (2019). High-ranked citations percentage as an indicator of publications quality. Scientometrics, 120(1), 319–329. 10.1007/s11192-019-03128-6
- Pugliese SN, Wine B, Liesfeld JE, Morgan CA, Doan TTM, Vanderburg NM, & Newcomb ET (2021). An evaluation of feedback-based interventions on promoting use of personal protective equipment in a school. Journal of Organizational Behavior Management, 41(4), 332–345. 10.1080/01608061.2021.1920543
- *Rajaraman A, Hanley GP, Gover HC, Ruppel KW, & Landa RK (2022). On the reliability and treatment utility of the practical functional assessment process. Behavior Analysis in Practice, 15(3), 815–837. 10.1007/s40617-021-00665-6
- Roane HS, Fisher WW, Kelley ME, Mevers JL, & Bouxsein KJ (2013). Using modified visual-inspection criteria to interpret functional analysis outcomes. Journal of Applied Behavior Analysis, 46(1), 130–146. 10.1002/jaba.13
- *Rooker GW, Hausman NL, Fisher AB, Gregory MK, Lawell JL, & Hagopian LP (2018). Classification of injuries observed in functional classes of self-injurious behaviour. Journal of Intellectual Disability Research, 62(12), 1086–1096. 10.1111/jir.12535
- *Rooker GW, Jessel J, Kurtz PF, & Hagopian LP (2013). Functional communication training with and without alternative reinforcement and punishment: An analysis of 58 applications. Journal of Applied Behavior Analysis, 46(4), 708–722. 10.1002/jaba.76
- *Rubio EK, McMahon MX, & Volkert VM (2024). Evaluation of two physical guidance procedures in the treatment of pediatric feeding disorder. Journal of Applied Behavior Analysis, 57(2), 473–489. 10.1002/jaba.1062
- *Saini V, Andersen AS, Jessel J, & Vance H (2022). On the role of operant contingencies in the maintenance of inappropriate mealtime behavior: An epidemiological analysis. Journal of Applied Behavior Analysis, 55(2), 513–528. 10.1002/jaba.901
- Scheithauer M, Cariveau T, Call NA, Ormand H, & Clark S (2016). A consecutive case review of token systems used to reduce socially maintained challenging behavior in individuals with intellectual and developmental delays. International Journal of Developmental Disabilities, 62(3), 157–166. 10.1080/20473869.2016.1177925
- *Shahan TA, & Greer BD (2021). Destructive behavior increases as a function of reductions in alternative reinforcement during schedule thinning: A retrospective quantitative analysis. Journal of the Experimental Analysis of Behavior, 116(2), 243–248. 10.1002/jeab.708
- Sham E, & Smith T (2014). Publication bias in studies of an applied behavior-analytic intervention: An initial analysis. Journal of Applied Behavior Analysis, 47(3), 663–678. 10.1002/jaba.146
- *Shepley C, Shepley SB, Allday RA, Tyner-Wilson M, & Larrow D (2021). Evaluation of a brief family-centered service provision model for treating children’s severe behavior: A retrospective consecutive case series analysis. Behavior Analysis in Practice, 14(1), 86–96. 10.1007/s40617-020-00487-y
- *Shillingsburg MA, Frampton SE, Juban B, Weddle SA, & Silva MR (2022). Implementing an applied verbal behavior model in classrooms. Behavioral Interventions, 37(1), 56–78. 10.1002/bin.1807
- Singh L, Barokova M, Bazhydai M, Baumgartner HA, Franchin L, Kosie JE, Lew-Williams C, Okyere Omane P, Reinelt T, Schuwerk T, Sheskin M, Soderstrom M, Wu Y, & Frank MC (2024). Tools of the trade: A guide to sociodemographic reporting for researchers, reviewers, and editors. Journal of Cognition and Development, 1–20. 10.1080/15248372.2024.2431106
- *Sivaraman M, Virues-Ortega J, & Roeyers H (2021). Telehealth mask wearing training for children with autism during the COVID-19 pandemic. Journal of Applied Behavior Analysis, 54(1), 70–86. 10.1002/jaba.802
- *Slaton JD, Davis M, DePetris DA, Raftery KJ, Daniele S, & Caruso CM (2024). Long-term effectiveness and generality of practical functional assessment and skill-based treatment. Journal of Applied Behavior Analysis, 57(3), 635–656. 10.1002/jaba.1090
- *Slaton JD, Hanley GP, & Raftery KJ (2017). Interview-informed functional analyses: A comparison of synthesized and isolated components. Journal of Applied Behavior Analysis, 50(2), 252–277. 10.1002/jaba.384
- *Sloman KN, Torres-Viso M, Edelstein ML, & Schulman RK (2024). The role of task preference in the effectiveness of response interruption and redirection. Journal of Applied Behavior Analysis, 57(2), 444–454. 10.1002/jaba.1064
- *Strohmeier CW, Cengher M, Chin MD, & Falligant JM (2024). Application of a terminal schedule probe method to inform schedule thinning with multiple schedules. Journal of Applied Behavior Analysis, 57(3), 676–694. 10.1002/jaba.1081
- *Szikszai P, Preston A, & Saini V (2023). Ongoing visual inspection for interpreting functional analyses: A summary of 55 clinical applications. Behavior Analysis: Research and Practice, 23(1), 1–14. 10.1037/bar0000248
- Tate RL, Perdices M, Rosenkoetter U, Shadish W, Vohra S, Barlow DH, Horner R, Kazdin A, Kratochwill T, McDonald S, Sampson M, Shamseer L, Togher L, Albin R, Backman C, Douglas J, Evans JJ, Gast D, Manolov R, … Wilson B (2016). The Single-Case Reporting Guideline in BEhavioural Interventions (SCRIBE) 2016 Statement. Physical Therapy, 96(7), e1–e10. 10.2522/ptj.2016.96.7.e1
- *Taylor T, Blampied N, & Roglić N (2021). Controlled case series demonstrates how parents can be trained to treat pediatric feeding disorders at home. Acta Paediatrica, 110(1), 149–157. 10.1111/apa.15372
- Thillainathan T, Linder B, & Cox AD (2024). Program evaluation of a specialized treatment home for adults with severe challenging behavior. Behavioral Interventions, 39(4), Article e2059. 10.1002/bin.2059
- Tincani M, Gilroy SP, & Dowdy A (2024). Extensions of open science for applied behavior analysis: Preregistration for single-case experimental designs. Journal of Applied Behavior Analysis, 57(4), 808–820. 10.1002/jaba.2909
- Tincani M, & Travers J (2019). Replication research, publication bias, and applied behavior analysis. Perspectives on Behavior Science, 42(1), 59–75. 10.1007/s40614-019-00191-5
- Tricco AC, Lillie E, Zarin W, O’Brien KK, Colquhoun H, Levac D, Moher D, Peters MDJ, Horsley T, Weeks L, Hempel S, Akl EA, Chang C, McGowan J, Stewart L, Hartling L, Aldcroft A, Wilson MG, Garritty C, … Straus SE (2018). PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and explanation. Annals of Internal Medicine, 169(7), 467–473. 10.7326/M18-0850
- Valentino AL (2022). Applied behavior analysis research made easy: A handbook for practitioners conducting research post-certification. New Harbinger Publications.
- Valentino AL, & Juanico JF (2020). Overcoming barriers to applied research: A guide for practitioners. Behavior Analysis in Practice, 13(4), 894–904. 10.1007/s40617-020-00479-y
- *Virues-Ortega J, Clayton K, Pérez-Bustamante A, Gaerlan BFS, & Fahmie TA (2022). Functional analysis patterns of automatic reinforcement: A review and component analysis of treatment effects. Journal of Applied Behavior Analysis, 55(2), 481–512. 10.1002/jaba.900
- Wacker DP (2018). The mentoring program in the Department of Pediatrics, University of Iowa. Behavior Analysis in Practice, 11(3), 189–193. 10.1007/s40617-018-0221-4
- Walker SG, & Carr JE (2021). Generality of findings from single-case designs: It’s not all about the “N.” Behavior Analysis in Practice, 14(4), 991–995. 10.1007/s40617-020-00547-3
- *Warner CA, Hanley GP, Landa RK, Ruppel KW, Rajaraman A, Ghaemmaghami M, Slaton JD, & Gover HC (2020). Toward accurate inferences of response class membership. Journal of Applied Behavior Analysis, 53(1), 331–354. 10.1002/jaba.598
- *Weber JK, Brown KR, Retzlaff BJ, Hurd AM, Anderson HJ, & Smallwood K (2024). Retrospective consecutive controlled case series of outcomes for functional analyses of severe destructive behavior. Journal of Applied Behavior Analysis, 57(3), 695–708. 10.1002/jaba.1077
- *Weber J, Fahmie T, Walker S, Lambert J, Copeland B, Freetly T, & Zangrillo A (2024). Exploring factors that influence the efficacy of functional communication training. Journal of Applied Behavior Analysis, 57(3), 709–724. 10.1002/jaba.1078
- Wolfe K, Barton EE, & Meadan H (2019). Systematic protocols for the visual analysis of single-case research data. Behavior Analysis in Practice, 12(2), 491–502. 10.1007/s40617-019-00336-7
Associated Data
Data Availability Statement
All data and findings described herein were obtained from previously published articles. The full codebook with outcomes is available in Supporting Information A through G. Certain data included herein are derived from Clarivate Web of Science (Clarivate 2022).
