Abstract
We compared the functions of problem behavior identified by (a) a functional analysis (FA), (b) an interview-informed synthesized contingency analysis (IISCA) informed by the results of an open-ended interview and a structured observation, and (c) a standardized-synthesized contingency analysis (SSCA) in which we synthesized three common functions of problem behavior, across 12 individuals in a controlled consecutive case series. The FA was sufficient to identify the variables maintaining problem behavior for 11 of the 12 participants, replicating the findings of Fisher, Greer, Romani, Zangrillo, and Owen (2016). Error type (i.e., false positives, false negatives) and error prevalence were similar across functions identified by the IISCA and the SSCA, calling into question the utility of the open-ended interview and the structured observation that informed the IISCA.
Keywords: false negative, false positive, functional analysis, independent effects, synthesized contingency analysis
Various procedural modifications have resulted in an improved and refined functional analysis (FA) methodology (for a detailed discussion, see Beavers, Iwata, & Lerman, 2013; Hanley, Iwata, & McCord, 2003; as well as the Special Issue on FA methodology published in the Journal of Applied Behavior Analysis, 2013, Volume 46, Issue 1). Hanley, Jin, Vanselow, and Hanratty (2014) proposed one such modification in which the results of an open-ended interview with caregivers (Hanley, 2012) and a 15- to 30-min clinical observation informed a single test and control condition. In these two conditions, all identified establishing operations and reinforcement contingencies were either present (i.e., test condition) or absent (i.e., control condition). The interview-informed synthesized contingency analysis (IISCA) described by Hanley et al. (2014) has been replicated across numerous studies (Ghaemmaghami, Hanley, Jin, & Vanselow, 2016; Ghaemmaghami, Hanley, & Jessel, 2016; Hanley et al., 2014; Jessel, Ingvarsson, Metras, Kirk, & Whipple, 2018; Jessel, Hanley, & Ghaemmaghami, 2016; Jessel, Metras, Hanley, Jessel, & Ingvarsson, 2019; Santiago, Hanley, Moore, & Jin, 2016; Slaton, Hanley, & Raftery, 2017; Strand & Eldevik, 2017), often producing clear differentiation in rates of problem behavior between the test and control conditions (e.g., Jessel et al., 2016). More recent replications (e.g., Slaton et al., 2017) have relied solely on the results of the open-ended interview when designing the test and control conditions of the IISCA, forgoing the clinical observation. Despite this difference across studies, interventions based on the IISCA have been shown to be efficacious (e.g., Santiago et al., 2016).
One potential benefit of synthesizing putative establishing operations and reinforcement contingencies is that the assessment may better reflect co-occurring stimulus changes in the individual’s environment (Slaton et al., 2017). In addition, the IISCA is an efficient model for determining whether problem behavior is socially reinforced (Jessel et al., 2016). Isolating the contingencies reinforcing problem behavior, however, is likely to provide more precise information when designing function-based interventions. Likewise, decades of research have produced numerous refinements to FA procedures (e.g., experimental design, analysis format) that behavior analysts can readily incorporate if needed. These and other considerations have prompted researchers to compare IISCA and FA outcomes.
Fisher, Greer, Romani, Zangrillo, and Owen (2016) replicated the IISCA approach and compared its results to those of an FA for five participants. Both the IISCA and the FA produced differentiation for four of the five participants. However, the FA detected only some of the reinforcement contingencies synthesized in the IISCA for each of the four cases, suggesting that the additional contingencies included in the IISCA were irrelevant. Of the 11 contingencies synthesized in the IISCA, the FA confirmed only five as functions of problem behavior. Said another way, six of the 11 contingencies synthesized in the IISCA (54.5%) appeared to be functionally irrelevant according to the FA results (i.e., false positives). Across participants, the open-ended interview and the structured observation contributed similarly to these discrepant outcomes, with the open-ended interview identifying more functionally irrelevant contingencies (64.2%) than functionally relevant contingencies (35.7%) and the structured observation identifying the same percentage of functionally irrelevant contingencies (50%) as functionally relevant contingencies (50%). Missing functionally relevant contingencies (i.e., false negatives) was less likely across the two assessments but did occur for one participant’s structured observation. Thus, combining results of the open-ended interview with those of a structured observation may identify irrelevant contingencies or miss relevant contingencies reinforcing problem behavior. Either outcome could be problematic when determining behavioral function.
One interesting finding of Fisher et al. (2016) was that, for four of the five participants, the IISCA synthesized escape and access to attention and tangibles (i.e., three common social functions of problem behavior; Beavers et al., 2013; Hanley et al., 2003). Had the researchers simply tested a synthesized contingency consisting of escape, attention, and tangibles instead of conducting the open-ended interview and the structured observation, the synthesized contingency would have closely resembled that of the IISCA for 80% of the participants. Such a standardized-synthesized contingency analysis (SSCA) would have included only one more functionally irrelevant contingency than the IISCA across all participants. This finding, although limited to a small number of participants, suggested that the combined information from the open-ended interview and the structured observation was unnecessary to design the test and control conditions of the IISCA.
Because the open-ended interview and structured observation used by Fisher et al. (2016) to inform the IISCA did not identify all of the functionally relevant contingencies and often identified functionally irrelevant contingencies, we questioned whether the results of an SSCA, uninformed by either of the two sources, would approximate those of an IISCA. Therefore, we compared the results of (a) an FA, (b) an IISCA that was informed by the results of an open-ended interview and a structured observation, and (c) an SSCA in which we synthesized three common functions of problem behavior across 12 consecutive individuals referred for the assessment and treatment of problem behavior. We did this to address questions about the necessity of synthesized contingency analysis for determining the function of problem behavior and to determine the utility of the open-ended interview and structured observation used to inform the IISCA.
Method
Participants, Setting, and Materials
Twelve consecutive children referred to a university-based clinic for the assessment and treatment of severe problem behavior participated. Each child attended the clinic 5 days per week for 3 to 6 hours per day. Table 1 displays participant demographic information and behaviors targeted. This study reports the data from all participants enrolled, satisfying the requirements of a controlled consecutive case series.
Table 1.
Demographic Information and Target Behaviors
| Case | Age | Sex | ASD Diagnosis | Level of Intellectual Disability | Language Ability | Communication Mode | Target Problem Behavior |
|---|---|---|---|---|---|---|---|
| 1 | 9 | M | Yes | Mild intellectual disability | 5 | Vocal | SIB, aggression, disruption |
| 2 | 13 | M | Yes | Mild-to-moderate intellectual disability | 3 | Vocal | SIB, aggression, disruption |
| 3 | 13 | F | Yes | Mild intellectual disability | 5 | Vocal | SIB, aggression, disruption |
| 4 | 7 | M | Yes | Unspecified intellectual disability | 2 | PCS, ACD | SIB, aggression |
| 5 | 17 | M | Yes | Mild intellectual disability | 4 | Vocal | SIB, aggression, disruption |
| 6 | 4 | M | Yes | Mild intellectual disability | 3 | Vocal | SIB, aggression, disruption |
| 7 | 6 | M | No | Unspecified intellectual disability | 3 | Vocal | SIB, aggression, disruption |
| 8 | 9 | M | Yes | Unspecified intellectual disability | 2 | Vocal, PCS, ACD | SIB, aggression, disruption |
| 9 | 6 | M | No | Intellectual development within normal limits | 5 | Vocal | Aggression, disruption |
| 10 | 3 | M | Yes | Borderline intellectual disability | 2 | Vocal | SIB, aggression, disruption |
| 11 | 5 | M | Yes | Unspecified intellectual disability | 1 | None | SIB, aggression, disruption* |
| 12 | 6 | M | Yes | Unspecified intellectual disability | 4 | Vocal | SIB, aggression, disruption* |
Note: ASD = autism spectrum disorder. For language abilities, 1 = no independent communication; 2 = single words; 3 = two-word phrases; 4 = short sentences; 5 = full fluency. PCS = picture communication system. ACD = assistive communication device. SIB = self-injurious behavior.
*For Cases 11 and 12, we observed maintenance of disruptions during the consecutive alone/ignore sessions and did not reinforce this topography within the functional analysis’ multielement comparison or the interview-informed synthesized contingency analysis.
All sessions occurred in padded clinic therapy rooms (approximately 3 m by 3 m) equipped with one-way observation windows, using the safety precautions described by Betz and Fisher (2011). Most rooms contained a table and two chairs (sometimes removed to increase therapist or child safety) along with session materials relevant to each condition. For the FA and SSCA, the stimuli were identified from a brief, 8-item interview (available upon request) commonly used in our program. For the IISCA and the structured observation that informed the IISCA, we used the results of the open-ended interview described by Hanley (2012) to identify stimuli for the relevant test and control conditions. However, if the results of the open-ended interview failed to identify suitable materials (see below), we instead relied on the results of the 8-item interview that informed the FA and SSCA. These procedures resulted in our inclusion of highly similar demands and tangibles across analyses for a given participant (see Table 2). Topographies of attention (not included in Table 2) also tended to be highly similar. This high degree of overlap in condition stimuli helped ensure that assessment outcomes were not affected by idiosyncratic stimuli available in only a subset of the analyses.
Table 2.
Materials in the Test Conditions of the FA, SSCA, and IISCA
| Case | Demands (FA and SSCA) | Demands (IISCA) | Tangibles (FA and SSCA) | Tangibles (IISCA) |
|---|---|---|---|---|
| 1 | Writing; math; spelling; imitation | Writing; math; spelling; imitation | Coloring | Art supplies; action figures |
| 2 | Math | N/A | Tablet; edibles | Tablet; board games; puzzles |
| 3 | Math; spelling; cleaning; folding laundry | Math; spelling; cleaning; folding laundry | Tablet | Tablet |
| 4 | Hygiene tasks | Hygiene tasks | Tablet; blanket | Tablet; blanket |
| 5 | Cleaning; folding | Cleaning; folding | Xbox; art supplies | Xbox; art supplies |
| 6 | Dressing; picking up toys | Dressing; picking up toys | Tablet; trains | Tablet; trains; cars |
| 7 | Fine-motor tasks; picking up toys | N/A | RC Car | RC Car |
| 8 | Writing; matching | Writing; matching | Vibrating teether | Vibrating teether |
| 9 | Folding clothing; taking out the trash; sight words | Folding clothing; sight words; GMI | Tablet | Tablet |
| 10 | Imitation; RI; stacking; cleaning | Imitation; RI; stacking; cleaning | Tablet | Tablet |
| 11 | Cleaning; dressing | Imitation; RI; listener responding | Tablet; Transformers; action figures | Tablet |
| 12 | RI; dressing | N/A | Drum | Drum; Tablet |
Note: GMI = gross-motor instruction; RI = receptive identification
Measurement and Design
Trained observers collected frequency data on problem behavior using laptop computers with DataPal (a beta version of BDataPro; Bullock, Fisher, & Hagopian, 2017), which converted response frequencies to responses per min. Self-injurious behavior (SIB) included head banging, self-hitting, and self-biting. Aggression included hitting, kicking, biting, and throwing objects at the therapist. Disruption included hitting, kicking, biting, or destroying objects, overturning furniture, and swiping materials off surfaces.
An independent second observer collected data simultaneously with the primary data collector on at least 20%, 22%, and 13% of FA, SSCA, and IISCA sessions, respectively, for each participant. Sessions were divided into 10-s intervals, and each interval was scored as an agreement if both observers recorded the same number of responses (i.e., exact agreement within the interval). The number of agreement intervals was summed, divided by the total number of intervals within the session, and converted to a percentage. We then calculated the mean interobserver-agreement coefficient for each participant’s FA, SSCA, and IISCA. Mean coefficients were 99% (range, 98% to 100%) across FAs, 99% (range, 95% to 100%) across SSCAs, and 99% (range, 93% to 100%) across IISCAs. Across all participants, no coefficient fell below 73% for any topography of problem behavior during any session.
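The exact-agreement calculation described above can be sketched as follows. This is a minimal illustration only; the function name, data layout, and example counts are ours, not materials from the study.

```python
# Illustrative sketch (not the authors' code): exact-agreement interobserver
# agreement (IOA) for one session, assuming each observer's record is a list
# of response counts per 10-s interval.

def exact_agreement_ioa(primary, secondary):
    """Percentage of 10-s intervals in which both observers recorded the
    exact same number of responses."""
    assert len(primary) == len(secondary), "records must cover the same intervals"
    agreements = sum(1 for p, s in zip(primary, secondary) if p == s)
    return 100.0 * agreements / len(primary)

# Hypothetical 60-s session (six 10-s intervals) with one disagreement.
primary = [0, 2, 1, 0, 0, 3]
secondary = [0, 2, 1, 0, 1, 3]
print(round(exact_agreement_ioa(primary, secondary), 1))  # 83.3
```

Note that exact agreement is a conservative index: an interval with counts of 2 and 3 scores as a disagreement even though the observers nearly matched.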
Across the FA, SSCA, and IISCA, we used a multielement design to identify the function of each participant’s problem behavior. The experimenter randomized and counterbalanced the order of the three analyses across participants to control for potential sequence effects. Board Certified Behavior Analysts (hereafter described as behavior analysts) conducted all interviews and constructed the IISCA conditions. Trained behavior therapists (ranging from the bachelor to postdoctoral level) implemented all conditions under the supervision of a behavior analyst.
Assessment Informing FA and SSCA
The experimenter used a brief, 8-item interview to determine which stimuli to include in the FA and SSCA conditions, as well as whether to screen for automatic reinforcement (Querim et al., 2013). The behavior analyst conducted the interview with the caregiver prior to the FA and SSCA, asking the caregiver to report the child’s preferred toys and types of attention, the child’s least preferred academic tasks or chores, and a tangible that was not highly preferred to provide in the attention condition of the FA. Next, the behavior analyst used the results of these interviews to individualize the conditions of each participant’s FA (e.g., by delivering the reportedly preferred form of attention within the FA), while keeping intact the individual reinforcement contingencies evaluated across FAs. Caregivers typically completed the interview in under 10 min (M = 8 min; range, 4 to 12 min).
FA
Prior to the FA, the behavior analyst reviewed the results of the 8-item interview with the therapist to determine which stimuli to include in the relevant test and control conditions and whether to forgo the screening procedure for automatic reinforcement, given that a series of consecutive alone or ignore sessions risks extinguishing attention-maintained problem behavior. All FAs were based on procedures described by Iwata, Dorsey, Slifer, Bauman, and Richman (1982/1994). Additionally, we (a) conducted a series of consecutive alone or ignore sessions at the beginning of the FA to screen for automatic reinforcement (Querim et al., 2013), (b) equated the durations of the reinforcement intervals across the test conditions (Fisher, Piazza, & Chiang, 1996), and (c) conducted the conditions in a fixed order (Hammond, Iwata, Rooker, Fritz, & Bloom, 2013). The screen for automatic reinforcement was excluded when the caregiver indicated a high likelihood of an attention function of problem behavior during the 8-item interview (i.e., Cases 1, 2, 4, and 6). For all other cases, we excluded topographies of problem behavior that persisted during the screening procedure. Finally, we conducted at least three sequences of the fixed order of conditions described by Hammond et al. (2013) before ending the FA.
The experimenter modified the FAs on a case-by-case basis according to routine standards of care in our clinic, but such modifications were ultimately at the discretion of the acting case manager, who differed across cases and time points. Case managers also determined when to end each phase and assessment based on their own interpretation of the data; no formal criteria informed termination of any assessment. When case managers determined that conducting additional series of the multielement FA was unlikely to clarify behavioral function, we often used a pairwise design (Iwata, Duncan, Zarcone, Lerman, & Shore, 1994). This design modification occurred for Cases 2, 3, 4, 8, 9, 10, 11, and 12. For two of these participants (Cases 2 and 9), we used test-specific control procedures (e.g., noncontingent access to attention but not a tangible) during at least one pairwise comparison. Finally, we reinforced precursors to problem behavior (Smith & Churchill, 2002) for a portion of Case 2’s FA to ensure that this participant’s behavior repeatedly contacted the contingencies programmed across FA conditions.
Alone.
The therapist left the participant in the room alone without session materials.
Ignore.
The therapist remained present in the room with the participant but provided no session materials and no differential consequences following problem behavior.
Attention.
The therapist provided 1-min pre-session access to a topography of attention reported by caregivers as preferred and at least one tangible reported by caregivers as not highly preferred. The therapist then diverted their attention elsewhere (e.g., to a magazine). Following problem behavior, the therapist delivered a verbal reprimand (e.g., “Stop doing that.”) and then provided the same topography of attention delivered pre-session (e.g., “I can rub your back again.”) for 20 s.
Tangible.
The therapist provided 1 min of pre-session access to at least one tangible that caregivers reported was highly preferred. The therapist then restricted access to the item. Following problem behavior, the therapist provided 20-s access to the previously restricted item.
Escape.
The therapist used a three-step, progressive prompting procedure (verbal, model, and physical guidance) to guide compliance with demands that caregivers identified as nonpreferred. The therapist delivered praise following compliance with the verbal prompt (i.e., the demand) or model prompts. Following problem behavior, the therapist provided a 20-s break (i.e., removal of demands).
Toy play.
The therapist provided continuous access to the same items from the tangible condition and the same topographies of attention from the attention condition. The therapist did not present demands and provided no differential consequences following problem behavior.
Noncontingent attention.
The therapist provided continuous access to the same topographies of attention from the attention condition.
Noncontingent tangible.
The therapist provided continuous access to the same tangibles from the tangible condition.
SSCA
The SSCA employed the same sequence of IISCA conditions suggested by Jessel et al. (2016): control, test, control, test, test. Additional test and control conditions occurred in an alternating manner. As with the FA, no formal criteria informed assessment termination; however, case managers generally attempted to keep the SSCA efficient. Results of the screening procedure for automatically reinforced problem behavior did not inform the SSCA.
SSCA test.
The therapist delivered 1-min pre-session access to caregiver-reported, preferred forms of attention and tangibles derived from the 8-item interview (i.e., the same stimuli used in the attention and tangible conditions of the FA, respectively). After 1 min, the therapist ceased their delivery of attention, restricted access to the tangibles, and began presenting demands, identified as nonpreferred by the caregiver, using the three-step, progressive prompting procedure from the escape condition of the FA. The therapist delivered praise following compliance with verbal or model prompts. Following problem behavior, the therapist provided a 20-s break from demands and access to the previously restricted attention and tangibles.
Toy play.
The control condition for the SSCA was identical to the toy-play condition of the FA.
Assessments Informing IISCA
Open-ended interview.
Prior to the structured observation and IISCA, the behavior analyst administered an open-ended interview (Hanley, 2012) with the participant’s caregiver using the procedures described by Hanley et al. (2014). The interview contained questions related to background information (e.g., language abilities), topographies of problem behavior, and potential antecedents and consequences related to problem behavior. In addition to the programmed questions, the interviewers often asked follow-up questions to clarify caregiver responses and to identify relevant stimuli for use in the structured observation and in the test and control conditions of the IISCA. The interview lasted approximately 30 min (M = 27 min; range, 15 to 38 min).
Structured observation.
The behavior analyst reviewed the results of the open-ended interview with the therapist to identify materials for each condition tested in a single, structured observation. If the caregiver failed to identify specific forms of attention or specific tangibles as being preferred or if the caregiver failed to identify specific nonpreferred demands, we selected those stimuli based on the results of the 8-item interview that informed the FA and the SSCA. The therapist conducted the structured observation by modifying establishing operations and programming putative reinforcers every 4 min across one continuous 40-min session, similarly to the structured observation used by Fisher et al. (2016). The sequence of segments comprising the structured observation was (1) ignore, (2) attention, (3) noncontingent attention, (4) attention, (5) escape, (6) ignore, (7) escape, (8) tangible, (9) noncontingent tangible, and (10) tangible. The specific conditions included in the structured observation and the sequence of those conditions were identical for all participants with the exception that we individualized the contingencies to include child-specific forms of attention, tangibles, and demands. The contingencies embedded within each segment of the structured observation were identical to those of the FA. That is, the structured observation was an abbreviated FA that tested for common reinforcers for problem behavior (e.g., automatic reinforcement, access to attention, access to tangibles, and escape from demands) but for which we did not make modifications based on incoming data, unlike the FA (see above). The therapist later graphed the results as a cumulative record of problem behavior and, with assistance from the behavior analyst, identified behavioral function, which we characterized by responding that occurred reliably in a given test segment and did not occur reliably in the relevant control segment.
Thus, these procedures attempted to standardize the clinical observation described by other researchers (e.g., Hanley et al., 2014) by conducting a structured observation with all participants. We did this for replication purposes and to ensure that each participant experienced similar observation periods in which the possibility of uncovering specific, putative reinforcement contingencies was equally likely across participants. However, doing so was a departure from the individually tailored, interview-informed clinical observation used elsewhere (Joshua Jessel, personal correspondence).
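The fixed 40-min observation described above can be represented compactly as ten 4-min segments, with responding summarized as a cumulative record. The sketch below is ours for illustration; the segment sequence comes from the procedure, but the response counts are hypothetical and the study graphed real-time data rather than per-segment totals.

```python
# Illustrative sketch (not the authors' materials): the standardized
# structured-observation schedule and a cumulative record across segments.
import itertools

# Ten 4-min segments (40 min total), in the fixed order used for all participants.
SEGMENTS = [
    "ignore", "attention", "noncontingent attention", "attention",
    "escape", "ignore", "escape", "tangible",
    "noncontingent tangible", "tangible",
]

def cumulative_record(counts_per_segment):
    """Running total of problem behavior across segments, the quantity a
    cumulative record plots against time."""
    return list(itertools.accumulate(counts_per_segment))

# Hypothetical per-segment response counts for one participant.
counts = [0, 3, 0, 4, 2, 0, 3, 5, 0, 6]
for segment, total in zip(SEGMENTS, cumulative_record(counts)):
    print(f"{segment}: {total}")
```

In a cumulative record, a steep slope during a test segment (e.g., attention) paired with a flat slope during the matched control segment (e.g., noncontingent attention) is the pattern the behavior analyst would interpret as evidence of a function.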
IISCA
The results of the open-ended interview and the structured observation informed the development of the IISCA, as was done by Fisher et al. (2016). The plan was to synthesize all putative reinforcement contingencies identified by either source, arranging those contingencies contingent on problem behavior in the test condition of the IISCA and delivering the same stimuli response independently in the control condition. However, only the open-ended interview ultimately informed the IISCA, as the structured observation never identified a putative reinforcement contingency that the open-ended interview had not already identified.
Whenever applicable, we used caregiver-reported forms of attention, tangibles, and demands from the open-ended interview during the IISCA test and control conditions. As with the structured observation, if the caregiver failed to report child-specific stimuli for inclusion in the IISCA, we relied on the results of the 8-item interview that informed the FA and the SSCA.
The IISCA was based on procedures described by Hanley et al. (2014), using the sequence of test and control conditions described by Jessel et al. (2016). If additional test and control conditions were required, they were presented in an alternating manner. As with the FA and the SSCA, no formal criteria informed assessment termination; however, case managers generally attempted to keep the IISCA efficient. As recommended by Jessel et al., we used a series of alone or ignore sessions (Querim et al., 2013) when the open-ended interview or structured observation suggested that at least one topography of problem behavior was automatically maintained. We excluded those topographies of problem behavior that persisted during the screening procedure from the IISCA. No other modifications were made to the IISCA, and no other interviews or structured observations were used to inform any revisions to test and control conditions.
IISCA test.
The behavior analyst designed the IISCA-test condition for each participant based on the putative reinforcers of problem behavior identified by either the open-ended interview or the structured observation. Prior to session, the therapist delivered 1-min access to the putative reinforcers. After 1 min, the therapist restricted those putative reinforcers and, following problem behavior, the therapist delivered those same putative reinforcers for 20 s. For example, the therapist restricted access to attention and tangibles and then provided these stimuli following problem behavior for Case 7. When synthesizing escape, the therapist followed the same three-step, progressive prompting procedure from the escape condition of the FA.
IISCA control.
The behavior analyst designed the IISCA-control condition based on the open-ended interview and the structured observation. The therapist delivered the same putative reinforcers from the IISCA-test condition but did so continuously during the control condition. The therapist provided no differential consequences following problem behavior.
Results
Figure 1 depicts results for Cases 12, 5, 11, and 3, all of whom experienced the FA first. Following the FA, Cases 12 and 5 experienced the SSCA followed by the IISCA, whereas Cases 11 and 3 experienced the IISCA followed by the SSCA. We initially targeted SIB, aggression, and disruptions for Cases 12 and 11. However, both participants showed maintenance of disruptions during the consecutive ignore or alone phase of the FA, suggesting that this topography of problem behavior was automatically reinforced. In subsequent FA and IISCA test conditions, we removed the putative reinforcement contingencies for disruptions, as implied by Jessel et al. (2016), and omitted the corresponding disruption data from Figure 1. Because the SSCA was “standardized” in that it arranged a synthesized contingency for the three common social functions of problem behavior (i.e., escape to attention and tangibles) but did not address automatic functions of problem behavior, we continued to arrange the putative reinforcement contingency for disruptions in the SSCA; results of the consecutive ignore or alone phase did not inform the SSCA. The SSCA phases for both participants therefore include data on disruptions.
Figure 1.

Rates of problem behavior across functional analysis (FA), standard-synthesized contingency analysis (SSCA), and interview-informed synthesized contingency analysis (IISCA). Cases in this figure represent those for whom we conducted the FA first. Please note that the x and y axes differ across panels.
Results of the FA for Case 12 suggested that disruptions were automatically reinforced, as evidenced by continued responding in the screening procedure. The multielement FA suggested that access to tangibles reinforced SIB and aggression, and a subsequent pairwise comparison suggested that attention also reinforced these responses. Both the SSCA (which always tested for escape to attention and tangibles) and the IISCA (which, designed from the results of the open-ended interview and structured observation, tested for attention and tangibles) showed differentiated responding for Case 12; however, demands included in the SSCA, but not the IISCA, may have reduced the degree of differentiation in the SSCA for Case 12. Case 5 engaged in zero instances of problem behavior across the FA, SSCA, and IISCA (escape to tangibles). Therefore, we were unable to identify a function of problem behavior for Case 5. Functional analysis results for Case 11 suggested that automatic reinforcement maintained disruptions and that escape and access to tangibles reinforced SIB and aggression; however, responding was not always differentiated. The IISCA (escape to attention and tangibles) produced differentiated results for Case 11, whereas the SSCA produced high and undifferentiated response rates. Closer inspection of the specific topographies of problem behavior during the test and control conditions of Case 11’s SSCA suggested that automatically reinforced disruptions partially masked the social functions of the other topographies of problem behavior (cf. Asmus, Franzese, Conroy, & Dozier, 2003; Saini, Greer, & Fisher, 2015); however, highly variable rates of aggression across toy-play sessions of the SSCA yielded an inconclusive result for the combined rates of SIB and aggression. We categorized these SSCA results as undifferentiated.
Results of the multielement FA for Case 3 suggested that escape reinforced problem behavior; however, this function was not confirmed in a subsequent pairwise comparison. Problem behavior did not occur in the IISCA (escape to attention and tangibles) for Case 3, but the SSCA produced differentiated results despite arranging the same stimuli and contingencies as the IISCA. Variability in responding across test conditions that programmed escape from demands may have accounted for these discrepant outcomes.
Figure 2 depicts results for Cases 9, 6, 2, and 8, all of whom experienced the SSCA first. Following the SSCA, Cases 9 and 6 experienced the FA followed by the IISCA, whereas Cases 2 and 8 experienced the IISCA followed by the FA. The SSCA produced differentiated results for Case 9, and subsequent FA results suggested that access to tangibles reinforced problem behavior. The IISCA (escape to tangibles) for Case 9 similarly produced differentiated results. The SSCA for Case 6 produced differentiated results, and subsequent FA results suggested that escape and access to tangibles reinforced problem behavior. The IISCA (escape to attention and tangibles) also produced differentiated results for Case 6. The SSCA produced undifferentiated results for Case 2, and problem behavior did not occur in the IISCA (attention and tangibles). The multielement FA was similarly undifferentiated for the initial six series of test and control conditions, after which point therapists began arranging consequences for precursors to problem behavior. Therapists discontinued this practice after four additional series when Case 2 began engaging in higher rates of problem behavior. Subsequent pairwise analyses suggested that escape alone reinforced problem behavior. The SSCA and the IISCA (escape to attention and tangibles) for Case 8 both produced undifferentiated results, and subsequent FA results suggested that access to tangibles reinforced problem behavior.
Figure 2.

Rates of problem behavior across standardized-synthesized contingency analysis (SSCA), functional analysis (FA), and interview-informed synthesized contingency analysis (IISCA). Cases in this figure represent those for whom we conducted the SSCA first. Please note that the x and y axes differ across panels.
Figure 3 depicts results for Cases 1, 7, 10, and 4, all of whom experienced the IISCA first. Following the IISCA, Cases 1 and 7 experienced the FA followed by the SSCA, whereas Cases 10 and 4 experienced the SSCA followed by the FA. The IISCA (escape to attention and tangibles) produced differentiated results for Case 1. Subsequent FA results for Case 1 suggested that access to tangibles reinforced problem behavior. The SSCA for Case 1 produced similarly differentiated results. The IISCA (attention and tangibles) produced differentiated results for Case 7. Subsequent FA results for Case 7 suggested that access to attention reinforced problem behavior. The SSCA for Case 7 similarly produced differentiated results. Cases 10 and 4 produced similar results. Their IISCAs (escape to tangibles for Case 10; escape to attention and tangibles for Case 4) both produced differentiated results, as did the SSCAs. Subsequent FA results for Cases 10 and 4 suggested that escape and access to tangibles reinforced problem behavior.
Figure 3.

Rates of problem behavior across interview-informed synthesized contingency analysis (IISCA), functional analysis (FA), and standardized-synthesized contingency analysis (SSCA). Cases in this figure represent those for whom we conducted the IISCA first. Please note that the x and y axes differ across panels.
Table 3 summarizes the results of the FA, SSCA, and IISCA across participants. Results of the open-ended interview and structured observation, both of which were intended to inform the test and control conditions of the IISCA, also appear in Table 3. The FA identified single functions of problem behavior for 6 of the 12 participants (or 50%), two or more functions of problem behavior for 5 of the 12 participants (or 41.7%), and no function of problem behavior for 1 of the 12 participants (or 8.3%). Synthesizing the three common social functions of problem behavior in the SSCA resulted in fewer analyses with differentiated results. Whereas the FA produced differentiated results for 11 of the 12 participants (or 91.7%), the SSCA did so for only 8 participants (or 66.7%), despite 11 of the 12 SSCAs including at least one functionally relevant contingency identified by the FA, which we considered the standard for determining functionally relevant and irrelevant contingencies. Differentiation was no more likely during the IISCA than during the SSCA, despite 10 of the 12 IISCAs containing at least one functionally relevant contingency identified by the FA; the IISCA likewise produced differentiation for 8 of the 12 participants (or 66.7%). Neither the SSCA nor the IISCA was sufficient to produce problem behavior for Case 5, similar to the participant named Sylvia in Fisher et al. (2016). However, when problem behavior occurred, differentiation was more likely with the IISCA (8 of 9 participants or 88.9%) than with the SSCA (8 of 11 participants or 72.7%).
Table 3.
Analysis Outcomes
| Case | FA | SSCA | Open-Ended Interview | Structured Observation | IISCA |
|---|---|---|---|---|---|
| 1 | Tang | Diff. (Esc→Attn+Tang) | Esc, Attn, Tang | Esc, Attn, Tang | Diff. (Esc→Attn+Tang) |
| 2 | Esc | Undiff. (Esc→Attn+Tang), Esc | Attn, Tang, Esc | No PB, Esc | No PB (Attn+Tang), Esc |
| 3 | Esc* | Diff. (Esc→Attn+Tang) | Esc, Attn, Tang | No PB, Esc | No PB (Esc→Attn+Tang), Esc |
| 4 | Esc, Tang | Diff. (Esc→Attn+Tang) | Esc, Attn, Tang | No PB, Esc, Tang | Diff. (Esc→Attn+Tang) |
| 5 | No PB | No PB (Esc→Attn+Tang) | Esc, Tang | No PB | No PB (Esc→Tang) |
| 6 | Esc, Tang | Diff. (Esc→Attn+Tang) | Esc, Attn, Tang | Esc, Tang | Diff. (Esc→Attn+Tang) |
| 7 | Attn | Diff. (Esc→Attn+Tang) | Attn, Tang | Auto, Attn, Tang | Diff. (Attn+Tang) |
| 8 | Tang | Undiff. (Esc→Attn+Tang), Tang | Esc, Attn, Tang | Auto, Tang | Undiff. (Esc→Attn+Tang), Tang |
| 9 | Tang | Diff. (Esc→Attn+Tang) | Esc, Tang | Tang | Diff. (Esc→Tang) |
| 10 | Esc, Tang | Diff. (Esc→Attn+Tang) | Esc, Tang | Esc, Tang | Diff. (Esc→Tang) |
| 11 | Auto**, Esc, Tang | Undiff. (Esc→Attn+Tang), Auto, Esc, Tang | Auto, Esc, Attn, Tang | Auto, Esc, Tang | Auto**, Diff. (Esc→Attn+Tang) |
| 12 | Auto**, Attn, Tang | Diff. (Esc→Attn+Tang), Auto | Auto, Attn, Tang | Auto, Attn, Tang | Auto**, Diff. (Attn+Tang) |
Note: Functionally relevant contingencies are presented in regular typeface, false positives are bolded, and false negatives are italicized. Diff. = differentiated, Undiff. = Undifferentiated, PB = problem behavior, Auto = automatic reinforcement, Esc = escape, Attn = attention, Tang = tangible.
* Function not replicated during subsequent pairwise design.
** Automatic function indicated for a response topography during consecutive alone/ignore sessions and removed from subsequent analysis in the FA and IISCA.
The FA detected a social function of problem behavior for 11 of the 12 participants (or 91.7%). Because the SSCA always synthesized escape, attention, and tangibles, all SSCAs included these functionally relevant contingencies. Likewise, Case 2 was the only participant for whom the IISCA did not include the functionally relevant (social) contingency. When the FA detected an automatic function of problem behavior (2 of the 12 participants or 16.7%), the IISCA was the only other assessment that detected this function.
Both the SSCA and the IISCA contained functionally irrelevant contingencies according to the FA results. Across participants, the SSCA synthesized 36 individual contingencies, 20 of which (or 55.6%) were functionally irrelevant. The IISCA synthesized 30 individual contingencies, 15 of which (or 50.0%) were functionally irrelevant. Despite lacking information from an open-ended interview and structured observation, the SSCA contained only slightly more irrelevant contingencies (55.6%) than the IISCA (50.0%). Across the SSCA and the IISCA, only 2 of all 24 synthesized contingency analyses (or 8.3%) included exclusively relevant contingencies, both of which occurred with the IISCA (Cases 10 and 12).
Additional information in Table 3 identifies false-positive and false-negative results relative to the functionally relevant contingencies identified by the FA. Bolded text in Table 3 indicates false-positive functions, defined as detecting a functionally irrelevant contingency. We calculated the percentage of false-positive functions by dividing the number of false positives by the total number of functions (including automatic functions) identified by the FA plus the number of false positives identified by the respective assessment. Italicized text in Table 3 indicates false-negative functions, defined as failing to detect a functionally relevant contingency. We calculated the percentage of false-negative functions by dividing the number of false negatives by the total number of functions identified by the FA.
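The two percentage calculations just described can be expressed compactly as follows. This is a minimal sketch of the definitions in the text; the counts in the example are hypothetical and are not drawn from Table 3:

```python
def false_positive_pct(fa_functions, false_positives):
    """Percentage of false-positive functions: the number of false positives
    divided by the total number of FA-identified functions plus the number of
    false positives identified by the assessment being evaluated."""
    return 100 * false_positives / (fa_functions + false_positives)

def false_negative_pct(fa_functions, false_negatives):
    """Percentage of false-negative functions: the number of false negatives
    divided by the total number of FA-identified functions."""
    return 100 * false_negatives / fa_functions

# Hypothetical example: suppose an FA identifies 16 functions across
# participants, and a comparison assessment flags 8 irrelevant contingencies
# (false positives) and misses 4 relevant ones (false negatives).
print(round(false_positive_pct(16, 8), 1))   # 8 / (16 + 8), as a percentage
print(round(false_negative_pct(16, 4), 1))   # 4 / 16, as a percentage
```

Note that the false-positive denominator grows with the number of false positives the assessment emits, so an assessment that identifies many spurious functions cannot exceed 100% even in the worst case.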
All assessments identified false-positive functions, with the open-ended interview identifying the highest percentage of false-positive functions (46.9% of identified functions) and the structured observation identifying the lowest percentage of false-positive functions (21.7% of identified functions). The IISCA identified a relatively low percentage of false-positive functions (28% of identified functions), which may have been a result of the IISCA producing undifferentiated results or no problem behavior in four of the 12 applications (or 33.3%). The SSCA identified a comparably high percentage of false-positive functions (40% of identified functions).
Data on the percentage of false-negative functions were more disparate than those on false-positive functions. Relative to the SSCA, which failed to identify 33.3% of all functionally relevant contingencies identified by the FA, the IISCA failed to identify 16.7% of those same functions. The structured observation failed to identify a higher percentage of functionally relevant contingencies (22.2% of functions identified by the FA) than the open-ended interview (5.6% of functions identified by the FA). All assessments over-identified attention as a functionally relevant contingency reinforcing problem behavior, and each assessment did so to a greater degree than for any other function of problem behavior.
We also analyzed discrepancies between the open-ended interview and the structured observation. The open-ended interview identified escape, attention, and/or tangibles as reinforcers for eight cases (Cases 2, 3, 4, 5, 6, 8, 9, and 11) for whom the structured observation did not. Results of the open-ended interview matched those of the structured observation for only three cases (Cases 1, 10, and 12).
General Discussion
We compared the results of three behavior assessments designed to identify the function(s) of problem behavior in 12 consecutive individuals referred for the assessment and treatment of problem behavior: (a) an FA; (b) an IISCA that was informed by the results of an open-ended interview and a structured observation; and (c) an SSCA in which we synthesized three common functions of problem behavior. One interesting finding of this comparison was that the FA was slightly more likely to show differentiation between its test and control conditions than was either the IISCA or the SSCA. Overall, the FA identified a function of problem behavior for 11 of the 12 participants (91.7%). Similar differentiation within the IISCA and SSCA was less common, with both producing differentiation for eight of the 12 participants (or 66.7%). Jessel et al. (2016) reported differentiation in responding across the test and control conditions of the IISCA for 30 participants, and a close reading of that study suggests that the initial results of the IISCA were undifferentiated for eight of their 30 participants. This translates to a differentiation percentage of approximately 73.3%. Interestingly, this calculation is quite close to that which we found when combining the IISCA results of the five consecutive participants enrolled in Fisher et al. (2016) with those of the 12 consecutive participants enrolled in the present study. Differentiation across IISCAs was approximately 70.6% for the combined data sets.
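The differentiation percentages reported above follow directly from the counts stated in this paragraph. As a quick arithmetic check (the inferred count of 12 differentiated IISCAs across the combined data sets is an assumption implied by the reported 70.6%):

```python
# Jessel et al. (2016): 8 of 30 IISCAs were initially undifferentiated,
# so 22 of 30 were differentiated.
jessel_pct = 100 * (30 - 8) / 30
print(round(jessel_pct, 1))  # approximately 73.3

# Present study (8 of 12 IISCAs differentiated) combined with the 5
# participants from Fisher et al. (2016) gives 17 IISCAs in total; the
# reported combined rate of about 70.6% implies 12 differentiated IISCAs.
combined_pct = 100 * 12 / 17
print(round(combined_pct, 1))  # approximately 70.6
```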
One important consideration when interpreting the findings of the present study is that we modified the FA, using procedures described in the literature, to facilitate the identification of relevant contingencies reinforcing problem behavior. Those modifications included (a) screening for automatic reinforcement (Querim et al., 2013), (b) changing the experimental design of the FA (Iwata et al., 1994), and (c) reinforcing precursors to problem behavior (Smith & Churchill, 2002) for a portion of the FA for Case 2. We made no similar modifications to the IISCA or the SSCA, short of conducting the open-ended interview and the structured observation to inform the IISCA.
In Jessel et al. (2016), the authors reported modifying the IISCA test and control conditions for the eight participants with initial undifferentiated IISCA results. These modifications eventually resulted in differentiation for all eight participants, increasing the differentiation percentage to 100%. It remains unclear whether similar modifications to the IISCA in the present study would have yielded a higher differentiation percentage. However, neither Jessel et al. nor the available IISCA literature to date has described how such modifications to the IISCA might proceed. Any such improvement in the ability of the IISCA to produce differentiation would certainly impact its correspondence with other forms of behavioral assessment (e.g., FA). Precisely how correspondence changes following modification to the IISCA is a topic for future research.
A primary purpose of this study was to determine the utility of the open-ended interview described by Hanley (2012) and a structured observation by comparing the results of an informed IISCA with those of an uninformed SSCA. Recall that no structured observation suggested a function of problem behavior not already identified by the open-ended interview (i.e., information from the open-ended interview alone informed the IISCA for all 12 participants). In this regard, the utility of the structured observation in this study was minimal. The open-ended interview, although directly informing the test and control conditions of the IISCA, identified a high percentage of false-positive functions (46.9% of identified functions) when its results were compared to those of an FA. False-negative functions were much less common (5.6% of functions identified by the FA) for the open-ended interview than were false-positive functions. One conclusion that may be drawn from these results is that although conducting the open-ended interview makes it highly likely that functionally irrelevant contingencies will be included within the IISCA, the resulting IISCA is unlikely to omit a functionally relevant contingency.
Two additional questions that we hoped to address with this controlled consecutive case series concerned the necessity and sufficiency of synthesized contingency analysis for determining behavioral function. One rationale for synthesizing contingencies is that problem behavior may occur only under a unique set of conditions. In such a situation, analyzing individual contingencies within an FA likely would be insufficient to determine behavioral function. In contrast, a synthesized contingency analysis would be well-suited for evaluating such behavior. The synthesized contingency analysis did not appear to be necessary for any of the participants in the present study, replicating the findings of Fisher et al. (2016). Combining data from the present study with those from Fisher et al. (2016) indicates that 2 of the 17 total participants (Case 5 in the present study and the participant named Sylvia from Fisher et al., 2016) engaged in zero instances of problem behavior in the FA. Neither the IISCA (Case 5 and Sylvia) nor the SSCA (Case 5) was sufficient for determining the contingencies reinforcing either participant’s problem behavior. Although Fisher et al. did not conduct an SSCA with Sylvia, her IISCA entailed escape to attention and tangibles (i.e., the same synthesized contingency arranged in the SSCA). Thus, it is unlikely that a separate SSCA for Sylvia would have been sufficient to determine the function of her problem behavior.
Another tactic for addressing questions about the necessity and sufficiency of synthesized contingency analysis is through differential treatment analysis (cf. Iwata, Pace, Cowdery, & Miltenberger, 1994; Smith, Iwata, Vollmer, & Zarcone, 1993). Applied to data from functional behavior assessments, differential treatment analysis entails either conducting concurrent treatment evaluations based on the outcomes of each assessment or systematically implementing one or only a subset of the putative reinforcement contingencies within treatment to verify functional control of each variable identified in the assessment (e.g., Ghaemmaghami, Hanley, Jin et al., 2016). We did not conduct such an analysis in the present study, as doing so would have required either (a) three distinct treatment evaluations for each participant, all conducted within close temporal proximity or simultaneously (e.g., by using a multielement or alternating treatments design) or (b) multiple reversals of different combinations of intervention procedures, given the large number of functions identified by both the IISCA and the SSCA.
Ghaemmaghami, Hanley, Jin et al. (2016) recently used this tactic to affirm functional control of each contingency synthesized in a prior IISCA, showing that problem behavior for one participant did not decrease to clinically acceptable levels until the intervention accounted for each contingency synthesized in the IISCA. Methodological limitations (e.g., lack of experimental control in the treatment evaluation, missing FAs for escape and for access to tangibles) limit the extent to which one can reasonably deduce that each of the contingencies synthesized in the IISCA functionally controlled this participant’s problem behavior; however, compelling examples of the need for synthesis do exist. For example, the participant named Diego in Slaton et al. (2017) is perhaps the clearest example in the published literature of an individual for whom synthesized contingency analysis produced useful information regarding the variables that maintained problem behavior. Nevertheless, the IISCA for this individual examined only one of multiple possible combinations of synthesized contingencies; therefore, it remains unclear whether each of the three contingencies synthesized in the IISCA was necessary for this individual. The studies by Ghaemmaghami, Hanley, Jin et al. (2016) and Slaton et al. (2017) highlight the difficulty in applying differential treatment analysis to the results of a synthesized contingency analysis.
Despite the somewhat common occurrence of automatically reinforced problem behavior reported in the literature (Beavers et al., 2013; Hanley et al., 2003), the current investigation appears to be the first study to evaluate problem behavior maintained by automatic reinforcement using the IISCA. We evaluated problem behavior maintained by automatic reinforcement using the procedures Jessel et al. (2016) reported that they would have used had they encountered a case in which the results of the open-ended interview suggested maintenance of problem behavior by automatic reinforcement. That is, whenever either the open-ended interview or the structured observation suggested maintenance of problem behavior by automatic reinforcement, we conducted a series of consecutive ignore or alone sessions (Querim et al., 2013). This occurred for 4 (Cases 7, 8, 11, and 12) of the 12 participants (or 33.3%). Results of the screening procedure for Cases 7 and 8 suggested that automatic reinforcement did not maintain problem behavior, whereas the same screening procedure for Cases 11 and 12 suggested that automatic reinforcement maintained one of the originally targeted topographies of problem behavior. In none of these situations did we record the IISCA as identifying a false-positive or a false-negative function, as the screening procedure we used to identify automatic reinforcement for the IISCA was identical to that we used for the FA. We did not consider automatically reinforced problem behavior to be one of the contingencies synthesized in the IISCA.
Although we supplemented the IISCA with a screening procedure to detect automatically reinforced problem behavior, the IISCA and the SSCA produced equally high levels of inconclusive results across participants. Four of the 12 IISCAs (or 33.3%) and four of the 12 SSCAs (or 33.3%) produced either no problem behavior or undifferentiated responding. Considering the results of Fisher et al. (2016), whose procedures for conducting the IISCA were similar to those in the present study, the IISCA produced inconclusive results for 29.4% of all participants (or 5 of the 17 total participants). By contrast, this percentage for the FA in the present study was 8.3% (or 1 of the 12 participants), and when combined with the results of Fisher et al., this percentage becomes 11.8% (or 2 of the 17 total participants).
Several differences between the present study and others are noteworthy. First, FA test conditions in prior IISCA comparisons arranged noncontingent access to the stimuli evaluated in other test conditions. For example, the attention test condition for the participant named Gail in Hanley et al. (2014) arranged noncontingent access to the same tangibles arranged as reinforcers in the tangible test condition. The tangible test condition for this participant arranged noncontingent access to the same forms of attention arranged as reinforcers in the attention test condition. Providing dense schedules of noncontingent access to potentially reinforcing stimuli is likely to abolish motivation for the putative reinforcement contingency arranged in the test condition (cf. Rooker, Jessel, Kurtz, & Hagopian, 2013), potentially impeding the identification of a reinforcement contingency. FA test conditions in the present study did not arrange noncontingent access to similar types of social positive reinforcement in this manner, which may have increased the likelihood of identifying individual reinforcement contingencies in the FA.
Second, unlike Slaton et al. (2017), we modified the experimental design of the FA when the initial multielement FA produced equivocal results, as is commonly done when determining behavioral function via FA (Hagopian, Rooker, Jessel, & DeLeon, 2013). When we transitioned from a multielement FA to a pairwise FA, we often conducted test-specific control conditions. It should be noted that these modifications made the FAs in the present study more analogous to both the IISCA and the SSCA, as these features (i.e., single test and single control conditions, test-specific control conditions) already are common to both. However, these modifications resulted in additional test and control sessions for the FA that we did not replicate for the IISCA or the SSCA. It remains unclear how our findings would differ had we yoked the total number of test and control sessions across assessments.
Third, also unlike Slaton et al. (2017), we did not use data on potential precursors to problem behavior as the primary index from which to compare FA and IISCA results. Slaton et al. described reinforcing precursors to problem behavior as a typical component of the IISCA. However, they also referenced Hanley et al. (2014) as the developers of the IISCA, yet Hanley et al. did not include reinforcement of precursors to problem behavior in their procedures. As such, we followed the procedures described by Hanley et al. and did not analyze precursors.
Fourth, as noted above, our structured observation informing the IISCA was an abbreviated FA in which we used caregiver responses during the open-ended interview (or the 8-item interview, if necessary) to select participant-specific forms of attention, tangibles, and demands for inclusion; however, the structured observation evaluated the same contingencies of reinforcement for all participants. Our decision to test these specific functions of problem behavior and to do so across participants was based on the procedures described originally by Hanley et al. (2014), who stated, “The analyst…provided and removed toys, attention, and activities as well as instructions during the observation” (p. 20). However, more recent studies on the IISCA have provided additional information on the observation and its role in the IISCA process. For example, not all studies on the IISCA describe having conducted an observation to develop the test and control conditions of the IISCA (e.g., Santiago et al., 2016; Slaton et al., 2017). Of those studies that describe the observation process, Jessel et al. (2018) provided one account of this process by noting that the clinician “unsystematically arranged the contexts described as likely to evoke problem behavior” (p. 134). This quote highlights two potentially important differences when compared to our structured observation—an unsystematic evaluation of putative reinforcement contingencies and a focus on evaluating only those contingencies described by caregivers during the open-ended interview. Joshua Jessel (personal communication), a leading researcher of the IISCA approach, recently clarified that he evaluates within the observation only those contingencies (antecedents and consequences) reported by the caregiver and that he synthesizes only those contingencies within the observation that the caregiver reports to co-occur.
If the caregiver does not report that multiple, specific stimulus changes tend to follow problem behavior, the therapist evaluates only those isolated contingencies reported to occur. When the caregiver describes distinct contexts in which problem behavior occurs, therapists conduct separate evaluations and develop separate test and control conditions for the IISCA (e.g., Hanley et al., 2014). He also noted that progressing to the test and control conditions of the IISCA is dependent on developing some level of control over the occurrence of problem behavior and/or reported precursors during the observation (e.g., reliably occasioning reported precursors by removing the putative reinforcer[s] and reliably terminating reported precursors by reinstating them). Caregivers observe the entire IISCA process and suggest changes as needed. Future researchers aiming to make similar comparisons between the results of individual and synthesized contingency analysis would do well to incorporate these aspects of the IISCA process.
Finally, we combined the results of the open-ended interview with those from the structured observation to inform the IISCA and synthesized all putative reinforcement contingencies identified across the two assessments. However, the above discussion on the observation process suggests that an observation would rarely, if ever, identify a putative reinforcement contingency not already described by the caregiver during the open-ended interview. Despite this procedural difference, the contingencies synthesized in the IISCA for all 12 participants in the present study were identical to those the caregivers reported during the open-ended interview. One potentially important caveat, however, is that we synthesized all reported contingencies from the open-ended interview, regardless of whether the caregiver reported those contingencies to occur in isolation or in combination. Future researchers should pay careful attention to whether caregivers report putative reinforcement contingencies to occur in isolation or in combination and use this information accordingly when developing the test and control conditions of the IISCA.
Acknowledgments
Grants 5R01HD079113, 5R01HD083214, and 1R01HD093734 from the National Institute of Child Health and Human Development provided partial support for this research.
Footnotes
Case 3 later participated as Samantha in Briggs, Akers, Greer, Fisher, and Retzlaff (2017). None of her data overlap across the two studies. Case 6 later participated as Marcus in Fisher et al. (2018). None of his data overlap across the two studies. Case 10 later participated as Teddy in Greer et al. (conditionally accepted). A portion of his FA (published in its entirety herein) is duplicated in Greer et al. for treatment-evaluation purposes. Several cases later participated in other treatment-oriented studies that are currently ongoing.
References
- Asmus JM, Franzese JC, Conroy MA, & Dozier CL (2003). Clarifying functional analysis outcomes for disruptive behaviors by controlling consequence delivery for stereotypy. School Psychology Review, 32, 624–630. [Google Scholar]
- Betz AM, & Fisher WW (2011). Functional analysis: History and methods In Fisher WW, Piazza CC, & Roane HS (Eds.), Handbook of applied behavior analysis (pp. 206–225). New York: Guilford. [Google Scholar]
- Beavers GA, Iwata BA, & Lerman DC (2013). Thirty years of research on the functional analysis of problem behavior. Journal of Applied Behavior Analysis, 46, 1–21. doi: 10.1002/jaba.30 [DOI] [PubMed] [Google Scholar]
- Briggs AM, Akers JS, Greer BD, Fisher WW, & Retzlaff BJ (2017). Systematic changes in preference for schedule-thinning arrangements as a function of relative reinforcement density. Behavior Modification. Advance online publication. doi: 10.1177/0145445517742883 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bullock CE, Fisher WW, & Hagopian LP (2017). Description and validation of a computerized behavioral data program: “BDataPro.” The Behavior Analyst, 40, 275–285. doi: 10.1007/s40614-016-0079-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Fisher WW, Greer BD, Romani PW, Zangrillo AN, & Owen TM (2016). Comparisons of synthesized and individual reinforcement contingencies during functional analysis. Journal of Applied Behavior Analysis, 49, 596–616. doi: 10.1002/jaba.314 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Fisher WW, Piazza CC, & Chiang CL (1996). Effects of equal and unequal reinforcer duration during functional analysis. Journal of Applied Behavior Analysis, 29, 117–120. doi: 10.1901/jaba.1996.29-117 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Fisher WW, Saini V, Greer BD, Sullivan WE, Roane HS, Fuhrman AM,…Kimball RT (2018). Baseline reinforcement rate and resurgence of destructive behavior. Journal of the Experimental Analysis of Behavior. Advance online publication. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ghaemmaghami M, Hanley GP, & Jessel J (2016). Contingencies promote delay tolerance. Journal of Applied Behavior Analysis, 49, 548–575. doi: 10.1002/jaba.333 [DOI] [PubMed] [Google Scholar]
- Ghaemmaghami M, Hanley GP, Jin SC, & Vanselow NR (2016). Affirming control by multiple reinforcers via progressive treatment analysis. Behavioral Interventions, 31, 70–86. doi: 10.1002/bin.1425 [DOI] [Google Scholar]
- Greer BD, Fisher WW, Briggs AM, Lichtblau KR, Phillips LA, & Mitteer DR (conditionally accepted). Using schedule-correlated stimuli during functional communication training to promote the rapid transfer of treatment effects. Behavioral Development. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hagopian LP, Rooker GW, Jessel J, & DeLeon IG (2013). Initial functional analysis outcomes and modifications in pursuit of differentiation: A summary of 176 inpatient cases. Journal of Applied Behavior Analysis, 46, 88–100. doi: 10.1002/jaba.25 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hammond JL, Iwata BA, Rooker GW, Fritz JN, & Bloom SE (2013). Effects of fixed versus random condition sequencing during multielement functional analyses. Journal of Applied Behavior Analysis, 46, 22–30. doi: 10.1002/jaba.7 [DOI] [PubMed] [Google Scholar]
- Hanley GP (2012). Functional assessment of problem behavior: Dispelling myths, overcoming implementation obstacles, and developing new lore. Behavior Analysis in Practice, 5, 54–72. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hanley GP, Iwata BA, & McCord BE (2003). Functional analysis of problem behavior: A review. Journal of Applied Behavior Analysis, 36, 147–185. doi: 10.1901/jaba.2003.36-147 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hanley GP, Jin CS, Vanselow NR, & Hanratty LA (2014). Producing meaningful improvements in problem behavior of children with autism via synthesized analyses and treatments. Journal of Applied Behavior Analysis, 47, 16–36. doi: 10.1002/jaba.106 [DOI] [PubMed] [Google Scholar]
- Iwata BA, Dorsey MF, Slifer KJ, Bauman KE, & Richman GS (1982/1994). Toward a functional analysis of self-injury. Journal of Applied Behavior Analysis, 27, 197–209. (Reprinted from Analysis and Intervention in Developmental Disabilities, 2, 3–20, 1982) doi: 10.1901/jaba.1994.27-197 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Iwata BA, Duncan BA, Zarcone JR, Lerman DC, & Shore BA (1994). A sequential, test-control methodology for conducting functional analyses of self-injury. Journal of Applied Behavior Analysis, 18, 289–306. doi: 10.1177/01454455940183003 [DOI] [PubMed] [Google Scholar]
- Iwata BA, Pace GM, Cowdery GE, & Miltenberger RG (1994). What makes extinction work: An analysis of procedural form and function. Journal of Applied Behavior Analysis, 27, 131–144. doi: 10.1901/jaba.1994.27-131. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Jessel J, Hanley GP, & Ghaemmaghami M (2016). Interview-informed synthesized contingency analyses: Thirty replications and reanalysis. Journal of Applied Behavior Analysis, 49, 576–595. doi: 10.1002/jaba.316
- Jessel J, Ingvarsson ET, Metras R, Kirk H, & Whipple R (2018). Achieving socially significant reductions in problem behavior following the interview-informed synthesized contingency analysis: A summary of 25 outpatient applications. Journal of Applied Behavior Analysis, 51, 130–157. doi: 10.1002/jaba.436
- Jessel J, Metras R, Hanley GP, Jessel C, & Ingvarsson ET (2019). Evaluating the boundaries of analytic efficiency and control: A consecutive controlled case series of 26 functional analyses. Journal of Applied Behavior Analysis. Advance online publication. doi: 10.1002/jaba.544
- Querim AC, Iwata BA, Roscoe EM, Schlichenmeyer KJ, Ortega JV, & Hurl KE (2013). Functional analysis screening for problem behavior maintained by automatic reinforcement. Journal of Applied Behavior Analysis, 46, 47–60. doi: 10.1002/jaba.26
- Rooker GW, Jessel J, Kurtz PF, & Hagopian LP (2013). Functional communication training with and without alternative reinforcement and punishment: An analysis of 58 applications. Journal of Applied Behavior Analysis, 46, 708–722. doi: 10.1002/jaba.76
- Saini V, Greer BD, & Fisher WW (2015). Clarifying inconclusive functional analysis results: Assessment and treatment of automatically reinforced aggression. Journal of Applied Behavior Analysis, 48, 315–330. doi: 10.1002/jaba.203
- Santiago JL, Hanley GP, Moore K, & Jin CS (2016). The generality of interview-informed functional analyses: Systematic replications in school and home. Journal of Autism and Developmental Disorders, 46, 797–811. doi: 10.1007/s10803-015-2617-0
- Slaton JD, Hanley GP, & Raftery KJ (2017). Interview-informed functional analyses: A comparison of synthesized and isolated components. Journal of Applied Behavior Analysis, 50, 252–277. doi: 10.1002/jaba.384
- Smith RG, & Churchill RM (2002). Identification of environmental determinants of behavior disorders through functional analysis of precursor behaviors. Journal of Applied Behavior Analysis, 35, 125–136. doi: 10.1901/jaba.2002.35-125
- Smith RG, Iwata BA, Vollmer TR, & Zarcone JR (1993). Experimental analysis and treatment of multiply controlled self-injury. Journal of Applied Behavior Analysis, 26, 183–196. doi: 10.1901/jaba.1993.26-183
- Strand RC, & Eldevik S (2018). Improvements in problem behavior in a child with autism spectrum diagnosis through synthesized analysis and treatment: A replication in an EIBI home program. Behavioral Interventions, 33, 102–111. doi: 10.1002/bin.1505
- Thompson RH, & Iwata BA (2007). A comparison of outcomes from descriptive and functional analyses of problem behavior. Journal of Applied Behavior Analysis, 40, 333–338. doi: 10.1901/jaba.2007.56-06
