Journal of Applied Behavior Analysis. 2010 Winter;43(4):685–704. doi: 10.1901/jaba.2010.43-685

COMPONENT ANALYSES USING SINGLE-SUBJECT EXPERIMENTAL DESIGNS: A REVIEW

John Ward-Horner, Peter Sturmey
Editor: Jennifer Zarcone
PMCID: PMC2998259  PMID: 21541152

Abstract

A component analysis is a systematic assessment of 2 or more independent variables or components that comprise a treatment package. Component analyses are important for the analysis of behavior; however, previous research provides only cursory descriptions of the topic. Therefore, in this review the definition of component analysis is discussed, and a notation system for evaluating the experimental designs of component analyses is described. Thirty articles that included a component analysis were identified via a literature search. The majority of the studies successfully identified a necessary component; however, most of these studies did not evaluate the sufficiency of the necessary component. The notation system may be helpful in developing experimental designs that best suit the purpose of studies aimed at conducting component analyses of treatment packages.

Keywords: behavior analysis, behavior modification, component analysis, experimental design


A component analysis is a systematic analysis of two or more independent variables (components) that comprise a treatment package (Baer, Wolf, & Risley, 1968; J. O. Cooper, Heron, & Heward, 2007). Researchers and clinicians conduct component analyses to identify the active components of treatment packages that are responsible for behavior change. For behavioral treatments to be analytic, researchers must identify specific components of a treatment package that produce behavior change (Baer et al.). Component analyses also may enhance the efficiency and social validity (Wolf, 1978) of behavioral treatments by eliminating ineffective and perhaps effortful components and by evaluating whether more restrictive components (e.g., punishment procedures) are truly necessary. This in turn may lead to better generalization and maintenance of the program if parents, teachers, or staff have to be trained on only the key elements of the intervention. Finally, conducting a component analysis is a skill required of Board Certified Behavior Analysts (“Behavior Analyst Task List,” 2005), suggesting that it is a necessary and important feature of applied behavior analysis.

Despite the importance of component analyses, several authoritative sources on single-subject experimental designs do not refer to component analyses at all (Johnston & Pennypacker, 1993; Kazdin, 1982; Sidman, 1960; Sulzer-Azaroff & Meyer, 1972), or if they do, it is to provide cursory definitions and limited descriptions of the process (Baer et al., 1968; Barlow & Hersen, 1984; J. O. Cooper et al., 2007; Kennedy, 2005). Currently, there is no review of methods for performing component analyses in the research literature. Therefore, the purposes of this paper were (a) to review the definition of component analysis, (b) to introduce a notation system for evaluating the experimental designs of component analyses, (c) to discuss experimental procedures for performing component analyses, and (d) to review previous studies that have conducted component analyses.

Component Analysis Defined

J. O. Cooper et al. (2007) defined component analysis as “any experiment designed to identify the active elements of a treatment condition, the relative contributions of different variables in a treatment package, and/or the necessary and sufficient components of an intervention” (p. 692). Component analyses are different from experiments that compare two or more distinct treatments. Kennedy (2005) made this distinction by using the term comparative analysis to refer to the comparison of two or more distinct treatments and component analysis to refer to the evaluation of the necessary parts of an intervention. Accordingly, the present paper will use the term component to refer to variables that comprise a treatment package and treatment package to refer to the application of an intervention with all of its components. Readers should also note that sometimes a treatment package consists of components that are themselves independent treatments, such as a package of relaxation training, differential reinforcement, and extinction; for the purpose of this review, the parts of such packages will still be referred to as components. By contrast, studies that directly compare independent treatments that are never presented together as a treatment package constitute a comparative analysis (Kennedy).

It is also important to distinguish component analyses from parametric analyses. A parametric analysis is a comparison of different levels of one independent variable (Baer et al., 1968; J. O. Cooper et al., 2007; Kennedy, 2005). Thus, a comparison of low rates and high rates of noncontingent reinforcement is more appropriately termed a parametric analysis rather than a component analysis. This example compares two values of a single variable (rate of noncontingent reinforcement) rather than a comparison of components of a treatment package.

A Notation for Component Analyses

Before reviewing the methods for conducting component analyses and previous studies that have conducted component analyses, it will be beneficial first to describe a notation system that may be used to illustrate examples of component analyses. The letters X, Y, and Z indicate the individual components of a three-component treatment package. Additional letters from the end of the alphabet can be used for treatment packages with more components. For example, an ABC design comprised of baseline, treatment, and follow-up can be notated as [BSL]_[XYZ]_[FU]. BSL indicates the baseline phase, FU indicates the follow-up phase, and the brackets surrounding the letters indicate an experimental phase. For alternating treatments designs, the components that the researcher alternates within a phase are separated by a hyphen. For example, [BSL]_[XY-Z]_[FU] indicates that the researcher presented Components X and Y simultaneously and compared them to Component Z alone. Multiple baseline designs may be notated using the letter P followed by a numeral to refer to the different participants and using the letter L followed by a numeral to indicate each leg of the design. For instance, the notation P1_L1_[X]_[Y]_[YZ], P1_L2_[X]_[Y]_[YZ], P1_L3_[X]_[Y]_[YZ] indicates a multiple baseline design across responses, whereas P1_L1_[X]_[Y]_[YZ], P2_L2_[X]_[Y]_[YZ], P3_L3_[X]_[Y]_[YZ] indicates a multiple baseline design across participants. Finally, researchers may include numerals following each component to represent the application of different treatments to different responses within a single phase of an experiment. For instance, the notation [Y1-Z2] indicates that Component Y was applied to Response 1 and Component Z was applied to Response 2. Table 1 provides examples of how researchers may use this notation system to evaluate a variety of experimental designs.
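The notation also lends itself to a machine-readable form. The following Python sketch is our illustration rather than part of the notation system itself (the function name and data representation are hypothetical); it parses a design string such as [BSL]_[XY-Z]_[FU] into phases, the conditions alternated within each phase, and the components presented together within each condition.

```python
def parse_design(notation: str):
    """Parse a design string such as '[BSL]_[XY-Z]_[FU]' into a list of
    phases; each phase is a list of alternated conditions, and each
    condition is a tuple of the components presented together."""
    phases = []
    for phase in notation.split("_"):
        inner = phase.strip("[]")
        conditions = []
        # Hyphens separate conditions alternated within a phase.
        for condition in inner.split("-"):
            # BSL and FU are phase labels, not component letters.
            if condition in ("BSL", "FU"):
                conditions.append((condition,))
            else:
                conditions.append(tuple(condition))
        phases.append(conditions)
    return phases

print(parse_design("[BSL]_[XY-Z]_[FU]"))
# [[('BSL',)], [('X', 'Y'), ('Z',)], [('FU',)]]
```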

Table 1.

Review of the Add-In and Dropout Component Analyses


Evaluating the Outcomes of Component Analyses

When conducting component analyses, researchers often determine that only one or a few components of a treatment package produce behavior change (e.g., Miltenberger, Fuqua, & McKinley, 1985); however, it is critical that the conclusions derived from the analysis are consistent with the experimental design. Accordingly, researchers should consider (a) the additive effects of components, (b) the multiplicative or interactive relations among components, (c) the necessity and sufficiency of each component, (d) behavioral effects related to combining different components, and (e) sequence effects. Additive effects refer to the possibility that the effects of individual components of the treatment package are independent of each other. For example, Outcome 1 in Figure 1 illustrates an additive effect because Component Y is effective at changing behavior whether or not Z is present, thereby indicating that Component Y is necessary and sufficient. Provided that ceiling effects are not a concern, Outcome 2 in Figure 1 is also suggestive of an additive effect because the sum of the components produced changes similar to the treatment package. Multiplicative effects refer to the possibility that the effects of one component might depend on the presence of another; for instance, Outcome 3 in Figure 1 illustrates a multiplicative effect because neither Component Y nor Component Z is effective at changing behavior, but the combination of these components produces substantial behavior change. The term necessity refers to whether a component is needed for the treatment package to be effective, and the term sufficiency refers to whether a component alone is as effective as the treatment package. For instance, Outcome 4 indicates that Components Y and Z are sufficient but neither component is necessary, whereas Outcomes 2 and 3 indicate that neither Component Y nor Component Z is sufficient but both are necessary. Outcome 5 is challenging to interpret because of the behavioral effects related to treatment combinations (i.e., the possibility that components presented together during the same phase might establish relations among the components that influence subsequent evaluations of the components). For instance, Outcome 5 illustrates a situation in which it is possible that Component Y is effective only because of its prior presentation with Component Z. Thus, when Component Y (praise) is presented with Component Z (primary reinforcer) in a treatment package, Component Y may become a conditioned reinforcer that is sufficient to maintain behavior. Sequence effects refer to the possibility that the effects of one condition may carry over and influence behavior in subsequent conditions; for example, Component Y may be effective only when preceded by Component Z, but Component Y may be ineffective when it precedes Component Z. Although sequence effects are a concern in any experiment in which one condition is presented before other conditions, they may be mitigated by reversing treatment effects or by employing counterbalancing.

Figure 1. The potential outcomes of component analyses for a two-component treatment package using reversal designs.
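As a hypothetical illustration of this logic (ours, not the article's; the decision rules, numeric criterion, and values are invented, whereas in practice such judgments are made by visual inspection), the sketch below classifies a two-component outcome from the behavior change observed under each condition.

```python
def classify_outcome(y_alone: float, z_alone: float, package: float,
                     criterion: float = 0.9) -> dict:
    """Each argument is the behavior change produced by that condition;
    the criterion is an arbitrary stand-in for the visual judgment that
    a condition's effect matches the full package's effect."""
    matches_package = lambda effect: effect >= criterion * package
    return {
        # Sufficient: the component alone matches the package's effect.
        "Y sufficient": matches_package(y_alone),
        "Z sufficient": matches_package(z_alone),
        # Necessary: the package's effect is not matched without it.
        "Y necessary": not matches_package(z_alone),
        "Z necessary": not matches_package(y_alone),
        # Multiplicative: neither component works alone, but the
        # package does (Outcome 3 in Figure 1).
        "multiplicative": (not matches_package(y_alone)
                           and not matches_package(z_alone)),
    }

# Outcome 3 in Figure 1: neither component alone changes behavior.
print(classify_outcome(y_alone=0.1, z_alone=0.1, package=1.0))
```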

Methods of Conducting Component Analyses

There are two methods of conducting component analyses using single-subject experimental designs: dropout and add-in analyses. In a dropout analysis, the researcher presents the treatment package and then systematically removes components. The logic of the dropout method is that when a component is removed and the treatment is no longer effective, the researcher has identified a necessary component. The main advantage of this approach is that the target behavior may improve immediately, within the first or second experimental phase, and that the subsequent removal of components may provide information on the components necessary to maintain treatment goals (L. J. Cooper et al., 1995). One disadvantage is that the behavioral effects of combining components might mask the effectiveness of individual components, which makes it difficult to determine the necessity and sufficiency of components. For instance, if the effective component is dropped following the presentation of the treatment package (e.g., [YZ]_[Y], where Z is the effective component), behavior might not change because the effective component was correlated or paired with the other components during the previous phase. The lack of differential responding between phases makes it difficult to draw conclusions about the effectiveness of the components.

A second method, the add-in analysis, is to systematically assess components individually or in combination before presenting the treatment package. When the components are presented individually before the treatment package, researchers assess the sufficiency of the components. When researchers present component combinations before the treatment package and the component combinations produce responding similar to the treatment package, researchers may conclude that the components that have not yet been assessed are not necessary, although they cannot conclude that these components are not sufficient. The main advantage of the add-in method is that researchers can avoid the behavioral effects related to combining different components, thereby allowing an evaluation of the sufficiency of the components. The major disadvantage is that sequence and floor or ceiling effects may make it impossible to detect the effects of components that are evaluated toward the end of the analysis. For instance, in a three-component treatment [XYZ], X might not have any impact on behavior, but Y might increase performance to 80%. Because performance can only increase by another 20%, the addition of Z will produce a limited increase in performance. Although Z may be as effective as or more effective than Y, the presentation of Y before Z prevents researchers from obtaining an accurate assessment of Z, which may lead to the erroneous conclusion either that Z is less effective than Y or that Z is ineffective.
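The ceiling problem can be made concrete with hypothetical numbers (our illustration; the standalone effects and the additive assumption are invented). Two equally effective components appear very different depending only on the order in which they are added:

```python
# Hypothetical standalone effects, in percentage points on a 0-100 scale.
TRUE_EFFECT = {"X": 0, "Y": 80, "Z": 80}

def observed_gains(order):
    """Gain attributed to each component when components are added
    cumulatively and performance cannot exceed 100 percent."""
    level, gains = 0, {}
    for component in order:
        new_level = min(100, level + TRUE_EFFECT[component])
        gains[component] = new_level - level
        level = new_level
    return gains

print(observed_gains(["X", "Y", "Z"]))  # {'X': 0, 'Y': 80, 'Z': 20}
print(observed_gains(["X", "Z", "Y"]))  # {'X': 0, 'Z': 80, 'Y': 20}
```

Under this toy model, whichever of Y and Z happens to be added second appears to contribute only 20 percentage points, which is exactly the misreading described above.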

Researchers have used multiple baseline, reversal, and alternating treatments designs or design combinations to conduct component analyses (see Table 2). Add-in reversal or alternating treatments designs provide the most powerful and complete analysis of the active components of a treatment package because they reduce potential confounding from the behavioral effects of component combinations. Although sequence effects are not eliminated in reversal and alternating treatments designs, researchers could use counterbalancing techniques to reduce the likelihood of sequence effects.

Table 2.

Articles Identified As Having Conducted an Add-In or Dropout Component Analysis


Although reversal and alternating treatments designs provide a more powerful and complete analysis of the active components of a treatment package, multiple baseline designs may be useful for evaluating treatment components when the target response is not easily reversible. Accordingly, researchers should perform add-in component analyses when using multiple baseline designs, because it is impossible to eliminate the confounding effect related to treatment combinations when the dropout method is employed (e.g., [BSL]_[XYZ]_[XY]_[X]).

METHOD

The authors conducted searches using PsycINFO, PubMed, and the Journal of Applied Behavior Analysis (abstract search). The terms entered into the PsycINFO search included component analysis and behavior analysis (22 hits), component analysis and behavior modification (63 hits), component analysis and Journal of Applied Behavior Analysis (16 hits), component analysis and Journal of the Experimental Analysis of Behavior (2 hits), component analysis and developmental disabilities (23 hits), and component analysis and mental retardation (57 hits). The search terms component analysis and behavior modification were used to search PubMed and the Journal of Applied Behavior Analysis (abstract search). In addition, we performed a reference search through the articles obtained from the database search. We excluded research articles that employed group designs or that did not meet the definition of component analysis.

The outcome of the search yielded 30 articles. The first author (primary rater) visually inspected the figures to determine whether the researchers had identified the necessary and sufficient components of the treatment package. In determining whether researchers identified the necessary components of a package, the first author recorded “no” when potential confounding effects were possible (e.g., behavioral effects related to component combinations), the experimental design did not permit statements regarding functional relations (e.g., AB designs or multiple baseline designs with only two legs), or the trends or overlap in data did not allow one to draw firm conclusions. A second independent rater (who was not an author on this paper) visually inspected each figure in each article to evaluate the reliability of the ratings. Interrater agreement was calculated separately for each question presented in Table 2 by dividing agreements by agreements plus disagreements and converting the ratio to a percentage. Agreements for the questions of whether the researchers (a) evaluated all components independently, (b) evaluated all component combinations, (c) identified the necessary components, and (d) assessed the sufficiency of the necessary components were 97% (range, 0% to 100%), 93% (range, 0% to 100%), 80% (range, 0% to 100%), and 77% (range, 0% to 100%), respectively. Agreement was lower for the questions of whether the necessary component was identified and whether the sufficiency of the necessary component was assessed because there was 0% agreement for these categories for the following articles: Freeman (2006), Millard et al. (1993), Pace, Iwata, Cowdery, Andree, and McIntyre (1993), and Tiger and Hanley (2004). When these four articles are excluded from the calculation, agreement for the remaining 26 studies was 88% (range, 50% to 100%) and 87% (range, 43% to 100%) for the questions of whether the study identified the necessary component and whether the study assessed the sufficiency of the necessary component, respectively. In the case of disagreements, the first and second authors discussed each disagreement to draw conclusions regarding the evaluation of components. Ratings on which the raters disagreed were then changed in accordance with the conclusions reached in that discussion; however, there was only one article (Millard et al.) for which the discussion reversed the first author's scoring.
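For concreteness, the agreement metric reduces to the following computation (a sketch with invented ratings, not the study's data):

```python
def percent_agreement(rater1, rater2):
    """Agreements divided by agreements plus disagreements,
    converted to a percentage."""
    assert len(rater1) == len(rater2)
    agreements = sum(a == b for a, b in zip(rater1, rater2))
    return 100.0 * agreements / len(rater1)

# Invented yes/no ratings for one question across four articles.
primary = ["yes", "no", "yes", "yes"]
second = ["yes", "no", "no", "yes"]
print(f"{percent_agreement(primary, second):.0f}%")  # 75%
```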

RESULTS

Dropout Component Analyses

Ten articles performed dropout component analyses (Table 2); none independently evaluated all components, and only three evaluated all component combinations (Cameron, Luiselli, Littleton, & Ferrelli, 1996; Odom, Hoyson, Jamieson, & Strain, 1985; Sisson & Barrett, 1984). The component analyses by Medland and Stachnik (1972) and Odom et al. (1985) could not be evaluated, because the experimental designs did not adequately control for sequence effects and the behavioral effects of component combinations. The remaining six articles successfully identified the necessary components for at least one participant, and only Wacker et al. (1990) and Cameron et al. (1996) evaluated whether the necessary components were also sufficient to produce behavior change.

One concern for dropout component analyses is that presenting components in combination before analyzing them individually may make it difficult to determine the necessity and sufficiency of each component. This is especially challenging when a systematic removal of components or a systematic reintroduction of components follows the presentation of the treatment package (Medland & Stachnik, 1972; Odom et al., 1985). For instance, Odom et al. investigated the effects of training confederates to initiate social interactions with their peers with developmental disabilities. The treatment package consisted of reinforcing the confederates' appropriate social interactions with tokens (Y) and providing verbal prompts when the confederate failed to initiate an interaction after a specified period of time (Z). The treatment package (YZ) increased positive social interactions and decreased negative social interactions. Following the treatment package, the authors withdrew the tokens and reduced the frequency of teacher prompts (Z*) and then returned to the original frequency of teacher prompts (Z; see Table 1). The removal of tokens had no impact on social initiations, whereas the reduction in teacher prompts decreased social initiations.

Odom et al. (1985) concluded that the prompts were a necessary component of the treatment; however, the experimental design did not adequately control for the behavioral effects of component combinations. For instance, it is possible that prompts were effective only because of their prior correlation with tokens. This challenges the necessity of prompts on the grounds that tokens may have been necessary to make the prompts effective, at least initially. A demonstration of the necessity of prompts would have required that the experiment include tokens during the last three phases of the experiment. For instance, if the analysis of prompts consisted of [YZ]_[Y]_[YZ] instead of [Z]_[Z*]_[Z], and if the tokens (Y) were not sufficient to maintain positive social interactions, then conclusions about the necessity of prompts would have been possible.

Medland and Stachnik's (1972) component analysis of the good-behavior game faces similar challenges. They used the good-behavior game to decrease off-task responses of two groups of fifth graders. The game consisted of giving the students rules to follow during reading class (Y), implementing and describing punishment and reinforcement contingencies for following rules (X), and operating lights to signal when students behaved appropriately and inappropriately (Z). The experimenters recorded the number of disruptive responses emitted by each group. The authors used an ABACDB design in which A was the baseline, B was the treatment package [XYZ], C included rules only [Y], and D included rules and lights only [YZ] (see Table 1). Medland and Stachnik reported that the mean number of disruptive responses decreased during Y, and YZ suppressed disruptive responding further. When the authors reintroduced the game contingency (X) after YZ, responding decreased even further. Medland and Stachnik concluded that the rules and lights components effectively reduced behavior following their presentation with the game contingency and that all components were required for optimal control over behavior.

These data are compatible with the conclusion that there was an additive effect of the components and that all components were necessary; however, researchers should interpret such conclusions cautiously. Medland and Stachnik (1972) acknowledged that simultaneous presentation of such components might have masked the amount of control exerted by the individual components. For example, the rules and lights might have developed discriminative control by virtue of their correlation with the game's contingency in the previous experimental phase. Further, the combination of lights and rules might have decreased disruptive behavior more than rules alone because the lights could have acquired a stronger discriminative function, perhaps due to stimulus salience (Reynolds, 1961). Alternatively, the authors might have established the compound of the light and rules as the discriminative stimulus, and so responding in the presence of the light or rules alone may be attributed to stimulus generalization.

Although the behavioral effects of component combinations are a threat to dropout component analyses, the majority of the articles that used a dropout component analysis determined that at least one component was necessary. L. J. Cooper et al. (1995) provided an example of how to identify the necessary component using a dropout analysis. They investigated the effects of a treatment to increase food acceptance and consumption for four individuals with developmental disabilities. For one of the children, the treatment package consisted of a choice of food (S), contingent attention when the child accepted food (T), continued presentation of food when the child refused it (escape extinction; V), and a warm-up in which the therapist provided toys to the child for sitting in the high chair prior to mealtime (Z; Table 1). Following a baseline phase with low food acceptance, the authors implemented the treatment package (STVZ), which increased food acceptance and the amount of food consumed. To evaluate the necessity of the choice component (S), the authors conducted a third phase in which the treatment package (STVZ) alternated with Components TVZ. Because there was no difference in food acceptance between these alternating conditions, the authors eliminated the choice component (S). To evaluate the warm-up component (Z), the authors conducted a fourth phase in which TVZ alternated with TV. The warm-up component did not have any appreciable effect on food acceptance. To evaluate the contribution of escape extinction (V), the authors conducted a fifth phase comparing TV to T. There was greater food acceptance when escape extinction (V) was present than when it was absent, and the amount of food accepted and consumed was equal to that of the treatment package (STVZ) in the presence of TV. Thus, the authors concluded that escape extinction (V) was the necessary component that produced the increase in food acceptance.

There are two reasons why L. J. Cooper et al.'s (1995) component analysis is notable. First, these authors systematically evaluated the necessity of most of the components of a rather large treatment package. The systematic removal of components across phases in the context of an alternating treatments design is an efficient alternative to the potentially more time-consuming reversal design, which would have required many more sessions and experimental phases. In addition to the elegance of its design, this study also provides a nice example of how experimenters can conduct a component analysis while minimally affecting treatment gains. In contrast to a reversal design that requires the removal of active components for several sessions, L. J. Cooper et al. identified the necessity of escape extinction within a single experimental phase.

Add-In Component Analyses

Eighteen articles performed add-in component analyses (Table 2). Thirteen studies identified the necessary component; however, only Fisher et al. (1993) and Sanders (1983) independently evaluated all components for at least one participant. Although Fisher et al. and Sanders independently evaluated all components, we cannot report whether they assessed the sufficiency of the necessary components because the experimental designs did not permit one to draw firm conclusions.

Fisher et al. (1993) independently evaluated all components of a treatment package for reducing the problem behavior of five individuals with developmental disabilities. The treatment package consisted of extinction (X) or punishment (Y) and functional communication training (FCT; Z). For one participant (Art), the component analysis was a reversal design embedded in a multiple baseline design across disruption, aggression, and self-injury. There were six conditions: baseline (A), extinction (B), punishment (C), FCT (D), FCT plus punishment (E), and FCT plus extinction (F). The authors targeted disruptive behavior in the first leg, using a BCBEDE design. The authors then targeted aggression and self-injury in the remaining two legs, respectively, using an ACAEDE design. The different sequence of phases allowed the independent assessment of extinction (X) and punishment (Y). The authors also conducted a reversal design in a different environment for disruptive behavior, and the sequence of phases consisted of FEFE. The multiple baseline design in the first setting revealed that problem behavior was reduced in the punishment (C), FCT (D), and the FCT plus punishment phases (E). For the reversal design in the second setting, problem behavior was not reduced in the first phase (F), but it was reduced in all of the remaining three phases.

Fisher et al.'s (1993) component analysis is notable because they evaluated all components independently. Nevertheless, the data during the third leg's baseline overlap considerably with the data during the punishment phase, so it is difficult to draw firm conclusions about the effects of punishment. Further, the authors lost experimental control when behavior failed to reverse after they implemented punishment alone in the first setting and when they implemented FCT in both settings. Therefore, conclusions about the necessity and sufficiency of the components are difficult to make because of data overlap between baseline and punishment phases and because of the nondifferential responding across phases.

Sanders (1983) used a nonconcurrent multiple baseline design across participants to conduct a component analysis of a treatment package for reducing lower back pain and pain-medication intake and increasing time standing or walking for three individuals. The treatment package consisted of four components: functional pain–behavior analysis training (e.g., self-monitoring of pain and the conditions that precede and follow pain; W), progressive relaxation training (X), assertion training (Y), and social reinforcement of increased activity (Z). Sanders presented each component once in isolation by counterbalancing the order of components across participants. Progressive relaxation training (X) decreased pain ratings and pain medication, and social reinforcement of activity (Z) increased the amount of time standing and walking. The effect of assertion training (Y) was too small to draw any firm conclusion. Functional pain–behavior analysis training (W) had no appreciable effect on pain-related behavior. Therefore, Sanders concluded that both relaxation training and activity reinforcement were necessary components.

Sanders's (1983) experimental design is interesting in that it permitted the independent evaluation of four components within a multiple baseline design; however, the design is problematic because the participants experienced different sequences of conditions, which weakened experimental control and makes it difficult to characterize the design as a multiple baseline. An assumption of the multiple baseline design is that, when treatment is implemented for the first leg, the other legs serve as controls (Kazdin, 1982). The problem with different legs having different sequences of conditions is that the control leg constantly changes, which makes it difficult to obtain stability in the control conditions. For instance, during the third phase for Participant 1, activity reinforcement had already been implemented for Participant 2, the leg that was serving as the control for Participant 1. Thus, experimental control over activity reinforcement was lost because there was no comparison condition in the second leg. Further, pain-related behavior was increasing for Participant 2 when activity reinforcement was implemented for Participant 1, compounding the lack of a consistent control condition.

The remaining add-in articles did not independently evaluate the component identified as being necessary, so it is unclear whether the necessary component was also sufficient. Five articles used multiple baseline designs, such that components were sequentially added to components tested in the previous phase (Feldman, Case, Rincover, Towns, & Betel, 1989; Kern, Wacker, Mace, Dunlap, & Kromrey, 1995; Pace et al., 1993; Rogers-Warren, Warren, & Baer, 1977; Woods, Miltenberger, & Lumley, 1996). The remaining seven articles used reversal designs, alternating treatments designs, or combination designs (e.g., multiple baseline design with a reversal; Buckley & Newchok, 2005; DeLeon et al., 2008; Freeman, 2006; Hagopian et al., 2002; Hanley, Iwata, Thompson, & Lindberg, 2000; Jones & Baker, 1989; Moore & Fisher, 2007; Shirley, Iwata, Kahng, Mazaleski, & Lerman, 1997; Tiger & Hanley, 2004), which are more amenable to the independent evaluation of the necessary components; however, despite the use of reversal designs, none determined whether the necessary component was also sufficient.

Other Single-Subject Component Analyses

Another class of single-subject component analyses compares groups of single-subject data in which the component of interest is included in the treatment package for one group but not the other. Differential responding between groups may indicate the necessity of the component of interest, whereas nondifferential responding may indicate the sufficiency of the component. Two articles used multiple baseline designs in which groups of participants received a different sequence of conditions (Krumhus & Malott, 1980; Miltenberger et al., 1985). For example, Miltenberger et al. evaluated the competing response component of habit reversal for muscle tics. One group received awareness training, relaxation training, the use of a competing response, social support, and a review of situations that evoked the habit and the habit's inconvenience. The second group received only awareness training and competing response training. Motor tics in the second group decreased to the same level as the first group. The authors concluded that awareness and competing response training were the effective components of the habit reversal package.

There are three concerns that arise when evaluating different groups of experimental data. First, there might be an interaction between the treatment and participant variables, especially if the design does not include appropriate experimental control procedures. Second, variability across participants might restrict conclusions about the active components of the treatment package; if some participants in each group improve and others do not, then identification of the active component by visual inspection will be difficult or must be participant specific. The final concern relates to the logical error of accepting null findings. For instance, if a researcher finds that two groups (WX vs. WXYZ) perform similarly, the researcher might conclude that the components (WX) are as effective as the entire treatment package (WXYZ). The absence of behavior change between groups may instead indicate a lack of experimental control or insensitive dependent variables or measurement systems, rather than a lack of any differences between the groups.

DISCUSSION

This paper presented a description of component analyses and reviewed 30 component analyses. Conducting a component analysis is a complex and labor-intensive endeavor, especially with multicomponent treatment packages. To perform a complete component analysis, researchers must evaluate the independent effects of each component and the effects of all component combinations. Such an analysis would require several phases and clear differential responding between them to mitigate potential confounding from sequence effects or the behavioral effects related to component combinations. Given the complexity and labor-intensive nature of component analyses, it is important to consider when component analyses are justified.
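The combinatorial burden grows quickly: a complete analysis of an n-component package must examine every nonempty subset of components, 2^n - 1 conditions in all, before counting baselines, reversals, and counterbalanced orderings. The short sketch below (our illustration, using the X, Y, Z notation introduced earlier) enumerates those conditions.

```python
from itertools import combinations

def all_conditions(components):
    """Every nonempty combination of components (2**n - 1 of them)."""
    return ["".join(subset)
            for size in range(1, len(components) + 1)
            for subset in combinations(components, size)]

print(all_conditions("XYZ"))
# ['X', 'Y', 'Z', 'XY', 'XZ', 'YZ', 'XYZ']  -> 7 conditions
print(len(all_conditions("UVWXYZ")))  # 63 conditions for six components
```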

There are two conditions that warrant a component analysis: clinical goals and analytic importance. From a clinical standpoint, the decision to conduct a component analysis should be based on an analysis of the costs, benefits, and social validity of a treatment package. Accordingly, a component analysis may be useful for treatment packages that have many components or are expensive, time consuming, and difficult to implement. A component analysis may also have great clinical utility for treatment packages that include a restrictive or aversive component, so that clinicians can evaluate whether that component is necessary for behavior change. For instance, Cameron et al. (1996) conducted a component analysis of a treatment package for reducing inappropriate screaming, which included a visual occlusion helmet and noncontingent snacks. Following a demonstration of the necessity of the helmet, the authors conducted a component analysis of the various physical parts of the helmet. They found that the pressure from the helmet shell was effective at reducing screaming, and they were able to successfully replace the helmet shell with a hair band. Cameron et al.'s study provides a good example of when a component analysis is appropriate from a clinical perspective because (a) the authors demonstrated the least restrictive treatment needed for behavior change and (b) staff ratings of attitude towards treatment implementation improved with the discontinuation of the helmet; thus, the component analysis enhanced the social validity of the intervention.

From an analytic standpoint, component analyses are appropriate for answering questions regarding the active components needed for behavior change. For instance, Wacker et al. (1990) conducted a component analysis of FCT because previous research had demonstrated the effectiveness of FCT, but the mechanisms that produced behavior change were still unknown. Thus, a component analysis is also appropriate when studies have demonstrated the robust efficacy of a treatment package but the components that produce behavior change are unknown.

As described by L. J. Cooper et al. (1995), researchers and practitioners should consider the study's goal when selecting a method for conducting a component analysis. If the researchers conduct a study to examine the effective components of a treatment package that previous research has already shown to be effective, then they should use the add-in method with reversal or multielement designs. In this way, an add-in component analysis allows evaluation of the independent effects of components prior to their combination. By contrast, if researchers conduct a study for its clinical importance, they should adopt the dropout method with a reversal or multielement design. In this strategy, there is a greater likelihood of substantial and immediate improvements in behavior, given that all treatment components are presented first. Finally, although it is ideal to match the procedures for conducting a component analysis to the purpose of the study, this may not be possible when the behavior is not reversible. In such cases, researchers would need to conduct an add-in multiple baseline or an equivalent design that does not require a reversal of behavior to demonstrate experimental control.

If the add-in method is the most appropriate strategy, researchers should consider conducting further analyses with other participants to demonstrate the robustness of their initial findings. Depending on the outcome of the initial analysis, there are two main options for subsequent analyses: counterbalance the order of components for different participants or conduct a dropout analysis. For instance, with a two-component treatment package, if both components are equally effective or partially effective, or if one component is effective and the other is ineffective, researchers should replicate the initial outcome with different participants using a counterbalanced sequence of conditions (i.e., [Y]_[Z] and [Z]_[Y]). In such cases, the counterbalanced sequence, when combined with the preliminary findings, provides a more convincing demonstration of the active components than the initial analysis alone. A more challenging situation arises, however, when neither component is effective independently. This case is particularly difficult because experimental control is compromised by the lack of behavior change between conditions or phases (i.e., [BSL]_[Y]_[BSL]_[Z]_[BSL]_[YZ]). One option is to replicate the initial add-in analysis with a counterbalanced order of conditions, because it is possible that the second component was ineffective only because it was preceded by another component. The second option is to conduct a dropout component analysis such that the treatment package is inserted between assessments of the independent components (e.g., [YZ]_[Y]_[YZ]_[Z]_[YZ]). The dropout component analysis might establish experimental control by producing differential responding between phases, provided that the behavioral effects of the component combination do not influence the subsequent analysis of the independent components.
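A sketch of the counterbalancing idea (our illustration; the helper and the design strings are hypothetical): for a two-component package, each permutation of the components becomes the phase order for a different participant.

```python
from itertools import permutations

def counterbalanced_designs(components, prefix=("BSL",)):
    """Yield one counterbalanced phase sequence per participant."""
    for participant, order in enumerate(permutations(components), start=1):
        phases = "_".join(f"[{c}]" for c in prefix + order)
        yield f"P{participant}: {phases}"

for design in counterbalanced_designs(("Y", "Z")):
    print(design)
# P1: [BSL]_[Y]_[Z]
# P2: [BSL]_[Z]_[Y]
```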

Regardless of the purpose for conducting a component analysis, the outcome of the analysis may have implications for social validity. In some cases, the identification and removal of unnecessary components may improve the efficiency and acceptability of the treatment package (Cameron et al., 1996). In other cases, it is possible that the active components that improve behavior may be different from the components that make the treatment socially acceptable. For instance, in a treatment package that consists of punishment and differential reinforcement, punishment may be the only active component; however, research has indicated that less restrictive procedures (reinforcement) tend to be rated more favorably (Miltenberger, Lennox, & Erfanian, 1989). Therefore, researchers should attempt to evaluate the acceptability of each component or component combination of a treatment package, which in turn may affect long-term implementation of the intervention.

In our review of the literature, we identified 30 articles that conducted component analyses using single-subject experiments. Only two studies independently evaluated all treatment components, suggesting that research has focused on determining the necessity, but not sufficiency, of components. Only 13 articles evaluated all component combinations, and the majority of them evaluated treatment packages with only two components. Based on this sample of articles, it appears that there have been few complete component analyses; this is not surprising given the difficulties in performing component analyses described above.

There are several limitations of our review of the literature that warrant discussion. First, definitive conclusions about the number of complete component analyses on the basis of this review are unwarranted; because the literature does not adequately define the term component analysis and uses it inconsistently, it is difficult to identify all relevant studies. Future research might address this by supplementing online searches with hand searches of journals.

A second limitation was the way studies identified the components of the treatment package. We represented the components in the way that the authors had categorized them; however, there were several instances in which one component could be subdivided into two or three independent components (Feldman et al., 1989; Richman et al., 1997; Roscoe, Fisher, Glover, & Volkert, 2006). For instance, Feldman et al. conducted a component analysis of instructions, modeling, practice, and feedback to teach mothers with developmental disabilities to be more responsive to their children. The component analysis consisted of comparing instructions (Y) to modeling, rehearsal, and feedback (Z). One could argue that in this study the necessary component was not identified because the relative contributions of modeling, practice, and feedback were unknown. This is one example of a broader phenomenon. Nevertheless, our reason for retaining the authors' categorizations was to provide reports consistent with the experimental design and purpose of each study.

A third limitation is that our evaluation of the studies in Table 2 did not take into account the purpose of the study. The purpose of a study might have been to evaluate the sufficiency of a particular component without regard to the necessity and sufficiency of the other components or vice versa. Further, for some of the studies (e.g., Sisson & Barrett, 1984), it may not have been logical to assess a component independently to determine its sufficiency. Therefore, although it is true that the sample of studies described in this paper rarely conducted a complete component analysis, the aim of the study might not have required a detailed analysis.

In addition to performing a hand search to provide a comprehensive and systematic review of component analyses, future researchers should begin to explore systematic methods for evaluating treatment packages that are so large that it may not be feasible to use traditional methods to identify active components. For instance, one strategy might include dividing the components of a treatment package into groups of components (e.g., UVW and XYZ) and conducting component analyses of each group of components. This strategy might help to identify the necessary components of large treatment packages, although it precludes the evaluation of the relation between all components. Regardless of the methods proposed, the identification of strategies for evaluating large treatment packages is important to the analytic goal of applied behavior analysis and possibly to the improvement of the social validity of treatments.

Acknowledgments

We thank Laura Seiverling, Robert Lanson, Alicia Alvero, Bruce Brown, Jeff Sigafoos, and the anonymous reviewers for their feedback on earlier versions of this manuscript and Laura Seiverling for assistance with data collection.

REFERENCES

1. Baer D.M, Wolf M.M, Risley T.R. Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis. 1968;1:91–97. doi: 10.1901/jaba.1968.1-91.
2. Barlow D.H, Hersen M. Single-case experimental designs: Strategies for studying behavior change (2nd ed.). New York: Pergamon; 1984.
3. Behavior analyst task list. 2005. Retrieved from http://www.bacb.com/tasklist/207-3rdEdTaskList.htm.
4. Buckley S.D, Newchok D.K. An evaluation of simultaneous presentation and differential reinforcement with response cost to reduce packing. Journal of Applied Behavior Analysis. 2005;38:405–409. doi: 10.1901/jaba.2005.71-04.
5. Cameron M.J, Luiselli J.K, Littleton R.F, Ferrelli L. Component analysis and stimulus control assessment of a behavior deceleration treatment package. Research in Developmental Disabilities. 1996;17:203–215. doi: 10.1016/0891-4222(96)00004-2.
6. Cooper J.O, Heron T.E, Heward W.L. Applied behavior analysis (2nd ed.). Upper Saddle River, NJ: Pearson; 2007.
7. Cooper L.J, Wacker D.P, McComas J.J, Brown K, Peck S.M, Richman D, et al. Use of component analyses to identify active variables in treatment packages for children with feeding disorders. Journal of Applied Behavior Analysis. 1995;28:139–153. doi: 10.1901/jaba.1995.28-139.
8. DeLeon I.G, Hagopian L.P, Rodriguez-Catter V, Bowman L.G, Long E.S, Boelter E.W. Increasing wearing of prescription glasses in individuals with mental retardation. Journal of Applied Behavior Analysis. 2008;41:137–142. doi: 10.1901/jaba.2008.41-137.
9. Feldman M.A, Case L, Rincover A, Towns F, Betel J. Parent education project III: Increasing affection and responsivity in developmentally handicapped mothers: Component analysis, generalization, and effects on child language. Journal of Applied Behavior Analysis. 1989;22:211–222. doi: 10.1901/jaba.1989.22-211.
10. Fisher W, Piazza C, Cataldo M, Harrell R, Jefferson G, Conner R. Functional communication training with and without extinction and punishment. Journal of Applied Behavior Analysis. 1993;26:23–26. doi: 10.1901/jaba.1993.26-23.
11. Freeman K.A. Treating bedtime resistance with the bedtime pass: A systematic replication and component analysis with 3-year-olds. Journal of Applied Behavior Analysis. 2006;39:423–428. doi: 10.1901/jaba.2006.34-05.
12. Green C.W, Gardner S.M, Reid D.H. Increasing indices of happiness among people with profound multiple disabilities: A program replication and component analysis. Journal of Applied Behavior Analysis. 1997;30:217–228. doi: 10.1901/jaba.1996.29-67.
13. Hagopian L.P, Fisher W.W, Sullivan M.T, Acquisto J, LeBlanc L.A. Effectiveness of functional communication with and without extinction and punishment: A summary of 21 inpatient cases. Journal of Applied Behavior Analysis. 1998;31:211–235. doi: 10.1901/jaba.1998.31-211.
14. Hagopian L.P, Rush K.S, Richman D.M, Kurtz P.F, Contrucci S.A, Crosland K. The development and application of individualized levels systems for the treatment of severe problem behavior. Behavior Therapy. 2002;33:65–86.
15. Hanley G.P, Iwata B.A, Thompson R.H, Lindberg J.S. A component analysis of stereotypy as reinforcement for alternative behavior. Journal of Applied Behavior Analysis. 2000;33:285–297. doi: 10.1901/jaba.2000.33-285.
16. Hoch T.A, Babbitt R.L, Farrar-Schneider D, Berkowitz M.J, Owens J.C, Knight T.L, et al. Empirical examination of a multicomponent treatment for pediatric food refusal. Education and Treatment of Children. 2001;24:176–198.
17. Johnston J.M, Pennypacker H.S. Strategies and tactics of behavioral research (2nd ed.). Hillsdale, NJ: Erlbaum; 1993.
18. Jones R.S.P, Baker L.J.V. Reducing stereotyped behaviour: A component analysis of the DRI schedule. The British Journal of Clinical Psychology. 1989;28:255–266. doi: 10.1111/j.2044-8260.1989.tb01375.x.
19. Kazdin A.E. Single-case research design. Oxford, UK: Oxford University Press; 1982.
20. Kennedy C.H. Single-case designs for educational research. New York: Allyn and Bacon; 2005.
21. Kern L, Wacker D.P, Mace F.C, Falk G.D, Dunlap G, Kromrey J.D. Improving the peer interactions of students with emotional and behavioral disorders through self-evaluation procedures: A component analysis and group application. Journal of Applied Behavior Analysis. 1995;28:47–59. doi: 10.1901/jaba.1995.28-47.
22. Krumhus K.M, Malott R.W. The effects of modeling and immediate and delayed feedback in staff training. Journal of Organizational Behavior Management. 1980;2:279–293.
23. Medland M.B, Stachnik T.J. Good-behavior game: A replication and systematic analysis. Journal of Applied Behavior Analysis. 1972;5:45–51. doi: 10.1901/jaba.1972.5-45.
24. Millard T, Wacker D.P, Cooper L.J, Harding J, Drew J, Plagmann A, et al. A brief component analysis of potential treatment packages in an outpatient clinic setting with young children. Journal of Applied Behavior Analysis. 1993;26:475–476. doi: 10.1901/jaba.1993.26-475.
25. Miltenberger R.G, Fuqua R.W, McKinley T. Habit reversal with muscle tics: Replication and component analysis. Behavior Therapy. 1985;16:39–50.
26. Miltenberger R.G, Lennox D.B, Erfanian N. Acceptability of alternative treatments for persons with mental retardation: Ratings from institutional and community-based staff. American Journal on Mental Retardation. 1989;93:388–395.
27. Moore J.W, Fisher W.W. The effects of videotape modeling on staff acquisition of functional analysis methodology. Journal of Applied Behavior Analysis. 2007;40:197–202. doi: 10.1901/jaba.2007.24-06.
28. Odom S.L, Hoyson M, Jamieson B, Strain P.S. Increasing handicapped preschoolers' peer social interactions: Cross-setting and component analysis. Journal of Applied Behavior Analysis. 1985;18:3–16. doi: 10.1901/jaba.1985.18-3.
29. Pace G.M, Iwata B.A, Cowdery G.E, Andree P.J, McIntyre T. Stimulus (instructional) fading during extinction of self-injurious escape behavior. Journal of Applied Behavior Analysis. 1993;26:205–212. doi: 10.1901/jaba.1993.26-205.
30. Reynolds G.S. Attention in the pigeon. Journal of the Experimental Analysis of Behavior. 1961;4:203–208. doi: 10.1901/jeab.1961.4-203.
31. Richman D.M, Berg W.K, Wacker D.P, Stephens T, Rankin B, Kilroy J. Using pretreatment and posttreatment assessments to enhance and evaluate existing treatment packages. Journal of Applied Behavior Analysis. 1997;30:709–712. doi: 10.1901/jaba.1997.30-709.
32. Rogers-Warren A, Warren S.F, Baer D.M. A component analysis: Modeling, self-reporting, and reinforcement of self-reporting in the development of sharing. Behavior Modification. 1977;1:307–322.
33. Roscoe E.M, Fisher W.W, Glover A.C, Volkert V.M. Evaluating the relative effects of feedback and contingent money for staff training of stimulus preference assessments. Journal of Applied Behavior Analysis. 2006;39:63–77. doi: 10.1901/jaba.2006.7-05.
34. Sanders S.H. Component analysis of a behavioral treatment program for chronic low-back pain. Behavior Therapy. 1983;14:697–705.
35. Shirley M.J, Iwata B.A, Kahng S.W, Mazaleski J.L, Lerman D.C. Does functional communication training compete with ongoing contingencies of reinforcement? An analysis during response acquisition and maintenance. Journal of Applied Behavior Analysis. 1997;30:93–104. doi: 10.1901/jaba.1997.30-93.
36. Sidman M. Tactics of scientific research: Evaluating experimental data in psychology. Oxford, UK: Basic Books; 1960.
37. Sisson L.A, Barrett R.P. An alternating treatments comparison of oral and total communication training with minimally verbal retarded children. Journal of Applied Behavior Analysis. 1984;17:559–566. doi: 10.1901/jaba.1984.17-559.
38. Sulzer-Azaroff B, Meyer G.M. Behavior analysis for lasting change. Fort Worth, TX: Holt, Rinehart & Winston; 1972.
39. Tiger J.H, Hanley G.P. Developing stimulus control of preschooler mands: An analysis of schedule-correlated and contingency-specifying stimuli. Journal of Applied Behavior Analysis. 2004;37:517–521. doi: 10.1901/jaba.2004.37-517.
40. Wacker D.P, Steege M.W, Northup J, Sasso G, Berg W, Reimers T, et al. A component analysis of functional communication training across three topographies of severe behavior problems. Journal of Applied Behavior Analysis. 1990;23:417–429. doi: 10.1901/jaba.1990.23-417.
41. Wolf M.M. Social validity: The case for subjective measurement or how applied behavior analysis is finding its heart. Journal of Applied Behavior Analysis. 1978;11:203–214. doi: 10.1901/jaba.1978.11-203.
42. Woods D.W, Miltenberger R.G, Lumley V.A. Sequential application of major habit-reversal components to treat motor tics in children. Journal of Applied Behavior Analysis. 1996;29:483–493. doi: 10.1901/jaba.1996.29-483.
