Author manuscript; available in PMC: 2013 May 1.
Published in final edited form as: Behav Processes. 2012 Mar 23;90(1):1–8. doi: 10.1016/j.beproc.2012.03.009

Facets of Pavlovian and operant extinction

K Matthew Lattal 1, Kennon A Lattal 2
PMCID: PMC3337697  NIHMSID: NIHMS366249  PMID: 22465468

Abstract

Research on extinction is of fundamental importance in both Pavlovian and operant approaches to the experimental analysis of learning. Although these approaches are often motivated by different empirical and theoretical questions, extinction has emerged as a research area in which common themes unite the two approaches. In this review, we focus on some common considerations in the analysis of Pavlovian and operant extinction. These include methodological challenges and interpretational issues in analyzing behavior during and after extinction. We consider the different roles that theory has played in the development of research on extinction in these preparations and conclude with some attention to applications of extinction.


The title of this paper encompasses two relations: that between Pavlovian and operant conditioning and the relation of both of these to extinction. The former is, of course, longstanding in the study of learning, incorporating different empirical and theoretical perspectives (e.g., Blackman, 1977; Mackintosh, 1983; Rescorla & Solomon, 1967). Both have supported rich empirical research programs and equally fruitful developments in the theoretical understanding and practical applications of learning. Central to all of these activities has been the experimental analysis of extinction.

Theoretical perspectives on Pavlovian-operant relations range from the position that the nominally different types of conditioning are, in fact, indistinguishable (e.g., Hearst, 1976), through the assignment of greater relative importance to one or the other (e.g., Skinner, 1938), to, most commonly, interactive positions identifying important roles for both (e.g., Rescorla & Solomon, 1967; Mowrer, 1960; Blackman, 1970). Similarities abound in terms of behavioral effects and controlling processes. Both Pavlovian and operant situations comprise relations among stimuli, responses, and outcomes. The relative importance of various relations among these elements often defines Pavlovian (stimulus-outcome) and operant (response-outcome) conditioning, but relations with responses also are important in Pavlovian conditioning, and relations with stimuli are important in operant conditioning (e.g., Blackman, 1977; Nevin & Grace, 2000; Crombag, Galarce, & Holland, 2008).

Pavlovian and operant extinction are both microcosms of broader discussions of the Pavlovian-operant relation and, independent of such discussions, a rich source of empirical and theoretical analyses of learned behavior. On the one hand, despite the procedural differences that define the two preparations, there are commonalities in definitions of extinction, methods of analysis, and empirical findings. On the other hand, a host of similar findings have yielded different interpretations and theories that have contributed to not only the understanding of extinction, but also to more general issues in the psychology and neurobiology of learning.

In this review, we consider some of the many facets of Pavlovian and operant extinction. We begin with a description of the ways in which the term “extinction” has been used in the literature. We then describe some issues to consider in the designs and analyses of extinction experiments. Next, we examine the role that theory has played in understanding extinction. Finally, we end with considerations of some applications of Pavlovian and operant extinction research.

Terminology

The term “extinction” is used in both Pavlovian and operant preparations to describe procedures and the behavioral outcomes of those procedures, as well as a behavioral process or mechanism that underlies these outcomes. Extinction is most often procedurally defined as the omission of previously delivered unconditioned stimuli or reinforcers; however, it also has been defined as the absence of a contingency between response and reinforcer (e.g., Baker, 1990; Premack, 1965; Rescorla and Skucy, 1969) or between conditional and unconditional stimulus (e.g., Delamater, 1995; Lindblom & Jenkins, 1981). Statements like “extinction consisted of 10 presentations of the conditional stimulus (CS) without the unconditional stimulus (US)” or “extinction was implemented by removing reinforcer access” illustrate this usage.

At a functional, behavioral level, extinction describes the decreases in responding from the higher levels observed prior to extinction to the lower levels that follow implementation of one of the aforementioned operations. Statements like “responding extinguished by the tenth trial” or “responding was extinguished over several sessions” illustrate this usage. Even this well-established outcome, however, is subject to certain qualifications. First, the time course and functional extent of response elimination depend on how the conditions of extinction differ from the conditions present before extinction (during initial acquisition and maintenance; e.g., Gibbon, et al. 1980; Kimble, 1961; Nevin & Grace, 2000). Second, even though extinguished responding may be completely eliminated, a variety of conditions will cause at least some of the responding to return (see Delamater, 2004; K.A. Lattal, St. Peter, & Escobar, in press). Also with respect to outcomes, in addition to reducing the learned response that is the target of the extinction contingency, extinction may generate other responses (see K. A. Lattal, et al., in press, for a review). Unless reinforced, however, such generative behavior is transient, disappearing in the absence of its reinforcement.

At a theoretical level, extinction can refer to or imply a behavioral mechanism or process. Statements like “inhibitory extinction processes do not erase the original learning” illustrate this usage. Commonalities of behavioral effects suggest related, or at least overlapping, processes in Pavlovian and operant extinction. As will be developed in subsequent sections of this review, different theories of extinction have focused on external (observable) and internal (inferred) processes. The former tend to be the focus in behavior-analytic theories of operant extinction and the latter the focus of associative theories of Pavlovian and operant extinction. That said, however, it is an oversimplification to associate a particular theoretical perspective exclusively with one type of behavioral approach (e.g., Nevin & Grace, 2000; Rescorla, 1987; Woodruff, Conner, Gamzu, & Williams, 1977).

Experimental Design

Despite the procedural differences in Pavlovian and operant preparations, many of the methodological challenges in studying extinction are common to the two approaches. These challenges are of two types: those related to procedures and those related to measurement.

Procedures

Comparisons of the effects of variables during extinction in Pavlovian or operant preparations can be made either across different subjects or by exposing the same subject to each condition sequentially. The former, between-subject, designs are more common in Pavlovian preparations simply because many such studies focus on rapidly occurring changes during initial acquisition or extinction. The latter, within-subject, designs often are used in operant preparations, where resistance to extinction of responding maintained at steady state is of interest. Operant approaches also often involve repeated returns to baseline between phases, whereas Pavlovian approaches typically do not. The preparations are not defined, however, by their uses of between- or within-subject designs, as there are many examples of operant extinction studied using between-subject designs (e.g., Rescorla & Skucy, 1969) and Pavlovian extinction studied using within-subject designs (e.g., Gottlieb & Rescorla, 2010).

The advantages and limitations of each category of design are well documented (Rescorla & Holland, 1976; Sidman, 1960). Between-subject designs present challenges in matching all of the experimental conditions in different groups outside of the manipulation of interest. Among the many variables that may differ between groups, for example, are total exposure to a context and the number or rate of presentations of an unconditioned stimulus or reinforcer. These differences can result in behavioral differences that may be misattributed to the manipulation of interest. For example, after Pavlovian fear conditioning in which a tone is paired with a shock, nonreinforced presentations of the tone will extinguish tone-evoked responses relative to a No Extinction group that remains in its home cage during the extinction phase. However, the No Extinction group also will have received less exposure to the apparatus, fewer experiences with the tone, and less handling by the experimenter. Any of these additional differences in treatment during the extinction manipulation may cause the group differences observed during a test.

Within-subject approaches are better able to match these differences in treatment, but they present their own challenges. For example, in operant approaches, baseline responding often needs to be established and reestablished between successive extinction periods. In operant and Pavlovian preparations, the use of multiple stimuli or responses will result in the treatment in the presence of one stimulus influencing responding in the presence of the other (e.g., response induction; Hemmes & Eckerman, 1972). Thus, when extinction is imposed following differential training in the presence of two stimuli, it has to be implemented following one of the conditions first. As a result, when the second condition is presented, the subject has had at least some history of extinction. Even though extinction responding may be under stimulus control, the within-subject comparisons of extinction as a function of the different parameters in effect during training may be tainted. Counterbalancing the treatment of the stimulus that is presented first in extinction helps with this problem: if differences in extinction occur independently of which stimulus is presented first, then it can be concluded that the differences in extinction are due to the independent variables and not simply to an order effect. This ideal outcome, however, is not always obtained, which emphasizes the importance of including other designs that manipulate the variable of interest during extinction in different ways.

Measurement

In addition to the general type of procedure (between- or within-subject), Pavlovian and operant experiments are complicated by multiple measurement considerations in evaluating the response loss that occurs during extinction. In discrete-trial procedures (both Pavlovian and operant), a typical extinction curve plots a response measure (e.g., response rate, probability, or latency) as a function of the number of extinction trials. In discrete-trial and free-operant procedures, responding also has been recorded as a cumulative frequency graph (e.g., Leslie, Shaw, Gregg, McCormick, Reynolds, & Dawson, 2005; Skinner, 1938). These measures reveal effects on extinction as a function of the experience specifically during extinction.

Other measures reveal effects by examining how behavior changes during extinction as a function of the relative differences between the response measure before and during extinction. For example, in some experiments extinction has been plotted as a function of the number of omitted reinforcers during extinction that might have been expected based on the conditions of acquisition (e.g., examining behavior during every fourth trial during extinction following a partial reinforcement schedule in which a reinforcer occurs on 25% of trials; see Gibbon, et al. 1980). These measures, as well as measures of the amount of time or number of trials to reach a specified period in which the response is absent, have revealed effects that may be opposite to those found when responses are plotted as a function of the number of extinction trials (e.g., Gallistel & Gibbon, 2000; Nevin & Grace, 2005). Thus, the choice of dependent variable during extinction may affect the interpretation of the long-term effects of manipulations that occurred prior to extinction.
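The rescaling from trial counts to expected omitted reinforcers is simple arithmetic. A brief sketch illustrates why a partial-reinforcement history stretches the omitted-reinforcer axis; the trial counts and reinforcement probabilities below are invented for illustration only.

```python
# Hypothetical sketch: rescaling an extinction axis from trial counts to
# expected omitted reinforcers (cf. Gibbon et al., 1980). All numbers
# here are invented for illustration.

def expected_omitted_reinforcers(n_trials, p_reinforcement):
    """Reinforcers that would have been expected by this trial under training."""
    return n_trials * p_reinforcement

# After a 25% partial-reinforcement schedule, 40 extinction trials amount
# to only 10 expected-but-omitted reinforcers; after continuous (100%)
# reinforcement, the same 40 trials amount to 40 omissions.
partial = expected_omitted_reinforcers(40, 0.25)     # 10.0
continuous = expected_omitted_reinforcers(40, 1.00)  # 40.0
```

Plotted against this rescaled axis, a partially reinforced group that persists for more trials need not look more persistent than a continuously reinforced group, which is one way such measures can reverse conclusions drawn from trial-based curves.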

With a given dependent variable, effects on rate of extinction often are inferred from differences in the shape of behavioral extinction functions. These approaches are useful for quantitative modeling of how alterations in environmental contingencies affect steady-state performance and for evaluating how a manipulation affects the rate of extinction from a common behavioral starting point at the outset of extinction. Assessing how extinction occurs from different starting levels of response is more difficult. As Nevin, Smith and Roberts (1987) noted, “downward variation is more limited for the performance with the lower baseline, biasing the comparison of differences in favor of the conclusion that the performance with the higher baseline rate is more resistant to change” (p. 31). Conversely, during early extinction trials, more absolute response loss may occur when behavior starts at a higher point, due to the larger discrepancy (or prediction error) between the signaled and delivered reinforcer (e.g., Rescorla & Wagner, 1972). Thus, if the criterion for demonstrating extinction is rate of response decrease or loss, a higher starting point may satisfy that criterion faster. If the criterion is number of trials to reach a point of low responding, then the lower starting point may satisfy that criterion faster.

One potential solution to the problem of different levels at the beginning or end of extinction is to normalize response rates relative to responding during the pre-extinction condition (see Nevin, et al. [1987] for a discussion of the details for conducting such analyses). This is a commonly used tool that is helpful for drawing conclusions about how performance during extinction changes as a function of levels of pre-extinction performance. However, if a goal is to compare the efficacy of a given manipulation during extinction, large differences in behavior prior to extinction make even normalized comparisons challenging because one still needs to make assumptions about how relatively high and low levels of responses map onto underlying learning processes. If theories can be developed to make explicit predictions about deviations from different points on a scale, then this approach can be justified. A different solution, when possible, is to address the problem experimentally by equating performance prior to extinction manipulations (e.g., K.M. Lattal, 1999; Stafford & K.M. Lattal, 2009).
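A minimal sketch of such baseline normalization might look as follows; the response rates are invented for illustration, and readers should consult Nevin et al. (1987) for the full analytic treatment.

```python
# Hypothetical sketch of proportion-of-baseline normalization (cf. Nevin
# et al., 1987). The response rates below are invented for illustration.

def normalize_to_baseline(baseline_rate, extinction_rates):
    """Express each extinction session's rate as a proportion of baseline."""
    if baseline_rate <= 0:
        raise ValueError("baseline rate must be positive")
    return [rate / baseline_rate for rate in extinction_rates]

# Two hypothetical conditions with different pre-extinction baselines.
rich = normalize_to_baseline(60.0, [48.0, 30.0, 18.0, 9.0])  # high baseline
lean = normalize_to_baseline(20.0, [12.0, 6.0, 3.0, 1.5])    # low baseline

# In absolute terms, the high-baseline condition loses more responses per
# session; relative to its own baseline, it declines more slowly here.
```

Normalization puts both conditions on a common 0-1 scale, but, as noted above, it still requires assumptions about how equal proportional losses at different absolute levels map onto the underlying learning.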

These issues of measurement during extinction all warrant consideration when evaluating how behavior maintained by either Pavlovian or operant preparations changes when extinction contingencies are in effect (i.e., during extinction). Another measurement issue occurs when attempting to assess the long-term impact of those extinction contingencies (i.e., after extinction). The strongest conclusions about the long-term effects of an extinction manipulation come from post-extinction tests, during which the effects of different manipulations are assessed under common testing conditions (cf. Davis & Wagner, 1968; Gottlieb, 2008). Such common tests allow more general conclusions to be made about the impact of extinction contingencies on learning independent of the current conditions for performance. These common testing approaches have revealed that differences that appear in performance curves during different phases of conditioning and extinction do not always reflect underlying differences in learning (e.g., Drew, Yang, Ohyama, & Balsam, 2004). Further complicating the issue of testing conditions is the demonstration that even testing under common contingencies may not accurately capture differences in learning if those testing conditions differ in their similarity to the conditions of acquisition or extinction (e.g., K. M. Lattal, 1999; Stout & Miller, 2007; Wilkinson, Lee, & Bevins, 2009). Assessing the long-term impact of extinction is further complicated because many Pavlovian and operant experiments have demonstrated that repeated extinction tests often reveal savings in the speed with which the response declines over these tests (e.g., Anger & Anger, 1976; Stafford & Lattal, 2009).

Behavioral effects during and after extinction

The classic observation from extinction experiments is that over the course of extinction, learned behavior decreases from high levels to low levels. Although this finding is ubiquitous, there are multiple caveats to this observation. First, arranging extinction experimentally does not always result in an immediate loss of responding and may, in fact, result in a potentiated response early in extinction (the “extinction burst”). Second, the changes that occur during extinction will be strongly influenced by the contingencies that are arranged prior to extinction. Third, complications in comparing extinction when pre-extinction responding is at different points on the behavioral scale, as noted in the preceding subsection, prevent strong conclusions from being drawn about the shape of the extinction learning curve. Finally, although responding in the presence of the stimulus undergoing extinction may decrease, other responses may emerge during the course of extinction. We consider some of these caveats next.

Resistance to extinction: I. Response potentiation and conditioned reinforcement

Following some pre-extinction conditions (especially continuous reinforcement of an operant response), but perhaps not under all such conditions (e.g., Lerman & Iwata, 1995), responding at the onset of extinction may increase transiently relative to that observed in the pre-extinction condition. Furthermore, in Pavlovian preparations, brief extinction trials (e.g., a 1-s as opposed to a 30-s CS duration) may potentiate, rather than reduce, conditioned responding (Rohrbaugh, Riccio, & Arthur, 1972). Although both of these response-potentiating effects occur, some analyses have suggested that these effects are equivocal in both Pavlovian and operant preparations (Lerman & Iwata, 1995; Rohrbaugh, et al. 1972). In addition, Skinner (1938) showed that punishing the first few responses of extinction (and thereby preventing a response burst at the outset of extinction) did not alter the ultimate time course of that extinction.

In both Pavlovian and operant preparations, the speed and extent of response elimination in extinction depend on the similarity between the stimulus conditions in effect before extinction and those in effect during extinction. One such stimulus condition is the rate of reinforcement of an operant response. The removal of frequent or of relatively infrequent reinforcement, for example, may itself serve as a discriminative stimulus controlling response probability. Thus, differences in extinction as a function of differences in reinforcement rate could reflect, at least in part, a discriminative effect in addition to a nonreinforcement effect.

In another demonstration of the role of discriminative stimuli associated with reinforcement during extinction, Kelleher (1961) showed that continuing to present an empty food hopper during extinction resulted in more persistent responding during extinction than occurred when the empty food hopper was not presented in extinction. This presumably occurred because the auditory and visual cues associated with the food hopper signaled the delivery of food, resulting in those cues acquiring conditioned reinforcing value. These and other studies demonstrate that the presence of conditioned reinforcers may prolong the maintenance of responding during extinction (see Fantino & Romanowich, 2007; Robinson & Berridge, 2008; Williams, 1994).

Resistance to extinction: II. Response maintenance as a function of acquisition history

Resistance to extinction refers to the persistence of the responding established in the pre-extinction condition once extinction contingencies are in effect. Such persistence is measured in terms of the rate at which responding is reduced and the extent to which it is eliminated relative to the pre-extinction condition. Resistance to extinction can be affected by many variables, a review of which is beyond the scope of this review (see Nevin & Grace, 2000; Podlesnik & Shahan, 2010). Some of these variables in the pre-extinction condition may immunize, at least temporarily, responses to the attenuating effects of extinction in both Pavlovian and operant preparations (e.g., Bouton & Swartzentruber, 1991; Konorski, 1967; Mazur, 1994; Rescorla, 2003; Skinner, 1938; see Kimble, 1961 for a history of early such comparisons).

One of the best contemporary examples of an integration between Pavlovian and operant accounts of extinction is Nevin’s analysis of behavioral momentum and response strength as they relate to resistance to extinction. Operant responding first is stabilized under different reinforcement conditions in the presence of different discriminative stimuli (i.e., a multiple schedule). Following this, the resistance of responding in the presence of each of the stimuli is examined when disrupting events, like pre-session access to the reinforcer or response-independent presentations of the reinforcer between multiple-schedule components, are scheduled. Most germane to the present discussion, responding in the presence of each stimulus is extinguished, and differential resistance of responding in the two conditions, along with the results of the other tests, is taken to index differential strength of responding maintained by the contingencies in either stimulus condition. Using an identical preparation, Cohen, Riley, and Weigle (1993) showed, with both rats and pigeons, that there was no direct relation between rate of reinforcement on either variable-interval (VI), fixed-interval, variable-ratio, or fixed-ratio schedules and subsequent resistance to extinction when the different reinforcement rates were varied across successive conditions. When, however, the reinforcement rates were programmed in different alternating components of a multiple schedule, orderly relations between rate of reinforcement and relative resistance to extinction were found.

Cohen et al.’s findings are significant because they lend support to Nevin’s suggestion that resistance to change, as measured by resistance to extinction or in other ways, results from Pavlovian contingencies generated when operant responding is differentially reinforced in the different components of a multiple schedule, because of the differential pairing of the discriminative stimuli with different conditions of reinforcement. Nevin, Tota, Torquato, and Shull (1990) reinforced key pecking of pigeons on a multiple VI VI schedule of food reinforcement. Adding response-independent food presentations in one component lowered response rates in that component relative to those maintained in the other component with less frequent reinforcement. Resistance to extinction, however, was greater in the component with the more frequent food delivery, even though response rates were lower in that component. Nevin et al. suggested that adding the response-independent food deliveries enhanced the Pavlovian stimulus-food contingency, with resulting greater resistance to change (see also Shull, Gaynor, & Grimes, 2002). Empirically, such research on behavioral momentum provides a test of how different contingencies differentially protect responding from extinction (see, e.g., Konorski, 1967).
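The core quantitative claim of behavioral momentum theory is often written as log(B_x/B_0) = -x/r^b, in which the proportional decline of responding under a disruptor of magnitude x is smaller at richer baseline reinforcement rates r (Nevin & Grace, 2000). The sketch below illustrates that relation; the disruptor value x and sensitivity exponent b are arbitrary illustrative choices, not fitted parameters.

```python
# Hedged sketch of behavioral momentum theory's core relation
# (Nevin & Grace, 2000): log10(B_x / B_0) = -x / r**b, so the
# proportional response decline under disruption is smaller at richer
# baseline reinforcement rates r. The disruptor magnitude x and the
# sensitivity exponent b are arbitrary illustrative values.

def proportion_of_baseline(x, r, b=0.5):
    """Predicted proportion of baseline responding after disruption x."""
    return 10.0 ** (-x / r ** b)

rich = proportion_of_baseline(x=1.0, r=120.0)  # e.g., 120 reinforcers/hr
lean = proportion_of_baseline(x=1.0, r=30.0)   # e.g., 30 reinforcers/hr

# The richer component is predicted to retain a larger proportion of its
# baseline responding, i.e., to be more resistant to change.
assert rich > lean
```

This is the pattern Nevin et al. (1990) reported: the component with the higher overall reinforcement rate showed greater resistance to extinction even when its baseline response rate was lower.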

Response decrement during extinction: Are extinction curves negatively accelerated?

Data from a host of experiments have shown how the pre-extinction condition influences behavior change during extinction. Despite such differences in rates of extinction as a function of manipulations during the pre-extinction condition, the form of the extinction curve appears to be qualitatively similar after these manipulations. Indeed, across the thousands of extinction curves plotted in both the Pavlovian and operant literatures, it is striking how consistently performance during extinction follows the familiar negatively accelerated pattern that is so often observed during acquisition, with a large amount of change early and smaller amounts of change as extinction progresses and responding reaches asymptote. Nevin’s analysis of behavioral momentum and response strength described above suggests that, despite the qualitative similarities in extinction curves based on operant responses, quantitative analyses of responding over the course of extinction indicate differential effects of pre-extinction variables. Relatedly, contemporary investigators in the Pavlovian tradition have noted that what often are labeled “learning curves” are more usefully labeled “performance curves” (e.g., Rescorla, 2001a). Some measure of behavior is the dependent variable, and the goal is to make inferences about learning from the observed behavioral indices.

As noted previously, comparing changes in behavior from different parts of the response scale is difficult. Consider comparisons of initial acquisition, in which behavior starts at or near zero and ends at a higher level, with extinction, in which behavior starts at that high level and ends at or near zero. Based on absolute changes, one might be tempted to conclude that similarly large amounts of learning occur during the beginning of acquisition or extinction (when changes in behavior are large) compared to the end of acquisition or extinction (when changes in behavior are small). The difficulty in making inferences about the underlying learning process is that assumptions must be made about how a similar increase or decrease in units on a behavioral scale (e.g., a gain or loss of 10 responses per minute on a given trial) maps on to the underlying learning. Scaling issues make it impossible to know with any precision what the relation is between the observed performance and hypothetical learning functions.
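One way to make the negatively accelerated form concrete is to simulate a simple error-correction model. Under the Rescorla-Wagner rule (Rescorla & Wagner, 1972), the change in associative strength on each trial is proportional to the prediction error, ΔV = αβ(λ − V). In the sketch below, the parameter values are arbitrary illustrations, and the simulated V is a hypothetical learning variable rather than an observed response measure.

```python
# Sketch of the Rescorla-Wagner error-correction rule,
# dV = alpha_beta * (asymptote - V). Parameter values are arbitrary
# illustrations; V is a hypothetical learning variable, not behavior.

def rescorla_wagner(v_start, asymptote, alpha_beta, n_trials):
    """Return associative strength V across n_trials of training."""
    v, history = v_start, [v_start]
    for _ in range(n_trials):
        v += alpha_beta * (asymptote - v)  # change scales with prediction error
        history.append(v)
    return history

# Acquisition drives V toward 1; extinction then drives it toward 0.
acquisition = rescorla_wagner(0.0, 1.0, 0.3, 10)
extinction = rescorla_wagner(acquisition[-1], 0.0, 0.3, 10)

# Both phases are negatively accelerated: trial-to-trial changes shrink
# as V approaches its asymptote, so the earliest trials carry the
# largest changes.
changes = [abs(b - a) for a, b in zip(extinction, extinction[1:])]
assert all(c1 > c2 for c1, c2 in zip(changes, changes[1:]))
```

The model makes the scaling problem explicit: the simulated V is negatively accelerated by construction, but whether an observed performance curve with the same shape reflects the same underlying function requires assumptions about the mapping from V to behavior.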

In a series of experiments, Rescorla (e.g., 2001a, 2002a, 2002b) developed a novel test procedure for evaluating the potential differences in learning that occur during early and late parts of acquisition and extinction. By testing target stimuli that have different histories of acquisition and extinction in compound with stimuli that have received common treatment (and thus evoke common levels of responding), one can infer that differential effects on compound summation reflect differences in learning to the target stimuli. This approach has revealed that the negatively accelerated performance functions associated with acquisition and extinction do, in fact, reflect differences in underlying learning, with larger changes early compared to later in the behavioral treatment. This difference between the effects of early and late trials of extinction that has been so influential in Pavlovian analyses also is evident in operant experiments. For example, Hearst, Besley, and Farthing (1970) showed that inhibitory stimulus generalization gradients were steeper earlier than they were later during the course of extinction (c.f., Rescorla, 2002a). Together, these findings suggest that the impact of an extinction trial will differ depending on when in the course of extinction it occurs.

Response-generative effects of extinction

Much of the focus of research on extinction has been to examine the effects of different extinction contingencies on the stimulus or response that is the target of those contingencies. The dependent variable is usually changes in the response that was established during acquisition. Extinction, however, not only reduces or eliminates the targeted response, but it also generates or occasions increased variability in behavior. This variability can include topographical variations on the previously reinforced response (Antonitis, 1951) or the appearance of quite topographically distinct responses (e.g., Tinsley, et al 2002).

Increases in response variability in the presence of the extinction stimuli are commonly reported. Replicating an earlier experiment by Antonitis (1951) using rats, Eckerman and Lanson (1969) reinforced each response (a fixed-ratio 1 schedule) of pigeons at any point along a 10-in long strip composed of 20 separate response keys. During the FR 1 schedule, responses tended to recur along a narrow band of the strip, but during extinction responses occurred across a wider band of the strip and, for two of three pigeons, were more evenly distributed across this wider band.

In addition to variability in responding in the presence of target stimuli, extinction can induce new responding to other stimuli that are present during extinction. The classic example of this comes from Azrin, Hutchinson, & Hake (1966), who found that the onset of periods of extinction of key-peck responses of pigeons was accompanied by an increased frequency of attack on a restrained target pigeon by the pigeon whose responding was undergoing extinction. Such induced behavior is observed not only when an extinction procedure is formally implemented, but any time there is a signaled period of extinction. The form of the behavior conforms to the target object – for example, drinking if a lick tube is present, running if a running wheel is present, or pica if a material appropriate for pica is present. Such extinction-induced responding is not limited to the complete elimination of reinforcement. Within a schedule of reinforcement, for example, discriminated periods of nonreinforcement also may induce other-than-operant responses (e.g., Falk, 1966).

These variations in previously reinforced response topography and extinction-induced responding may be conceptualized as instances of resurgence (see da Silva, Maxwell, & Lattal, 2008, and Winterbauer & Bouton, 2010, for recent examples). Originally labeled regression by Keller & Schoenfeld (1950; see also Carey, 1951), the contemporary procedure for demonstrating resurgence is to first train and then extinguish a response. Either concurrent with or following this extinction, an alternative response is reinforced. Extinction of the second response then results in a transient reappearance of the still-nonreinforced first response, which defines resurgence. This behavioral process may operate generally whenever a response is extinguished: through a process of resurgence, previously learned responses may recur when the target response is no longer reinforced.

Resurgence occurs in the same stimulus conditions that were in effect during the pre-extinction condition. Extinction also can induce changes in behavior in stimulus conditions other than those in which extinction is occurring. Behavioral contrast is an example. Following training of responding in the presence of two alternating stimulus conditions (a multiple schedule), the schedule in one of the conditions is replaced with extinction. Responding is not only reduced in that component, but it also increases in the other component – behavioral contrast (Reynolds, 1961). Despite its label, reductions in response rate without concurrent reductions in reinforcement rate do not yield contrast (Halliday & Boakes, 1971). This has led some to suggest that Pavlovian stimulus-reinforcer relations are important for behavioral contrast (e.g., Gamzu & Schwartz, 1973; Schwartz, 1977; McSweeney, Ettinger, & Norman, 1981; but see Williams, 1983; Williams & McDevitt, 2001). However labeled, contrast has generality across different reinforcement conditions maintaining responding in the unchanged component, response classes, and species (see Williams, 1983, for a review), although much of the research on behavioral contrast has, in fact, been conducted using pigeons.

Long-term effects of extinction

A pervasive finding regarding the learning that occurs during extinction is that extinguished behavior returns under many conditions. The earliest documentation of this comes from Pavlov (1927) and Skinner (e.g., 1938). Pavlov found that extinguished conditioned responses return with time (spontaneous recovery effects; see also Mazur, 1994; Rescorla, 2004) or with post-extinction presentations of the US or reinforcer (reinstatement effects; see also Franks & Lattal, 1976; Rescorla & Heth, 1975). Pavlov also reported an early instance in which an extinguished conditioned response returned when testing moved from the extinction context (the laboratory) to a different context (a classroom; renewal effects, see also Bouton & Bolles, 1979; Nakajima, Tanaka, Urushihara, & Imada, 2000). Both recent findings and the early work of Pavlov and Skinner demonstrate that the loss of behavior during extinction is subject to constraints: as the conditions after extinction become more similar to the conditions prior to extinction, responding often returns.

This effect of pre-extinction and post-extinction similarity on responding is nicely illustrated in operant studies by Uhl and Garcia (1969) and Hearst et al. (1970), who used what Hearst et al. labeled a “resistance to reinforcement” test of the efficacy of extinction in eliminating responding. After responding was attenuated to low levels, they reinstated the conditions in effect prior to extinction and tracked the course of the return of responding. Uhl and Garcia eliminated lever pressing previously maintained by a variable-interval (VI) schedule either by removing the reinforcer (extinction) or by a response-omission procedure in which each lever press postponed reinforcer delivery. After responding was reduced to near zero, either the VI schedule or, for some subjects, a variable-time schedule that delivered response-independent food was reinstated. The previously eliminated responding returned at approximately the same rate and to approximately the same level regardless of whether it had been extinguished or reduced by the response-omission procedure. Although not widely used to assess extinction effects, this resistance-to-reinforcement procedure could prove useful in testing the long-term effects of extinction following other pre-extinction conditions.

Studies by Delamater (1997) demonstrate that reinstatement reflects both the contribution of general processes to responding and the re-establishment of responding associated with specific outcomes. In Delamater’s experiments, two CSs were associated with two USs (e.g., tone-sucrose and noise-pellets), and then both were extinguished. Following extinction, presentation of sucrose reinstated some responding during the noise (demonstrating outcome-independent reinstatement). But, importantly, sucrose produced more responding during the tone than during the noise (demonstrating outcome selectivity in reinstatement). Experiments like these reveal that the content of the CS-US association is preserved during extinction and that reinstatement occurs both through general outcome-independent processes and through reintroduction of the specific outcome used in the acquisition phase.

Theoretical analyses of extinction

Two major theoretical research programs (Lakatos, 1970) or traditions (Laudan, 1977) in the study of learned behavior are associationist (e.g., Mackintosh, 1983) and behavior analytic or operant (e.g., Skinner, 1938). For both, a central question is: Why does extinction occur?

In the operant tradition, the causes of extinction most often are to be understood through a thoroughgoing experimental analysis of the conditions under which extinction occurs. Skinner (1938) originally conceived of extinction as metaphorically draining the reflex reserve created when responding was being reinforced. He subsequently abandoned this mechanistic view of extinction, with what he considered to be its unnecessary theoretical constructs, in favor of a functional approach to extinction: it is to be understood by identifying the variables of which it is a function. Ferster and Skinner (1957), for example, maintained responding under different schedules of reinforcement and then observed the effects of discontinuing the reinforcer on the persistence of response rates and patterns.

Others have examined in detail the role of reinforcement parameters, behavioral histories, and stimulus variables as they contribute to the occurrence of operant extinction (for a review see Lattal, St. Peter, & Escobar, in press). As already noted, Nevin’s analysis involving extinction, described in the Resistance to extinction: II. Response maintenance as a function of acquisition history section, is a major contribution to the functional understanding of extinction. The importance of context in accounting for operant extinction is illustrated by Kelleher’s (1961) experiment, described in the Resistance to extinction: I. Response potentiation and conditioned reinforcement section above, showing how similarity between pre-extinction and extinction conditions affects the course of extinction, a point that also is illustrated by the conditions that promote operant reinstatement (Franks & Lattal, 1976). Indeed, much of what is known about the general principles of operant extinction comes from a functional approach in the Skinnerian tradition. This approach is not without its critics, however, even within behavior analysis (e.g., Staddon, 1993). Some theories of operant extinction include such hypothetical constructs as competing responses (e.g., Adelman & Maatsch, 1955) and inhibition (Hearst et al., 1970). Nonetheless, this generally functional approach has advanced both the understanding of extinction and the use of extinction in a host of applications.

Associationist theories of extinction are grounded in empirical findings from both Pavlovian and operant extinction, with much of the recent theorizing coming from the Pavlovian analyses (reviewed in Rescorla, 2001b). One of the key assumptions of modern views of Pavlovian conditioning is that both initial conditioning and extinction occur as a function of the error between the reinforcer that is predicted based on the available cues and the reinforcer that is obtained. This notion of prediction error is classically illustrated in the Rescorla-Wagner (1972) model of conditioning, although models before and after have capitalized on the predictive error idea in different ways (Bush & Mosteller, 1955; Mackintosh, 1975; Pearce, 1987). By this view, extinction occurs because the reinforcer that becomes anticipated during acquisition is omitted, which creates a negative prediction error, resulting in the loss of behavior. This approach has led to postulation of novel mechanisms involved in operant reinforcement and extinction (e.g., K.M. Lattal & Nakajima, 1998; Williams, 1984) and testing the implications of this and other theoretical approaches to extinction has been fruitful for the development and refinement of general quantitative theories of extinction (see Killeen, Sanabria, & Dolgov, 2009).
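The prediction-error idea described above can be sketched numerically with the familiar Rescorla-Wagner update rule. The parameter values here (a learning-rate product of 0.3 and asymptotes of 1.0 for the US and 0.0 for its omission) are illustrative assumptions, not values from any study discussed in this review:

```python
# Minimal sketch of the Rescorla-Wagner (1972) trial-by-trial update.
# Associative strength V changes in proportion to the prediction error
# (lam - V): the difference between the obtained and predicted reinforcer.

def rescorla_wagner(v, lam, alpha_beta=0.3):
    """One conditioning trial; alpha_beta is an illustrative learning rate."""
    return v + alpha_beta * (lam - v)

v = 0.0
for _ in range(20):          # acquisition: US delivered, asymptote lam = 1.0
    v = rescorla_wagner(v, 1.0)
print(f"after acquisition: V = {v:.3f}")   # V approaches the asymptote 1.0

for _ in range(20):          # extinction: US omitted, so lam = 0.0
    # prediction error (0.0 - v) is negative, so V declines each trial
    v = rescorla_wagner(v, 0.0)
print(f"after extinction:  V = {v:.3f}")   # V approaches 0
```

On this account, extinction is simply acquisition run with a zero asymptote: the same error-correction rule that built the association dismantles the behavior when the anticipated reinforcer is withheld.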

The importance of negative prediction error in driving extinction is central to many theories, but the prediction error concept itself does not speak to the inhibitory association that is assumed by some theorists to underlie extinction. That extinction is mediated by some inhibitory mechanism is common to many theories, including some theories of extinction promulgated by those grounded in the behavior-analytic tradition (e.g., Hearst et al., 1970). Inhibition, however, generally is more central to and conceptually developed in associationist theories. Different versions of these theories place differing emphasis on inhibition of the conditional stimulus itself (e.g., Robbins, 1990; Schmajuk & Larrauri, 2006), the unconditional stimulus itself (e.g., Rescorla & Heth, 1975; Stollhoff & Eisenhardt, 2009), and different combinations of conditional and unconditional stimuli and responses (e.g., Rescorla, 1993; see Delamater, 2004 for a review).

In addition to appealing to inhibition in relations between stimuli, responses, and outcomes, some theories suggest that inhibitory learning about the context in which extinction occurs is important for mediating changes in behavior during extinction. There is some evidence that extinction of a Pavlovian CS results in the context of extinction gaining inhibitory value (e.g., Polack, Laborda, & Miller, 2011), though other evidence suggests that contextual inhibition is not necessary for extinction (e.g., Bouton & Swartzentruber, 1986). Bouton (2004) suggested that context mediates or controls responding by serving as a retrieval cue for the learning that occurs during extinction. Bouton’s theory broadens the traditional definition of context to include not just external physical characteristics (such as visual and tactile cues), but also long-term and short-term temporal contexts (e.g., long temporal intervals, like those between the end of extinction and testing for spontaneous recovery, and short temporal intervals, like those between extinction trials; Bouton & Garcia-Gutierrez, 2006), and internal contexts (such as drug states; e.g., K.M. Lattal, 2007). This theory, which was initially developed to account for a specific data set related to Pavlovian conditioning and extinction, has been useful for making novel predictions about the nature of extinction of operant behavior, including extinction in applied settings (cf. Crombag, Bossert, Koya, & Shaham, 2008).

Operant accounts of extinction similarly emphasize the importance of discriminative stimulus control of responding during extinction as a function of discriminative stimulus-reinforcement relations in pre-extinction conditions (e.g., Skinner, 1950; Nevin et al., 1987). Incorporating these different aspects of context into one theoretical approach has led to the development of general theories of operant and Pavlovian extinction (e.g., Bouton & Swartzentruber, 1991). These theories emphasize the importance of conceptualizing extinguishing or extinguished responding in different ways, with the goal of identifying similar principles to account for the occurrence of behavior during pre-extinction, extinction, and post-extinction testing.

Applications of extinction

Outside the laboratory, extinction has been used with considerable success as a tool for eliminating undesirable behavior. At the clinical level, extinction combines Pavlovian and operant approaches that focus on weakening cue-evoked reactions. There are countless examples of successful applications of extinction (e.g., Rothbaum & Schwartz, 2002), though there are also examples of failures (see Conklin & Tiffany, 2002). These failures may be due to various factors, but the general demonstration that the behavior changes induced by extinction may not persist across time and contexts illustrates some of the challenges of using extinction as a clinical behavioral intervention.

Indeed, the progress made during extinction-based therapies may reverse as time passes or when cues associated with the extinguished behavior (e.g., physical contexts, people, drugs) are encountered. Clinical applications of extinction have recognized its context-dependent nature by attempting to arrange treatment in ways that promote the generalization of extinction beyond one particular context (see, e.g., Stokes & Baer, 1977). Strategies include conducting extinction in multiple contexts (e.g., Gunther, Denniston, & Miller, 1998; but see Bouton, Garcia-Gutierrez, Zilski, & Moody, 2006) and associating particular reminder cues with extinction that can then be used outside of the extinction context to recall the extinction memory (see Brooks, 2005).

Extinction is not only one of the oldest forms of treatment for eliminating or reducing behavior (see Fuller, 1949, for an early demonstration of operant extinction with a human), but it is also the basis for other useful treatment techniques. It is, for example, incorporated into both differential-reinforcement-of-other-behavior (DRO) and differential-reinforcement-of-alternative-behavior (DRA) procedures. Both techniques have come into vogue for treating problem behavior because of ethical concerns associated with simply eliminating reinforcers for negative behavior without concurrently attempting to reinforce some more positive behavior. In both techniques, the procedure is conventional operant extinction until the first reinforcer is delivered, either for not emitting the target response or for emitting the alternative response. Thus, in implementing treatment programs involving these procedures, consideration must be given both to the stimulus and outcome variables that affect the rate and ultimate level to which responding might be reduced and to those related to the resistance of extinguished responding to recovery once treatment is terminated (as discussed above).

Furthermore, the potential generative effects of any extinction-based treatment must be considered. Depending on the alternative behavior generated either in the presence of the stimuli associated with the extinction contingency or in the presence of other, related environments, the generative effects of extinction can be a blessing or a curse. As the targeted behavior is eliminated, other, more positive responses may emerge that then can be reinforced. Alternatively, the behavior that emerges may be as unacceptable as, or worse for the client’s well-being than, the targeted response (see K.A. Lattal & St. Peter Pipkin, 2009).

Conclusions

Extinction is a procedure and a process that is significant for the empirical and theoretical study of learning and for its practical applications. Understanding how behavior changes during extinction has been key to understanding many phenomena of Pavlovian and operant acquisition and maintenance. Describing the learning processes that underlie extinction has been central for development of both general descriptive accounts of behavior and for theoretical approaches that speculate about the mechanisms that operate during learning. Many of the methodological issues described in this review come up repeatedly in the analysis of extinction, independently of behavioral preparation or theoretical perspective. In both domains, there also has been a great appreciation that many factors must be considered in analyzing the changes in behavior that occur when environmental contingencies change. As research on Pavlovian and operant extinction continues, additional points of contact will emerge that will suggest further unifying principles and clinical applications.

Highlights.

  • This article reviews research and theory on Pavlovian and operant extinction.

  • This article focuses on common procedural and interpretational challenges in Pavlovian and operant extinction.

  • This article addresses applications of Pavlovian and operant research on extinction.

Acknowledgement

Preparation of this article was supported by grants from the National Institutes of Health to KML (MH077111 and DA025922). Correspondence may be addressed to lattalm@ohsu.edu or klattal@wvu.edu.


References

  1. Adelman HM, Maatsch JL. Resistance to extinction as a function of the type of response elicited by frustration. Journal of Experimental Psychology. 1955;50:61–65. doi: 10.1037/h0042017. [DOI] [PubMed] [Google Scholar]
  2. Anger D, Anger K. Behavior changes during repeated eight-day extinctions. Journal of the Experimental Analysis of Behavior. 1976;26:181–190. doi: 10.1901/jeab.1976.26-181. [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Antonitis JJ. Response variability in the white rat during conditioning, extinction, and reconditioning. Journal of Experimental Psychology. 1951;42:273–281. doi: 10.1037/h0060407. [DOI] [PubMed] [Google Scholar]
  4. Azrin NH, Hutchinson RR, Hake DF. Extinction-induced aggression. Journal of the Experimental Analysis of Behavior. 1966;9:191–204. doi: 10.1901/jeab.1966.9-191. [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Baker AG. Contextual conditioning during free-operant extinction: Unsignaled, signaled, and backward-signaled noncontingent food. Animal Learning & Behavior. 1990;18:59–70. [Google Scholar]
  6. Blackman DP, Honig WK, Staddon JER. Handbook of Operant Behavior. Prentice Hall; New York: 1977. Conditioned suppression and the effects of classical conditioning on operant behavior; pp. 340–363. [Google Scholar]
  7. Bouton ME. Context and behavioral processes in extinction. Learn Mem. 2004;11:485–94. doi: 10.1101/lm.78804. [DOI] [PubMed] [Google Scholar]
  8. Bouton ME, Bolles RC. Contextual control of the extinction of conditioned fear. Learning & Motivation. 1979;10:445–466. [Google Scholar]
  9. Bouton ME, Garcia-Gutierrez A. Intertrial interval as a contextual stimulus. Behav Processes. 2006;71:307–17. doi: 10.1016/j.beproc.2005.12.003. [DOI] [PubMed] [Google Scholar]
  10. Bouton ME, Garcia-Gutierrez A, Zilski J, Moody EW. Extinction in multiple contexts does not necessarily make extinction less vulnerable to relapse. Behav Res Ther. 2006;44:983–94. doi: 10.1016/j.brat.2005.07.007. [DOI] [PubMed] [Google Scholar]
  11. Bouton ME, Swartzentruber D. Analysis of the associative and occasion-setting properties of contexts participating in a Pavlovian discrimination. Journal of Experimental Psychology: Animal Behavior Processes. 1986;12:333–350. [Google Scholar]
  12. Bouton ME, Swartzentruber D. Sources of relapse after extinction in Pavlovian and instrumental learning. Clinical Psychology Review. 1991;11:123–140. [Google Scholar]
  13. Brooks DC. Alcohol ataxia tolerance: Extinction cues, spontaneous recovery, and relapse. International Journal of Comparative Psychology. 2005;18:141–153. [Google Scholar]
  14. Bush RR, Mosteller F. Stochastic models for learning. Wiley; New York: 1955. [Google Scholar]
  15. Carey JP. Reinstatement of previously learned responses under conditions of extinction: A study of “regression. American Psychologist. 1951;6:284. [Google Scholar]
  16. Cohen SL, Riley DS, Weigle PA. Tests of behavioral momentum in simple and multiple schedules with rats and pigeons. Journal of the Experimental Analysis of Behavior. 1993;60:255–291. doi: 10.1901/jeab.1993.60-255. [DOI] [PMC free article] [PubMed] [Google Scholar]
  17. Conklin CA, Tiffany ST. Applying extinction research and theory to cue-exposure addiction treatments. Addiction. 2002;97:155–67. doi: 10.1046/j.1360-0443.2002.00014.x. [DOI] [PubMed] [Google Scholar]
  18. Crombag HS, Bossert JM, Koya E, Shaham Y. Review. Context-induced relapse to drug seeking: a review. Philos Trans R Soc Lond B Biol Sci. 2008;363:3233–43. doi: 10.1098/rstb.2008.0090. [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Crombag HS, Galarce EM, Holland PC. Pavlovian influences on goal-directed behavior in mice: the role of cue-reinforcer relations. Learning and Memory. 2008;15:299–303. doi: 10.1101/lm.762508. [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. da Silva SP, Maxwell ME, Lattal KA. Concurrent resurgence and behavioral history. J Exp Anal Behav. 2008;90:313–31. doi: 10.1901/jeab.2008.90-313. [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Davis M, Wagner AR. Startle responsiveness after habituation to different intensities of tone. Psychonomic Science. 1968;12:337–338. [Google Scholar]
  22. Delamater AR. Outcome-selective effects of intertrial reinforcement in a Pavlovian appetitive conditioning paradigm with rats. Animal Learning & Behavior. 1995;23:31–39. [Google Scholar]
  23. Delamater AR. Selective reinstatement of stimulus-outcome associations. Animal Learning & Behavior. 1997;25:400–412. [Google Scholar]
  24. Delamater AR. Experimental extinction in Pavlovian conditioning: behavioural and neuroscience perspectives. Q J Exp Psychol B. 2004;57:97–132. doi: 10.1080/02724990344000097. [DOI] [PubMed] [Google Scholar]
  25. Drew MR, Yang C, Ohyama T, Balsam PD. Temporal specificity of extinction in autoshaping. J Exp Psychol Anim Behav Process. 2004;30:163–76. doi: 10.1037/0097-7403.30.3.163. [DOI] [PubMed] [Google Scholar]
  26. Eckerman DA, Lanson RN. Variability of response location for pigeons responding under continuous reinforcement, intermittent reinforcement. Journal of the Experimental Analysis of Behavior. 1969;12:73–80. doi: 10.1901/jeab.1969.12-73. [DOI] [PMC free article] [PubMed] [Google Scholar]
  27. Falk JL. Schedule-induced polydipsia as a function of fixed interval length. Journal of the Experimental Analysis of Behavior. 1966;9:37–39. doi: 10.1901/jeab.1966.9-37. [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Fantino E, Romanowich P. The effect of conditioned reinforcement rate on choice: a review. J Exp Anal Behav. 2007;87:409–21. doi: 10.1901/jeab.2007.44-06. [DOI] [PMC free article] [PubMed] [Google Scholar]
  29. Ferster CB, Skinner BF. Schedules of reinforcement. Appleton Century-Crofts; New York: 1957. [Google Scholar]
  30. Franks GJ, Lattal KA. Antecedent reinforcement schedule training and operant response reinstatement in rats. Animal Learning and Behavior. 1976;4:374–378. [Google Scholar]
  31. Fuller P. Operant conditioning of a human vegetative organism. American Journal of Psychology. 1949;62:587–590. [PubMed] [Google Scholar]
  32. Gallistel CR, Gibbon J. Time, rate, and conditioning. Psychological Review. 2000;107:289–344. doi: 10.1037/0033-295x.107.2.289. [DOI] [PubMed] [Google Scholar]
  33. Gamzu E, Schwartz B. The maintenance of key pecking by stimulus-contingent and response-independent food presentation. Journal of the Experimental Analysis of Behavior. 1973;19:65–72. doi: 10.1901/jeab.1973.19-65. [DOI] [PMC free article] [PubMed] [Google Scholar]
  34. Gibbon J, et al. Partial reinforcement in autoshaping with pigeons. Animal Learning & Behavior. 1980;8:45–59. [Google Scholar]
  35. Gottlieb DA. Is the number of trials a primary determinant of conditioned responding? J Exp Psychol Anim Behav Process. 2008;34:185–201. doi: 10.1037/0097-7403.34.2.185. [DOI] [PubMed] [Google Scholar]
  36. Gottlieb DA, Rescorla RA. Within-subject effects of number of trials in rat conditioning procedures. J Exp Psychol Anim Behav Process. 2010;36:217–31. doi: 10.1037/a0016425. [DOI] [PubMed] [Google Scholar]
  37. Gunther LM, Denniston JC, Miller RR. Conducting exposure treatment in multiple contexts can prevent relapse. Behav Res Ther. 1998;36:75–91. doi: 10.1016/s0005-7967(97)10019-5. [DOI] [PubMed] [Google Scholar]
  38. Halliday MS, Boakes RA. Behavioral contrast and response independent reinforcement. Journal of the Experimental Analysis of Behavior. 1971;16:429–434. doi: 10.1901/jeab.1971.16-429. [DOI] [PMC free article] [PubMed] [Google Scholar]
  39. Hearst E. The classical-instrumental distinction: Reflexes, voluntary behavior, and categories of associative learning. In: Estes WK, editor. Handbook of learning and cognitive processes. Lawrence Erlbaum Associates; Hillsdale, NJ: 1976. pp. 181–223. [Google Scholar]
  40. Hearst E, Besley S, Farthing GW. Inhibition and the stimulus control of operant behavior. Journal of the Experimental Analysis of Behavior. 1970;14:373–409. doi: 10.1901/jeab.1970.14-s373. [DOI] [PMC free article] [PubMed] [Google Scholar]; Journal of the Experimental Analysis of Behavior. 4:1–5. doi: 10.1901/jeab.1961.4-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Hemmes NS, Eckerman DA. Positive interaction (induction) in multiple variable-interval, differential-reinforcement-of-high-rate schedules. Journal of the Experimental Analysis of Behavior. 1972;17:51–57. doi: 10.1901/jeab.1972.17-51. [DOI] [PMC free article] [PubMed] [Google Scholar]
  42. Kelleher RT. Schedules of conditioned reinforcement during experimental extinction. 1961 doi: 10.1901/jeab.1961.4-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. Keller FS, Schoenfeld WN. Principles of psychology. Appleton-Century Crofts; New York: 1950. [Google Scholar]
  44. Killeen PR, Sanabria F, Dolgov I. The dynamics of conditioning and extinction. Journal of Experimental Psychology: Animal Behaior Processes. 2009;35:447–472. doi: 10.1037/a0015626. [DOI] [PMC free article] [PubMed] [Google Scholar]
  45. Kimble GA. Hilgard and Marquis’ Conditioning and Learning. Prentice Hall; New York: 1961. [Google Scholar]
  46. Konorski J. Integrative activity of the brain, an interdisciplinary approach. University of Chicago Press; Chicago: 1967. [Google Scholar]
  47. Lakatos I. Falsification and the methodology of scientific research programs. In: Lakatos I, Musgrave A, editors. Criticism and the growth of knowledge. Cambridge University Press; Cambridge, England: pp. 91–196. [Google Scholar]
  48. Lattal KA, St. Peter Pipkin C. Resurgence of previously reinforced responding: Research and application. The Behavior Analyst Today,10. 2009 http://www.behavior-analyst-today.net.
  49. Lattal KA, St. Peter C, Escobar R. Operant extinction: Elimination and generation of behavior. In: Madden GJ, Dube WV, Hackenberg TD, Hanley GP, Lattal KA, editors. APA handbooks in psychology. APA handbook of behavior analysis, Vol. 2: Translating principles into practice. American Psychological Association; Washington, DC: in press. [Google Scholar]
  50. Lattal KM. Trial and intertrial durations in Pavlovian conditioning: Issues of learning and performance. Journal of Experimental Psychology: Animal Behavior Processes. 1999;25:433–450. doi: 10.1037/0097-7403.25.4.433. [DOI] [PubMed] [Google Scholar]
  51. Lattal KM. Effects of ethanol on the encoding, consolidation, and expression of extinction following contextual fear conditioning. Behav Neurosci. 2007;121:1280–1292. doi: 10.1037/0735-7044.121.6.1280. [DOI] [PMC free article] [PubMed] [Google Scholar]
  52. Lattal KM, Nakajima S. Overexpectation in appetitive Pavlovian and instrumental conditioning. Animal Learning & Behavior. 1998;26:351–360. [Google Scholar]
  53. Laudan L. Progress and its problems. University of California Press; Berklely, CA: 1977. [Google Scholar]
  54. Lerman DC, Iwata BA. Prevalence of the extinction burst and its attenuation during treatment. Journal of Applied Behavior Analysis. 1995;28:93–94. doi: 10.1901/jaba.1995.28-93. [DOI] [PMC free article] [PubMed] [Google Scholar]
  55. Leslie JC, Shaw D, Gregg G, McCormick N, Reynolds DS, Dawson GR. Effects of reinforcement schedule on facilitation of operant extinction by chlordiazepoxide. Journal of the Experimental Analysis of Behavior. 2005;84:327–338. doi: 10.1901/jeab.2005.71-04. [DOI] [PMC free article] [PubMed] [Google Scholar]
  56. Lindblom LL, Jenkins HM. Responses eliminated by noncontingent or negatively contingent reinforcement recover in extinction. Journal of Experimental Psychology: Animal Behavior Processes. 1981;7:175–190. [PubMed] [Google Scholar]
  57. Mackintosh NJ. A theory of attention: Variations in the associability of stimuli with reinforcement. Psychological Review. 1975;82:276–298. [Google Scholar]
  58. Mackintosh NJ. Conditioning and associative learning. Oxford University Press; New York: 1983. [Google Scholar]
  59. Mazur JE. Learning and Behavior. Prentice Hall; New York: 1994. [Google Scholar]
  60. McSweeney FK, Ettinger RH, Norman WD. Three versions of the additive theories of behavioral contrast. Journal of the Experimental Analysis of Behavior. 1981;36:285–297. doi: 10.1901/jeab.1981.36-285. [DOI] [PMC free article] [PubMed] [Google Scholar]
  61. Mowrer OH. Learning theory and behavior. Wiley; New York: 1960. [Google Scholar]
  62. Nakajima S, Tanaka S, Urushihara K, Imada H. Renewal of extinguished lever-press responses upon return to the training context. Learning & Motivation. 2000;31:416–431. [Google Scholar]
  63. Nevin JA, Grace RC. Behavioral momentum and the law of effect. Behavioral and Brain Sciences. 2000;23:73–130. doi: 10.1017/s0140525x00002405. [DOI] [PubMed] [Google Scholar]
  64. Nevin JA, Grace RC. Resistance to extinction in the steady state and in transition. Journal of Experimental Psycholology: Animal Behavior Processes. 2005;31:199–212. doi: 10.1037/0097-7403.31.2.199. [DOI] [PubMed] [Google Scholar]
  65. Nevin JA, Smith LD, Roberts J. Does contingent reinforcement strengthen operant behavior? Journal of the Experimental Analysis of Behavior. 1987;48:17–33. doi: 10.1901/jeab.1987.48-17. [DOI] [PMC free article] [PubMed] [Google Scholar]
  66. Nevin JA, Tota ME, Torquato RD, Shull RL. Alternative reinforcement increases resistance to change: Pavlovian or operant contingencies? Journal of the Experimental Analysis of Behavior. 1990;53:359–379. doi: 10.1901/jeab.1990.53-359. [DOI] [PMC free article] [PubMed] [Google Scholar]
  67. Pavlov IP. Conditioned reflexes, an investigation of the physiological activity of the cerebral cortex. Oxford University Press; London: 1927. [DOI] [PMC free article] [PubMed] [Google Scholar]
  68. Pearce JM. A model for stimulus generalization in Pavlovian conditioning. Psychol Rev. 1987;94:61–73. [PubMed] [Google Scholar]
  69. Podlesnik CA, Shahan TA. Extinction, relapse, and behavioral momentum. Behav Processes. 2010;84:400–11. doi: 10.1016/j.beproc.2010.02.001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  70. Polack CW, Laborda MA, Miller RR. Extinction context as a conditioned inhibitor. Learn Behav. 40:24–33. doi: 10.3758/s13420-011-0039-1. [DOI] [PubMed] [Google Scholar]
  71. Premack D. In: Levine D, editor. Reinforcement theory; Nebraska symposium on motivation; Lincoln NE: University of Nebraska Press. 1965.pp. 123–190. [Google Scholar]
  72. Rescorla RA. A Pavlovian analysis of goal-directed behavior. American Psychologist. 1987;42:119–129.
  73. Rescorla RA. Inhibitory associations between S and R in extinction. Animal Learning & Behavior. 1993;21:327–336.
  74. Rescorla RA. Are associative changes in acquisition and extinction negatively accelerated? Journal of Experimental Psychology: Animal Behavior Processes. 2001;27:307–315.
  75. Rescorla RA. Comparison of the rates of associative change during acquisition and extinction. J Exp Psychol Anim Behav Process. 2002a;28:406–415.
  76. Rescorla RA. Savings tests: separating differences in rate of learning from differences in initial levels. J Exp Psychol Anim Behav Process. 2002b;28:369–377.
  77. Rescorla RA. Protection from extinction. Learn Behav. 2003;31:124–132. doi: 10.3758/bf03195975.
  78. Rescorla RA. Spontaneous recovery. Learn Mem. 2004;11:501–509. doi: 10.1101/lm.77504.
  79. Rescorla RA. Spontaneous recovery of excitation but not inhibition. J Exp Psychol Anim Behav Process. 2005;31:277–288. doi: 10.1037/0097-7403.31.3.277.
  80. Rescorla RA, Heth CD. Reinstatement of fear to an extinguished conditioned stimulus. Journal of Experimental Psychology: Animal Behavior Processes. 1975;1:88–96.
  81. Rescorla RA, Holland PC. Some behavioral approaches to the study of learning. In: Rosenzweig MR, Bennett EL, editors. Neural Mechanisms of Learning and Memory. Cambridge, MA: MIT Press; 1976. pp. 165–192.
  82. Rescorla RA, Skucy JC. Effect of response-independent reinforcers during extinction. Journal of Comparative and Physiological Psychology. 1969;67:381–389.
  83. Rescorla RA, Solomon RL. Two-process learning theory: Relationships between Pavlovian conditioning and instrumental learning. Psychol Rev. 1967;74:151–182. doi: 10.1037/h0024475.
  84. Rescorla RA, Wagner AR. A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. In: Black AH, Prokasy WF, editors. Classical Conditioning II: Current Research and Theory. New York: Appleton-Century-Crofts; 1972. pp. 64–99.
  85. Reynolds GS. Behavioral contrast. Journal of the Experimental Analysis of Behavior. 1961;4:57–71. doi: 10.1901/jeab.1961.4-57.
  86. Robbins SJ. Mechanisms underlying spontaneous recovery in autoshaping. Journal of Experimental Psychology: Animal Behavior Processes. 1990;16:235–249.
  87. Robinson TE, Berridge KC. The incentive sensitization theory of addiction: some current issues. Philos Trans R Soc Lond B Biol Sci. 2008;363:3137–3146. doi: 10.1098/rstb.2008.0093.
  88. Rohrbaugh M, Riccio DC, Arthur A. Paradoxical enhancement of conditioned suppression. Behav Res Ther. 1972;10:125–130. doi: 10.1016/s0005-7967(72)80005-6.
  89. Rothbaum BO, Schwartz AC. Exposure therapy for posttraumatic stress disorder. Am J Psychother. 2002;56:59–75. doi: 10.1176/appi.psychotherapy.2002.56.1.59.
  90. Schmajuk NA, Larrauri JA. Experimental challenges to theories of classical conditioning: application of an attentional model of storage and retrieval. J Exp Psychol Anim Behav Process. 2006;32:1–20. doi: 10.1037/0097-7403.32.1.1.
  91. Schwartz B. Studies of operant and reflexive key pecks in the pigeon. Journal of the Experimental Analysis of Behavior. 1977;27:301–313. doi: 10.1901/jeab.1977.27-301.
  92. Shull RL, Gaynor ST, Grimes JA. Response rate viewed as engagement bouts: Resistance to extinction. Journal of the Experimental Analysis of Behavior. 2002;77:211–231. doi: 10.1901/jeab.2002.77-211.
  93. Sidman M. Tactics of Scientific Research. New York: Basic Books; 1960.
  94. Skinner BF. The Behavior of Organisms. New York: Appleton-Century-Crofts; 1938.
  95. Skinner BF. Are theories of learning necessary? Psychological Review. 1950;57:193–216. doi: 10.1037/h0054367.
  96. Staddon JE. The conventional wisdom of behavior analysis. J Exp Anal Behav. 1993;60:439–447. doi: 10.1901/jeab.1993.60-439.
  97. Stafford JM, Lattal KM. Direct comparisons of the size and persistence of anisomycin-induced consolidation and reconsolidation deficits. Learn Mem. 2009;16:494–503. doi: 10.1101/lm.1452209.
  98. Stokes TF, Baer DM. An implicit technology of generalization. Journal of Applied Behavior Analysis. 1977;10:349–367. doi: 10.1901/jaba.1977.10-349.
  99. Stollhoff N, Eisenhardt D. Consolidation of an extinction memory depends on the unconditioned stimulus magnitude previously experienced during training. J Neurosci. 2009;29:9644–9650. doi: 10.1523/JNEUROSCI.0495-09.2009.
  100. Stout SC, Miller RR. Sometimes-competing retrieval (SOCR): a formalization of the comparator hypothesis. Psychol Rev. 2007;114:759–783. doi: 10.1037/0033-295X.114.3.759.
  101. Tinsley MR, Timberlake W, Sitomer M, Widman DR. Conditioned inhibitory effects of discriminated Pavlovian training with food in rats depend on interactions of search modes, related repertoires, and response measures. Anim Learn Behav. 2002;30:217–227. doi: 10.3758/bf03192831.
  102. Uhl CN, Garcia EE. Comparison of omission with extinction in response elimination in rats. Journal of Comparative and Physiological Psychology. 1969;69:554–562.
  103. Brooks DC. Alcohol ataxia tolerance: Extinction cues, spontaneous recovery, and relapse. International Journal of Comparative Psychology. 2005;18:141–153.
  104. Wilkinson JL, Li C, Bevins RA. Pavlovian drug discrimination with bupropion as a feature positive occasion setter: substitution by methamphetamine and nicotine, but not cocaine. Addict Biol. 2009;14:165–173. doi: 10.1111/j.1369-1600.2008.00141.x.
  105. Williams BA. Another look at contrast in multiple schedules. Journal of the Experimental Analysis of Behavior. 1983;39:345–384. doi: 10.1901/jeab.1983.39-345.
  106. Williams BA. Stimulus control and associative learning. J Exp Anal Behav. 1984;42:469–483. doi: 10.1901/jeab.1984.42-469.
  107. Williams BA. Conditioned reinforcement: Neglected or outmoded explanatory construct? Psychonomic Bulletin & Review. 1994;1:457–475. doi: 10.3758/BF03210950.
  108. Williams BA, McDevitt MA. Competing determinants of stimulus value in anticipatory contrast. Animal Learning & Behavior. 2001;29:302–310.
  109. Winterbauer NE, Bouton ME. Mechanisms of resurgence of an extinguished instrumental behavior. Journal of Experimental Psychology: Animal Behavior Processes. 2010;36:343–353. doi: 10.1037/a0017365.
  110. Woodruff G, Conner N, Gamzu E, Williams DR. Associative interaction: joint control of key pecking by stimulus-reinforcer and response-reinforcer relationships. J Exp Anal Behav. 1977;28:133–144. doi: 10.1901/jeab.1977.28-133.
