Abstract
In Experiment 1, 4 pigeons were trained on a multiple chain schedule in which the initial link was a variable-interval (VI) 20-s schedule signalled by a red or green center key, and terminal links required four responses made to the left (L) and/or right (R) keys. In the REPEAT component, signalled by red keylights, only LRLR terminal-link response sequences were reinforced, while in the VARY component, signalled by green keylights, terminal-link response sequences were reinforced if they satisfied a variability criterion. The reinforcer rate for both components was equated by adjusting the reinforcer probability for correct REPEAT sequences across sessions. Results showed that initial- and terminal-link responding in the VARY component was generally more resistant to prefeeding, extinction, and response-independent food than responding in the REPEAT component. In Experiment 2, the REPEAT and VARY contingencies were arranged as terminal links of a concurrent chain and the relative reinforcer rate was manipulated across conditions. For all pigeons, initial-link response allocation was biased toward the alternative associated with the VARY terminal link. These results replicate previous reports that operant variation is more resistant to change than operant repetition (Doughty & Lattal, 2001), and show that variation is preferred to repetition with reinforcer-related variables controlled. Behavioral momentum theory (Nevin & Grace, 2000) predicts the covariation of preference and resistance to change in Experiments 1 and 2, but does not explain why these aspects of behavior should depend on contingencies that require repetition or variation.
Keywords: operant variation, stereotypy, resistance to change, choice, multiple schedules, concurrent chains, key peck, pigeons
The three-term operant contingency, in which a response is reinforced in the presence of a discriminative stimulus, is a fundamental unit of behavior (Davison & Nevin, 1999; Skinner, 1969). According to behavioral momentum theory (Nevin & Grace, 2000), resistance to change is determined by the Pavlovian, stimulus–reinforcer relation and is identified with the traditional notion of response strength (Nevin, 1974), whereas response rate is determined by the response–reinforcer contingency (Nevin, Tota, Torquato & Shull, 1990; Podlesnik & Shahan, 2008). Resistance to change is typically measured by training subjects on a multiple schedule and then disrupting behavior by prefeeding, extinction, or free food during the inter-component interval. Responding in the component that decreases less, relative to baseline, is said to have greater resistance to change.
If access to the multiple schedule components is produced by responses during a choice phase, the procedure is known as concurrent chains (see Grace & Hucks, in press, for review). Response allocation during the choice phase (‘initial links') provides a measure of the relative value of the component schedules (‘terminal links'). According to behavioral momentum theory, relative resistance to change of responding in a multiple schedule and choice in concurrent chains should be correlated: Discriminative stimuli that signal relatively richer conditions of reinforcement should support responding that is more resistant to disruption in a multiple schedule, and be preferred when arranged as terminal links in concurrent chains. Nevin and Grace (2000) proposed that resistance to change and preference are independent measures of a common construct, known as ‘behavioral mass' in momentum theory and ‘value' in research on choice, which represents the history of reinforcement correlated with a discriminative stimulus. Although the primary determiners of behavioral mass and value are reinforcer-specific variables such as rate, magnitude, immediacy and probability of reinforcement (Grace & Nevin, 2000a), other factors such as unsignaled delay of reinforcement (Bell, 1999; Grace, Schwendiman & Nevin, 1998), variable-ratio (VR) versus variable-interval (VI) schedules (Nevin, Grace, Holland & McLean, 2001), and schedules that require high response rates (e.g., differential reinforcement of high response rate [DRH]; Lattal, 1989) can also affect resistance to change. These results challenge the original assumption of behavioral momentum theory that resistance to change was determined solely by the Pavlovian, stimulus-reinforcer contingency.
Like response force or topography, variability is a dimension of operant responding that is sensitive to contingencies of reinforcement (Page & Neuringer, 1985). Considerable research has shown that behavior can be shaped to be variable or stereotyped, depending on the contingencies. For example, Neuringer (1992) reinforced pigeons' sequences of four responses made to left and right keys. Sequences were classified as VARY or REPEAT depending on whether they differed from the three preceding sequences. Neuringer varied the relative reinforcement rate for the two response classes, and found that the proportion of VARY sequences matched the proportion of reinforcers obtained for VARY sequences. He concluded that choosing to vary or repeat was functionally similar to making one of two concurrently available responses (Herrnstein, 1961).
Doughty and Lattal (2001) addressed the question of whether operant variation was more resistant to change than operant repetition. They exposed 3 pigeons to a multiple chain schedule in which responses to an initial-link VI 20-s schedule on the center key produced access to a terminal link in which pigeons had to make a sequence of four responses to the left and right keys in order to obtain reinforcement. In the REPEAT component, red keylights were used and pigeons had to respond left-right-left-right (LRLR) in order to obtain reinforcement. In the VARY component, white keylights were used and response sequences had to satisfy a criterion such that the weighted relative frequency of a sequence was less than a threshold value of .05 (Denney & Neuringer, 1998). They adjusted the reinforcer probabilities in the REPEAT component at the start of each session so that reinforcer rates would be approximately equal in both components. After baseline training, two resistance-to-change tests were conducted in which prefeeding and response-independent food (delivered during the interval between components according to a variable-time [VT] schedule) were used as disruptors. Doughty and Lattal found that resistance to change for both initial- and terminal-link responding, measured as the logarithm of the proportion of baseline response rate during test, was generally greater in the VARY component. However, they noted that behavioral momentum theory was unable to account for their results, because the reinforcement rates in both components were equated.
Doughty and Lattal's (2001) results suggest that variable responding is more resistant to change than stereotyped responding, but is the opportunity to make a variable response to produce reinforcement preferred over an opportunity to make a stereotyped response? Abreu-Rodrigues, Lattal, dos Santos and Matos (2005) attempted to answer this question. They conducted two experiments in which pigeons responded in the initial links of concurrent chains to produce access to terminal link schedules that differed in terms of whether reinforcement was arranged by a REPEAT or VARY contingency. In the VARY terminal link, four-response sequences were reinforced according to a lag criterion that varied between lag 1 (i.e., the sequence had to be different from the immediately-prior sequence) and lag 10 (i.e., it had to be different from the prior 10 sequences) across different conditions. In the REPEAT terminal link, only LRRR sequences were reinforced. Both terminal links continued until five reinforcers had been obtained, and Abreu-Rodrigues et al. equated overall reinforcer rate by adding a timeout to the REPEAT terminal links (Experiment 1), and yoking the interreinforcer intervals for the REPEAT terminal link to those obtained in the VARY terminal link (Experiment 2). Abreu-Rodrigues et al. reported that in both experiments, preference for the VARY terminal link decreased when the variability contingency was made more stringent (i.e., lag 10 vs. lag 1).
At first glance, Abreu-Rodrigues et al.'s (2005) results, taken together with those of Doughty and Lattal (2001), appear to challenge expectations based on behavioral momentum theory. If variable responding is more resistant to change than stereotyped responding, preference for a VARY over REPEAT terminal link in concurrent chains should increase directly with the variability requirement to the extent such a requirement produces greater variation, all else being equal. However, there are several aspects of Abreu-Rodrigues et al.'s procedure that render it problematic as a test of the prediction derived from behavioral momentum theory and Doughty and Lattal's results, that pigeons should show a preference for VARY over REPEAT in concurrent chains. The added timeouts to the REPEAT terminal link in Experiment 1 did not control for reinforcer immediacy, which is known to be a more potent determiner of preference in concurrent chains than reinforcer rate (e.g., Killeen, 1968). In Experiment 2, Abreu-Rodrigues et al. attempted to control for immediacy by superimposing a VI 30-s schedule on the VARY terminal link such that only sequences completed after the VI 30 s had elapsed were eligible for reinforcement, and yoking the intervals in the REPEAT terminal link to those obtained in the VARY terminal link. However, sequences that were produced before the VI 30 s had elapsed had different consequences: ‘Correct' sequences (i.e., those that would be eligible for reinforcement) produced a brief (0.5 s) hopper presentation, whereas ‘incorrect' sequences produced a 5-s timeout. Because the proportion of correct sequences in the VARY terminal link decreased when the contingency was changed from lag 1 to lag 10, there were relatively more timeouts (and fewer 0.5-s hopper presentations) in the VARY terminal link compared with the REPEAT terminal link. This difference might have accounted for the decreased preference for the VARY alternative, despite the yoking of interreinforcer intervals.
Thus the goal of the present research was to compare preference and resistance to change for schedules that required variable and stereotyped responding under conditions in which all relevant reinforcer variables were controlled. Experiment 1 used a procedure closely modeled on Doughty and Lattal (2001): Pigeons were trained on a multiple chain procedure with VARY and REPEAT terminal links, and resistance-to-change tests were conducted with prefeeding, extinction, and response-independent food as disruptors. The same pigeons served in Experiment 2, which used a switching-key concurrent-chains procedure. The intercomponent interval was replaced by an initial-link schedule that arranged equal access to REPEAT and VARY terminal links, and across conditions, the relative terminal-link reinforcement rate was varied. In this way, we were able to obtain estimates of bias that reflected preference for VARY versus REPEAT independently of the effects of the reinforcer ratio. We also recorded delay-to-reinforcement distributions so that the relative immediacy of reinforcement in the VARY and REPEAT terminal links could be assessed. The primary questions were whether responding would be more resistant to change in the VARY component in Experiment 1, consistent with Doughty and Lattal, and if so, whether initial-link response allocation would show a bias toward the VARY alternative in Experiment 2.
EXPERIMENT 1
Method
Subjects
Subjects were 4 pigeons of mixed breed, numbered 181–184, and were maintained at 85% of their free-feeding weight plus or minus 15 g through appropriate postsession feeding. Subjects were housed individually in a vivarium with a 12h:12h light/dark cycle (lights on at 0600), with water and grit freely available in the home cages. Subjects had previous experience with concurrent-chains schedules but not with contingencies relating to response stereotypy or variability.
Apparatus
Four standard three-key operant chambers, 32 cm deep × 34 cm wide × 34 cm high, were used. The keys were 21 cm above the floor and arranged in a row 10 cm apart. In each chamber a houselight that provided general illumination was located above the center key, and a grain magazine with a 5 × 5.5-cm aperture was centered 6 cm above the floor. The magazine contained wheat and was illuminated when wheat was made available. A force of approximately 0.15 N was necessary to operate each key. Each chamber was enclosed in a sound-attenuating box, and ventilation and white noise were provided by an attached fan. Experimental events were controlled and data recorded through a microcomputer and MED-PC® interface located in an adjacent room.
Procedure
Pigeons were first exposed to a pretraining procedure to shape a four-peck response sequence (LRLR, where L = left and R = right). Red keylights were used for all stimuli. At the start of the session, the center key was lighted to signal the initial link and a variable-interval (VI) 20-s schedule operated. The VI 20-s schedule contained 12 intervals defined according to an exponential progression (Fleshler & Hoffman, 1962). When an interval sampled randomly without replacement had elapsed, the next response extinguished the center key and illuminated the left key, signaling the terminal link. The terminal link consisted of five LRLR sequences, as follows: A single response to the left key turned it off for an interpeck interval (IPI) of 0.5 s, followed by illumination of the right key. A single response extinguished the right key for 0.5 s, followed by illumination of the left key. This LR sequence was repeated, and the response to the right key produced reinforcement. During reinforcement, the grain feeder was raised and illuminated for 3 s while all other lights in the chamber were extinguished. The left key was then illuminated red and the next sequence began. Pigeons completed the LRLR sequence five times, each followed by reinforcement, and then the center key was lighted and the VI 20-s initial-link schedule began timing again. Sessions ended after 60 reinforcements had been obtained. The first pretraining condition lasted for two sessions, after which all pigeons were reliably producing the LRLR sequence.
The second pretraining condition lasted for three sessions for all pigeons. After the center-key VI 20-s contingency was met, both the left and right keys were illuminated. Responses produced a 0.5-s darkening of the key, and LRLR sequences were reinforced. An ‘incorrect' response (i.e., one that was not consistent with the LRLR sequence) produced a 3-s blackout, after which the left and right keys were illuminated and the next sequence began. After five sequences had been completed (each ending with reinforcement or blackout), the center key was illuminated and the initial-link VI 20-s schedule began timing again. The third pretraining condition was the same in all respects except that incorrect responses did not immediately terminate the sequence. All sequences ended after four responses and were followed by reinforcement (for LRLR sequences) or blackout (for non-LRLR sequences). This condition remained in effect until all pigeons had completed at least 80% of sequences correctly for three consecutive sessions, which required eight sessions. In both the second and third pretraining conditions, sessions ended when 60 reinforcements had been obtained.
Pigeons then began training on the final baseline procedure, diagrammed in Figure 1, which was a two-component multiple schedule. In the REPEAT component, the stimuli were red keylights, the initial link was a VI 20 s on the center key, and each four-peck terminal-link sequence ended with 3-s reinforcement or blackout, determined probabilistically, as described below. In the VARY component, the stimuli were green keylights, and an initial-link VI 20-s schedule operated on the center key. When the VI 20 s had timed out, the next response produced a terminal-link entry and the side keys were lighted. A four-peck sequence was reinforced depending on the relative frequency of that particular sequence (out of 16 possible sequences; Denney & Neuringer, 1998). Specifically, the number of times that the particular sequence had occurred divided by the number of total sequences was calculated, and if this relative frequency was less than .05 then the sequence was reinforced. Sequences that were not reinforced produced a 3-s blackout. After reinforcer delivery, the relative frequencies of each sequence were multiplied by a weighting coefficient (0.95) such that more recent sequences were weighted more heavily (Denney & Neuringer, 1998). Relative frequencies for each sequence were carried over to the start of the next session.
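The frequency-dependent threshold contingency described above can be sketched in code. The threshold (.05) and weighting coefficient (0.95) are the values given in the text; the class and function names, and the exact ordering of the count updates, are illustrative assumptions rather than a reconstruction of the original control program.

```python
import itertools

def all_sequences():
    """The 16 possible four-peck sequences over the left (L) and right (R) keys."""
    return ["".join(s) for s in itertools.product("LR", repeat=4)]

class VaryContingency:
    """Sketch of the threshold variability contingency (Denney & Neuringer, 1998).

    A four-peck sequence is reinforced if its weighted relative frequency is
    below .05; after each reinforcer, all counts are multiplied by 0.95 so that
    recent sequences are weighted more heavily. Update order is an assumption.
    """
    THRESHOLD = 0.05
    DECAY = 0.95

    def __init__(self):
        # one weighted count per possible sequence; carried across sessions
        self.weights = {seq: 0.0 for seq in all_sequences()}

    def evaluate(self, seq):
        """Return True (reinforce) if seq meets the variability criterion;
        update the weighted counts either way."""
        total = sum(self.weights.values())
        rel_freq = self.weights[seq] / total if total > 0 else 0.0
        reinforced = rel_freq < self.THRESHOLD
        self.weights[seq] += 1.0
        if reinforced:
            # after a reinforcer, older sequences are down-weighted
            for s in self.weights:
                self.weights[s] *= self.DECAY
        return reinforced
```

On this sketch, a sequence whose weighted relative frequency is near zero (i.e., one that has not occurred recently) is reinforced, whereas an immediate repetition is not.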
Fig. 1.

Multiple schedule used for baseline sessions in Experiment 1. After a 30-s ICI, a REPEAT or VARY component was arranged with the constraint that each occurred 10 times per session. In both components, completing a VI 20 s initial link on the center key lighted the side keys (red in REPEAT and green in VARY) and after a four-peck sequence, food was presented in the REPEAT component if the sequence was LRLR and a probability gate was satisfied (‘= LRLR'), and in the VARY component if the variability contingency was satisfied (‘= vary'). Otherwise, the four-peck sequence was followed by blackout. Components ended after food or blackout was delivered for the first four-peck sequence that was completed at least 60 s after the start of the initial link.
In both REPEAT and VARY components, after reinforcement or blackout the side keys were lighted again and the pigeon could complete another four-peck sequence. Each component terminated when a four-peck response sequence had been completed, followed by reinforcement or blackout, and at least 60 s had elapsed since the onset of the initial link. Response sequences in the REPEAT component were reinforced probabilistically. Beginning with the sixth session, the probability that a correct REPEAT sequence was reinforced was updated at the start of each session such that the product of this probability and the average number of REPEAT sequences completed per session equaled the average number of VARY reinforcers obtained over the last five sessions. In this way, we attempted to equate obtained reinforcement rates in both components.
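The adjustment rule amounts to setting the REPEAT reinforcer probability to the ratio of VARY reinforcers to REPEAT sequences over the last five sessions. A minimal sketch, with function and argument names of our own choosing:

```python
def updated_repeat_probability(repeat_sequences_per_session, vary_reinforcers_per_session):
    """New probability of reinforcing a correct REPEAT sequence, chosen so that
    (mean REPEAT sequences/session) * p equals the mean number of VARY
    reinforcers obtained per session over the same window (here, the last
    five sessions). Capped at 1.0. Illustrative helper, not the original code.
    """
    mean_repeat = sum(repeat_sequences_per_session) / len(repeat_sequences_per_session)
    mean_vary = sum(vary_reinforcers_per_session) / len(vary_reinforcers_per_session)
    return min(1.0, mean_vary / mean_repeat)
```

For example, averages of 110 REPEAT sequences and 44 VARY reinforcers per session yield p = .40.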
Components were separated by a 30-s intercomponent interval (ICI) during which the keylights were extinguished but the houselight remained illuminated. There were 10 REPEAT and 10 VARY components per session. The identity of the first component was determined randomly, and thereafter components strictly alternated. There was a timeout contingency: If no responses occurred for 2 min, the component was terminated and the next ICI began.
Training continued in the baseline procedure until initial- and terminal-link responding in both components for all pigeons had stabilized. Stability was assessed through a visual criterion and required that response rates, reinforcement rates, and an index of variability (U value; Doughty & Lattal, 2001; Page & Neuringer, 1985) showed no systematic changes over the previous 10 sessions.
Stability was reached for all pigeons after 59 sessions. Three resistance-to-change tests were then conducted, which were separated by at least eight sessions of additional baseline training. In the prefeeding test, 20, 40, 60 and 60 g of mixed grain was provided to pigeons ½ hr prior to session time across four consecutive sessions. In the VT test, reinforcement (3-s access to grain) was delivered independently of responding during the ICI according to a variable-time (VT) 7.5-s schedule for five consecutive sessions. The ICI continued to elapse during VT food presentations. In the extinction test, which lasted for six sessions, all response sequences were followed by blackout.
After completing Experiment 2 (below), all pigeons were returned to the baseline multiple chain schedule. Three additional VT resistance-to-change tests were conducted. The first two tests used VT 4-s and VT 2-s schedules, respectively, to deliver food during the ICI over six consecutive sessions, and were preceded by 76 and 29 sessions of baseline training. After 82 additional baseline sessions, a final test was conducted in which food was provided during the ICI according to a VT 2-s schedule, and all reinforcement was withheld from the terminal links (VT+Ext).
In addition to response rates during the initial and terminal links, we also measured variability of terminal-link response sequences during baseline and disruption using U values (Miller & Frick, 1949; Page & Neuringer, 1985). U values range from 0 (indicating complete certainty or repetition) to 1 (indicating complete uncertainty or variation), and are defined according to the formula U = −∑ [RFi × log2(RFi)]/log2(16), where RFi is the relative frequency of the ith of the 16 possible response sequences (i = 1 to 16).
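The U value is the normalized Shannon entropy of the obtained sequence distribution. A direct computation from a list of obtained sequences can be sketched as follows (the original analysis software is not available to us, so this is an illustration of the formula rather than the authors' code):

```python
import math
from collections import Counter

def u_value(sequences, n_classes=16):
    """Uncertainty index U (Miller & Frick, 1949; Page & Neuringer, 1985).

    U = -sum(rf_i * log2(rf_i)) / log2(n_classes), where rf_i is the relative
    frequency of the ith possible sequence; sequences with rf_i = 0 contribute
    nothing to the sum. U = 0 for perfect repetition; U = 1 when all n_classes
    sequences occur equally often.
    """
    n = len(sequences)
    counts = Counter(sequences)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return entropy / math.log2(n_classes)
```

For example, a session consisting entirely of LRLR sequences gives U = 0, and a session containing each of the 16 sequences equally often gives U = 1.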
Results
Figure 2 shows baseline data for the REPEAT and VARY components aggregated over the last six sessions prior to resistance-to-change tests for individual pigeons. There were no systematic differences in initial-link response rate (Ms = 100.07 and 92.60 resp/min for REPEAT and VARY, respectively), but for all 4 pigeons, terminal-link response rate (computed with reinforcement and blackout time excluded) was somewhat greater for the REPEAT component (Ms = 16.06 and 14.08 sequences/min). Consequently, the number of terminal-link sequences completed per session (Ms = 110.08 and 97.88) was also greater for REPEAT. Reinforcement rates for the REPEAT and VARY terminal links for all baseline conditions and disruptor tests are shown in Table 1. Reinforcement rates did not vary systematically between the components, showing that the protocol for adjusting reinforcer probabilities in the REPEAT component was effective. The average reinforcement probability for REPEAT sequences across baseline conditions was .44, .39, .32 and .33 for Pigeons 181–184, respectively.
Fig. 2.

Initial- and terminal-link response rates, terminal-link reinforcer rates, and terminal-link U values for REPEAT and VARY components for individual subjects, averaged across the last six baseline sessions prior to resistance to change tests in Experiment 1. Bars indicate one standard deviation.
Table 1.
Mean and standard deviation of reinforcement rate (rft/min) for each baseline condition and disruption test in Experiment 1. Means and standard deviations were computed for the last six sessions of each baseline condition, and all sessions of each disruptor test.

|  | REPEAT Mean | REPEAT SD | VARY Mean | VARY SD |
| --- | --- | --- | --- | --- |
| Pigeon 181 | | | | |
| Baseline | 5.68 | 1.20 | 5.04 | 0.40 |
| Prefeeding | 1.25 | 0.98 | 3.31 | 1.30 |
| Baseline | 4.55 | 0.62 | 5.20 | 0.67 |
| VT | 5.85 | 0.92 | 4.80 | 0.30 |
| Baseline (Ext) | 4.74 | 2.09 | 5.97 | 0.66 |
| Baseline (VT+Ext) | 4.38 | 0.79 | 4.33 | 0.49 |
| Pigeon 182 | | | | |
| Baseline | 3.06 | 0.41 | 3.41 | 0.42 |
| Prefeeding | 1.24 | 1.75 | 1.80 | 1.17 |
| Baseline | 2.75 | 1.33 | 2.90 | 0.49 |
| VT | 3.08 | 1.39 | 3.34 | 0.41 |
| Baseline (Ext) | 4.05 | 0.96 | 3.80 | 0.70 |
| Baseline (VT+Ext) | 2.83 | 0.62 | 3.18 | 0.66 |
| Pigeon 183 | | | | |
| Baseline | 4.62 | 0.65 | 4.79 | 0.42 |
| Prefeeding | 1.40 | 1.20 | 1.16 | 1.28 |
| Baseline | 4.16 | 0.52 | 3.96 | 0.50 |
| VT | 2.42 | 0.71 | 3.40 | 0.58 |
| Baseline (Ext) | 4.23 | 0.58 | 3.79 | 0.63 |
| Baseline (VT+Ext) | 4.48 | 0.75 | 4.89 | 0.44 |
| Pigeon 184 | | | | |
| Baseline | 6.00 | 0.63 | 5.06 | 0.68 |
| Prefeeding | 2.20 | 2.21 | 2.25 | 2.07 |
| Baseline | 4.89 | 0.64 | 4.57 | 0.51 |
| VT | 4.08 | 0.45 | 4.53 | 0.41 |
| Baseline (Ext) | 4.87 | 0.73 | 5.47 | 0.59 |
| Baseline (VT+Ext) | 6.34 | 1.10 | 5.79 | 0.26 |
Figure 3 shows the relative frequency of each of the 16 possible four-response sequences for individual pigeons for the last six baseline sessions prior to test. All pigeons responded LRLR on the majority of REPEAT sequences (M = 73.18%). Pigeons 182 and 183 responded RLRL on 9.5% and 12.4% of trials, respectively, which was significantly greater than chance (6.25%; binomial test, p < .05), suggesting that the REPEAT contingency strengthened switching, so that these pigeons continued to switch to complete the sequence even when the location of the first response was incorrect. For the VARY component, responding was much more variable, and no sequence occurred on more than 16% of trials for any pigeon. There was some evidence that responding was biased towards sequences with no switching: For 3 of 4 pigeons (183 being the exception), the LLLL and/or RRRR sequences occurred on more than 10% of trials, which was significantly more often than chance (binomial test, p < .05).
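Binomial comparisons against the chance level of 1/16 = .0625 can be reproduced with an exact upper-tail probability; the trial counts in the usage example below are hypothetical, not taken from the data:

```python
import math

def binom_upper_tail(k, n, p=1 / 16):
    """Exact P(X >= k) for X ~ Binomial(n, p): the probability that a given
    four-peck sequence occurs at least k times in n trials if all 16
    sequences were equally likely (p = .0625)."""
    return sum(math.comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))
```

For instance, observing a particular sequence on 25 of 200 trials (12.5%, against an expected 12.5 occurrences) gives an upper-tail probability well below .05.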
Fig. 3.
Relative frequencies of different response sequences for individual subjects in the REPEAT (left panels) and VARY (right panels) terminal links. The dashed line in the VARY panels indicates chance levels of responding. Different sequences are indicated on the horizontal axis.
The major goal of Experiment 1 was to test whether resistance to change of responding in the REPEAT and VARY components differed. Resistance to change was measured as the log proportion of baseline response rate, log Bx/Bo, where Bx is the response rate during the disruption test and Bo is the baseline response rate. Figure 4 shows resistance to change of initial- and terminal-link responding (left and center panels, respectively) for both components during the prefeeding test. Pigeons 182 and 183 failed to make any responses in the second and fourth prefeeding sessions, respectively. Responding was generally more resistant to change in the VARY component, although results were variable across subjects. Pooled over sessions and pigeons, terminal-link resistance to change was greater in the VARY component on 11 of 14 comparisons (sign test, p < .05), and initial-link resistance to change was greater in the VARY component on 10 of 14 comparisons (p < .10).
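The measure is simply the base-10 logarithm of the proportion of baseline responding; a one-line helper (the function name is ours) makes its behavior concrete:

```python
import math

def log_prop_baseline(test_rate, baseline_rate):
    """Resistance-to-change measure log(Bx/Bo): 0 means no disruption,
    and more negative values mean a larger decline from baseline."""
    return math.log10(test_rate / baseline_rate)
```

For example, a drop from 100 to 50 responses per minute gives log(Bx/Bo) = log10(.5) ≈ −0.30, comparable in magnitude to the larger reductions reported below.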
Fig. 4.

Results of prefeeding test in Experiment 1 for individual subjects. Shown are log proportion of baseline responding (log Bx/Bo) for each prefeeding session, for initial-link (left panels) and terminal-link (center panels) responding. U values for baseline (average of last six sessions prior to test) and the prefeeding sessions are shown in the right panels. Results for VARY and REPEAT components are indicated by filled and unfilled symbols, respectively. Prefeeding amounts are indicated on the horizontal axis. Note: Bars for baseline U values indicate one standard deviation.
The right panels of Figure 4 show U values for baseline and the prefeeding sessions. In baseline, responding was more stereotyped in the REPEAT component for all pigeons, as expected (average U = .34 and .88 for REPEAT and VARY, respectively). During prefeeding sessions, U values were generally higher than baseline in the REPEAT component (11 of 12 cases; sign test, p < .01), and lower than baseline in the VARY component (10 of 12 cases; p < .05). Thus the pattern of baseline responding established in both components, in terms of repetition or variability, was disrupted during the prefeeding test.
Figure 5 shows results from the VT 7.5-s test. In contrast with the prefeeding test, there was no systematic difference between the VARY and REPEAT components. Pooled over sessions and pigeons, resistance to change was greater in the VARY component for the initial link in 10 of 20 cases (left panels), and for the terminal link in 11 of 20 cases (center panels). The magnitude of the overall reduction in responding was modest: Averaged over pigeons and sessions, log Bx/Bo for the initial link in the VARY and REPEAT components was −0.14 and −0.26, respectively. For the terminal link, the corresponding values were −0.05 and −0.02. U values (right panels) increased in 11 of 20 cases in the REPEAT component and decreased in 12 of 20 cases in the VARY component. Thus baseline patterns of repetition and variability were not systematically disrupted during the VT test.
Fig. 5.

Results of VT 7.5 s test in Experiment 1 for individual subjects. Shown are log proportion of baseline responding (log Bx/Bo) for each VT 7.5-s test session, for initial-link (left panels) and terminal-link (center panels) responding. U values for baseline (average of last six sessions prior to test) and the VT 7.5-s test sessions are shown in the right panels. Results for VARY and REPEAT components are indicated by filled and unfilled symbols, respectively. VT 7.5-s test sessions are indicated on the horizontal axis. Note: Bars for baseline U values indicate one standard deviation.
Results from the extinction test are shown in Figure 6. Responding was more resistant to change in the VARY component for both initial and terminal links. Pooled over sessions and pigeons, initial-link resistance to change was greater in the VARY component for 15 of 24 cases (left panels; sign test, p < .05), and terminal-link resistance to change was greater in the VARY component for 22 of 24 cases (center panels; p < .001). Averaged across sessions and pigeons, average log Bx/Bo for the initial link in the VARY and REPEAT components was −0.14 and −0.21, respectively. For the terminal link, the corresponding values were −0.28 and −0.44. U values were overall higher in the REPEAT component (right panels; 19 of 24 cases; p < .01) and lower in the VARY component (22 of 24 cases; p < .001).
Fig. 6.
Results of Extinction test in Experiment 1 for individual subjects. Shown are log proportion of baseline responding (log Bx/Bo) for each extinction test session, for initial-link (left panels) and terminal-link (center panels) responding. U values for baseline (average of last six sessions prior to test) and the Extinction test sessions are shown in the right panels. Results for VARY and REPEAT components are indicated by filled and unfilled symbols, respectively. Extinction test sessions are indicated on the horizontal axis. Note: Bars for baseline U values indicate one standard deviation.
Three additional tests of resistance to change with VT schedules as disruptors were conducted after Experiment 2. In the first, in which a VT 4-s schedule arranged food deliveries during the ICI, resistance to change in the VARY component was greater in 9 of 24 cases for initial-link responding (pooled over sessions and pigeons), and 12 of 24 cases for terminal-link responding (sign tests, ns). The average log Bx/Bo for the VARY and REPEAT components were −0.12 and −0.11 for the initial link, and −0.08 and −0.08 for the terminal link. In the second, in which a VT 2-s schedule arranged food deliveries during the ICI, there was some evidence of greater resistance to change of initial-link responding in the VARY component: 17 of 24 cases, pooled over sessions and pigeons (p < .05). However, terminal-link responding was more resistant to change in the VARY component for only 11 of 24 cases (sign test, ns). Again the overall reductions in response rate were modest: The average log Bx/Bo for the VARY and REPEAT components were −0.19 and −0.23 for the initial link, and −0.13 and −0.12 for the terminal link.
In the final test, responding was disrupted by VT 2-s food during the ICI coupled with extinction (VT+Ext) during the terminal links. Figure 7 shows results of the VT+Ext test. Initial-link responding in the VARY component was more resistant to change than responding in the REPEAT component in 17 of 24 cases (left panels; sign test, p < .05), pooled across sessions and pigeons, and for the terminal link, VARY responding was more resistant to change in 20 of 24 cases (center panels; sign test, p < .01). The average log Bx/Bo for the VARY and REPEAT components were −0.23 and −0.28 for the initial link, and −0.20 and −0.25 for the terminal link. For all 4 pigeons, the overall reduction in responding in the terminal links (averaged across both VARY and REPEAT components) was greatest in the VT+Ext test compared with the other VT tests. The overall reduction in initial-link responding in the VT+Ext test was not systematically different from the other VT tests. U values were systematically higher in the REPEAT component (right panels; 22 of 24 cases; p < .001) and lower in the VARY component (21 of 24 cases; p < .001). Thus similar to the prefeeding and extinction tests, patterns of repetition and variability established in baseline were disrupted in the VT+Ext test.
Fig. 7.
Results of VT+Extinction test in Experiment 1 for individual subjects. Shown are log proportion of baseline responding (log Bx/Bo) for each VT+Extinction test session, for initial-link (left panels) and terminal-link (center panels) responding. U values for baseline (average of last six sessions prior to test) and the VT+Extinction test sessions are shown in the right panels. Results for VARY and REPEAT components are indicated by filled and unfilled symbols, respectively. VT+Extinction test sessions are indicated on the horizontal axis. Note: Bars for baseline U values indicate one standard deviation.
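The U statistic is defined earlier in the paper; in the operant-variability literature it is typically a normalized uncertainty (entropy) measure over the relative frequencies of the possible sequences (Miller & Frick, 1949; Page & Neuringer, 1985). A sketch under that assumption, with 16 possible four-peck sequences:

```python
import math

def u_value(counts):
    """Normalized uncertainty (entropy) of sequence counts.

    U = -sum(p_i * log2(p_i)) / log2(n), where n is the number of possible
    sequences (16 four-peck L/R sequences here). U approaches 1 when all
    sequences are equally likely (maximal variability) and equals 0 when a
    single sequence is always emitted (maximal repetition).
    """
    n = len(counts)
    total = sum(counts)
    h = 0.0
    for c in counts:
        if c > 0:
            p = c / total
            h -= p * math.log2(p)
    return h / math.log2(n)

# A pigeon emitting only LRLR (pure repetition) vs. a uniform mix
print(u_value([60] + [0] * 15))      # → 0.0
print(u_value([10] * 16))            # → 1.0
```

On this definition, the increases in REPEAT U values and decreases in VARY U values during the VT+Ext test both reflect drift away from the baseline patterns toward intermediate variability.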
EXPERIMENT 2
Results of Experiment 1 replicate and extend those of Doughty and Lattal (2001): Both initial- and terminal-link responding were generally more resistant to change in the VARY component when prefeeding, response-independent food, and extinction were used as disruptors. In Experiment 2, the VARY and REPEAT contingencies were arranged in the terminal links of a concurrent-chains schedule, and the relative reinforcement rate was varied across conditions. Our purpose was to test the prediction of behavioral momentum theory that resistance to change and preference should covary. If VARY responding is more resistant to change than REPEAT responding, then initial-link response allocation in Experiment 2 should be biased toward the VARY terminal link—that is, there should be a constant preference for the VARY alternative that is independent of the reinforcer ratio. We also recorded the times of reinforcer deliveries so that the relative immediacy of reinforcement in the terminal links could be compared.
Method
Subjects and Apparatus
These were the same as in Experiment 1.
Procedure
After completing the extinction test in Experiment 1, pigeons received 10 additional sessions of training in the baseline procedure. They then began training on a concurrent-chains procedure, which is diagrammed in Figure 8. Sessions ended when 20 initial- and terminal-link cycles had been completed or 60 min had elapsed, whichever occurred first. The initial- and terminal-link stimuli were the same as in Experiment 1 (red = REPEAT; green = VARY), except that during the initial links, a side key was also illuminated white as a switching key. Whether the left or right key was lighted was determined randomly at the start of each initial-link cycle. Responses to the illuminated side key changed the color on the center key from red to green or vice versa. There was no changeover delay. This switching-key arrangement was used so that the initial-link stimuli could remain on the center key, keeping the procedure as similar as possible to that from Experiment 1.
Fig. 8.

Concurrent-chains procedure used in Experiment 2. A single VI 20-s schedule operated during the initial links, in which pigeons could peck a white key (shown as the left key, but could be either the left or right key, determined randomly at the start of the initial link) to change the color on the center key. Terminal links were assigned randomly with the constraint that there were 10 each for REPEAT and VARY per session. After the VI 20-s schedule was completed, the first response to the appropriate center key produced a terminal-link entry. In the terminal links, each four-peck sequence was followed by food or blackout according to the REPEAT or VARY contingency, and after food or blackout for the first sequence completed after 60 s had elapsed since the start of the initial link, the next initial-link cycle began.
A single VI 20-s schedule operated during the initial links. Terminal-link entries were assigned probabilistically such that out of every 10 cycles, five terminal links were arranged for both the REPEAT and VARY alternatives. The first response to the preselected initial link after an interval selected from the VI 20 s schedule had elapsed produced a terminal-link entry. The center key and switching key were extinguished and the side keys were lighted red or green, depending on whether a REPEAT or VARY terminal link, respectively, had been entered. The terminal-link contingencies were similar to Experiment 1, except that the relative reinforcement rate was varied across conditions (as described below). In the REPEAT terminal link, if the sequence was LRLR, reinforcement (3-s access to grain) was delivered with a specified probability or else 3-s blackout occurred. All non-LRLR sequences were followed by 3-s blackout. In the VARY terminal link, a sequence was reinforced if it satisfied a variability contingency similar to Experiment 1 (see below). Sequences that did not meet the variability contingency were followed by 3-s blackout. All terminal links ended after at least 60 s had elapsed from initial-link onset, and a four-response sequence and its consequence (3-s access to food or blackout) had occurred. Following 3-s access to food or blackout, the next initial-link cycle began.
There were three conditions, which differed in the relative rates of reinforcement for the REPEAT and VARY terminal links. In the baseline condition, the weighted relative frequency threshold for the VARY component (VARY threshold) remained at .05 while the reinforcement probability for correct REPEAT sequences (LRLR) for individual pigeons was adjusted each session using the same procedure as in Experiment 1 to equate the expected reinforcement rates for the REPEAT and VARY components. In the VARY-bias condition, the probability of reinforcement for correct REPEAT sequences was set equal to 0.5 times the average reinforcement probability of such sequences in the last six sessions of the baseline condition for each pigeon, and the VARY threshold was increased to .07. In the REPEAT-bias condition, the probability of reinforcement for correct REPEAT sequences was set equal to 2 times the average reinforcement probability for such sequences in the last six sessions of baseline for each pigeon, and the VARY threshold was decreased to .03. In both bias conditions, the REPEAT reinforcement probabilities were not adjusted across sessions.
All pigeons completed the baseline condition first. Pigeons 181 and 182 then completed the REPEAT-bias and VARY-bias conditions, whereas the order was reversed for Pigeons 183 and 184. Training in each condition continued until responding for all birds had satisfied a visual criterion such that relative initial-link response rate, terminal-link response and reinforcement rates, and the variability index, showed no systematic changes over the previous 10 sessions. A total of 50, 51, and 49 sessions were conducted for each pigeon in the first (baseline), second, and third conditions, respectively.
Results
Data were aggregated over the last six sessions of each condition for analysis. Figure 9 shows the average number of reinforcers per session for the REPEAT and VARY terminal links in each condition. Reinforcers per session were approximately equal for both terminal links in the baseline condition, and greater for the REPEAT and VARY terminal links in the corresponding bias conditions. This confirms that changing the relative frequency threshold and reinforcement probability in the VARY and REPEAT terminal links was effective at manipulating relative reinforcement rate across the conditions.
Fig. 9.

Overall reinforcer rates (rft/session) for VARY and REPEAT terminal links in Experiment 2. Data are summed over the last six sessions of training in each condition and averaged across pigeons. Results are shown for baseline, VARY-bias, and REPEAT-bias conditions. Bars indicate one standard error.
The major question addressed by Experiment 2 was whether pigeons would show a preference for the VARY over REPEAT terminal link. Figure 10 shows response allocation in each condition, measured as the log initial-link response ratio (VARY/REPEAT), plotted as a function of the obtained log reinforcer ratio. A generalized-matching analysis was used to quantify the effects of relative reinforcement and preference for VARY over REPEAT. The following equation was fitted to the data:
log(BVARY/BREPEAT) = a log(RVARY/RREPEAT) + log bVARY,        (1)
where BVARY and BREPEAT are initial-link response rates, RVARY and RREPEAT are terminal-link reinforcement rates, a is sensitivity and log bVARY is bias for the VARY terminal link. For all pigeons, response allocation increased with relative terminal-link reinforcement rate. Estimates of sensitivity ranged from 0.36 (Pigeon 184) to 1.23 (Pigeon 183) and averaged 0.74. Importantly, response allocation was biased toward the VARY terminal link for all pigeons. Estimates of bias for individual subjects ranged from .03 (Pigeon 182) to .20 (Pigeon 183), and the average was significantly greater than zero, M = .13, t(3) = 3.70, p < .05. This confirms that there was a modest, but systematic, preference in favor of the VARY alternative that was independent of the reinforcer ratio.
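Fitting Equation 1 amounts to an ordinary least-squares regression of log response ratios on log reinforcer ratios, with slope a (sensitivity) and intercept log b (bias). A minimal sketch; the function name and the three-condition data are hypothetical, not the pigeons' obtained ratios:

```python
import math

def fit_generalized_matching(resp_ratios, rft_ratios):
    """Least-squares fit of log(B_V/B_R) = a * log(R_V/R_R) + log b_V.

    resp_ratios, rft_ratios: response and reinforcer ratios (VARY/REPEAT),
    one pair per condition. Returns (a, log_b): sensitivity and bias.
    """
    xs = [math.log10(r) for r in rft_ratios]
    ys = [math.log10(r) for r in resp_ratios]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    log_b = my - a * mx  # intercept: bias toward VARY if positive
    return a, log_b

# Hypothetical three-condition data (baseline, VARY-bias, REPEAT-bias)
a, log_b = fit_generalized_matching([1.6, 1.35, 0.9], [2.0, 1.0, 0.5])
print(round(a, 2), round(log_b, 2))
```

A positive log b indicates responding shifted toward VARY beyond what the reinforcer ratio alone predicts, which is the bias reported for all 4 pigeons.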
Fig. 10.
Initial-link response allocation (measured as the log ratio of responses to the VARY and REPEAT initial links) plotted as function of log terminal-link reinforcer rates for individual subjects in Experiment 2. Solid lines represent fits of Equation 1, and dashed lines indicate matching of response and reinforcer ratios. Parameters for fits of Equation 1 are also shown.
We conducted an analysis to determine if there were differences in reinforcer immediacy that might have contributed to the bias in favor of the VARY alternative evident in Figure 10. Obtained delays to reinforcement from terminal-link onset were pooled over the last 10 sessions in the baseline condition. Histograms of delays to reinforcement from terminal-link onset are shown in Figure 11 for individual subjects. Overall, there appeared to be no systematic difference in the REPEAT and VARY delay distributions. We calculated the value of the reinforcer distributions as the sum of the reciprocals of delays to reinforcement (Grace, 1996; Grace & Nevin, 2000b; Mazur, 1984; Shull, Spear & Bryson, 1981). Log value ratios (VARY/REPEAT) for individual subjects were: Pigeon 181, 0.01; Pigeon 182, −0.09; Pigeon 183, −0.01; Pigeon 184, 0.01. The average was −0.02, and was not significantly different from zero, t(3) = −0.81, p = .48. Thus for 3 pigeons, there was almost no difference in obtained reinforcer immediacy, whereas for Pigeon 182 the delay distribution slightly favored the REPEAT alternative in terms of immediacy. It is worth noting that Pigeon 182 also had the smallest bias in favor of VARY in Figure 10. Thus there is no evidence that differences in obtained reinforcer immediacy contributed to the systematic choice bias for the VARY alternative.
Fig. 11.
Frequency distributions of delays to reinforcement for REPEAT and VARY terminal links in Experiment 2 for individual subjects. Results for REPEAT and VARY terminal links are shown by black and gray bars, respectively.
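The value calculation described above, summing reciprocals of obtained delays to reinforcement and then taking the log ratio, can be sketched as follows; the function name and delay samples are illustrative:

```python
import math

def log_value_ratio(vary_delays, repeat_delays):
    """Log ratio of terminal-link 'values' (VARY/REPEAT), where value is
    the sum of reciprocals of obtained delays to reinforcement (s), so
    shorter delays contribute more value (cf. Mazur, 1984)."""
    v_vary = sum(1.0 / d for d in vary_delays)
    v_repeat = sum(1.0 / d for d in repeat_delays)
    return math.log10(v_vary / v_repeat)

# Identical delay distributions yield a log value ratio of zero
print(log_value_ratio([5, 10, 20], [5, 10, 20]))  # → 0.0
```

Log value ratios near zero, as obtained for 3 of the 4 pigeons, indicate that differences in reinforcer immediacy cannot account for the choice bias.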
We also calculated the average number of switches (i.e., from L to R or from R to L) made per sequence in the REPEAT and VARY terminal links in the last 10 sessions of baseline. For the REPEAT terminal link, the average numbers of switches per sequence were 2.70, 2.73, 2.81 and 2.97 for Pigeons 181 to 184, respectively. Corresponding results for the VARY terminal link were 1.21, 1.04, 1.56 and 1.33 switches/sequence. Note that the REPEAT contingency required 3 switches per reinforced sequence, whereas the VARY contingency, assuming that each of the 16 sequences was equally likely, required 1.5 switches per sequence on average. Thus the difference in the obtained switches per sequence corresponded to the requirements imposed by the contingencies.
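The switch requirements cited above can be checked by enumerating all 16 four-peck sequences and counting L/R transitions in each:

```python
from itertools import product

# All 16 possible four-peck sequences over the left (L) and right (R) keys
sequences = [''.join(s) for s in product('LR', repeat=4)]

# Number of switches = number of adjacent positions where the key changes
switches = [sum(a != b for a, b in zip(seq, seq[1:])) for seq in sequences]

print(switches[sequences.index('LRLR')])  # → 3 (REPEAT requirement)
print(sum(switches) / len(switches))      # → 1.5 (uniform VARY average)
```

This confirms that LRLR demands the maximum possible 3 switches, twice the average demanded by an equiprobable mix of all 16 sequences.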
Figure 12 shows the U values averaged across the last six sessions of each condition. For Pigeon 181, the U value decreased for the REPEAT terminal link when the reinforcer ratio was biased toward that alternative. Because REPEAT bias was the third condition run for Pigeon 181, this suggests that relatively richer reinforcement might have increased the degree of repetition. Otherwise, changes in U values were relatively small and not systematic across pigeons.
Fig. 12.
U values averaged across the last six sessions of each condition in Experiment 2 for individual subjects. Bars indicate one standard deviation (sometimes too small to be visible for the VARY component).
GENERAL DISCUSSION
We tested whether resistance to change was greater for variable than for stereotyped response sequences, and whether variable response sequences were also preferred as terminal links in concurrent chains. In Experiment 1, 4 pigeons were trained on a multiple schedule in which responding to a VI 20-s initial link led to terminal links that required four responses for reinforcement: either LRLR in the REPEAT component, or a sequence that satisfied a variability criterion in the VARY component. Reinforcement rates were equated between components by adjusting reinforcement probabilities for REPEAT sequences across sessions. Resistance to change was assessed by homecage prefeeding, VT food delivered during the intercomponent interval (ICI), and extinction. Results from the prefeeding and extinction tests showed that terminal-link responding was significantly more resistant to change in the VARY component than in the REPEAT component. VARY initial-link responding showed significantly greater resistance to extinction than REPEAT responding, and results for the prefeeding test were similar but somewhat more variable. When responding was disrupted with intercomponent food presented according to a VT 7.5-s schedule, there were no systematic differences in resistance to change. However, the overall reductions in responding produced by the VT 7.5-s schedule were modest. When disruptor magnitude was increased by using a VT 2-s schedule, initial-link responding was significantly less disrupted in the VARY component, and when extinction was arranged together with VT 2-s food (VT + Ext), both initial- and terminal-link responding were less disrupted in the VARY component. These results replicate and extend those of Doughty and Lattal (2001) by using extinction as a disruptor, and support their conclusion that operant variation is more resistant to change than operant repetition.
Results also showed that the structure of terminal-link response sequences was disrupted by the resistance tests. U values increased in the REPEAT component and decreased in the VARY component during the prefeeding, extinction, and VT + Ext tests, indicating that the degree of both repetition and variation established in baseline was reduced when responding was challenged. Notably, systematic changes in U values were not observed in the VT 7.5-s test, which produced less disruption in response rate than the other tests. Thus the structure of terminal-link sequences was more likely to be affected when the disruptor produced a substantial reduction in response rate. It is interesting to consider these results in relation to those of previous studies. Doughty and Lattal (2001) found that U values increased and decreased in the REPEAT and VARY components, respectively, when pigeons' responding was disrupted by prefeeding and different rates of VT food, similar to our results. However, the changes in U values reported by Doughty and Lattal in the prefeeding test (see their Figure 5) were less pronounced than we found. This is likely because the prefeeding amounts they used (5g, 5g, 5g, 7.5g, 7.5g, 7.5g, 12.5g, 12.5g, 12.5g and 22.5g) were smaller and increased more gradually than those in the present study (20g, 40g, 60g, 60g), so the magnitude of the disruptive ‘force' in terms of behavioral momentum theory would have been less.
Odum, Ward, Barnes and Burke (2006) found that delayed reinforcement decreased pigeons' rate of completing four-response sequences under REPEAT and VARY similar to those used here. U values increased in the REPEAT component, but did not change systematically in the VARY component. However, their delay contingencies produced more modest reductions in response rate than we observed, which may account for their failure to find a decreased U value in the VARY component.
Neuringer, Kornell and Olufs (2001) studied the extinction of rats' variable and repetitive response sequences, and found that variability generally increased in extinction for repetitive sequences, similar to the present results. For variable sequences, Neuringer et al. observed that variability sometimes increased during extinction or did not change systematically, but increases were small (e.g., from average U = .869 to .884 in Experiment 3). However, Neuringer et al. conducted their extinction tests for only four sessions, compared to six in the present study, and it is notable that we observed the greatest decreases in U value in the VARY component in the later extinction sessions (see Figure 6). It is possible that Neuringer et al. would have observed a decrease in variability if they had provided additional extinction training.
Overall, the present results and those of previous studies (Doughty & Lattal, 2001; Neuringer, Kornell & Olufs, 2001; Odum, Ward, Barnes & Burke, 2006) suggest that when responding maintained by contingencies that reinforce repetition or variation is disrupted, reductions in response rate are accompanied by a degradation of control over the repetition or variation required by the contingency.
Our major question was whether operant variation would also be preferred to operant repetition with other reinforcer-related variables controlled. In Experiment 2 the VARY and REPEAT components were arranged as terminal links in a concurrent chain. As in Experiment 1, the initial links were presented on the center key, but pigeons could switch between the VARY and REPEAT alternatives by pecking a side key. Across conditions, the relative rate of terminal-link reinforcement was varied by changing the reinforcement probability for REPEAT sequences and the weighted relative frequency threshold for VARY sequences. We used a generalized-matching analysis to estimate a potential bias for the VARY terminal link that was independent of the reinforcer ratio. Results showed that all pigeons had a bias in favor of the VARY terminal link. We also examined obtained distributions of delays to reinforcement, and found that there was little difference in reinforcer immediacy for 3 pigeons, whereas immediacy slightly favored the REPEAT terminal link for 1 pigeon. Perhaps coincidentally, this pigeon also had the weakest bias in terms of preference for the VARY terminal link.
Taken together, results of Experiments 1 and 2 confirm the prediction of behavioral momentum theory (Nevin & Grace, 2000) that relative resistance to change in multiple schedules and preference in concurrent chains should be correlated. According to momentum theory, these are independent measures of a single construct that represents the reinforcement history correlated with a stimulus. Our results show that contingencies that require behavioral variability can increase the value of a terminal link and persistence of responding in it, above and beyond any effects of reinforcement rate and immediacy. But it is important to note that momentum theory does not predict that variable responding should be more resistant to change than repetitive responding. According to momentum theory, resistance to change is determined by the Pavlovian, stimulus–reinforcer relations, which were equated for the REPEAT and VARY terminal links. Thus the present findings join previous studies which have shown that factors other than reinforcement variables can contribute to resistance to change (e.g., Bell, 1999; Grace, Schwendiman & Nevin, 1998; Lattal, 1989; Nevin, Grace, Holland & McLean, 2001).
Variability is a dimension of behavior analogous to response rate, force, duration and topography, and has a bidirectional relationship with reinforcement such that variability can both influence reinforcers, and be influenced by them (Neuringer, 2002). The preference for an opportunity to perform a variable rather than fixed response sequence, as observed here, joins other examples of preference for variability in the choice literature, such as preference for variable over fixed reinforcer delays (Grace, 1996; Killeen, 1968), mixed- over fixed-ratio schedules (Fantino, 1967), random-ratio over fixed-ratio schedules (Madden, Ewan & Lagorio, 2007), variable over fixed reinforcer duration (Essock & Reese, 1974), larger, uncertain reinforcers over certain, smaller ones (Mazur, 1988; but cf. Kacelnik & Bateson, 1996), and for VI schedules that deliver a variable rather than fixed number of reinforcers per terminal link (Grace & Nevin, 2000b). However, it is important to note that many of these findings can be explained as preference for more immediate reinforcement, unlike the present results.
It is worth noting that the bias for the VARY terminal link observed here—on average .13 log units—corresponds to a 35% increase in initial-link response rate, and thus was fairly substantial in terms of magnitude. This difference was also observed over a considerable amount of training (150 sessions in total). Thus, pigeons demonstrated a substantial and sustained preference for operant variation over repetition.
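The conversion from the average bias of .13 log units to a percentage change can be checked directly, since the logarithms in Equation 1 are base 10:

```python
# A log10 bias of b shifts the response ratio by a factor of 10**b;
# expressed as a percentage increase in responding toward VARY:
bias = 0.13
percent_increase = (10 ** bias - 1) * 100
print(round(percent_increase))  # → 35
```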
The relationship of the present results to those of Abreu-Rodrigues et al. (2005) should be considered. Abreu-Rodrigues et al. reported that preference for the VARY terminal link decreased as the variability requirement was increased from lag 1 to lag 10. As noted in the Introduction, this result might seem to imply that repetition should be favored over variation, but the increased frequency of blackouts for ‘incorrect' VARY sequences with the lag 10 criterion renders this finding ambiguous. The present results show that variation is preferred to repetition, but do not map the relationship parametrically. For example, there might be an optimal degree of required variability that produces the strongest bias compared with repetition, and preference for variation might decrease as the requirement is made more stringent. Understanding how the preference for VARY over REPEAT changes with the variability requirement is a task for future research.
Another potentially important issue is the nature of the REPEAT response. The REPEAT sequence used here was LRLR (the same as Doughty & Lattal, 2001), which required 3 switches, compared to an average of 1.5 switches per sequence for the VARY component, assuming that all 16 sequences were emitted with the same relative frequency. Results showed that pigeons did make approximately twice as many switches per sequence in the REPEAT component, consistent with the contingencies. It is questionable whether a preference (and increased resistance to change) for VARY responding would have been observed if the REPEAT sequence were LLLL or RRRR. Reinforcement rate and immediacy were successfully controlled here, but it might be argued that the greater overall frequency of switching required for REPEAT responding could have increased the ‘psychological distance to reward' (Duncan & Fantino, 1972) in that component. Thus an interesting question for future research would be to examine whether a preference for VARY would be observed with different REPEAT sequences. In this regard, it is worth noting that the REPEAT sequence used by Abreu-Rodrigues et al. (2005) required only one switch (LRRR), which may have contributed to the preference they observed for the REPEAT terminal link as the VARY requirement was made more stringent.
Preference for variable over fixed response sequences may be most similar to preference for free-choice over forced choice. Catania (1975) showed that pigeons preferred a terminal link where they could respond to either of two stimuli correlated with identical FI schedules over a terminal link that had only one stimulus but the same FI schedule. Catania and Sagvolden (1980) later showed that the preference for free choice could not be attributed to a greater number or variety of terminal-link stimuli. If the four-response sequences in the present experiment are considered analogous to Catania's FI schedules, pigeons had 16 alternatives in the VARY terminal link compared to 1 in the REPEAT terminal link. Thus both Catania and Sagvolden's and the present results can be understood in terms of the following generalization: Organisms prefer a larger behavioral repertoire over a smaller one when reinforcer variables are equated.
A larger behavioral repertoire is associated with richer and more complex environments, and preference for ‘freedom' (and variable responding) may have phylogenetic origins, as Catania and Sagvolden (1980, p. 85) suggested. The beneficial effects of environmental enrichment on neural development and plasticity are well known (Bennett, Diamond, Krech, & Rosenzweig, 1964; van Praag, Kempermann & Gage, 2000), and it may be that organisms have been selected to favor environments that support behavioral variability rather than stereotypy. From this perspective, the multiple schedule with VARY and REPEAT components developed by Doughty and Lattal (2001) based on Neuringer's work, and extended by Abreu-Rodrigues et al. (2005) and here to concurrent chains, provides an ideal paradigm to study effects of behavioral variation and repetition with reinforcer variables controlled.
REFERENCES
- Abreu-Rodrigues J, Lattal K. A, dos Santos C, Matos R. A. Variation, repetition, and choice. Journal of the Experimental Analysis of Behavior. 2005;83:147–168. doi: 10.1901/jeab.2005.33-03. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bell M. C. Pavlovian contingencies and resistance to change in a multiple schedule. Journal of the Experimental Analysis of Behavior. 1999;72:81–96. doi: 10.1901/jeab.1999.72-81. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bennett E. L, Diamond M. C, Krech D, Rosenzweig M. R. Chemical and anatomical plasticity of brain. Science. 1964;146:610–619. doi: 10.1126/science.146.3644.610. [DOI] [PubMed] [Google Scholar]
- Catania A. C. Freedom and knowledge: An experimental analysis of preference in pigeons. Journal of the Experimental Analysis of Behavior. 1975;24:89–106. doi: 10.1901/jeab.1975.24-89. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Catania A. C, Sagvolden T. Preference for free choice over forced choice in pigeons. Journal of the Experimental Analysis of Behavior. 1980;34:77–86. doi: 10.1901/jeab.1980.34-77. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Davison M, Nevin J. A. Stimuli, reinforcers, and behavior: An integration. Journal of the Experimental Analysis of Behavior. 1999;71:439–482. doi: 10.1901/jeab.1999.71-439. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Denney J, Neuringer A. Behavioral variability is controlled by discriminative stimuli. Animal Learning & Behavior. 1998;26:154–162. [Google Scholar]
- Doughty A. H, Lattal K. A. Resistance to change of operant variation and repetition. Journal of the Experimental Analysis of Behavior. 2001;76:195–215. doi: 10.1901/jeab.2001.76-195. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Duncan B, Fantino E. The psychological distance to reward. Journal of the Experimental Analysis of Behavior. 1972;18:23–34. doi: 10.1901/jeab.1972.18-23. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Essock S. M, Reese E. P. Preference for and effects of variable- as opposed to fixed-reinforcer duration. Journal of the Experimental Analysis of Behavior. 1974;21:89–97. doi: 10.1901/jeab.1974.21-89. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Fantino E. Preference for mixed- versus fixed-ratio schedules. Journal of the Experimental Analysis of Behavior. 1967;10:35–43. doi: 10.1901/jeab.1967.10-35. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Fleshler M, Hoffman H. S. A progression for generating variable-interval schedules. Journal of the Experimental Analysis of Behavior. 1962;5:529–530. doi: 10.1901/jeab.1962.5-529. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Grace R. C. Choice between fixed and variable delays to reinforcement in the adjusting-delay procedure and concurrent chains. Journal of Experimental Psychology: Animal Behavior Processes. 1996;22:362–383. [Google Scholar]
- Grace R. C, Hucks A. D. The allocation of operant behavior. In: Lattal K. A, Hackenberg T. D, Madden G. J, editors. Handbook of Behavior Analysis. Washington, DC: American Psychological Association; in press. [Google Scholar]
- Grace R. C, Nevin J. A. Response strength and temporal control in fixed-interval schedules. Animal Learning & Behavior. 2000a;28:313–331. [Google Scholar]
- Grace R. C, Nevin J. A. Comparing preference and resistance to change in constant- and variable-duration schedule components. Journal of the Experimental Analysis of Behavior. 2000b;74:165–188. doi: 10.1901/jeab.2000.74-165. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Grace R. C, Schwendiman J. I, Nevin J. A. Effects of delayed reinforcement on preference and resistance to change. Journal of the Experimental Analysis of Behavior. 1998;69:247–261. doi: 10.1901/jeab.1998.69-247. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Herrnstein R. J. Relative and absolute strength of response as a function of frequency of reinforcement. Journal of the Experimental Analysis of Behavior. 1961;4:267–272. doi: 10.1901/jeab.1961.4-267. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kacelnik A, Bateson M. Risky theories—the effects of variance on foraging decisions. American Zoologist. 1996;36:402–434. [Google Scholar]
- Killeen P. On the measurement of reinforcement frequency in the study of preference. Journal of the Experimental Analysis of Behavior. 1968;11:263–269. doi: 10.1901/jeab.1968.11-263. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lattal K. A. Contingencies on response rate and resistance to change. Learning and Motivation. 1989;20:191–203. [Google Scholar]
- Madden G. J, Ewan E. E, Lagorio C. H. Toward an animal model of gambling: Delay discounting and the allure of unpredictable outcomes. Journal of Gambling Studies. 2007;23:63–83. doi: 10.1007/s10899-006-9041-5. [DOI] [PubMed] [Google Scholar]
- Mazur J. E. Tests for an equivalence rule for fixed and variable reinforcer delays. Journal of Experimental Psychology: Animal Behavior Processes. 1984;10:426–436. [PubMed] [Google Scholar]
- Mazur J. E. Choice between small certain and large uncertain reinforcers. Animal Learning & Behavior. 1988;16:199–205. [Google Scholar]
- Miller G. A, Frick F. C. Statistical behavioristics and sequences of responses. Psychological Review. 1949;56:311–324. doi: 10.1037/h0060413. [DOI] [PubMed] [Google Scholar]
- Nevin J. A. Response strength in multiple schedules. Journal of the Experimental Analysis of Behavior. 1974;21:389–408. doi: 10.1901/jeab.1974.21-389. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Nevin J. A, Grace R. C. Behavioral momentum and the law of effect. Behavioral and Brain Sciences. 2000;23:73–130. doi: 10.1017/s0140525x00002405. (includes commentary) [DOI] [PubMed] [Google Scholar]
- Nevin J. A, Grace R. C, Holland S, McLean A. P. Variable-ratio versus variable-interval schedules: Response rate, resistance to change, and preference. Journal of the Experimental Analysis of Behavior. 2001;76:43–74. doi: 10.1901/jeab.2001.76-43. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Nevin J. A, Tota M. E, Torquato R. D, Shull R. L. Alternative reinforcement increases resistance to change: Pavlovian or operant contingencies. Journal of the Experimental Analysis of Behavior. 1990;53:359–379. doi: 10.1901/jeab.1990.53-359. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Neuringer A. Choosing to vary and repeat. Psychological Science. 1992;3:246–250. [Google Scholar]
- Neuringer A. Operant variability: Evidence, functions, and theory. Psychonomic Bulletin & Review. 2002;9:672–705. doi: 10.3758/bf03196324. [DOI] [PubMed] [Google Scholar]
- Neuringer A, Kornell N, Olufs M. Stability and variability in extinction. Journal of Experimental Psychology: Animal Behavior Processes. 2001;27:79–94. [PubMed] [Google Scholar]
- Odum A. L, Ward R. D, Barnes C. A, Burke K. A. The effects of delayed reinforcement on variability and repetition of response sequences. Journal of the Experimental Analysis of Behavior. 2006;86:159–179. doi: 10.1901/jeab.2006.58-05. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Page S, Neuringer A. Variability is an operant. Journal of Experimental Psychology: Animal Behavior Processes. 1985;11:429–452. doi: 10.1037//0097-7403.26.1.98. [DOI] [PubMed] [Google Scholar]
- Podlesnik C. A, Shahan T. A. Response–reinforcer relations and resistance to change. Behavioural Processes. 2008;77:109–125. doi: 10.1016/j.beproc.2007.07.002. [DOI] [PubMed] [Google Scholar]
- Shull R. L, Spear D. J, Bryson A. E. Delay or rate of food delivery as determiners of response rate. Journal of the Experimental Analysis of Behavior. 1981;35:129–143. doi: 10.1901/jeab.1981.35-129. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Skinner B. F. Contingencies of reinforcement: A theoretical analysis. Englewood Cliffs, NJ: Prentice Hall; 1969. [Google Scholar]
- van Praag H, Kempermann G, Gage F. H. Neural consequences of environmental enrichment. Nature Reviews Neuroscience. 2000;1:191–198. doi: 10.1038/35044558. [DOI] [PubMed] [Google Scholar]